title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Representation Learning with Deconvolution for Multivariate Time Series
Classification and Visualization | cs.LG cs.NE | We propose a new model based on deconvolutional networks and SAX
discretization to learn the representation for multivariate time series.
Deconvolutional networks fully exploit the powerful expressiveness of deep
neural networks in an unsupervised manner.
We design a network structure specifically to capture the cross-channel
correlation with deconvolution, forcing the pooling operation to perform the
dimension reduction along each position in the individual channel.
Discretization based on Symbolic Aggregate Approximation is applied on the
feature vectors to further extract a bag of features. We show how this
representation and bag of features help with classification. A full comparison
with the sequence distance based approach is provided to demonstrate the
effectiveness of our approach on the standard datasets. We further build the
Markov matrix from the discretized representation from the deconvolution to
visualize the time series as complex networks, which show more class-specific
statistical properties and clear structures with respect to different labels.
| Zhiguang Wang, Wei Song, Lu Liu, Fan Zhang, Junxiao Xue, Yangdong Ye,
Ming Fan, Mingliang Xu | null | 1610.07258 | null | null |
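The SAX discretization step mentioned in the abstract above can be sketched in a few lines; the segment count, alphabet, and the hard-coded breakpoint table (valid for an alphabet of size 4) are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

# Gaussian breakpoints for an alphabet of size 4
# (they cut N(0, 1) into four equiprobable regions)
BREAKPOINTS = np.array([-0.6745, 0.0, 0.6745])

def sax(series, n_segments, alphabet="abcd"):
    """Symbolic Aggregate approXimation: z-normalize, reduce with
    Piecewise Aggregate Approximation (PAA), map segment means to symbols."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)        # z-normalize
    segments = np.array_split(x, n_segments)      # PAA segments
    means = np.array([seg.mean() for seg in segments])
    idx = np.digitize(means, BREAKPOINTS)         # symbol index per segment
    return "".join(alphabet[i] for i in idx)
```

An upward ramp such as `np.arange(16)` with four segments maps to the word `"abcd"`, since each segment mean falls in the next Gaussian region.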
Encoding Temporal Markov Dynamics in Graph for Visualizing and Mining
Time Series | cs.LG cs.HC | Time series and signals are attracting growing attention across statistics,
machine learning and pattern recognition, as they appear widely in industry,
especially in sensor- and IoT-related research and applications. Yet few
advances have been achieved in effective time series visual analytics and
interaction, owing to their temporal dimensionality and complex dynamics. Inspired
by recent effort on using network metrics to characterize time series for
classification, we present an approach to visualize time series as complex
networks based on the first order Markov process in its temporal ordering. In
contrast to classical bar charts, line plots and other statistics-based
graphs, our approach delivers a more intuitive visualization that better preserves
both the temporal dependency and frequency structures. It provides a natural
inverse operation to map the graph back to raw signals, making it possible to
use graph statistics to characterize time series for better visual exploration
and statistical analysis. Our experimental results suggest its effectiveness on
various tasks, such as pattern discovery and classification, on both synthetic
and real time series and sensor data.
| Lu Liu, Zhiguang Wang | null | 1610.07273 | null | null |
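A minimal sketch of the encoding described above: quantize the series into bins and count first-order transitions in temporal order. The quantile binning scheme here is an assumption for illustration, not necessarily the paper's exact choice:

```python
import numpy as np

def markov_matrix(series, n_bins):
    """First-order Markov transition matrix over quantile bins of a series.
    Each row is normalized so it sums to 1 (when the state was visited)."""
    x = np.asarray(series, dtype=float)
    # inner quantile edges assign each point to one of n_bins states
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
    states = np.digitize(x, edges)
    M = np.zeros((n_bins, n_bins))
    for s, t in zip(states[:-1], states[1:]):     # count temporal transitions
        M[s, t] += 1
    row_sums = M.sum(axis=1, keepdims=True)
    return np.divide(M, row_sums, out=np.zeros_like(M), where=row_sums > 0)
```

The resulting matrix can be read as a weighted adjacency matrix, i.e., the complex-network view of the signal.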
Truncated Variance Reduction: A Unified Approach to Bayesian
Optimization and Level-Set Estimation | stat.ML cs.IT cs.LG math.IT | We present a new algorithm, truncated variance reduction (TruVaR), that
treats Bayesian optimization (BO) and level-set estimation (LSE) with Gaussian
processes in a unified fashion. The algorithm greedily shrinks a sum of
truncated variances within a set of potential maximizers (BO) or unclassified
points (LSE), which is updated based on confidence bounds. TruVaR is effective
in several important settings that are typically non-trivial to incorporate
into myopic algorithms, including pointwise costs and heteroscedastic noise. We
provide a general theoretical guarantee for TruVaR covering these aspects, and
use it to recover and strengthen existing results on BO and LSE. Moreover, we
provide a new result for a setting where one can select from a number of noise
levels having associated costs. We demonstrate the effectiveness of the
algorithm on both synthetic and real-world data sets.
| Ilija Bogunovic and Jonathan Scarlett and Andreas Krause and Volkan
Cevher | null | 1610.07379 | null | null |
Using Machine Learning to Detect Noisy Neighbors in 5G Networks | cs.NI cs.LG | 5G networks are expected to be more dynamic and chaotic in their structure
than current networks. With the advent of Network Function Virtualization
(NFV), Network Functions (NF) will no longer be tightly coupled with the
hardware they are running on, which poses new challenges in network management.
Noisy neighbor is a term commonly used to describe situations in NFV
infrastructure where an application experiences degradation in performance due
to the fact that some of the resources it needs are occupied by other
applications in the same cloud node. These situations cannot be easily
identified using straightforward approaches, which calls for the use of
sophisticated methods for NFV infrastructure management. In this paper we
demonstrate how Machine Learning (ML) techniques can be used to identify such
events. Through experiments using data collected from a real NFV infrastructure, we
show that standard models for automated classification can detect the noisy
neighbor phenomenon with an accuracy of more than 90% in a simple scenario.
| Udi Margolin, Alberto Mozo, Bruno Ordozgoiti, Danny Raz, Elisha
Rosensweig, Itai Segall | null | 1610.07419 | null | null |
A Framework for Parallel and Distributed Training of Neural Networks | stat.ML cs.LG | The aim of this paper is to develop a general framework for training neural
networks (NNs) in a distributed environment, where training data is partitioned
over a set of agents that communicate with each other through a sparse,
possibly time-varying, connectivity pattern. In such a distributed scenario, the
training problem can be formulated as the (regularized) optimization of a
non-convex social cost function, given by the sum of local (non-convex) costs,
where each agent contributes with a single error term defined with respect to
its local dataset. To devise a flexible and efficient solution, we customize a
recently proposed framework for non-convex optimization over networks, which
hinges on a (primal) convexification-decomposition technique to handle
non-convexity, and a dynamic consensus procedure to diffuse information among
the agents. Several typical choices for the training criterion (e.g., squared
loss, cross entropy, etc.) and regularization (e.g., $\ell_2$ norm, sparsity
inducing penalties, etc.) are included in the framework and explored throughout the
paper. Convergence to a stationary solution of the social non-convex problem is
guaranteed under mild assumptions. Additionally, we show a principled way
allowing each agent to exploit a possible multi-core architecture (e.g., a
local cloud) in order to parallelize its local optimization step, resulting in
strategies that are both distributed (across the agents) and parallel (inside
each agent) in nature. A comprehensive set of experimental results validates the
proposed approach.
| Simone Scardapane and Paolo Di Lorenzo | 10.1016/j.neunet.2017.04.004 | 1610.07448 | null | null |
A Variational Bayesian Approach for Image Restoration. Application to
Image Deblurring with Poisson-Gaussian Noise | math.OC cs.LG stat.ML | In this paper, a methodology is investigated for signal recovery in the
presence of non-Gaussian noise. In contrast with regularized minimization
approaches often adopted in the literature, in our algorithm the regularization
parameter is reliably estimated from the observations. As the posterior density
of the unknown parameters is analytically intractable, the estimation problem
is derived in a variational Bayesian framework where the goal is to provide a
good approximation to the posterior distribution in order to compute posterior
mean estimates. Moreover, a majorization technique is employed to circumvent
the difficulties raised by the intricate forms of the non-Gaussian likelihood
and of the prior density. We demonstrate the potential of the proposed approach
through comparisons with state-of-the-art techniques that are specifically
tailored to signal recovery in the presence of mixed Poisson-Gaussian noise.
Results show that the proposed approach is efficient and achieves performance
comparable with other methods where the regularization parameter is manually
tuned from the ground truth.
| Yosra Marnissi, Yuling Zheng, Emilie Chouzenoux, Jean-Christophe
Pesquet | null | 1610.07519 | null | null |
Nonlinear Adaptive Algorithms on Rank-One Tensor Models | cs.SY cs.LG | This work proposes a low complexity nonlinearity model and develops adaptive
algorithms over it. The model is based on the decomposable---or rank-one, in
tensor language---Volterra kernels. It may also be described as a product of
FIR filters, which explains its low complexity. The rank-one model is also
interesting because it comes from a well-posed problem in approximation theory.
The paper uses this model in an estimation-theory context to develop an exact
gradient-type algorithm, from which adaptive algorithms such as the least mean
squares (LMS) filter and its data-reuse version---the TRUE-LMS---are derived.
Stability and convergence issues are addressed. The algorithms are then tested
in simulations, which show their good performance when compared to other
nonlinear processing algorithms in the literature.
| Felipe C. Pinheiro, Cassio G. Lopes | null | 1610.0752 | null | null |
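For context, the classical LMS filter that the abstract's TRUE-LMS variant builds on can be sketched as follows. This is the textbook FIR version, not the paper's rank-one Volterra algorithm:

```python
import numpy as np

def lms(x, d, n_taps, mu=0.05):
    """Least mean squares adaptive FIR filter: predict d[n] from the most
    recent n_taps inputs, then nudge the weights along the instantaneous
    error gradient."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        window = x[n - n_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ...]
        e = d[n] - w @ window                    # instantaneous error
        w += mu * e * window                     # stochastic-gradient step
    return w
```

Driving it with white noise through a short FIR system recovers that system's taps, which is the usual sanity check for an adaptive filter.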
On Multiplicative Multitask Feature Learning | cs.LG | We investigate a general framework of multiplicative multitask feature
learning which decomposes each task's model parameters into a multiplication of
two components. One of the components is used across all tasks and the other
component is task-specific. Several previous methods have been proposed as
special cases of our framework. We study the theoretical properties of this
framework when different regularization conditions are applied to the two
decomposed components. We prove that this framework is mathematically
equivalent to the widely used multitask feature learning methods that are based
on a joint regularization of all model parameters, but with a more general form
of regularizers. Further, an analytical formula is derived for the across-task
component as related to the task-specific component for all these regularizers,
leading to a better understanding of the shrinkage effect. Study of this
framework motivates new multitask learning algorithms. We propose two new
learning formulations by varying the parameters in the proposed framework.
Empirical studies have revealed the relative advantages of the two new
formulations by comparing with the state of the art, which provides instructive
insights into the feature learning problem with multiple tasks.
| Xin Wang, Jinbo Bi, Shipeng Yu, Jiangwen Sun | null | 1610.07563 | null | null |
Geometry of Polysemy | cs.CL cs.LG stat.ML | Vector representations of words have heralded a transformational approach to
classical problems in NLP; the most popular example is word2vec. However, a
single vector does not suffice to model the polysemous nature of many
(frequent) words, i.e., words with multiple meanings. In this paper, we propose
a three-fold approach for unsupervised polysemy modeling: (a) context
representations, (b) sense induction and disambiguation and (c) lexeme (as a
word and sense pair) representations. A key feature of our work is the finding
that a sentence containing a target word is well represented by a low rank
subspace, instead of a point in a vector space. We then show that the subspaces
associated with a particular sense of the target word tend to intersect over a
line (one-dimensional subspace), which we use to disambiguate senses using a
clustering algorithm that harnesses the Grassmannian geometry of the
representations. The disambiguation algorithm, which we call $K$-Grassmeans,
leads to a procedure to label the different senses of the target word in the
corpus -- yielding lexeme vector representations, all in an unsupervised manner
starting from a large (Wikipedia) corpus in English. Apart from several
prototypical target (word,sense) examples and a host of empirical studies to
intuit and justify the various geometric representations, we validate our
algorithms on standard sense induction and disambiguation datasets and present
new state-of-the-art results.
| Jiaqi Mu, Suma Bhat, Pramod Viswanath | null | 1610.07569 | null | null |
Learning a Probabilistic Latent Space of Object Shapes via 3D
Generative-Adversarial Modeling | cs.CV cs.LG | We study the problem of 3D object generation. We propose a novel framework,
namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects
from a probabilistic space by leveraging recent advances in volumetric
convolutional networks and generative adversarial nets. The benefits of our
model are three-fold: first, the use of an adversarial criterion, instead of
traditional heuristic criteria, enables the generator to capture object
structure implicitly and to synthesize high-quality 3D objects; second, the
generator establishes a mapping from a low-dimensional probabilistic space to
the space of 3D objects, so that we can sample objects without a reference
image or CAD models, and explore the 3D object manifold; third, the adversarial
discriminator provides a powerful 3D shape descriptor which, learned without
supervision, has wide applications in 3D object recognition. Experiments
demonstrate that our method generates high-quality 3D objects, and our
unsupervisedly learned features achieve impressive performance on 3D object
recognition, comparable with those of supervised learning methods.
| Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T. Freeman, Joshua B.
Tenenbaum | null | 1610.07584 | null | null |
A Learned Representation For Artistic Style | cs.CV cs.LG | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style.
| Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | null | 1610.07629 | null | null |
A Theoretical Analysis of Noisy Sparse Subspace Clustering on
Dimensionality-Reduced Data | stat.ML cs.LG | Subspace clustering is the problem of partitioning unlabeled data points into
a number of clusters so that data points within one cluster lie approximately
on a low-dimensional linear subspace. In many practical scenarios, the
dimensionality of the data points to be clustered is compressed due to constraints
of measurement, computation or privacy. In this paper, we study the theoretical
properties of a popular subspace clustering algorithm named sparse subspace
clustering (SSC) and establish formal success conditions of SSC on
dimensionality-reduced data. Our analysis applies to the most general fully
deterministic model where both underlying subspaces and data points within each
subspace are deterministically positioned, and also a wide range of
dimensionality reduction techniques (e.g., Gaussian random projection, uniform
subsampling, sketching) that fall into a subspace embedding framework (Meng &
Mahoney, 2013; Avron et al., 2014). Finally, we apply our analysis to a
differentially private SSC algorithm and establish both privacy and utility
guarantees of the proposed method.
| Yining Wang, Yu-Xiang Wang and Aarti Singh | null | 1610.0765 | null | null |
Predicting Counterfactuals from Large Historical Data and Small
Randomized Trials | cs.LG | When a new treatment is considered for use, whether a pharmaceutical drug or
a search engine ranking algorithm, a typical question that arises is, will its
performance exceed that of the current treatment? The conventional way to
answer this counterfactual question is to estimate the effect of the new
treatment in comparison to that of the conventional treatment by running a
controlled, randomized experiment. While this approach theoretically ensures an
unbiased estimator, it suffers from several drawbacks, including the difficulty
in finding representative experimental populations as well as the cost of
running such trials. Moreover, such trials neglect the huge quantities of
control-condition data that are often already available.
In this paper we propose a discriminative framework for estimating the
performance of a new treatment given a large dataset of the control condition
and data from a small (and possibly unrepresentative) randomized trial
comparing new and old treatments. Our objective, which requires minimal
assumptions on the treatments, models the relation between the outcomes of the
different conditions. This allows us to not only estimate mean effects but also
to generate individual predictions for examples outside the randomized sample.
We demonstrate the utility of our approach through experiments in three
areas: Search engine operation, treatments to diabetes patients, and market
value estimation for houses. Our results demonstrate that our approach can
reduce the number and size of the currently performed randomized controlled
experiments, thus saving significant time, money and effort on the part of
practitioners.
| Nir Rosenfeld, Yishay Mansour, Elad Yom-Tov | null | 1610.07667 | null | null |
Surprisal-Driven Zoneout | cs.LG cs.AI cs.NE | We propose a novel method of regularization for recurrent neural networks
called surprisal-driven zoneout. In this method, states zone out (maintain their
previous value rather than updating) when the surprisal (the discrepancy between
the last state's prediction and the target) is small. Thus regularization is
adaptive and input-driven on a per-neuron basis. We demonstrate the
effectiveness of this idea by achieving state-of-the-art bits per character of
1.31 on the Hutter Prize Wikipedia dataset, significantly reducing the gap to
the best known highly-engineered compression methods.
| Kamil Rocki, Tomasz Kornuta, Tegan Maharaj | null | 1610.07675 | null | null |
A Bayesian Ensemble for Unsupervised Anomaly Detection | stat.ML cs.LG | Methods for unsupervised anomaly detection suffer from the fact that the data
is unlabeled, making it difficult to assess the optimality of detection
algorithms. Ensemble learning has shown exceptional results in classification
and clustering problems, but has not seen as much research in the context of
outlier detection. Existing methods focus on combining output scores of
individual detectors, but this leads to outputs that are not easily
interpretable. In this paper, we introduce a theoretical foundation for
combining individual detectors with Bayesian classifier combination. Not only
are posterior distributions easily interpreted as the probability distribution
of anomalies, but bias, variance, and individual error rates of detectors are
all easily obtained. Performance on real-world datasets shows high accuracy
across varied types of time series data.
| Edward Yu, Parth Parekh | null | 1610.07677 | null | null |
Co-Occuring Directions Sketching for Approximate Matrix Multiply | cs.LG | We introduce co-occurring directions sketching, a deterministic algorithm for
approximate matrix product (AMM), in the streaming model. We show that
co-occurring directions achieves a better error bound than other randomized
and deterministic approaches for AMM. Co-occurring directions gives
a $1 + \epsilon$ -approximation of the optimal low rank approximation of a
matrix product. Empirically our algorithm outperforms competing methods for
AMM for a small sketch size. We empirically validate our theoretical findings
and algorithms.
| Youssef Mroueh, Etienne Marcheret, Vaibhava Goel | null | 1610.07686 | null | null |
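The single-matrix Frequent Directions sketch that co-occurring directions extends can be sketched as follows; this is the simple one-matrix precursor (shrinking by the median squared singular value), not the two-matrix product algorithm of the paper:

```python
import numpy as np

def frequent_directions(A, ell):
    """Frequent Directions sketch of A (n x d): returns B (ell x d) such
    that B.T @ B approximates A.T @ A, with spectral error at most
    2 * ||A||_F^2 / ell."""
    n, d = A.shape
    B = np.zeros((ell, d))
    for row in A:
        zero_rows = np.where(~B.any(axis=1))[0]
        if len(zero_rows) == 0:
            # sketch is full: shrink all squared singular values by the
            # median one, which zeroes out at least half of the rows
            U, s, Vt = np.linalg.svd(B, full_matrices=False)
            delta = s[ell // 2] ** 2
            s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
            B = np.diag(s) @ Vt
            zero_rows = np.where(~B.any(axis=1))[0]
        B[zero_rows[0]] = row            # insert the new row
    return B
```

The streaming property is the point: each input row is touched once, and memory stays at `ell * d` regardless of `n`.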
Distributed and parallel time series feature extraction for industrial
big data applications | cs.LG | The all-relevant problem of feature selection is the identification of all
strongly and weakly relevant attributes. This problem is especially hard to
solve for time series classification and regression in industrial applications
such as predictive maintenance or production line optimization, for which each
label or regression target is associated with several time series and
meta-information simultaneously. Here, we are proposing an efficient, scalable
feature extraction algorithm for time series, which filters the available
features in an early stage of the machine learning pipeline with respect to
their significance for the classification or regression task, while controlling
the expected percentage of selected but irrelevant features. The proposed
algorithm combines established feature extraction methods with a feature
importance filter. It has a low computational complexity, allows one to start on a
problem with only limited domain knowledge available, can be trivially
parallelized, is highly scalable, and is based on well-studied non-parametric
hypothesis tests. We benchmark our proposed algorithm on all binary
classification problems of the UCR time series classification archive as well
as time series from a production line optimization project and simulated
stochastic processes with underlying qualitative change of dynamics.
| Maximilian Christ, Andreas W. Kempa-Liehr, Michael Feindt | null | 1610.07717 | null | null |
Sparse Hierarchical Tucker Factorization and its Application to
Healthcare | cs.LG cs.NA | We propose a new tensor factorization method, called the Sparse
Hierarchical-Tucker (Sparse H-Tucker), for sparse and high-order data tensors.
Sparse H-Tucker is inspired by its namesake, the classical Hierarchical Tucker
method, which aims to compute a tree-structured factorization of an input data
set that may be readily interpreted by a domain expert. However, Sparse
H-Tucker uses a nested sampling technique to overcome a key scalability problem
in Hierarchical Tucker, which is the creation of an unwieldy intermediate dense
core tensor; the result of our approach is a faster, more space-efficient, and
more accurate method. We extensively test our method on a real healthcare
dataset, which is collected from 30K patients and results in an 18th order
sparse data tensor. Unlike competing methods, Sparse H-Tucker can analyze the
full data set on a single multi-threaded machine. It can also do so more
accurately and in less time than the state-of-the-art: on a 12th order subset
of the input data, Sparse H-Tucker is 18x more accurate and 7.5x faster than a
previously state-of-the-art method. Even for analyzing low order tensors (e.g.,
4-order), our method requires close to an order of magnitude less time and over
two orders of magnitude less memory, as compared to traditional tensor
factorization methods such as CP and Tucker. Moreover, we observe that Sparse
H-Tucker scales nearly linearly in the number of non-zero tensor elements. The
resulting model also provides an interpretable disease hierarchy, which is
confirmed by a clinical expert.
| Ioakeim Perros and Robert Chen and Richard Vuduc and Jimeng Sun | null | 1610.07722 | null | null |
Approximate cross-validation formula for Bayesian linear regression | stat.ML cs.LG | Cross-validation (CV) is a technique for evaluating the ability of
statistical models/learning systems based on a given data set. Despite its wide
applicability, the rather heavy computational cost can prevent its use as the
system size grows. To resolve this difficulty in the case of Bayesian linear
regression, we develop a formula for evaluating the leave-one-out CV error
approximately without actually performing CV. The usefulness of the developed
formula is tested by statistical mechanical analysis for a synthetic model.
This is confirmed by application to a real-world supernova data set as well.
| Yoshiyuki Kabashima, Tomoyuki Obuchi, Makoto Uemura | null | 1610.07733 | null | null |
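As a point of comparison for the abstract above, the classical special case of such a shortcut is exact for ridge regression (the MAP estimate of Bayesian linear regression with a Gaussian prior): the leave-one-out residuals follow from a single fit via the hat-matrix identity e_i^loo = e_i / (1 - H_ii). A sketch, not the paper's more general formula:

```python
import numpy as np

def loo_residuals_ridge(X, y, lam):
    """Exact leave-one-out residuals for ridge regression without n refits,
    using the hat-matrix identity e_i^loo = e_i / (1 - H_ii)."""
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)  # hat matrix
    resid = y - H @ y                                        # in-sample residuals
    return resid / (1.0 - np.diag(H))
```

This turns an O(n) sequence of refits into one linear solve, which is exactly the kind of saving the approximate formula above targets for the Bayesian setting.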
Big Models for Big Data using Multi objective averaged one dependence
estimators | cs.NE cs.LG | Although many researchers have explored multi-objective feature
selection, its existing capabilities have yet to be fully exploited in data
mining applications, as opposed to developing new methods. In this paper, the
multi-objective evolutionary algorithm ENORA is used
to select the features in a multi-class classification problem. The fusion of
AnDE (averaged n-dependence estimators) with n=1, a variant of naive Bayes with
efficient feature selection by ENORA is performed in order to obtain a fast
hybrid classifier which can effectively learn from big data. This method aims
to solve the problem of finding an optimal feature subset from the full data,
which remains difficult at present. The efficacy of the
obtained classifier is extensively evaluated on 21 of the most popular
real-world datasets, ranging from small to big. The results obtained are
encouraging in terms of time, root mean square error, zero-one loss and
classification accuracy.
| Mrutyunjaya Panda | null | 1610.07752 | null | null |
Frank-Wolfe Algorithms for Saddle Point Problems | math.OC cs.LG stat.ML | We extend the Frank-Wolfe (FW) optimization algorithm to solve constrained
smooth convex-concave saddle point (SP) problems. Remarkably, the method only
requires access to linear minimization oracles. Leveraging recent advances in
FW optimization, we provide the first proof of convergence of a FW-type saddle
point solver over polytopes, thereby partially answering a 30 year-old
conjecture. We also survey other convergence results and highlight gaps in the
theoretical underpinnings of FW-style algorithms. Motivating applications
without known efficient alternatives are explored through structured prediction
with combinatorial penalties as well as games over matching polytopes involving
an exponential number of constraints.
| Gauthier Gidel, Tony Jebara and Simon Lacoste-Julien | null | 1610.07797 | null | null |
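The projection-free character of FW that the abstract relies on is easiest to see over the probability simplex, where the linear minimization oracle just returns a vertex. This is a plain convex-minimization sketch, not the saddle-point extension of the paper:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=1000):
    """Frank-Wolfe over the probability simplex. The LMO is trivial here:
    the minimizing vertex is the coordinate with the smallest gradient
    entry, so no projection step is ever needed."""
    x = x0.copy()
    for t in range(n_iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0          # linear minimization oracle
        gamma = 2.0 / (t + 2.0)        # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * s
    return x
```

Because every iterate is a convex combination of vertices, feasibility is maintained for free, which is what makes FW attractive on polytopes with expensive projections.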
Hybrid clustering-classification neural network in the medical
diagnostics of reactive arthritis | cs.LG cs.NE stat.ML | A hybrid clustering-classification neural network is proposed. This network
improves the quality of information processing in the presence of
overlapping classes, thanks to a rational choice of the learning rate parameter and
the introduction of a special fuzzy-reasoning procedure in the clustering process,
which operates both with an external learning signal (supervised) and without
one (unsupervised). Cosine structures are used as the similarity measure,
neighborhood function and membership function, providing high flexibility
through the self-learning process and conferring some new useful properties.
Numerous experiments have confirmed the efficiency of the proposed hybrid
clustering-classification neural network, which was also applied to the
medical diagnostics of reactive arthritis.
| Yevgeniy Bodyanskiy, Olena Vynokurova, Volodymyr Savvo, Tatiana
Tverdokhlib, Pavlo Mulesa | null | 1610.07857 | null | null |
Generalization Bounds for Weighted Automata | cs.LG cs.FL | This paper studies the problem of learning weighted automata from a finite
labeled training sample. We consider several general families of weighted
automata defined in terms of three different measures: the norm of an
automaton's weights, the norm of the function computed by an automaton, or the
norm of the corresponding Hankel matrix. We present new data-dependent
generalization guarantees for learning weighted automata expressed in terms of
the Rademacher complexity of these families. We further present upper bounds on
these Rademacher complexities, which reveal key new data-dependent terms
related to the complexity of learning weighted automata.
| Borja Balle and Mehryar Mohri | null | 1610.07883 | null | null |
A statistical framework for fair predictive algorithms | stat.ML cs.LG | Predictive modeling is increasingly being employed to assist human
decision-makers. One purported advantage of replacing human judgment with
computer models in high stakes settings-- such as sentencing, hiring, policing,
college admissions, and parole decisions-- is the perceived "neutrality" of
computers. It is argued that because computer models do not hold personal
prejudice, the predictions they produce will be equally free from prejudice.
There is growing recognition that employing algorithms does not remove the
potential for bias, and can even amplify it, since training data were
inevitably generated by a process that is itself biased. In this paper, we
provide a probabilistic definition of algorithmic bias. We propose a method to
remove bias from predictive models by removing all information regarding
protected variables from the permitted training data. Unlike previous work in
this area, our framework is general enough to accommodate arbitrary data types,
e.g. binary, continuous, etc. Motivated by models currently in use in the
criminal justice system that inform decisions on pre-trial release and
paroling, we apply our proposed method to a dataset on the criminal histories
of individuals at the time of sentencing to produce "race-neutral" predictions
of re-arrest. In the process, we demonstrate that the most common approach to
creating "race-neutral" models-- omitting race as a covariate-- still results
in racially disparate predictions. We then demonstrate that the application of
our proposed method to these data removes racial disparities from predictions
with minimal impact on predictive accuracy.
| Kristian Lum and James Johndrow | null | 1610.08077 | null | null |
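A linear stand-in for the idea of removing protected-variable information from the training data (the paper's procedure is more general; this sketch just regresses each feature on the protected variable and keeps the residual):

```python
import numpy as np

def residualize(X, z):
    """Strip the linear component of protected variable z out of every
    feature column: regress X on [1, z] and return the residuals, which
    are exactly uncorrelated with z."""
    Z = np.column_stack([np.ones(len(z)), z])
    beta, *_ = np.linalg.lstsq(Z, X, rcond=None)
    return X - Z @ beta
```

Note this only removes *linear* dependence on z; nonlinear leakage of the protected variable would survive, which is one reason a more general probabilistic treatment is needed.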
Image Segmentation for Fruit Detection and Yield Estimation in Apple
Orchards | cs.RO cs.CV cs.LG | Ground vehicles equipped with monocular vision systems are a valuable source
of high resolution image data for precision agriculture applications in
orchards. This paper presents an image processing framework for fruit detection
and counting using orchard image data. A general purpose image segmentation
approach is used, including two feature learning algorithms; multi-scale
Multi-Layered Perceptrons (MLP) and Convolutional Neural Networks (CNN). These
networks were extended by including contextual information about how the image
data was captured (metadata), which correlates with some of the appearance
variations and/or class distributions observed in the data. The pixel-wise
fruit segmentation output is processed using the Watershed Segmentation (WS)
and Circular Hough Transform (CHT) algorithms to detect and count individual
fruits. Experiments were conducted in a commercial apple orchard near
Melbourne, Australia. The results show an improvement in fruit segmentation
performance with the inclusion of metadata on the previously benchmarked MLP
network. We extend this work with CNNs, bringing agrovision closer to the
state-of-the-art in computer vision, where although metadata had negligible
influence, the best pixel-wise F1-score of $0.791$ was achieved. The WS
algorithm produced the best apple detection and counting results, with a
detection F1-score of $0.858$. As a final step, image fruit counts were
accumulated over multiple rows at the orchard and compared against the
post-harvest fruit counts that were obtained from a grading and counting
machine. The count estimates using CNN and WS resulted in the best performance
for this dataset, with a squared correlation coefficient of $r^2=0.826$.
| Suchet Bargoti, James Underwood | null | 1610.0812 | null | null |
Socratic Learning: Augmenting Generative Models to Incorporate Latent
Subsets in Training Data | cs.LG stat.ML | A challenge in training discriminative models like neural networks is
obtaining enough labeled training data. Recent approaches use generative models
to combine weak supervision sources, like user-defined heuristics or knowledge
bases, to label training data. Prior work has explored learning accuracies for
these sources even without ground truth labels, but they assume that a single
accuracy parameter is sufficient to model the behavior of these sources over
the entire training set. In particular, they fail to model latent subsets in
the training data in which the supervision sources perform differently than on
average. We present Socratic learning, a paradigm that uses feedback from a
corresponding discriminative model to automatically identify these subsets and
augments the structure of the generative model accordingly. Experimentally, we
show that without any ground truth labels, the augmented generative model
reduces error by up to 56.06% for a relation extraction task compared to a
state-of-the-art weak supervision technique that utilizes generative models.
| Paroma Varma, Bryan He, Dan Iter, Peng Xu, Rose Yu, Christopher De Sa,
Christopher R\'e | null | 1610.08123 | null | null |
Fast Bayesian Non-Negative Matrix Factorisation and Tri-Factorisation | cs.LG cs.AI cs.NA stat.ML | We present a fast variational Bayesian algorithm for performing non-negative
matrix factorisation and tri-factorisation. We show that our approach achieves
faster convergence per iteration and timestep (wall-clock) than Gibbs sampling
and non-probabilistic approaches, and does not require additional samples to
estimate the posterior. We show that in particular for matrix tri-factorisation
convergence is difficult, but our variational Bayesian approach offers a fast
solution, allowing the tri-factorisation approach to be used more effectively.
| Thomas Brouwer, Jes Frellsen, Pietro Lio' | null | 1610.08127 | null | null |
Automatic measurement of vowel duration via structured prediction | stat.ML cs.LG cs.SD | A key barrier to making phonetic studies scalable and replicable is the need
to rely on subjective, manual annotation. To help meet this challenge, a
machine learning algorithm was developed for automatic measurement of a widely
used phonetic measure: vowel duration. Manually-annotated data were used to
train a model that takes as input an arbitrary length segment of the acoustic
signal containing a single vowel that is preceded and followed by consonants
and outputs the duration of the vowel. The model is based on the structured
prediction framework. The input signal and a hypothesized set of a vowel's
onset and offset are mapped to an abstract vector space by a set of acoustic
feature functions. The learning algorithm is trained in this space to minimize
the difference in expectations between predicted and manually-measured vowel
durations. The trained model can then automatically estimate vowel durations
without phonetic or orthographic transcription. Results comparing the model to
three sets of manually annotated data suggest it out-performed the current gold
standard for duration measurement, an HMM-based forced aligner (which requires
orthographic or phonetic transcription as an input).
| Yossi Adi, Joseph Keshet, Emily Cibelli, Erin Gustafson, Cynthia
Clopper, Matthew Goldrick | 10.1121/1.4972527 | 1610.08166 | null | null |
Word Embeddings and Their Use In Sentence Classification Tasks | cs.LG cs.CL | This paper has two parts. In the first part we discuss word embeddings. We
discuss the need for them, some of the methods to create them, and some of
their interesting properties. We also compare them to image embeddings and see
how word embedding and image embedding can be combined to perform different
tasks. In the second part we implement a convolutional neural network trained
on top of pre-trained word vectors. The network is used for several
sentence-level classification tasks, and achieves state-of-the-art (or
comparable) results, demonstrating the great power of pre-trained word
embeddings over random ones.
| Amit Mandelbaum and Adi Shalev | null | 1610.08229 | null | null |
Things Bayes can't do | cs.LG math.ST stat.ML stat.TH | The problem of forecasting conditional probabilities of the next event given
the past is considered in a general probabilistic setting. Given an arbitrary
(large, uncountable) set C of predictors, we would like to construct a single
predictor that performs asymptotically as well as the best predictor in C, on
any data. Here we show that there are sets C for which such predictors exist,
but none of them is a Bayesian predictor with a prior concentrated on C. In
other words, there is a predictor with sublinear regret, but every Bayesian
predictor must have a linear regret. This negative finding is in sharp contrast
with previous results that establish the opposite for the case when one of the
predictors in $C$ achieves asymptotically vanishing error. In such a case, if
there is a predictor that achieves asymptotically vanishing error for any
measure in C, then there is a Bayesian predictor that also has this property,
and whose prior is concentrated on (a countable subset of) C.
| Daniil Ryabko | 10.1007/978-3-319-46379-7_17 | 1610.08239 | null | null |
Universality of Bayesian mixture predictors | math.ST cs.IT cs.LG math.IT stat.TH | The problem is that of sequential probability forecasting for finite-valued
time series. The data is generated by an unknown probability distribution over
the space of all one-way infinite sequences. It is known that this measure
belongs to a given set C, but the latter is completely arbitrary (uncountably
infinite, without any structure given). The performance is measured with
asymptotic average log loss. In this work it is shown that the minimax
asymptotic performance is always attainable, and it is attained by a convex
combination of countably many measures from the set C (a Bayesian mixture).
This was previously only known for the case when the best achievable asymptotic
error is 0. This also contrasts previous results that show that in the
non-realizable case all Bayesian mixtures may be suboptimal, while there is a
predictor that achieves the optimal performance.
| Daniil Ryabko | null | 1610.08249 | null | null |
An Improved Approach for Prediction of Parkinson's Disease using Machine
Learning Techniques | cs.LG | Parkinson's disease (PD) is one of the major public health problems in the
world. It is a well-known fact that around one million people suffer from
Parkinson's disease in the United States whereas the number of people suffering
from Parkinson's disease worldwide is around 5 million. Thus, it is important
to predict Parkinson's disease in early stages so that early plan for the
necessary treatment can be made. People are mostly familiar with the motor
symptoms of Parkinson's disease, however, an increasing amount of research is
being done to predict the Parkinson's disease from non-motor symptoms that
precede the motor ones. If an early and reliable prediction is possible then a
patient can get proper treatment at the right time. Non-motor symptoms
considered are Rapid Eye Movement (REM) sleep Behaviour Disorder (RBD) and
olfactory loss. Developing machine learning models that can help us in
predicting the disease can play a vital role in early prediction. In this
paper, we extend prior work that used non-motor features such as RBD and
olfactory loss, and additionally incorporate important biomarkers. We model
this classifier using machine learning models that have not been applied to
this task before. We developed automated
diagnostic models using Multilayer Perceptron, BayesNet, Random Forest and
Boosted Logistic Regression. It has been observed that Boosted Logistic
Regression provides the best performance with an impressive accuracy of
97.159% and the area under the ROC curve was 98.9%. Thus, it is concluded that these
models can be used for early prediction of Parkinson's disease.
| Kamal Nayan Reddy Challa, Venkata Sasank Pagolu, Ganapati Panda,
Babita Majhi | null | 1610.0825 | null | null |
Quantum-enhanced machine learning | quant-ph cs.AI cs.LG | The emerging field of quantum machine learning has the potential to
substantially aid in the problems and scope of artificial intelligence. This is
only enhanced by recent successes in the field of classical machine learning.
In this work we propose an approach for the systematic treatment of machine
learning, from the perspective of quantum information. Our approach is general
and covers all three main branches of machine learning: supervised,
unsupervised and reinforcement learning. While quantum improvements in
supervised and unsupervised learning have been reported, reinforcement learning
has received much less attention. Within our approach, we tackle the problem of
quantum enhancements in reinforcement learning as well, and propose a
systematic scheme for providing improvements. As an example, we show that
quadratic improvements in learning efficiency, and exponential improvements in
performance over limited time periods, can be obtained for a broad class of
learning problems.
| Vedran Dunjko, Jacob M. Taylor, Hans J. Briegel | 10.1103/PhysRevLett.117.130501 | 1610.08251 | null | null |
Universal adversarial perturbations | cs.CV cs.AI cs.LG stat.ML | Given a state-of-the-art deep neural network classifier, we show the
existence of a universal (image-agnostic) and very small perturbation vector
that causes natural images to be misclassified with high probability. We
propose a systematic algorithm for computing universal perturbations, and show
that state-of-the-art deep neural networks are highly vulnerable to such
perturbations, albeit being quasi-imperceptible to the human eye. We further
empirically analyze these universal perturbations and show, in particular, that
they generalize very well across neural networks. The surprising existence of
universal perturbations reveals important geometric correlations among the
high-dimensional decision boundary of classifiers. It further outlines
potential security breaches with the existence of single directions in the
input space that adversaries can possibly exploit to break a classifier on most
natural images.
| Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, Pascal
Frossard | null | 1610.08401 | null | null |
Counterfactual Reasoning about Intent for Interactive Navigation in
Dynamic Environments | cs.RO cs.LG | Many modern robotics applications require robots to function autonomously in
dynamic environments including other decision making agents, such as people or
other robots. This calls for fast and scalable interactive motion planning.
This requires models that take into consideration the other agent's intended
actions in one's own planning. We present a real-time motion planning framework
that brings together a few key components including intention inference by
reasoning counterfactually about potential motion of the other agents as they
work towards different goals. By using a light-weight motion model, we achieve
efficient iterative planning for fluid motion when avoiding pedestrians, in
parallel with goal inference for longer range movement prediction. This
inference framework is coupled with a novel distributed visual tracking method
that provides reliable and robust models for the current belief-state of the
monitored environment. This combined approach represents a computationally
efficient alternative to previously studied policy learning methods that often
require significant offline training or calibration and do not yet scale to
densely populated environments. We validate this framework with experiments
involving multi-robot and human-robot navigation. We further validate the
tracker component separately on much larger scale unconstrained pedestrian data
sets.
| A. Bordallo, F. Previtali, N. Nardelli, S. Ramamoorthy | 10.1109/IROS.2015.7353783 | 1610.08424 | null | null |
Fairness Beyond Disparate Treatment & Disparate Impact: Learning
Classification without Disparate Mistreatment | stat.ML cs.LG | Automated data-driven decision making systems are increasingly being used to
assist, or even replace humans in many settings. These systems function by
learning from historical decisions, often taken by humans. In order to maximize
the utility of these systems (or, classifiers), their training involves
minimizing the errors (or, misclassifications) over the given historical data.
However, it is quite possible that the optimally trained classifier makes
decisions for people belonging to different social groups with different
misclassification rates (e.g., misclassification rates for females are higher
than for males), thereby placing these groups at an unfair disadvantage. To
account for and avoid such unfairness, in this paper, we introduce a new notion
of unfairness, disparate mistreatment, which is defined in terms of
misclassification rates. We then propose intuitive measures of disparate
mistreatment for decision boundary-based classifiers, which can be easily
incorporated into their formulation as convex-concave constraints. Experiments
on synthetic as well as real world datasets show that our methodology is
effective at avoiding disparate mistreatment, often at a small cost in terms of
accuracy.
| Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna
P. Gummadi | 10.1145/3038912.3052660 | 1610.08452 | null | null |
Adaptive matching pursuit for sparse signal recovery | cs.LG stat.ML | Spike and Slab priors have been of much recent interest in signal processing
as a means of inducing sparsity in Bayesian inference. Applications domains
that benefit from the use of these priors include sparse recovery, regression
and classification. It is well-known that solving for the sparse coefficient
vector to maximize these priors results in a hard non-convex and mixed integer
programming problem. Most existing solutions to this optimization problem
either involve simplifying assumptions/relaxations or are computationally
expensive. We propose a new greedy and adaptive matching pursuit (AMP)
algorithm to directly solve this hard problem. Essentially, in each step of the
algorithm, the set of active elements would be updated by either adding or
removing one index, whichever results in better improvement. In addition, the
intermediate steps of the algorithm are calculated via an inexpensive Cholesky
decomposition which makes the algorithm much faster. Results on simulated data
sets as well as real-world image recovery challenges confirm the benefits of
the proposed AMP, particularly in providing a superior cost-quality trade-off
over existing alternatives.
| Tiep H. Vu, Hojjat S. Mousavi, Vishal Monga | null | 1610.08495 | null | null |
Synthesis of Shared Control Protocols with Provable Safety and
Performance Guarantees | cs.RO cs.AI cs.LG | We formalize synthesis of shared control protocols with correctness
guarantees for temporal logic specifications. More specifically, we introduce a
modeling formalism in which both a human and an autonomy protocol can issue
commands to a robot towards performing a certain task. These commands are
blended into a joint input to the robot. The autonomy protocol is synthesized
using an abstraction of possible human commands accounting for randomness in
decisions caused by factors such as fatigue or incomprehensibility of the
problem at hand. The synthesis is designed to ensure that the resulting robot
behavior satisfies given safety and performance specifications, e.g., in
temporal logic. Our solution is based on nonlinear programming and we address
the inherent scalability issue by presenting alternative methods. We assess the
feasibility and the scalability of the approach by an experimental evaluation.
| Nils Jansen and Murat Cubuktepe and Ufuk Topcu | null | 1610.085 | null | null |
Causal Network Learning from Multiple Interventions of Unknown
Manipulated Targets | stat.ML cs.LG | In this paper, we discuss structure learning of causal networks from multiple
data sets obtained by external intervention experiments where we do not know
what variables are manipulated. For example, the experimental conditions are
altered by changing temperature or administering drugs, but we do not know
which target variables are affected by the external interventions. From
such data sets, the structure learning becomes more difficult. For this case,
we first discuss the identifiability of causal structures. Next we present a
graph-merging method for learning causal networks for the case that the sample
sizes are large for these interventions. Then for the case that the sample
sizes of these interventions are relatively small, we propose a data-pooling
method for learning causal networks in which we pool all data sets of these
interventions together for the learning. Further we propose a re-sampling
approach to evaluate the edges of the causal network learned by the
data-pooling method. Finally we illustrate the proposed learning methods by
simulations.
| Yango He, Zhi Geng | null | 1610.08611 | null | null |
Can Active Memory Replace Attention? | cs.LG cs.CL | Several mechanisms to focus attention of a neural network on selected parts
of its input or memory have been used successfully in deep learning models in
recent years. Attention has improved image classification, image captioning,
speech recognition, generative models, and learning algorithmic tasks, but it
had probably the largest impact on neural machine translation.
Recently, similar improvements have been obtained using alternative
mechanisms that do not focus on a single part of a memory but operate on all of
it in parallel, in a uniform way. Such a mechanism, which we call active
memory, has improved over attention in algorithmic tasks, image processing, and
generative modelling.
So far, however, active memory has not improved over attention for most
natural language processing tasks, in particular for machine translation. We
analyze this shortcoming in this paper and propose an extended model of active
memory that matches existing attention models on neural machine translation and
generalizes better to longer sentences. We investigate this model and explain
why previous active memory models did not succeed. Finally, we discuss when
active memory brings most benefits and where attention can be a better choice.
| {\L}ukasz Kaiser and Samy Bengio | null | 1610.08613 | null | null |
Regret Bounds for Lifelong Learning | stat.ML cs.LG | We consider the problem of transfer learning in an online setting. Different
tasks are presented sequentially and processed by a within-task algorithm. We
propose a lifelong learning strategy which refines the underlying data
representation used by the within-task algorithm, thereby transferring
information from one task to the next. We show that when the within-task
algorithm comes with some regret bound, our strategy inherits this good
property. Our bounds are in expectation for a general loss function, and
uniform for a convex loss. We discuss applications to dictionary learning and
finite set of predictors. In the latter case, we improve previous
$O(1/\sqrt{m})$ bounds to $O(1/m)$ where $m$ is the per task sample size.
| Pierre Alquier and The Tien Mai and Massimiliano Pontil | null | 1610.08628 | null | null |
A random version of principal component analysis in data clustering | q-bio.QM cs.LG | Principal component analysis (PCA) is a widespread technique for data
analysis that relies on the covariance-correlation matrix of the analyzed data.
However to properly work with high-dimensional data, PCA poses severe
mathematical constraints on the minimum number of different replicates or
samples that must be included in the analysis. Here we show that a modified
algorithm works not only on well-dimensioned datasets, but also on degenerate
ones.
| Luigi Leonardo Palese | 10.1016/j.compbiolchem.2018.01.009 | 1610.08664 | null | null |
Learning Bound for Parameter Transfer Learning | stat.ML cs.LG | We consider a transfer-learning problem by using the parameter transfer
approach, where a suitable parameter of feature mapping is learned through one
task and applied to another objective task. Then, we introduce the notion of
the local stability and parameter transfer learnability of parametric feature
mapping, and thereby derive a learning bound for parameter transfer algorithms.
As an application of parameter transfer learning, we discuss the performance of
sparse coding in self-taught learning. Although self-taught learning algorithms
with plentiful unlabeled data often show excellent empirical performance, their
theoretical analysis has not been studied. In this paper, we also provide the
first theoretical learning bound for self-taught learning.
| Wataru Kumagai | null | 1610.08696 | null | null |
Compressive K-means | cs.LG stat.ML | The Lloyd-Max algorithm is a classical approach to perform K-means
clustering. Unfortunately, its cost becomes prohibitive as the training dataset
grows large. We propose a compressive version of K-means (CKM), that estimates
cluster centers from a sketch, i.e. from a drastically compressed
representation of the training dataset. We demonstrate empirically that CKM
performs similarly to Lloyd-Max, for a sketch size proportional to the number
of centroids times the ambient dimension, and independent of the size of the
original dataset. Given the sketch, the computational complexity of CKM is also
independent of the size of the dataset. Unlike Lloyd-Max which requires several
replicates, we further demonstrate that CKM is almost insensitive to
initialization. For a large dataset of 10^7 data points, we show that CKM can
run two orders of magnitude faster than five replicates of Lloyd-Max, with
similar clustering performance on artificial data. Finally, CKM achieves lower
classification errors on handwritten digits classification.
| Nicolas Keriven (PANAMA), Nicolas Tremblay (GIPSA-CICS), Yann
Traonmilin (PANAMA), R\'emi Gribonval (PANAMA) | null | 1610.08738 | null | null |
Differentially Private Variational Inference for Non-conjugate Models | stat.ML cs.CR cs.LG stat.ME | Many machine learning applications are based on data collected from people,
such as their tastes and behaviour as well as biological traits and genetic
data. Regardless of how important the application might be, one has to make
sure individuals' identities or the privacy of the data are not compromised in
the analysis. Differential privacy constitutes a powerful framework that
prevents breaching of data subject privacy from the output of a computation.
Differentially private versions of many important Bayesian inference methods
have been proposed, but there is a lack of an efficient unified approach
applicable to arbitrary models. In this contribution, we propose a
differentially private variational inference method with a very wide
applicability. It is built on top of doubly stochastic variational inference, a
recent advance which provides a variational solution to a large class of
models. We add differential privacy into doubly stochastic variational
inference by clipping and perturbing the gradients. The algorithm is made more
efficient through privacy amplification from subsampling. We demonstrate the
method can reach an accuracy close to non-private level under reasonably strong
privacy guarantees, clearly improving over previous sampling-based alternatives
especially in the strong privacy regime.
| Joonas J\"alk\"o and Onur Dikmen and Antti Honkela | null | 1610.08749 | null | null |
CoType: Joint Extraction of Typed Entities and Relations with Knowledge
Bases | cs.CL cs.LG | Extracting entities and relations for types of interest from text is
important for understanding massive text corpora. Traditionally, systems of
entity relation extraction have relied on human-annotated corpora for training
and adopted an incremental pipeline. Such systems require additional human
expertise to be ported to a new domain, and are vulnerable to errors cascading
down the pipeline. In this paper, we investigate joint extraction of typed
entities and relations with labeled data heuristically obtained from knowledge
bases (i.e., distant supervision). As our algorithm for type labeling via
distant supervision is context-agnostic, noisy training data poses unique
challenges for the task. We propose a novel domain-independent framework,
called CoType, that runs a data-driven text segmentation algorithm to extract
entity mentions, and jointly embeds entity mentions, relation mentions, text
features and type labels into two low-dimensional spaces (for entity and
relation mentions respectively), where, in each space, objects whose types are
close will also have similar representations. CoType, then using these learned
embeddings, estimates the types of test (unlinkable) mentions. We formulate a
joint optimization problem to learn embeddings from text corpora and knowledge
bases, adopting a novel partial-label loss function for noisy labeled data and
introducing an object "translation" function to capture the cross-constraints
of entities and relations on each other. Experiments on three public datasets
demonstrate the effectiveness of CoType across different domains (e.g., news,
biomedical), with an average of 25% improvement in F1 score compared to the
next best method.
| Xiang Ren, Zeqiu Wu, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, Tarek
F. Abdelzaher, Jiawei Han | null | 1610.08763 | null | null |
A Category Space Approach to Supervised Dimensionality Reduction | stat.ML cs.LG | Supervised dimensionality reduction has emerged as an important theme in the
last decade. Despite the plethora of models and formulations, there is a lack
of a simple model which aims to project the set of patterns into a space
defined by the classes (or categories). To this end, we set up a model in which
each class is represented as a 1D subspace of the vector space formed by the
features. Assuming the set of classes does not exceed the cardinality of the
features, the model results in multi-class supervised learning in which the
features of each class are projected into the class subspace. Class
discrimination is automatically guaranteed via the imposition of orthogonality
of the 1D class sub-spaces. The resulting optimization problem - formulated as
the minimization of a sum of quadratic functions on a Stiefel manifold - while
being non-convex (due to the constraints), nevertheless has a structure for
which we can identify when we have reached a global minimum. After formulating
a version with standard inner products, we extend the formulation to
reproducing kernel Hilbert spaces in a straightforward manner. The optimization
approach also extends in a similar fashion to the kernel version. Results and
comparisons with the multi-class Fisher linear (and kernel) discriminants and
principal component analysis (linear and kernel) showcase the relative merits
of this approach to dimensionality reduction.
| Anthony O. Smith and Anand Rangarajan | null | 1610.08838 | null | null |
On Bochner's and Polya's Characterizations of Positive-Definite Kernels
and the Respective Random Feature Maps | stat.ML cs.LG | Positive-definite kernel functions are fundamental elements of kernel methods
and Gaussian processes. A well-known construction of such functions comes from
Bochner's characterization, which connects a positive-definite function with a
probability distribution. Another construction, which appears to have attracted
less attention, is Polya's criterion that characterizes a subset of these
functions. In this paper, we study the latter characterization and derive a
number of novel kernels little known previously.
In the context of large-scale kernel machines, Rahimi and Recht (2007)
proposed a random feature map (random Fourier) that approximates a kernel
function, through independent sampling of the probability distribution in
Bochner's characterization. The authors also suggested another feature map
(random binning), which, although not explicitly stated, comes from Polya's
characterization. We show that with the same number of random samples, the
random binning map results in a Euclidean inner product closer to the kernel
than does the random Fourier map. The superiority of the random binning map is
confirmed empirically through regressions and classifications in the
reproducing kernel Hilbert space.
| Jie Chen, Dehua Cheng, Yan Liu | null | 1610.08861 | null | null |
Local Similarity-Aware Deep Feature Embedding | cs.CV cs.LG | Existing deep embedding methods in vision tasks are capable of learning a
compact Euclidean space from images, where Euclidean distances correspond to a
similarity metric. To make learning more effective and efficient, hard sample
mining is usually employed, with samples identified through computing the
Euclidean feature distance. However, the global Euclidean distance cannot
faithfully characterize the true feature similarity in a complex visual feature
space, where the intraclass distance in a high-density region may be larger
than the interclass distance in low-density regions. In this paper, we
introduce a Position-Dependent Deep Metric (PDDM) unit, which is capable of
learning a similarity metric adaptive to local feature structure. The metric
can be used to select genuinely hard samples in a local neighborhood to guide
the deep embedding learning in an online and robust manner. The new layer is
appealing in that it is pluggable to any convolutional networks and is trained
end-to-end. Our local similarity-aware feature embedding not only demonstrates
faster convergence and boosted performance on two complex image retrieval
datasets, its large margin nature also leads to superior generalization results
under the large and open set scenarios of transfer learning and zero-shot
learning on ImageNet 2010 and ImageNet-10K datasets.
| Chen Huang, Chen Change Loy, Xiaoou Tang | null | 1610.08904 | null | null |
Learning Scalable Deep Kernels with Recurrent Structure | cs.LG cs.AI stat.ML | Many applications in speech, robotics, finance, and biology deal with
sequential data, where ordering matters and recurrent structures are common.
However, this structure cannot be easily captured by standard kernel functions.
To model such structure, we propose expressive closed-form kernel functions for
Gaussian processes. The resulting model, GP-LSTM, fully encapsulates the
inductive biases of long short-term memory (LSTM) recurrent networks, while
retaining the non-parametric probabilistic advantages of Gaussian processes. We
learn the properties of the proposed kernels by optimizing the Gaussian process
marginal likelihood using a new provably convergent semi-stochastic gradient
procedure and exploit the structure of these kernels for scalable training and
prediction. This approach provides a practical representation for Bayesian
LSTMs. We demonstrate state-of-the-art performance on several benchmarks, and
thoroughly investigate a consequential autonomous driving application, where
the predictive uncertainties provided by GP-LSTM are uniquely valuable.
| Maruan Al-Shedivat, Andrew Gordon Wilson, Yunus Saatchi, Zhiting Hu,
Eric P. Xing | null | 1610.08936 | null | null |
SoundNet: Learning Sound Representations from Unlabeled Video | cs.CV cs.LG cs.SD | We learn rich natural sound representations by capitalizing on large amounts
of unlabeled sound data collected in the wild. We leverage the natural
synchronization between vision and sound to learn an acoustic representation
using two-million unlabeled videos. Unlabeled video has the advantage that it
can be economically acquired at massive scales, yet contains useful signals
about natural sound. We propose a student-teacher training procedure which
transfers discriminative visual knowledge from well established visual
recognition models into the sound modality using unlabeled video as a bridge.
Our sound representation yields significant performance improvements over the
state-of-the-art results on standard benchmarks for acoustic scene/object
classification. Visualizations suggest some high-level semantics automatically
emerge in the sound network, even though it is trained without ground truth
labels.
| Yusuf Aytar, Carl Vondrick, Antonio Torralba | null | 1610.09001 | null | null |
Cross-Modal Scene Networks | cs.CV cs.LG cs.MM | People can recognize scenes across many different modalities beyond natural
images. In this paper, we investigate how to learn cross-modal scene
representations that transfer across modalities. To study this problem, we
introduce a new cross-modal scene dataset. While convolutional neural networks
can categorize scenes well, they also learn an intermediate representation not
aligned across modalities, which is undesirable for cross-modal transfer
applications. We present methods to regularize cross-modal convolutional neural
networks so that they have a shared representation that is agnostic of the
modality. Our experiments suggest that our scene representation can help
transfer representations across modalities for retrieval. Moreover, our
visualizations suggest that units emerge in the shared representation that tend
to activate on consistent concepts independently of the modality.
| Yusuf Aytar, Lluis Castrejon, Carl Vondrick, Hamed Pirsiavash, Antonio
Torralba | null | 1610.09003 | null | null |
Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes | cs.LG | Neural networks augmented with external memory have the ability to learn
algorithmic solutions to complex tasks. These models appear promising for
applications such as language modeling and machine translation. However, they
scale poorly in both space and time as the amount of memory grows --- limiting
their applicability to real-world domains. Here, we present an end-to-end
differentiable memory access scheme, which we call Sparse Access Memory (SAM),
that retains the representational power of the original approaches whilst
training efficiently with very large memories. We show that SAM achieves
asymptotic lower bounds in space and time complexity, and find that an
implementation runs $1,\!000\times$ faster and with $3,\!000\times$ less
physical memory than non-sparse models. SAM learns with comparable data
efficiency to existing models on a range of synthetic tasks and one-shot
Omniglot character recognition, and can scale to tasks requiring $100,\!000$s
of time steps and memories. We also show how our approach can be adapted
for models that maintain temporal associations between memories, as with the
recently introduced Differentiable Neural Computer.
| Jack W Rae, Jonathan J Hunt, Tim Harley, Ivo Danihelka, Andrew Senior,
Greg Wayne, Alex Graves, Timothy P Lillicrap | null | 1610.09027 | null | null |
Operator Variational Inference | stat.ML cs.LG stat.CO stat.ME | Variational inference is an umbrella term for algorithms which cast Bayesian
inference as optimization. Classically, variational inference uses the
Kullback-Leibler divergence to define the optimization. Though this divergence
has been widely used, the resultant posterior approximation can suffer from
undesirable statistical properties. To address this, we reexamine variational
inference from its roots as an optimization problem. We use operators, or
functions of functions, to design variational objectives. As one example, we
design a variational objective with a Langevin-Stein operator. We develop a
black box algorithm, operator variational inference (OPVI), for optimizing any
operator objective. Importantly, operators enable us to make explicit the
statistical and computational tradeoffs for variational inference. We can
characterize different properties of variational objectives, such as objectives
that admit data subsampling---allowing inference to scale to massive data---as
well as objectives that admit variational programs---a rich class of posterior
approximations that does not require a tractable density. We illustrate the
benefits of OPVI on a mixture model and a generative model of images.
| Rajesh Ranganath, Jaan Altosaar, Dustin Tran, David M. Blei | null | 1610.09033 | null | null |
Professor Forcing: A New Algorithm for Training Recurrent Networks | stat.ML cs.LG | The Teacher Forcing algorithm trains recurrent networks by supplying observed
sequence values as inputs during training and using the network's own
one-step-ahead predictions to do multi-step sampling. We introduce the
Professor Forcing algorithm, which uses adversarial domain adaptation to
encourage the dynamics of the recurrent network to be the same when training
the network and when sampling from the network over multiple time steps. We
apply Professor Forcing to language modeling, vocal synthesis on raw waveforms,
handwriting generation, and image generation. Empirically we find that
Professor Forcing acts as a regularizer, improving test likelihood on character
level Penn Treebank and sequential MNIST. We also find that the model
qualitatively improves samples, especially when sampling for a large number of
time steps. This is supported by human evaluation of sample quality. Trade-offs
between Professor Forcing and Scheduled Sampling are discussed. We produce
T-SNEs showing that Professor Forcing successfully makes the dynamics of the
network during training and sampling more similar.
| Alex Lamb, Anirudh Goyal, Ying Zhang, Saizheng Zhang, Aaron Courville,
Yoshua Bengio | null | 1610.09038 | null | null |
Orthogonal Random Features | cs.LG stat.ML | We present an intriguing discovery related to Random Fourier Features: in
Gaussian kernel approximation, replacing the random Gaussian matrix by a
properly scaled random orthogonal matrix significantly decreases kernel
approximation error. We call this technique Orthogonal Random Features (ORF),
and provide theoretical and empirical justification for this behavior.
Motivated by this discovery, we further propose Structured Orthogonal Random
Features (SORF), which uses a class of structured discrete orthogonal matrices
to speed up the computation. The method reduces the time cost from
$\mathcal{O}(d^2)$ to $\mathcal{O}(d \log d)$, where $d$ is the data
dimensionality, with almost no compromise in kernel approximation quality
compared to ORF. Experiments on several datasets verify the effectiveness of
ORF and SORF over the existing methods. We also provide discussions on using
the same type of discrete orthogonal structure for a broader range of
applications.
| Felix X. Yu, Ananda Theertha Suresh, Krzysztof Choromanski, Daniel
Holtmann-Rice, Sanjiv Kumar | null | 1610.09072 | null | null |
Missing Data Imputation for Supervised Learning | stat.ML cs.LG | Missing data imputation can help improve the performance of prediction models
in situations where missing data hide useful information. This paper compares
methods for imputing missing categorical data for supervised classification
tasks. We experiment on two machine learning benchmark datasets with missing
categorical data, comparing classifiers trained on non-imputed (i.e., one-hot
encoded) or imputed data with different levels of additional missing-data
perturbation. We show imputation methods can increase predictive accuracy in
the presence of missing-data perturbation, which can actually improve
prediction accuracy by regularizing the classifier. We achieve the
state-of-the-art on the Adult dataset with missing-data perturbation and
k-nearest-neighbors (k-NN) imputation.
| Jason Poulos and Rafael Valle | 10.1080/08839514.2018.1448143 | 1610.09075 | null | null |
SOL: A Library for Scalable Online Learning Algorithms | cs.LG stat.ML | SOL is an open-source library for scalable online learning algorithms, and is
particularly suitable for learning with high-dimensional data. The library
provides a family of regular and sparse online learning algorithms for
large-scale binary and multi-class classification tasks with high efficiency,
scalability, portability, and extensibility. SOL is implemented in C++ and
comes with a collection of easy-to-use command-line tools, Python wrappers
and library calls for users and developers, as well as comprehensive
documentation for both beginners and advanced users. SOL is not only a practical machine
learning toolbox, but also a comprehensive experimental platform for online
learning research. Experiments demonstrate that SOL is highly efficient and
scalable for large-scale machine learning with high-dimensional data.
| Yue Wu, Steven C.H. Hoi, Chenghao Liu, Jing Lu, Doyen Sahoo, Nenghai
Yu | null | 1610.09083 | null | null |
$f$-Divergence Inequalities via Functional Domination | cs.IT cs.LG math.IT math.PR math.ST stat.TH | This paper considers derivation of $f$-divergence inequalities via the
approach of functional domination. Bounds on an $f$-divergence based on one or
several other $f$-divergences are introduced, dealing with pairs of probability
measures defined on arbitrary alphabets. In addition, a variety of bounds are
shown to hold under boundedness assumptions on the relative information. The
journal paper, which includes more approaches for the derivation of
f-divergence inequalities and proofs, is available on the arXiv at
https://arxiv.org/abs/1508.00335, and it has been published in the IEEE Trans.
on Information Theory, vol. 62, no. 11, pp. 5973-6006, November 2016.
| Igal Sason and Sergio Verd\'u | null | 1610.0911 | null | null |
Adaptive regularization for Lasso models in the context of
non-stationary data streams | stat.ML cs.LG | Large scale, streaming datasets are ubiquitous in modern machine learning.
Streaming algorithms must be scalable, amenable to incremental training and
robust to the presence of non-stationarity. In this work we consider the problem
of learning $\ell_1$ regularized linear models in the context of streaming
data. In particular, the focus of this work revolves around how to select the
regularization parameter when data arrives sequentially and the underlying
distribution is non-stationary (implying the choice of optimal regularization
parameter is itself time-varying). We propose a framework through which to
infer an adaptive regularization parameter. Our approach employs an $\ell_1$
penalty constraint where the corresponding sparsity parameter is iteratively
updated via stochastic gradient descent. This serves to reformulate the choice
of regularization parameter in a principled framework for online learning. The
proposed method is derived for linear regression and subsequently extended to
generalized linear models. We validate our approach using simulated and real
datasets and present an application to a neuroimaging dataset.
| Ricardo Pio Monti, Christoforos Anagnostopoulos, Giovanni Montana | null | 1610.09127 | null | null |
Towards a continuous modeling of natural language domains | cs.CL cs.LG | Humans continuously adapt their style and language to a variety of domains.
However, a reliable definition of `domain' has eluded researchers thus far.
Additionally, the notion of discrete domains stands in contrast to the
multiplicity of heterogeneous domains that humans navigate, many of which
overlap. In order to better understand the change and variation of human
language, we draw on research in domain adaptation and extend the notion of
discrete domains to the continuous spectrum. We propose representation
learning-based models that can adapt to continuous domains and detail how these
can be used to investigate variation in language. To this end, we propose to
use dialogue modeling as a test bed due to its proximity to language modeling
and its social component.
| Sebastian Ruder, Parsa Ghaffari, and John G. Breslin | null | 1610.09158 | null | null |
A Conceptual Development of Quench Prediction App build on LSTM and ELQA
framework | cs.LG | This article presents the development of a web application for quench
prediction in \gls{te-mpe-ee} at CERN. The authors describe an ELectrical
Quality Assurance (ELQA) framework, a platform designed for rapid development
of web-integrated data analysis applications for the different analyses needed
during the hardware commissioning of the Large Hadron Collider (LHC). The
second part of the article describes research carried out on data collected
from the Quench Detection System using an LSTM recurrent neural network. The
article discusses and presents conceptual work on implementing a quench
prediction application for \gls{te-mpe-ee} based on ELQA and the quench
prediction algorithm.
| Matej Mertik and Maciej Wielgosz and Andrzej Skocze\'n | null | 1610.09201 | null | null |
Hierarchical Clustering via Spreading Metrics | cs.LG | We study the cost function for hierarchical clusterings introduced by
[arXiv:1510.05043] where hierarchies are treated as first-class objects rather
than deriving their cost from projections into flat clusters. It was also shown
in [arXiv:1510.05043] that a top-down algorithm returns a hierarchical
clustering of cost at most $O\left(\alpha_n \log n\right)$ times the cost of
the optimal hierarchical clustering, where $\alpha_n$ is the approximation
ratio of the Sparsest Cut subroutine used. Thus using the best known
approximation algorithm for Sparsest Cut due to Arora-Rao-Vazirani, the
top-down algorithm returns a hierarchical clustering of cost at most
$O\left(\log^{3/2} n\right)$ times the cost of the optimal solution. We improve
this by giving an $O(\log{n})$-approximation algorithm for this problem. Our
main technical ingredients are a combinatorial characterization of ultrametrics
induced by this cost function, deriving an Integer Linear Programming (ILP)
formulation for this family of ultrametrics, and showing how to iteratively
round an LP relaxation of this formulation by using the idea of \emph{sphere
growing} which has been extensively used in the context of graph partitioning.
We also prove that our algorithm returns an $O(\log{n})$-approximate
hierarchical clustering for a generalization of this cost function also studied
in [arXiv:1510.05043]. Experiments show that the hierarchies found by using the
ILP formulation as well as our rounding algorithm often have better projections
into flat clusters than the standard linkage based algorithms. We also give
constant factor inapproximability results for this problem.
| Aurko Roy and Sebastian Pokutta | null | 1610.09269 | null | null |
Toward Implicit Sample Noise Modeling: Deviation-driven Matrix
Factorization | cs.LG cs.IR stat.ML | The objective function of a matrix factorization model usually aims to
minimize the average of a regression error contributed by each element.
However, given the existence of stochastic noises, the implicit deviations of
sample data from their true values are almost surely diverse, which makes each
data point not equally suitable for fitting a model. In this case, simply
averaging the cost among data in the objective function is not ideal.
Intuitively we would like to emphasize more on the reliable instances (i.e.,
those contain smaller noise) while training a model. Motivated by such
observation, we derive our formula from a theoretical framework for optimal
weighting under heteroscedastic noise distribution. Specifically, by modeling
and learning the deviation of data, we design a novel matrix factorization
model. Our model has two advantages. First, it jointly learns the deviation and
conducts dynamic reweighting of instances, allowing the model to converge to a
better solution. Second, during learning, the deviated instances are assigned
lower weights, which leads to faster convergence since the model does not need
to overfit the noise. The experiments are conducted in clean recommendation and
noisy sensor datasets to test the effectiveness of the model in various
scenarios. The results show that our model outperforms the state-of-the-art
factorization and deep learning models in both accuracy and efficiency.
| Guang-He Lee, Shao-Wen Yang, Shou-De Lin | null | 1610.09274 | null | null |
Improving Sampling from Generative Autoencoders with Markov Chains | cs.LG cs.AI stat.ML | We focus on generative autoencoders, such as variational or adversarial
autoencoders, which jointly learn a generative model alongside an inference
model. Generative autoencoders are those which are trained to softly enforce a
prior on the latent distribution learned by the inference model. We call the
distribution to which the inference model maps observed samples the learned
latent distribution; it may not be consistent with the prior. We formulate a
Markov chain Monte Carlo (MCMC) sampling process, equivalent to iteratively
decoding and encoding, which allows us to sample from the learned latent
distribution. Since the generative model learns to map from the learned latent
distribution, rather than the prior, we may use MCMC to improve the quality of
samples drawn from the generative model, especially when the learned latent
distribution is far from the prior. Using MCMC sampling, we are able to reveal
previously unseen differences between generative autoencoders trained either
with or without a denoising criterion.
| Antonia Creswell, Kai Arulkumaran, Anil Anthony Bharath | null | 1610.09296 | null | null |
Globally Optimal Training of Generalized Polynomial Neural Networks with
Nonlinear Spectral Methods | cs.LG math.OC stat.ML | The optimization problem behind neural networks is highly non-convex.
Training with stochastic gradient descent and variants requires careful
parameter tuning and provides no guarantee to achieve the global optimum. In
contrast we show under quite weak assumptions on the data that a particular
class of feedforward neural networks can be trained globally optimal with a
linear convergence rate with our nonlinear spectral method. Up to our knowledge
this is the first practically feasible method which achieves such a guarantee.
While the method can in principle be applied to deep networks, we restrict
ourselves for simplicity in this paper to one and two hidden layer networks.
Our experiments confirm that these models are rich enough to achieve good
performance on a series of real-world datasets.
| Antoine Gautier, Quynh Nguyen and Matthias Hein | null | 1610.093 | null | null |
Homotopy Analysis for Tensor PCA | stat.ML cs.LG | Developing efficient and guaranteed nonconvex algorithms has been an
important challenge in modern machine learning. Algorithms with good empirical
performance such as stochastic gradient descent often lack theoretical
guarantees. In this paper, we analyze the class of homotopy or continuation
methods for global optimization of nonconvex functions. These methods start
from an objective function that is efficient to optimize (e.g. convex), and
progressively modify it to obtain the required objective, and the solutions are
passed along the homotopy path. For the challenging problem of tensor PCA, we
prove global convergence of the homotopy method in the "high noise" regime. The
signal-to-noise requirement for our algorithm is tight in the sense that it
matches the recovery guarantee for the best degree-4 sum-of-squares algorithm.
In addition, we prove a phase transition along the homotopy path for tensor
PCA. This allows us to simplify the homotopy method to a local search algorithm,
viz., tensor power iterations, with a specific initialization and a noise
injection procedure, while retaining the theoretical guarantees.
| Anima Anandkumar, Yuan Deng, Rong Ge, Hossein Mobahi | null | 1610.09322 | null | null |
Discriminative Gaifman Models | cs.LG | We present discriminative Gaifman models, a novel family of relational
machine learning models. Gaifman models learn feature representations bottom up
from representations of locally connected and bounded-size regions of knowledge
bases (KBs). Considering local and bounded-size neighborhoods of knowledge
bases renders logical inference and learning tractable, mitigates the problem
of overfitting, and facilitates weight sharing. Gaifman models sample
neighborhoods of knowledge bases so as to make the learned relational models
more robust to missing objects and relations which is a common situation in
open-world KBs. We present the core ideas of Gaifman models and apply them to
large-scale relational learning problems. We also discuss the ways in which
Gaifman models relate to some existing relational machine learning approaches.
| Mathias Niepert | null | 1610.09369 | null | null |
Dynamic matrix recovery from incomplete observations under an exact
low-rank constraint | stat.ML cs.LG | Low-rank matrix factorizations arise in a wide variety of applications --
including recommendation systems, topic models, and source separation, to name
just a few. In these and many other applications, it has been widely noted that
by incorporating temporal information and allowing for the possibility of
time-varying models, significant improvements are possible in practice.
However, despite the reported superior empirical performance of these dynamic
models over their static counterparts, there is limited theoretical
justification for introducing these more complex models. In this paper we aim
to address this gap by studying the problem of recovering a dynamically
evolving low-rank matrix from incomplete observations. First, we propose the
locally weighted matrix smoothing (LOWEMS) framework as one possible approach
to dynamic matrix recovery. We then establish error bounds for LOWEMS in both
the {\em matrix sensing} and {\em matrix completion} observation models. Our
results quantify the potential benefits of exploiting dynamic constraints both
in terms of recovery accuracy and sample complexity. To illustrate these
benefits we provide both synthetic and real-world experimental results.
| Liangbei Xu and Mark A. Davenport | null | 1610.0942 | null | null |
Beyond Exchangeability: The Chinese Voting Process | cs.LG cs.IR cs.SI | Many online communities present user-contributed responses such as reviews of
products and answers to questions. User-provided helpfulness votes can
highlight the most useful responses, but voting is a social process that can
gain momentum based on the popularity of responses and the polarity of existing
votes. We propose the Chinese Voting Process (CVP) which models the evolution
of helpfulness votes as a self-reinforcing process dependent on position and
presentation biases. We evaluate this model on Amazon product reviews and more
than 80 StackExchange forums, measuring the intrinsic quality of individual
responses and behavioral coefficients of different communities.
| Moontae Lee, Seok Hyun Jin, David Mimno | null | 1610.09428 | null | null |
Asynchronous Stochastic Block Coordinate Descent with Variance Reduction | cs.LG | Asynchronous parallel implementations for stochastic optimization have
achieved great success in theory and practice recently. Lock-free
asynchronous implementations are more efficient than those that use write or
read locks. In this paper, we focus on a composite objective function
consisting of a smooth convex function $f$ and a block separable convex
function, which widely exists in machine learning and computer vision. We
propose an asynchronous stochastic block coordinate descent algorithm with the
acceleration technique of variance reduction (AsySBCDVR), which is
lock-free in both implementation and analysis. AsySBCDVR is particularly
important because it can scale well with the sample size and dimension
simultaneously. We prove that AsySBCDVR achieves a linear convergence rate when
the function $f$ satisfies the optimal strong convexity property, and a
sublinear rate when $f$ is generally convex. More importantly, a near-linear
speedup on a parallel system with shared memory can be obtained.
| Bin Gu, Zhouyuan Huo, Heng Huang | null | 1610.09447 | null | null |
KeystoneML: Optimizing Pipelines for Large-Scale Advanced Analytics | cs.LG cs.DC | Modern advanced analytics applications make use of machine learning
techniques and contain multiple steps of domain-specific and general-purpose
processing with high resource requirements. We present KeystoneML, a system
that captures and optimizes the end-to-end large-scale machine learning
applications for high-throughput training in a distributed environment with a
high-level API. This approach offers increased ease of use and higher
performance over existing systems for large scale learning. We demonstrate the
effectiveness of KeystoneML in achieving high quality statistical accuracy and
scalable training using real world datasets in several domains. By optimizing
execution KeystoneML achieves up to 15x training throughput over unoptimized
execution on a real image classification application.
| Evan R. Sparks, Shivaram Venkataraman, Tomer Kaftan, Michael J.
Franklin, Benjamin Recht | null | 1610.09451 | null | null |
Sparse Signal Recovery for Binary Compressed Sensing by Majority Voting
Neural Networks | cs.IT cs.LG math.IT stat.ML | In this paper, we propose majority voting neural networks for sparse signal
recovery in binary compressed sensing. The majority voting neural network is
composed of several independently trained feedforward neural networks employing
the sigmoid function as an activation function. Our empirical study shows that
the choice of loss function used in training the network is of
prime importance. We found a loss function suitable for sparse signal recovery,
which includes a cross entropy-like term and an $L_1$ regularized term. From
the experimental results, we observed that the majority voting neural network
achieves excellent recovery performance, approaching the optimal
performance as the number of component nets grows. The simple architecture of
the majority voting neural networks would be beneficial for both software and
hardware implementations.
| Daisuke Ito and Tadashi Wadayama | null | 1610.09463 | null | null |
SDP Relaxation with Randomized Rounding for Energy Disaggregation | cs.LG | We develop a scalable, computationally efficient method for the task of
energy disaggregation for home appliance monitoring. In this problem the goal
is to estimate the energy consumption of each appliance over time based on the
total energy-consumption signal of a household. The current state of the art is
to model the problem as inference in factorial HMMs, and use quadratic
programming to find an approximate solution to the resulting quadratic integer
program. Here we take a more principled approach, better suited to integer
programming problems, and find an approximate optimum by combining convex
semidefinite relaxations with randomized rounding, as well as a scalable ADMM method
that exploits the special structure of the resulting semidefinite program.
Simulation results both in synthetic and real-world datasets demonstrate the
superiority of our method.
| Kiarash Shaloudegi, Andr\'as Gy\"orgy, Csaba Szepesv\'ari, and Wilsun
Xu | null | 1610.09491 | null | null |
Contextual Decision Processes with Low Bellman Rank are PAC-Learnable | cs.LG stat.ML | This paper studies systematic exploration for reinforcement learning with
rich observations and function approximation. We introduce a new model called
contextual decision processes, that unifies and generalizes most prior
settings. Our first contribution is a complexity measure, the Bellman rank,
that we show enables tractable learning of near-optimal behavior in these
processes and is naturally small for many well-studied reinforcement learning
settings. Our second contribution is a new reinforcement learning algorithm
that engages in systematic exploration to learn contextual decision processes
with low Bellman rank. Our algorithm provably learns near-optimal behavior with
a number of samples that is polynomial in all relevant parameters but
independent of the number of unique observations. The approach uses Bellman
error minimization with optimistic exploration and provides new insights into
efficient exploration for reinforcement learning with function approximation.
| Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford, Robert
E. Schapire | null | 1610.09512 | null | null |
Phased LSTM: Accelerating Recurrent Network Training for Long or
Event-based Sequences | cs.LG | Recurrent Neural Networks (RNNs) have become the state-of-the-art choice for
extracting patterns from temporal sequences. However, current RNN models are
ill-suited to process irregularly sampled data triggered by events generated in
continuous time by sensors or other neurons. Such data can occur, for example,
when the input comes from novel event-driven artificial sensors that generate
sparse, asynchronous streams of events or from multiple conventional sensors
with different update intervals. In this work, we introduce the Phased LSTM
model, which extends the LSTM unit by adding a new time gate. This gate is
controlled by a parametrized oscillation with a frequency range that produces
updates of the memory cell only during a small percentage of the cycle. Even
with the sparse updates imposed by the oscillation, the Phased LSTM network
achieves faster convergence than regular LSTMs on tasks which require learning
of long sequences. The model naturally integrates inputs from sensors of
arbitrary sampling rates, thereby opening new areas of investigation for
processing asynchronous sensory events that carry timing information. It also
greatly improves the performance of LSTMs in standard RNN applications, and
does so with an order-of-magnitude fewer computes at runtime.
| Daniel Neil, Michael Pfeiffer, and Shih-Chii Liu | null | 1610.09513 | null | null |
FEAST: An Automated Feature Selection Framework for Compilation Tasks | cs.PL cs.LG cs.SE | The success of the application of machine-learning techniques to compilation
tasks can be largely attributed to the recent development and advancement of
program characterization, a process that numerically or structurally quantifies
a target program. While great achievements have been made in identifying key
features to characterize programs, choosing a correct set of features for a
specific compiler task remains an ad hoc procedure. In order to guarantee a
comprehensive coverage of features, compiler engineers usually need to select
an excessive number of features. This, unfortunately, can lead to a
selection of multiple similar features, which in turn could create a new
problem of bias that emphasizes certain aspects of a program's characteristics,
hence reducing the accuracy and performance of the target compiler task. In
this paper, we propose FEAture Selection for compilation Tasks (FEAST), an
efficient and automated framework for determining the most relevant and
representative features from a feature pool. Specifically, FEAST utilizes
widely used statistics and machine-learning tools, including LASSO, sequential
forward and backward selection, for automatic feature selection, and can in
general be applied to any numerical feature set. This paper further proposes an
automated approach to compiler parameter assignment for assessing the
performance of FEAST. Intensive experimental results demonstrate that, under
the compiler parameter assignment task, FEAST can achieve comparable results
with about 18% of features that are automatically selected from the entire
feature pool. We also inspect these selected features and discuss their roles
in program execution.
| Pai-Shun Ting, Chun-Chen Tu, Pin-Yu Chen, Ya-Yun Lo, Shin-Ming Cheng | null | 1610.09543 | null | null |
TensorLy: Tensor Learning in Python | cs.LG | Tensors are higher-order extensions of matrices. While matrix methods form
the cornerstone of machine learning and data analysis, tensor methods have been
gaining increasing traction. However, software support for tensor operations is
not on the same footing. In order to bridge this gap, we have developed
\emph{TensorLy}, a high-level API for tensor methods and deep tensorized neural
networks in Python. TensorLy aims to follow the same standards adopted by the
main projects of the Python scientific community, and seamlessly integrates
with them. Its BSD license makes it suitable for both academic and commercial
applications. TensorLy's backend system allows users to perform computations
with NumPy, MXNet, PyTorch, TensorFlow and CuPy. They can be scaled on multiple
CPU or GPU machines. In addition, using the deep-learning frameworks as backend
allows users to easily design and train deep tensorized neural networks.
TensorLy is available at https://github.com/tensorly/tensorly
| Jean Kossaifi, Yannis Panagakis, Anima Anandkumar and Maja Pantic | null | 1610.09555 | null | null |
Fair Algorithms for Infinite and Contextual Bandits | cs.LG | We study fairness in linear bandit problems. Starting from the notion of
meritocratic fairness introduced in Joseph et al. [2016], we carry out a more
refined analysis of a more general problem, achieving better performance
guarantees with fewer modelling assumptions on the number and structure of
available choices as well as the number selected. We also analyze the
previously-unstudied question of fairness in infinite linear bandit problems,
obtaining instance-dependent regret upper bounds as well as lower bounds
demonstrating that this instance-dependence is necessary. The result is a
framework for meritocratic fairness in an online linear setting that is
substantially more powerful, general, and realistic than the current state of
the art.
| Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, and
Aaron Roth | null | 1610.09559 | null | null |
A Theoretical Study of The Relationship Between Whole An ELM Network and
Its Subnetworks | cs.LG cs.NE | A biological neural network is constituted by numerous subnetworks and
modules with different functionalities. For an artificial neural network, the
relationship between a network and its subnetworks is also important and useful
for both theoretical and algorithmic research, i.e. it can be exploited to
develop an incremental network training algorithm or a parallel network training
algorithm. In this paper we explore the relationship between an ELM neural
network and its subnetworks. To the best of our knowledge, we are the first to
prove a theorem that shows an ELM neural network can be scattered into
subnetworks and its optimal solution can be constructed recursively by the
optimal solutions of these subnetworks. Based on the theorem we also present
two algorithms to train a large ELM neural network efficiently: one is a
parallel network training algorithm and the other is an incremental network
training algorithm. The experimental results demonstrate the usefulness of the
theorem and the validity of the developed algorithms.
| Enmei Tu, Guanghao Zhang, Lily Rachmawati, Eshan Rajabally and
Guang-Bin Huang | null | 1610.09608 | null | null |
Discovering containment: from infants to machines | q-bio.NC cs.CV cs.LG | Current artificial learning systems can recognize thousands of visual
categories, or play Go at a champion's level, but cannot explain infants'
learning, in particular the ability to learn complex concepts without guidance,
in a specific order. A notable example is the category of 'containers' and the
notion of containment, one of the earliest spatial relations to be learned,
starting already at 2.5 months, and preceding other common relations (e.g.,
support). Such spontaneous unsupervised learning stands in contrast with
current highly successful computational models, which learn in a supervised
manner, that is, by using large data sets of labeled examples. How can
meaningful concepts be learned without guidance, and what determines the
trajectory of infant learning, making some notions appear consistently earlier
than others?
| Shimon Ullman, Nimrod Dorfman, Daniel Harari | 10.1016/j.cognition.2018.11.001 | 1610.09625 | null | null |
Compact Deep Convolutional Neural Networks With Coarse Pruning | cs.LG cs.NE | The learning capability of a neural network improves with increasing depth at
higher computational costs. Wider layers with dense kernel connectivity
patterns further increase this cost and may hinder real-time inference. We
propose feature map and kernel level pruning for reducing the computational
complexity of a deep convolutional neural network. Pruning feature maps reduces
the width of a layer and hence does not need any sparse representation.
Further, kernel pruning converts the dense connectivity pattern into a sparse
one. Due to their coarse nature, these pruning granularities can be exploited by
GPU- and VLSI-based implementations. We propose a simple and generic strategy to
choose the least adversarial pruning masks for both granularities. The pruned
networks are retrained which compensates the loss in accuracy. We obtain the
best pruning ratios when we prune a network with both granularities.
Experiments with the CIFAR-10 dataset show that more than 85% sparsity can be
induced in the convolution layers with less than 1% increase in the
misclassification rate of the baseline network.
| Sajid Anwar, Wonyong Sung | null | 1610.09639 | null | null |
Deep Model Compression: Distilling Knowledge from Noisy Teachers | cs.LG | The remarkable successes of deep learning models across various applications
have resulted in the design of deeper networks that can solve complex problems.
However, the increasing depth of such models also results in a higher storage
and runtime complexity, which restricts the deployability of such very deep
models on mobile and portable devices, which have limited storage and battery
capacity. While many methods have been proposed for deep model compression in
recent years, almost all of them have focused on reducing storage complexity.
In this work, we extend the teacher-student framework for deep model
compression, since it has the potential to address runtime and train time
complexity too. We propose a simple methodology to include a noise-based
regularizer while training the student from the teacher, which provides a
healthy improvement in the performance of the student network. Our experiments
on the CIFAR-10, SVHN and MNIST datasets show promising improvement, with the
best performance on the CIFAR-10 dataset. We also conduct a comprehensive
empirical evaluation of the proposed method under related settings on the
CIFAR-10 dataset to show the promise of the proposed approach.
| Bharat Bhusan Sau and Vineeth N. Balasubramanian | null | 1610.0965 | null | null |
Doubly Convolutional Neural Networks | cs.LG | Building large models with parameter sharing accounts for most of the success
of deep convolutional neural networks (CNNs). In this paper, we propose doubly
convolutional neural networks (DCNNs), which significantly improve the
performance of CNNs by further exploring this idea. Instead of allocating a
set of convolutional filters that are independently learned, a DCNN maintains
groups of filters where filters within each group are translated versions of
each other. Practically, a DCNN can be easily implemented by a two-step
convolution procedure, which is supported by most modern deep learning
libraries. We perform extensive experiments on three image classification
benchmarks: CIFAR-10, CIFAR-100 and ImageNet, and show that DCNNs consistently
outperform other competing architectures. We have also verified that replacing
a convolutional layer with a doubly convolutional layer at any depth of a CNN
can improve its performance. Moreover, various design choices of DCNNs are
demonstrated, which shows that DCNN can serve the dual purpose of building more
accurate models and/or reducing the memory footprint without sacrificing the
accuracy.
| Shuangfei Zhai, Yu Cheng, Weining Lu, Zhongfei Zhang | null | 1610.09716 | null | null |
The Multi-fidelity Multi-armed Bandit | cs.LG | We study a variant of the classical stochastic $K$-armed bandit where
observing the outcome of each arm is expensive, but cheap approximations to
this outcome are available. For example, in online advertising the performance
of an ad can be approximated by displaying it for shorter time periods or to
narrower audiences. We formalise this task as a multi-fidelity bandit, where,
at each time step, the forecaster may choose to play an arm at any one of $M$
fidelities. The highest fidelity (desired outcome) expends cost
$\lambda^{(M)}$. The $m^{\text{th}}$ fidelity (an approximation) expends
$\lambda^{(m)} < \lambda^{(M)}$ and returns a biased estimate of the highest
fidelity. We develop MF-UCB, a novel upper confidence bound procedure for this
setting and prove that it naturally adapts to the sequence of available
approximations and costs thus attaining better regret than naive strategies
which ignore the approximations. For instance, in the above online advertising
example, MF-UCB would use the lower fidelities to quickly eliminate suboptimal
ads and reserve the larger expensive experiments on a small set of promising
candidates. We complement this result with a lower bound and show that MF-UCB
is nearly optimal under certain conditions.
| Kirthevasan Kandasamy and Gautam Dasarathy and Jeff Schneider and
Barnab\'as P\'oczos | null | 1610.09726 | null | null |
Active Learning from Imperfect Labelers | cs.LG stat.ML | We study active learning where the labeler can not only return incorrect
labels but also abstain from labeling. We consider different noise and
abstention conditions of the labeler. We propose an algorithm which utilizes
abstention responses, and analyze its statistical consistency and query
complexity under fairly natural assumptions on the noise and abstention rate of
the labeler. This algorithm is adaptive in the sense that it can automatically
request fewer queries with a more informed or less noisy labeler. We couple our
algorithm with lower bounds to show that under some technical conditions, it
achieves nearly optimal query complexity.
| Songbai Yan, Kamalika Chaudhuri and Tara Javidi | null | 1610.0973 | null | null |
Towards Deep Learning in Hindi NER: An approach to tackle the Labelled
Data Scarcity | cs.CL cs.LG | In this paper we describe an end to end Neural Model for Named Entity
Recognition (NER) which is based on a bi-directional RNN-LSTM. Almost all NER
systems for Hindi use language-specific features and handcrafted rules with
gazetteers. Our model is language-independent and uses no domain-specific
features or any handcrafted rules. Our models rely on semantic information in
the form of word vectors which are learnt by an unsupervised learning algorithm
on an unannotated corpus. Our model attained state-of-the-art performance in
both English and Hindi without the use of any morphological analysis or
gazetteers of any sort.
| Vinayak Athavale, Shreenivas Bharadwaj, Monik Pamecha, Ameya Prabhu
and Manish Shrivastava | null | 1610.09756 | null | null |
Meta-Path Guided Embedding for Similarity Search in Large-Scale
Heterogeneous Information Networks | cs.SI cs.LG | Most real-world data can be modeled as heterogeneous information networks
(HINs) consisting of vertices of multiple types and their relationships. Search
for similar vertices of the same type in large HINs, such as bibliographic
networks and business-review networks, is a fundamental problem with broad
applications. Although similarity search in HINs has been studied previously,
most existing approaches neither explore rich semantic information embedded in
the network structures nor take the user's preference as guidance.
In this paper, we re-examine similarity search in HINs and propose a novel
embedding-based framework. It models vertices as low-dimensional vectors to
explore network structure-embedded similarity. To accommodate user preferences
at defining similarity semantics, our proposed framework, ESim, accepts
user-defined meta-paths as guidance to learn vertex vectors in a user-preferred
embedding space. Moreover, an efficient and parallel sampling-based
optimization algorithm has been developed to learn embeddings in large-scale
HINs. Extensive experiments on real-world large-scale HINs demonstrate a
significant improvement on the effectiveness of ESim over several
state-of-the-art algorithms as well as its scalability.
| Jingbo Shang, Meng Qu, Jialu Liu, Lance M. Kaplan, Jiawei Han, Jian
Peng | null | 1610.09769 | null | null |
DPPred: An Effective Prediction Framework with Concise Discriminative
Patterns | cs.LG cs.AI | In the literature, two series of models have been proposed to address
prediction problems including classification and regression. Simple models,
such as generalized linear models, have ordinary performance but strong
interpretability on a set of simple features. The other series, including
tree-based models, organize numerical, categorical and high dimensional
features into a comprehensive structure with rich interpretable information in
the data.
In this paper, we propose a novel Discriminative Pattern-based Prediction
framework (DPPred) to accomplish the prediction tasks by taking their
advantages of both effectiveness and interpretability. Specifically, DPPred
adopts the concise discriminative patterns that are on the prefix paths from
the root to leaf nodes in the tree-based models. DPPred selects a limited
number of the useful discriminative patterns by searching for the most
effective pattern combination to fit generalized linear models. Extensive
experiments show that in many scenarios, DPPred provides competitive accuracy
with the state-of-the-art as well as the valuable interpretability for
developers and experts. In particular, taking a clinical application dataset as
a case study, our DPPred outperforms the baselines by using only 40 concise
discriminative patterns out of a potentially exponentially large set of
patterns.
| Jingbo Shang, Meng Jiang, Wenzhu Tong, Jinfeng Xiao, Jian Peng, Jiawei
Han | null | 1610.09778 | null | null |
Depth-Width Tradeoffs in Approximating Natural Functions with Neural
Networks | cs.LG cs.NE stat.ML | We provide several new depth-based separation results for feed-forward neural
networks, proving that various types of simple and natural functions can be
better approximated using deeper networks than shallower ones, even if the
shallower networks are much larger. This includes indicators of balls and
ellipses; non-linear functions which are radial with respect to the $L_1$ norm;
and smooth non-linear functions. We also show that these gaps can be observed
experimentally: Increasing the depth indeed allows better learning than
increasing width, when training neural networks to learn an indicator of a unit
ball.
| Itay Safran, Ohad Shamir | null | 1610.09887 | null | null |
LightRNN: Memory and Computation-Efficient Recurrent Neural Networks | cs.CL cs.LG | Recurrent neural networks (RNNs) have achieved state-of-the-art performances
in many natural language processing tasks, such as language modeling and
machine translation. However, when the vocabulary is large, the RNN model will
become very big (e.g., possibly beyond the memory capacity of a GPU device) and
its training will become very inefficient. In this work, we propose a novel
technique to tackle this challenge. The key idea is to use 2-Component (2C)
shared embedding for word representations. We allocate every word in the
vocabulary into a table, each row of which is associated with a vector, and
each column associated with another vector. Depending on its position in the
table, a word is jointly represented by two components: a row vector and a
column vector. Since the words in the same row share the row vector and the
words in the same column share the column vector, we only need $2 \sqrt{|V|}$
vectors to represent a vocabulary of $|V|$ unique words, which are far less
than the $|V|$ vectors required by existing approaches. Based on the
2-Component shared embedding, we design a new RNN algorithm and evaluate it
using the language modeling task on several benchmark datasets. The results
show that our algorithm significantly reduces the model size and speeds up the
training process without sacrificing accuracy (it achieves similar, if not
better, perplexity as compared to state-of-the-art language models).
Remarkably, on the One-Billion-Word benchmark Dataset, our algorithm achieves
comparable perplexity to previous language models, whilst reducing the model
size by a factor of 40-100, and speeding up the training process by a factor of
2. We name our proposed algorithm \emph{LightRNN} to reflect its very small
model size and very high training speed.
| Xiang Li and Tao Qin and Jian Yang and Tie-Yan Liu | null | 1610.09893 | null | null |
Inference Compilation and Universal Probabilistic Programming | cs.AI cs.LG stat.ML | We introduce a method for using deep neural networks to amortize the cost of
inference in models from the family induced by universal probabilistic
programming languages, establishing a framework that combines the strengths of
probabilistic programming and deep learning methods. We call what we do
"compilation of inference" because our method transforms a denotational
specification of an inference problem in the form of a probabilistic program
written in a universal programming language into a trained neural network
denoted in a neural network specification language. When at test time this
neural network is fed observational data and executed, it performs approximate
inference in the original model specified by the probabilistic program. Our
training objective and learning procedure are designed to allow the trained
neural network to be used as a proposal distribution in a sequential importance
sampling inference engine. We illustrate our method on mixture models and
Captcha solving and show significant speedups in the efficiency of inference.
| Tuan Anh Le, Atilim Gunes Baydin, Frank Wood | null | 1610.099 | null | null |
Learning Runtime Parameters in Computer Systems with Delayed Experience
Injection | cs.LG | Learning effective configurations in computer systems without hand-crafting
models for every parameter is a long-standing problem. This paper investigates
the use of deep reinforcement learning for runtime parameters of cloud
databases under latency constraints. Cloud services serve up to thousands of
concurrent requests per second and can adjust critical parameters by leveraging
performance metrics. In this work, we use continuous deep reinforcement
learning to learn optimal cache expirations for HTTP caching in content
delivery networks. To this end, we introduce a technique for asynchronous
experience management called delayed experience injection, which facilitates
delayed reward and next-state computation in concurrent environments where
measurements are not immediately available. Evaluation results show that our
approach based on normalized advantage functions and asynchronous CPU-only
training outperforms a statistical estimator.
| Michael Schaarschmidt, Felix Gessert, Valentin Dalibard, Eiko Yoneki | null | 1610.09903 | null | null |
Complex-Valued Kernel Methods for Regression | stat.ML cs.LG | Usually, complex-valued RKHS are presented as an straightforward application
of the real-valued case. In this paper we prove that this procedure yields a
limited solution for regression. We show that another kernel, here denoted as
pseudo kernel, is needed to learn any function in complex-valued fields.
Accordingly, we derive a novel RKHS to include it, the widely RKHS (WRKHS).
When the pseudo-kernel cancels, WRKHS reduces to complex-valued RKHS of
previous approaches. We address the kernel and pseudo-kernel design, paying
attention to the kernel and the pseudo-kernel being complex-valued. In the
experiments included we report remarkable improvements in simple scenarios
where real a imaginary parts have different similitude relations for given
inputs or cases where real and imaginary parts are correlated. In the context
of these novel results we revisit the problem of non-linear channel
equalization, to show that the WRKHS helps to design more efficient solutions.
| Rafael Boloix-Tortosa, Juan Jos\'e Murillo-Fuentes, Irene Santos
Vel\'azquez, and Fernando P\'erez-Cruz | 10.1109/TSP.2017.2726991 | 1610.09915 | null | null |
Support Vector Machines and Generalisation in HEP | physics.data-an cs.LG hep-ex | We review the concept of support vector machines (SVMs) and discuss examples
of their use. One of the benefits of SVM algorithms, compared with neural
networks and decision trees is that they can be less susceptible to
overfitting than those other algorithms are to overtraining. This issue is related
to the generalisation of a multivariate algorithm (MVA); a problem that has
often been overlooked in particle physics. We discuss cross validation and how
this can be used to improve the generalisation of an MVA in the context of High
Energy Physics analyses. The examples presented use the Toolkit for
Multivariate Analysis (TMVA) based on ROOT and describe our improvements to the
SVM functionality and new tools introduced for cross validation within this
framework.
| A. Bethani, A. J. Bevan, J. Hays and T. J. Stevenson | 10.1088/1742-6596/762/1/012052 | 1610.09932 | null | null |
Neural Speech Recognizer: Acoustic-to-Word LSTM Model for Large
Vocabulary Speech Recognition | cs.CL cs.LG cs.NE | We present results that show it is possible to build a competitive, greatly
simplified, large vocabulary continuous speech recognition system with whole
words as acoustic units. We model the output vocabulary of about 100,000 words
directly using deep bi-directional LSTM RNNs with CTC loss. The model is
trained on 125,000 hours of semi-supervised acoustic training data, which
enables us to alleviate the data sparsity problem for word models. We show that
the CTC word models work very well as an end-to-end all-neural speech
recognition model without the use of traditional context-dependent sub-word
phone units that require a pronunciation lexicon, and without any language
model, removing the need to decode. We demonstrate that the CTC word models
perform better than a strong, more complex, state-of-the-art baseline with
sub-word units.
| Hagen Soltau, Hank Liao, Hasim Sak | null | 1610.09975 | null | null |
Optimization for Large-Scale Machine Learning with Distributed Features
and Observations | stat.ML cs.LG | As the size of modern data sets exceeds the disk and memory capacities of a
single computer, machine learning practitioners have resorted to parallel and
distributed computing. Given that optimization is one of the pillars of machine
learning and predictive modeling, distributed optimization methods have
recently garnered ample attention in the literature. Although previous research
has mostly focused on settings where either the observations, or features of
the problem at hand are stored in distributed fashion, the situation where both
are partitioned across the nodes of a computer cluster (doubly distributed) has
barely been studied. In this work we propose two doubly distributed
optimization algorithms. The first one falls under the umbrella of distributed
dual coordinate ascent methods, while the second one belongs to the class of
stochastic gradient/coordinate descent hybrid methods. We conduct numerical
experiments in Spark using real-world and simulated data sets and study the
scaling properties of our methods. Our empirical evaluation of the proposed
algorithms demonstrates that they outperform a block-distributed ADMM method,
which, to the best of our knowledge, is the only other existing doubly
distributed optimization algorithm.
| Alexandros Nathan, Diego Klabjan | null | 1610.1006 | null | null |
Tensor Switching Networks | cs.NE cs.LG stat.ML | We present a novel neural network algorithm, the Tensor Switching (TS)
network, which generalizes the Rectified Linear Unit (ReLU) nonlinearity to
tensor-valued hidden units. The TS network copies its entire input vector to
different locations in an expanded representation, with the location determined
by its hidden unit activity. In this way, even a simple linear readout from the
TS representation can implement a highly expressive deep-network-like function.
The TS network hence avoids the vanishing gradient problem by construction, at
the cost of larger representation size. We develop several methods to train the
TS network, including equivalent kernels for infinitely wide and deep TS
networks, a one-pass linear learning algorithm, and two
backpropagation-inspired representation learning algorithms. Our experimental
results demonstrate that the TS network is indeed more expressive and
consistently learns faster than standard ReLU networks.
| Chuan-Yung Tsai, Andrew Saxe, David Cox | null | 1610.10087 | null | null |
Neural Machine Translation in Linear Time | cs.CL cs.LG | We present a novel neural network for processing sequences. The ByteNet is a
one-dimensional convolutional neural network that is composed of two parts, one
to encode the source sequence and the other to decode the target sequence. The
two network parts are connected by stacking the decoder on top of the encoder
and preserving the temporal resolution of the sequences. To address the
differing lengths of the source and the target, we introduce an efficient
mechanism by which the decoder is dynamically unfolded over the representation
of the encoder. The ByteNet uses dilation in the convolutional layers to
increase its receptive field. The resulting network has two core properties: it
runs in time that is linear in the length of the sequences and it sidesteps the
need for excessive memorization. The ByteNet decoder attains state-of-the-art
performance on character-level language modelling and outperforms the previous
best results obtained with recurrent networks. The ByteNet also achieves
state-of-the-art performance on character-to-character machine translation on
the English-to-German WMT translation task, surpassing comparable neural
translation models that are based on recurrent networks with attentional
pooling and run in quadratic time. We find that the latent alignment structure
contained in the representations reflects the expected alignment between the
tokens.
| Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord,
Alex Graves, Koray Kavukcuoglu | null | 1610.10099 | null | null |
Neural Symbolic Machines: Learning Semantic Parsers on Freebase with
Weak Supervision | cs.CL cs.AI cs.LG | Harnessing the statistical power of neural networks to perform language
understanding and symbolic reasoning is difficult, when it requires executing
efficient discrete operations against a large knowledge-base. In this work, we
introduce a Neural Symbolic Machine, which contains (a) a neural "programmer",
i.e., a sequence-to-sequence model that maps language utterances to programs
and utilizes a key-variable memory to handle compositionality, and (b) a symbolic
"computer", i.e., a Lisp interpreter that performs program execution, and helps
find good programs by pruning the search space. We apply REINFORCE to directly
optimize the task reward of this structured prediction problem. To train with
weak supervision and improve the stability of REINFORCE, we augment it with an
iterative maximum-likelihood training process. NSM outperforms the
state-of-the-art on the WebQuestionsSP dataset when trained from
question-answer pairs only, without requiring any feature engineering or
domain-specific knowledge.
| Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, Ni Lao | null | 1611.0002 | null | null |