title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Enhanced perceptrons using contrastive biclusters | cs.NE cs.LG stat.ML | Perceptrons are neuronal devices capable of fully discriminating linearly
separable classes. Although straightforward to implement and train, their
applicability is usually hindered by non-trivial requirements imposed by
real-world classification problems. Therefore, several approaches, such as
kernel perceptrons, have been conceived to counteract such difficulties. In
this paper, we investigate an enhanced perceptron model based on the notion of
contrastive biclusters. From this perspective, a good discriminative bicluster
comprises a subset of data instances belonging to one class that show high
coherence across a subset of features and high differentiation from nearest
instances of the other class under the same features (referred to as its
contrastive bicluster). Upon each local subspace associated with a pair of
contrastive biclusters, a perceptron is trained, and the model with the highest area
under the receiver operating characteristic curve (AUC) value is selected as
the final classifier. Experiments conducted on a range of data sets, including
those related to a difficult biosignal classification problem, show that the
proposed variant can indeed be very useful, prevailing over standard and kernel
perceptrons in most cases in terms of accuracy and AUC.
| Andr\'e L. V. Coelho and Fabr\'icio O. de Fran\c{c}a | null | 1603.06859 | null | null |
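As an illustrative aside (not the authors' implementation): the selection step described above, training one perceptron per candidate feature subspace and keeping the one with the highest AUC, can be sketched as follows; the feature subsets here are arbitrary placeholders rather than contrastive biclusters, and AUC is measured on the training split for brevity.

```python
import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)        # toy binary labels

# Hypothetical "local subspaces"; in the paper these come from contrastive biclusters.
subspaces = [[0, 3], [1, 2, 4], [5, 6, 7]]

best_auc, best_model, best_cols = -np.inf, None, None
for cols in subspaces:
    clf = Perceptron(max_iter=1000, tol=1e-3).fit(X[:, cols], y)
    scores = clf.decision_function(X[:, cols])        # signed distances as ranking scores
    auc = roc_auc_score(y, scores)
    if auc > best_auc:
        best_auc, best_model, best_cols = auc, clf, cols

print(f"selected subspace {best_cols} with AUC {best_auc:.3f}")
```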
Trading-off variance and complexity in stochastic gradient descent | stat.ML cs.IT cs.LG math.IT math.OC | Stochastic gradient descent is the method of choice for large-scale machine
learning problems, by virtue of its light complexity per iteration. However, it
lags behind its non-stochastic counterparts with respect to the convergence
rate, due to high variance introduced by the stochastic updates. The popular
Stochastic Variance-Reduced Gradient (SVRG) method mitigates this shortcoming,
introducing a new update rule which requires infrequent passes over the entire
input dataset to compute the full-gradient.
In this work, we propose CheapSVRG, a stochastic variance-reduction
optimization scheme. Our algorithm is similar to SVRG but instead of the full
gradient, it uses a surrogate which can be efficiently computed on a small
subset of the input data. It achieves a linear convergence rate ---up to some
error level, depending on the nature of the optimization problem---and features
a trade-off between the computational complexity and the convergence rate.
Empirical evaluation shows that CheapSVRG performs at least competitively
compared to the state of the art.
| Vatsal Shah, Megasthenis Asteris, Anastasios Kyrillidis, Sujay
Sanghavi | null | 1603.06861 | null | null |
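For orientation, here is a minimal sketch of the standard SVRG update that CheapSVRG modifies: the inner loop is cheap stochastic steps, while the snapshot requires the full-gradient pass that CheapSVRG replaces with a surrogate computed on a small data subset. The least-squares objective, step size, and epoch count are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def grad_i(w, i):                      # gradient of 0.5 * (a_i w - b_i)^2
    return (A[i] @ w - b[i]) * A[i]

def full_grad(w):
    return A.T @ (A @ w - b) / n

w, eta = np.zeros(d), 0.01
for epoch in range(20):
    w_snap = w.copy()
    mu = full_grad(w_snap)             # the expensive full pass over the data
    for _ in range(n):                 # inner loop of variance-reduced stochastic steps
        i = rng.integers(n)
        w = w - eta * (grad_i(w, i) - grad_i(w_snap, i) + mu)
    print(epoch, 0.5 * np.mean((A @ w - b) ** 2))
```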
Feeling the Bern: Adaptive Estimators for Bernoulli Probabilities of
Pairwise Comparisons | cs.LG cs.AI cs.IT math.IT stat.ML | We study methods for aggregating pairwise comparison data in order to
estimate outcome probabilities for future comparisons among a collection of n
items. Working within a flexible framework that imposes only a form of strong
stochastic transitivity (SST), we introduce an adaptivity index defined by the
indifference sets of the pairwise comparison probabilities. In addition to
measuring the usual worst-case risk of an estimator, this adaptivity index also
captures the extent to which the estimator adapts to instance-specific
difficulty relative to an oracle estimator. We prove three main results that
involve this adaptivity index and different algorithms. First, we propose a
three-step estimator termed Count-Randomize-Least squares (CRL), and show that
it has an adaptivity index upper bounded by $\sqrt{n}$ up to logarithmic factors.
We then show that, conditional on the hardness of the planted clique problem, no
computationally efficient estimator can achieve an adaptivity index smaller
than $\sqrt{n}$. Second, we show that a regularized least squares estimator can
achieve a poly-logarithmic adaptivity index, thereby demonstrating a
$\sqrt{n}$-gap between optimal and computationally achievable adaptivity.
Finally, we prove that the standard least squares estimator, which is known to
be optimally adaptive in several closely related problems, fails to adapt in
the context of estimating pairwise probabilities.
| Nihar B. Shah, Sivaraman Balakrishnan, Martin J. Wainwright | null | 1603.06881 | null | null |
Recurrent Neural Network Encoder with Attention for Community Question
Answering | cs.CL cs.LG cs.NE | We apply a general recurrent neural network (RNN) encoder framework to
community question answering (cQA) tasks. Our approach does not rely on any
linguistic processing, and can be applied to different languages or domains.
Further improvements are observed when we extend the RNN encoders with a neural
attention mechanism that encourages reasoning over entire sequences. To deal
with practical issues such as data sparsity and imbalanced labels, we apply
various techniques such as transfer learning and multitask learning. Our
experiments on the SemEval-2016 cQA task show a 10% improvement in MAP score
over an information retrieval-based approach, and achieve comparable
performance to a strong handcrafted feature-based method.
| Wei-Ning Hsu, Yu Zhang and James Glass | null | 1603.07044 | null | null |
Predicting Glaucoma Visual Field Loss by Hierarchically Aggregating
Clustering-based Predictors | stat.ML cs.LG | This study addresses the issue of predicting the glaucomatous visual field
loss from patient disease datasets. Our goal is to accurately predict the
progress of the disease in individual patients. As very few measurements are
available for each patient, it is difficult to produce good predictors for
individuals. A recently proposed clustering-based method enhances the power of
prediction using patient data with similar spatiotemporal patterns. Each
patient is categorized into a cluster of patients, and a predictive model is
constructed using all of the data in the class. Predictions are highly
dependent on the quality of clustering, but it is difficult to identify the
best clustering method. Thus, we propose a method for aggregating cluster-based
predictors to obtain better prediction accuracy than from a single
cluster-based prediction. Further, the method shows very high performance by
hierarchically aggregating experts generated from several cluster-based
methods. We use real datasets to demonstrate that our method performs
significantly better than conventional clustering-based and patient-wise
regression methods, because the hierarchical aggregating strategy has a
mechanism whereby good predictors in a small community can thrive.
| Motohide Higaki, Kai Morino, Hiroshi Murata, Ryo Asaoka, and Kenji
Yamanishi | null | 1603.07094 | null | null |
A Decentralized Quasi-Newton Method for Dual Formulations of Consensus
Optimization | math.OC cs.DC cs.LG | This paper considers consensus optimization problems where each node of a
network has access to a different summand of an aggregate cost function. Nodes
try to minimize the aggregate cost function, while they exchange information
only with their neighbors. We modify the dual decomposition method to
incorporate a curvature correction inspired by the
Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method. The resulting dual
D-BFGS method is a fully decentralized algorithm in which nodes approximate
curvature information of themselves and their neighbors through the
satisfaction of a secant condition. Dual D-BFGS is of interest in consensus
optimization problems that are not well conditioned, making first order
decentralized methods ineffective, and in which second order information is not
readily available, making decentralized second order methods infeasible.
Asynchronous implementation is discussed and convergence of D-BFGS is
established formally for both synchronous and asynchronous implementations.
Performance advantages relative to alternative decentralized algorithms are
shown numerically.
| Mark Eisen, Aryan Mokhtari, Alejandro Ribeiro | null | 1603.07195 | null | null |
Global-Local Face Upsampling Network | cs.CV cs.LG | Face hallucination, which is the task of generating a high-resolution face
image from a low-resolution input image, is a well-studied problem that is
useful in widespread application areas. Face hallucination is particularly
challenging when the input face resolution is very low (e.g., 10 x 12 pixels)
and/or the image is captured in an uncontrolled setting with large pose and
illumination variations. In this paper, we revisit the algorithm introduced in
[1] and present a deep interpretation of this framework that achieves
state-of-the-art under such challenging scenarios. In our deep network
architecture the global and local constraints that define a face can be
efficiently modeled and learned end-to-end using training data. Conceptually
our network design can be partitioned into two sub-networks: the first one
implements the holistic face reconstruction according to global constraints,
and the second one enhances face-specific details and enforces local patch
statistics. We optimize the deep network using a new loss function for
super-resolution that combines reconstruction error with a learned face quality
measure in an adversarial setting, producing improved visual results. We conduct
extensive experiments in both controlled and uncontrolled setups and show that
our algorithm improves the state of the art both numerically and visually.
| Oncel Tuzel, Yuichi Taguchi, and John R. Hershey | null | 1603.07235 | null | null |
A Tutorial on Deep Neural Networks for Intelligent Systems | cs.NE cs.LG | Developing Intelligent Systems involves artificial intelligence approaches
including artificial neural networks. Here, we present a tutorial of Deep
Neural Networks (DNNs), and some insights about the origin of the term "deep";
references to deep learning are also given. Restricted Boltzmann Machines,
which are the core of DNNs, are discussed in detail. An example of a simple
two-layer network, performing unsupervised learning for unlabeled data, is
shown. Deep Belief Networks (DBNs), which are used to build networks with more
than two layers, are also described. Moreover, examples for supervised learning
with DNNs performing simple prediction and classification tasks, are presented
and explained. This tutorial includes two intelligent pattern recognition
applications: hand-written digits (the benchmark known as MNIST) and speech
recognition.
| Juan C. Cuevas-Tello and Manuel Valenzuela-Rendon and Juan A.
Nolazco-Flores | null | 1603.07249 | null | null |
A guide to convolution arithmetic for deep learning | stat.ML cs.LG cs.NE | We introduce a guide to help deep learning practitioners understand and
manipulate convolutional neural network architectures. The guide clarifies the
relationship between various properties (input shape, kernel shape, zero
padding, strides and output shape) of convolutional, pooling and transposed
convolutional layers, as well as the relationship between convolutional and
transposed convolutional layers. Relationships are derived for various cases,
and are illustrated in order to make them intuitive.
| Vincent Dumoulin, Francesco Visin | null | 1603.07285 | null | null |
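The core relationships the guide derives can be summarized in a few lines. The sketch below covers the basic no-dilation case and follows standard convolution arithmetic; the transposed-convolution formula assumes no output padding and that the stride evenly divides the padded input minus the kernel size.

```python
def conv_output_size(i, k, s=1, p=0):
    """Output length of a convolution: input i, kernel k, stride s, zero padding p."""
    return (i + 2 * p - k) // s + 1

def transposed_conv_output_size(i, k, s=1, p=0):
    """Output length of the associated transposed convolution (no output padding)."""
    return s * (i - 1) + k - 2 * p

# Example: a 5x5 input with a 3x3 kernel, stride 2, padding 1 yields a 3x3 output,
# and the matching transposed convolution maps 3 back to 5.
print(conv_output_size(5, 3, s=2, p=1))             # 3
print(transposed_conv_output_size(3, 3, s=2, p=1))  # 5
```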
Debugging Machine Learning Tasks | cs.LG cs.AI cs.PL stat.ML | Unlike traditional programs (such as operating systems or word processors)
which have large amounts of code, machine learning tasks use programs with
relatively small amounts of code (written in machine learning libraries), but
voluminous amounts of data. Just like developers of traditional programs debug
errors in their code, developers of machine learning tasks debug and fix errors
in their data. However, algorithms and tools for debugging and fixing errors in
data are less common, when compared to their counterparts for detecting and
fixing errors in code. In this paper, we consider classification tasks where
errors in training data lead to misclassifications in test points, and propose
an automated method to find the root causes of such misclassifications. Our
root cause analysis is based on Pearl's theory of causation, and uses Pearl's
PS (Probability of Sufficiency) as a scoring metric. Our implementation, Psi,
encodes the computation of PS as a probabilistic program, and uses recent work
on probabilistic programs and transformations on probabilistic programs (along
with gray-box models of machine learning algorithms) to efficiently compute PS.
Psi is able to identify root causes of data errors in interesting data sets.
| Aleksandar Chakarov, Aditya Nori, Sriram Rajamani, Shayak Sen, and
Deepak Vijaykeerthy | null | 1603.07292 | null | null |
On the Theory and Practice of Privacy-Preserving Bayesian Data Analysis | cs.LG cs.AI cs.CR stat.ML | Bayesian inference has great promise for the privacy-preserving analysis of
sensitive data, as posterior sampling automatically preserves differential
privacy, an algorithmic notion of data privacy, under certain conditions
(Dimitrakakis et al., 2014; Wang et al., 2015). While this one posterior sample
(OPS) approach elegantly provides privacy "for free," it is data inefficient in
the sense of asymptotic relative efficiency (ARE). We show that a simple
alternative based on the Laplace mechanism, the workhorse of differential
privacy, is as asymptotically efficient as non-private posterior inference,
under general assumptions. This technique also has practical advantages
including efficient use of the privacy budget for MCMC. We demonstrate the
practicality of our approach on a time-series analysis of sensitive military
records from the Afghanistan and Iraq wars disclosed by the Wikileaks
organization.
| James Foulds, Joseph Geumlek, Max Welling, Kamalika Chaudhuri | null | 1603.07294 | null | null |
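As background (a generic sketch, not the authors' specific estimator): the Laplace mechanism referred to above releases a statistic after adding noise whose scale is the statistic's sensitivity divided by the privacy budget epsilon. The bounded-mean example and epsilon value below are illustrative assumptions.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Release `value` with epsilon-differential privacy via Laplace noise."""
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
data = rng.uniform(0.0, 1.0, size=1000)      # records assumed bounded in [0, 1]
true_mean = data.mean()
sensitivity = 1.0 / len(data)                # changing one record moves the mean by at most 1/n
private_mean = laplace_mechanism(true_mean, sensitivity, epsilon=0.5, rng=rng)
print(true_mean, private_mean)
```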
Learning Mixtures of Plackett-Luce Models | cs.LG | In this paper we address the identifiability and efficient learning problems
of finite mixtures of Plackett-Luce models for rank data. We prove that for any
$k\geq 2$, the mixture of $k$ Plackett-Luce models for no more than $2k-1$
alternatives is non-identifiable and this bound is tight for $k=2$. For generic
identifiability, we prove that the mixture of $k$ Plackett-Luce models over $m$
alternatives is generically identifiable if $k\leq\lfloor\frac{m-2}{2}\rfloor!$.
We also propose an efficient generalized method of moments (GMM)
algorithm to learn the mixture of two Plackett-Luce models and show that the
algorithm is consistent. Our experiments show that our GMM algorithm is
significantly faster than the EMM algorithm by Gormley and Murphy (2008), while
achieving competitive statistical efficiency.
| Zhibing Zhao, Peter Piech, Lirong Xia | null | 1603.07323 | null | null |
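For readers unfamiliar with the model, the Plackett-Luce probability of a ranking factorizes as a sequence of choices over the remaining items. The sketch below evaluates that likelihood for illustrative item weights; it is not the GMM estimator from the paper.

```python
import numpy as np

def plackett_luce_prob(ranking, weights):
    """P(ranking) = prod_i w[ranking[i]] / sum_{j >= i} w[ranking[j]]."""
    prob = 1.0
    for i in range(len(ranking)):
        remaining = ranking[i:]
        prob *= weights[ranking[i]] / np.sum(weights[remaining])
    return prob

weights = np.array([0.5, 0.3, 0.2])          # illustrative item strengths
ranking = [1, 0, 2]                          # item 1 first, then item 0, then item 2
print(plackett_luce_prob(ranking, weights))  # 0.3/1.0 * 0.5/0.7 * 0.2/0.2
```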
Acceleration of Deep Neural Network Training with Resistive Cross-Point
Devices | cs.LG cs.NE stat.ML | In recent years, deep neural networks (DNN) have demonstrated significant
business impact in large scale analysis and classification tasks such as speech
recognition, visual object detection, pattern extraction, etc. Training of
large DNNs, however, is universally considered a time-consuming and
computationally intensive task that demands datacenter-scale computational
resources recruited for many days. Here we propose a concept of resistive
processing unit (RPU) devices that can potentially accelerate DNN training by
orders of magnitude while using much less power. The proposed RPU device can
store and update the weight values locally, thus minimizing data movement during
training and allowing the locality and parallelism of the training algorithm to
be fully exploited. We identify the RPU device and system specifications for
implementation of an accelerator chip for DNN training in a realistic
CMOS-compatible technology. For large DNNs with about 1 billion weights this
massively parallel RPU architecture can achieve acceleration factors of 30,000X
compared to state-of-the-art microprocessors while providing power efficiency
of 84,000 GigaOps/s/W. Problems that currently require days of training on a
datacenter-size cluster with thousands of machines can be addressed within
hours on a single RPU accelerator. A system consisting of a cluster of RPU
accelerators will be able to tackle Big Data problems with trillions of
parameters that are impossible to address today, such as natural speech
recognition and translation between all world languages, real-time analytics on
large streams of business and scientific data, and integration and analysis of
multimodal sensory data flows from massive numbers of IoT (Internet of Things)
sensors.
| Tayfun Gokmen, Yurii Vlasov | 10.3389/fnins.2016.00333 | 1603.07341 | null | null |
A Reconfigurable Low Power High Throughput Architecture for Deep Network
Training | cs.LG cs.AR cs.DC | General purpose computing systems are used for a large variety of
applications. Extensive supports for flexibility in these systems limit their
energy efficiencies. Neural networks, including deep networks, are widely used
for signal processing and pattern recognition applications. In this paper we
propose a multicore architecture for deep neural network based processing.
Memristor crossbars are utilized to provide low power high throughput execution
of neural networks. The system has both training and recognition (evaluation of
new input) capabilities. The proposed system could be used for classification,
dimensionality reduction, feature extraction, and anomaly detection
applications. The system-level area and power benefits of the specialized
architecture are compared with those of the NVIDIA Tesla K20 GPGPU. Our experimental
evaluations show that the proposed architecture can provide up to five orders
of magnitude more energy efficiency over GPGPUs for deep neural network
processing.
| Raqibul Hasan, and Tarek Taha | null | 1603.07400 | null | null |
On the Powerball Method for Optimization | cs.SY cs.LG math.OC | We propose a new method to accelerate the convergence of optimization
algorithms. This method simply adds a power coefficient $\gamma\in[0,1)$ to the
gradient during optimization. We call this the Powerball method and analyze the
convergence rate of the Powerball method for strongly convex functions. While
theoretically the Powerball method is guaranteed to have a linear convergence
rate of the same order as the gradient method, we show that empirically it
significantly outperforms the gradient descent and Newton's method, especially
during the initial iterations. We demonstrate that the Powerball method
provides a $10$-fold speedup of the convergence of both gradient descent and
L-BFGS on multiple real datasets.
| Ye Yuan, Mu Li, Jun Liu, Claire J. Tomlin | 10.1109/LCSYS.2019.2913770 | 1603.07421 | null | null |
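The modification described above is essentially a one-line change to gradient descent: apply the power coefficient element-wise to the gradient while keeping its sign. The sketch below uses an illustrative strongly convex quadratic; the step size and gamma are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)

def grad(x):                     # gradient of 0.5 * ||Ax - b||^2
    return A.T @ (A @ x - b)

def powerball_step(x, lr, gamma):
    g = grad(x)
    return x - lr * np.sign(g) * np.abs(g) ** gamma   # gamma = 1 recovers plain gradient descent

x = np.zeros(5)
for _ in range(200):
    x = powerball_step(x, lr=0.01, gamma=0.6)
print(0.5 * np.linalg.norm(A @ x - b) ** 2)
```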
Deep Extreme Feature Extraction: New MVA Method for Searching Particles
in High Energy Physics | cs.LG cs.NE | In this paper, we present Deep Extreme Feature Extraction (DEFE), a new
ensemble MVA method for searching for the $\tau^{+}\tau^{-}$ channel of Higgs bosons in
high energy physics. DEFE can be viewed as a deep ensemble learning scheme that
trains a strongly diverse set of neural feature learners without explicitly
encouraging diversity and penalizing correlations. This is achieved by adopting
an implicit neural controller (not involved in feedforward computation) that
directly controls and distributes gradient flows from the higher-level deep
prediction network. Such a model-independent controller ensures that every
local feature learned is used in the feature-to-output mapping stage,
avoiding blind averaging of features. DEFE makes the ensembles 'deep' in
the sense that it allows deep post-process of these features that tries to
learn to select and abstract the ensemble of neural feature learners. With the
application of this model, selection regions rich in signal events can be
obtained by training on a small set of collision events. Compared
with a classic deep neural network, DEFE shows state-of-the-art
performance: the error rate decreases by about 37\%, the accuracy
exceeds 90\% for the first time, and the discovery significance
reaches 6.0 $\sigma$. Experimental results show that
DEFE is able to train an ensemble of discriminative feature learners that
boosts the performance of the final prediction.
| Chao Ma, Tianchenghou, Bin Lan, Jinhui Xu, Zhenhua Zhang | null | 1603.07454 | null | null |
Source Localization on Graphs via l1 Recovery and Spectral Graph Theory | cs.LG | We cast the problem of source localization on graphs as the simultaneous
problem of sparse recovery and diffusion kernel learning. An l1 regularization
term enforces the sparsity constraint while we recover the sources of diffusion
from a single snapshot of the diffusion process. The diffusion kernel is
estimated by assuming the process to be as generic as the standard heat
diffusion. We show with synthetic data that we can concomitantly learn the
diffusion kernel and the sources, given an estimated initialization. We
validate our model with cholera mortality and atmospheric tracer diffusion
data, showing also that the accuracy of the solution depends on the
construction of the graph from the data points.
| Rodrigo Pena, Xavier Bresson, Pierre Vandergheynst | 10.1109/IVMSPW.2016.7528230 | 1603.07584 | null | null |
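A minimal sketch of the recovery step described above, assuming the heat-diffusion time is known and using an off-the-shelf l1 (Lasso) solver; the random graph, diffusion time, and regularization weight are all illustrative and do not reproduce the paper's joint kernel-learning step.

```python
import numpy as np
from scipy.linalg import expm
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 30
# Random sparse adjacency matrix of an undirected graph.
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A             # combinatorial graph Laplacian

tau = 2.0                                   # assumed (here: known) diffusion time
H = expm(-tau * L)                          # heat-diffusion kernel

x_true = np.zeros(n)
x_true[[3, 17]] = 1.0                       # two hidden sources
y = H @ x_true + 0.01 * rng.normal(size=n)  # single noisy snapshot of the diffusion

lasso = Lasso(alpha=1e-3, positive=True, max_iter=10000).fit(H, y)
print("recovered support:", np.where(lasso.coef_ > 0.1)[0])
```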
Recursive Neural Language Architecture for Tag Prediction | cs.IR cs.CL cs.LG cs.NE | We consider the problem of learning distributed representations for tags from
their associated content for the task of tag recommendation. Since tagging
information is usually very sparse, effective learning from content and tag
associations is a crucial and challenging task. Recently, various neural
representation learning models such as WSABIE and its variants show promising
performance, mainly due to compact feature representations learned in a
semantic space. However, their capacity is limited by a linear compositional
approach that represents tags as a sum of equal parts, which hurts their
performance. In this work, we propose a neural feedback relevance model for
learning tag representations with weighted feature representations. Our
experiments on two widely used datasets show significant improvement for
quality of recommendations over various baselines.
| Saurabh Kataria | null | 1603.07646 | null | null |
Probabilistic Reasoning via Deep Learning: Neural Association Models | cs.AI cs.LG cs.NE | In this paper, we propose a new deep learning approach, called neural
association model (NAM), for probabilistic reasoning in artificial
intelligence. We propose to use neural networks to model association between
any two events in a domain. Neural networks take one event as input and compute
a conditional probability of the other event to model how likely these two
events are to be associated. The actual meaning of the conditional
probabilities varies between applications and depends on how the models are
trained. In this work, as two case studies, we have investigated two NAM
structures, namely deep neural networks (DNN) and relation-modulated neural
nets (RMNN), on several probabilistic reasoning tasks in AI, including
recognizing textual entailment, triple classification in multi-relational
knowledge bases and commonsense reasoning. Experimental results on several
popular datasets derived from WordNet, FreeBase and ConceptNet have all
demonstrated that both DNNs and RMNNs perform equally well and they can
significantly outperform the conventional methods available for these reasoning
tasks. Moreover, compared with DNNs, RMNNs are superior in knowledge transfer,
where a pre-trained model can be quickly extended to an unseen relation after
observing only a few training samples. To further prove the effectiveness of
the proposed models, in this work, we have applied NAMs to solving challenging
Winograd Schema (WS) problems. Experiments conducted on a set of WS problems
prove that the proposed models have the potential for commonsense reasoning.
| Quan Liu, Hui Jiang, Andrew Evdokimov, Zhen-Hua Ling, Xiaodan Zhu, Si
Wei, Yu Hu | null | 1603.07704 | null | null |
Co-occurrence Feature Learning for Skeleton based Action Recognition
using Regularized Deep LSTM Networks | cs.CV cs.LG | Skeleton based action recognition distinguishes human actions using the
trajectories of skeleton joints, which provide a very good representation for
describing actions. Considering that recurrent neural networks (RNNs) with Long
Short-Term Memory (LSTM) can learn feature representations and model long-term
temporal dependencies automatically, we propose an end-to-end fully connected
deep LSTM network for skeleton based action recognition. Inspired by the
observation that the co-occurrences of the joints intrinsically characterize
human actions, we take the skeleton as the input at each time slot and
introduce a novel regularization scheme to learn the co-occurrence features of
skeleton joints. To train the deep LSTM network effectively, we propose a new
dropout algorithm which simultaneously operates on the gates, cells, and output
responses of the LSTM neurons. Experimental results on three human action
recognition datasets consistently demonstrate the effectiveness of the proposed
model.
| Wentao Zhu, Cuiling Lan, Junliang Xing, Wenjun Zeng, Yanghao Li, Li
Shen, Xiaohui Xie | null | 1603.07772 | null | null |
Conditional Similarity Networks | cs.CV cs.AI cs.LG | What makes images similar? To measure the similarity between images, they are
typically embedded in a feature-vector space, in which their distances preserve
the relative dissimilarity. However, when learning such similarity embeddings,
the simplifying assumption is commonly made that images are only compared to
one unique measure of similarity. A main reason for this is that contradicting
notions of similarities cannot be captured in a single space. To address this
shortcoming, we propose Conditional Similarity Networks (CSNs) that learn
embeddings differentiated into semantically distinct subspaces that capture the
different notions of similarities. CSNs jointly learn a disentangled embedding
where features for different similarities are encoded in separate dimensions as
well as masks that select and reweight relevant dimensions to induce a subspace
that encodes a specific similarity notion. We show that our approach learns
interpretable image representations with visually relevant semantic subspaces.
Further, when evaluating on triplet questions from multiple similarity notions
our model even outperforms the accuracy obtained by training individual
specialized networks for each notion separately.
| Andreas Veit, Serge Belongie, Theofanis Karaletsos | null | 1603.07810 | null | null |
Privacy-Preserved Big Data Analysis Based on Asymmetric Imputation
Kernels and Multiside Similarities | cs.LG cs.CR | This study presents an efficient approach for incomplete data classification,
where the entries of samples are missing or masked due to privacy preservation.
To deal with these incomplete data, a new kernel function with asymmetric
intrinsic mappings is proposed in this study. The new kernel uses three-sided
similarities for kernel matrix formation. The similarity between a testing
instance and a training sample relies not only on their distance but also on
the relation between the testing sample and the centroid of the class to which
the training sample belongs. This reduces biased estimation compared with
typical methods when only one training sample is used for kernel matrix
formation. Furthermore, centroid generation does not involve any clustering
algorithms. The proposed kernel is capable of performing data imputation by
using class-dependent averages. This enhances Fisher Discriminant Ratios and
data discriminability. Experiments on two open databases were carried out for
evaluating the proposed method. The result indicated that the accuracy of the
proposed method was higher than that of the baseline. These findings thereby
demonstrated the effectiveness of the proposed idea.
| Bo-Wei Chen | null | 1603.07828 | null | null |
An end-to-end convolutional selective autoencoder approach to Soybean
Cyst Nematode eggs detection | cs.CV cs.LG stat.ML | This paper proposes a novel selective autoencoder approach within the
framework of deep convolutional networks. The crux of the idea is to train a
deep convolutional autoencoder to suppress undesired parts of an image frame
while allowing the desired parts resulting in efficient object detection. The
efficacy of the framework is demonstrated on a critical plant science problem.
In the United States, approximately $1 billion is lost per annum due to a
nematode infection on soybean plants. Currently, plant-pathologists rely on
labor-intensive and time-consuming identification of Soybean Cyst Nematode
(SCN) eggs in soil samples via manual microscopy. The proposed framework
attempts to significantly expedite the process by using a series of manually
labeled microscopic images for training followed by automated high-throughput
egg detection. The problem is particularly difficult due to the presence of a
large population of non-egg particles (disturbances) in the image frames that
are very similar to SCN eggs in shape, pose and illumination. Therefore, the
selective autoencoder is trained to learn unique features related to the
invariant shapes and sizes of the SCN eggs without handcrafting. After that, a
composite non-maximum suppression and differencing is applied at the
post-processing stage.
| Adedotun Akintayo, Nigel Lee, Vikas Chawla, Mark Mullaney, Christopher
Marett, Asheesh Singh, Arti Singh, Greg Tylka, Baskar Ganapathysubramaniam,
Soumik Sarkar | null | 1603.07834 | null | null |
Early Detection of Combustion Instabilities using Deep Convolutional
Selective Autoencoders on Hi-speed Flame Video | cs.CV cs.LG cs.NE | This paper proposes an end-to-end convolutional selective autoencoder
approach for early detection of combustion instabilities using rapidly arriving
flame image frames. The instabilities arising in combustion processes cause
significant deterioration and safety issues in various human-engineered systems
such as land and air based gas turbine engines. These instabilities are
characterized as self-sustaining, large-amplitude pressure oscillations that
exhibit periodic coherent vortex structure shedding at varying spatial scales. However, such
instability is extremely difficult to detect before a combustion process
becomes completely unstable due to its sudden (bifurcation-type) nature. In
this context, an autoencoder is trained to selectively mask stable flame and
allow unstable flame image frames. In that process, the model learns to
identify and extract rich descriptive and explanatory flame shape features.
With such a training scheme, the selective autoencoder is shown to be able to
detect subtle instability features as a combustion process makes transition
from stable to unstable region. As a consequence, the deep learning tool-chain
can perform as an early detection framework for combustion instabilities that
will have a transformative impact on the safety and performance of modern
engines.
| Adedotun Akintayo, Kin Gwn Lore, Soumalya Sarkar, Soumik Sarkar | 10.1145/1235 | 1603.07839 | null | null |
Deep Learning At Scale and At Ease | cs.LG cs.DC | Recently, deep learning techniques have enjoyed success in various multimedia
applications, such as image classification and multi-modal data analysis. Large
deep learning models are developed for learning rich representations of complex
data. There are two challenges to overcome before deep learning can be widely
adopted in multimedia and other applications. One is usability, namely that the
implementation of different models and training algorithms must be feasible for
non-experts without much effort, especially when the model is large and complex.
The other is scalability, that is, the deep learning system must be able to
provision for a huge demand of computing resources for training large models
with massive datasets. To address these two challenges, in this paper, we
design a distributed deep learning platform called SINGA which has an intuitive
programming model based on the common layer abstraction of deep learning
models. Good scalability is achieved through flexible distributed training
architecture and specific optimization techniques. SINGA runs on GPUs as well
as on CPUs, and we show that it outperforms many other state-of-the-art deep
learning systems. Our experience with developing and training deep learning
models for real-life multimedia applications in SINGA shows that the platform
is both usable and scalable.
| Wei Wang, Gang Chen, Haibo Chen, Tien Tuan Anh Dinh, Jinyang Gao, Beng
Chin Ooi, Kian-Lee Tan and Sheng Wang | null | 1603.07846 | null | null |
A multinomial probabilistic model for movie genre predictions | cs.IR cs.LG | This paper proposes a movie genre prediction approach based on a multinomial
probability model. To the best of our knowledge, this problem has not been addressed yet in
the field of recommender system. The prediction of a movie genre has many
practical applications including complementing the items categories given by
experts and providing a surprise effect in the recommendations given to a user.
We employ a multinomial event model to estimate the likelihood of a movie given a
genre and the Bayes rule to evaluate the posterior probability of a genre given
a movie. Experiments with the MovieLens dataset validate our approach. We
achieved a 70% prediction rate using only 15% of the whole set for training.
| Eric Makita, Artem Lenskiy | null | 1603.07849 | null | null |
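A toy sketch of the multinomial event model and Bayes rule described above, using made-up word-count features and two placeholder genres; a real system would use movie metadata from MovieLens rather than these invented counts.

```python
import numpy as np

# Rows: movies as bag-of-words counts over a tiny vocabulary; labels: genre index.
X = np.array([[3, 0, 1], [2, 1, 0], [0, 4, 2], [1, 3, 3]])
y = np.array([0, 0, 1, 1])                  # 0 = "action", 1 = "drama" (illustrative)
n_genres, n_words = 2, X.shape[1]

log_prior = np.log(np.bincount(y) / len(y))
log_lik = np.zeros((n_genres, n_words))
for g in range(n_genres):
    counts = X[y == g].sum(axis=0) + 1      # Laplace smoothing
    log_lik[g] = np.log(counts / counts.sum())

def predict_genre(x):
    # Posterior over genres via Bayes rule: prior times multinomial likelihood of the counts.
    log_post = log_prior + x @ log_lik.T
    return np.argmax(log_post)

print(predict_genre(np.array([2, 0, 1])))   # predicts genre 0
print(predict_genre(np.array([0, 3, 2])))   # predicts genre 1
```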
The Asymptotic Performance of Linear Echo State Neural Networks | cs.LG cs.NE math.PR | In this article, a study of the mean-square error (MSE) performance of linear
echo-state neural networks is performed, both for training and testing tasks.
Considering the realistic setting of noise present at the network nodes, we
derive deterministic equivalents for the aforementioned MSE in the limit where
the number of input data $T$ and network size $n$ both grow large. Specializing
then the network connectivity matrix to specific random settings, we further
obtain simple formulas that provide new insights on the performance of such
networks.
| Romain Couillet, Gilles Wainrib, Harry Sevi, Hafiz Tiomoko Ali | null | 1603.07866 | null | null |
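A minimal numerical sketch of the setting analyzed above: a linear reservoir driven by a scalar input, noise at the network nodes, and a least-squares readout whose training MSE is measured. The network size, spectral scaling, and memory-recall task are illustrative choices, not the paper's exact experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 50, 500                               # network size and number of time steps
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # scale to spectral radius 0.9 for stability
w_in = rng.normal(size=n)

u = rng.normal(size=T)                       # scalar input sequence
target = np.roll(u, 3)                       # illustrative task: recall the input 3 steps back

states = np.zeros((T, n))
x = np.zeros(n)
for t in range(T):
    x = W @ x + w_in * u[t] + 0.01 * rng.normal(size=n)   # noise at the network nodes
    states[t] = x

w_out, *_ = np.linalg.lstsq(states, target, rcond=None)   # linear readout (training)
mse = np.mean((states @ w_out - target) ** 2)
print("training MSE:", mse)
```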
Hybridization of Expectation-Maximization and K-Means Algorithms for
Better Clustering Performance | cs.LG stat.ML | The present work proposes hybridization of Expectation-Maximization (EM) and
K-Means techniques as an attempt to speed-up the clustering process. Though
K-Means and EM techniques address different problems, K-Means can be viewed
as an approximate way to obtain maximum likelihood estimates for the means.
Along with the proposed algorithm for hybridization, the present work also
experiments with the Standard EM algorithm. Six different datasets are used for
the experiments of which three are synthetic datasets. Clustering fitness and
Sum of Squared Errors (SSE) are computed for measuring the clustering
performance. In all the experiments it is observed that the proposed algorithm
for hybridization of EM and K-Means techniques is consistently taking less
execution time with acceptable Clustering Fitness value and less SSE than the
standard EM algorithm. It is also observed that the proposed algorithm is
producing better clustering results than the Cluster package of Purdue
University.
| D. Raja Kishor, N. B. Venkateswarlu | null | 1603.07879 | null | null |
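The hybridization proposed in the paper is specific to its algorithm, but the general idea of seeding EM for a Gaussian mixture with K-Means estimates can be sketched with off-the-shelf tools as below; the synthetic data and cluster count are arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2)) for c in (-3, 0, 3)])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# EM for the Gaussian mixture, warm-started from the K-Means centroids.
gmm = GaussianMixture(n_components=3, means_init=km.cluster_centers_,
                      random_state=0).fit(X)
print("EM iterations to converge:", gmm.n_iter_)
print("log-likelihood per sample:", gmm.score(X))
```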
A Novel Biologically Mechanism-Based Visual Cognition Model--Automatic
Extraction of Semantics, Formation of Integrated Concepts and Re-selection
Features for Ambiguity | cs.CV cs.AI cs.LG | Integration between biology and information science benefits both fields.
Many related models have been proposed, such as computational visual cognition
models, computational motor control models, integrations of both and so on. In
general, the robustness and precision of recognition is one of the key problems
for object recognition models.
In this paper, inspired by features of human recognition process and their
biological mechanisms, a new integrated and dynamic framework is proposed to
mimic the semantic extraction, concept formation and feature re-selection in
human visual processing. The main contributions of the proposed model are as
follows:
(1) Semantic feature extraction: Local semantic features are learnt from
episodic features that are extracted from raw images through a deep neural
network;
(2) Integrated concept formation: Concepts are formed with local semantic
information and structural information learnt through network.
(3) Feature re-selection: When ambiguity is detected during recognition
process, distinctive features according to the difference between ambiguous
candidates are re-selected for recognition.
Experimental results on hand-written digits and facial shape dataset show
that, compared with other methods, the new proposed model exhibits higher
robustness and precision for visual recognition, especially when input
samples are semantically ambiguous. Meanwhile, the introduced biological
mechanisms further strengthen the interaction between neuroscience and
information science.
| Peijie Yin, Hong Qiao, Wei Wu, Lu Qi, YinLin Li, Shanlin Zhong, Bo
Zhang | null | 1603.07886 | null | null |
Investigation Into The Effectiveness Of Long Short Term Memory Networks
For Stock Price Prediction | cs.NE cs.LG | The effectiveness of long short term memory networks trained by
backpropagation through time for stock price prediction is explored in this
paper. A range of LSTM networks with different architectures are constructed,
trained, and tested.
| Hengjian Jia | null | 1603.07893 | null | null |
Developing Quantum Annealer Driven Data Discovery | quant-ph cs.LG | Machine learning applications are limited by computational power. In this
paper, we gain novel insights into the application of quantum annealing (QA) to
machine learning (ML) through experiments in natural language processing (NLP),
seizure prediction, and linear separability testing. These experiments are
performed on QA simulators and early-stage commercial QA hardware and compared
to an unprecedented number of traditional ML techniques. We extend QBoost, an
early implementation of a binary classifier that utilizes a quantum annealer,
via resampling and ensembling of predicted probabilities to produce a more
robust class estimator. To determine the strengths and weaknesses of this
approach, resampled QBoost (RQBoost) is tested across several datasets and
compared to QBoost and traditional ML. We show and explain how QBoost in
combination with a commercial QA device are unable to perfectly separate binary
class data which is linearly separable via logistic regression with shrinkage.
We further explore the performance of RQBoost in the space of NLP and seizure
prediction and find QA-enabled ML using QBoost and RQBoost is outperformed by
traditional techniques. Additionally, we provide a detailed discussion of
algorithmic constraints and trade-offs imposed by the use of this QA hardware.
Through these experiments, we provide unique insights into the state of quantum
ML via boosting and the use of quantum annealing hardware that are valuable to
institutions interested in applying QA to problems in ML and beyond.
| Joseph Dulny III and Michael Kim | null | 1603.07980 | null | null |
How NOT To Evaluate Your Dialogue System: An Empirical Study of
Unsupervised Evaluation Metrics for Dialogue Response Generation | cs.CL cs.AI cs.LG cs.NE | We investigate evaluation metrics for dialogue response generation systems
where supervised labels, such as task completion, are not available. Recent
works in response generation have adopted metrics from machine translation to
compare a model's generated response to a single target response. We show that
these metrics correlate very weakly with human judgements in the non-technical
Twitter domain, and not at all in the technical Ubuntu domain. We provide
quantitative and qualitative results highlighting specific weaknesses in
existing metrics, and provide recommendations for future development of better
automatic evaluation metrics for dialogue systems.
| Chia-Wei Liu, Ryan Lowe, Iulian V. Serban, Michael Noseworthy, Laurent
Charlin, Joelle Pineau | null | 1603.08023 | null | null |
On the Simultaneous Preservation of Privacy and Community Structure in
Anonymized Networks | cs.LG cs.CR cs.SI | We consider the problem of performing community detection on a network, while
maintaining privacy, assuming that the adversary has access to an auxiliary
correlated network. We ask the question "Does there exist a regime where the
network cannot be deanonymized perfectly, yet the community structure could be
learned?." To answer this question, we derive information theoretic converses
for the perfect deanonymization problem using the Stochastic Block Model and
edge sub-sampling. We also provide an almost tight achievability result for
perfect deanonymization.
We also evaluate the performance of percolation based deanonymization
algorithm on Stochastic Block Model data-sets that satisfy the conditions of
our converse. Although our converse applies to exact deanonymization, the
algorithm fails drastically when the conditions of the converse are met.
Additionally, we study the effect of edge sub-sampling on the community
structure of a real world dataset. Results show that the dataset falls under
the purview of the idea of this paper. These results suggest that it may be
possible to prove stronger partial deanonymizability converses, which would
enable better privacy guarantees.
| Daniel Cullina, Kushagra Singhal, Negar Kiyavash, Prateek Mittal | null | 1603.08028 | null | null |
Resnet in Resnet: Generalizing Residual Architectures | cs.LG cs.CV cs.NE stat.ML | Residual networks (ResNets) have recently achieved state-of-the-art on
challenging computer vision tasks. We introduce Resnet in Resnet (RiR): a deep
dual-stream architecture that generalizes ResNets and standard CNNs and is
easily implemented with no computational overhead. RiR consistently improves
performance over ResNets, outperforms architectures with similar amounts of
augmentation on CIFAR-10, and establishes a new state-of-the-art on CIFAR-100.
| Sasha Targ, Diogo Almeida, Kevin Lyman | null | 1603.08029 | null | null |
On kernel methods for covariates that are rankings | stat.ML cs.DM cs.LG | Permutation-valued features arise in a variety of applications, either in a
direct way when preferences are elicited over a collection of items, or an
indirect way in which numerical ratings are converted to a ranking. To date,
there has been relatively limited study of regression, classification, and
testing problems based on permutation-valued features, as opposed to
permutation-valued responses. This paper studies the use of reproducing kernel
Hilbert space methods for learning from permutation-valued features. These
methods embed the rankings into an implicitly defined function space, and allow
for efficient estimation of regression and test functions in this richer space.
Our first contribution is to characterize both the feature spaces and spectral
properties associated with two kernels for rankings, the Kendall and Mallows
kernels. Using tools from representation theory, we explain the limited
expressive power of the Kendall kernel by characterizing its degenerate
spectrum, and in sharp contrast, we prove that Mallows' kernel is universal and
characteristic. We also introduce families of polynomial kernels that
interpolate between the Kendall (degree one) and Mallows' (infinite degree)
kernels. We show the practical effectiveness of our methods via applications to
Eurobarometer survey data as well as a Movielens ratings dataset.
| Horia Mania, Aaditya Ramdas, Martin J. Wainwright, Michael I. Jordan,
Benjamin Recht | null | 1603.08035 | null | null |
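As a concrete reference point (not the paper's code): the Kendall kernel between two rankings is the Kendall tau correlation of the item scores, so a Gram matrix over permutation-valued features can be assembled as follows.

```python
import numpy as np
from scipy.stats import kendalltau

def kendall_kernel_matrix(rankings):
    """Gram matrix of the Kendall kernel over a list of rankings (score vectors)."""
    m = len(rankings)
    K = np.zeros((m, m))
    for i in range(m):
        for j in range(i, m):
            tau, _ = kendalltau(rankings[i], rankings[j])
            K[i, j] = K[j, i] = tau
    return K

rankings = [np.array([1, 2, 3, 4]),     # each vector ranks the same 4 items
            np.array([1, 3, 2, 4]),
            np.array([4, 3, 2, 1])]
print(kendall_kernel_matrix(rankings))
```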
On the Detection of Mixture Distributions with applications to the Most
Biased Coin Problem | cs.LG | This paper studies the trade-off between two different kinds of pure
exploration: breadth versus depth. The most biased coin problem asks how many
total coin flips are required to identify a "heavy" coin from an infinite bag
containing both "heavy" coins with mean $\theta_1 \in (0,1)$, and "light" coins
with mean $\theta_0 \in (0,\theta_1)$, where heavy coins are drawn from the bag
with probability $\alpha \in (0,1/2)$. The key difficulty of this problem lies
in distinguishing whether the two kinds of coins have very similar means, or
whether heavy coins are just extremely rare. This problem has applications in
crowdsourcing, anomaly detection, and radio spectrum search. Chandrasekaran et
al. (2014) recently introduced a solution to this problem, but it required
perfect knowledge of $\theta_0,\theta_1,\alpha$. In contrast, we derive
algorithms that are adaptive to partial or absent knowledge of the problem
parameters. Moreover, our techniques generalize beyond coins to more general
instances of infinitely many armed bandit problems. We also prove lower bounds
that show our algorithm's upper bounds are tight up to $\log$ factors, and on
the way characterize the sample complexity of differentiating between a single
parametric distribution and a mixture of two such distributions. As a result,
these bounds have surprising implications both for solutions to the most biased
coin problem and for anomaly detection when only partial information about the
parameters is known.
| Kevin Jamieson and Daniel Haas and Ben Recht | null | 1603.08037 | null | null |
On the Compression of Recurrent Neural Networks with an Application to
LVCSR acoustic modeling for Embedded Speech Recognition | cs.CL cs.LG cs.NE | We study the problem of compressing recurrent neural networks (RNNs). In
particular, we focus on the compression of RNN acoustic models, which are
motivated by the goal of building compact and accurate speech recognition
systems which can be run efficiently on mobile devices. In this work, we
present a technique for general recurrent model compression that jointly
compresses both recurrent and non-recurrent inter-layer weight matrices. We
find that the proposed technique allows us to reduce the size of our Long
Short-Term Memory (LSTM) acoustic model to a third of its original size with
negligible loss in accuracy.
| Rohit Prabhavalkar, Ouais Alsharif, Antoine Bruguier, Ian McGraw | null | 1603.08042 | null | null |
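The joint compression scheme in the paper is more involved, but its basic building block, replacing a weight matrix with a truncated low-rank factorization, can be sketched as follows; the matrix size and target rank are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(512, 512))          # an illustrative recurrent weight matrix

def low_rank_factors(W, rank):
    """Return (A, B) with W approximately equal to A @ B, via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]           # fold singular values into the left factor
    B = Vt[:rank, :]
    return A, B

A, B = low_rank_factors(W, rank=64)
params_before = W.size
params_after = A.size + B.size
print("compression ratio:", params_before / params_after)
print("relative error:", np.linalg.norm(W - A @ B) / np.linalg.norm(W))
```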
Pointing the Unknown Words | cs.CL cs.LG cs.NE | The problem of rare and unknown words is an important issue that can
potentially influence the performance of many NLP systems, including both the
traditional count-based and the deep learning models. We propose a novel way to
deal with the rare and unseen words for the neural network models using
attention. Our model uses two softmax layers in order to predict the next word
in conditional language models: one predicts the location of a word in the
source sentence, and the other predicts a word in the shortlist vocabulary. At
each time-step, the decision of which softmax layer to use is made adaptively
by an MLP which is conditioned on the context. We motivate our work with
psychological evidence that humans naturally have a tendency to point towards
objects in the context or the environment when the name of an object is not
known. We observe improvements on two tasks, neural machine translation on the
Europarl English to French parallel corpora and text summarization on the
Gigaword dataset using our proposed model.
| Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou and Yoshua
Bengio | null | 1603.08148 | null | null |
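Numerically, the two-softmax decision described above amounts to gating between a shortlist distribution and a source-location distribution. The toy sketch below shows only that mixing step, with made-up logits and a fixed gate value standing in for the context-conditioned MLP.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

shortlist_logits = np.array([2.0, 0.5, -1.0, 0.1])   # scores over a small shortlist vocabulary
location_logits = np.array([0.3, 1.7, -0.2])         # scores over source-sentence positions

p_shortlist = softmax(shortlist_logits)
p_location = softmax(location_logits)

# In the model the gate g is produced by an MLP conditioned on the decoder context;
# here it is a fixed placeholder value.
g = 0.8
p_word = np.concatenate([g * p_shortlist, (1.0 - g) * p_location])
print(p_word, p_word.sum())                           # a valid distribution over both options
```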
Perceptual Losses for Real-Time Style Transfer and Super-Resolution | cs.CV cs.LG | We consider image transformation problems, where an input image is
transformed into an output image. Recent methods for such problems typically
train feed-forward convolutional neural networks using a \emph{per-pixel} loss
between the output and ground-truth images. Parallel work has shown that
high-quality images can be generated by defining and optimizing
\emph{perceptual} loss functions based on high-level features extracted from
pretrained networks. We combine the benefits of both approaches, and propose
the use of perceptual loss functions for training feed-forward networks for
image transformation tasks. We show results on image style transfer, where a
feed-forward network is trained to solve the optimization problem proposed by
Gatys et al. in real time. Compared to the optimization-based method, our
network gives similar qualitative results but is three orders of magnitude
faster. We also experiment with single-image super-resolution, where replacing
a per-pixel loss with a perceptual loss gives visually pleasing results.
| Justin Johnson, Alexandre Alahi, Li Fei-Fei | null | 1603.08155 | null | null |
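A rough sketch of a feature-space ("perceptual") loss of the kind described above, using a frozen pretrained VGG-16 as the feature extractor. The layer cutoff and the torchvision weight-loading call are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class PerceptualLoss(nn.Module):
    """MSE between features of a frozen pretrained network (here VGG-16 up to relu3_3)."""
    def __init__(self, cutoff=16):
        super().__init__()
        vgg = models.vgg16(pretrained=True).features[:cutoff]
        for p in vgg.parameters():
            p.requires_grad = False
        self.features = vgg.eval()
        self.mse = nn.MSELoss()

    def forward(self, prediction, target):
        return self.mse(self.features(prediction), self.features(target))

# Usage on dummy image batches (3-channel, 256x256):
loss_fn = PerceptualLoss()
pred = torch.rand(2, 3, 256, 256, requires_grad=True)
target = torch.rand(2, 3, 256, 256)
loss = loss_fn(pred, target)
loss.backward()
print(loss.item())
```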
Human Pose Estimation using Deep Consensus Voting | cs.CV cs.LG | In this paper we consider the problem of human pose estimation from a single
still image. We propose a novel approach where each location in the image votes
for the position of each keypoint using a convolutional neural net. The voting
scheme allows us to utilize information from the whole image, rather than rely
on a sparse set of keypoint locations. Using dense, multi-target votes not
only produces good keypoint predictions but also enables us to compute
image-dependent joint keypoint probabilities by looking at consensus voting.
This differs from most previous methods where joint probabilities are learned
from relative keypoint locations and are independent of the image. We finally
combine the keypoints votes and joint probabilities in order to identify the
optimal pose configuration. We show our competitive performance on the MPII
Human Pose and Leeds Sports Pose datasets.
| Ita Lifshitz, Ethan Fetaya and Shimon Ullman | null | 1603.08212 | null | null |
Evolution of active categorical image classification via saccadic eye
movement | cs.CV cs.LG cs.NE | Pattern recognition and classification is a central concern for modern
information processing systems. In particular, one key challenge to image and
video classification has been that the computational cost of image processing
scales linearly with the number of pixels in the image or video. Here we
present an intelligent machine (the "active categorical classifier," or ACC)
that is inspired by the saccadic movements of the eye, and is capable of
classifying images by selectively scanning only a portion of the image. We
harness evolutionary computation to optimize the ACC on the MNIST hand-written
digit classification task, and provide a proof-of-concept that the ACC works on
noisy multi-class data. We further analyze the ACC and demonstrate its ability
to classify images after viewing only a fraction of the pixels, and provide
insight on future research paths to further improve upon the ACC presented
here.
| Randal S. Olson, Jason H. Moore, Christoph Adami | null | 1603.08233 | null | null |
Negative Learning Rates and P-Learning | cs.AI cs.LG | We present a method of training a differentiable function approximator for a
regression task using negative examples. We effect this training using negative
learning rates. We also show how this method can be used to perform direct
policy learning in a reinforcement learning setting.
| Devon Merrill | null | 1603.08253 | null | null |
Towards Machine Intelligence | cs.AI cs.LG cs.NE | There exists a theory of a single general-purpose learning algorithm which
could explain the principles of its operation. This theory assumes that the
brain has some initial rough architecture, a small library of simple innate
circuits which are prewired at birth and proposes that all significant mental
algorithms can be learned. Given current understanding and observations, this
paper reviews and lists the ingredients of such an algorithm from both
architectural and functional perspectives.
| Kamil Rocki | null | 1603.08262 | null | null |
Non-Greedy L21-Norm Maximization for Principal Component Analysis | cs.LG | Principal Component Analysis (PCA) is one of the most important unsupervised
methods to handle high-dimensional data. However, due to the high computational
complexity of its eigendecomposition solution, it is hard to apply PCA to
large-scale data with high dimensionality. Meanwhile, the squared L2-norm based
objective makes it sensitive to data outliers. In recent research, the L1-norm
maximization based PCA method was proposed for efficient computation and being
robust to outliers. However, this work used a greedy strategy to solve for the
eigenvectors. Moreover, the L1-norm maximization based objective may not be
the correct robust PCA formulation, because it loses the theoretical connection
to the minimization of data reconstruction error, which is one of the most
important intuitions and goals of PCA. In this paper, we propose to maximize
the L21-norm based robust PCA objective, which is theoretically connected to
the minimization of reconstruction error. More importantly, we propose the
efficient non-greedy optimization algorithms to solve our objective and the
more general L21-norm maximization problem with theoretically guaranteed
convergence. Experimental results on real world data sets show the
effectiveness of the proposed method for principal component analysis.
| Feiping Nie and Heng Huang | null | 1603.08293 | null | null |
The SVM Classifier Based on the Modified Particle Swarm Optimization | cs.LG cs.NE | The problem of development of the SVM classifier based on the modified
particle swarm optimization has been considered. This algorithm carries out the
simultaneous search of the kernel function type, values of the kernel function
parameters and value of the regularization parameter for the SVM classifier.
Such an SVM classifier provides high-quality data classification. The idea of
particle "regeneration" underlies the modified particle swarm optimization
algorithm: in realizing this idea, some particles change their kernel function
type to that of the particle with the best classification accuracy.
The proposed particle swarm optimization algorithm reduces the time required
to develop the SVM classifier. The results of experimental
studies confirm the efficiency of this algorithm.
| L. Demidova, E. Nikulchev, Yu. Sokolova | 10.14569/IJACSA.2016.070203 | 1603.08296 | null | null |
Exclusivity Regularized Machine | cs.LG | It has been recognized that the diversity of base learners is of utmost
importance to a good ensemble. This paper defines a novel measurement of
diversity, termed as exclusivity. With the designed exclusivity, we further
propose an ensemble model, namely Exclusivity Regularized Machine (ERM), to
jointly suppress the training error of the ensemble and enhance the diversity
between bases. Moreover, an Augmented Lagrange Multiplier based algorithm is
customized to effectively and efficiently seek the optimal solution of ERM.
Theoretical analysis on convergence and global optimality of the proposed
algorithm, as well as experiments are provided to reveal the efficacy of our
method and show its superiority over state-of-the-art alternatives in terms of
accuracy and efficiency.
| Xiaojie Guo | null | 1603.08318 | null | null |
Audio Visual Emotion Recognition with Temporal Alignment and Perception
Attention | cs.CV cs.CL cs.LG | This paper focuses on two key problems for audio-visual emotion recognition
in video. One is the temporal alignment of the audio and visual streams for
feature-level fusion. The other is locating and re-weighting the perception
attentions in the whole audio-visual stream for better recognition. The Long
Short Term Memory Recurrent Neural Network (LSTM-RNN) is employed as the main
classification architecture. Firstly, soft attention mechanism aligns the audio
and visual streams. Secondly, seven emotion embedding vectors, corresponding
to each classification emotion type, are added to locate the
perception attentions. The locating and re-weighting process is also based on
the soft attention mechanism. The experiment results on EmotiW2015 dataset and
the qualitative analysis show the efficiency of the proposed two techniques.
| Linlin Chao, Jianhua Tao, Minghao Yang, Ya Li and Zhengqi Wen | null | 1603.08321 | null | null |
Hierarchical Gaussian Mixture Model with Objects Attached to Terminal
and Non-terminal Dendrogram Nodes | cs.LG cs.CV | A hierarchical clustering algorithm based on Gaussian mixture model is
presented. The key difference to regular hierarchical mixture models is the
ability to store objects in both terminal and nonterminal nodes. Upper levels
of the hierarchy contain sparsely distributed objects, while lower levels
contain densely represented ones. As shown by experiments, this ability
helps in noise detection (modelling). Furthermore, compared to the regular
hierarchical mixture model, the presented method generates more compact
dendrograms of higher quality, as measured by an adopted F-measure.
| {\L}ukasz P. Olech and Mariusz Paradowski | 10.1007/978-3-319-26227-7_18 | 1603.08342 | null | null |
Fast, Exact and Multi-Scale Inference for Semantic Image Segmentation
with Deep Gaussian CRFs | cs.CV cs.LG | In this work we propose a structured prediction technique that combines the
virtues of Gaussian Conditional Random Fields (G-CRF) with Deep Learning: (a)
our structured prediction task has a unique global optimum that is obtained
exactly from the solution of a linear system (b) the gradients of our model
parameters are analytically computed using closed form expressions, in contrast
to the memory-demanding contemporary deep structured prediction approaches that
rely on back-propagation-through-time, (c) our pairwise terms do not have to be
simple hand-crafted expressions, as in the line of works building on the
DenseCRF, but can rather be `discovered' from data through deep architectures,
and (d) our system can be trained in an end-to-end manner. Building on standard
tools from numerical analysis we develop very efficient algorithms for
inference and learning, as well as a customized technique adapted to the
semantic segmentation task. This efficiency allows us to explore more
sophisticated architectures for structured prediction in deep learning: we
introduce multi-resolution architectures to couple information across scales in
a joint optimization framework, yielding systematic improvements. We
demonstrate the utility of our approach on the challenging VOC PASCAL 2012
image segmentation benchmark, showing substantial improvements over strong
baselines. We make all of our code and experiments available at
{https://github.com/siddharthachandra/gcrf}
| Siddhartha Chandra and Iasonas Kokkinos | null | 1603.08358 | null | null |
Sparse Activity and Sparse Connectivity in Supervised Learning | cs.LG cs.CG cs.CV cs.NE | Sparseness is a useful regularizer for learning in a wide range of
applications, in particular in neural networks. This paper proposes a model
targeted at classification tasks, where sparse activity and sparse connectivity
are used to enhance classification capabilities. The tool for achieving this is
a sparseness-enforcing projection operator which finds the closest vector with
a pre-defined sparseness for any given vector. In the theoretical part of this
paper, a comprehensive theory for such a projection is developed. It is
shown that the projection is differentiable almost everywhere
and can thus be implemented as a smooth neuronal transfer function. The entire
model can hence be tuned end-to-end using gradient-based methods. Experiments
on the MNIST database of handwritten digits show that classification
performance can be boosted by sparse activity or sparse connectivity. With a
combination of both, performance can be significantly better compared to
classical non-sparse approaches.
| Markus Thom and G\"unther Palm | null | 1603.08367 | null | null |
Deep Embedding for Spatial Role Labeling | cs.CL cs.CV cs.LG cs.NE | This paper introduces the visually informed embedding of word (VIEW), a
continuous vector representation for a word extracted from a deep neural model
trained using the Microsoft COCO data set to forecast the spatial arrangements
between visual objects, given a textual description. The model is composed of a
deep multilayer perceptron (MLP) stacked on the top of a Long Short Term Memory
(LSTM) network, the latter being preceded by an embedding layer. The VIEW is
applied to transferring multimodal background knowledge to Spatial Role
Labeling (SpRL) algorithms, which recognize spatial relations between objects
mentioned in the text. This work also contributes with a new method to select
complementary features and a fine-tuning method for MLP that improves the $F1$
measure in classifying the words into spatial roles. The VIEW is evaluated with
the Task 3 of SemEval-2013 benchmark data set, SpaceEval.
| Oswaldo Ludwig, Xiao Liu, Parisa Kordjamshidi, Marie-Francine Moens | null | 1603.08474 | null | null |
Estimating Mixture Models via Mixtures of Polynomials | stat.ML cs.LG | Mixture modeling is a general technique for making any simple model more
expressive through weighted combination. This generality and simplicity in part
explains the success of the Expectation Maximization (EM) algorithm, in which
updates are easy to derive for a wide class of mixture models. However, the
likelihood of a mixture model is non-convex, so EM has no known global
convergence guarantees. Recently, method of moments approaches offer global
guarantees for some mixture models, but they do not extend easily to the range
of mixture models that exist. In this work, we present Polymom, a unifying
framework based on method of moments in which estimation procedures are easily
derivable, just as in EM. Polymom is applicable when the moments of a single
mixture component are polynomials of the parameters. Our key observation is
that the moments of the mixture model are a mixture of these polynomials, which
allows us to cast estimation as a Generalized Moment Problem. We solve its
relaxations using semidefinite optimization, and then extract parameters using
ideas from computer algebra. This framework allows us to draw insights and
apply tools from convex optimization, computer algebra and the theory of
moments to study problems in statistical estimation.
| Sida I. Wang and Arun Tejasvi Chaganty and Percy Liang | null | 1603.08482 | null | null |
Shuffle and Learn: Unsupervised Learning using Temporal Order
Verification | cs.CV cs.AI cs.LG | In this paper, we present an approach for learning a visual representation
from the raw spatiotemporal signals in videos. Our representation is learned
without supervision from semantic labels. We formulate our method as an
unsupervised sequential verification task, i.e., we determine whether a
sequence of frames from a video is in the correct temporal order. With this
simple task and no semantic labels, we learn a powerful visual representation
using a Convolutional Neural Network (CNN). The representation contains
complementary information to that learned from supervised image datasets like
ImageNet. Qualitative results show that our method captures information that is
temporally varying, such as human pose. When used as pre-training for action
recognition, our method gives significant gains over learning without external
data on benchmark datasets like UCF101 and HMDB51. To demonstrate its
sensitivity to human pose, we show results for pose estimation on the FLIC and
MPII datasets that are competitive, or better than approaches using
significantly more supervision. Our method can be combined with supervised
representations to provide an additional boost in accuracy.
| Ishan Misra and C. Lawrence Zitnick and Martial Hebert | null | 1603.08561 | null | null |
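The verification task above is concrete enough to illustrate with a small data-preparation sketch: build frame triplets labeled by whether they are in the correct temporal order. The sampling scheme below is a simplification and omits the paper's frame-selection heuristics (e.g. motion-based sampling); shapes and tuple counts are illustrative.

```python
# Minimal sketch: positive tuples keep temporal order, negatives shuffle it.
import numpy as np

rng = np.random.default_rng(0)

def sample_tuples(video_frames, n_tuples=4):
    """video_frames: array of shape (T, H, W, C). Returns triplets and 0/1 labels."""
    T = len(video_frames)
    tuples, labels = [], []
    for _ in range(n_tuples):
        a, b, c = sorted(rng.choice(T, size=3, replace=False))
        if rng.random() < 0.5:                     # positive: correct temporal order
            tuples.append(video_frames[[a, b, c]])
            labels.append(1)
        else:                                      # negative: middle frame out of order
            tuples.append(video_frames[[a, c, b]])
            labels.append(0)
    return np.stack(tuples), np.array(labels)

frames = rng.random((30, 64, 64, 3))               # stand-in for a decoded video
x, y = sample_tuples(frames)
print(x.shape, y)                                  # (4, 3, 64, 64, 3) and binary labels
```

A CNN would then be trained on these tuples to predict the order label, yielding the unsupervised representation described above.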
Attend, Infer, Repeat: Fast Scene Understanding with Generative Models | cs.CV cs.LG | We present a framework for efficient inference in structured image models
that explicitly reason about objects. We achieve this by performing
probabilistic inference using a recurrent neural network that attends to scene
elements and processes them one at a time. Crucially, the model itself learns
to choose the appropriate number of inference steps. We use this scheme to
learn to perform inference in partially specified 2D models (variable-sized
variational auto-encoders) and fully specified 3D models (probabilistic
renderers). We show that such models learn to identify multiple objects -
counting, locating and classifying the elements of a scene - without any
supervision, e.g., decomposing 3D images with various numbers of objects in a
single forward pass of a neural network. We further show that the networks
produce accurate inferences when compared to supervised counterparts, and that
their structure leads to improved generalization.
| S. M. Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, David
Szepesvari, Koray Kavukcuoglu and Geoffrey E. Hinton | null | 1603.08575 | null | null |
Classification-based Financial Markets Prediction using Deep Neural
Networks | cs.LG cs.CE | Deep neural networks (DNNs) are powerful types of artificial neural networks
(ANNs) that use several hidden layers. They have recently gained considerable
attention in the speech transcription and image recognition community
(Krizhevsky et al., 2012) for their superior predictive properties including
robustness to overfitting. However, their application to algorithmic trading has
not been previously researched, partly because of their computational
complexity. This paper describes the application of DNNs to predicting
financial market movement directions. In particular we describe the
configuration and training approach and then demonstrate their application to
backtesting a simple trading strategy over 43 different Commodity and FX future
mid-prices at 5-minute intervals. All results in this paper are generated using
a C++ implementation on the Intel Xeon Phi co-processor, which is 11.4x faster
than the serial version, and a Python strategy backtesting environment, both of
which are available as open-source code written by the authors.
| Matthew Dixon, Diego Klabjan, Jin Hoon Bang | null | 1603.08604 | null | null |
Submodular Variational Inference for Network Reconstruction | cs.LG cs.DS cs.SI stat.ML | In real-world and online social networks, individuals receive and transmit
information in real time. Cascading information transmissions (e.g. phone
calls, text messages, social media posts) may be understood as a realization of
a diffusion process operating on the network, and its branching path can be
represented by a directed tree. The process only traverses and thus reveals a
limited portion of the edges. The network reconstruction/inference problem is
to infer the unrevealed connections. Most existing approaches derive a
likelihood and attempt to find the network topology maximizing the likelihood,
a problem that is highly intractable. In this paper, we focus on the network
reconstruction problem for a broad class of real-world diffusion processes,
exemplified by a network diffusion scheme called respondent-driven sampling
(RDS). We prove that under realistic and general models of network diffusion,
the posterior distribution of an observed RDS realization is a Bayesian
log-submodular model. We then propose VINE (Variational Inference for Network
rEconstruction), a novel, accurate, and computationally efficient variational
inference algorithm, for the network reconstruction problem under this model.
Crucially, we do not assume any particular probabilistic model for the
underlying network. VINE recovers any connected graph with high accuracy as
shown by our experimental results on real-life networks.
| Lin Chen, Forrest W Crawford, Amin Karbasi | null | 1603.08616 | null | null |
Regret Analysis of the Anytime Optimally Confident UCB Algorithm | cs.LG math.ST stat.ML stat.TH | I introduce and analyse an anytime version of the Optimally Confident UCB
(OCUCB) algorithm designed for minimising the cumulative regret in finite-armed
stochastic bandits with subgaussian noise. The new algorithm is simple,
intuitive (in hindsight) and comes with the strongest finite-time regret
guarantees for a horizon-free algorithm so far. I also show a finite-time lower
bound that nearly matches the upper bound.
| Tor Lattimore | null | 1603.08661 | null | null |
Interpretability of Multivariate Brain Maps in Brain Decoding:
Definition and Quantification | stat.ML cs.LG | Brain decoding is a popular multivariate approach for hypothesis testing in
neuroimaging. It is well known that the brain maps derived from weights of
linear classifiers are hard to interpret because of high correlations between
predictors, low signal to noise ratios, and the high dimensionality of
neuroimaging data. Therefore, improving the interpretability of brain decoding
approaches is of primary interest in many neuroimaging studies. Despite
extensive studies of this type, at present, there is no formal definition for
interpretability of multivariate brain maps. As a consequence, there is no
quantitative measure for evaluating the interpretability of different brain
decoding methods. In this paper, first, we present a theoretical definition of
interpretability in brain decoding; we show that the interpretability of
multivariate brain maps can be decomposed into their reproducibility and
representativeness. Second, as an application of the proposed theoretical
definition, we formalize a heuristic method for approximating the
interpretability of multivariate brain maps in a binary magnetoencephalography
(MEG) decoding scenario. Third, we propose to combine the approximated
interpretability and the performance of the brain decoding model into a new
multi-objective criterion for model selection. Our results for the MEG data
show that optimizing the hyper-parameters of the regularized linear classifier
based on the proposed criterion results in more informative multivariate brain
maps. More importantly, the presented definition provides the theoretical
background for quantitative evaluation of interpretability, and hence,
facilitates the development of more effective brain decoding algorithms in the
future.
| Seyed Mostafa Kia | null | 1603.08704 | null | null |
Machine Learning and Cloud Computing: Survey of Distributed and SaaS
Solutions | cs.DC cs.LG | Applying popular machine learning algorithms to large amounts of data has raised
new challenges for ML practitioners. Traditional ML libraries do not
support processing of huge datasets well, so new approaches were needed.
Parallelization using modern parallel computing frameworks, such as MapReduce,
CUDA, or Dryad, gained in popularity and acceptance, resulting in new ML
libraries developed on top of these frameworks. We will briefly introduce the
most prominent industrial and academic outcomes, such as Apache Mahout,
GraphLab or Jubatus.
We will investigate how the cloud computing paradigm has impacted the field of ML.
The first direction is popular statistics tools and libraries (the R system, Python)
deployed in the cloud. A second line of products augments existing tools
with plugins that allow users to create a Hadoop cluster in the cloud and run
jobs on it. Next on the list are libraries of distributed implementations of
ML algorithms, and on-premise deployments of complex systems for data analytics
and data mining. The last approach on the radar of this survey is ML as
Software-as-a-Service, with several BigData start-ups (and large companies as well)
already opening their solutions to the market.
| Daniel Pop | null | 1603.08767 | null | null |
Spectral M-estimation with Applications to Hidden Markov Models | stat.CO cs.LG stat.ME | Method of moment estimators exhibit appealing statistical properties, such as
asymptotic unbiasedness, for nonconvex problems. However, they typically
require a large number of samples and are extremely sensitive to model
misspecification. In this paper, we apply the framework of M-estimation to
develop both a generalized method of moments procedure and a principled method
for regularization. Our proposed M-estimator obtains optimal sample efficiency
rates (in the class of moment-based estimators) and the same well-known rates
on prediction accuracy as other spectral estimators. It also makes it
straightforward to incorporate regularization into the sample moment
conditions. We demonstrate empirically the gains in sample efficiency from our
approach on hidden Markov models.
| Dustin Tran, Minjae Kim, Finale Doshi-Velez | null | 1603.08815 | null | null |
Towards Understanding Sparse Filtering: A Theoretical Perspective | cs.LG | In this paper we present a theoretical analysis to understand sparse
filtering, a recent and effective algorithm for unsupervised learning. The aim
of this research is not to show whether or how well sparse filtering works, but
to understand why and when sparse filtering does work. We provide a thorough
theoretical analysis of sparse filtering and its properties, and further offer
an experimental validation of the main outcomes of our theoretical analysis. We
show that sparse filtering works by explicitly maximizing the entropy of the
learned representation through the maximization of the proxy of sparsity, and
by implicitly preserving mutual information between original and learned
representations through the constraint of preserving a structure of the data,
specifically the structure defined by relations of neighborhoodness under the
cosine distance. Furthermore, we empirically validate our theoretical results
with artificial and real data sets, and we apply our theoretical understanding
to explain the success of sparse filtering on real-world problems. Our work
provides a strong theoretical basis for understanding sparse filtering: it
highlights assumptions and conditions for success behind this feature
distribution learning algorithm, and provides insights for developing new
feature distribution learning algorithms.
| Fabio Massimo Zennaro, Ke Chen | 10.1016/j.neunet.2017.11.010 | 1603.08831 | null | null |
Revisiting Semi-Supervised Learning with Graph Embeddings | cs.LG | We present a semi-supervised learning framework based on graph embeddings.
Given a graph between instances, we train an embedding for each instance to
jointly predict the class label and the neighborhood context in the graph. We
develop both transductive and inductive variants of our method. In the
transductive variant of our method, the class labels are determined by both the
learned embeddings and input feature vectors, while in the inductive variant,
the embeddings are defined as a parametric function of the feature vectors, so
predictions can be made on instances not seen during training. On a large and
diverse set of benchmark tasks, including text classification, distantly
supervised entity extraction, and entity classification, we show improved
performance over many of the existing models.
| Zhilin Yang and William W. Cohen and Ruslan Salakhutdinov | null | 1603.08861 | null | null |
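The joint objective summarized in the abstract above (predict both the class label and the graph context from an instance embedding) can be illustrated for a single instance. This is a simplified schematic, not the paper's exact sampling scheme or architecture; the parameter shapes and the weighting constant are assumptions.

```python
# Schematic joint loss: label cross-entropy + graph-context cross-entropy.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
d, n_classes, n_nodes = 16, 3, 50
W_label = rng.normal(size=(n_classes, d)) * 0.1    # classifier on top of the embedding
W_ctx = rng.normal(size=(n_nodes, d)) * 0.1        # "context" parameters for graph nodes

def joint_loss(embedding, label, context_node, lam=1.0):
    """Cross-entropy for the class label plus cross-entropy for predicting a
    sampled graph neighbor. In the transductive flavor the embedding is a free
    parameter per node; the inductive variant would compute it from features."""
    label_loss = -np.log(softmax(W_label @ embedding)[label])
    context_loss = -np.log(softmax(W_ctx @ embedding)[context_node])
    return label_loss + lam * context_loss

emb = rng.normal(size=d) * 0.1
print(joint_loss(emb, label=1, context_node=7))
```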
Detecting weak changes in dynamic events over networks | cs.LG stat.ML | Large volumes of networked streaming event data are becoming increasingly
available in a wide variety of applications, such as social network analysis,
Internet traffic monitoring and healthcare analytics. Streaming event data are
discrete observations occurring in continuous time, and the precise time interval
between two events carries a great deal of information about the dynamics of
the underlying systems. How can changes in these dynamic systems be promptly
detected using such streaming event data? In this paper, we propose a novel
change-point detection framework for multi-dimensional event data over
networks. We cast the problem as a sequential hypothesis test, and derive the
likelihood ratios for point processes, which are computed efficiently via an
EM-like algorithm that is parameter-free and can be computed in a distributed
fashion. We derive a highly accurate theoretical characterization of the
false-alarm-rate, and show that it can achieve weak signal detection by
aggregating local statistics over time and networks. Finally, we demonstrate
the good performance of our algorithm on numerical examples and real-world
datasets from Twitter and Memetracker.
| Shuang Li, Yao Xie, Mehrdad Farajtabar, Apurv Verma, and Le Song | null | 1603.08981 | null | null |
Towards Practical Bayesian Parameter and State Estimation | cs.AI cs.LG stat.ML | Joint state and parameter estimation is a core problem for dynamic Bayesian
networks. Although modern probabilistic inference toolkits make it relatively
easy to specify large and practically relevant probabilistic models, the silver
bullet---an efficient and general online inference algorithm for such
problems---remains elusive, forcing users to write special-purpose code for
each application. We propose a novel blackbox algorithm -- a hybrid of particle
filtering for state variables and assumed density filtering for parameter
variables. It has following advantages: (a) it is efficient due to its online
nature, and (b) it is applicable to both discrete and continuous parameter
spaces . On a variety of toy and real models, our system is able to generate
more accurate results within a fixed computation budget. This preliminary
evidence indicates that the proposed approach is likely to be of practical use.
| Yusuf Bugra Erol, Yi Wu, Lei Li, Stuart Russell | null | 1603.08988 | null | null |
Online Rules for Control of False Discovery Rate and False Discovery
Exceedance | math.ST cs.LG stat.AP stat.ME stat.ML stat.TH | Multiple hypothesis testing is a core problem in statistical inference and
arises in almost every scientific field. Given a set of null hypotheses
$\mathcal{H}(n) = (H_1,\dotsc, H_n)$, Benjamini and Hochberg introduced the
false discovery rate (FDR), which is the expected proportion of false positives
among rejected null hypotheses, and proposed a testing procedure that controls
FDR below a pre-assigned significance level. Nowadays FDR is the criterion of
choice for large scale multiple hypothesis testing. In this paper we consider
the problem of controlling FDR in an "online manner". Concretely, we consider
an ordered --possibly infinite-- sequence of null hypotheses $\mathcal{H} =
(H_1,H_2,H_3,\dots )$ where, at each step $i$, the statistician must decide
whether to reject hypothesis $H_i$ having access only to the previous
decisions. This model was introduced by Foster and Stine. We study a class of
"generalized alpha-investing" procedures and prove that any rule in this class
controls online FDR, provided $p$-values corresponding to true nulls are
independent from the other $p$-values. (Earlier work only established mFDR
control.) Next, we obtain conditions under which generalized alpha-investing
controls FDR in the presence of general $p$-value dependencies. Finally, we
develop a modified set of procedures that also allow control of the false
discovery exceedance (the tail of the proportion of false discoveries).
Numerical simulations and analytical results indicate that online procedures do
not incur a large loss in statistical power with respect to offline approaches,
such as Benjamini-Hochberg.
| Adel Javanmard and Andrea Montanari | null | 1603.09000 | null | null |
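The online testing setup above can be made concrete with a schematic alpha-investing loop in the spirit of the generalized alpha-investing rules it analyzes. The wealth-update and spending choices below are one simple illustrative instance under assumed constants, not the specific procedures studied in the paper.

```python
# Schematic alpha-investing loop for online multiple testing.
import numpy as np

def alpha_investing(p_values, w0=0.05, payout=0.05):
    wealth = w0
    decisions = []
    for p in p_values:
        alpha_j = wealth / (1.0 + wealth) * 0.5   # spend a fraction of current wealth
        reject = p <= alpha_j
        if reject:
            wealth += payout                      # earn wealth back on a discovery
        else:
            wealth -= alpha_j / (1.0 - alpha_j)   # pay for a non-discovery
        wealth = max(wealth, 0.0)                 # safety: wealth never goes negative
        decisions.append(reject)
    return decisions

rng = np.random.default_rng(0)
# 80 true nulls (uniform p-values) followed by 20 signals (very small p-values).
p_vals = np.concatenate([rng.uniform(size=80), rng.uniform(0, 1e-3, size=20)])
print(sum(alpha_investing(p_vals)), "rejections out of", len(p_vals), "hypotheses")
```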
Recurrent Batch Normalization | cs.LG | We propose a reparameterization of LSTM that brings the benefits of batch
normalization to recurrent neural networks. Whereas previous works only apply
batch normalization to the input-to-hidden transformation of RNNs, we
demonstrate that it is both possible and beneficial to batch-normalize the
hidden-to-hidden transition, thereby reducing internal covariate shift between
time steps. We evaluate our proposal on various sequential problems such as
sequence classification, language modeling and question answering. Our
empirical results show that our batch-normalized LSTM consistently leads to
faster convergence and improved generalization.
| Tim Cooijmans, Nicolas Ballas, C\'esar Laurent, \c{C}a\u{g}lar
G\"ul\c{c}ehre and Aaron Courville | null | 1603.09025 | null | null |
Towards Geo-Distributed Machine Learning | cs.LG cs.DC stat.ML | Latency to end-users and regulatory requirements push large companies to
build data centers all around the world. The resulting data is "born"
geographically distributed. On the other hand, many machine learning
applications require a global view of such data in order to achieve the best
results. These types of applications form a new class of learning problems,
which we call Geo-Distributed Machine Learning (GDML). Such applications need
to cope with: 1) scarce and expensive cross-data center bandwidth, and 2)
growing privacy concerns that are pushing for stricter data sovereignty
regulations. Current solutions to learning from geo-distributed data sources
revolve around the idea of first centralizing the data in one data center, and
then training locally. As machine learning algorithms are
communication-intensive, the cost of centralizing the data is thought to be
offset by the lower cost of intra-data center communication during training. In
this work, we show that the current centralized practice can be far from
optimal, and propose a system for doing geo-distributed training. Furthermore,
we argue that the geo-distributed approach is structurally more amenable to
dealing with regulatory constraints, as raw data never leaves the source data
center. Our empirical evaluation on three real datasets confirms the general
validity of our approach, and shows that GDML is not only possible but also
advisable in many scenarios.
| Ignacio Cano, Markus Weimer, Dhruv Mahajan, Carlo Curino and Giovanni
Matteo Fumarola | null | 1603.09035 | null | null |
Cost-Sensitive Label Embedding for Multi-Label Classification | cs.LG stat.ML | Label embedding (LE) is an important family of multi-label classification
algorithms that digest the label information jointly for better performance.
Different real-world applications evaluate performance by different cost
functions of interest. Current LE algorithms often aim to optimize one specific
cost function, but they can suffer from bad performance with respect to other
cost functions. In this paper, we resolve the performance issue by proposing a
novel cost-sensitive LE algorithm that takes the cost function of interest into
account. The proposed algorithm, cost-sensitive label embedding with
multidimensional scaling (CLEMS), approximates the cost information with the
distances of the embedded vectors by using the classic multidimensional scaling
approach for manifold learning. CLEMS is able to deal with both symmetric and
asymmetric cost functions, and effectively makes cost-sensitive decisions by
nearest-neighbor decoding within the embedded vectors. We derive theoretical
results that justify how CLEMS achieves the desired cost-sensitivity.
Furthermore, extensive experimental results demonstrate that CLEMS is
significantly better than a wide spectrum of existing LE algorithms and
state-of-the-art cost-sensitive algorithms across different cost functions.
| Kuan-Hao Huang and Hsuan-Tien Lin | 10.1007/s10994-017-5659-z | 1603.09048 | null | null |
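The embed-then-decode idea described above can be sketched in a few lines: candidate label vectors are embedded with multidimensional scaling so that embedding distances approximate a cost of interest, and a prediction is decoded as the candidate nearest to a predicted embedding point. Hamming cost, the candidate set and the predicted point are illustrative assumptions; the regressor mapping features to embeddings and the handling of asymmetric costs are omitted.

```python
# Hedged sketch of cost-sensitive embedding plus nearest-neighbor decoding.
import numpy as np
from sklearn.manifold import MDS

# Candidate label vectors (e.g., label combinations observed in training data).
candidates = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 0], [1, 1, 0]])
# Pairwise cost between candidates: here, Hamming distance as an example.
cost = (candidates[:, None, :] != candidates[None, :, :]).sum(-1).astype(float)

# Embed candidates so that Euclidean distances mimic the cost matrix.
embedder = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
emb = embedder.fit_transform(cost)

# At prediction time, a regressor would map input features to a point in the
# embedding space; here we just take a hypothetical predicted point.
z_pred = emb[1] + 0.1
decoded = candidates[np.linalg.norm(emb - z_pred, axis=1).argmin()]
print(decoded)   # nearest-neighbor decoding returns a candidate label vector
```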
Robustness of Bayesian Pool-based Active Learning Against Prior
Misspecification | cs.LG stat.ML | We study the robustness of active learning (AL) algorithms against prior
misspecification: whether an algorithm achieves similar performance using a
perturbed prior as compared to using the true prior. In both the average and
worst cases of the maximum coverage setting, we prove that all
$\alpha$-approximate algorithms are robust (i.e., near $\alpha$-approximate) if
the utility is Lipschitz continuous in the prior. We further show that
robustness may not be achieved if the utility is non-Lipschitz. This suggests
we should use a Lipschitz utility for AL if robustness is required. For the
minimum cost setting, we can also obtain a robustness result for approximate AL
algorithms. Our results imply that many commonly used AL algorithms are robust
against perturbed priors. We then propose the use of a mixture prior to
alleviate the problem of prior misspecification. We analyze the robustness of
the uniform mixture prior and show experimentally that it performs reasonably
well in practice.
| Nguyen Viet Cuong, Nan Ye, Wee Sun Lee | null | 1603.09050 | null | null |
Semi-Supervised Learning on Graphs through Reach and Distance Diffusion | cs.LG | Semi-supervised learning (SSL) is an indispensable tool when there are few
labeled entities and many unlabeled entities for which we want to predict
labels. With graph-based methods, entities correspond to nodes in a graph and
edges represent strong relations. At the heart of SSL algorithms is the
specification of a dense {\em kernel} of pairwise affinity values from the
graph structure. A learning algorithm is then trained on the kernel together
with labeled entities. The most popular kernels are {\em spectral} and include
the highly scalable "symmetric" Laplacian methods, which compute soft labels
using Jacobi iterations, and "asymmetric" methods including Personalized Page
Rank (PPR) which use short random walks and apply with directed relations, such
as like, follow, or hyperlinks.
We introduce {\em Reach diffusion} and {\em Distance diffusion} kernels that
build on powerful social and economic models of centrality and influence in
networks and capture the directed pairwise relations that underline social
influence. Inspired by the success of social influence as an alternative to
spectral centrality such as Page Rank, we explore SSL with our kernels and
develop highly scalable algorithms for parameter setting, label learning, and
sampling. We perform preliminary experiments that demonstrate the properties
and potential of our kernels.
| Edith Cohen | null | 1603.09064 | null | null |
deepTarget: End-to-end Learning Framework for microRNA Target Prediction
using Deep Recurrent Neural Networks | cs.LG q-bio.GN | MicroRNAs (miRNAs) are short sequences of ribonucleic acids that control the
expression of target messenger RNAs (mRNAs) by binding them. Robust prediction
of miRNA-mRNA pairs is of utmost importance in deciphering gene regulations but
has been challenging because of high false positive rates, despite a deluge of
computational tools that normally require laborious manual feature extraction.
This paper presents an end-to-end machine learning framework for miRNA target
prediction. Leveraged by deep recurrent neural networks-based auto-encoding and
sequence-sequence interaction learning, our approach not only delivers an
unprecedented level of accuracy but also eliminates the need for manual feature
extraction. The performance gap between the proposed method and existing
alternatives is substantial (over 25% increase in F-measure), and deepTarget
delivers a quantum leap in the long-standing challenge of robust miRNA target
prediction.
| Byunghan Lee, Junghwan Baek, Seunghyun Park, and Sungroh Yoon | null | 1603.09123 | null | null |
Bilingual Learning of Multi-sense Embeddings with Discrete Autoencoders | cs.CL cs.LG stat.ML | We present an approach to learning multi-sense word embeddings relying both
on monolingual and bilingual information. Our model consists of an encoder,
which uses monolingual and bilingual context (i.e. a parallel sentence) to
choose a sense for a given word, and a decoder which predicts context words
based on the chosen sense. The two components are estimated jointly. We observe
that the word representations induced from bilingual data outperform the
monolingual counterparts across a range of evaluation tasks, even though
crosslingual information is not available at test time.
| Simon \v{S}uster and Ivan Titov and Gertjan van Noord | null | 1603.09128 | null | null |
Model Interpolation with Trans-dimensional Random Field Language Models
for Speech Recognition | cs.CL cs.LG stat.ML | The dominant language models (LMs) such as n-gram and neural network (NN)
models represent sentence probabilities in terms of conditionals. In contrast,
a new trans-dimensional random field (TRF) LM has been recently introduced to
show superior performances, where the whole sentence is modeled as a random
field. In this paper, we examine how the TRF models can be interpolated with
the NN models, and obtain 12.1\% and 17.9\% relative error rate reductions over
6-gram LMs for English and Chinese speech recognition respectively through
log-linear combination.
| Bin Wang, Zhijian Ou, Yong He, Akinori Kawamura | null | 1603.09170 | null | null |
Optimal Recommendation to Users that React: Online Learning for a Class
of POMDPs | cs.LG | We describe and study a model for an Automated Online Recommendation System
(AORS) in which a user's preferences can be time-dependent and can also depend
on the history of past recommendations and play-outs. The three key features of
the model that makes it more realistic compared to existing models for
recommendation systems are (1) user preference is inherently latent, (2)
current recommendations can affect future preferences, and (3) it allows for
the development of learning algorithms with provable performance guarantees.
The problem is cast as an average-cost restless multi-armed bandit for a given
user, with an independent partially observable Markov decision process (POMDP)
for each item of content. We analyze the POMDP for a single arm, describe its
structural properties, and characterize its optimal policy. We then develop a
Thompson sampling-based online reinforcement learning algorithm to learn the
parameters of the model and optimize utility from the binary responses of the
users to continuous recommendations. We then analyze the performance of the
learning algorithm and characterize the regret. Illustrative numerical results
and directions for extension to the restless hidden Markov multi-armed bandit
problem are also presented.
| Rahul Meshram, Aditya Gopalan and D. Manjunath | null | 1603.09233 | null | null |
Degrees of Freedom in Deep Neural Networks | cs.LG stat.ML | In this paper, we explore degrees of freedom in deep sigmoidal neural
networks. We show that the degrees of freedom in these models are related to the
expected optimism, which is the expected difference between test error and
training error. We provide an efficient Monte-Carlo method to estimate the
degrees of freedom for multi-class classification methods. We show degrees of
freedom are lower than the parameter count in a simple XOR network. We extend
these results to neural nets trained on synthetic and real data, and
investigate the impact of the network architecture and of different regularization
choices. The degrees of freedom in deep networks are dramatically smaller than
the number of parameters, in some real datasets several orders of magnitude.
Further, we observe that for a fixed number of parameters, deeper networks have
fewer degrees of freedom, exhibiting a regularization-by-depth effect.
| Tianxiang Gao and Vladimir Jojic | null | 1603.09260 | null | null |
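To make the "degrees of freedom via expected optimism" notion above more tangible, the sketch below estimates degrees of freedom by Monte Carlo in the classical covariance sense for a simple regression model. It is a generic illustration under simulated data where the noise level is known, not the paper's estimator for multi-class neural networks.

```python
# Monte-Carlo degrees-of-freedom estimate: df ≈ (1/σ²) Σ_i Cov(yhat_i, y_i).
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 200, 5, 0.5
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
base = X @ beta                                    # noiseless signal (known here)

def fit_predict(X, y):
    """Ordinary least squares fit; its true degrees of freedom equal p."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ coef

n_mc, dots = 200, []
for _ in range(n_mc):
    eps = sigma * rng.normal(size=n)               # resample the observation noise
    yhat = fit_predict(X, base + eps)
    dots.append(np.dot(yhat, eps))                 # how fitted values track the noise
df_estimate = np.mean(dots) / sigma ** 2
print(f"estimated df ≈ {df_estimate:.2f} (true value {p})")
```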
Clinical Information Extraction via Convolutional Neural Network | cs.LG cs.CL cs.NE | We report an implementation of a clinical information extraction tool that
leverages deep neural network to annotate event spans and their attributes from
raw clinical notes and pathology reports. Our approach uses context words and
their part-of-speech tags and shape information as features. Then we employ a
temporal (1D) convolutional neural network to learn hidden feature
representations. Finally, we use Multilayer Perceptron (MLP) to predict event
spans. The empirical evaluation demonstrates that our approach significantly
outperforms baselines.
| Peng Li and Heng Huang | null | 1603.09381 | null | null |
Deep Networks with Stochastic Depth | cs.LG cs.CV cs.NE | Very deep convolutional networks with hundreds of layers have led to
significant reductions in error on competitive benchmarks. Although the
unmatched expressiveness of the many layers can be highly desirable at test
time, training very deep networks comes with its own set of challenges. The
gradients can vanish, the forward flow often diminishes, and the training time
can be painfully slow. To address these problems, we propose stochastic depth,
a training procedure that enables the seemingly contradictory setup to train
short networks and use deep networks at test time. We start with very deep
networks but during training, for each mini-batch, randomly drop a subset of
layers and bypass them with the identity function. This simple approach
complements the recent success of residual networks. It reduces training time
substantially and improves the test error significantly on almost all data sets
that we used for evaluation. With stochastic depth we can increase the depth of
residual networks even beyond 1200 layers and still yield meaningful
improvements in test error (4.91% on CIFAR-10).
| Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, Kilian Weinberger | null | 1603.09382 | null | null |
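The training rule described above (randomly drop residual blocks and bypass them with the identity, keep all blocks at test time) is easy to sketch. The blocks below are toy linear layers and the linearly decaying survival probabilities are a common but assumed choice.

```python
# Minimal stochastic-depth forward pass with identity bypass.
import numpy as np

rng = np.random.default_rng(0)
depth, width = 12, 16
weights = [rng.normal(size=(width, width)) * 0.05 for _ in range(depth)]
# Linearly decaying survival probabilities from 1.0 down to 0.5 (assumption).
p_survive = np.linspace(1.0, 0.5, depth)

def block(h, W):
    return np.maximum(h @ W, 0.0)                  # toy residual branch (ReLU)

def forward(x, train=True):
    h = x
    for W, p in zip(weights, p_survive):
        if train:
            if rng.random() < p:                   # keep the block for this mini-batch
                h = h + block(h, W)
            # else: skip the block entirely -- identity bypass
        else:
            h = h + p * block(h, W)                # expected-value scaling at test time
    return h

x = rng.normal(size=(4, width))
print(forward(x, train=True).shape, forward(x, train=False).shape)
```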
Minimal Gated Unit for Recurrent Neural Networks | cs.NE cs.LG | Recently, recurrent neural networks (RNNs) have been very successful in handling
sequence data. However, understanding RNNs and finding the best practices for
RNNs is a difficult task, partly because there are many competing and complex
hidden units (such as LSTM and GRU). We propose a gated unit for RNNs, named the
Minimal Gated Unit (MGU), which contains only one gate and is thus a minimal
design among all gated hidden units. The design of MGU benefits from evaluation
results on LSTM and GRU in the literature. Experiments on various sequence data
show that MGU has comparable accuracy with GRU, but has a simpler structure,
fewer parameters, and faster training. Hence, MGU is well suited for RNN
applications. Its simple architecture also means that it is easier to evaluate
and tune, and in principle it is easier to study MGU's properties theoretically
and empirically.
| Guo-Bing Zhou and Jianxin Wu and Chen-Lin Zhang and Zhi-Hua Zhou | null | 1603.09420 | null | null |
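A sketch of a single-gate recurrent cell in the spirit of the MGU described above: one forget-style gate both gates the recurrent state used to build the candidate and interpolates between the old and candidate states. The exact parameterization in the paper may differ in details; shapes and initializations here are assumptions.

```python
# Minimal single-gate recurrent cell sketch.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mgu_step(x_t, h_prev, params):
    Wf, Uf, bf, Wh, Uh, bh = params
    f = sigmoid(x_t @ Wf + h_prev @ Uf + bf)               # the single gate
    h_tilde = np.tanh(x_t @ Wh + (f * h_prev) @ Uh + bh)   # candidate state
    return (1.0 - f) * h_prev + f * h_tilde                # gated interpolation

rng = np.random.default_rng(0)
n_in, n_hid, batch = 8, 16, 4
params = (rng.normal(size=(n_in, n_hid)) * 0.1, rng.normal(size=(n_hid, n_hid)) * 0.1,
          np.zeros(n_hid),
          rng.normal(size=(n_in, n_hid)) * 0.1, rng.normal(size=(n_hid, n_hid)) * 0.1,
          np.zeros(n_hid))
h = np.zeros((batch, n_hid))
for t in range(5):
    h = mgu_step(rng.normal(size=(batch, n_in)), h, params)
print(h.shape)
```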
A Stratified Analysis of Bayesian Optimization Methods | cs.LG stat.ML | Empirical analysis serves as an important complement to theoretical analysis
for studying practical Bayesian optimization. Often empirical insights expose
strengths and weaknesses inaccessible to theoretical analysis. We define two
metrics for comparing the performance of Bayesian optimization methods and
propose a ranking mechanism for summarizing performance within various genres
or strata of test functions. These test functions serve to mimic the complexity
of hyperparameter optimization problems, the most prominent application of
Bayesian optimization, but with a closed form which allows for rapid evaluation
and more predictable behavior. This offers a flexible and efficient way to
investigate functions with specific properties of interest, such as oscillatory
behavior or an optimum on the domain boundary.
| Ian Dewancker, Michael McCourt, Scott Clark, Patrick Hayes, Alexandra
Johnson and George Ke | null | 1603.09441 | null | null |
A ParaBoost Stereoscopic Image Quality Assessment (PBSIQA) System | cs.CV cs.LG | The problem of stereoscopic image quality assessment, which finds
applications in 3D visual content delivery such as 3DTV, is investigated in
this work. Specifically, we propose a new ParaBoost (parallel-boosting)
stereoscopic image quality assessment (PBSIQA) system. The system consists of
two stages. In the first stage, various distortions are classified into a few
types, and individual quality scorers targeting at a specific distortion type
are developed. These scorers offer complementary performance in the face of a
database consisting of heterogeneous distortion types. In the second stage,
scores from multiple quality scorers are fused to achieve the best overall
performance, where the fuser is designed based on the parallel boosting idea
borrowed from machine learning. Extensive experiments are conducted to
compare the performance of the proposed PBSIQA system with those of existing
stereo image quality assessment (SIQA) metrics. The developed quality metric
can serve as an objective function to optimize the performance of a 3D content
delivery system.
| Hyunsuk Ko, Rui Song, C.-C. Jay Kuo | 10.1016/j.jvcir.2017.02.014 | 1603.09469 | null | null |
Learning Compatibility Across Categories for Heterogeneous Item
Recommendation | cs.IR cs.CV cs.LG | Identifying relationships between items is a key task of an online
recommender system, in order to help users discover items that are functionally
complementary or visually compatible. In domains like clothing recommendation,
this task is particularly challenging since a successful system should be
capable of handling a large corpus of items, a huge amount of relationships
among them, as well as the high-dimensional and semantically complicated
features involved. Furthermore, the human notion of "compatibility" to capture
goes beyond mere similarity: For two items to be compatible---whether jeans and
a t-shirt, or a laptop and a charger---they should be similar in some ways, but
systematically different in others.
In this paper we propose a novel method, Monomer, to learn complicated and
heterogeneous relationships between items in product recommendation settings.
Recently, scalable methods have been developed that address this task by
learning similarity metrics on top of the content of the products involved.
Here our method relaxes the metricity assumption inherent in previous work and
models multiple localized notions of 'relatedness,' so as to uncover ways in
which related items should be systematically similar, and systematically
different. Quantitatively, we show that our system achieves state-of-the-art
performance on large-scale compatibility prediction tasks, especially in cases
where there is substantial heterogeneity between related items. Qualitatively,
we demonstrate that richer notions of compatibility can be learned that go
beyond similarity, and that our model can make effective recommendations of
heterogeneous content.
| Ruining He and Charles Packer and Julian McAuley | null | 1603.09473 | null | null |
Learning Multiscale Features Directly From Waveforms | cs.CL cs.LG cs.NE cs.SD | Deep learning has dramatically improved the performance of speech recognition
systems through learning hierarchies of features optimized for the task at
hand. However, true end-to-end learning, where features are learned directly
from waveforms, has only recently reached the performance of hand-tailored
representations based on the Fourier transform. In this paper, we detail an
approach to use convolutional filters to push past the inherent tradeoff of
temporal and frequency resolution that exists for spectral representations. At
increased computational cost, we show that increasing temporal resolution via
reduced stride and increasing frequency resolution via additional filters
delivers significant performance improvements. Further, we find more efficient
representations by simultaneously learning at multiple scales, leading to an
overall decrease in word error rate on a difficult internal speech test set by
20.7% relative to networks with the same number of parameters trained on
spectrograms.
| Zhenyao Zhu, Jesse H. Engel, Awni Hannun | null | 1603.09509 | null | null |
Online Optimization with Costly and Noisy Measurements using Random
Fourier Expansions | cs.LG math.OC stat.ML | This paper analyzes DONE, an online optimization algorithm that iteratively
minimizes an unknown function based on costly and noisy measurements. The
algorithm maintains a surrogate of the unknown function in the form of a random
Fourier expansion (RFE). The surrogate is updated whenever a new measurement is
available, and then used to determine the next measurement point. The algorithm
is comparable to Bayesian optimization algorithms, but its computational
complexity per iteration does not depend on the number of measurements. We
derive several theoretical results that provide insight on how the
hyper-parameters of the algorithm should be chosen. The algorithm is compared
to a Bayesian optimization algorithm for a benchmark problem and three
applications, namely, optical coherence tomography, optical beam-forming
network tuning, and robot arm control. It is found that the DONE algorithm is
significantly faster than Bayesian optimization in the discussed problems,
while achieving a similar or better performance.
| Laurens Bliek, Hans R. G. W. Verstraete, Michel Verhaegen, and Sander
Wahls | null | 1603.09620 | null | null |
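The surrogate idea described above can be illustrated with a short loop: maintain a random Fourier expansion (random cosine features with linear weights) fitted to noisy measurements by regularized least squares, and choose the next measurement point by minimizing the surrogate. The test function, feature count, regularization and the absence of an exploration step are illustrative simplifications of the actual DONE algorithm.

```python
# Hedged sketch of an RFE surrogate driving the next measurement point.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d, n_features = 2, 200
Omega = rng.normal(size=(n_features, d))        # random frequencies
b = rng.uniform(0, 2 * np.pi, n_features)       # random phases

def rfe(x):
    return np.cos(Omega @ np.atleast_1d(x) + b)

def noisy_objective(x):                          # unknown function + measurement noise
    return np.sum((x - 0.3) ** 2) + 0.01 * rng.normal()

X_meas, y_meas = [], []
x = rng.uniform(-1, 1, d)
for it in range(30):
    X_meas.append(rfe(x))
    y_meas.append(noisy_objective(x))
    # Refit surrogate weights by ridge-regularized least squares.
    A = np.asarray(X_meas)
    w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(n_features), A.T @ np.asarray(y_meas))
    surrogate = lambda z: float(w @ rfe(z))
    # Next measurement point: minimize the surrogate starting from the current point.
    x = minimize(surrogate, x, method="L-BFGS-B", bounds=[(-1, 1)] * d).x
print("final point:", x)
```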
Differentiable Pooling for Unsupervised Acoustic Model Adaptation | cs.CL cs.LG | We present a deep neural network (DNN) acoustic model that includes
parametrised and differentiable pooling operators. Unsupervised acoustic model
adaptation is cast as the problem of updating the decision boundaries
implemented by each pooling operator. In particular, we experiment with two
types of pooling parametrisations: learned $L_p$-norm pooling and weighted
Gaussian pooling, in which the weights of both operators are treated as
speaker-dependent. We perform investigations using three different large
vocabulary speech recognition corpora: AMI meetings, TED talks and Switchboard
conversational telephone speech. We demonstrate that differentiable pooling
operators provide a robust and relatively low-dimensional way to adapt acoustic
models, with relative word error rates reductions ranging from 5--20% with
respect to unadapted systems, which themselves are better than the baseline
fully-connected DNN-based acoustic models. We also investigate how the proposed
techniques work under various adaptation conditions including the quality of
adaptation data and complementarity to other feature- and model-space
adaptation methods, as well as providing an analysis of the characteristics of
each of the proposed approaches.
| Pawel Swietojanski and Steve Renals | 10.1109/TASLP.2016.2584700 | 1603.09630 | null | null |
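A small sketch of a learned L_p-norm pooling operator as described above: each pooling unit aggregates a group of activations with a p-norm whose exponent is a trainable, speaker-dependent parameter. Gradients with respect to p and the weighted Gaussian pooling variant are not shown; shapes and the starting exponent are assumptions.

```python
# Minimal learned L_p-norm pooling sketch.
import numpy as np

def lp_pool(activations, p, eps=1e-8):
    """activations: (batch, n_pools, pool_size); p: (n_pools,) learnable exponents."""
    p = p[None, :, None]
    return (np.mean(np.abs(activations) ** p + eps, axis=2)) ** (1.0 / p[..., 0])

rng = np.random.default_rng(0)
acts = rng.normal(size=(4, 10, 5))               # 10 pooling units over groups of 5
p_speaker = np.full(10, 2.0)                     # start from L2 pooling; adapt per speaker
print(lp_pool(acts, p_speaker).shape)            # (4, 10)
```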
Data Collection for Interactive Learning through the Dialog | cs.CL cs.LG | This paper presents a dataset collected from natural dialogs which enables testing
the ability of dialog systems to learn new facts from user utterances
throughout the dialog. This interactive learning will help with one of the most
prevalent problems of open-domain dialog systems, which is the sparsity of
facts a dialog system can reason about. The proposed dataset, consisting of
1900 collected dialogs, allows simulating the interactive acquisition of
denotations and question explanations from users, which can be used for
interactive learning.
| Miroslav Vodol\'an, Filip Jur\v{c}\'i\v{c}ek | null | 1603.09631 | null | null |
Detection under Privileged Information | cs.CR cs.LG stat.ML | For well over a quarter century, detection systems have been driven by models
learned from input features collected from real or simulated environments. An
artifact (e.g., network event, potential malware sample, suspicious email) is
deemed malicious or non-malicious based on its similarity to the learned model
at runtime. However, the training of the models has been historically limited
to only those features available at runtime. In this paper, we consider an
alternate learning approach that trains models using "privileged"
information--features available at training time but not at runtime--to improve
the accuracy and resilience of detection systems. In particular, we adapt and
extend recent advances in knowledge transfer, model influence, and distillation
to enable the use of forensic or other data unavailable at runtime in a range
of security domains. An empirical evaluation shows that privileged information
increases precision and recall over a system with no privileged information: we
observe up to 7.7% relative decrease in detection error for fast-flux bot
detection, 8.6% for malware traffic detection, 7.3% for malware classification,
and 16.9% for face recognition. We explore the limitations and applications of
different privileged information techniques in detection systems. Such
techniques provide a new means for detection systems to learn from data that
would otherwise not be available at runtime.
| Z. Berkay Celik, Patrick McDaniel, Rauf Izmailov, Nicolas Papernot,
Ryan Sheatsley, Raquel Alvarez, Ananthram Swami | null | 1603.09638 | null | null |
Multi-task Recurrent Model for Speech and Speaker Recognition | cs.CL cs.LG cs.NE stat.ML | Although highly correlated, speech and speaker recognition have been regarded
as two independent tasks and studied by two communities. This is certainly not
the way that people behave: we decipher both speech content and speaker traits
at the same time. This paper presents a unified model to perform speech and
speaker recognition simultaneously and altogether. The model is based on a
unified neural network where the output of one task is fed to the input of the
other, leading to a multi-task recurrent network. Experiments show that the
joint model outperforms the task-specific models on both the two tasks.
| Zhiyuan Tang, Lantian Li and Dong Wang | null | 1603.09643 | null | null |
Pessimistic Uplift Modeling | cs.LG | Uplift modeling is a machine learning technique that aims to model treatment
effect heterogeneity. It has been used in the business and health sectors to
predict the effect of a specific action on a given individual. Despite its
advantages, uplift models show high sensitivity to noise and disturbance, which
leads to unreliable results. In this paper we present different approaches to
addressing the problem of uplift modeling and demonstrate how disturbance in the data
can affect uplift measurement. We propose a new approach, called
Pessimistic Uplift Modeling, that minimizes disturbance effects. We compare
our approach with existing uplift methods on simulated and real datasets.
The experiments show that our approach outperforms the existing approaches,
especially in the case of high noise data environment.
| Atef Shaar, Talel Abdessalem, Olivier Segard | null | 1603.09738 | null | null |
Hierarchical Quickest Change Detection via Surrogates | cs.LG cs.IT math.IT stat.ML | Change detection (CD) in time series data is a critical problem as it reveals
changes in the underlying generative processes driving the time series. Despite
having received significant attention, one important unexplored aspect is how
to efficiently utilize additional correlated information to improve the
detection and the understanding of changepoints. We propose hierarchical
quickest change detection (HQCD), a framework that formalizes the process of
incorporating additional correlated sources for early changepoint detection.
The core ideas behind HQCD are rooted in the theory of quickest detection and
HQCD can be regarded as its novel generalization to a hierarchical setting. The
sources are classified into targets and surrogates, and HQCD leverages this
structure to systematically assimilate observed data to update changepoint
statistics across layers. Decisions on actual changepoints are made by
minimizing the delay while still maintaining reliability bounds. In addition,
HQCD also uncovers interesting relations between changes at targets and
changes across surrogates. We validate HQCD for reliability and performance
against several state-of-the-art methods for both synthetic dataset (known
changepoints) and several real-life examples (unknown changepoints). Our
experiments indicate that we gain significant robustness without loss of
detection delay through HQCD. Our real-life experiments also showcase the
usefulness of the hierarchical setting by connecting the surrogate sources
(such as Twitter chatter) to target sources (such as Employment related
protests that ultimately lead to major uprisings).
| Prithwish Chakraborty and Sathappan Muthiah and Ravi Tandon and Naren
Ramakrishnan | null | 1603.09739 | null | null |
Variational reaction-diffusion systems for semantic segmentation | cs.CV cs.LG | A novel global energy model for multi-class semantic image segmentation is
proposed that admits very efficient exact inference and derivative calculations
for learning. Inference in this model is equivalent to MAP inference in a
particular kind of vector-valued Gaussian Markov random field, and ultimately
reduces to solving a linear system of linear PDEs known as a reaction-diffusion
system. Solving this system can be achieved in time scaling near-linearly in
the number of image pixels by reducing it to sequential FFTs, after a linear
change of basis. The efficiency and differentiability of the model make it
especially well-suited for integration with convolutional neural networks, even
allowing it to be used in interior, feature-generating layers and stacked
multiple times. Experimental results are shown demonstrating that the model can
be employed profitably in conjunction with different convolutional net
architectures, and that doing so compares favorably to joint training of a
fully-connected CRF with a convolutional net.
| Paul Vernaza | null | 1604.00092 | null | null |
Semi-supervised and Unsupervised Methods for Categorizing Posts in Web
Discussion Forums | cs.CL cs.IR cs.LG cs.SI | Web discussion forums are used by millions of people worldwide to share
information belonging to a variety of domains such as automotive vehicles,
pets, sports, etc. They typically contain posts that fall into different
categories such as problem, solution, feedback, spam, etc. Automatic
identification of these categories can aid information retrieval that is
tailored for specific user requirements. Previously, a number of supervised
methods have attempted to solve this problem; however, these depend on the
availability of abundant training data. A few existing unsupervised and
semi-supervised approaches are either focused on identifying a single category
or do not report category-specific performance. In contrast, this work proposes
unsupervised and semi-supervised methods that require no or minimal training
data to achieve this objective without compromising on performance. A
fine-grained analysis is also carried out to discuss their limitations. The
proposed methods are based on sequence models (specifically, Hidden Markov
Models) that can model language for each category using word and part-of-speech
probability distributions, and manually specified features. Empirical
evaluations across domains demonstrate that the proposed methods are better
suited for this task than existing ones.
| Krish Perumal | null | 1604.00119 | null | null |
Nonparametric Spherical Topic Modeling with Word Embeddings | cs.CL cs.IR cs.LG stat.ML | Traditional topic models do not account for semantic regularities in
language. Recent distributional representations of words exhibit semantic
consistency over directional metrics such as cosine similarity. However,
neither categorical nor Gaussian observational distributions used in existing
topic models are appropriate to leverage such correlations. In this paper, we
propose to use the von Mises-Fisher distribution to model the density of words
over a unit sphere. Such a representation is well-suited for directional data.
We use a Hierarchical Dirichlet Process for our base topic model and propose an
efficient inference algorithm based on Stochastic Variational Inference. This
model enables us to naturally exploit the semantic structures of word
embeddings while flexibly discovering the number of topics. Experiments
demonstrate that our method outperforms competitive approaches in terms of
topic coherence on two different text corpora while offering efficient
inference.
| Kayhan Batmanghelich, Ardavan Saeedi, Karthik Narasimhan, Sam Gershman | null | 1604.00126 | null | null |
Using Recurrent Neural Networks to Optimize Dynamical Decoupling for
Quantum Memory | quant-ph cs.LG cs.NE | We utilize machine learning models which are based on recurrent neural
networks to optimize dynamical decoupling (DD) sequences. DD is a relatively
simple technique for suppressing the errors in quantum memory for certain noise
models. In numerical simulations, we show that with minimum use of prior
knowledge and starting from random sequences, the models are able to improve
over time and eventually output DD-sequences with performance better than that
of the well known DD-families. Furthermore, our algorithm is easy to implement
in experiments to find solutions tailored to the specific hardware, as it
treats the figure of merit as a black box.
| Moritz August, Xiaotong Ni | 10.1103/PhysRevA.95.012335 | 1604.00279 | null | null |
Building Machines That Learn and Think Like People | cs.AI cs.CV cs.LG cs.NE stat.ML | Recent progress in artificial intelligence (AI) has renewed interest in
building systems that learn and think like people. Many advances have come from
using deep neural networks trained end-to-end in tasks such as object
recognition, video games, and board games, achieving performance that equals or
even beats humans in some respects. Despite their biological inspiration and
performance achievements, these systems differ from human intelligence in
crucial ways. We review progress in cognitive science suggesting that truly
human-like learning and thinking machines will have to reach beyond current
engineering trends in both what they learn, and how they learn it.
Specifically, we argue that these machines should (a) build causal models of
the world that support explanation and understanding, rather than merely
solving pattern recognition problems; (b) ground learning in intuitive theories
of physics and psychology, to support and enrich the knowledge that is learned;
and (c) harness compositionality and learning-to-learn to rapidly acquire and
generalize knowledge to new tasks and situations. We suggest concrete
challenges and promising routes towards these goals that can combine the
strengths of recent neural network advances with more structured cognitive
models.
| Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J.
Gershman | null | 1604.00289 | null | null |
A Semisupervised Approach for Language Identification based on Ladder
Networks | cs.CL cs.LG cs.NE | In this study we address the problem of training a neural network for language
identification using both labeled and unlabeled speech samples in the form of
i-vectors. We propose a neural network architecture that can also handle
out-of-set languages. We utilize a modified version of the recently proposed
Ladder Network semisupervised training procedure that optimizes the
reconstruction costs of a stack of denoising autoencoders. We show that this
approach can be successfully applied to the case where the training dataset is
composed of both labeled and unlabeled acoustic data. The results show improved
language identification performance on the NIST 2015 language identification dataset.
| Ehud Ben-Reuven and Jacob Goldberger | null | 1604.00317 | null | null |
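A minimal sketch of the semi-supervised principle behind the abstract above: combine a supervised loss on the labelled i-vectors with a denoising reconstruction loss on all i-vectors. This is a single-layer toy model with made-up sizes and weighting, not the paper's Ladder Network architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_lab, n_unlab = 100, 5, 32, 256            # i-vector dim, #languages, toy sizes
X_lab = rng.normal(size=(n_lab, d)); y = rng.integers(k, size=n_lab)
X_unlab = rng.normal(size=(n_unlab, d))
W_cls = rng.normal(scale=0.01, size=(d, k))       # classifier weights
W_enc = rng.normal(scale=0.01, size=(d, d))       # encoder weights (decoder is its transpose here)

def losses():
    # Supervised cross-entropy on labelled data.
    logits = X_lab @ W_cls
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    sup = -logp[np.arange(n_lab), y].mean()
    # Denoising reconstruction on unlabelled data.
    noisy = X_unlab + 0.3 * rng.normal(size=X_unlab.shape)
    recon = (noisy @ W_enc) @ W_enc.T
    unsup = ((recon - X_unlab) ** 2).mean()
    return sup, unsup

sup, unsup = losses()
print("total loss:", sup + 0.1 * unsup)           # 0.1 is a hypothetical weighting
```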
Embedding Lexical Features via Low-Rank Tensors | cs.CL cs.AI cs.LG | Modern NLP models rely heavily on engineered features, which often combine
word and contextual information into complex lexical features. Such combination
results in large numbers of features, which can lead to over-fitting. We
present a new model that represents complex lexical features---comprised of
parts for words, contextual information and labels---in a tensor that captures
conjunction information among these parts. We apply low-rank tensor
approximations to the corresponding parameter tensors to reduce the parameter
space and improve prediction speed. Furthermore, we investigate two methods for
handling features that include $n$-grams of mixed lengths. Our model achieves
state-of-the-art results on tasks in relation extraction, PP-attachment, and
preposition disambiguation.
| Mo Yu, Mark Dredze, Raman Arora, Matthew Gormley | null | 1604.00461 | null | null |
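A minimal sketch of the low-rank idea in the abstract above: instead of storing a full parameter tensor over (word, context, label) conjunctions, keep rank-r factor matrices and score a conjunction as a sum of elementwise factor products (a CP-style approximation). Dimensions and the scoring rule are illustrative, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_ctx, n_labels, r = 1000, 50, 10, 8
U = rng.normal(scale=0.1, size=(n_words, r))    # word factors
V = rng.normal(scale=0.1, size=(n_ctx, r))      # context factors
W = rng.normal(scale=0.1, size=(n_labels, r))   # label factors

def score(word_id, ctx_id, label_id):
    """Score of the conjunction feature, i.e. the approximated (word, ctx, label) tensor entry."""
    return np.sum(U[word_id] * V[ctx_id] * W[label_id])

print(score(word_id=3, ctx_id=7, label_id=1))
```

The full tensor would need n_words * n_ctx * n_labels parameters; the factorised form needs only r * (n_words + n_ctx + n_labels), which is the source of the parameter-space reduction and faster prediction described above.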
SAM: Support Vector Machine Based Active Queue Management | cs.NI cs.LG | Recent years have seen an increasing interest in the design of AQM (Active
Queue Management) controllers. The purpose of these controllers is to manage
the network congestion under varying loads, link delays and bandwidth. In this
paper, a new AQM controller is proposed which is trained by using the SVM
(Support Vector Machine) with the RBF (Radial Basis Function) kernel. The
proposed controller is called the support vector based AQM (SAM) controller.
The performance of the proposed controller has been compared with three
conventional AQM controllers, namely the Random Early Detection, Blue and
Proportional Plus Integral Controller. The preliminary simulation studies show
that the performance of the proposed controller is comparable to that of the
conventional controllers. However, the proposed controller is more efficient in
controlling the queue size than the conventional controllers.
| Muhammad Saleh Shah, Asim Imdad Wagan, Mukhtiar Ali Unar | null | 1604.00557 | null | null |
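A minimal sketch of the underlying idea: train an RBF-kernel SVM to map queue observations to a mark/drop decision. The features, labelling rule, and data below are hypothetical stand-ins; the paper's controller, training data, and traffic model are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
queue_len = rng.uniform(0, 200, size=500)        # packets currently in the queue
arrival_rate = rng.uniform(0, 10, size=500)      # arrival rate, toy scale
X = np.column_stack([queue_len, arrival_rate])
# Toy labelling rule standing in for observed congestion: drop when the queue is heavily loaded.
y = (queue_len + 15 * arrival_rate > 180).astype(int)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print("drop decision for (queue=120, rate=6):", clf.predict([[120, 6]])[0])
```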
Multi-Relational Learning at Scale with ADMM | stat.ML cs.AI cs.LG | Learning from multi-relational data that contains noise, ambiguities, or
duplicate entities is essential to a wide range of applications such as
statistical inference based on Web Linked Data, recommender systems,
computational biology, and natural language processing. These tasks usually
require working with very large and complex datasets - e.g., the Web graph -
however, current approaches to multi-relational learning are not practical for
such scenarios due to their high computational complexity and poor scalability
on large data.
In this paper, we propose a novel and scalable approach for multi-relational
factorization based on consensus optimization. Our model, called ConsMRF, is
based on the Alternating Direction Method of Multipliers (ADMM) framework,
which enables us to optimize each target relation using a smaller set of
parameters than the state-of-the-art competitors in this task.
Due to ADMM's nature, ConsMRF can be easily parallelized which makes it
suitable for large multi-relational data. Experiments on large Web datasets -
derived from DBpedia, Wikipedia and YAGO - show the efficiency and performance
improvement of ConsMRF over strong competitors. In addition, ConsMRF's
near-linear scalability indicates great potential to tackle Web-scale problem
sizes.
| Lucas Drumond, Ernesto Diaz-Aviles, and Lars Schmidt-Thieme | null | 1604.00647 | null | null |
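A minimal sketch of consensus ADMM, the optimization pattern the abstract above builds on: each relation keeps a local copy of shared parameters, a consensus variable ties the copies together, and the per-relation updates can run in parallel. The quadratic local objectives below are toy stand-ins for per-relation factorization losses, not ConsMRF itself.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_rel, rho = 5, 3, 1.0
A = [rng.normal(size=(20, d)) for _ in range(n_rel)]   # per-relation "data"
b = [rng.normal(size=20) for _ in range(n_rel)]
x = [np.zeros(d) for _ in range(n_rel)]                # local parameter copies
u = [np.zeros(d) for _ in range(n_rel)]                # scaled dual variables
z = np.zeros(d)                                        # consensus variable

for _ in range(100):
    # x-update: each relation solves its own regularised least-squares problem (parallelisable).
    for r in range(n_rel):
        x[r] = np.linalg.solve(A[r].T @ A[r] + rho * np.eye(d),
                               A[r].T @ b[r] + rho * (z - u[r]))
    # z-update: average of local copies plus duals enforces consensus.
    z = np.mean([x[r] + u[r] for r in range(n_rel)], axis=0)
    # Dual updates accumulate the remaining disagreement.
    for r in range(n_rel):
        u[r] = u[r] + x[r] - z

print("consensus parameters:", np.round(z, 3))
```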
A Characterization of the Non-Uniqueness of Nonnegative Matrix
Factorizations | cs.LG stat.ML | Nonnegative matrix factorization (NMF) is a popular dimension reduction
technique that produces an interpretable decomposition of the data into parts.
However, this decomposition is not generally identifiable (even up to
permutation and scaling). While other studies have provided criteria under which
NMF is identifiable, we present the first (to our knowledge) characterization
of the non-identifiability of NMF. We describe exactly when and how
non-uniqueness can occur, which has important implications for algorithms to
efficiently discover alternate solutions, if they exist.
| W. Pan, F. Doshi-Velez | null | 1604.00653 | null | null |
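A small numeric illustration of the non-uniqueness discussed above (not the paper's characterization): two factor pairs that are not related by permutation or scaling reconstruct the same nonnegative matrix exactly, because the columns of X lie inside more than one nonnegative cone.

```python
import numpy as np

X = np.array([[1.0, 0.5],
              [0.5, 1.0]])

# Factorization 1: the trivial basis.
W1 = np.eye(2)
H1 = X.copy()

# Factorization 2: a different nonnegative basis whose cone still contains X's columns.
W2 = np.array([[1.0, 0.2],
               [0.2, 1.0]])
H2 = np.linalg.solve(W2, X)          # happens to be nonnegative for this X

print(np.allclose(W1 @ H1, X), np.allclose(W2 @ H2, X))   # True True
print("H2 >= 0:", (H2 >= -1e-12).all())
```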
Character-Level Question Answering with Attention | cs.CL cs.AI cs.LG | We show that a character-level encoder-decoder framework can be successfully
applied to question answering with a structured knowledge base. We use our
model for single-relation question answering and demonstrate the effectiveness
of our approach on the SimpleQuestions dataset (Bordes et al., 2015), where we
improve state-of-the-art accuracy from 63.9% to 70.9%, without use of
ensembles. Importantly, our character-level model has 16x fewer parameters than
an equivalent word-level model, can be learned with significantly less data
compared to previous work, which relies on data augmentation, and is robust to
new entities in testing.
| David Golub, Xiaodong He | null | 1604.00727 | null | null |
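A minimal sketch of dot-product attention over character-level encoder states, the mechanism an encoder-decoder model of the kind described above relies on. The random "encoder states" and decoder state stand in for the outputs of the paper's character encoder; shapes are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
question = "who wrote jane eyre?"
T, d = len(question), 32
enc_states = rng.normal(size=(T, d))     # one state per character (stand-in)
dec_state = rng.normal(size=d)           # current decoder state (stand-in)

scores = enc_states @ dec_state
weights = np.exp(scores - scores.max())  # stable softmax over characters
weights /= weights.sum()
context = weights @ enc_states           # attention-weighted summary of the question
print(context.shape, weights.argmax())   # (32,) and the most attended character index
```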