title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
A Survey of Credit Card Fraud Detection Techniques: Data and Technique
Oriented Perspective | cs.CR cs.AI cs.LG | Credit card plays a very important role in today's economy. It has become an
unavoidable part of household, business and global activities. Although using
credit cards provides enormous benefits when used carefully and
responsibly, significant credit and financial damage may be caused by
fraudulent activities. Many techniques have been proposed to confront the
growth in credit card fraud. Although all of these techniques share the same
goal of preventing credit card fraud, each has its own drawbacks,
advantages and characteristics. In this paper, after investigating the difficulties
of credit card fraud detection, we seek to review the state of the art in
credit card fraud detection techniques, data sets and evaluation criteria. The
advantages and disadvantages of fraud detection methods are enumerated and
compared. Furthermore, a classification of the reviewed techniques into two main
fraud detection approaches, namely misuse (supervised) and anomaly detection
(unsupervised), is presented. A further classification of techniques is proposed
based on their capability to process numerical and categorical data sets.
Different data sets used in the literature are then described and grouped into
real and synthesized data, and the effective and common attributes are extracted
for further use. Moreover, the evaluation criteria employed in the literature are
collected and discussed. Finally, open issues in credit card fraud detection are
outlined as guidelines for new researchers.
| Samaneh Sorournejad, Zahra Zojaji, Reza Ebrahimi Atani, Amir Hassan
Monadjemi | null | 1611.06439 | null | null |
Pruning Convolutional Neural Networks for Resource Efficient Inference | cs.LG stat.ML | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102), relying only on first-order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach.
| Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | null | 1611.0644 | null | null |
Fast Video Classification via Adaptive Cascading of Deep Models | cs.CV cs.LG cs.NE | Recent advances have enabled "oracle" classifiers that can classify across
many classes and input distributions with high accuracy without retraining.
However, these classifiers are relatively heavyweight, so that applying them to
classify video is costly. We show that day-to-day video exhibits highly skewed
class distributions over the short term, and that these distributions can be
classified by much simpler models. We formulate the problem of detecting the
short-term skews online and exploiting models based on it as a new sequential
decision making problem dubbed the Online Bandit Problem, and present a new
algorithm to solve it. When applied to recognizing faces in TV shows and
movies, we realize end-to-end classification speedups of 2.4-7.8x/2.6-11.2x (on
GPU/CPU) relative to a state-of-the-art convolutional neural network, at
competitive accuracy.
| Haichen Shen, Seungyeop Han, Matthai Philipose, Arvind Krishnamurthy | null | 1611.06453 | null | null |
Time Series Classification from Scratch with Deep Neural Networks: A
Strong Baseline | cs.LG cs.NE stat.ML | We propose a simple but strong baseline for time series classification from
scratch with deep neural networks. Our proposed baseline models are pure
end-to-end without any heavy preprocessing on the raw data or feature crafting.
The proposed Fully Convolutional Network (FCN) achieves premium performance
compared to other state-of-the-art approaches, and our exploration of very deep neural
networks with the ResNet structure is also competitive. The global average
pooling in our convolutional model enables the exploitation of the Class
Activation Map (CAM) to find out the contributing region in the raw data for
the specific labels. Our models provide a simple choice for real-world
applications and a good starting point for future research. An overall
analysis is provided to discuss the generalization capability of our models,
learned features, network structures and the classification semantics.
| Zhiguang Wang, Weizhong Yan, Tim Oates | null | 1611.06455 | null | null |
Dealing with Range Anxiety in Mean Estimation via Statistical Queries | cs.LG stat.ML | We give algorithms for estimating the expectation of a given real-valued
function $\phi:X\to {\bf R}$ on a sample drawn randomly from some unknown
distribution $D$ over domain $X$, namely ${\bf E}_{{\bf x}\sim D}[\phi({\bf
x})]$. Our algorithms work in two well-studied models of restricted access to
data samples. The first one is the statistical query (SQ) model in which an
algorithm has access to an SQ oracle for the input distribution $D$ over $X$
instead of i.i.d. samples from $D$. Given a query function $\phi:X \to [0,1]$,
the oracle returns an estimate of ${\bf E}_{{\bf x}\sim D}[\phi({\bf x})]$
within some tolerance $\tau$. The second is a model in which only a single bit
is communicated from each sample. In both of these models the error obtained
using a naive implementation would scale polynomially with the range of the
random variable $\phi({\bf x})$ (which might even be infinite). In contrast,
without restrictions on access to data the expected error scales with the
standard deviation of $\phi({\bf x})$. Here we give a simple algorithm whose
error scales linearly in the standard deviation of $\phi({\bf x})$ and
logarithmically with an upper bound on the second moment of $\phi({\bf x})$.
As corollaries, we obtain algorithms for high dimensional mean estimation and
stochastic convex optimization in these models that work in more general
settings than previously known solutions.
| Vitaly Feldman | null | 1611.06475 | null | null |
Prototypical Recurrent Unit | cs.LG | Despite the great successes of deep learning, the effectiveness of deep
neural networks has not been understood at any theoretical depth. This work is
motivated by the thrust of developing a deeper understanding of recurrent
neural networks, particularly LSTM/GRU-like networks. As the highly complex
structure of the recurrent unit in LSTM and GRU networks makes them difficult
to analyze, our methodology in this research theme is to construct an
alternative recurrent unit that is as simple as possible and yet also captures
the key components of LSTM/GRU recurrent units. Such a unit can then be used
for the study of recurrent networks and its structural simplicity may allow
easier analysis. Towards that goal, we take a system-theoretic perspective to
design a new recurrent unit, which we call the prototypical recurrent unit
(PRU). Despite having minimal complexity, the PRU is demonstrated experimentally
to have performance comparable to GRU and LSTM units. This establishes PRU
networks as a prototype for future study of LSTM/GRU-like recurrent networks.
This paper also studies the memorization abilities of LSTM, GRU and PRU
networks, motivated by the folk belief that such networks possess long-term
memory. For this purpose, we design a simple and controllable task, called
``memorization problem'', where the networks are trained to memorize certain
targeted information. We show that the memorization performance of all three
networks depends on the amount of targeted information, the amount of
``interfering'' information, and the state space dimension of the recurrent
unit. Experiments are also performed for another controllable task, the adding
problem, and similar conclusions are obtained.
| Dingkun Long, Richong Zhang, Yongyi Mao | null | 1611.0653 | null | null |
Efficient Stochastic Inference of Bitwise Deep Neural Networks | cs.NE cs.LG | Recently published methods enable training of bitwise neural networks, which
allow weights to be represented with as little as a single bit each. We present a
method that exploits ensemble decisions based on multiple stochastically
sampled network models to increase performance figures of bitwise neural
networks in terms of classification accuracy at inference. Our experiments with
the CIFAR-10 and GTSRB datasets show that the performance of such network
ensembles surpasses the performance of the high-precision base model. With this
technique we achieve a best classification error of 5.81% on the CIFAR-10 test set using
bitwise networks. Concerning inference on embedded systems we evaluate these
bitwise networks using a hardware efficient stochastic rounding procedure. Our
work contributes to efficient embedded bitwise neural networks.
| Sebastian Vogel, Christoph Schorn, Andre Guntoro, Gerd Ascheid | null | 1611.06539 | null | null |
Variational Boosting: Iteratively Refining Posterior Approximations | stat.ML cs.LG stat.ME | We propose a black-box variational inference method to approximate
intractable distributions with an increasingly rich approximating class. Our
method, termed variational boosting, iteratively refines an existing
variational approximation by solving a sequence of optimization problems,
allowing the practitioner to trade computation time for accuracy. We show how
to expand the variational approximating class by incorporating additional
covariance structure and by introducing new components to form a mixture. We
apply variational boosting to synthetic and real statistical models, and show
that resulting posterior inferences compare favorably to existing posterior
approximation algorithms in both accuracy and efficiency.
| Andrew C. Miller, Nicholas Foti, Ryan P. Adams | null | 1611.06585 | null | null |
Temporal Generative Adversarial Nets with Singular Value Clipping | cs.LG cs.CV | In this paper, we propose a generative model, Temporal Generative Adversarial
Nets (TGAN), which can learn a semantic representation of unlabeled videos, and
is capable of generating videos. Unlike existing Generative Adversarial Nets
(GAN)-based methods that generate videos with a single generator consisting of
3D deconvolutional layers, our model exploits two different types of
generators: a temporal generator and an image generator. The temporal generator
takes a single latent variable as input and outputs a set of latent variables,
each of which corresponds to an image frame in a video. The image generator
transforms a set of such latent variables into a video. To deal with
instability in training of GAN with such advanced networks, we adopt a recently
proposed model, Wasserstein GAN, and propose a novel method to train it stably
in an end-to-end manner. The experimental results demonstrate the effectiveness
of our methods.
| Masaki Saito, Eiichi Matsumoto, Shunta Saito | null | 1611.06624 | null | null |
Deep Learning for the Classification of Lung Nodules | q-bio.QM cs.CV cs.LG | Deep learning, as a promising new area of machine learning, has attracted
rapidly increasing attention in the field of medical imaging. Compared to the
conventional machine learning methods, deep learning requires no hand-tuned
feature extractor, and has shown a superior performance in many visual object
recognition applications. In this study, we develop a deep convolutional neural
network (CNN) and apply it to thoracic CT images for the classification of lung
nodules. We present the CNN architecture and classification accuracy for the
original images of lung nodules. In order to understand the features of lung
nodules, we further construct new datasets, based on the combination of
artificial geometric nodules and some transformations of the original images,
as well as a stochastic nodule shape model. It is found that simplistic
geometric nodules cannot capture the important features of lung nodules.
| He Yang, Hengyong Yu and Ge Wang | null | 1611.06651 | null | null |
Scalable Adaptive Stochastic Optimization Using Random Projections | stat.ML cs.LG | Adaptive stochastic gradient methods such as AdaGrad have gained popularity
in particular for training deep neural networks. The most commonly used and
studied variant maintains a diagonal matrix approximation to second order
information by accumulating past gradients which are used to tune the step size
adaptively. In certain situations the full-matrix variant of AdaGrad is
expected to attain better performance, however in high dimensions it is
computationally impractical. We present Ada-LR and RadaGrad, two computationally
efficient approximations to full-matrix AdaGrad based on randomized
dimensionality reduction. They are able to capture dependencies between
features and achieve similar performance to full-matrix AdaGrad but at a much
smaller computational cost. We show that the regret of Ada-LR is close to the
regret of full-matrix AdaGrad, which can have an up to exponentially smaller
dependence on the dimension than the diagonal variant. Empirically, we show
that Ada-LR and RadaGrad perform similarly to full-matrix AdaGrad. On the task
of training convolutional neural networks as well as recurrent neural networks,
RadaGrad achieves faster convergence than diagonal AdaGrad.
| Gabriel Krummenacher and Brian McWilliams and Yannic Kilcher and
Joachim M. Buhmann and Nicolai Meinshausen | null | 1611.06652 | null | null |
Error analysis of regularized least-square regression with Fredholm
kernel | math.ST cs.LG stat.TH | Learning with Fredholm kernel has attracted increasing attention recently
since it can effectively utilize the data information to improve the prediction
performance. Despite rapid progress on theoretical and experimental
evaluations, its generalization analysis has not been explored in learning
theory literature. In this paper, we establish the generalization bound of
least square regularized regression with Fredholm kernel, which implies that
the fast learning rate $O(l^{-1})$ can be reached under mild capacity conditions.
Simulated examples show that this Fredholm regression algorithm can achieve
satisfactory prediction performance.
| Yanfang Tao, Peipei Yuan, Biqin Song | null | 1611.0667 | null | null |
Probabilistic Duality for Parallel Gibbs Sampling without Graph Coloring | cs.LG math.PR stat.ML | We present a new notion of probabilistic duality for random variables
involving mixture distributions. Using this notion, we show how to implement a
highly-parallelizable Gibbs sampler for weakly coupled discrete pairwise
graphical models with strictly positive factors that requires almost no
preprocessing and is easy to implement. Moreover, we show how our method can be
combined with blocking to improve mixing. Even though our method leads to
inferior mixing times compared to a sequential Gibbs sampler, we argue that our
method is still very useful for large dynamic networks, where factors are added
and removed on a continuous basis, as it is hard to maintain a graph coloring
in this setup. Similarly, our method is useful for parallelizing Gibbs sampling
in graphical models that do not allow for graph colorings with a small number
of colors such as densely connected graphs.
| Lars Mescheder, Sebastian Nowozin and Andreas Geiger | null | 1611.06684 | null | null |
Training Sparse Neural Networks | cs.CV cs.LG | Deep neural networks with lots of parameters are typically used for
large-scale computer vision tasks such as image classification. This is a
result of using dense matrix multiplications and convolutions. However, sparse
computations are known to be much more efficient. In this work, we train and
build neural networks which implicitly use sparse computations. We introduce
additional gate variables to perform parameter selection and show that this is
equivalent to using a spike-and-slab prior. We experimentally validate our
method on both small and large networks and achieve state-of-the-art
compression results for sparse neural network models.
| Suraj Srinivas, Akshayvarun Subramanya, R. Venkatesh Babu | null | 1611.06694 | null | null |
On the convergence of gradient-like flows with noisy gradient input | math.OC cs.LG math.DS | In view of solving convex optimization problems with noisy gradient input, we
analyze the asymptotic behavior of gradient-like flows under stochastic
disturbances. Specifically, we focus on the widely studied class of mirror
descent schemes for convex programs with compact feasible regions, and we
examine the dynamics' convergence and concentration properties in the presence
of noise. In the vanishing noise limit, we show that the dynamics converge to
the solution set of the underlying problem (a.s.). Otherwise, when the noise is
persistent, we show that the dynamics are concentrated around interior
solutions in the long run, and they converge to boundary solutions that are
sufficiently "sharp". Finally, we show that a suitably rectified variant of the
method converges irrespective of the magnitude of the noise (or the structure
of the underlying convex program), and we derive an explicit estimate for its
rate of convergence.
| Panayotis Mertikopoulos and Mathias Staudigl | null | 1611.0673 | null | null |
Emergence of Compositional Representations in Restricted Boltzmann
Machines | physics.data-an cond-mat.dis-nn cs.LG stat.ML | Extracting automatically the complex set of features composing real
high-dimensional data is crucial for achieving high performance in
machine-learning tasks. Restricted Boltzmann Machines (RBM) are empirically
known to be efficient for this purpose, and to be able to generate distributed
and graded representations of the data. We characterize the structural
conditions (sparsity of the weights, low effective temperature, nonlinearities
in the activation functions of hidden units, and adaptation of fields
maintaining the activity in the visible layer) allowing RBM to operate in such
a compositional phase. Evidence is provided by the replica analysis of an
adequate statistical ensemble of random RBMs and by RBM trained on the
handwritten digits dataset MNIST.
| J\'er\^ome Tubiana (LPTENS), R\'emi Monasson (LPTENS) | 10.1103/PhysRevLett.118.138301 | 1611.06759 | null | null |
Effective Deterministic Initialization for $k$-Means-Like Methods via
Local Density Peaks Searching | cs.LG cs.CV | The $k$-means clustering algorithm is popular but has the following main
drawbacks: 1) the number of clusters, $k$, needs to be provided by the user in
advance, 2) it can easily reach local minima with randomly selected initial
centers, 3) it is sensitive to outliers, and 4) it can only deal with well
separated hyperspherical clusters. In this paper, we propose a Local Density
Peaks Searching (LDPS) initialization framework to address these issues. The
LDPS framework includes two basic components: one of them is the local density
that characterizes the density distribution of a data set, and the other is the
local distinctiveness index (LDI) which we introduce to characterize how
distinctive a data point is compared with its neighbors. Based on these two
components, we search for the local density peaks which are characterized with
high local densities and high LDIs to deal with 1) and 2). Moreover, we detect
outliers, characterized by low local densities but high LDIs, and exclude them
before clustering begins. Finally, we apply the LDPS initialization
framework to $k$-medoids, which is a variant of $k$-means and chooses data
samples as centers, with diverse similarity measures other than the Euclidean
distance to fix the last drawback of $k$-means. Combining the LDPS
initialization framework with $k$-means and $k$-medoids, we obtain two novel
clustering methods called LDPS-means and LDPS-medoids, respectively.
Experiments on synthetic data sets verify the effectiveness of the proposed
methods, especially when the ground truth of the cluster number $k$ is large.
Further, experiments on several real world data sets, Handwritten Pendigits,
Coil-20, Coil-100 and Olivetti Face Database, illustrate that our methods give
superior performance compared to the analogous approaches on both estimating $k$ and
unsupervised object categorization.
| Fengfu Li, Hong Qiao, and Bo Zhang | null | 1611.06777 | null | null |
Generalized Dropout | cs.LG cs.AI cs.CV cs.NE | Deep Neural Networks often require good regularizers to generalize well.
Dropout is one such regularizer that is widely used among Deep Learning
practitioners. Recent work has shown that Dropout can also be viewed as
performing Approximate Bayesian Inference over the network parameters. In this
work, we generalize this notion and introduce a rich family of regularizers
which we call Generalized Dropout. One set of methods in this family, called
Dropout++, is a version of Dropout with trainable parameters. Classical Dropout
emerges as a special case of this method. Another member of this family selects
the width of neural network layers. Experiments show that these methods help in
improving generalization performance over Dropout.
| Suraj Srinivas, R. Venkatesh Babu | null | 1611.06791 | null | null |
Options Discovery with Budgeted Reinforcement Learning | cs.LG cs.AI | We consider the problem of learning hierarchical policies for Reinforcement
Learning able to discover options, an option corresponding to a sub-policy over
a set of primitive actions. Different models have been proposed during the last
decade that usually rely on a predefined set of options. We specifically
address the problem of automatically discovering options in decision processes.
We describe a new learning model called Budgeted Option Neural Network (BONN)
able to discover options based on a budgeted learning objective. The BONN model
is evaluated on different classical RL problems, demonstrating interesting
results both quantitatively and qualitatively.
| Aur\'elia L\'eon, Ludovic Denoyer | null | 1611.06824 | null | null |
Probabilistic structure discovery in time series data | stat.ML cs.LG | Existing methods for structure discovery in time series data construct
interpretable, compositional kernels for Gaussian process regression models.
While the learned Gaussian process model provides posterior mean and variance
estimates, typically the structure is learned via a greedy optimization
procedure. This restricts the space of possible solutions and leads to
over-confident uncertainty estimates. We introduce a fully Bayesian approach,
inferring a full posterior over structures, which more reliably captures the
uncertainty of the model.
| David Janz, Brooks Paige, Tom Rainforth, Jan-Willem van de Meent,
Frank Wood | null | 1611.06863 | null | null |
Learning From Graph Neighborhoods Using LSTMs | cs.LG cs.AI stat.ML | Many prediction problems can be phrased as inferences over local
neighborhoods of graphs. The graph represents the interaction between entities,
and the neighborhood of each entity contains information that allows the
inferences or predictions. We present an approach for applying machine learning
directly to such graph neighborhoods, yielding predictions for graph nodes on
the basis of the structure of their local neighborhood and the features of the
nodes in it. Our approach allows predictions to be learned directly from
examples, bypassing the step of creating and tuning an inference model or
summarizing the neighborhoods via a fixed set of hand-crafted features. The
approach is based on a multi-level architecture built from Long Short-Term
Memory neural nets (LSTMs); the LSTMs learn how to summarize the neighborhood
from data. We demonstrate the effectiveness of the proposed technique on a
synthetic example and on real-world data related to crowdsourced grading,
Bitcoin transactions, and Wikipedia edit reversions.
| Rakshit Agrawal, Luca de Alfaro, Vassilis Polychronopoulos | null | 1611.06882 | null | null |
Unsupervised Learning for Lexicon-Based Classification | cs.LG cs.CL stat.ML | In lexicon-based classification, documents are assigned labels by comparing
the number of words that appear from two opposed lexicons, such as positive and
negative sentiment. Creating such word lists is often easier than labeling
instances, and they can be debugged by non-experts if classification
performance is unsatisfactory. However, there is little analysis or
justification of this classification heuristic. This paper describes a set of
assumptions that can be used to derive a probabilistic justification for
lexicon-based classification, as well as an analysis of its expected accuracy.
One key assumption behind lexicon-based classification is that all words in
each lexicon are equally predictive. This is rarely true in practice, which is
why lexicon-based approaches are usually outperformed by supervised classifiers
that learn distinct weights on each word from labeled instances. This paper
shows that it is possible to learn such weights without labeled data, by
leveraging co-occurrence statistics across the lexicons. This offers the best
of both worlds: light supervision in the form of lexicons, and data-driven
classification with higher accuracy than traditional word-counting heuristics.
| Jacob Eisenstein | null | 1611.06933 | null | null |
Statistical Learning for OCR Text Correction | cs.CV cs.CL cs.LG | The accuracy of Optical Character Recognition (OCR) is crucial to the success
of subsequent applications in the text analysis pipeline. Recent models of
OCR post-processing significantly improve the quality of OCR-generated text,
but are still prone to suggest correction candidates from limited observations
while insufficiently accounting for the characteristics of OCR errors. In this
paper, we show how to enlarge candidate suggestion space by using external
corpus and integrating OCR-specific features in a regression approach to
correct OCR-generated errors. The evaluation results show that our model can
correct 61.5% of the OCR-errors (considering the top 1 suggestion) and 71.5% of
the OCR-errors (considering the top 3 suggestions), for cases where the
theoretical correction upper-bound is 78%.
| Jie Mei, Aminul Islam, Yajing Wu, Abidalrahman Moh'd, Evangelos E.
Milios | null | 1611.0695 | null | null |
Associative Adversarial Networks | cs.LG cs.AI | We propose a higher-level associative memory for learning adversarial
networks. Generative adversarial network (GAN) framework has a discriminator
and a generator network. The generator (G) maps white noise (z) to data samples
while the discriminator (D) maps data samples to a single scalar. To do so, G
learns how to map from high-level representation space to data space, and D
learns to do the opposite. We argue that higher-level representation spaces
need not necessarily follow a uniform probability distribution. In this work,
we use Restricted Boltzmann Machines (RBMs) as a higher-level associative
memory and learn the probability distribution for the high-level features
generated by D. The associative memory samples its underlying probability
distribution and G learns how to map these samples to data space. The proposed
associative adversarial networks (AANs) are generative models in the
higher-levels of the learning, and use adversarial non-stochastic models D and
G for learning the mapping between data and higher-level representation spaces.
Experiments show the potential of the proposed networks.
| Tarik Arici and Asli Celikyilmaz | null | 1611.06953 | null | null |
Measuring Sample Quality with Diffusions | stat.ML cs.LG math.PR | Stein's method for measuring convergence to a continuous target distribution
relies on an operator characterizing the target and Stein factor bounds on the
solutions of an associated differential equation. While such operators and
bounds are readily available for a diversity of univariate targets, few
multivariate targets have been analyzed. We introduce a new class of
characterizing operators based on Ito diffusions and develop explicit
multivariate Stein factor bounds for any target with a fast-coupling Ito
diffusion. As example applications, we develop computable and
convergence-determining diffusion Stein discrepancies for log-concave,
heavy-tailed, and multimodal targets and use these quality measures to select
the hyperparameters of biased Markov chain Monte Carlo (MCMC) samplers, compare
random and deterministic quadrature rules, and quantify bias-variance tradeoffs
in approximate MCMC. Our results establish a near-linear relationship between
diffusion Stein discrepancies and Wasserstein distances, improving upon past
work even for strongly log-concave targets. The exposed relationship between
Stein factors and Markov process coupling may be of independent interest.
| Jackson Gorham, Andrew B. Duncan, Sebastian J. Vollmer, and Lester
Mackey | null | 1611.06972 | null | null |
Robust end-to-end deep audiovisual speech recognition | cs.CL cs.LG cs.SD | Speech is one of the most effective ways of communication among humans. Even
though audio is the most common way of transmitting speech, very important
information can be found in other modalities, such as vision. Vision is
particularly useful when the acoustic signal is corrupted. Multi-modal speech
recognition, however, has not yet found widespread use, mostly because the
temporal alignment and fusion of the different information sources is
challenging.
This paper presents an end-to-end audiovisual speech recognizer (AVSR), based
on recurrent neural networks (RNN) with a connectionist temporal classification
(CTC) loss function. CTC creates sparse "peaky" output activations, and we
analyze the differences in the alignments of output targets (phonemes or
visemes) between audio-only, video-only, and audio-visual feature
representations. We present the first such experiments on the large vocabulary
IBM ViaVoice database, which outperform previously published approaches on
phone accuracy in clean and noisy conditions.
| Ramon Sanabria, Florian Metze and Fernando De La Torre | null | 1611.06986 | null | null |
Spatial contrasting for deep unsupervised learning | stat.ML cs.LG | Convolutional networks have marked their place over the last few years as the
best performing model for various visual tasks. They are, however, most suited
for supervised learning from large amounts of labeled data. Previous attempts
have been made to use unlabeled data to improve model performance by applying
unsupervised techniques. These attempts require different architectures and
training methods. In this work we present a novel approach for unsupervised
training of Convolutional networks that is based on contrasting between spatial
regions within images. This criterion can be employed within conventional
neural networks and trained using standard techniques such as SGD and
back-propagation, thus complementing supervised methods.
| Elad Hoffer, Itay Hubara, Nir Ailon | null | 1611.06996 | null | null |
GRAM: Graph-based Attention Model for Healthcare Representation Learning | cs.LG stat.ML | Deep learning methods exhibit promising performance for predictive modeling
in healthcare, but two important challenges remain: (1) data insufficiency: often
in healthcare predictive modeling, the sample size is insufficient for deep
learning methods to achieve satisfactory results; (2) interpretation: the
representations learned by deep learning methods should align with medical
knowledge. To address these challenges, we propose a GRaph-based Attention
Model (GRAM), which supplements electronic health records (EHR) with hierarchical
information inherent to medical ontologies. Based on the data volume and the
ontology structure, GRAM represents a medical concept as a combination of its
ancestors in the ontology via an attention mechanism. We compared predictive
performance (i.e. accuracy, data needs, interpretability) of GRAM to various
methods including the recurrent neural network (RNN) in two sequential
diagnoses prediction tasks and one heart failure prediction task. Compared to
the basic RNN, GRAM achieved 10% higher accuracy for predicting diseases rarely
observed in the training data and 3% improved area under the ROC curve for
predicting heart failure using an order of magnitude less training data.
Additionally, unlike other methods, the medical concept representations learned
by GRAM are well aligned with the medical ontology. Finally, GRAM exhibits
intuitive attention behaviors by adaptively generalizing to higher level
concepts when facing data insufficiency at the lower level concepts.
| Edward Choi, Mohammad Taha Bahadori, Le Song, Walter F. Stewart,
Jimeng Sun | null | 1611.07012 | null | null |
An Efficient Training Algorithm for Kernel Survival Support Vector
Machines | cs.LG cs.AI stat.ML | Survival analysis is a fundamental tool in medical research to identify
predictors of adverse events and develop systems for clinical decision support.
In order to leverage large amounts of patient data, efficient optimisation
routines are paramount. We propose an efficient training algorithm for the
kernel survival support vector machine (SSVM). We directly optimise the primal
objective function and employ truncated Newton optimisation and order statistic
trees to significantly lower computational costs compared to previous training
algorithms, which require $O(n^4)$ space and $O(p n^6)$ time for datasets with
$n$ samples and $p$ features. Our results demonstrate that our proposed
optimisation scheme allows analysing data of a much larger scale with no loss
in prediction performance. Experiments on synthetic and 5 real-world datasets
show that our technique outperforms existing kernel SSVM formulations if the
amount of right censoring is high ($\geq85\%$), and performs comparably
otherwise.
| Sebastian P\"olsterl, Nassir Navab, Amin Katouzian | null | 1611.07054 | null | null |
The Recycling Gibbs Sampler for Efficient Learning | stat.CO cs.LG stat.ML | Monte Carlo methods are essential tools for Bayesian inference. Gibbs
sampling is a well-known Markov chain Monte Carlo (MCMC) algorithm, extensively
used in signal processing, machine learning, and statistics, employed to draw
samples from complicated high-dimensional posterior distributions. The key
point for the successful application of the Gibbs sampler is the ability to
draw samples efficiently from the full-conditional probability density
functions. Since in the general case this is not possible, in order to speed up
the convergence of the chain, it is required to generate auxiliary samples
whose information is eventually disregarded. In this work, we show that these
auxiliary samples can be recycled within the Gibbs estimators, improving their
efficiency with no extra cost. This novel scheme arises naturally after
pointing out the relationship between the standard Gibbs sampler and the chain
rule used for sampling purposes. Numerical simulations involving simple and
real inference problems confirm the excellent performance of the proposed
scheme in terms of accuracy and computational efficiency. In particular we give
empirical evidence of performance in a toy example, inference of Gaussian
processes hyperparameters, and learning dependence graphs through regression.
| Luca Martino, Victor Elvira, Gustau Camps-Valls | 10.1016/j.dsp.2017.11.012 | 1611.07056 | null | null |
A Deep Learning Approach for Joint Video Frame and Reward Prediction in
Atari Games | cs.AI cs.LG stat.ML | Reinforcement learning is concerned with identifying reward-maximizing
behaviour policies in environments that are initially unknown. State-of-the-art
reinforcement learning approaches, such as deep Q-networks, are model-free and
learn to act effectively across a wide range of environments such as Atari
games, but require huge amounts of data. Model-based techniques are more
data-efficient, but need to acquire explicit knowledge about the environment.
In this paper, we take a step towards using model-based techniques in
environments with a high-dimensional visual state space by demonstrating that
it is possible to learn system dynamics and the reward structure jointly. Our
contribution is to extend a recently developed deep neural network for video
frame prediction in Atari games to enable reward prediction as well. To this
end, we phrase a joint optimization problem for minimizing both video frame and
reward reconstruction loss, and adapt network parameters accordingly. Empirical
evaluations on five Atari games demonstrate accurate cumulative reward
prediction of up to 200 frames. We consider these results as opening up
important directions for model-based reinforcement learning in complex,
initially unknown environments.
| Felix Leibfried, Nate Kushman, Katja Hofmann | null | 1611.07078 | null | null |
Structured Prediction by Conditional Risk Minimization | stat.ML cs.LG | We propose a general approach for supervised learning with structured output
spaces, such as combinatorial and polyhedral sets, that is based on minimizing
estimated conditional risk functions. Given a loss function defined over pairs
of output labels, we first estimate the conditional risk function by solving a
(possibly infinite) collection of regularized least squares problems. A
prediction is made by solving an inference problem that minimizes the estimated
conditional risk function over the output space. We show that this approach
enables, in some cases, efficient training and inference without explicitly
introducing a convex surrogate for the original loss function, even when it is
discontinuous. Empirical evaluations on real-world and synthetic data sets
demonstrate the effectiveness of our method in adapting to a variety of loss
functions.
| Chong Yang Goh, Patrick Jaillet | null | 1611.07096 | null | null |
Risk-Sensitive Learning and Pricing for Demand Response | cs.LG math.OC | We consider the setting in which an electric power utility seeks to curtail
its peak electricity demand by offering a fixed group of customers a uniform
price for reductions in consumption relative to their predetermined baselines.
The underlying demand curve, which describes the aggregate reduction in
consumption in response to the offered price, is assumed to be affine and
subject to unobservable random shocks. Assuming that both the parameters of the
demand curve and the distribution of the random shocks are initially unknown to
the utility, we investigate the extent to which the utility might dynamically
adjust its offered prices to maximize its cumulative risk-sensitive payoff over
a finite number of $T$ days. In order to do so effectively, the utility must
design its pricing policy to balance the tradeoff between the need to learn the
unknown demand model (exploration) and maximize its payoff (exploitation) over
time. In this paper, we propose such a pricing policy, which is shown to
exhibit an expected payoff loss over $T$ days that is at most
$O(\sqrt{T}\log(T))$, relative to an oracle pricing policy that knows the
underlying demand model. Moreover, the proposed pricing policy is shown to
yield a sequence of prices that converge to the oracle optimal prices in the
mean square sense.
| Kia Khezeli and Eilyan Bitar | null | 1611.07098 | null | null |
Tree Space Prototypes: Another Look at Making Tree Ensembles
Interpretable | stat.ML cs.LG | Ensembles of decision trees perform well on many problems, but are not
interpretable. In contrast to existing approaches in interpretability that
focus on explaining relationships between features and predictions, we propose
an alternative approach to interpret tree ensemble classifiers by surfacing
representative points for each class -- prototypes. We introduce a new distance
for Gradient Boosted Tree models, and propose new, adaptive prototype selection
methods with theoretical guarantees, with the flexibility to choose a different
number of prototypes in each class. We demonstrate our methods on random
forests and gradient boosted trees, showing that the prototypes can perform as
well as or even better than the original tree ensemble when used as a
nearest-prototype classifier. In a user study, humans were better at predicting
the output of a tree ensemble classifier when using prototypes than when using
Shapley values, a popular feature attribution method. Hence, prototypes present
a viable alternative to feature-based explanations for tree ensembles.
| Sarah Tan, Matvey Soloviev, Giles Hooker, Martin T. Wells | null | 1611.07115 | null | null |
Max-Margin Deep Generative Models for (Semi-)Supervised Learning | cs.CV cs.LG stat.ML | Deep generative models (DGMs) are effective at learning multilayered
representations of complex data and performing inference of input data by
exploring the generative ability. However, the discriminative ability of DGMs
to make accurate predictions remains relatively limited. This
paper presents max-margin deep generative models (mmDGMs) and a
class-conditional variant (mmDCGMs), which explore the strongly discriminative
principle of max-margin learning to improve the predictive performance of DGMs
in both supervised and semi-supervised learning, while retaining the generative
capability. In semi-supervised learning, we use the predictions of a max-margin
classifier as the missing labels instead of performing full posterior inference
for efficiency; we also introduce additional max-margin and label-balance
regularization terms of unlabeled data for effectiveness. We develop an
efficient doubly stochastic subgradient algorithm for the piecewise linear
objectives in different settings. Empirical results on various datasets
demonstrate that: (1) max-margin learning can significantly improve the
prediction performance of DGMs and meanwhile retain the generative ability; (2)
in supervised learning, mmDGMs are competitive to the best fully discriminative
networks when employing convolutional neural networks as the generative and
recognition models; and (3) in semi-supervised learning, mmDCGMs can perform
efficient inference and achieve state-of-the-art classification results on
several benchmarks.
| Chongxuan Li and Jun Zhu and Bo Zhang | null | 1611.07119 | null | null |
Fast and Energy-Efficient CNN Inference on IoT Devices | cs.DC cs.LG | Convolutional Neural Networks (CNNs) exhibit remarkable performance in
various machine learning tasks. As sensor-equipped internet of things (IoT)
devices permeate into every aspect of modern life, it is increasingly important
to run CNN inference, a computationally intensive application, on resource
constrained devices. We present a technique for fast and energy-efficient CNN
inference on mobile SoC platforms, which are projected to be a major player in
the IoT space. We propose techniques for efficient parallelization of CNN
inference targeting mobile GPUs, and explore the underlying tradeoffs.
Experiments with running Squeezenet on three different mobile devices confirm
the effectiveness of our approach. For further study, please refer to the
project repository available on our GitHub page:
https://github.com/mtmd/Mobile_ConvNet
| Mohammad Motamedi, Daniel Fong, Soheil Ghiasi | null | 1611.07151 | null | null |
Deep Recurrent Convolutional Neural Network: Improving Performance For
Speech Recognition | cs.CL cs.LG | A deep learning approach has been widely applied in sequence modeling
problems. In terms of automatic speech recognition (ASR), its performance has
significantly been improved by increasing large speech corpus and deeper neural
network. Especially, recurrent neural network and deep convolutional neural
network have been applied in ASR successfully. Given the arising problem of
training speed, we build a novel deep recurrent convolutional network for
acoustic modeling and then apply deep residual learning to it. Our experiments
show that it has not only a faster convergence speed but also better recognition
accuracy than the traditional deep convolutional recurrent network. In the
experiments, we compare the convergence speed of our novel deep recurrent
convolutional networks and traditional deep convolutional recurrent networks.
With faster convergence speed, our novel deep recurrent convolutional networks
can reach the comparable performance. We further show that applying deep
residual learning can boost the convergence speed of our novel deep recurrent
convolutional networks. Finally, we evaluate all our experimental networks by
phoneme error rate (PER) with our proposed bidirectional statistical n-gram
language model. Our evaluation results show that our newly proposed deep
recurrent convolutional network applied with deep residual learning can reach
the best PER of 17.33\% with the fastest convergence speed on TIMIT database.
The outstanding performance of our novel deep recurrent convolutional neural
network with deep residual learning indicates that it can be potentially
adopted in other sequential problems.
| Zewang Zhang, Zheng Sun, Jiaqi Liu, Jingwen Chen, Zhao Huo, Xiao Zhang | null | 1611.07174 | null | null |
Interpretable Recurrent Neural Networks Using Sequential Sparse Recovery | stat.ML cs.LG | Recurrent neural networks (RNNs) are powerful and effective for processing
sequential data. However, RNNs are usually considered "black box" models whose
internal structure and learned parameters are not interpretable. In this paper,
we propose an interpretable RNN based on the sequential iterative
soft-thresholding algorithm (SISTA) for solving the sequential sparse recovery
problem, which models a sequence of correlated observations with a sequence of
sparse latent vectors. The architecture of the resulting SISTA-RNN is
implicitly defined by the computational structure of SISTA, which results in a
novel stacked RNN architecture. Furthermore, the weights of the SISTA-RNN are
perfectly interpretable as the parameters of a principled statistical model,
which in this case include a sparsifying dictionary, iterative step size, and
regularization parameters. In addition, on a particular sequential compressive
sensing task, the SISTA-RNN trains faster and achieves better performance than
conventional state-of-the-art black box RNNs, including long short-term memory
(LSTM) RNNs.
| Scott Wisdom, Thomas Powers, James Pitton, Les Atlas | null | 1611.07252 | null | null |
Investigating the influence of noise and distractors on the
interpretation of neural networks | stat.ML cs.LG | Understanding neural networks is becoming increasingly important. Over the
last few years different types of visualisation and explanation methods have
been proposed. However, none of them explicitly considered the behaviour in the
presence of noise and distracting elements. In this work, we will show how
noise and distracting dimensions can influence the result of an explanation
model. This gives new theoretical insights to aid selection of the most
appropriate explanation model within the deep-Taylor decomposition framework.
| Pieter-Jan Kindermans, Kristof Sch\"utt, Klaus-Robert M\"uller, Sven
D\"ahne | null | 1611.0727 | null | null |
Correlation Clustering with Low-Rank Matrices | cs.LG cs.DS cs.NA | Correlation clustering is a technique for aggregating data based on
qualitative information about which pairs of objects are labeled 'similar' or
'dissimilar.' Because the optimization problem is NP-hard, much of the previous
literature focuses on finding approximation algorithms. In this paper we
explore how to solve the correlation clustering objective exactly when the data
to be clustered can be represented by a low-rank matrix. We prove in particular
that correlation clustering can be solved in polynomial time when the
underlying matrix is positive semidefinite with small constant rank, but that
the task remains NP-hard in the presence of even one negative eigenvalue. Based
on our theoretical results, we develop an algorithm for efficiently "solving"
low-rank positive semidefinite correlation clustering by employing a procedure
for zonotope vertex enumeration. We demonstrate the effectiveness and speed of
our algorithm by using it to solve several clustering problems on both
synthetic and real-world data.
| Nate Veldt and Anthony Wirth and David F. Gleich | null | 1611.07305 | null | null |
Variational Graph Auto-Encoders | stat.ML cs.LG | We introduce the variational graph auto-encoder (VGAE), a framework for
unsupervised learning on graph-structured data based on the variational
auto-encoder (VAE). This model makes use of latent variables and is capable of
learning interpretable latent representations for undirected graphs. We
demonstrate this model using a graph convolutional network (GCN) encoder and a
simple inner product decoder. Our model achieves competitive results on a link
prediction task in citation networks. In contrast to most existing models for
unsupervised learning on graph-structured data and link prediction, our model
can naturally incorporate node features, which significantly improves
predictive performance on a number of benchmark datasets.
| Thomas N. Kipf, Max Welling | null | 1611.07308 | null | null |
Limbo: A Fast and Flexible Library for Bayesian Optimization | cs.LG cs.AI cs.RO stat.ML | Limbo is an open-source C++11 library for Bayesian optimization which is
designed to be both highly flexible and very fast. It can be used to optimize
functions for which the gradient is unknown, evaluations are expensive, and
runtime cost matters (e.g., on embedded systems or robots). Benchmarks on
standard functions show that Limbo is about 2 times faster than BayesOpt
(another C++ library) for a similar accuracy.
| Antoine Cully, Konstantinos Chatzilygeroudis, Federico Allocati,
Jean-Baptiste Mouret | null | 1611.07343 | null | null |
Deep Learning Approximation for Stochastic Control Problems | cs.LG cs.AI cs.NE math.OC stat.ML | Many real world stochastic control problems suffer from the "curse of
dimensionality". To overcome this difficulty, we develop a deep learning
approach that directly solves high-dimensional stochastic control problems
based on Monte-Carlo sampling. We approximate the time-dependent controls as
feedforward neural networks and stack these networks together through model
dynamics. The objective function for the control problem plays the role of the
loss function for the deep neural network. We test this approach using examples
from the areas of optimal trading and energy storage. Our results suggest that
the algorithm presented here achieves satisfactory accuracy and at the same
time, can handle rather high dimensional problems.
| Jiequn Han, Weinan E | null | 1611.07422 | null | null |
TreeView: Peeking into Deep Neural Networks Via Feature-Space
Partitioning | stat.ML cs.LG | With the advent of highly predictive but opaque deep learning models, it has
become more important than ever to understand and explain the predictions of
such models. Existing approaches define interpretability as the inverse of
complexity and achieve interpretability at the cost of accuracy. This
introduces a risk of producing interpretable but misleading explanations. As
humans, we are prone to engage in this kind of behavior \cite{mythos}. In this
paper, we take a step in the direction of tackling the problem of
interpretability without compromising the model accuracy. We propose to build a
TreeView representation of the complex model via hierarchical partitioning of
the feature space, which reveals the iterative rejection of unlikely class
labels until the correct association is predicted.
| Jayaraman J. Thiagarajan, Bhavya Kailkhura, Prasanna Sattigeri and
Karthikeyan Natesan Ramamurthy | null | 1611.07429 | null | null |
Achieving non-discrimination in data release | cs.LG | Discrimination discovery and prevention/removal are increasingly important
tasks in data mining. Discrimination discovery aims to unveil discriminatory
practices on the protected attribute (e.g., gender) by analyzing the dataset of
historical decision records, and discrimination prevention aims to remove
discrimination by modifying the biased data before conducting predictive
analysis. In this paper, we show that the key to discrimination discovery and
prevention is to find the meaningful partitions that can be used to provide
quantitative evidence for the judgment of discrimination. With the support of
the causal graph, we present a graphical condition for identifying a meaningful
partition. Based on that, we develop a simple criterion for the claim of
non-discrimination, and propose discrimination removal algorithms which
accurately remove discrimination while retaining good data utility. Experiments
using real datasets show the effectiveness of our approaches.
| Lu Zhang (1), Yongkai Wu (1), Xintao Wu (1) ((1) University of
Arkansas) | null | 1611.07438 | null | null |
Grad-CAM: Why did you say that? | stat.ML cs.CV cs.LG | We propose a technique for making Convolutional Neural Network (CNN)-based
models more transparent by visualizing input regions that are 'important' for
predictions -- or visual explanations. Our approach, called Gradient-weighted
Class Activation Mapping (Grad-CAM), uses class-specific gradient information
to localize important regions. These localizations are combined with existing
pixel-space visualizations to create a novel high-resolution and
class-discriminative visualization called Guided Grad-CAM. These methods help
better understand CNN-based models, including image captioning and visual
question answering (VQA) models. We evaluate our visual explanations by
measuring their ability to discriminate between classes, to inspire trust in
humans, and their correlation with occlusion maps. Grad-CAM provides a new way
to understand CNN-based models.
We have released code, an online demo hosted on CloudCV, and a full version
of this extended abstract.
| Ramprasaath R Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael
Cogswell, Devi Parikh, Dhruv Batra | null | 1611.0745 | null | null |
Eigenvalues of the Hessian in Deep Learning: Singularity and Beyond | cs.LG | We look at the eigenvalues of the Hessian of a loss function before and after
training. The eigenvalue distribution is seen to be composed of two parts, the
bulk which is concentrated around zero, and the edges which are scattered away
from zero. We present empirical evidence for the bulk indicating how
over-parametrized the system is, and for the edges that depend on the input
data.
| Levent Sagun, Leon Bottou, Yann LeCun | null | 1611.07476 | null | null |
Can Co-robots Learn to Teach? | cs.RO cs.LG | We explore beyond existing work on learning from demonstration by asking the
question: Can robots learn to teach?, that is, can a robot autonomously learn
an instructional policy from expert demonstration and use it to instruct or
collaborate with humans in executing complex tasks in uncertain environments?
In this paper we pursue a solution to this problem by leveraging the idea that
humans often implicitly decompose a higher level task into several subgoals
whose execution brings the task closer to completion. We propose Dirichlet
process based non-parametric Inverse Reinforcement Learning (DPMIRL) approach
for reward based unsupervised clustering of task space into subgoals. This
approach is shown to capture the latent subgoals that a human teacher would
have utilized to train a novice. The notion of action primitive is introduced
as the means to communicate instruction policy to humans in the least
complicated manner, and as a computationally efficient tool to segment
demonstration data. We evaluate our approach through experiments on a
hydraulically actuated scaled model of an excavator and compare different
teaching strategies utilized by the robot.
| Harshal Maske, Emily Kieson, Girish Chowdhary, and Charles Abramson | null | 1611.0749 | null | null |
Inducing Interpretable Representations with Variational Autoencoders | stat.ML cs.CV cs.LG | We develop a framework for incorporating structured graphical models in the
\emph{encoders} of variational autoencoders (VAEs) that allows us to induce
interpretable representations through approximate variational inference. This
allows us to both perform reasoning (e.g. classification) under the structural
constraints of a given graphical model, and use deep generative models to deal
with messy, high-dimensional domains where it is often difficult to model all
the variation. Learning in this framework is carried out end-to-end with a
variational objective, applying to both unsupervised and semi-supervised
schemes.
| N. Siddharth and Brooks Paige and Alban Desmaison and Jan-Willem Van
de Meent and Frank Wood and Noah D. Goodman and Pushmeet Kohli and Philip
H.S. Torr | null | 1611.07492 | null | null |
Variational Intrinsic Control | cs.LG cs.AI | In this paper we introduce a new unsupervised reinforcement learning method
for discovering the set of intrinsic options available to an agent. This set is
learned by maximizing the number of different states an agent can reliably
reach, as measured by the mutual information between the set of options and
option termination states. To this end, we instantiate two policy gradient
based algorithms, one that creates an explicit embedding space of options and
one that represents options implicitly. The algorithms also provide an explicit
measure of empowerment in a given state that can be used by an empowerment
maximizing agent. The algorithm scales well with function approximation, and we
demonstrate its applicability on a range of tasks.
| Karol Gregor, Danilo Jimenez Rezende, Daan Wierstra | null | 1611.07507 | null | null |
A causal framework for discovering and removing direct and indirect
discrimination | cs.LG | Anti-discrimination is an increasingly important task in data science. In
this paper, we investigate the problem of discovering both direct and indirect
discrimination from the historical data, and removing the discriminatory
effects before the data is used for predictive analysis (e.g., building
classifiers). We make use of the causal network to capture the causal structure
of the data. Then we model direct and indirect discrimination as the
path-specific effects, which explicitly distinguish the two types of
discrimination as the causal effects transmitted along different paths in the
network. Based on that, we propose an effective algorithm for discovering
direct and indirect discrimination, as well as an algorithm for precisely
removing both types of discrimination while retaining good data utility.
Different from previous works, our approaches can ensure that the predictive
models built from the modified data will not incur discrimination in decision
making. Experiments using real datasets show the effectiveness of our
approaches.
| Lu Zhang (1), Yongkai Wu (1), Xintao Wu (1) ((1) University of
Arkansas) | null | 1611.07509 | null | null |
Feature Importance Measure for Non-linear Learning Algorithms | cs.AI cs.LG stat.ML | Complex problems may require sophisticated, non-linear learning methods such
as kernel machines or deep neural networks to achieve state of the art
prediction accuracies. However, high prediction accuracies are not the only
objective to consider when solving problems using machine learning. Instead,
particular scientific applications require some explanation of the learned
prediction function. Unfortunately, most methods do not come with an
out-of-the-box, straightforward interpretation. Even linear prediction functions
are not straightforward to explain if features exhibit complex correlation
structure.
In this paper, we propose the Measure of Feature Importance (MFI). MFI is
general and can be applied to any arbitrary learning machine (including kernel
machines and deep learning). MFI is intrinsically non-linear and can detect
features that by themselves are inconspicuous and only impact the prediction
function through their interaction with other features. Lastly, MFI can be used
for both --- model-based feature importance and instance-based feature
importance (i.e., measuring the importance of a feature for a particular data
point).
| Marina M.-C. Vidovic, Nico G\"ornitz, Klaus-Robert M\"uller, Marius
Kloft | null | 1611.07567 | null | null |
Quad-networks: unsupervised learning to rank for interest point
detection | cs.CV cs.LG cs.NE | Several machine learning tasks require to represent the data using only a
sparse set of interest points. An ideal detector is able to find the
corresponding interest points even if the data undergo a transformation typical
for a given domain. Since the task is of high practical interest in computer
vision, many hand-crafted solutions were proposed. In this paper, we ask a
fundamental question: can we learn such detectors from scratch? Since it is
often unclear what points are "interesting", human labelling cannot be used to
find a truly unbiased solution. Therefore, the task requires an unsupervised
formulation. We are the first to propose such a formulation: training a neural
network to rank points in a transformation-invariant manner. Interest points
are then extracted from the top/bottom quantiles of this ranking. We validate
our approach on two tasks: standard RGB image interest point detection and
challenging cross-modal interest point detection between RGB and depth images.
We quantitatively show that our unsupervised method performs better than or on
par with baselines.
| Nikolay Savinov, Akihito Seki, Lubor Ladicky, Torsten Sattler and Marc
Pollefeys | null | 1611.07571 | null | null |
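A hedged sketch of a ranking-agreement objective in the spirit of the abstract: the responses of two points should be ordered the same way before and after a transformation. The exact quadruple loss, sampling scheme, and margin used by the paper are not reproduced; the inputs below are placeholder point responses.

```python
import torch

def ranking_agreement_loss(f_p, f_q, f_p_t, f_q_t, margin=1.0):
    """f_p, f_q: network responses of two points in the original data;
    f_p_t, f_q_t: responses of their correspondences after the transformation."""
    # the product is positive when both pairs are ranked in the same order
    agreement = (f_p - f_q) * (f_p_t - f_q_t)
    return torch.clamp(margin - agreement, min=0.0).mean()
```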
Programs as Black-Box Explanations | stat.ML cs.AI cs.LG | Recent work in model-agnostic explanations of black-box machine learning has
demonstrated that interpretability of complex models does not have to come at
the cost of accuracy or model flexibility. However, it is not clear what kind
of explanations, such as linear models, decision trees, and rule lists, are the
appropriate family to consider, and different tasks and models may benefit from
different kinds of explanations. Instead of picking a single family of
representations, in this work we propose to use "programs" as model-agnostic
explanations. We show that small programs can be expressive yet intuitive as
explanations, and generalize over a number of existing interpretable families.
We propose a prototype program induction method based on simulated annealing
that approximates the local behavior of black-box classifiers around a specific
prediction using random perturbations. Finally, we present a preliminary
application on small datasets and show that the generated explanations are
intuitive and accurate for a number of classifiers.
| Sameer Singh and Marco Tulio Ribeiro and Carlos Guestrin | null | 1611.07579 | null | null |
A Neural Network Model to Classify Liver Cancer Patients Using Data
Expansion and Compression | stat.ML cs.LG q-bio.QM | We develop a neural network model to classify liver cancer patients into
high-risk and low-risk groups using genomic data. Our approach provides a novel
technique to classify big data sets using neural network models. We preprocess
the data before training the neural network models. We first expand the data
using wavelet analysis. We then compress the wavelet coefficients by mapping
them onto a new scaled orthonormal coordinate system. Then the data is used to
train a neural network model that enables us to classify cancer patients into
two different classes of high-risk and low-risk patients. We use the
leave-one-out approach to build a neural network model. This neural network
model enables us to classify a patient using genomic data as a high-risk or
low-risk patient without any information about the survival time of the
patient. The results from genomic data analysis are compared with survival time
analysis. It is shown that the expansion and compression of data using wavelet
analysis and singular value decomposition (SVD) are essential to train the
neural network model.
| Ashkan Zeinalzadeh, Tom Wenska, Gordon Okimoto | null | 1611.07588 | null | null |
SyGuS-Comp 2016: Results and Analysis | cs.SE cs.LG cs.LO | Syntax-Guided Synthesis (SyGuS) is the computational problem of finding an
implementation f that meets both a semantic constraint given by a logical
formula $\varphi$ in a background theory T, and a syntactic constraint given by
a grammar G, which specifies the allowed set of candidate implementations. Such
a synthesis problem can be formally defined in SyGuS-IF, a language that is
built on top of SMT-LIB.
The Syntax-Guided Synthesis Competition (SyGuS-Comp) is an effort to
facilitate, bring together and accelerate research and development of efficient
solvers for SyGuS by providing a platform for evaluating different synthesis
techniques on a comprehensive set of benchmarks. In this year's competition we
added a new track devoted to programming by examples. This track consisted of
two categories, one using the theory of bit-vectors and one using the theory of
strings. This paper presents and analyses the results of SyGuS-Comp'16.
| Rajeev Alur (University of Pennsylvania), Dana Fisman (Ben-Gurion
University), Rishabh Singh (Microsoft Research, Redmond), Armando
Solar-Lezama (Massachusetts Institute of Technology) | 10.4204/EPTCS.229.13 | 1611.07627 | null | null |
Interpretation of Prediction Models Using the Input Gradient | stat.ML cs.LG | State of the art machine learning algorithms are highly optimized to provide
the optimal prediction possible, naturally resulting in complex models. While
these models often outperform simpler, more interpretable models by orders of
magnitude, in terms of understanding the way the model functions, we are often
facing a "black box".
In this paper we suggest a simple method to interpret the behavior of any
predictive model, both for regression and classification. Given a particular
model, the information required to interpret it can be obtained by studying the
partial derivatives of the model with respect to the input. We exemplify this
insight by interpreting convolutional and multi-layer neural networks in the
field of natural language processing.
| Yotam Hechtlinger | null | 1611.07634 | null | null |
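A minimal sketch of the idea: the partial derivatives of the model output with respect to the input are read off after a single backward pass. `model`, `x`, and `output_idx` are placeholders for any differentiable predictor; the paper's NLP architectures are not reproduced.

```python
import torch

def input_gradient(model, x, output_idx=0):
    """Return d(model output[output_idx]) / d(input), same shape as x."""
    x = x.clone().detach().requires_grad_(True)
    out = model(x)
    out.reshape(-1)[output_idx].backward()
    return x.grad      # per-feature sensitivity of the chosen output
```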
Improving Efficiency of SVM k-fold Cross-validation by Alpha Seeding | cs.LG | The k-fold cross-validation is commonly used to evaluate the effectiveness of
SVMs with the selected hyper-parameters. It is known that the SVM k-fold
cross-validation is expensive, since it requires training k SVMs. However,
little work has explored reusing the h-th SVM for training the (h+1)-th SVM for
improving the efficiency of k-fold cross-validation. In this paper, we propose
three algorithms that reuse the h-th SVM for improving the efficiency of
training the (h+1)-th SVM. Our key idea is to efficiently identify the support
vectors and to accurately estimate their associated weights (also called alpha
values) of the next SVM by using the previous SVM. Our experimental results
show that our algorithms are several times faster than the k-fold
cross-validation which does not make use of the previously trained SVM.
Moreover, our algorithms produce the same results (hence same accuracy) as the
k-fold cross-validation which does not make use of the previously trained SVM.
| Zeyi Wen, Bin Li, Rao Kotagiri, Jian Chen, Yawen Chen and Rui Zhang | null | 1611.07659 | null | null |
Multigrid Neural Architectures | cs.CV cs.LG cs.NE | We propose a multigrid extension of convolutional neural networks (CNNs).
Rather than manipulating representations living on a single spatial grid, our
network layers operate across scale space, on a pyramid of grids. They consume
multigrid inputs and produce multigrid outputs; convolutional filters
themselves have both within-scale and cross-scale extent. This aspect is
distinct from simple multiscale designs, which only process the input at
different scales. Viewed in terms of information flow, a multigrid network
passes messages across a spatial pyramid. As a consequence, receptive field
size grows exponentially with depth, facilitating rapid integration of context.
Most critically, multigrid structure enables networks to learn internal
attention and dynamic routing mechanisms, and use them to accomplish tasks on
which modern CNNs fail.
Experiments demonstrate wide-ranging performance advantages of multigrid. On
CIFAR and ImageNet classification tasks, flipping from a single grid to
multigrid within the standard CNN paradigm improves accuracy, while being
compute and parameter efficient. Multigrid is independent of other
architectural choices; we show synergy in combination with residual
connections. Multigrid yields dramatic improvement on a synthetic semantic
segmentation dataset. Most strikingly, relatively shallow multigrid networks
can learn to directly perform spatial transformation tasks, where, in contrast,
current CNNs fail. Together, our results suggest that continuous evolution of
features on a multigrid pyramid is a more powerful alternative to existing CNN
designs on a flat grid.
| Tsung-Wei Ke, Michael Maire, Stella X. Yu | null | 1611.07661 | null | null |
iCaRL: Incremental Classifier and Representation Learning | cs.CV cs.LG stat.ML | A major open problem on the road to artificial intelligence is the
development of incrementally learning systems that learn about more and more
concepts over time from a stream of data. In this work, we introduce a new
training strategy, iCaRL, that allows learning in such a class-incremental way:
only the training data for a small number of classes has to be present at the
same time and new classes can be added progressively. iCaRL learns strong
classifiers and a data representation simultaneously. This distinguishes it
from earlier works that were fundamentally limited to fixed data
representations and therefore incompatible with deep learning architectures. We
show by experiments on CIFAR-100 and ImageNet ILSVRC 2012 data that iCaRL can
learn many classes incrementally over a long period of time where other
strategies quickly fail.
| Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, Christoph
H. Lampert | null | 1611.07725 | null | null |
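A minimal sketch of iCaRL's nearest-mean-of-exemplars classification rule; the representation learning, herding-based exemplar selection, and distillation components of the method are not reproduced, and `embed`, the exemplar sets, and the query tensor are placeholders.

```python
import torch
import torch.nn.functional as F

def classify_nearest_mean(embed, exemplars_per_class, x):
    """exemplars_per_class: {class_id: tensor of stored exemplar inputs};
    returns the class whose mean exemplar embedding is closest to embed(x)."""
    q = F.normalize(embed(x.unsqueeze(0)), dim=1)
    best_cls, best_dist = None, float("inf")
    for cls, exemplars in exemplars_per_class.items():
        feats = F.normalize(embed(exemplars), dim=1)
        mu = F.normalize(feats.mean(dim=0, keepdim=True), dim=1)  # class prototype
        dist = torch.norm(q - mu).item()
        if dist < best_dist:
            best_cls, best_dist = cls, dist
    return best_cls
```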
Tunable Sensitivity to Large Errors in Neural Network Training | stat.ML cs.LG cs.NE | When humans learn a new concept, they might ignore examples that they cannot
make sense of at first, and only later focus on such examples, when they are
more useful for learning. We propose incorporating this idea of tunable
sensitivity for hard examples in neural network learning, using a new
generalization of the cross-entropy gradient step, which can be used in place
of the gradient in any gradient-based training method. The generalized gradient
is parameterized by a value that controls the sensitivity of the training
process to harder training examples. We tested our method on several benchmark
datasets. We propose, and corroborate in our experiments, that the optimal
level of sensitivity to hard examples is positively correlated with the depth of
the network. Moreover, the test prediction error obtained by our method is
generally lower than that of the vanilla cross-entropy gradient learner. We
therefore conclude that tunable sensitivity can be helpful for neural network
learning.
| Gil Keren, Sivan Sabato, Bj\"orn Schuller | null | 1611.07743 | null | null |
Adaptive Down-Sampling and Dimension Reduction in Time Elastic Kernel
Machines for Efficient Recognition of Isolated Gestures | cs.CV cs.LG | In the scope of gestural action recognition, the size of the feature vector
representing movements is in general quite large especially when full body
movements are considered. Furthermore, this feature vector evolves during the
movement performance so that a complete movement is fully represented by a
matrix M of size D x T, whose element M(i,j) represents the value of feature i
at timestamp j. Many studies have addressed dimensionality reduction
considering only the size of the feature vector lying in R^D to reduce both the
variability of gestural sequences expressed in the reduced space, and the
computational complexity of their processing. In return, very few of these
methods have explicitly addressed the dimensionality reduction along the time
axis. Yet this is a major issue when considering the use of elastic distances
which are characterized by a quadratic complexity along the time axis. We
present in this paper an evaluation of straightforward approaches aiming at
reducing the dimensionality of the matrix M for each movement, leading to
consider both the dimensionality reduction of the feature vector as well as its
reduction along the time axis. The dimensionality reduction of the feature
vector is achieved by selecting remarkable joints in the skeleton performing
the movement, basically the extremities of the articulatory chains composing
the skeleton. The temporal dimensionality reduction is achieved using either a
regular or adaptive down-sampling that seeks to minimize the reconstruction
error of the movements. Elastic and Euclidean kernels are then compared through
support vector machine learning. Two data sets that are widely referenced in
the domain of human gesture recognition, and quite distinctive in terms of
quality of motion capture, are used for the experimental assessment of the
proposed approaches. On these data sets we experimentally show that it is
feasible, and possibly desirable, to significantly reduce simultaneously the
size of the feature vector and the number of skeleton frames to represent body
movements while maintaining a very good recognition rate. The method proves to
give satisfactory results at a level currently reached by state-of-the-art
methods on these data sets. We experimentally show that the computational
complexity reduction that is obtained makes this approach eligible for
real-time applications.
| Pierre-Fran\c{c}ois Marteau (EXPRESSION), Sylvie Gibet (EXPRESSION),
Cl\'ement Reverdy (EXPRESSION) | 10.1007/978-3-319-45763-5_3 | 1611.07781 | null | null |
Infinite Variational Autoencoder for Semi-Supervised Learning | cs.LG stat.ML | This paper presents an infinite variational autoencoder (VAE) whose capacity
adapts to suit the input data. This is achieved using a mixture model where the
mixing coefficients are modeled by a Dirichlet process, allowing us to
integrate over the coefficients when performing inference. Critically, this
then allows us to automatically vary the number of autoencoders in the mixture
based on the data. Experiments show the flexibility of our method, particularly
for semi-supervised learning, where only a small number of training samples are
available.
| Ehsan Abbasnejad, Anthony Dick, Anton van den Hengel | null | 1611.078 | null | null |
Learning Generic Sentence Representations Using Convolutional Neural
Networks | cs.CL cs.LG | We propose a new encoder-decoder approach to learn distributed sentence
representations that are applicable to multiple purposes. The model is learned
by using a convolutional neural network as an encoder to map an input sentence
into a continuous vector, and using a long short-term memory recurrent neural
network as a decoder. Several tasks are considered, including sentence
reconstruction and future sentence prediction. Further, a hierarchical
encoder-decoder model is proposed to encode a sentence to predict multiple
future sentences. By training our models on a large collection of novels, we
obtain a highly generic convolutional sentence encoder that performs well in
practice. Experimental results on several benchmark datasets, and across a
broad range of applications, demonstrate the superiority of the proposed model
over competing methods.
| Zhe Gan, Yunchen Pu, Ricardo Henao, Chunyuan Li, Xiaodong He, Lawrence
Carin | null | 1611.07897 | null | null |
Deep Restricted Boltzmann Networks | cs.LG | Building a good generative model for image has long been an important topic
in computer vision and machine learning. Restricted Boltzmann machine (RBM) is
one such model that is simple but powerful. However, its restricted form
has also placed heavy constraints on the model's representation power and
scalability. Many extensions have been invented based on RBM in order to
produce deeper architectures with greater power. The most famous ones among
them are deep belief network, which stacks multiple layer-wise pretrained RBMs
to form a hybrid model, and deep Boltzmann machine, which allows connections
between hidden units to form a multi-layer structure. In this paper, we present
a new method to compose RBMs to form a multi-layer network style architecture
and a training method that trains all layers jointly. We call the resulting
structure a deep restricted Boltzmann network. We further explore the combination
of convolutional RBM with the normal fully connected RBM, which is made trivial
under our composition framework. Experiments show that our model can generate
decent images and outperform the normal RBM significantly in terms of image
quality and feature quality, without losing much efficiency for training.
| Hengyuan Hu and Lisheng Gao and Quanbin Ma | null | 1611.07917 | null | null |
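A minimal sketch of one contrastive-divergence (CD-1) update for a single binary RBM, the building block that deep restricted Boltzmann networks compose; the paper's joint multi-layer training and convolutional variant are not reproduced, and the layer sizes, learning rate, and data are placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, b_v, b_h, v0, lr=0.01, rng=None):
    """One CD-1 update. W: (n_v, n_h); b_v: (n_v,); b_h: (n_h,);
    v0: (batch, n_v) binary visible data. Updates parameters in place."""
    rng = np.random.default_rng(0) if rng is None else rng
    p_h0 = sigmoid(v0 @ W + b_h)                    # positive phase
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    p_v1 = sigmoid(h0 @ W.T + b_v)                  # one-step reconstruction
    p_h1 = sigmoid(p_v1 @ W + b_h)                  # negative phase
    batch = v0.shape[0]
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / batch
    b_v += lr * (v0 - p_v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_v, b_h
```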
Semantic Compositional Networks for Visual Captioning | cs.CV cs.CL cs.LG | A Semantic Compositional Network (SCN) is developed for image captioning, in
which semantic concepts (i.e., tags) are detected from the image, and the
probability of each tag is used to compose the parameters in a long short-term
memory (LSTM) network. The SCN extends each weight matrix of the LSTM to an
ensemble of tag-dependent weight matrices. The degree to which each member of
the ensemble is used to generate an image caption is tied to the
image-dependent probability of the corresponding tag. In addition to captioning
images, we also extend the SCN to generate captions for video clips. We
qualitatively analyze semantic composition in SCNs, and quantitatively evaluate
the algorithm on three benchmark datasets: COCO, Flickr30k, and Youtube2Text.
Experimental results show that the proposed method significantly outperforms
prior state-of-the-art approaches, across multiple evaluation metrics.
| Zhe Gan, Chuang Gan, Xiaodong He, Yunchen Pu, Kenneth Tran, Jianfeng
Gao, Lawrence Carin, Li Deng | null | 1611.08002 | null | null |
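A hedged sketch of the core composition idea: an LSTM weight matrix is formed as a tag-probability-weighted mixture of per-tag matrices. The paper's factored parameterization and full captioning model are not reproduced; the dimensions below are placeholders.

```python
import torch

def compose_weight(per_tag_weights, tag_probs):
    """per_tag_weights: (num_tags, out_dim, in_dim) ensemble of weight matrices;
    tag_probs: (num_tags,) detected tag probabilities for the current image.
    Returns the image-dependent (out_dim, in_dim) weight matrix."""
    return torch.einsum("t,tij->ij", tag_probs, per_tag_weights)
```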
EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer
Interfaces | cs.LG q-bio.NC stat.ML | Brain computer interfaces (BCI) enable direct communication with a computer,
using neural activity as the control signal. This neural signal is generally
chosen from a variety of well-studied electroencephalogram (EEG) signals. For a
given BCI paradigm, feature extractors and classifiers are tailored to the
distinct characteristics of its expected EEG control signal, limiting its
application to that specific signal. Convolutional Neural Networks (CNNs),
which have been used in computer vision and speech recognition, have
successfully been applied to EEG-based BCIs; however, they have mainly been
applied to single BCI paradigms and thus it remains unclear how these
architectures generalize to other paradigms. Here, we ask if we can design a
single CNN architecture to accurately classify EEG signals from different BCI
paradigms, while simultaneously being as compact as possible. In this work we
introduce EEGNet, a compact convolutional network for EEG-based BCIs. We
introduce the use of depthwise and separable convolutions to construct an
EEG-specific model which encapsulates well-known EEG feature extraction
concepts for BCI. We compare EEGNet to current state-of-the-art approaches
across four BCI paradigms: P300 visual-evoked potentials, error-related
negativity responses (ERN), movement-related cortical potentials (MRCP), and
sensory motor rhythms (SMR). We show that EEGNet generalizes across paradigms
better than the reference algorithms when only limited training data is
available. We demonstrate three different approaches to visualize the contents
of a trained EEGNet model to enable interpretation of the learned features. Our
results suggest that EEGNet is robust enough to learn a wide variety of
interpretable features over a range of BCI tasks, suggesting that the observed
performances were not due to artifact or noise sources in the data.
| Vernon J. Lawhern, Amelia J. Solon, Nicholas R. Waytowich, Stephen M.
Gordon, Chou P. Hung, Brent J. Lance | 10.1088/1741-2552/aace8c | 1611.08024 | null | null |
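A hedged sketch of the depthwise and separable convolution block EEGNet is built from, written in PyTorch; the filter counts, kernel lengths, and surrounding architecture are placeholders rather than the published model.

```python
import torch.nn as nn

class DepthwiseSeparableBlock(nn.Module):
    def __init__(self, in_ch=8, depth_mult=2, out_ch=16, kernel_len=16):
        super().__init__()
        # depthwise: one temporal filter per input channel (groups = in_ch)
        self.depthwise = nn.Conv2d(in_ch, in_ch * depth_mult,
                                   kernel_size=(1, kernel_len),
                                   groups=in_ch,
                                   padding=(0, kernel_len // 2),
                                   bias=False)
        # separable = depthwise temporal conv followed by a 1x1 pointwise conv
        self.pointwise = nn.Conv2d(in_ch * depth_mult, out_ch,
                                   kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ELU()

    def forward(self, x):          # x: (batch, in_ch, electrodes, time)
        return self.act(self.bn(self.pointwise(self.depthwise(x))))
```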
Scalable Bayesian Learning of Recurrent Neural Networks for Language
Modeling | cs.CL cs.LG | Recurrent neural networks (RNNs) have shown promising performance for
language modeling. However, traditional training of RNNs using back-propagation
through time often suffers from overfitting. One reason for this is that
stochastic optimization (used for large training sets) does not provide good
estimates of model uncertainty. This paper leverages recent advances in
stochastic gradient Markov Chain Monte Carlo (also appropriate for large
training sets) to learn weight uncertainty in RNNs. It yields a principled
Bayesian learning algorithm, adding gradient noise during training (enhancing
exploration of the model-parameter space) and model averaging when testing.
Extensive experiments on various RNN models and across a broad range of
applications demonstrate the superiority of the proposed approach over
stochastic optimization.
| Zhe Gan, Chunyuan Li, Changyou Chen, Yunchen Pu, Qinliang Su, Lawrence
Carin | null | 1611.08034 | null | null |
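A hedged sketch of one stochastic gradient Langevin dynamics (SGLD) step, the simplest member of the SG-MCMC family the paper builds on; the paper's exact sampler and any preconditioning are not reproduced, and the learning rate is a placeholder.

```python
import math
import torch

def sgld_step(params, lr=1e-3):
    """Apply one SGLD update in place, assuming each parameter's .grad already
    holds the minibatch gradient of the negative log posterior."""
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                continue
            noise = torch.randn_like(p) * math.sqrt(2.0 * lr)   # injected gradient noise
            p.add_(-lr * p.grad + noise)
```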
Multiscale Inverse Reinforcement Learning using Diffusion Wavelets | cs.LG cs.AI | This work presents a multiscale framework to solve an inverse reinforcement
learning (IRL) problem for continuous-time/state stochastic systems. We take
advantage of a diffusion wavelet representation of the associated Markov chain
to abstract the state space. This not only allows for effectively handling the
large (and geometrically complex) decision space but also provides more
interpretable representations of the demonstrated state trajectories and also
of the resulting policy of IRL. In the proposed framework, the problem is
divided into the global and local IRL, where the global approximation of the
optimal value functions is obtained using coarse features and the local
details are quantified using fine local features. An illustrative numerical
example on robot path control in a complex environment is presented to verify
the proposed method.
| Jung-Su Ha and Han-Lim Choi | null | 1611.0807 | null | null |
Survey of Expressivity in Deep Neural Networks | stat.ML cs.LG cs.NE | We survey results on neural network expressivity described in "On the
Expressive Power of Deep Neural Networks". The paper motivates and develops
three natural measures of expressiveness, which all display an exponential
dependence on the depth of the network. In fact, all of these measures are
related to a fourth quantity, trajectory length. This quantity grows
exponentially in the depth of the network, and is responsible for the depth
sensitivity observed. These results translate to consequences for networks
during and after training. They suggest that parameters earlier in a network
have greater influence on its expressive power -- in particular, given a layer,
its influence on expressivity is determined by the remaining depth of the
network after that layer. This is verified with experiments on MNIST and
CIFAR-10. We also explore the effect of training on the input-output map, and
find that it trades off between stability and expressivity.
| Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, Jascha
Sohl-Dickstein | null | 1611.08083 | null | null |
Dynamic Key-Value Memory Networks for Knowledge Tracing | cs.AI cs.LG | Knowledge Tracing (KT) is a task of tracing evolving knowledge state of
students with respect to one or more concepts as they engage in a sequence of
learning activities. One important purpose of KT is to personalize the practice
sequence to help students learn knowledge concepts efficiently. However,
existing methods such as Bayesian Knowledge Tracing and Deep Knowledge Tracing
either model knowledge state for each predefined concept separately or fail to
pinpoint exactly which concepts a student is good at or unfamiliar with. To
solve these problems, this work introduces a new model called Dynamic Key-Value
Memory Networks (DKVMN) that can exploit the relationships between underlying
concepts and directly output a student's mastery level of each concept. Unlike
standard memory-augmented neural networks that employ a single memory
matrix or two static memory matrices, our model has one static matrix, called
the key, which stores the knowledge concepts, and one dynamic matrix, called the
value, which stores and updates the mastery levels of the corresponding concepts.
Experiments show that our model consistently outperforms the state-of-the-art
model in a range of KT datasets. Moreover, the DKVMN model can automatically
discover underlying concepts of exercises, a task typically performed by human
annotators, and depict the changing knowledge state of a student.
| Jiani Zhang, Xingjian Shi, Irwin King and Dit-Yan Yeung | null | 1611.08108 | null | null |
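A minimal sketch of the read operation of a key-value memory of the kind DKVMN uses: a query exercise embedding attends over the static key matrix and reads from the dynamic value matrix. The dimensions are placeholders and the erase/add write update is omitted.

```python
import torch
import torch.nn.functional as F

def dkvmn_read(query, key_matrix, value_matrix):
    """query: (d_k,) exercise embedding; key_matrix: (N, d_k) static concept keys;
    value_matrix: (N, d_v) dynamic mastery values. Returns the (d_v,) read content."""
    w = F.softmax(key_matrix @ query, dim=0)   # correlation weights over concepts
    return w @ value_matrix                    # weighted read of mastery levels
```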
Interpreting the Predictions of Complex ML Models by Layer-wise
Relevance Propagation | stat.ML cs.LG | Complex nonlinear models such as deep neural network (DNNs) have become an
important tool for image classification, speech recognition, natural language
processing, and many other fields of application. These models however lack
transparency due to their complex nonlinear structure and to the complex data
distributions to which they typically apply. As a result, it is difficult to
fully characterize what makes these models reach a particular decision for a
given input. This lack of transparency can be a drawback, especially in the
context of sensitive applications such as medical analysis or security. In this
short paper, we summarize a recent technique introduced by Bach et al. [1] that
explains predictions by decomposing the classification decision of DNN models
in terms of input variables.
| Wojciech Samek, Gr\'egoire Montavon, Alexander Binder, Sebastian
Lapuschkin, Klaus-Robert M\"uller | null | 1611.08191 | null | null |
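A minimal sketch of the epsilon-rule of layer-wise relevance propagation for a stack of fully connected ReLU layers, a simplification of the technique summarized above; `Ws`, `bs`, and `x` are placeholders, and the specialized rules used for convolutional and pooling layers are not reproduced.

```python
import numpy as np

def lrp_epsilon(Ws, bs, x, eps=1e-6):
    """Ws[i]: (n_out, n_in) weights of layer i; bs[i]: (n_out,); x: (n_in,) input.
    Returns per-input-feature relevance scores with the same shape as x."""
    # forward pass, keeping every layer's input activation
    activations = [x]
    for W, b in zip(Ws, bs):
        activations.append(np.maximum(0.0, W @ activations[-1] + b))
    # take the output activations as initial relevance and propagate backwards
    R = activations[-1]
    for W, b, a in zip(reversed(Ws), reversed(bs), reversed(activations[:-1])):
        z = W @ a + b
        z = z + eps * np.where(z >= 0, 1.0, -1.0)   # epsilon stabilizer
        s = R / z
        R = a * (W.T @ s)                           # redistribute relevance
    return R
```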
Fast Orthonormal Sparsifying Transforms Based on Householder Reflectors | cs.LG stat.ML | Dictionary learning is the task of determining a data-dependent transform
that yields a sparse representation of some observed data. The dictionary
learning problem is non-convex, and usually solved via computationally complex
iterative algorithms. Furthermore, the resulting transforms generally
lack structure that permits their fast application to data. To address this
issue, this paper develops a framework for learning orthonormal dictionaries
which are built from products of a few Householder reflectors. Two algorithms
are proposed to learn the reflector coefficients: one that considers a
sequential update of the reflectors and one with a simultaneous update of all
reflectors that imposes an additional internal orthogonal constraint. The
proposed methods have low computational complexity and are shown to converge to
local minimum points which can be described in terms of the spectral properties
of the matrices involved. The resulting dictionaries balance between the
computational complexity and the quality of the sparse representations by
controlling the number of Householder reflectors in their product. Simulations
of the proposed algorithms are shown in the image processing setting where
well-known fast transforms are available for comparisons. The proposed
algorithms have favorable reconstruction error and the advantage of a fast
implementation relative to the classical, unstructured, dictionaries.
| Cristian Rusu, Nuria Gonzalez-Prelcic, Robert Heath | 10.1109/TSP.2016.2612168 | 1611.08229 | null | null |
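A minimal sketch of why such dictionaries are fast to apply: a product of m Householder reflectors H_k = I - 2 u_k u_k^T can be applied to data in O(m n) per vector without ever forming an n x n matrix. The learning of the reflector coefficients is not reproduced; the reflector vectors below are random placeholders.

```python
import numpy as np

def apply_householder_product(U, X):
    """U: (m, n) unit-norm reflector vectors; X: (n, num_samples) data.
    Returns (H_1 H_2 ... H_m) X without forming any n x n matrix."""
    Y = X.copy()
    for u in U[::-1]:                    # apply H_m first, then H_{m-1}, ...
        Y -= 2.0 * np.outer(u, u @ Y)    # (I - 2 u u^T) Y
    return Y

# random unit reflectors, just to exercise the function
rng = np.random.default_rng(0)
U = rng.standard_normal((4, 16))
U /= np.linalg.norm(U, axis=1, keepdims=True)
X = rng.standard_normal((16, 8))
Y = apply_householder_product(U, X)
```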
Learning Fast Sparsifying Transforms | cs.LG | Given a dataset, the task of learning a transform that allows sparse
representations of the data bears the name of dictionary learning. In many
applications, these learned dictionaries represent the data much better than
the static well-known transforms (Fourier, Hadamard etc.). The main downside of
learned transforms is that they lack structure and therefore they are not
computationally efficient, unlike their classical counterparts. These posse
several difficulties especially when using power limited hardware such as
mobile devices, therefore discouraging the application of sparsity techniques
in such scenarios. In this paper we construct orthogonal and non-orthogonal
dictionaries that are factorized as a product of a few basic transformations.
In the orthogonal case, we solve exactly the dictionary update problem for one
basic transformation, which can be viewed as a generalized Givens rotation, and
then propose to construct orthogonal dictionaries that are a product of these
transformations, guaranteeing their fast manipulation. We also propose a method
to construct fast square but non-orthogonal dictionaries that are factorized as
a product of few transforms that can be viewed as a further generalization of
Givens rotations to the non-orthogonal setting. We show how the proposed
transforms can balance very well data representation performance and
computational complexity. We also compare with classical fast and learned
general and orthogonal transforms.
| Cristian Rusu and John Thompson | 10.1109/TSP.2017.2712120 | 1611.0823 | null | null |
Identifying Significant Predictive Bias in Classifiers | stat.ML cs.LG | We present a novel subset scan method to detect if a probabilistic binary
classifier has statistically significant bias -- over or under predicting the
risk -- for some subgroup, and identify the characteristics of this subgroup.
This form of model checking and goodness-of-fit test provides a way to
interpretably detect the presence of classifier bias or regions of poor
classifier fit. This allows consideration of not just subgroups of a priori
interest or small dimensions, but the space of all possible subgroups of
features. To address the difficulty of considering these exponentially many
possible subgroups, we use subset scan and parametric bootstrap-based methods.
Extending this method, we can penalize the complexity of the detected subgroup
and also identify subgroups with high classification errors. We demonstrate
these methods and find interesting results on the COMPAS crime recidivism and
credit delinquency data.
| Zhe Zhang and Daniel B. Neill | null | 1611.08292 | null | null |
On Human Intellect and Machine Failures: Troubleshooting Integrative
Machine Learning Systems | cs.LG | We study the problem of troubleshooting machine learning systems that rely on
analytical pipelines of distinct components. Understanding and fixing errors
that arise in such integrative systems is difficult as failures can occur at
multiple points in the execution workflow. Moreover, errors can propagate,
become amplified or be suppressed, making blame assignment difficult. We
propose a human-in-the-loop methodology which leverages human intellect for
troubleshooting system failures. The approach simulates potential component
fixes through human computation tasks and measures the expected improvements in
the holistic behavior of the system. The method provides guidance to designers
about how they can best improve the system. We demonstrate the effectiveness of
the approach on an automated image captioning system that has been pressed into
real-world use.
| Besmira Nushi, Ece Kamar, Eric Horvitz, Donald Kossmann | null | 1611.08309 | null | null |
Training and Evaluating Multimodal Word Embeddings with Large-scale Web
Annotated Images | cs.LG cs.CL cs.CV | In this paper, we focus on training and evaluating effective word embeddings
with both text and visual information. More specifically, we introduce a
large-scale dataset with 300 million sentences describing over 40 million
images crawled and downloaded from publicly available Pins (i.e. an image with
sentence descriptions uploaded by users) on Pinterest. This dataset is more
than 200 times larger than MS COCO, the standard large-scale image dataset with
sentence descriptions. In addition, we construct an evaluation dataset to
directly assess the effectiveness of word embeddings in terms of finding
semantically similar or related words and phrases. The word/phrase pairs in
this evaluation dataset are collected from the click data with millions of
users in an image search system, thus contain rich semantic relationships.
Based on these datasets, we propose and compare several Recurrent Neural
Networks (RNNs) based multimodal (text and image) models. Experiments show that
our model benefits from incorporating the visual information into the word
embeddings, and a weight sharing strategy is crucial for learning such
multimodal embeddings. The project page is:
http://www.stat.ucla.edu/~junhua.mao/multimodal_embedding.html
| Junhua Mao, Jiajing Xu, Yushi Jing, Alan Yuille | null | 1611.08321 | null | null |
An Overview on Data Representation Learning: From Traditional Feature
Learning to Recent Deep Learning | cs.LG stat.ML | Over the past century, many representation learning
approaches have been proposed to learn the intrinsic structure of data,
including both linear and nonlinear ones, supervised and unsupervised ones. Particularly,
deep architectures are widely applied for representation learning in recent
years, and have delivered top results in many tasks, such as image
classification, object detection and speech recognition. In this paper, we
review the development of data representation learning methods. Specifically,
we investigate both traditional feature learning algorithms and
state-of-the-art deep learning models. The history of data representation
learning is introduced, while available resources (e.g. online course, tutorial
and book information) and toolboxes are provided. Finally, we conclude this
paper with remarks and some interesting research directions on data
representation learning.
| Guoqiang Zhong, Li-Na Wang, Junyu Dong | null | 1611.08331 | null | null |
Local Discriminant Hyperalignment for multi-subject fMRI data alignment | stat.ML cs.AI cs.LG | Multivariate Pattern (MVP) classification can map different cognitive states
to brain tasks. One of the main challenges in MVP analysis is validating
the generated results across subjects. However, analyzing multi-subject fMRI
data requires accurate functional alignments between neuronal activities of
different subjects, which can rapidly increase the performance and robustness
of the final results. Hyperalignment (HA) is one of the most effective
functional alignment methods, which can be mathematically formulated by the
Canonical Correlation Analysis (CCA) methods. Since HA mostly uses the
unsupervised CCA techniques, its solution may not be optimized for MVP
analysis. By incorporating the idea of Local Discriminant Analysis (LDA) into
CCA, this paper proposes Local Discriminant Hyperalignment (LDHA) as a novel
supervised HA method, which can provide better functional alignment for MVP
analysis. Indeed, the locality is defined based on the stimuli categories in
the train-set, where the correlation between all stimuli in the same category
will be maximized and the correlation between distinct categories of stimuli
is pushed to near zero. Experimental studies on multi-subject MVP analysis
confirm that the LDHA method achieves superior performance to other
state-of-the-art HA algorithms.
| Muhammad Yousefnezhad, Daoqiang Zhang | null | 1611.08366 | null | null |
A Unified Convex Surrogate for the Schatten-$p$ Norm | stat.ML cs.LG math.NA math.OC | The Schatten-$p$ norm ($0<p<1$) has been widely used to replace the nuclear
norm for better approximating the rank function. However, existing methods are
either 1) not scalable for large scale problems due to relying on singular
value decomposition (SVD) in every iteration, or 2) specific to some $p$
values, e.g., $1/2$, and $2/3$. In this paper, we show that for any $p$, $p_1$,
and $p_2 >0$ satisfying $1/p=1/p_1+1/p_2$, there is an equivalence between the
Schatten-$p$ norm of one matrix and the Schatten-$p_1$ and the Schatten-$p_2$
norms of its two factor matrices. We further extend the equivalence to multiple
factor matrices and show that all the factor norms can be convex and smooth for
any $p>0$. In contrast, the original Schatten-$p$ norm for $0<p<1$ is
non-convex and non-smooth. As an example we conduct experiments on matrix
completion. To utilize the convexity of the factor matrix norms, we adopt the
accelerated proximal alternating linearized minimization algorithm and
establish its sequence convergence. Experiments on both synthetic and real
datasets exhibit its superior performance over the state-of-the-art methods.
Its speed is also highly competitive.
| Chen Xu, Zhouchen Lin, Hongbin Zha | null | 1611.08372 | null | null |
Bidirectional LSTM-CRF for Clinical Concept Extraction | stat.ML cs.CL cs.LG | Automated extraction of concepts from patient clinical records is an
essential facilitator of clinical research. For this reason, the 2010 i2b2/VA
Natural Language Processing Challenges for Clinical Records introduced a
concept extraction task aimed at identifying and classifying concepts into
predefined categories (i.e., treatments, tests and problems). State-of-the-art
concept extraction approaches heavily rely on handcrafted features and
domain-specific resources which are hard to collect and define. For this
reason, this paper proposes an alternative, streamlined approach: a recurrent
neural network (the bidirectional LSTM with CRF decoding) initialized with
general-purpose, off-the-shelf word embeddings. The experimental results
achieved on the 2010 i2b2/VA reference corpora using the proposed framework
outperform all recent methods and rank close to the best submission from the
original 2010 i2b2/VA challenge.
| Raghavendra Chalapathy, Ehsan Zare Borzeshi, Massimo Piccardi | null | 1611.08373 | null | null |
Distributed Optimization of Multi-Class SVMs | stat.ML cs.LG | Training of one-vs.-rest SVMs can be parallelized over the number of classes
in a straightforward way. Given enough computational resources, one-vs.-rest
SVMs can thus be trained on data involving a large number of classes. The same
cannot be stated, however, for the so-called all-in-one SVMs, which require
solving a quadratic program whose size grows quadratically in the number of classes. We
develop distributed algorithms for two all-in-one SVM formulations (Lee et al.
and Weston and Watkins) that parallelize the computation evenly over the number
of classes. This allows us to compare these models to one-vs.-rest SVMs on
an unprecedented scale. The results indicate superior accuracy on text
classification data.
| Maximilian Alber, Julian Zimmert, Urun Dogan, Marius Kloft | 10.1371/journal.pone.0178161 | 1611.0848 | null | null |
On the Exponentially Weighted Aggregate with the Laplace Prior | math.ST cs.LG stat.TH | In this paper, we study the statistical behaviour of the Exponentially
Weighted Aggregate (EWA) in the problem of high-dimensional regression with
fixed design. Under the assumption that the underlying regression vector is
sparse, it is reasonable to use the Laplace distribution as a prior. The
resulting estimator and, specifically, a particular instance of it referred to
as the Bayesian lasso, was already used in the statistical literature because
of its computational convenience, even though no thorough mathematical analysis
of its statistical properties was carried out. The present work fills this gap
by establishing sharp oracle inequalities for the EWA with the Laplace prior.
These inequalities show that if the temperature parameter is small, the EWA
with the Laplace prior satisfies the same type of oracle inequality as the
lasso estimator does, as long as the quality of estimation is measured by the
prediction loss. Extensions of the proposed methodology to the problem of
prediction with low-rank matrices are considered.
| Arnak S. Dalalyan, Edwin Grappin, Quentin Paris | null | 1611.08483 | null | null |
Bottleneck Conditional Density Estimation | stat.ML cs.LG | We introduce a new framework for training deep generative models for
high-dimensional conditional density estimation. The Bottleneck Conditional
Density Estimator (BCDE) is a variant of the conditional variational
autoencoder (CVAE) that employs layer(s) of stochastic variables as the
bottleneck between the input $x$ and target $y$, where both are
high-dimensional. Crucially, we propose a new hybrid training method that
blends the conditional generative model with a joint generative model. Hybrid
blending is the key to effective training of the BCDE, which avoids overfitting
and provides a novel mechanism for leveraging unlabeled data. We show that our
hybrid training procedure enables models to achieve competitive results in the
MNIST quadrant prediction task in the fully-supervised setting, and sets new
benchmarks in the semi-supervised regime for MNIST, SVHN, and CelebA.
| Rui Shu, Hung H. Bui, Mohammad Ghavamzadeh | null | 1611.08568 | null | null |
A Benchmark and Comparison of Active Learning for Logistic Regression | stat.ML cs.LG | Logistic regression is by far the most widely used classifier in real-world
applications. In this paper, we benchmark the state-of-the-art active learning
methods for logistic regression and discuss and illustrate their underlying
characteristics. Experiments are carried out on three synthetic datasets and 44
real-world datasets, providing insight into the behaviors of these active
learning methods with respect to the area of the learning curve (which plots
classification accuracy as a function of the number of queried examples) and
their computational costs. Surprisingly, one of the earliest and simplest
suggested active learning methods, i.e., uncertainty sampling, performs
exceptionally well overall. Another remarkable finding is that random sampling,
which is the rudimentary baseline to improve upon, is not overwhelmed by
individual active learning techniques in many cases.
| Yazhou Yang, Marco Loog | 10.1016/j.patcog.2018.06.004 | 1611.08618 | null | null |
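A minimal sketch of uncertainty sampling for logistic regression, the simple baseline the benchmark finds surprisingly strong; the datasets, query budget, and evaluation protocol of the paper are not reproduced and are placeholders here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(X_pool, y_pool, n_init=10, n_queries=50, seed=0):
    """Greedily query the pool point whose predicted probability is closest to 0.5.
    Assumes the random initial sample contains both classes (a sketch-level shortcut)."""
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X_pool), size=n_init, replace=False))
    for _ in range(n_queries):
        clf = LogisticRegression(max_iter=1000).fit(X_pool[labeled], y_pool[labeled])
        proba = clf.predict_proba(X_pool)[:, 1]
        uncertainty = 1.0 - np.abs(proba - 0.5) * 2.0   # highest near p = 0.5
        uncertainty[labeled] = -np.inf                  # never re-query a labeled point
        labeled.append(int(np.argmax(uncertainty)))
    return clf, labeled
```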
Patient-Driven Privacy Control through Generalized Distillation | cs.CR cs.CY cs.LG stat.ML | The introduction of data analytics into medicine has changed the nature of
patient treatment. In this, patients are asked to disclose personal information
such as genetic markers, lifestyle habits, and clinical history. This data is
then used by statistical models to predict personalized treatments. However,
due to privacy concerns, patients often desire to withhold sensitive
information. This self-censorship can impede proper diagnosis and treatment,
which may lead to serious health complications and even death over time. In
this paper, we present privacy distillation, a mechanism which allows patients
to control the type and amount of information they wish to disclose to the
healthcare providers for use in statistical models. Meanwhile, it retains the
accuracy of models that have access to all patient data under a sufficient but
not full set of privacy-relevant information. We validate privacy distillation
using a corpus of patients prescribed warfarin for a personalized dosage. We
use a deep neural network to implement privacy distillation for training and
making dose predictions. We find that privacy distillation with sufficient
privacy-relevant information i) retains accuracy almost as good as having all
patient data (only 3\% worse), and ii) is effective at preventing errors that
introduce health-related risks (only 3.9\% worse under- or over-prescriptions).
| Z. Berkay Celik, David Lopez-Paz, Patrick McDaniel | null | 1611.08648 | null | null |
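A hedged sketch of a distillation-style objective of the kind privacy distillation builds on: a student that sees only the disclosed features imitates the soft predictions of a teacher trained with the full set of privacy-relevant features. The temperature, weighting, and the warfarin dosing setup are placeholders, and the paper's exact generalized-distillation formulation is not reproduced.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=2.0, alpha=0.5):
    """Blend imitation of the teacher's softened predictions with the hard labels."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard
```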
A Deep Neural Network to identify foreshocks in real time | physics.geo-ph cs.LG | Foreshock events provide valuable insight for predicting imminent major
earthquakes. However, it is difficult to identify them in real time. In this
paper, I propose an algorithm based on deep learning to instantaneously
classify a seismic waveform as a foreshock, mainshock or an aftershock event
achieving a high accuracy of 99% in classification. As a result, this is by far
the most reliable method to predict major earthquakes that are preceded by
foreshocks. In addition, I discuss methods to create an earthquake dataset that
is compatible with deep networks.
| K.Vikraman | null | 1611.08655 | null | null |
Training an Interactive Humanoid Robot Using Multimodal Deep
Reinforcement Learning | cs.LG cs.AI cs.RO | Training robots to perceive, act and communicate using multiple modalities
still represents a challenging problem, particularly if robots are expected to
learn efficiently from small sets of example interactions. We describe a
learning approach as a step in this direction, where we teach a humanoid robot
how to play the game of noughts and crosses. Given that multiple multimodal
skills can be trained to play this game, we focus our attention to training the
robot to perceive the game, and to interact in this game. Our multimodal deep
reinforcement learning agent perceives multimodal features and exhibits verbal
and non-verbal actions while playing. Experimental results using simulations
show that the robot can learn to win or draw up to 98% of the games. A pilot
test of the proposed multimodal system for the targeted game---integrating
speech, vision and gestures---reports that reasonable and fluent interactions
can be achieved using the proposed approach.
| Heriberto Cuay\'ahuitl, Guillaume Couly, Cl\'ement Olalainty | null | 1611.08666 | null | null |
Visual Dialog | cs.CV cs.AI cs.CL cs.LG | We introduce the task of Visual Dialog, which requires an AI agent to hold a
meaningful dialog with humans in natural, conversational language about visual
content. Specifically, given an image, a dialog history, and a question about
the image, the agent has to ground the question in image, infer context from
history, and answer the question accurately. Visual Dialog is disentangled
enough from a specific downstream task so as to serve as a general test of
machine intelligence, while being grounded in vision enough to allow objective
evaluation of individual responses and benchmark progress. We develop a novel
two-person chat data-collection protocol to curate a large-scale Visual Dialog
dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10
question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog
question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with
3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --
and 2 decoders (generative and discriminative), which outperform a number of
sophisticated baselines. We propose a retrieval-based evaluation protocol for
Visual Dialog where the AI agent is asked to sort a set of candidate answers
and evaluated on metrics such as mean-reciprocal-rank of human response. We
quantify gap between machine and human performance on the Visual Dialog task
via human studies. Putting it all together, we demonstrate the first 'visual
chatbot'! Our dataset, code, trained models and visual chatbot are available on
https://visualdialog.org
| Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav,
Jos\'e M. F. Moura, Devi Parikh, Dhruv Batra | null | 1611.08669 | null | null |
Deep Reinforcement Learning for Multi-Domain Dialogue Systems | cs.AI cs.CL cs.LG | Standard deep reinforcement learning methods such as Deep Q-Networks (DQN)
for multiple tasks (domains) face scalability problems. We propose a method for
multi-domain dialogue policy learning---termed NDQN, and apply it to an
information-seeking spoken dialogue system in the domains of restaurants and
hotels. Experimental results comparing DQN (baseline) versus NDQN (proposed)
using simulations report that our proposed method exhibits better scalability
and is promising for optimising the behaviour of multi-domain dialogue systems.
| Heriberto Cuay\'ahuitl, Seunghak Yu, Ashley Williamson, Jacob Carse | null | 1611.08675 | null | null |
Machine Learning on Human Connectome Data from MRI | cs.LG q-bio.NC stat.ML | Functional MRI (fMRI) and diffusion MRI (dMRI) are non-invasive imaging
modalities that allow in-vivo analysis of a patient's brain network (known as a
connectome). Use of these technologies has enabled faster and better diagnoses
and treatments of neurological disorders and a deeper understanding of the
human brain. Recently, researchers have been exploring the application of
machine learning models to connectome data in order to predict clinical
outcomes and analyze the importance of subnetworks in the brain. Connectome
data has unique properties, which present both special challenges and
opportunities when used for machine learning. The purpose of this work is to
review the literature on the topic of applying machine learning models to
MRI-based connectome data. This field is growing rapidly and now encompasses a
large body of research. To summarize the research done to date, we provide a
comparative, structured summary of 77 relevant works, tabulated according to
different criteria, that represent the majority of the literature on this
topic. (We also published a living version of this table online at
http://connectomelearning.cs.sfu.ca that the community can continue to
contribute to.) After giving an overview of how connectomes are constructed
from dMRI and fMRI data, we discuss the variety of machine learning tasks that
have been explored with connectome data. We then compare the advantages and
drawbacks of different machine learning approaches that have been employed,
discussing different feature selection and feature extraction schemes, as well
as the learning models and regularization penalties themselves. Throughout this
discussion, we focus particularly on how the methods are adapted to the unique
nature of graphical connectome data. Finally, we conclude by summarizing the
current state of the art and by outlining what we believe are strategic
directions for future research.
| Colin J Brown, Ghassan Hamarneh | null | 1611.08699 | null | null |
BliStrTune: Hierarchical Invention of Theorem Proving Strategies | cs.LO cs.AI cs.LG | Inventing targeted proof search strategies for specific problem sets is a
difficult task. State-of-the-art automated theorem provers (ATPs) such as E
allow a large number of user-specified proof search strategies described in a
rich domain specific language. Several machine learning methods that invent
strategies automatically for ATPs were proposed previously. One of them is the
Blind Strategymaker (BliStr), a system for automated invention of ATP
strategies.
In this paper we introduce BliStrTune -- a hierarchical extension of BliStr.
BliStrTune allows exploring much larger space of E strategies by interleaving
search for high-level parameters with their fine-tuning. We use BliStrTune to
invent new strategies based also on new clause weight functions targeted at
problems from large ITP libraries. We show that the new strategies
significantly improve E's performance in solving problems from the Mizar
Mathematical Library.
| Jan Jakubuv, Josef Urban | null | 1611.08733 | null | null |
Structural Correspondence Learning for Cross-lingual Sentiment
Classification with One-to-many Mappings | cs.LG cs.CL stat.ML | Structural correspondence learning (SCL) is an effective method for
cross-lingual sentiment classification. This approach uses unlabeled documents
along with a word translation oracle to automatically induce task specific,
cross-lingual correspondences. It transfers knowledge through identifying
important features, i.e., pivot features. For simplicity, however, it assumes
that the word translation oracle maps each pivot feature in the source language
to exactly one word in the target language. This one-to-one mapping between
words in different languages is too strict. Moreover, context is not considered at
all. In this paper, we propose a cross-lingual SCL based on distributed
representation of words; it can learn meaningful one-to-many mappings for pivot
words using large amounts of monolingual data and a small dictionary. We
conduct experiments on NLP\&CC 2013 cross-lingual sentiment analysis dataset,
employing English as the source language and Chinese as the target language. Our
method does not rely on the parallel corpora and the experimental results show
that our approach is more competitive than the state-of-the-art methods in
cross-lingual sentiment classification.
| Nana Li, Shuangfei Zhai, Zhongfei Zhang, Boying Liu | null | 1611.08737 | null | null |
What Can Be Predicted from Six Seconds of Driver Glances? | cs.CV cs.HC cs.LG | We consider a large dataset of real-world, on-road driving from a 100-car
naturalistic study to explore the predictive power of driver glances and,
specifically, to answer the following question: what can be predicted about the
state of the driver and the state of the driving environment from a 6-second
sequence of macro-glances? The context-based nature of such glances allows for
application of supervised learning to the problem of vision-based gaze
estimation, making it robust, accurate, and reliable in messy, real-world
conditions. It is therefore natural to ask whether such macro-glances can be
used to infer behavioral, environmental, and demographic variables. We analyze 27
binary classification problems based on these variables. The takeaway is that
glance information can be used as part of a multi-sensor real-time system to predict
radio-tuning, fatigue state, failure to signal, talking, and several
environment variables.
| Lex Fridman, Heishiro Toyoda, Sean Seaman, Bobbie Seppelt, Linda
Angell, Joonbum Lee, Bruce Mehler, Bryan Reimer | null | 1611.08754 | null | null |
Should I use TensorFlow | cs.LG stat.ML | Google's Machine Learning framework TensorFlow was open-sourced in November
2015 [1] and has since built a growing community around it. TensorFlow is
supposed to be flexible for research purposes while also allowing its models to
be deployed in production. This work is aimed at people with experience in
Machine Learning considering whether they should use TensorFlow in their
environment. Several aspects of the framework important for such a decision are
examined, such as its heterogeneity, extensibility, and computation graph. A
pure Python implementation of linear classification is compared with an
implementation utilizing TensorFlow. I also contrast TensorFlow to other
popular frameworks with respect to modeling capability, deployment and
performance and give a brief description of the current adoption of the
framework.
| Martin Schrimpf | null | 1611.08903 | null | null |
Deep attractor network for single-microphone speaker separation | cs.SD cs.LG | Despite the overwhelming success of deep learning in various speech
processing tasks, the problem of separating simultaneous speakers in a mixture
remains challenging. Two major difficulties in such systems are the arbitrary
source permutation and unknown number of sources in the mixture. We propose a
novel deep learning framework for single channel speech separation by creating
attractor points in a high-dimensional embedding space of the acoustic signals
which pull together the time-frequency bins corresponding to each source.
Attractor points in this study are created by finding the centroids of the
sources in the embedding space, which are subsequently used to determine the
similarity of each bin in the mixture to each source. The network is then
trained to minimize the reconstruction error of each source by optimizing the
embeddings. The proposed model is different from prior works in that it
implements end-to-end training, and it does not depend on the number of
sources in the mixture. Two strategies are explored at test time, K-means
and fixed attractor points, where the latter requires no post-processing and
can be implemented in real time. We evaluate our system on the Wall Street Journal
dataset and show a 5.49% improvement over previous state-of-the-art methods.
| Zhuo Chen, Yi Luo, Nima Mesgarani | 10.1109/ICASSP.2017.7952155 | 1611.0893 | null | null |
Learning a Natural Language Interface with Neural Programmer | cs.CL cs.LG stat.ML | Learning a natural language interface for database tables is a challenging
task that involves deep language understanding and multi-step reasoning. The
task is often approached by mapping natural language queries to logical forms
or programs that provide the desired response when executed on the database. To
our knowledge, this paper presents the first weakly supervised, end-to-end
neural network model to induce such programs on a real-world dataset. We
enhance the objective function of Neural Programmer, a neural network with
built-in discrete operations, and apply it on WikiTableQuestions, a natural
language question-answering dataset. The model is trained end-to-end with weak
supervision of question-answer pairs, and does not require domain-specific
grammars, rules, or annotations that are key elements in previous approaches to
program induction. The main experimental result in this paper is that a single
Neural Programmer model achieves 34.2% accuracy using only 10,000 examples with
weak supervision. An ensemble of 15 models, with a trivial combination
technique, achieves 37.7% accuracy, which is competitive to the current
state-of-the-art accuracy of 37.1% obtained by a traditional natural language
semantic parser.
| Arvind Neelakantan, Quoc V. Le, Martin Abadi, Andrew McCallum, Dario
Amodei | null | 1611.08945 | null | null |
DeepSetNet: Predicting Sets with Deep Neural Networks | cs.CV cs.AI cs.LG | This paper addresses the task of set prediction using deep learning. This is
important because the outputs of many computer vision tasks, including image
tagging and object detection, are naturally expressed as sets of entities
rather than vectors. As opposed to a vector, the size of a set is not fixed in
advance, and it is invariant to the ordering of entities within it. We define a
likelihood for a set distribution and learn its parameters using a deep neural
network. We also derive a loss for predicting a discrete distribution
corresponding to set cardinality. Set prediction is demonstrated on the problem
of multi-class image classification. Moreover, we show that the proposed
cardinality loss can also trivially be applied to the tasks of object counting
and pedestrian detection. Our approach outperforms existing methods in all
three cases on standard datasets.
| S. Hamid Rezatofighi, Vijay Kumar B G, Anton Milan, Ehsan Abbasnejad,
Anthony Dick, Ian Reid | null | 1611.08998 | null | null |
Image Based Appraisal of Real Estate Properties | cs.CV cs.LG | Real estate appraisal, which is the process of estimating the price for real
estate properties, is crucial for both buyers and sellers as the basis for
negotiation and transaction. Traditionally, the repeat sales model has been
widely adopted to estimate real estate prices. However, it depends on the design
and calculation of a complex economics-related index, which is challenging to
estimate accurately. Today, real estate brokers provide easy access to detailed
online information on real estate properties to their clients. We are
interested in estimating real estate prices from these large amounts of
easily accessed data. In particular, we analyze the predictive power of online
house pictures, which are one of the key factors for online users to make a
potential visiting decision. The development of robust computer vision
algorithms makes the analysis of visual content possible. In this work, we
employ a Recurrent Neural Network (RNN) to predict real estate prices using
state-of-the-art visual features. The experimental results indicate that our
model outperforms several other state-of-the-art baseline algorithms in
terms of both mean absolute error (MAE) and mean absolute percentage error
(MAPE).
| Quanzeng You, Ran Pang, Liangliang Cao, Jiebo Luo | 10.1109/TMM.2017.2710804 | 1611.0918 | null | null |
Times series averaging and denoising from a probabilistic perspective on
time-elastic kernels | cs.LG cs.IR | In the light of regularized dynamic time warping kernels, this paper
reconsiders the concept of time elastic centroid for a set of time series. We
derive a new algorithm based on a probabilistic interpretation of kernel
alignment matrices. This algorithm expresses the averaging process in terms of a
stochastic alignment automaton. It uses an iterative agglomerative heuristic
method for averaging the aligned samples, while also averaging the times of
occurrence of the aligned samples. By comparing classification accuracies for 45
heterogeneous time series datasets obtained by first nearest centroid/medoid
classifiers, we show that: i) centroid-based approaches significantly outperform
medoid-based approaches, and ii) for the considered datasets, our algorithm, which
combines averaging in the sample space and along the time axes, emerges as the
most significantly robust model for time-elastic averaging, with a promising
noise reduction capability. We also demonstrate its benefit in an isolated
gesture recognition experiment and its ability to significantly reduce the size
of training instance sets. Finally, we highlight its denoising capability using
demonstrative synthetic data: we show that it is possible to retrieve, from a few
noisy instances, a signal whose components are scattered in a wide spectral
band.
| Pierre-Fran\c{c}ois Marteau (EXPRESSION) | 10.2478/amcs-2019-0028 | 1611.09194 | null | null |