title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
The CMA Evolution Strategy: A Tutorial | cs.LG stat.ML | This tutorial introduces the CMA Evolution Strategy (ES), where CMA stands
for Covariance Matrix Adaptation. The CMA-ES is a stochastic, or randomized,
method for real-parameter (continuous domain) optimization of non-linear,
non-convex functions. We try to motivate and derive the algorithm from
intuitive concepts and from the requirements of non-linear, non-convex search
in a continuous domain.
| Nikolaus Hansen (TAO) | null | 1604.00772 | null | null |
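As a companion to the tutorial's description, here is a minimal sketch of the CMA-ES sample-rank-update loop. It keeps only the rank-mu covariance update; evolution paths, step-size adaptation, and the tutorial's recommended parameter settings are omitted, and the sphere objective, population size, and learning rate below are illustrative assumptions, not the tutorial's values.

```python
import numpy as np

def cma_es_sketch(f, x0, sigma=0.5, iters=100, lam=12):
    # Simplified CMA-ES: sample candidates, rank them, adapt mean
    # and covariance. Rank-mu update only; evolution paths omitted.
    n = len(x0)
    mu = lam // 2
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                       # positive recombination weights
    mean, C = np.array(x0, float), np.eye(n)
    c_mu = 0.3                         # illustrative learning rate
    for _ in range(iters):
        A = np.linalg.cholesky(C)
        z = np.random.randn(lam, n)
        X = mean + sigma * z @ A.T     # candidate solutions ~ N(mean, sigma^2 C)
        idx = np.argsort([f(x) for x in X])[:mu]   # keep the mu best
        y = (X[idx] - mean) / sigma
        mean = mean + sigma * (w @ y)  # move mean toward the best samples
        C = (1 - c_mu) * C + c_mu * (y.T * w) @ y  # rank-mu covariance update
    return mean

best = cma_es_sketch(lambda x: np.sum(x**2), x0=np.ones(5))
```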
Topic Model Based Multi-Label Classification from the Crowd | cs.LG | Multi-label classification is a common supervised machine learning problem
where each instance is associated with multiple classes. The key challenge in
this problem is learning the correlations between the classes. An additional
challenge arises when the labels of the training instances are provided by
noisy, heterogeneous crowdworkers with unknown qualities. We first assume
labels from a perfect source and propose a novel topic model where the present
as well as the absent classes generate the latent topics and hence the words.
We non-trivially extend our topic model to the scenario where the labels are
provided by noisy crowdworkers. Extensive experimentation on real world
datasets reveals the superior performance of the proposed model. The proposed
model learns the qualities of the annotators as well, even with minimal
training data.
| Divya Padmanabhan, Satyanath Bhat, Shirish Shevade, Y. Narahari | null | 1604.00783 | null | null |
Achieving Open Vocabulary Neural Machine Translation with Hybrid
Word-Character Models | cs.CL cs.LG | Nearly all previous work on neural machine translation (NMT) has used quite
restricted vocabularies, perhaps with a subsequent method to patch in unknown
words. This paper presents a novel word-character solution to achieving open
vocabulary NMT. We build hybrid systems that translate mostly at the word level
and consult the character components for rare words. Our character-level
recurrent neural networks compute source word representations and recover
unknown target words when needed. The twofold advantage of such a hybrid
approach is that it is much faster and easier to train than character-based
ones; at the same time, it never produces unknown words as in the case of
word-based models. On the WMT'15 English-to-Czech translation task, this hybrid
approach offers an additional boost of +2.1 to +11.4 BLEU points over models that
already handle unknown words. Our best system achieves a new state-of-the-art
result with a BLEU score of 20.7. We demonstrate that our character models can
successfully learn to not only generate well-formed words for Czech, a
highly-inflected language with a very complex vocabulary, but also build
correct representations for English source words.
| Minh-Thang Luong, Christopher D. Manning | null | 1604.00788 | null | null |
Recurrent Neural Networks for Polyphonic Sound Event Detection in Real
Life Recordings | cs.SD cs.LG cs.NE | In this paper we present an approach to polyphonic sound event detection in
real-life recordings based on bi-directional long short-term memory (BLSTM)
recurrent neural networks (RNNs). A single multilabel BLSTM RNN is trained to
map acoustic features of a mixture signal consisting of sounds from multiple
classes, to binary activity indicators of each event class. Our method is
tested on a large database of real-life recordings, with 61 classes (e.g.
music, car, speech) from 10 different everyday contexts. The proposed method
outperforms previous approaches by a large margin, and the results are further
improved using data augmentation techniques. Overall, our system reports an
average F1-score of 65.5% on 1-second blocks and 64.7% on single frames,
relative improvements of 6.8% and 15.1%, respectively, over the previous
state-of-the-art approach.
| Giambattista Parascandolo, Heikki Huttunen, Tuomas Virtanen | 10.1109/ICASSP.2016.7472917 | 1604.00861 | null | null |
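The multilabel BLSTM architecture the abstract describes maps a sequence of acoustic feature frames to per-frame binary activity indicators. A Keras sketch follows; the feature dimension, layer sizes, and sequence length are illustrative assumptions (only the 61-class count comes from the abstract), and the dummy data stands in for real acoustic features.

```python
import numpy as np
from tensorflow.keras import layers, models

n_feats, n_classes, seq_len = 40, 61, 100   # 40 features per frame is assumed

model = models.Sequential([
    layers.Bidirectional(layers.LSTM(128, return_sequences=True),
                         input_shape=(seq_len, n_feats)),
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    # Sigmoid (not softmax): several event classes may be active per frame.
    layers.TimeDistributed(layers.Dense(n_classes, activation="sigmoid")),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

X = np.random.randn(8, seq_len, n_feats)                    # dummy batch
Y = (np.random.rand(8, seq_len, n_classes) > 0.9).astype("float32")
model.fit(X, Y, epochs=1, verbose=0)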
Data-Efficient Off-Policy Policy Evaluation for Reinforcement Learning | cs.LG cs.AI | In this paper we present a new way of predicting the performance of a
reinforcement learning policy given historical data that may have been
generated by a different policy. The ability to evaluate a policy from
historical data is important for applications where the deployment of a bad
policy can be dangerous or costly. We show empirically that our algorithm
produces estimates that often have orders of magnitude lower mean squared error
than existing methods---it makes more efficient use of the available data. Our
new estimator is based on two advances: an extension of the doubly robust
estimator (Jiang and Li, 2015), and a new way to mix between model based
estimates and importance sampling based estimates.
| Philip S. Thomas and Emma Brunskill | null | 1604.00923 | null | null |
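For context, the per-decision doubly robust estimator of Jiang and Li (2015), which the paper extends, blends a model-based value estimate with importance-sampled corrections. The sketch below implements that standard DR form, not the paper's new estimator; the trajectory encoding and the `q_hat` model are assumptions for illustration.

```python
import numpy as np

def dr_estimate(traj, pi_e, pi_b, q_hat, actions, gamma=0.99):
    """Per-decision doubly robust estimate for one logged trajectory.
    traj: list of (state, action, reward); pi_e/pi_b: evaluation and
    behavior policy probabilities pi(a, s); q_hat: approximate Q model.
    A sketch of the DR baseline, not the paper's blended estimator."""
    v_hat = lambda s: sum(pi_e(b, s) * q_hat(s, b) for b in actions)
    est, rho = 0.0, 1.0
    for t, (s, a, r) in enumerate(traj):
        rho_prev = rho
        rho *= pi_e(a, s) / pi_b(a, s)            # cumulative importance weight
        # model baseline plus importance-weighted correction
        est += gamma**t * (rho * r - (rho * q_hat(s, a) - rho_prev * v_hat(s)))
    return est

# final value estimate = np.mean([dr_estimate(tr, ...) for tr in logged_trajs])
```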
Revisiting Distributed Synchronous SGD | cs.LG cs.DC cs.NE | Distributed training of deep learning models on large-scale training data is
typically conducted with asynchronous stochastic optimization to maximize the
rate of updates, at the cost of additional noise introduced from asynchrony. In
contrast, the synchronous approach is often thought to be impractical due to
idle time wasted on waiting for straggling workers. We revisit these
conventional beliefs in this paper, and examine the weaknesses of both
approaches. We demonstrate that a third approach, synchronous optimization with
backup workers, can avoid asynchronous noise while mitigating the impact of the worst
stragglers. Our approach is empirically validated and shown to converge faster
and to better test accuracies.
| Jianmin Chen, Xinghao Pan, Rajat Monga, Samy Bengio and Rafal
Jozefowicz | null | 1604.00981 | null | null |
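The backup-worker idea can be sketched directly: launch N gradient tasks per step but aggregate only the first N - b to finish, dropping the stragglers. The toy simulation below is illustrative only; the worker timings, gradient shapes, and counts are assumptions, not the paper's setup.

```python
import numpy as np

def sync_step_with_backups(grads_and_times, n_workers, n_backup):
    """Aggregate gradients from the fastest (n_workers - n_backup) workers;
    stragglers' results are dropped (toy model of synchronous SGD with
    backup workers, not the paper's distributed system)."""
    order = np.argsort([t for _, t in grads_and_times])
    keep = order[: n_workers - n_backup]
    step_time = grads_and_times[keep[-1]][1]      # slowest kept worker
    g = np.mean([grads_and_times[i][0] for i in keep], axis=0)
    return g, step_time

rng = np.random.default_rng(0)
tasks = [(rng.standard_normal(10), rng.exponential() + 1.0)
         for _ in range(50)]                      # (gradient, finish time)
g, t = sync_step_with_backups(tasks, n_workers=50, n_backup=5)
```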
Accurate and scalable social recommendation using mixed-membership
stochastic block models | cs.SI cs.IR cs.LG physics.soc-ph | With ever-increasing amounts of online information available, modeling and
predicting individual preferences---for books or articles, for example---is
becoming more and more important. Good predictions enable us to improve advice
to users, and obtain a better understanding of the socio-psychological
processes that determine those preferences. We have developed a collaborative
filtering model, with an associated scalable algorithm, that makes accurate
predictions of individuals' preferences. Our approach is based on the explicit
assumption that there are groups of individuals and of items, and that the
preferences of an individual for an item are determined only by their group
memberships. Importantly, we allow each individual and each item to belong
simultaneously to mixtures of different groups and, unlike many popular
approaches, such as matrix factorization, we do not assume implicitly or
explicitly that individuals in each group prefer items in a single group of
items. The resulting overlapping groups and the predicted preferences can be
inferred with an expectation-maximization algorithm whose running time scales
linearly (per iteration). Our approach enables us to predict individual
preferences in large datasets, and is considerably more accurate than the
current algorithms for such large datasets.
| Antonia Godoy-Lorite, Roger Guimera, Cristopher Moore, Marta
Sales-Pardo | 10.1073/pnas.1606316113 | 1604.01170 | null | null |
Feature extraction using Latent Dirichlet Allocation and Neural
Networks: A case study on movie synopses | cs.CL cs.AI cs.IR cs.LG stat.ML | Feature extraction has gained increasing attention in the field of machine
learning: to detect patterns, extract information, or predict future
observations from big data, informative features are crucial. The process of
extracting features is closely linked to dimensionality reduction, as it
implies transforming the data from a sparse, high-dimensional space to
higher-level, meaningful abstractions. This dissertation employs
Neural Networks for distributed paragraph representations, and Latent Dirichlet
Allocation to capture higher level features of paragraph vectors. Although
Neural Networks for distributed paragraph representations are considered the
state of the art for extracting paragraph vectors, we show that a quick topic
analysis model such as Latent Dirichlet Allocation can provide meaningful
features too. We evaluate the two methods on the CMU Movie Summary Corpus, a
collection of 25,203 movie plot summaries extracted from Wikipedia. Finally,
for both approaches, we use K-Nearest Neighbors to discover similar movies, and
plot the projected representations using T-Distributed Stochastic Neighbor
Embedding to depict the context similarities. These similarities, expressed as
movie distances, can be used for movies recommendation. The recommended movies
of this approach are compared with the recommended movies from IMDB, which uses
a collaborative filtering recommendation approach, to show that our two models
could constitute either an alternative or a supplementary recommendation
approach.
| Despoina Christou | null | 1604.01272 | null | null |
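The LDA side of the comparison is easy to sketch with scikit-learn: bag-of-words counts, a fitted topic model as the feature extractor, and nearest neighbours in topic space for similar-movie retrieval. The tiny corpus and parameter choices below are placeholders, not the dissertation's setup.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.neighbors import NearestNeighbors

synopses = ["a detective hunts a killer", "robots invade the earth",
            "a killer stalks a small town"]        # placeholder corpus

counts = CountVectorizer(stop_words="english").fit_transform(synopses)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_feats = lda.fit_transform(counts)   # documents as topic mixtures

# similar movies = nearest neighbours in the topic-feature space
nn = NearestNeighbors(n_neighbors=2).fit(topic_feats)
dist, idx = nn.kneighbors(topic_feats[:1])
```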
Towards Label Imbalance in Multi-label Classification with Many Labels | cs.LG | In multi-label classification, an instance may be associated with a set of
labels simultaneously. Recently, the research on multi-label classification has
largely shifted its focus to the other end of the spectrum where the number of
labels is assumed to be extremely large. The existing works focus on how to
design scalable algorithms that offer fast training procedures and have a small
memory footprint. However, they ignore and even compound another challenge: the
label imbalance problem. To address this drawback, we propose a novel
Representation-based Multi-label Learning with Sampling (RMLS) approach. To the
best of our knowledge, we are the first to tackle the imbalance problem in
multi-label classification with many labels. Our experiments on
real-world datasets demonstrate the effectiveness of the proposed approach.
| Li Li and Houfeng Wang | null | 1604.01304 | null | null |
Bayesian Optimization with Exponential Convergence | stat.ML cs.LG | This paper presents a Bayesian optimization method with exponential
convergence without the need of auxiliary optimization and without the
delta-cover sampling. Most Bayesian optimization methods require auxiliary
optimization: an additional non-convex global optimization problem, which can
be time-consuming and hard to implement in practice. Also, the existing
Bayesian optimization method with exponential convergence requires access to
the delta-cover sampling, which was considered to be impractical. Our approach
eliminates both requirements and achieves an exponential convergence rate.
| Kenji Kawaguchi, Leslie Pack Kaelbling, Tom\'as Lozano-P\'erez | null | 1604.01348 | null | null |
Bounded Optimal Exploration in MDP | cs.AI cs.LG | Within the framework of probably approximately correct Markov decision
processes (PAC-MDP), much theoretical work has focused on methods to attain
near optimality after a relatively long period of learning and exploration.
However, practical concerns require the attainment of satisfactory behavior
within a short period of time. In this paper, we relax the PAC-MDP conditions
to reconcile theoretically driven exploration methods and practical needs. We
propose simple algorithms for discrete and continuous state spaces, and
illustrate the benefits of our proposed relaxation via theoretical analyses and
numerical examples. Our algorithms also maintain anytime error bounds and
average loss bounds. Our approach accommodates both Bayesian and non-Bayesian
methods.
| Kenji Kawaguchi | null | 1604.01350 | null | null |
Heavy hitters via cluster-preserving clustering | cs.DS cs.LG | In turnstile $\ell_p$ $\varepsilon$-heavy hitters, one maintains a
high-dimensional $x\in\mathbb{R}^n$ subject to $\texttt{update}(i,\Delta)$
causing $x_i\leftarrow x_i + \Delta$, where $i\in[n]$, $\Delta\in\mathbb{R}$.
Upon receiving a query, the goal is to report a small list $L\subset[n]$, $|L|
= O(1/\varepsilon^p)$, containing every "heavy hitter" $i\in[n]$ with $|x_i|
\ge \varepsilon \|x_{\overline{1/\varepsilon^p}}\|_p$, where $x_{\overline{k}}$
denotes the vector obtained by zeroing out the largest $k$ entries of $x$ in
magnitude.
For any $p\in(0,2]$ the CountSketch solves $\ell_p$ heavy hitters using
$O(\varepsilon^{-p}\log n)$ words of space with $O(\log n)$ update time,
$O(n\log n)$ query time to output $L$, and whose output after any query is
correct with high probability (whp) $1 - 1/\mathrm{poly}(n)$. Unfortunately, the query
time is very slow. To remedy this, the work [CM05] proposed for $p=1$ in the
strict turnstile model, a whp correct algorithm achieving suboptimal space
$O(\varepsilon^{-1}\log^2 n)$, worse update time $O(\log^2 n)$, but much better
query time $O(\varepsilon^{-1}\,\mathrm{poly}(\log n))$.
We show this tradeoff between space and update time versus query time is
unnecessary. We provide a new algorithm, ExpanderSketch, which in the most
general turnstile model achieves optimal $O(\varepsilon^{-p}\log n)$ space,
$O(\log n)$ update time, and fast $O(\varepsilon^{-p}\,\mathrm{poly}(\log n))$ query time,
and whp correctness. Our main innovation is an efficient reduction from the
heavy hitters to a clustering problem in which each heavy hitter is encoded as
some form of noisy spectral cluster in a much bigger graph, and the goal is to
identify every cluster. Since every heavy hitter must be found, correctness
requires that every cluster be found. We then develop a "cluster-preserving
clustering" algorithm, partitioning the graph into clusters without destroying
any original cluster.
| Kasper Green Larsen, Jelani Nelson, Huy L. Nguyen, Mikkel Thorup | null | 1604.01357 | null | null |
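For reference, the CountSketch baseline discussed above is a few lines of code: d rows of w signed counters, with a point-query estimate given by the median of the signed counters. The hash construction below is a simple illustrative choice (ExpanderSketch itself is considerably more involved and is not shown).

```python
import numpy as np

class CountSketch:
    """Basic CountSketch: d hash tables of w counters with sign hashes."""
    def __init__(self, d=7, w=256, seed=0):
        rng = np.random.default_rng(seed)
        self.tables = np.zeros((d, w))
        self.salts = rng.integers(1, 2**31, size=(d, 2))
        self.w = w

    def _hashes(self, i):
        for row, (a, b) in enumerate(self.salts):
            h = (a * i + b) % 2_147_483_647       # simple hash, illustrative
            yield row, h % self.w, 1 if (h >> 1) % 2 == 0 else -1

    def update(self, i, delta):                   # x_i <- x_i + delta
        for row, col, sign in self._hashes(i):
            self.tables[row, col] += sign * delta

    def query(self, i):                           # estimate x_i
        return np.median([sign * self.tables[row, col]
                          for row, col, sign in self._hashes(i)])
```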
Lipschitz Continuity of Mahalanobis Distances and Bilinear Forms | cs.NA cs.LG | Many theoretical results in the machine learning domain stand only for
functions that are Lipschitz continuous. Lipschitz continuity is a strong form
of continuity that linearly bounds the variations of a function. In this paper,
we derive tight Lipschitz constants for two families of metrics: Mahalanobis
distances and bounded-space bilinear forms. To our knowledge, this is the first
time the Mahalanobis distance is formally proved to be Lipschitz continuous and
that such tight Lipschitz constants are derived.
| Valentina Zantedeschi, R\'emi Emonet, Marc Sebban | null | 1604.01376 | null | null |
Self-Paced Multi-Task Learning | cs.LG | In this paper, we propose a novel multi-task learning (MTL) framework, called
Self-Paced Multi-Task Learning (SPMTL). Different from previous works treating
all tasks and instances equally when training, SPMTL attempts to jointly learn
the tasks by taking into consideration the complexities of both tasks and
instances. This is inspired by the cognitive process of the human brain, which often
learns from the easy to the hard. We construct a compact SPMTL formulation by
proposing a new task-oriented regularizer that can jointly prioritize the tasks
and the instances. Thus it can be interpreted as a self-paced learner for MTL.
A simple yet effective algorithm is designed for optimizing the proposed
objective function. An error bound for a simplified formulation is also
analyzed theoretically. Experimental results on toy and real-world datasets
demonstrate the effectiveness of the proposed approach, compared to the
state-of-the-art methods.
| Changsheng Li, Junchi Yan, Fan Wei, Weishan Dong, Qingshan Liu,
Hongyuan Zha | null | 1604.01474 | null | null |
Learning A Deep $\ell_\infty$ Encoder for Hashing | cs.LG cs.CV | We investigate the $\ell_\infty$-constrained representation, which
demonstrates robustness to quantization errors, using tools from deep
learning. Based on the Alternating Direction Method of Multipliers (ADMM), we
formulate the original convex minimization problem as a feed-forward neural
network, named \textit{Deep $\ell_\infty$ Encoder}, by introducing the novel
Bounded Linear Unit (BLU) neuron and modeling the Lagrange multipliers as
network biases. Such a structural prior acts as an effective network
regularization, and facilitates the model initialization. We then investigate
the effective use of the proposed model in the application of hashing, by
coupling the proposed encoders under a supervised pairwise loss, to develop a
\textit{Deep Siamese $\ell_\infty$ Network}, which can be optimized from end to
end. Extensive experiments demonstrate the impressive performance of the
proposed model. We also provide an in-depth analysis of its behavior against
competing methods.
| Zhangyang Wang, Yingzhen Yang, Shiyu Chang, Qing Ling, Thomas S. Huang | null | 1604.01475 | null | null |
Simple and Efficient Learning using Privileged Information | cs.LG | The Support Vector Machine using Privileged Information (SVM+) has been
proposed to train a classifier to utilize the additional privileged information
that is only available in the training phase but not available in the test
phase. In this work, we propose an efficient solution for SVM+ by simply
utilizing the squared hinge loss instead of the hinge loss as in the existing
SVM+ formulation, which interestingly leads to a dual form with less variables
and in the same form with the dual of the standard SVM. The proposed algorithm
is utilized to leverage the additional web knowledge that is only available
during training for image categorization tasks. Extensive experimental
results on both the Caltech101 and WebQueries datasets show that our proposed method
achieves a speedup of up to a hundred times with comparable
accuracy when compared with the existing SVM+ method.
| Xinxing Xu, Joey Tianyi Zhou, Ivor W. Tsang, Zheng Qin, Rick Siow Mong
Goh and Yong Liu | null | 1604.01518 | null | null |
A Survey on Bayesian Deep Learning | stat.ML cs.AI cs.CV cs.LG cs.NE | A comprehensive artificial intelligence system needs to not only perceive the
environment with different `senses' (e.g., seeing and hearing) but also infer
the world's conditional (or even causal) relations and corresponding
uncertainty. The past decade has seen major advances in many perception tasks
such as visual object recognition and speech recognition using deep learning
models. For higher-level inference, however, probabilistic graphical models
with their Bayesian nature are still more powerful and flexible. In recent
years, Bayesian deep learning has emerged as a unified probabilistic framework
to tightly integrate deep learning and Bayesian models. In this general
framework, the perception of text or images using deep learning can boost the
performance of higher-level inference and in turn, the feedback from the
inference process is able to enhance the perception of text or images. This
survey provides a comprehensive introduction to Bayesian deep learning and
reviews its recent applications on recommender systems, topic models, control,
etc. We also discuss the relationships and differences between Bayesian
deep learning and other related topics, such as the Bayesian treatment of neural
networks. For a continuously updated project page, please refer to
https://github.com/js05212/BayesianDeepLearning-Survey.
| Hao Wang and Dit-Yan Yeung | null | 1604.01662 | null | null |
Relationship between Variants of One-Class Nearest Neighbours and
Creating their Accurate Ensembles | cs.LG | In one-class classification problems, only the data for the target class is
available, whereas the data for the non-target class may be completely absent.
In this paper, we study one-class nearest neighbour (OCNN) classifiers and
their different variants. We present a theoretical analysis to show the
relationships among different variants of OCNN that may use different
neighbours or thresholds to identify unseen examples of the non-target class.
We also present a method based on inter-quartile range for optimising
parameters used in OCNN in the absence of non-target data during training.
Then, we propose two ensemble approaches based on random subspace and random
projection methods to create accurate OCNN ensembles. We test the proposed
methods on 15 benchmark and real-world domain-specific datasets and show that
random-projection ensembles of OCNN perform best.
| Shehroz S. Khan, Amir Ahmad | null | 1604.01686 | null | null |
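A minimal OCNN variant is simple to sketch: accept a test point if its distance to the nearest training (target) point falls under a threshold estimated from training-data distances. The IQR rule below is an illustrative stand-in consistent with the inter-quartile-range optimisation the abstract mentions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

class OneClassNN:
    """1-NN one-class classifier with an IQR-based distance threshold
    (illustrative variant; the paper analyses several OCNN variants)."""
    def fit(self, X):
        self.nn = NearestNeighbors(n_neighbors=2).fit(X)
        # distance from each training point to its nearest other neighbour
        d = self.nn.kneighbors(X)[0][:, 1]
        q1, q3 = np.percentile(d, [25, 75])
        self.threshold = q3 + 1.5 * (q3 - q1)     # IQR outlier rule
        return self

    def predict(self, X):                         # +1 target, -1 non-target
        d = self.nn.kneighbors(X, n_neighbors=1)[0][:, 0]
        return np.where(d <= self.threshold, 1, -1)
```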
Safe Probability | stat.ME cs.AI cs.LG math.ST stat.TH | We formalize the idea of probability distributions that lead to reliable
predictions about some, but not all aspects of a domain. The resulting notion
of `safety' provides a fresh perspective on foundational issues in statistics,
providing a middle ground between imprecise probability and multiple-prior
models on the one hand and strictly Bayesian approaches on the other. It also
allows us to formalize fiducial distributions in terms of the set of random
variables that they can safely predict, thus taking some of the sting out of
the fiducial idea. By restricting probabilistic inference to safe uses, one
also automatically avoids paradoxes such as the Monty Hall problem. Safety
comes in a variety of degrees, such as "validity" (the strongest notion),
"calibration", "confidence safety" and "unbiasedness" (almost the weakest
notion).
| Peter Gr\"unwald | null | 1604.01785 | null | null |
Advances in Very Deep Convolutional Neural Networks for LVCSR | cs.CL cs.LG cs.NE | Very deep CNNs with small 3x3 kernels have recently been shown to achieve
very strong performance as acoustic models in hybrid NN-HMM speech recognition
systems. In this paper we investigate how to efficiently scale these models to
larger datasets. Specifically, we address the design choice of pooling and
padding along the time dimension which renders convolutional evaluation of
sequences highly inefficient. We propose a new CNN design without time-padding
and without time-pooling, which is slightly suboptimal for accuracy, but has two
significant advantages: it enables sequence training and deployment by allowing
efficient convolutional evaluation of full utterances, and, it allows for batch
normalization to be straightforwardly adopted for CNNs on sequence data. Through
batch normalization, we recover the performance lost by removing the
time-pooling, while keeping the benefit of efficient convolutional evaluation.
We demonstrate the performance of our models both on larger scale data than
before, and after sequence training. Our very deep CNN model, sequence-trained
on the 2000-hour Switchboard dataset, obtains a word error rate of 9.4 on the Hub5
test-set, matching with a single model the performance of the 2015 IBM system
combination, which was the previous best published result.
| Tom Sercu, Vaibhava Goel | null | 1604.01792 | null | null |
Learning to Track at 100 FPS with Deep Regression Networks | cs.CV cs.AI cs.LG cs.RO | Machine learning techniques are often used in computer vision due to their
ability to leverage large amounts of training data to improve performance.
Unfortunately, most generic object trackers are still trained from scratch
online and do not benefit from the large number of videos that are readily
available for offline training. We propose a method for offline training of
neural networks that can track novel objects at test-time at 100 fps. Our
tracker is significantly faster than previous methods that use neural networks
for tracking, which are typically very slow to run and not practical for
real-time applications. Our tracker uses a simple feed-forward network with no
online training required. The tracker learns a generic relationship between
object motion and appearance and can be used to track novel objects that do not
appear in the training set. We test our network on a standard tracking
benchmark to demonstrate our tracker's state-of-the-art performance. Further,
our performance improves as we add more videos to our offline training set. To
the best of our knowledge, our tracker is the first neural-network tracker that
learns to track generic objects at 100 fps.
| David Held, Sebastian Thrun, Silvio Savarese | null | 1604.01802 | null | null |
Differential TD Learning for Value Function Approximation | cs.SY cs.LG math.OC | Value functions arise as a component of algorithms as well as performance
metrics in statistics and engineering applications. Computation of the
associated Bellman equations is numerically challenging in all but a few
special cases. A popular approximation technique is known as Temporal
Difference (TD) learning. The algorithm introduced in this paper is intended to
resolve two well-known problems with this approach. First, in the discounted-cost
setting, the variance of the algorithm diverges as the discount factor
approaches unity. Second, in the average-cost setting, unbiased algorithms
exist only in special cases. It is shown that the gradient of any of these
value functions admits a representation that lends itself to algorithm design.
Based on this result, the new differential TD method is obtained for Markovian
models on Euclidean space with smooth dynamics. Numerical examples show
remarkable improvements in performance. In application to speed scaling,
variance is reduced by two orders of magnitude.
| Adithya M. Devraj, Sean P. Meyn | null | 1604.01828 | null | null |
Clustering Via Crowdsourcing | cs.DS cs.IT cs.LG math.IT | In recent years, crowdsourcing, a.k.a. human-aided computation, has emerged as an
effective platform for solving problems that are considered complex for
machines alone. Using humans is time-consuming and costly due to monetary
compensation. Therefore, a crowd-based algorithm must judiciously use any
information computed through an automated process, and ask a minimum number of
questions to the crowd adaptively.
One such problem which has received significant attention is {\em entity
resolution}. Formally, we are given a graph $G=(V,E)$ with unknown edge set $E$
where $G$ is a union of $k$ (again unknown, but typically large $O(n^\alpha)$,
for $\alpha>0$) disjoint cliques $G_i(V_i, E_i)$, $i =1, \dots, k$. The goal is
to retrieve the sets $V_i$ by making a minimum number of pairwise queries $V
\times V\to\{\pm1\}$ to an oracle (the crowd). When the answer to each query is
correct, e.g. via resampling, then this reduces to finding connected components
in a graph. On the other hand, when crowd answers may be incorrect, it
corresponds to clustering over a minimum number of noisy inputs. Even with
perfect answers, a simple lower and upper bound of $\Theta(nk)$ on query
complexity can be shown. A major contribution of this paper is to reduce the
query complexity to linear or even sublinear in $n$ when mild side information
is provided by a machine, and even in presence of crowd errors which are not
correctable via resampling. We develop new information theoretic lower bounds
on the query complexity of clustering with side information and errors, and our
upper bounds closely match with them. Our algorithms are naturally
parallelizable, and also give near-optimal bounds on the number of adaptive
rounds required to match the query complexity.
| Arya Mazumdar, Barna Saha | null | 1604.01839 | null | null |
Building Ensembles of Adaptive Nested Dichotomies with Random-Pair
Selection | stat.ML cs.LG | A system of nested dichotomies is a method of decomposing a multi-class
problem into a collection of binary problems. Such a system recursively splits
the set of classes into two subsets, and trains a binary classifier to
distinguish between each subset. Even though ensembles of nested dichotomies
with random structure have been shown to perform well in practice, a more
sophisticated class-subset selection method can improve
classification accuracy. We investigate an approach to this problem, called
random-pair selection, and evaluate its effectiveness compared to other
published methods of subset selection. We show that our method outperforms
other methods in many cases when forming ensembles of nested dichotomies, and
is at least on par in all other cases.
| Tim Leathart, Bernhard Pfahringer and Eibe Frank | null | 1604.01854 | null | null |
Efficient Globally Convergent Stochastic Optimization for Canonical
Correlation Analysis | cs.LG | We study the stochastic optimization of canonical correlation analysis (CCA),
whose objective is nonconvex and does not decouple over training samples.
Although several stochastic gradient based optimization algorithms have been
recently proposed to solve this problem, no global convergence guarantee was
provided by any of them. Inspired by the alternating least squares/power
iterations formulation of CCA, and the shift-and-invert preconditioning method
for PCA, we propose two globally convergent meta-algorithms for CCA, both of
which transform the original problem into sequences of least squares problems
that need only be solved approximately. We instantiate the meta-algorithms with
state-of-the-art SGD methods and obtain time complexities that significantly
improve upon that of previous work. Experimental results demonstrate their
superior performance.
| Weiran Wang, Jialei Wang, Dan Garber, Nathan Srebro | null | 1604.01870 | null | null |
When is Nontrivial Estimation Possible for Graphons and Stochastic Block
Models? | math.ST cs.LG stat.TH | Block graphons (also called stochastic block models) are an important and
widely-studied class of models for random networks. We provide a lower bound on
the accuracy of estimators for block graphons with a large number of blocks. We
show that, given only the number $k$ of blocks and an upper bound $\rho$ on the
values (connection probabilities) of the graphon, every estimator incurs error
at least on the order of $\min(\rho, \sqrt{\rho k^2/n^2})$ in the $\delta_2$
metric with constant probability, in the worst case over graphons. In
particular, our bound rules out any nontrivial estimation (that is, with
$\delta_2$ error substantially less than $\rho$) when $k\geq n\sqrt{\rho}$.
Combined with previous upper and lower bounds, our results characterize, up to
logarithmic terms, the minimax accuracy of graphon estimation in the $\delta_2$
metric. A similar lower bound to ours was obtained independently by Klopp,
Tsybakov and Verzelen (2016).
| Audra McMillan and Adam Smith | null | 1604.01871 | null | null |
Optimizing Performance of Recurrent Neural Networks on GPUs | cs.LG cs.NE | As recurrent neural networks become larger and deeper, training times for
single networks are rising into weeks or even months. As such there is a
significant incentive to improve the performance and scalability of these
networks. While GPUs have become the hardware of choice for training and
deploying recurrent models, the implementations employed often make use of only
basic optimizations for these architectures. In this article we demonstrate
that by exposing parallelism between operations within the network, an order of
magnitude speedup across a range of network sizes can be achieved over a naive
implementation. We describe three stages of optimization that have been
incorporated into the fifth release of NVIDIA's cuDNN: firstly optimizing a
single cell, secondly a single layer, and thirdly the entire network.
| Jeremy Appleyard, Tomas Kocisky, Phil Blunsom | null | 1604.01946 | null | null |
Deep Online Convex Optimization with Gated Games | cs.LG cs.GT cs.NE stat.ML | Methods from convex optimization are widely used as building blocks for deep
learning algorithms. However, the reasons for their empirical success are
unclear, since modern convolutional networks (convnets), incorporating
rectifier units and max-pooling, are neither smooth nor convex. Standard
guarantees therefore do not apply. This paper provides the first convergence
rates for gradient descent on rectifier convnets. The proof utilizes the
particular structure of rectifier networks which consists in binary
active/inactive gates applied on top of an underlying linear network. The
approach generalizes to max-pooling, dropout and maxout. In other words, to
precisely the neural networks that perform best empirically. The key step is to
introduce gated games, an extension of convex games with similar convergence
properties that capture the gating function of rectifiers. The main result is
that rectifier convnets converge to a critical point at a rate controlled by
the gated-regret of the units in the network. Corollaries of the main result
include: (i) a game-theoretic description of the representations learned by a
neural network; (ii) a logarithmic-regret algorithm for training neural nets;
and (iii) a formal setting for analyzing conditional computation in neural nets
that can be applied to recently developed models of attention.
| David Balduzzi | null | 1604.01952 | null | null |
Online Optimization of Smoothed Piecewise Constant Functions | cs.LG stat.ML | We study online optimization of smoothed piecewise constant functions over
the domain [0, 1). This is motivated by the problem of adaptively picking
parameters of learning algorithms as in the recently introduced framework by
Gupta and Roughgarden (2016). The majority of the machine learning literature has
focused on Lipschitz-continuous functions or functions with bounded gradients.
This is with good reason---any learning algorithm suffers linear regret even
against piecewise constant functions that are chosen adversarially, arguably
the simplest of non-Lipschitz-continuous functions.
consider is inspired by the seminal work of Spielman and Teng (2004) and the
recent work of Gupta and Roughgarden---in this setting, the sequence of
functions may be chosen by an adversary, however, with some uncertainty in the
location of discontinuities. We give algorithms that achieve sublinear regret
in the full information and bandit settings.
| Vincent Cohen-Addad, Varun Kanade | null | 1604.01999 | null | null |
Combinatorial Topic Models using Small-Variance Asymptotics | cs.LG cs.CL stat.ML | Topic models have emerged as fundamental tools in unsupervised machine
learning. Most modern topic modeling algorithms take a probabilistic view and
derive inference algorithms based on Latent Dirichlet Allocation (LDA) or its
variants. In contrast, we study topic modeling as a combinatorial optimization
problem, and propose a new objective function derived from LDA by passing to
the small-variance limit. We minimize the derived objective by using ideas from
combinatorial optimization, which results in a new, fast, and high-quality
topic modeling algorithm. In particular, we show that our results are
competitive with popular LDA-based topic modeling approaches, and also discuss
the (dis)similarities between our approach and its probabilistic counterparts.
| Ke Jiang and Suvrit Sra and Brian Kulis | null | 1604.02027 | null | null |
Sentence Level Recurrent Topic Model: Letting Topics Speak for
Themselves | cs.LG cs.CL cs.IR | We propose Sentence Level Recurrent Topic Model (SLRTM), a new topic model
that assumes the generation of each word within a sentence to depend on both
the topic of the sentence and the whole history of its preceding words in the
sentence. Different from conventional topic models that largely ignore the
sequential order of words or their topic coherence, SLRTM gives full
characterization to them by using a Recurrent Neural Networks (RNN) based
framework. Experimental results have shown that SLRTM outperforms several
strong baselines on various tasks. Furthermore, SLRTM can automatically
generate sentences given a topic (i.e., topics to sentences), which is a key
technology for real world applications such as personalized short text
conversation.
| Fei Tian, Bin Gao, Di He, Tie-Yan Liu | null | 1604.02038 | null | null |
Multilevel Weighted Support Vector Machine for Classification on
Healthcare Data with Missing Values | stat.ML cs.LG stat.AP | This work is motivated by the needs of predictive analytics on healthcare
data as represented by Electronic Medical Records. Such data is invariably
problematic: noisy, with missing entries, with imbalance in classes of
interests, leading to serious bias in predictive modeling. Since standard data
mining methods often produce poor performance measures, we argue for
development of specialized techniques of data-preprocessing and classification.
In this paper, we propose a new method to simultaneously classify large
datasets and reduce the effects of missing values. It is based on a multilevel
framework of the cost-sensitive SVM and the expectation-maximization imputation
method for missing values, which relies on iterated regression analyses. We
compare classification results of multilevel SVM-based algorithms on public
benchmark datasets with imbalanced classes and missing values as well as real
data in health applications, and show that our multilevel SVM-based method
produces fast, more accurate, and more robust classification results.
| Talayeh Razzaghi, Oleg Roderick, Ilya Safro, Nicholas Marko | 10.1371/journal.pone.0155119 | 1604.02123 | null | null |
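A compact stand-in for the impute-then-classify pipeline described above uses scikit-learn's iterative (regression-based) imputer and a cost-sensitive SVM; the paper's multilevel coarsening framework is omitted here, and the synthetic data, missingness rate, and class imbalance are illustrative assumptions.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
X[rng.random(X.shape) < 0.1] = np.nan             # 10% missing entries
y = (rng.random(200) < 0.15).astype(int)          # imbalanced labels

clf = make_pipeline(
    IterativeImputer(max_iter=10, random_state=0),  # iterated regressions
    SVC(class_weight="balanced"),                   # cost-sensitive SVM
)
clf.fit(X, y)
```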
Resolving Language and Vision Ambiguities Together: Joint Segmentation &
Prepositional Attachment Resolution in Captioned Scenes | cs.CV cs.CL cs.LG | We present an approach to simultaneously perform semantic segmentation and
prepositional phrase attachment resolution for captioned images. Some
ambiguities in language cannot be resolved without simultaneously reasoning
about an associated image. If we consider the sentence "I shot an elephant in
my pajamas", looking at language alone (and not using common sense), it is
unclear if it is the person or the elephant wearing the pajamas or both. Our
approach produces a diverse set of plausible hypotheses for both semantic
segmentation and prepositional phrase attachment resolution that are then
jointly reranked to select the most consistent pair. We show that our semantic
segmentation and prepositional phrase attachment resolution modules have
complementary strengths, and that joint reasoning produces more accurate
results than any module operating in isolation. Multiple hypotheses are also
shown to be crucial to improved multiple-module reasoning. Our vision and
language approach significantly outperforms the Stanford Parser (De Marneffe et
al., 2006) by 17.91% (28.69% relative) and 12.83% (25.28% relative) in two
different experiments. We also make small improvements over DeepLab-CRF (Chen
et al., 2015).
| Gordon Christie, Ankit Laddha, Aishwarya Agrawal, Stanislaw Antol,
Yash Goyal, Kevin Kochersberger, Dhruv Batra | null | 1604.02125 | null | null |
A Low Complexity Algorithm with $O(\sqrt{T})$ Regret and $O(1)$
Constraint Violations for Online Convex Optimization with Long Term
Constraints | math.OC cs.LG stat.ML | This paper considers online convex optimization over a complicated constraint
set, which typically consists of multiple functional constraints and a set
constraint. The conventional online projection algorithm (Zinkevich, 2003) can
be difficult to implement due to the potentially high computation complexity of
the projection operation. In this paper, we relax the functional constraints by
allowing them to be violated at each round but still requiring them to be
satisfied in the long term. This type of relaxed online convex optimization
(with long term constraints) was first considered in Mahdavi et al. (2012).
That prior work proposes an algorithm to achieve $O(\sqrt{T})$ regret and
$O(T^{3/4})$ constraint violations for general problems and another algorithm
to achieve an $O(T^{2/3})$ bound for both regret and constraint violations when
the constraint set can be described by a finite number of linear constraints. A
recent extension in \citet{Jenatton16ICML} can achieve
$O(T^{\max\{\theta,1-\theta\}})$ regret and $O(T^{1-\theta/2})$ constraint
violations where $\theta\in (0,1)$. The current paper proposes a new simple
algorithm that yields improved performance in comparison to prior works. The
new algorithm achieves an $O(\sqrt{T})$ regret bound with $O(1)$ constraint
violations.
| Hao Yu and Michael J. Neely | null | 1604.02218 | null | null |
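A common device for long-term constraints, which gives the flavour of such algorithms, is a virtual queue that accumulates constraint violation and is added to the per-round objective. The one-dimensional toy below is illustrative only: the update form, step sizes, and example functions are assumptions, not the paper's algorithm or tuning.

```python
import numpy as np

def oco_long_term(grad_f, g, grad_g, T=1000, lo=0.0, hi=1.0):
    """Toy virtual-queue method for online convex optimization with a
    long-term constraint g(x) <= 0 (illustrative update and tuning;
    not the paper's exact algorithm)."""
    x, Q = 0.5, 0.0
    eta, V = 1.0 / np.sqrt(T), np.sqrt(T)     # illustrative step sizes
    for t in range(T):
        # gradient step on V*f_t(x) + Q*g(x), projected onto [lo, hi]
        x = np.clip(x - eta * (V * grad_f(x, t) + Q * grad_g(x)), lo, hi)
        Q = max(Q + g(x), 0.0)                # track accumulated violation
    return x

# example: f_t(x) = (x - 0.8)^2 with constraint 2x - 1 <= 0 (i.e. x <= 0.5)
x = oco_long_term(lambda x, t: 2 * (x - 0.8),
                  lambda x: 2 * x - 1.0,
                  lambda x: 2.0)
```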
Probabilistic classifiers with low rank indefinite kernels | cs.LG | Indefinite similarity measures are frequently found in bioinformatics, for
example as alignment scores, and are also common in other fields, such as shape
measures in image retrieval. Lacking an underlying vector space, the data are
given as pairwise similarities only. The few algorithms available for such data
do not scale to larger datasets. Focusing on probabilistic batch classifiers,
the Indefinite Kernel Fisher Discriminant (iKFD) and the Probabilistic
Classification Vector Machine (PCVM) are both effective algorithms for this
type of data, but with cubic complexity. Here we propose an extension of iKFD
and PCVM such that linear runtime and memory complexity is achieved for low
rank indefinite kernels. Employing the Nystr\"om approximation for indefinite
kernels, we also propose a new almost parameter free approach to identify the
landmarks, restricted to a supervised learning problem. Evaluations on several
larger similarity datasets from various domains show that the proposed methods
provide similar generalization capabilities while being easier to parametrize
and substantially faster on large-scale data.
| Frank-Michael Schleif and Andrej Gisbrecht and Peter Tino | null | 1604.02264 | null | null |
Single-Molecule Protein Identification by Sub-Nanopore Sensors | q-bio.QM cs.LG | Recent advances in top-down mass spectrometry enabled identification of
intact proteins, but this technology still faces challenges. For example,
top-down mass spectrometry suffers from a lack of sensitivity since the ion
counts for a single fragmentation event are often low. In contrast, nanopore
technology is exquisitely sensitive to single intact molecules, but it has only
been successfully applied to DNA sequencing, so far. Here, we explore the
potential of sub-nanopores for single-molecule protein identification (SMPI)
and describe an algorithm for identification of the electrical current blockade
signal (nanospectrum) resulting from the translocation of a denatured,
linearly charged protein through a sub-nanopore. The analysis of identification
p-values suggests that the current technology is already sufficient for
matching nanospectra against small protein databases, e.g., protein
identification in bacterial proteomes.
| Mikhail Kolmogorov, Eamonn Kennedy, Zhuxin Dong, Gregory Timp and
Pavel Pevzner | 10.1371/journal.pcbi.1005356 | 1604.02270 | null | null |
Online Open World Recognition | cs.CV cs.LG stat.ML | As we enter the big data age and an avalanche of images becomes
readily available, recognition systems face the need to move from closed lab
settings, where the number of classes and training data are fixed, to dynamic
scenarios where the number of categories to be recognized grows continuously
over time, as well as new data providing useful information to update the
system. Recent attempts, like the open world recognition framework, tried to
inject dynamics into the system by incrementally adding new classes and
detecting instances from unknown classes, while at the same time continuously
updating the models for the known classes. In this paper we argue that to properly capture the
intrinsic dynamic of open world recognition, it is necessary to add to these
aspects (a) the incremental learning of the underlying metric, (b) the
incremental estimate of confidence thresholds for the unknown classes, and (c)
the use of local learning to precisely describe the space of classes. We extend
three existing metric learning algorithms towards these goals by using online
metric learning. Experimentally we validate our approach on two large-scale
datasets in different learning scenarios. For all these scenarios our proposed
methods outperform their non-online counterparts. We conclude that local and
online learning is important to capture the full dynamics of open world
recognition.
| Rocco De Rosa, Thomas Mensink and Barbara Caputo | null | 1604.02275 | null | null |
Back to the Basics: Bayesian extensions of IRT outperform neural
networks for proficiency estimation | cs.AI cs.LG | Estimating student proficiency is an important task for computer-based
learning systems. We compare a family of IRT-based proficiency estimation
methods to Deep Knowledge Tracing (DKT), a recently proposed recurrent neural
network model with promising initial results. We evaluate how well each model
predicts a student's future response given previous responses using two
publicly available and one proprietary data set. We find that IRT-based methods
consistently matched or outperformed DKT across all data sets at the finest
level of content granularity that was tractable for them to be trained on. A
hierarchical extension of IRT that captured item grouping structure performed
best overall. When data sets included non-trivial autocorrelations in student
response patterns, a temporal extension of IRT improved performance over
standard IRT while the RNN-based method did not. We conclude that IRT-based
models provide a simpler, better-performing alternative to existing RNN-based
models of student interaction data while also affording more interpretability
and guarantees due to their formulation as Bayesian probabilistic models.
| Kevin H. Wilson, Yan Karklin, Bojian Han, and Chaitanya Ekanadham | null | 1604.02336 | null | null |
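For reference, the base model behind the IRT family compared above: in the 1PL (Rasch) form, the probability that student s answers item i correctly is sigmoid(theta_s - b_i). A minimal maximum-likelihood fit by gradient ascent is sketched below; the paper's hierarchical and temporal extensions add structure (priors, item grouping) on top of this, and the learning rate and data are illustrative.

```python
import numpy as np

def fit_rasch(resp, n_students, n_items, lr=0.05, epochs=200):
    """Fit 1PL IRT by gradient ascent on the Bernoulli log-likelihood.
    resp: list of (student, item, correct) triples. A sketch of the
    base model only; Bayesian IRT extensions add priors over theta, b."""
    theta = np.zeros(n_students)       # student proficiencies
    b = np.zeros(n_items)              # item difficulties
    for _ in range(epochs):
        for s, i, y in resp:
            p = 1.0 / (1.0 + np.exp(-(theta[s] - b[i])))
            theta[s] += lr * (y - p)   # gradient of the log-likelihood
            b[i] -= lr * (y - p)
    return theta, b

theta, b = fit_rasch([(0, 0, 1), (0, 1, 0), (1, 0, 1)], n_students=2, n_items=2)
```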
Bayesian Neighbourhood Component Analysis | cs.CV cs.LG | Learning a good distance metric in feature space potentially improves the
performance of the KNN classifier and is useful in many real-world
applications. Many metric learning algorithms are, however, based on point
estimation of a quadratic optimization problem, which is time-consuming,
susceptible to overfitting, and lacks a natural mechanism to reason about
parameter uncertainty, an important property especially when the
training set is small and/or noisy. To deal with these issues, we present a
novel Bayesian metric learning method, called Bayesian NCA, based on the
well-known Neighbourhood Component Analysis method, in which the metric
posterior is characterized by the local label consistency constraints of
observations, encoded with a similarity graph instead of independent pairwise
constraints. For efficient Bayesian optimization, we explore the variational
lower bound over the log-likelihood of the original NCA objective. Experiments
on several publicly available datasets demonstrate that the proposed method is
able to learn robust metrics from small datasets and/or from
challenging training sets with labels contaminated by errors. The proposed
method is also shown to outperform a previous pairwise constrained Bayesian
metric learning method.
| Dong Wang, Xiaoyang Tan | null | 1604.02354 | null | null |
Finding Optimal Combination of Kernels using Genetic Programming | cs.CV cs.LG cs.NE | In computer vision, the problem of identifying or classifying the objects present
in an image is called object categorization. It is a challenging problem,
especially when the images have cluttered backgrounds, occlusions, or different
lighting conditions. Many vision features have been proposed which aid object
categorization even in such adverse conditions. Past research has shown that,
employing multiple features rather than any single features leads to better
recognition. Multiple Kernel Learning (MKL) framework has been developed for
learning an optimal combination of features for object categorization. Existing
MKL methods use linear combination of base kernels which may not be optimal for
object categorization. Real-world object categorization may need to consider
complex, non-linear combinations of kernels, not only linear combinations.
Evolving non-linear functions of base kernels using genetic programming is
proposed in this report. Experimental results show that the non-linear kernels
generated using genetic programming give good accuracy compared to linear
combinations of kernels.
| Jyothi Korra | null | 1604.02376 | null | null |
One-class classifiers based on entropic spanning graphs | cs.LG cs.CV cs.IT math.IT | One-class classifiers offer valuable tools to assess the presence of outliers
in data. In this paper, we propose a design methodology for one-class
classifiers based on entropic spanning graphs. Our approach takes into account
the possibility to process also non-numeric data by means of an embedding
procedure. The spanning graph is learned on the embedded input data, and the
resulting partition of vertices defines the classifier. The final partition is
derived by exploiting a criterion based on mutual information minimization.
Here, we compute the mutual information by using a convenient formulation
provided in terms of the $\alpha$-Jensen difference. Once training is
completed, in order to associate a confidence level with the classifier
decision, a graph-based fuzzy model is constructed. The fuzzification process
is based only on topological information of the vertices of the entropic
spanning graph. As such, the proposed one-class classifier is suitable also for
data characterized by complex geometric structures. We provide experiments on
well-known benchmarks containing both feature vectors and labeled graphs. In
addition, we apply the method to the protein solubility recognition problem by
considering several representations for the input samples. Experimental results
demonstrate the effectiveness and versatility of the proposed method with
respect to other state-of-the-art approaches.
| Lorenzo Livi, Cesare Alippi | 10.1109/TNNLS.2016.2608983 | 1604.02477 | null | null |
Challenges in Bayesian Adaptive Data Analysis | cs.LG stat.ML | Traditional statistical analysis requires that the analysis process and data
are independent. By contrast, the new field of adaptive data analysis hopes to
understand and provide algorithms and accuracy guarantees for research as it is
commonly performed in practice, as an iterative process of interacting
repeatedly with the same data set, such as repeated tests against a holdout
set. Previous work has defined a model with a rather strong lower bound on
sample complexity in terms of the number of queries, $n\sim\sqrt q$, arguing
that adaptive data analysis is much harder than static data analysis, where
$n\sim\log q$ is possible. Instead, we argue that those strong lower bounds
point to a limitation of the previous model in that it must consider wildly
asymmetric scenarios which do not hold in typical applications.
To better understand other difficulties of adaptivity, we propose a new
Bayesian version of the problem that mandates symmetry. Since the other lower
bound techniques are ruled out, we can more effectively see difficulties that
might otherwise be overshadowed. As a first contribution to this model, we
produce a new problem using error-correcting codes on which a large family of
methods, including all previously proposed algorithms, require roughly
$n\sim\sqrt[4]q$. These early results illustrate new difficulties in adaptive
data analysis regarding slightly correlated queries on problems with
concentrated uncertainty.
| Sam Elder | null | 1604.02492 | null | null |
Word embeddings and recurrent neural networks based on Long-Short Term
Memory nodes in supervised biomedical word sense disambiguation | cs.CL cs.LG | Word sense disambiguation helps identify the proper sense of ambiguous
words in text. With large terminologies such as the UMLS Metathesaurus,
ambiguities appear, and highly effective disambiguation methods are required.
Supervised learning methods are one of the approaches used to
perform disambiguation. Features extracted from the context of an ambiguous
word are used to identify its proper sense. The types of features used
have an impact on machine learning methods and thus affect disambiguation
performance. In this work, we have evaluated several types of features derived
from the context of the ambiguous word, and we have also explored more global
features derived from MEDLINE using word embeddings. Results show that word
embeddings improve the performance of more traditional features and also allow
the use of recurrent neural network classifiers based on Long Short-Term Memory
(LSTM) nodes. The combination of unigrams and word embeddings with an SVM sets
a new state-of-the-art performance, with a macro accuracy of 95.97 on the MSH
WSD data set.
| Antonio Jimeno Yepes | null | 1604.02506 | null | null |
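The winning feature combination (unigrams plus averaged word embeddings, fed to an SVM) is straightforward to sketch. The embedding dictionary below is a random placeholder for the MEDLINE-trained vectors the abstract mentions, and the two-context toy data is an assumption for illustration.

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def embed_mean(text, vecs, dim=50):
    """Average the embeddings of a context's words (placeholder vectors;
    the paper trains embeddings on MEDLINE)."""
    ws = [vecs[w] for w in text.split() if w in vecs]
    return np.mean(ws, axis=0) if ws else np.zeros(dim)

contexts = ["cold virus infection", "cold weather front"]   # toy contexts
senses = [0, 1]                                             # sense labels
vecs = {w: np.random.randn(50)
        for w in "cold virus infection weather front".split()}

unigrams = CountVectorizer().fit_transform(contexts)
emb = csr_matrix(np.vstack([embed_mean(c, vecs) for c in contexts]))
X = hstack([unigrams, emb])              # unigram + embedding features
clf = LinearSVC().fit(X, senses)
```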
Learning Compact Recurrent Neural Networks | cs.LG cs.CL cs.NE | Recurrent neural networks (RNNs), including long short-term memory (LSTM)
RNNs, have produced state-of-the-art results on a variety of speech recognition
tasks. However, these models are often too large in size for deployment on
mobile devices with memory and latency constraints. In this work, we study
mechanisms for learning compact RNNs and LSTMs via low-rank factorizations and
parameter sharing schemes. Our goal is to investigate redundancies in recurrent
architectures where compression can be admitted without losing performance. A
hybrid strategy of using structured matrices in the bottom layers and shared
low-rank factors on the top layers is found to be particularly effective,
reducing the parameters of a standard LSTM by 75%, at a small cost of 0.3%
increase in WER, on a 2,000-hr English Voice Search task.
| Zhiyun Lu, Vikas Sindhwani, Tara N. Sainath | null | 1604.02594 | null | null |
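The low-rank part of the recipe can be sketched directly: replace a weight matrix W with the product of two thin factors from a truncated SVD, so a rank r much smaller than the matrix dimensions yields the parameter savings. The matrix size and rank below are illustrative; at rank 128 on a 1024x1024 matrix the factors happen to use 75% fewer parameters, echoing the reduction the abstract reports.

```python
import numpy as np

def low_rank_factors(W, rank):
    """Factor W (m x n) as A @ B with A: m x rank, B: rank x n,
    via truncated SVD (best rank-r approximation in Frobenius norm)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]           # fold singular values into A
    B = Vt[:rank]
    return A, B

W = np.random.randn(1024, 1024)          # e.g. a recurrent weight matrix
A, B = low_rank_factors(W, rank=128)
params_before = W.size                   # 1,048,576
params_after = A.size + B.size           # 262,144: 75% fewer parameters
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
```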
A General Retraining Framework for Scalable Adversarial Classification | cs.GT cs.LG stat.ML | Traditional classification algorithms assume that training and test data come
from similar distributions. This assumption is violated in adversarial
settings, where malicious actors modify instances to evade detection. A number
of custom methods have been developed for both adversarial evasion attacks and
robust learning. We propose the first systematic and general-purpose retraining
framework which can: a) boost robustness of an \emph{arbitrary} learning
algorithm, in the face of b) a broader class of adversarial models than any
prior methods. We show that, under natural conditions, the retraining framework
minimizes an upper bound on optimal adversarial risk, and show how to extend
this result to account for approximations of evasion attacks. Extensive
experimental evaluation demonstrates that our retraining methods are nearly
indistinguishable from state-of-the-art algorithms for optimizing adversarial
risk, but are more general and far more scalable. The experiments also confirm
that without retraining, our adversarial framework dramatically reduces the
effectiveness of learning. In contrast, retraining significantly boosts
robustness to evasion attacks without significantly compromising overall
accuracy.
| Bo Li, Yevgeniy Vorobeychik, Xinyun Chen | null | 1604.02606 | null | null |
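The retraining loop itself is simple to sketch: attack the current model, add the evading instances (keeping their original labels) to the training set, and refit. The crude gradient-step attack and logistic-regression learner below are illustrative stand-ins; the framework is learner- and attack-agnostic, and the step size is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def retrain_against_evasion(X, y, attack, rounds=5):
    """Generic retraining: augment the training set with adversarial
    variants of malicious points (label 1) and refit. A sketch of the
    retraining framework; 'attack' is any evasion model."""
    Xa, ya = X.copy(), y.copy()
    clf = LogisticRegression().fit(Xa, ya)
    for _ in range(rounds):
        adv = np.array([attack(clf, x) for x in X[y == 1]])
        Xa = np.vstack([Xa, adv])             # evasions keep label 1
        ya = np.concatenate([ya, np.ones(len(adv))])
        clf = LogisticRegression().fit(Xa, ya)
    return clf

def shift_attack(clf, x, step=0.5):
    """Crude evasion: move against the decision boundary's normal
    (illustrative; not one of the paper's adversarial models)."""
    w = clf.coef_[0]
    return x - step * w / np.linalg.norm(w)
```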
Online Nonnegative Matrix Factorization with Outliers | stat.ML cs.IT cs.LG math.IT math.OC stat.ME | We propose a unified and systematic framework for performing online
nonnegative matrix factorization in the presence of outliers. Our framework is
particularly suited to large-scale data. We propose two solvers based on
projected gradient descent and the alternating direction method of multipliers.
We prove that the sequence of objective values converges almost surely by
appealing to the quasi-martingale convergence theorem. We also show the
sequence of learned dictionaries converges to the set of stationary points of
the expected loss function almost surely. In addition, we extend our basic
problem formulation to various settings with different constraints and
regularizers. We also adapt the solvers and analyses to each setting. We
perform extensive experiments on both synthetic and real datasets. These
experiments demonstrate the computational efficiency and efficacy of our
algorithms on tasks such as (parts-based) basis learning, image denoising,
shadow removal and foreground-background separation.
| Renbo Zhao and Vincent Y. F. Tan | 10.1109/TSP.2016.2620967 | 1604.02634 | null | null |
Visualization Regularizers for Neural Network based Image Recognition | cs.LG cs.CV cs.NE | The success of deep neural networks is mostly due to their ability to learn
meaningful features from the data. Features learned in the hidden layers of
deep neural networks trained in computer vision tasks have been shown to be
similar to mid-level vision features. We leverage this fact in this work and
propose the visualization regularizer for image tasks. The proposed
regularization technique enforces smoothness of the features learned by hidden
nodes and turns out to be a special case of Tikhonov regularization. We achieve
higher classification accuracy as compared to existing regularizers such as the
L2 norm regularizer and dropout, on benchmark datasets without changing the
training computational complexity.
| Biswajit Paria, Vikas Reddy, Anirban Santara, Pabitra Mitra | null | 1604.02646 | null | null |
Performance Trade-Offs in Multi-Processor Approximate Message Passing | cs.IT cs.DC cs.LG math.IT | We consider large-scale linear inverse problems in Bayesian settings. Our
general approach follows a recent line of work that applies the approximate
message passing (AMP) framework in multi-processor (MP) computational systems
by storing and processing a subset of rows of the measurement matrix along with
corresponding measurements at each MP node. In each MP-AMP iteration, nodes of
the MP system and its fusion center exchange lossily compressed messages
pertaining to their estimates of the input. There is a trade-off between the
physical costs of the reconstruction process (computation time and
communication load) and the reconstruction quality, and it is impossible to
minimize all of these costs simultaneously. We pose this minimization as a
multi-objective optimization problem (MOP), and study the properties of the
best trade-offs (Pareto optimality) in this MOP. We prove that the achievable
region of this MOP is convex, and conjecture how the combined cost of
computation and communication scales with the desired mean squared error. These
properties are verified numerically.
| Junan Zhu, Ahmad Beirami, Dror Baron | 10.1109/ISIT.2016.7541385 | 1604.02752 | null | null |
Active Learning for Online Recognition of Human Activities from
Streaming Videos | stat.ML cs.CV cs.LG | Recognising human activities from streaming videos poses unique challenges to
learning algorithms: predictive models need to be scalable, incrementally
trainable, and must remain bounded in size even when the data stream is
arbitrarily long. Furthermore, as parameter tuning is problematic in a
streaming setting, suitable approaches should be parameterless, and make no
assumptions on what class labels may occur in the stream. We present here an
approach to the recognition of human actions from streaming data which meets
all these requirements by: (1) incrementally learning a model which adaptively
covers the feature space with simple local classifiers; (2) employing an active
learning strategy to reduce annotation requests; (3) achieving promising
accuracy within a fixed model size. Extensive experiments on standard
benchmarks show that our approach is competitive with state-of-the-art
non-incremental methods, and outperforms the existing active incremental
baselines.
| Rocco De Rosa, Ilaria Gori, Fabio Cuzzolin, Barbara Caputo and
Nicol\`o Cesa-Bianchi | null | 1604.02855 | null | null |
Gaussian Process Domain Experts for Model Adaptation in Facial Behavior
Analysis | stat.ML cs.CV cs.LG | We present a novel approach for supervised domain adaptation that is based
upon the probabilistic framework of Gaussian processes (GPs). Specifically, we
introduce domain-specific GPs as local experts for facial expression
classification from face images. The adaptation of the classifier is
facilitated in probabilistic fashion by conditioning the target expert on
multiple source experts. Furthermore, in contrast to existing adaptation
approaches, we also learn a target expert solely from the available target data.
Then, a single and confident classifier is obtained by combining the
predictions from multiple experts based on their confidence. Learning of the
model is efficient and requires no retraining/reweighting of the source
classifiers. We evaluate the proposed approach on two publicly available
datasets for multi-class (MultiPIE) and multi-label (DISFA) facial expression
classification. To this end, we perform adaptation of two contextual factors:
'where' (view) and 'who' (subject). We show in our experiments that the
proposed approach consistently outperforms both source and target classifiers,
while using as few as 30 target examples. It also outperforms the
state-of-the-art approaches for supervised domain adaptation.
| Stefanos Eleftheriadis and Ognjen Rudovic and Marc P. Deisenroth and
Maja Pantic | null | 1604.02917 | null | null |
Demystifying Fixed k-Nearest Neighbor Information Estimators | cs.LG cs.IT math.IT stat.ML | Estimating mutual information from i.i.d. samples drawn from an unknown joint
density function is a basic statistical problem of broad interest with
multitudinous applications. The most popular estimator is one proposed by
Kraskov, St\"ogbauer and Grassberger (KSG) in 2004, and is nonparametric and
based on the distances of each sample to its $k^{\rm th}$ nearest neighboring
sample, where $k$ is a fixed small integer. Despite its widespread use (part of
scientific software packages), theoretical properties of this estimator have
been largely unexplored. In this paper we demonstrate that the estimator is
consistent and also identify an upper bound on the rate of convergence of the
bias as a function of the number of samples. We argue that the superior
performance of the KSG estimator stems from a curious "correlation boosting"
effect and build on this intuition to modify the KSG estimator in novel ways to
construct a superior estimator. As a byproduct of our investigations, we obtain
nearly tight rates of convergence of the $\ell_2$ error of the well known fixed
$k$ nearest neighbor estimator of differential entropy by Kozachenko and
Leonenko.
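The fixed-k KSG construction is short enough to state in code. The sketch below is a brute-force O(N^2) Python implementation of the estimator I(X;Y) = psi(k) + psi(N) - <psi(n_x + 1) + psi(n_y + 1)> using max-norm distances; it follows the published recipe but is written for illustration rather than efficiency.

```python
import numpy as np
from scipy.special import digamma

def ksg_mi(x, y, k=3):
    """KSG estimator of mutual information I(X;Y) from N paired samples.
    x and y are (N, dx) and (N, dy) arrays; brute-force distances."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y, dtype=float).reshape(len(y), -1)
    n = x.shape[0]
    dx = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=-1)  # max-norm in X
    dy = np.max(np.abs(y[:, None, :] - y[None, :, :]), axis=-1)  # max-norm in Y
    dz = np.maximum(dx, dy)                     # joint-space max-norm distance
    np.fill_diagonal(dz, np.inf)
    eps = np.sort(dz, axis=1)[:, k - 1]         # distance to k-th nearest neighbor
    nx = np.sum(dx < eps[:, None], axis=1) - 1  # neighbors within eps, minus self
    ny = np.sum(dy < eps[:, None], axis=1) - 1
    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))
```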
| Weihao Gao, Sewoong Oh, Pramod Viswanath | null | 1604.03006 | null | null |
Semi-supervised learning of local structured output predictors | cs.LG cs.CV | In this paper, we study the problem of semi-supervised structured output
prediction, which aims to learn predictors for structured outputs, such as
sequences, tree nodes, vectors, etc., from a set of data points of both
input-output pairs and single inputs without outputs. Traditional methods for
this problem usually learn a single predictor for all the data points,
ignoring the variety among them. Different parts of the data set may have
different local distributions, and thus require different optimal local
predictors. To overcome this disadvantage of existing methods, we propose to
learn different local predictors for the neighborhoods of different data
points, together with the missing structured outputs, simultaneously. In the
neighborhood of each data point, we propose to learn a linear predictor by minimizing both
the complexity of the predictor and the upper bound of the structured
prediction loss. The minimization is conducted by gradient descent algorithms.
Experiments over four benchmark data sets, including DDSM mammography medical
images, SUN natural image data set, Cora research paper data set, and Spanish
news wire article sentence data set, show the advantages of the proposed
method.
| Xin Du | null | 1604.03010 | null | null |
M3: Scaling Up Machine Learning via Memory Mapping | cs.LG cs.DC | To process data that do not fit in RAM, conventional wisdom would suggest
using distributed approaches. However, recent research has demonstrated virtual
memory's strong potential in scaling up graph mining algorithms on a single
machine. We propose to use a similar approach for general machine learning. We
contribute: (1) our latest finding that memory mapping is also a feasible
technique for scaling up general machine learning algorithms like logistic
regression and k-means, when data fits in or exceeds RAM (we tested datasets up
to 190GB); (2) an approach, called M3, that enables existing machine learning
algorithms to work with out-of-core datasets through memory mapping, achieving
a speed that is significantly faster than a 4-instance Spark cluster, and
comparable to an 8-instance cluster.
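A minimal sketch of the memory-mapping idea, assuming scikit-learn and hypothetical file names, shapes, and chunk size: the learner streams over a matrix that may be larger than RAM while the operating system pages rows in and out transparently.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical out-of-core dataset: 100M rows x 100 float32 features (~40 GB).
X = np.memmap("features.dat", dtype=np.float32, mode="r",
              shape=(100_000_000, 100))
y = np.memmap("labels.dat", dtype=np.int8, mode="r", shape=(100_000_000,))

# Logistic regression via SGD (older scikit-learn versions name this loss "log").
clf = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])        # assumed binary labels, for illustration
chunk = 100_000
for start in range(0, X.shape[0], chunk):
    sl = slice(start, start + chunk)
    clf.partial_fit(X[sl], y[sl], classes=classes)
```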
| Dezhi Fang, Duen Horng Chau | 10.1145/1235 | 1604.03034 | null | null |
Binarized Neural Networks on the ImageNet Classification Task | cs.CV cs.LG cs.NE | We trained Binarized Neural Networks (BNNs) on the high-resolution ImageNet
ILSVRC-2012 classification task and achieved good performance. With a
moderate-size network of 13 layers, we obtained a top-5 classification accuracy
of 84.1% on the validation set through network distillation, much better than
previously published results of 73.2% for the XNOR network and 69.1% for
binarized GoogLeNet. We expect that networks with better performance can be obtained by
following our current strategies. We provide a detailed discussion and
preliminary analysis on strategies used in the network training.
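Although the abstract does not spell out the training procedure, binarized networks are conventionally trained by binarizing weights in the forward pass and passing gradients through with a straight-through estimator; a PyTorch sketch of that standard trick (not necessarily the authors' exact recipe) is:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through gradient estimator."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Pass gradients only where |x| <= 1 (the hard-tanh derivative).
        return grad_out * (x.abs() <= 1).float()

class BinaryLinear(torch.nn.Linear):
    """Linear layer that binarizes its real-valued weights on the fly;
    the optimizer keeps updating the underlying full-precision weights."""
    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        return torch.nn.functional.linear(x, w_bin, self.bias)
```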
| Xundong Wu, Yong Wu and Yong Zhao | null | 1604.03058 | null | null |
Reservoir computing for spatiotemporal signal classification without
trained output weights | cs.NE cs.CV cs.LG | Reservoir computing is a recently introduced machine learning paradigm that
has been shown to be well-suited for the processing of spatiotemporal data.
Rather than training the network node connections and weights via
backpropagation in traditional recurrent neural networks, reservoirs instead
have fixed connections and weights among the `hidden layer' nodes, and
traditionally only the weights to the output layer of neurons are trained using
linear regression. We claim that for signal classification tasks one may forgo
the weight training step entirely and instead use a simple supervised
clustering method based upon principal components of norms of reservoir states.
The proposed method is mathematically analyzed and explored through numerical
experiments on real-world data. The examples demonstrate that the proposed method may
outperform the traditional trained output weight approach in terms of
classification accuracy and sensitivity to reservoir parameters.
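To make the "norms of reservoir states" idea concrete, here is one plausible reading in Python: drive a fixed random reservoir with the input signal and keep the per-timestep state norms as the feature trajectory, which would then be fed to PCA and a nearest-centroid rule. The reservoir size, spectral radius, and tanh nonlinearity are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def reservoir_norms(signal, n_res=200, rho=0.9, seed=0):
    """Drive a fixed random reservoir with a (T, d) input signal and return
    the length-T trajectory of reservoir-state norms (no weights trained)."""
    rng = np.random.default_rng(seed)
    T, d = signal.shape
    W = rng.standard_normal((n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius
    W_in = rng.standard_normal((n_res, d))
    x = np.zeros(n_res)
    norms = np.empty(T)
    for t in range(T):
        x = np.tanh(W @ x + W_in @ signal[t])
        norms[t] = np.linalg.norm(x)
    return norms

# Classification sketch: stack the norm trajectories of the training signals,
# project onto their top principal components, and label a test signal by the
# nearest class centroid in that subspace.
```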
| Ashley Prater | null | 1604.03073 | null | null |
Symbolic Knowledge Extraction using {\L}ukasiewicz Logics | cs.AI cs.LG | This work describes a methodology that combines logic-based systems and
connectionist systems. Our approach uses finite truth-valued {\L}ukasiewicz
logic, wherein every connective can be defined by a neuron in an artificial
network. This allowed the injection of first-order formulas into a network
architecture, and also simplified symbolic rule extraction. For that we trained
neural networks using the Levenberg-Marquardt algorithm, where we restricted
the knowledge dissemination in the network structure. This procedure reduces
neural network plasticity without drastically damaging the learning
performance, thus making the descriptive power of produced neural networks
similar to the descriptive power of {\L}ukasiewicz logic language and
simplifying the translation between symbolic and connectionist structures. We
used this method for reverse engineering truth tables and for extracting
formulas from real data sets.
| Carlos Leandro | null | 1604.03099 | null | null |
Learning Simple Auctions | cs.LG cs.GT | We present a general framework for proving polynomial sample complexity
bounds for the problem of learning from samples the best auction in a class of
"simple" auctions. Our framework captures all of the most prominent examples of
"simple" auctions, including anonymous and non-anonymous item and bundle
pricings, with either a single or multiple buyers. The technique we propose is
to break the analysis of auctions into two natural pieces. First, one shows
that the set of allocation rules have large amounts of structure; second,
fixing an allocation on a sample, one shows that the set of auctions agreeing
with this allocation on that sample have revenue functions with low
dimensionality. Our results effectively imply that whenever it is possible to
compute a near-optimal simple auction with a known prior, it is also possible
to compute such an auction with an unknown prior (given a polynomial number of
samples).
| Jamie Morgenstern, Tim Roughgarden | null | 1604.03171 | null | null |
Efficient Classification of Multi-Labelled Text Streams by Clashing | cs.AI cs.LG | We present a method for the classification of multi-labelled text documents
explicitly designed for data stream applications that require to process a
virtually infinite sequence of data using constant memory and constant
processing time. Our method is composed of an online procedure used to
efficiently map text into a low-dimensional feature space and a partition of
this space into a set of regions for which the system extracts and keeps
statistics used to predict multi-label text annotations. Documents are fed into
the system as a sequence of words, mapped to a region of the partition, and
annotated using the statistics computed from the labelled instances colliding
in the same region. This approach is referred to as clashing. We illustrate the
method in real-world text data, comparing the results with those obtained using
other text classifiers. In addition, we provide an analysis of the effect of
the representation space dimensionality on the predictive performance of the
system. Our results show that the online embedding indeed approximates the
geometry of the full corpus-wise TF and TF-IDF space. The model obtains
competitive F measures with respect to the most accurate methods, using
significantly fewer computational resources. In addition, the method achieves a
higher macro-averaged F measure than methods with similar running time.
Furthermore, the system is able to learn faster than the other methods from
partially labelled streams.
| Ricardo \~Nanculef, Ilias Flaounas, Nello Cristianini | 10.1016/j.eswa.2014.02.017 | 1604.03200 | null | null |
Leveraging Network Dynamics for Improved Link Prediction | cs.SI cs.LG | The aim of link prediction is to forecast connections that are most likely to
occur in the future, based on examples of previously observed links. A key
insight is that, when doing link prediction, it is useful to explicitly model
network dynamics, i.e., how frequently links are created or destroyed. In this
paper, we introduce a new supervised link prediction framework, RPM (Rate
Prediction Model). In addition to network similarity measures, RPM uses the
predicted rate of link modifications, modeled using time series data; it is
implemented in Spark-ML and trained with the original link distribution, rather
than a small balanced subset. We compare the use of this network dynamics model
to directly creating time series of network similarity measures. Our
experiments show that RPM, which leverages predicted rates, outperforms the use
of network similarity measures, either individually or within a time series.
| Alireza Hajibagheri, Gita Sukthankar, Kiran Lakkaraju | null | 1604.03221 | null | null |
Recurrent Attentional Networks for Saliency Detection | cs.CV cs.LG stat.ML | Convolutional-deconvolution networks can be adopted to perform end-to-end
saliency detection. However, they do not work well with objects of multiple scales.
To overcome such a limitation, in this work, we propose a recurrent attentional
convolutional-deconvolution network (RACDNN). Using spatial transformer and
recurrent network units, RACDNN is able to iteratively attend to selected image
sub-regions to perform saliency refinement progressively. Besides tackling the
scale problem, RACDNN can also learn context-aware features from past
iterations to enhance saliency refinement in future iterations. Experiments on
several challenging saliency detection datasets validate the effectiveness of
RACDNN, and show that RACDNN outperforms state-of-the-art saliency detection
methods.
| Jason Kuen, Zhenhua Wang, Gang Wang | null | 1604.03227 | null | null |
Thesis: Multiple Kernel Learning for Object Categorization | cs.CV cs.LG | Object Categorization is a challenging problem, especially when the images
have cluttered backgrounds, occlusions or different lighting conditions. In the
past, many descriptors have been proposed which aid object categorization even
in such adverse conditions. Each descriptor has its own merits and demerits.
Some descriptors are invariant to transformations while the others are more
discriminative. Past research has shown that, employing multiple descriptors
rather than any single descriptor leads to better recognition. The problem of
learning the optimal combination of the available descriptors for a particular
classification task is studied. Multiple Kernel Learning (MKL) framework has
been developed for learning an optimal combination of descriptors for object
categorization. Existing MKL formulations often employ block l-1 norm
regularization which is equivalent to selecting a single kernel from a library
of kernels. Since essentially a single descriptor is selected, the existing
formulations may be sub-optimal for object categorization. An MKL formulation
based on block l-infinity norm regularization has been developed, which chooses
an optimal combination of kernels as opposed to selecting a single kernel. A
Composite Multiple Kernel Learning(CKL) formulation based on mixed l-infinity
and l-1 norm regularization has been developed. These formulations result in
Second Order Cone Programs (SOCPs). Efficient alternative algorithms for
these formulations have also been implemented. Empirical results on benchmark
datasets show significant improvement using these new MKL formulations.
| Dinesh Govindaraj | null | 1604.03247 | null | null |
The Univariate Flagging Algorithm (UFA): a Fully-Automated Approach for
Identifying Optimal Thresholds in Data | cs.LG stat.AP | In many data classification problems, there is no linear relationship between
an explanatory variable and the dependent variable. Instead, there may be
ranges of the input variable for which the observed outcome is significantly more or less
likely. This paper describes an algorithm for automatic detection of such
thresholds, called the Univariate Flagging Algorithm (UFA). The algorithm
searches for a separation that optimizes the difference between separated areas
while providing the maximum support. We evaluate its performance using three
examples and demonstrate that thresholds identified by the algorithm align well
with visual inspection and subject matter expertise. We also introduce two
classification approaches that use UFA and show that the performance attained
on unseen test data is equal to or better than that of more traditional
classifiers. We demonstrate that the proposed algorithm is robust against
missing data and noise, is scalable, and is easy to interpret and visualize. It
is also well suited for problems where the incidence of the target is low.
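A simplified rendering of that search in Python, which scans candidate cut points of a single variable and scores each by the outcome-rate difference it induces, weighted by its support; the exact scoring rule here is an assumption for illustration, not the paper's criterion.

```python
import numpy as np

def flag_threshold(x, y, min_support=20):
    """Return the cut point of x that best separates the outcome rates in y,
    requiring at least `min_support` observations on each side."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best_t, best_score = None, -np.inf
    for i in range(min_support, len(x) - min_support):
        if x[i] == x[i - 1]:
            continue                          # cut only between distinct values
        t = 0.5 * (x[i - 1] + x[i])
        diff = abs(y[:i].mean() - y[i:].mean())
        support = min(i, len(x) - i) / len(x)
        score = diff * support                # separation traded off vs. support
        if score > best_score:
            best_t, best_score = t, score
    return best_t, best_score
```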
| Mallory Sheth, Roy Welsch, Natasha Markuzon | null | 1604.03248 | null | null |
Online Learning of Portfolio Ensembles with Sector Exposure
Regularization | cs.LG | We consider online learning of ensembles of portfolio selection algorithms
and aim to regularize risk by encouraging diversification with respect to a
predefined risk-driven grouping of stocks. Our procedure uses online convex
optimization to control capital allocation to underlying investment algorithms
while encouraging non-sparsity over the given grouping. We prove a logarithmic
regret for this procedure with respect to the best-in-hindsight ensemble. We
applied the procedure with known mean-reversion portfolio selection algorithms
using the standard GICS industry sector grouping. Experimental
results showed an impressive percentage increase in risk-adjusted return
(Sharpe ratio).
| Guy Uziel and Ran El-Yaniv | null | 1604.03266 | null | null |
Confidence Decision Trees via Online and Active Learning for Streaming
(BIG) Data | stat.ML cs.LG | Decision tree classifiers are a widely used tool in data stream mining. The
use of confidence intervals to estimate the gain associated with each split
leads to very effective methods, like the popular Hoeffding tree algorithm.
From a statistical viewpoint, the analysis of decision tree classifiers in a
streaming setting requires knowing when enough new information has been
collected to justify splitting a leaf. Although some of the issues in the
statistical analysis of Hoeffding trees have been already clarified, a general
and rigorous study of confidence intervals for splitting criteria is missing.
We fill this gap by deriving accurate confidence intervals to estimate the
splitting gain in decision tree learning with respect to three criteria:
entropy, Gini index, and a third index proposed by Kearns and Mansour. Our
confidence intervals depend in a more detailed way on the tree parameters. We
also extend our confidence analysis to a selective sampling setting, in which
the decision tree learner adaptively decides which labels to query in the
stream. We furnish a theoretical guarantee bounding the probability that the
classification is non-optimal when the decision tree is learned via our selective
sampling strategy. Experiments on real and synthetic data in a streaming
setting show that our trees are indeed more accurate than trees with the same
number of leaves generated by other techniques, and that our active learning
module reduces labeling cost. In addition, comparing our labeling strategy
with recent methods, we show that our approach is more robust and consistent
with respect to all the other techniques applied to incremental decision trees.
| Rocco De Rosa | null | 1604.03278 | null | null |
Typical Stability | cs.LG cs.DS | In this paper, we introduce a notion of algorithmic stability called typical
stability. When our goal is to release real-valued queries (statistics)
computed over a dataset, this notion does not require the queries to be of
bounded sensitivity -- a condition that is generally assumed under differential
privacy [DMNS06, Dwork06] when used as a notion of algorithmic stability
[DFHPRR15a, DFHPRR15b, BNSSSU16] -- nor does it require the samples in the
dataset to be independent -- a condition that is usually assumed when
generalization-error guarantees are sought. Instead, typical stability requires
the output of the query, when computed on a dataset drawn from the underlying
distribution, to be concentrated around its expected value with respect to that
distribution.
We discuss the implications of typical stability on the generalization error
(i.e., the difference between the value of the query computed on the dataset
and the expected value of the query with respect to the true data
distribution). We show that typical stability can control generalization error
in adaptive data analysis even when the samples in the dataset are not
necessarily independent and when queries to be computed are not necessarily of
bounded-sensitivity as long as the results of the queries over the dataset
(i.e., the computed statistics) follow a distribution with a "light" tail.
Examples of such queries include, but are not limited to, subgaussian and
subexponential queries.
We also discuss the composition guarantees of typical stability and prove
composition theorems that characterize the degradation of the parameters of
typical stability under $k$-fold adaptive composition. We also give simple
noise-addition algorithms that achieve this notion. These algorithms are
similar to their differentially private counterparts, however, the added noise
is calibrated differently.
| Raef Bassily and Yoav Freund | null | 1604.03336 | null | null |
Loss Bounds and Time Complexity for Speed Priors | cs.LG stat.ML | This paper establishes for the first time the predictive performance of speed
priors and their computational complexity. A speed prior is essentially a
probability distribution that puts low probability on strings that are not
efficiently computable. We propose a variant of the original speed prior
(Schmidhuber, 2002), and show that our prior can predict sequences drawn from
probability measures that are estimable in polynomial time. Our speed prior is
computable in doubly-exponential time, but not in polynomial time. On a
polynomial time computable sequence our speed prior is computable in
exponential time. We show better upper complexity bounds for Schmidhuber's
speed prior under the same conditions, and that it predicts deterministic
sequences that are computable in polynomial time; however, we also show that it
is not computable in polynomial time, and the question of its predictive
properties for stochastic sequences remains open.
| Daniel Filan, Marcus Hutter, Jan Leike | null | 1604.03343 | null | null |
An incremental linear-time learning algorithm for the Optimum-Path
Forest classifier | cs.LG cs.CV | We present a classification method with incremental capabilities based on the
Optimum-Path Forest classifier (OPF). The OPF considers instances as nodes of a
fully-connected training graph, whose arc weights represent distances between
feature vectors. Our algorithm includes new instances in an OPF in linear time,
while keeping similar accuracies when compared with the original quadratic-time
model.
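The linear-time insertion can be pictured as follows, assuming the trained OPF is summarized by its node coordinates, path costs C(q), and labels; the "conquering" pass at the end is a simplification of the full algorithm in the paper.

```python
import numpy as np

def opf_insert(nodes, costs, labels, s):
    """Insert sample s into a trained OPF in O(n): reach s through the node
    minimizing max(C(q), d(q, s)), inherit that node's label, and let s offer
    cheaper paths back to existing nodes. Illustrative simplification."""
    d = np.linalg.norm(nodes - s, axis=1)     # distances from all nodes to s
    path = np.maximum(costs, d)               # cost of reaching s via each node
    q = int(np.argmin(path))
    c_s, l_s = path[q], labels[q]
    offer = np.maximum(c_s, d)                # cost s offers to each old node
    win = offer < costs
    costs[win], labels[win] = offer[win], l_s
    return (np.vstack([nodes, s]),
            np.append(costs, c_s),
            np.append(labels, l_s))
```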
| Moacir Ponti and Mateus Riva | 10.1016/j.ipl.2017.05.004 | 1604.03346 | null | null |
Optimal Margin Distribution Machine | cs.LG | Support vector machine (SVM) has been one of the most popular learning
algorithms, with the central idea of maximizing the minimum margin, i.e., the
smallest distance from the instances to the classification boundary. Recent
theoretical results, however, disclosed that maximizing the minimum margin does
not necessarily lead to better generalization performances, and instead, the
margin distribution has been proven to be more crucial. Based on this idea, we
propose a new method, named Optimal margin Distribution Machine (ODM), which
tries to achieve a better generalization performance by optimizing the margin
distribution. We characterize the margin distribution by the first- and
second-order statistics, i.e., the margin mean and variance. The proposed
method is a general learning approach which can be used in any place where SVM
can be applied, and its superiority is verified both theoretically and
empirically in this paper.
| Teng Zhang and Zhi-Hua Zhou | null | 1604.03348 | null | null |
A Convex Surrogate Operator for General Non-Modular Loss Functions | stat.ML cs.LG | Empirical risk minimization frequently employs convex surrogates to
underlying discrete loss functions in order to achieve computational
tractability during optimization. However, classical convex surrogates can only
tightly bound modular loss functions, submodular functions or supermodular
functions separately while maintaining polynomial time computation. In this
work, a novel generic convex surrogate for general non-modular loss functions
is introduced, which provides for the first time a tractable solution for loss
functions that are neither supermodular nor submodular. This convex surrogate
is based on a submodular-supermodular decomposition for which the existence and
uniqueness is proven in this paper. It takes the sum of two convex surrogates
that separately bound the supermodular component and the submodular component
using slack-rescaling and the Lov{\'a}sz hinge, respectively. It is further
proven that this surrogate is convex, piecewise linear, an extension of the
loss function, and for which subgradient computation is polynomial time.
Empirical results are reported on a non-submodular loss based on the
S{\o}rensen-Dice difference function, and a real-world face track dataset
with tens of thousands of frames, demonstrating the improved performance,
efficiency, and scalability of the novel convex surrogate.
| Jiaqian Yu (CVC, GALEN), Matthew Blaschko | null | 1604.03373 | null | null |
Video Description using Bidirectional Recurrent Neural Networks | cs.CV cs.CL cs.LG | Although traditionally used in the machine translation field, the
encoder-decoder framework has been recently applied for the generation of video
and image descriptions. The combination of Convolutional and Recurrent Neural
Networks in these models has proven to outperform the previous state of the
art, obtaining more accurate video descriptions. In this work we propose
pushing further this model by introducing two contributions into the encoding
stage. First, producing richer image representations by combining object and
location information from Convolutional Neural Networks and second, introducing
Bidirectional Recurrent Neural Networks for capturing both forward and backward
temporal relationships in the input frames.
| \'Alvaro Peris, Marc Bola\~nos, Petia Radeva and Francisco Casacuberta | null | 1604.03390 | null | null |
An Unbiased Data Collection and Content Exploitation/Exploration
Strategy for Personalization | cs.IR cs.LG | One of the missions of personalization systems and recommender systems is to
show content items according to users' personal interests. In order to achieve
such a goal, these systems learn user interests over time and try to
present content items tailored to user profiles. Recommending items according
to users' preferences has been investigated extensively in the past few years,
mainly thanks to the popularity of the Netflix competition. In a real setting,
users may be attracted by a subset of those items and interact with them, only
leaving partial feedback for the system to learn from in the next cycle, which
introduces significant biases into systems and hence results in a situation where user
engagement metrics cannot be improved over time. The problem is not just for
one component of the system. The data collected from users is usually used in
many different tasks, including learning ranking functions, building user
profiles and constructing content classifiers. Once the data is biased, all
these downstream use cases would be impacted as well. Therefore, it would be
beneficial to gather unbiased data through user interactions. Traditionally,
unbiased data collection is done by showing items uniformly sampled from
the content pool. However, this simple scheme is not feasible, as it risks user
engagement metrics and takes a long time to gather user feedback. In this
paper, we introduce a user-friendly unbiased data collection framework, by
utilizing methods developed in the exploitation and exploration literature. We
discuss how the framework is different from normal multi-armed bandit problems
and why such a method is needed. We lay out a novel Thompson sampling for
Bernoulli ranked-list to effectively balance user experiences and data
collection. The proposed method is validated in a real bucket test, and we
show strong results compared to existing algorithms.
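A minimal Python sketch of Thompson sampling for a Bernoulli ranked list, which is the core of the proposed framework; the Beta(1, 1) priors and the click-based update rule are standard simplifications, not the paper's exact production variant.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, k = 100, 10
alpha = np.ones(n_items)          # Beta(1, 1) prior per item
beta_ = np.ones(n_items)

def select_ranked_list():
    """Sample once from each item's Beta posterior; show the top-k draws."""
    draws = rng.beta(alpha, beta_)
    return np.argsort(-draws)[:k]

def update(shown, clicked):
    """Bernoulli posterior update: +1 success on click, +1 failure otherwise."""
    for i in shown:
        if i in clicked:
            alpha[i] += 1.0
        else:
            beta_[i] += 1.0

shown = select_ranked_list()
update(shown, clicked={int(shown[0])})   # pretend the first item was clicked
```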
| Liangjie Hong, Adnan Boz | null | 1604.03506 | null | null |
Going Deeper with Contextual CNN for Hyperspectral Image Classification | cs.CV cs.LG | In this paper, we describe a novel deep convolutional neural network (CNN)
that is deeper and wider than other existing deep networks for hyperspectral
image classification. Unlike current state-of-the-art approaches in CNN-based
hyperspectral image classification, the proposed network, called contextual
deep CNN, can optimally explore local contextual interactions by jointly
exploiting local spatio-spectral relationships of neighboring individual pixel
vectors. The joint exploitation of the spatio-spectral information is achieved
by a multi-scale convolutional filter bank used as an initial component of the
proposed CNN pipeline. The initial spatial and spectral feature maps obtained
from the multi-scale filter bank are then combined together to form a joint
spatio-spectral feature map. The joint feature map representing rich spectral
and spatial properties of the hyperspectral image is then fed through a fully
convolutional network that eventually predicts the corresponding label of each
pixel vector. The proposed approach is tested on three benchmark datasets: the
Indian Pines dataset, the Salinas dataset and the University of Pavia dataset.
Performance comparison shows enhanced classification performance of the
proposed approach over the current state-of-the-art on the three datasets.
| Hyungtae Lee and Heesung Kwon | 10.1109/TIP.2017.2725580 | 1604.03519 | null | null |
Cross-stitch Networks for Multi-task Learning | cs.CV cs.LG | Multi-task learning in Convolutional Networks has displayed remarkable
success in the field of recognition. This success can be largely attributed to
learning shared representations from multiple supervisory tasks. However,
existing multi-task approaches rely on enumerating multiple network
architectures specific to the tasks at hand, which do not generalize. In this
paper, we propose a principled approach to learn shared representations in
ConvNets using multi-task learning. Specifically, we propose a new sharing
unit: "cross-stitch" unit. These units combine the activations from multiple
networks and can be trained end-to-end. A network with cross-stitch units can
learn an optimal combination of shared and task-specific representations. Our
proposed method generalizes across multiple tasks and shows dramatically
improved performance over baseline methods for categories with few training
examples.
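In its simplest scalar form, a cross-stitch unit is a learnable 2x2 mixing matrix applied to the two tasks' activations; a PyTorch sketch follows (the paper learns such coefficients per channel, so the scalar version here is a simplification).

```python
import torch

class CrossStitch(torch.nn.Module):
    """Scalar cross-stitch unit: linearly mixes the activations of two
    task-specific networks at the same depth with learnable weights."""
    def __init__(self):
        super().__init__()
        # Start near the identity so each task begins mostly task-specific.
        self.alpha = torch.nn.Parameter(torch.tensor([[0.9, 0.1],
                                                      [0.1, 0.9]]))

    def forward(self, x_a, x_b):
        y_a = self.alpha[0, 0] * x_a + self.alpha[0, 1] * x_b
        y_b = self.alpha[1, 0] * x_a + self.alpha[1, 1] * x_b
        return y_a, y_b
```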
| Ishan Misra and Abhinav Shrivastava and Abhinav Gupta and Martial
Hebert | null | 1604.03539 | null | null |
Training Region-based Object Detectors with Online Hard Example Mining | cs.CV cs.LG | The field of object detection has made significant advances riding on the
wave of region-based ConvNets, but their training procedure still includes many
heuristics and hyperparameters that are costly to tune. We present a simple yet
surprisingly effective online hard example mining (OHEM) algorithm for training
region-based ConvNet detectors. Our motivation is the same as it has always
been -- detection datasets contain an overwhelming number of easy examples and
a small number of hard examples. Automatic selection of these hard examples can
make training more effective and efficient. OHEM is a simple and intuitive
algorithm that eliminates several heuristics and hyperparameters in common use.
But more importantly, it yields consistent and significant boosts in detection
performance on benchmarks like PASCAL VOC 2007 and 2012. Its effectiveness
increases as datasets become larger and more difficult, as demonstrated by the
results on the MS COCO dataset. Moreover, combined with complementary advances
in the field, OHEM leads to state-of-the-art results of 78.9% and 76.3% mAP on
PASCAL VOC 2007 and 2012 respectively.
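The core selection step of OHEM reduces to scoring every candidate with an unreduced loss and backpropagating only through the hardest ones; a generic PyTorch sketch is below (the paper applies this to RoI proposals, with an NMS step to avoid near-duplicate regions).

```python
import torch
import torch.nn.functional as F

def ohem_loss(logits, targets, n_hard):
    """Compute per-example losses, keep the n_hard largest, average those.
    Gradients then flow only through the selected hard examples."""
    losses = F.cross_entropy(logits, targets, reduction="none")
    hard = torch.topk(losses, min(n_hard, losses.numel())).values
    return hard.mean()
```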
| Abhinav Shrivastava, Abhinav Gupta, Ross Girshick | null | 1604.03540 | null | null |
Asynchronous Stochastic Gradient Descent with Variance Reduction for
Non-Convex Optimization | cs.LG math.OC | We provide the first theoretical analysis on the convergence rate of the
asynchronous stochastic variance reduced gradient (SVRG) descent algorithm on
non-convex optimization. Recent studies have shown that the asynchronous
stochastic gradient descent (SGD) based algorithms with variance reduction
converge at a linear rate on convex problems. However, there is no
work analyzing asynchronous SGD with the variance reduction technique on
non-convex problems. In this paper, we study two asynchronous parallel
implementations of SVRG: one is on a distributed memory system and the other is
on a shared memory system. We provide the theoretical analysis that both
algorithms can obtain a convergence rate of $O(1/T)$, and linear speed up is
achievable if the number of workers is upper bounded. Versions v1, v2 and v3
have been withdrawn due to a reference issue; please refer to the newest version, v4.
| Zhouyuan Huo, Heng Huang | null | 1604.03584 | null | null |
Joint Unsupervised Learning of Deep Representations and Image Clusters | cs.CV cs.LG | In this paper, we propose a recurrent framework for Joint Unsupervised
LEarning (JULE) of deep representations and image clusters. In our framework,
successive operations in a clustering algorithm are expressed as steps in a
recurrent process, stacked on top of representations output by a Convolutional
Neural Network (CNN). During training, image clusters and representations are
updated jointly: image clustering is conducted in the forward pass, while
representation learning in the backward pass. Our key idea behind this
framework is that good representations are beneficial to image clustering and
clustering results provide supervisory signals to representation learning. By
integrating two processes into a single model with a unified weighted triplet
loss and optimizing it end-to-end, we can obtain not only more powerful
representations, but also more precise image clusters. Extensive experiments
show that our method outperforms the state-of-the-art on image clustering
across a variety of image datasets. Moreover, the learned representations
generalize well when transferred to other tasks.
| Jianwei Yang, Devi Parikh, Dhruv Batra | null | 1604.03628 | null | null |
Bridging the Gaps Between Residual Learning, Recurrent Neural Networks
and Visual Cortex | cs.LG cs.NE | We discuss relations between Residual Networks (ResNet), Recurrent Neural
Networks (RNNs) and the primate visual cortex. We begin with the observation
that a special type of shallow RNN is exactly equivalent to a very deep ResNet
with weight sharing among the layers. A direct implementation of such a RNN,
although having orders of magnitude fewer parameters, leads to a performance
similar to the corresponding ResNet. We propose 1) a generalization of both RNN
and ResNet architectures and 2) the conjecture that a class of moderately deep
RNNs is a biologically-plausible model of the ventral stream in visual cortex.
We demonstrate the effectiveness of the architectures by testing them on the
CIFAR-10 and ImageNet datasets.
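The equivalence is easy to see in code: a ResNet whose residual blocks all share one set of weights is exactly the unrolling of the shallow recurrence h_{t+1} = h_t + f(h_t). A minimal PyTorch sketch with a two-layer f (the block structure is an illustrative choice):

```python
import torch

class SharedResNet(torch.nn.Module):
    """Depth-k ResNet with one shared residual block, i.e. the unrolled
    shallow RNN h_{t+1} = h_t + f(h_t) with weight sharing across layers."""
    def __init__(self, dim, k):
        super().__init__()
        self.f = torch.nn.Sequential(
            torch.nn.Linear(dim, dim), torch.nn.ReLU(),
            torch.nn.Linear(dim, dim))
        self.k = k

    def forward(self, h):
        for _ in range(self.k):
            h = h + self.f(h)   # the same f at every "layer" / time step
        return h
```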
| Qianli Liao, Tomaso Poggio | null | 1604.03640 | null | null |
Learning Social Affordance for Human-Robot Interaction | cs.RO cs.AI cs.CV cs.LG | In this paper, we present an approach for robot learning of social affordance
from human activity videos. We consider the problem in the context of
human-robot interaction: Our approach learns structural representations of
human-human (and human-object-human) interactions, describing how body-parts of
each agent move with respect to each other and what spatial relations they
should maintain to complete each sub-event (i.e., sub-goal). This enables the
robot to infer its own movement in reaction to the human body motion, allowing
it to naturally replicate such interactions.
We introduce the representation of social affordance and propose a generative
model for its weakly supervised learning from human demonstration videos. Our
approach discovers critical steps (i.e., latent sub-events) in an interaction
and the typical motion associated with them, learning what body-parts should be
involved and how. The experimental results demonstrate that our Markov Chain
Monte Carlo (MCMC) based learning algorithm automatically discovers
semantically meaningful interactive affordance from RGB-D videos, which allows
us to generate appropriate full body motion for an agent.
| Tianmin Shu, M. S. Ryoo and Song-Chun Zhu | null | 1604.03692 | null | null |
A Differentiable Transition Between Additive and Multiplicative Neurons | cs.LG stat.ML | Existing approaches to combine both additive and multiplicative neural units
either use a fixed assignment of operations or require discrete optimization to
determine what function a neuron should perform. However, this leads to an
extensive increase in the computational complexity of the training procedure.
We present a novel, parameterizable transfer function based on the
mathematical concept of non-integer functional iteration that allows the
operation each neuron performs to be smoothly and, most importantly,
differentiably adjusted between addition and multiplication. This allows the
decision between addition and multiplication to be integrated into the standard
backpropagation training procedure.
| Wiebke K\"opp, Patrick van der Smagt, Sebastian Urban | null | 1604.03736 | null | null |
A General Distributed Dual Coordinate Optimization Framework for
Regularized Loss Minimization | cs.LG cs.DC math.OC stat.ML | In modern large-scale machine learning applications, the training data are
often partitioned and stored on multiple machines. It is customary to employ
the "data parallelism" approach, where the aggregated training loss is
minimized without moving data across machines. In this paper, we introduce a
novel distributed dual formulation for regularized loss minimization problems
that can directly handle data parallelism in the distributed setting. This
formulation allows us to systematically derive dual coordinate optimization
procedures, which we refer to as Distributed Alternating Dual Maximization
(DADM). The framework extends earlier studies described in (Boyd et al., 2011;
Ma et al., 2015a; Jaggi et al., 2014; Yang, 2013) and has rigorous theoretical
analyses. Moreover with the help of the new formulation, we develop the
accelerated version of DADM (Acc-DADM) by generalizing the acceleration
technique from (Shalev-Shwartz and Zhang, 2014) to the distributed setting. We
also provide theoretical results for the proposed accelerated version and the
new result improves previous ones (Yang, 2013; Ma et al., 2015a) whose runtimes
grow linearly on the condition number. Our empirical studies validate our
theory and show that our accelerated approach significantly improves the
previous state-of-the-art distributed dual coordinate optimization algorithms.
| Shun Zheng, Jialei Wang, Fen Xia, Wei Xu, Tong Zhang | null | 1604.03763 | null | null |
Animation and Chirplet-Based Development of a PIR Sensor Array for
Intruder Classification in an Outdoor Environment | cs.LG | This paper presents the development of a passive infra-red sensor tower
platform along with a classification algorithm to distinguish between human
intrusion, animal intrusion and clutter arising from wind-blown vegetative
movement in an outdoor environment. The research was aimed at exploring the
potential use of wireless sensor networks as an early-warning system to help
mitigate human-wildlife conflicts occurring at the edge of a forest. There are
three important features to the development. Firstly, the sensor platform
employs multiple sensors arranged in the form of a two-dimensional array to
give it a key spatial-resolution capability that aids in classification.
Secondly, given the challenges of collecting data involving animal intrusion,
an Animation-based Simulation tool for Passive Infra-Red sEnsor (ASPIRE) was
developed that simulates signals corresponding to human and animal intrusion
and some limited models of vegetative clutter. This speeded up the process of
algorithm development by allowing us to test different hypotheses in a
time-efficient manner. Finally, a chirplet-based model for intruder signal was
developed that significantly helped boost classification accuracy despite
drawing data from a smaller number of sensors. An SVM-based classifier was used
which made use of chirplet, energy and signal cross-correlation-based features.
The average accuracy obtained for intruder detection and classification on
real-world and simulated data sets was in excess of 97%.
| Raviteja Upadrashta, Tarun Choubisa, A. Praneeth, Tony G., Aswath V.
S., P. Vijay Kumar, Sripad Kowshik, Hari Prasad Gokul R, T. V. Prabhakar | null | 1604.03829 | null | null |
Hierarchical Compound Poisson Factorization | cs.LG cs.AI stat.ML | Non-negative matrix factorization models based on a hierarchical
Gamma-Poisson structure capture user and item behavior effectively in extremely
sparse data sets, making them the ideal choice for collaborative filtering
applications. Hierarchical Poisson factorization (HPF) in particular has proved
successful for scalable recommendation systems with extreme sparsity. HPF,
however, suffers from a tight coupling of sparsity model (absence of a rating)
and response model (the value of the rating), which limits the expressiveness
of the latter. Here, we introduce hierarchical compound Poisson factorization
(HCPF) that has the favorable Gamma-Poisson structure and scalability of HPF to
high-dimensional extremely sparse matrices. More importantly, HCPF decouples
the sparsity model from the response model, allowing us to choose the most
suitable distribution for the response. HCPF can capture binary, non-negative
discrete, non-negative continuous, and zero-inflated continuous responses. We
compare HCPF with HPF on nine discrete and three continuous data sets and
conclude that HCPF captures the relationship between sparsity and response
better than HPF.
| Mehmet E. Basbug, Barbara E. Engelhardt | null | 1604.03853 | null | null |
Inverse Reinforcement Learning with Simultaneous Estimation of Rewards
and Dynamics | cs.AI cs.LG cs.SY stat.ML | Inverse Reinforcement Learning (IRL) describes the problem of learning an
unknown reward function of a Markov Decision Process (MDP) from observed
behavior of an agent. Since the agent's behavior originates in its policy and
MDP policies depend on both the stochastic system dynamics as well as the
reward function, the solution of the inverse problem is significantly
influenced by both. Current IRL approaches assume that if the transition model
is unknown, additional samples from the system's dynamics are accessible, or
the observed behavior provides enough samples of the system's dynamics to solve
the inverse problem accurately. These assumptions are often not satisfied. To
overcome this, we present a gradient-based IRL approach that simultaneously
estimates the system's dynamics. By solving the combined optimization problem,
our approach takes into account the bias of the demonstrations, which stems
from the generating policy. The evaluation on a synthetic MDP and a transfer
learning task shows improvements regarding the sample efficiency as well as the
accuracy of the estimated reward functions and transition models.
| Michael Herman, Tobias Gindele, J\"org Wagner, Felix Schmitt, Wolfram
Burgard | null | 1604.03912 | null | null |
Removing Clouds and Recovering Ground Observations in Satellite Image
Sequences via Temporally Contiguous Robust Matrix Completion | cs.CV cs.LG | We consider the problem of removing and replacing clouds in satellite image
sequences, which has a wide range of applications in remote sensing. Our
approach first detects and removes the cloud-contaminated part of the image
sequences. It then recovers the missing scenes from the clean parts using the
proposed "TECROMAC" (TEmporally Contiguous RObust MAtrix Completion) objective.
The objective function balances temporal smoothness with a low rank solution
while staying close to the original observations. The matrix, whose rows are
pixels and whose columns are days, has low rank because the pixels reflect
land types such as vegetation, roads and lakes, and there are
relatively few variations as a result. We provide efficient optimization
algorithms for TECROMAC, so we can exploit images containing millions of
pixels. Empirical results on real satellite image sequences, as well as
simulated data, demonstrate that our approach is able to recover underlying
images from heavily cloud-contaminated observations.
| Jialei Wang, Peder A. Olsen, Andrew R. Conn, Aurelie C. Lozano | null | 1604.03915 | null | null |
Max-Information, Differential Privacy, and Post-Selection Hypothesis
Testing | cs.LG | In this paper, we initiate a principled study of how the generalization
properties of approximate differential privacy can be used to perform adaptive
hypothesis testing, while giving statistically valid $p$-value corrections. We
do this by observing that the guarantees of algorithms with bounded approximate
max-information are sufficient to correct the $p$-values of adaptively chosen
hypotheses, and then by proving that algorithms that satisfy
$(\epsilon,\delta)$-differential privacy have bounded approximate max
information when their inputs are drawn from a product distribution. This
substantially extends the known connection between differential privacy and
max-information, which previously was only known to hold for (pure)
$(\epsilon,0)$-differential privacy. It also extends our understanding of
max-information as a partially unifying measure controlling the generalization
properties of adaptive data analyses. We also show a lower bound, proving that
(despite the strong composition properties of max-information), when data is
drawn from a product distribution, $(\epsilon,\delta)$-differentially private
algorithms can come first in a composition with other algorithms satisfying
max-information bounds, but not necessarily second if the composition is
required to itself satisfy a nontrivial max-information bound. This, in
particular, implies that the connection between
$(\epsilon,\delta)$-differential privacy and max-information holds only for
inputs drawn from product distributions, unlike the connection between
$(\epsilon,0)$-differential privacy and max-information.
| Ryan Rogers, Aaron Roth, Adam Smith, Om Thakkar | null | 1604.03924 | null | null |
Efficient Algorithms for Large-scale Generalized Eigenvector Computation
and Canonical Correlation Analysis | cs.LG math.OC stat.ML | This paper considers the problem of canonical-correlation analysis (CCA)
(Hotelling, 1936) and, more broadly, the generalized eigenvector problem for a
pair of symmetric matrices. These are two fundamental problems in data analysis
and scientific computing with numerous applications in machine learning and
statistics (Shi and Malik, 2000; Hardoon et al., 2004; Witten et al., 2009).
We provide simple iterative algorithms, with improved runtimes, for solving
these problems that are globally linearly convergent with moderate dependencies
on the condition numbers and eigenvalue gaps of the matrices involved.
We obtain our results by reducing CCA to the top-$k$ generalized eigenvector
problem. We solve this problem through a general framework that simply requires
black box access to an approximate linear system solver. Instantiating this
framework with accelerated gradient descent we obtain a running time of
$O(\frac{z k \sqrt{\kappa}}{\rho} \log(1/\epsilon) \log
\left(k\kappa/\rho\right))$ where $z$ is the total number of nonzero entries,
$\kappa$ is the condition number and $\rho$ is the relative eigenvalue gap of
the appropriate matrices.
Our algorithm is linear in the input size and the number of components $k$ up
to a $\log(k)$ factor. This is essential for handling large-scale matrices that
appear in practice. To the best of our knowledge this is the first such
algorithm with global linear convergence. We hope that our results prompt
further research and ultimately improve the practical running time for
performing these important data analysis procedures on large data sets.
| Rong Ge, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, Aaron Sidford | null | 1604.03930 | null | null |
Theoretically-Grounded Policy Advice from Multiple Teachers in
Reinforcement Learning Settings with Applications to Negative Transfer | cs.LG | Policy advice is a transfer learning method where a student agent is able to
learn faster via advice from a teacher. However, both this and other
reinforcement learning transfer methods have little theoretical analysis. This
paper formally defines a setting where multiple teacher agents can provide
advice to a student and introduces an algorithm to leverage both autonomous
exploration and teacher's advice. Our regret bounds justify the intuition that
good teachers help while bad teachers hurt. Using our formalization, we are
also able to quantify, for the first time, when negative transfer can occur
within such a reinforcement learning setting.
| Yusen Zhan, Haitham Bou Ammar, Matthew E. taylor | null | 1604.03986 | null | null |
Fast Parallel Randomized Algorithm for Nonnegative Matrix Factorization
with KL Divergence for Large Sparse Datasets | math.OC cs.LG cs.NA | Nonnegative Matrix Factorization (NMF) with Kullback-Leibler Divergence
(NMF-KL) is one of the most significant NMF problems and equivalent to
Probabilistic Latent Semantic Indexing (PLSI), which has been successfully
applied in many applications. For sparse count data, a Poisson distribution and
KL divergence provide sparse models and sparse representation, which describe
the random variation better than a normal distribution and Frobenius norm.
Specially, sparse models provide more concise understanding of the appearance
of attributes over latent components, while sparse representation provides
concise interpretability of the contribution of latent components over
instances. However, minimizing NMF with KL divergence is much more difficult
than minimizing NMF with Frobenius norm; and sparse models, sparse
representation and fast algorithms for large sparse datasets are still
challenges for NMF with KL divergence. In this paper, we propose a fast
parallel randomized coordinate descent algorithm having fast convergence for
large sparse datasets to achieve sparse models and sparse representation.
Experimental results show that the proposed algorithm outperforms existing
methods on this problem.
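For reference, the objective being minimized is the one solved by the classic multiplicative updates of Lee and Seung; the sketch below implements those baseline updates to make the KL-NMF problem concrete, whereas the paper's contribution is a faster randomized coordinate descent for the same objective.

```python
import numpy as np

def nmf_kl(V, rank, iters=200, eps=1e-10, seed=0):
    """Baseline multiplicative updates for NMF under KL divergence
    (Lee & Seung, 2001). The paper replaces these with a faster
    randomized coordinate descent, but the objective is the same."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H
```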
| Duy Khuong Nguyen, Tu Bao Ho | null | 1604.04026 | null | null |
Multi-Source Multi-View Clustering via Discrepancy Penalty | cs.LG | With the advance of technology, entities can be observed in multiple views.
Multiple views containing different types of features can be used for
clustering. Although multi-view clustering has been successfully applied in
many applications, the previous methods usually assume the complete instance
mapping between different views. In many real-world applications, information
can be gathered from multiple sources, while each source can contain multiple
views, which are more cohesive for learning. The views under the same source
are usually fully mapped, but they can be very heterogeneous. Moreover, the
mappings between different sources are usually incomplete and partially
observed, which makes it more difficult to integrate all the views across
different sources. In this paper, we propose MMC (Multi-source Multi-view
Clustering), which is a framework based on collective spectral clustering with
a discrepancy penalty across sources, to tackle these challenges. MMC has
several advantages compared with other existing methods. First, MMC can deal
with incomplete mapping between sources. Second, it considers the disagreements
between sources while treating views in the same source as a cohesive set.
Third, MMC also tries to infer the instance similarities across sources to
enhance the clustering performance. Extensive experiments conducted on
real-world data demonstrate the effectiveness of the proposed approach.
| Weixiang Shao and Jiawei Zhang and Lifang He and Philip S. Yu | null | 1604.04029 | null | null |
Filling in the details: Perceiving from low fidelity images | cs.CV cs.LG cs.NE | Humans perceive their surroundings in great detail even though most of our
visual field is reduced to low-fidelity color-deprived (e.g. dichromatic) input
by the retina. In contrast, most deep learning architectures are
computationally wasteful in that they consider every part of the input when
performing an image processing task. Yet, the human visual system is able to
perform visual reasoning despite having only a small fovea of high visual
acuity. With this in mind, we wish to understand the extent to which
connectionist architectures are able to learn from and reason with low acuity,
distorted inputs. Specifically, we train autoencoders to generate full-detail
images from low-detail "foveations" of those images and then measure their
ability to reconstruct the full-detail images from the foveated versions. By
varying the type of foveation, we can study how well the architectures can cope
with various types of distortion. We find that the autoencoder compensates for
lower detail by learning increasingly global feature functions. In many cases,
the learnt features are suitable for reconstructing the original full-detail
image. For example, we find that the networks accurately perceive color in the
periphery, even when 75\% of the input is achromatic.
| Farahnaz Ahmed Wick, Michael L. Wick, Marc Pomplun | null | 1604.04125 | null | null |
Self-taught learning of a deep invariant representation for visual
tracking via temporal slowness principle | cs.CV cs.LG cs.NE | Visual representation is crucial for a visual tracking method's performance.
Conventionally, visual representations adopted in visual tracking rely on
hand-crafted computer vision descriptors. These descriptors were developed
generically without considering tracking-specific information. In this paper,
we propose to learn complex-valued invariant representations from tracked
sequential image patches, via strong temporal slowness constraint and stacked
convolutional autoencoders. The deep slow local representations are learned
offline on unlabeled data and transferred to the observational model of our
proposed tracker. The proposed observational model retains old training samples
to alleviate drift, and collects negative samples that are coherent with the
target's motion pattern for better discriminative tracking. With the learned
representation and online training samples, a logistic regression classifier is
adopted to distinguish target from background, and retrained online to adapt to
appearance changes. Subsequently, the observational model is integrated into a
particle filter framework to perform visual tracking. Experimental results on
various challenging benchmark sequences demonstrate that the proposed tracker
performs favourably against several state-of-the-art trackers.
| Jason Kuen, Kian Ming Lim, Chin Poo Lee | 10.1016/j.patcog.2015.02.012 | 1604.04144 | null | null |
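A minimal sketch of a temporal slowness penalty of the kind described, assuming z holds embeddings of consecutive tracked patches; the paper's complex-valued features and stacked convolutional autoencoders are not reproduced here.

```python
import torch

def slowness_loss(z: torch.Tensor) -> torch.Tensor:
    # z: (T, D) embeddings of T consecutive tracked image patches.
    # Penalizes fast temporal change, encouraging slowly varying features.
    return (z[1:] - z[:-1]).pow(2).sum(dim=1).mean()
```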
Consistently Estimating Markov Chains with Noisy Aggregate Data | cs.LG stat.ML | We address the problem of estimating the parameters of a time-homogeneous
Markov chain given only noisy, aggregate data. This arises when a population of
individuals behave independently according to a Markov chain, but individual
sample paths cannot be observed due to limitations of the observation process
or the need to protect privacy. Instead, only population-level counts of the
number of individuals in each state at each time step are available. When these
counts are exact, a conditional least squares (CLS) estimator is known to be
consistent and asymptotically normal. We initiate the study of method of
moments estimators for this problem to handle the more realistic case when
observations are additionally corrupted by noise. We show that CLS can be
interpreted as a simple "plug-in" method of moments estimator. However, when
observations are noisy, it is not consistent because it fails to account for
additional variance introduced by the noise. We develop a new, simpler method
of moments estimator that bypasses this problem and is consistent under noisy
observations.
| Garrett Bernstein and Daniel Sheldon | null | 1604.04182 | null | null |
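A sketch of the "plug-in" conditional least squares idea under simplifying assumptions: a nonnegative least squares fit per destination state followed by row normalization, rather than a full simplex-constrained optimization. As the abstract notes, this estimator loses consistency once the counts are corrupted by noise.

```python
import numpy as np
from scipy.optimize import nnls

def cls_estimate(counts):
    """counts: (T, K) array; counts[t, i] = individuals in state i at
    time t. Uses E[n_{t+1} | n_t] = n_t @ P to fit P column by column."""
    X, Y = counts[:-1].astype(float), counts[1:].astype(float)
    K = X.shape[1]
    P = np.column_stack([nnls(X, Y[:, j])[0] for j in range(K)])
    P /= np.maximum(P.sum(axis=1, keepdims=True), 1e-12)  # row-stochastic
    return P
```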
Modeling Electrical Daily Demand in Presence of PHEVs in Smart Grids
with Supervised Learning | cs.LG | Replacing a portion of current light duty vehicles (LDVs) with plug-in hybrid
electric vehicles (PHEVs) offers the possibility of reducing dependence on
petroleum fuels, together with environmental and economic benefits. The charging
activity of PHEVs will certainly introduce new load to the power grid. In the
framework of the development of a smarter grid, the primary focus of the
present study is to propose a model for the electrical daily demand in the
presence of PHEV charging. Expected PHEV demand is modeled by the PHEV charging
time and the starting time of charge according to real-world data. A normal
distribution for starting time of charge is assumed. Several distributions for
charging time are considered: uniform distribution, Gaussian with positive
support, Rician distribution and a non-uniform distribution coming from driving
patterns in real-world data. We generate daily demand profiles by using
real-world residential profiles throughout 2014 in the presence of different
expected PHEV demand models. Support vector machines (SVMs), a set of
supervised machine learning models, are employed in order to find the best
model to fit the data. SVMs with radial basis function (RBF) and polynomial
kernels were tested. Model performances are evaluated by means of mean squared
error (MSE) and mean absolute percentage error (MAPE). Best results are
obtained with RBF kernel: maximum (worst) values for MSE and MAPE were about
2.89 × 10^-8 and 0.023, respectively.
| Marco Pellegrini and Farshad Rassaei | null | 1604.04213 | null | null |
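A hedged sketch of the model-fitting step with scikit-learn: an RBF-kernel SVR evaluated by MSE and MAPE. The synthetic sinusoidal profile and the hyperparameter values (C, gamma) are stand-ins for the real residential + PHEV data and the paper's tuned settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

# Synthetic half-hourly daily demand profile standing in for real data.
rng = np.random.default_rng(0)
hours = np.arange(0, 24, 0.5).reshape(-1, 1)
demand = (2 + np.sin(hours.ravel() / 24 * 2 * np.pi - 2)
          + 0.1 * rng.normal(size=len(hours)))

X_tr, X_te, y_tr, y_te = train_test_split(hours, demand,
                                          test_size=0.3, random_state=0)
model = SVR(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
pred = model.predict(X_te)

mse = np.mean((pred - y_te) ** 2)
mape = np.mean(np.abs((pred - y_te) / y_te))
print(f"MSE={mse:.4g}  MAPE={mape:.4g}")
```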
Improving the Robustness of Deep Neural Networks via Stability Training | cs.CV cs.LG | In this paper we address the issue of output instability of deep neural
networks: small perturbations in the visual input can significantly distort the
feature embeddings and output of a neural network. Such instability affects
many deep architectures with state-of-the-art performance on a wide range of
computer vision tasks. We present a general stability training method to
stabilize deep networks against small input distortions that result from
various types of common image processing, such as compression, rescaling, and
cropping. We validate our method by stabilizing the state-of-the-art Inception
architecture against these types of distortions. In addition, we demonstrate
that our stabilized model gives robust state-of-the-art performance on
large-scale near-duplicate detection, similar-image ranking, and classification
on noisy datasets.
| Stephan Zheng, Yang Song, Thomas Leung, Ian Goodfellow | null | 1604.04326 | null | null |
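A minimal sketch of a stability training objective for classification: the task loss on the clean input plus a KL term between outputs for the clean input and a perturbed copy. Gaussian pixel noise is used as the training-time perturbation, and alpha and noise_std are illustrative values, not the paper's tuned settings.

```python
import torch
import torch.nn.functional as F

def stability_training_loss(model, x, y, alpha=0.01, noise_std=0.04):
    # Perturb the input, then penalize divergence between the two outputs.
    x_perturbed = x + noise_std * torch.randn_like(x)
    logits_clean = model(x)
    logits_pert = model(x_perturbed)
    task_loss = F.cross_entropy(logits_clean, y)
    # KL(P(y|x_clean) || P(y|x_perturbed)), averaged over the batch.
    stability = F.kl_div(F.log_softmax(logits_pert, dim=1),
                         F.softmax(logits_clean, dim=1),
                         reduction="batchmean")
    return task_loss + alpha * stability
```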
Positive Definite Estimation of Large Covariance Matrix Using
Generalized Nonconvex Penalties | cs.IT cs.LG math.IT stat.ML | This work addresses the issue of large covariance matrix estimation in
high-dimensional statistical analysis. Recently, improved iterative algorithms
with positive-definite guarantee have been developed. However, these algorithms
cannot be directly extended to use a nonconvex penalty for inducing sparsity.
Generally, a nonconvex penalty can ameliorate the bias problem of the popular
convex lasso penalty, and thus is more advantageous. In
this work, we propose a class of positive-definite covariance estimators using
generalized nonconvex penalties. We develop a first-order algorithm based on
the alternating direction method framework to solve the nonconvex optimization
problem efficiently. The convergence of this algorithm has been proved.
Further, the statistical properties of the new estimators have been analyzed
for generalized nonconvex penalties. Moreover, extension of this algorithm to
covariance estimation from sketched measurements has been considered. The
performances of the new estimators have been demonstrated by both a simulation
study and a gene clustering example for tumor tissues. Code for the proposed
estimators is available at https://github.com/FWen/Nonconvex-PDLCE.git.
| Fei Wen, Yuan Yang, Peilin Liu, and Robert C. Qiu | null | 1604.04348 | null | null |
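Two building blocks such an estimator needs, sketched under assumptions: the SCAD thresholding operator (one common generalized nonconvex penalty, with the conventional a = 3.7) and a projection enforcing positive definiteness. The paper's ADMM alternates analogous proximal and constraint steps; this is not its full algorithm.

```python
import numpy as np

def scad_threshold(x, lam, a=3.7):
    """SCAD thresholding: soft threshold near zero, unbiased (identity)
    for large entries, with a linear interpolation in between."""
    absx = np.abs(x)
    return np.where(
        absx <= 2 * lam,
        np.sign(x) * np.maximum(absx - lam, 0.0),
        np.where(absx <= a * lam,
                 ((a - 1) * x - np.sign(x) * a * lam) / (a - 2),
                 x))

def psd_project(S, eps=1e-4):
    """Project a symmetric matrix onto {Sigma : Sigma >= eps * I} by
    clipping its eigenvalues from below."""
    vals, vecs = np.linalg.eigh((S + S.T) / 2)
    return vecs @ np.diag(np.maximum(vals, eps)) @ vecs.T
```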
Match-SRNN: Modeling the Recursive Matching Structure with Spatial RNN | cs.CL cs.AI cs.LG cs.NE | Semantic matching, which aims to determine the matching degree between two
texts, is a fundamental problem for many NLP applications. Recently, deep
learning approaches have been applied to this problem and significant improvements
have been achieved. In this paper, we propose to view the generation of the
global interaction between two texts as a recursive process: i.e. the
interaction of two texts at each position is a composition of the interactions
between their prefixes as well as the word level interaction at the current
position. Based on this idea, we propose a novel deep architecture, namely
Match-SRNN, to model the recursive matching structure. Firstly, a tensor is
constructed to capture the word level interactions. Then a spatial RNN is
applied to integrate the local interactions recursively, with importance
determined by four types of gates. Finally, the matching score is calculated
based on the global interaction. We show that, when degenerated to the exact
matching scenario, Match-SRNN can approximate the dynamic programming process
of the longest common subsequence. Thus, there exists a clear interpretation for
Match-SRNN. Our experiments on two semantic matching tasks showed the
effectiveness of Match-SRNN, and its ability to visualize the learned
matching structure.
| Shengxian Wan, Yanyan Lan, Jun Xu, Jiafeng Guo, Liang Pang, Xueqi
Cheng | null | 1604.04378 | null | null |
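For reference, the longest-common-subsequence recursion the abstract mentions, in the three-way max form each Match-SRNN cell is said to approximate: every cell combines its left, top, and diagonal neighbors plus the word-level match at the current position.

```python
def lcs_length(s, t):
    """Longest common subsequence length via dynamic programming."""
    m, n = len(s), len(t)
    h = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            match = 1 if s[i - 1] == t[j - 1] else 0
            h[i][j] = max(h[i - 1][j],          # drop s[i-1]
                          h[i][j - 1],          # drop t[j-1]
                          h[i - 1][j - 1] + match)
    return h[m][n]
```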
The Artificial Mind's Eye: Resisting Adversarials for Convolutional
Neural Networks using Internal Projection | cs.LG cs.NE | We introduce a novel artificial neural network architecture that integrates
robustness to adversarial input in the network structure. The main idea of our
approach is to force the network to make predictions on what the given instance
of the class under consideration would look like and subsequently test those
predictions. By forcing the network to redraw the relevant parts of the image
and subsequently comparing this new image to the original, we have the
network give a "proof" of the presence of the object.
| Harm Berntsen, Wouter Kuijper and Tom Heskes | null | 1604.04428 | null | null |
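A heavily hedged sketch of the "redraw and compare" idea: accept a prediction only if a class-conditional decoder can redraw something close to the input. The decoder module, the MSE comparison, and the threshold are all hypothetical stand-ins; the paper's actual architecture differs in detail.

```python
import torch
import torch.nn.functional as F

def projection_check(classifier, decoder, x, threshold=0.1):
    # classifier: maps images (B, C, H, W) to class logits.
    # decoder: hypothetical module redrawing an image from a class label.
    logits = classifier(x)
    y_hat = logits.argmax(dim=1)
    x_redrawn = decoder(y_hat)
    # Per-sample reconstruction error between redrawn and original image.
    err = F.mse_loss(x_redrawn, x, reduction="none").flatten(1).mean(dim=1)
    accepted = err < threshold
    return y_hat, accepted
```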
Bayesian linear regression with Student-t assumptions | cs.LG stat.ML | As an automatic method of determining model complexity using the training
data alone, Bayesian linear regression provides us a principled way to select
hyperparameters. But one often needs approximate inference if the distributional
assumption goes beyond the Gaussian. In this paper, we propose a
Bayesian linear regression model with Student-t assumptions (BLRS), which can
be inferred exactly. In this framework, both conjugate prior and expectation
maximization (EM) algorithm are generalized. Meanwhile, we prove that the
maximum likelihood solution is equivalent to the standard Bayesian linear
regression with Gaussian assumptions (BLRG). The $q$-EM algorithm for BLRS is
nearly identical to the EM algorithm for BLRG. It is shown that $q$-EM for
BLRS can converge faster than EM for BLRG for the task of predicting online
news popularity.
| Chaobing Song, Shu-Tao Xia | null | 1604.04434 | null | null |
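For reference, the exact posterior of the Gaussian baseline (BLRG) to which the paper's maximum likelihood solution is shown to be equivalent; alpha and beta are illustrative precision hyperparameters.

```python
import numpy as np

def blrg_posterior(X, y, alpha=1.0, beta=25.0):
    """Posterior of Bayesian linear regression with Gaussian assumptions:
    prior w ~ N(0, alpha^{-1} I), likelihood y ~ N(Xw, beta^{-1} I).
    Returns the posterior mean m_N and covariance S_N."""
    d = X.shape[1]
    S_N = np.linalg.inv(alpha * np.eye(d) + beta * X.T @ X)
    m_N = beta * S_N @ X.T @ y
    return m_N, S_N
```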
Delta divergence: A novel decision cognizant measure of classifier
incongruence | cs.LG cs.IT math.IT stat.ML | Disagreement between two classifiers regarding the class membership of an
observation in pattern recognition can be indicative of an anomaly and its
nuance. As classifiers in general base their decisions on class a posteriori
probabilities, the most natural approach to detecting classifier incongruence
is to use divergence. However, existing divergences are not particularly
suitable to gauge classifier incongruence. In this paper, we postulate the
properties that a divergence measure should satisfy and propose a novel
divergence measure, referred to as Delta divergence. In contrast to existing
measures, it is decision cognizant. The focus in Delta divergence on the
dominant hypotheses has a clutter-reducing property, the significance of which
grows with an increasing number of classes. The proposed measure satisfies other
important properties such as symmetry, and independence of classifier
confidence. The relationship of the proposed divergence to some baseline
measures is demonstrated experimentally, showing its superiority.
| Josef Kittler and Cemre Zor | null | 1604.04451 | null | null |
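For contrast, the kind of baseline divergence the paper argues is ill-suited to gauging incongruence, since it weighs all classes rather than focusing on the dominant hypotheses; the proposed Delta divergence itself is defined in the paper and not reproduced here.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between two class-posterior vectors p and q."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))
```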
A short note on extension theorems and their connection to universal
consistency in machine learning | stat.ML cs.LG | Statistical machine learning plays an important role in modern statistics and
computer science. One main goal of statistical machine learning is to provide
universally consistent algorithms, i.e., the estimator converges in probability
or in some stronger sense to the Bayes risk or to the Bayes decision function.
Kernel methods based on minimizing the regularized risk over a reproducing
kernel Hilbert space (RKHS) belong to these statistical machine learning
methods. It is in general unknown which kernel yields optimal results for a
particular data set or for the unknown probability measure. Hence various
kernel learning methods have been proposed to choose the kernel, and therefore
also its RKHS, in a data-adaptive manner. Nevertheless, many practitioners often use
the classical Gaussian RBF kernel or certain Sobolev kernels with good success.
The goal of this short note is to offer one possible theoretical explanation
for this empirical fact.
| Andreas Christmann, Florian Dumpert, and Dao-Hong Xiang | null | 1604.04505 | null | null |
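A minimal sketch of the setting the note discusses: regularized risk minimization over the RKHS of the classical Gaussian RBF kernel, here in the form of kernel ridge regression with illustrative hyperparameters gamma and lam.

```python
import numpy as np

def krr_fit_predict(X_train, y_train, X_test, gamma=1.0, lam=0.1):
    """Kernel ridge regression with a Gaussian RBF kernel, i.e. the
    minimizer of the regularized empirical risk in the RBF kernel's RKHS."""
    def rbf(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)
    K = rbf(X_train, X_train)
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)
    return rbf(X_test, X_train) @ alpha
```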