title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Cost-sensitive Learning for Utility Optimization in Online Advertising
Auctions | cs.LG | One of the most challenging problems in computational advertising is the
prediction of click-through and conversion rates for bidding in online
advertising auctions. An unaddressed problem in previous approaches is the
existence of highly non-uniform misprediction costs. While for model evaluation
these costs have been taken into account through recently proposed
business-aware offline metrics -- such as the Utility metric which measures the
impact on advertiser profit -- this is not the case when training the models
themselves. In this paper, to bridge the gap, we formally analyze the
relationship between optimizing the Utility metric and the log loss, which is
considered one of the state-of-the-art approaches in conversion modeling.
Our analysis motivates the idea of weighting the log loss with the business
value of the predicted outcome. We present and analyze a new cost weighting
scheme and show that significant gains in offline and online performance can be
achieved.
| Flavian Vasile, Damien Lefortier, Olivier Chapelle | null | 1603.03713 | null | null |
Distribution Free Learning with Local Queries | cs.LG | The model of learning with \emph{local membership queries} interpolates
between the PAC model and the membership queries model by allowing the learner
to query the label of any example that is similar to an example in the training
set. This model, recently proposed and studied by Awasthi, Feldman and Kanade,
aims to facilitate practical use of membership queries.
We continue this line of work, proving both positive and negative results in
the {\em distribution free} setting. We restrict to the boolean cube $\{-1,
1\}^n$, and say that a query is $q$-local if it is at Hamming distance $\le
q$ from some training example. On the positive side, we show that $1$-local
queries already give additional strength and allow learning a certain type
of DNF formula. On the negative side, we show that even
$\left(n^{0.99}\right)$-local queries cannot help to learn various classes
including Automata, DNFs and more. Likewise, $q$-local queries for any constant
$q$ cannot help to learn Juntas, Decision Trees, Sparse Polynomials and more.
Moreover, for these classes, an algorithm that uses
$\left(\log^{0.99}(n)\right)$-local queries would lead to a breakthrough in the
best known running times.
| Galit Bary-Weisberg and Amit Daniely and Shai Shalev-Shwartz | null | 1603.03714 | null | null |
Efficient Clustering of Correlated Variables and Variable Selection in
High-Dimensional Linear Models | stat.ML cs.LG | In this paper, we introduce the Adaptive Cluster Lasso (ACL) method for variable
selection in high dimensional sparse regression models with strongly correlated
variables. To handle correlated variables, the concept of clustering or
grouping variables and then pursuing model fitting is widely accepted. When the
dimension is very high, finding an appropriate group structure is as difficult
as the original problem. The ACL is a three-stage procedure where, at the first
stage, we use the Lasso (or its adaptive or thresholded version) to do initial
selection, then we also include those variables which are not selected by the
Lasso but are strongly correlated with the variables selected by the Lasso. At
the second stage we cluster the variables based on the reduced set of
predictors and in the third stage we perform sparse estimation such as Lasso on
cluster representatives or the group Lasso based on the structures generated by
the clustering procedure. We show that our procedure is consistent and efficient
in finding the true underlying population group structure (under the assumption
of the irrepresentable and beta-min conditions). We also study the group selection
consistency of our method and we support the theory using simulated and
pseudo-real dataset examples.
| Niharika Gauraha, Swapan K. Parui | null | 1603.03724 | null | null |
A Recursive Born Approach to Nonlinear Inverse Scattering | cs.LG physics.optics | The Iterative Born Approximation (IBA) is a well-known method for describing
waves scattered by semi-transparent objects. In this paper, we present a novel
nonlinear inverse scattering method that combines IBA with an edge-preserving
total variation (TV) regularizer. The proposed method is obtained by relating
iterations of IBA to layers of a feedforward neural network and developing a
corresponding error backpropagation algorithm for efficiently estimating the
permittivity of the object. Simulations illustrate that, by accounting for
multiple scattering, the method successfully recovers the permittivity
distribution where the traditional linear inverse scattering fails.
| Ulugbek S. Kamilov, Dehong Liu, Hassan Mansour, and Petros T.
Boufounos | 10.1109/LSP.2016.2579647 | 1603.03768 | null | null |
A Primer on the Signature Method in Machine Learning | stat.ML cs.LG stat.ME | In these notes, we wish to provide an introduction to the signature method,
focusing on its basic theoretical properties and recent numerical applications.
The notes are split into two parts. The first part focuses on the definition
and fundamental properties of the signature of a path, or the path signature.
We have aimed for a minimalistic approach, assuming only familiarity with
classical real analysis and integration theory, and supplementing theory with
straightforward examples. We have chosen to focus in detail on the principal
properties of the signature which we believe are fundamental to understanding
its role in applications. We also present an informal discussion on some of its
deeper properties and briefly mention the role of the signature in rough paths
theory, which we hope could serve as a light introduction to rough paths for
the interested reader.
The second part of these notes discusses practical applications of the path
signature to the area of machine learning. The signature approach represents a
non-parametric way for extraction of characteristic features from data. The
data are converted into a multi-dimensional path by means of various embedding
algorithms and then processed for computation of individual terms of the
signature which summarise certain information contained in the data. The
signature thus transforms raw data into a set of features which are used in
machine learning tasks. We will review current progress in applications of
signatures to machine learning problems.
| Ilya Chevyrev, Andrey Kormilitzin | null | 1603.03788 | null | null |
Sequential Short-Text Classification with Recurrent and Convolutional
Neural Networks | cs.CL cs.AI cs.LG cs.NE stat.ML | Recent approaches based on artificial neural networks (ANNs) have shown
promising results for short-text classification. However, many short texts
occur in sequences (e.g., sentences in a document or utterances in a dialog),
and most existing ANN-based systems do not leverage the preceding short texts
when classifying a subsequent one. In this work, we present a model based on
recurrent neural networks and convolutional neural networks that incorporates
the preceding short texts. Our model achieves state-of-the-art results on three
different datasets for dialog act prediction.
| Ji Young Lee, Franck Dernoncourt | null | 1603.03827 | null | null |
From virtual demonstration to real-world manipulation using LSTM and MDN | cs.RO cs.AI cs.LG | Robots assisting the disabled or elderly must perform complex manipulation
tasks and must adapt to the home environment and preferences of their user.
Learning from demonstration is a promising approach that would allow the
non-technical user to teach the robot different tasks. However, collecting
demonstrations in the home environment of a disabled user is time consuming,
disruptive to the comfort of the user, and presents safety challenges. It would
be desirable to perform the demonstrations in a virtual environment. In this
paper we describe a solution to the challenging problem of behavior transfer
from virtual demonstration to a physical robot. The virtual demonstrations are
used to train a deep neural network based controller, which uses a Long
Short-Term Memory (LSTM) recurrent neural network to generate trajectories. The
training process uses a Mixture Density Network (MDN) to calculate an error
signal suitable for the multimodal nature of demonstrations. The controller
learned in the virtual environment is transferred to a physical robot (a
Rethink Robotics Baxter). An off-the-shelf vision component is used to
substitute for geometric knowledge available in the simulation and an inverse
kinematics module is used to allow the Baxter to enact the trajectory. Our
experimental studies validate the three contributions of the paper: (1) the
controller learned from virtual demonstrations can be used to successfully
perform the manipulation tasks on a physical robot, (2) the LSTM+MDN
architectural choice outperforms other choices, such as the use of feedforward
networks and mean-squared error based training signals and (3) allowing
imperfect demonstrations in the training set also allows the controller to
learn how to correct its manipulation mistakes.
| Rouhollah Rahmatizadeh, Pooya Abolghasemi, Aman Behal, Ladislau
B\"ol\"oni | null | 1603.03833 | null | null |
Pufferfish Privacy Mechanisms for Correlated Data | cs.LG cs.CR stat.ML | Many modern databases include personal and sensitive correlated data, such as
private information on users connected together in a social network, and
measurements of physical activity of single subjects across time. However,
differential privacy, the current gold standard in data privacy, does not
adequately address privacy issues in this kind of data.
This work looks at a recent generalization of differential privacy, called
Pufferfish, that can be used to address privacy in correlated data. The main
challenge in applying Pufferfish is a lack of suitable mechanisms. We provide
the first mechanism -- the Wasserstein Mechanism -- which applies to any
general Pufferfish framework. Since this mechanism may be computationally
inefficient, we provide an additional mechanism that applies to some practical
cases such as physical activity measurements across time, and is
computationally efficient. Our experimental evaluations indicate that this
mechanism provides privacy and utility for synthetic as well as real data in
two separate domains.
| Shuang Song, Yizhen Wang, Kamalika Chaudhuri | null | 1603.03977 | null | null |
On Learning High Dimensional Structured Single Index Models | stat.ML cs.AI cs.LG | Single Index Models (SIMs) are simple yet flexible semi-parametric models for
machine learning, where the response variable is modeled as a monotonic
function of a linear combination of features. Estimation in this context
requires learning both the feature weights and the nonlinear function that
relates features to observations. While methods have been described to learn
SIMs in the low dimensional regime, a method that can efficiently learn SIMs in
high dimensions, and under general structural assumptions, has not been
forthcoming. In this paper, we propose computationally efficient algorithms for
SIM inference in high dimensions with structural constraints. Our general
approach specializes to sparsity, group sparsity, and low-rank assumptions
among others. Experiments show that the proposed method enjoys superior
predictive performance when compared to generalized linear models, and achieves
results comparable to or better than single layer feedforward neural networks
with significantly less computational cost.
| Nikhil Rao, Ravi Ganti, Laura Balzano, Rebecca Willett, Robert Nowak | null | 1603.03980 | null | null |
Learning Typographic Style | cs.CV cs.LG cs.NE | Typography is a ubiquitous art form that affects our understanding,
perception, and trust in what we read. Thousands of different font-faces have
been created with enormous variations in the characters. In this paper, we
learn the style of a font by analyzing a small subset of only four letters.
From these four letters, we learn two tasks. The first is a discrimination
task: given the four letters and a new candidate letter, does the new letter
belong to the same font? Second, given the four basis letters, can we generate
all of the other letters with the same characteristics as those in the basis
set? We use deep neural networks to address both tasks, quantitatively and
qualitatively measure the results in a variety of novel manners, and present a
thorough investigation of the weaknesses and strengths of the approach.
| Shumeet Baluja | null | 1603.04000 | null | null |
Fast Learning from Distributed Datasets without Entity Matching | cs.LG cs.DC | Consider the following data fusion scenario: two datasets/peers contain the
same real-world entities described using partially shared features, e.g.
banking and insurance company records of the same customer base. Our goal is to
learn a classifier in the cross product space of the two domains, in the hard
case in which no shared ID is available -- e.g. due to anonymization.
Traditionally, the problem is approached by first addressing entity matching
and subsequently learning the classifier in a standard manner. We present an
end-to-end solution which bypasses matching entities, based on the recently
introduced concept of Rademacher observations (rados). Informally, we replace
the minimisation of a loss over examples, which requires solving entity
resolution, by the equivalent minimisation of a (different) loss over rados.
Among others, key properties we show are (i) a potentially huge subset of these
rados does not require performing entity matching, and (ii) the algorithm that
provably minimizes the rado loss over these rados has time and space
complexities smaller than the algorithm minimizing the equivalent example loss.
Last, we relax a key assumption of the model, that the data is vertically
partitioned among peers --- in this case, we would not even know the existence
of a solution to entity resolution. In this more general setting, experiments
validate the possibility of significantly beating even the optimal peer in
hindsight.
| Giorgio Patrini, Richard Nock, Stephen Hardy, Tiberio Caetano | null | 1603.04002 | null | null |
Active Algorithms For Preference Learning Problems with Multiple
Populations | stat.ML cs.AI cs.LG | In this paper we model the problem of learning preferences of a population as
an active learning problem. We propose an algorithm that can adaptively choose pairs
of items to show to users coming from a heterogeneous population, and use the
obtained reward to decide which pair of items to show next. We provide
computationally efficient algorithms with provable sample complexity guarantees
for this problem in both the noiseless and noisy cases. In the process of
establishing sample complexity guarantees for our algorithms, we establish new
results using a Nystr{\"o}m-like method which can be of independent interest.
We supplement our theoretical results with experimental comparisons.
| Aniruddha Bhargava, Ravi Ganti, Robert Nowak | null | 1603.04118 | null | null |
Exploratory Gradient Boosting for Reinforcement Learning in Complex
Domains | cs.AI cs.LG stat.ML | High-dimensional observations and complex real-world dynamics present major
challenges in reinforcement learning for both function approximation and
exploration. We address both of these challenges with two complementary
techniques: First, we develop a gradient-boosting style, non-parametric
function approximator for learning on $Q$-function residuals. And second, we
propose an exploration strategy inspired by the principles of state abstraction
and information acquisition under uncertainty. We demonstrate the empirical
effectiveness of these techniques, first, as a preliminary check, on two
standard tasks (Blackjack and $n$-Chain), and then on two much larger and more
realistic tasks with high-dimensional observation spaces. Specifically, we
introduce two benchmarks built within the game Minecraft where the observations
are pixel arrays of the agent's visual field. A combination of our two
algorithmic techniques performs competitively on the standard
reinforcement-learning tasks while consistently and substantially outperforming
baselines on the two tasks with high-dimensional observation spaces. The new
function approximator, exploration strategy, and evaluation benchmarks are each
of independent interest in the pursuit of reinforcement-learning methods that
scale to real-world domains.
| David Abel, Alekh Agarwal, Fernando Diaz, Akshay Krishnamurthy, Robert
E. Schapire | null | 1603.04119 | null | null |
On the Influence of Momentum Acceleration on Online Learning | math.OC cs.LG stat.ML | The article examines in some detail the convergence rate and
mean-square-error performance of momentum stochastic gradient methods in the
constant step-size and slow adaptation regime. The results establish that
momentum methods are equivalent to the standard stochastic gradient method with
a re-scaled (larger) step-size value. The size of the re-scaling is determined
by the value of the momentum parameter. The equivalence result is established
for all time instants and not only in steady-state. The analysis is carried out
for general strongly convex and smooth risk functions, and is not limited to
quadratic risks. One notable conclusion is that the well-known benefits of
momentum constructions for deterministic optimization problems do not
necessarily carry over to the adaptive online setting when small constant
step-sizes are used to enable continuous adaptation and learning in the
presence of persistent gradient noise. From simulations, the equivalence
between momentum and standard stochastic gradient methods is also observed for
non-differentiable and non-convex problems.
| Kun Yuan, Bicheng Ying, and Ali H. Sayed | null | 1603.04136 | null | null |
Top-$K$ Ranking from Pairwise Comparisons: When Spectral Ranking is
Optimal | cs.LG cs.IT cs.SI math.IT stat.ML | We explore the top-$K$ rank aggregation problem. Suppose a collection of
items is compared in pairs repeatedly, and we aim to recover a consistent
ordering that focuses on the top-$K$ ranked items based on partially revealed
preference information. We investigate the Bradley-Terry-Luce model in which
one ranks items according to their perceived utilities modeled as noisy
observations of their underlying true utilities. Our main contributions are
two-fold. First, in a general comparison model where item pairs to compare are
given a priori, we attain an upper and lower bound on the sample size for
reliable recovery of the top-$K$ ranked items. Second, more importantly,
extending the result to a random comparison model where item pairs to compare
are chosen independently with some probability, we show that in slightly
restricted regimes, the gap between the derived bounds reduces to a constant
factor, hence revealing that a spectral method can achieve the minimax optimality
on the (order-wise) sample size required for top-$K$ ranking. That is to say,
we demonstrate a spectral method alone to be sufficient to achieve the
optimality and advantageous in terms of computational complexity, as it does
not require an additional stage of maximum likelihood estimation that a
state-of-the-art scheme employs to achieve the optimality. We corroborate our
main results by numerical experiments.
| Minje Jang, Sunghyun Kim, Changho Suh, Sewoong Oh | null | 1603.04153 | null | null |
Visual Concept Recognition and Localization via Iterative Introspection | cs.CV cs.LG | Convolutional neural networks have been shown to develop internal
representations, which correspond closely to semantically meaningful objects
and parts, although trained solely on class labels. Class Activation Mapping
(CAM) is a recent method that makes it possible to easily highlight the image
regions contributing to a network's classification decision. We build upon
these two developments to enable a network to re-examine informative image
regions, which we term introspection. We propose a weakly-supervised iterative
scheme, which shifts its center of attention to increasingly discriminative
regions as it progresses, by alternating stages of classification and
introspection. We evaluate our method and show its effectiveness over a range
of several datasets, where we obtain competitive or state-of-the-art results:
on Stanford-40 Actions, we set a new state of the art of 81.74%. On
FGVC-Aircraft and the Stanford Dogs dataset, we show consistent improvements
over baselines, some of which include significantly more supervision.
| Amir Rosenfeld, Shimon Ullman | null | 1603.04186 | null | null |
Online Isotonic Regression | cs.LG stat.ML | We consider the online version of the isotonic regression problem. Given a
set of linearly ordered points (e.g., on the real line), the learner must
predict labels sequentially at adversarially chosen positions and is evaluated
by her total squared loss compared against the best isotonic (non-decreasing)
function in hindsight. We survey several standard online learning algorithms
and show that none of them achieve the optimal regret exponent; in fact, most
of them (including Online Gradient Descent, Follow the Leader and Exponential
Weights) incur linear regret. We then prove that the Exponential Weights
algorithm played over a covering net of isotonic functions has a regret bounded
by $O\big(T^{1/3} \log^{2/3}(T)\big)$ and present a matching $\Omega(T^{1/3})$
lower bound on regret. We provide a computationally efficient version of this
algorithm. We also analyze the noise-free case, in which the revealed labels
are isotonic, and show that the bound can be improved to $O(\log T)$ or even to
$O(1)$ (when the labels are revealed in isotonic order). Finally, we extend the
analysis beyond squared loss and give bounds for entropic loss and absolute
loss.
| Wojciech Kot{\l}owski, Wouter M. Koolen, Alan Malek | null | 1603.04190 | null | null |
A Variational Perspective on Accelerated Methods in Optimization | math.OC cs.LG stat.ML | Accelerated gradient methods play a central role in optimization, achieving
optimal rates in many settings. While many generalizations and extensions of
Nesterov's original acceleration method have been proposed, it is not yet clear
what the natural scope of the acceleration concept is. In this paper, we study
accelerated methods from a continuous-time perspective. We show that there is a
Lagrangian functional that we call the \emph{Bregman Lagrangian} which
generates a large class of accelerated methods in continuous time, including
(but not limited to) accelerated gradient descent, its non-Euclidean extension,
and accelerated higher-order gradient methods. We show that the continuous-time
limit of all of these methods corresponds to traveling the same curve in
spacetime at different speeds. From this perspective, Nesterov's technique and
many of its generalizations can be viewed as a systematic way to go from the
continuous-time curves generated by the Bregman Lagrangian to a family of
discrete-time accelerated algorithms.
| Andre Wibisono, Ashia C. Wilson, Michael I. Jordan | 10.1073/pnas.1614734113 | 1603.04245 | null | null |
Item2Vec: Neural Item Embedding for Collaborative Filtering | cs.LG cs.AI cs.IR | Many Collaborative Filtering (CF) algorithms are item-based in the sense that
they analyze item-item relations in order to produce item similarities.
Recently, several works in the field of Natural Language Processing (NLP)
suggested learning a latent representation of words using neural embedding
algorithms. Among them, the Skip-gram with Negative Sampling (SGNS), also known
as word2vec, was shown to provide state-of-the-art results on various
linguistics tasks. In this paper, we show that item-based CF can be cast in the
same framework of neural word embedding. Inspired by SGNS, we describe a method
we name item2vec for item-based CF that produces embeddings for items in a
latent space. The method is capable of inferring item-item relations even when
user information is not available. We present experimental results that
demonstrate the effectiveness of the item2vec method and show it is competitive
with SVD.
| Oren Barkan and Noam Koenigstein | null | 1603.04259 | null | null |
Universal probability-free prediction | cs.LG | We construct universal prediction systems in the spirit of Popper's
falsifiability and Kolmogorov complexity and randomness. These prediction
systems do not depend on any statistical assumptions (but under the IID
assumption they dominate, to within the usual accuracy, conformal prediction).
Our constructions give rise to a theory of algorithmic complexity and
randomness of time containing analogues of several notions and results of the
classical theory of Kolmogorov complexity and randomness.
| Vladimir Vovk and Dusko Pavlovic | null | 1603.04283 | null | null |
Learning Network of Multivariate Hawkes Processes: A Time Series
Approach | cs.LG cs.AI stat.ML | Learning the influence structure of multiple time series data is of great
interest to many disciplines. This paper studies the problem of recovering the
causal structure in a network of multivariate linear Hawkes processes. In such
processes, the occurrence of an event in one process affects the probability of
occurrence of new events in some other processes. Thus, a natural notion of
causality exists between such processes captured by the support of the
excitation matrix. We show that the resulting causal influence network is
equivalent to the Directed Information graph (DIG) of the processes, which
encodes the causal factorization of the joint distribution of the processes.
Furthermore, we present an algorithm for learning the support of the excitation
matrix (or, equivalently, the DIG). The performance of the algorithm is evaluated
on synthesized multivariate Hawkes networks as well as real-world stock market
and MemeTracker datasets.
| Jalal Etesami, Negar Kiyavash, Kun Zhang, Kushagra Singhal | null | 1603.04319 | null | null |
Criteria of efficiency for conformal prediction | cs.LG | We study optimal conformity measures for various criteria of efficiency of
classification in an idealised setting. This leads to an important class of
criteria of efficiency that we call probabilistic; it turns out that the most
standard criteria of efficiency used in the literature on conformal prediction are
not probabilistic unless the problem of classification is binary. We consider
both unconditional and label-conditional conformal prediction.
| Vladimir Vovk, Ilia Nouretdinov, Valentina Fedorova, Ivan Petej, and
Alex Gammerman | null | 1603.04416 | null | null |
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed
Systems | cs.DC cs.LG | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org.
| Mart\'in Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng
Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin,
Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard,
Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh
Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah,
Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar,
Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol
Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang
Zheng | null | 1603.04467 | null | null |
Conformal Predictors for Compound Activity Prediction | cs.LG | The paper presents an application of Conformal Predictors to a
chemoinformatics problem of identifying activities of chemical compounds. The
paper addresses some specific challenges of this domain: a large number of
compounds (training examples), high-dimensionality of feature space, sparseness
and a strong class imbalance. A variant of conformal predictors called
Inductive Mondrian Conformal Predictor is applied to deal with these
challenges. Results are presented for several non-conformity measures (NCM)
extracted from underlying algorithms and different kernels. A number of
performance measures are used in order to demonstrate the flexibility of
Inductive Mondrian Conformal Predictors in dealing with such a complex set of
data.
Keywords: Conformal Prediction, Confidence Estimation, Chemoinformatics,
Non-Conformity Measure.
| Paolo Toccacheli, Ilia Nouretdinov and Alexander Gammerman | null | 1603.04506 | null | null |
Object Contour Detection with a Fully Convolutional Encoder-Decoder
Network | cs.CV cs.LG | We develop a deep learning algorithm for contour detection with a fully
convolutional encoder-decoder network. Different from previous low-level edge
detection, our algorithm focuses on detecting higher-level object contours. Our
network is trained end-to-end on PASCAL VOC with refined ground truth from
inaccurate polygon annotations, yielding much higher precision in object
contour detection than previous methods. We find that the learned model
generalizes well to unseen object classes from the same super-categories on MS
COCO and can match state-of-the-art edge detection on BSDS500 with fine-tuning.
By combining with the multiscale combinatorial grouping algorithm, our method
can generate high-quality segmented object proposals, which significantly
advance the state-of-the-art on PASCAL VOC (improving average recall from 0.62
to 0.67) with a relatively small amount of candidates ($\sim$1660 per image).
| Jimei Yang, Brian Price, Scott Cohen, Honglak Lee, Ming-Hsuan Yang | null | 1603.04530 | null | null |
Learning Domain-Invariant Subspace using Domain Features and
Independence Maximization | cs.CV cs.AI cs.LG | Domain adaptation algorithms are useful when the distributions of the
training and the test data are different. In this paper, we focus on the
problem of instrumental variation and time-varying drift in the field of
sensors and measurement, which can be viewed as discrete and continuous
distributional change in the feature space. We propose maximum independence
domain adaptation (MIDA) and semi-supervised MIDA (SMIDA) to address this
problem. Domain features are first defined to describe the background
information of a sample, such as the device label and acquisition time. Then,
MIDA learns a subspace which has maximum independence with the domain features,
so as to reduce the inter-domain discrepancy in distributions. A feature
augmentation strategy is also designed to project samples according to their
backgrounds so as to improve the adaptation. The proposed algorithms are
flexible and fast. Their effectiveness is verified by experiments on synthetic
datasets and four real-world ones on sensors, measurement, and computer vision.
They can greatly enhance the practicability of sensor systems, as well as
extend the application scope of existing domain adaptation algorithms by
uniformly handling different kinds of distributional change.
| Ke Yan, Lu Kou, and David Zhang | 10.1109/TCYB.2016.2633306 | 1603.04535 | null | null |
Matching while Learning | cs.LG cs.DS stat.ME stat.ML | We consider the problem faced by a service platform that needs to match
limited supply with demand but also to learn the attributes of new users in
order to match them better in the future. We introduce a benchmark model with
heterogeneous "workers" (demand) and a limited supply of "jobs" that arrive
over time. Job types are known to the platform, but worker types are unknown
and must be learned by observing match outcomes. Workers depart after
performing a certain number of jobs. The expected payoff from a match depends
on the pair of types and the goal is to maximize the steady-state rate of
accumulation of payoff. Though we use terminology inspired by labor markets,
our framework applies more broadly to platforms where a limited supply of
heterogeneous products is matched to users over time.
Our main contribution is a complete characterization of the structure of the
optimal policy in the limit that each worker performs many jobs. The platform
faces a trade-off for each worker between myopically maximizing payoffs
(exploitation) and learning the type of the worker (exploration). This creates
a multitude of multi-armed bandit problems, one for each worker, coupled
together by the constraint on availability of jobs of different types (capacity
constraints). We find that the platform should estimate a shadow price for each
job type, and use the payoffs adjusted by these prices, first, to determine its
learning goals and then, for each worker, (i) to balance learning with payoffs
during the "exploration phase," and (ii) to myopically match after it has
achieved its learning goals during the "exploitation phase."
| Ramesh Johari, Vijay Kamble and Yash Kanoria | null | 1603.04549 | null | null |
Unsupervised Ranking Model for Entity Coreference Resolution | cs.CL cs.LG | Coreference resolution is one of the first stages in deep language
understanding and its importance has been well recognized in the natural
language processing community. In this paper, we propose a generative,
unsupervised ranking model for entity coreference resolution by introducing
resolution mode variables. Our unsupervised system achieves 58.44% F1 score of
the CoNLL metric on the English data from the CoNLL-2012 shared task (Pradhan
et al., 2012), outperforming the Stanford deterministic system (Lee et al.,
2013) by 3.01%.
| Xuezhe Ma and Zhengzhong Liu and Eduard Hovy | null | 1603.04553 | null | null |
Structured and Efficient Variational Deep Learning with Matrix Gaussian
Posteriors | stat.ML cs.LG | We introduce a variational Bayesian neural network where the parameters are
governed via a probability distribution on random matrices. Specifically, we
employ a matrix variate Gaussian \cite{gupta1999matrix} parameter posterior
distribution where we explicitly model the covariance among the input and
output dimensions of each layer. Furthermore, with approximate covariance
matrices we can achieve a more efficient way to represent those correlations
that is also cheaper than fully factorized parameter posteriors. We further
show that with the "local reparametrization trick"
\cite{kingma2015variational} on this posterior distribution we arrive at a
Gaussian Process \cite{rasmussen2006gaussian} interpretation of the hidden
units in each layer and we, similarly to \cite{gal2015dropout}, provide
connections with deep Gaussian processes. We continue by taking advantage of
this duality and incorporate "pseudo-data" \cite{snelson2005sparse} in our
model, which in turn allows for more efficient sampling while maintaining the
properties of the original model. The validity of the proposed approach is
verified through extensive experiments.
| Christos Louizos and Max Welling | null | 1603.04733 | null | null |
Revisiting Batch Normalization For Practical Domain Adaptation | cs.CV cs.LG | Deep neural networks (DNN) have shown unprecedented success in various
computer vision applications such as image classification and object detection.
However, it is still a common annoyance during the training phase that one has
to prepare at least thousands of labeled images to fine-tune a network to a
specific domain. A recent study (Tommasi et al. 2015) shows that a DNN has a strong
dependency on the training dataset, and the learned features cannot be
easily transferred to a different but relevant task without fine-tuning. In
this paper, we propose a simple yet powerful remedy, called Adaptive Batch
Normalization (AdaBN) to increase the generalization ability of a DNN. By
modulating the statistics in all Batch Normalization layers across the network,
our approach achieves a deep adaptation effect for domain adaptation tasks. In
contrast to other deep learning domain adaptation methods, our method does not
require additional components, and is parameter-free. It achieves
state-of-the-art performance despite its surprising simplicity. Furthermore, we
demonstrate that our method is complementary with other existing methods.
Combining AdaBN with existing domain adaptation treatments may further improve
model performance.
| Yanghao Li, Naiyan Wang, Jianping Shi, Jiaying Liu, Xiaodi Hou | null | 1603.04779 | null | null |
Ensemble of Deep Convolutional Neural Networks for Learning to Detect
Retinal Vessels in Fundus Images | cs.LG cs.CV stat.ML | Vision impairment due to pathological damage of the retina can largely be
prevented through periodic screening using fundus color imaging. However, the
challenge with large-scale screening is the inability to exhaustively detect
fine blood vessels crucial to disease diagnosis. In this work we present a
computational imaging framework using deep and ensemble learning for reliable
detection of blood vessels in fundus color images. An ensemble of deep
convolutional neural networks is trained to segment vessel and non-vessel areas
of a color fundus image. During inference, the responses of the individual
ConvNets of the ensemble are averaged to form the final segmentation. In
experimental evaluation with the DRIVE database, we achieve the objective of
vessel detection with maximum average accuracy of 94.7\% and area under ROC
curve of 0.9283.
| Debapriya Maji, Anirban Santara, Pabitra Mitra and Debdoot Sheet | null | 1603.04833 | null | null |
Bias Correction for Regularized Regression and its Application in
Learning with Streaming Data | stat.ML cs.LG | We propose an approach to reduce the bias of ridge regression and
regularization kernel network. When applied to a single data set the new
algorithms have comparable learning performance with the original ones. When
applied to incremental learning with block-wise streaming data, the new
algorithms are more efficient due to bias reduction. Both theoretical
characterizations and simulation studies are used to verify the effectiveness
of these new algorithms.
| Qiang Wu | null | 1603.04882 | null | null |
Turing learning: a metric-free approach to inferring behavior and its
application to swarms | stat.ML cs.LG cs.NE | We propose Turing Learning, a novel system identification method for
inferring the behavior of natural or artificial systems. Turing Learning
simultaneously optimizes two populations of computer programs, one representing
models of the behavior of the system under investigation, and the other
representing classifiers. By observing the behavior of the system as well as
the behaviors produced by the models, two sets of data samples are obtained.
The classifiers are rewarded for discriminating between these two sets, that
is, for correctly categorizing data samples as either genuine or counterfeit.
Conversely, the models are rewarded for 'tricking' the classifiers into
categorizing their data samples as genuine. Unlike other methods for system
identification, Turing Learning does not require predefined metrics to quantify
the difference between the system and its models. We present two case studies
with swarms of simulated robots and prove that the underlying behaviors cannot
be inferred by a metric-based system identification method. By contrast, Turing
Learning infers the behaviors with high accuracy. It also produces a useful
by-product - the classifiers - that can be used to detect abnormal behavior in
the swarm. Moreover, we show that Turing Learning also successfully infers the
behavior of physical robot swarms. The results show that collective behaviors
can be directly inferred from motion trajectories of individuals in the swarm,
which may have significant implications for the study of animal collectives.
Furthermore, Turing Learning could prove useful whenever a behavior is not
easily characterizable using metrics, making it suitable for a wide range of
applications.
| Wei Li, Melvin Gauci and Roderich Gross | 10.1007/s11721-016-0126-1 | 1603.04904 | null | null |
Data Clustering and Graph Partitioning via Simulated Mixing | cs.LG stat.ML | Spectral clustering approaches have led to well-accepted algorithms for
finding accurate clusters in a given dataset. However, their application to
large-scale datasets has been hindered by computational complexity of
eigenvalue decompositions. Several algorithms have been proposed in the recent
past to accelerate spectral clustering, however they compromise on the accuracy
of the spectral clustering to achieve faster speed. In this paper, we propose a
novel spectral clustering algorithm based on a mixing process on a graph.
Unlike the existing spectral clustering algorithms, our algorithm does not
require computing eigenvectors. Specifically, it finds the equivalent of a
linear combination of eigenvectors of the normalized similarity matrix weighted
with corresponding eigenvalues. This linear combination is then used to
partition the dataset into meaningful clusters. Simulations on real datasets
show that partitioning datasets based on such linear combinations of
eigenvectors achieves better accuracy than standard spectral clustering methods
as the number of clusters increases. Our algorithm can easily be implemented in
a distributed setting.
| Shahzad Bhatti, Carolyn Beck, Angelia Nedic | null | 1603.04918 | null | null |
Deep Fully-Connected Networks for Video Compressive Sensing | cs.CV cs.LG cs.MM | In this work we present a deep learning framework for video compressive
sensing. The proposed formulation enables recovery of video frames in a few
seconds at significantly improved reconstruction quality compared to previous
approaches. Our investigation starts by learning a linear mapping between video
sequences and corresponding measured frames which turns out to provide
promising results. We then extend the linear formulation to deep
fully-connected networks and explore the performance gains using deeper
architectures. Our analysis is always driven by the applicability of the
proposed framework on existing compressive video architectures. Extensive
simulations on several video sequences document the superiority of our approach
both quantitatively and qualitatively. Finally, our analysis offers insights
into understanding how dataset sizes and number of layers affect reconstruction
performance while raising a few points for future investigation.
Code is available at Github: https://github.com/miliadis/DeepVideoCS
| Michael Iliadis, Leonidas Spinoulas, Aggelos K. Katsaggelos | null | 1603.04930 | null | null |
On the Complexity of One-class SVM for Multiple Instance Learning | cs.LG | In traditional multiple instance learning (MIL), both positive and negative
bags are required to learn a prediction function. However, a high human cost is
needed to know the label of each bag---positive or negative. Only positive bags
contain our focus (positive instances) while negative bags consist of noise or
background (negative instances). So we do not expect to spend too much to label
the negative bags. Contrary to our expectation, nearly all existing MIL methods
require enough negative bags besides positive ones. In this paper we propose an
algorithm called "Positive Multiple Instance" (PMI), which learns a classifier
given only a set of positive bags. So the annotation of negative bags becomes
unnecessary in our method. PMI is constructed based on the assumption that the
unknown positive instances in positive bags are similar to each other and
constitute one compact cluster in feature space, while the negative instances
lie outside this cluster. The experimental results demonstrate that PMI
achieves performance close to or slightly worse than that of the
traditional MIL algorithms on benchmark and real data sets. However, the number
of training bags in PMI is reduced significantly compared with traditional MIL
algorithms.
| Zhen Hu, Zhuyin Xue | null | 1603.04947 | null | null |
Online Optimization in Dynamic Environments: Improved Regret Rates for
Strongly Convex Problems | cs.LG math.OC | In this paper, we address tracking of a time-varying parameter with unknown
dynamics. We formalize the problem as an instance of online optimization in a
dynamic setting. Using online gradient descent, we propose a method that
sequentially predicts the value of the parameter and in turn suffers a loss.
The objective is to minimize the accumulation of losses over the time horizon,
a notion that is termed dynamic regret. While existing methods focus on convex
loss functions, we consider strongly convex functions so as to provide better
guarantees of performance. We derive a regret bound that captures the
path-length of the time-varying parameter, defined in terms of the distance
between its consecutive values. In other words, the bound represents the
natural connection of tracking quality to the rate of change of the parameter.
We provide numerical experiments to complement our theoretical findings.
| Aryan Mokhtari and Shahin Shahrampour and Ali Jadbabaie and Alejandro
Ribeiro | null | 1603.04954 | null | null |
An Approximate Dynamic Programming Approach to Adversarial Online
Learning | cs.GT cs.DS cs.LG stat.ML | We describe an approximate dynamic programming (ADP) approach to compute
approximations of the optimal strategies and of the minimal losses that can be
guaranteed in discounted repeated games with vector-valued losses. Such games
prominently arise in the analysis of regret in repeated decision-making in
adversarial environments, also known as adversarial online learning. At the
core of our approach is a characterization of the lower Pareto frontier of the
set of expected losses that a player can guarantee in these games as the unique
fixed point of a set-valued dynamic programming operator. When applied to the
problem of regret minimization with discounted losses, our approach yields
algorithms that achieve markedly improved performance bounds compared to
off-the-shelf online learning algorithms like Hedge. These results thus suggest
the significant potential of ADP-based approaches in adversarial online
learning.
| Vijay Kamble, Patrick Loiseau, Jean Walrand | null | 1603.04981 | null | null |
Scaled stochastic gradient descent for low-rank matrix completion | cs.LG math.OC | The paper looks at a scaled variant of the stochastic gradient descent
algorithm for the matrix completion problem. Specifically, we propose a novel
matrix-scaling of the partial derivatives that acts as an efficient
preconditioning for the standard stochastic gradient descent algorithm. This
proposed matrix-scaling provides a trade-off between local and global second
order information. It also resolves the issue of scale invariance that exists
in matrix factorization models. The overall computational complexity is linear
with the number of known entries, thereby extending to a large-scale setup.
Numerical comparisons show that the proposed algorithm competes favorably with
state-of-the-art algorithms on various different benchmarks.
| Bamdev Mishra and Rodolphe Sepulchre | null | 1603.04989 | null | null |
Identity Mappings in Deep Residual Networks | cs.CV cs.LG | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers
| Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | null | 1603.05027 | null | null |
One-Shot Generalization in Deep Generative Models | stat.ML cs.AI cs.LG | Humans have an impressive ability to reason about new concepts and
experiences from just a single example. In particular, humans have an ability
for one-shot generalization: an ability to encounter a new concept, understand
its structure, and then be able to generate compelling alternative variations
of the concept. We develop machine learning systems with this important
capacity by developing new deep generative models, models that combine the
representational power of deep learning with the inferential power of Bayesian
reasoning. We develop a class of sequential generative models that are built on
the principles of feedback and attention. These two characteristics lead to
generative models that are among the state-of-the art in density estimation and
image generation. We demonstrate the one-shot generalization ability of our
models using three tasks: unconditional sampling, generating new exemplars of a
given concept, and generating new exemplars of a family of concepts. In all
cases our models are able to generate compelling and diverse samples---having
seen new examples just once---providing an important class of general-purpose
models for one-shot machine learning.
| Danilo Jimenez Rezende, Shakir Mohamed, Ivo Danihelka, Karol Gregor,
Daan Wierstra | null | 1603.05106 | null | null |
Suppressing the Unusual: towards Robust CNNs using Symmetric Activation
Functions | cs.CV cs.AI cs.LG | Many deep Convolutional Neural Networks (CNN) make incorrect predictions on
adversarial samples obtained by imperceptible perturbations of clean samples.
We hypothesize that this is caused by a failure to suppress unusual signals
within network layers. As remedy we propose the use of Symmetric Activation
Functions (SAF) in non-linear signal transducer units. These units suppress
signals of exceptional magnitude. We prove that SAF networks can perform
classification tasks to arbitrary precision in a simplified situation. In
practice, rather than use SAFs alone, we add them into CNNs to improve their
robustness. The modified CNNs can be easily trained using popular strategies
with the moderate training load. Our experiments on MNIST and CIFAR-10 show
that the modified CNNs perform similarly to plain ones on clean samples, and
are remarkably more robust against adversarial and nonsense samples.
| Qiyang Zhao, Lewis D Griffin | null | 1603.05145 | null | null |
Feature Selection as a Multiagent Coordination Problem | cs.LG stat.ML | Datasets with hundreds to tens of thousands of features are the new norm. Feature
selection constitutes a central problem in machine learning, where the aim is
to derive a representative set of features from which to construct a
classification (or prediction) model for a specific task. Our experimental
study involves microarray gene expression datasets; these are high-dimensional
and noisy datasets that contain genetic data typically used for distinguishing
between benign or malicious tissues or classifying different types of cancer.
In this paper, we formulate feature selection as a multiagent coordination
problem and propose a novel feature selection method using multiagent
reinforcement learning. The central idea of the proposed approach is to
"assign" a reinforcement learning agent to each feature where each agent learns
to control a single feature; we refer to this approach as MARL. Applying this
to microarray datasets creates an enormous multiagent coordination problem
between thousands of learning agents. To address the scalability challenge we
apply a form of reward shaping called CLEAN rewards. We compare in total nine
feature selection methods, including state-of-the-art methods, and show that
the proposed method using CLEAN rewards can significantly scale-up, thus
outperforming the rest of learning-based methods. We further show that a hybrid
variant of MARL achieves the best overall performance.
| Kleanthis Malialis and Jun Wang and Gary Brooks and George Frangou | null | 1603.05152 | null | null |
Distributed Inexact Damped Newton Method: Data Partitioning and
Load-Balancing | cs.LG math.OC | In this paper we study an inexact damped Newton method implemented in a
distributed environment. We start from the original DiSCO algorithm
[Communication-Efficient Distributed Optimization of Self-Concordant Empirical
Loss, Yuchen Zhang and Lin Xiao, 2015]. We will show that this algorithm may
not scale well and propose algorithmic modifications that lead to less
communication, better load-balancing and more efficient computation. We
perform numerical experiments with a regularized empirical loss minimization
instance described by a 273GB dataset.
| Chenxin Ma and Martin Tak\'a\v{c} | null | 1603.05191 | null | null |
Understanding and Improving Convolutional Neural Networks via
Concatenated Rectified Linear Units | cs.LG cs.CV | Recently, convolutional neural networks (CNNs) have been used as a powerful
tool to solve many problems of machine learning and computer vision. In this
paper, we aim to provide insight on the property of convolutional neural
networks, as well as a generic method to improve the performance of many CNN
architectures. Specifically, we first examine existing CNN models and observe
an intriguing property that the filters in the lower layers form pairs (i.e.,
filters with opposite phase). Inspired by our observation, we propose a novel,
simple yet effective activation scheme called concatenated ReLU (CRelu) and
theoretically analyze its reconstruction property in CNNs. We integrate CRelu
into several state-of-the-art CNN architectures and demonstrate improvement in
their recognition performance on CIFAR-10/100 and ImageNet datasets with fewer
trainable parameters. Our results suggest that better understanding of the
properties of CNNs can lead to significant performance improvement with a
simple modification.
| Wenling Shang, Kihyuk Sohn, Diogo Almeida, Honglak Lee | null | 1603.05201 | null | null |
Fast moment estimation for generalized latent Dirichlet models | math.ST cs.LG stat.AP stat.ME stat.TH | We develop a generalized method of moments (GMM) approach for fast parameter
estimation in a new class of Dirichlet latent variable models with mixed data
types. Parameter estimation via GMM has been demonstrated to have computational
and statistical advantages over alternative methods, such as expectation
maximization, variational inference, and Markov chain Monte Carlo. The key
computational advantage of our method (MELD) is that parameter estimation
does not require instantiation of the latent variables. Moreover, a
representational advantage of the GMM approach is that the behavior of the
model is agnostic to distributional assumptions of the observations. We derive
population moment conditions after marginalizing out the sample-specific
Dirichlet latent variables. The moment conditions only depend on component mean
parameters. We illustrate the utility of our approach on simulated data,
comparing results from MELD to alternative methods, and we show the promise of
our approach through the application of MELD to several data sets.
| Shiwen Zhao and Barbara E. Engelhardt and Sayan Mukherjee and David B.
Dunson | null | 1603.05324 | null | null |
Cascading Bandits for Large-Scale Recommendation Problems | cs.LG stat.ML | Most recommender systems recommend a list of items. The user examines the
list, from the first item to the last, and often chooses the first attractive
item and does not examine the rest. This type of user behavior can be modeled
by the cascade model. In this work, we study cascading bandits, an online
learning variant of the cascade model where the goal is to recommend $K$ most
attractive items from a large set of $L$ candidate items. We propose two
algorithms for solving this problem, which are based on the idea of linear
generalization. The key idea in our solutions is that we learn a predictor of
the attraction probabilities of items from their features, as opposed to
learning the attraction probability of each item independently as in the
existing work. This results in practical learning algorithms whose regret does
not depend on the number of items $L$. We bound the regret of one algorithm and
comprehensively evaluate the other on a range of recommendation problems. The
algorithm performs well and outperforms all baselines.
| Shi Zong, Hao Ni, Kenny Sung, Nan Rosemary Ke, Zheng Wen, and
Branislav Kveton | null | 1603.05359 | null | null |
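The abstract above describes learning a linear predictor of item attraction probabilities and recommending the $K$ most attractive of $L$ items. The sketch below only illustrates that recommendation step with an optimistic (UCB-style) linear score; it is not the paper's algorithm, and all names (`recommend_top_k`, `alpha`, `sigma_inv`) are hypothetical.

```python
import numpy as np

def recommend_top_k(features, theta_hat, sigma_inv, alpha, k):
    """Score every candidate item by an optimistic (UCB-style) estimate of its
    attraction probability under a linear model, then return the k items with
    the highest scores.  `features` is an (L, d) matrix of item features."""
    means = features @ theta_hat
    widths = alpha * np.sqrt(np.einsum("ld,de,le->l", features, sigma_inv, features))
    return np.argsort(-(means + widths))[:k]

# Toy example with L = 5 items and d = 3 features.
rng = np.random.default_rng(0)
items = rng.random((5, 3))
print(recommend_top_k(items, theta_hat=np.ones(3), sigma_inv=np.eye(3), alpha=0.5, k=2))
```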
Online semi-parametric learning for inverse dynamics modeling | math.OC cs.LG stat.ML | This paper presents a semi-parametric algorithm for online learning of a
robot inverse dynamics model. It combines the strength of the parametric and
non-parametric modeling. The former exploits the rigid body dynamics
equation, while the latter exploits a suitable kernel function. We provide an
extensive comparison with other methods from the literature using real data
from the iCub humanoid robot. In doing so we also compare two different
techniques, namely cross validation and marginal likelihood optimization, for
estimating the hyperparameters of the kernel function.
| Diego Romeres and Mattia Zorzi and Raffaello Camoriano and Alessandro
Chiuso | null | 1603.05412 | null | null |
Accelerating Deep Neural Network Training with Inconsistent Stochastic
Gradient Descent | cs.LG cs.DC | SGD is the widely adopted method for training CNNs. Conceptually, it approximates
the population with a randomly sampled batch; then it evenly trains batches by
conducting a gradient update on every batch in an epoch. In this paper, we
demonstrate Sampling Bias, Intrinsic Image Difference and Fixed Cycle Pseudo
Random Sampling differentiate batches in training, which then affect learning
speeds on them. Because of this, the unbiased treatment of batches involved in
SGD creates improper load balancing. To address this issue, we present
Inconsistent Stochastic Gradient Descent (ISGD) to dynamically vary training
effort according to learning statuses on batches. Specifically ISGD leverages
techniques in Statistical Process Control to identify an undertrained batch.
Once a batch is undertrained, ISGD solves a new subproblem, a chasing logic
plus a conservative constraint, to accelerate the training on the batch while
avoiding drastic parameter changes. Extensive experiments on a variety of datasets
demonstrate ISGD converges faster than SGD. In training AlexNet, ISGD is
21.05\% faster than SGD to reach 56\% top1 accuracy under exactly the same
experimental setup. We also extend ISGD to work on multiGPU or heterogeneous
distributed system based on data parallelism, enabling the batch size to be the
key to scalability. Then we study the relationship of ISGD batch size to the
learning rate, parallelism, synchronization cost, system saturation and
scalability. We conclude the optimal ISGD batch size is machine dependent.
Various experiments on a multiGPU system validate our claim. In particular,
ISGD trains AlexNet to 56.3% top1 and 80.1% top5 accuracy in 11.5 hours with 4
NVIDIA TITAN X at the batch size of 1536.
| Linnan Wang, Yi Yang, Martin Renqiang Min, Srimat Chakradhar | null | 1603.05544 | null | null |
Reliable Prediction Intervals for Local Linear Regression | stat.ME cs.LG | This paper introduces two methods for estimating reliable prediction
intervals for local linear least-squares regressions, named Bounded Oscillation
Prediction Intervals (BOPI). It also proposes a new measure for comparing
interval prediction models named Equivalent Gaussian Standard Deviation (EGSD).
The experimental results compare BOPI to other methods using coverage
probability, Mean Interval Size and the introduced EGSD measure. The results
were generally in favor of BOPI on the considered benchmark regression
datasets. It also reports simulation studies validating the BOPI method's
reliability.
| Mohammad Ghasemi Hamed and Masoud Ebadi Kivaj | null | 1603.05587 | null | null |
Streaming Algorithms for News and Scientific Literature Recommendation:
Submodular Maximization with a d-Knapsack Constraint | cs.LG cs.DS | Submodular maximization problems belong to the family of combinatorial
optimization problems and enjoy wide applications. In this paper, we focus on
the problem of maximizing a monotone submodular function subject to a
$d$-knapsack constraint, for which we propose a streaming algorithm that
achieves a $\left(\frac{1}{1+2d}-\epsilon\right)$-approximation of the optimal
value, while it only needs one single pass through the dataset without storing
all the data in the memory. In our experiments, we extensively evaluate the
effectiveness of our proposed algorithm via two applications: news
recommendation and scientific literature recommendation. It is observed that
the proposed streaming algorithm achieves both execution speedup and memory
saving by several orders of magnitude, compared with existing approaches.
| Qilian Yu, Easton Li Xu, Shuguang Cui | null | 1603.05614 | null | null |
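To make the single-pass idea in the preceding abstract concrete, here is a heavily simplified density-threshold sketch for picking elements under a d-knapsack constraint. The paper's actual algorithm (with its $(\frac{1}{1+2d}-\epsilon)$ guarantee) additionally maintains guesses of the optimal value; everything here, including the `threshold` parameter, is an illustrative assumption.

```python
def stream_select(stream, marginal_gain, costs, budgets, threshold):
    """Single-pass selection: accept an element only if (i) adding it violates
    none of the d budgets and (ii) its marginal gain per unit of largest cost
    exceeds `threshold`.  `costs(e)` returns a length-d cost vector."""
    selected, used = [], [0.0] * len(budgets)
    for e in stream:
        c = costs(e)
        if any(u + ci > b for u, ci, b in zip(used, c, budgets)):
            continue  # would exceed some knapsack budget
        if marginal_gain(selected, e) >= threshold * max(c):
            selected.append(e)
            used = [u + ci for u, ci in zip(used, c)]
    return selected
```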
Discriminative Embeddings of Latent Variable Models for Structured Data | cs.LG | Kernel classifiers and regressors designed for structured data, such as
sequences, trees and graphs, have significantly advanced a number of
interdisciplinary areas such as computational biology and drug design.
Typically, kernels are designed beforehand for a data type which either exploit
statistics of the structures or make use of probabilistic generative models,
and then a discriminative classifier is learned based on the kernels via convex
optimization. However, this elegant two-stage approach has also limited kernel
methods from scaling up to millions of data points and from exploiting
discriminative information to learn feature representations.
We propose structure2vec, an effective and scalable approach for structured
data representation based on the idea of embedding latent variable models into
feature spaces, and learning such feature spaces using discriminative
information. Interestingly, structure2vec extracts features by performing a
sequence of function mappings in a way similar to graphical model inference
procedures, such as mean field and belief propagation. In applications
involving millions of data points, we show that structure2vec runs 2 times
faster and produces models that are $10,000$ times smaller, while at the same
time achieving state-of-the-art predictive performance.
| Hanjun Dai, Bo Dai, Le Song | null | 1603.05629 | null | null |
Optimal Black-Box Reductions Between Optimization Objectives | math.OC cs.DS cs.LG stat.ML | The diverse world of machine learning applications has given rise to a
plethora of algorithms and optimization methods, finely tuned to the specific
regression or classification task at hand. We reduce the complexity of
algorithm design for machine learning by reductions: we develop reductions that
take a method developed for one setting and apply it to the entire spectrum of
smoothness and strong-convexity in applications.
Furthermore, unlike existing results, our new reductions are OPTIMAL and more
PRACTICAL. We show how these new reductions give rise to new and faster running
times on training linear classifiers for various families of loss functions,
and conclude with experiments showing their successes also in practice.
| Zeyuan Allen-Zhu, Elad Hazan | null | 1603.05642 | null | null |
Variance Reduction for Faster Non-Convex Optimization | math.OC cs.DS cs.LG cs.NE stat.ML | We consider the fundamental problem in non-convex optimization of efficiently
reaching a stationary point. In contrast to the convex case, in the long
history of this basic problem, the only known theoretical results on
first-order non-convex optimization remain to be full gradient descent that
converges in $O(1/\varepsilon)$ iterations for smooth objectives, and
stochastic gradient descent that converges in $O(1/\varepsilon^2)$ iterations
for objectives that are sum of smooth functions.
We provide the first improvement in this line of research. Our result is
based on the variance reduction trick recently introduced to convex
optimization, as well as a brand new analysis of variance reduction that is
suitable for non-convex optimization. For objectives that are sum of smooth
functions, our first-order minibatch stochastic method converges with an
$O(1/\varepsilon)$ rate, and is faster than full gradient descent by
$\Omega(n^{1/3})$.
We demonstrate the effectiveness of our methods on empirical risk
minimizations with non-convex loss functions and training neural nets.
| Zeyuan Allen-Zhu, Elad Hazan | null | 1603.05643 | null | null |
Predicting health inspection results from online restaurant reviews | cs.CL cs.LG | Informatics around public health is increasingly shifting from the
professional to the public spheres. In this work, we apply linguistic analytics
to restaurant reviews, from Yelp, in order to automatically predict official
health inspection reports. We consider two types of feature sets, i.e., keyword
detection and topic model features, and use these in several classification
methods. Our empirical analysis shows that these extracted features can predict
public health inspection reports with over 90% accuracy using simple support
vector machines.
| Samantha Wong and Hamidreza Chinaei and Frank Rudzicz | null | 1603.05673 | null | null |
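The abstract above reports that simple keyword features fed to a support vector machine reach over 90% accuracy. A hypothetical scikit-learn pipeline in that spirit is sketched below; the toy reviews, labels and label encoding are placeholders, not data from the study.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder data; loading real Yelp reviews and inspection outcomes is assumed elsewhere.
reviews = ["great food and a spotless kitchen", "saw a roach near the counter"]
labels = [1, 0]  # 1 = passed inspection, 0 = failed (hypothetical encoding)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(reviews, labels)
print(model.predict(["clean tables and friendly staff"]))
```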
Do Deep Convolutional Nets Really Need to be Deep and Convolutional? | stat.ML cs.LG | Yes, they do. This paper provides the first empirical demonstration that deep
convolutional models really need to be both deep and convolutional, even when
trained with methods such as distillation that allow small or shallow models of
high accuracy to be trained. Although previous research showed that shallow
feed-forward nets sometimes can learn the complex functions previously learned
by deep nets while using the same number of parameters as the deep models they
mimic, in this paper we demonstrate that the same methods cannot be used to
train accurate models on CIFAR-10 unless the student models contain multiple
layers of convolution. Although the student models do not have to be as deep as
the teacher model they mimic, the students need multiple convolutional layers
to learn functions of comparable accuracy as the deep convolutional teacher.
| Gregor Urban, Krzysztof J. Geras, Samira Ebrahimi Kahou, Ozlem Aslan,
Shengjie Wang, Rich Caruana, Abdelrahman Mohamed, Matthai Philipose and Matt
Richardson | null | 1603.05691 | null | null |
A Comparison between Deep Neural Nets and Kernel Acoustic Models for
Speech Recognition | cs.LG stat.ML | We study large-scale kernel methods for acoustic modeling and compare to DNNs
on performance metrics related to both acoustic modeling and recognition.
Measuring perplexity and frame-level classification accuracy, kernel-based
acoustic models are as effective as their DNN counterparts. However, in terms
of token error rates, DNN models can be significantly better. We have discovered
that this might be attributed to DNN's unique strength in reducing both the
perplexity and the entropy of the predicted posterior probabilities. Motivated
by our findings, we propose a new technique, entropy regularized perplexity,
for model selection. This technique can noticeably improve the recognition
performance of both types of models and reduce the gap between them. While
effective on Broadcast News, this technique could also be applicable to other
tasks.
| Zhiyun Lu, Dong Guo, Alireza Bagheri Garakani, Kuan Liu, Avner May,
Aurelien Bellet, Linxi Fan, Michael Collins, Brian Kingsbury, Michael
Picheny, Fei Sha | null | 1603.05800 | null | null |
Comparing Time and Frequency Domain for Audio Event Recognition Using
Deep Learning | cs.NE cs.LG cs.SD | Recognizing acoustic events is an intricate problem for a machine and an
emerging field of research. Deep neural networks achieve convincing results and
are currently the state-of-the-art approach for many tasks. One advantage is
their implicit feature learning, as opposed to an explicit feature extraction of
the input signal. In this work, we analyzed whether more discriminative
features can be learned from either the time-domain or the frequency-domain
representation of the audio signal. For this purpose, we trained multiple deep
networks with different architectures on the Freiburg-106 and ESC-10 datasets.
Our results show that feature learning from the frequency domain is superior to
the time domain. Moreover, additionally using convolution and pooling layers,
to explore local structures of the audio signal, significantly improves the
recognition performance and achieves state-of-the-art results.
| Lars Hertel, Huy Phan, Alfred Mertins | null | 1603.05824 | null | null |
N-ary Error Correcting Coding Scheme | cs.LG | The coding matrix design plays a fundamental role in the prediction
performance of the error correcting output codes (ECOC)-based multi-class task.
In many-class classification problems, e.g., fine-grained categorization, it
is difficult to distinguish subtle between-class differences under existing
coding schemes due to the limited choice of coding values. In this paper, we
investigate whether one can relax existing binary and ternary code designs to
$N$-ary code design to achieve better classification performance. In
particular, we present a novel $N$-ary coding scheme that decomposes the
original multi-class problem into simpler multi-class subproblems, which is
similar to applying a divide-and-conquer method. The two main advantages of
such a coding scheme are as follows: (i) the ability to construct more
discriminative codes and (ii) the flexibility for the user to select the best
$N$ for ECOC-based classification. We show empirically that the optimal $N$
(based on classification performance) lies in $[3, 10]$ with some trade-off in
computational cost. Moreover, we provide theoretical insights on the dependency
of the generalization error bound of an $N$-ary ECOC on the average base
classifier generalization error and the minimum distance between any two codes
constructed. Extensive experimental results on benchmark multi-class datasets
show that the proposed coding scheme achieves superior prediction performance
over the state-of-the-art coding methods.
| Joey Tianyi Zhou, Ivor W. Tsang, Shen-Shyang Ho and Klaus-Robert
Muller | null | 1603.05850 | null | null |
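The abstract above contrasts binary and ternary ECOC with an $N$-ary code. The sketch below shows the two ingredients any such scheme needs, a coding matrix and a minimum-distance decoder, under our own assumption of a uniformly random $N$-ary matrix; the paper's construction may differ.

```python
import numpy as np

def random_nary_code_matrix(num_classes, code_length, n, seed=None):
    """Each column maps the original classes onto n meta-classes, defining one
    multi-class subproblem per column (a uniformly random construction)."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, n, size=(num_classes, code_length))

def decode(predicted_codeword, code_matrix):
    """Assign the class whose codeword is closest in Hamming distance to the
    vector of base-classifier predictions."""
    dists = (code_matrix != np.asarray(predicted_codeword)).sum(axis=1)
    return int(np.argmin(dists))

M = random_nary_code_matrix(num_classes=10, code_length=15, n=4, seed=0)
print(decode(M[3], M))  # a perfectly predicted codeword decodes to class 3
```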
Distributed Iterative Learning Control for a Team of Quadrotors | cs.RO cs.LG cs.MA | The goal of this work is to enable a team of quadrotors to learn how to
accurately track a desired trajectory while holding a given formation. We solve
this problem in a distributed manner, where each vehicle has only access to the
information of its neighbors. The desired trajectory is only available to one
(or few) vehicles. We present a distributed iterative learning control (ILC)
approach where each vehicle learns from the experience of its own and its
neighbors' previous task repetitions, and adapts its feedforward input to
improve performance. Existing algorithms are extended in theory to make them
more applicable to real-world experiments. In particular, we prove stability
for any causal learning function with gains chosen according to a simple scalar
condition. Previous proofs were restricted to a specific learning function that
only depends on the tracking error derivative (D-type ILC). Our extension
provides more degrees of freedom in the ILC design and, as a result, better
performance can be achieved. We also show that stability is not affected by a
linear dynamic coupling between neighbors. This allows us to use an additional
consensus feedback controller to compensate for non-repetitive disturbances.
Experiments with two quadrotors attest to the effectiveness of the proposed
distributed multi-agent ILC approach. This is the first work to show
distributed ILC in experiment.
| Andreas Hock, Angela P. Schoellig | null | 1603.05933 | null | null |
Katyusha: The First Direct Acceleration of Stochastic Gradient Methods | math.OC cs.DS cs.LG stat.ML | Nesterov's momentum trick is famously known for accelerating gradient
descent, and has been proven useful in building fast iterative algorithms.
However, in the stochastic setting, counterexamples exist and prevent
Nesterov's momentum from providing similar acceleration, even if the underlying
problem is convex and finite-sum.
We introduce $\mathtt{Katyusha}$, a direct, primal-only stochastic gradient
method to fix this issue. In convex finite-sum stochastic optimization,
$\mathtt{Katyusha}$ has an optimal accelerated convergence rate, and enjoys an
optimal parallel linear speedup in the mini-batch setting.
The main ingredient is $\textit{Katyusha momentum}$, a novel "negative
momentum" on top of Nesterov's momentum. It can be incorporated into a
variance-reduction based algorithm and speed it up, both in terms of
$\textit{sequential and parallel}$ performance. Since variance reduction has
been successfully applied to a growing list of practical problems, our paper
suggests that in each of such cases, one could potentially try to give Katyusha
a hug.
| Zeyuan Allen-Zhu | null | 1603.05953 | null | null |
Document Neural Autoregressive Distribution Estimation | cs.LG cs.CL | We present an approach based on feed-forward neural networks for learning the
distribution of textual documents. This approach is inspired by the Neural
Autoregressive Distribution Estimator (NADE) model, which has been shown to be a
good estimator of the distribution of discrete-valued high-dimensional vectors.
In this paper, we present how NADE can successfully be adapted to the case of
textual data, retaining from NADE the property that sampling or computing the
probability of observations can be done exactly and efficiently. The approach
can also be used to learn deep representations of documents that are
competitive to those learned by the alternative topic modeling approaches.
Finally, we describe how the approach can be combined with a regular neural
network N-gram model and substantially improve its performance, by making its
learned representation sensitive to the larger, document-specific context.
| Stanislas Lauly, Yin Zheng, Alexandre Allauzen, Hugo Larochelle | null | 1603.05962 | null | null |
L0-norm Sparse Graph-regularized SVD for Biclustering | cs.LG stat.ML | Learning the "blocking" structure is a central challenge for high dimensional
data (e.g., gene expression data). Recently, a sparse singular value
decomposition (SVD) has been used as a biclustering tool to achieve this goal.
However, this model ignores the structural information between variables (e.g.,
gene interaction graph). Although typical graph-regularized norm can
incorporate such prior graph information to get accurate discovery and better
interpretability, it fails to consider the opposite effect of variables with
different signs. Motivated by the development of sparse coding and
graph-regularized norm, we propose a novel sparse graph-regularized SVD as a
powerful biclustering tool for analyzing high-dimensional data. The key of this
method is to impose two penalties including a novel graph-regularized norm
($|\pmb{u}|\pmb{L}|\pmb{u}|$) and $L_0$-norm ($\|\pmb{u}\|_0$) on singular
vectors to induce structural sparsity and enhance interpretability. We design
an efficient Alternating Iterative Sparse Projection (AISP) algorithm to solve
it. Finally, we apply our method and related ones to simulated and real data to
show its efficiency in capturing natural blocking structures.
| Wenwen Min, Juan Liu, Shihua Zhang | null | 1603.06035 | null | null |
Tensor Methods and Recommender Systems | cs.LG cs.IR stat.ML | Substantial progress in the development of new and efficient tensor
factorization techniques has led to an extensive research of their
applicability in recommender systems field. Tensor-based recommender models
push the boundaries of traditional collaborative filtering techniques by taking
into account the multifaceted nature of real environments, which allows them to
produce more accurate, situational (e.g. context-aware, criteria-driven)
recommendations. Despite the promising results, tensor-based methods are poorly
covered in existing recommender systems surveys. This survey aims to complement
previous works and provide a comprehensive overview on the subject. To the best
of our knowledge, this is the first attempt to consolidate studies from various
application domains in an easily readable, digestible format, which helps to
get a notion of the current state of the field. We also provide a high level
discussion of the future perspectives and directions for further improvement of
tensor-based recommendation systems.
| Evgeny Frolov and Ivan Oseledets | null | 1603.06038 | null | null |
Globally Normalized Transition-Based Neural Networks | cs.CL cs.LG cs.NE | We introduce a globally normalized transition-based neural network model that
achieves state-of-the-art part-of-speech tagging, dependency parsing and
sentence compression results. Our model is a simple feed-forward neural network
that operates on a task-specific transition system, yet achieves comparable or
better accuracies than recurrent models. We discuss the importance of global as
opposed to local normalization: a key insight is that the label bias problem
implies that globally normalized models can be strictly more expressive than
locally normalized models.
| Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro
Presta, Kuzman Ganchev, Slav Petrov and Michael Collins | null | 1603.06042 | null | null |
Fast DPP Sampling for Nystr\"om with Application to Kernel Methods | cs.LG | The Nystr\"om method has long been popular for scaling up kernel methods. Its
theoretical guarantees and empirical performance rely critically on the quality
of the landmarks selected. We study landmark selection for Nystr\"om using
Determinantal Point Processes (DPPs), discrete probability models that allow
tractable generation of diverse samples. We prove that landmarks selected via
DPPs guarantee bounds on approximation errors; subsequently, we analyze
implications for kernel ridge regression. Contrary to prior reservations due to
the cubic complexity of DPP sampling, we show that (under certain conditions) Markov
chain DPP sampling requires only linear time in the size of the data. We
present several empirical results that support our theoretical analysis, and
demonstrate the superior performance of DPP-based landmark selection compared
with existing approaches.
| Chengtao Li, Stefanie Jegelka and Suvrit Sra | null | 1603.06052 | null | null |
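The paper above is about how to pick the landmarks; for context, the sketch below only shows the standard Nyström reconstruction once a set of landmark indices has been chosen (e.g., by a DPP sampler, which is not implemented here).

```python
import numpy as np

def nystrom_approximation(K, landmarks):
    """Standard Nystrom reconstruction K ~ C W^+ C^T, where C = K[:, landmarks]
    and W is the landmark-landmark block of the kernel matrix."""
    C = K[:, landmarks]
    W = K[np.ix_(landmarks, landmarks)]
    return C @ np.linalg.pinv(W) @ C.T
```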
DASA: Domain Adaptation in Stacked Autoencoders using Systematic Dropout | cs.CV cs.LG | Domain adaptation deals with adapting behaviour of machine learning based
systems trained using samples in source domain to their deployment in target
domain where the statistics of samples in both domains are dissimilar. The task
of directly training or adapting a learner in the target domain is challenged
by the lack of abundant labeled samples. In this paper, we propose a technique for
domain adaptation in stacked autoencoder (SAE) based deep neural networks (DNN)
performed in two stages: (i) unsupervised weight adaptation using systematic
dropouts in mini-batch training, (ii) supervised fine-tuning with limited
number of labeled samples in target domain. We experimentally evaluate
performance in the problem of retinal vessel segmentation where the SAE-DNN is
trained using large number of labeled samples in the source domain (DRIVE
dataset) and adapted using less number of labeled samples in target domain
(STARE dataset). The $logloss$ of the SAE-DNN in the source domain is $0.19$;
in the target domain it is $0.40$ without adaptation, $0.18$ with adaptation,
and $0.39$ when trained exclusively with the limited samples in the target domain.
The corresponding areas under the ROC curve are $0.90$, $0.86$, $0.92$ and $0.87$. The
high efficiency of vessel segmentation with DASA strongly substantiates our
claim.
| Abhijit Guha Roy and Debdoot Sheet | null | 1603.06060 | null | null |
Deep Shading: Convolutional Neural Networks for Screen-Space Shading | cs.GR cs.LG | In computer vision, convolutional neural networks (CNNs) have recently
achieved new levels of performance for several inverse problems where RGB pixel
appearance is mapped to attributes such as positions, normals or reflectance.
In computer graphics, screen-space shading has recently increased the visual
quality in interactive image synthesis, where per-pixel attributes such as
positions, normals or reflectance of a virtual 3D scene are converted into RGB
pixel appearance, enabling effects like ambient occlusion, indirect light,
scattering, depth-of-field, motion blur, or anti-aliasing. In this paper we
consider the diagonal problem: synthesizing appearance from given per-pixel
attributes using a CNN. The resulting Deep Shading simulates various
screen-space effects at competitive quality and speed while not being
programmed by human experts but learned from example images.
| Oliver Nalbach, Elena Arabadzhiyska, Dushyant Mehta, Hans-Peter
Seidel, Tobias Ritschel | 10.1111/cgf.13225 | 1603.06078 | null | null |
How Transferable are Neural Networks in NLP Applications? | cs.CL cs.LG cs.NE | Transfer learning is aimed to make use of valuable knowledge in a source
domain to help model performance in a target domain. It is particularly
important to neural networks, which are prone to overfitting. In some
fields like image processing, many studies have shown the effectiveness of
neural network-based transfer learning. For neural NLP, however, existing
studies have only casually applied transfer learning, and conclusions are
inconsistent. In this paper, we conduct systematic case studies and provide an
illuminating picture on the transferability of neural networks in NLP.
| Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, Zhi Jin | null | 1603.06111 | null | null |
Sentence Pair Scoring: Towards Unified Framework for Text Comprehension | cs.CL cs.AI cs.LG cs.NE | We review the task of Sentence Pair Scoring, popular in the literature in
various forms - viewed as Answer Sentence Selection, Semantic Text Scoring,
Next Utterance Ranking, Recognizing Textual Entailment, Paraphrasing or e.g. a
component of Memory Networks.
We argue that all such tasks are similar from the model perspective and
propose new baselines by comparing the performance of common IR metrics and
popular convolutional, recurrent and attention-based neural models across many
Sentence Pair Scoring tasks and datasets. We discuss the problem of evaluating
randomized models, propose a statistically grounded methodology, and attempt to
improve comparisons by releasing new datasets that are much harder than some of
the currently used well explored benchmarks. We introduce a unified open source
software framework with easily pluggable models and tasks, which enables us to
experiment with multi-task reusability of trained sentence models. We set a new
state of the art in performance on the Ubuntu Dialogue dataset.
| Petr Baudi\v{s}, Jan Pichl, Tom\'a\v{s} Vysko\v{c}il, Jan \v{S}ediv\'y | null | 1603.06127 | null | null |
Automated Correction for Syntax Errors in Programming Assignments using
Recurrent Neural Networks | cs.PL cs.AI cs.LG cs.SE | We present a method for automatically generating repair feedback for syntax
errors for introductory programming problems. Syntax errors constitute one of
the largest classes of errors (34%) in our dataset of student submissions
obtained from a MOOC course on edX. The previous techniques for generating
automated feedback on programming assignments have focused on functional
correctness and style considerations of student programs. These techniques
analyze the AST of the program and then perform some dynamic and
symbolic analyses to compute repair feedback. Unfortunately, it is not possible
to generate ASTs for student programs with syntax errors and therefore the
previous feedback techniques are not applicable in repairing syntax errors.
We present a technique for providing feedback on syntax errors that uses
Recurrent neural networks (RNNs) to model syntactically valid token sequences.
Our approach is inspired from the recent work on learning language models from
Big Code (large code corpus). For a given programming assignment, we first
learn an RNN to model all valid token sequences using the set of syntactically
correct student submissions. Then, for a student submission with syntax errors,
we query the learnt RNN model with the prefix token sequence to predict token
sequences that can fix the error by either replacing or inserting the predicted
token sequence at the error location. We evaluate our technique on over 14,000
student submissions with syntax errors. Our technique can completely repair
31.69% (4501/14203) of submissions with syntax errors and in addition partially
correct 6.39% (908/14203) of the submissions.
| Sahil Bhatia and Rishabh Singh | null | 1603.06129 | null | null |
A Character-Level Decoder without Explicit Segmentation for Neural
Machine Translation | cs.CL cs.LG | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru.
| Junyoung Chung, Kyunghyun Cho and Yoshua Bengio | null | 1603.06147 | null | null |
Fast Incremental Method for Nonconvex Optimization | math.OC cs.LG stat.ML | We analyze a fast incremental aggregated gradient method for optimizing
nonconvex problems of the form $\min_x \sum_i f_i(x)$. Specifically, we analyze
the SAGA algorithm within an Incremental First-order Oracle framework, and show
that it converges to a stationary point provably faster than both gradient
descent and stochastic gradient descent. We also discuss Polyak's special
class of nonconvex problems for which SAGA converges at a linear rate to the
global optimum. Finally, we analyze the practically valuable regularized and
minibatch variants of SAGA. To our knowledge, this paper presents the first
analysis of fast convergence for an incremental aggregated gradient method for
nonconvex problems.
| Sashank J. Reddi, Suvrit Sra, Barnabas Poczos, Alex Smola | null | 1603.06159 | null | null |
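As a reminder of the incremental aggregated gradient method analyzed above, here is a minimal single-step SAGA sketch. The interface (`grad_i(i, x)` returning the gradient of component $i$) and the step-size handling are assumptions made for illustration, not the paper's regularized or minibatch variants.

```python
import numpy as np

def saga_step(grad_i, x, table, step, rng):
    """One SAGA step: pick a random component i, form the corrected gradient
    grad_i(x) - stored_i + mean(stored), then refresh the stored gradient."""
    i = rng.integers(table.shape[0])
    g_new = grad_i(i, x)
    g = g_new - table[i] + table.mean(axis=0)
    table[i] = g_new
    return x - step * g
```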
Stochastic Variance Reduction for Nonconvex Optimization | math.OC cs.LG cs.NE stat.ML | We study nonconvex finite-sum problems and analyze stochastic variance
reduced gradient (SVRG) methods for them. SVRG and related methods have
recently surged into prominence for convex optimization given their edge over
stochastic gradient descent (SGD); but their theoretical analysis almost
exclusively assumes convexity. In contrast, we prove non-asymptotic rates of
convergence (to stationary points) of SVRG for nonconvex optimization, and show
that it is provably faster than SGD and gradient descent. We also analyze a
subclass of nonconvex problems on which SVRG attains linear convergence to the
global optimum. We extend our analysis to mini-batch variants of SVRG, showing
(theoretical) linear speedup due to mini-batching in parallel settings.
| Sashank J. Reddi, Ahmed Hefny, Suvrit Sra, Barnabas Poczos, Alex Smola | null | 1603.06160 | null | null |
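The SVRG estimator analyzed in the preceding abstract can be written in a few lines. The sketch below shows one outer epoch; the step size, inner-loop length and the `grad_i` interface are illustrative assumptions rather than the paper's parameter choices.

```python
import numpy as np

def svrg_epoch(grad_i, x, n, step, inner_iters, rng):
    """One SVRG epoch: compute the full gradient at a snapshot once, then run
    inner steps with the variance-reduced estimator
    g = grad_i(x) - grad_i(snapshot) + full_grad."""
    snapshot = x.copy()
    full_grad = sum(grad_i(i, snapshot) for i in range(n)) / n
    for _ in range(inner_iters):
        i = rng.integers(n)
        g = grad_i(i, x) - grad_i(i, snapshot) + full_grad
        x = x - step * g
    return x
```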
Joint Stochastic Approximation learning of Helmholtz Machines | cs.LG stat.ML | Despite progress, model learning and posterior inference still
remain a common challenge for using deep generative models, especially for
handling discrete hidden variables. This paper is mainly concerned with
algorithms for learning Helmholtz machines, which are characterized by pairing
the generative model with an auxiliary inference model. A common drawback of
previous learning algorithms is that they indirectly optimize some bounds of
the targeted marginal log-likelihood. In contrast, we successfully develop a
new class of algorithms, based on stochastic approximation (SA) theory of the
Robbins-Monro type, to directly optimize the marginal log-likelihood and
simultaneously minimize the inclusive KL-divergence. The resulting learning
algorithm is thus called joint SA (JSA). Moreover, we construct an effective
MCMC operator for JSA. Our results on the MNIST datasets demonstrate that the
JSA's performance is consistently superior to that of competing algorithms like
RWS, for learning a range of difficult models.
| Haotian Xu, Zhijian Ou | null | 1603.06170 | null | null |
Evaluation of a Tree-based Pipeline Optimization Tool for Automating
Data Science | cs.NE cs.AI cs.LG | As the field of data science continues to grow, there will be an
ever-increasing demand for tools that make machine learning accessible to
non-experts. In this paper, we introduce the concept of tree-based pipeline
optimization for automating one of the most tedious parts of machine
learning---pipeline design. We implement an open source Tree-based Pipeline
Optimization Tool (TPOT) in Python and demonstrate its effectiveness on a
series of simulated and real-world benchmark data sets. In particular, we show
that TPOT can design machine learning pipelines that provide a significant
improvement over a basic machine learning analysis while requiring little to no
input or prior knowledge from the user. We also address the tendency for TPOT
to design overly complex pipelines by integrating Pareto optimization, which
produces compact pipelines without sacrificing classification accuracy. As
such, this work represents an important step toward fully automating machine
learning pipeline design.
| Randal S. Olson, Nathan Bartley, Ryan J. Urbanowicz, Jason H. Moore | null | 1603.06212 | null | null |
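TPOT, described above, is distributed as an open-source Python package. A minimal usage sketch follows; the exact constructor arguments may vary across TPOT versions, and the digits dataset is only a stand-in for the benchmarks used in the paper.

```python
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tpot = TPOTClassifier(generations=5, population_size=20, random_state=0)
tpot.fit(X_tr, y_tr)                 # evolves a full scikit-learn pipeline
print(tpot.score(X_te, y_te))
tpot.export("best_pipeline.py")      # writes the winning pipeline as Python code
```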
Flow of Information in Feed-Forward Deep Neural Networks | cs.IT cs.LG math.IT | Feed-forward deep neural networks have been used extensively in various
machine learning applications. Developing a precise understanding of the
underlying behavior of neural networks is crucial for their efficient
deployment. In this paper, we use an information theoretic approach to study
the flow of information in a neural network and to determine how entropy of
information changes between consecutive layers. Moreover, using the Information
Bottleneck principle, we develop a constrained optimization problem that can be
used in the training process of a deep neural network. Furthermore, we
determine a lower bound for the level of data representation that can be
achieved in a deep neural network with an acceptable level of distortion.
| Pejman Khadivi, Ravi Tandon, Naren Ramakrishnan | null | 1603.06220 | null | null |
Collaborative prediction with expert advice | cs.LG | Many practical learning systems aggregate data across many users, while
learning theory traditionally considers a single learner who trusts all of
their observations. A case in point is the foundational learning problem of
prediction with expert advice. To date, there has been no theoretical study of
the general collaborative version of prediction with expert advice, in which
many users face a similar problem and would like to share their experiences in
order to learn faster. A key issue in this collaborative framework is
robustness: generally algorithms that aggregate data are vulnerable to
manipulation by even a small number of dishonest users.
We exhibit the first robust collaborative algorithm for prediction with
expert advice. When all users are honest and have similar tastes our algorithm
matches the performance of pooling data and using a traditional algorithm. But
our algorithm also guarantees that adding users never significantly degrades
performance, even if the additional users behave adversarially. We achieve
strong guarantees even when the overwhelming majority of users behave
adversarially. As a special case, our algorithm is extremely robust to
variation amongst the users.
| Paul Christiano | null | 1603.06265 | null | null |
Multi-Task Cross-Lingual Sequence Tagging from Scratch | cs.CL cs.LG | We present a deep hierarchical recurrent neural network for sequence tagging.
Given a sequence of words, our model employs deep gated recurrent units on both
character and word levels to encode morphology and context information, and
applies a conditional random field layer to predict the tags. Our model is task
independent, language independent, and feature engineering free. We further
extend our model to multi-task and cross-lingual joint training by sharing the
architecture and parameters. Our model achieves state-of-the-art results in
multiple languages on several benchmark tasks including POS tagging, chunking,
and NER. We also demonstrate that multi-task and cross-lingual joint training
can improve the performance in various cases.
| Zhilin Yang and Ruslan Salakhutdinov and William Cohen | null | 1603.06270 | null | null |
Multi-fidelity Gaussian Process Bandit Optimisation | stat.ML cs.AI cs.LG | In many scientific and engineering applications, we are tasked with the
maximisation of an expensive-to-evaluate black-box function $f$. Traditional
settings for this problem assume just the availability of this single function.
However, in many cases, cheap approximations to $f$ may be obtainable. For
example, the expensive real world behaviour of a robot can be approximated by a
cheap computer simulation. We can use these approximations to eliminate low
function value regions cheaply and use the expensive evaluations of $f$ in a
small but promising region and speedily identify the optimum. We formalise this
task as a \emph{multi-fidelity} bandit problem where the target function and
its approximations are sampled from a Gaussian process. We develop MF-GP-UCB, a
novel method based on upper confidence bound techniques. In our theoretical
analysis we demonstrate that it exhibits precisely the above behaviour, and
achieves better regret than strategies which ignore multi-fidelity information.
Empirically, MF-GP-UCB outperforms such naive strategies and other
multi-fidelity methods on several synthetic and real experiments.
| Kirthevasan Kandasamy, Gautam Dasarathy, Junier B. Oliva, Jeff
Schneider, Barnabas Poczos | null | 1603.06288 | null | null |
Harnessing Deep Neural Networks with Logic Rules | cs.LG cs.AI cs.CL stat.ML | Combining deep neural networks with structured logic rules is desirable to
harness flexibility and reduce uninterpretability of the neural models. We
propose a general framework capable of enhancing various types of neural
networks (e.g., CNNs and RNNs) with declarative first-order logic rules.
Specifically, we develop an iterative distillation method that transfers the
structured information of logic rules into the weights of neural networks. We
deploy the framework on a CNN for sentiment analysis, and an RNN for named
entity recognition. With a few highly intuitive rules, we obtain substantial
improvements and achieve state-of-the-art or comparable results to previous
best-performing systems.
| Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, Eric Xing | null | 1603.06318 | null | null |
Learning Dexterous Manipulation for a Soft Robotic Hand from Human
Demonstration | cs.LG cs.RO | Dexterous multi-fingered hands can accomplish fine manipulation behaviors
that are infeasible with simple robotic grippers. However, sophisticated
multi-fingered hands are often expensive and fragile. Low-cost soft hands offer
an appealing alternative to more conventional devices, but present considerable
challenges in sensing and actuation, making them difficult to apply to more
complex manipulation tasks. In this paper, we describe an approach to learning
from demonstration that can be used to train soft robotic hands to perform
dexterous manipulation tasks. Our method uses object-centric demonstrations,
where a human demonstrates the desired motion of manipulated objects with their
own hands, and the robot autonomously learns to imitate these demonstrations
using reinforcement learning. We propose a novel algorithm that allows us to
blend and select a subset of the most feasible demonstrations to learn to
imitate on the hardware, which we use with an extension of the guided policy
search framework to use multiple demonstrations to learn generalizable neural
network policies. We demonstrate our approach on the RBO Hand 2, with learned
motor skills for turning a valve, manipulating an abacus, and grasping.
| Abhishek Gupta, Clemens Eppner, Sergey Levine, Pieter Abbeel | null | 1603.06348 | null | null |
Online Learning with Low Rank Experts | cs.LG | We consider the problem of prediction with expert advice when the losses of
the experts have low-dimensional structure: they are restricted to an unknown
$d$-dimensional subspace. We devise algorithms with regret bounds that are
independent of the number of experts and depend only on the rank $d$. For the
stochastic model we show a tight bound of $\Theta(\sqrt{dT})$, and extend it to
a setting of an approximate $d$ subspace. For the adversarial model we show an
upper bound of $O(d\sqrt{T})$ and a lower bound of $\Omega(\sqrt{dT})$.
| Elad Hazan, Tomer Koren, Roi Livni, Yishay Mansour | null | 1603.06352 | null | null |
Incorporating Copying Mechanism in Sequence-to-Sequence Learning | cs.CL cs.AI cs.LG cs.NE | We address an important problem in sequence-to-sequence (Seq2Seq) learning
referred to as copying, in which certain segments in the input sequence are
selectively replicated in the output sequence. A similar phenomenon is
observable in human language communication. For example, humans tend to repeat
entity names or even long phrases in conversation. The challenge with regard to
copying in Seq2Seq is that new machinery is needed to decide when to perform
the operation. In this paper, we incorporate copying into neural network-based
Seq2Seq learning and propose a new model called CopyNet with encoder-decoder
structure. CopyNet can nicely integrate the regular way of word generation in
the decoder with the new copying mechanism which can choose sub-sequences in
the input sequence and put them at proper places in the output sequence. Our
empirical study on both synthetic data sets and real world data sets
demonstrates the efficacy of CopyNet. For example, CopyNet can outperform
regular RNN-based model with remarkable margins on text summarization tasks.
| Jiatao Gu, Zhengdong Lu, Hang Li and Victor O.K. Li | null | 1603.06393 | null | null |
Deep Learning in Bioinformatics | cs.LG q-bio.GN | In the era of big data, transformation of biomedical big data into valuable
knowledge has been one of the most important challenges in bioinformatics. Deep
learning has advanced rapidly since the early 2000s and now demonstrates
state-of-the-art performance in various fields. Accordingly, application of
deep learning in bioinformatics to gain insight from data has been emphasized
in both academia and industry. Here, we review deep learning in bioinformatics,
presenting examples of current research. To provide a useful and comprehensive
perspective, we categorize research both by the bioinformatics domain (i.e.,
omics, biomedical imaging, biomedical signal processing) and deep learning
architecture (i.e., deep neural networks, convolutional neural networks,
recurrent neural networks, emergent architectures) and present brief
descriptions of each study. Additionally, we discuss theoretical and practical
issues of deep learning in bioinformatics and suggest future research
directions. We believe that this review will provide valuable insights and
serve as a starting point for researchers to apply deep learning approaches in
their bioinformatics studies.
| Seonwoo Min, Byunghan Lee, Sungroh Yoon | null | 1603.06430 | null | null |
Hard-Clustering with Gaussian Mixture Models | cs.LG cs.DS | Training the parameters of statistical models to describe a given data set is
a central task in the field of data mining and machine learning. A very popular
and powerful way of parameter estimation is the method of maximum likelihood
estimation (MLE). Among the most widely used families of statistical models are
mixture models, especially, mixtures of Gaussian distributions. A popular
hard-clustering variant of the MLE problem is the so-called complete-data
maximum likelihood estimation (CMLE) method. The standard approach to solve the
CMLE problem is the Classification-Expectation-Maximization (CEM) algorithm.
Unfortunately, it is only guaranteed that the algorithm converges to some
(possibly arbitrarily poor) stationary point of the objective function.
In this paper, we present two algorithms for a restricted version of the CMLE
problem. That is, our algorithms approximate reasonable solutions to the CMLE
problem which satisfy certain natural properties. Moreover, they compute
solutions whose cost (i.e. complete-data log-likelihood values) are at most a
factor $(1+\epsilon)$ worse than the cost of the solutions that we search for.
Note that the CMLE problem in its most general, i.e., unrestricted, form is not well
defined and allows for trivial optimal solutions that can be thought of as
degenerate solutions.
| Johannes Bl\"omer, Sascha Brauer, Kathrin Bujna | null | 1603.06478 | null | null |
Deep video gesture recognition using illumination invariants | cs.CV cs.LG | In this paper we present architectures based on deep neural nets for gesture
recognition in videos, which are invariant to local scaling. We amalgamate
autoencoder and predictor architectures using an adaptive weighting scheme
coping with a reduced-size labeled dataset, while enriching our models from
enormous unlabeled sets. We further improve robustness to lighting conditions
by introducing a new adaptive filter based on temporal local scale
normalization. We provide superior results over known methods, including recently
reported approaches based on neural nets.
| Otkrist Gupta, Dan Raviv, Ramesh Raskar | null | 1603.06531 | null | null |
A Comparison Study of Nonlinear Kernels | stat.ML cs.LG | In this paper, we compare 5 different nonlinear kernels: min-max, RBF, fRBF
(folded RBF), acos, and acos-$\chi^2$, on a wide range of publicly available
datasets. The proposed fRBF kernel performs very similarly to the RBF kernel.
Both RBF and fRBF kernels require an important tuning parameter ($\gamma$).
Interestingly, for a significant portion of the datasets, the min-max kernel
outperforms the best-tuned RBF/fRBF kernels. The acos kernel and acos-$\chi^2$
kernel also perform well in general and in some datasets achieve the best
accuracies.
One crucial issue with the use of nonlinear kernels is the excessive
computational and memory cost. These days, one increasingly popular strategy is
to linearize the kernels through various randomization algorithms. In our
study, the randomization method for the min-max kernel demonstrates excellent
performance compared to the randomization methods for other types of nonlinear
kernels, measured in terms of the number of nonzero terms in the transformed
dataset.
Our study provides evidence for supporting the use of the min-max kernel and
the corresponding randomized linearization method (i.e., the so-called "0-bit
CWS"). Furthermore, the results motivate at least two directions for future
research: (i) To develop new (and linearizable) nonlinear kernels for better
accuracies; and (ii) To develop better linearization algorithms for improving
the current linearization methods for the RBF kernel, the acos kernel, and the
acos-$\chi^2$ kernel. One attempt is to combine the min-max kernel with the
acos kernel or the acos-$\chi^2$ kernel. The advantages of these two new and
tuning-free nonlinear kernels are demonstrated via our extensive experiments.
| Ping Li | null | 1603.06541 | null | null |
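For reference, the min-max kernel compared above has a one-line definition on non-negative vectors; the sketch below implements it directly. Handling signed data (and the randomized "0-bit CWS" linearization) requires the additional machinery discussed in the paper and is not shown.

```python
import numpy as np

def min_max_kernel(x, y):
    """Min-max (generalized Jaccard) kernel for non-negative vectors:
    K(x, y) = sum_i min(x_i, y_i) / sum_i max(x_i, y_i)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.minimum(x, y).sum() / np.maximum(x, y).sum()

print(min_max_kernel([1.0, 0.0, 2.0], [0.5, 1.0, 2.0]))  # 2.5 / 4.0 = 0.625
```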
Action-Affect Classification and Morphing using Multi-Task
Representation Learning | cs.CV cs.AI cs.HC cs.LG | Most recent work has focused on affect from facial expressions, and not as much
on the body. This work focuses on body affect analysis. Affect does not occur in
isolation. Humans usually couple affect with an action in natural interactions;
for example, a person could be talking and smiling. Recognizing body affect in
sequences requires efficient algorithms to capture both the micro movements
that differentiate between happy and sad and the macro variations between
different actions. We depart from traditional approaches for time-series data
analytics by proposing a multi-task learning model that learns a shared
representation that is well-suited for action-affect classification as well as
generation. For this paper we choose Conditional Restricted Boltzmann Machines
to be our building block. We propose a new model that enhances the CRBM model
with a factored multi-task component to become Multi-Task Conditional
Restricted Boltzmann Machines (MTCRBMs). We evaluate our approach on two
publicly available datasets, the Body Affect dataset and the Tower Game
dataset, and show superior classification performance improvement over the
state-of-the-art, as well as the generative abilities of our model.
| Timothy J. Shields, Mohamed R. Amer, Max Ehrlich, Amir Tamrakar | null | 1603.06554 | null | null |
Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization | cs.LG stat.ML | Performance of machine learning algorithms depends critically on identifying
a good set of hyperparameters. While recent approaches use Bayesian
optimization to adaptively select configurations, we focus on speeding up
random search through adaptive resource allocation and early-stopping. We
formulate hyperparameter optimization as a pure-exploration non-stochastic
infinite-armed bandit problem where a predefined resource like iterations, data
samples, or features is allocated to randomly sampled configurations. We
introduce a novel algorithm, Hyperband, for this framework and analyze its
theoretical properties, providing several desirable guarantees. Furthermore, we
compare Hyperband with popular Bayesian optimization methods on a suite of
hyperparameter optimization problems. We observe that Hyperband can provide
over an order-of-magnitude speedup over our competitor set on a variety of
deep-learning and kernel-based learning problems.
| Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, Ameet
Talwalkar | null | 1603.06560 | null | null |
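Hyperband, introduced above, repeatedly calls a successive-halving subroutine with different trade-offs between the number of configurations and the budget per configuration. The sketch below shows only that inner subroutine, with `evaluate(cfg, r)`, the minimum resource and the `eta` parameter as illustrative assumptions.

```python
import math

def successive_halving(configs, evaluate, min_resource, eta=3):
    """Train all surviving configurations with a growing budget and keep the
    top 1/eta after each round; `evaluate(cfg, r)` returns a validation loss
    after training cfg with resource r (epochs, samples, ...)."""
    r = min_resource
    for _ in range(int(math.log(len(configs), eta))):
        losses = [evaluate(cfg, r) for cfg in configs]
        order = sorted(range(len(configs)), key=lambda i: losses[i])
        configs = [configs[i] for i in order[: max(1, len(configs) // eta)]]
        r *= eta
    return configs[0]
```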
Bayesian Neural Word Embedding | cs.CL cs.LG | Recently, several works in the domain of natural language processing
presented successful methods for word embedding. Among them, the Skip-Gram with
negative sampling, known also as word2vec, advanced the state-of-the-art of
various linguistics tasks. In this paper, we propose a scalable Bayesian neural
word embedding algorithm. The algorithm relies on a Variational Bayes solution
for the Skip-Gram objective and a detailed step by step description is
provided. We present experimental results that demonstrate the performance of
the proposed algorithm for word analogy and similarity tasks on six different
datasets and show it is competitive with the original Skip-Gram method.
| Oren Barkan | null | 1603.06571 | null | null |
Variational Autoencoders for Feature Detection of Magnetic Resonance
Imaging Data | cs.LG cs.NE stat.ML | Independent component analysis (ICA), as an approach to the blind
source-separation (BSS) problem, has become the de-facto standard in many
medical imaging settings. Despite successes and a large ongoing research
effort, the limitation of ICA to square linear transformations has not been
overcome, so that general INFOMAX is still far from being realized. As an
alternative, we present feature analysis in medical imaging as a problem solved
by Helmholtz machines, which include dimensionality reduction and
reconstruction of the raw data under the same objective, and which recently
have overcome major difficulties in inference and learning with deep and
nonlinear configurations. We demonstrate one approach to training Helmholtz
machines, variational auto-encoders (VAE), as a viable approach toward feature
extraction with magnetic resonance imaging (MRI) data.
| R. Devon Hjelm and Sergey M. Plis and Vince C. Calhoun | null | 1603.06624 | null | null |
Information Theoretic-Learning Auto-Encoder | cs.LG | We propose Information Theoretic-Learning (ITL) divergence measures for
variational regularization of neural networks. We also explore ITL-regularized
autoencoders as an alternative to variational autoencoding Bayes, adversarial
autoencoders and generative adversarial networks for randomly generating sample
data without explicitly defining a partition function. This paper also
formalizes generative moment matching networks under the ITL framework.
| Eder Santana, Matthew Emigh and Jose C Principe | null | 1603.06653 | null | null |
Recursive Neural Conditional Random Fields for Aspect-based Sentiment
Analysis | cs.CL cs.IR cs.LG | In aspect-based sentiment analysis, extracting aspect terms along with the
opinions being expressed from user-generated content is one of the most
important subtasks. Previous studies have shown that exploiting connections
between aspect and opinion terms is promising for this task. In this paper, we
propose a novel joint model that integrates recursive neural networks and
conditional random fields into a unified framework for explicit aspect and
opinion terms co-extraction. The proposed model learns high-level
discriminative features and doubly propagates information between aspect and
opinion terms simultaneously. Moreover, it is flexible to incorporate
hand-crafted features into the proposed model to further boost its information
extraction performance. Experimental results on the SemEval Challenge 2014
dataset show the superiority of our proposed model over several baseline
methods as well as the winning systems of the challenge.
| Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier and Xiaokui Xiao | null | 1603.06679 | null | null |
A Self-Paced Regularization Framework for Multi-Label Learning | cs.LG | In this paper, we propose a novel multi-label learning framework, called
Multi-Label Self-Paced Learning (MLSPL), in an attempt to incorporate the
self-paced learning strategy into the multi-label learning regime. In light of the
benefits of adopting the easy-to-hard strategy proposed by self-paced learning,
the devised MLSPL aims to learn multiple labels jointly by gradually including
label learning tasks and instances into model training from the easy to the
hard. We first introduce a self-paced function as a regularizer in the
multi-label learning formulation, so as to simultaneously rank priorities of
the label learning tasks and the instances in each learning iteration.
Considering that different multi-label learning scenarios often need different
self-paced schemes during optimization, we thus propose a general way to find
the desired self-paced functions. Experimental results on three benchmark
datasets suggest the state-of-the-art performance of our approach.
| Changsheng Li and Fan Wei and Junchi Yan and Weishan Dong and Qingshan
Liu and Xiaoyu Zhang and Hongyuan Zha | null | 1603.06708 | null | null |
Localized Lasso for High-Dimensional Regression | stat.ML cs.LG stat.ME | We introduce the localized Lasso, which is suited for learning models that
are both interpretable and have a high predictive power in problems with high
dimensionality $d$ and small sample size $n$. More specifically, we consider a
function defined by local sparse models, one at each data point. We introduce
sample-wise network regularization to borrow strength across the models, and
sample-wise exclusive group sparsity (a.k.a., $\ell_{1,2}$ norm) to introduce
diversity into the choice of feature sets in the local models. The local models
are interpretable in terms of similarity of their sparsity patterns. The cost
function is convex, and thus has a globally optimal solution. Moreover, we
propose a simple yet efficient iterative least-squares based optimization
procedure for the localized Lasso, which does not need a tuning parameter, and
is guaranteed to converge to a globally optimal solution. The solution is
empirically shown to outperform alternatives for both simulated and genomic
personalized medicine data.
| Makoto Yamada, Koh Takeuchi, Tomoharu Iwata, John Shawe-Taylor, Samuel
Kaski | null | 1603.06743 | null | null |
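For concreteness, one plausible way to write down an objective of the flavour sketched in the abstract above (per-sample models, a network term, and an exclusive-sparsity term) is shown below; the exact penalties and the iterative least-squares solver of the paper may differ.

```python
import numpy as np

def localized_lasso_objective(W, X, y, R, lam_net, lam_exc):
    """Objective for per-sample linear models in the spirit of the localized Lasso.
    W: (n, d) one weight vector per sample; X: (n, d); y: (n,);
    R: (n, n) nonnegative sample-similarity graph."""
    fit = np.sum((y - np.einsum('ij,ij->i', W, X)) ** 2)          # sample-wise squared error
    diffs = W[:, None, :] - W[None, :, :]
    network = np.sum(R * np.linalg.norm(diffs, axis=2))           # ties neighbouring local models together
    exclusive = np.sum(np.sum(np.abs(W), axis=1) ** 2)            # squared l1 per sample (exclusive sparsity)
    return fit + lam_net * network + lam_exc * exclusive
```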
Doubly Random Parallel Stochastic Methods for Large Scale Learning | cs.LG math.OC | We consider learning problems over training sets in which both the number of
training examples and the dimension of the feature vectors are large. To solve
these problems we propose the random parallel stochastic algorithm (RAPSA). We
call the algorithm random parallel because it utilizes multiple processors to
operate in a randomly chosen subset of blocks of the feature vector. We call
the algorithm parallel stochastic because processors choose elements of the
training set randomly and independently. Algorithms that are parallel in either
of these dimensions exist, but RAPSA is the first attempt at a methodology that
is parallel in both: the selection of blocks and the selection of elements of
the training set. In RAPSA, processors utilize the randomly chosen functions to
compute the stochastic gradient component associated with a randomly chosen
block. The technical contribution of this paper is to show that this minimally
coordinated algorithm converges to the optimal classifier when the training
objective is convex. In particular, we show that: (i) When using decreasing
stepsizes, RAPSA converges almost surely over the random choice of blocks and
functions. (ii) When using constant stepsizes, convergence is to a neighborhood
of optimality with a rate that is linear in expectation. RAPSA is numerically
evaluated on the MNIST digit recognition problem.
| Aryan Mokhtari and Alec Koppel and Alejandro Ribeiro | null | 1603.06782 | null | null |
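A serial simulation of the doubly random update described above (a random coordinate block and a random minibatch per simulated processor) for a least-squares objective might look like the following; it illustrates the scheme and is not the authors' parallel implementation.

```python
import numpy as np

def rapsa_least_squares(X, y, n_blocks=4, n_procs=2, batch=32, step=0.1, iters=500, seed=0):
    """Each (simulated) processor independently draws a random coordinate block and
    a random minibatch, then updates only that block of the iterate."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    blocks = np.array_split(np.arange(d), n_blocks)
    w = np.zeros(d)
    for _ in range(iters):
        for _ in range(n_procs):
            b = blocks[rng.integers(n_blocks)]               # random block of features
            idx = rng.choice(n, size=batch, replace=False)   # random training examples
            resid = X[idx] @ w - y[idx]
            w[b] -= step * (X[idx][:, b].T @ resid) / batch  # block stochastic gradient step
    return w
```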
Using real-time cluster configurations of streaming asynchronous
features as online state descriptors in financial markets | q-fin.TR cs.LG q-fin.CP | We present a scheme for online, unsupervised state discovery and detection
from streaming, multi-featured, asynchronous data in high-frequency financial
markets. Online feature correlations are computed using an unbiased, lossless
Fourier estimator. A high-speed maximum likelihood clustering algorithm is then
used to find the feature cluster configuration which best explains the
structure in the correlation matrix. We conjecture that this feature
configuration is a candidate descriptor for the temporal state of the system.
Using a simple cluster configuration similarity metric, we are able to
enumerate the state space based on prevailing feature configurations. The
proposed state representation removes the need for human-driven data
pre-processing for state attribute specification, allowing a learning agent to
find structure in streaming data, discern changes in the system, enumerate its
perceived state space and learn suitable action-selection policies.
| Dieter Hendricks | null | 1603.06805 | null | null |
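A batch-mode caricature of the pipeline in the abstract above (synchronous correlations in place of the Fourier estimator, hierarchical clustering in place of the maximum likelihood algorithm, and the adjusted Rand index as a stand-in configuration similarity) could look like this:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import adjusted_rand_score

def feature_configuration(returns, n_clusters=3):
    """Cluster features by the structure of their correlation matrix.
    returns: (T, n_features) array of synchronous observations."""
    corr = np.corrcoef(returns, rowvar=False)
    dist = squareform(1.0 - np.abs(corr), checks=False)      # condensed feature-distance matrix
    return fcluster(linkage(dist, method='average'), n_clusters, criterion='maxclust')

def configuration_similarity(labels_a, labels_b):
    """1.0 when two feature-cluster configurations agree, lower otherwise."""
    return adjusted_rand_score(labels_a, labels_b)

# Two time windows mapping to similar configurations would be treated as the same state.
```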
Generating Factoid Questions With Recurrent Neural Networks: The 30M
Factoid Question-Answer Corpus | cs.CL cs.AI cs.LG cs.NE | Over the past decade, large-scale supervised learning corpora have enabled
machine learning researchers to make substantial advances. However, to date,
there are no large-scale question-answer corpora available. In this paper
we present the 30M Factoid Question-Answer Corpus, an enormous question answer
pair corpus produced by applying a novel neural network architecture on the
knowledge base Freebase to transduce facts into natural language questions. The
produced question answer pairs are evaluated both by human evaluators and using
automatic evaluation metrics, including well-established machine translation
and sentence similarity metrics. Across all evaluation criteria the
question-generation model outperforms the competing template-based baseline.
Furthermore, when presented to human evaluators, the generated questions appear
comparable in quality to real human-generated questions.
| Iulian Vlad Serban, Alberto Garc\'ia-Dur\'an, Caglar Gulcehre, Sungjin
Ahn, Sarath Chandar, Aaron Courville, Yoshua Bengio | null | 1603.06807 | null | null |
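To make the contrast with the neural model concrete, a toy version of a template-based baseline (the relations and templates below are illustrative, not taken from Freebase) is:

```python
# Illustrative templates keyed by relation name; a real baseline would cover many more relations.
TEMPLATES = {
    "place_of_birth": "Where was {subject} born?",
    "profession": "What is the profession of {subject}?",
    "author": "Who wrote {subject}?",
}

def question_from_fact(subject, relation, obj):
    """Turn a (subject, relation, object) fact into a question whose answer is obj."""
    template = TEMPLATES.get(relation, "What is the {relation} of {subject}?")
    return template.format(subject=subject, relation=relation.replace("_", " ")), obj

print(question_from_fact("Ada Lovelace", "place_of_birth", "London"))
# -> ('Where was Ada Lovelace born?', 'London')
```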
Multi-velocity neural networks for gesture recognition in videos | cs.CV cs.LG | We present a new action recognition deep neural network which adaptively
learns the best action velocities in addition to the classification. While deep
neural networks have reached maturity for image understanding tasks, we are
still exploring network topologies and features to handle the richer
environment of video clips. Here, we tackle the problem of multiple velocities
in action recognition, and provide state-of-the-art results for gesture
recognition on known and newly collected datasets. We further provide the
training steps for our semi-supervised network, suited to learn from huge
unlabeled datasets with only a fraction of labeled examples.
| Otkrist Gupta, Dan Raviv and Ramesh Raskar | null | 1603.06829 | null | null |