title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Random Forests for Big Data | stat.ML cs.LG math.ST stat.TH | Big Data is one of the major challenges of statistical science and has
numerous consequences from algorithmic and theoretical viewpoints. Big Data
always involve massive data but they also often include online data and data
heterogeneity. Recently some statistical methods have been adapted to process
Big Data, like linear regression models, clustering methods and bootstrapping
schemes. Based on decision trees combined with aggregation and bootstrap ideas,
random forests were introduced by Breiman in 2001. They are a powerful
nonparametric statistical method that handles, within a single and versatile
framework, regression problems as well as two-class and multi-class
classification problems. Focusing on classification problems, this paper
proposes a selective review of available proposals that deal with scaling
random forests to Big Data problems. These proposals rely on parallel
environments or on online adaptations of random forests. We also describe how
related quantities -- such as out-of-bag error and variable importance -- are
addressed in these methods. Then, we formulate various remarks for random
forests in the Big Data context. Finally, we experiment with five variants on
two massive datasets (15 and 120 million observations), one simulated and one
drawn from real-world data. One variant relies on subsampling, while three
others are parallel implementations of random forests and involve either
adaptations of the bootstrap to Big Data or "divide-and-conquer" approaches.
The fifth variant relies on online learning of random forests. These numerical
experiments highlight the relative performance of the different variants, as
well as some of their limitations.
| Robin Genuer (ISPED, SISTM), Jean-Michel Poggi (UPD5, LM-Orsay),
Christine Tuleau-Malot (JAD), Nathalie Villa-Vialaneix (MIAT INRA) | null | 1511.08327 | null | null |
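As a hedged illustration of the "divide-and-conquer" variant sketched in the abstract above (not the authors' implementation), the following Python snippet trains independent forests on disjoint chunks of the data and pools their votes; the function names, chunk counts, and the use of scikit-learn with in-memory numpy arrays and integer class labels are all assumptions for the sketch.

```python
# A minimal divide-and-conquer sketch (assumed, not the paper's code):
# train one forest per data chunk, then aggregate by majority vote.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def divide_and_conquer_forest(X, y, n_chunks=10, trees_per_chunk=50, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    forests = []
    for chunk in np.array_split(idx, n_chunks):
        rf = RandomForestClassifier(n_estimators=trees_per_chunk,
                                    random_state=seed)
        rf.fit(X[chunk], y[chunk])          # each forest sees one chunk only
        forests.append(rf)
    return forests

def predict_majority(forests, X):
    # Pool the per-forest predictions by majority vote (integer labels).
    votes = np.stack([rf.predict(X) for rf in forests]).astype(int)
    return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)
```

In a genuinely massive setting the chunks would be streamed from disk rather than sliced from one in-memory array; the aggregation logic stays the same.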
The Automatic Statistician: A Relational Perspective | cs.LG stat.ML | Gaussian Processes (GPs) provide a general and analytically tractable way of
modeling complex time-varying, nonparametric functions. The Automatic Bayesian
Covariance Discovery (ABCD) system constructs natural-language descriptions of
time-series data by treating unknown time-series data nonparametrically using a
GP with a composite covariance kernel function. Unfortunately, learning a
composite covariance kernel from a single time-series data set often results in
a less informative kernel that may not give qualitative, distinctive descriptions
of data. We address this challenge by proposing two relational kernel learning
methods which can model multiple time-series data sets by finding common,
shared causes of changes. We show that the relational kernel learning methods
find more accurate models for regression problems on several real-world data
sets: US stock data, US house price index data, and currency exchange rate data.
| Yunseong Hwang, Anh Tong and Jaesik Choi | null | 1511.08343 | null | null |
Regularizing RNNs by Stabilizing Activations | cs.NE cs.CL cs.LG stat.ML | We stabilize the activations of Recurrent Neural Networks (RNNs) by
penalizing the squared distance between successive hidden states' norms.
This penalty term is an effective regularizer for RNNs including LSTMs and
IRNNs, improving performance on character-level language modeling and phoneme
recognition, and outperforming weight noise and dropout.
We achieve competitive performance (18.6% PER) on the TIMIT phoneme
recognition task for RNNs evaluated without beam search or an RNN transducer.
With this penalty term, IRNN can achieve similar performance to LSTM on
language modeling, although adding the penalty term to the LSTM results in
superior performance.
Our penalty term also prevents the exponential growth of IRNN's activations
outside of their training horizon, allowing them to generalize to much longer
sequences.
| David Krueger, Roland Memisevic | null | 1511.08400 | null | null |
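The penalty in the abstract above has a direct one-line form. Below is a minimal numpy sketch (assumed, not the authors' code; the function name, the toy trajectory, and the beta value are illustrative) that computes the stabilizer term from a sequence of hidden states so it can be added to the task loss.

```python
# Norm-stabilizer penalty: squared differences between the norms of
# successive hidden states h_1, ..., h_T (a sketch, not the paper's code).
import numpy as np

def norm_stabilizer_penalty(H, beta=1.0):
    """H: (T, hidden_dim) array holding the RNN hidden states."""
    norms = np.linalg.norm(H, axis=1)        # ||h_t|| for each time step
    return beta * np.mean((norms[1:] - norms[:-1]) ** 2)

H = np.random.randn(20, 64)                  # toy hidden-state trajectory
penalty = norm_stabilizer_penalty(H, beta=50.0)  # added to the task loss
```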
Gains and Losses are Fundamentally Different in Regret Minimization: The
Sparse Case | cs.LG stat.ML | We demonstrate that, in the classical non-stochastic regret minimization
problem with $d$ decisions, gains and losses to be respectively maximized or
minimized are fundamentally different. Indeed, by considering the additional
sparsity assumption (at each stage, at most $s$ decisions incur a nonzero
outcome), we derive optimal regret bounds of different orders. Specifically,
with gains, we obtain an optimal regret guarantee after $T$ stages of order
$\sqrt{T\log s}$, so the classical dependency in the dimension is replaced by
the sparsity size. With losses, we provide matching upper and lower bounds of
order $\sqrt{Ts\log(d)/d}$, which is decreasing in $d$. Finally, we also
study the bandit setting, and obtain an upper bound of order $\sqrt{Ts\log
(d/s)}$ when outcomes are losses. This bound is proven to be optimal up to the
logarithmic factor $\sqrt{\log(d/s)}$.
| Joon Kwon and Vianney Perchet | null | 1511.08405 | null | null |
The Mechanism of Additive Composition | cs.CL cs.LG | Additive composition (Foltz et al, 1998; Landauer and Dumais, 1997; Mitchell
and Lapata, 2010) is a widely used method for computing meanings of phrases,
which takes the average of vector representations of the constituent words. In
this article, we prove an upper bound for the bias of additive composition,
which is the first theoretical analysis of compositional frameworks from a
machine learning point of view. The bound is written in terms of collocation
strength; we prove that the more exclusively two successive words tend to occur
together, the more accurate one can guarantee their additive composition as an
approximation to the natural phrase vector. Our proof relies on properties of
natural language data that are empirically verified, and can be theoretically
derived from an assumption that the data is generated from a Hierarchical
Pitman-Yor Process. The theory endorses additive composition as a reasonable
operation for calculating meanings of phrases, and suggests ways to improve
additive compositionality, including: transforming entries of distributional
word vectors by a function that meets a specific condition, constructing a
novel type of vector representations to make additive composition sensitive to
word order, and utilizing singular value decomposition to train word vectors.
| Ran Tian, Naoaki Okazaki, Kentaro Inui | 10.1007/s10994-017-5634-8 | 1511.08407 | null | null |
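Additive composition itself is a one-liner; the sketch below (with a hypothetical toy vocabulary) shows the averaging operation the abstract above analyzes.

```python
# Additive composition: the phrase vector is the average of the
# constituent word vectors (toy 3-d embeddings, assumed for the demo).
import numpy as np

vectors = {"machine": np.array([0.2, 0.7, 0.1]),
           "learning": np.array([0.6, 0.1, 0.3])}

def compose(phrase, vectors):
    return np.mean([vectors[w] for w in phrase.split()], axis=0)

print(compose("machine learning", vectors))   # -> [0.4 0.4 0.2]
```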
An Introduction to Convolutional Neural Networks | cs.NE cs.CV cs.LG | The field of machine learning has taken a dramatic twist in recent times,
with the rise of the Artificial Neural Network (ANN). These biologically
inspired computational models are able to far exceed the performance of
previous forms of artificial intelligence in common machine learning tasks. One
of the most impressive forms of ANN architecture is that of the Convolutional
Neural Network (CNN). CNNs are primarily used to solve difficult image-driven
pattern recognition tasks and, with their precise yet simple architecture,
offer a simplified method of getting started with ANNs.
This document provides a brief introduction to CNNs, discussing recently
published papers and newly formed techniques for developing these powerful
image recognition models. This introduction assumes you are familiar
with the fundamentals of ANNs and machine learning.
| Keiron O'Shea and Ryan Nash | null | 1511.08458 | null | null |
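Since the record above is introductory, a minimal sketch of the core operation may help: the snippet below implements a single valid-mode 2-D convolution (strictly, cross-correlation, as in most deep-learning libraries) followed by a ReLU, with a hypothetical edge-detecting filter.

```python
# One convolutional "layer" in plain numpy: slide a filter over an image
# and apply a ReLU nonlinearity (a teaching sketch, not a full CNN).
import numpy as np

def conv2d(image, kernel):
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edge_filter = np.array([[1.0, -1.0]])             # responds to vertical edges
image = np.random.rand(8, 8)                      # toy grayscale image
feature_map = np.maximum(conv2d(image, edge_filter), 0.0)   # ReLU
```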
Incremental Truncated LSTD | cs.LG cs.AI | Balancing between computational efficiency and sample efficiency is an
important goal in reinforcement learning. Temporal difference (TD) learning
algorithms stochastically update the value function, with a linear time
complexity in the number of features, whereas least-squares temporal difference
(LSTD) algorithms are sample efficient but can be quadratic in the number of
features. In this work, we develop an efficient incremental low-rank
LSTD({\lambda}) algorithm that progresses towards the goal of better balancing
computation and sample efficiency. The algorithm reduces the computation and
storage complexity to the number of features times the chosen rank parameter
while summarizing past samples efficiently to nearly obtain the sample
complexity of LSTD. We derive a simulation bound on the solution given by
truncated low-rank approximation, illustrating a bias-variance trade-off
dependent on the choice of rank. We demonstrate that the algorithm effectively
balances computational complexity and sample efficiency for policy evaluation
in a benchmark task and a high-dimensional energy allocation domain.
| Clement Gehring, Yangchen Pan, Martha White | null | 1511.08495 | null | null |
Iterative Instance Segmentation | cs.CV cs.LG | Existing methods for pixel-wise labelling tasks generally disregard the
underlying structure of labellings, often leading to predictions that are
visually implausible. While incorporating structure into the model should
improve prediction quality, doing so is challenging - manually specifying the
form of structural constraints may be impractical and inference often becomes
intractable even if structural constraints are given. We sidestep this problem
by reducing structured prediction to a sequence of unconstrained prediction
problems and demonstrate that this approach is capable of automatically
discovering priors on shape, contiguity of region predictions and smoothness of
region contours from data without any a priori specification. On the instance
segmentation task, this method outperforms the state-of-the-art, achieving a
mean $\mathrm{AP}^{r}$ of 63.6% at 50% overlap and 43.3% at 70% overlap.
| Ke Li, Bharath Hariharan, Jitendra Malik | null | 1511.08498 | null | null |
Regularized EM Algorithms: A Unified Framework and Statistical
Guarantees | cs.LG stat.ML | Latent variable models are a fundamental modeling tool in machine learning
applications, but they present significant computational and analytical
challenges. The popular EM algorithm and its variants are much-used
algorithmic tools; yet our rigorous understanding of their performance is highly
incomplete. Recently, work in Balakrishnan et al. (2014) has demonstrated that
for an important class of problems, EM exhibits linear local convergence. In
the high-dimensional setting, however, the M-step may not be well defined. We
address precisely this setting through a unified treatment using
regularization. While regularization for high-dimensional problems is by now
well understood, the iterative EM algorithm requires a careful balancing of
making progress towards the solution while identifying the right structure
(e.g., sparsity or low-rank). In particular, regularizing the M-step using the
state-of-the-art high-dimensional prescriptions (e.g., Wainwright (2014)) is
not guaranteed to provide this balance. Our algorithm and analysis are linked
in a way that reveals the balance between optimization and statistical errors.
We specialize our general framework to sparse Gaussian mixture models,
high-dimensional mixed regression, and regression with missing variables,
obtaining statistical guarantees for each of these examples.
| Xinyang Yi and Constantine Caramanis | null | 1511.08551 | null | null |
Simultaneous Private Learning of Multiple Concepts | cs.DS cs.CR cs.LG | We investigate the direct-sum problem in the context of differentially
private PAC learning: What is the sample complexity of solving $k$ learning
tasks simultaneously under differential privacy, and how does this cost compare
to that of solving $k$ learning tasks without privacy? In our setting, an
individual example consists of a domain element $x$ labeled by $k$ unknown
concepts $(c_1,\ldots,c_k)$. The goal of a multi-learner is to output $k$
hypotheses $(h_1,\ldots,h_k)$ that generalize the input examples.
Without concern for privacy, the sample complexity needed to simultaneously
learn $k$ concepts is essentially the same as needed for learning a single
concept. Under differential privacy, the basic strategy of learning each
hypothesis independently yields sample complexity that grows polynomially with
$k$. For some concept classes, we give multi-learners that require fewer
samples than the basic strategy. Unfortunately, however, we also give lower
bounds showing that even for very simple concept classes, the sample cost of
private multi-learning must grow polynomially in $k$.
| Mark Bun and Kobbi Nissim and Uri Stemmer | 10.1145/2840728.2840747 | 1511.08552 | null | null |
Shaping Proto-Value Functions via Rewards | cs.AI cs.LG | In this paper, we combine task-dependent reward shaping and task-independent
proto-value functions to obtain reward dependent proto-value functions (RPVFs).
In constructing the RPVFs we are making use of the immediate rewards which are
available during the sampling phase but are not used in the PVF construction.
We show via experiments that learning with an RPVF based representation is
better than learning with just reward shaping or PVFs. In particular, when the
state space is symmetrical and the rewards are asymmetrical, the RPVFs capture
the asymmetry better than the PVFs.
| Chandrashekar Lakshmi Narayanan, Raj Kumar Maity and Shalabh Bhatnagar | null | 1511.08589 | null | null |
Algorithms for Differentially Private Multi-Armed Bandits | stat.ML cs.CR cs.LG | We present differentially private algorithms for the stochastic Multi-Armed
Bandit (MAB) problem. This is a problem for applications such as adaptive
clinical trials, experiment design, and user-targeted advertising where private
information is connected to individual rewards. Our major contribution is to
show that there exist $(\epsilon, \delta)$ differentially private variants of
Upper Confidence Bound algorithms which have optimal regret, $O(\epsilon^{-1} +
\log T)$. This is a significant improvement over previous results, which only
achieve poly-log regret $O(\epsilon^{-2} \log^{2} T)$, because of our use of a
novel interval-based mechanism. We also substantially improve the bounds of
a previous family of algorithms that uses a continual release mechanism.
Experiments clearly validate our theoretical bounds.
| Aristide Tossou, Christos Dimitrakakis | null | 1511.08681 | null | null |
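The paper's interval-based mechanism is not reproduced here, but a naive baseline conveys the shape of a private bandit algorithm: perturb each empirical mean with Laplace noise before applying a UCB index. Everything below (function names, the noise calibration, the toy arms) is an assumption for illustration, not the authors' algorithm.

```python
# Naive Laplace-noised UCB sketch for bounded rewards in [0, 1]
# (illustrative only; NOT the paper's interval-based mechanism).
import numpy as np

def noisy_ucb_arm(means, pulls, t, epsilon, rng):
    noise = rng.laplace(scale=1.0 / (epsilon * np.maximum(pulls, 1)))
    bonus = np.sqrt(2.0 * np.log(t + 1) / np.maximum(pulls, 1))
    return int(np.argmax(means + noise + bonus))

rng = np.random.default_rng(0)
true_p = np.array([0.3, 0.5, 0.7])          # toy Bernoulli arms
sums, pulls = np.zeros(3), np.zeros(3)
for t in range(1000):
    a = noisy_ucb_arm(sums / np.maximum(pulls, 1), pulls, t, 1.0, rng)
    sums[a] += float(rng.random() < true_p[a])
    pulls[a] += 1
```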
On the convergence of cycle detection for navigational reinforcement
learning | cs.LG cs.AI | We consider a reinforcement learning framework where agents have to navigate
from start states to goal states. We prove convergence of a cycle-detection
learning algorithm on a class of tasks that we call reducible. Reducible tasks
have an acyclic solution. We also syntactically characterize the form of the
final policy. This characterization can be used to precisely detect the
convergence point in a simulation. Our result demonstrates that even simple
algorithms can be successful in learning a large class of nontrivial tasks. In
addition, our framework is elementary in the sense that we only use basic
concepts to formally prove convergence.
| Tom J. Ameloot and Jan Van den Bussche | null | 1511.08724 | null | null |
Informative Data Projections: A Framework and Two Examples | cs.LG cs.IR math.ST stat.TH | Methods for Projection Pursuit aim to facilitate the visual exploration of
high-dimensional data by identifying interesting low-dimensional projections. A
major challenge is the design of a suitable quality metric of projections,
commonly referred to as the projection index, to be maximized by the Projection
Pursuit algorithm. In this paper, we introduce a new information-theoretic
strategy for tackling this problem, based on quantifying the amount of
information the projection conveys to a user given their prior beliefs about
the data. The resulting projection index is a subjective quantity, explicitly
dependent on the intended user. As a useful illustration, we developed this
idea for two particular kinds of prior beliefs. The first kind leads to PCA
(Principal Component Analysis), shining new light on when PCA is (not)
appropriate. The second kind leads to a novel projection index, the
maximization of which can be regarded as a robust variant of PCA. We show how
this projection index, though non-convex, can be effectively maximized using a
modified power method as well as using a semidefinite programming relaxation.
The usefulness of this new projection index is demonstrated in comparative
empirical experiments against PCA and a popular Projection Pursuit method.
| Tijl De Bie, Jefrey Lijffijt, Raul Santos-Rodriguez, Bo Kang | null | 1511.08762 | null | null |
Multiagent Cooperation and Competition with Deep Reinforcement Learning | cs.AI cs.LG q-bio.NC | Multiagent systems appear in most social, economical, and political
situations. In the present work we extend the Deep Q-Learning Network
architecture proposed by Google DeepMind to multiagent environments and
investigate how two agents controlled by independent Deep Q-Networks interact
in the classic videogame Pong. By manipulating the classical rewarding scheme
of Pong we demonstrate how competitive and collaborative behaviors emerge.
Competitive agents learn to play and score efficiently. Agents trained under
collaborative rewarding schemes find an optimal strategy to keep the ball in
the game as long as possible. We also describe the progression from competitive
to collaborative behavior. The present work demonstrates that Deep Q-Networks
can become a practical tool for studying the decentralized learning of
multiagent systems living in highly complex environments.
| Ardi Tampuu, Tambet Matiisen, Dorian Kodelja, Ilya Kuzovkin, Kristjan
Korjus, Juhan Aru, Jaan Aru and Raul Vicente | null | 1511.08779 | null | null |
Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) - The
$\ell_0$ Method | cs.LG | The sparsity of natural signals and images in a transform domain or
dictionary has been extensively exploited in several applications such as
compression, denoising and inverse problems. More recently, data-driven
adaptation of synthesis dictionaries has shown promise in many applications
compared to fixed or analytical dictionary models. However, dictionary learning
problems are typically non-convex and NP-hard, and the usual alternating
minimization approaches for these problems are often computationally expensive,
with the computations dominated by the NP-hard synthesis sparse coding step. In
this work, we investigate an efficient method for $\ell_{0}$ "norm"-based
dictionary learning by first approximating the training data set with a sum of
sparse rank-one matrices and then using a block coordinate descent approach to
estimate the unknowns. The proposed block coordinate descent algorithm involves
efficient closed-form solutions. In particular, the sparse coding step involves
a simple form of thresholding. We provide a convergence analysis for the
proposed block coordinate descent approach. Our numerical experiments show the
promising performance and significant speed-ups provided by our method over the
classical K-SVD scheme in sparse signal representation and image denoising.
| Saiprasad Ravishankar, Raj Rao Nadakuditi, Jeffrey A. Fessler | null | 1511.08842 | null | null |
Designing High-Fidelity Single-Shot Three-Qubit Gates: A Machine
Learning Approach | quant-ph cs.LG | Three-qubit quantum gates are key ingredients for quantum error correction
and quantum information processing. We generate quantum-control procedures to
design three types of three-qubit gates, namely Toffoli, Controlled-Not-Not and
Fredkin gates. The design procedures are applicable to a system comprising
three nearest-neighbor-coupled superconducting artificial atoms. For each
three-qubit gate, the numerical simulation of the proposed scheme achieves
99.9% fidelity, which is an accepted threshold fidelity for fault-tolerant
quantum computing. We test our procedure in the presence of decoherence-induced
noise as well as show its robustness against random external noise generated by
the control electronics. The three-qubit gates are designed via the machine
learning algorithm called Subspace-Selective Self-Adaptive Differential
Evolution (SuSSADE).
| Ehsan Zahedinejad, Joydip Ghosh, Barry C. Sanders | 10.1103/PhysRevApplied.6.054005 | 1511.08862 | null | null |
MidRank: Learning to rank based on subsequences | cs.CV cs.LG | We present a supervised learning to rank algorithm that effectively orders
images by exploiting the structure in image sequences. Most often in the
supervised learning to rank literature, ranking is approached either by
analyzing pairs of images or by optimizing a list-wise surrogate loss function
on full sequences. In this work we propose MidRank, which learns from
moderately sized sub-sequences instead. These sub-sequences contain useful
structural ranking information that leads to better learnability during
training and better generalization during testing. By exploiting sub-sequences,
the proposed MidRank improves ranking accuracy considerably on an extensive
array of image ranking applications and datasets.
| Basura Fernando, Efstratios Gavves, Damien Muselet, Tinne Tuytelaars | null | 1511.08951 | null | null |
Learning Directed Acyclic Graphs with Penalized Neighbourhood Regression | math.ST cs.LG stat.ML stat.TH | We study a family of regularized score-based estimators for learning the
structure of a directed acyclic graph (DAG) for a multivariate normal
distribution from high-dimensional data with $p\gg n$. Our main results
establish support recovery guarantees and deviation bounds for a family of
penalized least-squares estimators under concave regularization without
assuming prior knowledge of a variable ordering. These results apply to a
variety of practical situations that allow for arbitrary nondegenerate
covariance structures as well as many popular regularizers including the MCP,
SCAD, $\ell_{0}$ and $\ell_{1}$. The proof relies on interpreting a DAG as a
recursive linear structural equation model, which reduces the estimation
problem to a series of neighbourhood regressions. We provide a novel
statistical analysis of these neighbourhood problems, establishing uniform
control over the superexponential family of neighbourhoods associated with a
Gaussian distribution. We then apply these results to study the statistical
properties of score-based DAG estimators, learning causal DAGs, and inferring
conditional independence relations via graphical models. Our results
yield---for the first time---finite-sample guarantees for structure learning of
Gaussian DAGs in high-dimensions via score-based estimation.
| Bryon Aragam, Arash A. Amini, Qing Zhou | null | 1511.08963 | null | null |
Robotic Search & Rescue via Online Multi-task Reinforcement Learning | cs.AI cs.LG cs.RO | Reinforcement learning (RL) is a general and well-known method that a robot
can use to learn an optimal control policy to solve a particular task. We would
like to build a versatile robot that can learn multiple tasks, but using RL for
each of them would be prohibitively expensive in terms of both time and
wear-and-tear on the robot. To remedy this problem, we use the Policy Gradient
Efficient Lifelong Learning Algorithm (PG-ELLA), an online multi-task RL
algorithm that enables the robot to efficiently learn multiple consecutive
tasks by sharing knowledge between these tasks to accelerate learning and
improve performance. We implemented and evaluated three RL methods--Q-learning,
policy gradient RL, and PG-ELLA--on a ground robot whose task is to find a
target object in an environment under different surface conditions. In this
paper, we discuss our implementations as well as present an empirical analysis
of their learning performance.
| Lisa Lee | null | 1511.08967 | null | null |
How do the naive Bayes classifier and the Support Vector Machine compare
in their ability to forecast the Stock Exchange of Thailand? | cs.LG | This essay investigates the question of how the naive Bayes classifier and
the support vector machine compare in their ability to forecast the Stock
Exchange of Thailand. The theory behind the SVM and the naive Bayes classifier
is explored. The algorithms are trained using data from the month of January
2010, extracted from the MarketWatch.com website. Input features are selected
based on previous studies of the SET100 Index. The Weka 3 software is used to
create models from the labeled training data. Mean squared error, the
proportion of correctly classified instances, and a number of other error
measurements are then used to compare the two algorithms. This essay shows that these two
algorithms are currently not advanced enough to accurately model the stock
exchange. Nevertheless, the naive Bayes is better than the support vector
machine at predicting the Stock Exchange of Thailand.
| Napas Udomsak | null | 1511.08987 | null | null |
Multiple-Instance Learning: Radon-Nikodym Approach to Distribution
Regression Problem | cs.LG | For the distribution regression problem, where a bag of $x$--observations is
mapped to a single $y$ value, a one--step solution is proposed. The problem of
mapping a random distribution to a random value is transformed into mapping a
random vector to a random value by taking the distribution moments of the $x$
observations in a bag as the random vector. Radon--Nikodym or least squares
theory can then be applied, which gives a $y(x)$ estimator. The probability
distribution of $y$ is also obtained; this requires solving a generalized
eigenvalue problem, where the matrix spectrum (not depending on $x$) gives the
possible $y$ outcomes, and the $x$--dependent probabilities of those outcomes
are obtained by projecting the distribution with fixed $x$ value (a
delta--function) onto the corresponding eigenvectors. A library providing a
numerically stable polynomial basis for these calculations is available, which
makes the proposed approach practical.
| Vladislav Gennadievich Malyshkin | null | 1511.09058 | null | null |
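The moment-based reduction in the abstract above can be shown in a few lines; the sketch below is an assumption for illustration (plain least squares in place of the Radon--Nikodym machinery, with a hypothetical toy target): each bag is summarized by its first moments and the bag-to-$y$ map is fit by ordinary least squares.

```python
# Bag-of-moments distribution regression sketch: each bag of x-values is
# mapped to its first few moments, then y is fit by least squares.
import numpy as np

def bag_moments(bag, n_moments=3):
    return np.array([np.mean(bag ** k) for k in range(1, n_moments + 1)])

rng = np.random.default_rng(0)
bags = [rng.normal(loc=m, size=50) for m in rng.uniform(0, 3, size=100)]
y = np.array([bag.mean() ** 2 for bag in bags])    # toy bag-level target

Phi = np.stack([bag_moments(b) for b in bags])     # bags -> moment vectors
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # least-squares fit
y_hat = Phi @ coef                                 # predicted y per bag
```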
Position paper: a general framework for applying machine learning
techniques in operating room | cs.CY cs.LG | In this position paper we describe a general framework for applying machine
learning and pattern recognition techniques in healthcare. In particular, we
are interested in providing an automated tool for monitoring and increasing
the level of awareness in the operating room and for identifying human errors
which occur during laparoscopic surgical operations. The framework that we
present is divided into three different layers: each layer implements
algorithms of increasing complexity that perform functionality at a higher
degree of abstraction. In the first layer, raw data collected from sensors in
the operating room during the surgical operation are pre-processed and
aggregated. The results of this initial phase are transferred
to a second layer, which implements pattern recognition techniques and extracts
relevant features from the data. Finally, in the last layer, expert systems are
employed to take high level decisions, which represent the final output of the
system.
| Filippo Maria Bianchi, Enrico De Santis, Hedieh Montazeri, Parisa
Naraei, Alireza Sadeghian | null | 1511.09099 | null | null |
A Short Survey on Data Clustering Algorithms | cs.DS cs.CV cs.LG stat.CO stat.ML | With rapidly increasing data, clustering algorithms are important tools for
data analytics in modern research. They have been successfully applied to a
wide range of domains; for instance, bioinformatics, speech recognition, and
financial analysis. Formally speaking, given a set of data instances, a
clustering algorithm is expected to divide the set of data instances into the
subsets which maximize the intra-subset similarity and inter-subset
dissimilarity, where a similarity measure is defined beforehand. In this work,
state-of-the-art clustering algorithms are reviewed from design concept to
methodology; different clustering paradigms are discussed, as are advanced
clustering algorithms. After that, the existing clustering evaluation
metrics are reviewed. A summary with future insights is provided at the end.
| Ka-Chun Wong | null | 1511.09123 | null | null |
Aspect-based Opinion Summarization with Convolutional Neural Networks | cs.CL cs.IR cs.LG | This paper considers Aspect-based Opinion Summarization (AOS) of reviews on
particular products. To enable real applications, an AOS system needs to
address two core subtasks, aspect extraction and sentiment classification. Most
existing approaches to aspect extraction, which use linguistic analysis or
topic modeling, are general across different products but not precise enough or
suitable for particular products. Instead we take a less general but more
precise scheme, directly mapping each review sentence into pre-defined aspects.
To tackle aspect mapping and sentiment classification, we propose two
Convolutional Neural Network (CNN) based methods, cascaded CNN and multitask
CNN. Cascaded CNN contains two levels of convolutional networks. Multiple CNNs
at level 1 deal with aspect mapping task, and a single CNN at level 2 deals
with sentiment classification. Multitask CNN also contains multiple aspect CNNs
and a sentiment CNN, but different networks share the same word embeddings.
Experimental results indicate that both cascaded and multitask CNNs outperform
SVM-based methods by large margins. Multitask CNN generally performs better
than cascaded CNN.
| Haibing Wu, Yiwei Gu, Shangdi Sun and Xiaodong Gu | null | 1511.09128 | null | null |
Proximal gradient method for huberized support vector machine | stat.ML cs.LG cs.NA math.NA | The Support Vector Machine (SVM) has been used in a wide variety of
classification problems. The original SVM uses the hinge loss function, which
is non-differentiable and makes the problem difficult to solve in particular
for regularized SVMs, such as with $\ell_1$-regularization. This paper
considers the Huberized SVM (HSVM), which uses a differentiable approximation
of the hinge loss function. We first explore the use of the Proximal Gradient
(PG) method for solving the binary-class HSVM (B-HSVM) and then generalize it to
multi-class HSVM (M-HSVM). Under strong convexity assumptions, we show that our
algorithm converges linearly. In addition, we give a finite convergence result
about the support of the solution, based on which we further accelerate the
algorithm by a two-stage method. We present extensive numerical experiments on
both synthetic and real datasets which demonstrate the superiority of our
methods over some state-of-the-art methods for both binary- and multi-class
SVMs.
| Yangyang Xu, Ioannis Akrotirianakis, Amit Chakraborty | 10.1007/s10044-015-0485-z | 1511.09159 | null | null |
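The B-HSVM setup above lends itself to a compact sketch: a gradient step on the smooth huberized hinge loss followed by soft-thresholding, the proximal operator of the $\ell_1$ penalty. The step size, delta, and regularization weight below are illustrative assumptions, not values from the paper.

```python
# Proximal gradient sketch for an l1-regularized binary huberized SVM.
import numpy as np

def huber_hinge_grad(w, X, y, delta=0.5):
    t = y * (X @ w)                          # margins y_i * x_i^T w
    g = np.zeros_like(t)
    mid = (t <= 1) & (t > 1 - delta)
    g[mid] = -(1 - t[mid]) / delta           # quadratic piece of the loss
    g[t <= 1 - delta] = -1.0                 # linear piece of the loss
    return X.T @ (g * y) / len(y)

def soft_threshold(w, tau):                  # prox of tau * ||w||_1
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def prox_grad_hsvm(X, y, lam=0.01, step=0.1, iters=500):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w = soft_threshold(w - step * huber_hinge_grad(w, X, y), step * lam)
    return w
```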
Asynchronous adaptive networks | math.OC cs.LG cs.MA | In a recent article [1] we surveyed advances related to adaptation, learning,
and optimization over synchronous networks. Various distributed strategies were
discussed that enable a collection of networked agents to interact locally in
response to streaming data and to continually learn and adapt to track drifts
in the data and models. Under reasonable technical conditions on the data, the
adaptive networks were shown to be mean-square stable in the slow adaptation
regime, and their mean-square-error performance and convergence rate were
characterized in terms of the network topology and data statistical moments
[2]. Classical results for single-agent adaptation and learning were recovered
as special cases. Following the works [3]-[5], this chapter complements the
exposition from [1] and extends the results to asynchronous networks. The
operation of this class of networks can be subject to various sources of
uncertainties that influence their dynamic behavior, including randomly
changing topologies, random link failures, random data arrival times, and
agents turning on and off randomly. In an asynchronous environment, agents may
stop updating their solutions or may stop sending or receiving information in a
random manner and without coordination with other agents. The presentation will
reveal that the mean-square-error performance of asynchronous networks remains
largely unaltered compared to synchronous networks. The results justify the
remarkable resilience of cooperative networks in the face of random events.
| Ali H. Sayed and Xiaochuan Zhao | null | 1511.09180 | null | null |
Non-adaptive Group Testing on Graphs | cs.DS cs.LG | Grebinski and Kucherov (1998) and Alon et al. (2004-2005) study the problem
of learning a hidden graph for some special cases, such as Hamiltonian cycles,
cliques, stars, and matchings. This problem is motivated by problems in
chemical reactions, molecular biology and genome sequencing.
In this paper, we present a generalization of this problem. Precisely, we
consider a graph G and a subgraph H of G and we assume that G contains exactly
one defective subgraph isomorphic to H. The goal is to find the defective
subgraph by testing whether an induced subgraph contains an edge of the
defective subgraph, with the minimum number of tests. We present an upper bound
for the number of tests to find the defective subgraph by using the symmetric
and high probability variation of Lov\'asz Local Lemma.
| Hamid Kameli | 10.23638/DMTCS-20-1-9 | 1511.09196 | null | null |
On Learning to Think: Algorithmic Information Theory for Novel
Combinations of Reinforcement Learning Controllers and Recurrent Neural World
Models | cs.AI cs.LG cs.NE | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers.
| Juergen Schmidhuber | null | 1511.09249 | null | null |
Scalable and Accurate Online Feature Selection for Big Data | cs.LG | Feature selection is important in many big data applications. Two critical
challenges are closely associated with big data. Firstly, in many big data
applications, the dimensionality is extremely high, often in the millions, and keeps
growing. Secondly, big data applications call for highly scalable feature
selection algorithms in an online manner such that each feature can be
processed in a sequential scan. We present SAOLA, a Scalable and Accurate
OnLine Approach for feature selection in this paper. With a theoretical
analysis of bounds on the pairwise correlations between features, SAOLA employs
novel pairwise comparison techniques and maintains a parsimonious model over
time in an online manner. Furthermore, to deal with upcoming features that
arrive by groups, we extend the SAOLA algorithm, and then propose a new
group-SAOLA algorithm for online group feature selection. The group-SAOLA
algorithm can incrementally maintain a set of feature groups that is sparse at the
levels of both groups and individual features simultaneously. An empirical
study using a series of benchmark real data sets shows that our two algorithms,
SAOLA and group-SAOLA, are scalable on data sets of extremely high
dimensionality, and have superior performance over the state-of-the-art feature
selection methods.
| Kui Yu, Xindong Wu, Wei Ding, and Jian Pei | null | 1511.09263 | null | null |
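The sequential-scan idea above can be caricatured in a few lines. The filter below is an assumption for illustration (absolute Pearson correlation with fixed thresholds), not SAOLA's actual correlation bounds or comparison rules.

```python
# Online relevance/redundancy filter sketch: keep an arriving feature
# only if it is relevant to the label and not redundant with a kept one.
import numpy as np

def abs_pearson(a, b):
    a, b = a - a.mean(), b - b.mean()
    return abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def online_select(feature_stream, y, relevance=0.1, redundancy=0.9):
    selected, kept = [], []
    for j, x in enumerate(feature_stream):       # one sequential scan
        if abs_pearson(x, y) < relevance:
            continue                             # irrelevant: discard
        if any(abs_pearson(x, s) > redundancy for s in kept):
            continue                             # redundant: discard
        selected.append(j)
        kept.append(x)
    return selected
```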
Cost-aware Pre-training for Multiclass Cost-sensitive Deep Learning | cs.LG cs.NE | Deep learning has been one of the most prominent machine learning techniques
nowadays, being the state-of-the-art on a broad range of applications where
automatic feature extraction is needed. Many such applications also demand
varying costs for different types of mis-classification errors, but it is not
clear whether or how such cost information can be incorporated into deep
learning to improve performance. In this work, we propose a novel cost-aware
algorithm that takes into account the cost information into not only the
training stage but also the pre-training stage of deep learning. The approach
allows deep learning to conduct automatic feature extraction with the cost
information effectively. Extensive experimental results demonstrate that the
proposed approach outperforms other deep learning models that do not digest the
cost information in the pre-training stage.
| Yu-An Chung, Hsuan-Tien Lin, Shao-Wen Yang | null | 1511.09337 | null | null |
k-Nearest Neighbour Classification of Datasets with a Family of
Distances | stat.ML cs.LG | The $k$-nearest neighbour ($k$-NN) classifier is one of the oldest and most
important supervised learning algorithms for classifying datasets.
Traditionally the Euclidean norm is used as the distance for the $k$-NN
classifier. In this thesis we investigate the use of alternative distances for
the $k$-NN classifier.
We start by introducing some background notions in statistical machine
learning. We define the $k$-NN classifier and discuss Stone's theorem and the
proof that $k$-NN is universally consistent on the normed space $R^d$. We then
prove that $k$-NN is universally consistent if we take a sequence of random
norms (that are independent of the sample and the query) from a family of norms
that satisfies a particular boundedness condition. We extend this result by
replacing norms with distances based on uniformly locally Lipschitz functions
that satisfy certain conditions. We discuss the limitations of Stone's lemma
and Stone's theorem, particularly with respect to quasinorms and adaptively
choosing a distance for $k$-NN based on the labelled sample. We show the
universal consistency of a two stage $k$-NN type classifier where we select the
distance adaptively based on a split labelled sample and the query. We conclude
by giving some examples of improvements of the accuracy of classifying various
datasets using the above techniques.
| Stan Hatko | null | 1512.00001 | null | null |
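A tiny demonstration of the thesis topic above: the same $k$-NN vote under several members of the $p$-norm family. The data, labels, and choices of $p$ are toy assumptions.

```python
# k-NN with a family of p-norm distances (illustrative sketch).
import numpy as np

def knn_predict(X_train, y_train, x, k=5, p=2):
    d = np.linalg.norm(X_train - x, ord=p, axis=1)   # p-norm distances
    nearest = np.argsort(d)[:k]
    return np.bincount(y_train[nearest]).argmax()    # majority vote

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)                        # toy labels
query = np.array([0.5, 0.0, 0.0])
for p in (1, 2, np.inf):                             # vary the norm
    print(p, knn_predict(X, y, query, p=p))
```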
Decoding Hidden Markov Models Faster Than Viterbi Via Online
Matrix-Vector (max, +)-Multiplication | cs.LG cs.DS cs.IT math.IT | In this paper, we present a novel algorithm for the maximum a posteriori
decoding (MAPD) of time-homogeneous Hidden Markov Models (HMM), improving the
worst-case running time of the classical Viterbi algorithm by a logarithmic
factor. In our approach, we interpret the Viterbi algorithm as a repeated
computation of matrix-vector $(\max, +)$-multiplications. On time-homogeneous
HMMs, this computation is online: a matrix, known in advance, has to be
multiplied with several vectors revealed one at a time. Our main contribution
is an algorithm solving this version of matrix-vector $(\max,+)$-multiplication
in subquadratic time, by performing a polynomial preprocessing of the matrix.
Employing this fast multiplication algorithm, we solve the MAPD problem in
$O(mn^2/ \log n)$ time for any time-homogeneous HMM of size $n$ and observation
sequence of length $m$, with an extra polynomial preprocessing cost negligible
for $m > n$. To the best of our knowledge, this is the first algorithm for the
MAPD problem requiring subquadratic time per observation, under the only
assumption -- usually verified in practice -- that the transition probability
matrix does not change with time.
| Massimo Cairo, Gabriele Farina, Romeo Rizzi | null | 1512.00077 | null | null |
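The interpretation in the abstract above is easy to make concrete: classical Viterbi is exactly a repeated $(\max,+)$ matrix-vector product in log space. The sketch below implements that classical recursion only; the paper's subquadratic preprocessing is not reproduced, and the toy HMM is an assumption.

```python
# Viterbi written as repeated (max,+) matrix-vector multiplication.
import numpy as np

def maxplus_matvec(M, v):
    # (M (x) v)_i = max_j (M[i, j] + v[j])
    return np.max(M + v[None, :], axis=1)

def viterbi_score(log_init, log_trans, log_emit, obs):
    v = log_init + log_emit[:, obs[0]]
    for o in obs[1:]:
        v = maxplus_matvec(log_trans.T, v) + log_emit[:, o]
    return np.max(v)          # log-probability of the best state path

A = np.log([[0.9, 0.1], [0.2, 0.8]])   # toy transition matrix
B = np.log([[0.7, 0.3], [0.1, 0.9]])   # toy emission matrix
pi = np.log([0.5, 0.5])
print(viterbi_score(pi, A, B, [0, 1, 1]))
```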
Learning Using 1-Local Membership Queries | cs.LG cs.AI | Classic machine learning algorithms learn from labelled examples. For
example, to design a machine translation system, a typical training set will
consist of English sentences and their translation. There is a stronger model,
in which the algorithm can also query for labels of new examples it creates.
E.g., in the translation task, the algorithm can create a new English sentence,
and request its translation from the user during training. This combination of
examples and queries has been widely studied. Yet, despite many theoretical
results, query algorithms are almost never used. One of the main causes for
this is a report (Baum and Lang, 1992) on very disappointing empirical
performance of a query algorithm. These poor results were mainly attributed to
the fact that the algorithm queried for labels of examples that are artificial,
and impossible to interpret by humans.
In this work we study a new model of local membership queries (Awasthi et
al., 2012), which tries to resolve the problem of artificial queries. In this
model, the algorithm is only allowed to query the labels of examples which are
close to examples from the training set. E.g., in translation, the algorithm
can change individual words in a sentence it has already seen, and then ask for
the translation. In this model, the examples queried by the algorithm will be
close to natural examples and hence, hopefully, will not appear as artificial
or random. We focus on 1-local queries (i.e., queries of distance 1 from an
example in the training sample). We show that 1-local membership queries are
already stronger than the standard learning model. We also present an
experiment on a well known NLP task of sentiment analysis. In this experiment,
the users were asked to provide more information than merely indicating the
label. We present results that illustrate that this extra information is
beneficial in practice.
| Galit Bary | null | 1512.00165 | null | null |
MOCICE-BCubed F$_1$: A New Evaluation Measure for Biclustering
Algorithms | cs.LG cs.IR | The validation of biclustering algorithms remains a challenging task, even
though a number of measures have been proposed for evaluating the quality of
these algorithms. Although no criterion is universally accepted as the overall
best, a number of meta-evaluation conditions to be satisfied by biclustering
algorithms have been enunciated. In this work, we present MOCICE-BCubed F$_1$,
a new external measure for evaluating biclusterings, in the scenario where gold
standard annotations are available for both the object clusters and the
associated feature subspaces. Our proposal relies on the so-called
micro-objects transformation and satisfies the most comprehensive set of
meta-evaluation conditions so far enunciated for biclusterings. Additionally,
the proposed measure adequately handles the occurrence of overlapping in both
the object and feature spaces. Moreover, when used for evaluating traditional
clusterings, which are viewed as a particular case of biclustering, the
proposed measure also satisfies the most comprehensive set of meta-evaluation
conditions so far enunciated for this task.
| Henry Rosales-M\'endez, Yunior Ram\'irez-Cruz | 10.1016/j.patrec.2016.09.002 | 1512.00228 | null | null |
Towards Dropout Training for Convolutional Neural Networks | cs.LG cs.CV cs.NE | Recently, dropout has seen increasing use in deep learning. For deep
convolutional neural networks, dropout is known to work well in fully-connected
layers. However, its effect in convolutional and pooling layers is still not
clear. This paper demonstrates that max-pooling dropout is equivalent to
randomly picking activation based on a multinomial distribution at training
time. In light of this insight, we advocate employing our proposed
probabilistic weighted pooling, instead of commonly used max-pooling, to act as
model averaging at test time. Empirical evidence validates the superiority of
probabilistic weighted pooling. We also empirically show that the effect of
convolutional dropout is not trivial, despite the dramatically reduced
possibility of over-fitting due to the convolutional architecture. Elaborately
designing dropout training simultaneously in max-pooling and fully-connected
layers, we achieve state-of-the-art performance on MNIST, and very competitive
results on CIFAR-10 and CIFAR-100, relative to other approaches without data
augmentation. Finally, we compare max-pooling dropout and stochastic pooling,
both of which introduce stochasticity based on multinomial distributions at
pooling stage.
| Haibing Wu and Xiaodong Gu | 10.1016/j.neunet.2015.07.007 | 1512.00242 | null | null |
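The test-time pooling advocated above has a closed form over a pooling region: sort the (non-negative, post-ReLU) activations and weight each by the probability that max-pooling dropout would have output it. The sketch below is one reading of that scheme, with an illustrative region and dropout probability q.

```python
# Probabilistic weighted pooling sketch: expected output of max-pooling
# dropout, used at test time in place of a plain max.
import numpy as np

def probabilistic_weighted_pool(region, q=0.5):
    a = np.sort(region.ravel())          # ascending: a_1 <= ... <= a_n
    n, p = len(a), 1.0 - q
    # a_i is the output iff it is retained and all larger units drop
    weights = p * q ** (n - 1 - np.arange(n))
    return np.sum(weights * a)           # remaining mass q**n outputs 0

region = np.array([[0.2, 1.3], [0.7, 0.0]])
print(probabilistic_weighted_pool(region))   # 0.85, below max(region)=1.3
```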
Sequential visibility-graph motifs | physics.data-an cs.LG nlin.CD | Visibility algorithms transform time series into graphs and encode dynamical
information in their topology, paving the way for graph-theoretical time series
analysis as well as building a bridge between nonlinear dynamics and network
science. In this work we introduce and study the concept of sequential
visibility graph motifs, smaller substructures of n consecutive nodes that
appear with characteristic frequencies. We develop a theory to compute in an
exact way the motif profiles associated to general classes of deterministic and
stochastic dynamics. We find that this simple property is indeed a highly
informative and computationally efficient feature, capable of distinguishing among
different dynamics and robust against noise contamination. We finally confirm
that it can be used in practice to perform unsupervised learning, by extracting
motif profiles from experimental heart-rate series and being able, accordingly,
to disentangle meditative from other relaxation states. Applications of this
general theory include the automatic classification and description of
physical, biological, and financial time series.
| Jacopo Iacovacci, Lucas Lacasa | 10.1103/PhysRevE.93.042309 | 1512.00297 | null | null |
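The underlying construction is simple enough to sketch: in a natural visibility graph, two samples are linked when the straight line between them stays above every intermediate sample. The quadratic-time implementation below is a plain illustration (function name and toy series assumed).

```python
# Natural visibility graph: node a sees node b when no intermediate
# sample rises above the line segment joining them.
import numpy as np

def visibility_edges(y):
    n, edges = len(y), []
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                y[c] < y[a] + (y[b] - y[a]) * (c - a) / (b - a)
                for c in range(a + 1, b))
            if visible:
                edges.append((a, b))
    return edges

print(visibility_edges(np.array([1.0, 0.5, 2.0, 0.3, 1.1])))
```

Sequential motif profiles are then obtained by counting the induced subgraphs on windows of n consecutive nodes of such a graph.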
Taxonomy grounded aggregation of classifiers with different label sets | cs.AI cs.LG | We describe the problem of aggregating the label predictions of diverse
classifiers using a class taxonomy. Such a taxonomy may not have been available
or referenced when the individual classifiers were designed and trained, yet
mapping the output labels into the taxonomy is desirable to integrate the
effort spent in training the constituent classifiers. A hierarchical taxonomy
representing some domain knowledge may be different from, but partially
mappable to, the label sets of the individual classifiers. We present a
heuristic approach and a principled graphical model to aggregate the label
predictions by grounding them into the available taxonomy. Our model aggregates
the labels using the taxonomy structure as constraints to find the most likely
hierarchically consistent class. We experimentally validate our proposed method
on image and text classification tasks.
| Amrita Saha, Sathish Indurthi, Shantanu Godbole, Subendhu Rongali and
Vikas C. Raykar | null | 1512.00355 | null | null |
Reinforcement Learning Applied to an Electric Water Heater: From Theory
to Practice | cs.LG | Electric water heaters have the ability to store energy in their water buffer
without impacting the comfort of the end user. This feature makes them a prime
candidate for residential demand response. However, the stochastic and
nonlinear dynamics of electric water heaters make it challenging to harness
their flexibility. Driven by this challenge, this paper formulates the
underlying sequential decision-making problem as a Markov decision process and
uses techniques from reinforcement learning. Specifically, we apply an
auto-encoder network to find a compact feature representation of the sensor
measurements, which helps to mitigate the curse of dimensionality. A well-known
batch reinforcement learning technique, fitted Q-iteration, is used to find a
control policy, given this feature representation. In a simulation-based
experiment using an electric water heater with 50 temperature sensors, the
proposed method was able to achieve good policies much faster than when using
the full state information. In a lab experiment, we apply fitted Q-iteration to
an electric water heater with eight temperature sensors. Further reducing the
state vector did not improve the results of fitted Q-iteration. The results of
the lab experiment, spanning 40 days, indicate that compared to a thermostat
controller, the presented approach was able to reduce the total cost of energy
consumption of the electric water heater by 15%.
| Frederik Ruelens, Bert Claessens, Salman Quaiyum, Bart De Schutter,
Robert Babuska, and Ronnie Belmans | null | 1512.00408 | null | null |
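Fitted Q-iteration itself is a short loop over a batch of transitions $(s, a, r, s')$. The sketch below uses a generic extra-trees regressor and omits the auto-encoder feature step described above; all names and hyperparameters are illustrative assumptions.

```python
# Fitted Q-iteration sketch over a batch of transitions (S, A, R, S2)
# with a discrete action set; the Q-function is a fitted regressor.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(S, A, R, S2, actions, gamma=0.95, n_iter=50):
    X = np.column_stack([S, A])                    # (state, action) inputs
    q = None
    for _ in range(n_iter):
        if q is None:
            targets = R                            # Q_0 = immediate reward
        else:
            nxt = np.max(np.stack([                # max over a' of Q(s', a')
                q.predict(np.column_stack([S2, np.full(len(S2), a)]))
                for a in actions]), axis=0)
            targets = R + gamma * nxt
        q = ExtraTreesRegressor(n_estimators=50).fit(X, targets)
    return q
```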
Fast k-Nearest Neighbour Search via Dynamic Continuous Indexing | cs.DS cs.AI cs.IR cs.LG stat.ML | Existing methods for retrieving k-nearest neighbours suffer from the curse of
dimensionality. We argue this is caused in part by inherent deficiencies of
space partitioning, which is the underlying strategy used by most existing
methods. We devise a new strategy that avoids partitioning the vector space and
present a novel randomized algorithm that runs in time linear in dimensionality
of the space and sub-linear in the intrinsic dimensionality and the size of the
dataset and takes space constant in dimensionality of the space and linear in
the size of the dataset. The proposed algorithm allows fine-grained control
over accuracy and speed on a per-query basis, automatically adapts to
variations in data density, supports dynamic updates to the dataset and is
easy-to-implement. We show appealing theoretical properties and demonstrate
empirically that the proposed algorithm outperforms locality-sensitivity
hashing (LSH) in terms of approximation quality, speed and space efficiency.
| Ke Li, Jitendra Malik | null | 1512.00442 | null | null |
Loss Functions for Top-k Error: Analysis and Insights | stat.ML cs.CV cs.LG | In order to push the performance on realistic computer vision tasks, the
number of classes in modern benchmark datasets has significantly increased in
recent years. This increase in the number of classes comes along with increased
ambiguity between the class labels, raising the question if top-1 error is the
right performance measure. In this paper, we provide an extensive comparison
and evaluation of established multiclass methods comparing their top-k
performance both from a practical as well as from a theoretical perspective.
Moreover, we introduce novel top-k loss functions as modifications of the
softmax and the multiclass SVM losses and provide efficient optimization
schemes for them. In the experiments, we compare on various datasets all of the
proposed and established methods for top-k error optimization. An interesting
insight of this paper is that the softmax loss yields competitive top-k
performance for all k simultaneously. For a specific top-k error, our new top-k
losses lead typically to further improvements while being faster to train than
the softmax.
| Maksim Lapin, Matthias Hein, Bernt Schiele | null | 1512.00486 | null | null |
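For reference, the top-k error the paper above analyzes is the fraction of samples whose true label is absent from the k highest-scoring classes; a short sketch with toy scores follows.

```python
# Top-k error: a prediction counts as correct when the true label is
# among the k highest class scores.
import numpy as np

def topk_error(scores, labels, k=5):
    """scores: (n_samples, n_classes); labels: (n_samples,) int ids."""
    topk = np.argsort(-scores, axis=1)[:, :k]        # k best classes
    hits = (topk == labels[:, None]).any(axis=1)
    return 1.0 - hits.mean()

scores = np.array([[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]])
print(topk_error(scores, np.array([2, 0]), k=2))     # -> 0.0
```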
Attribute2Image: Conditional Image Generation from Visual Attributes | cs.LG cs.AI cs.CV | This paper investigates a novel problem of generating images from visual
attributes. We model the image as a composite of foreground and background and
develop a layered generative model with disentangled latent variables that can
be learned end-to-end using a variational auto-encoder. We experiment with
natural images of faces and birds and demonstrate that the proposed models are
capable of generating realistic and diverse samples with disentangled latent
representations. We use a general energy minimization algorithm for posterior
inference of latent variables given novel images. Therefore, the learned
generative models show excellent quantitative and visual results in the tasks
of attribute-conditioned image reconstruction and completion.
| Xinchen Yan, Jimei Yang, Kihyuk Sohn, Honglak Lee | null | 1512.00570 | null | null |
Object-based World Modeling in Semi-Static Environments with Dependent
Dirichlet-Process Mixtures | cs.AI cs.LG cs.RO | To accomplish tasks in human-centric indoor environments, robots need to
represent and understand the world in terms of objects and their attributes. We
refer to this attribute-based representation as a world model, and consider how
to acquire it via noisy perception and maintain it over time, as objects are
added, changed, and removed in the world. Previous work has framed this as
a multiple-target tracking problem, where objects are potentially in motion at
all times. Although this approach is general, it is computationally expensive.
We argue that such generality is not needed in typical world modeling tasks,
where objects only change state occasionally. More efficient approaches are
enabled by restricting ourselves to such semi-static environments.
We consider a previously-proposed clustering-based world modeling approach
that assumed static environments, and extend it to semi-static domains by
applying a dependent Dirichlet-process (DDP) mixture model. We derive a novel
MAP inference algorithm under this model, subject to data association
constraints. We demonstrate our approach improves computational performance in
semi-static environments.
| Lawson L.S. Wong, Thanard Kurutach, Leslie Pack Kaelbling, Tom\'as
Lozano-P\'erez | null | 1512.00573 | null | null |
Centroid Based Binary Tree Structured SVM for Multi Classification | cs.LG | Support Vector Machines (SVMs) were primarily designed for 2-class
classification, but they have been extended to N-class classification to meet
the need for multiple classes in practical applications. Although N-class
classification using SVM has received considerable research attention,
obtaining the minimum number of classifiers at training and testing time is
still an open research problem. We propose a new algorithm, CBTS-SVM (Centroid
based Binary Tree Structured SVM), which addresses this issue. In this
approach, we build a binary tree of SVM models based on the similarity of the
class labels by finding their distance from the corresponding centroids at the
root level. The experimental results demonstrate that CBTS achieves accuracy
comparable to OVO with reasonable gamma and cost values. On the other hand,
when CBTS is compared with OVA, it gives better accuracy with reduced training
and testing time. Furthermore, CBTS is also scalable, as it is able to handle
large data sets.
| Aruna Govada, Bhavul Gauri and S.K.Sahay | 10.1109/ICACCI.2015.7275618 | 1512.00659 | null | null |
Recognizing Semantic Features in Faces using Deep Learning | cs.LG cs.AI cs.CV stat.ML | The human face constantly conveys information, both consciously and
subconsciously. However, as basic as it is for humans to visually interpret
this information, it is quite a big challenge for machines. Conventional
semantic facial feature recognition and analysis techniques are already in use
and are based on physiological heuristics, but they suffer from lack of
robustness and high computation time. This thesis aims to explore ways for
machines to learn to interpret semantic information available in faces in an
automated manner without requiring manual design of feature detectors, using
the approach of Deep Learning. This thesis provides a study of the effects of
various factors and hyper-parameters of deep neural networks in the process of
determining an optimal network configuration for the task of semantic facial
feature recognition. This thesis explores the effectiveness of the system to
recognize the various semantic features (like emotions, age, gender, ethnicity
etc.) present in faces. Furthermore, the relation between the effect of
high-level concepts on low level features is explored through an analysis of
the similarities in low-level descriptors of different semantic features. This
thesis also demonstrates a novel idea of using a deep network to generate 3-D
Active Appearance Models of faces from real-world 2-D images.
For a more detailed report on this work, please see [arXiv:1512.00743v1].
| Amogh Gudi | null | 1512.00743 | null | null |
Zero-Shot Event Detection by Multimodal Distributional Semantic
Embedding of Videos | cs.CV cs.CL cs.LG | We propose a new zero-shot Event Detection method by Multi-modal
Distributional Semantic embedding of videos. Our model embeds object and action
concepts as well as other available modalities from videos into a
distributional semantic space. To our knowledge, this is the first Zero-Shot
event detection model that is built on top of distributional semantics and
extends it in the following directions: (a) semantic embedding of multimodal
information in videos (with focus on the visual modalities), (b) automatically
determining relevance of concepts/attributes to a free text query, which could
be useful for other applications, and (c) retrieving videos by free text event
query (e.g., "changing a vehicle tire") based on their content. We embed videos
into a distributional semantic space and then measure the similarity between
videos and the event query in a free text form. We validated our method on the
large TRECVID MED (Multimedia Event Detection) challenge. Using only the event
title as a query, our method outperformed the state-of-the-art approach that
uses full event descriptions, improving the MAP metric from 12.6% to 13.5%
and the ROC-AUC metric from 0.73 to 0.83. It is also an order of magnitude
faster.
| Mohamed Elhoseiny, Jingen Liu, Hui Cheng, Harpreet Sawhney, Ahmed
Elgammal | null | 1512.00818 | null | null |
Protein secondary structure prediction using deep convolutional neural
fields | q-bio.BM cs.LG q-bio.QM | Protein secondary structure (SS) prediction is important for studying protein
structure and function. When only the sequence (profile) information is used as
input feature, currently the best predictors can obtain ~80% Q3 accuracy, which
has not been improved in the past decade. Here we present DeepCNF (Deep
Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep
Learning extension of Conditional Neural Fields (CNF), which is an integration
of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can
model not only complex sequence-structure relationship by a deep hierarchical
architecture, but also interdependency between adjacent SS labels, so it is
much more powerful than CNF. Experimental results show that DeepCNF can obtain
~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the
CASP and CAMEO test proteins, greatly outperforming currently popular
predictors. As a general framework, DeepCNF can be used to predict other
protein structure properties such as contact number, disorder regions, and
solvent accessibility.
| Sheng Wang, Jian Peng, Jianzhu Ma and Jinbo Xu | null | 1512.00843 | null | null |
Innovation Pursuit: A New Approach to Subspace Clustering | cs.CV cs.IR cs.IT cs.LG math.IT stat.ML | In subspace clustering, a group of data points belonging to a union of
subspaces are assigned membership to their respective subspaces. This paper
presents a new approach dubbed Innovation Pursuit (iPursuit) to the problem of
subspace clustering using a new geometrical idea whereby subspaces are
identified based on their relative novelties. We present two frameworks in
which the idea of innovation pursuit is used to distinguish the subspaces.
Underlying the first framework is an iterative method that finds the subspaces
consecutively by solving a series of simple linear optimization problems, each
searching for a direction of innovation in the span of the data potentially
orthogonal to all subspaces except for the one to be identified in one step of
the algorithm. A detailed mathematical analysis is provided establishing
sufficient conditions for iPursuit to correctly cluster the data. The proposed
approach can provably yield exact clustering even when the subspaces have
significant intersections. It is shown that the complexity of the iterative
approach scales only linearly in the number of data points and subspaces, and
quadratically in the dimension of the subspaces. The second framework
integrates iPursuit with spectral clustering to yield a new variant of
spectral-clustering-based algorithms. The numerical simulations with both real
and synthetic data demonstrate that iPursuit can often outperform the
state-of-the-art subspace clustering algorithms, more so for subspaces with
significant intersections, and that it significantly improves the
state-of-the-art result for subspace-segmentation-based face clustering.
| Mostafa Rahmani, George Atia | 10.1109/TSP.2017.2749206 | 1512.00907 | null | null |
Neural Enquirer: Learning to Query Tables with Natural Language | cs.AI cs.CL cs.LG cs.NE | We propose Neural Enquirer, a neural network architecture that executes a
natural language (NL) query on a knowledge-base (KB) for answers. Basically,
Neural Enquirer finds the distributed representation of a query and then
executes it on knowledge-base tables to obtain the answer as one of the values
in the tables. Unlike similar efforts in end-to-end training of semantic
parsers, Neural Enquirer is fully "neuralized": it not only gives a
distributional representation of the query and the knowledge-base, but also
realizes the execution of compositional queries as a series of differentiable
operations, with intermediate results (consisting of annotations of the tables
at different levels) saved on multiple layers of memory. Neural Enquirer can be
trained with gradient descent, with which not only the parameters of the
controlling components and semantic parsing component, but also the embeddings
of the tables and query words can be learned from scratch. The training can be
done in an end-to-end fashion, but it can take stronger guidance, e.g., the
step-by-step supervision for complicated queries, and benefit from it. Neural
Enquirer is one step towards building neural network systems which seek to
understand language by executing it on real-world data. Our experiments show that
Neural Enquirer can learn to execute fairly complicated NL queries on tables
with rich structures.
| Pengcheng Yin, Zhengdong Lu, Hang Li, Ben Kao | null | 1512.00965 | null | null |
Fast Low-Rank Matrix Learning with Nonconvex Regularization | cs.NA cs.LG stat.ML | Low-rank modeling has a lot of important applications in machine learning,
computer vision and social network analysis. While the matrix rank is often
approximated by the convex nuclear norm, the use of nonconvex low-rank
regularizers has demonstrated better recovery performance. However, the
resultant optimization problem is much more challenging. A very recent
state-of-the-art is based on the proximal gradient algorithm. However, it
requires an expensive full SVD in each proximal step. In this paper, we show
that for many commonly-used nonconvex low-rank regularizers, a cutoff can be
derived to automatically threshold the singular values obtained from the
proximal operator. This allows the use of the power method to approximate the
SVD efficiently. Moreover, the proximal operator can be reduced to that of a
much smaller matrix projected onto this leading subspace. Convergence, with a rate
of O(1/T) where T is the number of iterations, can be guaranteed. Extensive
experiments are performed on matrix completion and robust principal component
analysis. The proposed method achieves significant speedup over the
state-of-the-art. Moreover, the matrix solution obtained is more accurate and
has a lower rank than that of the traditional nuclear norm regularizer.
| Quanming Yao, James T. Kwok, Wenliang Zhong | null | 1512.00984 | null | null |
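
A rough illustration of the cutoff idea above: apply the thresholding only to
the leading singular pairs, which an iterative (power/Lanczos-type) solver can
compute cheaply. The convex soft-thresholding rule below is a stand-in for the
paper's nonconvex thresholding operators, and the rank parameter k is an
assumption.

import numpy as np
from scipy.sparse.linalg import svds

def prox_leading(Z, lam, k):
    # Only the k leading singular pairs are computed; singular values at or
    # below the cutoff would be thresholded to zero anyway.
    U, s, Vt = svds(Z, k=k)
    s_thr = np.maximum(s - lam, 0.0)  # soft-thresholding of singular values
    return (U * s_thr) @ Vt           # proximal point in the leading subspace

Z = np.random.randn(200, 150)
X = prox_leading(Z, lam=5.0, k=10)
print(np.linalg.matrix_rank(X))
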
Bag Reference Vector for Multi-instance Learning | stat.ML cs.LG | Multi-instance learning (MIL) has a wide range of applications due to its
distinctive characteristics. Although many state-of-the-art algorithms have
achieved decent performances, many existing methods solve the problem only at
the instance level rather than exploiting relations among bags. In this
paper, we propose an efficient algorithm to describe each bag by a
corresponding feature vector obtained by comparing it with other bags. In
other words, the crucial information of a bag is extracted from its
similarity to other reference bags. In addition, we apply extensions of the
Hausdorff distance to represent the similarity, which to a certain extent
overcomes the key challenge of the MIL problem: the ambiguity of instance
labels in positive bags. Experimental results on benchmarks and text categorization tasks show
that the proposed method outperforms the previous state-of-the-art by a large
margin.
| Hanqiang Song and Zhuotun Zhu and Xinggang Wang | null | 1512.00994 | null | null |
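
A small sketch of the bag-reference-vector construction described above,
using the average variant of the Hausdorff distance between instance sets;
which Hausdorff extension to use and which bags serve as references are
assumptions here.

import numpy as np
from scipy.spatial.distance import cdist

def avg_hausdorff(A, B):
    # Mean distance of each instance to its nearest instance in the other
    # bag; a smoothed variant of the Hausdorff distance.
    D = cdist(A, B)
    return 0.5 * (D.min(axis=1).mean() + D.min(axis=0).mean())

def bag_feature(bag, reference_bags):
    # Encode a bag by its distances to every reference bag.
    return np.array([avg_hausdorff(bag, ref) for ref in reference_bags])

bags = [np.random.randn(np.random.randint(3, 8), 5) for _ in range(20)]
features = np.vstack([bag_feature(b, bags) for b in bags])
# A standard single-instance classifier (e.g., an SVM) can then be trained
# on these fixed-length bag features together with the bag labels.
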
Bayesian Matrix Completion via Adaptive Relaxed Spectral Regularization | cs.NA cs.AI cs.LG | Bayesian matrix completion has been studied based on a low-rank matrix
factorization formulation with promising results. However, little work has been
done on Bayesian matrix completion based on the more direct spectral
regularization formulation. We fill this gap by presenting a novel Bayesian
matrix completion method based on spectral regularization. In order to
circumvent the difficulties of dealing with the orthonormality constraints of
singular vectors, we derive a new equivalent form with relaxed constraints,
which then leads us to design an adaptive version of spectral regularization
feasible for Bayesian inference. Our Bayesian method requires no parameter
tuning and can infer the number of latent factors automatically. Experiments on
synthetic and real datasets demonstrate encouraging results on rank recovery
and collaborative filtering, with notably good results for very sparse
matrices.
| Yang Song, Jun Zhu | null | 1512.01110 | null | null |
Deep Reinforcement Learning with Attention for Slate Markov Decision
Processes with High-Dimensional States and Actions | cs.AI cs.HC cs.LG | Many real-world problems come with action spaces represented as feature
vectors. Although high-dimensional control is a largely unsolved problem, there
has recently been progress for modest dimensionalities. Here we report on a
successful attempt at addressing problems of dimensionality as high as $2000$,
of a particular form. Motivated by important applications such as
recommendation systems that do not fit the standard reinforcement learning
frameworks, we introduce Slate Markov Decision Processes (slate-MDPs). A
Slate-MDP is an MDP with a combinatorial action space consisting of slates
(tuples) of primitive actions of which one is executed in an underlying MDP.
The agent does not control the choice of this executed action and the action
might not even be from the slate, e.g., for recommendation systems for which
all recommendations can be ignored. We use deep Q-learning based on feature
representations of both the state and action to learn the value of whole
slates. Unlike existing methods, we optimize for both the combinatorial and
sequential aspects of our tasks. The new agent's superiority over agents that
either ignore the combinatorial or sequential long-term value aspect is
demonstrated on a range of environments with dynamics from a real-world
recommendation system. Further, we use deep deterministic policy gradients to
learn a policy that for each position of the slate, guides attention towards
the part of the action space in which the value is the highest and we only
evaluate actions in this area. The attention is used within a sequentially
greedy procedure leveraging submodularity. Finally, we show how introducing
risk-seeking can dramatically improve the agent's performance and ability to
discover more far-reaching strategies.
| Peter Sunehag, Richard Evans, Gabriel Dulac-Arnold, Yori Zwols, Daniel
Visentin and Ben Coppin | null | 1512.01124 | null | null |
Building Memory with Concept Learning Capabilities from Large-scale
Knowledge Base | cs.CL cs.AI cs.LG | We present a new perspective on neural knowledge base (KB) embeddings, from
which we build a framework that can model symbolic knowledge in the KB together
with its learning process. We show that this framework well regularizes
the previous neural KB embedding model for superior performance in reasoning
tasks, while retaining the capability of dealing with unseen entities, that
is, learning their embeddings from natural language descriptions, much like
how humans learn semantic concepts.
| Jiaxin Shi, Jun Zhu | null | 1512.01173 | null | null |
MXNet: A Flexible and Efficient Machine Learning Library for
Heterogeneous Distributed Systems | cs.DC cs.LG cs.MS cs.NE | MXNet is a multi-language machine learning (ML) library to ease the
development of ML algorithms, especially for deep neural networks. Embedded in
the host language, it blends declarative symbolic expression with imperative
tensor computation. It offers auto differentiation to derive gradients. MXNet
is computation and memory efficient and runs on various heterogeneous systems,
ranging from mobile devices to distributed GPU clusters.
This paper describes both the API design and the system implementation of
MXNet, and explains how embedding of both symbolic expression and tensor
operation is handled in a unified fashion. Our preliminary experiments reveal
promising results on large scale deep neural network applications using
multiple GPU machines.
| Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang,
Tianjun Xiao, Bing Xu, Chiyuan Zhang and Zheng Zhang | null | 1512.01274 | null | null |
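
The sketch below illustrates the blend of declarative and imperative styles
described in the abstract, using MXNet's public Python API from around this
period; exact signatures may vary across versions.

import mxnet as mx

# Declarative: describe the network as a symbolic graph; MXNet can then
# optimize and schedule the whole computation.
data = mx.sym.Variable('data')
fc1 = mx.sym.FullyConnected(data=data, num_hidden=128)
act1 = mx.sym.Activation(data=fc1, act_type='relu')
fc2 = mx.sym.FullyConnected(data=act1, num_hidden=10)
net = mx.sym.SoftmaxOutput(data=fc2, name='softmax')

# Imperative: NDArray operations execute eagerly, NumPy-style, on CPU or GPU.
a = mx.nd.ones((2, 3))
b = mx.nd.ones((2, 3)) * 2
c = a + b
print(c.asnumpy())
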
Predicting the top and bottom ranks of billboard songs using Machine
Learning | cs.CL cs.LG | The music industry is a $130 billion industry. Predicting whether a song
will catch the pulse of the audience has a real impact on the industry. In
this paper we analyze the language of song lyrics using several computational
linguistic algorithms and predict whether a song will make it to the top or
bottom of the Billboard rankings based on these language features. We trained
and tested an SVM classifier with a radial kernel function on the linguistic
features. Results indicate that we can classify whether a song belongs to the
top or bottom of the Billboard charts with a precision of 0.76.
| Vivek Datla and Abhinav Vishnu | null | 1512.01283 | null | null |
Predicting and visualizing psychological attributions with a deep neural
network | cs.CV cs.LG cs.NE | Judgments about personality based on facial appearance are strong effectors
in social decision making, and are known to have impact on areas from
presidential elections to jury decisions. Recent work has shown that it is
possible to predict perception of memorability, trustworthiness, intelligence
and other attributes in human face images. The most successful of these
approaches require face images expertly annotated with key facial landmarks. We
demonstrate a Convolutional Neural Network (CNN) model that is able to perform
the same task without the need for landmark features, thereby greatly
increasing efficiency. The model has high accuracy, surpassing human-level
performance in some cases. Furthermore, we use a deconvolutional approach to
visualize important features for perception of 22 attributes and demonstrate a
new method for separately visualizing positive and negative features.
| Edward Grant, Stephan Sahm, Mariam Zabihi, Marcel van Gerven | null | 1512.01289 | null | null |
Fixed-Point Performance Analysis of Recurrent Neural Networks | cs.LG cs.NE | Recurrent neural networks have shown excellent performance in many
applications, however they require increased complexity in hardware or software
based implementations. The hardware complexity can be much lowered by
minimizing the word-length of weights and signals. This work analyzes the
fixed-point performance of recurrent neural networks using a retrain based
quantization method. The quantization sensitivity of each layer in RNNs is
studied, and the overall fixed-point optimization results, which minimize the
capacity of the weights without sacrificing performance, are presented.
Language modeling and phoneme recognition examples are used.
| Sungho Shin, Kyuyeon Hwang, and Wonyong Sung | 10.1109/MSP.2015.2411564 | 1512.01322 | null | null |
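
A toy sketch of retrain-based quantization as commonly formulated: weights
are quantized for the forward pass while a float master copy receives the
gradient updates. The quadratic stand-in loss, bit width, and step size are
placeholders, not the paper's setup.

import numpy as np

def quantize(w, n_bits, step):
    # Uniform symmetric quantizer with 2^(n_bits-1) - 1 levels per sign.
    levels = 2 ** (n_bits - 1) - 1
    return step * np.clip(np.round(w / step), -levels, levels)

w_float = np.random.randn(64, 64) * 0.1  # float master copy of the weights
lr = 1e-2
for _ in range(200):                     # retraining loop
    w_q = quantize(w_float, n_bits=3, step=0.05)
    grad = 2 * w_q                       # stand-in gradient (of ||w||^2)
    w_float -= lr * grad                 # update the float copy
print(np.unique(quantize(w_float, 3, 0.05)).size, "distinct weight values")
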
Toward a Taxonomy and Computational Models of Abnormalities in Images | cs.CV cs.AI cs.HC cs.IT cs.LG math.IT | The human visual system can spot an abnormal image, and reason about what
makes it strange. This task has not received enough attention in computer
vision. In this paper we study various types of atypicalities in images in a
more comprehensive way than has been done before. We propose a new dataset of
abnormal images showing a wide range of atypicalities. We design human subject
experiments to discover a coarse taxonomy of the reasons for abnormality. Our
experiments reveal three major categories of abnormality: object-centric,
scene-centric, and contextual. Based on this taxonomy, we propose a
comprehensive computational model that can predict all different types of
abnormality in images and outperform prior art in abnormality recognition.
| Babak Saleh, Ahmed Elgammal, Jacob Feldman, Ali Farhadi | null | 1512.01325 | null | null |
Q-Networks for Binary Vector Actions | cs.NE cs.LG | In this paper, reinforcement learning with binary vector actions is
investigated. We suggest an effective neural network architecture for
approximating an action-value function with binary vector actions. The proposed
architecture approximates the action-value function by a linear function with
respect to the action vector, but is still non-linear with respect to the state
input. We show that this approximation method enables the efficient calculation
of greedy action selection and softmax action selection. Using this
architecture, we suggest an online algorithm based on Q-learning. The empirical
results in the grid world and the blocker task suggest that our approximation
architecture would be effective for RL problems with large discrete action
sets.
| Naoto Yoshida | null | 1512.01332 | null | null |
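
A minimal sketch of the action-value form described above, Q(s, a) = f(s)^T a,
linear in the binary action vector but nonlinear in the state. Greedy and
softmax selection then decompose across bits, which is the efficiency claim;
f(s) here is a placeholder for the output of a state network.

import numpy as np

def q_value(f_s, a):
    return f_s @ a                # linear in the action vector

def greedy_action(f_s):
    # Maximizing f_s . a over a in {0,1}^d sets each bit independently:
    # a_i = 1 exactly when its coefficient is positive.
    return (f_s > 0).astype(int)

def softmax_action(f_s, beta=1.0):
    # The Boltzmann distribution over binary vectors factorizes into
    # independent Bernoullis: P(a_i = 1) = sigmoid(beta * f_s[i]).
    p = 1.0 / (1.0 + np.exp(-beta * f_s))
    return (np.random.rand(f_s.size) < p).astype(int)

f_s = np.random.randn(8)          # placeholder state-network output
print(q_value(f_s, greedy_action(f_s)))
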
Proposition of a Theoretical Model for Missing Data Imputation using
Deep Learning and Evolutionary Algorithms | cs.NE cs.LG | In the last couple of decades, there have been major advancements in the
domain of missing data imputation. The techniques in the domain include,
amongst others, Expectation Maximization, Neural Networks with Evolutionary
Algorithms or optimization techniques, and K-Nearest Neighbor approaches. The
presence of missing data entries in databases renders the tasks of
decision-making and data analysis nontrivial. As a result, this area has
attracted a lot of research interest, with the aim of yielding accurate and
time-efficient missing data imputation techniques, especially for
time-sensitive applications such as power plants and winding processes. In
this article, considering arbitrary and monotone missing data
patterns, we hypothesize that the use of deep neural networks built using
autoencoders and denoising autoencoders in conjunction with genetic algorithms,
swarm intelligence and maximum likelihood estimator methods as novel data
imputation techniques will lead to better imputed values than existing
techniques. Also considered are the missing at random, missing completely at
random and missing not at random missing data mechanisms. We also intend to use
fuzzy logic in tandem with deep neural networks to perform the missing data
imputation tasks, as well as different building blocks for the deep neural
networks like Stacked Restricted Boltzmann Machines and Deep Belief Networks to
test our hypothesis. The motivation behind this article is the need for missing
data imputation techniques that lead to better imputed values than existing
methods with higher accuracies and lower errors.
| Collins Leke, Tshilidzi Marwala and Satyakama Paul | null | 1512.01362 | null | null |
Max-Pooling Dropout for Regularization of Convolutional Neural Networks | cs.LG cs.CV cs.NE | Recently, dropout has seen increasing use in deep learning. For deep
convolutional neural networks, dropout is known to work well in fully-connected
layers. However, its effect in pooling layers is still not clear. This paper
demonstrates that max-pooling dropout is equivalent to randomly picking
activation based on a multinomial distribution at training time. In light of
this insight, we advocate employing our proposed probabilistic weighted
pooling, instead of commonly used max-pooling, to act as model averaging at
test time. Empirical evidence validates the superiority of probabilistic
weighted pooling. We also compare max-pooling dropout and stochastic pooling,
both of which introduce stochasticity based on multinomial distributions at
pooling stage.
| Haibing Wu and Xiaodong Gu | null | 1512.01400 | null | null |
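
A sketch of probabilistic weighted pooling at test time, following the
multinomial view stated above: with retain probability p, the i-th largest
activation in a window is the pooled output exactly when it survives and all
larger activations are dropped.

import numpy as np

def prob_weighted_pool(window, p):
    a = np.sort(window)                      # ascending: a[0] <= ... <= a[-1]
    n = a.size
    q = 1.0 - p                              # dropout probability
    probs = p * q ** (n - 1 - np.arange(n))  # P(output = a[i])
    return np.dot(probs, a)                  # the all-dropped event outputs 0

window = np.array([0.2, 1.5, 0.7, 0.9])
print(prob_weighted_pool(window, p=0.5))
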
State of the Art Control of Atari Games Using Shallow Reinforcement
Learning | cs.LG | The recently introduced Deep Q-Networks (DQN) algorithm has gained attention
as one of the first successful combinations of deep neural networks and
reinforcement learning. Its promise was demonstrated in the Arcade Learning
Environment (ALE), a challenging framework composed of dozens of Atari 2600
games used to evaluate general competency in AI. It achieved dramatically
better results than earlier approaches, showing that its ability to learn good
representations is quite robust and general. This paper attempts to understand
the principles that underlie DQN's impressive performance and to better
contextualize its success. We systematically evaluate the importance of key
representational biases encoded by DQN's network by proposing simple linear
representations that make use of these concepts. Incorporating these
characteristics, we obtain a computationally practical feature set that
achieves competitive performance to DQN in the ALE. Besides offering insight
into the strengths and weaknesses of DQN, we provide a generic representation
for the ALE, significantly reducing the burden of learning a representation for
each game. Moreover, we also provide a simple, reproducible benchmark for the
sake of comparison to future work in the ALE.
| Yitao Liang, Marlos C. Machado, Erik Talvitie, Michael Bowling | null | 1512.01563 | null | null |
Hybrid Approach for Inductive Semi Supervised Learning using Label
Propagation and Support Vector Machine | cs.LG cs.DC | Semi-supervised learning methods have gained importance in today's world
because of the large expense and time involved in labeling unlabeled data by
human experts. The proposed hybrid approach uses SVM and Label Propagation to
label the unlabeled data. In the process, at each step an SVM is trained to
minimize the error and thus improve the prediction quality. Experiments are
conducted using SVM and logistic regression (Logreg). Results show that SVM
performs substantially better than Logreg. The approach is tested using 12
datasets of different sizes ranging from the order of 1000s to the order of
10000s. Results show that the proposed approach outperforms Label Propagation
by a large margin with F-measure of almost twice on average. The parallel
version of the proposed approach is also designed and implemented, the analysis
shows that the training time decreases significantly when parallel version is
used.
| Aruna Govada, Pravin Joshi, Sahil Mittal and Sanjay K Sahay | 10.1007/978-3-319-21024-7_14 | 1512.01568 | null | null |
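
A rough sketch of alternating label propagation with SVM retraining, in the
spirit of the hybrid approach above; the confidence-based promotion rule and
the alternation schedule are assumptions, using scikit-learn conventions
(label -1 marks unlabeled points).

import numpy as np
from sklearn.semi_supervised import LabelPropagation
from sklearn.svm import SVC

def hybrid_ssl(X, y, n_rounds=5, top_frac=0.2):
    y = y.copy()
    for _ in range(n_rounds):
        unlabeled = np.where(y == -1)[0]
        if unlabeled.size == 0:
            break
        lp = LabelPropagation().fit(X, y)
        conf = lp.label_distributions_.max(axis=1)
        # Promote the most confidently propagated points to labeled status.
        k = max(1, int(top_frac * unlabeled.size))
        promote = unlabeled[np.argsort(-conf[unlabeled])[:k]]
        y[promote] = lp.transduction_[promote]
    return SVC().fit(X[y != -1], y[y != -1])  # SVM trained on grown label set
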
Extracting Biomolecular Interactions Using Semantic Parsing of
Biomedical Text | cs.CL cs.AI cs.IR cs.IT cs.LG math.IT | We advance the state of the art in biomolecular interaction extraction with
three contributions: (i) We show that deep, Abstract Meaning Representations
(AMR) significantly improve the accuracy of a biomolecular interaction
extraction system when compared to a baseline that relies solely on surface-
and syntax-based features; (ii) In contrast with previous approaches that infer
relations on a sentence-by-sentence basis, we expand our framework to enable
consistent predictions over sets of sentences (documents); (iii) We further
modify and expand a graph kernel learning framework to enable concurrent
exploitation of automatically induced AMR (semantic) and dependency structure
(syntactic) representations. Our experiments show that our approach yields
interaction extraction systems that are more robust in environments where there
is a significant mismatch between training and test conditions.
| Sahil Garg, Aram Galstyan, Ulf Hermjakob, and Daniel Marcu | null | 1512.01587 | null | null |
Creation of a Deep Convolutional Auto-Encoder in Caffe | cs.NE cs.CV cs.LG | The development of a deep (stacked) convolutional auto-encoder in the Caffe
deep learning framework is presented in this paper. We describe simple
principles which we used to create this model in Caffe. The proposed model of
convolutional auto-encoder does not have pooling/unpooling layers yet. The
results of our experimental research show comparable accuracy of dimensionality
reduction in comparison with a classic auto-encoder on the example of MNIST
dataset.
| Volodymyr Turchenko, Artur Luczak | null | 1512.01596 | null | null |
Risk-Constrained Reinforcement Learning with Percentile Risk Criteria | cs.AI cs.LG math.OC | In many sequential decision-making problems one is interested in minimizing
an expected cumulative cost while taking into account \emph{risk}, i.e.,
increased awareness of events of small probability and high consequences.
Accordingly, the objective of this paper is to present efficient reinforcement
learning algorithms for risk-constrained Markov decision processes (MDPs),
where risk is represented via a chance constraint or a constraint on the
conditional value-at-risk (CVaR) of the cumulative cost. We collectively refer
to such problems as percentile risk-constrained MDPs.
Specifically, we first derive a formula for computing the gradient of the
Lagrangian function for percentile risk-constrained MDPs. Then, we devise
policy gradient and actor-critic algorithms that (1) estimate such gradient,
(2) update the policy in the descent direction, and (3) update the Lagrange
multiplier in the ascent direction. For these algorithms we prove convergence
to locally optimal policies. Finally, we demonstrate the effectiveness of our
algorithms in an optimal stopping problem and an online marketing application.
| Yinlam Chow and Mohammad Ghavamzadeh and Lucas Janson and Marco Pavone | null | 1512.01629 | null | null |
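
For concreteness, the sample version of the risk functional constrained above
can be computed as below; taking alpha as the tail level (e.g., the worst 5%
of costs) is a convention chosen here.

import numpy as np

def var_cvar(costs, alpha):
    var = np.quantile(costs, 1.0 - alpha)  # value-at-risk at level alpha
    cvar = costs[costs >= var].mean()      # mean cost over the worst tail
    return var, cvar

costs = np.random.lognormal(size=10000)
print(var_cvar(costs, alpha=0.05))
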
Approximated and User Steerable tSNE for Progressive Visual Analytics | cs.CV cs.LG | Progressive Visual Analytics aims at improving the interactivity in existing
analytics techniques by means of visualization as well as interaction with
intermediate results. One key method for data analysis is dimensionality
reduction, for example, to produce 2D embeddings that can be visualized and
analyzed efficiently. t-Distributed Stochastic Neighbor Embedding (tSNE) is a
well-suited technique for the visualization of high-dimensional data.
tSNE can create meaningful intermediate results but suffers from a slow
initialization that constrains its application in Progressive Visual Analytics.
We introduce a controllable tSNE approximation (A-tSNE), which trades off speed
and accuracy, to enable interactive data exploration. We offer real-time
visualization techniques, including a density-based solution and a Magic Lens
to inspect the degree of approximation. With this feedback, the user can decide
on local refinements and steer the approximation level during the analysis. We
demonstrate our technique with several datasets, in a real-world research
scenario and for the real-time analysis of high-dimensional streams to
illustrate its effectiveness for interactive data analysis.
| Nicola Pezzotti, Boudewijn P.F. Lelieveldt, Laurens van der Maaten,
Thomas H\"ollt, Elmar Eisemann, and Anna Vilanova | null | 1512.01655 | null | null |
Deep Attention Recurrent Q-Network | cs.LG | A deep learning approach to reinforcement learning led to a general learner
able to train on visual input to play a variety of arcade games at the human
and superhuman levels. Its creators at Google DeepMind called the approach
Deep Q-Network (DQN). We present an extension of DQN with "soft" and
"hard" attention mechanisms. Tests of the proposed Deep Attention Recurrent
Q-Network (DARQN) algorithm on multiple Atari 2600 games show a level of
performance superior to that of DQN. Moreover, built-in attention mechanisms
allow a direct online monitoring of the training process by highlighting the
regions of the game screen the agent is focusing on when making decisions.
| Ivan Sorokin, Alexey Seleznev, Mikhail Pavlov, Aleksandr Fedorov,
Anastasiia Ignateva | null | 1512.01693 | null | null |
Variance Reduction for Distributed Stochastic Gradient Descent | cs.LG cs.DC math.OC stat.ML | Variance reduction (VR) methods boost the performance of stochastic gradient
descent (SGD) by enabling the use of larger, constant stepsizes and preserving
linear convergence rates. However, current variance reduced SGD methods require
either high memory usage or an exact gradient computation (using the entire
dataset) at the end of each epoch. This limits the use of VR methods in
practical distributed settings. In this paper, we propose a variance reduction
method, called VR-lite, that does not require full gradient computations or
extra storage. We explore distributed synchronous and asynchronous variants
that are scalable and remain stable with low communication frequency. We
empirically compare both the sequential and distributed algorithms to
state-of-the-art stochastic optimization methods, and find that our proposed
algorithms perform favorably compared to other stochastic methods.
| Soham De, Gavin Taylor, Tom Goldstein | null | 1512.01708 | null | null |
Generating News Headlines with Recurrent Neural Networks | cs.CL cs.LG cs.NE | We describe an application of an encoder-decoder recurrent neural network
with LSTM units and attention to generating headlines from the text of news
articles. We find that the model is quite effective at concisely paraphrasing
news articles. Furthermore, we study how the neural network decides which input
words to pay attention to, and specifically we identify the function of the
different neurons in a simplified attention mechanism. Interestingly, our
simplified attention mechanism performs better than the more complex attention
mechanism on a held-out set of articles.
| Konstantin Lopyrev | null | 1512.01712 | null | null |
Similarity Learning via Adaptive Regression and Its Application to Image
Retrieval | cs.LG | We study the problem of similarity learning and its application to image
retrieval with large-scale data. The similarity between pairs of images can be
measured by the distances between their high dimensional representations, and
the problem of learning the appropriate similarity is often addressed by
distance metric learning. However, distance metric learning requires the
learned metric to be a PSD matrix, which is computationally expensive and not
necessary for the retrieval ranking problem. On the other hand, the bilinear
model is shown to be more flexible for the large-scale image retrieval task;
hence, we
adopt it to learn a matrix for estimating pairwise similarities under the
regression framework. By adaptively updating the target matrix in regression,
we can mimic the hinge loss, which is more appropriate for similarity learning
problem. Although the regression problem can have a closed-form solution, the
computational cost can be very high. The computational challenges come
from two aspects: the number of images can be very large and image features
have high dimensionality. We address the first challenge by compressing the
data by a randomized algorithm with the theoretical guarantee. For the high
dimensional issue, we address it by taking low rank assumption and applying
alternating method to obtain the partial matrix, which has a global optimal
solution. Empirical studies on real world image datasets (i.e., Caltech and
ImageNet) demonstrate the effectiveness and efficiency of the proposed method.
| Qi Qian, Inci M. Baytas, Rong Jin, Anil Jain and Shenghuo Zhu | null | 1512.01728 | null | null |
Large Scale Distributed Semi-Supervised Learning Using Streaming
Approximation | cs.LG cs.AI | Traditional graph-based semi-supervised learning (SSL) approaches, even
though widely applied, are not suited for massive data and large label
scenarios since they scale linearly with the number of edges $|E|$ and distinct
labels $m$. To deal with the large label size problem, recent works propose
sketch-based methods to approximate the distribution on labels per node thereby
achieving a space reduction from $O(m)$ to $O(\log m)$, under certain
conditions. In this paper, we present a novel streaming graph-based SSL
approximation that captures the sparsity of the label distribution and ensures
the algorithm propagates labels accurately, and further reduces the space
complexity per node to $O(1)$. We also provide a distributed version of the
algorithm that scales well to large data sizes. Experiments on real-world
datasets demonstrate that the new method achieves better performance than
existing state-of-the-art algorithms with significant reduction in memory
footprint. We also study different graph construction mechanisms for natural
language applications and propose a robust graph augmentation strategy trained
using state-of-the-art unsupervised deep learning architectures that yields
further significant quality gains.
| Sujith Ravi, Qiming Diao | null | 1512.01752 | null | null |
Explaining reviews and ratings with PACO: Poisson Additive Co-Clustering | cs.LG stat.ML | Understanding a user's motivations provides valuable information beyond the
ability to recommend items. Quite often this can be accomplished by perusing
both ratings and review texts, since it is the latter where the reasoning for
specific preferences is explicitly expressed.
Unfortunately matrix factorization approaches to recommendation result in
large, complex models that are difficult to interpret and give recommendations
that are hard to clearly explain to users. In contrast, in this paper, we
attack this problem through succinct additive co-clustering. We devise a novel
Bayesian technique for summing co-clusterings of Poisson distributions. With
this novel technique we propose a new Bayesian model for joint collaborative
filtering of ratings and text reviews through a sum of simple co-clusterings.
The simple structure of our model yields easily interpretable recommendations.
Even with a simple, succinct structure, our model outperforms competitors in
terms of predicting ratings with reviews.
| Chao-Yuan Wu, Alex Beutel, Amr Ahmed, Alexander J. Smola | null | 1512.01845 | null | null |
Rademacher Complexity of the Restricted Boltzmann Machine | cs.LG | The Boltzmann machine, a fundamental building block of deep belief networks
and deep Boltzmann machines, is widely used in the deep learning community
and has achieved great success. However, theoretical understanding of many of
its aspects is still far from clear. In this paper, we study the Rademacher
complexity of both the asymptotic restricted Boltzmann machine and the
practical implementation with the single-step contrastive divergence (CD-1)
procedure. Our results disclose the fact that the practical training
procedure indeed increases the Rademacher complexity of restricted Boltzmann
machines. A further research direction might be the investigation of the VC
dimension of a compositional function used in the CD-1 procedure.
| Xiao Zhang | null | 1512.01914 | null | null |
Thinking Required | cs.LG cs.AI cs.CL | There exists a theory of a single general-purpose learning algorithm which
could explain the principles of its operation. It assumes an initial rough
architecture, a small library of simple innate circuits which are prewired at
birth, and proposes that all significant mental algorithms are learned. Given
current understanding and observations, this paper reviews and lists the
ingredients of such an algorithm from architectural and functional
perspectives.
| Kamil Rocki | null | 1512.01926 | null | null |
Fast Optimization Algorithm on Riemannian Manifolds and Its Application
in Low-Rank Representation | cs.NA cs.CV cs.LG | The paper addresses the problem of optimizing a class of composite functions
on Riemannian manifolds and a new first order optimization algorithm (FOA) with
a fast convergence rate is proposed. Through the theoretical analysis for FOA,
it has been proved that the algorithm has quadratic convergence. The
experiments in the matrix completion task show that FOA has better performance
than other first order optimization methods on Riemannian manifolds. A fast
subspace pursuit method based on FOA, SP-RPRG(ALM), is proposed to solve the
low-rank representation model based on the augmented Lagrange method on the
low-rank matrix variety. Experimental results on synthetic and real data sets
are presented to demonstrate that both FOA and SP-RPRG(ALM) achieve superior
performance in terms of faster convergence and higher accuracy.
| Haoran Chen and Yanfeng Sun and Junbin Gao and Yongli Hu | null | 1512.01927 | null | null |
A Novel Approach to Distributed Multi-Class SVM | cs.LG cs.DC | With data sizes constantly expanding, and with classical machine learning
algorithms that analyze such data requiring larger and larger amounts of
computation time and storage space, the need to distribute computation and
memory requirements among several computers has become apparent. Although
substantial work has been done in developing distributed binary SVM algorithms
and multi-class SVM algorithms individually, the field of multi-class
distributed SVMs remains largely unexplored. This research proposes a novel
algorithm that implements the Support Vector Machine over a multi-class dataset
and is efficient in a distributed environment (here, Hadoop). The idea is to
recursively divide the dataset in half and compute the optimal Support
Vector Machine for each half during the training phase, much like a
divide-and-conquer approach. At testing time, this structure is effectively
exploited to significantly reduce the prediction time. Our algorithm has
shown better computation time during the prediction phase than the
traditional sequential SVM methods (One vs. One, One vs. Rest) and
outperforms them as the size of the dataset grows.
than the traditional multi-class algorithms.
| Aruna Govada, Shree Ranjani, Aditi Viswanathan and S.K.Sahay | 10.14738/tmlai.25.562 | 1512.01993 | null | null |
Jointly Modeling Topics and Intents with Global Order Structure | cs.CL cs.IR cs.LG | Modeling document structure is of great importance for discourse analysis and
related applications. The goal of this research is to capture the document
intent structure by modeling documents as a mixture of topic words and
rhetorical words. While the topics are relatively unchanged through one
document, the rhetorical functions of sentences usually change following
certain orders in discourse. We propose GMM-LDA, a topic modeling based
Bayesian unsupervised model, to analyze the document intent structure
cooperated with order information. Our model is flexible that has the ability
to combine the annotations and do supervised learning. Additionally, entropic
regularization can be introduced to model the significant divergence between
topics and intents. We perform experiments in both unsupervised and supervised
settings, results show the superiority of our model over several
state-of-the-art baselines.
| Bei Chen, Jun Zhu, Nan Yang, Tian Tian, Ming Zhou, Bo Zhang | null | 1512.02009 | null | null |
How to Discount Deep Reinforcement Learning: Towards New Dynamic
Strategies | cs.LG cs.AI | Using deep neural nets as function approximators for reinforcement learning
tasks has recently been shown to be very powerful for solving problems
approaching real-world complexity. Using these results as a benchmark, we
discuss the role that the discount factor may play in the quality of the
learning process of a deep Q-network (DQN). When the discount factor
progressively increases up to its final value, we empirically show that it is
possible to significantly reduce the number of learning steps. When used in
conjunction with a varying learning rate, we empirically show that it
outperforms the original DQN on several experiments. We relate this phenomenon to
the instabilities of neural networks when they are used in an approximate
Dynamic Programming setting. We also describe the possibility to fall within a
local optimum during the learning process, thus connecting our discussion with
the exploration/exploitation dilemma.
| Vincent Fran\c{c}ois-Lavet, Raphael Fonteneau, Damien Ernst | null | 1512.02011 | null | null |
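
A minimal sketch of one plausible increasing-discount schedule of the kind
discussed above; the update rule, rate, and pairing with learning-rate decay
are illustrative assumptions, and train_step is a hypothetical placeholder
for one DQN update.

num_steps, gamma, gamma_max = 1000, 0.5, 0.99
lr = 1e-3

def train_step(gamma, lr):  # hypothetical stand-in for one DQN update
    pass

for t in range(num_steps):
    train_step(gamma, lr)
    gamma = min(gamma_max, 1.0 - 0.98 * (1.0 - gamma))  # push gamma toward 1
    lr *= 0.9995                                        # decay learning rate
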
Discriminative Nonparametric Latent Feature Relational Models with Data
Augmentation | cs.LG stat.ML | We present a discriminative nonparametric latent feature relational model
(LFRM) for link prediction to automatically infer the dimensionality of latent
features. Under the generic RegBayes (regularized Bayesian inference)
framework, we handily incorporate the prediction loss with probabilistic
inference of a Bayesian model; set distinct regularization parameters for
different types of links to handle the imbalance issue in real networks; and
unify the analysis of both the smooth logistic log-loss and the piecewise
linear hinge loss. For the nonconjugate posterior inference, we present a
simple Gibbs sampler via data augmentation, without making restricting
assumptions as done in variational methods. We further develop an approximate
sampler using stochastic gradient Langevin dynamics to handle large networks
with hundreds of thousands of entities and millions of links, orders of
magnitude larger than what existing LFRM models can process. Extensive studies
on various real networks show promising performance.
| Bei Chen, Ning Chen, Jun Zhu, Jiaming Song, Bo Zhang | null | 1512.02016 | null | null |
Risk Minimization in Structured Prediction using Orbit Loss | cs.LG | We introduce a new surrogate loss function called orbit loss in the
structured prediction framework, which has good theoretical and practical
advantages. While the orbit loss is not convex, it has a simple analytical
gradient and a simple perceptron-like learning rule. We analyze the new loss
theoretically and state a PAC-Bayesian generalization bound. We also prove that
the new loss is consistent in the strong sense; namely, the risk achieved by
the set of the trained parameters approaches the infimum risk achievable by any
linear decoder over the given features. Methods that are aimed at risk
minimization, such as the structured ramp loss, the structured probit loss and
the direct loss minimization require at least two inference operations per
training iteration. In this sense, the orbit loss is more efficient as it
requires only one inference operation per training iteration, while yielding
similar performance. We conclude the paper with an empirical comparison of the
proposed loss function to the structured hinge loss, the structured ramp loss,
the structured probit loss and the direct loss minimization method on several
benchmark datasets and tasks.
| Danny Karmon and Joseph Keshet | null | 1512.02033 | null | null |
Clustering by Deep Nearest Neighbor Descent (D-NND): A Density-based
Parameter-Insensitive Clustering Method | stat.ML cs.CV cs.LG stat.CO stat.ME | Most density-based clustering methods largely rely on how well the underlying
density is estimated. However, density estimation itself is also a challenging
problem, especially the determination of the kernel bandwidth. A large
bandwidth could lead to the over-smoothed density estimation in which the
number of density peaks could be less than the true clusters, while a small
bandwidth could lead to the under-smoothed density estimation in which spurious
density peaks, or called the "ripple noise", would be generated in the
estimated density. In this paper, we propose a density-based hierarchical
clustering method, called the Deep Nearest Neighbor Descent (D-NND), which
could learn the underlying density structure layer by layer and capture the
cluster structure at the same time. The over-smoothed density estimation could
be largely avoided and the negative effect of the under-estimated cases could
also be largely reduced. Overall, D-NND presents not only the strong capability
of discovering the underlying cluster structure but also the remarkable
reliability due to its insensitivity to parameters.
| Teng Qiu, Yongjie Li | null | 1512.02097 | null | null |
Obtaining A Linear Combination of the Principal Components of a Matrix
on Quantum Computers | quant-ph cs.LG math.ST stat.TH | Principal component analysis is a multivariate statistical method frequently
used in science and engineering to reduce the dimension of a problem or extract
the most significant features from a dataset. In this paper, using a similar
notion to the quantum counting, we show how to apply the amplitude
amplification together with the phase estimation algorithm to an operator in
order to procure the eigenvectors of the operator associated to the eigenvalues
defined in the range $\left[a, b\right]$, where $a$ and $b$ are real and $0
\leq a \leq b \leq 1$. This makes possible to obtain a combination of the
eigenvectors associated to the largest eigenvalues and so can be used to do
principal component analysis on quantum computers.
| Anmer Daskin | 10.1007/s11128-016-1388-7 | 1512.02109 | null | null |
A Large Dataset to Train Convolutional Networks for Disparity, Optical
Flow, and Scene Flow Estimation | cs.CV cs.LG stat.ML | Recent work has shown that optical flow estimation can be formulated as a
supervised learning task and can be successfully solved with convolutional
networks. Training of the so-called FlowNet was enabled by a large
synthetically generated dataset. The present paper extends the concept of
optical flow estimation via convolutional networks to disparity and scene flow
estimation. To this end, we propose three synthetic stereo video datasets with
sufficient realism, variation, and size to successfully train large networks.
Our datasets are the first large-scale datasets to enable training and
evaluating scene flow methods. Besides the datasets, we present a convolutional
network for real-time disparity estimation that provides state-of-the-art
results. By combining a flow and disparity estimation network and training it
jointly, we demonstrate the first scene flow estimation with a convolutional
network.
| Nikolaus Mayer, Eddy Ilg, Philip H\"ausser, Philipp Fischer, Daniel
Cremers, Alexey Dosovitskiy, Thomas Brox | 10.1109/CVPR.2016.438 | 1512.02134 | null | null |
The Teaching Dimension of Linear Learners | cs.LG | Teaching dimension is a learning theoretic quantity that specifies the
minimum training set size to teach a target model to a learner. Previous
studies on teaching dimension focused on version-space learners which maintain
all hypotheses consistent with the training data, and cannot be applied to
modern machine learners which select a specific hypothesis via optimization.
This paper presents the first known teaching dimension for ridge regression,
support vector machines, and logistic regression. We also exhibit optimal
training sets that match these teaching dimensions. Our approach generalizes to
other linear learners.
| Ji Liu and Xiaojin Zhu | null | 1512.02181 | null | null |
Pseudo-Bayesian Robust PCA: Algorithms and Analyses | cs.CV cs.LG stat.ML | Commonly used in computer vision and other applications, robust PCA
represents an algorithmic attempt to reduce the sensitivity of classical PCA to
outliers. The basic idea is to learn a decomposition of some data matrix of
interest into low rank and sparse components, the latter representing unwanted
outliers. Although the resulting optimization problem is typically NP-hard,
convex relaxations provide a computationally-expedient alternative with
theoretical support. However, in practical regimes performance guarantees break
down and a variety of non-convex alternatives, including Bayesian-inspired
models, have been proposed to boost estimation quality. Unfortunately though,
without additional a priori knowledge none of these methods can significantly
expand the critical operational range such that exact principal subspace
recovery is possible. Into this mix we propose a novel pseudo-Bayesian
algorithm that explicitly compensates for design weaknesses in many existing
non-convex approaches leading to state-of-the-art performance with a sound
analytical foundation. Surprisingly, our algorithm can even outperform convex
matrix completion despite the fact that the latter is provided with perfect
knowledge of which entries are not corrupted.
| Tae-Hyun Oh, Yasuyuki Matsushita, In So Kweon, David Wipf | null | 1512.02188 | null | null |
Fast spectral algorithms from sum-of-squares proofs: tensor
decomposition and planted sparse vectors | cs.DS cs.CC cs.LG stat.ML | We consider two problems that arise in machine learning applications: the
problem of recovering a planted sparse vector in a random linear subspace and
the problem of decomposing a random low-rank overcomplete 3-tensor. For both
problems, the best known guarantees are based on the sum-of-squares method. We
develop new algorithms inspired by analyses of the sum-of-squares method. Our
algorithms achieve the same or similar guarantees as sum-of-squares for these
problems but the running time is significantly faster.
For the planted sparse vector problem, we give an algorithm with running time
nearly linear in the input size that approximately recovers a planted sparse
vector with up to constant relative sparsity in a random subspace of $\mathbb
R^n$ of dimension up to $\tilde \Omega(\sqrt n)$. These recovery guarantees
match the best known ones of Barak, Kelner, and Steurer (STOC 2014) up to
logarithmic factors.
For tensor decomposition, we give an algorithm with running time close to
linear in the input size (with exponent $\approx 1.086$) that approximately
recovers a component of a random 3-tensor over $\mathbb R^n$ of rank up to
$\tilde \Omega(n^{4/3})$. The best previous algorithm for this problem due to
Ge and Ma (RANDOM 2015) works up to rank $\tilde \Omega(n^{3/2})$ but requires
quasipolynomial time.
| Samuel B. Hopkins, Tselil Schramm, Jonathan Shi, David Steurer | null | 1512.02337 | null | null |
Online Crowdsourcing | cs.LG | With the success of modern internet-based platforms, such as Amazon Mechanical
Turk, it is now normal to collect a large number of hand-labeled samples from
non-experts. The Dawid-Skene algorithm, which is based on
Expectation-Maximization updates, has been widely used for inferring the true
labels from noisy crowdsourced labels. However, the Dawid-Skene scheme
requires all the data to perform each EM iteration, and can be infeasible for
streaming data or large-scale data. In this paper, we provide an online
version of the Dawid-Skene algorithm that only requires one data frame for
each iteration. Further, we prove that under mild conditions, the online
Dawid-Skene scheme with projection converges to a stationary point of the
marginal log-likelihood of the observed data. Our experiments demonstrate
that the online Dawid-Skene scheme achieves state-of-the-art performance
compared with other methods based on the Dawid-Skene scheme.
| Changbo Zhu, Huan Xu, Shuicheng Yan | null | 1512.02393 | null | null |
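
For reference, a compact sketch of the classical batch Dawid-Skene EM update
that the online scheme above approximates one data frame at a time; fully
observed labels and the vote-count initialization are simplifying assumptions.

import numpy as np

def dawid_skene_em(labels, K, n_iters=50):
    # labels: (n_items, n_workers) with entries in {0..K-1}.
    n, m = labels.shape
    mu = np.zeros((n, K))                     # posterior over true labels
    for k in range(K):
        mu[:, k] = (labels == k).sum(axis=1)  # initialize by vote counts
    mu /= mu.sum(axis=1, keepdims=True)
    for _ in range(n_iters):
        # M-step: worker confusion matrices pi[j, true, reported] and prior.
        pi = np.full((m, K, K), 1e-8)
        for j in range(m):
            for k in range(K):
                pi[j, :, k] += mu[labels[:, j] == k].sum(axis=0)
        pi /= pi.sum(axis=2, keepdims=True)
        rho = mu.mean(axis=0)
        # E-step: recompute the posterior of each item's true label.
        log_mu = np.tile(np.log(rho + 1e-12), (n, 1))
        for j in range(m):
            log_mu += np.log(pi[j, :, labels[:, j]]).T
        mu = np.exp(log_mu - log_mu.max(axis=1, keepdims=True))
        mu /= mu.sum(axis=1, keepdims=True)
    return mu.argmax(axis=1), pi
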
Online Gradient Descent in Function Space | cs.LG | In many problems in machine learning and operations research, we need to
optimize a function whose input is a random variable or a probability density
function, i.e. to solve optimization problems in an infinite dimensional space.
On the other hand, online learning has the advantage of dealing with streaming
examples, and better models a changing environment. In this paper, we extend
the celebrated online gradient descent algorithm to Hilbert spaces (function
spaces), and analyze the convergence guarantee of the algorithm. Finally, we
demonstrate that our algorithms can be useful in several important problems.
| Changbo Zhu, Huan Xu | null | 1512.02394 | null | null |
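
One concrete instance of the function-space setting above is online gradient
descent in an RKHS (a NORMA-style update), where f is kept as a kernel
expansion and each streaming example appends one term; the squared loss and
RBF kernel here are assumptions.

import numpy as np

def rbf(x, z, gamma=1.0):
    return np.exp(-gamma * np.sum((x - z) ** 2))

class KernelOGD:
    def __init__(self, eta=0.1):
        self.eta, self.centers, self.alphas = eta, [], []

    def predict(self, x):
        return sum(a * rbf(x, c) for a, c in zip(self.alphas, self.centers))

    def update(self, x, y):
        # For l(f(x), y) = 0.5 (f(x) - y)^2 the functional gradient is
        # (f(x) - y) k(x, .), so each step adds one new kernel center.
        residual = self.predict(x) - y
        self.centers.append(x)
        self.alphas.append(-self.eta * residual)

model = KernelOGD(eta=0.2)
for x, y in [(np.array([0.0]), 1.0), (np.array([1.0]), -1.0)]:
    model.update(x, y)
print(model.predict(np.array([0.5])))
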
Learning Discrete Bayesian Networks from Continuous Data | cs.AI cs.LG | Learning Bayesian networks from raw data can help provide insights into the
relationships between variables. While real data often contains a mixture of
discrete and continuous-valued variables, many Bayesian network structure
learning algorithms assume all random variables are discrete. Thus, continuous
variables are often discretized when learning a Bayesian network. However, the
choice of discretization policy has significant impact on the accuracy, speed,
and interpretability of the resulting models. This paper introduces a
principled Bayesian discretization method for continuous variables in Bayesian
networks with quadratic complexity instead of the cubic complexity of other
standard techniques. Empirical demonstrations show that the proposed method is
superior to the established minimum description length algorithm. In addition,
this paper shows how to incorporate existing methods into the structure
learning process to discretize all continuous variables and simultaneously
learn Bayesian network structures.
| Yi-Chun Chen, Tim Allan Wheeler, Mykel John Kochenderfer | null | 1512.02406 | null | null |
Explaining NonLinear Classification Decisions with Deep Taylor
Decomposition | cs.LG stat.ML | Nonlinear methods such as Deep Neural Networks (DNNs) are the gold standard
for various challenging machine learning problems, e.g., image classification,
natural language processing or human action recognition. Although these methods
perform impressively well, they have a significant disadvantage, the lack of
transparency, limiting the interpretability of the solution and thus the scope
of application in practice. Especially DNNs act as black boxes due to their
multilayer nonlinear structure. In this paper we introduce a novel methodology
for interpreting generic multilayer neural networks by decomposing the network
classification decision into contributions of its input elements. Although our
focus is on image classification, the method is applicable to a broad set of
input data, learning tasks and network architectures. Our method is based on
deep Taylor decomposition and efficiently utilizes the structure of the network
by backpropagating the explanations from the output to the input layer. We
evaluate the proposed method empirically on the MNIST and ILSVRC data sets.
| Gr\'egoire Montavon, Sebastian Bach, Alexander Binder, Wojciech Samek,
Klaus-Robert M\"uller | 10.1016/j.patcog.2016.11.008 | 1512.02479 | null | null |
Deep Exemplar 2D-3D Detection by Adapting from Real to Rendered Views | cs.CV cs.LG cs.NE | This paper presents an end-to-end convolutional neural network (CNN) for
2D-3D exemplar detection. We demonstrate that the ability to adapt the features
of natural images to better align with those of CAD rendered views is critical
to the success of our technique. We show that the adaptation can be learned by
compositing rendered views of textured object models on natural images. Our
approach can be naturally incorporated into a CNN detection pipeline and
extends the accuracy and speed benefits from recent advances in deep learning
to 2D-3D exemplar detection. We applied our method to two tasks: instance
detection, where we evaluated on the IKEA dataset, and object category
detection, where we out-perform Aubry et al. for "chair" detection on a subset
of the Pascal VOC dataset.
| Francisco Massa, Bryan Russell, Mathieu Aubry | null | 1512.02497 | null | null |
Deep Learning for Single and Multi-Session i-Vector Speaker Recognition | cs.SD cs.LG | The promising performance of Deep Learning (DL) in speech recognition has
motivated the use of DL in other speech technology applications such as speaker
recognition. Given i-vectors as inputs, the authors proposed an impostor
selection algorithm and a universal model adaptation process in a hybrid system
based on Deep Belief Networks (DBN) and Deep Neural Networks (DNN) to
discriminatively model each target speaker. In order to gain more insight into
the behavior of DL techniques in single and multi-session speaker enrollment
tasks, experiments have been carried out in this paper in both
scenarios. Additionally, the parameters of the global model, referred to as
universal DBN (UDBN), are normalized before adaptation. UDBN normalization
facilitates training DNNs specifically with more than one hidden layer.
Experiments are performed on the NIST SRE 2006 corpus. It is shown that the
proposed impostor selection algorithm and UDBN adaptation process enhance the
performance of conventional DNNs 8-20 % and 16-20 % in terms of EER for the
single and multi-session tasks, respectively. In both scenarios, the proposed
architectures outperform the baseline systems, obtaining up to a 17% reduction in
EER.
| Omid Ghahabi and Javier Hernando | 10.1109/TASLP.2017.2661705 | 1512.02560 | null | null |
Speeding Up Distributed Machine Learning Using Codes | cs.DC cs.IT cs.LG cs.PF math.IT | Codes are widely used in many engineering applications to offer robustness
against noise. In large-scale systems there are several types of noise that can
affect the performance of distributed machine learning algorithms -- straggler
nodes, system failures, or communication bottlenecks -- but there has been
little interaction cutting across codes, machine learning, and distributed
systems. In this work, we provide theoretical insights on how coded solutions
can achieve significant gains compared to uncoded ones. We focus on two of the
most basic building blocks of distributed learning algorithms: matrix
multiplication and data shuffling. For matrix multiplication, we use codes to
alleviate the effect of stragglers, and show that if the number of homogeneous
workers is $n$, and the runtime of each subtask has an exponential tail, coded
computation can speed up distributed matrix multiplication by a factor of $\log
n$. For data shuffling, we use codes to reduce communication bottlenecks,
exploiting the excess in storage. We show that when a constant fraction
$\alpha$ of the data matrix can be cached at each worker, and $n$ is the number
of workers, \emph{coded shuffling} reduces the communication cost by a factor
of $(\alpha + \frac{1}{n})\gamma(n)$ compared to uncoded shuffling, where
$\gamma(n)$ is the ratio of the cost of unicasting $n$ messages to $n$ users to
multicasting a common message (of the same size) to $n$ users. For instance,
$\gamma(n) \simeq n$ if multicasting a message to $n$ users is as cheap as
unicasting a message to one user. We also provide experiment results,
corroborating our theoretical gains of the coded algorithms.
| Kangwook Lee, Maximilian Lam, Ramtin Pedarsani, Dimitris
Papailiopoulos, Kannan Ramchandran | 10.1109/TIT.2017.2736066 | 1512.02673 | null | null |
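To make the matrix-multiplication result concrete, here is a minimal sketch of an (n, k) MDS-coded scheme in which A @ x is recovered from any k of the n workers, so up to n - k stragglers can be ignored. The Vandermonde encoder and block sizes are illustrative choices, not the paper's exact construction:

```python
import numpy as np

n, k = 5, 3                          # n workers, any k results suffice
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 4))          # 6 rows -> k = 3 blocks of 2 rows
x = rng.normal(size=4)
blocks = np.split(A, k)              # the k uncoded row blocks

G = np.vander(np.arange(1, n + 1), k, increasing=True)       # n x k encoder
coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]

# Each worker i computes coded[i] @ x; suppose only workers {0, 2, 4}
# finish and the other two are stragglers.
done = [0, 2, 4]
results = np.stack([coded[i] @ x for i in done])   # k partial products
decode = np.linalg.inv(G[done])                    # any k rows of G are invertible
recovered = decode @ results                       # the k uncoded block products
assert np.allclose(np.concatenate(recovered), A @ x)
```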
Reinforcement Control with Hierarchical Backpropagated Adaptive Critics | cs.NE cs.LG cs.SY | Present incremental learning methods are limited in the ability to achieve
reliable credit assignment over a large number of time steps (or events). However,
this situation is typical for cases where the dynamical system to be controlled
requires relatively frequent control updates in order to maintain stability or
robustness yet has some action-consequences which must be established over
relatively long periods of time. To address this problem, the learning
capabilities of a control architecture comprised of two Backpropagated Adaptive
Critics (BACs) in a two-level hierarchy with continuous actions are explored.
The high-level BAC updates less frequently than the low-level BAC and controls
the latter to some degree. The response of the low-level to high-level signals
can either be determined a priori or it can emerge during learning. A general
approach called Response Induction Learning is introduced to address the latter
case.
| John W. Jameson | null | 1512.02693 | null | null |
Distributed Training of Deep Neural Networks with Theoretical Analysis:
Under SSP Setting | stat.ML cs.LG math.OC | We propose a distributed approach to train deep neural networks (DNNs), which
is theoretically guaranteed to converge and empirically shows great
scalability: close to 6 times faster on an instance of the ImageNet data set
when run with 6 machines. The proposed scheme is close to optimally scalable
in terms of the number of machines, and guaranteed to converge to the same
optima as the undistributed setting. The convergence and scalability of the
distributed setting are shown
empirically across different datasets (TIMIT and ImageNet) and machine learning
tasks (image classification and phoneme extraction). The convergence analysis
provides novel insights into this complex learning scheme, including: 1)
layerwise convergence, and 2) convergence of the weights in probability.
| Abhimanu Kumar and Pengtao Xie and Junming Yin and Eric P. Xing | null | 1512.02728 | null | null |
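The SSP (stale synchronous parallel) setting named in the title bounds how far workers' iteration counters may drift apart. A minimal sketch of that synchronization rule, separate from the paper's convergence analysis; class and method names are illustrative:

```python
import threading

class SSPClock:
    """Stale-synchronous-parallel barrier: a worker may run ahead of the
    slowest worker by at most `staleness` clock ticks."""
    def __init__(self, n_workers, staleness):
        self.clocks = [0] * n_workers
        self.staleness = staleness
        self.cv = threading.Condition()

    def tick(self, worker_id):
        """Called by each worker after one local parameter update."""
        with self.cv:
            self.clocks[worker_id] += 1
            self.cv.notify_all()   # the minimum clock may have advanced
            while self.clocks[worker_id] > min(self.clocks) + self.staleness:
                self.cv.wait()     # too far ahead: block until others catch up
```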
Window-Object Relationship Guided Representation Learning for Generic
Object Detections | cs.CV cs.LG cs.MM | In existing works that learn representation for object detection, the
relationship between a candidate window and the ground truth bounding box of an
object is simplified by thresholding their overlap. This paper shows that this
simplification loses information, and recovers the relative location/size
information discarded by thresholding. We propose a representation learning
pipeline to use the relationship as supervision for improving the learned
representation in object detection. Such relationship is not limited to object
of the target category, but also includes surrounding objects of other
categories. We show that image regions with multiple contexts and multiple
rotations are effective in capturing such relationship during the
representation learning process and in handling the semantic and visual
variation caused by different window-object configurations. Experimental
results show that the representation learned by our approach can improve the
object detection accuracy by 6.4% in mean average precision (mAP) on
ILSVRC2014. On the challenging ILSVRC2014 test dataset, 48.6% mAP is achieved
by our single model and it is the best among published results. On PASCAL VOC,
it outperforms the state-of-the-art result of Fast RCNN by 3.3% in absolute
mAP.
| Xingyu Zeng, Wanli Ouyang, Xiaogang Wang | null | 1512.02736 | null | null |
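The relative location/size information kept above (instead of collapsing window-object overlap to a binary threshold) can be encoded much like bounding-box regression targets. An illustrative sketch, assuming boxes in (x1, y1, x2, y2) form; the exact parameterization is an assumption, not the paper's:

```python
import math

def window_object_relation(window, gt):
    """Relative location and size of a ground-truth box with respect to a
    candidate window -- an illustrative stand-in for the window-object
    relationship labels used as supervision."""
    wx, wy = (window[0] + window[2]) / 2, (window[1] + window[3]) / 2
    ww, wh = window[2] - window[0], window[3] - window[1]
    gx, gy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    gw, gh = gt[2] - gt[0], gt[3] - gt[1]
    return ((gx - wx) / ww, (gy - wy) / wh,        # relative location
            math.log(gw / ww), math.log(gh / wh))  # relative size
```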
Perfect Recovery Conditions For Non-Negative Sparse Modeling | cs.IT cs.LG math.IT | Sparse modeling has been widely and successfully used in many applications
such as computer vision, machine learning, and pattern recognition. Accompanied
with those applications, significant research has studied the theoretical
limits and algorithm design for convex relaxations in sparse modeling. However,
theoretical analyses on non-negative versions of sparse modeling are limited in
the literature either to a noiseless setting or a scenario with a specific
statistical noise model such as Gaussian noise. This paper studies the
performance of non-negative sparse modeling in a more general scenario where
the observed signals have an unknown arbitrary distortion, especially focusing
on non-negativity constrained and L1-penalized least squares, and gives an
exact bound under which this problem can recover the correct signal elements. We
pose two conditions to guarantee the correct signal recovery: minimum
coefficient condition (MCC) and nonlinearity vs. subset coherence condition
(NSCC). The former defines the minimum weight for each of the correct atoms
present in the signal and the latter defines the tolerable deviation from the
linear model relative to the positive subset coherence (PSC), a novel type of
"coherence" metric. We provide rigorous performance guarantees based on these
conditions and experimentally verify their precise predictive power in a
hyperspectral data unmixing application.
| Yuki Itoh, Marco F. Duarte, Mario Parente | 10.1109/TSP.2016.2613067 | 1512.02743 | null | null |
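The estimator analyzed above, non-negativity constrained and L1-penalized least squares, has a simple proximal step because the L1 penalty is linear on the non-negative orthant. A minimal projected-gradient sketch; the random dictionary, penalty value, and iteration count are illustrative:

```python
import numpy as np

def nn_lasso(D, y, lam=0.1, n_iter=500):
    """Projected gradient descent for
        min_{w >= 0}  0.5 * ||y - D w||^2 + lam * sum(w).
    On the non-negative orthant the L1 penalty is linear, so the proximal
    step is a shift followed by projection onto w >= 0."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ w - y)           # gradient of the smooth part
        w = np.maximum(w - (grad + lam) / L, 0.0)
    return w

# Toy recovery: a sparse non-negative signal seen through a random dictionary.
rng = np.random.default_rng(2)
D = rng.normal(size=(30, 50))
w_true = np.zeros(50); w_true[[3, 17, 41]] = [1.0, 2.0, 0.5]
y = D @ w_true + 0.01 * rng.normal(size=30)
w_hat = nn_lasso(D, y, lam=0.05)
print(np.argsort(w_hat)[-3:])              # ideally the true support {3, 17, 41}
```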
A Novel Regularized Principal Graph Learning Framework on Explicit Graph
Representation | cs.AI cs.LG stat.ML | Many scientific datasets are of high dimension, and the analysis usually
requires visual exploration that retains the most important structures of the
data. The principal curve is a widely used approach for this purpose. However, many
existing methods work only for data with structures that are not
self-intersected, which is quite restrictive for real applications. A few
methods can overcome the above problem, but they either require complicated
hand-crafted rules for a specific task, lacking convergence guarantees and the
flexibility to adapt to different tasks, or cannot obtain explicit structures
of the data. To address these issues, we develop a new regularized principal graph
learning framework that captures the local information of the underlying graph
structure based on reversed graph embedding. As showcases, models that can
learn a spanning tree or a weighted undirected $\ell_1$ graph are proposed, and
a new learning algorithm is developed that learns a set of principal points and
a graph structure from data, simultaneously. The new algorithm is simple with
guaranteed convergence. We then extend the proposed framework to deal with
large-scale data. Experimental results on various synthetic and six real world
datasets show that the proposed method compares favorably with baselines and
can uncover the underlying structure correctly.
| Qi Mao, Li Wang, Ivor W. Tsang, Yijun Sun | null | 1512.02752 | null | null |
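As rough intuition for the spanning-tree showcase above, the toy alternation below refits a minimum spanning tree over a set of principal points and then moves each point toward its assigned data and its tree neighbors. It is a simplification for illustration only; the paper derives its updates from reversed graph embedding with guaranteed convergence:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def principal_tree(X, n_nodes=20, lam=1.0, n_iter=30, seed=0):
    """Toy alternation: (1) fit an MST over the current principal points,
    (2) pull each point toward its assigned data and its tree neighbors."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), n_nodes, replace=False)].copy()  # principal points
    for _ in range(n_iter):
        T = minimum_spanning_tree(cdist(C, C)).toarray()
        adj = (T + T.T) > 0                           # symmetric tree edges
        assign = np.argmin(cdist(X, C), axis=1)       # nearest-node labels
        for j in range(n_nodes):
            pts = X[assign == j]
            nbrs = C[adj[j]]
            den = len(pts) + lam * len(nbrs)
            if den > 0:
                C[j] = (pts.sum(axis=0) + lam * nbrs.sum(axis=0)) / den
    return C, adj

# Toy usage: a noisy circle (the tree structure is learned, not assumed).
theta = np.linspace(0, 2 * np.pi, 400)
X = np.c_[np.cos(theta), np.sin(theta)]
X += 0.05 * np.random.default_rng(1).normal(size=X.shape)
nodes, edges = principal_tree(X)
```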