title (string, 5-246 chars) | categories (string, 5-94 chars, nullable) | abstract (string, 54-5.03k chars) | authors (string, 0-6.72k chars) | doi (string, 12-54 chars, nullable) | id (string, 6-10 chars, nullable) | year (float64, ~2.02k, nullable) | venue (string, 13 classes) |
---|---|---|---|---|---|---|---|
On the Optimal Sample Complexity for Best Arm Identification | cs.LG cs.DS | We study the best arm identification (BEST-1-ARM) problem, which is defined
as follows. We are given $n$ stochastic bandit arms. The $i$th arm has a reward
distribution $D_i$ with an unknown mean $\mu_{i}$. Upon each play of the $i$th
arm, we can get a reward, sampled i.i.d. from $D_i$. We would like to identify
the arm with the largest mean with probability at least $1-\delta$, using as
few samples as possible. We provide a nontrivial algorithm for BEST-1-ARM,
which improves upon several prior upper bounds on the same problem. We also
study an important special case where there are only two arms, which we call
the SIGN problem. We provide a new lower bound for SIGN, simplifying and
significantly extending a classical result by Farrell in 1964, with a
completely new proof. Using the new lower bound for SIGN, we obtain the first
lower bound for BEST-1-ARM that goes beyond the classic Mannor-Tsitsiklis lower
bound, via an interesting reduction from SIGN to BEST-1-ARM. We propose an
interesting conjecture concerning the optimal sample complexity of BEST-1-ARM
from the perspective of instance-wise optimality.
| Lijie Chen, Jian Li | null | 1511.03774 | null | null |
Towards Vision-Based Deep Reinforcement Learning for Robotic Motion
Control | cs.LG cs.CV cs.RO | This paper introduces a machine learning based system for controlling a
robotic manipulator with visual perception only. The capability to autonomously
learn robot controllers solely from raw-pixel images and without any prior
knowledge of configuration is shown for the first time. We build upon the
success of recent deep reinforcement learning and develop a system for learning
target reaching with a three-joint robot manipulator using external visual
observation. A Deep Q Network (DQN) was demonstrated to perform target reaching
after training in simulation. Transferring the network to real hardware and
real observation in a naive approach failed, but experiments show that the
network works when replacing camera images with synthetic images.
| Fangyi Zhang, J\"urgen Leitner, Michael Milford, Ben Upcroft, Peter
Corke | null | 1511.03791 | null | null |
Learning Nonparametric Forest Graphical Models with Prior Information | stat.ME cs.LG stat.ML | We present a framework for incorporating prior information into nonparametric
estimation of graphical models. To avoid distributional assumptions, we
restrict the graph to be a forest and build on the work of forest density
estimation (FDE). We reformulate the FDE approach from a Bayesian perspective,
and introduce prior distributions on the graphs. As two concrete examples, we
apply this framework to estimating scale-free graphs and learning multiple
graphs with similar structures. The resulting algorithms are equivalent to
finding a maximum spanning tree of a weighted graph with a penalty term on the
connectivity pattern of the graph. We solve the optimization problem via a
minorize-maximization procedure with Kruskal's algorithm. Simulations show that
the proposed methods outperform competing parametric methods, and are robust to
the true data distribution. They also lead to improvement in predictive power
and interpretability in two real data sets.
| Yuancheng Zhu, Zhe Liu and Siqi Sun | null | 1511.03796 | null | null |
Characterizing Concept Drift | cs.LG cs.AI | Most machine learning models are static, but the world is dynamic, and
increasing online deployment of learned models gives increasing urgency to the
development of efficient and effective mechanisms to address learning in the
context of non-stationary distributions or, as it is commonly called, concept
drift. However, the key issue of characterizing the different types of drift
that can occur has not previously been subjected to rigorous definition and
analysis. In particular, while some qualitative drift categorizations have been
proposed, few have been formally defined, and the quantitative descriptions
required for precise and objective understanding of learner performance have
not existed. We present the first comprehensive framework for quantitative
analysis of drift. This supports the development of the first comprehensive set
of formal definitions of types of concept drift. The formal definitions clarify
ambiguities and identify gaps in previous definitions, giving rise to a new
comprehensive taxonomy of concept drift types and a solid foundation for
research into mechanisms to detect and address concept drift.
| Geoffrey I. Webb and Roy Hyde and Hong Cao and Hai Long Nguyen and
Francois Petitjean | 10.1007/s10618-015-0448-4 | 1511.03816 | null | null |
Feature Learning based Deep Supervised Hashing with Pairwise Labels | cs.LG cs.CV | Recent years have witnessed wide application of hashing for large-scale image
retrieval. However, most existing hashing methods are based on hand-crafted
features which might not be optimally compatible with the hashing procedure.
Recently, deep hashing methods have been proposed to perform simultaneous
feature learning and hash-code learning with deep neural networks, which have
shown better performance than traditional hashing methods with hand-crafted
features. Most of these deep hashing methods are supervised, with the
supervision given in the form of triplet labels. For the other common
application scenario with pairwise labels, no methods for simultaneous feature
learning and hash-code learning have existed. In this paper, we propose a novel
deep hashing method, called deep pairwise-supervised hashing (DPSH), to perform
simultaneous feature learning and hash-code learning for applications with
pairwise labels. Experiments on real datasets show that our DPSH method
outperforms other methods and achieves state-of-the-art performance in image
retrieval applications.
| Wu-Jun Li, Sheng Wang, and Wang-Cheng Kang | null | 1511.03855 | null | null |
Learning Human Identity from Motion Patterns | cs.LG cs.CV cs.NE | We present a large-scale study exploring the capability of temporal deep
neural networks to interpret natural human kinematics and introduce the first
method for active biometric authentication with mobile inertial sensors. At
Google, we have created a first-of-its-kind dataset of human movements,
passively collected by 1500 volunteers using their smartphones daily over
several months. We (1) compare several neural architectures for efficient
learning of temporal multi-modal data representations, (2) propose an optimized
shift-invariant dense convolutional mechanism (DCWRNN), and (3) incorporate the
discriminatively-trained dynamic features in a probabilistic generative
framework taking into account temporal characteristics. Our results demonstrate
that human kinematics convey important information about user identity and can
serve as a valuable component of multi-modal authentication systems.
| Natalia Neverova, Christian Wolf, Griffin Lacey, Lex Fridman, Deepak
Chandra, Brandon Barbello, Graham Taylor | null | 1511.03908 | null | null |
Bayesian Analysis of Dynamic Linear Topic Models | stat.ML cs.LG stat.ME | In dynamic topic modeling, the proportional contribution of a topic to a
document depends on the temporal dynamics of that topic's overall prevalence in
the corpus. We extend the Dynamic Topic Model of Blei and Lafferty (2006) by
explicitly modeling document level topic proportions with covariates and
dynamic structure that includes polynomial trends and periodicity. A Markov
Chain Monte Carlo (MCMC) algorithm that utilizes Polya-Gamma data augmentation
is developed for posterior inference. Conditional independencies in the model
and sampling are made explicit, and our MCMC algorithm is parallelized where
possible to allow for inference in large corpora. To address computational
bottlenecks associated with Polya-Gamma sampling, we appeal to the Central
Limit Theorem to develop a Gaussian approximation to the Polya-Gamma random
variable. This approximation is fast and reliable for parameter values relevant
in the text mining domain. Our model and inference algorithm are validated with
multiple simulation examples, and we consider the application of modeling
trends in PubMed abstracts. We demonstrate that sharing information across
documents is critical for accurately estimating document-specific topic
proportions. We also show that explicitly modeling polynomial and periodic
behavior improves our ability to predict topic prevalence at future time
points.
| Chris Glynn, Surya T. Tokdar, David L. Banks, Brian Howard | null | 1511.03947 | null | null |
Document Context Language Models | cs.CL cs.LG stat.ML | Text documents are structured on multiple levels of detail: individual words
are related by syntax, but larger units of text are related by discourse
structure. Existing language models generally fail to account for discourse
structure, but it is crucial if we are to have language models that reward
coherence and generate coherent texts. We present and empirically evaluate a
set of multi-level recurrent neural network language models, called
Document-Context Language Models (DCLM), which incorporate contextual
information both within and beyond the sentence. In comparison with word-level
recurrent neural network language models, the DCLM models obtain slightly
better predictive likelihoods, and considerably better assessments of document
coherence.
| Yangfeng Ji, Trevor Cohn, Lingpeng Kong, Chris Dyer, Jacob Eisenstein | null | 1511.03962 | null | null |
Representational Distance Learning for Deep Neural Networks | cs.NE cs.CV cs.LG | Deep neural networks (DNNs) provide useful models of visual representational
transformations. We present a method that enables a DNN (student) to learn from
the internal representational spaces of a reference model (teacher), which
could be another DNN or, in the future, a biological brain. Representational
spaces of the student and the teacher are characterized by representational
distance matrices (RDMs). We propose representational distance learning (RDL),
a stochastic gradient descent method that drives the RDMs of the student to
approximate the RDMs of the teacher. We demonstrate that RDL is competitive
with other transfer learning techniques for two publicly available benchmark
computer vision datasets (MNIST and CIFAR-100), while allowing for
architectural differences between student and teacher. By pulling the student's
RDMs towards those of the teacher, RDL significantly improved visual
classification performance when compared to baseline networks that did not use
transfer learning. In the future, RDL may enable combined supervised training
of deep neural networks using task constraints (e.g. images and category
labels) and constraints from brain-activity measurements, so as to build models
that replicate the internal representational spaces of biological brains.
| Patrick McClure, Nikolaus Kriegeskorte | null | 1511.03979 | null | null |
Prediction of the Yield of Enzymatic Synthesis of Betulinic Acid Ester
Using Artificial Neural Networks and Support Vector Machine | cs.LG cs.NE | The 3β-O-phthalic ester of betulinic acid is of great importance in
anticancer studies. However, optimizing its reaction conditions requires a
large amount of experimental work. To reduce the number of optimization
experiments required, we use artificial neural network (ANN) and support
vector machine (SVM) models to predict the yield of the 3β-O-phthalic ester of
betulinic acid synthesized from betulinic acid and phthalic anhydride using
lipase as the biocatalyst. General regression neural network (GRNN),
multilayer feed-forward neural network (MLFN) and SVM models were trained on
experimental data. Four indicators were set as independent variables: time
(h), temperature (°C), amount of enzyme (mg) and molar ratio, while the yield
of the 3β-O-phthalic ester of betulinic acid was set as the dependent
variable. Results show that the GRNN and SVM models give the best predictions
during testing, with comparatively low RMS errors (4.01 and 4.23,
respectively) and short training times (both 1 s). The prediction accuracies
of the GRNN and SVM models are both 100% on the test set, under a tolerance of 30%.
| Run Wang, Qiaoli Mo, Qian Zhang, Fudi Chen and Dazuo Yang | null | 1511.03984 | null | null |
Block-diagonal covariance selection for high-dimensional Gaussian
graphical models | math.ST cs.LG stat.ML stat.TH | Gaussian graphical models are widely utilized to infer and visualize networks
of dependencies between continuous variables. However, inferring the graph is
difficult when the sample size is small compared to the number of variables. To
reduce the number of parameters to estimate in the model, we propose a
non-asymptotic model selection procedure supported by strong theoretical
guarantees based on an oracle inequality and a minimax lower bound. The
covariance matrix of the model is approximated by a block-diagonal matrix. The
structure of this matrix is detected by thresholding the sample covariance
matrix, where the threshold is selected using the slope heuristic. Based on the
block-diagonal structure of the covariance matrix, the estimation problem is
divided into several independent problems: subsequently, the network of
dependencies between variables is inferred using the graphical lasso algorithm
in each block. The performance of the procedure is illustrated on simulated
data. An application to a real gene expression dataset with a limited sample
size is also presented: the dimension reduction allows attention to be
objectively focused on interactions among smaller subsets of genes, leading to
a more parsimonious and interpretable modular network.
| Emilie Devijver, M\'elina Gallopin | null | 1511.04033 | null | null |
Kernel Methods for Accurate UWB-Based Ranging with Reduced Complexity | cs.LG cs.IT math.IT | Accurate and robust positioning in multipath environments can enable many
applications, such as search-and-rescue and asset tracking. For this problem,
ultra-wideband (UWB) technology can provide the most accurate range estimates,
which are required for range-based positioning. However, UWB still faces a
problem with non-line-of-sight (NLOS) measurements, in which the range
estimates based on time-of-arrival (TOA) will typically be positively biased.
There are many techniques that address this problem, mainly based on NLOS
identification and NLOS error mitigation algorithms. However, these techniques
do not exploit all available information in the UWB channel impulse response.
Kernel-based machine learning methods, such as Gaussian Process Regression
(GPR), are able to make use of all information, but they may be too complex in
their original form. In this paper, we propose novel ranging methods based on
kernel principal component analysis (kPCA), in which the selected channel
parameters are projected onto a nonlinear orthogonal high-dimensional space,
and a subset of these projections is then used as an input for ranging. We
evaluate the proposed methods using real UWB measurements obtained in a
basement tunnel, and find that one of the proposed methods is able to
outperform the state of the art, even when few training samples are available.
| Vladimir Savic, Erik G. Larsson, Javier Ferrer-Coll, Peter Stenumgaard | 10.1109/TWC.2015.2496584 | 1511.04045 | null | null |
Efficient non-greedy optimization of decision trees | cs.LG cs.CV | Decision trees and randomized forests are widely used in computer vision and
machine learning. Standard algorithms for decision tree induction optimize the
split functions one node at a time according to some splitting criteria. This
greedy procedure often leads to suboptimal trees. In this paper, we present an
algorithm for optimizing the split functions at all levels of the tree jointly
with the leaf parameters, based on a global objective. We show that the problem
of finding optimal linear-combination (oblique) splits for decision trees is
related to structured prediction with latent variables, and we formulate a
convex-concave upper bound on the tree's empirical loss. The run-time of
computing the gradient of the proposed surrogate objective with respect to each
training exemplar is quadratic in the tree depth, and thus training deep
trees is feasible. The use of stochastic gradient descent for optimization
enables effective training with large datasets. Experiments on several
classification benchmarks demonstrate that the resulting non-greedy decision
trees outperform greedy decision tree baselines.
| Mohammad Norouzi, Maxwell D. Collins, Matthew Johnson, David J. Fleet,
Pushmeet Kohli | null | 1511.04056 | null | null |
LSTM-based Deep Learning Models for Non-factoid Answer Selection | cs.CL cs.LG | In this paper, we apply a general deep learning (DL) framework for the answer
selection task, which does not depend on manually defined features or
linguistic tools. The basic framework is to build the embeddings of questions
and answers based on bidirectional long short-term memory (biLSTM) models, and
measure their closeness by cosine similarity. We further extend this basic
model in two directions. One direction is to define a more composite
representation for questions and answers by combining convolutional neural
network with the basic framework. The other direction is to utilize a simple
but efficient attention mechanism in order to generate the answer
representation according to the question context. Several variations of models
are provided. The models are evaluated on two datasets, TREC-QA and
InsuranceQA. Experimental results demonstrate that the proposed models
substantially outperform several strong baselines.
| Ming Tan, Cicero dos Santos, Bing Xiang, Bowen Zhou | null | 1511.04108 | null | null |
Action Recognition using Visual Attention | cs.LG cs.CV | We propose a soft attention based model for the task of action recognition in
videos. We use multi-layered Recurrent Neural Networks (RNNs) with Long
Short-Term Memory (LSTM) units which are deep both spatially and temporally.
Our model learns to focus selectively on parts of the video frames and
classifies videos after taking a few glimpses. The model essentially learns
which parts in the frames are relevant for the task at hand and attaches higher
importance to them. We evaluate the model on UCF-11 (YouTube Action), HMDB-51
and Hollywood2 datasets and analyze how the model focuses its attention
depending on the scene and the action being performed.
| Shikhar Sharma, Ryan Kiros, Ruslan Salakhutdinov | null | 1511.04119 | null | null |
Seeing the Unseen Network: Inferring Hidden Social Ties from
Respondent-Driven Sampling | cs.SI cs.AI cs.LG | Learning about the social structure of hidden and hard-to-reach populations
--- such as drug users and sex workers --- is a major goal of epidemiological
and public health research on risk behaviors and disease prevention.
Respondent-driven sampling (RDS) is a peer-referral process widely used by many
health organizations, where research subjects recruit other subjects from their
social network. In such surveys, researchers observe who recruited whom, along
with the time of recruitment and the total number of acquaintances (network
degree) of respondents. However, due to privacy concerns, the identities of
acquaintances are not disclosed. In this work, we show how to reconstruct the
underlying network structure through which the subjects are recruited. We
formulate the dynamics of RDS as a continuous-time diffusion process over the
underlying graph and derive the likelihood for the recruitment time series
under an arbitrary recruitment time distribution. We develop an efficient
stochastic optimization algorithm called RENDER (REspoNdent-Driven nEtwork
Reconstruction) that finds the network that best explains the collected data.
We support our analytical results through an exhaustive set of experiments on
both synthetic and real data.
| Lin Chen, Forrest W. Crawford, Amin Karbasi | null | 1511.04137 | null | null |
Deep Reinforcement Learning in Parameterized Action Space | cs.AI cs.LG cs.MA cs.NE | Recent work has shown that deep neural networks are capable of approximating
both value functions and policies in reinforcement learning domains featuring
continuous state and action spaces. However, to the best of our knowledge no
previous work has succeeded at using deep neural networks in structured
(parameterized) continuous action spaces. To fill this gap, this paper focuses
on learning within the domain of simulated RoboCup soccer, which features a
small set of discrete action types, each of which is parameterized with
continuous variables. The best learned agent can score goals more reliably than
the 2012 RoboCup champion agent. As such, this paper represents a successful
extension of deep reinforcement learning to the class of parameterized action
space MDPs.
| Matthew Hausknecht and Peter Stone | null | 1511.04143 | null | null |
A Continuous-time Mutually-Exciting Point Process Framework for
Prioritizing Events in Social Media | cs.SI cs.LG | The overwhelming amount and rate of information update in online social media
is making it increasingly difficult for users to allocate their attention to
their topics of interest; thus, there is a strong need for prioritizing news
feeds. The attractiveness of a post to a user depends on many complex
contextual and temporal features of the post. For instance, the contents of the
post, the responsiveness of a third user, and the age of the post may all have
impact. So far, these static and dynamic features have not been incorporated in
a unified framework to tackle the post prioritization problem. In this paper,
we propose a novel approach for prioritizing posts based on a feature modulated
multi-dimensional point process. Our model is able to simultaneously capture
textual and sentiment features, and temporal features such as self-excitation,
mutual-excitation and bursty nature of social interaction. As an evaluation, we
also curated a real-world conversational benchmark dataset crawled from
Facebook. In our experiments, we demonstrate that our algorithm is able to
achieve state-of-the-art performance in terms of analyzing, predicting, and
prioritizing events. In terms of interpretability of our method, we observe
that features indicating individual user profile and linguistic characteristics
of the events work best for prediction and prioritization of new events.
| Mehrdad Farajtabar, Safoora Yousefi, Long Q. Tran, Le Song, Hongyuan
Zha | null | 1511.04145 | null | null |
Deep Mean Maps | stat.ML cs.CV cs.LG | The use of distributions and high-level features from deep architecture has
become commonplace in modern computer vision. Both of these methodologies have
separately achieved a great deal of success in many computer vision tasks.
However, there has been little work attempting to leverage the power of these
two methodologies jointly. To this end, this paper presents the Deep Mean Maps
(DMMs) framework, a novel family of methods to non-parametrically represent
distributions of features in convolutional neural network models.
DMMs are able to both classify images using the distribution of top-level
features, and to tune the top-level features for performing this task. We show
how to implement DMMs using a special mean map layer composed of typical CNN
operations, making both forward and backward propagation simple.
We illustrate the efficacy of DMMs at analyzing distributional patterns in
image data in a synthetic data experiment. We also show that extending
existing deep architectures with DMMs improves the performance of existing CNNs
on several challenging real-world datasets.
| Junier B. Oliva, Danica J. Sutherland, Barnab\'as P\'oczos, Jeff
Schneider | null | 1511.04150 | null | null |
Adaptive Affinity Matrix for Unsupervised Metric Learning | cs.CV cs.LG | Spectral clustering is one of the most popular clustering approaches with the
capability to handle some challenging clustering problems. Most spectral
clustering methods provide a nonlinear map from the data manifold to a
subspace. Only a little work focuses on the explicit linear map which can be
viewed as the unsupervised distance metric learning. In practice, the selection
of the affinity matrix exhibits a tremendous impact on the unsupervised
learning. While much success of affinity learning has been achieved in recent
years, some issues such as noise reduction remain to be addressed. In this
paper, we propose a novel method, dubbed Adaptive Affinity Matrix (AdaAM), to
learn an adaptive affinity matrix and derive a distance metric from the
affinity. We assume the affinity matrix to be positive semidefinite with
ability to quantify the pairwise dissimilarity. Our method is based on posing
the optimization of the objective function as a spectral decomposition
problem. We derive the affinity from both the original data distribution and
the widely-used heat kernel. The resulting matrix can be regarded as the optimal representation
of pairwise relationship on the manifold. Extensive experiments on a number of
real-world data sets show the effectiveness and efficiency of AdaAM.
| Yaoyi Li, Junxuan Chen and Hongtao Lu | 10.1109/ICME.2016.7552887 | 1511.04153 | null | null |
Neuroprosthetic decoder training as imitation learning | stat.ML cs.LG q-bio.NC | Neuroprosthetic brain-computer interfaces function via an algorithm which
decodes neural activity of the user into movements of an end effector, such as
a cursor or robotic arm. In practice, the decoder is often learned by updating
its parameters while the user performs a task. When the user's intention is not
directly observable, recent methods have demonstrated value in training the
decoder against a surrogate for the user's intended movement. We describe how
training a decoder in this way is a novel variant of an imitation learning
problem, where an oracle or expert is employed for supervised training in lieu
of direct observations, which are not available. Specifically, we describe how
a generic imitation learning meta-algorithm, dataset aggregation (DAgger, [1]),
can be adapted to train a generic brain-computer interface. By deriving
existing learning algorithms for brain-computer interfaces in this framework,
we provide a novel analysis of regret (an important metric of learning
efficacy) for brain-computer interfaces. This analysis allows us to
characterize the space of algorithmic variants and bounds on their regret
rates. Existing approaches for decoder learning have been performed in the
cursor control setting, but the available design principles for these decoders
are such that it has been impossible to scale them to naturalistic settings.
Leveraging our findings, we then offer an algorithm that combines imitation
learning with optimal control, which should allow for training of arbitrary
effectors for which optimal control can generate goal-oriented control. We
demonstrate this novel and general BCI algorithm with simulated neuroprosthetic
control of a 26 degree-of-freedom model of an arm, a sophisticated and
realistic end effector.
| Josh Merel, David Carlson, Liam Paninski, John P. Cunningham | 10.1371/journal.pcbi.1004948 | 1511.04156 | null | null |
On the Quality of the Initial Basin in Overspecified Neural Networks | cs.LG stat.ML | Deep learning, in the form of artificial neural networks, has achieved
remarkable practical success in recent years, for a variety of difficult
machine learning applications. However, a theoretical explanation for this
remains a major open problem, since training neural networks involves
optimizing a highly non-convex objective function, and is known to be
computationally hard in the worst case. In this work, we study the
\emph{geometric} structure of the associated non-convex objective function, in
the context of ReLU networks and starting from a random initialization of the
network parameters. We identify some conditions under which it becomes more
favorable to optimization, in the sense of (i) High probability of initializing
at a point from which there is a monotonically decreasing path to a global
minimum; and (ii) High probability of initializing at a basin (suitably
defined) with a small minimal objective value. A common theme in our results is
that such properties are more likely to hold for larger ("overspecified")
networks, which accords with some recent empirical and theoretical
observations.
| Itay Safran, Ohad Shamir | null | 1511.04210 | null | null |
Active Contextual Entropy Search | stat.ML cs.LG | Contextual policy search allows adapting robotic movement primitives to
different situations. For instance, a locomotion primitive might be adapted to
different terrain inclinations or desired walking speeds. Such an adaptation is
often achievable by modifying a small number of hyperparameters. However,
learning, when performed on real robotic systems, is typically restricted to a
small number of trials. Bayesian optimization has recently been proposed as a
sample-efficient means for contextual policy search that is well suited under
these conditions. In this work, we extend entropy search, a variant of Bayesian
optimization, such that it can be used for active contextual policy search
where the agent selects those tasks during training in which it expects to
learn the most. Empirical results in simulation suggest that this allows
learning successful behavior with fewer trials.
| Jan Hendrik Metzen | null | 1511.04211 | null | null |
Deep Feature Learning for EEG Recordings | cs.NE cs.LG | We introduce and compare several strategies for learning discriminative
features from electroencephalography (EEG) recordings using deep learning
techniques. EEG data are generally only available in small quantities, they are
high-dimensional with a poor signal-to-noise ratio, and there is considerable
variability between individual subjects and recording sessions. Our proposed
techniques specifically address these challenges for feature learning.
Cross-trial encoding forces auto-encoders to focus on features that are stable
across trials. Similarity-constraint encoders learn features that allow one to
distinguish between classes by demanding that two trials from the same class
are more similar to each other than to trials from other classes. This
tuple-based training approach is especially suitable for small datasets.
Hydra-nets allow for separate processing pathways adapting to subsets of a
dataset and thus combine the advantages of individual feature learning (better
adaptation of early, low-level processing) with group model training (better
generalization of higher-level processing in deeper layers). This way, models
can, for instance, adapt to each subject individually to compensate for
differences in spatial patterns due to anatomical differences or variance in
electrode positions. The different techniques are evaluated using the publicly
available OpenMIIR dataset of EEG recordings taken while participants listened
to and imagined music.
| Sebastian Stober, Avital Sternin, Adrian M. Owen and Jessica A. Grahn | null | 1511.04306 | null | null |
Handling Class Imbalance in Link Prediction using Learning to Rank
Techniques | stat.ML cs.LG cs.SI | We consider the link prediction problem in a partially observed network,
where the objective is to make predictions in the unobserved portion of the
network. Many existing methods reduce link prediction to binary classification
problem. However, the dominance of absent links in real world networks makes
misclassification error a poor performance metric. Instead, researchers have
argued for using ranking performance measures, like AUC, AP and NDCG, for
evaluation. Our main contribution is to recast the link prediction problem as a
learning to rank problem and use effective learning to rank techniques directly
during training. This is in contrast to existing work that uses ranking
measures only during evaluation. Our approach is able to deal with the class
imbalance problem by using effective, scalable learning to rank techniques
during training. Furthermore, our approach allows us to combine network
topology and node features. As a demonstration of our general approach, we
develop a link prediction method by optimizing the cross-entropy surrogate,
originally used in the popular ListNet ranking algorithm. We conduct extensive
experiments on publicly available co-authorship, citation and metabolic
networks to demonstrate the merits of our method.
| Bopeng Li, Sougata Chaudhuri, Ambuj Tewari | null | 1511.04383 | null | null |
Similarity-based Text Recognition by Deeply Supervised Siamese Network | cs.CV cs.LG | In this paper, we propose a new text recognition model based on measuring the
visual similarity of text and predicting the content of unlabeled texts. First
a Siamese convolutional network is trained with deep supervision on a labeled
training dataset. This network projects texts into a similarity manifold. The
Deeply Supervised Siamese network learns visual similarity of texts. Then a
K-nearest neighbor classifier is used to predict unlabeled text based on
similarity distance to labeled texts. The performance of the model is evaluated
on three datasets of machine-print and hand-written text combined. We
demonstrate that the model reduces the cost of human estimation by $50\%-85\%$.
The error of the system is less than $0.5\%$. Thanks to deep supervision, the
proposed model outperforms a conventional Siamese network by finding visually
similar text that is barely readable or readable, e.g. machine-printed and
handwritten. The results also demonstrate that the predicted labels are
sometimes better than human labels, e.g. through spelling correction.
| Ehsan Hosseini-Asl, Angshuman Guha | null | 1511.04397 | null | null |
Symbol Grounding Association in Multimodal Sequences with Missing
Elements | cs.CV cs.CL cs.LG cs.NE | In this paper, we extend a symbolic association framework for being able to
handle missing elements in multimodal sequences. The general scope of the work
is the symbolic associations of object-word mappings as it happens in language
development in infants. In other words, two different representations of the
same abstract concepts can be associated in both directions. This scenario has
long been of interest in Artificial Intelligence, Psychology, and Neuroscience. In
this work, we extend a recent approach for multimodal sequences (visual and
audio) to also cope with missing elements in one or both modalities. Our method
uses two parallel Long Short-Term Memories (LSTMs) with a learning rule based
on EM-algorithm. It aligns both LSTM outputs via Dynamic Time Warping (DTW). We
propose to include an extra step for the combination with the max operation for
exploiting the common elements between both sequences. The motivation is
that the combination acts as a condition selector for choosing the best
representation from both LSTMs. We evaluated the proposed extension in the
following scenarios: missing elements in one modality (visual or audio) and
missing elements in both modalities (visual and audio). Our extension achieves
better results than the original model and results similar to those of an
individual LSTM trained on each modality.
| Federico Raue, Andreas Dengel, Thomas M. Breuel, Marcus Liwicki | null | 1511.04401 | null | null |
Dynamic Sum Product Networks for Tractable Inference on Sequence Data
(Extended Version) | cs.LG cs.AI stat.ML | Sum-Product Networks (SPN) have recently emerged as a new class of tractable
probabilistic graphical models. Unlike Bayesian networks and Markov networks
where inference may be exponential in the size of the network, inference in
SPNs is in time linear in the size of the network. Since SPNs represent
distributions over a fixed set of variables only, we propose dynamic sum
product networks (DSPNs) as a generalization of SPNs for sequence data of
varying length. A DSPN consists of a template network that is repeated as many
times as needed to model data sequences of any length. We present a local
search technique to learn the structure of the template network. In contrast to
dynamic Bayesian networks for which inference is generally exponential in the
number of variables per time slice, DSPNs inherit the linear inference
complexity of SPNs. We demonstrate the advantages of DSPNs over DBNs and other
models on several datasets of sequence data.
| Mazen Melibari, Pascal Poupart, Prashant Doshi and George Trimponias | null | 1511.04412 | null | null |
Deeply-Recursive Convolutional Network for Image Super-Resolution | cs.CV cs.LG | We propose an image super-resolution method (SR) using a deeply-recursive
convolutional network (DRCN). Our network has a very deep recursive layer (up
to 16 recursions). Increasing recursion depth can improve performance without
introducing new parameters for additional convolutions. Despite these advantages,
learning a DRCN is very hard with a standard gradient descent method due to
exploding/vanishing gradients. To ease the difficulty of training, we propose
two extensions: recursive-supervision and skip-connection. Our method
outperforms previous methods by a large margin.
| Jiwon Kim, Jung Kwon Lee and Kyoung Mu Lee | null | 1511.04491 | null | null |
Distillation as a Defense to Adversarial Perturbations against Deep
Neural Networks | cs.CR cs.LG cs.NE stat.ML | Deep learning algorithms have been shown to perform extremely well on many
classical machine learning problems. However, recent studies have shown that
deep learning, like other machine learning techniques, is vulnerable to
adversarial samples: inputs crafted to force a deep neural network (DNN) to
provide adversary-selected outputs. Such attacks can seriously undermine the
security of the system supported by the DNN, sometimes with devastating
consequences. For example, autonomous vehicles can be crashed, illicit or
illegal content can bypass content filters, or biometric authentication systems
can be manipulated to allow improper access. In this work, we introduce a
defensive mechanism called defensive distillation to reduce the effectiveness
of adversarial samples on DNNs. We analytically investigate the
generalizability and robustness properties granted by the use of defensive
distillation when training DNNs. We also empirically study the effectiveness of
our defense mechanisms on two DNNs placed in adversarial settings. The study
shows that defensive distillation can reduce the effectiveness of adversarial sample creation
from 95% to less than 0.5% on a studied DNN. Such dramatic gains can be
explained by the fact that distillation leads gradients used in adversarial
sample creation to be reduced by a factor of 10^30. We also find that
distillation increases the average minimum number of features that need to be
modified to create adversarial samples by about 800% on one of the DNNs we
tested.
| Nicolas Papernot and Patrick McDaniel and Xi Wu and Somesh Jha and
Ananthram Swami | null | 1511.04508 | null | null |
Sparse Nonlinear Regression: Parameter Estimation and Asymptotic
Inference | stat.ML cs.IT cs.LG math.IT math.OC | We study parameter estimation and asymptotic inference for sparse nonlinear
regression. More specifically, we assume the data are given by $y = f( x^\top
\beta^* ) + \epsilon$, where $f$ is nonlinear. To recover $\beta^*$, we propose
an $\ell_1$-regularized least-squares estimator. Unlike classical linear
regression, the corresponding optimization problem is nonconvex because of the
nonlinearity of $f$. In spite of the nonconvexity, we prove that under mild
conditions, every stationary point of the objective enjoys an optimal
statistical rate of convergence. In addition, we provide an efficient algorithm
that provably converges to a stationary point. We also assess the uncertainty
of the obtained estimator. Specifically, based on any stationary point of the
objective, we construct valid hypothesis tests and confidence intervals for the
low dimensional components of the high-dimensional parameter $\beta^*$.
Detailed numerical results are provided to back up our theory.
| Zhuoran Yang, Zhaoran Wang, Han Liu, Yonina C. Eldar, Tong Zhang | null | 1511.04514 | null | null |
Efficient Training of Very Deep Neural Networks for Supervised Hashing | cs.CV cs.LG cs.NE | In this paper, we propose training very deep neural networks (DNNs) for
supervised learning of hash codes. Existing methods in this context train
relatively "shallow" networks limited by the issues arising in back propagation
(e.g. vanishing gradients) as well as computational efficiency. We propose a
novel and efficient training algorithm inspired by alternating direction method
of multipliers (ADMM) that overcomes some of these limitations. Our method
decomposes the training process into independent layer-wise local updates
through auxiliary variables. Empirically we observe that our training algorithm
always converges and its computational complexity is linearly proportional to
the number of edges in the networks. Empirically we manage to train DNNs with
64 hidden layers and 1024 nodes per layer for supervised hashing in about 3
hours using a single GPU. Our proposed very deep supervised hashing (VDSH)
method significantly outperforms the state-of-the-art on several benchmark
datasets.
| Ziming Zhang, Yuting Chen and Venkatesh Saligrama | null | 1511.04524 | null | null |
8-Bit Approximations for Parallelism in Deep Learning | cs.NE cs.LG | The creation of practical deep learning data-products often requires
parallelization across processors and computers to make deep learning feasible
on large data sets, but bottlenecks in communication bandwidth make it
difficult to attain good speedups through parallelism. Here we develop and test
8-bit approximation algorithms which make better use of the available bandwidth
by compressing 32-bit gradients and nonlinear activations to 8-bit
approximations. We show that these approximations do not decrease predictive
performance on MNIST, CIFAR10, and ImageNet for both model and data parallelism
and provide a data transfer speedup of 2x relative to 32-bit parallelism. We
build a predictive model for speedups based on our experimental data, verify
its validity on known speedup data, and show that we can obtain a speedup of
50x and more on a system of 96 GPUs compared to a speedup of 23x for 32-bit. We
compare our data types with other methods and show that 8-bit approximations
achieve state-of-the-art speedups for model parallelism. Thus 8-bit
approximation is an efficient method to parallelize convolutional networks on
very large systems of GPUs.
| Tim Dettmers | null | 1511.04561 | null | null |
A Test of Relative Similarity For Model Selection in Generative Models | stat.ML cs.LG | Probabilistic generative models provide a powerful framework for representing
data that avoids the expense of manual annotation typically needed by
discriminative approaches. Model selection in this generative setting can be
challenging, however, particularly when likelihoods are not easily accessible.
To address this issue, we introduce a statistical test of relative similarity,
which is used to determine which of two models generates samples that are
significantly closer to a real-world reference dataset of interest. We use as
our test statistic the difference in maximum mean discrepancies (MMDs) between
the reference dataset and each model dataset, and derive a powerful,
low-variance test based on the joint asymptotic distribution of the MMDs
between each reference-model pair. In experiments on deep generative models,
including the variational auto-encoder and generative moment matching network,
the tests provide a meaningful ranking of model performance as a function of
parameter and training settings.
| Wacha Bounliphone, Eugene Belilovsky, Matthew B. Blaschko, Ioannis
Antonoglou, Arthur Gretton | null | 1511.04581 | null | null |
Accurate Image Super-Resolution Using Very Deep Convolutional Networks | cs.CV cs.LG | We present a highly accurate single-image super-resolution (SR) method. Our
method uses a very deep convolutional network inspired by VGG-net used for
ImageNet classification \cite{simonyan2015very}. We find increasing our network
depth shows a significant improvement in accuracy. Our final model uses 20
weight layers. By cascading small filters many times in a deep network
structure, contextual information over large image regions is exploited in an
efficient way. With very deep networks, however, convergence speed becomes a
critical issue during training. We propose a simple yet effective training
procedure. We learn residuals only and use extremely high learning rates
($10^4$ times higher than SRCNN \cite{dong2015image}) enabled by adjustable
gradient clipping. Our proposed method performs better than existing methods in
accuracy, and the visual improvements in our results are easily noticeable.
| Jiwon Kim, Jung Kwon Lee and Kyoung Mu Lee | null | 1511.04587 | null | null |
DeepFool: a simple and accurate method to fool deep neural networks | cs.LG cs.CV | State-of-the-art deep neural networks have achieved impressive results on
many image classification tasks. However, these same architectures have been
shown to be unstable to small, well sought, perturbations of the images.
Despite the importance of this phenomenon, no effective methods have been
proposed to accurately compute the robustness of state-of-the-art deep
classifiers to such perturbations on large-scale datasets. In this paper, we
fill this gap and propose the DeepFool algorithm to efficiently compute
perturbations that fool deep networks, and thus reliably quantify the
robustness of these classifiers. Extensive experimental results show that our
approach outperforms recent methods in the task of computing adversarial
perturbations and making classifiers more robust.
| Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Pascal Frossard | null | 1511.04599 | null | null |
Deep Reinforcement Learning with a Natural Language Action Space | cs.AI cs.CL cs.LG | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text.
| Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng,
Mari Ostendorf | null | 1511.04636 | null | null |
Deep Activity Recognition Models with Triaxial Accelerometers | cs.LG cs.HC cs.NE | Despite the widespread installation of accelerometers in almost all mobile
phones and wearable devices, activity recognition using accelerometers is still
immature due to the poor recognition accuracy of existing recognition methods
and the scarcity of labeled training data. We consider the problem of human
activity recognition using triaxial accelerometers and deep learning paradigms.
This paper shows that deep activity recognition models (a) provide better
recognition accuracy of human activities, (b) avoid the expensive design of
handcrafted features in existing systems, and (c) utilize the massive unlabeled
acceleration samples for unsupervised feature extraction. Moreover, a hybrid
approach of deep learning and hidden Markov models (DL-HMM) is presented for
sequential activity recognition. This hybrid approach integrates the
hierarchical representations of deep activity recognition models with the
stochastic modeling of temporal sequences in the hidden Markov models. We show
substantial recognition improvement on real world datasets over
state-of-the-art methods of human activity recognition using triaxial
accelerometers.
| Mohammad Abu Alsheikh, Ahmed Selim, Dusit Niyato, Linda Doyle, Shaowei
Lin, Hwee-Pink Tan | null | 1511.04664 | null | null |
Robust Elastic Net Regression | cs.LG stat.ML | We propose a robust elastic net (REN) model for high-dimensional sparse
regression and give its performance guarantees (both the statistical error
bound and the optimization bound). A simple idea of trimming the inner product
is applied to the elastic net model. Specifically, we robustify the covariance
matrix by trimming the inner product based on the intuition that the trimmed
inner product cannot be significantly affected by a bounded number of
arbitrarily corrupted points (outliers). The REN model also yields two
interesting special cases: robust Lasso and robust soft thresholding.
Comprehensive experimental results show that the proposed model consistently
outperforms the original elastic net in terms of robustness and matches the
performance guarantees nicely.
| Weiyang Liu, Rongmei Lin, Meng Yang | null | 1511.04690 | null | null |
An Iterative Reweighted Method for Tucker Decomposition of Incomplete
Multiway Tensors | cs.NA cs.LG | We consider the problem of low-rank decomposition of incomplete multiway
tensors. Since many real-world data lie on an intrinsically low dimensional
subspace, tensor low-rank decomposition with missing entries has applications
in many data analysis problems such as recommender systems and image
inpainting. In this paper, we focus on Tucker decomposition which represents an
Nth-order tensor in terms of N factor matrices and a core tensor via
multilinear operations. To exploit the underlying multilinear low-rank
structure in high-dimensional datasets, we propose a group-based log-sum
penalty functional to place structural sparsity over the core tensor, which
leads to a compact representation with the smallest core tensor. The method for
Tucker decomposition is developed by iteratively minimizing a surrogate
function that majorizes the original objective function, which results in an
iterative reweighted process. In addition, to reduce the computational
complexity, an over-relaxed monotone fast iterative shrinkage-thresholding
technique is adapted and embedded in the iterative reweighted process. The
proposed method is able to determine the model complexity (i.e. multilinear
rank) in an automatic way. Simulation results show that the proposed algorithm
offers competitive performance compared with other existing algorithms.
| Linxiao Yang and Jun Fang and Hongbin Li and Bing Zeng | 10.1109/TSP.2016.2572047 | 1511.04695 | null | null |
Deep Linear Discriminant Analysis | cs.LG | We introduce Deep Linear Discriminant Analysis (DeepLDA) which learns
linearly separable latent representations in an end-to-end fashion. Classic LDA
extracts features which preserve class separability and is used for
dimensionality reduction for many classification problems. The central idea of
this paper is to put LDA on top of a deep neural network. This can be seen as a
non-linear extension of classic LDA. Instead of maximizing the likelihood of
target labels for individual samples, we propose an objective function that
pushes the network to produce feature distributions which: (a) have low
variance within the same class and (b) high variance between different classes.
Our objective is derived from the general LDA eigenvalue problem and still
allows training with stochastic gradient descent and back-propagation. For
evaluation we test our approach on three different benchmark datasets (MNIST,
CIFAR-10 and STL-10). DeepLDA produces competitive results on MNIST and
CIFAR-10 and outperforms a network trained with categorical cross entropy (same
architecture) on a supervised setting of STL-10.
| Matthias Dorfer, Rainer Kelz and Gerhard Widmer | null | 1511.04707 | null | null |
Learning Representations of Affect from Speech | cs.CL cs.LG | There has been a lot of prior work on representation learning for speech
recognition applications, but not much emphasis has been given to an
investigation of effective representations of affect from speech, where the
paralinguistic elements of speech are separated out from the verbal content. In
this paper, we explore denoising autoencoders for learning paralinguistic
attributes i.e. categorical and dimensional affective traits from speech. We
show that the representations learnt by the bottleneck layer of the autoencoder
are highly discriminative of activation intensity and effective at separating out
negative valence (sadness and anger) from positive valence (happiness). We
experiment with different input speech features (such as FFT and log-mel
spectrograms with temporal context windows), and different autoencoder
architectures (such as stacked and deep autoencoders). We also learn utterance
specific representations by a combination of denoising autoencoders and BLSTM
based recurrent autoencoders. Emotion classification is performed with the
learnt temporal/dynamic representations to evaluate the quality of the
representations. Experiments on a well-established real-life speech dataset
(IEMOCAP) show that the learnt representations are comparable to state of the
art feature extractors (such as voice quality features and MFCCs) and are
competitive with state-of-the-art approaches at emotion and dimensional affect
recognition.
| Sayan Ghosh, Eugene Laksana, Louis-Philippe Morency, Stefan Scherer | null | 1511.04747 | null | null |
Large-Scale Approximate Kernel Canonical Correlation Analysis | cs.LG | Kernel canonical correlation analysis (KCCA) is a nonlinear multi-view
representation learning technique with broad applicability in statistics and
machine learning. Although there is a closed-form solution for the KCCA
objective, it involves solving an $N\times N$ eigenvalue system where $N$ is
the training set size, making its computational requirements in both memory and
time prohibitive for large-scale problems. Various approximation techniques
have been developed for KCCA. A commonly used approach is to first transform
the original inputs to an $M$-dimensional random feature space so that inner
products in the feature space approximate kernel evaluations, and then apply
linear CCA to the transformed inputs. In many applications, however, the
dimensionality $M$ of the random feature space may need to be very large in
order to obtain a sufficiently good approximation; it then becomes challenging
to perform the linear CCA step on the resulting very high-dimensional data
matrices. We show how to use a stochastic optimization algorithm, recently
proposed for linear CCA and its neural-network extension, to further alleviate
the computation requirements of approximate KCCA. This approach allows us to
run approximate KCCA on a speech dataset with $1.4$ million training samples
and a random feature space of dimensionality $M=100000$ on a typical
workstation.
| Weiran Wang, Karen Livescu | null | 1511.04773 | null | null |
Expressive recommender systems through normalized nonnegative models | cs.LG stat.ML | We introduce normalized nonnegative models (NNM) for explorative data
analysis. NNMs are partial convexifications of models from probability theory.
We demonstrate their value at the example of item recommendation. We show that
NNM-based recommender systems satisfy three criteria that all recommender
systems should ideally satisfy: high predictive power, computational
tractability, and expressive representations of users and items. Expressive
user and item representations are important in practice to succinctly summarize
the pool of customers and the pool of items. In NNMs, user representations are
expressive because each user's preference can be regarded as a normalized
mixture of preferences of stereotypical users. The interpretability of the item
and user representations allows us to arrange properties of items (e.g., genres of movies
or topics of documents) or users (e.g., personality traits) hierarchically.
| Cyril Stark | null | 1511.04775 | null | null |
Mixtures of Sparse Autoregressive Networks | stat.ML cs.LG | We consider high-dimensional distribution estimation through autoregressive
networks. By combining the concepts of sparsity, mixtures and parameter sharing
we obtain a simple model which is fast to train and which achieves
state-of-the-art or better results on several standard benchmark datasets.
Specifically, we use an L1-penalty to regularize the conditional distributions
and introduce a procedure for automatic parameter sharing between mixture
components. Moreover, we propose a simple distributed representation which
permits exact likelihood evaluations since the latent variables are interleaved
with the observable variables and can be easily integrated out. Our model
achieves excellent generalization performance and scales well to extremely high
dimensions.
| Marc Goessling, Yali Amit | null | 1511.04776 | null | null |
Causal interpretation rules for encoding and decoding models in
neuroimaging | stat.ML cs.LG q-bio.NC stat.AP | Causal terminology is often introduced in the interpretation of encoding and
decoding models trained on neuroimaging data. In this article, we investigate
which causal statements are warranted and which ones are not supported by
empirical evidence. We argue that the distinction between encoding and decoding
models is not sufficient for this purpose: relevant features in encoding and
decoding models carry a different meaning in stimulus- and in response-based
experimental paradigms. We show that only encoding models in the stimulus-based
setting support unambiguous causal interpretations. By combining encoding and
decoding models trained on the same data, however, we obtain insights into
causal relations beyond those that are implied by each individual model type.
We illustrate the empirical relevance of our theoretical findings on EEG data
recorded during a visuo-motor learning task.
| Sebastian Weichwald, Timm Meyer, Ozan \"Ozdenizci, Bernhard
Sch\"olkopf, Tonio Ball, Moritz Grosse-Wentrup | 10.1016/j.neuroimage.2015.01.036 | 1511.04780 | null | null |
Budget Online Multiple Kernel Learning | cs.LG | Online learning with multiple kernels has gained increasing interest in
recent years and found many applications. For classification tasks, Online
Multiple Kernel Classification (OMKC), which learns a kernel based classifier
by seeking the optimal linear combination of a pool of single kernel
classifiers in an online fashion, achieves superior accuracy and enjoys great
flexibility compared with traditional single-kernel classifiers. Despite being
studied extensively, existing OMKC algorithms suffer from high computational
cost due to their unbounded numbers of support vectors. To overcome this
drawback, we present a novel framework of Budget Online Multiple Kernel
Learning (BOMKL) and propose a new Sparse Passive Aggressive learning to
perform effective budget online learning. Specifically, we adopt a simple yet
effective Bernoulli sampling to decide if an incoming instance should be added
to the current set of support vectors. By limiting the number of support
vectors, our method can significantly accelerate OMKC while maintaining
satisfactory accuracy that is comparable to that of the existing OMKC
algorithms. We theoretically prove that our new method achieves an optimal
regret bound in expectation, and empirically find that the proposed algorithm
outperforms various OMKC algorithms and can easily scale up to large-scale
datasets.
| Jing Lu, Steven C.H. Hoi, Doyen Sahoo, Peilin Zhao | null | 1511.04813 | null | null |
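A simplified single-kernel illustration of the budgeting idea described above, assuming a plain kernel perceptron: on each mistake a new support vector is admitted only with Bernoulli probability rho, which keeps the support set (and hence prediction cost) bounded in expectation. This is not the paper's Sparse Passive Aggressive update or its multiple-kernel combination; all parameter names are illustrative.

```python
import numpy as np

def rbf(x, z, gamma=0.5):
    return np.exp(-gamma * np.sum((x - z) ** 2))

def budget_online_kernel_learner(stream, rho=0.3, eta=1.0, gamma=0.5, seed=0):
    """Online kernel classifier that admits new support vectors with probability rho.

    stream: iterable of (x, y) pairs with y in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    support, alphas, mistakes = [], [], 0
    for x, y in stream:
        f = sum(a * rbf(x, s, gamma) for a, s in zip(alphas, support))
        if y * f <= 0:
            mistakes += 1
            # Bernoulli sampling bounds the expected number of support vectors.
            if rng.random() < rho:
                support.append(x)
                alphas.append(eta * y / rho)  # rescale so the update is unbiased in expectation
    return support, alphas, mistakes

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=2000))
sv, al, errs = budget_online_kernel_learner(zip(X, y))
print(len(sv), errs)
```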
Neural Programmer: Inducing Latent Programs with Gradient Descent | cs.LG cs.CL stat.ML | Deep neural networks have achieved impressive supervised classification
performance in many tasks including image recognition, speech recognition, and
sequence to sequence learning. However, this success has not been translated to
applications like question answering that may involve complex arithmetic and
logic reasoning. A major limitation of these models is in their inability to
learn even simple arithmetic and logic operations. For example, it has been
shown that neural networks fail to learn to add two binary numbers reliably. In
this work, we propose Neural Programmer, an end-to-end differentiable neural
network augmented with a small set of basic arithmetic and logic operations.
Neural Programmer can call these augmented operations over several steps,
thereby inducing compositional programs that are more complex than the built-in
operations. The model learns from a weak supervision signal which is the result
of execution of the correct program, hence it does not require expensive
annotation of the correct program itself. The decisions of what operations to
call, and what data segments to apply to are inferred by Neural Programmer.
Such decisions, during training, are done in a differentiable fashion so that
the entire network can be trained jointly by gradient descent. We find that
training the model is difficult, but it can be greatly improved by adding
random noise to the gradient. On a fairly complex synthetic table-comprehension
dataset, traditional recurrent networks and attentional models perform poorly
while Neural Programmer typically obtains nearly perfect accuracy.
| Arvind Neelakantan, Quoc V. Le, Ilya Sutskever | null | 1511.04834 | null | null |
Nonparametric Canonical Correlation Analysis | cs.LG stat.ML | Canonical correlation analysis (CCA) is a classical representation learning
technique for finding correlated variables in multi-view data. Several
nonlinear extensions of the original linear CCA have been proposed, including
kernel and deep neural network methods. These approaches seek maximally
correlated projections among families of functions, which the user specifies
(by choosing a kernel or neural network structure), and are computationally
demanding. Interestingly, the theory of nonlinear CCA, without functional
restrictions, had been studied in the population setting by Lancaster already
in the 1950s, but these results have not inspired practical algorithms. We
revisit Lancaster's theory to devise a practical algorithm for nonparametric
CCA (NCCA). Specifically, we show that the solution can be expressed in terms
of the singular value decomposition of a certain operator associated with the
joint density of the views. Thus, by estimating the population density from
data, NCCA reduces to solving an eigenvalue system, superficially like kernel
CCA but, importantly, without requiring the inversion of any kernel matrix. We
also derive a partially linear CCA (PLCCA) variant in which one of the views
undergoes a linear projection while the other is nonparametric. Using a kernel
density estimate based on a small number of nearest neighbors, our NCCA and
PLCCA algorithms are memory-efficient, often run much faster, and perform
better than kernel CCA and comparable to deep CCA.
| Tomer Michaeli, Weiran Wang, Karen Livescu | null | 1511.04839 | null | null |
Deep learning is a good steganalysis tool when embedding key is reused
for different images, even if there is a cover source-mismatch | cs.MM cs.CV cs.LG cs.NE | Since the BOSS competition, in 2010, most steganalysis approaches use a
learning methodology involving two steps: feature extraction, such as the Rich
Models (RM), for the image representation, and use of the Ensemble Classifier
(EC) for the learning step. In 2015, Qian et al. have shown that the use of a
deep learning approach that jointly learns and computes the features, is very
promising for steganalysis. In this paper, we follow up on the study of Qian
et al. and show that, due to intrinsic joint minimization, the results
obtained from a Convolutional Neural Network (CNN) or a Fully Connected Neural
Network (FNN), if well parameterized, surpass the conventional use of an RM with
an EC. First, numerous experiments were conducted in order to find the best
"shape" of the CNN. Second, experiments were carried out in the clairvoyant
scenario in order to compare the CNN and FNN to an RM with an EC. The results
show more than 16% reduction in the classification error with our CNN or FNN.
Third, experiments were also performed in a cover-source mismatch setting. The
results show that the CNN and FNN are naturally robust to the mismatch problem.
In addition to the experiments, we provide discussions on the internal
mechanisms of a CNN, and weave links with some previously stated ideas, in
order to understand the impressive results we obtained.
| Lionel Pibre, Pasquet J\'er\^ome, Dino Ienco, Marc Chaumont | null | 1511.04855 | null | null |
A Neural Transducer | cs.LG cs.CL cs.NE | Sequence-to-sequence models have achieved impressive results on various
tasks. However, they are unsuitable for tasks that require incremental
predictions to be made as more data arrives or tasks that have long input
sequences and output sequences. This is because they generate an output
sequence conditioned on an entire input sequence. In this paper, we present a
Neural Transducer that can make incremental predictions as more input arrives,
without redoing the entire computation. Unlike sequence-to-sequence models, the
Neural Transducer computes the next-step distribution conditioned on the
partially observed input sequence and the partially generated sequence. At each
time step, the transducer can decide to emit zero to many output symbols. The
data can be processed using an encoder and presented as input to the
transducer. The discrete decision to emit a symbol at every time step makes it
difficult to learn with conventional backpropagation. It is however possible to
train the transducer by using a dynamic programming algorithm to generate
target discrete decisions. Our experiments show that the Neural Transducer
works well in settings where it is required to produce output predictions as
data come in. We also find that the Neural Transducer performs well for long
sequences even when attention mechanisms are not used.
| Navdeep Jaitly, David Sussillo, Quoc V. Le, Oriol Vinyals, Ilya
Sutskever and Samy Bengio | null | 1511.04868 | null | null |
Sherlock: Scalable Fact Learning in Images | cs.CV cs.CL cs.LG | We study scalable and uniform understanding of facts in images. Existing
visual recognition systems are typically modeled differently for each fact type
such as objects, actions, and interactions. We propose a setting where all
these facts can be modeled simultaneously with a capacity to understand
an unbounded number of facts in a structured way. The training data comes as
structured facts in images, including (1) objects (e.g., $<$boy$>$), (2)
attributes (e.g., $<$boy, tall$>$), (3) actions (e.g., $<$boy, playing$>$), and
(4) interactions (e.g., $<$boy, riding, a horse $>$). Each fact has a semantic
language view (e.g., $<$ boy, playing$>$) and a visual view (an image with this
fact). We show that learning visual facts in a structured way enables not only
a uniform but also generalizable visual understanding. We propose and
investigate recent and strong approaches from the multiview learning literature
and also introduce two learning representation models as potential baselines.
We applied the investigated methods on several datasets that we augmented with
structured facts and a large scale dataset of more than 202,000 facts and
814,000 images. Our experiments show the advantage of relating facts by their
structure, as done by the proposed models, over the designed baselines on
bidirectional fact retrieval.
| Mohamed Elhoseiny, Scott Cohen, Walter Chang, Brian Price, Ahmed
Elgammal | null | 1511.04891 | null | null |
Performing Highly Accurate Predictions Through Convolutional Networks
for Actual Telecommunication Challenges | cs.LG cs.CV | We investigated how the application of deep learning, specifically the use of
convolutional networks trained with GPUs, can help to build better predictive
models in telecommunication business environments. In
particular, we focus on the non-trivial problem of predicting customer churn in
telecommunication operators. Our model, called WiseNet, consists of a
convolutional network and a novel encoding method that transforms customer
activity data and Call Detail Records (CDRs) into images. Experimental
evaluation with several machine learning classifiers supports the ability of
WiseNet to learn features when using structured input data. For this type
of telecommunication business problems, we found that WiseNet outperforms
machine learning models with hand-crafted features, and does not require the
labor-intensive step of feature engineering. Furthermore, the same model has
been applied without retraining to a different market, achieving consistent
results. This confirms the generalization property of WiseNet and the ability
to extract useful representations.
| Jaime Zaratiegui, Ana Montoro and Federico Castanedo | null | 1511.04906 | null | null |
A genetic algorithm to discover flexible motifs with support | cs.LG cs.NE | Finding repeated patterns or motifs in a time series is an important
unsupervised task that still has a number of open issues, starting with the
definition of a motif. In this paper, we revise the notion of motif support,
characterizing it as the number of patterns or repetitions that define a motif.
We then propose GENMOTIF, a genetic algorithm to discover motifs with support
which, at the same time, is flexible enough to accommodate other motif
specifications and task characteristics. GENMOTIF is an anytime algorithm that
easily adapts to many situations: searching in a range of segment lengths,
applying uniform scaling, dealing with multiple dimensions, using different
similarity and grouping criteria, etc. GENMOTIF is also parameter-friendly: it
has only two intuitive parameters which, if set within reasonable bounds, do
not substantially affect its performance. We demonstrate the value of our
approach in a number of synthetic and real-world settings, considering traffic
volume measurements, accelerometer signals, and telephone call records.
| Joan Serr\`a, Aleksandar Matic, Josep Luis Arcos, Alexandros
Karatzoglou | null | 1511.04986 | null | null |
An Exploration of Softmax Alternatives Belonging to the Spherical Loss
Family | cs.NE cs.LG stat.ML | In a multi-class classification problem, it is standard to model the output
of a neural network as a categorical distribution conditioned on the inputs.
The output must therefore be positive and sum to one, which is traditionally
enforced by a softmax. This probabilistic mapping allows the use of the maximum
likelihood principle, which leads to the well-known log-softmax loss. However,
the choice of the softmax function seems somewhat arbitrary, as there are many
other possible normalizing functions. It is thus unclear why the log-softmax
loss would perform better than other loss alternatives. In particular Vincent
et al. (2015) recently introduced a class of loss functions, called the
spherical family, for which there exists an efficient algorithm to compute the
updates of the output weights irrespective of the output size. In this paper,
we explore several loss functions from this family as possible alternatives to
the traditional log-softmax. In particular, we focus our investigation on
spherical bounds of the log-softmax loss and on two spherical log-likelihood
losses, namely the log-Spherical Softmax suggested by Vincent et al. (2015) and
the log-Taylor Softmax that we introduce. Although these alternatives do not
yield as good results as the log-softmax loss on two language modeling tasks,
they surprisingly outperform it in our experiments on MNIST and CIFAR-10,
suggesting that they might be relevant in a broad range of applications.
| Alexandre de Br\'ebisson and Pascal Vincent | null | 1511.05042 | null | null |
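A small NumPy sketch of the kind of spherical alternative discussed above, assuming the log-Taylor softmax is the normalization obtained from the second-order Taylor expansion of exp; this is an illustrative reconstruction, not the authors' exact definition or code.

```python
import numpy as np

def taylor_softmax(z):
    """Normalize t(z_i) = 1 + z_i + z_i**2 / 2 (second-order Taylor expansion of exp).

    t(z) is strictly positive for every real z (its discriminant is negative),
    so the outputs form a valid probability distribution without exponentials.
    """
    t = 1.0 + z + 0.5 * z ** 2
    return t / t.sum(axis=-1, keepdims=True)

def log_taylor_softmax_loss(z, target):
    """Negative log-likelihood under the Taylor-softmax output mapping."""
    p = taylor_softmax(z)
    return -np.log(p[..., target])

logits = np.array([2.0, -1.0, 0.5])
print(taylor_softmax(logits), log_taylor_softmax_loss(logits, 0))
```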
Diversity Networks: Neural Network Compression Using Determinantal Point
Processes | cs.LG cs.NE | We introduce Divnet, a flexible technique for learning networks with diverse
neurons. Divnet models neuronal diversity by placing a Determinantal Point
Process (DPP) over neurons in a given layer. It uses this DPP to select a
subset of diverse neurons and subsequently fuses the redundant neurons into the
selected ones. Compared with previous approaches, Divnet offers a more
principled, flexible technique for capturing neuronal diversity and thus
implicitly enforcing regularization. This enables effective auto-tuning of
network architecture and leads to smaller network sizes without hurting
performance. Moreover, through its focus on diversity and neuron fusing, Divnet
remains compatible with other procedures that seek to reduce memory footprints
of networks. We present experimental results to corroborate our claims: for
pruning neural networks, Divnet is seen to be notably superior to competing
approaches.
| Zelda Mariet, Suvrit Sra | null | 1511.05077 | null | null |
Topic Modeling of Behavioral Modes Using Sensor Data | cs.LG | The field of Movement Ecology, like so many other fields, is experiencing a
period of rapid growth in availability of data. As the volume rises,
traditional methods are giving way to machine learning and data science, which
are playing an increasingly large part in turning this data into
science-driving insights. One rich and interesting source is the bio-logger.
These small electronic wearable devices are attached to animals free to roam in
their natural habitats, and report back readings from multiple sensors,
including GPS and accelerometer bursts. A common use of accelerometer data is
for supervised learning of behavioral modes. However, we need unsupervised
analysis tools as well, in order to overcome the inherent difficulties of
obtaining a labeled dataset, which in some cases is either infeasible or does
not successfully encompass the full repertoire of behavioral modes of interest.
Here we present a matrix factorization based topic-model method for
accelerometer bursts, derived using a linear mixture property of patch
features. Our method is validated via comparison to a labeled dataset, and is
further compared to standard clustering algorithms.
| Yehezkel S. Resheff, Shay Rotics, Ran Nathan, Daphna Weinshall | null | 1511.05082 | null | null |
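A minimal sketch of matrix-factorization topic modeling over non-negative patch-feature counts, using scikit-learn's NMF. The synthetic "burst-by-feature" matrix and the number of behavioral modes are hypothetical placeholders; the paper's patch-feature construction and its specific linear-mixture derivation are not reproduced.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical document-by-feature matrix: each row is one accelerometer burst,
# each column a non-negative count of a quantized patch feature ("word").
rng = np.random.default_rng(0)
true_topics = rng.dirichlet(0.1 * np.ones(50), size=4)     # 4 behavioral modes
mixtures = rng.dirichlet(0.5 * np.ones(4), size=300)       # per-burst mode mixtures
counts = rng.poisson(100 * mixtures @ true_topics)         # observed feature counts

model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(counts)   # burst-by-topic loadings (behavioral-mode proportions)
H = model.components_             # topic-by-feature matrix (mode "signatures")

# Normalize rows of W to interpret them as per-burst mixtures of behavioral modes.
proportions = W / W.sum(axis=1, keepdims=True)
print(proportions[:5].round(2))
```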
Yin and Yang: Balancing and Answering Binary Visual Questions | cs.CL cs.CV cs.LG | The complex compositional structure of language makes problems at the
intersection of vision and language challenging. But language also provides a
strong prior that can result in good superficial performance, without the
underlying models truly understanding the visual content. This can hinder
progress in pushing the state of the art in the computer vision aspects of multi-modal
AI. In this paper, we address binary Visual Question Answering (VQA) on
abstract scenes. We formulate this problem as visual verification of concepts
inquired in the questions. Specifically, we convert the question to a tuple
that concisely summarizes the visual concept to be detected in the image. If
the concept can be found in the image, the answer to the question is "yes", and
otherwise "no". Abstract scenes play two roles (1) They allow us to focus on
the high-level semantics of the VQA task as opposed to the low-level
recognition problems, and perhaps more importantly, (2) They provide us the
modality to balance the dataset such that language priors are controlled, and
the role of vision is essential. In particular, we collect fine-grained pairs
of scenes for every question, such that the answer to the question is "yes" for
one scene, and "no" for the other for the exact same question. Indeed, language
priors alone do not perform better than chance on our balanced dataset.
Moreover, our proposed approach matches the performance of a state-of-the-art
VQA approach on the unbalanced dataset, and outperforms it on the balanced
dataset.
| Peng Zhang, Yash Goyal, Douglas Summers-Stay, Dhruv Batra, Devi Parikh | null | 1511.05099 | null | null |
How (not) to Train your Generative Model: Scheduled Sampling,
Likelihood, Adversary? | stat.ML cs.AI cs.IT cs.LG math.IT | Modern applications and progress in deep learning research have created
renewed interest for generative models of text and of images. However, even
today it is unclear what objective functions one should use to train and
evaluate these models. In this paper we present two contributions.
Firstly, we present a critique of scheduled sampling, a state-of-the-art
training method that contributed to the winning entry to the MSCOCO image
captioning benchmark in 2015. Here we show that despite this impressive
empirical performance, the objective function underlying scheduled sampling is
improper and leads to an inconsistent learning algorithm.
Secondly, we revisit the problems that scheduled sampling was meant to
address, and present an alternative interpretation. We argue that maximum
likelihood is an inappropriate training objective when the end-goal is to
generate natural-looking samples. We go on to derive an ideal objective
function to use in this situation instead. We introduce a generalisation of
adversarial training, and show how such method can interpolate between maximum
likelihood training and our ideal training objective. To our knowledge this is
the first theoretical analysis that explains why adversarial training tends to
produce samples with higher perceived quality.
| Ferenc Husz\'ar | null | 1511.05101 | null | null |
Resolving the Geometric Locus Dilemma for Support Vector Learning
Machines | cs.LG stat.ML | Capacity control, the bias/variance dilemma, and learning unknown functions
from data, are all concerned with identifying effective and consistent fits of
unknown geometric loci to random data points. A geometric locus is a curve or
surface formed by points, all of which possess some uniform property. A
geometric locus of an algebraic equation is the set of points whose coordinates
are solutions of the equation. Any given curve or surface must pass through
each point on a specified locus. This paper argues that it is impossible to fit
random data points to algebraic equations of partially configured geometric
loci that reference arbitrary Cartesian coordinate systems. It also argues that
the fundamental curve of a linear decision boundary is actually a principal
eigenaxis. It is shown that learning principal eigenaxes of linear decision
boundaries involves finding a point of statistical equilibrium for which
eigenenergies of principal eigenaxis components are symmetrically balanced with
each other. It is demonstrated that learning linear decision boundaries
involves strong duality relationships between a statistical eigenlocus of
principal eigenaxis components and its algebraic forms, in primal and dual,
correlated Hilbert spaces. Locus equations are introduced and developed that
describe principal eigen-coordinate systems for lines, planes, and hyperplanes.
These equations are used to introduce and develop primal and dual statistical
eigenlocus equations of principal eigenaxes of linear decision boundaries.
Important generalizations for linear decision boundaries are shown to be
encoded within a dual statistical eigenlocus of principal eigenaxis components.
Principal eigenaxes of linear decision boundaries are shown to encode Bayes'
likelihood ratio for common covariance data and a robust likelihood ratio for
all other data.
| Denise M. Reeves | null | 1511.05102 | null | null |
Random sampling of bandlimited signals on graphs | cs.SI cs.LG stat.ML | We study the problem of sampling k-bandlimited signals on graphs. We propose
two sampling strategies that consist in selecting a small subset of nodes at
random. The first strategy is non-adaptive, i.e., independent of the graph
structure, and its performance depends on a parameter called the graph
coherence. On the contrary, the second strategy is adaptive but yields optimal
results. Indeed, no more than O(k log(k)) measurements are sufficient to ensure
an accurate and stable recovery of all k-bandlimited signals. This second
strategy is based on a careful choice of the sampling distribution, which can
be estimated quickly. Then, we propose a computationally efficient decoder to
reconstruct k-bandlimited signals from their samples. We prove that it yields
accurate reconstructions and that it is also stable to noise. Finally, we
conduct several experiments to test these techniques.
| Gilles Puy, Nicolas Tremblay, R\'emi Gribonval, Pierre Vandergheynst | null | 1511.05118 | null | null |
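A toy sketch of the non-adaptive strategy described above: build a graph, take a k-bandlimited signal (one lying in the span of the first k Laplacian eigenvectors), sample m node values uniformly at random, and recover by least squares restricted to that span. The graph construction is arbitrary, and the paper's coherence-based adaptive sampling distribution and efficient decoder are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 200, 5, 40

# Simple random graph: connect each node to a few random neighbours.
A = np.zeros((n, n))
for i in range(n):
    for j in rng.choice(n, size=4, replace=False):
        if i != j:
            A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(1)) - A                       # combinatorial graph Laplacian

# k-bandlimited signal: a combination of the first k Laplacian eigenvectors.
eigvals, U = np.linalg.eigh(L)
Uk = U[:, :k]
x = Uk @ rng.normal(size=k)

# Uniform random sampling of m node values (the non-adaptive strategy).
idx = rng.choice(n, size=m, replace=False)
y = x[idx] + 0.01 * rng.normal(size=m)

# Least-squares recovery restricted to span(Uk).
coef, *_ = np.linalg.lstsq(Uk[idx], y, rcond=None)
x_hat = Uk @ coef
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```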
Deep Kalman Filters | stat.ML cs.LG | Kalman Filters are one of the most influential models of time-varying
phenomena. They admit an intuitive probabilistic interpretation, have a simple
functional form, and enjoy widespread adoption in a variety of disciplines.
Motivated by recent variational methods for learning deep generative models, we
introduce a unified algorithm to efficiently learn a broad spectrum of Kalman
filters. Of particular interest is the use of temporal generative models for
counterfactual inference. We investigate the efficacy of such models for
counterfactual inference, and to that end we introduce the "Healing MNIST"
dataset where long-term structure, noise and actions are applied to sequences
of digits. We show the efficacy of our method for modeling this dataset. We
further show how our model can be used for counterfactual inference for
patients, based on electronic health record data of 8,000 patients over 4.5
years.
| Rahul G. Krishnan, Uri Shalit, David Sontag | null | 1511.05121 | null | null |
Adversarial Manipulation of Deep Representations | cs.CV cs.LG cs.NE | We show that the representation of an image in a deep neural network (DNN)
can be manipulated to mimic those of other natural images, with only minor,
imperceptible perturbations to the original image. Previous methods for
generating adversarial images focused on image perturbations designed to
produce erroneous class labels, while we concentrate on the internal layers of
DNN representations. In this way our new class of adversarial images differs
qualitatively from others. While the adversary is perceptually similar to one
image, its internal representation appears remarkably similar to a different
image, one from a different class, bearing little if any apparent similarity to
the input; they appear generic and consistent with the space of natural images.
This phenomenon raises questions about DNN representations, as well as the
properties of natural images themselves.
| Sara Sabour, Yanshuai Cao, Fartash Faghri, David J. Fleet | null | 1511.05122 | null | null |
Fast Proximal Linearized Alternating Direction Method of Multiplier with
Parallel Splitting | math.OC cs.LG cs.NA | The Augmented Lagrangian Method (ALM) and the Alternating Direction Method of
Multipliers (ADMM) have been powerful optimization methods for general convex
programming subject to linear constraints. We consider the convex problem whose
objective consists of a smooth part and a nonsmooth but simple part. We propose
the Fast Proximal Augmented Lagrangian Method (Fast PALM), which achieves the
convergence rate $O(1/K^2)$, compared with $O(1/K)$ by the traditional PALM. In
order to further reduce the per-iteration complexity and handle the
multi-blocks problem, we propose the Fast Proximal ADMM with Parallel Splitting
(Fast PL-ADMM-PS) method. It also partially improves the rate related to the
smooth part of the objective function. Experimental results on both synthesized
and real-world data demonstrate that our fast methods significantly improve upon
the previous PALM and ADMM.
| Canyi Lu, Huan Li, Zhouchen Lin, Shuicheng Yan | null | 1511.05133 | null | null |
Convolutional Models for Joint Object Categorization and Pose Estimation | cs.CV cs.AI cs.LG | In the task of Object Recognition, there exists a dichotomy between the
categorization of objects and estimating object pose, where the former
necessitates a view-invariant representation, while the latter requires a
representation capable of capturing pose information over different categories
of objects. With the rise of deep architectures, the prime focus has been on
object category recognition. Deep learning methods have achieved wide success
in this task. In contrast, object pose regression using these approaches has
received much less attention. In this paper we show how deep
architectures, specifically Convolutional Neural Networks (CNN), can be adapted
to the task of simultaneous categorization and pose estimation of objects. We
investigate and analyze the layers of various CNN models and extensively
compare between them with the goal of discovering how the layers of distributed
representations of CNNs represent object pose information and how this
conflicts with object category representations. We extensively experiment on
two recent large and challenging multi-view datasets. Our models achieve better
than state-of-the-art performance on both datasets.
| Mohamed Elhoseiny, Tarek El-Gaaly, Amr Bakry, Ahmed Elgammal | null | 1511.05175 | null | null |
MuProp: Unbiased Backpropagation for Stochastic Neural Networks | cs.LG | Deep neural networks are powerful parametric models that can be trained
efficiently using the backpropagation algorithm. Stochastic neural networks
combine the power of large parametric functions with that of graphical models,
which makes it possible to learn very complex distributions. However, as
backpropagation is not directly applicable to stochastic networks that include
discrete sampling operations within their computational graph, training such
networks remains difficult. We present MuProp, an unbiased gradient estimator
for stochastic networks, designed to make this task easier. MuProp improves on
the likelihood-ratio estimator by reducing its variance using a control variate
based on the first-order Taylor expansion of a mean-field network. Crucially,
unlike prior attempts at using backpropagation for training stochastic
networks, the resulting estimator is unbiased and well behaved. Our experiments
on structured output prediction and discrete latent variable modeling
demonstrate that MuProp yields consistently good performance across a range of
difficult tasks.
| Shixiang Gu, Sergey Levine, Ilya Sutskever, Andriy Mnih | null | 1511.05176 | null | null |
Binary Classifier Calibration using an Ensemble of Near Isotonic
Regression Models | cs.LG stat.ML | Learning accurate probabilistic models from data is crucial in many practical
tasks in data mining. In this paper we present a new non-parametric calibration
method called \textit{ensemble of near isotonic regression} (ENIR). The method
can be considered as an extension of BBQ, a recently proposed calibration
method, as well as the commonly used calibration method based on isotonic
regression. ENIR is designed to address the key limitation of isotonic
regression which is the monotonicity assumption of the predictions. Similar to
BBQ, the method post-processes the output of a binary classifier to obtain
calibrated probabilities. Thus it can be combined with many existing
classification models. We demonstrate the performance of ENIR on synthetic and
real datasets for the commonly used binary classification models. Experimental
results show that the method outperforms several common binary classifier
calibration methods. In particular on the real data, ENIR commonly performs
statistically significantly better than the other methods, and never worse. It
is able to improve the calibration power of classifiers, while retaining their
discrimination power. The method is also computationally tractable for large
scale datasets, as it is $O(N \log N)$ time, where $N$ is the number of
samples.
| Mahdi Pakdaman Naeini, Gregory F. Cooper | null | 1511.05191 | null | null |
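ENIR itself averages several near-isotonic fits; as context, here is a minimal scikit-learn sketch of the plain isotonic-regression baseline it extends, applied to held-out scores of a hypothetical base classifier. The data, classifier, and split sizes are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical setup: calibrate the scores of a base classifier on held-out data.
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)

base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = base.predict_proba(X_cal)[:, 1]

# Isotonic regression maps raw scores to calibrated probabilities monotonically;
# ENIR relaxes exactly this monotonicity constraint and ensembles several fits.
iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
calibrated = iso.fit_transform(scores, y_cal)
print(np.c_[scores[:5], calibrated[:5]].round(3))
```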
Sparse-promoting Full Waveform Inversion based on Online Orthonormal
Dictionary Learning | physics.geo-ph cs.LG cs.NA math.NA | Full waveform inversion (FWI) delivers high-resolution images of the
subsurface by iteratively minimizing the misfit between the recorded and
calculated seismic data. It has been attacked successfully with the
Gauss-Newton method and sparsity promoting regularization based on fixed
multiscale transforms that permit significant subsampling of the seismic data
when the model perturbation at each FWI data-fitting iteration can be
represented with sparse coefficients. Rather than using analytical transforms
with predefined dictionaries to achieve sparse representation, we introduce an
adaptive transform called the Sparse Orthonormal Transform (SOT) whose
dictionary is learned from many small training patches taken from the model
perturbations in previous iterations. The patch-based dictionary is constrained
to be orthonormal and trained with an online approach to provide the best
sparse representation of the complex features and variations of the entire
model perturbation. The complexity of the training method is proportional to
the cube of the number of samples in one small patch. By incorporating both
compressive subsampling and the adaptive SOT-based representation into the
Gauss-Newton least-squares problem for each FWI iteration, the model
perturbation can be recovered after an l1-norm sparsity constraint is applied
on the SOT coefficients. Numerical experiments on synthetic models demonstrate
that the SOT-based sparsity promoting regularization can provide robust FWI
results with reduced computation.
| Lingchen Zhu, Entao Liu, James H. McClellan | null | 1511.05194 | null | null |
Binary embeddings with structured hashed projections | cs.LG | We consider the hashing mechanism for constructing binary embeddings, that
involves pseudo-random projections followed by nonlinear (sign function)
mappings. The pseudo-random projection is described by a matrix, where not all
entries are independent random variables but instead a fixed "budget of
randomness" is distributed across the matrix. Such matrices can be efficiently
stored in sub-quadratic or even linear space, provide reduction in randomness
usage (i.e. number of required random values), and very often lead to
computational speed ups. We prove several theoretical results showing that
projections via various structured matrices followed by nonlinear mappings
accurately preserve the angular distance between input high-dimensional
vectors. To the best of our knowledge, these results are the first that give
theoretical ground for the use of general structured matrices in the nonlinear
setting. In particular, they generalize previous extensions of the
Johnson-Lindenstrauss lemma and prove the plausibility of the approach that was
so far only heuristically confirmed for some special structured matrices.
Consequently, we show that many structured matrices can be used as an efficient
information compression mechanism. Our findings build a better understanding of
certain deep architectures, which contain randomly weighted and untrained
layers, and yet achieve high performance on different learning tasks. We
empirically verify our theoretical findings and show the dependence of learning
via structured hashed projections on the performance of neural network as well
as nearest neighbor classifier.
| Anna Choromanska and Krzysztof Choromanski and Mariusz Bojarski and
Tony Jebara and Sanjiv Kumar and Yann LeCun | null | 1511.05212 | null | null |
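A sketch of the hashing mechanism described above for one example of a structured matrix: a circulant projection (defined by a single random vector, so the "budget of randomness" is d values instead of d*d), random sign flips, and a sign nonlinearity. The comparison against a dense Gaussian projection and all sizes are illustrative.

```python
import numpy as np

def circulant_sign_embedding(X, c, D):
    """Binary embedding via a circulant projection defined by one random vector c,
    with a random diagonal sign matrix D, followed by the sign nonlinearity.
    Circulant matrix-vector products are computed in O(d log d) with the FFT."""
    proj = np.real(np.fft.ifft(np.fft.fft(X * D, axis=1) * np.fft.fft(c), axis=1))
    return np.sign(proj)

def gaussian_sign_embedding(X, G):
    return np.sign(X @ G.T)

rng = np.random.default_rng(0)
d, n = 128, 500
X = rng.normal(size=(n, d))
c = rng.normal(size=d)                  # single random vector defines the circulant matrix
D = rng.choice([-1.0, 1.0], size=d)     # random sign flips
G = rng.normal(size=(d, d))             # unstructured baseline: d*d random entries

B_struct = circulant_sign_embedding(X, c, D)
B_dense = gaussian_sign_embedding(X, G)

# The normalized Hamming distance between codes approximates the angular distance.
ham = lambda B: (B[0] != B[1]).mean()
angle = np.arccos(X[0] @ X[1] / (np.linalg.norm(X[0]) * np.linalg.norm(X[1]))) / np.pi
print(angle, ham(B_struct), ham(B_dense))
```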
How much does your data exploration overfit? Controlling bias via
information usage | stat.ML cs.LG | Modern data is messy and high-dimensional, and it is often not clear a priori
what are the right questions to ask. Instead, the analyst typically needs to
use the data to search for interesting analyses to perform and hypotheses to
test. This is an adaptive process, where the choice of analysis to be performed
next depends on the results of the previous analyses on the same data.
Ultimately, which results are reported can be heavily influenced by the data.
It is widely recognized that this process, even if well-intentioned, can lead
to biases and false discoveries, contributing to the crisis of reproducibility
in science. But while any data exploration
renders standard statistical theory invalid, experience suggests that different
types of exploratory analysis can lead to disparate levels of bias, and the
degree of bias also depends on the particulars of the data set. In this paper,
we propose a general information usage framework to quantify and provably bound
the bias and other error metrics of an arbitrary exploratory analysis. We prove
that our mutual information based bound is tight in natural settings, and then
use it to give rigorous insights into when commonly used procedures do or do
not lead to substantially biased estimation. Through the lens of information
usage, we analyze the bias of specific exploration procedures such as
filtering, rank selection and clustering. Our general framework also naturally
motivates randomization techniques that provably reduce exploration bias while
preserving the utility of the data analysis. We discuss the connections between
our approach and related ideas from differential privacy and blinded data
analysis, and supplement our results with illustrative simulations.
| Daniel Russo and James Zou | null | 1511.05219 | null | null |
Reduced-Precision Strategies for Bounded Memory in Deep Neural Nets | cs.LG cs.NE | This work investigates how using reduced precision data in Convolutional
Neural Networks (CNNs) affects network accuracy during classification. More
specifically, this study considers networks where each layer may use different
precision data. Our key result is that the tolerance of CNNs to
reduced precision data not only varies across networks, a well-established
observation, but also within networks. Tuning precision per layer is appealing
as it could enable energy and performance improvements. In this paper we study
how error tolerance across layers varies and propose a method for finding a low
precision configuration for a network while maintaining high accuracy. A
diverse set of CNNs is analyzed showing that compared to a conventional
implementation using a 32-bit floating-point representation for all layers, and
with less than 1% loss in relative accuracy, the data footprint required by
these networks can be reduced by an average of 74% and up to 92%.
| Patrick Judd, Jorge Albericio, Tayler Hetherington, Tor Aamodt,
Natalie Enright Jerger, Raquel Urtasun, Andreas Moshovos | null | 1511.05236 | null | null |
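A minimal sketch of per-layer fixed-point quantization of the kind evaluated above: each layer gets its own (total bits, fractional bits) setting, and values are rounded and clipped to that grid. The layer names and precision profile are hypothetical, and the paper's procedure for finding a low-precision configuration is not reproduced.

```python
import numpy as np

def quantize(x, bits, frac_bits):
    """Simulate fixed-point data with `bits` total bits and `frac_bits`
    fractional bits: round to the grid and clip to the representable range."""
    scale = 2.0 ** frac_bits
    qmax = 2.0 ** (bits - 1) - 1
    return np.clip(np.round(x * scale), -qmax - 1, qmax) / scale

# Hypothetical per-layer precision profile: (total bits, fractional bits).
profile = {"conv1": (8, 6), "conv2": (6, 4), "fc": (10, 8)}

rng = np.random.default_rng(0)
activations = {name: rng.normal(scale=0.5, size=1000) for name in profile}

for name, (bits, frac) in profile.items():
    a = activations[name]
    err = np.abs(quantize(a, bits, frac) - a).mean()
    print(f"{name}: {bits}-bit, mean abs quantization error {err:.4f}")
```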
An extension of McDiarmid's inequality | cs.LG math.PR math.ST stat.TH | We generalize McDiarmid's inequality for functions with bounded differences
on a high probability set, using an extension argument. Those functions
concentrate around their conditional expectations. We further extend the
results to concentration in general metric spaces.
| Richard Combes | null | 1511.05240 | null | null |
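For reference, the classical bounded-differences inequality that the paper above extends can be stated as follows; the extension to functions bounded only on a high-probability set and to general metric spaces is in the paper itself and is not reproduced here.

```latex
% McDiarmid's bounded-differences inequality (classical form).
\[
  \text{If } \bigl|f(x_1,\dots,x_i,\dots,x_n) - f(x_1,\dots,x_i',\dots,x_n)\bigr| \le c_i
  \quad \text{for all } i \text{ and all } x_1,\dots,x_n,\,x_i',
\]
\[
  \text{and } X_1,\dots,X_n \text{ are independent, then for all } t > 0:\qquad
  \Pr\bigl(f(X_1,\dots,X_n) - \mathbb{E}\,f(X_1,\dots,X_n) \ge t\bigr)
  \;\le\; \exp\!\Bigl(-\tfrac{2t^2}{\sum_{i=1}^{n} c_i^2}\Bigr).
\]
```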
Robust PCA via Nonconvex Rank Approximation | cs.CV cs.LG cs.NA stat.ML | Numerous applications in data mining and machine learning require recovering
a matrix of minimal rank. Robust principal component analysis (RPCA) is a
general framework for handling this kind of problems. Nuclear norm based convex
surrogate of the rank function in RPCA is widely investigated. Under certain
assumptions, it can recover the underlying true low rank matrix with high
probability. However, those assumptions may not hold in real-world
applications. Since the nuclear norm approximates the rank by adding all
singular values together, which is essentially a $\ell_1$-norm of the singular
values, the resulting approximation error is not trivial and thus the resulting
matrix estimator can be significantly biased. To seek a closer approximation
and to alleviate the above-mentioned limitations of the nuclear norm, we
propose a nonconvex rank approximation. This approximation to the matrix rank
is tighter than the nuclear norm. To solve the associated nonconvex
minimization problem, we develop an efficient augmented Lagrange multiplier
based optimization algorithm. Experimental results demonstrate that our method
outperforms current state-of-the-art algorithms in both accuracy and
efficiency.
| Zhao Kang, Chong Peng, Qiang Cheng | 10.1109/ICDM.2015.15 | 1511.05261 | null | null |
The Use of Machine Learning Algorithms in Recommender Systems: A
Systematic Review | cs.SE cs.IR cs.LG | Recommender systems use algorithms to provide users with product or service
recommendations. Recently, these systems have been using machine learning
algorithms from the field of artificial intelligence. However, choosing a
suitable machine learning algorithm for a recommender system is difficult
because of the number of algorithms described in the literature. Researchers
and practitioners developing recommender systems are left with little
information about the current approaches in algorithm usage. Moreover, the
development of a recommender system using a machine learning algorithm often
has problems and open questions that must be evaluated, so software engineers
know where to focus research efforts. This paper presents a systematic review
of the literature that analyzes the use of machine learning algorithms in
recommender systems and identifies research opportunities for software
engineering research. The study concludes that Bayesian and decision tree
algorithms are widely used in recommender systems because of their relative
simplicity, and that requirement and design phases of recommender system
development appear to offer opportunities for further research.
| Ivens Portugal, Paulo Alencar, Donald Cowan | null | 1511.05263 | null | null |
AUC-maximized Deep Convolutional Neural Fields for Sequence Labeling | stat.ML cs.LG | Deep Convolutional Neural Networks (DCNN) has shown excellent performance in
a variety of machine learning tasks. This manuscript presents Deep
Convolutional Neural Fields (DeepCNF), a combination of DCNN with Conditional
Random Field (CRF), for sequence labeling with highly imbalanced label
distribution. The widely-used training methods, such as maximum-likelihood and
maximum labelwise accuracy, do not work well on highly imbalanced data. To
handle this, we present a new training algorithm called maximum-AUC for
DeepCNF. That is, we train DeepCNF by directly maximizing the empirical Area
Under the ROC Curve (AUC), which is an unbiased measurement for imbalanced
data. To fulfill this, we formulate AUC in a pairwise ranking framework,
approximate it by a polynomial function and then apply a gradient-based
procedure to optimize it. We then test our AUC-maximized DeepCNF on three very
different protein sequence labeling tasks: solvent accessibility prediction,
8-state secondary structure prediction, and disorder prediction. Our
experimental results confirm that maximum-AUC greatly outperforms the other two
training methods on 8-state secondary structure prediction and disorder
prediction, since their label distributions are highly imbalanced, and has
performance similar to the other two training methods on the solvent
accessibility prediction problem, which has three equally-distributed labels.
Furthermore, our experimental results also show that our AUC-trained DeepCNF
models greatly outperform existing popular predictors of these three tasks.
| Sheng Wang, Siqi Sun and Jinbo Xu | null | 1511.05265 | null | null |
Semi-supervised Collaborative Ranking with Push at Top | cs.LG cs.IR | Existing collaborative ranking based recommender systems tend to perform best
when there are enough observed ratings for each user and the observations are made
completely at random. Under this setting recommender systems can properly
suggest a list of recommendations according to the user interests. However,
when the observed ratings are extremely sparse (e.g. in the case of cold-start
users where no rating data is available), and are not sampled uniformly at
random, existing ranking methods fail to effectively leverage side information
to transduce the knowledge from existing ratings to unobserved ones. We propose
a semi-supervised collaborative ranking model, dubbed \texttt{S$^2$COR}, to
improve the quality of cold-start recommendation. \texttt{S$^2$COR} mitigates
the sparsity issue by leveraging side information about both observed and
missing ratings by collaboratively learning the ranking model. This enables it
to deal with the case of missing data not at random, but to also effectively
incorporate the available side information in transduction. We experimentally
evaluated our proposed algorithm on a number of challenging real-world datasets
and compared against state-of-the-art models for cold-start recommendation. We
report significantly higher quality recommendations with our algorithm compared
to the state-of-the-art.
| Iman Barjasteh, Rana Forsati, Abdol-Hossein Esfahanian, Hayder Radha | null | 1511.05266 | null | null |
On the interplay of network structure and gradient convergence in deep
learning | cs.LG stat.ML | The regularization and output consistency behavior of dropout and layer-wise
pretraining for learning deep networks have been fairly well studied. However,
our understanding of how the asymptotic convergence of backpropagation in deep
architectures is related to the structural properties of the network and other
design choices (like denoising and dropout rate) is less clear at this time. An
interesting question one may ask is whether the network architecture and input
data statistics may guide the choices of learning parameters and vice versa. In
this work, we explore the association between such structural, distributional
and learnability aspects vis-\`a-vis their interaction with parameter
convergence rates. We present a framework to address these questions based on
convergence of backpropagation for general nonconvex objectives using
first-order information. This analysis suggests an interesting relationship
between feature denoising and dropout. Building upon these results, we obtain a
setup that provides systematic guidance regarding the choice of learning
parameters and network sizes that achieve a certain level of convergence (in
the optimization sense) often mediated by statistical attributes of the inputs.
Our results are supported by a set of experimental evaluations as well as
independent empirical observations reported by other groups.
| Vamsi K Ithapu, Sathya N Ravi, Vikas Singh | null | 1511.05297 | null | null |
Structural-RNN: Deep Learning on Spatio-Temporal Graphs | cs.CV cs.LG cs.NE cs.RO | Deep Recurrent Neural Network architectures, though remarkably capable at
modeling sequences, lack an intuitive high-level spatio-temporal structure.
This is despite the fact that many problems in computer vision inherently have an underlying
high-level structure and can benefit from it. Spatio-temporal graphs are a
popular tool for imposing such high-level intuitions in the formulation of real
world problems. In this paper, we propose an approach for combining the power
of high-level spatio-temporal graphs and sequence learning success of Recurrent
Neural Networks~(RNNs). We develop a scalable method for casting an arbitrary
spatio-temporal graph as a rich RNN mixture that is feedforward, fully
differentiable, and jointly trainable. The proposed method is generic and
principled as it can be used for transforming any spatio-temporal graph through
employing a certain set of well defined steps. The evaluations of the proposed
approach on a diverse set of problems, ranging from modeling human motion to
object interactions, shows improvement over the state-of-the-art with a large
margin. We expect this method to empower new approaches to problem formulation
through high-level spatio-temporal graphs and Recurrent Neural Networks.
| Ashesh Jain, Amir R. Zamir, Silvio Savarese, Ashutosh Saxena | null | 1511.05298 | null | null |
Constant Time EXPected Similarity Estimation using Stochastic
Optimization | cs.LG | A new algorithm named EXPected Similarity Estimation (EXPoSE) was recently
proposed to solve the problem of large-scale anomaly detection. It is a
non-parametric and distribution free kernel method based on the Hilbert space
embedding of probability measures. Given a dataset of $n$ samples, EXPoSE needs
only $\mathcal{O}(n)$ (linear time) to build a model and $\mathcal{O}(1)$
(constant time) to make a prediction. In this work we improve the linear
computational complexity and show that an $\epsilon$-accurate model can be
estimated in constant time, which has significant implications for large-scale
learning problems. To achieve this goal, we cast the original EXPoSE
formulation into a stochastic optimization problem. It is crucial that this
approach allows us to determine the number of iterations based on a desired
accuracy $\epsilon$, independent of the dataset size $n$. We will show that the
proposed stochastic gradient descent algorithm works in general (possibly
infinite-dimensional) Hilbert spaces, is easy to implement and requires no
additional step-size parameters.
| Markus Schneider and Wolfgang Ertel and G\"unther Palm | null | 1511.05371 | null | null |
Bayesian Optimization with Dimension Scheduling: Application to
Biological Systems | stat.ML cs.AI cs.LG math.OC | Bayesian Optimization (BO) is a data-efficient method for global black-box
optimization of an expensive-to-evaluate fitness function. BO typically assumes
that the computational cost of BO is cheap, but experiments are time consuming or
costly. In practice, this allows us to optimize ten or fewer critical
parameters in up to 1,000 experiments. But experiments may be less expensive
than BO methods assume: In some simulation models, we may be able to conduct
multiple thousands of experiments in a few hours, and the computational burden
of BO is no longer negligible compared to experimentation time. To address this
challenge we introduce a new Dimension Scheduling Algorithm (DSA), which
reduces the computational burden of BO for many experiments. The key idea is
that DSA optimizes the fitness function only along a small set of dimensions at
each iteration. This DSA strategy (1) reduces the necessary computation time,
(2) finds good solutions faster than the traditional BO method, and (3) can be
parallelized straightforwardly. We evaluate the DSA in the context of
optimizing parameters of dynamic models of microalgae metabolism and show
faster convergence than traditional BO.
| Doniyor Ulmasov, Caroline Baroukh, Benoit Chachuat, Marc Peter
Deisenroth, Ruth Misener | null | 1511.05385 | null | null |
Learning the Dimensionality of Word Embeddings | stat.ML cs.CL cs.LG | We describe a method for learning word embeddings with data-dependent
dimensionality. Our Stochastic Dimensionality Skip-Gram (SD-SG) and Stochastic
Dimensionality Continuous Bag-of-Words (SD-CBOW) are nonparametric analogs of
Mikolov et al.'s (2013) well-known 'word2vec' models. Vector dimensionality is
made dynamic by employing techniques used by Cote & Larochelle (2016) to define
an RBM with an infinite number of hidden units. We show qualitatively and
quantitatively that SD-SG and SD-CBOW are competitive with their
fixed-dimension counterparts while providing a distribution over embedding
dimensionalities, which offers a window into how semantics distribute across
dimensions.
| Eric Nalisnick, Sachin Ravi | null | 1511.05392 | null | null |
Understanding Adversarial Training: Increasing Local Stability of Neural
Nets through Robust Optimization | stat.ML cs.LG cs.NE | We propose a general framework for increasing local stability of Artificial
Neural Nets (ANNs) using Robust Optimization (RO). We achieve this through an
alternating minimization-maximization procedure, in which the loss of the
network is minimized over perturbed examples that are generated at each
parameter update. We show that adversarial training of ANNs is in fact
robustification of the network optimization, and that our proposed framework
generalizes previous approaches for increasing local stability of ANNs.
Experimental results reveal that our approach increases the robustness of the
network to existing adversarial examples, while making it harder to generate
new ones. Furthermore, our algorithm improves the accuracy of the network also
on the original test data.
| Uri Shaham, Yutaro Yamada, and Sahand Negahban | 10.1016/j.neucom.2018.04.027 | 1511.05432 | null | null |
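A toy NumPy sketch of the alternating minimization-maximization idea on a logistic model: the inner step finds an l_inf-bounded worst-case perturbation of each input (here a single gradient-sign step, with epsilon as a free parameter), and the outer step takes a gradient step on the loss of the perturbed examples. This is an illustration of the robust-optimization view, not the authors' neural-network implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, epsilon=0.1, lr=0.1, epochs=200, seed=0):
    """Alternating min-max training of a logistic model."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        # d(loss)/dx for the logistic loss is (p - y) * w, per example.
        grad_x = (p - y)[:, None] * w[None, :]
        X_adv = X + epsilon * np.sign(grad_x)      # inner maximization: worst-case perturbation
        p_adv = sigmoid(X_adv @ w)
        grad_w = X_adv.T @ (p_adv - y) / len(y)    # outer minimization on perturbed data
        w -= lr * grad_w
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w = adversarial_train(X, y)
print(((sigmoid(X @ w) > 0.5) == y).mean())
```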
Deep multi-scale video prediction beyond mean square error | cs.LG cs.CV stat.ML | Learning to predict future images from a video sequence involves the
construction of an internal representation that models the image evolution
accurately, and therefore, to some degree, its content and dynamics. This is
why pixel-space video prediction may be viewed as a promising avenue for
unsupervised feature learning. In addition, while optical flow has been a very
studied problem in computer vision for a long time, future frame prediction is
rarely approached. Still, many vision applications could benefit from the
knowledge of the next frames of videos, which does not require the complexity of
tracking every pixel trajectory. In this work, we train a convolutional
network to generate future frames given an input sequence. To deal with the
inherently blurry predictions obtained from the standard Mean Squared Error
(MSE) loss function, we propose three different and complementary feature
learning strategies: a multi-scale architecture, an adversarial training
method, and an image gradient difference loss function. We compare our
predictions to different published results based on recurrent neural networks
on the UCF101 dataset.
| Michael Mathieu, Camille Couprie and Yann LeCun | null | 1511.05440 | null | null |
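The image gradient difference loss mentioned above penalizes mismatch between the spatial gradients of the predicted and target frames, which discourages the blur favored by plain MSE. A NumPy sketch, with the exponent alpha left as a free parameter; the multi-scale and adversarial components are not reproduced.

```python
import numpy as np

def gradient_difference_loss(pred, target, alpha=1.0):
    """Penalize differences between the absolute spatial gradients of the
    prediction and the target frame (a sharpness-preserving complement to MSE)."""
    dy_p, dx_p = np.abs(np.diff(pred, axis=0)), np.abs(np.diff(pred, axis=1))
    dy_t, dx_t = np.abs(np.diff(target, axis=0)), np.abs(np.diff(target, axis=1))
    return (np.abs(dy_p - dy_t) ** alpha).sum() + (np.abs(dx_p - dx_t) ** alpha).sum()

rng = np.random.default_rng(0)
target = rng.random((32, 32))
flat = target.mean() * np.ones_like(target)            # blurry, gradient-free prediction
noisy = target + 0.05 * rng.normal(size=target.shape)  # sharp but noisy prediction

print(gradient_difference_loss(flat, target), gradient_difference_loss(noisy, target))
```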
Extending Gossip Algorithms to Distributed Estimation of U-Statistics | stat.ML cs.DC cs.LG cs.SY stat.CO | Efficient and robust algorithms for decentralized estimation in networks are
essential to many distributed systems. Whereas distributed estimation of sample
mean statistics has been the subject of a good deal of attention, computation
of $U$-statistics, relying on more expensive averaging over pairs of
observations, is a less investigated area. Yet, such data functionals are
essential to describe global properties of a statistical population, with
important examples including Area Under the Curve, empirical variance, Gini
mean difference and within-cluster point scatter. This paper proposes new
synchronous and asynchronous randomized gossip algorithms which simultaneously
propagate data across the network and maintain local estimates of the
$U$-statistic of interest. We establish convergence rate bounds of $O(1/t)$ and
$O(\log t / t)$ for the synchronous and asynchronous cases respectively, where
$t$ is the number of iterations, with explicit data and network dependent
terms. Beyond favorable comparisons in terms of rate analysis, numerical
experiments provide empirical evidence that the proposed algorithms surpass the
previously introduced approach.
| Igor Colin and Aur\'elien Bellet and Joseph Salmon and St\'ephan
Cl\'emen\c{c}on | null | 1511.05464 | null | null |
Gated Graph Sequence Neural Networks | cs.LG cs.AI cs.NE stat.ML | Graph-structured data appears frequently in domains including chemistry,
natural language semantics, social networks, and knowledge bases. In this work,
we study feature learning techniques for graph-structured inputs. Our starting
point is previous work on Graph Neural Networks (Scarselli et al., 2009), which
we modify to use gated recurrent units and modern optimization techniques and
then extend to output sequences. The result is a flexible and broadly useful
class of neural network models that has favorable inductive biases relative to
purely sequence-based models (e.g., LSTMs) when the problem is
graph-structured. We demonstrate the capabilities on some simple AI (bAbI) and
graph algorithm learning tasks. We then show it achieves state-of-the-art
performance on a problem from program verification, in which subgraphs need to
be matched to abstract data structures.
| Yujia Li, Daniel Tarlow, Marc Brockschmidt, Richard Zemel | null | 1511.05493 | null | null |
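A minimal NumPy sketch of one gated propagation step in the spirit of the model above: neighbour states are aggregated through the adjacency matrix and node states are updated with a GRU-style cell. Weights are random placeholders, only a single edge type is modeled, and the paper's output-sequence machinery is not reproduced.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_step(H, A, Wm, Wz, Uz, Wr, Ur, Wh, Uh):
    """One gated propagation step over node states H (n x d), adjacency A (n x n)."""
    M = A @ H @ Wm                              # aggregate incoming neighbour messages
    Z = sigmoid(M @ Wz + H @ Uz)                # update gate
    R = sigmoid(M @ Wr + H @ Ur)                # reset gate
    H_tilde = np.tanh(M @ Wh + (R * H) @ Uh)    # candidate state
    return (1 - Z) * H + Z * H_tilde

rng = np.random.default_rng(0)
n, d = 6, 8
A = (rng.random((n, n)) < 0.4).astype(float)
np.fill_diagonal(A, 0)
H = rng.normal(size=(n, d))
params = [rng.normal(scale=0.1, size=(d, d)) for _ in range(7)]

for _ in range(4):                              # a few propagation steps
    H = ggnn_step(H, A, *params)
print(H.shape)
```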
Learning Neural Network Architectures using Backpropagation | cs.LG cs.CV cs.NE | Deep neural networks with millions of parameters are at the heart of many
state of the art machine learning models today. However, recent works have
shown that models with much smaller number of parameters can also perform just
as well. In this work, we introduce the problem of architecture-learning, i.e.,
learning the architecture of a neural network along with its weights. We introduce
a new trainable parameter called tri-state ReLU, which helps in eliminating
unnecessary neurons. We also propose a smooth regularizer which encourages the
total number of neurons after elimination to be small. The resulting objective
is differentiable and simple to optimize. We experimentally validate our method
on both small and large networks, and show that it can learn models with a
considerably small number of parameters without affecting prediction accuracy.
| Suraj Srinivas and R. Venkatesh Babu | null | 1511.05497 | null | null |
Automatic Instrument Recognition in Polyphonic Music Using Convolutional
Neural Networks | cs.SD cs.IR cs.LG cs.NE | Traditional methods to tackle many music information retrieval tasks
typically follow a two-step architecture: feature engineering followed by a
simple learning algorithm. In these "shallow" architectures, feature
engineering and learning are typically disjoint and unrelated. Additionally,
feature engineering is difficult, and typically depends on extensive domain
expertise.
In this paper, we present an application of convolutional neural networks for
the task of automatic musical instrument identification. In this model, feature
extraction and learning algorithms are trained together in an end-to-end
fashion. We show that a convolutional neural network trained on raw audio can
achieve performance surpassing traditional methods that rely on hand-crafted
features.
| Peter Li and Jiyuan Qian and Tian Wang | null | 1511.05520 | null | null |
Return of Frustratingly Easy Domain Adaptation | cs.CV cs.AI cs.LG cs.NE | Unlike human learning, machine learning often fails to handle changes between
training (source) and test (target) input distributions. Such domain shifts,
common in practical scenarios, severely damage the performance of conventional
machine learning methods. Supervised domain adaptation methods have been
proposed for the case when the target data have labels, including some that
perform very well despite being "frustratingly easy" to implement. However, in
practice, the target domain is often unlabeled, requiring unsupervised
adaptation. We propose a simple, effective, and efficient method for
unsupervised domain adaptation called CORrelation ALignment (CORAL). CORAL
minimizes domain shift by aligning the second-order statistics of source and
target distributions, without requiring any target labels. Even though it is
extraordinarily simple--it can be implemented in four lines of Matlab
code--CORAL performs remarkably well in extensive evaluations on standard
benchmark datasets.
| Baochen Sun, Jiashi Feng, Kate Saenko | null | 1511.05547 | null | null |
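
A minimal NumPy sketch of the alignment step described above: whiten the source features, then re-color them with the target covariance. The regularization constant and the eigendecomposition-based matrix power are assumptions; the paper reports an equivalently short Matlab implementation.

    # Sketch of CORrelation ALignment: match second-order statistics of the
    # source features to the target features. Regularizer eps is an assumption.
    import numpy as np

    def matpow(C, p):
        # matrix power of a symmetric PSD matrix via eigendecomposition
        vals, vecs = np.linalg.eigh(C)
        return (vecs * np.maximum(vals, 1e-12) ** p) @ vecs.T

    def coral(Xs, Xt, eps=1.0):
        Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])  # source cov
        Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])  # target cov
        return Xs @ matpow(Cs, -0.5) @ matpow(Ct, 0.5)   # whiten, then re-color

    rng = np.random.default_rng(0)
    Xs = 3.0 * rng.normal(size=(200, 8))     # source features (labeled)
    Xt = rng.normal(size=(150, 8))           # target features (unlabeled)
    Xs_aligned = coral(Xs, Xt)               # train any ordinary classifier on this
    print(round(float(np.cov(Xs_aligned, rowvar=False)[0, 0]), 2))
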
Identifying the Absorption Bump with Deep Learning | cs.CV cs.LG cs.NE | The pervasive interstellar dust grains provide significant insights into the
formation and evolution of stars, planetary systems, and galaxies, and may
harbor the building blocks of life. One of the most effective ways to analyze
the dust is via its interaction with light from background sources. The
observed extinction curves and spectral features carry the size and composition
information of the dust. The broad absorption bump at 2175 Angstrom is the most
prominent feature in the extinction curves. Traditionally, statistical methods
are applied to detect the existence of the absorption bump. These methods
require heavy preprocessing and the co-existence of other reference features to
alleviate the influence of noise. In this paper, we apply Deep Learning
techniques to detect the broad absorption bump. We describe the key steps for
training the selected models and report their results. The success of the Deep
Learning based method inspires us to generalize a common methodology for
broader science discovery problems. We present our ongoing work to build the
DeepDis system for this class of applications.
| Min Li, Sudeep Gaddam, Xiaolin Li, Yinan Zhao, Jingzhe Ma, Jian Ge | null | 1511.05607 | null | null |
A Block Regression Model for Short-Term Mobile Traffic Forecasting | cs.NI cs.LG | Accurate mobile traffic forecast is important for efficient network planning
and operations. However, existing traffic forecasting models have high
complexity, making the forecasting process slow and costly. In this paper, we
analyze some characteristics of mobile traffic such as periodicity, spatial
similarity and short term relativity. Based on these characteristics, we
propose a \emph{Block Regression} ({BR}) model for mobile traffic forecasting.
This model employs seasonal differentiation so as to take into account the
temporally repetitive nature of mobile traffic. One of the key features of our
{BR} model lies in its low complexity, since it constructs a single model for
all base stations. We evaluate the accuracy of the {BR} model on real traffic
data and compare it with existing models. Results show that our {BR} model
offers accuracy equal to that of existing models but with much lower complexity.
| Huimin Pan, Jingchu Liu, Sheng Zhou, Zhisheng Niu | 10.1109/ICCChina.2015.7448619 | 1511.05612 | null | null |
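
A toy sketch of the seasonal-differencing idea in the abstract, with one shared linear model across base stations. The season length, lag count, and synthetic traffic are assumptions; the actual {BR} model details differ.

    # Toy sketch: forecast seasonally differenced traffic with a single shared
    # linear model for all base stations. Data and hyperparameters are assumptions.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    season, hours, stations, lags = 24, 24 * 28, 10, 3
    t = np.arange(hours)
    base = np.sin(2 * np.pi * t / season)                 # daily periodicity
    traffic = (base[None, :] + rng.uniform(1, 3, size=(stations, 1))
               + 0.1 * rng.normal(size=(stations, hours)))

    diff = traffic[:, season:] - traffic[:, :-season]     # seasonal differencing

    X, y = [], []
    for s in range(stations):                             # one model, all stations
        for i in range(lags, diff.shape[1]):
            X.append(diff[s, i - lags:i])                 # short-term history
            y.append(diff[s, i])
    model = LinearRegression().fit(np.array(X), np.array(y))

    # One-step forecast for station 0: predicted difference + last season's value.
    forecast = model.predict(diff[0, -lags:][None, :])[0] + traffic[0, -season]
    print("one-step forecast for station 0:", round(float(forecast), 3))
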
Learning Structured Inference Neural Networks with Label Relations | cs.CV cs.LG | Images of scenes have various objects as well as abundant attributes, and
diverse levels of visual categorization are possible. A natural image could be
assigned fine-grained labels that describe major components, coarse-grained
labels that depict high-level abstraction, or a set of labels that reveal
attributes. Such categorization at different concept layers can be modeled with
label graphs encoding label information. In this paper, we exploit this rich
information with a state-of-the-art deep learning framework, and propose
a generic structured model that leverages diverse label relations to improve
image classification performance. Our approach employs a novel stacked label
prediction neural network, capturing both inter-level and intra-level label
semantics. We evaluate our method on benchmark image datasets, and empirical
results illustrate the efficacy of our model.
| Hexiang Hu, Guang-Tong Zhou, Zhiwei Deng, Zicheng Liao, Greg Mori | null | 1511.05616 | null | null |
Predicting distributions with Linearizing Belief Networks | cs.LG cs.CV | Conditional belief networks introduce stochastic binary variables in neural
networks. Contrary to a classical neural network, a belief network can predict
more than the expected value of the output $Y$ given the input $X$. It can
predict a distribution of outputs $Y$ which is useful when an input can admit
multiple outputs whose average is not necessarily a valid answer. Such networks
are particularly relevant to inverse problems such as image prediction for
denoising, or text to speech. However, traditional sigmoid belief networks are
hard to train and are not suited to continuous problems. This work introduces a
new family of networks called linearizing belief nets or LBNs. An LBN decomposes
into a deep linear network where each linear unit can be turned on or off by
non-deterministic binary latent units. It is a universal approximator of
real-valued conditional distributions and can be trained using gradient
descent. Moreover, the linear pathways efficiently propagate continuous
information and they act as multiplicative skip-connections that help
optimization by removing gradient diffusion. This yields a model which trains
efficiently and improves the state-of-the-art on image denoising and facial
expression generation with the Toronto faces dataset.
| Yann N. Dauphin, David Grangier | null | 1511.05622 | null | null |
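
A rough sketch consistent with the description above: linear units gated on or off by stochastic binary latent units, so the same input yields a distribution of outputs. The sigmoid gate parameterization and layer sizes are assumptions, not the paper's exact architecture.

    # Sketch of a linearizing-belief-net style layer: linear outputs h = W @ x
    # are switched on/off by binary gates z ~ Bernoulli(sigmoid(G @ x)).
    # Gate parameterization and sizes are assumptions based on the abstract.
    import numpy as np

    rng = np.random.default_rng(0)
    d_in, d_hid = 6, 4
    W = rng.normal(size=(d_hid, d_in))   # linear pathway (continuous information)
    G = rng.normal(size=(d_hid, d_in))   # gating pathway (binary latent units)

    def lbn_layer(x, n_samples=5):
        p = 1.0 / (1.0 + np.exp(-G @ x))                  # gate probabilities
        z = rng.binomial(1, p, size=(n_samples, d_hid))   # sample binary gates
        return z * (W @ x)                                # gated linear outputs

    x = rng.normal(size=d_in)
    print(lbn_layer(x))   # several distinct outputs for the same input
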
Competitive Multi-scale Convolution | cs.CV cs.LG cs.NE | In this paper, we introduce a new deep convolutional neural network (ConvNet)
module that promotes competition among a set of multi-scale convolutional
filters. This new module is inspired by the inception module, where we replace
the original collaborative pooling stage (consisting of a concatenation of the
multi-scale filter outputs) by a competitive pooling represented by a maxout
activation unit. This extension has the following two objectives: 1) the
selection of the maximum response among the multi-scale filters prevents filter
co-adaptation and allows the formation of multiple sub-networks within the same
model, which has been shown to facilitate the training of complex learning
problems; and 2) the maxout unit reduces the dimensionality of the outputs from
the multi-scale filters. We show that the use of our proposed module in typical
deep ConvNets produces classification results that are either better than or
comparable to the state of the art on the following benchmark datasets: MNIST,
CIFAR-10, CIFAR-100 and SVHN.
| Zhibin Liao, Gustavo Carneiro | null | 1511.05635 | null | null |
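
A compact PyTorch sketch of the module described above: parallel convolutions at several kernel sizes compete through an element-wise maxout instead of being concatenated. The kernel sizes and channel counts are assumptions.

    # Sketch of a competitive multi-scale block: multi-scale filters compete
    # via an element-wise max (maxout) rather than concatenation.
    import torch
    import torch.nn as nn

    class CompetitiveMultiScale(nn.Module):
        def __init__(self, in_ch, out_ch, kernel_sizes=(1, 3, 5)):
            super().__init__()
            # padding k // 2 keeps the spatial size identical across branches
            self.branches = nn.ModuleList(
                [nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes]
            )

        def forward(self, x):
            outs = torch.stack([b(x) for b in self.branches], dim=0)
            return outs.max(dim=0).values   # competitive pooling across scales

    x = torch.randn(2, 3, 32, 32)
    print(CompetitiveMultiScale(3, 16)(x).shape)  # torch.Size([2, 16, 32, 32])
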
Net2Net: Accelerating Learning via Knowledge Transfer | cs.LG | We introduce techniques for rapidly transferring the information stored in
one neural net into another neural net. The main purpose is to accelerate the
training of a significantly larger neural net. In real-world workflows, one
often trains many different neural networks during the experimentation and
design process. This is a wasteful process in which each new model is trained
from scratch. Our Net2Net technique accelerates the experimentation process by
instantaneously transferring the knowledge from a previous network to each new
deeper or wider network. Our techniques are based on the concept of
function-preserving transformations between neural network specifications. This
differs from previous approaches to pre-training that altered the function
represented by a neural net when adding layers to it. Using our knowledge
transfer mechanism to add depth to Inception modules, we demonstrate a new
state-of-the-art accuracy on the ImageNet dataset.
| Tianqi Chen and Ian Goodfellow and Jonathon Shlens | null | 1511.05641 | null | null |
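
The widening case of a function-preserving transformation can be sketched directly for a pair of fully connected layers: replicate randomly chosen hidden units and divide their outgoing weights by the replication counts, so the composed function is unchanged. This is a sketch of the idea only; it omits convolutional layers and the deepening operation.

    # Sketch of function-preserving widening for y = W2 @ relu(W1 @ x + b1).
    import numpy as np

    def net2wider(W1, b1, W2, new_width, rng):
        old_width = W1.shape[0]                    # W1: [hid, in], W2: [out, hid]
        extra = rng.integers(0, old_width, new_width - old_width)
        mapping = np.concatenate([np.arange(old_width), extra])
        counts = np.bincount(mapping, minlength=old_width)
        W1_new = W1[mapping, :]                    # replicate incoming weights
        b1_new = b1[mapping]
        W2_new = W2[:, mapping] / counts[mapping]  # rescale outgoing weights
        return W1_new, b1_new, W2_new

    rng = np.random.default_rng(0)
    W1, b1, W2 = rng.normal(size=(4, 3)), rng.normal(size=4), rng.normal(size=(2, 4))
    W1w, b1w, W2w = net2wider(W1, b1, W2, new_width=7, rng=rng)

    x = rng.normal(size=3)
    relu = lambda v: np.maximum(v, 0.0)
    # The widened network computes exactly the same function:
    print(np.allclose(W2 @ relu(W1 @ x + b1), W2w @ relu(W1w @ x + b1w)))  # True
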
A New Smooth Approximation to the Zero One Loss with a Probabilistic
Interpretation | cs.CV cs.AI cs.IR cs.LG | We examine a new form of smooth approximation to the zero one loss in which
learning is performed using a reformulation of the widely used logistic
function. Our approach is based on using the posterior mean of a novel
generalized Beta-Bernoulli formulation. This leads to a generalized logistic
function that approximates the zero one loss, but retains a probabilistic
formulation conferring a number of useful properties. The approach is easily
generalized to kernel logistic regression and easily integrated into methods
for structured prediction. We present experiments in which we learn such models
using an optimization method consisting of a combination of gradient descent
and coordinate descent using localized grid search so as to escape from local
minima. Our experiments indicate that optimization quality is improved when
learning meta-parameters are themselves optimized using a validation set. Our
experiments show improved performance relative to widely used logistic and
hinge loss methods on a wide variety of problems ranging from standard UC
Irvine and libSVM evaluation datasets to product review predictions and a
visual information extraction task. We observe that the approach: 1) is more
robust to outliers compared to the logistic and hinge losses; 2) outperforms
comparable logistic and max margin models on larger scale benchmark problems;
3) when combined with a Gaussian-Laplacian mixture prior on parameters, the
kernelized version of our formulation yields sparser solutions than Support
Vector Machine classifiers; and 4) when integrated into a probabilistic
structured prediction technique our approach provides more accurate
probabilities yielding improved inference and increasing information extraction
performance.
| Md Kamrul Hasan, Christopher J. Pal | null | 1511.05643 | null | null |
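
The general idea of a smooth, temperature-controlled surrogate for the zero-one loss can be illustrated with a steep sigmoid applied to the classification margin; this is a generic stand-in, not the generalized Beta-Bernoulli formulation of the paper.

    # Generic smooth approximation of the zero-one loss: sigma(-gamma * y * f(x))
    # approaches the 0/1 step as gamma grows. Stand-in illustration only.
    import numpy as np

    def smooth_01_loss(margins, gamma=10.0):
        # margins = y * f(x); loss -> 1 for negative margins, -> 0 for positive
        return 1.0 / (1.0 + np.exp(gamma * margins))

    margins = np.array([-2.0, -0.1, 0.1, 2.0])
    print(np.round(smooth_01_loss(margins, gamma=1.0), 3))
    print(np.round(smooth_01_loss(margins, gamma=50.0), 3))   # close to [1, 1, 0, 0]
    print((margins <= 0).astype(float))                       # exact zero-one loss
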
Adversarial Autoencoders | cs.LG | In this paper, we propose the "adversarial autoencoder" (AAE), which is a
probabilistic autoencoder that uses the recently proposed generative
adversarial networks (GAN) to perform variational inference by matching the
aggregated posterior of the hidden code vector of the autoencoder with an
arbitrary prior distribution. Matching the aggregated posterior to the prior
ensures that generating from any part of prior space results in meaningful
samples. As a result, the decoder of the adversarial autoencoder learns a deep
generative model that maps the imposed prior to the data distribution. We show
how the adversarial autoencoder can be used in applications such as
semi-supervised classification, disentangling style and content of images,
unsupervised clustering, dimensionality reduction and data visualization. We
performed experiments on MNIST, Street View House Numbers and Toronto Face
datasets and show that adversarial autoencoders achieve competitive results in
generative modeling and semi-supervised classification tasks.
| Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow,
Brendan Frey | null | 1511.05644 | null | null |
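
A compressed sketch of the two training signals per minibatch (reconstruction for the autoencoder, adversarial matching of the aggregated posterior to an imposed Gaussian prior). The network sizes, optimizers, and the single Gaussian prior are assumptions.

    # One adversarial-autoencoder training step: reconstruct the input, then push
    # the encoder's code distribution toward a Gaussian prior adversarially.
    import torch
    import torch.nn as nn

    x_dim, z_dim = 784, 8
    enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))
    dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))
    disc = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    x = torch.rand(32, x_dim)                 # stand-in minibatch

    # 1) Reconstruction phase: update encoder + decoder.
    recon_loss = ((dec(enc(x)) - x) ** 2).mean()
    opt_ae.zero_grad()
    recon_loss.backward()
    opt_ae.step()

    # 2) Regularization phase: discriminator separates prior samples from codes.
    z_fake = enc(x).detach()
    z_real = torch.randn_like(z_fake)         # samples from the imposed prior
    d_loss = (bce(disc(z_real), torch.ones(32, 1))
              + bce(disc(z_fake), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # The encoder (as generator) then tries to fool the discriminator.
    g_loss = bce(disc(enc(x)), torch.ones(32, 1))
    opt_ae.zero_grad()
    g_loss.backward()
    opt_ae.step()

    print(float(recon_loss), float(d_loss), float(g_loss))
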
Tree-Guided MCMC Inference for Normalized Random Measure Mixture Models | stat.ML cs.LG | Normalized random measures (NRMs) provide a broad class of discrete random
measures that are often used as priors for Bayesian nonparametric models.
Dirichlet process is a well-known example of NRMs. Most of posterior inference
methods for NRM mixture models rely on MCMC methods since they are easy to
implement and their convergence is well studied. However, MCMC often suffers
from slow convergence when the acceptance rate is low. Tree-based inference is
an alternative deterministic posterior inference method, where Bayesian
hierarchical clustering (BHC) or incremental Bayesian hierarchical clustering
(IBHC) have been developed for DP or NRM mixture (NRMM) models, respectively.
Although IBHC is a promising method for posterior inference for NRMM models due
to its efficiency and applicability to online inference, its convergence is not
guaranteed since it uses heuristics that simply select the best solution after
multiple trials are made. In this paper, we present a hybrid inference
algorithm for NRMM models, which combines the merits of both MCMC and IBHC.
Trees built by IBHC outline partitions of the data, which guide the
Metropolis-Hastings procedure to employ appropriate proposals. Inheriting the
nature of MCMC, our tree-guided MCMC (tgMCMC) is guaranteed to converge, and
enjoys fast convergence thanks to the effective proposals guided by the trees.
Experiments on both synthetic and real-world datasets demonstrate the benefit
of our method.
| Juho Lee and Seungjin Choi | null | 1511.05650 | null | null |
Why are deep nets reversible: A simple theory, with implications for
training | cs.LG | Generative models for deep learning are promising both for improving
understanding of the model and for yielding training methods that require fewer
labeled samples.
Recent works use generative model approaches to produce the deep net's input
given the value of a hidden layer several levels above. However, there is no
accompanying "proof of correctness" for the generative model, showing that the
feedforward deep net is the correct inference method for recovering the hidden
layer given the input. Furthermore, these models are complicated.
The current paper takes a more theoretical tack. It presents a very simple
generative model for RELU deep nets, with the following characteristics: (i)
The generative model is just the reverse of the feedforward net: if the forward
transformation at a layer is $A$ then the reverse transformation is $A^T$.
(This can be seen as an explanation of the old weight tying idea for denoising
autoencoders.) (ii) Its correctness can be proven under a clean theoretical
assumption: the edge weights in real-life deep nets behave like random numbers.
Under this assumption ---which is experimentally tested on real-life nets like
AlexNet--- it is formally proved that the feedforward net is a correct
inference method for recovering the hidden layer.
The generative model suggests a simple modification for training: use the
generative model to produce synthetic data with labels and include it in the
training set. Experiments support this theory of random-like deep nets and
show that it helps training.
| Sanjeev Arora and Yingyu Liang and Tengyu Ma | null | 1511.05653 | null | null |
Expressiveness of Rectifier Networks | cs.LG | Rectified Linear Units (ReLUs) have been shown to ameliorate the vanishing
gradient problem, allow for efficient backpropagation, and empirically promote
sparsity in the learned parameters. They have led to state-of-the-art results
in a variety of applications. However, unlike threshold and sigmoid networks,
ReLU networks are less explored from the perspective of their expressiveness.
This paper studies the expressiveness of ReLU networks. We characterize the
decision boundary of two-layer ReLU networks by constructing functionally
equivalent threshold networks. We show that while the decision boundary of a
two-layer ReLU network can be captured by a threshold network, the latter may
require an exponentially larger number of hidden units. We also formulate
sufficient conditions for a corresponding logarithmic reduction in the number
of hidden units to represent a sign network as a ReLU network. Finally, we
experimentally compare threshold networks and their much smaller ReLU
counterparts with respect to their ability to learn from synthetically
generated data.
| Xingyuan Pan and Vivek Srikumar | null | 1511.05678 | null | null |
A Distribution Adaptive Framework for Prediction Interval Estimation
Using Nominal Variables | cs.LG | Methods proposed so far for prediction interval estimation focus on cases
where the input variables are numerical. In datasets with solely nominal input
variables, we observe records with the exact same input $x^u$ but different
real-valued outputs due to the inherent noise in the system. Existing
prediction interval estimation methods do not use representations that can
accurately model such inherent noise in the case of nominal inputs. We propose
a new prediction interval estimation method tailored for this type of data,
which is prevalent in biology and medicine. We call this method Distribution
Adaptive Prediction Interval Estimation given Nominal inputs (DAPIEN) and has
four main phases. First, we select a distribution function that can best
represent the inherent noise of the system for all unique inputs. Then we infer
the parameters $\theta_i$ (e.g. $\theta_i=[mean_i, variance_i]$) of the
selected distribution function for all unique input vectors $x^u_i$ and
generate a new corresponding training set using pairs of $x^u_i, \theta_i$.
III). Then, we train a model to predict $\theta$ given a new $x_u$. Finally, we
calculate the prediction interval for a new sample using the inverse of the
cumulative distribution function once the parameters $\theta$ is predicted by
the trained model. We compared DAPIEN to the commonly used Bootstrap method on
three synthetic datasets. Our results show that DAPIEN provides tighter
prediction intervals while preserving the requested coverage when compared to
Bootstrap. This work can facilitate broader usage of regression methods in
medicine and biology where it is necessary to provide tight prediction
intervals while preserving coverage when input variables are nominal.
| Ameen Eetemadi, Ilias Tagkopoulos | null | 1511.05688 | null | null |
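
A simplified end-to-end sketch of the four phases with a Gaussian noise model and a random-forest parameter predictor; the distribution choice, the predictor, and the 95% coverage level are assumptions.

    # Simplified DAPIEN-style sketch: fit per-unique-input Gaussian parameters,
    # learn to predict them, invert the CDF for a prediction interval.
    import numpy as np
    from scipy.stats import norm
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    x = rng.choice([0, 1, 2], size=600)               # nominal input, already encoded
    y = 2.0 * x + rng.normal(0, 0.5 + 0.3 * x)        # noise depends on the input

    # Phases 1-2: assume Gaussian noise; infer (mean, std) for each unique input.
    X_u, theta = [], []
    for u in np.unique(x):
        y_u = y[x == u]
        X_u.append([u])
        theta.append([y_u.mean(), y_u.std()])

    # Phase 3: train a model to predict theta from the (encoded) nominal input.
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_u, theta)

    # Phase 4: prediction interval for a new sample via the inverse Gaussian CDF.
    mean_hat, std_hat = model.predict([[2]])[0]
    lower, upper = norm.ppf([0.025, 0.975], loc=mean_hat, scale=std_hat)
    print(f"95% prediction interval for x=2: [{lower:.2f}, {upper:.2f}]")
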