title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Learning Compact Structural Representations for Audio Events Using
Regressor Banks | cs.SD cs.LG stat.ML | We introduce a new learned descriptor for audio signals which is efficient
for event representation. The entries of the descriptor are produced by
evaluating a set of regressors on the input signal. The regressors are
class-specific and trained using the random regression forests framework. Given
an input signal, each regressor estimates the onset and offset positions of the
target event. The estimation confidence scores output by a regressor are then
used to quantify how the target event aligns with the temporal structure of the
corresponding category. Our proposed descriptor has two advantages. First, it
is compact, i.e. the dimensionality of the descriptor is equal to the number of
event classes. Second, we show that even simple linear classification models,
trained on our descriptor, yield higher accuracy on the audio event
classification task than both the nonlinear baselines and the state-of-the-art
results.
| Huy Phan, Marco Maass, Lars Hertel, Radoslaw Mazur, Ian McLoughlin,
Alfred Mertins | 10.1109/ICASSP.2016.7471667 | 1604.08716 | null | null |
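A minimal sketch of the descriptor idea described above, assuming scikit-learn: one random-forest regressor per event class, with the descriptor entry for a class derived from that class's regressor. The features, the onset/offset targets, and the use of per-tree disagreement as a confidence proxy are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def build_descriptor(signal_features, regressors):
    """One entry per event class: higher when that class's onset/offset
    regressor is confident, proxied here by low disagreement across trees
    (an assumption; the paper uses the forest's own confidence scores)."""
    descriptor = []
    for reg in regressors:
        per_tree = np.stack([t.predict(signal_features[None, :]) for t in reg.estimators_])
        descriptor.append(-per_tree.std())     # low spread -> high "confidence"
    return np.array(descriptor)

# Train one class-specific regressor per event class on (features -> onset, offset).
rng = np.random.default_rng(0)
regressors = []
for _ in range(5):                             # 5 event classes -> 5-dimensional descriptor
    X = rng.normal(size=(200, 16))             # placeholder per-signal features
    y = rng.uniform(0.0, 1.0, size=(200, 2))   # placeholder onset/offset targets
    regressors.append(RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y))

print(build_descriptor(rng.normal(size=16), regressors))
```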
Music transcription modelling and composition using deep learning | cs.SD cs.LG | We apply deep learning methods, specifically long short-term memory (LSTM)
networks, to music transcription modelling and composition. We build and train
LSTM networks using approximately 23,000 music transcriptions expressed with a
high-level vocabulary (ABC notation), and use them to generate new
transcriptions. Our practical aim is to create music transcription models
useful in particular contexts of music composition. We present results from
three perspectives: 1) at the population level, comparing descriptive
statistics of the set of training transcriptions and generated transcriptions;
2) at the individual level, examining how a generated transcription reflects
the conventions of a music practice in the training transcriptions (Celtic
folk); 3) at the application level, using the system for idea generation in
music composition. We make our datasets, software and sound examples open and
available: \url{https://github.com/IraKorshunova/folk-rnn}.
| Bob L. Sturm, Jo\~ao Felipe Santos, Oded Ben-Tal and Iryna Korshunova | null | 1604.08723 | null | null |
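A minimal sketch (not the released folk-rnn code) of a character-level LSTM language model over ABC-notation text, assuming PyTorch; the vocabulary size, layer sizes, and training step are illustrative.

```python
import torch
import torch.nn as nn

class ABCCharLSTM(nn.Module):
    """Character-level LSTM language model over encoded ABC-notation tokens."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=256, num_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        x = self.embed(tokens)               # (batch, time, embed_dim)
        out, state = self.lstm(x, state)     # (batch, time, hidden_dim)
        return self.head(out), state         # logits over the next token

# One training step: predict each next character from the previous ones.
model = ABCCharLSTM(vocab_size=100)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

batch = torch.randint(0, 100, (8, 128))      # stand-in for encoded transcriptions
logits, _ = model(batch[:, :-1])
loss = criterion(logits.reshape(-1, 100), batch[:, 1:].reshape(-1))
loss.backward()
optimizer.step()
```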
MetaGrad: Multiple Learning Rates in Online Learning | cs.LG | In online convex optimization it is well known that certain subclasses of
objective functions are much easier than arbitrary convex functions. We are
interested in designing adaptive methods that can automatically get fast rates
in as many such subclasses as possible, without any manual tuning. Previous
adaptive methods are able to interpolate between strongly convex and general
convex functions. We present a new method, MetaGrad, that adapts to a much
broader class of functions, including exp-concave and strongly convex
functions, but also various types of stochastic and non-stochastic functions
without any curvature. For instance, MetaGrad can achieve logarithmic regret on
the unregularized hinge loss, even though it has no curvature, if the data come
from a favourable probability distribution. MetaGrad's main feature is that it
simultaneously considers multiple learning rates. Unlike previous methods with
provable regret guarantees, however, its learning rates are not monotonically
decreasing over time and are not tuned based on a theoretically derived bound
on the regret. Instead, they are weighted in direct proportion to their
empirical performance on the data, using a tilted exponential weights master
algorithm.
| Tim van Erven and Wouter M. Koolen | null | 1604.08740 | null | null |
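A heavily simplified sketch of the multiple-learning-rate idea, assuming NumPy: several slaves run in parallel with different learning rates, and a master tilts their weights by an exponential of a MetaGrad-style quadratic surrogate. The real MetaGrad slaves use a full second-order update with a specific grid of learning rates and prior; those details are omitted here.

```python
import numpy as np

def metagrad_style_update(etas, slaves, weights, grad, x_master):
    """One round of a simplified multi-learning-rate scheme (not the exact
    MetaGrad update): each slave takes its own gradient step, and the master
    tilts the weights by each slave's surrogate loss."""
    new_slaves, surrogate = [], np.zeros(len(etas))
    for i, (eta, x) in enumerate(zip(etas, slaves)):
        r = grad @ (x_master - x)                  # instantaneous regret proxy
        surrogate[i] = -eta * r + (eta * r) ** 2   # quadratic surrogate loss
        new_slaves.append(x - eta * grad)          # slave's own gradient step
    weights = weights * np.exp(-surrogate)          # tilted exponential weights
    weights /= weights.sum()
    x_master = sum(w * x for w, x in zip(weights, new_slaves))
    return new_slaves, weights, x_master

# Toy round: five learning rates on a 3-dimensional problem.
etas = np.array([0.5 ** i for i in range(5)])
slaves = [np.zeros(3) for _ in etas]
weights = np.ones(len(etas)) / len(etas)
x_master = np.zeros(3)
grad = np.array([1.0, -2.0, 0.5])                  # gradient observed at x_master
slaves, weights, x_master = metagrad_style_update(etas, slaves, weights, grad, x_master)
print(weights, x_master)
```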
Towards Conceptual Compression | stat.ML cs.CV cs.LG | We introduce a simple recurrent variational auto-encoder architecture that
significantly improves image modeling. The system represents the
state-of-the-art in latent variable models for both the ImageNet and Omniglot
datasets. We show that it naturally separates global conceptual information
from lower level details, thus addressing one of the fundamentally desired
properties of unsupervised learning. Furthermore, the possibility of
restricting ourselves to storing only global information about an image allows
us to achieve high quality 'conceptual compression'.
| Karol Gregor, Frederic Besse, Danilo Jimenez Rezende, Ivo Danihelka
and Daan Wierstra | null | 1604.08772 | null | null |
Joint Sound Source Separation and Speaker Recognition | cs.SD cs.LG | Non-negative Matrix Factorization (NMF) has already been applied to learn
speaker characterizations from single or non-simultaneous speech for speaker
recognition applications. It is also known for its good performance in (blind)
source separation for simultaneous speech. This paper explains how NMF can be
used to jointly solve the two problems in a multichannel speaker recognizer for
simultaneous speech. It is shown how state-of-the-art multichannel NMF for
blind source separation can be easily extended to incorporate speaker
recognition. Experiments on the CHiME corpus show that this method outperforms
the sequential approach of first applying source separation, followed by
speaker recognition that uses state-of-the-art i-vector techniques.
| Jeroen Zegers, Hugo Van hamme | null | 1604.08852 | null | null |
The Z-loss: a shift and scale invariant classification loss belonging to
the Spherical Family | cs.LG cs.AI stat.ML | Despite being the standard loss function to train multi-class neural
networks, the log-softmax has two potential limitations. First, it involves
computations that scale linearly with the number of output classes, which can
restrict the size of problems we are able to tackle with current hardware.
Second, it remains unclear how closely it matches the task loss, such as the top-k
error rate or other non-differentiable evaluation metrics that we ultimately aim
to optimize. In this paper, we introduce an alternative classification
loss function, the Z-loss, which is designed to address these two issues.
Unlike the log-softmax, it has the desirable property of belonging to the
spherical loss family (Vincent et al., 2015), a class of loss functions for
which training can be performed very efficiently with a complexity independent
of the number of output classes. We show experimentally that it significantly
outperforms the other spherical loss functions previously investigated.
Furthermore, we show on a word language modeling task that it also outperforms
the log-softmax with respect to certain ranking scores, such as top-k scores,
suggesting that the Z-loss has the flexibility to better match the task loss.
These qualities thus make the Z-loss an appealing candidate for efficiently
training networks with very large outputs, such as word-language models or other extreme
classification problems. On the One Billion Word (Chelba et al., 2014) dataset,
we are able to train a model with the Z-loss 40 times faster than the
log-softmax and more than 4 times faster than the hierarchical softmax.
| Alexandre de Br\'ebisson, Pascal Vincent | null | 1604.08859 | null | null |
Deep, Convolutional, and Recurrent Models for Human Activity Recognition
using Wearables | cs.LG cs.AI cs.HC stat.ML | Human activity recognition (HAR) in ubiquitous computing is beginning to
adopt deep learning to substitute for well-established analysis techniques that
rely on hand-crafted feature extraction and classification. From
these isolated applications of custom deep architectures it is, however,
difficult to gain an overview of their suitability for problems ranging from
the recognition of manipulative gestures to the segmentation and identification
of physical activities like running or ascending stairs. In this paper we
rigorously explore deep, convolutional, and recurrent approaches across three
representative datasets that contain movement data captured with wearable
sensors. We describe how to train recurrent approaches in this setting,
introduce a novel regularisation approach, and illustrate how they outperform
the state-of-the-art on a large benchmark dataset. Across thousands of
recognition experiments with randomly sampled model configurations we
investigate the suitability of each model for different tasks in HAR, explore
the impact of hyperparameters using the fANOVA framework, and provide
guidelines for the practitioner who wants to apply deep learning in their
problem setting.
| Nils Y. Hammerla, Shane Halloran and Thomas Ploetz | null | 1604.08880 | null | null |
An expressive dissimilarity measure for relational clustering using
neighbourhood trees | stat.ML cs.AI cs.LG | Clustering is an underspecified task: there are no universal criteria for
what makes a good clustering. This is especially true for relational data,
where similarity can be based on the features of individuals, the relationships
between them, or a mix of both. Existing methods for relational clustering have
strong and often implicit biases in this respect. In this paper, we introduce a
novel similarity measure for relational data. It is the first measure to
incorporate a wide variety of types of similarity, including similarity of
attributes, similarity of relational context, and proximity in a hypergraph. We
experimentally evaluate how using this similarity affects the quality of
clustering on very different types of datasets. The experiments demonstrate
that (a) using this similarity in standard clustering methods consistently
gives good results, whereas other measures work well only on datasets that
match their bias; and (b) on most datasets, the novel similarity outperforms
even the best among the existing ones.
| Sebastijan Dumancic and Hendrik Blockeel | 10.1007/s10994-017-5644-6 | 1604.08934 | null | null |
Predicting the direction of stock market prices using random forest | cs.LG cs.CE | Predicting trends in stock market prices has been an area of interest for
researchers for many years due to its complex and dynamic nature. Intrinsic
volatility in stock markets across the globe makes the task of prediction
challenging. Forecasting and diffusion modeling, although effective, are not a
panacea for the diverse range of problems encountered in prediction,
short-term or otherwise. Market risk, strongly correlated with forecasting
errors, needs to be minimized to ensure minimal risk in investment. The authors
propose to minimize forecasting error by treating the forecasting problem as a
classification problem, for which machine learning offers a popular suite of algorithms. In
this paper, we propose a novel way to minimize the risk of investment in stock
market by predicting the returns of a stock using a class of powerful machine
learning algorithms known as ensemble learning. Some of the technical
indicators, such as the Relative Strength Index (RSI) and the stochastic oscillator, are
used as inputs to train our model. The learning model used is an ensemble of
multiple decision trees. The algorithm is shown to outperform existing
algorithms found in the literature. Out-of-Bag (OOB) error estimates have been
found to be encouraging. Key Words: Random Forest Classifier, stock price
forecasting, Exponential smoothing, feature extraction, OOB error and
convergence.
| Luckyson Khaidem, Snehanshu Saha and Sudeepa Roy Dey | null | 1605.00003 | null | null |
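A minimal sketch of the classification setup above, assuming scikit-learn; the feature matrix stands in for technical indicators (RSI, stochastic oscillator, etc.) and the labels for the up/down direction of returns.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder features: rows are trading days, columns are technical
# indicators (e.g. RSI, stochastic oscillator, smoothed returns).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
y = (rng.random(1000) > 0.5).astype(int)   # 1 = price goes up, 0 = down

# Out-of-Bag (OOB) error comes for free with bootstrap-sampled trees.
clf = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0)
clf.fit(X, y)
print("OOB accuracy:", clf.oob_score_)
print("Predicted direction for a new day:", clf.predict(X[:1]))
```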
deepMiRGene: Deep Neural Network based Precursor microRNA Prediction | cs.LG q-bio.QM | Since microRNAs (miRNAs) play a crucial role in post-transcriptional gene
regulation, miRNA identification is one of the most essential problems in
computational biology. miRNAs are usually short in length ranging between 20
and 23 base pairs. It is thus often difficult to distinguish miRNA-encoding
sequences from other non-coding RNAs and pseudo miRNAs that have a similar
length, and most previous studies have recommended using precursor miRNAs
instead of mature miRNAs for robust detection. A great number of conventional
machine-learning-based classification methods have been proposed, but they
often have the serious disadvantage of requiring manual feature engineering,
and their performance is limited as well. In this paper, we propose a novel
miRNA precursor prediction algorithm, deepMiRGene, based on recurrent neural
networks, specifically long short-term memory networks. deepMiRGene
automatically learns suitable features from the data themselves without manual
feature engineering and constructs a model that can successfully reflect
structural characteristics of precursor miRNAs. For the performance evaluation
of our approach, we have employed several widely used evaluation metrics on
three recent benchmark datasets and verified that deepMiRGene delivers
performance comparable to that of the current state-of-the-art tools.
| Seunghyun Park, Seonwoo Min, Hyunsoo Choi, and Sungroh Yoon | null | 1605.00017 | null | null |
Deep Convolutional Neural Networks on Cartoon Functions | cs.LG cs.CV math.NA stat.ML | Wiatowski and B\"olcskei, 2015, proved that deformation stability and
vertical translation invariance of deep convolutional neural network-based
feature extractors are guaranteed by the network structure per se rather than
the specific convolution kernels and non-linearities. While the translation
invariance result applies to square-integrable functions, the deformation
stability bound holds for band-limited functions only. Many signals of
practical relevance (such as natural images) exhibit, however, sharp and curved
discontinuities and are, hence, not band-limited. The main contribution of this
paper is a deformation stability result that takes these structural properties
into account. Specifically, we establish deformation stability bounds for the
class of cartoon functions introduced by Donoho, 2001.
| Philipp Grohs, Thomas Wiatowski, Helmut B\"olcskei | 10.1109/ISIT.2016.7541482 | 1605.00031 | null | null |
Improved Sparse Low-Rank Matrix Estimation | math.OC cs.LG stat.ML | We address the problem of estimating a sparse low-rank matrix from its noisy
observation. We propose an objective function consisting of a data-fidelity
term and two parameterized non-convex penalty functions. Further, we show how
to set the parameters of the non-convex penalty functions, in order to ensure
that the objective function is strictly convex. The proposed objective function
better estimates sparse low-rank matrices than a convex method which utilizes
the sum of the nuclear norm and the $\ell_1$ norm. We derive an algorithm (as
an instance of ADMM) to solve the proposed problem, and guarantee its
convergence provided the scalar augmented Lagrangian parameter is set
appropriately. We demonstrate the proposed method for denoising an audio signal
and an adjacency matrix representing protein interactions in the `Escherichia
coli' bacteria.
| Ankit Parekh and Ivan W. Selesnick | 10.1016/j.sigpro.2017.04.011 | 1605.00042 | null | null |
Distributed Cell Association for Energy Harvesting IoT Devices in Dense
Small Cell Networks: A Mean-Field Multi-Armed Bandit Approach | cs.NI cs.LG cs.MA | The emerging Internet of Things (IoT)-driven ultra-dense small cell networks
(UD-SCNs) will need to combat a variety of challenges. On one hand, the massive
number of devices sharing the limited wireless resources will render
centralized control mechanisms infeasible due to the excessive cost of
information acquisition and computations. On the other hand, to reduce energy
consumption from fixed power grid and/or battery, many IoT devices may need to
depend on the energy harvested from the ambient environment (e.g., from RF
transmissions, environmental sources). However, due to the opportunistic nature
of energy harvesting, this will introduce uncertainty in the network operation.
In this article, we study the distributed cell association problem for energy
harvesting IoT devices in UD-SCNs. After reviewing the state-of-the-art
research on the cell association problem in small cell networks, we outline the
major challenges for distributed cell association in IoT-driven UD-SCNs where
the IoT devices will need to perform cell association in a distributed manner
in the presence of uncertainty (e.g., limited knowledge on channel/network) and
limited computational capabilities. To this end, we propose an approach based
on mean-field multi-armed bandit games to solve the uplink cell association
problem for energy harvesting IoT devices in a UD-SCN. This approach is
particularly suitable to analyze large multi-agent systems under uncertainty
and lack of information. We provide some theoretical results as well as
preliminary performance evaluation results for the proposed approach.
| Setareh Maghsudi and Ekram Hossain | 10.1109/ACCESS.2017.2676166 | 1605.00057 | null | null |
Constructive neural network learning | cs.LG | In this paper, we aim at developing scalable neural network-type learning
systems. Motivated by the idea of "constructive neural networks" in
approximation theory, we focus on "constructing" rather than "training"
feed-forward neural networks (FNNs) for learning, and propose a novel FNNs
learning system called the constructive feed-forward neural network (CFN).
Theoretically, we prove that the proposed method not only overcomes the
classical saturation problem for FNN approximation, but also reaches the
optimal learning rate when the regression function is smooth, while the
state-of-the-art learning rates established for traditional FNNs are only near
optimal (up to a logarithmic factor). A series of numerical simulations are
provided to show the efficiency and feasibility of CFN via comparing with the
well-known regularized least squares (RLS) with Gaussian kernel and extreme
learning machine (ELM).
| Shaobo Lin, Jinshan Zeng, and Xiaoqin Zhang | null | 1605.00079 | null | null |
Look-ahead before you leap: end-to-end active recognition by forecasting
the effect of motion | cs.CV cs.AI cs.LG cs.RO | Visual recognition systems mounted on autonomous moving agents face the
challenge of unconstrained data, but simultaneously have the opportunity to
improve their performance by moving to acquire new views of test data. In this
work, we first show how a recurrent neural network-based system may be trained
to perform end-to-end learning of motion policies suited for this "active
recognition" setting. Further, we hypothesize that active vision requires an
agent to have the capacity to reason about the effects of its motions on its
view of the world. To verify this hypothesis, we attempt to induce this
capacity in our active recognition pipeline, by simultaneously learning to
forecast the effects of the agent's motions on its internal representation of
the environment conditional on all past views. Results across two challenging
datasets confirm both that our end-to-end system successfully learns meaningful
policies for active category recognition, and that "learning to look ahead"
further boosts recognition performance.
| Dinesh Jayaraman and Kristen Grauman | null | 1605.00164 | null | null |
Stochastic Contextual Bandits with Known Reward Functions | cs.LG | Many sequential decision-making problems in communication networks can be
modeled as contextual bandit problems, which are natural extensions of the
well-known multi-armed bandit problem. In contextual bandit problems, at each
time, an agent observes some side information or context, pulls one arm and
receives the reward for that arm. We consider a stochastic formulation where
the context-reward tuples are independently drawn from an unknown distribution
in each trial. Motivated by networking applications, we analyze a setting where
the reward is a known non-linear function of the context and the chosen arm's
current state. We first consider the case of discrete and finite context-spaces
and propose DCB($\epsilon$), an algorithm that we prove, through a careful
analysis, yields regret (cumulative reward gap compared to a distribution-aware
genie) scaling logarithmically in time and linearly in the number of arms that
are not optimal for any context, improving over existing algorithms where the
regret scales linearly in the total number of arms. We then study continuous
context-spaces with Lipschitz reward functions and propose CCB($\epsilon,
\delta$), an algorithm that uses DCB($\epsilon$) as a subroutine.
CCB($\epsilon, \delta$) reveals a novel regret-storage trade-off that is
parametrized by $\delta$. Tuning $\delta$ to the time horizon allows us to
obtain sub-linear regret bounds, while requiring sub-linear storage. By
exploiting joint learning for all contexts we get regret bounds for
CCB($\epsilon, \delta$) that are unachievable by any existing contextual bandit
algorithm for continuous context-spaces. We also show similar performance
bounds for the unknown horizon case.
| Pranav Sakulkar and Bhaskar Krishnamachari | null | 1605.00176 | null | null |
Text-mining the NeuroSynth corpus using Deep Boltzmann Machines | cs.LG cs.CL q-bio.NC stat.ML | Large-scale automated meta-analysis of neuroimaging data has recently
established itself as an important tool in advancing our understanding of human
brain function. This research has been pioneered by NeuroSynth, a database
collecting both brain activation coordinates and associated text across a large
cohort of neuroimaging research papers. One of the fundamental aspects of such
meta-analysis is text-mining. To date, word counts and more sophisticated
methods such as Latent Dirichlet Allocation have been proposed. In this work we
present an unsupervised study of the NeuroSynth text corpus using Deep
Boltzmann Machines (DBMs). The use of DBMs yields several advantages over the
aforementioned methods, principal among which is the fact that it yields both
word and document embeddings in a high-dimensional vector space. Such
embeddings serve to facilitate the use of traditional machine learning
techniques on the text corpus. The proposed DBM model is shown to learn
embeddings with a clear semantic structure.
| Ricardo Pio Monti, Romy Lorenz, Robert Leech, Christoforos
Anagnostopoulos and Giovanni Montana | null | 1605.00223 | null | null |
Common-Description Learning: A Framework for Learning Algorithms and
Generating Subproblems from Few Examples | cs.AI cs.LG | Current learning algorithms face many difficulties in learning simple
patterns and using them to learn more complex ones. They also require more
examples than humans do to learn the same pattern, assuming no prior knowledge.
In this paper, a new learning framework is introduced that is called
common-description learning (CDL). This framework has been tested on 32 small
multi-task datasets, and the results show that it was able to learn complex
algorithms from a small number of examples. The final model is perfectly
interpretable and its depth depends on the question. What is meant by depth
here is that whenever needed, the model learns to break down the problem into
simpler subproblems and solves them using previously learned models. Finally,
we explain the capabilities of our framework in discovering complex relations
in data and how it can help in improving language understanding in machines.
| Basem G. El-Barashy | null | 1605.00241 | null | null |
A vector-contraction inequality for Rademacher complexities | cs.LG stat.ML | The contraction inequality for Rademacher averages is extended to Lipschitz
functions with vector-valued domains, and it is also shown that in the bounding
expression the Rademacher variables can be replaced by arbitrary iid symmetric
and sub-gaussian variables. Example applications are given for multi-category
learning, K-means clustering and learning-to-learn.
| Andreas Maurer | null | 1605.00251 | null | null |
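One common way to write the resulting inequality (assuming this phrasing of the result, for an $L$-Lipschitz real-valued function applied to a vector-valued class $\mathcal{F}$):
$$
\mathbb{E}_{\varepsilon}\,\sup_{f \in \mathcal{F}} \sum_{i=1}^{n} \varepsilon_i\, g\!\big(f(x_i)\big)
\;\le\; \sqrt{2}\, L\; \mathbb{E}_{\varepsilon}\,\sup_{f \in \mathcal{F}} \sum_{i=1}^{n} \sum_{k} \varepsilon_{i,k}\, f_k(x_i),
$$
where $g$ is $L$-Lipschitz with respect to the Euclidean norm, $f_k(x_i)$ denotes the $k$-th component of $f(x_i)$, and the $\varepsilon_i$, $\varepsilon_{i,k}$ are independent Rademacher variables; as the abstract notes, the Rademacher variables on the right-hand side may be replaced by arbitrary i.i.d. symmetric sub-gaussian variables.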
Fast Rates for General Unbounded Loss Functions: from ERM to Generalized
Bayes | cs.LG stat.ML | We present new excess risk bounds for general unbounded loss functions
including log loss and squared loss, where the distribution of the losses may
be heavy-tailed. The bounds hold for general estimators, but they are optimized
when applied to $\eta$-generalized Bayesian, MDL, and empirical risk
minimization estimators. In the case of log loss, the bounds imply convergence
rates for generalized Bayesian inference under misspecification in terms of a
generalization of the Hellinger metric as long as the learning rate $\eta$ is
set correctly. For general loss functions, our bounds rely on two separate
conditions: the $v$-GRIP (generalized reversed information projection)
conditions, which control the lower tail of the excess loss; and the newly
introduced witness condition, which controls the upper tail. The parameter $v$
in the $v$-GRIP conditions determines the achievable rate and is akin to the
exponent in the Tsybakov margin condition and the Bernstein condition for
bounded losses, which the $v$-GRIP conditions generalize; favorable $v$ in
combination with small model complexity leads to $\tilde{O}(1/n)$ rates. The
witness condition allows us to connect the excess risk to an "annealed" version
thereof, by which we generalize several previous results connecting Hellinger
and R\'enyi divergence to KL divergence.
| Peter D. Gr\"unwald and Nishant A. Mehta | null | 1605.00252 | null | null |
Particle Smoothing for Hidden Diffusion Processes: Adaptive Path
Integral Smoother | cs.LG stat.CO | Particle smoothing methods are used for inference of stochastic processes
based on noisy observations. Typically, the estimation of the marginal
posterior distribution given all observations is cumbersome and computational
intensive. In this paper, we propose a simple algorithm based on path integral
control theory to estimate the smoothing distribution of continuous-time
diffusion processes with partial observations. In particular, we use an
adaptive importance sampling method to improve the effective sampling size of
the posterior over processes given the observations and the reliability of the
estimation of the marginals. This is achieved by estimating a feedback
controller to sample efficiently from the joint smoothing distributions. We
compare the results with estimations obtained from the standard Forward
Filter/Backward Simulator for two diffusion processes of different complexity.
We show that the proposed method gives more reliable estimations than the
standard FFBSi when the smoothing distribution is poorly represented by the
filter distribution.
| H.-Ch. Ruiz and H. J. Kappen | 10.1109/TSP.2017.2686340 | 1605.00278 | null | null |
Some Insights into the Geometry and Training of Neural Networks | cs.LG | Neural networks have been successfully used for classification tasks in a
rapidly growing number of practical applications. Despite their popularity and
widespread use, there are still many aspects of training and classification
that are not well understood. In this paper we aim to provide some new insights
into training and classification by analyzing neural networks from a
feature-space perspective. We review and explain the formation of decision
regions and study some of their combinatorial aspects. We place a particular
emphasis on the connections between the neural network weight and bias terms
and properties of decision boundaries and other regions that exhibit varying
levels of classification confidence. We show how the error backpropagates in
these regions and emphasize the important role they have in the formation of
gradients. These findings expose the connections between scaling of the weight
parameters and the density of the training samples. This sheds more light on
the vanishing gradient problem, explains the need for regularization, and
suggests an approach for subsampling training data to improve performance.
| Ewout van den Berg | null | 1605.00329 | null | null |
Recovery of non-linear cause-effect relationships from linearly mixed
neuroimaging data | stat.ME cs.LG stat.AP stat.ML | Causal inference concerns the identification of cause-effect relationships
between variables. However, often only linear combinations of variables
constitute meaningful causal variables. For example, recovering the signal of a
cortical source from electroencephalography requires a well-tuned combination
of signals recorded at multiple electrodes. We recently introduced the MERLiN
(Mixture Effect Recovery in Linear Networks) algorithm that is able to recover,
from an observed linear mixture, a causal variable that is a linear effect of
another given variable. Here we relax the assumption of this cause-effect
relationship being linear and present an extended algorithm that can pick up
non-linear cause-effect relationships. Thus, the main contribution is an
algorithm (and ready to use code) that has broader applicability and allows for
a richer model class. Furthermore, a comparative analysis indicates that the
assumption of linear cause-effect relationships is not restrictive in analysing
electroencephalographic data.
| Sebastian Weichwald, Arthur Gretton, Bernhard Sch\"olkopf, Moritz
Grosse-Wentrup | 10.1109/PRNI.2016.7552331 | 1605.00391 | null | null |
Simple2Complex: Global Optimization by Gradient Descent | cs.LG cs.NE | A method named simple2complex for modeling and training deep neural networks
is proposed. Simple2complex trains deep neural networks by smoothly adding more
and more layers to a shallow network, so that the network effectively grows as
learning proceeds. Compared with end-to-end learning, simple2complex is less
likely to become trapped in local minima, i.e., it has a better ability to reach
a global optimum. CIFAR-10 is used to verify the superiority of simple2complex.
| Ming Li | null | 1605.00404 | null | null |
Gradient Descent Only Converges to Minimizers: Non-Isolated Critical
Points and Invariant Regions | math.DS cs.LG | Given a non-convex twice differentiable cost function f, we prove that the
set of initial conditions so that gradient descent converges to saddle points
where \nabla^2 f has at least one strictly negative eigenvalue has (Lebesgue)
measure zero, even for cost functions f with non-isolated critical points,
answering an open question in [Lee, Simchowitz, Jordan, Recht, COLT2016].
Moreover, this result extends to forward-invariant convex subspaces, allowing
for weak (non-globally Lipschitz) smoothness assumptions. Finally, we produce
an upper bound on the allowable step-size.
| Ioannis Panageas and Georgios Piliouras | null | 1605.00405 | null | null |
Methods for Sparse and Low-Rank Recovery under Simplex Constraints | stat.ME cs.LG | The de-facto standard approach of promoting sparsity by means of
$\ell_1$-regularization becomes ineffective in the presence of simplex
constraints, i.e.,~the target is known to have non-negative entries summing up
to a given constant. The situation is analogous for the use of nuclear norm
regularization for low-rank recovery of Hermitian positive semidefinite
matrices with given trace. In the present paper, we discuss several strategies
to deal with this situation, from simple to more complex. As a starting point,
we consider empirical risk minimization (ERM). It follows from existing theory
that ERM enjoys better theoretical properties w.r.t.~prediction and
$\ell_2$-estimation error than $\ell_1$-regularization. In light of this, we
argue that ERM combined with a subsequent sparsification step like thresholding
is superior to the heuristic of using $\ell_1$-regularization after dropping
the sum constraint and subsequent normalization.
At the next level, we show that any sparsity-promoting regularizer under
simplex constraints cannot be convex. A novel sparsity-promoting regularization
scheme based on the inverse or negative of the squared $\ell_2$-norm is
proposed, which avoids shortcomings of various alternative methods from the
literature. Our approach naturally extends to Hermitian positive semidefinite
matrices with given trace. Numerical studies concerning compressed sensing,
sparse mixture density estimation, portfolio optimization and quantum state
tomography are used to illustrate the key points of the paper.
| Ping Li and Syama Sundar Rangapuram and Martin Slawski | null | 1605.00507 | null | null |
Linear-time Outlier Detection via Sensitivity | stat.ML cs.LG | Outliers are ubiquitous in modern data sets. Distance-based techniques are a
popular non-parametric approach to outlier detection as they require no prior
assumptions on the data generating distribution and are simple to implement.
Scaling these techniques to massive data sets without sacrificing accuracy is a
challenging task. We propose a novel algorithm based on the intuition that
outliers have a significant influence on the quality of divergence-based
clustering solutions. We propose sensitivity - the worst-case impact of a data
point on the clustering objective - as a measure of outlierness. We then prove
that influence, a (non-trivial) upper-bound on the sensitivity, can be computed
by a simple linear time algorithm. To scale beyond a single machine, we propose
a communication efficient distributed algorithm. In an extensive experimental
evaluation, we demonstrate the effectiveness and establish the statistical
significance of the proposed approach. In particular, it outperforms the most
popular distance-based approaches while being several orders of magnitude
faster.
| Mario Lucic, Olivier Bachem, Andreas Krause | null | 1605.00519 | null | null |
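A minimal sketch of the intuition above (not the authors' influence bound or their distributed algorithm), assuming scikit-learn: score each point by its contribution to the k-means objective relative to its cluster's average cost, and flag the highest-scoring points as outliers.

```python
import numpy as np
from sklearn.cluster import KMeans

def sensitivity_style_outliers(X, k=10, num_outliers=20, seed=0):
    """Rank points by a simple influence proxy: squared distance to the nearest
    k-means centre, normalised by the mean cost of the point's cluster.
    This mimics, but is not identical to, the paper's sensitivity bound."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    d2 = np.min(
        ((X[:, None, :] - km.cluster_centers_[None, :, :]) ** 2).sum(-1), axis=1
    )
    cluster_mean_cost = np.array(
        [d2[km.labels_ == c].mean() + 1e-12 for c in range(k)]
    )
    influence = d2 / cluster_mean_cost[km.labels_]
    return np.argsort(influence)[-num_outliers:]     # indices of flagged points

X = np.random.default_rng(0).normal(size=(5000, 8))
print(sensitivity_style_outliers(X)[:5])
```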
Tradeoffs for Space, Time, Data and Risk in Unsupervised Learning | stat.ML cs.LG | Faced with massive data, is it possible to trade off (statistical) risk, and
(computational) space and time? This challenge lies at the heart of large-scale
machine learning. Using k-means clustering as a prototypical unsupervised
learning problem, we show how we can strategically summarize the data (control
space) in order to trade off risk and time when data is generated by a
probabilistic model. Our summarization is based on coreset constructions from
computational geometry. We also develop an algorithm, TRAM, to navigate the
space/time/data/risk tradeoff in practice. In particular, we show that for a
fixed risk (or data size), as the data size increases (resp. risk increases)
the running time of TRAM decreases. Our extensive experiments on real data sets
demonstrate the existence and practical utility of such tradeoffs, not only for
k-means but also for Gaussian Mixture Models.
| Mario Lucic, Mesrob I. Ohannessian, Amin Karbasi, Andreas Krause | null | 1605.00529 | null | null |
Graph Clustering Bandits for Recommendation | stat.ML cs.AI cs.IR cs.LG | We investigate an efficient context-dependent clustering technique for
recommender systems based on exploration-exploitation strategies through
multi-armed bandits over multiple users. Our algorithm dynamically groups users
based on their observed behavioral similarity during a sequence of logged
activities. In doing so, the algorithm reacts to the currently served user by
shaping clusters around him/her but, at the same time, it explores the
generation of clusters over users which are not currently engaged. We motivate
the effectiveness of this clustering policy, and provide an extensive empirical
analysis on real-world datasets, showing scalability and improved prediction
performance over state-of-the-art methods for sequential clustering of users in
multi-armed bandit scenarios.
| Shuai Li and Claudio Gentile and Alexandros Karatzoglou | null | 1605.00596 | null | null |
Algorithms for Learning Sparse Additive Models with Interactions in High
Dimensions | cs.LG cs.IT math.IT math.NA stat.ML | A function $f: \mathbb{R}^d \rightarrow \mathbb{R}$ is a Sparse Additive
Model (SPAM), if it is of the form $f(\mathbf{x}) = \sum_{l \in
\mathcal{S}}\phi_{l}(x_l)$ where $\mathcal{S} \subset [d]$, $|\mathcal{S}| \ll
d$. Assuming $\phi$'s, $\mathcal{S}$ to be unknown, there exists extensive work
for estimating $f$ from its samples. In this work, we consider a generalized
version of SPAMs, that also allows for the presence of a sparse number of
second order interaction terms. For some $\mathcal{S}_1 \subset [d],
\mathcal{S}_2 \subset {[d] \choose 2}$, with $|\mathcal{S}_1| \ll d,
|\mathcal{S}_2| \ll d^2$, the function $f$ is now assumed to be of the form:
$\sum_{p \in \mathcal{S}_1}\phi_{p} (x_p) + \sum_{(l,l^{\prime}) \in
\mathcal{S}_2}\phi_{(l,l^{\prime})} (x_l,x_{l^{\prime}})$. Assuming we have the
freedom to query $f$ anywhere in its domain, we derive efficient algorithms
that provably recover $\mathcal{S}_1,\mathcal{S}_2$ with finite sample bounds.
Our analysis covers the noiseless setting where exact samples of $f$ are
obtained, and also extends to the noisy setting where the queries are corrupted
with noise. For the noisy setting in particular, we consider two noise models
namely: i.i.d Gaussian noise and arbitrary but bounded noise. Our main methods
for identification of $\mathcal{S}_2$ essentially rely on estimation of sparse
Hessian matrices, for which we provide two novel compressed sensing based
schemes. Once $\mathcal{S}_1, \mathcal{S}_2$ are known, we show how the
individual components $\phi_p$, $\phi_{(l,l^{\prime})}$ can be estimated via
additional queries of $f$, with uniform error bounds. Lastly, we provide
simulation results on synthetic data that validate our theoretical findings.
| Hemant Tyagi, Anastasios Kyrillidis, Bernd G\"artner, Andreas Krause | null | 1605.00609 | null | null |
Predicting online extremism, content adopters, and interaction
reciprocity | cs.SI cs.LG physics.soc-ph | We present a machine learning framework that leverages a mixture of metadata,
network, and temporal features to detect extremist users, and predict content
adopters and interaction reciprocity in social media. We exploit a unique
dataset containing millions of tweets generated by more than 25 thousand users
who have been manually identified, reported, and suspended by Twitter due to
their involvement with extremist campaigns. We also leverage millions of tweets
generated by a random sample of 25 thousand regular users who were exposed to,
or consumed, extremist content. We carry out three forecasting tasks, (i) to
detect extremist users, (ii) to estimate whether regular users will adopt
extremist content, and finally (iii) to predict whether users will reciprocate
contacts initiated by extremists. All forecasting tasks are set up in two
scenarios: a post hoc (time independent) prediction task on aggregated data,
and a simulated real-time prediction task. The performance of our framework is
extremely promising, yielding, across the different forecasting scenarios, up to 93%
AUC for extremist user detection, up to 80% AUC for content adoption
prediction, and finally up to 72% AUC for interaction reciprocity forecasting.
We conclude by providing a thorough feature analysis that helps determine which
are the emerging signals that provide predictive power in different scenarios.
| Emilio Ferrara, Wen-Qiang Wang, Onur Varol, Alessandro Flammini, Aram
Galstyan | 10.1007/978-3-319-47874-6_3 | 1605.00659 | null | null |
Radio Transformer Networks: Attention Models for Learning to Synchronize
in Wireless Systems | cs.LG cs.NI cs.SY | We introduce learned attention models into the radio machine learning domain
for the task of modulation recognition by leveraging spatial transformer
networks and introducing new radio domain appropriate transformations. This
attention model allows the network to learn a localization network capable of
synchronizing and normalizing a radio signal blindly with zero knowledge of the
signal's structure based on optimization of the network for classification
accuracy, sparse representation, and regularization. Using this architecture we
are able to outperform our prior results in accuracy versus signal-to-noise ratio
against an identical system without attention; however, we believe such an
attention model has implications far beyond the task of modulation recognition.
| Timothy J O'Shea, Latha Pemula, Dhruv Batra, T. Charles Clancy | null | 1605.00716 | null | null |
VLSI Extreme Learning Machine: A Design Space Exploration | cs.LG cs.ET | In this paper, we describe a compact low-power, high performance hardware
implementation of the extreme learning machine (ELM) for machine learning
applications. Mismatch in current mirrors are used to perform the vector-matrix
multiplication that forms the first stage of this classifier and is the most
computationally intensive. Both regression and classification (on UCI data
sets) are demonstrated and a design space trade-off between speed, power and
accuracy is explored. Our results indicate that for a wide set of problems,
$\sigma V_T$ in the range of $15-25$mV gives optimal results. An input weight
matrix rotation method to extend the input dimension and hidden layer size
beyond the physical limits imposed by the chip is also described. This allows
us to overcome a major limit imposed on most hardware machine learners. The
chip is implemented in a $0.35 \mu$m CMOS process and occupies a die area of
around 5 mm $\times$ 5 mm. Operating from a $1$ V power supply, it achieves an
energy efficiency of $0.47$ pJ/MAC at a classification rate of $31.6$ kHz.
| Enyi Yao and Arindam Basu | null | 1605.00740 | null | null |
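The chip realizes the standard ELM pipeline in analog hardware; below is a minimal software sketch of that pipeline (fixed random input weights, then a least-squares solve for the output weights), assuming NumPy. The hidden-layer size and toy data are illustrative.

```python
import numpy as np

def elm_train(X, Y, hidden=256, seed=0):
    """Extreme Learning Machine: random fixed input layer, analytic output layer."""
    rng = np.random.default_rng(seed)
    W_in = rng.normal(size=(X.shape[1], hidden))    # fixed random input weights
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W_in + b)                       # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)    # least-squares output weights
    return W_in, b, beta

def elm_predict(X, W_in, b, beta):
    return np.tanh(X @ W_in + b) @ beta

# Toy regression example.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 3))
Y = np.sin(X.sum(axis=1, keepdims=True))
params = elm_train(X, Y)
print("train MSE:", np.mean((elm_predict(X, *params) - Y) ** 2))
```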
Learning from Binary Labels with Instance-Dependent Corruption | cs.LG | Suppose we have a sample of instances paired with binary labels corrupted by
arbitrary instance- and label-dependent noise. With sufficiently many such
samples, can we optimally classify and rank instances with respect to the
noise-free distribution? We provide a theoretical analysis of this question,
with three main contributions. First, we prove that for instance-dependent
noise, any algorithm that is consistent for classification on the noisy
distribution is also consistent on the clean distribution. Second, we prove
that for a broad class of instance- and label-dependent noise, a similar
consistency result holds for the area under the ROC curve. Third, for the
latter noise model, when the noise-free class-probability function belongs to
the generalised linear model family, we show that the Isotron can efficiently
and provably learn from the corrupted sample.
| Aditya Krishna Menon, Brendan van Rooyen, Nagarajan Natarajan | null | 1605.00751 | null | null |
Online Learning of Commission Avoidant Portfolio Ensembles | cs.AI cs.LG | We present a novel online ensemble learning strategy for portfolio selection.
The new strategy controls and exploits any set of commission-oblivious
portfolio selection algorithms. The strategy handles transaction costs using a
novel commission avoidance mechanism. We prove a logarithmic regret bound for
our strategy with respect to optimal mixtures of the base algorithms. Numerical
examples validate the viability of our method and show significant improvement
over the state-of-the-art.
| Guy Uziel and Ran El-Yaniv | null | 1605.00788 | null | null |
Dictionary Learning for Massive Matrix Factorization | stat.ML cs.LG q-bio.QM | Sparse matrix factorization is a popular tool to obtain interpretable data
decompositions, which are also effective to perform data completion or
denoising. Its applicability to large datasets has been addressed with online
and randomized methods, that reduce the complexity in one of the matrix
dimensions, but not in both. In this paper, we tackle very large
matrices in both dimensions. We propose a new factorization method that scales
gracefully to terabyte-scale datasets, that could not be processed by previous
algorithms in a reasonable amount of time. We demonstrate the efficiency of our
approach on massive functional Magnetic Resonance Imaging (fMRI) data, and on
matrix completion problems for recommender systems, where we obtain significant
speed-ups compared to state-of-the-art coordinate descent methods.
| Arthur Mensch (PARIETAL), Julien Mairal (LEAR), Bertrand Thirion
(PARIETAL), Ga\"el Varoquaux (PARIETAL) | null | 1605.00937 | null | null |
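A minimal sketch of online sparse dictionary learning on a small dense matrix, assuming scikit-learn's MiniBatchDictionaryLearning as a stand-in; the paper's key contribution, subsampling the large dimension at each iteration to reach terabyte scale, is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Stand-in data matrix: rows are samples, columns are (e.g.) fMRI voxels.
X = np.random.default_rng(0).normal(size=(2000, 500))

# Online sparse dictionary learning; the paper's method additionally
# subsamples the column dimension at each iteration, which sklearn does not.
dico = MiniBatchDictionaryLearning(
    n_components=40, alpha=1.0, batch_size=64, random_state=0
)
codes = dico.fit_transform(X)        # sparse codes, shape (2000, 40)
atoms = dico.components_             # learned dictionary, shape (40, 500)
print(codes.shape, atoms.shape)
```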
Personalized Risk Scoring for Critical Care Patients using Mixtures of
Gaussian Process Experts | cs.LG stat.ML | We develop a personalized real time risk scoring algorithm that provides
timely and granular assessments for the clinical acuity of ward patients based
on their (temporal) lab tests and vital signs. Heterogeneity of the patient
population is captured via a hierarchical latent class model. The proposed
algorithm aims to discover the number of latent classes in the patient
population and to train a mixture of Gaussian Process (GP) experts, where each
expert models the physiological data streams associated with a specific class.
Self-taught transfer learning is used to transfer the knowledge of latent
classes learned from the domain of clinically stable patients to the domain of
clinically deteriorating patients. For new patients, the posterior beliefs of
all GP experts about the patient's clinical status given her physiological data
stream are computed, and a personalized risk score is evaluated as a weighted
average of those beliefs, where the weights are learned from the patient's
hospital admission information. Experiments on a heterogeneous cohort of 6,313
patients admitted to the Ronald Reagan UCLA Medical Center show that our risk score
outperforms the currently deployed risk scores, such as MEWS and Rothman
scores.
| Ahmed M. Alaa, Jinsung Yoon, Scott Hu, Mihaela van der Schaar | null | 1605.00959 | null | null |
Online Machine Learning Techniques for Predicting Operator Performance | cs.LG | This thesis explores a number of online machine learning algorithms. From a
theoretical perspective, it assesses their employability for a particular
function approximation problem where the analytical models fall short.
Furthermore, it discusses the application of theoretically suitable learning
algorithms to the function approximation problem at hand through an efficient
implementation that exploits various computational and mathematical shortcuts.
Finally, this thesis work evaluates the implemented learning algorithms
according to various evaluation criteria through rigorous testing.
| Ahmet Anil Pala | null | 1605.01029 | null | null |
Do logarithmic proximity measures outperform plain ones in graph
clustering? | cs.LG cs.DM | We consider a number of graph kernels and proximity measures including
commute time kernel, regularized Laplacian kernel, heat kernel, exponential
diffusion kernel (also called "communicability"), etc., and the corresponding
distances as applied to clustering nodes in random graphs and several
well-known datasets. The model of generating random graphs involves edge
probabilities for the pairs of nodes that belong to the same class or different
predefined classes of nodes. It turns out that in most cases, logarithmic
measures (i.e., measures resulting after taking logarithm of the proximities)
perform better while distinguishing underlying classes than the "plain"
measures. A comparison in terms of reject curves of inter-class and intra-class
distances confirms this conclusion. A similar conclusion can be made for
several well-known datasets. A possible origin of this effect is that most
kernels have a multiplicative nature, while the nature of distances used in
cluster algorithms is an additive one (cf. the triangle inequality). The
logarithmic transformation is a tool to transform the first nature to the
second one. Moreover, some distances corresponding to the logarithmic measures
possess a meaningful cutpoint additivity property. In our experiments, the
leader is usually the logarithmic Communicability measure. However, we indicate
some more complicated cases in which other measures, typically, Communicability
and plain Walk, can be the winners.
| Vladimir Ivashkin and Pavel Chebotarev | null | 1605.01046 | null | null |
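A minimal sketch of one logarithmic measure from the study, assuming SciPy and scikit-learn: compute the communicability kernel expm(A), take the elementwise logarithm, and convert it to a distance for clustering. The particular kernel-to-distance transform below is an assumption; the paper compares many kernels and transforms.

```python
import numpy as np
from scipy.linalg import expm
from sklearn.cluster import AgglomerativeClustering

def log_communicability_distance(A, t=1.0):
    """Communicability kernel K = expm(t*A); take the elementwise log, then
    apply the usual kernel-to-distance transform (a sketch of the
    'logarithmic measure' idea, assuming this transform)."""
    K = expm(t * np.asarray(A, dtype=float))
    L = np.log(K + 1e-12)                       # guard against zero entries
    d = np.diag(L)
    D = 0.5 * (d[:, None] + d[None, :]) - L     # (h_ii + h_jj - 2 h_ij) / 2
    np.fill_diagonal(D, 0.0)
    return D

# Two noisy blocks -> expect two clusters.
rng = np.random.default_rng(0)
A = (rng.random((40, 40)) < 0.1).astype(float)
A[:20, :20] = rng.random((20, 20)) < 0.5
A[20:, 20:] = rng.random((20, 20)) < 0.5
A = np.triu(A, 1); A = A + A.T

D = log_communicability_distance(A)
# scikit-learn >= 1.2 uses `metric`; older versions use `affinity` instead.
labels = AgglomerativeClustering(n_clusters=2, metric="precomputed",
                                 linkage="average").fit_predict(D)
print(labels)
```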
Decentralized Dynamic Discriminative Dictionary Learning | stat.ML cs.LG | We consider discriminative dictionary learning in a distributed online
setting, where a network of agents aims to learn a common set of dictionary
elements of a feature space and model parameters while sequentially receiving
observations. We formulate this problem as a distributed stochastic program
with a non-convex objective and present a block variant of the Arrow-Hurwicz
saddle point algorithm to solve it. Using Lagrange multipliers to penalize the
discrepancy between them, only neighboring nodes exchange model information. We
show that decisions made with this saddle point algorithm asymptotically
achieve a first-order stationarity condition on average.
| Alec Koppel, Garrett Warnell, Ethan Stump, Alejandro Ribeiro | null | 1605.01107 | null | null |
An evaluation of randomized machine learning methods for redundant data:
Predicting short and medium-term suicide risk from administrative records and
risk assessments | stat.ML cs.LG | Accurate prediction of suicide risk in mental health patients remains an open
problem. Existing methods including clinician judgments have acceptable
sensitivity, but yield many false positives. Exploiting administrative data has
a great potential, but the data has high dimensionality and redundancies in the
recording processes. We investigate the efficacy of three of the most effective
randomized machine learning techniques (random forests, gradient boosting
machines, and deep neural nets with dropout) in predicting suicide risk. Using a
cohort of mental health patients from a regional Australian hospital, we
compare the predictive performance with popular traditional approaches:
clinician judgments based on a checklist, sparse logistic regression, and
decision trees. The randomized methods demonstrated robustness against data
redundancies and superior predictive performance on AUC and F-measure.
| Thuong Nguyen, Truyen Tran, Shivapratap Gopakumar, Dinh Phung, Svetha
Venkatesh | null | 1605.01116 | null | null |
Deep Motif: Visualizing Genomic Sequence Classifications | cs.LG | This paper applies a deep convolutional/highway MLP framework to classify
genomic sequences on the transcription factor binding site task. To make the
model understandable, we propose an optimization driven strategy to extract
"motifs", or symbolic patterns which visualize the positive class learned by
the network. We show that our system, Deep Motif (DeMo), extracts motifs that
are similar to, and in some cases outperform, the current well-known motifs. In
addition, we find that a deeper model consisting of multiple convolutional and
highway layers can outperform a single convolutional and fully connected layer
in the previous state-of-the-art.
| Jack Lanchantin, Ritambhara Singh, Zeming Lin, Yanjun Qi | null | 1605.01133 | null | null |
Linear Bandit algorithms using the Bootstrap | stat.ML cs.LG | This study presents two new algorithms for solving linear stochastic bandit
problems. The proposed methods use an approach from non-parametric statistics
called bootstrapping to create confidence bounds. This is achieved without
making any assumptions about the distribution of noise in the underlying
system. We present the X-Random and X-Fixed bootstrap bandits, which correspond
to the two well-known approaches in the literature for conducting bootstraps on
models. The proposed methods are compared to other popular solutions for
linear stochastic bandit problems, namely, OFUL, LinUCB and Thompson Sampling.
The comparisons are carried out using a simulation study on a hierarchical
probability meta-model, built from published data of experiments, which are run
on real systems. The model representing the response surfaces is conceptualized
as a Bayesian Network which is presented with varying degrees of noise for the
simulations. One of the proposed methods, X-Random bootstrap, performs better
than the baselines in terms of cumulative regret across various degrees of
noise and different numbers of trials. In certain settings, the cumulative regret
of this method is less than half of the best baseline. The X-Fixed bootstrap
performs comparably in most situations and particularly well when the number of
trials is low. The study concludes that these algorithms could be a preferred
alternative for solving linear bandit problems, especially when the
distribution of the noise in the system is unknown.
| Nandan Sudarsanam and Balaraman Ravindran | null | 1605.01185 | null | null |
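A minimal sketch of the bootstrap idea for a linear bandit (a plausible reading of the approach, not the authors' exact X-Random/X-Fixed procedures), assuming NumPy: resample the observed history, refit a ridge-regression model on each resample, and act on an upper quantile of the predicted rewards.

```python
import numpy as np

def bootstrap_ucb_choice(history_X, history_r, arm_features, n_boot=50, seed=0):
    """Pick an arm via an upper bootstrap quantile of its predicted reward."""
    rng = np.random.default_rng(seed)
    n, d = history_X.shape
    preds = np.zeros((n_boot, arm_features.shape[0]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)               # resample rounds with replacement
        Xb, rb = history_X[idx], history_r[idx]
        theta = np.linalg.solve(Xb.T @ Xb + 1e-3 * np.eye(d), Xb.T @ rb)  # ridge fit
        preds[b] = arm_features @ theta
    ucb = np.quantile(preds, 0.95, axis=0)             # optimistic estimate per arm
    return int(np.argmax(ucb))

rng = np.random.default_rng(1)
history_X = rng.normal(size=(200, 5))                  # past contexts / arm features
history_r = history_X @ np.array([1.0, -1.0, 0.5, 0.0, 2.0]) + rng.normal(size=200)
arms = rng.normal(size=(10, 5))                        # candidate arms this round
print("chosen arm:", bootstrap_ucb_choice(history_X, history_r, arms))
```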
A Bayesian Approach to Policy Recognition and State Representation
Learning | stat.ML cs.LG cs.SY math.DS math.PR | Learning from demonstration (LfD) is the process of building behavioral
models of a task from demonstrations provided by an expert. These models can be
used e.g. for system control by generalizing the expert demonstrations to
previously unencountered situations. Most LfD methods, however, make strong
assumptions about the expert behavior, e.g. they assume the existence of a
deterministic optimal ground truth policy or require direct monitoring of the
expert's controls, which limits their practical use as part of a general system
identification framework. In this work, we consider the LfD problem in a more
general setting where we allow for arbitrary stochastic expert policies,
without reasoning about the optimality of the demonstrations. Following a
Bayesian methodology, we model the full posterior distribution of possible
expert controllers that explain the provided demonstration data. Moreover, we
show that our methodology can be applied in a nonparametric context to infer
the complexity of the state representation used by the expert, and to learn
task-appropriate partitionings of the system state space.
| Adrian \v{S}o\v{s}i\'c, Abdelhak M. Zoubir, Heinz Koeppl | 10.1109/TPAMI.2017.2711024 | 1605.01278 | null | null |
Fast rates with high probability in exp-concave statistical learning | cs.LG | We present an algorithm for the statistical learning setting with a bounded
exp-concave loss in $d$ dimensions that obtains excess risk $O(d
\log(1/\delta)/n)$ with probability at least $1 - \delta$. The core technique
is to boost the confidence of recent in-expectation $O(d/n)$ excess risk bounds
for empirical risk minimization (ERM), without sacrificing the rate, by
leveraging a Bernstein condition which holds due to exp-concavity. We also show
that with probability $1 - \delta$ the standard ERM method obtains excess risk
$O(d (\log(n) + \log(1/\delta))/n)$. We further show that a regret bound for
any online learner in this setting translates to a high probability excess risk
bound for the corresponding online-to-batch conversion of the online learner.
Lastly, we present two high probability bounds for the exp-concave model
selection aggregation problem that are quantile-adaptive in a certain sense.
The first bound is a purely exponential weights type algorithm, obtains a
nearly optimal rate, and has no explicit dependence on the Lipschitz continuity
of the loss. The second bound requires Lipschitz continuity but obtains the
optimal rate.
| Nishant A. Mehta | null | 1605.01288 | null | null |
Single Channel Speech Enhancement Using Outlier Detection | cs.SD cs.LG | Distortion of the underlying speech is a common problem for single-channel
speech enhancement algorithms, and hinders such methods from being used more
extensively. A dictionary based speech enhancement method that emphasizes
preserving the underlying speech is proposed. Spectral patches of clean speech
are sampled and clustered to train a dictionary. Given a noisy speech spectral
patch, the best matching dictionary entry is selected and used to estimate the
noise power at each time-frequency bin. The noise estimation step is formulated
as an outlier detection problem, where the noise at each bin is assumed present
only if it is an outlier to the corresponding bin of the best matching
dictionary entry. This framework assigns higher priority in removing spectral
elements that strongly deviate from a typical spoken unit stored in the trained
dictionary. Even without the aid of a separate noise model, this method can
achieve significant noise reduction for various non-stationary noises, while
effectively preserving the underlying speech in more challenging noisy
environments.
| Eunjoon Cho, Bowon Lee, Ronald Schafer, Bernard Widrow | null | 1605.01329 | null | null |
Learning from the memory of Atari 2600 | cs.LG cs.AI | We train a number of neural networks to play the games Bowling, Breakout, and
Seaquest using information stored in the memory of a video game console Atari
2600. We consider four models of neural networks which differ in size and
architecture: two networks which use only information contained in the RAM and
two mixed networks which use both information in the RAM and information from
the screen. As the benchmark we used the convolutional model proposed in NIPS
and received comparable results in all considered games. Quite surprisingly, in
the case of Seaquest we were able to train RAM-only agents which behave better
than the benchmark screen-only agent. Mixing screen and RAM did not lead to an
improved performance compared to screen-only and RAM-only agents.
| Jakub Sygnowski and Henryk Michalewski | null | 1605.01335 | null | null |
Accelerating Deep Learning with Shrinkage and Recall | cs.LG cs.CV cs.NE | Deep Learning is a very powerful machine learning model. Deep Learning trains
a large number of parameters across multiple layers and is very slow when the
data are large scale and the architecture size is large. Inspired by the shrinking
technique used to accelerate computation of the Support Vector Machine (SVM)
algorithm and the screening technique used in LASSO, we propose a shrinking Deep
Learning with recall (sDLr) approach to speed up deep learning computation. We
evaluate shrinking Deep Learning with recall (sDLr) using Deep Neural Networks
(DNN), Deep Belief Networks (DBN) and Convolutional Neural Networks (CNN) on 4 data
sets. Results show that the speedup using shrinking Deep Learning with recall
(sDLr) can reach more than 2.0 while still giving competitive classification
performance.
| Shuai Zheng, Abhinav Vishnu, Chris Ding | null | 1605.01369 | null | null |
Boltzmann meets Nash: Energy-efficient routing in optical networks under
uncertainty | cs.NI cs.GT cs.LG | Motivated by the massive deployment of power-hungry data centers for service
provisioning, we examine the problem of routing in optical networks with the
aim of minimizing traffic-driven power consumption. To tackle this issue,
routing must take into account energy efficiency as well as capacity
considerations; moreover, in rapidly-varying network environments, this must be
accomplished in a real-time, distributed manner that remains robust in the
presence of random disturbances and noise. In view of this, we derive a pricing
scheme whose Nash equilibria coincide with the network's socially optimum
states, and we propose a distributed learning method based on the Boltzmann
distribution of statistical mechanics. Using tools from stochastic calculus, we
show that the resulting Boltzmann routing scheme exhibits remarkable
convergence properties under uncertainty: specifically, the long-term average
of the network's power consumption converges within $\varepsilon$ of its
minimum value in time which is at most $\tilde O(1/\varepsilon^2)$,
irrespective of the fluctuations' magnitude; additionally, if the network
admits a strict, non-mixing optimum state, the algorithm converges to it -
again, no matter the noise level. Our analysis is supplemented by extensive
numerical simulations which show that Boltzmann routing can lead to a
significant decrease in power consumption over basic, shortest-path routing
schemes in realistic network conditions.
| Panayotis Mertikopoulos and Aris L. Moustakas and Anna Tzanakaki | null | 1605.01451 | null | null |
Classification of Human Whole-Body Motion using Hidden Markov Models | cs.LG cs.CV | Human motion plays an important role in many fields. Large databases exist
that store and make available recordings of human motions. However, annotating
each motion with multiple labels is a cumbersome and error-prone process. This
bachelor's thesis presents different approaches to solve the multi-label
classification problem using Hidden Markov Models (HMMs). First, different
features that can be directly obtained from the raw data are introduced. Next,
additional features are derived to improve classification performance. These
features are then used to perform the multi-label classification using two
different approaches. The first approach simply transforms the multi-label
problem into a multi-class problem. The second, novel approach solves the same
problem without the need to construct a transformation by predicting the labels
directly from the likelihood scores. The second approach scales linearly with
the number of labels whereas the first approach is subject to combinatorial
explosion. All aspects of the classification process are evaluated on a data
set that consists of 454 motions. System 1 achieves an accuracy of 98.02% and
system 2 an accuracy of 93.39% on the test set.
| Matthias Plappert | null | 1605.01569 | null | null |
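For the whole-body motion classification abstract above, the second approach (predicting labels directly from per-label likelihood scores) can be illustrated with a small sketch. This is not the thesis' implementation: the hmmlearn usage, the number of hidden states, and the score threshold are assumptions for illustration only.

```python
# Sketch: fit one Gaussian HMM per label, then assign every label whose
# normalized log-likelihood on a new motion exceeds a threshold.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_label_hmms(sequences_by_label, n_states=5):
    """sequences_by_label: {label: [array of shape (T, D), ...]}."""
    models = {}
    for label, seqs in sequences_by_label.items():
        X = np.vstack(seqs)                   # concatenate all feature frames
        lengths = [len(s) for s in seqs]      # per-sequence lengths for fitting
        models[label] = GaussianHMM(n_components=n_states).fit(X, lengths)
    return models

def predict_labels(models, sequence, threshold=-50.0):
    """Return all labels whose per-frame log-likelihood passes the threshold."""
    scores = {lbl: m.score(sequence) / len(sequence) for lbl, m in models.items()}
    return [lbl for lbl, s in scores.items() if s > threshold], scores
```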
On the Convergence of A Family of Robust Losses for Stochastic Gradient
Descent | cs.LG | The convergence of Stochastic Gradient Descent (SGD) using convex loss
functions has been widely studied. However, vanilla SGD methods using convex
losses cannot perform well with noisy labels, which adversely affect the update
of the primal variable in SGD methods. Unfortunately, noisy labels are
ubiquitous in real world applications such as crowdsourcing. To handle noisy
labels, in this paper, we present a family of robust losses for SGD methods. By
employing our robust losses, SGD methods successfully reduce negative effects
caused by noisy labels on each update of the primal variable. We not only
reveal that the convergence rate is O(1/T) for SGD methods using robust losses,
but also provide the robustness analysis on two representative robust losses.
Comprehensive experimental results on six real-world datasets show that SGD
methods using robust losses are obviously more robust than other baseline
methods in most situations with fast convergence.
| Bo Han and Ivor W. Tsang and Ling Chen | null | 1605.01623 | null | null |
Maximal Sparsity with Deep Networks? | cs.LG | The iterations of many sparse estimation algorithms are comprised of a fixed
linear filter cascaded with a thresholding nonlinearity, which collectively
resemble a typical neural network layer. Consequently, a lengthy sequence of
algorithm iterations can be viewed as a deep network with shared, hand-crafted
layer weights. It is therefore quite natural to examine the degree to which a
learned network model might act as a viable surrogate for traditional sparse
estimation in domains where ample training data is available. While the
possibility of a reduced computational budget is readily apparent when a
ceiling is imposed on the number of layers, our work primarily focuses on
estimation accuracy. In particular, it is well-known that when a signal
dictionary has coherent columns, as quantified by a large RIP constant, then
most tractable iterative algorithms are unable to find maximally sparse
representations. In contrast, we demonstrate both theoretically and empirically
the potential for a trained deep network to recover minimal $\ell_0$-norm
representations in regimes where existing methods fail. The resulting system is
deployed on a practical photometric stereo estimation problem, where the goal
is to remove sparse outliers that can disrupt the estimation of surface normals
from a 3D scene.
| Bo Xin, Yizhou Wang, Wen Gao and David Wipf | null | 1605.01636 | null | null |
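As background for the structure described in the abstract above, where each iteration is a fixed linear filter cascaded with a thresholding nonlinearity, a minimal ISTA-style iteration is sketched below. The step size and soft-thresholding form are the standard ones and are not specific to the trained networks studied in the paper.

```python
# Minimal ISTA sketch: every iteration applies a fixed linear filter followed
# by a thresholding nonlinearity, the pattern the abstract likens to a layer.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, n_iter=100):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x
```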
A Tight Bound of Hard Thresholding | stat.ML cs.IT cs.LG cs.NA math.IT math.NA math.OC | This paper is concerned with the hard thresholding operator which sets all
but the $k$ largest absolute elements of a vector to zero. We establish a {\em
tight} bound to quantitatively characterize the deviation of the thresholded
solution from a given signal. Our theoretical result is universal in the sense
that it holds for all choices of parameters, and the underlying analysis
depends only on fundamental arguments in mathematical optimization. We discuss
the implications for two domains:
Compressed Sensing. On account of the crucial estimate, we bridge the
connection between the restricted isometry property (RIP) and the sparsity
parameter for a vast volume of hard thresholding based algorithms, which
renders an improvement on the RIP condition especially when the true sparsity
is unknown. This suggests that in essence, many more kinds of sensing matrices
or fewer measurements are admissible for the data acquisition procedure.
Machine Learning. In terms of large-scale machine learning, a significant yet
challenging problem is learning accurate sparse models in an efficient manner.
In stark contrast to prior work that attempted the $\ell_1$-relaxation for
promoting sparsity, we present a novel stochastic algorithm which performs hard
thresholding in each iteration, hence ensuring such parsimonious solutions.
Equipped with the developed bound, we prove the {\em global linear convergence}
for a number of prevalent statistical models under mild assumptions, even
though the problem turns out to be non-convex.
| Jie Shen and Ping Li | null | 1605.01656 | null | null |
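The hard thresholding operator analyzed in the abstract above admits a very short implementation; the sketch below is a direct rendering of "keep the $k$ largest-magnitude entries", not the paper's stochastic algorithm.

```python
# Hard thresholding: zero out all but the k largest-magnitude entries of x.
import numpy as np

def hard_threshold(x, k):
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]   # indices of the k largest |x_i|
    out[idx] = x[idx]
    return out
```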
Copeland Dueling Bandit Problem: Regret Lower Bound, Optimal Algorithm,
and Computationally Efficient Algorithm | stat.ML cs.LG | We study the K-armed dueling bandit problem, a variation of the standard
stochastic bandit problem where the feedback is limited to relative comparisons
of a pair of arms. The hardness of recommending Copeland winners, the arms that
beat the greatest number of other arms, is characterized by deriving an
asymptotic regret bound. We propose Copeland Winners Relative Minimum Empirical
Divergence (CW-RMED) and derive an asymptotically optimal regret bound for it.
However, it is not known whether the algorithm can be computed efficiently. To
address this issue, we devise an efficient version (ECW-RMED) and
derive its asymptotic regret bound. Experimental comparisons of dueling bandit
algorithms show that ECW-RMED significantly outperforms existing ones.
| Junpei Komiyama, Junya Honda, Hiroshi Nakagawa | null | 1605.01677 | null | null |
A note on adjusting $R^2$ for using with cross-validation | cs.LG cs.AI stat.ML | We show how to adjust the coefficient of determination ($R^2$) when used for
measuring predictive accuracy via leave-one-out cross-validation.
| Indre Zliobaite and Nikolaj Tatti | null | 1605.01703 | null | null |
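As background for the note above, the unadjusted cross-validated coefficient of determination it starts from can be written as below, where $\hat{y}_{-i}$ is the prediction for the $i$-th observation from a model fit without it; the specific adjustment proposed in the note is not reproduced here.

```latex
R^2_{\mathrm{cv}} \;=\; 1 \;-\; \frac{\sum_{i=1}^{n}\bigl(y_i - \hat{y}_{-i}\bigr)^2}{\sum_{i=1}^{n}\bigl(y_i - \bar{y}\bigr)^2}
```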
Not Just a Black Box: Learning Important Features Through Propagating
Activation Differences | cs.LG cs.CV cs.NE | Note: This paper describes an older version of DeepLIFT. See
https://arxiv.org/abs/1704.02685 for the newer version. Original abstract
follows: The purported "black box" nature of neural networks is a barrier to
adoption in applications where interpretability is essential. Here we present
DeepLIFT (Learning Important FeaTures), an efficient and effective method for
computing importance scores in a neural network. DeepLIFT compares the
activation of each neuron to its 'reference activation' and assigns
contribution scores according to the difference. We apply DeepLIFT to models
trained on natural images and genomic data, and show significant advantages
over gradient-based methods.
| Avanti Shrikumar, Peyton Greenside, Anna Shcherbina, Anshul Kundaje | null | 1605.01713 | null | null |
Rank Ordered Autoencoders | cs.LG stat.ML | A new method for the unsupervised learning of sparse representations using
autoencoders is proposed and implemented by ordering the output of the hidden
units by their activation value and progressively reconstructing the input in
this order. This can be done efficiently in parallel using cumulative sums, with
sorting only slightly increasing the computational costs. Minimizing
the difference of this progressive reconstruction with respect to the input can
be seen as minimizing the number of active output units required for the
reconstruction of the input. The model thus learns to reconstruct optimally
using the least number of active output units. This leads to high sparsity
without the need for extra hyperparameters; the amount of sparsity is instead
implicitly learned by minimizing this progressive reconstruction error. Results
of the trained model are given for patches of the CIFAR10 dataset, showing
rapid convergence of features and extremely sparse output activations while
maintaining a minimal reconstruction error and showing extreme robustness to
overfitting. Additionally the reconstruction as function of number of active
units is presented which shows the autoencoder learns a rank order code over
the input where the highest ranked units correspond to the highest decrease in
reconstruction error.
| Paul Bertens | null | 1605.01749 | null | null |
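The rank-ordered progressive reconstruction described in the abstract above can be sketched as follows. The linear decoder, variable names, and the per-rank squared error are illustrative assumptions, not the paper's actual model.

```python
# Sketch: sort hidden units by activation and rebuild the input cumulatively
# in that order, recording the reconstruction error after each added unit.
import numpy as np

def progressive_reconstruction_error(x, h, W_dec):
    """x: (d,) input patch, h: (m,) hidden activations, W_dec: (m, d) decoder."""
    order = np.argsort(-h)                          # most active units first
    contribs = h[order, None] * W_dec[order]        # per-unit contributions, (m, d)
    partial = np.cumsum(contribs, axis=0)           # reconstruction after k units
    return np.sum((partial - x[None, :]) ** 2, axis=1)  # error as a function of k
```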
DCTNet and PCANet for acoustic signal feature extraction | cs.SD cs.LG | We introduce the use of DCTNet, an efficient approximation and alternative to
PCANet, for acoustic signal classification. In PCANet, the eigenfunctions of
the local sample covariance matrix (PCA) are used as filterbanks for
convolution and feature extraction. When the eigenfunctions are well
approximated by the Discrete Cosine Transform (DCT) functions, each layer of
PCANet and DCTNet is essentially a time-frequency representation. We relate
DCTNet to spectral feature representation methods, such as the short time
Fourier transform (STFT), spectrogram and linear frequency spectral
coefficients (LFSC). Experimental results on whale vocalization data show that
DCTNet improves classification rate, demonstrating DCTNet's applicability to
signal processing problems such as underwater acoustics.
| Yin Xian, Andrew Thompson, Xiaobai Sun, Douglas Nowacek, and Loren
Nolte | null | 1605.01755 | null | null |
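One way to picture a single layer of the DCT filterbank convolution described in the abstract above is sketched below. The filter length and the magnitude output are illustrative choices, not the paper's configuration.

```python
# Sketch of a DCTNet-style layer: convolve the signal with DCT-II basis
# functions of length N used as a filterbank, giving a time-frequency map.
import numpy as np

def dct_filterbank(N):
    n = np.arange(N)
    return np.array([np.cos(np.pi / N * (n + 0.5) * k) for k in range(N)])

def dctnet_layer(signal, N=32):
    filters = dct_filterbank(N)
    return np.abs(np.array([np.convolve(signal, f, mode="valid") for f in filters]))
```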
Cross-Graph Learning of Multi-Relational Associations | cs.LG | Cross-graph Relational Learning (CGRL) refers to the problem of predicting
the strengths or labels of multi-relational tuples of heterogeneous object
types, through the joint inference over multiple graphs which specify the
internal connections among each type of objects. CGRL is an open challenge in
machine learning due to the daunting number of all possible tuples to deal with
when the numbers of nodes in multiple graphs are large, and because the labeled
training instances are typically extremely sparse. Existing methods such as
tensor factorization or tensor-kernel machines do not work well because of the
lack of convex formulation for the optimization of CGRL models, the poor
scalability of the algorithms in handling combinatorial numbers of tuples,
and/or the non-transductive nature of the learning methods which limits their
ability to leverage unlabeled data in training. This paper proposes a novel
framework which formulates CGRL as a convex optimization problem, enables
transductive learning using both labeled and unlabeled tuples, and offers a
scalable algorithm that guarantees the optimal solution and enjoys a linear
time complexity with respect to the sizes of input graphs. In our experiments
with a subset of DBLP publication records and an Enzyme multi-source dataset,
the proposed method successfully scaled to the large cross-graph inference
problem, and outperformed other representative approaches significantly.
| Hanxiao Liu, Yiming Yang | null | 1605.01832 | null | null |
DeepPicker: a Deep Learning Approach for Fully Automated Particle
Picking in Cryo-EM | q-bio.QM cs.LG | Particle picking is a time-consuming step in single-particle analysis and
often requires significant interventions from users, which has become a
bottleneck for future automated electron cryo-microscopy (cryo-EM). Here we
report a deep learning framework, called DeepPicker, to address this problem
and fill the current gaps toward a fully automated cryo-EM pipeline. DeepPicker
employs a novel cross-molecule training strategy to capture common features of
particles from previously-analyzed micrographs, and thus does not require any
human intervention during particle picking. Tests on the recently-published
cryo-EM data of three complexes have demonstrated that our deep learning based
scheme can successfully accomplish the human-level particle picking process and
identify a sufficient number of particles that are comparable to those manually
by human experts. These results indicate that DeepPicker can provide a
practically useful tool to significantly reduce the time and manual effort
spent in single-particle analysis and thus greatly facilitate high-resolution
cryo-EM structure determination.
| Feng Wang and Huichao Gong and Gaochao liu and Meijing Li and Chuangye
Yan and Tian Xia and Xueming Li and Jianyang Zeng | null | 1605.01838 | null | null |
Energy Disaggregation for Real-Time Building Flexibility Detection | stat.ML cs.AI cs.LG | Energy is a limited resource which has to be managed wisely, taking into
account both supply-demand matching and capacity constraints in the
distribution grid. One aspect of the smart energy management at the building
level is given by the problem of real-time detection of flexible demand
available. In this paper we propose the use of energy disaggregation techniques
to perform this task. Firstly, we investigate the use of existing
classification methods to perform energy disaggregation. A comparison is
performed between four classifiers, namely Naive Bayes, k-Nearest Neighbors,
Support Vector Machine and AdaBoost. Secondly, we propose the use of Restricted
Boltzmann Machine to automatically perform feature extraction. The extracted
features are then used as inputs to the four classifiers and consequently shown
to improve their accuracy. The efficiency of our approach is demonstrated on a
real database consisting of detailed appliance-level measurements with high
temporal resolution, which has been used for energy disaggregation in previous
studies, namely the REDD. The results show robustness and good generalization
capabilities to newly presented buildings with at least 96% accuracy.
| Elena Mocanu, Phuong H. Nguyen, Madeleine Gibescu | null | 1605.01939 | null | null |
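The two-stage approach in the abstract above, RBM feature extraction followed by a standard classifier, can be sketched with scikit-learn components. The specific estimators, hyperparameters, and preprocessing below are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: an RBM extracts features from (normalized) power readings, then a
# k-NN classifier labels the flexibility/appliance state.
from sklearn.neural_network import BernoulliRBM
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline

# BernoulliRBM expects inputs scaled to [0, 1], so readings are assumed normalized.
model = Pipeline([
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])
# model.fit(X_train, y_train); model.score(X_test, y_test)
```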
Automatic LQR Tuning Based on Gaussian Process Global Optimization | cs.RO cs.LG cs.SY | This paper proposes an automatic controller tuning framework based on linear
optimal control combined with Bayesian optimization. With this framework, an
initial set of controller gains is automatically improved according to a
pre-defined performance objective evaluated from experimental data. The
underlying Bayesian optimization algorithm is Entropy Search, which represents
the latent objective as a Gaussian process and constructs an explicit belief
over the location of the objective minimum. This is used to maximize the
information gain from each experimental evaluation. Thus, this framework shall
yield improved controllers with fewer evaluations compared to alternative
approaches. A seven-degree-of-freedom robot arm balancing an inverted pole is
used as the experimental demonstrator. Results of two- and four-dimensional
tuning problems highlight the method's potential for automatic controller
tuning on robotic platforms.
| Alonso Marco, Philipp Hennig, Jeannette Bohg, Stefan Schaal and
Sebastian Trimpe | 10.1109/ICRA.2016.7487144 | 1605.01950 | null | null |
Training Neural Networks Without Gradients: A Scalable ADMM Approach | cs.LG | With the growing importance of large network models and enormous training
datasets, GPUs have become increasingly necessary to train neural networks.
This is largely because conventional optimization algorithms rely on stochastic
gradient methods that don't scale well to large numbers of cores in a cluster
setting. Furthermore, the convergence of all gradient methods, including batch
methods, suffers from common problems like saturation effects, poor
conditioning, and saddle points. This paper explores an unconventional training
method that uses alternating direction methods and Bregman iteration to train
networks without gradient descent steps. The proposed method reduces the
network training problem to a sequence of minimization sub-steps that can each
be solved globally in closed form. The proposed method is advantageous because
it avoids many of the caveats that make gradient methods slow on highly
non-convex problems. The method exhibits strong scaling in the distributed
setting, yielding linear speedups even when split over thousands of cores.
| Gavin Taylor, Ryan Burmeister, Zheng Xu, Bharat Singh, Ankit Patel,
Tom Goldstein | null | 1605.02026 | null | null |
Low-Complexity Stochastic Generalized Belief Propagation | cs.LG cs.AI cs.IT math.IT | The generalized belief propagation (GBP), introduced by Yedidia et al., is an
extension of the belief propagation (BP) algorithm, which is widely used in
different problems involved in calculating exact or approximate marginals of
probability distributions. In many problems, it has been observed that the
accuracy of GBP considerably outperforms that of BP. However, because in
general the computational complexity of GBP is higher than BP, its application
is limited in practice.
In this paper, we introduce a stochastic version of GBP called stochastic
generalized belief propagation (SGBP) that can be considered as an extension to
the stochastic BP (SBP) algorithm introduced by Noorshams et al. They have
shown that SBP reduces the complexity per iteration of BP by an order of
magnitude in alphabet size. In contrast to SBP, SGBP can reduce the computation
complexity if certain topological conditions are met by the region graph
associated to a graphical model. However, this reduction can be larger than
only one order of magnitude in alphabet size. In this paper, we characterize
these conditions and the amount of computation gain that we can obtain by using
SGBP. Finally, using similar proof techniques employed by Noorshams et al., for
general graphical models that satisfy contraction conditions, we prove the
asymptotic convergence of SGBP to the unique GBP fixed point, as well as
providing non-asymptotic upper bounds on the mean square error and on the high
probability error.
| Farzin Haddadpour, Mahdi Jafari Siavoshani, Morteza Noshad | null | 1605.02046 | null | null |
Concentrated Differential Privacy: Simplifications, Extensions, and
Lower Bounds | cs.CR cs.DS cs.IT cs.LG math.IT | "Concentrated differential privacy" was recently introduced by Dwork and
Rothblum as a relaxation of differential privacy, which permits sharper
analyses of many privacy-preserving computations. We present an alternative
formulation of the concept of concentrated differential privacy in terms of the
Renyi divergence between the distributions obtained by running an algorithm on
neighboring inputs. With this reformulation in hand, we prove sharper
quantitative results, establish lower bounds, and raise a few new questions. We
also unify this approach with approximate differential privacy by giving an
appropriate definition of "approximate concentrated differential privacy."
| Mark Bun, Thomas Steinke | null | 1605.02065 | null | null |
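For reference alongside the abstract above, the order-$\alpha$ Rényi divergence used in the reformulation has the standard form below; the paper's precise zero-concentrated differential privacy condition is not restated here.

```latex
D_{\alpha}(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1}\,\log\, \mathbb{E}_{x \sim Q}\!\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right]
```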
Function-Specific Mixing Times and Concentration Away from Equilibrium | math.ST cs.LG math.PR stat.TH | Slow mixing is the central hurdle when working with Markov chains, especially
those used for Monte Carlo approximations (MCMC). In many applications, it is
only of interest to estimate the stationary expectations of a small set of
functions, and so the usual definition of mixing based on total variation
convergence may be too conservative. Accordingly, we introduce
function-specific analogs of mixing times and spectral gaps, and use them to
prove Hoeffding-like function-specific concentration inequalities. These
results show that it is possible for empirical expectations of functions to
concentrate long before the underlying chain has mixed in the classical sense,
and we show that the concentration rates we achieve are optimal up to
constants. We use our techniques to derive confidence intervals that are
sharper than those implied by both classical Markov chain Hoeffding bounds and
Berry-Esseen-corrected CLT bounds. For applications that require testing,
rather than point estimation, we show similar improvements over recent
sequential testing results for MCMC. We conclude by applying our framework to
real data examples of MCMC, providing evidence that our theory is both accurate
and relevant to practice.
| Maxim Rabinovich, Aaditya Ramdas, Michael I. Jordan, and Martin J.
Wainwright | null | 1605.02077 | null | null |
ViZDoom: A Doom-based AI Research Platform for Visual Reinforcement
Learning | cs.LG cs.AI cs.CV | The recent advances in deep neural networks have led to effective
vision-based reinforcement learning methods that have been employed to obtain
human-level controllers in Atari 2600 games from pixel data. Atari 2600 games,
however, do not resemble real-world tasks since they involve non-realistic 2D
environments and the third-person perspective. Here, we propose a novel
test-bed platform for reinforcement learning research from raw visual
information which employs the first-person perspective in a semi-realistic 3D
world. The software, called ViZDoom, is based on the classical first-person
shooter video game, Doom. It allows developing bots that play the game using
the screen buffer. ViZDoom is lightweight, fast, and highly customizable via a
convenient mechanism of user scenarios. In the experimental part, we test the
environment by trying to learn bots for two scenarios: a basic move-and-shoot
task and a more complex maze-navigation problem. Using convolutional deep
neural networks with Q-learning and experience replay, for both scenarios, we
were able to train competent bots, which exhibit human-like behaviors. The
results confirm the utility of ViZDoom as an AI research platform and imply
that visual reinforcement learning in 3D realistic first-person perspective
environments is feasible.
| Micha{\l} Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek and
Wojciech Ja\'skowski | null | 1605.02097 | null | null |
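A minimal interaction loop with the ViZDoom environment described above is sketched here, assuming the standard Python bindings expose DoomGame with load_config, init, and make_action as in the public release; the scenario path and the one-hot action layout are illustrative.

```python
# Minimal random-agent loop on the basic move-and-shoot scenario.
import random
from vizdoom import DoomGame

game = DoomGame()
game.load_config("scenarios/basic.cfg")       # illustrative config path
game.init()

actions = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # e.g. left, right, shoot
game.new_episode()
while not game.is_episode_finished():
    state = game.get_state()                  # state.screen_buffer holds raw pixels
    reward = game.make_action(random.choice(actions))
game.close()
```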
Some Simulation Results for Emphatic Temporal-Difference Learning
Algorithms | cs.LG | This is a companion note to our recent study of the weak convergence
properties of constrained emphatic temporal-difference learning (ETD)
algorithms from a theoretic perspective. It supplements the latter analysis
with simulation results and illustrates the behavior of some of the ETD
algorithms using three example problems.
| Huizhen Yu | null | 1605.02099 | null | null |
Distributed Learning with Infinitely Many Hypotheses | math.OC cs.LG stat.ML | We consider a distributed learning setup where a network of agents
sequentially access realizations of a set of random variables with unknown
distributions. The network objective is to find a parametrized distribution
that best describes their joint observations in the sense of the
Kullback-Leibler divergence. Apart from recent efforts in the literature, we
analyze the case of countably many hypotheses and the case of a continuum of
hypotheses. We provide non-asymptotic bounds for the concentration rate of the
agents' beliefs around the correct hypothesis in terms of the number of agents,
the network parameters, and the learning abilities of the agents. Additionally,
we provide a novel motivation for a general set of distributed Non-Bayesian
update rules as instances of the distributed stochastic mirror descent
algorithm.
| Angelia Nedi\'c and Alex Olshevsky and C\'esar Uribe | null | 1605.02105 | null | null |
Adobe-MIT submission to the DSTC 4 Spoken Language Understanding pilot
task | cs.CL cs.AI cs.LG | The Dialog State Tracking Challenge 4 (DSTC 4) proposes several pilot tasks.
In this paper, we focus on the spoken language understanding pilot task, which
consists of tagging a given utterance with speech acts and semantic slots. We
compare different classifiers: the best system obtains 0.52 and 0.67 F1-scores
on the test set for speech act recognition for the tourist and the guide
respectively, and 0.52 F1-score for semantic tagging for both the guide and the
tourist.
| Franck Dernoncourt, Ji Young Lee, Trung H. Bui, and Hung H. Bui | null | 1605.02129 | null | null |
Robust Dialog State Tracking for Large Ontologies | cs.CL cs.AI cs.LG | The Dialog State Tracking Challenge 4 (DSTC 4) differentiates itself from the
previous three editions as follows: the number of slot-value pairs present in
the ontology is much larger, no spoken language understanding output is given,
and utterances are labeled at the subdialog level. This paper describes a novel
dialog state tracking method designed to work robustly under these conditions,
using elaborate string matching, coreference resolution tailored for dialogs
and a few other improvements. The method can correctly identify many values
that are not explicitly present in the utterance. On the final evaluation, our
method came in first among 7 competing teams and 24 entries. The F1-score
achieved by our method was 9 and 7 percentage points higher than that of the
runner-up for the utterance-level evaluation and for the subdialog-level
evaluation, respectively.
| Franck Dernoncourt, Ji Young Lee, Trung H. Bui, Hung H. Bui | null | 1605.02130 | null | null |
All Weather Perception: Joint Data Association, Tracking, and
Classification for Autonomous Ground Vehicles | cs.SY cs.CV cs.LG cs.RO | A novel probabilistic perception algorithm is presented as a real-time joint
solution to data association, object tracking, and object classification for an
autonomous ground vehicle in all-weather conditions. The presented algorithm
extends a Rao-Blackwellized Particle Filter originally built with a particle
filter for data association and a Kalman filter for multi-object tracking
(Miller et al. 2011a) to now also include multiple model tracking for
classification. Additionally a state-of-the-art vision detection algorithm that
includes heading information for autonomous ground vehicle (AGV) applications
was implemented. Cornell's AGV from the DARPA Urban Challenge was upgraded and
used to experimentally examine if and how state-of-the-art vision algorithms
can complement or replace lidar and radar sensors. Sensor and algorithm
performance in adverse weather and lighting conditions is tested. Experimental
evaluation demonstrates robust all-weather data association, tracking, and
classification where camera, lidar, and radar sensors complement each other
inside the joint probabilistic perception algorithm.
| Peter Radecki, Mark Campbell and Kevin Matzen | null | 1605.02196 | null | null |
Distributed stochastic optimization for deep learning (thesis) | cs.LG | We study the problem of how to distribute the training of large-scale deep
learning models in the parallel computing environment. We propose a new
distributed stochastic optimization method called Elastic Averaging SGD
(EASGD). We analyze the convergence rate of the EASGD method in the synchronous
scenario and compare its stability condition with the existing ADMM method in
the round-robin scheme. An asynchronous and momentum variant of the EASGD
method is applied to train deep convolutional neural networks for image
classification on the CIFAR and ImageNet datasets. Our approach accelerates the
training and furthermore achieves better test accuracy. It also requires a much
smaller amount of communication than other common baseline approaches such as
the DOWNPOUR method.
We then investigate the limit in speedup of the initial and the asymptotic
phase of the mini-batch SGD, the momentum SGD, and the EASGD methods. We find
that the spread of the input data distribution has a big impact on their
initial convergence rate and stability region. We also find a surprising
connection between the momentum SGD and the EASGD method with a negative moving
average rate. A non-convex case is also studied to understand when EASGD can
get trapped by a saddle point.
Finally, we scale up the EASGD method by using a tree structured network
topology. We show empirically its advantage and challenge. We also establish a
connection between the EASGD and the DOWNPOUR method with the classical Jacobi
and the Gauss-Seidel method, thus unifying a class of distributed stochastic
optimization methods.
| Sixin Zhang | null | 1605.02216 | null | null |
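As background for the thesis abstract above, a commonly quoted synchronous form of the elastic averaging update couples each worker's parameters $x^i$ to a center variable $\tilde{x}$; the notation and exact coupling below are assumptions for illustration rather than a quotation from the thesis, with step size $\eta$, elasticity $\rho$, local gradient $g^i_t$, and $p$ workers.

```latex
x^i_{t+1} \;=\; x^i_t - \eta\bigl(g^i_t + \rho\,(x^i_t - \tilde{x}_t)\bigr),
\qquad
\tilde{x}_{t+1} \;=\; \tilde{x}_t + \eta\rho \sum_{i=1}^{p} \bigl(x^i_t - \tilde{x}_t\bigr)
```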
Neural Autoregressive Distribution Estimation | cs.LG | We present Neural Autoregressive Distribution Estimation (NADE) models, which
are neural network architectures applied to the problem of unsupervised
distribution and density estimation. They leverage the probability product rule
and a weight sharing scheme inspired by restricted Boltzmann machines, to
yield an estimator that is both tractable and has good generalization
performance. We discuss how they achieve competitive performance in modeling
both binary and real-valued observations. We also present how deep NADE models
can be trained to be agnostic to the ordering of input dimensions used by the
autoregressive product rule decomposition. Finally, we also show how to exploit
the topological structure of pixels in images using a deep convolutional
architecture for NADE.
| Benigno Uria, Marc-Alexandre C\^ot\'e, Karol Gregor, Iain Murray, Hugo
Larochelle | null | 1605.02226 | null | null |
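For the NADE abstract above, the autoregressive factorization via the probability product rule and the weight-shared conditionals for binary observations can be written as below; this is the standard single-layer binary NADE parameterization, while the deep, real-valued and order-agnostic variants discussed in the paper differ.

```latex
p(\mathbf{x}) \;=\; \prod_{d=1}^{D} p(x_d \mid \mathbf{x}_{<d}),
\qquad
p(x_d = 1 \mid \mathbf{x}_{<d}) = \sigma\!\bigl(b_d + \mathbf{V}_{d,:}\,\mathbf{h}_d\bigr),
\qquad
\mathbf{h}_d = \sigma\!\bigl(\mathbf{c} + \mathbf{W}_{:,<d}\,\mathbf{x}_{<d}\bigr)
```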
Rate-Distortion Bounds on Bayes Risk in Supervised Learning | cs.IT cs.LG math.IT stat.ML | We present an information-theoretic framework for bounding the number of
labeled samples needed to train a classifier in a parametric Bayesian setting.
We derive bounds on the average $L_p$ distance between the learned classifier
and the true maximum a posteriori classifier, which are well-established
surrogates for the excess classification error due to imperfect learning. We
provide lower and upper bounds on the rate-distortion function, using $L_p$
loss as the distortion measure, of a maximum a posteriori classifier in terms of
the differential entropy of the posterior distribution and a quantity called
the interpolation dimension, which characterizes the complexity of the
parametric distribution family. In addition to expressing the information
content of a classifier in terms of lossy compression, the rate-distortion
function also expresses the minimum number of bits a learning machine needs to
extract from training data to learn a classifier to within a specified $L_p$
tolerance. We use results from universal source coding to express the
information content in the training data in terms of the Fisher information of
the parametric family and the number of training samples available. The result
is a framework for computing lower bounds on the Bayes $L_p$ risk. This
framework complements the well-known probably approximately correct (PAC)
framework, which provides minimax risk bounds involving the Vapnik-Chervonenkis
dimension or Rademacher complexity. Whereas the PAC framework provides upper
bounds on the risk for the worst-case data distribution, the proposed
rate-distortion framework lower bounds the risk averaged over the data
distribution. We evaluate the bounds for a variety of data models, including
categorical, multinomial, and Gaussian models. In each case the bounds are
provably tight orderwise, and in two cases we prove that the bounds are tight
up to multiplicative constants.
| Matthew Nokleby, Ahmad Beirami, and Robert Calderbank | null | 1605.02268 | null | null |
Predicting Performance on MOOC Assessments using Multi-Regression Models | cs.CY cs.LG | The past few years have seen the rapid growth of data mining approaches for
the analysis of data obtained from Massive Open Online Courses (MOOCs). The
objectives of this study are to develop approaches to predict the scores a
student may achieve on a given grade-related assessment based on information
considered as prior performance or prior activity in the course. We develop a
personalized linear multiple regression (PLMR) model to predict the grade for
a student, prior to attempting the assessment activity. The developed model is
real-time and tracks the participation of a student within a MOOC (via
click-stream server logs) and predicts the performance of a student on the next
assessment within the course offering. We perform a comprehensive set of
experiments on data obtained from three openEdX MOOCs via a Stanford University
initiative. Our experimental results show the promise of the proposed
approach in comparison to baseline approaches and also help in identification of
key features that are associated with the study habits and learning behaviors
of students.
| Zhiyun Ren, Huzefa Rangwala, Aditya Johri | null | 1605.02269 | null | null |
Active Learning for Community Detection in Stochastic Block Models | cs.LG cs.SI math.PR | The stochastic block model (SBM) is an important generative model for random
graphs in network science and machine learning, useful for benchmarking
community detection (or clustering) algorithms. The symmetric SBM generates a
graph with $2n$ nodes which cluster into two equally sized communities. Nodes
connect with probability $p$ within a community and $q$ across different
communities. We consider the case of $p=a\ln (n)/n$ and $q=b\ln (n)/n$. In this
case, it was recently shown that recovering the community membership (or label)
of every node with high probability (w.h.p.) using only the graph is possible
if and only if the Chernoff-Hellinger (CH) divergence
$D(a,b)=(\sqrt{a}-\sqrt{b})^2 \geq 1$. In this work, we study if, and by how
much, community detection below the clustering threshold (i.e. $D(a,b)<1$) is
possible by querying the labels of a limited number of chosen nodes (i.e.,
active learning). Our main result is to show that, under certain conditions,
sampling the labels of a vanishingly small fraction of nodes (a number
sub-linear in $n$) is sufficient for exact community detection even when
$D(a,b)<1$. Furthermore, we provide an efficient learning algorithm which
recovers the community memberships of all nodes w.h.p. as long as the number of
sampled points meets the sufficient condition. We also show that recovery is
not possible if the number of observed labels is less than $n^{1-D(a,b)}$. The
validity of our results is demonstrated through numerical experiments.
| Akshay Gadde, Eyal En Gad, Salman Avestimehr and Antonio Ortega | 10.1109/ISIT.2016.7541627 | 1605.02372 | null | null |
Structured Nonconvex and Nonsmooth Optimization: Algorithms and
Iteration Complexity Analysis | math.OC cs.LG stat.ML | Nonconvex and nonsmooth optimization problems are frequently encountered in
much of statistics, business, science and engineering, but they are not yet
widely recognized as a technology in the sense of scalability. A reason for
this relatively low degree of popularity is the lack of a well developed system
of theory and algorithms to support the applications, as is the case for its
convex counterpart. This paper aims to take one step in the direction of
disciplined nonconvex and nonsmooth optimization. In particular, we consider in
this paper some constrained nonconvex optimization models in block decision
variables, with or without coupled affine constraints. In the case of without
coupled constraints, we show a sublinear rate of convergence to an
$\epsilon$-stationary solution in the form of variational inequality for a
generalized conditional gradient method, where the convergence rate is shown to
be dependent on the H\"olderian continuity of the gradient of the smooth part
of the objective. For the model with coupled affine constraints, we introduce
corresponding $\epsilon$-stationarity conditions, and apply two proximal-type
variants of the ADMM to solve such a model, assuming the proximal ADMM updates
can be implemented for all the block variables except for the last block, for
which either a gradient step or a majorization-minimization step is
implemented. We show an iteration complexity bound of $O(1/\epsilon^2)$ to
reach an $\epsilon$-stationary solution for both algorithms. Moreover, we show
that the same iteration complexity of a proximal BCD method follows
immediately. Numerical results are provided to illustrate the efficacy of the
proposed algorithms for tensor robust PCA.
| Bo Jiang, Tianyi Lin, Shiqian Ma, Shuzhong Zhang | null | 1605.02408 | null | null |
Randomized Kaczmarz for Rank Aggregation from Pairwise Comparisons | cs.LG stat.ML | We revisit the problem of inferring the overall ranking among entities in the
framework of Bradley-Terry-Luce (BTL) model, based on available empirical data
on pairwise preferences. By a simple transformation, we can cast the problem as
that of solving a noisy linear system, for which a ready algorithm is available
in the form of the randomized Kaczmarz method. This scheme is provably
convergent, has excellent empirical performance, and is amenable to on-line,
distributed and asynchronous variants. Convergence, convergence rate, and error
analysis of the proposed algorithm are presented and several numerical
experiments are conducted whose results validate our theoretical findings.
| Vivek S. Borkar, Nikhil Karamchandani, Sharad Mirani | null | 1605.02470 | null | null |
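For the abstract above, the randomized Kaczmarz iteration applied to the resulting noisy linear system $Ax = b$ is sketched below, using the standard row sampling proportional to squared row norms; the transformation from BTL pairwise-comparison data to this linear system is the paper's contribution and is not reproduced here.

```python
# Randomized Kaczmarz sketch: each step projects the iterate onto the
# hyperplane defined by one randomly chosen row of A.
import numpy as np

def randomized_kaczmarz(A, b, n_iter=1000, seed=0):
    rng = np.random.default_rng(seed)
    row_norms = np.sum(A ** 2, axis=1)
    probs = row_norms / row_norms.sum()       # sample rows by squared norm
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        i = rng.choice(A.shape[0], p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```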
Clustering Time Series and the Surprising Robustness of HMMs | cs.IT cs.LG math.IT stat.ML | Suppose that we are given a time series where consecutive samples are
believed to come from a probabilistic source, that the source changes from time
to time and that the total number of sources is fixed. Our objective is to
estimate the distributions of the sources. A standard approach to this problem
is to model the data as a hidden Markov model (HMM). However, since the data
often lacks the Markov or the stationarity properties of an HMM, one can ask
whether this approach is still suitable or perhaps another approach is
required. In this paper we show that a maximum likelihood HMM estimator can be
used to approximate the source distributions in a much larger class of models
than HMMs. Specifically, we propose a natural and fairly general non-stationary
model of the data, where the only restriction is that the sources do not change
too often. Our main result shows that for this model, a maximum-likelihood HMM
estimator produces the correct second moment of the data, and the results can
be extended to higher moments.
| Mark Kozdoba and Shie Mannor | null | 1605.02531 | null | null |
Random Fourier Features for Operator-Valued Kernels | cs.LG stat.ML | Devoted to multi-task learning and structured output learning,
operator-valued kernels provide a flexible tool to build vector-valued
functions in the context of Reproducing Kernel Hilbert Spaces. To scale up
these methods, we extend the celebrated Random Fourier Feature methodology to
get an approximation of operator-valued kernels. We propose a general principle
for Operator-valued Random Fourier Feature construction relying on a
generalization of Bochner's theorem for translation-invariant operator-valued
Mercer kernels. We prove the uniform convergence of the kernel approximation
for bounded and unbounded operator random Fourier features using an appropriate
Bernstein matrix concentration inequality. An experimental proof-of-concept
shows the quality of the approximation and the efficiency of the corresponding
linear models on example datasets.
| Romain Brault, Florence d'Alch\'e-Buc, Markus Heinonen | null | 1605.02536 | null | null |
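The scalar-valued random Fourier feature construction of Rahimi and Recht, which the operator-valued extension above generalizes, is sketched below for the Gaussian kernel; the feature dimension and bandwidth are illustrative, and nothing here is specific to the operator-valued case.

```python
# Scalar random Fourier features: z(x) @ z(y) approximates exp(-gamma * ||x - y||^2).
import numpy as np

def rff_features(X, D=256, gamma=1.0, seed=0):
    """X: (n, d) data matrix; returns (n, D) feature map."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))   # spectral samples
    b = rng.uniform(0, 2 * np.pi, size=D)                   # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)
```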
Oracle Based Active Set Algorithm for Scalable Elastic Net Subspace
Clustering | cs.LG cs.CV stat.ML | State-of-the-art subspace clustering methods are based on expressing each
data point as a linear combination of other data points while regularizing the
matrix of coefficients with $\ell_1$, $\ell_2$ or nuclear norms. $\ell_1$
regularization is guaranteed to give a subspace-preserving affinity (i.e.,
there are no connections between points from different subspaces) under broad
theoretical conditions, but the clusters may not be connected. $\ell_2$ and
nuclear norm regularization often improve connectivity, but give a
subspace-preserving affinity only for independent subspaces. Mixed $\ell_1$,
$\ell_2$ and nuclear norm regularizations offer a balance between the
subspace-preserving and connectedness properties, but this comes at the cost of
increased computational complexity. This paper studies the geometry of the
elastic net regularizer (a mixture of the $\ell_1$ and $\ell_2$ norms) and uses
it to derive a provably correct and scalable active set method for finding the
optimal coefficients. Our geometric analysis also provides a theoretical
justification and a geometric interpretation for the balance between the
connectedness (due to $\ell_2$ regularization) and subspace-preserving (due to
$\ell_1$ regularization) properties for elastic net subspace clustering. Our
experiments show that the proposed active set method not only achieves
state-of-the-art clustering performance, but also efficiently handles
large-scale datasets.
| Chong You, Chun-Guang Li, Daniel P. Robinson, Rene Vidal | null | 1605.02633 | null | null |
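Up to notational choices, the per-point elastic net program studied in the abstract above takes the form below, where $\mathbf{x}_j$ is a data point, $X$ the data matrix, $\lambda \in [0,1]$ trades off the two norms, and $\gamma$ weights the fit; the exact formulation and constants used in the paper may differ.

```latex
\min_{\mathbf{c}_j \,:\, c_{jj}=0}\;\; \lambda\,\lVert \mathbf{c}_j\rVert_1 \;+\; \frac{1-\lambda}{2}\,\lVert \mathbf{c}_j\rVert_2^2 \;+\; \frac{\gamma}{2}\,\lVert \mathbf{x}_j - X\mathbf{c}_j\rVert_2^2
```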
Theano: A Python framework for fast computation of mathematical
expressions | cs.SC cs.LG cs.MS | Theano is a Python library that allows one to define, optimize, and evaluate
mathematical expressions involving multi-dimensional arrays efficiently. Since
its introduction, it has been one of the most used CPU and GPU mathematical
compilers - especially in the machine learning community - and has shown steady
performance improvements. Theano has been actively and continuously developed
since 2008; multiple frameworks have been built on top of it and it has been
used to produce many state-of-the-art machine learning models.
The present article is structured as follows. Section I provides an overview
of the Theano software and its community. Section II presents the principal
features of Theano and how to use them, and compares them with other similar
projects. Section III focuses on recently-introduced functionalities and
improvements. Section IV compares the performance of Theano against Torch7 and
TensorFlow on several machine learning models. Section V discusses current
limitations of Theano and potential ways of improving it.
| The Theano Development Team: Rami Al-Rfou, Guillaume Alain, Amjad
Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas,
Fr\'ed\'eric Bastien, Justin Bayer, Anatoly Belikov, Alexander Belopolsky,
Yoshua Bengio, Arnaud Bergeron, James Bergstra, Valentin Bisson, Josh
Bleecher Snyder, Nicolas Bouchard, Nicolas Boulanger-Lewandowski, Xavier
Bouthillier, Alexandre de Br\'ebisson, Olivier Breuleux, Pierre-Luc Carrier,
Kyunghyun Cho, Jan Chorowski, Paul Christiano, Tim Cooijmans, Marc-Alexandre
C\^ot\'e, Myriam C\^ot\'e, Aaron Courville, Yann N. Dauphin, Olivier
Delalleau, Julien Demouth, Guillaume Desjardins, Sander Dieleman, Laurent
Dinh, M\'elanie Ducoffe, Vincent Dumoulin, Samira Ebrahimi Kahou, Dumitru
Erhan, Ziye Fan, Orhan Firat, Mathieu Germain, Xavier Glorot, Ian Goodfellow,
Matt Graham, Caglar Gulcehre, Philippe Hamel, Iban Harlouchet, Jean-Philippe
Heng, Bal\'azs Hidasi, Sina Honari, Arjun Jain, S\'ebastien Jean, Kai Jia,
Mikhail Korobov, Vivek Kulkarni, Alex Lamb, Pascal Lamblin, Eric Larsen,
C\'esar Laurent, Sean Lee, Simon Lefrancois, Simon Lemieux, Nicholas
L\'eonard, Zhouhan Lin, Jesse A. Livezey, Cory Lorenz, Jeremiah Lowin, Qianli
Ma, Pierre-Antoine Manzagol, Olivier Mastropietro, Robert T. McGibbon, Roland
Memisevic, Bart van Merri\"enboer, Vincent Michalski, Mehdi Mirza, Alberto
Orlandi, Christopher Pal, Razvan Pascanu, Mohammad Pezeshki, Colin Raffel,
Daniel Renshaw, Matthew Rocklin, Adriana Romero, Markus Roth, Peter Sadowski,
John Salvatier, Fran\c{c}ois Savard, Jan Schl\"uter, John Schulman, Gabriel
Schwartz, Iulian Vlad Serban, Dmitriy Serdyuk, Samira Shabanian, \'Etienne
Simon, Sigurd Spieckermann, S. Ramana Subramanyam, Jakub Sygnowski,
J\'er\'emie Tanguay, Gijs van Tulder, Joseph Turian, Sebastian Urban, Pascal
Vincent, Francesco Visin, Harm de Vries, David Warde-Farley, Dustin J. Webb,
Matthew Willson, Kelvin Xu, Lijun Xue, Li Yao, Saizheng Zhang, Ying Zhang | null | 1605.02688 | null | null |
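A minimal usage example of the define-optimize-evaluate workflow described in the article above follows; the particular expression and variable names are illustrative.

```python
# Declare a symbolic expression, take its gradient, and compile both into a
# callable function (executed on CPU or GPU depending on configuration).
import theano
import theano.tensor as T

x = T.dvector("x")
y = T.sum(x ** 2)            # symbolic expression
gy = T.grad(y, x)            # symbolic gradient
f = theano.function([x], [y, gy])

print(f([1.0, 2.0, 3.0]))    # -> [array(14.0), array([2., 4., 6.])]
```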
A Theoretical Analysis of Deep Neural Networks for Texture
Classification | cs.CV cs.LG stat.ML | We investigate the use of Deep Neural Networks for the classification of
image datasets where texture features are important for generating
class-conditional discriminative representations. To this end, we first derive
the size of the feature space for some standard textural features extracted
from the input dataset and then use the theory of Vapnik-Chervonenkis dimension
to show that hand-crafted feature extraction creates low-dimensional
representations which help in reducing the overall excess error rate. As a
corollary to this analysis, we derive for the first time upper bounds on the VC
dimension of Convolutional Neural Network as well as Dropout and Dropconnect
networks and the relation between excess error rate of Dropout and Dropconnect
networks. The concept of intrinsic dimension is used to validate the intuition
that texture-based datasets are inherently higher dimensional as compared to
handwritten digits or other object recognition datasets and hence more
difficult to be shattered by neural networks. We then derive the mean distance
from the centroid to the nearest and farthest sampling points in an
n-dimensional manifold and show that the Relative Contrast of the sample data
vanishes as dimensionality of the underlying vector space tends to infinity.
| Saikat Basu, Manohar Karki, Robert DiBiano, Supratik Mukhopadhyay,
Sangram Ganguly, Ramakrishna Nemani and Shreekant Gayaka | null | 1605.02699 | null | null |
Nonconvex Sparse Learning via Stochastic Optimization with Progressive
Variance Reduction | cs.LG math.OC stat.ML | We propose a stochastic variance reduced optimization algorithm for solving
sparse learning problems with cardinality constraints. Sufficient conditions
are provided, under which the proposed algorithm enjoys strong linear
convergence guarantees and optimal estimation accuracy in high dimensions. We
further extend the proposed algorithm to an asynchronous parallel variant with
a near linear speedup. Numerical experiments demonstrate the efficiency of our
algorithm in terms of both parameter estimation and computational performance.
| Xingguo Li, Raman Arora, Han Liu, Jarvis Haupt, Tuo Zhao | null | 1605.02711 | null | null |
LightNet: A Versatile, Standalone Matlab-based Environment for Deep
Learning | cs.LG cs.CV cs.NE | LightNet is a lightweight, versatile and purely Matlab-based deep learning
framework. The idea underlying its design is to provide an easy-to-understand,
easy-to-use and efficient computational platform for deep learning research.
The implemented framework supports major deep learning architectures such as
Multilayer Perceptron Networks (MLP), Convolutional Neural Networks (CNN) and
Recurrent Neural Networks (RNN). The framework also supports both CPU and GPU
computation, and the switch between them is straightforward. Different
applications in computer vision, natural language processing and robotics are
demonstrated as experiments.
| Chengxi Ye, Chen Zhao, Yezhou Yang, Cornelia Fermuller, Yiannis
Aloimonos | null | 1605.02766 | null | null |
Transport Analysis of Infinitely Deep Neural Network | cs.LG stat.ML | We investigated the feature map inside deep neural networks (DNNs) by
tracking the transport map. We are interested in the role of depth (why do DNNs
perform better than shallow models?) and the interpretation of DNNs (what do
intermediate layers do?). Despite the rapid development in their application,
DNNs remain analytically unexplained because the hidden layers are nested and
the parameters are not faithful. Inspired by the integral representation of
shallow NNs, which is the continuum limit of the width, or the hidden unit
number, we developed the flow representation and transport analysis of DNNs.
The flow representation is the continuum limit of the depth or the hidden layer
number, and it is specified by an ordinary differential equation with a vector
field. We interpret an ordinary DNN as a transport map or an Euler broken line
approximation of the flow. Technically speaking, a dynamical system is a
natural model for the nested feature maps. In addition, it opens a new way to
the coordinate-free treatment of DNNs by avoiding the redundant parametrization
of DNNs. Following Wasserstein geometry, we analyze a flow in three aspects:
dynamical system, continuity equation, and Wasserstein gradient flow. A key
finding is that we specified a series of transport maps of the denoising
autoencoder (DAE). Starting from the shallow DAE, this paper develops three
topics: the transport map of the deep DAE, the equivalence between the stacked
DAE and the composition of DAEs, and the development of the double continuum
limit or the integral representation of the flow representation. As partial
answers to the research questions, we found that deeper DAEs converge faster
and the extracted features are better; in addition, a deep Gaussian DAE
transports mass to decrease the Shannon entropy of the data distribution.
| Sho Sonoda, Noboru Murata | null | 1605.02832 | null | null |
Performance Analysis of the Gradient Comparator LMS Algorithm | cs.IT cs.LG math.IT | The sparsity-aware zero attractor least mean square (ZA-LMS) algorithm
manifests much lower misadjustment in a strongly sparse environment than its
sparsity-agnostic counterpart, the least mean square (LMS), but is shown to
perform worse than the LMS when sparsity of the impulse response decreases. The
reweighted variant of the ZA-LMS, namely RZA-LMS shows robustness against this
variation in sparsity, but at the price of increased computational complexity.
The other variants such as the l0-LMS and the improved proportionate
normalized LMS (IPNLMS), though they perform satisfactorily, are also
computationally intensive. The gradient comparator LMS (GC-LMS) is a practical
solution of this trade-off when hardware constraint is to be considered. In
this paper, we analyse the mean and the mean square convergence performance of
the GC-LMS algorithm in detail. The analyses satisfactorily match the
simulation results.
| Bijit Kumar Das and Mrityunjoy Chakraborty | null | 1605.02877 | null | null |
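For reference alongside the two LMS-variant abstracts here, a single zero-attractor LMS (ZA-LMS) update is sketched below; the comparator logic of GC-LMS itself is not reproduced, and the step size and attractor strength are illustrative values.

```python
# One ZA-LMS update: the standard LMS correction plus a sparsity-promoting
# attraction of every filter tap toward zero.
import numpy as np

def za_lms_update(w, x, d, mu=0.01, rho=1e-4):
    """w: filter taps, x: input regressor, d: desired sample."""
    e = d - w @ x                          # a priori error
    return w + mu * e * x - rho * np.sign(w), e
```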
Adaptive Combination of l0 LMS Adaptive Filters for Sparse System
Identification in Fluctuating Noise Power | cs.IT cs.LG math.IT | Recently, the l0-least mean square (l0-LMS) algorithm has been proposed to
identify sparse linear systems by employing a sparsity-promoting continuous
function as an approximation of the l0 pseudonorm penalty. However, the performance
of this algorithm is sensitive to the appropriate choice of a parameter
responsible for the zero-attracting intensity. The optimum choice for this
parameter depends on the signal-to-noise ratio (SNR) prevailing in the system.
Thus, it becomes difficult to fix a suitable value for this parameter,
particularly in a situation where SNR fluctuates over time. In this work, we
propose several adaptive combinations of differently parameterized l0-LMS to
get an overall satisfactory performance independent of the SNR, and discuss
some issues relevant to these combination structures. We also demonstrate an
efficient partial update scheme which not only reduces the number of
computations per iteration, but also achieves some interesting performance gain
compared with the full update case. Then, we propose a new recursive least
squares (RLS)-type rule to update the combining parameter more efficiently.
Finally, we extend the combination of two filters to a combination of M
adaptive filters, which manifests further improvement for M > 2.
| Bijit Kumar Das and Mrityunjoy Chakraborty | null | 1605.02878 | null | null |
Learning theory estimates with observations from general stationary
stochastic processes | stat.ML cs.LG | This paper investigates the supervised learning problem with observations
drawn from certain general stationary stochastic processes. Here by
\emph{general}, we mean that many stationary stochastic processes can be
included. We show that when the stochastic processes satisfy a generalized
Bernstein-type inequality, a unified treatment on analyzing the learning
schemes with various mixing processes can be conducted and a sharp oracle
inequality for generic regularized empirical risk minimization schemes can be
established. The obtained oracle inequality is then applied to derive
convergence rates for several learning schemes such as empirical risk
minimization (ERM), least squares support vector machines (LS-SVMs) using given
generic kernels, and SVMs using Gaussian kernels for both least squares and
quantile regression. It turns out that for i.i.d.~processes, our learning rates
for ERM recover the optimal rates. On the other hand, for non-i.i.d.~processes
including geometrically $\alpha$-mixing Markov processes, geometrically
$\alpha$-mixing processes with restricted decay, $\phi$-mixing processes, and
(time-reversed) geometrically $\mathcal{C}$-mixing processes, our learning
rates for SVMs with Gaussian kernels match, up to some arbitrarily small extra
term in the exponent, the optimal rates. For the remaining cases, our rates are
at least close to the optimal rates. As a by-product, the assumed generalized
Bernstein-type inequality also provides an interpretation of the so-called
"effective number of observations" for various mixing processes.
| Hanyuan Hang, Yunlong Feng, Ingo Steinwart, and Johan A.K. Suykens | null | 1605.02887 | null | null |
Web Spam Detection Using Multiple Kernels in Twin Support Vector Machine | cs.IR cs.LG | Search engines are the most important tools for web data acquisition. Web
pages are crawled and indexed by search engines, and users typically locate
useful web pages by querying a search engine. One of the challenges in search
engine administration is spam pages, which waste search engine resources. These
pages try to appear on the first page of results by deceiving search engine
ranking algorithms. There are many approaches to web spam page detection, such
as measuring HTML code style similarity, analyzing the linguistic patterns of
pages, and applying machine learning algorithms to page content features. One
of the well-known algorithms used in the machine learning approach is the
Support Vector Machine (SVM) classifier. Recently, the basic structure of the
SVM has been modified by new extensions to increase robustness and
classification accuracy. In this paper, we improve the accuracy of web spam
detection by using two nonlinear kernels in the Twin SVM (TSVM), an improved
extension of the SVM. The classifier's ability to separate the data is
increased by using a separate kernel for each class of data. The effectiveness
of the proposed method is evaluated on two widely used public spam datasets,
UK-2006 and UK-2007. Results show the effectiveness of the proposed kernelized
TSVM in web spam page detection.
| Seyed Hamid Reza Mohammadi, Mohammad Ali Zare Chahooki | null | 1605.02917 | null | null |
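The kernelized TSVM with per-class kernels used in the paper is not reproduced here; the sketch below is a deliberately simplified linear, least-squares twin-hyperplane classifier that only conveys the core twin-SVM idea of fitting one hyperplane per class and assigning test points to the nearer one. The least-squares relaxation, the regularization constant, and the ridge term are assumptions for illustration.

```python
import numpy as np

def fit_twin_planes(A, B, c=1.0):
    """Least-squares twin hyperplanes: plane 1 hugs class A, plane 2 hugs class B.

    A, B: (n_A, d) and (n_B, d) feature matrices of the two classes.
    Returns (w1, b1), (w2, b2).
    """
    E = np.hstack([A, np.ones((A.shape[0], 1))])   # [A, 1]
    F = np.hstack([B, np.ones((B.shape[0], 1))])   # [B, 1]
    d1 = E.shape[1]
    # plane 1: min ||E u||^2 + c*||F u + 1||^2  ->  (E'E + c F'F) u = -c F' 1
    u1 = np.linalg.solve(E.T @ E + c * F.T @ F + 1e-8 * np.eye(d1),
                         -c * F.T @ np.ones(F.shape[0]))
    # plane 2: min ||F u||^2 + c*||E u - 1||^2  ->  (F'F + c E'E) u = c E' 1
    u2 = np.linalg.solve(F.T @ F + c * E.T @ E + 1e-8 * np.eye(d1),
                         c * E.T @ np.ones(E.shape[0]))
    return (u1[:-1], u1[-1]), (u2[:-1], u2[-1])

def predict(X, plane1, plane2):
    """Assign each row of X to the class whose hyperplane is nearer."""
    (w1, b1), (w2, b2) = plane1, plane2
    d1 = np.abs(X @ w1 + b1) / np.linalg.norm(w1)
    d2 = np.abs(X @ w2 + b2) / np.linalg.norm(w2)
    return np.where(d1 <= d2, 1, -1)   # 1 = class A (e.g. ham), -1 = class B (spam)
```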
An efficient K-means algorithm for Massive Data | stat.ML cs.LG | Due to the progressive growth of the amount of data available in a wide
variety of scientific fields, it has become more difficult to manipulate and
analyze such information. Even though datasets have grown in size, the K-means
algorithm remains one of the most popular clustering methods, in spite of
its dependency on the initial settings and high computational cost, especially
in terms of distance computations. In this work, we propose an efficient
approximation to the K-means problem intended for massive data. Our approach
recursively partitions the entire dataset into a small number of subsets,
each of which is characterized by its representative (center of mass) and
weight (cardinality); afterwards, a weighted version of the K-means algorithm is
applied to this local representation, which can drastically reduce the number
of distances computed. In addition to some theoretical properties, experimental
results indicate that our method outperforms well-known approaches, such as
K-means++ and minibatch K-means, in terms of the trade-off between the number
of distance computations and the quality of the approximation.
| Marco Cap\'o, Aritz P\'erez, Jos\'e Antonio Lozano | null | 1605.02989 | null | null |
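A minimal sketch of the weighted Lloyd iteration that such a coarse representation can be fed into is given below. The recursive partitioning step of the paper is only stubbed out with a random grouping; the data generator, number of groups, and iteration count are assumptions.

```python
import numpy as np

def weighted_kmeans(R, w, k, n_iter=50, seed=0):
    """Lloyd's algorithm on weighted representatives.

    R: (m, d) representatives (centers of mass of the subsets)
    w: (m,) weights (subset cardinalities)
    """
    rng = np.random.default_rng(seed)
    C = R[rng.choice(len(R), k, replace=False)]          # initial centers
    for _ in range(n_iter):
        # assign each representative to its nearest center
        d2 = ((R[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        # weighted center update
        for j in range(k):
            mask = labels == j
            if mask.any():
                C[j] = np.average(R[mask], axis=0, weights=w[mask])
    return C, labels

# Toy coarsening stub: random grouping into m subsets, each summarized by its
# center of mass and cardinality (the paper uses a recursive partition instead).
rng = np.random.default_rng(1)
X = rng.standard_normal((50_000, 2)) + rng.integers(0, 5, (50_000, 1)) * 3
groups = rng.integers(0, 500, len(X))
R = np.array([X[groups == g].mean(axis=0) for g in range(500)])
w = np.array([(groups == g).sum() for g in range(500)])
centers, _ = weighted_kmeans(R, w, k=5)
```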
MUST-CNN: A Multilayer Shift-and-Stitch Deep Convolutional Architecture
for Sequence-based Protein Structure Prediction | cs.LG | Predicting the properties of a protein, such as solvent accessibility and
secondary structure, from its primary amino acid sequence is an important task in
bioinformatics. Recently, a few deep learning models have surpassed the
traditional window-based multilayer perceptron. Taking inspiration from the
image classification domain, we propose a deep convolutional neural network
architecture, MUST-CNN, to predict protein properties. This architecture uses a
novel multilayer shift-and-stitch (MUST) technique to generate fully dense
per-position predictions on protein sequences. Our model is significantly
simpler than the state-of-the-art, yet achieves better results. By combining
MUST and the efficient convolution operation, we can consider far more
parameters while retaining very fast prediction speeds. We beat the
state-of-the-art performance on two large protein property prediction datasets.
| Zeming Lin, Jack Lanchantin, Yanjun Qi | null | 1605.03004 | null | null |
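The shift-and-stitch trick itself can be illustrated on a toy 1-D problem: run a strided model on every possible input shift and interleave the outputs to obtain dense per-position predictions. The "model" below is just a fixed strided linear filter standing in for a convolutional network; the signal length, kernel, and stride are assumptions.

```python
import numpy as np

def strided_model(x, kernel, stride):
    """Toy stand-in for a conv net: valid 1-D filtering followed by subsampling."""
    full = np.convolve(x, kernel[::-1], mode="valid")   # response at every position
    return full[::stride]                               # the net only emits every stride-th one

def shift_and_stitch(x, kernel, stride):
    """Dense per-position outputs by running the strided model on all shifts."""
    n_out = len(x) - len(kernel) + 1
    dense = np.empty(n_out)
    for shift in range(stride):
        out = strided_model(x[shift:], kernel, stride)
        dense[shift::stride][:len(out)] = out            # interleave (stitch)
    return dense

rng = np.random.default_rng(0)
x = rng.standard_normal(50)
kernel = np.array([0.25, 0.5, 0.25])
stride = 2

dense_reference = np.convolve(x, kernel[::-1], mode="valid")  # fully dense reference
assert np.allclose(shift_and_stitch(x, kernel, stride), dense_reference)
```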
Semi-Supervised Representation Learning based on Probabilistic Labeling | cs.LG | In this paper, we present a new algorithm for semi-supervised representation
learning. In this algorithm, we first find a vector representation for the
labels of the data points based on their local positions in the space. Then, we
map the data to a lower-dimensional space using a linear transformation such that
the dependency between the transformed data and the assigned labels is
maximized. In fact, we try to find a mapping that is as discriminative as
possible. The approach uses the Hilbert-Schmidt Independence Criterion (HSIC) as
the dependence measure. We also present a kernelized version of the algorithm,
which allows non-linear transformations and provides more flexibility in
finding the appropriate mapping. Using unlabeled data to learn a new
representation is not always beneficial, and no algorithm can deterministically
guarantee a performance improvement from exploiting unlabeled data.
Therefore, we also propose a bound on the performance of the
algorithm, which can be used to determine the effectiveness of using the
unlabeled data in the algorithm. We demonstrate the ability of the algorithm in
finding the transformation using both toy examples and real-world datasets.
| Ershad Banijamali and Ali Ghodsi | null | 1605.03072 | null | null |
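As a concrete anchor, a minimal sketch of the empirical HSIC estimator and of the resulting linear mapping (top eigenvectors of X'HLHX under an orthogonality constraint) is shown below. The paper's probabilistic label construction is not reproduced; the label kernel here is simply the linear kernel of a given label/target matrix, which is an assumption.

```python
import numpy as np

def hsic(K, L):
    """Biased empirical HSIC estimate between two kernel matrices."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def hsic_linear_map(X, Y, dim):
    """Linear map W maximizing HSIC(XW, Y) under W'W = I.

    X: (n, d) data, Y: (n, c) label/target representation.
    The maximizer is given by the top eigenvectors of X' H L H X with L = Y Y'.
    """
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    L = Y @ Y.T                                    # linear label kernel
    M = X.T @ H @ L @ H @ X
    eigvals, eigvecs = np.linalg.eigh(M)
    W = eigvecs[:, np.argsort(eigvals)[::-1][:dim]]
    return W                                       # project data with X @ W
```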
Active Uncertainty Calibration in Bayesian ODE Solvers | cs.NA cs.LG math.NA stat.ML | There is resurging interest, in statistics and machine learning, in solvers
for ordinary differential equations (ODEs) that return probability measures
instead of point estimates. Recently, Conrad et al. introduced a sampling-based
class of methods that are 'well-calibrated' in a specific sense. But the
computational cost of these methods is significantly above that of classic
methods. On the other hand, Schober et al. pointed out a precise connection
between classic Runge-Kutta ODE solvers and Gaussian filters, which gives only
a rough probabilistic calibration, but at negligible cost overhead. By
formulating the solution of ODEs as approximate inference in linear Gaussian
SDEs, we investigate a range of probabilistic ODE solvers that bridge the
trade-off between computational cost and probabilistic calibration, and
identify the inaccurate gradient measurement as the crucial source of
uncertainty. We propose the novel filtering-based method Bayesian Quadrature
filtering (BQF) which uses Bayesian quadrature to actively learn the
imprecision in the gradient measurement by collecting multiple gradient
evaluations.
| Hans Kersting, Philipp Hennig | null | 1605.03364 | null | null |
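The filtering-based BQF method is not reproduced here; for orientation, a minimal sketch of the sampling-based class of solvers mentioned in the abstract (an explicit Euler step perturbed by Gaussian noise, repeated over many draws to obtain an empirical measure over trajectories) is given below. The h**1.5 noise scaling follows the usual order-matched calibration for a first-order method, and the example ODE and noise magnitude are assumptions.

```python
import numpy as np

def perturbed_euler(f, y0, t0, t1, h, sigma, n_samples=100, seed=0):
    """Sampling-based probabilistic ODE solver: noisy explicit Euler trajectories."""
    rng = np.random.default_rng(seed)
    ts = np.arange(t0, t1 + h, h)
    ys = np.empty((n_samples, len(ts)))
    for s in range(n_samples):
        y = y0
        ys[s, 0] = y
        for i in range(1, len(ts)):
            # Euler step plus a perturbation whose scale shrinks with the step size
            y = y + h * f(ts[i - 1], y) + sigma * h ** 1.5 * rng.standard_normal()
            ys[s, i] = y
    return ts, ys    # ys is an empirical measure over solutions

# Example: logistic growth dy/dt = y (1 - y)
ts, ys = perturbed_euler(lambda t, y: y * (1 - y), y0=0.1, t0=0.0, t1=5.0,
                         h=0.1, sigma=1.0)
mean, std = ys.mean(axis=0), ys.std(axis=0)   # pointwise uncertainty band
```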
Unbiased split variable selection for random survival forests using
maximally selected rank statistics | stat.ML cs.LG | The most popular approach for analyzing survival data is the Cox regression
model. The Cox model may, however, be misspecified, and its proportionality
assumption may not always be fulfilled. An alternative approach for survival
prediction is random forests for survival outcomes. The standard split
criterion for random survival forests is the log-rank test statistic, which
favors splitting variables with many possible split points. Conditional
inference forests avoid this split variable selection bias. However, linear
rank statistics are utilized by default in conditional inference forests to
select the optimal splitting variable, which cannot detect non-linear effects
in the independent variables. An alternative is to use maximally selected rank
statistics for the split point selection. As in conditional inference forests,
splitting variables are compared on the p-value scale. However, instead of the
conditional Monte-Carlo approach used in conditional inference forests, p-value
approximations are employed. We describe several p-value approximations and the
implementation of the proposed random forest approach. A simulation study
demonstrates that unbiased split variable selection is possible. However, there
is a trade-off between unbiased split variable selection and runtime. In
benchmark studies of prediction performance on simulated and real datasets the
new method performs better than random survival forests if informative
dichotomous variables are combined with uninformative variables with more
categories and better than conditional inference forests if non-linear
covariate effects are included. In a runtime comparison the method proves to be
computationally faster than both alternatives, if a simple p-value
approximation is used.
| Marvin N. Wright, Theresa Dankowski and Andreas Ziegler | 10.1002/sim.7212 | 1605.03391 | null | null |
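A minimal sketch of a maximally selected (linear) rank statistic for one candidate split variable is shown below; in the survival setting the scores would be log-rank scores, and the p-value of the maximum would come from one of the approximations discussed in the paper. The Bonferroni-style correction here is a crude placeholder, not one of those approximations.

```python
import numpy as np
from scipy import stats

def maximally_selected_rank_statistic(x, scores):
    """Standardized linear rank statistic maximized over all cutpoints of x.

    x: candidate split variable; scores: rank-based scores of the outcome
    (log-rank scores in the survival setting). Returns (best cutpoint,
    max |Z|, crude Bonferroni-adjusted p-value).
    """
    n = len(x)
    a_bar = scores.mean()
    ss = ((scores - a_bar) ** 2).sum()
    cutpoints = np.unique(x)[:-1]            # splits leaving both sides non-empty
    best_z, best_cut = 0.0, None
    for mu in cutpoints:
        left = x <= mu
        n1 = left.sum()
        T = scores[left].sum()
        mean_T = n1 * a_bar
        var_T = n1 * (n - n1) / (n * (n - 1)) * ss   # permutation variance
        z = abs(T - mean_T) / np.sqrt(var_T)
        if z > best_z:
            best_z, best_cut = z, mu
    p_single = 2 * stats.norm.sf(best_z)
    p_adjusted = min(1.0, p_single * len(cutpoints))  # Bonferroni placeholder
    return best_cut, best_z, p_adjusted
```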
A constrained L1 minimization approach for estimating multiple Sparse
Gaussian or Nonparanormal Graphical Models | cs.LG cs.AI stat.ML | Identifying context-specific entity networks from aggregated data is an
important task, arising often in bioinformatics and neuroimaging.
Computationally, this task can be formulated as jointly estimating multiple
different, but related, sparse Undirected Graphical Models (UGM) from
aggregated samples across several contexts. Previous joint-UGM studies have
mostly focused on sparse Gaussian Graphical Models (sGGMs) and cannot identify
context-specific edge patterns directly. We, therefore, propose a novel
approach, SIMULE (detecting Shared and Individual parts of MULtiple graphs
Explicitly) to learn multi-UGM via a constrained L1 minimization. SIMULE
automatically infers both specific edge patterns that are unique to each
context and shared interactions preserved among all the contexts. Through the
L1 constrained formulation, this problem is cast as multiple independent
subtasks of linear programming that can be solved efficiently in parallel. In
addition to Gaussian data, SIMULE can also handle multivariate Nonparanormal
data, which greatly relaxes the normality assumption that many real-world
applications do not satisfy. We provide a novel theoretical proof showing that
SIMULE achieves a consistent result at the rate O(log(Kp)/n_{tot}). On multiple
synthetic datasets and two biomedical datasets, SIMULE shows significant
improvement over state-of-the-art multi-sGGM and single-UGM baselines.
| Beilun Wang, Ritambhara Singh and Yanjun Qi | null | 1605.03468 | null | null |
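To make the "multiple independent subtasks of linear programming" concrete, the sketch below solves one CLIME-style column subproblem (min ||beta||_1 s.t. ||S beta - e_j||_inf <= lambda) with an off-the-shelf LP solver. SIMULE additionally couples a shared and an individual component across the K contexts, which is not reproduced here; the covariance S and the value of lambda are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def l1_column_subproblem(S, j, lam):
    """Solve min ||beta||_1  s.t.  ||S beta - e_j||_inf <= lam  as an LP.

    S: (p, p) sample covariance of one context, j: column index.
    Split beta = u - v with u, v >= 0, so the objective is sum(u) + sum(v).
    """
    p = S.shape[0]
    e_j = np.zeros(p)
    e_j[j] = 1.0
    c = np.ones(2 * p)
    #  S u - S v <= lam + e_j   and   -S u + S v <= lam - e_j
    A_ub = np.block([[S, -S], [-S, S]])
    b_ub = np.concatenate([lam + e_j, lam - e_j])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * p), method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v        # j-th column of the estimated precision matrix
```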
Tweet2Vec: Character-Based Distributed Representations for Social Media | cs.LG cs.CL | Text from social media provides a set of challenges that can cause
traditional NLP approaches to fail. Informal language, spelling errors,
abbreviations, and special characters are all commonplace in these posts,
leading to a prohibitively large vocabulary size for word-level approaches. We
propose a character composition model, tweet2vec, which finds vector-space
representations of whole tweets by learning complex, non-local dependencies in
character sequences. The proposed model outperforms a word-level baseline at
predicting user-annotated hashtags associated with the posts, doing
significantly better when the input contains many out-of-vocabulary words or
unusual character sequences. Our tweet2vec encoder is publicly available.
| Bhuwan Dhingra, Zhong Zhou, Dylan Fitzpatrick, Michael Muehl, William
W. Cohen | null | 1605.03481 | null | null |
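A minimal numpy sketch of a character-level encoder in this spirit is shown below: characters are embedded and run through a single-direction GRU, and the final hidden state serves as the tweet embedding. The paper's model is a bidirectional GRU trained on hashtag prediction; the vocabulary, dimensions, and random, untrained parameters here are assumptions for illustration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class CharGRUEncoder:
    """Untrained character-level GRU encoder (forward pass only)."""

    def __init__(self, vocab, emb_dim=16, hid_dim=32, seed=0):
        rng = np.random.default_rng(seed)
        self.char2id = {c: i for i, c in enumerate(vocab)}
        self.E = rng.normal(0, 0.1, (len(vocab), emb_dim))      # char embeddings
        self.Wz, self.Wr, self.Wh = (rng.normal(0, 0.1, (hid_dim, emb_dim))
                                     for _ in range(3))
        self.Uz, self.Ur, self.Uh = (rng.normal(0, 0.1, (hid_dim, hid_dim))
                                     for _ in range(3))
        self.hid_dim = hid_dim

    def encode(self, tweet):
        h = np.zeros(self.hid_dim)
        for ch in tweet:
            x = self.E[self.char2id.get(ch, 0)]                 # OOV chars map to id 0
            z = sigmoid(self.Wz @ x + self.Uz @ h)              # update gate
            r = sigmoid(self.Wr @ x + self.Ur @ h)              # reset gate
            h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h))  # candidate state
            h = (1 - z) * h + z * h_tilde
        return h                                                # tweet embedding

enc = CharGRUEncoder(vocab=list("abcdefghijklmnopqrstuvwxyz #@"))
vec = enc.encode("omg new #phone so gr8")
```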
On the Iteration Complexity of Oblivious First-Order Optimization
Algorithms | math.OC cs.LG | We consider a broad class of first-order optimization algorithms which are
\emph{oblivious}, in the sense that their step sizes are scheduled regardless
of the function under consideration, except for limited side-information such
as smoothness or strong convexity parameters. With the knowledge of these two
parameters, we show that any such algorithm attains an iteration complexity
lower bound of $\Omega(\sqrt{L/\epsilon})$ for $L$-smooth convex functions, and
$\tilde{\Omega}(\sqrt{L/\mu}\ln(1/\epsilon))$ for $L$-smooth $\mu$-strongly
convex functions. These lower bounds are stronger than those in the traditional
oracle model, as they hold independently of the dimension. To attain these, we
abandon the oracle model in favor of a structure-based approach which builds
upon a framework recently proposed in (Arjevani et al., 2015). We further show
that without knowing the strong convexity parameter, it is impossible to attain
an iteration complexity better than
$\tilde{\Omega}\left((L/\mu)\ln(1/\epsilon)\right)$. This result is then used
to formalize an observation regarding $L$-smooth convex functions, namely, that
the iteration complexity of algorithms employing time-invariant step sizes must
be at least $\Omega(L/\epsilon)$.
| Yossi Arjevani and Ohad Shamir | null | 1605.03529 | null | null |
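For illustration, the sketch below shows the kind of oblivious scheme the result covers: gradient descent with a time-invariant step size 1/L on an L-smooth convex quadratic. The step-size schedule is fixed in advance and uses only the smoothness side-information, never quantities observed during the run; the specific quadratic is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# L-smooth convex quadratic f(x) = 0.5 * x' A x - b' x with A positive semidefinite.
d = 50
M = rng.standard_normal((d, d))
A = M.T @ M / d
b = rng.standard_normal(d)
L = np.linalg.eigvalsh(A).max()                # smoothness constant
x_star = np.linalg.solve(A + 1e-12 * np.eye(d), b)
f = lambda x: 0.5 * x @ A @ x - b @ x

# Oblivious scheme: the step size 1/L is scheduled in advance, independent of
# anything observed along the trajectory (only the side-information L is used).
x = np.zeros(d)
for t in range(2000):
    x -= (1.0 / L) * (A @ x - b)               # gradient step with fixed step size

print("suboptimality:", f(x) - f(x_star))
```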
EEF: Exponentially Embedded Families with Class-Specific Features for
Classification | stat.ML cs.LG | In this letter, we present a novel exponentially embedded families (EEF)
based classification method, in which the probability density function (PDF) on
raw data is estimated from the PDF on features. With the PDF construction, we
show that class-specific features can be used in the proposed classification
method, instead of a common feature subset for all classes as used in
conventional approaches. We apply the proposed EEF classifier for text
categorization as a case study and derive an optimal Bayesian classification
rule with class-specific feature selection based on the Information Gain (IG)
score. The promising performance on real-life data sets demonstrates the
effectiveness of the proposed approach and indicates its potential for wide
application.
| Bo Tang, Steven Kay, Haibo He, and Paul M. Baggenstoss | 10.1109/LSP.2016.2574327 | 1605.03631 | null | null |
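A minimal sketch of the Information Gain score used for class-specific term selection is given below, computed from binary term-presence indicators in a one-vs-rest fashion; the downstream EEF density construction and classification rule are not reproduced, and the one-vs-rest framing is an assumption for illustration.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def information_gain(term_present, labels):
    """IG of a binary term-presence feature with respect to the class labels."""
    _, counts = np.unique(labels, return_counts=True)
    ig = entropy(counts / counts.sum())          # H(C)
    presence = term_present.astype(bool)
    for value in (True, False):
        mask = presence == value
        if mask.any():
            _, sub = np.unique(labels[mask], return_counts=True)
            ig -= mask.mean() * entropy(sub / sub.sum())   # - P(t) H(C|t)
    return ig

def top_terms_per_class(X_binary, labels, k=100):
    """Class-specific feature selection: rank terms by IG on the one-vs-rest task."""
    selected = {}
    for c in np.unique(labels):
        y = labels == c
        scores = np.array([information_gain(X_binary[:, j], y)
                           for j in range(X_binary.shape[1])])
        selected[c] = np.argsort(scores)[::-1][:k]
    return selected
```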