title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Context-Based Prediction of App Usage | cs.LG | There are around a hundred installed apps on an average smartphone. The high
number of apps and the limited number of app icons that can be displayed on the
device's screen require a new paradigm to address their visibility to the
user. In this paper we propose a new online algorithm for dynamically
predicting a set of apps that the user is likely to use. The algorithm runs on
the user's device and constantly learns the user's habits at a given time,
location, and device state. It is designed to actively help the user to
navigate to the desired app as well as to provide a personalized feeling, and
hence is aimed at maximizing the AUC. We show both theoretically and
empirically that the algorithm maximizes the AUC, and yields good results on a
set of 1,000 devices.
| Joseph Keshet, Adam Kariv, Arnon Dagan, Dvir Volk, Joey Simhon | null | 1512.07851 | null | null |
An unsupervised spatiotemporal graphical modeling approach to anomaly
detection in distributed CPS | cs.LG | Modern distributed cyber-physical systems (CPSs) encounter a large variety of
physical faults and cyber anomalies and in many cases, they are vulnerable to
catastrophic fault propagation scenarios due to strong connectivity among the
sub-systems. This paper presents a new data-driven framework for system-wide
anomaly detection for addressing such issues. The framework is based on a
spatiotemporal feature extraction scheme built on the concept of symbolic
dynamics for discovering and representing causal interactions among the
subsystems of a CPS. The extracted spatiotemporal features are then used to
learn system-wide patterns via a Restricted Boltzmann Machine (RBM). The
results show that: (1) the RBM free energy in the off-nominal conditions is
different from that in the nominal conditions and can be used for anomaly
detection; (2) the framework can capture multiple nominal modes with one
graphical model; (3) the case studies with simulated data and an integrated
building system validate the proposed approach.
| Chao Liu, Sambuddha Ghosal, Zhanhong Jiang, Soumik Sarkar | null | 1512.07876 | null | null |
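The free-energy test described in this abstract is easy to reproduce once an RBM has been trained. Below is a minimal sketch (not the authors' code) of the free energy of a binary RBM and a simple deviation test against the nominal free-energy range; the parameters `W`, `b_vis`, and `b_hid` are assumed to come from a previously trained model.

```python
import numpy as np

def rbm_free_energy(v, W, b_vis, b_hid):
    """Free energy F(v) = -v.b_vis - sum_j softplus(b_hid_j + v.W[:, j])
    of a binary RBM; lower free energy = better fit to learned patterns."""
    hidden_term = np.logaddexp(0.0, b_hid + v @ W).sum(axis=-1)  # softplus
    return -(v @ b_vis) - hidden_term

def is_anomalous(v, W, b_vis, b_hid, nominal_mean, nominal_std, k=3.0):
    """Flag samples whose free energy lies more than k standard deviations
    from the free-energy distribution observed under nominal conditions."""
    f = rbm_free_energy(v, W, b_vis, b_hid)
    return np.abs(f - nominal_mean) > k * nominal_std
```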
Bridging the Gap between Stochastic Gradient MCMC and Stochastic
Optimization | stat.ML cs.LG | Stochastic gradient Markov chain Monte Carlo (SG-MCMC) methods are Bayesian
analogs to popular stochastic optimization methods; however, this connection is
not well studied. We explore this relationship by applying simulated annealing
to an SG-MCMC algorithm. Furthermore, we extend recent SG-MCMC methods with two
key components: i) adaptive preconditioners (as in Adagrad or RMSprop), and ii)
adaptive element-wise momentum weights. The zero-temperature limit gives a
novel stochastic optimization method with adaptive element-wise momentum
weights, while conventional optimization methods only have a shared, static
momentum weight. Under certain assumptions, our theoretical analysis suggests
the proposed simulated annealing approach converges close to the global optimum.
Experiments on several deep neural network models show state-of-the-art results
compared to related stochastic optimization algorithms.
| Changyou Chen, David Carlson, Zhe Gan, Chunyuan Li and Lawrence Carin | null | 1512.07962 | null | null |
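To make the adaptive-preconditioner idea concrete, here is a hedged sketch of one RMSprop-style preconditioned stochastic-gradient Langevin step; it illustrates the general recipe rather than the paper's exact algorithm (the temperature schedule and the element-wise momentum weights are omitted).

```python
import numpy as np

def precond_sgld_step(theta, grad, v, rng, eps=1e-3, alpha=0.99, lam=1e-5):
    """One preconditioned SGLD step. `grad` is a stochastic gradient of the
    negative log-posterior; `v` is the running second-moment estimate;
    `rng` is a numpy Generator."""
    v = alpha * v + (1 - alpha) * grad**2         # adaptive preconditioner state
    G = 1.0 / (lam + np.sqrt(v))                  # diagonal preconditioner
    noise = rng.normal(size=theta.shape) * np.sqrt(eps * G)
    theta = theta - 0.5 * eps * G * grad + noise  # drift plus injected noise
    return theta, v
```

Dropping the noise term recovers a conventional adaptive-preconditioner optimizer, mirroring the zero-temperature limit discussed in the abstract.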
Inducing Generalized Multi-Label Rules with Learning Classifier Systems | cs.NE cs.LG | In recent years, multi-label classification has attracted a significant body
of research, motivated by real-life applications, such as text classification
and medical diagnoses. Although sparsely studied in this context, Learning
Classifier Systems are naturally well-suited to multi-label classification
problems, whose search space typically involves multiple highly specific
niches. This is the motivation behind our current work that introduces a
generalized multi-label rule format -- allowing for flexible label-dependency
modeling, with no need for explicit knowledge of which correlations to search
for -- and uses it as a guide for further adapting the general Michigan-style
supervised Learning Classifier System framework. The integration of the
aforementioned rule format and framework adaptations results in a novel
algorithm for multi-label classification whose behavior is studied through a
set of properly defined artificial problems. The proposed algorithm is also
thoroughly evaluated on a set of multi-label datasets and found competitive to
other state-of-the-art multi-label classification methods.
| Fani A. Tzima, Miltiadis Allamanis, Alexandros Filotheou, Pericles A.
Mitkas | null | 1512.07982 | null | null |
Discovering topic structures of a temporally evolving document corpus | cs.IR cs.LG | In this paper we describe a novel framework for the discovery of the topical
content of a data corpus, and the tracking of its complex structural changes
across the temporal dimension. In contrast to previous work our model does not
impose a prior on the rate at which documents are added to the corpus nor does
it adopt the Markovian assumption, which overly restricts the type of changes
that the model can capture. Our key technical contribution is a framework based
on (i) discretization of time into epochs, (ii) epoch-wise topic discovery
using a hierarchical Dirichlet process-based model, and (iii) a temporal
similarity graph which allows for the modelling of complex topic changes:
emergence and disappearance, evolution, splitting, and merging. The power of
the proposed framework is demonstrated on two medical literature corpora
concerned with the autism spectrum disorder (ASD) and the metabolic syndrome
(MetS) -- both increasingly important research subjects with significant social
and healthcare consequences. In addition to the collected ASD and metabolic
syndrome literature corpora which we made freely available, our contribution
also includes an extensive empirical analysis of the proposed framework. We
describe a detailed and careful examination of the effects that our
algorithm's free parameters have on its output, and discuss the significance
of the findings both in the context of the practical application of our
algorithm as well as in the context of the existing body of work on temporal
topic analysis. Our quantitative analysis is followed by several qualitative
case studies highly relevant to the current research on ASD and MetS, on which
our algorithm is shown to capture well the actual developments in these fields.
| Adham Beykikhoshk and Ognjen Arandjelovic and Dinh Phung and Svetha
Venkatesh | null | 1512.08008 | null | null |
Statistical Learning under Nonstationary Mixing Processes | cs.LG stat.ML | We study a special case of the problem of statistical learning without the
i.i.d. assumption. Specifically, we suppose a learning method is presented with
a sequence of data points and is required to make a prediction (e.g., a
classification) for each one, and can then observe the loss incurred by this
prediction. We go beyond traditional analyses, which have focused on stationary
mixing processes or nonstationary product processes, by combining these two
relaxations to allow nonstationary mixing processes. We are particularly
interested in the case of $\beta$-mixing processes, with the sum of changes in
marginal distributions growing sublinearly in the number of samples. Under
these conditions, we propose a learning method, and establish that for bounded
VC subgraph classes, the cumulative excess risk grows sublinearly in the number
of predictions, at a quantified rate.
| Steve Hanneke, Liu Yang | null | 1512.08064 | null | null |
Inverse Reinforcement Learning via Deep Gaussian Process | cs.LG cs.RO stat.ML | We propose a new approach to inverse reinforcement learning (IRL) based on
the deep Gaussian process (deep GP) model, which is capable of learning
complicated reward structures with few demonstrations. Our model stacks
multiple latent GP layers to learn abstract representations of the state
feature space, which is linked to the demonstrations through the Maximum
Entropy learning framework. Incorporating the IRL engine into the nonlinear
latent structure renders existing deep GP inference approaches intractable. To
tackle this, we develop a non-standard variational approximation framework
which extends previous inference schemes. This allows for approximate Bayesian
treatment of the feature space and guards against overfitting. Carrying out
representation and inverse reinforcement learning simultaneously within our
model outperforms state-of-the-art approaches, as we demonstrate with
experiments on standard benchmarks ("object world", "highway driving") and a new
benchmark ("binary world").
| Ming Jin, Andreas Damianou, Pieter Abbeel, Costas Spanos | null | 1512.08065 | null | null |
Regularized Orthogonal Tensor Decompositions for Multi-Relational
Learning | cs.LG cs.AI | Multi-relational learning has received lots of attention from researchers in
various research communities. Most existing methods either suffer from
superlinear per-iteration cost, or are sensitive to the given ranks. To address
both issues, we propose a scalable core tensor trace norm Regularized
Orthogonal Iteration Decomposition (ROID) method for full or incomplete tensor
analytics, which can be generalized as a graph Laplacian regularized version by
using auxiliary information or a sparse higher-order orthogonal iteration
(SHOOI) version. We first establish the equivalence between the Schatten
$p$-norm ($0<p<\infty$) of a low multi-linear rank tensor and that of its core tensor. Then
we achieve a much smaller matrix trace norm minimization problem. Finally, we
develop two efficient augmented Lagrange multiplier algorithms to solve our
problems with convergence guarantees. Extensive experiments on both real and
synthetic datasets, even with only a few observations, verify both the
efficiency and effectiveness of our methods.
| Fanhua Shang and James Cheng and Hong Cheng | null | 1512.08120 | null | null |
The Utility of Abstaining in Binary Classification | cs.LG | We explore the problem of binary classification in machine learning, with a
twist - the classifier is allowed to abstain on any datum, professing ignorance
about the true class label without committing to any prediction. This is
directly motivated by applications like medical diagnosis and fraud risk
assessment, in which incorrect predictions have potentially calamitous
consequences. We focus on a recent spate of theoretically driven work in this
area that characterizes how allowing abstentions can lead to fewer errors in
very general settings. Two areas are highlighted: the surprising possibility of
zero-error learning, and the fundamental tradeoff between predicting
sufficiently often and avoiding incorrect predictions. We review efficient
algorithms with provable guarantees for each of these areas. We also discuss
connections to other scenarios, notably active learning, as they suggest
promising directions of further inquiry in this emerging field.
| Akshay Balsubramani | null | 1512.08133 | null | null |
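The abstention tradeoff is easy to illustrate with the classical confidence-threshold (Chow-style) rule, sketched below; this is only a baseline illustration of the predict-often vs. avoid-errors tradeoff, not one of the specific algorithms surveyed in the paper.

```python
import numpy as np

def predict_with_abstention(probs, threshold=0.8):
    """probs: (n, 2) array of class probabilities for binary classification.
    Returns predicted labels in {0, 1}, or -1 where the classifier abstains
    because its confidence falls below the threshold."""
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    return np.where(confidence >= threshold, labels, -1)

# Raising the threshold lowers the error rate on non-abstained points,
# but shrinks the fraction of points on which a prediction is made.
```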
Self-Excitation: An Enabler for Online Thermal Estimation and Model
Predictive Control of Buildings | cs.SY cs.LG | This paper investigates a method to improve buildings' thermal predictive
control performance via online identification and excitation (active learning
process) that minimally disrupts normal operations. In previous studies we have
demonstrated scalable methods to acquire multi-zone thermal models of passive
buildings using a gray-box approach that leverages building topology and
measurement data. Here we extend the method to multi-zone actively controlled
buildings and examine how to improve the thermal model estimation by using the
controller to excite unknown portions of the building's dynamics. Comparing
against a baseline thermostat controller, we demonstrate the utility of both
the initially acquired and improved thermal models within a Model Predictive
Control (MPC) framework, which anticipates weather uncertainty and time-varying
temperature set-points. A simulation study demonstrates self-excitation
improves model estimation, which corresponds to improved MPC energy savings and
occupant comfort. By coupling building topology, estimation, and control
routines into a single online framework, we have demonstrated the potential for
low-cost scalable methods to actively learn and control buildings to ensure
occupant comfort and minimize energy usage, all while using the existing
building's HVAC sensors and hardware.
| Peter Radecki and Brandon Hencey | null | 1512.08169 | null | null |
Electricity Demand Forecasting by Multi-Task Learning | cs.LG | We explore the application of kernel-based multi-task learning techniques to
forecast the demand of electricity in multiple nodes of a distribution network.
We show that recently developed output kernel learning techniques are
particularly well suited to solving this problem, as they can flexibly model
the complex seasonal effects that characterize electricity demand data, while
learning and exploiting correlations between multiple demand profiles. We also
demonstrate that kernels with a multiplicative structure yield superior
predictive performance with respect to the widely adopted (generalized)
additive models. Our study is based on residential and industrial smart meter
data provided by the Irish Commission for Energy Regulation (CER).
| Jean-Baptiste Fiot, Francesco Dinuzzo | null | 1512.08178 | null | null |
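The multiplicative kernel structure mentioned above can be written down compactly: a joint kernel over (input, node) pairs factorizes as an input kernel times a learned node-similarity matrix. The sketch below is a generic illustration with assumed names, not the authors' implementation.

```python
import numpy as np

def multiplicative_kernel(X1, tasks1, X2, tasks2, B, gamma=0.1):
    """K((x, i), (x', j)) = k_in(x, x') * B[i, j], where k_in is a Gaussian
    kernel on inputs (e.g. seasonal features) and B is a PSD matrix of
    learned correlations between the demand profiles of the network nodes."""
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    K_in = np.exp(-gamma * sq)                 # input kernel
    return K_in * B[np.ix_(tasks1, tasks2)]   # modulate by node correlation
```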
New Perspectives on $k$-Support and Cluster Norms | cs.LG stat.ML | We study a regularizer which is defined as a parameterized infimum of
quadratics, and which we call the box-norm. We show that the k-support norm, a
regularizer proposed by [Argyriou et al., 2012] for sparse vector prediction
problems, belongs to this family, and the box-norm can be generated as a
perturbation of the former. We derive an improved algorithm to compute the
proximity operator of the squared box-norm, and we provide a method to compute
the norm. We extend the norms to matrices, introducing the spectral k-support
norm and spectral box-norm. We note that the spectral box-norm is essentially
equivalent to the cluster norm, a multitask learning regularizer introduced by
[Jacob et al., 2009a], and which in turn can be interpreted as a perturbation of
the spectral k-support norm. Centering the norm is important for multitask
learning and we also provide a method to use centered versions of the norms as
regularizers. Numerical experiments indicate that the spectral k-support and
box-norms and their centered variants provide state-of-the-art performance in
matrix completion and multitask learning problems respectively.
| Andrew M. McDonald, Massimiliano Pontil, Dimitris Stamos | null | 1512.08204 | null | null |
Robust Semi-supervised Least Squares Classification by Implicit
Constraints | stat.ML cs.LG | We introduce the implicitly constrained least squares (ICLS) classifier, a
novel semi-supervised version of the least squares classifier. This classifier
minimizes the squared loss on the labeled data among the set of parameters
implied by all possible labelings of the unlabeled data. Unlike other
discriminative semi-supervised methods, this approach does not introduce
explicit additional assumptions into the objective function, but leverages
implicit assumptions already present in the choice of the supervised least
squares classifier. This method can be formulated as a quadratic programming
problem and its solution can be found using a simple gradient descent
procedure. We prove that, in a limited 1-dimensional setting, this approach
never leads to performance worse than the supervised classifier. Experimental
results show that, in the general multidimensional case as well, performance
improvements can be expected, both in terms of the squared loss that is
intrinsic to the classifier, as well as in terms of the expected classification
error.
| Jesse H. Krijthe and Marco Loog | 10.1016/j.patcog.2016.09.009 | 1512.08240 | null | null |
Using Causal Discovery to Track Information Flow in Spatio-Temporal Data
- A Testbed and Experimental Results Using Advection-Diffusion Simulations | cs.LG | Causal discovery algorithms based on probabilistic graphical models have
emerged in geoscience applications for the identification and visualization of
dynamical processes. The key idea is to learn the structure of a graphical
model from observed spatio-temporal data, which indicates information flow,
thus pathways of interactions, in the observed physical system. Studying those
pathways allows geoscientists to learn subtle details about the underlying
dynamical mechanisms governing our planet. Initial studies using this approach
on real-world atmospheric data have shown great potential for scientific
discovery. However, in these initial studies no ground truth was available, so
that the resulting graphs have been evaluated only by whether a domain expert
thinks they seemed physically plausible. This paper seeks to fill this gap. We
develop a testbed that emulates two dynamical processes dominant in many
geoscience applications, namely advection and diffusion, in a 2D grid. Then we
apply the causal discovery based information tracking algorithms to the
simulation data to study how well the algorithms work for different scenarios
and to gain a better understanding of the physical meaning of the graph
results, in particular of instantaneous connections. We make all data sets used
in this study available to the community as a benchmark.
Keywords: Information flow, graphical model, structure learning, causal
discovery, geoscience.
| Imme Ebert-Uphoff and Yi Deng | null | 1512.08279 | null | null |
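For readers who want to reproduce a testbed of this kind, here is a hedged sketch of one explicit finite-difference step of 2D advection-diffusion on a periodic grid; the grid spacing, boundary handling, and lack of forcing are simplifications, not the authors' exact simulation.

```python
import numpy as np

def advection_diffusion_step(u, vx, vy, D, dt, dx):
    """One explicit Euler step of du/dt = -v . grad(u) + D * laplacian(u)
    on a periodic 2D grid, using centered differences. Stability requires
    a sufficiently small time step dt."""
    ux = (np.roll(u, -1, 1) - np.roll(u, 1, 1)) / (2 * dx)   # du/dx
    uy = (np.roll(u, -1, 0) - np.roll(u, 1, 0)) / (2 * dx)   # du/dy
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx**2
    return u + dt * (-vx * ux - vy * uy + D * lap)
```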
Natural Language Inference by Tree-Based Convolution and Heuristic
Matching | cs.CL cs.LG | In this paper, we propose the TBCNN-pair model to recognize entailment and
contradiction between two sentences. In our model, a tree-based convolutional
neural network (TBCNN) captures sentence-level semantics; then heuristic
matching layers like concatenation, element-wise product/difference combine the
information in individual sentences. Experimental results show that our model
outperforms existing sentence encoding-based approaches by a large margin.
| Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, Zhi Jin | null | 1512.08422 | null | null |
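The heuristic matching layer is simple enough to state in a few lines. A minimal sketch, assuming two fixed-size sentence embeddings produced by the encoders:

```python
import numpy as np

def heuristic_match(h1, h2):
    """Combine two sentence vectors by concatenation, element-wise product,
    and element-wise difference; the result feeds a downstream classifier
    that decides entailment/contradiction/neutral."""
    return np.concatenate([h1, h2, h1 * h2, h1 - h2], axis=-1)
```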
Convexified Modularity Maximization for Degree-corrected Stochastic
Block Models | math.ST cs.LG cs.SI stat.ML stat.TH | The stochastic block model (SBM) is a popular framework for studying
community detection in networks. This model is limited by the assumption that
all nodes in the same community are statistically equivalent and have equal
expected degrees. The degree-corrected stochastic block model (DCSBM) is a
natural extension of SBM that allows for degree heterogeneity within
communities. This paper proposes a convexified modularity maximization approach
for estimating the hidden communities under DCSBM. Our approach is based on a
convex programming relaxation of the classical (generalized) modularity
maximization formulation, followed by a novel doubly-weighted $ \ell_1 $-norm $
k $-median procedure. We establish non-asymptotic theoretical guarantees for
both approximate clustering and perfect clustering. Our approximate clustering
results are insensitive to the minimum degree, and hold even in the sparse regime
with bounded average degrees. In the special case of SBM, these theoretical
results match the best-known performance guarantees of computationally feasible
algorithms. Numerically, we provide an efficient implementation of our
algorithm, which is applied to both synthetic and real-world networks.
Experiment results show that our method enjoys competitive performance compared
to the state of the art in the literature.
| Yudong Chen and Xiaodong Li and Jiaming Xu | null | 1512.08425 | null | null |
Visually Indicated Sounds | cs.CV cs.LG cs.SD | Objects make distinctive sounds when they are hit or scratched. These sounds
reveal aspects of an object's material properties, as well as the actions that
produced them. In this paper, we propose the task of predicting what sound an
object makes when struck as a way of studying physical interactions within a
visual scene. We present an algorithm that synthesizes sound from silent videos
of people hitting and scratching objects with a drumstick. This algorithm uses
a recurrent neural network to predict sound features from videos and then
produces a waveform from these features with an example-based synthesis
procedure. We show that the sounds predicted by our model are realistic enough
to fool participants in a "real or fake" psychophysical experiment, and that
they convey significant information about material properties and physical
interactions.
| Andrew Owens, Phillip Isola, Josh McDermott, Antonio Torralba, Edward
H. Adelson, William T. Freeman | null | 1512.08512 | null | null |
Taming the Noise in Reinforcement Learning via Soft Updates | cs.LG cs.IT math.IT | Model-free reinforcement learning algorithms, such as Q-learning, perform
poorly in the early stages of learning in noisy environments, because much
effort is spent unlearning biased estimates of the state-action value function.
The bias results from selecting, among several noisy estimates, the apparent
optimum, which may actually be suboptimal. We propose G-learning, a new
off-policy learning algorithm that regularizes the value estimates by
penalizing deterministic policies in the beginning of the learning process. We
show that this method reduces the bias of the value-function estimation,
leading to faster convergence to the optimal value and the optimal policy.
Moreover, G-learning enables the natural incorporation of prior domain
knowledge, when available. The stochastic nature of G-learning also makes it
avoid some exploration costs, a property usually attributed only to on-policy
algorithms. We illustrate these ideas in several examples, where G-learning
results in significant improvements of the convergence rate and the cost of the
learning process.
| Roy Fox, Ari Pakman, Naftali Tishby | null | 1512.08562 | null | null |
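To make the soft-update idea concrete, here is a hedged sketch of a G-learning-style tabular update in a cost-minimization setting, where a log-sum-exp "soft-min" over next actions replaces Q-learning's hard min; the prior policy `rho` and the schedule for the inverse temperature `beta` are simplifications.

```python
import numpy as np

def g_learning_update(G, s, a, cost, s_next, rho, beta, alpha=0.1, gamma=0.99):
    """Tabular soft update: early on (small beta) the soft-min stays close to
    the prior-averaged value, penalizing premature deterministic policies;
    as beta grows, the update approaches the hard min of Q-learning."""
    soft_min = -(1.0 / beta) * np.log(np.sum(rho * np.exp(-beta * G[s_next])))
    target = cost + gamma * soft_min
    G[s, a] += alpha * (target - G[s, a])
    return G
```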
Structured Pruning of Deep Convolutional Neural Networks | cs.NE cs.LG stat.ML | Real time application of deep learning algorithms is often hindered by high
computational complexity and frequent memory accesses. Network pruning is a
promising technique to solve this problem. However, pruning usually results in
irregular network connections that not only demand extra representation efforts
but also do not fit well on parallel computation. We introduce structured
sparsity at various scales for convolutional neural networks: channel-wise,
kernel-wise, and intra-kernel strided sparsity. This structured sparsity
is very advantageous for direct computational resource savings on embedded
computers, parallel computing environments and hardware based systems. To
decide the importance of network connections and paths, the proposed method
uses a particle filtering approach. The importance weight of each particle is
assigned by computing the misclassification rate with corresponding
connectivity pattern. The pruned network is re-trained to compensate for the
losses due to pruning. While implementing convolutions as matrix products, we
particularly show that intra-kernel strided sparsity with a simple constraint
can significantly reduce the size of kernel and feature map matrices. The
pruned network is finally fixed point optimized with reduced word length
precision. This results in significant reduction in the total storage size
providing advantages for on-chip memory based implementations of deep neural
networks.
| Sajid Anwar, Kyuyeon Hwang and Wonyong Sung | null | 1512.08571 | null | null |
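As a minimal illustration of channel-wise structured sparsity (the coarsest of the three granularities above), the sketch below masks whole output channels of a convolution weight tensor. Note that the paper selects connectivity patterns with a particle filter scored by misclassification rate; the magnitude-based ranking here is only a stand-in to show the structure of the mask.

```python
import numpy as np

def prune_channels(W, keep_ratio=0.5):
    """W: conv weights of shape (out_channels, in_channels, kH, kW).
    Zero out entire output channels with the smallest L1 mass, so the
    remaining network stays regular and friendly to parallel hardware."""
    importance = np.abs(W).sum(axis=(1, 2, 3))   # one score per channel
    k = max(1, int(keep_ratio * W.shape[0]))
    keep = np.argsort(importance)[-k:]           # indices of kept channels
    mask = np.zeros(W.shape[0], dtype=bool)
    mask[keep] = True
    return W * mask[:, None, None, None], mask
```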
Optimal Selective Attention in Reactive Agents | cs.LG cs.IT math.IT | In POMDPs, information about the hidden state, delivered through
observations, is both valuable to the agent, allowing it to base its actions on
better informed internal states, and a "curse", exploding the size and
diversity of the internal state space. One attempt to deal with this is to
focus on reactive policies, which only base their actions on the most recent
observation. However, even reactive policies can be demanding on resources, and
agents need to pay selective attention to only some of the information
available to them in observations. In this report we present the
minimum-information principle for selective attention in reactive agents. We
further motivate this approach by reducing the general problem of optimal
control in POMDPs, to reactive control with complex observations. Lastly, we
explore a newly discovered phenomenon of this optimization process -
period-doubling bifurcations. This necessitates periodic policies, and raises many
more questions regarding stability, periodicity and chaos in optimal control.
| Roy Fox, Naftali Tishby | null | 1512.08575 | null | null |
A Simple Baseline for Travel Time Estimation using Large-Scale Trip Data | cs.LG cs.CY | The increased availability of large-scale trajectory data around the world
provides rich information for the study of urban dynamics. For example, New
York City Taxi Limousine Commission regularly releases source-destination
information about trips in the taxis they regulate. Taxi data provide
information about traffic patterns, and thus enable the study of urban flow --
what will traffic between two locations look like at a certain date and time in
the future? Existing big data methods try to outdo each other in terms of
complexity and algorithmic sophistication. In the spirit of "big data beats
algorithms", we present a very simple baseline which outperforms
state-of-the-art approaches, including Bing Maps and Baidu Maps (whose APIs
permit large scale experimentation). Such a travel time estimation baseline has
several important uses, such as navigation (fast travel time estimates can
serve as approximate heuristics for A* search variants for path finding) and
trip planning (which uses operating hours for popular destinations along with
travel time estimates to create an itinerary).
| Hongjian Wang, Zhenhui Li, Yu-Hsuan Kuo, Dan Kifer | null | 1512.08580 | null | null |
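The "very simple baseline" lends itself to a few lines of code. A hedged sketch (the column names and spatial binning are assumptions for illustration; the actual baseline's binning and fallback rules may differ): bucket historical trips by origin cell, destination cell, and time-of-week, and predict the bucket's median travel time.

```python
import pandas as pd

def fit_baseline(trips, grid=0.01):
    """trips: DataFrame with pickup_lat/lon, dropoff_lat/lon, a datetime
    column pickup_datetime, and travel_time (assumed column names)."""
    t = trips.copy()
    t["o"] = list(zip(t.pickup_lat // grid, t.pickup_lon // grid))
    t["d"] = list(zip(t.dropoff_lat // grid, t.dropoff_lon // grid))
    t["hour"] = t.pickup_datetime.dt.dayofweek * 24 + t.pickup_datetime.dt.hour
    return t.groupby(["o", "d", "hour"])["travel_time"].median()

# Prediction is a lookup of the (origin cell, destination cell, hour) key,
# falling back to coarser aggregates when a bucket has no historical trips.
```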
Tight Bounds for Approximate Carath\'eodory and Beyond | cs.DS cs.LG math.OC | We give a deterministic nearly-linear time algorithm for approximating any
point inside a convex polytope with a sparse convex combination of the
polytope's vertices. Our result provides a constructive proof for the
Approximate Carath\'{e}odory Problem, which states that any point inside a
polytope contained in the $\ell_p$ ball of radius $D$ can be approximated to
within $\epsilon$ in $\ell_p$ norm by a convex combination of only $O\left(D^2
p/\epsilon^2\right)$ vertices of the polytope for $p \geq 2$. We also show that
this bound is tight, using an argument based on anti-concentration for the
binomial distribution.
Along the way of establishing the upper bound, we develop a technique for
minimizing norms over convex sets with complicated geometry; this is achieved
by running Mirror Descent on a dual convex function obtained via Sion's
Theorem.
As simple extensions of our method, we then provide new algorithms for
submodular function minimization and SVM training. For submodular function
minimization we obtain a simplification and (provable) speed-up over Wolfe's
algorithm, the method commonly found to be the fastest in practice. For SVM
training, we obtain $O(1/\epsilon^2)$ convergence for arbitrary kernels; each
iteration only requires matrix-vector operations involving the kernel matrix,
so we overcome the obstacle of having to explicitly store the kernel or compute
its Cholesky factorization.
| Vahab Mirrokni, Renato Paes Leme, Adrian Vladu, Sam Chiu-wai Wong | null | 1512.08602 | null | null |
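For intuition, a sparse convex combination of the kind guaranteed by the theorem can be computed greedily; the sketch below uses a Frank-Wolfe-style greedy step for the Euclidean ($p=2$) case, which is not the paper's mirror-descent construction but illustrates what "approximating a point with few vertices" means.

```python
import numpy as np

def sparse_convex_approx(point, vertices, n_steps):
    """Greedily build x, a convex combination of at most n_steps rows of
    `vertices`, so that ||x - point||_2 shrinks as O(D / sqrt(t))."""
    x = vertices[0].astype(float)
    for t in range(2, n_steps + 1):
        grad = x - point                          # gradient of 0.5*||x - point||^2
        i = np.argmin(vertices @ grad)            # vertex minimizing the linear model
        x += (2.0 / (t + 1)) * (vertices[i] - x)  # standard Frank-Wolfe step size
    return x                                      # convex comb. of selected vertices
```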
Feed-Forward Networks with Attention Can Solve Some Long-Term Memory
Problems | cs.LG cs.NE | We propose a simplified model of attention which is applicable to
feed-forward neural networks and demonstrate that the resulting model can solve
the synthetic "addition" and "multiplication" long-term memory problems for
sequence lengths which are both longer and more widely varying than the best
published results for these tasks.
| Colin Raffel and Daniel P. W. Ellis | null | 1512.08756 | null | null |
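The simplified attention model can be written in a handful of lines. A minimal numpy sketch, with the learned scoring function reduced here to a dot product with a weight vector `w` (an assumption for brevity):

```python
import numpy as np

def feedforward_attention(H, w):
    """H: (T, d) sequence of hidden states; w: (d,) learned attention weights.
    Computes e_t = w . h_t, alpha = softmax(e), and returns the fixed-size
    context c = sum_t alpha_t * h_t, independent of sequence length T."""
    e = H @ w
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()
    return alpha @ H   # (d,) context vector fed to a feed-forward classifier
```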
Matrix Completion Under Monotonic Single Index Models | stat.ML cs.LG | Most recent results in matrix completion assume that the matrix under
consideration is low-rank or that the columns are in a union of low-rank
subspaces. In real-world settings, however, the linear structure underlying
these models is distorted by a (typically unknown) nonlinear transformation.
This paper addresses the challenge of matrix completion in the face of such
nonlinearities. Given a few observations of a matrix that are obtained by
applying a Lipschitz, monotonic function to a low rank matrix, our task is to
estimate the remaining unobserved entries. We propose a novel matrix completion
method that alternates between low-rank matrix estimation and monotonic
function estimation to estimate the missing matrix elements. Mean squared error
bounds provide insight into how well the matrix can be estimated based on the
size and rank of the matrix and properties of the nonlinear transformation.
Empirical results on synthetic and real-world datasets demonstrate the
competitiveness of the proposed approach.
| Ravi Ganti, Laura Balzano, Rebecca Willett | null | 1512.08787 | null | null |
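The alternating scheme described above admits a compact sketch: alternate a low-rank fit with a monotonic (isotonic) fit of the transformation. This is a heuristic simplification for illustration (rank-truncated SVD and mean imputation stand in for the paper's estimator), using scikit-learn's IsotonicRegression.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def alternate_fit(Y, mask, rank, n_iters=20):
    """Y: partially observed matrix (valid where mask), assumed Y = g(L) for a
    low-rank L and monotonic g. Alternates a rank-truncated SVD fit with an
    isotonic fit of g; missing entries are imputed with the current estimate."""
    Z = np.where(mask, Y, Y[mask].mean())                # crude initial imputation
    iso = IsotonicRegression(out_of_bounds="clip")
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]         # low-rank step
        iso.fit(L[mask].ravel(), Y[mask].ravel())        # monotonic step: fit g
        Yhat = iso.predict(L.ravel()).reshape(Y.shape)   # current estimate g(L)
        Z = np.where(mask, Y, Yhat)                      # keep observed entries
    return Yhat
```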
Common Variable Learning and Invariant Representation Learning using
Siamese Neural Networks | stat.ML cs.LG cs.NE | We consider the statistical problem of learning common source of variability
in data which are synchronously captured by multiple sensors, and demonstrate
that Siamese neural networks can be naturally applied to this problem. This
approach is useful in particular in exploratory, data-driven applications,
where neither a model nor label information is available. In recent years, many
researchers have successfully applied Siamese neural networks to obtain an
embedding of data which corresponds to a "semantic similarity". We present an
interpretation of this "semantic similarity" as learning of equivalence
classes. We discuss properties of the embedding obtained by Siamese networks
and provide empirical results that demonstrate the ability of Siamese networks
to learn common variability.
| Uri Shaham, Roy Lederman | null | 1512.08806 | null | null |
Sparse group factor analysis for biclustering of multiple data sources | cs.LG cs.IR stat.ML | Motivation: Modelling methods that find structure in data are necessary with
the current large volumes of genomic data, and there have been various efforts
to find subsets of genes exhibiting consistent patterns over subsets of
treatments. These biclustering techniques have focused on one data source,
often gene expression data. We present a Bayesian approach for joint
biclustering of multiple data sources, extending a recent method Group Factor
Analysis (GFA) to have a biclustering interpretation with additional sparsity
assumptions. The resulting method enables data-driven detection of linear
structure present in parts of the data sources. Results: Our simulation studies
show that the proposed method reliably infers bi-clusters from heterogeneous
data sources. We tested the method on data from the NCI-DREAM drug sensitivity
prediction challenge, resulting in an excellent prediction accuracy. Moreover,
the predictions are based on several biclusters which provide insight into the
data sources, in this case on gene expression, DNA methylation, protein
abundance, exome sequence, functional connectivity fingerprints and drug
sensitivity.
| Kerstin Bunte, Eemeli Lepp\"aaho, Inka Saarinen, Samuel Kaski | 10.1093/bioinformatics/btw207 | 1512.08808 | null | null |
Learning to Filter with Predictive State Inference Machines | cs.LG | Latent state space models are a fundamental and widely used tool for modeling
dynamical systems. However, they are difficult to learn from data and learned
models often lack performance guarantees on inference tasks such as filtering
and prediction. In this work, we present the PREDICTIVE STATE INFERENCE MACHINE
(PSIM), a data-driven method that considers the inference procedure on a
dynamical system as a composition of predictors. The key idea is that rather
than first learning a latent state space model, and then using the learned
model for inference, PSIM directly learns predictors for inference in
predictive state space. We provide theoretical guarantees for inference, in
both realizable and agnostic settings, and showcase practical performance on a
variety of simulated and real world robotics benchmarks.
| Wen Sun, Arun Venkatraman, Byron Boots, J. Andrew Bagnell | null | 1512.08836 | null | null |
Estimation of the sample covariance matrix from compressive measurements | stat.ML cs.LG | This paper focuses on the estimation of the sample covariance matrix from
low-dimensional random projections of data known as compressive measurements.
In particular, we present an unbiased estimator to extract the covariance
structure from compressive measurements obtained by a general class of random
projection matrices consisting of i.i.d. zero-mean entries and finite first
four moments. In contrast to previous works, we make no structural assumptions
about the underlying covariance matrix such as being low-rank. In fact, our
analysis is based on a non-Bayesian data setting which requires no
distributional assumptions on the set of data samples. Furthermore, inspired by
the generality of the projection matrices, we propose an approach to covariance
estimation that utilizes sparse Rademacher matrices. Therefore, our algorithm
can be used to estimate the covariance matrix in applications with limited
memory and computation power at the acquisition devices. Experimental results
demonstrate that our approach allows for accurate estimation of the sample
covariance matrix on several real-world data sets, including video data.
| Farhad Pourkamali-Anaraki | 10.1049/iet-spr.2016.0169 | 1512.08887 | null | null |
Online Keyword Spotting with a Character-Level Recurrent Neural Network | cs.CL cs.LG cs.NE | In this paper, we propose a context-aware keyword spotting model employing a
character-level recurrent neural network (RNN) for spoken term detection in
continuous speech. The RNN is end-to-end trained with connectionist temporal
classification (CTC) to generate the probabilities of character and
word-boundary labels. There is no need for the phonetic transcription, senone
modeling, or system dictionary in training and testing. Also, keywords can
easily be added and modified by editing the text based keyword list without
retraining the RNN. Moreover, the unidirectional RNN processes an infinitely
long input audio stream without pre-segmentation, and keywords are detected
with low-latency before the utterance is finished. Experimental results show
that the proposed keyword spotter significantly outperforms the deep neural
network (DNN) and hidden Markov model (HMM) based keyword-filler model even
with fewer computations.
| Kyuyeon Hwang, Minjae Lee, Wonyong Sung | null | 1512.08903 | null | null |
Simple, Robust and Optimal Ranking from Pairwise Comparisons | cs.LG cs.AI cs.IT math.IT stat.ML | We consider data in the form of pairwise comparisons of n items, with the
goal of precisely identifying the top k items for some value of k < n, or
alternatively, recovering a ranking of all the items. We analyze the Copeland
counting algorithm that ranks the items in order of the number of pairwise
comparisons won, and show it has three attractive features: (a) its
computational efficiency leads to speed-ups of several orders of magnitude in
computation time as compared to prior work; (b) it is robust in that
theoretical guarantees impose no conditions on the underlying matrix of
pairwise-comparison probabilities, in contrast to some prior work that applies
only to the BTL parametric model; and (c) it is an optimal method up to
constant factors, meaning that it achieves the information-theoretic limits for
recovering the top k-subset. We extend our results to obtain sharp guarantees
for approximate recovery under the Hamming distortion metric, and more
generally, to any arbitrary error requirement that satisfies a simple and
natural monotonicity condition.
| Nihar B. Shah and Martin J. Wainwright | null | 1512.08949 | null | null |
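The Copeland counting algorithm is as simple as its description suggests. A minimal sketch, assuming comparisons are given as (winner, loser) index pairs:

```python
import numpy as np

def copeland_rank(comparisons, n_items, k):
    """Rank n_items by the number of pairwise comparisons won and return the
    indices of the top-k items. Ties are broken arbitrarily here."""
    wins = np.zeros(n_items)
    for winner, _loser in comparisons:
        wins[winner] += 1
    order = np.argsort(-wins)   # full ranking, most wins first
    return order[:k]
```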
Detection in the stochastic block model with multiple clusters: proof of
the achievability conjectures, acyclic BP, and the information-computation
gap | math.PR cs.CC cs.IT cs.LG cs.SI math.IT | In a paper that initiated the modern study of the stochastic block model,
Decelle et al., backed by Mossel et al., made the following conjecture: Denote
by $k$ the number of balanced communities, $a/n$ the probability of connecting
inside communities and $b/n$ across, and set
$\mathrm{SNR}=(a-b)^2/(k(a+(k-1)b))$; for any $k \geq 2$, it is possible to
detect communities efficiently whenever $\mathrm{SNR}>1$ (the KS threshold),
whereas for $k\geq 4$, it is possible to detect communities
information-theoretically for some $\mathrm{SNR}<1$. Massouli\'e, Mossel et
al.\ and Bordenave et al.\ succeeded in proving that the KS threshold is
efficiently achievable for $k=2$, while Mossel et al.\ proved that it cannot be
crossed information-theoretically for $k=2$. The above conjecture remained open
for $k \geq 3$.
This paper proves this conjecture, further extending the efficient detection
to non-symmetrical SBMs with a generalized notion of detection and KS
threshold. For the efficient part, a linearized acyclic belief propagation
(ABP) algorithm is developed and proved to detect communities for any $k$ down
to the KS threshold in time $O(n \log n)$. Achieving this requires showing
optimality of ABP in the presence of cycles, a challenge for message passing
algorithms. The paper further connects ABP to a power iteration method with a
nonbacktracking operator of generalized order, formalizing the interplay
between message passing and spectral methods. For the information-theoretic
(IT) part, a non-efficient algorithm sampling a typical clustering is shown to
cross the KS threshold at $k=4$. The emerging gap is shown to be large in
some cases; if $a=0$, the KS threshold reads $b \gtrsim k^2$ whereas the IT
bound reads $b \gtrsim k \ln(k)$, making the SBM a good study-case for
information-computation gaps.
| Emmanuel Abbe and Colin Sandon | null | 1512.09080 | null | null |
Low rank approximation and decomposition of large matrices using error
correcting codes | cs.IT cs.LG cs.NA math.IT | Low rank approximation is an important tool used in many applications of
signal processing and machine learning. Recently, randomized sketching
algorithms were proposed to effectively construct low rank approximations and
obtain approximate singular value decompositions of large matrices. Similar
ideas were used to solve least squares regression problems. In this paper, we
show how matrices from error correcting codes can be used to find such low rank
approximations and matrix decompositions, and extend the framework to linear
least squares regression problems. The benefits of using these code matrices
are the following: (i) They are easy to generate and they reduce randomness
significantly. (ii) Code matrices with mild properties satisfy the subspace
embedding property, and have a better chance of preserving the geometry of an
entire subspace of vectors. (iii) For parallel and distributed applications,
code matrices have significant advantages over structured random matrices and
Gaussian random matrices. (iv) Unlike Fourier or Hadamard transform matrices,
which require sampling $O(k\log k)$ columns for a rank-$k$ approximation, the
log factor is not necessary for certain types of code matrices. That is,
$(1+\epsilon)$ optimal Frobenius norm error can be achieved for a rank-$k$
approximation with $O(k/\epsilon)$ samples. (v) Fast multiplication is possible
with structured code matrices, so fast approximations can be achieved for
general dense input matrices. (vi) For least squares regression problem
$\min\|Ax-b\|_2$ where $A\in \mathbb{R}^{n\times d}$, the $(1+\epsilon)$
relative error approximation can be achieved with $O(d/\epsilon)$ samples, with
high probability, when certain code matrices are used.
| Shashanka Ubaru, Arya Mazumdar and Yousef Saad | 10.1109/TIT.2017.2723898 | 1512.09156 | null | null |
Statistical Query Algorithms for Mean Vector Estimation and Stochastic
Convex Optimization | cs.LG cs.DS | Stochastic convex optimization, where the objective is the expectation of a
random convex function, is an important and widely used method with numerous
applications in machine learning, statistics, operations research and other
areas. We study the complexity of stochastic convex optimization given only
statistical query (SQ) access to the objective function. We show that
well-known and popular first-order iterative methods can be implemented using
only statistical queries. For many cases of interest we derive nearly matching
upper and lower bounds on the estimation (sample) complexity including linear
optimization in the most general setting. We then present several consequences
for machine learning, differential privacy and proving concrete lower bounds on
the power of convex optimization based methods.
The key ingredient of our work is SQ algorithms and lower bounds for
estimating the mean vector of a distribution over vectors supported on a convex
body in $\mathbb{R}^d$. This natural problem has not been previously studied
and we show that our solutions can be used to get substantially improved SQ
versions of Perceptron and other online algorithms for learning halfspaces.
| Vitaly Feldman, Cristobal Guzman, Santosh Vempala | null | 1512.09170 | null | null |
Personalized Course Sequence Recommendations | cs.CY cs.LG | Given the variability in student learning it is becoming increasingly
important to tailor courses as well as course sequences to student needs. This
paper presents a systematic methodology for offering personalized course
sequence recommendations to students. First, a forward-search
backward-induction algorithm is developed that can optimally select course
sequences to decrease the time required for a student to graduate. The
algorithm accounts for prerequisite requirements (typically present in higher
level education) and course availability. Second, using the tools of
multi-armed bandits, an algorithm is developed that can optimally recommend a
course sequence that reduces the time to graduate while also increasing
the overall GPA of the student. The algorithm dynamically learns how students
with different contextual backgrounds perform for given course sequences and
then recommends an optimal course sequence for new students. Using real-world
student data from the UCLA Mechanical and Aerospace Engineering department, we
illustrate how the proposed algorithms outperform other methods that do not
include student contextual information when making course sequence
recommendations.
| Jie Xu, Tianwei Xing, Mihaela van der Schaar | 10.1109/TSP.2016.2595495 | 1512.09176 | null | null |
Bayes-Optimal Effort Allocation in Crowdsourcing: Bounds and Index
Policies | cs.LG cs.AI stat.ML | We consider effort allocation in crowdsourcing, where we wish to assign
labeling tasks to imperfect homogeneous crowd workers to maximize overall
accuracy in a continuous-time Bayesian setting, subject to budget and time
constraints. The Bayes-optimal policy for this problem is the solution to a
partially observable Markov decision process, but the curse of dimensionality
renders the computation infeasible. Based on the Lagrangian Relaxation
technique in Adelman & Mersereau (2008), we provide a computationally tractable
instance-specific upper bound on the value of this Bayes-optimal policy, which
can in turn be used to bound the optimality gap of any other sub-optimal
policy. In an approach similar in spirit to the Whittle index for restless
multi-armed bandits, we provide an index policy for effort allocation in
crowdsourcing and demonstrate numerically that it outperforms other
state-of-the-art policies and performs close to the optimal solution.
| Weici Hu, Peter I. Frazier | null | 1512.09204 | null | null |
Denoising and Completion of 3D Data via Multidimensional Dictionary
Learning | cs.LG cs.CV cs.DS | In this paper a new dictionary learning algorithm for multidimensional data
is proposed. Unlike most conventional dictionary learning methods which are
derived for dealing with vectors or matrices, our algorithm, named K-TSVD,
learns a multidimensional dictionary directly via a novel algebraic approach
for tensor factorization as proposed in [3, 12, 13]. Using this approach one
can define a tensor-SVD, and we propose to extend the K-SVD algorithm used for 1-D
data to a K-TSVD algorithm for handling 2-D and 3-D data. Our algorithm, based
on the idea of sparse coding (using group-sparsity over multidimensional
coefficient vectors), alternates between estimating a compact representation
and dictionary learning. We analyze our K-TSVD algorithm and demonstrate its
result on video completion and multispectral image denoising.
| Zemin Zhang and Shuchin Aeron | null | 1512.09227 | null | null |
Strategies and Principles of Distributed Machine Learning on Big Data | stat.ML cs.DC cs.LG | The rise of Big Data has led to new demands for Machine Learning (ML) systems
to learn complex models with millions to billions of parameters, that promise
adequate capacity to digest massive datasets and offer powerful predictive
analytics thereupon. In order to run ML algorithms at such scales, on a
distributed cluster with 10s to 1000s of machines, it is often the case that
significant engineering efforts are required --- and one might fairly ask if
such engineering truly falls within the domain of ML research or not. Taking
the view that Big ML systems can benefit greatly from ML-rooted statistical and
algorithmic insights --- and that ML researchers should therefore not shy away
from such systems design --- we discuss a series of principles and strategies
distilled from our recent efforts on industrial-scale ML solutions. These
principles and strategies span a continuum from application, to engineering,
and to theoretical research and development of Big ML systems and
architectures, with the goal of understanding how to make them efficient,
generally-applicable, and supported with convergence and scaling guarantees.
They concern four key questions which traditionally receive little attention in
ML research: How to distribute an ML program over a cluster? How to bridge ML
computation with inter-machine communication? How to perform such
communication? What should be communicated between machines? By exposing
underlying statistical and algorithmic characteristics unique to ML programs
but not typically seen in traditional computer programs, and by dissecting
successful cases to reveal how we have harnessed these principles to design and
develop both high-performance distributed ML software as well as
general-purpose ML frameworks, we present opportunities for ML researchers and
practitioners to further shape and grow the area that lies between ML and
systems.
| Eric P. Xing, Qirong Ho, Pengtao Xie, Wei Dai | null | 1512.09295 | null | null |
Autoencoding beyond pixels using a learned similarity metric | cs.LG cs.CV stat.ML | We present an autoencoder that leverages learned representations to better
measure similarities in data space. By combining a variational autoencoder with
a generative adversarial network we can use learned feature representations in
the GAN discriminator as basis for the VAE reconstruction objective. Thereby,
we replace element-wise errors with feature-wise errors to better capture the
data distribution while offering invariance towards e.g. translation. We apply
our method to images of faces and show that it outperforms VAEs with
element-wise similarity measures in terms of visual fidelity. Moreover, we show
that the method learns an embedding in which high-level abstract visual
features (e.g. wearing glasses) can be modified using simple arithmetic.
| Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo
Larochelle, Ole Winther | null | 1512.09300 | null | null |
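The feature-wise reconstruction objective can be sketched compactly. Below is a hedged PyTorch sketch (the module interfaces `enc`, `dec`, and `disc.features` are assumptions, not the authors' code) showing where the discriminator's hidden-layer activations replace element-wise pixel errors in the VAE objective.

```python
import torch
import torch.nn.functional as F

def vaegan_losses(enc, dec, disc, x):
    """enc(x) -> (mu, logvar); dec(z) -> x_tilde; disc(x) -> real/fake logit;
    disc.features(x) -> activations of an intermediate discriminator layer."""
    mu, logvar = enc(x)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
    x_tilde = dec(z)

    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

    # Feature-wise reconstruction: compare discriminator activations of the
    # input and its reconstruction instead of element-wise pixel errors.
    recon = F.mse_loss(disc.features(x_tilde), disc.features(x).detach())

    # GAN terms on real, reconstructed, and prior samples.
    real, fake, prior = disc(x), disc(x_tilde), disc(dec(torch.randn_like(z)))
    ones, zeros = torch.ones_like(real), torch.zeros_like(real)
    gan = (F.binary_cross_entropy_with_logits(real, ones) +
           F.binary_cross_entropy_with_logits(fake, zeros) +
           F.binary_cross_entropy_with_logits(prior, zeros))
    return kl, recon, gan
```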
Distributed Bayesian Learning with Stochastic Natural-gradient
Expectation Propagation and the Posterior Server | cs.LG stat.ML | This paper makes two contributions to Bayesian machine learning algorithms.
Firstly, we propose stochastic natural gradient expectation propagation (SNEP),
a novel alternative to expectation propagation (EP), a popular variational
inference algorithm. SNEP is a black box variational algorithm, in that it does
not require any simplifying assumptions on the distribution of interest, beyond
the existence of some Monte Carlo sampler for estimating the moments of the EP
tilted distributions. Further, as opposed to EP which has no guarantee of
convergence, SNEP can be shown to be convergent, even when using Monte Carlo
moment estimates. Secondly, we propose a novel architecture for distributed
Bayesian learning which we call the posterior server. The posterior server
allows scalable and robust Bayesian learning in cases where a data set is
stored in a distributed manner across a cluster, with each compute node
containing a disjoint subset of data. An independent Monte Carlo sampler is run
on each compute node, with direct access only to the local data subset, but
which targets an approximation to the global posterior distribution given all
data across the whole cluster. This is achieved by using a distributed
asynchronous implementation of SNEP to pass messages across the cluster. We
demonstrate SNEP and the posterior server on distributed Bayesian learning of
logistic regression and neural networks.
Keywords: Distributed Learning, Large Scale Learning, Deep Learning, Bayesian
Learning, Variational Inference, Expectation Propagation, Stochastic
Approximation, Natural Gradient, Markov chain Monte Carlo, Parameter Server,
Posterior Server.
| Leonard Hasenclever, Stefan Webb, Thibaut Lienart, Sebastian Vollmer,
Balaji Lakshminarayanan, Charles Blundell, Yee Whye Teh | null | 1512.09327 | null | null |
Homology Computation of Large Point Clouds using Quantum Annealing | quant-ph cs.LG | Homology is a tool in topological data analysis which measures the shape of
the data. In many cases, these measurements translate into new insights which
are not available by other means. To compute homology, we rely on mathematical
constructions which scale exponentially with the size of the data. Therefore,
for large point clouds, the computation is infeasible using classical
computers. In this paper, we present a quantum annealing pipeline for
computation of homology of large point clouds. The pipeline takes as input a
graph approximating the given point cloud. It uses quantum annealing to compute
a clique covering of the graph and then uses this cover to construct a
Mayer-Vietoris complex. The pipeline terminates by performing a simplified
homology computation of the Mayer-Vietoris complex. We have introduced three
different clique coverings and their quantum annealing formulation. Our
pipeline scales polynomially in the size of the data, once the covering step is
solved. To prove correctness of our algorithm, we have also included tests
using the D-Wave 2X quantum processor.
| Raouf Dridi, Hedayat Alghassi | null | 1512.09328 | null | null |
Selecting Near-Optimal Learners via Incremental Data Allocation | cs.LG stat.ML | We study a novel machine learning (ML) problem setting of sequentially
allocating small subsets of training data amongst a large set of classifiers.
The goal is to select a classifier that will give near-optimal accuracy when
trained on all data, while also minimizing the cost of misallocated samples.
This is motivated by large modern datasets and ML toolkits with many
combinations of learning algorithms and hyper-parameters. Inspired by the
principle of "optimism under uncertainty," we propose an innovative strategy,
Data Allocation using Upper Bounds (DAUB), which robustly achieves these
objectives across a variety of real-world datasets.
We further develop substantial theoretical support for DAUB in an idealized
setting where the expected accuracy of a classifier trained on $n$ samples can
be known exactly. Under these conditions we establish a rigorous sub-linear
bound on the regret of the approach (in terms of misallocated data), as well as
a rigorous bound on suboptimality of the selected classifier. Our accuracy
estimates using real-world datasets only entail mild violations of the
theoretical scenario, suggesting that the practical behavior of DAUB is likely
to approach the idealized behavior.
| Ashish Sabharwal, Horst Samulowitz, Gerald Tesauro | null | 1601.00024 | null | null |
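The "optimism under uncertainty" allocation can be sketched abstractly: extrapolate each learner's accuracy-vs-data curve to an optimistic upper bound at full data size, and give the next slice of data to the current leader. This is a schematic rendering with assumed helper names, not the DAUB specification.

```python
def allocate_next_batch(history, n_full):
    """history: dict mapping learner -> list of (n_samples, accuracy) pairs,
    with at least two evaluations per learner. Returns the learner with the
    highest optimistic projection at n_full samples, using a conservative
    linear extrapolation of the most recent accuracy gain."""
    def upper_bound(curve):
        (n1, a1), (n2, a2) = curve[-2], curve[-1]
        slope = max(0.0, (a2 - a1) / (n2 - n1))  # accuracy gain per sample
        return a2 + slope * (n_full - n2)        # optimistic projection
    return max(history, key=lambda m: upper_bound(history[m]))
```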
Write a Classifier: Predicting Visual Classifiers from Unstructured Text | cs.CV cs.CL cs.LG | People typically learn through exposure to visual concepts associated with
linguistic descriptions. For instance, teaching visual object categories to
children is often accompanied by descriptions in text or speech. In a machine
learning context, these observations motivate us to ask whether this learning
process could be computationally modeled to learn visual classifiers. More
specifically, the main question of this work is how to utilize purely textual
description of visual classes with no training images, to learn explicit visual
classifiers for them. We propose and investigate two baseline formulations,
based on regression and domain transfer, that predict a linear classifier.
Then, we propose a new constrained optimization formulation that combines a
regression function and a knowledge transfer function with additional
constraints to predict the parameters of a linear classifier. We also propose
generic kernelized models where a kernel classifier is predicted in the form
defined by the representer theorem. The kernelized models allow defining and
utilizing any two RKHS (Reproducing Kernel Hilbert Space) kernel functions in
the visual space and text space, respectively. We finally propose a kernel
function between unstructured text descriptions that builds on distributional
semantics, which shows an advantage in our setting and could be useful for
other applications. We applied all the studied models to predict visual
classifiers on two fine-grained and challenging categorization datasets (CU
Birds and Flower Datasets), and the results indicate successful predictions of
our final model over several baselines that we designed.
| Mohamed Elhoseiny, Ahmed Elgammal, Babak Saleh | null | 1601.00025 | null | null |
Stochastic Neural Networks with Monotonic Activation Functions | stat.ML cs.LG cs.NE | We propose a Laplace approximation that creates a stochastic unit from any
smooth monotonic activation function, using only Gaussian noise. This paper
investigates the application of this stochastic approximation in training a
family of Restricted Boltzmann Machines (RBM) that are closely linked to
Bregman divergences. This family, that we call exponential family RBM
(Exp-RBM), is a subset of the exponential family Harmoniums that expresses
family members through a choice of smooth monotonic non-linearity for each
neuron. Using contrastive divergence along with our Gaussian approximation, we
show that Exp-RBM can learn useful representations using novel stochastic
units.
| Siamak Ravanbakhsh, Barnabas Poczos, Jeff Schneider, Dale Schuurmans,
Russell Greiner | null | 1601.00034 | null | null |
Practical Algorithms for Learning Near-Isometric Linear Embeddings | stat.ML cs.LG math.OC | We propose two practical non-convex approaches for learning near-isometric,
linear embeddings of finite sets of data points. Given a set of training points
$\mathcal{X}$, we consider the secant set $S(\mathcal{X})$ that consists of all
pairwise difference vectors of $\mathcal{X}$, normalized to lie on the unit
sphere. The problem can be formulated as finding a symmetric and positive
semi-definite matrix $\boldsymbol{\Psi}$ that preserves the norms of all the
vectors in $S(\mathcal{X})$ up to a distortion parameter $\delta$. Motivated by
non-negative matrix factorization, we reformulate our problem into a Frobenius
norm minimization problem, which we solve using the Alternating Direction Method
of Multipliers (ADMM), yielding an algorithm we call FroMax. Another method solves
for a projection matrix $\boldsymbol{\Psi}$ by minimizing the restricted
isometry property (RIP) directly over the set of symmetric, positive
semi-definite matrices. Applying ADMM and a Moreau decomposition on a proximal
mapping, we develop another algorithm, NILE-Pro, for dimensionality reduction.
FroMax is shown to converge faster for smaller $\delta$ while NILE-Pro
converges faster for larger $\delta$. Both non-convex approaches are then
empirically demonstrated to be more computationally efficient than prior convex
approaches for a number of applications in machine learning and signal
processing.
| Jerry Luo, Kayla Shapiro, Hao-Jun Michael Shi, Qi Yang, and Kan Zhu | null | 1601.00062 | null | null |
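The secant set at the heart of both formulations is straightforward to construct; the following NumPy sketch (function names hypothetical) builds S(X) and measures the distortion achieved by a candidate PSD matrix Psi. The FroMax and NILE-Pro solvers themselves are not reproduced.

```python
import numpy as np

def secant_set(X):
    """All normalized pairwise differences of the rows of X (n x d)."""
    i, j = np.triu_indices(X.shape[0], k=1)
    S = X[i] - X[j]
    return S / np.linalg.norm(S, axis=1, keepdims=True)

def distortion(S, Psi):
    """Largest deviation of s^T Psi s from 1 over all secants s."""
    return np.max(np.abs(np.einsum('nd,de,ne->n', S, Psi, S) - 1.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
S = secant_set(X)                 # 50*49/2 = 1225 secants
print(distortion(S, np.eye(10)))  # 0: the secants are already unit norm
```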
Supervised Dimensionality Reduction via Distance Correlation
Maximization | cs.LG stat.ML | In our work, we propose a novel formulation for supervised dimensionality
reduction based on a nonlinear dependency criterion called Statistical Distance
Correlation (Szekely et al., 2007). We propose an objective which is free of
distributional assumptions on regression variables and regression model
assumptions. Our proposed formulation is based on learning a low-dimensional
feature representation $\mathbf{z}$, which maximizes the squared sum of
Distance Correlations between low dimensional features $\mathbf{z}$ and
response $y$, and also between features $\mathbf{z}$ and covariates
$\mathbf{x}$. We propose a novel algorithm to optimize our proposed objective
using the Generalized Minimization Maximization method of Parizi et al.
(2015). We show superior empirical results on multiple datasets, demonstrating
the effectiveness of our proposed approach over several relevant
state-of-the-art supervised dimensionality reduction methods.
| Praneeth Vepakomma and Chetan Tonde and Ahmed Elgammal | null | 1601.00236 | null | null |
Dimensionality-Dependent Generalization Bounds for $k$-Dimensional
Coding Schemes | stat.ML cs.LG | The $k$-dimensional coding schemes refer to a collection of methods that
attempt to represent data using a set of representative $k$-dimensional
vectors, and include non-negative matrix factorization, dictionary learning,
sparse coding, $k$-means clustering and vector quantization as special cases.
Previous generalization bounds for the reconstruction error of the
$k$-dimensional coding schemes are mainly dimensionality independent. A major
advantage of these bounds is that they can be used to analyze the
generalization error when data is mapped into an infinite- or high-dimensional
feature space. However, many applications use finite-dimensional data features.
Can we obtain dimensionality-dependent generalization bounds for
$k$-dimensional coding schemes that are tighter than dimensionality-independent
bounds when data is in a finite-dimensional feature space? The answer is
positive. In this paper, we address this problem and derive a
dimensionality-dependent generalization bound for $k$-dimensional coding
schemes by bounding the covering number of the loss function class induced by
the reconstruction error. The bound is of order
$\mathcal{O}\left(\left(mk\ln(mkn)/n\right)^{\lambda_n}\right)$, where $m$ is
the dimension of the features, $k$ is the number of columns in the linear
implementation of coding schemes, $n$ is the sample size, $\lambda_n>0.5$
when $n$ is finite and $\lambda_n=0.5$ when $n$ is infinite. We show that our
bound can be tighter than previous results, because it avoids inducing the
worst-case upper bound on $k$ of the loss function and converges faster. The
proposed generalization bound is also applied to some specific coding schemes
to demonstrate that the dimensionality-dependent bound is an indispensable
complement to these dimensionality-independent generalization bounds.
| Tongliang Liu, Dacheng Tao, and Dong Xu | null | 1601.00238 | null | null |
A Unified Approach for Learning the Parameters of Sum-Product Networks | cs.LG cs.AI | We present a unified approach for learning the parameters of Sum-Product
Networks (SPNs). We prove that any complete and decomposable SPN is equivalent
to a mixture of trees where each tree corresponds to a product of univariate
distributions. Based on the mixture model perspective, we characterize the
objective function when learning SPNs based on the maximum likelihood
estimation (MLE) principle and show that the optimization problem can be
formulated as a signomial program. We construct two parameter learning
algorithms for SPNs by using sequential monomial approximations (SMA) and the
concave-convex procedure (CCCP), respectively. The two proposed methods
naturally admit multiplicative updates, hence effectively avoiding the
projection operation. With the help of the unified framework, we also show
that, in the case of SPNs, CCCP leads to the same algorithm as Expectation
Maximization (EM) despite the fact that they are different in general.
| Han Zhao, Pascal Poupart, Geoff Gordon | null | 1601.00318 | null | null |
Sparse Diffusion Steepest-Descent for One Bit Compressed Sensing in
Wireless Sensor Networks | stat.ML cs.IT cs.LG math.IT | This letter proposes a sparse diffusion steepest-descent algorithm for one
bit compressed sensing in wireless sensor networks. The approach exploits the
diffusion strategy from distributed learning in the one bit compressed sensing
framework. To estimate a common sparse vector cooperatively from only the sign
of measurements, steepest-descent is used to minimize the suitable global and
local convex cost functions. A diffusion strategy is suggested for distributed
learning of the sparse vector. Simulation results show the effectiveness of the
proposed distributed algorithm compared to state-of-the-art non-distributed
algorithms in the one-bit compressed sensing framework.
| Hadi Zayyani, Mehdi Korki, Farrokh Marvasti | null | 1601.00350 | null | null |
On the Reducibility of Submodular Functions | cs.LG stat.ML | The scalability of submodular optimization methods is critical for their
usability in practice. In this paper, we study the reducibility of submodular
functions, a property that enables us to reduce the solution space of
submodular optimization problems without performance loss. We introduce the
concept of reducibility using marginal gains. Then we show that by adding
perturbation, we can endow irreducible functions with reducibility, based on
which we propose the perturbation-reduction optimization framework. Our
theoretical analysis proves that given the perturbation scales, the
reducibility gain could be computed, and the performance loss has additive
upper bounds. We further conduct empirical studies and the results demonstrate
that our proposed framework significantly accelerates existing optimization
methods for irreducible submodular functions at the cost of only small
performance losses.
| Jincheng Mei, Hao Zhang, Bao-Liang Lu | null | 1601.00393 | null | null |
Fitting Spectral Decay with the $k$-Support Norm | cs.LG stat.ML | The spectral $k$-support norm enjoys good estimation properties in low rank
matrix learning problems, empirically outperforming the trace norm. Its unit
ball is the convex hull of rank $k$ matrices with unit Frobenius norm. In this
paper we generalize the norm to the spectral $(k,p)$-support norm, whose
additional parameter $p$ can be used to tailor the norm to the decay of the
spectrum of the underlying model. We characterize the unit ball and we
explicitly compute the norm. We further provide a conditional gradient method
to solve regularization problems with the norm, and we derive an efficient
algorithm to compute the Euclidean projection on the unit ball in the case
$p=\infty$. In numerical experiments, we show that allowing $p$ to vary
significantly improves performance over the spectral $k$-support norm on
various matrix completion benchmarks, and better captures the spectral decay of
the underlying model.
| Andrew M. McDonald, Massimiliano Pontil, Dimitris Stamos | null | 1601.00449 | null | null |
Approximate Message Passing with Nearest Neighbor Sparsity Pattern
Learning | cs.IT cs.LG math.IT | We consider the problem of recovering clustered sparse signals with no prior
knowledge of the sparsity pattern. Beyond simple sparsity, signals of interest
often exhibit an underlying sparsity pattern which, if leveraged, can improve
the reconstruction performance. However, the sparsity pattern is usually
unknown a priori. Inspired by the idea of k-nearest neighbor (k-NN) algorithm,
we propose an efficient algorithm termed approximate message passing with
nearest neighbor sparsity pattern learning (AMP-NNSPL), which learns the
sparsity pattern adaptively. AMP-NNSPL specifies a flexible spike and slab
prior on the unknown signal and, after each AMP iteration, sets the sparse
ratios as the average of the nearest neighbor estimates via expectation
maximization (EM). Experimental results on both synthetic and real data
demonstrate the superiority of our proposed algorithm both in terms of
reconstruction performance and computational complexity.
| Xiangming Meng and Sheng Wu and Linling Kuang and Defeng (David) Huang
and Jianhua Lu | null | 1601.00543 | null | null |
NFL Play Prediction | cs.LG | Based on NFL game data we try to predict the outcome of a play in multiple
different ways. An application of this is the following: by plugging in various
play options one could determine the best play for a given situation in real
time. While the outcome of a play can be described in many ways we had the most
promising results with a newly defined measure that we call "progress". We see
this work as a first step toward including predictive analysis in NFL play calling.
| Brendan Teich, Roman Lutz, Valentin Kassarnig | null | 1601.00574 | null | null |
Robust Non-linear Regression: A Greedy Approach Employing Kernels with
Application to Image Denoising | cs.LG stat.ML | We consider the task of robust non-linear regression in the presence of both
inlier noise and outliers. Assuming that the unknown non-linear function
belongs to a Reproducing Kernel Hilbert Space (RKHS), our goal is to estimate
the set of the associated unknown parameters. Due to the presence of outliers,
common techniques such as the Kernel Ridge Regression (KRR) or the Support
Vector Regression (SVR) turn out to be inadequate. Instead, we employ sparse
modeling arguments to explicitly model and estimate the outliers, adopting a
greedy approach. The proposed robust scheme, i.e., Kernel Greedy Algorithm for
Robust Denoising (KGARD), is inspired by the classical Orthogonal Matching
Pursuit (OMP) algorithm. Specifically, the proposed method alternates between a
KRR task and an OMP-like selection step. Theoretical results concerning the
identification of the outliers are provided. Moreover, KGARD is compared
against other cutting-edge methods, where its performance is evaluated via a
set of experiments with various types of noise. Finally, the proposed robust
estimation framework is applied to the task of image denoising, and its
enhanced performance in the presence of outliers is demonstrated.
| George Papageorgiou, Pantelis Bouboulis and Sergios Theodoridis | 10.1109/TSP.2017.2708029 | 1601.00595 | null | null |
Scalable Models for Computing Hierarchies in Information Networks | cs.AI cs.DL cs.LG | Information hierarchies are organizational structures that are often used to
organize and present large and complex information, as well as to provide a
mechanism for effective human navigation. Fortunately, many statistical and
computational models exist that automatically generate hierarchies; however,
the existing approaches do not consider linkages in information {\em networks}
that are increasingly common in real-world scenarios. Current approaches also
tend to present topics as abstract probability distributions over words rather
than as tangible nodes from the original network. Furthermore, the
statistical techniques present in many previous works are not yet capable of
processing data at Web-scale. In this paper we present the Hierarchical
Document Topic Model (HDTM), which uses a distributed vertex-programming
process to calculate a nonparametric Bayesian generative model. Experiments on
three medium-sized data sets and the entire Wikipedia dataset show that HDTM can
infer accurate hierarchies even over large information networks.
| Baoxu Shi and Tim Weninger | null | 1601.00626 | null | null |
Variational Inference: A Review for Statisticians | stat.CO cs.LG stat.ML | One of the core problems of modern statistics is to approximate
difficult-to-compute probability densities. This problem is especially
important in Bayesian statistics, which frames all inference about unknown
quantities as a calculation involving the posterior density. In this paper, we
review variational inference (VI), a method from machine learning that
approximates probability densities through optimization. VI has been used in
many applications and tends to be faster than classical methods, such as Markov
chain Monte Carlo sampling. The idea behind VI is to first posit a family of
densities and then to find the member of that family which is close to the
target. Closeness is measured by Kullback-Leibler divergence. We review the
ideas behind mean-field variational inference, discuss the special case of VI
applied to exponential family models, present a full example with a Bayesian
mixture of Gaussians, and derive a variant that uses stochastic optimization to
scale up to massive data. We discuss modern research in VI and highlight
important open problems. VI is powerful, but it is not yet well understood. Our
hope in writing this paper is to catalyze statistical research on this class of
algorithms.
| David M. Blei, Alp Kucukelbir, Jon D. McAuliffe | 10.1080/01621459.2017.1285773 | 1601.00670 | null | null |
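As a concrete illustration, here is a minimal CAVI (coordinate ascent variational inference) sketch for a Bayesian mixture of unit-variance Gaussians, the full example discussed in the paper; the prior, initialization and stopping rule are simplified for brevity.

```python
import numpy as np

def cavi_gmm(x, K, sigma2=10.0, iters=100, seed=0):
    """CAVI for a Bayesian mixture of K unit-variance Gaussians.

    Model: mu_k ~ N(0, sigma2), c_i ~ Uniform(K), x_i | c_i ~ N(mu_{c_i}, 1).
    Variational family: q(mu_k) = N(m_k, s2_k), q(c_i) = Categorical(phi_i).
    """
    rng = np.random.default_rng(seed)
    m = rng.normal(size=K)          # variational means
    s2 = np.ones(K)                 # variational variances
    for _ in range(iters):
        # Update cluster responsibilities phi (n x K).
        logits = np.outer(x, m) - 0.5 * (s2 + m**2)
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        phi = np.exp(logits)
        phi /= phi.sum(axis=1, keepdims=True)
        # Update the Gaussian factors for the mixture means.
        nk = phi.sum(axis=0)
        m = (phi * x[:, None]).sum(axis=0) / (1.0 / sigma2 + nk)
        s2 = 1.0 / (1.0 / sigma2 + nk)
    return m, s2, phi

x = np.concatenate([np.random.default_rng(1).normal(-3, 1, 200),
                    np.random.default_rng(2).normal(3, 1, 200)])
m, s2, phi = cavi_gmm(x, K=2)
print(np.sort(m))   # approximate posterior means, close to [-3, 3]
```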
Nonlinear Hebbian learning as a unifying principle in receptive field
formation | q-bio.NC cs.LG | The development of sensory receptive fields has been modeled in the past by a
variety of models including normative models such as sparse coding or
independent component analysis and bottom-up models such as spike-timing
dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic
plasticity. Here we show that the above variety of approaches can all be
unified into a single common principle, namely Nonlinear Hebbian Learning. When
Nonlinear Hebbian Learning is applied to natural images, receptive field shapes
were strongly constrained by the input statistics and preprocessing, but
exhibited only modest variation across different choices of nonlinearities in
neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse
network activity are necessary for the development of localized receptive
fields. The analysis of alternative sensory modalities such as auditory models
or V2 development leads to the same conclusions. In all examples, receptive
fields can be predicted a priori by reformulating an abstract model as
nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural
statistics can account for many aspects of receptive field formation across
models and sensory modalities.
| Carlos S. N. Brito, Wulfram Gerstner | 10.1371/journal.pcbi.1005070 | 1601.00701 | null | null |
Weakly-supervised Disentangling with Recurrent Transformations for 3D
View Synthesis | cs.LG cs.AI cs.CV | An important problem for both graphics and vision is to synthesize novel
views of a 3D object from a single image. This is particularly challenging due
to the partial observability inherent in projecting a 3D object onto the image
space, and the ill-posedness of inferring object shape and pose. However, we
can train a neural network to address the problem if we restrict our attention
to specific object categories (in our case faces and chairs) for which we can
gather ample training data. In this paper, we propose a novel recurrent
convolutional encoder-decoder network that is trained end-to-end on the task of
rendering rotated objects starting from a single image. The recurrent structure
allows our model to capture long-term dependencies along a sequence of
transformations. We demonstrate the quality of its predictions for human faces
on the Multi-PIE dataset and for a dataset of 3D chair models, and also show
its ability to disentangle latent factors of variation (e.g., identity and
pose) without using full supervision.
| Jimei Yang, Scott Reed, Ming-Hsuan Yang, Honglak Lee | null | 1601.00706 | null | null |
Low-Rank Representation over the Manifold of Curves | cs.CV cs.LG | In machine learning it is common to interpret each data point as a vector in
Euclidean space. However, the data may actually be functional, i.e.\ each data
point is a function of some variable such as time, and the function is
discretely sampled. The naive treatment of functional data as traditional
multivariate data can lead to poor performance since the algorithms are
ignoring the correlation in the curvature of each function. In this paper we
propose a method to analyse subspace structure of the functional data by using
the state of the art Low-Rank Representation (LRR). Experimental evaluation on
synthetic and real data reveals that this method massively outperforms
conventional LRR in tasks concerning functional data.
| Stephen Tierney, Junbin Gao, Yi Guo and Zhengwu Zhang | null | 1601.00732 | null | null |
Brain4Cars: Car That Knows Before You Do via Sensory-Fusion Deep
Learning Architecture | cs.RO cs.CV cs.LG | Advanced Driver Assistance Systems (ADAS) have made driving safer over the
last decade. They prepare vehicles for unsafe road conditions and alert drivers
if they perform a dangerous maneuver. However, many accidents are unavoidable
because by the time drivers are alerted, it is already too late. Anticipating
maneuvers beforehand can alert drivers before they perform the maneuver and
also give ADAS more time to avoid or prepare for the danger.
In this work we propose a vehicular sensor-rich platform and learning
algorithms for maneuver anticipation. For this purpose we equip a car with
cameras, Global Positioning System (GPS), and a computing device to capture the
driving context from both inside and outside of the car. In order to anticipate
maneuvers, we propose a sensory-fusion deep learning architecture which jointly
learns to anticipate and fuse multiple sensory streams. Our architecture
consists of Recurrent Neural Networks (RNNs) that use Long Short-Term Memory
(LSTM) units to capture long temporal dependencies. We propose a novel training
procedure which allows the network to predict the future given only a partial
temporal context. We introduce a diverse data set with 1180 miles of natural
freeway and city driving, and show that we can anticipate maneuvers 3.5 seconds
before they occur in real-time with a precision and recall of 90.5\% and 87.4\%
respectively.
| Ashesh Jain, Hema S Koppula, Shane Soh, Bharad Raghavan, Avi Singh,
Ashutosh Saxena | null | 1601.00740 | null | null |
Learning Preferences for Manipulation Tasks from Online Coactive
Feedback | cs.RO cs.AI cs.LG | We consider the problem of learning preferences over trajectories for mobile
manipulators such as personal robots and assembly line robots. The preferences
we learn are more intricate than simple geometric constraints on trajectories;
they are rather governed by the surrounding context of various objects and
human interactions in the environment. We propose a coactive online learning
framework for teaching preferences in contextually rich environments. The key
novelty of our approach lies in the type of feedback expected from the user:
the human user does not need to demonstrate optimal trajectories as training
data, but merely needs to iteratively provide trajectories that slightly
improve over the trajectory currently proposed by the system. We argue that
this coactive preference feedback can be more easily elicited than
demonstrations of optimal trajectories. Nevertheless, theoretical regret bounds
of our algorithm match the asymptotic rates of optimal trajectory algorithms.
We implement our algorithm on two high degree-of-freedom robots, PR2 and
Baxter, and present three intuitive mechanisms for providing such incremental
feedback. In our experimental evaluation we consider two context rich settings
-- household chores and grocery store checkout -- and show that users are able
to train the robot with just a few feedbacks (taking only a few
minutes).\footnote{Parts of this work has been published at NIPS and ISRR
conferences~\citep{Jain13,Jain13b}. This journal submission presents a
consistent full paper, and also includes the proof of regret bounds, more
details of the robotic system, and a thorough related work.}
| Ashesh Jain, Shikhar Sharma, Thorsten Joachims, Ashutosh Saxena | null | 1601.00741 | null | null |
End-to-End Relation Extraction using LSTMs on Sequences and Tree
Structures | cs.CL cs.LG | We present a novel end-to-end neural model to extract entities and relations
between them. Our recurrent neural network based model captures both word
sequence and dependency tree substructure information by stacking bidirectional
tree-structured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows
our model to jointly represent both entities and relations with shared
parameters in a single model. We further encourage detection of entities during
training and use of entity information in relation extraction via entity
pretraining and scheduled sampling. Our model improves over the
state-of-the-art feature-based model on end-to-end relation extraction,
achieving 12.1% and 5.7% relative error reductions in F1-score on ACE2005 and
ACE2004, respectively. We also show that our LSTM-RNN based model compares
favorably to the state-of-the-art CNN based model (in F1-score) on nominal
relation classification (SemEval-2010 Task 8). Finally, we present an extensive
ablation analysis of several model components.
| Makoto Miwa and Mohit Bansal | null | 1601.00770 | null | null |
Open challenges in understanding development and evolution of speech
forms: The roles of embodied self-organization, motivation and active
exploration | cs.AI cs.CL cs.CY cs.LG | This article discusses open scientific challenges for understanding
development and evolution of speech forms, as a commentary to Moulin-Frier et
al. (Moulin-Frier et al., 2015). Based on the analysis of mathematical models
of the origins of speech forms, with a focus on their assumptions, we study
the fundamental question of how speech can be formed out of non-speech, at
both developmental and evolutionary scales. In particular, we emphasize the
importance of embodied self-organization, as well as the role of mechanisms of
motivation and active curiosity-driven exploration in speech formation.
Finally, we discuss an evolutionary-developmental perspective of the origins
of speech.
| Pierre-Yves Oudeyer (Flowers) | 10.1016/j.wocn.2015.09.001 | 1601.00816 | null | null |
DrMAD: Distilling Reverse-Mode Automatic Differentiation for Optimizing
Hyperparameters of Deep Neural Networks | cs.LG cs.NE | The performance of deep neural networks is well-known to be sensitive to the
setting of their hyperparameters. Recent advances in reverse-mode automatic
differentiation allow for optimizing hyperparameters with gradients. The
standard way of computing these gradients involves a forward and backward pass
of computations. However, the backward pass usually requires a prohibitive
amount of memory to store all the intermediate variables needed to exactly
reverse the forward training procedure. In this work we propose a simple but
effective
method, DrMAD, to distill the knowledge of the forward pass into a shortcut
path, through which we approximately reverse the training trajectory.
Experiments on several image benchmark datasets show that DrMAD is at least 45
times faster and consumes 100 times less memory compared to state-of-the-art
methods for optimizing hyperparameters with minimal compromise to its
effectiveness. To the best of our knowledge, DrMAD is the first research
attempt to make it practical to automatically tune thousands of hyperparameters
of deep neural networks. The code can be downloaded from
https://github.com/bigaidream-projects/drmad
| Jie Fu, Hongyin Luo, Jiashi Feng, Kian Hsiang Low, Tat-Seng Chua | null | 1601.00917 | null | null |
Complex Decomposition of the Negative Distance kernel | cs.LG | A Support Vector Machine (SVM) has become a very popular machine learning
method for text classification. One reason for this relates to the range of
existing kernels which allow for classifying data that is not linearly
separable. The linear, polynomial and RBF (Gaussian Radial Basis Function)
kernel are commonly used and serve as a basis of comparison in our study. We
show how to derive the primal form of the quadratic Power Kernel (PK) -- also
called the Negative Euclidean Distance Kernel (NDK) -- by means of complex
numbers. We exemplify the NDK in the framework of text categorization using the
Dewey Document Classification (DDC) as the target scheme. Our evaluation shows
that the power kernel produces F-scores that are comparable to the reference
kernels, but is -- except for the linear kernel -- faster to compute. Finally,
we show how to extend the NDK-approach by including the Mahalanobis distance.
| Tim vor der Br\"uck, Steffen Eger, Alexander Mehler | null | 1601.00925 | null | null |
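A small sketch of using the quadratic NDK, k(x, y) = -||x - y||^2, as a precomputed kernel; the text-categorization pipeline (DDC features) and the complex-number primal form are not reproduced, and the toy data is made up. Since the NDK is only conditionally positive definite, this relies on the common observation that SVMs with a bias term can typically be trained on CPD kernels.

```python
import numpy as np
from sklearn.svm import SVC

def ndk_gram(X, Y=None):
    """Negative squared Euclidean Distance Kernel: k(x, y) = -||x - y||^2."""
    Y = X if Y is None else Y
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return -sq

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Train an SVM on the precomputed NDK Gram matrix.
clf = SVC(kernel='precomputed').fit(ndk_gram(X), y)
print(clf.score(ndk_gram(X), y))   # training accuracy
```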
Optimally Pruning Decision Tree Ensembles With Feature Cost | stat.ML cs.LG | We consider the problem of learning decision rules for prediction with
feature budget constraint. In particular, we are interested in pruning an
ensemble of decision trees to reduce expected feature cost while maintaining
high prediction accuracy for any test example. We propose a novel 0-1 integer
program formulation for ensemble pruning. Our pruning formulation is general -
it takes any ensemble of decision trees as input. By explicitly accounting for
feature-sharing across trees together with accuracy/cost trade-off, our method
is able to significantly reduce feature cost by pruning subtrees that introduce
more loss in terms of feature cost than benefit in terms of prediction accuracy
gain. Theoretically, we prove that a linear programming relaxation produces the
exact solution of the original integer program. This allows us to use efficient
convex optimization tools to obtain an optimally pruned ensemble for any given
budget. Empirically, we see that our pruning algorithm significantly improves
the performance of the state of the art ensemble method BudgetRF.
| Feng Nan, Joseph Wang, Venkatesh Saligrama | null | 1601.00955 | null | null |
A Survey on Social Media Anomaly Detection | cs.LG cs.SI | Social media anomaly detection is of critical importance to prevent malicious
activities such as bullying, terrorist attack planning, and fraud information
dissemination. With the recent popularity of social media, new types of
anomalous behaviors arise, causing concerns from various parties. While a large
amount of work has been dedicated to traditional anomaly detection problems,
we observe a surge of research interests in the new realm of social media
anomaly detection. In this paper, we present a survey on existing approaches to
address this problem. We focus on the new types of anomalous phenomena in
social media and review recently developed techniques to detect those special
types of anomalies. We provide a general overview of the problem domain, common
formulations, existing methodologies and potential directions. With this work,
we hope to draw the attention of the research community to this challenging
problem and open up new directions to which we can contribute in the
future.
| Rose Yu, Huida Qiu, Zhen Wen, Ching-Yung Lin, Yan Liu | null | 1601.01102 | null | null |
A pragmatic approach to multi-class classification | cs.LG | We present a novel hierarchical approach to multi-class classification which
is generic in that it can be applied to different classification models (e.g.,
support vector machines, perceptrons), and makes no explicit assumptions about
the probabilistic structure of the problem as it is usually done in multi-class
classification. By adding a cascade of additional classifiers, each of which
receives the previous classifier's output in addition to regular input data,
the approach harnesses unused information that manifests itself in the form of,
e.g., correlations between predicted classes. Using multilayer perceptrons as a
classification model, we demonstrate the validity of this approach by testing
it on a complex ten-class 3D gesture recognition task.
| Thomas Kopinski, St\'ephane Magand (ENSTA ParisTech U2IS/RV), Uwe
Handmann, Alexander Gepperth (Flowers, ENSTA ParisTech U2IS/RV) | 10.1109/IJCNN.2015.7280768 | 1601.01121 | null | null |
Streaming Gibbs Sampling for LDA Model | cs.LG stat.ML | Streaming variational Bayes (SVB) is successful in learning LDA models in an
online manner. However, previous attempts at developing online Monte Carlo
methods for LDA have had little success, often yielding much worse perplexity
than their batch counterparts. We present a streaming Gibbs sampling (SGS)
method,
an online extension of the collapsed Gibbs sampling (CGS). Our empirical study
shows that SGS can reach similar perplexity as CGS, much better than SVB. Our
distributed version of SGS, DSGS, is much more scalable than SVB mainly because
the updates' communication complexity is small.
| Yang Gao, Jianfei Chen, Jun Zhu | null | 1601.01142 | null | null |
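For reference, one sweep of the collapsed Gibbs sampler that SGS extends; a streaming variant would process each document once while keeping the global topic-word counts. Hyperparameters and toy data below are purely illustrative.

```python
import numpy as np

def cgs_step(docs, z, ndk, nkw, nk, alpha, beta, V, rng):
    """One sweep of collapsed Gibbs sampling for LDA.

    docs: list of word-id arrays; z: list of topic-assignment arrays;
    ndk, nkw, nk: document-topic, topic-word, and topic count tables.
    """
    K = ndk.shape[1]
    for d, words in enumerate(docs):
        for i, w in enumerate(words):
            k = z[d][i]
            # Remove the current assignment from the counts.
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            # Full conditional over topics for word w in document d.
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = rng.choice(K, p=p / p.sum())
            # Add the new assignment back.
            z[d][i] = k
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

V, K = 50, 3
rng = np.random.default_rng(0)
docs = [rng.integers(V, size=20) for _ in range(10)]
z = [rng.integers(K, size=len(d)) for d in docs]
ndk = np.zeros((len(docs), K), int); nkw = np.zeros((K, V), int); nk = np.zeros(K, int)
for d, words in enumerate(docs):
    for w, k in zip(words, z[d]):
        ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
for _ in range(20):
    cgs_step(docs, z, ndk, nkw, nk, alpha=0.1, beta=0.01, V=V, rng=rng)
```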
A simple technique for improving multi-class classification with neural
networks | cs.LG | We present a novel method to perform multi-class pattern classification with
neural networks and test it on a challenging 3D hand gesture recognition
problem. Our method consists of a standard one-against-all (OAA)
classification, followed by another network layer classifying the resulting
class scores, possibly augmented by the original raw input vector. This allows
the network to disambiguate hard-to-separate classes as the distribution of
class scores carries considerable information as well, and is in fact often
used for assessing the confidence of a decision. We show that by this approach
we are able to significantly boost our results, overall as well as for
particular difficult cases, on the hard 10-class gesture classification task.
| Thomas Kopinski, Alexander Gepperth (ENSTA ParisTech U2IS/RV,
Flowers), Uwe Handmann | null | 1601.01157 | null | null |
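A minimal sketch of the idea with scikit-learn, using synthetic data in place of the 3D gesture set: a one-against-all linear classifier produces class scores, which, augmented by the raw input, feed a second classifier.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: a standard one-against-all classifier (LinearSVC is OvR).
oaa = LinearSVC(max_iter=5000).fit(X_tr, y_tr)

# Stage 2: feed the OAA class scores, augmented by the raw input,
# into a second classifier that can exploit correlations between scores.
Z_tr = np.hstack([oaa.decision_function(X_tr), X_tr])
Z_te = np.hstack([oaa.decision_function(X_te), X_te])
stage2 = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500,
                       random_state=0).fit(Z_tr, y_tr)

print("OAA alone :", oaa.score(X_te, y_te))
print("two-stage :", stage2.score(Z_te, y_te))
```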
Adaptive and Efficient Nonlinear Channel Equalization for Underwater
Acoustic Communication | cs.LG cs.IT cs.SD math.IT | We investigate underwater acoustic (UWA) channel equalization and introduce
hierarchical and adaptive nonlinear channel equalization algorithms that are
highly efficient and provide significantly improved bit error rate (BER)
performance. Due to the high complexity of nonlinear equalizers and poor
performance of linear ones, to equalize highly difficult underwater acoustic
channels, we employ piecewise linear equalizers. However, in order to achieve
the performance of the best piecewise linear model, we use a tree structure to
hierarchically partition the space of the received signal. Furthermore, the
equalization algorithm should be completely adaptive, since due to the highly
non-stationary nature of the underwater medium, the optimal MSE equalizer as
well as the best piecewise linear equalizer changes in time. To this end, we
introduce an adaptive piecewise linear equalization algorithm that not only
adapts the linear equalizer at each region but also learns the complete
hierarchical structure with a computational complexity only polynomial in the
number of nodes of the tree. Furthermore, our algorithm is constructed to
directly minimize the final squared error without introducing any ad-hoc
parameters. We demonstrate the performance of our algorithms through highly
realistic experiments performed on accurately simulated underwater acoustic
channels.
| Dariush Kari and Nuri Denizcan Vanli and Suleyman Serdar Kozat | null | 1601.01218 | null | null |
Angrier Birds: Bayesian reinforcement learning | cs.AI cs.LG | We train a reinforcement learner to play a simplified version of the game
Angry Birds. The learner is provided with a game state in a manner similar to
the output that could be produced by computer vision algorithms. We improve on
the efficiency of regular {\epsilon}-greedy Q-Learning with linear function
approximation through more systematic exploration in Randomized Least Squares
Value Iteration (RLSVI), an algorithm that samples its policy from a posterior
distribution on optimal policies. With larger state-action spaces, efficient
exploration becomes increasingly important, as evidenced by the faster learning
in RLSVI.
| Imanol Arrieta Ibarra, Bernardo Ramos, Lars Roemheld | null | 1601.01297 | null | null |
From Word Embeddings to Item Recommendation | cs.LG cs.CL cs.IR cs.SI | Social network platforms can use the data produced by their users to serve
them better. One of the services these platforms provide is recommendation
service. Recommendation systems can predict the future preferences of users
using their past preferences. In the recommendation systems literature there
are various techniques, such as neighborhood based methods, machine-learning
based methods and matrix-factorization based methods. In this work, a set of
well known methods from natural language processing domain, namely Word2Vec, is
applied to recommendation systems domain. Unlike previous works that use
Word2Vec for recommendation, this work uses non-textual features, the
check-ins, and it recommends venues to visit/check-in to the target users. For
the experiments, a Foursquare check-in dataset is used. The results show that
use of continuous vector space representations of items modeled by techniques
of Word2Vec is promising for making recommendations.
| Makbule Gulcin Ozsoy | null | 1601.01356 | null | null |
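A minimal sketch, assuming the gensim 4.x API and made-up venue ids: each user's time-ordered check-in sequence plays the role of a sentence, venues play the role of words, and nearest neighbours in the learned vector space become candidate recommendations.

```python
from gensim.models import Word2Vec

# Each "sentence" is one user's time-ordered check-in history; each
# "word" is a venue id (hypothetical toy data).
checkins = [
    ["cafe_a", "museum", "cafe_b", "park"],
    ["cafe_a", "cafe_b", "bar_x"],
    ["park", "museum", "cafe_a"],
    ["bar_x", "cafe_b", "park"],
]

model = Word2Vec(checkins, vector_size=16, window=2, min_count=1,
                 sg=1, epochs=200, seed=0)

# Recommend venues similar to one a target user already visits.
print(model.wv.most_similar("cafe_a", topn=3))
```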
Learning Kernels for Structured Prediction using Polynomial Kernel
Transformations | cs.LG stat.ML | Learning the kernel functions used in kernel methods has been a vastly
explored area in machine learning. It is now widely accepted that to obtain
'good' performance, learning a kernel function is the key challenge. In this
work we focus on learning kernel representations for structured regression. We
propose use of polynomials expansion of kernels, referred to as Schoenberg
transforms and Gegenbaur transforms, which arise from the seminal result of
Schoenberg (1938). These kernels can be thought of as polynomial combination of
input features in a high dimensional reproducing kernel Hilbert space (RKHS).
We learn kernels over input and output for structured data, such that,
dependency between kernel features is maximized. We use Hilbert-Schmidt
Independence Criterion (HSIC) to measure this. We also give an efficient,
matrix decomposition-based algorithm to learn these kernel transformations, and
demonstrate state-of-the-art results on several real-world datasets.
| Chetan Tonde and Ahmed Elgammal | null | 1601.01411 | null | null |
Fast Kronecker product kernel methods via generalized vec trick | stat.ML cs.LG | Kronecker product kernel provides the standard approach in the kernel methods
literature for learning from graph data, where edges are labeled and both start
and end vertices have their own feature representations. The methods allow
generalization to such new edges, whose start and end vertices do not appear in
the training data, a setting known as zero-shot or zero-data learning. Such a
setting occurs in numerous applications, including drug-target interaction
prediction, collaborative filtering and information retrieval. Efficient
training algorithms based on the so-called vec trick, that makes use of the
special structure of the Kronecker product, are known for the case where the
training data is a complete bipartite graph. In this work we generalize these
results to non-complete training graphs. This allows us to derive a general
framework for training Kronecker product kernel methods, as specific examples
we implement Kronecker ridge regression and support vector machine algorithms.
Experimental results demonstrate that the proposed approach leads to accurate
models, while allowing order of magnitude improvements in training and
prediction time.
| Antti Airola, Tapio Pahikkala | 10.1109/TNNLS.2017.2727545 | 1601.01507 | null | null |
State Space representation of non-stationary Gaussian Processes | cs.LG stat.ML | The state space (SS) representation of Gaussian processes (GP) has recently
gained a lot of interest. The main reason is that it allows GP-based inference
to be computed in $O(n)$, where $n$ is the number of observations. This
implementation makes GPs suitable for Big Data. For this reason, it is
important to provide a SS representation of the most important kernels used in
machine learning. The aim of this paper is to show how to exploit the transient
behaviour of SS models to map non-stationary kernels to SS models.
| Alessio Benavoli and Marco Zaffalon | null | 1601.01544 | null | null |
An Automaton Learning Approach to Solving Safety Games over Infinite
Graphs | cs.FL cs.LG cs.LO | We propose a method to construct finite-state reactive controllers for
systems whose interactions with their adversarial environment are modeled by
infinite-duration two-player games over (possibly) infinite graphs. The
proposed method targets safety games with infinitely many states or with such a
large number of states that it would be impractical---if not impossible---for
conventional synthesis techniques that work on the entire state space. We
resort to constructing finite-state controllers for such systems through an
automata learning approach, utilizing a symbolic representation of the
underlying game that is based on finite automata. Throughout the learning
process, the learner maintains an approximation of the winning region
(represented as a finite automaton) and refines it using different types of
counterexamples provided by the teacher until a satisfactory controller can be
derived (if one exists). We present a symbolic representation of safety games
(inspired by regular model checking), propose implementations of the learner
and teacher, and evaluate their performance on examples motivated by robotic
motion planning in dynamic environments.
| Daniel Neider, Ufuk Topcu | null | 1601.01660 | null | null |
Ensemble Methods of Classification for Power Systems Security Assessment | cs.AI cs.LG | One of the most promising approaches for complex technical systems analysis
employs ensemble methods of classification. Ensemble methods enable to build a
reliable decision rules for feature space classification in the presence of
many possible states of the system. In this paper, novel techniques based on
decision trees are used for evaluation of the reliability of the regime of
electric power systems. We propose a hybrid approach based on random forest
models and boosting models. Such techniques can be applied to predict the
interaction of increasing renewable power, storage devices and the switching of
smart loads, from intelligent domestic appliances, heaters and air-conditioning
units to electric vehicles, with the grid for enhanced decision making. The
ensemble classification methods were tested on the modified 118-bus IEEE power
system, showing that the proposed technique can be employed to examine whether
the power system is secure under steady-state operating conditions.
| Alexei Zhukov, Victor Kurbatsky, Nikita Tomin, Denis Sidorov, Daniil
Panasetsky and Aoife Foley | null | 1601.01675 | null | null |
Dense Bag-of-Temporal-SIFT-Words for Time Series Classification | cs.LG | Time series classification is an application of particular interest with the
increase of data to monitor. Classical techniques for time series
classification rely on point-to-point distances. Recently, Bag-of-Words
approaches have been used in this context. Words are quantized versions of
simple features extracted from sliding windows. The SIFT framework has proved
efficient for image classification. In this paper, we design a time series
classification scheme that builds on the SIFT framework adapted to time series
to feed a Bag-of-Words. We then refine our method by studying the impact of
normalized Bag-of-Words, as well as of densely extracted point descriptors. The
proposed adjustments achieve better performance. The evaluation shows that our
method outperforms classical techniques in terms of classification.
| Adeline Bailly (LETG - Costel, OBELIX), Simon Malinowski (LinkMedia),
Romain Tavenard (LETG - Costel, OBELIX), Thomas Guyet (DREAM), Laetitia
Chapel (OBELIX) | null | 1601.01799 | null | null |
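A schematic Bag-of-Words pipeline for time series: slide windows over each series, describe them (raw z-normalized subsequences stand in here for the SIFT-like descriptors of the paper), quantize with k-means, and histogram the resulting words per series.

```python
import numpy as np
from sklearn.cluster import KMeans

def bag_of_words_ts(series_list, win=16, step=4, k=8, seed=0):
    """Bag-of-Words for time series via quantized sliding-window descriptors."""
    descs, owners = [], []
    for i, s in enumerate(series_list):
        for t in range(0, len(s) - win + 1, step):
            w = s[t:t + win]
            descs.append((w - w.mean()) / (w.std() + 1e-8))  # normalize window
            owners.append(i)
    words = KMeans(n_clusters=k, random_state=seed,
                   n_init=10).fit_predict(np.asarray(descs))
    # One normalized histogram of word counts per series.
    H = np.zeros((len(series_list), k))
    for o, w in zip(owners, words):
        H[o, w] += 1
    return H / H.sum(axis=1, keepdims=True)

series = [np.sin(np.linspace(0, 8, 100))
          + 0.1 * np.random.default_rng(i).normal(size=100) for i in range(5)]
print(bag_of_words_ts(series).shape)   # (5, 8): one histogram per series
```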
Song Recommendation with Non-Negative Matrix Factorization and Graph
Total Variation | stat.ML cs.IR cs.LG physics.data-an | This work formulates a novel song recommender system as a matrix completion
problem that benefits from collaborative filtering through Non-negative Matrix
Factorization (NMF) and content-based filtering via total variation (TV) on
graphs. The graphs encode both playlist proximity information and song
similarity, using a rich combination of audio, meta-data and social features.
As we demonstrate, our hybrid recommendation system is very versatile and
incorporates several well-known methods while outperforming them. In
particular, we show on real-world data that, with respect to two evaluation
metrics, our model outperforms recommendation models based solely on low-rank
information, graph-based information or a combination of both.
| Kirell Benzi, Vassilis Kalofolias, Xavier Bresson, Pierre
Vandergheynst | null | 1601.01892 | null | null |
Nonparametric semi-supervised learning of class proportions | stat.ML cs.LG | The problem of developing binary classifiers from positive and unlabeled data
is often encountered in machine learning. A common requirement in this setting
is to approximate posterior probabilities of positive and negative classes for
a previously unseen data point. This problem can be decomposed into two steps:
(i) the development of accurate predictors that discriminate between positive
and unlabeled data, and (ii) the accurate estimation of the prior probabilities
of positive and negative examples. In this work we primarily focus on the
latter subproblem. We study nonparametric class prior estimation and formulate
this problem as an estimation of mixing proportions in two-component mixture
models, given a sample from one of the components and another sample from the
mixture itself. We show that estimation of mixing proportions is generally
ill-defined and propose a canonical form to obtain identifiability while
maintaining the flexibility to model any distribution. We use insights from
this theory to elucidate the optimization surface of the class priors and
propose an algorithm for estimating them. To address the problems of
high-dimensional density estimation, we provide practical transformations to
low-dimensional spaces that preserve class priors. Finally, we demonstrate the
efficacy of our method on univariate and multivariate data.
| Shantanu Jain, Martha White, Michael W. Trosset, Predrag Radivojac | null | 1601.01944 | null | null |
Scale-Free Online Learning | cs.LG | We design and analyze algorithms for online linear optimization that have
optimal regret and at the same time do not need to know any upper or lower
bounds on the norm of the loss vectors. Our algorithms are instances of the
Follow the Regularized Leader (FTRL) and Mirror Descent (MD) meta-algorithms.
We achieve adaptiveness to the norms of the loss vectors by scale invariance,
i.e., our algorithms make exactly the same decisions if the sequence of loss
vectors is multiplied by any positive constant. The algorithm based on FTRL
works for any decision set, bounded or unbounded. For unbounded decision sets,
this is the first adaptive algorithm for online linear optimization with a
non-vacuous regret bound. In contrast, we show lower bounds on scale-free
algorithms based on MD on unbounded domains.
| Francesco Orabona and D\'avid P\'al | null | 1601.01974 | null | null |
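The scale-invariance property is easy to illustrate numerically. The sketch below is not the paper's FTRL or MD algorithm, just projected online gradient descent on a Euclidean ball with step size D / sqrt(sum of squared gradient norms); multiplying every loss vector by a positive constant leaves the iterates unchanged.

```python
import numpy as np

def scale_free_ogd(grads, D=1.0):
    """Projected gradient descent with step D / sqrt(sum ||g||^2).

    Multiplying every loss (hence every gradient) by a constant c > 0
    leaves the iterates unchanged, since the step size rescales by 1/c.
    """
    x = np.zeros(grads.shape[1])
    xs, sq = [], 0.0
    for g in grads:
        sq += g @ g
        x = x - (D / np.sqrt(sq)) * g
        x = x * min(1.0, D / np.linalg.norm(x))  # project onto ||x|| <= D
        xs.append(x.copy())
    return np.array(xs)

g = np.random.default_rng(0).normal(size=(100, 5))
print(np.allclose(scale_free_ogd(g), scale_free_ogd(1000.0 * g)))  # True
```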
A note on the sample complexity of the Er-SpUD algorithm by Spielman,
Wang and Wright for exact recovery of sparsely used dictionaries | math.PR cs.LG math.ST stat.TH | We consider the problem of recovering an invertible $n \times n$ matrix $A$
and a sparse $n \times p$ random matrix $X$ based on the observation of $Y =
AX$ (up to a scaling and permutation of columns of $A$ and rows of $X$). Using
only elementary tools from the theory of empirical processes we show that a
version of the Er-SpUD algorithm by Spielman, Wang and Wright with high
probability recovers $A$ and $X$ exactly, provided that $p \ge Cn\log n$, which
is optimal up to the constant $C$.
| Rados{\l}aw Adamczak | null | 1601.02049 | null | null |
On Computationally Tractable Selection of Experiments in
Measurement-Constrained Regression Models | stat.ML cs.LG math.ST stat.TH | We derive computationally tractable methods to select a small subset of
experiment settings from a large pool of given design points. The primary focus
is on linear regression models, while the technique extends to generalized
linear models and Delta's method (estimating functions of linear regression
models) as well. The algorithms are based on a continuous relaxation of an
otherwise intractable combinatorial optimization problem, with sampling or
greedy procedures as post-processing steps. Formal approximation guarantees are
established for both algorithms, and numerical results on both synthetic and
real-world data confirm the effectiveness of the proposed methods.
| Yining Wang and Adams Wei Yu and Aarti Singh | null | 1601.02068 | null | null |
On Clustering Time Series Using Euclidean Distance and Pearson
Correlation | cs.LG cs.AI stat.ML | For time series comparisons, it has often been observed that z-score
normalized Euclidean distances far outperform the unnormalized variant. In this
paper we show that a z-score normalized, squared Euclidean Distance is, in
fact, equal to a distance based on Pearson Correlation. This has profound
impact on many distance-based classification or clustering methods. In addition
to this theoretically sound result, we also show that the often-used k-Means
algorithm formally needs a modification to keep the interpretation as Pearson
correlation strictly valid. Experimental results demonstrate that in many cases
the standard k-Means algorithm produces the same results.
| Michael R. Berthold and Frank H\"oppner | null | 1601.02213 | null | null |
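The central identity is easy to verify numerically: for series z-normalized with the population standard deviation, the squared Euclidean distance equals 2n(1 - r), where r is the Pearson correlation of the original series.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=100), rng.normal(size=100)

def znorm(v):
    return (v - v.mean()) / v.std()   # population std (ddof=0)

n = len(x)
d2 = np.sum((znorm(x) - znorm(y))**2)   # z-normalized squared Euclidean distance
r = np.corrcoef(x, y)[0, 1]             # Pearson correlation

print(np.isclose(d2, 2 * n * (1 - r)))  # True
```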
A Sufficient Statistics Construction of Bayesian Nonparametric
Exponential Family Conjugate Models | cs.LG stat.ML | Conjugate pairs of distributions over infinite dimensional spaces are
prominent in statistical learning theory, particularly due to the widespread
adoption of Bayesian nonparametric methodologies for a host of models and
applications. Much of the existing literature in the learning community focuses
on processes possessing some form of computationally tractable conjugacy as is
the case for the beta and gamma processes (and, via normalization, the
Dirichlet process). For these processes, proofs of conjugacy and requisite
derivation of explicit computational formulae for posterior density parameters
are idiosyncratic to the stochastic process in question. As such, Bayesian
Nonparametric models are currently available for a limited number of conjugate
pairs, e.g. the Dirichlet-multinomial and beta-Bernoulli process pairs. In each
of the above cases the likelihood process belongs to the class of discrete
exponential family distributions. The exclusion of continuous likelihood
distributions from the known cases of Bayesian Nonparametric Conjugate models
stands as a gap in the researcher's toolbox.
In this paper we first address the problem of obtaining a general
construction of prior distributions over infinite dimensional spaces possessing
distributional properties amenable to conjugacy. Second, we bridge the divide
between the discrete and continuous likelihoods by illustrating a canonical
construction for stochastic processes whose Levy measure densities are from
positive exponential families, and then demonstrate that these processes in
fact form the prior, likelihood, and posterior in a conjugate family. Our
canonical construction subsumes known computational formulae for posterior
density parameters in the cases where the likelihood is from a discrete
distribution belonging to an exponential family.
| Robert Finn and Brian Kulis | null | 1601.02257 | null | null |
Temporal Multinomial Mixture for Instance-Oriented Evolutionary
Clustering | cs.IR cs.LG stat.ML | Evolutionary clustering aims at capturing the temporal evolution of clusters.
This issue is particularly important in the context of social media data that
are naturally temporally driven. In this paper, we propose a new probabilistic
model-based evolutionary clustering technique. The Temporal Multinomial Mixture
(TMM) is an extension of the classical mixture model that optimizes feature
co-occurrences in a trade-off with temporal smoothness. Our model is
evaluated for two recent case studies on opinion aggregation over time. We
compare four different probabilistic clustering models and we show the
superiority of our proposal in the task of instance-oriented clustering.
| Young-Min Kim, Julien Velcin, St\'ephane Bonnevay, Marian-Andrei
Rizoiu | 10.1007/978-3-319-16354-3_66 | 1601.02300 | null | null |
A Synthetic Approach for Recommendation: Combining Ratings, Social
Relations, and Reviews | cs.IR cs.AI cs.LG | Recommender systems (RSs) provide an effective way of alleviating the
information overload problem by selecting personalized choices. Online social
networks and user-generated content provide diverse sources for recommendation
beyond ratings, which present opportunities as well as challenges for
traditional RSs. Although social matrix factorization (Social MF) can integrate
ratings with social relations and topic matrix factorization can integrate
ratings with item reviews, both of them ignore some useful information. In this
paper, we investigate the effective data fusion by combining the two
approaches, in two steps. First, we extend Social MF to exploit the graph
structure of neighbors. Second, we propose a novel framework MR3 to jointly
model these three types of information effectively for rating prediction by
aligning latent factors and hidden topics. We achieve more accurate rating
prediction on two real-life datasets. Furthermore, we measure the contribution
of each data source to the proposed framework.
| Guang-Neng Hu, Xin-Yu Dai, Yunya Song, Shu-Jian Huang, Jia-Jun Chen | null | 1601.02327 | null | null |
Deep Learning over Multi-field Categorical Data: A Case Study on User
Response Prediction | cs.LG cs.IR | Predicting user responses, such as click-through rate and conversion rate,
are critical in many web applications including web search, personalised
recommendation, and online advertising. Different from continuous raw features
that we usually find in the image and audio domains, the input features in the
web space are always multi-field and mostly discrete and categorical, while
their dependencies are little known. Major user response prediction models have
to either limit themselves to linear models or require manually building up
high-order combination features. The former loses the ability to explore
feature interactions, while the latter results in heavy computation in the
large feature space. To tackle the issue, we propose two novel models using
deep neural networks (DNNs) to automatically learn effective patterns from
categorical feature interactions and make predictions of users' ad clicks. To
get our DNNs efficiently work, we propose to leverage three feature
transformation methods, i.e., factorisation machines (FMs), restricted
Boltzmann machines (RBMs) and denoising auto-encoders (DAEs). This paper
presents the structure of our models and their efficient training algorithms.
The large-scale experiments with real-world data demonstrate that our methods
work better than major state-of-the-art models.
| Weinan Zhang, Tianming Du, Jun Wang | null | 1601.02376 | null | null |
Implicit Look-alike Modelling in Display Ads: Transfer Collaborative
Filtering to CTR Estimation | cs.LG cs.IR | User behaviour targeting is essential in online advertising. Compared with
sponsored search keyword targeting and contextual advertising page content
targeting, user behaviour targeting builds users' interest profiles via
tracking their online behaviour and then delivers the relevant ads according to
each user's interest, which leads to higher targeting accuracy and thus more
improved advertising performance. The current user profiling methods include
building keywords and topic tags or mapping users onto a hierarchical taxonomy.
However, to our knowledge, there is no previous work that explicitly
investigates users' online visit similarity and incorporates such similarity
into their ad response prediction. In this work, we propose a general framework
which learns the user profiles based on their online browsing behaviour, and
transfers the learned knowledge onto prediction of their ad response.
Technically, we propose a transfer learning model based on the probabilistic
latent factor graphic models, where the users' ad response profiles are
generated from their online browsing profiles. The large-scale experiments
based on real-world data demonstrate significant improvement of our solution
over some strong baselines.
| Weinan Zhang, Lingxi Chen, Jun Wang | null | 1601.02377 | null | null |
How to learn a graph from smooth signals | stat.ML cs.LG physics.data-an | We propose a framework that learns the graph structure underlying a set of
smooth signals. Given $X\in\mathbb{R}^{m\times n}$ whose rows reside on the
vertices of an unknown graph, we learn the edge weights
$w\in\mathbb{R}_+^{m(m-1)/2}$ under the smoothness assumption that
$\text{tr}(X^\top L X)$ is small. We show that the problem is a weighted
$\ell_1$ minimization that leads to naturally sparse solutions. We point out
how known graph learning or construction techniques fall within our framework
and propose a new model that performs better than the state of the art in many
settings. We present efficient, scalable primal-dual based algorithms for both
our model and the previous state of the art, and evaluate their performance on
artificial and real data.
| Vassilis Kalofolias | null | 1601.02513 | null | null |
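The smoothness term driving the formulation has a useful edge-wise form, tr(X^T L X) = (1/2) sum_ij w_ij ||x_i - x_j||^2, which is what makes the problem a weighted l1 minimization in the edge weights. A quick numerical check of this identity:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 4
X = rng.normal(size=(m, n))           # rows reside on the graph vertices

# A random symmetric weight matrix with zero diagonal.
W = rng.uniform(size=(m, m)); W = np.triu(W, 1); W = W + W.T
L = np.diag(W.sum(1)) - W             # combinatorial graph Laplacian

# Smoothness: tr(X^T L X) = 1/2 * sum_ij w_ij ||x_i - x_j||^2
lhs = np.trace(X.T @ L @ X)
rhs = 0.5 * sum(W[i, j] * np.sum((X[i] - X[j])**2)
                for i in range(m) for j in range(m))
print(np.isclose(lhs, rhs))           # True
```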
How to Use Temporal-Driven Constrained Clustering to Detect Typical
Evolutions | cs.LG cs.DS | In this paper, we propose a new time-aware dissimilarity measure that takes
into account the temporal dimension. Observations that are close in the
description space, but distant in time are considered as dissimilar. We also
propose a method to enforce the segmentation contiguity, by introducing, in the
objective function, a penalty term inspired by the Normal Distribution
Function. We combine the two propositions into a novel time-driven constrained
clustering algorithm, called TDCK-Means, which creates a partition of coherent
clusters, both in the multidimensional space and in the temporal space. This
algorithm uses soft semi-supervised constraints, to encourage adjacent
observations belonging to the same entity to be assigned to the same cluster.
We apply our algorithm to a Political Studies dataset in order to detect
typical evolution phases. We adapt the Shannon entropy in order to measure the
entity contiguity, and we show that our proposition consistently improves
temporal cohesion of clusters, without any significant loss in the
multidimensional variance.
| Marian-Andrei Rizoiu, Julien Velcin, St\'ephane Lallich | 10.1142/S0218213014600136 | 1601.02603 | null | null |
Using SVM to pre-classify government purchases | cs.LG | The Brazilian government often misclassifies the goods it buys. That makes it
hard to audit government expenditures. We cannot know whether the price paid
for a ballpoint pen (code #7510) was reasonable if the pen was misclassified as
a technical drawing pen (code #6675) or as any other good. This paper shows how
we can use machine learning to reduce misclassification. I trained a support
vector machine (SVM) classifier that takes a product description as input and
returns the most likely category codes as output. I trained the classifier
using 20 million goods purchased by the Brazilian government between 1999-04-01
and 2015-04-02. In 83.3% of the cases the correct category code was one of the
three most likely category codes identified by the classifier. I used the
trained classifier to develop a web app that might help the government reduce
misclassification. I open-sourced the code on GitHub; anyone can use and modify
it.
| Thiago Marzag\~ao | null | 1601.02680 | null | null |
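
A minimal sketch of the described setup using scikit-learn: a linear SVM over
product descriptions that returns the three most likely category codes. The
TF-IDF features, the toy descriptions, and the code "7530" are assumptions for
illustration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# hypothetical product descriptions and category codes, for illustration only
descriptions = ["caneta esferografica azul",
                "caneta nanquim para desenho tecnico",
                "papel a4 branco"]
codes = ["7510", "6675", "7530"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(descriptions, codes)

# rank categories by the SVM decision scores and keep the top three
scores = clf.decision_function(["caneta esferografica preta"])[0]
top3 = np.argsort(scores)[::-1][:3]
print([clf.classes_[i] for i in top3])
```
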
Robobarista: Learning to Manipulate Novel Objects via Deep Multimodal
Embedding | cs.RO cs.AI cs.LG | There is a large variety of objects and appliances in human environments,
such as stoves, coffee dispensers, juice extractors, and so on. It is
challenging for a roboticist to program a robot for each of these object types
and for each of their instantiations. In this work, we present a novel approach
to manipulation planning based on the idea that many household objects share
similarly-operated object parts. We formulate the manipulation planning as a
structured prediction problem and learn to transfer manipulation strategy
across different objects by embedding point-cloud, natural language, and
manipulation trajectory data into a shared embedding space using a deep neural
network. In order to learn semantically meaningful spaces throughout our
network, we introduce a method for pre-training its lower layers for multimodal
feature embedding and a method for fine-tuning this embedding space using a
loss-based margin. In order to collect a large number of manipulation
demonstrations for different objects, we develop a new crowd-sourcing platform
called Robobarista. We test our model on our dataset consisting of 116 objects
and appliances with 249 parts along with 250 language instructions, for which
there are 1225 crowd-sourced manipulation demonstrations. We further show that
our robot with our model can even prepare a cup of latte with appliances it
has never seen before.
| Jaeyong Sung, Seok Hyun Jin, Ian Lenz, Ashutosh Saxena | null | 1601.02705 | null | null |
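
A hedged sketch of training a shared multimodal embedding with a loss-based
margin, in the spirit of the approach above: a matching trajectory is pulled
toward the combined point-cloud/language embedding, and a mismatched one is
pushed at least a margin away. The encoder architectures, dimensions, and the
additive fusion of the two input modalities are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, in_dim, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, emb_dim))
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

pc_enc, lang_enc, traj_enc = Encoder(512), Encoder(300), Encoder(128)

def margin_loss(pc, lang, traj_pos, traj_neg, margin=0.1):
    # pull the matching trajectory toward the (point cloud, language) pair,
    # push a mismatched trajectory at least `margin` further away
    anchor = pc_enc(pc) + lang_enc(lang)
    d_pos = (anchor - traj_enc(traj_pos)).pow(2).sum(-1)
    d_neg = (anchor - traj_enc(traj_neg)).pow(2).sum(-1)
    return F.relu(d_pos - d_neg + margin).mean()

loss = margin_loss(torch.randn(16, 512), torch.randn(16, 300),
                   torch.randn(16, 128), torch.randn(16, 128))
```
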
Deep Learning of Part-based Representation of Data Using Sparse
Autoencoders with Nonnegativity Constraints | cs.LG stat.ML | We demonstrate a new deep learning autoencoder network, trained by a
nonnegativity constraint algorithm (NCAE), that learns features that show a
part-based representation of data. The learning algorithm is based on
constraining negative weights. The performance of the algorithm is assessed
based on decomposing data into parts, and its prediction performance is tested
on three standard image datasets and one text dataset. The results indicate
that the nonnegativity constraint forces the autoencoder to learn features that
amount to a part-based representation of data, while improving sparsity and
reconstruction quality in comparison with the traditional sparse autoencoder
and Nonnegative Matrix Factorization. It is also shown that this newly acquired
representation improves the prediction performance of a deep neural network.
| Ehsan Hosseini-Asl, Jacek M. Zurada, Olfa Nasraoui | 10.1109/TNNLS.2015.2479223 | 1601.02733 | null | null |
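
A minimal sketch of a nonnegativity-constrained sparse autoencoder in the
spirit of NCAE: a quadratic penalty applied only to negative weights pushes
the learned features toward nonnegative, part-based representations. Layer
sizes, the penalty weight, and the random stand-in batch are assumptions for
illustration.

```python
import torch
import torch.nn as nn

enc = nn.Linear(784, 100)
dec = nn.Linear(100, 784)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

def nonneg_penalty(w):
    # only negative entries contribute; positive weights are untouched
    return torch.clamp(w, max=0.0).pow(2).sum()

x = torch.rand(32, 784)  # stand-in batch, e.g. flattened images
for _ in range(100):
    h = torch.sigmoid(enc(x))
    x_hat = torch.sigmoid(dec(h))
    loss = ((x_hat - x) ** 2).mean() \
         + 1e-3 * (nonneg_penalty(enc.weight) + nonneg_penalty(dec.weight))
    opt.zero_grad()
    loss.backward()
    opt.step()
```
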
Learning Hidden Unit Contributions for Unsupervised Acoustic Model
Adaptation | cs.CL cs.LG cs.SD | This work presents a broad study on the adaptation of neural network acoustic
models by means of learning hidden unit contributions (LHUC) -- a method that
linearly re-combines hidden units in a speaker- or environment-dependent manner
using small amounts of unsupervised adaptation data. We also extend LHUC to a
speaker adaptive training (SAT) framework that leads to a more adaptable DNN
acoustic model, working both in a speaker-dependent and a speaker-independent
manner, without the requirement to maintain auxiliary speaker-dependent
feature extractors or to introduce significant speaker-dependent changes to the
DNN structure. Through a series of experiments on four different speech
recognition benchmarks (TED talks, Switchboard, AMI meetings, and Aurora4)
comprising 270 test speakers, we show that LHUC in both its test-only and SAT
variants results in consistent word error rate reductions ranging from 5% to
23% relative depending on the task and the degree of mismatch between training
and test data. In addition, we have investigated the effect of the amount of
adaptation data per speaker, the quality of unsupervised adaptation targets,
the complementarity to other adaptation techniques, one-shot adaptation, and an
extension to adapting DNNs trained in a sequence discriminative manner.
| Pawel Swietojanski and Jinyu Li and Steve Renals | 10.1109/TASLP.2016.2560534 | 1601.02828 | null | null |
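
A minimal sketch of the LHUC re-parametrization: each hidden layer's outputs
are rescaled by speaker-dependent amplitudes a = 2*sigmoid(r), and only r is
updated during adaptation while the speaker-independent weights stay frozen.
Layer sizes and the learning rate are assumptions for illustration.

```python
import torch
import torch.nn as nn

class LHUCLayer(nn.Module):
    def __init__(self, base: nn.Module, width: int):
        super().__init__()
        self.base = base
        self.r = nn.Parameter(torch.zeros(width))  # r=0 gives scale 1, i.e. no change
    def forward(self, x):
        return 2 * torch.sigmoid(self.r) * self.base(x)

layer = LHUCLayer(nn.Sequential(nn.Linear(40, 256), nn.ReLU()), width=256)
for p in layer.base.parameters():
    p.requires_grad = False                  # freeze speaker-independent weights
opt = torch.optim.SGD([layer.r], lr=0.8)     # adapt only the LHUC amplitudes
```
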
Online Model Estimation for Predictive Thermal Control of Buildings | cs.SY cs.LG | This study proposes a general, scalable method to learn control-oriented
thermal models of buildings that could enable wide-scale deployment of
cost-effective predictive controls. An Unscented Kalman Filter augmented for
parameter and disturbance estimation is shown to accurately learn and predict a
building's thermal response. Recent studies of heating, ventilating, and air
conditioning (HVAC) systems have shown significant energy savings with advanced
model predictive control (MPC). A scalable cost-effective method to readily
acquire accurate, robust models of individual buildings' unique thermal
envelopes has historically been elusive and hindered the widespread deployment
of prediction-based control systems. Continuous commissioning and lifetime
performance of these thermal models require deployment of on-line data-driven
system identification and parameter estimation routines. We propose a novel
gray-box approach using an Unscented Kalman Filter based on a multi-zone
thermal network and validate it with EnergyPlus simulation data. The filter
quickly learns parameters of a thermal network during periods of known or
constrained loads and then characterizes unknown loads in order to provide
accurate 24+ hour energy predictions. This study extends our initial
investigation by formalizing parameter and disturbance estimation routines and
demonstrating results across a year-long study.
| Peter Radecki and Brandon Hencey | null | 1601.02947 | null | null |
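
A minimal sketch of joint state and parameter estimation with an augmented
Unscented Kalman Filter, using the filterpy library: the state vector carries
the zone temperature together with the thermal resistance and capacitance of a
one-zone RC model, and the parameters follow a random walk. The single-zone
model and every numeric value are assumptions for illustration; the paper uses
a multi-zone thermal network with disturbance estimation.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt, T_out = 300.0, 5.0  # time step [s], outdoor temperature [C]

def fx(x, dt):
    T, R, C = x                        # zone temp, thermal resistance, capacitance
    T_next = T + dt * (T_out - T) / (R * C)
    return np.array([T_next, R, C])    # parameters follow a random walk

def hx(x):
    return x[:1]                       # only the zone temperature is measured

points = MerweScaledSigmaPoints(n=3, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=3, dim_z=1, dt=dt, hx=hx, fx=fx, points=points)
ukf.x = np.array([20.0, 0.01, 1e6])    # initial guesses for T, R, C
ukf.P = np.diag([1.0, 1e-4, 1e10])
ukf.Q = np.diag([1e-3, 1e-8, 1e2])     # small process noise lets parameters drift
ukf.R = np.array([[0.25]])             # measurement noise

for z in [19.8, 19.5, 19.3]:           # stand-in temperature readings
    ukf.predict()
    ukf.update(np.array([z]))
```
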
Infomax strategies for an optimal balance between exploration and
exploitation | cs.LG cs.IT math.IT physics.data-an q-bio.PE stat.ML | A proper balance between exploitation and exploration is what makes
good decisions, those that achieve high rewards such as payoff or evolutionary fitness. The
Infomax principle postulates that maximization of information directs the
function of diverse systems, from living systems to artificial neural networks.
While specific applications are successful, the validity of information as a
proxy for reward remains unclear. Here, we consider the multi-armed bandit
decision problem, which features arms (slot-machines) of unknown probabilities
of success and a player trying to maximize cumulative payoff by choosing the
sequence of arms to play. We show that an Infomax strategy (Info-p) which
optimally gathers information on the highest mean reward among the arms
saturates known optimal bounds and compares favorably to existing policies. The
highest mean reward considered by Info-p is not the quantity actually needed
for the choice of the arm to play, yet it allows for optimal tradeoffs between
exploration and exploitation.
| Gautam Reddy, Antonio Celani and Massimo Vergassola | 10.1007/s10955-016-1521-0 | 1601.03073 | null | null |
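
A toy, heavily hedged illustration of an information-maximizing bandit policy:
pick the arm whose next pull most reduces the expected entropy of the posterior
belief about which arm is best, under Beta posteriors. This one-step greedy
scheme is a simplified stand-in for the Info-p strategy, not the algorithm from
the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = [0.4, 0.55, 0.6]          # hypothetical Bernoulli arms
a = np.ones(3); b = np.ones(3)     # Beta(1,1) posteriors per arm

def entropy_best(a, b, n=2000):
    # Monte Carlo entropy of the belief "which arm has the highest mean"
    samples = rng.beta(a, b, size=(n, len(a)))
    p_best = np.bincount(samples.argmax(axis=1), minlength=len(a)) / n
    p_best = p_best[p_best > 0]
    return -(p_best * np.log(p_best)).sum()

for t in range(100):
    h_now = entropy_best(a, b)
    gains = []
    for i in range(3):
        e = np.eye(3)[i]
        p_win = a[i] / (a[i] + b[i])                   # posterior predictive of success
        h1 = entropy_best(a + e, b)                    # belief entropy if arm i succeeds
        h0 = entropy_best(a, b + e)                    # belief entropy if arm i fails
        gains.append(h_now - (p_win * h1 + (1 - p_win) * h0))
    i = int(np.argmax(gains))                          # most informative arm
    r = float(rng.random() < p_true[i])
    a[i] += r
    b[i] += 1 - r
```
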
Online Prediction of Dyadic Data with Heterogeneous Matrix Factorization | cs.LG stat.ML | Dyadic Data Prediction (DDP) is an important problem in many research areas.
This paper develops a novel fully Bayesian nonparametric framework which
integrates two popular and complementary approaches, discrete mixed membership
modeling and continuous latent factor modeling, into a unified Heterogeneous
Matrix Factorization (HeMF) model, which can predict unobserved dyads
accurately. The HeMF can determine the number of communities automatically and
exploit the latent linear structure for each bicluster efficiently. We propose
a Variational Bayesian method to estimate the parameters and missing data. We
further develop a novel online learning approach for Variational inference and
use it for the online learning of HeMF, which can efficiently cope with the
important large-scale DDP problem. We evaluate the performance of our method on
the EachMovie, MovieLens, and Netflix Prize collaborative filtering datasets.
The experiments show that our model outperforms state-of-the-art methods on
all benchmarks. Compared with the stochastic gradient method (SGD), our online
learning approach achieves significant improvements in estimation accuracy
and robustness.
| Guangyong Chen, Fengyuan Zhu, Pheng Ann Heng | null | 1601.03124 | null | null |
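
As context for the comparison above, here is a minimal latent factor model for
dyadic prediction trained with SGD, the kind of baseline HeMF is evaluated
against; this is not an implementation of HeMF itself. The sizes, rates, and
toy ratings are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 100, 80, 8
U = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
V = 0.1 * rng.standard_normal((n_items, k))   # item latent factors
ratings = [(0, 3, 4.0), (2, 5, 1.0), (7, 3, 3.0)]  # (user, item, rating) triples

lr, reg = 0.01, 0.05
for epoch in range(50):
    for u, i, r in ratings:
        err = r - U[u] @ V[i]                 # prediction error for this dyad
        u_old = U[u].copy()
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * u_old - reg * V[i])
```
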
Dynamic Privacy For Distributed Machine Learning Over Network | cs.LG | Privacy-preserving distributed machine learning is becoming increasingly
important due to the recent rapid growth of data. This paper focuses on a class
of regularized empirical risk minimization (ERM) machine learning problems, and
develops two methods to provide differential privacy to distributed learning
algorithms over a network. We first decentralize the learning algorithm using
the alternating direction method of multipliers (ADMM), and propose the methods
of dual variable perturbation and primal variable perturbation to provide
dynamic differential privacy. The two mechanisms lead to algorithms that can
provide privacy guarantees under mild conditions of the convexity and
differentiability of the loss function and the regularizer. We study the
performance of the algorithms, and show that the dual variable perturbation
outperforms its primal counterpart. To design optimal privacy mechanisms, we
analyze the fundamental tradeoff between privacy and accuracy, and provide
guidelines for choosing privacy parameters. Numerical experiments using a
customer information database are performed to corroborate the results on
privacy and utility tradeoffs and on mechanism design.
| Tao Zhang, Quanyan Zhu | null | 1601.03466 | null | null |
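
A hedged sketch of dual variable perturbation in a decentralized ERM setting:
each node solves its local ADMM subproblem with a noise-perturbed dual
variable. The Laplace noise scale, the squared loss, and the closed-form solve
are assumptions for illustration, not the paper's exact mechanism or privacy
calibration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, rho, eps = 5, 1.0, 1.0                    # dimension, ADMM penalty, privacy budget

def local_update(X, y, z, lam):
    lam_noisy = lam + rng.laplace(scale=1.0 / eps, size=d)  # perturb the dual variable
    # closed-form minimizer of ||Xw - y||^2 + lam_noisy^T w + (rho/2)||w - z||^2
    A = 2 * X.T @ X + rho * np.eye(d)
    b = 2 * X.T @ y - lam_noisy + rho * z
    return np.linalg.solve(A, b)

X = rng.standard_normal((20, d))
y = X @ np.ones(d) + 0.1 * rng.standard_normal(20)
w = local_update(X, y, z=np.zeros(d), lam=np.zeros(d))
```
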
Deep Learning Applied to Image and Text Matching | cs.LG cs.CL cs.CV | The ability to describe images with natural language sentences is the
hallmark of image and language understanding. Such a system has wide-ranging
applications, such as annotating images and using natural sentences to search
for images. In this project we focus on the task of bidirectional image
retrieval: such a system is capable of retrieving an image based on a sentence
(image search) and retrieving a sentence based on an image query (image
annotation). We present a system based on a global ranking objective function
which uses a combination of convolutional neural networks (CNN) and multi-layer
perceptrons (MLP). It takes a pair of an image and a sentence and processes
them in different channels, finally embedding them into a common multimodal
vector space. These embeddings encode abstract semantic information about the
two inputs and can be compared using traditional information retrieval
approaches. For each such pair, the model returns a score which is interpreted
as a similarity metric. If this score is high, the image and sentence are
likely to convey similar meaning, and if the score is low then they are likely
not to.
The visual input is modeled via a deep convolutional neural network. On the
other hand, we explore three models for the textual module. The first one is
bag of words with an MLP. The second one uses n-grams (bigrams, trigrams, and
a combination of trigrams & skip-grams) with an MLP. The third is a more
specialized deep network specific to modeling variable-length sequences (SSE).
We report performance comparable to recent work in the field, even though our
overall model is simpler. We also show that the training-time choice of how we
generate our negative samples has a significant impact on performance, and can
be used to specialize the bi-directional system in one particular task.
| Afroze Ibrahim Baqapuri | null | 1601.03478 | null | null |
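
A hedged sketch of a global ranking objective of the kind described above:
matching image-sentence pairs must score higher than mismatched pairs in both
retrieval directions by a margin. The cosine scoring and the margin value are
assumptions for illustration; the encoders producing the embeddings are left
abstract.

```python
import torch
import torch.nn.functional as F

def bidirectional_ranking_loss(img_emb, txt_emb, margin=0.2):
    # img_emb, txt_emb: (batch, dim); row i of each is a matching pair
    scores = F.normalize(img_emb, dim=1) @ F.normalize(txt_emb, dim=1).T
    pos = scores.diag().unsqueeze(1)
    cost_im = F.relu(margin + scores - pos)    # image -> wrong sentences
    cost_tx = F.relu(margin + scores - pos.T)  # sentence -> wrong images
    mask = ~torch.eye(scores.size(0), dtype=torch.bool)
    return cost_im[mask].mean() + cost_tx[mask].mean()

loss = bidirectional_ranking_loss(torch.randn(8, 64), torch.randn(8, 64))
```
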