title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Tracking Human-like Natural Motion Using Deep Recurrent Neural Networks | cs.CV cs.LG cs.NE cs.RO | The Kinect skeleton tracker is able to achieve considerable human body tracking
performance in a convenient and low-cost manner. However, the tracker often
captures unnatural human poses such as discontinuous and vibrating motions when
self-occlusions occur. A majority of approaches tackle this problem by using
multiple Kinect sensors in a workspace. The measurements from the different
sensors are then combined in a Kalman filter framework, or an optimization
problem is formulated for sensor fusion. However, these methods usually require
heuristics to measure the reliability of the measurements observed from each
Kinect sensor. In this paper, we developed a method to improve the Kinect
skeleton using a single Kinect sensor, in which a supervised learning technique
was employed to correct unnatural tracking motions. Specifically, deep
recurrent neural networks were used to improve the joint positions and
velocities of the Kinect skeleton, and three methods were proposed to integrate
the refined positions and velocities for further enhancement. Moreover, we
suggested a novel measure to evaluate the naturalness of captured motions. We
evaluated the proposed approach by comparison with the ground truth obtained
using a commercial optical marker-based motion capture system.
| Youngbin Park, Sungphill Moon and Il Hong Suh | null | 1604.04528 | null | null |
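A minimal sketch of the idea above, assuming PyTorch and hypothetical dimensions (the joint count, hidden size, and residual-correction design are illustrative, not the paper's exact architecture): an LSTM maps a sequence of noisy Kinect joint positions to refined positions, trained with an MSE loss against mocap ground truth:

```python
import torch
import torch.nn as nn

NUM_JOINTS = 20          # Kinect v1 skeletons have 20 joints
FEAT = NUM_JOINTS * 3    # (x, y, z) per joint

class SkeletonRefiner(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(FEAT, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, FEAT)

    def forward(self, noisy_seq):            # (batch, time, FEAT)
        h, _ = self.rnn(noisy_seq)
        return noisy_seq + self.out(h)       # predict a residual correction

model = SkeletonRefiner()
noisy = torch.randn(4, 50, FEAT)             # toy batch: 4 clips, 50 frames
target = noisy + 0.01 * torch.randn_like(noisy)  # stand-in for mocap truth
loss = nn.functional.mse_loss(model(noisy), target)
loss.backward()
print(float(loss))
```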
Accessing accurate documents by mining auxiliary document information | cs.IR cs.AI cs.LG | Earlier text mining techniques used algorithms like k-means, Naive
Bayes, and SVM to classify and cluster text documents and mine relevant
information about them. The need to improve these mining techniques leads us
to search for new approaches built on the available algorithms. This paper
proposes one technique which uses the auxiliary information that is present
inside the text documents to improve the mining. This auxiliary information can
be a description of the content. This information can be either useful or
completely useless for mining. The user should assess the worth of the
auxiliary information before considering this technique for text mining. In
this paper, a combination of classical clustering algorithms is used to mine
the datasets. The algorithm runs in two stages which carry out mining at
different levels of abstraction. The clustered documents are then classified
into the required groups. The proposed technique is aimed at improving the
results of document clustering.
| Jinju Joby and Jyothi Korra | null | 1604.04558 | null | null |
CNN-RNN: A Unified Framework for Multi-label Image Classification | cs.CV cs.LG cs.NE | While deep convolutional neural networks (CNNs) have shown great success in
single-label image classification, it is important to note that real world
images generally contain multiple labels, which could correspond to different
objects, scenes, actions and attributes in an image. Traditional approaches to
multi-label image classification learn independent classifiers for each
category and employ ranking or thresholding on the classification results.
These techniques, although working well, fail to explicitly exploit the label
dependencies in an image. In this paper, we utilize recurrent neural networks
(RNNs) to address this problem. Combined with CNNs, the proposed CNN-RNN
framework learns a joint image-label embedding to characterize the semantic
label dependency as well as the image-label relevance, and it can be trained
end-to-end from scratch to integrate both information in a unified framework.
Experimental results on public benchmark datasets demonstrate that the proposed
architecture achieves better performance than the state-of-the-art multi-label
classification models.
| Jiang Wang, Yi Yang, Junhua Mao, Zhiheng Huang, Chang Huang, Wei Xu | null | 1604.04573 | null | null |
Make Up Your Mind: The Price of Online Queries in Differential Privacy | cs.CR cs.DS cs.LG | We consider the problem of answering queries about a sensitive dataset
subject to differential privacy. The queries may be chosen adversarially from a
larger set Q of allowable queries in one of three ways, which we list in order
from easiest to hardest to answer:
Offline: The queries are chosen all at once and the differentially private
mechanism answers the queries in a single batch.
Online: The queries are chosen all at once, but the mechanism only receives
the queries in a streaming fashion and must answer each query before seeing the
next query.
Adaptive: The queries are chosen one at a time and the mechanism must answer
each query before the next query is chosen. In particular, each query may
depend on the answers given to previous queries.
Many differentially private mechanisms are just as efficient in the adaptive
model as they are in the offline model. Meanwhile, most lower bounds for
differential privacy hold in the offline setting. This suggests that the three
models may be equivalent.
We prove that these models are all, in fact, distinct. Specifically, we show
that there is a family of statistical queries such that exponentially more
queries from this family can be answered in the offline model than in the
online model. We also exhibit a family of search queries such that
exponentially more queries from this family can be answered in the online model
than in the adaptive model. We also investigate whether such separations might
hold for simple queries like threshold queries over the real line.
| Mark Bun, Thomas Steinke, Jonathan Ullman | null | 1604.04618 | null | null |
ModelWizard: Toward Interactive Model Construction | cs.PL cs.LG | Data scientists engage in model construction to discover machine learning
models that well explain a dataset, in terms of predictiveness,
understandability and generalization across domains. Questions such as "what if
we model common cause Z" and "what if Y's dependence on X reverses" inspire
many candidate models to consider and compare, yet current tools emphasize
constructing a final model all at once.
To more naturally reflect exploration when debating numerous models, we
propose an interactive model construction framework grounded in composable
operations. Primitive operations capture core steps refining data and model
that, when verified, form an inductive basis to prove model validity. Derived,
composite operations enable advanced model families, both generic and
specialized, abstracted away from low-level details.
We prototype our envisioned framework in ModelWizard, a domain-specific
language embedded in F# to construct Tabular models. We describe the language
design and demonstrate its use through several applications, emphasizing how
the language may facilitate the creation of complex models. To future engineers
designing data science languages and tools, we offer ModelWizard's design as a
new model construction paradigm, speeding discovery of our universe's
structure.
| Dylan Hutchison | null | 1604.04639 | null | null |
Phone-based Metric as a Predictor for Basic Personality Traits | cs.SI cs.LG physics.soc-ph | Basic personality traits are typically assessed through questionnaires. Here
we consider phone-based metrics as a way to assess personality traits. We use
data from smartphones with custom data-collection software distributed to 730
individuals. The data includes information about location, physical motion,
face-to-face contacts, online social network friends, text messages and calls.
The data is further complemented by questionnaire-based data on basic
personality traits. From the phone-based metrics, we define a set of behavioral
variables, which we use in a prediction of basic personality traits. We find
that predominantly, the Big Five personality traits extraversion and, to some
degree, neuroticism are strongly expressed in our data. As an alternative to
the Big Five, we investigate whether other linear combinations of the 44
questions underlying the Big Five Inventory are more predictable. In a tertile
classification problem, basic dimensionality reduction techniques, such as
independent component analysis, increase the predictability relative to the
baseline from $11\%$ to $23\%$. Finally, from a supervised linear classifier,
we were able to further improve this predictability to $33\%$. In all cases,
the most predictable projections had an overweight of the questions related to
extraversion and neuroticism. In addition, our findings indicate that the score
system underlying the Big Five Inventory disregards a part of the information
available in the 44 questions.
| Bjarke M{\o}nsted, Anders Mollgaard, Joachim Mathiesen | 10.1016/j.jrp.2017.12.004 | 1604.04696 | null | null |
DS-MLR: Exploiting Double Separability for Scaling up Distributed
Multinomial Logistic Regression | cs.LG stat.ML | Scaling multinomial logistic regression to datasets with a very large number of
data points and classes is challenging. This is primarily because one needs to
compute the log-partition function on every data point. This makes distributing
the computation hard. In this paper, we present a distributed stochastic
gradient descent based optimization method (DS-MLR) for scaling up multinomial
logistic regression problems to massive scale datasets without hitting any
storage constraints on the data and model parameters. Our algorithm exploits
double-separability, an attractive property that allows us to achieve both data
as well as model parallelism simultaneously. In addition, we introduce a
non-blocking and asynchronous variant of our algorithm that avoids
bulk-synchronization. We demonstrate the versatility of DS-MLR to various
scenarios in data and model parallelism, through an extensive empirical study
using several real-world datasets. In particular, we demonstrate the
scalability of DS-MLR by solving an extreme multi-class classification problem
on the Reddit dataset (159 GB data, 358 GB parameters) where, to the best of
our knowledge, no other existing methods apply.
| Parameswaran Raman, Sriram Srinivasan, Shin Matsushima, Xinhua Zhang,
Hyokun Yun, S.V.N. Vishwanathan | null | 1604.04706 | null | null |
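A minimal sketch of the per-example cost that DS-MLR must distribute: the log-partition function of the softmax, computed here with the log-sum-exp trick (NumPy, toy dimensions; the distributed, asynchronous machinery of the paper is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((10_000, 50))   # 10k classes x 50 features
x = rng.standard_normal(50)             # one data point

scores = W @ x
m = scores.max()                        # shift for numerical stability
log_partition = m + np.log(np.exp(scores - m).sum())
print(log_partition)                    # needed for every single data point
```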
Efficient Dictionary Learning with Sparseness-Enforcing Projections | cs.LG cs.CV cs.NE | Learning dictionaries suitable for sparse coding instead of using engineered
bases has proven effective in a variety of image processing tasks. This paper
studies the optimization of dictionaries on image data where the representation
is enforced to be explicitly sparse with respect to a smooth, normalized
sparseness measure. This involves the computation of Euclidean projections onto
level sets of the sparseness measure. While previous algorithms for this
optimization problem had at least quasi-linear time complexity, here the first
algorithm with linear time complexity and constant space complexity is
proposed. The key for this is the mathematically rigorous derivation of a
characterization of the projection's result based on a soft-shrinkage function.
This theory is applied in an original algorithm called Easy Dictionary Learning
(EZDL), which learns dictionaries with a simple and fast-to-compute
Hebbian-like learning rule. The new algorithm is efficient, expressive and
particularly simple to implement. It is demonstrated that despite its
simplicity, the proposed learning algorithm is able to generate a rich variety
of dictionaries, in particular a topographic organization of atoms or separable
atoms. Further, the dictionaries are as expressive as those of benchmark
learning algorithms in terms of the reproduction quality on entire images, and
result in an equivalent denoising performance. EZDL learns approximately 30%
faster than the already very efficient Online Dictionary Learning algorithm,
and is therefore eligible for rapid data set analysis and problems with vast
quantities of learning samples.
| Markus Thom, Matthias Rapp, G\"unther Palm | 10.1007/s11263-015-0799-8 | 1604.04767 | null | null |
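A minimal sketch of the kind of smooth, normalized sparseness measure the abstract refers to, assuming Hoyer's measure (a standard choice in this line of work; the paper's projection algorithm, whose result is characterized via a soft-shrinkage function, is not reproduced here):

```python
import numpy as np

def hoyer_sparseness(x):
    """Normalized sparseness in [0, 1]: 0 for a constant vector,
    1 for a vector with a single nonzero entry."""
    n = x.size
    l1, l2 = np.abs(x).sum(), np.linalg.norm(x)
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

print(hoyer_sparseness(np.array([1.0, 0.0, 0.0, 0.0])))  # 1.0 (maximally sparse)
print(hoyer_sparseness(np.ones(4)))                      # 0.0 (fully dense)
```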
Supervised and Unsupervised Ensembling for Knowledge Base Population | cs.CL cs.LG | We present results on combining supervised and unsupervised methods to
ensemble multiple systems for two popular Knowledge Base Population (KBP)
tasks, Cold Start Slot Filling (CSSF) and Tri-lingual Entity Discovery and
Linking (TEDL). We demonstrate that our combined system along with auxiliary
features outperforms the best performing system for both tasks in the 2015
competition, several ensembling baselines, as well as the state-of-the-art
stacking approach to ensembling KBP systems. The success of our technique on
two different and challenging problems demonstrates the power and generality of
our combined approach to ensembling.
| Nazneen Fatema Rajani and Raymond J. Mooney | null | 1604.04802 | null | null |
Structured Sparse Convolutional Autoencoder | cs.LG cs.NE | This paper aims to improve the feature learning in Convolutional Networks
(Convnet) by capturing the structure of objects. A new sparsity function is
imposed on the extracted featuremap to capture the structure and shape of the
learned object, extracting interpretable features to improve the prediction
performance. The proposed algorithm is based on organizing the activations
within and across featuremaps by constraining the node activities through
$\ell_{2}$ and $\ell_{1}$ normalization in a structured form.
| Ehsan Hosseini-Asl | null | 1604.04812 | null | null |
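A minimal sketch of a group-structured sparsity penalty on a featuremap tensor, assuming an l2 pooling of activations within each featuremap followed by an l1 sum across featuremaps, which encourages whole maps to switch off; the grouping is illustrative, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
fmap = rng.standard_normal((32, 16, 16))        # (maps, height, width)

within = np.sqrt((fmap ** 2).sum(axis=(1, 2)))  # l2 norm inside each map
penalty = within.sum()                          # l1 across the 32 maps
print(penalty)                                  # add to the training loss
```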
SSP: Semantic Space Projection for Knowledge Graph Embedding with Text
Descriptions | cs.CL cs.LG | Knowledge representation is an important, long-standing topic in AI, and there
has been a large amount of work on knowledge graph embedding, which projects
symbolic entities and relations into a low-dimensional, real-valued vector space.
However, most embedding methods merely concentrate on data fitting and ignore
the explicit semantic expression, leading to uninterpretable representations.
Thus, traditional embedding methods have limited potential for many
applications such as question answering and entity classification. To this
end, this paper proposes a semantic representation method for knowledge graph
\textbf{(KSR)}, which imposes a two-level hierarchical generative process that
globally extracts many aspects and then locally assigns a specific category in
each aspect for every triple. Since both aspects and categories are
semantics-relevant, the collection of categories in each aspect is treated as
the semantic representation of this triple. Extensive experiments show that our
model substantially outperforms other state-of-the-art baselines.
| Han Xiao, Minlie Huang, Xiaoyan Zhu | null | 1604.04835 | null | null |
Mahalanobis Distance Metric Learning Algorithm for Instance-based Data
Stream Classification | cs.LG | With the massive data challenges nowadays and the rapid growth of
technology, stream mining has recently received considerable attention. To
address the large number of scenarios in which this phenomenon manifests
itself, suitable tools are required in various research fields. Instance-based data
stream algorithms generally employ the Euclidean distance for the
classification task underlying this problem. A novel way to look into this
issue is to take advantage of a more flexible metric due to the increased
requirements imposed by the data stream scenario. In this paper we present a
new algorithm that learns a Mahalanobis metric using similarity and
dissimilarity constraints in an online manner. This approach hybridizes a
Mahalanobis distance metric learning algorithm and a k-NN data stream
classification algorithm with concept drift detection. First, some basic
aspects of Mahalanobis distance metric learning are described taking into
account key properties as well as online distance metric learning algorithms.
Second, we implement specific evaluation methodologies and comparative metrics
such as Q statistic for data stream classification algorithms. Finally, our
algorithm is evaluated on different datasets by comparing its results with one
of the best state-of-the-art instance-based data stream classification
algorithms. The results demonstrate that our proposal performs better.
| Jorge Luis Rivero Perez, Bernardete Ribeiro, Carlos Morell Perez | null | 1604.04879 | null | null |
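A minimal sketch of the core primitive described above, a k-NN query under a learned Mahalanobis metric (NumPy; the online metric updates from similarity/dissimilarity constraints and the concept drift detection are omitted):

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y); M must be PSD."""
    d = x - y
    return float(d @ M @ d)

rng = np.random.default_rng(1)
L = rng.standard_normal((5, 5))
M = L @ L.T                        # PSD metric, maintained online in the paper
X = rng.standard_normal((100, 5))  # labelled instances in the stream buffer
y = rng.integers(0, 2, size=100)
q = rng.standard_normal(5)         # incoming query instance

dists = np.array([mahalanobis_sq(q, xi, M) for xi in X])
votes = y[np.argsort(dists)[:5]]   # k = 5 nearest neighbours
print("k-NN prediction:", np.bincount(votes).argmax())
```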
An Initial Seed Selection Algorithm for K-means Clustering of
Georeferenced Data to Improve Replicability of Cluster Assignments for
Mapping Application | cs.LG cs.DS | K-means is one of the most widely used clustering algorithms in various
disciplines, especially for large datasets. However the method is known to be
highly sensitive to initial seed selection of cluster centers. K-means++ has
been proposed to overcome this problem and has been shown to have better
accuracy and computational efficiency than k-means. In many clustering problems
though, such as when classifying georeferenced data for mapping applications,
standardization of clustering methodology, specifically, the ability to arrive
at the same cluster assignment for every run of the method, i.e. replicability
of the methodology, may be of greater significance than any perceived measure
of accuracy, especially when the solution is known to be non-unique, as in the
case of k-means clustering. Here we propose a simple initial seed selection
algorithm for k-means clustering along one attribute that draws initial cluster
boundaries along the 'deepest valleys' or greatest gaps in the dataset. Thus, it
incorporates a measure to maximize distance between consecutive cluster centers
which augments the conventional k-means optimization for minimum distance
between cluster center and cluster members. Unlike existing initialization
methods, no additional parameters or degrees of freedom are introduced to the
clustering algorithm. This improves the replicability of cluster assignments by
as much as 100% over k-means and k-means++, virtually reducing the variance
over different runs to zero, without introducing any additional parameters to
the clustering process. Further, the proposed method is more computationally
efficient than k-means++ and in some cases, more accurate.
| Fouad Khan | 10.1016/j.asoc.2012.07.021 | 1604.04893 | null | null |
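A minimal sketch of the proposed deterministic seeding along one attribute: sort the values, split at the k-1 largest gaps (the 'deepest valleys'), and seed from the resulting segments; taking each segment's mean as the seed is an assumption of this sketch:

```python
import numpy as np

def valley_seeds(values, k):
    """Deterministic 1-D seeds: cut the sorted data at the (k - 1)
    largest gaps and take each segment's mean as an initial center."""
    v = np.sort(np.asarray(values, dtype=float))
    gaps = np.diff(v)                                   # assumes k >= 2
    cuts = np.sort(np.argsort(gaps)[len(gaps) - (k - 1):]) + 1
    return np.array([s.mean() for s in np.split(v, cuts)])

rng = np.random.default_rng(42)
data = np.concatenate([rng.normal(m, 0.3, 50) for m in (0, 5, 9)])
print(valley_seeds(data, k=3))   # identical on every run for the same data
```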
Multi-view Learning as a Nonparametric Nonlinear Inter-Battery Factor
Analysis | stat.ML cs.LG math.PR | Factor analysis aims to determine latent factors, or traits, which summarize
a given data set. Inter-battery factor analysis extends this notion to multiple
views of the data. In this paper we show how a nonlinear, nonparametric version
of these models can be recovered through the Gaussian process latent variable
model. This gives us a flexible formalism for multi-view learning where the
latent variables can be used both for exploratory purposes and for learning
representations that enable efficient inference for ambiguous estimation tasks.
Learning is performed in a Bayesian manner through the formulation of a
variational compression scheme which gives a rigorous lower bound on the log
likelihood. Our Bayesian framework provides strong regularization during
training, allowing the structure of the latent space to be determined
efficiently and automatically. We demonstrate this by producing the first (to
our knowledge) published results of learning from dozens of views, even when
data is scarce. We further show experimental results on several different types
of multi-view data sets and for different kinds of tasks, including exploratory
data analysis, generation, ambiguity modelling through latent priors and
classification.
| Andreas Damianou, Neil D. Lawrence, Carl Henrik Ek | null | 1604.04939 | null | null |
Identifying global optimality for dictionary learning | stat.ML cs.LG | Learning new representations of input observations in machine learning is
often tackled using a factorization of the data. For many such problems,
including sparse coding and matrix completion, learning these factorizations
can be difficult, in terms of efficiency and to guarantee that the solution is
a global minimum. Recently, a general class of objectives, which we term
induced dictionary learning models (DLMs), has been introduced that has an
induced convex form that enables global optimization. Though attractive
theoretically, this induced form is impractical, particularly for large or
growing datasets. In this work, we investigate the use of practical alternating
minimization algorithms for induced DLMs, that ensure convergence to global
optima. We characterize the stationary points of these models, and, using these
insights, highlight practical choices for the objectives. We then provide
theoretical and empirical evidence that alternating minimization, from a random
initialization, converges to global minima for a large subclass of induced
DLMs. In particular, we take advantage of the existence of the (potentially
unknown) convex induced form, to identify when stationary points are global
minima for the dictionary learning objective. We then provide an empirical
investigation into practical optimization choices for using alternating
minimization for induced DLMs, for both batch and stochastic gradient descent.
| Lei Le and Martha White | null | 1604.04942 | null | null |
Gaussian Copula Variational Autoencoders for Mixed Data | stat.ML cs.LG | The variational autoencoder (VAE) is a generative model with continuous
latent variables where a pair of probabilistic encoder (bottom-up) and decoder
(top-down) is jointly learned by stochastic gradient variational Bayes. We
first elaborate Gaussian VAE, approximating the local covariance matrix of the
decoder as an outer product of the principal direction at a position determined
by a sample drawn from a Gaussian distribution. We show that this model, referred
to as VAE-ROC, better captures the data manifold, compared to the standard
Gaussian VAE, where an independent multivariate Gaussian was used to model the
decoder. Then we extend the VAE-ROC to handle mixed categorical and continuous
data. To this end, we employ Gaussian copula to model the local dependency in
mixed categorical and continuous data, leading to {\em Gaussian copula
variational autoencoder} (GCVAE). As in VAE-ROC, we use the rank-one
approximation for the covariance in the Gaussian copula, to capture the local
dependency structure in the mixed data. Experiments on various datasets
demonstrate the useful behaviour of VAE-ROC and GCVAE, compared to the standard
VAE.
| Suwon Suh and Seungjin Choi | null | 1604.04960 | null | null |
Deep Aesthetic Quality Assessment with Semantic Information | cs.CV cs.LG cs.NE | Human beings often assess the aesthetic quality of an image coupled with the
identification of the image's semantic content. This paper addresses the
correlation issue between automatic aesthetic quality assessment and semantic
recognition. We cast the assessment problem as the main task among a multi-task
deep model, and argue that semantic recognition task offers the key to address
this problem. Based on convolutional neural networks, we employ a single and
simple multi-task framework to efficiently utilize the supervision of aesthetic
and semantic labels. A correlation item between these two tasks is further
introduced to the framework by incorporating the inter-task relationship
learning. This item not only provides some useful insight about the correlation
but also improves assessment accuracy of the aesthetic task. Particularly, an
effective strategy is developed to keep a balance between the two tasks, which
facilitates optimizing the parameters of the framework. Extensive experiments
on the challenging AVA dataset and Photo.net dataset validate the importance of
semantic recognition in aesthetic quality assessment, and demonstrate that
multi-task deep models can discover an effective aesthetic representation to
achieve state-of-the-art results.
| Yueying Kao, Ran He, Kaiqi Huang | 10.1109/TIP.2017.2651399 | 1604.04970 | null | null |
Empirical study of PROXTONE and PROXTONE$^+$ for Fast Learning of Large
Scale Sparse Models | cs.LG cs.AI | PROXTONE is a novel and fast method for the optimization of large scale
non-smooth convex problems \cite{shi2015large}. In this work, we try to use the
PROXTONE method to solve large scale \emph{non-smooth non-convex} problems, for
example the training of sparse deep neural networks (sparse DNN) or sparse
convolutional neural networks (sparse CNN) for embedded or mobile devices.
PROXTONE converges much faster than first order methods, while first order
methods make it easy to derive and control the sparseness of the solutions.
Thus in some applications, in order to train sparse models fast, we propose to
combine the merits of both methods, that is, we use PROXTONE in the first
several epochs to reach the neighborhood of an optimal solution, and then use
the first order method to explore the possibility of sparsity in the following
training. We call this method PROXTONE plus (PROXTONE$^+$). Both PROXTONE and
PROXTONE$^+$ are tested in our experiments, which demonstrate that both methods
improve the convergence speed by at least a factor of two on diverse sparse
model learning problems, while at the same time reducing the size of DNN models
to 0.5\%. The source code of all the algorithms is available upon request.
| Ziqiang Shi and Rujie Liu | null | 1604.05024 | null | null |
Mastering 2048 with Delayed Temporal Coherence Learning, Multi-Stage
Weight Promotion, Redundant Encoding and Carousel Shaping | cs.AI cs.LG | 2048 is an engaging single-player, nondeterministic video puzzle game, which,
thanks to the simple rules and hard-to-master gameplay, has gained massive
popularity in recent years. As 2048 can be conveniently embedded into the
discrete-state Markov decision processes framework, we treat it as a testbed
for evaluating existing and new methods in reinforcement learning. With the aim
to develop a strong 2048 playing program, we employ temporal difference
learning with systematic n-tuple networks. We show that this basic method can
be significantly improved with temporal coherence learning, multi-stage
function approximator with weight promotion, carousel shaping, and redundant
encoding. In addition, we demonstrate how to take advantage of the
characteristics of the n-tuple network to improve the algorithmic
effectiveness of the learning process by i) delaying the (decayed) update and
ii) applying lock-free optimistic parallelism to effortlessly take advantage of
multiple CPU cores. This way, we were able to develop the best known 2048
playing program to date, which confirms the effectiveness of the introduced
methods for discrete-state Markov decision problems.
| Wojciech Ja\'skowski | null | 1604.05085 | null | null |
End-to-End Tracking and Semantic Segmentation Using Recurrent Neural
Networks | cs.LG cs.AI cs.CV cs.NE cs.RO | In this work we present a novel end-to-end framework for tracking and
classifying a robot's surroundings in complex, dynamic and only partially
observable real-world environments. The approach deploys a recurrent neural
network to filter an input stream of raw laser measurements in order to
directly infer object locations, along with their identity in both visible and
occluded areas. To achieve this we first train the network using unsupervised
Deep Tracking, a recently proposed theoretical framework for end-to-end space
occupancy prediction. We show that by learning to track on a large amount of
unsupervised data, the network creates a rich internal representation of its
environment which we in turn exploit through the principle of inductive
transfer of knowledge to perform the task of its semantic classification. As a
result, we show that only a small amount of labelled data suffices to steer the
network towards mastering this additional task. Furthermore we propose a novel
recurrent neural network architecture specifically tailored to tracking and
semantic classification in real-world robotics applications. We demonstrate the
tracking and classification performance of the method on real-world data
collected at a busy road junction. Our evaluation shows that the proposed
end-to-end framework compares favourably to a state-of-the-art, model-free
tracking solution and that it outperforms a conventional one-shot training
scheme for semantic classification.
| Peter Ondruska, Julie Dequaire, Dominic Zeng Wang, Ingmar Posner | null | 1604.05091 | null | null |
Locally Imposing Function for Generalized Constraint Neural Networks - A
Study on Equality Constraints | cs.NE cs.LG stat.ML | This work is a further study on the Generalized Constraint Neural Network
(GCNN) model [1], [2]. Two challenges are encountered in the study, that is, to
embed any type of prior information and to select its imposing schemes. The
work focuses on the second challenge and studies a new constraint imposing
scheme for equality constraints. A new method called locally imposing function
(LIF) is proposed to provide a local correction to the GCNN prediction
function, which therefore falls within Locally Imposing Scheme (LIS). In
comparison, the conventional Lagrange multiplier method is considered as
Globally Imposing Scheme (GIS) because its added constraint term exhibits a
global impact to its objective function. Two advantages are gained from LIS
over GIS. First, LIS enables constraints to fire locally and explicitly, only
in the regions of the domain where the prediction function needs them. Second, constraints can
be implemented within a network setting directly. We attempt to interpret
several constraint methods graphically from a viewpoint of the locality
principle. Numerical examples confirm the advantages of the proposed method. In
solving boundary value problems with Dirichlet and Neumann constraints, the
GCNN model with LIF is able to achieve an exact satisfaction of the
constraints.
| Linlin Cao, Ran He, Bao-Gang Hu | null | 1604.05198 | null | null |
Can Boosting with SVM as Weak Learners Help? | cs.CV cs.LG | Object recognition in images involves identifying objects with partial
occlusions, viewpoint changes, varying illumination, and cluttered backgrounds.
Recent work in object recognition uses machine learning techniques such as
SVM-KNN, Local Ensemble Kernel Learning, and Multiple Kernel Learning. In this
paper, we want to utilize SVM as weak learners in AdaBoost. Experiments are
done with classifiers like nearest neighbor, k-nearest neighbor, Support vector
machines, Local learning (SVM-KNN) and AdaBoost. Models use Scale-Invariant
descriptors and Pyramid histogram of gradient descriptors. AdaBoost is trained
with a set of weak classifiers, SVMs, each with a kernel distance function on
different descriptors. Results show that AdaBoost with SVM outperforms other
methods on the Object Categorization dataset.
| Dinesh Govindaraj | null | 1604.05242 | null | null |
Risk-Averse Multi-Armed Bandit Problems under Mean-Variance Measure | cs.LG | The multi-armed bandit problems have been studied mainly under the measure of
expected total reward accrued over a horizon of length $T$. In this paper, we
address the issue of risk in multi-armed bandit problems and develop parallel
results under the measure of mean-variance, a commonly adopted risk measure in
economics and mathematical finance. We show that the model-specific regret and
the model-independent regret in terms of the mean-variance of the reward
process are lower bounded by $\Omega(\log T)$ and $\Omega(T^{2/3})$,
respectively. We then show that variations of the UCB policy and the DSEE
policy developed for the classic risk-neutral MAB achieve these lower bounds.
| Sattar Vakili, Qing Zhao | 10.1109/JSTSP.2016.2592622 | 1604.05257 | null | null |
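A toy illustration of the mean-variance risk measure on a two-armed bandit (NumPy). The empirical index below, variance minus rho times the mean with a generic confidence width, is illustrative, not the paper's exact UCB/DSEE construction:

```python
import numpy as np

rng = np.random.default_rng(0)
arms = [(1.0, 2.0), (0.8, 0.3)]                  # (mean, std): risky vs safe
rewards = {a: list(rng.normal(m, s, 20)) for a, (m, s) in enumerate(arms)}
rho = 1.0                                        # risk tolerance

def mean_variance(r):
    r = np.asarray(r)
    return r.var() - rho * r.mean()              # lower is better

for t in range(41, 141):
    # play the arm with the lowest optimistic (lower-confidence) index
    idx = {a: mean_variance(r) - np.sqrt(2 * np.log(t) / len(r))
           for a, r in rewards.items()}
    a = min(idx, key=idx.get)
    rewards[a].append(rng.normal(*arms[a]))

print({a: len(r) for a, r in rewards.items()})   # the low-risk arm dominates
```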
Chained Gaussian Processes | stat.ML cs.LG | Gaussian process models are flexible, Bayesian non-parametric approaches to
regression. Properties of multivariate Gaussians mean that they can be combined
linearly in the manner of additive models and via a link function (like in
generalized linear models) to handle non-Gaussian data. However, the link
function formalism is restrictive: link functions are always invertible and
must convert a parameter of interest to a linear combination of the underlying
processes. There are many likelihoods and models where a non-linear combination
is more appropriate. We term these more general models Chained Gaussian
Processes: the transformation of the GPs to the likelihood parameters will not
generally be invertible, and that implies that linearisation would only be
possible with multiple (localized) links, i.e. a chain. We develop an
approximate inference procedure for Chained GPs that is scalable and applicable
to any factorized likelihood. We demonstrate the approximation on a range of
likelihood functions.
| Alan D. Saul, James Hensman, Aki Vehtari, Neil D. Lawrence | null | 1604.05263 | null | null |
Asymptotic Convergence in Online Learning with Unbounded Delays | cs.LG cs.AI math.PR | We study the problem of predicting the results of computations that are too
expensive to run, via the observation of the results of smaller computations.
We model this as an online learning problem with delayed feedback, where the
length of the delay is unbounded, which we study mainly in a stochastic
setting. We show that in this setting, consistency is not possible in general,
and that optimal forecasters might not have average regret going to zero.
However, it is still possible to give algorithms that converge asymptotically
to Bayes-optimal predictions, by evaluating forecasters on specific sparse
independent subsequences of their predictions. We give an algorithm that does
this, which converges asymptotically on good behavior, and give very weak
bounds on how long it takes to converge. We then relate our results back to the
problem of predicting large computations in a deterministic setting.
| Scott Garrabrant, Nate Soares, Jessica Taylor | null | 1604.05280 | null | null |
Inductive Coherence | cs.AI cs.LG math.PR | While probability theory is normally applied to external environments, there
has been some recent interest in probabilistic modeling of the outputs of
computations that are too expensive to run. Since mathematical logic is a
powerful tool for reasoning about computer programs, we consider this problem
from the perspective of integrating probability and logic. Recent work on
assigning probabilities to mathematical statements has used the concept of
coherent distributions, which satisfy logical constraints such as the
probability of a sentence and its negation summing to one. Although there are
algorithms which converge to a coherent probability distribution in the limit,
this yields only weak guarantees about finite approximations of these
distributions. In our setting, this is a significant limitation: Coherent
distributions assign probability one to all statements provable in a specific
logical theory, such as Peano Arithmetic, which can prove what the output of
any terminating computation is; thus, a coherent distribution must assign
probability one to the output of any terminating computation. To model
uncertainty about computations, we propose to work with approximations to
coherent distributions. We introduce inductive coherence, a strengthening of
coherence that provides appropriate constraints on finite approximations, and
propose an algorithm which satisfies this criterion.
| Scott Garrabrant, Benya Fallenstein, Abram Demski, Nate Soares | null | 1604.05288 | null | null |
Learning Sparse Additive Models with Interactions in High Dimensions | cs.LG cs.IT math.IT stat.ML | A function $f: \mathbb{R}^d \rightarrow \mathbb{R}$ is referred to as a
Sparse Additive Model (SPAM), if it is of the form $f(\mathbf{x}) = \sum_{l \in
\mathcal{S}}\phi_{l}(x_l)$, where $\mathcal{S} \subset [d]$, $|\mathcal{S}| \ll
d$. Assuming $\phi_l$'s and $\mathcal{S}$ to be unknown, the problem of
estimating $f$ from its samples has been studied extensively. In this work, we
consider a generalized SPAM, allowing for second order interaction terms. For
some $\mathcal{S}_1 \subset [d], \mathcal{S}_2 \subset {[d] \choose 2}$, the
function $f$ is assumed to be of the form: $$f(\mathbf{x}) = \sum_{p \in
\mathcal{S}_1}\phi_{p} (x_p) + \sum_{(l,l^{\prime}) \in
\mathcal{S}_2}\phi_{(l,l^{\prime})} (x_{l},x_{l^{\prime}}).$$ Assuming
$\phi_{p},\phi_{(l,l^{\prime})}$, $\mathcal{S}_1$ and, $\mathcal{S}_2$ to be
unknown, we provide a randomized algorithm that queries $f$ and exactly
recovers $\mathcal{S}_1,\mathcal{S}_2$. Consequently, this also enables us to
estimate the underlying $\phi_p, \phi_{(l,l^{\prime})}$. We derive sample
complexity bounds for our scheme and also extend our analysis to include the
situation where the queries are corrupted with noise -- either stochastic, or
arbitrary but bounded. Lastly, we provide simulation results on synthetic data,
that validate our theoretical findings.
| Hemant Tyagi, Anastasios Kyrillidis, Bernd G\"artner, Andreas Krause | null | 1604.05307 | null | null |
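A toy instance of the generalized SPAM form displayed above, with d = 5, one univariate component and one pairwise interaction (the component functions themselves are illustrative):

```python
import numpy as np

phi_2 = np.sin                          # univariate component, S1 = {2}
phi_03 = lambda a, b: a * b             # interaction component, S2 = {(0, 3)}

def f(x):
    return phi_2(x[2]) + phi_03(x[0], x[3])

x = np.array([1.0, -0.5, np.pi / 2, 2.0, 0.3])
print(f(x))                             # sin(pi/2) + 1.0 * 2.0 = 3.0
```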
Churn analysis using deep convolutional neural networks and autoencoders | stat.ML cs.LG cs.NE | Customer temporal behavioral data was represented as images in order to
perform churn prediction by leveraging deep learning architectures prominent in
image classification. Supervised learning was performed on labeled data of over
6 million customers using deep convolutional neural networks, which achieved an
AUC of 0.743 on the test dataset using no more than 12 temporal features for
each customer. Unsupervised learning was conducted using autoencoders to better
understand the reasons for customer churn. Images that maximally activate the
hidden units of an autoencoder trained with churned customers reveal ample
opportunities for action to be taken to prevent churn among strong-data,
no-voice users.
| Artit Wangperawong, Cyrille Brun, Olav Laudy, Rujikorn Pavasuthipaisit | null | 1604.05377 | null | null |
An Adaptive Learning Mechanism for Selection of Increasingly More
Complex Systems | cs.IT cs.LG math.IT | Recently it has been demonstrated that causal entropic forces can lead to the
emergence of complex phenomena associated with human cognitive niche such as
tool use and social cooperation. Here I show that even more fundamental traits
associated with human cognition such as 'self-awareness' can easily be
demonstrated to be arising out of merely a selection for 'better regulators';
i.e. systems which respond comparatively better to threats to their existence
which are internal to themselves. A simple model demonstrates how indeed the
average self-awareness for a universe of systems continues to rise as less
self-aware systems are eliminated. The model also demonstrates, however, that the
maximum attainable self-awareness for any system is limited by the plasticity
and energy availability for that typology of systems. I argue that this rise in
self-awareness may be the reason why systems tend towards greater complexity.
| Fouad Khan | 10.14569/IJACSA.2015.060632 | 1604.05393 | null | null |
Triplet Probabilistic Embedding for Face Verification and Clustering | cs.CV cs.LG stat.ML | Despite significant progress made over the past twenty-five years,
unconstrained face verification remains a challenging problem. This paper
proposes an approach that couples a deep CNN-based approach with a
low-dimensional discriminative embedding learned using triplet probability
constraints to solve the unconstrained face verification problem. Aside from
yielding performance improvements, this embedding provides significant
advantages in terms of memory and for post-processing operations like subject
specific clustering. Experiments on the challenging IJB-A dataset show that the
proposed algorithm performs comparably or better than the state of the art
methods in verification and identification metrics, while requiring much less
training data and training time. The superior performance of the proposed
method on the CFP dataset shows that the representation learned by our deep CNN
is robust to extreme pose variation. Furthermore, we demonstrate the robustness
of the deep features to challenges including age, pose, blur and clutter by
performing simple clustering experiments on both IJB-A and LFW datasets.
| Swami Sankaranarayanan, Azadeh Alavi, Carlos Castillo, Rama Chellappa | 10.1109/BTAS.2016.7791205 | 1604.05417 | null | null |
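A minimal sketch of triplet-based embedding learning (PyTorch); the standard triplet margin loss stands in for the paper's triplet probability constraints, and the feature/embedding dimensions are assumptions:

```python
import torch
import torch.nn as nn

embed = nn.Linear(512, 128, bias=False)  # low-dim projection of CNN features
loss_fn = nn.TripletMarginLoss(margin=0.2)

anchor = torch.randn(32, 512)            # features of a subject
positive = torch.randn(32, 512)          # same subject, different image
negative = torch.randn(32, 512)          # a different subject

loss = loss_fn(embed(anchor), embed(positive), embed(negative))
loss.backward()                          # pulls positives in, pushes negatives away
print(float(loss))
```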
Comparative Study of Instance Based Learning and Back Propagation for
Classification Problems | cs.LG | The paper presents a comparative study of the performance of Back Propagation
and Instance Based Learning Algorithm for classification tasks. The study is
carried out by a series of experiments with all possible combinations of
parameter values for the algorithms under evaluation. The algorithms'
classification accuracy is compared over a range of datasets, and measurements
like Cross Validation, Kappa Statistics, Root Mean Squared Value and True
Positive vs False Positive rate have been used to evaluate their performance.
Along with performance comparison, techniques of handling missing values have
also been compared that include Mean or Mode replacement and Multiple
Imputation. The results showed that parameter adjustment plays vital role in
improving an algorithm's accuracy and therefore, Back Propagation has shown
better results as compared to Instance Based Learning. Furthermore, the problem
of missing values was better handled by the Multiple Imputation method, although
it is not suitable for small amounts of data.
| Nadia Kanwal and Erkan Bostanci | null | 1604.05429 | null | null |
Streaming Label Learning for Modeling Labels on the Fly | stat.ML cs.LG | It is challenging to handle a large volume of labels in multi-label learning.
However, existing approaches explicitly or implicitly assume that all the
labels in the learning process are given, which could be easily violated in
changing environments. In this paper, we define and study streaming label
learning (SLL), i.e., labels arrive on the fly, to model newly arrived
labels with the help of the knowledge learned from past labels. The core of SLL
is to explore and exploit the relationships between new labels and past labels
and then inherit the relationship into hypotheses of labels to boost the
performance of new classifiers. Specifically, we use the label
self-representation to model the label relationship, and SLL will be divided
into two steps: a regression problem and an empirical risk minimization (ERM)
problem. Both problems are simple and can be efficiently solved. We further
show that SLL can generate a tighter generalization error bound for new labels
than the general ERM framework with trace norm or Frobenius norm
regularization. Finally, we implement extensive experiments on various
benchmark datasets to validate the new setting. And results show that SLL can
effectively handle the constantly emerging new labels and provides excellent
classification performance.
| Shan You, Chang Xu, Yunhe Wang, Chao Xu and Dacheng Tao | null | 1604.05449 | null | null |
Understanding Rating Behaviour and Predicting Ratings by Identifying
Representative Users | cs.IR cs.AI cs.CL cs.LG | Online user reviews describing various products and services are now abundant
on the web. While the information conveyed through review texts and ratings is
easily comprehensible, there is a wealth of hidden information in them that is
not immediately obvious. In this study, we unlock this hidden value behind user
reviews to understand the various dimensions along which users rate products.
We learn a set of users that represent each of these dimensions and use their
ratings to predict product ratings. Specifically, we work with restaurant
reviews to identify users whose ratings are influenced by dimensions like
'Service', 'Atmosphere' etc. in order to predict restaurant ratings and
understand the variation in rating behaviour across different cuisines. While
previous approaches to obtaining product ratings require either a large number
of user ratings or a few review texts, we show that it is possible to predict
ratings with few user ratings and no review text. Our experiments show that our
approach outperforms other conventional methods by 16-27% in terms of RMSE.
| Rahul Kamath, Masanao Ochi, Yutaka Matsuo | null | 1604.05468 | null | null |
Locating a Small Cluster Privately | cs.DS cs.CR cs.LG | We present a new algorithm for locating a small cluster of points with
differential privacy [Dwork, McSherry, Nissim, and Smith, 2006]. Our algorithm
has implications to private data exploration, clustering, and removal of
outliers. Furthermore, we use it to significantly relax the requirements of the
sample and aggregate technique [Nissim, Raskhodnikova, and Smith, 2007], which
allows compiling of "off the shelf" (non-private) analyses into analyses that
preserve differential privacy.
| Kobbi Nissim, Uri Stemmer, Salil Vadhan | 10.1145/2902251.2902296 | 1604.05590 | null | null |
Trading-Off Cost of Deployment Versus Accuracy in Learning Predictive
Models | stat.ML cs.LG | Predictive models are finding an increasing number of applications in many
industries. As a result, a practical means for trading-off the cost of
deploying a model versus its effectiveness is needed. Our work is motivated by
risk prediction problems in healthcare. Cost-structures in domains such as
healthcare are quite complex, posing a significant challenge to existing
approaches. We propose a novel framework for designing cost-sensitive
structured regularizers that is suitable for problems with complex cost
dependencies. We draw upon a surprising connection to boolean circuits. In
particular, we represent the problem costs as a multi-layer boolean circuit,
and then use properties of boolean circuits to define an extended feature
vector and a group regularizer that exactly captures the underlying cost
structure. The resulting regularizer may then be combined with a fidelity
function to perform model prediction, for example. For the challenging
real-world application of risk prediction for sepsis in intensive care units,
the use of our regularizer leads to models that are in harmony with the
underlying cost structure and thus provide an excellent prediction accuracy
versus cost tradeoff.
| Daniel P. Robinson and Suchi Saria | null | 1604.05819 | null | null |
Greedy Criterion in Orthogonal Greedy Learning | cs.LG | Orthogonal greedy learning (OGL) is a stepwise learning scheme that starts
with selecting a new atom from a specified dictionary via the steepest gradient
descent (SGD) and then builds the estimator through orthogonal projection. In
this paper, we find that SGD is not the unique greedy criterion and introduce a
new greedy criterion, called "$\delta$-greedy threshold" for learning. Based on
the new greedy criterion, we derive an adaptive termination rule for OGL. Our
theoretical study shows that the new learning scheme can achieve the existing
(almost) optimal learning rate of OGL. Numerous numerical experiments are
provided to show that the new scheme can achieve almost optimal
generalization performance, while requiring less computation than OGL.
| Lin Xu, Shaobo Lin, Jinshan Zeng, Xia Liu, Zongben Xu | null | 1604.05993 | null | null |
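A minimal sketch of one OGL iteration under the classical SGD criterion (NumPy): pick the atom most correlated with the residual, then rebuild the estimator by orthogonal (least-squares) projection; the paper's "$\delta$-greedy threshold" would replace the argmax selection rule:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((100, 200))     # dictionary with 200 unit-norm atoms
D /= np.linalg.norm(D, axis=0)
y = rng.standard_normal(100)            # target signal

selected, residual = [], y.copy()
for _ in range(5):
    atom = int(np.argmax(np.abs(D.T @ residual)))  # steepest-gradient pick
    selected.append(atom)
    coef, *_ = np.linalg.lstsq(D[:, selected], y, rcond=None)
    residual = y - D[:, selected] @ coef           # orthogonal projection
    print(len(selected), "atoms, residual norm:", np.linalg.norm(residual))
```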
Constructive Preference Elicitation by Setwise Max-margin Learning | stat.ML cs.AI cs.LG | In this paper we propose an approach to preference elicitation that is
suitable to large configuration spaces beyond the reach of existing
state-of-the-art approaches. Our setwise max-margin method can be viewed as a
generalization of max-margin learning to sets, and can produce a set of
"diverse" items that can be used to ask informative queries to the user.
Moreover, the approach can encourage sparsity in the parameter space, in order
to favor the assessment of utility towards combinations of weights that
concentrate on just few features. We present a mixed integer linear programming
formulation and show how our approach compares favourably with Bayesian
preference elicitation alternatives and easily scales to realistic datasets.
| Stefano Teso, Andrea Passerini, Paolo Viappiani | null | 1604.06020 | null | null |
Hierarchical Deep Reinforcement Learning: Integrating Temporal
Abstraction and Intrinsic Motivation | cs.LG cs.AI cs.CV cs.NE stat.ML | Learning goal-directed behavior in environments with sparse feedback is a
major challenge for reinforcement learning algorithms. The primary difficulty
arises due to insufficient exploration, resulting in an agent being unable to
learn robust value functions. Intrinsically motivated agents can explore new
behavior for its own sake rather than to directly solve problems. Such
intrinsic behaviors could eventually help the agent solve tasks posed by the
environment. We present hierarchical-DQN (h-DQN), a framework to integrate
hierarchical value functions, operating at different temporal scales, with
intrinsically motivated deep reinforcement learning. A top-level value function
learns a policy over intrinsic goals, and a lower-level function learns a
policy over atomic actions to satisfy the given goals. h-DQN allows for
flexible goal specifications, such as functions over entities and relations.
This provides an efficient space for exploration in complicated environments.
We demonstrate the strength of our approach on two problems with very sparse,
delayed feedback: (1) a complex discrete stochastic decision process, and (2)
the classic ATARI game `Montezuma's Revenge'.
| Tejas D. Kulkarni, Karthik R. Narasimhan, Ardavan Saeedi, Joshua B.
Tenenbaum | null | 1604.06057 | null | null |
Embedded all relevant feature selection with Random Ferns | cs.LG | Many machine learning methods can produce variable importance scores
expressing the usability of each feature in the context of the produced model;
those scores on their own are not yet sufficient to perform feature selection,
especially when an all relevant selection is required. Although there are
wrapper methods aiming to solve this problem, they introduce a substantial
increase in the required computational effort.
In this paper I investigate an idea of incorporating all relevant selection
within the training process by producing importance for implicitly generated
shadows, attributes irrelevant by design. I propose and evaluate such a method
in the context of the random ferns classifier. Experimental results confirm the
effectiveness of such an approach, although they show that the fully stochastic
nature of random ferns limits its applicability either to small dimensions or as a part
of a broader feature selection procedure.
| Miron Bartosz Kursa | null | 1604.06133 | null | null |
Nonextensive information theoretical machine | cs.LG | In this paper, we propose a new discriminative model named \emph{nonextensive
information theoretical machine (NITM)} based on nonextensive generalization of
Shannon information theory. In NITM, weight parameters are treated as random
variables. Tsallis divergence is used to regularize the distribution of weight
parameters and maximum unnormalized Tsallis entropy distribution is used to
evaluate the fitting effect. On the one hand, it is shown that some well-known
margin-based loss functions such as $\ell_{0/1}$ loss, hinge loss, squared
hinge loss and exponential loss can be unified by unnormalized Tsallis entropy.
On the other hand, Gaussian prior regularization is generalized to Student-t
prior regularization with similar computational complexity. The model can be
solved efficiently by gradient-based convex optimization and its performance is
illustrated on standard datasets.
| Chaobing Song, Shu-Tao Xia | null | 1604.06153 | null | null |
Deep Adaptive Network: An Efficient Deep Neural Network with Sparse
Binary Connections | cs.LG cs.CV cs.NE | Deep neural networks are state-of-the-art models for understanding the
content of images, video and raw input data. However, implementing a deep
neural network in embedded systems is a challenging task, because a typical
deep neural network, such as a Deep Belief Network using 128x128 images as
input, could exhaust gigabytes of memory and result in bandwidth and computing
bottlenecks. To address this challenge, this paper presents a hardware-oriented
deep learning algorithm, named as the Deep Adaptive Network, which attempts to
exploit the sparsity in the neural connections. The proposed method adaptively
reduces the weights associated with negligible features to zero, leading to
a sparse feedforward network architecture. Furthermore, since the small
proportion of important weights are significantly larger than zero, they can be
robustly thresholded and represented using single-bit integers (-1 and +1),
leading to implementations of deep neural networks with sparse and binary
connections. Our experiments showed that, for the application of recognizing
MNIST handwritten digits, the features extracted by a two-layer Deep Adaptive
Network with about 25% reserved important connections achieved 97.2%
classification accuracy, which was almost the same as that of the standard Deep
Belief Network (97.3%). Furthermore, for efficient hardware implementations,
the sparse-and-binary-weighted deep neural network could save about 99.3%
memory and 99.9% computation units without significant loss of classification
accuracy for pattern recognition applications.
| Xichuan Zhou, Shengli Li, Kai Qin, Kunping Li, Fang Tang, Shengdong
Hu, Shujun Liu, Zhi Lin | null | 1604.06154 | null | null |
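A minimal sketch of the two weight transformations described above: zero out negligible weights, then binarize the survivors to {-1, +1} with a per-layer scale. The quantile threshold and the single scalar scale are assumptions; the paper learns the sparsity adaptively during training:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 128))           # one layer's weight matrix

keep = 0.25                                   # ~25% reserved connections
thresh = np.quantile(np.abs(W), 1 - keep)
mask = np.abs(W) >= thresh                    # sparse connection pattern
scale = np.abs(W[mask]).mean()                # one scalar per layer
W_bin = np.where(mask, np.sign(W), 0.0)       # entries in {-1, 0, +1}

print("kept fraction:", mask.mean())                         # ~0.25
print("max abs error:", np.abs(W - scale * W_bin)[mask].max())
```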
The Extended Littlestone's Dimension for Learning with Mistakes and
Abstentions | cs.LG | This paper studies classification with an abstention option in the online
setting. In this setting, examples arrive sequentially, the learner is given a
hypothesis class $\mathcal H$, and the goal of the learner is to either predict
a label on each example or abstain, while ensuring that it does not make more
than a pre-specified number of mistakes when it does predict a label.
Previous work on this problem has left open two main challenges. First, not
much is known about the optimality of algorithms, and in particular, about what
an optimal algorithmic strategy is for any individual hypothesis class. Second,
while the realizable case has been studied, the more realistic non-realizable
scenario is not well-understood. In this paper, we address both challenges.
First, we provide a novel measure, called the Extended Littlestone's Dimension,
which captures the number of abstentions needed to ensure a certain number of
mistakes. Second, we explore the non-realizable case, and provide upper and
lower bounds on the number of abstentions required by an algorithm to guarantee
a specified number of mistakes.
| Chicheng Zhang and Kamalika Chaudhuri | null | 1604.06162 | null | null |
Training Deep Nets with Sublinear Memory Cost | cs.LG | We propose a systematic approach to reduce the memory consumption of deep
neural network training. Specifically, we design an algorithm that costs
O(sqrt(n)) memory to train an n-layer network, with only the computational cost
of an extra forward pass per mini-batch. As many of the state-of-the-art models
hit the upper bound of the GPU memory, our algorithm allows deeper and more
complex models to be explored, and helps advance the innovations in deep
learning research. We focus on reducing the memory cost to store the
intermediate feature maps and gradients during training. Computation graph
analysis is used for automatic in-place operation and memory sharing
optimizations. We show that it is possible to trade computation for memory -
giving a more memory efficient training algorithm with a little extra
computation cost. In the extreme case, our analysis also shows that the memory
consumption can be reduced to O(log n) with as little as O(n log n) extra cost
for forward computation. Our experiments show that we can reduce the memory
cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent
additional running time cost on ImageNet problems. Similarly, significant
memory cost reduction is observed in training complex recurrent neural networks
on very long sequences.
| Tianqi Chen and Bing Xu and Chiyuan Zhang and Carlos Guestrin | null | 1604.06174 | null | null |
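A minimal sketch of the O(sqrt(n))-memory idea using PyTorch's checkpoint_sequential, which stores only segment-boundary activations and recomputes the rest during the backward pass (layer sizes are illustrative):

```python
import math
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

n = 100                                           # number of layers
net = nn.Sequential(*[nn.Sequential(nn.Linear(256, 256), nn.ReLU())
                      for _ in range(n)])
x = torch.randn(32, 256, requires_grad=True)

segments = int(math.sqrt(n))                      # keep ~sqrt(n) checkpoints
out = checkpoint_sequential(net, segments, x, use_reentrant=False)
out.sum().backward()                              # one extra forward/segment
print(x.grad.shape)
```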
Robust Audio Event Recognition with 1-Max Pooling Convolutional Neural
Networks | cs.NE cs.LG cs.SD | We present in this paper a simple, yet efficient convolutional neural network
(CNN) architecture for robust audio event recognition. In contrast to deep CNN
architectures with multiple convolutional and pooling layers topped up with
multiple fully connected layers, the proposed network consists of only three
layers: convolutional, pooling, and softmax layer. Two further features
distinguish it from the deep architectures that have been proposed for the
task: varying-size convolutional filters at the convolutional layer and 1-max
pooling scheme at the pooling layer. Intuitively, the network tends to select
the most discriminative features from the whole audio signals for recognition.
Our proposed CNN not only shows state-of-the-art performance on the standard
task of robust audio event recognition but also outperforms other deep
architectures up to 4.5% in terms of recognition accuracy, which is equivalent
to 76.3% relative error reduction.
| Huy Phan, Lars Hertel, Marco Maass, Alfred Mertins | null | 1604.06338 | null | null |
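The two distinguishing features of the network, varying-size filters and 1-max pooling, are easy to sketch in numpy: each 1-D filter is slid over the whole signal and only its single strongest response survives, yielding one scalar feature per filter. Filter sizes and counts below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def conv1d_valid(x, w):
    n, k = len(x), len(w)
    return np.array([x[i:i + k] @ w for i in range(n - k + 1)])

def one_max_features(x, filters):
    # One scalar per filter: ReLU activation, then max over all time positions.
    return np.array([np.maximum(conv1d_valid(x, w), 0.0).max() for w in filters])

rng = np.random.default_rng(0)
signal = rng.normal(size=400)                                  # a toy audio frame
filters = [rng.normal(size=k) for k in (8, 16, 32) for _ in range(4)]  # varying sizes
features = one_max_features(signal, filters)   # fed to the softmax layer in the paper
```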
Robust Estimators in High Dimensions without the Computational
Intractability | cs.DS cs.IT cs.LG math.IT math.ST stat.ML stat.TH | We study high-dimensional distribution learning in an agnostic setting where
an adversary is allowed to arbitrarily corrupt an $\varepsilon$-fraction of the
samples. Such questions have a rich history spanning statistics, machine
learning and theoretical computer science. Even in the most basic settings, the
only known approaches are either computationally inefficient or lose
dimension-dependent factors in their error guarantees. This raises the
following question: Is high-dimensional agnostic distribution learning even
possible, algorithmically?
In this work, we obtain the first computationally efficient algorithms with
dimension-independent error guarantees for agnostically learning several
fundamental classes of high-dimensional distributions: (1) a single Gaussian,
(2) a product distribution on the hypercube, (3) mixtures of two product
distributions (under a natural balancedness condition), and (4) mixtures of
spherical Gaussians. Our algorithms achieve error that is independent of the
dimension, and in many cases scales nearly-linearly with the fraction of
adversarially corrupted samples. Moreover, we develop a general recipe for
detecting and correcting corruptions in high-dimensions, that may be applicable
to many other problems.
| Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Jerry Li, Ankur
Moitra, Alistair Stewart | null | 1604.06443 | null | null |
On Detection and Structural Reconstruction of Small-World Random
Networks | math.ST cs.LG stat.TH | In this paper, we study detection and fast reconstruction of the celebrated
Watts-Strogatz (WS) small-world random graph model \citep{watts1998collective}
which aims to describe real-world complex networks that exhibit both high
clustering and short average path length. The WS model with neighborhood
size $k$ and rewiring probability $\beta$ can be viewed as a
continuous interpolation between a deterministic ring lattice graph and the
Erd\H{o}s-R\'{e}nyi random graph. We study both the computational and
statistical aspects of detecting the deterministic ring lattice structure (or
local geographical links, strong ties) in the presence of random connections
(or long-range links, weak ties), and of recovering it. The phase diagram in
terms of $(k,\beta)$ is partitioned into several regions according to the
difficulty of the problem. We propose distinct methods for the various regions.
| T. Tony Cai, Tengyuan Liang and Alexander Rakhlin | 10.1109/TNSE.2017.2703102 | 1604.06474 | null | null |
Stabilized Sparse Online Learning for Sparse Data | stat.ML cs.LG | Stochastic gradient descent (SGD) is commonly used for optimization in
large-scale machine learning problems. Langford et al. (2009) introduce a
sparse online learning method to induce sparsity via truncated gradient. With
high-dimensional sparse data, however, the method suffers from slow convergence
and high variance due to the heterogeneity in feature sparsity. To mitigate
this issue, we introduce a stabilized truncated stochastic gradient descent
algorithm. We employ a soft-thresholding scheme on the weight vector where the
imposed shrinkage is adaptive to the amount of information available in each
feature. The variability in the resulting sparse weight vector is further
controlled by stability selection integrated with the informative truncation.
To facilitate better convergence, we adopt an annealing strategy on the
truncation rate, which leads to a balanced trade-off between exploration and
exploitation in learning a sparse weight vector. Numerical experiments show
that our algorithm compares favorably with the original algorithm in terms of
prediction accuracy, achieved sparsity and stability.
| Yuting Ma, Tian Zheng | null | 1604.06498 | null | null |
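A rough sketch of the truncation step is below: plain SGD on a squared loss, with periodic soft-thresholding whose per-coordinate shrinkage is scaled by how often each feature has been observed. The direction and exact form of the adaptivity here are assumptions for illustration; the paper's informative truncation, stability selection, and annealing schedule are more involved.

```python
import numpy as np

def soft_threshold(w, t):
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def stabilized_truncated_sgd(X, y, lr=0.1, base_shrink=0.01, trunc_every=10):
    n, d = X.shape
    w, counts = np.zeros(d), np.zeros(d)        # counts: per-feature information
    for t in range(n):
        x, target = X[t], y[t]
        counts += (x != 0)
        w -= lr * (w @ x - target) * x          # squared-loss gradient step
        if (t + 1) % trunc_every == 0:
            shrink = base_shrink * counts / (t + 1)   # assumed adaptive form
            w = soft_threshold(w, shrink)
    return w

rng = np.random.default_rng(0)
X = (rng.random((500, 50)) < 0.1) * rng.normal(size=(500, 50))  # sparse features
w_true = np.zeros(50); w_true[:5] = 1.0
y = X @ w_true + 0.01 * rng.normal(size=500)
w_hat = stabilized_truncated_sgd(X, y)
```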
Approximation Vector Machines for Large-scale Online Learning | cs.LG stat.ML | One of the most challenging problems in kernel online learning is to bound
the model size and to promote the model sparsity. Sparse models not only
improve computation and memory usage, but also enhance the generalization
capacity, a principle that concurs with the law of parsimony. However,
inappropriate sparsity modeling may also significantly degrade the performance.
In this paper, we propose Approximation Vector Machine (AVM), a model that can
simultaneously encourage the sparsity and safeguard its risk in compromising
the performance. When an incoming instance arrives, we approximate this
instance by one of its neighbors whose distance to it is less than a predefined
threshold. Our key intuition is that since the newly seen instance is expressed
by its nearby neighbor, the optimal performance can be analytically formulated
and maintained. We develop theoretical foundations to support this intuition
and further establish an analysis to characterize the gap between the
approximation and optimal solutions. This gap crucially depends on the
frequency of approximation and the predefined threshold. We perform the
convergence analysis for a wide spectrum of loss functions including Hinge,
smooth Hinge, and Logistic for classification task, and $l_1$, $l_2$, and
$\epsilon$-insensitive for regression task. We conducted extensive experiments
for classification task in batch and online modes, and regression task in
online mode over several benchmark datasets. The results show that our proposed
AVM achieved predictive performance comparable with current state-of-the-art
methods while delivering a significant computational speed-up, owing to its
ability to keep the model size bounded.
| Trung Le and Tu Dinh Nguyen and Vu Nguyen and Dinh Phung | null | 1604.06518 | null | null |
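The core AVM step can be sketched as follows: an incoming instance is snapped to an existing core point within the predefined threshold, so the number of stored points, and hence the model size, stays bounded. The linear kernel and hinge-loss update below are simplifications for illustration; the paper covers general kernels and several loss functions.

```python
import numpy as np

class ApproximationVectorMachine:
    def __init__(self, threshold=0.5, lr=0.1):
        self.threshold, self.lr = threshold, lr
        self.cores, self.alphas = [], []        # stored points and their weights

    def _nearest_core(self, x):
        if not self.cores:
            return None, np.inf
        dists = [np.linalg.norm(x - c) for c in self.cores]
        i = int(np.argmin(dists))
        return i, dists[i]

    def partial_fit(self, x, y):                # y in {-1, +1}
        i, dist = self._nearest_core(x)
        if dist > self.threshold:               # too far: admit x as a new core
            self.cores.append(x.copy())
            self.alphas.append(0.0)
            i = len(self.cores) - 1
        # Score the proxy core with a linear kernel (a kernel could replace @).
        score = sum(a * (c @ self.cores[i]) for a, c in zip(self.alphas, self.cores))
        if y * score < 1.0:                     # hinge-loss update on the proxy core
            self.alphas[i] += self.lr * y

rng = np.random.default_rng(0)
avm = ApproximationVectorMachine()
for _ in range(200):
    x = rng.normal(size=2)
    avm.partial_fit(x, 1 if x.sum() > 0 else -1)
print(len(avm.cores))                           # bounded model size
```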
Dependency Parsing with LSTMs: An Empirical Evaluation | cs.CL cs.LG cs.NE | We propose a transition-based dependency parser using Recurrent Neural
Networks with Long Short-Term Memory (LSTM) units. This extends the feedforward
neural network parser of Chen and Manning (2014) and enables modelling of
entire sequences of shift/reduce transition decisions. On the Google Web
Treebank, our LSTM parser is competitive with the best feedforward parser on
overall accuracy and notably achieves more than 3% improvement for long-range
dependencies, which has proved difficult for previous transition-based parsers
due to error propagation and limited context information. Our findings
additionally suggest that dropout regularisation on the embedding layer is
crucial to improve the LSTM's generalisation.
| Adhiguna Kuncoro, Yuichiro Sawai, Kevin Duh, Yuji Matsumoto | null | 1604.06529 | null | null |
CT-Mapper: Mapping Sparse Multimodal Cellular Trajectories using a
Multilayer Transportation Network | cs.SI cs.LG | Mobile phone data have recently become an attractive source of information
about mobility behavior. Since cell phone data can be captured in a passive way
for a large user population, they can be harnessed to collect well-sampled
mobility information. In this paper, we propose CT-Mapper, an unsupervised
algorithm that enables the mapping of mobile phone traces over a multimodal
transport network. One of the main strengths of CT-Mapper is its capability to
map noisy, sparse cellular multimodal trajectories over a multilayer
transportation network whose layers have different physical properties, rather
than only mapping trajectories associated with a single layer. Such a network is
modeled by a large multilayer graph in which the nodes correspond to
metro/train stations or road intersections and edges correspond to connections
between them. The mapping problem is modeled by an unsupervised HMM where the
observations correspond to sparse user mobile trajectories and the hidden
states to the multilayer graph nodes. The HMM is unsupervised as the transition
and emission probabilities are inferred using respectively the physical
transportation properties and the information on the spatial coverage of
antenna base stations. To evaluate CT-Mapper we collected cellular traces with
their corresponding GPS trajectories for a group of volunteer users in Paris
and vicinity (France). We show that CT-Mapper is able to accurately retrieve
the real cell phone user paths despite the sparsity of the observed trace
trajectories. Furthermore, our transition probability model is up to 20% more
accurate than other naive models.
| Fereshteh Asgari and Alexis Sultan and Haoyi Xiong and Vincent
Gauthier and Mounim El-Yacoubi | null | 1604.06577 | null | null |
Clustering with Missing Features: A Penalized Dissimilarity Measure
based approach | cs.LG | Many real-world clustering problems are plagued by incomplete data
characterized by missing or absent features for some or all of the data
instances. Traditional clustering methods cannot be directly applied to such
data without preprocessing by imputation or marginalization techniques. In this
article, we overcome this drawback by utilizing a penalized dissimilarity
measure which we refer to as the Feature Weighted Penalty based Dissimilarity
(FWPD). Using the FWPD measure, we modify the traditional k-means clustering
algorithm and the standard hierarchical agglomerative clustering algorithms so
as to make them directly applicable to datasets with missing features. We
present time complexity analyses for these new techniques and also undertake a
detailed theoretical analysis showing that the new FWPD based k-means algorithm
converges to a local optimum within a finite number of iterations. We also
present a detailed method for simulating random as well as feature dependent
missingness. We report extensive experiments on various benchmark datasets for
different types of missingness showing that the proposed clustering techniques
have generally better results compared to some of the most well-known
imputation methods which are commonly used to handle such incomplete data. We
append a possible extension of the proposed dissimilarity measure to the case
of absent features (where the unobserved features are known to be undefined).
| Shounak Datta, Supritam Bhattacharjee, Swagatam Das | null | 1604.06602 | null | null |
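A minimal sketch of a feature-weighted penalized dissimilarity between instances with missing values (NaN) is given below: a distance over commonly observed features plus a penalty for features observed in only one of the two points, weighted per feature. The exact FWPD weighting and normalization in the paper may differ; this follows its general shape.

```python
import numpy as np

def fwpd(x, y, weights, alpha=0.5):
    obs_x, obs_y = ~np.isnan(x), ~np.isnan(y)
    both = obs_x & obs_y
    if both.any():                              # distance over shared features
        d = np.sqrt(((x[both] - y[both]) ** 2).sum() / both.sum())
    else:
        d = 0.0
    mismatch = obs_x ^ obs_y                    # observed in only one of the two
    penalty = weights[mismatch].sum()
    return (1 - alpha) * d + alpha * penalty

X = np.array([[1.0, np.nan, 3.0],
              [1.5, 2.0, np.nan]])
w = np.full(3, 1.0 / 3.0)                       # e.g. frequency-based feature weights
print(fwpd(X[0], X[1], w))                      # usable inside k-means or HAC
```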
The Mean Partition Theorem of Consensus Clustering | cs.LG cs.CV stat.ML | To devise efficient solutions for approximating a mean partition in consensus
clustering, Dimitriadou et al. [3] presented a necessary condition of
optimality for a consensus function based on least square distances. We show
that their result is pivotal for deriving interesting properties of consensus
clustering beyond optimization. For this, we present the necessary condition of
optimality in a slightly stronger form in terms of the Mean Partition Theorem
and extend it to the Expected Partition Theorem. To underpin its versatility,
we show three examples that apply the Mean Partition Theorem: (i) equivalence
of the mean partition and optimal multiple alignment, (ii) construction of
profiles and motifs, and (iii) relationship between consensus clustering and
cluster stability.
| Brijnesh J. Jain | null | 1604.06626 | null | null |
Bridging LSTM Architecture and the Neural Dynamics during Reading | cs.CL cs.AI cs.LG cs.NE | Recently, the long short-term memory neural network (LSTM) has attracted wide
interest due to its success in many tasks. The LSTM architecture consists of a
memory cell and three gates, a structure that resembles neuronal networks in
the brain. However, evidence for the cognitive plausibility of the LSTM
architecture and its working mechanism is still lacking. In this
paper, we study the cognitive plausibility of LSTM by aligning its internal
architecture with the brain activity observed via fMRI when the subjects read a
story. Experiment results show that the artificial memory vector in LSTM can
accurately predict the observed sequential brain activities, indicating the
correlation between LSTM architecture and the cognitive process of story
reading.
| Peng Qian, Xipeng Qiu, Xuanjing Huang | null | 1604.06635 | null | null |
Developing an ICU scoring system with interaction terms using a genetic
algorithm | cs.NE cs.LG stat.ML | ICU mortality scoring systems attempt to predict patient mortality using
predictive models with various clinical predictors. Examples of such systems
are APACHE, SAPS and MPM. However, most such scoring systems do not actively
look for and include interaction terms, despite physicians intuitively taking
such interactions into account when making a diagnosis. One barrier to
including such terms in predictive models is the difficulty of using most
variable selection methods in high-dimensional datasets. A genetic algorithm
framework for variable selection with logistic regression models is used to
search for two-way interaction terms in a clinical dataset of adult ICU
patients, with separate models being built for each category of diagnosis upon
admittance to the ICU. The models had good discrimination across all
categories, with a weighted average AUC of 0.84 (>0.90 for several categories),
and the genetic algorithm was able to find several significant interaction
terms, which may provide greater insight into mortality prediction
for health practitioners. The GA-selected models outperformed
stepwise selection and random forest models, and the approach provides greater
flexibility in terms of variable selection by being able to optimize over any
modeler-defined model performance metric instead of a specific variable
importance metric.
| Chee Chun Gan and Gerard Learmonth | null | 1604.06730 | null | null |
Entity Embeddings of Categorical Variables | cs.LG | We map categorical variables in a function approximation problem into
Euclidean spaces, which are the entity embeddings of the categorical variables.
The mapping is learned by a neural network during the standard supervised
training process. Entity embedding not only reduces memory usage and speeds up
neural networks compared with one-hot encoding, but more importantly by mapping
similar values close to each other in the embedding space it reveals the
intrinsic properties of the categorical variables. We applied it successfully
in a recent Kaggle competition and were able to reach the third position with
relatively simple features. We further demonstrate in this paper that entity
embedding helps the neural network to generalize better when the data is sparse
and its statistics are unknown. Thus it is especially useful for datasets with lots
of high cardinality features, where other methods tend to overfit. We also
demonstrate that the embeddings obtained from the trained neural network boost
the performance of all tested machine learning methods considerably when used
as the input features instead. As entity embedding defines a distance measure
for categorical variables it can be used for visualizing categorical data and
for data clustering.
| Cheng Guo and Felix Berkhahn | null | 1604.06737 | null | null |
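A minimal Keras sketch of the idea is below: one categorical input is mapped through a learned embedding layer and concatenated with numeric features. The vocabulary size, embedding width, and toy data are illustrative assumptions, not the competition setup.

```python
import numpy as np
from tensorflow.keras import layers, Model

n_categories, emb_dim = 1000, 8                 # illustrative sizes
cat_in = layers.Input(shape=(1,), dtype="int32")
num_in = layers.Input(shape=(4,))               # other numeric features
emb = layers.Embedding(n_categories, emb_dim, name="cat_emb")(cat_in)
h = layers.Concatenate()([layers.Flatten()(emb), num_in])
h = layers.Dense(32, activation="relu")(h)
model = Model([cat_in, num_in], layers.Dense(1)(h))
model.compile(optimizer="adam", loss="mse")

X_cat = np.random.randint(0, n_categories, size=(256, 1))
X_num = np.random.normal(size=(256, 4))
y = np.random.normal(size=(256, 1))
model.fit([X_cat, X_num], y, epochs=1, verbose=0)

# The learned embedding matrix can be reused as features for other models
# or for visualizing/clustering the categories, as the abstract suggests.
embeddings = model.get_layer("cat_emb").get_weights()[0]   # shape (1000, 8)
```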
Latent Contextual Bandits and their Application to Personalized
Recommendations for New Users | cs.LG cs.AI | Personalized recommendations for new users, also known as the cold-start
problem, can be formulated as a contextual bandit problem. Existing contextual
bandit algorithms generally rely on features alone to capture user variability.
Such methods are inefficient in learning new users' interests. In this paper we
propose Latent Contextual Bandits. We consider both the benefit of leveraging a
set of learned latent user classes for new users, and how we can learn such
latent classes from prior users. We show that our approach achieves a better
regret bound than existing algorithms. We also demonstrate the benefit of our
approach using a large real world dataset and a preliminary user study.
| Li Zhou and Emma Brunskill | null | 1604.06743 | null | null |
Benchmarking Deep Reinforcement Learning for Continuous Control | cs.LG cs.AI cs.RO | Recently, researchers have made significant progress combining the advances
in deep learning for learning feature representations with reinforcement
learning. Some notable examples include training agents to play Atari games
based on raw pixel data and to acquire advanced manipulation skills using raw
sensory inputs. However, it has been difficult to quantify progress in the
domain of continuous control due to the lack of a commonly adopted benchmark.
In this work, we present a benchmark suite of continuous control tasks,
including classic tasks like cart-pole swing-up, tasks with very high state and
action dimensionality such as 3D humanoid locomotion, tasks with partial
observations, and tasks with hierarchical structure. We report novel findings
based on the systematic evaluation of a range of implemented reinforcement
learning algorithms. Both the benchmark and reference implementations are
released at https://github.com/rllab/rllab in order to facilitate experimental
reproducibility and to encourage adoption by other researchers.
| Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel | null | 1604.06778 | null | null |
A Computational Model for Situated Task Learning with Interactive
Instruction | cs.AI cs.LG | Learning novel tasks is a complex cognitive activity requiring the learner to
acquire diverse declarative and procedural knowledge. Prior ACT-R models of
acquiring task knowledge from instruction focused on learning procedural
knowledge from declarative instructions encoded in semantic memory. In this
paper, we identify the requirements for designing computational models that
learn task knowledge from situated task-oriented interactions with an expert
and then describe and evaluate a model of learning from situated interactive
instruction that is implemented in the Soar cognitive architecture.
| Shiwali Mohan, James Kirk, John Laird | null | 1604.06849 | null | null |
On the Sample Complexity of End-to-end Training vs. Semantic Abstraction
Training | cs.LG | We compare the end-to-end training approach to a modular approach in which a
system is decomposed into semantically meaningful components. We focus on the
sample complexity aspect, in the regime where an extremely high accuracy is
necessary, as is the case in autonomous driving applications. We demonstrate
cases in which the number of training examples required by the end-to-end
approach is exponentially larger than the number of examples required by the
semantic abstraction approach.
| Shai Shalev-Shwartz and Amnon Shashua | null | 1604.06915 | null | null |
Agnostic Estimation of Mean and Covariance | cs.DS cs.LG stat.ML | We consider the problem of estimating the mean and covariance of a
distribution from iid samples in $\mathbb{R}^n$, in the presence of an $\eta$
fraction of malicious noise; this is in contrast to much recent work where the
noise itself is assumed to be from a distribution of known type. The agnostic
problem includes many interesting special cases, e.g., learning the parameters
of a single Gaussian (or finding the best-fit Gaussian) when $\eta$ fraction of
data is adversarially corrupted, agnostically learning a mixture of Gaussians,
agnostic ICA, etc. We present polynomial-time algorithms to estimate the mean
and covariance with error guarantees in terms of information-theoretic lower
bounds. As a corollary, we also obtain an agnostic algorithm for Singular Value
Decomposition.
| Kevin A. Lai, Anup B. Rao, Santosh Vempala | null | 1604.06968 | null | null |
Deep Learning with Eigenvalue Decay Regularizer | cs.LG | This paper extends our previous work on regularization of neural networks
using Eigenvalue Decay by employing a soft approximation of the dominant
eigenvalue in order to enable the calculation of its derivatives with respect to
the synaptic weights, and therefore the application of back-propagation, which
is a prerequisite for deep learning. Moreover, we extend our previous
theoretical analysis to deep neural networks and multiclass classification
problems. Our method is implemented as an additional regularizer in Keras, a
modular neural networks library written in Python, and evaluated in the
benchmark data sets Reuters Newswire Topics Classification, IMDB database for
binary sentiment classification, MNIST database of handwritten digits and
CIFAR-10 data set for image classification.
| Oswaldo Ludwig | null | 1604.06985 | null | null |
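One way to build a smooth surrogate of the dominant eigenvalue, in the spirit described above, is a log-sum-exp over the spectrum; whether this matches the paper's exact approximation is an assumption, but it illustrates why the penalty becomes differentiable and thus compatible with back-propagation.

```python
import numpy as np

def soft_dominant_eigenvalue(W, t=50.0):
    # Smooth log-sum-exp surrogate for the dominant eigenvalue of W^T W;
    # differentiable, and it approaches the true maximum as t grows.
    ev = np.linalg.eigvalsh(W.T @ W)
    m = ev.max()
    return m + np.log(np.exp(t * (ev - m)).sum()) / t

W = np.random.default_rng(0).normal(size=(20, 10))
print(soft_dominant_eigenvalue(W))              # added (scaled) to the training loss
print(np.linalg.eigvalsh(W.T @ W).max())        # close for large t
```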
Stochastic Variance-Reduced ADMM | cs.LG math.OC stat.ML | The alternating direction method of multipliers (ADMM) is a powerful
optimization solver in machine learning. Recently, stochastic ADMM has been
integrated with variance reduction methods for stochastic gradient, leading to
SAG-ADMM and SDCA-ADMM that have fast convergence rates and low iteration
complexities. However, their space requirements can still be high. In this
paper, we propose an integration of ADMM with the method of stochastic variance
reduced gradient (SVRG). Unlike another recent integration attempt called
SCAS-ADMM, the proposed algorithm retains the fast convergence benefits of
SAG-ADMM and SDCA-ADMM, but is more advantageous in that its storage
requirement is very low, even independent of the sample size $n$. We also
extend the proposed method for nonconvex problems, and obtain a convergence
rate of $O(1/T)$. Experimental results demonstrate that it is as fast as
SAG-ADMM and SDCA-ADMM, much faster than SCAS-ADMM, and can be used on much
bigger data sets.
| Shuai Zheng and James T. Kwok | null | 1604.07070 | null | null |
Unsupervised Representation Learning of Structured Radio Communication
Signals | cs.LG | We explore unsupervised representation learning of radio communication
signals in raw sampled time series representation. We demonstrate that we can
learn modulation basis functions using convolutional autoencoders and visually
recognize their relationship to the analytic bases used in digital
communications. We also propose and evaluate quantitative metrics for quality
of encoding using domain-relevant performance metrics.
| Timothy J. O'Shea, Johnathan Corgan, T. Charles Clancy | null | 1604.07078 | null | null |
Semi-supervised Vocabulary-informed Learning | cs.CV cs.AI cs.LG stat.AP stat.ML | Despite significant progress in object categorization, in recent years, a
number of important challenges remain, mainly, ability to learn from limited
labeled data and ability to recognize object classes within large, potentially
open, set of labels. Zero-shot learning is one way of addressing these
challenges, but it has only been shown to work with limited sized class
vocabularies and typically requires separation between supervised and
unsupervised classes, allowing the former to inform the latter but not vice versa.
We propose the notion of semi-supervised vocabulary-informed learning to
alleviate the above-mentioned challenges and address problems of supervised,
zero-shot and open set recognition using a unified framework. Specifically, we
propose a maximum margin framework for semantic manifold-based recognition that
incorporates distance constraints from (both supervised and unsupervised)
vocabulary atoms, ensuring that labeled samples are projected closer, in the
embedding space, to their correct prototypes than to others. We show that the
resulting model yields improvements in supervised, zero-shot, and large open set
recognition, with up to 310K class vocabulary on AwA and ImageNet datasets.
| Yanwei Fu, Leonid Sigal | null | 1604.07093 | null | null |
Double Thompson Sampling for Dueling Bandits | cs.LG stat.ML | In this paper, we propose a Double Thompson Sampling (D-TS) algorithm for
dueling bandit problems. As indicated by its name, D-TS selects both the first
and the second candidates according to Thompson Sampling. Specifically, D-TS
maintains a posterior distribution for the preference matrix, and chooses the
pair of arms for comparison by sampling twice from the posterior distribution.
This simple algorithm applies to general Copeland dueling bandits, including
Condorcet dueling bandits as its special case. For general Copeland dueling
bandits, we show that D-TS achieves $O(K^2 \log T)$ regret. For Condorcet
dueling bandits, we further simplify the D-TS algorithm and show that the
simplified D-TS algorithm achieves $O(K \log T + K^2 \log \log T)$ regret.
Simulation results based on both synthetic and real-world data demonstrate the
efficiency of the proposed D-TS algorithm.
| Huasen Wu and Xin Liu | null | 1604.07101 | null | null |
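A minimal sketch of D-TS for this setting is below: Beta posteriors over pairwise win probabilities, one posterior sample to pick the first arm and an independent sample to pick its challenger. The confidence-interval pruning of the full algorithm is omitted, and the toy preference matrix is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5
P = np.full((K, K), 0.5)                        # P[i, j] = true P(i beats j)
iu = np.triu_indices(K, 1)
P[iu] = rng.uniform(0.2, 0.8, size=iu[0].size)
P[iu[::-1]] = 1.0 - P[iu]                       # keep the matrix consistent

wins = np.ones((K, K))                          # Beta(1, 1) priors on each pair
losses = np.ones((K, K))

for t in range(2000):
    theta1 = rng.beta(wins, losses)             # first posterior sample
    np.fill_diagonal(theta1, 0.5)
    a1 = int(np.argmax((theta1 > 0.5).sum(axis=1)))   # Copeland-style winner
    theta2 = rng.beta(wins[:, a1], losses[:, a1])     # second, independent sample
    theta2[a1] = 0.5
    a2 = int(np.argmax(theta2))                 # best challenger against a1
    if rng.random() < P[a1, a2]:                # duel, then update the posterior
        wins[a1, a2] += 1; losses[a2, a1] += 1
    else:
        wins[a2, a1] += 1; losses[a1, a2] += 1
```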
Neural Random Forests | stat.ML cs.LG math.ST stat.TH | Given an ensemble of randomized regression trees, it is possible to
restructure them as a collection of multilayered neural networks with
particular connection weights. Following this principle, we reformulate the
random forest method of Breiman (2001) into a neural network setting, and in
turn propose two new hybrid procedures that we call neural random forests. Both
predictors exploit prior knowledge of regression trees for their architecture,
have fewer parameters to tune than standard networks, and fewer restrictions on
the geometry of the decision boundaries than trees. Consistency results are
proved, and substantial numerical evidence is provided on both synthetic and
real data sets to assess the excellent performance of our methods in a large
variety of prediction problems.
| G\'erard Biau (LPMA, LSTA), Erwan Scornet (LSTA), Johannes Welbl (UCL) | null | 1604.07143 | null | null |
Protein Secondary Structure Prediction Using Cascaded Convolutional and
Recurrent Neural Networks | q-bio.BM cs.AI cs.LG cs.NE q-bio.QM | Protein secondary structure prediction is an important problem in
bioinformatics. Inspired by the recent successes of deep neural networks, in
this paper, we propose an end-to-end deep network that predicts protein
secondary structures from integrated local and global contextual features. Our
deep architecture leverages convolutional neural networks with different kernel
sizes to extract multiscale local contextual features. In addition, considering
long-range dependencies existing in amino acid sequences, we set up a
bidirectional neural network consisting of gated recurrent units to capture
global contextual features. Furthermore, multi-task learning is utilized to
predict secondary structure labels and amino-acid solvent accessibility
simultaneously. Our proposed deep network demonstrates its effectiveness by
achieving state-of-the-art performance, i.e., 69.7% Q8 accuracy on the public
benchmark CB513, 76.9% Q8 accuracy on CASP10 and 73.1% Q8 accuracy on CASP11.
Our model and results are publicly available.
| Zhen Li and Yizhou Yu | null | 1604.07176 | null | null |
Weighted Spectral Cluster Ensemble | cs.LG cs.AI stat.ML | Clustering explores meaningful patterns in the non-labeled data sets. Cluster
Ensemble Selection (CES) is a new approach, which can combine individual
clustering results for increasing the performance of the final results.
Although CES can achieve better final results in comparison with individual
clustering algorithms and cluster ensemble methods, its performance can be
dramatically affected by its consensus diversity metric and thresholding
procedure. There are two problems in CES: 1) most diversity metrics are
based on heuristic Shannon entropy, and 2) estimating threshold values is
hard in practice. The main goal of this paper is to propose a robust
approach for solving the above-mentioned problems. Accordingly, this paper
develops a novel framework for clustering problems, which is called Weighted
Spectral Cluster Ensemble (WSCE), by exploiting some concepts from the community
detection arena and graph-based clustering. Under this framework, a new version
of spectral clustering, called Two Kernels Spectral Clustering, is
used for generating graph-based individual clustering results. Further, by
using modularity, a well-known metric in community detection, on the
transformed graph representation of individual clustering results, our approach
provides an effective diversity estimation for individual clustering results.
Moreover, this paper introduces a new approach for combining the evaluated
individual clustering results without the procedure of thresholding.
An experimental study on varied data sets demonstrates that the proposed approach
achieves superior performance to state-of-the-art methods.
| Muhammad Yousefnezhad, Daoqiang Zhang | 10.1109/ICDM.2015.145 | 1604.07178 | null | null |
Observing and Recommending from a Social Web with Biases | cs.DB cs.LG | The research question this report addresses is: how, and to what extent,
those directly involved with the design, development and employment of a
specific black box algorithm can be certain that it is not unlawfully
discriminating (directly and/or indirectly) against particular persons with
protected characteristics (e.g. gender, race and ethnicity)?
| Steffen Staab and Sophie Stalla-Bourdillon and Laura Carmichael | null | 1604.07180 | null | null |
Unbiased Comparative Evaluation of Ranking Functions | cs.IR cs.LG | Eliciting relevance judgments for ranking evaluation is labor-intensive and
costly, motivating careful selection of which documents to judge. Unlike
traditional approaches that make this selection deterministically,
probabilistic sampling has shown intriguing promise since it enables the design
of estimators that are provably unbiased even when reusing data with missing
judgments. In this paper, we first unify and extend these sampling approaches
by viewing the evaluation problem as a Monte Carlo estimation task that applies
to a large number of common IR metrics. Drawing on the theoretical clarity that
this view offers, we tackle three practical evaluation scenarios: comparing two
systems, comparing $k$ systems against a baseline, and ranking $k$ systems. For
each scenario, we derive an estimator and a variance-optimizing sampling
distribution while retaining the strengths of sampling-based evaluation,
including unbiasedness, reusability despite missing data, and ease of use in
practice. In addition to the theoretical contribution, we empirically evaluate
our methods against previously used sampling heuristics and find that they
generally cut the number of required relevance judgments at least in half.
| Tobias Schnabel, Adith Swaminathan, Peter Frazier, Thorsten Joachims | null | 1604.07209 | null | null |
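The Monte Carlo view can be illustrated with a toy inverse-propensity (Horvitz-Thompson) estimator: judge only a sampled subset of documents and reweight by inclusion probabilities, giving an unbiased estimate of an additive metric. The uniform sampling distribution below is for brevity; the paper derives variance-optimizing distributions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_docs = 1000
relevance = (rng.random(n_docs) < 0.1).astype(float)   # unknown ground truth
gain = 1.0 / np.log2(np.arange(2, n_docs + 2))         # DCG-style position discount

p = np.full(n_docs, 0.2)                               # inclusion probabilities
judged = rng.random(n_docs) < p                        # which docs get judged

true_metric = (relevance * gain).sum()
estimate = (relevance[judged] * gain[judged] / p[judged]).sum()
print(true_metric, estimate)       # equal in expectation over repeated samples
```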
Towards Reduced Reference Parametric Models for Estimating Audiovisual
Quality in Multimedia Services | cs.MM cs.LG | We have developed reduced reference parametric models for estimating
perceived quality in audiovisual multimedia services. We have created 144
unique configurations for audiovisual content including various application and
network parameters such as bitrates and distortions in terms of bandwidth,
packet loss rate and jitter. To generate the data needed for model training and
validation we have tasked 24 subjects, in a controlled environment, to rate the
overall audiovisual quality on the absolute category rating (ACR) 5-level
quality scale. We have developed models using Random Forest and Neural Network
based machine learning methods in order to estimate Mean Opinion Scores (MOS)
values. We have used information retrieved from the packet headers and side
information provided as network parameters for model training. Random Forest
based models have performed better in terms of Root Mean Square Error (RMSE)
and Pearson correlation coefficient. The side information proved to be very
effective in developing the model. We have found that, while model
performance might be improved by replacing the side information with more
accurate bit-stream-level measurements, the models perform well in estimating
perceived quality in audiovisual multimedia services.
| Edip Demirbilek and Jean-Charles Gr\'egoire | null | 1604.07211 | null | null |
Learning Arbitrary Sum-Product Network Leaves with
Expectation-Maximization | cs.LG | Sum-Product Networks with complex probability distribution at the leaves have
been shown to be powerful tractable-inference probabilistic models. However,
while learning the internal parameters has been amply studied, learning complex
leaf distribution is an open problem with only few results available in special
cases. In this paper we derive an efficient method to learn a very large class
of leaf distributions with Expectation-Maximization. The EM updates have the
form of simple weighted maximum likelihood problems, allowing to use any
distribution that can be learned with maximum likelihood, even approximately.
The algorithm has cost linear in the model size and converges even if only
partial optimizations are performed. We demonstrate this approach with
experiments on twenty real-life datasets for density estimation, using tree
graphical models as leaves. Our model outperforms state-of-the-art methods for
parameter learning despite using SPNs with much fewer parameters.
| Mattia Desana and Christoph Schn\"orr | null | 1604.07243 | null | null |
A Deep Hierarchical Approach to Lifelong Learning in Minecraft | cs.AI cs.LG | We propose a lifelong learning system that has the ability to reuse and
transfer knowledge from one task to another while efficiently retaining the
previously learned knowledge-base. Knowledge is transferred by learning
reusable skills to solve tasks in Minecraft, a popular video game which is an
unsolved and high-dimensional lifelong learning problem. These reusable skills,
which we refer to as Deep Skill Networks, are then incorporated into our novel
Hierarchical Deep Reinforcement Learning Network (H-DRLN) architecture using
two techniques: (1) a deep skill array and (2) skill distillation, our novel
variation of policy distillation (Rusu et. al. 2015) for learning skills. Skill
distillation enables the HDRLN to efficiently retain knowledge and therefore
scale in lifelong learning, by accumulating knowledge and encapsulating
multiple reusable skills into a single distilled network. The H-DRLN exhibits
superior performance and lower learning sample complexity compared to the
regular Deep Q Network (Mnih et al., 2015) in sub-domains of Minecraft.
| Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J. Mankowitz, Shie
Mannor | null | 1604.07255 | null | null |
CMA-ES for Hyperparameter Optimization of Deep Neural Networks | cs.NE cs.LG | Hyperparameters of deep neural networks are often optimized by grid search,
random search or Bayesian optimization. As an alternative, we propose to use
the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), which is known
for its state-of-the-art performance in derivative-free optimization. CMA-ES
has some useful invariance properties and is friendly to parallel evaluations
of solutions. We provide a toy example comparing CMA-ES and state-of-the-art
Bayesian optimization algorithms for tuning the hyperparameters of a
convolutional neural network for the MNIST dataset on 30 GPUs in parallel.
| Ilya Loshchilov and Frank Hutter | null | 1604.07269 | null | null |
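A sketch of the tuning loop with the pycma package (an assumption; installable as the cma package on PyPI) is below; train_and_validate is a hypothetical stand-in for training the CNN and returning validation error, replaced here by a smooth toy objective. In the paper's setting, the asked candidates are evaluated in parallel on GPUs.

```python
import cma   # the pycma package; assumed available

def train_and_validate(log_lr, log_wd):
    # Placeholder objective: a bowl around an assumed optimum at (-3, -5).
    return (log_lr + 3.0) ** 2 + (log_wd + 5.0) ** 2

# Optimize (log10 learning rate, log10 weight decay) from a broad start.
es = cma.CMAEvolutionStrategy([0.0, 0.0], 2.0)
while not es.stop():
    candidates = es.ask()                       # a population of parameter vectors
    es.tell(candidates, [train_and_validate(*c) for c in candidates])
best_log_lr, best_log_wd = es.result.xbest
```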
End to End Learning for Self-Driving Cars | cs.CV cs.LG cs.NE | We trained a convolutional neural network (CNN) to map raw pixels from a
single front-facing camera directly to steering commands. This end-to-end
approach proved surprisingly powerful. With minimal training data from humans,
the system learns to drive in traffic on local roads with or without lane
markings and on highways. It also operates in areas with unclear visual
guidance such as in parking lots and on unpaved roads.
The system automatically learns internal representations of the necessary
processing steps such as detecting useful road features with only the human
steering angle as the training signal. We never explicitly trained it to
detect, for example, the outline of roads.
Compared to explicit decomposition of the problem, such as lane marking
detection, path planning, and control, our end-to-end system optimizes all
processing steps simultaneously. We argue that this will eventually lead to
better performance and smaller systems. Better performance will result because
the internal components self-optimize to maximize overall system performance,
instead of optimizing human-selected intermediate criteria, e.g., lane
detection. Such criteria are understandably selected for ease of human
interpretation, which doesn't automatically guarantee maximum system
performance. Smaller networks are possible because the system learns to solve
the problem with the minimal number of processing steps.
We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX
self-driving car computer also running Torch 7 for determining where to drive.
The system operates at 30 frames per second (FPS).
| Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard
Firner, Beat Flepp, Prasoon Goyal, Lawrence D. Jackel, Mathew Monfort, Urs
Muller, Jiakai Zhang, Xin Zhang, Jake Zhao, Karol Zieba | null | 1604.07316 | null | null |
Fast nonlinear embeddings via structured matrices | stat.ML cs.LG | We present a new paradigm for speeding up randomized computations of several
frequently used functions in machine learning. In particular, our paradigm can
be applied for improving computations of kernels based on random embeddings.
Above that, the presented framework covers multivariate randomized functions.
As a byproduct, we propose an algorithmic approach that also leads to a
significant reduction of space complexity. Our method is based on careful
recycling of Gaussian vectors into structured matrices that share properties of
fully random matrices. The quality of the proposed structured approach follows
from combinatorial properties of the graphs encoding correlations between rows
of these structured matrices. Our framework covers as special cases already
known structured approaches such as the Fast Johnson-Lindenstrauss Transform,
but is much more general since it can be applied also to highly nonlinear
embeddings. We provide strong concentration results showing the quality of the
presented paradigm.
| Krzysztof Choromanski, Francois Fagan | null | 1604.07356 | null | null |
Context Encoders: Feature Learning by Inpainting | cs.CV cs.AI cs.GR cs.LG | We present an unsupervised visual feature learning algorithm driven by
context-based pixel prediction. By analogy with auto-encoders, we propose
Context Encoders -- a convolutional neural network trained to generate the
contents of an arbitrary image region conditioned on its surroundings. In order
to succeed at this task, context encoders need to both understand the content
of the entire image, as well as produce a plausible hypothesis for the missing
part(s). When training context encoders, we have experimented with both a
standard pixel-wise reconstruction loss, as well as a reconstruction plus an
adversarial loss. The latter produces much sharper results because it can
better handle multiple modes in the output. We found that a context encoder
learns a representation that captures not just appearance but also the
semantics of visual structures. We quantitatively demonstrate the effectiveness
of our learned features for CNN pre-training on classification, detection, and
segmentation tasks. Furthermore, context encoders can be used for semantic
inpainting tasks, either stand-alone or as initialization for non-parametric
methods.
| Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell,
Alexei A. Efros | null | 1604.07379 | null | null |
Deep Multi-fidelity Gaussian Processes | cs.LG stat.ML | We develop a novel multi-fidelity framework that goes far beyond the
classical AR(1) Co-kriging scheme of Kennedy and O'Hagan (2000). Our method can
handle general discontinuous cross-correlations among systems with different
levels of fidelity. A combination of multi-fidelity Gaussian Processes (AR(1)
Co-kriging) and deep neural networks enables us to construct a method that is
immune to discontinuities. We demonstrate the effectiveness of the new
technology using standard benchmark problems designed to resemble the outputs
of complicated high- and low-fidelity codes.
| Maziar Raissi and George Karniadakis | null | 1604.07484 | null | null |
A New Approach in Persian Handwritten Letters Recognition Using Error
Correcting Output Coding | cs.CV cs.LG stat.ML | Classification Ensemble, which uses the weighed polling of outputs, is the
art of combining a set of basic classifiers for generating high-performance,
robust and more stable results. This study aims to improve the results of
identifying Persian handwritten letters using the Error Correcting Output
Coding (ECOC) ensemble method. Furthermore, feature selection is used to
reduce the costs of errors in our proposed method. ECOC is a method for
decomposing a multi-way classification problem into many binary classification
tasks; and then combining the results of the subtasks into a hypothesized
solution to the original problem. Firstly, the image features are extracted by
Principal Components Analysis (PCA). After that, ECOC, with Support Vector
Machine (SVM) as the base classifier, is used to identify the Persian
handwritten letters. The empirical results of applying this
ensemble method using 10 real-world data sets of Persian handwritten letters
indicate that this method has better results in identifying the Persian
handwritten letters than other ensemble methods and also single
classifications. Moreover, by testing a number of different features, this
paper found that we can reduce the additional cost in the feature selection stage
by using this method.
| Maziar Kazemi, Muhammad Yousefnezhad, Saber Nourian | null | 1604.07554 | null | null |
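The described pipeline maps naturally onto scikit-learn, as in the sketch below: PCA features feeding an error-correcting output code ensemble with SVM base classifiers. The data shapes and parameters are placeholders, not the Persian handwriting datasets themselves.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.multiclass import OutputCodeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))                  # e.g. flattened letter images
y = rng.integers(0, 32, size=300)               # 32 letter classes

clf = make_pipeline(
    PCA(n_components=20),
    OutputCodeClassifier(SVC(kernel="linear"), code_size=2.0, random_state=0),
)
clf.fit(X, y)
print(clf.score(X, y))
```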
Online Influence Maximization in Non-Stationary Social Networks | cs.SI cs.DS cs.LG | Social networks have been popular platforms for information propagation. An
important use case is viral marketing: given a promotion budget, an advertiser
can choose some influential users as the seed set and provide them free or
discounted sample products; in this way, the advertiser hopes to increase the
popularity of the product in the users' friend circles by the word-of-mouth
effect, and thus maximizes the number of users that information about the
product can reach. There has been a body of literature studying the
influence maximization problem. Nevertheless, the existing studies mostly
investigate the problem on a one-off basis, assuming fixed known influence
probabilities among users, or the knowledge of the exact social network
topology. In practice, the social network topology and the influence
probabilities are typically unknown to the advertiser, which can be varying
over time, i.e., in cases of newly established, strengthened or weakened social
ties. In this paper, we focus on a dynamic non-stationary social network and
design a randomized algorithm, RSB, based on multi-armed bandit optimization,
to maximize influence propagation over time. The algorithm produces a sequence
of online decisions and calibrates its explore-exploit strategy utilizing
outcomes of previous decisions. It is rigorously proven to achieve an
upper-bounded regret in reward and applicable to large-scale social networks.
Practical effectiveness of the algorithm is evaluated using both synthetic and
real-world datasets, which demonstrates that our algorithm outperforms previous
stationary methods under non-stationary conditions.
| Yixin Bao, Xiaoke Wang, Zhi Wang, Chuan Wu, Francis C.M. Lau | 10.1109/IWQoS.2016.7590438 | 1604.07638 | null | null |
Distributed Clustering of Linear Bandits in Peer to Peer Networks | cs.LG cs.AI stat.ML | We provide two distributed confidence ball algorithms for solving linear
bandit problems in peer to peer networks with limited communication
capabilities. For the first, we assume that all the peers are solving the same
linear bandit problem, and prove that our algorithm achieves the optimal
asymptotic regret rate of any centralised algorithm that can instantly
communicate information between the peers. For the second, we assume that there
are clusters of peers solving the same bandit problem within each cluster, and
we prove that our algorithm discovers these clusters, while achieving the
optimal asymptotic regret rate within each one. Through experiments on several
real-world datasets, we demonstrate the performance of proposed algorithms
compared to the state-of-the-art.
| Nathan Korda and Balazs Szorenyi and Shuai Li | null | 1604.07706 | null | null |
Condorcet's Jury Theorem for Consensus Clustering and its Implications
for Diversity | stat.ML cs.LG | Condorcet's Jury Theorem has been invoked for ensemble classifiers to
indicate that the combination of many classifiers can have better predictive
performance than a single classifier. Such a theoretical underpinning is
unknown for consensus clustering. This article extends Condorcet's Jury Theorem
to the mean partition approach under the additional assumptions that a unique
ground-truth partition exists and sample partitions are drawn from a
sufficiently small ball containing the ground-truth. As an implication of
practical relevance, we question the claim that the quality of consensus
clustering depends on the diversity of the sample partitions. Instead, we
conjecture that limiting the diversity of the mean partitions is necessary for
controlling the quality.
| Brijnesh J. Jain | null | 1604.07711 | null | null |
F-measure Maximization in Multi-Label Classification with Conditionally
Independent Label Subsets | cs.LG | We discuss a method to improve the exact F-measure maximization algorithm
called GFM, proposed in (Dembczynski et al. 2011) for multi-label
classification, assuming the label set can be can partitioned into
conditionally independent subsets given the input features. If the labels were
all independent, the estimation of only $m$ parameters ($m$ denoting the number
of labels) would suffice to derive Bayes-optimal predictions in $O(m^2)$
operations. In the general case, $m^2+1$ parameters are required by GFM, to
solve the problem in $O(m^3)$ operations. In this work, we show that the number
of parameters can be reduced further to $m^2/n$, in the best case, assuming the
label set can be partitioned into $n$ conditionally independent subsets. As
this label partition needs to be estimated from the data beforehand, we first use
the procedure proposed in (Gasse et al. 2015) that finds such a partition
and then infer the required parameters locally in each label subset. The latter
are aggregated and serve as input to GFM to form the Bayes-optimal prediction.
We show on a synthetic experiment that the reduction in the number of
parameters brings about significant benefits in terms of performance.
| Maxime Gasse and Alex Aussem | null | 1604.07759 | null | null |
Scale Normalization | cs.NE cs.LG stat.ML | One of the difficulties of training deep neural networks is caused by
improper scaling between layers. Scaling issues introduce exploding/vanishing
gradient problems, and have typically been addressed by careful scale-preserving
initialization. We investigate the value of preserving scale, or isometry,
beyond the initial weights. We propose two methods of maintaining isometry, one
exact and one stochastic. Preliminary experiments show that both
determinant normalization and scale normalization effectively speed up learning. Results
suggest that isometry is important in the beginning of learning, and
maintaining it leads to faster learning.
| Henry Z. Lo and Kevin Amaral and Wei Ding | null | 1604.07796 | null | null |
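One concrete way to maintain exact isometry, sketched below, is to project each weight matrix onto the nearest orthonormal-column matrix via its SVD after every update; whether this matches the paper's exact normalization is an assumption.

```python
import numpy as np

def project_to_isometry(W):
    # Nearest matrix with orthonormal columns: reset all singular values to 1.
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ Vt

W = np.random.default_rng(0).normal(size=(64, 32))
W = project_to_isometry(W)
print(np.allclose(W.T @ W, np.eye(32)))         # True: scale is preserved
```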
Learning by tracking: Siamese CNN for robust target association | cs.LG cs.CV | This paper introduces a novel approach to the task of data association within
the context of pedestrian tracking, by introducing a two-stage learning scheme
to match pairs of detections. First, a Siamese convolutional neural network
(CNN) is trained to learn descriptors encoding local spatio-temporal structures
between the two input image patches, aggregating pixel values and optical flow
information. Second, a set of contextual features derived from the position and
size of the compared input patches are combined with the CNN output by means of
a gradient boosting classifier to generate the final matching probability. This
learning approach is validated by using a linear programming based multi-person
tracker showing that even a simple and efficient tracker may outperform much
more complex models when fed with our learned matching probabilities. Results
on publicly available sequences show that our method meets state-of-the-art
standards in multiple people tracking.
| Laura Leal-Taix\'e, Cristian Canton Ferrer, Konrad Schindler | null | 1604.07866 | null | null |
Evaluating the effect of topic consideration in identifying communities
of rating-based social networks | cs.SI cs.LG stat.ML | Finding meaningful communities in social network has attracted the attentions
of many researchers. The community structure of complex networks reveals both
their organization and hidden relations among their constituents. Most of the
researches in the field of community detection mainly focus on the topological
structure of the network without performing any content analysis. Nowadays,
real world social networks are containing a vast range of information including
shared objects, comments, following information, etc. In recent years, a number
of researches have proposed approaches which consider both the contents that
are interchanged in the networks and the topological structures of the networks
in order to find more meaningful communities. In this research, the effect of
topic analysis in finding more meaningful communities in social networking
sites in which the users express their feelings toward different objects (like
movies) by the means of rating is demonstrated by performing extensive
experiments.
| Ali Reihanian, Behrouz Minaei-Bidgoli, Muhammad Yousefnezhad | 10.1109/IKT.2015.7288793 | 1604.07878 | null | null |
Image Colorization Using a Deep Convolutional Neural Network | cs.CV cs.LG cs.NE | In this paper, we present a novel approach that uses deep learning techniques
for colorizing grayscale images. By utilizing a pre-trained convolutional
neural network, which is originally designed for image classification, we are
able to separate content and style of different images and recombine them into
a single image. We then propose a method that can add colors to a grayscale
image by combining its content with style of a color image having semantic
similarity with the grayscale one. As an application, to our knowledge the
first of its kind, we use the proposed method to colorize images of ukiyo-e, a
genre of Japanese painting, and obtain interesting results, showing the
potential of this method in the growing field of computer assisted art.
| Tung Nguyen, Kazuki Mori, and Ruck Thawonmas | null | 1604.07904 | null | null |
Distributed Flexible Nonlinear Tensor Factorization | cs.LG cs.AI cs.DC stat.ML | Tensor factorization is a powerful tool to analyse multi-way data. Compared
with traditional multi-linear methods, nonlinear tensor factorization models
are capable of capturing more complex relationships in the data. However, they
are computationally expensive and may suffer severe learning bias in case of
extreme data sparsity. To overcome these limitations, in this paper we propose
a distributed, flexible nonlinear tensor factorization model. Our model can
effectively avoid the expensive computations and structural restrictions of the
Kronecker-product in existing TGP formulations, allowing an arbitrary subset of
tensorial entries to be selected to contribute to the training. At the same
time, we derive a tractable and tight variational evidence lower bound (ELBO)
that enables highly decoupled, parallel computations and high-quality
inference. Based on the new bound, we develop a distributed inference algorithm
in the MapReduce framework, which is key-value-free and can fully exploit the
memory cache mechanism in fast MapReduce systems such as SPARK. Experimental
results fully demonstrate the advantages of our method over several
state-of-the-art approaches, in terms of both predictive performance and
computational efficiency. Moreover, our approach shows a promising potential in
the application of Click-Through-Rate (CTR) prediction for online advertising.
| Shandian Zhe, Kai Zhang, Pengyuan Wang, Kuang-chih Lee, Zenglin Xu,
Yuan Qi, Zoubin Ghahramani | null | 1604.07928 | null | null |
UBL: an R package for Utility-based Learning | cs.MS cs.LG stat.ML | This document describes the R package UBL that allows the use of several
methods for handling utility-based learning problems. Classification and
regression problems that assume non-uniform costs and/or benefits pose serious
challenges to predictive analytic tasks. In the context of meteorology,
finance, medicine, ecology, among many others, specific domain information
concerning the preference bias of the users must be taken into account to
enhance the models' predictive performance. To deal with this problem, a large
number of techniques were proposed by the research community for both
classification and regression tasks. The main goal of the UBL package is to
facilitate the utility-based predictive analytic task by providing a set of
methods to deal with this type of problems in the R environment. It is a
versatile tool that provides mechanisms to handle both regression and
classification (binary and multiclass) tasks. Moreover, the UBL package allows
users to specify their domain preferences, but it also provides some automatic
methods that try to infer the preference bias from the domain, considering
some commonly known settings.
| Paula Branco, Rita P. Ribeiro, Luis Torgo | null | 1604.08079 | null | null |
Local Uncertainty Sampling for Large-Scale Multi-Class Logistic
Regression | stat.CO cs.LG stat.ML | A major challenge for building statistical models in the big data era is that
the available data volume far exceeds the computational capability. A common
approach for solving this problem is to employ a subsampled dataset that can be
handled by available computational resources. In this paper, we propose a
general subsampling scheme for large-scale multi-class logistic regression and
examine the variance of the resulting estimator. We show that asymptotically,
the proposed method always achieves a smaller variance than that of the uniform
random sampling. Moreover, when the classes are conditionally imbalanced,
significant improvement over uniform sampling can be achieved. Empirical
performance of the proposed method is compared to other methods on both
simulated and real-world datasets, and these results match and confirm our
theoretical analysis.
| Lei Han, Kean Ming Tan, Ting Yang and Tong Zhang | null | 1604.08098 | null | null |
Classifying Options for Deep Reinforcement Learning | cs.LG cs.AI stat.ML | In this paper we combine one method for hierarchical reinforcement learning -
the options framework - with deep Q-networks (DQNs) through the use of
different "option heads" on the policy network, and a supervisory network for
choosing between the different options. We utilise our setup to investigate the
effects of architectural constraints in subtasks with positive and negative
transfer, across a range of network capacities. We empirically show that our
augmented DQN has lower sample complexity when simultaneously learning subtasks
with negative transfer, without degrading performance when learning subtasks
with positive transfer.
| Kai Arulkumaran, Nat Dilokthanakul, Murray Shanahan, Anil Anthony
Bharath | null | 1604.08153 | null | null |
Diving deeper into mentee networks | cs.LG cs.CV cs.NE | Modern computer vision is all about the possession of powerful image
representations. Deeper and deeper convolutional neural networks have been
built using larger and larger datasets and are made publicly available. A large
swath of computer vision scientists use these pre-trained networks with varying
degrees of success in various tasks. Even though there is tremendous success
in copying these networks, the representational space is not learnt from the
target dataset in a traditional manner. One of the reasons for opting to use a
pre-trained network over a network learnt from scratch is that small datasets
provide less supervision and require meticulous regularization and smaller,
carefully tuned learning rates even to achieve stable learning without
weight explosion. It is often the case that large deep networks are not
portable, which necessitates the ability to learn mid-sized networks from
scratch.
In this article, we dive deeper into training these mid-sized networks on
small datasets from scratch by drawing additional supervision from a large
pre-trained network. Such learning also provides better generalization
accuracies than networks trained with common regularization techniques such as
l2, l1 and dropout. We show that features learnt this way are more general than
those learnt independently. We studied various characteristics of such networks
and found some interesting behaviors.
| Ragav Venkatesan, Baoxin Li | null | 1604.08220 | null | null |
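The extra supervision drawn from a large pre-trained network can be illustrated with a generic distillation-style objective; the following is a sketch of that family of losses, not necessarily the paper's exact formulation (temperature and mixing weight are illustrative):

```python
import torch
import torch.nn.functional as F

def mentor_mentee_loss(student_logits, teacher_logits, labels,
                       temperature=4.0, alpha=0.5):
    """Mix a hard-label term on the small target dataset with a soft-label
    term matching the large pre-trained mentor's softened outputs."""
    ce = F.cross_entropy(student_logits, labels)
    t = temperature
    kd = F.kl_div(
        F.log_softmax(student_logits / t, dim=1),
        F.softmax(teacher_logits / t, dim=1),
        reduction="batchmean",
    ) * (t * t)
    return alpha * ce + (1 - alpha) * kd

loss = mentor_mentee_loss(torch.randn(8, 10), torch.randn(8, 10),
                          torch.randint(0, 10, (8,)))
```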
Crafting Adversarial Input Sequences for Recurrent Neural Networks | cs.CR cs.LG cs.NE | Machine learning models are frequently used to solve complex security
problems, as well as to make decisions in sensitive situations like guiding
autonomous vehicles or predicting financial market behaviors. Previous efforts
have shown that numerous machine learning models were vulnerable to adversarial
manipulations of their inputs taking the form of adversarial samples. Such
inputs are crafted by adding carefully selected perturbations to legitimate
inputs so as to force the machine learning model to misbehave, for instance by
outputting a wrong class if the machine learning task of interest is
classification. In fact, to the best of our knowledge, all previous work on
adversarial sample crafting for neural networks considered models used to solve
classification tasks, most frequently in computer vision applications. In this
paper, we contribute to the field of adversarial machine learning by
investigating adversarial input sequences for recurrent neural networks
processing sequential data. We show that the classes of algorithms introduced
previously to craft adversarial samples misclassified by feed-forward neural
networks can be adapted to recurrent neural networks. In an experiment, we show
that adversaries can craft adversarial sequences misleading both categorical
and sequential recurrent neural networks.
| Nicolas Papernot and Patrick McDaniel and Ananthram Swami and Richard
Harang | null | 1604.08275 | null | null |
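A minimal sketch of adapting gradient-based adversarial crafting to sequential inputs, using a toy GRU classifier and a single FGSM-style step (the paper adapts earlier crafting algorithms; the model and epsilon below are illustrative assumptions):

```python
import torch
import torch.nn as nn

class TinyRNNClassifier(nn.Module):
    def __init__(self, in_dim=8, hidden=32, n_classes=2):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):
        _, h = self.rnn(x)               # h: (num_layers, batch, hidden)
        return self.fc(h[-1])

model = TinyRNNClassifier()
x = torch.randn(4, 20, 8, requires_grad=True)    # batch of input sequences
y = torch.randint(0, 2, (4,))

# one gradient step over the whole sequence: perturb every timestep in the
# direction that increases the classifier's loss
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
x_adv = (x + 0.1 * x.grad.sign()).detach()
print((model(x_adv).argmax(1) != y).float().mean())  # fraction misclassified
```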
Streaming View Learning | stat.ML cs.LG | An underlying assumption in conventional multi-view learning algorithms is
that all views can be simultaneously accessed. However, due to various factors
when collecting and pre-processing data from different views, the streaming
view setting, in which views arrive in a streaming manner, is becoming more
common. By assuming that the subspaces of a multi-view model trained over past
views are stable, we fine-tune their combination weights such that the
well-trained multi-view model is compatible with new views. This largely
overcomes the burden of learning new view functions and updating past view
functions. We theoretically examine convergence issues and the influence of
streaming views in the proposed algorithm. Experimental results on real-world
datasets suggest that studying the streaming views problem in multi-view
learning is significant and that the proposed algorithm can effectively handle
streaming views in different applications.
| Chang Xu, Dacheng Tao, Chao Xu | null | 1604.08291 | null | null |
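The core mechanism, keeping past per-view predictors frozen and re-fitting only their combination weights when a new view arrives, can be sketched as follows (a hypothetical squared-loss implementation, not the authors' exact algorithm):

```python
import numpy as np

def retune_view_weights(view_preds, y, n_steps=200, lr=0.5):
    """Re-fit the convex combination of frozen per-view predictors.

    view_preds: (n_views, n_samples) predictions of each frozen view model.
    """
    n_views = view_preds.shape[0]
    w = np.full(n_views, 1.0 / n_views)
    for _ in range(n_steps):
        resid = w @ view_preds - y
        w -= lr * view_preds @ resid / len(y)    # gradient of squared loss
        w = np.clip(w, 0, None)
        w /= w.sum()     # renormalise to a convex combination (crude projection)
    return w

# when a new view's predictor is trained, stack it and re-tune:
old_views = np.random.randn(2, 100)
new_view = np.random.randn(1, 100)
y = np.random.randn(100)
w = retune_view_weights(np.vstack([old_views, new_view]), y)
```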
Joint Line Segmentation and Transcription for End-to-End Handwritten
Paragraph Recognition | cs.CV cs.LG cs.NE | Offline handwriting recognition systems require cropped text line images for
both training and recognition. On the one hand, the annotation of position and
transcript at line level is costly to obtain. On the other hand, automatic line
segmentation algorithms are prone to errors, compromising the subsequent
recognition. In this paper, we propose a modification of the popular and
efficient multi-dimensional long short-term memory recurrent neural networks
(MDLSTM-RNNs) to enable end-to-end processing of handwritten paragraphs. More
particularly, we replace the collapse layer transforming the two-dimensional
representation into a sequence of predictions by a recurrent version which can
recognize one line at a time. In the proposed model, a neural network performs
a kind of implicit line segmentation by computing attention weights on the
image representation. The experiments on paragraphs of Rimes and IAM database
yield results that are competitive with those of networks trained at line
level, and constitute a significant step towards end-to-end transcription of
full documents.
| Th\'eodore Bluche | null | 1604.08352 | null | null |
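The replacement of the collapse layer can be illustrated with a simplified attention-weighted collapse in PyTorch: per column, attention weights over the vertical axis select which line to read (the actual recurrent MDLSTM-based version is more involved; this is a one-step sketch):

```python
import torch
import torch.nn as nn

class AttentionCollapse(nn.Module):
    """Collapse a 2D feature map to a 1D sequence with learned attention
    over the vertical axis, instead of a plain sum over height."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats):                     # feats: (B, C, H, W)
        attn = torch.softmax(self.score(feats), dim=2)   # sums to 1 over H
        line = (feats * attn).sum(dim=2)          # (B, C, W): one vector/column
        return line, attn

collapse = AttentionCollapse(channels=16)
line_feats, attn = collapse(torch.randn(2, 16, 64, 128))
```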
Convolutional Neural Networks For Automatic State-Time Feature
Extraction in Reinforcement Learning Applied to Residential Load Control | cs.LG cs.SY | Direct load control of a heterogeneous cluster of residential demand
flexibility sources is a high-dimensional control problem with partial
observability. This work proposes a novel approach that uses a convolutional
neural network to extract hidden state-time features to mitigate the curse of
partial observability. More specific, a convolutional neural network is used as
a function approximator to estimate the state-action value function or
Q-function in the supervised learning step of fitted Q-iteration. The approach
is evaluated in a qualitative simulation, comprising a cluster of
thermostatically controlled loads that only share their air temperature, whilst
their envelope temperature remains hidden. The simulation results show that the
presented approach is able to capture the underlying hidden features and
successfully reduce the electricity cost of the cluster.
| Bert J. Claessens and Peter Vrancx and Frederik Ruelens | null | 1604.08382 | null | null |
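The supervised-learning step of fitted Q-iteration is generic in the regressor; the paper plugs in a convolutional network, while the runnable sketch below substitutes an extra-trees regressor to keep it short (gamma, iteration counts, and the toy transitions are illustrative):

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor  # stand-in for the CNN

def fitted_q_iteration(transitions, n_actions, n_iters=20, gamma=0.95):
    """transitions: list of (state, action, reward, next_state) tuples
    with 1-D state vectors."""
    S = np.array([t[0] for t in transitions])
    A = np.array([t[1] for t in transitions])
    R = np.array([t[2] for t in transitions])
    S2 = np.array([t[3] for t in transitions])
    X = np.column_stack([S, A])
    q = None
    for _ in range(n_iters):
        if q is None:
            target = R                                    # iteration 1: Q = r
        else:
            q_next = np.column_stack(
                [q.predict(np.column_stack([S2, np.full(len(S2), a)]))
                 for a in range(n_actions)]
            )
            target = R + gamma * q_next.max(axis=1)       # Bellman backup
        q = ExtraTreesRegressor(n_estimators=50).fit(X, target)
    return q

# toy usage with random transitions
rng = np.random.default_rng(0)
trans = [(rng.normal(size=4), rng.integers(0, 2), rng.normal(),
          rng.normal(size=4)) for _ in range(500)]
q = fitted_q_iteration(trans, n_actions=2)
```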
Detection of epileptic seizure in EEG signals using linear least squares
preprocessing | cs.LG math.OC | An epileptic seizure is a transient event of abnormal excessive neuronal
discharge in the brain. This unwanted event can be obstructed by detection of
electrical changes in the brain that happen before the seizure takes place. The
automatic detection of seizures is necessary since the visual screening of EEG
recordings is a time-consuming task and requires experts to improve the
diagnosis. Four linear least squares-based preprocessing models are proposed to
extract key features of an EEG signal in order to detect seizures. The first
two models are newly developed. The original signal (EEG) is approximated by a
sinusoidal curve. Its amplitude is formed by a polynomial function and compared
with a previously developed spline function. Different statistical measures,
namely classification accuracy, true positive and negative rates, false
positive and negative rates, and precision, are utilized to assess the
performance of the
proposed models. These metrics are derived from confusion matrices obtained
from classifiers. Different classifiers are used over the original dataset and
the set of extracted features. The proposed models significantly reduce the
dimension of the classification problem and the computational time while the
classification accuracy is improved in most cases. The first and third models
are promising feature extraction methods. Logistic, LazyIB1, LazyIB5 and J48
are the best classifiers. Their true positive and negative rates are $1$ while
false positive and negative rates are zero and the corresponding precision
values are $1$. Numerical results suggest that these models are robust and
efficient for detecting epileptic seizure.
| Z. Roshan Zamir | null | 1604.08500 | null | null |
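The first model family, a sinusoid whose amplitude is a polynomial in time, is linear in its coefficients once the frequency is fixed, so ordinary least squares applies directly; a sketch with an assumed dominant frequency and synthetic data:

```python
import numpy as np

# synthetic EEG-like signal: sinusoid with slowly growing amplitude + noise
t = np.linspace(0, 1, 500)
eeg = np.sin(2 * np.pi * 9 * t) * (1 + 0.5 * t) + 0.1 * np.random.randn(500)

omega = 2 * np.pi * 9        # assumed/known dominant frequency (illustrative)
degree = 2
# design matrix columns: t^k * sin(omega * t), k = 0..degree, so the fit is
# linear in the polynomial coefficients
A = np.column_stack([t**k * np.sin(omega * t) for k in range(degree + 1)])
coef, *_ = np.linalg.lstsq(A, eeg, rcond=None)

features = coef              # low-dimensional features fed to a classifier
residual = eeg - A @ coef    # goodness-of-fit diagnostic
```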
A movie genre prediction based on Multivariate Bernoulli model and genre
correlations | cs.IR cs.LG | Movie ratings play an important role both in determining the likelihood of a
potential viewer to watch the movie and in reflecting the current viewer
satisfaction with the movie. They are available in several sources like the
television guide, best-selling reference books, newspaper columns, and
television programs. Furthermore, movie ratings are crucial for recommendation
engines that track the behavior of all users and utilize the information to
suggest items they might like. Movie ratings in most cases, thus, provide
information that might be more important than movie feature-based data. It is
intuitively appealing that information about the viewing preferences in movie
genres is sufficient for predicting a genre of an unlabeled movie. In order to
predict movie genres, we treat ratings as a feature vector, apply the Bernoulli
event model to estimate the likelihood of a movie's ratings given its genre,
and evaluate
the posterior probability of the genre of a given movie using the Bayes rule.
The goal of the proposed technique is to efficiently use the movie ratings for
the task of predicting movie genres. In our approach we attempted to answer the
question: "Given the set of users who watched a movie, is it possible to
predict the genre of a movie based on its ratings?" Our simulation results with
MovieLens 100k data demonstrated the efficiency and accuracy of our proposed
technique, achieving 59% prediction rate for exact prediction and 69% when
including correlated genres.
| Eric Makita, Artem Lenskiy | null | 1604.08608 | null | null |
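The described pipeline maps directly onto a multivariate Bernoulli Naive Bayes classifier over binary user-rating indicators; a toy sketch with synthetic data (the abstract's actual experiments use MovieLens 100k):

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# each movie is a binary vector over users (1 = the user rated/watched it);
# Bernoulli NB estimates per-genre likelihoods and applies the Bayes rule
rng = np.random.default_rng(0)
n_movies, n_users, n_genres = 200, 50, 4
X = rng.random((n_movies, n_users)) < 0.3        # who watched which movie
y = rng.integers(0, n_genres, n_movies)          # toy genre labels

clf = BernoulliNB().fit(X[:150], y[:150])
print("toy accuracy:", clf.score(X[150:], y[150:]))
```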
On the representation and embedding of knowledge bases beyond binary
relations | cs.LG cs.AI | The models developed to date for knowledge base embedding are all based on
the assumption that the relations contained in knowledge bases are binary. For
the training and testing of these embedding models, multi-fold (or n-ary)
relational data are converted to triples (e.g., in FB15K dataset) and
interpreted as instances of binary relations. This paper presents a canonical
representation of knowledge bases containing multi-fold relations. We show that
the existing embedding models on the popular FB15K dataset correspond to a
sub-optimal modelling framework, resulting in a loss of structural information.
We advocate a novel modelling framework, which models multi-fold relations
directly using this canonical representation. Using this framework, the
existing TransH model is generalized to a new model, m-TransH. We demonstrate
experimentally that m-TransH outperforms TransH by a large margin, thereby
establishing a new state of the art.
| Jianfeng Wen, Jianxin Li, Yongyi Mao, Shini Chen, Richong Zhang | null | 1604.08642 | null | null |
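For reference, the TransH scoring function that m-TransH generalises projects entities onto a relation-specific hyperplane before translating; a NumPy sketch of standard TransH (the m-TransH extension to multi-fold relations is not reproduced here):

```python
import numpy as np

def transh_score(h, t, w_r, d_r):
    """TransH plausibility score for a (head, relation, tail) triple:
    project h and t onto the hyperplane with normal w_r, then measure how
    well the translation d_r carries the head to the tail."""
    w = w_r / np.linalg.norm(w_r)
    h_p = h - (h @ w) * w            # projection onto the hyperplane
    t_p = t - (t @ w) * w
    return -np.linalg.norm(h_p + d_r - t_p) ** 2

rng = np.random.default_rng(0)
h, t, w_r, d_r = (rng.normal(size=50) for _ in range(4))
print(transh_score(h, t, w_r, d_r))
```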
Single Image 3D Interpreter Network | cs.CV cs.LG | Understanding 3D object structure from a single image is an important but
difficult task in computer vision, mostly due to the lack of 3D object
annotations in real images. Previous work tackles this problem by either
solving an optimization task given 2D keypoint positions, or training on
synthetic data with ground truth 3D information. In this work, we propose 3D
INterpreter Network (3D-INN), an end-to-end framework which sequentially
estimates 2D keypoint heatmaps and 3D object structure, trained on both real
2D-annotated images and synthetic 3D data. This is made possible mainly by two
technical innovations. First, we propose a Projection Layer, which projects
estimated 3D structure to 2D space, so that 3D-INN can be trained to predict 3D
structural parameters supervised by 2D annotations on real images. Second,
heatmaps of keypoints serve as an intermediate representation connecting real
and synthetic data, enabling 3D-INN to benefit from the variation and abundance
of synthetic 3D objects, without suffering from the difference between the
statistics of real and synthesized images due to imperfect rendering. The
network achieves state-of-the-art performance on both 2D keypoint estimation
and 3D structure recovery. We also show that the recovered 3D information can
be used in other vision applications, such as 3D rendering and image retrieval.
| Jiajun Wu, Tianfan Xue, Joseph J. Lim, Yuandong Tian, Joshua B.
Tenenbaum, Antonio Torralba, William T. Freeman | 10.1007/978-3-319-46466-4_22 | 1604.08685 | null | null |
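The Projection Layer idea, letting 2D annotations supervise 3D structure by differentiably projecting estimated keypoints into the image plane, can be sketched with an orthographic camera (an illustrative variant; 3D-INN's exact parameterisation may differ):

```python
import math
import torch

def projection_layer(points_3d, azimuth, elevation, scale, trans_2d):
    """Rotate estimated 3D keypoints by camera angles and orthographically
    project to 2D, so a 2D keypoint loss back-propagates to 3D parameters.
    points_3d: (B, K, 3) keypoints."""
    ca, sa = math.cos(azimuth), math.sin(azimuth)
    ce, se = math.cos(elevation), math.sin(elevation)
    rot_az = torch.tensor([[ca, 0.0, sa], [0.0, 1.0, 0.0], [-sa, 0.0, ca]])
    rot_el = torch.tensor([[1.0, 0.0, 0.0], [0.0, ce, -se], [0.0, se, ce]])
    rotated = points_3d @ (rot_el @ rot_az).T
    return scale * rotated[..., :2] + trans_2d   # drop depth: orthographic

pts = torch.randn(2, 15, 3, requires_grad=True)
proj = projection_layer(pts, 0.3, 0.1, 2.0, torch.zeros(2))
proj.sum().backward()            # gradients flow back to the 3D keypoints
```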