title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Dimension Reduction with Non-degrading Generalization | cs.LG cs.NE | Visualizing high dimensional data by projecting them into two or three
dimensional space is one of the most effective ways to intuitively understand
the data's underlying characteristics, for example their class neighborhood
structure. While data visualization in low dimensional space can be efficient
for revealing the data's underlying characteristics, classifying a new sample
in the reduced-dimensional space is not always beneficial because of the loss
of information in expressing the data. It is possible to classify the data in
the high dimensional space, while visualizing them in the low dimensional
space, but in this case, the visualization is often meaningless because it
fails to illustrate the underlying characteristics that are crucial for the
classification process.
In this paper, the performance-preserving property of the previously proposed
Restricted Radial Basis Function Network in reducing the dimension of labeled
data is explained. Here, it is argued through empirical experiments that the
internal representation of the Restricted Radial Basis Function Network, which
during the supervised learning process organizes a visualizable two dimensional
map, not only preserves the topographical structure of high dimensional
data but also captures their class neighborhood structures that are important
for classifying them. Hence, unlike many of the existing dimension reduction
methods, the Restricted Radial Basis Function Network offers two dimensional
visualization that is strongly correlated with the classification process.
| Pitoyo Hartono | 10.1007/s00521-016-2726-5 | 1508.00984 | null | null |
Relation Classification via Recurrent Neural Network | cs.CL cs.LG cs.NE | Deep learning has gained much success in sentence-level relation
classification. For example, convolutional neural networks (CNNs) have delivered
competitive performance without the extensive feature engineering required by
conventional pattern-based methods, and many subsequent works have therefore
built on CNN structures. However, a key issue that has not been well addressed
by CNN-based methods is their limited capability to learn temporal features,
especially long-distance dependency between nominal pairs. In this paper, we
propose a simple framework based on recurrent neural networks (RNN) and compare
it with a CNN-based model. To expose the limitations of the widely used
SemEval-2010 Task 8 dataset, we introduce another dataset refined from MIMLRE
(Angeli et al., 2014). Experiments on the two datasets strongly indicate that the
RNN-based model can deliver better performance on relation classification, and
it is particularly capable of learning long-distance relation patterns. This
makes it suitable for real-world applications where complicated expressions are
often involved.
| Dongxu Zhang and Dong Wang | null | 1508.01006 | null | null |
Learning from LDA using Deep Neural Networks | cs.LG cs.CL cs.IR cs.NE | Latent Dirichlet Allocation (LDA) is a three-level hierarchical Bayesian
model for topic inference. In spite of its great success, inferring the latent
topic distribution with LDA is time-consuming. Motivated by the transfer
learning approach proposed by~\newcite{hinton2015distilling}, we present a
novel method that uses LDA to supervise the training of a deep neural network
(DNN), so that the DNN can approximate the costly LDA inference with less
computation. Our experiments on a document classification task show that a
simple DNN can learn the LDA behavior well, while inference is sped up by tens
or hundreds of times.
| Dongxu Zhang, Tianyi Luo, Dong Wang and Rong Liu | null | 1508.01011 | null | null |
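The entry above trains a DNN to mimic LDA's topic inference, in the spirit of knowledge distillation. Below is a minimal NumPy sketch of that idea with a single softmax layer fit to soft topic targets by cross-entropy; the synthetic data, sizes, and learning rate are illustrative assumptions, and in the paper the targets would come from actual LDA inference rather than the stand-in used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: 200 documents, 50-word vocabulary, 5 topics.
n_docs, vocab, n_topics = 200, 50, 5
X = rng.poisson(1.0, size=(n_docs, vocab)).astype(float)   # bag-of-words counts

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# "Teacher" topic posteriors; in the paper these would come from LDA inference.
T = softmax(X @ rng.normal(scale=0.3, size=(vocab, n_topics)))

# Student: a single softmax layer trained to match the teacher's soft targets.
W = np.zeros((vocab, n_topics))
b = np.zeros(n_topics)
lr = 0.01
for step in range(2000):
    P = softmax(X @ W + b)
    grad_logits = (P - T) / n_docs          # gradient of cross-entropy w.r.t. logits
    W -= lr * X.T @ grad_logits
    b -= lr * grad_logits.sum(axis=0)

kl = np.mean(np.sum(T * (np.log(T + 1e-12) - np.log(P + 1e-12)), axis=1))
print(f"mean KL(teacher || student) after training: {kl:.4f}")
```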
A review of heterogeneous data mining for brain disorders | cs.LG cs.CE cs.DB q-bio.NC stat.AP | With rapid advances in neuroimaging techniques, the research on brain
disorder identification has become an emerging area in the data mining
community. Brain disorder data poses many unique challenges for data mining
research. For example, the raw data generated by neuroimaging experiments is in
tensor representations, with typical characteristics of high dimensionality,
structural complexity and nonlinear separability. Furthermore, brain
connectivity networks can be constructed from the tensor data, embedding subtle
interactions between brain regions. Other clinical measures are usually
available reflecting the disease status from different perspectives. It is
expected that integrating complementary information in the tensor data and the
brain network data, and incorporating other clinical parameters will be
potentially transformative for investigating disease mechanisms and for
informing therapeutic interventions. Many research efforts have been devoted to
this area. They have achieved great success in various applications, such as
tensor-based modeling, subgraph pattern mining, and multi-view feature analysis. In
this paper, we review some recent data mining methods that are used for
analyzing brain disorders.
| Bokai Cao, Xiangnan Kong, Philip S. Yu | null | 1508.01023 | null | null |
Deep Convolutional Networks are Hierarchical Kernel Machines | cs.LG cs.NE | In i-theory a typical layer of a hierarchical architecture consists of HW
modules pooling the dot products of the inputs to the layer with the
transformations of a few templates under a group. Such layers include as
special cases the convolutional layers of Deep Convolutional Networks (DCNs) as
well as the non-convolutional layers (when the group contains only the
identity). Rectifying nonlinearities -- which are used by present-day DCNs --
are one of the several nonlinearities admitted by i-theory for the HW module.
We discuss here the equivalence between group averages of linear combinations
of rectifying nonlinearities and an associated kernel. This property implies
that present-day DCNs can be exactly equivalent to a hierarchy of kernel
machines with pooling and non-pooling layers. Finally, we describe a conjecture
for theoretically understanding hierarchies of such modules. A main consequence
of the conjecture is that hierarchies of trained HW modules minimize memory
requirements while computing a selective and invariant representation.
| Fabio Anselmi, Lorenzo Rosasco, Cheston Tan, Tomaso Poggio | null | 1508.01084 | null | null |
Listen, Attend and Spell | cs.CL cs.LG cs.NE stat.ML | We present Listen, Attend and Spell (LAS), a neural network that learns to
transcribe speech utterances to characters. Unlike traditional DNN-HMM models,
this model learns all the components of a speech recognizer jointly. Our system
has two components: a listener and a speller. The listener is a pyramidal
recurrent network encoder that accepts filter bank spectra as inputs. The
speller is an attention-based recurrent network decoder that emits characters
as outputs. The network produces character sequences without making any
independence assumptions between the characters. This is the key improvement of
LAS over previous end-to-end CTC models. On a subset of the Google voice search
task, LAS achieves a word error rate (WER) of 14.1% without a dictionary or a
language model, and 10.3% with language model rescoring over the top 32 beams.
By comparison, the state-of-the-art CLDNN-HMM model achieves a WER of 8.0%.
| William Chan and Navdeep Jaitly and Quoc V. Le and Oriol Vinyals | null | 1508.01211 | null | null |
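The "listener" in the entry above is a pyramidal recurrent encoder, whose pyramid structure reduces the time resolution of the filter-bank input before attention is applied. A minimal NumPy sketch of one plausible time-reduction step (concatenating adjacent frames to halve the sequence length) is shown below; shapes are illustrative, and the actual listener interleaves such reductions with bidirectional recurrent layers.

```python
import numpy as np

def pyramid_step(frames: np.ndarray) -> np.ndarray:
    """Halve the time resolution by concatenating pairs of adjacent frames.

    frames: array of shape (time, features).
    Returns an array of shape (time // 2, 2 * features).
    """
    t, d = frames.shape
    if t % 2 == 1:                      # drop a trailing frame if the length is odd
        frames = frames[:-1]
        t -= 1
    return frames.reshape(t // 2, 2 * d)

# Illustrative input: 8 filter-bank frames with 40 coefficients each.
spectra = np.random.randn(8, 40)
reduced = spectra
for _ in range(3):                      # three pyramid levels -> 8x shorter sequence
    reduced = pyramid_step(reduced)
print(spectra.shape, "->", reduced.shape)   # (8, 40) -> (1, 320)
```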
Empirical Similarity for Absent Data Generation in Imbalanced
Classification | stat.ML cs.LG | When the training data in a two-class classification problem is overwhelmed
by one class, most classification techniques fail to correctly identify the
data points belonging to the underrepresented class. We propose
Similarity-based Imbalanced Classification (SBIC) that learns patterns in the
training data based on an empirical similarity function. To take the imbalanced
structure of the training data into account, SBIC utilizes the concept of
absent data, i.e. data from the minority class which can help better find the
boundary between the two classes. SBIC simultaneously optimizes the weights of
the empirical similarity function and finds the locations of absent data
points. As such, SBIC uses an embedded mechanism for synthetic data generation
which does not modify the training dataset, but alters the algorithm to suit
imbalanced datasets. Therefore, SBIC uses the ideas of both major schools of
thought in imbalanced classification: like cost-sensitive approaches, SBIC
operates on the algorithm level to handle imbalanced structures; and similar to
synthetic data generation approaches, it utilizes the properties of unobserved
data points from the minority class. The application of SBIC to imbalanced
datasets suggests it is comparable to, and in some cases outperforms, other
commonly used classification techniques for imbalanced datasets.
| Arash Pourhabib | 10.1007/978-3-030-12388-8_70 | 1508.01235 | null | null |
Nonlinear Metric Learning for kNN and SVMs through Geometric
Transformations | cs.LG cs.CV | In recent years, research efforts to extend linear metric learning models to
handle nonlinear structures have attracted great interest. In this paper, we
propose a novel nonlinear solution through the utilization of deformable
geometric models to learn spatially varying metrics, and apply the strategy to
boost the performance of both kNN and SVM classifiers. Thin-plate splines (TPS)
are chosen as the geometric model due to their remarkable versatility and
representation power in accounting for high-order deformations. By transforming
the input space through TPS, we can pull same-class neighbors closer while
pushing different-class points farther away in kNN, as well as make the input
data points more linearly separable in SVMs. Improvements in the performance of
kNN classification are demonstrated through experiments on synthetic and real
world datasets, with comparisons made with several state-of-the-art metric
learning solutions. Our SVM-based models also achieve significant improvements
over traditional linear and kernel SVMs with the same datasets.
| Bibo Shi, Jundong Liu | null | 1508.01534 | null | null |
Theoretical and Empirical Analysis of a Parallel Boosting Algorithm | cs.LG cs.DC | Many real-world problems involve massive amounts of data. Under these
circumstances learning algorithms often become prohibitively expensive, making
scalability a pressing issue to be addressed. A common approach is to perform
sampling to reduce the size of the dataset and enable efficient learning.
Alternatively, one customizes learning algorithms to achieve scalability. In
either case, the key challenge is to obtain algorithmic efficiency without
compromising the quality of the results. In this paper we discuss a
meta-learning algorithm (PSBML) which combines features of parallel algorithms
with concepts from ensemble and boosting methodologies to achieve the desired
scalability property. We present both theoretical and empirical analyses which
show that PSBML preserves a critical property of boosting, specifically,
convergence to a distribution centered around the margin. We then present
additional empirical analyses showing that this meta-level algorithm provides a
general and effective framework that can be used in combination with a variety
of learning classifiers. We perform extensive experiments to investigate the
tradeoff achieved between scalability and accuracy, and robustness to noise, on
both synthetic and real-world data. These empirical results corroborate our
theoretical analysis, and demonstrate the potential of PSBML in achieving
scalability without sacrificing accuracy.
| Uday Kamath, Carlotta Domeniconi and Kenneth De Jong | null | 1508.01549 | null | null |
Applying Deep Learning to Answer Selection: A Study and An Open Task | cs.CL cs.LG | We apply a general deep learning framework to address the non-factoid
question answering task. Our approach does not rely on any linguistic tools and
can be applied to different languages or domains. Various architectures are
presented and compared. We create and release a QA corpus and set up a new QA
task in the insurance domain. Experimental results demonstrate superior
performance compared to the baseline methods and various technologies give
further improvements. For this highly challenging task, the top-1 accuracy can
reach up to 65.3% on a test set, which indicates a great potential for
practical use.
| Minwei Feng, Bing Xiang, Michael R. Glass, Lidan Wang, Bowen Zhou | null | 1508.01585 | null | null |
Sublinear Partition Estimation | stat.ML cs.LG | The output scores of a neural network classifier are converted to
probabilities via normalizing over the scores of all competing categories.
Computing this partition function, $Z$, is then linear in the number of
categories, which is problematic as real-world problem sets continue to grow in
categorical types, such as in visual object recognition or discriminative
language modeling. We propose three approaches for sublinear estimation of the
partition function, based on approximate nearest neighbor search and kernel
feature maps, and compare the performance of the proposed approaches
empirically.
| Pushpendre Rastogi and Benjamin Van Durme | null | 1508.01596 | null | null |
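The partition function in the entry above is the softmax normalizer over all category scores, and computing it exactly takes one pass over every category. A small NumPy illustration of that exact linear-time computation (the quantity the paper seeks to estimate sublinearly):

```python
import numpy as np

def softmax_probs(scores: np.ndarray):
    """Exact normalization: cost is linear in the number of categories."""
    shifted = scores - scores.max()          # shift for numerical stability
    exp_scores = np.exp(shifted)
    Z = exp_scores.sum()                     # the partition function (of the shifted scores)
    return exp_scores / Z, Z

scores = np.random.randn(100_000)            # e.g. scores over a large label vocabulary
probs, Z = softmax_probs(scores)
print(probs.sum(), Z)                        # probabilities sum to 1
```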
Asynchronous Distributed Semi-Stochastic Gradient Optimization | cs.LG cs.DC | With the recent proliferation of large-scale learning problems, there has
been a lot of interest in distributed machine learning algorithms, particularly
those that are based on stochastic gradient descent (SGD) and its variants.
However, existing algorithms either suffer from slow convergence due to the
inherent variance of stochastic gradients, or have a fast linear convergence
rate but at the expense of poorer solution quality. In this paper, we combine
their merits by proposing a fast distributed asynchronous SGD-based algorithm
with variance reduction. A constant learning rate can be used, and it is also
guaranteed to converge linearly to the optimal solution. Experiments on the
Google Cloud Computing Platform demonstrate that the proposed algorithm
outperforms state-of-the-art distributed asynchronous algorithms in terms of
both wall clock time and solution quality.
| Ruiliang Zhang, Shuai Zheng, James T. Kwok | null | 1508.01633 | null | null |
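The variance-reduction idea the entry above builds on replaces the plain stochastic gradient with a corrected gradient that uses a periodically recomputed full gradient at a snapshot point, which is what permits a constant learning rate. A minimal single-machine NumPy sketch on a least-squares problem follows; the distributed, asynchronous aspects of the paper are not reproduced, and the step size and problem sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.01 * rng.normal(size=n)

def grad_i(x, i):                      # gradient of the i-th squared-error term
    return (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    return A.T @ (A @ x - b) / n

x = np.zeros(d)
step = 0.005                           # constant step size
for epoch in range(30):
    snapshot = x.copy()
    mu = full_grad(snapshot)           # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        # variance-reduced stochastic gradient
        g = grad_i(x, i) - grad_i(snapshot, i) + mu
        x -= step * g
    if epoch % 10 == 9:
        print(epoch + 1, np.linalg.norm(x - x_true))
```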
Using Deep Learning for Detecting Spoofing Attacks on Speech Signals | cs.SD cs.CL cs.CR cs.LG stat.ML | It is well known that speaker verification systems are subject to spoofing
attacks. The Automatic Speaker Verification Spoofing and Countermeasures
Challenge -- ASVSpoof2015 -- provides a standard spoofing database, containing
attacks based on synthetic speech, along with a protocol for experiments. This
paper describes CPqD's systems submitted to the ASVSpoof2015 Challenge, based
on deep neural networks, working both as a classifier and as a feature
extraction module for a GMM and an SVM classifier. Results show the validity of
this approach, achieving less than 0.5\% EER for known attacks.
| Alan Godoy, Fl\'avio Sim\~oes, Jos\'e Augusto Stuchi, Marcus de Assis
Angeloni, M\'ario Uliani, Ricardo Violato | null | 1508.01746 | null | null |
An End-to-End Neural Network for Polyphonic Piano Music Transcription | stat.ML cs.LG cs.SD | We present a supervised neural network model for polyphonic piano music
transcription. The architecture of the proposed model is analogous to speech
recognition systems and comprises an acoustic model and a music language model.
The acoustic model is a neural network used for estimating the probabilities of
pitches in a frame of audio. The language model is a recurrent neural network
that models the correlations between pitch combinations over time. The proposed
model is general and can be used to transcribe polyphonic music without
imposing any constraints on the polyphony. The acoustic and language model
predictions are combined using a probabilistic graphical model. Inference over
the output variables is performed using the beam search algorithm. We perform
two sets of experiments. We investigate various neural network architectures
for the acoustic models and also investigate the effect of combining acoustic
and music language model predictions using the proposed architecture. We
compare performance of the neural network based acoustic models with two
popular unsupervised acoustic models. Results show that convolutional neural
network acoustic models yield the best performance across all evaluation
metrics. We also observe improved performance with the application of the music
language models. Finally, we present an efficient variant of beam search that
improves performance and reduces run-times by an order of magnitude, making the
model suitable for real-time applications.
| Siddharth Sigtia, Emmanouil Benetos, Simon Dixon | null | 1508.01774 | null | null |
Deep Boosting: Joint Feature Selection and Analysis Dictionary Learning
in Hierarchy | cs.CV cs.LG cs.NE | This work investigates how the traditional image classification pipelines can
be extended into a deep architecture, inspired by recent successes of deep
neural networks. We propose a deep boosting framework based on layer-by-layer
joint feature boosting and dictionary learning. In each layer, we construct a
dictionary of filters by combining the filters from the lower layer, and
iteratively optimize the image representation with a joint
discriminative-generative formulation, i.e. minimization of empirical
classification error plus regularization of analysis image generation over
training images. For optimization, we perform two iterating steps: i) to
minimize the classification error, select the most discriminative features
using the gentle adaboost algorithm; ii) according to the feature selection,
update the filters to minimize the regularization on analysis image
representation using the gradient descent method. Once the optimization is
converged, we learn the higher layer representation in the same way. Our model
delivers several distinct advantages. First, our layer-wise optimization
provides the potential to build very deep architectures. Second, the generated
image representation is compact and meaningful. In several visual recognition
tasks, our framework outperforms existing state-of-the-art approaches.
| Zhanglin Peng, Ya Li, Zhaoquan Cai and Liang Lin | null | 1508.01887 | null | null |
Diffusion Maximum Correntropy Criterion Algorithms for Robust
Distributed Estimation | stat.ML cs.LG | Robust diffusion adaptive estimation algorithms based on the maximum
correntropy criterion (MCC), including adaptation to combination MCC and
combination to adaptation MCC, are developed to deal with the distributed
estimation over network in impulsive (long-tailed) noise environments. The cost
functions used in distributed estimation are in general based on the mean
square error (MSE) criterion, which is desirable when the measurement noise is
Gaussian. In non-Gaussian situations, such as the impulsive-noise case, MCC
based methods may achieve much better performance than the MSE methods as they
take into account higher order statistics of error distribution. The proposed
methods can also outperform the robust diffusion least mean p-power (DLMP) and
diffusion minimum error entropy (DMEE) algorithms. The mean and mean-square
convergence analyses of the new algorithms are also carried out.
| Wentao Ma, Badong Chen, Jiandong Duan, Haiquan Zhao | null | 1508.01903 | null | null |
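The maximum correntropy criterion in the entry above replaces the squared-error cost with a Gaussian kernel of the error, so impulsive errors are automatically down-weighted. A minimal NumPy sketch contrasting an MSE (LMS-style) update with an MCC-style update on a single node; the kernel width, step size, and noise model are illustrative, and the diffusion combination steps over a network are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
noise = rng.normal(scale=0.1, size=n)
outliers = rng.random(n) < 0.05                 # 5% impulsive (long-tailed) noise
noise[outliers] += rng.normal(scale=20.0, size=outliers.sum())
y = X @ w_true + noise

def run(update, step=0.05, sigma=1.0):
    w = np.zeros(d)
    for i in range(n):
        e = y[i] - X[i] @ w
        w += step * update(e, X[i], sigma)
    return np.linalg.norm(w - w_true)

mse_update = lambda e, x, sigma: e * x                                    # MSE criterion
mcc_update = lambda e, x, sigma: np.exp(-e**2 / (2 * sigma**2)) * e * x   # MCC criterion

print("MSE final error:", run(mse_update))
print("MCC final error:", run(mcc_update))
```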
A variational approach to the consistency of spectral clustering | math.ST cs.LG stat.ML stat.TH | This paper establishes the consistency of spectral approaches to data
clustering. We consider clustering of point clouds obtained as samples of a
ground-truth measure. A graph representing the point cloud is obtained by
assigning weights to edges based on the distance between the points they
connect. We investigate the spectral convergence of both unnormalized and
normalized graph Laplacians towards the appropriate operators in the continuum
domain. We obtain sharp conditions on how the connectivity radius can be scaled
with respect to the number of sample points for the spectral convergence to
hold.
We also show that the discrete clusters obtained via spectral clustering
converge towards a continuum partition of the ground truth measure. Such
continuum partition minimizes a functional describing the continuum analogue of
the graph-based spectral partitioning. Our approach, based on variational
convergence, is general and flexible.
| Nicol\'as Garc\'ia Trillos and Dejan Slep\v{c}ev | null | 1508.01928 | null | null |
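As a concrete reference point for the construction analyzed in the entry above, here is a minimal NumPy sketch of unnormalized spectral clustering on a sampled point cloud: edges are weighted by a Gaussian of pairwise distance (the scale playing the role of the connectivity radius), the graph Laplacian is formed, and two clusters are recovered from the sign of the second eigenvector. The kernel choice and two-cluster split are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
# Point cloud sampled from two well-separated blobs.
pts = np.vstack([rng.normal(0, 0.3, size=(100, 2)),
                 rng.normal(3, 0.3, size=(100, 2))])

# Weighted graph: Gaussian weights on pairwise distances (scale ~ connectivity radius).
dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
eps = 0.5
W = np.exp(-(dists / eps) ** 2)
np.fill_diagonal(W, 0.0)

# Unnormalized graph Laplacian L = D - W.
D = np.diag(W.sum(axis=1))
L = D - W

# The second-smallest eigenvector (Fiedler vector) separates the two clusters.
vals, vecs = np.linalg.eigh(L)
labels = (vecs[:, 1] > 0).astype(int)
print("cluster sizes:", np.bincount(labels))   # expect roughly 100 / 100
```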
Crowd Access Path Optimization: Diversity Matters | cs.LG cs.DB | Quality assurance is one of the most important challenges in crowdsourcing.
Assigning tasks to several workers to increase quality through redundant
answers can be expensive if asking homogeneous sources. This limitation has
been overlooked by current crowdsourcing platforms, therefore resulting in
costly solutions. In order to achieve desirable cost-quality tradeoffs, it is
essential to apply efficient crowd access optimization techniques. Our work
argues that optimization needs to be aware of diversity and correlation of
information within groups of individuals so that crowdsourcing redundancy can
be adequately planned beforehand. Based on this intuitive idea, we introduce
the Access Path Model (APM), a novel crowd model that leverages the notion of
access paths as an alternative way of retrieving information. APM aggregates
answers ensuring high quality and meaningful confidence. Moreover, we devise a
greedy optimization algorithm for this model that finds a provably good
approximate plan to access the crowd. We evaluate our approach on three
crowdsourced datasets that illustrate various aspects of the problem. Our
results show that the Access Path Model combined with greedy optimization is
cost-efficient and practical to overcome common difficulties in large-scale
crowdsourcing like data sparsity and anonymity.
| Besmira Nushi, Adish Singla, Anja Gruenheid, Erfan Zamanian, Andreas
Krause, Donald Kossmann | null | 1508.01951 | null | null |
Improving Decision Analytics with Deep Learning: The Case of Financial
Disclosures | stat.ML cs.CL cs.LG | Decision analytics commonly focuses on the text mining of financial news
sources in order to provide managerial decision support and to predict stock
market movements. Existing predictive frameworks almost exclusively apply
traditional machine learning methods, whereas recent research indicates that
traditional machine learning methods are not sufficiently capable of extracting
suitable features and capturing the non-linear nature of complex tasks. As a
remedy, novel deep learning models aim to overcome this issue by extending
traditional neural network models with additional hidden layers. Indeed, deep
learning has been shown to outperform traditional methods in terms of
predictive performance. In this paper, we adapt the novel deep learning
technique to financial decision support. In this instance, we aim to predict
the direction of stock movements following financial disclosures. As a result,
we show how deep learning can outperform the accuracy of random forests as a
benchmark for machine learning by 5.66%.
| Stefan Feuerriegel and Ralph Fehrer | null | 1508.01993 | null | null |
Sensitivity study using machine learning algorithms on simulated r-mode
gravitational wave signals from newborn neutron stars | astro-ph.IM cs.LG | This is a follow-up sensitivity study on r-mode gravitational wave signals
from newborn neutron stars illustrating the applicability of machine learning
algorithms for the detection of long-lived gravitational-wave transients. In
this sensitivity study we examine three machine learning algorithms (MLAs):
artificial neural networks (ANNs), support vector machines (SVMs) and
constrained subspace classifiers (CSCs). The objective of this study is to
compare the detection efficiency that MLAs can achieve with the efficiency of
conventional detection algorithms discussed in an earlier paper. Comparisons
are made using 2 distinct r-mode waveforms. For the training of the MLAs we
assumed that some information about the distance to the source is given so that
the training was performed over distance ranges not wider than half an order of
magnitude. The results of this study suggest that machine learning algorithms
are suitable for the detection of long-lived gravitational-wave transients and
that when assuming knowledge of the distance to the source, MLAs are at least
as efficient as conventional methods.
| Antonis Mytidis, Athanasios Aris Panagopoulos, Orestis P.
Panagopoulos, Andrew Miller, Bernard Whiting | 10.1103/PhysRevD.99.024024 | 1508.02064 | null | null |
A Linearly-Convergent Stochastic L-BFGS Algorithm | math.OC cs.LG math.NA stat.CO stat.ML | We propose a new stochastic L-BFGS algorithm and prove a linear convergence
rate for strongly convex and smooth functions. Our algorithm draws heavily from
a recent stochastic variant of L-BFGS proposed in Byrd et al. (2014) as well as
a recent approach to variance reduction for stochastic gradient descent from
Johnson and Zhang (2013). We demonstrate experimentally that our algorithm
performs well on large-scale convex and non-convex optimization problems,
exhibiting linear convergence and rapidly solving the optimization problems to
high levels of precision. Furthermore, we show that our algorithm performs well
for a wide range of step sizes, often differing by several orders of magnitude.
| Philipp Moritz, Robert Nishihara, Michael I. Jordan | null | 1508.02087 | null | null |
Lifted Representation of Relational Causal Models Revisited:
Implications for Reasoning and Structure Learning | cs.AI cs.LG | Maier et al. (2010) introduced the relational causal model (RCM) for
representing and inferring causal relationships in relational data. A lifted
representation, called abstract ground graph (AGG), plays a central role in
reasoning with and learning of RCM. The correctness of the algorithm proposed
by Maier et al. (2013a) for learning RCM from data relies on the soundness and
completeness of AGG for relational d-separation to reduce the learning of an
RCM to learning of an AGG. We revisit the definition of AGG and show that AGG,
as defined in Maier et al. (2013b), does not correctly abstract all ground
graphs. We revise the definition of AGG to ensure that it correctly abstracts
all ground graphs. We further show that AGG representation is not complete for
relational d-separation, that is, there can exist conditional independence
relations in an RCM that are not entailed by AGG. A careful examination of the
relationship between the lack of completeness of AGG for relational
d-separation and faithfulness conditions suggests that weaker notions of
completeness, namely adjacency faithfulness and orientation faithfulness
between an RCM and its AGG, can be used to learn an RCM from data.
| Sanghack Lee and Vasant Honavar | null | 1508.02103 | null | null |
Learning Structural Kernels for Natural Language Processing | cs.CL cs.LG | Structural kernels are a flexible learning paradigm that has been widely used
in Natural Language Processing. However, the problem of model selection in
kernel-based methods is usually overlooked. Previous approaches mostly rely on
setting default values for kernel hyperparameters or using grid search, which
is slow and coarse-grained. In contrast, Bayesian methods allow efficient model
selection by maximizing the evidence on the training data through
gradient-based methods. In this paper we show how to perform this in the
context of structural kernels by using Gaussian Processes. Experimental results
on tree kernels show that this procedure results in better prediction
performance compared to hyperparameter optimization via grid search. The
framework proposed in this paper can be adapted to other structures besides
trees, e.g., strings and graphs, thereby extending the utility of kernel-based
methods.
| Daniel Beck, Trevor Cohn, Christian Hardmeier, Lucia Specia | null | 1508.02131 | null | null |
Dropout Training for SVMs with Data Augmentation | cs.LG | Dropout and other feature noising schemes have shown promising results in
controlling over-fitting by artificially corrupting the training data. Though
extensive theoretical and empirical studies have been performed for generalized
linear models, little work has been done for support vector machines (SVMs),
one of the most successful approaches for supervised learning. This paper
presents dropout training for both linear SVMs and the nonlinear extension with
latent representation learning. For linear SVMs, to deal with the intractable
expectation of the non-smooth hinge loss under corrupting distributions, we
develop an iteratively re-weighted least square (IRLS) algorithm by exploring
data augmentation techniques. Our algorithm iteratively minimizes the
expectation of a re-weighted least square problem, where the re-weights are
analytically updated. For nonlinear latent SVMs, we consider learning one layer
of latent representations in SVMs and extend the data augmentation technique in
conjunction with first-order Taylor-expansion to deal with the intractable
expected non-smooth hinge loss and the nonlinearity of latent representations.
Finally, we apply the similar data augmentation ideas to develop a new IRLS
algorithm for the expected logistic loss under corrupting distributions, and we
further develop a non-linear extension of logistic regression by incorporating
one layer of latent representations. Our algorithms offer insights on the
connection and difference between the hinge loss and logistic loss in dropout
training. Empirical results on several real datasets demonstrate the
effectiveness of dropout training on significantly boosting the classification
accuracy of both linear and nonlinear SVMs. In addition, the nonlinear SVMs
further improve the prediction performance on several image datasets.
| Ning Chen and Jun Zhu and Jianfei Chen and Ting Chen | null | 1508.02268 | null | null |
Training Conditional Random Fields with Natural Gradient Descent | cs.LG | We propose a novel parameter estimation procedure that works efficiently for
conditional random fields (CRF). This algorithm is an extension to the maximum
likelihood estimation (MLE), using loss functions defined by Bregman
divergences which measure the proximity between the model expectation and the
empirical mean of the feature vectors. This leads to a flexible training
framework from which multiple update strategies can be derived using natural
gradient descent (NGD). We carefully choose the convex function inducing the
Bregman divergence so that the types of updates are reduced, while making the
optimization procedure more effective by transforming the gradients of the
log-likelihood loss function. The derived algorithms are very simple and can be
easily implemented on top of the existing stochastic gradient descent (SGD)
optimization procedure, yet it is very effective as illustrated by experimental
results.
| Yuan Cao | null | 1508.02373 | null | null |
Approximation-Aware Dependency Parsing by Belief Propagation | cs.CL cs.LG | We show how to train the fast dependency parser of Smith and Eisner (2008)
for improved accuracy. This parser can consider higher-order interactions among
edges while retaining O(n^3) runtime. It outputs the parse with maximum
expected recall -- but for speed, this expectation is taken under a posterior
distribution that is constructed only approximately, using loopy belief
propagation through structured factors. We show how to adjust the model
parameters to compensate for the errors introduced by this approximation, by
following the gradient of the actual loss on training data. We find this
gradient by back-propagation. That is, we treat the entire parser
(approximations and all) as a differentiable circuit, as Stoyanov et al. (2011)
and Domke (2010) did for loopy CRFs. The resulting trained parser obtains
higher accuracy with fewer iterations of belief propagation than one trained by
conditional log-likelihood.
| Matthew R. Gormley, Mark Dredze, Jason Eisner | null | 1508.02375 | null | null |
FactorBase: SQL for Learning A Multi-Relational Graphical Model | cs.DB cs.LG | We describe FactorBase, a new SQL-based framework that leverages a relational
database management system to support multi-relational model discovery. A
multi-relational statistical model provides an integrated analysis of the
heterogeneous and interdependent data resources in the database. We adopt the
BayesStore design philosophy: statistical models are stored and managed as
first-class citizens inside a database. Whereas previous systems like
BayesStore support multi-relational inference, FactorBase supports
multi-relational learning. A case study on six benchmark databases evaluates
how our system supports a challenging machine learning application, namely
learning a first-order Bayesian network model for an entire database. Model
learning in this setting has to examine a large number of potential statistical
associations across data tables. Our implementation shows how the SQL
constructs in FactorBase facilitate the fast, modular, and reliable development
of highly scalable model learning systems.
| Oliver Schulte and Zhensong Qian | null | 1508.02428 | null | null |
Towards Machine Wald | math.ST cs.LG stat.TH | The past century has seen a steady increase in the need of estimating and
predicting complex systems and making (possibly critical) decisions with
limited information. Although computers have made possible the numerical
evaluation of sophisticated statistical models, these models are still designed
\emph{by humans} because there is currently no known recipe or algorithm for
dividing the design of a statistical model into a sequence of arithmetic
operations. Indeed, enabling computers to \emph{think} as \emph{humans} are
able to do when faced with uncertainty is challenging in several major ways:
(1) Finding optimal statistical models remains to be formulated as a well-posed
problem when information on the system of interest is incomplete and comes in
the form of a complex combination of sample data, partial knowledge of
constitutive relations and a limited description of the distribution of input
random variables. (2) The space of admissible scenarios along with the space of
relevant information, assumptions, and/or beliefs, tend to be infinite
dimensional, whereas calculus on a computer is necessarily discrete and finite.
To this end, this paper explores the foundations of a rigorous framework
for the scientific computation of optimal statistical estimators/models and
reviews their connections with Decision Theory, Machine Learning, Bayesian
Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty
Quantification and Information Based Complexity.
| Houman Owhadi and Clint Scovel | 10.1007/978-3-319-11259-6_3-1 | 1508.02449 | null | null |
Primal-Dual Active-Set Methods for Isotonic Regression and Trend
Filtering | math.OC cs.LG | Isotonic regression (IR) is a non-parametric calibration method used in
supervised learning. For performing large-scale IR, we propose a primal-dual
active-set (PDAS) algorithm which, in contrast to the state-of-the-art Pool
Adjacent Violators (PAV) algorithm, can be parallelized and is easily
warm-started, and is thus well-suited to online settings. We prove that, like the
PAV algorithm, our PDAS algorithm for IR is convergent and has a work
complexity of O(n), though our numerical experiments suggest that our PDAS
algorithm is often faster than PAV. In addition, we propose PDAS variants (with
safeguarding to ensure convergence) for solving related trend filtering (TF)
problems, providing the results of experiments to illustrate their
effectiveness.
| Zheng Han and Frank E. Curtis | null | 1508.02452 | null | null |
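For reference, the Pool Adjacent Violators baseline mentioned in the entry above fits in a few lines; the sketch below is a standard unweighted PAV routine for a nondecreasing least-squares fit, not the paper's primal-dual active-set method.

```python
def isotonic_pav(y):
    """Pool Adjacent Violators: nondecreasing least-squares fit to y (unweighted)."""
    # Each block holds [mean, weight]; adjacent blocks that violate monotonicity are merged.
    blocks = []
    for v in y:
        blocks.append([float(v), 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2 = blocks.pop()
            m1, w1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    fit = []
    for mean, weight in blocks:
        fit.extend([mean] * weight)
    return fit

print(isotonic_pav([1.0, 3.0, 2.0, 4.0, 3.0, 5.0]))
# [1.0, 2.5, 2.5, 3.5, 3.5, 5.0]
```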
Normalized Hierarchical SVM | cs.LG | We present improved methods of using structured SVMs in a large-scale
hierarchical classification problem, that is when labels are leaves, or sets of
leaves, in a tree or a DAG. We examine the need to normalize both the
regularization and the margin and show how doing so significantly improves
performance, including achieving state-of-the-art results where
unnormalized structured SVMs do not perform better than flat models. We also
describe a further extension of hierarchical SVMs that highlights the connection
between hierarchical SVMs and matrix factorization models.
| Heejin Choi, Yutaka Sasaki, Nathan Srebro | null | 1508.02479 | null | null |
Type-Constrained Representation Learning in Knowledge Graphs | cs.AI cs.LG | Large knowledge graphs increasingly add value to various applications that
require machines to recognize and understand queries and their semantics, as in
search or question answering systems. Latent variable models have increasingly
gained attention for the statistical modeling of knowledge graphs, showing
promising results in tasks related to knowledge graph completion and cleaning.
Besides storing facts about the world, schema-based knowledge graphs are backed
by rich semantic descriptions of entities and relation-types that allow
machines to understand the notion of things and their semantic relationships.
In this work, we study how type-constraints can generally support the
statistical modeling with latent variable models. More precisely, we integrated
prior knowledge in the form of type-constraints into various state-of-the-art latent
variable approaches. Our experimental results show that prior knowledge on
relation-types significantly improves these models up to 77% in link-prediction
tasks. The achieved improvements are especially prominent when a low model
complexity is enforced, a crucial requirement when these models are applied to
very large datasets. Unfortunately, type-constraints are neither always
available nor always complete; e.g., they can become fuzzy when entities lack
proper typing. We show that in these cases, it can be beneficial to apply a
local closed-world assumption that approximates the semantics of relation-types
based on observations made in the data.
| Denis Krompa{\ss} and Stephan Baier and Volker Tresp | null | 1508.02593 | null | null |
Artificial Prediction Markets for Online Prediction of Continuous
Variables-A Preliminary Report | cs.AI cs.LG | We propose the Artificial Continuous Prediction Market (ACPM) as a means to
predict a continuous real value, by integrating a range of data sources and
aggregating the results of different machine learning (ML) algorithms. ACPM
adapts the concept of the (physical) prediction market to address the
prediction of real values instead of discrete events. Each ACPM participant has
a data source, a ML algorithm and a local decision-making procedure that
determines what to bid on what value. The contributions of ACPM are: (i)
adaptation to changes in data quality by the use of learning in: (a) the
market, which weights each market participant to adjust the influence of each
on the market prediction and (b) the participants, which use a Q-learning based
trading strategy to incorporate the market prediction into their subsequent
predictions, (ii) resilience to a changing population of low- and
high-performing participants. We demonstrate the effectiveness of ACPM by
application to an influenza-like illness data set, showing that ACPM outperforms
a range of well-known regression models and is resilient to variation in data
source quality.
| Fatemeh Jahedpari, Marina De Vos, Sattar Hashemi, Benjamin Hirsch,
Julian Padget | null | 1508.02681 | null | null |
The Effects of Hyperparameters on SGD Training of Neural Networks | cs.NE cs.LG | The performance of neural network classifiers is determined by a number of
hyperparameters, including learning rate, batch size, and depth. A number of
attempts have been made to explore these parameters in the literature, and at
times, to develop methods for optimizing them. However, exploration of
parameter spaces has often been limited. In this note, I report the results of
large scale experiments exploring these different parameters and their
interactions.
| Thomas M. Breuel | null | 1508.02788 | null | null |
On the Convergence of SGD Training of Neural Networks | cs.NE cs.LG | Neural networks are usually trained by some form of stochastic gradient
descent (SGD). A number of strategies intended to improve SGD optimization are
in common use, such as learning rate schedules, momentum, and batching.
These are motivated by ideas about the occurrence of local minima at different
scales, valleys, and other phenomena in the objective function. Empirical
results presented here suggest that these phenomena are not significant factors
in SGD optimization of MLP-related objective functions, and that the behavior
of stochastic gradient descent in these problems is better described as the
simultaneous convergence at different rates of many, largely non-interacting
subproblems.
| Thomas M. Breuel | null | 1508.02790 | null | null |
Learning to Hire Teams | cs.HC cs.CY cs.LG | Crowdsourcing and human computation has been employed in increasingly
sophisticated projects that require the solution of a heterogeneous set of
tasks. We explore the challenge of building or hiring an effective team, for
performing tasks required for such projects on an ongoing basis, from an
available pool of applicants or workers who have bid for the tasks. The
recruiter needs to learn workers' skills and expertise by performing online
tests and interviews, and would like to minimize the amount of budget or time
spent in this process before committing to hiring the team. How can one
optimally spend budget to learn the expertise of workers as part of recruiting
a team? How can one exploit the similarities among tasks as well as underlying
social ties or commonalities among the workers for faster learning? We tackle
these decision-theoretic challenges by casting them as an instance of online
learning for best action selection. We present algorithms with PAC bounds on
the required budget to hire a near-optimal team with high confidence.
Furthermore, we consider an embedding of the tasks and workers in an underlying
graph that may arise from task similarities or social ties, and that can
provide additional side-observations for faster learning. We then quantify the
improvement in the bounds that we can achieve depending on the characteristic
properties of this graph structure. We evaluate our methodology on simulated
problem instances as well as on real-world crowdsourcing data collected from
the oDesk platform. Our methodology and results present an interesting
direction of research to tackle the challenges faced by a recruiter for
contract-based crowdsourcing.
| Adish Singla, Eric Horvitz, Pushmeet Kohli, Andreas Krause | null | 1508.02823 | null | null |
Manifold regularization in structured output space for semi-supervised
structured output prediction | cs.LG cs.CV | Structured output prediction aims to learn a predictor to predict a
structured output from an input data vector. The structured outputs include
vectors, trees, sequences, etc. We usually assume that we have a training set of
input-output pairs to train the predictor. However, in many real-world
applications, it is difficult to obtain the output for an input, thus for many
training input data points the structured outputs are missing. In this paper,
we discuss how to learn from a training set composed of some input-output
pairs and some input data points without outputs. This problem is called
semi-supervised structured output prediction. We propose a novel method for this
problem by constructing a nearest neighbor graph from the input space to
represent the manifold structure, and using it to regularize the structured
output space directly. We define a slack structured output for each training data
point, and propose to predict it by learning a structured output predictor.
The learning of both the slack structured outputs and the predictor is unified
within one single minimization problem. In this problem, we propose to
minimize the structured loss between the slack structured outputs of neighboring
data points, and the prediction error measured by the structured loss. The
problem is optimized by an iterative algorithm. Experimental results over three
benchmark data sets show its advantage.
| Fei Jiang, Lili Jia, Xiaobao Sheng, Riley LeMieux | null | 1508.02849 | null | null |
No Regret Bound for Extreme Bandits | stat.ML cs.LG math.OC math.ST stat.TH | Algorithms for hyperparameter optimization abound, all of which work well
under different and often unverifiable assumptions. Motivated by the general
challenge of sequentially choosing which algorithm to use, we study the more
specific task of choosing among distributions to use for random hyperparameter
optimization. This work is naturally framed in the extreme bandit setting,
which deals with sequentially choosing which distribution from a collection to
sample in order to minimize (maximize) the single best cost (reward). Whereas
the distributions in the standard bandit setting are primarily characterized by
their means, a number of subtleties arise when we care about the minimal cost
as opposed to the average cost. For example, there may not be a well-defined
"best" distribution as there is in the standard bandit setting. The best
distribution depends on the rewards that have been obtained and on the
remaining time horizon. Whereas in the standard bandit setting, it is sensible
to compare policies with an oracle which plays the single best arm, in the
extreme bandit setting, there are multiple sensible oracle models. We define a
sensible notion of "extreme regret" in the extreme bandit setting, which
parallels the concept of regret in the standard bandit setting. We then prove
that no policy can asymptotically achieve no extreme regret.
| Robert Nishihara, David Lopez-Paz, L\'eon Bottou | null | 1508.02933 | null | null |
From Cutting Planes Algorithms to Compression Schemes and Active
Learning | cs.LG | Cutting-plane methods are well-studied localization (and optimization)
algorithms. We show that they provide a natural framework to perform machine
learning --- and not just to solve optimization problems posed by machine
learning --- in addition to their intended optimization use. In particular, they
allow one to learn sparse classifiers and provide good compression schemes.
Moreover, we show that very little effort is required to turn them into
effective active learning methods. This last property provides a generic way to
design a whole family of active learning algorithms from existing passive
methods. We present numerical simulations testifying to the relevance of
cutting-plane methods for passive and active learning tasks.
| Liva Ralaivola, Ugo Louche | null | 1508.02986 | null | null |
Optimized Projections for Compressed Sensing via Direct Mutual Coherence
Minimization | cs.IT cs.LG math.IT | Compressed Sensing (CS) is a novel technique for simultaneous signal sampling
and compression based on the existence of a sparse representation of signal and
a projected dictionary $PD$, where $P\in\mathbb{R}^{m\times d}$ is the
projection matrix and $D\in\mathbb{R}^{d\times n}$ is the dictionary. To
exactly recover the signal with a small number of measurements $m$, the
projected dictionary $PD$ is expected to be of low mutual coherence. Several
previous methods attempt to find the projection $P$ such that the mutual
coherence of $PD$ can be as low as possible. However, they do not minimize the
mutual coherence directly and thus their methods are far from optimal. Also, the
solvers they use lack convergence guarantees, and thus there is no
guarantee on the quality of the obtained solutions. This work aims to address
these issues. We propose to find an optimal projection by minimizing the mutual
coherence of $PD$ directly. This leads to a nonconvex nonsmooth minimization
problem. We then approximate it by smoothing and solve it by alternate
minimization. We further prove the convergence of our algorithm. To the best of
our knowledge, this is the first work which directly minimizes the mutual
coherence of the projected dictionary with a convergence guarantee. Numerical
experiments demonstrate that the proposed method can recover sparse signals
better than existing methods.
| Canyi Lu, Huan Li, Zhouchen Lin | null | 1508.03117 | null | null |
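The quantity minimized in the entry above, the mutual coherence of the projected dictionary $PD$, is the largest absolute normalized inner product between distinct columns. A small NumPy helper for evaluating it is given below; this is only an evaluation utility with illustrative sizes, not the paper's optimization algorithm.

```python
import numpy as np

def mutual_coherence(M: np.ndarray) -> float:
    """Largest |<m_i, m_j>| / (||m_i|| ||m_j||) over distinct columns of M."""
    cols = M / np.linalg.norm(M, axis=0, keepdims=True)   # normalize columns
    gram = np.abs(cols.T @ cols)
    np.fill_diagonal(gram, 0.0)                           # ignore self-coherence
    return gram.max()

rng = np.random.default_rng(0)
d, n, m = 64, 128, 16
D = rng.normal(size=(d, n))      # dictionary
P = rng.normal(size=(m, d))      # random projection (a common baseline choice)
print("mu(D)  =", round(mutual_coherence(D), 3))
print("mu(PD) =", round(mutual_coherence(P @ D), 3))
```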
Probabilistic Dependency Networks for Prediction and Diagnostics | cs.LG | Research in transportation frequently involves modelling and predicting
attributes of events that occur at regular intervals. The event could be the
arrival of a bus at a bus stop, the volume of traffic at a particular point,
the demand at a particular bus stop, etc. In this work, we propose a specific
implementation of probabilistic graphical models to learn the probabilistic
dependency between the events that occur in a network. A dependency graph is
built from the past observed instances of the event and we use the graph to
understand the causal effects of some events on others in the system. The
dependency graph is also used to predict the attributes of future events and is
shown to have good prediction accuracy compared to the state of the art.
| Narayanan U. Edakunni, Aditi Raghunathan, Abhishek Tripathi, John
Handley, Fredric Roulland | null | 1508.03130 | null | null |
Hash Function Learning via Codewords | cs.LG | In this paper we introduce a novel hash learning framework that has two main
distinguishing features, when compared to past approaches. First, it utilizes
codewords in the Hamming space as ancillary means to accomplish its hash
learning task. These codewords, which are inferred from the data, attempt to
capture similarity aspects of the data's hash codes. Secondly and more
importantly, the same framework is capable of addressing supervised,
unsupervised and, even, semi-supervised hash learning tasks in a natural
manner. A series of comparative experiments focused on content-based image
retrieval highlights its performance advantages.
| Yinjie Huang and Michael Georgiopoulos and Georgios C. Anagnostopoulos | null | 1508.03285 | null | null |
A Survey on Contextual Multi-armed Bandits | cs.LG | In this survey we cover a few stochastic and adversarial contextual bandit
algorithms. We analyze each algorithm's assumption and regret bound.
| Li Zhou | null | 1508.03326 | null | null |
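As one concrete example of the stochastic contextual bandit algorithms such a survey covers, here is a minimal NumPy sketch of LinUCB with a separate linear model per arm; the synthetic environment, exploration parameter, and horizon are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_arms, d, horizon, alpha = 4, 5, 3000, 1.0
theta_true = rng.normal(size=(n_arms, d))          # unknown arm parameters

A = np.stack([np.eye(d) for _ in range(n_arms)])   # per-arm ridge Gram matrices
b = np.zeros((n_arms, d))

regret = 0.0
for t in range(horizon):
    x = rng.normal(size=d)                          # context for this round
    ucb = np.empty(n_arms)
    for a in range(n_arms):
        A_inv = np.linalg.inv(A[a])
        theta_hat = A_inv @ b[a]
        ucb[a] = theta_hat @ x + alpha * np.sqrt(x @ A_inv @ x)
    a = int(np.argmax(ucb))
    reward = theta_true[a] @ x + 0.1 * rng.normal()
    A[a] += np.outer(x, x)                          # update the chosen arm's statistics
    b[a] += reward * x
    regret += (theta_true @ x).max() - theta_true[a] @ x

print("cumulative regret after", horizon, "rounds:", round(regret, 1))
```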
Multi-Task Learning with Group-Specific Feature Space Sharing | cs.LG | When faced with learning a set of inter-related tasks from a limited amount
of usable data, learning each task independently may lead to poor
generalization performance. Multi-Task Learning (MTL) exploits the latent
relations between tasks and overcomes data scarcity limitations by co-learning
all these tasks simultaneously to offer improved performance. We propose a
novel Multi-Task Multiple Kernel Learning framework based on Support Vector
Machines for binary classification tasks. By considering pair-wise task
affinity in terms of similarity between a pair's respective feature spaces, the
new framework, compared to other similar MTL approaches, offers a high degree
of flexibility in determining how similar feature spaces should be, as well as
which pairs of tasks should share a common feature space in order to benefit
overall performance. The associated optimization problem is solved via a block
coordinate descent, which employs a consensus-form Alternating Direction Method
of Multipliers algorithm to optimize the Multiple Kernel Learning weights and,
hence, to determine task affinities. Empirical evaluation on seven data sets
exhibits a statistically significant improvement of our framework's results
compared to the ones of several other Clustered Multi-Task Learning methods.
| Niloofar Yousefi, Michael Georgiopoulos and Georgios C.
Anagnostopoulos | null | 1508.03329 | null | null |
Dimensionality Reduction of Collective Motion by Principal Manifolds | math.NA cs.LG cs.MA math.DS stat.ML | While the existence of low-dimensional embedding manifolds has been shown in
patterns of collective motion, the current battery of nonlinear dimensionality
reduction methods is not amenable to the analysis of such manifolds. This is
mainly due to the necessary spectral decomposition step, which limits control
over the mapping from the original high-dimensional space to the embedding
space. Here, we propose an alternative approach that demands a two-dimensional
embedding which topologically summarizes the high-dimensional data. In this
sense, our approach is closely related to the construction of one-dimensional
principal curves that minimize orthogonal error to data points subject to
smoothness constraints. Specifically, we construct a two-dimensional principal
manifold directly in the high-dimensional space using cubic smoothing splines,
and define the embedding coordinates in terms of geodesic distances. Thus, the
mapping from the high-dimensional data to the manifold is defined in terms of
local coordinates. Through representative examples, we show that compared to
existing nonlinear dimensionality reduction methods, the principal manifold
retains the original structure even in noisy and sparse datasets. The principal
manifold finding algorithm is applied to configurations obtained from a
dynamical system of multiple agents simulating a complex maneuver called
predator mobbing, and the resulting two-dimensional embedding is compared with
that of a well-established nonlinear dimensionality reduction method.
| Kelum Gajamannage, Sachit Butail, Maurizio Porfiri, Erik M. Bollt | 10.1016/j.physd.2014.09.009 | 1508.03332 | null | null |
A Randomized Rounding Algorithm for Sparse PCA | cs.DS cs.LG stat.ML | We present and analyze a simple, two-step algorithm to approximate the
optimal solution of the sparse PCA problem. Our approach first solves an L1
penalized version of the NP-hard sparse PCA optimization problem and then uses
a randomized rounding strategy to sparsify the resulting dense solution. Our
main theoretical result guarantees an additive error approximation and provides
a tradeoff between sparsity and accuracy. Our experimental evaluation indicates
that our approach is competitive in practice, even compared to state-of-the-art
toolboxes such as Spasm.
| Kimon Fountoulakis, Abhisek Kundu, Eugenia-Maria Kontopoulou and
Petros Drineas | null | 1508.03337 | null | null |
Learning from Real Users: Rating Dialogue Success with Neural Networks
for Reinforcement Learning in Spoken Dialogue Systems | cs.LG cs.CL | To train a statistical spoken dialogue system (SDS) it is essential that an
accurate method for measuring task success is available. To date training has
relied on presenting a task to either simulated or paid users and inferring the
dialogue's success by observing whether this presented task was achieved or
not. Our aim however is to be able to learn from real users acting under their
own volition, in which case it is non-trivial to rate the success as any prior
knowledge of the task is simply unavailable. User feedback may be utilised but
has been found to be inconsistent. Hence, here we present two neural network
models that evaluate a sequence of turn-level features to rate the success of a
dialogue. Importantly these models make no use of any prior knowledge of the
user's task. The models are trained on dialogues generated by a simulated user
and the best model is then used to train a policy on-line which is shown to
perform at least as well as a baseline system using prior knowledge of the
user's task. We note that the models should also be of interest for evaluating
SDS and for monitoring a dialogue in rule-based SDS.
| Pei-Hao Su, David Vandyke, Milica Gasic, Dongho Kim, Nikola Mrksic,
Tsung-Hsien Wen, Steve Young | null | 1508.03386 | null | null |
Doubly Stochastic Primal-Dual Coordinate Method for Bilinear
Saddle-Point Problem | cs.LG stat.ML | We propose a doubly stochastic primal-dual coordinate optimization algorithm
for empirical risk minimization, which can be formulated as a bilinear
saddle-point problem. In each iteration, our method randomly samples a block of
coordinates of the primal and dual solutions to update. The linear convergence
of our method could be established in terms of 1) the distance from the current
iterate to the optimal solution and 2) the primal-dual objective gap. We show
that the proposed method has a lower overall complexity than existing
coordinate methods when either the data matrix has a factorized structure or
the proximal mapping on each block is computationally expensive, e.g.,
involving an eigenvalue decomposition. The efficiency of the proposed method is
confirmed by empirical studies on several real applications, such as the
multi-task large margin nearest neighbor problem.
| Adams Wei Yu, Qihang Lin, Tianbao Yang | null | 1508.03390 | null | null |
Reward Shaping with Recurrent Neural Networks for Speeding up On-Line
Policy Learning in Spoken Dialogue Systems | cs.LG cs.CL | Statistical spoken dialogue systems have the attractive property of being
able to be optimised from data via interactions with real users. However in the
reinforcement learning paradigm the dialogue manager (agent) often requires
significant time to explore the state-action space to learn to behave in a
desirable manner. This is a critical issue when the system is trained on-line
with real users where learning costs are expensive. Reward shaping is one
promising technique for addressing these concerns. Here we examine three
recurrent neural network (RNN) approaches for providing reward shaping
information in addition to the primary (task-orientated) environmental
feedback. These RNNs are trained on returns from dialogues generated by a
simulated user and attempt to diffuse the overall evaluation of the dialogue
back down to the turn level to guide the agent towards good behaviour faster.
In both simulated and real user scenarios these RNNs are shown to increase
policy learning speed. Importantly, they do not require prior knowledge of the
user's goal.
| Pei-Hao Su, David Vandyke, Milica Gasic, Nikola Mrksic, Tsung-Hsien
Wen, Steve Young | null | 1508.03391 | null | null |
End-to-end Learning of LDA by Mirror-Descent Back Propagation over a
Deep Architecture | cs.LG | We develop a fully discriminative learning approach for the supervised Latent
Dirichlet Allocation (LDA) model using Back Propagation (i.e., BP-sLDA), which
maximizes the posterior probability of the prediction variable given the input
document. Different from traditional variational learning or Gibbs sampling
approaches, the proposed learning method applies (i) the mirror descent
algorithm for maximum a posteriori inference and (ii) back propagation over a
deep architecture together with stochastic gradient/mirror descent for model
parameter estimation, leading to scalable and end-to-end discriminative
learning of the model. As a byproduct, we also apply this technique to develop
a new learning method for the traditional unsupervised LDA model (i.e.,
BP-LDA). Experimental results on three real-world regression and classification
tasks show that the proposed methods significantly outperform the previous
supervised topic models and neural networks, and are on par with deep neural
networks.
| Jianshu Chen, Ji He, Yelong Shen, Lin Xiao, Xiaodong He, Jianfeng Gao,
Xinying Song, Li Deng | null | 1508.03398 | null | null |
Emphatic TD Bellman Operator is a Contraction | stat.ML cs.LG | Recently, \citet{SuttonMW15} introduced the emphatic temporal differences
(ETD) algorithm for off-policy evaluation in Markov decision processes. In this
short note, we show that the projected fixed-point equation that underlies ETD
involves a contraction operator, with a $\sqrt{\gamma}$-contraction modulus
(where $\gamma$ is the discount factor). This allows us to provide error bounds
on the approximation error of ETD. To our knowledge, these are the first error
bounds for an off-policy evaluation algorithm under general target and behavior
policies.
| Assaf Hallak, Aviv Tamar and Shie Mannor | null | 1508.03411 | null | null |
Hierarchical Models as Marginals of Hierarchical Models | math.PR cs.LG cs.NE math.ST stat.TH | We investigate the representation of hierarchical models in terms of
marginals of other hierarchical models with smaller interactions. We focus on
binary variables and marginals of pairwise interaction models whose hidden
variables are conditionally independent given the visible variables. In this
case the problem is equivalent to the representation of linear subspaces of
polynomials by feedforward neural networks with soft-plus computational units.
We show that every hidden variable can freely model multiple interactions among
the visible variables, which allows us to generalize and improve previous
results. In particular, we show that a restricted Boltzmann machine with fewer
than $[ 2(\log(v)+1) / (v+1) ] 2^v-1$ hidden binary variables can approximate
every distribution of $v$ visible binary variables arbitrarily well, compared
to $2^{v-1}-1$ from the best previously known result.
| Guido Montufar and Johannes Rauh | null | 1508.03606 | null | null |
Towards an Axiomatic Approach to Hierarchical Clustering of Measures | stat.ML cs.LG math.ST stat.ME stat.TH | We propose some axioms for hierarchical clustering of probability measures
and investigate their ramifications. The basic idea is to let the user
stipulate the clusters for some elementary measures. This is done without the
need of any notion of metric, similarity or dissimilarity. Our main results
then show that for each suitable choice of user-defined clustering on
elementary measures we obtain a unique notion of clustering on a large set of
distributions satisfying a set of additivity and continuity axioms. We
illustrate the developed theory by numerous examples including some with and
some without a density.
| Philipp Thomann, Ingo Steinwart, Nico Schmid | null | 1508.03712 | null | null |
Classifying Relations via Long Short Term Memory Networks along Shortest
Dependency Path | cs.CL cs.LG | Relation classification is an important research arena in the field of
natural language processing (NLP). In this paper, we present SDP-LSTM, a novel
neural network to classify the relation of two entities in a sentence. Our
neural architecture leverages the shortest dependency path (SDP) between two
entities; multichannel recurrent neural networks, with long short term memory
(LSTM) units, pick up heterogeneous information along the SDP. Our proposed
model has several distinct features: (1) The shortest dependency paths retain
most relevant information (to relation classification), while eliminating
irrelevant words in the sentence. (2) The multichannel LSTM networks allow
effective information integration from heterogeneous sources over the
dependency paths. (3) A customized dropout strategy regularizes the neural
network to alleviate overfitting. We test our model on the SemEval 2010
relation classification task, and achieve an $F_1$-score of 83.7\%, higher than
competing methods in the literature.
| Xu Yan, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, Zhi Jin | null | 1508.03720 | null | null |
A Comparative Study on Regularization Strategies for Embedding-based
Neural Networks | cs.CL cs.LG | This paper aims to compare different regularization strategies to address a
common phenomenon, severe overfitting, in embedding-based neural networks for
NLP. We chose two widely studied neural models and tasks as our testbed. We
tried several frequently applied or newly proposed regularization strategies,
including penalizing weights (embeddings excluded), penalizing embeddings,
re-embedding words, and dropout. We also emphasized on incremental
hyperparameter tuning, and combining different regularizations. The results
provide a picture on tuning hyperparameters for neural NLP models.
| Hao Peng, Lili Mou, Ge Li, Yunchuan Chen, Yangyang Lu, Zhi Jin | null | 1508.03721 | null | null |
A Generative Word Embedding Model and its Low Rank Positive Semidefinite
Solution | cs.CL cs.LG stat.ML | Most existing word embedding methods can be categorized into Neural Embedding
Models and Matrix Factorization (MF)-based methods. However, some models are
opaque to probabilistic interpretation, and MF-based methods, typically solved
using Singular Value Decomposition (SVD), may incur loss of corpus information.
In addition, it is desirable to incorporate global latent factors, such as
topics, sentiments or writing styles, into the word embedding model. Since
generative models provide a principled way to incorporate latent factors, we
propose a generative word embedding model, which is easy to interpret, and can
serve as a basis of more sophisticated latent factor models. The model
inference reduces to a low rank weighted positive semidefinite approximation
problem. Its optimization is approached by eigendecomposition on a submatrix,
followed by online blockwise regression, which is scalable and avoids the
information loss in SVD. In experiments on 7 common benchmark datasets, our
vectors are competitive to word2vec, and better than other MF-based methods.
| Shaohua Li, Jun Zhu, Chunyan Miao | null | 1508.03826 | null | null |
Schema Independent Relational Learning | cs.DB cs.AI cs.LG cs.LO | Learning novel concepts and relations from relational databases is an
important problem with many applications in database systems and machine
learning. Relational learning algorithms learn the definition of a new relation
in terms of existing relations in the database. Nevertheless, the same data set
may be represented under different schemas for various reasons, such as
efficiency, data quality, and usability. Unfortunately, the output of current
relational learning algorithms tends to vary quite substantially over the
choice of schema, both in terms of learning accuracy and efficiency. This
variation complicates their off-the-shelf application. In this paper, we
introduce and formalize the property of schema independence of relational
learning algorithms, and study both the theoretical and empirical dependence of
existing algorithms on the common class of (de) composition schema
transformations. We study both sample-based learning algorithms, which learn
from sets of labeled examples, and query-based algorithms, which learn by
asking queries to an oracle. We prove that current relational learning
algorithms are generally not schema independent. For query-based learning
algorithms we show that the (de) composition transformations influence their
query complexity. We propose Castor, a sample-based relational learning
algorithm that achieves schema independence by leveraging data dependencies. We
support the theoretical results with an empirical study that demonstrates the
schema dependence/independence of several algorithms on existing benchmark and
real-world datasets under (de) compositions.
| Jose Picado, Arash Termehchy, Alan Fern, Parisa Ataei | null | 1508.03846 | null | null |
Online Representation Learning in Recurrent Neural Language Models | cs.CL cs.LG cs.NE | We investigate an extension of continuous online learning in recurrent neural
network language models. The model keeps a separate vector representation of
the current unit of text being processed and adaptively adjusts it after each
prediction. The initial experiments give promising results, indicating that the
method is able to increase language modelling accuracy, while also decreasing
the parameters needed to store the model along with the computation required at
each step.
| Marek Rei | 10.18653/v1/D15-1026 | 1508.03854 | null | null |
Two-stage Cascaded Classifier for Purchase Prediction | cs.IR cs.LG | In this paper we describe our machine learning solution for the RecSys
Challenge, 2015. We have proposed a time-efficient two-stage cascaded
classifier for the prediction of buy sessions and purchased items within such
sessions. Based on the model, several interesting features we found, and the
formation of our own test bed, we have achieved a reasonable score. Usage of Random
Forests helps us to cope with the effect of the multiplicity of good models
depending on varying subsets of features in the purchased items prediction and,
in its turn, boosting is used as a suitable technique to overcome severe class
imbalance of the buy-session prediction.
| Sheikh Muhammad Sarwar, Mahamudul Hasan, Dmitry I. Ignatov | null | 1508.03856 | null | null |
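The cascade described in the entry above (arXiv 1508.03856) first flags buy sessions and then predicts purchased items only within those sessions, with boosting handling the class imbalance of the first stage and Random Forests the second. The sketch below wires those stages together with scikit-learn; the feature matrices and labels are synthetic placeholders, not the challenge features.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical features: one row per session (stage 1), one per candidate item (stage 2).
X_session, y_buy = rng.normal(size=(5000, 20)), rng.integers(0, 2, 5000)
X_item, y_bought = rng.normal(size=(20000, 30)), rng.integers(0, 2, 20000)
item_session = rng.integers(0, 5000, 20000)        # session each item row belongs to

# Stage 1: buy-session prediction (boosting to cope with class imbalance).
stage1 = GradientBoostingClassifier().fit(X_session, y_buy)
buy_sessions = set(np.where(stage1.predict(X_session) == 1)[0])

# Stage 2: purchased-item prediction, evaluated only inside predicted buy sessions.
stage2 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_item, y_bought)
mask = np.array([s in buy_sessions for s in item_session])
item_pred = np.zeros(len(X_item), dtype=int)
item_pred[mask] = stage2.predict(X_item[mask])
print(int(item_pred.sum()), "item rows predicted as purchased")
```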
Predicting Grades | cs.LG | To increase efficacy in traditional classroom courses as well as in Massive
Open Online Courses (MOOCs), automated systems supporting the instructor are
needed. One important problem is to automatically detect students that are
going to do poorly in a course early enough to be able to take remedial
actions. Existing grade prediction systems focus on maximizing the accuracy of
the prediction while overlooking the importance of issuing timely and
personalized predictions. This paper proposes an algorithm that predicts the
final grade of each student in a class. It issues a prediction for each student
individually, when the expected accuracy of the prediction is sufficient. The
algorithm learns online the optimal prediction and the time to issue it, based
on the past history of students' performance in a course. We
derive a confidence estimate for the prediction accuracy and demonstrate the
performance of our algorithm on a dataset obtained based on the performance of
approximately 700 UCLA undergraduate students who have taken an introductory
digital signal processing course over the past 7 years. We demonstrate that for 85% of
the students we can predict with 76% accuracy whether they are going to do well or
poorly in the class after the 4th course week. Using data obtained from a pilot
course, our methodology suggests that it is effective to perform early in-class
assessments such as quizzes, which result in timely performance prediction for
each student, thereby enabling timely interventions by the instructor (at the
student or class level) when necessary.
| Yannick Meier, Jie Xu, Onur Atan and Mihaela van der Schaar | 10.1109/TSP.2015.2496278 | 1508.03865 | null | null |
Using a Machine Learning Approach to Implement and Evaluate Product Line
Features | cs.SE cs.LG | Bike-sharing systems are a means of smart transportation in urban
environments with the benefit of a positive impact on urban mobility. In this
paper we are interested in studying and modeling the behavior of features that
permit the end user to access, with her/his web browser, the status of the
Bike-Sharing system. In particular, we address features able to make a
prediction on the system state. We propose to use a machine learning approach
to analyze usage patterns and learn computational models of such features from
logs of system usage.
On the one hand, machine learning methodologies provide a powerful and
general means to implement a wide choice of predictive features. On the other
hand, trained machine learning models are provided with a measure of predictive
performance that can be used as a metric to assess the cost-performance
trade-off of the feature. This provides a principled way to assess the runtime
behavior of different components before putting them into operation.
| Davide Bacciu (Dipartimento di Informatica, Universit\`a di Pisa),
Stefania Gnesi (Istituto di Scienza e Tecnologie dell'Informazione, CNR),
Laura Semini (Dipartimento di Informatica, Universit\`a di Pisa) | 10.4204/EPTCS.188.8 | 1508.03906 | null | null |
Owl and Lizard: Patterns of Head Pose and Eye Pose in Driver Gaze
Classification | cs.CV cs.HC cs.LG | Accurate, robust, inexpensive gaze tracking in the car can help keep a driver
safe by facilitating the more effective study of how to improve (1) vehicle
interfaces and (2) the design of future Advanced Driver Assistance Systems. In
this paper, we estimate head pose and eye pose from monocular video using
methods developed extensively in prior work and ask two new interesting
questions. First, how much better can we classify driver gaze using head and
eye pose versus just using head pose? Second, are there individual-specific
gaze strategies that strongly correlate with how much gaze classification
improves with the addition of eye pose information? We answer these questions
by evaluating data drawn from an on-road study of 40 drivers. The main insight
of the paper is conveyed through the analogy of an "owl" and "lizard" which
describes the degree to which the eyes and the head move when shifting gaze.
When the head moves a lot ("owl"), not much classification improvement is
attained by estimating eye pose on top of head pose. On the other hand, when
the head stays still and only the eyes move ("lizard"), classification accuracy
increases significantly from adding in eye pose. We characterize how that
accuracy varies between people, gaze strategies, and gaze regions.
| Lex Fridman, Joonbum Lee, Bryan Reimer, Trent Victor | null | 1508.04028 | null | null |
A Generative Model for Multi-Dialect Representation | cs.CV cs.LG stat.ML | In the era of deep learning, several unsupervised models have been developed
to capture the key features in unlabeled handwritten data. Popular among them
is the Restricted Boltzmann Machine (RBM). However, due to the novelty in
handwritten multidialect data, the RBM may fail to generate an efficient
representation. In this paper we propose a generative model, the Mode
Synthesizing Machine (MSM), for on-line representation of real-life handwritten
multidialect language data. The MSM takes advantage of the hierarchical
representation of the modes of a data distribution using a two-point error
update to learn a sequence of representative multidialects in a generative way.
Experiments were performed to evaluate the performance of the MSM over the RBM
with the former attaining much lower error values than the latter on both
independent and mixed data sets.
| Emmanuel N. Osegi | null | 1508.04035 | null | null |
A Deep Learning Approach to Structured Signal Recovery | cs.LG stat.ML | In this paper, we develop a new framework for sensing and recovering
structured signals. In contrast to compressive sensing (CS) systems that employ
linear measurements, sparse representations, and computationally complex
convex/greedy algorithms, we introduce a deep learning framework that supports
both linear and mildly nonlinear measurements, that learns a structured
representation from training data, and that efficiently computes a signal
estimate. In particular, we apply a stacked denoising autoencoder (SDA), as an
unsupervised feature learner. SDA enables us to capture statistical
dependencies between the different elements of certain signals and improve
signal recovery performance as compared to the CS approach.
| Ali Mousavi, Ankit B. Patel, Richard G. Baraniuk | 10.1109/ALLERTON.2015.7447163 | 1508.04065 | null | null |
Evaluating Classifiers in Detecting 419 Scams in Bilingual Cybercriminal
Communities | cs.SI cs.CY cs.LG | Incidents of organized cybercrime are rising because criminals are reaping
high financial rewards while incurring low costs to commit crime. As the
digital landscape broadens to accommodate more internet-enabled devices and
technologies like social media, more cybercriminals who are not native English
speakers are invading cyberspace to cash in on quick exploits. In this paper we
evaluate the performance of three machine learning classifiers in detecting 419
scams in a bilingual Nigerian cybercriminal community. We use three popular
classifiers in text processing namely: Na\"ive Bayes, k-nearest neighbors (IBK)
and Support Vector Machines (SVM). The preliminary results on a real world
dataset reveal the SVM significantly outperforms Na\"ive Bayes and IBK at 95%
confidence level.
| Alex V. Mbaziira, Ehab Abozinadah, and James H. Jones Jr | null | 1508.04123 | null | null |
Distributed Deep Q-Learning | cs.LG cs.AI cs.DC cs.NE | We propose a distributed deep learning model to successfully learn control
policies directly from high-dimensional sensory input using reinforcement
learning. The model is based on the deep Q-network, a convolutional neural
network trained with a variant of Q-learning. Its input is raw pixels and its
output is a value function estimating future rewards from taking an action
given a system state. To distribute the deep Q-network training, we adapt the
DistBelief software framework to the context of efficiently training
reinforcement learning agents. As a result, the method is completely
asynchronous and scales well with the number of machines. We demonstrate that
the deep Q-network agent, receiving only the pixels and the game score as
inputs, was able to achieve reasonable success on a simple game with minimal
parameter tuning.
| Hao Yi Ong, Kevin Chavez, Augustus Hong | null | 1508.04186 | null | null |
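At the core of the system in the entry above (arXiv 1508.04186) is the Q-learning update, which regresses Q(s, a) toward the bootstrapped target r + gamma * max_a' Q(s', a'). The sketch below shows only that update with a linear Q-function in NumPy; the convolutional network, replay memory, and DistBelief-style asynchronous distribution of the paper are omitted.

```python
import numpy as np

class LinearQ:
    """A minimal stand-in for the deep Q-network: Q(s, a) = w[a] . s."""

    def __init__(self, n_features, n_actions, lr=0.01, gamma=0.99):
        self.w = np.zeros((n_actions, n_features))
        self.lr, self.gamma = lr, gamma

    def q_values(self, s):
        return self.w @ s

    def update(self, s, a, r, s_next, done):
        # Bootstrapped Q-learning target: r + gamma * max_a' Q(s', a').
        target = r if done else r + self.gamma * np.max(self.q_values(s_next))
        td_error = target - self.q_values(s)[a]
        self.w[a] += self.lr * td_error * s        # semi-gradient update step
        return td_error

rng = np.random.default_rng(0)
agent = LinearQ(n_features=4, n_actions=2)
s = rng.normal(size=4)
for _ in range(100):                               # toy epsilon-greedy interaction loop
    a = int(np.argmax(agent.q_values(s))) if rng.random() > 0.1 else int(rng.integers(0, 2))
    s_next, r = rng.normal(size=4), float(s[0] > 0)
    agent.update(s, a, r, s_next, done=False)
    s = s_next
print(agent.w)
```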
Zero-Truncated Poisson Tensor Factorization for Massive Binary Tensors | stat.ML cs.LG | We present a scalable Bayesian model for low-rank factorization of massive
tensors with binary observations. The proposed model has the following key
properties: (1) in contrast to the models based on the logistic or probit
likelihood, using a zero-truncated Poisson likelihood for binary data allows
our model to scale up in the number of \emph{ones} in the tensor, which is
especially appealing for massive but sparse binary tensors; (2)
side-information in form of binary pairwise relationships (e.g., an adjacency
network) between objects in any tensor mode can also be leveraged, which can be
especially useful in "cold-start" settings; and (3) the model admits simple
Bayesian inference via batch, as well as \emph{online} MCMC; the latter allows
scaling up even for \emph{dense} binary data (i.e., when the number of ones in
the tensor/network is also massive). In addition, non-negative factor matrices
in our model provide easy interpretability, and the tensor rank can be inferred
from the data. We evaluate our model on several large-scale real-world binary
tensors, achieving excellent computational scalability, and also demonstrate
its usefulness in leveraging side-information provided in form of
mode-network(s).
| Changwei Hu, Piyush Rai, Lawrence Carin | null | 1508.04210 | null | null |
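The scalability claim in the entry above (arXiv 1508.04210) rests on the truncated-Poisson link for binary data: with P(b=1) = 1 - exp(-lambda) and lambda from a non-negative CP decomposition, the contribution of all zero entries reduces to factor column sums, so the likelihood touches only the ones. The NumPy sketch below evaluates that likelihood; the inference loop is omitted and the factor values are illustrative.

```python
import numpy as np

def bp_link_loglik(ones_idx, U, V, W):
    """Log-likelihood of a binary 3-way tensor under a truncated-Poisson link.

    P(b=1) = 1 - exp(-lambda), with lambda given by a non-negative CP model.
    ones_idx : (nnz, 3) indices of the entries equal to 1.
    U, V, W  : non-negative factor matrices of shapes (I, R), (J, R), (K, R).
    Cost is O(nnz * R): the zeros enter only through factor column sums.
    """
    i, j, k = ones_idx[:, 0], ones_idx[:, 1], ones_idx[:, 2]
    lam_ones = np.sum(U[i] * V[j] * W[k], axis=1)           # rates at the ones
    # Sum of rates over the *entire* tensor, from column sums alone.
    total_rate = np.sum(U.sum(axis=0) * V.sum(axis=0) * W.sum(axis=0))
    ll_ones = np.sum(np.log1p(-np.exp(-lam_ones)))          # log P(b=1)
    ll_zeros = -(total_rate - lam_ones.sum())                # sum over zeros of -lambda
    return ll_ones + ll_zeros

rng = np.random.default_rng(0)
I, J, K, R = 50, 40, 30, 5
U, V, W = (rng.gamma(1.0, 0.1, size=(n, R)) for n in (I, J, K))
ones_idx = np.stack([rng.integers(0, n, 200) for n in (I, J, K)], axis=1)
print(bp_link_loglik(ones_idx, U, V, W))
```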
Scalable Bayesian Non-Negative Tensor Factorization for Massive Count
Data | stat.ML cs.LG | We present a Bayesian non-negative tensor factorization model for
count-valued tensor data, and develop scalable inference algorithms (both batch
and online) for dealing with massive tensors. Our generative model can handle
overdispersed counts as well as infer the rank of the decomposition. Moreover,
leveraging a reparameterization of the Poisson distribution as a multinomial
facilitates conjugacy in the model and enables simple and efficient Gibbs
sampling and variational Bayes (VB) inference updates, with a computational
cost that only depends on the number of nonzeros in the tensor. The model also
provides a nice interpretability for the factors; in our model, each factor
corresponds to a "topic". We develop a set of online inference algorithms that
allow further scaling up the model to massive tensors, for which batch
inference methods may be infeasible. We apply our framework on diverse
real-world applications, such as \emph{multiway} topic modeling on a scientific
publications database, analyzing a political science data set, and analyzing a
massive household transactions data set.
| Changwei Hu, Piyush Rai, Changyou Chen, Matthew Harding, Lawrence
Carin | null | 1508.04211 | null | null |
Supervised learning of sparse context reconstruction coefficients for
data representation and classification | cs.LG cs.CV | Context of data points, which is usually defined as the other data points in
a data set, has been found to play important roles in data representation and
classification. In this paper, we study the problem of using context of a data
point for its classification problem. Our work is inspired by the observation
that only very few data points are critical in the context of a data
point for its representation and classification. We propose to represent a data
point as the sparse linear combination of its context, and learn the sparse
context in a supervised way to increase its discriminative ability. To this
end, we propose a novel formulation for context learning, by modeling the
learning of the context parameters and the classifier in a unified objective, and
optimizing it with an alternating strategy in an iterative algorithm.
Experiments on three benchmark data sets show its advantage over
state-of-the-art context-based data representation and classification methods.
| Xuejie Liu, Jingbin Wang, Ming Yin, Benjamin Edwards, Peijuan Xu | 10.1007/s00521-015-2042-5 | 1508.04221 | null | null |
Deep clustering: Discriminative embeddings for segmentation and
separation | cs.NE cs.LG stat.ML | We address the problem of acoustic source separation in a deep learning
framework we call "deep clustering." Rather than directly estimating signals or
masking functions, we train a deep network to produce spectrogram embeddings
that are discriminative for partition labels given in training data. Previous
deep network approaches provide great advantages in terms of learning power and
speed, but previously it has been unclear how to use them to separate signals
in a class-independent way. In contrast, spectral clustering approaches are
flexible with respect to the classes and number of items to be segmented, but
it has been unclear how to leverage the learning power and speed of deep
networks. To obtain the best of both worlds, we use an objective function
to train embeddings that yield a low-rank approximation to an ideal pairwise
affinity matrix, in a class-independent way. This avoids the high cost of
spectral factorization and instead produces compact clusters that are amenable
to simple clustering methods. The segmentations are therefore implicitly
encoded in the embeddings, and can be "decoded" by clustering. Preliminary
experiments show that the proposed method can separate speech: when trained on
spectrogram features containing mixtures of two speakers, and tested on
mixtures of a held-out set of speakers, it can infer masking functions that
improve signal quality by around 6dB. We show that the model can generalize to
three-speaker mixtures despite training only on two-speaker mixtures. The
framework can be used without class labels, and therefore has the potential to
be trained on a diverse set of sound types, and to generalize to novel sources.
We hope that future work will lead to segmentation of arbitrary sounds, with
extensions to microphone array methods as well as image segmentation and other
domains.
| John R. Hershey, Zhuo Chen, Jonathan Le Roux, Shinji Watanabe | null | 1508.04306 | null | null |
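A commonly cited form of the deep clustering objective in the entry above (arXiv 1508.04306) is ||V V^T - Y Y^T||_F^2, where V holds the learned embeddings and Y the ideal partition labels. The sketch below (plain NumPy, the network itself omitted) evaluates this loss without forming the N x N affinity matrices, which is what keeps it tractable for spectrogram-sized inputs.

```python
import numpy as np

def deep_clustering_loss(V, Y):
    """||V V^T - Y Y^T||_F^2 evaluated in O(N d^2) without N x N matrices.

    V : (N, d) embedding matrix, one row per time-frequency bin.
    Y : (N, C) one-hot labels marking the dominant source of each bin.
    """
    vtv = V.T @ V            # (d, d)
    vty = V.T @ Y            # (d, C)
    yty = Y.T @ Y            # (C, C)
    return np.sum(vtv ** 2) - 2.0 * np.sum(vty ** 2) + np.sum(yty ** 2)

rng = np.random.default_rng(0)
N, d, C = 1000, 20, 2
Y = np.eye(C)[rng.integers(0, C, N)]               # dominant-source indicators
V = Y @ rng.normal(size=(C, d)) + 0.01 * rng.normal(size=(N, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)      # unit-norm embeddings
print(deep_clustering_loss(V, Y))
```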
Cascade Learning by Optimally Partitioning | cs.CV cs.LG | Cascaded AdaBoost classifier is a well-known efficient object detection
algorithm. The cascade structure has many parameters to be determined. Most of
existing cascade learning algorithms are designed by assigning detection rate
and false positive rate to each stage either dynamically or statically. Their
objective functions are not directly related to minimum computation cost. These
algorithms are not guaranteed to have optimal solution in the sense of
minimizing computation cost. On the assumption that a strong classifier is
given, in this paper we propose an optimal cascade learning algorithm (we call
it iCascade) which iteratively partitions the strong classifiers into two parts
until predefined number of stages are generated. iCascade searches the optimal
number ri of weak classifiers of each stage i by directly minimizing the
computation cost of the cascade. Theorems are provided to guarantee the
existence of the unique optimal solution. Theorems are also given for the
proposed efficient algorithm of searching optimal parameters ri. Once a new
stage is added, the parameter ri for each stage decreases gradually as
iteration proceeds, which we call decreasing phenomenon. Moreover, with the
goal of minimizing computation cost, we develop an effective algorithm for
setting the optimal threshold of each stage classifier. In addition, we prove
in theory why more new weak classifiers are required compared to the last
stage. Experimental results on face detection demonstrate the effectiveness and
efficiency of the proposed algorithm.
| Yanwei Pang, Jiale Cao, and Xuelong Li | 10.1109/TCYB.2016.2601438 | 1508.04326 | null | null |
ESDF: Ensemble Selection using Diversity and Frequency | cs.LG | Recently, ensemble selection for consensus clustering has emerged as a
research problem in Machine Intelligence. Normally, consensus clustering
algorithms take into account the entire ensemble of clusterings, and there is
a tendency to generate a very large ensemble before computing its
consensus. One can avoid considering the entire ensemble and can judiciously
select a few partitions in the ensemble without compromising on the quality of
the consensus. This may result in an efficient consensus computation technique
and may save unnecessary computational overheads. The ensemble selection
problem addresses this issue of consensus clustering. In this paper, we propose
an efficient method of ensemble selection for a large ensemble. We prioritize
the partitions in the ensemble based on diversity and frequency. Our method
selects the top K partitions in order of priority, where K is decided by the
user. We observe that considering jointly the diversity and frequency helps in
identifying few representative partitions whose consensus is qualitatively
better than the consensus of the entire ensemble. Experimental analysis on a
large number of datasets shows our method gives better results than earlier
ensemble selection methods.
| Shouvick Mondal and Arko Banerjee | null | 1508.04333 | null | null |
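The entry above (arXiv 1508.04333) prioritizes ensemble members by diversity and frequency and keeps the top K before computing a consensus. The exact scoring is not given in the abstract, so the sketch below uses one plausible instantiation: frequency counts repeated labellings, diversity is the average NMI distance to the rest of the ensemble, and the two are combined additively.

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score as nmi

def select_ensemble(partitions, k, alpha=0.5):
    """Rank ensemble members by alpha * diversity + (1 - alpha) * frequency.

    partitions : list of 1-D label arrays over the same samples.
    Diversity of a partition = mean (1 - NMI) to every other partition.
    Frequency  = fraction of the ensemble with an identical labelling.
    """
    n = len(partitions)
    keys = [tuple(p) for p in partitions]
    freq = np.array([keys.count(key) / n for key in keys])
    div = np.array([
        np.mean([1.0 - nmi(partitions[i], partitions[j])
                 for j in range(n) if j != i])
        for i in range(n)
    ])
    score = alpha * div + (1.0 - alpha) * freq
    return np.argsort(score)[::-1][:k]             # indices of the top-K partitions

rng = np.random.default_rng(0)
ensemble = [rng.integers(0, 3, 100) for _ in range(20)]
print(select_ensemble(ensemble, k=5))
```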
End-to-End Attention-based Large Vocabulary Speech Recognition | cs.CL cs.AI cs.LG cs.NE | Many of the current state-of-the-art Large Vocabulary Continuous Speech
Recognition Systems (LVCSR) are hybrids of neural networks and Hidden Markov
Models (HMMs). Most of these systems contain separate components that deal with
the acoustic modelling, language modelling and sequence decoding. We
investigate a more direct approach in which the HMM is replaced with a
Recurrent Neural Network (RNN) that performs sequence prediction directly at
the character level. Alignment between the input features and the desired
character sequence is learned automatically by an attention mechanism built
into the RNN. For each predicted character, the attention mechanism scans the
input sequence and chooses relevant frames. We propose two methods to speed up
this operation: limiting the scan to a subset of most promising frames and
pooling over time the information contained in neighboring frames, thereby
reducing source sequence length. Integrating an n-gram language model into the
decoding process yields recognition accuracies similar to other HMM-free
RNN-based approaches.
| Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel,
Yoshua Bengio | null | 1508.04395 | null | null |
Scalable Out-of-Sample Extension of Graph Embeddings Using Deep Neural
Networks | stat.ML cs.LG cs.NE stat.ME | Several popular graph embedding techniques for representation learning and
dimensionality reduction rely on performing computationally expensive
eigendecompositions to derive a nonlinear transformation of the input data
space. The resulting eigenvectors encode the embedding coordinates for the
training samples only, and so the embedding of novel data samples requires
further costly computation. In this paper, we present a method for the
out-of-sample extension of graph embeddings using deep neural networks (DNN) to
parametrically approximate these nonlinear maps. Compared with traditional
nonparametric out-of-sample extension methods, we demonstrate that the DNNs can
generalize with equal or better fidelity and require orders of magnitude less
computation at test time. Moreover, we find that unsupervised pretraining of
the DNNs improves optimization for larger network sizes, thus removing
sensitivity to model selection.
| Aren Jansen, Gregory Sell, Vince Lyzinski | null | 1508.04422 | null | null |
Robust Subspace Clustering via Smoothed Rank Approximation | cs.CV cs.IT cs.LG cs.NA math.IT stat.ML | Matrix rank minimizing subject to affine constraints arises in many
application areas, ranging from signal processing to machine learning. Nuclear
norm is a convex relaxation for this problem which can recover the rank exactly
under some restricted and theoretically interesting conditions. However, for
many real-world applications, nuclear norm approximation to the rank function
can only produce a result far from the optimum. To seek a solution of higher
accuracy than the nuclear norm, in this paper, we propose a rank approximation
based on Logarithm-Determinant. We consider using this rank approximation for
subspace clustering application. Our framework can model different kinds of
errors and noise. Effective optimization strategy is developed with theoretical
guarantee to converge to a stationary point. The proposed method gives
promising results on face clustering and motion segmentation tasks compared to
the state-of-the-art subspace clustering algorithms.
| Zhao Kang, Chong Peng, Qiang Cheng | 10.1109/LSP.2015.2460737 | 1508.04467 | null | null |
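The surrogate discussed in the entry above (arXiv 1508.04467) replaces the nuclear norm with a Logarithm-Determinant function of the matrix, which penalizes large singular values far less and therefore tracks the true rank more closely. The paper's exact formulation is not reproduced here; the short comparison below uses one standard log-det surrogate, logdet(X^T X + delta I) = sum_i log(sigma_i^2 + delta), next to the nuclear norm and the exact rank of a noisy low-rank matrix.

```python
import numpy as np

def rank_surrogates(X, delta=1e-3):
    """Compare the nuclear norm and a log-det surrogate with the exact rank."""
    s = np.linalg.svd(X, compute_uv=False)
    nuclear = s.sum()                              # convex relaxation of rank
    # logdet(X^T X + delta * I) for a matrix with at least as many rows as columns.
    logdet = np.sum(np.log(s ** 2 + delta))
    return nuclear, logdet, int(np.sum(s > 1e-8))

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 80))    # exactly rank-5 matrix
noisy = A + 0.01 * rng.normal(size=A.shape)
for name, M in [("clean", A), ("noisy", noisy)]:
    print(name, rank_surrogates(M))
```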
A Dictionary Learning Approach for Factorial Gaussian Models | cs.LG stat.ML | In this paper, we develop a parameter estimation method for factorially
parametrized models such as Factorial Gaussian Mixture Model and Factorial
Hidden Markov Model. Our contributions are two-fold. First, we show that the
emission matrix of the standard Factorial Model is unidentifiable even if the
true assignment matrix is known. Secondly, we address the issue of
identifiability by making a one component sharing assumption and derive a
parameter learning algorithm for this case. Our approach is based on a
dictionary learning problem of the form $X = O R$, where the goal is to learn
the dictionary $O$ given the data matrix $X$. We argue that due to the specific
structure of the activation matrix $R$ in the shared component factorial
mixture model, and an incoherence assumption on the shared component, it is
possible to extract the columns of the $O$ matrix without the need for
alternating between the estimation of $O$ and $R$.
| Y. Cem Subakan, Johannes Traa, Paris Smaragdis, Noah Stein | null | 1508.04486 | null | null |
Recognizing Extended Spatiotemporal Expressions by Actively Trained
Average Perceptron Ensembles | cs.CL cs.LG | Precise geocoding and time normalization for text requires that location and
time phrases be identified. Many state-of-the-art geoparsers and temporal
parsers suffer from low recall. Categories commonly missed by parsers are:
nouns used in a non-spatiotemporal sense, adjectival and adverbial phrases,
prepositional phrases, and numerical phrases. We collected and annotated a data
set by querying a commercial web search API with such spatiotemporal
expressions as were missed by state-of-the-art parsers. Due to the high cost
of sentence annotation, active learning was used to label training data, and a
new strategy was designed to better select training examples to reduce labeling
cost. For the learning algorithm, we applied an average perceptron trained
Featurized Hidden Markov Model (FHMM). Five FHMM instances were used to create
an ensemble, with the output phrase selected by voting. Our ensemble model was
tested on a range of sequential labeling tasks, and has shown competitive
performance. Our contributions include (1) a new dataset annotated with named
entities and expanded spatiotemporal expressions; (2) a comparison of inference
algorithms for ensemble models showing the superior accuracy of Belief
Propagation over Viterbi Decoding; (3) a new example re-weighting method for
active ensemble learning that 'memorizes' the latest examples trained; (4) a
spatiotemporal parser that jointly recognizes expanded spatiotemporal
expressions as well as named entities.
| Wei Zhang, Yang Yu, Osho Gupta, Judith Gelernter | null | 1508.04525 | null | null |
Mining Brain Networks using Multiple Side Views for Neurological
Disorder Identification | cs.LG cs.CV cs.CY stat.AP stat.ML | Mining discriminative subgraph patterns from graph data has attracted great
interest in recent years. It has a wide variety of applications in disease
diagnosis, neuroimaging, etc. Most research on subgraph mining focuses on the
graph representation alone. However, in many real-world applications, the side
information is available along with the graph data. For example, for
neurological disorder identification, in addition to the brain networks derived
from neuroimaging data, hundreds of clinical, immunologic, serologic and
cognitive measures may also be documented for each subject. These measures
compose multiple side views encoding a tremendous amount of supplemental
information for diagnostic purposes, yet are often ignored. In this paper, we
study the problem of discriminative subgraph selection using multiple side
views and propose a novel solution to find an optimal set of subgraph features
for graph classification by exploring a plurality of side views. We derive a
feature evaluation criterion, named gSide, to estimate the usefulness of
subgraph patterns based upon side views. Then we develop a branch-and-bound
algorithm, called gMSV, to efficiently search for optimal subgraph features by
integrating the subgraph mining process and the procedure of discriminative
feature selection. Empirical studies on graph classification tasks for
neurological disorders using brain networks demonstrate that subgraph patterns
selected by the multi-side-view guided subgraph selection approach can
effectively boost graph classification performances and are relevant to disease
diagnosis.
| Bokai Cao, Xiangnan Kong, Jingyuan Zhang, Philip S. Yu and Ann B.
Ragin | 10.1109/ICDM.2015.50 | 1508.04554 | null | null |
Introduction to Cross-Entropy Clustering The R Package CEC | cs.LG stat.ME stat.ML | The R Package CEC performs clustering based on the cross-entropy clustering
(CEC) method, which was recently developed with the use of information theory.
The main advantage of CEC is that it combines the speed and simplicity of
$k$-means with the ability to use various Gaussian mixture models and reduce
unnecessary clusters. In this work we present a practical tutorial to CEC based
on the R Package CEC. Functions are provided to encompass the whole process of
clustering.
| Jacek Tabor, Przemys{\l}aw Spurek, Konrad Kamieniecki, Marek \'Smieja,
Krzysztof Misztal | null | 1508.04559 | null | null |
Learning to Predict Independent of Span | cs.LG | We consider how to learn multi-step predictions efficiently. Conventional
algorithms wait until observing actual outcomes before performing the
computations to update their predictions. If predictions are made at a high
rate or span over a large amount of time, substantial computation can be
required to store all relevant observations and to update all predictions when
the outcome is finally observed. We show that the exact same predictions can be
learned in a much more computationally congenial way, with uniform per-step
computation that does not depend on the span of the predictions. We apply this
idea to various settings of increasing generality, repeatedly adding desired
properties and each time deriving an equivalent span-independent algorithm for
the conventional algorithm that satisfies these desiderata. Interestingly,
along the way several known algorithmic constructs emerge spontaneously from
our derivations, including dutch eligibility traces, temporal difference
errors, and averaging. This allows us to link these constructs one-to-one to
the corresponding desiderata, unambiguously connecting the `how' to the `why'.
At each step, we make sure that the derived algorithm subsumes the previous
algorithms, thereby retaining their properties. Ultimately we arrive at a
single general temporal-difference algorithm that is applicable to the full
setting of reinforcement learning.
| Hado van Hasselt, Richard S. Sutton | null | 1508.04582 | null | null |
Fault Diagnosis of Helical Gear Box using Large Margin K-Nearest
Neighbors Classifier using Sound Signals | cs.LG | Gear drives are one of the most widely used transmission systems in
machinery. Sound signals of a rotating machine contain dynamic information
about its health condition. Little information is available in the literature
on the suitability of sound signals for fault diagnosis applications.
Most of the literature is based on FFT (Fast Fourier Transform)
analysis, which has its own limitations with non-stationary signals like the
ones from gears. In this paper, an attempt has been made to use sound signals
acquired from gears in good and simulated faulty conditions for the purpose of
fault diagnosis through a machine learning approach. The descriptive
statistical features were extracted from the acquired sound signals and the
predominant features were selected using J48 decision tree technique. The
selected features were then used for classification using Large Margin
K-nearest neighbor approach. The paper also discusses the effect of various
parameters on classification accuracy.
| M. Amarnath, S. Arunav, Hemantha Kumar, V. Sugumaran, and G.S
Raghvendra | null | 1508.04734 | null | null |
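The pipeline in the entry above (arXiv 1508.04734) has three concrete stages: descriptive statistical features from the sound signals, feature selection with a J48 decision tree, and classification with a large-margin K-nearest-neighbours model. The sketch below wires analogous stages together with scikit-learn on synthetic signals; a plain Euclidean KNN stands in for the large-margin variant, and the feature set and importance threshold are assumptions.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def descriptive_features(signal):
    """Simple statistical descriptors of one sound-signal window."""
    return np.array([signal.mean(), signal.std(), signal.min(), signal.max(),
                     np.median(signal), kurtosis(signal), skew(signal),
                     np.sqrt(np.mean(signal ** 2))])         # RMS value

rng = np.random.default_rng(0)
signals = [rng.normal(0, 1 + 0.5 * (i % 4), 2048) for i in range(200)]  # 4 conditions
X = np.vstack([descriptive_features(s) for s in signals])
y = np.arange(200) % 4                                       # good + three fault classes

# Feature selection with a decision tree (J48 stand-in): keep informative features.
tree = DecisionTreeClassifier(random_state=0).fit(X, y)
selected = tree.feature_importances_ > 0.05
knn = KNeighborsClassifier(n_neighbors=5)                    # stand-in for LMNN-based KNN
print(cross_val_score(knn, X[:, selected], y, cv=5).mean())
```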
Time Series Clustering via Community Detection in Networks | stat.ML cs.LG cs.SI | In this paper, we propose a technique for time series clustering using
community detection in complex networks. Firstly, we present a method to
transform a set of time series into a network using different distance
functions, where each time series is represented by a vertex and the most
similar ones are connected. Then, we apply community detection algorithms to
identify groups of strongly connected vertices (called a community) and,
consequently, identify time series clusters. Still in this paper, we make a
comprehensive analysis on the influence of various combinations of time series
distance functions, network generation methods and community detection
techniques on clustering results. Experimental study shows that the proposed
network-based approach achieves better results than various classic or
up-to-date clustering techniques under consideration. Statistical tests confirm
that the proposed method outperforms some classic clustering algorithms, such
as $k$-medoids, diana, median-linkage and centroid-linkage in various data
sets. Interestingly, the proposed method can effectively detect shape patterns
presented in time series due to the topological structure of the underlying
network constructed in the clustering process. At the same time, other
techniques fail to identify such patterns. Moreover, the proposed method is
robust enough to group time series presenting similar pattern but with time
shifts and/or amplitude variations. In summary, the main point of the proposed
method is the transformation of time series from time-space domain to
topological domain. Therefore, we hope that our approach contributes not only
for time series clustering, but also for general time series analysis tasks.
| Leonardo N. Ferreira and Liang Zhao | 10.1016/j.ins.2015.07.046 | 1508.04757 | null | null |
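Following the recipe in the entry above (arXiv 1508.04757), the sketch below turns a set of series into a k-nearest-neighbour similarity graph (plain Euclidean distance here, one of several distances the paper studies) and reads clusters off the communities returned by a modularity-based algorithm. The graph construction and parameter choices are illustrative, not the paper's exact settings.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def cluster_time_series(series, k=5):
    """Build a kNN similarity graph over the series, then cluster by communities."""
    n = len(series)
    D = np.array([[np.linalg.norm(a - b) for b in series] for a in series])
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in np.argsort(D[i])[1:k + 1]:        # k nearest neighbours, skipping self
            G.add_edge(i, int(j), weight=1.0 / (1.0 + D[i, j]))
    labels = np.empty(n, dtype=int)
    for c, members in enumerate(greedy_modularity_communities(G, weight="weight")):
        labels[list(members)] = c
    return labels

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 60)
series = [np.sin(t + rng.normal(0, 0.1)) + rng.normal(0, 0.1, 60) for _ in range(30)]
series += [np.cos(2 * t) + rng.normal(0, 0.1, 60) for _ in range(30)]
print(cluster_time_series(series, k=5))
```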
Dither is Better than Dropout for Regularising Deep Neural Networks | cs.LG | Regularisation of deep neural networks (DNN) during training is critical to
performance. By far the most popular method is known as dropout. Here, cast
through the prism of signal processing theory, we compare and contrast the
regularisation effects of dropout with those of dither. We illustrate some
serious inherent limitations of dropout and demonstrate that dither provides a
more effective regulariser.
| Andrew J.R. Simpson | null | 1508.04826 | null | null |
Multi-criteria Similarity-based Anomaly Detection using Pareto Depth
Analysis | cs.CV cs.LG stat.ML | We consider the problem of identifying patterns in a data set that exhibit
anomalous behavior, often referred to as anomaly detection. Similarity-based
anomaly detection algorithms detect abnormally large amounts of similarity or
dissimilarity, e.g.~as measured by nearest neighbor Euclidean distances between
a test sample and the training samples. In many application domains there may
not exist a single dissimilarity measure that captures all possible anomalous
patterns. In such cases, multiple dissimilarity measures can be defined,
including non-metric measures, and one can test for anomalies by scalarizing
using a non-negative linear combination of them. If the relative importance of
the different dissimilarity measures are not known in advance, as in many
anomaly detection applications, the anomaly detection algorithm may need to be
executed multiple times with different choices of weights in the linear
combination. In this paper, we propose a method for similarity-based anomaly
detection using a novel multi-criteria dissimilarity measure, the Pareto depth.
The proposed Pareto depth analysis (PDA) anomaly detection algorithm uses the
concept of Pareto optimality to detect anomalies under multiple criteria
without having to run an algorithm multiple times with different choices of
weights. The proposed PDA approach is provably better than using linear
combinations of the criteria and shows superior performance on experiments with
synthetic and real data sets.
| Ko-Jen Hsiao, Kevin S. Xu, Jeff Calder and Alfred O. Hero III | 10.1109/TNNLS.2015.2466686 | 1508.04887 | null | null |
Review and Perspective for Distance Based Trajectory Clustering | stat.ML cs.LG stat.AP | In this paper we tackle the issue of clustering trajectories of geolocalized
observations. Using clustering technics based on the choice of a distance
between the observations, we first provide a comprehensive review of the
different distances used in the literature to compare trajectories. Then based
on the limitations of these methods, we introduce a new distance: Symmetrized
Segment-Path Distance (SSPD). We finally compare this new distance to the
others according to their corresponding clustering results obtained using both
hierarchical clustering and affinity propagation methods.
| Philippe Besse (INSA Toulouse, IMT), Brendan Guillouet (IMT),
Jean-Michel Loubes, Royer Fran\c{c}ois | null | 1508.04904 | null | null |
Semi-supervised Learning with Regularized Laplacian | cs.LG | We study a semi-supervised learning method based on the similarity graph and
Regularized Laplacian. We give a convenient optimization formulation of the
Regularized Laplacian method and establish its various properties. In
particular, we show that the kernel of the method can be interpreted in terms of
discrete and continuous time random walks and possesses several
important properties of proximity measures. Both optimization and linear algebra
methods can be used for efficient computation of the classification functions.
We demonstrate on numerical examples that the Regularized Laplacian method is
competitive with respect to other state-of-the-art semi-supervised learning
methods.
| Konstantin Avrachenkov (MAESTRO), Pavel Chebotarev, Alexey Mishenin | null | 1508.04906 | null | null |
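A commonly used form of the Regularized Laplacian kernel referenced in the entry above (arXiv 1508.04906) is (I + alpha L)^{-1}, applied to the label indicator matrix to obtain classification scores. The dense NumPy sketch below uses that form on a kNN similarity graph; the graph construction and the value of alpha are assumptions, and the paper's exact parameterization is not verified here.

```python
import numpy as np

def regularized_laplacian_ssl(X, y, alpha=1.0, k=10, n_classes=2):
    """Semi-supervised scores F = (I + alpha * L)^{-1} Y on a kNN graph.

    y uses -1 for unlabelled points and 0..n_classes-1 for labelled ones.
    """
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(D[i])[1:k + 1]:        # symmetric kNN similarity graph
            W[i, j] = W[j, i] = np.exp(-D[i, j] ** 2)
    L = np.diag(W.sum(axis=1)) - W                 # combinatorial graph Laplacian
    Y = np.zeros((n, n_classes))
    for i, label in enumerate(y):
        if label >= 0:
            Y[i, label] = 1.0
    F = np.linalg.solve(np.eye(n) + alpha * L, Y)  # Regularized Laplacian kernel applied to Y
    return F.argmax(axis=1)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = -np.ones(100, dtype=int)
y[0], y[50] = 0, 1                                 # one labelled point per class
print(regularized_laplacian_ssl(X, y))
```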
Histogram of gradients of Time-Frequency Representations for Audio scene
detection | cs.SD cs.LG | This paper addresses the problem of audio scene classification and
contributes to the state of the art by proposing a novel feature. We build this
feature by considering histograms of gradients (HOG) of the time-frequency
representation of an audio scene. Contrary to classical audio features like
MFCC, we make the hypothesis that histograms of gradients are able to encode
some relevant information in a time-frequency representation: namely, the
local direction of variation (in time and frequency) of the signal spectral
power. In addition, in order to gain more invariance and robustness, histograms
of gradients are locally pooled. We have evaluated the relevance of the novel
feature by comparing its performance with state-of-the-art competitors, on
several datasets, including a novel one that we provide, as part of our
contribution. This dataset, which we make publicly available, involves $19$
classes and contains about $900$ minutes of audio scene recording. We thus
believe that it may be the next standard dataset for evaluating audio scene
classification algorithms. Our comparison results clearly show that our
HOG-based features outperform their competitors.
| Alain Rakotomamonjy (LITIS), Gilles Gasso (LITIS) | null | 1508.04909 | null | null |
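The feature in the entry above (arXiv 1508.04909) is a pooled histogram of oriented gradients computed on a time-frequency image of the scene. The sketch below relies on scipy for the spectrogram and scikit-image for the HOG descriptor, with HOG's own cell/block aggregation standing in for the paper's pooling scheme; all parameters are illustrative.

```python
import numpy as np
from scipy.signal import spectrogram
from skimage.feature import hog

def audio_scene_hog(signal, fs=16000):
    """HOG descriptor of the log-spectrogram of an audio scene."""
    f, t, S = spectrogram(signal, fs=fs, nperseg=512, noverlap=256)
    log_S = np.log1p(S)                                        # compress dynamic range
    log_S = (log_S - log_S.min()) / (np.ptp(log_S) + 1e-12)    # rescale to [0, 1]
    return hog(log_S, orientations=8,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2),
               feature_vector=True)

rng = np.random.default_rng(0)
x = rng.normal(size=16000 * 3)                                 # 3 s of noise as a stand-in scene
print(audio_scene_hog(x).shape)
```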
The ABACOC Algorithm: a Novel Approach for Nonparametric Classification
of Data Streams | stat.ML cs.LG | Stream mining poses unique challenges to machine learning: predictive models
are required to be scalable, incrementally trainable, must remain bounded in
size (even when the data stream is arbitrarily long), and be nonparametric in
order to achieve high accuracy even in complex and dynamic environments.
Moreover, the learning system must be parameterless (traditional tuning
methods are problematic in streaming settings) and avoid requiring prior
knowledge of the number of distinct class labels occurring in the stream. In
this paper, we introduce a new algorithmic approach for nonparametric learning
in data streams. Our approach addresses all above mentioned challenges by
learning a model that covers the input space using simple local classifiers.
The distribution of these classifiers dynamically adapts to the local (unknown)
complexity of the classification problem, thus achieving a good balance between
model complexity and predictive accuracy. We design four variants of our
approach of increasing adaptivity. By means of an extensive empirical
evaluation against standard nonparametric baselines, we show state-of-the-art
results in terms of accuracy versus model size. For the variant that imposes a
strict bound on the model size, we show better performance against all other
methods measured at the same model size value. Our empirical analysis is
complemented by a theoretical performance guarantee which does not rely on any
stochastic assumption on the source generating the stream.
| Rocco De Rosa, Francesco Orabona, Nicol\`o Cesa-Bianchi | null | 1508.04912 | null | null |
Distributed Compressive Sensing: A Deep Learning Approach | cs.LG cs.CV | Various studies that address the compressed sensing problem with Multiple
Measurement Vectors (MMVs) have recently been carried out. These studies assume the
vectors of the different channels to be jointly sparse. In this paper, we relax
this condition. Instead we assume that these sparse vectors depend on each
other but that this dependency is unknown. We capture this dependency by
computing the conditional probability of each entry in each vector being
non-zero, given the "residuals" of all previous vectors. To estimate these
probabilities, we propose the use of the Long Short-Term Memory (LSTM)[1], a
data driven model for sequence modelling that is deep in time. To calculate the
model parameters, we minimize a cross entropy cost function. To reconstruct the
sparse vectors at the decoder, we propose a greedy solver that uses the above
model to estimate the conditional probabilities. By performing extensive
experiments on two real world datasets, we show that the proposed method
significantly outperforms the general MMV solver (the Simultaneous Orthogonal
Matching Pursuit (SOMP)) and a number of the model-based Bayesian methods. The
proposed method does not add any complexity to the general compressive sensing
encoder. The trained model is used just at the decoder. As the proposed method
is a data driven method, it is only applicable when training data is available.
In many applications however, training data is indeed available, e.g. in
recorded images and videos.
| Hamid Palangi, Rabab Ward, Li Deng | 10.1109/TSP.2016.2557301 | 1508.04924 | null | null |
DeepWriterID: An End-to-end Online Text-independent Writer
Identification System | cs.CV cs.LG stat.ML | Owing to the rapid growth of touchscreen mobile terminals and pen-based
interfaces, handwriting-based writer identification systems are attracting
increasing attention for personal authentication, digital forensics, and other
applications. However, most studies on writer identification have not been
satisfying because of the insufficiency of data and difficulty of designing
good features under various handwriting conditions. Hence, we introduce an
end-to-end system, namely DeepWriterID, which employs a deep convolutional neural
network (CNN) to address these problems. A key feature of DeepWriterID is a new
method we propose, called DropSegment. It is designed to achieve data
augmentation and improve the generalized applicability of CNN. For sufficient
feature representation, we further introduce path signature feature maps to
improve performance. Experiments were conducted on the NLPR handwriting
database. Even though we only use pen-position information in the pen-down
state of the given handwriting samples, we achieved new state-of-the-art
identification rates of 95.72% for Chinese text and 98.51% for English text.
| Weixin Yang, Lianwen Jin, Manfei Liu | null | 1508.04945 | null | null |
A Deep Bag-of-Features Model for Music Auto-Tagging | cs.LG cs.SD stat.ML | Feature learning and deep learning have drawn great attention in recent years
as a way of transforming input data into more effective representations using
learning algorithms. Such interest has grown in the area of music information
retrieval (MIR) as well, particularly in music audio classification tasks such
as auto-tagging. In this paper, we present a two-stage learning model to
effectively predict multiple labels from music audio. The first stage learns to
project local spectral patterns of an audio track onto a high-dimensional
sparse space in an unsupervised manner and summarizes the audio track as a
bag-of-features. The second stage successively performs the unsupervised
learning on the bag-of-features in a layer-by-layer manner to initialize a deep
neural network and finally fine-tunes it with the tag labels. Through the
experiment, we rigorously examine training choices and tuning parameters, and
show that the model achieves high performance on Magnatagatune, a popularly
used dataset in music auto-tagging.
| Juhan Nam, Jorge Herrera, Kyogu Lee | null | 1508.04999 | null | null |
AdaDelay: Delay Adaptive Distributed Stochastic Convex Optimization | stat.ML cs.LG math.OC | We study distributed stochastic convex optimization under the delayed
gradient model where the server nodes perform parameter updates, while the
worker nodes compute stochastic gradients. We discuss, analyze, and experiment
with a setup motivated by the behavior of real-world distributed computation
networks, where machines are slow to varying degrees at different times. Therefore,
we allow the parameter updates to be sensitive to the actual delays
experienced, rather than to worst-case bounds on the maximum delay. This
sensitivity leads to larger stepsizes, that can help gain rapid initial
convergence without having to wait too long for slower machines, while
maintaining the same asymptotic complexity. We obtain encouraging improvements
to overall convergence for distributed experiments on real datasets with up to
billions of examples and features.
| Suvrit Sra, Adams Wei Yu, Mu Li, Alexander J. Smola | null | 1508.05003 | null | null |
Lifted Relational Neural Networks | cs.AI cs.LG cs.NE | We propose a method combining relational-logic representations with neural
network learning. A general lifted architecture, possibly reflecting some
background domain knowledge, is described through relational rules which may be
handcrafted or learned. The relational rule-set serves as a template for
unfolding possibly deep neural networks whose structures also reflect the
structures of given training or testing relational examples. Different networks
corresponding to different examples share their weights, which co-evolve during
training by stochastic gradient descent algorithm. The framework allows for
hierarchical relational modeling constructs and learning of latent relational
concepts through shared hidden layers weights corresponding to the rules.
Discovery of notable relational concepts and experiments on 78 relational
learning benchmarks demonstrate favorable performance of the method.
| Gustav Sourek, Vojtech Aschenbrenner, Filip Zelezny, Ondrej Kuzelka | null | 1508.05128 | null | null |
Steps Toward Deep Kernel Methods from Infinite Neural Networks | cs.LG cs.NE | Contemporary deep neural networks exhibit impressive results on practical
problems. These networks generalize well although their inherent capacity may
extend significantly beyond the number of training examples. We analyze this
behavior in the context of deep, infinite neural networks. We show that deep
infinite layers are naturally aligned with Gaussian processes and kernel
methods, and devise stochastic kernels that encode the information of these
networks. We show that stability results apply despite the networks' size,
offering an explanation for their empirical success.
| Tamir Hazan and Tommi Jaakkola | null | 1508.05133 | null | null |
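One standard way to connect infinitely wide layers to kernels is the arc-cosine kernel of an infinite ReLU layer, composed layer by layer. The sketch below illustrates that construction only; it is not necessarily the stochastic kernel devised in the paper:

import numpy as np

def arccos_kernel(kxx, kyy, kxy):
    # Degree-1 arc-cosine kernel computed from the previous layer's Gram entries.
    norm = np.sqrt(kxx * kyy)
    cos_theta = np.clip(kxy / norm, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    return norm * (np.sin(theta) + (np.pi - theta) * cos_theta) / np.pi

def deep_kernel(x, y, depth=3):
    # Compose the kernel once per (infinite) layer.
    kxx, kyy, kxy = x @ x, y @ y, x @ y
    for _ in range(depth):
        kxx, kyy, kxy = (arccos_kernel(kxx, kxx, kxx),
                         arccos_kernel(kyy, kyy, kyy),
                         arccos_kernel(kxx, kyy, kxy))
    return kxy

x, y = np.array([1.0, 2.0, 0.5]), np.array([0.3, -1.0, 2.0])
print(deep_kernel(x, y))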
Adaptive Online Learning | cs.LG stat.ML | We propose a general framework for studying adaptive regret bounds in the
online learning framework, including model selection bounds and data-dependent
bounds. Given a data- or model-dependent bound we ask, "Does there exist some
algorithm achieving this bound?" We show that modifications to recently
introduced sequential complexity measures can be used to answer this question
by providing sufficient conditions under which adaptive rates can be achieved.
In particular, each adaptive rate induces a set of so-called offset complexity
measures, and obtaining small upper bounds on these quantities is sufficient to
demonstrate achievability. A cornerstone of our analysis technique is the use
of one-sided tail inequalities to bound suprema of offset random processes.
Our framework recovers and improves a wide variety of adaptive bounds
including quantile bounds, second-order data-dependent bounds, and small loss
bounds. In addition, we derive a new type of adaptive bound for online linear
optimization based on the spectral norm, as well as a new online PAC-Bayes
theorem that holds for countably infinite sets.
| Dylan J. Foster, Alexander Rakhlin, Karthik Sridharan | null | 1508.05170 | null | null |
Strong Coresets for Hard and Soft Bregman Clustering with Applications
to Exponential Family Mixtures | stat.ML cs.LG | Coresets are efficient representations of data sets such that models trained
on the coreset are provably competitive with models trained on the original
data set. As such, they have been successfully used to scale up clustering
models such as K-Means and Gaussian mixture models to massive data sets.
However, until now, the algorithms and the corresponding theory were usually
specific to each clustering problem.
We propose a single, practical algorithm to construct strong coresets for a
large class of hard and soft clustering problems based on Bregman divergences.
This class includes hard clustering with popular distortion measures such as
the Squared Euclidean distance, the Mahalanobis distance, KL-divergence and
Itakura-Saito distance. The corresponding soft clustering problems are directly
related to popular mixture models due to a dual relationship between Bregman
divergences and Exponential family distributions. Our theoretical results
further imply a randomized polynomial-time approximation scheme for hard
clustering. We demonstrate the practicality of the proposed algorithm in an
empirical evaluation.
| Mario Lucic, Olivier Bachem, Andreas Krause | null | 1508.05243 | null | null |
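A generic sensitivity-based (importance-sampling) coreset construction, shown here for the squared Euclidean case only; the paper's single algorithm covers general Bregman divergences and hard/soft clustering and uses different sensitivity bounds than this rough sketch:

import numpy as np

def coreset(X, centers, m, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    # Squared distances to the closest center of a rough (bicriteria) solution.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(axis=1)
    # Crude sensitivity upper bound: a point's share of the total cost plus a uniform term.
    s = d2 / d2.sum() + 1.0 / len(X)
    p = s / s.sum()
    idx = rng.choice(len(X), size=m, p=p)
    weights = 1.0 / (m * p[idx])            # reweight so the coreset cost is unbiased
    return X[idx], weights

rng = np.random.default_rng(1)
X = rng.standard_normal((10000, 5))
rough_centers = X[rng.choice(len(X), 10)]   # e.g., obtained by k-means++ seeding
C, w = coreset(X, rough_centers, m=500)     # train the clustering model on (C, w) instead of X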
Dynamics of Human Cooperation in Economic Games | physics.soc-ph cs.GT cs.LG math.DS | Human decision behaviour is quite diverse. In many games humans on average do
not achieve maximal payoff and the behaviour of individual players remains
inhomogeneous even after playing many rounds. For instance, in repeated
prisoner's dilemma games humans do not always optimize their mean reward and
frequently exhibit broad distributions of cooperativity. The reasons for these
failures of maximization are not known. Here we show that the dynamics
resulting from the tendency to shift choice probabilities towards previously
rewarding choices in closed loop interaction with the strategy of the opponent
can not only explain systematic deviations from 'rationality', but also
reproduce the diversity of choice behaviours. As a representative example we
investigate the dynamics of choice probabilities in prisoner's dilemma games with
opponents using strategies with different degrees of extortion and generosity.
We find that even a simple model of human learning can account for a
surprisingly wide range of human decision behaviours. It reproduces suppression
of cooperation against extortionists and increasing cooperation when playing
with generous opponents, explains the broad distributions of individual choices
in ensembles of players, and predicts the evolution of individual subjects'
cooperation rates over the course of the games. We conclude that important
aspects of human decision behaviours are rooted in elementary learning
mechanisms realised in the brain.
| Martin Spanknebel and Klaus Pawelzik | null | 1508.05288 | null | null |
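A minimal sketch of the kind of learning dynamics discussed: the player's cooperation probability is nudged toward the action just taken, in proportion to the reward it produced. The payoff values, learning rate, and the tit-for-tat placeholder opponent are illustrative; the paper studies extortionate and generous zero-determinant opponents.

import numpy as np

rng = np.random.default_rng(0)
# Prisoner's dilemma payoffs for the learner: (my_move, opponent_move) -> reward.
R = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def opponent(my_last):
    # Placeholder opponent (tit-for-tat); an extortion strategy would respond probabilistically.
    return my_last or "C"

p_coop, lr, my_last = 0.5, 0.05, None
for t in range(1000):
    my_move = "C" if rng.random() < p_coop else "D"
    reward = R[(my_move, opponent(my_last))]
    # Shift the choice probability toward the action taken, weighted by its reward.
    target = 1.0 if my_move == "C" else 0.0
    p_coop += lr * (reward / 5.0) * (target - p_coop)
    my_last = my_move
print(round(p_coop, 2))    # long-run cooperation rate of this learner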
Hidden Markov Models for Gene Sequence Classification: Classifying the
VSG genes in the Trypanosoma brucei Genome | q-bio.GN cs.CE cs.LG | The article presents an application of Hidden Markov Models (HMMs) for
pattern recognition on genome sequences. We apply HMM for identifying genes
encoding the Variant Surface Glycoprotein (VSG) in the genomes of Trypanosoma
brucei (T. brucei) and other African trypanosomes. These parasitic protozoa are
the causative agents of sleeping sickness and of several diseases in domestic
and wild animals. The parasites have a peculiar strategy to evade the host's
immune system that consists of periodically changing their predominant cellular
surface protein (VSG). The motivation for using pattern recognition methods to
identify these genes, instead of traditional homology-based ones, is that the
levels of sequence identity (amino acid and DNA sequence) among these genes are
often below what is considered reliable for those methods. Among pattern
recognition approaches, HMMs are particularly suitable for tackling this
problem because they handle the determination of gene boundaries more
naturally. We evaluate the performance of the model using different numbers of
states in the Markov model, as well as several performance metrics. The model
is applied to public genomic data. Our empirical results show that the VSG
genes in T. brucei can be reliably identified (high sensitivity and low rate of
false positives) using HMMs.
| Andrea Mesa, Sebasti\'an Basterrech, Gustavo Guerberoff, Fernando
Alvarez-Valin | 10.1007/s10044-015-0508-9 | 1508.05367 | null | null |
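A sketch of the scoring step such a model relies on: the forward algorithm (in the log domain) gives the likelihood of a DNA sequence under an HMM, and sequences scoring sufficiently above a background model would be flagged as candidate VSG genes. The two-state parameters below are placeholders, not the trained model.

import numpy as np

def forward_loglik(obs, log_start, log_trans, log_emit):
    """obs: sequence of symbol indices; the other arguments are log-probability arrays."""
    alpha = log_start + log_emit[:, obs[0]]
    for o in obs[1:]:
        # log-sum-exp over the previous state for each current state
        alpha = np.logaddexp.reduce(alpha[:, None] + log_trans, axis=0) + log_emit[:, o]
    return np.logaddexp.reduce(alpha)

# Tiny 2-state model over the alphabet A,C,G,T (indices 0..3); values are illustrative.
log_start = np.log([0.5, 0.5])
log_trans = np.log([[0.9, 0.1], [0.2, 0.8]])
log_emit  = np.log([[0.4, 0.1, 0.1, 0.4], [0.25, 0.25, 0.25, 0.25]])
seq = np.array([0, 3, 3, 2, 1, 0, 0, 3])    # "ATTGCAAT"
print(forward_loglik(seq, log_start, log_trans, log_emit))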
StochasticNet: Forming Deep Neural Networks via Stochastic Connectivity | cs.CV cs.LG cs.NE | Deep neural networks are a branch of machine learning that has seen a meteoric
rise in popularity due to their powerful ability to represent and model
high-level abstractions in highly complex data. One area in deep neural
networks that is ripe for exploration is neural connectivity formation. A
pivotal study on the brain tissue of rats found that synaptic formation for
specific functional connectivity in neocortical neural microcircuits can be
surprisingly well modeled and predicted as a random formation. Motivated by
this intriguing finding, we introduce the concept of StochasticNet, where deep
neural networks are formed via stochastic connectivity between neurons. As a
result, any type of deep neural networks can be formed as a StochasticNet by
allowing the neuron connectivity to be stochastic. Stochastic synaptic
formations, in a deep neural network architecture, can allow for efficient
utilization of neurons for performing specific tasks. To evaluate the
feasibility of such a deep neural network architecture, we train a
StochasticNet using four different image datasets (CIFAR-10, MNIST, SVHN, and
STL-10). Experimental results show that a StochasticNet, using less than half
the number of neural connections as a conventional deep neural network,
achieves comparable accuracy and reduces overfitting on the CIFAR-10, MNIST,
and SVHN datasets. Interestingly, a StochasticNet with less than half the
number of neural connections achieved higher accuracy on the STL-10 dataset
than a conventional deep neural network (a relative improvement of ~6% in test
error rate compared to the ConvNet). Finally, StochasticNets have faster
operational speeds while achieving better or similar accuracy.
| Mohammad Javad Shafiee, Parthipan Siva, and Alexander Wong | null | 1508.05463 | null | null |
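A sketch of the stochastic-connectivity idea: a dense layer in which each potential synapse is formed only with some probability, with the mask fixed at construction time (unlike dropout, the connectivity does not change during training). Layer sizes and the connection probability below are illustrative.

import numpy as np

class StochasticDense:
    def __init__(self, n_in, n_out, p_connect=0.5, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        self.mask = rng.random((n_in, n_out)) < p_connect      # fixed random connectivity
        self.W = rng.standard_normal((n_in, n_out)) * 0.01
        self.b = np.zeros(n_out)

    def forward(self, x):
        # Only the randomly formed connections contribute to the output.
        return x @ (self.W * self.mask) + self.b

layer = StochasticDense(256, 128, p_connect=0.39)   # well under half the connections of a dense layer
out = layer.forward(np.random.default_rng(1).random((32, 256)))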
Towards Neural Network-based Reasoning | cs.AI cs.CL cs.LG cs.NE | We propose Neural Reasoner, a framework for neural network-based reasoning
over natural language sentences. Given a question, Neural Reasoner can infer
over multiple supporting facts and find an answer to the question in specific
forms. Neural Reasoner has 1) a specific interaction-pooling mechanism,
allowing it to examine multiple facts, and 2) a deep architecture, allowing it
to model the complicated logical relations in reasoning tasks. Assuming no
particular structure exists in the question and facts, Neural Reasoner is able
to accommodate different types of reasoning and different forms of language
expressions. Despite the model complexity, Neural Reasoner can still be trained
effectively in an end-to-end manner. Our empirical studies show that Neural
Reasoner can outperform existing neural reasoning systems with remarkable
margins on two difficult artificial tasks (Positional Reasoning and Path
Finding) proposed in [8]. For example, it improves the accuracy on Path
Finding (10K) from 33.4% [6] to over 98%.
| Baolin Peng, Zhengdong Lu, Hang Li and Kam-Fai Wong | null | 1508.05508 | null | null |
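A rough sketch of interaction pooling: the question representation interacts with every supporting fact through a shared transformation, and the results are pooled into an updated question representation; stacking several such layers gives the deep architecture. The dimensions, the concatenation-based interaction, and mean pooling are assumptions, not the exact model.

import numpy as np

rng = np.random.default_rng(0)
d = 64
W = rng.standard_normal((2 * d, d)) * 0.1     # shared interaction weights

def interact_and_pool(question, facts):
    # Pair the question with each fact, transform each pair, then pool over facts.
    repeated = np.repeat(question[None, :], len(facts), axis=0)
    pairs = np.concatenate([repeated, facts], axis=1)   # (n_facts, 2d)
    interactions = np.tanh(pairs @ W)                    # (n_facts, d)
    return interactions.mean(axis=0)                     # pooled, updated question

q = rng.standard_normal(d)
facts = rng.standard_normal((5, d))
for _ in range(3):                                       # several reasoning layers
    q = interact_and_pool(q, facts)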
Gaussian Mixture Reduction Using Reverse Kullback-Leibler Divergence | stat.ML cs.CV cs.LG cs.RO cs.SY | We propose a greedy mixture reduction algorithm which is capable of pruning
mixture components as well as merging them based on the Kullback-Leibler
divergence (KLD). The algorithm is distinct from Runnalls' well-known
KLD-based method since it is not restricted to merging operations. The
capability of pruning (in addition to merging) gives the algorithm the ability
to preserve the peaks of the original mixture during the reduction. Analytical
approximations are derived to circumvent the computational intractability of
the KLD which results in a computationally efficient method. The proposed
algorithm is compared with Runnalls' and Williams' methods in two numerical
examples, using both simulated and real world data. The results indicate that
the performance and computational complexity of the proposed approach make it
an efficient alternative to existing mixture reduction methods.
| Tohid Ardeshiri, Umut Orguner, Emre \"Ozkan | null | 1508.05514 | null | null |
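For reference, the moment-matched merge of two weighted Gaussian components that such reduction algorithms apply once a pair has been selected; the paper's contribution, the approximate (reverse) KLD criterion that decides whether to merge or prune, is not reproduced here.

import numpy as np

def merge(w1, m1, P1, w2, m2, P2):
    # Merge two weighted Gaussians by matching the first two moments of the pair.
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    d1, d2 = m1 - m, m2 - m
    P = (w1 * (P1 + np.outer(d1, d1)) + w2 * (P2 + np.outer(d2, d2))) / w
    return w, m, P

w, m, P = merge(0.6, np.array([0.0, 0.0]), np.eye(2),
                0.4, np.array([2.0, 1.0]), 0.5 * np.eye(2))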