title (string, 5-246 chars) | categories (string, 5-94 chars, nullable) | abstract (string, 54-5.03k chars) | authors (string, 0-6.72k chars) | doi (string, 12-54 chars, nullable) | id (string, 6-10 chars, nullable) | year (float64, ~2.02k, nullable) | venue (string, 13 classes) |
---|---|---|---|---|---|---|---|
MoDeep: A Deep Learning Framework Using Motion Features for Human Pose
Estimation | cs.CV cs.LG cs.NE | In this work, we propose a novel and efficient method for articulated human
pose estimation in videos using a convolutional network architecture, which
incorporates both color and motion features. We propose a new human body pose
dataset, FLIC-motion, that extends the FLIC dataset with additional motion
features. We apply our architecture to this dataset and report significantly
better performance than current state-of-the-art pose detection systems.
| Arjun Jain, Jonathan Tompson, Yann LeCun and Christoph Bregler | null | 1409.7963 | null | null |
The Utility of Text: The Case of Amicus Briefs and the Supreme Court | cs.CL cs.AI cs.GT cs.LG | We explore the idea that authoring a piece of text is an act of maximizing
one's expected utility. To make this idea concrete, we consider the societally
important decisions of the Supreme Court of the United States. Extensive past
work in quantitative political science provides a framework for empirically
modeling the decisions of justices and how they relate to text. We incorporate
into such a model texts authored by amici curiae ("friends of the court"
separate from the litigants) who seek to weigh in on the decision, then
explicitly model their goals in a random utility model. We demonstrate the
benefits of this approach in improved vote prediction and the ability to
perform counterfactual analysis.
| Yanchuan Sim and Bryan Routledge and Noah A. Smith | null | 1409.7985 | null | null |
Adaptive Low-Complexity Sequential Inference for Dirichlet Process
Mixture Models | stat.ML cs.LG stat.ME | We develop a sequential low-complexity inference procedure for Dirichlet
process mixtures of Gaussians for online clustering and parameter estimation
when the number of clusters is unknown a priori. We present an easily
computable, closed form parametric expression for the conditional likelihood,
in which hyperparameters are recursively updated as a function of the streaming
data assuming conjugate priors. Motivated by large-sample asymptotics, we
propose a novel adaptive low-complexity design for the Dirichlet process
concentration parameter and show that the number of classes grows at most at a
logarithmic rate. We further prove that in the large-sample limit, the
conditional likelihood and data predictive distribution become asymptotically
Gaussian. We demonstrate through experiments on synthetic and real data sets
that our approach is superior to other online state-of-the-art methods.
| Theodoros Tsiligkaridis, Keith W. Forsythe | null | 1409.8185 | null | null |
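As a rough illustration of the kind of streaming procedure described in the entry above, the sketch below performs sequential MAP cluster assignment in a Dirichlet process mixture of one-dimensional Gaussians with known observation variance and a conjugate Normal prior on cluster means. It is a generic sketch under these assumptions; the paper's closed-form recursions and adaptive concentration-parameter design are not reproduced, and all parameter values are illustrative.

```python
# Generic sequential MAP assignment in a DP mixture of 1-D Gaussians
# (known observation variance, conjugate Normal prior on cluster means).
import numpy as np

rng = np.random.default_rng(0)
sigma2, tau2, mu0, alpha = 0.25, 4.0, 0.0, 1.0   # obs. var, prior var/mean, concentration
data = np.concatenate([rng.normal(-3, 0.5, 100), rng.normal(2, 0.5, 100)])
rng.shuffle(data)

counts, sums = [], []                             # per-cluster sufficient statistics

def predictive(x, n, s):
    # posterior predictive density of a cluster with n points summing to s
    post_var = 1.0 / (1.0 / tau2 + n / sigma2)
    post_mean = post_var * (mu0 / tau2 + s / sigma2)
    var = post_var + sigma2
    return np.exp(-0.5 * (x - post_mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

for x in data:
    scores = [n * predictive(x, n, s) for n, s in zip(counts, sums)]
    scores.append(alpha * predictive(x, 0, 0.0))  # option of opening a new cluster
    k = int(np.argmax(scores))                    # MAP assignment (no resampling)
    if k == len(counts):
        counts.append(0)
        sums.append(0.0)
    counts[k] += 1
    sums[k] += x

print("number of clusters found:", len(counts))
```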
A Neural Networks Committee for the Contextual Bandit Problem | cs.NE cs.LG | This paper presents a new contextual bandit algorithm, NeuralBandit, which
does not require any stationarity assumption on contexts and rewards. Several
neural networks are trained to model the value of rewards given the
context. Two variants, based on a multi-expert approach, are proposed to choose
the parameters of the multi-layer perceptrons online. The proposed algorithms are
successfully tested on a large dataset with and without stationarity of
rewards.
| Robin Allesiardo, Raphael Feraud and Djallel Bouneffouf | null | 1409.8191 | null | null |
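A minimal sketch of the general idea behind a neural contextual bandit, assuming one small regressor per arm and an epsilon-greedy selection rule; the paper's multi-expert scheme for choosing the perceptron parameters online is not reproduced, and the reward model below is a toy assumption.

```python
# One MLP regressor per arm predicts the expected reward of a context;
# an epsilon-greedy rule (assumption) trades off exploration and exploitation.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_arms, dim, epsilon = 3, 5, 0.1
models = [MLPRegressor(hidden_layer_sizes=(16,)) for _ in range(n_arms)]
seen = [False] * n_arms          # each model needs one partial_fit before predict

def choose_arm(context):
    if rng.random() < epsilon or not all(seen):
        return int(rng.integers(n_arms))
    preds = [m.predict(context.reshape(1, -1))[0] for m in models]
    return int(np.argmax(preds))

def update(arm, context, reward):
    models[arm].partial_fit(context.reshape(1, -1), [reward])
    seen[arm] = True

for t in range(1000):                               # simulated interaction loop
    x = rng.normal(size=dim)                        # observed context
    a = choose_arm(x)
    r = float(x[a] + 0.1 * rng.normal())            # toy reward model (assumption)
    update(a, x, r)
```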
Short-Term Predictability of Photovoltaic Production over Italy | cs.LG stat.AP | Photovoltaic (PV) power production has increased drastically in Europe in
recent years. About 6% of the electricity in Italy comes from PV, and an
accurate and reliable forecast of production is needed for efficient management
of the power grid. Starting from a dataset of electricity production
of 65 Italian solar plants for the years 2011-2012 we investigate the
possibility to forecast daily production from one to ten days of lead time
without using on-site measurements. Our study is divided into two parts: an
assessment of the predictability of meteorological variables using weather
forecasts and an analysis of the application of data-driven modelling in
predicting solar power production. We calibrate an SVM model using available
observations and then we force the same model with the predicted variables from
weather forecasts with a lead time from one to ten days. As expected, solar
power production is strongly influenced by cloudiness and clear-sky conditions: we
observe that while during summer we obtain a general error under 10%
(slightly lower in southern Italy), during winter the error is well above
20%.
| Matteo De Felice, Marcello Petitta, Paolo M. Ruti | null | 1409.8202 | null | null |
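A minimal sketch of the calibrate-then-force workflow described above: an SVM regressor is fitted on observed meteorological variables and then driven with (noisier) forecast variables at increasing lead times. The data and variable names are synthetic placeholders, not the paper's dataset.

```python
# Calibrate an SVR on observations, then force it with forecast variables.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_days = 365
observed = rng.normal(size=(n_days, 3))            # e.g. irradiance, temperature, cloud cover
production = observed @ np.array([2.0, 0.5, -1.0]) + 0.1 * rng.normal(size=n_days)

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
model.fit(observed, production)                     # calibration on observations

for lead in range(1, 11):                           # one- to ten-day lead times
    forecast = observed + lead * 0.05 * rng.normal(size=observed.shape)  # degraded forecast
    pred = model.predict(forecast)
    mape = np.mean(np.abs(pred - production) / np.abs(production).clip(min=1e-6)) * 100
    print(f"lead {lead:2d} days: error {mape:5.1f}%")
```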
Efficient multivariate sequence classification | cs.LG | Kernel-based approaches for sequence classification have been successfully
applied to a variety of domains, including text categorization, image
classification, speech analysis, biological sequence analysis, time series and
music classification, where they show some of the most accurate results.
Typical kernel functions for sequences in these domains (e.g., bag-of-words,
mismatch, or subsequence kernels) are restricted to {\em discrete univariate}
(i.e. one-dimensional) string data, such as sequences of words in text
analysis, codeword sequences in image analysis, or nucleotide or amino acid
sequences in DNA and protein sequence analysis. However, original sequence
data are often of real-valued multivariate nature, i.e. are not univariate and
discrete as required by typical $k$-mer based sequence kernel functions.
In this work, we consider the problem of {\em multivariate} sequence
classification such as classification of multivariate music sequences, or
multidimensional protein sequence representations. To this end, we extend {\em
univariate} kernel functions typically used in sequence analysis and propose an
efficient {\em multivariate} similarity kernel method (MVDFQ-SK) based on (1) a
direct feature quantization (DFQ) of each sequence dimension in the original
{\em real-valued} multivariate sequences and (2) applying novel multivariate
discrete kernel measures on these multivariate discrete DFQ sequence
representations to more accurately capture similarity relationships among
sequences and improve classification performance.
Experiments using the proposed MVDFQ-SK kernel method show excellent
classification performance on three challenging music classification tasks as
well as protein sequence classification with significant 25-40% improvements
over univariate kernel methods and existing state-of-the-art sequence
classification methods.
| Pavel P. Kuksa | null | 1409.8211 | null | null |
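A minimal sketch of the two-step idea in the entry above, assuming equal-frequency binning for the direct feature quantization and a plain sum of per-dimension spectrum (k-mer) kernels as the multivariate discrete similarity; the actual MVDFQ-SK measure is not reproduced here.

```python
# (1) quantize each real-valued dimension into a small discrete alphabet,
# (2) sum per-dimension k-mer (spectrum) kernels over the quantized sequences.
import numpy as np
from collections import Counter

def quantize(seq, n_bins=4):
    # map each real value to a bin index using equal-frequency bin edges
    edges = np.quantile(seq, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(seq, edges)

def kmer_counts(symbols, k=3):
    return Counter(tuple(symbols[i:i + k]) for i in range(len(symbols) - k + 1))

def mv_kernel(X, Y, k=3, n_bins=4):
    # X, Y: (length, n_dims) real-valued multivariate sequences
    total = 0.0
    for d in range(X.shape[1]):
        cx = kmer_counts(quantize(X[:, d], n_bins), k)
        cy = kmer_counts(quantize(Y[:, d], n_bins), k)
        total += sum(cx[m] * cy[m] for m in cx)     # spectrum kernel on dimension d
    return total

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 12))   # e.g. a 12-dimensional music feature sequence
B = rng.normal(size=(180, 12))
print("K(A, B) =", mv_kernel(A, B))
```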
A Bayesian Tensor Factorization Model via Variational Inference for Link
Prediction | cs.LG cs.NA stat.ML | Probabilistic approaches for tensor factorization aim to extract meaningful
structure from incomplete data by postulating low rank constraints. Recently,
variational Bayesian (VB) inference techniques have successfully been applied
to large scale models. This paper presents full Bayesian inference via VB on
both single and coupled tensor factorization models. Our method can be run even
for very large models and is easily implemented. It exhibits better prediction
performance than existing approaches based on maximum likelihood on several
real-world datasets for the missing link prediction problem.
| Beyza Ermis, A. Taylan Cemgil | null | 1409.8276 | null | null |
Arabic Spelling Correction using Supervised Learning | cs.LG cs.CL | In this work, we address the problem of spelling correction in the Arabic
language utilizing the new corpus provided by QALB (Qatar Arabic Language Bank)
project which is an annotated corpus of sentences with errors and their
corrections. The corpus contains edit, add before, split, merge, add after,
move and other error types. We are concerned with the first four error types as
they contribute more than 90% of the spelling errors in the corpus. The
proposed system has several models, each addressing one error type on its own, which
are then integrated to provide an efficient and robust system that
achieves an overall recall of 0.59, precision of 0.58 and F1 score of 0.58
including all the error types on the development set. Our system participated
in the QALB 2014 shared task "Automatic Arabic Error Correction" and achieved
an F1 score of 0.6, earning the sixth place out of nine participants.
| Youssef Hassan, Mohamed Aly and Amir Atiya | null | 1409.8309 | null | null |
Bayesian and regularization approaches to multivariable linear system
identification: the role of rank penalties | cs.SY cs.LG stat.ML | Recent developments in linear system identification have proposed the use of
non-parametric methods, relying on regularization strategies, to handle the
so-called bias/variance trade-off. This paper introduces an impulse response
estimator which relies on an $\ell_2$-type regularization including a
rank-penalty derived using the log-det heuristic as a smooth approximation to
the rank function. This makes it possible to account for different properties of the
estimated impulse response (e.g. smoothness and stability) while also
penalizing high-complexity models. It also makes it possible to account for and enforce
coupling between different input-output channels in MIMO systems. According to
the Bayesian paradigm, the parameters defining the relative weight of the two
regularization terms as well as the structure of the rank penalty are estimated
optimizing the marginal likelihood. Once these hyperparameters have been
estimated, the impulse response estimate is available in closed form.
Experiments show that the proposed method is superior to the estimator relying
on the "classic" $\ell_2$-regularization alone as well as those based in atomic
and nuclear norm.
| Giulia Prando and Alessandro Chiuso and Gianluigi Pillonetto | null | 1409.8327 | null | null |
Nonstochastic Multi-Armed Bandits with Graph-Structured Feedback | cs.LG stat.ML | We present and study a partial-information model of online learning, where a
decision maker repeatedly chooses from a finite set of actions, and observes
some subset of the associated losses. This naturally models several situations
where the losses of different actions are related, and knowing the loss of one
action provides information on the loss of other actions. Moreover, it
generalizes and interpolates between the well studied full-information setting
(where all losses are revealed) and the bandit setting (where only the loss of
the action chosen by the player is revealed). We provide several algorithms
addressing different variants of our setting, and provide tight regret bounds
depending on combinatorial properties of the information feedback structure.
| Noga Alon, Nicol\`o Cesa-Bianchi, Claudio Gentile, Shie Mannor, Yishay
Mansour and Ohad Shamir | null | 1409.8428 | null | null |
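A minimal sketch of exponential-weights learning with graph-structured feedback, where each revealed loss is importance-weighted by its probability of being observed; the feedback graph, loss model and learning rate below are illustrative assumptions, not the algorithms or tuning analysed in the paper.

```python
# Exp3-style learning: playing an arm reveals the losses of its out-neighbours
# in a feedback graph, and each revealed loss is importance-weighted by the
# probability of observing it under the current play distribution.
import numpy as np

rng = np.random.default_rng(0)
K, T, eta = 5, 2000, 0.05
adj = np.eye(K, dtype=int)          # adj[i][j] = 1 if playing i reveals the loss of j
adj[0, 1] = adj[1, 2] = adj[2, 0] = 1   # assumed graph with self-loops

weights = np.ones(K)
for t in range(T):
    p = weights / weights.sum()
    arm = rng.choice(K, p=p)
    losses = rng.uniform(size=K) * (np.arange(K) + 1) / K   # toy loss model (assumption)
    observed = adj[arm] == 1                                  # out-neighbourhood of the played arm
    obs_prob = adj.T @ p                                      # P(loss of each arm is observed)
    loss_est = np.where(observed, losses / np.maximum(obs_prob, 1e-12), 0.0)
    weights *= np.exp(-eta * loss_est)
    weights /= weights.sum()                                  # renormalize for numerical stability
```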
An agent-driven semantical identifier using radial basis neural networks
and reinforcement learning | cs.NE cs.AI cs.CL cs.LG cs.MA | Due to the huge availability of documents in digital form, and the possibility
of deception inherent in the nature of digital documents and the way they are
spread, the authorship attribution problem has steadily grown in relevance.
Nowadays, authorship attribution, for both information retrieval and
analysis, has gained great importance in the context of security, trust and
copyright preservation. This work proposes an innovative multi-agent driven
machine learning technique that has been developed for authorship attribution.
By means of a preprocessing for word-grouping and time-period related analysis
of the common lexicon, we determine a bias reference level for the recurrence
frequency of the words within analysed texts, and then train a Radial Basis
Neural Network (RBPNN)-based classifier to identify the correct author. The
main advantage of the proposed approach lies in the generality of the semantic
analysis, which can be applied to different contexts and lexical domains,
without requiring any modification. Moreover, the proposed system is able to
incorporate an external input, meant to tune the classifier, and then
self-adjust by means of continuous learning reinforcement.
| Christian Napoli, Giuseppe Pappalardo, Emiliano Tramontana | 10.13140/2.1.1446.7843 | 1409.8484 | null | null |
Non-myopic learning in repeated stochastic games | cs.GT cs.AI cs.LG | In repeated stochastic games (RSGs), an agent must quickly adapt to the
behavior of previously unknown associates, who may themselves be learning. This
machine-learning problem is particularly challenging due, in part, to the
presence of multiple (even infinite) equilibria and inherently large strategy
spaces. In this paper, we introduce a method to reduce the strategy space of
two-player general-sum RSGs to a handful of expert strategies. This process,
called Mega, effectually reduces an RSG to a bandit problem. We show that the
resulting strategy space preserves several important properties of the original
RSG, thus enabling a learner to produce robust strategies within a reasonably
small number of interactions. To better establish strengths and weaknesses of
this approach, we empirically evaluate the resulting learning system against
other algorithms in three different RSGs.
| Jacob W. Crandall | null | 1409.8498 | null | null |
A Deep Learning Approach to Data-driven Parameterizations for
Statistical Parametric Speech Synthesis | cs.CL cs.LG cs.NE | Nearly all Statistical Parametric Speech Synthesizers today use Mel Cepstral
coefficients as the vocal tract parameterization of the speech signal. Mel
Cepstral coefficients were never intended to work in a parametric speech
synthesis framework, but as yet, there has been little success in creating a
better parameterization that is more suited to synthesis. In this paper, we use
deep learning algorithms to investigate a data-driven parameterization
technique that is designed for the specific requirements of synthesis. We
create an invertible, low-dimensional, noise-robust encoding of the Mel Log
Spectrum by training a tapered Stacked Denoising Autoencoder (SDA). This SDA is
then unwrapped and used as the initialization for a Multi-Layer Perceptron
(MLP). The MLP is fine-tuned by training it to reconstruct the input at the
output layer. This MLP is then split down the middle to form encoding and
decoding networks. These networks produce a parameterization of the Mel Log
Spectrum that is intended to better fulfill the requirements of synthesis.
Results are reported for experiments conducted using this resulting
parameterization with the ClusterGen speech synthesizer.
| Prasanna Kumar Muthukumar and Alan W. Black | null | 1409.8558 | null | null |
Freshness-Aware Thompson Sampling | cs.IR cs.LG | To follow the dynamics of the user's content, researchers have recently
started to model interactions between users and the Context-Aware Recommender
Systems (CARS) as a bandit problem where the system needs to deal with
the exploration/exploitation dilemma. In this sense, we propose to study the
freshness of the user's content in CARS through the bandit problem. We
introduce in this paper an algorithm named Freshness-Aware Thompson Sampling
(FA-TS) that manages the recommendation of fresh documents according to the
risk level of the user's situation. The intensive evaluation and detailed
analysis of the experimental results reveal several important findings on
the exploration/exploitation (exr/exp) behaviour.
| Djallel Bouneffouf | null | 1409.8572 | null | null |
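A minimal sketch of Thompson Sampling for document recommendation with a freshness tweak, assuming Beta-Bernoulli posteriors per document and an exponential age discount on the sampled scores; the discount is an illustrative assumption, not the FA-TS rule from the paper.

```python
# Beta-Bernoulli Thompson Sampling with an assumed freshness weight.
import numpy as np

rng = np.random.default_rng(0)
n_docs, T = 10, 5000
alpha = np.ones(n_docs)                          # clicks + 1
beta = np.ones(n_docs)                           # skips + 1
true_ctr = rng.uniform(0.02, 0.2, size=n_docs)   # toy ground truth (assumption)
age = rng.integers(0, 30, size=n_docs)           # days since publication (toy data)

for t in range(T):
    theta = rng.beta(alpha, beta)                # posterior samples
    freshness = np.exp(-0.05 * age)              # assumed freshness discount
    doc = int(np.argmax(theta * freshness))
    click = rng.random() < true_ctr[doc]
    alpha[doc] += click
    beta[doc] += 1 - click
```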
Data Imputation through the Identification of Local Anomalies | cs.LG stat.ML | We introduce a comprehensive statistical framework in a model-free
setting for a complete treatment of localized data corruptions due to severe
noise sources, e.g., an occluder in the case of a visual recording. Within this
framework, we propose i) a novel algorithm to efficiently separate, i.e.,
detect and localize, possible corruptions from a given suspicious data instance
and ii) a Maximum A Posteriori (MAP) estimator to impute the corrupted data. As
a generalization to Euclidean distance, we also propose a novel distance
measure, which is based on the ranked deviations among the data attributes and
empirically shown to be superior in separating the corruptions. Our algorithm
first splits the suspicious instance into parts through a binary partitioning
tree in the space of data attributes and iteratively tests those parts to
detect local anomalies using the nominal statistics extracted from an
uncorrupted (clean) reference data set. Once each part is labeled as anomalous
vs normal, the corresponding binary patterns over this tree that characterize
corruptions are identified and the affected attributes are imputed. Under a
certain conditional independency structure assumed for the binary patterns, we
analytically show that the false alarm rate of the introduced algorithm in
detecting the corruptions is independent of the data and can be directly set
without any parameter tuning. The proposed framework is tested over several
well-known machine learning data sets with synthetically generated corruptions;
and experimentally shown to produce remarkable improvements for
classification purposes, with strong corruption separation capabilities. Our
experiments also indicate that the proposed algorithms outperform the typical
approaches and are robust to varying training phase conditions.
| Huseyin Ozkan, Ozgun S. Pelvan and Suleyman S. Kozat | null | 1409.8576 | null | null |
Distributed Detection : Finite-time Analysis and Impact of Network
Topology | math.OC cs.LG cs.SI stat.ML | This paper addresses the problem of distributed detection in multi-agent
networks. Agents receive private signals about an unknown state of the world.
The underlying state is globally identifiable, yet informative signals may be
dispersed throughout the network. Using an optimization-based framework, we
develop an iterative local strategy for updating individual beliefs. In
contrast to the existing literature which focuses on asymptotic learning, we
provide a finite-time analysis. Furthermore, we introduce a Kullback-Leibler
cost to compare the efficiency of the algorithm to its centralized counterpart.
Our bounds on the cost are expressed in terms of network size, spectral gap,
centrality of each agent and relative entropy of agents' signal structures. A
key observation is that distributing more informative signals to central agents
results in a faster learning rate. Furthermore, optimizing the weights, we can
speed up learning by improving the spectral gap. We also quantify the effect of
link failures on learning speed in symmetric networks. We finally provide
numerical simulations which verify our theoretical results.
| Shahin Shahrampour, Alexander Rakhlin, Ali Jadbabaie | null | 1409.8606 | null | null |
Riemannian Multi-Manifold Modeling | stat.ML cs.CV cs.LG | This paper advocates a novel framework for segmenting a dataset in a
Riemannian manifold $M$ into clusters lying around low-dimensional submanifolds
of $M$. Important examples of $M$, for which the proposed clustering algorithm
is computationally efficient, are the sphere, the set of positive definite
matrices, and the Grassmannian. The clustering problem with these examples of
$M$ is already useful for numerous application domains such as action
identification in video sequences, dynamic texture clustering, brain fiber
segmentation in medical imaging, and clustering of deformed images. The
proposed clustering algorithm constructs a data-affinity matrix by thoroughly
exploiting the intrinsic geometry and then applies spectral clustering. The
intrinsic local geometry is encoded by local sparse coding and more importantly
by directional information of local tangent spaces and geodesics. Theoretical
guarantees are established for a simplified variant of the algorithm even when
the clusters intersect. To avoid complication, these guarantees assume that the
underlying submanifolds are geodesic. Extensive validation on synthetic and
real data demonstrates the resiliency of the proposed method against deviations
from the theoretical model as well as its superior performance over
state-of-the-art techniques.
| Xu Wang, Konstantinos Slavakis, Gilad Lerman | null | 1410.0095 | null | null |
Deep Tempering | cs.LG stat.ML | Restricted Boltzmann Machines (RBMs) are one of the fundamental building
blocks of deep learning. Approximate maximum likelihood training of RBMs
typically necessitates sampling from these models. In many training scenarios,
computationally efficient Gibbs sampling procedures are crippled by poor
mixing. In this work we propose a novel method of sampling from Boltzmann
machines that demonstrates a computationally efficient way to promote mixing.
Our approach leverages an under-appreciated property of deep generative models
such as the Deep Belief Network (DBN), where Gibbs sampling from deeper levels
of the latent variable hierarchy results in dramatically increased ergodicity.
Our approach is thus to train an auxiliary latent hierarchical model, based on
the DBN. When used in conjunction with parallel-tempering, the method is
asymptotically guaranteed to simulate samples from the target RBM. Experimental
results confirm the effectiveness of this sampling strategy in the context of
RBM training.
| Guillaume Desjardins, Heng Luo, Aaron Courville and Yoshua Bengio | null | 1410.0123 | null | null |
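For context, the sketch below shows plain block Gibbs sampling in a binary RBM, the primitive whose poor mixing motivates the approach above; the weights are random placeholders and the deep-tempering/parallel-tempering machinery itself is not reproduced.

```python
# Block Gibbs sampling in a binary RBM: alternate between sampling hidden units
# given visible units and visible units given hidden units.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 20, 10
W = 0.1 * rng.normal(size=(n_visible, n_hidden))   # placeholder parameters
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    h = (rng.random(n_hidden) < sigmoid(v @ W + b_h)).astype(float)
    v_new = (rng.random(n_visible) < sigmoid(h @ W.T + b_v)).astype(float)
    return v_new, h

v = (rng.random(n_visible) < 0.5).astype(float)
for step in range(1000):          # a single, potentially slowly-mixing Gibbs chain
    v, h = gibbs_step(v)
```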
A Multi-World Approach to Question Answering about Real-World Scenes
based on Uncertain Input | cs.AI cs.CL cs.CV cs.LG | We propose a method for automatically answering questions about images by
bringing together recent advances from natural language processing and computer
vision. We combine discrete reasoning with uncertain predictions by a
multi-world approach that represents uncertainty about the perceived world in a
Bayesian framework. Our approach can handle human questions of high complexity
about realistic scenes and replies with a range of answers such as counts, object
classes, instances and lists of them. The system is directly trained from
question-answer pairs. We establish a first benchmark for this task that can be
seen as a modern attempt at a visual Turing test.
| Mateusz Malinowski and Mario Fritz | null | 1410.0210 | null | null |
ASKIT: Approximate Skeletonization Kernel-Independent Treecode in High
Dimensions | cs.DS cs.LG | We present a fast algorithm for kernel summation problems in high dimensions.
These problems appear in computational physics, numerical approximation,
non-parametric statistics, and machine learning. In our context, the sums
depend on a kernel function that is a pair potential defined on a dataset of
points in a high-dimensional Euclidean space. A direct evaluation of the sum
scales quadratically with the number of points. Fast kernel summation methods
can reduce this cost to linear complexity, but the constants involved do not
scale well with the dimensionality of the dataset.
The main algorithmic components of fast kernel summation algorithms are the
separation of the kernel sum between near and far field (which is the basis for
pruning) and the efficient and accurate approximation of the far field.
We introduce novel methods for pruning and approximating the far field. Our
far field approximation requires only kernel evaluations and does not use
analytic expansions. Pruning is not done using bounding boxes but rather
combinatorially using a sparsified nearest-neighbor graph of the input. The
time complexity of our algorithm depends linearly on the ambient dimension. The
error in the algorithm depends on the low-rank approximability of the far
field, which in turn depends on the kernel function and on the intrinsic
dimensionality of the distribution of the points. The error of the far field
approximation does not depend on the ambient dimension.
We present the new algorithm along with experimental results that demonstrate
its performance. We report results for Gaussian kernel sums for 100 million
points in 64 dimensions, for one million points in 1000 dimensions, and for
problems in which the Gaussian kernel has a variable bandwidth. To the best of
our knowledge, all of these experiments are impossible or prohibitively
expensive with existing fast kernel summation methods.
| William B. March, Bo Xiao, George Biros | null | 1410.0260 | null | null |
$\ell_1$-K-SVD: A Robust Dictionary Learning Algorithm With Simultaneous
Update | cs.CV cs.LG | We develop a dictionary learning algorithm by minimizing the $\ell_1$
distortion metric on the data term, which is known to be robust for
non-Gaussian noise contamination. The proposed algorithm exploits the idea of
iterative minimization of weighted $\ell_2$ error. We refer to this algorithm
as $\ell_1$-K-SVD, where the dictionary atoms and the corresponding sparse
coefficients are simultaneously updated to minimize the $\ell_1$ objective,
resulting in noise-robustness. We demonstrate through experiments that the
$\ell_1$-K-SVD algorithm results in a higher atom recovery rate compared with the
K-SVD and the robust dictionary learning (RDL) algorithm proposed by Lu et al.,
both in Gaussian and non-Gaussian noise conditions. We also show that, for
fixed values of sparsity, number of dictionary atoms, and data-dimension, the
$\ell_1$-K-SVD algorithm outperforms the K-SVD and RDL algorithms when the
training set available is small. We apply the proposed algorithm for denoising
natural images corrupted by additive Gaussian and Laplacian noise. The images
denoised using $\ell_1$-K-SVD are observed to have slightly higher peak
signal-to-noise ratio (PSNR) over K-SVD for Laplacian noise, but the
improvement in structural similarity index (SSIM) is significant (approximately
$0.1$) for lower values of input PSNR, indicating the efficacy of the $\ell_1$
metric.
| Subhadip Mukherjee, Rupam Basu, and Chandra Sekhar Seelamantula | null | 1410.0311 | null | null |
Domain adaptation of weighted majority votes via perturbed
variation-based self-labeling | stat.ML cs.LG | In machine learning, the domain adaptation problem arrives when the test
(target) and the train (source) data are generated from different
distributions. A key applied issue is thus the design of algorithms able to
generalize on a new distribution, for which we have no label information. We
focus on learning classification models defined as a weighted majority vote
over a set of real-valued functions. In this context, Germain et al. (2013)
have shown that a measure of disagreement between these functions is crucial to
control. The core of this measure is a theoretical bound--the C-bound (Lacasse
et al., 2007)--which involves the disagreement and leads to a well performing
majority vote learning algorithm in the usual non-adaptive supervised setting:
MinCq. In this work, we propose a framework to extend MinCq to a domain
adaptation scenario. This procedure takes advantage of the recent perturbed
variation divergence between distributions proposed by Harel and Mannor (2012).
Justified by a theoretical bound on the target risk of the vote, we provide
MinCq with a target sample labeled by means of a perturbed variation-based
self-labeling focused on the regions where the source and target marginals
appear similar. We also study the influence of our self-labeling, from which we
deduce an original process for tuning the hyperparameters. Finally, our
framework called PV-MinCq shows very promising results on a rotation and
translation synthetic problem.
| Emilie Morvant (LHC) | null | 1410.0334 | null | null |
Generalized Low Rank Models | stat.ML cs.LG math.OC | Principal components analysis (PCA) is a well-known technique for
approximating a tabular data set by a low rank matrix. Here, we extend the idea
of PCA to handle arbitrary data sets consisting of numerical, Boolean,
categorical, ordinal, and other data types. This framework encompasses many
well known techniques in data analysis, such as nonnegative matrix
factorization, matrix completion, sparse and robust PCA, $k$-means, $k$-SVD,
and maximum margin matrix factorization. The method handles heterogeneous data
sets, and leads to coherent schemes for compressing, denoising, and imputing
missing entries across all data types simultaneously. It also admits a number
of interesting interpretations of the low rank factors, which allow clustering
of examples or of features. We propose several parallel algorithms for fitting
generalized low rank models, and describe implementations and numerical
results.
| Madeleine Udell, Corinne Horn, Reza Zadeh and Stephen Boyd | null | 1410.0342 | null | null |
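A minimal sketch of fitting a low rank model by alternating minimization, specialised to quadratic loss with ridge regularization; the framework's other losses, data types and parallel algorithms are not shown, and all sizes are illustrative.

```python
# Alternating ridge-regularized least squares: approximate A by X @ Y,
# updating X with Y fixed and Y with X fixed, each in closed form.
import numpy as np

rng = np.random.default_rng(0)
m, n, k, lam = 100, 40, 5, 0.1
A = rng.normal(size=(m, k)) @ rng.normal(size=(k, n)) + 0.01 * rng.normal(size=(m, n))

X = rng.normal(size=(m, k))
Y = rng.normal(size=(k, n))
I = np.eye(k)
for it in range(50):
    X = A @ Y.T @ np.linalg.inv(Y @ Y.T + lam * I)   # rows of X solve a ridge regression
    Y = np.linalg.inv(X.T @ X + lam * I) @ X.T @ A   # columns of Y likewise

print("relative error:", np.linalg.norm(A - X @ Y) / np.linalg.norm(A))
```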
Scalable Nonlinear Learning with Adaptive Polynomial Expansions | cs.LG stat.ML | Can we effectively learn a nonlinear representation in time comparable to
linear learning? We describe a new algorithm that explicitly and adaptively
expands higher-order interaction features over base linear representations. The
algorithm is designed for extreme computational efficiency, and an extensive
experimental study shows that its computation/prediction tradeoff ability
compares very favorably against strong baselines.
| Alekh Agarwal, Alina Beygelzimer, Daniel Hsu, John Langford, Matus
Telgarsky | null | 1410.0440 | null | null |
Identification of Dynamic functional brain network states Through Tensor
Decomposition | cs.NE cs.LG q-bio.NC | With the advances in high resolution neuroimaging, there has been a growing
interest in the detection of functional brain connectivity. Complex network
theory has been proposed as an attractive mathematical representation of
functional brain networks. However, most of the current studies of functional
brain networks have focused on the computation of graph theoretic indices for
static networks, i.e. long-time averages of connectivity networks. It is
well-known that functional connectivity is a dynamic process and the
construction and reorganization of the networks is key to understanding human
cognition. Therefore, there is a growing need to track dynamic functional brain
networks and identify time intervals over which the network is
quasi-stationary. In this paper, we present a tensor decomposition based method
to identify temporally invariant 'network states' and find a common topographic
representation for each state. The proposed methods are applied to
electroencephalogram (EEG) data during the study of error-related negativity
(ERN).
| Arash Golibagh Mahyari, Selin Aviyente | 10.1109/ICASSP.2014.6853969 | 1410.0446 | null | null |
Deep Sequential Neural Network | cs.LG cs.NE | Neural Networks sequentially build high-level features through their
successive layers. We propose here a new neural network model where each layer
is associated with a set of candidate mappings. When an input is processed, at
each layer, one mapping among these candidates is selected according to a
sequential decision process. The resulting model is structured according to a
DAG like architecture, so that a path from the root to a leaf node defines a
sequence of transformations. Instead of considering global transformations,
as in classical multilayer networks, this model allows us to learn a set
of local transformations. It is thus able to process data with different
characteristics through specific sequences of such local transformations,
increasing the expressive power of this model w.r.t. a classical multilayer
network. The learning algorithm is inspired by policy gradient techniques
coming from the reinforcement learning domain and is used here instead of the
classical back-propagation based gradient descent techniques. Experiments on
different datasets show the relevance of this approach.
| Ludovic Denoyer and Patrick Gallinari | null | 1410.0510 | null | null |
Mapping Energy Landscapes of Non-Convex Learning Problems | stat.ML cs.LG | In many statistical learning problems, the target functions to be optimized
are highly non-convex in various model spaces and thus are difficult to
analyze. In this paper, we compute \emph{Energy Landscape Maps} (ELMs) which
characterize and visualize an energy function with a tree structure, in which
each leaf node represents a local minimum and each non-leaf node represents the
barrier between adjacent energy basins. The ELM also associates each node with
the estimated probability mass and volume for the corresponding energy basin.
We construct ELMs by adopting the generalized Wang-Landau algorithm and
multi-domain sampler that simulates a Markov chain traversing the model space
by dynamically reweighting the energy function. We construct ELMs in the model
space for two classic statistical learning problems: i) clustering with
Gaussian mixture models or Bernoulli templates; and ii) bi-clustering. We
propose a way to measure the difficulties (or complexity) of these learning
problems and study how various conditions affect the landscape complexity, such
as separability of the clusters, the number of examples, and the level of
supervision; and we also visualize the behaviors of different algorithms, such
as K-means, EM, two-step EM and Swendsen-Wang cuts, in the energy landscapes.
| Maria Pavlovskaia, Kewei Tu and Song-Chun Zhu | null | 1410.0576 | null | null |
Deep Directed Generative Autoencoders | stat.ML cs.LG cs.NE | For discrete data, the likelihood $P(x)$ can be rewritten exactly and
parametrized into $P(X = x) = P(X = x | H = f(x)) P(H = f(x))$ if $P(X | H)$
has enough capacity to put no probability mass on any $x'$ for which $f(x')\neq
f(x)$, where $f(\cdot)$ is a deterministic discrete function. The log of the
first factor gives rise to the log-likelihood reconstruction error of an
autoencoder with $f(\cdot)$ as the encoder and $P(X|H)$ as the (probabilistic)
decoder. The log of the second term can be seen as a regularizer on the encoded
activations $h=f(x)$, e.g., as in sparse autoencoders. Both encoder and decoder
can be represented by a deep neural network and trained to maximize the average
of the optimal log-likelihood $\log p(x)$. The objective is to learn an encoder
$f(\cdot)$ that maps $X$ to $f(X)$ that has a much simpler distribution than
$X$ itself, estimated by $P(H)$. This "flattens the manifold" or concentrates
probability mass in a smaller number of (relevant) dimensions over which the
distribution factorizes. Generating samples from the model is straightforward
using ancestral sampling. One challenge is that regular back-propagation cannot
be used to obtain the gradient on the parameters of the encoder, but we find
that using the straight-through estimator works well here. We also find that
although optimizing a single level of such architecture may be difficult, much
better results can be obtained by pre-training and stacking them, gradually
transforming the data distribution into one that is more easily captured by a
simple parametric model.
| Sherjil Ozair and Yoshua Bengio | null | 1410.0630 | null | null |
Deterministic Conditions for Subspace Identifiability from Incomplete
Sampling | stat.ML cs.LG math.CO | Consider a generic $r$-dimensional subspace of $\mathbb{R}^d$, $r<d$, and
suppose that we are only given projections of this subspace onto small subsets
of the canonical coordinates. The paper establishes necessary and sufficient
deterministic conditions on the subsets for subspace identifiability.
| Daniel L. Pimentel-Alarc\'on, Robert D. Nowak, Nigel Boston | null | 1410.0633 | null | null |
Term-Weighting Learning via Genetic Programming for Text Classification | cs.NE cs.LG | This paper describes a novel approach to learning term-weighting schemes
(TWSs) in the context of text classification. In text mining a TWS determines
the way in which documents will be represented in a vector space model, before
applying a classifier. Whereas acceptable performance has been obtained with
standard TWSs (e.g., Boolean and term-frequency schemes), the definition of
TWSs has been traditionally an art. Further, it is still a difficult task to
determine what is the best TWS for a particular problem and it is not clear
yet whether schemes better than those currently available can be generated
by combining known TWSs. We propose in this article a genetic program that aims
at learning effective TWSs that can improve the performance of current schemes
in text classification. The genetic program learns how to combine a set of
basic units to give rise to discriminative TWSs. We report an extensive
experimental study comprising data sets from thematic and non-thematic text
classification as well as from image classification. Our study shows the
validity of the proposed method; in fact, we show that TWSs learned with the
genetic program outperform traditional schemes and other TWSs proposed in
recent works. Further, we show that TWSs learned from a specific domain can be
effectively used for other tasks.
| Hugo Jair Escalante, Mauricio A. Garc\'ia-Lim\'on, Alicia
Morales-Reyes, Mario Graff, Manuel Montes-y-G\'omez, Eduardo F. Morales | null | 1410.0640 | null | null |
Proceedings of the second "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST'14) | cs.NA cs.CV cs.IT cs.LG math.IT math.OC math.ST stat.TH | The implicit objective of the biennial "international - Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and town center. iTWIST'14 has gathered about
70 international participants and has featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problem; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
| L. Jacques, C. De Vleeschouwer, Y. Boursier, P. Sudhakar, C. De Mol,
A. Pizurica, S. Anthoine, P. Vandergheynst, P. Frossard, C. Bilen, S. Kitic,
N. Bertin, R. Gribonval, N. Boumal, B. Mishra, P.-A. Absil, R. Sepulchre, S.
Bundervoet, C. Schretter, A. Dooms, P. Schelkens, O. Chabiron, F. Malgouyres,
J.-Y. Tourneret, N. Dobigeon, P. Chainais, C. Richard, B. Cornelis, I.
Daubechies, D. Dunson, M. Dankova, P. Rajmic, K. Degraux, V. Cambareri, B.
Geelen, G. Lafruit, G. Setti, J.-F. Determe, J. Louveaux, F. Horlin, A.
Dr\'emeau, P. Heas, C. Herzet, V. Duval, G. Peyr\'e, A. Fawzi, M. Davies, N.
Gillis, S. A. Vavasis, C. Soussen, L. Le Magoarou, J. Liang, J. Fadili, A.
Liutkus, D. Martina, S. Gigan, L. Daudet, M. Maggioni, S. Minsker, N. Strawn,
C. Mory, F. Ngole, J.-L. Starck, I. Loris, S. Vaiter, M. Golbabaee, D.
Vukobratovic | null | 1410.0719 | null | null |
HD-CNN: Hierarchical Deep Convolutional Neural Network for Large Scale
Visual Recognition | cs.CV cs.AI cs.LG cs.NE stat.ML | In image classification, visual separability between different object
categories is highly uneven, and some categories are more difficult to
distinguish than others. Such difficult categories demand more dedicated
classifiers. However, existing deep convolutional neural networks (CNN) are
trained as flat N-way classifiers, and few efforts have been made to leverage
the hierarchical structure of categories. In this paper, we introduce
hierarchical deep CNNs (HD-CNNs) by embedding deep CNNs into a category
hierarchy. An HD-CNN separates easy classes using a coarse category classifier
while distinguishing difficult classes using fine category classifiers. During
HD-CNN training, component-wise pretraining is followed by global finetuning
with a multinomial logistic loss regularized by a coarse category consistency
term. In addition, conditional executions of fine category classifiers and
layer parameter compression make HD-CNNs scalable for large-scale visual
recognition. We achieve state-of-the-art results on both CIFAR100 and
large-scale ImageNet 1000-class benchmark datasets. In our experiments, we
build up three different HD-CNNs and they lower the top-1 error of the standard
CNNs by 2.65%, 3.1% and 1.1%, respectively.
| Zhicheng Yan, Hao Zhang, Robinson Piramuthu, Vignesh Jagadeesh, Dennis
DeCoste, Wei Di, Yizhou Yu | null | 1410.0736 | null | null |
Generalized Laguerre Reduction of the Volterra Kernel for Practical
Identification of Nonlinear Dynamic Systems | cs.LG | The Volterra series can be used to model a large subset of nonlinear, dynamic
systems. A major drawback is the number of coefficients required to model such
systems. In order to reduce the number of required coefficients, Laguerre
polynomials are used to estimate the Volterra kernels. Existing literature
proposes algorithms for a fixed number of Volterra kernels, and Laguerre
series. This paper presents a novel algorithm for generalized calculation of
the finite order Volterra-Laguerre (VL) series for a MIMO system. An example
addresses the utility of the algorithm in practical application.
| Brett W. Israelsen, Dale A. Smith | null | 1410.0741 | null | null |
cuDNN: Efficient Primitives for Deep Learning | cs.NE cs.LG cs.MS | We present a library of efficient implementations of deep learning
primitives. Deep learning workloads are computationally intensive, and
optimizing their kernels is difficult and time-consuming. As parallel
architectures evolve, kernels must be reoptimized, which makes maintaining
codebases difficult over time. Similar issues have long been addressed in the
HPC community by libraries such as the Basic Linear Algebra Subroutines (BLAS).
However, there is no analogous library for deep learning. Without such a
library, researchers implementing deep learning workloads on parallel
processors must create and optimize their own implementations of the main
computational kernels, and this work must be repeated as new parallel
processors emerge. To address this problem, we have created a library similar
in intent to BLAS, with optimized routines for deep learning workloads. Our
implementation contains routines for GPUs, although similarly to the BLAS
library, these routines could be implemented for other platforms. The library
is easy to integrate into existing frameworks, and provides optimized
performance and memory usage. For example, integrating cuDNN into Caffe, a
popular framework for convolutional networks, improves performance by 36% on a
standard model while also reducing memory consumption.
| Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen,
John Tran, Bryan Catanzaro, Evan Shelhamer | null | 1410.0759 | null | null |
SimNets: A Generalization of Convolutional Networks | cs.NE cs.LG | We present a deep layered architecture that generalizes classical
convolutional neural networks (ConvNets). The architecture, called SimNets, is
driven by two operators, one being a similarity function whose family contains
the convolution operator used in ConvNets, and the other is a new soft
max-min-mean operator called MEX that realizes classical operators like ReLU
and max pooling, but has additional capabilities that make SimNets a powerful
generalization of ConvNets. Three interesting properties emerge from the
architecture: (i) the basic input to hidden layer to output machinery contains
as special cases kernel machines with the Exponential and Generalized Gaussian
kernels, the output units being "neurons in feature space"; (ii) in its general
form, the basic machinery has a higher abstraction level than kernel machines,
and (iii) initializing networks using unsupervised learning is natural.
Experiments demonstrate the capability of achieving state of the art accuracy
with networks that are an order of magnitude smaller than comparable ConvNets.
| Nadav Cohen and Amnon Shashua | null | 1410.0781 | null | null |
Probit Normal Correlated Topic Models | stat.ML cs.IR cs.LG | The logistic normal distribution has recently been adapted via the
transformation of multivariate Gaussian variables to model the topical
distribution of documents in the presence of correlations among topics. In this
paper, we propose a probit normal alternative approach to modelling correlated
topical structures. Our use of the probit model in the context of topic
discovery is novel, as many authors have so far concentrated solely on the
logistic model partly due to the formidable inefficiency of the multinomial
probit model even in the case of very small topical spaces. We herein
circumvent the inefficiency of multinomial probit estimation by using an
adaptation of the diagonal orthant multinomial probit in the topic models
context, resulting in the ability of our topic modelling scheme to handle
corpuses with a large number of latent topics. An additional and very important
benefit of our method lies in the fact that unlike with the logistic normal
model whose non-conjugacy leads to the need for sophisticated sampling schemes,
our approach exploits the natural conjugacy inherent in the auxiliary
formulation of the probit model to achieve greater simplicity. The application
of our proposed scheme to a well known Associated Press corpus not only helps
discover a large number of meaningful topics but also reveals the capturing of
compellingly intuitive correlations among certain topics. Besides, our proposed
approach lends itself to even further scalability thanks to various existing
high performance algorithms and architectures capable of handling millions of
documents.
| Xingchen Yu and Ernest Fokoue | null | 1410.0908 | null | null |
Tight Regret Bounds for Stochastic Combinatorial Semi-Bandits | cs.LG cs.AI math.OC stat.ML | A stochastic combinatorial semi-bandit is an online learning problem where at
each step a learning agent chooses a subset of ground items subject to
constraints, and then observes stochastic weights of these items and receives
their sum as a payoff. In this paper, we close the problem of computationally
and sample efficient learning in stochastic combinatorial semi-bandits. In
particular, we analyze a UCB-like algorithm for solving the problem, which is
known to be computationally efficient; and prove $O(K L (1 / \Delta) \log n)$
and $O(\sqrt{K L n \log n})$ upper bounds on its $n$-step regret, where $L$ is
the number of ground items, $K$ is the maximum number of chosen items, and
$\Delta$ is the gap between the expected returns of the optimal and best
suboptimal solutions. The gap-dependent bound is tight up to a constant factor
and the gap-free bound is tight up to a polylogarithmic factor.
| Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvari | null | 1410.0949 | null | null |
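A minimal sketch of a UCB-like algorithm for a stochastic combinatorial semi-bandit with a simple cardinality constraint (play the K items with the largest indices and observe their weights); the constraint set, confidence radius and reward model are illustrative assumptions, not the exact algorithm and constants analysed in the paper.

```python
# Per-item UCB indices; the K highest-index items are played each round and
# only their stochastic weights are observed (semi-bandit feedback).
import numpy as np

rng = np.random.default_rng(0)
L, K, T = 8, 3, 5000
true_means = rng.uniform(0.1, 0.9, size=L)
counts = np.zeros(L)
means = np.zeros(L)

# initialization: make sure every item has been observed once
for i in range(L):
    w = (rng.random(L) < true_means).astype(float)
    means[i], counts[i] = w[i], 1

for t in range(1, T + 1):
    ucb = means + np.sqrt(1.5 * np.log(t) / counts)
    chosen = np.argsort(ucb)[-K:]                      # K items with the largest indices
    w = (rng.random(L) < true_means).astype(float)     # stochastic item weights
    for i in chosen:                                   # update only observed items
        counts[i] += 1
        means[i] += (w[i] - means[i]) / counts[i]
```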
Minimax Analysis of Active Learning | cs.LG math.ST stat.ML stat.TH | This work establishes distribution-free upper and lower bounds on the minimax
label complexity of active learning with general hypothesis classes, under
various noise models. The results reveal a number of surprising facts. In
particular, under the noise model of Tsybakov (2004), the minimax label
complexity of active learning with a VC class is always asymptotically smaller
than that of passive learning, and is typically significantly smaller than the
best previously-published upper bounds in the active learning literature. In
high-noise regimes, it turns out that all active learning problems of a given
VC dimension have roughly the same minimax label complexity, which contrasts
with well-known results for bounded noise. In low-noise regimes, we find that
the label complexity is well-characterized by a simple combinatorial complexity
measure we call the star number. Interestingly, we find that almost all of the
complexity measures previously explored in the active learning literature have
worst-case values exactly equal to the star number. We also propose new active
learning strategies that nearly achieve these minimax label complexities.
| Steve Hanneke and Liu Yang | null | 1410.0996 | null | null |
Gamma Processes, Stick-Breaking, and Variational Inference | stat.ML cs.AI cs.LG | While most Bayesian nonparametric models in machine learning have focused on
the Dirichlet process, the beta process, or their variants, the gamma process
has recently emerged as a useful nonparametric prior in its own right. Current
inference schemes for models involving the gamma process are restricted to
MCMC-based methods, which limits their scalability. In this paper, we present a
variational inference framework for models involving gamma process priors. Our
approach is based on a novel stick-breaking constructive definition of the
gamma process. We prove correctness of this stick-breaking process by using the
characterization of the gamma process as a completely random measure (CRM), and
we explicitly derive the rate measure of our construction using Poisson process
machinery. We also derive error bounds on the truncation of the infinite
process required for variational inference, similar to the truncation analyses
for other nonparametric models based on the Dirichlet and beta processes. Our
representation is then used to derive a variational inference algorithm for a
particular Bayesian nonparametric latent structure formulation known as the
infinite Gamma-Poisson model, where the latent variables are drawn from a gamma
process prior with Poisson likelihoods. Finally, we present results for our
algorithms on nonnegative matrix factorization tasks on document corpora, and
show that we compare favorably to both sampling-based techniques and
variational approaches based on beta-Bernoulli priors.
| Anirban Roychowdhury, Brian Kulis | null | 1410.1068 | null | null |
Explain Images with Multimodal Recurrent Neural Networks | cs.CV cs.CL cs.LG | In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model
for generating novel sentence descriptions to explain the content of images. It
directly models the probability distribution of generating a word given
previous words and the image. Image descriptions are generated by sampling from
this distribution. The model consists of two sub-networks: a deep recurrent
neural network for sentences and a deep convolutional network for images. These
two sub-networks interact with each other in a multimodal layer to form the
whole m-RNN model. The effectiveness of our model is validated on three
benchmark datasets (IAPR TC-12, Flickr 8K, and Flickr 30K). Our model
outperforms the state-of-the-art generative method. In addition, the m-RNN
model can be applied to retrieval tasks for retrieving images or sentences, and
achieves significant performance improvement over the state-of-the-art methods
which directly optimize the ranking objective function for retrieval.
| Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Alan L. Yuille | null | 1410.1090 | null | null |
Online Ranking with Top-1 Feedback | cs.LG | We consider a setting where a system learns to rank a fixed set of $m$ items.
The goal is to produce good item rankings for users with diverse interests who
interact online with the system for $T$ rounds. We consider a novel top-$1$
feedback model: at the end of each round, the relevance score for only the top
ranked object is revealed. However, the performance of the system is judged on
the entire ranked list. We provide a comprehensive set of results regarding
learnability under this challenging setting. For PairwiseLoss and DCG, two
popular ranking measures, we prove that the minimax regret is
$\Theta(T^{2/3})$. Moreover, the minimax regret is achievable using an
efficient strategy that only spends $O(m \log m)$ time per round. The same
efficient strategy achieves $O(T^{2/3})$ regret for Precision@$k$.
Surprisingly, we show that for normalized versions of these ranking measures,
i.e., AUC, NDCG \& MAP, no online ranking algorithm can have sublinear regret.
| Sougata Chaudhuri and Ambuj Tewari | null | 1410.1103 | null | null |
On the Computational Efficiency of Training Neural Networks | cs.LG cs.AI stat.ML | It is well-known that neural networks are computationally hard to train. On
the other hand, in practice, modern day neural networks are trained efficiently
using SGD and a variety of tricks that include different activation functions
(e.g. ReLU), over-specification (i.e., train networks which are larger than
needed), and regularization. In this paper we revisit the computational
complexity of training neural networks from a modern perspective. We provide
both positive and negative results, some of them yield new provably efficient
and practical algorithms for training certain types of neural networks.
| Roi Livni and Shai Shalev-Shwartz and Ohad Shamir | null | 1410.1141 | null | null |
Understanding Locally Competitive Networks | cs.NE cs.LG | Recently proposed neural network activation functions such as rectified
linear, maxout, and local winner-take-all have allowed for faster and more
effective training of deep neural architectures on large and complex datasets.
The common trait among these functions is that they implement local competition
between small groups of computational units within a layer, so that only part
of the network is activated for any given input pattern. In this paper, we
attempt to visualize and understand this self-modularization, and suggest a
unified explanation for the beneficial properties of such networks. We also
show how our insights can be directly useful for efficiently performing
retrieval over large datasets using neural networks.
| Rupesh Kumar Srivastava, Jonathan Masci, Faustino Gomez, J\"urgen
Schmidhuber | null | 1410.1165 | null | null |
Interactive Fingerprinting Codes and the Hardness of Preventing False
Discovery | cs.CR cs.DS cs.LG | We show an essentially tight bound on the number of adaptively chosen
statistical queries that a computationally efficient algorithm can answer
accurately given $n$ samples from an unknown distribution. A statistical query
asks for the expectation of a predicate over the underlying distribution, and
an answer to a statistical query is accurate if it is "close" to the correct
expectation over the distribution. This question was recently studied by Dwork
et al., who showed how to answer $\tilde{\Omega}(n^2)$ queries efficiently, and
also by Hardt and Ullman, who showed that answering $\tilde{O}(n^3)$ queries is
hard. We close the gap between the two bounds and show that, under a standard
hardness assumption, there is no computationally efficient algorithm that,
given $n$ samples from an unknown distribution, can give valid answers to
$O(n^2)$ adaptively chosen statistical queries. An implication of our results
is that computationally efficient algorithms for answering arbitrary,
adaptively chosen statistical queries may as well be differentially private.
We obtain our results using a new connection between the problem of answering
adaptively chosen statistical queries and a combinatorial object called an
interactive fingerprinting code. In order to optimize our hardness result, we
give a new Fourier-analytic approach to analyzing fingerprinting codes that is
simpler, more flexible, and yields better parameters than previous
constructions.
| Thomas Steinke and Jonathan Ullman | null | 1410.1228 | null | null |
Top Rank Optimization in Linear Time | cs.LG cs.AI cs.IR | Bipartite ranking aims to learn a real-valued ranking function that orders
positive instances before negative instances. Recent efforts of bipartite
ranking are focused on optimizing ranking accuracy at the top of the ranked
list. Most existing approaches are either to optimize task specific metrics or
to extend the ranking loss by emphasizing more on the error associated with the
top ranked instances, leading to a high computational cost that is super-linear
in the number of training instances. We propose a highly efficient approach,
titled TopPush, for optimizing accuracy at the top that has computational
complexity linear in the number of training instances. We present a novel
analysis that bounds the generalization error for the top ranked instances for
the proposed approach. Empirical study shows that the proposed approach is
highly competitive to the state-of-the-art approaches and is 10-100 times
faster.
| Nan Li and Rong Jin and Zhi-Hua Zhou | null | 1410.1462 | null | null |
Stochastic Discriminative EM | cs.LG | Stochastic discriminative EM (sdEM) is an online-EM-type algorithm for
discriminative training of probabilistic generative models belonging to the
exponential family. In this work, we introduce and justify this algorithm as a
stochastic natural gradient descent method, i.e. a method which accounts for
the information geometry in the parameter space of the statistical model. We
show how this learning algorithm can be used to train probabilistic generative
models by minimizing different discriminative loss functions, such as the
negative conditional log-likelihood and the Hinge loss. The resulting models
trained by sdEM are always generative (i.e. they define a joint probability
distribution) and, as a consequence, can deal with missing data and latent
variables in a principled way, both during learning and when making
predictions. The performance of this method is illustrated by several text
classification problems for which a multinomial naive Bayes and a latent
Dirichlet allocation based classifier are learned using different
discriminative loss functions.
| Andres R. Masegosa | null | 1410.1784 | null | null |
GLAD: Group Anomaly Detection in Social Media Analysis- Extended
Abstract | cs.LG cs.SI | Traditional anomaly detection on social media mostly focuses on individual
point anomalies while anomalous phenomena usually occur in groups. Therefore it
is valuable to study the collective behavior of individuals and detect group
anomalies. Existing group anomaly detection approaches rely on the assumption
that the groups are known, which can hardly be true in real world social media
applications. In this paper, we take a generative approach by proposing a
hierarchical Bayes model: Group Latent Anomaly Detection (GLAD) model. GLAD
takes both pair-wise and point-wise data as input, automatically infers the
groups and detects group anomalies simultaneously. To account for the dynamic
properties of the social media data, we further generalize GLAD to its dynamic
extension d-GLAD. We conduct extensive experiments to evaluate our models on
both synthetic and real world datasets. The empirical results demonstrate that
our approach is effective and robust in discovering latent groups and detecting
group anomalies.
| Qi (Rose) Yu, Xinran He and Yan Liu | null | 1410.1940 | null | null |
Supervised learning Methods for Bangla Web Document Categorization | cs.CL cs.LG | This paper explores the use of machine learning approaches, or more
specifically, four supervised learning methods, namely Decision Tree (C4.5),
K-Nearest Neighbour (KNN), Na\"ive Bayes (NB), and Support Vector Machine (SVM)
for categorization of Bangla web documents. This is a task of automatically
sorting a set of documents into categories from a predefined set. Whereas a
wide range of methods have been applied to English text categorization,
relatively few studies have been conducted on Bangla language text
categorization. Hence, we attempt to analyze the efficiency of those four
methods for categorization of Bangla documents. In order to validate, Bangla
corpus from various websites has been developed and used as examples for the
experiment. For Bangla, empirical results show that all four methods produce
satisfactory performance, with SVM attaining good results on the
high-dimensional and relatively noisy document feature vectors.
| Ashis Kumar Mandal and Rikta Sen | null | 1410.2045 | null | null |
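The study above compares four standard supervised learners on document feature vectors. A minimal, hypothetical sketch of such a comparison in scikit-learn follows; the toy corpus, the labels, and the use of CART (DecisionTreeClassifier) as a stand-in for C4.5 are assumptions for illustration, not the authors' setup or data.

```python
# Hypothetical toy comparison of the four learners; scikit-learn's
# DecisionTreeClassifier (CART) stands in for C4.5.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

docs = ["sports news about cricket", "election results announced",
        "football match report", "parliament passes new bill"]
labels = ["sports", "politics", "sports", "politics"]

X = TfidfVectorizer().fit_transform(docs)      # document feature vectors
classifiers = {
    "Decision Tree (CART as C4.5 stand-in)": DecisionTreeClassifier(),
    "KNN": KNeighborsClassifier(n_neighbors=1),
    "Naive Bayes": MultinomialNB(),
    "SVM": LinearSVC(),
}
for name, clf in classifiers.items():
    clf.fit(X, labels)                         # fit on the toy corpus
    print(name, "training accuracy:", clf.score(X, labels))
```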
Learning manifold to regularize nonnegative matrix factorization | cs.LG | In this chapter we discuss how to learn an optimal manifold presentation to regularize
nonnegative matrix factorization (NMF) for data representation problems.
NMF, which tries to represent a nonnegative data matrix as a product of two low-rank
nonnegative matrices, has been a popular method for data representation due to
its ability to explore the latent part-based structure of data. Recent studies
show that many data distributions have manifold structures, and we should
respect the manifold structure when the data are represented. Recently,
manifold regularized NMF used a nearest neighbor graph to regulate the learning
of factorization parameter matrices and has shown its advantage over
traditional NMF methods for data representation problems. However, how to
construct an optimal graph to present the manifold properly remains a
difficult problem due to the graph model selection, noisy features, and nonlinear
distributed data. In this chapter, we introduce three effective methods to
solve these problems of graph construction for manifold regularized NMF.
Multiple graph learning is proposed to solve the problem of graph model
selection, adaptive graph learning via feature selection is proposed to solve
the problem of constructing a graph from noisy features, while multi-kernel
learning-based graph construction is used to solve the problem of learning a
graph from nonlinearly distributed data.
| Jim Jing-Yan Wang, Xin Gao | null | 1410.2191 | null | null |
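The chapter summarized above builds on manifold (graph) regularized NMF. The sketch below shows one common formulation, X ~ U V^T with a Laplacian penalty on V and multiplicative updates; the similarity graph S, the regularization weight lam, and the toy data are illustrative assumptions, not the chapter's own code.

```python
# Minimal sketch of graph-regularized NMF with multiplicative updates:
# minimize ||X - U V^T||_F^2 + lam * tr(V^T L V), L = D - S.
import numpy as np

def graph_regularized_nmf(X, S, rank, lam=1.0, n_iter=200, eps=1e-9):
    m, n = X.shape
    D = np.diag(S.sum(axis=1))                 # degree matrix of the graph
    rng = np.random.default_rng(0)
    U = rng.random((m, rank))
    V = rng.random((n, rank))
    for _ in range(n_iter):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * S @ V) / (V @ (U.T @ U) + lam * D @ V + eps)
    return U, V

# Hypothetical usage on random non-negative data with a trivial graph.
X = np.abs(np.random.default_rng(1).random((50, 30)))
S = np.ones((30, 30)) - np.eye(30)             # fully connected toy graph
U, V = graph_regularized_nmf(X, S, rank=5, lam=0.1)
print("reconstruction error:", np.linalg.norm(X - U @ V.T))
```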
Bayesian Robust Tensor Factorization for Incomplete Multiway Data | cs.CV cs.LG | We propose a generative model for robust tensor factorization in the presence
of both missing data and outliers. The objective is to explicitly infer the
underlying low-CP-rank tensor capturing the global information and a sparse
tensor capturing the local information (also considered as outliers), thus
providing the robust predictive distribution over missing entries. The
low-CP-rank tensor is modeled by multilinear interactions between multiple
latent factors on which the column sparsity is enforced by a hierarchical
prior, while the sparse tensor is modeled by a hierarchical view of Student-$t$
distribution that associates an individual hyperparameter with each element
independently. For model learning, we develop an efficient closed-form
variational inference under a fully Bayesian treatment, which effectively
prevents the overfitting problem and scales linearly with data size. In contrast
to existing related works, our method can perform model selection automatically
and implicitly without need of tuning parameters. More specifically, it can
discover the groundtruth of CP rank and automatically adapt the sparsity
inducing priors to various types of outliers. In addition, the tradeoff between
the low-rank approximation and the sparse representation can be optimized in
the sense of maximum model evidence. The extensive experiments and comparisons
with many state-of-the-art algorithms on both synthetic and real-world datasets
demonstrate the superiorities of our method from several perspectives.
| Qibin Zhao, Guoxu Zhou, Liqing Zhang, Andrzej Cichocki, and Shun-ichi
Amari | 10.1109/TNNLS.2015.2423694 | 1410.2386 | null | null |
BilBOWA: Fast Bilingual Distributed Representations without Word
Alignments | stat.ML cs.CL cs.LG | We introduce BilBOWA (Bilingual Bag-of-Words without Alignments), a simple
and computationally-efficient model for learning bilingual distributed
representations of words which can scale to large monolingual datasets and does
not require word-aligned parallel training data. Instead it trains directly on
monolingual data and extracts a bilingual signal from a smaller set of raw-text
sentence-aligned data. This is achieved using a novel sampled bag-of-words
cross-lingual objective, which is used to regularize two noise-contrastive
language models for efficient cross-lingual feature learning. We show that
bilingual embeddings learned using the proposed model outperform
state-of-the-art methods on a cross-lingual document classification task as
well as a lexical translation task on WMT11 data.
| Stephan Gouws, Yoshua Bengio, Greg Corrado | null | 1410.2455 | null | null |
Speculate-Correct Error Bounds for k-Nearest Neighbor Classifiers | cs.LG cs.IT math.IT stat.ML | We introduce the speculate-correct method to derive error bounds for local
classifiers. Using it, we show that k nearest neighbor classifiers, in spite of
their famously fractured decision boundaries, have exponential error bounds
with $O(\sqrt{(k + \ln n) / n})$ error bound range for $n$ in-sample examples.
| Eric Bax, Lingjie Weng, Xu Tian | null | 1410.2500 | null | null |
Recovery of Sparse Signals Using Multiple Orthogonal Least Squares | stat.ME cs.IT cs.LG math.IT | We study the problem of recovering sparse signals from compressed linear
measurements. This problem, often referred to as sparse recovery or sparse
reconstruction, has generated a great deal of interest in recent years. To
recover the sparse signals, we propose a new method called multiple orthogonal
least squares (MOLS), which extends the well-known orthogonal least squares
(OLS) algorithm by allowing multiple indices ($L$ of them) to be chosen per iteration.
Owing to inclusion of multiple support indices in each selection, the MOLS
algorithm converges in much fewer iterations and improves the computational
efficiency over the conventional OLS algorithm. Theoretical analysis shows that
MOLS ($L > 1$) performs exact recovery of all $K$-sparse signals within $K$
iterations if the measurement matrix satisfies the restricted isometry property
(RIP) with isometry constant $\delta_{LK} < \frac{\sqrt{L}}{\sqrt{K} + 2
\sqrt{L}}.$ The recovery performance of MOLS in the noisy scenario is also
studied. It is shown that stable recovery of sparse signals can be achieved
with the MOLS algorithm when the signal-to-noise ratio (SNR) scales linearly
with the sparsity level of input signals.
| Jian Wang, Ping Li | null | 1410.2505 | null | null |
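The abstract above describes MOLS as a greedy scheme that selects L support indices per iteration using the OLS criterion. A minimal numpy sketch of that idea follows; the stopping rule, the toy problem sizes, and the assumption that K is a multiple of L are illustrative choices, not the authors' implementation.

```python
# Illustrative MOLS-style greedy recovery; assumes K is a multiple of L.
import numpy as np

def mols(A, y, K, L=2):
    n = A.shape[1]
    support = []
    while len(support) < K:
        scores = np.full(n, np.inf)
        for i in set(range(n)) - set(support):
            cols = support + [i]
            coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
            scores[i] = np.linalg.norm(y - A[:, cols] @ coef)   # OLS criterion
        support += [int(j) for j in np.argsort(scores)[:L]]     # add L indices
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x = np.zeros(n)
    x[support] = coef
    return x, support

# Hypothetical usage: recover a 4-sparse vector from 30 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[3, 17, 40, 55]] = rng.standard_normal(4)
x_hat, support = mols(A, A @ x_true, K=4, L=2)
print(sorted(support), np.linalg.norm(x_hat - x_true))
```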
Polarization Measurement of High Dimensional Social Media Messages With
Support Vector Machine Algorithm Using Mapreduce | cs.LG cs.CL | In this article, we propose a new Support Vector Machine (SVM) training
algorithm based on the distributed MapReduce technique. In the literature, a
large body of research shows that SVM has the highest generalization ability
among the classification algorithms used in machine learning. Moreover, the SVM
classifier model is not affected by correlations among the features. However,
SVM relies on quadratic optimization in its training phase: the SVM training
problem is formulated as a quadratic optimization problem with $O(m^3)$ time
and $O(m^2)$ space complexity, where $m$ is the training set size, so the
computation time of SVM training grows quadratically or worse in the number of
training instances. For this reason, SVM is not a suitable classification
algorithm for large-scale dataset classification. To solve this training
problem, we developed a new distributed MapReduce method. Accordingly, (i) the
SVM algorithm is trained on each partition of the distributed dataset
individually; (ii) the support vectors of the classifier models from all
trained nodes are merged; and (iii) these two steps are iterated until the
classifier model converges to the optimal classifier function. In the
implementation phase, the large-scale social media dataset is represented as a
TFxIDF matrix. The matrix is used for sentiment analysis to obtain polarization
values. Two-class and three-class models are created for the classification
task. Confusion matrices for each classification model are presented in tables.
The social media message corpus consists of messages about 108 public and 66
private universities in Turkey; Twitter is the source of the corpus, and user
messages were collected using the Twitter Streaming API. Results are shown in
figures and tables.
| Ferhat \"Ozg\"ur \c{C}atak | null | 1410.2686 | null | null |
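The three-step distributed scheme described above (train per partition, merge support vectors, iterate) can be sketched with off-the-shelf SVMs as below; the partitioning, the linear kernel, the crude convergence check, and the toy data are assumptions, not the article's implementation.

```python
# Hypothetical sketch: (i) train an SVM per partition, (ii) pool support
# vectors from all partitions, (iii) retrain and repeat until stable.
import numpy as np
from sklearn.svm import SVC

def merged_svm(partitions, max_rounds=5):
    pooled_X, pooled_y = None, None
    for _ in range(max_rounds):
        sv_X, sv_y = [], []
        for X, y in partitions:
            if pooled_X is not None:               # fold in previous SVs
                X = np.vstack([X, pooled_X])
                y = np.concatenate([y, pooled_y])
            clf = SVC(kernel="linear").fit(X, y)
            sv_X.append(X[clf.support_])
            sv_y.append(y[clf.support_])
        new_X, new_y = np.vstack(sv_X), np.concatenate(sv_y)
        if pooled_X is not None and new_X.shape[0] == pooled_X.shape[0]:
            break                                   # crude convergence check
        pooled_X, pooled_y = new_X, new_y
    return SVC(kernel="linear").fit(pooled_X, pooled_y)

# Hypothetical usage with two random partitions of a toy two-class problem.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
parts = [(X[:100], y[:100]), (X[100:], y[100:])]
print(merged_svm(parts).score(X, y))
```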
New SVD based initialization strategy for Non-negative Matrix
Factorization | cs.LG cs.NA | There are two problems that need to be dealt with for Non-negative Matrix
Factorization (NMF): choosing a suitable rank for the factorization and providing a
good initialization method for NMF algorithms. This paper aims to solve these
two problems using the Singular Value Decomposition (SVD). First, we extract the
number of main components as the rank; this method is inspired by
[1, 2]. Second, we use the singular values and their vectors to initialize the NMF
algorithm. In 2008, Boutsidis and Gallopoulos [3] provided the method titled
NNDSVD to enhance the initialization of NMF algorithms. They extracted the positive
section and the respective singular triplet information of the unit matrices
$\{C^{(j)}\}_{j=1}^{k}$, which were obtained from singular vector pairs. This strategy uses
the positive section to cope with negative elements of the singular vectors,
but in experiments we found that even replacing negative elements by their
absolute values could give better results than NNDSVD. Hence, we give another
SVD-based method for initializing NMF algorithms (SVD-NMF).
Numerical experiments on two face databases, ORL and YALE [16, 17], show that our
method is better than NNDSVD.
| Hanli Qiao | null | 1410.2786 | null | null |
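The paper above initializes NMF from the SVD, using absolute values of the singular vectors in place of NNDSVD's positive sections. A hedged sketch of that idea follows; the scaling by square roots of the singular values, the random toy matrix, and the use of scikit-learn's NMF with a custom init are assumptions for illustration, not the paper's exact procedure.

```python
# Initialize NMF factors from the truncated SVD by taking absolute values of
# the singular vectors, scaled by the singular values, then refine with NMF.
import numpy as np
from sklearn.decomposition import NMF

def svd_nmf_init(X, rank):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    W0 = np.abs(U[:, :rank]) * np.sqrt(s[:rank])               # m x rank
    H0 = np.sqrt(s[:rank])[:, None] * np.abs(Vt[:rank, :])     # rank x n
    return W0, H0

X = np.abs(np.random.default_rng(0).random((40, 25)))
W0, H0 = svd_nmf_init(X, rank=5)
model = NMF(n_components=5, init="custom", max_iter=500)
W = model.fit_transform(X, W=W0, H=H0)
print("final reconstruction error:", model.reconstruction_err_)
```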
Approximate False Positive Rate Control in Selection Frequency for
Random Forest | cs.LG stat.ME | Random Forest has become one of the most popular tools for feature selection.
Its ability to deal with high-dimensional data makes this algorithm especially
useful for studies in neuroimaging and bioinformatics. Despite its popularity
and wide use, feature selection in Random Forest still lacks a crucial
ingredient: false positive rate control. To date there is no efficient,
principled and computationally light-weight solution to this shortcoming. As a
result, researchers using Random Forest for feature selection have to resort to
using heuristically set thresholds on feature rankings. This article builds an
approximate probabilistic model for the feature selection process in random
forest training, which allows us to compute an estimated false positive rate
for a given threshold on selection frequency. Hence, it presents a principled
way to determine thresholds for the selection of relevant features without any
additional computational load. Experimental analysis with synthetic data
demonstrates that the proposed approach can limit false positive rates on the
order of the desired values and keep false negative rates low. Results show
that this holds even in the presence of a complex correlation structure between
features. Its good statistical properties and light-weight computational needs
make this approach widely applicable to feature selection for a wide-range of
applications.
| Ender Konukoglu and Melanie Ganz | null | 1410.2838 | null | null |
Computabilities of Validity and Satisfiability in Probability Logics
over Finite and Countable Models | cs.LO cs.LG math.LO math.PR | The $\epsilon$-logic (which is called $\epsilon$E-logic in this paper) of
Kuyper and Terwijn is a variant of first order logic with the same syntax, in
which the models are equipped with probability measures and in which the
$\forall x$ quantifier is interpreted as "there exists a set $A$ of measure
$\ge 1 - \epsilon$ such that for each $x \in A$, ...." Previously, Kuyper and
Terwijn proved that the general satisfiability and validity problems for this
logic are, i) for rational $\epsilon \in (0, 1)$, respectively
$\Sigma^1_1$-complete and $\Pi^1_1$-hard, and ii) for $\epsilon = 0$,
respectively decidable and $\Sigma^0_1$-complete. The adjective "general" here
means "uniformly over all languages."
We extend these results in the scenario of finite models. In particular, we
show that the problems of satisfiability by and validity over finite models in
$\epsilon$E-logic are, i) for rational $\epsilon \in (0, 1)$, respectively
$\Sigma^0_1$- and $\Pi^0_1$-complete, and ii) for $\epsilon = 0$, respectively
decidable and $\Pi^0_1$-complete. Although partial results toward the countable
case are also achieved, the computability of $\epsilon$E-logic over countable
models still remains largely unsolved. In addition, most of the results, of
this paper and of Kuyper and Terwijn, do not apply to individual languages with
a finite number of unary predicates. Reducing this requirement continues to be
a major point of research.
On the positive side, we derive the decidability of the corresponding
problems for monadic relational languages --- equality- and function-free
languages with finitely many unary and zero other predicates. This result holds
for all three of the unrestricted, the countable, and the finite model cases.
Applications in computational learning theory, weighted graphs, and neural
networks are discussed in the context of these decidability and undecidability
results.
| Greg Yang | 10.1080/11663081.2016.1139967 | 1410.3059 | null | null |
Machine Learning Techniques in Cognitive Radio Networks | cs.LG cs.NI | Cognitive radio is an intelligent radio that can be programmed and configured
dynamically to fully use the frequency resources that are not used by licensed
users. It refers to radio devices that are capable of learning and adapting
their transmission to the external radio environment, which means they have
some kind of intelligence for monitoring the radio environment, learning from
it, and making smart decisions. In this paper, we review examples of the use of
machine learning techniques in cognitive radio networks for implementing
intelligent radio.
| Peter Hossain, Adaulfo Komisarczuk, Garin Pawetczak, Sarah Van Dijk,
Isabella Axelsen | null | 1410.3145 | null | null |
Multi-Scale Local Shape Analysis and Feature Selection in Machine
Learning Applications | cs.CG cs.LG math.AT stat.ML | We introduce a method called multi-scale local shape analysis, or MLSA, for
extracting features that describe the local structure of points within a
dataset. The method uses both geometric and topological features at multiple
levels of granularity to capture diverse types of local information for
subsequent machine learning algorithms operating on the dataset. Using
synthetic and real dataset examples, we demonstrate significant performance
improvement of classification algorithms constructed for these datasets with
correspondingly augmented features.
| Paul Bendich, Ellen Gasparovic, John Harer, Rauf Izmailov, and Linda
Ness | null | 1410.3169 | null | null |
Propagation Kernels | stat.ML cs.LG | We introduce propagation kernels, a general graph-kernel framework for
efficiently measuring the similarity of structured data. Propagation kernels
are based on monitoring how information spreads through a set of given graphs.
They leverage early-stage distributions from propagation schemes such as random
walks to capture structural information encoded in node labels, attributes, and
edge information. This has two benefits. First, off-the-shelf propagation
schemes can be used to naturally construct kernels for many graph types,
including labeled, partially labeled, unlabeled, directed, and attributed
graphs. Second, by leveraging existing efficient and informative propagation
schemes, propagation kernels can be considerably faster than state-of-the-art
approaches without sacrificing predictive performance. We will also show that
if the graphs at hand have a regular structure, for instance when modeling
image or video data, one can exploit this regularity to scale the kernel
computation to large databases of graphs with thousands of nodes. We support
our contributions by exhaustive experiments on a number of real-world graphs
from a variety of application domains.
| Marion Neumann and Roman Garnett and Christian Bauckhage and Kristian
Kersting | null | 1410.3314 | null | null |
Generalization Analysis for Game-Theoretic Machine Learning | cs.LG cs.GT | For Internet applications like sponsored search, cautions need to be taken
when using machine learning to optimize their mechanisms (e.g., auction) since
self-interested agents in these applications may change their behaviors (and
thus the data distribution) in response to the mechanisms. To tackle this
problem, a framework called game-theoretic machine learning (GTML) was recently
proposed, which first learns a Markov behavior model to characterize agents'
behaviors, and then learns the optimal mechanism by simulating agents' behavior
changes in response to the mechanism. While GTML has demonstrated practical
success, its generalization analysis is challenging because the behavior data
are non-i.i.d. and dependent on the mechanism. To address this challenge,
first, we decompose the generalization error for GTML into the behavior
learning error and the mechanism learning error; second, for the behavior
learning error, we obtain novel non-asymptotic error bounds for both parametric
and non-parametric behavior learning methods; third, for the mechanism learning
error, we derive a uniform convergence bound based on a new concept called
nested covering number of the mechanism space and the generalization analysis
techniques developed for mixing sequences. To the best of our knowledge, this
is the first work on the generalization analysis of GTML, and we believe it has
general implications to the theoretical analysis of other complicated machine
learning problems.
| Haifang Li, Fei Tian, Wei Chen, Tao Qin, Tie-Yan Liu | null | 1410.3341 | null | null |
Fast Multilevel Support Vector Machines | stat.ML cs.LG | Solving different types of optimization models (including parameters fitting)
for support vector machines on large-scale training data is often an expensive
computational task. This paper proposes a multilevel algorithmic framework that
scales efficiently to very large data sets. Instead of solving the whole
training set in one optimization process, the support vectors are obtained and
gradually refined at multiple levels of coarseness of the data. The proposed
framework includes: (a) construction of hierarchy of large-scale data coarse
representations, and (b) a local processing of updating the hyperplane
throughout this hierarchy. Our multilevel framework substantially improves the
computational time without loosing the quality of classifiers. The algorithms
are demonstrated for both regular and weighted support vector machines.
Experimental results are presented for balanced and imbalanced classification
problems. Quality improvement on several imbalanced data sets has been
observed.
| Talayeh Razzaghi and Ilya Safro | null | 1410.3348 | null | null |
Ricci Curvature and the Manifold Learning Problem | math.DG cs.LG math.MG stat.ML | Consider a sample of $n$ points taken i.i.d from a submanifold $\Sigma$ of
Euclidean space. We show that there is a way to estimate the Ricci curvature of
$\Sigma$ with respect to the induced metric from the sample. Our method is
grounded in the notions of Carr\'e du Champ for diffusion semi-groups, the
theory of Empirical processes and local Principal Component Analysis.
| Antonio G. Ache and Micah W. Warren | null | 1410.3351 | null | null |
Testing Poisson Binomial Distributions | cs.DS cs.IT cs.LG math.IT | A Poisson Binomial distribution over $n$ variables is the distribution of the
sum of $n$ independent Bernoullis. We provide a sample near-optimal algorithm
for testing whether a distribution $P$ supported on $\{0,...,n\}$ to which we
have sample access is a Poisson Binomial distribution, or far from all Poisson
Binomial distributions. The sample complexity of our algorithm is $O(n^{1/4})$
to which we provide a matching lower bound. We note that our sample complexity
improves quadratically upon that of the naive "learn followed by tolerant-test"
approach, while instance optimal identity testing [VV14] is not applicable
since we are looking to simultaneously test against a whole family of
distributions.
| Jayadev Acharya and Constantinos Daskalakis | null | 1410.3386 | null | null |
Mining Block I/O Traces for Cache Preloading with Sparse Temporal
Non-parametric Mixture of Multivariate Poisson | cs.OS cs.LG cs.SY | Existing caching strategies, in the storage domain, though well suited to
exploit short range spatio-temporal patterns, are unable to leverage long-range
motifs for improving hitrates. Motivated by this, we investigate novel Bayesian
non-parametric modeling (BNP) techniques for count vectors, to capture
long-range correlations for cache preloading, by mining Block I/O traces. Such
traces comprise a sequence of memory accesses that can be aggregated into
high-dimensional sparse correlated count vector sequences.
While there are several state of the art BNP algorithms for clustering and
their temporal extensions for prediction, there has been no work on exploring
these for correlated count vectors. Our first contribution addresses this gap
by proposing a DP based mixture model of Multivariate Poisson (DP-MMVP) and its
temporal extension(HMM-DP-MMVP) that captures the full covariance structure of
multivariate count data. However, modeling full covariance structure for count
vectors is computationally expensive, particularly for high dimensional data.
Hence, we exploit sparsity in our count vectors, and as our main contribution,
introduce the Sparse DP mixture of multivariate Poisson(Sparse-DP-MMVP),
generalizing our DP-MMVP mixture model, also leading to more efficient
inference. We then discuss a temporal extension to our model for cache
preloading.
We take the first step towards mining historical data, to capture long range
patterns in storage traces for cache preloading. Experimentally, we show a
dramatic improvement in hitrates on benchmark traces and lay the groundwork for
further research in storage domain to reduce latencies using data mining
techniques to capture long range motifs.
| Lavanya Sita Tekumalla, Chiranjib Bhattacharyya | null | 1410.3463 | null | null |
Enhanced Higgs to $\tau^+\tau^-$ Searches with Deep Learning | hep-ph cs.LG hep-ex | The Higgs boson is thought to provide the interaction that imparts mass to
the fundamental fermions, but while measurements at the Large Hadron Collider
(LHC) are consistent with this hypothesis, current analysis techniques lack the
statistical power to cross the traditional 5$\sigma$ significance barrier
without more data. \emph{Deep learning} techniques have the potential to
increase the statistical power of this analysis by \emph{automatically}
learning complex, high-level data representations. In this work, deep neural
networks are used to detect the decay of the Higgs to a pair of tau leptons. A
Bayesian optimization algorithm is used to tune the network architecture and
training algorithm hyperparameters, resulting in a deep network of eight
non-linear processing layers that improves upon the performance of shallow
classifiers even without the use of features specifically engineered by
physicists for this application. The improvement in discovery significance is
equivalent to an increase in the accumulated dataset of 25\%.
| Pierre Baldi, Peter Sadowski, Daniel Whiteson | 10.1103/PhysRevLett.114.111801 | 1410.3469 | null | null |
A stochastic behavior analysis of stochastic restricted-gradient descent
algorithm in reproducing kernel Hilbert spaces | cs.LG stat.ML | This paper presents a stochastic behavior analysis of a kernel-based
stochastic restricted-gradient descent method. The restricted gradient gives a
steepest ascent direction within the so-called dictionary subspace. The
analysis provides the transient and steady state performance in the mean
squared error criterion. It also includes stability conditions in the mean and
mean-square sense. The present study is based on the analysis of the kernel
normalized least mean square (KNLMS) algorithm initially proposed by Chen et
al. Simulation results validate the analysis.
| Masa-aki Takizawa, Masahiro Yukawa, and Cedric Richard | null | 1410.3595 | null | null |
Detection of cheating by decimation algorithm | stat.ML cond-mat.dis-nn cond-mat.stat-mech cs.LG | We expand the item response theory to study the case of "cheating students"
for a set of exams, trying to detect them by applying a greedy algorithm of
inference. This extended model is closely related to the Boltzmann machine
learning. In this paper we aim to infer the correct biases and interactions of
our model by considering a relatively small number of sets of training data.
Nevertheless, the greedy algorithm that we employed in the present study
exhibits good performance with a small amount of training data. The key point is
the sparseness of the interactions in our problem in the context of the
Boltzmann machine learning: the existence of cheating students is expected to
be very rare (possibly even in real world). We compare a standard approach to
infer the sparse interactions in the Boltzmann machine learning to our greedy
algorithm and we find the latter to be superior in several aspects.
| Shogo Yamanaka, Masayuki Ohzeki, Aurelien Decelle | 10.7566/JPSJ.84.024801 | 1410.3596 | null | null |
POLYGLOT-NER: Massive Multilingual Named Entity Recognition | cs.CL cs.LG | The increasing diversity of languages used on the web introduces a new level
of complexity to Information Retrieval (IR) systems. We can no longer assume
that textual content is written in one language or even the same language
family. In this paper, we demonstrate how to build massive multilingual
annotators with minimal human expertise and intervention. We describe a system
that builds Named Entity Recognition (NER) annotators for 40 major languages
using Wikipedia and Freebase. Our approach does not require NER human annotated
datasets or language specific resources like treebanks, parallel corpora, and
orthographic rules. The novelty of our approach lies in using only
language-agnostic techniques while achieving competitive performance.
Our method learns distributed word representations (word embeddings) which
encode semantic and syntactic features of words in each language. Then, we
automatically generate datasets from Wikipedia link structure and Freebase
attributes. Finally, we apply two preprocessing stages (oversampling and exact
surface form matching) which do not require any linguistic expertise.
Our evaluation is two fold: First, we demonstrate the system performance on
human annotated datasets. Second, for languages where no gold-standard
benchmarks are available, we propose a new method, distant evaluation, based on
statistical machine translation.
| Rami Al-Rfou, Vivek Kulkarni, Bryan Perozzi, Steven Skiena | null | 1410.3791 | null | null |
An exact mapping between the Variational Renormalization Group and Deep
Learning | stat.ML cond-mat.stat-mech cs.LG cs.NE | Deep learning is a broad set of techniques that uses multiple layers of
representation to automatically learn relevant features directly from
structured data. Recently, such techniques have yielded record-breaking results
on a diverse set of difficult machine learning tasks in computer vision, speech
recognition, and natural language processing. Despite the enormous success of
deep learning, relatively little is understood theoretically about why these
techniques are so successful at feature learning and compression. Here, we show
that deep learning is intimately related to one of the most important and
successful techniques in theoretical physics, the renormalization group (RG).
RG is an iterative coarse-graining scheme that allows for the extraction of
relevant features (i.e. operators) as a physical system is examined at
different length scales. We construct an exact mapping from the variational
renormalization group, first introduced by Kadanoff, and deep learning
architectures based on Restricted Boltzmann Machines (RBMs). We illustrate
these ideas using the nearest-neighbor Ising Model in one and two-dimensions.
Our results suggest that deep learning algorithms may be employing a
generalized RG-like scheme to learn relevant features from data.
| Pankaj Mehta and David J. Schwab | null | 1410.3831 | null | null |
Tighter Low-rank Approximation via Sampling the Leveraged Element | cs.DS cs.LG stat.ML | In this work, we propose a new randomized algorithm for computing a low-rank
approximation to a given matrix. Taking an approach different from existing
literature, our method first involves a specific biased sampling, with an
element being chosen based on the leverage scores of its row and column, and
then involves weighted alternating minimization over the factored form of the
intended low-rank matrix, to minimize error only on these samples. Our method
can leverage input sparsity, yet produce approximations in {\em spectral} (as
opposed to the weaker Frobenius) norm; this combines the best aspects of
otherwise disparate current results, but with a dependence on the condition
number $\kappa = \sigma_1/\sigma_r$. In particular we require $O(nnz(M) +
\frac{n\kappa^2 r^5}{\epsilon^2})$ computations to generate a rank-$r$
approximation to $M$ in spectral norm. In contrast, the best existing method
requires $O(nnz(M)+ \frac{nr^2}{\epsilon^4})$ time to compute an approximation
in Frobenius norm. Besides the tightness in spectral norm, we have a better
dependence on the error $\epsilon$. Our method is naturally and highly
parallelizable.
Our new approach enables two extensions that are interesting on their own.
The first is a new method to directly compute a low-rank approximation (in
efficient factored form) to the product of two given matrices; it computes a
small random set of entries of the product, and then executes weighted
alternating minimization (as before) on these. The sampling strategy is
different because now we cannot access leverage scores of the product matrix
(but instead have to work with input matrices). The second extension is an
improved algorithm with smaller communication complexity for the distributed
PCA setting (where each server holds a small set of rows of the matrix and
wants to compute a low-rank approximation with a small amount of communication
with the other servers).
| Srinadh Bhojanapalli, Prateek Jain, Sujay Sanghavi | null | 1410.3886 | null | null |
Spotting Suspicious Link Behavior with fBox: An Adversarial Perspective | cs.LG cs.IR cs.SI | How can we detect suspicious users in large online networks? Online
popularity of a user or product (via follows, page-likes, etc.) can be
monetized on the premise of higher ad click-through rates or increased sales.
Web services and social networks which incentivize popularity thus suffer from
a major problem of fake connections from link fraudsters looking to make a
quick buck. Typical methods of catching this suspicious behavior use spectral
techniques to spot large groups of often blatantly fraudulent (but sometimes
honest) users. However, small-scale, stealthy attacks may go unnoticed due to
the nature of low-rank eigenanalysis used in practice.
In this work, we take an adversarial approach to find and prove claims about
the weaknesses of modern, state-of-the-art spectral methods and propose fBox,
an algorithm designed to catch small-scale, stealth attacks that slip below the
radar. Our algorithm has the following desirable properties: (a) it has
theoretical underpinnings, (b) it is shown to be highly effective on real data
and (c) it is scalable (linear on the input size). We evaluate fBox on a large,
public 41.7 million node, 1.5 billion edge who-follows-whom social graph from
Twitter in 2010 and with high precision identify many suspicious accounts which
have persisted without suspension even to this day.
| Neil Shah, Alex Beutel, Brian Gallagher, Christos Faloutsos | null | 1410.3915 | null | null |
A Logic-based Approach to Generatively Defined Discriminative Modeling | cs.LG | Conditional random fields (CRFs) are usually specified by graphical models
but in this paper we propose to use probabilistic logic programs and specify
them generatively. Our intension is first to provide a unified approach to CRFs
for complex modeling through the use of a Turing complete language and second
to offer a convenient way of realizing generative-discriminative pairs in
machine learning to compare generative and discriminative models and choose the
best model. We implemented our approach as the D-PRISM language by modifying
PRISM, a logic-based probabilistic modeling language for generative modeling,
while exploiting its dynamic programming mechanism for efficient probability
computation. We tested D-PRISM with logistic regression, a linear-chain CRF and
a CRF-CFG and empirically confirmed their excellent discriminative performance
compared to their generative counterparts, i.e.\ naive Bayes, an HMM and a
PCFG. We also introduced new CRF models, CRF-BNCs and CRF-LCGs. They are CRF
versions of Bayesian network classifiers and probabilistic left-corner grammars
respectively and easily implementable in D-PRISM. We empirically showed that
they outperform their generative counterparts as expected.
| Taisuke Sato, Keiichi Kubota, Yoshitaka Kameya | null | 1410.3935 | null | null |
Thompson sampling with the online bootstrap | cs.LG stat.CO stat.ML | Thompson sampling provides a solution to bandit problems in which new
observations are allocated to arms with the posterior probability that an arm
is optimal. While sometimes easy to implement and asymptotically optimal,
Thompson sampling can be computationally demanding in large scale bandit
problems, and its performance is dependent on the model fit to the observed
data. We introduce bootstrap Thompson sampling (BTS), a heuristic method for
solving bandit problems which modifies Thompson sampling by replacing the
posterior distribution used in Thompson sampling by a bootstrap distribution.
We first explain BTS and show that the performance of BTS is competitive to
Thompson sampling in the well-studied Bernoulli bandit case. Subsequently, we
detail why BTS using the online bootstrap is more scalable than regular
Thompson sampling, and we show through simulation that BTS is more robust to a
misspecified error distribution. BTS is an appealing modification of Thompson
sampling, especially when samples from the posterior are otherwise not
available or are costly.
| Dean Eckles and Maurits Kaptein | null | 1410.4009 | null | null |
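Bootstrap Thompson sampling, as described above, replaces the posterior with a bootstrap distribution maintained online. The sketch below uses a double-or-nothing online bootstrap for a Bernoulli bandit; the number of replicates J, the pseudo-observation prior, and the arm probabilities are hypothetical choices, not the authors' settings.

```python
# BTS sketch: each arm keeps J online bootstrap replicates; to act, one
# replicate per arm is drawn and the arm with the highest replicate mean wins.
import numpy as np

rng = np.random.default_rng(0)
true_means = [0.3, 0.5, 0.7]        # hypothetical arm reward probabilities
n_arms, J, horizon = len(true_means), 100, 2000

# successes/counts per (arm, replicate), with one pseudo-observation as prior
wins = np.ones((n_arms, J))
pulls = 2 * np.ones((n_arms, J))

total_reward = 0
for t in range(horizon):
    j = rng.integers(J, size=n_arms)                  # one replicate per arm
    means = wins[np.arange(n_arms), j] / pulls[np.arange(n_arms), j]
    arm = int(np.argmax(means))
    reward = rng.random() < true_means[arm]
    total_reward += reward
    # online ("double-or-nothing") bootstrap: each replicate sees the new
    # observation with probability 1/2, weighted by 2
    mask = rng.random(J) < 0.5
    pulls[arm, mask] += 2
    wins[arm, mask] += 2 * reward
print("average reward:", total_reward / horizon)
```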
Complexity Issues and Randomization Strategies in Frank-Wolfe Algorithms
for Machine Learning | stat.ML cs.LG cs.NA math.OC | Frank-Wolfe algorithms for convex minimization have recently gained
considerable attention from the Optimization and Machine Learning communities,
as their properties make them a suitable choice in a variety of applications.
However, as each iteration requires to optimize a linear model, a clever
implementation is crucial to make such algorithms viable on large-scale
datasets. For this purpose, approximation strategies based on a random sampling
have been proposed by several researchers. In this work, we perform an
experimental study on the effectiveness of these techniques, analyze possible
alternatives and provide some guidelines based on our results.
| Emanuele Frandi, Ricardo Nanculef, Johan Suykens | null | 1410.4062 | null | null |
Two-Layer Feature Reduction for Sparse-Group Lasso via Decomposition of
Convex Sets | cs.LG | Sparse-Group Lasso (SGL) has been shown to be a powerful regression technique
for simultaneously discovering group and within-group sparse patterns by using
a combination of the $\ell_1$ and $\ell_2$ norms. However, in large-scale
applications, the complexity of the regularizers entails great computational
challenges. In this paper, we propose a novel Two-Layer Feature REduction
method (TLFre) for SGL via a decomposition of its dual feasible set. The
two-layer reduction is able to quickly identify the inactive groups and the
inactive features, respectively, which are guaranteed to be absent from the
sparse representation and can be removed from the optimization. Existing
feature reduction methods are only applicable for sparse models with one
sparsity-inducing regularizer. To our best knowledge, TLFre is the first one
that is capable of dealing with multiple sparsity-inducing regularizers.
Moreover, TLFre has a very low computational cost and can be integrated with
any existing solvers. We also develop a screening method---called DPC
(DecomPosition of Convex set)---for the nonnegative Lasso problem. Experiments
on both synthetic and real data sets show that TLFre and DPC improve the
efficiency of SGL and nonnegative Lasso by several orders of magnitude.
| Jie Wang and Jieping Ye | null | 1410.4210 | null | null |
Implicit segmentation of Kannada characters in offline handwriting
recognition using hidden Markov models | cs.LG cs.CV | We describe a method for classification of handwritten Kannada characters
using Hidden Markov Models (HMMs). Kannada script is agglutinative, where
simple shapes are concatenated horizontally to form a character. This results
in a large number of characters making the task of classification difficult.
Character segmentation plays a significant role in reducing the number of
classes. Explicit segmentation techniques suffer when overlapping shapes are
present, which is common in the case of handwritten text. We use HMMs to take
advantage of the agglutinative nature of Kannada script, which allows us to
perform implicit segmentation of characters along with recognition. All the
experiments are performed on the Chars74k dataset that consists of 657
handwritten characters collected across multiple users. Gradient-based features
are extracted from individual characters and are used to train character HMMs.
The use of implicit segmentation technique at the character level resulted in
an improvement of around 10%. This system also outperformed an existing system
tested on the same dataset by around 16%. Analysis based on learning curves
showed that increasing the training data could result in better accuracy.
Accordingly, we collected additional data and obtained an improvement of 4%
with 6 additional samples.
| Manasij Venkatesh, Vikas Majjagi, and Deepu Vijayasenan | null | 1410.4341 | null | null |
Multi-Level Anomaly Detection on Time-Varying Graph Data | cs.SI cs.LG stat.ML | This work presents a novel modeling and analysis framework for graph
sequences which addresses the challenge of detecting and contextualizing
anomalies in labelled, streaming graph data. We introduce a generalization of
the BTER model of Seshadhri et al. by adding flexibility to community
structure, and use this model to perform multi-scale graph anomaly detection.
Specifically, probability models describing coarse subgraphs are built by
aggregating probabilities at finer levels, and these closely related
hierarchical models simultaneously detect deviations from expectation. This
technique provides insight into a graph's structure and internal context that
may shed light on a detected event. Additionally, this multi-scale analysis
facilitates intuitive visualizations by allowing users to narrow focus from an
anomalous graph to particular subgraphs or nodes causing the anomaly.
For evaluation, two hierarchical anomaly detectors are tested against a
baseline Gaussian method on a series of sampled graphs. We demonstrate that our
graph statistics-based approach outperforms both a distribution-based detector
and the baseline in a labeled setting with community structure, and it
accurately detects anomalies in synthetic and real-world datasets at the node,
subgraph, and graph levels. To illustrate the accessibility of information made
possible via this technique, the anomaly detector and an associated interactive
visualization tool are tested on NCAA football data, where teams and
conferences that moved within the league are identified with perfect recall,
and precision greater than 0.786.
| Robert A. Bridges, John Collins, Erik M. Ferragut, Jason Laska, Blair
D. Sullivan | null | 1410.4355 | null | null |
Multivariate Spearman's rho for aggregating ranks using copulas | stat.ML cs.LG | We study the problem of rank aggregation: given a set of ranked lists, we
want to form a consensus ranking. Furthermore, we consider the case of extreme
lists, i.e., lists in which only the ranks of the best or worst elements are known. We impute
missing ranks by the average value and generalise Spearman's \rho to extreme
ranks. Our main contribution is the derivation of a non-parametric estimator
for rank aggregation based on multivariate extensions of Spearman's \rho, which
measures correlation between a set of ranked lists. Multivariate Spearman's
\rho is defined using copulas, and we show that the geometric mean of
normalised ranks maximises multivariate correlation. Motivated by this, we
propose a weighted geometric mean approach for learning to rank which has a
closed form least squares solution. When only the best or worst elements of a
ranked list are known, we impute the missing ranks by the average value,
allowing us to apply Spearman's \rho. Finally, we demonstrate good performance
on the rank aggregation benchmarks MQ2007 and MQ2008.
| Justin Bedo and Cheng Soon Ong | null | 1410.4391 | null | null |
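The aggregation rule motivated above, ordering items by the geometric mean of normalized ranks with average-value imputation for missing ranks, can be sketched as follows; the input format (dictionaries mapping items to ranks) and the toy lists are assumptions for illustration, not the paper's code.

```python
# Rank aggregation by the geometric mean of normalized ranks, with missing
# ranks imputed by each list's average normalized rank.
import numpy as np

def aggregate(rank_lists, n_items):
    R = np.full((len(rank_lists), n_items), np.nan)
    for i, ranks in enumerate(rank_lists):            # ranks: {item: rank}
        for item, r in ranks.items():
            R[i, item] = r / n_items                   # normalized rank
        R[i, np.isnan(R[i])] = np.nanmean(R[i])        # impute by average
    score = np.exp(np.mean(np.log(R), axis=0))         # geometric mean
    return np.argsort(score)                           # best (smallest) first

# Hypothetical example: three partial ranked lists over five items (0-4).
lists = [{0: 1, 1: 2, 2: 3}, {0: 2, 3: 1, 4: 5}, {1: 1, 0: 3}]
print(aggregate(lists, n_items=5))
```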
Map Matching based on Conditional Random Fields and Route Preference
Mining for Uncertain Trajectories | cs.NI cs.LG | In order to improve offline map matching accuracy of low-sampling-rate GPS, a
map matching algorithm based on conditional random fields (CRF) and route
preference mining is proposed. In this algorithm, road offset distance and the
temporal-spatial relationship between the sampling points are used as features
of the GPS trajectory in the CRF model, which takes advantage of flexibly
integrating context information into the features. When the sampling rate is
too low, it is difficult to guarantee effectiveness using only the
temporal-spatial context modeled in the CRF, so the driver's route preference
is used as a supplement superposed on the temporal-spatial transition features. The
experimental results show that this method can improve the accuracy of the
matching, especially in the case of low sampling rate.
| Xu Ming, Du Yi-man, Wu Jian-ping, Zhou Yang | 10.1155/2015/717095 | 1410.4461 | null | null |
MKL-RT: Multiple Kernel Learning for Ratio-trace Problems via Convex
Optimization | cs.CV cs.LG | In the recent past, automatic selection or combination of kernels (or
features) based on multiple kernel learning (MKL) approaches has been receiving
significant attention from various research communities. Though MKL has been
extensively studied in the context of support vector machines (SVM), it is
relatively less explored for ratio-trace problems. In this paper, we show that
MKL can be formulated as a convex optimization problem for a general class of
ratio-trace problems that encompasses many popular algorithms used in various
computer vision applications. We also provide an optimization procedure that is
guaranteed to converge to the global optimum of the proposed optimization
problem. We experimentally demonstrate that the proposed MKL approach, which we
refer to as MKL-RT, can be successfully used to select features for
discriminative dimensionality reduction and cross-modal retrieval. We also show
that the proposed convex MKL-RT approach performs better than the recently
proposed non-convex MKL-DR approach.
| Raviteja Vemulapalli, Vinay Praneeth Boda, and Rama Chellappa | null | 1410.4470 | null | null |
Graph-Sparse LDA: A Topic Model with Structured Sparsity | stat.ML cs.CL cs.LG | Originally designed to model text, topic modeling has become a powerful tool
for uncovering latent structure in domains including medicine, finance, and
vision. The goals for the model vary depending on the application: in some
cases, the discovered topics may be used for prediction or some other
downstream task. In other cases, the content of the topic itself may be of
intrinsic scientific interest.
Unfortunately, even using modern sparse techniques, the discovered topics are
often difficult to interpret due to the high dimensionality of the underlying
space. To improve topic interpretability, we introduce Graph-Sparse LDA, a
hierarchical topic model that leverages knowledge of relationships between
words (e.g., as encoded by an ontology). In our model, topics are summarized by
a few latent concept-words from the underlying graph that explain the observed
words. Graph-Sparse LDA recovers sparse, interpretable summaries on two
real-world biomedical datasets while matching state-of-the-art prediction
performance.
| Finale Doshi-Velez and Byron Wallace and Ryan Adams | null | 1410.4510 | null | null |
Learning a hyperplane regressor by minimizing an exact bound on the VC
dimension | cs.LG | The capacity of a learning machine is measured by its Vapnik-Chervonenkis
dimension, and learning machines with a low VC dimension generalize better. It
is well known that the VC dimension of SVMs can be very large or unbounded,
even though they generally yield state-of-the-art learning performance. In this
paper, we show how to learn a hyperplane regressor by minimizing an exact, or
$\Theta$, bound on its VC dimension. The proposed approach, termed
the Minimal Complexity Machine (MCM) Regressor, involves solving a simple
linear programming problem. Experimental results show that, on a number of
benchmark datasets, the proposed approach yields regressors with error rates
much lower than those obtained with conventional SVM regressors, while often
using fewer support vectors. On some benchmark datasets, the number of support
vectors is less than one tenth the number used by SVMs, indicating that the MCM
does indeed learn simpler representations.
| Jayadeva, Suresh Chandra, Siddarth Sabharwal, and Sanjit S. Batra | 10.1016/j.neucom.2015.06.065 | 1410.4573 | null | null |
Non-parametric Bayesian Learning with Deep Learning Structure and Its
Applications in Wireless Networks | cs.LG cs.NE cs.NI stat.ML | In this paper, we present an infinite hierarchical non-parametric Bayesian
model to extract the hidden factors over observed data, where the number of
hidden factors for each layer is unknown and can be potentially infinite.
Moreover, the number of layers can also be infinite. We construct the model
structure that allows continuous values for the hidden factors and weights,
which makes the model suitable for various applications. We use the
Metropolis-Hastings method to infer the model structure. Then the performance
of the algorithm is evaluated by the experiments. Simulation results show that
the model fits the underlying structure of simulated data.
| Erte Pan and Zhu Han | null | 1410.4599 | null | null |
Domain-Independent Optimistic Initialization for Reinforcement Learning | cs.LG cs.AI | In Reinforcement Learning (RL), it is common to use optimistic initialization
of value functions to encourage exploration. However, such an approach
generally depends on the domain, viz., the scale of the rewards must be known,
and the feature representation must have a constant norm. We present a simple
approach that performs optimistic initialization with less dependence on the
domain.
| Marlos C. Machado, Sriram Srinivasan and Michael Bowling | null | 1410.4604 | null | null |
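For context, the sketch below shows the standard, domain-dependent form of optimistic initialization that the paper aims to relax: tabular Q-learning whose Q-values start at an optimistic constant derived from the known reward scale. The toy chain MDP and all constants are hypothetical; this is not the authors' proposed method.

```python
# Optimistic initialization in tabular Q-learning: Q starts at r_max/(1-gamma),
# so a purely greedy policy still explores unvisited state-action pairs.
import numpy as np

n_states, n_actions, r_max, gamma, alpha = 10, 2, 1.0, 0.95, 0.1
Q = np.full((n_states, n_actions), r_max / (1 - gamma))   # optimistic start

def step(s, a):
    # toy chain MDP: action 1 moves right, action 0 resets; reward at the end
    s2 = min(s + 1, n_states - 1) if a == 1 else 0
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

s = 0
for t in range(5000):
    a = int(np.argmax(Q[s]))                               # greedy suffices
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2
print("greedy action per state:", Q.argmax(axis=1))
```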
Learning to Execute | cs.NE cs.AI cs.LG | Recurrent Neural Networks (RNNs) with Long Short-Term Memory units (LSTM) are
widely used because they are expressive and are easy to train. Our interest
lies in empirically evaluating the expressiveness and the learnability of LSTMs
in the sequence-to-sequence regime by training them to evaluate short computer
programs, a domain that has traditionally been seen as too complex for neural
networks. We consider a simple class of programs that can be evaluated with a
single left-to-right pass using constant memory. Our main result is that LSTMs
can learn to map the character-level representations of such programs to their
correct outputs. Notably, it was necessary to use curriculum learning, and
while conventional curriculum learning proved ineffective, we developed a new
variant of curriculum learning that improved our networks' performance in all
experimental conditions. The improved curriculum had a dramatic impact on an
addition problem, making it possible to train an LSTM to add two 9-digit
numbers with 99% accuracy.
| Wojciech Zaremba, Ilya Sutskever | null | 1410.4615 | null | null |
KCRC-LCD: Discriminative Kernel Collaborative Representation with
Locality Constrained Dictionary for Visual Categorization | cs.CV cs.LG | We consider the image classification problem via kernel collaborative
representation classification with locality constrained dictionary (KCRC-LCD).
Specifically, we propose a kernel collaborative representation classification
(KCRC) approach in which kernel method is used to improve the discrimination
ability of collaborative representation classification (CRC). We then measure
the similarities between the query and atoms in the global dictionary in order
to construct a locality constrained dictionary (LCD) for KCRC. In addition, we
discuss several similarity measure approaches in LCD and further present a
simple yet effective unified similarity measure whose superiority is validated
in experiments. There are several appealing aspects associated with LCD. First,
LCD can be nicely incorporated under the framework of KCRC. The LCD similarity
measure can be kernelized under KCRC, which theoretically links CRC and LCD
under the kernel method. Second, KCRC-LCD becomes more scalable to both the
training set size and the feature dimension. An example shows that KCRC is able to
perfectly classify data with a certain distribution, while conventional CRC fails
completely. Comprehensive experiments on many public datasets also show that
KCRC-LCD is a robust discriminative classifier with both excellent performance
and good scalability, being comparable or outperforming many other
state-of-the-art approaches.
| Weiyang Liu, Zhiding Yu, Lijia Lu, Yandong Wen, Hui Li and Yuexian Zou | null | 1410.4673 | null | null |
mS2GD: Mini-Batch Semi-Stochastic Gradient Descent in the Proximal
Setting | cs.LG stat.ML | We propose a mini-batching scheme for improving the theoretical complexity
and practical performance of semi-stochastic gradient descent applied to the
problem of minimizing a strongly convex composite function represented as the
sum of an average of a large number of smooth convex functions, and simple
nonsmooth convex function. Our method first performs a deterministic step
(computation of the gradient of the objective function at the starting point),
followed by a large number of stochastic steps. The process is repeated a few
times with the last iterate becoming the new starting point. The novelty of our
method is in introduction of mini-batching into the computation of stochastic
steps. In each step, instead of choosing a single function, we sample $b$
functions, compute their gradients, and compute the direction based on this. We
analyze the complexity of the method and show that the method benefits from two
speedup effects. First, we prove that as long as $b$ is below a certain
threshold, we can reach predefined accuracy with less overall work than without
mini-batching. Second, our mini-batching scheme admits a simple parallel
implementation, and hence is suitable for further acceleration by
parallelization.
| Jakub Kone\v{c}n\'y, Jie Liu, Peter Richt\'arik, Martin Tak\'a\v{c} | null | 1410.4744 | null | null |
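The scheme described above alternates one full-gradient computation with many variance-reduced stochastic steps over mini-batches of size b. A hedged sketch for an l1-regularized least-squares instance is given below; the proximal step, step size, stage and iteration counts, and toy data are illustrative assumptions, not the authors' code or analysis.

```python
# Mini-batch semi-stochastic gradient sketch: one deterministic full gradient
# per stage, then variance-reduced proximal steps on mini-batches of size b.
import numpy as np

def prox_l1(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ms2gd(A, y, lam=0.01, step=0.1, stages=20, inner=100, b=10, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(stages):
        x_ref = x.copy()
        full_grad = A.T @ (A @ x_ref - y) / n            # deterministic step
        for _ in range(inner):
            idx = rng.choice(n, size=b, replace=False)   # mini-batch
            g_x = A[idx].T @ (A[idx] @ x - y[idx]) / b
            g_ref = A[idx].T @ (A[idx] @ x_ref - y[idx]) / b
            v = g_x - g_ref + full_grad                  # variance-reduced grad
            x = prox_l1(x - step * v, step * lam)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 50))
x_true = np.zeros(50)
x_true[:5] = 1.0
x_hat = ms2gd(A, A @ x_true + 0.01 * rng.standard_normal(500))
print("nonzeros recovered:", np.sum(np.abs(x_hat) > 0.05))
```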
A Hierarchical Multi-Output Nearest Neighbor Model for Multi-Output
Dependence Learning | stat.ML cs.LG | Multi-Output Dependence (MOD) learning is a generalization of standard
classification problems that allows for multiple outputs that are dependent on
each other. A primary issue that arises in the context of MOD learning is that
for any given input pattern there can be multiple correct output patterns. This
changes the learning task from function approximation to relation
approximation. Previous algorithms do not consider this problem, and thus
cannot be readily applied to MOD problems. To perform MOD learning, we
introduce the Hierarchical Multi-Output Nearest Neighbor model (HMONN) that
employs a basic learning model for each output and a modified nearest neighbor
approach to refine the initial results.
| Richard G. Morris and Tony Martinez and Michael R. Smith | null | 1410.4777 | null | null |
Generalized Conditional Gradient for Sparse Estimation | math.OC cs.LG stat.ML | Structured sparsity is an important modeling tool that expands the
applicability of convex formulations for data analysis, however it also creates
significant challenges for efficient algorithm design. In this paper we
investigate the generalized conditional gradient (GCG) algorithm for solving
structured sparse optimization problems---demonstrating that, with some
enhancements, it can provide a more efficient alternative to current state of
the art approaches. After providing a comprehensive overview of the convergence
properties of GCG, we develop efficient methods for evaluating polar operators,
a subroutine that is required in each GCG iteration. In particular, we show how
the polar operator can be efficiently evaluated in two important scenarios:
dictionary learning and structured sparse estimation. A further improvement is
achieved by interleaving GCG with fixed-rank local subspace optimization. A
series of experiments on matrix completion, multi-class classification,
multi-view dictionary learning and overlapping group lasso shows that the
proposed method can significantly reduce the training cost of current
alternatives.
| Yaoliang Yu, Xinhua Zhang, and Dale Schuurmans | null | 1410.4828 | null | null |
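To make the polar-operator subroutine in the GCG abstract above concrete, here is a standard conditional-gradient (Frank-Wolfe) sketch for trace-norm-constrained matrix completion: the polar operator of the nuclear norm is the top singular vector pair of the gradient. This is the constrained variant with the classical step size, not the paper's GCG for penalized formulations, and the fixed-rank local subspace optimization is omitted.

```python
import numpy as np

def polar_nuclear(G):
    """Polar operator of the nuclear norm: argmax_{||S||_* <= 1} <S, G>,
    given by the top singular vector pair of G."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return np.outer(U[:, 0], Vt[0])

def frank_wolfe_completion(M_obs, mask, tau=10.0, iters=200):
    """Conditional-gradient sketch for min_{||X||_* <= tau} 0.5*||mask*(X - M_obs)||_F^2.
    Illustrative only; not the authors' penalized GCG with local search."""
    X = np.zeros_like(M_obs)
    for t in range(iters):
        grad = mask * (X - M_obs)
        # Linear minimization over the trace-norm ball via the polar operator.
        S = -tau * polar_nuclear(grad)
        gamma = 2.0 / (t + 2.0)  # classical Frank-Wolfe step size
        X = (1 - gamma) * X + gamma * S
    return X
```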
Gaussian Process Models with Parallelization and GPU acceleration | cs.DC cs.LG stat.ML | In this work, we present an extension of Gaussian process (GP) models with
sophisticated parallelization and GPU acceleration. The parallelization scheme
arises naturally from the modular computational structure w.r.t. datapoints in
the sparse Gaussian process formulation. Additionally, the computational
bottleneck is implemented with GPU acceleration for further speed up. Combining
both techniques allows applying Gaussian process models to millions of
datapoints. The efficiency of our algorithm is demonstrated with a synthetic
dataset. Its source code has been integrated into our popular software library
GPy.
| Zhenwen Dai, Andreas Damianou, James Hensman, Neil Lawrence | null | 1410.4984 | null | null |
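Since the abstract above says the extension is integrated into GPy, a short baseline usage sketch of sparse GP regression in GPy is shown below. The parallelization and GPU switches the paper adds are not shown, since their exact API is not given here; the toy data and the number of inducing points are arbitrary choices.

```python
import numpy as np
import GPy  # library into which the abstract says the extension was integrated

# Toy 1-D regression data.
X = np.random.uniform(-3.0, 3.0, (500, 1))
Y = np.sin(X) + 0.1 * np.random.randn(500, 1)

# Baseline sparse GP regression; the paper's parallel/GPU options are not shown here.
kernel = GPy.kern.RBF(input_dim=1)
m = GPy.models.SparseGPRegression(X, Y, kernel=kernel, num_inducing=20)
m.optimize()

mean, var = m.predict(np.linspace(-3, 3, 50).reshape(-1, 1))
print(mean[:3].ravel(), var[:3].ravel())
```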
On Bootstrapping Machine Learning Performance Predictors via Analytical
Models | cs.PF cs.LG | Performance modeling typically relies on two antithetic methodologies: white
box models, which exploit knowledge of a system's internals and capture its
dynamics using analytical approaches, and black box techniques, which infer
relations among the input and output variables of a system based on the
evidence gathered during an initial training phase. In this paper we
investigate a technique, which we name Bootstrapping, that aims at reconciling
these two methodologies and at compensating for the cons of one with the pros
of the other. We thoroughly analyze the design space of this gray box modeling
technique, and identify a number of algorithmic and parametric trade-offs which
we evaluate via two realistic case studies, a Key-Value Store and a Total Order
Broadcast service.
| Diego Didona and Paolo Romano | null | 1410.5102 | null | null |
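A minimal sketch of the gray-box idea described in the Bootstrapping abstract above: seed a black-box learner with synthetic samples generated by a white-box analytical model, then update it as real measurements arrive. The analytical model (an M/M/1-style response-time formula), the learner, the update policy, and all data below are illustrative assumptions, not the paper's case studies.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def analytical_response_time(arrival_rate, service_rate=100.0):
    """Placeholder white-box model: M/M/1 mean response time (illustrative choice)."""
    rho = np.minimum(arrival_rate / service_rate, 0.999)
    return 1.0 / (service_rate * (1.0 - rho))

# Step 1: bootstrap the black-box learner with synthetic samples from the analytical model.
rates = np.linspace(1, 95, 200).reshape(-1, 1)
synthetic_y = analytical_response_time(rates.ravel())
model = GradientBoostingRegressor().fit(rates, synthetic_y)

# Step 2: as measurements arrive, retrain on the union of synthetic and observed data
# (one of several possible update policies; the paper studies such trade-offs).
observed_X = np.array([[10.0], [50.0], [90.0]])
observed_y = np.array([0.012, 0.025, 0.15])  # made-up measurements for illustration
model.fit(np.vstack([rates, observed_X]), np.concatenate([synthetic_y, observed_y]))

print(model.predict(np.array([[70.0]])))
```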
On Iterative Hard Thresholding Methods for High-dimensional M-Estimation | cs.LG stat.ML | The use of M-estimators in generalized linear regression models in high
dimensional settings requires risk minimization with hard $L_0$ constraints. Of
the known methods, the class of projected gradient descent (also known as
iterative hard thresholding (IHT)) methods is known to offer the fastest and
most scalable solutions. However, the current state-of-the-art is only able to
analyze these methods in extremely restrictive settings which do not hold in
high dimensional statistical models. In this work we bridge this gap by
providing the first analysis for IHT-style methods in the high dimensional
statistical setting. Our bounds are tight and match known minimax lower bounds.
Our results rely on a general analysis framework that enables us to analyze
several popular hard thresholding style algorithms (such as HTP, CoSaMP, SP) in
the high dimensional regression setting. We also extend our analysis to a large
family of "fully corrective methods" that includes two-stage and partial
hard-thresholding algorithms. We show that our results hold for the problem of
sparse regression, as well as low-rank matrix recovery.
| Prateek Jain, Ambuj Tewari, Purushottam Kar | null | 1410.5137 | null | null |
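The IHT abstract above concerns projected gradient methods with a hard sparsity constraint; the sketch below shows the plain IHT iteration for sparse least-squares regression (a gradient step followed by keeping the s largest-magnitude coordinates). The step-size rule, iteration count, and synthetic data are illustrative, and none of the paper's statistical analysis is reproduced.

```python
import numpy as np

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -s)[-s:]
    out[idx] = v[idx]
    return out

def iht(A, y, s, step=None, iters=300):
    """Iterative hard thresholding for min_x 0.5*||Ax - y||^2 s.t. ||x||_0 <= s.
    A plain projected-gradient sketch of the algorithm family analyzed in the paper."""
    n, d = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step from the spectral norm
    x = np.zeros(d)
    for _ in range(iters):
        x = hard_threshold(x - step * A.T @ (A @ x - y), s)
    return x

# Usage on a synthetic sparse regression instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 400))
x_true = np.zeros(400); x_true[rng.choice(400, 5, replace=False)] = rng.standard_normal(5)
y = A @ x_true
print(np.nonzero(iht(A, y, s=5))[0])  # recovered support
```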
Naive Bayes and Text Classification I - Introduction and Theory | cs.LG | Naive Bayes classifiers, a family of classifiers that are based on the
popular Bayes' probability theorem, are known for creating simple yet well
performing models, especially in the fields of document classification and
disease prediction. In this article, we will look at the main concepts of naive
Bayes classification in the context of document categorization.
| Sebastian Raschka | null | 1410.5329 | null | null |
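As a companion to the naive Bayes overview above, a small document-categorization sketch using multinomial naive Bayes from scikit-learn; the corpus and labels are made-up toy examples.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy document-categorization example (made-up corpus).
docs = ["the team won the game", "stocks fell sharply today",
        "the player scored a goal", "markets rallied after the report"]
labels = ["sports", "finance", "sports", "finance"]

vec = CountVectorizer()
X = vec.fit_transform(docs)           # bag-of-words counts
clf = MultinomialNB().fit(X, labels)  # Bayes' theorem with the naive independence assumption

print(clf.predict(vec.transform(["the goal decided the game"])))
```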
An Overview of General Performance Metrics of Binary Classifier Systems | cs.LG | This document provides a brief overview of different metrics and terminology
that is used to measure the performance of binary classification systems.
| Sebastian Raschka | null | 1410.5330 | null | null |
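A compact reference sketch of the standard binary-classification metrics surveyed above, computed directly from confusion-matrix counts; the counts in the usage line are arbitrary.

```python
def binary_metrics(tp, fp, fn, tn):
    """Standard metrics derived from the confusion matrix of a binary classifier."""
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    precision   = tp / (tp + fp) if (tp + fp) else 0.0
    recall      = tp / (tp + fn) if (tp + fn) else 0.0   # a.k.a. sensitivity, TPR
    specificity = tn / (tn + fp) if (tn + fp) else 0.0   # a.k.a. TNR
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1}

print(binary_metrics(tp=40, fp=10, fn=5, tn=45))
```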
Scalable Parallel Factorizations of SDD Matrices and Efficient Sampling
for Gaussian Graphical Models | cs.DS cs.LG cs.NA math.NA stat.CO stat.ML | Motivated by a sampling problem basic to computational statistical inference,
we develop a nearly optimal algorithm for a fundamental problem in spectral
graph theory and numerical analysis. Given an $n\times n$ SDDM matrix
$\mathbf{M}$, and a constant $-1 \leq p \leq 1$, our algorithm gives efficient
access to a sparse $n\times n$ linear operator $\tilde{\mathbf{C}}$ such that
$${\mathbf{M}}^{p} \approx \tilde{\mathbf{C}} \tilde{\mathbf{C}}^\top.$$ The
solution is based on factoring $\mathbf{M}$ into a product of simple and
sparse matrices using squaring and spectral sparsification. For ${\mathbf{M}}$
with $m$ non-zero entries, our algorithm takes work nearly-linear in $m$, and
polylogarithmic depth on a parallel machine with $m$ processors. This gives the
first sampling algorithm that only requires nearly linear work and $n$ i.i.d.
random univariate Gaussian samples to generate i.i.d. random samples for
$n$-dimensional Gaussian random fields with SDDM precision matrices. For
sampling this natural subclass of Gaussian random fields, it is optimal in the
randomness and nearly optimal in the work and parallel complexity. In addition,
our sampling algorithm can be directly extended to Gaussian random fields with
SDD precision matrices.
| Dehua Cheng, Yu Cheng, Yan Liu, Richard Peng and Shang-Hua Teng | null | 1410.5392 | null | null |
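The abstract above builds a sparse factor precisely so that i.i.d. Gaussians can be mapped to samples with a prescribed SDDM precision matrix. The sketch below only illustrates that sampling identity with a dense Cholesky baseline (if $\mathbf{M}^{-1} = \mathbf{C}\mathbf{C}^\top$ and $z \sim N(0, I)$, then $x = \mathbf{C} z$ has precision $\mathbf{M}$); it does not implement the nearly-linear squaring-and-sparsification construction.

```python
import numpy as np

def sample_with_precision(M, num_samples=5, seed=0):
    """Baseline sampler for x ~ N(0, M^{-1}) given an SPD precision matrix M.

    With M = L L^T (Cholesky), solving L^T x = z for z ~ N(0, I) gives x with
    covariance (L L^T)^{-1} = M^{-1}, i.e. precision M. This is a dense stand-in
    for the paper's sparse factor C, shown only to make the identity concrete.
    """
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(M)                         # M = L L^T
    Z = rng.standard_normal((M.shape[0], num_samples))
    X = np.linalg.solve(L.T, Z)                       # x = L^{-T} z
    return X

# Small SDD-like precision matrix: a path-graph Laplacian plus a diagonal shift.
n = 6
M = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + 0.5 * np.eye(n)
print(sample_with_precision(M).shape)
```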
Improved Asymmetric Locality Sensitive Hashing (ALSH) for Maximum Inner
Product Search (MIPS) | stat.ML cs.DS cs.IR cs.LG | Recently it was shown that the problem of Maximum Inner Product Search (MIPS)
can be solved efficiently and admits provably sub-linear hashing algorithms. Asymmetric
transformations before hashing were the key in solving MIPS which was otherwise
hard. In the prior work, the authors use asymmetric transformations which
convert the problem of approximate MIPS into the problem of approximate near
neighbor search which can be efficiently solved using hashing. In this work, we
provide a different transformation which converts the problem of approximate
MIPS into the problem of approximate cosine similarity search which can be
efficiently solved using signed random projections. Theoretical analysis shows
that the new scheme is significantly better than the original scheme for MIPS.
Experimental evaluations strongly support the theoretical findings.
| Anshumali Shrivastava and Ping Li | null | 1410.5410 | null | null |
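The abstract above reduces MIPS to cosine similarity search solved with signed random projections. The sketch below shows only that hashing component (SimHash, where matching sign bits estimate angular similarity); the paper's specific asymmetric transformation applied to data and queries before hashing is not reproduced here.

```python
import numpy as np

def srp_signatures(X, num_bits=32, seed=0):
    """Signed random projections (SimHash): each bit is the sign of a random projection.
    Two points agree on a bit with probability 1 - theta/pi (theta = angle between them),
    so the fraction of matching bits estimates cosine similarity. The paper's asymmetric
    pre-transformation of data and queries is omitted."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], num_bits))
    return (X @ R) >= 0  # boolean signature matrix, one row per point

rng = np.random.default_rng(1)
data = rng.standard_normal((1000, 64))
query = rng.standard_normal((1, 64))

sig_data = srp_signatures(data, seed=42)
sig_query = srp_signatures(query, seed=42)       # same seed => same projections on both sides
matches = (sig_data == sig_query).mean(axis=1)   # fraction of agreeing bits
print(np.argsort(-matches)[:5])                  # candidates with highest estimated similarity
```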
Machine Learning of Coq Proof Guidance: First Experiments | cs.LO cs.LG | We report the results of the first experiments with learning proof
dependencies from the formalizations done with the Coq system. We explain the
process of obtaining the dependencies from the Coq proofs, the characterization
of formulas that is used for the learning, and the evaluation method. Various
machine learning methods are compared on a dataset of 5021 toplevel Coq proofs
coming from the CoRN repository. The best resulting method covers on average
75% of the needed proof dependencies among the first 100 predictions, which is
comparable to the performance of such initial experiments on other large-theory
corpora.
| Cezary Kaliszyk, Lionel Mamane, Josef Urban | null | 1410.5467 | null | null |
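A small sketch of the evaluation the Coq abstract above reports: for each theorem, rank candidate dependencies and measure the fraction of the truly needed dependencies found among the first k predictions, averaged over the corpus. The lemma names and rankings below are made up for illustration.

```python
def coverage_at_k(ranked_predictions, needed, k=100):
    """Fraction of the needed proof dependencies found among the top-k predictions."""
    top_k = set(ranked_predictions[:k])
    return len(top_k & set(needed)) / len(needed)

def average_coverage(per_theorem, k=100):
    """Average coverage@k over a corpus; per_theorem is a list of (ranking, needed) pairs."""
    return sum(coverage_at_k(r, n, k) for r, n in per_theorem) / len(per_theorem)

# Made-up example with two theorems and tiny rankings.
corpus = [
    (["lemB", "lemA", "lemD", "lemC"], ["lemB", "lemD"]),
    (["lemX", "lemY", "lemZ"], ["lemZ", "lemQ"]),
]
print(average_coverage(corpus, k=3))  # 1.0 and 0.5 -> 0.75
```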
Feature Selection Based on Confidence Machine | cs.LG | In machine learning and pattern recognition, feature selection has been a hot
topic in the literature. Unsupervised feature selection is challenging due to
the absence of labels, which would otherwise supply the relevant information.
How to define an appropriate metric is the key issue for feature selection. We
propose a filter method for unsupervised feature selection which is based on
the Confidence Machine. The Confidence Machine offers an estimate of confidence
in a feature's reliability. In this paper, we provide the mathematical model of
the Confidence Machine in the context of feature selection, which maximizes the
relevance and minimizes the redundancy of the selected features. We compare our
method against the classic feature selection methods Laplacian Score, Pearson
Correlation and Principal
Component Analysis on benchmark data sets. The experimental results demonstrate
the efficiency and effectiveness of our method.
| Chang Liu and Yi Xu | null | 1410.5473 | null | null |
Using Mechanical Turk to Build Machine Translation Evaluation Sets | cs.CL cs.LG stat.ML | Building machine translation (MT) test sets is a relatively expensive task.
As MT becomes increasingly desired for more and more language pairs and more
and more domains, it becomes necessary to build test sets for each case. In
this paper, we investigate using Amazon's Mechanical Turk (MTurk) to make MT
test sets cheaply. We find that MTurk can be used to make test sets much
cheaper than professionally-produced test sets. More importantly, in
experiments with multiple MT systems, we find that the MTurk-produced test sets
yield essentially the same conclusions regarding system performance as the
professionally-produced test sets yield.
| Michael Bloodgood and Chris Callison-Burch | null | 1410.5491 | null | null |