title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Learning like a Child: Fast Novel Visual Concept Learning from Sentence
Descriptions of Images | cs.CV cs.CL cs.LG | In this paper, we address the task of learning novel visual concepts, and
their interactions with other concepts, from a few images with sentence
descriptions. Using linguistic context and visual features, our method is able
to efficiently hypothesize the semantic meaning of new words and add them to
its word dictionary so that they can be used to describe images which contain
these novel concepts. Our method has an image captioning module based on m-RNN
with several improvements. In particular, we propose a transposed weight
sharing scheme, which not only improves performance on image captioning, but
also makes the model more suitable for the novel concept learning task. We
propose methods to prevent overfitting the new concepts. In addition, three
novel concept datasets are constructed for this new task. In the experiments,
we show that our method effectively learns novel visual concepts from a few
examples without disturbing the previously learned concepts. The project page
is http://www.stat.ucla.edu/~junhua.mao/projects/child_learning.html
| Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan Yuille | null | 1504.06692 | null | null |
Max-margin Deep Generative Models | cs.LG cs.CV | Deep generative models (DGMs) are effective at learning multilayered
representations of complex data and performing inference on input data by
exploiting their generative ability. However, little work has been done on
examining or empowering the discriminative ability of DGMs on making accurate
predictions. This paper presents max-margin deep generative models (mmDGMs),
which explore the strongly discriminative principle of max-margin learning to
improve the discriminative power of DGMs, while retaining the generative
capability. We develop an efficient doubly stochastic subgradient algorithm for
the piecewise linear objective. Empirical results on MNIST and SVHN datasets
demonstrate that (1) max-margin learning can significantly improve the
prediction performance of DGMs and meanwhile retain the generative ability; and
(2) mmDGMs are competitive to the state-of-the-art fully discriminative
networks by employing deep convolutional neural networks (CNNs) as both
recognition and generative models.
| Chongxuan Li and Jun Zhu and Tianlin Shi and Bo Zhang | null | 1504.06787 | null | null |
Overlapping Communities Detection via Measure Space Embedding | cs.LG cs.SI stat.ML | We present a new algorithm for community detection. The algorithm uses random
walks to embed the graph in a space of measures, after which a modification of
$k$-means in that space is applied. The algorithm is therefore fast and easily
parallelizable. We evaluate the algorithm on standard random graph benchmarks,
including some overlapping community benchmarks, and find its performance to be
better or at least as good as previously known algorithms. We also prove a
linear time (in number of edges) guarantee for the algorithm on a
$p,q$-stochastic block model with $p \geq c\cdot N^{-\frac{1}{2} + \epsilon}$
and $p-q \geq c' \sqrt{p N^{-\frac{1}{2} + \epsilon} \log N}$.
| Mark Kozdoba and Shie Mannor | null | 1504.06796 | null | null |
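A minimal sketch of the idea described in the abstract above, assuming a dense adjacency matrix and substituting plain k-means for the paper's modified $k$-means in measure space; the helper names (`walk_embedding`, `detect_communities`) are ours, not the authors':

```python
import numpy as np
from sklearn.cluster import KMeans

def walk_embedding(A, t=3):
    """Embed each node as its t-step random-walk distribution (a measure)."""
    P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    return np.linalg.matrix_power(P, t)    # row i = walk measure seeded at node i

def detect_communities(A, k, t=3, seed=0):
    X = walk_embedding(A, t)
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)

# toy graph: two 4-cliques joined by a single edge
A = np.zeros((8, 8))
A[:4, :4] = 1.0
A[4:, 4:] = 1.0
A[0, 4] = A[4, 0] = 1.0
np.fill_diagonal(A, 0.0)
print(detect_communities(A, k=2))   # two blocks of identical labels
```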
Overlapping Community Detection by Online Cluster Aggregation | cs.LG cs.SI physics.soc-ph | We present a new online algorithm for detecting overlapping communities. The
main ingredients are a modification of an online k-means algorithm and a new
approach to modelling overlap in communities. An evaluation on large benchmark
graphs shows that the quality of discovered communities compares favorably to
several methods in the recent literature, while the running time is
significantly improved.
| Mark Kozdoba and Shie Mannor | null | 1504.06798 | null | null |
Analysis of Nuclear Norm Regularization for Full-rank Matrix Completion | cs.LG stat.ML | In this paper, we provide a theoretical analysis of the nuclear-norm
regularized least squares for full-rank matrix completion. Although similar
formulations have been examined by previous studies, their results are
unsatisfactory because only additive upper bounds are provided. Under the
assumption that the top eigenspaces of the target matrix are incoherent, we
derive a relative upper bound for recovering the best low-rank approximation of
the unknown matrix. Our relative upper bound is tighter than previous additive
bounds of other methods if the mass of the target matrix is concentrated on its
top eigenspaces, and also implies perfect recovery if it is low-rank. The
analysis is built upon the optimality condition of the regularized formulation
and existing guarantees for low-rank matrix completion. To the best of our
knowledge, this is the first time such a relative bound is proved for the
regularized formulation of matrix completion.
| Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou | null | 1504.06817 | null | null |
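For illustration, here is a generic proximal-gradient solver for the nuclear-norm regularized least-squares formulation analyzed above, using singular value thresholding as the proximal step; the step size, regularization weight, and iteration count are arbitrary choices for this sketch, not values from the paper:

```python
import numpy as np

def svt(Z, tau):
    """Singular value thresholding: the prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete(M_obs, mask, lam=0.5, step=1.0, iters=300):
    """Proximal gradient on 0.5*||P_Omega(X - M)||_F^2 + lam*||X||_*."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        grad = mask * (X - M_obs)             # gradient of the data-fit term
        X = svt(X - step * grad, step * lam)  # shrinkage (proximal) step
    return X

rng = np.random.default_rng(0)
M = rng.normal(size=(30, 5)) @ rng.normal(size=(5, 30))  # rank-5 target
mask = (rng.random(M.shape) < 0.5).astype(float)         # observe half
X = complete(M * mask, mask)
print(np.linalg.norm(X - M) / np.linalg.norm(M))         # relative error
```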
Comparison of Training Methods for Deep Neural Networks | cs.LG cs.AI | This report describes the difficulties of training neural networks and in
particular deep neural networks. It then provides a literature review of
training methods for deep neural networks, with a focus on pre-training. It
focuses on Deep Belief Networks composed of Restricted Boltzmann Machines and
Stacked Autoencoders and provides an outlook on further and alternative
approaches. It also includes related practical recommendations from the
literature on training them. In the second part, initial experiments using some
of the covered methods are performed on two databases. In particular,
experiments are performed on the MNIST hand-written digit dataset and on facial
emotion data from a Kaggle competition. The results are discussed in the
context of results reported in other research papers. An error rate lower than
the best contribution to the Kaggle competition is achieved using an optimized
Stacked Autoencoder.
| Patrick O. Glauner | null | 1504.06825 | null | null |
Assessing binary classifiers using only positive and unlabeled data | stat.ML cs.IR cs.LG | Assessing the performance of a learned model is a crucial part of machine
learning. However, in some domains only positive and unlabeled examples are
available, which prohibits the use of most standard evaluation metrics. We
propose an approach to estimate any metric based on contingency tables,
including ROC and PR curves, using only positive and unlabeled data. Estimating
these performance metrics is essentially reduced to estimating the fraction of
(latent) positives in the unlabeled set, assuming known positives are a random
sample of all positives. We provide theoretical bounds on the quality of our
estimates, illustrate the importance of estimating the fraction of positives in
the unlabeled set and demonstrate empirically that we are able to reliably
estimate ROC and PR curves on real data.
| Marc Claesen, Jesse Davis, Frank De Smet, Bart De Moor | null | 1504.06837 | null | null |
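A small sketch of the core identity behind this approach: if known positives are a random sample of all positives and the fraction `alpha` of latent positives in the unlabeled set is available, an ROC point can be recovered from positive and unlabeled score counts. The function name and toy data below are ours:

```python
import numpy as np

def pu_roc_point(scores_pos, scores_unl, alpha, thresh):
    """Estimate an (FPR, TPR) point from positive and unlabeled scores.

    alpha: assumed fraction of latent positives in the unlabeled set; known
    positives are assumed to be a random sample of all positives.
    """
    tpr = np.mean(scores_pos >= thresh)    # positives fire at this rate
    p_unl = np.mean(scores_unl >= thresh)  # unlabeled fire at this rate
    # unlabeled mixes positives and negatives: p_unl = alpha*TPR + (1-alpha)*FPR
    fpr = np.clip((p_unl - alpha * tpr) / (1.0 - alpha), 0.0, 1.0)
    return fpr, tpr

rng = np.random.default_rng(1)
scores_pos = rng.normal(1.0, 1.0, 500)                       # labeled positives
unl = np.r_[rng.normal(1.0, 1.0, 300), rng.normal(-1.0, 1.0, 700)]
print([pu_roc_point(scores_pos, unl, alpha=0.3, thresh=t) for t in (0.0, 1.0)])
```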
FlowNet: Learning Optical Flow with Convolutional Networks | cs.CV cs.LG | Convolutional neural networks (CNNs) have recently been very successful in a
variety of computer vision tasks, especially on those linked to recognition.
Optical flow estimation has not been among the tasks where CNNs were
successful. In this paper we construct appropriate CNNs which are capable of
solving the optical flow estimation problem as a supervised learning task. We
propose and compare two architectures: a generic architecture and another one
including a layer that correlates feature vectors at different image locations.
Since existing ground truth data sets are not sufficiently large to train a
CNN, we generate a synthetic Flying Chairs dataset. We show that networks
trained on this unrealistic data still generalize very well to existing
datasets such as Sintel and KITTI, achieving competitive accuracy at frame
rates of 5 to 10 fps.
| Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip H\"ausser, Caner
Haz{\i}rba\c{s}, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers,
Thomas Brox | null | 1504.06852 | null | null |
Autonomy and Reliability of Continuous Active Learning for
Technology-Assisted Review | cs.IR cs.LG | We enhance the autonomy of the continuous active learning method shown by
Cormack and Grossman (SIGIR 2014) to be effective for technology-assisted
review, in which documents from a collection are retrieved and reviewed, using
relevance feedback, until substantially all of the relevant documents have been
reviewed. Autonomy is enhanced through the elimination of topic-specific and
dataset-specific tuning parameters, so that the sole input required by the user
is, at the outset, a short query, topic description, or single relevant
document; and, throughout the review, ongoing relevance assessments of the
retrieved documents. We show that our enhancements consistently yield superior
results to Cormack and Grossman's version of continuous active learning, and
other methods, not only on average, but on the vast majority of topics from
four separate sets of tasks: the legal datasets examined by Cormack and
Grossman, the Reuters RCV1-v2 subject categories, the TREC 6 AdHoc task, and
the construction of the TREC 2002 filtering test collection.
| Gordon V. Cormack and Maura R. Grossman | null | 1504.06868 | null | null |
Linear Spatial Pyramid Matching Using Non-convex and non-negative Sparse
Coding for Image Classification | cs.CV cs.LG | Recently, sparse coding has been highly successful in image classification,
mainly due to its capability of incorporating the sparsity of image
representation. In this paper, we propose an improved sparse coding model based
on linear spatial pyramid matching (SPM) and Scale Invariant Feature Transform
(SIFT) descriptors. The novelty lies in the simultaneous non-convexity and
non-negativity constraints added to the sparse coding model. Our numerical
experiments show that the improved approach using non-convex and non-negative
sparse coding is superior to the original ScSPM [1] on several typical
databases.
| Chengqiang Bao and Liangtian He and Yilun Wang | null | 1504.06897 | null | null |
Algorithms with Logarithmic or Sublinear Regret for Constrained
Contextual Bandits | cs.LG stat.ML | We study contextual bandits with budget and time constraints, referred to as
constrained contextual bandits. The time and budget constraints significantly
complicate the exploration and exploitation tradeoff because they introduce
complex coupling among contexts over time. Such coupling effects make it
difficult to obtain oracle solutions that assume known statistics of bandits.
To gain insight, we first study unit-cost systems with known context
distribution. When the expected rewards are known, we develop an approximation
of the oracle, referred to as Adaptive-Linear-Programming (ALP), which achieves
near-optimality and only requires the ordering of expected rewards. With these
highly desirable features, we then combine ALP with the upper-confidence-bound
(UCB) method in the general case where the expected rewards are unknown {\it a
priori}. We show that the proposed UCB-ALP algorithm achieves logarithmic
regret except for certain boundary cases. Further, we design algorithms and
obtain similar regret analysis results for more general systems with unknown
context distribution and heterogeneous costs. To the best of our knowledge,
this is the first work that shows how to achieve logarithmic regret in
constrained contextual bandits. Moreover, this work also sheds light on the
study of computationally efficient algorithms for general constrained
contextual bandits.
| Huasen Wu, R. Srikant, Xin Liu, and Chong Jiang | null | 1504.06937 | null | null |
Random Forest for the Contextual Bandit Problem - extended version | cs.LG | To address the contextual bandit problem, we propose an online random forest
algorithm. The analysis of the proposed algorithm is based on the sample
complexity needed to find the optimal decision stump. Then, the decision stumps
are assembled into a random collection of decision trees, called Bandit Forest. We show
that the proposed algorithm is optimal up to logarithmic factors. The
dependence of the sample complexity upon the number of contextual variables is
logarithmic. The computational cost of the proposed algorithm with respect to
the time horizon is linear. These analytical results allow the proposed
algorithm to be efficient in real applications, where the number of events to
process is huge, and where we expect that some contextual variables, chosen
from a large set, have potentially non-linear dependencies with the rewards.
In the experiments done to illustrate the theoretical analysis, Bandit Forest
obtains promising results in comparison with state-of-the-art algorithms.
| Rapha\"el F\'eraud and Robin Allesiardo and Tanguy Urvoy and Fabrice
Cl\'erot | null | 1504.06952 | null | null |
Accelerated kernel discriminant analysis | cs.LG | In this paper, using a novel matrix factorization and simultaneous reduction
to diagonal form approach (in short, the simultaneous reduction approach),
Accelerated Kernel Discriminant Analysis (AKDA) and Accelerated Kernel Subclass
Discriminant Analysis (AKSDA) are proposed. Specifically, instead of performing
the simultaneous reduction of the between- and within-class or subclass scatter
matrices, the nonzero eigenpairs (NZEP) of the so-called core matrix, which is
of relatively small dimensionality, and the Cholesky factorization of the
kernel matrix are computed, achieving more than one order of magnitude speed up
over kernel discriminant analysis (KDA). Moreover, because they consist of a few
elementary matrix operations and very stable numerical algorithms, AKDA and
AKSDA offer improved classification accuracy. The experimental evaluation on
various datasets confirms that the proposed approaches provide state-of-the-art
performance in terms of both training time and classification accuracy.
| Nikolaos Gkalelis and Vasileios Mezaris | null | 1504.07000 | null | null |
An Active Learning Based Approach For Effective Video Annotation And
Retrieval | cs.MM cs.IR cs.LG | Conventional multimedia annotation/retrieval systems such as Normalized
Continuous Relevance Model (NormCRM) [16] require a fully labeled training data
for a good performance. Active Learning, by determining an order for labeling
the training data, allows for a good performance even before the training data
is fully annotated. In this work we propose an active learning algorithm, which
combines a novel measure of sample uncertainty with a novel clustering-based
approach for determining sample density and diversity, and integrates it with
NormCRM. The clusters are also iteratively refined to ensure both feature and
label-level agreement among samples. We show that our approach outperforms
multiple baselines both on a recent, open character animation dataset and on
the popular TRECVID corpus at both the tasks of annotation and text-based
retrieval of videos.
| Moitreya Chatterjee and Anton Leuski | null | 1504.07004 | null | null |
Fast Sampling for Bayesian Max-Margin Models | stat.ML cs.AI cs.LG | Bayesian max-margin models have shown superiority in various practical
applications, such as text categorization, collaborative prediction, social
network link prediction and crowdsourcing, and they conjoin the flexibility of
Bayesian modeling and the predictive strengths of max-margin learning. However,
Monte Carlo sampling for these models still remains challenging, especially for
applications that involve large-scale datasets. In this paper, we present the
stochastic subgradient Hamiltonian Monte Carlo (HMC) methods, which are easy to
implement and computationally efficient. We show the approximate detailed
balance property of subgradient HMC which reveals a natural and validated
generalization of the ordinary HMC. Furthermore, we investigate the variants
that use stochastic subsampling and thermostats for better scalability and
mixing. Using stochastic subgradient Markov Chain Monte Carlo (MCMC), we
efficiently solve the posterior inference task of various Bayesian max-margin
models and extensive experimental results demonstrate the effectiveness of our
approach.
| Wenbo Hu, Jun Zhu, Bo Zhang | null | 1504.07107 | null | null |
Meta learning of bounds on the Bayes classifier error | cs.LG astro-ph.SR cs.CV cs.IT math.IT | Meta learning uses information from base learners (e.g. classifiers or
estimators) as well as information about the learning problem to improve upon
the performance of a single base learner. For example, the Bayes error rate of
a given feature space, if known, can be used to aid in choosing a classifier,
as well as in feature selection and model selection for the base classifiers
and the meta classifier. Recent work in the field of f-divergence functional
estimation has led to the development of simple and rapidly converging
estimators that can be used to estimate various bounds on the Bayes error. We
estimate multiple bounds on the Bayes error using an estimator that applies
meta learning to slowly converging plug-in estimators to obtain the parametric
convergence rate. We compare the estimated bounds empirically on simulated data
and then estimate the tighter bounds on features extracted from an image patch
analysis of sunspot continuum and magnetogram images.
| Kevin R. Moon, Veronique Delouille, Alfred O. Hero III | 10.1109/DSP-SPE.2015.7369520 | 1504.07116 | null | null |
Spectral MLE: Top-$K$ Rank Aggregation from Pairwise Comparisons | cs.LG cs.DS cs.IT math.IT math.ST stat.ML stat.TH | This paper explores the preference-based top-$K$ rank aggregation problem.
Suppose that a collection of items is repeatedly compared in pairs, and one
wishes to recover a consistent ordering that emphasizes the top-$K$ ranked
items, based on partially revealed preferences. We focus on the
Bradley-Terry-Luce (BTL) model that postulates a set of latent preference
scores underlying all items, where the odds of paired comparisons depend only
on the relative scores of the items involved.
We characterize the minimax limits on identifiability of top-$K$ ranked
items, in the presence of random and non-adaptive sampling. Our results
highlight a separation measure that quantifies the gap of preference scores
between the $K^{\text{th}}$ and $(K+1)^{\text{th}}$ ranked items. The minimum
sample complexity required for reliable top-$K$ ranking scales inversely with
the separation measure irrespective of other preference distribution metrics.
To approach this minimax limit, we propose a nearly linear-time ranking scheme,
called \emph{Spectral MLE}, that returns the indices of the top-$K$ items in
accordance with a careful score estimate. In a nutshell, Spectral MLE starts with
an initial score estimate with minimal squared loss (obtained via a spectral
method), and then successively refines each component with the assistance of
coordinate-wise MLEs. Encouragingly, Spectral MLE allows perfect top-$K$ item
identification under minimal sample complexity. The practical applicability of
Spectral MLE is further corroborated by numerical experiments.
| Yuxin Chen, Changho Suh | null | 1504.07218 | null | null |
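The sketch below illustrates only the spectral initialization step (the stationary distribution of a pairwise-comparison Markov chain, in the spirit of rank centrality), not the coordinate-wise MLE refinement that completes Spectral MLE; the normalization by `d_max` and the dense-matrix eigensolver are simplifying assumptions of ours:

```python
import numpy as np

def spectral_scores(wins, d_max):
    """Stationary distribution of a pairwise-comparison chain as BTL scores.

    wins[i, j]: empirical fraction of i-vs-j comparisons won by j
    (0 where the pair was never compared); d_max bounds the comparison degree.
    """
    P = wins / d_max                          # off-diagonal transition rates
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))  # lazy self-loops: rows sum to 1
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    return np.abs(pi) / np.abs(pi).sum()

def top_k(wins, d_max, k):
    return np.argsort(spectral_scores(wins, d_max))[::-1][:k]

# toy BTL instance with latent scores 4 > 3 > 2 > 1 and exact win fractions
w = np.array([4.0, 3.0, 2.0, 1.0])
wins = w[None, :] / (w[None, :] + w[:, None])  # P(column item beats row item)
print(top_k(wins, d_max=4, k=2))               # -> items 0 and 1
```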
Correlational Neural Networks | cs.CL cs.LG cs.NE stat.ML | Common Representation Learning (CRL), wherein different descriptions (or
views) of the data are embedded in a common subspace, has recently been
receiving a lot of attention. Two popular paradigms here are Canonical Correlation
Analysis (CCA) based approaches and Autoencoder (AE) based approaches. CCA
based approaches learn a joint representation by maximizing correlation of the
views when projected to the common subspace. AE based methods learn a common
representation by minimizing the error of reconstructing the two views. Each of
these approaches has its own advantages and disadvantages. For example, while
CCA based approaches outperform AE based approaches for the task of transfer
learning, they are not as scalable as the latter. In this work we propose an AE
based approach called Correlational Neural Network (CorrNet), that explicitly
maximizes correlation among the views when projected to the common subspace.
Through a series of experiments, we demonstrate that the proposed CorrNet is
better than the above mentioned approaches with respect to its ability to learn
correlated common representations. Further, we employ CorrNet for several cross
language tasks and show that the representations learned using CorrNet perform
better than the ones learned using other state of the art approaches.
| Sarath Chandar, Mitesh M. Khapra, Hugo Larochelle, Balaraman Ravindran | null | 1504.07225 | null | null |
Sign Stable Random Projections for Large-Scale Learning | stat.ML cs.LG stat.CO | We study the use of "sign $\alpha$-stable random projections" (where
$0<\alpha\leq 2$) for building basic data processing tools in the context of
large-scale machine learning applications (e.g., classification, regression,
clustering, and near-neighbor search). After the processing by sign stable
random projections, the inner products of the processed data approximate
various types of nonlinear kernels depending on the value of $\alpha$. Thus,
this approach provides an effective strategy for approximating nonlinear
learning algorithms essentially at the cost of linear learning. When $\alpha
=2$, it is known that the corresponding nonlinear kernel is the arc-cosine
kernel. When $\alpha=1$, the procedure approximates the arc-cos-$\chi^2$ kernel
(under a certain condition). When $\alpha\rightarrow0+$, it corresponds to the
resemblance kernel.
From practitioners' perspective, the method of sign $\alpha$-stable random
projections is ready to be tested for large-scale learning applications, where
$\alpha$ can be simply viewed as a tuning parameter. What is missing in the
literature is an extensive empirical study to show the effectiveness of sign
stable random projections, especially for $\alpha\neq 2$ or 1. The paper
supplies such a study on a wide variety of classification datasets. In
particular, we compare, side by side, sign stable random projections with
the recently proposed "0-bit consistent weighted sampling (CWS)" (Li 2015).
| Ping Li | null | 1504.07235 | null | null |
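A minimal sketch of sign $\alpha$-stable random projections: generate symmetric $\alpha$-stable projection matrices with the Chambers-Mallows-Stuck method and keep only the signs. The function names are ours, and the resulting $\pm 1$ features would then be fed to a linear learner:

```python
import numpy as np

def sym_stable(alpha, size, rng):
    """Symmetric alpha-stable samples via the Chambers-Mallows-Stuck method."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * V) / W) ** ((1.0 - alpha) / alpha))

def sign_stable_projections(X, k, alpha=1.0, seed=0):
    """Map each row of X to the signs of k alpha-stable random projections."""
    rng = np.random.default_rng(seed)
    R = sym_stable(alpha, (X.shape[1], k), rng)
    return np.sign(X @ R)   # +/-1 features; alpha acts as a tuning parameter

X = np.random.default_rng(2).normal(size=(10, 100))
Z = sign_stable_projections(X, k=64, alpha=1.0)  # feed Z to a linear learner
```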
Surrogate regret bounds for generalized classification performance
metrics | cs.LG | We consider optimization of generalized performance metrics for binary
classification by means of surrogate losses. We focus on a class of metrics,
which are linear-fractional functions of the false positive and false negative
rates (examples of which include $F_{\beta}$-measure, Jaccard similarity
coefficient, AM measure, and many others). Our analysis concerns the following
two-step procedure. First, a real-valued function $f$ is learned by minimizing
a surrogate loss for binary classification on the training sample. It is
assumed that the surrogate loss is a strongly proper composite loss function
(examples of which include logistic loss, squared-error loss, exponential loss,
etc.). Then, given $f$, a threshold $\widehat{\theta}$ is tuned on a separate
validation sample, by direct optimization of the target performance metric. We
show that the regret of the resulting classifier (obtained from thresholding
$f$ on $\widehat{\theta}$) measured with respect to the target metric is
upper-bounded by the regret of $f$ measured with respect to the surrogate loss.
We also extend our results to cover multilabel classification and provide
regret bounds for micro- and macro-averaging measures. Our findings are further
analyzed in a computational study on both synthetic and real data sets.
| Wojciech Kot{\l}owski, Krzysztof Dembczy\'nski | null | 1504.07272 | null | null |
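The two-step procedure analyzed above is straightforward to sketch: fit a model under a strongly proper composite surrogate (logistic loss here), then tune the threshold on a validation sample directly for the target metric. This uses scikit-learn for convenience and is an illustration, not the authors' code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import fbeta_score

def fit_and_tune(X_tr, y_tr, X_val, y_val, beta=1.0):
    """Step 1: minimize a strongly proper composite surrogate (logistic loss).
    Step 2: tune the threshold on validation data for the target metric."""
    f = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    scores = f.decision_function(X_val)
    theta = max(np.unique(scores),
                key=lambda t: fbeta_score(y_val, (scores >= t).astype(int),
                                          beta=beta))
    return f, theta   # classify new x as: f.decision_function(x) >= theta
```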
Private Disclosure of Information in Health Tele-monitoring | cs.CR cs.AI cs.IT cs.LG math.IT | We present a novel framework, called Private Disclosure of Information (PDI),
which is aimed to prevent an adversary from inferring certain sensitive
information about subjects using the data that they disclosed during
communication with an intended recipient. We show cases where it is possible to
achieve perfect privacy regardless of the adversary's auxiliary knowledge while
preserving full utility of the information to the intended recipient and
provide sufficient conditions for such cases. We also demonstrate the
applicability of PDI on a real-world data set that simulates a health
tele-monitoring scenario.
| Daniel Aranki and Ruzena Bajcsy | null | 1504.07313 | null | null |
Lexical Translation Model Using a Deep Neural Network Architecture | cs.CL cs.LG cs.NE | In this paper we combine the advantages of a model using global source
sentence contexts, the Discriminative Word Lexicon, and neural networks. By
using deep neural networks instead of the linear maximum entropy model in the
Discriminative Word Lexicon models, we are able to leverage dependencies
between different source words due to the non-linearity. Furthermore, the
models for different target words can share parameters and therefore data
sparsity problems are effectively reduced.
By using this approach in a state-of-the-art translation system, we can
improve the performance by up to 0.5 BLEU points for three different language
pairs on the TED translation task.
| Thanh-Le Ha, Jan Niehues, Alex Waibel | null | 1504.07395 | null | null |
Deep Neural Networks Regularization for Structured Output Prediction | cs.LG stat.ML | A deep neural network model is a powerful framework for learning
representations. Usually, it is used to learn the relation $x \to y$ by
exploiting the regularities in the input $x$. In structured output prediction
problems, $y$ is multi-dimensional and structural relations often exist between
the dimensions. The motivation of this work is to learn the output dependencies
that may lie in the output data in order to improve the prediction accuracy.
Unfortunately, feedforward networks are unable to exploit the relations between
the outputs. In order to overcome this issue, we propose in this paper a
regularization scheme for training neural networks for these particular tasks
using a multi-task framework. Our scheme aims at incorporating the learning of
the output representation $y$ in the training process in an unsupervised
fashion while learning the supervised mapping function $x \to y$.
We evaluate our framework on a facial landmark detection problem which is a
typical structured output task. We show over two public challenging datasets
(LFPW and HELEN) that our regularization scheme improves the generalization of
deep neural networks and accelerates their training. The use of unlabeled data
and label-only data is also explored, showing an additional improvement of the
results. We provide an opensource implementation
(https://github.com/sbelharbi/structured-output-ae) of our framework.
| Soufiane Belharbi and Romain H\'erault and Cl\'ement Chatelain and
S\'ebastien Adam | null | 1504.07550 | null | null |
Differentially Private Release and Learning of Threshold Functions | cs.CR cs.LG | We prove new upper and lower bounds on the sample complexity of $(\epsilon,
\delta)$ differentially private algorithms for releasing approximate answers to
threshold functions. A threshold function $c_x$ over a totally ordered domain
$X$ evaluates to $c_x(y) = 1$ if $y \le x$, and evaluates to $0$ otherwise. We
give the first nontrivial lower bound for releasing thresholds with
$(\epsilon,\delta)$ differential privacy, showing that the task is impossible
over an infinite domain $X$, and moreover requires sample complexity $n \ge
\Omega(\log^*|X|)$, which grows with the size of the domain. Inspired by the
techniques used to prove this lower bound, we give an algorithm for releasing
thresholds with $n \le 2^{(1+ o(1))\log^*|X|}$ samples. This improves the
previous best upper bound of $8^{(1 + o(1))\log^*|X|}$ (Beimel et al., RANDOM
'13).
Our sample complexity upper and lower bounds also apply to the tasks of
learning distributions with respect to Kolmogorov distance and of properly PAC
learning thresholds with differential privacy. The lower bound gives the first
separation between the sample complexity of properly learning a concept class
with $(\epsilon,\delta)$ differential privacy and learning without privacy. For
properly learning thresholds in $\ell$ dimensions, this lower bound extends to
$n \ge \Omega(\ell \cdot \log^*|X|)$.
To obtain our results, we give reductions in both directions from releasing
and properly learning thresholds and the simpler interior point problem. Given
a database $D$ of elements from $X$, the interior point problem asks for an
element between the smallest and largest elements in $D$. We introduce new
recursive constructions for bounding the sample complexity of the interior
point problem, as well as further reductions and techniques for proving
impossibility results for other basic problems in differential privacy.
| Mark Bun and Kobbi Nissim and Uri Stemmer and Salil Vadhan | null | 1504.07553 | null | null |
Becoming the Expert - Interactive Multi-Class Machine Teaching | cs.CV cs.LG stat.ML | Compared to machines, humans are extremely good at classifying images into
categories, especially when they possess prior knowledge of the categories at
hand. If this prior information is not available, supervision in the form of
teaching images is required. To learn categories more quickly, people should
see important and representative images first, followed by less important
images later - or not at all. However, image-importance is individual-specific,
i.e. a teaching image is important to a student if it changes their overall
ability to discriminate between classes. Further, students keep learning, so
while image-importance depends on their current knowledge, it also varies with
time.
In this work we propose an Interactive Machine Teaching algorithm that
enables a computer to teach challenging visual concepts to a human. Our
adaptive algorithm chooses, online, which labeled images from a teaching set
should be shown to the student as they learn. We show that a teaching strategy
that probabilistically models the student's ability and progress, based on
their correct and incorrect answers, produces better 'experts'. We present
results using real human participants across several varied and challenging
real-world datasets.
| Edward Johns and Oisin Mac Aodha and Gabriel J. Brostow | null | 1504.07575 | null | null |
Or's of And's for Interpretable Classification, with Application to
Context-Aware Recommender Systems | cs.LG | We present a machine learning algorithm for building classifiers that are
comprised of a small number of disjunctions of conjunctions (or's of and's). An
example of a classifier of this form is as follows: If X satisfies (x1 = 'blue'
AND x3 = 'middle') OR (x1 = 'blue' AND x2 = '<15') OR (x1 = 'yellow'), then we
predict that Y=1, ELSE predict Y=0. An attribute-value pair is called a literal
and a conjunction of literals is called a pattern. Models of this form have the
advantage of being interpretable to human experts, since they produce a set of
conditions that concisely describe a specific class. We present two
probabilistic models for forming a pattern set, one with a Beta-Binomial prior,
and the other with Poisson priors. In both cases, there are prior parameters
that the user can set to encourage the model to have a desired size and shape,
to conform with a domain-specific definition of interpretability. We provide
two scalable MAP inference approaches: a pattern level search, which involves
association rule mining, and a literal level search. We show stronger priors
reduce computation. We apply the Bayesian Or's of And's (BOA) model to predict
user behavior with respect to in-vehicle context-aware personalized recommender
systems.
| Tong Wang, Cynthia Rudin, Finale Doshi-Velez, Yimin Liu, Erica
Klampfl, Perry MacNeille | null | 1504.07614 | null | null |
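The model form (an OR of ANDs over attribute-value literals) is easy to make concrete; the sketch below only shows prediction with a fixed pattern set, using the example classifier from the abstract. Learning the pattern set via the Beta-Binomial or Poisson priors and MAP search is the paper's contribution and is not sketched here:

```python
def matches(pattern, x):
    """A pattern is a conjunction (AND) of attribute-value literals."""
    return all(x.get(attr) == val for attr, val in pattern.items())

def boa_predict(pattern_set, x):
    """Predict 1 if any pattern in the set fires (an OR of ANDs), else 0."""
    return int(any(matches(p, x) for p in pattern_set))

# the example classifier from the abstract
patterns = [{"x1": "blue", "x3": "middle"},
            {"x1": "blue", "x2": "<15"},
            {"x1": "yellow"}]
print(boa_predict(patterns, {"x1": "blue", "x2": "<15", "x3": "low"}))   # 1
print(boa_predict(patterns, {"x1": "red", "x2": "<15", "x3": "middle"}))  # 0
```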
Nearly Optimal Deterministic Algorithm for Sparse Walsh-Hadamard
Transform | cs.IT cs.CC cs.LG math.FA math.IT | For every fixed constant $\alpha > 0$, we design an algorithm for computing
the $k$-sparse Walsh-Hadamard transform of an $N$-dimensional vector $x \in
\mathbb{R}^N$ in time $k^{1+\alpha} (\log N)^{O(1)}$. Specifically, the
algorithm is given query access to $x$ and computes a $k$-sparse $\tilde{x} \in
\mathbb{R}^N$ satisfying $\|\tilde{x} - \hat{x}\|_1 \leq c \|\hat{x} -
H_k(\hat{x})\|_1$, for an absolute constant $c > 0$, where $\hat{x}$ is the
transform of $x$ and $H_k(\hat{x})$ is its best $k$-sparse approximation. Our
algorithm is fully deterministic and only uses non-adaptive queries to $x$
(i.e., all queries are determined and performed in parallel when the algorithm
starts).
An important technical tool that we use is a construction of nearly optimal
and linear lossless condensers which is a careful instantiation of the GUV
condenser (Guruswami, Umans, Vadhan, JACM 2009). Moreover, we design a
deterministic and non-adaptive $\ell_1/\ell_1$ compressed sensing scheme based
on general lossless condensers that is equipped with a fast reconstruction
algorithm running in time $k^{1+\alpha} (\log N)^{O(1)}$ (for the GUV-based
condenser) and is of independent interest. Our scheme significantly simplifies
and improves an earlier expander-based construction due to Berinde, Gilbert,
Indyk, Karloff, Strauss (Allerton 2008).
Our methods use linear lossless condensers in a black box fashion; therefore,
any future improvement on explicit constructions of such condensers would
immediately translate to improved parameters in our framework (potentially
leading to $k (\log N)^{O(1)}$ reconstruction time with a reduced exponent in
the poly-logarithmic factor, and eliminating the extra parameter $\alpha$).
Finally, by allowing the algorithm to use randomness, while still using
non-adaptive queries, the running time of the algorithm can be improved to
$\tilde{O}(k \log^3 N)$.
| Mahdi Cheraghchi, Piotr Indyk | null | 1504.07648 | null | null |
Evaluation of Explore-Exploit Policies in Multi-result Ranking Systems | cs.LG | We analyze the problem of using Explore-Exploit techniques to improve
precision in multi-result ranking systems such as web search, query
autocompletion and news recommendation. Adopting an exploration policy directly
online, without understanding its impact on the production system, may have
unwanted consequences - the system may sustain large losses, create user
dissatisfaction, or collect exploration data which does not help improve
ranking quality. An offline framework is thus necessary to let us decide which
policy to apply in a production environment, and how, to ensure a positive
outcome. Here, we describe such an offline framework.
Using the framework, we study a popular exploration policy - Thompson
sampling. We show that there are different ways of implementing it in
multi-result ranking systems, each having different semantic interpretation and
leading to different results in terms of sustained click-through-rate (CTR)
loss and expected model improvement. In particular, we demonstrate that
Thompson sampling can act as an online learner optimizing CTR, which in some
cases can lead to an interesting outcome: lift in CTR during exploration. The
observation is important for production systems as it suggests that one can get
both valuable exploration data to improve ranking performance on the long run,
and at the same time increase CTR while exploration lasts.
| Dragomir Yankov, Pavel Berkhin, Lihong Li | null | 1504.07662 | null | null |
Explaining the Success of AdaBoost and Random Forests as Interpolating
Classifiers | stat.ML cs.LG stat.ME | There is a large literature explaining why AdaBoost is a successful
classifier. The literature on AdaBoost focuses on classifier margins and
boosting's interpretation as the optimization of an exponential likelihood
function. These existing explanations, however, have been pointed out to be
incomplete. A random forest is another popular ensemble method for which there
is substantially less explanation in the literature. We introduce a novel
perspective on AdaBoost and random forests that proposes that the two
algorithms work for similar reasons. While both classifiers achieve similar
predictive accuracy, random forests cannot be conceived as a direct
optimization procedure. Rather, random forests is a self-averaging,
interpolating algorithm which creates what we denote as a "spikey-smooth"
classifier, and we view AdaBoost in the same light. We conjecture that both
AdaBoost and random forests succeed because of this mechanism. We provide a
number of examples and some theoretical justification to support this
explanation. In the process, we question the conventional wisdom that suggests
that boosting algorithms for classification require regularization or early
stopping and should be limited to low complexity classes of learners, such as
decision stumps. We conclude that boosting should be used like random forests:
with large decision trees and without direct regularization or early stopping.
| Abraham J. Wyner, Matthew Olson, Justin Bleich, David Mease | null | 1504.07676 | null | null |
Dual Averaging on Compactly-Supported Distributions And Application to
No-Regret Learning on a Continuum | cs.LG math.OC | We consider an online learning problem on a continuum. A decision maker is
given a compact feasible set $S$, and is faced with the following sequential
problem: at iteration~$t$, the decision maker chooses a distribution $x^{(t)}
\in \Delta(S)$, then a loss function $\ell^{(t)} : S \to \mathbb{R}_+$ is
revealed, and the decision maker incurs expected loss $\langle \ell^{(t)},
x^{(t)} \rangle = \mathbb{E}_{s \sim x^{(t)}} \ell^{(t)}(s)$. We view the
problem as an online convex optimization problem on the space $\Delta(S)$ of
Lebesgue-continuous distributions on $S$. We prove a general regret bound for
the Dual Averaging method on $L^2(S)$, then prove that dual averaging with
$\omega$-potentials (a class of strongly convex regularizers) achieves
sublinear regret when $S$ is uniformly fat (a condition weaker than convexity).
| Walid Krichene | null | 1504.07720 | null | null |
Market forecasting using Hidden Markov Models | stat.ML cs.LG | Working on daily closing prices and log-returns, in this paper we deal
with the use of Hidden Markov Models (HMMs) to forecast the price of the
EUR/USD Futures. The aim of our work is to understand how HMMs describe
different financial time series depending on their structure. Subsequently, we
analyse the forecasting methods presented in the previous literature,
highlighting their pros and cons.
| Sara Rebagliati and Emanuela Sasso and Samuele Soraggi | null | 1504.07829 | null | null |
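A generic sketch of fitting an HMM to log-returns with the hmmlearn library; the three-state model, the library choice, and the one-step-ahead forecast heuristic (averaging state means under the transition row of the last decoded state) are our assumptions, not the paper's exact method:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# log-returns from daily closing prices (synthetic stand-in for EUR/USD data)
prices = 10.0 + np.cumsum(np.random.default_rng(3).normal(0.0, 0.01, 500))
X = np.diff(np.log(prices)).reshape(-1, 1)

model = GaussianHMM(n_components=3, covariance_type="full", n_iter=200)
model.fit(X)

# heuristic one-step forecast: average state means under the transition row
last_state = model.predict(X)[-1]
forecast = float(model.transmat_[last_state] @ model.means_.ravel())
print(forecast)
```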
ASTROMLSKIT: A New Statistical Machine Learning Toolkit: A Platform for
Data Analytics in Astronomy | cs.CE astro-ph.IM cs.LG | Astroinformatics is a new impact area in the world of astronomy, occasionally
called the final frontier, where several astrophysicists, statisticians and
computer scientists work together to tackle various data-intensive astronomical
problems. Exponential growth in data volume and increased complexity of the
data add difficult questions to the existing challenges. Classical
complex data, rendering the task of classification and interpretation
incredibly laborious. The presence of noise in the data makes analysis and
interpretation even more arduous. Machine learning algorithms and data analytic
techniques provide the right platform for the challenges posed by these
problems. A diverse range of open problem like star-galaxy separation,
detection and classification of exoplanets, classification of supernovae is
discussed. The focus of the paper is the applicability and efficacy of various
machine learning algorithms like K Nearest Neighbor (KNN), random forest (RF),
decision tree (DT), Support Vector Machine (SVM), Na\"ive Bayes and Linear
Discriminant Analysis (LDA) in analysis and inference of the decision theoretic
problems in Astronomy. The machine learning algorithms, integrated into
ASTROMLSKIT, a toolkit developed in the course of the work, have been used to
analyze HabCat data and supernovae data. Accuracy has been found to be
appreciably good.
| Snehanshu Saha, Surbhi Agrawal, Manikandan. R, Kakoli Bora, Swati
Routh, Anand Narasimhamurthy | null | 1504.07865 | null | null |
Learning Contextualized Music Semantics from Tags via a Siamese Network | cs.LG | Music information retrieval faces a challenge in modeling contextualized
musical concepts formulated by a set of co-occurring tags. In this paper, we
investigate the suitability of our recently proposed approach based on a
Siamese neural network in addressing this challenge. By means of tag features
and probabilistic topic models, the network captures contextualized semantics
from tags via unsupervised learning. This leads to a distributed semantics
space and a potential solution to the out-of-vocabulary problem, which has yet
to be sufficiently addressed. We explore the nature of the resultant
music-based semantics and address computational needs. We conduct experiments
on three public music tag collections (namely, CAL500, MagTag5K and the Million
Song Dataset) and compare our approach to a number of state-of-the-art
semantics learning approaches. Comparative results suggest that this approach
outperforms previous approaches in terms of semantic priming and music tag
completion.
| Ubai Sandouk and Ke Chen | null | 1504.07968 | null | null |
Who Spoke What? A Latent Variable Framework for the Joint Decoding of
Multiple Speakers and their Keywords | cs.SD cs.LG | In this paper, we present a latent variable (LV) framework to identify all
the speakers and their keywords given a multi-speaker mixture signal. We
introduce two separate LVs to denote active speakers and the keywords uttered.
The dependency of a spoken keyword on the speaker is modeled through a
conditional probability mass function. The distribution of the mixture signal
is expressed in terms of the LV mass functions and speaker-specific-keyword
models. The proposed framework admits stochastic models, representing the
probability density function of the observation vectors given that a particular
speaker uttered a specific keyword, as speaker-specific-keyword models. The LV
mass functions are estimated in a Maximum Likelihood framework using the
Expectation Maximization (EM) algorithm. The active speakers and their keywords
are detected as modes of the joint distribution of the two LVs. In mixture
signals, containing two speakers uttering the keywords simultaneously, the
proposed framework achieves an accuracy of 82% for detecting both the speakers
and their respective keywords, using Student's-t mixture models as
speaker-specific-keyword models.
| Harshavardhan Sundar and Thippur V. Sreenivas | null | 1504.08021 | null | null |
A Deep Learning Model for Structured Outputs with High-order Interaction | cs.LG cs.NE | Many real-world applications are associated with structured data, where not
only the input but also the output exhibits internal interplay. However, typical classification and
regression models often lack the ability of simultaneously exploring high-order
interaction within input and that within output. In this paper, we present a
deep learning model aiming to generate a powerful nonlinear functional mapping
from structured input to structured output. More specifically, we propose to
integrate high-order hidden units, guided discriminative pretraining, and
high-order auto-encoders for this purpose. We evaluate the model with three
datasets, and obtain state-of-the-art performances among competitive methods.
Our current work focuses on structured output regression, which is a less
explored area, although the model can be extended to handle structured label
classification.
| Hongyu Guo, Xiaodan Zhu, Martin Renqiang Min | null | 1504.08022 | null | null |
Note on Equivalence Between Recurrent Neural Network Time Series Models
and Variational Bayesian Models | cs.LG | We observe that the standard log likelihood training objective for a
Recurrent Neural Network (RNN) model of time series data is equivalent to a
variational Bayesian training objective, given the proper choice of generative
and inference models. This perspective may motivate extensions to both RNNs and
variational Bayesian models. We propose one such extension, where multiple
particles are used for the hidden state of an RNN, allowing a natural
representation of uncertainty or multimodality.
| Jascha Sohl-Dickstein, Diederik P. Kingma | null | 1504.08025 | null | null |
Semi-Orthogonal Multilinear PCA with Relaxed Start | stat.ML cs.CV cs.LG | Principal component analysis (PCA) is an unsupervised method for learning
low-dimensional features with orthogonal projections. Multilinear PCA methods
extend PCA to deal with multidimensional data (tensors) directly via
tensor-to-tensor projection or tensor-to-vector projection (TVP). However,
under the TVP setting, it is difficult to develop an effective multilinear PCA
method with the orthogonality constraint. This paper tackles this problem by
proposing a novel Semi-Orthogonal Multilinear PCA (SO-MPCA) approach. SO-MPCA
learns low-dimensional features directly from tensors via TVP by imposing the
orthogonality constraint in only one mode. This formulation results in more
captured variance and more learned features than full orthogonality. For better
generalization, we further introduce a relaxed start (RS) strategy to get
SO-MPCA-RS by fixing the starting projection vectors, which increases the bias
and reduces the variance of the learning model. Experiments on both face (2D)
and gait (3D) data demonstrate that SO-MPCA-RS outperforms other competing
algorithms on the whole, and the relaxed start strategy is also effective for
other TVP-based PCA methods.
| Qiquan Shi and Haiping Lu | null | 1504.08142 | null | null |
Multi-user lax communications: a multi-armed bandit approach | cs.LG cs.MA | Inspired by cognitive radio networks, we consider a setting where multiple
users share several channels modeled as a multi-user multi-armed bandit (MAB)
problem. The characteristics of each channel are unknown and are different for
each user. Each user can choose between the channels, but her success depends
on the particular channel chosen as well as on the selections of other users:
if two users select the same channel their messages collide and none of them
manages to send any data. Our setting is fully distributed, so there is no
central control. As in many communication systems, the users cannot set up a
direct communication protocol, so information exchange must be limited to a
minimum. We develop an algorithm for learning a stable configuration for the
multi-user MAB problem. We further offer both convergence guarantees and
experiments inspired by real communication networks, including comparison to
state-of-the-art algorithms.
| Orly Avner and Shie Mannor | null | 1504.08167 | null | null |
Model Selection and Overfitting in Genetic Programming: Empirical Study
[Extended Version] | cs.NE cs.LG | Genetic Programming has been very successful in solving a large area of
problems but its use as a machine learning algorithm has been limited so far.
One of the reasons is the problem of overfitting which cannot be solved or
suppressed as easily as in more traditional approaches. Another problem, closely
related to overfitting, is the selection of the final model from the
population.
In this article we present our research that addresses both problems:
overfitting and model selection. We compare several ways of dealing with
overfitting, based on the Random Sampling Technique (RST) and on using a validation
set, all with an emphasis on model selection. We subject each approach to a
thorough testing on artificial and real-world datasets and compare them with
the standard approach, which uses the full training data, as a baseline.
| Jan \v{Z}egklitz and Petr Po\v{s}\'ik | null | 1504.08168 | null | null |
Lateral Connections in Denoising Autoencoders Support Supervised
Learning | cs.LG cs.NE stat.ML | We show how a deep denoising autoencoder with lateral connections can be used
as an auxiliary unsupervised learning task to support supervised learning. The
proposed model is trained to minimize simultaneously the sum of supervised and
unsupervised cost functions by back-propagation, avoiding the need for
layer-wise pretraining. It improves the state of the art significantly in the
permutation-invariant MNIST classification task.
| Antti Rasmus, Harri Valpola, Tapani Raiko | null | 1504.08215 | null | null |
Hierarchical Subquery Evaluation for Active Learning on a Graph | cs.CV cs.LG stat.ML | To train good supervised and semi-supervised object classifiers, it is
critical that we not waste the time of the human experts who are providing the
training labels. Existing active learning strategies can have uneven
performance, being efficient on some datasets but wasteful on others, or
inconsistent just between runs on the same dataset. We propose perplexity based
graph construction and a new hierarchical subquery evaluation algorithm to
combat this variability, and to release the potential of Expected Error
Reduction.
Under some specific circumstances, Expected Error Reduction has been one of
the strongest-performing informativeness criteria for active learning. Until
now, it has also been prohibitively costly to compute for sizeable datasets. We
demonstrate our highly practical algorithm, comparing it to other active
learning measures on classification datasets that vary in sparsity,
dimensionality, and size. Our algorithm is consistent over multiple runs and
achieves high accuracy, while querying the human expert for labels at a
frequency that matches their desired time budget.
| Oisin Mac Aodha and Neill D.F. Campbell and Jan Kautz and Gabriel J.
Brostow | null | 1504.08219 | null | null |
Deep Neural Networks with Random Gaussian Weights: A Universal
Classification Strategy? | cs.NE cs.LG stat.ML | Three important properties of a classification machinery are: (i) the system
preserves the core information of the input data; (ii) the training examples
convey information about unseen data; and (iii) the system is able to treat
differently points from different classes. In this work we show that these
fundamental properties are satisfied by the architecture of deep neural
networks. We formally prove that these networks with random Gaussian weights
perform a distance-preserving embedding of the data, with a special treatment
for in-class and out-of-class data. Similar points at the input of the network
are likely to have a similar output. The theoretical analysis of deep networks
here presented exploits tools used in the compressed sensing and dictionary
learning literature, thereby making a formal connection between these important
topics. The derived results allow drawing conclusions on the metric learning
properties of the network and their relation to its structure, as well as
providing bounds on the required size of the training set such that the
training examples would represent faithfully the unseen data. The results are
validated with state-of-the-art trained networks.
| Raja Giryes and Guillermo Sapiro and Alex M. Bronstein | 10.1109/TSP.2016.2546221 | 1504.08291 | null | null |
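A quick empirical illustration of the distance-preservation claim (not the paper's proof): pass points through a few layers of i.i.d. Gaussian weights with ReLU activations and check how well pairwise distances correlate before and after. The width choices and He-style scaling are ours:

```python
import numpy as np

def random_relu_net(x, widths, rng):
    """Forward pass through layers of i.i.d. Gaussian weights with ReLU."""
    h = x
    for w_out in widths:
        W = rng.normal(0.0, np.sqrt(2.0 / h.shape[1]), (h.shape[1], w_out))
        h = np.maximum(h @ W, 0.0)
    return h

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 128))
Y = random_relu_net(X, widths=[256, 256, 256], rng=rng)

# correlate pairwise distances before and after the random embedding
d_in = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
d_out = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
i, j = np.triu_indices(50, k=1)
print(np.corrcoef(d_in[i, j], d_out[i, j])[0, 1])   # typically close to 1
```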
On the Structure, Covering, and Learning of Poisson Multinomial
Distributions | cs.DS cs.LG math.PR math.ST stat.TH | An $(n,k)$-Poisson Multinomial Distribution (PMD) is the distribution of the
sum of $n$ independent random vectors supported on the set ${\cal
B}_k=\{e_1,\ldots,e_k\}$ of standard basis vectors in $\mathbb{R}^k$. We prove
a structural characterization of these distributions, showing that, for all
$\varepsilon >0$, any $(n, k)$-Poisson multinomial random vector is
$\varepsilon$-close, in total variation distance, to the sum of a discretized
multidimensional Gaussian and an independent $(\text{poly}(k/\varepsilon),
k)$-Poisson multinomial random vector. Our structural characterization extends
the multi-dimensional CLT of Valiant and Valiant, by simultaneously applying to
all approximation requirements $\varepsilon$. In particular, it overcomes
factors depending on $\log n$ and, importantly, the minimum eigenvalue of the
PMD's covariance matrix from the distance to a multidimensional Gaussian random
variable.
We use our structural characterization to obtain an $\varepsilon$-cover, in
total variation distance, of the set of all $(n, k)$-PMDs, significantly
improving the cover size of Daskalakis and Papadimitriou, and obtaining the
same qualitative dependence of the cover size on $n$ and $\varepsilon$ as the
$k=2$ cover of Daskalakis and Papadimitriou. We further exploit this structure
to show that $(n,k)$-PMDs can be learned to within $\varepsilon$ in total
variation distance from $\tilde{O}_k(1/\varepsilon^2)$ samples, which is
near-optimal in terms of dependence on $\varepsilon$ and independent of $n$. In
particular, our result generalizes the single-dimensional result of Daskalakis,
Diakonikolas, and Servedio for Poisson Binomials to arbitrary dimension.
| Constantinos Daskalakis and Gautam Kamath and Christos Tzamos | 10.1109/FOCS.2015.77 | 1504.08363 | null | null |
Thompson Sampling for Budgeted Multi-armed Bandits | cs.LG | Thompson sampling is one of the earliest randomized algorithms for
multi-armed bandits (MAB). In this paper, we extend the Thompson sampling to
Budgeted MAB, where there is random cost for pulling an arm and the total cost
is constrained by a budget. We start with the case of Bernoulli bandits, in
which the random rewards (costs) of an arm are independently sampled from a
Bernoulli distribution. To implement the Thompson sampling algorithm in this
case, at each round, we sample two numbers from the posterior distributions of
the reward and cost for each arm, obtain their ratio, select the arm with the
maximum ratio, and then update the posterior distributions. We prove that the
distribution-dependent regret bound of this algorithm is $O(\ln B)$, where $B$
denotes the budget. By introducing a Bernoulli trial, we further extend this
algorithm to the setting that the rewards (costs) are drawn from general
distributions, and prove that its regret bound remains almost the same. Our
simulation results demonstrate the effectiveness of the proposed algorithm.
| Yingce Xia, Haifang Li, Tao Qin, Nenghai Yu, Tie-Yan Liu | null | 1505.00146 | null | null |
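A minimal sketch of the Bernoulli-bandit case described above: at each round, sample reward and cost beliefs from Beta posteriors, pull the arm with the maximum sampled reward-to-cost ratio, and update the posteriors until the budget is exhausted. The parameter choices in the toy call are arbitrary:

```python
import numpy as np

def budgeted_thompson(means_r, means_c, budget, seed=0):
    """Thompson sampling for Bernoulli rewards and costs under a budget."""
    rng = np.random.default_rng(seed)
    K = len(means_r)
    r_a, r_b = np.ones(K), np.ones(K)   # Beta(1, 1) reward posteriors
    c_a, c_b = np.ones(K), np.ones(K)   # Beta(1, 1) cost posteriors
    total_reward, spent = 0, 0
    while spent < budget:
        theta_r = rng.beta(r_a, r_b)             # sample reward beliefs
        theta_c = rng.beta(c_a, c_b)             # sample cost beliefs
        arm = int(np.argmax(theta_r / theta_c))  # max sampled ratio
        r = int(rng.random() < means_r[arm])     # observe Bernoulli reward
        c = int(rng.random() < means_c[arm])     # observe Bernoulli cost
        r_a[arm] += r; r_b[arm] += 1 - r         # posterior updates
        c_a[arm] += c; c_b[arm] += 1 - c
        total_reward += r; spent += c
    return total_reward

print(budgeted_thompson([0.5, 0.4, 0.7], [0.6, 0.3, 0.9], budget=200))
```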
Theory of Optimizing Pseudolinear Performance Measures: Application to
F-measure | cs.LG | Non-linear performance measures are widely used for the evaluation of
learning algorithms. For example, $F$-measure is a commonly used performance
measure for classification problems in machine learning and information
retrieval community. We study the theoretical properties of a subset of
non-linear performance measures called pseudo-linear performance measures which
includes $F$-measure, \emph{Jaccard Index}, among many others. We establish
that many notions of $F$-measures and \emph{Jaccard Index} are pseudo-linear
functions of the per-class false negatives and false positives for binary,
multiclass and multilabel classification. Based on this observation, we present
a general reduction of such performance measure optimization problem to
cost-sensitive classification problem with unknown costs. We then propose an
algorithm with provable guarantees to obtain an approximately optimal
classifier for the $F$-measure by solving a series of cost-sensitive
classification problems. The strength of our analysis is to be valid on any
dataset and any class of classifiers, extending the existing theoretical
results on pseudo-linear measures, which are asymptotic in nature. We also
establish the multi-objective nature of the $F$-score maximization problem by
linking the algorithm with the weighted-sum approach used in multi-objective
optimization. We present numerical experiments to illustrate the relative
importance of cost asymmetry and thresholding when learning linear classifiers
on various $F$-measure optimization tasks.
| Shameem A Puthiya Parambath, Nicolas Usunier, Yves Grandvalet | null | 1505.00199 | null | null |
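The reduction to cost-sensitive classification can be illustrated with a naive grid sweep over cost asymmetries, training one class-weighted classifier per cost and keeping the best on validation $F_1$; the paper's algorithm chooses the costs with provable guarantees, which this sketch does not attempt:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def f_via_cost_sensitive(X_tr, y_tr, X_val, y_val, n_costs=20):
    """Reduce F-measure optimization to a series of cost-sensitive problems:
    train one class-weighted classifier per cost asymmetry, keep the best F1."""
    best_f, best_clf = -1.0, None
    for w in np.linspace(0.05, 0.95, n_costs):   # weight on the positive class
        clf = LogisticRegression(max_iter=1000,
                                 class_weight={0: 1.0 - w, 1: w}).fit(X_tr, y_tr)
        f = f1_score(y_val, clf.predict(X_val))
        if f > best_f:
            best_f, best_clf = f, clf
    return best_clf, best_f
```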
Grounded Discovery of Coordinate Term Relationships between Software
Entities | cs.CL cs.AI cs.LG cs.SE | We present an approach for the detection of coordinate-term relationships
between entities from the software domain, that refer to Java classes. Usually,
relations are found by examining corpus statistics associated with text
entities. In some technical domains, however, we have access to additional
information about the real-world objects named by the entities, suggesting that
coupling information about the "grounded" entities with corpus statistics might
lead to improved methods for relation discovery. To this end, we develop a
similarity measure for Java classes using distributional information about how
they are used in software, which we combine with corpus statistics on the
distribution of contexts in which the classes appear in text. Using our
approach, cross-validation accuracy on this dataset can be improved
dramatically, from around 60% to 88%. Human labeling results show that our
classifier has an F1 score of 86% over the top 1000 predicted pairs.
| Dana Movshovitz-Attias, William W. Cohen | null | 1505.00277 | null | null |
Algorithms for Lipschitz Learning on Graphs | cs.LG cs.DS math.MG | We develop fast algorithms for solving regression problems on graphs where
one is given the value of a function at some vertices, and must find its
smoothest possible extension to all vertices. The extension we compute is the
absolutely minimal Lipschitz extension, and is the limit for large $p$ of
$p$-Laplacian regularization. We present an algorithm that computes a minimal
Lipschitz extension in expected linear time, and an algorithm that computes an
absolutely minimal Lipschitz extension in expected time $\widetilde{O} (m n)$.
The latter algorithm has variants that seem to run much faster in practice.
These extensions are particularly amenable to regularization: we can perform
$l_{0}$-regularization on the given values in polynomial time and
$l_{1}$-regularization on the initial function values and on graph edge weights
in time $\widetilde{O} (m^{3/2})$.
| Rasmus Kyng, Anup Rao, Sushant Sachdeva and Daniel A. Spielman | null | 1505.00290 | null | null |
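For intuition, the absolutely minimal Lipschitz extension can be approximated by a naive fixed-point iteration that repeatedly sets each unlabeled vertex to the midpoint of its extremal neighbors. The sketch below only illustrates the object being computed on an unweighted graph; it is far slower than the paper's algorithms, and the iteration count is an arbitrary assumption.
```python
# A naive fixed-point sketch of the absolutely minimal Lipschitz extension
# on an unweighted graph: each unlabeled vertex is repeatedly set to the
# midpoint of its extremal neighbors. Illustrative only; not the paper's
# near-linear-time algorithms.
def lipschitz_extension(adjacency, labels, n_iter=1000):
    # adjacency: dict vertex -> list of neighbors; labels: dict of fixed values.
    values = {v: labels.get(v, 0.0) for v in adjacency}
    for _ in range(n_iter):
        for v in adjacency:
            if v not in labels:
                nbr = [values[u] for u in adjacency[v]]
                values[v] = 0.5 * (min(nbr) + max(nbr))
    return values

graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(lipschitz_extension(graph, labels={0: 0.0, 3: 1.0}))
# interior values converge to 1/3 and 2/3, the linear interpolation
```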
Monotonous (Semi-)Nonnegative Matrix Factorization | cs.LG stat.ML | Nonnegative matrix factorization (NMF) factorizes a non-negative matrix into
product of two non-negative matrices, namely a signal matrix and a mixing
matrix. NMF suffers from the scale and ordering ambiguities. Often, the source
signals can be monotonous in nature. For example, in source separation problem,
the source signals can be monotonously increasing or decreasing while the
mixing matrix can have nonnegative entries. NMF methods may not be effective
in such cases, as they suffer from the ordering ambiguity. This paper proposes
an approach to incorporate the notion of monotonicity in NMF, labeled monotonous
NMF. An algorithm based on alternating least-squares is proposed for recovering
monotonous signals from a data matrix. Further, the assumption on mixing matrix
is relaxed to extend monotonous NMF for data matrix with real numbers as
entries. The approach is illustrated using synthetic noisy data. The results
obtained by monotonous NMF are compared with standard NMF algorithms in the
literature, and it is shown that monotonous NMF estimates source signals well
in comparison to standard NMF algorithms when the underlying source signals
are monotonous.
| Nirav Bhatt and Arun Ayyar | 10.1145/2732587.2732600 | 1505.00294 | null | null |
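A rough sketch of the alternating least-squares idea follows, assuming a crude cumulative-maximum projection to keep the signal columns nonnegative and monotonously increasing; the paper's actual algorithm and projection step may differ.
```python
# A hedged ALS sketch for monotonous NMF: alternate least-squares updates,
# then project the signal matrix onto nonnegative, monotonously increasing
# columns. The cumulative-max projection is an illustrative assumption.
import numpy as np

def monotonous_nmf(X, rank, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    S = rng.random((n, rank))     # signal matrix: monotonous columns
    A = rng.random((rank, m))     # nonnegative mixing matrix
    for _ in range(n_iter):
        # Least-squares updates, clipped to stay nonnegative.
        A = np.clip(np.linalg.lstsq(S, X, rcond=None)[0], 0.0, None)
        S = np.linalg.lstsq(A.T, X.T, rcond=None)[0].T
        # Crude projection onto nonnegative, monotonously increasing columns.
        S = np.maximum.accumulate(np.clip(S, 0.0, None), axis=0)
    return S, A

t = np.linspace(0.0, 1.0, 100)
sources = np.stack([t, t ** 2], axis=1)          # two increasing sources
X = sources @ np.array([[1.0, 0.5], [0.2, 1.0]]) # nonnegative mixing
S, A = monotonous_nmf(X, rank=2)
```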
Multi-Object Classification and Unsupervised Scene Understanding Using
Deep Learning Features and Latent Tree Probabilistic Models | cs.CV cs.LG | Deep learning has shown state-of-art classification performance on datasets
such as ImageNet, which contain a single object in each image. However,
multi-object classification is far more challenging. We present a unified
framework which leverages the strengths of multiple machine learning methods,
viz. deep learning, probabilistic models and kernel methods, to obtain
state-of-art performance on Microsoft COCO, consisting of non-iconic images. We
incorporate contextual information in natural images through a conditional
latent tree probabilistic model (CLTM), where the object co-occurrences are
conditioned on the extracted fc7 features from a pre-trained ImageNet CNN as
input. We learn the CLTM tree structure using conditional pairwise
probabilities for object co-occurrences, estimated through kernel methods, and
we learn its node and edge potentials by training a new 3-layer neural network,
which takes fc7 features as input. Object classification is carried out via
inference on the learnt conditional tree model, and we obtain significant gain
in precision-recall and F-measures on MS-COCO, especially for difficult object
categories. Moreover, the latent variables in the CLTM capture scene
information: the images with top activations for a latent node have common
themes, such as grassland or food scenes, and so on. In addition, we
show that a simple k-means clustering of the inferred latent nodes alone
significantly improves scene classification performance on the MIT-Indoor
dataset, without the need for any retraining, and without using scene labels
during training. Thus, we present a unified framework for multi-object
classification and unsupervised scene understanding.
| Tejaswi Nimmagadda and Anima Anandkumar | null | 1505.00308 | null | null |
Deconstructing Principal Component Analysis Using a Data Reconciliation
Perspective | cs.LG cs.SY stat.ME | Data reconciliation (DR) and Principal Component Analysis (PCA) are two
popular data analysis techniques in process industries. Data reconciliation is
used to obtain accurate and consistent estimates of variables and parameters
from erroneous measurements. PCA is primarily used as a method for reducing the
dimensionality of high dimensional data and as a preprocessing technique for
denoising measurements. These techniques have been developed and deployed
independently of each other. The primary purpose of this article is to
elucidate the close relationship between these two seemingly disparate
techniques. This leads to a unified framework for applying PCA and DR. Further,
we show how the two techniques can be deployed together in a collaborative and
consistent manner to process data. The framework has been extended to deal with
partially measured systems and to incorporate partial knowledge available about
the process model.
| Shankar Narasimhan and Nirav Bhatt | 10.1016/j.compchemeng.2015.03.016 | 1505.00314 | null | null |
Using PCA to Efficiently Represent State Spaces | cs.LG cs.RO | Reinforcement learning algorithms need to deal with the exponential growth of
states and actions when exploring optimal control in high-dimensional spaces.
This is known as the curse of dimensionality. By projecting the agent's state
onto a low-dimensional manifold, we can obtain a smaller and more efficient
representation of the state space. By using this representation during
learning, the agent can converge to a good policy much faster. We test this
approach in the Mario Benchmarking Domain. When using dimensionality reduction
in Mario, learning converges much faster to a good policy. But, there is a
critical convergence-performance trade-off. By projecting onto a
low-dimensional manifold, we are ignoring important data. In this paper, we
explore this trade-off between convergence and performance. We find that by
learning in as few as 4 dimensions (instead of 9), we can exceed the
performance of learning in the full-dimensional space while converging faster.
| William Curran, Tim Brys, Matthew Taylor, William Smart | null | 1505.00322 | null | null |
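A minimal sketch of the state-compression step described above, assuming scikit-learn's PCA and synthetic 9-dimensional states (the 9 -> 4 dimensions mirror the numbers quoted in the abstract); the learning algorithm itself is omitted.
```python
# A minimal sketch: project raw agent states onto a low-dimensional PCA
# manifold and learn on the compressed representation. Data is synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
raw_states = rng.normal(size=(5000, 9))          # stand-in for observed states
pca = PCA(n_components=4).fit(raw_states)

def compress(state):
    """Map a 9-dimensional state to its 4-dimensional representation."""
    return pca.transform(state.reshape(1, -1))[0]

print(compress(raw_states[0]).shape)             # (4,)
```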
Can deep learning help you find the perfect match? | cs.LG | Is he/she my type or not? The answer to this question depends on the personal
preferences of the one asking it. The individual process of obtaining a full
answer may generally be difficult and time consuming, but often an approximate
answer can be obtained simply by looking at a photo of the potential match.
Such approximate answers based on visual cues can be produced in a fraction of
a second, a phenomenon that has led to a series of recently successful dating
apps in which users rate others positively or negatively using primarily a
single photo. In this paper we explore using convolutional networks to create a
model of an individual's personal preferences based on rated photos. This
introduced task is difficult due to the large number of variations in profile
pictures and the noise in attractiveness labels. Toward this task we collect a
dataset comprising $9364$ pictures and a binary label for each. We compare
performance of convolutional models trained in three ways: first directly on
the collected dataset, second with features transferred from a network trained
to predict gender, and third with features transferred from a network trained
on ImageNet. Our findings show that ImageNet features transfer best, producing
a model that attains $68.1\%$ accuracy on the test set and is moderately
successful at predicting matches.
| Harm de Vries, Jason Yosinski | null | 1505.00359 | null | null |
Making Sense of Hidden Layer Information in Deep Networks by Learning
Hierarchical Targets | cs.NE cs.LG | This paper proposes an architecture for deep neural networks with hidden
layer branches that learn targets of lower hierarchy than final layer targets.
The branches provide a channel for enforcing useful information in hidden layer
which helps in attaining better accuracy, both for the final layer and hidden
layers. The shared layers modify their weights using the gradients of all cost
functions higher than the branching layer. This model provides a flexible
inference system with many levels of targets which is modular and can be used
efficiently in situations requiring different levels of results according to
complexity. This paper applies the idea to a text classification task on 20
Newsgroups data set with two levels of hierarchical targets, and a comparison is
made with training without the use of hidden layer branches.
| Abhinav Tushar | null | 1505.00384 | null | null |
Highway Networks | cs.LG cs.NE | There is plenty of theoretical and empirical evidence that depth of neural
networks is a crucial ingredient for their success. However, network training
becomes more difficult with increasing depth and training of very deep networks
remains an open problem. In this extended abstract, we introduce a new
architecture designed to ease gradient-based training of very deep networks. We
refer to networks with this architecture as highway networks, since they allow
unimpeded information flow across several layers on "information highways". The
architecture is characterized by the use of gating units which learn to
regulate the flow of information through a network. Highway networks with
hundreds of layers can be trained directly using stochastic gradient descent
and with a variety of activation functions, opening up the possibility of
studying extremely deep and efficient architectures.
| Rupesh Kumar Srivastava, Klaus Greff, J\"urgen Schmidhuber | null | 1505.00387 | null | null |
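The gating mechanism can be summarized as y = H(x) * T(x) + x * (1 - T(x)); below is a single highway layer in NumPy with random placeholder weights. The negative transform-gate bias is a common choice to initially favor carrying the input through, and is an assumption here rather than a quote from the paper.
```python
# One highway layer: a transform gate T(x) blends the layer's nonlinear
# transform H(x) with the unchanged input, y = H(x)*T(x) + x*(1 - T(x)).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, W_h, b_h, W_t, b_t):
    h = np.tanh(x @ W_h + b_h)        # the usual affine + nonlinearity
    t = sigmoid(x @ W_t + b_t)        # transform gate in (0, 1)
    return h * t + x * (1.0 - t)      # carry the rest of x through unchanged

rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=(4, d))
y = highway_layer(x, rng.normal(size=(d, d)), np.zeros(d),
                  rng.normal(size=(d, d)), np.full(d, -2.0))  # bias toward carry
```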
Order-Revealing Encryption and the Hardness of Private Learning | cs.CR cs.CC cs.LG | An order-revealing encryption scheme gives a public procedure by which two
ciphertexts can be compared to reveal the ordering of their underlying
plaintexts. We show how to use order-revealing encryption to separate
computationally efficient PAC learning from efficient $(\epsilon,
\delta)$-differentially private PAC learning. That is, we construct a concept
class that is efficiently PAC learnable, but for which every efficient learner
fails to be differentially private. This answers a question of Kasiviswanathan
et al. (FOCS '08, SIAM J. Comput. '11).
To prove our result, we give a generic transformation from an order-revealing
encryption scheme into one with strongly correct comparison, which enables the
consistent comparison of ciphertexts that are not obtained as the valid
encryption of any message. We believe this construction may be of independent
interest.
| Mark Bun and Mark Zhandry | null | 1505.00388 | null | null |
Block Basis Factorization for Scalable Kernel Matrix Evaluation | stat.ML cs.LG cs.NA math.NA | Kernel methods are widespread in machine learning; however, they are limited
by the quadratic complexity of the construction, application, and storage of
kernel matrices. Low-rank matrix approximation algorithms are widely used to
address this problem and reduce the arithmetic and storage cost. However, we
observed that for some datasets with wide intra-class variability, the optimal
kernel parameter for smaller classes yields a matrix that is less well
approximated by low-rank methods. In this paper, we propose an efficient
structured low-rank approximation method -- the Block Basis Factorization (BBF)
-- and its fast construction algorithm to approximate radial basis function
(RBF) kernel matrices. Our approach has linear cost in memory and floating-point
operations for many machine learning kernels. BBF works for a wide range of
kernel bandwidth parameters and extends the domain of applicability of low-rank
approximation methods significantly. Our empirical results demonstrate the
stability of our method and its superiority over state-of-the-art kernel approximation
algorithms.
| Ruoxi Wang, Yingzhou Li, Michael W. Mahoney, Eric Darve | 10.1137/18M1212586 | 1505.00398 | null | null |
Visualization of Tradeoff in Evaluation: from Precision-Recall & PN to
LIFT, ROC & BIRD | cs.LG cs.AI cs.IR stat.ME stat.ML | Evaluation often aims to reduce the correctness or error characteristics of a
system down to a single number, but that always involves trade-offs. Another
way of dealing with this is to quote two numbers, such as Recall and Precision,
or Sensitivity and Specificity. But it can also be useful to see more than
this, and a graphical approach can explore sensitivity to cost, prevalence,
bias, noise, parameters and hyper-parameters.
Moreover, most techniques are implicitly based on two balanced classes, and
our ability to visualize graphically is intrinsically two dimensional, but we
often want to visualize in a multiclass context. We review the dichotomous
approaches relating to Precision, Recall, and ROC as well as the related LIFT
chart, exploring how they handle unbalanced and multiclass data, and deriving
new probabilistic and information theoretic variants of LIFT that help deal
with the issues associated with the handling of multiple and unbalanced
classes.
| David M. W. Powers | null | 1505.00401 | null | null |
Optimal Time-Series Motifs | cs.AI cs.LG | Motifs are the most repetitive/frequent patterns of a time-series. The
discovery of motifs is crucial for practitioners in order to understand and
interpret the phenomena occurring in sequential data. Currently, motifs are
searched among series sub-sequences, aiming at selecting the most frequently
occurring ones. Search-based methods, which try out series sub-sequences as
motif candidates, are currently believed to be the best methods in finding the
most frequent patterns.
However, this paper proposes an entirely new perspective in finding motifs.
We demonstrate that searching is non-optimal since the domain of motifs is
restricted, and instead we propose a principled optimization approach able to
find optimal motifs. We treat the occurrence frequency as a function and
time-series motifs as its parameters; therefore, we \textit{learn} the optimal
motifs that maximize the frequency function. In contrast to searching, our
method is able to discover the most repetitive patterns (hence optimal), even
in cases where they do not explicitly occur as sub-sequences. Experiments on
several real-life time-series datasets show that the motifs found by our method
are substantially more frequent than the ones found through searching, for exactly the
same distance threshold.
| Josif Grabocka and Nicolas Schilling and Lars Schmidt-Thieme | null | 1505.00423 | null | null |
Kernel Spectral Clustering and applications | cs.LG stat.ML | In this chapter we review the main literature related to kernel spectral
clustering (KSC), an approach to clustering cast within a kernel-based
optimization setting. KSC represents a least-squares support vector machine
based formulation of spectral clustering described by a weighted kernel PCA
objective. Just as in the classifier case, the binary clustering model is
expressed by a hyperplane in a high dimensional space induced by a kernel. In
addition, the multi-way clustering can be obtained by combining a set of binary
decision functions via an Error Correcting Output Codes (ECOC) encoding scheme.
Because of its model-based nature, the KSC method encompasses three main steps:
training, validation, and testing. In the validation stage, model selection is
performed to obtain tuning parameters, like the number of clusters present in
the data. This is a major advantage compared to classical spectral clustering
where the determination of the clustering parameters is unclear and relies on
heuristics. Once a KSC model is trained on a small subset of the entire data,
it is able to generalize well to unseen test points. Beyond the basic
formulation, sparse KSC algorithms based on the Incomplete Cholesky
Decomposition (ICD) and $L_0$, $L_1, L_0 + L_1$, Group Lasso regularization are
reviewed. In that respect, we show how it is possible to handle large scale
data. Also, two possible ways to perform hierarchical clustering and a soft
clustering method are presented. Finally, real-world applications such as image
segmentation, power load time-series clustering, document clustering and big
data learning are considered.
| Rocco Langone, Raghvendra Mall, Carlos Alzate, Johan A. K. Suykens | null | 1505.00477 | null | null |
Risk Bounds For Mode Clustering | math.ST cs.LG stat.ML stat.TH | Density mode clustering is a nonparametric clustering method. The clusters
are the basins of attraction of the modes of a density estimator. We study the
risk of mode-based clustering. We show that the clustering risk over the
cluster cores --- the regions where the density is high --- is very small even
in high dimensions. And under a low noise condition, the overall cluster risk
is small even beyond the cores, in high dimensions.
| Martin Azizyan, Yen-Chi Chen, Aarti Singh and Larry Wasserman | null | 1505.00482 | null | null |
Reinforcement Learning Neural Turing Machines - Revised | cs.LG | The Neural Turing Machine (NTM) is more expressive than all previously
considered models because of its external memory. It can be viewed as a broader
effort to use abstract external Interfaces and to learn a parametric model that
interacts with them.
The capabilities of a model can be extended by providing it with proper
Interfaces that interact with the world. These external Interfaces include
memory, a database, a search engine, or a piece of software such as a theorem
verifier. Some of these Interfaces are provided by the developers of the model.
However, many important existing Interfaces, such as databases and search
engines, are discrete.
We examine the feasibility of learning models to interact with discrete
Interfaces. We investigate the following discrete Interfaces: a memory Tape, an
input Tape, and an output Tape. We use a Reinforcement Learning algorithm to
train a neural network that interacts with such Interfaces to solve simple
algorithmic tasks. Our Interfaces are expressive enough to make our model
Turing complete.
| Wojciech Zaremba and Ilya Sutskever | null | 1505.00521 | null | null |
On Regret-Optimal Learning in Decentralized Multi-player Multi-armed
Bandits | stat.ML cs.LG | We consider the problem of learning in single-player and multiplayer
multiarmed bandit models. Bandit problems are classes of online learning
problems that capture exploration versus exploitation tradeoffs. In a
multiarmed bandit model, players can pick among many arms, and each play of an
arm generates an i.i.d. reward from an unknown distribution. The objective is
to design a policy that maximizes the expected reward over a time horizon for a
single player setting and the sum of expected rewards for the multiplayer
setting. In the multiplayer setting, arms may give different rewards to
different players. There is no separate channel for coordination among the
players. Any attempt at communication is costly and adds to regret. We propose
two decentralizable policies, $\tt E^3$ ($\tt E$-$\tt cubed$) and $\tt
E^3$-$\tt TS$, that can be used in both single player and multiplayer settings.
These policies are shown to yield expected regret that grows at most as
O($\log^{1+\epsilon} T$). It is well known that $\log T$ is the lower bound on
the rate of growth of regret even in a centralized case. The proposed
algorithms improve on prior work where regret grew at O($\log^2 T$). More
fundamentally, these policies address the question of additional cost incurred
in decentralized online learning, suggesting that there is at most an
$\epsilon$-factor cost in terms of order of regret. This solves a problem of
relevance in many domains that had been open for a while.
| Naumaan Nayyar, Dileep Kalathil and Rahul Jain | null | 1505.00553 | null | null |
fastFM: A Library for Factorization Machines | cs.LG cs.IR | Factorization Machines (FM) are only used in a narrow range of applications
and are not part of the standard toolbox of machine learning models. This is a
pity, because even though FMs are recognized as being very successful for
recommender-system-type applications, they are a general model for dealing with
sparse and high-dimensional features. Our Factorization Machine implementation
provides easy access to many solvers and supports regression, classification
and ranking tasks. Such an implementation simplifies the use of FMs for a wide
field of applications. This implementation has the potential to improve our
understanding of the FM model and drive new development.
| Immanuel Bayer | null | 1505.00641 | null | null |
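A hedged usage sketch of the library's sklearn-style interface for a regression task: the module and parameter names follow the fastFM documentation, but exact signatures may vary across versions, and the data here is synthetic.
```python
# A usage sketch of fastFM for regression (API names per its documentation;
# treat exact signatures as version-dependent assumptions).
import numpy as np
import scipy.sparse as sp
from fastFM import als

# fastFM expects sparse design matrices (e.g., one-hot user/item encodings).
X = sp.csc_matrix(np.random.rand(200, 30) > 0.8, dtype=np.float64)
y = np.random.rand(200)

fm = als.FMRegression(n_iter=100, rank=8, l2_reg_w=0.1, l2_reg_V=0.5)
fm.fit(X, y)
predictions = fm.predict(X)
```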
Optimal Learning via the Fourier Transform for Sums of Independent
Integer Random Variables | cs.DS cs.IT cs.LG math.IT math.ST stat.TH | We study the structure and learnability of sums of independent integer random
variables (SIIRVs). For $k \in \mathbb{Z}_{+}$, a $k$-SIIRV of order $n \in
\mathbb{Z}_{+}$ is the probability distribution of the sum of $n$ independent
random variables each supported on $\{0, 1, \dots, k-1\}$. We denote by ${\cal
S}_{n,k}$ the set of all $k$-SIIRVs of order $n$.
In this paper, we tightly characterize the sample and computational
complexity of learning $k$-SIIRVs. More precisely, we design a computationally
efficient algorithm that uses $\widetilde{O}(k/\epsilon^2)$ samples, and learns
an arbitrary $k$-SIIRV within error $\epsilon,$ in total variation distance.
Moreover, we show that the {\em optimal} sample complexity of this learning
problem is $\Theta((k/\epsilon^2)\sqrt{\log(1/\epsilon)}).$ Our algorithm
proceeds by learning the Fourier transform of the target $k$-SIIRV in its
effective support. Its correctness relies on the {\em approximate sparsity} of
the Fourier transform of $k$-SIIRVs -- a structural property that we establish,
roughly stating that the Fourier transform of $k$-SIIRVs has small magnitude
outside a small set.
Along the way we prove several new structural results about $k$-SIIRVs. As
one of our main structural contributions, we give an efficient algorithm to
construct a sparse {\em proper} $\epsilon$-cover for ${\cal S}_{n,k},$ in total
variation distance. We also obtain a novel geometric characterization of the
space of $k$-SIIRVs. Our characterization allows us to prove a tight lower
bound on the size of $\epsilon$-covers for ${\cal S}_{n,k}$, and is the key
ingredient in our tight sample complexity lower bound.
Our approach of exploiting the sparsity of the Fourier transform in
distribution learning is general, and has recently found additional
applications.
| Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart | null | 1505.00662 | null | null |
Interleaved Text/Image Deep Mining on a Large-Scale Radiology Database
for Automated Image Interpretation | cs.CV cs.LG | Despite tremendous progress in computer vision, there has not been an attempt
at machine learning on very large-scale medical image databases. We present an
interleaved text/image deep learning system to extract and mine the semantic
interactions of radiology images and reports from a national research
hospital's Picture Archiving and Communication System. With natural language
processing, we mine a collection of representative ~216K two-dimensional key
images selected by clinicians for diagnostic reference, and match the images
with their descriptions in an automated manner. Our system interleaves between
unsupervised learning and supervised learning on document- and sentence-level
text collections, to generate semantic labels and to predict them given an
image. Given an image of a patient scan, semantic topics at the radiology level
are predicted, and associated key-words are generated. Also, a number of
frequent disease types are detected as present or absent, to provide more
specific interpretation of a patient scan. This shows the potential of
large-scale learning and prediction in electronic patient records available in
most modern clinical institutions.
| Hoo-Chang Shin, Le Lu, Lauren Kim, Ari Seff, Jianhua Yao, Ronald M.
Summers | null | 1505.00670 | null | null |
Self-Expressive Decompositions for Matrix Approximation and Clustering | cs.IT cs.CV cs.LG math.IT stat.ML | Data-aware methods for dimensionality reduction and matrix decomposition aim
to find low-dimensional structure in a collection of data. Classical approaches
discover such structure by learning a basis that can efficiently express the
collection. Recently, "self expression", the idea of using a small subset of
data vectors to represent the full collection, has been developed as an
alternative to learning. Here, we introduce a scalable method for computing
sparse SElf-Expressive Decompositions (SEED). SEED is a greedy method that
constructs a basis by sequentially selecting incoherent vectors from the
dataset. After forming a basis from a subset of vectors in the dataset, SEED
then computes a sparse representation of the dataset with respect to this
basis. We develop sufficient conditions under which SEED exactly represents low
rank matrices and vectors sampled from a unions of independent subspaces. We
show how SEED can be used in applications ranging from matrix approximation and
denoising to clustering, and apply it to numerous real-world datasets. Our
results demonstrate that SEED is an attractive low-complexity alternative to
other sparse matrix factorization approaches such as sparse PCA and
self-expressive methods for clustering.
| Eva L. Dyer, Tom A. Goldstein, Raajen Patel, Konrad P. Kording, and
Richard G. Baraniuk | null | 1505.00824 | null | null |
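A rough sketch of the two stages described above, assuming a simple coherence threshold for the greedy column selection and scikit-learn's OMP for the sparse-coding step; the threshold, sizes, and selection rule are illustrative, not the paper's.
```python
# A hedged SEED-style sketch: greedily select mutually incoherent columns
# as a basis, then sparse-code the full dataset against it with OMP.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def seed_decompose(X, n_basis, coherence_thresh=0.9, n_nonzero=5):
    # Normalize columns, then keep columns incoherent with the current basis.
    cols = X / np.linalg.norm(X, axis=0, keepdims=True)
    basis_idx = [0]
    for j in range(1, cols.shape[1]):
        if len(basis_idx) == n_basis:
            break
        if np.max(np.abs(cols[:, basis_idx].T @ cols[:, j])) < coherence_thresh:
            basis_idx.append(j)
    B = X[:, basis_idx]
    # Sparse-code the full dataset against the selected basis.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False).fit(B, X)
    return B, omp.coef_.T          # X is approximated by B @ coefficients

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 300))
B, C = seed_decompose(X, n_basis=20)
print(B.shape, C.shape)            # (50, 20) (20, 300)
```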
A novel plasticity rule can explain the development of sensorimotor
intelligence | cs.RO cs.LG q-bio.NC | Grounding autonomous behavior in the nervous system is a fundamental
challenge for neuroscience. In particular, the self-organized behavioral
development provides more questions than answers. Are there special functional
units for curiosity, motivation, and creativity? This paper argues that these
features can be grounded in synaptic plasticity itself, without requiring any
higher level constructs. We propose differential extrinsic plasticity (DEP) as
a new synaptic rule for self-learning systems and apply it to a number of
complex robotic systems as a test case. Without specifying any purpose or goal,
seemingly purposeful and adaptive behavior is developed, displaying a certain
level of sensorimotor intelligence. These surprising results require no
system-specific modifications of the DEP rule but arise rather from the underlying
mechanism of spontaneous symmetry breaking due to the tight
brain-body-environment coupling. The new synaptic rule is biologically
plausible and it would be an interesting target for a neurobiological
investigation. We also argue that this neuronal mechanism may have been a
catalyst in natural evolution.
| Ralf Der and Georg Martius | 10.1073/pnas.1508400112 | 1505.00835 | null | null |
Empirical Evaluation of Rectified Activations in Convolutional Network | cs.LG cs.CV stat.ML | In this paper we investigate the performance of different types of rectified
activation functions in convolutional neural network: standard rectified linear
unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified
linear unit (PReLU) and a new randomized leaky rectified linear units (RReLU).
We evaluate these activation function on standard image classification task.
Our experiments suggest that incorporating a non-zero slope for negative part
in rectified activation units could consistently improve the results. Thus our
findings challenge the common belief that sparsity is the key to good
performance in ReLU. Moreover, on small-scale datasets, both using a
deterministic negative slope and learning it are prone to overfitting. They are not as
effective as using their randomized counterpart. By using RReLU, we achieved
75.68\% accuracy on CIFAR-100 test set without multiple test or ensemble.
| Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li | null | 1505.00853 | null | null |
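For reference, the four activations compared above, written in NumPy. The RReLU slope range and its test-time average follow the description above, though the exact sampling convention in the paper may differ.
```python
# The four rectified activations compared in the paper; slope ranges here
# are common conventions, not necessarily the paper's exact choices.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, slope=0.01):
    return np.where(x >= 0, x, slope * x)

def prelu(x, slope):
    # In PReLU, slope is a learned (e.g., per-channel) parameter.
    return np.where(x >= 0, x, slope * x)

def rrelu(x, lower=1/8, upper=1/3, training=True,
          rng=np.random.default_rng(0)):
    if training:
        # A fresh random slope is drawn for the negative part.
        slope = rng.uniform(lower, upper, size=x.shape)
    else:
        slope = (lower + upper) / 2.0   # deterministic average at test time
    return np.where(x >= 0, x, slope * x)
```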
Large-scale Classification of Fine-Art Paintings: Learning The Right
Metric on The Right Feature | cs.CV cs.IR cs.LG cs.MM | In the past few years, the number of fine-art collections that are digitized
and publicly available has been growing rapidly. With the availability of such
large collections of digitized artworks comes the need to develop multimedia
systems to archive and retrieve this pool of data. Measuring the visual
similarity between artistic items is an essential step for such multimedia
systems, and can benefit higher-level multimedia tasks. In order to model
this similarity between paintings, we should extract the appropriate visual
features for paintings and find out the best approach to learn the similarity
metric based on these features. We investigate a comprehensive list of visual
features and metric learning approaches to learn an optimized similarity
measure between paintings. We develop a machine that is able to make
aesthetic-related semantic-level judgments, such as predicting a painting's
style, genre, and artist, as well as providing similarity measures optimized
based on the knowledge available in the domain of art historical
interpretation. Our experiments show the value of using this similarity measure
for the aforementioned prediction tasks.
| Babak Saleh and Ahmed Elgammal | null | 1505.00855 | null | null |
An $O(n\log(n))$ Algorithm for Projecting Onto the Ordered Weighted
$\ell_1$ Norm Ball | math.OC cs.LG | The ordered weighted $\ell_1$ (OWL) norm is a newly developed generalization
of the Octagonal Shrinkage and Clustering Algorithm for Regression (OSCAR)
norm. This norm has desirable statistical properties and can be used to perform
simultaneous clustering and regression. In this paper, we show how to compute
the projection of an $n$-dimensional vector onto the OWL norm ball in
$O(n\log(n))$ operations. In addition, we illustrate the performance of our
algorithm on a synthetic regression test.
| Damek Davis | null | 1505.00870 | null | null |
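For context, the OWL norm itself is the weighted sum of the sorted absolute entries, as in the sketch below; the paper's $O(n\log(n))$ projection algorithm is more involved and is not reproduced here.
```python
# The OWL norm: with nonincreasing weights w_1 >= ... >= w_n >= 0, it is
# the weighted sum of the absolute entries sorted in decreasing order.
import numpy as np

def owl_norm(x, w):
    return np.sort(np.abs(x))[::-1] @ w

w = np.array([3.0, 2.0, 1.0])                    # nonincreasing weights
print(owl_norm(np.array([-1.0, 4.0, 2.0]), w))   # 3*4 + 2*2 + 1*1 = 17
```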
Reinforced Decision Trees | cs.LG | In order to speed up classification models when facing a large number of
categories, one usual approach consists in organizing the categories in a
particular structure, this structure then being used as a way to speed up the
prediction computation. This is for example the case when using
error-correcting codes or even hierarchies of categories. But in the majority
of approaches, this structure is chosen \textit{by hand}, or during a
preliminary step, and not integrated in the learning process. We propose a new
model called Reinforced Decision Tree which simultaneously learns how to
organize categories in a tree structure and how to classify any input based on
this structure. This approach keeps the advantages of existing techniques (low
inference complexity) but allows one to build efficient classifiers in one
learning step. The learning algorithm is inspired by reinforcement learning and
policy-gradient techniques which allows us to integrate the two steps (building
the tree, and learning the classifier) in one single algorithm.
| Aur\'elia L\'eon and Ludovic Denoyer | null | 1505.00908 | null | null |
The Configurable SAT Solver Challenge (CSSC) | cs.AI cs.LG | It is well known that different solution strategies work well for different
types of instances of hard combinatorial problems. As a consequence, most
solvers for the propositional satisfiability problem (SAT) expose parameters
that allow them to be customized to a particular family of instances. In the
international SAT competition series, these parameters are ignored: solvers are
run using a single default parameter setting (supplied by the authors) for all
benchmark instances in a given track. While this competition format rewards
solvers with robust default settings, it does not reflect the situation faced
by a practitioner who only cares about performance on one particular
application and can invest some time into tuning solver parameters for this
application. The new Configurable SAT Solver Competition (CSSC) compares
solvers in this latter setting, scoring each solver by the performance it
achieved after a fully automated configuration step. This article describes the
CSSC in more detail, and reports the results obtained in its two instantiations
so far, CSSC 2013 and 2014.
| Frank Hutter and Marius Lindauer and Adrian Balint and Sam Bayless and
Holger Hoos and Kevin Leyton-Brown | null | 1505.01221 | null | null |
A Comprehensive Study On The Applications Of Machine Learning For
Diagnosis Of Cancer | cs.LG | Collectively, lung cancer, breast cancer and melanoma were diagnosed in over
535,340 people, of whom 209,400 died [13]. It is estimated
that over 600,000 people will be diagnosed with these forms of cancer in 2015.
Most of the deaths from lung cancer, breast cancer and melanoma result from
late detection. All of these cancers, if detected early, are 100% curable. In
this study, we develop and evaluate algorithms to diagnose Breast cancer,
Melanoma, and Lung cancer. In the first part of the study, we employed a
normalised Gradient Descent and an Artificial Neural Network to diagnose breast
cancer with an overall accuracy of 91% and 95% respectively. In the second part
of the study, an artificial neural network coupled with image processing and
analysis algorithms was employed to achieve an overall accuracy of 93%. A naive
mobile-based application that allowed people to take diagnostic tests on their
phones was developed. Finally, a Support Vector Machine algorithm incorporating
image processing and image analysis algorithms was developed to diagnose lung
cancer with an accuracy of 94%. All of the aforementioned systems had very low
false positive and false negative rates. We are developing an online network
that incorporates all of these systems and allows people to collaborate
globally.
| Mohnish Chakravarti, Tanay Kothari | null | 1505.01345 | null | null |
Re-scale boosting for regression and classification | cs.LG stat.ML | Boosting is a learning scheme that combines weak prediction rules to produce
a strong composite estimator, with the underlying intuition that one can obtain
accurate prediction rules by combining "rough" ones. Although boosting is
proved to be consistent and overfitting-resistant, its numerical convergence
rate is relatively slow. The aim of this paper is to develop a new boosting
strategy, called the re-scale boosting (RBoosting), to accelerate the numerical
convergence rate and, consequently, improve the learning performance of
boosting. Our studies show that RBoosting possesses the almost optimal
numerical convergence rate in the sense that, up to a logarithmic factor, it
can reach the minimax nonlinear approximation rate. We then use RBoosting to
tackle both the classification and regression problems, and deduce a tight
generalization error estimate. The theoretical and experimental results show
that RBoosting outperforms boosting in terms of generalization.
| Shaobo Lin, Yao Wang and Lin Xu | null | 1505.01371 | null | null |
Fast Differentially Private Matrix Factorization | cs.LG cs.AI | Differentially private collaborative filtering is a challenging task, both in
terms of accuracy and speed. We present a simple algorithm that is provably
differentially private, while offering good performance, using a novel
connection of differential privacy to Bayesian posterior sampling via
Stochastic Gradient Langevin Dynamics. Due to its simplicity the algorithm
lends itself to efficient implementation. By careful systems design and by
exploiting the power law behavior of the data to maximize CPU cache bandwidth
we are able to generate 1024 dimensional models at a rate of 8.5 million
recommendations per second on a single PC.
| Ziqi Liu, Yu-Xiang Wang, Alexander J. Smola | null | 1505.01419 | null | null |
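A hedged sketch of the mechanism the abstract alludes to: one SGLD update on user and item factors, where the injected Gaussian noise with variance equal to the step size is what turns gradient descent into posterior sampling. Step size, regularization, and the minibatch format are placeholder assumptions, not the paper's settings.
```python
# One SGLD step for matrix factorization: a gradient step on the squared
# error plus Gaussian noise scaled to the step size (illustrative only).
import numpy as np

def sgld_step(U, V, ratings, step, reg, rng):
    # ratings: list of (user, item, value) triples, e.g. a minibatch.
    for u, i, r in ratings:
        err = r - U[u] @ V[i]
        grad_u = -err * V[i] + reg * U[u]
        grad_v = -err * U[u] + reg * V[i]
        U[u] -= 0.5 * step * grad_u + np.sqrt(step) * rng.normal(size=U[u].shape)
        V[i] -= 0.5 * step * grad_v + np.sqrt(step) * rng.normal(size=V[i].shape)

rng = np.random.default_rng(0)
U, V = rng.normal(size=(100, 16)), rng.normal(size=(50, 16))
sgld_step(U, V, [(0, 3, 4.0), (7, 12, 2.5)], step=1e-3, reg=0.1, rng=rng)
```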
Estimation from Pairwise Comparisons: Sharp Minimax Bounds with Topology
Dependence | cs.LG cs.IT math.IT stat.ML | Data in the form of pairwise comparisons arises in many domains, including
preference elicitation, sporting competitions, and peer grading among others.
We consider parametric ordinal models for such pairwise comparison data
involving a latent vector $w^* \in \mathbb{R}^d$ that represents the
"qualities" of the $d$ items being compared; this class of models includes the
two most widely used parametric models--the Bradley-Terry-Luce (BTL) and the
Thurstone models. Working within a standard minimax framework, we provide tight
upper and lower bounds on the optimal error in estimating the quality score
vector $w^*$ under this class of models. The bounds depend on the topology of
the comparison graph induced by the subset of pairs being compared via its
Laplacian spectrum. Thus, in settings where the subset of pairs may be chosen,
our results provide principled guidelines for making this choice. Finally, we
compare these error rates to those under cardinal measurement models and show
that the error rates in the ordinal and cardinal settings have identical
scalings apart from constant pre-factors.
| Nihar B. Shah, Sivaraman Balakrishnan, Joseph Bradley, Abhay Parekh,
Kannan Ramchandran, Martin J. Wainwright | null | 1505.01462 | null | null |
A Fixed-Size Encoding Method for Variable-Length Sequences with its
Application to Neural Network Language Models | cs.NE cs.CL cs.LG | In this paper, we propose the new fixed-size ordinally-forgetting encoding
(FOFE) method, which can almost uniquely encode any variable-length sequence of
words into a fixed-size representation. FOFE can model the word order in a
sequence using a simple ordinally-forgetting mechanism according to the
positions of words. In this work, we have applied FOFE to feedforward neural
network language models (FNN-LMs). Experimental results have shown that without
using any recurrent feedbacks, FOFE based FNN-LMs can significantly outperform
not only the standard fixed-input FNN-LMs but also the popular RNN-LMs.
| Shiliang Zhang, Hui Jiang, Mingbin Xu, Junfeng Hou, Lirong Dai | null | 1505.01504 | null | null |
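FOFE itself is a short recursion: the code of a sequence is z_t = alpha * z_{t-1} + e_t, where e_t is the one-hot vector of the t-th word and alpha in (0, 1) is the forgetting factor. The sketch below uses an illustrative alpha.
```python
# FOFE in a few lines: encode a word sequence by the forgetting recursion
# z_t = alpha * z_{t-1} + e_t over one-hot word vectors.
import numpy as np

def fofe(word_ids, vocab_size, alpha=0.7):
    z = np.zeros(vocab_size)
    for w in word_ids:
        z *= alpha
        z[w] += 1.0
    return z

# "A B C" and "C B A" get different codes, unlike bag-of-words.
print(fofe([0, 1, 2], vocab_size=4))   # [0.49, 0.7, 1.0, 0.0]
print(fofe([2, 1, 0], vocab_size=4))   # [1.0, 0.7, 0.49, 0.0]
```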
Learning and Optimization with Submodular Functions | cs.LG | In many naturally occurring optimization problems one needs to ensure that
the definition of the optimization problem lends itself to solutions that are
tractable to compute. In cases where exact solutions cannot be computed
tractably, it is beneficial to have strong guarantees on the tractable
approximate solutions. In order to operate under these criteria, most optimization
problems are cast under the umbrella of convexity or submodularity. In this
report we will study design and optimization over a common class of functions
called submodular functions. Set functions, and specifically submodular set
functions, characterize a wide variety of naturally occurring optimization
problems, and the property of submodularity of set functions has deep
theoretical consequences with wide ranging applications. Informally, the
property of submodularity of set functions concerns the intuitive "principle of
diminishing returns". This property states that adding an element to a smaller
set has more value than adding it to a larger set. Common examples of
submodular monotone functions are entropies, concave functions of cardinality,
and matroid rank functions; non-monotone examples include graph cuts, network
flows, and mutual information.
In this paper we will review the formal definition of submodularity; the
optimization of submodular functions, both maximization and minimization; and
finally discuss some applications in relation to learning and reasoning using
submodular functions.
| Bharath Sankaran, Marjan Ghazvininejad, Xinran He, David Kale, Liron
Cohen | null | 1505.01576 | null | null |
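For reference, the diminishing-returns property sketched above is the standard characterization of submodularity:
```latex
% Diminishing returns: for a ground set $V$, a set function
% $f : 2^V \to \mathbb{R}$ is submodular if and only if
\[
f(A \cup \{x\}) - f(A) \;\ge\; f(B \cup \{x\}) - f(B)
\quad \text{for all } A \subseteq B \subseteq V \text{ and } x \in V \setminus B.
\]
```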
Blind Compressive Sensing Framework for Collaborative Filtering | cs.IR cs.LG | Existing works based on latent factor models have focused on representing the
rating matrix as a product of user and item latent factor matrices, both being
dense. Latent (factor) vectors define the degree to which a trait is possessed
by an item, or the affinity of a user towards that trait. A dense user matrix is a
reasonable assumption, as each user will like/dislike a trait to a certain extent.
However, any item will possess only a few of the attributes and never all.
Hence, the item matrix should ideally have a sparse structure rather than a
dense one as formulated in earlier works. Therefore we propose to factor the
ratings matrix into a dense user matrix and a sparse item matrix which leads us
to the Blind Compressed Sensing (BCS) framework. We derive an efficient
algorithm for solving the BCS problem based on Majorization Minimization (MM)
technique. Our proposed approach is able to achieve significantly higher
accuracy and shorter run times as compared to existing approaches.
| Anupriya Gogna, Angshul Majumdar | null | 1505.01621 | null | null |
Context-Aware Mobility Management in HetNets: A Reinforcement Learning
Approach | cs.NI cs.LG | The use of small cell deployments in heterogeneous network (HetNet)
environments is expected to be a key feature of 4G networks and beyond, and
essential for providing higher user throughput and cell-edge coverage. However,
due to different coverage sizes of macro and pico base stations (BSs), such a
paradigm shift introduces additional requirements and challenges in dense
networks. Among these challenges is the handover performance of user equipment
(UEs), which will be impacted especially when high velocity UEs traverse
picocells. In this paper, we propose a coordination-based and context-aware
mobility management (MM) procedure for small cell networks using tools from
reinforcement learning. Here, macro and pico BSs jointly learn their long-term
traffic loads and optimal cell range expansion, and schedule their UEs based on
their velocities and historical rates (exchanged among tiers). The proposed
approach is shown to not only outperform the classical MM in terms of UE
throughput, but also to enable better fairness. On average, a gain of up to
80\% is achieved for UE throughput, while the handover failure probability is
reduced up to a factor of three by the proposed learning based MM approaches.
| Meryem Simsek, Mehdi Bennis, Ismail G\"uvenc | null | 1505.01625 | null | null |
A Survey of Predictive Modelling under Imbalanced Distributions | cs.LG | Many real world data mining applications involve obtaining predictive models
using data sets with strongly imbalanced distributions of the target variable.
Frequently, the least common values of this target variable are associated with
events that are highly relevant for end users (e.g. fraud detection, unusual
returns on stock markets, anticipation of catastrophes, etc.). Moreover, the
events may have different costs and benefits, which when associated with the
rarity of some of them on the available training data creates serious problems
to predictive modelling techniques. This paper presents a survey of existing
techniques for handling these important applications of predictive analytics.
Although most of the existing work addresses classification tasks (nominal
target variables), we also describe methods designed to handle similar problems
within regression tasks (numeric target variables). In this survey we discuss
the main challenges raised by imbalanced distributions, describe the main
approaches to these problems, propose a taxonomy of these methods and refer to
some related problems within predictive modelling.
| Paula Branco and Luis Torgo and Rita Ribeiro | null | 1505.01658 | null | null |
Integrating K-means with Quadratic Programming Feature Selection | cs.CV cs.LG | Several data mining problems are characterized by data in high dimensions.
One of the popular ways to reduce the dimensionality of the data is to perform
feature selection, i.e., selecting a subset of relevant and non-redundant features.
Recently, Quadratic Programming Feature Selection (QPFS) has been proposed
which formulates the feature selection problem as a quadratic program. It has
been shown to outperform many of the existing feature selection methods for a
variety of applications. Though better than many existing approaches, the
running-time complexity of QPFS is cubic in the number of features, which can
be quite computationally expensive even for moderately sized datasets. In this
paper we propose a novel method for feature selection by integrating k-means
clustering with QPFS. The basic variant of our approach runs k-means to bring
down the number of features which need to be passed on to QPFS. We then enhance
this idea, wherein we gradually refine the feature space from a very coarse
clustering to a fine-grained one, by interleaving steps of QPFS with k-means
clustering. Every step of QPFS helps in identifying the clusters of irrelevant
features (which can then be thrown away), whereas every step of k-means further
refines the clusters which are potentially relevant. We show that our iterative
refinement of clusters is guaranteed to converge. We provide bounds on the
number of distance computations involved in the k-means algorithm. Further,
each QPFS run is now cubic in number of clusters, which can be much smaller
than actual number of features. Experiments on eight publicly available
datasets show that our approach gives significant computational gains (both in
time and memory), over standard QPFS as well as other state of the art feature
selection methods, even while improving the overall accuracy.
| Yamuna Prasad, K. K. Biswas | null | 1505.01728 | null | null |
Object detection via a multi-region & semantic segmentation-aware CNN
model | cs.CV cs.LG cs.NE | We propose an object detection system that relies on a multi-region deep
convolutional neural network (CNN) that also encodes semantic
segmentation-aware features. The resulting CNN-based representation aims at
capturing a diverse set of discriminative appearance factors and exhibits
localization sensitivity that is essential for accurate object localization. We
exploit the above properties of our recognition module by integrating it on an
iterative localization mechanism that alternates between scoring a box proposal
and refining its location with a deep CNN regression model. Thanks to the
efficient use of our modules, we detect objects with very high localization
accuracy. On the detection challenges of PASCAL VOC2007 and PASCAL VOC2012 we
achieve mAP of 78.2% and 73.9% correspondingly, surpassing any other published
work by a significant margin.
| Spyros Gidaris, Nikos Komodakis | null | 1505.01749 | null | null |
Optimal Decision-Theoretic Classification Using Non-Decomposable
Performance Metrics | cs.LG stat.ML | We provide a general theoretical analysis of expected out-of-sample utility,
also referred to as decision-theoretic classification, for non-decomposable
binary classification metrics such as F-measure and Jaccard coefficient. Our
key result is that the expected out-of-sample utility for many performance
metrics is provably optimized by a classifier which is equivalent to a signed
thresholding of the conditional probability of the positive class. Our analysis
bridges a gap in the literature on binary classification, revealed in light of
recent results for non-decomposable metrics in population utility maximization
style classification. Our results identify checkable properties of a
performance metric which are sufficient to guarantee a probability ranking
principle. We propose consistent estimators for optimal expected out-of-sample
classification. As a consequence of the probability ranking principle,
computational requirements can be reduced from exponential to cubic complexity
in the general case, and further reduced to quadratic complexity in special
cases. We provide empirical results on simulated and benchmark datasets
evaluating the performance of the proposed algorithms for decision-theoretic
classification and comparing them to baseline and state-of-the-art methods in
population utility maximization for non-decomposable metrics.
| Nagarajan Natarajan, Oluwasanmi Koyejo, Pradeep Ravikumar, Inderjit S.
Dhillon | null | 1505.01802 | null | null |
Language Models for Image Captioning: The Quirks and What Works | cs.CL cs.AI cs.CV cs.LG | Two recent approaches have achieved state-of-the-art results in image
captioning. The first uses a pipelined process where a set of candidate words
is generated by a convolutional neural network (CNN) trained on images, and
then a maximum entropy (ME) language model is used to arrange these words into
a coherent sentence. The second uses the penultimate activation layer of the
CNN as input to a recurrent neural network (RNN) that then generates the
caption sequence. In this paper, we compare the merits of these different
language modeling approaches for the first time by using the same
state-of-the-art CNN as input. We examine issues in the different approaches,
including linguistic irregularities, caption repetition, and data set overlap.
By combining key aspects of the ME and RNN methods, we achieve a new record
performance over previously published results on the benchmark COCO dataset.
However, the gains we see in BLEU do not translate to human judgments.
| Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong
He, Geoffrey Zweig, Margaret Mitchell | null | 1505.01809 | null | null |
DART: Dropouts meet Multiple Additive Regression Trees | cs.LG stat.ML | Multiple Additive Regression Trees (MART), an ensemble model of boosted
regression trees, is known to deliver high prediction accuracy for diverse
tasks, and it is widely used in practice. However, it suffers from an issue which we
call over-specialization, wherein trees added at later iterations tend to
impact the prediction of only a few instances, and make negligible contribution
towards the remaining instances. This negatively affects the performance of the
model on unseen data, and also makes the model over-sensitive to the
contributions of the few, initially added tress. We show that the commonly used
tool to address this issue, that of shrinkage, alleviates the problem only to a
certain extent and the fundamental issue of over-specialization still remains.
In this work, we explore a different approach to address the problem, that of
employing dropouts, a tool that has been recently proposed in the context of
learning deep neural networks. We propose a novel way of employing dropouts in
MART, resulting in the DART algorithm. We evaluate DART on ranking, regression
and classification tasks, using large scale, publicly available datasets, and
show that DART outperforms MART in each of the tasks, with a significant
margin. We also show that DART overcomes the issue of over-specialization to a
considerable extent.
| K. V. Rashmi and Ran Gilad-Bachrach | null | 1505.01866 | null | null |
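A rough sketch of the dropout-in-boosting idea for squared loss, assuming scikit-learn regression trees as weak learners; the drop rate and the 1/(k+1) and k/(k+1) scalings follow the general DART recipe, though details may differ from the paper.
```python
# A hedged DART-style sketch: at each iteration, drop a random subset of
# existing trees, fit the new tree to the survivors' residual, then rescale
# the new and dropped trees.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def dart_fit(X, y, n_trees=50, drop_rate=0.2, seed=0):
    rng = np.random.default_rng(seed)
    trees, weights = [], []
    for _ in range(n_trees):
        # Drop a random subset of the existing trees for this iteration.
        drop = [i for i in range(len(trees)) if rng.random() < drop_rate]
        keep = [i for i in range(len(trees)) if i not in drop]
        pred = (sum(weights[i] * trees[i].predict(X) for i in keep)
                if keep else np.zeros(len(y)))
        # Fit the new tree to the residual of the surviving ensemble.
        tree = DecisionTreeRegressor(max_depth=3).fit(X, y - pred)
        k = len(drop)
        weights.append(1.0 / (k + 1))     # shrink the newly added tree
        for i in drop:                    # rescale the dropped trees
            weights[i] *= k / (k + 1)
        trees.append(tree)
    return trees, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X[:, 0] ** 2 + rng.normal(scale=0.1, size=200)
trees, weights = dart_fit(X, y)
```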
An Asymptotically Optimal Policy for Uniform Bandits of Unknown Support | stat.ML cs.LG | Consider the problem of a controller sampling sequentially from a finite
number of $N \geq 2$ populations, specified by random variables $X^i_k$, $ i =
1,\ldots , N,$ and $k = 1, 2, \ldots$; where $X^i_k$ denotes the outcome from
population $i$ the $k^{th}$ time it is sampled. It is assumed that for each
fixed $i$, $\{ X^i_k \}_{k \geq 1}$ is a sequence of i.i.d. uniform random
variables over some interval $[a_i, b_i]$, with the support (i.e., $a_i, b_i$)
unknown to the controller. The objective is to have a policy $\pi$ for
deciding, based on available data, from which of the $N$ populations to sample
from at any time $n=1,2,\ldots$ so as to maximize the expected sum of outcomes
of $n$ samples or, equivalently, to minimize the regret due to the lack of
information about the parameters $\{ a_i \}$ and $\{ b_i \}$. In this paper, we
present a simple inflated sample mean (ISM) type policy that is asymptotically
optimal in the sense of its regret achieving the asymptotic lower bound of
Burnetas and Katehakis (1996). Additionally, finite horizon regret bounds are
given.
| Wesley Cowan and Michael N. Katehakis | null | 1505.01918 | null | null |
Deep Learning for Medical Image Segmentation | cs.LG cs.AI cs.CV | This report provides an overview of the current state of the art deep
learning architectures and optimisation techniques, and uses the ADNI
hippocampus MRI dataset as an example to compare the effectiveness and
efficiency of different convolutional architectures on the task of patch-based
3-dimensional hippocampal segmentation, which is important in the diagnosis of
Alzheimer's Disease. We found that a slightly unconventional "stacked 2D"
approach provides much better classification performance than simple 2D patches
without requiring significantly more computational power. We also examined the
popular "tri-planar" approach used in some recently published studies, and
found that it provides much better results than the 2D approaches, but also
with a moderate increase in computational power requirement. Finally, we
evaluated a full 3D convolutional architecture, and found that it provides
marginally better results than the tri-planar approach, but at the cost of a
very significant increase in computational power requirement.
| Matthew Lai | null | 1505.02000 | null | null |
Exploring Models and Data for Image Question Answering | cs.LG cs.AI cs.CL cs.CV | This work aims to address the problem of image-based question-answering (QA)
with new models and datasets. In our work, we propose to use neural networks
and visual semantic embeddings, without intermediate stages such as object
detection and image segmentation, to predict answers to simple questions about
images. Our model performs 1.8 times better than the only published results on
an existing image QA dataset. We also present a question generation algorithm
that converts image descriptions, which are widely available, into QA form. We
used this algorithm to produce an order-of-magnitude larger dataset, with more
evenly distributed answers. A suite of baseline results on this new dataset are
also presented.
| Mengye Ren, Ryan Kiros, Richard Zemel | null | 1505.02074 | null | null |
Human Social Interaction Modeling Using Temporal Deep Networks | cs.CY cs.LG | We present a novel approach to computational modeling of social interactions
based on modeling of essential social interaction predicates (ESIPs) such as
joint attention and entrainment. Based on sound social psychological theory and
methodology, we collect a new "Tower Game" dataset consisting of audio-visual
capture of dyadic interactions labeled with the ESIPs. We expect this dataset
to provide a new avenue for research in computational social interaction
modeling. We propose a novel joint Discriminative Conditional Restricted
Boltzmann Machine (DCRBM) model that combines a discriminative component with
the generative power of CRBMs. Such a combination enables us to uncover
actionable constituents of the ESIPs in two steps. First, we train the DCRBM
model on the labeled data and get accurate (76\%-49\% across various ESIPs)
detection of the predicates. Second, we exploit the generative capability of
DCRBMs to activate the trained model so as to generate the lower-level data
corresponding to the specific ESIP that closely matches the actual training
data (with mean square error 0.01-0.1 for generating 100 frames). We are thus
able to decompose the ESIPs into their constituent actionable behaviors. Such a
purely computational determination of how to establish an ESIP such as
engagement is unprecedented.
| Mohamed R. Amer, Behjat Siddiquie, Amir Tamrakar, David A. Salter,
Brian Lande, Darius Mehri and Ajay Divakaran | null | 1505.02137 | null | null |
Equitability, interval estimation, and statistical power | math.ST cs.LG q-bio.QM stat.ME stat.ML stat.TH | For analysis of a high-dimensional dataset, a common approach is to test a
null hypothesis of statistical independence on all variable pairs using a
non-parametric measure of dependence. However, because this approach attempts
to identify any non-trivial relationship no matter how weak, it often
identifies too many relationships to be useful. What is needed is a way of
identifying a smaller set of relationships that merit detailed further
analysis.
Here we formally present and characterize equitability, a property of
measures of dependence that aims to overcome this challenge. Notionally, an
equitable statistic is a statistic that, given some measure of noise, assigns
similar scores to equally noisy relationships of different types [Reshef et al.
2011]. We begin by formalizing this idea via a new object called the
interpretable interval, which functions as an interval estimate of the amount
of noise in a relationship of unknown type. We define an equitable statistic as
one with small interpretable intervals.
We then draw on the equivalence of interval estimation and hypothesis testing
to show that under moderate assumptions an equitable statistic is one that
yields well powered tests for distinguishing not only between trivial and
non-trivial relationships of all kinds but also between non-trivial
relationships of different strengths. This means that equitability allows us to
specify a threshold relationship strength $x_0$ and to search for relationships
of all kinds with strength greater than $x_0$. Thus, equitability can be
thought of as a strengthening of power against independence that enables
fruitful analysis of data sets with a small number of strong, interesting
relationships and a large number of weaker ones. We conclude with a
demonstration of how our two equivalent characterizations of equitability can
be used to evaluate the equitability of a statistic in practice.
| Yakir A. Reshef, David N. Reshef, Pardis C. Sabeti, Michael M.
Mitzenmacher | null | 1505.02212 | null | null |
Measuring dependence powerfully and equitably | stat.ME cs.IT cs.LG math.IT q-bio.QM stat.ML | Given a high-dimensional data set we often wish to find the strongest
relationships within it. A common strategy is to evaluate a measure of
dependence on every variable pair and retain the highest-scoring pairs for
follow-up. This strategy works well if the statistic used is equitable [Reshef
et al. 2015a], i.e., if, for some measure of noise, it assigns similar scores
to equally noisy relationships regardless of relationship type (e.g., linear,
exponential, periodic).
In this paper, we introduce and characterize a population measure of
dependence called MIC*. We show three ways that MIC* can be viewed: as the
population value of MIC, a highly equitable statistic from [Reshef et al.
2011], as a canonical "smoothing" of mutual information, and as the supremum of
an infinite sequence defined in terms of optimal one-dimensional partitions of
the marginals of the joint distribution. Based on this theory, we introduce an
efficient approach for computing MIC* from the density of a pair of random
variables, and we define a new consistent estimator MICe for MIC* that is
efficiently computable. In contrast, there is no known polynomial-time
algorithm for computing the original equitable statistic MIC. We show through
simulations that MICe has better bias-variance properties than MIC. We then
introduce and prove the consistency of a second statistic, TICe, that is a
trivial side-product of the computation of MICe and whose goal is powerful
independence testing rather than equitability.
We show in simulations that MICe and TICe have good equitability and power
against independence respectively. The analyses here complement a more in-depth
empirical evaluation of several leading measures of dependence [Reshef et al.
2015b] that shows state-of-the-art performance for MICe and TICe.
| Yakir A. Reshef, David N. Reshef, Hilary K. Finucane, Pardis C.
Sabeti, Michael M. Mitzenmacher | null | 1505.02213 | null | null |
An Empirical Study of Leading Measures of Dependence | stat.ME cs.IT cs.LG math.IT q-bio.QM stat.ML | In exploratory data analysis, we are often interested in identifying
promising pairwise associations for further analysis while filtering out
weaker, less interesting ones. This can be accomplished by computing a measure
of dependence on all variable pairs and examining the highest-scoring pairs,
provided the measure of dependence used assigns similar scores to equally noisy
relationships of different types. This property, called equitability, is
formalized in Reshef et al. [2015b]. In addition to equitability, measures of
dependence can also be assessed by the power of their corresponding
independence tests as well as their runtime.
Here we present extensive empirical evaluation of the equitability, power
against independence, and runtime of several leading measures of dependence.
These include two statistics introduced in Reshef et al. [2015a]: MICe, which
has equitability as its primary goal, and TICe, which has power against
independence as its goal. Regarding equitability, our analysis finds that MICe
is the most equitable method on functional relationships in most of the
settings we considered, although mutual information estimation proves the most
equitable at large sample sizes in some specific settings. Regarding power
against independence, we find that TICe, along with Heller and Gorfine's $S^{DDP}$,
is the state of the art on the relationships we tested. Our analyses also show
a trade-off between power against independence and equitability consistent with
the theory in Reshef et al. [2015b]. In terms of runtime, MICe and TICe are
significantly faster than many other measures of dependence tested, and
computing either one makes computing the other trivial. This suggests that a
fast and useful strategy for achieving a combination of power against
independence and equitability may be to filter relationships by TICe and then
to examine the MICe of only the significant ones.
| David N. Reshef, Yakir A. Reshef, Pardis C. Sabeti, Michael M.
Mitzenmacher | null | 1505.02214 | null | null |
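The self-contained toy below contrasts an inequitable score ($R^2$) with a naive histogram-based mutual information estimate on equally noisy linear and sinusoidal relationships. It is not MICe or TICe; it only illustrates the kind of behavior equitability formalizes, and the noise level, binning, and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(0, 1, n)
noise = rng.normal(0, 0.1, n)
relationships = {
    "linear": x + noise,
    "sine":   np.sin(8 * np.pi * x) + noise,   # same noise, different shape
}

def binned_mi(x, y, bins=16):
    """Naive plug-in mutual information from a 2D histogram (in nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

for name, y in relationships.items():
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    print(f"{name:6s}  R^2 = {r2:.2f}   MI = {binned_mi(x, y):.2f}")
# R^2 collapses on the sine while the MI scores stay comparable --
# the flavor of behavior that equitability makes precise.
```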
Newton Sketch: A Linear-time Optimization Algorithm with
Linear-Quadratic Convergence | math.OC cs.DS cs.LG stat.ML | We propose a randomized second-order method for optimization known as the
Newton Sketch: it is based on performing an approximate Newton step using a
randomly projected or sub-sampled Hessian. For self-concordant functions, we
prove that the algorithm has super-linear convergence with exponentially high
probability, with convergence and complexity guarantees that are independent of
condition numbers and related problem-dependent quantities. Given a suitable
initialization, similar guarantees also hold for strongly convex and smooth
objectives without self-concordance. When implemented using randomized
projections based on a sub-sampled Hadamard basis, the algorithm typically has
substantially lower complexity than Newton's method. We also describe
extensions of our methods to programs involving convex constraints that are
equipped with self-concordant barriers. We discuss and illustrate applications
to linear programs, quadratic programs with convex constraints, logistic
regression and other generalized linear models, as well as semidefinite
programs.
| Mert Pilanci, Martin J. Wainwright | null | 1505.02250 | null | null |
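A minimal sketch of the Newton Sketch idea for ridge-regularized least squares follows. It uses a Gaussian sketching matrix rather than the sub-sampled Hadamard sketch the paper advocates (the latter is faster to apply but longer to write down); the problem sizes and sketch dimension m are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m, lam = 2000, 50, 200, 1e-2     # sketch size m is an assumption
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

w = np.zeros(d)
for it in range(10):
    grad = A.T @ (A @ w - b) + lam * w
    # Sketch the Hessian square-root A; here a Gaussian projection is used
    # for brevity, where the paper favors sub-sampled Hadamard sketches.
    S = rng.normal(size=(m, n)) / np.sqrt(m)
    SA = S @ A
    H_approx = SA.T @ SA + lam * np.eye(d)   # sketched Hessian
    w -= np.linalg.solve(H_approx, grad)     # approximate Newton step
    print(it, np.linalg.norm(A @ w - b))     # residual shrinks rapidly
```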
Probabilistic Cascading for Large Scale Hierarchical Classification | cs.LG cs.CL cs.IR | Hierarchies are frequently used for the organization of objects. Given a
hierarchy of classes, two main approaches are used to automatically classify
new instances: flat classification and cascade classification. Flat
classification ignores the hierarchy, while cascade classification greedily
traverses the hierarchy from the root to the predicted leaf. In this paper we
propose a new approach, which extends cascade classification to predict the
right leaf by estimating the probability of each root-to-leaf path. We provide
experimental results which indicate that, using the same classification
algorithm, one can achieve better results with our approach, compared to the
traditional flat and cascade classifications.
| Aris Kosmopoulos and Georgios Paliouras and Ion Androutsopoulos | null | 1505.02251 | null | null |
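A toy sketch of the path-probability idea: with hand-set edge probabilities standing in for per-node classifiers (both the hierarchy and the numbers are made up), the best root-to-leaf path can differ from the one a greedy cascade would follow.

```python
import numpy as np

# Toy hierarchy and per-edge classifier outputs P(child | node, x) for one
# input x; in practice each internal node holds a trained classifier.
TREE = {"root": ["animals", "vehicles"],
        "animals": ["cat", "dog"],
        "vehicles": ["car", "truck"]}
EDGE_PROBS = {("root", "animals"): 0.55, ("root", "vehicles"): 0.45,
              ("animals", "cat"): 0.30, ("animals", "dog"): 0.70,
              ("vehicles", "car"): 0.95, ("vehicles", "truck"): 0.05}

def best_leaf(node="root", logp=0.0):
    """Return the leaf maximizing the product of edge probabilities."""
    if node not in TREE:                       # leaf reached
        return node, logp
    candidates = [best_leaf(child, logp + np.log(EDGE_PROBS[(node, child)]))
                  for child in TREE[node]]
    return max(candidates, key=lambda c: c[1])

leaf, logp = best_leaf()
print(leaf, np.exp(logp))
```

Here a greedy cascade would descend to "animals" (0.55) and return "dog" with path probability 0.385, whereas the full path search returns "car" (0.45 x 0.95 = 0.4275), which is exactly the gap the proposed approach exploits.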
Should we really use post-hoc tests based on mean-ranks? | cs.LG math.ST physics.data-an q-bio.QM stat.ML stat.TH | The statistical comparison of multiple algorithms over multiple data sets is
fundamental in machine learning. This is typically carried out by the Friedman
test. When the Friedman test rejects the null hypothesis, multiple comparisons
are carried out to establish which are the significant differences among
algorithms. The multiple comparisons are usually performed using the mean-ranks
test. The aim of this technical note is to discuss the inconsistencies of the
mean-ranks post-hoc test with the goal of discouraging its use in machine
learning as well as in medicine, psychology, etc. We show that the outcome of
the mean-ranks test depends on the pool of algorithms originally included in
the experiment. In other words, the outcome of the comparison between
algorithms A and B depends also on the performance of the other algorithms
included in the original experiment. This can lead to paradoxical situations.
For instance the difference between A and B could be declared significant if
the pool comprises algorithms C, D, E and not significant if the pool comprises
algorithms F, G, H. To overcome these issues, we suggest instead to perform the
multiple comparison using a test whose outcome only depends on the two
algorithms being compared, such as the sign-test or the Wilcoxon signed-rank
test.
| Alessio Benavoli and Giorgio Corani and Francesca Mangili | null | 1505.02288 | null | null |
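For example, with SciPy the recommended pairwise comparison is a one-liner whose outcome depends only on the two algorithms being compared; the accuracy values below are made up for illustration.

```python
from scipy.stats import wilcoxon

# Accuracy of algorithms A and B on the same 10 data sets (made-up numbers).
acc_a = [0.81, 0.79, 0.90, 0.65, 0.72, 0.84, 0.77, 0.69, 0.88, 0.74]
acc_b = [0.78, 0.75, 0.89, 0.60, 0.70, 0.80, 0.78, 0.64, 0.85, 0.70]

# Unlike the mean-ranks test, adding or removing a third algorithm C
# cannot change this result.
stat, p = wilcoxon(acc_a, acc_b)
print(f"Wilcoxon signed-rank: W={stat}, p={p:.4f}")
```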
Estimation with Norm Regularization | stat.ML cs.LG | Analysis of non-asymptotic estimation error and structured statistical
recovery based on norm regularized regression, such as Lasso, needs to consider
four aspects: the norm, the loss function, the design matrix, and the noise
model. This paper presents generalizations of such estimation error analysis on
all four aspects compared to the existing literature. We characterize the
restricted error set where the estimation error vector lies, establish
relations between error sets for the constrained and regularized problems, and
present an estimation error bound applicable to any norm. Precise
characterizations of the bound are presented for isotropic as well as
anisotropic subGaussian design matrices, subGaussian noise models, and convex
loss functions, including least squares and generalized linear models. Generic
chaining and associated results play an important role in the analysis. A key
result from the analysis is that the sample complexity of all such estimators
depends on the Gaussian width of a spherical cap corresponding to the
restricted error set. Further, once the number of samples $n$ crosses the
required sample complexity, the estimation error decreases as
$\frac{c}{\sqrt{n}}$, where $c$ depends on the Gaussian width of the unit norm
ball.
| Arindam Banerjee, Sheng Chen, Farideh Fazayeli, Vidyashankar Sivakumar | null | 1505.02294 | null | null |
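A quick empirical sanity check of the $\frac{c}{\sqrt{n}}$ rate for the Lasso, using scikit-learn: the noise level, regularization schedule, and problem sizes are arbitrary choices, and the product error * sqrt(n) is only expected to be roughly flat, since the constant c hides the Gaussian-width factors.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, s, sigma = 200, 5, 0.5                  # ambient dim, sparsity, noise
w_star = np.zeros(d)
w_star[:s] = 1.0                           # s-sparse ground truth

for n in [200, 800, 3200]:
    X = rng.normal(size=(n, d))            # isotropic sub-Gaussian design
    y = X @ w_star + sigma * rng.normal(size=n)
    lam = sigma * np.sqrt(2 * np.log(d) / n)   # standard theory-driven choice
    w_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
    err = np.linalg.norm(w_hat - w_star)
    print(f"n={n:5d}  error={err:.3f}  error*sqrt(n)={err * np.sqrt(n):.2f}")
```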
Simultaneous Clustering and Model Selection for Multinomial
Distribution: A Comparative Study | cs.LG stat.ME stat.ML | In this paper, we study different discrete data clustering methods, which use
the Model-Based Clustering (MBC) framework with the Multinomial distribution.
Our study comprises several relevant issues, such as initialization, model
estimation and model selection. Additionally, we propose a novel MBC method by
efficiently combining the partitional and hierarchical clustering techniques.
We conduct experiments on both synthetic and real data and evaluate the methods
using accuracy, stability and computation time. Our study identifies
appropriate strategies to be used for discrete data analysis with the MBC
methods. Moreover, our proposed method is very competitive w.r.t. clustering
accuracy and better w.r.t. stability and computation time.
| Md. Abul Hasnat, Julien Velcin, St\'ephane Bonnevay and Julien Jacques | null | 1505.02324 | null | null |
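A minimal sketch of one ingredient of such methods, EM for a multinomial mixture on a count matrix, is below. The initialization, smoothing constant, and iteration count are arbitrary choices, and none of the paper's initialization or model-selection strategies are implemented here.

```python
import numpy as np

def multinomial_mixture_em(X, K, n_iter=100, seed=0):
    """EM for a mixture of multinomials (a minimal MBC sketch).

    X: (N, V) nonnegative count matrix. Returns hard labels and parameters.
    """
    rng = np.random.default_rng(seed)
    N, V = X.shape
    pi = np.full(K, 1.0 / K)                      # mixing weights
    theta = rng.dirichlet(np.ones(V), size=K)     # per-cluster category probs
    for _ in range(n_iter):
        # E-step: log responsibilities, up to the count normalizer.
        log_r = np.log(pi) + X @ np.log(theta).T  # (N, K)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights and smoothed multinomial parameters.
        pi = r.mean(axis=0)
        theta = (r.T @ X) + 1e-6                  # tiny smoothing
        theta /= theta.sum(axis=1, keepdims=True)
    return r.argmax(axis=1), pi, theta

X = np.random.default_rng(1).poisson(2.0, size=(100, 20))
labels, pi, theta = multinomial_mixture_em(X, K=3)
print(labels[:10], pi)
```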
Bayesian Sparse Tucker Models for Dimension Reduction and Tensor
Completion | cs.LG cs.NA stat.ML | Tucker decomposition is a cornerstone of modern machine learning for
tensorial data analysis, and has attracted considerable attention for
multiway feature extraction, compressive sensing, and tensor completion. The
most challenging problem is the determination of model complexity (i.e.,
multilinear rank), especially when noise and missing data are present. In
addition, existing methods cannot take into account uncertainty information of
latent factors, resulting in low generalization performance. To address these
issues, we present a class of probabilistic generative Tucker models for tensor
decomposition and completion with structural sparsity over multilinear latent
space. To exploit structural sparse modeling, we introduce two group sparsity
inducing priors via a hierarchical representation of Laplace and Student-t
distributions, which facilitates full posterior inference. For model learning,
we derive variational Bayesian inference over all model (hyper)parameters,
and develop efficient and scalable algorithms based on multilinear
operations. Our methods can automatically adapt model complexity and infer an
optimal multilinear rank by the principle of maximum lower bound of model
evidence. Experimental results and comparisons on synthetic, chemometrics and
neuroimaging data demonstrate remarkable performance of our models for
recovering ground-truth of multilinear rank and missing entries.
| Qibin Zhao, Liqing Zhang, Andrzej Cichocki | null | 1505.02343 | null | null |
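For context, here is a plain, non-Bayesian Tucker baseline (truncated HOSVD) in NumPy. Unlike the paper's models, it cannot infer the multilinear rank or handle missing entries; the ranks must be supplied by hand, and the test tensor is synthetic.

```python
import numpy as np

def hosvd(T, ranks):
    """Truncated higher-order SVD: a plain, fixed-rank Tucker baseline."""
    factors = []
    for mode, r in enumerate(ranks):
        # Unfold T along `mode` and keep the top-r left singular vectors.
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        factors.append(np.linalg.svd(unfolding, full_matrices=False)[0][:, :r])
    core = T
    for U in factors:   # contracting axis 0 each time cycles the modes back
        core = np.tensordot(core, U, axes=([0], [0]))
    return core, factors

def tucker_reconstruct(core, factors):
    """Multiply the core back by each factor to rebuild the full tensor."""
    T = core
    for U in factors:
        T = np.tensordot(T, U.T, axes=([0], [0]))
    return T

rng = np.random.default_rng(0)
G = rng.normal(size=(3, 3, 3))
Us = [np.linalg.qr(rng.normal(size=(20, 3)))[0] for _ in range(3)]
T = tucker_reconstruct(G, Us) + 0.01 * rng.normal(size=(20, 20, 20))
core, factors = hosvd(T, ranks=(3, 3, 3))
T_hat = tucker_reconstruct(core, factors)
print(np.linalg.norm(T - T_hat) / np.linalg.norm(T))   # small relative error
```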
Bounded-Distortion Metric Learning | cs.LG | Metric learning aims to embed one metric space into another to benefit tasks
like classification and clustering. Although a greatly distorted metric space
has a high degree of freedom to fit training data, it is prone to overfitting
and numerical inaccuracy. This paper presents {\it bounded-distortion metric
learning} (BDML), a new metric learning framework which amounts to finding an
optimal Mahalanobis metric space with a bounded-distortion constraint. An
efficient solver based on the multiplicative weights update method is proposed.
Moreover, we generalize BDML to pseudo-metric learning and devise the
semidefinite relaxation and a randomized algorithm to approximately solve it.
We further provide theoretical analysis to show that distortion is a key
ingredient for stability and generalization ability of our BDML algorithm.
Extensive experiments on several benchmark datasets yield promising results.
| Renjie Liao, Jianping Shi, Ziyang Ma, Jun Zhu and Jiaya Jia | null | 1505.02377 | null | null |
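To make the bounded-distortion constraint concrete, the sketch below clips the spectrum of a Mahalanobis matrix so its condition number is at most D. This is crude spectrum surgery, not the paper's multiplicative-weights solver, and equating distortion with the condition number is a simplifying assumption.

```python
import numpy as np

def bound_distortion(M, D=10.0):
    """Project a PSD Mahalanobis matrix so max-eig / min-eig <= D.

    A stand-in for BDML's constraint; the paper optimizes the metric
    under this constraint rather than post-hoc clipping it.
    """
    vals, vecs = np.linalg.eigh(M)
    lo = vals.max() / D                   # smallest allowed eigenvalue
    vals = np.clip(vals, lo, None)
    return (vecs * vals) @ vecs.T         # = vecs @ diag(vals) @ vecs.T

def mahalanobis(M, x, y):
    d = x - y
    return float(np.sqrt(d @ M @ d))

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
M = bound_distortion(A @ A.T + 1e-3 * np.eye(5))
x, y = rng.normal(size=5), rng.normal(size=5)
print(mahalanobis(M, x, y))
```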