title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
On Symmetric and Asymmetric LSHs for Inner Product Search | stat.ML cs.DS cs.IR cs.LG | We consider the problem of designing locality sensitive hashes (LSH) for
inner product similarity, and of the power of asymmetric hashes in this
context. Shrivastava and Li argue that there is no symmetric LSH for the
problem and propose an asymmetric LSH based on different mappings for query and
database points. However, we show there does exist a simple symmetric LSH that
enjoys stronger guarantees and better empirical performance than the asymmetric
LSH they suggest. We also show a variant of the setting where asymmetry is
in fact needed, but where a different asymmetric LSH is required.
| Behnam Neyshabur, Nathan Srebro | null | 1410.5518 | null | null |
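A hedged sketch of the kind of symmetric scheme the abstract alludes to: augment each vector with one extra coordinate so that every point becomes unit-norm, then apply ordinary sign-random-projection hashing. It assumes unit-norm queries and database vectors of norm at most 1; names and constants are illustrative, not the paper's exact construction.

```python
import numpy as np

def augment(x):
    """Append sqrt(1 - ||x||^2): database vectors become unit-norm, and a
    unit-norm query gains a zero coordinate, so inner products are preserved."""
    return np.append(x, np.sqrt(max(0.0, 1.0 - float(x @ x))))

class SignRandomProjectionLSH:
    """Symmetric hash: sign pattern of random projections of the augmented
    vector; collision probability grows with the inner product."""
    def __init__(self, dim, n_bits, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim + 1))

    def hash(self, x):
        return (self.planes @ augment(x)) > 0

rng = np.random.default_rng(1)
db = rng.standard_normal((5, 16))
db /= np.linalg.norm(db, axis=1, keepdims=True) + 1.0   # norms <= 1
q = rng.standard_normal(16); q /= np.linalg.norm(q)     # unit-norm query
lsh = SignRandomProjectionLSH(dim=16, n_bits=64)
hq = lsh.hash(q)
matches = [(hq == lsh.hash(x)).sum() for x in db]       # hamming agreement
print(np.argmax(matches), np.argmax(db @ q))            # often the same index
```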
Where do goals come from? A Generic Approach to Autonomous Goal-System
Development | cs.LG cs.AI | Goals express agents' intentions and allow them to organize their behavior
based on low-dimensional abstractions of high-dimensional world states. How can
agents develop such goals autonomously? This paper proposes a detailed
conceptual and computational account of this longstanding problem. We argue
that goals should be considered as high-level abstractions of lower-level
intention mechanisms such as rewards and values, and point out that goals need
to be considered alongside a detection of the agent's own actions' effects. We
propose Latent Goal Analysis as a computational learning formulation thereof,
and show constructively that any reward or value function can be explained by
goals and such self-detection as latent mechanisms. We first show that learned goals
provide a highly effective dimensionality reduction in a practical
reinforcement learning problem. Then, we investigate a developmental scenario
in which entirely task-unspecific rewards induced by visual saliency lead to
self and goal representations that constitute goal-directed reaching.
| Matthias Rolf and Minoru Asada | null | 1410.5557 | null | null |
Regularizing Recurrent Networks - On Injected Noise and Norm-based
Methods | stat.ML cs.LG | Advancements in parallel processing have led to a surge in multilayer
perceptrons' (MLP) applications and deep learning in the past decades.
Recurrent Neural Networks (RNNs) give additional representational power to
feedforward MLPs by providing a way to treat sequential data. However, RNNs are
hard to train using conventional error backpropagation methods because of the
difficulty in relating inputs over many time-steps. Regularization approaches
from the MLP sphere, like dropout and noisy weight training, have been
insufficiently applied and tested on simple RNNs. Moreover, solutions have been
proposed to improve convergence in RNNs, but not enough to improve their
ability to remember long-term dependencies.
In this study, we aim to empirically evaluate the remembering and
generalization ability of RNNs on polyphonic musical datasets. The models are
trained with injected noise, random dropout, norm-based regularizers and their
respective performances compared to well-initialized plain RNNs and advanced
regularization methods like fast-dropout. We conclude with evidence that
training with noise does not improve performance as conjectured by a few works
in RNN optimization before ours.
| Saahil Ognawala and Justin Bayer | null | 1410.5684 | null | null |
Robust Multidimensional Mean-Payoff Games are Undecidable | cs.LO cs.LG | Mean-payoff games play a central role in quantitative synthesis and
verification. In a single-dimensional game a weight is assigned to every
transition and the objective of the protagonist is to assure a non-negative
limit-average weight. In the multidimensional setting, a weight vector is
assigned to every transition and the objective of the protagonist is to satisfy
a boolean condition over the limit-average weight of each dimension, e.g.,
$\mathrm{LimAvg}(x_1) \leq 0 \vee \mathrm{LimAvg}(x_2)\geq 0 \wedge \mathrm{LimAvg}(x_3) \geq 0$. We
recently proved that when one of the players is restricted to finite-memory
strategies then the decidability of determining the winner is inter-reducible
with Hilbert's Tenth problem over rationals (a fundamental long-standing open
problem). In this work we allow arbitrary (infinite-memory) strategies for both
players and we show that the problem is undecidable.
| Yaron Velner | null | 1410.5703 | null | null |
Optimal Feature Selection from VMware ESXi 5.1 Feature Set | cs.DC cs.LG | A study of a VMware ESXi 5.1 server has been carried out to find the optimal
set of parameters which characterize the usage of the server's different
resources. Feature selection algorithms have been used to extract the optimum
set of parameters from the data obtained from the VMware ESXi 5.1 server using
the esxtop command. Multiple virtual machines (VMs) run on this server. The
K-means algorithm is used for clustering the VMs. The goodness of each cluster
is determined by the Davies-Bouldin index and the Dunn index. The best cluster
is then identified by these indices, and the features of the best cluster are
taken as the set of optimal parameters.
| Amartya Hatua | null | 1410.5784 | null | null |
Artifact reduction in multichannel pervasive EEG using hybrid WPT-ICA
and WPT-EMD signal decomposition techniques | physics.med-ph cs.LG stat.AP stat.ME | In order to reduce the muscle artifacts in multi-channel pervasive
Electroencephalogram (EEG) signals, we here propose and compare two hybrid
algorithms by combining the concept of wavelet packet transform (WPT),
empirical mode decomposition (EMD) and Independent Component Analysis (ICA).
The signal cleaning performances of WPT-EMD and WPT-ICA algorithms have been
compared using a signal-to-noise ratio (SNR)-like criterion for artifacts. The
algorithms have been tested on multiple trials of four different artifact cases
viz. eye-blinking and muscle artifacts including left and right hand movement
and head-shaking.
| Valentina Bono, Wasifa Jamal, Saptarshi Das, Koushik Maharatna | 10.1109/ICASSP.2014.6854728 | 1410.5801 | null | null |
Daily Stress Recognition from Mobile Phone Data, Weather Conditions and
Individual Traits | cs.CY cs.LG physics.data-an stat.AP stat.ML | Research has proven that stress reduces quality of life and causes many
diseases. For this reason, several researchers devised stress detection systems
based on physiological parameters. However, these systems require that
obtrusive sensors are continuously carried by the user. In our paper, we
propose an alternative approach providing evidence that daily stress can be
reliably recognized based on behavioral metrics, derived from the user's mobile
phone activity and from additional indicators, such as the weather conditions
(data pertaining to transitory properties of the environment) and the
personality traits (data concerning permanent dispositions of individuals). Our
multifactorial statistical model, which is person-independent, achieves an
accuracy of 72.28% for a 2-class daily stress recognition problem. The model is
efficient to implement in most multimedia applications due to its highly
reduced, low-dimensional feature space (32 dimensions). Moreover, we identify and
discuss the indicators which have strong predictive power.
| Andrey Bogomolov, Bruno Lepri, Michela Ferron, Fabio Pianesi, Alex
(Sandy) Pentland | 10.1145/2647868.2654933 | 1410.5816 | null | null |
Bucking the Trend: Large-Scale Cost-Focused Active Learning for
Statistical Machine Translation | cs.CL cs.LG stat.ML | We explore how to improve machine translation systems by adding more
translation data in situations where we already have substantial resources. The
main challenge is how to buck the trend of diminishing returns that is commonly
encountered. We present an active learning-style data solicitation algorithm to
meet this challenge. We test it, gathering annotations via Amazon Mechanical
Turk, and find that we obtain an order-of-magnitude increase in the rate of
performance improvement.
| Michael Bloodgood and Chris Callison-Burch | null | 1410.5877 | null | null |
Mean-Field Networks | cs.LG stat.ML | The mean field algorithm is a widely used approximate inference algorithm for
graphical models whose exact inference is intractable. In each iteration of
mean field, the approximate marginals for each variable are updated by getting
information from the neighbors. This process can be equivalently converted into
a feedforward network, with each layer representing one iteration of mean field
and with tied weights on all layers. This conversion enables a few natural
extensions, e.g. untying the weights in the network. In this paper, we study
these mean field networks (MFNs), and use them as inference tools as well as
discriminative models. Preliminary experiment results show that MFNs can learn
to do inference very efficiently and perform significantly better than mean
field as discriminative models.
| Yujia Li and Richard Zemel | null | 1410.5884 | null | null |
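The network view described above is easy to make concrete: for a binary pairwise model (Boltzmann-machine style, with symmetric weights W and biases b, an assumption for this sketch), one mean-field sweep is a sigmoid layer with tied weights.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mean_field_layer(q, W, b):
    """One mean-field update for a binary pairwise MRF:
    q_i <- sigmoid(b_i + sum_j W_ij q_j).  Repeating this with the same
    (tied) W and b is exactly a feedforward network."""
    return sigmoid(b + W @ q)

rng = np.random.default_rng(0)
n = 5
W = rng.standard_normal((n, n)); W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)               # no self-interactions
b = rng.standard_normal(n)
q = np.full(n, 0.5)                    # uniform initial marginals
for _ in range(10):                    # ten "layers" of the mean-field network
    q = mean_field_layer(q, W, b)
print(q)                               # approximate marginals P(x_i = 1)
```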
Active Regression by Stratification | stat.ML cs.LG | We propose a new active learning algorithm for parametric linear regression
with random design. We provide finite sample convergence guarantees for general
distributions in the misspecified model. This is the first active learner for
this setting that provably can improve over passive learning. Unlike other
learning settings (such as classification), in regression the passive learning
rate of $O(1/\epsilon)$ cannot in general be improved upon. Nonetheless, the
so-called `constant' in the rate of convergence, which is characterized by a
distribution-dependent risk, can be improved in many cases. For a given
distribution, achieving the optimal risk requires prior knowledge of the
distribution. Following the stratification technique advocated in Monte-Carlo
function integration, our active learner approaches the optimal risk using
piecewise constant approximations.
| Sivan Sabato and Remi Munos | null | 1410.5920 | null | null |
Cosine Similarity Measure According to a Convex Cost Function | cs.LG | In this paper, we describe a new vector similarity measure associated with a
convex cost function. Given two vectors, we determine the surface normals of
the convex function at the vectors. The angle between the two surface normals
is the similarity measure. The convex cost function can be the negative entropy
function, the total variation (TV) function, or the filtered variation function. The
convex cost function need not be differentiable everywhere. In general, we need
to compute the gradient of the cost function to compute the surface normals. If
the gradient does not exist at a given vector, it is possible to use
subgradients; the normal producing the smallest angle between the two
vectors is then used to compute the similarity measure.
| Osman Gunay, Cem Emre Akbas, A. Enis Cetin | null | 1410.6093 | null | null |
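A small sketch of the measure, taking the negative entropy (one of the costs the abstract names) as the convex function; gradients stand in for surface normals and the subgradient case is omitted:

```python
import numpy as np

def grad_neg_entropy(v, eps=1e-12):
    """Gradient (surface normal) of f(v) = sum_i v_i log v_i."""
    return np.log(v + eps) + 1.0

def convex_cost_similarity(x, y, grad=grad_neg_entropy):
    """Cosine of the angle between the surface normals of a convex cost
    function evaluated at the two vectors."""
    gx, gy = grad(x), grad(y)
    return gx @ gy / (np.linalg.norm(gx) * np.linalg.norm(gy))

x = np.array([0.7, 0.2, 0.1])
y = np.array([0.6, 0.3, 0.1])
print(convex_cost_similarity(x, y))   # close to 1 for similar vectors
```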
Online Energy Price Matrix Factorization for Power Grid Topology
Tracking | stat.ML cs.LG math.OC stat.AP | Grid security and open markets are two major smart grid goals. Transparency
of market data facilitates a competitive and efficient energy environment, yet
it may also reveal critical physical system information. Recovering the grid
topology based solely on publicly available market data is explored here.
Real-time energy prices are calculated as the Lagrange multipliers of
network-constrained economic dispatch; that is, via a linear program (LP)
typically solved every 5 minutes. Granted the grid Laplacian is a parameter of
this LP, one could infer such a topology-revealing matrix upon observing
successive LP dual outcomes. The matrix of spatio-temporal prices is first
shown to factor as the product of the inverse Laplacian times a sparse matrix.
Leveraging results from sparse matrix decompositions, topology recovery schemes
with complementary strengths are subsequently formulated. Solvers scalable to
high-dimensional and streaming market data are devised. Numerical validation
using real load data on the IEEE 30-bus grid provides useful input for current
and future market designs.
| Vassilis Kekatos, Georgios B. Giannakis, and Ross Baldick | 10.1109/TSG.2015.2469098 | 1410.6095 | null | null |
Attribute Efficient Linear Regression with Data-Dependent Sampling | cs.LG stat.ML | In this paper we analyze a budgeted learning setting, in which the learner
can only choose and observe a small subset of the attributes of each training
example. We develop efficient algorithms for ridge and lasso linear regression,
which utilize the geometry of the data by a novel data-dependent sampling
scheme. When the learner has prior knowledge on the second moments of the
attributes, the optimal sampling probabilities can be calculated precisely, and
result in data-dependent improvement factors for the excess risk over the
state-of-the-art that may be as large as $O(\sqrt{d})$, where $d$ is the
problem's dimension. Moreover, under reasonable assumptions our algorithms can
use fewer attributes than full-information algorithms, which is the main concern
in budgeted learning settings. To the best of our knowledge, these are the
first algorithms able to do so in our setting. Where no such prior knowledge is
available, we develop a simple estimation technique that given a sufficient
amount of training examples, achieves similar improvements. We complement our
theoretical analysis with experiments on several data sets which support our
claims.
| Doron Kukliansky, Ohad Shamir | null | 1410.6382 | null | null |
On Lower and Upper Bounds in Smooth Strongly Convex Optimization - A
Unified Approach via Linear Iterative Methods | math.OC cs.LG | In this thesis we develop a novel framework to study smooth and strongly
convex optimization algorithms, both deterministic and stochastic. Focusing on
quadratic functions we are able to examine optimization algorithms as a
recursive application of linear operators. This, in turn, reveals a powerful
connection between a class of optimization algorithms and the analytic theory
of polynomials whereby new lower and upper bounds are derived. In particular,
we present a new and natural derivation of Nesterov's well-known Accelerated
Gradient Descent method by employing simple 'economic' polynomials. This rather
natural interpretation of AGD contrasts with earlier ones which lacked a
simple, yet solid, motivation. Lastly, whereas existing lower bounds are only
valid when the dimensionality scales with the number of iterations, our lower
bound holds in the natural regime where the dimensionality is fixed.
| Yossi Arjevani | null | 1410.6387 | null | null |
A Parallel and Efficient Algorithm for Learning to Match | cs.LG cs.AI | Many tasks in data mining and related fields can be formalized as matching
between objects in two heterogeneous domains, including collaborative
filtering, link prediction, image tagging, and web search. Machine learning
techniques, referred to as learning-to-match in this paper, have been
successfully applied to these problems. Among them, a class of state-of-the-art
methods, named feature-based matrix factorization, formalize the task as an
extension to matrix factorization by incorporating auxiliary features into the
model. Unfortunately, making those algorithms scale to real world problems is
challenging, and simple parallelization strategies fail due to the complex
cross-talk patterns between sub-tasks. In this paper, we tackle this
challenge with a novel parallel and efficient algorithm for feature-based
matrix factorization. Our algorithm, based on coordinate descent, can easily
handle hundreds of millions of instances and features on a single machine. The
key recipe of this algorithm is an iterative relaxation of the objective to
facilitate parallel updates of parameters, with guaranteed convergence on
minimizing the original objective function. Experimental results demonstrate
that the proposed method is effective on a wide range of matching problems,
with efficiency significantly improved over the baselines while accuracy
remains unchanged.
| Jingbo Shang, Tianqi Chen, Hang Li, Zhengdong Lu, Yong Yu | null | 1410.6414 | null | null |
Model Selection for Topic Models via Spectral Decomposition | stat.ML cs.IR cs.LG stat.CO | Topic models have achieved significant successes in analyzing large-scale
text corpora. In practical applications, we are always confronted with the
challenge of model selection, i.e., how to appropriately set the number of
topics. Following recent advances in topic model inference via tensor
decomposition, we make a first attempt to provide theoretical analysis on model
selection in latent Dirichlet allocation. Under mild conditions, we derive the
upper bound and lower bound on the number of topics given a text collection of
finite size. Experimental results demonstrate that our bounds are accurate and
tight. Furthermore, using Gaussian mixture model as an example, we show that
our methodology can be easily generalized to model selection analysis for other
latent models.
| Dehua Cheng, Xinran He, Yan Liu | null | 1410.6466 | null | null |
Online and Stochastic Gradient Methods for Non-decomposable Loss
Functions | cs.LG stat.ML | Modern applications in sensitive domains such as biometrics and medicine
frequently require the use of non-decomposable loss functions such as
precision@k, F-measure etc. Compared to point loss functions such as
hinge-loss, these offer much more fine-grained control over prediction, but at
the same time present novel challenges in terms of algorithm design and
analysis. In this work we initiate a study of online learning techniques for
such non-decomposable loss functions with an aim to enable incremental learning
as well as design scalable solvers for batch problems. To this end, we propose
an online learning framework for such loss functions. Our model enjoys several
nice properties, chief amongst them being the existence of efficient online
learning algorithms with sublinear regret and online to batch conversion
bounds. Our model is a provable extension of existing online learning models
for point loss functions. We instantiate two popular losses, prec@k and pAUC,
in our model and prove sublinear regret bounds for both of them. Our proofs
require a novel structural lemma over ranked lists which may be of independent
interest. We then develop scalable stochastic gradient descent solvers for
non-decomposable loss functions. We show that for a large family of loss
functions satisfying a certain uniform convergence property (that includes
prec@k, pAUC, and F-measure), our methods provably converge to the empirical
risk minimizer. Such uniform convergence results were not known for these
losses and we establish these using novel proof techniques. We then use
extensive experimentation on real life and benchmark datasets to establish that
our method can be orders of magnitude faster than a recently proposed cutting
plane method.
| Purushottam Kar, Harikrishna Narasimhan, Prateek Jain | null | 1410.6776 | null | null |
Dimensionality Reduction for k-Means Clustering and Low Rank
Approximation | cs.DS cs.LG | We show how to approximate a data matrix $\mathbf{A}$ with a much smaller
sketch $\mathbf{\tilde A}$ that can be used to solve a general class of
constrained k-rank approximation problems to within $(1+\epsilon)$ error.
Importantly, this class of problems includes $k$-means clustering and
unconstrained low rank approximation (i.e. principal component analysis). By
reducing data points to just $O(k)$ dimensions, our methods generically
accelerate any exact, approximate, or heuristic algorithm for these ubiquitous
problems.
For $k$-means dimensionality reduction, we provide $(1+\epsilon)$ relative
error results for many common sketching techniques, including random row
projection, column selection, and approximate SVD. For approximate principal
component analysis, we give a simple alternative to known algorithms that has
applications in the streaming setting. Additionally, we extend recent work on
column-based matrix reconstruction, giving column subsets that not only `cover'
a good subspace for $\mathbf{A}$, but can be used directly to compute this
subspace.
Finally, for $k$-means clustering, we show how to achieve a $(9+\epsilon)$
approximation by Johnson-Lindenstrauss projecting data points to just $O(\log
k/\epsilon^2)$ dimensions. This gives the first result that leverages the
specific structure of $k$-means to achieve dimension independent of input size
and sublinear in $k$.
| Michael B. Cohen, Sam Elder, Cameron Musco, Christopher Musco,
Madalina Persu | null | 1410.6801 | null | null |
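The final k-means result reduces to a short recipe in practice: Johnson-Lindenstrauss-project the rows to O(log k / eps^2) dimensions and cluster there. A sketch, where the projection-dimension constant is a placeholder rather than the paper's:

```python
import numpy as np
from sklearn.cluster import KMeans

def jl_kmeans(A, k, eps=0.5, seed=0):
    """Project rows of A to O(log k / eps^2) dimensions with a random
    Gaussian sketch, then run k-means in the reduced space."""
    n, d = A.shape
    m = max(1, int(np.ceil(np.log(k) / eps ** 2)))       # placeholder constant
    rng = np.random.default_rng(seed)
    Pi = rng.standard_normal((d, m)) / np.sqrt(m)        # JL projection
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(A @ Pi)

A = np.random.default_rng(1).standard_normal((500, 100))
print(np.bincount(jl_kmeans(A, k=5)))                    # cluster sizes
```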
Clustering Words by Projection Entropy | cs.CL cs.LG | We apply entropy agglomeration (EA), a recently introduced algorithm, to
cluster the words of a literary text. EA is a greedy agglomerative procedure
that minimizes projection entropy (PE), a function that can quantify the
segmentedness of an element set. To apply it, the text is reduced to a feature
allocation, a combinatorial object to represent the word occurrences in the
text's paragraphs. The experiment results demonstrate that EA, despite its
reduction and simplicity, is useful in capturing significant relationships
among the words in the text. This procedure was implemented in Python and
published as free software: REBUS.
| I\c{s}{\i}k Bar{\i}\c{s} Fidaner, Ali Taylan Cemgil | null | 1410.6830 | null | null |
Covariance Matrices for Mean Field Variational Bayes | stat.ML cs.LG stat.ME | Mean Field Variational Bayes (MFVB) is a popular posterior approximation
method due to its fast runtime on large-scale data sets. However, it is well
known that a major failing of MFVB is its (sometimes severe) underestimates of
the uncertainty of model variables and lack of information about model variable
covariance. We develop a fast, general methodology for exponential families
that augments MFVB to deliver accurate uncertainty estimates for model
variables -- both for individual variables and coherently across variables.
MFVB for exponential families defines a fixed-point equation in the means of
the approximating posterior, and our approach yields a covariance estimate by
perturbing this fixed point. Inspired by linear response theory, we call our
method linear response variational Bayes (LRVB). We demonstrate the accuracy of
our method on simulated data sets.
| Ryan Giordano, Tamara Broderick | null | 1410.6853 | null | null |
Screening Rules for Overlapping Group Lasso | stat.ML cs.LG | Recently, to solve large-scale lasso and group lasso problems, screening
rules have been developed, the goal of which is to reduce the problem size by
efficiently discarding zero coefficients using simple rules independently of
the others. However, screening for overlapping group lasso remains an open
challenge because the overlaps between groups make it infeasible to test each
group independently. In this paper, we develop screening rules for overlapping
group lasso. To address the challenge arising from groups with overlaps, we
take into account overlapping groups only if they are inclusive of the group
being tested, and then we derive screening rules, adopting the dual polytope
projection approach. This strategy allows us to screen each group independently
of the others. In our experiments, we demonstrate the efficiency of our
screening rules on various datasets.
| Seunghak Lee and Eric P. Xing | null | 1410.6880 | null | null |
Differentially- and non-differentially-private random decision trees | cs.LG | We consider supervised learning with random decision trees, where the tree
construction is completely random. The method is popularly used and works well
in practice despite the simplicity of the setting, but its statistical
mechanism is not yet well-understood. In this paper we provide strong
theoretical guarantees regarding learning with random decision trees. We
analyze and compare three different variants of the algorithm that have minimal
memory requirements: majority voting, threshold averaging and probabilistic
averaging. The random structure of the tree enables us to adapt these methods
to a differentially-private setting; thus, we also propose differentially-private
versions of all three schemes. We give upper-bounds on the generalization error
and mathematically explain how the accuracy depends on the number of random
decision trees. Furthermore, we prove that only a logarithmic (in the size of
the dataset) number of independently selected random decision trees suffices to
correctly classify most of the data, even when differential-privacy guarantees
must be maintained. We empirically show that majority voting and threshold
averaging give the best accuracy, also for conservative users requiring high
privacy guarantees. Furthermore, we demonstrate that a simple majority voting
rule is an especially good candidate for the differentially-private classifier
since it is much less sensitive to the choice of forest parameters than other
methods.
| Mariusz Bojarski, Anna Choromanska, Krzysztof Choromanski, Yann LeCun | null | 1410.6973 | null | null |
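To make the setting concrete, here is a toy version of the majority-voting variant with completely random tree construction: splits never look at the labels, only the leaf counts do. This is a plain sketch, not the differentially-private version:

```python
import numpy as np

def grow(ranges, depth, rng):
    """Completely random tree: features and thresholds are drawn without
    ever consulting the labels."""
    if depth == 0:
        return {"counts": {}}
    f = int(rng.integers(len(ranges)))
    lo, hi = ranges[f]
    return {"f": f, "t": rng.uniform(lo, hi),
            "left": grow(ranges, depth - 1, rng),
            "right": grow(ranges, depth - 1, rng)}

def leaf(node, x):
    while "f" in node:
        node = node["right"] if x[node["f"]] > node["t"] else node["left"]
    return node

def fit(tree, X, y):
    for xi, yi in zip(X, y):             # labels only touch leaf statistics
        c = leaf(tree, xi)["counts"]
        c[yi] = c.get(yi, 0) + 1

def majority_vote(forest, x):
    votes = {}
    for tree in forest:
        counts = leaf(tree, x)["counts"]
        if counts:
            v = max(counts, key=counts.get)
            votes[v] = votes.get(v, 0) + 1
    return max(votes, key=votes.get)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4)); y = (X[:, 0] + X[:, 1] > 0).astype(int)
ranges = list(zip(X.min(axis=0), X.max(axis=0)))
forest = [grow(ranges, depth=5, rng=rng) for _ in range(25)]
for t in forest:
    fit(t, X, y)
print(majority_vote(forest, np.array([1.0, 1.0, 0.0, 0.0])))  # likely 1
```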
Notes on using Determinantal Point Processes for Clustering with
Applications to Text Clustering | cs.LG | In this paper, we compare three initialization schemes for the KMEANS
clustering algorithm: 1) random initialization (KMEANSRAND), 2) KMEANS++, and
3) KMEANSD++. Both KMEANSRAND and KMEANS++ have a major drawback: the value of
k needs to be set by the user of the algorithms. (Kang 2013) recently proposed a
novel use of determinantal point processes for sampling the initial centroids
for the KMEANS algorithm (we call it KMEANSD++). They, however, do not provide
any evaluation establishing that KMEANSD++ is better than other algorithms. In
this paper, we show that the performance of KMEANSD++ is comparable to KMEANS++
(both of which are better than KMEANSRAND), with KMEANSD++ having the
additional advantage that it can automatically approximate the value of k.
| Apoorv Agarwal, Anna Choromanska, Krzysztof Choromanski | null | 1410.6975 | null | null |
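For reference, the KMEANS++ baseline discussed here seeds centroids by D^2-sampling: each new seed is drawn with probability proportional to its squared distance from the nearest existing seed. The DPP-based KMEANSD++ sampler itself is not reproduced. A minimal sketch:

```python
import numpy as np

def kmeanspp_seeds(X, k, rng):
    """k-means++ seeding via D^2-sampling."""
    seeds = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([((X - s) ** 2).sum(axis=1) for s in seeds], axis=0)
        seeds.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(seeds)

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 2))
print(kmeanspp_seeds(X, k=3, rng=rng))
```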
Local Rademacher Complexity for Multi-label Learning | stat.ML cs.LG | We analyze the local Rademacher complexity of empirical risk minimization
(ERM)-based multi-label learning algorithms, and in doing so propose a new
algorithm for multi-label learning. Rather than using the trace norm to
regularize the multi-label predictor, we instead minimize the tail sum of the
singular values of the predictor in multi-label learning. Benefiting from the
use of the local Rademacher complexity, our algorithm, therefore, has a sharper
generalization error bound and a faster convergence rate. Compared to methods
that minimize over all singular values, concentrating on the tail singular
values results in better recovery of the low-rank structure of the multi-label
predictor, which plays an important role in exploiting label correlations. We
propose a new conditional singular value thresholding algorithm to solve the
resulting objective function. Empirical studies on real-world datasets validate
our theoretical results and demonstrate the effectiveness of the proposed
algorithm.
| Chang Xu, Tongliang Liu, Dacheng Tao, Chao Xu | null | 1410.6990 | null | null |
A provable SVD-based algorithm for learning topics in dominant admixture
corpus | stat.ML cs.LG | Topic models, such as Latent Dirichlet Allocation (LDA), posit that documents
are drawn from admixtures of distributions over words, known as topics. The
inference problem of recovering topics from admixtures, is NP-hard. Assuming
separability, a strong assumption, [4] gave the first provable algorithm for
inference. For LDA model, [6] gave a provable algorithm using tensor-methods.
But [4,6] do not learn topic vectors with bounded $l_1$ error (a natural
measure for probability vectors). Our aim is to develop a model which makes
intuitive and empirically supported assumptions and to design an algorithm with
natural, simple components such as SVD, which provably solves the inference
problem for the model with bounded $l_1$ error. A topic in LDA and other models
is essentially characterized by a group of co-occurring words. Motivated by
this, we introduce topic-specific Catchwords, groups of words which occur with
strictly greater frequency in a topic than any other topic individually and are
required to have high frequency together rather than individually. A major
contribution of the paper is to show that under this more realistic assumption,
which is empirically verified on real corpora, a singular value decomposition
(SVD) based algorithm with a crucial pre-processing step of thresholding, can
provably recover the topics from a collection of documents drawn from Dominant
admixtures. Dominant admixtures are convex combination of distributions in
which one distribution has a significantly higher contribution than others.
Apart from the simplicity of the algorithm, the sample complexity has near
optimal dependence on $w_0$, the lowest probability that a topic is dominant,
and is better than [4]. Empirical evidence shows that on several real world
corpora, both Catchwords and Dominant admixture assumptions hold and the
proposed algorithm substantially outperforms the state of the art [5].
| Trapit Bansal, Chiranjib Bhattacharyya, Ravindran Kannan | null | 1410.6991 | null | null |
A PTAS for Agnostically Learning Halfspaces | cs.DS cs.LG | We present a PTAS for agnostically learning halfspaces w.r.t. the uniform
distribution on the $d$ dimensional sphere. Namely, we show that for every
$\mu>0$ there is an algorithm that runs in time
$\mathrm{poly}(d,\frac{1}{\epsilon})$, and is guaranteed to return a classifier
with error at most $(1+\mu)\mathrm{opt}+\epsilon$, where $\mathrm{opt}$ is the
error of the best halfspace classifier. This improves on Awasthi, Balcan and
Long [ABL14] who showed an algorithm with an (unspecified) constant
approximation ratio. Our algorithm combines the classical technique of
polynomial regression (e.g. [LMN89, KKMS05]), together with the new
localization technique of [ABL14].
| Amit Daniely | null | 1410.7050 | null | null |
Sparse Distributed Learning via Heterogeneous Diffusion Adaptive
Networks | cs.LG cs.DC cs.SY stat.ML | In-network distributed estimation of sparse parameter vectors via diffusion
LMS strategies has been studied and investigated in recent years. In all the
existing works, some convex regularization approach has been used at each node
of the network in order to achieve an overall network performance superior to
that of the simple diffusion LMS, albeit at the cost of increased computational
overhead. In this paper, we provide analytical as well as experimental results
which show that the convex regularization can be selectively applied only to
some chosen nodes, keeping the rest of the nodes sparsity-agnostic, while still
enjoying the same optimum behavior as can be realized by deploying the convex
regularization at all the nodes. Due to the incorporation of unregularized
learning at a subset of nodes, less computational cost is needed in the
proposed approach. We also provide a guideline for selection of the sparsity
aware nodes and a closed form expression for the optimum regularization
parameter.
| Bijit Kumar Das, Mrityunjoy Chakraborty and Jer\'onimo Arenas-Garc\'ia | 10.1109/ISCAS.2015.7168664 | 1410.7057 | null | null |
Random Sampling in an Age of Automation: Minimizing Expenditures through
Balanced Collection and Annotation | cs.CY cs.LG stat.ME | Methods for automated collection and annotation are changing the
cost-structures of sampling surveys for a wide range of applications. Digital
samples in the form of images or audio recordings can be collected rapidly, and
annotated by computer programs or crowd workers. We consider the problem of
estimating a population mean under these new cost-structures, and propose a
Hybrid-Offset sampling design. This design utilizes two annotators: a primary,
which is accurate but costly (e.g. a human expert) and an auxiliary which is
noisy but cheap (e.g. a computer program), in order to minimize total sampling
expenditures. Our analysis gives necessary conditions for the Hybrid-Offset
design and specifies optimal sample sizes for both annotators. Simulations on
data from a coral reef survey program indicate that the Hybrid-Offset design
outperforms several alternative sampling designs. In particular, sampling
expenditures are reduced 50% compared to the Conventional design currently
deployed by the coral ecologists.
| Oscar Beijbom | null | 1410.7074 | null | null |
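The two-annotator idea can be illustrated with a classic difference-estimator shape: cheap annotations on a large sample, corrected by the offset measured on a small doubly-annotated subsample. This is a hypothetical illustration of the flavor of the design; the paper's actual Hybrid-Offset estimator and its optimal sample-size conditions are not reproduced here.

```python
import numpy as np

def hybrid_offset_estimate(aux_large, aux_paired, primary_paired):
    """Mean of the large cheap-annotator sample, shifted by the offset
    between the two annotators measured on the paired subsample."""
    offset = np.mean(primary_paired) - np.mean(aux_paired)
    return np.mean(aux_large) + offset

rng = np.random.default_rng(0)
truth = rng.binomial(1, 0.3, size=10_000).astype(float)          # primary labels
aux = np.clip(truth + rng.normal(0.1, 0.2, truth.shape), 0, 1)   # biased, noisy
big = rng.choice(len(truth), 2_000, replace=False)               # cheap pass
small = rng.choice(big, 100, replace=False)                      # costly pass
print(hybrid_offset_estimate(aux[big], aux[small], truth[small]))  # near 0.3
```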
A data-driven method for syndrome type identification and classification
in traditional Chinese medicine | cs.LG stat.AP | Objective: The efficacy of traditional Chinese medicine (TCM) treatments for
Western medicine (WM) diseases relies heavily on the proper classification of
patients into TCM syndrome types. We develop a data-driven method for solving
the classification problem, where syndrome types are identified and quantified
based on patterns detected in unlabeled symptom survey data.
Method: Latent class analysis (LCA) has been applied in WM research to solve
a similar problem, i.e., to identify subtypes of a patient population in the
absence of a gold standard. A widely known weakness of LCA is that it makes an
unrealistically strong independence assumption. We relax the assumption by
first detecting symptom co-occurrence patterns from survey data and then using
those patterns instead of the symptoms as features for LCA.
Results: The result of
the investigation is a six-step method: Data collection, symptom co-occurrence
pattern discovery, pattern interpretation, syndrome identification, syndrome
type identification, and syndrome type classification. A software package
called Lantern is developed to support the application of the method. The
method is illustrated using a data set on Vascular Mild Cognitive Impairment
(VMCI).
Conclusions: A data-driven method for TCM syndrome identification and
classification is presented. The method can be used to answer the following
questions about a Western medicine disease: What TCM syndrome types are there
among the patients with the disease? What is the prevalence of each syndrome
type? What are the statistical characteristics of each syndrome type in terms
of occurrence of symptoms? How can we determine the syndrome type(s) of a
patient?
| Nevin L. Zhang, Chen Fu, Teng Fei Liu, Bao Xin Chen, Kin Man Poon, Pei
Xian Chen, Yun Ling Zhang | null | 1410.7140 | null | null |
Exponentiated Subgradient Algorithm for Online Optimization under the
Random Permutation Model | math.OC cs.DS cs.LG | Online optimization problems arise in many resource allocation tasks, where
the future demands for each resource and the associated utility functions
change over time and are not known apriori, yet resources need to be allocated
at every point in time despite the future uncertainty. In this paper, we
consider online optimization problems with general concave utilities. We modify
and extend an online optimization algorithm proposed by Devanur et al. for
linear programming to this general setting. The model we use for the arrival of
the utilities and demands is known as the random permutation model, where a
fixed collection of utilities and demands are presented to the algorithm in
random order. We prove that under this model the algorithm achieves a
competitive ratio of $1-O(\epsilon)$ under a near-optimal assumption that the
bid to budget ratio is $O (\frac{\epsilon^2}{\log({m}/{\epsilon})})$, where $m$
is the number of resources, while enjoying a significantly lower computational
cost than the optimal algorithm proposed by Kesselheim et al. We draw a
connection between the proposed algorithm and subgradient methods used in
convex optimization. In addition, we present numerical experiments that
demonstrate the performance and speed of this algorithm in comparison to
existing algorithms.
| Reza Eghbali, Jon Swenson, Maryam Fazel | null | 1410.7171 | null | null |
Heteroscedastic Treed Bayesian Optimisation | cs.LG math.OC stat.ML | Optimising black-box functions is important in many disciplines, such as
tuning machine learning models, robotics, finance and mining exploration.
Bayesian optimisation is a state-of-the-art technique for the global
optimisation of black-box functions which are expensive to evaluate. At the
core of this approach is a Gaussian process prior that captures our belief
about the distribution over functions. However, in many cases a single Gaussian
process is not flexible enough to capture non-stationarity in the objective
function. Consequently, heteroscedasticity negatively affects performance of
traditional Bayesian methods. In this paper, we propose a novel prior model
with hierarchical parameter learning that tackles the problem of
non-stationarity in Bayesian optimisation. Our results demonstrate substantial
improvements in a wide range of applications, including automatic machine
learning and mining exploration.
| John-Alexander M. Assael, Ziyu Wang, Bobak Shahriari, Nando de Freitas | null | 1410.7172 | null | null |
Exact and Heuristic Algorithms for Semi-Nonnegative Matrix Factorization | math.NA cs.LG cs.NA math.OC stat.ML | Given a matrix $M$ (not necessarily nonnegative) and a factorization rank
$r$, semi-nonnegative matrix factorization (semi-NMF) looks for a matrix $U$
with $r$ columns and a nonnegative matrix $V$ with $r$ rows such that $UV$ is
the best possible approximation of $M$ according to some metric. In this paper,
we study the properties of semi-NMF from which we develop exact and heuristic
algorithms. Our contribution is threefold. First, we prove that the error of a
semi-NMF of rank $r$ has to be smaller than the best unconstrained
approximation of rank $r-1$. This leads us to a new initialization procedure
based on the singular value decomposition (SVD) with a guarantee on the quality
of the approximation. Second, we propose an exact algorithm (that is, an
algorithm that finds an optimal solution), also based on the SVD, for a certain
class of matrices (including nonnegative irreducible matrices) from which we
derive an initialization for matrices not belonging to that class. Numerical
experiments illustrate that this second approach performs extremely well, and
allows us to compute optimal semi-NMF decompositions in many situations.
Finally, we analyze the computational complexity of semi-NMF proving its
NP-hardness, already in the rank-one case (that is, for $r = 1$), and we show
that semi-NMF is sometimes ill-posed (that is, an optimal solution does not
exist).
| Nicolas Gillis and Abhishek Kumar | 10.1137/140993272 | 1410.7220 | null | null |
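A heuristic alternating scheme in the spirit of the SVD-based initialization described above, assuming Frobenius error; the clipped least-squares update for V is a crude stand-in for an exact nonnegative least-squares step, so this is a sketch rather than the paper's algorithm:

```python
import numpy as np

def semi_nmf(M, r, n_iter=200):
    """Semi-NMF M ~ U V with U unconstrained and V >= 0, initialized
    from a truncated SVD."""
    W, s, Vt = np.linalg.svd(M, full_matrices=False)
    U = W[:, :r] * s[:r]
    V = np.maximum(Vt[:r, :], 1e-9)                    # crude nonnegative init
    for _ in range(n_iter):
        U = M @ np.linalg.pinv(V)                      # unconstrained LS
        V = np.maximum(np.linalg.pinv(U) @ M, 0.0)     # clipped LS (heuristic)
    return U, V

M = np.random.default_rng(0).standard_normal((30, 20))
U, V = semi_nmf(M, r=4)
print(np.linalg.norm(M - U @ V))                       # approximation error
```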
Feature Selection through Minimization of the VC dimension | cs.LG | Feature selection involves identifying the most relevant subset of input
features, with a view to improving generalization of predictive models by
reducing overfitting. Directly searching for the most relevant combination of
attributes is NP-hard. Variable selection is of critical importance in many
applications, such as micro-array data analysis, where selecting a small number
of discriminative features is crucial to developing useful models of disease
mechanisms, as well as for prioritizing targets for drug discovery. The
recently proposed Minimal Complexity Machine (MCM) provides a way to learn a
hyperplane classifier by minimizing an exact ($\Theta$) bound on its
VC dimension. It is well known that a lower VC dimension contributes to good
generalization. For a linear hyperplane classifier in the input space, the VC
dimension is upper bounded by the number of features; hence, a linear
classifier with a small VC dimension is parsimonious in the set of features it
employs. In this paper, we use the linear MCM to learn a classifier in which a
large number of weights are zero; features with non-zero weights are the ones
that are chosen. Selected features are used to learn a kernel SVM classifier.
On a number of benchmark datasets, the features chosen by the linear MCM yield
comparable or better test set accuracy than when methods such as ReliefF and
FCBF are used for the task. The linear MCM typically chooses one-tenth the
number of attributes chosen by the other methods; on some very high dimensional
datasets, the MCM chooses about $0.6\%$ of the features; in comparison, ReliefF
and FCBF choose 70 to 140 times more features, thus demonstrating that
minimizing the VC dimension may provide a new, and very effective route for
feature selection and for learning sparse representations.
| Jayadeva, Sanjit S. Batra, and Siddharth Sabharwal | null | 1410.7372 | null | null |
Maximally Informative Hierarchical Representations of High-Dimensional
Data | stat.ML cs.LG physics.data-an | We consider a set of probabilistic functions of some input variables as a
representation of the inputs. We present bounds on how informative a
representation is about input data. We extend these bounds to hierarchical
representations so that we can quantify the contribution of each layer towards
capturing the information in the original data. The special form of these
bounds leads to a simple, bottom-up optimization procedure to construct
hierarchical representations that are also maximally informative about the
data. This optimization has linear computational complexity and constant sample
complexity in the number of variables. These results establish a new approach
to unsupervised learning of deep representations that is both principled and
practical. We demonstrate the usefulness of the approach on both synthetic and
real-world data.
| Greg Ver Steeg and Aram Galstyan | null | 1410.7404 | null | null |
Fast Function to Function Regression | stat.ML cs.LG | We analyze the problem of regression when both input covariates and output
responses are functions from a nonparametric function class. Function to
function regression (FFR) covers a large range of interesting applications
including time-series prediction problems, and also more general tasks like
studying a mapping between two separate types of distributions. However,
previous nonparametric estimators for FFR type problems scale badly
computationally with the number of input/output pairs in a data-set. Given the
complexity of a mapping between general functions it may be necessary to
consider large data-sets in order to achieve a low estimation risk. To address
this issue, we develop a novel scalable nonparametric estimator, the
Triple-Basis Estimator (3BE), which is capable of operating over datasets with
many instances. To the best of our knowledge, the 3BE is the first
nonparametric FFR estimator that can scale to massive datasets. We analyze the
3BE's risk and derive an upper bound on its rate. Furthermore, we show an improvement
of several orders of magnitude in terms of prediction speed and a reduction in
error over previous estimators in various real-world data-sets.
| Junier Oliva, Willie Neiswanger, Barnabas Poczos, Eric Xing, Jeff
Schneider | null | 1410.7414 | null | null |
Consensus Message Passing for Layered Graphical Models | cs.CV cs.AI cs.LG | Generative models provide a powerful framework for probabilistic reasoning.
However, in many domains their use has been hampered by the practical
difficulties of inference. This is particularly the case in computer vision,
where models of the imaging process tend to be large, loopy and layered. For
this reason bottom-up conditional models have traditionally dominated in such
domains. We find that widely-used, general-purpose message passing inference
algorithms such as Expectation Propagation (EP) and Variational Message Passing
(VMP) fail on the simplest of vision models. With these models in mind, we
introduce a modification to message passing that learns to exploit their
layered structure by passing 'consensus' messages that guide inference towards
good solutions. Experiments on a variety of problems show that the proposed
technique leads to significantly more accurate inference results, not only when
compared to standard EP and VMP, but also when compared to competitive
bottom-up conditional models.
| Varun Jampani, S. M. Ali Eslami, Daniel Tarlow, Pushmeet Kohli and
John Winn | null | 1410.7452 | null | null |
Parallel training of DNNs with Natural Gradient and Parameter Averaging | cs.NE cs.LG stat.ML | We describe the neural-network training framework used in the Kaldi speech
recognition toolkit, which is geared towards training DNNs with large amounts
of training data using multiple GPU-equipped or multi-core machines. In order
to be as hardware-agnostic as possible, we needed a way to use multiple
machines without generating excessive network traffic. Our method is to average
the neural network parameters periodically (typically every minute or two), and
redistribute the averaged parameters to the machines for further training. Each
machine sees different data. By itself, this method does not work very well.
However, we have another method, an approximate and efficient implementation of
Natural Gradient for Stochastic Gradient Descent (NG-SGD), which seems to allow
our periodic-averaging method to work well, as well as substantially improving
the convergence of SGD on a single machine.
| Daniel Povey, Xiaohui Zhang, Sanjeev Khudanpur | null | 1410.7455 | null | null |
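The periodic-averaging loop itself is short. In the sketch below, plain SGD on a toy least-squares problem stands in for Kaldi's NG-SGD, and the worker/shard setup is hypothetical:

```python
import numpy as np

class Worker:
    """Toy worker: least-squares gradient on its own data shard."""
    def __init__(self, X, y, lr=0.01):
        self.X, self.y, self.lr, self.dim = X, y, lr, X.shape[1]
    def grad(self, w):
        return self.X.T @ (self.X @ w - self.y) / len(self.y)

def periodic_averaging(workers, n_rounds=50, local_steps=20):
    """Each worker trains locally from the shared parameters; the parameters
    are then averaged and redistributed for the next round."""
    w = np.zeros(workers[0].dim)
    for _ in range(n_rounds):
        local = []
        for wk in workers:
            v = w.copy()
            for _ in range(local_steps):
                v -= wk.lr * wk.grad(v)   # local SGD step
            local.append(v)
        w = np.mean(local, axis=0)        # average + redistribute
    return w

rng = np.random.default_rng(0)
w_true = rng.standard_normal(5)
X = rng.standard_normal((400, 5)); y = X @ w_true
workers = [Worker(X[i::4], y[i::4]) for i in range(4)]  # 4 machines, 4 shards
print(np.linalg.norm(periodic_averaging(workers) - w_true))  # small
```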
Learning deep dynamical models from image pixels | stat.ML cs.LG cs.NE cs.SY | Modeling dynamical systems is important in many disciplines, e.g., control,
robotics, or neurotechnology. Commonly the state of these systems is not
directly observed, but only available through noisy and potentially
high-dimensional observations. In these cases, system identification, i.e.,
finding the measurement mapping and the transition mapping (system dynamics) in
latent space can be challenging. For linear system dynamics and measurement
mappings efficient solutions for system identification are available. However,
in practical applications, the linearity assumption does not hold, requiring
non-linear system identification techniques. If additionally the observations
are high-dimensional (e.g., images), non-linear system identification is
inherently hard. To address the problem of non-linear system identification
from high-dimensional observations, we combine recent advances in deep learning
and system identification. In particular, we jointly learn a low-dimensional
embedding of the observation by means of deep auto-encoders and a predictive
transition model in this low-dimensional space. We demonstrate that our model
enables learning good predictive models of dynamical systems from pixel
information only.
| Niklas Wahlstr\"om, Thomas B. Sch\"on, Marc Peter Deisenroth | null | 1410.7550 | null | null |
Fast Algorithms for Online Stochastic Convex Programming | cs.LG cs.DS math.OC | We introduce the online stochastic Convex Programming (CP) problem, a very
general version of stochastic online problems which allows arbitrary concave
objectives and convex feasibility constraints. Many well-studied problems like
online stochastic packing and covering, online stochastic matching with concave
returns, etc. form a special case of online stochastic CP. We present fast
algorithms for these problems, which achieve near-optimal regret guarantees for
both the i.i.d. and the random permutation models of stochastic inputs. When
applied to the special case of online packing, our ideas yield a simpler and
faster primal-dual algorithm for this well studied problem, which achieves the
optimal competitive ratio. Our techniques make explicit the connection of
primal-dual paradigm and online learning to online stochastic CP.
| Shipra Agrawal, Nikhil R. Devanur | null | 1410.7596 | null | null |
Learning graphical models from the Glauber dynamics | cs.LG cs.IT math.IT stat.CO stat.ML | In this paper we consider the problem of learning undirected graphical models
from data generated according to the Glauber dynamics. The Glauber dynamics is
a Markov chain that sequentially updates individual nodes (variables) in a
graphical model and it is frequently used to sample from the stationary
distribution (to which it converges given sufficient time). Additionally, the
Glauber dynamics is a natural dynamical model in a variety of settings. This
work deviates from the standard formulation of graphical model learning in the
literature, where one assumes access to i.i.d. samples from the distribution.
Much of the research on graphical model learning has been directed towards
finding algorithms with low computational cost. As the main result of this
work, we establish that the problem of reconstructing binary pairwise graphical
models is computationally tractable when we observe the Glauber dynamics.
Specifically, we show that a binary pairwise graphical model on $p$ nodes with
maximum degree $d$ can be learned in time $f(d)p^2\log p$, for a function
$f(d)$, using nearly the information-theoretic minimum number of samples.
| Guy Bresler and David Gamarnik and Devavrat Shah | null | 1410.7659 | null | null |
Non-convex Robust PCA | cs.IT cs.LG math.IT stat.ML | We propose a new method for robust PCA -- the task of recovering a low-rank
matrix from sparse corruptions that are of unknown value and support. Our
method involves alternating between projecting appropriate residuals onto the
set of low-rank matrices, and the set of sparse matrices; each projection is
{\em non-convex} but easy to compute. In spite of this non-convexity, we
establish exact recovery of the low-rank matrix, under the same conditions that
are required by existing methods (which are based on convex optimization). For
an $m \times n$ input matrix ($m \leq n$), our method has a running time of
$O(r^2mn)$ per iteration, and needs $O(\log(1/\epsilon))$ iterations to reach
an accuracy of $\epsilon$. This is close to the running time of simple PCA via
the power method, which requires $O(rmn)$ per iteration, and
$O(\log(1/\epsilon))$ iterations. In contrast, existing methods for robust PCA,
which are based on convex optimization, have $O(m^2n)$ complexity per
iteration, and take $O(1/\epsilon)$ iterations, i.e., exponentially more
iterations for the same accuracy.
Experiments on both synthetic and real data establish the improved speed
and accuracy of our method over existing convex implementations.
| Praneeth Netrapalli and U N Niranjan and Sujay Sanghavi and Animashree
Anandkumar and Prateek Jain | null | 1410.7660 | null | null |
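The alternating projections are a few lines of linear algebra: hard-threshold the residual to get S, take a rank-r SVD projection to get L. The annealed threshold schedule below is a simple stand-in for the paper's carefully chosen one:

```python
import numpy as np

def robust_pca(M, r, n_iter=60):
    """Alternating non-convex projections for M ~ L + S: hard-threshold
    the residual for the sparse part, rank-r SVD for the low-rank part."""
    L = np.zeros_like(M)
    thresh = np.abs(M).max()
    for _ in range(n_iter):
        S = np.where(np.abs(M - L) > thresh, M - L, 0.0)  # sparse projection
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :r] * s[:r]) @ Vt[:r]                   # rank-r projection
        thresh = max(0.7 * thresh, 1e-3)                  # anneal threshold
    return L, S

rng = np.random.default_rng(0)
L0 = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))
mask = rng.random((50, 40)) < 0.05
S0 = np.where(mask, rng.uniform(-10, 10, (50, 40)), 0.0)
L, S = robust_pca(L0 + S0, r=5)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))        # relative error of L
```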
Trend Filtering on Graphs | stat.ML cs.AI cs.LG stat.ME | We introduce a family of adaptive estimators on graphs, based on penalizing
the $\ell_1$ norm of discrete graph differences. This generalizes the idea of
trend filtering [Kim et al. (2009), Tibshirani (2014)], used for univariate
nonparametric regression, to graphs. Analogous to the univariate case, graph
trend filtering exhibits a level of local adaptivity unmatched by the usual
$\ell_2$-based graph smoothers. It is also defined by a convex minimization
problem that is readily solved (e.g., by fast ADMM or Newton algorithms). We
demonstrate the merits of graph trend filtering through examples and theory.
| Yu-Xiang Wang, James Sharpnack, Alex Smola, Ryan J. Tibshirani | null | 1410.7690 | null | null |
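The defining convex problem is compact enough to state directly. The sketch below solves the first-order case (the graph fused lasso) with cvxpy, rather than the fast ADMM or Newton solvers the abstract mentions:

```python
import numpy as np
import cvxpy as cp

def graph_trend_filter(y, edges, lam):
    """Graph trend filtering, k = 0: least squares plus an l1 penalty on
    differences across graph edges."""
    D = np.zeros((len(edges), len(y)))            # oriented incidence matrix
    for k, (i, j) in enumerate(edges):
        D[k, i], D[k, j] = 1.0, -1.0
    beta = cp.Variable(len(y))
    cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - beta)
                           + lam * cp.norm1(D @ beta))).solve()
    return beta.value

# chain graph: trend filtering reduces to the 1d fused lasso
y = np.r_[np.zeros(20), np.ones(20)] \
    + 0.3 * np.random.default_rng(0).standard_normal(40)
edges = [(i, i + 1) for i in range(39)]
print(np.round(graph_trend_filter(y, edges, lam=2.0), 2))
```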
Anomaly Detection Framework Using Rule Extraction for Efficient
Intrusion Detection | cs.LG cs.CR | Huge datasets in cyber security, such as network traffic logs, can be
analyzed using machine learning and data mining methods. However, the amount of
collected data is increasing, which makes analysis more difficult. Many machine
learning methods have not been designed for big datasets, and consequently are
slow and difficult to understand. We address the issue of efficient network
traffic classification by creating an intrusion detection framework that
applies dimensionality reduction and conjunctive rule extraction. The system
can perform unsupervised anomaly detection and use this information to create
conjunctive rules that classify huge amounts of traffic in real time. We test
the implemented system with the widely used KDD Cup 99 dataset and real-world
network logs to confirm that the performance is satisfactory. This system is
transparent and does not work like a black box, making it intuitive for domain
experts, such as network administrators.
| Antti Juvonen and Tuomo Sipola | null | 1410.7709 | null | null |
Generalized Product of Experts for Automatic and Principled Fusion of
Gaussian Process Predictions | cs.LG cs.AI stat.ML | In this work, we propose a generalized product of experts (gPoE) framework
for combining the predictions of multiple probabilistic models. We identify
four desirable properties that are important for scalability, expressiveness
and robustness, when learning and inferring with a combination of multiple
models. Through analysis and experiments, we show that gPoE of Gaussian
processes (GP) have these qualities, while no other existing combination
schemes satisfy all of them at the same time. The resulting GP-gPoE is highly
scalable as individual GP experts can be independently learned in parallel;
very expressive as the way experts are combined depends on the input rather
than fixed; the combined prediction is still a valid probabilistic model with
natural interpretation; and finally robust to unreliable predictions from
individual experts.
| Yanshuai Cao, David J. Fleet | null | 1410.7827 | null | null |
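For Gaussian experts the combination rule is closed form: the combined precision is a weighted sum of expert precisions. A minimal sketch, with the input-dependent weights left as fixed placeholders:

```python
import numpy as np

def gpoe_combine(means, variances, weights):
    """Generalized product of Gaussian experts: combined precision is the
    weighted sum of expert precisions; in gPoE the weights may depend on
    the input rather than being fixed."""
    precisions = np.asarray(weights) / np.asarray(variances)
    var = 1.0 / precisions.sum()
    mean = var * (precisions * np.asarray(means)).sum()
    return mean, var

# three experts; the confident ones dominate the combination
print(gpoe_combine(means=[1.0, 1.2, 5.0],
                   variances=[0.1, 0.2, 4.0],
                   weights=[0.5, 0.3, 0.2]))
```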
Fast Learning of Relational Dependency Networks | cs.LG | A Relational Dependency Network (RDN) is a directed graphical model widely
used for multi-relational data. These networks allow cyclic dependencies,
necessary to represent relational autocorrelations. We describe an approach for
learning both the RDN's structure and its parameters, given an input relational
database: First learn a Bayesian network (BN), then transform the Bayesian
network to an RDN. Thus fast Bayes net learning can provide fast RDN learning.
The BN-to-RDN transform comprises a simple, local adjustment of the Bayes net
structure and a closed-form transform of the Bayes net parameters. This method
can learn an RDN for a dataset with a million tuples in minutes. We empirically
compare our approach to state-of-the-art RDN learning methods that use
functional gradient boosting, on five benchmark datasets. Learning RDNs via BNs
scales much better to large datasets than learning RDNs with boosting, and
provides competitive accuracy in predictions.
| Oliver Schulte, Zhensong Qian, Arthur E. Kirkpatrick, Xiaoqian Yin,
Yan Sun | null | 1410.7835 | null | null |
A Markov Decision Process Analysis of the Cold Start Problem in Bayesian
Information Filtering | cs.LG cs.IR math.OC | We consider the information filtering problem, in which we face a stream of
items, and must decide which ones to forward to a user to maximize the number
of relevant items shown, minus a penalty for each irrelevant item shown.
Forwarding decisions are made separately in a personalized way for each user.
We focus on the cold-start setting for this problem, in which we have limited
historical data on the user's preferences, and must rely on feedback from
forwarded articles to learn the fraction of items relevant to the user in
each of several item categories. Performing well in this setting requires
trading exploration vs. exploitation, forwarding items that are likely to be
irrelevant, to allow learning that will improve later performance. In a
Bayesian setting, and using Markov decision processes, we show how the
Bayes-optimal forwarding algorithm can be computed efficiently when the user
will examine each forwarded article, and how an upper bound on the
Bayes-optimal procedure and a heuristic index policy can be obtained for the
setting when the user will examine only a limited number of forwarded items. We
present results from simulation experiments using parameters estimated using
historical data from arXiv.org.
| Xiaoting Zhao, Peter I. Frazier | null | 1410.7852 | null | null |
Collaborative Multi-sensor Classification via Sparsity-based
Representation | cs.CV cs.LG stat.ML | In this paper, we propose a general collaborative sparse representation
framework for multi-sensor classification, which takes into account the
correlations as well as complementary information between heterogeneous sensors
simultaneously while considering joint sparsity within each sensor's
observations. We also robustify our models to deal with the presence of sparse
noise and low-rank interference signals. Specifically, we demonstrate that
incorporating the noise or interference signal as a low-rank component in our
models is essential in a multi-sensor classification problem when multiple
co-located sources/sensors simultaneously record the same physical event. We
further extend our frameworks to kernelized models which rely on sparsely
representing a test sample in terms of all the training samples in a feature
space induced by a kernel function. A fast and efficient algorithm based on
the alternating direction method is proposed, whose convergence to an optimal
solution is guaranteed. Extensive experiments are conducted on several real
multi-sensor data sets and results are compared with the conventional
classifiers to verify the effectiveness of the proposed methods.
| Minh Dao, Nam H. Nguyen, Nasser M. Nasrabadi, and Trac D. Tran | null | 1410.7876 | null | null |
Global Bandits with Hölder Continuity | cs.LG | Standard Multi-Armed Bandit (MAB) problems assume that the arms are
independent. However, in many application scenarios, the information obtained
by playing an arm provides information about the remainder of the arms. Hence,
in such applications, this informativeness can and should be exploited to
enable faster convergence to the optimal solution. In this paper, we introduce
and formalize the Global MAB (GMAB), in which arms are globally informative
through a global parameter, i.e., choosing an arm reveals information about all
the arms. We propose a greedy policy for the GMAB which always selects the arm
with the highest estimated expected reward, and prove that it achieves bounded
parameter-dependent regret. Hence, this policy selects suboptimal arms only
finitely many times, and after a finite number of initial time steps, the
optimal arm is selected in all of the remaining time steps with probability
one. In addition, we also study how the informativeness of the arms about each
other's rewards affects the speed of learning. Specifically, we prove that the
parameter-free (worst-case) regret is sublinear in time, and decreases with the
informativeness of the arms. We also prove a sublinear in time Bayesian risk
bound for the GMAB which reduces to the well-known Bayesian risk bound for
linearly parameterized bandits when the arms are fully informative. GMABs have
applications ranging from drug and treatment discovery to dynamic pricing.
| Onur Atan, Cem Tekin, Mihaela van der Schaar | null | 1410.7890 | null | null |
Towards a Visual Turing Challenge | cs.AI cs.CL cs.CV cs.LG | As language and visual understanding by machines progresses rapidly, we are
observing an increasing interest in holistic architectures that tightly
interlink both modalities in a joint learning and inference process. This trend
has allowed the community to progress towards more challenging and open tasks
and refueled the hope of achieving the old AI dream of building machines that
could pass a Turing test in open domains. In order to steadily make progress
towards this goal, we realize that quantifying performance becomes increasingly
difficult. Therefore, we ask how we can precisely define such challenges and how
we can evaluate different algorithms on these open tasks. In this paper, we
summarize and discuss such challenges as well as try to give answers where
appropriate options are available in the literature. We exemplify some of the
solutions on a recently presented dataset for a question-answering task based on
real-world indoor images that establishes a visual Turing challenge. Finally,
we argue that, despite the success of unique ground-truth annotations, we likely
have to step away from carefully curated datasets and rather rely on 'social
consensus' as the main driving force to create suitable benchmarks. Providing
coverage in this inherently ambiguous output space is an emerging challenge
that we face in order to make quantifiable progress in this area.
| Mateusz Malinowski and Mario Fritz | null | 1410.8027 | null | null |
Latent Feature Based FM Model For Rating Prediction | cs.LG cs.IR stat.ML | Rating Prediction is a basic problem in Recommender System, and one of the
most widely used methods is Factorization Machines (FM). However, traditional
matrix factorization methods fail to utilize the benefit of implicit feedback,
which has been proven to be important in the Rating Prediction problem. In this
work, we consider a specific situation, movie rating prediction, where we
assume that a user's watching history has a big influence on his/her rating
behavior on an item. We employ two models, Latent Dirichlet Allocation (LDA)
and word2vec, both of which achieve state-of-the-art results in learning latent
features. Based on these, we propose two feature-based models. One is the
Topic-based FM Model which provides the implicit feedback to the matrix
factorization. The other is the Vector-based FM Model, which encodes the order
information of the watching history. Empirical results on three datasets
demonstrate that our method performs better than the baseline model and confirm
that the Vector-based FM Model usually works better, as it captures the order
information.
| Xudong Liu, Bin Zhang, Ting Zhang and Chang Liu | null | 1410.8034 | null | null |
High-Performance Distributed ML at Scale through Parameter Server
Consistency Models | cs.LG stat.ML | As Machine Learning (ML) applications increase in data size and model
complexity, practitioners turn to distributed clusters to satisfy the increased
computational and memory demands. Unfortunately, effective use of clusters for
ML requires considerable expertise in writing distributed code, while
highly-abstracted frameworks like Hadoop have not, in practice, approached the
performance seen in specialized ML implementations. The recent Parameter Server
(PS) paradigm is a middle ground between these extremes, allowing easy
conversion of single-machine parallel ML applications into distributed ones,
while maintaining high throughput through relaxed "consistency models" that
allow inconsistent parameter reads. However, due to insufficient theoretical
study, it is not clear which of these consistency models can really ensure
correct ML algorithm output; at the same time, there remain many
theoretically-motivated but undiscovered opportunities to maximize
computational throughput. Motivated by this challenge, we study both the
theoretical guarantees and empirical behavior of iterative-convergent ML
algorithms in existing PS consistency models. We then use the gleaned insights
to improve a consistency model using an "eager" PS communication mechanism, and
implement it as a new PS system that enables ML algorithms to reach their
solution more quickly.
| Wei Dai, Abhimanu Kumar, Jinliang Wei, Qirong Ho, Garth Gibson, Eric
P. Xing | null | 1410.8043 | null | null |
Detecting Structural Irregularity in Electronic Dictionaries Using
Language Modeling | cs.CL cs.LG | Dictionaries are often developed using tools that save to Extensible Markup
Language (XML)-based standards. These standards often allow high-level
repeating elements to represent lexical entries, and utilize descendants of
these repeating elements to represent the structure within each lexical entry,
in the form of an XML tree. In many cases, dictionaries are published that have
errors and inconsistencies that are expensive to find manually. This paper
discusses a method for dictionary writers to quickly audit structural
regularity across entries in a dictionary by using statistical language
modeling. The approach learns the patterns of XML nodes that could occur within
an XML tree, and then calculates the probability of each XML tree in the
dictionary against these patterns to look for entries that diverge from the
norm.
| Paul Rodrigues, David Zajic, David Doermann, Michael Bloodgood and
Peng Ye | null | 1410.8149 | null | null |
Addressing the Rare Word Problem in Neural Machine Translation | cs.CL cs.LG cs.NE | Neural Machine Translation (NMT) is a new approach to machine translation
that has shown promising results comparable to traditional approaches.
A significant weakness in conventional NMT systems is their inability to
correctly translate very rare words: end-to-end NMTs tend to have relatively
small vocabularies with a single unk symbol that represents every possible
out-of-vocabulary (OOV) word. In this paper, we propose and implement an
effective technique to address this problem. We train an NMT system on data
that is augmented by the output of a word alignment algorithm, allowing the NMT
system to emit, for each OOV word in the target sentence, the position of its
corresponding word in the source sentence. This information is later utilized
in a post-processing step that translates every OOV word using a dictionary.
Our experiments on the WMT14 English to French translation task show that this
method provides a substantial improvement of up to 2.8 BLEU points over an
equivalent NMT system that does not use this technique. With 37.5 BLEU points,
our NMT system is the first to surpass the best result achieved on a WMT14
contest task.
| Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, Wojciech
Zaremba | null | 1410.8206 | null | null |
Notes on Noise Contrastive Estimation and Negative Sampling | cs.LG | Estimating the parameters of probabilistic models of language such as maxent
models and probabilistic neural models is computationally difficult since it
involves evaluating partition functions by summing over an entire vocabulary,
which may be millions of word types in size. Two closely related
strategies---noise contrastive estimation (Mnih and Teh, 2012; Mnih and
Kavukcuoglu, 2013; Vaswani et al., 2013) and negative sampling (Mikolov et al.,
2012; Goldberg and Levy, 2014)---have emerged as popular solutions to this
computational problem, but some confusion remains as to which is more
appropriate and when. This document explicates their relationships to each
other and to other estimation techniques. The analysis shows that, although
they are superficially similar, NCE is a general parameter estimation technique
that is asymptotically unbiased, while negative sampling is best understood as
a family of binary classification models that are useful for learning word
representations but not as a general-purpose estimator.
| Chris Dyer | null | 1410.8251 | null | null |
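For reference, the two estimators contrasted above differ in the posterior they assign to a (word, context) pair being a true data sample; the forms below are the standard ones, with $s_\theta$ denoting an unnormalized score (our notation, not the document's).

```latex
% NCE models the posterior that a pair (w, c) came from the data rather
% than from one of k samples of a noise distribution q:
P(D = 1 \mid w, c) = \frac{p_\theta(w \mid c)}{p_\theta(w \mid c) + k\, q(w)}

% Negative sampling instead uses a plain logistic of a score
% s_\theta(w, c), dropping the explicit dependence on q and k:
P(D = 1 \mid w, c) = \sigma\big(s_\theta(w, c)\big)
```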
Bootstrap-Based Regularization for Low-Rank Matrix Estimation | stat.ME cs.LG stat.ML | We develop a flexible framework for low-rank matrix estimation that allows us
to transform noise models into regularization schemes via a simple bootstrap
algorithm. Effectively, our procedure seeks an autoencoding basis for the
observed matrix that is stable with respect to the specified noise model; we
call the resulting procedure a stable autoencoder. In the simplest case, with
an isotropic noise model, our method is equivalent to a classical singular
value shrinkage estimator. For non-isotropic noise models, e.g., Poisson noise,
the method does not reduce to singular value shrinkage, and instead yields new
estimators that perform well in experiments. Moreover, by iterating our stable
autoencoding scheme, we can automatically generate low-rank estimates without
specifying the target rank as a tuning parameter.
| Julie Josse and Stefan Wager | null | 1410.8275 | null | null |
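As a concrete special case of the abstract above, here is a minimal numpy sketch of the isotropic-noise stable autoencoder, where the procedure admits a closed form equivalent to singular value shrinkage; the noise variance `sigma2` is assumed known here, and the usage data is synthetic.

```python
import numpy as np

def stable_autoencoder(X, sigma2):
    """Closed-form stable autoencoder under isotropic Gaussian noise.

    Minimizing E||X - (X + E)B||_F^2 over random noise E is a ridge
    regression of X on itself; the fitted values shrink each singular
    value s of X by the factor s^2 / (s^2 + n * sigma2).
    """
    n, _ = X.shape
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    shrink = s**2 / (s**2 + n * sigma2)
    return (U * (s * shrink)) @ Vt

# Hypothetical usage on a noisy low-rank matrix:
rng = np.random.default_rng(0)
X0 = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 40))
X_hat = stable_autoencoder(X0 + rng.normal(size=(100, 40)), sigma2=1.0)
```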
Towards Learning Object Affordance Priors from Technical Texts | cs.LG cs.AI cs.CL cs.RO | Everyday activities performed by artificial assistants can potentially be
executed naively and dangerously given their lack of common sense knowledge.
This paper presents conceptual work towards obtaining prior knowledge on the
usual modality (passive or active) of any given entity, and their affordance
estimates, by extracting high-confidence ability modality semantic relations (X
can Y relationship) from non-figurative texts, by analyzing co-occurrence of
grammatical instances of subjects and verbs, and verbs and objects. The
discussion includes an outline of the concept, potential and limitations, and
possible feature and learning framework adoption.
| Nicholas H. Kirk | null | 1410.8326 | null | null |
Learning circuits with few negations | cs.CC cs.DM cs.LG | Monotone Boolean functions, and the monotone Boolean circuits that compute
them, have been intensively studied in complexity theory. In this paper we
study the structure of Boolean functions in terms of the minimum number of
negations in any circuit computing them, a complexity measure that interpolates
between monotone functions and the class of all functions. We study this
generalization of monotonicity from the vantage point of learning theory,
giving near-matching upper and lower bounds on the uniform-distribution
learnability of circuits in terms of the number of negations they contain. Our
upper bounds are based on a new structural characterization of negation-limited
circuits that extends a classical result of A. A. Markov. Our lower bounds,
which employ Fourier-analytic tools from hardness amplification, give new
results even for circuits with no negations (i.e. monotone functions).
| Eric Blais, Cl\'ement L. Canonne, Igor C. Oliveira, Rocco A. Servedio
and Li-Yang Tan | null | 1410.8420 | null | null |
NICE: Non-linear Independent Components Estimation | cs.LG | We propose a deep learning framework for modeling complex high-dimensional
densities called Non-linear Independent Component Estimation (NICE). It is
based on the idea that a good representation is one in which the data has a
distribution that is easy to model. For this purpose, a non-linear
deterministic transformation of the data is learned that maps it to a latent
space so as to make the transformed data conform to a factorized distribution,
i.e., resulting in independent latent variables. We parametrize this
transformation so that computing the Jacobian determinant and inverse transform
is trivial, yet we maintain the ability to learn complex non-linear
transformations, via a composition of simple building blocks, each based on a
deep neural network. The training criterion is simply the exact log-likelihood,
which is tractable. Unbiased ancestral sampling is also easy. We show that this
approach yields good generative models on four image datasets and can be used
for inpainting.
| Laurent Dinh, David Krueger and Yoshua Bengio | null | 1410.8516 | null | null |
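To illustrate the "trivial Jacobian determinant and inverse transform" property, here is a minimal numpy sketch of an additive coupling layer, the basic building block of this family of models; the tiny fixed random MLP `m` stands in for the learned deep network.

```python
import numpy as np

# The input is split in two halves; one half is shifted by an arbitrary
# function m of the other. The Jacobian of the map is unit triangular,
# so log|det J| = 0 and the inverse is exact.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(5, 16)), rng.normal(size=(16, 5))
m = lambda h: np.tanh(h @ W1) @ W2   # stand-in for a deep network

def couple(x1, x2):        # forward: (x1, x2) -> (x1, x2 + m(x1))
    return x1, x2 + m(x1)

def uncouple(y1, y2):      # inverse: subtract the same shift
    return y1, y2 - m(y1)

x1, x2 = rng.normal(size=5), rng.normal(size=5)
y1, y2 = couple(x1, x2)
assert np.allclose(uncouple(y1, y2), (x1, x2))
```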
A random forest system combination approach for error detection in
digital dictionaries | cs.CL cs.LG stat.ML | When digitizing a print bilingual dictionary, whether via optical character
recognition or manual entry, it is inevitable that errors are introduced into
the electronic version that is created. We investigate automating the process
of detecting errors in an XML representation of a digitized print dictionary
using a hybrid approach that combines rule-based, feature-based, and language
model-based methods. We investigate combining methods and show that using
random forests is a promising approach. We find that in isolation, unsupervised
methods rival the performance of supervised methods. Random forests typically
require training data so we investigate how we can apply random forests to
combine individual base methods that are themselves unsupervised without
requiring large amounts of training data. Experiments reveal empirically that a
relatively small amount of data is sufficient and can potentially be further
reduced through specific selection criteria.
| Michael Bloodgood, Peng Ye, Paul Rodrigues, David Zajic and David
Doermann | null | 1410.8553 | null | null |
An ensemble-based system for automatic screening of diabetic retinopathy | cs.CV cs.LG stat.AP stat.ML | In this paper, an ensemble-based method for the screening of diabetic
retinopathy (DR) is proposed. This approach is based on features extracted from
the output of several retinal image processing algorithms, such as image-level
(quality assessment, pre-screening, AM/FM), lesion-specific (microaneurysms,
exudates) and anatomical (macula, optic disc) components. The actual decision
about the presence of the disease is then made by an ensemble of machine
learning classifiers. We have tested our approach on the publicly available
Messidor database, where 90% sensitivity, 91% specificity, 90% accuracy, and
0.989 AUC are achieved in a disease/no-disease setting. These results are
highly competitive in this field and suggest that retinal image processing is a
valid approach for automatic DR screening.
| Balint Antal, Andras Hajdu | 10.1016/j.knosys.2013.12.023 | 1410.8576 | null | null |
An Online Algorithm for Learning Selectivity to Mixture Means | q-bio.NC cs.LG | We develop a biologically-plausible learning rule called Triplet BCM that
provably converges to the class means of general mixture models. This rule
generalizes the classical BCM neural rule, and provides a novel interpretation
of classical BCM as performing a kind of tensor decomposition. It achieves a
substantial generalization over classical BCM by incorporating triplets of
samples from the mixtures, which provides a novel information processing
interpretation to spike-timing-dependent plasticity. We provide complete proofs
of convergence of this learning rule, and an extended discussion of the
connection between BCM and tensor learning.
| Matthew Lawlor and Steven Zucker | null | 1410.8580 | null | null |
DeepSentiBank: Visual Sentiment Concept Classification with Deep
Convolutional Neural Networks | cs.CV cs.LG cs.MM cs.NE | This paper introduces a visual sentiment concept classification method based
on deep convolutional neural networks (CNNs). The visual sentiment concepts are
adjective noun pairs (ANPs) automatically discovered from the tags of web
photos, and can be utilized as effective statistical cues for detecting
emotions depicted in the images. Nearly one million Flickr images tagged with
these ANPs are downloaded to train the classifiers of the concepts. We adopt
the popular model of deep convolutional neural networks, which has recently
shown great performance improvements in classifying large-scale web-based image
datasets such as ImageNet. Our deep CNN model is trained with Caffe, a newly
developed deep learning framework. To deal with the biased training data, which
only contains images with strong sentiment, and to prevent overfitting, we
initialize the model with weights trained on ImageNet. Performance evaluation
shows that the newly trained deep CNN model, SentiBank 2.0 (also called
DeepSentiBank), is significantly improved in both annotation accuracy and
retrieval performance, compared to its predecessors which mainly use binary SVM
classification models.
| Tao Chen, Damian Borth, Trevor Darrell and Shih-Fu Chang | null | 1410.8586 | null | null |
A Comparison of learning algorithms on the Arcade Learning Environment | cs.LG cs.AI | Reinforcement learning agents have traditionally been evaluated on small toy
problems. With advances in computing power and the advent of the Arcade
Learning Environment, it is now possible to evaluate algorithms on diverse and
difficult problems within a consistent framework. We discuss some challenges
posed by the arcade learning environment which do not manifest in simpler
environments. We then provide a comparison of model-free, linear learning
algorithms on this challenging problem set.
| Aaron Defazio and Thore Graepel | null | 1410.8620 | null | null |
Partition-wise Linear Models | stat.ML cs.LG | Region-specific linear models are widely used in practical applications
because of their non-linear but highly interpretable model representations. One
of the key challenges in their use is non-convexity in simultaneous
optimization of regions and region-specific models. This paper proposes novel
convex region-specific linear models, which we refer to as partition-wise
linear models. Our key ideas are 1) assigning linear models not to regions but
to partitions (region-specifiers) and representing region-specific linear
models by linear combinations of partition-specific models, and 2) optimizing
regions via partition selection from a large number of given partition
candidates by means of convex structured regularizations. In addition to
providing initialization-free globally-optimal solutions, our convex
formulation makes it possible to derive a generalization bound and to use such
advanced optimization techniques as proximal methods and decomposition of the
proximal maps for sparsity-inducing regularizations. Experimental results
demonstrate that our partition-wise linear models perform better than or are at
least competitive with state-of-the-art region-specific or locally linear
models.
| Hidekazu Oiwa, Ryohei Fujimaki | null | 1410.8675 | null | null |
Learning Mixtures of Ranking Models | cs.LG | This work concerns learning probabilistic models for ranking data in a
heterogeneous population. The specific problem we study is learning the
parameters of a Mallows Mixture Model. Despite being widely studied, current
heuristics for this problem do not have theoretical guarantees and can get
stuck in bad local optima. We present the first polynomial time algorithm which
provably learns the parameters of a mixture of two Mallows models. A key
component of our algorithm is a novel use of tensor decomposition techniques to
learn the top-k prefix of both rankings. Before this work, even the
question of identifiability in the case of a mixture of two Mallows models was
unresolved.
| Pranjal Awasthi, Avrim Blum, Or Sheffet, Aravindan Vijayaraghavan | null | 1410.8750 | null | null |
Supervised learning model for parsing Arabic language | cs.CL cs.LG | Parsing the Arabic language is a difficult task given the specificities of
this language and the scarcity of digital resources (grammars and
annotated corpora). In this paper, we suggest a method for Arabic parsing based
on supervised machine learning. We used the SVM algorithm to select the
syntactic labels of the sentence. Furthermore, we evaluated our parser
using cross-validation on the Penn Arabic Treebank. The
obtained results are very encouraging.
| Nabil Khoufi, Chafik Aloulou, Lamia Hadrich Belguith | null | 1410.8783 | null | null |
Greedy Subspace Clustering | stat.ML cs.IT cs.LG math.IT | We consider the problem of subspace clustering: given points that lie on or
near the union of many low-dimensional linear subspaces, recover the subspaces.
To this end, one first identifies sets of points close to the same subspace and
uses the sets to estimate the subspaces. As the geometric structure of the
clusters (linear subspaces) forbids proper performance of general distance
based approaches such as K-means, many model-specific methods have been
proposed. In this paper, we provide new simple and efficient algorithms for
this problem. Our statistical analysis shows that the algorithms are guaranteed
exact (perfect) clustering performance under certain conditions on the number
of points and the affinity between subspaces. These conditions are weaker than
those considered in the standard statistical literature. Experimental results
on synthetic data generated from the standard unions of subspaces model
demonstrate our theory. We also show that our algorithm performs competitively
against state-of-the-art algorithms on real-world applications such as motion
segmentation and face clustering, with much simpler implementation and lower
computational cost.
| Dohyung Park, Constantine Caramanis, Sujay Sanghavi | null | 1410.8864 | null | null |
Rapid Adaptation of POS Tagging for Domain Specific Uses | cs.CL cs.LG stat.ML | Part-of-speech (POS) tagging is a fundamental component for performing
natural language tasks such as parsing, information extraction, and question
answering. When POS taggers are trained in one domain and applied in
significantly different domains, their performance can degrade dramatically. We
present a methodology for rapid adaptation of POS taggers to new domains. Our
technique is unsupervised in that a manually annotated corpus for the new
domain is not necessary. We use suffix information gathered from large amounts
of raw text as well as orthographic information to increase the lexical
coverage. We present an experiment in the Biological domain where our POS
tagger achieves results comparable to POS taggers specifically trained to this
domain.
| John E. Miller, Michael Bloodgood, Manabu Torii and K. Vijay-Shanker | null | 1411.0007 | null | null |
Validation of Matching | cs.LG stat.ML | We introduce a technique to compute probably approximately correct (PAC)
bounds on precision and recall for matching algorithms. The bounds require some
verified matches, but those matches may be used to develop the algorithms. The
bounds can be applied to network reconciliation or entity resolution
algorithms, which identify nodes in different networks or values in a data set
that correspond to the same entity. For network reconciliation, the bounds do
not require knowledge of the network generation process.
| Ya Le, Eric Bax, Nicola Barbieri, David Garcia Soriano, Jitesh Mehta,
James Li | null | 1411.0023 | null | null |
Robust sketching for multiple square-root LASSO problems | math.OC cs.LG cs.SY stat.ML | Many learning tasks, such as cross-validation, parameter search, or
leave-one-out analysis, involve multiple instances of similar problems, each
instance sharing a large part of learning data with the others. We introduce a
robust framework for solving multiple square-root LASSO problems, based on a
sketch of the learning data that uses low-rank approximations. Our approach
allows a dramatic reduction in computational effort, in effect reducing the
number of observations from $m$ (the number of observations to start with) to
$k$ (the number of singular values retained in the low-rank model), while not
sacrificing---sometimes even improving---the statistical performance.
Theoretical analysis, as well as numerical experiments on both synthetic and
real data, illustrate the efficiency of the method in large scale applications.
| Vu Pham, Laurent El Ghaoui, Arturo Fernandez | null | 1411.0024 | null | null |
Entropy of Overcomplete Kernel Dictionaries | cs.IT cs.CV cs.LG cs.NE math.IT stat.ML | In signal analysis and synthesis, linear approximation theory considers a
linear decomposition of any given signal over a set of atoms, collected into a
so-called dictionary. Relevant sparse representations are obtained by relaxing
the orthogonality condition of the atoms, yielding overcomplete dictionaries
with an extended number of atoms. More generally than the linear decomposition,
overcomplete kernel dictionaries provide an elegant nonlinear extension by
defining the atoms through a mapping kernel function (e.g., the Gaussian
kernel). Models based on such kernel dictionaries are used in neural networks,
Gaussian processes and online learning with kernels.
The quality of an overcomplete dictionary is evaluated with a diversity
measure such as the distance, the approximation, the coherence and the Babel measures.
In this paper, we develop a framework to examine overcomplete kernel
dictionaries with the entropy from information theory. Indeed, a higher value
of the entropy is associated with a more uniform spread of the atoms over the
space. For each of the aforementioned diversity measures, we derive lower
bounds on the entropy. Several definitions of the entropy are examined, with an
extensive analysis in both the input space and the mapped feature space.
| Paul Honeine | null | 1411.0161 | null | null |
Near-Optimal Density Estimation in Near-Linear Time Using Variable-Width
Histograms | cs.LG cs.DS math.ST stat.TH | Let $p$ be an unknown and arbitrary probability distribution over $[0,1)$. We
consider the problem of {\em density estimation}, in which a learning algorithm
is given i.i.d. draws from $p$ and must (with high probability) output a
hypothesis distribution that is close to $p$. The main contribution of this
paper is a highly efficient density estimation algorithm for learning using a
variable-width histogram, i.e., a hypothesis distribution with a piecewise
constant probability density function.
In more detail, for any $k$ and $\epsilon$, we give an algorithm that makes
$\tilde{O}(k/\epsilon^2)$ draws from $p$, runs in $\tilde{O}(k/\epsilon^2)$
time, and outputs a hypothesis distribution $h$ that is piecewise constant with
$O(k \log^2(1/\epsilon))$ pieces. With high probability the hypothesis $h$
satisfies $d_{\mathrm{TV}}(p,h) \leq C \cdot \mathrm{opt}_k(p) + \epsilon$,
where $d_{\mathrm{TV}}$ denotes the total variation distance (statistical
distance), $C$ is a universal constant, and $\mathrm{opt}_k(p)$ is the smallest
total variation distance between $p$ and any $k$-piecewise constant
distribution. The sample size and running time of our algorithm are optimal up
to logarithmic factors. The "approximation factor" $C$ in our result is
inherent in the problem, as we prove that no algorithm with sample size bounded
in terms of $k$ and $\epsilon$ can achieve $C<2$ regardless of what kind of
hypothesis distribution it uses.
| Siu-On Chan, Ilias Diakonikolas, Rocco A. Servedio, Xiaorui Sun | null | 1411.0169 | null | null |
Synchronization Clustering based on a Linearized Version of Vicsek model | cs.LG cs.DB | This paper presents an effective synchronization clustering method
based on a linearized version of the Vicsek model. This method can be realized
by an Effective Synchronization Clustering algorithm (ESynC), an Improved
version of the ESynC algorithm (IESynC), a Shrinking Synchronization Clustering
algorithm based on another linear Vicsek model (SSynC), and an effective
Multi-level Synchronization Clustering algorithm (MSynC). After some analysis
and comparisons, we find that the ESynC algorithm, based on the linearized
version of the Vicsek model, achieves a better synchronization effect than both
the SynC algorithm, which is based on an extended Kuramoto model, and a similar
synchronization clustering algorithm based on the original Vicsek model. In
simulation experiments on several artificial data sets, we observe that the
ESynC, IESynC, and SSynC algorithms achieve a better synchronization effect
while requiring fewer iterations and less time than the SynC algorithm. In some
simulations, we also observe that the IESynC and SSynC algorithms improve on
the time cost of the ESynC algorithm. Finally, we give some directions for
future research on this algorithm.
| Xinquan Chen | null | 1411.0189 | null | null |
Population Empirical Bayes | stat.ML cs.LG | Bayesian predictive inference analyzes a dataset to make predictions about
new observations. When a model does not match the data, predictive accuracy
suffers. We develop population empirical Bayes (POP-EB), a hierarchical
framework that explicitly models the empirical population distribution as part
of Bayesian analysis. We introduce a new concept, the latent dataset, as a
hierarchical variable and set the empirical population as its prior. This leads
to a new predictive density that mitigates model mismatch. We efficiently apply
this method to complex models by proposing a stochastic variational inference
algorithm, called bumping variational inference (BUMP-VI). We demonstrate
improved predictive accuracy over classical Bayesian inference in three models:
a linear regression model of health data, a Bayesian mixture model of natural
images, and a latent Dirichlet allocation topic model of scientific documents.
| Alp Kucukelbir, David M. Blei | null | 1411.0292 | null | null |
Geodesic Exponential Kernels: When Curvature and Linearity Conflict | cs.LG cs.CV | We consider kernel methods on general geodesic metric spaces and provide both
negative and positive results. First we show that the common Gaussian kernel
can only be generalized to a positive definite kernel on a geodesic metric
space if the space is flat. As a result, for data on a Riemannian manifold, the
geodesic Gaussian kernel is only positive definite if the Riemannian manifold
is Euclidean. This implies that any attempt to design geodesic Gaussian kernels
on curved Riemannian manifolds is futile. However, we show that for spaces with
conditionally negative definite distances the geodesic Laplacian kernel can be
generalized while retaining positive definiteness. This implies that geodesic
Laplacian kernels can be generalized to some curved spaces, including spheres
and hyperbolic spaces. Our theoretical results are verified empirically.
| Aasa Feragen, Francois Lauze, S{\o}ren Hauberg | null | 1411.0296 | null | null |
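For reference, the two kernels contrasted in the abstract take the following forms on a geodesic metric space $(M, d)$ (standard definitions; the bandwidth symbol $\lambda$ is our notation).

```latex
% Geodesic exponential kernels with bandwidth \lambda > 0:
k_{\mathrm{Gauss}}(x, y)   = \exp\!\left(-\lambda\, d(x, y)^{2}\right)
k_{\mathrm{Laplace}}(x, y) = \exp\!\left(-\lambda\, d(x, y)\right)
% Per the abstract: k_Gauss is positive definite for all \lambda only
% if (M, d) is flat, while k_Laplace stays positive definite whenever
% d is conditionally negative definite (e.g., spheres, hyperbolic space).
```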
Fast Randomized Kernel Methods With Statistical Guarantees | stat.ML cs.LG stat.CO | One approach to improving the running time of kernel-based machine learning
methods is to build a small sketch of the input and use it in lieu of the full
kernel matrix in the machine learning task of interest. Here, we describe a
version of this approach that comes with running time guarantees as well as
improved guarantees on its statistical performance. By extending the notion of
\emph{statistical leverage scores} to the setting of kernel ridge regression,
our main statistical result is to identify an importance sampling distribution
that reduces the size of the sketch (i.e., the required number of columns to be
sampled) to the \emph{effective dimensionality} of the problem. This quantity
is often much smaller than previous bounds that depend on the \emph{maximal
degrees of freedom}. Our main algorithmic result is to present a fast algorithm
to compute approximations to these scores. This algorithm runs in time that is
linear in the number of samples---more precisely, the running time is
$O(np^2)$, where the parameter $p$ depends only on the trace of the kernel
matrix and the regularization parameter---and it can be applied to the matrix
of feature vectors, without having to form the full kernel matrix. This is
obtained via a variant of length-squared sampling that we adapt to the kernel
setting in a way that is of independent interest. Lastly, we provide empirical
results illustrating our theory, and we discuss how this new notion of the
statistical leverage of a data point captures in a fine way the difficulty of
the original statistical learning problem.
| Ahmed El Alaoui, Michael W. Mahoney | null | 1411.0306 | null | null |
Iterative Hessian sketch: Fast and accurate solution approximation for
constrained least-squares | math.OC cs.IT cs.LG math.IT stat.ML | We study randomized sketching methods for approximately solving the least-squares
problem with a general convex constraint. The quality of a least-squares
approximation can be assessed in different ways: either in terms of the value
of the quadratic objective function (cost approximation), or in terms of some
distance measure between the approximate minimizer and the true minimizer
(solution approximation). Focusing on the latter criterion, our first main
result provides a general lower bound on any randomized method that sketches
both the data matrix and vector in a least-squares problem; as a surprising
consequence, the most widely used least-squares sketch is sub-optimal for
solution approximation. We then present a new method known as the iterative
Hessian sketch, and show that it can be used to obtain approximations to the
original least-squares problem using a projection dimension proportional to the
statistical complexity of the least-squares minimizer, and a logarithmic number
of iterations. We illustrate our general theory with simulations for both
unconstrained and constrained versions of least-squares, including
$\ell_1$-regularization and nuclear norm constraints. We also numerically
demonstrate the practicality of our approach in a real face expression
classification experiment.
| Mert Pilanci and Martin J. Wainwright | null | 1411.0347 | null | null |
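As a concrete instance of the scheme described above, here is a minimal numpy sketch of the iterative Hessian sketch for the unconstrained case; constrained variants replace the linear solve with a constrained quadratic subproblem. The sketch size, iteration count, and Gaussian sketching matrix are illustrative choices.

```python
import numpy as np

def ihs(A, y, m=200, rounds=10, seed=0):
    """Minimal iterative Hessian sketch for min_x ||Ax - y||^2.

    Each round sketches only the Hessian (via S @ A) while keeping the
    exact full-data gradient; assumes the sketch size m >= A.shape[1].
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(rounds):
        SA = rng.normal(size=(m, n)) @ A / np.sqrt(m)  # Gaussian sketch of A
        g = A.T @ (y - A @ x)                          # exact gradient term
        x = x + np.linalg.solve(SA.T @ SA, g)          # Newton-like step
    return x
```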
Distributed Submodular Maximization | cs.LG cs.AI cs.DC cs.IR | Many large-scale machine learning problems--clustering, non-parametric
learning, kernel machines, etc.--require selecting a small yet representative
subset from a large dataset. Such problems can often be reduced to maximizing a
submodular set function subject to various constraints. Classical approaches to
submodular optimization require centralized access to the full dataset, which
is impractical for truly large-scale problems. In this paper, we consider the
problem of submodular function maximization in a distributed fashion. We
develop a simple, two-stage protocol GreeDi, that is easily implemented using
MapReduce style computations. We theoretically analyze our approach, and show
that under certain natural conditions, performance close to the centralized
approach can be achieved. We begin with monotone submodular maximization
subject to a cardinality constraint, and then extend this approach to obtain
approximation guarantees for (not necessarily monotone) submodular maximization
subject to more general constraints including matroid or knapsack constraints.
In our extensive experiments, we demonstrate the effectiveness of our approach
on several applications, including sparse Gaussian process inference and
exemplar based clustering on tens of millions of examples using Hadoop.
| Baharan Mirzasoleiman, Amin Karbasi, Rik Sarkar, and Andreas Krause | null | 1411.0541 | null | null |
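The two-stage protocol lends itself to a very short sketch: partition the data, run the standard greedy on each part, then run greedy once more over the union of the local solutions. The version below is a minimal serial illustration (in the real protocol the first stage runs as parallel MapReduce map tasks), with `f` an assumed submodular-oracle callable.

```python
import numpy as np

def greedy(f, ground, k):
    """Standard greedy for monotone submodular f under |S| <= k."""
    S = []
    for _ in range(k):
        S.append(max((e for e in ground if e not in S),
                     key=lambda e: f(S + [e]) - f(S)))
    return S

def greedi(f, data, k, num_partitions):
    """Two-stage distributed greedy in the spirit of GreeDi."""
    parts = np.array_split(np.asarray(data), num_partitions)
    local = [greedy(f, list(p), k) for p in parts]  # map: run in parallel
    merged = [e for sol in local for e in sol]      # gather k*p candidates
    return greedy(f, merged, k)                     # reduce: final greedy
```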
Correlation Clustering with Constrained Cluster Sizes and Extended
Weights Bounds | cs.LG cs.DS | We consider the problem of correlation clustering on graphs with constraints
on both the cluster sizes and the positive and negative weights of edges. Our
contributions are twofold: First, we introduce the problem of correlation
clustering with bounded cluster sizes. Second, we extend the regime of weight
values for which the clustering may be performed with constant approximation
guarantees in polynomial time and apply the results to the bounded cluster size
problem.
| Gregory J. Puleo, Olgica Milenkovic | null | 1411.0547 | null | null |
Bayesian feature selection with strongly-regularizing priors maps to the
Ising Model | cond-mat.stat-mech cs.LG stat.ML | Identifying small subsets of features that are relevant for prediction and/or
classification tasks is a central problem in machine learning and statistics.
The feature selection task is especially important, and computationally
difficult, for modern datasets where the number of features can be comparable
to, or even exceed, the number of samples. Here, we show that feature selection
with Bayesian inference takes a universal form and reduces to calculating the
magnetizations of an Ising model, under some mild conditions. Our results
exploit the observation that the evidence takes a universal form for
strongly-regularizing priors --- priors that have a large effect on the
posterior probability even in the infinite data limit. We derive explicit
expressions for feature selection for generalized linear models, a large class
of statistical techniques that include linear and logistic regression. We
illustrate the power of our approach by analyzing feature selection in a
logistic regression-based classifier trained to distinguish between the letters
B and D in the notMNIST dataset.
| Charles K. Fisher and Pankaj Mehta | null | 1411.0591 | null | null |
Factorbird - a Parameter Server Approach to Distributed Matrix
Factorization | cs.LG | We present Factorbird, a prototype of a parameter server approach for
factorizing large matrices with Stochastic Gradient Descent-based algorithms.
We designed Factorbird to meet the following desiderata: (a) scalability to
tall and wide matrices with dozens of billions of non-zeros, (b) extensibility
to different kinds of models and loss functions as long as they can be
optimized using Stochastic Gradient Descent (SGD), and (c) adaptability to both
batch and streaming scenarios. Factorbird uses a parameter server in order to
scale to models that exceed the memory of an individual machine, and employs
lock-free Hogwild!-style learning with a special partitioning scheme to
drastically reduce conflicting updates. We also discuss other aspects of the
design of our system such as how to efficiently grid search for hyperparameters
at scale. We present experiments of Factorbird on a matrix built from a subset
of Twitter's interaction graph, consisting of more than 38 billion non-zeros
and about 200 million rows and columns, which is to the best of our knowledge
the largest matrix on which factorization results have been reported in the
literature.
| Sebastian Schelter, Venu Satuluri, Reza Zadeh | null | 1411.0602 | null | null |
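For concreteness, the inner loop such a system executes is roughly the following minimal, single-threaded sketch; the squared loss, L2 regularizer, and hyperparameter names are illustrative assumptions rather than Factorbird's exact configuration.

```python
def sgd_epoch(entries, U, V, lr=0.01, reg=0.05):
    """One serial pass of the SGD matrix-factorization update.

    Factorbird runs updates of this shape lock-free (Hogwild!-style)
    across shards; U and V are numpy arrays of row/column factors.
    """
    for i, j, r in entries:            # observed matrix cell (i, j) = r
        err = r - U[i] @ V[j]          # prediction error for this cell
        ui = U[i].copy()               # snapshot before the paired update
        U[i] += lr * (err * V[j] - reg * ui)
        V[j] += lr * (err * ui - reg * V[j])
```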
Active Inference for Binary Symmetric Hidden Markov Models | stat.ML cond-mat.dis-nn cs.IT cs.LG math.IT | We consider active maximum a posteriori (MAP) inference problem for Hidden
Markov Models (HMM), where, given an initial MAP estimate of the hidden
sequence, we select to label certain states in the sequence to improve the
estimation accuracy of the remaining states. We develop an analytical approach
to this problem for the case of binary symmetric HMMs, and obtain a closed form
solution that relates the expected error reduction to model parameters under
the specified active inference scheme. We then use this solution to determine
most optimal active inference scheme in terms of error reduction, and examine
the relation of those schemes to heuristic principles of uncertainty reduction
and solution unicity.
| Armen E. Allahverdyan and Aram Galstyan | 10.1007/s10955-015-1321-y | 1411.0630 | null | null |
Clustering memes in social media streams | cs.SI cs.CY cs.LG physics.soc-ph | The problem of clustering content in social media has pervasive applications,
including the identification of discussion topics, event detection, and content
recommendation. Here we describe a streaming framework for online detection and
clustering of memes in social media, specifically Twitter. A pre-clustering
procedure, namely protomeme detection, first isolates atomic tokens of
information carried by the tweets. Protomemes are thereafter aggregated, based
on multiple similarity measures, to obtain memes as cohesive groups of tweets
reflecting actual concepts or topics of discussion. The clustering algorithm
takes into account various dimensions of the data and metadata, including
natural language, the social network, and the patterns of information
diffusion. As a result, our system can build clusters of semantically,
structurally, and topically related tweets. The clustering process is based on
a variant of Online K-means that incorporates a memory mechanism, used to
"forget" old memes and replace them over time with the new ones. The evaluation
of our framework is carried out by using a dataset of Twitter trending topics.
Over a one-week period, we systematically determined whether our algorithm was
able to recover the trending hashtags. We show that the proposed method
outperforms baseline algorithms that only use content features, as well as a
state-of-the-art event detection method that assumes full knowledge of the
underlying follower network. We finally show that our online learning framework
is flexible, due to its independence of the adopted clustering algorithm, and
best suited to work in a streaming scenario.
| Mohsen JafariAsbagh, Emilio Ferrara, Onur Varol, Filippo Menczer,
Alessandro Flammini | 10.1007/s13278-014-0237-x | 1411.0652 | null | null |
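A minimal sketch of the forgetting variant of online K-means referred to above (our simplified rendering, not the paper's exact update): each incoming tweet vector decays all cluster weights before being absorbed into its nearest cluster, so stale memes fade over time.

```python
import numpy as np

def online_step(x, centers, weights, gamma=0.99):
    """One step of online K-means with exponential forgetting.

    `centers` is a (k, d) array, `weights` a (k,) array of effective
    cluster masses, and `gamma` an assumed decay parameter.
    """
    weights *= gamma                                  # forget a little
    j = int(np.argmin(((centers - x) ** 2).sum(axis=1)))
    weights[j] += 1.0
    centers[j] += (x - centers[j]) / weights[j]       # weighted mean update
    return j
```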
Approachability in Stackelberg Stochastic Games with Vector Costs | cs.LG cs.GT cs.SY math.OC | The notion of approachability was introduced by Blackwell [1] in the context
of vector-valued repeated games. The famous Blackwell's approachability theorem
prescribes a strategy for approachability, i.e., for `steering' the average
cost of a given agent towards a given target set, irrespective of the
strategies of the other agents. In this paper, motivated by the multi-objective
optimization/decision making problems in dynamically changing environments, we
address the approachability problem in Stackelberg stochastic games with vector
valued cost functions. We make two main contributions. Firstly, we give a
simple and computationally tractable strategy for approachability for
Stackelberg stochastic games along the lines of Blackwell's. Secondly, we give
a reinforcement learning algorithm for learning the approachable strategy when
the transition kernel is unknown. We also recover as a by-product Blackwell's
necessary and sufficient condition for approachability for convex sets in this
set up and thus a complete characterization. We also give sufficient conditions
for non-convex sets.
| Dileep Kalathil, Vivek Borkar, Rahul Jain | null | 1411.0728 | null | null |
CUR Algorithm for Partially Observed Matrices | cs.LG | CUR matrix decomposition computes the low rank approximation of a given
matrix by using the actual rows and columns of the matrix. It has been a very
useful tool for handling large matrices. One limitation with the existing
algorithms for CUR matrix decomposition is that they need access to the {\it
full} matrix, a requirement that can be difficult to fulfill in many real-world
applications. In this work, we alleviate this limitation by developing a CUR
decomposition algorithm for partially observed matrices. In particular, the
proposed algorithm computes the low rank approximation of the target matrix
based on (i) the randomly sampled rows and columns, and (ii) a subset of
observed entries that are randomly sampled from the matrix. Our analysis shows
the relative error bound, measured by spectral norm, for the proposed algorithm
when the target matrix is of full rank. We also show that only $O(n r\ln r)$
observed entries are needed by the proposed algorithm to perfectly recover a
rank $r$ matrix of size $n\times n$, which improves the sample complexity of
the existing algorithms for matrix completion. Empirical studies on both
synthetic and real-world datasets verify our theoretical claims and demonstrate
the effectiveness of the proposed algorithm.
| Miao Xu, Rong Jin, Zhi-Hua Zhou | null | 1411.0860 | null | null |
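For orientation, here is what the classical fully-observed CUR decomposition computes, as a minimal numpy sketch; the partially observed setting the abstract addresses replaces these exact pseudoinverse computations with estimates built from sampled entries.

```python
import numpy as np

def cur(A, col_idx, row_idx):
    """Classical CUR from sampled actual columns and rows of A.

    Minimal fully-observed reference implementation; it does not
    attempt the partially observed case studied in the paper.
    """
    C = A[:, col_idx]                  # actual columns of A
    R = A[row_idx, :]                  # actual rows of A
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R                     # A is approximated by C @ U @ R
```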
Convex Optimization for Big Data | math.OC cs.LG stat.ML | This article reviews recent advances in convex optimization algorithms for
Big Data, which aim to reduce the computational, storage, and communications
bottlenecks. We provide an overview of this emerging field, describe
contemporary approximation techniques like first-order methods and
randomization for scalability, and survey the important role of parallel and
distributed computation. The new Big Data algorithms are based on surprisingly
simple principles and attain staggering accelerations even on classical
problems.
| Volkan Cevher and Stephen Becker and Mark Schmidt | 10.1109/MSP.2014.2329397 | 1411.0972 | null | null |
Iterated geometric harmonics for data imputation and reconstruction of
missing data | cs.LG stat.ML | The method of geometric harmonics is adapted to the situation of incomplete
data by means of the iterated geometric harmonics (IGH) scheme. The method is
tested on natural and synthetic data sets with 50--500 data points and
dimensionality of 400--10,000. Experiments suggest that the algorithm converges
to a near optimal solution within 4--6 iterations, at runtimes of less than 30
minutes on a medium-grade desktop computer. The imputation of missing data
values is applied to collections of damaged images (suffering from data
annihilation rates of up to 70\%) which are reconstructed with a surprising
degree of accuracy.
| Chad Eckman, Jonathan A. Lindgren, Erin P. J. Pearse, David J. Sacco,
Zachariah Zhang | null | 1411.0997 | null | null |
A statistical model for tensor PCA | cs.LG cs.IT math.IT stat.ML | We consider the Principal Component Analysis problem for large tensors of
arbitrary order $k$ under a single-spike (or rank-one plus noise) model. On the
one hand, we use information theory, and recent results in probability theory,
to establish necessary and sufficient conditions under which the principal
component can be estimated using unbounded computational resources. It turns
out that this is possible as soon as the signal-to-noise ratio $\beta$ becomes
larger than $C\sqrt{k\log k}$ (and in particular $\beta$ can remain bounded as
the problem dimensions increase).
On the other hand, we analyze several polynomial-time estimation algorithms,
based on tensor unfolding, power iteration and message passing ideas from
graphical models. We show that, unless the signal-to-noise ratio diverges in
the system dimensions, none of these approaches succeeds. This is possibly
related to a fundamental limitation of computationally tractable estimators for
this problem.
We discuss various initializations for tensor power iteration, and show that
a tractable initialization based on the spectrum of the matricized tensor
outperforms significantly baseline methods, statistically and computationally.
Finally, we consider the case in which additional side information is available
about the unknown signal. We characterize the amount of side information that
allows the iterative algorithms to converge to a good estimate.
| Andrea Montanari and Emile Richard | null | 1411.1076 | null | null |
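For reference, the single-spike model studied above can be written as follows (standard notation; the symbol names are ours).

```latex
% Single-spike (rank-one plus noise) model for an order-k tensor:
% v is an unknown unit vector, \beta the signal-to-noise ratio, and
% Z a noise tensor with i.i.d. standard Gaussian entries.
T = \beta\, v^{\otimes k} + Z
```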
Fast Exact Matrix Completion with Finite Samples | cs.NA cs.DS cs.IT cs.LG math.IT stat.ML | Matrix completion is the problem of recovering a low rank matrix by observing
a small fraction of its entries. A series of recent works [KOM12,JNS13,HW14]
have proposed fast non-convex optimization based iterative algorithms to solve
this problem. However, the sample complexity in all these results is
sub-optimal in its dependence on the rank, condition number and the desired
accuracy.
In this paper, we present a fast iterative algorithm that solves the matrix
completion problem by observing $O(nr^5 \log^3 n)$ entries, which is
independent of the condition number and the desired accuracy. The run time of
our algorithm is $O(nr^7\log^3 n\log 1/\epsilon)$ which is near linear in the
dimension of the matrix. To the best of our knowledge, this is the first near
linear time algorithm for exact matrix completion with finite sample complexity
(i.e. independent of $\epsilon$).
Our algorithm is based on a well known projected gradient descent method,
where the projection is onto the (non-convex) set of low rank matrices. There
are two key ideas in our result: 1) our argument is based on a $\ell_{\infty}$
norm potential function (as opposed to the spectral norm) and provides a novel
way to obtain perturbation bounds for it. 2) we prove and use a natural
extension of the Davis-Kahan theorem to obtain perturbation bounds on the best
low rank approximation of matrices with good eigen-gap. Both of these ideas may
be of independent interest.
| Prateek Jain and Praneeth Netrapalli | null | 1411.1087 | null | null |
Expectation-Maximization for Learning Determinantal Point Processes | stat.ML cs.LG | A determinantal point process (DPP) is a probabilistic model of set diversity
compactly parameterized by a positive semi-definite kernel matrix. To fit a DPP
to a given task, we would like to learn the entries of its kernel matrix by
maximizing the log-likelihood of the available data. However, log-likelihood is
non-convex in the entries of the kernel matrix, and this learning problem is
conjectured to be NP-hard. Thus, previous work has instead focused on more
restricted convex learning settings: learning only a single weight for each row
of the kernel matrix, or learning weights for a linear combination of DPPs with
fixed kernel matrices. In this work we propose a novel algorithm for learning
the full kernel matrix. By changing the kernel parameterization from matrix
entries to eigenvalues and eigenvectors, and then lower-bounding the likelihood
in the manner of expectation-maximization algorithms, we obtain an effective
optimization procedure. We test our method on a real-world product
recommendation task, and achieve relative gains of up to 16.5% in test
log-likelihood compared to the naive approach of maximizing likelihood by
projected gradient ascent on the entries of the kernel matrix.
| Jennifer Gillenwater, Alex Kulesza, Emily Fox, Ben Taskar | null | 1411.1088 | null | null |
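For reference, the likelihood being maximized has the standard DPP form below; this is the textbook definition rather than anything specific to the proposed algorithm.

```latex
% Probability of observing subset Y under a DPP with kernel matrix L,
% where L_Y is the principal submatrix indexed by Y. Learning maximizes
% \sum_i \log P(Y_i) over training subsets, which is non-convex in L:
P(Y) = \frac{\det(L_Y)}{\det(L + I)}
```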
Do Convnets Learn Correspondence? | cs.CV cs.LG cs.NE | Convolutional neural nets (convnets) trained from massive labeled datasets
have substantially improved the state-of-the-art in image classification and
object detection. However, visual understanding requires establishing
correspondence on a finer level than object category. Given their large pooling
regions and training from whole-image labels, it is not clear that convnets
derive their success from an accurate correspondence model which could be used
for precise localization. In this paper, we study the effectiveness of convnet
activation features for tasks requiring correspondence. We present evidence
that convnet features localize at a much finer scale than their receptive field
sizes, that they can be used to perform intraclass alignment as well as
conventional hand-engineered features, and that they outperform conventional
features in keypoint prediction on objects from PASCAL VOC 2011.
| Jonathan Long, Ning Zhang, Trevor Darrell | null | 1411.1091 | null | null |
Projecting Markov Random Field Parameters for Fast Mixing | cs.LG stat.ML | Markov chain Monte Carlo (MCMC) algorithms are simple and extremely powerful
techniques to sample from almost arbitrary distributions. The flaw in practice
is that it can take a large and/or unknown amount of time to converge to the
stationary distribution. This paper gives sufficient conditions to guarantee
that univariate Gibbs sampling on Markov Random Fields (MRFs) will be fast
mixing, in a precise sense. Further, an algorithm is given to project onto this
set of fast-mixing parameters in the Euclidean norm. Following recent work, we
give an example use of this to project in various divergence measures,
comparing univariate marginals obtained by sampling after projection to common
variational methods and Gibbs sampling on the original parameters.
| Xianghang Liu and Justin Domke | null | 1411.1119 | null | null |
Distributed Low-Rank Estimation Based on Joint Iterative Optimization in
Wireless Sensor Networks | cs.IT cs.LG math.IT | This paper proposes a novel distributed reduced-rank scheme and an adaptive
algorithm for distributed estimation in wireless sensor networks. The proposed
distributed scheme is based on a transformation that performs dimensionality
reduction at each agent of the network followed by a reduced-dimension
parameter vector. A distributed reduced-rank joint iterative estimation
algorithm is developed, which has the ability to achieve significantly reduced
communication overhead and improved performance when compared with existing
techniques. Simulation results illustrate the advantages of the proposed
strategy in terms of convergence rate and mean square error performance.
| S. Xu, R. C. de Lamare and H. V. Poor | null | 1411.1125 | null | null |
Global Convergence of Stochastic Gradient Descent for Some Non-convex
Matrix Problems | cs.LG math.OC stat.ML | Stochastic gradient descent (SGD) on a low-rank factorization is commonly
employed to speed up matrix problems including matrix completion, subspace
tracking, and SDP relaxation. In this paper, we exhibit a step size scheme for
SGD on a low-rank least-squares problem, and we prove that, under broad
sampling conditions, our method converges globally from a random starting point
within $O(\epsilon^{-1} n \log n)$ steps with constant probability for
constant-rank problems. Our modification of SGD relates it to stochastic power
iteration. We also show experiments to illustrate the runtime and convergence
of the algorithm.
| Christopher De Sa, Kunle Olukotun, and Christopher R\'e | null | 1411.1134 | null | null |
Conditional Random Field Autoencoders for Unsupervised Structured
Prediction | cs.LG cs.CL | We introduce a framework for unsupervised learning of structured predictors
with overlapping, global features. Each input's latent representation is
predicted conditional on the observable data using a feature-rich conditional
random field. Then a reconstruction of the input is (re)generated, conditional
on the latent structure, using models for which maximum likelihood estimation
has a closed-form. Our autoencoder formulation enables efficient learning
without making unrealistic independence assumptions or restricting the kinds of
features that can be used. We illustrate insightful connections to traditional
autoencoders, posterior regularization and multi-view learning. We show
competitive results with instantiations of the model for two canonical NLP
tasks: part-of-speech induction and bitext word alignment, and show that
training our model can be substantially more efficient than comparable
feature-rich baselines.
| Waleed Ammar, Chris Dyer, Noah A. Smith | null | 1411.1147 | null | null |
On the Complexity of Learning with Kernels | cs.LG stat.ML | A well-recognized limitation of kernel learning is the requirement to handle
a kernel matrix, whose size is quadratic in the number of training examples.
Many methods have been proposed to reduce this computational cost, mostly by
using a subset of the kernel matrix entries, or some form of low-rank matrix
approximation, or a random projection method. In this paper, we study lower
bounds on the error attainable by such methods as a function of the number of
entries observed in the kernel matrix or the rank of an approximate kernel
matrix. We show that there are kernel learning problems where no such method
will lead to non-trivial computational savings. Our results also quantify how
the problem difficulty depends on parameters such as the nature of the loss
function, the regularization parameter, the norm of the desired predictor, and
the kernel matrix rank. Finally, our results suggest cases where more efficient
kernel learning might be possible.
| Nicol\`o Cesa-Bianchi, Yishay Mansour and Ohad Shamir | null | 1411.1158 | null | null |
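One representative of the method family being lower-bounded is the Nystrom approximation, which builds a rank-m surrogate of the full n x n kernel matrix from m sampled columns. A hedged sketch, assuming a caller-supplied kernel function that returns a cross-kernel matrix; the paper's point is precisely that no method with such an entry or rank budget can always save computation.
```python
import numpy as np

def nystrom_approx(X, kernel, m, rng=None):
    """Return a factor L with K ~= L @ L.T, built from m landmark points."""
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[0]
    idx = rng.choice(n, size=m, replace=False)      # sampled landmark indices
    C = kernel(X, X[idx])                           # n x m block of the kernel matrix
    W = C[idx]                                      # m x m landmark block
    # K ~= C W^{-1} C.T = L L.T with L = C W^{-1/2}
    evals, evecs = np.linalg.eigh(W)
    inv_sqrt = evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-12))) @ evecs.T
    return C @ inv_sqrt
```
Only m columns of the kernel matrix are ever formed, which is exactly the kind of "entries observed" budget the lower bounds are stated in terms of.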
Rapid Skill Capture in a First-Person Shooter | cs.HC cs.LG | Various aspects of computer game design, including adaptive elements of game
levels, characteristics of 'bot' behavior, and player matching in multiplayer
games, would ideally be sensitive to a player's skill level. Yet, while
difficulty and player learning have been explored in the context of games,
there has been little work analyzing skill per se, and how it pertains to a
player's input. To this end, we present a data set of 476 game logs from over
40 players of a first-person shooter game (Red Eclipse) as a basis of a case
study. We then analyze different metrics of skill and show that some of these
can be predicted using only a few seconds of keyboard and mouse input. We argue
that the techniques used here are useful for adapting games to match players'
skill levels rapidly, perhaps more rapidly than solutions based on performance
averaging such as TrueSkill.
| David Buckley, Ke Chen, Joshua Knowles | null | 1411.1316 | null | null |
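An illustrative sketch of the kind of pipeline the paper motivates: summarize a few seconds of keyboard and mouse events into features, then regress a skill metric. The features, the Ridge model, and all names below are assumptions for illustration, not the paper's actual choices.
```python
import numpy as np
from sklearn.linear_model import Ridge

def input_features(key_times, mouse_xy):
    """key_times: event timestamps in seconds (assumes at least two events);
    mouse_xy: (T, 2) sampled cursor positions."""
    ikis = np.diff(np.sort(key_times))              # inter-key intervals
    speeds = np.linalg.norm(np.diff(mouse_xy, axis=0), axis=1)
    return np.array([len(key_times), ikis.mean(), ikis.std(),
                     speeds.mean(), speeds.std(), speeds.max()])

# Hypothetical usage, given per-window inputs and matched skill labels:
# X = np.stack([input_features(k, m) for k, m in windows])
# model = Ridge().fit(X, skills)
```
Because such features stabilize within seconds, a predictor of this form could adapt a game far faster than performance-averaging schemes like TrueSkill.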
Eigenvectors of Orthogonally Decomposable Functions | cs.LG | The eigendecomposition of quadratic forms (symmetric matrices) guaranteed by
the spectral theorem is a foundational result in applied mathematics. Motivated
by a shared structure found in inferential problems of recent interest---namely
orthogonal tensor decompositions, Independent Component Analysis (ICA), topic
models, spectral clustering, and Gaussian mixture learning---we generalize the
eigendecomposition from quadratic forms to a broad class of "orthogonally
decomposable" functions. We identify a key role of convexity in our extension,
and we generalize two traditional characterizations of eigenvectors: First, the
eigenvectors of a quadratic form arise from the structure of its optima on the
sphere. Second, the eigenvectors are the fixed points of
the power iteration.
In our setting, we consider a simple first order generalization of the power
method which we call gradient iteration. It leads to efficient and easily
implementable methods for basis recovery. It includes influential machine
learning methods such as cumulant-based FastICA and the tensor power iteration
for orthogonally decomposable tensors as special cases.
We provide a complete theoretical analysis of gradient iteration using the
structure theory of discrete dynamical systems to show almost sure convergence
and fast (super-linear) convergence rates. The analysis also extends to the
case when the observed function is only approximately orthogonally
decomposable, with bounds that are polynomial in dimension and other relevant
parameters, such as perturbation size. Our perturbation results can be
considered as a non-linear version of the classical Davis-Kahan theorem for
perturbations of eigenvectors of symmetric matrices.
| Mikhail Belkin, Luis Rademacher, James Voss | null | 1411.1420 | null | null |
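The gradient iteration described above is easy to state in code: normalize the gradient back to the sphere at each step, which reduces to the classical power method when f(x) = 0.5 x'Ax so that grad f(x) = Ax. A minimal sketch, with a caller-supplied grad_f and illustrative iteration and tolerance settings; the almost-sure and super-linear convergence guarantees are the paper's, not this snippet's.
```python
import numpy as np

def gradient_iteration(grad_f, dim, n_iters=200, tol=1e-10, rng=None):
    """Iterate x <- grad_f(x) / ||grad_f(x)|| from a random unit vector."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(dim)
    x /= np.linalg.norm(x)
    for _ in range(n_iters):
        g = grad_f(x)
        x_new = g / np.linalg.norm(g)               # project back onto the sphere
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Power iteration as the quadratic special case:
# A = some symmetric matrix; v = gradient_iteration(lambda x: A @ x, A.shape[0])
```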
On the Information Theoretic Limits of Learning Ising Models | cs.LG | We provide a general framework for computing lower bounds on the sample
complexity of recovering the underlying graphs of Ising models, given i.i.d.
samples. While there have been recent results for specific graph classes, these
involve fairly extensive technical arguments that are specialized to each
specific graph class. In contrast, we isolate two key graph-structural
ingredients that can then be used to specify sample-complexity lower bounds.
The presence of these structural properties makes a graph class hard to learn. We
derive corollaries of our main result that not only recover existing recent
results, but also provide lower bounds for novel graph classes not considered
previously. We also extend our framework to the random graph setting and derive
corollaries for Erd\H{o}s-R\'{e}nyi graphs in a certain dense setting.
| Karthikeyan Shanmugam, Rashish Tandon, Alexandros G. Dimakis, Pradeep
Ravikumar | null | 1411.1434 | null | null |
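Information-theoretic lower bounds of this kind typically rest on a Fano-style argument; the following is a generic sketch of that standard technique, not necessarily the paper's exact framework. With G drawn uniformly from a graph class \mathcal{G} and X^n the n i.i.d. samples:
```latex
\begin{align*}
  \Pr[\hat{G} \neq G]
    \;\ge\; 1 - \frac{I(G; X^n) + \log 2}{\log |\mathcal{G}|}
    \;\ge\; 1 - \frac{n \max_{G, G'} D(P_G \,\|\, P_{G'}) + \log 2}
                     {\log |\mathcal{G}|},
\end{align*}
% so any estimator needs n = \Omega( \log|\mathcal{G}| / \max_{G,G'} D(P_G \| P_{G'}) )
% samples; the graph-structural ingredients control the size of the class and
% how distinguishable its members are.
```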
Electrocardiography Separation of Mother and Baby | cs.CV cs.LG | Extracting the electrocardiography (ECG or EKG) signals of mother and baby is
a challenging task, because a single device receives a mixture of multiple
heartbeats. In this paper, we design a filter to separate the signals from each
other.
| Wei Wang | null | 1411.1446 | null | null |
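As one standard baseline for this separation problem (an assumption for illustration, not the paper's filter), independent component analysis can unmix maternal and fetal beats from multi-channel abdominal recordings:
```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_ecg(recordings):
    """recordings: (T, n_channels) mixed abdominal ECG samples."""
    ica = FastICA(n_components=recordings.shape[1], random_state=0)
    sources = ica.fit_transform(recordings)   # (T, n_components) unmixed signals
    return sources                            # candidate maternal/fetal components
```
ICA needs multiple channels, so a single-sensor setup like the one described would instead call for the kind of filtering the paper designs.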