title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Analyzing Tensor Power Method Dynamics in Overcomplete Regime | cs.LG stat.ML | We present a novel analysis of the dynamics of tensor power iterations in the
overcomplete regime where the tensor CP rank is larger than the input
dimension. Finding the CP decomposition of an overcomplete tensor is NP-hard in
general. We consider the case where the tensor components are randomly drawn,
and show that the simple power iteration recovers the components with bounded
error under mild initialization conditions. We apply our analysis to
unsupervised learning of latent variable models, such as multi-view mixture
models and spherical Gaussian mixtures. Given the third order moment tensor, we
learn the parameters using tensor power iterations. We prove it can correctly
learn the model parameters when the number of hidden components $k$ is much
larger than the data dimension $d$, up to $k = o(d^{1.5})$. We initialize the
power iterations with data samples and prove its success under mild conditions
on the signal-to-noise ratio of the samples. Our analysis significantly expands
the class of latent variable models where spectral methods are applicable. Our
analysis also handles noise in the input tensor, yielding sample complexity
results for learning latent variable models.
| Anima Anandkumar, Rong Ge, Majid Janzamin | null | 1411.1488 | null | null |
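A minimal sketch of the tensor power iteration the abstract above analyzes, on a random overcomplete third-order tensor (dimensions, seed, and initialization scheme are illustrative choices, not the authors' code):

```python
import numpy as np

d, k = 50, 100                        # overcomplete regime: rank k > dimension d
rng = np.random.default_rng(0)
A = rng.standard_normal((d, k))
A /= np.linalg.norm(A, axis=0)        # unit-norm random components a_1..a_k
T = np.einsum('ij,kj,lj->ikl', A, A, A)   # T = sum_j a_j (x) a_j (x) a_j

x = A[:, 0] + 0.3 * rng.standard_normal(d)   # init correlated with a_1,
x /= np.linalg.norm(x)                       # mimicking a data-sample init

for _ in range(30):
    x = np.einsum('ikl,k,l->i', T, x, x)     # power update: x <- T(I, x, x)
    x /= np.linalg.norm(x)

print(abs(A[:, 0] @ x))   # correlation with the true component, close to 1
```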
Efficient Representations for Life-Long Learning and Autoencoding | cs.LG | It has been a long-standing goal in machine learning, as well as in AI more
generally, to develop life-long learning systems that learn many different
tasks over time, and reuse insights from tasks learned, "learning to learn" as
they do so. In this work we pose and provide efficient algorithms for several
natural theoretical formulations of this goal. Specifically, we consider the
problem of learning many different target functions over time, that share
certain commonalities that are initially unknown to the learning algorithm. Our
aim is to learn new internal representations as the algorithm learns new target
functions, that capture this commonality and allow subsequent learning tasks to
be solved more efficiently and from less data. We develop efficient algorithms
for two very different kinds of commonalities that target functions might
share: one based on learning common low-dimensional subspaces and unions of
low-dimensional subspaces, and one based on learning nonlinear Boolean
combinations of features. Our algorithms for learning Boolean feature
combinations additionally have a dual interpretation, and can be viewed as
giving an efficient procedure for constructing near-optimal sparse Boolean
autoencoders under a natural "anchor-set" assumption.
| Maria-Florina Balcan, Avrim Blum, Santosh Vempala | null | 1411.1490 | null | null |
Convolutional Neural Network-based Place Recognition | cs.CV cs.LG cs.NE | Recently Convolutional Neural Networks (CNNs) have been shown to achieve
state-of-the-art performance on various classification tasks. In this paper, we
present for the first time a place recognition technique based on CNN models,
by combining the powerful features learnt by CNNs with a spatial and sequential
filter. Applying the system to a 70 km benchmark place recognition dataset we
achieve a 75% increase in recall at 100% precision, significantly outperforming
all previous state-of-the-art techniques. We also conduct a comprehensive
performance comparison of the utility of features from all 21 layers for place
recognition, both for the benchmark dataset and for a second dataset with more
significant viewpoint changes.
| Zetao Chen, Obadiah Lam, Adam Jacobson and Michael Milford | null | 1411.1509 | null | null |
Large-Margin Determinantal Point Processes | stat.ML cs.CV cs.LG | Determinantal point processes (DPPs) offer a powerful approach to modeling
diversity in many applications where the goal is to select a diverse subset. We
study the problem of learning the parameters (the kernel matrix) of a DPP from
labeled training data. We make two contributions. First, we show how to
reparameterize a DPP's kernel matrix with multiple kernel functions, thus
enhancing modeling flexibility. Second, we propose a novel parameter estimation
technique based on the principle of large margin separation. In contrast to the
state-of-the-art method of maximum likelihood estimation, our large-margin loss
function explicitly models errors in selecting the target subsets, and it can
be customized to trade off different types of errors (precision vs. recall).
Extensive empirical studies validate our contributions, including applications
on challenging document and video summarization, where flexibility in modeling
the kernel matrix and balancing different errors is indispensable.
| Boqing Gong, Wei-lun Chao, Kristen Grauman and Fei Sha | null | 1411.1537 | null | null |
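For context, the quantity being learned above is the L-ensemble likelihood $P(Y) \propto \det(L_Y)$, where $L_Y$ is the kernel submatrix indexed by subset $Y$. A toy sketch (illustrative kernel, not the paper's learned parameterization):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 3))
L = B @ B.T + 0.1 * np.eye(5)       # a PSD kernel over 5 ground-set items

def dpp_prob(L, Y):
    """Probability of subset Y under the L-ensemble DPP."""
    Z = np.linalg.det(L + np.eye(len(L)))    # normalizer: det(L + I)
    return np.linalg.det(L[np.ix_(Y, Y)]) / Z

# Diverse subsets (nearly orthogonal items) receive higher probability.
for Y in combinations(range(5), 2):
    print(Y, dpp_prob(L, list(Y)))
```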
A Hybrid Recurrent Neural Network For Music Transcription | cs.LG | We investigate the problem of incorporating higher-level symbolic score-like
information into Automatic Music Transcription (AMT) systems to improve their
performance. We use recurrent neural networks (RNNs) and their variants as
music language models (MLMs) and present a generative architecture for
combining these models with predictions from a frame level acoustic classifier.
We also compare different neural network architectures for acoustic modeling.
The proposed model computes a distribution over possible output sequences given
the acoustic input signal and we present an algorithm for performing a global
search for good candidate transcriptions. The performance of the proposed model
is evaluated on piano music from the MAPS dataset and we observe that the
proposed model consistently outperforms existing transcription methods.
| Siddharth Sigtia, Emmanouil Benetos, Nicolas Boulanger-Lewandowski,
Tillman Weyde, Artur S. d'Avila Garcez, Simon Dixon | null | 1411.1623 | null | null |
Submodular meets Structured: Finding Diverse Subsets in
Exponentially-Large Structured Item Sets | cs.LG cs.AI cs.CV cs.IR stat.ML | To cope with the high level of ambiguity faced in domains such as Computer
Vision or Natural Language Processing, robust prediction methods often search
for a diverse set of high-quality candidate solutions or proposals. In
structured prediction problems, this becomes a daunting task, as the solution
space (image labelings, sentence parses, etc.) is exponentially large. We study
greedy algorithms for finding a diverse subset of solutions in
structured-output spaces by drawing new connections between submodular
functions over combinatorial item sets and High-Order Potentials (HOPs) studied
for graphical models. Specifically, we show via examples that when marginal
gains of submodular diversity functions allow structured representations, this
enables efficient (sub-linear time) approximate maximization by reducing the
greedy augmentation step to inference in a factor graph with appropriately
constructed HOPs. We discuss benefits, tradeoffs, and show that our
constructions lead to significantly better proposals.
| Adarsh Prasad, Stefanie Jegelka and Dhruv Batra | null | 1411.1752 | null | null |
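The greedy augmentation step that the paper speeds up via HOP inference is, in its generic form, the standard greedy algorithm for monotone submodular maximization. A sketch over a small explicit item set (toy facility-location objective; in the paper the argmax runs over exponentially large structured spaces):

```python
def greedy_max(items, f, budget):
    S = []
    for _ in range(budget):
        # pick the item with the largest marginal gain f(S + {i}) - f(S)
        best = max((i for i in items if i not in S),
                   key=lambda i: f(S + [i]) - f(S))
        S.append(best)
    return S

# Coverage-style submodular objective over two "targets" x and y.
sims = {('a', 'x'): .9, ('a', 'y'): .1, ('b', 'x'): .2, ('b', 'y'): .8,
        ('c', 'x'): .5, ('c', 'y'): .5}
def coverage(S):
    return sum(max((sims[(i, t)] for i in S), default=0.0) for t in ('x', 'y'))

print(greedy_max(['a', 'b', 'c'], coverage, budget=2))   # ['a', 'b']
```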
Conditional Generative Adversarial Nets | cs.LG cs.AI cs.CV stat.ML | Generative Adversarial Nets [8] were recently introduced as a novel way to
train generative models. In this work we introduce the conditional version of
generative adversarial nets, which can be constructed by simply feeding the
data, y, we wish to condition on to both the generator and discriminator. We
show that this model can generate MNIST digits conditioned on class labels. We
also illustrate how this model could be used to learn a multi-modal model, and
provide preliminary examples of an application to image tagging in which we
demonstrate how this approach can generate descriptive tags which are not part
of training labels.
| Mehdi Mirza, Simon Osindero | null | 1411.1784 | null | null |
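The conditioning mechanism described above is just concatenation of the label with the inputs of both networks. A minimal PyTorch sketch (layer sizes are illustrative, not the paper's architecture):

```python
import torch
import torch.nn as nn

z_dim, y_dim, x_dim = 100, 10, 784

G = nn.Sequential(nn.Linear(z_dim + y_dim, 256), nn.ReLU(),
                  nn.Linear(256, x_dim), nn.Sigmoid())
D = nn.Sequential(nn.Linear(x_dim + y_dim, 256), nn.ReLU(),
                  nn.Linear(256, 1), nn.Sigmoid())

z = torch.randn(32, z_dim)
y = torch.eye(y_dim)[torch.randint(0, y_dim, (32,))]   # one-hot class labels

x_fake = G(torch.cat([z, y], dim=1))        # generator sees (z, y)
p_real = D(torch.cat([x_fake, y], dim=1))   # discriminator sees (x, y)
```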
How transferable are features in deep neural networks? | cs.LG cs.NE | Many deep neural networks trained on natural images exhibit a curious
phenomenon in common: on the first layer they learn features similar to Gabor
filters and color blobs. Such first-layer features appear not to be specific to
a particular dataset or task, but general in that they are applicable to many
datasets and tasks. Features must eventually transition from general to
specific by the last layer of the network, but this transition has not been
studied extensively. In this paper we experimentally quantify the generality
versus specificity of neurons in each layer of a deep convolutional neural
network and report a few surprising results. Transferability is negatively
affected by two distinct issues: (1) the specialization of higher layer neurons
to their original task at the expense of performance on the target task, which
was expected, and (2) optimization difficulties related to splitting networks
between co-adapted neurons, which was not expected. In an example network
trained on ImageNet, we demonstrate that either of these two issues may
dominate, depending on whether features are transferred from the bottom,
middle, or top of the network. We also document that the transferability of
features decreases as the distance between the base task and target task
increases, but that transferring features even from distant tasks can be better
than using random features. A final surprising result is that initializing a
network with transferred features from almost any number of layers can produce
a boost to generalization that lingers even after fine-tuning to the target
dataset.
| Jason Yosinski, Jeff Clune, Yoshua Bengio, Hod Lipson | null | 1411.1792 | null | null |
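The transfer protocol studied above, copy the first layers from a base-task network and either freeze them or fine-tune, looks roughly like this in modern PyTorch (illustrative; the paper's experiments use an AlexNet-era network, and the torchvision weights API here assumes torchvision >= 0.13):

```python
import torch.nn as nn
from torchvision import models

base = models.resnet18(weights="IMAGENET1K_V1")  # network trained on base task
for p in base.parameters():
    p.requires_grad = False                      # freeze transferred layers
base.fc = nn.Linear(base.fc.in_features, 10)     # new top layer for target task
# Fine-tuning variant: leave requires_grad=True so transferred features adapt,
# which the paper finds recovers the co-adaptation lost by splitting layers.
```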
Beta Process Non-negative Matrix Factorization with Stochastic
Structured Mean-Field Variational Inference | stat.ML cs.LG | The beta process is the standard nonparametric Bayesian prior for latent factor
models. In this paper, we derive a structured mean-field variational inference
algorithm for a beta process non-negative matrix factorization (NMF) model with
Poisson likelihood. Unlike the linear Gaussian model, which is well-studied in
the nonparametric Bayesian literature, the NMF model with a beta process prior
does not enjoy conjugacy. We leverage the recently developed stochastic
structured mean-field variational inference to relax the conjugacy constraint
and restore the dependencies among the latent variables in the approximating
variational distribution. Preliminary results on both synthetic and real
examples demonstrate that the proposed inference algorithm can reasonably
recover the hidden structure of the data.
| Dawen Liang, Matthew D. Hoffman | null | 1411.1804 | null | null |
Variational Tempering | stat.ML cs.LG | Variational inference (VI) combined with data subsampling enables approximate
posterior inference over large data sets, but suffers from poor local optima.
We first formulate a deterministic annealing approach for the generic class of
conditionally conjugate exponential family models. This approach uses a
decreasing temperature parameter which deterministically deforms the objective
during the course of the optimization. A well-known drawback to this annealing
approach is the choice of the cooling schedule. We therefore introduce
variational tempering, a variational algorithm that introduces a temperature
latent variable to the model. In contrast to related work in the Markov chain
Monte Carlo literature, this algorithm results in adaptive annealing schedules.
Lastly, we develop local variational tempering, which assigns a latent
temperature to each data point; this allows for dynamic annealing that varies
across data. Compared to traditional VI, all proposed approaches find
improved predictive likelihoods on held-out data.
| Stephan Mandt, James McInerney, Farhan Abrol, Rajesh Ranganath, and
David Blei | null | 1411.1810 | null | null |
Power-Law Graph Cuts | cs.CV cs.LG stat.ML | Algorithms based on spectral graph cut objectives such as normalized cuts,
ratio cuts and ratio association have become popular in recent years because
they are widely applicable and simple to implement via standard eigenvector
computations. Despite strong performance for a number of clustering tasks,
spectral graph cut algorithms still suffer from several limitations: first,
they require the number of clusters to be known in advance, but this
information is often unknown a priori; second, they tend to produce clusters
with uniform sizes. In some cases, the true clusters exhibit a known size
distribution; in image segmentation, for instance, human-segmented images tend
to yield segment sizes that follow a power-law distribution. In this paper, we
propose a general framework of power-law graph cut algorithms that produce
clusters whose sizes are power-law distributed and do not fix the
number of clusters upfront. To achieve our goals, we treat the Pitman-Yor
exchangeable partition probability function (EPPF) as a regularizer to graph
cut objectives. Because the resulting objectives cannot be solved by relaxing
via eigenvectors, we derive a simple iterative algorithm to locally optimize
the objectives. Moreover, we show that our proposed algorithm can be viewed as
performing MAP inference on a particular Pitman-Yor mixture model. Our
experiments on various data sets show the effectiveness of our algorithms.
| Xiangyang Zhou, Jiaxin Zhang, Brian Kulis | null | 1411.1971 | null | null |
A totally unimodular view of structured sparsity | cs.LG stat.ML | This paper describes a simple framework for structured sparse recovery based
on convex optimization. We show that many structured sparsity models can be
naturally represented by linear matrix inequalities on the support of the
unknown parameters, where the constraint matrix has a totally unimodular (TU)
structure. For such structured models, tight convex relaxations can be obtained
in polynomial time via linear programming. Our modeling framework unifies the
prevalent structured sparsity norms in the literature, introduces new
interesting ones, and renders their tightness and tractability arguments
transparent.
| Marwa El Halabi and Volkan Cevher | null | 1411.1990 | null | null |
Partitioning Well-Clustered Graphs: Spectral Clustering Works! | cs.DS cs.LG | In this paper we study variants of the widely used spectral clustering that
partitions a graph into k clusters by (1) embedding the vertices of a graph
into a low-dimensional space using the bottom eigenvectors of the Laplacian
matrix, and (2) grouping the embedded points into k clusters via k-means
algorithms. We show that, for a wide class of graphs, spectral clustering gives
a good approximation of the optimal clustering. While this approach was
proposed in the early 1990s and has been applied extensively, prior to our
work similar results were known only for graphs generated from stochastic
models.
We also give a nearly-linear time algorithm for partitioning well-clustered
graphs based on computing a matrix exponential and approximate nearest neighbor
data structures.
| Richard Peng and He Sun and Luca Zanetti | null | 1411.2021 | null | null |
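The two-step pipeline analyzed above is the classical one: embed via the bottom k eigenvectors of the normalized Laplacian, then run k-means. A small sketch (toy graph; not the paper's nearly-linear-time variant):

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_clustering(A, k):
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt  # normalized Laplacian
    _, U = eigh(L, subset_by_index=[0, k - 1])        # bottom k eigenvectors
    U /= np.linalg.norm(U, axis=1, keepdims=True) + 1e-12  # row-normalize
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)

# Two triangles joined by a single edge -> two well-separated clusters.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(spectral_clustering(A, 2))
```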
Online Collaborative-Filtering on Graphs | cs.LG | A common phenomenon in modern recommendation systems is the use of feedback
from one user to infer the `value' of an item to other users. This results in
an exploration vs. exploitation trade-off, in which items of possibly low value
have to be presented to users in order to ascertain their value. Existing
approaches to solving this problem focus on the case where the number of items
is small or the items admit some underlying structure -- it is unclear, however, if
good recommendation is possible when dealing with content-rich settings with
unstructured content.
We consider this problem under a simple natural model, wherein the number of
items and the number of item-views are of the same order, and an `access-graph'
constrains which user is allowed to see which item. Our main insight is that
the presence of the access-graph in fact makes good recommendation possible --
however this requires the exploration policy to be designed to take advantage
of the access-graph. Our results demonstrate the importance of `serendipity' in
exploration, and how higher graph-expansion translates to a higher quality of
recommendations; it also suggests a reason why in some settings, simple
policies like Twitter's `Latest-First' policy achieve good performance.
From a technical perspective, our model presents a way to study
exploration-exploitation tradeoffs in settings where the number of `trials' and
`strategies' are large (potentially infinite), and more importantly, of the
same order. Our algorithms admit competitive-ratio guarantees which hold for
the worst-case user, under both finite-population and infinite-horizon
settings, and are parametrized in terms of properties of the underlying graph.
Conversely, we also demonstrate that improperly-designed policies can be highly
sub-optimal, and that in many settings, our results are order-wise optimal.
| Siddhartha Banerjee, Sujay Sanghavi, Sanjay Shakkottai | null | 1411.2057 | null | null |
Learning Theory for Distribution Regression | math.ST cs.LG math.FA stat.ML stat.TH | We focus on the distribution regression problem: regressing to vector-valued
outputs from probability measures. Many important machine learning and
statistical tasks fit into this framework, including multi-instance learning
and point estimation problems without analytical solution (such as
hyperparameter or entropy estimation). Despite the large number of available
heuristics in the literature, the inherent two-stage sampled nature of the
problem makes the theoretical analysis quite challenging, since in practice
only samples from sampled distributions are observable, and the estimates have
to rely on similarities computed between sets of points. To the best of our
knowledge, the only existing technique with consistency guarantees for
distribution regression requires kernel density estimation as an intermediate
step (which often performs poorly in practice), and the domain of the
distributions to be compact Euclidean. In this paper, we study a simple,
analytically computable, ridge regression-based alternative to distribution
regression, where we embed the distributions to a reproducing kernel Hilbert
space, and learn the regressor from the embeddings to the outputs. Our main
contribution is to prove that this scheme is consistent in the two-stage
sampled setup under mild conditions (on separable topological domains enriched
with kernels): we present an exact computational-statistical efficiency
trade-off analysis showing that our estimator is able to match the one-stage
sampled minimax optimal rate [Caponnetto and De Vito, 2007; Steinwart et al.,
2009]. This result answers a 17-year-old open question, establishing the
consistency of the classical set kernel [Haussler, 1999; Gaertner et al., 2002]
in regression. We also cover consistency for more recent kernels on
distributions, including those due to [Christmann and Steinwart, 2010].
| Zoltan Szabo, Bharath Sriperumbudur, Barnabas Poczos, Arthur Gretton | null | 1411.2066 | null | null |
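A sketch of the two-stage sampled estimator described above: each bag of samples is embedded as a kernel mean, and ridge regression is run between the embeddings and the outputs (toy point-estimation task, with the bag mean as the label; hyperparameters are illustrative):

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mean_embedding_gram(bags, gamma=1.0):
    """G[i, j] = mean_{x in bag_i, y in bag_j} k(x, y)  (set kernel)."""
    n = len(bags)
    G = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            G[i, j] = rbf(bags[i], bags[j], gamma).mean()
    return G

rng = np.random.default_rng(0)
mus = rng.uniform(-2, 2, size=20)                      # regression targets
bags = [mu + rng.standard_normal((50, 1)) for mu in mus]  # sampled bags
G = mean_embedding_gram(bags)
alpha = np.linalg.solve(G + 0.1 * np.eye(len(bags)), mus)  # ridge weights
print(G[0] @ alpha, mus[0])   # in-sample prediction vs. truth
```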
Covariate-assisted spectral clustering | stat.ML cs.LG math.ST stat.ME stat.TH | Biological and social systems consist of myriad interacting units. The
interactions can be represented in the form of a graph or network. Measurements
of these graphs can reveal the underlying structure of these interactions,
which provides insight into the systems that generated the graphs. Moreover, in
applications such as connectomics, social networks, and genomics, graph data
are accompanied by contextualizing measures on each node. We utilize these node
covariates to help uncover latent communities in a graph, using a modification
of spectral clustering. Statistical guarantees are provided under a joint
mixture model that we call the node-contextualized stochastic blockmodel,
including a bound on the mis-clustering rate. The bound is used to derive
conditions for achieving perfect clustering. For most simulated cases,
covariate-assisted spectral clustering yields results superior to regularized
spectral clustering without node covariates and to an adaptation of canonical
correlation analysis. We apply our clustering method to large brain graphs
derived from diffusion MRI data, using the node locations or neurological
region membership as covariates. In both cases, covariate-assisted spectral
clustering yields clusters that are easier to interpret neurologically.
| Norbert Binkiewicz, Joshua T. Vogelstein, and Karl Rohe | 10.1093/biomet/asx008 | 1411.2158 | null | null |
Model-Parallel Inference for Big Topic Models | cs.DC cs.LG stat.ML | In real world industrial applications of topic modeling, the ability to
capture gigantic conceptual space by learning an ultra-high dimensional topical
representation, i.e., the so-called "big model", is becoming the next
desideratum after the enthusiasm for "big data", especially for fine-grained
downstream tasks such as online advertising, where good performances are
usually achieved by regression-based predictors built on millions if not
billions of input features. The conventional data-parallel approach for
training gigantic topic models turns out to be rather inefficient in utilizing
the power of parallelism, due to the heavy dependency on a centralized image of
"model". Big model size also poses another challenge on the storage, where
available model size is bounded by the smallest RAM of nodes. To address these
issues, we explore another type of parallelism, namely model-parallelism, which
enables training of disjoint blocks of a big topic model in parallel. By
integrating data-parallelism with model-parallelism, we show that dependencies
between distributed elements can be handled seamlessly, achieving not only
faster convergence but also an ability to tackle significantly bigger model
size. We describe an architecture for model-parallel inference of LDA, and
present a variant of collapsed Gibbs sampling algorithm tailored for it.
Experimental results demonstrate the ability of this system to handle topic
modeling with an unprecedented 200 billion model variables on only a
low-end cluster with very limited computational resources and bandwidth.
| Xun Zheng, Jin Kyu Kim, Qirong Ho, Eric P. Xing | null | 1411.2305 | null | null |
N$^3$LARS: Minimum Redundancy Maximum Relevance Feature Selection for
Large and High-dimensional Data | stat.ML cs.LG | We propose a feature selection method that finds non-redundant features from
large, high-dimensional data in a nonlinear way. Specifically, we propose a
nonlinear extension of the non-negative least-angle regression (LARS) called
N${}^3$LARS, where the similarity between input and output is measured through
the normalized version of the Hilbert-Schmidt Independence Criterion (HSIC). An
advantage of N${}^3$LARS is that it integrates easily with map-reduce
frameworks such as Hadoop and Spark. Thus, with the help of distributed
computing, a set of features can be efficiently selected from large and
high-dimensional data. Moreover, N${}^3$LARS is a convex method and can find a
global optimum solution. The effectiveness of the proposed method is first
demonstrated through feature selection experiments for classification and
regression with small and high-dimensional datasets. Finally, we evaluate our
proposed method over a large and high-dimensional biology dataset.
| Makoto Yamada, Avishek Saha, Hua Ouyang, Dawei Yin, Yi Chang | null | 1411.2331 | null | null |
Multi-Task Metric Learning on Network Data | stat.ML cs.LG | Multi-task learning (MTL) improves prediction performance in different
contexts by learning models jointly on multiple different, but related tasks.
Network data, which are a priori data with a rich relational structure, provide
an important context for applying MTL. In particular, the explicit relational
structure implies that network data is not i.i.d. data. Network data also often
comes with significant metadata (i.e., attributes) associated with each entity
(node). Moreover, due to the diversity and variation in network data (e.g.,
multi-relational links or multi-category entities), various tasks can be
performed and often a rich correlation exists between them. Learning algorithms
should exploit all of these additional sources of information for better
performance. In this work we take a metric-learning point of view for the MTL
problem in the network context. Our approach builds on structure preserving
metric learning (SPML). In particular SPML learns a Mahalanobis distance metric
for node attributes using network structure as supervision, so that the learned
distance function encodes the structure and can be used to predict link
patterns from attributes. SPML is described for single-task learning on a single
network. Herein, we propose a multi-task version of SPML, abbreviated as
MT-SPML, which is able to learn across multiple related tasks on multiple
networks via shared intermediate parametrization. MT-SPML learns a specific
metric for each task and a common metric for all tasks. The task correlation is
carried through the common metric and the individual metrics encode task
specific information. When combined together, they are structure-preserving
with respect to individual tasks. MT-SPML works on general networks, thus is
suitable for a wide variety of problems. In experiments, we challenge MT-SPML
on two real-world problems, where MT-SPML achieves significant improvements.
| Chen Fang and Daniel N. Rockmore | null | 1411.2337 | null | null |
Similarity Learning for High-Dimensional Sparse Data | cs.LG cs.AI stat.ML | A good measure of similarity between data points is crucial to many tasks in
machine learning. Similarity and metric learning methods learn such measures
automatically from data, but they do not scale well with respect to the
dimensionality of the data. In this paper, we propose a method that can
efficiently learn a similarity measure from high-dimensional sparse data. The core idea
is to parameterize the similarity measure as a convex combination of rank-one
matrices with specific sparsity structures. The parameters are then optimized
with an approximate Frank-Wolfe procedure to maximally satisfy relative
similarity constraints on the training data. Our algorithm greedily
incorporates one pair of features at a time into the similarity measure,
providing an efficient way to control the number of active features and thus
reduce overfitting. It enjoys very appealing convergence guarantees and its
time and memory complexity depends on the sparsity of the data instead of the
dimension of the feature space. Our experiments on real-world high-dimensional
datasets demonstrate its potential for classification, dimensionality reduction
and data exploration.
| Kuan Liu and Aur\'elien Bellet and Fei Sha | null | 1411.2374 | null | null |
Unifying Visual-Semantic Embeddings with Multimodal Neural Language
Models | cs.LG cs.CL cs.CV | Inspired by recent advances in multimodal learning and machine translation,
we introduce an encoder-decoder pipeline that learns (a): a multimodal joint
embedding space with images and text and (b): a novel language model for
decoding distributed representations from our space. Our pipeline effectively
unifies joint image-text embedding models with multimodal neural language
models. We introduce the structure-content neural language model that
disentangles the structure of a sentence from its content, conditioned on
representations produced by the encoder. The encoder allows one to rank images
and sentences while the decoder can generate novel descriptions from scratch.
Using LSTMs to encode sentences, we match the state-of-the-art performance on
Flickr8K and Flickr30K without using object detections. We also set new best
results when using the 19-layer Oxford convolutional network. Furthermore we
show that with linear encoders, the learned embedding space captures multimodal
regularities in terms of vector space arithmetic e.g. *image of a blue car* -
"blue" + "red" is near images of red cars. Sample captions generated for 800
images are made available for comparison.
| Ryan Kiros, Ruslan Salakhutdinov, Richard S. Zemel | null | 1411.2539 | null | null |
Deep Exponential Families | stat.ML cs.LG | We describe \textit{deep exponential families} (DEFs), a class of latent
variable models that are inspired by the hidden structures used in deep neural
networks. DEFs capture a hierarchy of dependencies between latent variables,
and are easily generalized to many settings through exponential families. We
perform inference using recent "black box" variational inference techniques. We
then evaluate various DEFs on text and combine multiple DEFs into a model for
pairwise recommendation data. In an extensive study, we show that going beyond
one layer improves predictions for DEFs. We demonstrate that DEFs find
interesting exploratory structure in large data sets, and give better
predictive performance than state-of-the-art models.
| Rajesh Ranganath, Linpeng Tang, Laurent Charlin, David M. Blei | null | 1411.2581 | null | null |
A chain rule for the expected suprema of Gaussian processes | cs.LG | The expected supremum of a Gaussian process indexed by the image of an index
set under a function class is bounded in terms of separate properties of the
index set and the function class. The bound is relevant to the estimation of
nonlinear transformations or the analysis of learning algorithms whenever
hypotheses are chosen from composite classes, as is the case for multi-layer
models.
| Andreas Maurer | 10.1007/978-3-319-11662-4_18 | 1411.2635 | null | null |
Preserving Statistical Validity in Adaptive Data Analysis | cs.LG cs.DS | A great deal of effort has been devoted to reducing the risk of spurious
scientific discoveries, from the use of sophisticated validation techniques, to
deep statistical methods for controlling the false discovery rate in multiple
hypothesis testing. However, there is a fundamental disconnect between the
theoretical results and the practice of data analysis: the theory of
statistical inference assumes a fixed collection of hypotheses to be tested, or
learning algorithms to be applied, selected non-adaptively before the data are
gathered, whereas in practice data is shared and reused with hypotheses and new
analyses being generated on the basis of data exploration and the outcomes of
previous analyses.
In this work we initiate a principled study of how to guarantee the validity
of statistical inference in adaptive data analysis. As an instance of this
problem, we propose and investigate the question of estimating the expectations
of $m$ adaptively chosen functions on an unknown distribution given $n$ random
samples.
We show that, surprisingly, there is a way to estimate an exponential-in-$n$
number of expectations accurately even if the functions are chosen adaptively.
This gives an exponential improvement over standard empirical estimators that
are limited to a linear number of estimates. Our result follows from a general
technique that counter-intuitively involves actively perturbing and
coordinating the estimates, using techniques developed for privacy
preservation. We give additional applications of this technique to our
question.
| Cynthia Dwork and Vitaly Feldman and Moritz Hardt and Toniann Pitassi
and Omer Reingold and Aaron Roth | null | 1411.2664 | null | null |
The Bayesian Echo Chamber: Modeling Social Influence via Linguistic
Accommodation | stat.ML cs.CL cs.LG cs.SI | We present the Bayesian Echo Chamber, a new Bayesian generative model for
social interaction data. By modeling the evolution of people's language usage
over time, this model discovers latent influence relationships between them.
Unlike previous work on inferring influence, which has primarily focused on
simple temporal dynamics evidenced via turn-taking behavior, our model captures
more nuanced influence relationships, evidenced via linguistic accommodation
patterns in interaction content. The model, which is based on a discrete analog
of the multivariate Hawkes process, permits a fully Bayesian inference
algorithm. We validate our model's ability to discover latent influence
patterns using transcripts of arguments heard by the US Supreme Court and the
movie "12 Angry Men." We showcase our model's capabilities by using it to infer
latent influence patterns from Federal Open Market Committee meeting
transcripts, demonstrating state-of-the-art performance at uncovering social
dynamics in group discussions.
| Fangjian Guo, Charles Blundell, Hanna Wallach and Katherine Heller | null | 1411.2674 | null | null |
Inferring User Preferences by Probabilistic Logical Reasoning over
Social Networks | cs.SI cs.AI cs.CL cs.LG | We propose a framework for inferring the latent attitudes or preferences of
users by performing probabilistic first-order logical reasoning over the social
network graph. Our method answers questions about Twitter users like {\em Does
this user like sushi?} or {\em Is this user a New York Knicks fan?} by building
a probabilistic model that reasons over user attributes (the user's location or
gender) and the social network (the user's friends and spouse), via inferences
like homophily (I am more likely to like sushi if spouse or friends like sushi,
I am more likely to like the Knicks if I live in New York). The algorithm uses
distant supervision, semi-supervised data harvesting and vector space models to
extract user attributes (e.g. spouse, education, location) and preferences
(likes and dislikes) from text. The extracted propositions are then fed into a
probabilistic reasoner (we investigate both Markov Logic and Probabilistic Soft
Logic). Our experiments show that probabilistic logical reasoning significantly
improves the performance on attribute and relation extraction, and also
achieves an F-score of 0.791 at predicting a user's likes or dislikes,
significantly better than two strong baselines.
| Jiwei Li, Alan Ritter and Dan Jurafsky | null | 1411.2679 | null | null |
Speaker Identification From Youtube Obtained Data | cs.SD cs.LG | An efficient and intuitive algorithm is presented for the identification of
speakers from a long dataset (such as a long YouTube discussion or recorded
cocktail-party audio or video). The goal of automatic speaker identification is
to identify the number of different speakers and prepare a model for each
speaker by extracting and characterizing the speaker-specific information
contained in the speech signal. It has many diverse applications, especially in
surveillance, airport immigration, cyber security, and transcription of
recordings with multiple similar-sounding sources, where transcripts are
difficult to assign. The speech parametrizations most commonly used in speaker
verification, k-means and cepstral analysis, are detailed. Gaussian mixture
modeling, the speaker modeling technique, is then explained. Gaussian mixture
models (GMMs), perhaps the most robust machine learning approach for this
problem, are introduced to carefully examine text-independent speaker
identification. The use of GMMs for monitoring and analyzing speaker identity
is motivated by the empirical observation that Gaussian mixtures represent the
characteristics of a speaker's spectral pattern, and by the remarkable ability
of GMMs to model arbitrary densities. We then describe expectation
maximization, an iterative algorithm that starts from an arbitrary initial
estimate and iterates until the parameter values converge. Across a number of
experiments we obtain identification rates of 79-82% using vector quantization
and 85-92.6% using GMM modeling with expectation maximization parameter
estimation, depending on the parameter settings.
| Nitesh Kumar Chaudhary | 10.5121/sipij.2014.5503 | 1411.2795 | null | null |
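A sketch of the GMM-per-speaker scheme described above: fit one mixture model to each speaker's frame-level features, then identify a test segment by maximum average log-likelihood. Random arrays with shifted means stand in for real MFCC frames here:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train = {s: rng.standard_normal((500, 13)) + 3 * s for s in range(3)}

models = {s: GaussianMixture(n_components=8, covariance_type='diag').fit(X)
          for s, X in train.items()}

test = rng.standard_normal((200, 13)) + 3 * 1           # frames from speaker 1
scores = {s: m.score(test) for s, m in models.items()}  # avg log-likelihood
print(max(scores, key=scores.get))                      # -> 1
```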
A new estimate of mutual information based measure of dependence between
two variables: properties and fast implementation | cs.IT cs.LG math.IT | This article proposes a new method to estimate an existing mutual information
based dependence measure using histogram density estimates. Finding a suitable
bin length for histogram is an open problem. We propose a new way of computing
the bin length for histogram using a function of maximum separation between
points. The chosen bin length leads to consistent density estimates for
histogram method. The values of density thus obtained are used to calculate an
estimate of an existing dependence measure. The proposed estimate is named the
Mutual Information Based Dependence Index (MIDI). Some important properties of
MIDI have also been stated. The performance of the proposed method has been
compared to generally accepted measures like Distance Correlation (dcor),
Maximal Information Coefficient (MINE) in terms of accuracy and computational
complexity with the help of several artificial data sets with different amounts
of noise. The proposed method is able to detect many types of relationships
between variables, without making any assumption about the functional form of
the relationship. The power statistics of the proposed method illustrate its
effectiveness in detecting nonlinear relationships. Thus, it is able to achieve
generality without a high rate of false positives. MIDI is found to work
better on a real-life data set than competing methods. The proposed method is
found to overcome some of the limitations which occur with dcor and MINE.
Computationally, MIDI is found to be better than dcor and MINE, in terms of
time and memory, making it suitable for large data sets.
| Namita Jain and C.A. Murthy | 10.1007/s13042-015-0418-6 | 1411.2883 | null | null |
Bounded Regret for Finite-Armed Structured Bandits | cs.LG | We study a new type of K-armed bandit problem where the expected return of
one arm may depend on the returns of other arms. We present a new algorithm for
this general class of problems and show that under certain circumstances it is
possible to achieve finite expected cumulative regret. We also give
problem-dependent lower bounds on the cumulative regret showing that at least
in special cases the new algorithm is nearly optimal.
| Tor Lattimore and Remi Munos | null | 1411.2919 | null | null |
Deep Multi-Instance Transfer Learning | cs.LG stat.ML | We present a new approach for transferring knowledge from groups to
individuals that comprise them. We evaluate our method in text, by inferring
the ratings of individual sentences using full-review ratings. This approach,
which combines ideas from transfer learning, deep learning and multi-instance
learning, reduces the need for laborious human labelling of fine-grained data
when abundant labels are available at the group level.
| Dimitrios Kotzias, Misha Denil, Phil Blunsom, Nando de Freitas | null | 1411.3128 | null | null |
Warranty Cost Estimation Using Bayesian Network | cs.AI cs.LG | All multi-component product manufacturing companies face the problem of
warranty cost estimation. Failure rate analysis of components plays a key role
in this problem. The data source used for failure rate analysis has
traditionally been past component failure data. However, failure rate analysis can be
improved by means of fusion of additional information, such as symptoms
observed during after-sale service of the product, geographical information
(hilly or plains areas), and information from tele-diagnostic analytics. In
this paper, we propose an approach, which learns dependency between
part-failures and symptoms gleaned from such diverse sources of information, to
predict expected number of failures with better accuracy. We also indicate how
the optimum warranty period can be computed. We demonstrate, through empirical
results, that our method can improve the warranty cost estimates significantly.
| Karamjit Singh, Puneet Agarwal, Gautam Shroff | null | 1411.3197 | null | null |
On TD(0) with function approximation: Concentration bounds and a
centered variant with exponential convergence | cs.LG math.OC stat.ML | We provide non-asymptotic bounds for the well-known temporal difference
learning algorithm TD(0) with linear function approximators. These include
high-probability bounds as well as bounds in expectation. Our analysis suggests
that a step-size inversely proportional to the number of iterations cannot
guarantee optimal rate of convergence unless we assume (partial) knowledge of
the stationary distribution for the Markov chain underlying the policy
considered. We also provide bounds for the iterate averaged TD(0) variant,
which gets rid of the step-size dependency while exhibiting the optimal rate of
convergence. Furthermore, we propose a variant of TD(0) with linear
approximators that incorporates a centering sequence, and establish that it
exhibits an exponential rate of convergence in expectation. We demonstrate the
usefulness of our bounds on two synthetic experimental settings.
| Nathaniel Korda and L.A. Prashanth | null | 1411.3224 | null | null |
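A sketch of TD(0) with linear function approximation, the algorithm the bounds above apply to: the update is $\theta \leftarrow \theta + \alpha_t \delta_t \phi(s_t)$ with $\delta_t = r_t + \gamma\,\phi(s_{t+1})^\top\theta - \phi(s_t)^\top\theta$, shown here with iterate averaging on a toy chain (features, rewards, and step-size schedule are illustrative):

```python
import numpy as np

n_states, d, gamma = 5, 3, 0.9
rng = np.random.default_rng(0)
Phi = rng.standard_normal((n_states, d))           # fixed feature map phi(s)
P = np.full((n_states, n_states), 1.0 / n_states)  # toy uniform Markov chain
r = rng.uniform(size=n_states)

theta, theta_bar, s = np.zeros(d), np.zeros(d), 0
for t in range(1, 50001):
    s_next = rng.choice(n_states, p=P[s])
    delta = r[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta  # TD error
    theta += (1.0 / t ** 0.75) * delta * Phi[s]    # step-size c / t^0.75
    theta_bar += (theta - theta_bar) / t           # iterate averaging
    s = s_next
print(theta_bar)
```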
Using Gaussian Measures for Efficient Constraint Based Clustering | cs.LG cs.IR | In this paper we present a novel iterative multiphase clustering technique
for efficiently clustering high dimensional data points. For this purpose we
implement clustering feature (CF) tree on a real data set and a Gaussian
density distribution constraint on the resultant CF tree. The post processing
by the application of Gaussian density distribution function on the
micro-clusters leads to refinement of the previously formed clusters thus
improving their quality. This algorithm also succeeds in overcoming the
inherent drawbacks of conventional hierarchical methods of clustering like
the inability to undo a change made to the dendrogram of the data points.
Moreover, the constraint measure applied in the algorithm makes this clustering
technique suitable for need driven data analysis. We provide veracity of our
claim by evaluating our algorithm with other similar clustering algorithms.
| Chandrima Sarkar, Atanu Roy | null | 1411.3302 | null | null |
Statistically Significant Detection of Linguistic Change | cs.CL cs.IR cs.LG | We propose a new computational approach for tracking and detecting
statistically significant linguistic shifts in the meaning and usage of words.
Such linguistic shifts are especially prevalent on the Internet, where the
rapid exchange of ideas can quickly change a word's meaning. Our meta-analysis
approach constructs property time series of word usage, and then uses
statistically sound change point detection algorithms to identify significant
linguistic shifts.
We consider and analyze three approaches of increasing complexity to generate
such linguistic property time series, the culmination of which uses
distributional characteristics inferred from word co-occurrences. Using
recently proposed deep neural language models, we first train vector
representations of words for each time period. Second, we warp the vector
spaces into one unified coordinate system. Finally, we construct a
distance-based distributional time series for each word to track its
linguistic displacement over time.
We demonstrate that our approach is scalable by tracking linguistic change
across years of micro-blogging using Twitter, a decade of product reviews using
a corpus of movie reviews from Amazon, and a century of written books using the
Google Book-ngrams. Our analysis reveals interesting patterns of language usage
change commensurate with each medium.
| Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena | null | 1411.3315 | null | null |
A Randomized Algorithm for CCA | stat.ML cs.LG | We present RandomizedCCA, a randomized algorithm for computing canonical
correlation analysis, suitable for large datasets stored either out of core or on a
distributed file system. Accurate results can be obtained in as few as two data
passes, which is relevant for distributed processing frameworks in which
iteration is expensive (e.g., Hadoop). The strategy also provides an excellent
initializer for standard iterative solutions.
| Paul Mineiro, Nikos Karampatziakis | null | 1411.3409 | null | null |
Multi-view Anomaly Detection via Probabilistic Latent Variable Models | stat.ML cs.LG | We propose a nonparametric Bayesian probabilistic latent variable model for
multi-view anomaly detection, which is the task of finding instances that have
inconsistent views. With the proposed model, all views of a non-anomalous
instance are assumed to be generated from a single latent vector. On the other
hand, an anomalous instance is assumed to have multiple latent vectors, and its
different views are generated from different latent vectors. By inferring the
number of latent vectors used for each instance with Dirichlet process priors,
we obtain multi-view anomaly scores. The proposed model can be seen as a robust
extension of probabilistic canonical correlation analysis for noisy multi-view
data. We present Bayesian inference procedures for the proposed model based on
a stochastic EM algorithm. The effectiveness of the proposed model is
demonstrated in terms of performance when detecting multi-view anomalies and
imputing missing values in multi-view data with anomalies.
| Tomoharu Iwata, Makoto Yamada | null | 1411.3413 | null | null |
SelfieBoost: A Boosting Algorithm for Deep Learning | stat.ML cs.LG | We describe and analyze a new boosting algorithm for deep learning called
SelfieBoost. Unlike other boosting algorithms, like AdaBoost, which construct
ensembles of classifiers, SelfieBoost boosts the accuracy of a single network.
We prove a $\log(1/\epsilon)$ convergence rate for SelfieBoost under some "SGD
success" assumption which seems to hold in practice.
| Shai Shalev-Shwartz | null | 1411.3436 | null | null |
Greedy metrics in orthogonal greedy learning | cs.LG | Orthogonal greedy learning (OGL) is a stepwise learning scheme that adds a
new atom from a dictionary via steepest gradient descent and builds the
estimator by orthogonally projecting the target function onto the space spanned
by the selected atoms at each greedy step. Here, "greed" means choosing a new
atom according to the steepest gradient descent principle. OGL then avoids
overfitting/underfitting by selecting an appropriate iteration number. In this
paper, we point out that overfitting/underfitting can also be avoided by
redefining "greed" in OGL. To this end, we introduce a new greedy metric,
called $\delta$-greedy thresholds, to refine "greed", and theoretically verify
its feasibility. Furthermore, we reveal that such a greedy metric yields an
adaptive termination rule while maintaining the prominent learning performance
of OGL. Our results show that steepest gradient descent is not the unique
greedy metric for OGL and that other, more suitable metrics may lessen the
burden of model selection in OGL.
| Lin Xu, Shaobo Lin, Jinshan Zeng, Zongben Xu | null | 1411.3553 | null | null |
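A sketch of one orthogonal greedy step under the classical "steepest descent" metric the abstract discusses: pick the atom most correlated with the residual, then re-fit by orthogonal projection onto all selected atoms (toy dictionary; the paper's $\delta$-greedy thresholds would replace the argmax rule):

```python
import numpy as np

def ogl(D, y, n_steps):
    S, residual = [], y.copy()
    for _ in range(n_steps):
        gains = np.abs(D.T @ residual)           # steepest-descent greedy metric
        S.append(int(np.argmax(gains)))          # add the best new atom
        coef, *_ = np.linalg.lstsq(D[:, S], y, rcond=None)  # orthogonal projection
        residual = y - D[:, S] @ coef
    return S, residual

rng = np.random.default_rng(0)
D = rng.standard_normal((100, 50))
D /= np.linalg.norm(D, axis=0)                   # unit-norm dictionary atoms
y = 2 * D[:, 3] - 1.5 * D[:, 17] + 0.01 * rng.standard_normal(100)
print(ogl(D, y, 2)[0])   # recovers atoms {3, 17}
```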
Jamming Bandits | cs.IT cs.LG math.IT | Can an intelligent jammer learn and adapt to unknown environments in an
electronic warfare-type scenario? In this paper, we answer this question in the
positive, by developing a cognitive jammer that adaptively and optimally
disrupts the communication between a victim transmitter-receiver pair. We
formalize the problem using a novel multi-armed bandit framework where the
jammer can choose various physical layer parameters such as the signaling
scheme, power level and the on-off/pulsing duration in an attempt to obtain
power efficient jamming strategies. We first present novel online learning
algorithms to maximize the jamming efficacy against static transmitter-receiver
pairs and prove that our learning algorithm converges to the optimal (in terms
of the error rate inflicted at the victim and the energy used) jamming
strategy. Even more importantly, we prove that the rate of convergence to the
optimal jamming strategy is sub-linear, i.e. the learning is fast in comparison
to existing reinforcement learning algorithms, which is particularly important
in dynamically changing wireless environments. Also, we characterize the
performance of the proposed bandit-based learning algorithm against multiple
static and adaptive transmitter-receiver pairs.
| SaiDhiraj Amuru, Cem Tekin, Mihaela van der Schaar, R. Michael Buehrer | null | 1411.3652 | null | null |
Minimal Realization Problems for Hidden Markov Models | cs.LG | Consider a stationary discrete random process with alphabet size d, which is
assumed to be the output process of an unknown stationary Hidden Markov Model
(HMM). Given the joint probabilities of finite length strings of the process,
we are interested in finding a finite state generative model to describe the
entire process. In particular, we focus on two classes of models: HMMs and
quasi-HMMs, which is a strictly larger class of models containing HMMs. In the
main theorem, we show that if the random process is generated by an HMM of
order less than or equal to $k$, whose transition and observation probability
matrices are in general position, namely almost everywhere on the parameter
space, both the minimal quasi-HMM realization and the minimal HMM realization
can be efficiently computed based on the joint probabilities of all strings of
length $N$, for $N > 4 \lceil \log_d(k) \rceil + 1$. In this paper, we also aim to
compare and connect the two lines of literature: realization theory of HMMs,
and the recent development in learning latent variable models with tensor
decomposition techniques.
| Qingqing Huang, Rong Ge, Sham Kakade, Munther Dahleh | null | 1411.3698 | null | null |
Acoustic Scene Classification | cs.SD cs.LG | In this article we present an account of the state-of-the-art in acoustic
scene classification (ASC), the task of classifying environments from the
sounds they produce. Starting from a historical review of previous research in
this area, we define a general framework for ASC and present different
implementations of its components. We then describe a range of different algorithms
submitted for a data challenge that was held to provide a general and fair
benchmark for ASC techniques. The dataset recorded for this purpose is
presented, along with the performance metrics that are used to evaluate the
algorithms and statistical significance tests to compare the submitted methods.
We use a baseline method that employs MFCCs, GMMs and a maximum likelihood
criterion as a benchmark, and only find sufficient evidence to conclude that
three algorithms significantly outperform it. We also evaluate the human
classification accuracy in performing a similar classification task. The best
performing algorithm achieves a mean accuracy that matches the median accuracy
obtained by humans, and common pairs of classes are misclassified by both
computers and humans. However, all acoustic scenes are correctly classified by
at least some individuals, while there are scenes that are misclassified by all
algorithms.
| Daniele Barchiesi, Dimitrios Giannoulis, Dan Stowell, Mark D. Plumbley | 10.1109/MSP.2014.2326181 | 1411.3715 | null | null |
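A sketch of the MFCC+GMM maximum-likelihood baseline described above: one GMM per scene class over frame-level features, with a clip classified by the best-scoring model. Random arrays with shifted means stand in for real MFCC frames:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
classes = ['bus', 'park', 'office']
train = {c: rng.standard_normal((400, 20)) + 2 * i
         for i, c in enumerate(classes)}

gmms = {c: GaussianMixture(n_components=16, covariance_type='diag').fit(X)
        for c, X in train.items()}

clip = rng.standard_normal((150, 20)) + 2 * classes.index('park')
print(max(classes, key=lambda c: gmms[c].score(clip)))   # -> 'park'
```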
Deep Narrow Boltzmann Machines are Universal Approximators | stat.ML cs.LG math.PR | We show that deep narrow Boltzmann machines are universal approximators of
probability distributions on the activities of their visible units, provided
they have sufficiently many hidden layers, each containing the same number of
units as the visible layer. We show that, within certain parameter domains,
deep Boltzmann machines can be studied as feedforward networks. We provide
upper and lower bounds on the sufficient depth and width of universal
approximators. These results settle various intuitions regarding undirected
networks and, in particular, they show that deep narrow Boltzmann machines are
at least as compact universal approximators as narrow sigmoid belief networks
and restricted Boltzmann machines, with respect to the currently available
bounds for those models.
| Guido Montufar | null | 1411.3784 | null | null |
Asymmetric Minwise Hashing | stat.ML cs.DB cs.DS cs.IR cs.LG | Minwise hashing (Minhash) is a widely popular indexing scheme in practice.
Minhash is designed for estimating set resemblance and is known to be
suboptimal in many applications where the desired measure is set overlap (i.e.,
inner product between binary vectors) or set containment. Minhash has inherent
bias towards smaller sets, which adversely affects its performance in
applications where such a penalization is not desirable. In this paper, we
propose asymmetric minwise hashing (MH-ALSH), to provide a solution to this
problem. The new scheme utilizes asymmetric transformations to cancel the bias
of traditional minhash towards smaller sets, making the final "collision
probability" monotonic in the inner product. Our theoretical comparisons show
that for the task of retrieving with binary inner products asymmetric minhash
is provably better than traditional minhash and other recently proposed hashing
algorithms for general inner products. Thus, we obtain an algorithmic
improvement over existing approaches in the literature. Experimental
evaluations on four publicly available high-dimensional datasets validate our
claims and the proposed scheme outperforms, often significantly, other hashing
algorithms on the task of near neighbor retrieval with set containment. Our
proposal is simple and easy to implement in practice.
| Anshumali Shrivastava, Ping Li | null | 1411.3787 | null | null |
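For reference, here is plain minhash, the biased baseline the paper corrects: the collision probability of min-hashes equals the Jaccard resemblance $|A \cap B| / |A \cup B|$, which penalizes containment of small sets in large ones (hash family and sizes are illustrative):

```python
import numpy as np

def minhash_signature(s, n_hashes=200, universe=10**6, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.integers(1, universe, n_hashes)     # random linear hash params
    b = rng.integers(0, universe, n_hashes)
    prime = 1_000_003
    items = np.fromiter(s, dtype=np.int64)
    # min over the set for each of the n_hashes permutation-style hashes
    return ((a[:, None] * items[None, :] + b[:, None]) % prime).min(axis=1)

A, B = set(range(0, 100)), set(range(50, 200))   # |A∩B| = 50, |A∪B| = 200
sigA, sigB = minhash_signature(A), minhash_signature(B)
print((sigA == sigB).mean())   # ≈ 0.25, the Jaccard resemblance
```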
Predictive Encoding of Contextual Relationships for Perceptual
Inference, Interpolation and Prediction | cs.LG cs.CV cs.NE | We propose a new neurally-inspired model that can learn to encode the global
relationship context of visual events across time and space and to use the
contextual information to modulate the analysis by synthesis process in a
predictive coding framework. The model learns latent contextual representations
by maximizing the predictability of visual events based on local and global
contextual information through both top-down and bottom-up processes. In
contrast to standard predictive coding models, the prediction error in this
model is used to update the contextual representation but does not alter the
feedforward input for the next layer, and is thus more consistent with
neurophysiological observations. We establish the computational feasibility of
this model by demonstrating its ability in several aspects. We show that our
model can outperform the state-of-the-art performance of gated Boltzmann machines
(GBM) in estimation of contextual information. Our model can also interpolate
missing events or predict future events in image sequences while simultaneously
estimating contextual information. We show it achieves state-of-the-art
performances in terms of prediction accuracy in a variety of tasks and
possesses the ability to interpolate missing frames, a function that is lacking
in GBM.
| Mingmin Zhao, Chengxu Zhuang, Yizhou Wang, Tai Sing Lee | null | 1411.3815 | null | null |
Learning Fuzzy Controllers in Mobile Robotics with Embedded
Preprocessing | cs.RO cs.AI cs.LG | The automatic design of controllers for mobile robots usually requires two
stages. In the first stage, sensorial data are preprocessed or transformed into
high-level and meaningful values of variables which are usually defined from
expert knowledge. In the second stage, a machine learning technique is applied
to obtain a controller that maps these high-level variables to the control
commands that are actually sent to the robot. This paper describes an algorithm
that is able to embed the preprocessing stage into the learning stage in order
to get controllers directly starting from sensorial raw data with no expert
knowledge involved. Due to the high dimensionality of the sensorial data, this
approach uses Quantified Fuzzy Rules (QFRs), which are able to transform
low-level input variables into high-level input variables, reducing the
dimensionality through summarization. The proposed learning algorithm, called
Iterative Quantified Fuzzy Rule Learning (IQFRL), is based on genetic
programming. IQFRL is able to learn rules with different structures, and can
manage linguistic variables with multiple granularities. The algorithm has been
tested with the implementation of the wall-following behavior both in several
realistic simulated environments with different complexity and on a Pioneer 3-AT
robot in two real environments. Results have been compared with several
well-known learning algorithms combined with different data preprocessing
techniques, showing that IQFRL exhibits a better and statistically
significant performance. Moreover, three real-world applications for which IQFRL
plays a central role are also presented: path and object tracking with static
and moving obstacles avoidance.
| I. Rodr\'iguez-Fdez, M. Mucientes, A. Bugar\'in | 10.1016/j.asoc.2014.09.021 | 1411.3895 | null | null |
Sample-targeted clinical trial adaptation | cs.LG | Clinical trial adaptation refers to any adjustment of the trial protocol
after the onset of the trial. The main goal is to make the process of
introducing new medical interventions to patients more efficient by reducing
the cost and the time associated with evaluating their safety and efficacy. The
principal question is how should adaptation be performed so as to minimize the
chance of distorting the outcome of the trial. We propose a novel method for
achieving this. Unlike previous work our approach focuses on trial adaptation
by sample size adjustment. We adopt a recently proposed stratification
framework based on collected auxiliary data and show that this information
together with the primary measured variables can be used to make a
probabilistically informed choice of the particular sub-group a sample should
be removed from. Experiments on simulated data are used to illustrate the
effectiveness of our method and its application in practice.
| Ognjen Arandjelovic | null | 1411.3919 | null | null |
How to Scale Up Kernel Methods to Be As Good As Deep Neural Nets | cs.LG cs.AI stat.ML | The computational complexity of kernel methods has often been a major barrier
for applying them to large-scale learning problems. We argue that this barrier
can be effectively overcome. In particular, we develop methods to scale up
kernel models to successfully tackle large-scale learning problems that are so
far only approachable by deep learning architectures. Based on the seminal work
by Rahimi and Recht on approximating kernel functions with features derived
from random projections, we advance the state-of-the-art by proposing methods
that can efficiently train models with hundreds of millions of parameters, and
learn optimal representations from multiple kernels. We conduct extensive
empirical studies on problems from image recognition and automatic speech
recognition, and show that the performance of our kernel models matches that of
well-engineered deep neural nets (DNNs). To the best of our knowledge, this is
the first time that a direct comparison between these two methods on
large-scale problems is reported. Our kernel methods have several appealing
properties: training with convex optimization, cost for training a single model
comparable to DNNs, and significantly reduced total cost due to fewer
hyperparameters to tune for model selection. Our contrastive study between
these two very different but equally competitive models sheds light on
fundamental questions such as how to learn good representations.
| Zhiyun Lu and Avner May and Kuan Liu and Alireza Bagheri Garakani and
Dong Guo and Aur\'elien Bellet and Linxi Fan and Michael Collins and Brian
Kingsbury and Michael Picheny and Fei Sha | null | 1411.4000 | null | null |
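To make the Rahimi-Recht building block referenced in the abstract above concrete, here is a minimal random Fourier features sketch for the RBF kernel; the scale-up machinery for hundreds of millions of parameters and multiple kernels is not shown, and the bandwidth and feature counts are illustrative assumptions:

```python
import numpy as np

def random_fourier_features(X, n_features, gamma, seed=0):
    """Approximate k(x, y) = exp(-gamma * ||x - y||^2) by an explicit feature
    map z(.) such that z(x) @ z(y) ~ k(x, y) (Rahimi & Recht)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # The spectral density of this RBF kernel is Gaussian with std sqrt(2*gamma).
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Sanity check: the explicit features approximate the exact kernel matrix,
# so a linear model trained on Z stands in for the kernel machine.
X = np.random.default_rng(1).normal(size=(6, 10))
Z = random_fourier_features(X, n_features=5000, gamma=0.1)
K = np.exp(-0.1 * ((X[:, None] - X[None]) ** 2).sum(-1))
print(np.abs(Z @ Z.T - K).max())  # small approximation error
```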
Deep Belief Network Training Improvement Using Elite Samples Minimizing
Free Energy | cs.LG cs.CV | Nowadays it is very popular to use deep architectures in machine learning.
Deep Belief Networks (DBNs) are deep architectures that use a stack of
Restricted Boltzmann Machines (RBMs) to create a powerful generative model from
training data. In this paper we present an improvement to a method commonly
used in training RBMs. The new method uses free energy as a criterion to obtain
elite samples from the generative model. We argue that these samples can more
accurately compute the gradient of the log probability of the training data.
According to the results, an error rate of 0.99% was achieved on the MNIST test
set. This result shows that the proposed method outperforms the method
presented in the first paper introducing DBNs (1.25% error rate) and general
classification methods such as SVM (1.4% error rate) and KNN (1.6% error rate).
In another test using the ISOLET dataset, the letter classification error
dropped to 3.59%, compared to the 5.59% error rate reported in previous work on
this dataset. The implemented method is available online at
"http://ceit.aut.ac.ir/~keyvanrad/DeeBNet Toolbox.html".
| Mohammad Ali Keyvanrad, Mohammad Mehdi Homayounpour | 10.1142/S0218001415510064 | 1411.4046 | null | null |
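The elite-sample criterion above is easy to state in code: rank negative samples by RBM free energy and keep the lowest-energy (most probable) ones. A hedged sketch for a binary RBM, where parameter shapes are assumptions and the full contrastive-divergence training loop is omitted:

```python
import numpy as np

def free_energy(v, W, b, c):
    """Free energy of a binary RBM: F(v) = -v.b - sum_j softplus(c_j + v.W_:j).
    Lower free energy <=> higher unnormalized probability under the model.
    Shapes: v (n, d_v), W (d_v, d_h), b (d_v,), c (d_h,)."""
    return -(v @ b) - np.logaddexp(0, v @ W + c).sum(axis=-1)

def elite_samples(neg_samples, W, b, c, k):
    # Keep the k model samples the RBM itself rates most probable; these
    # "elite" samples are then used to estimate the negative phase of the
    # log-likelihood gradient.
    F = free_energy(neg_samples, W, b, c)
    return neg_samples[np.argsort(F)[:k]]
```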
Dynamic Programming for Instance Annotation in Multi-instance
Multi-label Learning | stat.ML cs.LG | Labeling data for classification requires significant human effort. To reduce
labeling cost, instead of labeling every instance, a group of instances (bag)
is labeled by a single bag label. Computer algorithms are then used to infer
the label for each instance in a bag, a process referred to as instance
annotation. This task is challenging due to the ambiguity regarding the
instance labels. We propose a discriminative probabilistic model for the
instance annotation problem and introduce an expectation maximization framework
for inference, based on the maximum likelihood approach. For many probabilistic
approaches, brute-force computation of the instance label posterior probability
given its bag label is exponential in the number of instances in the bag. Our
key contribution is a dynamic programming method for computing the posterior
that is linear in the number of instances. We evaluate our methods using both
benchmark and real-world data sets, in the domains of bird song, image
annotation, and activity recognition. In many cases, the proposed framework
outperforms, sometimes significantly, the current state-of-the-art MIML
learning methods, both in instance label prediction and bag label prediction.
| Anh T. Pham, Raviv Raich, and Xiaoli Z. Fern | null | 1411.4068 | null | null |
A unified view of generative models for networks: models, methods,
opportunities, and challenges | stat.ML cs.LG cs.SI physics.soc-ph | Research on probabilistic models of networks now spans a wide variety of
fields, including physics, sociology, biology, statistics, and machine
learning. These efforts have produced a diverse ecology of models and methods.
Despite this diversity, many of these models share a common underlying
structure: pairwise interactions (edges) are generated with probability
conditional on latent vertex attributes. Differences between models generally
stem from different philosophical choices about how to learn from data or
different empirically-motivated goals. The highly interdisciplinary nature of
work on these generative models, however, has inhibited the development of a
unified view of their similarities and differences. For instance, novel
theoretical models and optimization techniques developed in machine learning
are largely unknown within the social and biological sciences, which have
instead emphasized model interpretability. Here, we describe a unified view of
generative models for networks that draws together many of these disparate
threads and highlights the fundamental similarities and differences that span
these fields. We then describe a number of opportunities and challenges for
future work that are revealed by this view.
| Abigail Z. Jacobs and Aaron Clauset | null | 1411.4070 | null | null |
Learning Multi-Relational Semantics Using Neural-Embedding Models | cs.CL cs.LG stat.ML | In this paper we present a unified framework for modeling multi-relational
representations, scoring, and learning, and conduct an empirical study of
several recent multi-relational embedding models under the framework. We
investigate the different choices of relation operators based on linear and
bilinear transformations, and also the effects of entity representations by
incorporating unsupervised vectors pre-trained on extra textual resources. Our
results show several interesting findings, enabling the design of a simple
embedding model that achieves the new state-of-the-art performance on a popular
knowledge base completion task evaluated on Freebase.
| Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, Li Deng | null | 1411.4072 | null | null |
Association Rule Based Flexible Machine Learning Module for Embedded
System Platforms like Android | cs.CY cs.HC cs.LG | The past few years have seen a tremendous growth in the popularity of
smartphones. As newer features continue to be added to smartphones to increase
their utility, their significance will only increase in the future. Combining
machine learning with mobile computing can enable smartphones to become
'intelligent' devices, a capability hitherto unseen in them. Also, the
combination of machine learning and context-aware computing can enable
smartphones to gauge users' requirements proactively, depending upon their
environment and context. Accordingly, necessary services can be provided to
users.
In this paper, we have explored the methods and applications of integrating
machine learning and context aware computing on the Android platform, to
provide higher utility to the users. To achieve this, we define a Machine
Learning (ML) module which is incorporated in the basic Android architecture.
Firstly, we have outlined two major functionalities that the ML module should
provide. Then, we have presented three architectures, each of which
incorporates the ML module at a different level in the Android architecture.
The advantages and shortcomings of each of these architectures have been
evaluated. Lastly, we have explained a few applications in which our proposed
system can be incorporated such that their functionality is improved.
| Amiraj Dhawan, Shruti Bhave, Amrita Aurora, Vishwanathan Iyer | 10.14569/IJARAI.2014.030101 | 1411.4076 | null | null |
Error Rate Bounds and Iterative Weighted Majority Voting for
Crowdsourcing | stat.ML cs.HC cs.LG math.PR math.ST stat.TH | Crowdsourcing has become an effective and popular tool for human-powered
computation to label large datasets. Since the workers can be unreliable, it is
common in crowdsourcing to assign multiple workers to one task, and to
aggregate the labels in order to obtain results of high quality. In this paper,
we provide finite-sample exponential bounds on the error rate (in probability
and in expectation) of general aggregation rules under the Dawid-Skene
crowdsourcing model. The bounds are derived for multi-class labeling, and can
be used to analyze many aggregation methods, including majority voting,
weighted majority voting and the oracle Maximum A Posteriori (MAP) rule. We
show that the oracle MAP rule approximately optimizes our upper bound on the
mean error rate of weighted majority voting in certain settings. We propose an
iterative weighted majority voting (IWMV) method that optimizes the error rate
bound and approximates the oracle MAP rule. Its one step version has a provable
theoretical guarantee on the error rate. The IWMV method is intuitive and
computationally simple. Experimental results on simulated and real data show
that IWMV performs at least on par with the state-of-the-art methods, and it
has a much lower computational cost (around one hundred times faster) than the
state-of-the-art methods.
| Hongwei Li and Bin Yu | null | 1411.4086 | null | null |
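A hedged sketch of the iterative weighted majority voting idea, restricted to the binary case with labels in {+1, -1} and 0 for unanswered tasks; the log-odds weights below follow the oracle-MAP form for symmetric worker accuracies and are an illustrative choice, not the paper's exact multi-class update:

```python
import numpy as np

def iwmv(L, n_iter=20, eps=1e-6):
    """L is a (workers x tasks) matrix with entries in {+1, -1, 0}."""
    answered = L != 0
    y = np.sign(L.sum(axis=0))                 # initialize with majority vote
    y[y == 0] = 1                              # break ties arbitrarily
    for _ in range(n_iter):
        # Estimate each worker's accuracy against the current label estimate.
        acc = ((L == y) & answered).sum(1) / np.maximum(answered.sum(1), 1)
        acc = np.clip(acc, eps, 1 - eps)
        v = np.log(acc / (1 - acc))            # log-odds weight per worker
        y_new = np.sign(v @ L)                 # weighted majority vote
        y_new[y_new == 0] = 1
        if np.array_equal(y_new, y):
            break
        y = y_new
    return y
```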
Deep Deconvolutional Networks for Scene Parsing | stat.ML cs.CV cs.LG | Scene parsing is an important and challenging problem in computer vision.
It requires labeling each pixel in an image with the category it belongs to.
Traditionally, it has been approached with hand-engineered features from
color information in images. Recently convolutional neural networks (CNNs),
which automatically learn hierarchies of features, have achieved record
performance on the task. These approaches typically include a post-processing
technique, such as superpixels, to produce the final labeling. In this paper,
we propose a novel network architecture that combines deep deconvolutional
neural networks with CNNs. Our experiments show that deconvolutional neural
networks are capable of learning higher order image structure beyond edge
primitives in comparison to CNNs. The new network architecture is employed for
multi-patch training, introduced as part of this work. Multi-patch training
makes it possible to effectively learn spatial priors from scenes. The proposed
approach yields state-of-the-art performance on four scene parsing datasets,
namely Stanford Background, SIFT Flow, CamVid, and KITTI. In addition, our
system has the added advantage of having a training system that can be
completely automated end-to-end without requiring any post-processing.
| Rahul Mohan | null | 1411.4101 | null | null |
Anisotropic Agglomerative Adaptive Mean-Shift | cs.CV cs.LG | Mean Shift is widely used today for mode detection and clustering. The
technique, though, is challenged in practice due to assumptions of isotropicity
and homoscedasticity. We present an adaptive Mean Shift methodology that allows
for full anisotropic clustering, through unsupervised local bandwidth
selection. The bandwidth matrices evolve naturally, adapting locally through
agglomeration, and in turn guiding further agglomeration. The online
methodology is practical and effective for low-dimensional feature spaces,
preserving better detail and clustering salience. Additionally, conventional
Mean Shift either critically depends on a per-instance choice of bandwidth, or
relies on offline methods which are inflexible and/or again data-instance
specific. The presented approach, due to its adaptive design, also alleviates
this issue, with a default form performing generally well. The methodology,
though, allows for effective tuning of results.
| Rahul Sawhney, Henrik I. Christensen and Gary R. Bradski | null | 1411.4102 | null | null |
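For reference, the fixed-bandwidth isotropic baseline the adaptive anisotropic method above improves upon is a plain Gaussian-kernel mean shift; a minimal sketch, with bandwidth and iteration counts as illustrative assumptions:

```python
import numpy as np

def mean_shift(X, bandwidth, n_iter=100, tol=1e-6):
    """Shift every point toward the kernel-weighted mean of its neighborhood;
    points converge to nearby density modes (merge nearby modes afterwards to
    form clusters). Isotropic and homoscedastic -- exactly the assumptions
    the adaptive method above relaxes."""
    modes = X.copy()
    for _ in range(n_iter):
        d2 = ((modes[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        w = np.exp(-0.5 * d2 / bandwidth ** 2)
        new = (w @ X) / w.sum(axis=1, keepdims=True)
        if np.abs(new - modes).max() < tol:
            return new
        modes = new
    return modes
```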
Definition of Visual Speech Element and Research on a Method of
Extracting Feature Vector for Korean Lip-Reading | cs.CL cs.CV cs.LG | In this paper, we defined the viseme (visual speech element) and described
a method for extracting visual feature vectors. We defined 10 visemes based on
vowels by analyzing Korean utterances, and proposed a method for extracting a
20-dimensional visual feature vector that combines static and dynamic features.
Lastly, we conducted a word recognition experiment based on a 3-viseme HMM and
evaluated its efficiency.
| Ha Jong Won, Li Gwang Chol, Kim Hyok Chol, Li Kum Song (College of
Computer Science, Kim Il Sung University) | null | 1411.4114 | null | null |
Investigating the Role of Prior Disambiguation in Deep-learning
Compositional Models of Meaning | cs.CL cs.LG cs.NE | This paper aims to explore the effect of prior disambiguation on neural
network-based compositional models, with the hope that better semantic
representations for text compounds can be produced. We disambiguate the input
word vectors before they are fed into a compositional deep net. A series of
evaluations shows the positive effect of prior disambiguation for such deep
models.
| Jianpeng Cheng, Dimitri Kartsaklis, Edward Grefenstette | null | 1411.4116 | null | null |
Revisiting Kernelized Locality-Sensitive Hashing for Improved
Large-Scale Image Retrieval | cs.CV cs.LG stat.ML | We present a simple but powerful reinterpretation of kernelized
locality-sensitive hashing (KLSH), a general and popular method developed in
the vision community for performing approximate nearest-neighbor searches in an
arbitrary reproducing kernel Hilbert space (RKHS). Our new perspective is based
on viewing the steps of the KLSH algorithm in an appropriately projected space,
and has several key theoretical and practical benefits. First, it eliminates
the problematic conceptual difficulties that are present in the existing
motivation of KLSH. Second, it yields the first formal retrieval performance
bounds for KLSH. Third, our analysis reveals two techniques for boosting the
empirical performance of KLSH. We evaluate these extensions on several
large-scale benchmark image retrieval data sets, and show that our analysis
leads to improved recall performance of at least 12%, and sometimes much
higher, over the standard KLSH method.
| Ke Jiang, Qichao Que, Brian Kulis | null | 1411.4199 | null | null |
HIPAD - A Hybrid Interior-Point Alternating Direction algorithm for
knowledge-based SVM and feature selection | stat.ML cs.LG | We consider classification tasks in the regime of scarce labeled training
data in high dimensional feature space, where specific expert knowledge is also
available. We propose a new hybrid optimization algorithm that solves the
elastic-net support vector machine (SVM) through an alternating direction
method of multipliers in the first phase, followed by an interior-point method
for the classical SVM in the second phase. Both SVM formulations are adapted to
knowledge incorporation. Our proposed algorithm addresses the challenges of
automatic feature selection, high optimization accuracy, and algorithmic
flexibility for taking advantage of prior knowledge. We demonstrate the
effectiveness and efficiency of our algorithm and compare it with existing
methods on a collection of synthetic and real-world data.
| Zhiwei Qin, Xiaocheng Tang, Ioannis Akrotirianakis, Amit Chakraborty | null | 1411.4286 | null | null |
Influence Functions for Machine Learning: Nonparametric Estimators for
Entropies, Divergences and Mutual Informations | stat.ML cs.AI cs.LG | We propose and analyze estimators for statistical functionals of one or more
distributions under nonparametric assumptions. Our estimators are based on the
theory of influence functions, which appear in the semiparametric statistics
literature. We show that estimators based either on data-splitting or a
leave-one-out technique enjoy fast rates of convergence and other favorable
theoretical properties. We apply this framework to derive estimators for
several popular information theoretic quantities, and via empirical evaluation,
show the advantage of this approach over existing estimators.
| Kirthevasan Kandasamy, Akshay Krishnamurthy, Barnabas Poczos, Larry
Wasserman, James M. Robins | null | 1411.4342 | null | null |
Errata: Distant Supervision for Relation Extraction with Matrix
Completion | cs.CL cs.LG | The essence of distantly supervised relation extraction is that it is an
incomplete multi-label classification problem with sparse and noisy features.
To tackle the sparsity and noise challenges, we propose solving the
classification problem using matrix completion on factorized matrix of
minimized rank. We formulate relation classification as completing the unknown
labels of testing items (entity pairs) in a sparse matrix that concatenates
training and testing textual features with training labels. Our algorithmic
framework is based on the assumption that the rank of item-by-feature and
item-by-label joint matrix is low. We apply two optimization models to recover
the underlying low-rank matrix leveraging the sparsity of feature-label matrix.
The matrix completion problem is then solved by the fixed point continuation
(FPC) algorithm, which can find the global optimum. Experiments on two widely
used datasets with different dimensions of textual features demonstrate that
our low-rank matrix completion approach significantly outperforms the baseline
and the state-of-the-art methods.
| Miao Fan, Deli Zhao, Qiang Zhou, Zhiyuan Liu, Thomas Fang Zheng,
Edward Y. Chang | null | 1411.4455 | null | null |
Joint cross-domain classification and subspace learning for unsupervised
adaptation | cs.CV cs.LG | Domain adaptation aims at adapting the knowledge acquired on a source domain
to a new different but related target domain. Several approaches have
been proposed for classification tasks in the unsupervised scenario, where no
labeled target data are available. Most of the attention has been dedicated to
searching a new domain-invariant representation, leaving the definition of the
prediction function to a second stage. Here we propose to learn both jointly.
Specifically we learn the source subspace that best matches the target subspace
while at the same time minimizing a regularized misclassification loss. We
provide an alternating optimization technique based on stochastic sub-gradient
descent to solve the learning problem and we demonstrate its performance on
several domain adaptation tasks.
| Basura Fernando and Tatiana Tommasi and Tinne Tuytelaars | null | 1411.4491 | null | null |
Outlier-Robust Convex Segmentation | cs.LG stat.ML | We derive a convex optimization problem for the task of segmenting sequential
data, which explicitly treats presence of outliers. We describe two algorithms
for solving this problem, one exact and one a novel top-down approach, and we
derive a consistency result for the case of two segments and no outliers.
Robustness to outliers is evaluated on two real-world tasks related to speech
segmentation. Our algorithms outperform baseline segmentation algorithms.
| Itamar Katz and Koby Crammer | null | 1411.4503 | null | null |
Parallel Gaussian Process Regression for Big Data: Low-Rank
Representation Meets Markov Approximation | stat.ML cs.DC cs.LG | The expressive power of a Gaussian process (GP) model comes at a cost of poor
scalability in the data size. To improve its scalability, this paper presents a
low-rank-cum-Markov approximation (LMA) of the GP model that is novel in
leveraging the dual computational advantages stemming from complementing a
low-rank approximate representation of the full-rank GP based on a support set
of inputs with a Markov approximation of the resulting residual process; the
latter approximation is guaranteed to be closest in the Kullback-Leibler
distance criterion subject to some constraint and is considerably more refined
than that of existing sparse GP models utilizing low-rank representations due
to its more relaxed conditional independence assumption (especially with larger
data). As a result, our LMA method can trade off between the size of the
support set and the order of the Markov property to (a) incur lower
computational cost than such sparse GP models while achieving predictive
performance comparable to them and (b) accurately represent features/patterns
of any scale. Interestingly, varying the Markov order produces a spectrum of
LMAs with PIC approximation and full-rank GP at the two extremes. An advantage
of our LMA method is that it is amenable to parallelization on multiple
machines/cores, thereby gaining greater scalability. Empirical evaluation on
three real-world datasets in clusters of up to 32 computing nodes shows that
our centralized and parallel LMA methods are significantly more time-efficient
and scalable than state-of-the-art sparse and full-rank GP regression methods
while achieving comparable predictive performances.
| Kian Hsiang Low, Jiangbo Yu, Jie Chen, Patrick Jaillet | null | 1411.4510 | null | null |
Implicitly Constrained Semi-Supervised Linear Discriminant Analysis | stat.ML cs.LG | Semi-supervised learning is an important and active topic of research in
pattern recognition. For classification using linear discriminant analysis
specifically, several semi-supervised variants have been proposed. Using any
one of these methods is not guaranteed to outperform the supervised classifier
which does not take the additional unlabeled data into account. In this work we
compare traditional Expectation Maximization type approaches for
semi-supervised linear discriminant analysis with approaches based on intrinsic
constraints and propose a new principled approach for semi-supervised linear
discriminant analysis, using so-called implicit constraints. We explore the
relationships between these methods and consider the question if and in what
sense we can expect improvement in performance over the supervised procedure.
The constraint based approaches are more robust to misspecification of the
model, and may outperform alternatives that make more assumptions on the data,
in terms of the log-likelihood of unseen objects.
| Jesse H. Krijthe and Marco Loog | null | 1411.4521 | null | null |
Cross-Modal Similarity Learning : A Low Rank Bilinear Formulation | cs.MM cs.IR cs.LG | The cross-media retrieval problem has received much attention in recent years
due to the rapid increasing of multimedia data on the Internet. A new approach
to the problem has been raised which intends to match features of different
modalities directly. In this research, there are two critical issues: how to
get rid of the heterogeneity between different modalities and how to match the
cross-modal features of different dimensions. Recently metric learning methods
show a good capability in learning a distance metric to explore the
relationship between data points. However, the traditional metric learning
algorithms only focus on single-modal features, which suffer difficulties in
addressing the cross-modal features of different dimensions. In this paper, we
propose a cross-modal similarity learning algorithm for the cross-modal feature
matching. The proposed method takes a bilinear formulation, and with the
nuclear-norm penalization, it achieves low-rank representation. Accordingly,
the accelerated proximal gradient algorithm is successfully applied to find
the optimal solution with a fast convergence rate of $O(1/t^2)$. Experiments on
three well-known image-text cross-media retrieval databases show that the
proposed method achieves the best performance compared to the state-of-the-art
algorithms.
| Cuicui Kang, Shengcai Liao, Yonghao He, Jian Wang, Wenjia Niu, Shiming
Xiang, Chunhong Pan | null | 1411.4738 | null | null |
Nonnegative Tensor Factorization for Directional Blind Audio Source
Separation | stat.ML cs.LG | We augment the nonnegative matrix factorization method for audio source
separation with cues about directionality of sound propagation. This improves
separation quality greatly and removes the need for training data, with only a
twofold increase in run time. This is the first method which can exploit
directional information from microphone arrays much smaller than the wavelength
of sound, working both in simulation and in practice on millimeter-scale
microphone arrays.
| Noah D. Stein | null | 1411.5010 | null | null |
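The factorization backbone of the method above is standard NMF; a minimal multiplicative-update sketch minimizing the Frobenius objective (the directional cues from the microphone array, which are the paper's contribution, are not modeled here):

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H with V, W, H >= 0.
    For audio, V would be a magnitude spectrogram and the columns of W
    spectral templates."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update templates
    return W, H
```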
Music Data Analysis: A State-of-the-art Survey | cs.DB cs.LG cs.SD | Music accounts for a significant share of interest among various online
activities. This is reflected by the wide array of alternatives offered in
music-related web/mobile apps and information portals, featuring millions of
artists, songs and events, and attracting user activity at a similar scale. The
availability of large-scale structured and unstructured data has attracted a
similar level of attention from the data science community. This paper attempts
to offer the current state-of-the-art in music-related analysis. Various
approaches involving
machine learning, information theory, social network analysis, semantic web and
linked open data are represented in the form of taxonomy along with data
sources and use cases addressed by the research community.
| Shubhanshu Gupta | null | 1411.5014 | null | null |
Learning nonparametric differential equations with operator-valued
kernels and gradient matching | cs.LG stat.ML | Modeling dynamical systems with ordinary differential equations implies a
mechanistic view of the process underlying the dynamics. However in many cases,
this knowledge is not available. To overcome this issue, we introduce a general
framework for nonparametric ODE models using penalized regression in
Reproducing Kernel Hilbert Spaces (RKHS) based on operator-valued kernels.
Moreover, we extend the scope of gradient matching approaches to nonparametric
ODE models. A smooth estimate of the ODE solution is built to provide an
approximation of its derivative, which is in turn used to learn the
nonparametric ODE model. This approach benefits from the flexibility of
penalized regression in RKHS, allowing for ridge or (structured) sparse
regression as well. Very good results are shown on three different ODE systems.
| Markus Heinonen, Florence d'Alch\'e-Buc | null | 1411.5172 | null | null |
Large-Margin Classification with Multiple Decision Rules | stat.ML cs.LG | Binary classification is a common statistical learning problem in which a
model is estimated on a set of covariates for some outcome indicating the
membership of one of two classes. In the literature, there exists a distinction
between hard and soft classification. In soft classification, the conditional
class probability is modeled as a function of the covariates. In contrast, hard
classification methods only target the optimal prediction boundary. While hard
and soft classification methods have been studied extensively, not much work
has been done to compare the actual tasks of hard and soft classification. In
this paper we propose a spectrum of statistical learning problems which span
the hard and soft classification tasks based on fitting multiple decision rules
to the data. By doing so, we reveal a novel collection of learning tasks of
increasing complexity. We study the problems using the framework of
large-margin classifiers and a class of piecewise linear convex surrogates, for
which we derive statistical properties and a corresponding sub-gradient descent
algorithm. We conclude by applying our approach to simulation settings and a
magnetic resonance imaging (MRI) dataset from the Alzheimer's Disease
Neuroimaging Initiative (ADNI) study.
| Patrick K. Kimes, D. Neil Hayes, J. S. Marron and Yufeng Liu | null | 1411.5260 | null | null |
ConceptLearner: Discovering Visual Concepts from Weakly Labeled Image
Collections | cs.CV cs.AI cs.LG | Discovering visual knowledge from weakly labeled data is crucial to scale up
computer vision recognition systems, since it is expensive to obtain fully
labeled data for a large number of concept categories. In this paper, we
propose ConceptLearner, which is a scalable approach to discover visual
concepts from weakly labeled image collections. Thousands of visual concept
detectors are learned automatically, without a human in the loop for additional
annotation. We show that these learned detectors could be applied to recognize
concepts at image-level and to detect concepts at image region-level
accurately. Under domain-specific supervision, we further evaluate the learned
concepts for scene recognition on SUN database and for object detection on
Pascal VOC 2007. ConceptLearner shows promising performance compared to fully
supervised and weakly supervised methods.
| Bolei Zhou, Vignesh Jagadeesh, Robinson Piramuthu | null | 1411.5328 | null | null |
Unification of field theory and maximum entropy methods for learning
probability densities | physics.data-an cs.LG q-bio.QM stat.ML | The need to estimate smooth probability distributions (a.k.a. probability
densities) from finite sampled data is ubiquitous in science. Many approaches
to this problem have been described, but none is yet regarded as providing a
definitive solution. Maximum entropy estimation and Bayesian field theory are
two such approaches. Both have origins in statistical physics, but the
relationship between them has remained unclear. Here I unify these two methods
by showing that every maximum entropy density estimate can be recovered in the
infinite smoothness limit of an appropriate Bayesian field theory. I also show
that Bayesian field theory estimation can be performed without imposing any
boundary conditions on candidate densities, and that the infinite smoothness
limit of these theories recovers the most common types of maximum entropy
estimates. Bayesian field theory is thus seen to provide a natural test of the
validity of the maximum entropy null hypothesis. Bayesian field theory also
returns a lower entropy density estimate when the maximum entropy hypothesis is
falsified. The computations necessary for this approach can be performed
rapidly for one-dimensional data, and software for doing this is provided.
Based on these results, I argue that Bayesian field theory is poised to provide
a definitive solution to the density estimation problem in one dimension.
| Justin B. Kinney | 10.1103/PhysRevE.92.032107 | 1411.5371 | null | null |
Stochastic Block Transition Models for Dynamic Networks | cs.SI cs.LG physics.soc-ph stat.ME | There has been great interest in recent years on statistical models for
dynamic networks. In this paper, I propose a stochastic block transition model
(SBTM) for dynamic networks that is inspired by the well-known stochastic block
model (SBM) for static networks and previous dynamic extensions of the SBM.
Unlike most existing dynamic network models, it does not make a hidden Markov
assumption on the edge-level dynamics, allowing the presence or absence of
edges to directly influence future edge probabilities while retaining the
interpretability of the SBM. I derive an approximate inference procedure for
the SBTM and demonstrate that it is significantly better at reproducing
durations of edges in real social network data.
| Kevin S. Xu | null | 1411.5404 | null | null |
Private Empirical Risk Minimization Beyond the Worst Case: The Effect of
the Constraint Set Geometry | cs.LG cs.CR stat.ML | Empirical Risk Minimization (ERM) is a standard technique in machine
learning, where a model is selected by minimizing a loss function over a
constraint set. When the training dataset consists of private information, it
is natural to use a differentially private ERM algorithm, and this problem has
been the subject of a long line of work started with Chaudhuri and Monteleoni
2008. A private ERM algorithm outputs an approximate minimizer of the loss
function and its error can be measured as the difference from the optimal value
of the loss function. When the constraint set is arbitrary, the required error
bounds are fairly well understood (Bassily, Smith and Thakurta 2014). In this work, we show
that the geometric properties of the constraint set can be used to derive
significantly better results. Specifically, we show that a differentially
private version of Mirror Descent leads to error bounds of the form
$\tilde{O}(G_{\mathcal{C}}/n)$ for a Lipschitz loss function, improving on the
$\tilde{O}(\sqrt{p}/n)$ bounds in Bassily, Smith and Thakurta 2014. Here $p$ is
the dimensionality of the problem, $n$ is the number of data points in the
training set, and $G_{\mathcal{C}}$ denotes the Gaussian width of the
constraint set that we optimize over. We show similar improvements for strongly
convex functions, and for smooth functions. In addition, we show that when the
loss function is Lipschitz with respect to the $\ell_1$ norm and $\mathcal{C}$
is $\ell_1$-bounded, a differentially private version of the Frank-Wolfe
algorithm gives error bounds of the form $\tilde{O}(n^{-2/3})$. This captures
the important and common case of sparse linear regression (LASSO), when the
data $x_i$ satisfies $|x_i|_{\infty} \leq 1$ and we optimize over the $\ell_1$
ball. We show new lower bounds for this setting, that together with known
bounds, imply that all our upper bounds are tight.
| Kunal Talwar, Abhradeep Thakurta, Li Zhang | null | 1411.5417 | null | null |
Differentially Private Algorithms for Empirical Machine Learning | cs.LG | An important use of private data is to build machine learning classifiers.
While there is a burgeoning literature on differentially private classification
algorithms, we find that they are not practical in real applications due to two
reasons. First, existing differentially private classifiers provide poor
accuracy on real world datasets. Second, there is no known differentially
private algorithm for empirically evaluating the private classifier on a
private test dataset.
In this paper, we develop differentially private algorithms that mirror real
world empirical machine learning workflows. We consider the private classifier
training algorithm as a blackbox. We present private algorithms for selecting
features that are input to the classifier. Though adding a preprocessing step
takes away some of the privacy budget from the actual classification process
(thus potentially making it noisier and less accurate), we show that our novel
preprocessing techniques significantly increase classifier accuracy on three
real-world datasets. We also present the first private algorithms for
empirically constructing receiver operating characteristic (ROC) curves on a
private test set.
| Ben Stoddard and Yan Chen and Ashwin Machanavajjhala | null | 1411.5428 | null | null |
Linking GloVe with word2vec | cs.CL cs.LG stat.ML | The Global Vectors for word representation (GloVe), introduced by Jeffrey
Pennington et al. is reported to be an efficient and effective method for
learning vector representations of words. State-of-the-art performance is also
provided by skip-gram with negative-sampling (SGNS) implemented in the word2vec
tool. In this note, we explain the similarities between the training objectives
of the two models, and show that the objective of SGNS is similar to the
objective of a specialized form of GloVe, though their cost functions are
defined differently.
| Tianze Shi, Zhiyuan Liu | null | 1411.5595 | null | null |
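For readers comparing the two objectives discussed in the note above, the standard forms are as follows (notation follows Pennington et al. for GloVe and Mikolov et al. for SGNS):

```latex
% GloVe: weighted least squares on the log co-occurrence counts X_{ij} (minimized)
J_{\mathrm{GloVe}} \;=\; \sum_{i,j=1}^{V} f(X_{ij})
  \left( w_i^\top \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2

% SGNS: per (word, context) pair with k negative samples from the noise
% distribution P_n (maximized); summed over the corpus, each pair is
% effectively weighted by its count X_{ij}
\ell_{\mathrm{SGNS}} \;=\; \sum_{i,j} X_{ij} \Big( \log \sigma(w_i^\top \tilde{w}_j)
  + k\, \mathbb{E}_{j' \sim P_n}\!\left[ \log \sigma(-w_i^\top \tilde{w}_{j'}) \right] \Big)
```

Viewed this way, both models fit inner products of word and context vectors to weighted, shifted co-occurrence statistics, which is the similarity the note makes precise.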
No-Regret Learnability for Piecewise Linear Losses | cs.LG | In the convex optimization approach to online regret minimization, many
methods have been developed to guarantee a $O(\sqrt{T})$ bound on regret for
subdifferentiable convex loss functions with bounded subgradients, by using a
reduction to linear loss functions. This suggests that linear loss functions
tend to be the hardest ones to learn against, regardless of the underlying
decision spaces. We investigate this question in a systematic fashion looking
at the interplay between the set of possible moves for both the decision maker
and the adversarial environment. This allows us to highlight sharp distinctive
behaviors about the learnability of piecewise linear loss functions. On the one
hand, when the decision set of the decision maker is a polyhedron, we establish
$\Omega(\sqrt{T})$ lower bounds on regret for a large class of piecewise linear
loss functions with important applications in online linear optimization,
repeated zero-sum Stackelberg games, online prediction with side information,
and online two-stage optimization. On the other hand, we exhibit $o(\sqrt{T})$
learning rates, achieved by the Follow-The-Leader algorithm, in online linear
optimization when the boundary of the decision maker's decision set is curved
and when $0$ does not lie in the convex hull of the environment's decision set.
Hence, the curvature of the decision maker's decision set is a determining
factor for the optimal learning rate. These results hold in a completely
adversarial setting.
| Arthur Flajolet, Patrick Jaillet | null | 1411.5649 | null | null |
A Joint Probabilistic Classification Model of Relevant and Irrelevant
Sentences in Mathematical Word Problems | cs.CL cs.IR cs.LG stat.ML | Estimating the difficulty level of math word problems is an important task
for many educational applications. Identification of relevant and irrelevant
sentences in math word problems is an important step for calculating the
difficulty levels of such problems. This paper addresses a novel application of
text categorization to identify two types of sentences in mathematical word
problems, namely relevant and irrelevant sentences. A novel joint probabilistic
classification model is proposed to estimate the joint probability of
classification decisions for all sentences of a math word problem by utilizing
the correlation among all sentences along with the correlation between the
question sentence and other sentences, and sentence text. The proposed model is
compared with i) a SVM classifier which makes independent classification
decisions for individual sentences by only using the sentence text and ii) a
novel SVM classifier that considers the correlation between the question
sentence and other sentences along with the sentence text. An extensive set of
experiments demonstrates the effectiveness of the joint probabilistic
classification model for identifying relevant and irrelevant sentences as well
as the novel SVM classifier that utilizes the correlation between the question
sentence and other sentences. Furthermore, empirical results and analysis show
that i) it is highly beneficial not to remove stopwords and ii) utilizing part
of speech tagging does not make a significant improvement although it has been
shown to be effective for the related task of math word problem type
classification.
| Suleyman Cetintas, Luo Si, Yan Ping Xin, Dake Zhang, Joo Young Park,
Ron Tzur | null | 1411.5732 | null | null |
Fuzzy Adaptive Resonance Theory, Diffusion Maps and their applications
to Clustering and Biclustering | cs.NE cs.LG | In this paper, we describe an algorithm FARDiff (Fuzzy Adaptive Resonance
Diffusion) which combines Diffusion Maps and Fuzzy Adaptive Resonance Theory
to do clustering on high dimensional data. We describe some applications of
this method and some problems for future research.
| S. B. Damelin, Y. Gu, D. C. Wunsch II, R. Xu | null | 1411.5737 | null | null |
Randomized Dual Coordinate Ascent with Arbitrary Sampling | math.OC cs.LG cs.NA math.NA | We study the problem of minimizing the average of a large number of smooth
convex functions penalized with a strongly convex regularizer. We propose and
analyze a novel primal-dual method (Quartz) which at every iteration samples
and updates a random subset of the dual variables, chosen according to an
arbitrary distribution. In contrast to typical analysis, we directly bound the
decrease of the primal-dual error (in expectation), without the need to first
analyze the dual error. Depending on the choice of the sampling, we obtain
efficient serial, parallel and distributed variants of the method. In the
serial case, our bounds match the best known bounds for SDCA (both with uniform
and importance sampling). With standard mini-batching, our bounds predict
initial data-independent speedup as well as additional data-driven speedup
which depends on spectral and sparsity properties of the data. We calculate
theoretical speedup factors and find that they are excellent predictors of
actual speedup in practice. Moreover, we illustrate that it is possible to
design an efficient mini-batch importance sampling. The distributed variant of
Quartz is the first distributed SDCA-like method with an analysis for
non-separable data.
| Zheng Qu and Peter Richt\'arik and Tong Zhang | null | 1411.5873 | null | null |
Falling Rule Lists | cs.AI cs.LG | Falling rule lists are classification models consisting of an ordered list of
if-then rules, where (i) the order of rules determines which example should be
classified by each rule, and (ii) the estimated probability of success
decreases monotonically down the list. These kinds of rule lists are inspired
by healthcare applications where patients would be stratified into risk sets
and the highest at-risk patients should be considered first. We provide a
Bayesian framework for learning falling rule lists that does not rely on
traditional greedy decision tree learning methods.
| Fulton Wang, Cynthia Rudin | null | 1411.5899 | null | null |
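Prediction with a falling rule list is simple enough to sketch directly: take the first rule whose condition matches, with estimated risks non-increasing down the list. The rules and probabilities below are hypothetical; learning the list (the paper's Bayesian framework) is the hard part and is not shown:

```python
def predict(x, rule_list, default_prob):
    """rule_list: ordered (predicate, risk) pairs with non-increasing risk.
    Returns the risk estimate of the first matching rule."""
    for predicate, risk in rule_list:
        if predicate(x):
            return risk
    return default_prob

# Hypothetical stratification of patients into falling risk sets:
rules = [
    (lambda x: x["chest_pain"] and x["age"] > 60, 0.55),
    (lambda x: x["chest_pain"],                   0.35),
    (lambda x: x["age"] > 60,                     0.20),
]
print(predict({"chest_pain": False, "age": 70}, rules, default_prob=0.05))  # 0.2
```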
Understanding image representations by measuring their equivariance and
equivalence | cs.CV cs.LG cs.NE | Despite the importance of image representations such as histograms of
oriented gradients and deep Convolutional Neural Networks (CNN), our
theoretical understanding of them remains limited. Aiming at filling this gap,
we investigate three key mathematical properties of representations:
equivariance, invariance, and equivalence. Equivariance studies how
transformations of the input image are encoded by the representation,
invariance being a special case where a transformation has no effect.
Equivalence studies whether two representations, for example two different
parametrisations of a CNN, capture the same visual information or not. A number
of methods to establish these properties empirically are proposed, including
introducing transformation and stitching layers in CNNs. These methods are then
applied to popular representations to reveal insightful aspects of their
structure, including clarifying at which layers in a CNN certain geometric
invariances are achieved. While the focus of the paper is theoretical, direct
applications to structured-output regression are demonstrated too.
| Karel Lenc, Andrea Vedaldi | null | 1411.5908 | null | null |
Learning to Generate Chairs, Tables and Cars with Convolutional Networks | cs.CV cs.LG cs.NE | We train generative 'up-convolutional' neural networks which are able to
generate images of objects given object style, viewpoint, and color. We train
the networks on rendered 3D models of chairs, tables, and cars. Our experiments
show that the networks do not merely learn all images by heart, but rather find
a meaningful representation of 3D models allowing them to assess the similarity
of different models, interpolate between given views to generate the missing
ones, extrapolate views, and invent new objects not present in the training set
by recombining training instances, or even two different object classes.
Moreover, we show that such generative networks can be used to find
correspondences between different objects from the dataset, outperforming
existing approaches on this task.
| Alexey Dosovitskiy, Jost Tobias Springenberg, Maxim Tatarchenko and
Thomas Brox | null | 1411.5928 | null | null |
On the Impossibility of Convex Inference in Human Computation | stat.ML cs.HC cs.LG | Human computation or crowdsourcing involves joint inference of the
ground-truth-answers and the worker-abilities by optimizing an objective
function, for instance, by maximizing the data likelihood based on an assumed
underlying model. A variety of methods have been proposed in the literature to
address this inference problem. As far as we know, none of the objective
functions in existing methods is convex. In machine learning and applied
statistics, a convex function such as the objective function of support vector
machines (SVMs) is generally preferred, since it can leverage the
high-performance algorithms and rigorous guarantees established in the
extensive literature on convex optimization. One may thus wonder if there
exists a meaningful convex objective function for the inference problem in
human computation. In this paper, we investigate this convexity issue for human
computation. We take an axiomatic approach by formulating a set of axioms that
impose two mild and natural assumptions on the objective function for the
inference. Under these axioms, we show that it is unfortunately impossible to
ensure convexity of the inference problem. On the other hand, we show that
interestingly, in the absence of a requirement to model "spammers", one can
construct reasonable objective functions for crowdsourcing that guarantee
convex inference.
| Nihar B. Shah and Dengyong Zhou | null | 1411.5977 | null | null |
Clustering evolving data using kernel-based methods | cs.SI cs.LG stat.ML | In this thesis, we propose several modelling strategies to tackle evolving
data in different contexts. In the framework of static clustering, we start by
introducing a soft kernel spectral clustering (SKSC) algorithm, which can
better deal with overlapping clusters with respect to kernel spectral
clustering (KSC) and provides more interpretable outcomes. Afterwards, a whole
strategy based upon KSC for community detection of static networks is proposed,
where the extraction of a high quality training sub-graph, the choice of the
kernel function, the model selection and the applicability to large-scale data
are key aspects. This paves the way for the development of a novel clustering
algorithm for the analysis of evolving networks called kernel spectral
clustering with memory effect (MKSC), where the temporal smoothness between
clustering results in successive time steps is incorporated at the level of the
primal optimization problem, by properly modifying the KSC formulation. Later
on, an application of KSC to fault detection of an industrial machine is
presented. Here, a smart pre-processing of the data by means of a proper
windowing operation is necessary to catch the ongoing degradation process
affecting the machine. In this way, in a genuinely unsupervised manner, it is
possible to raise an early warning when necessary, in an online fashion.
Finally, we propose a new algorithm called incremental kernel spectral
clustering (IKSC) for online learning of non-stationary data. This ambitious
challenge is faced by taking advantage of the out-of-sample property of kernel
spectral clustering (KSC) to adapt the initial model, in order to tackle
merging, splitting or drifting of clusters across time. Real-world applications
considered in this thesis include image segmentation, time-series clustering,
community detection of static and evolving networks.
| Rocco Langone | null | 1411.5988 | null | null |
PU Learning for Matrix Completion | cs.LG cs.NA stat.ML | In this paper, we consider the matrix completion problem when the
observations are one-bit measurements of some underlying matrix M, and in
particular the observed samples consist only of ones and no zeros. This problem
is motivated by modern applications such as recommender systems and social
networks where only "likes" or "friendships" are observed. The problem of
learning from only positive and unlabeled examples, called PU
(positive-unlabeled) learning, has been studied in the context of binary
classification. We consider the PU matrix completion problem, where an
underlying real-valued matrix M is first quantized to generate one-bit
observations and then a subset of positive entries is revealed. Under the
assumption that M has bounded nuclear norm, we provide recovery guarantees for
two different observation models: 1) M parameterizes a distribution that
generates a binary matrix, 2) M is thresholded to obtain a binary matrix. For
the first case, we propose a "shifted matrix completion" method that recovers M
using only a subset of indices corresponding to ones, while for the second
case, we propose a "biased matrix completion" method that recovers the
(thresholded) binary matrix. Both methods yield strong error bounds: if M is
n by n, the Frobenius error is bounded as O(1/((1-rho)n)), where 1-rho denotes
the fraction of ones observed. This implies a sample complexity of O(n log n)
ones to achieve a small error, when M is dense and n is large. We extend our
methods and guarantees to the inductive matrix completion problem, where rows
and columns of M have associated features. We provide efficient and scalable
optimization procedures for both the methods and demonstrate the effectiveness
of the proposed methods for link prediction (on real-world networks consisting
of over 2 million nodes and 90 million links) and semi-supervised clustering
tasks.
| Cho-Jui Hsieh and Nagarajan Natarajan and Inderjit S. Dhillon | null | 1411.6081 | null | null |
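A hedged sketch of the biasing idea: treat observed ones as high-confidence entries and everything else as weakly weighted zeros, then fit a low-rank model. This uses the classic weighted ALS of Hu et al. for implicit feedback as a stand-in; it illustrates the bias toward observed positives, not the paper's exact estimator or its nuclear-norm guarantees:

```python
import numpy as np

def biased_completion(A, rank=5, alpha=20.0, lam=0.1, n_iter=15, seed=0):
    """A: binary matrix with 1 at observed positive entries, 0 elsewhere.
    Observed ones get confidence weight alpha; unobserved entries are soft
    zeros with weight 1. Returns a real-valued score matrix U @ V.T."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    U = rng.normal(scale=0.1, size=(n, rank))
    V = rng.normal(scale=0.1, size=(m, rank))
    C = np.where(A > 0, alpha, 1.0)                     # confidence weights
    I = lam * np.eye(rank)
    for _ in range(n_iter):
        for i in range(n):                              # ridge solve per row
            Wv = C[i][:, None] * V
            U[i] = np.linalg.solve(V.T @ Wv + I, Wv.T @ A[i])
        for j in range(m):                              # ridge solve per column
            Wu = C[:, j][:, None] * U
            V[j] = np.linalg.solve(U.T @ Wu + I, Wu.T @ A[:, j])
    return U @ V.T                                      # threshold for link prediction
```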
Efficiently learning Ising models on arbitrary graphs | cs.LG cs.IT math.IT stat.ML | We consider the problem of reconstructing the graph underlying an Ising model
from i.i.d. samples. Over the last fifteen years this problem has been of
significant interest in the statistics, machine learning, and statistical
physics communities, and much of the effort has been directed towards finding
algorithms with low computational cost for various restricted classes of
models. Nevertheless, for learning Ising models on general graphs with $p$
nodes of degree at most $d$, it is not known whether or not it is possible to
improve upon the $p^{d}$ computation needed to exhaustively search over all
possible neighborhoods for each node.
In this paper we show that a simple greedy procedure allows to learn the
structure of an Ising model on an arbitrary bounded-degree graph in time on the
order of $p^2$. We make no assumptions on the parameters except what is
necessary for identifiability of the model, and in particular the results hold
at low-temperatures as well as for highly non-uniform models. The proof rests
on a new structural property of Ising models: we show that for any node there
exists at least one neighbor with which it has a high mutual information. This
structural property may be of independent interest.
| Guy Bresler | null | 1411.6156 | null | null |
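The structural property above suggests a simple first screening step: every node has at least one true neighbor with high mutual information, so ranking pairs by empirical MI is informative. A sketch of that step only; the full greedy algorithm then adds and prunes candidate neighbors, which is omitted here:

```python
import numpy as np

def pairwise_mutual_information(samples):
    """samples: (n, p) array of +/-1 spins. Returns the (p, p) matrix of
    empirical mutual informations between pairs of nodes (in nats)."""
    S = (samples > 0).astype(int)            # map {-1, +1} -> {0, 1}
    n, p = S.shape
    MI = np.zeros((p, p))
    for i in range(p):
        for j in range(i + 1, p):
            mi = 0.0
            for a in (0, 1):
                for b in (0, 1):
                    pab = np.mean((S[:, i] == a) & (S[:, j] == b))
                    pa, pb = np.mean(S[:, i] == a), np.mean(S[:, j] == b)
                    if pab > 0:
                        mi += pab * np.log(pab / (pa * pb))
            MI[i, j] = MI[j, i] = mi
    return MI  # for each node, high-MI partners are candidate neighbors
```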
Characterization of the equivalence of robustification and
regularization in linear and matrix regression | math.ST cs.LG math.OC stat.ML stat.TH | The notion of developing statistical methods in machine learning which are
robust to adversarial perturbations in the underlying data has been the subject
of increasing interest in recent years. A common feature of this work is that
the adversarial robustification often corresponds exactly to regularization
methods which appear as a loss function plus a penalty. In this paper we deepen
and extend the understanding of the connection between robustification and
regularization (as achieved by penalization) in regression problems.
Specifically, (a) in the context of linear regression, we characterize
precisely under which conditions on the model of uncertainty used and on the
loss function penalties robustification and regularization are equivalent, and
(b) we extend the characterization of robustification and regularization to
matrix regression problems (matrix completion and Principal Component
Analysis).
| Dimitris Bertsimas and Martin S. Copenhaver | null | 1411.6160 | null | null |
Kickback cuts Backprop's red-tape: Biologically plausible credit
assignment in neural networks | cs.LG cs.NE q-bio.NC | Error backpropagation is an extremely effective algorithm for assigning
credit in artificial neural networks. However, weight updates under Backprop
depend on lengthy recursive computations and require separate output and error
messages -- features not shared by biological neurons, that are perhaps
unnecessary. In this paper, we revisit Backprop and the credit assignment
problem. We first decompose Backprop into a collection of interacting learning
algorithms; provide regret bounds on the performance of these sub-algorithms;
and factorize Backprop's error signals. Using these results, we derive a new
credit assignment algorithm for nonparametric regression, Kickback, that is
significantly simpler than Backprop. Finally, we provide a sufficient condition
for Kickback to follow error gradients, and show that Kickback matches
Backprop's performance on real-world regression benchmarks.
| David Balduzzi, Hastagiri Vanchinathan, Joachim Buhmann | null | 1411.6191 | null | null |
Compound Rank-k Projections for Bilinear Analysis | cs.LG | In many real-world applications, data are represented by matrices or
high-order tensors. Despite the promising performance, the existing
two-dimensional discriminant analysis algorithms employ a single projection
model to exploit the discriminant information for projection, making the model
less flexible. In this paper, we propose a novel Compound Rank-k Projection
(CRP) algorithm for bilinear analysis. CRP deals with matrices directly without
transforming them into vectors, and it therefore preserves the correlations
within the matrix and decreases the computation complexity. Different from the
existing two dimensional discriminant analysis algorithms, objective function
values of CRP increase monotonically. In addition, CRP utilizes multiple rank-k
projection models to enable a larger search space in which the optimal solution
can be found. In this way, the discriminant ability is enhanced.
| Xiaojun Chang, Feiping Nie, Sen Wang, Yi Yang, Xiaofang Zhou and
Chengqi Zhang | 10.1109/TNNLS.2015.2441735 | 1411.6231 | null | null |
Semi-supervised Feature Analysis by Mining Correlations among Multiple
Tasks | cs.LG | In this paper, we propose a novel semi-supervised feature selection framework
by mining correlations among multiple tasks and apply it to different
multimedia applications. Instead of independently computing the importance of
features for each task, our algorithm leverages shared knowledge from multiple
related tasks, thus, improving the performance of feature selection. Note that
we build our algorithm on the assumption that different tasks share common
structures. The proposed algorithm selects features in a batch mode, by which
the correlations between different features are taken into consideration.
Besides, considering the fact that labeling a large amount of training data in
real world is both time-consuming and tedious, we adopt manifold learning which
exploits both labeled and unlabeled training data for feature space analysis.
Since the objective function is non-smooth and difficult to solve, we propose
an iterative algorithm with fast convergence. Extensive experiments on
different applications demonstrate that our algorithm outperforms other
state-of-the-art feature selection algorithms.
| Xiaojun Chang and Yi Yang | 10.1109/TNNLS.2016.2582746 | 1411.6232 | null | null |
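A minimal sketch of the manifold-learning ingredient: a graph Laplacian built from labeled and unlabeled samples together, whose quadratic form penalizes predictions that vary across nearby points. The multi-task sparse term that the paper couples with this is not shown, and the kNN and kernel choices below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))            # labeled + unlabeled samples together

# Gaussian affinities restricted to 5 nearest neighbors (skip self)
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
S = np.exp(-d2 / d2.mean())
keep = np.argsort(-S, axis=1)[:, 1:6]
A = np.zeros_like(S)
rows = np.arange(len(X))[:, None]
A[rows, keep] = S[rows, keep]
A = np.maximum(A, A.T)                   # symmetrize the kNN graph

L = np.diag(A.sum(1)) - A                # unnormalized graph Laplacian
F = rng.normal(size=(100, 3))            # candidate predicted label matrix
smoothness = np.trace(F.T @ L @ F)       # small when F is smooth on the graph
```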
A Convex Sparse PCA for Feature Analysis | cs.LG | Principal component analysis (PCA) has been widely applied to dimensionality
reduction and data pre-processing for different applications in engineering,
biology and social science. Classical PCA and its variants seek linear
projections of the original variables to obtain a low dimensional feature
representation with maximal variance. One limitation is that it is very
difficult to interpret the results of PCA. In addition, the classical PCA is
vulnerable to certain noisy data. In this paper, we propose a convex sparse
principal component analysis (CSPCA) algorithm and apply it to feature
analysis. First we show that PCA can be formulated as a low-rank regression
optimization problem. Based on this formulation, the $\ell_{2,1}$-norm minimization
is incorporated into the objective function to make the regression coefficients
sparse and thereby robust to outliers. In addition, based on the sparse model
used in CSPCA, an optimal weight is assigned to each of the original features,
which in turn provides the output with good interpretability. With the output
of our CSPCA, we can effectively analyze the importance of each feature under
the PCA criteria. The objective function is convex, and we propose an iterative
algorithm to optimize it. We apply the CSPCA algorithm to feature selection and
conduct extensive experiments on six different benchmark datasets. Experimental
results demonstrate that the proposed algorithm outperforms state-of-the-art
unsupervised feature selection algorithms.
| Xiaojun Chang, Feiping Nie, Yi Yang, and Heng Huang | 10.1145/2910585 | 1411.6233 | null | null |
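A minimal sketch of the $\ell_{2,1}$ ingredient: row-sparse regression coefficients obtained by the standard iterative-reweighting update, which yields a per-feature importance score. The full CSPCA objective (PCA recast as low-rank regression) is not reproduced here, and the data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
Y = X[:, :5] @ rng.normal(size=(5, 3))    # only the first 5 features matter

lam, W = 1.0, rng.normal(size=(30, 3))
for _ in range(50):
    row_norms = np.linalg.norm(W, axis=1) + 1e-8
    D = np.diag(1.0 / (2.0 * row_norms))  # reweighting matrix for ||W||_{2,1}
    W = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)

importance = np.linalg.norm(W, axis=1)    # per-feature score; near-zero rows
                                          # mark features to be discarded
```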
Balanced k-Means and Min-Cut Clustering | cs.LG | Clustering is an effective technique in data mining to generate groups that
are the matter of interest. Among various clustering approaches, the family of
k-means algorithms and min-cut algorithms gain most popularity due to their
simplicity and efficacy. The classical k-means algorithm partitions a number of
data points into several subsets by iteratively updating the clustering centers
and the associated data points. By contrast, a weighted undirected graph is
constructed in min-cut algorithms which partition the vertices of the graph
into two sets. However, existing clustering algorithms tend to cluster a
minority of the data points into one subset, which should be avoided when the
target dataset is balanced. To achieve more accurate clustering for balanced datasets, we propose
to leverage exclusive lasso on k-means and min-cut to regulate the balance
degree of the clustering results. By optimizing our objective functions, which
are built atop the exclusive lasso, we can make the clustering result as
balanced as possible. Extensive experiments on several large-scale datasets
validate the advantage of the proposed algorithms compared to the
state-of-the-art clustering algorithms.
| Xiaojun Chang, Feiping Nie, Zhigang Ma, and Yi Yang | null | 1411.6235 | null | null |
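A toy sketch of the balancing idea: plain Lloyd iterations with an extra assignment cost that grows with current cluster size, in the spirit of an exclusive-lasso term $\sum_j n_j^2$. This is an illustration only, not the paper's exact formulation; sizes are recomputed once per sweep for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.5, size=(60, 2)) for m in (0, 3, 6)])
k, gamma = 3, 0.5                         # gamma: balance-penalty strength

C = X[rng.choice(len(X), k, replace=False)]
labels = rng.integers(0, k, len(X))
for _ in range(20):
    sizes = np.bincount(labels, minlength=k)
    d2 = ((X[:, None] - C[None]) ** 2).sum(-1)
    # Penalize joining already-large clusters; gamma = 0 recovers k-means.
    labels = np.argmin(d2 + gamma * sizes[None], axis=1)
    for j in range(k):
        if (labels == j).any():
            C[j] = X[labels == j].mean(0)
```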
Improved Spectral Clustering via Embedded Label Propagation | cs.LG | Spectral clustering is a key research topic in the field of machine learning
and data mining. Most of the existing spectral clustering algorithms are built
upon Gaussian Laplacian matrices, which are sensitive to parameters. We propose
a novel parameter-free, distance-consistent Locally Linear Embedding (LLE). The
proposed distance-consistent LLE ensures that edges between closer data points
carry greater weight. Furthermore, we propose a novel improved spectral
clustering via embedded label propagation. Our algorithm is built upon two
advancements of the state of the art: 1) label propagation, which propagates a
node's labels to neighboring nodes according to their proximity; and 2)
manifold learning, which has been widely used for its capacity to leverage the
manifold structure of data points. First, we perform standard spectral
clustering on the original data and assign each cluster label to its k nearest data points.
Next, we propagate labels through dense, unlabeled data regions. Extensive
experiments with various datasets validate the superiority of the proposed
algorithm compared to current state of the art spectral algorithms.
| Xiaojun Chang, Feiping Nie, Yi Yang and Heng Huang | null | 1411.6241 | null | null |
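A minimal sketch of the propagation step, assuming the Zhou et al.-style update $F \leftarrow \alpha S F + (1-\alpha) Y$ on a normalized affinity graph; in the paper's pipeline the seeds would come from the initial spectral clustering rather than being hand-picked as below.

```python
import numpy as np

def propagate(A, Y, alpha=0.9, iters=100):
    # A: symmetric affinity matrix (zero diagonal, connected graph assumed)
    # Y: seed matrix, one-hot rows for seeded points, zeros elsewhere
    d = A.sum(1)
    S = A / np.sqrt(np.outer(d, d))       # D^{-1/2} A D^{-1/2}
    F = Y.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y
    return F.argmax(1)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, .3, (20, 2)), rng.normal(3, .3, (20, 2))])
A = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))
np.fill_diagonal(A, 0.0)
Y = np.zeros((40, 2)); Y[0, 0] = Y[20, 1] = 1.0   # one seed per cluster
print(propagate(A, Y))                    # labels diffused through the graph
```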
Structure Regularization for Structured Prediction: Theories and
Experiments | cs.LG | While there are many studies on weight regularization, the study on structure
regularization is rare. Many existing systems on structured prediction focus on
increasing the level of structural dependencies within the model. However, this
trend could have been misdirected, because our study suggests that complex
structures are actually harmful to generalization ability in structured
prediction. To control structure-based overfitting, we propose a structure
regularization framework via \emph{structure decomposition}, which decomposes
training samples into mini-samples with simpler structures, deriving a model
with better generalization power. We show both theoretically and empirically
that structure regularization can effectively control overfitting risk and lead
to better accuracy. As a by-product, the proposed method can also substantially
accelerate the training speed. The method and the theoretical results can apply
to general graphical models with arbitrary structures. Experiments on
well-known tasks demonstrate that our method can easily beat the benchmark
systems on those highly-competitive tasks, achieving state-of-the-art
accuracies yet with substantially faster training speed.
| Xu Sun | null | 1411.6243 | null | null |
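A minimal sketch of structure decomposition for sequence labeling: each training sequence is split into shorter mini-samples before any structured learner is trained. The chunk length, which controls the regularization strength, is an arbitrary choice here.

```python
# Decompose training sequences into mini-samples with simpler structures.
def decompose(sequences, chunk_len=5):
    minis = []
    for tokens, tags in sequences:
        for i in range(0, len(tokens), chunk_len):
            minis.append((tokens[i:i + chunk_len], tags[i:i + chunk_len]))
    return minis

train = [(["the", "cat", "sat", "on", "the", "mat", "."],
          ["DT", "NN", "VBD", "IN", "DT", "NN", "."])]
mini_samples = decompose(train, chunk_len=3)   # weaker intra-sample structure
```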
Target Fishing: A Single-Label or Multi-Label Problem? | q-bio.BM cs.LG stat.ML | According to Cobanoglu et al. and Murphy, it is now widely acknowledged that
the single target paradigm (one protein or target, one disease, one drug) that
has been the dominant premise in drug development in the recent past is
untenable. More often than not, a drug-like compound (ligand) can be
promiscuous - that is, it can interact with more than one target protein. In
recent years, the promiscuity issue has been approached computationally in
different ways by in silico target prediction methods. In this study, we confine
attention to the so-called ligand-based target prediction machine learning
approaches, commonly referred to as target-fishing. With a few exceptions, the
target-fishing approaches that are currently ubiquitous in cheminformatics
literature can be essentially viewed as single-label multi-class classification
schemes; these approaches inherently bank on the single target paradigm
assumption that a ligand can home in on one specific target. In order to
address the ligand promiscuity issue, one might be able to cast target-fishing
as a multi-label multi-class classification problem. For illustrative and
comparison purposes, single-label and multi-label Naive Bayes classification
models (denoted here by SMM and MMM, respectively) for target-fishing were
implemented. The models were constructed and tested on 65,587 compounds and 308
targets retrieved from the ChEMBL17 database. SMM and MMM performed
differently: for 16,344 test compounds, the MMM model returned recall and
precision values of 0.8058 and 0.6622, respectively; the corresponding recall
and precision values yielded by the SMM model were 0.7805 and 0.7596,
respectively. However, a McNemar test with one degree of freedom, performed at
a significance level of 0.05 on the target prediction results returned by SMM
and MMM for the 16,344 test ligands, gave a chi-squared value of 15.656, in
favour of the MMM approach.
| Avid M. Afzal, Hamse Y. Mussa, Richard E. Turner, Andreas Bender,
Robert C. Glen | null | 1411.6285 | null | null |
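For illustration, the two framings can be contrasted with scikit-learn, using a one-vs-rest wrapper as a stand-in for the paper's multi-label Naive Bayes model; the fingerprint features and target labels below are synthetic placeholders, not the ChEMBL data.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 64))          # binary fingerprint bits

# SMM-style: each ligand gets exactly one target label.
y_single = rng.integers(0, 8, size=500)
smm = BernoulliNB().fit(X, y_single)

# MMM-style: a promiscuous ligand can hit several targets at once.
y_multi = [rng.choice(8, size=rng.integers(1, 4), replace=False)
           for _ in range(500)]
Y = MultiLabelBinarizer(classes=range(8)).fit_transform(y_multi)
mmm = OneVsRestClassifier(BernoulliNB()).fit(X, Y)

print(smm.predict(X[:1]))        # one target id
print(mmm.predict(X[:1]))        # 0/1 vector over all eight targets
```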
Revenue Optimization in Posted-Price Auctions with Strategic Buyers | cs.LG | We study revenue optimization learning algorithms for posted-price auctions
with strategic buyers. We analyze a very broad family of monotone regret
minimization algorithms for this problem, which includes the previously best
known algorithm, and show that no algorithm in that family admits a strategic
regret more favorable than $\Omega(\sqrt{T})$. We then introduce a new
algorithm that achieves a strategic regret differing from the lower bound only
by a factor in $O(\log T)$, an exponential improvement upon the previous best
algorithm. Our new algorithm admits a natural analysis and simpler proofs, and
the ideas behind its design are general. We also report the results of
empirical evaluations comparing our algorithm with the previous state of the
art and show a consistent exponential improvement in several different
scenarios.
| Mehryar Mohri and Andres Muñoz Medina | null | 1411.6305 | null | null |
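A toy simulation of a monotone posted-price learner against a myopic buyer with a fixed private valuation; a strategic buyer would additionally reject early offers to drive future prices down, which is what the $\Omega(\sqrt{T})$ lower bound captures. This is an illustration only, not the paper's algorithm.

```python
v, T = 0.62, 10_000                      # buyer valuation, number of rounds
price, step, revenue = 1.0, 0.05, 0.0
for _ in range(T):
    if price <= v:                       # myopic buyer accepts any price <= v
        revenue += price
    else:
        price = max(0.0, price - step)   # monotone: prices never increase
print(f"regret vs. always pricing at v: {v * T - revenue:.1f}")
```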
Diversifying Sparsity Using Variational Determinantal Point Processes | cs.LG cs.AI stat.ML | We propose a novel diverse feature selection method based on determinantal
point processes (DPPs). Our model enables one to flexibly define diversity
based on the covariance of features (similar to orthogonal matching pursuit) or
alternatively based on side information. We introduce our approach in the
context of Bayesian sparse regression, employing a DPP as a variational
approximation to the true spike-and-slab posterior distribution. We
subsequently show how this variational DPP approximation generalizes and
extends mean-field approximation, and can be learned efficiently by exploiting
the fast sampling properties of DPPs. Our motivating application comes from
bioinformatics, where we aim to identify a diverse set of genes whose
expression profiles predict a tumor type where the diversity is defined with
respect to a gene-gene interaction network. We also explore an application in
spatial statistics. In both cases, we demonstrate that the proposed method
yields significantly more diverse feature sets than classic sparse methods,
without compromising accuracy.
| Nematollah Kayhan Batmanghelich, Gerald Quon, Alex Kulesza, Manolis
Kellis, Polina Golland, Luke Bornn | null | 1411.6307 | null | null |
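A simplified stand-in for the diversity mechanism: greedy feature selection that maximizes the log-determinant of the selected kernel submatrix, so near-duplicate features are penalized. The paper instead embeds a variational DPP inside Bayesian sparse regression; the correlation kernel below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.normal(size=(100, 20))           # 20 candidate features
K = np.corrcoef(F, rowvar=False) + 1e-6 * np.eye(20)   # similarity kernel

selected = []
for _ in range(5):
    best, best_gain = None, -np.inf
    for j in range(20):
        if j in selected:
            continue
        idx = selected + [j]
        # log det of the kernel submatrix: large for diverse subsets
        gain = np.linalg.slogdet(K[np.ix_(idx, idx)])[1]
        if gain > best_gain:
            best, best_gain = j, gain
    selected.append(best)
print(selected)                          # a diverse 5-feature subset
```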
A Convex Formulation for Spectral Shrunk Clustering | cs.LG | Spectral clustering is a fundamental technique in the field of data mining
and information processing. Most existing spectral clustering algorithms
integrate dimensionality reduction into the clustering process assisted by
manifold learning in the original space. However, the manifold in the
reduced-dimensional subspace is likely to exhibit properties that differ from
those in the original space. Thus, applying manifold information obtained
from the original space to the clustering process in a low-dimensional subspace
is prone to inferior performance. Aiming to address this issue, we propose a
novel convex algorithm that mines the manifold structure in the low-dimensional
subspace. In addition, our unified learning process makes the manifold learning
particularly tailored for the clustering. Compared with other related methods,
the proposed algorithm yields a more structured clustering result. To
validate the efficacy of the proposed algorithm, we perform extensive
experiments on several benchmark datasets in comparison with some
state-of-the-art clustering approaches. The experimental results demonstrate
that the proposed algorithm has quite promising clustering performance.
| Xiaojun Chang, Feiping Nie, Zhigang Ma, Yi Yang and Xiaofang Zhou | null | 1411.6308 | null | null |
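For reference, a sketch of the baseline pipeline the paper improves on: a spectral embedding computed from affinities in the original space, followed by k-means. The proposed method's coupling of manifold learning with the low-dimensional subspace is not shown; the Gaussian bandwidth here is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_baseline(X, k, sigma=1.0):
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    A = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(A, 0.0)
    d = A.sum(1)
    L = np.eye(len(X)) - A / np.sqrt(np.outer(d, d))   # normalized Laplacian
    _, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]                       # k smallest eigenvectors
    U /= np.linalg.norm(U, axis=1, keepdims=True) + 1e-12
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.4, size=(30, 2)) for m in (0, 4)])
print(spectral_baseline(X, k=2))
```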
On the High-dimensional Power of Linear-time Kernel Two-Sample Testing
under Mean-difference Alternatives | math.ST cs.AI cs.IT cs.LG math.IT stat.ML stat.TH | Nonparametric two-sample testing deals with the question of consistently
deciding if two distributions are different, given samples from both, without
making any parametric assumptions about the form of the distributions. The
current literature is split into two kinds of tests - those which are
consistent without any assumptions about how the distributions may differ
(\textit{general} alternatives), and those which are designed to specifically
test easier alternatives, like a difference in means (\textit{mean-shift}
alternatives).
The main contribution of this paper is to explicitly characterize the power
of a popular nonparametric two sample test, designed for general alternatives,
under a mean-shift alternative in the high-dimensional setting. Specifically,
we explicitly derive the power of the linear-time Maximum Mean Discrepancy
statistic using the Gaussian kernel, where the dimension and sample size can
both tend to infinity at any rate, and the two distributions differ in their
means. As a corollary, we find that if the signal-to-noise ratio is held
constant, then the test's power goes to one if the number of samples increases
faster than the dimension increases. This is the first explicit power
derivation for a general nonparametric test in the high-dimensional setting,
and also the first analysis of how tests designed for general alternatives
perform when faced with easier ones.
| Aaditya Ramdas, Sashank J. Reddi, Barnabas Poczos, Aarti Singh, Larry
Wasserman | null | 1411.6314 | null | null |
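A minimal sketch of the statistic analyzed here: the linear-time Maximum Mean Discrepancy with a Gaussian kernel, averaging the h-statistic over disjoint sample pairs so the cost is O(n) rather than O(n^2). The bandwidth and the mean-shift below are arbitrary assumptions for illustration.

```python
import numpy as np

def k(a, b, bw=1.0):
    return np.exp(-((a - b) ** 2).sum(-1) / (2 * bw ** 2))

def linear_mmd(X, Y, bw=1.0):
    n = (min(len(X), len(Y)) // 2) * 2
    x1, x2 = X[:n:2], X[1:n:2]           # disjoint pairs from X
    y1, y2 = Y[:n:2], Y[1:n:2]           # disjoint pairs from Y
    h = k(x1, x2, bw) + k(y1, y2, bw) - k(x1, y2, bw) - k(x2, y1, bw)
    return h.mean()                      # unbiased, computed in linear time

rng = np.random.default_rng(0)
d = 50
X = rng.normal(0.0, 1.0, size=(2000, d))
Y = rng.normal(0.2, 1.0, size=(2000, d)) # mean-shift alternative
print(linear_mmd(X, Y))                  # positive under the mean shift
```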