categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---|
stat.ML cs.LG | null | 1311.0707 | null | null | http://arxiv.org/pdf/1311.0707v3 | 2014-02-14T15:15:43Z | 2013-11-04T14:13:27Z | Generative Modelling for Unsupervised Score Calibration | Score calibration enables automatic speaker recognizers to make
cost-effective accept / reject decisions. Traditional calibration requires
supervised data, which is an expensive resource. We propose a 2-component GMM
for unsupervised calibration and demonstrate good performance relative to a
supervised baseline on NIST SRE'10 and SRE'12. A Bayesian analysis demonstrates
that the uncertainty associated with the unsupervised calibration parameter
estimates is surprisingly small.
| [
"Niko Br\\\"ummer and Daniel Garcia-Romero",
"['Niko Brümmer' 'Daniel Garcia-Romero']"
] |
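A minimal sketch of the unsupervised calibration idea summarized in the abstract above: fit a 2-component Gaussian mixture to pooled, unlabeled recognizer scores and read a calibrated log-likelihood ratio off the two components. The synthetic scores, parameter values, and use of scikit-learn/SciPy are illustrative assumptions, not the authors' model or implementation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic raw recognizer scores: mostly non-target trials, some target trials.
scores = np.concatenate([rng.normal(-2.0, 1.0, 5000),   # non-target trials
                         rng.normal( 1.5, 1.2,  500)])  # target trials

# Fit a 2-component GMM to the pooled, unlabeled scores.
gmm = GaussianMixture(n_components=2, random_state=0).fit(scores.reshape(-1, 1))

# Treat the higher-mean component as the "target" component.
order = np.argsort(gmm.means_.ravel())
mu_non, mu_tar = gmm.means_.ravel()[order]
sd_non, sd_tar = np.sqrt(gmm.covariances_.ravel()[order])

def calibrated_llr(s):
    # Calibrated score: log-likelihood ratio of target vs. non-target component.
    return norm.logpdf(s, mu_tar, sd_tar) - norm.logpdf(s, mu_non, sd_non)

print(calibrated_llr(np.array([-3.0, 0.0, 3.0])))
```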
cs.LG | null | 1311.0800 | null | null | http://arxiv.org/pdf/1311.0800v1 | 2013-11-04T18:19:25Z | 2013-11-04T18:19:25Z | Distributed Exploration in Multi-Armed Bandits | We study exploration in Multi-Armed Bandits in a setting where $k$ players
collaborate in order to identify an $\epsilon$-optimal arm. Our motivation
comes from recent employment of bandit algorithms in computationally intensive,
large-scale applications. Our results demonstrate a non-trivial tradeoff
between the number of arm pulls required by each of the players, and the amount
of communication between them. In particular, our main result shows that by
allowing the $k$ players to communicate only once, they are able to learn
$\sqrt{k}$ times faster than a single player. That is, distributing learning to
$k$ players gives rise to a factor $\sqrt{k}$ parallel speed-up. We complement
this result with a lower bound showing this is in general the best possible. On
the other extreme, we present an algorithm that achieves the ideal factor $k$
speed-up in learning performance, with communication only logarithmic in
$1/\epsilon$.
| [
"['Eshcar Hillel' 'Zohar Karnin' 'Tomer Koren' 'Ronny Lempel' 'Oren Somekh']",
"Eshcar Hillel, Zohar Karnin, Tomer Koren, Ronny Lempel, Oren Somekh"
] |
cs.LG | null | 1311.0914 | null | null | http://arxiv.org/pdf/1311.0914v1 | 2013-11-04T22:06:40Z | 2013-11-04T22:06:40Z | A Divide-and-Conquer Solver for Kernel Support Vector Machines | The kernel support vector machine (SVM) is one of the most widely used
classification methods; however, the amount of computation required becomes the
bottleneck when facing millions of samples. In this paper, we propose and
analyze a novel divide-and-conquer solver for kernel SVMs (DC-SVM). In the
division step, we partition the kernel SVM problem into smaller subproblems by
clustering the data, so that each subproblem can be solved independently and
efficiently. We show theoretically that the support vectors identified by the
subproblem solution are likely to be support vectors of the entire kernel SVM
problem, provided that the problem is partitioned appropriately by kernel
clustering. In the conquer step, the local solutions from the subproblems are
used to initialize a global coordinate descent solver, which converges quickly
as suggested by our analysis. By extending this idea, we develop a multilevel
Divide-and-Conquer SVM algorithm with adaptive clustering and early prediction
strategy, which outperforms state-of-the-art methods in terms of training
speed, testing accuracy, and memory usage. As an example, on the covtype
dataset with half-a-million samples, DC-SVM is 7 times faster than LIBSVM in
obtaining the exact SVM solution (to within $10^{-6}$ relative error) which
achieves 96.15% prediction accuracy. Moreover, with our proposed early
prediction strategy, DC-SVM achieves about 96% accuracy in only 12 minutes,
which is more than 100 times faster than LIBSVM.
| [
"Cho-Jui Hsieh and Si Si and Inderjit S. Dhillon",
"['Cho-Jui Hsieh' 'Si Si' 'Inderjit S. Dhillon']"
] |
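A rough sketch of the divide-and-conquer idea described in the abstract above, assuming scikit-learn and synthetic data: cluster the data, solve a kernel SVM per cluster, then refit on the union of the local support vectors. The refit is only a crude stand-in for the paper's conquer step, which instead uses the local solutions to initialize a global coordinate-descent solver.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Divide: partition the data by clustering and solve a kernel SVM per cluster.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
support_idx = []
for c in range(4):
    idx = np.where(labels == c)[0]
    if len(np.unique(y[idx])) < 2:          # skip clusters containing a single class
        continue
    local_svm = SVC(kernel="rbf", gamma="scale").fit(X[idx], y[idx])
    support_idx.append(idx[local_svm.support_])   # map local SVs back to global indices

# Conquer (crude approximation): refit using only the union of local support vectors.
sv = np.concatenate(support_idx)
global_svm = SVC(kernel="rbf", gamma="scale").fit(X[sv], y[sv])
print("retained %d of %d points" % (len(sv), len(X)))
```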
cs.LG | null | 1311.0989 | null | null | http://arxiv.org/pdf/1311.0989v2 | 2014-05-23T12:02:06Z | 2013-11-05T08:46:26Z | Large Margin Distribution Machine | Support vector machine (SVM) has been one of the most popular learning
algorithms, with the central idea of maximizing the minimum margin, i.e., the
smallest distance from the instances to the classification boundary. Recent
theoretical results, however, disclosed that maximizing the minimum margin does
not necessarily lead to better generalization performances, and instead, the
margin distribution has been proven to be more crucial. In this paper, we
propose the Large margin Distribution Machine (LDM), which tries to achieve a
better generalization performance by optimizing the margin distribution. We
characterize the margin distribution by the first- and second-order statistics,
i.e., the margin mean and variance. The LDM is a general learning approach
which can be used in any place where SVM can be applied, and its superiority is
verified both theoretically and empirically in this paper.
| [
"Teng Zhang, Zhi-Hua Zhou",
"['Teng Zhang' 'Zhi-Hua Zhou']"
] |
stat.ML cs.LG | null | 1311.1040 | null | null | http://arxiv.org/pdf/1311.1040v2 | 2016-12-28T02:09:17Z | 2013-11-05T13:07:46Z | Combined Independent Component Analysis and Canonical Polyadic
Decomposition via Joint Diagonalization | Recently, there has been a trend to combine independent component analysis
and canonical polyadic decomposition (ICA-CPD) for an enhanced robustness for
the computation of CPD, and ICA-CPD could be further converted into CPD of a
5th-order partially symmetric tensor, by calculating the eigenmatrices of the
4th-order cumulant slices of a trilinear mixture. In this study, we propose a
new 5th-order CPD algorithm constrained with partial symmetry based on joint
diagonalization. As the main steps involved in the proposed algorithm undergo
no updating iterations for the loading matrices, it is much faster than the
existing algorithm based on alternating least squares and enhanced line search,
with competent performances. Simulation results are provided to demonstrate the
performance of the proposed algorithm.
| [
"Xiao-Feng Gong, Cheng-Yuan Wang, Ya-Na Hao, and Qiu-Hua Lin",
"['Xiao-Feng Gong' 'Cheng-Yuan Wang' 'Ya-Na Hao' 'Qiu-Hua Lin']"
] |
stat.ME cs.LG stat.ML | 10.1080/01621459.2014.998762 | 1311.1189 | null | null | http://arxiv.org/abs/1311.1189v1 | 2013-11-05T20:41:17Z | 2013-11-05T20:41:17Z | Statistical Inference in Hidden Markov Models using $k$-segment
Constraints | Hidden Markov models (HMMs) are one of the most widely used statistical
methods for analyzing sequence data. However, the reporting of output from HMMs
has largely been restricted to the presentation of the most-probable (MAP)
hidden state sequence, found via the Viterbi algorithm, or the sequence of most
probable marginals using the forward-backward (F-B) algorithm. In this article,
we expand the amount of information we can obtain from the posterior
distribution of an HMM by introducing linear-time dynamic programming
algorithms, collectively called $k$-segment algorithms, that allow us to
i) find MAP sequences, ii) compute posterior probabilities and iii) simulate
sample paths conditional on a user specified number of segments, i.e.
contiguous runs in a hidden state, possibly of a particular type. We illustrate
the utility of these methods using simulated and real examples and highlight
the application of prospective and retrospective use of these methods for
fitting HMMs or exploring existing model fits.
| [
"Michalis K. Titsias, Christopher Yau, Christopher C. Holmes",
"['Michalis K. Titsias' 'Christopher Yau' 'Christopher C. Holmes']"
] |
stat.ML cs.LG | null | 1311.1354 | null | null | http://arxiv.org/pdf/1311.1354v3 | 2015-07-16T08:37:23Z | 2013-11-06T11:25:42Z | How to Center Binary Deep Boltzmann Machines | This work analyzes centered binary Restricted Boltzmann Machines (RBMs) and
binary Deep Boltzmann Machines (DBMs), where centering is done by subtracting
offset values from visible and hidden variables. We show analytically that (i)
centering results in a different but equivalent parameterization for artificial
neural networks in general, (ii) the expected performance of centered binary
RBMs/DBMs is invariant under simultaneous flip of data and offsets, for any
offset value in the range of zero to one, (iii) centering can be reformulated
as a different update rule for normal binary RBMs/DBMs, and (iv) using the
enhanced gradient is equivalent to setting the offset values to the average
over model and data mean. Furthermore, numerical simulations suggest that (i)
optimal generative performance is achieved by subtracting mean values from
visible as well as hidden variables, (ii) centered RBMs/DBMs reach
significantly higher log-likelihood values than normal binary RBMs/DBMs, (iii)
centering variants whose offsets depend on the model mean, like the enhanced
gradient, suffer from severe divergence problems, (iv) learning is stabilized
if an exponentially moving average over the batch means is used for the offset
values instead of the current batch mean, which also prevents the enhanced
gradient from diverging, (v) centered RBMs/DBMs reach higher LL values than
normal RBMs/DBMs while having a smaller norm of the weight matrix, (vi)
centering leads to an update direction that is closer to the natural gradient
and that the natural gradient is extremely efficient for training RBMs, (vii)
centering dispenses with the need for greedy layer-wise pre-training of DBMs,
(viii) we furthermore show that pre-training often even worsens the results,
independently of whether centering is used or not, and (ix) centering is also
beneficial for autoencoders.
| [
"Jan Melchior, Asja Fischer, Laurenz Wiskott",
"['Jan Melchior' 'Asja Fischer' 'Laurenz Wiskott']"
] |
cs.CV cs.IR cs.LG | null | 1311.1406 | null | null | http://arxiv.org/pdf/1311.1406v1 | 2013-11-04T19:03:31Z | 2013-11-04T19:03:31Z | TOP-SPIN: TOPic discovery via Sparse Principal component INterference | We propose a novel topic discovery algorithm for unlabeled images based on
the bag-of-words (BoW) framework. We first extract a dictionary of visual words
and subsequently for each image compute a visual word occurrence histogram. We
view these histograms as rows of a large matrix from which we extract sparse
principal components (PCs). Each PC identifies a sparse combination of visual
words which co-occur frequently in some images but seldom appear in others.
Each sparse PC corresponds to a topic, and images whose interference with the
PC is high belong to that topic, revealing the common parts possessed by the
images. We propose to solve the associated sparse PCA problems using an
Alternating Maximization (AM) method, which we modify for the purpose of
efficiently extracting multiple PCs in a deflation scheme. Our approach attacks
the maximization problem in sparse PCA directly and is scalable to
high-dimensional data. Experiments on automatic topic discovery and category
prediction demonstrate encouraging performance of our approach.
| [
"Martin Tak\\'a\\v{c}, Selin Damla Ahipa\\c{s}ao\\u{g}lu, Ngai-Man Cheung,\n Peter Richt\\'arik",
"['Martin Takáč' 'Selin Damla Ahipaşaoğlu' 'Ngai-Man Cheung'\n 'Peter Richtárik']"
] |
cs.LG cs.CE q-bio.QM | null | 1311.1422 | null | null | http://arxiv.org/pdf/1311.1422v2 | 2013-11-12T19:17:57Z | 2013-11-06T15:37:27Z | Structural Learning for Template-free Protein Folding | This thesis aims to solve the template-free protein folding problem by
tackling two important components: efficient sampling in vast conformation
space, and design of knowledge-based potentials with high accuracy. We have
proposed the first-order and second-order CRF-Sampler to sample structures from
the continuous local dihedral angles space by modeling the lower and higher
order conditional dependency between neighboring dihedral angles given the
primary sequence information. A framework combining the Conditional Random
Fields and the energy function is introduced to guide the local conformation
sampling using long range constraints with the energy function.
The relationship between the sequence profile and the local dihedral angle
distribution is nonlinear. Hence we proposed the CNF-Folder to model this
complex relationship by applying a novel machine learning model Conditional
Neural Fields which utilizes the structural graphical model with the neural
network. CRF-Samplers and CNF-Folder perform very well in CASP8 and CASP9.
Further, a novel pairwise distance statistical potential (EPAD) is designed
to capture the dependency of the energy profile on the positions of the
interacting amino acids as well as the types of those amino acids, opposing the
common assumption that this energy profile depends only on the types of amino
acids. EPAD has also been successfully applied in the CASP 10 Free Modeling
experiment with CNF-Folder, performing especially well on some targets with
uncommon structures.
| [
"['Feng Zhao']",
"Feng Zhao"
] |
cs.CL cs.LG math.CT math.LO | null | 1311.1539 | null | null | http://arxiv.org/pdf/1311.1539v1 | 2013-11-06T22:06:15Z | 2013-11-06T22:06:15Z | Category-Theoretic Quantitative Compositional Distributional Models of
Natural Language Semantics | This thesis is about the problem of compositionality in distributional
semantics. Distributional semantics presupposes that the meanings of words are
a function of their occurrences in textual contexts. It models words as
distributions over these contexts and represents them as vectors in high
dimensional spaces. The problem of compositionality for such models concerns
itself with how to produce representations for larger units of text by
composing the representations of smaller units of text.
This thesis focuses on a particular approach to this compositionality
problem, namely using the categorical framework developed by Coecke, Sadrzadeh,
and Clark, which combines syntactic analysis formalisms with distributional
semantic representations of meaning to produce syntactically motivated
composition operations. This thesis shows how this approach can be
theoretically extended and practically implemented to produce concrete
compositional distributional models of natural language semantics. It
furthermore demonstrates that such models can perform on par with, or better
than, other competing approaches in the field of natural language processing.
There are three principal contributions to computational linguistics in this
thesis. The first is to extend the DisCoCat framework on the syntactic front
and semantic front, incorporating a number of syntactic analysis formalisms and
providing learning procedures allowing for the generation of concrete
compositional distributional models. The second contribution is to evaluate the
models developed from the procedures presented here, showing that they
outperform other compositional distributional models present in the literature.
The third contribution is to show how using category theory to solve linguistic
problems forms a sound basis for research, illustrated by examples of work on
this topic, that also suggest directions for future research.
| [
"Edward Grefenstette",
"['Edward Grefenstette']"
] |
cs.LG math.OC stat.ML | null | 1311.1644 | null | null | http://arxiv.org/pdf/1311.1644v1 | 2013-11-07T11:33:14Z | 2013-11-07T11:33:14Z | The Maximum Entropy Relaxation Path | The relaxed maximum entropy problem is concerned with finding a probability
distribution on a finite set that minimizes the relative entropy to a given
prior distribution, while satisfying relaxed max-norm constraints with respect
to a third observed multinomial distribution. We study the entire relaxation
path for this problem in detail. We show existence and a geometric description
of the relaxation path. Specifically, we show that the maximum entropy
relaxation path admits a planar geometric description as an increasing,
piecewise linear function in the inverse relaxation parameter. We derive fast
algorithms for tracking the path. In various realistic settings, our algorithms
require $O(n\log(n))$ operations for probability distributions on $n$ points,
making it possible to handle large problems. Once the path has been recovered,
we show that given a validation set, the family of admissible models is reduced
from an infinite family to a small, discrete set. We demonstrate the merits of
our approach in experiments with synthetic data and discuss its potential for
the estimation of compact n-gram language models.
| [
"['Moshe Dubiner' 'Matan Gavish' 'Yoram Singer']",
"Moshe Dubiner, Matan Gavish and Yoram Singer"
] |
cs.IR cs.AI cs.LG stat.ML | null | 1311.1704 | null | null | http://arxiv.org/pdf/1311.1704v3 | 2014-05-20T19:19:30Z | 2013-11-07T14:58:40Z | Scalable Recommendation with Poisson Factorization | We develop a Bayesian Poisson matrix factorization model for forming
recommendations from sparse user behavior data. These data are large user/item
matrices where each user has provided feedback on only a small subset of items,
either explicitly (e.g., through star ratings) or implicitly (e.g., through
views or purchases). In contrast to traditional matrix factorization
approaches, Poisson factorization implicitly models each user's limited
attention to consume items. Moreover, because of the mathematical form of the
Poisson likelihood, the model needs only to explicitly consider the observed
entries in the matrix, leading to both scalable computation and good predictive
performance. We develop a variational inference algorithm for approximate
posterior inference that scales up to massive data sets. This is an efficient
algorithm that iterates over the observed entries and adjusts an approximate
posterior over the user/item representations. We apply our method to large
real-world user data containing users rating movies, users listening to songs,
and users reading scientific papers. In all these settings, Bayesian Poisson
factorization outperforms state-of-the-art matrix factorization methods.
| [
"Prem Gopalan, Jake M. Hofman, David M. Blei",
"['Prem Gopalan' 'Jake M. Hofman' 'David M. Blei']"
] |
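A small numpy sketch of why the Poisson likelihood lets the model touch only the observed (nonzero) entries, as claimed in the abstract above: in the multiplicative update, zeros contribute nothing to the numerator, and the denominator factorizes over users and items. The update rule here is plain multiplicative Poisson matrix factorization, not the paper's variational inference algorithm, and all names and sizes are illustrative.

```python
import numpy as np

def poisson_mf(rows, cols, vals, n_users, n_items, k=10, iters=100, seed=0):
    """Poisson matrix factorization with multiplicative updates.
    Only the nonzero entries (rows, cols, vals) appear in the numerators;
    the denominators are simple column sums of the factors."""
    rng = np.random.default_rng(seed)
    theta = rng.gamma(1.0, 0.1, (n_users, k))   # user factors
    beta = rng.gamma(1.0, 0.1, (n_items, k))    # item factors
    eps = 1e-10
    for _ in range(iters):
        rate = np.sum(theta[rows] * beta[cols], axis=1) + eps   # rates at observed cells
        ratio = vals / rate
        num_t = np.zeros_like(theta)
        np.add.at(num_t, rows, ratio[:, None] * beta[cols])
        theta *= num_t / (beta.sum(axis=0) + eps)
        rate = np.sum(theta[rows] * beta[cols], axis=1) + eps
        ratio = vals / rate
        num_b = np.zeros_like(beta)
        np.add.at(num_b, cols, ratio[:, None] * theta[rows])
        beta *= num_b / (theta.sum(axis=0) + eps)
    return theta, beta

# Tiny example: 3 users, 4 items, four observed counts.
rows, cols = np.array([0, 0, 1, 2]), np.array([0, 2, 1, 3])
vals = np.array([3.0, 1.0, 2.0, 5.0])
theta, beta = poisson_mf(rows, cols, vals, n_users=3, n_items=4, k=2)
print(theta @ beta.T)   # reconstructed rate matrix
```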
stat.ME cs.LG cs.SI physics.data-an stat.ML | null | 1311.1731 | null | null | http://arxiv.org/pdf/1311.1731v2 | 2013-11-08T04:09:51Z | 2013-11-07T16:20:02Z | Stochastic blockmodel approximation of a graphon: Theory and consistent
estimation | Non-parametric approaches for analyzing network data based on exchangeable
graph models (ExGM) have recently gained interest. The key object that defines
an ExGM is often referred to as a graphon. This non-parametric perspective on
network modeling poses challenging questions on how to make inference on the
graphon underlying observed network data. In this paper, we propose a
computationally efficient procedure to estimate a graphon from a set of
observed networks generated from it. This procedure is based on a stochastic
blockmodel approximation (SBA) of the graphon. We show that, by approximating
the graphon with a stochastic block model, the graphon can be consistently
estimated, that is, the estimation error vanishes as the size of the graph
approaches infinity.
| [
"['Edoardo M Airoldi' 'Thiago B Costa' 'Stanley H Chan']",
"Edoardo M Airoldi, Thiago B Costa, Stanley H Chan"
] |
cs.LG cs.AI cs.NE cs.RO cs.SY | null | 1311.1761 | null | null | http://arxiv.org/pdf/1311.1761v1 | 2013-11-07T17:39:31Z | 2013-11-07T17:39:31Z | Exploring Deep and Recurrent Architectures for Optimal Control | Sophisticated multilayer neural networks have achieved state of the art
results on multiple supervised tasks. However, successful applications of such
multilayer networks to control have so far been limited largely to the
perception portion of the control pipeline. In this paper, we explore the
application of deep and recurrent neural networks to a continuous,
high-dimensional locomotion task, where the network is used to represent a
control policy that maps the state of the system (represented by joint angles)
directly to the torques at each joint. By using a recent reinforcement learning
algorithm called guided policy search, we can successfully train neural network
controllers with thousands of parameters, allowing us to compare a variety of
architectures. We discuss the differences between the locomotion control task
and previous supervised perception tasks, present experimental results
comparing various architectures, and discuss future directions in the
application of techniques from deep learning to the problem of optimal control.
| [
"Sergey Levine",
"['Sergey Levine']"
] |
cs.NE cs.LG stat.ML | null | 1311.1780 | null | null | http://arxiv.org/pdf/1311.1780v7 | 2014-09-02T00:53:40Z | 2013-11-07T18:30:37Z | Learned-Norm Pooling for Deep Feedforward and Recurrent Neural Networks | In this paper we propose and investigate a novel nonlinear unit, called $L_p$
unit, for deep neural networks. The proposed $L_p$ unit receives signals from
several projections of a subset of units in the layer below and computes a
normalized $L_p$ norm. We notice two interesting interpretations of the $L_p$
unit. First, the proposed unit can be understood as a generalization of a
number of conventional pooling operators such as average, root-mean-square and
max pooling widely used in, for instance, convolutional neural networks (CNN),
HMAX models and neocognitrons. Furthermore, the $L_p$ unit is, to a certain
degree, similar to the recently proposed maxout unit (Goodfellow et al., 2013)
which achieved the state-of-the-art object recognition results on a number of
benchmark datasets. Secondly, we provide a geometrical interpretation of the
activation function based on which we argue that the $L_p$ unit is more
efficient at representing complex, nonlinear separating boundaries. Each $L_p$
unit defines a superelliptic boundary, with its exact shape defined by the
order $p$. We claim that this makes it possible to model arbitrarily shaped,
curved boundaries more efficiently by combining a few $L_p$ units of different
orders. This insight justifies the need for learning different orders for each
unit in the model. We empirically evaluate the proposed $L_p$ units on a number
of datasets and show that multilayer perceptrons (MLP) consisting of the $L_p$
units achieve the state-of-the-art results on a number of benchmark datasets.
Furthermore, we evaluate the proposed $L_p$ unit on the recently proposed deep
recurrent neural networks (RNN).
| [
"['Caglar Gulcehre' 'Kyunghyun Cho' 'Razvan Pascanu' 'Yoshua Bengio']",
"Caglar Gulcehre, Kyunghyun Cho, Razvan Pascanu and Yoshua Bengio"
] |
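A minimal numpy sketch of the $L_p$ unit described in the abstract above: the unit takes several linear projections of its input and returns their normalized $L_p$ norm, so $p=1$ behaves like average pooling of magnitudes, $p=2$ like root-mean-square pooling, and large $p$ approaches max pooling. The shapes and values are illustrative only.

```python
import numpy as np

def lp_unit(x, W, b, p):
    """L_p unit: normalized L_p norm of N linear projections of the input x.
    W: (N, d) projection weights, b: (N,) biases, p >= 1 (learnable in the paper)."""
    z = np.abs(W @ x + b)                    # magnitudes of the N projections
    return np.mean(z ** p) ** (1.0 / p)      # normalized L_p norm

rng = np.random.default_rng(0)
x, W, b = rng.normal(size=8), rng.normal(size=(4, 8)), rng.normal(size=4)
for p in (1.0, 2.0, 50.0):
    print(p, lp_unit(x, W, b, p))            # p=1 ~ mean, p=2 ~ RMS, large p ~ max
```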
cs.LG cs.GT | null | 1311.1869 | null | null | http://arxiv.org/pdf/1311.1869v1 | 2013-11-08T02:47:40Z | 2013-11-08T02:47:40Z | Optimization, Learning, and Games with Predictable Sequences | We provide several applications of Optimistic Mirror Descent, an online
learning algorithm based on the idea of predictable sequences. First, we
recover the Mirror Prox algorithm for offline optimization, prove an extension
to Holder-smooth functions, and apply the results to saddle-point type
problems. Next, we prove that a version of Optimistic Mirror Descent (which has
a close relation to the Exponential Weights algorithm) can be used by two
strongly-uncoupled players in a finite zero-sum matrix game to converge to the
minimax equilibrium at the rate of O((log T)/T). This addresses a question of
Daskalakis et al 2011. Further, we consider a partial information version of
the problem. We then apply the results to convex programming and exhibit a
simple algorithm for the approximate Max Flow problem.
| [
"['Alexander Rakhlin' 'Karthik Sridharan']",
"Alexander Rakhlin and Karthik Sridharan"
] |
cs.LG stat.ML | null | 1311.1903 | null | null | http://arxiv.org/pdf/1311.1903v1 | 2013-11-08T08:44:11Z | 2013-11-08T08:44:11Z | Moment-based Uniform Deviation Bounds for $k$-means and Friends | Suppose $k$ centers are fit to $m$ points by heuristically minimizing the
$k$-means cost; what is the corresponding fit over the source distribution?
This question is resolved here for distributions with $p\geq 4$ bounded
moments; in particular, the difference between the sample cost and distribution
cost decays with $m$ and $p$ as $m^{\min\{-1/4, -1/2+2/p\}}$. The essential
technical contribution is a mechanism to uniformly control deviations in the
face of unbounded parameter sets, cost functions, and source distributions. To
further demonstrate this mechanism, a soft clustering variant of $k$-means cost
is also considered, namely the log likelihood of a Gaussian mixture, subject to
the constraint that all covariance matrices have bounded spectrum. Lastly, a
rate with refined constants is provided for $k$-means instances possessing some
cluster structure.
| [
"['Matus Telgarsky' 'Sanjoy Dasgupta']",
"Matus Telgarsky, Sanjoy Dasgupta"
] |
cs.LG | null | 1311.1958 | null | null | http://arxiv.org/pdf/1311.1958v3 | 2014-05-20T19:14:08Z | 2013-11-07T12:14:24Z | Constructing Time Series Shape Association Measures: Minkowski Distance
and Data Standardization | It is surprising that, over the last two decades, many works in time series data mining
and clustering were concerned with measures of similarity of time series but
not with measures of association that can be used for measuring possible direct
and inverse relationships between time series. Inverse relationships can exist
between dynamics of prices and sell volumes, between growth patterns of
competitive companies, between well production data in oilfields, between wind
velocity and air pollution concentration etc. The paper develops a theoretical
basis for analysis and construction of time series shape association measures.
Starting from the axioms of time series shape association measures it studies
the methods of construction of measures satisfying these axioms. Several
general methods of construction of such measures suitable for measuring time
series shape similarity and shape association are proposed. Time series shape
association measures based on Minkowski distance and data standardization
methods are considered. The cosine similarity and the Pearson correlation
coefficient are obtained as particular cases of the proposed general methods
that can be used also for construction of new association measures in data
analysis.
| [
"['Ildar Batyrshin']",
"Ildar Batyrshin"
] |
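A quick numeric check of the claim above that the Pearson correlation coefficient arises as a particular case: after z-score standardization, the squared Minkowski ($p=2$) distance between two series of length $n$ equals $2n(1-r)$, so the resulting shape association measure is exactly Pearson's $r$. The series below are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=200))       # a synthetic time series
y = -0.7 * x + rng.normal(size=200)       # an inversely related series plus noise

def zscore(s):
    return (s - s.mean()) / s.std()

xz, yz = zscore(x), zscore(y)
n = len(x)
d2 = np.sum((xz - yz) ** 2)               # squared Minkowski (p=2) distance
assoc = 1.0 - d2 / (2 * n)                # association measure derived from the distance
print(assoc, np.corrcoef(x, y)[0, 1])     # identical up to floating-point error
```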
cs.LG | 10.1162/NECO_a_00600 | 1311.2097 | null | null | http://arxiv.org/abs/1311.2097v3 | 2014-01-23T21:18:34Z | 2013-11-08T22:25:26Z | Risk-sensitive Reinforcement Learning | We derive a family of risk-sensitive reinforcement learning methods for
agents who face sequential decision-making tasks in uncertain environments. By
applying a utility function to the temporal difference (TD) error, nonlinear
transformations are effectively applied not only to the received rewards but
also to the true transition probabilities of the underlying Markov decision
process. When appropriate utility functions are chosen, the agents' behaviors
express key features of human behavior as predicted by prospect theory
(Kahneman and Tversky, 1979), for example different risk-preferences for gains
and losses as well as the shape of subjective probability curves. We derive a
risk-sensitive Q-learning algorithm, which is necessary for modeling human
behavior when transition probabilities are unknown, and prove its convergence.
As a proof of principle for the applicability of the new framework we apply it
to quantify human behavior in a sequential investment task. We find that the
risk-sensitive variant provides a significantly better fit to the behavioral
data and that it leads to an interpretation of the subject's responses which is
indeed consistent with prospect theory. The analysis of simultaneously measured
fMRI signals shows a significant correlation of the risk-sensitive TD error with
BOLD signal change in the ventral striatum. In addition we find a significant
correlation of the risk-sensitive Q-values with neural activity in the
striatum, cingulate cortex and insula, which is not present if standard
Q-values are used.
| [
"Yun Shen, Michael J. Tobia, Tobias Sommer, Klaus Obermayer",
"['Yun Shen' 'Michael J. Tobia' 'Tobias Sommer' 'Klaus Obermayer']"
] |
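A toy sketch of the core mechanism described in the abstract above: a tabular Q-learning update in which a utility function is applied to the TD error, here a prospect-theory-style piecewise-linear utility that weights losses more heavily than gains. The two-action environment, the parameters, and the specific utility are made up for illustration and are not the paper's experimental setup.

```python
import numpy as np

def utility(delta, lam=2.0):
    """Prospect-theory-style utility of the TD error: losses loom larger than gains."""
    return delta if delta >= 0 else lam * delta

rng = np.random.default_rng(0)
gamma, alpha, eps = 0.9, 0.02, 0.2
Q = np.zeros((1, 2))                     # one state, two actions: 0 = safe, 1 = risky
for _ in range(50000):
    s = 0
    a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
    r = 0.1 if a == 0 else (1.0 if rng.random() < 0.5 else -0.7)   # risky: E[r] = 0.15
    s2 = 0
    delta = r + gamma * np.max(Q[s2]) - Q[s, a]   # ordinary TD error
    Q[s, a] += alpha * utility(delta)             # risk-sensitive update on the TD error
print(Q)  # with lam=2 the risky action is typically valued below the safe one,
          # even though its expected reward (0.15) exceeds the safe reward (0.1)
```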
cs.DS cs.DM cs.LG | null | 1311.2110 | null | null | http://arxiv.org/pdf/1311.2110v1 | 2013-11-08T23:42:34Z | 2013-11-08T23:42:34Z | Curvature and Optimal Algorithms for Learning and Minimizing Submodular
Functions | We investigate three related and important problems connected to machine
learning: approximating a submodular function everywhere, learning a submodular
function (in a PAC-like setting [53]), and constrained minimization of
submodular functions. We show that the complexity of all three problems depends
on the 'curvature' of the submodular function, and provide lower and upper
bounds that refine and improve previous results [3, 16, 18, 52]. Our proof
techniques are fairly generic. We either use a black-box transformation of the
function (for approximation and learning), or a transformation of algorithms to
use an appropriate surrogate function (for minimization). Curiously, curvature
has been known to influence approximations for submodular maximization [7, 55],
but its effect on minimization, approximation and learning has hitherto been
open. We complete this picture, and also support our theoretical claims by
empirical results.
| [
"Rishabh Iyer, Stefanie Jegelka and Jeff Bilmes",
"['Rishabh Iyer' 'Stefanie Jegelka' 'Jeff Bilmes']"
] |
cs.LG | null | 1311.2115 | null | null | http://arxiv.org/pdf/1311.2115v7 | 2014-11-30T01:35:55Z | 2013-11-09T00:54:37Z | Fast large-scale optimization by unifying stochastic gradient and
quasi-Newton methods | We present an algorithm for minimizing a sum of functions that combines the
computational efficiency of stochastic gradient descent (SGD) with the second
order curvature information leveraged by quasi-Newton methods. We unify these
disparate approaches by maintaining an independent Hessian approximation for
each contributing function in the sum. We maintain computational tractability
and limit memory requirements even for high dimensional optimization problems
by storing and manipulating these quadratic approximations in a shared, time
evolving, low dimensional subspace. Each update step requires only a single
contributing function or minibatch evaluation (as in SGD), and each step is
scaled using an approximate inverse Hessian and little to no adjustment of
hyperparameters is required (as is typical for quasi-Newton methods). This
algorithm contrasts with earlier stochastic second order techniques that treat
the Hessian of each contributing function as a noisy approximation to the full
Hessian, rather than as a target for direct estimation. We experimentally
demonstrate improved convergence on seven diverse optimization problems. The
algorithm is released as open source Python and MATLAB packages.
| [
"Jascha Sohl-Dickstein, Ben Poole, Surya Ganguli",
"['Jascha Sohl-Dickstein' 'Ben Poole' 'Surya Ganguli']"
] |
cs.LG | null | 1311.2137 | null | null | http://arxiv.org/pdf/1311.2137v1 | 2013-11-09T06:15:15Z | 2013-11-09T06:15:15Z | A Structured Prediction Approach for Missing Value Imputation | Missing value imputation is an important practical problem. There is a large
body of work on it, but there does not exist any work that formulates the
problem in a structured output setting. Also, most applications have
constraints on the imputed data, for example on the distribution associated
with each variable. None of the existing imputation methods use these
constraints. In this paper we propose a structured output approach for missing
value imputation that also incorporates domain constraints. We focus on large
margin models, but it is easy to extend the ideas to probabilistic models. We
deal with the intractable inference step in learning via a piecewise training
technique that is simple, efficient, and effective. Comparison with existing
state-of-the-art and baseline imputation methods shows that our method gives
significantly improved performance on the Hamming loss measure.
| [
"['Rahul Kidambi' 'Vinod Nair' 'Sundararajan Sellamanickam'\n 'S. Sathiya Keerthi']",
"Rahul Kidambi, Vinod Nair, Sundararajan Sellamanickam, S. Sathiya\n Keerthi"
] |
cs.LG | null | 1311.2139 | null | null | http://arxiv.org/pdf/1311.2139v1 | 2013-11-09T06:47:22Z | 2013-11-09T06:47:22Z | Large Margin Semi-supervised Structured Output Learning | In structured output learning, obtaining labelled data for real-world
applications is usually costly, while unlabelled examples are available in
abundance. Semi-supervised structured classification has been developed to
handle large amounts of unlabelled structured data. In this work, we consider
semi-supervised structural SVMs with domain constraints. The optimization
problem, which in general is not convex, contains the loss terms associated
with the labelled and unlabelled examples along with the domain constraints. We
propose a simple optimization approach, which alternates between solving a
supervised learning problem and a constraint matching problem. Solving the
constraint matching problem is difficult for structured prediction, and we
propose an efficient and effective hill-climbing method to solve it. The
alternating optimization is carried out within a deterministic annealing
framework, which helps in effective constraint matching, and avoiding local
minima which are not very useful. The algorithm is simple to implement and
achieves comparable generalization performance on benchmark datasets.
| [
"P. Balamurugan, Shirish Shevade, Sundararajan Sellamanickam",
"['P. Balamurugan' 'Shirish Shevade' 'Sundararajan Sellamanickam']"
] |
cs.IT cs.LG math.IT stat.ML | null | 1311.2150 | null | null | http://arxiv.org/pdf/1311.2150v1 | 2013-11-09T08:28:27Z | 2013-11-09T08:28:27Z | Pattern-Coupled Sparse Bayesian Learning for Recovery of Block-Sparse
Signals | We consider the problem of recovering block-sparse signals whose structures
are unknown \emph{a priori}. Block-sparse signals with nonzero coefficients
occurring in clusters arise naturally in many practical scenarios. However, the
knowledge of the block structure is usually unavailable in practice. In this
paper, we develop a new sparse Bayesian learning method for recovery of
block-sparse signals with unknown cluster patterns. Specifically, a
pattern-coupled hierarchical Gaussian prior model is introduced to characterize
the statistical dependencies among coefficients, in which a set of
hyperparameters are employed to control the sparsity of signal coefficients.
Unlike the conventional sparse Bayesian learning framework in which each
individual hyperparameter is associated independently with each coefficient, in
this paper, the prior for each coefficient not only involves its own
hyperparameter, but also the hyperparameters of its immediate neighbors. In
this way, the sparsity patterns of neighboring coefficients are related
to each other and the hierarchical model has the potential to encourage
structured-sparse solutions. The hyperparameters, along with the sparse signal,
are learned by maximizing their posterior probability via an
expectation-maximization (EM) algorithm. Numerical results show that the
proposed algorithm consistently outperforms other existing methods in
a series of experiments.
| [
"Jun Fang, Yanning Shen, Hongbin Li (IEEE), and Pu Wang",
"['Jun Fang' 'Yanning Shen' 'Hongbin Li' 'Pu Wang']"
] |
stat.ML cs.LG math.ST stat.TH | null | 1311.2234 | null | null | http://arxiv.org/pdf/1311.2234v2 | 2014-03-09T02:30:26Z | 2013-11-10T00:44:01Z | FuSSO: Functional Shrinkage and Selection Operator | We present the FuSSO, a functional analogue to the LASSO, that efficiently
finds a sparse set of functional input covariates to regress a real-valued
response against. The FuSSO does so in a semi-parametric fashion, making no
parametric assumptions about the nature of input functional covariates and
assuming a linear form to the mapping of functional covariates to the response.
We provide a statistical backing for use of the FuSSO via proof of asymptotic
sparsistency under various conditions. Furthermore, we observe good results on
both synthetic and real-world data.
| [
"Junier B. Oliva, Barnabas Poczos, Timothy Verstynen, Aarti Singh, Jeff\n Schneider, Fang-Cheng Yeh, Wen-Yih Tseng",
"['Junier B. Oliva' 'Barnabas Poczos' 'Timothy Verstynen' 'Aarti Singh'\n 'Jeff Schneider' 'Fang-Cheng Yeh' 'Wen-Yih Tseng']"
] |
stat.ML cs.LG math.ST stat.TH | null | 1311.2236 | null | null | http://arxiv.org/pdf/1311.2236v2 | 2014-03-09T03:41:35Z | 2013-11-10T01:17:19Z | Fast Distribution To Real Regression | We study the problem of distribution to real-value regression, where one aims
to regress a mapping $f$ that takes in a distribution input covariate $P\in
\mathcal{I}$ (for a non-parametric family of distributions $\mathcal{I}$) and
outputs a real-valued response $Y=f(P) + \epsilon$. This setting was recently
studied, and a "Kernel-Kernel" estimator was introduced and shown to have a
polynomial rate of convergence. However, evaluating a new prediction with the
Kernel-Kernel estimator scales as $\Omega(N)$. This causes the difficult
situation where a large amount of data may be necessary for a low estimation
risk, but the computation cost of estimation becomes infeasible when the
data-set is too large. To this end, we propose the Double-Basis estimator,
which looks to alleviate this big data problem in two ways: first, the
Double-Basis estimator is shown to have a computation complexity that is
independent of the number of instances $N$ when evaluating new predictions
after training; secondly, the Double-Basis estimator is shown to have a fast
rate of convergence for a general class of mappings $f\in\mathcal{F}$.
| [
"['Junier B. Oliva' 'Willie Neiswanger' 'Barnabas Poczos' 'Jeff Schneider'\n 'Eric Xing']",
"Junier B. Oliva, Willie Neiswanger, Barnabas Poczos, Jeff Schneider,\n Eric Xing"
] |
null | null | 1311.2241 | null | null | http://arxiv.org/pdf/1311.2241v1 | 2013-11-10T02:39:48Z | 2013-11-10T02:39:48Z | Learning Gaussian Graphical Models with Observed or Latent FVSs | Gaussian Graphical Models (GGMs) or Gauss Markov random fields are widely used in many applications, and the trade-off between the modeling capacity and the efficiency of learning and inference has been an important research problem. In this paper, we study the family of GGMs with small feedback vertex sets (FVSs), where an FVS is a set of nodes whose removal breaks all the cycles. Exact inference such as computing the marginal distributions and the partition function has complexity $O(k^{2}n)$ using message-passing algorithms, where k is the size of the FVS, and n is the total number of nodes. We propose efficient structure learning algorithms for two cases: 1) All nodes are observed, which is useful in modeling social or flight networks where the FVS nodes often correspond to a small number of high-degree nodes, or hubs, while the rest of the network is modeled by a tree. Regardless of the maximum degree, without knowing the full graph structure, we can exactly compute the maximum likelihood estimate in $O(kn^2+n^2\log n)$ if the FVS is known or in polynomial time if the FVS is unknown but has bounded size. 2) The FVS nodes are latent variables, where structure learning is equivalent to decomposing an inverse covariance matrix (exactly or approximately) into the sum of a tree-structured matrix and a low-rank matrix. By incorporating efficient inference into the learning steps, we can obtain a learning algorithm using alternating low-rank correction with complexity $O(kn^{2}+n^{2}\log n)$ per iteration. We also perform experiments using both synthetic data as well as real data of flight delays to demonstrate the modeling capacity with FVSs of various sizes. | [
"['Ying Liu' 'Alan S. Willsky']"
] |
cs.CL cs.LG | null | 1311.2252 | null | null | http://arxiv.org/pdf/1311.2252v1 | 2013-11-10T09:15:16Z | 2013-11-10T09:15:16Z | Semantic Sort: A Supervised Approach to Personalized Semantic
Relatedness | We propose and study a novel supervised approach to learning statistical
semantic relatedness models from subjectively annotated training examples. The
proposed semantic model consists of parameterized co-occurrence statistics
associated with textual units of a large background knowledge corpus. We
present an efficient algorithm for learning such semantic models from a
training sample of relatedness preferences. Our method is corpus independent
and can essentially rely on any sufficiently large (unstructured) collection of
coherent texts. Moreover, the approach facilitates the fitting of semantic
models for specific users or groups of users. We present the results of an
extensive range of experiments, from small to large scale, indicating that the
proposed method is effective and competitive with the state-of-the-art.
| [
"['Ran El-Yaniv' 'David Yanay']",
"Ran El-Yaniv and David Yanay"
] |
cs.LG | null | 1311.2271 | null | null | http://arxiv.org/pdf/1311.2271v1 | 2013-11-10T13:28:19Z | 2013-11-10T13:28:19Z | More data speeds up training time in learning halfspaces over sparse
vectors | The increased availability of data in recent years has led several authors to
ask whether it is possible to use data as a {\em computational} resource. That
is, if more data is available, beyond the sample complexity limit, is it
possible to use the extra examples to speed up the computation time required to
perform the learning task?
We give the first positive answer to this question for a {\em natural
supervised learning problem} --- we consider agnostic PAC learning of
halfspaces over $3$-sparse vectors in $\{-1,1,0\}^n$. This class is
inefficiently learnable using $O\left(n/\epsilon^2\right)$ examples. Our main
contribution is a novel, non-cryptographic, methodology for establishing
computational-statistical gaps, which allows us to show that, under a widely
believed assumption that refuting random $\mathrm{3CNF}$ formulas is hard, it
is impossible to efficiently learn this class using only
$O\left(n/\epsilon^2\right)$ examples. We further show that under stronger
hardness assumptions, even $O\left(n^{1.499}/\epsilon^2\right)$ examples do not
suffice. On the other hand, we show a new algorithm that learns this class
efficiently using $\tilde{\Omega}\left(n^2/\epsilon^2\right)$ examples. This
formally establishes the tradeoff between sample and computational complexity
for a natural supervised learning problem.
| [
"['Amit Daniely' 'Nati Linial' 'Shai Shalev Shwartz']",
"Amit Daniely, Nati Linial, Shai Shalev Shwartz"
] |
cs.LG cs.CC | null | 1311.2272 | null | null | http://arxiv.org/pdf/1311.2272v2 | 2014-03-09T19:11:40Z | 2013-11-10T13:35:50Z | From average case complexity to improper learning complexity | The basic problem in the PAC model of computational learning theory is to
determine which hypothesis classes are efficiently learnable. There is
presently a dearth of results showing hardness of learning problems. Moreover,
the existing lower bounds fall short of the best known algorithms.
The biggest challenge in proving complexity results is to establish hardness
of {\em improper learning} (a.k.a. representation independent learning). The
difficulty in proving lower bounds for improper learning is that the standard
reductions from $\mathbf{NP}$-hard problems do not seem to apply in this
context. There is essentially only one known approach to proving lower bounds
on improper learning. It was initiated in (Kearns and Valiant 89) and relies on
cryptographic assumptions.
We introduce a new technique for proving hardness of improper learning, based
on reductions from problems that are hard on average. We put forward a (fairly
strong) generalization of Feige's assumption (Feige 02) about the complexity of
refuting random constraint satisfaction problems. Combining this assumption
with our new technique yields far reaching implications. In particular,
1. Learning $\mathrm{DNF}$'s is hard.
2. Agnostically learning halfspaces with a constant approximation ratio is
hard.
3. Learning an intersection of $\omega(1)$ halfspaces is hard.
| [
"Amit Daniely, Nati Linial, Shai Shalev-Shwartz",
"['Amit Daniely' 'Nati Linial' 'Shai Shalev-Shwartz']"
] |
cs.LG | null | 1311.2276 | null | null | http://arxiv.org/pdf/1311.2276v1 | 2013-11-10T14:17:47Z | 2013-11-10T14:17:47Z | A Quantitative Evaluation Framework for Missing Value Imputation
Algorithms | We consider the problem of quantitatively evaluating missing value imputation
algorithms. Given a dataset with missing values and a choice of several
imputation algorithms to fill them in, there is currently no principled way to
rank the algorithms using a quantitative metric. We develop a framework based
on treating imputation evaluation as a problem of comparing two distributions
and show how it can be used to compute quantitative metrics. We present an
efficient procedure for applying this framework to practical datasets,
demonstrate several metrics derived from the existing literature on comparing
distributions, and propose a new metric called Neighborhood-based Dissimilarity
Score which is fast to compute and provides similar results. Results are shown
on several datasets, metrics, and imputation algorithms.
| [
"['Vinod Nair' 'Rahul Kidambi' 'Sundararajan Sellamanickam'\n 'S. Sathiya Keerthi' 'Johannes Gehrke' 'Vijay Narayanan']",
"Vinod Nair, Rahul Kidambi, Sundararajan Sellamanickam, S. Sathiya\n Keerthi, Johannes Gehrke, Vijay Narayanan"
] |
cs.LG | null | 1311.2334 | null | null | http://arxiv.org/pdf/1311.2334v4 | 2014-01-29T20:08:17Z | 2013-11-11T02:37:16Z | Embed and Conquer: Scalable Embeddings for Kernel k-Means on MapReduce | The kernel $k$-means is an effective method for data clustering which extends
the commonly-used $k$-means algorithm to work on a similarity matrix over
complex data structures. The kernel $k$-means algorithm is however
computationally very complex as it requires the complete data matrix to be
calculated and stored. Further, the kernelized nature of the kernel $k$-means
algorithm hinders the parallelization of its computations on modern
infrastructures for distributed computing. In this paper, we define a
family of kernel-based low-dimensional embeddings that allows for scaling
kernel $k$-means on MapReduce via an efficient and unified parallelization
strategy. Afterwards, we propose two methods for low-dimensional embedding that
adhere to our definition of the embedding family. Exploiting the proposed
parallelization strategy, we present two scalable MapReduce algorithms for
kernel $k$-means. We demonstrate the effectiveness and efficiency of the
proposed algorithms through an empirical evaluation on benchmark data sets.
| [
"['Ahmed Elgohary' 'Ahmed K. Farahat' 'Mohamed S. Kamel' 'Fakhri Karray']",
"Ahmed Elgohary, Ahmed K. Farahat, Mohamed S. Kamel, Fakhri Karray"
] |
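A rough sketch of the general recipe described in the abstract above, assuming scikit-learn: replace exact kernel $k$-means with ordinary $k$-means run on a low-dimensional kernel-based embedding, which avoids forming the full kernel matrix and parallelizes easily. The Nystroem approximation below is only a stand-in for the specific embeddings proposed in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons
from sklearn.kernel_approximation import Nystroem

X, _ = make_moons(n_samples=3000, noise=0.05, random_state=0)

# Embed: a kernel-based low-dimensional embedding instead of the full n x n kernel matrix.
embedding = Nystroem(kernel="rbf", gamma=5.0, n_components=100, random_state=0)
Z = embedding.fit_transform(X)            # shape (n_samples, 100)

# Cluster: plain k-means in the embedded space (easy to distribute, e.g. on MapReduce).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
print(np.bincount(labels))
```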
cs.LG | null | 1311.2378 | null | null | http://arxiv.org/pdf/1311.2378v1 | 2013-11-11T08:26:09Z | 2013-11-11T08:26:09Z | An Empirical Evaluation of Sequence-Tagging Trainers | The task of assigning label sequences to a set of observed sequences is
common in computational linguistics. Several models for sequence labeling have
been proposed over the last few years. Here, we focus on discriminative models
for sequence labeling. Many batch and online (updating model parameters after
visiting each example) learning algorithms have been proposed in the
literature. On large datasets, online algorithms are preferred as batch
learning methods are slow. These online algorithms were designed to solve
either a primal or a dual problem. However, there has been no systematic
comparison of these algorithms in terms of their speed, generalization
performance (accuracy/likelihood) and their ability to achieve steady state
generalization performance fast. With this aim, we compare different algorithms
and make recommendations, useful for a practitioner. We conclude that the
selection of an algorithm for sequence labeling depends on the evaluation
criterion used and its implementation simplicity.
| [
"['P. Balamurugan' 'Shirish Shevade' 'S. Sundararajan' 'S. S Keerthi']",
"P. Balamurugan, Shirish Shevade, S. Sundararajan and S. S Keerthi"
] |
math.ST cs.LG stat.ML stat.TH | null | 1311.2483 | null | null | http://arxiv.org/pdf/1311.2483v1 | 2013-11-11T16:30:06Z | 2013-11-11T16:30:06Z | Global Sensitivity Analysis with Dependence Measures | Global sensitivity analysis with variance-based measures suffers from several
theoretical and practical limitations, since they focus only on the variance of
the output and handle multivariate variables in a limited way. In this paper,
we introduce a new class of sensitivity indices based on dependence measures
which overcomes these insufficiencies. Our approach originates from the idea to
compare the output distribution with its conditional counterpart when one of
the input variables is fixed. We establish that this comparison yields
previously proposed indices when it is performed with Csiszar f-divergences, as
well as sensitivity indices which are well-known dependence measures between
random variables. This leads us to investigate completely new sensitivity
indices based on recent state-of-the-art dependence measures, such as distance
correlation and the Hilbert-Schmidt independence criterion. We also emphasize
the potential of feature selection techniques relying on such dependence
measures as alternatives to screening in high dimension.
| [
"['Sébastien Da Veiga']",
"S\\'ebastien Da Veiga (IFPEN, - M\\'ethodes d'Analyse Stochastique des\n Codes et Traitements Num\\'eriques)"
] |
cs.DS cs.LG | null | 1311.2495 | null | null | http://arxiv.org/pdf/1311.2495v4 | 2015-02-03T23:43:37Z | 2013-11-11T16:47:25Z | The Noisy Power Method: A Meta Algorithm with Applications | We provide a new robust convergence analysis of the well-known power method
for computing the dominant singular vectors of a matrix that we call the noisy
power method. Our result characterizes the convergence behavior of the
algorithm when a significant amount of noise is introduced after each
matrix-vector multiplication. The noisy power method can be seen as a
meta-algorithm that has recently found a number of important applications in a
broad range of machine learning problems including alternating minimization for
matrix completion, streaming principal component analysis (PCA), and
privacy-preserving spectral analysis. Our general analysis subsumes several
existing ad-hoc convergence bounds and resolves a number of open problems in
multiple applications including streaming PCA and privacy-preserving singular
vector computation.
| [
"['Moritz Hardt' 'Eric Price']",
"Moritz Hardt and Eric Price"
] |
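A compact numpy sketch of the noisy power method analyzed above: ordinary power iteration for the dominant singular vector, with noise injected after every matrix-vector multiplication. Gaussian noise and the chosen scales are illustrative; in the applications the perturbation comes from sketching, streaming, or privacy mechanisms.

```python
import numpy as np

def noisy_power_method(A, iters=100, noise_scale=1e-2, seed=0):
    """Dominant right singular vector of A via power iteration on A^T A,
    with noise added after each matrix-vector multiplication."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=A.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A.T @ (A @ v)
        w += noise_scale * rng.normal(size=w.shape)   # per-iteration noise
        v = w / np.linalg.norm(w)
    return v

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 50))
v_noisy = noisy_power_method(A)
v_exact = np.linalg.svd(A)[2][0]          # exact top right singular vector
print(abs(v_noisy @ v_exact))             # close to 1 despite the noise
```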
cs.LG stat.ML | null | 1311.2503 | null | null | http://arxiv.org/pdf/1311.2503v1 | 2013-11-11T17:05:22Z | 2013-11-11T17:05:22Z | Predictable Feature Analysis | Every organism in an environment, whether biological, robotic or virtual,
must be able to predict certain aspects of its environment in order to survive
or perform whatever task is intended. It needs a model that is capable of
estimating the consequences of possible actions, so that planning, control, and
decision-making become feasible. For scientific purposes, such models are
usually created in a problem specific manner using differential equations and
other techniques from control- and system-theory. In contrast to that, we aim
for an unsupervised approach that builds up the desired model in a
self-organized fashion. Inspired by Slow Feature Analysis (SFA), our approach
is to extract sub-signals from the input that behave as predictably as
possible. These "predictable features" are highly relevant for modeling,
because predictability is a desired property of the needed
consequence-estimating model by definition. In our approach, we measure
predictability with respect to a certain prediction model. We focus here on the
solution of the arising optimization problem and present a tractable algorithm
based on algebraic methods which we call Predictable Feature Analysis (PFA). We
prove that the algorithm finds the globally optimal signal, if this signal can
be predicted with low error. To deal with cases where the optimal signal has a
significant prediction error, we provide a robust, heuristically motivated
variant of the algorithm and verify it empirically. Additionally, we give
formal criteria a prediction-model must meet to be suitable for measuring
predictability in the PFA setting and also provide a suitable default-model
along with a formal proof that it meets these criteria.
| [
"['Stefan Richthofer' 'Laurenz Wiskott']",
"Stefan Richthofer, Laurenz Wiskott"
] |
cs.LG stat.ML | null | 1311.2547 | null | null | http://arxiv.org/pdf/1311.2547v4 | 2014-07-30T23:40:04Z | 2013-11-11T19:50:51Z | Learning Mixtures of Linear Classifiers | We consider a discriminative learning (regression) problem, whereby the
regression function is a convex combination of k linear classifiers. Existing
approaches are based on the EM algorithm, or similar techniques, without
provable guarantees. We develop a simple method based on spectral techniques
and a `mirroring' trick, that discovers the subspace spanned by the
classifiers' parameter vectors. Under a probabilistic assumption on the feature
vector distribution, we prove that this approach has nearly optimal statistical
efficiency.
| [
"Yuekai Sun, Stratis Ioannidis, Andrea Montanari",
"['Yuekai Sun' 'Stratis Ioannidis' 'Andrea Montanari']"
] |
cs.LG cs.DC stat.ML | null | 1311.2663 | null | null | http://arxiv.org/pdf/1311.2663v5 | 2014-02-01T14:35:04Z | 2013-11-12T02:36:03Z | DinTucker: Scaling up Gaussian process models on multidimensional arrays
with billions of elements | Infinite Tucker Decomposition (InfTucker) and random function prior models,
as nonparametric Bayesian models on infinite exchangeable arrays, are more
powerful models than widely-used multilinear factorization methods including
Tucker and PARAFAC decomposition, (partly) due to their capability of modeling
nonlinear relationships between array elements. Despite their great predictive
performance and sound theoretical foundations, they cannot handle massive data
due to a prohibitively high training time. To overcome this limitation, we
present Distributed Infinite Tucker (DINTUCKER), a large-scale nonlinear tensor
decomposition algorithm on MAPREDUCE. While maintaining the predictive accuracy
of InfTucker, it is scalable on massive data. DINTUCKER is based on a new
hierarchical Bayesian model that enables local training of InfTucker on
subarrays and information integration from all local training results. We use
distributed stochastic gradient descent, coupled with variational inference, to
train this model. We apply DINTUCKER to multidimensional arrays with billions
of elements from applications in the "Read the Web" project (Carlson et al.,
2010) and in information security and compare it with the state-of-the-art
large-scale tensor decomposition method, GigaTensor. On both datasets,
DINTUCKER achieves significantly higher prediction accuracy with less
computational time.
| [
"['Shandian Zhe' 'Yuan Qi' 'Youngja Park' 'Ian Molloy' 'Suresh Chari']",
"Shandian Zhe and Yuan Qi and Youngja Park and Ian Molloy and Suresh\n Chari"
] |
cs.NI cs.CR cs.LG | 10.5121/csit.2013.3704 | 1311.2677 | null | null | http://arxiv.org/abs/1311.2677v1 | 2013-11-12T05:32:48Z | 2013-11-12T05:32:48Z | Sampling Based Approaches to Handle Imbalances in Network Traffic
Dataset for Machine Learning Techniques | Network traffic data is huge, varying and imbalanced because various classes
are not equally distributed. Machine learning (ML) algorithms for traffic
analysis use samples from this data both for training and to recommend the
actions to be taken by the network administrators. Due to imbalances in
dataset, it is difficult to train machine learning algorithms for traffic
analysis and these may give biased or false results leading to serious
degradation in performance of these algorithms. Various techniques can be
applied during sampling to minimize the effect of imbalanced instances. In this
paper various sampling techniques have been analysed in order to compare the
decrease in variation in imbalances of network traffic datasets sampled for
these algorithms. Various parameters like missing classes in samples,
probability of sampling of the different instances have been considered for
comparison.
| [
"Raman Singh, Harish Kumar and R.K. Singla",
"['Raman Singh' 'Harish Kumar' 'R. K. Singla']"
] |
stat.ML cs.LG cs.SI math.ST physics.soc-ph stat.TH | null | 1311.2694 | null | null | http://arxiv.org/pdf/1311.2694v2 | 2013-11-20T05:40:00Z | 2013-11-12T07:00:13Z | Hypothesis Testing for Automated Community Detection in Networks | Community detection in networks is a key exploratory tool with applications
in a diverse set of areas, ranging from finding communities in social and
biological networks to identifying link farms in the World Wide Web. The
problem of finding communities or clusters in a network has received much
attention from statistics, physics and computer science. However, most
clustering algorithms assume knowledge of the number of clusters k. In this
paper we propose to automatically determine k in a graph generated from a
Stochastic Blockmodel. Our main contribution is twofold: first, we
theoretically establish the limiting distribution of the principal eigenvalue
of the suitably centered and scaled adjacency matrix, and use that distribution
for our hypothesis test. Second, we use this test to design a recursive
bipartitioning algorithm. Using quantifiable classification tasks on real world
networks with ground truth, we show that our algorithm outperforms existing
probabilistic models for learning overlapping clusters, and on unlabeled
networks, we show that we uncover nested community structure.
| [
"Peter J. Bickel, Purnamrita Sarkar",
"['Peter J. Bickel' 'Purnamrita Sarkar']"
] |
cs.NE cs.LG | null | 1311.2746 | null | null | http://arxiv.org/pdf/1311.2746v1 | 2013-11-12T12:03:40Z | 2013-11-12T12:03:40Z | Deep neural networks for single channel source separation | In this paper, a novel approach for single channel source separation (SCSS)
using a deep neural network (DNN) architecture is introduced. Unlike previous
studies in which DNN and other classifiers were used for classifying
time-frequency bins to obtain hard masks for each source, we use the DNN to
classify estimated source spectra to check for their validity during
separation. In the training stage, the training data for the source signals are
used to train a DNN. In the separation stage, the trained DNN is utilized to
aid in the estimation of each source in the mixed signal. The single channel source
separation problem is formulated as an energy minimization problem where each
source spectrum estimate is encouraged to fit the trained DNN model and the
mixed signal spectrum is encouraged to be written as a weighted sum of the
estimated source spectra. The proposed approach works regardless of the energy
scale differences between the source signals in the training and separation
stages. Nonnegative matrix factorization (NMF) is used to initialize the DNN
estimate for each source. The experimental results show that using DNN
initialized by NMF for source separation improves the quality of the separated
signal compared with using NMF for source separation.
| [
"Emad M. Grais, Mehmet Umut Sen, Hakan Erdogan",
"['Emad M. Grais' 'Mehmet Umut Sen' 'Hakan Erdogan']"
] |
math.ST cs.LG stat.TH | null | 1311.2799 | null | null | http://arxiv.org/pdf/1311.2799v1 | 2013-11-12T14:53:51Z | 2013-11-12T14:53:51Z | Aggregation of Affine Estimators | We consider the problem of aggregating a general collection of affine
estimators for fixed design regression. Relevant examples include some commonly
used statistical estimators such as least squares, ridge and robust least
squares estimators. Dalalyan and Salmon (2012) have established that, for this
problem, exponentially weighted (EW) model selection aggregation leads to sharp
oracle inequalities in expectation, but similar bounds in deviation were not
previously known. While results indicate that the same aggregation scheme may
not satisfy sharp oracle inequalities with high probability, we prove a
weaker notion of oracle inequality for EW that holds with high probability.
Moreover, using a generalization of the newly introduced $Q$-aggregation scheme
we also prove sharp oracle inequalities that hold with high probability.
Finally, we apply our results to universal aggregation and show that our
proposed estimator leads simultaneously to all the best known bounds for
aggregation, including $\ell_q$-aggregation, $q \in (0,1)$, with high
probability.
| [
"['Dong Dai' 'Philippe Rigollet' 'Lucy Xia' 'Tong Zhang']",
"Dong Dai, Philippe Rigollet, Lucy Xia and Tong Zhang"
] |
stat.ML cs.LG | null | 1311.2838 | null | null | http://arxiv.org/pdf/1311.2838v2 | 2014-05-10T10:45:51Z | 2013-11-12T17:05:04Z | A PAC-Bayesian bound for Lifelong Learning | Transfer learning has received a lot of attention in the machine learning
community over the last years, and several effective algorithms have been
developed. However, relatively little is known about their theoretical
properties, especially in the setting of lifelong learning, where the goal is
to transfer information to tasks for which no data have been observed so far.
In this work we study lifelong learning from a theoretical perspective. Our
main result is a PAC-Bayesian generalization bound that offers a unified view
on existing paradigms for transfer learning, such as the transfer of parameters
or the transfer of low-dimensional representations. We also use the bound to
derive two principled lifelong learning algorithms, and we show that these
yield results comparable with existing methods.
| [
"Anastasia Pentina and Christoph H. Lampert",
"['Anastasia Pentina' 'Christoph H. Lampert']"
] |
cs.LG cs.NA | null | 1311.2854 | null | null | http://arxiv.org/pdf/1311.2854v3 | 2015-05-12T14:39:32Z | 2013-11-12T17:42:34Z | Spectral Clustering via the Power Method -- Provably | Spectral clustering is one of the most important algorithms in data mining
and machine intelligence; however, its computational complexity limits its
application to truly large scale data analysis. The computational bottleneck in
spectral clustering is computing a few of the top eigenvectors of the
(normalized) Laplacian matrix corresponding to the graph representing the data
to be clustered. One way to speed up the computation of these eigenvectors is
to use the "power method" from the numerical linear algebra literature.
Although the power method has been empirically used to speed up spectral
clustering, the theory behind this approach, to the best of our knowledge,
remains unexplored. This paper provides the \emph{first} such rigorous
theoretical justification, arguing that a small number of power iterations
suffices to obtain near-optimal partitionings using the approximate
eigenvectors. Specifically, we prove that solving the $k$-means clustering
problem on the approximate eigenvectors obtained via the power method gives an
additive-error approximation to solving the $k$-means problem on the optimal
eigenvectors.
| [
"Christos Boutsidis and Alex Gittens and Prabhanjan Kambadur",
"['Christos Boutsidis' 'Alex Gittens' 'Prabhanjan Kambadur']"
] |
cs.LG cs.SI stat.ML | null | 1311.2889 | null | null | http://arxiv.org/pdf/1311.2889v1 | 2013-11-01T14:24:32Z | 2013-11-01T14:24:32Z | Reinforcement Learning for Matrix Computations: PageRank as an Example | Reinforcement learning has gained wide popularity as a technique for
simulation-driven approximate dynamic programming. A lesser-known aspect is that
the very reasons that make it effective in dynamic programming can also be
leveraged to use it in distributed schemes for certain matrix computations
involving non-negative matrices. In this spirit, we propose a reinforcement
learning algorithm for PageRank computation that is fashioned after analogous
schemes for approximate dynamic programming. The algorithm has the advantage of
ease of distributed implementation and more importantly, of being model-free,
i.e., not dependent on any specific assumptions about the transition
probabilities in the random web-surfer model. We analyze its convergence and
finite time behavior and present some supporting numerical experiments.
| [
"Vivek S. Borkar and Adwaitvedant S. Mathkar",
"['Vivek S. Borkar' 'Adwaitvedant S. Mathkar']"
] |
cs.LG cs.DS stat.ML | null | 1311.2891 | null | null | http://arxiv.org/pdf/1311.2891v3 | 2014-02-18T03:34:38Z | 2013-11-12T19:21:03Z | The More, the Merrier: the Blessing of Dimensionality for Learning Large
Gaussian Mixtures | In this paper we show that very large mixtures of Gaussians are efficiently
learnable in high dimension. More precisely, we prove that a mixture with known
identical covariance matrices whose number of components is a polynomial of any
fixed degree in the dimension n is polynomially learnable as long as a certain
non-degeneracy condition on the means is satisfied. It turns out that this
condition is generic in the sense of smoothed complexity, as soon as the
dimensionality of the space is high enough. Moreover, we prove that no such
condition can possibly exist in low dimension and the problem of learning the
parameters is generically hard. In contrast, much of the existing work on
Gaussian Mixtures relies on low-dimensional projections and thus hits an
artificial barrier. Our main result on mixture recovery relies on a new
"Poissonization"-based technique, which transforms a mixture of Gaussians to a
linear map of a product distribution. The problem of learning this map can be
efficiently solved using some recent results on tensor decompositions and
Independent Component Analysis (ICA), thus giving an algorithm for recovering
the mixture. In addition, we combine our low-dimensional hardness results for
Gaussian mixtures with Poissonization to show how to embed difficult instances
of low-dimensional Gaussian mixtures into the ICA setting, thus establishing
exponential information-theoretic lower bounds for underdetermined ICA in low
dimension. To the best of our knowledge, this is the first such result in the
literature. In addition to contributing to the problem of Gaussian mixture
learning, we believe that this work is among the first steps toward better
understanding the rare phenomenon of the "blessing of dimensionality" in the
computational aspects of statistical inference.
| [
"['Joseph Anderson' 'Mikhail Belkin' 'Navin Goyal' 'Luis Rademacher'\n 'James Voss']",
"Joseph Anderson, Mikhail Belkin, Navin Goyal, Luis Rademacher, James\n Voss"
] |
stat.ML cs.LG stat.ME | null | 1311.2971 | null | null | http://arxiv.org/pdf/1311.2971v1 | 2013-11-12T22:15:26Z | 2013-11-12T22:15:26Z | Approximate Inference in Continuous Determinantal Point Processes | Determinantal point processes (DPPs) are random point processes well-suited
for modeling repulsion. In machine learning, the focus of DPP-based models has
been on diverse subset selection from a discrete and finite base set. This
discrete setting admits an efficient sampling algorithm based on the
eigendecomposition of the defining kernel matrix. Recently, there has been
growing interest in using DPPs defined on continuous spaces. While the
discrete-DPP sampler extends formally to the continuous case, computationally,
the steps required are not tractable in general. In this paper, we present two
efficient DPP sampling schemes that apply to a wide range of kernel functions:
one based on low rank approximations via Nystrom and random Fourier feature
techniques and another based on Gibbs sampling. We demonstrate the utility of
continuous DPPs in repulsive mixture modeling and synthesizing human poses
spanning activity spaces.
| [
"Raja Hafiz Affandi, Emily B. Fox, Ben Taskar",
"['Raja Hafiz Affandi' 'Emily B. Fox' 'Ben Taskar']"
] |
stat.ML cs.CC cs.IT cs.LG math.IT | null | 1311.2972 | null | null | http://arxiv.org/pdf/1311.2972v2 | 2014-05-17T19:38:34Z | 2013-11-12T22:15:35Z | Learning Mixtures of Discrete Product Distributions using Spectral
Decompositions | We study the problem of learning a distribution from samples, when the
underlying distribution is a mixture of product distributions over discrete
domains. This problem is motivated by several practical applications such as
crowd-sourcing, recommendation systems, and learning Boolean functions. The
existing solutions either heavily rely on the fact that the number of
components in the mixtures is finite or have sample/time complexity that is
exponential in the number of components. In this paper, we introduce a
polynomial time/sample complexity method for learning a mixture of $r$ discrete
product distributions over $\{1, 2, \dots, \ell\}^n$, for general $\ell$ and
$r$. We show that our approach is statistically consistent and further provide
finite sample guarantees.
We use techniques from the recent work on tensor decompositions for
higher-order moment matching. A crucial step in these moment matching methods
is to construct a certain matrix and a certain tensor with low-rank spectral
decompositions. These tensors are typically estimated directly from the
samples. The main challenge in learning mixtures of discrete product
distributions is that these low-rank tensors cannot be obtained directly from
the sample moments. Instead, we reduce the tensor estimation problem to: $a$)
estimating a low-rank matrix using only off-diagonal block elements; and $b$)
estimating a tensor using a small number of linear measurements. Leveraging
recent developments in matrix completion, we give an alternating minimization
based method to estimate the low-rank matrix, and formulate the tensor
completion problem as a least-squares problem.
| [
"Prateek Jain and Sewoong Oh",
"['Prateek Jain' 'Sewoong Oh']"
] |
cs.LG | null | 1311.2987 | null | null | http://arxiv.org/pdf/1311.2987v1 | 2013-11-13T00:11:09Z | 2013-11-13T00:11:09Z | Learning Input and Recurrent Weight Matrices in Echo State Networks | Echo State Networks (ESNs) are a special type of the temporally deep network
model, the Recurrent Neural Network (RNN), where the recurrent matrix is
carefully designed and both the recurrent and input matrices are fixed. An ESN
uses the linearity of the activation function of the output units to simplify
the learning of the output matrix. In this paper, we devise a special technique
that takes advantage of this linearity in the output units of an ESN to learn
the input and recurrent matrices. This has not been done in earlier ESNs due to
their well-known difficulty in learning those matrices. Compared to the
technique of BackPropagation Through Time (BPTT) in learning general RNNs, our
proposed method exploits the linearity of the activation function in the output units
to formulate the relationships amongst the various matrices in an RNN. These
relationships result in the gradient of the cost function having an analytical
form and being more accurate. This would enable us to compute the gradients
instead of obtaining them by recursion as in BPTT. Experimental results on
phone state classification show that learning one or both of the input and
recurrent matrices in an ESN yields superior results compared to traditional
ESNs that do not learn these matrices, especially when longer time steps are
used.
| [
"['Hamid Palangi' 'Li Deng' 'Rabab K Ward']",
"Hamid Palangi, Li Deng, Rabab K Ward"
] |
stat.ML cs.LG | null | 1311.3001 | null | null | http://arxiv.org/pdf/1311.3001v1 | 2013-11-13T02:23:34Z | 2013-11-13T02:23:34Z | Informed Source Separation: A Bayesian Tutorial | Source separation problems are ubiquitous in the physical sciences; any
situation where signals are superimposed calls for source separation to
estimate the original signals. In this tutorial I will discuss the Bayesian
approach to the source separation problem. This approach has a specific
advantage in that it requires the designer to explicitly describe the signal
model in addition to any other information or assumptions that go into the
problem description. This leads naturally to the idea of informed source
separation, where the algorithm design incorporates relevant information about
the specific problem. This approach promises to enable researchers to design
their own high-quality algorithms that are specifically tailored to the problem
at hand.
| [
"Kevin H. Knuth",
"['Kevin H. Knuth']"
] |
cs.LG | null | 1311.3157 | null | null | http://arxiv.org/pdf/1311.3157v1 | 2013-11-12T17:25:29Z | 2013-11-12T17:25:29Z | Multiple Closed-Form Local Metric Learning for K-Nearest Neighbor
Classifier | Much research has been devoted to learning a Mahalanobis distance metric,
which can effectively improve the performance of kNN classification. Most
approaches are iterative and computationally expensive, and linear rigidity still
critically limits how well metric learning algorithms can perform. We propose a
computationally economical framework to learn multiple metrics in closed form.
| [
"['Jianbo Ye']",
"Jianbo Ye"
] |
cs.LG stat.ML | null | 1311.3287 | null | null | http://arxiv.org/pdf/1311.3287v2 | 2013-12-08T01:58:58Z | 2013-11-13T20:42:21Z | Nonparametric Estimation of Multi-View Latent Variable Models | Spectral methods have greatly advanced the estimation of latent variable
models, generating a sequence of novel and efficient algorithms with strong
theoretical guarantees. However, current spectral algorithms are largely
restricted to mixtures of discrete or Gaussian distributions. In this paper, we
propose a kernel method for learning multi-view latent variable models,
allowing each mixture component to be nonparametric. The key idea of the method
is to embed the joint distribution of a multi-view latent variable model into a
reproducing kernel Hilbert space, and then the latent parameters are recovered
using a robust tensor power method. We establish that the sample complexity for
the proposed method is quadratic in the number of latent components and is a
low order polynomial in the other relevant parameters. Thus, our non-parametric
tensor approach to learning latent variable models enjoys good sample and
computational efficiencies. Moreover, the non-parametric tensor power method
compares favorably to the EM algorithm and other existing spectral algorithms in
our experiments.
| [
"['Le Song' 'Animashree Anandkumar' 'Bo Dai' 'Bo Xie']",
"Le Song, Animashree Anandkumar, Bo Dai, Bo Xie"
] |
cs.LG stat.ML | null | 1311.3315 | null | null | http://arxiv.org/pdf/1311.3315v3 | 2014-05-13T14:24:33Z | 2013-11-13T21:33:05Z | Sparse Matrix Factorization | We investigate the problem of factorizing a matrix into several sparse
matrices and propose an algorithm for this under randomness and sparsity
assumptions. This problem can be viewed as a simplification of the deep
learning problem where finding a factorization corresponds to finding edges in
different layers and values of hidden units. We prove that under certain
assumptions for a sparse linear deep network with $n$ nodes in each layer, our
algorithm is able to recover the structure of the network and values of top
layer hidden units for depths up to $\tilde O(n^{1/6})$. We further discuss the
relation among sparse matrix factorization, deep learning, sparse recovery and
dictionary learning.
| [
"Behnam Neyshabur, Rina Panigrahy",
"['Behnam Neyshabur' 'Rina Panigrahy']"
] |
stat.ML cs.AI cs.LG | null | 1311.3368 | null | null | http://arxiv.org/pdf/1311.3368v1 | 2013-11-14T02:39:45Z | 2013-11-14T02:39:45Z | Anytime Belief Propagation Using Sparse Domains | Belief Propagation has been widely used for marginal inference; however, it is
slow on problems with large-domain variables and high-order factors. Previous
work provides useful approximations to facilitate inference on such models, but
lacks important anytime properties such as: 1) providing accurate and
consistent marginals when stopped early, 2) improving the approximation when
run longer, and 3) converging to the fixed point of BP. To this end, we propose
a message passing algorithm that works on sparse (partially instantiated)
domains, and converges to consistent marginals using dynamic message
scheduling. The algorithm grows the sparse domains incrementally, selecting the
next value to add using prioritization schemes based on the gradients of the
marginal inference objective. Our experiments demonstrate local anytime
consistency and fast convergence, providing significant speedups over BP to
obtain low-error marginals: up to 25 times on grid models, and up to 6 times on
a real-world natural language processing task.
| [
"['Sameer Singh' 'Sebastian Riedel' 'Andrew McCallum']",
"Sameer Singh and Sebastian Riedel and Andrew McCallum"
] |
cs.LG stat.ML | null | 1311.3494 | null | null | http://arxiv.org/pdf/1311.3494v6 | 2014-10-28T13:25:09Z | 2013-11-14T13:21:15Z | Fundamental Limits of Online and Distributed Algorithms for Statistical
Learning and Estimation | Many machine learning approaches are characterized by information constraints
on how they interact with the training data. These include memory and
sequential access constraints (e.g. fast first-order methods to solve
stochastic optimization problems); communication constraints (e.g. distributed
learning); partial access to the underlying data (e.g. missing features and
multi-armed bandits) and more. However, we currently have little understanding
of how such information constraints fundamentally affect our performance,
independent of the learning problem semantics. For example, are there learning
problems where any algorithm which has small memory footprint (or can use any
bounded number of bits from each example, or has certain communication
constraints) will perform worse than what is possible without such constraints?
In this paper, we describe how a single set of results implies positive answers
to the above, for several different settings.
| [
"['Ohad Shamir']",
"Ohad Shamir"
] |
cs.DS cs.LG stat.ML | null | 1311.3651 | null | null | http://arxiv.org/pdf/1311.3651v4 | 2014-01-20T06:19:39Z | 2013-11-14T20:49:55Z | Smoothed Analysis of Tensor Decompositions | Low rank tensor decompositions are a powerful tool for learning generative
models, and uniqueness results give them a significant advantage over matrix
decomposition methods. However, tensors pose significant algorithmic challenges
and tensors analogs of much of the matrix algebra toolkit are unlikely to exist
because of hardness results. Efficient decomposition in the overcomplete case
(where rank exceeds dimension) is particularly challenging. We introduce a
smoothed analysis model for studying these questions and develop an efficient
algorithm for tensor decomposition in the highly overcomplete case (rank
polynomial in the dimension). In this setting, we show that our algorithm is
robust to inverse polynomial error -- a crucial property for applications in
learning since we are only allowed a polynomial number of samples. While
algorithms are known for exact tensor decomposition in some overcomplete
settings, our main contribution is in analyzing their stability in the
framework of smoothed analysis.
Our main technical contribution is to show that tensor products of perturbed
vectors are linearly independent in a robust sense (i.e. the associated matrix
has singular values that are at least an inverse polynomial). This key result
paves the way for applying tensor methods to learning problems in the smoothed
setting. In particular, we use it to obtain results for learning multi-view
models and mixtures of axis-aligned Gaussians where there are many more
"components" than dimensions. The assumption here is that the model is not
adversarially chosen, formalized by a perturbation of model parameters. We
believe this is an appealing way to analyze realistic instances of learning
problems, since this framework allows us to overcome many of the usual
limitations of using tensor methods.
| [
"Aditya Bhaskara, Moses Charikar, Ankur Moitra and Aravindan\n Vijayaraghavan",
"['Aditya Bhaskara' 'Moses Charikar' 'Ankur Moitra'\n 'Aravindan Vijayaraghavan']"
] |
cs.SI cs.LG | null | 1311.3669 | null | null | http://arxiv.org/pdf/1311.3669v1 | 2013-11-14T21:01:15Z | 2013-11-14T21:01:15Z | Scalable Influence Estimation in Continuous-Time Diffusion Networks | If a piece of information is released from a media site, can it spread, in 1
month, to a million web pages? This influence estimation problem is very
challenging since both the time-sensitive nature of the problem and the issue
of scalability need to be addressed simultaneously. In this paper, we propose a
randomized algorithm for influence estimation in continuous-time diffusion
networks. Our algorithm can estimate the influence of every node in a network
with |V| nodes and |E| edges to an accuracy of $\varepsilon$ using
$n=O(1/\varepsilon^2)$ randomizations and, up to logarithmic factors,
$O(n|E|+n|V|)$ computations. When used as a subroutine in a greedy influence
maximization algorithm, our proposed method is guaranteed to find a set of
nodes with an influence of at least (1-1/e)OPT-2$\varepsilon$, where OPT is the
optimal value. Experiments on both synthetic and real-world data show that the
proposed method can easily scale up to networks of millions of nodes while
significantly improving over the previous state of the art in terms of the accuracy
of the estimated influence and the quality of the selected nodes in maximizing
the influence.
| [
"['Nan Du' 'Le Song' 'Manuel Gomez Rodriguez' 'Hongyuan Zha']",
"Nan Du, Le Song, Manuel Gomez Rodriguez, Hongyuan Zha"
] |
cs.LG cs.AI | null | 1311.3735 | null | null | http://arxiv.org/pdf/1311.3735v1 | 2013-11-15T06:14:15Z | 2013-11-15T06:14:15Z | Ensemble Relational Learning based on Selective Propositionalization | Dealing with structured data requires expressive representation
formalisms, which, however, raise the issue of the computational
complexity of the machine learning process. Furthermore, real-world domains
require tools able to manage their typical uncertainty. Many statistical
relational learning approaches try to deal with these problems by combining the
construction of relevant relational features with a probabilistic tool. When
the combination is static (static propositionalization), the constructed
features are considered as boolean features and used offline as input to a
statistical learner; while, when the combination is dynamic (dynamic
propositionalization), the feature construction and probabilistic tool are
combined into a single process. In this paper we propose a selective
propositionalization method that searches for the optimal set of relational features
to be used by a probabilistic learner in order to minimize a loss function. The
new propositionalization approach has been combined with the random subspace
ensemble method. Experiments on real-world datasets show the validity of the
proposed method.
| [
"Nicola Di Mauro and Floriana Esposito",
"['Nicola Di Mauro' 'Floriana Esposito']"
] |
stat.ML cs.LG q-bio.NC | null | 1311.3859 | null | null | http://arxiv.org/pdf/1311.3859v2 | 2013-11-20T12:26:50Z | 2013-11-15T14:19:31Z | Mapping cognitive ontologies to and from the brain | Imaging neuroscience links brain activation maps to behavior and cognition
via correlational studies. Due to the nature of the individual experiments,
based on eliciting neural response from a small number of stimuli, this link is
incomplete, and unidirectional from the causal point of view. To come to
conclusions on the function implied by the activation of brain regions, it is
necessary to combine a wide exploration of the various brain functions and some
inversion of the statistical inference. Here we introduce a methodology for
accumulating knowledge towards a bidirectional link between observed brain
activity and the corresponding function. We rely on a large corpus of imaging
studies and a predictive engine. Technically, the challenges are to find
commonality between the studies without denaturing the richness of the corpus.
The key elements that we contribute are labeling the tasks performed with a
cognitive ontology, and modeling the long tail of rare paradigms in the corpus.
To our knowledge, our approach is the first demonstration of predicting the
cognitive content of completely new brain images. To that end, we propose a
method that predicts the experimental paradigms across different studies.
| [
"Yannick Schwartz (INRIA Saclay - Ile de France, NEUROSPIN), Bertrand\n Thirion (INRIA Saclay - Ile de France, NEUROSPIN), Ga\\\"el Varoquaux (INRIA\n Saclay - Ile de France, LNAO)",
"['Yannick Schwartz' 'Bertrand Thirion' 'Gaël Varoquaux']"
] |
cs.AI cs.LG | null | 1311.3959 | null | null | http://arxiv.org/pdf/1311.3959v4 | 2016-05-01T12:27:39Z | 2013-11-15T19:40:58Z | Clustering Markov Decision Processes For Continual Transfer | We present algorithms to effectively represent a set of Markov decision
processes (MDPs), whose optimal policies have already been learned, by a
smaller source subset for lifelong, policy-reuse-based transfer learning in
reinforcement learning. This is necessary when the number of previous tasks is
large and the cost of measuring similarity counteracts the benefit of transfer.
The source subset forms an `$\epsilon$-net' over the original set of MDPs, in
the sense that for each previous MDP $M_p$, there is a source $M^s$ whose
optimal policy has $<\epsilon$ regret in $M_p$. Our contributions are as
follows. We present EXP-3-Transfer, a principled policy-reuse algorithm that
optimally reuses a given source policy set when learning for a new MDP. We
present a framework to cluster the previous MDPs to extract a source subset.
The framework consists of (i) a distance $d_V$ over MDPs to measure
policy-based similarity between MDPs; (ii) a cost function $g(\cdot)$ that uses
$d_V$ to measure how good a particular clustering is for generating useful
source tasks for EXP-3-Transfer and (iii) a provably convergent algorithm,
MHAV, for finding the optimal clustering. We validate our algorithms through
experiments in a surveillance domain.
| [
"['M. M. Hassan Mahmud' 'Majd Hawasly' 'Benjamin Rosman'\n 'Subramanian Ramamoorthy']",
"M. M. Hassan Mahmud, Majd Hawasly, Benjamin Rosman, Subramanian\n Ramamoorthy"
] |
cs.AI cs.LG | null | 1311.4086 | null | null | http://arxiv.org/pdf/1311.4086v1 | 2013-11-16T18:13:42Z | 2013-11-16T18:13:42Z | A hybrid decision support system : application on healthcare | Many systems based on knowledge, especially expert systems for medical
decision support, have been developed. These systems are based only on production
rules, and cannot learn or evolve except by updating them. In addition, taking
into account several criteria induces an exorbitant number of rules to be
injected into the system. It becomes difficult to translate medical knowledge
or a decision-support recommendation into a simple rule. Moreover, reasoning based on
generic cases has become classic and can even reduce the range of possible solutions. To
remedy this, we propose an approach based on multi-criteria decision making
guided by case-based reasoning (CBR).
| [
"['Abdelhak Mansoul' 'Baghdad Atmani' 'Sofia Benbelkacem']",
"Abdelhak Mansoul, Baghdad Atmani, Sofia Benbelkacem"
] |
cs.LG cs.DC cs.IR stat.ML | null | 1311.4150 | null | null | http://arxiv.org/pdf/1311.4150v1 | 2013-11-17T11:52:42Z | 2013-11-17T11:52:42Z | Towards Big Topic Modeling | To solve the big topic modeling problem, we need to reduce both time and
space complexities of batch latent Dirichlet allocation (LDA) algorithms.
Although parallel LDA algorithms on the multi-processor architecture have low
time and space complexities, their communication costs among processors often
scale linearly with the vocabulary size and the number of topics, leading to a
serious scalability problem. To reduce the communication complexity among
processors for a better scalability, we propose a novel communication-efficient
parallel topic modeling architecture based on power law, which consumes orders
of magnitude less communication time when the number of topics is large. We
combine the proposed communication-efficient parallel architecture with the
online belief propagation (OBP) algorithm referred to as POBP for big topic
modeling tasks. Extensive empirical results confirm that POBP has the following
advantages for solving the big topic modeling problem: 1) high accuracy, 2)
communication efficiency, 3) fast speed, and 4) constant memory usage when
compared with recent state-of-the-art parallel LDA algorithms on the
multi-processor architecture.
| [
"['Jian-Feng Yan' 'Jia Zeng' 'Zhi-Qiang Liu' 'Yang Gao']",
"Jian-Feng Yan, Jia Zeng, Zhi-Qiang Liu, Yang Gao"
] |
cs.CV cs.LG | null | 1311.4158 | null | null | http://arxiv.org/pdf/1311.4158v5 | 2014-03-11T19:56:59Z | 2013-11-17T13:22:44Z | Unsupervised Learning of Invariant Representations in Hierarchical
Architectures | The present phase of Machine Learning is characterized by supervised learning
algorithms relying on large sets of labeled examples ($n \to \infty$). The next
phase is likely to focus on algorithms capable of learning from very few
labeled examples ($n \to 1$), like humans seem able to do. We propose an
approach to this problem and describe the underlying theory, based on the
unsupervised, automatic learning of a ``good'' representation for supervised
learning, characterized by small sample complexity ($n$). We consider the case
of visual object recognition though the theory applies to other domains. The
starting point is the conjecture, proved in specific cases, that image
representations which are invariant to translations, scaling and other
transformations can considerably reduce the sample complexity of learning. We
prove that an invariant and unique (discriminative) signature can be computed
for each image patch, $I$, in terms of empirical distributions of the
dot-products between $I$ and a set of templates stored during unsupervised
learning. A module performing filtering and pooling, like the simple and
complex cells described by Hubel and Wiesel, can compute such estimates.
Hierarchical architectures consisting of this basic Hubel-Wiesel module inherit
its properties of invariance, stability, and discriminability while capturing
the compositional organization of the visual world in terms of wholes and
parts. The theory extends existing deep learning convolutional architectures
for image and speech recognition. It also suggests that the main computational
goal of the ventral stream of visual cortex is to provide a hierarchical
representation of new objects/images which is invariant to transformations,
stable, and discriminative for recognition---and that this representation may
be continuously learned in an unsupervised way during development and visual
experience.
| [
"['Fabio Anselmi' 'Joel Z. Leibo' 'Lorenzo Rosasco' 'Jim Mutch'\n 'Andrea Tacchetti' 'Tomaso Poggio']",
"Fabio Anselmi, Joel Z. Leibo, Lorenzo Rosasco, Jim Mutch, Andrea\n Tacchetti, Tomaso Poggio"
] |
cs.LG | null | 1311.4235 | null | null | http://arxiv.org/pdf/1311.4235v1 | 2013-11-18T00:48:14Z | 2013-11-18T00:48:14Z | On the definition of a general learning system with user-defined
operators | In this paper, we push forward the idea of machine learning systems whose
operators can be modified and fine-tuned for each problem. This allows us to
propose a learning paradigm where users can write (or adapt) their operators,
according to the problem, data representation and the way the information
should be navigated. To achieve this goal, data instances, background
knowledge, rules, programs and operators are all written in the same functional
language, Erlang. Since changing operators affects how the search space needs to
be explored, heuristics are learnt as a result of a decision process based on
reinforcement learning where each action is defined as a choice of operator and
rule. As a result, the architecture can be seen as a 'system for writing
machine learning systems' or as a way to explore new operators, where policy reuse
(as a kind of transfer learning) is allowed. States and actions are represented
in a Q matrix which is actually a table, from which a supervised model is
learnt. This makes it possible to have a more flexible mapping between old and
new problems, since we work with an abstraction of rules and actions. We
include some examples showing reuse and the application of the system gErl to
IQ problems. In order to evaluate gErl, we will test it against some structured
problems: a selection of IQ test tasks and some experiments on some structured
prediction problems (list patterns).
| [
"['Fernando Martínez-Plumed' 'Cèsar Ferri' 'José Hernández-Orallo'\n 'María-José Ramírez-Quintana']",
"Fernando Mart\\'inez-Plumed and C\\`esar Ferri and Jos\\'e\n Hern\\'andez-Orallo and Mar\\'ia-Jos\\'e Ram\\'irez-Quintana"
] |
cs.LG cs.NA cs.RO math.OC | null | 1311.4296 | null | null | http://arxiv.org/pdf/1311.4296v1 | 2013-11-18T08:48:13Z | 2013-11-18T08:48:13Z | Reflection methods for user-friendly submodular optimization | Recently, it has become evident that submodularity naturally captures widely
occurring concepts in machine learning, signal processing and computer vision.
Consequently, there is a need for efficient optimization procedures for
submodular functions, especially for minimization problems. While general
submodular minimization is challenging, we propose a new method that exploits
existing decomposability of submodular functions. In contrast to previous
approaches, our method is neither approximate, nor impractical, nor does it
need any cumbersome parameter tuning. Moreover, it is easy to implement and
parallelize. A key component of our method is a formulation of the discrete
submodular minimization problem as a continuous best approximation problem that
is solved through a sequence of reflections, and its solution can be easily
thresholded to obtain an optimal discrete solution. This method solves both the
continuous and discrete formulations of the problem, and therefore has
applications in learning, inference, and reconstruction. In our experiments, we
illustrate the benefits of our method on two image segmentation tasks.
| [
"Stefanie Jegelka, Francis Bach (INRIA Paris - Rocquencourt, LIENS),\n Suvrit Sra (MPI)",
"['Stefanie Jegelka' 'Francis Bach' 'Suvrit Sra']"
] |
cs.AI cs.LG | null | 1311.4319 | null | null | http://arxiv.org/pdf/1311.4319v1 | 2013-11-18T10:22:53Z | 2013-11-18T10:22:53Z | Ranking Algorithms by Performance | A common way of doing algorithm selection is to train a machine learning
model and predict the best algorithm from a portfolio to solve a particular
problem. While this method has been highly successful, choosing only a single
algorithm has inherent limitations -- if the choice was bad, no remedial action
can be taken and parallelism cannot be exploited, to name but a few problems.
In this paper, we investigate how to predict the ranking of the portfolio
algorithms on a particular problem. This information can be used to choose the
single best algorithm, but also to allocate resources to the algorithms
according to their rank. We evaluate a range of approaches to predict the
ranking of a set of algorithms on a problem. We furthermore introduce a
framework for categorizing ranking predictions that allows one to judge the
expressiveness of the predictive output. Our experimental evaluation
demonstrates on a range of data sets from the literature that it is beneficial
to consider the relationship between algorithms when predicting rankings. We
furthermore show that relatively naive approaches deliver rankings of good
quality already.
| [
"['Lars Kotthoff']",
"Lars Kotthoff"
] |
cs.LG cs.SY physics.data-an stat.ML | null | 1311.4468 | null | null | http://arxiv.org/pdf/1311.4468v3 | 2014-04-01T15:52:02Z | 2013-11-18T17:31:48Z | Stochastic processes and feedback-linearisation for online
identification and Bayesian adaptive control of fully-actuated mechanical
systems | This work proposes a new method for simultaneous probabilistic identification
and control of an observable, fully-actuated mechanical system. Identification
is achieved by conditioning stochastic process priors on observations of
configurations and noisy estimates of configuration derivatives. In contrast to
previous work that has used stochastic processes for identification, we
leverage the structural knowledge afforded by Lagrangian mechanics and learn
the drift and control input matrix functions of the control-affine system
separately. We utilise feedback-linearisation to reduce, in expectation, the
uncertain nonlinear control problem to one that is easy to regulate in a
desired manner. Thereby, our method combines the flexibility of nonparametric
Bayesian learning with epistemological guarantees on the expected closed-loop
trajectory. We illustrate our method in the context of torque-actuated pendula
where the dynamics are learned with a combination of normal and log-normal
processes.
| [
"Jan-Peter Calliess, Antonis Papachristodoulou and Stephen J. Roberts",
"['Jan-Peter Calliess' 'Antonis Papachristodoulou' 'Stephen J. Roberts']"
] |
stat.ML cs.LG | null | 1311.4472 | null | null | http://arxiv.org/pdf/1311.4472v2 | 2013-12-06T22:02:14Z | 2013-11-18T17:56:28Z | A Component Lasso | We propose a new sparse regression method called the component lasso, based
on a simple idea. The method uses the connected-components structure of the
sample covariance matrix to split the problem into smaller ones. It then solves
the subproblems separately, obtaining a coefficient vector for each one. Then,
it uses non-negative least squares to recombine the different vectors into a
single solution. This step is useful in selecting and reweighting components
that are correlated with the response. Simulated and real data examples show
that the component lasso can outperform standard regression methods such as the
lasso and elastic net, achieving a lower mean squared error as well as better
support recovery.
| [
"Nadine Hussami and Robert Tibshirani",
"['Nadine Hussami' 'Robert Tibshirani']"
] |
cs.LG | null | 1311.4486 | null | null | http://arxiv.org/pdf/1311.4486v2 | 2013-11-26T03:20:56Z | 2013-11-18T18:41:20Z | Discriminative Density-ratio Estimation | The covariate shift is a challenging problem in supervised learning that
results from the discrepancy between the training and test distributions. An
effective approach, which has recently drawn considerable attention in the research
community, is to reweight the training samples to minimize that discrepancy. In
particular, many methods are based on developing Density-ratio (DR) estimation
techniques that apply to both regression and classification problems. Although
these methods work well for regression problems, their performance on
classification problems is not satisfactory. This is due to a key observation
that these methods focus on matching the sample marginal distributions without
paying attention to preserving the separation between classes in the reweighted
space. In this paper, we propose a novel method for Discriminative
Density-ratio (DDR) estimation that addresses the aforementioned problem and
aims at estimating the density-ratio of joint distributions in a class-wise
manner. The proposed algorithm is an iterative procedure that alternates
between estimating the class information for the test data and estimating new
density ratio for each class. To incorporate the estimated class information of
the test data, a soft matching technique is proposed. In addition, we employ an
effective criterion which adopts mutual information as an indicator to stop the
iterative procedure while resulting in a decision boundary that lies in a
sparse region. Experiments on synthetic and benchmark datasets demonstrate the
superiority of the proposed method in terms of both accuracy and robustness.
| [
"Yun-Qian Miao, Ahmed K. Farahat, Mohamed S. Kamel",
"['Yun-Qian Miao' 'Ahmed K. Farahat' 'Mohamed S. Kamel']"
] |
cs.AI cs.LG cs.LO | null | 1311.4639 | null | null | http://arxiv.org/pdf/1311.4639v1 | 2013-11-19T07:39:58Z | 2013-11-19T07:39:58Z | Post-Proceedings of the First International Workshop on Learning and
Nonmonotonic Reasoning | Knowledge Representation and Reasoning and Machine Learning are two important
fields in AI. Nonmonotonic logic programming (NMLP) and Answer Set Programming
(ASP) provide formal languages for representing and reasoning with commonsense
knowledge and realize declarative problem solving in AI. On the other hand,
Inductive Logic Programming (ILP) realizes Machine Learning in logic
programming, which provides a formal background to inductive learning and the
techniques have been applied to the fields of relational learning and data
mining. Generally speaking, NMLP and ASP realize nonmonotonic reasoning while
lack the ability of learning. By contrast, ILP realizes inductive learning
while most techniques have been developed under the classical monotonic logic.
With this background, some researchers attempt to combine techniques in the
context of nonmonotonic ILP. Such combination will introduce a learning
mechanism to programs and would exploit new applications on the NMLP side,
while on the ILP side it will extend the representation language and enable us
to use existing solvers. Cross-fertilization between learning and nonmonotonic
reasoning can also occur in areas such as the use of answer set solvers for ILP,
speeding up learning while running answer set solvers, learning action theories,
learning transition rules in dynamical systems, abductive learning, learning
biological networks with inhibition, and applications involving default and
negation. This workshop is the first attempt to provide an open forum for the
identification of problems and discussion of possible collaborations among
researchers with complementary expertise. The workshop was held on September
15th of 2013 in Corunna, Spain. This post-proceedings contains five technical
papers (out of six accepted papers) and the abstract of the invited talk by Luc
De Raedt.
| [
"Katsumi Inoue and Chiaki Sakama (Editors)",
"['Katsumi Inoue' 'Chiaki Sakama']"
] |
cs.LG cs.IT cs.NA math.IT stat.ML | null | 1311.4643 | null | null | http://arxiv.org/pdf/1311.4643v1 | 2013-11-19T08:00:50Z | 2013-11-19T08:00:50Z | Near-Optimal Entrywise Sampling for Data Matrices | We consider the problem of selecting non-zero entries of a matrix $A$ in
order to produce a sparse sketch of it, $B$, that minimizes $\|A-B\|_2$. For
large $m \times n$ matrices, such that $n \gg m$ (for example, representing $n$
observations over $m$ attributes) we give sampling distributions that exhibit
four important properties. First, they have closed forms computable from
minimal information regarding $A$. Second, they allow sketching of matrices
whose non-zeros are presented to the algorithm in arbitrary order as a stream,
with $O(1)$ computation per non-zero. Third, the resulting sketch matrices are
not only sparse, but their non-zero entries are highly compressible. Lastly,
and most importantly, under mild assumptions, our distributions are provably
competitive with the optimal offline distribution. Note that the probabilities
in the optimal offline distribution may be complex functions of all the entries
in the matrix. Therefore, regardless of computational complexity, the optimal
distribution might be impossible to compute in the streaming model.
| [
"Dimitris Achlioptas, Zohar Karnin, Edo Liberty",
"['Dimitris Achlioptas' 'Zohar Karnin' 'Edo Liberty']"
] |
stat.ML cs.DC cs.LG stat.CO | null | 1311.4780 | null | null | http://arxiv.org/pdf/1311.4780v2 | 2014-03-21T04:25:50Z | 2013-11-19T15:23:04Z | Asymptotically Exact, Embarrassingly Parallel MCMC | Communication costs, resulting from synchronization requirements during
learning, can greatly slow down many parallel machine learning algorithms. In
this paper, we present a parallel Markov chain Monte Carlo (MCMC) algorithm in
which subsets of data are processed independently, with very little
communication. First, we arbitrarily partition data onto multiple machines.
Then, on each machine, any classical MCMC method (e.g., Gibbs sampling) may be
used to draw samples from a posterior distribution given the data subset.
Finally, the samples from each machine are combined to form samples from the
full posterior. This embarrassingly parallel algorithm allows each machine to
act independently on a subset of the data (without communication) until the
final combination stage. We prove that our algorithm generates asymptotically
exact samples and empirically demonstrate its ability to parallelize burn-in
and sampling in several models.
| [
"Willie Neiswanger, Chong Wang, Eric Xing",
"['Willie Neiswanger' 'Chong Wang' 'Eric Xing']"
] |
cs.LG stat.ML | null | 1311.4803 | null | null | http://arxiv.org/pdf/1311.4803v2 | 2014-02-06T20:07:49Z | 2013-11-19T16:56:55Z | Beating the Minimax Rate of Active Learning with Prior Knowledge | Active learning refers to the learning protocol where the learner is allowed
to choose a subset of instances for labeling. Previous studies have shown that,
compared with passive learning, active learning is able to reduce the label
complexity exponentially if the data are linearly separable or satisfy the
Tsybakov noise condition with parameter $\kappa=1$. In this paper, we propose a
novel active learning algorithm using a convex surrogate loss, with the goal to
broaden the cases for which active learning achieves an exponential
improvement. We make use of a convex loss not only because it reduces the
computational cost, but more importantly because it leads to a tight bound for
the empirical process (i.e., the difference between the empirical estimation
and the expectation) when the current solution is close to the optimal one.
Under the assumption that the norm of the optimal classifier that minimizes the
convex risk is available, our analysis shows that the introduction of the
convex surrogate loss yields an exponential reduction in the label complexity
even when the parameter $\kappa$ of the Tsybakov noise is larger than $1$. To
the best of our knowledge, this is the first work that improves the minimax
rate of active learning by utilizing certain prior knowledge.
| [
"['Lijun Zhang' 'Mehrdad Mahdavi' 'Rong Jin']",
"Lijun Zhang and Mehrdad Mahdavi and Rong Jin"
] |
stat.ML cs.LG | null | 1311.4825 | null | null | http://arxiv.org/pdf/1311.4825v3 | 2015-06-08T13:27:19Z | 2013-11-19T18:29:19Z | Gaussian Process Optimization with Mutual Information | In this paper, we analyze a generic algorithm scheme for sequential global
optimization using Gaussian processes. The upper bounds we derive on the
cumulative regret for this generic algorithm improve by an exponential factor
the previously known bounds for algorithms like GP-UCB. We also introduce the
novel Gaussian Process Mutual Information algorithm (GP-MI), which
significantly improves further these upper bounds for the cumulative regret. We
confirm the efficiency of this algorithm on synthetic and real tasks against
the natural competitor, GP-UCB, and also the Expected Improvement heuristic.
| [
"['Emile Contal' 'Vianney Perchet' 'Nicolas Vayatis']",
"Emile Contal, Vianney Perchet, Nicolas Vayatis"
] |
stat.ML cs.LG | 10.1016/j.patrec.2014.08.013 | 1311.4833 | null | null | http://arxiv.org/abs/1311.4833v1 | 2013-11-19T18:46:59Z | 2013-11-19T18:46:59Z | Domain Adaptation of Majority Votes via Perturbed Variation-based Label
Transfer | We tackle the PAC-Bayesian Domain Adaptation (DA) problem. This arises when
one desires to learn, from a source distribution, a good weighted majority vote
(over a set of classifiers) on a different target distribution. In this
context, the disagreement between classifiers is known to be crucial to control. In
the non-DA supervised setting, a theoretical bound - the C-bound - involves this
disagreement and leads to a majority vote learning algorithm: MinCq. In this
work, we extend MinCq to DA by taking advantage of an elegant divergence
between distributions called the Perturbed Variation (PV). Firstly, justified by
a new formulation of the C-bound, we provide MinCq with a target sample labeled
thanks to a PV-based self-labeling focused on regions where the source and
target marginal distributions are closer. Secondly, we propose an original
process for tuning the hyperparameters. Our framework shows very promising
results on a toy problem.
| [
"['Emilie Morvant']",
"Emilie Morvant (IST Austria)"
] |
cs.LG cs.DS | null | 1311.5022 | null | null | http://arxiv.org/pdf/1311.5022v3 | 2015-09-30T16:43:29Z | 2013-11-20T11:39:26Z | Extended Formulations for Online Linear Bandit Optimization | On-line linear optimization on combinatorial action sets (d-dimensional
actions) with bandit feedback is known to have complexity in the order of the
dimension of the problem. The exponential weighted strategy achieves the best
known regret bound that is of the order of $d^{2}\sqrt{n}$ (where $d$ is the
dimension of the problem, $n$ is the time horizon). However, such strategies
are provably suboptimal or computationally inefficient. The complexity is
attributed to the combinatorial structure of the action set and the dearth of
efficient exploration strategies of the set. Mirror descent with entropic
regularization function comes close to solving this problem by enforcing a
meticulous projection of weights with an inherent boundary condition. Entropic
regularization in mirror descent is the only known way of achieving a
logarithmic dependence on the dimension. Here, we argue otherwise and recover
the original intuition of exponential weighting by borrowing a technique from
discrete optimization and approximation algorithms called `extended
formulation'. Such formulations appeal to the underlying geometry of the set
with a guaranteed logarithmic dependence on the dimension underpinned by an
information theoretic entropic analysis.
| [
"Shaona Ghosh, Adam Prugel-Bennett",
"['Shaona Ghosh' 'Adam Prugel-Bennett']"
] |
cs.LG | null | 1311.5068 | null | null | http://arxiv.org/pdf/1311.5068v1 | 2013-11-20T14:31:00Z | 2013-11-20T14:31:00Z | Gromov-Hausdorff stability of linkage-based hierarchical clustering
methods | A hierarchical clustering method is stable if small perturbations on the data
set produce small perturbations in the result. These perturbations are measured
using the Gromov-Hausdorff metric. We study the problem of stability on
linkage-based hierarchical clustering methods. We obtain that, under some basic
conditions, standard linkage-based methods are semi-stable. This means that
they are stable if the input data is close enough to an ultrametric space. We
prove that, apart from exotic examples, introducing any unchaining condition in
the algorithm always produces unstable methods.
| [
"A. Mart\\'inez-P\\'erez",
"['A. Martínez-Pérez']"
] |
cs.LG stat.ML | null | 1311.5422 | null | null | http://arxiv.org/pdf/1311.5422v2 | 2013-11-22T04:49:54Z | 2013-11-20T16:45:51Z | Sparse Overlapping Sets Lasso for Multitask Learning and its Application
to fMRI Analysis | Multitask learning can be effective when features useful in one task are also
useful for other tasks, and the group lasso is a standard method for selecting
a common subset of features. In this paper, we are interested in a less
restrictive form of multitask learning, wherein (1) the available features can
be organized into subsets according to a notion of similarity and (2) features
useful in one task are similar, but not necessarily identical, to the features
best suited for other tasks. The main contribution of this paper is a new
procedure called Sparse Overlapping Sets (SOS) lasso, a convex optimization
that automatically selects similar features for related learning tasks. Error
bounds are derived for SOSlasso and its consistency is established for squared
error loss. In particular, SOSlasso is motivated by multi-subject fMRI studies
in which functional activity is classified using brain voxels as features.
Experiments with real and synthetic data demonstrate the advantages of SOSlasso
compared to the lasso and group lasso.
| [
"Nikhil Rao, Christopher Cox, Robert Nowak, Timothy Rogers",
"['Nikhil Rao' 'Christopher Cox' 'Robert Nowak' 'Timothy Rogers']"
] |
cs.SI cs.LG math.ST physics.soc-ph stat.ML stat.TH | 10.1109/TSP.2014.2336613 | 1311.5552 | null | null | http://arxiv.org/abs/1311.5552v3 | 2014-09-08T17:14:10Z | 2013-11-21T20:43:44Z | Bayesian Discovery of Threat Networks | A novel unified Bayesian framework for network detection is developed, under
which a detection algorithm is derived based on random walks on graphs. The
algorithm detects threat networks using partial observations of their activity,
and is proved to be optimum in the Neyman-Pearson sense. The algorithm is
defined by a graph, at least one observation, and a diffusion model for threat.
A link to well-known spectral detection methods is provided, and the
equivalence of the random walk and harmonic solutions to the Bayesian
formulation is proven. A general diffusion model is introduced that utilizes
spatio-temporal relationships between vertices, and is used for a specific
space-time formulation that leads to significant performance improvements on
coordinated covert networks. This performance is demonstrated using a new
hybrid mixed-membership blockmodel introduced to simulate random covert
networks with realistic properties.
| [
"Steven T. Smith, Edward K. Kao, Kenneth D. Senne, Garrett Bernstein,\n and Scott Philips",
"['Steven T. Smith' 'Edward K. Kao' 'Kenneth D. Senne' 'Garrett Bernstein'\n 'Scott Philips']"
] |
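A minimal sketch of a random-walk diffusion score seeded at observed vertices, in the spirit of the random-walk detection statistic described above; the restart parameter, toy graph, and personalized-PageRank formulation are assumptions of this illustration rather than the paper's Neyman-Pearson-optimal detector.

```python
import numpy as np

def diffusion_scores(A, observed, alpha=0.85):
    """Scores of a random walk with restart at the observed vertices.

    A        : (n, n) adjacency matrix (undirected, nonnegative weights)
    observed : indices of vertices with observed threat activity
    alpha    : probability of following an edge vs. restarting (assumed value)
    """
    n = A.shape[0]
    deg = A.sum(axis=1)
    P = A / np.maximum(deg, 1e-12)[:, None]    # row-stochastic transition matrix
    r = np.zeros(n)
    r[list(observed)] = 1.0 / len(observed)    # restart distribution on observations
    # Solve s = alpha * P^T s + (1 - alpha) * r  (personalized PageRank form).
    s = np.linalg.solve(np.eye(n) - alpha * P.T, (1 - alpha) * r)
    return s

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(diffusion_scores(A, observed=[0]))       # vertices ranked by proximity to the observation
```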
stat.ML cs.LG | null | 1311.5599 | null | null | http://arxiv.org/pdf/1311.5599v1 | 2013-11-21T22:16:00Z | 2013-11-21T22:16:00Z | Compressive Measurement Designs for Estimating Structured Signals in
Structured Clutter: A Bayesian Experimental Design Approach | This work considers an estimation task in compressive sensing, where the goal
is to estimate an unknown signal from compressive measurements that are
corrupted by additive pre-measurement noise (interference, or clutter) as well
as post-measurement noise, in the specific setting where some (perhaps limited)
prior knowledge on the signal, interference, and noise is available. The
specific aim here is to devise a strategy for incorporating this prior
information into the design of an appropriate compressive measurement strategy.
Here, the prior information is interpreted as statistics of a prior
distribution on the relevant quantities, and an approach based on Bayesian
Experimental Design is proposed. Experimental results on synthetic data
demonstrate that the proposed approach outperforms traditional random
compressive measurement designs, which are agnostic to the prior information,
as well as several other knowledge-enhanced sensing matrix designs based on
more heuristic notions.
| [
"Swayambhoo Jain, Akshay Soni, and Jarvis Haupt",
"['Swayambhoo Jain' 'Akshay Soni' 'Jarvis Haupt']"
] |
cs.LG | null | 1311.5636 | null | null | http://arxiv.org/pdf/1311.5636v1 | 2013-11-22T01:49:26Z | 2013-11-22T01:49:26Z | Learning Non-Linear Feature Maps | Feature selection plays a pivotal role in learning, particularly in areas
where parsimonious features can provide insight into the underlying process,
such as biology. Recent approaches for non-linear feature selection employing
greedy optimisation of Centred Kernel Target Alignment (KTA), while exhibiting
strong results in terms of generalisation accuracy and sparsity, can become
computationally prohibitive for high-dimensional datasets. We propose randSel,
a randomised feature selection algorithm, with attractive scaling properties.
Our theoretical analysis of randSel provides strong probabilistic guarantees
for the correct identification of relevant features. Experimental results on
real and artificial data, show that the method successfully identifies
effective features, performing better than a number of competitive approaches.
| [
"Dimitrios Athanasakis, John Shawe-Taylor, Delmiro Fernandez-Reyes",
"['Dimitrios Athanasakis' 'John Shawe-Taylor' 'Delmiro Fernandez-Reyes']"
] |
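For readers unfamiliar with the alignment criterion, the sketch below computes centred kernel target alignment (KTA) between a single feature's kernel matrix and the label kernel, the kind of score that greedy and randomised selection procedures rank features by; the RBF kernel, toy data, and per-feature scoring loop are assumptions of this example, not the randSel algorithm.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def centered_kta(K, L):
    """Centred kernel target alignment <Kc, Lc>_F / (||Kc||_F ||Lc||_F)."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    Kc, Lc = H @ K @ H, H @ L @ H
    return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = np.sign(X[:, 0])                           # only the first feature is relevant
L = np.outer(y, y)                             # target (label) kernel

# Score each feature by the alignment of its own kernel with the label kernel.
scores = [centered_kta(rbf_kernel(X[:, [j]]), L) for j in range(X.shape[1])]
print(np.argsort(scores)[::-1])                # the relevant feature should rank first
```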
cs.LG cs.NA stat.ML | null | 1311.5750 | null | null | http://arxiv.org/pdf/1311.5750v2 | 2013-11-25T04:19:39Z | 2013-11-22T13:52:07Z | Gradient Hard Thresholding Pursuit for Sparsity-Constrained Optimization | Hard Thresholding Pursuit (HTP) is an iterative greedy selection procedure
for finding sparse solutions of underdetermined linear systems. This method has
been shown to have strong theoretical guarantees and impressive numerical
performance. In this paper, we generalize HTP from compressive sensing to a
generic problem setup of sparsity-constrained convex optimization. The proposed
algorithm iterates between a standard gradient descent step and a hard
thresholding step with or without debiasing. We prove that our method enjoys
the strong guarantees analogous to HTP in terms of rate of convergence and
parameter estimation accuracy. Numerical evidence shows that our method is
superior to the state-of-the-art greedy selection methods in sparse logistic
regression and sparse precision matrix estimation tasks.
| [
"Xiao-Tong Yuan, Ping Li, Tong Zhang",
"['Xiao-Tong Yuan' 'Ping Li' 'Tong Zhang']"
] |
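The iteration described above, a gradient step followed by hard thresholding, is easy to sketch for sparse least squares; the step size, sparsity level, and the omission of the optional debiasing step are assumptions of this minimal example.

```python
import numpy as np

def gradient_htp(X, y, k, step=None, n_iter=100):
    """Iterate: gradient descent on 0.5*||y - Xw||^2, then keep the k largest entries."""
    n, p = X.shape
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2     # 1 / Lipschitz constant of the gradient
    w = np.zeros(p)
    for _ in range(n_iter):
        w = w - step * X.T @ (X @ w - y)           # gradient step
        support = np.argsort(np.abs(w))[-k:]       # indices of the k largest magnitudes
        w_new = np.zeros(p)
        w_new[support] = w[support]                # hard thresholding (debias-free variant)
        w = w_new
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
w_true = np.zeros(50)
w_true[[3, 17, 41]] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.normal(size=100)
print(np.nonzero(gradient_htp(X, y, k=3))[0])      # expected support: [ 3 17 41 ]
```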
cs.IT cs.LG math.IT math.OC stat.ML | null | 1311.5871 | null | null | http://arxiv.org/pdf/1311.5871v2 | 2014-07-16T15:47:44Z | 2013-11-22T20:29:38Z | Finding sparse solutions of systems of polynomial equations via
group-sparsity optimization | The paper deals with the problem of finding sparse solutions to systems of
polynomial equations possibly perturbed by noise. In particular, we show how
these solutions can be recovered from group-sparse solutions of a derived
system of linear equations. Then, two approaches are considered to find these
group-sparse solutions. The first one is based on a convex relaxation resulting
in a second-order cone programming formulation which can benefit from efficient
reweighting techniques for sparsity enhancement. For this approach, sufficient
conditions for the exact recovery of the sparsest solution to the polynomial
system are derived in the noiseless setting, while stable recovery results are
obtained for the noisy case. Though lacking a similar analysis, the second
approach provides a more computationally efficient algorithm based on a greedy
strategy adding the groups one-by-one. With respect to previous work, the
proposed methods recover the sparsest solution in a very short computing time
while remaining at least as accurate in terms of the probability of success.
This probability is empirically analyzed to emphasize the relationship between
the ability of the methods to solve the polynomial system and the sparsity of
the solution.
| [
"Fabien Lauer (LORIA), Henrik Ohlsson",
"['Fabien Lauer' 'Henrik Ohlsson']"
] |
cs.CV cs.LG stat.CO | null | 1311.5947 | null | null | http://arxiv.org/pdf/1311.5947v1 | 2013-11-23T02:30:14Z | 2013-11-23T02:30:14Z | Fast Training of Effective Multi-class Boosting Using Coordinate Descent
Optimization | We present a novel column generation based boosting method for multi-class
classification. Our multi-class boosting is formulated in a single optimization
problem as in Shen and Hao (2011). Different from most existing multi-class
boosting methods, which use the same set of weak learners for all the classes,
we train class specified weak learners (i.e., each class has a different set of
weak learners). We show that using separate weak learner sets for each class
leads to fast convergence, without introducing additional computational
overhead in the training procedure. To further make the training more efficient
and scalable, we also propose a fast coordinate descent method for solving
the optimization problem at each boosting iteration. The proposed coordinate
descent method is conceptually simple and easy to implement in that it is a
closed-form solution for each coordinate update. Experimental results on a
variety of datasets show that, compared to a range of existing multi-class
boosting methods, the proposed method has a much faster convergence rate and
better generalization performance in most cases. We also empirically show that
the proposed fast coordinate descent algorithm needs less training time than
the MultiBoost algorithm in Shen and Hao (2011).
| [
"Guosheng Lin, Chunhua Shen, Anton van den Hengel, David Suter",
"['Guosheng Lin' 'Chunhua Shen' 'Anton van den Hengel' 'David Suter']"
] |
cs.LG | null | 1311.6041 | null | null | http://arxiv.org/pdf/1311.6041v3 | 2013-12-01T06:02:58Z | 2013-11-23T19:19:37Z | No Free Lunch Theorem and Bayesian probability theory: two sides of the
same coin. Some implications for black-box optimization and metaheuristics | Challenging optimization problems, which elude acceptable solution via
conventional calculus methods, arise commonly in different areas of industrial
design and practice. Hard optimization problems are those that manifest the
following behavior: a) a high number of independent input variables; b) very
complex or irregular multi-modal fitness; c) computationally expensive fitness
evaluation. This paper will focus on some theoretical issues that have strong
implications for practice. I will stress how an interpretation of the No Free
Lunch theorem leads naturally to a general Bayesian optimization framework. The
choice of a prior over the space of functions is a critical and inevitable step
in every black-box optimization.
| [
"Loris Serafino",
"['Loris Serafino']"
] |
cs.LG cs.NE | null | 1311.6091 | null | null | http://arxiv.org/pdf/1311.6091v3 | 2014-03-06T03:06:36Z | 2013-11-24T08:04:41Z | A Primal-Dual Method for Training Recurrent Neural Networks Constrained
by the Echo-State Property | We present an architecture of a recurrent neural network (RNN) with a
fully-connected deep neural network (DNN) as its feature extractor. The RNN is
equipped with both causal temporal prediction and non-causal look-ahead, via
auto-regression (AR) and moving-average (MA), respectively. The focus of this
paper is a primal-dual training method that formulates the learning of the RNN
as a formal optimization problem with an inequality constraint that provides a
sufficient condition for the stability of the network dynamics. Experimental
results demonstrate the effectiveness of this new method, which achieves 18.86%
phone recognition error on the TIMIT benchmark for the core test set. The
result approaches the best result of 17.7%, which was obtained by using RNN
with long short-term memory (LSTM). The results also show that the proposed
primal-dual training method produces lower recognition errors than the popular
RNN methods developed earlier based on the carefully tuned threshold parameter
that heuristically prevents the gradient from exploding.
| [
"Jianshu Chen and Li Deng",
"['Jianshu Chen' 'Li Deng']"
] |
cs.SY cs.LG math.OC stat.ML | 10.1109/TCYB.2014.2319577 | 1311.6107 | null | null | http://arxiv.org/abs/1311.6107v3 | 2014-05-11T07:33:16Z | 2013-11-24T11:26:07Z | Off-policy reinforcement learning for $ H_\infty $ control design | The $H_\infty$ control design problem is considered for nonlinear systems
with unknown internal system model. It is known that the nonlinear $ H_\infty $
control problem can be transformed into solving the so-called
Hamilton-Jacobi-Isaacs (HJI) equation, which is a nonlinear partial
differential equation that is generally impossible to solve analytically.
Even worse, model-based approaches cannot be used for approximately solving HJI
equation, when the accurate system model is unavailable or costly to obtain in
practice. To overcome these difficulties, an off-policy reinforcement learning
(RL) method is introduced to learn the solution of HJI equation from real
system data instead of mathematical system model, and its convergence is
proved. In the off-policy RL method, the system data can be generated with
arbitrary policies rather than the evaluating policy, which is extremely
important and promising for practical systems. For implementation purpose, a
neural network (NN) based actor-critic structure is employed and a least-square
NN weight update algorithm is derived based on the method of weighted
residuals. Finally, the developed NN-based off-policy RL method is tested on a
linear F16 aircraft plant, and further applied to a rotational/translational
actuator system.
| [
"['Biao Luo' 'Huai-Ning Wu' 'Tingwen Huang']",
"Biao Luo, Huai-Ning Wu, Tingwen Huang"
] |
cs.LG | null | 1311.6184 | null | null | http://arxiv.org/pdf/1311.6184v4 | 2014-05-09T23:01:46Z | 2013-11-24T23:28:49Z | Bounding the Test Log-Likelihood of Generative Models | Several interesting generative learning algorithms involve a complex
probability distribution over many random variables, involving intractable
normalization constants or latent variable normalization. Some of them may even
not have an analytic expression for the unnormalized probability function and
no tractable approximation. This makes it difficult to estimate the quality of
these models, once they have been trained, or to monitor their quality (e.g.
for early stopping) while training. A previously proposed method is based on
constructing a non-parametric density estimator of the model's probability
function from samples generated by the model. We revisit this idea, propose a
more efficient estimator, and prove that it provides a lower bound on the true
test log-likelihood, and an unbiased estimator as the number of generated
samples goes to infinity, although one that incorporates the effect of poor
mixing. We further propose a biased variant of the estimator that can be used
reliably with a finite number of samples for the purpose of model comparison.
| [
"['Yoshua Bengio' 'Li Yao' 'Kyunghyun Cho']",
"Yoshua Bengio, Li Yao and Kyunghyun Cho"
] |
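A minimal sketch of the sample-based evaluation idea: draw samples from a generative model, fit a non-parametric (Parzen/KDE) density to them, and score held-out test points. The stand-in Gaussian "model", bandwidth rule, and sample sizes are assumptions, and this is the generic Parzen estimator rather than the improved or bias-corrected estimators proposed in the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Stand-in "generative model": we can sample from it but pretend its density is unknown.
def sample_model(n):
    return rng.normal(loc=0.0, scale=1.0, size=n)

model_samples = sample_model(5000)                         # samples generated by the model
test_points = rng.normal(loc=0.0, scale=1.0, size=200)     # held-out test data

# Fit a kernel density estimator to the generated samples and score the test set.
kde = gaussian_kde(model_samples)                          # bandwidth chosen by Scott's rule
est_loglik = np.mean(np.log(kde(test_points)))
true_loglik = np.mean(-0.5 * test_points**2 - 0.5 * np.log(2 * np.pi))
print(est_loglik, true_loglik)    # the KDE-based estimate typically sits at or below the true value
```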
cs.LG | 10.1109/MLSP.2013.6661985 | 1311.6211 | null | null | http://arxiv.org/abs/1311.6211v1 | 2013-11-25T05:27:41Z | 2013-11-25T05:27:41Z | Novelty Detection Under Multi-Instance Multi-Label Framework | Novelty detection plays an important role in machine learning and signal
processing. This paper studies novelty detection in a new setting where the
data object is represented as a bag of instances and associated with multiple
class labels, referred to as multi-instance multi-label (MIML) learning.
Contrary to the common assumption in MIML that each instance in a bag belongs
to one of the known classes, in novelty detection, we focus on the scenario
where bags may contain novel-class instances. The goal is to determine, for any
given instance in a new bag, whether it belongs to a known class or a novel
class. Detecting novelty in the MIML setting captures many real-world phenomena
and has many potential applications. For example, in a collection of tagged
images, the tag may only cover a subset of objects existing in the images.
Discovering an object whose class has not been previously tagged can be useful
for the purpose of soliciting a label for the new object class. To address this
novel problem, we present a discriminative framework for detecting new class
instances. Experiments demonstrate the effectiveness of our proposed method,
and reveal that the presence of unlabeled novel instances in training bags is
helpful to the detection of such instances in the testing stage.
| [
"['Qi Lou' 'Raviv Raich' 'Forrest Briggs' 'Xiaoli Z. Fern']",
"Qi Lou, Raviv Raich, Forrest Briggs, Xiaoli Z. Fern"
] |
cs.SI cs.IR cs.LG stat.ML | null | 1311.6334 | null | null | http://arxiv.org/pdf/1311.6334v1 | 2013-11-25T15:25:28Z | 2013-11-25T15:25:28Z | Learning Reputation in an Authorship Network | The problem of searching for experts in a given academic field is hugely
important in both industry and academia. We study exactly this issue with
respect to a database of authors and their publications. The idea is to use
Latent Semantic Indexing (LSI) and Latent Dirichlet Allocation (LDA) to perform
topic modelling in order to find authors who have worked in a query field. We
then construct a coauthorship graph and motivate the use of influence
maximisation and a variety of graph centrality measures to obtain a ranked list
of experts. The ranked lists are further improved using a Markov Chain-based
rank aggregation approach. The complete method is readily scalable to large
datasets. To demonstrate the efficacy of the approach we report on an extensive
set of computational simulations using the Arnetminer dataset. An improvement
in mean average precision is demonstrated over the baseline case of simply
using the order of authors found by the topic models.
| [
"Charanpal Dhanjal (LTCI), St\\'ephan Cl\\'emen\\c{c}on (LTCI)",
"['Charanpal Dhanjal' 'Stéphan Clémençon']"
] |
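A minimal sketch of the two-stage pipeline described above, topic modelling over author documents followed by a centrality score on a coauthorship graph; the toy corpus, the edges, and the use of PageRank as the centrality measure are illustrative assumptions, and the Markov-chain rank aggregation step is omitted.

```python
import networkx as nx
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus: one document (concatenated publications) per author.
docs = {
    "alice": "kernel methods support vector machines learning",
    "bob":   "deep learning neural networks vision",
    "carol": "kernel learning gaussian processes regression",
}
authors = list(docs)
X = CountVectorizer().fit_transform(docs.values())

# Topic model over author documents; author relevance = weight on the query topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
topic_weights = lda.transform(X)                 # shape: (n_authors, n_topics)

# Coauthorship graph and a graph centrality measure (PageRank here).
G = nx.Graph([("alice", "carol"), ("bob", "carol")])
centrality = nx.pagerank(G)

# Combine topical relevance (topic 0, say) with graph centrality into one ranking.
scores = {a: topic_weights[i, 0] * centrality[a] for i, a in enumerate(authors)}
print(sorted(scores, key=scores.get, reverse=True))
```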
cs.MM cs.IR cs.LG | null | 1311.6355 | null | null | http://arxiv.org/pdf/1311.6355v1 | 2013-11-06T12:20:35Z | 2013-11-06T12:20:35Z | Exploration in Interactive Personalized Music Recommendation: A
Reinforcement Learning Approach | Current music recommender systems typically act in a greedy fashion by
recommending songs with the highest user ratings. Greedy recommendation,
however, is suboptimal over the long term: it does not actively gather
information on user preferences and fails to recommend novel songs that are
potentially interesting. A successful recommender system must balance the needs
to explore user preferences and to exploit this information for recommendation.
This paper presents a new approach to music recommendation by formulating this
exploration-exploitation trade-off as a reinforcement learning task called the
multi-armed bandit. To learn user preferences, it uses a Bayesian model, which
accounts for both audio content and the novelty of recommendations. A
piecewise-linear approximation to the model and a variational inference
algorithm are employed to speed up Bayesian inference. One additional benefit
of our approach is a single unified model for both music recommendation and
playlist generation. Both simulation results and a user study indicate strong
potential for the new approach.
| [
"Xinxi Wang, Yi Wang, David Hsu, Ye Wang",
"['Xinxi Wang' 'Yi Wang' 'David Hsu' 'Ye Wang']"
] |
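To illustrate the exploration-exploitation trade-off cast as a multi-armed bandit, here is a minimal Thompson-sampling sketch with Bernoulli "like" feedback; the Beta-Bernoulli model is a stand-in assumption and does not implement the paper's Bayesian model over audio content and novelty.

```python
import numpy as np

rng = np.random.default_rng(0)
true_like_prob = np.array([0.2, 0.5, 0.8])   # unknown probability that the user likes each song

# Beta(1, 1) prior per song, updated after every thumbs-up / thumbs-down.
alpha = np.ones(3)
beta = np.ones(3)

for t in range(500):
    theta = rng.beta(alpha, beta)            # sample plausible like-probabilities (exploration)
    song = int(np.argmax(theta))             # recommend the song that looks best in this sample
    reward = rng.random() < true_like_prob[song]
    alpha[song] += reward                    # posterior update (exploiting the feedback)
    beta[song] += 1 - reward

pulls = alpha + beta - 2
print(pulls)                                 # most recommendations go to the best song
print(alpha / (alpha + beta))                # posterior mean like-probability per song
```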
stat.ML cs.CV cs.LG | null | 1311.6371 | null | null | http://arxiv.org/pdf/1311.6371v3 | 2013-11-27T07:43:48Z | 2013-11-25T17:22:22Z | On Approximate Inference for Generalized Gaussian Process Models | A generalized Gaussian process model (GGPM) is a unifying framework that
encompasses many existing Gaussian process (GP) models, such as GP regression,
classification, and counting. In the GGPM framework, the observation likelihood
of the GP model is itself parameterized using the exponential family
distribution (EFD). In this paper, we consider efficient algorithms for
approximate inference on GGPMs using the general form of the EFD. A particular
GP model and its associated inference algorithms can then be formed by changing
the parameters of the EFD, thus greatly simplifying its creation for
task-specific output domains. We demonstrate the efficacy of this framework by
creating several new GP models for regressing to non-negative reals and to real
intervals. We also consider a closed-form Taylor approximation for efficient
inference on GGPMs, and elaborate on its connections with other model-specific
heuristic closed-form approximations. Finally, we present a comprehensive set
of experiments to compare approximate inference algorithms on a wide variety of
GGPMs.
| [
"Lifeng Shang and Antoni B. Chan",
"['Lifeng Shang' 'Antoni B. Chan']"
] |
cs.LG stat.ML | null | 1311.6392 | null | null | http://arxiv.org/pdf/1311.6392v2 | 2013-12-27T13:21:40Z | 2013-11-25T18:31:40Z | A Comprehensive Approach to Universal Piecewise Nonlinear Regression
Based on Trees | In this paper, we investigate adaptive nonlinear regression and introduce
tree based piecewise linear regression algorithms that are highly efficient and
provide significantly improved performance with guaranteed upper bounds in an
individual sequence manner. We use a tree notion in order to partition the
space of regressors in a nested structure. The introduced algorithms adapt not
only their regression functions but also the complete tree structure while
achieving the performance of the "best" linear mixture of a doubly exponential
number of partitions, with a computational complexity only polynomial in the
number of nodes of the tree. While constructing these algorithms, we also avoid
using any artificial "weighting" of models (with highly data dependent
parameters) and, instead, directly minimize the final regression error, which
is the ultimate performance goal. The introduced methods are generic such that
they can readily incorporate different tree construction methods such as random
trees in their framework and can use different regressor or partitioning
functions as demonstrated in the paper.
| [
"['N. Denizcan Vanli' 'Suleyman S. Kozat']",
"N. Denizcan Vanli and Suleyman S. Kozat"
] |
cs.LG | null | 1311.6396 | null | null | http://arxiv.org/pdf/1311.6396v2 | 2014-01-22T21:00:52Z | 2013-11-25T18:36:26Z | A Unified Approach to Universal Prediction: Generalized Upper and Lower
Bounds | We study sequential prediction of real-valued, arbitrary and unknown
sequences under the squared error loss as well as the best parametric predictor
out of a large, continuous class of predictors. Inspired by recent results from
computational learning theory, we refrain from any statistical assumptions and
define the performance with respect to the class of general parametric
predictors. In particular, we present generic lower and upper bounds on this
relative performance by transforming the prediction task into a parameter
learning problem. We first introduce the lower bounds on this relative
performance in the mixture of experts framework, where we show that for any
sequential algorithm, there always exists a sequence for which the performance
of the sequential algorithm is lower bounded by zero. We then introduce a
sequential learning algorithm to predict such arbitrary and unknown sequences,
and calculate upper bounds on its total squared prediction error for every
bounded sequence. We further show that in some scenarios we achieve matching
lower and upper bounds demonstrating that our algorithms are optimal in a
strong minimax sense such that their performances cannot be improved further.
As an interesting result we also prove that for the worst case scenario, the
performance of randomized algorithms can be achieved by sequential algorithms
so that randomized algorithms do not improve the performance.
| [
"['N. Denizcan Vanli' 'Suleyman S. Kozat']",
"N. Denizcan Vanli and Suleyman S. Kozat"
] |
math.OC cs.LG stat.ML | null | 1311.6425 | null | null | http://arxiv.org/pdf/1311.6425v1 | 2013-11-25T19:57:49Z | 2013-11-25T19:57:49Z | Robust Multimodal Graph Matching: Sparse Coding Meets Graph Matching | Graph matching is a challenging problem with very important applications in a
wide range of fields, from image and video analysis to biological and
biomedical problems. We propose a robust graph matching algorithm inspired by
sparsity-related techniques. We cast the problem, resembling group or
collaborative sparsity formulations, as a non-smooth convex optimization
problem that can be efficiently solved using augmented Lagrangian techniques.
The method can deal with weighted or unweighted graphs, as well as multimodal
data, where different graphs represent different types of data. The proposed
approach is also naturally integrated with collaborative graph inference
techniques, solving general network inference problems where the observed
variables, possibly coming from different modalities, are not in
correspondence. The algorithm is tested and compared with state-of-the-art
graph matching techniques in both synthetic and real graphs. We also present
results on multimodal graphs and applications to collaborative inference of
brain connectivity from alignment-free functional magnetic resonance imaging
(fMRI) data. The code is publicly available.
| [
"['Marcelo Fiori' 'Pablo Sprechmann' 'Joshua Vogelstein' 'Pablo Musé'\n 'Guillermo Sapiro']",
"Marcelo Fiori, Pablo Sprechmann, Joshua Vogelstein, Pablo Mus\\'e,\n Guillermo Sapiro"
] |
cs.CV cs.LG stat.ML | null | 1311.6510 | null | null | http://arxiv.org/pdf/1311.6510v1 | 2013-11-25T22:59:24Z | 2013-11-25T22:59:24Z | Are all training examples equally valuable? | When learning a new concept, not all training examples may prove equally
useful for training: some may have higher or lower training value than others.
The goal of this paper is to bring to the attention of the vision community the
following considerations: (1) some examples are better than others for training
detectors or classifiers, and (2) in the presence of better examples, some
examples may negatively impact performance and removing them may be beneficial.
In this paper, we propose an approach for measuring the training value of an
example, and use it for ranking and greedily sorting examples. We test our
methods on different vision tasks, models, datasets and classifiers. Our
experiments show that the performance of current state-of-the-art detectors and
classifiers can be improved when training on a subset, rather than the whole
training set.
| [
"['Agata Lapedriza' 'Hamed Pirsiavash' 'Zoya Bylinskii' 'Antonio Torralba']",
"Agata Lapedriza and Hamed Pirsiavash and Zoya Bylinskii and Antonio\n Torralba"
] |
cs.IT cs.LG math.IT | 10.1109/TIT.2013.2273353 | 1311.6536 | null | null | http://arxiv.org/abs/1311.6536v1 | 2013-11-26T01:23:45Z | 2013-11-26T01:23:45Z | Universal Codes from Switching Strategies | We discuss algorithms for combining sequential prediction strategies, a task
which can be viewed as a natural generalisation of the concept of universal
coding. We describe a graphical language based on Hidden Markov Models for
defining prediction strategies, and we provide both existing and new models as
examples. The models include efficient, parameterless models for switching
between the input strategies over time, including a model for the case where
switches tend to occur in clusters, and finally a new model for the scenario
where the prediction strategies have a known relationship, and where jumps are
typically between strongly related ones. This last model is relevant for coding
time series data where parameter drift is expected. As theoretical contributions
we introduce an interpolation construction that is useful in the development
and analysis of new algorithms, and we establish a new sophisticated lemma for
analysing the individual sequence regret of parameterised models.
| [
"['Wouter M. Koolen' 'Steven de Rooij']",
"Wouter M. Koolen and Steven de Rooij"
] |
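A minimal sketch of the classical Fixed-Share style switching scheme, which combines input prediction strategies with exponential weights and a small probability of switching; it is related to, but much simpler than, the HMM-based switching models described above, and the learning rate, switching rate, and squared-loss setup are assumptions of this example.

```python
import numpy as np

def fixed_share(expert_preds, y, eta=2.0, alpha=0.05):
    """Combine expert predictions with exponential weights plus a Fixed-Share switch.

    expert_preds : (T, K) array, prediction of each of K experts at each time step
    y            : (T,) array of outcomes in [0, 1]
    eta          : learning rate; alpha : prior probability of switching experts
    """
    T, K = expert_preds.shape
    w = np.full(K, 1.0 / K)
    combined = np.empty(T)
    for t in range(T):
        combined[t] = w @ expert_preds[t]            # weighted-average prediction
        loss = (expert_preds[t] - y[t]) ** 2         # squared loss per expert
        v = w * np.exp(-eta * loss)                  # exponential-weights update
        v /= v.sum()
        w = (1 - alpha) * v + alpha / K              # share mass to allow switches over time
    return combined

# Two experts: one good early, the other good late; Fixed-Share tracks the switch.
T = 200
y = np.concatenate([np.zeros(100), np.ones(100)])
experts = np.column_stack([np.zeros(T), np.ones(T)])
preds = fixed_share(experts, y)
print(np.mean((preds - y) ** 2))                     # small tracking loss overall
```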
cs.LG math.OC stat.ML | null | 1311.6547 | null | null | http://arxiv.org/pdf/1311.6547v4 | 2015-07-14T16:07:49Z | 2013-11-26T03:36:21Z | Practical Inexact Proximal Quasi-Newton Method with Global Complexity
Analysis | Recently several methods were proposed for sparse optimization which make
careful use of second-order information [10, 28, 16, 3] to improve local
convergence rates. These methods construct a composite quadratic approximation
using Hessian information, optimize this approximation using a first-order
method, such as coordinate descent and employ a line search to ensure
sufficient descent. Here we propose a general framework, which includes
slightly modified versions of existing algorithms and also a new algorithm,
which uses limited memory BFGS Hessian approximations, and provide a novel
global convergence rate analysis, which covers methods that solve subproblems
via coordinate descent.
| [
"['Katya Scheinberg' 'Xiaocheng Tang']",
"Katya Scheinberg and Xiaocheng Tang"
] |
cs.LG | 10.1007/978-3-319-57454-7_53 | 1311.6556 | null | null | http://arxiv.org/abs/1311.6556v2 | 2014-12-08T07:34:34Z | 2013-11-26T05:13:18Z | Double Ramp Loss Based Reject Option Classifier | We consider the problem of learning reject option classifiers. The goodness
of a reject option classifier is quantified using the $0$-$d$-$1$ loss function, wherein
a loss $d \in (0, 0.5)$ is assigned for rejection. In this paper, we propose the {\em
double ramp loss} function, which gives a continuous upper bound for the $0$-$d$-$1$
loss. Our approach is based on minimizing regularized risk under the double
ramp loss using {\em difference of convex (DC) programming}. We show the
effectiveness of our approach through experiments on synthetic and benchmark
datasets. Our approach performs better than the state of the art reject option
classification approaches.
| [
"Naresh Manwani, Kalpit Desai, Sanand Sasidharan, Ramasubramanian\n Sundararajan",
"['Naresh Manwani' 'Kalpit Desai' 'Sanand Sasidharan'\n 'Ramasubramanian Sundararajan']"
] |
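For concreteness, a minimal sketch of the $0$-$d$-$1$ loss for a reject-option decision rule with a real-valued score and a rejection band; the threshold, score distribution, and data are illustrative assumptions, and this does not implement the double ramp loss surrogate or the DC programming solver.

```python
import numpy as np

def zero_d_one_loss(scores, y, d=0.2, rho=0.5):
    """0-d-1 loss: 0 if correctly classified with |score| > rho, d if rejected, 1 otherwise.

    scores : real-valued classifier outputs f(x)
    y      : labels in {-1, +1}
    d      : cost of rejection, assumed in (0, 0.5); rho : width of the rejection band
    """
    reject = np.abs(scores) <= rho
    wrong = (np.sign(scores) != y) & ~reject
    return np.mean(reject * d + wrong * 1.0)

rng = np.random.default_rng(0)
y = rng.choice([-1, 1], size=200)
scores = y * rng.normal(1.0, 1.0, size=200)      # noisy scores correlated with the labels
print(zero_d_one_loss(scores, y, d=0.2, rho=0.5))
```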
cs.AI cs.LG stat.ML | null | 1311.6594 | null | null | http://arxiv.org/pdf/1311.6594v2 | 2014-05-20T10:17:31Z | 2013-11-26T09:03:10Z | Auto-adaptative Laplacian Pyramids for High-dimensional Data Analysis | Non-linear dimensionality reduction techniques such as manifold learning
algorithms have become a common way for processing and analyzing
high-dimensional patterns that often have attached a target that corresponds to
the value of an unknown function. Their application to new points consists of
two steps: first, embedding the new data point into the low dimensional space
and then, estimating the function value on the test point from its neighbors in
the embedded space.
However, finding the low-dimensional representation of a test point, while easy
for simple but often not powerful enough procedures such as PCA, can be much
more complicated for methods that rely on some kind of eigenanalysis, such as
Spectral Clustering (SC) or Diffusion Maps (DM). Similarly, when a target
function is to be evaluated, averaging methods like nearest neighbors may give
unstable results if the function is noisy. Thus, the smoothing of the target
function with respect to the intrinsic, low-dimensional representation that
describes the geometric structure of the examined data is a challenging task.
In this paper we propose Auto-adaptive Laplacian Pyramids (ALP), an extension
of the standard Laplacian Pyramids model that incorporates a modified LOOCV
procedure that avoids the large cost of the standard one and offers the
following advantages: (i) it selects automatically the optimal function
resolution (stopping time) adapted to the data and its noise, (ii) it is easy
to apply as it does not require parameterization, (iii) it does not overfit the
training set and (iv) it adds no extra cost compared to other classical
interpolation methods. We illustrate numerically ALP's behavior on a synthetic
problem and apply it to the computation of the DM projection of new patterns
and to the extension to them of target function values on a radiation
forecasting problem over very high dimensional patterns.
| [
"\\'Angela Fern\\'andez, Neta Rabin, Dalia Fishelov, Jos\\'e R. Dorronsoro",
"['Ángela Fernández' 'Neta Rabin' 'Dalia Fishelov' 'José R. Dorronsoro']"
] |