title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Active Semi-Supervised Learning Using Sampling Theory for Graph Signals | cs.LG stat.ML | We consider the problem of offline, pool-based active semi-supervised
learning on graphs. This problem is important when the labeled data is scarce
and expensive whereas unlabeled data is easily available. The data points are
represented by the vertices of an undirected graph with the similarity between
them captured by the edge weights. Given a target number of nodes to label, the
goal is to choose those nodes that are most informative and then predict the
unknown labels. We propose a novel framework for this problem based on our
recent results on sampling theory for graph signals. A graph signal is a
real-valued function defined on each node of the graph. A notion of frequency
for such signals can be defined using the spectrum of the graph Laplacian
matrix. The sampling theory for graph signals aims to extend the traditional
Nyquist-Shannon sampling theory by allowing us to identify the class of graph
signals that can be reconstructed from their values on a subset of vertices.
This approach allows us to define a criterion for active learning based on
sampling set selection which aims at maximizing the frequency of the signals
that can be reconstructed from their samples on the set. Experiments show the
effectiveness of our method.
| Akshay Gadde, Aamir Anis and Antonio Ortega | null | 1405.4324 | null | null |
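To make the reconstruction idea in the abstract above concrete, the following sketch builds a toy path graph, forms a bandlimited signal from the lowest Laplacian eigenvectors, and recovers it from a few sampled vertices by least squares. The graph, the sampled set, and the bandwidth are illustrative choices; the paper's greedy sampling-set selection criterion is not reproduced here.

```python
import numpy as np

# Toy undirected graph: a path of n nodes with unit edge weights.
n = 12
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0

L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian
lam, U = np.linalg.eigh(L)              # eigenvalues act as graph "frequencies"

# A k-bandlimited graph signal: a combination of the k lowest-frequency modes.
k = 3
rng = np.random.default_rng(0)
f = U[:, :k] @ rng.normal(size=k)

# Sample the signal on a subset S of vertices (the "labeled" nodes) and
# reconstruct by least squares on the low-frequency basis; this works whenever
# U[S, :k] has full column rank.
S = [0, 5, 11]
coeffs, *_ = np.linalg.lstsq(U[S, :k], f[S], rcond=None)
f_hat = U[:, :k] @ coeffs

print("max reconstruction error:", np.max(np.abs(f - f_hat)))
```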
Identification of functionally related enzymes by learning-to-rank
methods | cs.LG cs.CE q-bio.QM stat.ML | Enzyme sequences and structures are routinely used in the biological sciences
as queries to search for functionally related enzymes in online databases. To
this end, one usually departs from some notion of similarity, comparing two
enzymes by looking for correspondences in their sequences, structures or
surfaces. For a given query, the search operation results in a ranking of the
enzymes in the database, from very similar to dissimilar enzymes, while
information about the biological function of annotated database enzymes is
ignored.
In this work we show that rankings of that kind can be substantially improved
by applying kernel-based learning algorithms. This approach enables the
detection of statistical dependencies between similarities of the active cleft
and the biological function of annotated enzymes. This is in contrast to
search-based approaches, which do not take annotated training data into
account. Similarity measures based on the active cleft are known to outperform
sequence-based or structure-based measures under certain conditions. We
consider the Enzyme Commission (EC) classification hierarchy for obtaining
annotated enzymes during the training phase. The results of a set of sizeable
experiments indicate a consistent and significant improvement for a set of
similarity measures that exploit information about small cavities in the
surface of enzymes.
| Michiel Stock, Thomas Fober, Eyke H\"ullermeier, Serghei Glinca,
Gerhard Klebe, Tapio Pahikkala, Antti Airola, Bernard De Baets, Willem
Waegeman | null | 1405.4394 | null | null |
A two-step learning approach for solving full and almost full cold start
problems in dyadic prediction | cs.LG | Dyadic prediction methods operate on pairs of objects (dyads), aiming to
infer labels for out-of-sample dyads. We consider the full and almost full cold
start problem in dyadic prediction, a setting that occurs when both objects in
an out-of-sample dyad have not been observed during training, or if one of them
has been observed, but very few times. A popular approach for addressing this
problem is to train a model that makes predictions based on a pairwise feature
representation of the dyads, or, in case of kernel methods, based on a tensor
product pairwise kernel. As an alternative to such a kernel approach, we
introduce a novel two-step learning algorithm that borrows ideas from the
fields of pairwise learning and spectral filtering. We show theoretically that
the two-step method is very closely related to the tensor product kernel
approach, and experimentally that it yields a slightly better predictive
performance. Moreover, unlike existing tensor product kernel methods, the
two-step method allows closed-form solutions for training and parameter
selection via cross-validation estimates both in the full and almost full cold
start settings, making the approach much more efficient and straightforward to
implement.
| Tapio Pahikkala, Michiel Stock, Antti Airola, Tero Aittokallio,
Bernard De Baets, Willem Waegeman | null | 1405.4423 | null | null |
Machine Learning in Wireless Sensor Networks: Algorithms, Strategies,
and Applications | cs.NI cs.LG | Wireless sensor networks monitor dynamic environments that change rapidly
over time. This dynamic behavior is either caused by external factors or
initiated by the system designers themselves. To adapt to such conditions,
sensor networks often adopt machine learning techniques to eliminate the need
for unnecessary redesign. Machine learning also inspires many practical
solutions that maximize resource utilization and prolong the lifespan of the
network. In this paper, we present an extensive literature review over the
period 2002-2013 of machine learning methods that were used to address common
issues in wireless sensor networks (WSNs). The advantages and disadvantages of
each proposed algorithm are evaluated against the corresponding problem. We
also provide a comparative guide to aid WSN designers in developing suitable
machine learning solutions for their specific application challenges.
| Mohammad Abu Alsheikh, Shaowei Lin, Dusit Niyato and Hwee-Pink Tan | 10.1109/COMST.2014.2320099 | 1405.4463 | null | null |
Online Learning with Composite Loss Functions | cs.LG | We study a new class of online learning problems where each of the online
algorithm's actions is assigned an adversarial value, and the loss of the
algorithm at each step is a known and deterministic function of the values
assigned to its recent actions. This class includes problems where the
algorithm's loss is the minimum over the recent adversarial values, the maximum
over the recent values, or a linear combination of the recent values. We
analyze the minimax regret of this class of problems when the algorithm
receives bandit feedback, and prove that when the minimum or maximum functions
are used, the minimax regret is $\tilde \Omega(T^{2/3})$ (so called hard online
learning problems), and when a linear function is used, the minimax regret is
$\tilde O(\sqrt{T})$ (so called easy learning problems). Previously, the only
online learning problem that was known to be provably hard was the multi-armed
bandit with switching costs.
| Ofer Dekel, Jian Ding, Tomer Koren, Yuval Peres | null | 1405.4471 | null | null |
A Distributed Algorithm for Training Nonlinear Kernel Machines | cs.LG | This paper concerns the distributed training of nonlinear kernel machines on
Map-Reduce. We show that a re-formulation of the Nystr\"om approximation based
solution, solved using gradient-based techniques, is well suited for
this, especially when it is necessary to work with a large number of basis
points. The main advantages of this approach are: avoidance of computing the
pseudo-inverse of the kernel sub-matrix corresponding to the basis points;
simplicity and efficiency of the distributed part of the computations; and,
friendliness to stage-wise addition of basis points. We implement the method
using an AllReduce tree on Hadoop and demonstrate its value on a few large
benchmark datasets.
| Dhruv Mahajan, S. Sathiya Keerthi, S. Sundararajan | null | 1405.4543 | null | null |
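A hedged single-machine sketch of the kind of formulation the abstract above describes: a kernel machine parameterized by a set of basis points and trained with plain gradient descent, so that no pseudo-inverse of the basis kernel sub-matrix is ever formed. The AllReduce/Hadoop distribution of the gradient computation is omitted, and the RBF kernel, step size, and regularization constant are illustrative rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data.
X = rng.normal(size=(500, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)

# Basis points: a small random subset of the data (30 of 500).
B = X[rng.choice(len(X), size=30, replace=False)]

def rbf(A, C, gamma=0.5):
    d2 = ((A[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K_nb = rbf(X, B)                      # n x m kernel block: data vs. basis points

# Model: f(x) = sum_j alpha_j k(x, b_j).  Minimize a ridge-regularized squared
# loss by gradient descent on alpha -- no (pseudo-)inverse of K_bb is formed.
alpha = np.zeros(B.shape[0])
lam, step = 1e-3, 0.05
for _ in range(1000):
    resid = K_nb @ alpha - y
    alpha -= step * (K_nb.T @ resid / len(y) + lam * alpha)

print("training RMSE:", np.sqrt(np.mean((K_nb @ alpha - y) ** 2)))
```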
A distributed block coordinate descent method for training $l_1$
regularized linear classifiers | cs.LG | Distributed training of $l_1$ regularized classifiers has received great
attention recently. Most existing methods approach this problem by taking steps
obtained from a quadratic approximation of the objective that is
decoupled at the individual variable level. These methods are designed for
multicore and MPI platforms where communication costs are low. They are
inefficient on systems such as Hadoop running on a cluster of commodity
machines where communication costs are substantial. In this paper we design a
distributed algorithm for $l_1$ regularization that is much better suited for
such systems than existing algorithms. A careful cost analysis is used to
support these points and motivate our method. The main idea of our algorithm is
to do block optimization of many variables on the actual objective function
within each computing node; this increases the computational cost per step so
that it is matched with the communication cost, and decreases the number of outer
iterations, thus yielding a faster overall method. Distributed Gauss-Seidel and
Gauss-Southwell greedy schemes are used for choosing variables to update in
each step. We establish global convergence theory for our algorithm, including
Q-linear rate of convergence. Experiments on two benchmark problems show our
method to be much faster than existing methods.
| Dhruv Mahajan, S. Sathiya Keerthi, S. Sundararajan | null | 1405.4544 | null | null |
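To make the per-coordinate update concrete, here is a minimal single-machine cyclic coordinate-descent routine for the $l_1$-regularized least-squares objective. It shows the soft-thresholding step that the block updates build on, but none of the distributed Gauss-Seidel/Gauss-Southwell machinery or cost analysis of the paper; the toy problem and penalty are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_passes=100):
    """Cyclic coordinate descent for  0.5*||y - X w||^2 + lam*||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    r = y - X @ w                       # residual, maintained incrementally
    for _ in range(n_passes):
        for j in range(d):
            r += X[:, j] * w[j]         # remove coordinate j's contribution
            rho = X[:, j] @ r
            w[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * w[j]         # add updated contribution back
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
w_true = np.zeros(50)
w_true[:5] = [2.0, -3.0, 1.5, 4.0, -2.5]
y = X @ w_true + 0.01 * rng.normal(size=200)
w_hat = lasso_cd(X, y, lam=10.0)
print("nonzeros recovered:", np.flatnonzero(np.abs(w_hat) > 1e-6))
```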
ESSP: An Efficient Approach to Minimizing Dense and Nonsubmodular Energy
Functions | cs.CV cs.LG | Many recent advances in computer vision have demonstrated the impressive
power of dense and nonsubmodular energy functions in solving visual labeling
problems. However, minimizing such energies is challenging. None of the existing
techniques (such as s-t graph cut, QPBO, BP and TRW-S) can individually do this
well. In this paper, we present an efficient method, namely ESSP, to optimize
binary MRFs with arbitrary pairwise potentials, which could be nonsubmodular
and with dense connectivity. We also provide a comparative study of our
approach and several recent promising methods. From our study, we make some
reasonable recommendations of combining existing methods that perform the best
in different situations for this challenging problem. Experimental results
validate that for dense and nonsubmodular energy functions, the proposed
approach can usually obtain lower energies than the best combination of other
techniques using comparably reasonable time.
| Wei Feng and Jiaya Jia and Zhi-Qiang Liu | null | 1405.4583 | null | null |
A Parallel Way to Select the Parameters of SVM Based on the Ant
Optimization Algorithm | cs.NE cs.LG | A large amount of experimental data shows that Support Vector Machine (SVM)
algorithm has obvious advantages in text classification, handwriting
recognition, image classification, bioinformatics, and some other fields. To
some degree, the optimization of SVM depends on its kernel function and slack
variable, which are determined by its parameters $\delta$ and $c$ in the
classification function. That is to say, to optimize the SVM algorithm, the
choice of these two parameters plays a major role. Ant Colony Optimization
(ACO) is an optimization algorithm that simulates ants finding an optimal path.
Drawing on the available literature, we combine the ACO algorithm with a
parallel algorithm to find good parameters.
| Chao Zhang, Hong-cen Mei, Hao Yang | null | 1405.4589 | null | null |
Modelling Data Dispersion Degree in Automatic Robust Estimation for
Multivariate Gaussian Mixture Models with an Application to Noisy Speech
Processing | cs.CL cs.LG stat.ML | The trimming scheme with a prefixed cutoff portion is known as a method of
improving the robustness of statistical models such as multivariate Gaussian
mixture models (MGMMs) in small scale tests by alleviating the impacts of
outliers. However, when this method is applied to real-world data, such as
noisy speech processing, it is hard to know the optimal cut-off portion to
remove the outliers and sometimes removes useful data samples as well. In this
paper, we propose a new method based on measuring the dispersion degree (DD) of
the training data to avoid this problem, so as to realise automatic robust
estimation for MGMMs. The DD model is studied by using two different measures.
For each one, we theoretically prove that the DD of the data samples in a
context of MGMMs approximately obeys a specific (chi or chi-square)
distribution. The proposed method is evaluated on a real-world application with
a moderately-sized speaker recognition task. Experiments show that the proposed
method can significantly improve the robustness of the conventional training
method of GMMs for speaker recognition.
| Dalei Wu and Haiqing Wu | null | 1405.4599 | null | null |
On the saddle point problem for non-convex optimization | cs.LG cs.NE | A central challenge to many fields of science and engineering involves
minimizing non-convex error functions over continuous, high dimensional spaces.
Gradient descent or quasi-Newton methods are almost ubiquitously used to
perform such minimizations, and it is often thought that a main source of
difficulty for the ability of these local methods to find the global minimum is
the proliferation of local minima with much higher error than the global
minimum. Here we argue, based on results from statistical physics, random
matrix theory, and neural network theory, that a deeper and more profound
difficulty originates from the proliferation of saddle points, not local
minima, especially in high dimensional problems of practical interest. Such
saddle points are surrounded by high error plateaus that can dramatically slow
down learning, and give the illusory impression of the existence of a local
minimum. Motivated by these arguments, we propose a new algorithm, the
saddle-free Newton method, that can rapidly escape high dimensional saddle
points, unlike gradient descent and quasi-Newton methods. We apply this
algorithm to deep neural network training, and provide preliminary numerical
evidence for its superior performance.
| Razvan Pascanu, Yann N. Dauphin, Surya Ganguli and Yoshua Bengio | null | 1405.4604 | null | null |
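A toy illustration of the core modification the abstract above describes: rescaling the Newton step by the absolute values of the Hessian eigenvalues so that the update escapes a saddle instead of being attracted to it. The exact 2x2 eigendecomposition below is for clarity only; the paper's practical algorithm relies on approximate curvature information, which is not reproduced here.

```python
import numpy as np

# f(x, y) = x^2 - y^2 has a saddle point at the origin.
def grad(p):
    x, y = p
    return np.array([2 * x, -2 * y])

def hessian(p):
    return np.array([[2.0, 0.0], [0.0, -2.0]])

def saddle_free_newton_step(p):
    g = grad(p)
    lam, U = np.linalg.eigh(hessian(p))
    # |H|^{-1} g : rescale by the absolute eigenvalues (the "saddle-free" step).
    return U @ ((U.T @ g) / np.abs(lam))

p = np.array([1e-3, 1e-3])               # start very close to the saddle
for _ in range(5):
    p = p - 0.5 * saddle_free_newton_step(p)
print("saddle-free Newton iterate:", p)   # the y-coordinate grows away from 0

# A plain Newton step  -H^{-1} g  at the same starting point would jump straight
# to the saddle, because the negative curvature flips the sign of the update.
```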
Lipschitz Bandits: Regret Lower Bounds and Optimal Algorithms | cs.LG | We consider stochastic multi-armed bandit problems where the expected reward
is a Lipschitz function of the arm, and where the set of arms is either
discrete or continuous. For discrete Lipschitz bandits, we derive asymptotic
problem specific lower bounds for the regret satisfied by any algorithm, and
propose OSLB and CKL-UCB, two algorithms that efficiently exploit the Lipschitz
structure of the problem. In fact, we prove that OSLB is asymptotically
optimal, as its asymptotic regret matches the lower bound. The regret analysis
of our algorithms relies on a new concentration inequality for weighted sums of
KL divergences between the empirical distributions of rewards and their true
distributions. For continuous Lipschitz bandits, we propose to first discretize
the action space, and then apply OSLB or CKL-UCB, algorithms that provably
exploit the structure efficiently. This approach is shown, through numerical
experiments, to significantly outperform existing algorithms that directly deal
with the continuous set of arms. Finally the results and algorithms are
extended to contextual bandits with similarities.
| Stefan Magureanu and Richard Combes and Alexandre Proutiere | null | 1405.4758 | null | null |
Scalable Semidefinite Relaxation for Maximum A Posterior Estimation | cs.LG cs.CV cs.IT math.IT math.OC stat.ML | Maximum a posteriori (MAP) inference over discrete Markov random fields is a
fundamental task spanning a wide spectrum of real-world applications, which is
known to be NP-hard for general graphs. In this paper, we propose a novel
semidefinite relaxation formulation (referred to as SDR) to estimate the MAP
assignment. Algorithmically, we develop an accelerated variant of the
alternating direction method of multipliers (referred to as SDPAD-LR) that can
effectively exploit the special structure of the new relaxation. Encouragingly,
the proposed procedure allows solving SDR for large-scale problems, e.g.,
problems on a grid graph comprising hundreds of thousands of variables with
multiple states per node. Compared with prior SDP solvers, SDPAD-LR is capable
of attaining comparable accuracy while exhibiting remarkably improved
scalability, in contrast to the commonly held belief that semidefinite
relaxation can only been applied on small-scale MRF problems. We have evaluated
the performance of SDR on various benchmark datasets including OPENGM2 and PIC
in terms of both the quality of the solutions and computation time.
Experimental results demonstrate that for a broad class of problems, SDPAD-LR
outperforms state-of-the-art algorithms in producing better MAP assignment in
an efficient manner.
| Qixing Huang, Yuxin Chen, and Leonidas Guibas | null | 1405.4807 | null | null |
Screening Tests for Lasso Problems | cs.LG stat.ML | This paper is a survey of dictionary screening for the lasso problem. The
lasso problem seeks a sparse linear combination of the columns of a dictionary
to best match a given target vector. This sparse representation has proven
useful in a variety of subsequent processing and decision tasks. For a given
target vector, dictionary screening quickly identifies a subset of dictionary
columns that will receive zero weight in a solution of the corresponding lasso
problem. These columns can be removed from the dictionary prior to solving the
lasso problem without impacting the optimality of the solution obtained. This
has two potential advantages: it reduces the size of the dictionary, allowing
the lasso problem to be solved with less resources, and it may speed up
obtaining a solution. Using a geometrically intuitive framework, we provide
basic insights for understanding useful lasso screening tests and their
limitations. We also provide illustrative numerical studies on several
datasets.
| Zhen James Xiang, Yun Wang and Peter J. Ramadge | 10.1109/TPAMI.2016.2568185 | 1405.4897 | null | null |
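One of the simplest tests in the family this survey covers is the classical SAFE rule of El Ghaoui et al.; the sketch below implements that rule as an illustrative example and does not reproduce the geometric framework developed in the paper. The toy dictionary and the choice of regularization level are assumptions for the demo.

```python
import numpy as np

def safe_screen(X, y, lam):
    """Classical SAFE rule for  0.5*||y - X w||^2 + lam*||w||_1.

    Column j is guaranteed to receive zero weight in every solution if
        |x_j^T y| < lam - ||x_j|| * ||y|| * (lam_max - lam) / lam_max,
    where lam_max = max_j |x_j^T y| (the smallest lam for which w = 0)."""
    corr = np.abs(X.T @ y)
    lam_max = corr.max()
    thresh = lam - np.linalg.norm(X, axis=0) * np.linalg.norm(y) * (lam_max - lam) / lam_max
    return corr >= thresh                 # True = keep, False = safely discarded

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))
X /= np.linalg.norm(X, axis=0)            # unit-norm dictionary columns
y = X[:, :3] @ np.array([1.0, -2.0, 0.5]) + 0.05 * rng.normal(size=100)

# Screening is most effective when lam is a sizeable fraction of lam_max.
lam = 0.8 * np.abs(X.T @ y).max()
keep = safe_screen(X, y, lam)
print(f"columns kept by SAFE screening: {keep.sum()} / {X.shape[1]}")
```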
Convex Optimization: Algorithms and Complexity | math.OC cs.CC cs.LG cs.NA stat.ML | This monograph presents the main complexity theorems in convex optimization
and their corresponding algorithms. Starting from the fundamental theory of
black-box optimization, the material progresses towards recent advances in
structural optimization and stochastic optimization. Our presentation of
black-box optimization, strongly influenced by Nesterov's seminal book and
Nemirovski's lecture notes, includes the analysis of cutting plane methods, as
well as (accelerated) gradient descent schemes. We also pay special attention
to non-Euclidean settings (relevant algorithms include Frank-Wolfe, mirror
descent, and dual averaging) and discuss their relevance in machine learning.
We provide a gentle introduction to structural optimization with FISTA (to
optimize a sum of a smooth and a simple non-smooth term), saddle-point mirror
prox (Nemirovski's alternative to Nesterov's smoothing), and a concise
description of interior point methods. In stochastic optimization we discuss
stochastic gradient descent, mini-batches, random coordinate descent, and
sublinear algorithms. We also briefly touch upon convex relaxation of
combinatorial problems and the use of randomness to round solutions, as well as
random walks based methods.
| S\'ebastien Bubeck | null | 1405.4980 | null | null |
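As a small companion to the monograph's discussion of non-Euclidean methods, here is entropic mirror descent (exponentiated gradient) on the probability simplex for a toy objective; the step size, iteration count, and objective are illustrative choices, not anything prescribed by the text.

```python
import numpy as np

def entropic_mirror_descent(grad, d, n_iter=500, eta=0.1):
    """Mirror descent on the probability simplex with the entropy mirror map
    (a.k.a. exponentiated gradient).  `grad` maps a simplex point to the
    gradient of the objective at that point."""
    p = np.full(d, 1.0 / d)
    for _ in range(n_iter):
        p = p * np.exp(-eta * grad(p))    # multiplicative update ...
        p /= p.sum()                      # ... followed by renormalization
    return p

# Toy objective: Euclidean projection of a target vector onto the simplex,
# f(p) = 0.5 * ||p - c||^2, whose gradient is p - c.
c = np.array([0.7, 0.4, -0.2, 0.1])
p_star = entropic_mirror_descent(lambda p: p - c, d=4)
print("solution on the simplex:", np.round(p_star, 3))
```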
Unimodal Bandits: Regret Lower Bounds and Optimal Algorithms | cs.LG stat.ML | We consider stochastic multi-armed bandits where the expected reward is a
unimodal function over partially ordered arms. This important class of problems
has been recently investigated in (Cope 2009, Yu 2011). The set of arms is
either discrete, in which case arms correspond to the vertices of a finite
graph whose structure represents similarity in rewards, or continuous, in which
case arms belong to a bounded interval. For discrete unimodal bandits, we
derive asymptotic lower bounds for the regret achieved under any algorithm, and
propose OSUB, an algorithm whose regret matches this lower bound. Our algorithm
optimally exploits the unimodal structure of the problem, and surprisingly, its
asymptotic regret does not depend on the number of arms. We also provide a
regret upper bound for OSUB in non-stationary environments where the expected
rewards smoothly evolve over time. The analytical results are supported by
numerical experiments showing that OSUB performs significantly better than the
state-of-the-art algorithms. For continuous sets of arms, we provide a brief
discussion. We show that combining an appropriate discretization of the set of
arms with the UCB algorithm yields an order-optimal regret, and in practice,
outperforms recently proposed algorithms designed to exploit the unimodal
structure.
| Richard Combes and Alexandre Proutiere | null | 1405.5096 | null | null |
Predicting Online Video Engagement Using Clickstreams | cs.LG cs.IR | In the nascent days of e-content delivery, having a superior product was
enough to give companies an edge against the competition. With today's fiercely
competitive market, one needs to be multiple steps ahead, especially when it
comes to understanding consumers. Focusing on a large set of web portals owned
and managed by a private communications company, we propose methods by which
these sites' clickstream data can be used to provide a deep understanding of
their visitors, as well as their interests and preferences. We further expand
the use of this data to show that it can be effectively used to predict user
engagement to video streams.
| Everaldo Aguiar, Saurabh Nagrecha, Nitesh V. Chawla | null | 1405.5147 | null | null |
Gaussian Approximation of Collective Graphical Models | cs.LG cs.AI stat.ML | The Collective Graphical Model (CGM) models a population of independent and
identically distributed individuals when only collective statistics (i.e.,
counts of individuals) are observed. Exact inference in CGMs is intractable,
and previous work has explored Markov Chain Monte Carlo (MCMC) and MAP
approximations for learning and inference. This paper studies Gaussian
approximations to the CGM. As the population grows large, we show that the CGM
distribution converges to a multivariate Gaussian distribution (GCGM) that
maintains the conditional independence properties of the original CGM. If the
observations are exact marginals of the CGM or marginals that are corrupted by
Gaussian noise, inference in the GCGM approximation can be computed efficiently
in closed form. If the observations follow a different noise model (e.g.,
Poisson), then expectation propagation provides efficient and accurate
approximate inference. The accuracy and speed of GCGM inference is compared to
the MCMC and MAP methods on a simulated bird migration problem. The GCGM
matches or exceeds the accuracy of the MAP method while being significantly
faster.
| Li-Ping Liu, Daniel Sheldon, Thomas G. Dietterich | null | 1405.5156 | null | null |
Approximate resilience, monotonicity, and the complexity of agnostic
learning | cs.LG cs.CC cs.DM | A function $f$ is $d$-resilient if all its Fourier coefficients of degree at
most $d$ are zero, i.e., $f$ is uncorrelated with all low-degree parities. We
study the notion of $\mathit{approximate}$ $\mathit{resilience}$ of Boolean
functions, where we say that $f$ is $\alpha$-approximately $d$-resilient if $f$
is $\alpha$-close to a $[-1,1]$-valued $d$-resilient function in $\ell_1$
distance. We show that approximate resilience essentially characterizes the
complexity of agnostic learning of a concept class $C$ over the uniform
distribution. Roughly speaking, if all functions in a class $C$ are far from
being $d$-resilient then $C$ can be learned agnostically in time $n^{O(d)}$ and
conversely, if $C$ contains a function close to being $d$-resilient then
agnostic learning of $C$ in the statistical query (SQ) framework of Kearns has
complexity of at least $n^{\Omega(d)}$. This characterization is based on the
duality between $\ell_1$ approximation by degree-$d$ polynomials and
approximate $d$-resilience that we establish. In particular, it implies that
$\ell_1$ approximation by low-degree polynomials, known to be sufficient for
agnostic learning over product distributions, is in fact necessary.
Focusing on monotone Boolean functions, we exhibit the existence of
near-optimal $\alpha$-approximately
$\widetilde{\Omega}(\alpha\sqrt{n})$-resilient monotone functions for all
$\alpha>0$. Prior to our work, it was conceivable even that every monotone
function is $\Omega(1)$-far from any $1$-resilient function. Furthermore, we
construct simple, explicit monotone functions based on ${\sf Tribes}$ and ${\sf
CycleRun}$ that are close to highly resilient functions. Our constructions are
based on a fairly general resilience analysis and amplification. These
structural results, together with the characterization, imply nearly optimal
lower bounds for agnostic learning of monotone juntas.
| Dana Dachman-Soled and Vitaly Feldman and Li-Yang Tan and Andrew Wan
and Karl Wimmer | null | 1405.5268 | null | null |
Fast Distributed Coordinate Descent for Non-Strongly Convex Losses | math.OC cs.LG | We propose an efficient distributed randomized coordinate descent method for
minimizing regularized non-strongly convex loss functions. The method attains
the optimal $O(1/k^2)$ convergence rate, where $k$ is the iteration counter.
The core of the work is the theoretical study of stepsize parameters. We have
implemented the method on Archer - the largest supercomputer in the UK - and
show that the method is capable of solving a (synthetic) LASSO optimization
problem with 50 billion variables.
| Olivier Fercoq and Zheng Qu and Peter Richt\'arik and Martin
Tak\'a\v{c} | null | 1405.5300 | null | null |
Compressive Sampling Using EM Algorithm | stat.ME cs.LG stat.ML | Conventional approaches of sampling signals follow the celebrated theorem of
Nyquist and Shannon. Compressive sampling, introduced by Donoho, Romberg and
Tao, is a new paradigm that goes against the conventional methods in data
acquisition and provides a way of recovering signals using fewer samples than
the traditional methods use. Here we suggest an alternative way of
reconstructing the original signals in compressive sampling using EM algorithm.
We first propose a naive approach which has certain computational difficulties
and subsequently modify it to a new approach which performs better than the
conventional methods of compressive sampling. The comparison of the different
approaches and the performance of the new approach has been studied using
simulated data.
| Atanu Kumar Ghosh, Arnab Chakraborty | null | 1405.5311 | null | null |
Off-Policy Shaping Ensembles in Reinforcement Learning | cs.AI cs.LG | Recent advances in gradient temporal-difference methods make it possible to
learn multiple value functions off-policy in parallel without sacrificing
convergence guarantees or computational efficiency. This opens up new
possibilities for sound ensemble techniques in reinforcement learning. In this
work we propose learning an ensemble of policies related through
potential-based shaping rewards. The ensemble induces a combination policy by
using a voting mechanism on its components. Learning happens in real time, and
we empirically show the combination policy to outperform the individual
policies of the ensemble.
| Anna Harutyunyan and Tim Brys and Peter Vrancx and Ann Nowe | null | 1405.5358 | null | null |
On Learning Where To Look | cs.CV cs.LG | Current automatic vision systems face two major challenges: scalability and
extreme variability of appearance. First, the computational time required to
process an image typically scales linearly with the number of pixels in the
image, therefore limiting the resolution of input images to thumbnail size.
Second, variability in appearance and pose of the objects constitute a major
hurdle for robust recognition and detection. In this work, we propose a model
that makes baby steps towards addressing these challenges. We describe a
learning based method that recognizes objects through a series of glimpses.
This system performs an amount of computation that scales with the complexity
of the input rather than its number of pixels. Moreover, the proposed method is
potentially more robust to changes in appearance since its parameters are
learned in a data driven manner. Preliminary experiments on a handwritten
dataset of digits demonstrate the computational advantages of this approach.
| Marc'Aurelio Ranzato | null | 1405.5488 | null | null |
Kernel Mean Shrinkage Estimators | stat.ML cs.LG | A mean function in a reproducing kernel Hilbert space (RKHS), or a kernel
mean, is central to kernel methods in that it is used by many classical
algorithms such as kernel principal component analysis, and it also forms the
core inference step of modern kernel methods that rely on embedding probability
distributions in RKHSs. Given a finite sample, an empirical average has been
used commonly as a standard estimator of the true kernel mean. Despite a
widespread use of this estimator, we show that it can be improved thanks to the
well-known Stein phenomenon. We propose a new family of estimators called
kernel mean shrinkage estimators (KMSEs), which benefit from both theoretical
justifications and good empirical performance. The results demonstrate that the
proposed estimators outperform the standard one, especially in a "large d,
small n" paradigm.
| Krikamol Muandet, Bharath Sriperumbudur, Kenji Fukumizu, Arthur
Gretton, Bernhard Sch\"olkopf | null | 1405.5505 | null | null |
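A minimal sketch of the underlying idea of shrinking the empirical kernel mean embedding toward a simpler target (here the zero function). The shrinkage amount is left as a free parameter chosen by the user, and the paper's specific KMSE family and its theory are not reproduced; the kernel and data are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))             # sample from the underlying distribution

# Empirical kernel mean:  mu_hat(.) = (1/n) sum_i k(x_i, .)
# A shrinkage estimator toward the zero function simply rescales the weights:
#   mu_alpha(.) = (1 - alpha) * mu_hat(.)
n = len(X)
alpha = 0.2                               # shrinkage amount (free parameter here)
w_emp = np.full(n, 1.0 / n)
w_shrunk = (1 - alpha) * w_emp

# RKHS norms follow from the Gram matrix:  ||sum_i w_i k(x_i,.)||^2 = w^T K w.
K = rbf_kernel(X, X)
print("||mu_hat||^2_H    =", w_emp @ K @ w_emp)
print("||mu_shrunk||^2_H =", w_shrunk @ K @ w_shrunk)

# Evaluating the (shrunk) embedding at new points:
X_new = rng.normal(size=(5, 3))
print("mu_shrunk at new points:", np.round(rbf_kernel(X_new, X) @ w_shrunk, 3))
```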
Descriptor Matching with Convolutional Neural Networks: a Comparison to
SIFT | cs.CV cs.LG | Latest results indicate that features learned via convolutional neural
networks outperform previous descriptors on classification tasks by a large
margin. It has been shown that these networks still work well when they are
applied to datasets or recognition tasks different from those they were trained
on. However, descriptors like SIFT are not only used in recognition but also
for many correspondence problems that rely on descriptor matching. In this
paper we compare features from various layers of convolutional neural nets to
standard SIFT descriptors. We consider a network that was trained on ImageNet
and another one that was trained without supervision. Surprisingly,
convolutional neural networks clearly outperform SIFT on descriptor matching.
This paper has been merged with arXiv:1406.6909
| Philipp Fischer, Alexey Dosovitskiy, Thomas Brox | null | 1405.5769 | null | null |
Node Classification in Uncertain Graphs | cs.DB cs.LG | In many real applications that use and analyze networked data, the links in
the network graph may be erroneous, or derived from probabilistic techniques.
In such cases, the node classification problem can be challenging, since the
unreliability of the links may affect the final results of the classification
process. If the information about link reliability is not used explicitly, the
classification accuracy in the underlying network may be affected adversely. In
this paper, we focus on situations that require the analysis of the uncertainty
that is present in the graph structure. We study the novel problem of node
classification in uncertain graphs, by treating uncertainty as a first-class
citizen. We propose two techniques based on a Bayes model and automatic
parameter selection, and show that the incorporation of uncertainty in the
classification process as a first-class citizen is beneficial. We
experimentally evaluate the proposed approach using different real data sets,
and study the behavior of the algorithms under different conditions. The
results demonstrate the effectiveness and efficiency of our approach.
| Michele Dallachiesa and Charu Aggarwal and Themis Palpanas | null | 1405.5829 | null | null |
Learning to Generate Networks | cs.LG cs.SI physics.soc-ph | We investigate the problem of learning to generate complex networks from
data. Specifically, we consider whether deep belief networks, dependency
networks, and members of the exponential random graph family can learn to
generate networks whose complex behavior is consistent with a set of input
examples. We find that the deep model is able to capture the complex behavior
of small networks, but that no model is able to capture this behavior for networks
with more than a handful of nodes.
| James Atwood, Don Towsley, Krista Gile, and David Jensen | null | 1405.5868 | null | null |
Asymmetric LSH (ALSH) for Sublinear Time Maximum Inner Product Search
(MIPS) | stat.ML cs.DS cs.IR cs.LG | We present the first provably sublinear time algorithm for approximate
\emph{Maximum Inner Product Search} (MIPS). Our proposal is also the first
hashing algorithm for searching with (un-normalized) inner product as the
underlying similarity measure. Finding hashing schemes for MIPS was considered
hard. We formally show that the existing Locality Sensitive Hashing (LSH)
framework is insufficient for solving MIPS, and then we extend the existing LSH
framework to allow asymmetric hashing schemes. Our proposal is based on an
interesting mathematical phenomenon in which inner products, after independent
asymmetric transformations, can be converted into the problem of approximate
near neighbor search. This key observation makes efficient sublinear hashing
scheme for MIPS possible. In the extended asymmetric LSH (ALSH) framework, we
provide an explicit construction of provably fast hashing scheme for MIPS. The
proposed construction and the extended LSH framework could be of independent
theoretical interest. Our proposed algorithm is simple and easy to implement.
We evaluate the method, for retrieving inner products, in the collaborative
filtering task of item recommendations on Netflix and Movielens datasets.
| Anshumali Shrivastava and Ping Li | null | 1405.5869 | null | null |
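The asymmetric transformation at the heart of the construction described above can be sketched in a few lines: append powers of the norm to the database vectors and matching constants to the query, after which Euclidean nearest-neighbor search approximately solves MIPS. Exact Euclidean search stands in below for the L2-LSH tables a real system would use, and the norm bound and number of appended terms are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))          # database vectors (unnormalized)
q = rng.normal(size=32)                  # query

m, U = 3, 0.5                            # appended terms and norm bound (illustrative)
scale = U / np.linalg.norm(X, axis=1).max()
Xs = X * scale                           # every scaled data norm is at most U < 1

# Asymmetric transforms: P(.) for data, Q(.) for queries.
norms = np.linalg.norm(Xs, axis=1, keepdims=True)
P = np.hstack([Xs] + [norms ** (2 ** (i + 1)) for i in range(m)])
Q = np.concatenate([q / np.linalg.norm(q), np.full(m, 0.5)])

# ||P(x) - Q(q)||^2 = const - 2*(scale/||q||)*<q, x> + ||x_scaled||^(2^(m+1)),
# and the last term vanishes as m grows, so the nearest transformed point is
# (approximately) the maximum-inner-product point.
approx = int(np.argmin(((P - Q) ** 2).sum(axis=1)))
exact = int(np.argmax(X @ q))
print("via asymmetric transform:", approx, "  exact MIPS:", exact)
```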
LASS: a simple assignment model with Laplacian smoothing | cs.LG math.OC stat.ML | We consider the problem of learning soft assignments of $N$ items to $K$
categories given two sources of information: an item-category similarity
matrix, which encourages items to be assigned to categories they are similar to
(and to not be assigned to categories they are dissimilar to), and an item-item
similarity matrix, which encourages similar items to have similar assignments.
We propose a simple quadratic programming model that captures this intuition.
We give necessary conditions for its solution to be unique, define an
out-of-sample mapping, and derive a simple, effective training algorithm based
on the alternating direction method of multipliers. The model predicts
reasonable assignments from even a few similarity values, and can be seen as a
generalization of semisupervised learning. It is particularly useful when items
naturally belong to multiple categories, as for example when annotating
documents with keywords or pictures with tags, with partially tagged items, or
when the categories have complex interrelations (e.g. hierarchical) that are
unknown.
| Miguel \'A. Carreira-Perpi\~n\'an and Weiran Wang | null | 1405.5960 | null | null |
On the Optimal Solution of Weighted Nuclear Norm Minimization | cs.CV cs.LG stat.ML | In recent years, the nuclear norm minimization (NNM) problem has been
attracting much attention in computer vision and machine learning. The NNM
problem capitalizes on its convexity and can be solved efficiently. The
standard nuclear norm regularizes all singular values equally, which is however
not flexible enough to fit real scenarios. Weighted nuclear norm minimization
(WNNM) is a natural extension and generalization of NNM. By assigning properly
different weights to different singular values, WNNM can lead to
state-of-the-art results in applications such as image denoising. Nevertheless,
so far the WNNM problem has not been completely solved to global optimality
due to its non-convexity in general cases. In this article, we study the
theoretical properties of WNNM and prove that WNNM can be equivalently
transformed into a quadratic programming problem with linear constraints. This
implies that WNNM is equivalent to a convex problem and its global optimum can
be readily achieved by off-the-shelf convex optimization solvers. We further
show that when the weights are non-descending, the globally optimal solution of
WNNM can be obtained in closed-form.
| Qi Xie, Deyu Meng, Shuhang Gu, Lei Zhang, Wangmeng Zuo, Xiangchu Feng
and Zongben Xu | null | 1405.6012 | null | null |
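The closed-form case mentioned at the end of the abstract (non-descending weights) is commonly written as weighted singular-value soft-thresholding; the sketch below assumes that form on a synthetic low-rank-plus-noise matrix, with the weights and noise level chosen purely for illustration.

```python
import numpy as np

def weighted_svt(Y, w):
    """Solve  min_X 0.5*||Y - X||_F^2 + sum_i w_i * sigma_i(X)
    when the weights are non-descending (w_1 <= w_2 <= ...) in the order of the
    descending singular values: weighted singular-value soft-thresholding."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - w, 0.0)) @ Vt

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 4)) @ rng.normal(size=(4, 60))   # rank-4 ground truth
Y = A + 0.1 * rng.normal(size=(60, 60))                   # noisy observation

# Non-descending weights: leading singular values are penalized less.
w = np.linspace(1.0, 30.0, min(Y.shape))
X_hat = weighted_svt(Y, w)

print("rank of estimate:", np.linalg.matrix_rank(X_hat))
print("relative error  :", np.linalg.norm(X_hat - A) / np.linalg.norm(A))
```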
Online Linear Optimization via Smoothing | cs.LG | We present a new optimization-theoretic approach to analyzing
Follow-the-Leader style algorithms, particularly in the setting where
perturbations are used as a tool for regularization. We show that adding a
strongly convex penalty function to the decision rule and adding stochastic
perturbations to data correspond to deterministic and stochastic smoothing
operations, respectively. We establish an equivalence between "Follow the
Regularized Leader" and "Follow the Perturbed Leader" up to the smoothness
properties. This intuition leads to a new generic analysis framework that
recovers and improves the previous known regret bounds of the class of
algorithms commonly known as Follow the Perturbed Leader.
| Jacob Abernethy, Chansoo Lee, Abhinav Sinha, Ambuj Tewari | null | 1405.6076 | null | null |
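A compact sketch of the "Follow the Perturbed Leader" side of the equivalence discussed above, in the experts setting: at each round the learner plays the action minimizing the perturbed cumulative loss. The exponential perturbation and its scale are illustrative choices and do not reproduce the paper's smoothing analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
T, K = 2000, 5                                  # rounds, number of actions/experts

# Toy loss sequence: action 2 is best on average.
loss = rng.uniform(size=(T, K))
loss[:, 2] -= 0.2

cum_loss = np.zeros(K)
eta = np.sqrt(T)                                # perturbation scale (illustrative, order sqrt(T))
total = 0.0
best_fixed = loss.sum(axis=0).min()             # loss of the best fixed action in hindsight

for t in range(T):
    perturb = rng.exponential(scale=eta, size=K)    # FTPL: perturb, then follow the leader
    action = int(np.argmin(cum_loss - perturb))
    total += loss[t, action]
    cum_loss += loss[t]

print("FTPL regret:", total - best_fixed)
```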
An enhanced neural network based approach towards object extraction | cs.CV cs.LG cs.NE | The improvements in spectral and spatial resolution of the satellite images
have facilitated the automatic extraction and identification of the features
from satellite images and aerial photographs. An automatic object extraction
method is presented for extracting and identifying the various objects from
satellite images and the accuracy of the system is verified with regard to IRS
satellite images. The system is based on neural network and simulates the
process of visual interpretation from remote sensing images and hence increases
the efficiency of image analysis. This approach obtains the basic
characteristics of the various features and the performance is enhanced by the
automatic learning approach, intelligent interpretation, and intelligent
interpolation. The major advantage of the method is its simplicity and that the
system identifies the features not only based on pixel value but also based on
the shape, Haralick features, etc. of the objects. Further, the system allows
flexibility for identifying the features within the same category based on size
and shape. The successful application of the system verified its effectiveness
and the accuracy of the system was assessed by ground truth verification.
| S.K. Katiyar and P.V. Arun | null | 1405.6137 | null | null |
A Bi-clustering Framework for Consensus Problems | cs.CV cs.LG stat.ML | We consider grouping as a general characterization for problems such as
clustering, community detection in networks, and multiple parametric model
estimation. We are interested in merging solutions from different grouping
algorithms, distilling all their good qualities into a consensus solution. In
this paper, we propose a bi-clustering framework and perspective for reaching
consensus in such grouping problems. In particular, this is the first time that
the task of finding/fitting multiple parametric models to a dataset is formally
posed as a consensus problem. We highlight the equivalence of these tasks and
establish the connection with the computational Gestalt program, that seeks to
provide a psychologically-inspired detection theory for visual events. We also
present a simple but powerful bi-clustering algorithm, specially tuned to the
nature of the problem we address, though general enough to handle many
different instances inscribed within our characterization. The presentation is
accompanied with diverse and extensive experimental results in clustering,
community detection, and multiple parametric model estimation in image
processing applications.
| Mariano Tepper and Guillermo Sapiro | 10.1137/140967325 | 1405.6159 | null | null |
Automated Fabric Defect Inspection: A Survey of Classifiers | cs.CV cs.LG | Quality control at each stage of production in textile industry has become a
key factor for survival in the highly competitive global market.
Problems of manual fabric defect inspection are lack of accuracy and high time
consumption, where early and accurate fabric defect detection is a significant
phase of quality control. Computer vision based, i.e. automated fabric defect
inspection systems are thought by many researchers of different countries to be
very useful to resolve these problems. There are two major challenges to be
resolved to attain a successful automated fabric defect inspection system. They
are defect detection and defect classification. In this work, we discuss
different techniques used for automated fabric defect classification, then show
a survey of classifiers used in automated fabric defect inspection systems, and
finally, compare these classifiers by using performance metrics. This work is
expected to be very useful for the researchers in the area of automated fabric
defect inspection to understand and evaluate the many potential options in this
field.
| Md. Tarek Habib, Rahat Hossain Faisal, M. Rokonuzzaman, Farruk Ahmed | 10.5121/ijfcst.2014.4102 | 1405.6177 | null | null |
Coupled Item-based Matrix Factorization | cs.LG cs.IR | The essence of the cold start and sparsity challenges in Recommender Systems
(RS) is that the extant techniques, such as Collaborative Filtering (CF) and
Matrix Factorization (MF), mainly rely on the user-item rating matrix, which
sometimes is not informative enough for predicting recommendations. To solve
these challenges, the objective item attributes are incorporated as
complementary information. However, most of the existing methods for inferring
the relationships between items assume that the attributes are "independently
and identically distributed (iid)", which does not always hold in reality. In
fact, the attributes are more or less coupled with each other by some implicit
relationships. Therefore, in this paper we propose an attribute-based coupled
similarity measure to capture the implicit relationships between items. We then
integrate the implicit item coupling into MF to form the Coupled Item-based
Matrix Factorization (CIMF) model. Experimental results on two open data sets
demonstrate that CIMF outperforms the benchmark methods.
| Fangfang Li, Guandong Xu, Longbing Cao | null | 1405.6223 | null | null |
Efficient Model Learning for Human-Robot Collaborative Tasks | cs.RO cs.AI cs.LG cs.SY | We present a framework for learning human user models from joint-action
demonstrations that enables the robot to compute a robust policy for a
collaborative task with a human. The learning takes place completely
automatically, without any human intervention. First, we describe the
clustering of demonstrated action sequences into different human types using an
unsupervised learning algorithm. These demonstrated sequences are also used by
the robot to learn a reward function that is representative for each type,
through the employment of an inverse reinforcement learning algorithm. The
learned model is then used as part of a Mixed Observability Markov Decision
Process formulation, wherein the human type is a partially observable variable.
With this framework, we can infer, either offline or online, the human type of
a new user that was not included in the training set, and can compute a policy
for the robot that will be aligned to the preference of this new user and will
be robust to deviations of the human actions from prior demonstrations. Finally
we validate the approach using data collected in human subject experiments, and
conduct proof-of-concept demonstrations in which a person performs a
collaborative task with a small industrial robot.
| Stefanos Nikolaidis, Keren Gu, Ramya Ramakrishnan, and Julie Shah | 10.1145/2696454.2696455 | 1405.6341 | null | null |
Multi-view Metric Learning for Multi-view Video Summarization | cs.CV cs.LG cs.MM | Traditional methods for video summarization are designed to generate summaries
for single-view video records; and thus they cannot fully exploit the
redundancy in multi-view video records. In this paper, we present a multi-view
metric learning framework for multi-view video summarization that combines the
advantages of maximum margin clustering with the disagreement minimization
criterion. The learning framework thus has the ability to find a metric that
best separates the data, and meanwhile to force the learned metric to maintain
original intrinsic information between data points, for example geometric
information. Facilitated by such a framework, a systematic solution to the
multi-view video summarization problem is developed. To the best of our
knowledge, this is the first time that multi-view video summarization has been addressed from
the viewpoint of metric learning. The effectiveness of the proposed method is
demonstrated by experiments.
| Yanwei Fu, Lingbo Wang, Yanwen Guo | null | 1405.6434 | null | null |
The role of dimensionality reduction in linear classification | cs.LG math.OC stat.ML | Dimensionality reduction (DR) is often used as a preprocessing step in
classification, but usually one first fixes the DR mapping, possibly using
label information, and then learns a classifier (a filter approach). Best
performance would be obtained by optimizing the classification error jointly
over DR mapping and classifier (a wrapper approach), but this is a difficult
nonconvex problem, particularly with nonlinear DR. Using the method of
auxiliary coordinates, we give a simple, efficient algorithm to train a
combination of nonlinear DR and a classifier, and apply it to a RBF mapping
with a linear SVM. This alternates steps where we train the RBF mapping and a
linear SVM as usual regression and classification, respectively, with a
closed-form step that coordinates both. The resulting nonlinear low-dimensional
classifier achieves classification errors competitive with the state-of-the-art
but is fast at training and testing, and allows the user to trade off runtime
for classification accuracy easily. We then study the role of nonlinear DR in
linear classification, and the interplay between the DR mapping, the number of
latent dimensions and the number of classes. When trained jointly, the DR
mapping takes an extreme role in eliminating variation: it tends to collapse
classes in latent space, erasing all manifold structure, and lay out class
centroids so they are linearly separable with maximum margin.
| Weiran Wang and Miguel \'A. Carreira-Perpi\~n\'an | null | 1405.6444 | null | null |
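A baseline ("filter") version of the pipeline studied above: fix an RBF mapping with k-means centers, then train a linear SVM on the mapped data. The joint wrapper optimization via auxiliary coordinates proposed in the paper is not reproduced, and the dataset, number of centers, and kernel width are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=600, noise=0.15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Fixed ("filter") nonlinear DR: an RBF mapping with k-means centers.
k, gamma = 20, 4.0
centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_tr).cluster_centers_

def rbf_map(A):
    d2 = ((A[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

clf = LinearSVC(C=1.0, max_iter=5000).fit(rbf_map(X_tr), y_tr)
lin = LinearSVC(C=1.0, max_iter=5000).fit(X_tr, y_tr)
print("RBF mapping + linear SVM accuracy:", clf.score(rbf_map(X_te), y_te))
print("purely linear SVM accuracy       :", lin.score(X_te, y_te))
```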
Fast and Robust Archetypal Analysis for Representation Learning | cs.CV cs.LG stat.ML | We revisit a pioneer unsupervised learning technique called archetypal
analysis, which is related to successful data analysis methods such as sparse
coding and non-negative matrix factorization. Since it was proposed, archetypal
analysis did not gain a lot of popularity even though it produces more
interpretable models than other alternatives. Because no efficient
implementation has ever been made publicly available, its application to
important scientific problems may have been severely limited. Our goal is to
bring back into favour archetypal analysis. We propose a fast optimization
scheme using an active-set strategy, and provide an efficient open-source
implementation interfaced with Matlab, R, and Python. Then, we demonstrate the
usefulness of archetypal analysis for computer vision tasks, such as codebook
learning, signal classification, and large image collection visualization.
| Yuansi Chen (EECS, INRIA Grenoble Rh\^one-Alpes / LJK Laboratoire Jean
Kuntzmann), Julien Mairal (INRIA Grenoble Rh\^one-Alpes / LJK Laboratoire
Jean Kuntzmann), Zaid Harchaoui (INRIA Grenoble Rh\^one-Alpes / LJK
Laboratoire Jean Kuntzmann) | null | 1405.6472 | null | null |
Automatic large-scale classification of bird sounds is strongly improved
by unsupervised feature learning | cs.SD cs.LG | Automatic species classification of birds from their sound is a computational
tool of increasing importance in ecology, conservation monitoring and vocal
communication studies. To make classification useful in practice, it is crucial
to improve its accuracy while ensuring that it can run at big data scales. Many
approaches use acoustic measures based on spectrogram-type data, such as the
Mel-frequency cepstral coefficient (MFCC) features which represent a
manually-designed summary of spectral information. However, recent work in
machine learning has demonstrated that features learnt automatically from data
can often outperform manually-designed feature transforms. Feature learning can
be performed at large scale and "unsupervised", meaning it requires no manual
data labelling, yet it can improve performance on "supervised" tasks such as
classification. In this work we introduce a technique for feature learning from
large volumes of bird sound recordings, inspired by techniques that have proven
useful in other domains. We experimentally compare twelve different feature
representations derived from the Mel spectrum (of which six use this
technique), using four large and diverse databases of bird vocalisations, with
a random forest classifier. We demonstrate that MFCCs are of limited power in
this context, leading to worse performance than the raw Mel spectral data.
Conversely, we demonstrate that unsupervised feature learning provides a
substantial boost over MFCCs and Mel spectra without adding computational
complexity after the model has been trained. The boost is particularly notable
for single-label classification tasks at large scale. The spectro-temporal
activations learned through our procedure resemble spectro-temporal receptive
fields calculated from avian primary auditory forebrain.
| Dan Stowell and Mark D. Plumbley | 10.7717/peerj.488 | 1405.6524 | null | null |
Robust Temporally Coherent Laplacian Protrusion Segmentation of 3D
Articulated Bodies | cs.CV cs.GR cs.LG | In motion analysis and understanding it is important to be able to fit a
suitable model or structure to the temporal series of observed data, in order
to describe motion patterns in a compact way, and to discriminate between them.
In an unsupervised context, i.e., no prior model of the moving object(s) is
available, such a structure has to be learned from the data in a bottom-up
fashion. In recent times, volumetric approaches in which the motion is captured
from a number of cameras and a voxel-set representation of the body is built
from the camera views, have gained ground due to attractive features such as
inherent view-invariance and robustness to occlusions. Automatic, unsupervised
segmentation of moving bodies along entire sequences, in a temporally-coherent
and robust way, has the potential to provide a means of constructing a
bottom-up model of the moving body, and track motion cues that may be later
exploited for motion classification. Spectral methods such as locally linear
embedding (LLE) can be useful in this context, as they preserve "protrusions",
i.e., high-curvature regions of the 3D volume, of articulated shapes, while
improving their separation in a lower dimensional space, making them in this
way easier to cluster. In this paper we therefore propose a spectral approach
to unsupervised and temporally-coherent body-protrusion segmentation along time
sequences. Volumetric shapes are clustered in an embedding space, clusters are
propagated in time to ensure coherence, and merged or split to accommodate
changes in the body's topology. Experiments on both synthetic and real
sequences of dense voxel-set data are shown. This supports the ability of the
proposed method to cluster body-parts consistently over time in a totally
unsupervised fashion, its robustness to sampling density and shape quality, and
its potential for bottom-up model construction.
| Fabio Cuzzolin, Diana Mateus and Radu Horaud | 10.1007/s11263-014-0754-0 | 1405.6563 | null | null |
Stabilized Nearest Neighbor Classifier and Its Statistical Properties | stat.ML cs.LG | The stability of statistical analysis is an important indicator for
reproducibility, which is one main principle of scientific method. It entails
that similar statistical conclusions can be reached based on independent
samples from the same underlying population. In this paper, we introduce a
general measure of classification instability (CIS) to quantify the sampling
variability of the prediction made by a classification method. Interestingly,
the asymptotic CIS of any weighted nearest neighbor classifier turns out to be
proportional to the Euclidean norm of its weight vector. Based on this concise
form, we propose a stabilized nearest neighbor (SNN) classifier, which
distinguishes itself from other nearest neighbor classifiers, by taking the
stability into consideration. In theory, we prove that SNN attains the minimax
optimal convergence rate in risk, and a sharp convergence rate in CIS. The
latter rate result is established for general plug-in classifiers under a
low-noise condition. Extensive simulated and real examples demonstrate that SNN
achieves a considerable improvement in CIS over existing nearest neighbor
classifiers, with comparable classification accuracy. We implement the
algorithm in a publicly available R package snn.
| Wei Sun (Yahoo Labs), Xingye Qiao (Binghamton) and Guang Cheng
(Purdue) | null | 1405.6642 | null | null |
On the Computational Intractability of Exact and Approximate Dictionary
Learning | cs.IT cs.LG math.IT | The efficient sparse coding and reconstruction of signal vectors via linear
observations has received a tremendous amount of attention over the last
decade. In this context, the automated learning of a suitable basis or
overcomplete dictionary from training data sets of certain signal classes for
use in sparse representations has turned out to be of particular importance
regarding practical signal processing applications. Most popular dictionary
learning algorithms involve NP-hard sparse recovery problems in each iteration,
which may give some indication about the complexity of dictionary learning but
does not constitute an actual proof of computational intractability. In this
technical note, we show that learning a dictionary with which a given set of
training signals can be represented as sparsely as possible is indeed NP-hard.
Moreover, we also establish hardness of approximating the solution to within
large factors of the optimal sparsity level. Furthermore, we give NP-hardness
and non-approximability results for a recent dictionary learning variation
called the sensor permutation problem. Along the way, we also obtain a new
non-approximability result for the classical sparse recovery problem from
compressed sensing.
| Andreas M. Tillmann | 10.1109/LSP.2014.2345761 | 1405.6664 | null | null |
Statistique et Big Data Analytics; Volum\'etrie, L'Attaque des Clones | stat.OT cs.LG math.ST stat.TH | This article assumes acquired the skills and expertise of a statistician in
unsupervised (NMF, k-means, SVD) and supervised learning (regression, CART,
random forest). What skills and knowledge do a statistician must acquire to
reach the "Volume" scale of big data? After a quick overview of the different
strategies available and especially of those imposed by Hadoop, the algorithms
of some available learning methods are outlined in order to understand how they
are adapted to the strong constraints of the Map-Reduce functionalities.
| Philippe Besse (IMT), Nathalie Villa-Vialaneix (MIAT INRA) | null | 1405.6676 | null | null |
Visualizing Random Forest with Self-Organising Map | cs.LG | Random Forest (RF) is a powerful ensemble method for classification and
regression tasks. It consists of decision trees set. Although, a single tree is
well interpretable for human, the ensemble of trees is a black-box model. The
popular technique to look inside the RF model is to visualize a RF proximity
matrix obtained on data samples with Multidimensional Scaling (MDS) method.
Herein, we present a novel method based on Self-Organising Maps (SOM) for
revealing intrinsic relationships in data that lie inside the RF used for
classification tasks. We propose an algorithm to learn the SOM with the
proximity matrix obtained from the RF. The visualizations of the RF proximity
matrix with MDS and with SOM are compared. Moreover, the SOM learned with the RF
proximity matrix has better classification accuracy than a SOM learned with the
Euclidean distance. The presented approach enables better understanding of the
RF and additionally improves the accuracy of the SOM.
| Piotr P{\l}o\'nski and Krzysztof Zaremba | 10.1007/978-3-319-07176-3_6 | 1405.6684 | null | null |
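A minimal sketch of the proximity-matrix idea described in the entry above, using scikit-learn; the dataset (iris), the forest size, and the use of MDS as the embedding are illustrative choices, and the SOM step (which would require an extra package such as minisom) is omitted.

```python
# Illustrative sketch (not the authors' code): build a Random Forest proximity
# matrix and embed it with MDS, the baseline the paper compares SOM against.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.manifold import MDS

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Leaf index of every sample in every tree; two samples are "close" if they
# often end up in the same leaf.
leaves = rf.apply(X)                       # shape (n_samples, n_trees)
same_leaf = (leaves[:, None, :] == leaves[None, :, :])
proximity = same_leaf.mean(axis=2)         # values in [0, 1]

# MDS works on dissimilarities, so embed 1 - proximity.
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(1.0 - proximity)
print(embedding.shape)                     # (150, 2)
```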
Proximal Reinforcement Learning: A New Theory of Sequential Decision
Making in Primal-Dual Spaces | cs.LG | In this paper, we set forth a new vision of reinforcement learning developed
by us over the past few years, one that yields mathematically rigorous
solutions to longstanding important questions that have remained unresolved:
(i) how to design reliable, convergent, and robust reinforcement learning
algorithms; (ii) how to guarantee that reinforcement learning satisfies
pre-specified "safety" guarantees, and remains in a stable region of the
parameter space; (iii) how to design "off-policy" temporal difference learning
algorithms in a reliable and stable manner; and finally (iv) how to integrate
the study of reinforcement learning into the rich theory of stochastic
optimization. In this paper, we provide detailed answers to all these questions
using the powerful framework of proximal operators.
The key idea that emerges is the use of primal dual spaces connected through
the use of a Legendre transform. This allows temporal difference updates to
occur in dual spaces, allowing a variety of important technical advantages. The
Legendre transform elegantly generalizes past algorithms for solving
reinforcement learning problems, such as natural gradient methods, which we
show relate closely to the previously unconnected framework of mirror descent
methods. Equally importantly, proximal operator theory enables the systematic
development of operator splitting methods that show how to safely and reliably
decompose complex products of gradients that occur in recent variants of
gradient-based temporal difference learning. This key technical innovation
makes it possible to finally design "true" stochastic gradient methods for
reinforcement learning. Finally, Legendre transforms enable a variety of other
benefits, including modeling sparsity and domain geometry. Our work builds
extensively on recent work on the convergence of saddle-point algorithms, and
on the theory of monotone operators.
| Sridhar Mahadevan, Bo Liu, Philip Thomas, Will Dabney, Steve Giguere,
Nicholas Jacek, Ian Gemp, Ji Liu | null | 1405.6757 | null | null |
Layered Logic Classifiers: Exploring the `And' and `Or' Relations | stat.ML cs.LG | Designing effective and efficient classifiers for pattern analysis is a key
problem in machine learning and computer vision. Many solutions to the
problem require performing logic operations such as `and', `or', and `not'.
Classification and regression trees (CART) include these operations explicitly.
Other methods such as neural networks, SVM, and boosting learn/compute a
weighted sum on features (weak classifiers), which weakly perform the 'and' and
'or' operations. However, it is hard for these classifiers to deal with the
'xor' pattern directly. In this paper, we propose layered logic classifiers for
patterns of complicated distributions by combining the `and', `or', and `not'
operations. The proposed algorithm is very general and easy to implement. We
test the classifiers on several typical datasets from the Irvine repository and
two challenging vision applications, object segmentation and pedestrian
detection. We observe significant improvements on all the datasets over the
widely used decision stump based AdaBoost algorithm. The resulting classifiers
have much less training complexity than decision tree based AdaBoost, and can
be applied in a wide range of domains.
| Zhuowen Tu and Piotr Dollar and Yingnian Wu | null | 1405.6804 | null | null |
Supervised Dictionary Learning by a Variational Bayesian Group Sparse
Nonnegative Matrix Factorization | cs.CV cs.LG stat.ML | Nonnegative matrix factorization (NMF) with group sparsity constraints is
formulated as a probabilistic graphical model and, assuming some observed data
have been generated by the model, a feasible variational Bayesian algorithm is
derived for learning model parameters. When used in a supervised learning
scenario, NMF is most often utilized as an unsupervised feature extractor
followed by classification in the obtained feature subspace. By mapping the
class labels to the more general concept of groups, which underlie the sparsity
of the coefficients, the proposed group sparse NMF model incorporates class
label information to find low-dimensional, label-driven dictionaries that not
only aim to represent the data faithfully but are also suitable for class
discrimination. Experiments performed in face recognition and facial expression
recognition domains point to advantages of classification in such label-driven
feature subspaces over classification in feature subspaces obtained in an
unsupervised manner.
| Ivan Ivek | null | 1405.6914 | null | null |
Large Scale, Large Margin Classification using Indefinite Similarity
Measures | cs.LG cs.CV stat.ML | Despite the success of the popular kernelized support vector machines, they
have two major limitations: they are restricted to Positive Semi-Definite (PSD)
kernels, and their training complexity scales at least quadratically with the
size of the data. Many natural measures of similarity between pairs of samples
are not PSD, e.g., invariant kernels, and those that are implicitly or explicitly
defined by latent variable models. In this paper, we investigate scalable
approaches for using indefinite similarity measures in large margin frameworks.
In particular we show that a normalization of similarity to a subset of the
data points constitutes a representation suitable for linear classifiers. The
result is a classifier which is competitive to kernelized SVM in terms of
accuracy, despite having better training and test time complexities.
Experimental results demonstrate that on CIFAR-10 dataset, the model equipped
with similarity measures invariant to rigid and non-rigid deformations, can be
made more than 5 times sparser while being more accurate than kernelized SVM
using RBF kernels.
| Omid Aghazadeh and Stefan Carlsson | null | 1405.6922 | null | null |
Futility Analysis in the Cross-Validation of Machine Learning Models | stat.ML cs.LG | Many machine learning models have important structural tuning parameters that
cannot be directly estimated from the data. The common tactic for setting these
parameters is to use resampling methods, such as cross--validation or the
bootstrap, to evaluate a candidate set of values and choose the best based on
some pre--defined criterion. Unfortunately, this process can be time consuming.
However, the model tuning process can be streamlined by adaptively resampling
candidate values so that settings that are clearly sub-optimal can be
discarded. The notion of futility analysis is introduced in this context. An
example is shown that illustrates how adaptive resampling can be used to reduce
training time. Simulation studies are used to understand how the potential
speed--up is affected by parallel processing techniques.
| Max Kuhn | null | 1405.6974 | null | null |
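A rough sketch of the adaptive-resampling idea from the entry above, not the paper's exact futility-analysis procedure: after each resample, candidate settings whose running mean accuracy trails the current best by an assumed margin are discarded, so later resamples only evaluate the survivors. The SVM cost grid, fold count, and margin are all illustrative.

```python
# Adaptive resampling for model tuning (conceptual sketch only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, random_state=0)
candidates = {c: [] for c in (0.01, 0.1, 1.0, 10.0, 100.0)}   # SVM cost values (assumed grid)
margin = 0.05                                                 # futility threshold (assumed)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    for c in list(candidates):
        acc = SVC(C=c).fit(X[train_idx], y[train_idx]).score(X[test_idx], y[test_idx])
        candidates[c].append(acc)
    best = max(np.mean(v) for v in candidates.values())
    # Discard settings that are clearly sub-optimal so far.
    candidates = {c: v for c, v in candidates.items() if np.mean(v) >= best - margin}

print({c: round(float(np.mean(v)), 3) for c, v in candidates.items()})
```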
Differentially Private Empirical Risk Minimization: Efficient Algorithms
and Tight Error Bounds | cs.LG cs.CR stat.ML | In this paper, we initiate a systematic investigation of differentially
private algorithms for convex empirical risk minimization. Various
instantiations of this problem have been studied before. We provide new
algorithms and matching lower bounds for private ERM assuming only that each
data point's contribution to the loss function is Lipschitz bounded and that
the domain of optimization is bounded. We provide a separate set of algorithms
and matching lower bounds for the setting in which the loss functions are known
to also be strongly convex.
Our algorithms run in polynomial time, and in some cases even match the
optimal non-private running time (as measured by oracle complexity). We give
separate algorithms (and lower bounds) for $(\epsilon,0)$- and
$(\epsilon,\delta)$-differential privacy; perhaps surprisingly, the techniques
used for designing optimal algorithms in the two cases are completely
different.
Our lower bounds apply even to very simple, smooth function families, such as
linear and quadratic functions. This implies that algorithms from previous work
can be used to obtain optimal error rates, under the additional assumption that
the contribution of each data point to the loss function is smooth. We show
that simple approaches to smoothing arbitrary loss functions (in order to apply
previous techniques) do not yield optimal error rates. In particular, optimal
algorithms were not previously known for problems such as training support
vector machines and the high-dimensional median.
| Raef Bassily, Adam Smith, Abhradeep Thakurta | null | 1405.7085 | null | null |
An Easy to Use Repository for Comparing and Improving Machine Learning
Algorithm Usage | stat.ML cs.LG | The results from most machine learning experiments are used for a specific
purpose and then discarded. This results in a significant loss of information
and requires rerunning experiments to compare learning algorithms. This also
requires implementation of another algorithm for comparison, that may not
always be correctly implemented. By storing the results from previous
experiments, machine learning algorithms can be compared easily and the
knowledge gained from them can be used to improve their performance. The
purpose of this work is to provide easy access to previous experimental results
for learning and comparison. These stored results are comprehensive -- storing
the prediction for each test instance as well as the learning algorithm,
hyperparameters, and training set that were used. Previous results are
particularly important for meta-learning, which, in a broad sense, is the
process of learning from previous machine learning results such that the
learning process is improved. While other experiment databases do exist, one of
our focuses is on easy access to the data. We provide meta-learning data sets
that are ready to be downloaded for meta-learning experiments. In addition,
queries to the underlying database can be made if specific information is
desired. We also differ from previous experiment databases in that our
database is designed at the instance level, where an instance is an example in
a data set. We store the predictions of a learning algorithm trained on a
specific training set for each instance in the test set. Data set level
information can then be obtained by aggregating the results from the instances.
The instance level information can be used for many tasks such as determining
the diversity of a classifier or algorithmically determining the optimal subset
of training instances for a learning algorithm.
| Michael R. Smith and Andrew White and Christophe Giraud-Carrier and
Tony Martinez | null | 1405.7292 | null | null |
BayesOpt: A Bayesian Optimization Library for Nonlinear Optimization,
Experimental Design and Bandits | cs.LG | BayesOpt is a library with state-of-the-art Bayesian optimization methods to
solve nonlinear optimization, stochastic bandits or sequential experimental
design problems. Bayesian optimization is sample efficient by building a
posterior distribution to capture the evidence and prior knowledge for the
target function. Built in standard C++, the library is extremely efficient
while being portable and flexible. It includes a common interface for C, C++,
Python, Matlab and Octave.
| Ruben Martinez-Cantin | null | 1405.7430 | null | null |
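The following is a conceptual sketch of the Bayesian-optimization loop such a library implements, not the BayesOpt C/C++/Python API itself: fit a Gaussian-process posterior to past evaluations and choose the next point by maximizing expected improvement. The toy objective, kernel choice, and random candidate grid are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def target(x):                      # toy 1-D objective to minimize (assumed)
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(3, 1))             # initial design
y = target(X).ravel()

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = rng.uniform(-2, 2, size=(500, 1))    # random candidate points
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    x_next = cand[np.argmax(ei)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, target(x_next).ravel())

print("best value found:", y.min())
```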
Universal Compression of Envelope Classes: Tight Characterization via
Poisson Sampling | cs.IT cs.LG math.IT | The Poisson-sampling technique eliminates dependencies among symbol
appearances in a random sequence. It has been used to simplify the analysis and
strengthen the performance guarantees of randomized algorithms. Applying this
method to universal compression, we relate the redundancies of fixed-length and
Poisson-sampled sequences, and use the relation to derive a simple single-letter
formula that approximates the redundancy of any envelope class to within an
additive logarithmic term. As a first application, we consider i.i.d.
distributions over a small alphabet as a step-envelope class, and provide a
short proof that determines the redundancy of discrete distributions over a
small alphabet up to the first-order terms. We then show the strength of our
method by applying the formula to tighten the existing bounds on the redundancy
of exponential and power-law classes, in particular answering a question posed
by Boucheron, Garivier and Gassiat.
| Jayadev Acharya and Ashkan Jafarpour and Alon Orlitsky and Ananda
Theertha Suresh | null | 1405.7460 | null | null |
Effect of Different Distance Measures on the Performance of K-Means
Algorithm: An Experimental Study in Matlab | cs.LG | The k-means algorithm is a very popular clustering algorithm that is famous for
its simplicity. The distance measure plays a very important role in the
performance of this algorithm. Several distance measure techniques are
available, but choosing a proper technique for distance calculation depends
entirely on the type of the data to be clustered. In this paper an experimental
study is done in Matlab to cluster the iris and wine data sets with different
distance measures, and the resulting variation in performance is observed.
| Mr. Dibya Jyoti Bora, Dr. Anil Kumar Gupta | null | 1405.7471 | null | null |
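A small sketch of the kind of experiment described above (in Python rather than Matlab): a plain k-means loop in which the distance measure is a parameter, so Euclidean, Manhattan, and cosine assignments can be compared on the iris data. The initialization scheme and iteration cap are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.datasets import load_iris

def kmeans(X, k, metric="euclidean", n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step under the chosen distance measure.
        labels = cdist(X, centers, metric=metric).argmin(axis=1)
        # Update step; keep the old center if a cluster becomes empty.
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

X, _ = load_iris(return_X_y=True)
for metric in ("euclidean", "cityblock", "cosine"):
    labels, _ = kmeans(X, k=3, metric=metric)
    print(metric, np.bincount(labels))
```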
Simultaneous Feature and Expert Selection within Mixture of Experts | cs.LG | A useful strategy to deal with complex classification scenarios is the
"divide and conquer" approach. The mixture of experts (MOE) technique makes use
of this strategy by jointly training a set of classifiers, or experts, that are
specialized in different regions of the input space. A global model, or gate
function, complements the experts by learning a function that weights their
relevance in different parts of the input space. Local feature selection
appears as an attractive alternative to improve the specialization of experts
and gate function, particularly, for the case of high dimensional data. Our
main intuition is that particular subsets of dimensions, or subspaces, are
usually more appropriate to classify instances located in different regions of
the input space. Accordingly, this work contributes with a regularized variant
of MoE that incorporates an embedded process for local feature selection using
$L1$ regularization, with a simultaneous expert selection. The experiments are
still pending.
| Billy Peralta | null | 1405.7624 | null | null |
Using Local Alignments for Relation Recognition | cs.CL cs.IR cs.LG | This paper discusses the problem of marrying structural similarity with
semantic relatedness for Information Extraction from text. Aiming at accurate
recognition of relations, we introduce local alignment kernels and explore
various possibilities of using them for this task. We give a definition of a
local alignment (LA) kernel based on the Smith-Waterman score as a sequence
similarity measure and proceed with a range of possibilities for computing
similarity between elements of sequences. We show how distributional similarity
measures obtained from unlabeled data can be incorporated into the learning
task as semantic knowledge. Our experiments suggest that the LA kernel yields
promising results on various biomedical corpora outperforming two baselines by
a large margin. Additional series of experiments have been conducted on the
data sets of seven general relation types, where the performance of the LA
kernel is comparable to the current state-of-the-art results.
| Sophia Katrenko, Pieter Adriaans, Maarten van Someren | 10.1613/jair.2964 | 1405.7713 | null | null |
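For concreteness, a minimal Smith-Waterman local alignment score, the sequence-similarity measure the LA kernel above builds on; note that the LA kernel itself sums over all local alignments rather than taking the maximum, and the scoring parameters here are illustrative.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Classical Smith-Waterman local alignment score via dynamic programming."""
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores are floored at zero.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("binding of protein", "protein binding"))
```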
Learning to Act Greedily: Polymatroid Semi-Bandits | cs.LG cs.AI stat.ML | Many important optimization problems, such as the minimum spanning tree and
minimum-cost flow, can be solved optimally by a greedy method. In this work, we
study a learning variant of these problems, where the model of the problem is
unknown and has to be learned by interacting repeatedly with the environment in
the bandit setting. We formalize our learning problem quite generally, as
learning how to maximize an unknown modular function on a known polymatroid. We
propose a computationally efficient algorithm for solving our problem and bound
its expected cumulative regret. Our gap-dependent upper bound is tight up to a
constant and our gap-free upper bound is tight up to polylogarithmic factors.
Finally, we evaluate our method on three problems and demonstrate that it is
practical.
| Branislav Kveton, Zheng Wen, Azin Ashkan, and Michal Valko | null | 1405.7752 | null | null |
Generalization Bounds for Learning with Linear, Polygonal, Quadratic and
Conic Side Knowledge | stat.ML cs.LG | In this paper, we consider a supervised learning setting where side knowledge
is provided about the labels of unlabeled examples. The side knowledge has the
effect of reducing the hypothesis space, leading to tighter generalization
bounds, and thus possibly better generalization. We consider several types of
side knowledge, the first leading to linear and polygonal constraints on the
hypothesis space, the second leading to quadratic constraints, and the last
leading to conic constraints. We show how different types of domain knowledge
can lead directly to these kinds of side knowledge. We prove bounds on
complexity measures of the hypothesis space for quadratic and conic side
knowledge, and show that these bounds are tight in a specific sense for the
quadratic case.
| Theja Tulabandhula and Cynthia Rudin | null | 1405.7764 | null | null |
Flip-Flop Sublinear Models for Graphs: Proof of Theorem 1 | cs.LG | We prove that there is no class-dual for almost all sublinear models on
graphs.
| Brijnesh Jain | null | 1405.7897 | null | null |
Semantic Composition and Decomposition: From Recognition to Generation | cs.CL cs.AI cs.LG | Semantic composition is the task of understanding the meaning of text by
composing the meanings of the individual words in the text. Semantic
decomposition is the task of understanding the meaning of an individual word by
decomposing it into various aspects (factors, constituents, components) that
are latent in the meaning of the word. We take a distributional approach to
semantics, in which a word is represented by a context vector. Much recent work
has considered the problem of recognizing compositions and decompositions, but
we tackle the more difficult generation problem. For simplicity, we focus on
noun-modifier bigrams and noun unigrams. A test for semantic composition is,
given context vectors for the noun and modifier in a noun-modifier bigram ("red
salmon"), generate a noun unigram that is synonymous with the given bigram
("sockeye"). A test for semantic decomposition is, given a context vector for a
noun unigram ("snifter"), generate a noun-modifier bigram that is synonymous
with the given unigram ("brandy glass"). With a vocabulary of about 73,000
unigrams from WordNet, there are 73,000 candidate unigram compositions for a
bigram and 5,300,000,000 (73,000 squared) candidate bigram decompositions for a
unigram. We generate ranked lists of potential solutions in two passes. A fast
unsupervised learning algorithm generates an initial list of candidates and
then a slower supervised learning algorithm refines the list. We evaluate the
candidate solutions by comparing them to WordNet synonym sets. For
decomposition (unigram to bigram), the top 100 most highly ranked bigrams
include a WordNet synonym of the given unigram 50.7% of the time. For
composition (bigram to unigram), the top 100 most highly ranked unigrams
include a WordNet synonym of the given bigram 77.8% of the time.
| Peter D. Turney | null | 1405.7908 | null | null |
Optimal CUR Matrix Decompositions | cs.DS cs.LG math.NA | The CUR decomposition of an $m \times n$ matrix $A$ finds an $m \times c$
matrix $C$ with a subset of $c < n$ columns of $A,$ together with an $r \times
n$ matrix $R$ with a subset of $r < m$ rows of $A,$ as well as a $c \times r$
low-rank matrix $U$ such that the matrix $C U R$ approximates the matrix $A,$
that is, $ || A - CUR ||_F^2 \le (1+\epsilon) || A - A_k||_F^2$, where
$||.||_F$ denotes the Frobenius norm and $A_k$ is the best $m \times n$ matrix
of rank $k$ constructed via the SVD. We present input-sparsity-time and
deterministic algorithms for constructing such a CUR decomposition where
$c=O(k/\epsilon)$ and $r=O(k/\epsilon)$ and rank$(U) = k$. Up to constant
factors, our algorithms are simultaneously optimal in $c, r,$ and rank$(U)$.
| Christos Boutsidis and David P. Woodruff | null | 1405.7910 | null | null |
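An illustrative randomized CUR sketch, showing only what the $C$, $U$, and $R$ factors are; it uses simple norm-based column/row sampling and $U = C^+ A R^+$ (the optimal $U$ for fixed $C$ and $R$), not the optimal deterministic construction of the paper. Matrix sizes and sample counts are arbitrary.

```python
import numpy as np

def simple_cur(A, c, r, seed=0):
    rng = np.random.default_rng(seed)
    col_p = (A ** 2).sum(axis=0); col_p = col_p / col_p.sum()   # column-norm probabilities
    row_p = (A ** 2).sum(axis=1); row_p = row_p / row_p.sum()   # row-norm probabilities
    cols = rng.choice(A.shape[1], size=c, replace=False, p=col_p)
    rows = rng.choice(A.shape[0], size=r, replace=False, p=row_p)
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)               # best U given C and R
    return C, U, R

# Low-rank-ish test matrix.
A = np.random.default_rng(1).standard_normal((60, 40)) @ \
    np.random.default_rng(2).standard_normal((40, 50))
C, U, R = simple_cur(A, c=20, r=20)
err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
print("relative Frobenius error:", round(float(err), 4))
```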
Estimating Vector Fields on Manifolds and the Embedding of Directed
Graphs | stat.ML cs.LG | This paper considers the problem of embedding directed graphs in Euclidean
space while retaining directional information. We model a directed graph as a
finite set of observations from a diffusion on a manifold endowed with a vector
field. This is the first generative model of its kind for directed graphs. We
introduce a graph embedding algorithm that estimates all three features of this
model: the low-dimensional embedding of the manifold, the data density and the
vector field. In the process, we also obtain new theoretical results on the
limits of "Laplacian type" matrices derived from directed graphs. The
application of our method to both artificially constructed and real data
highlights its strengths.
| Dominique Perrault-Joncas and Marina Meila | null | 1406.0013 | null | null |
Improved graph Laplacian via geometric self-consistency | stat.ML cs.LG | We address the problem of setting the kernel bandwidth used by Manifold
Learning algorithms to construct the graph Laplacian. Exploiting the connection
between manifold geometry, represented by the Riemannian metric, and the
Laplace-Beltrami operator, we set the bandwidth by optimizing the Laplacian's
ability to preserve the geometry of the data. Experiments show that this
principled approach is effective and robust.
| Dominique Perrault-Joncas and Marina Meila | null | 1406.0118 | null | null |
$l_1$-regularized Outlier Isolation and Regression | cs.CV cs.LG stat.ML | This paper proposes a new regression model called $l_1$-regularized outlier
isolation and regression (LOIRE) and a fast algorithm based on block coordinate
descent to solve this model. In addition, assuming that outliers are gross
errors following a Bernoulli process, this paper also presents a Bernoulli
estimate model which, in theory, should be very accurate and robust because it
completely eliminates the effects caused by outliers. Although this Bernoulli
estimate is hard to solve, it can be approximated through a process that takes
LOIRE as an important intermediate step. As a result, the approximate Bernoulli
estimate combines the accuracy of the Bernoulli estimate with the efficiency of
LOIRE regression, as several simulations strongly verify. Moreover, LOIRE can
be further extended to realize robust rank factorization, which is powerful in
recovering low-rank components from massive corruptions. Extensive experimental
results show that the proposed method outperforms state-of-the-art methods such
as RPCA and GoDec in terms of computation speed while achieving competitive
performance.
| Sheng Han, Suzhen Wang, Xinyu Wu | null | 1406.0156 | null | null |
Feature Selection for Linear SVM with Provable Guarantees | stat.ML cs.LG | We give two provably accurate feature-selection techniques for the linear
SVM. The algorithms run in deterministic and randomized time respectively. Our
algorithms can be used in an unsupervised or supervised setting. The supervised
approach is based on sampling features from support vectors. We prove that the
margin in the feature space is preserved to within $\epsilon$-relative error of
the margin in the full feature space in the worst-case. In the unsupervised
setting, we also provide worst-case guarantees of the radius of the minimum
enclosing ball, thereby ensuring comparable generalization as in the full
feature space and resolving an open problem posed in Dasgupta et al. We present
extensive experiments on real-world datasets to support our theory and to
demonstrate that our method is competitive and often better than prior
state-of-the-art, for which there are no known provable guarantees.
| Saurabh Paul, Malik Magdon-Ismail and Petros Drineas | null | 1406.0167 | null | null |
Convex Total Least Squares | stat.ML cs.LG q-bio.GN q-bio.QM stat.AP | We study the total least squares (TLS) problem that generalizes least squares
regression by allowing measurement errors in both dependent and independent
variables. TLS is widely used in applied fields including computer vision,
system identification and econometrics. The special case when all dependent and
independent variables have the same level of uncorrelated Gaussian noise, known
as ordinary TLS, can be solved by singular value decomposition (SVD). However,
SVD cannot solve many important practical TLS problems with realistic noise
structure, such as having varying measurement noise, known structure on the
errors, or large outliers requiring robust error-norms. To solve such problems,
we develop convex relaxation approaches for a general class of structured TLS
(STLS). We show both theoretically and experimentally, that while the plain
nuclear norm relaxation incurs large approximation errors for STLS, the
re-weighted nuclear norm approach is very effective, and achieves better
accuracy on challenging STLS problems than popular non-convex solvers. We
describe a fast solution based on augmented Lagrangian formulation, and apply
our approach to an important class of biological problems that use population
average measurements to infer cell-type and physiological-state specific
expression levels that are very hard to measure directly.
| Dmitry Malioutov and Nikolai Slavov | null | 1406.0189 | null | null |
Inference of Sparse Networks with Unobserved Variables. Application to
Gene Regulatory Networks | stat.ML cs.LG q-bio.MN q-bio.QM stat.AP | Networks are a unifying framework for modeling complex systems and network
inference problems are frequently encountered in many fields. Here, I develop
and apply a generative approach to network inference (RCweb) for the case when
the network is sparse and the latent (not observed) variables affect the
observed ones. From all possible factor analysis (FA) decompositions explaining
the variance in the data, RCweb selects the FA decomposition that is consistent
with a sparse underlying network. The sparsity constraint is imposed by a novel
method that significantly outperforms (in terms of accuracy, robustness to
noise, complexity scaling, and computational efficiency) Bayesian methods and
MLE methods using l1-norm relaxation such as K-SVD and l1-based sparse
principal component analysis (PCA). Results from simulated models demonstrate
that RCweb recovers the model structures exactly for sparsity levels as low
(i.e., as non-sparse) as 50% and with a ratio of unobserved to observed
variables as high as 2. RCweb is robust to noise, with a gradual decrease in
the parameter ranges
as the noise level increases.
| Nikolai Slavov | null | 1406.0193 | null | null |
Holistic Measures for Evaluating Prediction Models in Smart Grids | cs.LG | The performance of prediction models is often based on "abstract metrics"
that estimate the model's ability to limit residual errors between the observed
and predicted values. However, meaningful evaluation and selection of
prediction models for end-user domains requires holistic and
application-sensitive performance measures. Inspired by energy consumption
prediction models used in the emerging "big data" domain of Smart Power Grids,
we propose a suite of performance measures to rationally compare models along
the dimensions of scale independence, reliability, volatility and cost. We
include both application independent and dependent measures, the latter
parameterized to allow customization by domain experts to fit their scenario.
While our measures are generalizable to other domains, we offer an empirical
analysis using real energy use data for three Smart Grid applications:
planning, customer education and demand response, which are relevant for energy
sustainability. Our results underscore the value of the proposed measures to
offer a deeper insight into models' behavior and their impact on real
applications, which benefit both data mining researchers and practitioners.
| Saima Aman, Yogesh Simmhan, Viktor K. Prasanna | 10.1109/TKDE.2014.2327022 | 1406.0223 | null | null |
On Classification with Bags, Groups and Sets | stat.ML cs.CV cs.LG | Many classification problems can be difficult to formulate directly in terms
of the traditional supervised setting, where both training and test samples are
individual feature vectors. There are cases in which samples are better
described by sets of feature vectors, in which labels are only available for
sets rather than individual samples, or, if individual labels are available, in
which these labels are not independent. To better deal with such problems, several
extensions of supervised learning have been proposed, where either training
and/or test objects are sets of feature vectors. However, having been proposed
rather independently of each other, their mutual similarities and differences
have hitherto not been mapped out. In this work, we provide an overview of such
learning scenarios, propose a taxonomy to illustrate the relationships between
them, and discuss directions for further research in these areas.
| Veronika Cheplygina, David M. J. Tax, Marco Loog | 10.1016/j.patrec.2015.03.008 | 1406.0281 | null | null |
Transductive Learning for Multi-Task Copula Processes | cs.LG stat.ML | We tackle the problem of multi-task learning with copula processes.
Multivariable prediction in spatial and spatial-temporal processes, such as
natural resource estimation and pollution monitoring, has typically been
addressed using techniques based on Gaussian processes and co-Kriging. While
the Gaussian prior assumption is convenient from analytical and computational
perspectives, nature is dominated by non-Gaussian likelihoods. Copula processes
are an elegant and flexible solution to handle various non-Gaussian likelihoods
by capturing the dependence structure of random variables with cumulative
distribution functions rather than their marginals. We show how multi-task
learning for copula processes can be used to improve multivariable prediction
for problems where the simple Gaussianity prior assumption does not hold. Then,
we present a transductive approximation for multi-task learning and derive
analytical expressions for the copula process model. The approach is evaluated
and compared to other techniques in one artificial dataset and two publicly
available datasets for natural resource estimation and concrete slump
prediction.
| Markus Schneider and Fabio Ramos | null | 1406.0304 | null | null |
Universal Convexification via Risk-Aversion | cs.SY cs.LG math.OC | We develop a framework for convexifying a fairly general class of
optimization problems. Under additional assumptions, we analyze the
suboptimality of the solution to the convexified problem relative to the
original nonconvex problem and prove additive approximation guarantees. We then
develop algorithms based on stochastic gradient methods to solve the resulting
optimization problems and show bounds on convergence rates. We show a simple
application of this framework to supervised learning, where one can perform
integration explicitly and can use standard (non-stochastic) optimization
algorithms with better convergence guarantees. We then extend this framework to
apply to a general class of discrete-time dynamical systems. In this context,
our convexification approach falls under the well-studied paradigm of
risk-sensitive Markov Decision Processes. We derive the first known model-based
and model-free policy gradient optimization algorithms with guaranteed
convergence to the optimal solution. Finally, we present numerical results
validating our formulation in different applications.
| Krishnamurthy Dvijotham, Maryam Fazel and Emanuel Todorov | null | 1406.0554 | null | null |
A Game-theoretic Machine Learning Approach for Revenue Maximization in
Sponsored Search | cs.GT cs.LG | Sponsored search is an important monetization channel for search engines, in
which an auction mechanism is used to select the ads shown to users and
determine the prices charged from advertisers. There have been several pieces
of work in the literature that investigate how to design an auction mechanism
in order to optimize the revenue of the search engine. However, due to some
unrealistic assumptions used, the practical values of these studies are not
very clear. In this paper, we propose a novel \emph{game-theoretic machine
learning} approach, which naturally combines machine learning and game theory,
and learns the auction mechanism using a bilevel optimization framework. In
particular, we first learn a Markov model from historical data to describe how
advertisers change their bids in response to an auction mechanism, and then for
any given auction mechanism, we use the learnt model to predict its
corresponding future bid sequences. Next we learn the auction mechanism through
empirical revenue maximization on the predicted bid sequences. We show that the
empirical revenue will converge when the prediction period approaches infinity,
and a Genetic Programming algorithm can effectively optimize this empirical
revenue. Our experiments indicate that the proposed approach is able to produce
a much more effective auction mechanism than several baselines.
| Di He, Wei Chen, Liwei Wang, Tie-Yan Liu | null | 1406.0728 | null | null |
Supervised classification-based stock prediction and portfolio
optimization | q-fin.ST cs.CE cs.LG q-fin.PM stat.ML | As the number of publicly traded companies as well as the amount of their
financial data grows rapidly, it is highly desired to have tracking, analysis,
and eventually stock selections automated. There have been few works focusing
on estimating the stock prices of individual companies. However, many of those
have worked with very small number of financial parameters. In this work, we
apply machine learning techniques to address automated stock picking, while
using a larger number of financial parameters for individual companies than the
previous studies. Our approaches are based on the supervision of prediction
parameters using company fundamentals, time-series properties, and correlation
information between different stocks. We examine a variety of supervised
learning techniques and found that using stock fundamentals is a useful
approach for the classification problem, when combined with the high
dimensional data handling capabilities of support vector machines. In an
out-of-sample test, the portfolio our system suggests by predicting the
behavior of stocks achieves, on average, 3% larger growth than the overall
market within a 3-month time period.
| Sercan Arik, Sukru Burc Eryilmaz, Adam Goldberg | null | 1406.0824 | null | null |
Learning Phrase Representations using RNN Encoder-Decoder for
Statistical Machine Translation | cs.CL cs.LG cs.NE stat.ML | In this paper, we propose a novel neural network model called RNN
Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN
encodes a sequence of symbols into a fixed-length vector representation, and
the other decodes the representation into another sequence of symbols. The
encoder and decoder of the proposed model are jointly trained to maximize the
conditional probability of a target sequence given a source sequence. The
performance of a statistical machine translation system is empirically found to
improve by using the conditional probabilities of phrase pairs computed by the
RNN Encoder-Decoder as an additional feature in the existing log-linear model.
Qualitatively, we show that the proposed model learns a semantically and
syntactically meaningful representation of linguistic phrases.
| Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry
Bahdanau, Fethi Bougares, Holger Schwenk and Yoshua Bengio | null | 1406.1078 | null | null |
Linear Convergence of Variance-Reduced Stochastic Gradient without
Strong Convexity | cs.NA cs.LG stat.CO stat.ML | Stochastic gradient algorithms estimate the gradient based on only one or a
few samples and enjoy low computational cost per iteration. They have been
widely used in large-scale optimization problems. However, stochastic gradient
algorithms are usually slow to converge and achieve sub-linear convergence
rates, due to the inherent variance in the gradient computation. To accelerate
the convergence, some variance-reduced stochastic gradient algorithms, e.g.,
proximal stochastic variance-reduced gradient (Prox-SVRG) algorithm, have
recently been proposed to solve strongly convex problems. Under the strongly
convex condition, these variance-reduced stochastic gradient algorithms achieve
a linear convergence rate. However, many machine learning problems are convex
but not strongly convex. In this paper, we introduce Prox-SVRG and its
projected variant called Variance-Reduced Projected Stochastic Gradient (VRPSG)
to solve a class of non-strongly convex optimization problems widely used in
machine learning. As the main technical contribution of this paper, we show
that both VRPSG and Prox-SVRG achieve a linear convergence rate without strong
convexity. A key ingredient in our proof is a Semi-Strongly Convex (SSC)
inequality which is the first to be rigorously proved for a class of
non-strongly convex problems in both constrained and regularized settings.
Moreover, the SSC inequality is independent of algorithms and may be applied to
analyze other stochastic gradient algorithms besides VRPSG and Prox-SVRG, which
may be of independent interest. To the best of our knowledge, this is the first
work that establishes the linear convergence rate for the variance-reduced
stochastic gradient algorithms on solving both constrained and regularized
problems without strong convexity.
| Pinghua Gong and Jieping Ye | null | 1406.1102 | null | null |
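A sketch of the Prox-SVRG update on a convex but not strongly convex problem (least squares with an $\ell_1$ regularizer): a full gradient is computed at a snapshot, inner iterations use the variance-reduced gradient, and the $\ell_1$ term is handled by a proximal (soft-thresholding) step. The step size, epoch length, and problem sizes are illustrative, not the constants from the analysis.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
n, d = 500, 50
A = rng.standard_normal((n, d))
x_true = soft_threshold(rng.standard_normal(d), 1.0)       # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(n)

lam, eta, epochs, m = 0.1, 1e-3, 30, n
grad_i = lambda x, i: A[i] * (A[i] @ x - b[i])              # per-sample gradient

x = np.zeros(d)
for _ in range(epochs):
    x_snap = x.copy()
    full_grad = A.T @ (A @ x_snap - b) / n                  # gradient at the snapshot
    for _ in range(m):
        i = rng.integers(n)
        v = grad_i(x, i) - grad_i(x_snap, i) + full_grad    # variance-reduced gradient
        x = soft_threshold(x - eta * v, eta * lam)          # proximal step for the l1 term

print("objective:", 0.5 * np.mean((A @ x - b) ** 2) + lam * np.abs(x).sum())
```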
PAC Learning, VC Dimension, and the Arithmetic Hierarchy | math.LO cs.LG cs.LO | We compute that the index set of PAC-learnable concept classes is
$m$-complete $\Sigma^0_3$ within the set of indices for all concept classes of
a reasonable form. All concept classes considered are computable enumerations
of computable $\Pi^0_1$ classes, in a sense made precise here. This family of
concept classes is sufficient to cover all standard examples, and also has the
property that PAC learnability is equivalent to finite VC dimension.
| Wesley Calvert | null | 1406.1111 | null | null |
Learning to Diversify via Weighted Kernels for Classifier Ensemble | cs.LG cs.CV | Classifier ensemble generally should combine diverse component classifiers.
However, it is difficult to give a definitive connection between diversity
measure and ensemble accuracy. Given a list of available component classifiers,
how to combine them adaptively and diversely remains a major challenge in
the literature. In this paper, we argue that diversity, not direct diversity on
samples but adaptive diversity with data, is highly correlated to ensemble
accuracy, and we propose a novel technology for classifier ensemble, learning
to diversify, which learns to adaptively combine classifiers by considering
both accuracy and diversity. Specifically, our approach, Learning TO Diversify
via Weighted Kernels (L2DWK), performs classifier combination by optimizing a
direct but simple criterion: maximizing ensemble accuracy and adaptive
diversity simultaneously by minimizing a convex loss function. Given a measure
formulation, the diversity is calculated with weighted kernels (i.e., the
diversity is measured on the component classifiers' outputs which are kernelled
and weighted), and the kernel weights are automatically learned. We minimize
this loss function by estimating the kernel weights in conjunction with the
classifier weights, and propose a self-training algorithm for conducting this
convex optimization procedure iteratively. Extensive experiments on 32 UCI
classification benchmark datasets show that the proposed approach
consistently outperforms state-of-the-art ensembles such as Bagging, AdaBoost,
Random Forests, Gasen, Regularized Selective Ensemble, and Ensemble Pruning via
Semi-Definite Programming.
| Xu-Cheng Yin and Chun Yang and Hong-Wei Hao | null | 1406.1167 | null | null |
Discovering Structure in High-Dimensional Data Through Correlation
Explanation | cs.LG cs.AI stat.ML | We introduce a method to learn a hierarchy of successively more abstract
representations of complex data based on optimizing an information-theoretic
objective. Intuitively, the optimization searches for a set of latent factors
that best explain the correlations in the data as measured by multivariate
mutual information. The method is unsupervised, requires no model assumptions,
and scales linearly with the number of variables which makes it an attractive
approach for very high dimensional systems. We demonstrate that Correlation
Explanation (CorEx) automatically discovers meaningful structure for data from
diverse sources including personality tests, DNA, and human language.
| Greg Ver Steeg and Aram Galstyan | null | 1406.1222 | null | null |
Multi-task Neural Networks for QSAR Predictions | stat.ML cs.LG cs.NE | Although artificial neural networks have occasionally been used for
Quantitative Structure-Activity/Property Relationship (QSAR/QSPR) studies in
the past, the literature has of late been dominated by other machine learning
techniques such as random forests. However, a variety of new neural net
techniques along with successful applications in other domains have renewed
interest in network approaches. In this work, inspired by the winning team's
use of neural networks in a recent QSAR competition, we used an artificial
neural network to learn a function that predicts activities of compounds for
multiple assays at the same time. We conducted experiments leveraging recent
methods for dealing with overfitting in neural networks as well as other tricks
from the neural networks literature. We compared our methods to alternative
methods reported to perform well on these tasks and found that our neural net
methods provided superior performance.
| George E. Dahl and Navdeep Jaitly and Ruslan Salakhutdinov | null | 1406.1231 | null | null |
Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets | math.OC cs.LG | The Frank-Wolfe method (a.k.a. conditional gradient algorithm) for smooth
optimization has regained much interest in recent years in the context of large
scale optimization and machine learning. A key advantage of the method is that
it avoids projections - the computational bottleneck in many applications -
replacing it by a linear optimization step. Despite this advantage, the known
convergence rates of the FW method fall behind standard first order methods for
most settings of interest. It is an active line of research to derive faster
linear optimization-based algorithms for various settings of convex
optimization.
In this paper we consider the special case of optimization over strongly
convex sets, for which we prove that the vanilla FW method converges at a rate
of $\frac{1}{t^2}$. This gives a quadratic improvement in convergence rate
compared to the general case, in which convergence is of the order
$\frac{1}{t}$, and known to be tight. We show that various balls induced by
$\ell_p$ norms, Schatten norms and group norms are strongly convex on one hand
and on the other hand, linear optimization over these sets is straightforward
and admits a closed-form solution. We further show how several previous
fast-rate results for the FW method follow easily from our analysis.
| Dan Garber, Elad Hazan | null | 1406.1305 | null | null |
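A sketch of the vanilla Frank-Wolfe method over a strongly convex set, here an $\ell_2$ ball, whose linear minimization oracle has the closed-form solution mentioned above; the radius, step-size schedule, and iteration count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 30))
b = rng.standard_normal(80)
radius = 1.0                                    # constraint: ||x||_2 <= radius

x = np.zeros(30)
for t in range(200):
    grad = A.T @ (A @ x - b)
    # Linear minimization oracle over the l2 ball: argmin_{||s|| <= r} <grad, s>.
    s = -radius * grad / np.linalg.norm(grad)
    gamma = 2.0 / (t + 2.0)                     # standard FW step-size schedule
    x = (1 - gamma) * x + gamma * s

print("constraint satisfied:", np.linalg.norm(x) <= radius + 1e-9)
print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```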
Learning the Information Divergence | cs.LG | Information divergence that measures the difference between two nonnegative
matrices or tensors has found its use in a variety of machine learning
problems. Examples are Nonnegative Matrix/Tensor Factorization, Stochastic
Neighbor Embedding, topic models, and Bayesian network optimization. The
success of such a learning task depends heavily on a suitable divergence. A
large variety of divergences have been suggested and analyzed, but very few
results are available for an objective choice of the optimal divergence for a
given task. Here we present a framework that facilitates automatic selection of
the best divergence among a given family, based on standard maximum likelihood
estimation. We first propose an approximated Tweedie distribution for the
beta-divergence family. Selecting the best beta then becomes a machine learning
problem solved by maximum likelihood. Next, we reformulate alpha-divergence in
terms of beta-divergence, which enables automatic selection of alpha by maximum
likelihood with reuse of the learning principle for beta-divergence.
Furthermore, we show the connections between gamma and beta-divergences as well
as R\'enyi and alpha-divergences, such that our automatic selection framework
is extended to non-separable divergences. Experiments on both synthetic and
real-world data demonstrate that our method can quite accurately select
information divergence across different learning problems and various
divergence families.
| Onur Dikmen and Zhirong Yang and Erkki Oja | null | 1406.1385 | null | null |
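For reference, the beta-divergence family mentioned above (stated elementwise for $x, y > 0$); $\beta = 2$ gives one half of the squared Euclidean distance, and the limits $\beta \to 1$ and $\beta \to 0$ recover the KL and Itakura-Saito divergences.

```latex
\[
d_\beta(x \,\|\, y) =
\begin{cases}
\dfrac{x^{\beta} + (\beta-1)\,y^{\beta} - \beta\, x\, y^{\beta-1}}{\beta(\beta-1)}, & \beta \in \mathbb{R}\setminus\{0,1\},\\[2ex]
x \log \dfrac{x}{y} - x + y, & \beta = 1 \ (\text{KL}),\\[1ex]
\dfrac{x}{y} - \log \dfrac{x}{y} - 1, & \beta = 0 \ (\text{Itakura--Saito}).
\end{cases}
\]
```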
Advances in Learning Bayesian Networks of Bounded Treewidth | cs.AI cs.LG stat.ML | This work presents novel algorithms for learning Bayesian network structures
with bounded treewidth. Both exact and approximate methods are developed. The
exact method combines mixed-integer linear programming formulations for
structure learning and treewidth computation. The approximate method consists
in uniformly sampling $k$-trees (maximal graphs of treewidth $k$), and
subsequently selecting, exactly or approximately, the best structure whose
moral graph is a subgraph of that $k$-tree. Some properties of these methods
are discussed and proven. The approaches are empirically compared to each other
and to a state-of-the-art method for learning bounded treewidth structures on a
collection of public data sets with up to 100 variables. The experiments show
that our exact algorithm outperforms the state of the art, and that the
approximate approach is fairly accurate.
| Siqi Nie, Denis Deratani Maua, Cassio Polpo de Campos, Qiang Ji | null | 1406.1411 | null | null |
Iterative Neural Autoregressive Distribution Estimator (NADE-k) | stat.ML cs.LG | Training of the neural autoregressive density estimator (NADE) can be viewed
as doing one step of probabilistic inference on missing values in data. We
propose a new model that extends this inference scheme to multiple steps,
arguing that it is easier to learn to improve a reconstruction in $k$ steps
rather than to learn to reconstruct in a single inference step. The proposed
model is an unsupervised building block for deep learning that combines the
desirable properties of NADE and multi-predictive training: (1) Its test
likelihood can be computed analytically, (2) it is easy to generate independent
samples from it, and (3) it uses an inference engine that is a superset of
variational inference for Boltzmann machines. The proposed NADE-k is
competitive with the state-of-the-art in density estimation on the two datasets
tested.
| Tapani Raiko, Li Yao, Kyunghyun Cho and Yoshua Bengio | null | 1406.1485 | null | null |
Systematic N-tuple Networks for Position Evaluation: Exceeding 90% in
the Othello League | cs.NE cs.AI cs.LG | N-tuple networks have been successfully used as position evaluation functions
for board games such as Othello or Connect Four. The effectiveness of such
networks depends on their architecture, which is determined by the placement of
constituent n-tuples, sequences of board locations, providing input to the
network. The most popular method of placing n-tuples consists in randomly
generating a small number of long, snake-shaped board location sequences. In
comparison, we show that learning n-tuple networks is significantly more
effective if they involve a large number of systematically placed, short,
straight n-tuples. Moreover, we demonstrate that in order to obtain the best
performance and the steepest learning curve for Othello it is enough to use
n-tuples of size just 2, yielding a network consisting of only 288 weights. The
best such network evolved in this study has been evaluated in the online
Othello League, obtaining the performance of nearly 96% --- more than any other
player to date.
| Wojciech Ja\'skowski | null | 1406.1509 | null | null |
Machine learning approach for text and document mining | cs.IR cs.LG | Text Categorization (TC), also known as Text Classification, is the task of
automatically classifying a set of text documents into different categories
from a predefined set. If a document belongs to exactly one of the categories,
it is a single-label classification task; otherwise, it is a multi-label
classification task. TC uses several tools from Information Retrieval (IR) and
Machine Learning (ML) and has received much attention in recent years from
both researchers in academia and developers in industry. In this paper, we
first categorize the documents using a KNN-based machine learning approach and
then return the most relevant documents.
| Vishwanath Bijalwan, Pinki Kumari, Jordan Pascual and Vijay Bhaskar
Semwal | null | 1406.1580 | null | null |
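A minimal sketch of a KNN-based text categorization pipeline of the kind described above, using TF-IDF features in scikit-learn; the toy corpus, labels, and value of k are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

docs = [
    "the striker scored a late goal in the cup final",
    "the team won the league after a penalty shootout",
    "parliament passed the new budget bill today",
    "the senate debated the proposed election reform",
]
labels = ["sport", "sport", "politics", "politics"]

# TF-IDF representation followed by a nearest-neighbour classifier.
clf = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=1))
clf.fit(docs, labels)
print(clf.predict(["a goal decided the championship match"]))
```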
Learning to Discover Efficient Mathematical Identities | cs.LG | In this paper we explore how machine learning techniques can be applied to
the discovery of efficient mathematical identities. We introduce an attribute
grammar framework for representing symbolic expressions. Given a set of grammar
rules we build trees that combine different rules, looking for branches which
yield compositions that are analytically equivalent to a target expression, but
of lower computational complexity. However, as the size of the trees grows
exponentially with the complexity of the target expression, brute force search
is impractical for all but the simplest of expressions. Consequently, we
introduce two novel learning approaches that are able to learn from simpler
expressions to guide the tree search. The first of these is a simple n-gram
model, the other being a recursive neural-network. We show how these approaches
enable us to derive complex identities, beyond reach of brute-force search, or
human derivation.
| Wojciech Zaremba, Karol Kurach, Rob Fergus | null | 1406.1584 | null | null |
Separable Cosparse Analysis Operator Learning | cs.LG stat.ML | The ability of having a sparse representation for a certain class of signals
has many applications in data analysis, image processing, and other research
fields. Among sparse representations, the cosparse analysis model has recently
gained increasing interest. Many signals exhibit a multidimensional structure,
e.g. images or three-dimensional MRI scans. Most data analysis and learning
algorithms use vectorized signals and thereby do not account for this
underlying structure. The drawback of not taking the inherent structure into
account is a dramatic increase in computational cost. We propose an algorithm
for learning a cosparse Analysis Operator that adheres to the preexisting
structure of the data, and thus allows for a very efficient implementation.
This is achieved by enforcing a separable structure on the learned operator.
Our learning algorithm is able to deal with multidimensional data of arbitrary
order. We evaluate our method on volumetric data at the example of
three-dimensional MRI scans.
| Matthias Seibert, Julian W\"ormann, R\'emi Gribonval, Martin
Kleinsteuber | null | 1406.1621 | null | null |
Variational inference of latent state sequences using Recurrent Networks | stat.ML cs.LG | Recent advances in the estimation of deep directed graphical models and
recurrent networks let us contribute to the removal of a blind spot in the area
of probabilistic modelling of time series. The proposed methods i) can infer
distributed latent state-space trajectories with nonlinear transitions, ii)
scale to large data sets thanks to the use of a stochastic objective and fast,
approximate inference, iii) enable the design of rich emission models which iv)
will naturally lead to structured outputs. Two different paths of introducing
latent state sequences are pursued, leading to the variational recurrent auto
encoder (VRAE) and the variational one step predictor (VOSP). The use of
independent Wiener processes as priors on the latent state sequence is a viable
compromise between efficient computation of the Kullback-Leibler divergence
from the variational approximation of the posterior and maintaining a
reasonable belief in the dynamics. We verify our methods empirically, obtaining
results close to or better than the state of the art. We also show qualitative
results for denoising and missing value imputation.
| Justin Bayer, Christian Osendorfer | null | 1406.1655 | null | null |
Computational role of eccentricity dependent cortical magnification | cs.LG q-bio.NC | We develop a sampling extension of M-theory focused on invariance to scale
and translation. Quite surprisingly, the theory predicts an architecture of
early vision with increasing receptive field sizes and a high resolution fovea
-- in agreement with data about the cortical magnification factor, V1 and the
retina. From the slope of the inverse of the magnification factor, M-theory
predicts a cortical "fovea" in V1 in the order of $40$ by $40$ basic units at
each receptive field size -- corresponding to a foveola of size around $26$
minutes of arc at the highest resolution, $\approx 6$ degrees at the lowest
resolution. It also predicts uniform scale invariance over a fixed range of
scales independently of eccentricity, while translation invariance should
depend linearly on spatial frequency. Bouma's law of crowding follows in the
theory as an effect of cortical area-by-cortical area pooling; the Bouma
constant is the value expected if the signature responsible for recognition in
the crowding experiments originates in V2. From a broader perspective, the
emerging picture suggests that visual recognition under natural conditions
takes place by composing information from a set of fixations, with each
fixation providing recognition from a space-scale image fragment -- that is an
image patch represented at a set of increasing sizes and decreasing
resolutions.
| Tomaso Poggio, Jim Mutch, Leyla Isik | null | 1406.1770 | null | null |
Logarithmic Time Online Multiclass prediction | cs.LG | We study the problem of multiclass classification with an extremely large
number of classes (k), with the goal of obtaining train and test time
complexity logarithmic in the number of classes. We develop top-down tree
construction approaches for constructing logarithmic depth trees. On the
theoretical front, we formulate a new objective function, which is optimized at
each node of the tree and creates dynamic partitions of the data which are both
pure (in terms of class labels) and balanced. We demonstrate that under
favorable conditions, we can construct logarithmic depth trees that have leaves
with low label entropy. However, the objective function at the nodes is
challenging to optimize computationally. We address the empirical problem with
a new online decision tree construction procedure. Experiments demonstrate that
this online algorithm quickly achieves improvement in test error compared to
more common logarithmic training time approaches, which makes it a plausible
method in computationally constrained large-k applications.
| Anna Choromanska and John Langford | null | 1406.1822 | null | null |
Recursive Neural Networks Can Learn Logical Semantics | cs.CL cs.LG cs.NE | Tree-structured recursive neural networks (TreeRNNs) for sentence meaning
have been successful for many applications, but it remains an open question
whether the fixed-length representations that they learn can support tasks as
demanding as logical deduction. We pursue this question by evaluating whether
two such models---plain TreeRNNs and tree-structured neural tensor networks
(TreeRNTNs)---can correctly learn to identify logical relationships such as
entailment and contradiction using these representations. In our first set of
experiments, we generate artificial data from a logical grammar and use it to
evaluate the models' ability to learn to handle basic relational reasoning,
recursive structures, and quantification. We then evaluate the models on the
more natural SICK challenge data. Both models perform competitively on the SICK
data and generalize well in all three experiments on simulated data, suggesting
that they can learn suitable representations for logical inference in natural
language.
| Samuel R. Bowman, Christopher Potts, Christopher D. Manning | null | 1406.1827 | null | null |
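For readers unfamiliar with the model class, the following is a minimal sketch
of the plain TreeRNN composition step referenced above: two child vectors are
combined by a single learned layer, applied recursively up the parse tree. All
dimensions and weights are illustrative placeholders; a TreeRNTN would add a
bilinear tensor term to the same composition.

```python
# Minimal numpy sketch of a plain TreeRNN composition step; dimensions and
# weights are assumed placeholders, not trained values.
import numpy as np

d = 16
rng = np.random.default_rng(0)
W = rng.standard_normal((d, 2 * d)) * 0.1   # learned composition weights
bias = np.zeros(d)

def compose(left, right):
    # p = tanh(W [left; right] + bias), applied recursively up the tree
    return np.tanh(W @ np.concatenate([left, right]) + bias)

# Embedding for the toy parse (a (b c)) built from leaf vectors a, b, c.
a, b, c = (rng.standard_normal(d) for _ in range(3))
phrase = compose(a, compose(b, c))
print(phrase.shape)   # (16,)
```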
Analyzing noise in autoencoders and deep networks | cs.NE cs.LG | Autoencoders have emerged as a useful framework for unsupervised learning of
internal representations, and a wide variety of apparently conceptually
disparate regularization techniques have been proposed to generate useful
features. Here we extend existing denoising autoencoders to additionally inject
noise before the nonlinearity, and at the hidden unit activations. We show that
a wide variety of previous methods, including denoising, contractive, and
sparse autoencoders, as well as dropout can be interpreted using this
framework. This noise injection framework reaps practical benefits by providing
a unified strategy to develop new internal representations by designing the
nature of the injected noise. We show that noisy autoencoders outperform
denoising autoencoders at the very task of denoising, and are competitive with
other single-layer techniques on MNIST and CIFAR-10. We also show that types
of noise other than dropout improve performance in a deep network through
sparsifying, decorrelating, and spreading information across representations.
| Ben Poole, Jascha Sohl-Dickstein, Surya Ganguli | null | 1406.1831 | null | null |
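The noise-injection framework described above can be illustrated with a
single-layer encoder that receives Gaussian noise at the input, before the
nonlinearity, and on the hidden activations. This is only a sketch under
assumed dimensions and noise scales, not the experimental configuration used
in the paper.

```python
# Illustrative numpy sketch of the noise-injection framework: Gaussian noise
# at the input (denoising AE), before the nonlinearity, and on the hidden
# activations of a one-layer autoencoder. Sizes and scales are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 784, 256
W = rng.standard_normal((n_hid, n_in)) * 0.01
b = np.zeros(n_hid)

def noisy_encode(x, s_in=0.1, s_pre=0.1, s_act=0.1):
    x_tilde = x + s_in * rng.standard_normal(n_in)               # input noise
    pre = W @ x_tilde + b + s_pre * rng.standard_normal(n_hid)   # pre-activation noise
    h = np.maximum(pre, 0.0)                                     # ReLU
    return h + s_act * rng.standard_normal(n_hid)                # activation noise

x = rng.random(n_in)
h = noisy_encode(x)
x_hat = W.T @ h   # decode with tied weights; train to reconstruct the clean x
```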
Unsupervised Feature Learning through Divergent Discriminative Feature
Accumulation | cs.NE cs.LG | In contrast to unsupervised approaches such as autoencoders, which learn to
reconstruct their inputs, this paper introduces an alternative approach to
unsupervised feature learning, divergent discriminative feature accumulation
(DDFA), which instead continually accumulates features that make novel
discriminations among the training examples. Thus DDFA features are inherently discriminative from
the start even though they are trained without knowledge of the ultimate
classification problem. Interestingly, DDFA also continues to add new features
indefinitely (so it does not depend on a hidden layer size), is not based on
minimizing error, and is inherently divergent instead of convergent, thereby
providing a unique direction of research for unsupervised feature learning. In
this paper the quality of its learned features is demonstrated on the MNIST
dataset, where its performance confirms that indeed DDFA is a viable technique
for learning useful features.
| Paul A. Szerlip, Gregory Morse, Justin K. Pugh, and Kenneth O. Stanley | null | 1406.1833 | null | null |
A Credit Assignment Compiler for Joint Prediction | cs.LG | Many machine learning applications involve jointly predicting multiple
mutually dependent output variables. Learning to search is a family of methods
where the complex decision problem is cast into a sequence of decisions via a
search space. Although these methods have shown promise both in theory and in
practice, implementing them has remained cumbersome. In this paper, we show
that the search space can be defined by an arbitrary imperative program,
turning learning to search into a credit assignment compiler. Together with
algorithmic improvements to the compiler, we radically reduce both the
complexity of programming and the running time. We demonstrate the feasibility
of our approach on multiple joint prediction tasks. In all cases, we obtain
accuracies as high as alternative approaches, at drastically reduced execution
and programming time.
| Kai-Wei Chang, He He, Hal Daum\'e III, John Langford, Stephane Ross | null | 1406.1837 | null | null |
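The phrase "the search space can be defined by an arbitrary imperative
program" can be pictured with a toy left-to-right tagger: each call to a
predict function is one decision point, and a learning-to-search trainer
replays the same program during training to assign credit to individual
decisions. The stand-in policy below is purely illustrative and is not the
authors' compiler.

```python
# Conceptual sketch only: the "program defines the search space" idea as a
# toy left-to-right tagger. Each call to `predict` is one decision; a
# learning-to-search trainer would replay this same program, mixing a
# reference policy with the learned one to assign credit per decision.
def run_tagger(sentence, predict):
    tags = []
    for word in sentence:
        prev = tags[-1] if tags else "<s>"
        tags.append(predict(features=(word, prev)))   # one decision per word
    return tags

# Stand-in policy, purely for illustration: tag every word as "N".
print(run_tagger(["the", "dog", "barks"], predict=lambda features: "N"))
```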
Model-based Reinforcement Learning and the Eluder Dimension | stat.ML cs.LG | We consider the problem of learning to optimize an unknown Markov decision
process (MDP). We show that, if the MDP can be parameterized within some known
function class, we can obtain regret bounds that scale with the dimensionality,
rather than cardinality, of the system. We characterize this dependence
explicitly as $\tilde{O}(\sqrt{d_K d_E T})$ where $T$ is time elapsed, $d_K$ is
the Kolmogorov dimension and $d_E$ is the \emph{eluder dimension}. These
represent the first unified regret bounds for model-based reinforcement
learning and provide state of the art guarantees in several important settings.
Moreover, we present a simple and computationally efficient algorithm
\emph{posterior sampling for reinforcement learning} (PSRL) that satisfies
these bounds.
| Ian Osband, Benjamin Van Roy | null | 1406.1853 | null | null |
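A hedged sketch of the PSRL loop mentioned above, specialized to a finite
tabular MDP with a Dirichlet posterior over transitions and known rewards
(simplifying assumptions made here for illustration; the paper works with
general parameterized model classes): sample an MDP from the posterior at the
start of each episode, solve it, act greedily, and update the posterior.

```python
# Simplified PSRL sketch for a finite MDP. Assumptions: Dirichlet posterior
# over transitions, known rewards, finite horizon. `env_step(s, a)` is the
# caller-supplied environment transition; `counts[s, a]` holds Dirichlet
# parameters over next states.
import numpy as np

def psrl_episode(counts, rewards, horizon, env_step, s0, rng):
    n_states, n_actions, _ = counts.shape
    # 1) Sample a transition model from the posterior.
    P = np.array([[rng.dirichlet(counts[s, a]) for a in range(n_actions)]
                  for s in range(n_states)])          # shape (S, A, S)
    # 2) Solve the sampled MDP by finite-horizon value iteration.
    V = np.zeros(n_states)
    policy = np.zeros((horizon, n_states), dtype=int)
    for t in reversed(range(horizon)):
        Q = rewards + P @ V                           # shape (S, A)
        policy[t] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    # 3) Act greedily w.r.t. the sample and update the posterior counts.
    s = s0
    for t in range(horizon):
        a = policy[t, s]
        s_next = env_step(s, a)
        counts[s, a, s_next] += 1
        s = s_next
    return counts
```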
A Drifting-Games Analysis for Online Learning and Applications to
Boosting | cs.LG | We provide a general mechanism to design online learning algorithms based on
a minimax analysis within a drifting-games framework. Different online learning
settings (Hedge, multi-armed bandit problems and online convex optimization)
are studied by converting into various kinds of drifting games. The original
minimax analysis for drifting games is then used and generalized by applying a
series of relaxations, starting from choosing a convex surrogate of the 0-1
loss function. With different choices of surrogates, we not only recover
existing algorithms, but also propose new algorithms that are totally
parameter-free and enjoy other useful properties. Moreover, our drifting-games
framework naturally allows us to study high probability bounds without
resorting to any concentration results, and also a generalized notion of regret
that measures how good the algorithm is compared to all but the top small
fraction of candidates. Finally, we translate our new Hedge algorithm into a
new adaptive boosting algorithm that is computationally faster as shown in
experiments, since it ignores a large number of examples on each round.
| Haipeng Luo and Robert E. Schapire | null | 1406.1856 | null | null |
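For context, the classical Hedge update that the drifting-games analysis
recovers as a special case looks as follows; unlike the paper's new variant,
this baseline requires a learning rate eta to be chosen by hand.

```python
# Classical Hedge (multiplicative weights) for reference; the paper derives
# a parameter-free variant, whereas this baseline needs the rate eta.
import numpy as np

def hedge(loss_matrix, eta=0.5):
    """loss_matrix[t, i] = loss of expert i at round t, values in [0, 1]."""
    T, N = loss_matrix.shape
    log_w = np.zeros(N)
    total_loss = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                       # play the normalized weights
        total_loss += p @ loss_matrix[t]
        log_w -= eta * loss_matrix[t]      # multiplicative-weights update
    return total_loss

losses = np.random.default_rng(0).random((100, 5))
print(hedge(losses))
```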
Learning Word Representations with Hierarchical Sparse Coding | cs.CL cs.LG stat.ML | We propose a new method for learning word representations using hierarchical
regularization in sparse coding inspired by the linguistic study of word
meanings. We show an efficient learning algorithm based on stochastic proximal
methods that is significantly faster than previous approaches, making it
possible to perform hierarchical sparse coding on a corpus of billions of word
tokens. Experiments on various benchmark tasks---word similarity ranking,
analogies, sentence completion, and sentiment analysis---demonstrate that the
method outperforms or is competitive with state-of-the-art methods. Our word
representations are available at
\url{http://www.ark.cs.cmu.edu/dyogatam/wordvecs/}.
| Dani Yogatama and Manaal Faruqui and Chris Dyer and Noah A. Smith | null | 1406.2035 | null | null |
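The proximal structure underlying the method can be illustrated with a
proximal-gradient step on a sparse-coding objective. For simplicity the sketch
uses the plain l1 penalty, whose prox is soft-thresholding; the paper instead
uses a hierarchical group penalty whose prox is computed over a tree of
variable groups, optimized with stochastic proximal updates. All dimensions
below are placeholders.

```python
# Proximal-gradient sketch for a sparse-coding objective
#   min_A 0.5 * ||X - D A||_F^2 + lam * Omega(A),
# shown with the plain l1 penalty (prox = soft-thresholding), not the
# hierarchical penalty used in the paper.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 1000))   # 50-dim context vectors for 1000 tokens
D = rng.standard_normal((50, 20))     # dictionary with 20 atoms
A = np.zeros((20, 1000))              # sparse codes (the word representations)

lam, step = 0.1, 1e-3
for _ in range(100):
    grad = D.T @ (D @ A - X)                                   # smooth part
    A = A - step * grad
    A = np.sign(A) * np.maximum(np.abs(A) - step * lam, 0.0)   # prox of l1
```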
Training Convolutional Networks with Noisy Labels | cs.CV cs.LG cs.NE | The availability of large labeled datasets has allowed Convolutional Network
models to achieve impressive recognition results. However, in many settings
manual annotation of the data is impractical; instead our data has noisy
labels, i.e. there is some freely available label for each image which may or
may not be accurate. In this paper, we explore the performance of
discriminatively-trained Convnets when trained on such noisy data. We introduce
an extra noise layer into the network which adapts the network outputs to match
the noisy label distribution. The parameters of this noise layer can be
estimated as part of the training process and involve simple modifications to
current training infrastructures for deep networks. We demonstrate the
approaches on several datasets, including large scale experiments on the
ImageNet classification benchmark.
| Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev and
Rob Fergus | null | 1406.2080 | null | null |
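One way to picture the extra noise layer is as a learned row-stochastic matrix
that maps the network's distribution over clean labels to a distribution over
the observed noisy labels, trained jointly with the base model. The PyTorch
sketch below is an illustration of that idea under assumed sizes, not the
authors' implementation.

```python
# PyTorch sketch (assumed sizes) of a noise adaptation layer: a learned
# row-stochastic matrix Q maps clean-label probabilities to noisy-label
# probabilities and is trained jointly with the base model.
import torch
import torch.nn as nn

class NoiseAdaptationLayer(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        # Initialize near the identity: assume most labels are correct.
        self.logits = nn.Parameter(torch.eye(num_classes) * 5.0)

    def forward(self, clean_probs):
        Q = torch.softmax(self.logits, dim=1)   # rows sum to 1
        return clean_probs @ Q                  # distribution over noisy labels

base = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                     nn.Linear(256, 10), nn.Softmax(dim=1))
noise_layer = NoiseAdaptationLayer(10)
x, noisy_y = torch.randn(32, 784), torch.randint(0, 10, (32,))
noisy_probs = noise_layer(base(x))
loss = nn.functional.nll_loss(torch.log(noisy_probs + 1e-8), noisy_y)
loss.backward()
```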
Fast and Flexible ADMM Algorithms for Trend Filtering | stat.ML cs.LG cs.NA math.OC stat.AP | This paper presents a fast and robust algorithm for trend filtering, a
recently developed nonparametric regression tool. It has been shown that, for
estimating functions whose derivatives are of bounded variation, trend
filtering achieves the minimax optimal error rate, while other popular methods
like smoothing splines and kernels do not. Standing in the way of a more
widespread practical adoption, however, is a lack of scalable and numerically
stable algorithms for fitting trend filtering estimates. This paper presents a
highly efficient, specialized ADMM routine for trend filtering. Our algorithm
is competitive with the specialized interior point methods that are currently
in use, and yet is far more numerically robust. Furthermore, the proposed ADMM
implementation is very simple, and importantly, it is flexible enough to extend
to many interesting related problems, such as sparse trend filtering and
isotonic trend filtering. Software for our method is freely available, in both
the C and R languages.
| Aaditya Ramdas and Ryan J. Tibshirani | 10.1080/10618600.2015.1054033 | 1406.2082 | null | null |
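A generic ADMM splitting for the trend filtering objective alternates a
quadratic beta-update, a soft-thresholding z-update, and a dual update, as
sketched below. The paper's specialized routine uses a different splitting
whose subproblem is a one-dimensional fused lasso solved by dynamic
programming; this naive dense version only illustrates the alternating
structure, and rho and n_iter are placeholders.

```python
# Naive dense ADMM sketch for trend filtering,
#   minimize_beta 0.5 * ||y - beta||^2 + lam * ||D beta||_1,
# with D a repeated difference operator. Not the paper's specialized routine.
import numpy as np

def difference_matrix(n, order):
    D = np.eye(n)
    for _ in range(order):
        D = np.diff(D, axis=0)   # repeated first differences
    return D

def admm_trend_filter(y, lam, order=2, rho=1.0, n_iter=200):
    n = len(y)
    D = difference_matrix(n, order)
    beta, z, u = y.copy(), D @ y, np.zeros(n - order)
    A = np.eye(n) + rho * D.T @ D          # beta-update system, formed once
    for _ in range(n_iter):
        beta = np.linalg.solve(A, y + rho * D.T @ (z - u))
        v = D @ beta + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft-threshold
        u = u + D @ beta - z
    return beta

y = np.cumsum(np.random.default_rng(0).standard_normal(100))
fit = admm_trend_filter(y, lam=5.0)
```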