title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
A hybrid decision support system: application on healthcare | cs.AI cs.LG | Many knowledge-based systems, especially expert systems for medical
decision support, have been developed. Most of these systems are based on
production rules and can evolve only by updating those rules. In addition,
taking several criteria into account induces an exorbitant number of rules to
be injected into the system, and it becomes difficult to translate medical
knowledge or a decision-support step into a simple rule. Moreover, reasoning
based on generic cases has become classic and can even reduce the range of
possible solutions. To remedy this, we propose an approach that combines
multi-criteria decision making with case-based reasoning (CBR).
| Abdelhak Mansoul, Baghdad Atmani, Sofia Benbelkacem | null | 1311.4086 | null | null |
Towards Big Topic Modeling | cs.LG cs.DC cs.IR stat.ML | To solve the big topic modeling problem, we need to reduce both time and
space complexities of batch latent Dirichlet allocation (LDA) algorithms.
Although parallel LDA algorithms on the multi-processor architecture have low
time and space complexities, their communication costs among processors often
scale linearly with the vocabulary size and the number of topics, leading to a
serious scalability problem. To reduce the communication complexity among
processors for better scalability, we propose a novel communication-efficient
parallel topic modeling architecture based on power law, which consumes orders
of magnitude less communication time when the number of topics is large. We
combine the proposed communication-efficient parallel architecture with the
online belief propagation (OBP) algorithm referred to as POBP for big topic
modeling tasks. Extensive empirical results confirm that POBP has the following
advantages for big topic modeling: 1) high accuracy, 2) communication
efficiency, 3) fast speed, and 4) constant memory usage when
compared with recent state-of-the-art parallel LDA algorithms on the
multi-processor architecture.
| Jian-Feng Yan, Jia Zeng, Zhi-Qiang Liu, Yang Gao | null | 1311.4150 | null | null |
Unsupervised Learning of Invariant Representations in Hierarchical
Architectures | cs.CV cs.LG | The present phase of Machine Learning is characterized by supervised learning
algorithms relying on large sets of labeled examples ($n \to \infty$). The next
phase is likely to focus on algorithms capable of learning from very few
labeled examples ($n \to 1$), like humans seem able to do. We propose an
approach to this problem and describe the underlying theory, based on the
unsupervised, automatic learning of a ``good'' representation for supervised
learning, characterized by small sample complexity ($n$). We consider the case
of visual object recognition though the theory applies to other domains. The
starting point is the conjecture, proved in specific cases, that image
representations which are invariant to translations, scaling and other
transformations can considerably reduce the sample complexity of learning. We
prove that an invariant and unique (discriminative) signature can be computed
for each image patch, $I$, in terms of empirical distributions of the
dot-products between $I$ and a set of templates stored during unsupervised
learning. A module performing filtering and pooling, like the simple and
complex cells described by Hubel and Wiesel, can compute such estimates.
Hierarchical architectures consisting of these basic Hubel-Wiesel modules inherit
their properties of invariance, stability, and discriminability while capturing
the compositional organization of the visual world in terms of wholes and
parts. The theory extends existing deep learning convolutional architectures
for image and speech recognition. It also suggests that the main computational
goal of the ventral stream of visual cortex is to provide a hierarchical
representation of new objects/images which is invariant to transformations,
stable, and discriminative for recognition---and that this representation may
be continuously learned in an unsupervised way during development and visual
experience.
| Fabio Anselmi, Joel Z. Leibo, Lorenzo Rosasco, Jim Mutch, Andrea
Tacchetti, Tomaso Poggio | null | 1311.4158 | null | null |
On the definition of a general learning system with user-defined
operators | cs.LG | In this paper, we push forward the idea of machine learning systems whose
operators can be modified and fine-tuned for each problem. This allows us to
propose a learning paradigm where users can write (or adapt) their operators,
according to the problem, data representation and the way the information
should be navigated. To achieve this goal, data instances, background
knowledge, rules, programs and operators are all written in the same functional
language, Erlang. Since changing operators affects how the search space needs to
be explored, heuristics are learnt as a result of a decision process based on
reinforcement learning where each action is defined as a choice of operator and
rule. As a result, the architecture can be seen as a 'system for writing
machine learning systems' or as a way to explore new operators, where policy
reuse (as a kind of transfer learning) is allowed. States and actions are represented
in a Q matrix which is actually a table, from which a supervised model is
learnt. This makes it possible to have a more flexible mapping between old and
new problems, since we work with an abstraction of rules and actions. We
include some examples of policy reuse and the application of the system gErl to
IQ problems. In order to evaluate gErl, we will test it against some structured
problems: a selection of IQ test tasks and some experiments on some structured
prediction problems (list patterns).
| Fernando Mart\'inez-Plumed and C\`esar Ferri and Jos\'e
Hern\'andez-Orallo and Mar\'ia-Jos\'e Ram\'irez-Quintana | null | 1311.4235 | null | null |
Reflection methods for user-friendly submodular optimization | cs.LG cs.NA cs.RO math.OC | Recently, it has become evident that submodularity naturally captures widely
occurring concepts in machine learning, signal processing and computer vision.
Consequently, there is a need for efficient optimization procedures for
submodular functions, especially for minimization problems. While general
submodular minimization is challenging, we propose a new method that exploits
existing decomposability of submodular functions. In contrast to previous
approaches, our method is neither approximate, nor impractical, nor does it
need any cumbersome parameter tuning. Moreover, it is easy to implement and
parallelize. A key component of our method is a formulation of the discrete
submodular minimization problem as a continuous best approximation problem that
is solved through a sequence of reflections, and its solution can be easily
thresholded to obtain an optimal discrete solution. This method solves both the
continuous and discrete formulations of the problem, and therefore has
applications in learning, inference, and reconstruction. In our experiments, we
illustrate the benefits of our method on two image segmentation tasks.
| Stefanie Jegelka, Francis Bach (INRIA Paris - Rocquencourt, LIENS),
Suvrit Sra (MPI) | null | 1311.4296 | null | null |
Ranking Algorithms by Performance | cs.AI cs.LG | A common way of doing algorithm selection is to train a machine learning
model and predict the best algorithm from a portfolio to solve a particular
problem. While this method has been highly successful, choosing only a single
algorithm has inherent limitations -- if the choice was bad, no remedial action
can be taken and parallelism cannot be exploited, to name but a few problems.
In this paper, we investigate how to predict the ranking of the portfolio
algorithms on a particular problem. This information can be used to choose the
single best algorithm, but also to allocate resources to the algorithms
according to their rank. We evaluate a range of approaches to predict the
ranking of a set of algorithms on a problem. We furthermore introduce a
framework for categorizing ranking predictions that allows one to judge the
expressiveness of the predictive output. Our experimental evaluation
demonstrates on a range of data sets from the literature that it is beneficial
to consider the relationship between algorithms when predicting rankings. We
furthermore show that relatively naive approaches already deliver rankings of
good quality.
| Lars Kotthoff | null | 1311.4319 | null | null |
Stochastic processes and feedback-linearisation for online
identification and Bayesian adaptive control of fully-actuated mechanical
systems | cs.LG cs.SY physics.data-an stat.ML | This work proposes a new method for simultaneous probabilistic identification
and control of an observable, fully-actuated mechanical system. Identification
is achieved by conditioning stochastic process priors on observations of
configurations and noisy estimates of configuration derivatives. In contrast to
previous work that has used stochastic processes for identification, we
leverage the structural knowledge afforded by Lagrangian mechanics and learn
the drift and control input matrix functions of the control-affine system
separately. We utilise feedback-linearisation to reduce, in expectation, the
uncertain nonlinear control problem to one that is easy to regulate in a
desired manner. Thereby, our method combines the flexibility of nonparametric
Bayesian learning with epistemological guarantees on the expected closed-loop
trajectory. We illustrate our method in the context of torque-actuated pendula
where the dynamics are learned with a combination of normal and log-normal
processes.
| Jan-Peter Calliess, Antonis Papachristodoulou and Stephen J. Roberts | null | 1311.4468 | null | null |
A Component Lasso | stat.ML cs.LG | We propose a new sparse regression method called the component lasso, based
on a simple idea. The method uses the connected-components structure of the
sample covariance matrix to split the problem into smaller ones. It then solves
the subproblems separately, obtaining a coefficient vector for each one. Then,
it uses non-negative least squares to recombine the different vectors into a
single solution. This step is useful in selecting and reweighting components
that are correlated with the response. Simulated and real data examples show
that the component lasso can outperform standard regression methods such as the
lasso and elastic net, achieving a lower mean squared error as well as better
support recovery.
| Nadine Hussami and Robert Tibshirani | null | 1311.4472 | null | null |
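The pipeline in the component lasso abstract above is concrete enough to sketch. Below is a minimal illustration in Python, assuming centered data: features are split by the connected components of a thresholded sample covariance, a lasso is fit per component, and the component fits are recombined with non-negative least squares. The threshold `tau` and penalty `alpha` are illustrative assumptions, not values from the paper.

```python
# Minimal component-lasso sketch: covariance components -> per-component
# lasso -> non-negative least squares recombination of component fits.
import numpy as np
from scipy.optimize import nnls
from scipy.sparse.csgraph import connected_components
from sklearn.linear_model import Lasso

def component_lasso(X, y, alpha=0.05, tau=0.1):
    adj = np.abs(np.cov(X, rowvar=False)) > tau        # covariance graph
    n_comp, labels = connected_components(adj, directed=False)
    fits = np.zeros((X.shape[0], n_comp))
    coefs = np.zeros((n_comp, X.shape[1]))
    for k in range(n_comp):
        idx = np.flatnonzero(labels == k)
        m = Lasso(alpha=alpha, fit_intercept=False).fit(X[:, idx], y)
        fits[:, k] = m.predict(X[:, idx])              # component prediction
        coefs[k, idx] = m.coef_
    w, _ = nnls(fits, y)                               # reweight components
    return coefs.T @ w                                 # combined coefficients

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 12))
beta = np.zeros(12); beta[[0, 1, 6]] = [2.0, -1.0, 1.5]
y = X @ beta + 0.1 * rng.standard_normal(200)
print(np.round(component_lasso(X, y), 2))
```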
Discriminative Density-ratio Estimation | cs.LG | The covariate shift is a challenging problem in supervised learning that
results from the discrepancy between the training and test distributions. An
effective approach, which recently drew considerable attention in the research
community, is to reweight the training samples to minimize that discrepancy.
Specifically, many methods are based on developing Density-ratio (DR) estimation
techniques that apply to both regression and classification problems. Although
these methods work well for regression problems, their performance on
classification problems is not satisfactory. This is due to a key observation
that these methods focus on matching the sample marginal distributions without
paying attention to preserving the separation between classes in the reweighted
space. In this paper, we propose a novel method for Discriminative
Density-ratio (DDR) estimation that addresses the aforementioned problem and
aims at estimating the density-ratio of joint distributions in a class-wise
manner. The proposed algorithm is an iterative procedure that alternates
between estimating the class information for the test data and estimating a new
density ratio for each class. To incorporate the estimated class information of
the test data, a soft matching technique is proposed. In addition, we employ an
effective criterion which adopts mutual information as an indicator to stop the
iterative procedure while resulting in a decision boundary that lies in a
sparse region. Experiments on synthetic and benchmark datasets demonstrate the
superiority of the proposed method in terms of both accuracy and robustness.
| Yun-Qian Miao, Ahmed K. Farahat, Mohamed S. Kamel | null | 1311.4486 | null | null |
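The alternation the DDR abstract describes can be sketched with off-the-shelf tools. In the hedged sketch below, density ratios are estimated with the standard probabilistic-classifier trick (train a classifier to separate train from test points and convert its probabilities into p_test/p_train), and the soft matching is emulated by weighting test points by their predicted class probabilities; the paper's own DR estimator and its mutual-information stopping criterion are not reproduced.

```python
# Class-wise density-ratio reweighting sketch under covariate shift.
import numpy as np
from sklearn.linear_model import LogisticRegression

def class_ratio(X_tr_c, X_te, te_weight):
    # ratio p_test/p_train via a train-vs-test classifier, test side
    # softly weighted by the class probabilities (crude soft matching)
    Z = np.vstack([X_tr_c, X_te])
    s = np.r_[np.zeros(len(X_tr_c)), np.ones(len(X_te))]
    sw = np.r_[np.ones(len(X_tr_c)), te_weight]
    p = LogisticRegression().fit(Z, s, sample_weight=sw).predict_proba(X_tr_c)[:, 1]
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return p / (1 - p) * len(X_tr_c) / max(te_weight.sum(), 1e-9)

def ddr_weights(X_tr, y_tr, X_te, n_iter=3):
    w = np.ones(len(X_tr))
    for _ in range(n_iter):
        clf = LogisticRegression().fit(X_tr, y_tr, sample_weight=w)
        soft = clf.predict_proba(X_te)                  # soft test labels
        for c, cls in enumerate(clf.classes_):
            m = y_tr == cls
            w[m] = class_ratio(X_tr[m], X_te, soft[:, c])
        w *= len(w) / w.sum()                           # keep mean weight at 1
    return w

rng = np.random.default_rng(0)
X_tr = rng.standard_normal((400, 4)); y_tr = (X_tr[:, 0] > 0).astype(int)
X_te = rng.standard_normal((300, 4)) + 0.7              # shifted test marginal
print(np.round(ddr_weights(X_tr, y_tr, X_te)[:8], 2))
```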
Post-Proceedings of the First International Workshop on Learning and
Nonmonotonic Reasoning | cs.AI cs.LG cs.LO | Knowledge Representation and Reasoning and Machine Learning are two important
fields in AI. Nonmonotonic logic programming (NMLP) and Answer Set Programming
(ASP) provide formal languages for representing and reasoning with commonsense
knowledge and realize declarative problem solving in AI. On the other side,
Inductive Logic Programming (ILP) realizes Machine Learning in logic
programming, which provides a formal background to inductive learning and the
techniques have been applied to the fields of relational learning and data
mining. Generally speaking, NMLP and ASP realize nonmonotonic reasoning while
lack the ability of learning. By contrast, ILP realizes inductive learning
while most techniques have been developed under the classical monotonic logic.
With this background, some researchers attempt to combine techniques in the
context of nonmonotonic ILP. Such combination will introduce a learning
mechanism to programs and would exploit new applications on the NMLP side,
while on the ILP side it will extend the representation language and enable us
to use existing solvers. Cross-fertilization between learning and nonmonotonic
reasoning can also occur in areas such as the use of answer set solvers for ILP,
speeding up learning while running answer set solvers, learning action theories,
learning transition rules in dynamical systems, abductive learning, learning
biological networks with inhibition, and applications involving default and
negation. This workshop is the first attempt to provide an open forum for the
identification of problems and discussion of possible collaborations among
researchers with complementary expertise. The workshop was held on September
15th of 2013 in Corunna, Spain. This post-proceedings contains five technical
papers (out of six accepted papers) and the abstract of the invited talk by Luc
De Raedt.
| Katsumi Inoue and Chiaki Sakama (Editors) | null | 1311.4639 | null | null |
Near-Optimal Entrywise Sampling for Data Matrices | cs.LG cs.IT cs.NA math.IT stat.ML | We consider the problem of selecting non-zero entries of a matrix $A$ in
order to produce a sparse sketch of it, $B$, that minimizes $\|A-B\|_2$. For
large $m \times n$ matrices, such that $n \gg m$ (for example, representing $n$
observations over $m$ attributes) we give sampling distributions that exhibit
four important properties. First, they have closed forms computable from
minimal information regarding $A$. Second, they allow sketching of matrices
whose non-zeros are presented to the algorithm in arbitrary order as a stream,
with $O(1)$ computation per non-zero. Third, the resulting sketch matrices are
not only sparse, but their non-zero entries are highly compressible. Lastly,
and most importantly, under mild assumptions, our distributions are provably
competitive with the optimal offline distribution. Note that the probabilities
in the optimal offline distribution may be complex functions of all the entries
in the matrix. Therefore, regardless of computational complexity, the optimal
distribution might be impossible to compute in the streaming model.
| Dimitris Achlioptas, Zohar Karnin, Edo Liberty | null | 1311.4643 | null | null |
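The entrywise-sampling setup above follows a generic template that is easy to state in code: keep entry (i, j) with probability p_ij and rescale kept entries by 1/p_ij so the sketch is unbiased, E[B] = A. The L1-proportional choice of p_ij below is one simple instance for illustration; the paper derives specific closed-form distributions with competitiveness guarantees that this sketch does not reproduce.

```python
# Generic unbiased entrywise sampling of a dense matrix into a sparse sketch.
import numpy as np
from scipy import sparse

def entrywise_sketch(A, s):
    """Sample roughly s entries of A; return an unbiased sparse sketch B."""
    p = np.minimum(1.0, s * np.abs(A) / np.abs(A).sum())   # keep probabilities
    keep = np.random.rand(*A.shape) < p
    B = np.where(keep, A / np.maximum(p, 1e-12), 0.0)      # rescale kept entries
    return sparse.csr_matrix(B)

np.random.seed(0)
A = np.random.randn(50, 500)
B = entrywise_sketch(A, s=5000)
print(B.nnz, np.linalg.norm(A - B.toarray(), 2))           # sparsity, ||A-B||_2
```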
Asymptotically Exact, Embarrassingly Parallel MCMC | stat.ML cs.DC cs.LG stat.CO | Communication costs, resulting from synchronization requirements during
learning, can greatly slow down many parallel machine learning algorithms. In
this paper, we present a parallel Markov chain Monte Carlo (MCMC) algorithm in
which subsets of data are processed independently, with very little
communication. First, we arbitrarily partition data onto multiple machines.
Then, on each machine, any classical MCMC method (e.g., Gibbs sampling) may be
used to draw samples from a posterior distribution given the data subset.
Finally, the samples from each machine are combined to form samples from the
full posterior. This embarrassingly parallel algorithm allows each machine to
act independently on a subset of the data (without communication) until the
final combination stage. We prove that our algorithm generates asymptotically
exact samples and empirically demonstrate its ability to parallelize burn-in
and sampling in several models.
| Willie Neiswanger, Chong Wang, Eric Xing | null | 1311.4780 | null | null |
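A minimal sketch of the embarrassingly parallel scheme above, using the parametric (Gaussian) combination rule: each "machine" samples its subposterior (the prior raised to 1/M times its shard's likelihood) with any MCMC method, and the shard sample moments are then multiplied as Gaussians, so precisions add. The paper also gives nonparametric and semiparametric combination rules that this sketch omits; the prior and step size are illustrative.

```python
# Embarrassingly parallel MCMC: independent subposterior sampling per shard,
# then a Gaussian-product combination of the shard samples.
import numpy as np

def metropolis(logp, x0, n=6000, step=0.05):
    x, lp, out = x0, logp(x0), []
    for _ in range(n):
        y = x + step * np.random.randn()
        ly = logp(y)
        if np.log(np.random.rand()) < ly - lp:
            x, lp = y, ly
        out.append(x)
    return np.array(out[n // 2:])                      # drop burn-in

np.random.seed(0)
M = 4
data = np.random.randn(2000) + 2.0                     # unknown mean, unit var
shards = np.array_split(data, M)

subs = []
for d in shards:
    # subposterior: N(0, 10) prior to the power 1/M times shard likelihood
    logp = lambda t, d=d: -t**2 / (2 * 10.0 * M) - 0.5 * ((d - t) ** 2).sum()
    subs.append(metropolis(logp, d.mean()))

prec = np.array([1.0 / s.var() for s in subs])         # Gaussian product:
mu = (prec * [s.mean() for s in subs]).sum() / prec.sum()  # precisions add
print("combined posterior: mean %.3f, sd %.4f" % (mu, prec.sum() ** -0.5))
```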
Beating the Minimax Rate of Active Learning with Prior Knowledge | cs.LG stat.ML | Active learning refers to the learning protocol where the learner is allowed
to choose a subset of instances for labeling. Previous studies have shown that,
compared with passive learning, active learning is able to reduce the label
complexity exponentially if the data are linearly separable or satisfy the
Tsybakov noise condition with parameter $\kappa=1$. In this paper, we propose a
novel active learning algorithm using a convex surrogate loss, with the goal of
broadening the cases for which active learning achieves an exponential
improvement. We make use of a convex loss not only because it reduces the
computational cost, but more importantly because it leads to a tight bound for
the empirical process (i.e., the difference between the empirical estimation
and the expectation) when the current solution is close to the optimal one.
Under the assumption that the norm of the optimal classifier that minimizes the
convex risk is available, our analysis shows that the introduction of the
convex surrogate loss yields an exponential reduction in the label complexity
even when the parameter $\kappa$ of the Tsybakov noise is larger than $1$. To
the best of our knowledge, this is the first work that improves the minimax
rate of active learning by utilizing certain prior knowledge.
| Lijun Zhang and Mehrdad Mahdavi and Rong Jin | null | 1311.4803 | null | null |
Gaussian Process Optimization with Mutual Information | stat.ML cs.LG | In this paper, we analyze a generic algorithm scheme for sequential global
optimization using Gaussian processes. The upper bounds we derive on the
cumulative regret for this generic algorithm improve by an exponential factor
the previously known bounds for algorithms like GP-UCB. We also introduce the
novel Gaussian Process Mutual Information algorithm (GP-MI), which
further improves these upper bounds on the cumulative regret significantly. We
confirm the efficiency of this algorithm on synthetic and real tasks against
the natural competitor, GP-UCB, and also the Expected Improvement heuristic.
| Emile Contal, Vianney Perchet, Nicolas Vayatis | null | 1311.4825 | null | null |
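The GP-MI acquisition rule described above admits a compact sketch: pick the point maximizing mu(x) + sqrt(alpha) * (sqrt(sigma^2(x) + gamma) - sqrt(gamma)), then add the chosen point's posterior variance to the running total gamma. The grid, the fixed RBF kernel, and the choice alpha = log(2/delta) below are illustrative assumptions.

```python
# GP-MI-style Bayesian optimization loop on a 1-D toy objective.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.sin(3 * x) + 0.5 * np.cos(7 * x)      # unknown objective
grid = np.linspace(0.0, 2.0, 400)[:, None]
alpha, gamma = np.log(2 / 0.05), 0.0                   # alpha = log(2/delta)
X, y = [np.array([[1.0]])], [f(1.0)]

for t in range(25):
    gp = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-4, optimizer=None)
    gp.fit(np.vstack(X), np.array(y))
    mu, sd = gp.predict(grid, return_std=True)
    acq = mu + np.sqrt(alpha) * (np.sqrt(sd**2 + gamma) - np.sqrt(gamma))
    i = int(np.argmax(acq))                            # GP-MI choice
    gamma += sd[i] ** 2                                # accumulate variance
    X.append(grid[i:i + 1]); y.append(f(grid[i, 0]))

print("best observed f: %.3f" % max(y))
```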
Domain Adaptation of Majority Votes via Perturbed Variation-based Label
Transfer | stat.ML cs.LG | We tackle the PAC-Bayesian Domain Adaptation (DA) problem. This arises when
one desires to learn, from a source distribution, a good weighted majority vote
(over a set of classifiers) on a different target distribution. In this
context, the disagreement between classifiers is known to be crucial to
control. In the non-DA supervised setting, a theoretical bound - the C-bound -
involves this disagreement and leads to a majority vote learning algorithm:
MinCq. In this work, we extend MinCq to DA by taking advantage of an elegant
divergence between distributions called the Perturbed Variation (PV). Firstly,
justified by a new formulation of the C-bound, we provide MinCq with a target
sample labeled by a PV-based self-labeling focused on regions where the source
and target marginal distributions are closer. Secondly, we propose an original
process for tuning the hyperparameters. Our framework shows very promising
results on a toy problem.
| Emilie Morvant (IST Austria) | 10.1016/j.patrec.2014.08.013 | 1311.4833 | null | null |
Extended Formulations for Online Linear Bandit Optimization | cs.LG cs.DS | On-line linear optimization on combinatorial action sets (d-dimensional
actions) with bandit feedback is known to have complexity on the order of the
dimension of the problem. The exponentially weighted strategy achieves the best
known regret bound that is of the order of $d^{2}\sqrt{n}$ (where $d$ is the
dimension of the problem, $n$ is the time horizon). However, such strategies
are provably suboptimal or computationally inefficient. The complexity is
attributed to the combinatorial structure of the action set and the dearth of
efficient exploration strategies of the set. Mirror descent with entropic
regularization function comes close to solving this problem by enforcing a
meticulous projection of weights with an inherent boundary condition. Entropic
regularization in mirror descent is the only known way of achieving a
logarithmic dependence on the dimension. Here, we argue otherwise and recover
the original intuition of exponential weighting by borrowing a technique from
discrete optimization and approximation algorithms called `extended
formulation'. Such formulations appeal to the underlying geometry of the set
with a guaranteed logarithmic dependence on the dimension underpinned by an
information theoretic entropic analysis.
| Shaona Ghosh, Adam Prugel-Bennett | null | 1311.5022 | null | null |
Gromov-Hausdorff stability of linkage-based hierarchical clustering
methods | cs.LG | A hierarchical clustering method is stable if small perturbations on the data
set produce small perturbations in the result. These perturbations are measured
using the Gromov-Hausdorff metric. We study the problem of stability on
linkage-based hierarchical clustering methods. We obtain that, under some basic
conditions, standard linkage-based methods are semi-stable. This means that
they are stable if the input data is close enough to an ultrametric space. We
prove that, apart from exotic examples, introducing any unchaining condition in
the algorithm always produces unstable methods.
| A. Mart\'inez-P\'erez | null | 1311.5068 | null | null |
Sparse Overlapping Sets Lasso for Multitask Learning and its Application
to fMRI Analysis | cs.LG stat.ML | Multitask learning can be effective when features useful in one task are also
useful for other tasks, and the group lasso is a standard method for selecting
a common subset of features. In this paper, we are interested in a less
restrictive form of multitask learning, wherein (1) the available features can
be organized into subsets according to a notion of similarity and (2) features
useful in one task are similar, but not necessarily identical, to the features
best suited for other tasks. The main contribution of this paper is a new
procedure called Sparse Overlapping Sets (SOS) lasso, a convex optimization
that automatically selects similar features for related learning tasks. Error
bounds are derived for SOSlasso and its consistency is established for squared
error loss. In particular, SOSlasso is motivated by multi-subject fMRI studies
in which functional activity is classified using brain voxels as features.
Experiments with real and synthetic data demonstrate the advantages of SOSlasso
compared to the lasso and group lasso.
| Nikhil Rao, Christopher Cox, Robert Nowak, Timothy Rogers | null | 1311.5422 | null | null |
Bayesian Discovery of Threat Networks | cs.SI cs.LG math.ST physics.soc-ph stat.ML stat.TH | A novel unified Bayesian framework for network detection is developed, under
which a detection algorithm is derived based on random walks on graphs. The
algorithm detects threat networks using partial observations of their activity,
and is proved to be optimum in the Neyman-Pearson sense. The algorithm is
defined by a graph, at least one observation, and a diffusion model for threat.
A link to well-known spectral detection methods is provided, and the
equivalence of the random walk and harmonic solutions to the Bayesian
formulation is proven. A general diffusion model is introduced that utilizes
spatio-temporal relationships between vertices, and is used for a specific
space-time formulation that leads to significant performance improvements on
coordinated covert networks. This performance is demonstrated using a new
hybrid mixed-membership blockmodel introduced to simulate random covert
networks with realistic properties.
| Steven T. Smith, Edward K. Kao, Kenneth D. Senne, Garrett Bernstein,
and Scott Philips | 10.1109/TSP.2014.2336613 | 1311.5552 | null | null |
Compressive Measurement Designs for Estimating Structured Signals in
Structured Clutter: A Bayesian Experimental Design Approach | stat.ML cs.LG | This work considers an estimation task in compressive sensing, where the goal
is to estimate an unknown signal from compressive measurements that are
corrupted by additive pre-measurement noise (interference, or clutter) as well
as post-measurement noise, in the specific setting where some (perhaps limited)
prior knowledge on the signal, interference, and noise is available. The
specific aim here is to devise a strategy for incorporating this prior
information into the design of an appropriate compressive measurement strategy.
Here, the prior information is interpreted as statistics of a prior
distribution on the relevant quantities, and an approach based on Bayesian
Experimental Design is proposed. Experimental results on synthetic data
demonstrate that the proposed approach outperforms traditional random
compressive measurement designs, which are agnostic to the prior information,
as well as several other knowledge-enhanced sensing matrix designs based on
more heuristic notions.
| Swayambhoo Jain, Akshay Soni, and Jarvis Haupt | null | 1311.5599 | null | null |
Learning Non-Linear Feature Maps | cs.LG | Feature selection plays a pivotal role in learning, particularly in areas
where parsimonious features can provide insight into the underlying process,
such as biology. Recent approaches for non-linear feature selection employing
greedy optimisation of Centred Kernel Target Alignment (KTA), while exhibiting
strong results in terms of generalisation accuracy and sparsity, can become
computationally prohibitive for high-dimensional datasets. We propose randSel,
a randomised feature selection algorithm, with attractive scaling properties.
Our theoretical analysis of randSel provides strong probabilistic guarantees
for the correct identification of relevant features. Experimental results on
real and artificial data show that the method successfully identifies
effective features, performing better than a number of competitive approaches.
| Dimitrios Athanasakis, John Shawe-Taylor, Delmiro Fernandez-Reyes | null | 1311.5636 | null | null |
Gradient Hard Thresholding Pursuit for Sparsity-Constrained Optimization | cs.LG cs.NA stat.ML | Hard Thresholding Pursuit (HTP) is an iterative greedy selection procedure
for finding sparse solutions of underdetermined linear systems. This method has
been shown to have strong theoretical guarantees and impressive numerical
performance. In this paper, we generalize HTP from compressive sensing to a
generic problem setup of sparsity-constrained convex optimization. The proposed
algorithm iterates between a standard gradient descent step and a hard
thresholding step with or without debiasing. We prove that our method enjoys
the strong guarantees analogous to HTP in terms of rate of convergence and
parameter estimation accuracy. Numerical evidence shows that our method is
superior to the state-of-the-art greedy selection methods in sparse logistic
regression and sparse precision matrix estimation tasks.
| Xiao-Tong Yuan, Ping Li, Tong Zhang | null | 1311.5750 | null | null |
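The iteration described in the abstract above, a gradient step followed by hard thresholding with debiasing, is short enough to show directly. The sketch below instantiates it for least squares; the paper covers general sparsity-constrained convex losses such as sparse logistic regression, and the fixed step size here is an illustrative safe choice.

```python
# Gradient hard thresholding pursuit for sparse least squares:
# gradient step -> keep top-k coordinates -> debias on the support.
import numpy as np

def htp(X, y, k, n_iter=50):
    eta = 1.0 / np.linalg.norm(X, 2) ** 2               # safe step size
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        u = w - eta * X.T @ (X @ w - y)                 # gradient step
        S = np.argsort(np.abs(u))[-k:]                  # hard threshold: top-k
        w = np.zeros(X.shape[1])
        w[S] = np.linalg.lstsq(X[:, S], y, rcond=None)[0]  # debiasing
    return w

np.random.seed(1)
X = np.random.randn(100, 300)
w_true = np.zeros(300); w_true[:5] = [3, -2, 4, 1, -3]
y = X @ w_true + 0.05 * np.random.randn(100)
w_hat = htp(X, y, k=5)
print(np.nonzero(w_hat)[0], np.round(w_hat[:5], 2))     # recovered support
```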
Finding sparse solutions of systems of polynomial equations via
group-sparsity optimization | cs.IT cs.LG math.IT math.OC stat.ML | The paper deals with the problem of finding sparse solutions to systems of
polynomial equations possibly perturbed by noise. In particular, we show how
these solutions can be recovered from group-sparse solutions of a derived
system of linear equations. Then, two approaches are considered to find these
group-sparse solutions. The first one is based on a convex relaxation resulting
in a second-order cone programming formulation which can benefit from efficient
reweighting techniques for sparsity enhancement. For this approach, sufficient
conditions for the exact recovery of the sparsest solution to the polynomial
system are derived in the noiseless setting, while stable recovery results are
obtained for the noisy case. Though lacking a similar analysis, the second
approach provides a more computationally efficient algorithm based on a greedy
strategy adding the groups one-by-one. With respect to previous work, the
proposed methods recover the sparsest solution in a very short computing time
while remaining at least as accurate in terms of the probability of success.
This probability is empirically analyzed to emphasize the relationship between
the ability of the methods to solve the polynomial system and the sparsity of
the solution.
| Fabien Lauer (LORIA), Henrik Ohlsson | null | 1311.5871 | null | null |
Fast Training of Effective Multi-class Boosting Using Coordinate Descent
Optimization | cs.CV cs.LG stat.CO | We present a novel column generation based boosting method for multi-class
classification. Our multi-class boosting is formulated in a single optimization
problem as in Shen and Hao (2011). Different from most existing multi-class
boosting methods, which use the same set of weak learners for all the classes,
we train class specified weak learners (i.e., each class has a different set of
weak learners). We show that using separate weak learner sets for each class
leads to fast convergence, without introducing additional computational
overhead in the training procedure. To further make the training more efficient
and scalable, we also propose a fast coordinate descent method for solving
the optimization problem at each boosting iteration. The proposed coordinate
descent method is conceptually simple and easy to implement in that it is a
closed-form solution for each coordinate update. Experimental results on a
variety of datasets show that, compared to a range of existing multi-class
boosting methods, the proposed method has a much faster convergence rate and
better generalization performance in most cases. We also empirically show that
the proposed fast coordinate descent algorithm needs less training time than
the MultiBoost algorithm in Shen and Hao (2011).
| Guosheng Lin, Chunhua Shen, Anton van den Hengel, David Suter | null | 1311.5947 | null | null |
No Free Lunch Theorem and Bayesian probability theory: two sides of the
same coin. Some implications for black-box optimization and metaheuristics | cs.LG | Challenging optimization problems, which elude acceptable solution via
conventional calculus methods, arise commonly in different areas of industrial
design and practice. Hard optimization problems are those that manifest the
following behavior: a) a high number of independent input variables; b) a very
complex or irregular multi-modal fitness; c) computationally expensive fitness
evaluation. This paper will focus on some theoretical issues that have strong
implications for practice. I will stress how an interpretation of the No Free
Lunch theorem leads naturally to a general Bayesian optimization framework. The
choice of a prior over the space of functions is a critical and inevitable step
in every black-box optimization.
| Loris Serafino | null | 1311.6041 | null | null |
A Primal-Dual Method for Training Recurrent Neural Networks Constrained
by the Echo-State Property | cs.LG cs.NE | We present an architecture of a recurrent neural network (RNN) with a
fully-connected deep neural network (DNN) as its feature extractor. The RNN is
equipped with both causal temporal prediction and non-causal look-ahead, via
auto-regression (AR) and moving-average (MA), respectively. The focus of this
paper is a primal-dual training method that formulates the learning of the RNN
as a formal optimization problem with an inequality constraint that provides a
sufficient condition for the stability of the network dynamics. Experimental
results demonstrate the effectiveness of this new method, which achieves 18.86%
phone recognition error on the TIMIT benchmark for the core test set. The
result approaches the best result of 17.7%, which was obtained by using RNN
with long short-term memory (LSTM). The results also show that the proposed
primal-dual training method produces lower recognition errors than the popular
RNN methods developed earlier based on the carefully tuned threshold parameter
that heuristically prevents the gradient from exploding.
| Jianshu Chen and Li Deng | null | 1311.6091 | null | null |
Off-policy reinforcement learning for $ H_\infty $ control design | cs.SY cs.LG math.OC stat.ML | The $H_\infty$ control design problem is considered for nonlinear systems
with unknown internal system model. It is known that the nonlinear $ H_\infty $
control problem can be transformed into solving the so-called
Hamilton-Jacobi-Isaacs (HJI) equation, which is a nonlinear partial
differential equation that is generally impossible to be solved analytically.
Even worse, model-based approaches cannot be used for approximately solving HJI
equation, when the accurate system model is unavailable or costly to obtain in
practice. To overcome these difficulties, an off-policy reinforcement learning
(RL) method is introduced to learn the solution of HJI equation from real
system data instead of mathematical system model, and its convergence is
proved. In the off-policy RL method, the system data can be generated with
arbitrary policies rather than the evaluating policy, which is extremely
important and promising for practical systems. For implementation purpose, a
neural network (NN) based actor-critic structure is employed and a least-square
NN weight update algorithm is derived based on the method of weighted
residuals. Finally, the developed NN-based off-policy RL method is tested on a
linear F16 aircraft plant, and further applied to a rotational/translational
actuator system.
| Biao Luo, Huai-Ning Wu, Tingwen Huang | 10.1109/TCYB.2014.2319577 | 1311.6107 | null | null |
Bounding the Test Log-Likelihood of Generative Models | cs.LG | Several interesting generative learning algorithms involve a complex
probability distribution over many random variables, involving intractable
normalization constants or latent variable normalization. Some of them may not
even have an analytic expression for the unnormalized probability function and
no tractable approximation. This makes it difficult to estimate the quality of
these models, once they have been trained, or to monitor their quality (e.g.
for early stopping) while training. A previously proposed method is based on
constructing a non-parametric density estimator of the model's probability
function from samples generated by the model. We revisit this idea, propose a
more efficient estimator, and prove that it provides a lower bound on the true
test log-likelihood, and an unbiased estimator as the number of generated
samples goes to infinity, although one that incorporates the effect of poor
mixing. We further propose a biased variant of the estimator that can be used
reliably with a finite number of samples for the purpose of model comparison.
| Yoshua Bengio, Li Yao and Kyunghyun Cho | null | 1311.6184 | null | null |
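The classical construction the abstract above starts from, a non-parametric density estimator built from model samples, is simple to write down: fit a Gaussian Parzen window to draws from the generative model and score held-out test points under it. The paper's contribution is a more efficient estimator with a bias analysis, which is not reproduced here; the bandwidths below are illustrative.

```python
# Baseline Parzen-window evaluation of a generative model from its samples.
import numpy as np
from scipy.special import logsumexp

def parzen_loglik(model_samples, test, sigma):
    d = test[:, None, :] - model_samples[None, :, :]    # (n_test, n_s, D)
    ll = -0.5 * (d ** 2).sum(-1) / sigma**2
    ll -= 0.5 * test.shape[1] * np.log(2 * np.pi * sigma**2)
    # mean test log-likelihood under the mixture of Gaussians at the samples
    return (logsumexp(ll, axis=1) - np.log(len(model_samples))).mean()

np.random.seed(0)
samples = np.random.randn(2000, 5)      # stand-in for draws from the model
test = np.random.randn(500, 5)          # held-out data
for s in [0.1, 0.3, 0.5, 1.0]:
    print("sigma=%.1f  loglik=%.3f" % (s, parzen_loglik(samples, test, s)))
```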
Novelty Detection Under Multi-Instance Multi-Label Framework | cs.LG | Novelty detection plays an important role in machine learning and signal
processing. This paper studies novelty detection in a new setting where the
data object is represented as a bag of instances and associated with multiple
class labels, referred to as multi-instance multi-label (MIML) learning.
Contrary to the common assumption in MIML that each instance in a bag belongs
to one of the known classes, in novelty detection, we focus on the scenario
where bags may contain novel-class instances. The goal is to determine, for any
given instance in a new bag, whether it belongs to a known class or a novel
class. Detecting novelty in the MIML setting captures many real-world phenomena
and has many potential applications. For example, in a collection of tagged
images, the tag may only cover a subset of objects existing in the images.
Discovering an object whose class has not been previously tagged can be useful
for the purpose of soliciting a label for the new object class. To address this
novel problem, we present a discriminative framework for detecting new class
instances. Experiments demonstrate the effectiveness of our proposed method,
and reveal that the presence of unlabeled novel instances in training bags is
helpful to the detection of such instances in testing stage.
| Qi Lou, Raviv Raich, Forrest Briggs, Xiaoli Z. Fern | 10.1109/MLSP.2013.6661985 | 1311.6211 | null | null |
Learning Reputation in an Authorship Network | cs.SI cs.IR cs.LG stat.ML | The problem of searching for experts in a given academic field is hugely
important in both industry and academia. We study exactly this issue with
respect to a database of authors and their publications. The idea is to use
Latent Semantic Indexing (LSI) and Latent Dirichlet Allocation (LDA) to perform
topic modelling in order to find authors who have worked in a query field. We
then construct a coauthorship graph and motivate the use of influence
maximisation and a variety of graph centrality measures to obtain a ranked list
of experts. The ranked lists are further improved using a Markov Chain-based
rank aggregation approach. The complete method is readily scalable to large
datasets. To demonstrate the efficacy of the approach we report on an extensive
set of computational simulations using the Arnetminer dataset. An improvement
in mean average precision is demonstrated over the baseline case of simply
using the order of authors found by the topic models.
| Charanpal Dhanjal (LTCI), St\'ephan Cl\'emen\c{c}on (LTCI) | null | 1311.6334 | null | null |
Exploration in Interactive Personalized Music Recommendation: A
Reinforcement Learning Approach | cs.MM cs.IR cs.LG | Current music recommender systems typically act in a greedy fashion by
recommending songs with the highest user ratings. Greedy recommendation,
however, is suboptimal over the long term: it does not actively gather
information on user preferences and fails to recommend novel songs that are
potentially interesting. A successful recommender system must balance the needs
to explore user preferences and to exploit this information for recommendation.
This paper presents a new approach to music recommendation by formulating this
exploration-exploitation trade-off as a reinforcement learning task called the
multi-armed bandit. To learn user preferences, it uses a Bayesian model, which
accounts for both audio content and the novelty of recommendations. A
piecewise-linear approximation to the model and a variational inference
algorithm are employed to speed up Bayesian inference. One additional benefit
of our approach is a single unified model for both music recommendation and
playlist generation. Both simulation results and a user study indicate strong
potential for the new approach.
| Xinxi Wang, Yi Wang, David Hsu, Ye Wang | null | 1311.6355 | null | null |
On Approximate Inference for Generalized Gaussian Process Models | stat.ML cs.CV cs.LG | A generalized Gaussian process model (GGPM) is a unifying framework that
encompasses many existing Gaussian process (GP) models, such as GP regression,
classification, and counting. In the GGPM framework, the observation likelihood
of the GP model is itself parameterized using the exponential family
distribution (EFD). In this paper, we consider efficient algorithms for
approximate inference on GGPMs using the general form of the EFD. A particular
GP model and its associated inference algorithms can then be formed by changing
the parameters of the EFD, thus greatly simplifying its creation for
task-specific output domains. We demonstrate the efficacy of this framework by
creating several new GP models for regressing to non-negative reals and to real
intervals. We also consider a closed-form Taylor approximation for efficient
inference on GGPMs, and elaborate on its connections with other model-specific
heuristic closed-form approximations. Finally, we present a comprehensive set
of experiments to compare approximate inference algorithms on a wide variety of
GGPMs.
| Lifeng Shang and Antoni B. Chan | null | 1311.6371 | null | null |
A Comprehensive Approach to Universal Piecewise Nonlinear Regression
Based on Trees | cs.LG stat.ML | In this paper, we investigate adaptive nonlinear regression and introduce
tree based piecewise linear regression algorithms that are highly efficient and
provide significantly improved performance with guaranteed upper bounds in an
individual sequence manner. We use a tree notion in order to partition the
space of regressors in a nested structure. The introduced algorithms adapt not
only their regression functions but also the complete tree structure while
achieving the performance of the "best" linear mixture of a doubly exponential
number of partitions, with a computational complexity only polynomial in the
number of nodes of the tree. While constructing these algorithms, we also avoid
using any artificial "weighting" of models (with highly data dependent
parameters) and, instead, directly minimize the final regression error, which
is the ultimate performance goal. The introduced methods are generic such that
they can readily incorporate different tree construction methods such as random
trees in their framework and can use different regressor or partitioning
functions as demonstrated in the paper.
| N. Denizcan Vanli and Suleyman S. Kozat | null | 1311.6392 | null | null |
A Unified Approach to Universal Prediction: Generalized Upper and Lower
Bounds | cs.LG | We study sequential prediction of real-valued, arbitrary and unknown
sequences under the squared error loss as well as the best parametric predictor
out of a large, continuous class of predictors. Inspired by recent results from
computational learning theory, we refrain from any statistical assumptions and
define the performance with respect to the class of general parametric
predictors. In particular, we present generic lower and upper bounds on this
relative performance by transforming the prediction task into a parameter
learning problem. We first introduce the lower bounds on this relative
performance in the mixture of experts framework, where we show that for any
sequential algorithm, there always exists a sequence for which the performance
of the sequential algorithm is lower bounded by zero. We then introduce a
sequential learning algorithm to predict such arbitrary and unknown sequences,
and calculate upper bounds on its total squared prediction error for every
bounded sequence. We further show that in some scenarios we achieve matching
lower and upper bounds demonstrating that our algorithms are optimal in a
strong minimax sense such that their performances cannot be improved further.
As an interesting result we also prove that for the worst-case scenario, the
performance of randomized algorithms can be achieved by sequential algorithms,
so that randomization does not improve the performance.
| N. Denizcan Vanli and Suleyman S. Kozat | null | 1311.6396 | null | null |
Robust Multimodal Graph Matching: Sparse Coding Meets Graph Matching | math.OC cs.LG stat.ML | Graph matching is a challenging problem with very important applications in a
wide range of fields, from image and video analysis to biological and
biomedical problems. We propose a robust graph matching algorithm inspired by
sparsity-related techniques. We cast the problem, resembling group or
collaborative sparsity formulations, as a non-smooth convex optimization
problem that can be efficiently solved using augmented Lagrangian techniques.
The method can deal with weighted or unweighted graphs, as well as multimodal
data, where different graphs represent different types of data. The proposed
approach is also naturally integrated with collaborative graph inference
techniques, solving general network inference problems where the observed
variables, possibly coming from different modalities, are not in
correspondence. The algorithm is tested and compared with state-of-the-art
graph matching techniques in both synthetic and real graphs. We also present
results on multimodal graphs and applications to collaborative inference of
brain connectivity from alignment-free functional magnetic resonance imaging
(fMRI) data. The code is publicly available.
| Marcelo Fiori, Pablo Sprechmann, Joshua Vogelstein, Pablo Mus\'e,
Guillermo Sapiro | null | 1311.6425 | null | null |
Are all training examples equally valuable? | cs.CV cs.LG stat.ML | When learning a new concept, not all training examples may prove equally
useful for training: some may have higher or lower training value than others.
The goal of this paper is to bring to the attention of the vision community the
following considerations: (1) some examples are better than others for training
detectors or classifiers, and (2) in the presence of better examples, some
examples may negatively impact performance and removing them may be beneficial.
In this paper, we propose an approach for measuring the training value of an
example, and use it for ranking and greedily sorting examples. We test our
methods on different vision tasks, models, datasets and classifiers. Our
experiments show that the performance of current state-of-the-art detectors and
classifiers can be improved when training on a subset, rather than the whole
training set.
| Agata Lapedriza and Hamed Pirsiavash and Zoya Bylinskii and Antonio
Torralba | null | 1311.6510 | null | null |
Universal Codes from Switching Strategies | cs.IT cs.LG math.IT | We discuss algorithms for combining sequential prediction strategies, a task
which can be viewed as a natural generalisation of the concept of universal
coding. We describe a graphical language based on Hidden Markov Models for
defining prediction strategies, and we provide both existing and new models as
examples. The models include efficient, parameterless models for switching
between the input strategies over time, including a model for the case where
switches tend to occur in clusters, and finally a new model for the scenario
where the prediction strategies have a known relationship, and where jumps are
typically between strongly related ones. This last model is relevant for coding
time series data where parameter drift is expected. As theoretical contributions
we introduce an interpolation construction that is useful in the development
and analysis of new algorithms, and we establish a new sophisticated lemma for
analysing the individual sequence regret of parameterised models.
| Wouter M. Koolen and Steven de Rooij | 10.1109/TIT.2013.2273353 | 1311.6536 | null | null |
Practical Inexact Proximal Quasi-Newton Method with Global Complexity
Analysis | cs.LG math.OC stat.ML | Recently several methods were proposed for sparse optimization which make
careful use of second-order information [10, 28, 16, 3] to improve local
convergence rates. These methods construct a composite quadratic approximation
using Hessian information, optimize this approximation using a first-order
method, such as coordinate descent, and employ a line search to ensure
sufficient descent. Here we propose a general framework, which includes
slightly modified versions of existing algorithms and also a new algorithm,
which uses limited memory BFGS Hessian approximations, and provide a novel
global convergence rate analysis, which covers methods that solve subproblems
via coordinate descent.
| Katya Scheinberg and Xiaocheng Tang | null | 1311.6547 | null | null |
Double Ramp Loss Based Reject Option Classifier | cs.LG | We consider the problem of learning reject option classifiers. The goodness
of a reject option classifier is quantified using $0-d-1$ loss function wherein
a loss $d \in (0, .5)$ is assigned for rejection. In this paper, we propose the {\em
double ramp loss} function, which gives a continuous upper bound for the $0-d-1$
loss. Our approach is based on minimizing regularized risk under the double
ramp loss using {\em difference of convex (DC) programming}. We show the
effectiveness of our approach through experiments on synthetic and benchmark
datasets. Our approach performs better than the state-of-the-art reject option
classification approaches.
| Naresh Manwani, Kalpit Desai, Sanand Sasidharan, Ramasubramanian
Sundararajan | 10.1007/978-3-319-57454-7_53 | 1311.6556 | null | null |
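One way to write a double-ramp surrogate consistent with the abstract above is as two shifted ramps, one weighted d and one weighted (1 - d): the surrogate equals roughly 1 for confidently wrong predictions, d inside the rejection band, and 0 for confident correct ones. The parameterization (rho, mu) below is an illustrative assumption, not copied from the paper.

```python
# A double-ramp surrogate for the 0-d-1 reject-option loss (reject when
# |f(x)| < rho); rho and mu are illustrative.
import numpy as np

ramp = lambda z: np.clip(1.0 - z, 0.0, 1.0)             # one-sided ramp

def double_ramp_loss(margin, d=0.2, rho=1.0, mu=1.0):
    """margin = y * f(x); continuous upper bound on the 0-d-1 loss."""
    return d * ramp((margin - rho) / mu) + (1 - d) * ramp((margin + rho) / mu)

m = np.linspace(-3, 3, 7)
print(np.round(double_ramp_loss(m), 3))
# ~1 when confidently wrong, ~d inside the rejection band, ~0 when correct
```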
Auto-adaptative Laplacian Pyramids for High-dimensional Data Analysis | cs.AI cs.LG stat.ML | Non-linear dimensionality reduction techniques such as manifold learning
algorithms have become a common way for processing and analyzing
high-dimensional patterns that often have attached a target that corresponds to
the value of an unknown function. Their application to new points consists in
two steps: first, embedding the new data point into the low dimensional space
and then, estimating the function value on the test point from its neighbors in
the embedded space.
However, finding the low-dimensional representation of a test point, while easy
for simple but often not powerful enough procedures such as PCA, can be much
more complicated for methods that rely on some kind of eigenanalysis, such as
Spectral Clustering (SC) or Diffusion Maps (DM). Similarly, when a target
function is to be evaluated, averaging methods like nearest neighbors may give
unstable results if the function is noisy. Thus, the smoothing of the target
function with respect to the intrinsic, low-dimensional representation that
describes the geometric structure of the examined data is a challenging task.
In this paper we propose Auto-adaptive Laplacian Pyramids (ALP), an extension
of the standard Laplacian Pyramids model that incorporates a modified LOOCV
procedure that avoids the large cost of the standard one and offers the
following advantages: (i) it selects automatically the optimal function
resolution (stopping time) adapted to the data and its noise, (ii) it is easy
to apply as it does not require parameterization, (iii) it does not overfit the
training set and (iv) it adds no extra cost compared to other classical
interpolation methods. We illustrate numerically ALP's behavior on a synthetic
problem and apply it to the computation of the DM projection of new patterns
and to the extension to them of target function values on a radiation
forecasting problem over very high dimensional patterns.
| \'Angela Fern\'andez, Neta Rabin, Dalia Fishelov, Jos\'e R. Dorronsoro | null | 1311.6594 | null | null |
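The pyramid-plus-stopping idea above can be sketched compactly: smooth the target with a wide Gaussian kernel, then repeatedly smooth the residual with halved bandwidths, and select the stopping level with a cheap LOOCV-style error computed by zeroing the kernel diagonal so each point is predicted from its neighbors only. The paper's modified LOOCV is in this spirit but not reproduced exactly; `eps0` and the number of levels are illustrative.

```python
# Laplacian-pyramid regression with an automatic (LOOCV-style) stopping rule.
import numpy as np

def smoother(X, eps, loo=False):
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / eps)
    if loo:
        np.fill_diagonal(K, 0.0)        # predict each point from its neighbors
    return K / (K.sum(1, keepdims=True) + 1e-12)

def alp_fit(X, f, eps0=10.0, levels=12):
    approx = np.zeros_like(f); loo_pred = np.zeros_like(f)
    best = (np.inf, approx, 0)
    for level in range(levels):
        eps = eps0 / 2 ** level
        resid = f - approx
        approx = approx + smoother(X, eps) @ resid          # refine fit
        loo_pred = loo_pred + smoother(X, eps, loo=True) @ resid
        err = np.mean((f - loo_pred) ** 2)                  # LOOCV-style error
        if err < best[0]:
            best = (err, approx.copy(), level)
    return best

rng = np.random.default_rng(0)
X = rng.random((300, 2))
f = np.sin(4 * X[:, 0]) * np.cos(3 * X[:, 1]) + 0.1 * rng.standard_normal(300)
err, fit, level = alp_fit(X, f)
print("stopped at level %d with LOOCV mse %.4f" % (level, err))
```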
Recommending with an Agenda: Active Learning of Private Attributes using
Matrix Factorization | cs.LG cs.CY | Recommender systems leverage user demographic information, such as age,
gender, etc., to personalize recommendations and better place their targeted
ads. Oftentimes, users do not volunteer this information due to privacy
concerns, or due to a lack of initiative in filling out their online profiles.
We illustrate a new threat in which a recommender learns private attributes of
users who do not voluntarily disclose them. We design both passive and active
attacks that solicit ratings for strategically selected items, and could thus
be used by a recommender system to pursue this hidden agenda. Our methods are
based on a novel usage of Bayesian matrix factorization in an active learning
setting. Evaluations on multiple datasets illustrate that such attacks are
indeed feasible and use significantly fewer rated items than static inference
methods. Importantly, they succeed without sacrificing the quality of
recommendations to users.
| Smriti Bhagat, Udi Weinsberg, Stratis Ioannidis, Nina Taft | null | 1311.6802 | null | null |
A Novel Family of Adaptive Filtering Algorithms Based on The Logarithmic
Cost | cs.LG | We introduce a novel family of adaptive filtering algorithms based on a
relative logarithmic cost. The new family intrinsically combines the higher and
lower order measures of the error into a single continuous update based on the
error amount. We introduce important members of this family of algorithms such
as the least mean logarithmic square (LMLS) and least logarithmic absolute
difference (LLAD) algorithms that improve the convergence performance of the
conventional algorithms. However, our approach and analysis are generic such
that they cover other well-known cost functions as described in the paper. The
LMLS algorithm achieves comparable convergence performance with the least mean
fourth (LMF) algorithm and extends the stability bound on the step size. The
LLAD and least mean square (LMS) algorithms demonstrate similar convergence
performance in impulse-free noise environments while the LLAD algorithm is
robust against impulsive interferences and outperforms the sign algorithm (SA).
We analyze the transient, steady-state and tracking performance of the
introduced algorithms and demonstrate the match between the theoretical analyses and
simulation results. We show the extended stability bound of the LMLS algorithm
and analyze the robustness of the LLAD algorithm against impulsive
interferences. Finally, we demonstrate the performance of our algorithms in
different scenarios through numerical examples.
| Muhammed O. Sayin, N. Denizcan Vanli, Suleyman S. Kozat | 10.1109/TSP.2014.2333559 | 1311.6809 | null | null |
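A plausible instantiation of the updates described above, consistent with the abstract (LMLS interpolating between LMS and LMF behavior, LLAD interpolating between the sign algorithm and LMS): the LMLS step scales the LMS update by e^2/(1 + e^2), while the LLAD step scales the sign-algorithm update by |e|/(1 + |e|), which suppresses impulsive errors. The exact normalization in the paper may differ, and the step sizes are illustrative.

```python
# Logarithmic-cost adaptive filters: LMLS and LLAD updates on a noisy
# linear system with occasional impulsive interference.
import numpy as np

def lmls(X, d, mu=0.05):
    w = np.zeros(X.shape[1])
    for x, dn in zip(X, d):
        e = dn - w @ x
        w += mu * x * e * (e**2 / (1 + e**2))       # least mean log square
    return w

def llad(X, d, mu=0.05):
    w = np.zeros(X.shape[1])
    for x, dn in zip(X, d):
        e = dn - w @ x
        w += mu * x * np.sign(e) * (abs(e) / (1 + abs(e)))  # log absolute diff
    return w

np.random.seed(0)
w_true = np.array([1.0, -0.5, 0.25])
X = np.random.randn(5000, 3)
noise = 0.01 * np.random.randn(5000)
noise[::100] += 10 * np.random.randn(50)            # impulsive interference
d = X @ w_true + noise
print("LMLS:", np.round(lmls(X, d), 3), " LLAD:", np.round(llad(X, d), 3))
```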
Semi-Supervised Sparse Coding | stat.ML cs.LG | Sparse coding approximates a data sample as a sparse linear combination of
some basic codewords and uses the sparse codes as new representations. In this
paper, we investigate learning discriminative sparse codes by sparse coding in
a semi-supervised manner, where only a few training samples are labeled. By
using the manifold structure spanned by the data set of both labeled and
unlabeled samples and the constraints provided by the labels of the labeled
samples, we learn the variable class labels for all the samples. Furthermore,
to improve the discriminative ability of the learned sparse codes, we assume
that the class labels could be predicted from the sparse codes directly using a
linear classifier. By solving the codebook, sparse codes, class labels and
classifier parameters simultaneously in a unified objective function, we
develop a semi-supervised sparse coding algorithm. Experiments on two
real-world pattern recognition problems demonstrate the advantage of the
proposed methods over supervised sparse coding methods on partially labeled
data sets.
| Jim Jing-Yan Wang and Xin Gao | 10.1109/IJCNN.2014.6889449 | 1311.6834 | null | null |
Learning Prices for Repeated Auctions with Strategic Buyers | cs.LG cs.GT | Inspired by real-time ad exchanges for online display advertising, we
consider the problem of inferring a buyer's value distribution for a good when
the buyer is repeatedly interacting with a seller through a posted-price
mechanism. We model the buyer as a strategic agent, whose goal is to maximize
her long-term surplus, and we are interested in mechanisms that maximize the
seller's long-term revenue. We define the natural notion of strategic regret
--- the lost revenue as measured against a truthful (non-strategic) buyer. We
present seller algorithms that are no-(strategic)-regret when the buyer
discounts her future surplus --- i.e. the buyer prefers showing advertisements
to users sooner rather than later. We also give a lower bound on strategic
regret that increases as the buyer's discounting weakens and shows, in
particular, that any seller algorithm will suffer linear strategic regret if
there is no discounting.
| Kareem Amin, Afshin Rostamizadeh, Umar Syed | null | 1311.6838 | null | null |
Color and Shape Content Based Image Classification using RBF Network and
PSO Technique: A Survey | cs.CV cs.LG cs.NE | Image classification, a well-known supervised learning technique, is used to
improve the accuracy of image query retrieval: a better classification method
increases the working efficiency of retrieval. To improve the classification
step, we use an RBF neural network for better prediction of the features used
in image retrieval. Colour content is represented by pixel values in image
classification using the radial basis function (RBF) technique, which provides
better results than the SVM technique for image representation. An image is
represented as a matrix through the RBF using the pixel values of its colour
intensities. We first use the RGB colour model, in which the red, green and
blue intensity values form the matrix. SVM with particle swarm optimization
for image classification is also implemented on image content; results based
on the proposed approach are found encouraging in terms of colour image
classification accuracy.
| Abhishek Pandey, Anjna Jayant Deen and Rajeev Pandey (Dept. of CSE,
UIT-RGPV) | null | 1311.6881 | null | null |
Dimensionality reduction for click-through rate prediction: Dense versus
sparse representation | stat.ML cs.LG stat.AP stat.ME | In online advertising, display ads are increasingly being placed based on
real-time auctions where the advertiser who wins gets to serve the ad. This is
called real-time bidding (RTB). In RTB, auctions have very tight time
constraints on the order of 100ms. Therefore mechanisms for bidding
intelligently such as clickthrough rate prediction need to be sufficiently
fast. In this work, we propose to use dimensionality reduction of the
user-website interaction graph in order to produce simplified features of users
and websites that can be used as predictors of clickthrough rate. We
demonstrate that the Infinite Relational Model (IRM) as a dimensionality
reduction offers comparable predictive performance to conventional
dimensionality reduction schemes, while achieving the most economical usage of
features and fastest computations at run-time. For applications such as
real-time bidding, where fast database I/O and few computations are key to
success, we thus recommend using IRM based features as predictors to exploit
the recommender effects from bipartite graphs.
| Bjarne {\O}rum Fruergaard, Toke Jansen Hansen, Lars Kai Hansen | null | 1311.6976 | null | null |
Sparse Linear Dynamical System with Its Application in Multivariate
Clinical Time Series | cs.AI cs.LG stat.ML | Linear Dynamical System (LDS) is an elegant mathematical framework for
modeling and learning multivariate time series. However, in general, it is
difficult to set the dimension of its hidden state space. A small number of
hidden states may not be able to model the complexities of a time series, while
a large number of hidden states can lead to overfitting. In this paper, we
study methods that impose an $\ell_1$ regularization on the transition matrix
of an LDS model to alleviate the problem of choosing the optimal number of
hidden states. We incorporate a generalized gradient descent method into the
Maximum a Posteriori (MAP) framework and use Expectation Maximization (EM) to
iteratively achieve sparsity on the transition matrix of an LDS model. We show
that our Sparse Linear Dynamical System (SLDS) improves the predictive
performance when compared to ordinary LDS on a multivariate clinical time
series dataset.
| Zitao Liu and Milos Hauskrecht | null | 1311.7071 | null | null |
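As a rough illustration of the $\ell_1$-regularized transition-matrix estimation described above, the following sketch runs a proximal (generalized) gradient M-step on sufficient statistics. For brevity the toy usage assumes a fully observed state, whereas the paper obtains these statistics from the E-step of EM; the names and values here are assumptions.

```python
import numpy as np

def soft_threshold(Z, t):
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def sparse_transition_mstep(P_lag, P_prev, lam, n_iter=500):
    """Proximal-gradient step for min_A 0.5*tr(A P_prev A^T) - tr(A P_lag^T)
    + lam*||A||_1, where P_lag ~ sum_t E[x_t x_{t-1}^T] and
    P_prev ~ sum_t E[x_{t-1} x_{t-1}^T] come from the E-step."""
    d = P_prev.shape[0]
    A = np.zeros((d, d))
    step = 1.0 / np.linalg.norm(P_prev, 2)       # safe step from the curvature
    for _ in range(n_iter):
        grad = A @ P_prev - P_lag
        A = soft_threshold(A - step * grad, step * lam)
    return A

# Toy usage on a simulated sparse linear system (fully observed for brevity)
rng = np.random.default_rng(0)
d, T = 6, 500
A_true = np.diag(np.full(d, 0.8))                # sparse ground-truth dynamics
X = np.zeros((T, d))
for t in range(1, T):
    X[t] = A_true @ X[t - 1] + 0.1 * rng.standard_normal(d)
A_hat = sparse_transition_mstep(X[1:].T @ X[:-1], X[:-1].T @ X[:-1], lam=1.0)
print("off-diagonal zeros:", np.sum((A_hat == 0) & ~np.eye(d, dtype=bool)))
```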
Using Multiple Samples to Learn Mixture Models | stat.ML cs.LG | In the mixture models problem it is assumed that there are $K$ distributions
$\theta_{1},\ldots,\theta_{K}$ and one gets to observe a sample from a mixture
of these distributions with unknown coefficients. The goal is to associate
instances with their generating distributions, or to identify the parameters of
the hidden distributions. In this work we make the assumption that we have
access to several samples drawn from the same $K$ underlying distributions, but
with different mixing weights. As with topic modeling, having multiple samples
is often a reasonable assumption. Instead of pooling the data into one sample,
we prove that it is possible to use the differences between the samples to
better recover the underlying structure. We present algorithms that recover the
underlying structure under milder assumptions than the current state of art
when either the dimensionality or the separation is high. The methods, when
applied to topic modeling, allow generalization to words not present in the
training data.
| Jason D Lee, Ran Gilad-Bachrach, and Rich Caruana | null | 1311.7184 | null | null |
ADMM Algorithm for Graphical Lasso with an $\ell_{\infty}$ Element-wise
Norm Constraint | cs.LG math.OC stat.ML | We consider the problem of Graphical lasso with an additional $\ell_{\infty}$
element-wise norm constraint on the precision matrix. This problem has
applications in high-dimensional covariance decomposition such as in
\citep{Janzamin-12}. We propose an ADMM algorithm to solve this problem. We
also use a continuation strategy on the penalty parameter to obtain a fast
implementation of the algorithm.
| Karthik Mohan | null | 1311.7198 | null | null |
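A minimal ADMM sketch for the problem described above, assuming the standard graphical lasso splitting with an element-wise soft-threshold-then-clip Z-update for the combined $\ell_1$ penalty and $\ell_{\infty}$ constraint. The variable names (S, lam, c, rho) and the penalized diagonal are our assumptions, not necessarily the paper's formulation.

```python
import numpy as np

def prox_logdet(V, S, rho):
    """Theta-update: argmin_T -logdet(T) + tr(S T) + (rho/2)||T - V||_F^2,
    solved in closed form through an eigendecomposition."""
    w, Q = np.linalg.eigh(rho * V - S)
    theta_w = (w + np.sqrt(w ** 2 + 4.0 * rho)) / (2.0 * rho)
    return Q @ np.diag(theta_w) @ Q.T

def admm_glasso_linf(S, lam, c, rho=1.0, n_iter=200):
    """Graphical lasso with an extra element-wise ||Theta||_inf <= c constraint."""
    p = S.shape[0]
    Z, U = np.eye(p), np.zeros((p, p))
    for _ in range(n_iter):
        Theta = prox_logdet(Z - U, S, rho)
        V = Theta + U
        # Z-update: soft-threshold for the l1 penalty, then clip into the l_inf ball
        Z = np.clip(np.sign(V) * np.maximum(np.abs(V) - lam / rho, 0.0), -c, c)
        U = U + Theta - Z
    return Z

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))
S = np.cov(X, rowvar=False)
Theta = admm_glasso_linf(S, lam=0.2, c=1.5)
print("largest |entry|:", np.abs(Theta).max())   # respects the l_inf bound
```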
Spatially-Adaptive Reconstruction in Computed Tomography using Neural
Networks | cs.CV cs.LG cs.NE | We propose a supervised machine learning approach for boosting existing
signal and image recovery methods and demonstrate its efficacy on example of
image reconstruction in computed tomography. Our technique is based on a local
nonlinear fusion of several image estimates, all obtained by applying a chosen
reconstruction algorithm with different values of its control parameters.
Usually such output images have different bias/variance trade-off. The fusion
of the images is performed by feed-forward neural network trained on a set of
known examples. Numerical experiments show an improvement in reconstruction
quality relatively to existing direct and iterative reconstruction methods.
| Joseph Shtok, Michael Zibulevsky and Michael Elad | null | 1311.7251 | null | null |
Algorithmic Identification of Probabilities | cs.LG | The problem is to identify a probability associated with a set of natural
numbers, given an infinite data sequence of elements from the set. If the given
sequence is drawn i.i.d. and the probability mass function involved (the
target) belongs to a computably enumerable (c.e.) or co-computably enumerable
(co-c.e.) set of computable probability mass functions, then there is an
algorithm to almost surely identify the target in the limit. The technical tool
is the strong law of large numbers. If the set is finite and the elements of
the sequence are dependent while the sequence is typical in the sense of
Martin-L\"of for at least one measure belonging to a c.e. or co-c.e. set of
computable measures, then there is an algorithm to identify in the limit a
computable measure for which the sequence is typical (there may be more than
one such measure). The technical tool is the theory of Kolmogorov complexity.
We give the algorithms and consider the associated predictions.
| Paul M.B. Vitanyi (CWI and University of Amsterdam, NL), Nick Chater
(University of Warwick, UK) | null | 1311.7385 | null | null |
The Power of Asymmetry in Binary Hashing | cs.LG cs.CV cs.IR | When approximating binary similarity using the Hamming distance between short
binary hashes, we show that even if the similarity is symmetric, we can have
shorter and more accurate hashes by using two distinct code maps; that is, by
approximating the similarity between $x$ and $x'$ as the Hamming distance
between $f(x)$ and $g(x')$, for two distinct binary codes $f,g$, rather than as
the Hamming distance between $f(x)$ and $f(x')$.
| Behnam Neyshabur, Payman Yadollahpour, Yury Makarychev, Ruslan
Salakhutdinov, Nathan Srebro | null | 1311.7662 | null | null |
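A toy sketch of the asymmetric idea: two distinct random linear code maps f and g (illustrative stand-ins for learned maps), with similarity approximated by the Hamming distance between f(x) and g(x').

```python
import numpy as np

rng = np.random.default_rng(0)
d, bits = 16, 8
Wf = rng.standard_normal((bits, d))      # code map f (random, for illustration)
Wg = rng.standard_normal((bits, d))      # a *different* code map g

def f(x):
    return (Wf @ x > 0).astype(int)      # short binary hash under f

def g(x):
    return (Wg @ x > 0).astype(int)      # short binary hash under g

x, xp = rng.standard_normal(d), rng.standard_normal(d)
# Asymmetric approximation: Hamming distance between f(x) and g(x')
print("asymmetric Hamming distance:", int(np.sum(f(x) != g(xp))))
```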
Combination of Diverse Ranking Models for Personalized Expedia Hotel
Searches | cs.LG | The ICDM Challenge 2013 is to apply machine learning to the problem of hotel
ranking, aiming to maximize purchases according to given hotel characteristics,
location attractiveness of hotels, user's aggregated purchase history and
competitive online travel agency information for each potential hotel choice.
This paper describes the solution of team "binghsu & MLRush & BrickMover". We
conduct simple feature engineering work and train different models by each
individual team member. Afterwards, we use listwise ensemble method to combine
each model's output. Besides describing effective model and features, we will
discuss about the lessons we learned while using deep learning in this
competition.
| Xudong Liu, Bing Xu, Yuyu Zhang, Qiang Yan, Liang Pang, Qiang Li,
Hanxiao Sun, Bin Wang | null | 1311.7679 | null | null |
Stochastic Optimization of Smooth Loss | cs.LG | In this paper, we first prove a high probability bound rather than an
expectation bound for stochastic optimization with smooth loss. Furthermore,
the existing analysis requires the knowledge of optimal classifier for tuning
the step size in order to achieve the desired bound. However, this information
is usually not accessible in advance. We also propose a strategy to address
the limitation.
| Rong Jin | null | 1312.0048 | null | null |
One-Class Classification: Taxonomy of Study and Review of Techniques | cs.LG cs.AI | One-class classification (OCC) algorithms aim to build classification models
when the negative class is either absent, poorly sampled or not well defined.
This unique situation constrains the learning of efficient classifiers by
defining the class boundary just with the knowledge of the positive class. The OCC
problem has been considered and applied under many research themes, such as
outlier/novelty detection and concept learning. In this paper we present a
unified view of the general problem of OCC by presenting a taxonomy of study
for OCC problems, which is based on the availability of training data,
algorithms used and the application domains applied. We further delve into each
of the categories of the proposed taxonomy and present a comprehensive
literature review of the OCC algorithms, techniques and methodologies with a
focus on their significance, limitations and applications. We conclude our
paper by discussing some open research problems in the field of OCC and present
our vision for future research.
| Shehroz S.Khan, Michael G.Madden | 10.1017/S026988891300043X | 1312.0049 | null | null |
Efficient Learning and Planning with Compressed Predictive States | cs.LG stat.ML | Predictive state representations (PSRs) offer an expressive framework for
modelling partially observable systems. By compactly representing systems as
functions of observable quantities, the PSR learning approach avoids using
local-minima prone expectation-maximization and instead employs a globally
optimal moment-based algorithm. Moreover, since PSRs do not require a
predetermined latent state structure as an input, they offer an attractive
framework for model-based reinforcement learning when agents must plan without
a priori access to a system model. Unfortunately, the expressiveness of PSRs
comes with significant computational cost, and this cost is a major factor
inhibiting the use of PSRs in applications. In order to alleviate this
shortcoming, we introduce the notion of compressed PSRs (CPSRs). The CPSR
learning approach combines recent advancements in dimensionality reduction,
incremental matrix decomposition, and compressed sensing. We show how this
approach provides a principled avenue for learning accurate approximations of
PSRs, drastically reducing the computational costs associated with learning
while also providing effective regularization. Going further, we propose a
planning framework which exploits these learned models. And we show that this
approach facilitates model-learning and planning in large complex partially
observable domains, a task that is infeasible without the principled use of
compression.
| William L. Hamilton, Mahdi Milani Fard, and Joelle Pineau | null | 1312.0286 | null | null |
Practical Collapsed Stochastic Variational Inference for the HDP | cs.LG | Recent advances have made it feasible to apply the stochastic variational
paradigm to a collapsed representation of latent Dirichlet allocation (LDA).
While the stochastic variational paradigm has successfully been applied to an
uncollapsed representation of the hierarchical Dirichlet process (HDP), no
attempts to apply this type of inference in a collapsed setting of
non-parametric topic modeling have been put forward so far. In this paper we
explore such a collapsed stochastic variational Bayes inference for the HDP.
The proposed online algorithm is easy to implement and accounts for the
inference of hyper-parameters. First experiments show a promising improvement
in predictive performance.
| Arnim Bleier | null | 1312.0412 | null | null |
Consistency of weighted majority votes | math.PR cs.LG stat.ML | We revisit the classical decision-theoretic problem of weighted expert voting
from a statistical learning perspective. In particular, we examine the
consistency (both asymptotic and finitary) of the optimal Nitzan-Paroush
weighted majority and related rules. In the case of known expert competence
levels, we give sharp error estimates for the optimal rule. When the competence
levels are unknown, they must be empirically estimated. We provide frequentist
and Bayesian analyses for this situation. Some of our proof techniques are
non-standard and may be of independent interest. The bounds we derive are
nearly optimal, and several challenging open problems are posed. Experimental
results are provided to illustrate the theory.
| Daniel Berend and Aryeh Kontorovich | null | 1312.0451 | null | null |
Bidirectional Recursive Neural Networks for Token-Level Labeling with
Structure | cs.LG cs.CL stat.ML | Recently, deep architectures, such as recurrent and recursive neural networks
have been successfully applied to various natural language processing tasks.
Inspired by bidirectional recurrent neural networks which use representations
that summarize the past and future around an instance, we propose a novel
architecture that aims to capture the structural information around an input,
and use it to label instances. We apply our method to the task of opinion
expression extraction, where we employ the binary parse tree of a sentence as
the structure, and word vector representations as the initial representation of
a single token. We conduct preliminary experiments to investigate its
performance and compare it to the sequential approach.
| Ozan \.Irsoy, Claire Cardie | null | 1312.0493 | null | null |
Sensing-Aware Kernel SVM | cs.LG | We propose a novel approach for designing kernels for support vector machines
(SVMs) when the class label is linked to the observation through a latent state
and the likelihood function of the observation given the state (the sensing
model) is available. We show that the Bayes-optimum decision boundary is a
hyperplane under a mapping defined by the likelihood function. Combining this
with the maximum margin principle yields kernels for SVMs that leverage
knowledge of the sensing model in an optimal way. We derive the optimum kernel
for the bag-of-words (BoWs) sensing model and demonstrate its superior
performance over other kernels in document and image classification tasks.
These results indicate that such optimum sensing-aware kernel SVMs can match
the performance of rather sophisticated state-of-the-art approaches.
| Weicong Ding, Prakash Ishwar, Venkatesh Saligrama, W. Clem Karl | null | 1312.0512 | null | null |
Grid Topology Identification using Electricity Prices | cs.LG cs.SY stat.AP stat.ML | The potential of recovering the topology of a grid using solely publicly
available market data is explored here. In contemporary wholesale electricity
markets, real-time prices are typically determined by solving the
network-constrained economic dispatch problem. Under a linear DC model,
locational marginal prices (LMPs) correspond to the Lagrange multipliers of the
linear program involved. The interesting observation here is that the matrix of
spatiotemporally varying LMPs exhibits the following property: Once
premultiplied by the weighted grid Laplacian, it yields a low-rank and sparse
matrix. Leveraging this rich structure, a regularized maximum likelihood
estimator (MLE) is developed to recover the grid Laplacian from the LMPs. The
convex optimization problem formulated includes low rank- and
sparsity-promoting regularizers, and it is solved using a scalable algorithm.
Numerical tests on prices generated for the IEEE 14-bus benchmark provide
encouraging topology recovery results.
| Vassilis Kekatos, Georgios B. Giannakis, Ross Baldick | null | 1312.0516 | null | null |
SpeedMachines: Anytime Structured Prediction | cs.LG | Structured prediction plays a central role in machine learning applications
from computational biology to computer vision. These models require
significantly more computation than unstructured models, and, in many
applications, algorithms may need to make predictions within a computational
budget or in an anytime fashion. In this work we propose an anytime technique
for learning structured prediction that, at training time, incorporates both
structural elements and feature computation trade-offs that affect test-time
inference. We apply our technique to the challenging problem of scene
understanding in computer vision and demonstrate efficient and anytime
predictions that gradually improve towards state-of-the-art classification
performance as the allotted time increases.
| Alexander Grubb, Daniel Munoz, J. Andrew Bagnell, Martial Hebert | null | 1312.0579 | null | null |
Efficient coordinate-descent for orthogonal matrices through Givens
rotations | cs.LG stat.ML | Optimizing over the set of orthogonal matrices is a central component in
problems like sparse-PCA or tensor decomposition. Unfortunately, such
optimization is hard since simple operations on orthogonal matrices easily
break orthogonality, and correcting orthogonality usually costs a large amount
of computation. Here we propose a framework for optimizing orthogonal matrices,
that is the parallel of coordinate-descent in Euclidean spaces. It is based on
{\em Givens-rotations}, a fast-to-compute operation that affects a small number
of entries in the learned matrix, and preserves orthogonality. We show two
applications of this approach: an algorithm for tensor decomposition that is
used in learning mixture models, and an algorithm for sparse-PCA. We study the
parameter regime where a Givens rotation approach converges faster and achieves
a superior model on a genome-wide brain-wide mRNA expression dataset.
| Uri Shalit and Gal Chechik | null | 1312.0624 | null | null |
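The following sketch illustrates the Givens-rotation coordinate step: each update rotates one pair of columns of an orthogonal matrix, which preserves orthogonality exactly. The grid search over angles and the toy objective are illustrative assumptions (the paper's applications are tensor decomposition and sparse-PCA).

```python
import numpy as np

def givens(n, i, j, theta):
    """n-by-n Givens rotation acting on coordinates (i, j)."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = c; G[j, j] = c
    G[i, j] = -s; G[j, i] = s
    return G

def rotate_step(W, objective, i, j):
    """Coordinate step: pick the angle for pair (i, j) that most improves
    the objective; W @ G stays exactly orthogonal since G is orthogonal."""
    n = W.shape[1]
    thetas = np.linspace(-np.pi / 4, np.pi / 4, 33)
    best = min(thetas, key=lambda t: objective(W @ givens(n, i, j, t)))
    return W @ givens(n, i, j, best)

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); C = A @ A.T
obj = lambda W: -np.sum(np.diag(W.T @ C @ W) * np.arange(5))  # toy objective
W = np.eye(5)
for i in range(5):
    for j in range(i + 1, 5):
        W = rotate_step(W, obj, i, j)
print("orthogonality error:", np.linalg.norm(W.T @ W - np.eye(5)))
```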
Image Representation Learning Using Graph Regularized Auto-Encoders | cs.LG | We consider the problem of image representation for the tasks of unsupervised
learning and semi-supervised learning. In those learning tasks, the raw image
vectors may not provide enough representation for their intrinsic structures
due to their highly dense feature space. To overcome this problem, the raw
image vectors should be mapped to a proper representation space which can
capture the latent structure of the original data and represent the data
explicitly for further learning tasks such as clustering.
Inspired by recent research on deep neural networks and representation
learning, in this paper we introduce the multiple-layer auto-encoder for
image representation. We also apply the locally invariant idea to our image
representation with auto-encoders and propose a novel method, called Graph
regularized Auto-Encoder (GAE). GAE provides a compact representation which
uncovers the hidden semantics and simultaneously respects the intrinsic
geometric structure.
Extensive experiments on image clustering show encouraging results of the
proposed algorithm in comparison to the state-of-the-art algorithms on
real-world cases.
| Yiyi Liao, Yue Wang, Yong Liu | null | 1312.0786 | null | null |
Test Set Selection using Active Information Acquisition for Predictive
Models | cs.AI cs.LG stat.ML | In this paper, we consider active information acquisition when the prediction
model is meant to be applied on a targeted subset of the population. The goal
is to label a pre-specified fraction of customers in the target or test set by
iteratively querying for information from the non-target or training set. The
number of queries is limited by an overall budget. Arising in the context of
two rather disparate applications, banking and medical diagnosis, we pose the
active information acquisition problem as a constrained optimization problem.
We propose two greedy iterative algorithms for solving the above problem. We
conduct experiments with synthetic data and compare results of our proposed
algorithms with few other baseline approaches. The experimental results show
that our proposed approaches perform better than the baseline schemes.
| Sneha Chaudhari, Pankaj Dayama, Vinayaka Pandit, Indrajit Bhattacharya | null | 1312.0790 | null | null |
Understanding Alternating Minimization for Matrix Completion | cs.LG cs.DS stat.ML | Alternating Minimization is a widely used and empirically successful
heuristic for matrix completion and related low-rank optimization problems.
Theoretical guarantees for Alternating Minimization have been hard to come by
and are still poorly understood. This is in part because the heuristic is
iterative and non-convex in nature. We give a new algorithm based on
Alternating Minimization that provably recovers an unknown low-rank matrix from
a random subsample of its entries under a standard incoherence assumption. Our
results reduce the sample size requirements of the Alternating Minimization
approach by at least a quartic factor in the rank and the condition number of
the unknown matrix. These improvements apply even if the matrix is only close
to low-rank in the Frobenius norm. Our algorithm runs in nearly linear time in
the dimension of the matrix and, in a broad range of parameters, gives the
strongest sample bounds among all subquadratic time algorithms that we are
aware of.
Underlying our work is a new robust convergence analysis of the well-known
Power Method for computing the dominant singular vectors of a matrix. This
viewpoint leads to a conceptually simple understanding of Alternating
Minimization. In addition, we contribute a new technique for controlling the
coherence of intermediate solutions arising in iterative algorithms based on a
smoothed analysis of the QR factorization. These techniques may be of interest
beyond their application here.
| Moritz Hardt | null | 1312.0925 | null | null |
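For intuition, here is a sketch of plain Alternating Minimization for matrix completion, alternating least-squares fits over the observed entries; the paper's provable variant adds ingredients (e.g., coherence control via a smoothed QR step) that are omitted here.

```python
import numpy as np

def altmin_complete(M_obs, mask, r, n_iter=50):
    """Recover a rank-r matrix from observed entries by alternating
    least squares. M_obs: matrix with zeros at unobserved entries;
    mask: boolean matrix of observed positions."""
    m, n = M_obs.shape
    rng = np.random.default_rng(0)
    U = np.linalg.qr(rng.standard_normal((m, r)))[0]   # random orthonormal start
    V = np.zeros((n, r))
    for _ in range(n_iter):
        for j in range(n):                             # fix U, solve for V
            rows = mask[:, j]
            V[j] = np.linalg.lstsq(U[rows], M_obs[rows, j], rcond=None)[0]
        for i in range(m):                             # fix V, solve for U
            cols = mask[i, :]
            U[i] = np.linalg.lstsq(V[cols], M_obs[i, cols], rcond=None)[0]
    return U @ V.T

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 5)) @ rng.standard_normal((5, 20))  # rank-5 target
mask = rng.random(A.shape) < 0.5                                 # observe ~50%
M_hat = altmin_complete(A * mask, mask, r=5)
print("relative error:", np.linalg.norm(M_hat - A) / np.linalg.norm(A))
```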
Analysis of Distributed Stochastic Dual Coordinate Ascent | cs.DC cs.LG | In \citep{Yangnips13}, the author presented distributed stochastic dual
coordinate ascent (DisDCA) algorithms for solving large-scale regularized loss
minimization. Extraordinary performance has been observed and reported for
the well-motivated updates, referred to as the practical updates, compared to
the naive updates. However, no serious analysis has been provided to understand
the updates and therefore the convergence rates. In the paper, we bridge the
gap by providing a theoretical analysis of the convergence rates of the
practical DisDCA algorithm. Our analysis, aided by empirical studies, shows
that it can yield an exponential speed-up in convergence by increasing
the number of dual updates at each iteration. This result justifies the
superior performances of the practical DisDCA as compared to the naive variant.
As a byproduct, our analysis also reveals the convergence behavior of the
one-communication DisDCA.
| Tianbao Yang, Shenghuo Zhu, Rong Jin, Yuanqing Lin | null | 1312.1031 | null | null |
Faster and Sample Near-Optimal Algorithms for Proper Learning Mixtures
of Gaussians | cs.DS cs.LG math.PR math.ST stat.TH | We provide an algorithm for properly learning mixtures of two
single-dimensional Gaussians without any separability assumptions. Given
$\tilde{O}(1/\varepsilon^2)$ samples from an unknown mixture, our algorithm
outputs a mixture that is $\varepsilon$-close in total variation distance, in
time $\tilde{O}(1/\varepsilon^5)$. Our sample complexity is optimal up to
logarithmic factors, and significantly improves upon both Kalai et al., whose
algorithm has a prohibitive dependence on $1/\varepsilon$, and Feldman et al.,
whose algorithm requires bounds on the mixture parameters and depends
pseudo-polynomially in these parameters.
One of our main contributions is an improved and generalized algorithm for
selecting a good candidate distribution from among competing hypotheses.
Namely, given a collection of $N$ hypotheses containing at least one candidate
that is $\varepsilon$-close to an unknown distribution, our algorithm outputs a
candidate which is $O(\varepsilon)$-close to the distribution. The algorithm
requires ${O}(\log{N}/\varepsilon^2)$ samples from the unknown distribution and
${O}(N \log N/\varepsilon^2)$ time, which improves previous such results (such
as the Scheff\'e estimator) from a quadratic dependence of the running time on
$N$ to quasilinear. Given the wide use of such results for the purpose of
hypothesis selection, our improved algorithm implies immediate improvements to
any such use.
| Constantinos Daskalakis, Gautam Kamath | null | 1312.1054 | null | null |
Multiscale Dictionary Learning for Estimating Conditional Distributions | stat.ML cs.LG | Nonparametric estimation of the conditional distribution of a response given
high-dimensional features is a challenging problem. It is important to allow
not only the mean but also the variance and shape of the response density to
change flexibly with features, which are massive-dimensional. We propose a
multiscale dictionary learning model, which expresses the conditional response
density as a convex combination of dictionary densities, with the densities
used and their weights dependent on the path through a tree decomposition of
the feature space. A fast graph partitioning algorithm is applied to obtain the
tree decomposition, with Bayesian methods then used to adaptively prune and
average over different sub-trees in a soft probabilistic manner. The algorithm
scales efficiently to approximately one million features. State of the art
predictive performance is demonstrated for toy examples and two neuroscience
applications including up to a million features.
| Francesca Petralia, Joshua Vogelstein and David B. Dunson | null | 1312.1099 | null | null |
Interpreting random forest classification models using a feature
contribution method | cs.LG | Model interpretation is one of the key aspects of the model evaluation
process. The explanation of the relationship between model variables and
outputs is relatively easy for statistical models, such as linear regressions,
thanks to the availability of model parameters and their statistical
significance. For "black box" models, such as random forest, this information
is hidden inside the model structure. This work presents an approach for
computing feature contributions for random forest classification models. It
allows for the determination of the influence of each variable on the model
prediction for an individual instance. By analysing feature contributions for a
training dataset, the most significant variables can be determined and their
typical contribution towards predictions made for individual classes, i.e.,
class-specific feature contribution "patterns", are discovered. These patterns
represent a standard behaviour of the model and allow for an additional
assessment of the model reliability for new data. Interpretation of feature
contributions for two UCI benchmark datasets shows the potential of the
proposed methodology. The robustness of results is demonstrated through an
extensive analysis of feature contributions calculated for a large number of
generated random forest models.
| Anna Palczewska and Jan Palczewski and Richard Marchese Robinson and
Daniel Neagu | null | 1312.1121 | null | null |
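A hedged sketch of the feature-contribution idea for a single decision tree: walk an instance's decision path and attribute each change in the node's class-probability estimate to the feature split at that node; averaging over all trees of a forest would give forest-level contributions. The dataset and sklearn internals used are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
t = clf.tree_

def contributions(x):
    """Per-feature, per-class contributions for one instance in one tree."""
    contrib = np.zeros((X.shape[1], t.value.shape[2]))
    node = 0
    probs = t.value[node, 0] / t.value[node, 0].sum()
    while t.children_left[node] != -1:                 # descend until a leaf
        feat = t.feature[node]
        node = (t.children_left[node] if x[feat] <= t.threshold[node]
                else t.children_right[node])
        new_probs = t.value[node, 0] / t.value[node, 0].sum()
        contrib[feat] += new_probs - probs             # credit the split feature
        probs = new_probs
    return contrib                                     # rows: features, cols: classes

print(np.round(contributions(X[0]), 3))
```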
Bandits and Experts in Metric Spaces | cs.DS cs.LG | In a multi-armed bandit problem, an online algorithm chooses from a set of
strategies in a sequence of trials so as to maximize the total payoff of the
chosen strategies. While the performance of bandit algorithms with a small
finite strategy set is quite well understood, bandit problems with large
strategy sets are still a topic of very active investigation, motivated by
practical applications such as online auctions and web advertisement. The goal
of such research is to identify broad and natural classes of strategy sets and
payoff functions which enable the design of efficient solutions.
In this work we study a very general setting for the multi-armed bandit
problem in which the strategies form a metric space, and the payoff function
satisfies a Lipschitz condition with respect to the metric. We refer to this
problem as the "Lipschitz MAB problem". We present a solution for the
multi-armed bandit problem in this setting. That is, for every metric space we
define an isometry invariant which bounds from below the performance of
Lipschitz MAB algorithms for this metric space, and we present an algorithm
which comes arbitrarily close to meeting this bound. Furthermore, our technique
gives even better results for benign payoff functions. We also address the
full-feedback ("best expert") version of the problem, where after every round
the payoffs from all arms are revealed.
| Robert Kleinberg, Aleksandrs Slivkins and Eli Upfal | null | 1312.1277 | null | null |
Bandit Online Optimization Over the Permutahedron | cs.LG | The permutahedron is the convex polytope with vertex set consisting of the
vectors $(\pi(1),\dots, \pi(n))$ for all permutations (bijections) $\pi$ over
$\{1,\dots, n\}$. We study a bandit game in which, at each step $t$, an
adversary chooses a hidden weight vector $s_t$, a player chooses a
vertex $\pi_t$ of the permutahedron and suffers an observed loss of
$\sum_{i=1}^n \pi_t(i) s_t(i)$.
A previous algorithm CombBand of Cesa-Bianchi et al (2009) guarantees a
regret of $O(n\sqrt{T \log n})$ for a time horizon of $T$. Unfortunately,
CombBand requires at each step an $n$-by-$n$ matrix permanent approximation to
within improved accuracy as $T$ grows, resulting in a total running time that
is super linear in $T$, making it impractical for large time horizons.
We provide an algorithm of regret $O(n^{3/2}\sqrt{T})$ with total time
complexity $O(n^3T)$. The ideas are a combination of CombBand and a recent
algorithm by Ailon (2013) for online optimization over the permutahedron in the
full information setting. The technical core is a bound on the variance of the
Plackett-Luce noisy sorting process's "pseudo loss". The bound is obtained by
establishing positive semi-definiteness of a family of 3-by-3 matrices
generated from rational functions of exponentials of 3 parameters.
| Nir Ailon and Kohei Hatano and Eiji Takimoto | null | 1312.1530 | null | null |
Max-Min Distance Nonnegative Matrix Factorization | stat.ML cs.LG cs.NA | Nonnegative Matrix Factorization (NMF) has been a popular representation
method for pattern classification problems. It decomposes a nonnegative
matrix of data samples as the product of a nonnegative basis matrix and a
nonnegative coefficient matrix, and the coefficient matrix is used as the new
representation. However, traditional NMF methods ignore the class labels of the
data samples. In this paper, we propose a novel supervised NMF algorithm to
improve the discriminative ability of the new representation. Using the class
labels, we separate all the data sample pairs into within-class pairs and
between-class pairs. To improve the discriminative ability of the new NMF
representations, we require that the maximum distance of the within-class pairs
in the new NMF space be minimized, while the minimum distance of the
between-class pairs be maximized. With this criterion, we construct an
objective function and optimize it with respect to the basis and coefficient
matrices and the slack variables alternately, resulting in an iterative
algorithm.
| Jim Jing-Yan Wang | null | 1312.1613 | null | null |
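A small sketch of the criterion itself: given a coefficient matrix H (columns are the new representations) and class labels, compute the maximum within-class distance (to be minimized) and the minimum between-class distance (to be maximized). The optimization over the basis, coefficients, and slack variables is not reproduced here.

```python
import numpy as np

def maxmin_distances(H, labels):
    """Return (max within-class distance, min between-class distance)
    over the columns of the coefficient matrix H."""
    n = H.shape[1]
    D = np.linalg.norm(H[:, :, None] - H[:, None, :], axis=0)  # pairwise distances
    same = labels[:, None] == labels[None, :]
    iu = np.triu_indices(n, k=1)                               # each pair once
    return D[iu][same[iu]].max(), D[iu][~same[iu]].min()

rng = np.random.default_rng(0)
H = np.abs(rng.standard_normal((5, 12)))     # toy nonnegative coefficients
labels = np.repeat([0, 1, 2], 4)
max_within, min_between = maxmin_distances(H, labels)
print(max_within, min_between)
```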
Semi-Stochastic Gradient Descent Methods | stat.ML cs.LG cs.NA math.NA math.OC | In this paper we study the problem of minimizing the average of a large
number ($n$) of smooth convex loss functions. We propose a new method, S2GD
(Semi-Stochastic Gradient Descent), which runs for one or several epochs in
each of which a single full gradient and a random number of stochastic
gradients is computed, following a geometric law. The total work needed for the
method to output an $\varepsilon$-accurate solution in expectation, measured in
the number of passes over data, or equivalently, in units equivalent to the
computation of a single gradient of the loss, is
$O((\kappa/n)\log(1/\varepsilon))$, where $\kappa$ is the condition number.
This is achieved by running the method for $O(\log(1/\varepsilon))$ epochs,
with a single gradient evaluation and $O(\kappa)$ stochastic gradient
evaluations in each. The SVRG method of Johnson and Zhang arises as a special
case. If our method is limited to a single epoch only, it needs to evaluate at
most $O((\kappa/\varepsilon)\log(1/\varepsilon))$ stochastic gradients. In
contrast, SVRG requires $O(\kappa/\varepsilon^2)$ stochastic gradients. To
illustrate our theoretical results, S2GD only needs the workload equivalent to
about 2.1 full gradient evaluations to find an $10^{-6}$-accurate solution for
a problem with $n=10^9$ and $\kappa=10^3$.
| Jakub Kone\v{c}n\'y and Peter Richt\'arik | null | 1312.1666 | null | null |
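A minimal sketch of the S2GD epoch structure: one full gradient per epoch followed by a random, geometrically distributed number of variance-reduced stochastic steps. The least-squares losses, step size, and geometric parameter in the toy usage are assumptions for illustration.

```python
import numpy as np

def s2gd(grad_full, grad_i, x0, n, step, m=200, epochs=30, seed=0):
    """One full gradient per epoch plus a geometrically distributed
    number of variance-reduced stochastic steps, capped at m."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(epochs):
        mu = grad_full(x)                     # the single full gradient
        y = x.copy()
        T = min(int(rng.geometric(0.05)), m)  # random number of inner steps
        for _ in range(T):
            i = rng.integers(n)
            y -= step * (grad_i(y, i) - grad_i(x, i) + mu)  # variance-reduced step
        x = y
    return x

rng = np.random.default_rng(1)
A, b = rng.standard_normal((200, 10)), rng.standard_normal(200)
grad_full = lambda w: A.T @ (A @ w - b) / len(b)
grad_i = lambda w, i: A[i] * (A[i] @ w - b[i])
w = s2gd(grad_full, grad_i, np.zeros(10), len(b), step=0.01)
print("mean squared error:", np.mean((A @ w - b) ** 2))
```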
Curriculum Learning for Handwritten Text Line Recognition | cs.LG | Recurrent Neural Networks (RNN) have recently achieved the best performance
in off-line Handwriting Text Recognition. At the same time, learning RNN by
gradient descent leads to slow convergence, and training times are particularly
long when the training database consists of full lines of text. In this paper,
we propose an easy way to accelerate stochastic gradient descent in this
set-up, and in the general context of learning to recognize sequences. The
principle is called Curriculum Learning, or shaping. The idea is to first learn
to recognize short sequences before training on all available training
sequences. Experiments on three different handwritten text databases (Rimes,
IAM, OpenHaRT) show that a simple implementation of this strategy can
significantly speed up the training of RNN for Text Recognition, and even
significantly improve performance in some cases.
| J\'er\^ome Louradour and Christopher Kermorvant | null | 1312.1737 | null | null |
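A hedged sketch of the curriculum idea for sequence training: early epochs sample only short sequences, and the admissible length grows until the whole training set is available. The linear length schedule is an assumption, not the paper's exact shaping strategy.

```python
import numpy as np

def curriculum_batches(sequences, epochs, batch_size=32, seed=0):
    """Yield one batch per epoch, admitting longer sequences as training
    progresses (shortest-first shaping)."""
    rng = np.random.default_rng(seed)
    lengths = np.array([len(s) for s in sequences])
    for epoch in range(epochs):
        frac = (epoch + 1) / epochs                  # grows linearly to 1
        max_len = np.quantile(lengths, frac)
        pool = [i for i, L in enumerate(lengths) if L <= max_len]
        idx = rng.choice(pool, size=min(batch_size, len(pool)), replace=False)
        yield [sequences[i] for i in idx]

rng = np.random.default_rng(1)
seqs = [list(range(n)) for n in rng.integers(1, 50, 200)]   # toy "text lines"
for batch in curriculum_batches(seqs, epochs=5):
    print("longest sequence in batch:", max(len(s) for s in batch))
```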
Dual coordinate solvers for large-scale structural SVMs | cs.LG cs.CV | This manuscript describes a method for training linear SVMs (including binary
SVMs, SVM regression, and structural SVMs) from large, out-of-core training
datasets. Current strategies for large-scale learning fall into one of two
camps: batch algorithms, which solve the learning problem given a finite
dataset, and online algorithms, which can process out-of-core datasets. The
former typically requires datasets small enough to fit in memory. The latter is
often phrased as a stochastic optimization problem; such algorithms enjoy
strong theoretical properties but often require manually tuned annealing
schedules, and may converge slowly for problems with large output spaces (e.g.,
structural SVMs). We discuss an algorithm for an "intermediate" regime in which
the data is too large to fit in memory, but the active constraints (support
vectors) are small enough to remain in memory. In this case, one can design
rather efficient learning algorithms that are as stable as batch algorithms,
but capable of processing out-of-core datasets. We have developed such a
MATLAB-based solver and used it to train a collection of recognition systems
for articulated pose estimation, facial analysis, 3D object recognition, and
action classification, all with publicly-available code. This writeup describes
the solver in detail.
| Deva Ramanan | null | 1312.1743 | null | null |
Understanding Deep Architectures using a Recursive Convolutional Network | cs.LG | A key challenge in designing convolutional network models is sizing them
appropriately. Many factors are involved in these decisions, including number
of layers, feature maps, kernel sizes, etc. Complicating this further is the
fact that each of these influence not only the numbers and dimensions of the
activation units, but also the total number of parameters. In this paper we
focus on assessing the independent contributions of three of these linked
variables: The numbers of layers, feature maps, and parameters. To accomplish
this, we employ a recursive convolutional network whose weights are tied
between layers; this allows us to vary each of the three factors in a
controlled setting. We find that while increasing the numbers of layers and
parameters each have clear benefit, the number of feature maps (and hence
dimensionality of the representation) appears ancillary, and finds most of its
benefit through the introduction of more weights. Our results (i) empirically
confirm the notion that adding layers alone increases computational power,
within the context of convolutional layers, and (ii) suggest that precise
sizing of convolutional feature map dimensions is itself of little concern;
more attention should be paid to the number of parameters in these layers
instead.
| David Eigen, Jason Rolfe, Rob Fergus, Yann LeCun | null | 1312.1847 | null | null |
From Maxout to Channel-Out: Encoding Information on Sparse Pathways | cs.NE cs.CV cs.LG stat.ML | Motivated by an important insight from neural science, we propose a new
framework for understanding the success of the recently proposed "maxout"
networks. The framework is based on encoding information on sparse pathways and
recognizing the correct pathway at inference time. Elaborating further on this
insight, we propose a novel deep network architecture, called "channel-out"
network, which takes a much better advantage of sparse pathway encoding. In
channel-out networks, pathways are not only formed a posteriori, but they are
also actively selected according to the inference outputs from the lower
layers. From a mathematical perspective, channel-out networks can represent a
wider class of piece-wise continuous functions, thereby endowing the network
with more expressive power than that of maxout networks. We test our
channel-out networks on several well-known image classification benchmarks,
setting new state-of-the-art performance on CIFAR-100 and STL-10, which
represent some of the "harder" image classification benchmarks.
| Qi Wang and Joseph JaJa | null | 1312.1909 | null | null |
Robust Subspace System Identification via Weighted Nuclear Norm
Optimization | cs.SY cs.LG stat.ML | Subspace identification is a classical and very well studied problem in
system identification. The problem was recently posed as a convex optimization
problem via the nuclear norm relaxation. Inspired by robust PCA, we extend this
framework to handle outliers. The proposed framework takes the form of a convex
optimization problem with an objective that trades off fit, rank and sparsity.
As in robust PCA, it can be problematic to find a suitable regularization
parameter. We show how the space in which a suitable parameter should be sought
can be limited to a bounded open set of the two dimensional parameter space. In
practice, this is very useful since it restricts the parameter space that is
needed to be surveyed.
| Dorsa Sadigh, Henrik Ohlsson, S. Shankar Sastry, Sanjit A. Seshia | null | 1312.2132 | null | null |
End-to-end Phoneme Sequence Recognition using Convolutional Neural
Networks | cs.LG cs.CL cs.NE | Most state-of-the-art phoneme recognition systems rely on classical neural
network classifiers, fed with highly tuned features, such as MFCC or PLP
features. Recent advances in ``deep learning'' approaches questioned such
systems, but while some attempts were made with simpler features such as
spectrograms, state-of-the-art systems still rely on MFCCs. This might be
viewed as a kind of failure from deep learning approaches, which are often
claimed to have the ability to train with raw signals, alleviating the need of
hand-crafted features. In this paper, we investigate a convolutional neural
network approach for raw speech signals. While convolutional architectures got
tremendous success in computer vision or text processing, they seem to have
been left aside in recent years in the speech processing field. We show
that it is possible to learn an end-to-end phoneme sequence classifier system
directly from raw signal, with similar performance on the TIMIT and WSJ
datasets to existing systems based on MFCC, questioning the need for complex
hand-crafted features on large datasets.
| Dimitri Palaz, Ronan Collobert, Mathew Magimai.-Doss | null | 1312.2137 | null | null |
Sequential Monte Carlo Inference of Mixed Membership Stochastic
Blockmodels for Dynamic Social Networks | cs.SI cs.LG stat.ML | Many kinds of data can be represented as a network or graph. It is crucial to
infer the latent structure underlying such a network and to predict unobserved
links in the network. Mixed Membership Stochastic Blockmodel (MMSB) is a
promising model for network data. Latent variables and unknown parameters in
MMSB have been estimated through Bayesian inference with the entire network;
however, it is important to estimate them online for evolving networks. In this
paper, we first develop online inference methods for MMSB through sequential
Monte Carlo methods, also known as particle filters. We then extend them for
time-evolving networks, taking into account the temporal dependency of the
network structure. We demonstrate through experiments that the time-dependent
particle filter outperformed several baselines in terms of prediction
performance in an online condition.
| Tomoki Kobayashi, Koji Eguchi | null | 1312.2154 | null | null |
Budgeted Influence Maximization for Multiple Products | cs.LG cs.SI stat.ML | The typical algorithmic problem in viral marketing aims to identify a set of
influential users in a social network, who, when convinced to adopt a product,
shall influence other users in the network and trigger a large cascade of
adoptions. However, the host (the owner of an online social platform) often
faces more constraints than the idealized setting of a single product, endless
user attention, unlimited budget and unbounded time: in reality, multiple
products need to be advertised, each user can tolerate only a small number of
recommendations, influencing a user has a cost and advertisers have only
limited budgets, and the adoptions need to be maximized within a short time
window.
Given these myriad user, monetary, and timing constraints, it is
extremely challenging for the host to design principled and efficient viral
marketing algorithms with provable guarantees. In this paper, we provide a novel
solution by formulating the problem as a submodular maximization in a
continuous-time diffusion model under an intersection of a matroid and multiple
knapsack constraints. We also propose an adaptive threshold greedy algorithm
which can be faster than the traditional greedy algorithm with lazy evaluation,
and scalable to networks with million of nodes. Furthermore, our mathematical
formulation allows us to prove that the algorithm can achieve an approximation
factor of $k_a/(2+2 k)$ when $k_a$ out of the $k$ knapsack constraints are
active, which also improves over previous guarantees from combinatorial
optimization literature. In the case when influencing each user has uniform
cost, the approximation factor improves further to $1/3$. Extensive
synthetic and real world experiments demonstrate that our budgeted influence
maximization algorithm achieves the-state-of-the-art in terms of both
effectiveness and scalability, often beating the next best by significant
margins.
| Nan Du, Yingyu Liang, Maria Florina Balcan, Le Song | null | 1312.2164 | null | null |
bartMachine: Machine Learning with Bayesian Additive Regression Trees | stat.ML cs.LG | We present a new package in R implementing Bayesian additive regression trees
(BART). The package introduces many new features for data analysis using BART
such as variable selection, interaction detection, model diagnostic plots,
incorporation of missing data and the ability to save trees for future
prediction. It is significantly faster than the current R implementation,
parallelized, and capable of handling both large sample sizes and
high-dimensional data.
| Adam Kapelner and Justin Bleich | null | 1312.2171 | null | null |
Machine Learning Techniques for Intrusion Detection | cs.CR cs.LG cs.NI | An Intrusion Detection System (IDS) is a software that monitors a single or a
network of computers for malicious activities (attacks) that are aimed at
stealing or censoring information or corrupting network protocols. Most
techniques used in today's IDS are not able to deal with the dynamic and
complex nature of cyber attacks on computer networks. Hence, efficient adaptive
methods like various techniques of machine learning can result in higher
detection rates, lower false alarm rates and reasonable computation and
communication costs. In this paper, we study several such schemes and compare
their performance. We divide the schemes into methods based on classical
artificial intelligence (AI) and methods based on computational intelligence
(CI). We explain how various characteristics of CI techniques can be used to
build efficient IDS.
| Mahdi Zamani and Mahnush Movahedi | null | 1312.2177 | null | null |
CEAI: CCM based Email Authorship Identification Model | cs.LG | In this paper we present a model for email authorship identification (EAI) by
employing a Cluster-based Classification (CCM) technique. Traditionally,
stylometric features have been successfully employed in various authorship
analysis tasks; we extend the traditional feature-set to include some more
interesting and effective features for email authorship identification (e.g.
the last punctuation mark used in an email, the tendency of an author to use
capitalization at the start of an email, or the punctuation after a greeting or
farewell). We also included content features based on Info Gain feature selection.
It is observed that the use of such features in the authorship identification
process has a positive impact on the accuracy of the authorship identification
task. We performed experiments to justify our arguments and compared the
results with other base line models. Experimental results reveal that the
proposed CCM-based email authorship identification model, along with the
proposed feature set, outperforms the state-of-the-art support vector machine
(SVM)-based models, as well as the models proposed by Iqbal et al. [1, 2]. The
proposed model attains an accuracy rate of 94% for 10 authors, 89% for 25
authors, and 81% for 50 authors, respectively on Enron dataset, while 89.5%
accuracy has been achieved on authors' constructed real email dataset. The
results on Enron dataset have been achieved on quite a large number of authors
as compared to the models proposed by Iqbal et al. [1, 2].
| Sarwat Nizamani, Nasrullah Memon | null | 1312.2451 | null | null |
Automatic recognition and tagging of topologically different regimes in
dynamical systems | cs.CG cs.LG math.DS nlin.CD physics.data-an | Complex systems are commonly modeled using nonlinear dynamical systems. These
models are often high-dimensional and chaotic. An important goal in studying
physical systems through the lens of mathematical models is to determine when
the system undergoes changes in qualitative behavior. A detailed description of
the dynamics can be difficult or impossible to obtain for high-dimensional and
chaotic systems. Therefore, a more sensible goal is to recognize and mark
transitions of a system between qualitatively different regimes of behavior. In
practice, one is interested in developing techniques for detection of such
transitions from sparse observations, possibly contaminated by noise. In this
paper we develop a framework to accurately tag different regimes of complex
systems based on topological features. In particular, our framework works with
a high degree of success in picking out a cyclically orbiting regime from a
stationary equilibrium regime in high-dimensional stochastic dynamical systems.
| Jesse Berwald, Marian Gidea and Mikael Vejdemo-Johansson | null | 1312.2482 | null | null |
Kernel-based Distance Metric Learning in the Output Space | cs.LG | In this paper we present two related, kernel-based Distance Metric Learning
(DML) methods. Their respective models non-linearly map data from their
original space to an output space, and subsequent distance measurements are
performed in the output space via a Mahalanobis metric. The dimensionality of
the output space can be directly controlled to facilitate the learning of a
low-rank metric. Both methods allow for simultaneous inference of the
associated metric and the mapping to the output space, which can be used to
visualize the data, when the output space is 2- or 3-dimensional. Experimental
results for a collection of classification tasks illustrate the advantages of
the proposed methods over other traditional and kernel-based DML approaches.
| Cong Li, Michael Georgiopoulos, Georgios C. Anagnostopoulos | 10.1109/IJCNN.2013.6706862 | 1312.2578 | null | null |
Multi-Task Classification Hypothesis Space with Improved Generalization
Bounds | cs.LG | This paper presents a RKHS, in general, of vector-valued functions intended
to be used as hypothesis space for multi-task classification. It extends
similar hypothesis spaces that have previously been considered in the literature.
Assuming this space, an improved Empirical Rademacher Complexity-based
generalization bound is derived. The analysis is itself extended to an MKL
setting. The connection between the proposed hypothesis space and a Group-Lasso
type regularizer is discussed. Finally, experimental results, with some
SVM-based Multi-Task Learning problems, underline the quality of the derived
bounds and validate the paper's analysis.
| Cong Li, Michael Georgiopoulos, Georgios C. Anagnostopoulos | null | 1312.2606 | null | null |
Improving circuit miniaturization and its efficiency using Rough Set
Theory | cs.LG cs.AI | High speed, accuracy, meticulousness and quick response are vital necessities
of the modern digital world. An efficient electronic circuit directly affects
the operation of the whole system, and different tools are required to solve
different types of engineering problems. Improving efficiency and accuracy
while lowering power consumption in an electronic circuit has always been a
bottleneck, so the need for circuit miniaturization is always present.
Miniaturization saves much of the time and power wasted in the switching of
gates, reduces the wiring crisis, shrinks the cross-sectional area of the chip,
and multiplies many fold the number of transistors that can be implemented on
a chip. To overcome this problem, we propose an artificial intelligence (AI)
based approach that makes use of rough set theory for its implementation.
Rough set theory, proposed by Z. Pawlak in 1982, is a mathematical tool that
deals with uncertainty and vagueness; decisions can be generated with it by
removing unwanted and superfluous data. We have reduced the number of gates
without affecting the functionality of the given circuit. This paper proposes
an approach, based on decision rules derived with rough set theory, that
reduces the number of gates in a circuit.
| Sarvesh SS Rawat, Dheeraj Dilip Mor, Anugrah Kumar, Sanjiban Shekar
Roy, Rohit kumar | null | 1312.2710 | null | null |
Performance Analysis Of Regularized Linear Regression Models For
Oxazolines And Oxazoles Derivitive Descriptor Dataset | cs.LG | Regularized regression techniques for linear regression have been developed
over the last few decades to reduce the flaws of ordinary least squares
regression with regard to prediction accuracy. In this paper, new methods for
using regularized regression in model choice are introduced, and we identify
the conditions under which regularized regression improves our ability to
discriminate models. We applied five methods that use penalty-based
(regularization) shrinkage to an Oxazolines and Oxazoles derivatives
descriptor dataset with far more predictors than observations. The lasso,
ridge, elastic net, lars and relaxed lasso further possess the desirable
property that they simultaneously select relevant predictive descriptors and
optimally estimate their effects. Here, we comparatively evaluate the
performance of the five regularized linear regression methods. Assessing the
performance of each model by means of benchmark experiments is an established
exercise: cross-validation and resampling methods are generally used to
arrive at point estimates of the efficiencies, which are then compared to
identify methods with acceptable properties. Predictive accuracy was
evaluated using the root mean squared error (RMSE) and the squared
correlation between predicted and observed mean inhibitory concentration of
antitubercular activity (R squared). We found that all five regularized
regression models were able to produce feasible models that efficiently
capture the linearity in the data. The elastic net and lars had similar
accuracies, as did the lasso and relaxed lasso, and both pairs outperformed
ridge regression in terms of the RMSE and R squared metrics.
| Doreswamy and Chanabasayya .M. Vastrad | 10.5121/ijcsity.2013.1408 | 1312.2789 | null | null |
Active Player Modelling | cs.LG | We argue for the use of active learning methods for player modelling. In
active learning, the learning algorithm chooses where to sample the search
space so as to optimise learning progress. We hypothesise that player modelling
based on active learning could result in vastly more efficient learning, but
will require big changes in how data is collected. Some example active player
modelling scenarios are described. A particular form of active learning is also
equivalent to an influential formalisation of (human and machine) curiosity,
and games with active learning could therefore be seen as being curious about
the player. We further hypothesise that this form of curiosity is symmetric,
and therefore that games that explore their players based on the principles of
active learning will turn out to select game configurations that are
interesting to the player that is being explored.
| Julian Togelius, Noor Shaker, Georgios N. Yannakakis | null | 1312.2936 | null | null |
Protein Contact Prediction by Integrating Joint Evolutionary Coupling
Analysis and Supervised Learning | q-bio.QM cs.LG math.OC q-bio.BM stat.ML | Protein contacts contain important information for protein structure and
functional study, but contact prediction from sequence remains very
challenging. Both evolutionary coupling (EC) analysis and supervised machine
learning methods are developed to predict contacts, making use of different
types of information, respectively. This paper presents a group graphical lasso
(GGL) method for contact prediction that integrates joint multi-family EC
analysis and supervised learning. Different from existing single-family EC
analysis that uses residue co-evolution information in only the target protein
family, our joint EC analysis uses residue co-evolution in both the target
family and its related families, which may have divergent sequences but similar
folds. To implement joint EC analysis, we model a set of related protein
families using Gaussian graphical models (GGM) and then co-estimate their
precision matrices by maximum-likelihood, subject to the constraint that the
precision matrices shall share similar residue co-evolution patterns. To
further improve the accuracy of the estimated precision matrices, we employ a
supervised learning method to predict contact probability from a variety of
evolutionary and non-evolutionary information and then incorporate the
predicted probability as prior into our GGL framework. Experiments show that
our method can predict contacts much more accurately than existing methods, and
that our method performs better on both conserved and family-specific contacts.
| Jianzhu Ma, Sheng Wang, Zhiyong Wang and Jinbo Xu | null | 1312.2988 | null | null |
Clustering for high-dimension, low-sample size data using distance
vectors | stat.ML cs.LG | In high-dimension, low-sample size (HDLSS) data, it is not always true that
closeness of two objects reflects a hidden cluster structure. We point out the
important fact that it is not the closeness, but the "values" of the distances,
that contain information about the cluster structure in high-dimensional space.
Based on this fact, we propose an efficient and simple clustering approach,
called distance vector clustering, for HDLSS data. Under the assumptions given
in the work of Hall et al. (2005), we show that the proposed approach recovers
the true cluster labels under milder conditions as the dimension tends to
infinity with
the sample size fixed. The effectiveness of the distance vector clustering
approach is illustrated through a numerical experiment and real data analysis.
| Yoshikazu Terada | null | 1312.3386 | null | null |
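A minimal sketch of the distance-vector idea, assuming Euclidean distances and
an off-the-shelf k-means step on the distance vectors; the paper's own
clustering rule may differ in detail.

```python
# Cluster objects by the *values* in their vectors of distances to all other
# objects, rather than by raw closeness. HDLSS toy data: 20 samples, 1000 dims.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((10, 1000)),
               rng.standard_normal((10, 1000)) + 0.2])

D = squareform(pdist(X))          # each row is one object's distance vector
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(D)
print(labels)
```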
Online Bayesian Passive-Aggressive Learning | cs.LG | Online Passive-Aggressive (PA) learning is an effective framework for
performing max-margin online learning. But the deterministic formulation and
estimated single large-margin model could limit its capability in discovering
descriptive structures underlying complex data. This paper presents online
Bayesian Passive-Aggressive (BayesPA) learning, which subsumes the online PA
and extends naturally to incorporate latent variables and perform nonparametric
Bayesian inference, thus providing great flexibility for explorative analysis.
We apply BayesPA to topic modeling and derive efficient online learning
algorithms for max-margin topic models. We further develop nonparametric
methods to resolve the number of topics. Experimental results on real datasets
show that our approaches significantly improve time efficiency while
maintaining comparable results with the batch counterparts.
| Tianlin Shi and Jun Zhu | null | 1312.3388 | null | null |
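The deterministic online PA baseline that BayesPA generalizes is available off
the shelf; a minimal streaming sketch follows, on synthetic data, with the
Bayesian and nonparametric extensions (the paper's contribution) not shown.

```python
# Online Passive-Aggressive baseline: stream data in mini-batches and update
# the max-margin model with partial_fit.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import PassiveAggressiveClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
clf = PassiveAggressiveClassifier(C=1.0, random_state=0)

classes = np.unique(y)
for start in range(0, len(X), 100):        # examples arrive in batches of 100
    xb, yb = X[start:start + 100], y[start:start + 100]
    clf.partial_fit(xb, yb, classes=classes)

print("online accuracy:", clf.score(X, y))
```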
Relative Upper Confidence Bound for the K-Armed Dueling Bandit Problem | cs.LG | This paper proposes a new method for the K-armed dueling bandit problem, a
variation on the regular K-armed bandit problem that offers only relative
feedback about pairs of arms. Our approach extends the Upper Confidence Bound
algorithm to the relative setting by using estimates of the pairwise
probabilities to select a promising arm and applying Upper Confidence Bound
with the winner as a benchmark. We prove a finite-time regret bound of order
O(log t). In addition, our empirical results using real data from an
information retrieval application show that it greatly outperforms the state of
the art.
| Masrour Zoghi, Shimon Whiteson, Remi Munos, Maarten de Rijke | null | 1312.3393 | null | null |
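A compact sketch of the relative-UCB mechanism on a synthetic preference
matrix; the constants, tie-breaking, and toy preferences below are
illustrative, not the paper's exact algorithm.

```python
# Optimistic estimates of pairwise win probabilities pick a candidate arm;
# the toughest opponent is then chosen with the candidate as benchmark.
import numpy as np

rng = np.random.default_rng(0)
K, alpha, T = 5, 0.51, 3000
# P[i, j] = probability that arm i beats arm j; arm 0 is the best.
P = np.full((K, K), 0.5)
for i in range(K):
    for j in range(i + 1, K):
        P[i, j] = 0.5 + 0.08 * (j - i)
        P[j, i] = 1.0 - P[i, j]

W = np.zeros((K, K))                      # W[i, j] = wins of arm i over arm j
for t in range(1, T + 1):
    N = W + W.T                           # number of duels per pair
    with np.errstate(divide="ignore", invalid="ignore"):
        U = W / N + np.sqrt(alpha * np.log(t) / N)
    U[np.isnan(U)] = 1.0                  # unexplored pairs stay optimistic
    np.fill_diagonal(U, 0.5)
    candidates = np.flatnonzero(np.all(U >= 0.5, axis=1))
    c = int(rng.choice(candidates)) if candidates.size else int(rng.integers(K))
    d = int(np.argmax(U[:, c]))           # toughest rival, with c as benchmark
    if rng.random() < P[c, d]:
        W[c, d] += 1
    else:
        W[d, c] += 1

print("most-played arm:", int(np.argmax((W + W.T).sum(axis=1))))
```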
Unsupervised learning of depth and motion | cs.CV cs.LG stat.ML | We present a model for the joint estimation of disparity and motion. The
model is based on learning about the interrelations between images from
multiple cameras, multiple frames in a video, or the combination of both. We
show that learning depth and motion cues, as well as their combinations, from
data is possible within a single type of architecture and a single type of
learning algorithm, by using biologically inspired "complex cell" like units,
which encode correlations between the pixels across image pairs. Our
experimental results show that the learning of depth and motion makes it
possible to achieve state-of-the-art performance in 3-D activity analysis, and
to outperform existing hand-engineered 3-D motion features by a very large
margin.
| Kishore Konda, Roland Memisevic | null | 1312.3429 | null | null |
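A toy numpy sketch of a multiplicative "complex cell" style unit, with random
filters standing in for learned ones and a circular shift standing in for
disparity or motion between the two images:

```python
# Products of paired filter responses encode correlations between pixels
# across an image pair; pooling the products gives a relational code.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)                 # flattened 8x8 patch, camera/frame 1
y = np.roll(x, 2)                           # shifted copy: a 2-pixel disparity

Wx = rng.standard_normal((16, 64))          # filters applied to the first image
Wy = rng.standard_normal((16, 64))          # filters applied to the second image
factors = (Wx @ x) * (Wy @ y)               # multiplicative cross-image terms
pool = np.abs(rng.standard_normal((8, 16))) # pooling weights over the products
complex_cells = pool @ factors              # complex-cell-like responses
print(complex_cells.round(2))
```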
Sparse Matrix-based Random Projection for Classification | cs.LG cs.CV stat.ML | As a typical dimensionality reduction technique, random projection can be
simply implemented with linear projection, while maintaining the pairwise
distances of high-dimensional data with high probability. Since this technique
is mainly exploited for classification tasks, this paper studies the
construction of the random matrix from the viewpoint of feature selection
rather than of traditional distance preservation. This yields a somewhat
surprising theoretical result: a sparse random matrix with exactly one nonzero
element per column can achieve better feature selection performance than
other, denser matrices if the projection dimension is sufficiently large
(namely, not much smaller than the number of feature elements); otherwise, it
performs comparably to them. For random
projection, this theoretical result implies considerable improvement on both
complexity and performance, which is widely confirmed with the classification
experiments on both synthetic data and real data.
| Weizhi Lu and Weiyu Li and Kidiyo Kpalma and Joseph Ronsin | null | 1312.3522 | null | null |
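The construction itself is easy to sketch: a sparse matrix with exactly one
nonzero (+/-1) entry per column, with illustrative dimensions assumed below.

```python
# Build the extreme-sparsity projection matrix and apply it to data.
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)
d, k = 10000, 1000                     # original and projected dimensionality

rows = rng.integers(0, k, size=d)      # the single nonzero row chosen per column
cols = np.arange(d)
vals = rng.choice([-1.0, 1.0], size=d)
R = csr_matrix((vals, (rows, cols)), shape=(k, d))

X = rng.standard_normal((50, d))       # 50 high-dimensional samples
X_proj = (R @ X.T).T                   # project down to k dimensions
print(X_proj.shape)                    # (50, 1000)
```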
Efficient Baseline-free Sampling in Parameter Exploring Policy
Gradients: Super Symmetric PGPE | cs.LG | Policy Gradient methods that explore directly in parameter space are among
the most effective and robust direct policy search methods and have drawn a lot
of attention lately. The basic method from this field, Policy Gradients with
Parameter-based Exploration, uses two samples that are symmetric around the
current hypothesis to circumvent the misleading rewards that the usual
baseline approach gathers in problems with \emph{asymmetrical} reward
distributions. The exploration parameters, however, are still updated by a
baseline approach, leaving the exploration prone to asymmetric reward
distributions. In this paper we show how the exploration parameters can be
sampled quasi-symmetrically even though they are bounded rather than free. We
give an approximate transformation that yields quasi-symmetric samples with
respect to the exploration without changing the overall sampling distribution.
Finally, we demonstrate that sampling symmetrically for the exploration
parameters as well is superior to the original sampling approach in terms of
sample efficiency and robustness.
| Frank Sehnke | 10.1007/978-3-642-40728-4_17 | 1312.3811 | null | null |
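A minimal sketch of the symmetric two-sample mean update that PGPE starts
from, on a toy objective; the exploration-parameter (sigma) update, where the
paper's quasi-symmetric sampling applies, is deliberately not shown.

```python
# Two parameter vectors mirrored around the current mean are evaluated; their
# reward difference drives the gradient step, so the baseline cancels.
import numpy as np

rng = np.random.default_rng(0)

def reward(theta):                          # toy objective, maximal at theta = 1
    return -np.sum((theta - 1.0) ** 2)

mu, sigma, lr = np.zeros(5), np.ones(5), 0.01
for _ in range(5000):
    eps = rng.standard_normal(5) * sigma    # perturbation in parameter space
    r_plus, r_minus = reward(mu + eps), reward(mu - eps)
    mu += lr * eps * (r_plus - r_minus) / 2.0
print(mu.round(2))                          # should approach [1. 1. 1. 1. 1.]
```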
A Methodology for Player Modeling based on Machine Learning | cs.AI cs.LG | AI is gradually receiving more attention as a fundamental feature to increase
the immersion in digital games. Among the several AI approaches, player
modeling is becoming an important one. The main idea is to understand and model
the player characteristics and behaviors in order to develop a better AI. In
this work, we discuss several aspects of this new field. We proposed a taxonomy
to organize the area, discussing several facets of this topic, ranging from
implementation decisions up to what a model attempts to describe. We then
classify, in our taxonomy, some of the most important works in this field. We
also presented a generic approach to deal with player modeling using ML, and we
instantiated this approach to model players' preferences in the game
Civilization IV. The instantiation of this approach has several steps. We first
discuss a generic representation, regardless of what is being modeled, and
evaluate it performing experiments with the strategy game Civilization IV.
Continuing the instantiation of the proposed approach we evaluated the
applicability of using game score information to distinguish different
preferences. We presented a characterization of virtual agents in the game,
comparing their behavior with their stated preferences. Once we have
characterized these agents, we were able to observe that different preferences
generate different behaviors, measured by several game indicators. We then
tackled the preference modeling problem as a binary classification task, with a
supervised learning approach. We compared four different methods, based on
different paradigms (SVM, AdaBoost, NaiveBayes and JRip), evaluating them on a
set of matches played by different virtual agents. We conclude our work using
the learned models to infer human players' preferences. Using some of the
evaluated classifiers we obtained accuracies over 60% for most of the inferred
preferences.
| Marlos C. Machado | null | 1312.3903 | null | null |
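The preference-modeling step can be sketched as a plain classifier comparison
in scikit-learn; JRip is a Weka rule learner with no scikit-learn equivalent,
so only three of the four paradigms appear, and the data below is synthetic
rather than Civilization IV match features.

```python
# Binary classification of preferences: compare several learning paradigms
# by cross-validated accuracy on per-match feature vectors.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=15, random_state=0)

for name, clf in [("SVM", SVC()),
                  ("AdaBoost", AdaBoostClassifier(random_state=0)),
                  ("NaiveBayes", GaussianNB())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:10s} accuracy={acc:.3f}")
```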
An Extensive Evaluation of Filtering Misclassified Instances in
Supervised Classification Tasks | cs.LG stat.ML | Removing or filtering outliers and mislabeled instances prior to training a
learning algorithm has been shown to increase classification accuracy. A
popular approach for handling outliers and mislabeled instances is to remove
any instance that is misclassified by a learning algorithm. However, an
examination of which learning algorithms to use for filtering as well as their
effects on multiple learning algorithms over a large set of data sets has not
been done. Previous work has generally been constrained by the large
computational requirements of such an experiment and has therefore been
limited to computationally inexpensive learning algorithms and small numbers
of data sets. In this
paper, we examine 9 learning algorithms as filtering algorithms as well as
examining the effects of filtering in the 9 chosen learning algorithms on a set
of 54 data sets. In addition to using each learning algorithm individually as a
filter, we also use the set of learning algorithms as an ensemble filter and
use an adaptive algorithm that selects a subset of the learning algorithms for
filtering for a specific task and learning algorithm. We find that in most
cases, using an ensemble of learning algorithms for filtering produces the
greatest increase in classification accuracy. We also compare filtering with a
majority voting ensemble. The voting ensemble significantly outperforms
filtering unless there are high amounts of noise present in the data set.
Additionally, we find that a majority voting ensemble is robust to noise as
filtering with a voting ensemble does not increase the classification accuracy
of the voting ensemble.
| Michael R. Smith and Tony Martinez | null | 1312.3970 | null | null |
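A minimal sketch of the ensemble filter on synthetic noisy labels, assuming
three illustrative learners and a simple majority-vote removal rule; the
paper's own filter set and adaptive selection are not reproduced.

```python
# An instance is removed when a majority of diverse learners misclassify it
# under cross-validation; the final model trains on the cleaned set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, flip_y=0.1,
                           random_state=0)   # flip_y injects label noise

filters = [LogisticRegression(max_iter=1000), GaussianNB(),
           DecisionTreeClassifier(random_state=0)]
votes = np.zeros(len(y))
for clf in filters:
    pred = cross_val_predict(clf, X, y, cv=5)
    votes += (pred != y)                     # one vote per misclassification

keep = votes < len(filters) / 2              # drop majority-misclassified points
final = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
print(f"kept {int(keep.sum())} of {len(y)} instances,",
      f"accuracy={final.score(X, y):.3f}")
```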