title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Classroom Video Assessment and Retrieval via Multiple Instance Learning | cs.IR cs.CY cs.LG | We propose a multiple instance learning approach to content-based retrieval
of classroom video for the purpose of supporting humans in assessing the learning
environment. The key element of our approach is a mapping between the semantic
concepts of the assessment system and features of the video that can be
measured using techniques from the fields of computer vision and speech
analysis. We report on a formative experiment in content-based video retrieval
involving trained experts in the Classroom Assessment Scoring System, a widely
used framework for assessment and improvement of learning environments. The
results of this experiment suggest that our approach has potential application
to productivity enhancement in assessment and to broader retrieval tasks.
| Qifeng Qiao and Peter A. Beling | null | 1403.6248 | null | null |
Updating Formulas and Algorithms for Computing Entropy and Gini Index
from Time-Changing Data Streams | cs.AI cs.LG | Despite growing interest in data stream mining, the most successful
incremental learners, such as VFDT, still use periodic recomputation to update
attribute information gains and Gini indices. This note provides simple
incremental formulas and algorithms for computing entropy and Gini index from
time-changing data streams.
| Blaz Sovdat | null | 1403.6348 | null | null |
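To make the incremental idea concrete, here is a minimal Python sketch that maintains entropy and the Gini index over a stream of class labels in O(1) time per update by caching two running sums. It illustrates the general technique only; the note's exact update formulas may differ.

```python
import math

class StreamingImpurity:
    """O(1)-per-update entropy and Gini index over streaming labels."""

    def __init__(self):
        self.counts = {}   # class label -> count n_i
        self.n = 0         # total count
        self.s = 0.0       # running sum of n_i * log(n_i)
        self.q = 0         # running sum of n_i^2

    def add(self, label):
        c = self.counts.get(label, 0)
        if c > 0:
            self.s -= c * math.log(c)
        self.s += (c + 1) * math.log(c + 1)
        self.q += 2 * c + 1                      # (c+1)^2 - c^2
        self.counts[label] = c + 1
        self.n += 1

    def entropy(self):
        # H = log n - (1/n) * sum_i n_i log n_i
        return math.log(self.n) - self.s / self.n if self.n else 0.0

    def gini(self):
        # G = 1 - sum_i (n_i / n)^2
        return 1.0 - self.q / (self.n * self.n) if self.n else 0.0

imp = StreamingImpurity()
for label in ["a", "b", "a", "c", "a"]:
    imp.add(label)
print(imp.entropy(), imp.gini())
```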
Evaluating topic coherence measures | cs.LG cs.CL cs.IR | Topic models extract representative word sets - called topics - from word
counts in documents without requiring any semantic annotations. Topics are not
guaranteed to be readily interpretable; therefore, coherence measures have been
proposed to distinguish between good and bad topics. Studies of topic coherence
so far are limited to measures that score pairs of individual words. For the
first time, we include coherence measures from scientific philosophy that score
pairs of more complex word subsets and apply them to topic scoring.
| Frank Rosner, Alexander Hinneburg, Michael R\"oder, Martin Nettling,
Andreas Both | null | 1403.6397 | null | null |
Multi-agent Inverse Reinforcement Learning for Two-person Zero-sum Games | cs.GT cs.AI cs.LG | The focus of this paper is a Bayesian framework for solving a class of
problems termed multi-agent inverse reinforcement learning (MIRL). Compared to
the well-known inverse reinforcement learning (IRL) problem, MIRL is formalized
in the context of stochastic games, which generalize Markov decision processes
to game theoretic scenarios. We establish a theoretical foundation for
competitive two-agent zero-sum MIRL problems and propose a Bayesian solution
approach in which the generative model is based on an assumption that the two
agents follow a minimax bi-policy. Numerical results are presented comparing
the Bayesian MIRL method with two existing methods in the context of an
abstract soccer game. Investigation centers on relationships between the extent
of prior information and the quality of learned rewards. Results suggest that
covariance structure is more important than mean value in reward priors.
| Xiaomin Lin and Peter A. Beling and Randy Cogill | 10.1109/TCIAIG.2017.2679115 | 1403.6508 | null | null |
Variance-Constrained Actor-Critic Algorithms for Discounted and Average
Reward MDPs | cs.LG math.OC stat.ML | In many sequential decision-making problems we may want to manage risk by
minimizing some measure of variability in rewards in addition to maximizing a
standard criterion. Variance-related risk measures are among the most common
risk-sensitive criteria in finance and operations research. However, optimizing
many such criteria is known to be a hard problem. In this paper, we consider
both discounted and average reward Markov decision processes. For each
formulation, we first define a measure of variability for a policy, which in
turn gives us a set of risk-sensitive criteria to optimize. For each of these
criteria, we derive a formula for computing its gradient. We then devise
actor-critic algorithms that operate on three timescales - a TD critic on the
fastest timescale, a policy gradient (actor) on the intermediate timescale, and
a dual ascent for Lagrange multipliers on the slowest timescale. In the
discounted setting, we point out the difficulty in estimating the gradient of
the variance of the return and incorporate simultaneous perturbation approaches
to alleviate this. The average setting, on the other hand, allows for an actor
update using compatible features to estimate the gradient of the variance. We
establish the convergence of our algorithms to locally risk-sensitive optimal
policies. Finally, we demonstrate the usefulness of our algorithms in a traffic
signal control application.
| Prashanth L.A. and Mohammad Ghavamzadeh | null | 1403.6530 | null | null |
DeepWalk: Online Learning of Social Representations | cs.SI cs.LG | We present DeepWalk, a novel approach for learning latent representations of
vertices in a network. These latent representations encode social relations in
a continuous vector space, which is easily exploited by statistical models.
DeepWalk generalizes recent advancements in language modeling and unsupervised
feature learning (or deep learning) from sequences of words to graphs. DeepWalk
uses local information obtained from truncated random walks to learn latent
representations by treating walks as the equivalent of sentences. We
demonstrate DeepWalk's latent representations on several multi-label network
classification tasks for social networks such as BlogCatalog, Flickr, and
YouTube. Our results show that DeepWalk outperforms challenging baselines which
are allowed a global view of the network, especially in the presence of missing
information. DeepWalk's representations can provide $F_1$ scores up to 10%
higher than competing methods when labeled data is sparse. In some experiments,
DeepWalk's representations are able to outperform all baseline methods while
using 60% less training data. DeepWalk is also scalable. It is an online
learning algorithm which builds useful incremental results, and is trivially
parallelizable. These qualities make it suitable for a broad class of real-world
applications such as network classification and anomaly detection.
| Bryan Perozzi, Rami Al-Rfou and Steven Skiena | 10.1145/2623330.2623732 | 1403.6652 | null | null |
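The core of DeepWalk is easy to sketch: generate truncated random walks over the graph, then treat each walk as a sentence for skip-gram training. The snippet below covers the walk-generation half with a plain-Python adjacency representation and illustrative hyperparameter values; the walks would then be fed to any word2vec/skip-gram implementation (e.g. gensim) to obtain the vertex embeddings.

```python
import random

def deepwalk_corpus(adj, num_walks=10, walk_length=40, seed=0):
    """Generate truncated random walks to use as 'sentences' for skip-gram.

    adj: dict mapping each vertex to a list of its neighbors.
    """
    rng = random.Random(seed)
    walks = []
    nodes = list(adj)
    for _ in range(num_walks):
        rng.shuffle(nodes)                  # one pass over all vertices
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                nbrs = adj[walk[-1]]
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append([str(v) for v in walk])
    return walks

# toy graph; the walks can be fed to any skip-gram trainer
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(deepwalk_corpus(adj, num_walks=1, walk_length=5)[0])
```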
Beyond L2-Loss Functions for Learning Sparse Models | stat.ML cs.CV cs.LG math.OC | Incorporating sparsity priors in learning tasks can give rise to simple and
interpretable models for complex high dimensional data. Sparse models have
found widespread use in structure discovery, recovering data from corruptions,
and a variety of large scale unsupervised and supervised learning problems.
Assuming the availability of sufficient data, these methods infer dictionaries
for sparse representations by optimizing for high-fidelity reconstruction. In
most scenarios, the reconstruction quality is measured using the squared
Euclidean distance, and efficient algorithms have been developed for both batch
and online learning cases. However, new application domains motivate looking
beyond conventional loss functions. For example, robust loss functions such as
$\ell_1$ and Huber are useful in learning outlier-resilient models, and the
quantile loss is beneficial in discovering structures that are representative
of a particular quantile. These new applications motivate our
work in generalizing sparse learning to a broad class of convex loss functions.
In particular, we consider the class of piecewise linear quadratic (PLQ) cost
functions that includes Huber, as well as $\ell_1$, quantile, Vapnik, hinge
loss, and smoothed variants of these penalties. We propose an algorithm to
learn dictionaries and obtain sparse codes when the data reconstruction
fidelity is measured using any smooth PLQ cost function. We provide convergence
guarantees for the proposed algorithm, and demonstrate the convergence behavior
using empirical experiments. Furthermore, we present three case studies that
require the use of PLQ cost functions: (i) robust image modeling, (ii) tag
refinement for image annotation and retrieval and (iii) computing empirical
confidence limits for subspace clustering.
| Karthikeyan Natesan Ramamurthy, Aleksandr Y. Aravkin, Jayaraman J.
Thiagarajan | null | 1403.6706 | null | null |
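The PLQ family named above is easy to picture as plain residual penalties. The following NumPy definitions use the standard forms of these losses (the paper treats them, and their smoothed variants, within one unified framework):

```python
import numpy as np

def l1(r):                        # robust absolute loss
    return np.abs(r)

def huber(r, delta=1.0):          # quadratic near zero, linear in the tails
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

def quantile(r, tau=0.5):         # pinball loss; tau selects the quantile
    return np.maximum(tau * r, (tau - 1.0) * r)

def vapnik(r, eps=0.1):           # epsilon-insensitive loss
    return np.maximum(0.0, np.abs(r) - eps)

def hinge(r):                     # applied to the margin residual 1 - y * f(x)
    return np.maximum(0.0, r)

r = np.linspace(-3.0, 3.0, 7)
print(huber(r))
print(quantile(r, tau=0.9))
```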
Comparison of Multi-agent and Single-agent Inverse Learning on a
Simulated Soccer Example | cs.LG cs.GT | We compare the performance of Inverse Reinforcement Learning (IRL) with the
relatively new model of Multi-agent Inverse Reinforcement Learning (MIRL).
Before comparing the methods, we extend a published Bayesian IRL approach,
applicable only to the case where the reward depends solely on state, to a
general one capable of tackling the case where the reward depends on both state and
action. Comparison between IRL and MIRL is made in the context of an abstract
soccer game, using both a game model in which the reward depends only on state
and one in which it depends on both state and action. Results suggest that the
IRL approach performs much worse than the MIRL approach. We speculate that the
underperformance of IRL is because it fails to capture equilibrium information
in the manner possible in MIRL.
| Xiaomin Lin and Peter A. Beling and Randy Cogill | null | 1403.6822 | null | null |
Online Learning of k-CNF Boolean Functions | cs.LG | This paper revisits the problem of learning a k-CNF Boolean function from
examples in the context of online learning under the logarithmic loss. In doing
so, we give a Bayesian interpretation to one of Valiant's celebrated PAC
learning algorithms, which we then build upon to derive two efficient, online,
probabilistic, supervised learning algorithms for predicting the output of an
unknown k-CNF Boolean function. We analyze the loss of our methods, and show
that the cumulative log-loss can be upper bounded, ignoring logarithmic
factors, by a polynomial function of the size of each example.
| Joel Veness and Marcus Hutter | null | 1403.6863 | null | null |
Automatic Segmentation of Broadcast News Audio using Self Similarity
Matrix | cs.SD cs.LG cs.MM | Generally, audio news broadcast on radio is composed of music, commercials,
news from correspondents and recorded statements in addition to the actual news
read by the newsreader. When news transcripts are available, automatic
segmentation of audio news broadcast to time align the audio with the text
transcription to build frugal speech corpora is essential. We address the
problem of identifying segmentation in the audio news broadcast corresponding
to the news read by the newsreader so that they can be mapped to the text
transcripts. The existing techniques produce sub-optimal solutions when used to
extract newsreader read segments. In this paper, we propose a new technique
which is able to identify the acoustic change points reliably using an acoustic
Self Similarity Matrix (SSM). We describe the two pass technique in detail and
verify its performance on real audio news broadcast of All India Radio for
different languages.
| Sapna Soni and Ahmed Imran and Sunil Kumar Kopparapu | null | 1403.6901 | null | null |
Closed-Form Training of Conditional Random Fields for Large Scale Image
Segmentation | cs.LG cs.CV | We present LS-CRF, a new method for very efficient large-scale training of
Conditional Random Fields (CRFs). It is inspired by existing closed-form
expressions for the maximum likelihood parameters of a generative graphical
model with tree topology. LS-CRF training requires only solving a set of
independent regression problems, for which closed-form expressions as well as
efficient iterative solvers are available. This makes it orders of magnitude
faster than conventional maximum likelihood learning for CRFs that require
repeated runs of probabilistic inference. At the same time, the models learned
by our method still allow for joint inference at test time. We apply LS-CRF to
the task of semantic image segmentation, showing that it is highly efficient,
even for loopy models where probabilistic inference is problematic. It allows
the training of image segmentation models from significantly larger training
sets than had been used previously. We demonstrate this on two new datasets
that form a second contribution of this paper. They consist of over 180,000
images with figure-ground segmentation annotations. Our large-scale experiments
show that the possibilities of CRF-based image segmentation are far from
exhausted, indicating, for example, that semi-supervised learning and the use
of non-linear predictors are promising directions for achieving higher
segmentation accuracy in the future.
| Alexander Kolesnikov, Matthieu Guillaumin, Vittorio Ferrari and
Christoph H. Lampert | null | 1403.7057 | null | null |
Conclusions from a NAIVE Bayes Operator Predicting the Medicare 2011
Transaction Data Set | cs.LG cs.CY physics.data-an | Introduction: The United States Federal Government operates one of the world's
largest medical insurance programs, Medicare, to ensure payment for clinical
services for the elderly, illegal aliens and those without the ability to pay
for their care directly. This paper evaluates the Medicare 2011 Transaction
Data Set which details the transfer of funds from Medicare to private and
public clinical care facilities for specific clinical services for the
operational year 2011. Methods: Data mining was conducted to establish the
relationships between reported and computed transaction values in the data set
to better understand the drivers of Medicare transactions at a programmatic
level. Results: The models averaged 88 for average model accuracy and 38 for
average Kappa during training. Some reported classes are highly independent
from the available data as their predictability remains stable regardless of
redaction of supporting and contradictory evidence. DRG or procedure type
appears to be unpredictable from the available financial transaction values.
Conclusions: Overlay hypotheses such as charges being driven by the volume
served, or DRG being related to charges or payments, are readily falsified in this
analysis despite 28 million Americans being billed through Medicare in 2011 and
the program distributing over 70 billion in this transaction set alone. It may
be impossible to predict the dependencies and data structures of the payer of last
resort without data from payers of first and second resort. Political concerns
about Medicare would be better served focusing on these first and second order
payer systems as what Medicare costs is not dependent on Medicare itself.
| Nick Williams | null | 1403.7087 | null | null |
A study on cost behaviors of binary classification measures in
class-imbalanced problems | cs.LG | This work investigates the cost behaviors of binary classification measures
in a background of class-imbalanced problems. Twelve performance measures are
studied, such as F measure, G-means in terms of accuracy rates, and of recall
and precision, balance error rate (BER), Matthews correlation coefficient
(MCC), Kappa coefficient, etc. A new perspective is presented for those
measures by revealing their cost functions with respect to the class imbalance
ratio. Basically, they are described by four types of cost functions. The
functions provide a theoretical understanding of why some measures are suitable
for dealing with class-imbalanced problems. Based on their cost functions, we
are able to conclude that G-means of accuracy rates and BER are suitable
measures because they show "proper" cost behaviors in terms of "a
misclassification from a small class will cause a greater cost than that from a
large class". On the contrary, F1 measure, G-means of recall and precision, MCC
and Kappa coefficient measures do not exhibit such behaviors and are therefore
unsuitable for dealing with class-imbalanced problems properly.
| Bao-Gang Hu and Wei-Ming Dong | null | 1403.7100 | null | null |
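To see the contrasting cost behaviors empirically, one can compute the measures directly from confusion counts while sweeping the imbalance ratio. A small sketch using the standard definitions (the paper's own cost-function analysis is analytical, not empirical):

```python
import numpy as np

def measures(tp, fn, fp, tn):
    """Binary classification measures from confusion counts."""
    tpr = tp / (tp + fn)               # recall of the (small) positive class
    tnr = tn / (tn + fp)
    prec = tp / (tp + fp)
    return {
        "gmean_acc": np.sqrt(tpr * tnr),        # G-means of accuracy rates
        "ber": 1.0 - 0.5 * (tpr + tnr),         # balanced error rate
        "f1": 2 * prec * tpr / (prec + tpr),
        "mcc": (tp * tn - fp * fn) / np.sqrt(
            float(tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
    }

# identical error *rates* under increasing imbalance: G-means and BER stay
# constant while F1 and MCC drift with the class-imbalance ratio
for neg in (100, 1_000, 10_000):
    print(neg, measures(tp=80, fn=20, fp=int(0.05 * neg), tn=int(0.95 * neg)))
```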
Data Generators for Learning Systems Based on RBF Networks | stat.ML cs.AI cs.LG | There are plenty of problems where the available data is scarce and
expensive to obtain. We propose a generator of semi-artificial data with similar
properties to the original data which enables development and testing of
different data mining algorithms and optimization of their parameters. The
generated data allow large-scale experimentation and simulations without
danger of overfitting. The proposed generator is based on RBF networks, which
learn sets of Gaussian kernels. These Gaussian kernels can be used in a
generative mode to generate new data from the same distributions. To assess
quality of the generated data we evaluated the statistical properties of the
generated data, structural similarity and predictive similarity using
supervised and unsupervised learning techniques. To determine usability of the
proposed generator we conducted a large scale evaluation using 51 UCI data
sets. The results show a considerable similarity between the original and
generated data and indicate that the method can be useful in several
development and simulation scenarios. We analyze possible improvements in
classification performance by adding different amounts of generated data to the
training set, performance on high dimensional data sets, and conditions when
the proposed approach is successful.
| Marko Robnik-\v{S}ikonja | 10.1109/TNNLS.2015.2429711 | 1403.7308 | null | null |
Distributed Reconstruction of Nonlinear Networks: An ADMM Approach | math.OC cs.DC cs.LG cs.SY | In this paper, we present a distributed algorithm for the reconstruction of
large-scale nonlinear networks. In particular, we focus on the identification
from time-series data of the nonlinear functional forms and associated
parameters of large-scale nonlinear networks. Recently, a nonlinear network
reconstruction problem was formulated as a nonconvex optimisation problem based
on the combination of a marginal likelihood maximisation procedure with
sparsity inducing priors. Using a convex-concave procedure (CCCP), an iterative
reweighted lasso algorithm was derived to solve the initial nonconvex
optimisation problem. By exploiting the structure of the objective function of
this reweighted lasso algorithm, a distributed algorithm can be designed. To
this end, we apply the alternating direction method of multipliers (ADMM) to
decompose the original problem into several subproblems. To illustrate the
effectiveness of the proposed methods, we use our approach to identify a
network of interconnected Kuramoto oscillators with different network sizes
(500 to 100,000 nodes).
| Wei Pan, Aivar Sootla and Guy-Bart Stan | null | 1403.7429 | null | null |
Approximate Decentralized Bayesian Inference | cs.LG | This paper presents an approximate method for performing Bayesian inference
in models with conditional independence over a decentralized network of
learning agents. The method first employs variational inference on each
individual learning agent to generate a local approximate posterior, the agents
transmit their local posteriors to other agents in the network, and finally
each agent combines its set of received local posteriors. The key insight in
this work is that, for many Bayesian models, approximate inference schemes
destroy symmetry and dependencies in the model that are crucial to the correct
application of Bayes' rule when combining the local posteriors. The proposed
method addresses this issue by including an additional optimization step in the
combination procedure that accounts for these broken dependencies. Experiments
on synthetic and real data demonstrate that the decentralized method provides
advantages in computational performance and predictive test likelihood over
previous batch and distributed methods.
| Trevor Campbell and Jonathan P. How | null | 1403.7471 | null | null |
DimmWitted: A Study of Main-Memory Statistical Analytics | cs.DB cs.LG math.OC stat.ML | We perform the first study of the tradeoff space of access methods and
replication to support statistical analytics using first-order methods executed
in the main memory of a Non-Uniform Memory Access (NUMA) machine. Statistical
analytics systems differ from conventional SQL-analytics in the amount and
types of memory incoherence they can tolerate. Our goal is to understand
tradeoffs in accessing the data in row- or column-order and at what granularity
one should share the model and data for a statistical task. We study this new
tradeoff space, and discover there are tradeoffs between hardware and
statistical efficiency. We argue that our tradeoff study may provide valuable
information for designers of analytics engines: for each system we consider,
our prototype engine can run at least one popular task at least 100x faster. We
conduct our study across five architectures using popular models including
SVMs, logistic regression, Gibbs sampling, and neural networks.
| Ce Zhang and Christopher R\'e | null | 1403.7550 | null | null |
Relevant Feature Selection Model Using Data Mining for Intrusion
Detection System | cs.CR cs.LG | Network intrusions have become a significant threat in recent years as a
result of the increased demand of computer networks for critical systems.
Intrusion detection system (IDS) has been widely deployed as a defense measure
for computer networks. Features extracted from network traffic can be used as
signs to detect anomalies. However, with the huge amount of network traffic,
the collected data contains irrelevant and redundant features that reduce the
detection rate of the IDS, consume a high amount of system resources, and
slow down the training and testing process of the IDS. In this paper, a new
feature selection model is proposed; this model can effectively select the most
relevant features for intrusion detection. Our goal is to build a lightweight
intrusion detection system by using a reduced features set. Deleting irrelevant
and redundant features helps to build a faster training and testing process, to
have less resource consumption as well as to maintain high detection rates. The
effectiveness and the feasibility of our feature selection model were verified
by several experiments on KDD intrusion detection dataset. The experimental
results strongly showed that our model is not only able to yield high detection
rates but also to speed up the detection process.
| Ayman I. Madbouly, Amr M. Gody, Tamer M. Barakat | 10.14445/22315381/IJETT-V9P296 | 1403.7726 | null | null |
Optimal Cooperative Cognitive Relaying and Spectrum Access for an Energy
Harvesting Cognitive Radio: Reinforcement Learning Approach | cs.NI cs.IT cs.LG math.IT | In this paper, we consider a cognitive setting under the context of
cooperative communications, where the cognitive radio (CR) user is assumed to
be a self-organized relay for the network. The CR user and the primary user
(PU) are assumed to be energy harvesters. The CR user cooperatively relays
some of the undelivered packets of the PU. Specifically, the CR user stores
a fraction of the undelivered primary packets in a relaying queue (buffer). It
manages the flow of the undelivered primary packets to its relaying queue using
the appropriate actions over time slots. Moreover, it has the decision of
choosing the used queue for channel accessing at idle time slots (slots where
the PU's queue is empty). It is assumed that one data packet transmission
dissipates one energy packet. The optimal policy changes according to the
primary and CR users' arrival rates to the data and energy queues as well as the
channels' connectivity. The CR user saves energy for the PU by taking the
responsibility of relaying the undelivered primary packets. It optimally
organizes its own energy packets to maximize its payoff as time progresses.
| Ahmed El Shafie and Tamer Khattab and Hussien Saad and Amr Mohamed | null | 1403.7735 | null | null |
Sharpened Error Bounds for Random Sampling Based $\ell_2$ Regression | cs.LG cs.NA stat.ML | Given a data matrix $X \in R^{n\times d}$ and a response vector $y \in
R^{n}$ with $n>d$, it costs $O(n d^2)$ time and $O(n d)$ space to solve the
least squares regression (LSR) problem. When $n$ and $d$ are both large,
exactly solving the LSR problem is very expensive. When $n \gg d$, one feasible
approach to speeding up LSR is to randomly embed $y$ and all columns of $X$
into a smaller subspace $R^c$; the induced LSR problem has the same number of
columns but far fewer rows, and it can be solved in $O(c d^2)$ time
and $O(c d)$ space.
We discuss in this paper two random sampling based methods for solving LSR
more efficiently. Previous work showed that leverage-score-based sampling for
LSR achieves $1+\epsilon$ accuracy when $c \geq O(d \epsilon^{-2} \log
d)$. In this paper we sharpen this error bound, showing that $c = O(d \log d +
d \epsilon^{-1})$ is enough for achieving $1+\epsilon$ accuracy. We also show
that when $c \geq O(\mu d \epsilon^{-2} \log d)$, the uniform sampling based
LSR attains a $2+\epsilon$ bound with positive probability.
| Shusen Wang | null | 1403.7737 | null | null |
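A compact NumPy sketch of leverage-score sampling for least squares, using the standard construction (exact leverage scores via a thin QR and the usual importance-sampling rescaling; the paper's contribution concerns how small the sample size c may be, not this basic recipe):

```python
import numpy as np

def leverage_sampled_lsr(X, y, c, seed=0):
    """Solve least squares approximately from c leverage-score-sampled rows."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Q, _ = np.linalg.qr(X)                    # thin QR: leverage scores are
    lev = np.sum(Q ** 2, axis=1)              # the squared row norms of Q
    p = lev / lev.sum()
    idx = rng.choice(n, size=c, replace=True, p=p)
    w = 1.0 / np.sqrt(c * p[idx])             # importance-sampling rescaling
    beta, *_ = np.linalg.lstsq(X[idx] * w[:, None], y[idx] * w, rcond=None)
    return beta

rng = np.random.default_rng(1)
X = rng.standard_normal((10_000, 20))
y = X @ np.ones(20) + 0.1 * rng.standard_normal(10_000)
print(np.linalg.norm(leverage_sampled_lsr(X, y, c=400) - np.ones(20)))
```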
Multi-label Ferns for Efficient Recognition of Musical Instruments in
Recordings | cs.LG cs.SD | In this paper we introduce multi-label ferns, and apply this technique for
automatic classification of musical instruments in audio recordings. We compare
the performance of our proposed method to a set of binary random ferns, using
jazz recordings as input data. Our main result is obtaining much faster
classification and higher F-score. We also achieve substantial reduction of the
model size.
| Miron B. Kursa, Alicja A. Wieczorkowska | null | 1403.7746 | null | null |
Auto-encoders: reconstruction versus compression | cs.NE cs.IT cs.LG math.IT | We discuss the similarities and differences between training an auto-encoder
to minimize the reconstruction error, and training the same auto-encoder to
compress the data via a generative model. Minimizing a codelength for the data
using an auto-encoder is equivalent to minimizing the reconstruction error plus
some correcting terms which have an interpretation as either a denoising or
contractive property of the decoding function. These terms are related but not
identical to those used in denoising or contractive auto-encoders [Vincent et
al. 2010, Rifai et al. 2011]. In particular, the codelength viewpoint fully
determines an optimal noise level for the denoising criterion.
| Yann Ollivier | null | 1403.7752 | null | null |
Sparse K-Means with $\ell_{\infty}/\ell_0$ Penalty for High-Dimensional
Data Clustering | stat.ML cs.LG stat.ME | Sparse clustering, which aims to find a proper partition of an extremely
high-dimensional data set with redundant noise features, has attracted
more and more interest in recent years. The existing studies commonly solve
the problem in a framework of maximizing the weighted feature contributions
subject to a $\ell_2/\ell_1$ penalty. Nevertheless, this framework has two
serious drawbacks: One is that the solution of the framework unavoidably
involves a considerable portion of redundant noise features in many situations,
and the other is that the framework neither offers intuitive explanations on
why this framework can select relevant features nor leads to any theoretical
guarantee for feature selection consistency.
In this article, we attempt to overcome those drawbacks through developing a
new sparse clustering framework which uses a $\ell_{\infty}/\ell_0$ penalty.
First, we introduce new concepts on optimal partitions and noise features for
the high-dimensional data clustering problems, based on which the previously
known framework can be intuitively explained in principle. Then, we apply the
suggested $\ell_{\infty}/\ell_0$ framework to formulate a new sparse k-means
model with the $\ell_{\infty}/\ell_0$ penalty ($\ell_0$-k-means for short). We
propose an efficient iterative algorithm for solving the $\ell_0$-k-means. To
deeply understand the behavior of $\ell_0$-k-means, we prove that the solution
yielded by the $\ell_0$-k-means algorithm has feature selection consistency
whenever the data matrix is generated from a high-dimensional Gaussian mixture
model. Finally, we provide experiments with both synthetic data and the Allen
Developing Mouse Brain Atlas data to support that the proposed $\ell_0$-k-means
exhibits better noise feature detection capacity over the previously known
sparse k-means with the $\ell_2/\ell_1$ penalty ($\ell_1$-k-means for short).
| Xiangyu Chang, Yu Wang, Rongjian Li, Zongben Xu | null | 1403.7890 | null | null |
Privacy Tradeoffs in Predictive Analytics | cs.CR cs.LG | Online services routinely mine user data to predict user preferences, make
recommendations, and place targeted ads. Recent research has demonstrated that
several private user attributes (such as political affiliation, sexual
orientation, and gender) can be inferred from such data. Can a
privacy-conscious user benefit from personalization while simultaneously
protecting her private attributes? We study this question in the context of a
rating prediction service based on matrix factorization. We construct a
protocol of interactions between the service and users that has remarkable
optimality properties: it is privacy-preserving, in that no inference algorithm
can succeed in inferring a user's private attribute with a probability better
than random guessing; it has maximal accuracy, in that no other
privacy-preserving protocol improves rating prediction; and, finally, it
involves a minimal disclosure, as the prediction accuracy strictly decreases
when the service reveals less information. We extensively evaluate our protocol
using several rating datasets, demonstrating that it successfully blocks the
inference of gender, age and political affiliation, while incurring less than
5% decrease in the accuracy of rating prediction.
| Stratis Ioannidis, Andrea Montanari, Udi Weinsberg, Smriti Bhagat,
Nadia Fawaz, Nina Taft | null | 1403.8084 | null | null |
Coding for Random Projections and Approximate Near Neighbor Search | cs.LG cs.DB cs.DS stat.CO | This technical note compares two coding (quantization) schemes for random
projections in the context of sub-linear time approximate near neighbor search.
The first scheme is based on uniform quantization while the second scheme
utilizes a uniform quantization plus a uniformly random offset (which has been
popular in practice). The prior work compared the two schemes in the context of
similarity estimation and training linear classifiers, with the conclusion that
the step of random offset is not necessary and may hurt the performance
(depending on the similarity level). The task of near neighbor search is
related to similarity estimation but has important distinctions and requires its
own study. In this paper, we demonstrate that in the context of near neighbor
search, the step of random offset is not needed either and may hurt the
performance (sometimes significantly so, depending on the similarity and other
parameters).
| Ping Li, Michael Mitzenmacher, Anshumali Shrivastava | null | 1403.8144 | null | null |
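The two schemes can be stated in a few lines: project onto random directions, then either quantize uniformly or add a uniform random offset before quantizing. A sketch (the bin width w and the Gaussian projection directions are illustrative choices):

```python
import numpy as np

def rp_codes(X, k, w=1.0, random_offset=False, seed=0):
    """Integer codes from k random projections, quantized with bin width w."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((X.shape[1], k))   # random projection directions
    P = X @ A
    if random_offset:
        P = P + rng.uniform(0.0, w, size=k)    # one uniform offset per direction
    return np.floor(P / w).astype(int)

X = np.random.default_rng(1).standard_normal((5, 32))
print(rp_codes(X, k=4))                        # uniform quantization only
print(rp_codes(X, k=4, random_offset=True))    # quantization plus random offset
```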
Using HMM in Strategic Games | cs.GT cs.IR cs.LG | In this paper we describe an approach to resolve strategic games in which
players can assume different types along the game. Our goal is to infer which
type the opponent is adopting at each moment so that we can increase the
player's odds. To achieve that we use Markov games combined with hidden Markov
model. We discuss a hypothetical example of a tennis game whose solution can be
applied to any game with similar characteristics.
| Mario Benevides (Federal University of Rio de Janeiro), Isaque Lima
(Federal University of Rio de Janeiro), Rafael Nader (Federal University of
Rio de Janeiro), Pedro Rougemont (Federal University of Rio de Janeiro) | 10.4204/EPTCS.144.6 | 1404.0086 | null | null |
Efficient Algorithms and Error Analysis for the Modified Nystrom Method | cs.LG | Many kernel methods suffer from high time and space complexities and are thus
prohibitive in big-data applications. To tackle the computational challenge,
the Nystr\"om method has been extensively used to reduce time and space
complexities by sacrificing some accuracy. The Nystr\"om method speeds up
computation by constructing an approximation of the kernel matrix using only a
few columns of the matrix. Recently, a variant of the Nystr\"om method called
the modified Nystr\"om method has demonstrated significant improvement over the
standard Nystr\"om method in approximation accuracy, both theoretically and
empirically.
In this paper, we propose two algorithms that make the modified Nystr\"om
method practical. First, we devise a simple column selection algorithm with a
provable error bound. Our algorithm is more efficient and easier to implement
than the state-of-the-art algorithm, and nearly as accurate. Second, with the
selected columns at hand, we propose an algorithm that computes the
approximation in lower time complexity than the approach in the previous work.
Furthermore, we prove that the modified Nystr\"om method is exact under certain
conditions, and we establish a lower error bound for the modified Nystr\"om
method.
| Shusen Wang, Zhihua Zhang | null | 1404.0138 | null | null |
Household Electricity Demand Forecasting -- Benchmarking
State-of-the-Art Methods | cs.LG stat.AP | The increasing use of renewable energy sources with variable output, such as
solar photovoltaic and wind power generation, calls for Smart Grids that
effectively manage flexible loads and energy storage. The ability to forecast
consumption at different locations in distribution systems will be a key
capability of Smart Grids. The goal of this paper is to benchmark
state-of-the-art methods for forecasting electricity demand on the household
level across different granularities and time scales in an explorative way,
thereby revealing potential shortcomings and finding promising directions for
future research in this area. We apply a number of forecasting methods
including ARIMA, neural networks, and exponential smoothing, using several
strategies for training data selection, in particular day type and sliding
window based strategies. We consider forecasting horizons ranging between 15
minutes and 24 hours. Our evaluation is based on two data sets containing the
power usage of individual appliances at second time granularity collected over
the course of several months. The results indicate that forecasting accuracy
varies significantly depending on the choice of forecasting methods/strategy
and the parameter configuration. Measured by the Mean Absolute Percentage Error
(MAPE), the considered state-of-the-art forecasting methods rarely beat
corresponding persistence forecasts. Overall, we observed MAPEs in the range
between 5 and >100%. The average MAPE for the first data set was ~30%, while it
was ~85% for the other data set. These results show substantial room for improvement.
Based on the identified trends and experiences from our experiments, we
contribute a detailed discussion of promising future research.
| Andreas Veit, Christoph Goebel, Rohit Tidke, Christoph Doblander and
Hans-Arno Jacobsen | null | 1404.0200 | null | null |
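Since persistence forecasts serve as the reference point, here is a minimal sketch of the persistence baseline and the MAPE metric used in the evaluation (generic textbook definitions; the paper's day-type and sliding-window strategies are more elaborate):

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent (zeros in `actual`
    would need special handling on real consumption data)."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def persistence_forecast(series, horizon):
    """Naive baseline: the last observed value persists over the whole
    horizon (day-type persistence would instead copy the same time slot
    from the previous day)."""
    return np.full(horizon, series[-1], dtype=float)

load = np.array([0.8, 1.1, 0.9, 1.3, 1.2, 1.0])   # e.g. kW readings
fc = persistence_forecast(load[:-3], horizon=3)
print(mape(load[-3:], fc))
```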
Active Deformable Part Models | cs.CV cs.LG | This paper presents an active approach for part-based object detection, which
optimizes the order of part filter evaluations and the time at which to stop
and make a prediction. Statistics describing the part responses are learned
from training data and are used to formalize the part scheduling problem as an
offline optimization. Dynamic programming is applied to obtain a policy, which
balances the number of part evaluations with the classification accuracy.
During inference, the policy is used as a look-up table to choose the part
order and the stopping time based on the observed filter responses. The method
is faster than cascade detection with deformable part models (which does not
optimize the part order) with negligible loss in accuracy when evaluated on the
PASCAL VOC 2007 and 2010 datasets.
| Menglong Zhu, Nikolay Atanasov, George J. Pappas, Kostas Daniilidis | null | 1404.0334 | null | null |
A Deep Representation for Invariance And Music Classification | cs.SD cs.LG stat.ML | Representations in the auditory cortex might be based on mechanisms similar
to those of the visual ventral stream: modules for building invariance to
transformations and multiple layers for compositionality and selectivity. In
this paper we propose the use of such computational modules for extracting
invariant and discriminative audio representations. Building on a theory of
invariance in hierarchical architectures, we propose a novel, mid-level
representation for acoustical signals, using the empirical distributions of
projections on a set of templates and their transformations. Under the
assumption that, by construction, this dictionary of templates is composed from
similar classes, and samples the orbit of variance-inducing signal
transformations (such as shift and scale), the resulting signature is
theoretically guaranteed to be unique, invariant to transformations and stable
to deformations. Modules of projection and pooling can then constitute layers
of deep networks, for learning composite representations. We present the main
theoretical and computational aspects of a framework for unsupervised learning
of invariant audio representations, empirically evaluated on music genre
classification.
| Chiyuan Zhang, Georgios Evangelopoulos, Stephen Voinea, Lorenzo
Rosasco, Tomaso Poggio | 10.1109/ICASSP.2014.6854954 | 1404.0400 | null | null |
Learning Two-input Linear and Nonlinear Analog Functions with a Simple
Chemical System | q-bio.MN cs.LG | The current biochemical information processing systems behave in a
predetermined manner because all features are defined during the design phase.
To make such unconventional computing systems reusable and programmable for
biomedical applications, adaptation, learning, and self-modification based on
external stimuli would be highly desirable. However, so far, it has been too
challenging to implement these in wet chemistries. In this paper we extend the
chemical perceptron, a model previously proposed by the authors, to function as
an analog instead of a binary system. The new analog asymmetric signal
perceptron learns through feedback and supports Michaelis-Menten kinetics. The
results show that our perceptron is able to learn linear and nonlinear
(quadratic) functions of two inputs. To the best of our knowledge, it is the
first simulated chemical system capable of doing so. The small number of
species and reactions and their simplicity allows for a mapping to an actual
wet implementation using DNA-strand displacement or deoxyribozymes. Our results
are an important step toward actual biochemical systems that can learn and
adapt.
| Peter Banda, Christof Teuscher | 10.1007/978-3-319-08123-6_2 | 1404.0427 | null | null |
Cellular Automata and Its Applications in Bioinformatics: A Review | cs.CE cs.LG | This paper aims at providing a survey on the problems that can be easily
addressed by cellular automata in bioinformatics. Some of the authors have
proposed algorithms for addressing some problems in bioinformatics but the
application of cellular automata in bioinformatics remains a largely unexplored
field of research. None of the researchers has tried to relate the major problems in
bioinformatics and find a common solution. Extensive literature surveys were
conducted. We have considered some papers in various journals and conferences
for conduct of our research. This paper provides intuition towards relating
various problems in bioinformatics logically and tries to attain a common
framework for addressing them.
| Pokkuluri Kiran Sree, Inampudi Ramesh Babu, SSSN Usha Devi N | null | 1404.0453 | null | null |
piCholesky: Polynomial Interpolation of Multiple Cholesky Factors for
Efficient Approximate Cross-Validation | cs.LG cs.NA | The dominant cost in solving least-squares problems using Newton's method
often that of factorizing the Hessian matrix over multiple values of the
regularization parameter ($\lambda$). We propose an efficient way to
interpolate the Cholesky factors of the Hessian matrix computed over a small
set of $\lambda$ values. This approximation enables us to optimally minimize
the hold-out error while incurring only a fraction of the cost compared to
exact cross-validation. We provide a formal error bound for our approximation
scheme and present solutions to a set of key implementation challenges that
allow our approach to maximally exploit the compute power of modern
architectures. We present a thorough empirical analysis over multiple datasets
to show the effectiveness of our approach.
| Da Kuang, Alex Gittens, Raffay Hamid | null | 1404.0466 | null | null |
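The core idea can be sketched in NumPy: factor H + λI at a few anchor λ values, fit a polynomial in λ to each Cholesky entry, and evaluate the polynomials at new λ values. This is only an illustration under assumed degree and anchor choices; the paper adds a formal error bound and a hardware-conscious implementation.

```python
import numpy as np

def cholesky_interpolator(H, lams, degree=2):
    """Fit per-entry polynomials in lambda to Cholesky factors of
    H + lambda * I computed at the anchor values in `lams`."""
    d = H.shape[0]
    Ls = np.stack([np.linalg.cholesky(H + lam * np.eye(d)) for lam in lams])
    # polyfit with 2-D y fits all d*d entry polynomials at once
    coeffs = np.polyfit(lams, Ls.reshape(len(lams), d * d), degree)
    def L_at(lam):
        # coeffs rows are highest-degree first; reverse before enumerating powers
        flat = sum(c * lam ** p for p, c in enumerate(coeffs[::-1]))
        return flat.reshape(d, d)
    return L_at

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
H = A.T @ A                                   # Hessian-like PSD matrix
L_at = cholesky_interpolator(H, lams=[0.1, 1.0, 10.0])
exact = np.linalg.cholesky(H + 3.0 * np.eye(10))
print(np.linalg.norm(L_at(3.0) - exact) / np.linalg.norm(exact))
```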
A probabilistic estimation and prediction technique for dynamic
continuous social science models: The evolution of the attitude of the Basque
Country population towards ETA as a case study | cs.LG | In this paper, we present a computational technique to deal with uncertainty
in dynamic continuous models in Social Sciences. Considering data from surveys,
the method consists of determining the probability distribution of the survey
output and this allows to sample data and fit the model to the sampled data
using a goodness-of-fit criterion based on the chi-square-test. Taking the
fitted parameters non-rejected by the chi-square-test, substituting them into
the model and computing their outputs, we build 95% confidence intervals in
each time instant capturing uncertainty of the survey data (probabilistic
estimation). Using the same set of obtained model parameters, we also provide a
prediction over the next few years with 95% confidence intervals (probabilistic
prediction). This technique is applied to a dynamic social model describing the
evolution of the attitude of the Basque Country population towards the
revolutionary organization ETA.
| Juan-Carlos Cort\'es, Francisco-J. Santonja, Ana-C. Tarazona,
Rafael-J. Villanueva, Javier Villanueva-Oller | null | 1404.0649 | null | null |
Exploiting Linear Structure Within Convolutional Networks for Efficient
Evaluation | cs.CV cs.LG | We present techniques for speeding up the test-time evaluation of large
convolutional networks, designed for object recognition tasks. These models
deliver impressive accuracy but each image evaluation requires millions of
floating point operations, making their deployment on smartphones and
Internet-scale clusters problematic. The computation is dominated by the
convolution operations in the lower layers of the model. We exploit the linear
structure present within the convolutional filters to derive approximations
that significantly reduce the required computation. Using large
state-of-the-art models, we demonstrate speedups of
convolutional layers on both CPU and GPU by a factor of 2x, while keeping the
accuracy within 1% of the original model.
| Emily Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, Rob Fergus | null | 1404.0736 | null | null |
Subspace Learning from Extremely Compressed Measurements | stat.ML cs.LG | We consider learning the principal subspace of a large set of vectors from an
extremely small number of compressive measurements of each vector. Our
theoretical results show that even a constant number of measurements per column
suffices to approximate the principal subspace to arbitrary precision, provided
that the number of vectors is large. This result is achieved by a simple
algorithm that computes the eigenvectors of an estimate of the covariance
matrix. The main insight is to exploit an averaging effect that arises from
applying a different random projection to each vector. We provide a number of
simulations confirming our theoretical results.
| Akshay Krishnamurthy, Martin Azizyan, Aarti Singh | null | 1404.0751 | null | null |
The Least Wrong Model Is Not in the Data | cs.LG | The true process that generated the data cannot be determined when multiple
explanations are possible. Prediction requires a model of the probability that
a process, chosen randomly from the set of candidate explanations, generates
some future observation. The best model includes all of the information
contained in the minimal description of the data that is not contained in the
data. It is closely related to the Halting Problem and is logarithmic in the
size of the data. Prediction is difficult because the ideal model is not
computable, and the best computable model is not "findable." However, the error
from any approximation can be bounded by the size of the description using the
model.
| Oscar Stiffelman | null | 1404.0789 | null | null |
Bayes and Naive Bayes Classifier | cs.LG | The Bayesian Classification represents a supervised learning method as well
as a statistical method for classification. It assumes an underlying
probabilistic model and allows us to capture uncertainty about the model in a
principled way by determining probabilities of the outcomes. This
classification is named after Thomas Bayes (1702-1761), who proposed the Bayes
Theorem. Bayesian classification provides practical learning algorithms in
which prior knowledge and observed data can be combined. It provides a useful
perspective for understanding and evaluating many learning algorithms,
calculates explicit probabilities for hypotheses, and is robust to noise in
input data. In statistical classification, the Bayes classifier minimises the
probability of misclassification. A simple special case of the Bayes
classifier is also called: 1) Idiot Bayes, 2) Naive Bayes, or 3) Simple Bayes.
| Vikramkumar (B092633), Vijaykumar B (B091956), Trilochan (B092654) | null | 1404.0933 | null | null |
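A minimal categorical naive Bayes grounds the description: class priors and per-feature conditional counts, combined under the conditional independence assumption with add-one smoothing. The toy weather data below is invented for illustration.

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Minimal categorical naive Bayes with Laplace (add-one) smoothing."""

    def fit(self, X, y):
        self.classes = Counter(y)                 # class prior counts
        self.counts = defaultdict(Counter)        # (class, feature j) -> value counts
        for xs, label in zip(X, y):
            for j, v in enumerate(xs):
                self.counts[(label, j)][v] += 1
        return self

    def predict(self, xs):
        def log_post(c):
            lp = math.log(self.classes[c])
            for j, v in enumerate(xs):
                cnt = self.counts[(c, j)]
                # independence assumption; +1 in the vocab size covers unseen values
                lp += math.log((cnt[v] + 1) / (sum(cnt.values()) + len(cnt) + 1))
            return lp
        return max(self.classes, key=log_post)

X = [("sunny", "hot"), ("rainy", "cool"), ("sunny", "cool"), ("rainy", "hot")]
y = ["no", "yes", "yes", "no"]
print(NaiveBayes().fit(X, y).predict(("sunny", "cool")))
```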
Kernel-Based Adaptive Online Reconstruction of Coverage Maps With Side
Information | cs.NI cs.LG stat.ML | In this paper, we address the problem of reconstructing coverage maps from
path-loss measurements in cellular networks. We propose and evaluate two
kernel-based adaptive online algorithms as an alternative to typical offline
methods. The proposed algorithms are application-tailored extensions of
powerful iterative methods such as the adaptive projected subgradient method
and a state-of-the-art adaptive multikernel method. Assuming that the moving
trajectories of users are available, it is shown how side information can be
incorporated in the algorithms to improve their convergence performance and the
quality of the estimation. The complexity is significantly reduced by imposing
sparsity-awareness in the sense that the algorithms exploit the compressibility
of the measurement data to reduce the amount of data which is saved and
processed. Finally, we present extensive simulations based on realistic data to
show that our algorithms provide fast, robust estimates of coverage maps in
real-world scenarios. Envisioned applications include path-loss prediction
along trajectories of mobile users as a building block for anticipatory
buffering or traffic offloading.
| Martin Kasparick, Renato L. G. Cavalcante, Stefan Valentin, Slawomir
Stanczak, Masahiro Yukawa | 10.1109/TVT.2015.2453391 | 1404.0979 | null | null |
Parallel Support Vector Machines in Practice | cs.LG | In this paper, we evaluate the performance of various parallel optimization
methods for Kernel Support Vector Machines on multicore CPUs and GPUs. In
particular, we provide the first comparison of algorithms with explicit and
implicit parallelization. Most existing parallel implementations for multi-core
or GPU architectures are based on explicit parallelization of Sequential
Minimal Optimization (SMO)---the programmers identified parallelizable
components and hand-parallelized them, specifically tuned for a particular
architecture. We compare these approaches with each other and with implicitly
parallelized algorithms---where the algorithm is expressed such that most of
the work is done within few iterations with large dense linear algebra
operations. These can be computed with highly-optimized libraries, that are
carefully parallelized for a large variety of parallel platforms. We highlight
the advantages and disadvantages of both approaches and compare them on various
benchmark data sets. We find an approximate implicitly parallel algorithm which
is surprisingly efficient, permits a much simpler implementation, and leads to
unprecedented speedups in SVM training.
| Stephen Tyree, Jacob R. Gardner, Kilian Q. Weinberger, Kunal Agrawal,
John Tran | null | 1404.1066 | null | null |
A Tutorial on Principal Component Analysis | cs.LG stat.ML | Principal component analysis (PCA) is a mainstay of modern data analysis - a
black box that is widely used but (sometimes) poorly understood. The goal of
this paper is to dispel the magic behind this black box. This manuscript
focuses on building a solid intuition for how and why principal component
analysis works. This manuscript crystallizes this knowledge by deriving, from
simple intuitions, the mathematics behind PCA. This tutorial does not shy away
from explaining the ideas informally, nor does it shy away from the
mathematics. The hope is that by addressing both aspects, readers of all levels
will be able to gain a better understanding of PCA as well as the when, the how
and the why of applying this technique.
| Jonathon Shlens | null | 1404.1100 | null | null |
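The tutorial's central computation fits in a few lines of NumPy: center the data, form the covariance matrix, and take its top eigenvectors (equivalently, the SVD of the centered data):

```python
import numpy as np

def pca(X, k):
    """PCA via eigendecomposition of the sample covariance matrix."""
    Xc = X - X.mean(axis=0)                     # center each feature
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]          # pick the top-k components
    return Xc @ vecs[:, order], vecs[:, order]  # scores, principal directions

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2)) @ np.array([[3.0, 0.5], [0.5, 0.2]])
scores, components = pca(X, k=1)
print(components.T)                             # dominant direction of variance
```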
Scalable Planning and Learning for Multiagent POMDPs: Extended Version | cs.AI cs.LG | Online, sample-based planning algorithms for POMDPs have shown great promise
in scaling to problems with large state spaces, but they become intractable for
large action and observation spaces. This is particularly problematic in
multiagent POMDPs where the action and observation space grows exponentially
with the number of agents. To combat this intractability, we propose a novel
scalable approach based on sample-based planning and factored value functions
that exploits structure present in many multiagent settings. This approach
applies not only in the planning case, but also in the Bayesian reinforcement
learning setting. Experimental results show that we are able to provide high
quality solutions to large multiagent planning and learning problems.
| Christopher Amato, Frans A. Oliehoek | null | 1404.1140 | null | null |
AIS-MACA- Z: MACA based Clonal Classifier for Splicing Site, Protein
Coding and Promoter Region Identification in Eukaryotes | cs.CE cs.LG | Bioinformatics incorporates information regarding biological data storage,
accessing mechanisms and presentation of characteristics within this data. Most
of the problems in bioinformatics can be addressed efficiently by computer
techniques. This paper aims at building a classifier based on Multiple
Attractor Cellular Automata (MACA) which uses fuzzy logic with version Z to
predict splicing site, protein coding and promoter region identification in
eukaryotes. It is strengthened with an artificial immune system technique
(AIS), the Clonal algorithm, for choosing rules of best fitness. The proposed
classifier can handle DNA sequences of lengths 54,108,162,252,354. This
classifier gives the exact boundaries of both protein and promoter regions with
an average accuracy of 90.6%. This classifier can predict the splicing site
with 97% accuracy. This classifier was tested with 1,97,000 data components
which were taken from Fickett & Toung, EPDnew, and other sequences from a
renowned medical university.
| Pokkuluri Kiran Sree, Inampudi Ramesh Babu, SSSN Usha Devi N | null | 1404.1144 | null | null |
Hierarchical Dirichlet Scaling Process | cs.LG | We present the \textit{hierarchical Dirichlet scaling process} (HDSP), a
Bayesian nonparametric mixed membership model. The HDSP generalizes the
hierarchical Dirichlet process (HDP) to model the correlation structure between
metadata in the corpus and mixture components. We construct the HDSP based on
the normalized gamma representation of the Dirichlet process, and this
construction allows incorporating a scaling function that controls the
membership probabilities of the mixture components. We develop two scaling
methods to demonstrate that different modeling assumptions can be expressed in
the HDSP. We also derive the corresponding approximate posterior inference
algorithms using variational Bayes. Through experiments on datasets of
newswire, medical journal articles, conference proceedings, and product
reviews, we show that the HDSP results in a better predictive performance than
labeled LDA, partially labeled LDA, and author topic model and a better
negative review classification performance than the supervised topic model and
SVM.
| Dongwoo Kim, Alice Oh | 10.1007/s10994-016-5621-5 | 1404.1282 | null | null |
Understanding Machine-learned Density Functionals | physics.chem-ph cs.LG physics.comp-ph stat.ML | Kernel ridge regression is used to approximate the kinetic energy of
non-interacting fermions in a one-dimensional box as a functional of their
density. The properties of different kernels and methods of cross-validation
are explored, and highly accurate energies are achieved. Accurate {\em
constrained optimal densities} are found via a modified Euler-Lagrange
constrained minimization of the total energy. A projected gradient descent
algorithm is derived using local principal component analysis. Additionally, a
sparse grid representation of the density can be used without degrading the
performance of the methods. The implications for machine-learned density
functional approximations are discussed.
| Li Li, John C. Snyder, Isabelle M. Pelaschier, Jessica Huang,
Uma-Naresh Niranjan, Paul Duncan, Matthias Rupp, Klaus-Robert M\"uller,
Kieron Burke | null | 1404.1333 | null | null |
Optimal learning with Bernstein Online Aggregation | stat.ML cs.LG math.ST stat.TH | We introduce a new recursive aggregation procedure called Bernstein Online
Aggregation (BOA). The exponential weights include an accuracy term and a
second order term that is a proxy of the quadratic variation as in Hazan and
Kale (2010). This second term stabilizes the procedure that is optimal in
different senses. We first obtain optimal regret bounds in the deterministic
context. Then, an adaptive version is the first exponential weights algorithm
that exhibits a second order bound with excess losses that appears first in
Gaillard et al. (2014). The second order bounds in the deterministic context
are extended to a general stochastic context using the cumulative predictive
risk. Such conversion provides the main result of the paper, an inequality of a
novel type comparing the procedure with any deterministic aggregation procedure
for an integrated criterion. Then we obtain an observable estimate of the excess
of risk of the BOA procedure. To assert the optimality, we consider finally the
iid case for strongly convex and Lipschitz continuous losses and we prove that
the optimal rate of aggregation of Tsybakov (2003) is achieved. The batch
version of the BOA procedure is then the first adaptive explicit algorithm that
satisfies an optimal oracle inequality with high probability.
| Olivier Wintenberger (LSTA) | null | 1404.1356 | null | null |
Orthogonal Rank-One Matrix Pursuit for Low Rank Matrix Completion | cs.LG math.NA stat.ML | In this paper, we propose an efficient and scalable low rank matrix
completion algorithm. The key idea is to extend orthogonal matching pursuit
method from the vector case to the matrix case. We further propose an economic
version of our algorithm by introducing a novel weight updating rule to reduce
the time and storage complexity. Both versions are computationally inexpensive
for each matrix pursuit iteration, and find satisfactory results in a few
iterations. Another advantage of our proposed algorithm is that it has only one
tunable parameter, which is the rank. It is easy to understand and to use by
the user. This becomes especially important in large-scale learning problems.
In addition, we rigorously show that both versions achieve a linear convergence
rate, which is significantly better than the previous known results. We also
empirically compare the proposed algorithms with several state-of-the-art
matrix completion algorithms on many real-world datasets, including the
large-scale recommendation dataset Netflix as well as the MovieLens datasets.
Numerical results show that our proposed algorithm is more efficient than
competing algorithms while achieving similar or better prediction performance.
| Zheng Wang, Ming-Jun Lai, Zhaosong Lu, Wei Fan, Hasan Davulcu and
Jieping Ye | null | 1404.1377 | null | null |
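An illustrative sketch of the rank-one matrix pursuit idea summarized above: greedily extract the top singular pair of the residual on the observed entries, then refit all weights by least squares. The paper's economic weight-updating rule is not reproduced, and the iteration count is an assumption.

```python
# Rank-one matrix pursuit sketch for matrix completion.
import numpy as np
from scipy.sparse.linalg import svds

def r1mp(M, mask, n_iters=10):
    """M: observed matrix (float, zeros off-mask); mask: boolean of observed entries."""
    basis, X = [], np.zeros_like(M)
    for _ in range(n_iters):
        residual = np.where(mask, M - X, 0.0)
        u, s, vt = svds(residual, k=1)                # best rank-one fit
        basis.append(np.outer(u[:, 0], vt[0]))
        # least-squares refit of all weights on the observed entries
        A = np.stack([B[mask] for B in basis], axis=1)
        theta, *_ = np.linalg.lstsq(A, M[mask], rcond=None)
        X = sum(t * B for t, B in zip(theta, basis))
    return X
```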
An Efficient Feature Selection in Classification of Audio Files | cs.LG | In this paper we focus on an efficient feature selection method for the
classification of audio files. The main objective is feature selection and
extraction. We select a set of features for further analysis, which form the
elements of the feature vector; the extraction step computes a numerical
representation that characterizes the audio using an existing toolbox. In this
study, Gain Ratio (GR) is used as the feature selection measure. GR selects the
splitting attribute that separates the tuples into different classes. Pulse
clarity is considered as a subjective measure and is used to calculate the gain
of audio-file features. The splitting criterion is employed in the application
to identify the class, i.e., the music genre, of a specific audio file from the
testing database. Experimental results indicate that by using GR the
application produces satisfactory results for music genre classification. After
dimensionality reduction, the best three features are selected from the various
audio-file features, and with this technique we obtain a successful
classification rate of more than 90%.
| Jayita Mitra and Diganta Saha | null | 1404.1491 | null | null |
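A small sketch of the Gain Ratio measure used in the record above for choosing a splitting attribute; the genre labels and binned tempo attribute are hypothetical examples.

```python
# Gain Ratio = Information Gain / Split Information, as used for attribute
# selection in decision-tree-style splits.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def gain_ratio(attribute, labels):
    values, counts = np.unique(attribute, return_counts=True)
    weights = counts / counts.sum()
    cond_entropy = sum(w * entropy(labels[attribute == v])
                       for w, v in zip(weights, values))
    info_gain = entropy(labels) - cond_entropy
    split_info = -(weights * np.log2(weights)).sum()
    return info_gain / split_info if split_info > 0 else 0.0

genres = np.array(["rock", "rock", "jazz", "jazz", "pop", "pop"])
tempo_bin = np.array(["fast", "fast", "slow", "slow", "fast", "slow"])
print(gain_ratio(tempo_bin, genres))
```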
Ensemble Committees for Stock Return Classification and Prediction | stat.ML cs.LG | This paper considers a portfolio trading strategy formulated by algorithms in
the field of machine learning. The profitability of the strategy is measured by
the algorithm's capability to consistently and accurately identify stock
indices with positive or negative returns, and to generate a preferred
portfolio allocation on the basis of a learned model. Stocks are characterized
by time series data sets consisting of technical variables that reflect market
conditions in a previous time interval, which are utilized to produce binary
classification decisions in subsequent intervals. The learned model is
constructed as a committee of random forest classifiers, a non-linear support
vector machine classifier, a relevance vector machine classifier, and a
constituent ensemble of k-nearest neighbors classifiers. The Global Industry
Classification Standard (GICS) is used to explore the ensemble model's efficacy
within the context of various fields of investment including Energy, Materials,
Financials, and Information Technology. Data from 2006 to 2012, inclusive, are
considered, which are chosen for providing a range of market circumstances for
evaluating the model. The model is observed to achieve an accuracy of
approximately 70% when predicting stock price returns three months in advance.
| James Brofos | null | 1404.1492 | null | null |
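A hypothetical sketch of the committee described above, assembled with scikit-learn's soft-voting wrapper from random forests, an RBF SVM, and a bagged k-NN ensemble. The relevance vector machine component is omitted because scikit-learn does not ship one, and the synthetic data stands in for the technical stock variables.

```python
# Soft-voting committee of heterogeneous classifiers on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
committee = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(kernel="rbf", probability=True, random_state=0)),
        ("knn", BaggingClassifier(KNeighborsClassifier(5), n_estimators=10,
                                  random_state=0)),
    ],
    voting="soft",
)
committee.fit(X, y)
print(committee.score(X, y))
```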
A Compression Technique for Analyzing Disagreement-Based Active Learning | cs.LG stat.ML | We introduce a new and improved characterization of the label complexity of
disagreement-based active learning, in which the leading quantity is the
version space compression set size. This quantity is defined as the size of the
smallest subset of the training data that induces the same version space. We
show various applications of the new characterization, including a tight
analysis of CAL and refined label complexity bounds for linear separators under
mixtures of Gaussians and axis-aligned rectangles under product densities. The
version space compression set size, as well as the new characterization of the
label complexity, can be naturally extended to agnostic learning problems, for
which we show new speedup results for two well known active learning
algorithms.
| Yair Wiener, Steve Hanneke, Ran El-Yaniv | null | 1404.1504 | null | null |
Exploring the power of GPU's for training Polyglot language models | cs.LG cs.CL | One of the major research trends currently is the evolution of heterogeneous
parallel computing. GP-GPU computing is being widely used and several
applications have been designed to exploit the massive parallelism that
GP-GPU's have to offer. While GPU's have always been widely used in areas of
computer vision for image processing, little has been done to investigate
whether the massive parallelism provided by GP-GPU's can be utilized
effectively for Natural Language Processing (NLP) tasks. In this work, we
investigate and explore the power of GP-GPU's in the task of learning language
models. More specifically, we investigate the performance of training Polyglot
language models using deep belief neural networks. We evaluate the performance
of training the model on the GPU and present optimizations that boost the
performance on the GPU. One of the key optimizations we propose increases the
performance of a function involved in calculating and updating the gradient by
approximately 50 times on the GPU for sufficiently large batch sizes. We show
that with the above optimizations, the GP-GPU's performance on the task
increases by a factor of approximately 3-4. The optimizations we made are
generic Theano optimizations and hence potentially boost the performance of
other models which rely on these operations. We also show that these
optimizations make the GPU's performance at this task comparable to that on
the CPU. We conclude by presenting a thorough evaluation of the applicability
of GP-GPU's for this task and highlight the factors limiting the performance of
training a Polyglot model on the GPU.
| Vivek Kulkarni, Rami Al-Rfou', Bryan Perozzi, Steven Skiena | null | 1404.1521 | null | null |
Sparse Coding: A Deep Learning using Unlabeled Data for High-Level
Representation | cs.LG cs.NE | Sparse coding is an unsupervised feature-learning algorithm for finding
succinct, slightly higher-level representations of inputs, and it has proved
to be an effective route to deep learning. Our objective is to use high-level
representations of unlabeled data to help with unsupervised learning tasks.
Compared with labeled data, unlabeled data is easier to acquire because,
unlike labeled data, it does not require particular class labels. This makes
deep learning broader and more applicable to practical problems. The main
problem with sparse coding is that it uses a quadratic loss function and a
Gaussian noise model, so its performance is poor when binary or integer-valued
or other non-Gaussian data is applied. We therefore first propose an algorithm
for solving the L1-regularized convex optimization problem so as to allow
high-level representation of unlabeled data. From this we derive an optimal
solution describing a deep learning approach based on sparse codes.
| R. Vidya, Dr.G.M.Nasira, R. P. Jaia Priyankka | 10.1109/WCCCT.2014.69 | 1404.1559 | null | null |
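A sketch of the sparse-coding inference step behind the record above: solving the L1-regularized least-squares problem with ISTA, a standard solver. The paper's own algorithm for this problem may differ, and the dictionary and signal here are synthetic.

```python
# ISTA for the coding step: min_a 0.5 * ||x - D a||^2 + lam * ||a||_1
import numpy as np

def ista(D, x, lam=0.1, n_iters=200):
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ a - x)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))       # overcomplete dictionary
x = D @ (rng.standard_normal(128) * (rng.random(128) < 0.05))
print(np.count_nonzero(ista(D, x)))      # sparse code support size
```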
Fast Supervised Hashing with Decision Trees for High-Dimensional Data | cs.CV cs.LG | Supervised hashing aims to map the original features to compact binary codes
that are able to preserve label-based similarity in the Hamming space.
Non-linear hash functions have demonstrated the advantage over linear ones due
to their powerful generalization capability. In the literature, kernel
functions are typically used to achieve non-linearity in hashing, which achieve
encouraging retrieval performance at the price of slow evaluation and training
time. Here we propose to use boosted decision trees for achieving non-linearity
in hashing, which are fast to train and evaluate, hence more suitable for
hashing with high dimensional data. In our approach, we first propose
sub-modular formulations for the hashing binary code inference problem and an
efficient GraphCut based block search method for solving large-scale inference.
Then we learn hash functions by training boosted decision trees to fit the
binary codes. Experiments demonstrate that our proposed method significantly
outperforms most state-of-the-art methods in retrieval precision and training
time. Especially for high-dimensional data, our method is orders of magnitude
faster than many methods in terms of training time.
| Guosheng Lin, Chunhua Shen, Qinfeng Shi, Anton van den Hengel, David
Suter | 10.1109/CVPR.2014.253 | 1404.1561 | null | null |
The Power of Online Learning in Stochastic Network Optimization | math.OC cs.LG cs.SY | In this paper, we investigate the power of online learning in stochastic
network optimization with unknown system statistics {\it a priori}. We are
interested in understanding how information and learning can be efficiently
incorporated into system control techniques, and what are the fundamental
benefits of doing so. We propose two \emph{Online Learning-Aided Control}
techniques, $\mathtt{OLAC}$ and $\mathtt{OLAC2}$, that explicitly utilize the
past system information in current system control via a learning procedure
called \emph{dual learning}. We prove strong performance guarantees of the
proposed algorithms: $\mathtt{OLAC}$ and $\mathtt{OLAC2}$ achieve the
near-optimal $[O(\epsilon), O([\log(1/\epsilon)]^2)]$ utility-delay tradeoff
and $\mathtt{OLAC2}$ possesses an $O(\epsilon^{-2/3})$ convergence time.
$\mathtt{OLAC}$ and $\mathtt{OLAC2}$ are probably the first algorithms that
simultaneously possess an explicit near-optimal delay guarantee and sub-linear
convergence time. Simulation results also confirm the superior performance of
the proposed algorithms in practice. To the best of our knowledge, our attempt
is the first to explicitly incorporate online learning into stochastic network
optimization and to demonstrate its power in both theory and practice.
| Longbo Huang, Xin Liu, Xiaohong Hao | null | 1404.1592 | null | null |
A Denoising Autoencoder that Guides Stochastic Search | cs.NE cs.LG | An algorithm is described that adaptively learns a non-linear mutation
distribution. It works by training a denoising autoencoder (DA) online at each
generation of a genetic algorithm to reconstruct a slowly decaying memory of
the best genotypes so far. A compressed hidden layer forces the autoencoder to
learn hidden features in the training set that can be used to accelerate search
on novel problems with similar structure. Its output neurons define a
probability distribution that we sample from to produce offspring solutions.
The algorithm outperforms a canonical genetic algorithm on several
combinatorial optimisation problems, e.g. multidimensional 0/1 knapsack
problem, MAXSAT, HIFF, and on parameter optimisation problems, e.g. Rastrigin
and Rosenbrock functions.
| Alexander W. Churchill and Siddharth Sigtia and Chrisantha Fernando | null | 1404.1614 | null | null |
Notes on Generalized Linear Models of Neurons | cs.NE cs.LG q-bio.NC | Experimental neuroscience increasingly requires tractable models for
analyzing and predicting the behavior of neurons and networks. The generalized
linear model (GLM) is an increasingly popular statistical framework for
analyzing neural data that is flexible, exhibits rich dynamic behavior and is
computationally tractable (Paninski, 2004; Pillow et al., 2008; Truccolo et
al., 2005). What follows is a brief summary of the primary equations governing
the application of GLM's to spike trains with a few sentences linking this work
to the larger statistical literature. Latter sections include extensions of a
basic GLM to model spatio-temporal receptive fields as well as network activity
in an arbitrary number of neurons.
| Jonathon Shlens | null | 1404.1999 | null | null |
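For reference, the standard GLM spike-train formulation summarized above (following the cited Paninski 2004 and Pillow et al. 2008 line of work) can be written as below; the filter names are the conventional ones, not notation taken from this particular note.

```latex
% Standard GLM spike-train formulation (conventional notation): neuron i's
% conditional intensity combines a stimulus filter k_i, a post-spike history
% filter h_i, coupling filters l_ij from other neurons, and a baseline b_i:
\lambda_i(t) = \exp\!\Big( \mathbf{k}_i \cdot \mathbf{x}(t)
  + \mathbf{h}_i \cdot \mathbf{y}^{\mathrm{hist}}_i(t)
  + \sum_{j \neq i} \mathbf{l}_{ij} \cdot \mathbf{y}^{\mathrm{hist}}_j(t)
  + b_i \Big)
% With time discretized into bins of width \Delta, spike counts obey
% y_i(t) \sim \mathrm{Poisson}\big(\lambda_i(t)\,\Delta\big), and the
% log-likelihood is \sum_t \big( y_i(t)\log\lambda_i(t)
%   - \lambda_i(t)\,\Delta \big) up to constants.
```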
Optimistic Risk Perception in the Temporal Difference error Explains the
Relation between Risk-taking, Gambling, Sensation-seeking and Low Fear | cs.LG q-bio.NC | Understanding the affective, cognitive and behavioural processes involved in
risk taking is essential for treatment and for setting environmental conditions
to limit damage. Using Temporal Difference Reinforcement Learning (TDRL) we
computationally investigated the effect of optimism in risk perception in a
variety of goal-oriented tasks. Optimism in risk perception was studied by
varying the calculation of the Temporal Difference error, i.e., delta, in three
ways: realistic (stochastically correct), optimistic (assuming action control),
and overly optimistic (assuming outcome control). We show that for the gambling
task individuals with 'healthy' perception of control, i.e., action optimism,
do not develop gambling behaviour while individuals with 'unhealthy' perception
of control, i.e., outcome optimism, do. We show that high intensity of
sensations and low levels of fear co-occur due to optimistic risk perception.
We found that overly optimistic risk perception (outcome optimism) results in
risk taking and in persistent gambling behaviour in addition to high intensity
of sensations. We discuss how our results replicate risk-taking related
phenomena.
| Joost Broekens and Tim Baarslag | null | 1404.2078 | null | null |
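One plausible reading of the three delta computations described above, sketched as code; the paper's exact formulations may differ, so treat the three branches as illustrative assumptions rather than the authors' definitions.

```python
# Three TD-error variants corresponding to different risk perceptions:
# "realistic" uses the sampled outcome, "optimistic" assumes the agent can
# pick the best action, "overly_optimistic" assumes the best outcome occurs.
def td_error(r, v_s, sampled_next_v, best_action_next_v, best_outcome_next_v,
             mode, gamma=0.95):
    if mode == "realistic":
        target = r + gamma * sampled_next_v
    elif mode == "optimistic":           # assumes action control
        target = r + gamma * best_action_next_v
    elif mode == "overly_optimistic":    # assumes outcome control
        target = r + gamma * best_outcome_next_v
    else:
        raise ValueError(mode)
    return target - v_s
```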
Efficiency of conformalized ridge regression | cs.LG stat.ML | Conformal prediction is a method of producing prediction sets that can be
applied on top of a wide range of prediction algorithms. The method has a
guaranteed coverage probability under the standard IID assumption regardless of
whether the assumptions (often considerably more restrictive) of the underlying
algorithm are satisfied. However, for the method to be really useful it is
desirable that in the case where the assumptions of the underlying algorithm
are satisfied, the conformal predictor loses little in efficiency as compared
with the underlying algorithm (while, being a conformal predictor, it has the
stronger guarantee of validity). In this paper we explore the degree to which
this additional requirement of efficiency is satisfied in the case of Bayesian
ridge regression; we find that asymptotically conformal prediction sets differ
little from ridge regression prediction intervals when the standard Bayesian
assumptions are satisfied.
| Evgeny Burnaev and Vladimir Vovk | null | 1404.2083 | null | null |
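A minimal sketch of conformal prediction on top of ridge regression. For simplicity this uses the split-conformal variant with absolute residuals as nonconformity scores; the paper analyzes full conformal prediction, which is related but not identical, and the data here is synthetic.

```python
# Split-conformal prediction interval on top of ridge regression.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 5))
y = X @ rng.standard_normal(5) + 0.3 * rng.standard_normal(400)

X_tr, y_tr = X[:200], y[:200]            # proper training set
X_cal, y_cal = X[200:], y[200:]          # calibration set

model = Ridge(alpha=1.0).fit(X_tr, y_tr)
scores = np.abs(y_cal - model.predict(X_cal))   # nonconformity scores
alpha = 0.1                                      # 90% coverage target
q = np.quantile(scores,
                np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores))

x_new = rng.standard_normal((1, 5))
pred = model.predict(x_new)[0]
print(f"prediction interval: [{pred - q:.3f}, {pred + q:.3f}]")
```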
Towards the Safety of Human-in-the-Loop Robotics: Challenges and
Opportunities for Safety Assurance of Robotic Co-Workers | cs.RO cs.LG | The success of the human-robot co-worker team in a flexible manufacturing
environment where robots learn from demonstration heavily relies on the correct
and safe operation of the robot. How this can be achieved is a challenge that
requires addressing both technical as well as human-centric research questions.
In this paper we discuss the state of the art in safety assurance, existing as
well as emerging standards in this area, and the need for new approaches to
safety assurance in the context of learning machines. We then focus on robotic
learning from demonstration, the challenges these techniques pose to safety
assurance and indicate opportunities to integrate safety considerations into
algorithms "by design". Finally, from a human-centric perspective, we stipulate
that, to achieve high levels of safety and ultimately trust, the robotic
co-worker must meet the innate expectations of the humans it works with. It is
our aim to stimulate a discussion focused on the safety aspects of
human-in-the-loop robotics, and to foster multidisciplinary collaboration to
address the research challenges identified.
| Kerstin Eder, Chris Harper, Ute Leonards | 10.1109/ROMAN.2014.6926328 | 1404.2229 | null | null |
Power System Parameters Forecasting Using Hilbert-Huang Transform and
Machine Learning | cs.LG stat.ML | A novel hybrid data-driven approach is developed for forecasting power system
parameters with the goal of increasing the efficiency of short-term forecasting
studies for non-stationary time-series. The proposed approach is based on mode
decomposition and a feature analysis of initial retrospective data using the
Hilbert-Huang transform and machine learning algorithms. The random forests and
gradient boosting trees learning techniques were examined. The decision tree
techniques were used to rank the importance of variables employed in the
forecasting models. The Mean Decrease Gini index is employed as an impurity
function. The resulting hybrid forecasting models employ the radial basis
function neural network and support vector regression. Apart from the
introduction and references, the paper is organized as follows. Section 2
presents the background and a review of several approaches for short-term
forecasting of power system parameters. In Section 3, a hybrid machine
learning-based algorithm using the Hilbert-Huang transform is developed for
short-term forecasting of power system parameters. Section 4 describes the
decision tree learning algorithms used to assess variable importance. Finally,
Section 6 presents experimental results for the following electric power
problems: active power flow forecasting, electricity price forecasting, and
wind speed and direction forecasting.
| Victor Kurbatsky, Nikita Tomin, Vadim Spiryaev, Paul Leahy, Denis
Sidorov and Alexei Zhukov | null | 1404.2353 | null | null |
A Distributed Frank-Wolfe Algorithm for Communication-Efficient Sparse
Learning | cs.DC cs.AI cs.LG stat.ML | Learning sparse combinations is a frequent theme in machine learning. In this
paper, we study its associated optimization problem in the distributed setting
where the elements to be combined are not centrally located but spread over a
network. We address the key challenges of balancing communication costs and
optimization errors. To this end, we propose a distributed Frank-Wolfe (dFW)
algorithm. We obtain theoretical guarantees on the optimization error
$\epsilon$ and communication cost that do not depend on the total number of
combining elements. We further show that the communication cost of dFW is
optimal by deriving a lower-bound on the communication cost required to
construct an $\epsilon$-approximate solution. We validate our theoretical
analysis with empirical studies on synthetic and real-world data, which
demonstrate that dFW outperforms both baselines and competing methods. We also
study the performance of dFW when the conditions of our analysis are relaxed,
and show that dFW is fairly robust.
| Aur\'elien Bellet, Yingyu Liang, Alireza Bagheri Garakani,
Maria-Florina Balcan, Fei Sha | null | 1404.2644 | null | null |
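A centralized Frank-Wolfe sketch for an l1-constrained least-squares problem; the paper distributes this loop over a network, but the per-iteration structure (select one atom, take a convex step) is what keeps the communication cost independent of the total number of combining elements. The radius and iteration count are illustrative assumptions.

```python
# Frank-Wolfe for min ||A x - b||^2 subject to ||x||_1 <= radius.
import numpy as np

def frank_wolfe_lasso(A, b, radius=1.0, n_iters=100):
    x = np.zeros(A.shape[1])
    for t in range(n_iters):
        grad = A.T @ (A @ x - b)
        i = np.argmax(np.abs(grad))          # best atom of the l1 ball
        s = np.zeros_like(x)
        s[i] = -radius * np.sign(grad[i])
        x += (2.0 / (t + 2)) * (s - x)       # standard FW step size
    return x
```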
Open problem: Tightness of maximum likelihood semidefinite relaxations | math.OC cs.LG stat.ML | We have observed an interesting, yet unexplained, phenomenon: Semidefinite
programming (SDP) based relaxations of maximum likelihood estimators (MLE) tend
to be tight in recovery problems with noisy data, even when MLE cannot exactly
recover the ground truth. Several results establish tightness of SDP based
relaxations in the regime where exact recovery from MLE is possible. However,
to the best of our knowledge, their tightness is not understood beyond this
regime. As an illustrative example, we focus on the generalized Procrustes
problem.
| Afonso S. Bandeira and Yuehaw Khoo and Amit Singer | null | 1404.2655 | null | null |
A New Clustering Approach for Anomaly Intrusion Detection | cs.DC cs.CR cs.LG | Recent advances in technology have made our work easier compared to earlier
times. Computer networks are growing day by day, but the security of computers
and networks has always been a major concern for organizations ranging from
small to large enterprises. Although organizations are aware of possible
threats and attacks and prepare accordingly, attackers are still able to
exploit loopholes to mount attacks. Intrusion detection is one of the major
fields of research, and researchers are trying to find new algorithms for
detecting intrusions. Clustering techniques from data mining are an area of
active research for detecting possible intrusions and attacks. This paper
presents a new clustering approach for anomaly intrusion detection based on
the K-medoids clustering method and certain modifications of it. The proposed
algorithm achieves a high detection rate and overcomes the disadvantages of
the K-means algorithm.
| Ravi Ranjan and G. Sahoo | 10.5121/ijdkp.2014.4203 | 1404.2772 | null | null |
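A minimal PAM-style K-medoids sketch of the clustering underlying the approach above; the paper's specific modifications are not reproduced, and the sketch assumes no cluster empties out during the updates.

```python
# K-medoids: assign points to the nearest medoid, then move each medoid to
# the cluster member minimizing total within-cluster distance.
import numpy as np

def k_medoids(X, k, n_iters=50, seed=0):
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iters):
        labels = D[:, medoids].argmin(axis=1)            # nearest medoid
        new = np.array([
            np.where(labels == c)[0][
                D[np.ix_(labels == c, labels == c)].sum(axis=1).argmin()]
            for c in range(k)
        ])
        if np.array_equal(new, medoids):
            break
        medoids = new
    return labels, medoids
```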
A Networks and Machine Learning Approach to Determine the Best College
Coaches of the 20th-21st Centuries | stat.AP cs.LG cs.SI | Our objective is to find the five best college sports coaches of past century
for three different sports. We decided to look at men's basketball, football,
and baseball. We wanted to use an approach that could definitively determine
team skill from the games played, and then use a machine-learning algorithm to
calculate the correct coach skills for each team in a given year. We created a
networks-based model to calculate team skill from historical game data. A
digraph was created for each year in each sport. Nodes represented teams, and
edges represented a game played between two teams. The arrowhead pointed
towards the losing team. We calculated the team skill of each graph using a
right-hand eigenvector centrality measure. This way, teams that beat good teams
will be ranked higher than teams that beat mediocre teams. The eigenvector
centrality rankings for most years were well correlated with tournament
performance and poll-based rankings. We assumed that the relationship between
coach skill $C_s$, player skill $P_s$, and team skill $T_s$ was $C_s \cdot P_s
= T_s$. We then created a function to describe the probability that a given
score difference would occur based on player skill and coach skill. We
multiplied the probabilities of all edges in the network together to find the
probability that the correct network would occur with any given player skill
and coach skill matrix. We were able to determine player skill as a function
of team skill and coach skill, eliminating the need to optimize two unknown
matrices. The top five coaches in each year were noted, and the top coach of
all time was calculated by dividing the number of times that coach ranked in
the yearly top five by the years said coach had been active.
| Tian-Shun Jiang, Zachary Polizzi, Christopher Yuan | null | 1404.2885 | null | null |
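A sketch of the ranking step described above: build the win/loss digraph and rank teams by a dominant right eigenvector obtained with power iteration. The damping constant and the toy game list are illustrative assumptions.

```python
# Eigenvector-centrality-style team ranking from game outcomes.
import numpy as np

def team_skills(games, n_teams, n_iters=100):
    """games: list of (winner, loser) index pairs."""
    A = np.zeros((n_teams, n_teams))
    for winner, loser in games:
        A[winner, loser] += 1.0      # arrowhead points toward the loser
    v = np.ones(n_teams) / n_teams
    for _ in range(n_iters):
        v = A @ v + 1e-6             # small damping keeps v strictly positive
        v /= v.sum()
    return v                          # higher value = stronger team

games = [(0, 1), (0, 2), (1, 2), (3, 0)]
print(team_skills(games, n_teams=4))
```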
Thoughts on a Recursive Classifier Graph: a Multiclass Network for Deep
Object Recognition | cs.CV cs.LG cs.NE | We propose a general multi-class visual recognition model, termed the
Classifier Graph, which aims to generalize and integrate ideas from many of
today's successful hierarchical recognition approaches. Our graph-based model
has the advantage of enabling rich interactions between classes from different
levels of interpretation and abstraction. The proposed multi-class system is
efficiently learned using step by step updates. The structure consists of
simple logistic linear layers with inputs from features that are automatically
selected from a large pool. Each newly learned classifier becomes a potential
new feature. Thus, our feature pool can consist both of initial manually
designed features as well as learned classifiers from previous steps (graph
nodes), each copied many times at different scales and locations. In this
manner we can learn and grow both a deep, complex graph of classifiers and a
rich pool of features at different levels of abstraction and interpretation.
Our proposed graph of classifiers becomes a multi-class system with a recursive
structure, suitable for deep detection and recognition of several classes
simultaneously.
| Marius Leordeanu and Rahul Sukthankar | null | 1404.2903 | null | null |
Gradient-based Laplacian Feature Selection | cs.LG | Analysis of high dimensional noisy data is of essence across a variety of
research fields. Feature selection techniques are designed to find the relevant
feature subset that can facilitate classification or pattern detection.
Traditional (supervised) feature selection methods utilize label information to
guide the identification of relevant feature subsets. In this paper, however,
we consider the unsupervised feature selection problem. Without the label
information, it is particularly difficult to identify a small set of relevant
features due to the noisy nature of real-world data which corrupts the
intrinsic structure of the data. Our Gradient-based Laplacian Feature Selection
(GLFS) selects important features by minimizing the variance of the Laplacian
regularized least squares regression model. With $\ell_1$ relaxation, GLFS can
find a sparse subset of features that is relevant to the Laplacian manifolds.
Extensive experiments on simulated, three real-world object recognition and two
computational biology datasets, have illustrated the power and superior
performance of our approach over multiple state-of-the-art unsupervised feature
selection methods. Additionally, we show that GLFS selects a sparser set of
more relevant features in a supervised setting outperforming the popular
elastic net methodology.
| Bo Wang and Anna Goldenberg | null | 1404.2948 | null | null |
A Tutorial on Independent Component Analysis | cs.LG stat.ML | Independent component analysis (ICA) has become a standard data analysis
technique applied to an array of problems in signal processing and machine
learning. This tutorial provides an introduction to ICA based on linear algebra
formulating an intuition for ICA from first principles. The goal of this
tutorial is to provide a solid foundation on this advanced topic so that one
might learn the motivation behind ICA, learn why and when to apply this
technique and in the process gain an introduction to this exciting field of
active research.
| Jonathon Shlens | null | 1404.2986 | null | null |
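A quick illustration of ICA in practice, using scikit-learn's FastICA (one standard algorithm the tutorial above motivates) to unmix two synthetic sources; the mixing matrix and sources are illustrative.

```python
# ICA demo: recover two independent sources from an unknown linear mixture.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]  # two independent sources
X = S @ rng.standard_normal((2, 2)).T             # unknown linear mixing
S_hat = FastICA(n_components=2, random_state=0).fit_transform(X)
print(S_hat.shape)   # recovered sources, up to permutation and scale
```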
Bayesian image segmentations by Potts prior and loopy belief propagation | cs.CV cond-mat.dis-nn cond-mat.stat-mech cs.LG stat.ML | This paper presents a Bayesian image segmentation model based on Potts prior
and loopy belief propagation. The proposed Bayesian model involves several
terms, including the pairwise interactions of Potts models, and the average
vectors and covariant matrices of Gauss distributions in color image modeling.
These terms are often referred to as hyperparameters in statistical machine
learning theory. In order to determine these hyperparameters, we propose a new
scheme for hyperparameter estimation based on conditional maximization of
entropy in the Potts prior. The algorithm is given based on loopy belief
propagation. In addition, we compare our conditional maximum entropy framework
with the conventional maximum likelihood framework, and also clarify how the
first-order phase transitions in LBPs for Potts models influence our
hyperparameter estimation procedures.
| Kazuyuki Tanaka, Shun Kataoka, Muneki Yasuda, Yuji Waizumi and
Chiou-Ting Hsu | 10.7566/JPSJ.83.124002 | 1404.3012 | null | null |
On the Ground Validation of Online Diagnosis with Twitter and Medical
Records | cs.SI cs.CL cs.LG | Social media has been considered as a data source for tracking disease.
However, most analyses are based on models that prioritize strong correlation
with population-level disease rates over determining whether or not specific
individual users are actually sick. Taking a different approach, we develop a
novel system for social-media based disease detection at the individual level
using a sample of professionally diagnosed individuals. Specifically, we
develop a system for making an accurate influenza diagnosis based on an
individual's publicly available Twitter data. We find that about half (17/35 =
48.57%) of the users in our sample that were sick explicitly discuss their
disease on Twitter. By developing a meta classifier that combines text
analysis, anomaly detection, and social network analysis, we are able to
diagnose an individual with greater than 99% accuracy even if she does not
discuss her health.
| Todd Bodnar, Victoria C Barclay, Nilam Ram, Conrad S Tucker, Marcel
Salath\'e | 10.1145/2567948.2579272 | 1404.3026 | null | null |
Pareto-Path Multi-Task Multiple Kernel Learning | cs.LG | A traditional and intuitively appealing Multi-Task Multiple Kernel Learning
(MT-MKL) method is to optimize the sum (thus, the average) of objective
functions with (partially) shared kernel function, which allows information
sharing amongst tasks. We point out that the obtained solution corresponds to a
single point on the Pareto Front (PF) of a Multi-Objective Optimization (MOO)
problem, which considers the concurrent optimization of all task objectives
involved in the Multi-Task Learning (MTL) problem. Motivated by this last
observation and arguing that the former approach is heuristic, we propose a
novel Support Vector Machine (SVM) MT-MKL framework, that considers an
implicitly-defined set of conic combinations of task objectives. We show that
solving our framework produces solutions along a path on the aforementioned PF
and that it subsumes the optimization of the average of objective functions as
a special case. Using algorithms we derived, we demonstrate through a series of
experimental results that the framework is capable of achieving better
classification performance, when compared to other similar MTL approaches.
| Cong Li, Michael Georgiopoulos, Georgios C. Anagnostopoulos | 10.1109/TNNLS.2014.2309939 | 1404.3190 | null | null |
Compressive classification and the rare eclipse problem | cs.LG cs.IT math.IT math.ST stat.TH | This paper addresses the fundamental question of when convex sets remain
disjoint after random projection. We provide an analysis using ideas from
high-dimensional convex geometry. For ellipsoids, we provide a bound in terms
of the distance between these ellipsoids and simple functions of their
polynomial coefficients. As an application, this theorem provides bounds for
compressive classification of convex sets. Rather than assuming that the data
to be classified is sparse, our results show that the data can be acquired via
very few measurements yet will remain linearly separable. We demonstrate the
feasibility of this approach in the context of hyperspectral imaging.
| Afonso S. Bandeira and Dustin G. Mixon and Benjamin Recht | null | 1404.3203 | null | null |
Cost-Effective HITs for Relative Similarity Comparisons | cs.CV cs.LG | Similarity comparisons of the form "Is object a more similar to b than to c?"
are useful for computer vision and machine learning applications.
Unfortunately, an embedding of $n$ points is specified by $n^3$ triplets,
making collecting every triplet an expensive task. Noticing this difficulty,
other researchers have investigated more intelligent triplet sampling
techniques, but they do not study their effectiveness or their potential
drawbacks. Although it is important to reduce the number of collected triplets,
it is also important to understand how best to display a triplet collection
task to a user. In this work we explore an alternative display for collecting
triplets and analyze the monetary cost and speed of the display. We propose
best practices for creating cost effective human intelligence tasks for
collecting triplets. We show that rather than changing the sampling algorithm,
simple changes to the crowdsourcing UI can lead to much higher quality
embeddings. We also provide a dataset as well as the labels collected from
crowd workers.
| Michael J. Wilber and Iljung S. Kwak and Serge J. Belongie | null | 1404.3291 | null | null |
Near-optimal sample compression for nearest neighbors | cs.LG cs.CC | We present the first sample compression algorithm for nearest neighbors with
non-trivial performance guarantees. We complement these guarantees by
demonstrating almost matching hardness lower bounds, which show that our bound
is nearly optimal. Our result yields new insight into margin-based nearest
neighbor classification in metric spaces and allows us to significantly sharpen
and simplify existing bounds. Some encouraging empirical results are also
presented.
| Lee-Ad Gottlieb and Aryeh Kontorovich and Pinhas Nisnevitch | null | 1404.3368 | null | null |
Complexity theoretic limitations on learning DNF's | cs.LG cs.CC | Using the recently developed framework of [Daniely et al, 2014], we show that
under a natural assumption on the complexity of refuting random K-SAT formulas,
learning DNF formulas is hard. Furthermore, the same assumption implies the
hardness of learning intersections of $\omega(\log(n))$ halfspaces,
agnostically learning conjunctions, as well as virtually all (distribution
free) learning problems that were previously shown hard (under complexity
assumptions).
| Amit Daniely and Shai Shalev-Shwartz | null | 1404.3378 | null | null |
Generalized version of the support vector machine for binary
classification problems: supporting hyperplane machine | cs.LG stat.ML | In this paper we propose a generalized version of the SVM for binary
classification problems in the case of an arbitrary transformation x -> y. An
approach similar to the classic SVM method is used. The problem is explained
in detail. Various formulations of the primal and dual problems are proposed.
For one of the most important cases, the formulae are derived in detail. A
simple computational example is demonstrated. The algorithm and its
implementation are presented in the Octave language.
| E. G. Abramov, A. B. Komissarov, D. A. Kornyakov | null | 1404.3415 | null | null |
Anytime Hierarchical Clustering | stat.ML cs.IR cs.LG | We propose a new anytime hierarchical clustering method that iteratively
transforms an arbitrary initial hierarchy on the configuration of measurements
along a sequence of trees which, we prove, must terminate for a fixed data set
in a chain of nested partitions that satisfies a natural homogeneity
requirement.
Each recursive step re-edits the tree so as to improve a local measure of
cluster homogeneity that is compatible with a number of commonly used (e.g.,
single, average, complete) linkage functions. As an alternative to the standard
batch algorithms, we present numerical evidence to suggest that appropriate
adaptations of this method can yield decentralized, scalable algorithms
suitable for distributed/parallel computation of clustering hierarchies and
online tracking of clustering trees applicable to large, dynamically changing
databases and anomaly detection.
| Omur Arslan and Daniel E. Koditschek | null | 1404.3439 | null | null |
Random forests with random projections of the output space for high
dimensional multi-label classification | stat.ML cs.LG | We adapt the idea of random projections applied to the output space, so as to
enhance tree-based ensemble methods in the context of multi-label
classification. We show how learning time complexity can be reduced without
affecting computational complexity and accuracy of predictions. We also show
that random output space projections may be used in order to reach different
bias-variance tradeoffs, over a broad panel of benchmark problems, and that
this may lead to improved accuracy while reducing significantly the
computational burden of the learning stage.
| Arnaud Joly, Pierre Geurts, Louis Wehenkel | 10.1007/978-3-662-44848-9_39 | 1404.3581 | null | null |
Hybrid Conditional Gradient - Smoothing Algorithms with Applications to
Sparse and Low Rank Regularization | math.OC cs.LG stat.ML | We study a hybrid conditional gradient - smoothing algorithm (HCGS) for
solving composite convex optimization problems which contain several terms over
a bounded set. Examples of these include regularization problems with several
norms as penalties and a norm constraint. HCGS extends conditional gradient
methods to cases with multiple nonsmooth terms, in which standard conditional
gradient methods may be difficult to apply. The HCGS algorithm borrows
techniques from smoothing proximal methods and requires first-order
computations (subgradients and proximity operations). Unlike proximal methods,
HCGS benefits from the advantages of conditional gradient methods, which render
it more efficient on certain large scale optimization problems. We demonstrate
these advantages with simulations on two matrix optimization problems:
regularization of matrices with combined $\ell_1$ and trace norm penalties; and
a convex relaxation of sparse PCA.
| Andreas Argyriou and Marco Signoretto and Johan Suykens | null | 1404.3591 | null | null |
PCANet: A Simple Deep Learning Baseline for Image Classification? | cs.CV cs.LG cs.NE | In this work, we propose a very simple deep learning network for image
classification which comprises only the very basic data processing components:
cascaded principal component analysis (PCA), binary hashing, and block-wise
histograms. In the proposed architecture, PCA is employed to learn multistage
filter banks. It is followed by simple binary hashing and block histograms for
indexing and pooling. This architecture is thus named a PCA network (PCANet)
and can be designed and learned extremely easily and efficiently. For
comparison and better understanding, we also introduce and study two simple
variations to the PCANet, namely the RandNet and LDANet. They share the same
topology of PCANet but their cascaded filters are either selected randomly or
learned from LDA. We have tested these basic networks extensively on many
benchmark visual datasets for different tasks, such as LFW for face
verification, MultiPIE, Extended Yale B, AR, FERET datasets for face
recognition, as well as MNIST for hand-written digits recognition.
Surprisingly, for all tasks, such a seemingly naive PCANet model is on par
with state-of-the-art features, whether prefixed, highly hand-crafted, or
carefully learned (by DNNs). Even more surprisingly, it sets new records for
many classification tasks in Extended Yale B, AR, FERET datasets, and MNIST
variations. Additional experiments on other public datasets also demonstrate
the potential of the PCANet serving as a simple but highly competitive baseline
for texture classification and object recognition.
| Tsung-Han Chan, Kui Jia, Shenghua Gao, Jiwen Lu, Zinan Zeng and Yi Ma | 10.1109/TIP.2015.2475625 | 1404.3606 | null | null |
Methods for Ordinal Peer Grading | cs.LG cs.IR | MOOCs have the potential to revolutionize higher education with their wide
outreach and accessibility, but they require instructors to come up with
scalable alternates to traditional student evaluation. Peer grading -- having
students assess each other -- is a promising approach to tackling the problem
of evaluation at scale, since the number of "graders" naturally scales with the
number of students. However, students are not trained in grading, which means
that one cannot expect the same level of grading skills as in traditional
settings. Drawing on broad evidence that ordinal feedback is easier to provide
and more reliable than cardinal feedback, it is therefore desirable to allow
peer graders to make ordinal statements (e.g. "project X is better than project
Y") and not require them to make cardinal statements (e.g. "project X is a
B-"). Thus, in this paper we study the problem of automatically inferring
student grades from ordinal peer feedback, as opposed to existing methods that
require cardinal peer feedback. We formulate the ordinal peer grading problem
as a type of rank aggregation problem, and explore several probabilistic models
under which to estimate student grades and grader reliability. We study the
applicability of these methods using peer grading data collected from a real
class -- with instructor and TA grades as a baseline -- and demonstrate the
efficacy of ordinal feedback techniques in comparison to existing cardinal peer
grading methods. Finally, we compare these peer-grading techniques to
traditional evaluation techniques.
| Karthik Raman and Thorsten Joachims | null | 1404.3656 | null | null |
Surpassing Human-Level Face Verification Performance on LFW with
GaussianFace | cs.CV cs.LG stat.ML | Face verification remains a challenging problem in very complex conditions
with large variations such as pose, illumination, expression, and occlusions.
This problem is exacerbated when we rely unrealistically on a single training
data source, which is often insufficient to cover the intrinsically complex
face variations. This paper proposes a principled multi-task learning approach
based on Discriminative Gaussian Process Latent Variable Model, named
GaussianFace, to enrich the diversity of training data. In comparison to
existing methods, our model exploits additional data from multiple
source-domains to improve the generalization performance of face verification
in an unknown target-domain. Importantly, our model can adapt automatically to
complex data distributions, and therefore can well capture complex face
variations inherent in multiple sources. Extensive experiments demonstrate the
effectiveness of the proposed model in learning from diverse data sources and
generalizing to unseen domains. Specifically, our algorithm achieves an
impressive accuracy of 98.52% on the well-known and
challenging Labeled Faces in the Wild (LFW) benchmark. For the first time, the
human-level performance in face verification (97.53%) on LFW is surpassed.
| Chaochao Lu, Xiaoou Tang | null | 1404.3840 | null | null |
Optimizing the CVaR via Sampling | stat.ML cs.AI cs.LG | Conditional Value at Risk (CVaR) is a prominent risk measure that is being
used extensively in various domains. We develop a new formula for the gradient
of the CVaR in the form of a conditional expectation. Based on this formula, we
propose a novel sampling-based estimator for the CVaR gradient, in the spirit
of the likelihood-ratio method. We analyze the bias of the estimator, and prove
the convergence of a corresponding stochastic gradient descent algorithm to a
local CVaR optimum. Our method allows to consider CVaR optimization in new
domains. As an example, we consider a reinforcement learning application, and
learn a risk-sensitive controller for the game of Tetris.
| Aviv Tamar, Yonatan Glassner, Shie Mannor | null | 1404.3862 | null | null |
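An illustrative sampling-based estimator matching the shape of the conditional-expectation gradient formula described above; `loss` and `score` are assumed user-supplied callables (with `score(x)` returning the gradient of the log-density at sample x as a parameter-sized vector), and the tail definition is a simplification of the paper's estimator.

```python
# Sampling-based CVaR gradient estimate in the likelihood-ratio spirit.
import numpy as np

def cvar_gradient(samples, loss, score, alpha=0.95):
    losses = np.array([loss(x) for x in samples])
    var = np.quantile(losses, alpha)          # empirical Value at Risk
    tail = losses >= var                       # the worst (1 - alpha) tail
    grads = np.array([score(x) for x in samples])
    # E[ grad log p(x) * (loss(x) - VaR) | loss(x) >= VaR ]
    return (grads[tail] * (losses[tail] - var)[:, None]).mean(axis=0)
```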
Recovery of Coherent Data via Low-Rank Dictionary Pursuit | stat.ME cs.IT cs.LG math.IT math.ST stat.TH | The recently established RPCA method provides us a convenient way to restore
low-rank matrices from grossly corrupted observations. While elegant in theory
and powerful in practice, RPCA may not be an ultimate solution to the low-rank
matrix recovery problem. Indeed, its performance may not be perfect even when
data are strictly low-rank. This is because conventional RPCA ignores the
clustering structures of the data which are ubiquitous in modern applications.
As the number of clusters grows, the coherence of the data keeps increasing,
and
accordingly, the recovery performance of RPCA degrades. We show that the
challenges raised by coherent data (i.e., the data with high coherence) could
be alleviated by Low-Rank Representation (LRR), provided that the dictionary in
LRR is configured appropriately. More precisely, we mathematically prove that
if the dictionary itself is low-rank then LRR is immune to the coherence
parameter which increases with the underlying cluster number. This provides an
elementary principle for dealing with coherent data. Subsequently, we devise a
practical algorithm to obtain proper dictionaries in unsupervised environments.
Our extensive experiments on randomly generated matrices verify our claims.
| Guangcan Liu and Ping Li | null | 1404.4032 | null | null |
Discovering and Exploiting Entailment Relationships in Multi-Label
Learning | cs.LG | This work presents a sound probabilistic method for enforcing adherence of
the marginal probabilities of a multi-label model to automatically discovered
deterministic relationships among labels. In particular we focus on discovering
two kinds of relationships among the labels. The first one concerns pairwise
positive entailment: pairs of labels where the presence of one implies the
presence of the other in all instances of a dataset. The second concerns
exclusion: sets of labels that do not coexist in the same instances of the
dataset. These relationships are represented with a Bayesian network. Marginal
probabilities are entered as soft evidence in the network and adjusted through
probabilistic inference. Our approach offers robust improvements in mean
average precision compared to the standard binary relevance approach across all
12 datasets involved in our experiments. The discovery process helps
interesting implicit knowledge to emerge, which could be useful in itself.
| Christina Papagiannopoulou, Grigorios Tsoumakas, Ioannis Tsamardinos | null | 1404.4038 | null | null |
Ensemble Classifiers and Their Applications: A Review | cs.LG | Ensemble classifier refers to a group of individual classifiers that are
cooperatively trained on data set in a supervised classification problem. In
this paper we present a review of commonly used ensemble classifiers in the
literature. Some ensemble classifiers are also developed targeting specific
applications. We also present some application driven ensemble classifiers in
this paper.
| Akhlaqur Rahman, Sumaira Tasnim | 10.14445/22312803/IJCTT-V10P107 | 1404.4088 | null | null |
Multi-borders classification | stat.ML cs.LG | The number of possible methods of generalizing binary classification to
multi-class classification increases exponentially with the number of class
labels. Often, the best method of doing so will be highly problem dependent.
Here we present classification software in which the partitioning of
multi-class classification problems into binary classification problems is
specified using a recursive control language.
| Peter Mills | null | 1404.4095 | null | null |
Sparse Bilinear Logistic Regression | math.OC cs.CV cs.LG | In this paper, we introduce the concept of sparse bilinear logistic
regression for decision problems involving explanatory variables that are
two-dimensional matrices. Such problems are common in computer vision,
brain-computer interfaces, style/content factorization, and parallel factor
analysis. The underlying optimization problem is bi-convex; we study its
solution and develop an efficient algorithm based on block coordinate descent.
We provide a theoretical guarantee for global convergence and estimate the
asymptotical convergence rate using the Kurdyka-{\L}ojasiewicz inequality. A
range of experiments with simulated and real data demonstrate that sparse
bilinear logistic regression outperforms current techniques in several
important applications.
| Jianing V. Shi, Yangyang Xu, and Richard G. Baraniuk | null | 1404.4104 | null | null |
Sparse Compositional Metric Learning | cs.LG cs.AI stat.ML | We propose a new approach for metric learning by framing it as learning a
sparse combination of locally discriminative metrics that are inexpensive to
generate from the training data. This flexible framework allows us to naturally
derive formulations for global, multi-task and local metric learning. The
resulting algorithms have several advantages over existing methods in the
literature: a much smaller number of parameters to be estimated and a
principled way to generalize learned metrics to new testing data points. To
analyze the approach theoretically, we derive a generalization bound that
justifies the sparse combination. Empirically, we evaluate our algorithms on
several datasets against state-of-the-art metric learning methods. The results
are consistent with our theoretical findings and demonstrate the superiority of
our approach in terms of classification performance and scalability.
| Yuan Shi and Aur\'elien Bellet and Fei Sha | null | 1404.4105 | null | null |
Representation as a Service | cs.LG | Consider a Machine Learning Service Provider (MLSP) designed to rapidly
create highly accurate learners for a never-ending stream of new tasks. The
challenge is to produce task-specific learners that can be trained from few
labeled samples, even if tasks are not uniquely identified, and the number of
tasks and input dimensionality are large. In this paper, we argue that the MLSP
should exploit knowledge from previous tasks to build a good representation of
the environment it is in, and more precisely, that useful representations for
such a service are ones that minimize generalization error for a new hypothesis
trained on a new task. We formalize this intuition with a novel method that
minimizes an empirical proxy of the intra-task small-sample generalization
error. We present several empirical results showing state-of-the-art
performance on single-task transfer, multitask learning, and the full lifelong
learning problem.
| Ouais Alsharif, Philip Bachman, Joelle Pineau | null | 1404.4108 | null | null |
Structured Stochastic Variational Inference | cs.LG | Stochastic variational inference makes it possible to approximate posterior
distributions induced by large datasets quickly using stochastic optimization.
The algorithm relies on the use of fully factorized variational distributions.
However, this "mean-field" independence approximation limits the fidelity of
the posterior approximation, and introduces local optima. We show how to relax
the mean-field approximation to allow arbitrary dependencies between global
parameters and local hidden variables, producing better parameter estimates by
reducing bias, sensitivity to local optima, and sensitivity to hyperparameters.
| Matthew D. Hoffman and David M. Blei | null | 1404.4114 | null | null |
Dropout Training for Support Vector Machines | cs.LG | Dropout and other feature noising schemes have shown promising results in
controlling over-fitting by artificially corrupting the training data. Though
extensive theoretical and empirical studies have been performed for generalized
linear models, little work has been done for support vector machines (SVMs),
one of the most successful approaches for supervised learning. This paper
presents dropout training for linear SVMs. To deal with the intractable
expectation of the non-smooth hinge loss under corrupting distributions, we
develop an iteratively re-weighted least square (IRLS) algorithm by exploring
data augmentation techniques. Our algorithm iteratively minimizes the
expectation of a re-weighted least square problem, where the re-weights have
closed-form solutions. Similar ideas are applied to develop a new IRLS
algorithm for the expected logistic loss under corrupting distributions. Our
algorithms offer insights on the connection and difference between the hinge
loss and logistic loss in dropout training. Empirical results on several real
datasets demonstrate the effectiveness of dropout training on significantly
boosting the classification accuracy of linear SVMs.
| Ning Chen, Jun Zhu, Jianfei Chen, Bo Zhang | null | 1404.4171 | null | null |
MEG Decoding Across Subjects | stat.ML cs.LG q-bio.NC | Brain decoding is a data analysis paradigm for neuroimaging experiments that
is based on predicting the stimulus presented to the subject from the
concurrent brain activity. In order to make inference at the group level, a
straightforward but sometimes unsuccessful approach is to train a classifier on
the trials of a group of subjects and then to test it on unseen trials from new
subjects. The extreme difficulty is related to the structural and functional
variability across the subjects. We call this approach "decoding across
subjects". In this work, we address the problem of decoding across subjects for
magnetoencephalographic (MEG) experiments and we provide the following
contributions: first, we formally describe the problem and show that it belongs
to a machine learning sub-field called transductive transfer learning (TTL).
Second, we propose to use a simple TTL technique that accounts for the
differences between train data and test data. Third, we propose the use of
ensemble learning, and specifically of stacked generalization, to address the
variability across subjects within train data, with the aim of producing more
stable classifiers. On a face vs. scramble task MEG dataset of 16 subjects, we
compare the standard approach of not modelling the differences across subjects,
to the proposed one of combining TTL and ensemble learning. We show that the
proposed approach is consistently more accurate than the standard one.
| Emanuele Olivetti, Seyed Mostafa Kia, Paolo Avesani | null | 1404.4175 | null | null |
Open Question Answering with Weakly Supervised Embedding Models | cs.CL cs.LG | Building computers able to answer questions on any subject is a long standing
goal of artificial intelligence. Promising progress has recently been achieved
by methods that learn to map questions to logical forms or database queries.
Such approaches can be effective but at the cost of either large amounts of
human-labeled data or by defining lexicons and grammars tailored by
practitioners. In this paper, we instead take the radical approach of learning
to map questions to vectorial feature representations. By mapping answers into
the same space one can query any knowledge base independent of its schema,
without requiring any grammar or lexicon. Our method is trained with a new
optimization procedure combining stochastic gradient descent followed by a
fine-tuning step using the weak supervision provided by blending automatically
and collaboratively generated resources. We empirically demonstrate that our
model can capture meaningful signals from its noisy supervision leading to
major improvements over paralex, the only existing method able to be trained on
similar weakly labeled data.
| Antoine Bordes, Jason Weston and Nicolas Usunier | null | 1404.4326 | null | null |
Stable Graphical Models | cs.LG stat.ML | Stable random variables are motivated by the central limit theorem for
densities with (potentially) unbounded variance and can be thought of as
natural generalizations of the Gaussian distribution to skewed and heavy-tailed
phenomenon. In this paper, we introduce stable graphical (SG) models, a class
of multivariate stable densities that can also be represented as Bayesian
networks whose edges encode linear dependencies between random variables. One
major hurdle to the extensive use of stable distributions is the lack of a
closed-form analytical expression for their densities. This makes penalized
maximum-likelihood based learning computationally demanding. We establish
theoretically that the Bayesian information criterion (BIC) can asymptotically
be reduced to the computationally more tractable minimum dispersion criterion
(MDC) and develop StabLe, a structure learning algorithm based on MDC. We use
simulated datasets for five benchmark network topologies to empirically
demonstrate how StabLe improves upon ordinary least squares (OLS) regression.
We also apply StabLe to microarray gene expression data for lymphoblastoid
cells from 727 individuals belonging to eight global population groups. We
establish that StabLe improves test set performance relative to OLS via
ten-fold cross-validation. Finally, we develop SGEX, a method for quantifying
differential expression of genes between different population groups.
| Navodit Misra and Ercan E. Kuruoglu | null | 1404.4351 | null | null |
Efficient Nonnegative Tucker Decompositions: Algorithms and Uniqueness | cs.LG cs.CV stat.ML | Nonnegative Tucker decomposition (NTD) is a powerful tool for the extraction
of nonnegative parts-based and physically meaningful latent components from
high-dimensional tensor data while preserving the natural multilinear structure
of data. However, as the data tensor often has multiple modes and is
large-scale, existing NTD algorithms suffer from a very high computational
complexity in terms of both storage and computation time, which has been one
major obstacle for practical applications of NTD. To overcome these
disadvantages, we show how low (multilinear) rank approximation (LRA) of
tensors is able to significantly simplify the computation of the gradients of
the cost function, upon which a family of efficient first-order NTD algorithms
are developed. Besides dramatically reducing the storage complexity and running
time, the new algorithms are quite flexible and robust to noise because any
well-established LRA approaches can be applied. We also show how nonnegativity,
incorporated with sparsity, substantially improves the uniqueness property and
partially alleviates the curse of dimensionality of the Tucker decompositions.
Simulation results on synthetic and real-world data justify the validity and
high efficiency of the proposed NTD algorithms.
| Guoxu Zhou and Andrzej Cichocki and Qibin Zhao and Shengli Xie | 10.1109/TIP.2015.2478396 | 1404.4412 | null | null |
How Many Topics? Stability Analysis for Topic Models | cs.LG cs.CL cs.IR | Topic modeling refers to the task of discovering the underlying thematic
structure in a text corpus, where the output is commonly presented as a report
of the top terms appearing in each topic. Despite the diversity of topic
modeling algorithms that have been proposed, a common challenge in successfully
applying these techniques is the selection of an appropriate number of topics
for a given corpus. Choosing too few topics will produce results that are
overly broad, while choosing too many will result in the "over-clustering" of a
corpus into many small, highly-similar topics. In this paper, we propose a
term-centric stability analysis strategy to address this issue, the idea being
that a model with an appropriate number of topics will be more robust to
perturbations in the data. Using a topic modeling approach based on matrix
factorization, evaluations performed on a range of corpora show that this
strategy can successfully guide the model selection process.
| Derek Greene, Derek O'Callaghan, P\'adraig Cunningham | null | 1404.4606 | null | null |
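A minimal sketch of the idea, assuming an NMF topic model over TF-IDF features: for each candidate number of topics k, fit a reference model and models on bootstrap samples of the documents, then score the agreement of top terms via a greedy best-match Jaccard. The helper names, bootstrap count, and top-10 cutoff are illustrative simplifications of the paper's term-centric strategy.

```python
# Sketch of stability-based selection of the number of topics k for NMF.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

def top_terms(H, terms, n=10):
    # top-n term set for each topic (row of the NMF components matrix)
    return [set(terms[np.argsort(row)[::-1][:n]]) for row in H]

def agreement(ref, other):
    # greedy best-match average Jaccard between two lists of term sets
    return float(np.mean([max(len(r & o) / len(r | o) for o in other)
                          for r in ref]))

def stability(docs, k, n_boot=5, seed=0):
    rng = np.random.default_rng(seed)
    vec = TfidfVectorizer(max_features=2000, stop_words='english').fit(docs)
    terms = np.array(vec.get_feature_names_out())
    M = vec.transform(docs)
    ref = top_terms(NMF(n_components=k, init='nndsvd').fit(M).components_, terms)
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, M.shape[0], M.shape[0])   # bootstrap documents
        H = NMF(n_components=k, init='nndsvd').fit(M[idx]).components_
        scores.append(agreement(ref, top_terms(H, terms)))
    return float(np.mean(scores))

# best_k = max(range(2, 11), key=lambda k: stability(corpus, k))
```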
A New Space for Comparing Graphs | stat.ME cs.IR cs.LG stat.ML | Finding new mathematical representations for graphs that allow direct
comparison between different graph structures is an open-ended research
direction. Having such a representation is the first prerequisite for a variety
of machine learning algorithms, such as classification and clustering, over
graph datasets. In this paper, we propose a symmetric positive semidefinite
matrix with the $(i,j)$-th entry equal to the covariance between the normalized
vectors $A^i e$ and $A^j e$ ($e$ being the vector of all ones) as a representation
for a graph with adjacency matrix $A$. We show that the proposed matrix
representation encodes the spectrum of the underlying adjacency matrix and it
also contains information about the counts of small sub-structures present in
the graph such as triangles and small paths. In addition, we show that this
matrix is a \emph{"graph invariant"}. All these properties make the proposed
matrix a suitable object for representing graphs.
The representation, being a covariance matrix in a fixed-dimensional metric
space, gives a mathematical embedding for graphs. This naturally leads to a
measure of similarity on graph objects. We define the similarity between two
given graphs as the Bhattacharyya similarity measure between their corresponding
covariance matrix representations. As shown in our experimental study on the
task of social network classification, such a similarity measure outperforms
other widely used state-of-the-art methodologies. Our proposed method is also
computationally efficient. The computation of both the matrix representation
and the similarity value can be performed in operations linear in the number of
edges. This makes our method scalable in practice.
We believe our theoretical and empirical results provide evidence for
studying truncated power iterations of the adjacency matrix to characterize
social networks.
| Anshumali Shrivastava and Ping Li | null | 1404.4644 | null | null |
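The construction above is simple to realize in code. Below is a minimal sketch: stack the normalized power-iteration vectors $A^i e$, take their sample covariance as the graph representation, and compare two graphs with the Bhattacharyya coefficient for zero-mean Gaussians. The number of powers k and the ridge term are assumptions for the illustration.

```python
# Covariance-of-power-iterations representation and Bhattacharyya similarity.
import numpy as np

def cov_representation(A, k=5):
    x = np.ones(A.shape[0])
    vecs = []
    for _ in range(k):
        x = A @ x                               # next power A^i e
        vecs.append(x / np.linalg.norm(x))      # normalize each vector
    return np.cov(np.array(vecs))               # k x k, symmetric PSD

def bhattacharyya_similarity(S1, S2, ridge=1e-6):
    # Bhattacharyya coefficient of two zero-mean Gaussians with covariances S1, S2
    S1 = S1 + ridge * np.eye(len(S1))           # small ridge for numerical stability
    S2 = S2 + ridge * np.eye(len(S2))
    S = 0.5 * (S1 + S2)
    d = 0.5 * (np.linalg.slogdet(S)[1]
               - 0.5 * (np.linalg.slogdet(S1)[1] + np.linalg.slogdet(S2)[1]))
    return np.exp(-d)                           # in (0, 1]; 1 for identical inputs
```

Since each product A @ x costs one pass over the edges, computing the representation is linear in the number of edges, consistent with the scalability claim above.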
Advancing Matrix Completion by Modeling Extra Structures beyond
Low-Rankness | stat.ME cs.IT cs.LG math.IT math.ST stat.TH | A well-known method for completing low-rank matrices based on convex
optimization has been established by Cand{\`e}s and Recht. Although
theoretically complete, the method may not entirely solve the low-rank matrix
completion problem. This is because the method captures only the low-rankness
property, which gives merely a rough constraint that the data points lie on
some low-dimensional subspace, but generally ignores the extra structures that
specify in more detail how the data points are distributed on the subspace. Whenever the
geometric distribution of the data points is not uniform, the coherence
parameters of the data might be large and, accordingly, the method might fail even
if the latent matrix we want to recover is fairly low-rank. To better handle
non-uniform data, in this paper we propose a method termed Low-Rank Factor
Decomposition (LRFD), which imposes an additional restriction that the data
points must be represented as linear combinations of the bases in a dictionary
constructed or learnt in advance. We show that LRFD handles non-uniform
data well, provided that the dictionary is configured properly: we mathematically
prove that if the dictionary itself is low-rank then LRFD is immune to the
coherence parameters which might be large on non-uniform data. This provides an
elementary principle for learning the dictionary in LRFD and, naturally, leads
to a practical algorithm for advancing matrix completion. Extensive experiments
on randomly generated matrices and motion datasets show encouraging results.
| Guangcan Liu and Ping Li | null | 1404.4646 | null | null |
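A minimal sketch of the dictionary-constrained formulation, solved by proximal gradient with singular-value thresholding; the objective, step size, and parameter names are illustrative assumptions, not the authors' exact LRFD solver.

```python
# Proximal-gradient sketch for dictionary-constrained matrix completion:
#   minimize  lam * ||Z||_*  +  0.5 * ||P_Omega(D @ Z - M)||_F^2
import numpy as np

def svt(X, tau):
    # singular-value thresholding: prox of tau * ||.||_*
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def lrfd_complete(M, mask, D, lam=0.1, iters=500):
    # M: observed matrix (arbitrary off-support), mask: boolean support Omega,
    # D: dictionary, assumed low-rank per the analysis above.
    step = 1.0 / (np.linalg.norm(D, 2) ** 2)   # 1 / Lipschitz constant
    Z = np.zeros((D.shape[1], M.shape[1]))
    for _ in range(iters):
        R = (D @ Z - M) * mask                 # residual on Omega only
        Z = svt(Z - step * (D.T @ R), step * lam)
    return D @ Z                               # completed matrix
```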
Hierarchical Quasi-Clustering Methods for Asymmetric Networks | cs.LG stat.ML | This paper introduces hierarchical quasi-clustering methods, a generalization
of hierarchical clustering for asymmetric networks where the output structure
preserves the asymmetry of the input data. We show that this output structure
is equivalent to a finite quasi-ultrametric space and study admissibility with
respect to two desirable properties. We prove that a modified version of single
linkage is the only admissible quasi-clustering method. Moreover, we show the
stability of the proposed method and establish the invariance properties it
fulfills. Algorithms are further developed, and the value of
quasi-clustering analysis is illustrated with a study of internal migration
within the United States.
| Gunnar Carlsson, Facundo M\'emoli, Alejandro Ribeiro, Santiago Segarra | null | 1404.4655 | null | null |
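The admissible method has a compact computational form: the quasi-ultrametric between x and y is the minimum, over directed chains from x to y, of the largest dissimilarity along the chain, which a min-max variant of Floyd-Warshall computes directly. The sketch below is an illustrative reading of directed single linkage, not the authors' code.

```python
# Directed (quasi-)single linkage on an asymmetric dissimilarity matrix W:
# u(x, y) = min over chains x -> ... -> y of the largest hop along the chain.
# The output is a quasi-ultrametric and stays asymmetric, like the input.
import numpy as np

def directed_single_linkage(W):
    u = W.astype(float).copy()
    for k in range(len(u)):
        # relax every pair through intermediate node k (min-max recursion)
        u = np.minimum(u, np.maximum(u[:, k][:, None], u[k, :][None, :]))
    return u

W = np.array([[0., 1., 9.],
              [9., 0., 2.],
              [3., 9., 0.]])
print(directed_single_linkage(W))   # u[0, 2] = 2 via 0 -> 1 -> 2; u[2, 0] = 3
```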
Subspace Learning and Imputation for Streaming Big Data Matrices and
Tensors | stat.ML cs.IT cs.LG math.IT | Extracting latent low-dimensional structure from high-dimensional data is of
paramount importance in timely inference tasks encountered with `Big Data'
analytics. However, increasingly noisy, heterogeneous, and incomplete datasets
as well as the need for {\em real-time} processing of streaming data pose major
challenges to this end. In this context, the present paper brings the benefits
of rank minimization to scalable imputation of missing data, via tracking
low-dimensional subspaces and unraveling latent (possibly multi-way) structure
from \emph{incomplete streaming} data. For low-rank matrix data, a subspace
estimator is proposed based on an exponentially-weighted least-squares
criterion regularized with the nuclear norm. After recasting the non-separable
nuclear norm into a form amenable to online optimization, real-time algorithms
with complementary strengths are developed and their convergence is established
under simplifying technical assumptions. In a stationary setting, the
asymptotic estimates obtained offer the well-documented performance guarantees
of the {\em batch} nuclear-norm regularized estimator. Under the same unifying
framework, a novel online (adaptive) algorithm is developed to obtain multi-way
decompositions of \emph{low-rank tensors} with missing entries, and perform
imputation as a byproduct. Simulated tests with both synthetic and real
Internet and cardiac magnetic resonance imaging (MRI) data confirm the efficacy
of the proposed algorithms, and their superior performance relative to
state-of-the-art alternatives.
| Morteza Mardani, Gonzalo Mateos, and Georgios B. Giannakis | 10.1109/TSP.2015.2417491 | 1404.4667 | null | null |
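A minimal sketch of the per-column update under the factorized nuclear-norm surrogate ||X||_* <= (||L||_F^2 + ||Q||_F^2)/2: for each streaming, partially observed column, solve a small ridge least-squares problem for the weight vector, then take a stochastic gradient step on the observed rows of the subspace factor. The fixed step size and parameter names are assumptions; the paper's algorithms add exponential weighting and convergence safeguards omitted here.

```python
# Online subspace tracking and imputation from incomplete streaming columns.
import numpy as np

def track_subspace(columns, masks, rank=5, lam=0.1, mu=0.05, seed=0):
    # columns: iterable of length-m arrays; masks: matching boolean arrays
    rng = np.random.default_rng(seed)
    m = len(columns[0])
    L = rng.normal(size=(m, rank)) / np.sqrt(m)     # subspace factor
    imputed = []
    for y, w in zip(columns, masks):                # w: observed-entry mask
        Lw, yw = L[w], y[w]
        # ridge LS fit of the observed entries within the current subspace
        q = np.linalg.solve(Lw.T @ Lw + lam * np.eye(rank), Lw.T @ yw)
        # stochastic gradient step on the observed rows of L only
        L[w] -= mu * (np.outer(Lw @ q - yw, q) + lam * Lw)
        imputed.append(L @ q)                       # impute the full column
    return np.array(imputed).T, L
```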