Dataset schema (field: type, observed range):
title: string, lengths 5 to 246
categories: string, lengths 5 to 94
abstract: string, lengths 54 to 5.03k
authors: string, lengths 0 to 6.72k
doi: string, lengths 12 to 54
id: string, lengths 6 to 10
year: float64, values 2.02k to 2.02k
venue: string, 13 distinct classes
Optimal rates of convergence for persistence diagrams in Topological Data Analysis
math.ST cs.CG cs.LG math.GT stat.TH
Computational topology has recently seen important developments in the direction of data analysis, giving birth to the field of topological data analysis. Topological persistence, or persistent homology, appears as a fundamental tool in this field. In this paper, we study topological persistence in general metric spaces, with a statistical approach. We show that the use of persistent homology can be naturally considered in general statistical frameworks, and that persistence diagrams can be used as statistics with interesting convergence properties. Some numerical experiments are performed in various contexts to illustrate our results.
Fr\'ed\'eric Chazal and Marc Glisse and Catherine Labru\`ere and Bertrand Michel
null
1305.6239
null
null
Reinforcement Learning for the Soccer Dribbling Task
cs.LG cs.RO stat.ML
We propose a reinforcement learning solution to the \emph{soccer dribbling task}, a scenario in which a soccer agent has to go from the beginning to the end of a region keeping possession of the ball, as an adversary attempts to gain possession. While the adversary uses a stationary policy, the dribbler learns the best action to take at each decision point. After defining meaningful variables to represent the state space, and high-level macro-actions to incorporate domain knowledge, we describe our application of the reinforcement learning algorithm \emph{Sarsa} with CMAC for function approximation. Our experiments show that, after the training period, the dribbler is able to accomplish its task against a strong adversary around 58% of the time.
Arthur Carvalho and Renato Oliveira
10.1109/CIG.2011.6031994
1305.6568
null
null
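To make the learning setup above concrete, here is a minimal sketch of Sarsa with hashed tile coding (CMAC-style) for linear action-value approximation. The environment interface, state scaling, and all hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from collections import defaultdict
# usage assumption: w = [defaultdict(float) for _ in range(n_actions)]

class TileCoder:
    """Hashed tile coding (CMAC-style): each tiling maps the scaled state
    to one active index in a fixed-size feature table."""
    def __init__(self, n_tilings=8, n_tiles=10, size=4096):
        self.n_tilings, self.n_tiles, self.size = n_tilings, n_tiles, size

    def features(self, state):
        idx = []
        s = np.asarray(state, dtype=float)   # assume state already in [0, 1]
        for t in range(self.n_tilings):
            offset = t / (self.n_tilings * self.n_tiles)
            tiles = tuple(np.floor((s + offset) * self.n_tiles).astype(int))
            idx.append(hash((t,) + tiles) % self.size)
        return idx

def sarsa_episode(env, coder, w, n_actions, alpha=0.1, gamma=1.0, eps=0.1):
    """One episode of epsilon-greedy Sarsa with a linear tile-coded Q."""
    def q(s, a):
        return sum(w[a][i] for i in coder.features(s))

    def policy(s):
        if np.random.rand() < eps:
            return np.random.randint(n_actions)
        return int(np.argmax([q(s, a) for a in range(n_actions)]))

    s, done = env.reset(), False            # assumed environment interface
    a = policy(s)
    while not done:
        s2, r, done = env.step(a)
        a2 = policy(s2) if not done else 0
        target = r if done else r + gamma * q(s2, a2)
        delta = target - q(s, a)
        for i in coder.features(s):          # credit every active tile
            w[a][i] += (alpha / coder.n_tilings) * delta
        s, a = s2, a2
    return w
```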
Normalized Online Learning
cs.LG stat.ML
We introduce online learning algorithms which are independent of feature scales, proving regret bounds dependent on the ratio of scales existent in the data rather than the absolute scale. This has several useful effects: there is no need to pre-normalize data, the test-time and test-space complexity are reduced, and the algorithms are more robust.
Stephane Ross and Paul Mineiro and John Langford
null
1305.6646
null
null
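One plausible reading of the scale-invariance idea, sketched under the assumption of squared loss: track a running per-feature scale and normalize each gradient step by it, so that rescaling any feature leaves the learned predictions unchanged. This is an illustration of the principle, not the authors' exact algorithm.

```python
import numpy as np

def normalized_ogd(stream, d, eta=0.1):
    """Online gradient descent for 0.5*(w.x - y)^2, normalized per feature."""
    w = np.zeros(d)
    s = np.full(d, 1e-12)            # running per-feature scale estimates
    for x, y in stream:
        s = np.maximum(s, np.abs(x)) # update scales before predicting
        p = w @ x
        g = (p - y) * x              # gradient of the squared loss
        w -= eta * g / s**2          # scale-normalized step
    return w

# usage: normalized_ogd([(np.array([1.0, 1000.0]), 3.0)], d=2)
```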
Dynamic Clustering via Asymptotics of the Dependent Dirichlet Process Mixture
cs.LG stat.ML
This paper presents a novel algorithm, based upon the dependent Dirichlet process mixture model (DDPMM), for clustering batch-sequential data containing an unknown number of evolving clusters. The algorithm is derived via a low-variance asymptotic analysis of the Gibbs sampling algorithm for the DDPMM, and provides a hard clustering with convergence guarantees similar to those of the k-means algorithm. Empirical results from a synthetic test with moving Gaussian clusters and a test with real ADS-B aircraft trajectory data demonstrate that the algorithm requires orders of magnitude less computational time than contemporary probabilistic and hard clustering algorithms, while providing higher accuracy on the examined datasets.
Trevor Campbell, Miao Liu, Brian Kulis, Jonathan P. How, Lawrence Carin
null
1305.6659
null
null
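For intuition, the batch analogue of such low-variance asymptotics is DP-means-style clustering (k-means with a penalty lambda that spawns a new cluster whenever every existing center is too far away). The sketch below shows only that static flavor; the paper's algorithm additionally lets clusters move, appear, and die across batches.

```python
import numpy as np

def dp_means(X, lam, n_iter=20):
    """Static DP-means sketch: k-means with a cluster-creation penalty lam."""
    centers = [X[0].copy()]
    for _ in range(n_iter):
        labels = []
        for x in X:                       # assignment pass
            d2 = [np.sum((x - c) ** 2) for c in centers]
            j = int(np.argmin(d2))
            if d2[j] > lam:               # too far from everything: new cluster
                centers.append(x.copy())
                j = len(centers) - 1
            labels.append(j)
        labels = np.array(labels)
        centers = [X[labels == j].mean(axis=0)     # update pass (drop empties)
                   for j in range(len(centers)) if np.any(labels == j)]
    C = np.array(centers)                 # final labeling against settled centers
    labels = np.array([int(np.argmin(((x - C) ** 2).sum(axis=1))) for x in X])
    return C, labels
```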
Generalized Denoising Auto-Encoders as Generative Models
cs.LG
Recent work has shown how denoising and contractive autoencoders implicitly capture the structure of the data-generating density, in the case where the corruption noise is Gaussian, the reconstruction error is the squared error, and the data is continuous-valued. This has led to various proposals for sampling from this implicitly learned density function, using Langevin and Metropolis-Hastings MCMC. However, it remained unclear how to connect the training procedure of regularized auto-encoders to the implicit estimation of the underlying data-generating distribution when the data are discrete, or using other forms of corruption process and reconstruction errors. Another issue is the mathematical justification which is only valid in the limit of small corruption noise. We propose here a different attack on the problem, which deals with all these issues: arbitrary (but noisy enough) corruption, arbitrary reconstruction loss (seen as a log-likelihood), handling both discrete and continuous-valued variables, and removing the bias due to non-infinitesimal corruption noise (or non-infinitesimal contractive penalty).
Yoshua Bengio, Li Yao, Guillaume Alain, and Pascal Vincent
null
1305.6663
null
null
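A minimal numpy sketch of one training step of such a denoising autoencoder, with an arbitrary corruption process and a Bernoulli cross-entropy reconstruction loss (so it also covers the binary-data case discussed above). The one-hidden-layer architecture and salt-and-pepper corruption are illustrative choices, not the paper's exact setup; shapes assumed are W1 (h, d), b1 (h,), W2 (d, h), b2 (d,).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def corrupt(x, rng, p=0.3):
    """Salt-and-pepper corruption: resample a fraction p of the bits."""
    mask = rng.random(x.shape) < p
    noise = rng.integers(0, 2, size=x.shape)
    return np.where(mask, noise, x).astype(float)

def dae_step(x, W1, b1, W2, b2, rng, lr=0.1):
    xt = corrupt(x, rng)                  # tilde-x sampled from C(tilde-x | x)
    h = sigmoid(W1 @ xt + b1)             # encoder
    xhat = sigmoid(W2 @ h + b2)           # decoder: Bernoulli P(x | tilde-x)
    # negative log-likelihood of the clean x under the Bernoulli decoder
    loss = -np.sum(x * np.log(xhat + 1e-9) + (1 - x) * np.log(1 - xhat + 1e-9))
    # backprop (cross-entropy + sigmoid output gives the simple delta xhat - x)
    d_out = xhat - x
    d_h = (W2.T @ d_out) * h * (1 - h)
    W2 -= lr * np.outer(d_out, h); b2 -= lr * d_out
    W1 -= lr * np.outer(d_h, xt);  b1 -= lr * d_h
    return loss

# usage sketch:
# d, h = 784, 256; rng = np.random.default_rng(0)
# W1 = 0.01 * rng.standard_normal((h, d)); b1 = np.zeros(h)
# W2 = 0.01 * rng.standard_normal((d, h)); b2 = np.zeros(d)
```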
Predicting the Severity of Breast Masses with Data Mining Methods
cs.LG stat.ML
Mammography is the most effective and available tool for breast cancer screening. However, the low positive predictive value of breast biopsies resulting from mammogram interpretation leads to approximately 70% unnecessary biopsies with benign outcomes. Data mining algorithms could help physicians decide whether to perform a breast biopsy on a suspicious lesion seen in a mammogram image or to perform a short-term follow-up examination instead. In this research paper, three data mining classification algorithms, Decision Tree (DT), Artificial Neural Network (ANN), and Support Vector Machine (SVM), are analyzed on a mammographic masses data set. The purpose of this study is to increase the ability of physicians to determine the severity (benign or malignant) of a mammographic mass lesion from BI-RADS attributes and the patient's age. The data set is split into training and test sets in a 70:30 ratio, and the performances of the classification algorithms are compared through three statistical measures: sensitivity, specificity, and classification accuracy. The accuracies of DT, ANN, and SVM on the test samples are 78.12%, 80.56%, and 81.25%, respectively. Our analysis shows that, of these three classification models, SVM predicts the severity of breast cancer with the lowest error rate and highest accuracy.
Sahar A. Mokhtar and Alaa. M. Elsayad
null
1305.7057
null
null
Test cost and misclassification cost trade-off using reframing
cs.LG
Many solutions to cost-sensitive classification (and regression) rely on some or all of the following assumptions: we have complete knowledge about the cost context at training time, we can easily re-train whenever the cost context changes, and we have technique-specific methods (such as cost-sensitive decision trees) that can take advantage of that information. In this paper we address the problem of selecting models and minimising joint cost (integrating both misclassification cost and test costs) without any of the above assumptions. We introduce methods and plots (such as the so-called JROC plots) that can work with any off-the-shelf predictive technique, including ensembles, such that we reframe the model to use the appropriate subset of attributes (the feature configuration) during deployment time. In other words, models are trained with the available attributes (once and for all) and then deployed by setting missing values on the attributes that are deemed ineffective for reducing the joint cost. As the number of feature configuration combinations grows exponentially with the number of features we introduce quadratic methods that are able to approximate the optimal configuration and model choices, as shown by the experimental results.
Celestine Periale Maguedong-Djoumessi, Jos\'e Hern\'andez-Orallo
null
1305.7111
null
null
Alternating Decision trees for early diagnosis of dengue fever
cs.LG q-bio.QM stat.AP
Dengue fever is a flu-like illness spread by the bite of an infected mosquito and is fast emerging as a major health problem. Timely and cost-effective diagnosis using clinical and laboratory features would reduce mortality rates, besides providing better grounds for clinical management and disease surveillance. We wish to develop a robust and effective decision-tree-based approach for predicting dengue disease. Our analysis is based on the clinical characteristics and laboratory measurements of the diseased individuals. We have developed and trained an alternating decision tree with boosting and compared its performance with the C4.5 algorithm for dengue disease diagnosis. Of the 65 patient records, diagnosis established that 53 individuals were confirmed to have dengue fever. The alternating decision tree based algorithm was able to identify dengue fever from the clinical and laboratory data with 89% correctly classified instances, an F-measure of 0.86, and a receiver operating characteristic (ROC) area of 0.826, compared to C4.5 with 78% correctly classified instances, an F-measure of 0.738, and an ROC area of 0.617. The alternating decision tree approach with boosting has been able to predict dengue fever with a higher degree of accuracy than the C4.5-based decision tree using simple clinical and laboratory features. Further analysis on larger data sets is required to improve the sensitivity and specificity of alternating decision trees.
M. Naresh Kumar
null
1305.7331
null
null
Privileged Information for Data Clustering
cs.LG stat.ML
Many machine learning algorithms assume that all input samples are independently and identically distributed from some common distribution on either the input space X, in the case of unsupervised learning, or the input and output space X x Y in the case of supervised and semi-supervised learning. In recent years the relaxation of this assumption has been explored, and the importance of incorporating additional information within machine learning algorithms has become more apparent. Traditionally such fusion of information was the domain of semi-supervised learning. More recently the inclusion of knowledge from separate hypothetical spaces has been proposed by Vapnik as part of the supervised setting. In this work we are interested in exploring Vapnik's idea of master-class learning and the associated learning using privileged information, however within the unsupervised setting. Adoption of the advanced supervised learning paradigm for the unsupervised setting instigates investigation into the difference between privileged and technical data. By means of our proposed aRi-MAX method, the stability of the KMeans algorithm is improved and identification of the best clustering solution is achieved on an artificial dataset. Subsequently an information-theoretic dot-product-based algorithm called P-Dot is proposed. This method has the ability to utilize a wide variety of clustering techniques, individually or in combination, while fusing privileged and technical data for improved clustering. Application of the P-Dot method to the task of digit recognition confirms our findings in a real-world scenario.
Jan Feyereisl, Uwe Aickelin
null
1305.7454
null
null
On model selection consistency of regularized M-estimators
math.ST cs.LG math.OC stat.ME stat.ML stat.TH
Regularized M-estimators are used in diverse areas of science and engineering to fit high-dimensional models with some low-dimensional structure. Usually the low-dimensional structure is encoded by the presence of the (unknown) parameters in some low-dimensional model subspace. In such settings, it is desirable for estimates of the model parameters to be \emph{model selection consistent}: the estimates also fall in the model subspace. We develop a general framework for establishing consistency and model selection consistency of regularized M-estimators and show how it applies to some special cases of interest in statistical learning. Our analysis identifies two key properties of regularized M-estimators, referred to as geometric decomposability and irrepresentability, that ensure the estimators are consistent and model selection consistent.
Jason D. Lee, Yuekai Sun, Jonathan E. Taylor
null
1305.7477
null
null
Understanding ACT-R - an Outsider's Perspective
cs.LG
The ACT-R theory of cognition developed by John Anderson and colleagues endeavors to explain how humans recall chunks of information and how they solve problems. ACT-R also serves as a theoretical basis for "cognitive tutors", i.e., automatic tutoring systems that help students learn mathematics, computer programming, and other subjects. The official ACT-R definition is distributed across a large body of literature spanning many articles and monographs, and hence it is difficult for an "outsider" to learn the most important aspects of the theory. This paper aims to provide a tutorial to the core components of the ACT-R theory.
Jacob Whitehill
null
1306.0125
null
null
Dynamic Ad Allocation: Bandits with Budgets
cs.LG cs.DS
We consider an application of multi-armed bandits to internet advertising (specifically, to dynamic ad allocation in the pay-per-click model, with uncertainty on the click probabilities). We focus on an important practical issue that advertisers are constrained in how much money they can spend on their ad campaigns. This issue has not been considered in the prior work on bandit-based approaches for ad allocation, to the best of our knowledge. We define a simple, stylized model where an algorithm picks one ad to display in each round, and each ad has a \emph{budget}: the maximal amount of money that can be spent on this ad. This model admits a natural variant of UCB1, a well-known algorithm for multi-armed bandits with stochastic rewards. We derive strong provable guarantees for this algorithm.
Aleksandrs Slivkins
null
1306.0155
null
null
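A hedged sketch of the natural UCB1 variant alluded to above: play the arm with the highest upper confidence bound among the arms whose budget is not yet exhausted. The unit cost per click and the Bernoulli click model are illustrative assumptions.

```python
import numpy as np

def budgeted_ucb(arms, budgets, T, rng):
    """arms[i]: true click probability; budgets[i]: max spend on ad i."""
    k = len(arms)
    n = np.zeros(k); mean = np.zeros(k)
    spent = np.zeros(k); reward = 0.0
    for t in range(1, T + 1):
        alive = [i for i in range(k) if spent[i] < budgets[i]]
        if not alive:
            break                       # every budget exhausted
        ucb = [mean[i] + np.sqrt(2 * np.log(t) / n[i]) if n[i] > 0 else np.inf
               for i in alive]
        i = alive[int(np.argmax(ucb))]
        click = rng.random() < arms[i]  # display the ad, observe the click
        if click:
            spent[i] += 1.0             # advertiser pays per click
            reward += 1.0
        n[i] += 1
        mean[i] += (click - mean[i]) / n[i]
    return reward

# usage: budgeted_ucb([0.1, 0.05], [50, 50], 5000, np.random.default_rng(0))
```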
Phase Retrieval using Alternating Minimization
stat.ML cs.IT cs.LG math.IT
Phase retrieval problems involve solving linear equations, but with missing sign (or phase, for complex numbers) information. More than four decades after it was first proposed, the seminal error reduction algorithm of (Gerchberg and Saxton 1972) and (Fienup 1982) is still the popular choice for solving many variants of this problem. The algorithm is based on alternating minimization; i.e. it alternates between estimating the missing phase information, and the candidate solution. Despite its wide usage in practice, no global convergence guarantees for this algorithm are known. In this paper, we show that a (resampling) variant of this approach converges geometrically to the solution of one such problem -- finding a vector $\mathbf{x}$ from $\mathbf{y},\mathbf{A}$, where $\mathbf{y} = \left|\mathbf{A}^{\top}\mathbf{x}\right|$ and $|\mathbf{z}|$ denotes a vector of element-wise magnitudes of $\mathbf{z}$ -- under the assumption that $\mathbf{A}$ is Gaussian. Empirically, we demonstrate that alternating minimization performs similar to recently proposed convex techniques for this problem (which are based on "lifting" to a convex matrix problem) in sample complexity and robustness to noise. However, it is much more efficient and can scale to large problems. Analytically, for a resampling version of alternating minimization, we show geometric convergence to the solution, and sample complexity that is off by log factors from obvious lower bounds. We also establish close to optimal scaling for the case when the unknown vector is sparse. Our work represents the first theoretical guarantee for alternating minimization (albeit with resampling) for any variant of phase retrieval problems in the non-convex setting.
Praneeth Netrapalli and Prateek Jain and Sujay Sanghavi
null
1306.0160
null
null
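A minimal real-valued sketch of the alternating scheme: alternate between estimating the missing signs and solving the resulting least-squares problem. The analyzed variant additionally uses a careful spectral initialization and fresh measurements per iteration (resampling); both are omitted here, and the convention y = |Ax| with A of shape (m, d) is used.

```python
import numpy as np

def altmin_phase(A, y, n_iter=100, rng=None):
    rng = rng or np.random.default_rng(0)
    m, d = A.shape
    x = rng.standard_normal(d)            # naive random initialization
    for _ in range(n_iter):
        c = np.sign(A @ x)                # step 1: estimate the missing signs
        x, *_ = np.linalg.lstsq(A, c * y, rcond=None)  # step 2: least squares
    return x

# quick check on synthetic Gaussian measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((400, 20))
x_true = rng.standard_normal(20)
y = np.abs(A @ x_true)
x_hat = altmin_phase(A, y)
err = min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true))
print(err)  # small, up to the unrecoverable global sign
```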
RNADE: The real-valued neural autoregressive density-estimator
stat.ML cs.LG
We introduce RNADE, a new model for joint density estimation of real-valued vectors. Our model calculates the density of a datapoint as the product of one-dimensional conditionals modeled using mixture density networks with shared parameters. RNADE learns a distributed representation of the data, while having a tractable expression for the calculation of densities. A tractable likelihood allows direct comparison with other methods and training by standard gradient-based optimizers. We compare the performance of RNADE on several datasets of heterogeneous and perceptual data, finding it outperforms mixture models in all but one case.
Benigno Uria, Iain Murray, Hugo Larochelle
null
1306.0186
null
null
Guided Random Forest in the RRF Package
cs.LG
Random Forest (RF) is a powerful supervised learner and has been popularly used in many applications such as bioinformatics. In this work we propose the guided random forest (GRF) for feature selection. Similar to a feature selection method called guided regularized random forest (GRRF), GRF is built using the importance scores from an ordinary RF. However, the trees in GRRF are built sequentially, are highly correlated, and do not allow for parallel computing, while the trees in GRF are built independently and can be implemented in parallel. Experiments on 10 high-dimensional gene data sets show that, with a fixed parameter value (without tuning the parameter), RF applied to the features selected by GRF outperforms RF applied to all features on 9 data sets, and 7 of them show significant differences at the 0.05 level. Therefore, both accuracy and interpretability are significantly improved. GRF selects more features than GRRF but leads to better classification accuracy. Note that in this work the guided random forest is guided by the importance scores from an ordinary random forest; however, it can also be guided by other methods such as human insights (by specifying $\lambda_i$). GRF can be used in "RRF" v1.4 (and later versions), a package that also includes the regularized random forest methods.
Houtao Deng
null
1306.0237
null
null
Deep Learning using Linear Support Vector Machines
cs.LG stat.ML
Recently, fully-connected and convolutional neural networks have been trained to achieve state-of-the-art performance on a wide variety of tasks such as speech recognition, image classification, natural language processing, and bioinformatics. For classification tasks, most of these "deep learning" models employ the softmax activation function for prediction and minimize cross-entropy loss. In this paper, we demonstrate a small but consistent advantage of replacing the softmax layer with a linear support vector machine. Learning minimizes a margin-based loss instead of the cross-entropy loss. While there have been various combinations of neural nets and SVMs in prior art, our results using L2-SVMs show that simply replacing softmax with linear SVMs gives significant gains on popular deep learning datasets MNIST, CIFAR-10, and the ICML 2013 Representation Learning Workshop's face expression recognition challenge.
Yichuan Tang
null
1306.0239
null
null
KERT: Automatic Extraction and Ranking of Topical Keyphrases from Content-Representative Document Titles
cs.LG cs.IR
We introduce KERT (Keyphrase Extraction and Ranking by Topic), a framework for topical keyphrase generation and ranking. By shifting from the unigram-centric traditional methods of unsupervised keyphrase extraction to a phrase-centric approach, we are able to directly compare and rank phrases of different lengths. We construct a topical keyphrase ranking function which implements the four criteria that represent high quality topical keyphrases (coverage, purity, phraseness, and completeness). The effectiveness of our approach is demonstrated on two collections of content-representative titles in the domains of Computer Science and Physics.
Marina Danilevsky, Chi Wang, Nihit Desai, Jingyi Guo, Jiawei Han
null
1306.0271
null
null
Probabilistic Solutions to Differential Equations and their Application to Riemannian Statistics
stat.ML cs.LG math.NA
We study a probabilistic numerical method for the solution of both boundary and initial value problems that returns a joint Gaussian process posterior over the solution. Such methods have concrete value in the statistics on Riemannian manifolds, where non-analytic ordinary differential equations are involved in virtually all computations. The probabilistic formulation permits marginalising the uncertainty of the numerical solution such that statistics are less sensitive to inaccuracies. This leads to new Riemannian algorithms for mean value computations and principal geodesic analysis. Marginalisation also means results can be less precise than point estimates, enabling a noticeable speed-up over the state of the art. Our approach is an argument for a wider point that uncertainty caused by numerical calculations should be tracked throughout the pipeline of machine learning algorithms.
Philipp Hennig and S{\o}ren Hauberg
null
1306.0308
null
null
Learning from networked examples in a k-partite graph
cs.LG stat.ML
Many machine learning algorithms are based on the assumption that training examples are drawn independently. However, this assumption does not hold anymore when learning from a networked sample where two or more training examples may share common features. We propose an efficient weighting method for learning from networked examples and prove a sample error bound that is better than previous work.
Yuyi Wang, Jan Ramon and Zheng-Chu Guo
null
1306.0393
null
null
Riemannian metrics for neural networks II: recurrent networks and learning symbolic data sequences
cs.NE cs.LG
Recurrent neural networks are powerful models for sequential data, able to represent complex dependencies in the sequence that simpler models such as hidden Markov models cannot handle. Yet they are notoriously hard to train. Here we introduce a training procedure using a gradient ascent in a Riemannian metric: this produces an algorithm independent from design choices such as the encoding of parameters and unit activities. This metric gradient ascent is designed to have an algorithmic cost close to backpropagation through time for sparsely connected networks. We use this procedure on gated leaky neural networks (GLNNs), a variant of recurrent neural networks with an architecture inspired by finite automata and an evolution equation inspired by continuous-time networks. GLNNs trained with a Riemannian gradient are demonstrated to effectively capture a variety of structures in synthetic problems: basic block nesting as in context-free grammars (an important feature of natural languages, but difficult to learn), intersections of multiple independent Markov-type relations, or long-distance relationships such as the distant-XOR problem. This method does not require adjusting the network structure or initial parameters: the network used is a sparse random graph and the initialization is identical for all problems considered.
Yann Ollivier
null
1306.0514
null
null
On the Performance Bounds of some Policy Search Dynamic Programming Algorithms
cs.AI cs.LG
We consider the infinite-horizon discounted optimal control problem formalized by Markov Decision Processes. We focus on Policy Search algorithms, that compute an approximately optimal policy by following the standard Policy Iteration (PI) scheme via an $\epsilon$-approximate greedy operator (Kakade and Langford, 2002; Lazaric et al., 2010). We describe existing and a few new performance bounds for Direct Policy Iteration (DPI) (Lagoudakis and Parr, 2003; Fern et al., 2006; Lazaric et al., 2010) and Conservative Policy Iteration (CPI) (Kakade and Langford, 2002). By paying particular attention to the concentrability constants involved in such guarantees, we notably argue that the guarantee of CPI is much better than that of DPI, but this comes at the cost of a relative increase of time complexity that is exponential in $\frac{1}{\epsilon}$. We then describe an algorithm, Non-Stationary Direct Policy Iteration (NSDPI), that can either be seen as 1) a variation of Policy Search by Dynamic Programming by Bagnell et al. (2003) to the infinite horizon situation or 2) a simplified version of the Non-Stationary PI with growing period of Scherrer and Lesner (2012). We provide an analysis of this algorithm, which shows in particular that it enjoys the best of both worlds: its performance guarantee is similar to that of CPI, but within a time complexity similar to that of DPI.
Bruno Scherrer (INRIA Nancy - Grand Est / LORIA)
null
1306.0539
null
null
Identifying Pairs in Simulated Bio-Medical Time-Series
cs.LG cs.CE
The paper presents a time-series-based classification approach to identify similarities in pairs of simulated human-generated patterns. An example of a pattern is a time-series representing a heart rate during a specific time-range, wherein the time-series is a sequence of data points that represent the changes in the heart rate values. A bio-medical simulator system was developed to acquire a collection of 7,871 price patterns of financial instruments. The financial instruments, traded in real-time on three American stock exchanges, NASDAQ, NYSE, and AMEX, simulate bio-medical measurements. The system simulates a human in which each price pattern represents one bio-medical sensor. Data provided during trading hours from the stock exchanges allowed real-time classification. Classification is based on new machine learning techniques: self-labeling, which allows the application of supervised learning methods to unlabeled time-series, and similarity ranking, which is applied to a decision tree learning algorithm to classify time-series regardless of type and quantity.
Uri Kartoun
null
1306.0541
null
null
Predicting Parameters in Deep Learning
cs.LG cs.NE stat.ML
We demonstrate that there is significant redundancy in the parameterization of several deep learning models. Given only a few weight values for each feature it is possible to accurately predict the remaining values. Moreover, we show that not only can the parameter values be predicted, but many of them need not be learned at all. We train several different architectures by learning only a small number of weights and predicting the rest. In the best case we are able to predict more than 95% of the weights of a network without any drop in accuracy.
Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, Nando de Freitas
null
1306.0543
null
null
Distributed k-Means and k-Median Clustering on General Topologies
cs.LG cs.DC stat.ML
This paper provides new algorithms for distributed clustering for two popular center-based objectives, k-median and k-means. These algorithms have provable guarantees and improve communication complexity over existing approaches. Following a classic approach in clustering by \cite{har2004coresets}, we reduce the problem of finding a clustering with low cost to the problem of finding a coreset of small size. We provide a distributed method for constructing a global coreset which improves over the previous methods by reducing the communication complexity, and which works over general communication topologies. Experimental results on large scale data sets show that this approach outperforms other coreset-based distributed clustering algorithms.
Maria Florina Balcan, Steven Ehrlich, Yingyu Liang
null
1306.0604
null
null
Prediction with Missing Data via Bayesian Additive Regression Trees
stat.ML cs.LG
We present a method for incorporating missing data in non-parametric statistical learning without the need for imputation. We focus on a tree-based method, Bayesian Additive Regression Trees (BART), enhanced with "Missingness Incorporated in Attributes," a recently proposed approach for incorporating missingness into decision trees (Twala, 2008). This procedure takes advantage of the partitioning mechanisms found in tree-based models. Simulations on generated models and real data indicate that our proposed method can forecast well on complicated missing-at-random and not-missing-at-random models, as well as models where missingness itself influences the response. Our procedure has higher predictive performance and is more stable than competitors in many cases. We also illustrate BART's abilities to incorporate missingness into uncertainty intervals and to detect the influence of missingness on the model fit.
Adam Kapelner and Justin Bleich
null
1306.0618
null
null
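A hedged sketch of the "Missingness Incorporated in Attributes" idea as a preprocessing step for an off-the-shelf learner: append one missingness indicator per column and fill the holes, so splits can use missingness itself as a predictor. (BART proper builds this into the split rules rather than the feature matrix; the fill value is an illustrative assumption.)

```python
import numpy as np

def mia_augment(X, fill=0.0):
    """Append missingness dummies and fill NaN holes: [features | indicators]."""
    miss = np.isnan(X).astype(float)          # miss[i, j] = 1 if X[i, j] missing
    X_filled = np.where(np.isnan(X), fill, X)
    return np.hstack([X_filled, miss])

X = np.array([[1.0, np.nan],
              [2.0, 5.0]])
print(mia_augment(X))
# [[1. 0. 0. 1.]
#  [2. 5. 0. 0.]]
```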
Provable Inductive Matrix Completion
cs.LG cs.IT math.IT stat.ML
Consider a movie recommendation system where apart from the ratings information, side information such as the user's age or the movie's genre is also available. Unlike standard matrix completion, in this setting one should be able to predict inductively on new users/movies. In this paper, we study the problem of inductive matrix completion in the exact recovery setting. That is, we assume that the ratings matrix is generated by applying feature vectors to a low-rank matrix and the goal is to recover back the underlying matrix. Furthermore, we generalize the problem to that of low-rank matrix estimation using rank-1 measurements. We study this generic problem and provide conditions that the set of measurements should satisfy so that the alternating minimization method (which otherwise is a non-convex method with no convergence guarantees) is able to recover back the {\em exact} underlying low-rank matrix. In addition to inductive matrix completion, we show that two other low-rank estimation problems can be studied in our framework: a) general low-rank matrix sensing using rank-1 measurements, and b) multi-label regression with missing labels. For both problems, we provide novel and interesting bounds on the number of measurements required by alternating minimization to provably converge to the {\em exact} low-rank matrix. In particular, our analysis for the general low-rank matrix sensing problem significantly improves on the storage and computational cost required by the RIP-based matrix sensing methods \cite{RechtFP2007}. Finally, we provide empirical validation of our approach and demonstrate that alternating minimization is able to recover the true matrix for the above mentioned problems using a small number of measurements.
Prateek Jain and Inderjit S. Dhillon
null
1306.0626
null
null
Online Learning under Delayed Feedback
cs.LG cs.AI stat.ML
Online learning with delayed feedback has received increasing attention recently due to its several applications in distributed, web-based learning problems. In this paper we provide a systematic study of the topic, and analyze the effect of delay on the regret of online learning algorithms. Somewhat surprisingly, it turns out that delay increases the regret in a multiplicative way in adversarial problems, and in an additive way in stochastic problems. We give meta-algorithms that transform, in a black-box fashion, algorithms developed for the non-delayed case into ones that can handle the presence of delays in the feedback loop. Modifications of the well-known UCB algorithm are also developed for the bandit problem with delayed feedback, with the advantage over the meta-algorithms that they can be implemented with lower complexity.
Pooria Joulani, Andr\'as Gy\"orgy, Csaba Szepesv\'ari
null
1306.0686
null
null
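A hedged sketch of the black-box reduction described above: keep a pool of independent copies of a non-delayed base learner, give each round to a copy that is not currently waiting for feedback, and route each delayed reward back to the copy that generated the corresponding action. The toy epsilon-greedy base learner is an illustrative stand-in for any non-delayed algorithm.

```python
import random

class EpsGreedyBase:
    """Toy 2-armed epsilon-greedy learner with the non-delayed interface."""
    def __init__(self):
        self.n = [0, 0]; self.s = [0.0, 0.0]; self.last = None
    def act(self):
        if random.random() < 0.1 or 0 in self.n:
            self.last = random.randrange(2)
        else:
            self.last = max((0, 1), key=lambda a: self.s[a] / self.n[a])
        return self.last
    def update(self, r):
        self.n[self.last] += 1; self.s[self.last] += r

class DelayedFeedbackWrapper:
    """Black-box reduction: one base copy per outstanding (delayed) round."""
    def __init__(self, make_base):
        self.make_base = make_base
        self.free = []       # copies ready to act again
        self.pending = {}    # round id -> copy awaiting its feedback

    def act(self, t):
        base = self.free.pop() if self.free else self.make_base()
        self.pending[t] = base
        return base.act()

    def feedback(self, t, reward):
        base = self.pending.pop(t)   # the copy that produced round t's action
        base.update(reward)
        self.free.append(base)

# usage: wrapper = DelayedFeedbackWrapper(EpsGreedyBase)
#        a = wrapper.act(t) ... later ... wrapper.feedback(t, r)
```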
Fast Gradient-Based Inference with Continuous Latent Variable Models in Auxiliary Form
cs.LG stat.ML
We propose a technique for increasing the efficiency of gradient-based inference and learning in Bayesian networks with multiple layers of continuous latent variables. We show that, in many cases, it is possible to express such models in an auxiliary form, where continuous latent variables are conditionally deterministic given their parents and a set of independent auxiliary variables. Variables of models in this auxiliary form have much larger Markov blankets, leading to significant speedups in gradient-based inference, e.g. rapid mixing Hybrid Monte Carlo and efficient gradient-based optimization. The relative efficiency is confirmed in experiments.
Diederik P Kingma
null
1306.0733
null
null
A Gang of Bandits
cs.LG cs.SI stat.ML
Multi-armed bandit problems are receiving a great deal of attention because they adequately formalize the exploration-exploitation trade-offs arising in several industrially relevant applications, such as online advertisement and, more generally, recommendation systems. In many cases, however, these applications have a strong social component, whose integration in the bandit algorithm could lead to a dramatic performance increase. For instance, we may want to serve content to a group of users by taking advantage of an underlying network of social relationships among them. In this paper, we introduce novel algorithmic approaches to the solution of such networked bandit problems. More specifically, we design and analyze a global strategy which allocates a bandit algorithm to each network node (user) and allows it to "share" signals (contexts and payoffs) with the neighboring nodes. We then derive two more scalable variants of this strategy based on different ways of clustering the graph nodes. We experimentally compare the algorithm and its variants to state-of-the-art methods for contextual bandits that do not use the relational information. Our experiments, carried out on synthetic and real-world datasets, show a marked increase in prediction performance obtained by exploiting the network structure.
Nicol\`o Cesa-Bianchi, Claudio Gentile and Giovanni Zappella
null
1306.0811
null
null
Kernel Mean Estimation and Stein's Effect
stat.ML cs.LG math.ST stat.TH
A mean function in reproducing kernel Hilbert space, or a kernel mean, is an important part of many applications ranging from kernel principal component analysis to Hilbert-space embedding of distributions. Given finite samples, an empirical average is the standard estimate for the true kernel mean. We show that this estimator can be improved via a well-known phenomenon in statistics called Stein's phenomenon. After consideration, our theoretical analysis reveals the existence of a wide class of estimators that are better than the standard. Focusing on a subset of this class, we propose efficient shrinkage estimators for the kernel mean. Empirical evaluations on several benchmark applications clearly demonstrate that the proposed estimators outperform the standard kernel mean estimator.
Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, Arthur Gretton, Bernhard Sch\"olkopf
null
1306.0842
null
null
$\propto$SVM for learning with label proportions
cs.LG stat.ML
We study the problem of learning with label proportions in which the training data is provided in groups and only the proportion of each class in each group is known. We propose a new method called proportion-SVM, or $\propto$SVM, which explicitly models the latent unknown instance labels together with the known group label proportions in a large-margin framework. Unlike the existing works, our approach avoids making restrictive assumptions about the data. The $\propto$SVM model leads to a non-convex integer programming problem. In order to solve it efficiently, we propose two algorithms: one based on simple alternating optimization and the other based on a convex relaxation. Extensive experiments on standard datasets show that $\propto$SVM outperforms the state-of-the-art, especially for larger group sizes.
Felix X. Yu, Dong Liu, Sanjiv Kumar, Tony Jebara, Shih-Fu Chang
null
1306.0886
null
null
(More) Efficient Reinforcement Learning via Posterior Sampling
stat.ML cs.LG
Most provably-efficient learning algorithms introduce optimism about poorly-understood states and actions to encourage exploration. We study an alternative approach for efficient exploration, posterior sampling for reinforcement learning (PSRL). This algorithm proceeds in repeated episodes of known duration. At the start of each episode, PSRL updates a prior distribution over Markov decision processes and takes one sample from this posterior. PSRL then follows the policy that is optimal for this sample during the episode. The algorithm is conceptually simple, computationally efficient and allows an agent to encode prior knowledge in a natural way. We establish an $\tilde{O}(\tau S \sqrt{AT})$ bound on the expected regret, where $T$ is time, $\tau$ is the episode length and $S$ and $A$ are the cardinalities of the state and action spaces. This bound is one of the first for an algorithm not based on optimism, and close to the state of the art for any reinforcement learning algorithm. We show through simulation that PSRL significantly outperforms existing algorithms with similar regret bounds.
Ian Osband, Daniel Russo, Benjamin Van Roy
null
1306.0940
null
null
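The episode loop described above is short enough to sketch in full. Below is a minimal tabular instantiation, assuming known rewards and an independent Dirichlet posterior over each row of the transition matrix (a common concrete choice; the paper states the algorithm for a general prior). The environment interface is an assumption.

```python
import numpy as np

def psrl(env, S, A, R, episodes, tau, gamma=0.95, rng=None):
    """Tabular PSRL sketch. R has shape (S, A); rewards assumed known."""
    rng = rng or np.random.default_rng(0)
    counts = np.ones((S, A, S))                 # Dirichlet(1,...,1) prior
    for _ in range(episodes):
        # 1. sample one MDP from the current posterior
        P = np.array([[rng.dirichlet(counts[s, a]) for a in range(A)]
                      for s in range(S)])       # shape (S, A, S)
        # 2. solve the sampled MDP (value iteration) and take its greedy policy
        V = np.zeros(S)
        for _ in range(200):
            Q = R + gamma * P @ V               # Q[s, a]
            V = Q.max(axis=1)
        pi = Q.argmax(axis=1)
        # 3. follow that policy for the whole episode, updating the posterior
        s = env.reset()                         # assumed environment interface
        for _ in range(tau):
            a = int(pi[s])
            s2, _ = env.step(a)
            counts[s, a, s2] += 1
            s = s2
    return counts
```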
Bayesian Differential Privacy through Posterior Sampling
stat.ML cs.LG
Differential privacy formalises privacy-preserving mechanisms that provide access to a database. We pose the question of whether Bayesian inference itself can be used directly to provide private access to data, with no modification. The answer is affirmative: under certain conditions on the prior, sampling from the posterior distribution can be used to achieve a desired level of privacy and utility. To do so, we generalise differential privacy to arbitrary dataset metrics, outcome spaces and distribution families. This allows us to also deal with non-i.i.d or non-tabular datasets. We prove bounds on the sensitivity of the posterior to the data, which gives a measure of robustness. We also show how to use posterior sampling to provide differentially private responses to queries, within a decision-theoretic framework. Finally, we provide bounds on the utility and on the distinguishability of datasets. The latter are complemented by a novel use of Le Cam's method to obtain lower bounds. All our general results hold for arbitrary database metrics, including those for the common definition of differential privacy. For specific choices of the metric, we give a number of examples satisfying our assumptions.
Christos Dimitrakakis and Blaine Nelson and Zuhe Zhang and Aikaterini Mitrokotsa and Benjamin Rubinstein
null
1306.1066
null
null
Discriminative Parameter Estimation for Random Walks Segmentation: Technical Report
cs.CV cs.LG
The Random Walks (RW) algorithm is one of the most efficient and easy-to-use probabilistic segmentation methods. By combining contrast terms with prior terms, it provides accurate segmentations of medical images in a fully automated manner. However, one of the main drawbacks of using the RW algorithm is that its parameters have to be hand-tuned. We propose a novel discriminative learning framework that estimates the parameters using a training dataset. The main challenge we face is that the training samples are not fully supervised. Specifically, they provide a hard segmentation of the images, instead of a probabilistic segmentation. We overcome this challenge by treating the optimal probabilistic segmentation that is compatible with the given hard segmentation as a latent variable. This allows us to employ the latent support vector machine formulation for parameter estimation. We show that our approach significantly outperforms the baseline methods on a challenging dataset consisting of real clinical 3D MRI volumes of skeletal muscles.
Pierre-Yves Baudin (INRIA Saclay - Ile de France), Danny Goodman, Puneet Kumar (INRIA Saclay - Ile de France, CVN), Noura Azzabou (MIRCEN, UPMC), Pierre G. Carlier (UPMC), Nikos Paragios (INRIA Saclay - Ile de France, LIGM, ENPC, MAS), M. Pawan Kumar (INRIA Saclay - Ile de France, CVN)
null
1306.1083
null
null
Deep Generative Stochastic Networks Trainable by Backprop
cs.LG
We introduce a novel training principle for probabilistic models that is an alternative to maximum likelihood. The proposed Generative Stochastic Networks (GSN) framework is based on learning the transition operator of a Markov chain whose stationary distribution estimates the data distribution. The transition distribution of the Markov chain is conditional on the previous state, generally involving a small move, so this conditional distribution has fewer dominant modes, being unimodal in the limit of small moves. Thus, it is easier to learn because it is easier to approximate its partition function, more like learning to perform supervised function approximation, with gradients that can be obtained by backprop. We provide theorems that generalize recent work on the probabilistic interpretation of denoising autoencoders and obtain along the way an interesting justification for dependency networks and generalized pseudolikelihood, along with a definition of an appropriate joint distribution and sampling mechanism even when the conditionals are not consistent. GSNs can be used with missing inputs and can be used to sample subsets of variables given the rest. We validate these theoretical results with experiments on two image datasets using an architecture that mimics the Deep Boltzmann Machine Gibbs sampler but allows training to proceed with simple backprop, without the need for layerwise pretraining.
Yoshua Bengio, \'Eric Thibodeau-Laufer, Guillaume Alain and Jason Yosinski
null
1306.1091
null
null
Multiclass Total Variation Clustering
stat.ML cs.LG math.OC
Ideas from the image processing literature have recently motivated a new set of clustering algorithms that rely on the concept of total variation. While these algorithms perform well for bi-partitioning tasks, their recursive extensions yield unimpressive results for multiclass clustering tasks. This paper presents a general framework for multiclass total variation clustering that does not rely on recursion. The results greatly outperform previous total variation algorithms and compare well with state-of-the-art NMF approaches.
Xavier Bresson, Thomas Laurent, David Uminsky and James H. von Brecht
null
1306.1185
null
null
Multiclass Semi-Supervised Learning on Graphs using Ginzburg-Landau Functional Minimization
stat.ML cs.LG math.ST physics.data-an stat.TH
We present a graph-based variational algorithm for classification of high-dimensional data, generalizing the binary diffuse interface model to the case of multiple classes. Motivated by total variation techniques, the method involves minimizing an energy functional made up of three terms. The first two terms promote a stepwise continuous classification function with sharp transitions between classes, while preserving symmetry among the class labels. The third term is a data fidelity term, allowing us to incorporate prior information into the model in a semi-supervised framework. The performance of the algorithm on synthetic data, as well as on the COIL and MNIST benchmark datasets, is competitive with state-of-the-art graph-based multiclass segmentation methods.
Cristina Garcia-Cardona, Arjuna Flenner, Allon G. Percus
null
1306.1298
null
null
Verdict Accuracy of Quick Reduct Algorithm using Clustering and Classification Techniques for Gene Expression Data
cs.LG cs.CE stat.ML
In most gene expression data, the number of training samples is very small compared to the large number of genes involved in the experiments. However, among the large number of genes, only a small fraction is effective for performing a certain task. Furthermore, a small subset of genes is desirable in developing gene expression based diagnostic tools for delivering reliable and understandable results. With the gene selection results, the cost of biological experiments and decisions can be greatly reduced by analyzing only the marker genes. An important application of gene expression data in functional genomics is to classify samples according to their gene expression profiles. Feature selection (FS) is a process which attempts to select more informative features. It is one of the important steps in knowledge discovery. Conventional supervised FS methods evaluate various feature subsets using an evaluation function or metric to select only those features which are related to the decision classes of the data under consideration. This paper studies a feature selection method based on rough set theory. Furthermore, the K-Means and Fuzzy C-Means (FCM) algorithms have been applied to the reduced feature set without considering class labels. The obtained results are then compared with the original class labels. A Back Propagation Network (BPN) has also been used for classification. The performance of K-Means, FCM, and BPN is then analyzed through the confusion matrix. It is found that the BPN performs comparatively well.
T. Chandrasekhar, K. Thangavel, E.N. Sathishkumar
null
1306.1323
null
null
Performance analysis of unsupervised feature selection methods
cs.LG
Feature selection (FS) is a process which attempts to select more informative features. In some cases, too many redundant or irrelevant features may overpower main features for classification. Feature selection can remedy this problem and therefore improve the prediction accuracy and reduce the computational overhead of classification algorithms. The main aim of feature selection is to determine a minimal feature subset from a problem domain while retaining a suitably high accuracy in representing the original features. In this paper, Principal Component Analysis (PCA), Rough PCA, Unsupervised Quick Reduct (USQR) algorithm and Empirical Distribution Ranking (EDR) approaches are applied to discover discriminative features that will be the most adequate ones for classification. Efficiency of the approaches is evaluated using standard classification metrics.
A. Nisthana Parveen, H. Hannah Inbarani, E.N. Sathishkumar
10.1109/ICCCA.2012.6179181
1306.1326
null
null
Diffusion map for clustering fMRI spatial maps extracted by independent component analysis
cs.CE cs.LG stat.ML
Functional magnetic resonance imaging (fMRI) produces data about activity inside the brain, from which spatial maps can be extracted by independent component analysis (ICA). In datasets, there are n spatial maps that contain p voxels. The number of voxels is very high compared to the number of analyzed spatial maps. Clustering of the spatial maps is usually based on correlation matrices. This usually works well, although such a similarity matrix inherently can explain only a certain amount of the total variance contained in the high-dimensional data where n is relatively small but p is large. For high-dimensional space, it is reasonable to perform dimensionality reduction before clustering. In this research, we used the recently developed diffusion map for dimensionality reduction in conjunction with spectral clustering. This research revealed that the diffusion map based clustering worked as well as the more traditional methods, and produced more compact clusters when needed.
Tuomo Sipola, Fengyu Cong, Tapani Ristaniemi, Vinoo Alluri, Petri Toiviainen, Elvira Brattico, Asoke K. Nandi
10.1109/MLSP.2013.6661923
1306.1350
null
null
Tight Lower Bound on the Probability of a Binomial Exceeding its Expectation
cs.LG stat.ML
We give the proof of a tight lower bound on the probability that a binomial random variable exceeds its expected value. The inequality plays an important role in a variety of contexts, including the analysis of relative deviation bounds in learning theory and generalization bounds for unbounded loss functions.
Spencer Greenberg, Mehryar Mohri
null
1306.1433
null
null
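The quantity at stake is P[B >= E[B]] for B ~ Binomial(n, p). A quick numerical look at it with scipy is below; this only illustrates the quantity, while the paper's contribution is a proof of a tight lower bound on it.

```python
from scipy.stats import binom
import numpy as np

for n in (10, 100, 1000):
    for p in (0.1, 0.5, 0.9):
        # P[B >= n*p] = 1 - P[B <= ceil(n*p) - 1]
        tail = 1 - binom.cdf(np.ceil(n * p) - 1, n, p)
        print(n, p, round(float(tail), 4))
```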
Highly Scalable, Parallel and Distributed AdaBoost Algorithm using Light Weight Threads and Web Services on a Network of Multi-Core Machines
cs.DC cs.LG
AdaBoost is an important algorithm in machine learning and is being widely used in object detection. AdaBoost works by iteratively selecting the best amongst weak classifiers, and then combines several weak classifiers to obtain a strong classifier. Even though AdaBoost has proven to be very effective, its learning execution time can be quite large depending upon the application e.g., in face detection, the learning time can be several days. Due to its increasing use in computer vision applications, the learning time needs to be drastically reduced so that an adaptive near real time object detection system can be incorporated. In this paper, we develop a hybrid parallel and distributed AdaBoost algorithm that exploits the multiple cores in a CPU via light weight threads, and also uses multiple machines via a web service software architecture to achieve high scalability. We present a novel hierarchical web services based distributed architecture and achieve nearly linear speedup up to the number of processors available to us. In comparison with the previously published work, which used a single level master-slave parallel and distributed implementation [1] and only achieved a speedup of 2.66 on four nodes, we achieve a speedup of 95.1 on 31 workstations each having a quad-core processor, resulting in a learning time of only 4.8 seconds per feature.
Munther Abualkibash, Ahmed ElSayed, Ausif Mahmood
null
1306.1467
null
null
Gaussian Process-Based Decentralized Data Fusion and Active Sensing for Mobility-on-Demand System
cs.RO cs.DC cs.LG cs.MA
Mobility-on-demand (MoD) systems have recently emerged as a promising paradigm of one-way vehicle sharing for sustainable personal urban mobility in densely populated cities. In this paper, we enhance the capability of a MoD system by deploying robotic shared vehicles that can autonomously cruise the streets to be hailed by users. A key challenge to managing the MoD system effectively is that of real-time, fine-grained mobility demand sensing and prediction. This paper presents a novel decentralized data fusion and active sensing algorithm for real-time, fine-grained mobility demand sensing and prediction with a fleet of autonomous robotic vehicles in a MoD system. Our Gaussian process (GP)-based decentralized data fusion algorithm can achieve a fine balance between predictive power and time efficiency. We theoretically guarantee its predictive performance to be equivalent to that of a sophisticated centralized sparse approximation for the GP model: The computation of such a sparse approximate GP model can thus be distributed among the MoD vehicles, hence achieving efficient and scalable demand prediction. Though our decentralized active sensing strategy is devised to gather the most informative demand data for demand prediction, it can achieve a dual effect of fleet rebalancing to service the mobility demands. Empirical evaluation on real-world mobility demand data shows that our proposed algorithm can achieve a better balance between predictive accuracy and time efficiency than state-of-the-art algorithms.
Jie Chen, Kian Hsiang Low, Colin Keng-Yan Tan
null
1306.1491
null
null
Policy Search: Any Local Optimum Enjoys a Global Performance Guarantee
cs.LG cs.AI cs.RO math.OC
Local Policy Search is a popular reinforcement learning approach for handling large state spaces. Formally, it searches locally in a parameterized policy space in order to maximize the associated value function averaged over some predefined distribution. It is probably commonly believed that the best one can hope for in general from such an approach is a local optimum of this criterion. In this article, we show the following surprising result: \emph{any} (approximate) \emph{local optimum} enjoys a \emph{global performance guarantee}. We compare this guarantee with the one satisfied by Direct Policy Iteration, an approximate dynamic programming algorithm that does some form of Policy Search: while the approximation error of Local Policy Search may generally be bigger (because local search requires considering a space of stochastic policies), we argue that the concentrability coefficient that appears in the performance bound is much nicer. Finally, we discuss several practical and theoretical consequences of our analysis.
Bruno Scherrer (INRIA Nancy - Grand Est / LORIA), Matthieu Geist
null
1306.1520
null
null
Fast greedy algorithm for subspace clustering from corrupted and incomplete data
cs.LG cs.DS math.NA stat.ML
We describe the Fast Greedy Sparse Subspace Clustering (FGSSC) algorithm, providing an efficient method for clustering data belonging to a few low-dimensional linear or affine subspaces. The main difference of our algorithm from its predecessors is its ability to work with noisy data having a high rate of erasures (missed entries with known coordinates) and errors (corrupted entries with unknown coordinates). We discuss here how to implement the fast version of the greedy algorithm with maximum efficiency, whose greedy strategy is incorporated into iterations of the basic algorithm. We provide numerical evidence that, in subspace clustering capability, the fast greedy algorithm outperforms not only the existing state-of-the-art SSC algorithm, taken by the authors as the basic algorithm, but also the recent GSSC algorithm. At the same time, its computational cost is only slightly higher than that of SSC. Numerical evidence of the algorithm's significant advantage is presented for a few synthetic models as well as for the Extended Yale B dataset of facial images. In particular, the face recognition misclassification rate turned out to be 6-20 times lower than for the SSC algorithm. We also provide numerical evidence that the FGSSC algorithm is able to perform clustering of corrupted data efficiently even when the sum of subspace dimensions significantly exceeds the dimension of the ambient space.
Alexander Petukhov and Inna Kozlov
null
1306.1716
null
null
Loss-Proportional Subsampling for Subsequent ERM
cs.LG stat.ML
We propose a sampling scheme suitable for reducing a data set prior to selecting a hypothesis with minimum empirical risk. The sampling only considers a subset of the ultimate (unknown) hypothesis set, but can nonetheless guarantee that the final excess risk will compare favorably with utilizing the entire original data set. We demonstrate the practical benefits of our approach on a large dataset which we subsample and subsequently fit with boosted trees.
Paul Mineiro, Nikos Karampatziakis
null
1306.1840
null
null
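A hedged sketch of the sampling scheme: score each example with a cheap pilot model, keep it with probability proportional to its loss, and importance-weight the survivors so that subsequent empirical risk minimization remains unbiased. The pilot-loss interface and the probability clipping are illustrative assumptions.

```python
import numpy as np

def loss_proportional_subsample(X, y, pilot_loss, keep_frac=0.1, rng=None):
    """Return a subsample plus inverse-probability weights for weighted ERM."""
    rng = rng or np.random.default_rng(0)
    loss = pilot_loss(X, y)                   # per-example pilot losses
    p = loss / loss.sum() * (keep_frac * len(y))
    p = np.clip(p, 1e-6, 1.0)                 # valid sampling probabilities
    keep = rng.random(len(y)) < p
    weights = 1.0 / p[keep]                   # unbiasedness correction
    return X[keep], y[keep], weights

# usage: pass `weights` as sample_weight to the downstream learner,
# e.g. GradientBoostingClassifier().fit(Xs, ys, sample_weight=weights)
```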
Emotional Expression Classification using Time-Series Kernels
cs.CV cs.LG stat.ML
Estimation of facial expressions, as spatio-temporal processes, can take advantage of kernel methods if one considers facial landmark positions and their motion in 3D space. We applied support vector classification with kernels derived from dynamic time-warping similarity measures. We achieved over 99% accuracy - measured by area under ROC curve - using only the 'motion pattern' of the PCA compressed representation of the marker point vector, the so-called shape parameters. Beyond the classification of full motion patterns, several expressions were recognized with over 90% accuracy in as few as 5-6 frames from their onset, about 200 milliseconds.
Andras Lorincz, Laszlo Jeni, Zoltan Szabo, Jeffrey Cohn, Takeo Kanade
10.1109/CVPRW.2013.131
1306.1913
null
null
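A hedged sketch of the overall pipeline: a Gaussian kernel built from a dynamic-time-warping distance, plugged into an SVM through a precomputed Gram matrix. Note that exp(-gamma * DTW) is not guaranteed to be positive semi-definite; the paper uses kernels derived from time-warping similarity measures, and this sketch only shows the general shape of such a system.

```python
import numpy as np
from sklearn.svm import SVC

def dtw(a, b):
    """Classic O(len(a)*len(b)) DTW distance between two 1-D sequences."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

def gram(seqs_a, seqs_b, gamma=0.1):
    """Gram matrix of the (indefinite) Gaussian-of-DTW similarity."""
    return np.exp(-gamma * np.array([[dtw(a, b) for b in seqs_b]
                                     for a in seqs_a]))

# usage: K_train = gram(train_seqs, train_seqs)
#        clf = SVC(kernel="precomputed").fit(K_train, labels)
#        preds = clf.predict(gram(test_seqs, train_seqs))
```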
Minimax Theory for High-dimensional Gaussian Mixtures with Sparse Mean Separation
stat.ML cs.LG math.ST stat.TH
While several papers have investigated computationally and statistically efficient methods for learning Gaussian mixtures, precise minimax bounds for their statistical performance as well as fundamental limits in high-dimensional settings are not well-understood. In this paper, we provide precise information theoretic bounds on the clustering accuracy and sample complexity of learning a mixture of two isotropic Gaussians in high dimensions under small mean separation. If there is a sparse subset of relevant dimensions that determine the mean separation, then the sample complexity only depends on the number of relevant dimensions and mean separation, and can be achieved by a simple computationally efficient procedure. Our results provide the first step of a theoretical basis for recent methods that combine feature selection and clustering.
Martin Azizyan, Aarti Singh, Larry Wasserman
null
1306.2035
null
null
Logistic Tensor Factorization for Multi-Relational Data
stat.ML cs.LG
Tensor factorizations have become increasingly popular approaches for various learning tasks on structured data. In this work, we extend the RESCAL tensor factorization, which has shown state-of-the-art results for multi-relational learning, to account for the binary nature of adjacency tensors. We study the improvements that can be gained via this approach on various benchmark datasets and show that the logistic extension can improve the prediction results significantly.
Maximilian Nickel, Volker Tresp
null
1306.2084
null
null
Predicting Risk-of-Readmission for Congestive Heart Failure Patients: A Multi-Layer Approach
cs.LG stat.AP
Mitigating the risk-of-readmission of Congestive Heart Failure (CHF) patients within 30 days of discharge is important because such readmissions are not only expensive but also a critical indicator of provider care and quality of treatment. Accurately predicting the risk-of-readmission may allow hospitals to identify high-risk patients and eventually improve quality of care by identifying factors that contribute to such readmissions in many scenarios. In this paper, we investigate the problem of predicting risk-of-readmission as a supervised learning problem, using a multi-layer classification approach. Earlier contributions inadequately attempted to assess a risk value for 30-day readmission by building a direct predictive model, as opposed to our approach. We first split the problem into various stages: (a) at risk in general, (b) risk within 60 days, (c) risk within 30 days, and then build suitable classifiers for each stage, thereby increasing the ability to accurately predict the risk using multiple layers of decision. The advantage of our approach is that we can use different classification models for the subtasks that are more suited for the respective problems. Moreover, each of the subtasks can be solved using different features and training data, leading to a highly confident diagnosis or risk compared to a one-shot single-layer approach. An experimental evaluation on actual hospital patient record data from Multicare Health Systems shows that our model is significantly better at predicting risk-of-readmission of CHF patients within 30 days after discharge compared to prior attempts.
Kiyana Zolfaghar, Nele Verbiest, Jayshree Agarwal, Naren Meadem, Si-Chi Chin, Senjuti Basu Roy, Ankur Teredesai, David Hazel, Paul Amoroso, Lester Reed
null
1306.2094
null
null
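A hedged sketch of the multi-layer idea from the preceding abstract: stage-wise classifiers where each stage only sees patients flagged as at-risk by the previous stage, with different model classes per stage. The features, labels, and model choices below are synthetic placeholders, not the paper's data or exact models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))                 # stand-in patient features
days = rng.exponential(90, size=1000)           # stand-in time-to-readmission
y_any = (days < 365).astype(int)                # (a) at risk in general
y_60 = (days < 60).astype(int)                  # (b) risk within 60 days
y_30 = (days < 30).astype(int)                  # (c) risk within 30 days

# Stage (a): broad screen for any-readmission risk.
stage_a = LogisticRegression(max_iter=1000).fit(X, y_any)
mask_a = stage_a.predict(X) == 1

# Stage (b): among at-risk patients, risk within 60 days (different model class).
stage_b = RandomForestClassifier(random_state=0).fit(X[mask_a], y_60[mask_a])
mask_b = np.zeros(len(X), dtype=bool)
mask_b[np.where(mask_a)[0][stage_b.predict(X[mask_a]) == 1]] = True

# Stage (c): final 30-day risk call on the remaining cohort.
stage_c = RandomForestClassifier(random_state=1).fit(X[mask_b], y_30[mask_b])
final_flag = np.zeros(len(X), dtype=int)
final_flag[np.where(mask_b)[0][stage_c.predict(X[mask_b]) == 1]] = 1
print("patients flagged as 30-day risk:", final_flag.sum())
```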
A Novel Approach for Single Gene Selection Using Clustering and Dimensionality Reduction
cs.CE cs.LG
We extend the standard rough set-based approach to deal with huge numbers of numeric attributes versus a small number of available objects. A novel approach combining clustering with dimensionality reduction, the Hybrid Fuzzy C Means-Quick Reduct (FCMQR) algorithm, is proposed for single gene selection. Gene selection is a process to select genes which are more informative, and it is one of the important steps in knowledge discovery. The problem is that not all genes are important in gene expression data: some of the genes may be redundant, and others may be irrelevant and noisy. In this study, the entire dataset is divided into proper groupings of similar genes by applying the Fuzzy C Means (FCM) algorithm. Highly class-discriminative genes are then selected based on their degree of dependence by applying the Quick Reduct algorithm, based on Rough Set Theory, to each of the resultant clusters. The Average Correlation Value (ACV) is calculated for the highly class-discriminative genes. The clusters which have an ACV of 1 are determined to be significant clusters, whose classification accuracy will be equal to or higher than the accuracy of the entire dataset. The proposed algorithm is evaluated and compared using WEKA classifiers. Finally, experimental results on the leukemia cancer data confirm that our approach is quite promising, though it surely requires further research.
E.N.Sathishkumar, K.Thangavel, T.Chandrasekhar
null
1306.2118
null
null
Asymptotically Optimal Sequential Estimation of the Mean Based on Inclusion Principle
math.ST cs.LG math.PR stat.TH
A large class of problems in science and engineering can be formulated as the general problem of constructing random intervals with pre-specified coverage probabilities for the mean. We propose a general approach for statistical inference of mean values based on accumulated observational data. We show that the construction of such random intervals can be accomplished by comparing the endpoints of random intervals with confidence sequences for the mean. Asymptotic results are obtained for such sequential methods.
Xinjia Chen
null
1306.2290
null
null
Markov random fields factorization with context-specific independences
cs.AI cs.LG
Markov random fields provide a compact representation of joint probability distributions by representing their independence properties in an undirected graph. The well-known Hammersley-Clifford theorem uses these conditional independences to factorize a Gibbs distribution into a set of factors. However, an important issue of using a graph to represent independences is that it cannot encode some types of independence relations, such as context-specific independences (CSIs). These are a particular case of conditional independences that hold only for certain assignments of the conditioning set, in contrast to conditional independences, which must hold for all assignments. This work presents a method for factorizing a Markov random field according to the CSIs present in a distribution, and formally guarantees that this factorization is correct. This is presented in our main contribution, the context-specific Hammersley-Clifford theorem, a generalization to CSIs of the Hammersley-Clifford theorem, which applies to conditional independences.
Alejandro Edera, Facundo Bromberg, and Federico Schl\"uter
null
1306.2295
null
null
Generative Model Selection Using a Scalable and Size-Independent Complex Network Classifier
cs.SI cs.LG physics.soc-ph stat.ML
Real networks exhibit nontrivial topological features such as heavy-tailed degree distribution, high clustering, and small-worldness. Researchers have developed several generative models for synthesizing artificial networks that are structurally similar to real networks. An important research problem is to identify the generative model that best fits a target network. In this paper, we investigate this problem; our goal is to select the model that is able to generate graphs similar to a given network instance. By generating synthetic networks with seven prominent generative models, we have utilized machine learning methods to develop a decision tree for model selection. Our proposed method, which is named "Generative Model Selection for Complex Networks" (GMSCN), outperforms existing methods with respect to accuracy, scalability and size-independence.
Sadegh Motallebi, Sadegh Aliakbary, Jafar Habibi
10.1063/1.4840235
1306.2298
null
null
Auditing: Active Learning with Outcome-Dependent Query Costs
cs.LG
We propose a learning setting in which unlabeled data is free, and the cost of a label depends on its value, which is not known in advance. We study binary classification in an extreme case where the algorithm only pays for negative labels. Our motivation is applications such as fraud detection, in which investigating an honest transaction should be avoided if possible. We term the setting auditing, and consider the auditing complexity of an algorithm: the number of negative labels the algorithm requires in order to learn a hypothesis with low relative error. We design auditing algorithms for simple hypothesis classes (thresholds and rectangles), and show that with these algorithms, the auditing complexity can be significantly lower than the active label complexity. We also discuss a general competitive approach for auditing and possible modifications to the framework.
Sivan Sabato and Anand D. Sarwate and Nathan Srebro
null
1306.2347
null
null
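The threshold case from the preceding abstract can be illustrated concretely. The sketch below is a plain binary search that learns a one-dimensional decision threshold while counting only the negative labels it had to pay for; since binary search makes O(log n) queries in total, at most O(log n) of them can be negative. The oracle and data are synthetic assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.sort(rng.uniform(0, 1, 200))     # unlabeled pool (free)
true_threshold = 0.63                    # points at or above it are positive

negatives_paid = 0
def query(x):
    """Label oracle; negative answers are the costly ones in auditing."""
    global negatives_paid
    label = int(x >= true_threshold)
    if label == 0:
        negatives_paid += 1
    return label

# Binary search: O(log n) queries total, hence at most O(log n) negatives paid.
lo, hi = 0, len(xs) - 1
while lo < hi:
    mid = (lo + hi) // 2
    if query(xs[mid]) == 1:
        hi = mid            # threshold is at or below xs[mid]
    else:
        lo = mid + 1        # threshold is above xs[mid]

print(f"learned threshold ~ {xs[lo]:.3f}, negative labels paid: {negatives_paid}")
```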
DISCOMAX: A Proximity-Preserving Distance Correlation Maximization Algorithm
cs.LG stat.ML
In a regression setting we propose algorithms that reduce the dimensionality of the features while simultaneously maximizing a statistical measure of dependence known as distance correlation between the low-dimensional features and a response variable. This helps in solving the prediction problem with a low-dimensional set of features. Our setting is different from subset-selection algorithms, where the problem is to choose the best subset of features for regression. Instead, we attempt to generate a new set of low-dimensional features, as in a feature-learning setting. We keep our proposed approach model-free: the algorithm does not assume the application of any specific regression model in conjunction with the low-dimensional features that it learns. The algorithm is iterative and is formulated as a combination of the majorization-minimization and concave-convex optimization procedures. We also present spectral-radius-based convergence results for the proposed iterations.
Praneeth Vepakomma and Ahmed Elgammal
null
1306.2533
null
null
Efficient Classification for Metric Data
cs.LG cs.DS stat.ML
Recent advances in large-margin classification of data residing in general metric spaces (rather than Hilbert spaces) enable classification under various natural metrics, such as string edit and earthmover distance. A general framework developed for this purpose by von Luxburg and Bousquet [JMLR, 2004] left open the questions of computational efficiency and of providing direct bounds on generalization error. We design a new algorithm for classification in general metric spaces, whose runtime and accuracy depend on the doubling dimension of the data points, and can thus achieve superior classification performance in many common scenarios. The algorithmic core of our approach is an approximate (rather than exact) solution to the classical problems of Lipschitz extension and of Nearest Neighbor Search. The algorithm's generalization performance is guaranteed via the fat-shattering dimension of Lipschitz classifiers, and we present experimental evidence of its superiority to some common kernel methods. As a by-product, we offer a new perspective on the nearest neighbor classifier, which yields significantly sharper risk asymptotics than the classic analysis of Cover and Hart [IEEE Trans. Info. Theory, 1967].
Lee-Ad Gottlieb and Aryeh Kontorovich and Robert Krauthgamer
null
1306.2547
null
null
The association problem in wireless networks: a Policy Gradient Reinforcement Learning approach
cs.NI cs.IT cs.LG math.IT
The purpose of this paper is to develop a self-optimized association algorithm based on PGRL (Policy Gradient Reinforcement Learning) which is scalable, stable, and robust. Robust means that performance degradation in the learning phase should be forbidden or limited to predefined thresholds. The algorithm is model-free (as opposed to Value Iteration) and robust (as opposed to Q-Learning). The association problem is modeled as a Markov Decision Process (MDP), and the policy space is parameterized. The parameterized family of policies is then used as expert knowledge for the PGRL. The PGRL converges towards a local optimum, and the average cost decreases monotonically during the learning process. The properties of the solution make it a good candidate for practical implementation. Furthermore, the robustness property makes it possible to use the PGRL algorithm in an "always-on" learning mode.
Richard Combes and Ilham El Bouloumi and Stephane Senecal and Zwi Altman
null
1306.2554
null
null
Concentration bounds for temporal difference learning with linear function approximation: The case of batch data and uniform sampling
cs.LG stat.ML
We propose a stochastic approximation (SA) based method with randomization of samples for policy evaluation using the least squares temporal difference (LSTD) algorithm. Our proposed scheme is equivalent to running regular temporal difference learning with linear function approximation, albeit with samples picked uniformly from a given dataset. Our method results in an $O(d)$ improvement in complexity in comparison to LSTD, where $d$ is the dimension of the data. We provide non-asymptotic bounds for our proposed method, both in high probability and in expectation, under the assumption that the matrix underlying the LSTD solution is positive definite. The latter assumption can be easily satisfied for the pathwise LSTD variant proposed in [23]. Moreover, we also establish that using our method in place of LSTD does not impact the rate of convergence of the approximate value function to the true value function. These rate results coupled with the low computational complexity of our method make it attractive for implementation in big data settings, where $d$ is large. A similar low-complexity alternative for least squares regression is well-known as the stochastic gradient descent (SGD) algorithm. We provide finite-time bounds for SGD. We demonstrate the practicality of our method as an efficient alternative for pathwise LSTD empirically by combining it with the least squares policy iteration (LSPI) algorithm in a traffic signal control application. We also conduct another set of experiments that combines the SA based low-complexity variant for least squares regression with the LinUCB algorithm for contextual bandits, using the large scale news recommendation dataset from Yahoo.
L.A. Prashanth, Nathaniel Korda and R\'emi Munos
null
1306.2557
null
null
Large Margin Low Rank Tensor Analysis
cs.LG cs.NA
Other than vector representations, the direct objects of human cognition are generally high-order tensors, such as 2D images and 3D textures. From this fact, two interesting questions naturally arise: how does the human brain represent these tensor perceptions in a "manifold" way, and how can they be recognized on the "manifold"? In this paper, we present a supervised model to learn the intrinsic structure of tensors embedded in a high-dimensional Euclidean space. With fixed point continuation procedures, our model automatically and jointly discovers the optimal dimensionality and the representations of the low-dimensional embeddings. This makes it an effective simulation of the cognitive process of the human brain. Furthermore, the generalization of our model, based on similarity between the learned low-dimensional embeddings, can be viewed as a counterpart of recognition in the human brain. Experiments on applications for object recognition and face recognition demonstrate the superiority of our proposed model over state-of-the-art approaches.
Guoqiang Zhong and Mohamed Cheriet
null
1306.2663
null
null
Precisely Verifying the Null Space Conditions in Compressed Sensing: A Sandwiching Algorithm
cs.IT cs.LG cs.SY math.IT math.OC stat.ML
In this paper, we propose new efficient algorithms to verify the null space condition in compressed sensing (CS). Given an $(n-m) \times n$ ($m>0$) CS matrix $A$ and a positive $k$, we are interested in computing $\alpha_k = \max_{\{z: Az=0, z\neq 0\}}\max_{\{K: |K|\leq k\}} \frac{\|z_K\|_{1}}{\|z\|_{1}}$, where $K$ represents subsets of $\{1,2,...,n\}$, and $|K|$ is the cardinality of $K$. In particular, we are interested in finding the maximum $k$ such that $\alpha_k < \frac{1}{2}$. However, computing $\alpha_k$ is known to be extremely challenging. In this paper, we first propose a series of new polynomial-time algorithms to compute upper bounds on $\alpha_k$. Based on these new polynomial-time algorithms, we further design a new sandwiching algorithm, to compute the \emph{exact} $\alpha_k$ with greatly reduced complexity. When needed, this new sandwiching algorithm also achieves a smooth tradeoff between computational complexity and result accuracy. Empirical results show the performance improvements of our algorithm over existing known methods; and our algorithm outputs precise values of $\alpha_k$, with much lower complexity than exhaustive search.
Myung Cho and Weiyu Xu
null
1306.2665
null
null
R3MC: A Riemannian three-factor algorithm for low-rank matrix completion
math.OC cs.LG
We exploit the versatile framework of Riemannian optimization on quotient manifolds to develop R3MC, a nonlinear conjugate-gradient method for low-rank matrix completion. The underlying search space of fixed-rank matrices is endowed with a novel Riemannian metric that is tailored to the least-squares cost. Numerical comparisons suggest that R3MC robustly outperforms state-of-the-art algorithms across different problem instances, especially those that combine scarcely sampled and ill-conditioned data.
B. Mishra and R. Sepulchre
null
1306.2672
null
null
Flexible sampling of discrete data correlations without the marginal distributions
stat.ML cs.LG stat.CO
Learning the joint dependence of discrete variables is a fundamental problem in machine learning, with many applications including prediction, clustering and dimensionality reduction. More recently, the framework of copula modeling has gained popularity due to its modular parametrization of joint distributions. Among other properties, copulas provide a recipe for combining flexible models for univariate marginal distributions with parametric families suitable for potentially high dimensional dependence structures. More radically, the extended rank likelihood approach of Hoff (2007) bypasses learning marginal models completely when such information is ancillary to the learning task at hand as in, e.g., standard dimensionality reduction problems or copula parameter estimation. The main idea is to represent data by their observable rank statistics, ignoring any other information from the marginals. Inference is typically done in a Bayesian framework with Gaussian copulas, and it is complicated by the fact that this implies sampling within a space where the number of constraints increases quadratically with the number of data points. The result is slow mixing when using off-the-shelf Gibbs sampling. We present an efficient algorithm based on recent advances on constrained Hamiltonian Markov chain Monte Carlo that is simple to implement and does not require paying for a quadratic cost in sample size.
Alfredo Kalaitzis and Ricardo Silva
null
1306.2685
null
null
Copula Mixed-Membership Stochastic Blockmodel for Intra-Subgroup Correlations
cs.LG stat.ML
The \emph{Mixed-Membership Stochastic Blockmodel (MMSB)} is a popular framework for modeling social network relationships. It can fully exploit each individual node's participation (or membership) in a social structure. Despite its powerful representations, this model makes an assumption that the distributions of relational membership indicators between two nodes are independent. Under many social network settings, however, it is possible that certain known subgroups of people may have high or low correlations in terms of their membership categories towards each other, and such prior information should be incorporated into the model. To this end, we introduce a \emph{Copula Mixed-Membership Stochastic Blockmodel (cMMSB)} where an individual Copula function is employed to jointly model the membership pairs of those nodes within the subgroup of interest. The model enables the use of various Copula functions to suit the scenario, while maintaining the membership's marginal distribution, as needed, for modeling membership indicators with other nodes outside of the subgroup of interest. We describe the proposed model and its inference algorithm in detail for both the finite and infinite cases. In the experiment section, we compare our algorithms with other popular models in terms of link prediction, using both synthetic and real world data.
Xuhui Fan, Longbing Cao, Richard Yi Da Xu
null
1306.2733
null
null
Horizontal and Vertical Ensemble with Deep Representation for Classification
cs.LG stat.ML
Representation learning, especially by deep learning, has been widely applied in classification. However, how to use a limited amount of labeled data to achieve good classification performance with a deep neural network, and how the learned features can further improve classification, remain unclear. In this paper, we propose the Horizontal Voting, Vertical Voting, and Horizontal Stacked Ensemble methods to improve the classification performance of deep neural networks. In the ICML 2013 Black Box Challenge, using these methods independently, Bing Xu achieved 3rd place on the public leaderboard and 7th on the private leaderboard; Jingjing Xie achieved 4th on the public leaderboard and 5th on the private leaderboard.
Jingjing Xie, Bing Xu, Zhang Chuang
null
1306.2759
null
null
Understanding Dropout: Training Multi-Layer Perceptrons with Auxiliary Independent Stochastic Neurons
cs.NE cs.LG stat.ML
In this paper, a simple, general method of adding auxiliary stochastic neurons to a multi-layer perceptron is proposed. It is shown that the proposed method is a generalization of the recently successful methods of dropout (Hinton et al., 2012), explicit noise injection (Vincent et al., 2010; Bishop, 1995) and semantic hashing (Salakhutdinov & Hinton, 2009). Under the proposed framework, an extension of dropout which allows using separate dropping probabilities for different hidden neurons, or layers, becomes available. The use of different dropping probabilities for separate hidden layers is empirically investigated.
Kyunghyun Cho
null
1306.2801
null
null
Bayesian Inference and Learning in Gaussian Process State-Space Models with Particle MCMC
stat.ML cs.LG cs.SY
State-space models are successfully used in many areas of science, engineering and economics to model time series and dynamical systems. We present a fully Bayesian approach to inference \emph{and learning} (i.e. state estimation and system identification) in nonlinear nonparametric state-space models. We place a Gaussian process prior over the state transition dynamics, resulting in a flexible model able to capture complex dynamical phenomena. To enable efficient inference, we marginalize over the transition dynamics function and infer directly the joint smoothing distribution using specially tailored Particle Markov Chain Monte Carlo samplers. Once a sample from the smoothing distribution is computed, the state transition predictive distribution can be formulated analytically. Our approach preserves the full nonparametric expressivity of the model and can make use of sparse Gaussian processes to greatly reduce computational complexity.
Roger Frigola, Fredrik Lindsten, Thomas B. Sch\"on, Carl E. Rasmussen
null
1306.2861
null
null
Robust Support Vector Machines for Speaker Verification Task
cs.LG cs.SD stat.ML
An important step in speaker verification is extracting features that best characterize the speaker's voice. This paper investigates a front-end processing scheme that aims at improving the performance of speaker verification based on an SVM classifier in text-independent mode. This approach combines features based on conventional Mel-cepstral Coefficients (MFCCs) and Line Spectral Frequencies (LSFs) to constitute robust multivariate feature vectors. To reduce the high dimensionality required for training these feature vectors, we use a dimension reduction method called principal component analysis (PCA). In order to evaluate the robustness of these systems, different noisy environments have been used. The results obtained on the TIMIT database showed that combining these spectral cues leads to a significant improvement in verification accuracy, especially with PCA reduction in low signal-to-noise-ratio noisy environments.
Kawthar Yasmine Zergat, Abderrahmane Amrouche
null
1306.2906
null
null
Reinforcement learning with restrictions on the action set
cs.GT cs.LG math.PR
Consider a 2-player normal-form game repeated over time. We introduce an adaptive learning procedure, where the players only observe their own realized payoff at each stage. We assume that agents do not know their own payoff function, and have no information on the other player. Furthermore, we assume that they have restrictions on their own action set such that, at each stage, their choice is limited to a subset of their action set. We prove that the empirical distributions of play converge to the set of Nash equilibria for zero-sum and potential games, and games where one player has two actions.
Mario Bravo (ISCI), Mathieu Faure (AMSE)
null
1306.2918
null
null
Completing Any Low-rank Matrix, Provably
stat.ML cs.IT cs.LG math.IT
Matrix completion, i.e., the exact and provable recovery of a low-rank matrix from a small subset of its elements, is currently only known to be possible if the matrix satisfies a restrictive structural constraint---known as {\em incoherence}---on its row and column spaces. In these cases, the subset of elements is sampled uniformly at random. In this paper, we show that {\em any} rank-$ r $ $ n$-by-$ n $ matrix can be exactly recovered from as few as $O(nr \log^2 n)$ randomly chosen elements, provided this random choice is made according to a {\em specific biased distribution}: the probability of any element being sampled should be proportional to the sum of the leverage scores of the corresponding row, and column. Perhaps equally important, we show that this specific form of sampling is nearly necessary, in a natural precise sense; this implies that other perhaps more intuitive sampling schemes fail. We further establish three ways to use the above result for the setting when leverage scores are not known \textit{a priori}: (a) a sampling strategy for the case when only one of the row or column spaces are incoherent, (b) a two-phase sampling procedure for general matrices that first samples to estimate leverage scores followed by sampling for exact recovery, and (c) an analysis showing the advantages of weighted nuclear/trace-norm minimization over the vanilla un-weighted formulation for the case of non-uniform sampling.
Yudong Chen, Srinadh Bhojanapalli, Sujay Sanghavi, Rachel Ward
null
1306.2979
null
null
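The biased sampling distribution described in the preceding abstract is easy to make concrete: entry (i, j) is sampled with probability proportional to the sum of the row and column leverage scores of the underlying low-rank matrix. The sketch below computes those scores from an SVD and draws a sample; matrix sizes and the sampling budget (a fraction of the O(nr log^2 n) bound) are made up for illustration, and the recovery step itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 3
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))   # rank-r ground truth

U, _, Vt = np.linalg.svd(M, full_matrices=False)
U, V = U[:, :r], Vt[:r, :].T
mu = (n / r) * np.sum(U**2, axis=1)        # normalized row leverage scores
nu = (n / r) * np.sum(V**2, axis=1)        # normalized column leverage scores

P = mu[:, None] + nu[None, :]              # p_ij proportional to mu_i + nu_j
P /= P.sum()

m = int(2 * n * r * np.log(n))             # illustrative sampling budget
idx = rng.choice(n * n, size=min(m, n * n), replace=False, p=P.ravel())
mask = np.zeros(n * n, dtype=bool)
mask[idx] = True
print(f"observed {mask.sum()} of {n * n} entries under leveraged sampling")
```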
Dynamic Infinite Mixed-Membership Stochastic Blockmodel
cs.SI cs.LG stat.ML
Directional and pairwise measurements are often used to model inter-relationships in a social network setting. The Mixed-Membership Stochastic Blockmodel (MMSB) was a seminal work in this area, and many of its capabilities were extended since then. In this paper, we propose the \emph{Dynamic Infinite Mixed-Membership stochastic blockModel (DIM3)}, a generalised framework that extends the existing work to a potentially infinite number of communities and mixture memberships for each of the network's nodes. This model is in a dynamic setting, where additional model parameters are introduced to reflect the degree of persistence between one's memberships at consecutive times. Accordingly, two effective posterior sampling strategies and their results are presented using both synthetic and real data.
Xuhui Fan, Longbing Cao, Richard Yi Da Xu
null
1306.2999
null
null
A Convergence Theorem for the Graph Shift-type Algorithms
stat.ML cs.LG
Graph Shift (GS) algorithms have recently attracted attention as a promising approach for discovering dense subgraphs in noisy data. However, there have been no theoretical foundations for proving the convergence of the GS algorithm. In this paper, we propose a generic theoretical framework consisting of three key GS components: a simplex of generated sequence sets, a monotonic and continuous objective function, and a closed mapping. We prove that GS algorithms with such components can be transformed to fit Zangwill's convergence theorem, and that the sequence set generated by the GS procedures always terminates at a local maximum, or at worst, contains a subsequence which converges to a local maximum of the similarity measure function. The framework is verified by expanding it to other GS-type algorithms and by experimental results.
Xuhui Fan, Longbing Cao
null
1306.3002
null
null
Non-parametric Power-law Data Clustering
cs.LG cs.CV stat.ML
It has always been a great challenge for clustering algorithms to automatically determine the number of clusters according to the distribution of the dataset. Several approaches have been proposed to address this issue, including the recent promising work which incorporates Bayesian nonparametrics into the $k$-means clustering procedure. This approach shows simplicity in implementation and solidity in theory, and it also provides a feasible way to perform inference on large-scale datasets. However, several problems remain unsolved in this pioneering work, including applicability to power-law data, a mechanism for merging centers to avoid over-fitting, and the clustering-order problem. To address these issues, the Pitman-Yor Process based k-means (namely \emph{pyp-means}) is proposed in this paper. Taking advantage of the Pitman-Yor Process, \emph{pyp-means} treats clusters differently by dynamically and adaptively changing the threshold to guarantee the generation of power-law clustering results. Also, a center agglomeration procedure is integrated into the implementation to merge small but close clusters and thereby adaptively determine the cluster number. With further discussion of the clustering order, a convergence proof, complexity analysis, and an extension to spectral clustering, our approach is compared with traditional clustering algorithms and variational inference methods. The advantages and properties of pyp-means are validated by experiments on both synthetic and real-world datasets.
Xuhui Fan, Yiling Zeng, Longbing Cao
null
1306.3003
null
null
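To convey the flavor of the approach in the preceding abstract, here is a DP-means-style hard-clustering sketch: a point farther than a threshold from every current center spawns a new cluster. The fixed threshold `lam` is a simplification; pyp-means instead adapts the threshold via the Pitman-Yor discount to favor power-law cluster sizes. The data are synthetic.

```python
import numpy as np

def dpmeans(X, lam, n_iter=20):
    """DP-means-style hard clustering with a squared-distance threshold lam."""
    centers = [X[0]]
    for _ in range(n_iter):
        assign = []
        for x in X:
            d = [np.sum((x - c) ** 2) for c in centers]
            if min(d) > lam:                 # too far from all centers: new cluster
                centers.append(x.copy())
                assign.append(len(centers) - 1)
            else:
                assign.append(int(np.argmin(d)))
        assign = np.array(assign)
        centers = [X[assign == k].mean(axis=0) for k in range(len(centers))
                   if np.any(assign == k)]   # drop empty clusters
    return np.array(centers), assign

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(80, 2)) for c in (0.0, 3.0, 6.0)])
centers, assign = dpmeans(X, lam=2.0)
print(f"discovered {len(centers)} clusters")   # expected: 3
```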
Physeter catodon localization by sparse coding
cs.LG cs.CE stat.ML
This paper presents a sperm whale localization architecture that jointly uses a bag-of-features (BoF) approach and a machine learning framework. BoF methods are known, especially in computer vision, to produce from a collection of local features a global representation invariant to principal signal transformations. Our idea is to regress, in a supervised fashion, two rough estimates of the distance and azimuth from these local features, thanks to datasets where both acoustic events and ground-truth positions are now available. Furthermore, these estimates can feed a particle filter system in order to obtain a precise sperm whale position, even in a mono-hydrophone configuration. Anti-collision systems and whale watching are target applications of this work.
S\'ebastien Paris and Yann Doh and Herv\'e Glotin and Xanadu Halkias and Joseph Razik
null
1306.3058
null
null
Guaranteed Classification via Regularized Similarity Learning
cs.LG
Learning an appropriate (dis)similarity function from the available data is a central problem in machine learning, since the success of many machine learning algorithms critically depends on the choice of a similarity function to compare examples. Although many approaches to similarity metric learning have been proposed, there is little theoretical study on the links between similarity metric learning and the classification performance of the resulting classifier. In this paper, we propose a regularized similarity learning formulation associated with general matrix norms, and establish their generalization bounds. We show that the generalization error of the resulting linear separator can be bounded by the derived generalization bound of similarity learning. This shows that good generalization of the learnt similarity function guarantees good classification by the resulting linear classifier. Our results extend and improve those obtained by Bellet et al. [3]. Due to techniques dependent on the notion of uniform stability [6], the bound obtained there holds true only for the Frobenius matrix-norm regularization. Our techniques, using the Rademacher complexity [5] and a related Khinchin-type inequality, enable us to establish bounds for regularized similarity learning formulations associated with general matrix norms, including the sparse L1-norm and mixed (2,1)-norm.
Zheng-Chu Guo and Yiming Ying
null
1306.3108
null
null
Learning Using Privileged Information: SVM+ and Weighted SVM
stat.ML cs.LG
Prior knowledge can be used to improve predictive performance of learning algorithms or reduce the amount of data required for training. The same goal is pursued within the learning using privileged information paradigm which was recently introduced by Vapnik et al. and is aimed at utilizing additional information available only at training time -- a framework implemented by SVM+. We relate the privileged information to importance weighting and show that the prior knowledge expressible with privileged features can also be encoded by weights associated with every training example. We show that a weighted SVM can always replicate an SVM+ solution, while the converse is not true and we construct a counterexample highlighting the limitations of SVM+. Finally, we touch on the problem of choosing weights for weighted SVMs when privileged features are not available.
Maksim Lapin, Matthias Hein, Bernt Schiele
10.1016/j.neunet.2014.02.002
1306.3161
null
null
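The paper's observation that importance weights can encode what privileged features would tell us can be sketched directly with a standard weighted SVM. In the toy below, the "privileged" signal, available only at training time, is each example's label corruption level, and it is turned into per-example weights via `sample_weight`. The data and the weighting rule are illustrative assumptions, not the paper's construction.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
noise = rng.uniform(0, 1, n)             # privileged info: how corrupted a label is
flip = rng.uniform(0, 1, n) < 0.3 * noise
y_noisy = np.where(flip, 1 - y, y)

# Weighted SVM: down-weight examples the privileged feature marks as unreliable.
weights = 1.0 - noise
clf_w = SVC(kernel="linear").fit(X, y_noisy, sample_weight=weights)
clf_plain = SVC(kernel="linear").fit(X, y_noisy)

X_test = rng.normal(size=(2000, 2))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
print("plain SVM accuracy:   ", (clf_plain.predict(X_test) == y_test).mean())
print("weighted SVM accuracy:", (clf_w.predict(X_test) == y_test).mean())
```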
Learning to encode motion using spatio-temporal synchrony
cs.CV cs.LG stat.ML
We consider the task of learning to extract motion from videos. To this end, we show that the detection of spatial transformations can be viewed as the detection of synchrony between the image sequence and a sequence of features undergoing the motion we wish to detect. We show that learning about synchrony is possible using very fast, local learning rules, by introducing multiplicative "gating" interactions between hidden units across frames. This makes it possible to achieve competitive performance in a wide variety of motion estimation tasks, using a small fraction of the time required to learn features, and to outperform hand-crafted spatio-temporal features by a large margin. We also show how learning about synchrony can be viewed as performing greedy parameter estimation in the well-known motion energy model.
Kishore Reddy Konda, Roland Memisevic, Vincent Michalski
null
1306.3162
null
null
Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
stat.ME cs.IT cs.LG math.IT
Fitting high-dimensional statistical models often requires the use of non-linear parameter estimation procedures. As a consequence, it is generally impossible to obtain an exact characterization of the probability distribution of the parameter estimates. This in turn implies that it is extremely challenging to quantify the \emph{uncertainty} associated with a certain parameter estimate. Concretely, no commonly accepted procedure exists for computing classical measures of uncertainty and statistical significance, such as confidence intervals or $p$-values, for these models. We consider here the high-dimensional linear regression problem, and propose an efficient algorithm for constructing confidence intervals and $p$-values. The resulting confidence intervals have nearly optimal size. When testing for the null hypothesis that a certain parameter is vanishing, our method has nearly optimal power. Our approach is based on constructing a `de-biased' version of regularized M-estimators. The new construction improves over recent work in the field in that it does not assume a special structure on the design matrix. We test our method on synthetic data and a high-throughput genomic data set about riboflavin production rate.
Adel Javanmard and Andrea Montanari
null
1306.3171
null
null
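A toy sketch of the de-biasing idea from the preceding abstract: correct the lasso estimate with a term proportional to X^T(y - X beta_hat), then build Gaussian intervals. For simplicity the de-biasing matrix is taken to be the identity, which is only a reasonable stand-in because the design here is random Gaussian and hence nearly orthogonal after scaling; the paper constructs that matrix carefully. All sizes and parameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, sigma = 200, 50, 1.0
X = rng.normal(size=(n, p))                     # near-orthogonal random design
beta = np.zeros(p); beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + sigma * rng.normal(size=n)

beta_hat = Lasso(alpha=0.1, fit_intercept=False).fit(X, y).coef_

# De-biased estimate (M = I under the near-orthogonal design) and 95% intervals.
beta_d = beta_hat + X.T @ (y - X @ beta_hat) / n
se = sigma * np.sqrt(np.sum(X**2, axis=0)) / n  # per-coordinate standard error
lo, hi = beta_d - 1.96 * se, beta_d + 1.96 * se
covered = np.mean((beta >= lo) & (beta <= hi))
print(f"empirical coverage of 95% intervals: {covered:.2f}")
```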
Bregman Alternating Direction Method of Multipliers
math.OC cs.LG stat.ML
The mirror descent algorithm (MDA) generalizes gradient descent by using a Bregman divergence to replace squared Euclidean distance. In this paper, we similarly generalize the alternating direction method of multipliers (ADMM) to Bregman ADMM (BADMM), which allows the choice of different Bregman divergences to exploit the structure of problems. BADMM provides a unified framework for ADMM and its variants, including generalized ADMM, inexact ADMM and Bethe ADMM. We establish the global convergence and the $O(1/T)$ iteration complexity for BADMM. In some cases, BADMM can be faster than ADMM by a factor of $O(n/\log(n))$. In solving the linear program of mass transportation problem, BADMM leads to massive parallelism and can easily run on GPU. BADMM is several times faster than highly optimized commercial software Gurobi.
Huahua Wang and Arindam Banerjee
null
1306.3203
null
null
Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation
cs.LG stat.ML
The L1-regularized Gaussian maximum likelihood estimator (MLE) has been shown to have strong statistical guarantees in recovering a sparse inverse covariance matrix, or alternatively the underlying graph structure of a Gaussian Markov Random Field, from very limited samples. We propose a novel algorithm for solving the resulting optimization problem which is a regularized log-determinant program. In contrast to recent state-of-the-art methods that largely use first order gradient information, our algorithm is based on Newton's method and employs a quadratic approximation, but with some modifications that leverage the structure of the sparse Gaussian MLE problem. We show that our method is superlinearly convergent, and present experimental results using synthetic and real-world application data that demonstrate the considerable improvements in performance of our method when compared to other state-of-the-art methods.
Cho-Jui Hsieh, Matyas A. Sustik, Inderjit S. Dhillon and Pradeep Ravikumar
null
1306.3212
null
null
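The estimation problem in the preceding abstract, though not the paper's Newton-type solver, can be reproduced with scikit-learn's GraphicalLasso, which solves the same L1-regularized Gaussian MLE via coordinate descent. The chain-structured precision matrix and regularization strength below are toy choices.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
p = 10
Theta = np.eye(p)                              # sparse ground-truth precision matrix
for i in range(p - 1):
    Theta[i, i + 1] = Theta[i + 1, i] = 0.4    # chain-structured GMRF
Sigma = np.linalg.inv(Theta)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=500)

model = GraphicalLasso(alpha=0.05).fit(X)
est_support = np.abs(model.precision_) > 1e-2
true_support = np.abs(Theta) > 0
print("support recovery accuracy:", (est_support == true_support).mean())
```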
Relaxed Sparse Eigenvalue Conditions for Sparse Estimation via Non-convex Regularized Regression
cs.LG cs.NA stat.ML
Non-convex regularizers usually improve the performance of sparse estimation in practice. To substantiate this fact, we study the conditions for sparse estimation with sharp concave regularizers, a general family of non-convex regularizers that includes many existing regularizers. For the global solutions of the regularized regression, our sparse-eigenvalue-based conditions are weaker than those of L1-regularization for parameter estimation and sparseness estimation. For the approximate global and approximate stationary (AGAS) solutions, almost the same conditions are also sufficient. We show that the desired AGAS solutions can be obtained by coordinate descent (CD) based methods. Finally, we perform experiments to show the performance of CD methods in producing AGAS solutions and the degree of weakness of the estimation conditions required by the sharp concave regularizers.
Zheng Pan, Changshui Zhang
null
1306.3343
null
null
Constrained fractional set programs and their application in local clustering and community detection
stat.ML cs.LG math.OC
The (constrained) minimization of a ratio of set functions is a problem frequently occurring in clustering and community detection. As these optimization problems are typically NP-hard, one uses convex or spectral relaxations in practice. While these relaxations can be solved globally optimally, they are often too loose and thus lead to results far away from the optimum. In this paper we show that every constrained minimization problem of a ratio of non-negative set functions allows a tight relaxation into an unconstrained continuous optimization problem. This result leads to a flexible framework for solving constrained problems in network analysis. While a globally optimal solution for the resulting non-convex problem cannot be guaranteed, we outperform the loose convex or spectral relaxations by a large margin on constrained local clustering problems.
Thomas B\"uhler, Syama Sundar Rangapuram, Simon Setzer, Matthias Hein
null
1306.3409
null
null
Classifying Single-Trial EEG during Motor Imagery with a Small Training Set
cs.LG cs.HC stat.ML
Before the operation of a motor imagery based brain-computer interface (BCI) adopting machine learning techniques, a cumbersome training procedure is unavoidable. The development of a practical BCI posed the challenge of classifying single-trial EEG with a small training set. In this letter, we addressed this problem by employing a series of signal processing and machine learning approaches to alleviate overfitting and obtained test accuracy similar to training accuracy on the datasets from BCI Competition III and our own experiments.
Yijun Wang
null
1306.3474
null
null
Hyperparameter Optimization and Boosting for Classifying Facial Expressions: How good can a "Null" Model be?
cs.CV cs.LG stat.ML
One of the goals of the ICML workshop on representation and learning is to establish benchmark scores for a new data set of labeled facial expressions. This paper presents the performance of a "Null" model consisting of convolutions with random weights, PCA, pooling, normalization, and a linear readout. Our approach focused on hyperparameter optimization rather than novel model components. On the Facial Expression Recognition Challenge held by the Kaggle website, our hyperparameter optimization approach achieved a score of 60% accuracy on the test data. This paper also introduces a new ensemble construction variant that combines hyperparameter optimization with the construction of ensembles. This algorithm constructed an ensemble of four models that scored 65.5% accuracy. These scores rank 12th and 5th respectively among the 56 challenge participants. It is worth noting that our approach was developed prior to the release of the data set, and applied without modification; our strong competition performance suggests that the TPE hyperparameter optimization algorithm and domain expertise encoded in our Null model can generalize to new image classification data sets.
James Bergstra and David D. Cox
null
1306.3476
null
null
Approximation Algorithms for Bayesian Multi-Armed Bandit Problems
cs.DS cs.LG
In this paper, we consider several finite-horizon Bayesian multi-armed bandit problems with side constraints which are computationally intractable (NP-Hard) and for which no optimal (or near optimal) algorithms are known to exist with sub-exponential running time. All of these problems violate the standard exchange property, which assumes that the reward from the play of an arm is not contingent upon when the arm is played. Not only are index policies suboptimal in these contexts; there has also been little analysis of such policies in these problem settings. We show that if we consider near-optimal policies, in the sense of approximation algorithms, then there exist (near) index policies. Conceptually, if we can find policies that satisfy an approximate version of the exchange property, namely, that the reward from the play of an arm depends on when the arm is played to within a constant factor, then we have an avenue towards solving these problems. However, such an approximate version of the idling bandit property does not hold on a per-play basis; it is shown to hold in a global sense. Clearly, such a property is not necessarily true of arbitrary single arm policies, and finding such single arm policies is nontrivial. We show that by restricting the state spaces of arms we can find single arm policies, and that these single arm policies can be combined into global (near) index policies where the approximate version of the exchange property is true in expectation. The number of different bandit problems that can be addressed by this technique already demonstrates its wide applicability.
Sudipto Guha and Kamesh Munagala
null
1306.3525
null
null
Outlying Property Detection with Numerical Attributes
cs.LG cs.DB stat.ML
The outlying property detection problem is the problem of discovering the properties distinguishing a given object, known in advance to be an outlier in a database, from the other database objects. In this paper, we analyze the problem within a context where numerical attributes are taken into account, a relevant case left open in the literature. We introduce a measure to quantify the degree of outlierness of an object, which is associated with the relative likelihood of its value compared to the relative likelihood of other objects in the database. As a major contribution, we present an efficient algorithm to compute the outlierness relative to significant subsets of the data. These subsets are characterized in a "rule-based" fashion, and hence form the basis for the underlying explanation of the outlierness.
Fabrizio Angiulli and Fabio Fassetti and Luigi Palopoli and Giuseppe Manco
null
1306.3558
null
null
Online Alternating Direction Method (longer version)
cs.LG math.OC
Online optimization has emerged as a powerful tool in large-scale optimization. In this paper, we introduce efficient online optimization algorithms based on the alternating direction method (ADM), which can solve online convex optimization under linear constraints where the objective may be non-smooth. We introduce new proof techniques for ADM in the batch setting, which yield a O(1/T) convergence rate for ADM and form the basis for regret analysis in the online setting. We consider two scenarios in the online setting, based on whether an additional Bregman divergence is needed or not. In both settings, we establish regret bounds for both the objective function and constraint violation, for general and strongly convex functions. We also consider inexact ADM updates, where certain terms are linearized to yield efficient updates, and show the stochastic convergence rates. In addition, we briefly discuss that online ADM can be used as a projection-free online learning algorithm in some scenarios. Preliminary results are presented to illustrate the performance of the proposed algorithms.
Huahua Wang and Arindam Banerjee
null
1306.3721
null
null
Spectral Experts for Estimating Mixtures of Linear Regressions
cs.LG stat.ML
Discriminative latent-variable models are typically learned using EM or gradient-based optimization, which suffer from local optima. In this paper, we develop a new computationally efficient and provably consistent estimator for a mixture of linear regressions, a simple instance of a discriminative latent-variable model. Our approach relies on a low-rank linear regression to recover a symmetric tensor, which can be factorized into the parameters using a tensor power method. We prove rates of convergence for our estimator and provide an empirical evaluation illustrating its strengths relative to local optimization (EM).
Arun Tejasvi Chaganty and Percy Liang
null
1306.3729
null
null
Cluster coloring of the Self-Organizing Map: An information visualization perspective
cs.LG cs.HC
This paper takes an information visualization perspective to visual representations in the general SOM paradigm. This involves viewing SOM-based visualizations through the eyes of Bertin's and Tufte's theories on data graphics. The regular grid shape of the Self-Organizing Map (SOM), while being a virtue for linking visualizations to it, restricts representation of cluster structures. From the viewpoint of information visualization, this paper provides a general, yet simple, solution to projection-based coloring of the SOM that reveals structures. First, the proposed color space is easy to construct and customize to the purpose of use, while aiming at being perceptually correct and informative through two separable dimensions. Second, the coloring method is not dependent on any specific method of projection, but is rather modular to fit any objective function suitable for the task at hand. The cluster coloring is illustrated on two datasets: the iris data, and welfare and poverty indicators.
Peter Sarlin and Samuel R\"onnqvist
null
1306.3860
null
null
On-line PCA with Optimal Regrets
cs.LG
We carefully investigate the on-line version of PCA, where in each trial a learning algorithm plays a k-dimensional subspace, and suffers the compression loss on the next instance when projected into the chosen subspace. In this setting, we analyze two popular on-line algorithms, Gradient Descent (GD) and Exponentiated Gradient (EG). We show that both algorithms are essentially optimal in the worst case. This comes as a surprise, since EG is known to perform sub-optimally when the instances are sparse. This different behavior of EG for PCA is mainly related to the non-negativity of the loss in this case, which makes the PCA setting qualitatively different from other settings studied in the literature. Furthermore, we show that when considering regret bounds as a function of a loss budget, EG remains optimal and strictly outperforms GD. Next, we study an extension of the PCA setting, in which Nature is allowed to play with dense instances, which are positive matrices with bounded largest eigenvalue. Again, we can show that EG is optimal and strictly better than GD in this setting.
Jiazhong Nie and Wojciech Kotlowski and Manfred K. Warmuth
null
1306.3895
null
null
Stability of Multi-Task Kernel Regression Algorithms
cs.LG stat.ML
We study the stability properties of nonlinear multi-task regression in reproducing kernel Hilbert spaces with operator-valued kernels. Such kernels, a.k.a. multi-task kernels, are appropriate for learning problems with nonscalar outputs like multi-task learning and structured output prediction. We show that multi-task kernel regression algorithms are uniformly stable in the general case of infinite-dimensional output spaces. We then derive, under mild assumptions on the kernel, generalization bounds for such algorithms, and we show their consistency even with non-Hilbert-Schmidt operator-valued kernels. We demonstrate how to apply the results to various multi-task kernel regression methods, such as vector-valued SVR and functional ridge regression.
Julien Audiffren (LIF), Hachem Kadri (LIF)
null
1306.3905
null
null
On Finding the Largest Mean Among Many
stat.ML cs.LG
Sampling from distributions to find the one with the largest mean arises in a broad range of applications, and it can be mathematically modeled as a multi-armed bandit problem in which each distribution is associated with an arm. This paper studies the sample complexity of identifying the best arm (largest mean) in a multi-armed bandit problem. Motivated by large-scale applications, we are especially interested in identifying situations where the total number of samples that are necessary and sufficient to find the best arm scale linearly with the number of arms. We present a single-parameter multi-armed bandit model that spans the range from linear to superlinear sample complexity. We also give a new algorithm for best arm identification, called PRISM, with linear sample complexity for a wide range of mean distributions. The algorithm, like most exploration procedures for multi-armed bandits, is adaptive in the sense that the next arms to sample are selected based on previous samples. We compare the sample complexity of adaptive procedures with simpler non-adaptive procedures using new lower bounds. For many problem instances, the increased sample complexity required by non-adaptive procedures is a polynomial factor of the number of arms.
Kevin Jamieson, Matthew Malloy, Robert Nowak, Sebastien Bubeck
null
1306.3917
null
null
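The adaptivity contrast drawn in the preceding abstract can be illustrated with a standard successive-elimination sketch (not the paper's PRISM procedure): arms that look clearly suboptimal stop being sampled, whereas a non-adaptive scheme would keep paying for them. Means, noise level, and the confidence radius are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([1.0, 0.7, 0.5, 0.3, 0.1])   # arm 0 has the largest mean
K, delta = len(means), 0.05
active = list(range(K))
counts, sums = np.zeros(K), np.zeros(K)

for t in range(1, 5001):
    for a in active:                            # one pull per surviving arm
        sums[a] += rng.normal(means[a], 1.0)
        counts[a] += 1
    est = sums[active] / counts[active]
    rad = np.sqrt(2 * np.log(4 * K * t**2 / delta) / counts[active])
    lower_best = (est - rad).max()
    # an arm survives only if its upper confidence bound can still win
    active = [a for a, e, r in zip(active, est, rad) if e + r >= lower_best]
    if len(active) == 1:
        break

best = active[int(np.argmax(sums[active] / counts[active]))]
print(f"best arm: {best}, total samples: {int(counts.sum())}")
```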
Parallel Coordinate Descent Newton Method for Efficient $\ell_1$-Regularized Minimization
cs.LG cs.NA
The recent years have witnessed advances in parallel algorithms for large scale optimization problems. Notwithstanding demonstrated success, existing algorithms that parallelize over features are usually limited by divergence issues under high parallelism or require data preprocessing to alleviate these problems. In this work, we propose a Parallel Coordinate Descent Newton algorithm using multidimensional approximate Newton steps (PCDN), where the off-diagonal elements of the Hessian are set to zero to enable parallelization. It randomly partitions the feature set into $b$ bundles/subsets with size of $P$, and sequentially processes each bundle by first computing the descent directions for each feature in parallel and then conducting $P$-dimensional line search to obtain the step size. We show that: (1) PCDN is guaranteed to converge globally despite increasing parallelism; (2) PCDN converges to the specified accuracy $\epsilon$ within the limited iteration number of $T_\epsilon$, and $T_\epsilon$ decreases with increasing parallelism (bundle size $P$). Using the implementation technique of maintaining intermediate quantities, we minimize the data transfer and synchronization cost of the $P$-dimensional line search. For concreteness, the proposed PCDN algorithm is applied to $\ell_1$-regularized logistic regression and $\ell_2$-loss SVM. Experimental evaluations on six benchmark datasets show that the proposed PCDN algorithm exploits parallelism well and outperforms the state-of-the-art methods in speed without losing accuracy.
An Bian, Xiong Li, Yuncai Liu, Ming-Hsuan Yang
null
1306.4080
null
null
Bioclimatic Modelling: A Machine Learning Perspective
cs.LG stat.ML
Many machine learning (ML) approaches are widely used to generate bioclimatic models for predicting the geographic range of organisms as a function of climate. Applications such as prediction of range shifts in organisms and the range of invasive species influenced by climate change are important parameters in understanding the impact of climate change. However, the success of machine learning-based approaches depends on a number of factors. While it can be safely said that no particular ML technique can be effective in all applications, and the success of a technique is predominantly dependent on the application or the type of problem, it is useful to understand their behaviour to ensure an informed choice of techniques. This paper presents a comprehensive review of machine learning-based bioclimatic model generation and analyses the factors influencing the success of such models. Considering the wide use of statistical techniques, our discussion also includes conventional statistical techniques used in bioclimatic modelling.
Maumita Bhattacharya
null
1306.4152
null
null
Joint estimation of sparse multivariate regression and conditional graphical models
stat.ML cs.LG
The multivariate regression model is a natural generalization of the classical univariate regression model for fitting multiple responses. In this paper, we propose a high-dimensional multivariate conditional regression model for constructing sparse estimates of the multivariate regression coefficient matrix that accounts for the dependency structure among the multiple responses. The proposed method decomposes the multivariate regression problem into a series of penalized conditional log-likelihoods of each response conditioned on the covariates and other responses. It allows simultaneous estimation of the sparse regression coefficient matrix and the sparse inverse covariance matrix. The asymptotic selection consistency and normality are established for diverging dimensions of the covariates and number of responses. The effectiveness of the proposed method is also demonstrated in a variety of simulated examples as well as an application to the Glioblastoma multiforme cancer data.
Junhui Wang
null
1306.4410
null
null
Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers
cs.CR cs.LG stat.ML
Machine Learning (ML) algorithms are used to train computers to perform a variety of complex tasks and improve with experience. Computers learn how to recognize patterns, make unintended decisions, or react to a dynamic environment. Certain trained machines may be more effective than others because they are based on more suitable ML algorithms or because they were trained through superior training sets. Although ML algorithms are known and publicly released, training sets may not be reasonably ascertainable and, indeed, may be guarded as trade secrets. While much research has been performed about the privacy of the elements of training sets, in this paper we focus our attention on ML classifiers and on the statistical information that can be unconsciously or maliciously revealed from them. We show that it is possible to infer unexpected but useful information from ML classifiers. In particular, we build a novel meta-classifier and train it to hack other classifiers, obtaining meaningful information about their training sets. This kind of information leakage can be exploited, for example, by a vendor to build more effective classifiers or to simply acquire trade secrets from a competitor's apparatus, potentially violating its intellectual property rights.
Giuseppe Ateniese, Giovanni Felici, Luigi V. Mancini, Angelo Spognardi, Antonio Villani, Domenico Vitali
null
1306.4447
null
null
Table of Content detection using Machine Learning
cs.LG cs.DL cs.IR
Table of content (TOC) detection has drawn attention nowadays because it plays an important role in the digitization of multipage documents. A book is typically a multipage document, so it becomes necessary to detect the table of content page for easy navigation of the document and to make retrieval of the desired data from it faster. Table of content pages follow different layouts and different ways of presenting the contents of the document, such as chapters, sections, subsections, etc. This paper introduces a new method to detect the table of content using machine learning techniques with different features, with the main aim of structuring the document according to its contents.
Rachana Parikh and Avani R. Vasant
null
1306.4631
null
null
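A hedged sketch of TOC-page detection as supervised learning, in the spirit of the preceding abstract: a few layout features per page (dot leaders, trailing page numbers, "contents" keywords) feed a standard classifier. The feature set, toy pages, and model choice are assumptions, not the paper's design.

```python
import re
from sklearn.tree import DecisionTreeClassifier

def page_features(text):
    """Three simple layout cues that tend to separate TOC pages from body text."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    n = max(len(lines), 1)
    dot_leaders = sum("..." in ln for ln in lines) / n      # "Chapter 1 .... 7"
    ends_number = sum(bool(re.search(r"\d+\s*$", ln)) for ln in lines) / n
    has_keyword = int(bool(re.search(r"\b(contents|index)\b", text, re.I)))
    return [dot_leaders, ends_number, has_keyword]

toc = "Contents\nChapter 1 Introduction ........ 1\nChapter 2 Methods ....... 17\n"
body = "The experiment was run twice.\nResults were averaged over all trials.\n"
X = [page_features(p) for p in (toc, body, toc.replace("Contents", "Index"), body)]
y = [1, 0, 1, 0]

clf = DecisionTreeClassifier().fit(X, y)
test = "Table of Contents\n1. Overview ....... 3\n2. Details ....... 9\n"
print("TOC page?", bool(clf.predict([page_features(test)])[0]))
```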
A Fuzzy Based Approach to Text Mining and Document Clustering
cs.LG cs.IR
Fuzzy logic deals with degrees of truth. In this paper, we have shown how to apply fuzzy logic in text mining in order to perform document clustering. We took an example of document clustering where the documents had to be clustered into two categories. The method involved cleaning up the text and stemming of words. Then, we chose m features which differ significantly in their word frequencies (WF), normalized by document length, between documents belonging to these two clusters. The documents to be clustered were represented as collections of m normalized WF values. The fuzzy c-means (FCM) algorithm was used to cluster these documents into two clusters. After the FCM execution finished, the documents in the two clusters were analysed for the values of their respective m features. It is known that documents belonging to a document type, say X, tend to have higher WF values for some particular features. If the documents belonging to a cluster had higher WF values for those same features, then that cluster was said to represent X. By fuzzy logic, we not only get the cluster name, but also the degree to which a document belongs to a cluster.
Sumit Goswami and Mayank Singh Shishodia
null
1306.4633
null
null
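The clustering step of the pipeline described above can be written compactly; the sketch below is a standard fuzzy c-means implementation, with invented normalized word-frequency vectors standing in for the m features chosen in the paper.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, eps=1e-6):
    """Standard FCM: alternate membership and center updates until convergence."""
    n = len(X)
    U = np.random.default_rng(0).dirichlet(np.ones(c), size=n)  # memberships
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)),
                             axis=2)
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return centers, U

rng = np.random.default_rng(1)
# Two document types with distinct normalized word frequencies on 3 features.
X = np.vstack([rng.normal([0.8, 0.1, 0.1], 0.05, (20, 3)),
               rng.normal([0.1, 0.7, 0.2], 0.05, (20, 3))])
centers, U = fuzzy_c_means(X)
print("cluster centers:\n", centers.round(2))
print("membership degrees of document 0:", U[0].round(2))
```

The membership matrix U is exactly the "degree to which a document belongs to a cluster" that the abstract highlights as the benefit of the fuzzy approach.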
Stochastic Majorization-Minimization Algorithms for Large-Scale Optimization
stat.ML cs.LG math.OC
Majorization-minimization algorithms consist of iteratively minimizing a majorizing surrogate of an objective function. Because of its simplicity and its wide applicability, this principle has been very popular in statistics and in signal processing. In this paper, we intend to make this principle scalable. We introduce a stochastic majorization-minimization scheme which is able to deal with large-scale or possibly infinite data sets. When applied to convex optimization problems under suitable assumptions, we show that it achieves an expected convergence rate of $O(1/\sqrt{n})$ after $n$ iterations, and of $O(1/n)$ for strongly convex functions. Equally important, our scheme almost surely converges to stationary points for a large class of non-convex problems. We develop several efficient algorithms based on our framework. First, we propose a new stochastic proximal gradient method, which experimentally matches state-of-the-art solvers for large-scale $\ell_1$-logistic regression. Second, we develop an online DC programming algorithm for non-convex sparse estimation. Finally, we demonstrate the effectiveness of our approach for solving large-scale structured matrix factorization problems.
Julien Mairal (INRIA Grenoble Rh\^one-Alpes / LJK Laboratoire Jean Kuntzmann)
null
1306.4650
null
null
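A minimal stochastic proximal gradient sketch for l1-regularized logistic regression, the flavor of solver the abstract above scales up: a gradient step on a single sampled example followed by soft-thresholding. Data, step sizes, and the regularization strength are illustrative, and no variance-reduction or surrogate machinery from the paper is included.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 2000, 20, 0.01
w_true = np.zeros(d); w_true[:4] = [2, -2, 1, -1]
X = rng.normal(size=(n, d))
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
w = np.zeros(d)
for t in range(1, 50001):
    i = rng.integers(n)                      # sample one example uniformly
    p = 1 / (1 + np.exp(-X[i] @ w))
    step = 1.0 / np.sqrt(t)
    w = soft(w - step * (p - y[i]) * X[i], step * lam)  # prox of l1 after the step

print("nonzeros in w:", int(np.sum(np.abs(w) > 1e-3)))
print("first 4 weights:", w[:4].round(2))
```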
Multiarmed Bandits With Limited Expert Advice
cs.LG
We solve the COLT 2013 open problem of \citet{SCB} on minimizing regret in the setting of advice-efficient multiarmed bandits with expert advice. We give an algorithm for the setting of K arms and N experts, out of which we are allowed to query and use only M experts' advice in each round, which has a regret bound of $\tilde{O}\big(\sqrt{\frac{\min\{K, M\} N}{M} T}\big)$ after T rounds. We also prove that any algorithm for this problem must have expected regret at least $\tilde{\Omega}\big(\sqrt{\frac{\min\{K, M\} N}{M} T}\big)$, thus showing that our upper bound is nearly tight.
Satyen Kale
null
1306.4653
null
null