categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
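The schema above describes one record per paper; the records below follow it field by field. As a rough, hypothetical illustration (the file name and storage format are assumptions, not part of this dump), records with this schema could be loaded into a pandas DataFrame like this:

```python
# Minimal sketch, assuming a hypothetical JSON Lines file "arxiv_cs_lg.jsonl"
# whose keys match the schema above; this is not the dataset's official loader.
import pandas as pd

df = pd.read_json("arxiv_cs_lg.jsonl", lines=True)

# Cast columns to the dtypes listed in the schema. In the records shown below,
# "doi", "year", and "venue" are mostly null.
for col in ["categories", "doi", "id", "venue", "link",
            "updated", "published", "title", "abstract"]:
    df[col] = df[col].astype("string")
df["year"] = df["year"].astype("float64")

# "updated" and "published" hold ISO-8601 timestamps; parse them if needed.
df["published_ts"] = pd.to_datetime(df["published"], errors="coerce")
print(df[["id", "title", "published_ts"]].head())
```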
cs.LG
null
1408.2327
null
null
http://arxiv.org/pdf/1408.2327v9
2017-07-21T05:50:43Z
2014-08-11T06:52:46Z
On the Consistency of Ordinal Regression Methods
Many of the ordinal regression models that have been proposed in the literature can be seen as methods that minimize a convex surrogate of the zero-one, absolute, or squared loss functions. A key property that allows one to study the statistical implications of such approximations is that of Fisher consistency. Fisher consistency is a desirable property for surrogate loss functions and implies that in the population setting, i.e., if the probability distribution that generates the data were available, then optimization of the surrogate would yield the best possible model. In this paper we characterize the Fisher consistency of a rich family of surrogate loss functions used in the context of ordinal regression, including support vector ordinal regression, ORBoosting and least absolute deviation. We show that, for a family of surrogate loss functions that subsumes support vector ordinal regression and ORBoosting, consistency can be fully characterized by the derivative of a real-valued function at zero, as happens for convex margin-based surrogates in binary classification. We also derive excess risk bounds for a surrogate of the absolute error that generalize existing risk bounds for binary classification. Finally, our analysis suggests a novel surrogate of the squared error loss. We compare this novel surrogate with competing approaches on 9 different datasets. Our method proves highly competitive in practice, outperforming the least squares loss on 7 out of 9 datasets.
[ "['Fabian Pedregosa' 'Francis Bach' 'Alexandre Gramfort']", "Fabian Pedregosa, Francis Bach, Alexandre Gramfort" ]
cs.LG
null
1408.2368
null
null
http://arxiv.org/pdf/1408.2368v1
2014-08-11T10:40:29Z
2014-08-11T10:40:29Z
On the Complexity of Bandit Linear Optimization
We study the attainable regret for online linear optimization problems with bandit feedback, where unlike the full-information setting, the player can only observe its own loss rather than the full loss vector. We show that the price of bandit information in this setting can be as large as $d$, disproving the well-known conjecture that the regret for bandit linear optimization is at most $\sqrt{d}$ times the full-information regret. Surprisingly, this is shown using "trivial" modifications of standard domains, which have no effect in the full-information setting. This and other results we present highlight some interesting differences between full-information and bandit learning, which were not considered in previous literature.
[ "['Ohad Shamir']", "Ohad Shamir" ]
stat.ME cs.DS cs.IT cs.LG math.IT
null
1408.2504
null
null
http://arxiv.org/pdf/1408.2504v1
2014-08-11T19:55:11Z
2014-08-11T19:55:11Z
Compressed Sensing with Very Sparse Gaussian Random Projections
We study the use of very sparse random projections for compressed sensing (sparse signal recovery) when the signal entries can be either positive or negative. In our setting, the entries of a Gaussian design matrix are randomly sparsified so that only a very small fraction of the entries are nonzero. Our proposed decoding algorithm is simple and efficient in that the major cost is one linear scan of the coordinates. We have developed two estimators: (i) the {\em tie estimator}, and (ii) the {\em absolute minimum estimator}. Using only the tie estimator, we are able to recover a $K$-sparse signal of length $N$ using $1.551 eK \log K/\delta$ measurements (where $\delta\leq 0.05$ is the confidence). Using only the absolute minimum estimator, we can detect the support of the signal using $eK\log N/\delta$ measurements. For a particular coordinate, the absolute minimum estimator requires fewer measurements (i.e., with a constant $e$ instead of $1.551e$). Thus, the two estimators can be combined to form an even more practical decoding framework. Prior studies have shown that existing one-scan (or roughly one-scan) recovery algorithms using sparse matrices would require substantially more (e.g., one order of magnitude) measurements than L1 decoding by linear programming, when the nonzero entries of signals can be either negative or positive. In this paper, following a known experimental setup, we show that, at the same number of measurements, the recovery accuracies of our proposed method are (at least) similar to the standard L1 decoding.
[ "Ping Li and Cun-Hui Zhang", "['Ping Li' 'Cun-Hui Zhang']" ]
stat.ML cs.GT cs.LG
null
1408.2539
null
null
http://arxiv.org/pdf/1408.2539v2
2015-04-24T16:39:54Z
2014-08-11T20:13:15Z
Optimum Statistical Estimation with Strategic Data Sources
We propose an optimum mechanism for providing monetary incentives to the data sources of a statistical estimator such as linear regression, so that high quality data is provided at low cost, in the sense that the sum of payments and estimation error is minimized. The mechanism applies to a broad range of estimators, including linear and polynomial regression, kernel regression, and, under some additional assumptions, ridge regression. It also generalizes to several objectives, including minimizing estimation error subject to budget constraints. Besides our concrete results for regression problems, we contribute a mechanism design framework through which to design and analyze statistical estimators whose examples are supplied by workers with cost for labeling said examples.
[ "Yang Cai, Constantinos Daskalakis, Christos H. Papadimitriou", "['Yang Cai' 'Constantinos Daskalakis' 'Christos H. Papadimitriou']" ]
q-bio.PE cs.LG stat.ML
null
1408.2552
null
null
http://arxiv.org/pdf/1408.2552v1
2014-08-11T20:39:42Z
2014-08-11T20:39:42Z
Comparing Nonparametric Bayesian Tree Priors for Clonal Reconstruction of Tumors
Statistical machine learning methods, especially nonparametric Bayesian methods, have become increasingly popular to infer clonal population structure of tumors. Here we describe the treeCRP, an extension of the Chinese restaurant process (CRP), a popular construction used in nonparametric mixture models, to infer the phylogeny and genotype of major subclonal lineages represented in the population of cancer cells. We also propose new split-merge updates tailored to the subclonal reconstruction problem that improve the mixing time of Markov chains. In comparisons with the tree-structured stick breaking prior used in PhyloSub, we demonstrate superior mixing and running time using the treeCRP with our new split-merge procedures. We also show that given the same number of samples, TSSB and treeCRP have similar ability to recover the subclonal structure of a tumor.
[ "['Amit G. Deshwar' 'Shankar Vembu' 'Quaid Morris']", "Amit G. Deshwar, Shankar Vembu, Quaid Morris" ]
math.OC cs.LG cs.NA math.NA stat.ML
null
1408.2597
null
null
http://arxiv.org/pdf/1408.2597v3
2015-03-02T04:02:54Z
2014-08-12T01:21:42Z
Block stochastic gradient iteration for convex and nonconvex optimization
The stochastic gradient (SG) method can minimize an objective function composed of a large number of differentiable functions, or solve a stochastic optimization problem, to a moderate accuracy. The block coordinate descent/update (BCD) method, on the other hand, handles problems with multiple blocks of variables by updating them one at a time; when the blocks of variables are easier to update individually than together, BCD has a lower per-iteration cost. This paper introduces a method that combines the features of SG and BCD for problems with many components in the objective and with multiple (blocks of) variables. Specifically, a block stochastic gradient (BSG) method is proposed for solving both convex and nonconvex programs. At each iteration, BSG approximates the gradient of the differentiable part of the objective by randomly sampling a small set of data or sampling a few functions from the sum term in the objective, and then, using those samples, it updates all the blocks of variables in either a deterministic or a randomly shuffled order. Its convergence for both convex and nonconvex cases is established in different senses. In the convex case, the proposed method has the same order of convergence rate as the SG method. In the nonconvex case, its convergence is established in terms of the expected violation of a first-order optimality condition. The proposed method was numerically tested on problems including stochastic least squares and logistic regression, which are convex, as well as low-rank tensor recovery and bilinear logistic regression, which are nonconvex.
[ "Yangyang Xu and Wotao Yin", "['Yangyang Xu' 'Wotao Yin']" ]
cs.LG stat.ML
null
1408.2764
null
null
http://arxiv.org/pdf/1408.2764v2
2015-08-23T18:56:15Z
2014-08-12T16:13:50Z
Convex Calibration Dimension for Multiclass Loss Matrices
We study consistency properties of surrogate loss functions for general multiclass learning problems, defined by a general multiclass loss matrix. We extend the notion of classification calibration, which has been studied for binary and multiclass 0-1 classification problems (and for certain other specific learning problems), to the general multiclass setting, and derive necessary and sufficient conditions for a surrogate loss to be calibrated with respect to a loss matrix in this setting. We then introduce the notion of convex calibration dimension of a multiclass loss matrix, which measures the smallest `size' of a prediction space in which it is possible to design a convex surrogate that is calibrated with respect to the loss matrix. We derive both upper and lower bounds on this quantity, and use these results to analyze various loss matrices. In particular, we apply our framework to study various subset ranking losses, and use the convex calibration dimension as a tool to show both the existence and non-existence of various types of convex calibrated surrogates for these losses. Our results strengthen recent results of Duchi et al. (2010) and Calauzenes et al. (2012) on the non-existence of certain types of convex calibrated surrogates in subset ranking. We anticipate the convex calibration dimension may prove to be a useful tool in the study and design of surrogate losses for general multiclass learning problems.
[ "['Harish G. Ramaswamy' 'Shivani Agarwal']", "Harish G. Ramaswamy and Shivani Agarwal" ]
cs.LG
10.1016/j.neucom.2014.07.062
1408.2803
null
null
http://arxiv.org/abs/1408.2803v2
2014-08-13T16:32:30Z
2014-08-12T18:57:48Z
Learning a hyperplane classifier by minimizing an exact bound on the VC dimension
The VC dimension measures the capacity of a learning machine, and a low VC dimension leads to good generalization. While SVMs produce state-of-the-art learning performance, it is well known that the VC dimension of an SVM can be unbounded; despite good results in practice, there is no guarantee of good generalization. In this paper, we show how to learn a hyperplane classifier by minimizing an exact ($\Theta$) bound on its VC dimension. The proposed approach, termed the Minimal Complexity Machine (MCM), involves solving a simple linear programming problem. Experimental results show that, on a number of benchmark datasets, the proposed approach learns classifiers with error rates much less than conventional SVMs, while often using fewer support vectors. On many benchmark datasets, the number of support vectors is less than one-tenth the number used by SVMs, indicating that the MCM does indeed learn simpler representations.
[ "['Jayadeva']", "Jayadeva" ]
cs.LG stat.ML
null
1408.2869
null
null
http://arxiv.org/pdf/1408.2869v1
2014-08-12T22:30:11Z
2014-08-12T22:30:11Z
Cluster based RBF Kernel for Support Vector Machines
In classical Gaussian SVM classification we use the feature space projection transforming points to normal distributions with fixed covariance matrices (identity in the standard RBF and the covariance of the whole dataset in Mahalanobis RBF). In this paper we add additional information to Gaussian SVM by considering a local geometry-dependent feature space projection. We emphasize that our approach is in fact an algorithm for the construction of a new Gaussian-type kernel. We show that better classification results (compared to standard RBF and Mahalanobis RBF) are obtained in the simple case when the space is first divided by k-means into two sets and points are represented as normal distributions with covariances calculated according to the dataset partitioning. We call the constructed method C$_k$RBF, where $k$ stands for the number of clusters used in k-means. We show empirically on nine datasets from the UCI repository that C$_2$RBF increases the stability of the grid search (measured as the probability of finding good parameters).
[ "Wojciech Marian Czarnecki, Jacek Tabor", "['Wojciech Marian Czarnecki' 'Jacek Tabor']" ]
cs.CL cs.LG cs.NE
null
1408.2873
null
null
http://arxiv.org/pdf/1408.2873v2
2014-12-08T20:21:52Z
2014-08-12T22:40:21Z
First-Pass Large Vocabulary Continuous Speech Recognition using Bi-Directional Recurrent DNNs
We present a method to perform first-pass large vocabulary continuous speech recognition using only a neural network and language model. Deep neural network acoustic models are now commonplace in HMM-based speech recognition systems, but building such systems is a complex, domain-specific task. Recent work demonstrated the feasibility of discarding the HMM sequence modeling framework by directly predicting transcript text from audio. This paper extends this approach in two ways. First, we demonstrate that a straightforward recurrent neural network architecture can achieve a high level of accuracy. Second, we propose and evaluate a modified prefix-search decoding algorithm. This approach to decoding enables first-pass speech recognition with a language model, completely unaided by the cumbersome infrastructure of HMM-based systems. Experiments on the Wall Street Journal corpus demonstrate fairly competitive word error rates, and the importance of bi-directional network recurrence.
[ "['Awni Y. Hannun' 'Andrew L. Maas' 'Daniel Jurafsky' 'Andrew Y. Ng']", "Awni Y. Hannun, Andrew L. Maas, Daniel Jurafsky, Andrew Y. Ng" ]
cs.LG cs.NE
null
1408.2889
null
null
http://arxiv.org/pdf/1408.2889v1
2014-08-13T00:38:41Z
2014-08-13T00:38:41Z
A Classifier-free Ensemble Selection Method based on Data Diversity in Random Subspaces
The Ensemble of Classifiers (EoC) has been shown to be effective in improving the performance of single classifiers by combining their outputs, and one of the most important properties involved in the selection of the best EoC from a pool of classifiers is considered to be classifier diversity. In general, classifier diversity does not occur randomly, but is generated systematically by various ensemble creation methods. By using diverse data subsets to train classifiers, these methods can create diverse classifiers for the EoC. In this work, we propose a scheme to measure data diversity directly from random subspaces, and explore the possibility of using it to select the best data subsets for the construction of the EoC. Our scheme is the first ensemble selection method to be presented in the literature based on the concept of data diversity. Its main advantage over the traditional framework (ensemble creation then selection) is that it obviates the need for classifier training prior to ensemble selection. A single Genetic Algorithm (GA) and a Multi-Objective Genetic Algorithm (MOGA) were evaluated to search for the best solutions for the classifier-free ensemble selection. In both cases, objective functions based on different clustering diversity measures were implemented and tested. All the results obtained with the proposed classifier-free ensemble selection method were compared with the traditional classifier-based ensemble selection using Mean Classifier Error (ME) and Majority Voting Error (MVE). The applicability of the method is tested on UCI machine learning problems and NIST SD19 handwritten numerals.
[ "['Albert H. R. Ko' 'Robert Sabourin' 'Alceu S. Britto Jr'\n 'Luiz E. S. Oliveira']", "Albert H. R. Ko, Robert Sabourin, Alceu S. Britto Jr, Luiz E. S.\n Oliveira" ]
cs.LG
null
1408.2890
null
null
http://arxiv.org/pdf/1408.2890v1
2014-08-13T00:41:01Z
2014-08-13T00:41:01Z
Robust OS-ELM with a novel selective ensemble based on particle swarm optimization
In this paper, a robust online sequential extreme learning machine (ROS-ELM) is proposed. It is based on the original OS-ELM with an adaptive selective ensemble framework. Two novel insights are proposed in this paper. First, a novel selective ensemble algorithm referred to as particle swarm optimization selective ensemble (PSOSEN) is proposed. Note that PSOSEN is a general selective ensemble method applicable to any learning algorithm, including batch learning and online learning. Second, an adaptive selective ensemble framework for online learning is designed to balance the robustness and complexity of the algorithm. Experiments for both regression and classification problems with UCI data sets are carried out. Comparisons between OS-ELM, simple ensemble OS-ELM (EOS-ELM) and the proposed ROS-ELM empirically show that ROS-ELM significantly improves the robustness and stability.
[ "['Yang Liu' 'Bo He' 'Diya Dong' 'Yue Shen' 'Tianhong Yan' 'Rui Nian'\n 'Amaury Lendase']", "Yang Liu, Bo He, Diya Dong, Yue Shen, Tianhong Yan, Rui Nian, Amaury\n Lendase" ]
cs.CV cs.LG cs.NE
null
1408.2938
null
null
http://arxiv.org/pdf/1408.2938v1
2014-08-13T08:27:12Z
2014-08-13T08:27:12Z
Learning Multi-Scale Representations for Material Classification
The recent progress in sparse coding and deep learning has made unsupervised feature learning methods a strong competitor to hand-crafted descriptors. In computer vision, success stories of learned features have been predominantly reported for object recognition tasks. In this paper, we investigate if and how feature learning can be used for material recognition. We propose two strategies to incorporate scale information into the learning procedure resulting in a novel multi-scale coding procedure. Our results show that our learned features for material recognition outperform hand-crafted descriptors on the FMD and the KTH-TIPS2 material classification benchmarks.
[ "Wenbin Li, Mario Fritz", "['Wenbin Li' 'Mario Fritz']" ]
cs.LG stat.ML
null
1408.3060
null
null
http://arxiv.org/pdf/1408.3060v1
2014-08-13T17:37:43Z
2014-08-13T17:37:43Z
Fastfood: Approximate Kernel Expansions in Loglinear Time
Despite their successes, what makes kernel methods difficult to use in many large scale problems is the fact that storing and computing the decision function is typically expensive, especially at prediction time. In this paper, we overcome this difficulty by proposing Fastfood, an approximation that accelerates such computation significantly. Key to Fastfood is the observation that Hadamard matrices, when combined with diagonal Gaussian matrices, exhibit properties similar to dense Gaussian random matrices. Yet unlike the latter, Hadamard and diagonal matrices are inexpensive to multiply and store. These two matrices can be used in lieu of Gaussian matrices in Random Kitchen Sinks proposed by Rahimi and Recht (2009), thereby speeding up the computation for a large range of kernel functions. Specifically, Fastfood requires O(n log d) time and O(n) storage to compute n non-linear basis functions in d dimensions, a significant improvement from O(nd) computation and storage, without sacrificing accuracy. Our method applies to any translation invariant and any dot-product kernel, such as the popular RBF kernels and polynomial kernels. We prove that the approximation is unbiased and has low variance. Experiments show that we achieve similar accuracy to full kernel expansions and Random Kitchen Sinks while being 100x faster and using 1000x less memory. These improvements, especially in terms of memory usage, make kernel methods more practical for applications that have large training sets and/or require real-time prediction.
[ "Quoc Viet Le, Tamas Sarlos, Alexander Johannes Smola", "['Quoc Viet Le' 'Tamas Sarlos' 'Alexander Johannes Smola']" ]
cs.LG cs.CV stat.ML
null
1408.3081
null
null
http://arxiv.org/pdf/1408.3081v1
2014-08-06T02:35:49Z
2014-08-06T02:35:49Z
Human Activity Learning and Segmentation using Partially Hidden Discriminative Models
Learning and understanding the typical patterns in the daily activities and routines of people from low-level sensory data is an important problem in many application domains such as building smart environments, or providing intelligent assistance. Traditional approaches to this problem typically rely on supervised learning and generative models such as the hidden Markov models and its extensions. While activity data can be readily acquired from pervasive sensors, e.g. in smart environments, providing manual labels to support supervised training is often extremely expensive. In this paper, we propose a new approach based on semi-supervised training of partially hidden discriminative models such as the conditional random field (CRF) and the maximum entropy Markov model (MEMM). We show that these models allow us to incorporate both labeled and unlabeled data for learning, and at the same time, provide us with the flexibility and accuracy of the discriminative framework. Our experimental results in the video surveillance domain illustrate that these models can perform better than their generative counterpart, the partially hidden Markov model, even when a substantial amount of labels are unavailable.
[ "['Truyen Tran' 'Hung Bui' 'Svetha Venkatesh']", "Truyen Tran, Hung Bui, Svetha Venkatesh" ]
stat.ML cs.LG
null
1408.3092
null
null
http://arxiv.org/pdf/1408.3092v1
2014-08-13T19:16:29Z
2014-08-13T19:16:29Z
Convergence rate of Bayesian tensor estimator: Optimal rate without restricted strong convexity
In this paper, we investigate the statistical convergence rate of a Bayesian low-rank tensor estimator. Our problem setting is the regression problem where a tensor structure underlying the data is estimated. This problem setting occurs in many practical applications, such as collaborative filtering, multi-task learning, and spatio-temporal data analysis. The convergence rate is analyzed in terms of both in-sample and out-of-sample predictive accuracies. It is shown that a near optimal rate is achieved without any strong convexity of the observation. Moreover, we show that the method has adaptivity to the unknown rank of the true tensor, that is, the near optimal rate depending on the true rank is achieved even if it is not known a priori.
[ "['Taiji Suzuki']", "Taiji Suzuki" ]
cs.NA cs.LG stat.ML
null
1408.3115
null
null
http://arxiv.org/pdf/1408.3115v4
2015-09-25T15:35:10Z
2014-08-13T18:44:18Z
On Data Preconditioning for Regularized Loss Minimization
In this work, we study data preconditioning, a well-known and long-existing technique, for boosting the convergence of first-order methods for regularized loss minimization. It is well understood that the condition number of the problem, i.e., the ratio of the Lipschitz constant to the strong convexity modulus, has a harsh effect on the convergence of the first-order optimization methods. Therefore, minimizing a small regularized loss for achieving good generalization performance, yielding an ill conditioned problem, becomes the bottleneck for big data problems. We provide a theory on data preconditioning for regularized loss minimization. In particular, our analysis exhibits an appropriate data preconditioner and characterizes the conditions on the loss function and on the data under which data preconditioning can reduce the condition number and therefore boost the convergence for minimizing the regularized loss. To make the data preconditioning practically useful, we endeavor to employ and analyze a random sampling approach to efficiently compute the preconditioned data. The preliminary experiments validate our theory.
[ "Tianbao Yang, Rong Jin, Shenghuo Zhu, Qihang Lin", "['Tianbao Yang' 'Rong Jin' 'Shenghuo Zhu' 'Qihang Lin']" ]
cs.LG math.PR math.ST stat.TH
null
1408.3169
null
null
http://arxiv.org/pdf/1408.3169v1
2014-08-14T00:19:03Z
2014-08-14T00:19:03Z
Indefinitely Oscillating Martingales
We construct a class of nonnegative martingale processes that oscillate indefinitely with high probability. For these processes, we state a uniform rate of the number of oscillations and show that this rate is asymptotically close to the theoretical upper bound. These bounds on probability and expectation of the number of upcrossings are compared to classical bounds from the martingale literature. We discuss two applications. First, our results imply that the limit of the minimum description length operator may not exist. Second, we give bounds on how often one can change one's belief in a given hypothesis when observing a stream of data.
[ "Jan Leike and Marcus Hutter", "['Jan Leike' 'Marcus Hutter']" ]
cs.CV cs.LG
null
1408.3218
null
null
http://arxiv.org/pdf/1408.3218v1
2014-08-14T08:42:22Z
2014-08-14T08:42:22Z
Toward Automated Discovery of Artistic Influence
Considering the huge amount of art pieces that exist, there is valuable information to be discovered. Examining a painting, an expert can determine its style, genre, and the time period to which the painting belongs. One important task for art historians is to find influences and connections between artists. Is influence a task that a computer can measure? The contribution of this paper is in exploring the problem of computer-automated suggestion of influences between artists, a problem that was not addressed before in a general setting. We first present a comparative study of different classification methodologies for the task of fine-art style classification. A two-level comparative study is performed for this classification problem. The first level reviews the performance of discriminative vs. generative models, while the second level touches on the features aspect of the paintings and compares semantic-level features vs. low-level and intermediate-level features present in the painting. Then, we investigate the question "Who influenced this artist?" by looking at the artist's masterpieces and comparing them to those of others. We pose this interesting question as a knowledge discovery problem. For this purpose, we investigated several painting-similarity and artist-similarity measures. As a result, we provide a visualization of artists (Map of Artists) based on the similarity between their works.
[ "Babak Saleh, Kanako Abe, Ravneet Singh Arora, Ahmed Elgammal", "['Babak Saleh' 'Kanako Abe' 'Ravneet Singh Arora' 'Ahmed Elgammal']" ]
cs.CV cs.LG cs.MS cs.NE
null
1408.3264
null
null
http://arxiv.org/pdf/1408.3264v7
2016-01-06T13:20:11Z
2014-08-14T12:37:57Z
A brief survey on deep belief networks and introducing a new object oriented toolbox (DeeBNet)
Deep architectures have become very popular in machine learning. Deep Belief Networks (DBNs) are deep architectures that use a stack of Restricted Boltzmann Machines (RBMs) to create a powerful generative model from training data. DBNs have many abilities, such as feature extraction and classification, that are used in many applications such as image processing and speech processing. This paper introduces a new object-oriented MATLAB toolbox with most of the abilities needed for the implementation of DBNs. In the new version, the toolbox can also be used in Octave. According to the results of experiments conducted on the MNIST (image), ISOLET (speech), and 20 Newsgroups (text) datasets, the toolbox can automatically learn a good representation of the input from unlabeled data, with better discrimination between different classes. On all datasets, the obtained classification errors are comparable to those of state-of-the-art classifiers. In addition, the toolbox supports different sampling methods (e.g. Gibbs, CD, PCD and our new FEPCD method), different sparsity methods (quadratic, rate distortion and our new normal method), different RBM types (generative and discriminative), GPU computation, etc. The toolbox is user-friendly, open source software and is freely available at http://ceit.aut.ac.ir/~keyvanrad/DeeBNet%20Toolbox.html .
[ "['Mohammad Ali Keyvanrad' 'Mohammad Mehdi Homayounpour']", "Mohammad Ali Keyvanrad, Mohammad Mehdi Homayounpour" ]
stat.ML cs.LG
null
1408.3332
null
null
http://arxiv.org/pdf/1408.3332v1
2014-08-14T16:29:36Z
2014-08-14T16:29:36Z
Exact and empirical estimation of misclassification probability
We discuss the problem of risk estimation in the classification problem, with specific focus on finding distributions that maximize the confidence intervals of risk estimation. We derive simple analytic approximations for the maximum bias of the empirical risk for the histogram classifier. We carry out a detailed study on using these analytic estimates for empirical estimation of risk.
[ "Victor Nedelko", "['Victor Nedelko']" ]
cs.CV cs.LG
null
1408.3337
null
null
http://arxiv.org/pdf/1408.3337v1
2014-08-14T16:47:34Z
2014-08-14T16:47:34Z
2D View Aggregation for Lymph Node Detection Using a Shallow Hierarchy of Linear Classifiers
Enlarged lymph nodes (LNs) can provide important information for cancer diagnosis, staging, and measuring treatment reactions, making automated detection a highly sought goal. In this paper, we propose a new algorithm representation of decomposing the LN detection problem into a set of 2D object detection subtasks on sampled CT slices, largely alleviating the curse of dimensionality issue. Our 2D detection can be effectively formulated as linear classification on a single image feature type of Histogram of Oriented Gradients (HOG), covering a moderate field-of-view of 45 by 45 voxels. We exploit both simple pooling and sparse linear fusion schemes to aggregate these 2D detection scores for the final 3D LN detection. In this manner, detection is more tractable and does not need to perform perfectly at instance level (as weak hypotheses) since our aggregation process will robustly harness collective information for LN detection. Two datasets (90 patients with 389 mediastinal LNs and 86 patients with 595 abdominal LNs) are used for validation. Cross-validation demonstrates 78.0% sensitivity at 6 false positives/volume (FP/vol.) (86.1% at 10 FP/vol.) and 73.1% sensitivity at 6 FP/vol. (87.2% at 10 FP/vol.), for the mediastinal and abdominal datasets respectively. Our results compare favorably to previous state-of-the-art methods.
[ "Ari Seff, Le Lu, Kevin M. Cherry, Holger Roth, Jiamin Liu, Shijun\n Wang, Joanne Hoffman, Evrim B. Turkbey, and Ronald M. Summers", "['Ari Seff' 'Le Lu' 'Kevin M. Cherry' 'Holger Roth' 'Jiamin Liu'\n 'Shijun Wang' 'Joanne Hoffman' 'Evrim B. Turkbey' 'Ronald M. Summers']" ]
cs.LG
null
1408.3359
null
null
http://arxiv.org/pdf/1408.3359v1
2014-08-13T05:14:15Z
2014-08-13T05:14:15Z
Linear Contour Learning: A Method for Supervised Dimension Reduction
We propose a novel approach to sufficient dimension reduction in regression, based on estimating contour directions of negligible variation for the response surface. These directions span the orthogonal complement of the minimal space relevant for the regression, and can be extracted according to a measure of the variation in the response, leading to General Contour Regression (GCR). In comparison to existing sufficient dimension reduction techniques, this contour-based methodology guarantees exhaustive estimation of the central space under ellipticity of the predictor distribution and very mild additional assumptions, while maintaining $\sqrt{n}$-consistency and computational ease. Moreover, it proves to be robust to departures from ellipticity. We also establish some useful population properties for GCR. Simulations to compare performance with that of standard techniques such as ordinary least squares, sliced inverse regression, principal Hessian directions, and sliced average variance estimation confirm the advantages anticipated by theoretical analyses. We also demonstrate the use of contour-based methods on a data set concerning grades of students from Massachusetts colleges.
[ "['Bing Li' 'Hongyuan Zha' 'Francesca Chiaromonte']", "Bing Li, Hongyuan Zha, Francesca Chiaromonte" ]
cs.CY cs.LG
null
1408.3382
null
null
http://arxiv.org/pdf/1408.3382v1
2014-08-14T18:54:30Z
2014-08-14T18:54:30Z
Likely to stop? Predicting Stopout in Massive Open Online Courses
Understanding why students stopout will help in understanding how students learn in MOOCs. In this report, part of a 3-unit compendium, we describe how we build accurate predictive models of MOOC student stopout. We document a scalable, stopout prediction methodology, end to end, from raw source data to model analysis. We attempted to predict stopout for the Fall 2012 offering of 6.002x. This involved the meticulous and crowd-sourced engineering of over 25 predictive features extracted for thousands of students, the creation of temporal and non-temporal data representations for use in predictive modeling, the derivation of over 10,000 models with a variety of state-of-the-art machine learning techniques and the analysis of feature importance by examining over 70,000 models. We found that stopout prediction is a tractable problem. Our models achieved an AUC (receiver operating characteristic area-under-the-curve) as high as 0.95 (and generally 0.88) when predicting one week in advance. Even with more difficult prediction problems, such as predicting stopout at the end of the course with only one week's data, the models attained AUCs of 0.7.
[ "Colin Taylor, Kalyan Veeramachaneni, Una-May O'Reilly", "['Colin Taylor' 'Kalyan Veeramachaneni' \"Una-May O'Reilly\"]" ]
stat.ME cs.LG stat.ML
null
1408.3467
null
null
null
null
null
Evaluating Visual Properties via Robust HodgeRank
Nowadays, how to effectively evaluate visual properties has become a popular topic for fine-grained visual comprehension. In this paper we study the problem of how to estimate such visual properties from a ranking perspective with the help of the annotators from online crowdsourcing platforms. The main challenges of our task are two-fold. On one hand, the annotations often contain contaminated information, where a small fraction of label flips might ruin the global ranking of the whole dataset. On the other hand, considering the large data capacity, the annotations are often far from being complete. What is worse, there might even exist imbalanced annotations where a small subset of samples are frequently annotated. Facing such challenges, we propose a robust ranking framework based on the principle of Hodge decomposition of imbalanced and incomplete ranking data. According to the HodgeRank theory, we find that the major source of the contamination comes from the cyclic ranking component of the Hodge decomposition. This leads us to an outlier detection formulation as sparse approximations of the cyclic ranking projection. Taking a step further, it facilitates a novel outlier detection model as Huber's LASSO in robust statistics. Moreover, simple yet scalable algorithms are developed based on Linearized Bregman Iteration to achieve an even less biased estimator. Statistical consistency of outlier detection is established in both cases under nearly the same conditions. Our studies are supported by experiments with both simulated examples and real-world data. The proposed framework provides us a promising tool for robust ranking with large scale crowdsourcing data arising from computer vision.
[ "Qianqian Xu and Jiechao Xiong and Xiaochun Cao and Qingming Huang and\n Yuan Yao" ]
math.OC cs.DC cs.LG cs.MA
10.1109/TSP.2015.2415759
1408.3693
null
null
http://arxiv.org/abs/1408.3693v3
2015-05-13T12:04:30Z
2014-08-16T01:52:42Z
Stability and Performance Limits of Adaptive Primal-Dual Networks
This work studies distributed primal-dual strategies for adaptation and learning over networks from streaming data. Two first-order methods are considered based on the Arrow-Hurwicz (AH) and augmented Lagrangian (AL) techniques. Several revealing results are discovered in relation to the performance and stability of these strategies when employed over adaptive networks. The conclusions establish that the advantages that these methods have for deterministic optimization problems do not necessarily carry over to stochastic optimization problems. It is found that they have narrower stability ranges and worse steady-state mean-square-error performance than primal methods of the consensus and diffusion type. It is also found that the AH technique can become unstable under a partial observation model, while the other techniques are able to recover the unknown under this scenario. A method to enhance the performance of AL strategies is proposed by tying the selection of the step-size to their regularization parameter. It is shown that this method allows the AL algorithm to approach the performance of consensus and diffusion strategies but that it remains less stable than these other strategies.
[ "['Zaid J. Towfic' 'Ali H. Sayed']", "Zaid J. Towfic and Ali H. Sayed" ]
cs.RO cs.LG
null
1408.3727
null
null
http://arxiv.org/pdf/1408.3727v5
2015-04-18T01:27:35Z
2014-08-16T09:20:47Z
Inverse Reinforcement Learning with Multi-Relational Chains for Robot-Centered Smart Home
In a robot-centered smart home, the robot observes the home states with its own sensors, and then it can change certain object states according to an operator's commands for remote operations, or imitate the operator's behaviors in the house for autonomous operations. To model the robot's imitation of the operator's behaviors in a dynamic indoor environment, we use multi-relational chains to describe the changes of environment states, and apply inverse reinforcement learning to encode the operator's behaviors with a learned reward function. We implement this approach with a mobile robot, and conduct five experiments with increasing numbers of training days, objects, and action types. In addition, a baseline method that directly records the operator's behaviors is also implemented, and a comparison is made on the accuracy of home state evaluation and the accuracy of robot action selection. The results show that the proposed approach handles dynamic environments well, and guides the robot's actions in the house more accurately.
[ "Kun Li, Max Q.-H. Meng", "['Kun Li' 'Max Q. -H. Meng']" ]
cs.LG
null
1408.3733
null
null
http://arxiv.org/pdf/1408.3733v1
2014-08-16T11:11:59Z
2014-08-16T11:11:59Z
Multi-Sensor Event Detection using Shape Histograms
Vehicular sensor data consists of multiple time-series arising from a number of sensors. Using such multi-sensor data we would like to detect occurrences of specific events that vehicles encounter, e.g., corresponding to particular maneuvers that a vehicle makes or conditions that it encounters. Events are characterized by similar waveform patterns re-appearing within one or more sensors. Further such patterns can be of variable duration. In this work, we propose a method for detecting such events in time-series data using a novel feature descriptor motivated by similar ideas in image processing. We define the shape histogram: a constant dimension descriptor that nevertheless captures patterns of variable duration. We demonstrate the efficacy of using shape histograms as features to detect events in an SVM-based, multi-sensor, supervised learning scenario, i.e., multiple time-series are used to detect an event. We present results on real-life vehicular sensor data and show that our technique performs better than available pattern detection implementations on our data, and that it can also be used to combine features from multiple sensors resulting in better accuracy than using any single sensor. Since previous work on pattern detection in time-series has been in the single series context, we also present results using our technique on multiple standard time-series datasets and show that it is the most versatile in terms of how it ranks compared to other published results.
[ "Ehtesham Hassan and Gautam Shroff and Puneet Agarwal", "['Ehtesham Hassan' 'Gautam Shroff' 'Puneet Agarwal']" ]
cs.CV cs.LG cs.NE
null
1408.3750
null
null
http://arxiv.org/pdf/1408.3750v1
2014-08-16T17:11:44Z
2014-08-16T17:11:44Z
Real-time emotion recognition for gaming using deep convolutional network features
The goal of the present study is to explore the application of deep convolutional network features to emotion recognition. Results indicate that they perform similarly to other published models at a best recognition rate of 94.4%, and do so with a single still image rather than a video stream. An implementation of an affective feedback game is also described, where a classifier using these features tracks the facial expressions of a player in real-time.
[ "S\\'ebastien Ouellet", "['Sébastien Ouellet']" ]
cs.LG cs.HC
null
1408.3944
null
null
http://arxiv.org/pdf/1408.3944v2
2014-09-17T16:19:34Z
2014-08-18T09:18:38Z
Down-Sampling coupled to Elastic Kernel Machines for Efficient Recognition of Isolated Gestures
In the field of gestural action recognition, many studies have focused on dimensionality reduction along the spatial axis, to reduce both the variability of gestural sequences expressed in the reduced space, and the computational complexity of their processing. It is noticeable that very few of these methods have explicitly addressed the dimensionality reduction along the time axis. This is however a major issue with regard to the use of elastic distances characterized by a quadratic complexity. To partially fill this apparent gap, we present in this paper an approach based on temporal down-sampling associated to elastic kernel machine learning. We experimentally show, on two data sets that are widely referenced in the domain of human gesture recognition, and very different in terms of quality of motion capture, that it is possible to significantly reduce the number of skeleton frames while maintaining a good recognition rate. The method proves to give satisfactory results at a level currently reached by state-of-the-art methods on these data sets. The computational complexity reduction makes this approach eligible for real-time applications.
[ "['Pierre-François Marteau' 'Sylvie Gibet' 'Clement Reverdy']", "Pierre-Fran\\c{c}ois Marteau (IRISA), Sylvie Gibet (IRISA), Clement\n Reverdy (IRISA)" ]
cs.CV cs.LG
10.1109/TPAMI.2015.2469286
1408.3967
null
null
http://arxiv.org/abs/1408.3967v4
2015-08-11T10:08:40Z
2014-08-18T10:34:29Z
Learning Deep Representation for Face Alignment with Auxiliary Attributes
In this study, we show that landmark detection or face alignment task is not a single and independent problem. Instead, its robustness can be greatly improved with auxiliary information. Specifically, we jointly optimize landmark detection together with the recognition of heterogeneous but subtly correlated facial attributes, such as gender, expression, and appearance attributes. This is non-trivial since different attribute inference tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, which not only learns the inter-task correlation but also employs dynamic task coefficients to facilitate the optimization convergence when learning multiple complex tasks. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing face alignment methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art methods based on cascaded deep model.
[ "Zhanpeng Zhang, Ping Luo, Chen Change Loy, Xiaoou Tang", "['Zhanpeng Zhang' 'Ping Luo' 'Chen Change Loy' 'Xiaoou Tang']" ]
stat.ML cs.DS cs.LG math.ST stat.TH
null
1408.4045
null
null
http://arxiv.org/pdf/1408.4045v5
2015-04-15T02:11:54Z
2014-08-18T15:42:16Z
Relax, no need to round: integrality of clustering formulations
We study exact recovery conditions for convex relaxations of point cloud clustering problems, focusing on two of the most common optimization problems for unsupervised clustering: $k$-means and $k$-median clustering. Motivations for focusing on convex relaxations are: (a) they come with a certificate of optimality, and (b) they are generic tools which are relatively parameter-free, not tailored to specific assumptions over the input. More precisely, we consider the distributional setting where there are $k$ clusters in $\mathbb{R}^m$ and data from each cluster consists of $n$ points sampled from a symmetric distribution within a ball of unit radius. We ask: what is the minimal separation distance between cluster centers needed for convex relaxations to exactly recover these $k$ clusters as the optimal integral solution? For the $k$-median linear programming relaxation we show a tight bound: exact recovery is obtained given arbitrarily small pairwise separation $\epsilon > 0$ between the balls. In other words, the pairwise center separation is $\Delta > 2+\epsilon$. Under the same distributional model, the $k$-means LP relaxation fails to recover such clusters at separation as large as $\Delta = 4$. Yet, if we enforce PSD constraints on the $k$-means LP, we get exact cluster recovery at center separation $\Delta > 2\sqrt2(1+\sqrt{1/m})$. In contrast, common heuristics such as Lloyd's algorithm (a.k.a. the $k$-means algorithm) can fail to recover clusters in this setting; even with arbitrarily large cluster separation, k-means++ with overseeding by any constant factor fails with high probability at exact cluster recovery. To complement the theoretical analysis, we provide an experimental study of the recovery guarantees for these various methods, and discuss several open problems which these experiments suggest.
[ "Pranjal Awasthi, Afonso S. Bandeira, Moses Charikar, Ravishankar\n Krishnaswamy, Soledad Villar, Rachel Ward", "['Pranjal Awasthi' 'Afonso S. Bandeira' 'Moses Charikar'\n 'Ravishankar Krishnaswamy' 'Soledad Villar' 'Rachel Ward']" ]
cs.LG cs.DB cs.DS
null
1408.4072
null
null
http://arxiv.org/pdf/1408.4072v1
2014-08-15T07:21:48Z
2014-08-15T07:21:48Z
Indexing Cost Sensitive Prediction
Predictive models are often used for real-time decision making. However, typical machine learning techniques ignore feature evaluation cost, and focus solely on the accuracy of the machine learning models obtained utilizing all the features available. We develop algorithms and indexes to support cost-sensitive prediction, i.e., making decisions using machine learning models taking feature evaluation cost into account. Given an item and an online computation cost (i.e., time) budget, we present two approaches to return an appropriately chosen machine learning model that will run within the specified time on the given item. The first approach returns the optimal machine learning model, i.e., one with the highest accuracy, that runs within the specified time, but requires significant up-front precomputation time. The second approach returns a possibly sub-optimal machine learning model, but requires little up-front precomputation time. We study these two algorithms in detail and characterize the scenarios (using real and synthetic data) in which each performs well. Unlike prior work that focuses on a narrow domain or a specific algorithm, our techniques are very general: they apply to any cost-sensitive prediction scenario on any machine learning algorithm.
[ "Leilani Battle, Edward Benson, Aditya Parameswaran, Eugene Wu", "['Leilani Battle' 'Edward Benson' 'Aditya Parameswaran' 'Eugene Wu']" ]
math.OC cs.LG cs.SY
null
1408.4551
null
null
http://arxiv.org/pdf/1408.4551v2
2014-11-08T12:13:28Z
2014-08-20T07:29:11Z
Dimensionality Reduction of Affine Variational Inequalities Using Random Projections
We present a method for dimensionality reduction of an affine variational inequality (AVI) defined over a compact feasible region. Centered around the Johnson Lindenstrauss lemma, our method is a randomized algorithm that produces with high probability an approximate solution for the given AVI by solving a lower-dimensional AVI. The algorithm allows the lower dimension to be chosen based on the quality of approximation desired. The algorithm can also be used as a subroutine in an exact algorithm for generating an initial point close to the solution. The lower-dimensional AVI is obtained by appropriately projecting the original AVI on a randomly chosen subspace. The lower-dimensional AVI is solved using standard solvers and from this solution an approximate solution to the original AVI is recovered through an inexpensive process. Our numerical experiments corroborate the theoretical results and validate that the algorithm provides a good approximation at low dimensions and substantial savings in time for an exact solution.
[ "Bharat Prabhakar, Ankur A. Kulkarni", "['Bharat Prabhakar' 'Ankur A. Kulkarni']" ]
cs.LG cs.CV
null
1408.4576
null
null
http://arxiv.org/pdf/1408.4576v2
2014-08-25T14:49:56Z
2014-08-20T09:25:18Z
Introduction to Clustering Algorithms and Applications
Data clustering is the process of identifying natural groupings or clusters within multidimensional data based on some similarity measure. Clustering is a fundamental process in many different disciplines. Hence, researchers from different fields are actively working on the clustering problem. This paper provides an overview of the different representative clustering methods. In addition, applications of clustering in different fields are briefly introduced.
[ "Sibei Yang and Liangde Tao and Bingchen Gong", "['Sibei Yang' 'Liangde Tao' 'Bingchen Gong']" ]
stat.CO cs.LG math.OC stat.ML
null
1408.4622
null
null
http://arxiv.org/pdf/1408.4622v1
2014-08-20T12:16:54Z
2014-08-20T12:16:54Z
A new integral loss function for Bayesian optimization
We consider the problem of maximizing a real-valued continuous function $f$ using a Bayesian approach. Since the early work of Jonas Mockus and Antanas \v{Z}ilinskas in the 70's, the problem of optimization is usually formulated by considering the loss function $\max f - M_n$ (where $M_n$ denotes the best function value observed after $n$ evaluations of $f$). This loss function puts emphasis on the value of the maximum, at the expense of the location of the maximizer. In the special case of a one-step Bayes-optimal strategy, it leads to the classical Expected Improvement (EI) sampling criterion. This is a special case of a Stepwise Uncertainty Reduction (SUR) strategy, where the risk associated to a certain uncertainty measure (here, the expected loss) on the quantity of interest is minimized at each step of the algorithm. In this article, assuming that $f$ is defined over a measure space $(\mathbb{X}, \lambda)$, we propose to consider instead the integral loss function $\int_{\mathbb{X}} (f - M_n)_{+}\, d\lambda$, and we show that this leads, in the case of a Gaussian process prior, to a new numerically tractable sampling criterion that we call $\rm EI^2$ (for Expected Integrated Expected Improvement). A numerical experiment illustrates that a SUR strategy based on this new sampling criterion reduces the error on both the value and the location of the maximizer faster than the EI-based strategy.
[ "['Emmanuel Vazquez' 'Julien Bect']", "Emmanuel Vazquez and Julien Bect" ]
cs.LG
null
1408.4673
null
null
http://arxiv.org/pdf/1408.4673v2
2017-08-09T12:50:11Z
2014-08-20T14:29:11Z
AFP Algorithm and a Canonical Normal Form for Horn Formulas
The AFP algorithm is a learning algorithm for Horn formulas. We show that its complexity does not improve if, after each negative counterexample, more than just one refinement is performed. Moreover, a canonical normal form for Horn formulas is presented, and it is proved that the output formula of the AFP algorithm is in this normal form.
[ "Ruhollah Majdoddin", "['Ruhollah Majdoddin']" ]
cs.LG
null
1408.4714
null
null
http://arxiv.org/pdf/1408.4714v1
2014-08-20T16:23:53Z
2014-08-20T16:23:53Z
Conic Multi-Task Classification
Traditionally, Multi-task Learning (MTL) models optimize the average of task-related objective functions, which is an intuitive approach and which we will be referring to as Average MTL. However, a more general framework, referred to as Conic MTL, can be formulated by considering conic combinations of the objective functions instead; in this framework, Average MTL arises as a special case, when all combination coefficients equal 1. Although the advantage of Conic MTL over Average MTL has been shown experimentally in previous works, no theoretical justification has been provided to date. In this paper, we derive a generalization bound for the Conic MTL method, and demonstrate that the tightest bound is not necessarily achieved, when all combination coefficients equal 1; hence, Average MTL may not always be the optimal choice, and it is important to consider Conic MTL. As a byproduct of the generalization bound, it also theoretically explains the good experimental results of previous relevant works. Finally, we propose a new Conic MTL model, whose conic combination coefficients minimize the generalization bound, instead of choosing them heuristically as has been done in previous methods. The rationale and advantage of our model is demonstrated and verified via a series of experiments by comparing with several other methods.
[ "Cong Li, Michael Georgiopoulos, Georgios C. Anagnostopoulos", "['Cong Li' 'Michael Georgiopoulos' 'Georgios C. Anagnostopoulos']" ]
stat.ML cs.IR cs.LG
null
1408.4966
null
null
http://arxiv.org/pdf/1408.4966v2
2015-06-25T13:48:40Z
2014-08-21T11:34:37Z
Diffusion Fingerprints
We introduce, test and discuss a method for classifying and clustering data modeled as directed graphs. The idea is to start diffusion processes from any subset of a data collection, generating corresponding distributions for reaching points in the network. These distributions take the form of high-dimensional numerical vectors and capture essential topological properties of the original dataset. We show how these diffusion vectors can be successfully applied for getting state-of-the-art accuracies in the problem of extracting pathways from metabolic networks. We also provide a guideline to illustrate how to use our method for classification problems, and discuss important details of its implementation. In particular, we present a simple dimensionality reduction technique that lowers the computational cost of classifying diffusion vectors, while leaving the predictive power of the classification process substantially unaltered. Although the method has very few parameters, the results we obtain show its flexibility and power. This should make it helpful in many other contexts.
[ "Jimmy Dubuisson, Jean-Pierre Eckmann and Andrea Agazzi", "['Jimmy Dubuisson' 'Jean-Pierre Eckmann' 'Andrea Agazzi']" ]
cs.CV cs.LG cs.NE
null
1408.5093
null
null
http://arxiv.org/pdf/1408.5093v1
2014-06-20T23:00:32Z
2014-06-20T23:00:32Z
Caffe: Convolutional Architecture for Fast Feature Embedding
Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures. Caffe fits industry and internet-scale media needs by CUDA GPU computation, processing over 40 million images a day on a single K40 or Titan GPU ($\approx$ 2.5 ms per image). By separating model representation from actual implementation, Caffe allows experimentation and seamless switching among platforms for ease of development and deployment from prototyping machines to cloud environments. Caffe is maintained and developed by the Berkeley Vision and Learning Center (BVLC) with the help of an active community of contributors on GitHub. It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia.
[ "Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan\n Long, Ross Girshick, Sergio Guadarrama, Trevor Darrell", "['Yangqing Jia' 'Evan Shelhamer' 'Jeff Donahue' 'Sergey Karayev'\n 'Jonathan Long' 'Ross Girshick' 'Sergio Guadarrama' 'Trevor Darrell']" ]
cs.DS cs.LG stat.ML
null
1408.5099
null
null
http://arxiv.org/pdf/1408.5099v1
2014-08-21T18:32:00Z
2014-08-21T18:32:00Z
Uniform Sampling for Matrix Approximation
Random sampling has become a critical tool in solving massive matrix problems. For linear regression, a small, manageable set of data rows can be randomly selected to approximate a tall, skinny data matrix, improving processing time significantly. For theoretical performance guarantees, each row must be sampled with probability proportional to its statistical leverage score. Unfortunately, leverage scores are difficult to compute. A simple alternative is to sample rows uniformly at random. While this often works, uniform sampling will eliminate critical row information for many natural instances. We take a fresh look at uniform sampling by examining what information it does preserve. Specifically, we show that uniform sampling yields a matrix that, in some sense, well approximates a large fraction of the original. While this weak form of approximation is not enough for solving linear regression directly, it is enough to compute a better approximation. This observation leads to simple iterative row sampling algorithms for matrix approximation that run in input-sparsity time and preserve row structure and sparsity at all intermediate steps. In addition to an improved understanding of uniform sampling, our main proof introduces a structural result of independent interest: we show that every matrix can be made to have low coherence by reweighting a small subset of its rows.
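The following is a minimal numpy sketch of the background contrast drawn in this abstract, comparing leverage-score sampling with uniform row sampling for a least-squares problem. It does not implement the paper's iterative, input-sparsity-time algorithm; all data sizes and sample counts are illustrative assumptions.

```python
import numpy as np

def leverage_scores(A):
    """Exact statistical leverage scores of a tall matrix A via a thin QR factorization."""
    Q, _ = np.linalg.qr(A)          # orthonormal basis for the column space of A
    return np.sum(Q ** 2, axis=1)   # squared row norms of Q are the leverage scores

def sample_rows(A, b, m, probs):
    """Sample m rows of (A, b) with the given probabilities and rescale for an unbiased sketch."""
    n = A.shape[0]
    idx = np.random.choice(n, size=m, replace=True, p=probs)
    scale = 1.0 / np.sqrt(m * probs[idx])
    return A[idx] * scale[:, None], b[idx] * scale

rng = np.random.default_rng(0)
A = rng.standard_normal((10000, 20))
b = A @ rng.standard_normal(20) + 0.1 * rng.standard_normal(10000)

lev = leverage_scores(A)
A_l, b_l = sample_rows(A, b, m=400, probs=lev / lev.sum())                  # leverage-score sampling
A_u, b_u = sample_rows(A, b, m=400, probs=np.full(len(b), 1.0 / len(b)))    # uniform sampling

x_full = np.linalg.lstsq(A, b, rcond=None)[0]
x_lev = np.linalg.lstsq(A_l, b_l, rcond=None)[0]
x_uni = np.linalg.lstsq(A_u, b_u, rcond=None)[0]
print(np.linalg.norm(x_lev - x_full), np.linalg.norm(x_uni - x_full))
```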
[ "['Michael B. Cohen' 'Yin Tat Lee' 'Cameron Musco' 'Christopher Musco'\n 'Richard Peng' 'Aaron Sidford']", "Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco,\n Richard Peng, Aaron Sidford" ]
cs.AI cs.LG
null
1408.5241
null
null
http://arxiv.org/pdf/1408.5241v1
2014-08-22T09:33:50Z
2014-08-22T09:33:50Z
A two-stage architecture for stock price forecasting by combining SOM and fuzzy-SVM
This paper proposes a model to predict stock prices based on combining the Self-Organizing Map (SOM) and fuzzy Support Vector Machines (f-SVM). The proposed approach is based on extracting fuzzy rules from raw data by combining statistical machine learning models. In the proposed model, SOM is used as a clustering algorithm to partition the whole input space into several disjoint regions. For each partition, a set of fuzzy rules is extracted based on an f-SVM combining model. The fuzzy rule sets are then used to predict the test data using fuzzy inference algorithms. The performance of the proposed approach is compared with that of other models using four data sets.
[ "['Duc-Hien Nguyen' 'Manh-Thanh Le']", "Duc-Hien Nguyen, Manh-Thanh Le" ]
cs.LG cs.AI
null
1408.5246
null
null
http://arxiv.org/pdf/1408.5246v1
2014-08-22T09:55:06Z
2014-08-22T09:55:06Z
Improving the Interpretability of Support Vector Machines-based Fuzzy Rules
Support vector machines (SVMs) and fuzzy rule systems are functionally equivalent under some conditions. Therefore, the learning algorithms developed in the field of support vector machines can be used to adapt the parameters of fuzzy systems. Extracting fuzzy models from support vector machines has the inherent advantage that the model does not need to determine the number of rules in advance. However, after support vector machine learning, the complexity is usually high and interpretability is impaired. This paper not only proposes a complete framework for extracting interpretable SVM-based fuzzy models, but also addresses the optimization issues of these models. Simulation examples are given to embody the idea of this paper.
[ "['Duc-Hien Nguyen' 'Manh-Thanh Le']", "Duc-Hien Nguyen, Manh-Thanh Le" ]
stat.ML cs.LG
null
1408.5352
null
null
http://arxiv.org/pdf/1408.5352v1
2014-08-22T16:33:09Z
2014-08-22T16:33:09Z
Nonconvex Statistical Optimization: Minimax-Optimal Sparse PCA in Polynomial Time
Sparse principal component analysis (PCA) involves nonconvex optimization for which the global solution is hard to obtain. To address this issue, one popular approach is convex relaxation. However, such an approach may produce suboptimal estimators due to the relaxation effect. To optimally estimate sparse principal subspaces, we propose a two-stage computational framework named "tighten after relax": within the 'relax' stage, we approximately solve a convex relaxation of sparse PCA with early stopping to obtain a desired initial estimator; within the 'tighten' stage, we propose a novel algorithm called sparse orthogonal iteration pursuit (SOAP), which iteratively refines the initial estimator by directly solving the underlying nonconvex problem. A key concept of this two-stage framework is the basin of attraction. It represents a local region within which the 'tighten' stage has the desired computational and statistical guarantees. We prove that the initial estimator obtained from the 'relax' stage falls into such a region, and hence SOAP geometrically converges to a principal subspace estimator which is minimax-optimal within a certain model class. Unlike most existing sparse PCA estimators, our approach applies to the non-spiked covariance models, and adapts to non-Gaussianity as well as dependent data settings. Moreover, through analyzing the computational complexity of the two stages, we illustrate an interesting phenomenon that larger sample size can reduce the total iteration complexity. Our framework motivates a general paradigm for solving many complex statistical problems which involve nonconvex optimization with provable guarantees.
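Below is a hedged numpy sketch of a truncated orthogonal (power) iteration in the spirit of the 'tighten' stage: multiply by the sample covariance, keep only the most energetic coordinates, and re-orthonormalize. The truncation rule, sparsity level, and data are illustrative assumptions, not the authors' exact SOAP procedure.

```python
import numpy as np

def truncated_orthogonal_iteration(S, k, s, n_iter=100):
    """Power-type iteration for a k-dim principal subspace of covariance S,
    keeping only the s largest-norm rows at each step as a sparsity proxy."""
    d = S.shape[0]
    rng = np.random.default_rng(0)
    U, _ = np.linalg.qr(rng.standard_normal((d, k)))   # random orthonormal start
    for _ in range(n_iter):
        V = S @ U                                      # multiply
        row_norms = np.linalg.norm(V, axis=1)
        keep = np.argsort(row_norms)[-s:]              # truncate: keep s most energetic rows
        V_trunc = np.zeros_like(V)
        V_trunc[keep] = V[keep]
        U, _ = np.linalg.qr(V_trunc)                   # re-orthonormalize
    return U

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 50))
X[:, :5] += 3 * rng.standard_normal((500, 1))          # correlated, high-variance block on 5 coordinates
S = np.cov(X, rowvar=False)
U = truncated_orthogonal_iteration(S, k=2, s=5)
print(np.nonzero(np.linalg.norm(U, axis=1) > 1e-8)[0])  # recovered support
```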
[ "Zhaoran Wang, Huanran Lu, Han Liu", "['Zhaoran Wang' 'Huanran Lu' 'Han Liu']" ]
cs.LG cs.DB
10.1145/2661829.2662010
1408.5389
null
null
http://arxiv.org/abs/1408.5389v1
2014-08-22T19:12:19Z
2014-08-22T19:12:19Z
Computing Multi-Relational Sufficient Statistics for Large Databases
Databases contain information about which relationships do and do not hold among entities. To make this information accessible for statistical analysis requires computing sufficient statistics that combine information from different database tables. Such statistics may involve any number of {\em positive and negative} relationships. With a naive enumeration approach, computing sufficient statistics for negative relationships is feasible only for small databases. We solve this problem with a new dynamic programming algorithm that performs a virtual join, where the requisite counts are computed without materializing join tables. Contingency table algebra is a new extension of relational algebra that facilitates the efficient implementation of this M\"obius virtual join operation. The M\"obius Join scales to large datasets (over 1M tuples) with complex schemas. Empirical evaluation with seven benchmark datasets showed that information about the presence and absence of links can be exploited in feature selection, association rule mining, and Bayesian network learning.
[ "['Zhensong Qian' 'Oliver Schulte' 'Yan Sun']", "Zhensong Qian, Oliver Schulte and Yan Sun" ]
cs.CV cs.LG
null
1408.5400
null
null
http://arxiv.org/pdf/1408.5400v1
2014-08-22T19:56:14Z
2014-08-22T19:56:14Z
Hierarchical Adaptive Structural SVM for Domain Adaptation
A key topic in classification is the accuracy loss produced when the data distribution in the training (source) domain differs from that in the testing (target) domain. This is being recognized as a very relevant problem for many computer vision tasks such as image classification, object detection, and object category recognition. In this paper, we present a novel domain adaptation method that leverages multiple target domains (or sub-domains) in a hierarchical adaptation tree. The core idea is to exploit the commonalities and differences of the jointly considered target domains. Given the relevance of structural SVM (SSVM) classifiers, we apply our idea to the adaptive SSVM (A-SSVM), which only requires the target domain samples together with the existing source-domain classifier for performing the desired adaptation. Altogether, we term our proposal as hierarchical A-SSVM (HA-SSVM). As proof of concept we use HA-SSVM for pedestrian detection and object category recognition. In the former we apply HA-SSVM to the deformable part-based model (DPM) while in the latter HA-SSVM is applied to multi-category classifiers. In both cases, we show how HA-SSVM is effective in increasing the detection/recognition accuracy with respect to adaptation strategies that ignore the structure of the target data. Since the sub-domains of the target data are not always known a priori, we show how HA-SSVM can incorporate sub-domain structure discovery for object category recognition.
[ "['Jiaolong Xu' 'Sebastian Ramos' 'David Vazquez' 'Antonio M. Lopez']", "Jiaolong Xu, Sebastian Ramos, David Vazquez, Antonio M. Lopez" ]
stat.ML cs.CL cs.IR cs.LG
null
1408.5427
null
null
http://arxiv.org/pdf/1408.5427v1
2014-08-21T17:58:33Z
2014-08-21T17:58:33Z
A Case Study in Text Mining: Interpreting Twitter Data From World Cup Tweets
Cluster analysis is a field of data analysis that extracts underlying patterns in data. One application of cluster analysis is text mining, the analysis of large collections of text to find similarities between documents. We used a collection of about 30,000 tweets extracted from Twitter just before the World Cup started. A common problem with real-world text data is the presence of linguistic noise; in our case, this consists of extraneous tweets that are unrelated to the dominant themes. To combat this problem, we created an algorithm that combines the DBSCAN algorithm and a consensus matrix, leaving only the tweets that are related to those dominant themes. We then used cluster analysis to find the topics that the tweets describe. We clustered the tweets using k-means, a commonly used clustering algorithm, and Non-Negative Matrix Factorization (NMF) and compared the results. The two algorithms gave similar results, but NMF proved to be faster and provided more easily interpreted results. We explored our results using two visualization tools, Gephi and Wordle.
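A minimal scikit-learn sketch of the clustering comparison described above (k-means versus NMF on TF-IDF vectors) follows; the three-tweet corpus is a placeholder, and the DBSCAN/consensus-matrix denoising step is omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

# placeholder corpus standing in for the ~30,000 World Cup tweets
tweets = ["goal for brazil", "world cup opening ceremony", "buy cheap tickets now"]

X = TfidfVectorizer(stop_words="english").fit_transform(tweets)

k = 2
km_labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

nmf = NMF(n_components=k, init="nndsvd", random_state=0)
W = nmf.fit_transform(X)          # document-topic weights
nmf_labels = W.argmax(axis=1)     # assign each tweet to its dominant topic

print(km_labels, nmf_labels)
```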
[ "Daniel Godfrey, Caley Johns, Carl Meyer, Shaina Race, Carol Sadek", "['Daniel Godfrey' 'Caley Johns' 'Carl Meyer' 'Shaina Race' 'Carol Sadek']" ]
cs.LG stat.ML
null
1408.5449
null
null
http://arxiv.org/pdf/1408.5449v1
2014-08-23T01:23:23Z
2014-08-23T01:23:23Z
Stretchy Polynomial Regression
This article proposes a novel solution for stretchy polynomial regression learning. The solution comes in primal and dual closed forms similar to those of ridge regression. Essentially, the proposed solution stretches the covariance computation via a power term, thereby compressing or amplifying the estimation. Our experiments on both synthetic data and real-world data show the effectiveness of the proposed method for compressive learning.
[ "['Kar-Ann Toh']", "Kar-Ann Toh" ]
cs.LG stat.ML
null
1408.5456
null
null
http://arxiv.org/pdf/1408.5456v1
2014-08-23T05:06:55Z
2014-08-23T05:06:55Z
Interpreting Tree Ensembles with inTrees
Tree ensembles such as random forests and boosted trees are accurate but difficult to understand, debug and deploy. In this work, we provide the inTrees (interpretable trees) framework that extracts, measures, prunes and selects rules from a tree ensemble, and calculates frequent variable interactions. A rule-based learner, referred to as the simplified tree ensemble learner (STEL), can also be formed and used for future prediction. The inTrees framework can be applied to both classification and regression problems, and is applicable to many types of tree ensembles, e.g., random forests, regularized random forests, and boosted trees. We implemented the inTrees algorithms in the "inTrees" R package.
[ "Houtao Deng", "['Houtao Deng']" ]
stat.ML cs.LG
null
1408.5544
null
null
http://arxiv.org/pdf/1408.5544v2
2014-08-27T15:21:09Z
2014-08-24T02:53:57Z
To lie or not to lie in a subspace
We give deterministic necessary and sufficient conditions to guarantee that if a subspace fits certain partially observed data from a union of subspaces, it is because such data really lie in a subspace. Furthermore, we give deterministic necessary and sufficient conditions to guarantee that if a subspace fits certain partially observed data, that subspace is unique. We do this by characterizing when, and only when, a set of incomplete vectors behaves as a single but complete one.
[ "['Daniel L. Pimentel-Alarcón']", "Daniel L. Pimentel-Alarc\\'on" ]
cs.LG cs.CV
10.1109/TPAMI.2015.2404776
1408.5574
null
null
http://arxiv.org/abs/1408.5574v2
2015-02-08T23:52:38Z
2014-08-24T07:40:19Z
Supervised Hashing Using Graph Cuts and Boosted Decision Trees
Embedding image features into a binary Hamming space can improve both the speed and accuracy of large-scale query-by-example image retrieval systems. Supervised hashing aims to map the original features to compact binary codes in a manner which preserves the label-based similarities of the original data. Most existing approaches apply a single form of hash function, and an optimization process which is typically deeply coupled to this specific form. This tight coupling restricts the flexibility of those methods, and can result in complex optimization problems that are difficult to solve. In this work we proffer a flexible yet simple framework that is able to accommodate different types of loss functions and hash functions. The proposed framework allows a number of existing approaches to hashing to be placed in context, and simplifies the development of new problem-specific hashing methods. Our framework decomposes the hashing learning problem into two steps: binary code (hash bits) learning, and hash function learning. The first step can typically be formulated as a binary quadratic problem, and the second step can be accomplished by training standard binary classifiers. For solving large-scale binary code inference, we show how to ensure that the binary quadratic problems are submodular such that an efficient graph cut approach can be used. To achieve efficiency as well as efficacy on large-scale high-dimensional data, we propose to use boosted decision trees as the hash functions, which are nonlinear, highly descriptive, and very fast to train and evaluate. Experiments demonstrate that our proposed method significantly outperforms most state-of-the-art methods, especially on high-dimensional data.
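The sketch below illustrates only the second of the two steps: given binary codes (assumed to come from the graph-cut step, here replaced by random placeholders), fit one boosted-decision-tree classifier per hash bit as the hash function. It is a minimal illustration under those assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def fit_hash_functions(X, codes, n_estimators=50, max_depth=3):
    """Train one binary classifier per bit of the target codes (codes: n_samples x n_bits)."""
    return [
        GradientBoostingClassifier(n_estimators=n_estimators, max_depth=max_depth).fit(X, codes[:, b])
        for b in range(codes.shape[1])
    ]

def hash_codes(models, X):
    """Apply the learned hash functions to new data to obtain binary codes."""
    return np.column_stack([m.predict(X) for m in models]).astype(np.uint8)

# toy usage: random features and random target bits stand in for step-one output
rng = np.random.default_rng(0)
X_train = rng.standard_normal((200, 16))
codes = (rng.standard_normal((200, 8)) > 0).astype(int)
models = fit_hash_functions(X_train, codes)
print(hash_codes(models, X_train[:3]))
```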
[ "Guosheng Lin, Chunhua Shen, Anton van den Hengel", "['Guosheng Lin' 'Chunhua Shen' 'Anton van den Hengel']" ]
cs.CE cs.LG q-bio.QM stat.ML
null
1408.5634
null
null
http://arxiv.org/pdf/1408.5634v1
2014-08-24T20:41:51Z
2014-08-24T20:41:51Z
An application of topological graph clustering to protein function prediction
We use a semisupervised learning algorithm based on a topological data analysis approach to assign functional categories to yeast proteins using similarity graphs. This new approach to analyzing biological networks yields results that are as good as or better than state of the art existing approaches.
[ "R. Sean Bowman and Douglas Heisterkamp and Jesse Johnson and Danielle\n O'Donnol", "['R. Sean Bowman' 'Douglas Heisterkamp' 'Jesse Johnson'\n \"Danielle O'Donnol\"]" ]
stat.ML cs.LG
null
1408.5661
null
null
http://arxiv.org/pdf/1408.5661v3
2015-04-17T06:59:26Z
2014-08-25T04:44:53Z
Asymptotic Accuracy of Bayesian Estimation for a Single Latent Variable
In data science and machine learning, hierarchical parametric models, such as mixture models, are often used. They contain two kinds of variables: observable variables, which represent the parts of the data that can be directly measured, and latent variables, which represent the underlying processes that generate the data. Although there has been an increase in research on the estimation accuracy for observable variables, the theoretical analysis of estimating latent variables has not been thoroughly investigated. In a previous study, we determined the accuracy of a Bayes estimation for the joint probability of the latent variables in a dataset, and we proved that the Bayes method is asymptotically more accurate than the maximum-likelihood method. However, the accuracy of the Bayes estimation for a single latent variable remains unknown. In the present paper, we derive the asymptotic expansions of the error functions, which are defined by the Kullback-Leibler divergence, for two types of single-variable estimations when the statistical regularity is satisfied. Our results indicate that the accuracies of the Bayes and maximum-likelihood methods are asymptotically equivalent and clarify that the Bayes method is only advantageous for multivariable estimations.
[ "Keisuke Yamazaki", "['Keisuke Yamazaki']" ]
cs.LG
null
1408.5823
null
null
http://arxiv.org/pdf/1408.5823v5
2014-12-23T03:17:19Z
2014-08-25T16:24:43Z
Improved Distributed Principal Component Analysis
We study the distributed computing setting in which there are multiple servers, each holding a set of points, who wish to compute functions on the union of their point sets. A key task in this setting is Principal Component Analysis (PCA), in which the servers would like to compute a low dimensional subspace capturing as much of the variance of the union of their point sets as possible. Given a procedure for approximate PCA, one can use it to approximately solve $\ell_2$-error fitting problems such as $k$-means clustering and subspace clustering. The essential properties of an approximate distributed PCA algorithm are its communication cost and computational efficiency for a given desired accuracy in downstream applications. We give new algorithms and analyses for distributed PCA which lead to improved communication and computational costs for $k$-means clustering and related problems. Our empirical study on real world data shows a speedup of orders of magnitude, preserving communication with only a negligible degradation in solution quality. Some of these techniques we develop, such as a general transformation from a constant success probability subspace embedding to a high success probability subspace embedding with a dimension and sparsity independent of the success probability, may be of independent interest.
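A hedged numpy sketch of the basic "local sketch, then combine" pattern for distributed PCA follows: each server sends a truncated SVD summary of its points and the coordinator extracts a global subspace from the stacked summaries. This illustrates the setting only, not the paper's improved communication and computational guarantees; sizes and sketch dimensions are assumptions.

```python
import numpy as np

def local_sketch(X, t):
    """Each server summarizes its points by the top-t right singular directions, scaled by singular values."""
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    return np.diag(s[:t]) @ Vt[:t]                 # t x d summary sent to the coordinator

def distributed_pca(parts, t, k):
    """Coordinator stacks the per-server summaries and extracts a global k-dimensional subspace."""
    stacked = np.vstack([local_sketch(X, t) for X in parts])
    _, _, Vt = np.linalg.svd(stacked, full_matrices=False)
    return Vt[:k].T                                # d x k orthonormal basis

rng = np.random.default_rng(0)
parts = [rng.standard_normal((500, 30)) @ rng.standard_normal((30, 30)) for _ in range(4)]
V = distributed_pca(parts, t=10, k=5)
print(V.shape)
```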
[ "Maria-Florina Balcan, Vandana Kanchanapally, Yingyu Liang, David\n Woodruff", "['Maria-Florina Balcan' 'Vandana Kanchanapally' 'Yingyu Liang'\n 'David Woodruff']" ]
cs.DC cs.LG cs.SY math.OC
null
1408.5845
null
null
http://arxiv.org/pdf/1408.5845v2
2014-12-05T00:40:57Z
2014-08-25T17:42:41Z
Analysis of a Reduced-Communication Diffusion LMS Algorithm
In diffusion-based algorithms for adaptive distributed estimation, each node of an adaptive network estimates a target parameter vector by creating an intermediate estimate and then combining the intermediate estimates available within its closed neighborhood. We analyze the performance of a reduced-communication diffusion least mean-square (RC-DLMS) algorithm, which allows each node to receive the intermediate estimates of only a subset of its neighbors at each iteration. This algorithm eases the usage of network communication resources and delivers a trade-off between estimation performance and communication cost. We show analytically that the RC-DLMS algorithm is stable and convergent in both mean and mean-square senses. We also calculate its theoretical steady-state mean-square deviation. Simulation results demonstrate a good match between theory and experiment.
[ "Reza Arablouei, Stefan Werner, Kutluy{\\i}l Do\\u{g}an\\c{c}ay, and\n Yih-Fang Huang", "['Reza Arablouei' 'Stefan Werner' 'Kutluyıl Doğançay' 'Yih-Fang Huang']" ]
cs.LG
null
1408.6027
null
null
http://arxiv.org/pdf/1408.6027v2
2016-04-05T09:47:09Z
2014-08-26T06:48:58Z
Label Distribution Learning
Although multi-label learning can deal with many problems with label ambiguity, it does not fit some real applications well where the overall distribution of the importance of the labels matters. This paper proposes a novel learning paradigm named \emph{label distribution learning} (LDL) for such kind of applications. The label distribution covers a certain number of labels, representing the degree to which each label describes the instance. LDL is a more general learning framework which includes both single-label and multi-label learning as its special cases. This paper proposes six working LDL algorithms in three ways: problem transformation, algorithm adaptation, and specialized algorithm design. In order to compare the performance of the LDL algorithms, six representative and diverse evaluation measures are selected via a clustering analysis, and the first batch of label distribution datasets are collected and made publicly available. Experimental results on one artificial and fifteen real-world datasets show clear advantages of the specialized algorithms, which indicates the importance of special design for the characteristics of the LDL problem.
[ "['Xin Geng']", "Xin Geng" ]
stat.ML cs.LG q-bio.QM
null
1408.6032
null
null
null
null
null
PMCE: efficient inference of expressive models of cancer evolution with high prognostic power
Motivation: Driver (epi)genomic alterations underlie the positive selection of cancer subpopulations, which promotes drug resistance and relapse. Even though substantial heterogeneity is witnessed in most cancer types, mutation accumulation patterns can be regularly found and can be exploited to reconstruct predictive models of cancer evolution. Yet, available methods cannot infer logical formulas connecting events to represent alternative evolutionary routes or convergent evolution. Results: We introduce PMCE, an expressive framework that leverages mutational profiles from cross-sectional sequencing data to infer probabilistic graphical models of cancer evolution including arbitrary logical formulas, and which outperforms the state-of-the-art in terms of accuracy and robustness to noise, on simulations. The application of PMCE to 7866 samples from the TCGA database allows us to identify a highly significant correlation between the predicted evolutionary paths and the overall survival in 7 tumor types, proving that our approach can effectively stratify cancer patients in reliable risk groups. Availability: PMCE is freely available at https://github.com/BIMIB-DISCo/PMCE, in addition to the code to replicate all the analyses presented in the manuscript. Contacts: [email protected], [email protected].
[ "Fabrizio Angaroni, Kevin Chen, Chiara Damiani, Giulio Caravagna, Alex\n Graudenzi, Daniele Ramazzotti" ]
cs.SY cs.LG
10.1109/TSP.2015.2405492
1408.6141
null
null
http://arxiv.org/abs/1408.6141v2
2014-11-11T00:44:23Z
2014-08-25T17:40:44Z
Recursive Total Least-Squares Algorithm Based on Inverse Power Method and Dichotomous Coordinate-Descent Iterations
We develop a recursive total least-squares (RTLS) algorithm for errors-in-variables system identification utilizing the inverse power method and the dichotomous coordinate-descent (DCD) iterations. The proposed algorithm, called DCD-RTLS, outperforms the previously-proposed RTLS algorithms, which are based on the line-search method, with reduced computational complexity. We perform a comprehensive analysis of the DCD-RTLS algorithm and show that it is asymptotically unbiased as well as being stable in the mean. We also find a lower bound for the forgetting factor that ensures mean-square stability of the algorithm and calculate the theoretical steady-state mean-square deviation (MSD). We verify the effectiveness of the proposed algorithm and the accuracy of the predicted steady-state MSD via simulations.
[ "Reza Arablouei, Kutluy{\\i}l Do\\u{g}an\\c{c}ay, and Stefan Werner", "['Reza Arablouei' 'Kutluyıl Doğançay' 'Stefan Werner']" ]
stat.ML cs.LG
10.1007/978-3-319-08976-8_11
1408.6214
null
null
http://arxiv.org/abs/1408.6214v1
2014-08-26T19:15:21Z
2014-08-26T19:15:21Z
A Methodology for the Diagnostic of Aircraft Engine Based on Indicators Aggregation
Aircraft engine manufacturers collect large amounts of engine-related data during flights. These data are used to detect anomalies in the engines in order to help companies optimize their maintenance costs. This article introduces and studies a generic methodology that allows one to build automatic early signs of anomaly detection in a way that is understandable by the human operators who make the final maintenance decision. The main idea of the method is to generate a very large number of binary indicators based on parametric anomaly scores designed by experts, complemented by simple aggregations of those scores. The best indicators are selected via a classical forward scheme, leading to a much reduced number of indicators that are tuned to a data set. We illustrate the interest of the method on simulated data which contain realistic early signs of anomalies.
[ "Tsirizo Rabenoro (SAMM), J\\'er\\^ome Lacaille, Marie Cottrell (SAMM),\n Fabrice Rossi (SAMM)", "['Tsirizo Rabenoro' 'Jérôme Lacaille' 'Marie Cottrell' 'Fabrice Rossi']" ]
cs.LG
null
1408.6515
null
null
http://arxiv.org/pdf/1408.6515v3
2015-03-04T15:11:18Z
2014-08-27T06:32:21Z
Large Scale Purchase Prediction with Historical User Actions on B2C Online Retail Platform
This paper describes the solution of the Bazinga Team for the Tmall Recommendation Prize 2014. With real-world user action data provided by Tmall, one of the largest B2C online retail platforms in China, this competition requires predicting future user purchases on the Tmall website. Predictions are judged on F1Score, which considers both precision and recall for fair evaluation. The data set provided by Tmall contains more than half a billion action records from over ten million distinct users. Such massive data volume poses a big challenge and drives competitors to write every single program in MapReduce fashion and run it on a distributed cluster. We model the purchase prediction problem as a standard machine learning problem, and mainly employ regression and classification methods as single models. Individual models are then aggregated in a two-stage approach, using linear regression for blending, and finally a linear ensemble of blended models. The competition is approaching its end but was still running at the time of writing. So far, our team achieves an F1Score of 6.11 and ranks 7th (out of 7,276 teams in total).
[ "['Yuyu Zhang' 'Liang Pang' 'Lei Shi' 'Bin Wang']", "Yuyu Zhang, Liang Pang, Lei Shi and Bin Wang" ]
cs.LG
null
1408.6617
null
null
http://arxiv.org/pdf/1408.6617v1
2014-08-28T03:27:27Z
2014-08-28T03:27:27Z
Task-group Relatedness and Generalization Bounds for Regularized Multi-task Learning
In this paper, we study the generalization performance of regularized multi-task learning (RMTL) in a vector-valued framework, where MTL is considered as a learning process for vector-valued functions. We are mainly concerned with two theoretical questions: 1) under what conditions does RMTL perform better with a smaller task sample size than STL? 2) under what conditions is RMTL generalizable and can guarantee the consistency of each task during simultaneous learning? In particular, we investigate two types of task-group relatedness: the observed discrepancy-dependence measure (ODDM) and the empirical discrepancy-dependence measure (EDDM), both of which detect the dependence between two groups of multiple related tasks (MRTs). We then introduce the Cartesian product-based uniform entropy number (CPUEN) to measure the complexities of vector-valued function classes. By applying the specific deviation and the symmetrization inequalities to the vector-valued framework, we obtain the generalization bound for RMTL, which is the upper bound of the joint probability of the event that there is at least one task with a large empirical discrepancy between the expected and empirical risks. Finally, we present a sufficient condition to guarantee the consistency of each task in the simultaneous learning process, and we discuss how task relatedness affects the generalization performance of RMTL. Our theoretical findings answer the aforementioned two questions.
[ "Chao Zhang, Dacheng Tao, Tao Hu, Xiang Li", "['Chao Zhang' 'Dacheng Tao' 'Tao Hu' 'Xiang Li']" ]
cs.LG math.ST stat.ML stat.TH
null
1408.6618
null
null
http://arxiv.org/pdf/1408.6618v1
2014-08-28T03:29:06Z
2014-08-28T03:29:06Z
Falsifiable implies Learnable
The paper demonstrates that falsifiability is fundamental to learning. We prove the following theorem for statistical learning and sequential prediction: If a theory is falsifiable then it is learnable -- i.e. admits a strategy that predicts optimally. An analogous result is shown for universal induction.
[ "['David Balduzzi']", "David Balduzzi" ]
null
null
1408.6686
null
null
http://arxiv.org/abs/1408.6686v2
2014-11-18T07:07:13Z
2014-08-28T11:22:08Z
Sparse Generalized Eigenvalue Problem via Smooth Optimization
In this paper, we consider an $\ell_0$-norm penalized formulation of the generalized eigenvalue problem (GEP), aimed at extracting the leading sparse generalized eigenvector of a matrix pair. The formulation involves maximization of a discontinuous nonconcave objective function over a nonconvex constraint set, and is therefore computationally intractable. To tackle the problem, we first approximate the $\ell_0$-norm by a continuous surrogate function. Then an algorithm is developed via iteratively majorizing the surrogate function by a quadratic separable function, which at each iteration reduces to a regular generalized eigenvalue problem. A preconditioned steepest ascent algorithm for finding the leading generalized eigenvector is provided. A systematic way based on smoothing is proposed to deal with the "singularity issue" that arises when a quadratic function is used to majorize the nondifferentiable surrogate function. For sparse GEPs with special structure, algorithms that admit a closed-form solution at every iteration are derived. Numerical experiments show that the proposed algorithms match or outperform existing algorithms in terms of computational complexity and support recovery.
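The following numpy/scipy sketch illustrates the overall iteration pattern: replace the $\ell_0$ penalty with a smooth reweighting term and solve a regular generalized eigenvalue problem at each step. The particular surrogate weights, penalty strength, and final thresholding are assumptions made for illustration, not the authors' derivation.

```python
import numpy as np
from scipy.linalg import eigh

def leading_gev(A, B):
    """Leading generalized eigenvector of the pair (A, B); eigh returns eigenvalues in ascending order."""
    _, vecs = eigh(A, B)
    return vecs[:, -1]

def sparse_gev(A, B, rho=0.5, eps=1e-3, n_iter=50):
    """Iteratively reweighted generalized eigenvalue iterations: each step penalizes small entries
    (a smooth stand-in for the l0 penalty) and solves an ordinary generalized eigenproblem."""
    x = leading_gev(A, B)
    for _ in range(n_iter):
        w = 1.0 / (np.abs(x) + eps)            # weights from the surrogate's derivative
        x = leading_gev(A - rho * np.diag(w), B)
    x[np.abs(x) < 1e-6] = 0.0                  # zero out numerically negligible entries
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((30, 30))
A = (M + M.T) / 2                              # symmetric objective matrix
N = rng.standard_normal((30, 30))
B = N @ N.T + 30 * np.eye(30)                  # symmetric positive definite constraint matrix
x = sparse_gev(A, B)
print(np.count_nonzero(x), "nonzero entries")
```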
[ "['Junxiao Song' 'Prabhu Babu' 'Daniel P. Palomar']" ]
cs.CL cs.LG
10.1109/MIPRO.2014.6859744
1408.6746
null
null
http://arxiv.org/abs/1408.6746v2
2014-11-16T21:33:22Z
2014-08-28T15:06:50Z
Non-Standard Words as Features for Text Categorization
This paper presents categorization of Croatian texts using Non-Standard Words (NSW) as features. Non-Standard Words are: numbers, dates, acronyms, abbreviations, currency, etc. NSWs in Croatian language are determined according to Croatian NSW taxonomy. For the purpose of this research, 390 text documents were collected and formed the SKIPEZ collection with 6 classes: official, literary, informative, popular, educational and scientific. Text categorization experiment was conducted on three different representations of the SKIPEZ collection: in the first representation, the frequencies of NSWs are used as features; in the second representation, the statistic measures of NSWs (variance, coefficient of variation, standard deviation, etc.) are used as features; while the third representation combines the first two feature sets. Naive Bayes, CN2, C4.5, kNN, Classification Trees and Random Forest algorithms were used in text categorization experiments. The best categorization results are achieved using the first feature set (NSW frequencies) with the categorization accuracy of 87%. This suggests that the NSWs should be considered as features in highly inflectional languages, such as Croatian. NSW based features reduce the dimensionality of the feature space without standard lemmatization procedures, and therefore the bag-of-NSWs should be considered for further Croatian texts categorization experiments.
[ "Slobodan Beliga, Sanda Martin\\v{c}i\\'c-Ip\\v{s}i\\'c", "['Slobodan Beliga' 'Sanda Martinčić-Ipšić']" ]
cs.LG
null
1408.6804
null
null
http://arxiv.org/pdf/1408.6804v2
2014-11-18T14:43:43Z
2014-08-28T18:38:24Z
A Multi-Plane Block-Coordinate Frank-Wolfe Algorithm for Training Structural SVMs with a Costly max-Oracle
Structural support vector machines (SSVMs) are amongst the best performing models for structured computer vision tasks, such as semantic image segmentation or human pose estimation. Training SSVMs, however, is computationally costly, because it requires repeated calls to a structured prediction subroutine (called \emph{max-oracle}), which has to solve an optimization problem itself, e.g. a graph cut. In this work, we introduce a new algorithm for SSVM training that is more efficient than earlier techniques when the max-oracle is computationally expensive, as it is frequently the case in computer vision tasks. The main idea is to (i) combine the recent stochastic Block-Coordinate Frank-Wolfe algorithm with efficient hyperplane caching, and (ii) use an automatic selection rule for deciding whether to call the exact max-oracle or to rely on an approximate one based on the cached hyperplanes. We show experimentally that this strategy leads to faster convergence to the optimum with respect to the number of required oracle calls, and that this translates into faster convergence with respect to the total runtime when the max-oracle is slow compared to the other steps of the algorithm. A publicly available C++ implementation is provided at http://pub.ist.ac.at/~vnk/papers/SVM.html .
[ "['Neel Shah' 'Vladimir Kolmogorov' 'Christoph H. Lampert']", "Neel Shah, Vladimir Kolmogorov, Christoph H. Lampert" ]
cs.LG cs.HC stat.ML
null
1409.0107
null
null
http://arxiv.org/pdf/1409.0107v1
2014-08-30T12:17:15Z
2014-08-30T12:17:15Z
A Plug&Play P300 BCI Using Information Geometry
This paper presents a new classification method for Event Related Potentials (ERPs) based on an information geometry framework. Through a new estimation of covariance matrices, this work extends the use of Riemannian geometry, which was previously limited to SMR-based BCIs, to the problem of classification of ERPs. As compared to the state-of-the-art, this new method increases performance, reduces the amount of data needed for calibration, and features good generalisation across sessions and subjects. The method is illustrated on data recorded with the P300-based game Brain Invaders. Finally, an online and adaptive implementation is described, where the BCI is initialized with generic parameters derived from a database and continuously adapts to the individual, allowing the user to play the game without any calibration while keeping a high accuracy.
[ "Alexandre Barachant and Marco Congedo", "['Alexandre Barachant' 'Marco Congedo']" ]
cs.SD cs.LG
null
1409.0203
null
null
http://arxiv.org/pdf/1409.0203v1
2014-08-31T10:33:45Z
2014-08-31T10:33:45Z
Ad Hoc Microphone Array Calibration: Euclidean Distance Matrix Completion Algorithm and Theoretical Guarantees
This paper addresses the problem of ad hoc microphone array calibration where only partial information about the distances between microphones is available. We construct a matrix consisting of the pairwise distances and propose to estimate the missing entries based on a novel Euclidean distance matrix completion algorithm by alternative low-rank matrix completion and projection onto the Euclidean distance space. This approach confines the recovered matrix to the EDM cone at each iteration of the matrix completion algorithm. The theoretical guarantees of the calibration performance are obtained considering the random and locally structured missing entries as well as the measurement noise on the known distances. This study elucidates the links between the calibration error and the number of microphones along with the noise level and the ratio of missing distances. Thorough experiments on real data recordings and simulated setups are conducted to demonstrate these theoretical insights. A significant improvement is achieved by the proposed Euclidean distance matrix completion algorithm over the state-of-the-art techniques for ad hoc microphone array calibration.
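Below is a hedged numpy sketch of an alternating scheme in the spirit of the described algorithm: project the current squared-distance matrix onto low-rank Euclidean distance matrices via the classical MDS construction, then restore the observed pairwise distances. It is illustrative only (the initialization, projection, and stopping rule are assumptions) and carries none of the paper's theoretical guarantees.

```python
import numpy as np

def edm_complete(D_obs, mask, dim=3, n_iter=200):
    """Complete a partially observed squared-distance matrix by alternating between
    (i) projecting onto EDMs generated by `dim`-dimensional points and
    (ii) re-imposing the observed entries."""
    n = D_obs.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n                 # centering matrix
    D = np.where(mask, D_obs, D_obs[mask].mean())       # crude fill of the missing entries
    for _ in range(n_iter):
        G = -0.5 * J @ D @ J                            # Gram matrix from squared distances
        vals, vecs = np.linalg.eigh(G)
        vals = np.clip(vals, 0, None)
        vals[:-dim] = 0.0                               # keep the top `dim` nonnegative eigenvalues
        X = vecs @ np.diag(np.sqrt(vals))               # embedded points
        sq = np.sum(X ** 2, axis=1)
        D = sq[:, None] + sq[None, :] - 2 * X @ X.T     # distances induced by the embedding
        D[mask] = D_obs[mask]                           # restore the known entries
    return D

rng = np.random.default_rng(0)
pts = rng.standard_normal((12, 3))
sq = np.sum(pts ** 2, axis=1)
D_true = sq[:, None] + sq[None, :] - 2 * pts @ pts.T
mask = rng.random((12, 12)) < 0.6
mask = mask | mask.T                                    # keep the observation pattern symmetric
np.fill_diagonal(mask, True)
D_hat = edm_complete(np.where(mask, D_true, 0.0), mask, dim=3)
print(np.max(np.abs(D_hat - D_true)))
```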
[ "Mohammad J. Taghizadeh, Reza Parhizkar, Philip N. Garner, Herve\n Bourlard, Afsaneh Asaei", "['Mohammad J. Taghizadeh' 'Reza Parhizkar' 'Philip N. Garner'\n 'Herve Bourlard' 'Afsaneh Asaei']" ]
cs.LG stat.ML
10.1145/2661829.2662091
1409.0272
null
null
http://arxiv.org/abs/1409.0272v2
2014-09-02T00:33:35Z
2014-09-01T00:33:38Z
Multi-task Sparse Structure Learning
Multi-task learning (MTL) aims to improve generalization performance by learning multiple related tasks simultaneously. While sometimes the underlying task relationship structure is known, often the structure needs to be estimated from data at hand. In this paper, we present a novel family of models for MTL, applicable to regression and classification problems, capable of learning the structure of task relationships. In particular, we consider a joint estimation problem of the task relationship structure and the individual task parameters, which is solved using alternating minimization. The task relationship structure learning component builds on recent advances in structure learning of Gaussian graphical models based on sparse estimators of the precision (inverse covariance) matrix. We illustrate the effectiveness of the proposed model on a variety of synthetic and benchmark datasets for regression and classification. We also consider the problem of combining climate model outputs for better projections of future climate, with focus on temperature in South America, and show that the proposed model outperforms several existing methods for the problem.
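The sketch below shows only one flavor of the structure-learning component: fit per-task ridge weights, then estimate a sparse precision matrix over the tasks with the graphical lasso. The alternating re-estimation of task parameters under that structure is omitted, the data are synthetic placeholders, and the exact objective differs from the authors' joint formulation.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.covariance import GraphicalLasso

def task_structure(tasks, alpha=0.1):
    """tasks: list of (X_t, y_t) pairs. Fit per-task ridge weights, then estimate a sparse
    task-precision matrix with the graphical lasso (feature dimensions act as samples over tasks)."""
    W = np.column_stack([Ridge(alpha=1.0).fit(X, y).coef_ for X, y in tasks])  # d x T weight matrix
    gl = GraphicalLasso(alpha=alpha).fit(W)       # rows = feature dimensions, columns = tasks
    return W, gl.precision_                       # T x T sparse task-relationship structure

rng = np.random.default_rng(0)
d, T, n = 20, 4, 100
base = rng.standard_normal(d)                     # shared component making the tasks related
tasks = []
for _ in range(T):
    X = rng.standard_normal((n, d))
    w_t = base + 0.5 * rng.standard_normal(d)
    tasks.append((X, X @ w_t + 0.1 * rng.standard_normal(n)))

W, Omega = task_structure(tasks)
print(np.round(Omega, 2))
```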
[ "Andre R. Goncalves, Puja Das, Soumyadeep Chatterjee, Vidyashankar\n Sivakumar, Fernando J. Von Zuben, Arindam Banerjee", "['Andre R. Goncalves' 'Puja Das' 'Soumyadeep Chatterjee'\n 'Vidyashankar Sivakumar' 'Fernando J. Von Zuben' 'Arindam Banerjee']" ]
cs.CL cs.LG cs.NE stat.ML
null
1409.0473
null
null
http://arxiv.org/pdf/1409.0473v7
2016-05-19T21:53:22Z
2014-09-01T16:33:02Z
Neural Machine Translation by Jointly Learning to Align and Translate
Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consist of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.
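A small numpy sketch of the additive soft-alignment (attention) step described above follows: score each encoder annotation against the previous decoder state, take a softmax over source positions, and form a context vector as the weighted sum of annotations. All dimensions and weight matrices are toy placeholders, not trained model parameters.

```python
import numpy as np

def additive_attention(s_prev, H, W_s, W_h, v):
    """Soft alignment over encoder annotations H (T x d_h) given decoder state s_prev (d_s,).
    Returns attention weights (T,) and the context vector (d_h,)."""
    scores = np.tanh(H @ W_h.T + s_prev @ W_s.T) @ v   # e_j = v^T tanh(W_s s + W_h h_j)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                               # softmax over source positions
    context = alpha @ H                                # weighted sum of annotations
    return alpha, context

rng = np.random.default_rng(0)
T, d_h, d_s, d_a = 6, 8, 10, 12                        # toy dimensions
H = rng.standard_normal((T, d_h))                      # encoder annotations
s_prev = rng.standard_normal(d_s)                      # previous decoder state
W_h = rng.standard_normal((d_a, d_h))
W_s = rng.standard_normal((d_a, d_s))
v = rng.standard_normal(d_a)
alpha, context = additive_attention(s_prev, H, W_s, W_h, v)
print(alpha.round(3), context.shape)
```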
[ "['Dzmitry Bahdanau' 'Kyunghyun Cho' 'Yoshua Bengio']", "Dzmitry Bahdanau and Kyunghyun Cho and Yoshua Bengio" ]
cs.SY cs.LG
null
1409.0553
null
null
http://arxiv.org/pdf/1409.0553v2
2015-09-09T20:20:13Z
2014-09-01T20:03:48Z
Sampling-based Approximations with Quantitative Performance for the Probabilistic Reach-Avoid Problem over General Markov Processes
This article deals with stochastic processes endowed with the Markov (memoryless) property and evolving over general (uncountable) state spaces. The models further depend on a non-deterministic quantity in the form of a control input, which can be selected to affect the probabilistic dynamics. We address the computation of maximal reach-avoid specifications, together with the synthesis of the corresponding optimal controllers. The reach-avoid specification deals with assessing the likelihood that any finite-horizon trajectory of the model enters a given goal set, while avoiding a given set of undesired states. This article newly provides an approximate computational scheme for the reach-avoid specification based on the Fitted Value Iteration algorithm, which hinges on random sample extractions, and gives a-priori computable formal probabilistic bounds on the error made by the approximation algorithm: as such, the output of the numerical scheme is quantitatively assessed and thus meaningful for safety-critical applications. Furthermore, we provide tighter probabilistic error bounds that are sample-based. The overall computational scheme is put in relationship with alternative approximation algorithms in the literature, and finally its performance is practically assessed over a benchmark case study.
[ "['Sofie Haesaert' 'Robert Babuska' 'Alessandro Abate']", "Sofie Haesaert and Robert Babuska and Alessandro Abate" ]
stat.ML cs.LG
null
1409.0585
null
null
http://arxiv.org/pdf/1409.0585v1
2014-09-02T01:22:42Z
2014-09-02T01:22:42Z
On the Equivalence Between Deep NADE and Generative Stochastic Networks
Neural Autoregressive Distribution Estimators (NADEs) have recently been shown as successful alternatives for modeling high dimensional multimodal distributions. One issue associated with NADEs is that they rely on a particular order of factorization for $P(\mathbf{x})$. This issue has been recently addressed by a variant of NADE called Orderless NADEs and its deeper version, Deep Orderless NADE. Orderless NADEs are trained based on a criterion that stochastically maximizes $P(\mathbf{x})$ with all possible orders of factorizations. Unfortunately, ancestral sampling from deep NADE is very expensive, corresponding to running through a neural net separately predicting each of the visible variables given some others. This work makes a connection between this criterion and the training criterion for Generative Stochastic Networks (GSNs). It shows that training NADEs in this way also trains a GSN, which defines a Markov chain associated with the NADE model. Based on this connection, we show an alternative way to sample from a trained Orderless NADE that allows to trade-off computing time and quality of the samples: a 3 to 10-fold speedup (taking into account the waste due to correlations between consecutive samples of the chain) can be obtained without noticeably reducing the quality of the samples. This is achieved using a novel sampling procedure for GSNs called annealed GSN sampling, similar to tempering methods that combines fast mixing (obtained thanks to steps at high noise levels) with accurate samples (obtained thanks to steps at low noise levels).
[ "Li Yao and Sherjil Ozair and Kyunghyun Cho and Yoshua Bengio", "['Li Yao' 'Sherjil Ozair' 'Kyunghyun Cho' 'Yoshua Bengio']" ]
stat.ML cs.LG
null
1409.0745
null
null
http://arxiv.org/pdf/1409.0745v2
2017-07-03T19:06:40Z
2014-09-02T15:10:33Z
Multi-rank Sparse Hierarchical Clustering
There has been a surge in the number of large and flat data sets - data sets containing a large number of features and a relatively small number of observations - due to the growing ability to collect and store information in medical research and other fields. Hierarchical clustering is a widely used clustering tool. In hierarchical clustering, large and flat data sets may allow for a better coverage of clustering features (features that help explain the true underlying clusters), but such data sets usually include a large fraction of noise features (non-clustering features) that may hide the underlying clusters. Witten and Tibshirani (2010) proposed a sparse hierarchical clustering framework to cluster the observations using an adaptively chosen subset of the features; however, we show that this framework has some limitations when the data sets contain clustering features with complex structure. In this paper, we propose Multi-rank sparse hierarchical clustering (MrSHC). Using simulation studies and real data examples, we show that MrSHC produces superior feature selection and clustering performance compared to classical (off-the-shelf) hierarchical clustering and the existing sparse hierarchical clustering framework.
[ "['Hongyang Zhang' 'Ruben H. Zamar']", "Hongyang Zhang and Ruben H. Zamar" ]
cs.LG cs.CE
null
1409.0748
null
null
http://arxiv.org/pdf/1409.0748v1
2014-09-02T15:16:26Z
2014-09-02T15:16:26Z
Comparison of algorithms that detect drug side effects using electronic healthcare databases
The electronic healthcare databases are starting to become more readily available and are thought to have excellent potential for generating adverse drug reaction signals. The Health Improvement Network (THIN) database is an electronic healthcare database containing medical information on over 11 million patients that has excellent potential for detecting ADRs. In this paper we apply four existing electronic healthcare database signal detecting algorithms (MUTARA, HUNT, Temporal Pattern Discovery and modified ROR) on the THIN database for a selection of drugs from six chosen drug families. This is the first comparison of ADR signalling algorithms that includes MUTARA and HUNT and enabled us to set a benchmark for the adverse drug reaction signalling ability of the THIN database. The drugs were selectively chosen to enable a comparison with previous work and for variety. It was found that no algorithm was generally superior and the algorithms' natural thresholds act at variable stringencies. Furthermore, none of the algorithms perform well at detecting rare ADRs.
[ "Jenna Reps, Jonathan M. Garibaldi, Uwe Aickelin, Daniele Soria, Jack\n Gibson, Richard Hubbard", "['Jenna Reps' 'Jonathan M. Garibaldi' 'Uwe Aickelin' 'Daniele Soria'\n 'Jack Gibson' 'Richard Hubbard']" ]
cs.LG
10.1080/0952813X.2014.886301
1409.0763
null
null
http://arxiv.org/abs/1409.0763v1
2014-09-02T15:49:40Z
2014-09-02T15:49:40Z
Data classification using the Dempster-Shafer method
In this paper, the Dempster-Shafer method is employed as the theoretical basis for creating data classification systems. Testing is carried out using three popular (multiple attribute) benchmark datasets that have two, three and four classes. In each case, a subset of the available data is used for training to establish thresholds, limits or likelihoods of class membership for each attribute, and hence create mass functions that establish probability of class membership for each attribute of the test data. Classification of each data item is achieved by combination of these probabilities via Dempster's Rule of Combination. Results for the first two datasets show extremely high classification accuracy that is competitive with other popular methods. The third dataset is non-numerical and difficult to classify, but good results can be achieved provided the system and mass functions are designed carefully and the right attributes are chosen for combination. In all cases the Dempster-Shafer method provides comparable performance to other more popular algorithms, but the overhead of generating accurate mass functions increases the complexity with the addition of new attributes. Overall, the results suggest that the D-S approach provides a suitable framework for the design of classification systems and that automating the mass function design and calculation would increase the viability of the algorithm for complex classification problems.
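A minimal Python sketch of Dempster's Rule of Combination, the combination step described above, is given below. The design of the per-attribute mass functions (thresholds, limits, likelihoods), which the paper emphasizes as the critical part, is not shown; class names and masses are illustrative.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset-of-classes -> mass) via Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y                          # mass that falls on the empty set
    return {s: v / (1.0 - conflict) for s, v in combined.items()}  # renormalize by 1 - conflict

# masses from two attributes for classes {A, B}; the universal set {A, B} models ignorance
m_attr1 = {frozenset({"A"}): 0.6, frozenset({"A", "B"}): 0.4}
m_attr2 = {frozenset({"A"}): 0.3, frozenset({"B"}): 0.5, frozenset({"A", "B"}): 0.2}
print(dempster_combine(m_attr1, m_attr2))
```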
[ "['Qi Chen' 'Amanda Whitbrook' 'Uwe Aickelin' 'Chris Roadknight']", "Qi Chen, Amanda Whitbrook, Uwe Aickelin and Chris Roadknight" ]
cs.LG cs.CE
10.2139/ssrn.2823251
1409.0768
null
null
http://arxiv.org/abs/1409.0768v1
2014-09-02T16:00:23Z
2014-09-02T16:00:23Z
A Novel Semi-Supervised Algorithm for Rare Prescription Side Effect Discovery
Drugs are frequently prescribed to patients with the aim of improving each patient's medical state, but an unfortunate consequence of most prescription drugs is the occurrence of undesirable side effects. Side effects that occur in more than one in a thousand patients are likely to be signalled efficiently by current drug surveillance methods; however, these same methods may take decades before generating signals for rarer side effects, risking medical morbidity or mortality in patients prescribed the drug while the rare side effect is undiscovered. In this paper we propose a novel computational meta-analysis framework for signalling rare side effects that integrates existing methods, knowledge from the web, metric learning and semi-supervised clustering. The novel framework was able to signal many known rare and serious side effects for the selection of drugs investigated, such as tendon rupture when prescribed Ciprofloxacin or Levofloxacin, renal failure with Naproxen and depression associated with Rimonabant. Furthermore, for the majority of the drugs investigated it generated signals for rare side effects at a more stringent signalling threshold than existing methods and shows the potential to become a fundamental part of post marketing surveillance to detect rare side effects.
[ "['Jenna Reps' 'Jonathan M. Garibaldi' 'Uwe Aickelin' 'Daniele Soria'\n 'Jack E. Gibson' 'Richard B. Hubbard']", "Jenna Reps, Jonathan M. Garibaldi, Uwe Aickelin, Daniele Soria, Jack\n E. Gibson, Richard B. Hubbard" ]
cs.LG cs.CE
null
1409.0772
null
null
http://arxiv.org/pdf/1409.0772v1
2014-09-02T16:17:25Z
2014-09-02T16:17:25Z
Signalling Paediatric Side Effects using an Ensemble of Simple Study Designs
Background: Children are frequently prescribed medication off-label, meaning there has not been sufficient testing of the medication to determine its safety or effectiveness. The main reason this safety knowledge is lacking is the ethical restrictions that prevent children from being included in the majority of clinical trials. Objective: The objective of this paper is to investigate whether an ensemble of simple study designs can be implemented to signal acutely occurring side effects effectively within the paediatric population by using historical longitudinal data. The majority of pharmacovigilance techniques are unsupervised, but this research presents a supervised framework. Methods: Multiple measures of association are calculated for each drug and medical event pair and these are used as features that are fed into a classifier to determine the likelihood of the drug and medical event pair corresponding to an adverse drug reaction. The classifier is trained using known adverse drug reactions or known non-adverse drug reaction relationships. Results: The novel ensemble framework obtained a false positive rate of 0.149, a sensitivity of 0.547 and a specificity of 0.851 when implemented on a reference set of drug and medical event pairs. The novel framework consistently outperformed each individual simple study design. Conclusion: This research shows that it is possible to exploit the mechanism of causality and presents a framework for signalling adverse drug reactions effectively.
[ "Jenna M. Reps, Jonathan M. Garibaldi, Uwe Aickelin, Daniele Soria,\n Jack E. Gibson, Richard B. Hubbard", "['Jenna M. Reps' 'Jonathan M. Garibaldi' 'Uwe Aickelin' 'Daniele Soria'\n 'Jack E. Gibson' 'Richard B. Hubbard']" ]
cs.LG cs.CE
null
1409.0775
null
null
http://arxiv.org/pdf/1409.0775v1
2014-09-02T16:25:58Z
2014-09-02T16:25:58Z
Feature selection in detection of adverse drug reactions from the Health Improvement Network (THIN) database
Adverse drug reactions (ADRs) are a widely recognized public health issue and are one of the most common reasons for withdrawing drugs from the market. Prescription event monitoring (PEM) is an important approach to detecting adverse drug reactions. The main problem with this method is how to automatically extract the medical events or side effects from high-throughput medical events, which are collected from day-to-day clinical practice. In this study we propose a novel concept of a feature matrix to detect ADRs. The feature matrix, which is extracted from big medical data in The Health Improvement Network (THIN) database, is created to characterize the medical events for patients who take drugs. The feature matrix builds the foundation for handling this irregular and big medical data. Feature selection methods are then performed on the feature matrix to detect the significant features, and the ADRs are finally located based on those significant features. The experiments are carried out on three drugs: Atorvastatin, Alendronate, and Metoclopramide. Major side effects for each drug are detected and better performance is achieved compared to other computerized methods. Since the detected ADRs are based on computerized methods, further investigation is needed.
[ "['Yihui Liu' 'Uwe Aickelin']", "Yihui Liu and Uwe Aickelin" ]
cs.LG cs.CE
10.1109/CIVEMSA.2013.6617400
1409.0788
null
null
http://arxiv.org/abs/1409.0788v1
2014-09-02T16:52:16Z
2014-09-02T16:52:16Z
Ensemble Learning of Colorectal Cancer Survival Rates
In this paper, we describe a dataset relating to cellular and physical conditions of patients who are operated upon to remove colorectal tumours. This data provides a unique insight into immunological status at the point of tumour removal, tumour classification and post-operative survival. We build on existing research on clustering and machine learning facets of this data to demonstrate a role for an ensemble approach to highlighting patients with clearer prognosis parameters. Results for survival prediction using 3 different approaches are shown for a subset of the data which is most difficult to model. The performance of each model individually is compared with subsets of the data where some agreement is reached for multiple models. Significant improvements in model accuracy on an unseen test set can be achieved for patients where agreement between models is achieved.
[ "['Chris Roadknight' 'Uwe Aickelin' 'John Scholefield' 'Lindy Durrant']", "Chris Roadknight, Uwe Aickelin, John Scholefield, Lindy Durrant" ]
stat.ML cs.AI cs.LG
null
1409.0791
null
null
http://arxiv.org/pdf/1409.0791v1
2014-09-02T16:52:53Z
2014-09-02T16:52:53Z
Feature Selection in Conditional Random Fields for Map Matching of GPS Trajectories
Map matching of the GPS trajectory serves the purpose of recovering the original route on a road network from a sequence of noisy GPS observations. It is a fundamental technique to many Location Based Services. However, map matching of a low sampling rate on urban road network is still a challenging task. In this paper, the characteristics of Conditional Random Fields with regard to inducing many contextual features and feature selection are explored for the map matching of the GPS trajectories at a low sampling rate. Experiments on a taxi trajectory dataset show that our method may achieve competitive results along with the success of reducing model complexity for computation-limited applications.
[ "['Jian Yang' 'Liqiu Meng']", "Jian Yang, Liqiu Meng" ]
stat.ML cs.LG
null
1409.0797
null
null
http://arxiv.org/pdf/1409.0797v1
2014-09-02T17:15:58Z
2014-09-02T17:15:58Z
Feature Engineering for Map Matching of Low-Sampling-Rate GPS Trajectories in Road Network
Map matching of GPS trajectories from a sequence of noisy observations serves the purpose of recovering the original routes in a road network. In this work in progress, we attempt to share our experience of feature construction in a spatial database by reporting our ongoing experiment of feature extraction in Conditional Random Fields (CRFs) for map matching. Our preliminary results are obtained from real-world taxi GPS trajectories.
[ "['Jian Yang' 'Liqiu Meng']", "Jian Yang and Liqiu Meng" ]
cs.LG
null
1409.0919
null
null
http://arxiv.org/pdf/1409.0919v1
2014-09-02T23:28:22Z
2014-09-02T23:28:22Z
Solving the Problem of the K Parameter in the KNN Classifier Using an Ensemble Learning Approach
This paper presents a new solution for choosing the K parameter in the k-nearest neighbor (KNN) algorithm. The solution depends on the idea of ensemble learning, in which a weak KNN classifier is used each time with a different K, ranging from one to the square root of the size of the training set. The results of the weak classifiers are combined using the weighted sum rule. The proposed solution was tested and compared to other solutions using a group of experiments on real-life problems. The experimental results show that the proposed classifier outperforms the traditional KNN classifier that uses a different number of neighbors, is competitive with other classifiers, and is a promising classifier with strong potential for a wide range of applications.
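A minimal scikit-learn sketch of the described scheme follows: run KNN for every K from 1 to the square root of the training-set size and combine the per-K class scores. Equal weights are used here as a stand-in for the paper's weighted sum rule, and the iris data set is only a placeholder.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

k_max = int(np.sqrt(len(X_tr)))                  # K ranges from 1 to sqrt(training-set size)
votes = np.zeros((len(X_te), len(np.unique(y))))
for k in range(1, k_max + 1):
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    votes += clf.predict_proba(X_te)             # equal weights; the paper uses a weighted sum rule

y_pred = votes.argmax(axis=1)
print("accuracy:", (y_pred == y_te).mean())
```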
[ "Ahmad Basheer Hassanat, Mohammad Ali Abbadi, Ghada Awad Altarawneh,\n Ahmad Ali Alhasanat", "['Ahmad Basheer Hassanat' 'Mohammad Ali Abbadi' 'Ghada Awad Altarawneh'\n 'Ahmad Ali Alhasanat']" ]
cs.LG
null
1409.0923
null
null
http://arxiv.org/pdf/1409.0923v1
2014-09-02T23:45:29Z
2014-09-02T23:45:29Z
Dimensionality Invariant Similarity Measure
This paper presents a new similarity measure for general tasks, including supervised learning as represented by the K-nearest neighbor (KNN) classifier. The proposed measure is invariant to large differences in some dimensions of the feature space, and is proved mathematically to be a metric. To test its viability for different applications, the KNN classifier was run with the proposed metric on test examples drawn from a number of real datasets. Compared to some other well-known metrics, the experimental results show that the proposed metric is a promising distance measure for the KNN classifier, with strong potential for a wide range of applications.
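The paper's specific measure is not reproduced in this abstract, so the sketch below only shows the plumbing: plugging a custom, per-dimension bounded dissimilarity into scikit-learn's KNN via a callable metric. The particular formula used here (a bounded per-dimension ratio, assuming non-negative features) is a placeholder, not the authors' measure.

# KNN with a user-defined dissimilarity; the measure below is a stand-in, not the paper's.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler

def bounded_ratio_dissimilarity(a, b):
    # Each dimension contributes a value in [0, 1), so one very large coordinate
    # difference cannot dominate the sum (features assumed non-negative).
    lo = np.minimum(a, b)
    hi = np.maximum(a, b)
    return np.sum(1.0 - (1.0 + lo) / (1.0 + hi))

X, y = load_wine(return_X_y=True)
X = MinMaxScaler().fit_transform(X)          # enforce the non-negativity assumption
knn = KNeighborsClassifier(n_neighbors=5, metric=bounded_ratio_dissimilarity)
print("CV accuracy:", cross_val_score(knn, X, y, cv=5).mean())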
[ "['Ahmad Basheer Hassanat']", "Ahmad Basheer Hassanat" ]
stat.ML cs.LG
null
1409.0934
null
null
http://arxiv.org/pdf/1409.0934v1
2014-09-03T01:39:34Z
2014-09-03T01:39:34Z
Breakdown Point of Robust Support Vector Machine
The support vector machine (SVM) is one of the most successful learning methods for solving classification problems. Despite its popularity, SVM has a serious drawback: sensitivity to outliers in the training samples. The penalty on misclassification is defined by a convex loss called the hinge loss, and the unboundedness of this convex loss causes the sensitivity to outliers. To deal with outliers, robust variants of SVM have been proposed, such as the robust outlier detection algorithm and an SVM with a bounded loss called the ramp loss. In this paper, we propose a robust variant of SVM and investigate its robustness in terms of the breakdown point. The breakdown point is a robustness measure defined as the largest amount of contamination such that the estimated classifier still gives information about the non-contaminated data. The main contribution of this paper is an exact evaluation of the breakdown point for the robust SVM. For learning parameters such as the regularization parameter in our algorithm, we derive a simple formula that guarantees the robustness of the classifier. When the learning parameters are determined with a grid search using cross-validation, our formula reduces the number of candidate search points. The robustness of the proposed method is confirmed in numerical experiments, and we show that the statistical properties of the robust SVM are well explained by a theoretical analysis of the breakdown point.
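A short numerical illustration (not the authors' estimator) of why bounding the loss matters for breakdown behaviour: the hinge loss grows without bound as an outlier's margin becomes more negative, while a ramp loss saturates, so a single contaminated point's contribution to the objective stays bounded. The cut-off s = -1 is an arbitrary illustrative choice.

# Hinge loss vs. a bounded ramp loss on increasingly bad margins (illustration only).
import numpy as np

def hinge(margin):
    return np.maximum(0.0, 1.0 - margin)

def ramp(margin, s=-1.0):
    # Ramp loss: the hinge loss truncated at level 1 - s, hence bounded above.
    return np.minimum(hinge(margin), 1.0 - s)

margins = np.array([2.0, 0.5, 0.0, -1.0, -5.0, -50.0])   # y * f(x)
print("margin:", margins)
print("hinge :", hinge(margins))    # grows without bound as the margin worsens
print("ramp  :", ramp(margins))     # saturates at 1 - s = 2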
[ "['Takafumi Kanamori' 'Shuhei Fujiwara' 'Akiko Takeda']", "Takafumi Kanamori, Shuhei Fujiwara, Akiko Takeda" ]
stat.ML cs.DC cs.LG
null
1409.0940
null
null
http://arxiv.org/pdf/1409.0940v3
2015-04-16T18:06:53Z
2014-09-03T02:28:51Z
High-performance Kernel Machines with Implicit Distributed Optimization and Randomization
In order to fully utilize "big data", it is often required to use "big models". Such models tend to grow with the complexity and size of the training data, and do not make strong parametric assumptions upfront on the nature of the underlying statistical dependencies. Kernel methods fit this need well, as they constitute a versatile and principled statistical methodology for solving a wide range of non-parametric modelling problems. However, their high computational costs (in storage and time) pose a significant barrier to their widespread adoption in big data applications. We propose an algorithmic framework and high-performance implementation for massive-scale training of kernel-based statistical models, based on combining two key technical ingredients: (i) distributed general purpose convex optimization, and (ii) the use of randomization to improve the scalability of kernel methods. Our approach is based on a block-splitting variant of the Alternating Directions Method of Multipliers, carefully reconfigured to handle very large random feature matrices, while exploiting hybrid parallelism typically found in modern clusters of multicore machines. Our implementation supports a variety of statistical learning tasks by enabling several loss functions, regularization schemes, kernels, and layers of randomized approximations for both dense and sparse datasets, in a highly extensible framework. We evaluate the ability of our framework to learn models on data from applications, and provide a comparison against existing sequential and parallel libraries.
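A minimal single-machine sketch of one of the two ingredients named above, random Fourier features approximating an RBF kernel and feeding a linear model; the distributed block-splitting ADMM part and the authors' actual implementation are not reproduced. The feature dimension, bandwidth, and toy dataset are arbitrary choices.

# Random Fourier features approximating an RBF kernel, followed by a linear ridge classifier.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import RidgeClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
D, gamma = 2000, 0.05                      # number of random features, RBF bandwidth
W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], D))
b = rng.uniform(0, 2 * np.pi, size=D)

def rff(X):
    # z(x) = sqrt(2/D) * cos(xW + b); E[z(x) . z(x')] approximates exp(-gamma ||x - x'||^2)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

clf = RidgeClassifier(alpha=1.0).fit(rff(X_tr), y_tr)
print("test accuracy:", clf.score(rff(X_te), y_te))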
[ "Vikas Sindhwani and Haim Avron", "['Vikas Sindhwani' 'Haim Avron']" ]
cs.CV cs.LG
10.1109/TIP.2015.2441632
1409.0964
null
null
http://arxiv.org/abs/1409.0964v1
2014-09-03T06:45:11Z
2014-09-03T06:45:11Z
Constructing a Non-Negative Low Rank and Sparse Graph with Data-Adaptive Features
This paper aims at constructing a good graph for discovering intrinsic data structures in a semi-supervised learning setting. Firstly, we propose to build a non-negative low-rank and sparse (NNLRS) graph for the given data representation. Specifically, the weights of edges in the graph are obtained by seeking a non-negative low-rank and sparse matrix that represents each data sample as a linear combination of the others. The resulting NNLRS-graph captures both the global mixture-of-subspaces structure (through the low rankness) and the locally linear structure (through the sparseness) of the data, and hence is both generative and discriminative. Secondly, as good features are extremely important for constructing a good graph, we propose to learn the data embedding matrix and construct the graph jointly within one framework, termed NNLRS with embedded features (NNLRS-EF). Extensive experiments on three publicly available datasets demonstrate that the proposed method outperforms the state-of-the-art graph construction method by a large margin for both semi-supervised classification and discriminative analysis, which verifies the effectiveness of our proposed method.
[ "Liansheng Zhuang, Shenghua Gao, Jinhui Tang, Jingjing Wang, Zhouchen\n Lin, Yi Ma", "['Liansheng Zhuang' 'Shenghua Gao' 'Jinhui Tang' 'Jingjing Wang'\n 'Zhouchen Lin' 'Yi Ma']" ]
cs.LG cs.CE
null
1409.1043
null
null
http://arxiv.org/pdf/1409.1043v1
2014-09-03T11:42:33Z
2014-09-03T11:42:33Z
Variability of Behaviour in Electricity Load Profile Clustering; Who Does Things at the Same Time Each Day?
UK electricity market changes provide opportunities to alter households' electricity usage patterns for the benefit of the overall electricity network. Work on clustering similar households has concentrated on daily load profiles, and the variability in regular household behaviours has not been considered. The households with the most variability in regular activities may be the most receptive to incentives to change the timing of those activities. We investigate whether using the variability of regular behaviour allows the creation of more consistent groupings of households, and compare this with daily load profile clustering. 204 UK households are analysed to find repeating patterns (motifs), and the variability in the time of a motif is used as the basis for clustering households. Different clustering algorithms are assessed by the consistency of their results. Findings show that the variability of behaviour, captured by motifs, provides more consistent groupings of households across different clustering algorithms and allows for more efficient targeting of behaviour change interventions.
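A minimal sketch of the clustering step only, under stated assumptions: motif discovery is not reproduced, and the per-household daily event times are synthetic stand-ins for the times at which a discovered motif occurs in real smart-meter data. Households are then clustered on the variability of that timing.

# Cluster households by how variable the timing of a regular daily event is (sketch).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_households, n_days = 204, 60

# Stand-in for motif occurrence times (hour of day) produced by a motif-discovery step.
base = rng.uniform(6, 22, size=n_households)                  # each household's habitual time
spread = rng.uniform(0.1, 3.0, size=n_households)             # how variable that habit is
times = base[:, None] + rng.normal(0, spread[:, None], size=(n_households, n_days))

# Feature per household: variability (and mean) of the motif's time of day.
features = np.column_stack([times.std(axis=1), times.mean(axis=1)])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
for c in range(4):
    print(f"cluster {c}: {np.sum(labels == c)} households, "
          f"mean timing std = {features[labels == c, 0].mean():.2f} h")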
[ "['Ian Dent' 'Tony Craig' 'Uwe Aickelin' 'Tom Rodden']", "Ian Dent, Tony Craig, Uwe Aickelin and Tom Rodden" ]
cs.LG cs.CE
null
1409.1053
null
null
http://arxiv.org/pdf/1409.1053v1
2014-09-03T12:11:54Z
2014-09-03T12:11:54Z
Tuning a Multiple Classifier System for Side Effect Discovery using Genetic Algorithms
In previous work, a novel supervised framework implementing a binary classifier was presented that obtained excellent results for side effect discovery. Interestingly, unique side effects were identified when different binary classifiers were used within the framework, prompting the investigation of a multiple classifier system. In this paper, we investigate tuning a multiple classifier system for side effect discovery using genetic algorithms. The results of this research show that the framework, when implementing a multiple classifier system trained using genetic algorithms, can obtain a higher partial area under the receiver operating characteristic curve than when implementing a single classifier. Furthermore, the framework is able to detect side effects efficiently and obtains a low false positive rate.
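A toy sketch of genetic-algorithm tuning of an ensemble, under stated assumptions: the side-effect data, base learners, fitness (overall ROC AUC rather than partial AUC), and GA settings are all illustrative stand-ins, not the authors' configuration. It evolves the weights used to combine the base classifiers' scores on a validation set.

# Toy genetic algorithm evolving ensemble combination weights to maximize validation AUC (sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=1000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

models = [LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=0), GaussianNB()]
scores = np.column_stack([m.fit(X_tr, y_tr).predict_proba(X_va)[:, 1] for m in models])

def fitness(w):
    return roc_auc_score(y_va, scores @ w)

rng = np.random.default_rng(0)
pop = rng.random((30, len(models)))                     # population of weight vectors
for _ in range(50):                                     # generations
    fit = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(fit)[-10:]]                # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(len(a)) < 0.5, a, b)               # uniform crossover
        child = np.clip(child + rng.normal(0, 0.1, len(a)), 0, None)   # mutation, keep weights >= 0
        children.append(child)
    pop = np.array(children)

best = max(pop, key=fitness)
print("best weights:", np.round(best / best.sum(), 3), " AUC:", round(fitness(best), 4))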
[ "['Jenna M. Reps' 'Uwe Aickelin' 'Jonathan M. Garibaldi']", "Jenna M. Reps, Uwe Aickelin and Jonathan M. Garibaldi" ]
cs.CE cs.LG cs.NE
null
1409.1057
null
null
http://arxiv.org/pdf/1409.1057v1
2014-09-03T12:23:50Z
2014-09-03T12:23:50Z
Augmented Neural Networks for Modelling Consumer Indebtness
Consumer debt has risen to be an important problem of modern societies, generating a great deal of research aimed at understanding the nature of consumer indebtedness, which so far has been modelled mainly with statistical methods. In this work we show that Computational Intelligence can offer a more holistic approach that is better suited to the complex relationships in an indebtedness dataset, which Linear Regression cannot uncover. In particular, as our results show, Neural Networks achieve the best performance in modelling consumer indebtedness, especially when they incorporate the significant and experimentally verified results of the data mining process into the model, exploiting the flexibility Neural Networks offer in designing their topology. This novel method forms an elaborate framework for modelling consumer indebtedness that can be extended to other real-world applications.
[ "['Alexandros Ladas' 'Jonathan M. Garibaldi' 'Rodrigo Scarpel'\n 'Uwe Aickelin']", "Alexandros Ladas, Jonathan M. Garibaldi, Rodrigo Scarpel and Uwe\n Aickelin" ]
null
null
1409.1062
null
null
http://arxiv.org/pdf/1409.1062v1
2014-09-03T12:36:25Z
2014-09-03T12:36:25Z
Structured Low-Rank Matrix Factorization with Missing and Grossly Corrupted Observations
Recovering low-rank and sparse matrices from incomplete or corrupted observations is an important problem in machine learning, statistics, bioinformatics, computer vision, as well as signal and image processing. In theory, this problem can be solved by the natural convex joint/mixed relaxations (i.e., l_{1}-norm and trace norm) under certain conditions. However, all current provable algorithms suffer from superlinear per-iteration cost, which severely limits their applicability to large-scale problems. In this paper, we propose a scalable, provable structured low-rank matrix factorization method to recover low-rank and sparse matrices from missing and grossly corrupted data, i.e., robust matrix completion (RMC) problems, or incomplete and grossly corrupted measurements, i.e., compressive principal component pursuit (CPCP) problems. Specifically, we first present two small-scale matrix trace norm regularized bilinear structured factorization models for RMC and CPCP problems, in which repetitively calculating SVD of a large-scale matrix is replaced by updating two much smaller factor matrices. Then, we apply the alternating direction method of multipliers (ADMM) to efficiently solve the RMC problems. Finally, we provide the convergence analysis of our algorithm, and extend it to address general CPCP problems. Experimental results verified both the efficiency and effectiveness of our method compared with the state-of-the-art methods.
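A minimal sketch of the two proximal operators that ADMM-style solvers for low-rank-plus-sparse recovery rely on: singular-value thresholding for the trace norm and entrywise soft thresholding for the l1 norm. This is generic background, not the authors' bilinear factorization algorithm, and the synthetic matrices and thresholds are arbitrary.

# The two proximal maps underpinning ADMM solvers for low-rank + sparse recovery (sketch).
import numpy as np

def soft_threshold(X, tau):
    # prox of tau * ||X||_1: shrink every entry toward zero by tau
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    # prox of tau * ||X||_* (trace norm): soft-threshold the singular values
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
L = rng.normal(size=(50, 5)) @ rng.normal(size=(5, 40))     # ground-truth low-rank part
S = soft_threshold(rng.normal(size=(50, 40)), 2.5)          # sparse corruption
M = L + S

print("rank of svd_threshold(M, 5.0):", np.linalg.matrix_rank(svd_threshold(M, 5.0)))
print("nonzeros of soft_threshold(M, 1.0):", np.count_nonzero(soft_threshold(M, 1.0)))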
[ "['Fanhua Shang' 'Yuanyuan Liu' 'Hanghang Tong' 'James Cheng' 'Hong Cheng']" ]
cs.LG
null
1409.1200
null
null
http://arxiv.org/pdf/1409.1200v1
2014-09-03T19:18:05Z
2014-09-03T19:18:05Z
Domain Transfer Structured Output Learning
In this paper, we propose the problem of domain transfer structured output learning and a first solution to it. The problem is defined on two data domains sharing the same input and output spaces, called the source domain and the target domain. The outputs are structured; for the data samples of the source domain, the corresponding outputs are available, while for most data samples of the target domain, the corresponding outputs are missing. The input distributions of the two domains are significantly different. The problem is to learn a predictor for the target domain that predicts the structured outputs from the input. Due to the limited number of outputs available for the samples from the target domain, it is difficult to learn the predictor directly from the target domain, so it is necessary to use the output information available in the source domain. We propose to learn the target domain predictor by adapting an auxiliary predictor, trained using source domain data, to the target domain. The adaptation is implemented by adding a delta function on top of the auxiliary predictor. An algorithm is developed to learn the parameters of the delta function by minimizing loss functions associated with the predicted outputs against the true outputs of the target domain samples whose outputs are available.
[ "['Jim Jing-Yan Wang']", "Jim Jing-Yan Wang" ]
cs.CL cs.LG cs.NE stat.ML
null
1409.1257
null
null
http://arxiv.org/pdf/1409.1257v2
2014-10-07T18:09:37Z
2014-09-03T21:00:49Z
Overcoming the Curse of Sentence Length for Neural Machine Translation using Automatic Segmentation
Cho et al. (2014a) have shown that the recently introduced neural network translation systems suffer from a significant drop in translation quality when translating long sentences, unlike existing phrase-based translation systems. In this paper, we propose a way to address this issue by automatically segmenting an input sentence into phrases that can be easily translated by the neural network translation model. Once each segment has been independently translated by the neural machine translation model, the translated clauses are concatenated to form a final translation. Empirical results show a significant improvement in translation quality for long sentences.
[ "Jean Pouget-Abadie and Dzmitry Bahdanau and Bart van Merrienboer and\n Kyunghyun Cho and Yoshua Bengio", "['Jean Pouget-Abadie' 'Dzmitry Bahdanau' 'Bart van Merrienboer'\n 'Kyunghyun Cho' 'Yoshua Bengio']" ]
stat.ML cs.LG
null
1409.1320
null
null
http://arxiv.org/pdf/1409.1320v2
2014-09-05T21:13:36Z
2014-09-04T05:06:34Z
Marginal Structured SVM with Hidden Variables
In this work, we propose the marginal structured SVM (MSSVM) for structured prediction with hidden variables. MSSVM properly accounts for the uncertainty of hidden variables, and can significantly outperform the previously proposed latent structured SVM (LSSVM; Yu & Joachims (2009)) and other state-of-art methods, especially when that uncertainty is large. Our method also results in a smoother objective function, making gradient-based optimization of MSSVMs converge significantly faster than for LSSVMs. We also show that our method consistently outperforms hidden conditional random fields (HCRFs; Quattoni et al. (2007)) on both simulated and real-world datasets. Furthermore, we propose a unified framework that includes both our and several other existing methods as special cases, and provides insights into the comparison of different models in practice.
[ "Wei Ping, Qiang Liu, Alexander Ihler", "['Wei Ping' 'Qiang Liu' 'Alexander Ihler']" ]
cs.LG math.OC stat.ML
null
1409.1458
null
null
http://arxiv.org/pdf/1409.1458v2
2014-09-29T16:07:32Z
2014-09-04T14:59:35Z
Communication-Efficient Distributed Dual Coordinate Ascent
Communication remains the most significant bottleneck in the performance of distributed optimization algorithms for large-scale machine learning. In this paper, we propose a communication-efficient framework, CoCoA, that uses local computation in a primal-dual setting to dramatically reduce the amount of necessary communication. We provide a strong convergence rate analysis for this class of algorithms, as well as experiments on real-world distributed datasets with implementations in Spark. In our experiments, we find that, compared to state-of-the-art mini-batch versions of SGD and SDCA algorithms, CoCoA converges to the same 0.001-accurate solution quality on average 25x as quickly.
[ "['Martin Jaggi' 'Virginia Smith' 'Martin Takáč' 'Jonathan Terhorst'\n 'Sanjay Krishnan' 'Thomas Hofmann' 'Michael I. Jordan']", "Martin Jaggi, Virginia Smith, Martin Tak\\'a\\v{c}, Jonathan Terhorst,\n Sanjay Krishnan, Thomas Hofmann, Michael I. Jordan" ]
astro-ph.CO astro-ph.IM cs.LG stat.ML
10.1088/1475-7516/2015/01/038
1409.1576
null
null
http://arxiv.org/abs/1409.1576v2
2015-01-21T21:11:42Z
2014-09-04T20:00:05Z
Machine Learning Etudes in Astrophysics: Selection Functions for Mock Cluster Catalogs
Making mock simulated catalogs is an important component of astrophysical data analysis. Selection criteria for observed astronomical objects are often too complicated to be derived from first principles. However, the existence of an observed group of objects is a well-suited problem for machine learning classification. In this paper we use one-class classifiers to learn the properties of an observed catalog of clusters of galaxies from ROSAT and to pick clusters from mock simulations that resemble the observed ROSAT catalog. We show how this method can be used to study the cross-correlations of thermal Sunyaev-Zel'dovich signals with number density maps of X-ray selected cluster catalogs. The method reduces the bias due to hand-tuning the selection function and is readily scalable to large catalogs with a high-dimensional space of astrophysical features.
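A minimal sketch of the core idea, under stated assumptions: fit a one-class classifier on the observed catalogue's feature vectors and keep only the mock objects the classifier accepts. The two-dimensional synthetic "observed" and "mock" feature sets and the One-Class SVM hyper-parameters are placeholders for the real ROSAT and simulation catalogues.

# One-class SVM trained on "observed" objects, used to select resembling mock objects (sketch).
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Stand-ins for catalogue features; real inputs would come from the observed clusters
# and from the simulated haloes.
observed = rng.normal(loc=[1.0, 0.5], scale=[0.3, 0.2], size=(300, 2))
mocks = rng.normal(loc=[0.6, 0.4], scale=[0.6, 0.4], size=(5000, 2))

scaler = StandardScaler().fit(observed)
clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(scaler.transform(observed))

selected = mocks[clf.predict(scaler.transform(mocks)) == 1]   # +1 means "resembles the observed set"
print(f"kept {len(selected)} of {len(mocks)} mock objects")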
[ "['Amir Hajian' 'Marcelo Alvarez' 'J. Richard Bond']", "Amir Hajian, Marcelo Alvarez, J. Richard Bond" ]
cs.LG
null
1409.1917
null
null
http://arxiv.org/pdf/1409.1917v1
2014-09-05T16:17:14Z
2014-09-05T16:17:14Z
Novel Methods for Activity Classification and Occupancy Prediction Enabling Fine-grained HVAC Control
Much of the energy consumption in buildings is due to HVAC systems, which has motivated several recent studies on making these systems more energy-efficient. Occupancy and activity are two important aspects that need to be correctly estimated for optimal HVAC control. However, state-of-the-art methods to estimate occupancy and classify activity require infrastructure and/or wearable sensors, which suffer from lower acceptability due to higher cost. Encouragingly, with the advancement of smartphones, these estimates are becoming more achievable. Most of the existing occupancy estimation techniques have the underlying assumption that the phone is always carried by its user. However, phones are often left at the desk while attending meetings or other events, which generates estimation errors for existing phone-based occupancy algorithms. Similarly, the emerging theory of the Sparse Random Classifier (SRC) has recently been applied to activity classification on smartphones; however, there is room to improve the on-phone processing. We propose a novel sensor fusion method which offers almost 100% accuracy for occupancy estimation. We also propose an activity classification algorithm which offers similar accuracy to the state-of-the-art SRC algorithms while offering a 50% reduction in processing.
[ "Rajib Rana, Brano Kusy, Josh Wall, Wen Hu", "['Rajib Rana' 'Brano Kusy' 'Josh Wall' 'Wen Hu']" ]
stat.ML cs.LG
null
1409.1976
null
null
http://arxiv.org/pdf/1409.1976v1
2014-09-06T03:12:39Z
2014-09-06T03:12:39Z
A Reduction of the Elastic Net to Support Vector Machines with an Application to GPU Computing
The past years have witnessed many dedicated open-source projects that built and maintain implementations of Support Vector Machines (SVM), parallelized for GPUs, multi-core CPUs and distributed systems. Up to this point, no comparable effort has been made to parallelize the Elastic Net, despite its popularity in many high impact applications, including genetics, neuroscience and systems biology. The first contribution in this paper is of a theoretical nature. We establish a tight link between two seemingly different algorithms and prove that Elastic Net regression can be reduced to SVM with squared hinge loss classification. Our second contribution is to derive a practical algorithm based on this reduction. The reduction enables us to utilize prior efforts in speeding up and parallelizing SVMs to obtain a highly optimized and parallel solver for the Elastic Net and Lasso. With a simple wrapper, consisting of only 11 lines of MATLAB code, we obtain an Elastic Net implementation that naturally utilizes GPUs and multi-core CPUs. We demonstrate on twelve real-world data sets that our algorithm yields results identical to the popular (and highly optimized) glmnet implementation but is one or several orders of magnitude faster.
[ "Quan Zhou, Wenlin Chen, Shiji Song, Jacob R. Gardner, Kilian Q.\n Weinberger, Yixin Chen", "['Quan Zhou' 'Wenlin Chen' 'Shiji Song' 'Jacob R. Gardner'\n 'Kilian Q. Weinberger' 'Yixin Chen']" ]
math.OC cs.LG stat.ML
null
1409.2045
null
null
http://arxiv.org/pdf/1409.2045v1
2014-09-06T18:51:17Z
2014-09-06T18:51:17Z
Global Convergence of Online Limited Memory BFGS
Global convergence of an online (stochastic) limited memory version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method for solving optimization problems with stochastic objectives that arise in large scale machine learning is established. Lower and upper bounds on the Hessian eigenvalues of the sample functions are shown to suffice to guarantee that the curvature approximation matrices have bounded determinants and traces, which, in turn, permits establishing convergence to optimal arguments with probability 1. Numerical experiments on support vector machines with synthetic data showcase reductions in convergence time relative to stochastic gradient descent algorithms as well as reductions in storage and computation relative to other online quasi-Newton methods. Experimental evaluation on a search engine advertising problem corroborates that these advantages also manifest in practical applications.
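A minimal sketch of the two-loop recursion that limited-memory BFGS methods, online or batch, use to apply the implicit inverse-Hessian approximation built from stored curvature pairs; the stochastic sampling, regularized memory updates, and step-size schedule of the paper are not reproduced, and the quadratic test problem is an arbitrary illustration.

# L-BFGS two-loop recursion: apply the implicit inverse-Hessian approximation to a vector.
import numpy as np

def two_loop(grad, s_hist, y_hist):
    # s_hist, y_hist: curvature pairs s_i = x_{i+1} - x_i, y_i = g_{i+1} - g_i, oldest first.
    # Returns an approximation of H^{-1} @ grad.
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_hist, y_hist)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_hist, y_hist, rhos))):
        a = rho * (s @ q)
        q -= a * y
        alphas.append(a)
    alphas.reverse()
    s_last, y_last = s_hist[-1], y_hist[-1]
    r = (s_last @ y_last) / (y_last @ y_last) * q        # initial scaling H_0
    for (s, y, rho), a in zip(zip(s_hist, y_hist, rhos), alphas):
        b = rho * (y @ r)
        r += (a - b) * s
    return r

# Tiny check on a quadratic 0.5 x^T A x: the implicit matrix satisfies the secant equation
# H @ y_last = s_last, so applying the recursion to y_last should return s_last.
rng = np.random.default_rng(0)
A = np.diag([1.0, 10.0, 100.0])
xs = [rng.normal(size=3) for _ in range(6)]
s_hist = [xs[i + 1] - xs[i] for i in range(5)]
y_hist = [A @ s for s in s_hist]                         # exact gradient differences
print("H @ y_last:", two_loop(y_hist[-1], s_hist, y_hist))
print("s_last    :", s_hist[-1])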
[ "['Aryan Mokhtari' 'Alejandro Ribeiro']", "Aryan Mokhtari and Alejandro Ribeiro" ]
cs.LG cs.DS cs.IT math.IT math.ST stat.TH
null
1409.2177
null
null
http://arxiv.org/pdf/1409.2177v1
2014-09-07T23:51:00Z
2014-09-07T23:51:00Z
The Large Margin Mechanism for Differentially Private Maximization
A basic problem in the design of privacy-preserving algorithms is the private maximization problem: the goal is to pick an item from a universe that (approximately) maximizes a data-dependent function, all under the constraint of differential privacy. This problem has been used as a sub-routine in many privacy-preserving algorithms for statistics and machine-learning. Previous algorithms for this problem are either range-dependent---i.e., their utility diminishes with the size of the universe---or only apply to very restricted function classes. This work provides the first general-purpose, range-independent algorithm for private maximization that guarantees approximate differential privacy. Its applicability is demonstrated on two fundamental tasks in data mining and machine learning.
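A minimal sketch of the classic exponential mechanism, the standard (range-dependent) baseline for private maximization referred to above; it is explicitly not the paper's large margin mechanism. The score vector, privacy budget, and sensitivity value are illustrative assumptions.

# Exponential mechanism for private maximization (the classic baseline, not the paper's method).
import numpy as np

def exponential_mechanism(scores, epsilon, sensitivity, rng):
    # Pr[pick item i] is proportional to exp(epsilon * score_i / (2 * sensitivity)).
    logits = epsilon * np.asarray(scores, dtype=float) / (2.0 * sensitivity)
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

rng = np.random.default_rng(0)
counts = np.array([120, 90, 118, 5, 40])        # data-dependent scores (e.g. item counts)
picks = [exponential_mechanism(counts, epsilon=1.0, sensitivity=1.0, rng=rng)
         for _ in range(10000)]
print("empirical pick frequencies:", np.bincount(picks, minlength=len(counts)) / 10000)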
[ "['Kamalika Chaudhuri' 'Daniel Hsu' 'Shuang Song']", "Kamalika Chaudhuri and Daniel Hsu and Shuang Song" ]
cs.CV cs.LG stat.ML
null
1409.2232
null
null
http://arxiv.org/pdf/1409.2232v2
2016-11-02T07:33:51Z
2014-09-08T08:10:37Z
When coding meets ranking: A joint framework based on local learning
Sparse coding, which represents a data point as a sparse reconstruction code with respect to a dictionary, has been a popular data representation method. Meanwhile, in database retrieval problems, learning the ranking scores from data points plays an important role. Up to now, these two problems have always been considered separately, assuming that data coding and ranking are two independent and unrelated problems. However, is there any internal relationship between sparse coding and ranking score learning? If so, how can this internal relationship be explored and made use of? In this paper, we try to answer these questions by developing the first joint sparse coding and ranking score learning algorithm. To explore the local distribution in the sparse code space, and also to bridge the coding and ranking problems, we assume that in the neighborhood of each data point, the ranking scores can be approximated from the corresponding sparse codes by a local linear function. By considering the local approximation error of the ranking scores, the reconstruction error and sparsity of the sparse coding, and the query information provided by the user, we construct a unified objective function for learning the sparse codes, the dictionary and the ranking scores. We further develop an iterative algorithm to solve this optimization problem.
[ "['Jim Jing-Yan Wang' 'Xuefeng Cui' 'Ge Yu' 'Lili Guo' 'Xin Gao']", "Jim Jing-Yan Wang, Xuefeng Cui, Ge Yu, Lili Guo, Xin Gao" ]
stat.ML cs.AI cs.CV cs.LG
null
1409.2287
null
null
http://arxiv.org/pdf/1409.2287v1
2014-09-08T10:47:23Z
2014-09-08T10:47:23Z
Variational Inference for Uncertainty on the Inputs of Gaussian Process Models
The Gaussian process latent variable model (GP-LVM) provides a flexible approach for non-linear dimensionality reduction that has been widely applied. However, the current approach for training GP-LVMs is based on maximum likelihood, where the latent projection variables are maximized over rather than integrated out. In this paper we present a Bayesian method for training GP-LVMs by introducing a non-standard variational inference framework that allows us to approximately integrate out the latent variables and subsequently train a GP-LVM by maximizing an analytic lower bound on the exact marginal likelihood. We apply this method to learning a GP-LVM from iid observations and to learning non-linear dynamical systems where the observations are temporally correlated. We show that a benefit of the variational Bayesian procedure is its robustness to overfitting and its ability to automatically select the dimensionality of the non-linear latent space. The resulting framework is generic, flexible and easy to extend for other purposes, such as Gaussian process regression with uncertain inputs and semi-supervised Gaussian processes. We demonstrate our method on synthetic data and standard machine learning benchmarks, as well as challenging real-world datasets, including high resolution video data.
[ "['Andreas C. Damianou' 'Michalis K. Titsias' 'Neil D. Lawrence']", "Andreas C. Damianou, Michalis K. Titsias, Neil D. Lawrence" ]