Schema (field name: type; each record below repeats these eleven fields in this order):

categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
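The sketch below is a minimal, illustrative example (not part of the original dump) of how records in this flat layout could be regrouped into dictionaries keyed by the schema fields above. The file name arxiv_dump.txt, the parse_records helper, and the assumption that values appear one per line in schema order are all hypothetical.

```python
# Minimal sketch (assumptions): the dump is a flat sequence of values, one per
# line, with each record's 11 fields appearing in the schema order. The file
# name and helper below are illustrative, not part of the original pipeline.
import json

FIELDS = [
    "categories", "doi", "id", "year", "venue",
    "link", "updated", "published", "title", "abstract", "authors",
]

def parse_records(path):
    """Group consecutive non-empty lines into dicts keyed by the schema fields."""
    with open(path, encoding="utf-8") as f:
        values = [line.rstrip("\n") for line in f if line.strip()]
    records = []
    # Any incomplete trailing record (fewer than 11 remaining values) is dropped.
    for start in range(0, len(values) - len(FIELDS) + 1, len(FIELDS)):
        row = dict(zip(FIELDS, values[start:start + len(FIELDS)]))
        # The literal string "null" stands in for missing values in this dump.
        row = {k: (None if v == "null" else v) for k, v in row.items()}
        records.append(row)
    return records

if __name__ == "__main__":
    recs = parse_records("arxiv_dump.txt")  # hypothetical file name
    print(json.dumps(recs[0], indent=2)[:400])
```

Note that the authors field is stored here as a single raw string that often contains two alternative renderings of the same author list, so downstream code would likely need to normalize it further.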
categories: cs.LG
doi: null
id: 1404.7472
year: null
venue: null
link: http://arxiv.org/pdf/1404.7472v1
updated: 2014-04-29T19:28:09Z
published: 2014-04-29T19:28:09Z
Implementing spectral methods for hidden Markov models with real-valued emissions
Hidden Markov models (HMMs) are widely used statistical models for modeling sequential data. The parameter estimation for HMMs from time series data is an important learning problem. The predominant methods for parameter estimation are based on local search heuristics, most notably the expectation-maximization (EM) algorithm. These methods are prone to local optima and oftentimes suffer from high computational and sample complexity. Recent years saw the emergence of spectral methods for the parameter estimation of HMMs, based on a method of moments approach. Two spectral learning algorithms as proposed by Hsu, Kakade and Zhang 2012 (arXiv:0811.4413) and Anandkumar, Hsu and Kakade 2012 (arXiv:1203.0683) are assessed in this work. Using experiments with synthetic data, the algorithms are compared with each other. Furthermore, the spectral methods are compared to the Baum-Welch algorithm, a well-established method applying the EM algorithm to HMMs. The spectral algorithms are found to have a much more favorable computational and sample complexity. Even though the algorithms readily handle high dimensional observation spaces, instability issues are encountered in this regime. In view of learning from real-world experimental data, the representation of real-valued observations for the use in spectral methods is discussed, presenting possible methods to represent data for the use in the learning algorithms.
[ "Carl Mattfeld", "['Carl Mattfeld']" ]
categories: cs.LG
doi: null
id: 1404.7527
year: null
venue: null
link: http://arxiv.org/pdf/1404.7527v2
updated: 2014-07-03T21:46:43Z
published: 2014-04-29T20:54:40Z
A Map of Update Constraints in Inductive Inference
We investigate how different learning restrictions reduce learning power and how the different restrictions relate to one another. We give a complete map for nine different restrictions both for the cases of complete information learning and set-driven learning. This completes the picture for these well-studied \emph{delayable} learning restrictions. A further insight is gained by different characterizations of \emph{conservative} learning in terms of variants of \emph{cautious} learning. Our analyses greatly benefit from general theorems we give, for example showing that learners with exclusively delayable restrictions can always be assumed total.
[ "['Timo Kötzing' 'Raphaela Palenta']", "Timo K\\\"otzing and Raphaela Palenta" ]
categories: stat.ML cs.LG cs.MM
doi: null
id: 1404.7796
year: null
venue: null
link: http://arxiv.org/pdf/1404.7796v2
updated: 2014-06-19T08:06:24Z
published: 2014-04-30T16:55:00Z
Majority Vote of Diverse Classifiers for Late Fusion
In the past few years, a lot of attention has been devoted to multimedia indexing by fusing multimodal information. Two kinds of fusion schemes are generally considered: early fusion and late fusion. We focus on late classifier fusion, where one combines the scores of each modality at the decision level. To tackle this problem, we investigate a recent and elegant well-founded quadratic program named MinCq coming from the machine learning PAC-Bayesian theory. MinCq looks for the weighted combination, over a set of real-valued functions seen as voters, leading to the lowest misclassification rate, while maximizing the voters' diversity. We propose an extension of MinCq tailored to multimedia indexing. Our method is based on an order-preserving pairwise loss adapted to ranking that allows us to improve the Mean Averaged Precision measure while taking into account the diversity of the voters that we want to fuse. We provide evidence that this method is naturally adapted to late fusion procedures and confirm the good behavior of our approach on the challenging PASCAL VOC'07 benchmark.
[ "['Emilie Morvant' 'Amaury Habrard' 'Stéphane Ayache']", "Emilie Morvant (IST Austria), Amaury Habrard (LHC), St\\'ephane Ayache\n (LIF)" ]
categories: cs.NE cs.LG
doi: 10.1016/j.neunet.2014.09.003
id: 1404.7828
year: null
venue: null
link: http://arxiv.org/abs/1404.7828v4
updated: 2014-10-08T10:00:38Z
published: 2014-04-30T18:39:00Z
Deep Learning in Neural Networks: An Overview
In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.
[ "['Juergen Schmidhuber']", "Juergen Schmidhuber" ]
categories: stat.ML cs.LG math.OC math.PR
doi: null
id: 1405.0042
year: null
venue: null
link: http://arxiv.org/pdf/1405.0042v2
updated: 2015-06-15T13:12:12Z
published: 2014-04-30T21:48:34Z
Learning with incremental iterative regularization
Within a statistical learning setting, we propose and study an iterative regularization algorithm for least squares defined by an incremental gradient method. In particular, we show that, if all other parameters are fixed a priori, the number of passes over the data (epochs) acts as a regularization parameter, and prove strong universal consistency, i.e. almost sure convergence of the risk, as well as sharp finite sample bounds for the iterates. Our results are a step towards understanding the effect of multiple epochs in stochastic gradient techniques in machine learning and rely on integrating statistical and optimization results.
[ "['Lorenzo Rosasco' 'Silvia Villa']", "Lorenzo Rosasco, Silvia Villa" ]
categories: stat.ML cs.LG
doi: null
id: 1405.0099
year: null
venue: null
link: null
updated: null
published: null
Fast MLE Computation for the Dirichlet Multinomial
Given a collection of categorical data, we want to find the parameters of a Dirichlet distribution which maximizes the likelihood of that data. Newton's method is typically used for this purpose but current implementations require reading through the entire dataset on each iteration. In this paper, we propose a modification which requires only a single pass through the dataset and substantially decreases running time. Furthermore we analyze both theoretically and empirically the performance of the proposed algorithm, and provide an open source implementation.
[ "Max Sklar" ]
categories: cs.LG math.DG stat.ML
doi: null
id: 1405.0133
year: null
venue: null
link: http://arxiv.org/pdf/1405.0133v2
updated: 2014-05-08T05:07:21Z
published: 2014-05-01T11:10:36Z
Geodesic Distance Function Learning via Heat Flow on Vector Fields
Learning a distance function or metric on a given data manifold is of great importance in machine learning and pattern recognition. Many of the previous works first embed the manifold to Euclidean space and then learn the distance function. However, such a scheme might not faithfully preserve the distance function if the original manifold is not Euclidean. Note that the distance function on a manifold can always be well-defined. In this paper, we propose to learn the distance function directly on the manifold without embedding. We first provide a theoretical characterization of the distance function by its gradient field. Based on our theoretical analysis, we propose to first learn the gradient field of the distance function and then learn the distance function itself. Specifically, we set the gradient field of a local distance function as an initial vector field. Then we transport it to the whole manifold via heat flow on vector fields. Finally, the geodesic distance function can be obtained by requiring its gradient field to be close to the normalized vector field. Experimental results on both synthetic and real data demonstrate the effectiveness of our proposed algorithm.
[ "['Binbin Lin' 'Ji Yang' 'Xiaofei He' 'Jieping Ye']", "Binbin Lin, Ji Yang, Xiaofei He and Jieping Ye" ]
categories: cs.LG cs.AI
doi: null
id: 1405.0501
year: null
venue: null
link: http://arxiv.org/pdf/1405.0501v1
updated: 2014-05-02T20:13:06Z
published: 2014-05-02T20:13:06Z
Exchangeable Variable Models
A sequence of random variables is exchangeable if its joint distribution is invariant under variable permutations. We introduce exchangeable variable models (EVMs) as a novel class of probabilistic models whose basic building blocks are partially exchangeable sequences, a generalization of exchangeable sequences. We prove that a family of tractable EVMs is optimal under zero-one loss for a large class of functions, including parity and threshold functions, and strictly subsumes existing tractable independence-based model families. Extensive experiments show that EVMs outperform state of the art classifiers such as SVMs and probabilistic models which are solely based on independence assumptions.
[ "['Mathias Niepert' 'Pedro Domingos']", "Mathias Niepert and Pedro Domingos" ]
categories: cs.LG cs.FL
doi: null
id: 1405.0514
year: null
venue: null
link: http://arxiv.org/pdf/1405.0514v2
updated: 2014-11-27T18:59:45Z
published: 2014-05-02T20:58:39Z
Complexity of Equivalence and Learning for Multiplicity Tree Automata
We consider the complexity of equivalence and learning for multiplicity tree automata, i.e., weighted tree automata over a field. We first show that the equivalence problem is logspace equivalent to polynomial identity testing, the complexity of which is a longstanding open problem. Secondly, we derive lower bounds on the number of queries needed to learn multiplicity tree automata in Angluin's exact learning model, over both arbitrary and fixed fields. Habrard and Oncina (2006) give an exact learning algorithm for multiplicity tree automata, in which the number of queries is proportional to the size of the target automaton and the size of a largest counterexample, represented as a tree, that is returned by the Teacher. However, the smallest tree-counterexample may be exponential in the size of the target automaton. Thus the above algorithm does not run in time polynomial in the size of the target automaton, and has query complexity exponential in the lower bound. Assuming a Teacher that returns minimal DAG representations of counterexamples, we give a new exact learning algorithm whose query complexity is quadratic in the target automaton size, almost matching the lower bound, and improving the best previously-known algorithm by an exponential factor.
[ "Ines Marusic and James Worrell", "['Ines Marusic' 'James Worrell']" ]
categories: cs.LG stat.ML
doi: null
id: 1405.0586
year: null
venue: null
link: http://arxiv.org/pdf/1405.0586v3
updated: 2016-09-13T18:06:14Z
published: 2014-05-03T13:36:59Z
On Lipschitz Continuity and Smoothness of Loss Functions in Learning to Rank
In binary classification and regression problems, it is well understood that Lipschitz continuity and smoothness of the loss function play key roles in governing generalization error bounds for empirical risk minimization algorithms. In this paper, we show how these two properties affect generalization error bounds in the learning to rank problem. The learning to rank problem involves vector valued predictions and therefore the choice of the norm with respect to which Lipschitz continuity and smoothness are defined becomes crucial. Choosing the $\ell_\infty$ norm in our definition of Lipschitz continuity allows us to improve existing bounds. Furthermore, under smoothness assumptions, our choice enables us to prove rates that interpolate between $1/\sqrt{n}$ and $1/n$ rates. Application of our results to ListNet, a popular learning to rank method, gives state-of-the-art performance guarantees.
[ "Ambuj Tewari and Sougata Chaudhuri", "['Ambuj Tewari' 'Sougata Chaudhuri']" ]
categories: cs.LG stat.ML
doi: null
id: 1405.0591
year: null
venue: null
link: http://arxiv.org/pdf/1405.0591v1
updated: 2014-05-03T14:38:47Z
published: 2014-05-03T14:38:47Z
Perceptron-like Algorithms and Generalization Bounds for Learning to Rank
Learning to rank is a supervised learning problem where the output space is the space of rankings but the supervision space is the space of relevance scores. We make theoretical contributions to the learning to rank problem both in the online and batch settings. First, we propose a perceptron-like algorithm for learning a ranking function in an online setting. Our algorithm is an extension of the classic perceptron algorithm for the classification problem. Second, in the setting of batch learning, we introduce a sufficient condition for convex ranking surrogates to ensure a generalization bound that is independent of number of objects per query. Our bound holds when linear ranking functions are used: a common practice in many learning to rank algorithms. En route to developing the online algorithm and generalization bound, we propose a novel family of listwise large margin ranking surrogates. Our novel surrogate family is obtained by modifying a well-known pairwise large margin ranking surrogate and is distinct from the listwise large margin surrogates developed using the structured prediction framework. Using the proposed family, we provide a guaranteed upper bound on the cumulative NDCG (or MAP) induced loss under the perceptron-like algorithm. We also show that the novel surrogates satisfy the generalization bound condition.
[ "Sougata Chaudhuri and Ambuj Tewari", "['Sougata Chaudhuri' 'Ambuj Tewari']" ]
categories: cs.IT cs.LG math.IT math.ST stat.TH
doi: null
id: 1405.0782
year: null
venue: null
link: http://arxiv.org/pdf/1405.0782v2
updated: 2014-06-21T01:08:17Z
published: 2014-05-05T05:23:30Z
Optimality guarantees for distributed statistical estimation
Large data sets often require performing distributed statistical estimation, with a full data set split across multiple machines and limited communication between machines. To study such scenarios, we define and study some refinements of the classical minimax risk that apply to distributed settings, comparing to the performance of estimators with access to the entire data. Lower bounds on these quantities provide a precise characterization of the minimum amount of communication required to achieve the centralized minimax risk. We study two classes of distributed protocols: one in which machines send messages independently over channels without feedback, and a second allowing for interactive communication, in which a central server broadcasts the messages from a given machine to all other machines. We establish lower bounds for a variety of problems, including location estimation in several families and parameter estimation in different types of regression models. Our results include a novel class of quantitative data-processing inequalities used to characterize the effects of limited communication.
[ "John C. Duchi and Michael I. Jordan and Martin J. Wainwright and\n Yuchen Zhang", "['John C. Duchi' 'Michael I. Jordan' 'Martin J. Wainwright' 'Yuchen Zhang']" ]
categories: cs.LG
doi: null
id: 1405.0792
year: null
venue: null
link: http://arxiv.org/pdf/1405.0792v1
updated: 2014-05-05T06:49:05Z
published: 2014-05-05T06:49:05Z
On Exact Learning Monotone DNF from Membership Queries
In this paper, we study the problem of learning a monotone DNF with at most $s$ terms of size (number of variables in each term) at most $r$ ($s$ term $r$-MDNF) from membership queries. This problem is equivalent to the problem of learning a general hypergraph using hyperedge-detecting queries, a problem motivated by applications arising in chemical reactions and genome sequencing. We first present new lower bounds for this problem and then present deterministic and randomized adaptive algorithms with query complexities that are almost optimal. All the algorithms we present in this paper run in time linear in the query complexity and the number of variables $n$. In addition, all of the algorithms we present in this paper are asymptotically tight for fixed $r$ and/or $s$.
[ "['Hasan Abasi' 'Nader H. Bshouty' 'Hanna Mazzawi']", "Hasan Abasi and Nader H. Bshouty and Hanna Mazzawi" ]
categories: cs.LG stat.ML
doi: null
id: 1405.0833
year: null
venue: null
link: http://arxiv.org/pdf/1405.0833v1
updated: 2014-05-05T09:29:17Z
published: 2014-05-05T09:29:17Z
Generalized Risk-Aversion in Stochastic Multi-Armed Bandits
We consider the problem of minimizing the regret in stochastic multi-armed bandits, when the measure of goodness of an arm is not the mean return, but some general function of the mean and the variance. We characterize the conditions under which learning is possible and present examples for which no natural algorithm can achieve sublinear regret.
[ "['Alexander Zimin' 'Rasmus Ibsen-Jensen' 'Krishnendu Chatterjee']", "Alexander Zimin and Rasmus Ibsen-Jensen and Krishnendu Chatterjee" ]
categories: cs.AI cs.LG stat.ML
doi: null
id: 1405.0869
year: null
venue: null
link: http://arxiv.org/pdf/1405.0869v1
updated: 2014-05-05T12:01:24Z
published: 2014-05-05T12:01:24Z
Robust Subspace Outlier Detection in High Dimensional Space
Rare data in a large-scale database are called outliers that reveal significant information in the real world. The subspace-based outlier detection is regarded as a feasible approach in very high dimensional space. However, the outliers found in subspaces are only part of the true outliers in high dimensional space, indeed. The outliers hidden in normal-clustered points are sometimes neglected in the projected dimensional subspace. In this paper, we propose a robust subspace method for detecting such inner outliers in a given dataset, which uses two dimensional-projections: detecting outliers in subspaces with local density ratio in the first projected dimensions; finding outliers by comparing neighbor's positions in the second projected dimensions. Each point's weight is calculated by summing up all related values got in the two steps projected dimensions, and then the points scoring the largest weight values are taken as outliers. By taking a series of experiments with the number of dimensions from 10 to 10000, the results show that our proposed method achieves high precision in the case of extremely high dimensional space, and works well in low dimensional space.
[ "Zhana Bao", "['Zhana Bao']" ]
categories: cs.CV cs.LG
doi: null
id: 1405.1005
year: null
venue: null
link: http://arxiv.org/pdf/1405.1005v2
updated: 2014-09-27T18:35:35Z
published: 2014-05-05T19:26:58Z
Comparing apples to apples in the evaluation of binary coding methods
We discuss methodological issues related to the evaluation of unsupervised binary code construction methods for nearest neighbor search. These issues have been widely ignored in the literature. These coding methods attempt to preserve either Euclidean distance or angular (cosine) distance in the binary embedding space. We explain why when comparing a method whose goal is preserving cosine similarity to one designed for preserving Euclidean distance, the original features should be normalized by mapping them to the unit hypersphere before learning the binary mapping functions. To compare a method whose goal is to preserve Euclidean distance to one that preserves cosine similarity, the original feature data must be mapped to a higher dimension by including a bias term in binary mapping functions. These conditions ensure a fair comparison between different binary code methods for the task of nearest neighbor search. Our experiments show that under these conditions the very simple methods (e.g. LSH and ITQ) often outperform recent state-of-the-art methods (e.g. MDSH and OK-means).
[ "Mohammad Rastegari, Shobeir Fakhraei, Jonghyun Choi, David Jacobs,\n Larry S. Davis", "['Mohammad Rastegari' 'Shobeir Fakhraei' 'Jonghyun Choi' 'David Jacobs'\n 'Larry S. Davis']" ]
categories: cs.AI cs.LG stat.ML
doi: null
id: 1405.1027
year: null
venue: null
link: http://arxiv.org/pdf/1405.1027v1
updated: 2014-05-05T12:06:06Z
published: 2014-05-05T12:06:06Z
K-NS: Section-Based Outlier Detection in High Dimensional Space
Finding rare information hidden in a huge amount of data from the Internet is a necessary but complex issue. Many researchers have studied this issue and have found effective methods to detect anomaly data in low dimensional space. However, as the dimension increases, most of these existing methods perform poorly in detecting outliers because of "high dimensional curse". Even though some approaches aim to solve this problem in high dimensional space, they can only detect some anomaly data appearing in low dimensional space and cannot detect all of anomaly data which appear differently in high dimensional space. To cope with this problem, we propose a new k-nearest section-based method (k-NS) in a section-based space. Our proposed approach not only detects outliers in low dimensional space with section-density ratio but also detects outliers in high dimensional space with the ratio of k-nearest section against average value. After taking a series of experiments with the dimension from 10 to 10000, the experiment results show that our proposed method achieves 100% precision and 100% recall result in the case of extremely high dimensional space, and better improvement in low dimensional space compared to our previously proposed method.
[ "Zhana Bao", "['Zhana Bao']" ]
categories: cs.LG cs.IT math.IT stat.ML
doi: null
id: 1405.1119
year: null
venue: null
link: http://arxiv.org/pdf/1405.1119v2
updated: 2015-02-01T12:00:07Z
published: 2014-05-06T01:17:26Z
Feature selection for classification with class-separability strategy and data envelopment analysis
In this paper, a novel feature selection method is presented, which is based on Class-Separability (CS) strategy and Data Envelopment Analysis (DEA). To better capture the relationship between features and the class, class labels are separated into individual variables and relevance and redundancy are explicitly handled on each class label. Super-efficiency DEA is employed to evaluate and rank features via their conditional dependence scores on all class labels, and the feature with maximum super-efficiency score is then added in the conditioning set for conditional dependence estimation in the next iteration, in such a way as to iteratively select features and get the final selected features. Eventually, experiments are conducted to evaluate the effectiveness of proposed method comparing with four state-of-the-art methods from the viewpoint of classification accuracy. Empirical results verify the feasibility and the superiority of proposed feature selection method.
[ "['Yishi Zhang' 'Chao Yang' 'Anrong Yang' 'Chan Xiong' 'Xingchi Zhou'\n 'Zigang Zhang']", "Yishi Zhang, Chao Yang, Anrong Yang, Chan Xiong, Xingchi Zhou, Zigang\n Zhang" ]
categories: stat.ML cs.LG
doi: 10.1016/j.neucom.2014.05.094
id: 1405.1297
year: null
venue: null
link: http://arxiv.org/abs/1405.1297v2
updated: 2016-06-03T16:10:19Z
published: 2014-05-06T15:05:02Z
Combining Multiple Clusterings via Crowd Agreement Estimation and Multi-Granularity Link Analysis
The clustering ensemble technique aims to combine multiple clusterings into a probably better and more robust clustering and has been receiving an increasing attention in recent years. There are mainly two aspects of limitations in the existing clustering ensemble approaches. Firstly, many approaches lack the ability to weight the base clusterings without access to the original data and can be affected significantly by the low-quality, or even ill clusterings. Secondly, they generally focus on the instance level or cluster level in the ensemble system and fail to integrate multi-granularity cues into a unified model. To address these two limitations, this paper proposes to solve the clustering ensemble problem via crowd agreement estimation and multi-granularity link analysis. We present the normalized crowd agreement index (NCAI) to evaluate the quality of base clusterings in an unsupervised manner and thus weight the base clusterings in accordance with their clustering validity. To explore the relationship between clusters, the source aware connected triple (SACT) similarity is introduced with regard to their common neighbors and the source reliability. Based on NCAI and multi-granularity information collected among base clusterings, clusters, and data instances, we further propose two novel consensus functions, termed weighted evidence accumulation clustering (WEAC) and graph partitioning with multi-granularity link analysis (GP-MGLA) respectively. The experiments are conducted on eight real-world datasets. The experimental results demonstrate the effectiveness and robustness of the proposed methods.
[ "Dong Huang and Jian-Huang Lai and Chang-Dong Wang", "['Dong Huang' 'Jian-Huang Lai' 'Chang-Dong Wang']" ]
categories: cs.CE cs.LG
doi: 10.14445/22312803/IJCTT-V10P137
id: 1405.1304
year: null
venue: null
link: http://arxiv.org/abs/1405.1304v1
updated: 2014-05-03T14:26:42Z
published: 2014-05-03T14:26:42Z
Application of Machine Learning Techniques in Aquaculture
In this paper we present applications of different machine learning algorithms in aquaculture. Machine learning algorithms learn models from historical data. In aquaculture historical data are obtained from farm practices, yields, and environmental data sources. Associations between these different variables can be obtained by applying machine learning algorithms to historical data. In this paper we present applications of different machine learning algorithms in aquaculture applications.
[ "Akhlaqur Rahman and Sumaira Tasnim", "['Akhlaqur Rahman' 'Sumaira Tasnim']" ]
categories: stat.ML cs.LG cs.NE
doi: null
id: 1405.1380
year: null
venue: null
link: http://arxiv.org/pdf/1405.1380v4
updated: 2015-06-15T23:52:59Z
published: 2014-05-06T17:41:33Z
Is Joint Training Better for Deep Auto-Encoders?
Traditionally, when generative models of data are developed via deep architectures, greedy layer-wise pre-training is employed. In a well-trained model, the lower layer of the architecture models the data distribution conditional upon the hidden variables, while the higher layers model the hidden distribution prior. But due to the greedy scheme of the layerwise training technique, the parameters of lower layers are fixed when training higher layers. This makes it extremely challenging for the model to learn the hidden distribution prior, which in turn leads to a suboptimal model for the data distribution. We therefore investigate joint training of deep autoencoders, where the architecture is viewed as one stack of two or more single-layer autoencoders. A single global reconstruction objective is jointly optimized, such that the objective for the single autoencoders at each layer acts as a local, layer-level regularizer. We empirically evaluate the performance of this joint training scheme and observe that it not only learns a better data model, but also learns better higher layer representations, which highlights its potential for unsupervised feature learning. In addition, we find that the usage of regularizations in the joint training scheme is crucial in achieving good performance. In the supervised setting, joint training also shows superior performance when training deeper models. The joint training framework can thus provide a platform for investigating more efficient usage of different types of regularizers, especially in light of the growing volumes of available unlabeled data.
[ "['Yingbo Zhou' 'Devansh Arpit' 'Ifeoma Nwogu' 'Venu Govindaraju']", "Yingbo Zhou, Devansh Arpit, Ifeoma Nwogu, Venu Govindaraju" ]
categories: cs.NE cs.LG stat.ML
doi: null
id: 1405.1436
year: null
venue: null
link: http://arxiv.org/pdf/1405.1436v1
updated: 2014-05-06T20:02:46Z
published: 2014-05-06T20:02:46Z
Training Restricted Boltzmann Machine by Perturbation
A new approach to maximum likelihood learning of discrete graphical models and RBM in particular is introduced. Our method, Perturb and Descend (PD), is inspired by two ideas: (I) the perturb and MAP method for sampling and (II) learning by Contrastive Divergence minimization. In contrast to perturb and MAP, PD leverages training data to learn the models that do not allow efficient MAP estimation. During the learning, to produce a sample from the current model, we start from a training data point and descend in the energy landscape of the "perturbed model", for a fixed number of steps, or until a local optimum is reached. For RBM, this involves linear calculations and thresholding which can be very fast. Furthermore we show that the amount of perturbation is closely related to the temperature parameter and it can regularize the model by producing robust features resulting in sparse hidden layer activation.
[ "Siamak Ravanbakhsh, Russell Greiner, Brendan Frey", "['Siamak Ravanbakhsh' 'Russell Greiner' 'Brendan Frey']" ]
categories: cs.LG
doi: null
id: 1405.1503
year: null
venue: null
link: http://arxiv.org/pdf/1405.1503v3
updated: 2015-02-21T02:23:24Z
published: 2014-05-07T04:39:01Z
Adaptation Algorithm and Theory Based on Generalized Discrepancy
We present a new algorithm for domain adaptation improving upon a discrepancy minimization algorithm previously shown to outperform a number of algorithms for this task. Unlike many previous algorithms for domain adaptation, our algorithm does not consist of a fixed reweighting of the losses over the training sample. We show that our algorithm benefits from a solid theoretical foundation and more favorable learning bounds than discrepancy minimization. We present a detailed description of our algorithm and give several efficient solutions for solving its optimization problem. We also report the results of several experiments showing that it outperforms discrepancy minimization.
[ "['Corinna Cortes' 'Mehryar Mohri' 'Andres Muñoz Medina']", "Corinna Cortes and Mehryar Mohri and Andres Mu\\~noz Medina" ]
categories: cs.LG cs.AI cs.IT math.IT
doi: null
id: 1405.1513
year: null
venue: null
link: http://arxiv.org/pdf/1405.1513v1
updated: 2014-05-07T06:10:47Z
published: 2014-05-07T06:10:47Z
A Mathematical Theory of Learning
In this paper, a mathematical theory of learning is proposed that has many parallels with information theory. We consider Vapnik's General Setting of Learning in which the learning process is defined to be the act of selecting a hypothesis in response to a given training set. Such hypothesis can, for example, be a decision boundary in classification, a set of centroids in clustering, or a set of frequent item-sets in association rule mining. Depending on the hypothesis space and how the final hypothesis is selected, we show that a learning process can be assigned a numeric score, called learning capacity, which is analogous to Shannon's channel capacity and satisfies similar interesting properties as well such as the data-processing inequality and the information-cannot-hurt inequality. In addition, learning capacity provides the tightest possible bound on the difference between true risk and empirical risk of the learning process for all loss functions that are parametrized by the chosen hypothesis. It is also shown that the notion of learning capacity equivalently quantifies how sensitive the choice of the final hypothesis is to a small perturbation in the training set. Consequently, algorithmic stability is both necessary and sufficient for generalization. While the theory does not rely on concentration inequalities, we finally show that analogs to classical results in learning theory using the Probably Approximately Correct (PAC) model can be immediately deduced using this theory, and conclude with information-theoretic bounds to learning capacity.
[ "['Ibrahim Alabdulmohsin']", "Ibrahim Alabdulmohsin" ]
categories: math.ST cs.LG stat.ML stat.TH
doi: null
id: 1405.1533
year: null
venue: null
link: http://arxiv.org/pdf/1405.1533v2
updated: 2014-05-08T20:12:02Z
published: 2014-05-07T08:33:41Z
A consistent deterministic regression tree for non-parametric prediction of time series
We study online prediction of bounded stationary ergodic processes. To do so, we consider the setting of prediction of individual sequences and build a deterministic regression tree that performs asymptotically as well as the best L-Lipschitz constant predictors. Then, we show why the obtained regret bound entails the asymptotical optimality with respect to the class of bounded stationary ergodic processes.
[ "Pierre Gaillard (GREGH), Paul Baudin (INRIA Rocquencourt)", "['Pierre Gaillard' 'Paul Baudin']" ]
categories: null
doi: null
id: 1405.1535
year: null
venue: null
link: http://arxiv.org/pdf/1405.1535v1
updated: 2014-05-07T09:06:28Z
published: 2014-05-07T09:06:28Z
Learning Boolean Halfspaces with Small Weights from Membership Queries
We consider the problem of proper learning a Boolean Halfspace with integer weights $\{0,1,\ldots,t\}$ from membership queries only. The best known algorithm for this problem is an adaptive algorithm that asks $n^{O(t^5)}$ membership queries, where the best lower bound for the number of membership queries is $n^t$ [Learning Threshold Functions with Small Weights Using Membership Queries, COLT 1999]. In this paper we close this gap and give an adaptive proper learning algorithm with two rounds that asks $n^{O(t)}$ membership queries. We also give a non-adaptive proper learning algorithm that asks $n^{O(t^3)}$ membership queries.
[ "['Hasan Abasi' 'Ali Z. Abdi' 'Nader H. Bshouty']" ]
categories: cs.LG cs.IT math.IT
doi: null
id: 1405.1665
year: null
venue: null
link: http://arxiv.org/pdf/1405.1665v2
updated: 2014-11-08T03:06:04Z
published: 2014-05-07T16:44:21Z
On Communication Cost of Distributed Statistical Estimation and Dimensionality
We explore the connection between dimensionality and communication cost in distributed learning problems. Specifically we study the problem of estimating the mean $\vec{\theta}$ of an unknown $d$ dimensional gaussian distribution in the distributed setting. In this problem, the samples from the unknown distribution are distributed among $m$ different machines. The goal is to estimate the mean $\vec{\theta}$ at the optimal minimax rate while communicating as few bits as possible. We show that in this setting, the communication cost scales linearly in the number of dimensions i.e. one needs to deal with different dimensions individually. Applying this result to previous lower bounds for one dimension in the interactive setting \cite{ZDJW13} and to our improved bounds for the simultaneous setting, we prove new lower bounds of $\Omega(md/\log(m))$ and $\Omega(md)$ for the bits of communication needed to achieve the minimax squared loss, in the interactive and simultaneous settings respectively. To complement, we also demonstrate an interactive protocol achieving the minimax squared loss with $O(md)$ bits of communication, which improves upon the simple simultaneous protocol by a logarithmic factor. Given the strong lower bounds in the general setting, we initiate the study of the distributed parameter estimation problems with structured parameters. Specifically, when the parameter is promised to be $s$-sparse, we show a simple thresholding based protocol that achieves the same squared loss while saving a $d/s$ factor of communication. We conjecture that the tradeoff between communication and squared loss demonstrated by this protocol is essentially optimal up to logarithmic factor.
[ "Ankit Garg and Tengyu Ma and Huy L. Nguyen", "['Ankit Garg' 'Tengyu Ma' 'Huy L. Nguyen']" ]
categories: cs.CV cs.LG
doi: null
id: 1405.1966
year: null
venue: null
link: http://arxiv.org/pdf/1405.1966v1
updated: 2014-04-15T16:52:38Z
published: 2014-04-15T16:52:38Z
Texture Based Image Segmentation of Chili Pepper X-Ray Images Using Gabor Filter
Texture segmentation is the process of partitioning an image into regions with different textures containing a similar group of pixels. Detecting the discontinuity of the filter's output and their statistical properties help in segmenting and classifying a given image with different texture regions. In this proposed paper, chili x-ray image texture segmentation is performed by using a Gabor filter. The texture segmented result obtained from the Gabor filter is fed into three texture filters, namely Entropy, Standard Deviation and Range filter. After performing texture analysis, features can be extracted by using Statistical methods. In this paper Gray Level Co-occurrence Matrices and First order statistics are used as feature extraction methods. Features extracted from statistical methods are given to a Support Vector Machine (SVM) classifier. Using this methodology, it is found that texture segmentation followed by the Gray Level Co-occurrence Matrix feature extraction method gives a higher accuracy rate of 84% when compared with the First order feature extraction method. Key Words: Texture segmentation, Texture filter, Gabor filter, Feature extraction methods, SVM classifier.
[ "M.Rajalakshmi and Dr. P.Subashini", "['M. Rajalakshmi' 'Dr. P. Subashini']" ]
categories: cs.LG cs.CV
doi: null
id: 1405.2102
year: null
venue: null
link: http://arxiv.org/pdf/1405.2102v1
updated: 2014-05-08T21:29:04Z
published: 2014-05-08T21:29:04Z
Improving Image Clustering using Sparse Text and the Wisdom of the Crowds
We propose a method to improve image clustering using sparse text and the wisdom of the crowds. In particular, we present a method to fuse two different kinds of document features, image and text features, and use a common dictionary or "wisdom of the crowds" as the connection between the two different kinds of documents. With the proposed fusion matrix, we use topic modeling via non-negative matrix factorization to cluster documents.
[ "['Anna Ma' 'Arjuna Flenner' 'Deanna Needell' 'Allon G. Percus']", "Anna Ma, Arjuna Flenner, Deanna Needell, Allon G. Percus" ]
categories: cs.NE cs.LG
doi: null
id: 1405.2262
year: null
venue: null
link: http://arxiv.org/pdf/1405.2262v1
updated: 2014-05-09T15:23:06Z
published: 2014-05-09T15:23:06Z
Training Deep Fourier Neural Networks To Fit Time-Series Data
We present a method for training a deep neural network containing sinusoidal activation functions to fit to time-series data. Weights are initialized using a fast Fourier transform, then trained with regularization to improve generalization. A simple dynamic parameter tuning method is employed to adjust both the learning rate and regularization term, such that stability and efficient training are both achieved. We show how deeper layers can be utilized to model the observed sequence using a sparser set of sinusoid units, and how non-uniform regularization can improve generalization by promoting the shifting of weight toward simpler units. The method is demonstrated with time-series problems to show that it leads to effective extrapolation of nonlinear trends.
[ "['Michael S. Gashler' 'Stephen C. Ashmore']", "Michael S. Gashler and Stephen C. Ashmore" ]
categories: cs.LG astro-ph.IM stat.ML
doi: null
id: 1405.2278
year: null
venue: null
link: http://arxiv.org/pdf/1405.2278v1
updated: 2014-05-09T16:14:47Z
published: 2014-05-09T16:14:47Z
Hellinger Distance Trees for Imbalanced Streams
Classifiers trained on data sets possessing an imbalanced class distribution are known to exhibit poor generalisation performance. This is known as the imbalanced learning problem. The problem becomes particularly acute when we consider incremental classifiers operating on imbalanced data streams, especially when the learning objective is rare class identification. As accuracy may provide a misleading impression of performance on imbalanced data, existing stream classifiers based on accuracy can suffer poor minority class performance on imbalanced streams, with the result being low minority class recall rates. In this paper we address this deficiency by proposing the use of the Hellinger distance measure, as a very fast decision tree split criterion. We demonstrate that by using Hellinger a statistically significant improvement in recall rates on imbalanced data streams can be achieved, with an acceptable increase in the false positive rate.
[ "R. J. Lyon, J. M. Brooke, J. D. Knowles, B. W. Stappers", "['R. J. Lyon' 'J. M. Brooke' 'J. D. Knowles' 'B. W. Stappers']" ]
categories: cs.LG stat.ML
doi: null
id: 1405.2294
year: null
venue: null
link: http://arxiv.org/pdf/1405.2294v2
updated: 2016-12-14T02:06:49Z
published: 2014-04-25T15:52:47Z
Nonparametric Detection of Anomalous Data Streams
A nonparametric anomalous hypothesis testing problem is investigated, in which there are n sequences in total, with s anomalous sequences to be detected. Each typical sequence contains m independent and identically distributed (i.i.d.) samples drawn from a distribution p, whereas each anomalous sequence contains m i.i.d. samples drawn from a distribution q that is distinct from p. The distributions p and q are assumed to be unknown in advance. Distribution-free tests are constructed using maximum mean discrepancy as the metric, which is based on mean embeddings of distributions into a reproducing kernel Hilbert space. The probability of error is bounded as a function of the sample size m, the number s of anomalous sequences and the number n of sequences. It is then shown that with s known, the constructed test is exponentially consistent if m is greater than a constant factor of log n, for any p and q, whereas with s unknown, m should have an order strictly greater than log n. Furthermore, it is shown that no test can be consistent for arbitrary p and q if m is less than a constant factor of log n, thus the order-level optimality of the proposed test is established. Numerical results are provided to demonstrate that our tests outperform (or perform as well as) the tests based on other competitive approaches under various cases.
[ "Shaofeng Zou, Yingbin Liang, H. Vincent Poor, Xinghua Shi", "['Shaofeng Zou' 'Yingbin Liang' 'H. Vincent Poor' 'Xinghua Shi']" ]
categories: stat.ML cs.LG stat.ME
doi: null
id: 1405.2377
year: null
venue: null
link: http://arxiv.org/pdf/1405.2377v1
updated: 2014-05-10T02:03:22Z
published: 2014-05-10T02:03:22Z
A Hybrid Monte Carlo Architecture for Parameter Optimization
Much recent research has been conducted in the area of Bayesian learning, particularly with regard to the optimization of hyper-parameters via Gaussian process regression. The methodologies rely chiefly on the method of maximizing the expected improvement of a score function with respect to adjustments in the hyper-parameters. In this work, we present a novel algorithm that exploits notions of confidence intervals and uncertainties to enable the discovery of the best optimal within a targeted region of the parameter space. We demonstrate the efficacy of our algorithm with respect to machine learning problems and show cases where our algorithm is competitive with the method of maximizing expected improvement.
[ "James Brofos", "['James Brofos']" ]
categories: cs.LG
doi: null
id: 1405.2420
year: null
venue: null
link: http://arxiv.org/pdf/1405.2420v1
updated: 2014-05-10T11:23:08Z
published: 2014-05-10T11:23:08Z
Optimal Learners for Multiclass Problems
The fundamental theorem of statistical learning states that for binary classification problems, any Empirical Risk Minimization (ERM) learning rule has close to optimal sample complexity. In this paper we seek a generic optimal learner for multiclass prediction. We start by proving a surprising result: a generic optimal multiclass learner must be improper, namely, it must have the ability to output hypotheses which do not belong to the hypothesis class, even though it knows that all the labels are generated by some hypothesis from the class. In particular, no ERM learner is optimal. This brings back the fundamental question of "how to learn"? We give a complete answer to this question by giving a new analysis of the one-inclusion multiclass learner of Rubinstein et al (2006) showing that its sample complexity is essentially optimal. Then, we turn to study the popular hypothesis class of generalized linear classifiers. We derive optimal learners that, unlike the one-inclusion algorithm, are computationally efficient. Furthermore, we show that the sample complexity of these learners is better than the sample complexity of the ERM rule, thus settling in the negative an open question due to Collins (2005).
[ "Amit Daniely and Shai Shalev-Shwartz", "['Amit Daniely' 'Shai Shalev-Shwartz']" ]
categories: stat.ML cs.LG
doi: null
id: 1405.2432
year: null
venue: null
link: http://arxiv.org/pdf/1405.2432v1
updated: 2014-05-10T13:34:22Z
published: 2014-05-10T13:34:22Z
Functional Bandits
We introduce the functional bandit problem, where the objective is to find an arm that optimises a known functional of the unknown arm-reward distributions. These problems arise in many settings such as maximum entropy methods in natural language processing, and risk-averse decision-making, but current best-arm identification techniques fail in these domains. We propose a new approach, that combines functional estimation and arm elimination, to tackle this problem. This method achieves provably efficient performance guarantees. In addition, we illustrate this method on a number of important functionals in risk management and information theory, and refine our generic theoretical results in those cases.
[ "['Long Tran-Thanh' 'Jia Yuan Yu']", "Long Tran-Thanh and Jia Yuan Yu" ]
categories: cs.LG
doi: null
id: 1405.2476
year: null
venue: null
link: http://arxiv.org/pdf/1405.2476v4
updated: 2016-10-10T20:56:01Z
published: 2014-05-10T22:30:38Z
A Canonical Semi-Deterministic Transducer
We prove the existence of a canonical form for semi-deterministic transducers with incomparable sets of output strings. Based on this, we develop an algorithm which learns semi-deterministic transducers given access to translation queries. We also prove that there is no learning algorithm for semi-deterministic transducers that uses only domain knowledge.
[ "Achilles Beros, Colin de la Higuera", "['Achilles Beros' 'Colin de la Higuera']" ]
categories: cs.AI cs.LG stat.ML
doi: null
id: 1405.2600
year: null
venue: null
link: http://arxiv.org/pdf/1405.2600v4
updated: 2017-06-03T12:03:19Z
published: 2014-05-11T23:11:52Z
Learning from networked examples
Many machine learning algorithms are based on the assumption that training examples are drawn independently. However, this assumption does not hold anymore when learning from a networked sample because two or more training examples may share some common objects, and hence share the features of these shared objects. We show that the classic approach of ignoring this problem potentially can have a harmful effect on the accuracy of statistics, and then consider alternatives. One of these is to only use independent examples, discarding other information. However, this is clearly suboptimal. We analyze sample error bounds in this networked setting, providing significantly improved results. An important component of our approach is formed by efficient sample weighting schemes, which leads to novel concentration inequalities.
[ "Yuyi Wang and Jan Ramon and Zheng-Chu Guo", "['Yuyi Wang' 'Jan Ramon' 'Zheng-Chu Guo']" ]
categories: stat.ML cs.LG
doi: null
id: 1405.2606
year: null
venue: null
link: http://arxiv.org/pdf/1405.2606v1
updated: 2014-05-12T00:26:12Z
published: 2014-05-12T00:26:12Z
Structural Return Maximization for Reinforcement Learning
Batch Reinforcement Learning (RL) algorithms attempt to choose a policy from a designer-provided class of policies given a fixed set of training data. Choosing the policy which maximizes an estimate of return often leads to over-fitting when only limited data is available, due to the size of the policy class in relation to the amount of data available. In this work, we focus on learning policy classes that are appropriately sized to the amount of data available. We accomplish this by using the principle of Structural Risk Minimization, from Statistical Learning Theory, which uses Rademacher complexity to identify a policy class that maximizes a bound on the return of the best policy in the chosen policy class, given the available data. Unlike similar batch RL approaches, our bound on return requires only extremely weak assumptions on the true system.
[ "['Joshua Joseph' 'Javier Velez' 'Nicholas Roy']", "Joshua Joseph, Javier Velez, Nicholas Roy" ]
categories: math.PR cs.LG stat.ML
doi: null
id: 1405.2639
year: null
venue: null
link: http://arxiv.org/pdf/1405.2639v4
updated: 2015-12-01T21:15:23Z
published: 2014-05-12T06:32:49Z
Sharp Finite-Time Iterated-Logarithm Martingale Concentration
We give concentration bounds for martingales that are uniform over finite times and extend classical Hoeffding and Bernstein inequalities. We also demonstrate our concentration bounds to be optimal with a matching anti-concentration inequality, proved using the same method. Together these constitute a finite-time version of the law of the iterated logarithm, and shed light on the relationship between it and the central limit theorem.
[ "Akshay Balsubramani", "['Akshay Balsubramani']" ]
categories: cs.LG
doi: null
id: 1405.2652
year: null
venue: null
link: http://arxiv.org/pdf/1405.2652v6
updated: 2014-09-15T08:32:45Z
published: 2014-05-12T07:45:54Z
Selecting Near-Optimal Approximate State Representations in Reinforcement Learning
We consider a reinforcement learning setting introduced in (Maillard et al., NIPS 2011) where the learner does not have explicit access to the states of the underlying Markov decision process (MDP). Instead, she has access to several models that map histories of past interactions to states. Here we improve over known regret bounds in this setting, and more importantly generalize to the case where the models given to the learner do not contain a true model resulting in an MDP representation but only approximations of it. We also give improved error bounds for state aggregation.
[ "Ronald Ortner, Odalric-Ambrym Maillard, Daniil Ryabko", "['Ronald Ortner' 'Odalric-Ambrym Maillard' 'Daniil Ryabko']" ]
categories: cs.AI cs.LG stat.ML
doi: 10.1162/NECO_a_00732
id: 1405.2664
year: null
venue: null
link: http://arxiv.org/abs/1405.2664v2
updated: 2015-06-18T12:06:14Z
published: 2014-05-12T08:20:21Z
FastMMD: Ensemble of Circular Discrepancy for Efficient Two-Sample Test
The maximum mean discrepancy (MMD) is a recently proposed test statistic for two-sample test. Its quadratic time complexity, however, greatly hampers its availability to large-scale applications. To accelerate the MMD calculation, in this study we propose an efficient method called FastMMD. The core idea of FastMMD is to equivalently transform the MMD with shift-invariant kernels into the amplitude expectation of a linear combination of sinusoid components based on Bochner's theorem and Fourier transform (Rahimi & Recht, 2007). Taking advantage of sampling of Fourier transform, FastMMD decreases the time complexity for MMD calculation from $O(N^2 d)$ to $O(L N d)$, where $N$ and $d$ are the size and dimension of the sample set, respectively. Here $L$ is the number of basis functions for approximating kernels which determines the approximation accuracy. For kernels that are spherically invariant, the computation can be further accelerated to $O(L N \log d)$ by using the Fastfood technique (Le et al., 2013). The uniform convergence of our method has also been theoretically proved in both unbiased and biased estimates. We have further provided a geometric explanation for our method, namely ensemble of circular discrepancy, which facilitates us to understand the insight of MMD, and is hopeful to help arouse more extensive metrics for assessing two-sample test. Experimental results substantiate that FastMMD is with similar accuracy as exact MMD, while with faster computation speed and lower variance than the existing MMD approximation methods.
[ "['Ji Zhao' 'Deyu Meng']", "Ji Zhao, Deyu Meng" ]
categories: stat.ML cs.LG math.OC
doi: null
id: 1405.2690
year: null
venue: null
link: http://arxiv.org/pdf/1405.2690v1
updated: 2014-05-12T09:59:59Z
published: 2014-05-12T09:59:59Z
Policy Gradients for CVaR-Constrained MDPs
We study a risk-constrained version of the stochastic shortest path (SSP) problem, where the risk measure considered is Conditional Value-at-Risk (CVaR). We propose two algorithms that obtain a locally risk-optimal policy by employing four tools: stochastic approximation, mini-batches, policy gradients and importance sampling. Both the algorithms incorporate a CVaR estimation procedure, along the lines of Bardou et al. [2009], which in turn is based on Rockafellar-Uryasev's representation for CVaR and utilize the likelihood ratio principle for estimating the gradient of the sum of one cost function (objective of the SSP) and the gradient of the CVaR of the sum of another cost function (in the constraint of SSP). The algorithms differ in the manner in which they approximate the CVaR estimates/necessary gradients - the first algorithm uses stochastic approximation, while the second employs mini-batches in the spirit of Monte Carlo methods. We establish asymptotic convergence of both the algorithms. Further, since estimating CVaR is related to rare-event simulation, we incorporate an importance sampling based variance reduction scheme into our proposed algorithms.
[ "['Prashanth L. A.']", "Prashanth L.A." ]
categories: cs.LG cs.AI stat.ML
doi: null
id: 1405.2798
year: null
venue: null
link: http://arxiv.org/pdf/1405.2798v1
updated: 2014-05-12T15:18:15Z
published: 2014-05-12T15:18:15Z
Two-Stage Metric Learning
In this paper, we present a novel two-stage metric learning algorithm. We first map each learning instance to a probability distribution by computing its similarities to a set of fixed anchor points. Then, we define the distance in the input data space as the Fisher information distance on the associated statistical manifold. This induces in the input data space a new family of distance metric with unique properties. Unlike kernelized metric learning, we do not require the similarity measure to be positive semi-definite. Moreover, it can also be interpreted as a local metric learning algorithm with well defined distance approximation. We evaluate its performance on a number of datasets. It outperforms significantly other metric learning methods and SVM.
[ "Jun Wang, Ke Sun, Fei Sha, Stephane Marchand-Maillet, Alexandros\n Kalousis", "['Jun Wang' 'Ke Sun' 'Fei Sha' 'Stephane Marchand-Maillet'\n 'Alexandros Kalousis']" ]
categories: cs.DS cs.GT cs.LG
doi: null
id: 1405.2875
year: null
venue: null
link: http://arxiv.org/pdf/1405.2875v2
updated: 2015-09-02T04:21:07Z
published: 2014-05-12T18:52:28Z
Adaptive Contract Design for Crowdsourcing Markets: Bandit Algorithms for Repeated Principal-Agent Problems
Crowdsourcing markets have emerged as a popular platform for matching available workers with tasks to complete. The payment for a particular task is typically set by the task's requester, and may be adjusted based on the quality of the completed work, for example, through the use of "bonus" payments. In this paper, we study the requester's problem of dynamically adjusting quality-contingent payments for tasks. We consider a multi-round version of the well-known principal-agent model, whereby in each round a worker makes a strategic choice of the effort level which is not directly observable by the requester. In particular, our formulation significantly generalizes the budget-free online task pricing problems studied in prior work. We treat this problem as a multi-armed bandit problem, with each "arm" representing a potential contract. To cope with the large (and in fact, infinite) number of arms, we propose a new algorithm, AgnosticZooming, which discretizes the contract space into a finite number of regions, effectively treating each region as a single arm. This discretization is adaptively refined, so that more promising regions of the contract space are eventually discretized more finely. We analyze this algorithm, showing that it achieves regret sublinear in the time horizon and substantially improves over non-adaptive discretization (which is the only competing approach in the literature). Our results advance the state of art on several different topics: the theory of crowdsourcing markets, principal-agent problems, multi-armed bandits, and dynamic pricing.
[ "Chien-Ju Ho, Aleksandrs Slivkins, Jennifer Wortman Vaughan", "['Chien-Ju Ho' 'Aleksandrs Slivkins' 'Jennifer Wortman Vaughan']" ]
categories: cs.AI cs.LG stat.ML
doi: null
id: 1405.2878
year: null
venue: null
link: http://arxiv.org/pdf/1405.2878v1
updated: 2014-05-12T19:11:03Z
published: 2014-05-12T19:11:03Z
Approximate Policy Iteration Schemes: A Comparison
We consider the infinite-horizon discounted optimal control problem formalized by Markov Decision Processes. We focus on several approximate variations of the Policy Iteration algorithm: Approximate Policy Iteration, Conservative Policy Iteration (CPI), a natural adaptation of the Policy Search by Dynamic Programming algorithm to the infinite-horizon case (PSDP$_\infty$), and the recently proposed Non-Stationary Policy iteration (NSPI(m)). For all algorithms, we describe performance bounds, and make a comparison by paying a particular attention to the concentrability constants involved, the number of iterations and the memory required. Our analysis highlights the following points: 1) The performance guarantee of CPI can be arbitrarily better than that of API/API($\alpha$), but this comes at the cost of a relative---exponential in $\frac{1}{\epsilon}$---increase of the number of iterations. 2) PSDP$_\infty$ enjoys the best of both worlds: its performance guarantee is similar to that of CPI, but within a number of iterations similar to that of API. 3) Contrary to API that requires a constant memory, the memory needed by CPI and PSDP$_\infty$ is proportional to their number of iterations, which may be problematic when the discount factor $\gamma$ is close to 1 or the approximation error $\epsilon$ is close to $0$; we show that the NSPI(m) algorithm allows to make an overall trade-off between memory and performance. Simulations with these schemes confirm our analysis.
[ "['Bruno Scherrer']", "Bruno Scherrer (INRIA Nancy - Grand Est / LORIA)" ]
categories: stat.ML cs.LG math.OC
doi: null
id: 1405.3080
year: null
venue: null
link: http://arxiv.org/pdf/1405.3080v1
updated: 2014-05-13T09:45:49Z
published: 2014-05-13T09:45:49Z
Accelerating Minibatch Stochastic Gradient Descent using Stratified Sampling
Stochastic Gradient Descent (SGD) is a popular optimization method which has been applied to many important machine learning tasks such as Support Vector Machines and Deep Neural Networks. In order to parallelize SGD, minibatch training is often employed. The standard approach is to uniformly sample a minibatch at each step, which often leads to high variance. In this paper we propose a stratified sampling strategy, which divides the whole dataset into clusters with low within-cluster variance; we then take examples from these clusters using a stratified sampling technique. It is shown that the convergence rate can be significantly improved by the algorithm. Encouraging experimental results confirm the effectiveness of the proposed method.
[ "['Peilin Zhao' 'Tong Zhang']", "Peilin Zhao, Tong Zhang" ]
stat.ML cs.LG
null
1405.3162
null
null
http://arxiv.org/pdf/1405.3162v1
2014-05-13T14:17:11Z
2014-05-13T14:17:11Z
Circulant Binary Embedding
Binary embedding of high-dimensional data requires long codes to preserve the discriminative power of the input space. Traditional binary coding methods often suffer from very high computation and storage costs in such a scenario. To address this problem, we propose Circulant Binary Embedding (CBE) which generates binary codes by projecting the data with a circulant matrix. The circulant structure enables the use of Fast Fourier Transformation to speed up the computation. Compared to methods that use unstructured matrices, the proposed method improves the time complexity from $\mathcal{O}(d^2)$ to $\mathcal{O}(d\log{d})$, and the space complexity from $\mathcal{O}(d^2)$ to $\mathcal{O}(d)$ where $d$ is the input dimensionality. We also propose a novel time-frequency alternating optimization to learn data-dependent circulant projections, which alternatively minimizes the objective in original and Fourier domains. We show by extensive experiments that the proposed approach gives much better performance than the state-of-the-art approaches for fixed time, and provides much faster computation with no performance degradation for fixed number of bits.
[ "['Felix X. Yu' 'Sanjiv Kumar' 'Yunchao Gong' 'Shih-Fu Chang']", "Felix X. Yu, Sanjiv Kumar, Yunchao Gong, Shih-Fu Chang" ]
cs.LG
null
1405.3167
null
null
http://arxiv.org/pdf/1405.3167v1
2014-05-13T14:36:59Z
2014-05-13T14:36:59Z
Clustering, Hamming Embedding, Generalized LSH and the Max Norm
We study the convex relaxation of clustering and hamming embedding, focusing on the asymmetric case (co-clustering and asymmetric hamming embedding), understanding their relationship to LSH as studied by (Charikar 2002) and to the max-norm ball, and the differences between their symmetric and asymmetric versions.
[ "['Behnam Neyshabur' 'Yury Makarychev' 'Nathan Srebro']", "Behnam Neyshabur, Yury Makarychev, Nathan Srebro" ]
cs.LG cs.SI physics.soc-ph
null
1405.3210
null
null
http://arxiv.org/pdf/1405.3210v1
2014-05-13T16:08:55Z
2014-05-13T16:08:55Z
Locally Boosted Graph Aggregation for Community Detection
Learning the right graph representation from noisy, multi-source data has garnered significant interest in recent years. A central tenet of this problem is relational learning. Here the objective is to incorporate the partial information each data source gives us in a way that captures the true underlying relationships. To address this challenge, we present a general, boosting-inspired framework for combining weak evidence of entity associations into a robust similarity metric. Building on previous work, we explore the extent to which different local quality measurements yield graph representations that are suitable for community detection. We present empirical results on a variety of datasets demonstrating the utility of this framework, especially with respect to real datasets where noise and scale present serious challenges. Finally, we prove a convergence theorem in an ideal setting and outline future research into other application domains.
[ "Jeremy Kun, Rajmonda Caceres, Kevin Carter", "['Jeremy Kun' 'Rajmonda Caceres' 'Kevin Carter']" ]
stat.CO cs.LG stat.ML
null
1405.3222
null
null
http://arxiv.org/pdf/1405.3222v2
2014-11-03T16:44:50Z
2014-05-13T16:42:45Z
Efficient Implementations of the Generalized Lasso Dual Path Algorithm
We consider efficient implementations of the generalized lasso dual path algorithm of Tibshirani and Taylor (2011). We first describe a generic approach that covers any penalty matrix D and any (full column rank) matrix X of predictor variables. We then describe fast implementations for the special cases of trend filtering problems, fused lasso problems, and sparse fused lasso problems, both with X=I and a general matrix X. These specialized implementations offer a considerable improvement over the generic implementation, both in terms of numerical stability and efficiency of the solution path computation. These algorithms are all available for use in the genlasso R package, which can be found in the CRAN repository.
[ "['Taylor Arnold' 'Ryan Tibshirani']", "Taylor Arnold and Ryan Tibshirani" ]
math.ST cs.LG stat.ML stat.TH
null
1405.3224
null
null
http://arxiv.org/pdf/1405.3224v2
2015-02-24T08:55:57Z
2014-05-13T16:47:17Z
On the Complexity of A/B Testing
A/B testing refers to the task of determining the best option among two alternatives that yield random outcomes. We provide distribution-dependent lower bounds for the performance of A/B testing that improve over the results currently available both in the fixed-confidence (or delta-PAC) and fixed-budget settings. When the distributions of the outcomes are Gaussian, we prove that the complexity of the fixed-confidence and fixed-budget settings are equivalent, and that uniform sampling of both alternatives is optimal only in the case of equal variances. In the common variance case, we also provide a stopping rule that terminates faster than existing fixed-confidence algorithms. In the case of Bernoulli distributions, we show that the complexity of the fixed-budget setting is smaller than that of the fixed-confidence setting and that uniform sampling of both alternatives, though not optimal, is advisable in practice when combined with an appropriate stopping criterion.
[ "['Emilie Kaufmann' 'Olivier Cappé' 'Aurélien Garivier']", "Emilie Kaufmann (LTCI), Olivier Capp\\'e (LTCI), Aur\\'elien Garivier\n (IMT)" ]
cs.LG cs.AI math.OC math.ST stat.TH
null
1405.3229
null
null
http://arxiv.org/pdf/1405.3229v1
2014-05-13T16:51:54Z
2014-05-13T16:51:54Z
Rate of Convergence and Error Bounds for LSTD($\lambda$)
We consider LSTD($\lambda$), the least-squares temporal-difference algorithm with eligibility traces proposed by Boyan (2002). It computes a linear approximation of the value function of a fixed policy in a large Markov Decision Process. Under a $\beta$-mixing assumption, we derive, for any value of $\lambda \in (0,1)$, a high-probability estimate of the rate of convergence of this algorithm to its limit. We deduce a high-probability bound on the error of this algorithm, which extends (and slightly improves) that derived by Lazaric et al. (2012) in the specific case where $\lambda=0$. In particular, our analysis sheds some light on the choice of $\lambda$ with respect to the quality of the chosen linear space and the number of samples, and is consistent with our simulations.
[ "Manel Tagorti (INRIA Nancy - Grand Est / LORIA), Bruno Scherrer (INRIA\n Nancy - Grand Est / LORIA)", "['Manel Tagorti' 'Bruno Scherrer']" ]
stat.ME cs.LG
10.1002/sam.11206
1405.3292
null
null
http://arxiv.org/abs/1405.3292v1
2014-05-13T20:03:14Z
2014-05-13T20:03:14Z
Learning with many experts: model selection and sparsity
Experts classifying data are often imprecise. Recently, several models have been proposed to train classifiers using the noisy labels generated by these experts. How to choose between these models? In such situations, the true labels are unavailable. Thus, one cannot perform model selection using the standard versions of methods such as empirical risk minimization and cross validation. In order to allow model selection, we present a surrogate loss and provide theoretical guarantees that assure its consistency. Next, we discuss how this loss can be used to tune a penalization which introduces sparsity in the parameters of a traditional class of models. Sparsity provides more parsimonious models and can avoid overfitting. Nevertheless, it has seldom been discussed in the context of noisy labels due to the difficulty in model selection and, therefore, in choosing tuning parameters. We apply these techniques to several sets of simulated and real data.
[ "['Rafael Izbicki' 'Rafael Bassi Stern']", "Rafael Izbicki, Rafael Bassi Stern" ]
stat.ML cs.LG stat.AP
null
1405.3295
null
null
http://arxiv.org/pdf/1405.3295v1
2014-05-13T20:07:09Z
2014-05-13T20:07:09Z
Effects of Sampling Methods on Prediction Quality. The Case of Classifying Land Cover Using Decision Trees
Clever sampling methods can be used to improve the handling of big data and increase its usefulness. The subject of this study is remote sensing, specifically airborne laser scanning point clouds representing different classes of ground cover. The aim is to derive a supervised learning model for the classification using CARTs. In order to measure the effect of different sampling methods on the classification accuracy, various experiments with varying types of sampling methods, sample sizes, and accuracy metrics have been designed. Numerical results for a subset of a large surveying project covering the lower Rhine area in Germany are shown. General conclusions regarding sampling design are drawn and presented.
[ "Ronald Hochreiter and Christoph Waldhauser", "['Ronald Hochreiter' 'Christoph Waldhauser']" ]
cs.LG math.OC math.PR stat.ML
null
1405.3316
null
null
http://arxiv.org/pdf/1405.3316v2
2019-06-06T16:42:25Z
2014-05-13T22:15:06Z
Optimal Exploration-Exploitation in a Multi-Armed-Bandit Problem with Non-stationary Rewards
In a multi-armed bandit (MAB) problem a gambler needs to choose at each round of play one of K arms, each characterized by an unknown reward distribution. Reward realizations are only observed when an arm is selected, and the gambler's objective is to maximize his cumulative expected earnings over some given horizon of play T. To do this, the gambler needs to acquire information about arms (exploration) while simultaneously optimizing immediate rewards (exploitation); the price paid due to this trade off is often referred to as the regret, and the main question is how small can this price be as a function of the horizon length T. This problem has been studied extensively when the reward distributions do not change over time; an assumption that supports a sharp characterization of the regret, yet is often violated in practical settings. In this paper, we focus on a MAB formulation which allows for a broad range of temporal uncertainties in the rewards, while still maintaining mathematical tractability. We fully characterize the (regret) complexity of this class of MAB problems by establishing a direct link between the extent of allowable reward "variation" and the minimal achievable regret. Our analysis draws some connections between two rather disparate strands of literature: the adversarial and the stochastic MAB frameworks.
[ "Omar Besbes, Yonatan Gur, Assaf Zeevi", "['Omar Besbes' 'Yonatan Gur' 'Assaf Zeevi']" ]
cs.AI cs.LG
null
1405.3318
null
null
http://arxiv.org/pdf/1405.3318v1
2014-05-13T22:29:14Z
2014-05-13T22:29:14Z
Adaptive Monte Carlo via Bandit Allocation
We consider the problem of sequentially choosing between a set of unbiased Monte Carlo estimators to minimize the mean-squared-error (MSE) of a final combined estimate. By reducing this task to a stochastic multi-armed bandit problem, we show that well developed allocation strategies can be used to achieve an MSE that approaches that of the best estimator chosen in retrospect. We then extend these developments to a scenario where alternative estimators have different, possibly stochastic costs. The outcome is a new set of adaptive Monte Carlo strategies that provide stronger guarantees than previous approaches while offering practical advantages.
[ "['James Neufeld' 'András György' 'Dale Schuurmans' 'Csaba Szepesvári']", "James Neufeld, Andr\\'as Gy\\\"orgy, Dale Schuurmans, Csaba Szepesv\\'ari" ]
cs.CV cs.LG
null
1405.3382
null
null
http://arxiv.org/pdf/1405.3382v1
2014-05-14T07:00:38Z
2014-05-14T07:00:38Z
Active Mining of Parallel Video Streams
The practicality of a video surveillance system is adversely limited by the number of queries that can be placed on human resources and their vigilance in response. To transcend this limitation, a major effort under way is to include software that (fully or at least semi) automatically mines video footage, reducing the burden imposed on the system. Herein, we propose a semi-supervised incremental learning framework for evolving visual streams in order to develop a robust and flexible track classification system. Our proposed method learns from consecutive batches by updating an ensemble at each step. It tries to strike a balance between the performance of the system and the amount of data that needs to be labelled. As no restriction is imposed, the system can address many practical problems in an evolving multi-camera scenario, such as concept drift, class evolution and video streams of varying length, which have not been addressed before. Experiments were performed on synthetic as well as real-world visual data in non-stationary environments, showing high accuracy with fairly little human collaboration.
[ "Samaneh Khoshrou, Jaime S. Cardoso, Luis F. Teixeira", "['Samaneh Khoshrou' 'Jaime S. Cardoso' 'Luis F. Teixeira']" ]
cs.LG
null
1405.3396
null
null
http://arxiv.org/pdf/1405.3396v1
2014-05-14T08:03:08Z
2014-05-14T08:03:08Z
Reducing Dueling Bandits to Cardinal Bandits
We present algorithms for reducing the Dueling Bandits problem to the conventional (stochastic) Multi-Armed Bandits problem. The Dueling Bandits problem is an online model of learning with ordinal feedback of the form "A is preferred to B" (as opposed to cardinal feedback like "A has value 2.5"), giving it wide applicability in learning from implicit user feedback and revealed and stated preferences. In contrast to existing algorithms for the Dueling Bandits problem, our reductions -- named Doubler, MultiSbm and DoubleSbm -- provide a generic schema for translating the extensive body of known results about conventional Multi-Armed Bandit algorithms to the Dueling Bandits setting. For Doubler and MultiSbm we prove regret upper bounds in both finite and infinite settings, and conjecture about the performance of DoubleSbm, which empirically outperforms the other two as well as previous algorithms in our experiments. In addition, we provide the first almost optimal regret bound in terms of second order terms, such as the differences between the values of the arms.
[ "['Nir Ailon' 'Thorsten Joachims' 'Zohar Karnin']", "Nir Ailon and Thorsten Joachims and Zohar Karnin" ]
cs.LG cs.CR
10.1016/j.eswa.2014.04.009
1405.3410
null
null
http://arxiv.org/abs/1405.3410v1
2014-05-14T08:47:31Z
2014-05-14T08:47:31Z
Efficient classification using parallel and scalable compressed model and Its application on intrusion detection
In order to achieve high efficiency of classification in intrusion detection, a compressed model is proposed in this paper which combines horizontal compression with vertical compression. OneR is utilized as horizontal compression for attribute reduction, and affinity propagation is employed as vertical compression to select small representative exemplars from large training data. To computationally compress the large volume of training data in a scalable way, a MapReduce-based parallelization approach is then implemented and evaluated for each step of the model compression process described above, on which common but efficient classification methods can be directly used. An experimental application study on two publicly available intrusion detection datasets, KDD99 and CMDC2012, demonstrates that classification using the proposed compressed model can effectively speed up the detection procedure by up to 184 times and, importantly, at the cost of a minimal accuracy difference of less than 1% on average.
[ "Tieming Chen, Xu Zhang, Shichao Jin, Okhee Kim", "['Tieming Chen' 'Xu Zhang' 'Shichao Jin' 'Okhee Kim']" ]
stat.ML cs.LG
null
1405.3536
null
null
http://arxiv.org/pdf/1405.3536v1
2014-05-14T15:29:02Z
2014-05-14T15:29:02Z
Improving offline evaluation of contextual bandit algorithms via bootstrapping techniques
In many recommendation applications such as news recommendation, the items that can be recommended come and go at a very fast pace. This is a challenging setting for recommender systems (RS). Online learning algorithms seem to be the most straightforward solution. The contextual bandit framework was introduced for that very purpose. In general the evaluation of a RS is a critical issue. Live evaluation is often avoided due to the potential loss of revenue, hence the need for offline evaluation methods. Two options are available. Model based methods are biased by nature and are thus difficult to trust when used alone. Data driven methods are therefore what we consider here. Evaluating online learning algorithms with past data is not simple, but some methods exist in the literature. Nonetheless their accuracy is not satisfactory, mainly due to their mechanism of data rejection, which only allows the exploitation of a small fraction of the data. We precisely address this issue in this paper. After highlighting the limitations of the previous methods, we present a new method based on bootstrapping techniques. This new method comes with two important improvements: it is much more accurate and it provides a measure of quality of its estimation. The latter is a highly desirable property in order to minimize the risks entailed by putting a RS online for the first time. We provide both theoretical and experimental proofs of its superiority compared to state-of-the-art methods, as well as an analysis of the convergence of the measure of quality.
[ "Olivier Nicol (INRIA Lille - Nord Europe, LIFL), J\\'er\\'emie Mary\n (INRIA Lille - Nord Europe, LIFL), Philippe Preux (INRIA Lille - Nord Europe,\n LIFL)", "['Olivier Nicol' 'Jérémie Mary' 'Philippe Preux']" ]
cs.SI cs.LG physics.soc-ph
10.1371/journal.pcbi.1003892
1405.3612
null
null
http://arxiv.org/abs/1405.3612v2
2014-07-15T16:11:43Z
2014-05-14T18:26:23Z
Global disease monitoring and forecasting with Wikipedia
Infectious disease is a leading threat to public health, economic stability, and other key social structures. Efforts to mitigate these impacts depend on accurate and timely monitoring to measure the risk and progress of disease. Traditional, biologically-focused monitoring techniques are accurate but costly and slow; in response, new techniques based on social internet data such as social media and search queries are emerging. These efforts are promising, but important challenges in the areas of scientific peer review, breadth of diseases and countries, and forecasting hamper their operational usefulness. We examine a freely available, open data source for this use: access logs from the online encyclopedia Wikipedia. Using linear models, language as a proxy for location, and a systematic yet simple article selection procedure, we tested 14 location-disease combinations and demonstrate that these data feasibly support an approach that overcomes these challenges. Specifically, our proof-of-concept yields models with $r^2$ up to 0.92, forecasting value up to the 28 days tested, and several pairs of models similar enough to suggest that transferring models from one location to another without re-training is feasible. Based on these preliminary results, we close with a research agenda designed to overcome these challenges and produce a disease monitoring and forecasting system that is significantly more effective, robust, and globally comprehensive than the current state of the art.
[ "['Nicholas Generous' 'Geoffrey Fairchild' 'Alina Deshpande'\n 'Sara Y. Del Valle' 'Reid Priedhorsky']", "Nicholas Generous (1), Geoffrey Fairchild (1), Alina Deshpande (1),\n Sara Y. Del Valle (1), Reid Priedhorsky (1) ((1) Los Alamos National\n Laboratory, Los Alamos, NM)" ]
cs.SI cs.DC cs.IR cs.LG stat.ML
null
1405.3726
null
null
http://arxiv.org/pdf/1405.3726v1
2014-05-15T02:15:01Z
2014-05-15T02:15:01Z
Topic words analysis based on LDA model
Social network analysis (SNA), which is a research field describing and modeling the social connections of a certain group of people, is popular among network services. Our topic words analysis project is an SNA method to visualize the topic words among emails from Obama.com to accounts registered in Columbus, Ohio. Based on the Latent Dirichlet Allocation (LDA) model, a popular topic model in SNA, our project characterizes the preference of senders for a target group of recipients. Gibbs sampling is used to estimate the topic and word distributions. Our training and testing data are emails from the carbon-free server Datagreening.com. We use the parallel computing tool BashReduce for word processing and generate related words under each latent topic to discover typical information in the political news sent specifically to local Columbus recipients. Running on two instances with the paralleling tool BashReduce, our project achieves almost a 30% speedup in processing the raw contents, compared with processing the contents locally on one instance. Also, the experimental result shows that the LDA model applied in our project provides a precision rate 53.96% higher than the TF-IDF model in finding target words, provided that an appropriate size of the topic words list is selected.
[ "Xi Qiu and Christopher Stewart", "['Xi Qiu' 'Christopher Stewart']" ]
cs.LG
null
1405.3843
null
null
http://arxiv.org/pdf/1405.3843v1
2014-05-15T13:29:27Z
2014-05-15T13:29:27Z
Logistic Regression: Tight Bounds for Stochastic and Online Optimization
The logistic loss function is often advocated in machine learning and statistics as a smooth and strictly convex surrogate for the 0-1 loss. In this paper we investigate the question of whether these smoothness and convexity properties make the logistic loss preferable to other widely considered options such as the hinge loss. We show that in contrast to known asymptotic bounds, as long as the number of prediction/optimization iterations is sub-exponential, the logistic loss provides no improvement over a generic non-smooth loss function such as the hinge loss. In particular we show that the convergence rate of stochastic logistic optimization is bounded from below by a polynomial in the diameter of the decision set and the number of prediction iterations, and provide a matching tight upper bound. This resolves the COLT open problem of McMahan and Streeter (2012).
[ "Elad Hazan, Tomer Koren, Kfir Y. Levy", "['Elad Hazan' 'Tomer Koren' 'Kfir Y. Levy']" ]
stat.ME cs.LG stat.ML
null
1405.4047
null
null
http://arxiv.org/pdf/1405.4047v2
2014-10-01T23:33:31Z
2014-05-16T01:30:11Z
Methods and Models for Interpretable Linear Classification
We present an integer programming framework to build accurate and interpretable discrete linear classification models. Unlike existing approaches, our framework is designed to provide practitioners with the control and flexibility they need to tailor accurate and interpretable models for a domain of choice. To this end, our framework can produce models that are fully optimized for accuracy, by minimizing the 0--1 classification loss, and that address multiple aspects of interpretability, by incorporating a range of discrete constraints and penalty functions. We use our framework to produce models that are difficult to create with existing methods, such as scoring systems and M-of-N rule tables. In addition, we propose specially designed optimization methods to improve the scalability of our framework through decomposition and data reduction. We show that discrete linear classifiers can attain the training accuracy of any other linear classifier, and provide an Occam's Razor type argument as to why the use of small discrete coefficients can provide better generalization. We demonstrate the performance and flexibility of our framework through numerical experiments and a case study in which we construct a highly tailored clinical tool for sleep apnea diagnosis.
[ "Berk Ustun and Cynthia Rudin", "['Berk Ustun' 'Cynthia Rudin']" ]
cs.CL cs.AI cs.LG
null
1405.4053
null
null
http://arxiv.org/pdf/1405.4053v2
2014-05-22T23:23:19Z
2014-05-16T07:12:16Z
Distributed Representations of Sentences and Documents
Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, "powerful," "strong" and "Paris" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.
[ "Quoc V. Le and Tomas Mikolov", "['Quoc V. Le' 'Tomas Mikolov']" ]
cs.LG stat.ML
null
1405.4324
null
null
http://arxiv.org/pdf/1405.4324v1
2014-05-16T22:31:42Z
2014-05-16T22:31:42Z
Active Semi-Supervised Learning Using Sampling Theory for Graph Signals
We consider the problem of offline, pool-based active semi-supervised learning on graphs. This problem is important when the labeled data is scarce and expensive whereas unlabeled data is easily available. The data points are represented by the vertices of an undirected graph with the similarity between them captured by the edge weights. Given a target number of nodes to label, the goal is to choose those nodes that are most informative and then predict the unknown labels. We propose a novel framework for this problem based on our recent results on sampling theory for graph signals. A graph signal is a real-valued function defined on each node of the graph. A notion of frequency for such signals can be defined using the spectrum of the graph Laplacian matrix. The sampling theory for graph signals aims to extend the traditional Nyquist-Shannon sampling theory by allowing us to identify the class of graph signals that can be reconstructed from their values on a subset of vertices. This approach allows us to define a criterion for active learning based on sampling set selection which aims at maximizing the frequency of the signals that can be reconstructed from their samples on the set. Experiments show the effectiveness of our method.
[ "Akshay Gadde, Aamir Anis and Antonio Ortega", "['Akshay Gadde' 'Aamir Anis' 'Antonio Ortega']" ]
cs.LG cs.CE q-bio.QM stat.ML
null
1405.4394
null
null
http://arxiv.org/pdf/1405.4394v1
2014-05-17T13:51:42Z
2014-05-17T13:51:42Z
Identification of functionally related enzymes by learning-to-rank methods
Enzyme sequences and structures are routinely used in the biological sciences as queries to search for functionally related enzymes in online databases. To this end, one usually departs from some notion of similarity, comparing two enzymes by looking for correspondences in their sequences, structures or surfaces. For a given query, the search operation results in a ranking of the enzymes in the database, from very similar to dissimilar enzymes, while information about the biological function of annotated database enzymes is ignored. In this work we show that rankings of that kind can be substantially improved by applying kernel-based learning algorithms. This approach enables the detection of statistical dependencies between similarities of the active cleft and the biological function of annotated enzymes. This is in contrast to search-based approaches, which do not take annotated training data into account. Similarity measures based on the active cleft are known to outperform sequence-based or structure-based measures under certain conditions. We consider the Enzyme Commission (EC) classification hierarchy for obtaining annotated enzymes during the training phase. The results of a set of sizeable experiments indicate a consistent and significant improvement for a set of similarity measures that exploit information about small cavities in the surface of enzymes.
[ "Michiel Stock, Thomas Fober, Eyke H\\\"ullermeier, Serghei Glinca,\n Gerhard Klebe, Tapio Pahikkala, Antti Airola, Bernard De Baets, Willem\n Waegeman", "['Michiel Stock' 'Thomas Fober' 'Eyke Hüllermeier' 'Serghei Glinca'\n 'Gerhard Klebe' 'Tapio Pahikkala' 'Antti Airola' 'Bernard De Baets'\n 'Willem Waegeman']" ]
cs.LG
null
1405.4423
null
null
http://arxiv.org/pdf/1405.4423v1
2014-05-17T18:20:13Z
2014-05-17T18:20:13Z
A two-step learning approach for solving full and almost full cold start problems in dyadic prediction
Dyadic prediction methods operate on pairs of objects (dyads), aiming to infer labels for out-of-sample dyads. We consider the full and almost full cold start problem in dyadic prediction, a setting that occurs when both objects in an out-of-sample dyad have not been observed during training, or if one of them has been observed, but very few times. A popular approach for addressing this problem is to train a model that makes predictions based on a pairwise feature representation of the dyads, or, in case of kernel methods, based on a tensor product pairwise kernel. As an alternative to such a kernel approach, we introduce a novel two-step learning algorithm that borrows ideas from the fields of pairwise learning and spectral filtering. We show theoretically that the two-step method is very closely related to the tensor product kernel approach, and experimentally that it yields a slightly better predictive performance. Moreover, unlike existing tensor product kernel methods, the two-step method allows closed-form solutions for training and parameter selection via cross-validation estimates both in the full and almost full cold start settings, making the approach much more efficient and straightforward to implement.
[ "['Tapio Pahikkala' 'Michiel Stock' 'Antti Airola' 'Tero Aittokallio'\n 'Bernard De Baets' 'Willem Waegeman']", "Tapio Pahikkala, Michiel Stock, Antti Airola, Tero Aittokallio,\n Bernard De Baets, Willem Waegeman" ]
cs.NI cs.LG
10.1109/COMST.2014.2320099
1405.4463
null
null
http://arxiv.org/abs/1405.4463v2
2015-03-19T15:15:04Z
2014-05-18T06:28:47Z
Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications
Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques to eliminate the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review over the period 2002-2013 of machine learning methods that were used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges.
[ "Mohammad Abu Alsheikh, Shaowei Lin, Dusit Niyato and Hwee-Pink Tan", "['Mohammad Abu Alsheikh' 'Shaowei Lin' 'Dusit Niyato' 'Hwee-Pink Tan']" ]
cs.LG
null
1405.4471
null
null
http://arxiv.org/pdf/1405.4471v1
2014-05-18T08:47:58Z
2014-05-18T08:47:58Z
Online Learning with Composite Loss Functions
We study a new class of online learning problems where each of the online algorithm's actions is assigned an adversarial value, and the loss of the algorithm at each step is a known and deterministic function of the values assigned to its recent actions. This class includes problems where the algorithm's loss is the minimum over the recent adversarial values, the maximum over the recent values, or a linear combination of the recent values. We analyze the minimax regret of this class of problems when the algorithm receives bandit feedback, and prove that when the minimum or maximum functions are used, the minimax regret is $\tilde \Omega(T^{2/3})$ (so called hard online learning problems), and when a linear function is used, the minimax regret is $\tilde O(\sqrt{T})$ (so called easy learning problems). Previously, the only online learning problem that was known to be provably hard was the multi-armed bandit with switching costs.
[ "['Ofer Dekel' 'Jian Ding' 'Tomer Koren' 'Yuval Peres']", "Ofer Dekel, Jian Ding, Tomer Koren, Yuval Peres" ]
cs.LG
null
1405.4543
null
null
http://arxiv.org/pdf/1405.4543v1
2014-05-18T19:54:18Z
2014-05-18T19:54:18Z
A Distributed Algorithm for Training Nonlinear Kernel Machines
This paper concerns the distributed training of nonlinear kernel machines on Map-Reduce. We show that a reformulation of the Nystr\"om-approximation-based solution, solved using gradient-based techniques, is well suited for this, especially when it is necessary to work with a large number of basis points. The main advantages of this approach are: avoidance of computing the pseudo-inverse of the kernel sub-matrix corresponding to the basis points; simplicity and efficiency of the distributed part of the computations; and friendliness to stage-wise addition of basis points. We implement the method using an AllReduce tree on Hadoop and demonstrate its value on a few large benchmark datasets.
[ "Dhruv Mahajan, S. Sathiya Keerthi, S. Sundararajan", "['Dhruv Mahajan' 'S. Sathiya Keerthi' 'S. Sundararajan']" ]
cs.LG
null
1405.4544
null
null
http://arxiv.org/pdf/1405.4544v2
2015-03-16T21:31:59Z
2014-05-18T20:07:41Z
A distributed block coordinate descent method for training $l_1$ regularized linear classifiers
Distributed training of $l_1$ regularized classifiers has received great attention recently. Most existing methods approach this problem by taking steps obtained from approximating the objective by a quadratic approximation that is decoupled at the individual variable level. These methods are designed for multicore and MPI platforms where communication costs are low. They are inefficient on systems such as Hadoop running on a cluster of commodity machines where communication costs are substantial. In this paper we design a distributed algorithm for $l_1$ regularization that is much better suited for such systems than existing algorithms. A careful cost analysis is used to support these points and motivate our method. The main idea of our algorithm is to do block optimization of many variables on the actual objective function within each computing node; this increases the computational cost per step that is matched with the communication cost, and decreases the number of outer iterations, thus yielding a faster overall method. Distributed Gauss-Seidel and Gauss-Southwell greedy schemes are used for choosing variables to update in each step. We establish global convergence theory for our algorithm, including Q-linear rate of convergence. Experiments on two benchmark problems show our method to be much faster than existing methods.
[ "Dhruv Mahajan, S. Sathiya Keerthi, S. Sundararajan", "['Dhruv Mahajan' 'S. Sathiya Keerthi' 'S. Sundararajan']" ]
cs.CV cs.LG
null
1405.4583
null
null
http://arxiv.org/pdf/1405.4583v1
2014-05-19T03:06:14Z
2014-05-19T03:06:14Z
ESSP: An Efficient Approach to Minimizing Dense and Nonsubmodular Energy Functions
Many recent advances in computer vision have demonstrated the impressive power of dense and nonsubmodular energy functions in solving visual labeling problems. However, minimizing such energies is challenging. None of existing techniques (such as s-t graph cut, QPBO, BP and TRW-S) can individually do this well. In this paper, we present an efficient method, namely ESSP, to optimize binary MRFs with arbitrary pairwise potentials, which could be nonsubmodular and with dense connectivity. We also provide a comparative study of our approach and several recent promising methods. From our study, we make some reasonable recommendations of combining existing methods that perform the best in different situations for this challenging problem. Experimental results validate that for dense and nonsubmodular energy functions, the proposed approach can usually obtain lower energies than the best combination of other techniques using comparably reasonable time.
[ "['Wei Feng' 'Jiaya Jia' 'Zhi-Qiang Liu']", "Wei Feng and Jiaya Jia and Zhi-Qiang Liu" ]
cs.NE cs.LG
null
1405.4589
null
null
http://arxiv.org/pdf/1405.4589v2
2014-05-20T11:53:39Z
2014-05-19T03:50:21Z
A Parallel Way to Select the Parameters of SVM Based on the Ant Optimization Algorithm
A large body of experimental data shows that the Support Vector Machine (SVM) algorithm has clear advantages in text classification, handwriting recognition, image classification, bioinformatics, and some other fields. To some degree, the performance of an SVM depends on its kernel function and slack variable, which are governed by the parameters $\delta$ and $c$ in the classification function. That is to say, to optimize the SVM algorithm, the optimization of these two parameters plays a major role. Ant Colony Optimization (ACO) is an optimization algorithm that simulates ants searching for an optimal path. Building on the available literature, we combine the ACO algorithm with a parallel algorithm to find good parameters.
[ "Chao Zhang, Hong-cen Mei, Hao Yang", "['Chao Zhang' 'Hong-cen Mei' 'Hao Yang']" ]
cs.CL cs.LG stat.ML
null
1405.4599
null
null
http://arxiv.org/pdf/1405.4599v1
2014-05-19T04:36:38Z
2014-05-19T04:36:38Z
Modelling Data Dispersion Degree in Automatic Robust Estimation for Multivariate Gaussian Mixture Models with an Application to Noisy Speech Processing
The trimming scheme with a prefixed cutoff portion is known as a method of improving the robustness of statistical models such as multivariate Gaussian mixture models (MGMMs) in small scale tests by alleviating the impact of outliers. However, when this method is applied to real-world data, such as noisy speech processing, it is hard to know the optimal cut-off portion to remove the outliers, and the method sometimes removes useful data samples as well. In this paper, we propose a new method based on measuring the dispersion degree (DD) of the training data to avoid this problem, so as to realise automatic robust estimation for MGMMs. The DD model is studied by using two different measures. For each one, we theoretically prove that the DD of the data samples in a context of MGMMs approximately obeys a specific (chi or chi-square) distribution. The proposed method is evaluated on a real-world application with a moderately-sized speaker recognition task. Experiments show that the proposed method can significantly improve the robustness of the conventional training method of GMMs for speaker recognition.
[ "['Dalei Wu' 'Haiqing Wu']", "Dalei Wu and Haiqing Wu" ]
cs.LG cs.NE
null
1405.4604
null
null
http://arxiv.org/pdf/1405.4604v2
2014-05-28T03:05:00Z
2014-05-19T04:56:30Z
On the saddle point problem for non-convex optimization
A central challenge to many fields of science and engineering involves minimizing non-convex error functions over continuous, high dimensional spaces. Gradient descent or quasi-Newton methods are almost ubiquitously used to perform such minimizations, and it is often thought that a main source of difficulty for the ability of these local methods to find the global minimum is the proliferation of local minima with much higher error than the global minimum. Here we argue, based on results from statistical physics, random matrix theory, and neural network theory, that a deeper and more profound difficulty originates from the proliferation of saddle points, not local minima, especially in high dimensional problems of practical interest. Such saddle points are surrounded by high error plateaus that can dramatically slow down learning, and give the illusory impression of the existence of a local minimum. Motivated by these arguments, we propose a new algorithm, the saddle-free Newton method, that can rapidly escape high dimensional saddle points, unlike gradient descent and quasi-Newton methods. We apply this algorithm to deep neural network training, and provide preliminary numerical evidence for its superior performance.
[ "['Razvan Pascanu' 'Yann N. Dauphin' 'Surya Ganguli' 'Yoshua Bengio']", "Razvan Pascanu, Yann N. Dauphin, Surya Ganguli and Yoshua Bengio" ]
cs.LG
null
1405.4758
null
null
http://arxiv.org/pdf/1405.4758v1
2014-05-19T14:56:51Z
2014-05-19T14:56:51Z
Lipschitz Bandits: Regret Lower Bounds and Optimal Algorithms
We consider stochastic multi-armed bandit problems where the expected reward is a Lipschitz function of the arm, and where the set of arms is either discrete or continuous. For discrete Lipschitz bandits, we derive asymptotic problem specific lower bounds for the regret satisfied by any algorithm, and propose OSLB and CKL-UCB, two algorithms that efficiently exploit the Lipschitz structure of the problem. In fact, we prove that OSLB is asymptotically optimal, as its asymptotic regret matches the lower bound. The regret analysis of our algorithms relies on a new concentration inequality for weighted sums of KL divergences between the empirical distributions of rewards and their true distributions. For continuous Lipschitz bandits, we propose to first discretize the action space, and then apply OSLB or CKL-UCB, algorithms that provably exploit the structure efficiently. This approach is shown, through numerical experiments, to significantly outperform existing algorithms that directly deal with the continuous set of arms. Finally the results and algorithms are extended to contextual bandits with similarities.
[ "['Stefan Magureanu' 'Richard Combes' 'Alexandre Proutiere']", "Stefan Magureanu and Richard Combes and Alexandre Proutiere" ]
cs.LG cs.CV cs.IT math.IT math.OC stat.ML
null
1405.4807
null
null
http://arxiv.org/pdf/1405.4807v1
2014-05-19T16:58:24Z
2014-05-19T16:58:24Z
Scalable Semidefinite Relaxation for Maximum A Posterior Estimation
Maximum a posteriori (MAP) inference over discrete Markov random fields is a fundamental task spanning a wide spectrum of real-world applications, which is known to be NP-hard for general graphs. In this paper, we propose a novel semidefinite relaxation formulation (referred to as SDR) to estimate the MAP assignment. Algorithmically, we develop an accelerated variant of the alternating direction method of multipliers (referred to as SDPAD-LR) that can effectively exploit the special structure of the new relaxation. Encouragingly, the proposed procedure allows solving SDR for large-scale problems, e.g., problems on a grid graph comprising hundreds of thousands of variables with multiple states per node. Compared with prior SDP solvers, SDPAD-LR is capable of attaining comparable accuracy while exhibiting remarkably improved scalability, in contrast to the commonly held belief that semidefinite relaxation can only be applied to small-scale MRF problems. We have evaluated the performance of SDR on various benchmark datasets including OPENGM2 and PIC in terms of both the quality of the solutions and computation time. Experimental results demonstrate that for a broad class of problems, SDPAD-LR outperforms state-of-the-art algorithms in producing better MAP assignments in an efficient manner.
[ "['Qixing Huang' 'Yuxin Chen' 'Leonidas Guibas']", "Qixing Huang, Yuxin Chen, and Leonidas Guibas" ]
cs.LG stat.ML
10.1109/TPAMI.2016.2568185
1405.4897
null
null
http://arxiv.org/abs/1405.4897v2
2016-08-21T22:04:31Z
2014-05-19T21:07:08Z
Screening Tests for Lasso Problems
This paper is a survey of dictionary screening for the lasso problem. The lasso problem seeks a sparse linear combination of the columns of a dictionary to best match a given target vector. This sparse representation has proven useful in a variety of subsequent processing and decision tasks. For a given target vector, dictionary screening quickly identifies a subset of dictionary columns that will receive zero weight in a solution of the corresponding lasso problem. These columns can be removed from the dictionary prior to solving the lasso problem without impacting the optimality of the solution obtained. This has two potential advantages: it reduces the size of the dictionary, allowing the lasso problem to be solved with fewer resources, and it may speed up obtaining a solution. Using a geometrically intuitive framework, we provide basic insights for understanding useful lasso screening tests and their limitations. We also provide illustrative numerical studies on several datasets.
[ "Zhen James Xiang, Yun Wang and Peter J. Ramadge", "['Zhen James Xiang' 'Yun Wang' 'Peter J. Ramadge']" ]
math.OC cs.CC cs.LG cs.NA stat.ML
null
1405.4980
null
null
http://arxiv.org/pdf/1405.4980v2
2015-11-16T18:52:04Z
2014-05-20T07:50:56Z
Convex Optimization: Algorithms and Complexity
This monograph presents the main complexity theorems in convex optimization and their corresponding algorithms. Starting from the fundamental theory of black-box optimization, the material progresses towards recent advances in structural optimization and stochastic optimization. Our presentation of black-box optimization, strongly influenced by Nesterov's seminal book and Nemirovski's lecture notes, includes the analysis of cutting plane methods, as well as (accelerated) gradient descent schemes. We also pay special attention to non-Euclidean settings (relevant algorithms include Frank-Wolfe, mirror descent, and dual averaging) and discuss their relevance in machine learning. We provide a gentle introduction to structural optimization with FISTA (to optimize a sum of a smooth and a simple non-smooth term), saddle-point mirror prox (Nemirovski's alternative to Nesterov's smoothing), and a concise description of interior point methods. In stochastic optimization we discuss stochastic gradient descent, mini-batches, random coordinate descent, and sublinear algorithms. We also briefly touch upon convex relaxation of combinatorial problems and the use of randomness to round solutions, as well as random walks based methods.
[ "['Sébastien Bubeck']", "S\\'ebastien Bubeck" ]
cs.LG stat.ML
null
1405.5096
null
null
http://arxiv.org/pdf/1405.5096v1
2014-05-20T14:15:54Z
2014-05-20T14:15:54Z
Unimodal Bandits: Regret Lower Bounds and Optimal Algorithms
We consider stochastic multi-armed bandits where the expected reward is a unimodal function over partially ordered arms. This important class of problems has been recently investigated in (Cope 2009, Yu 2011). The set of arms is either discrete, in which case arms correspond to the vertices of a finite graph whose structure represents similarity in rewards, or continuous, in which case arms belong to a bounded interval. For discrete unimodal bandits, we derive asymptotic lower bounds for the regret achieved under any algorithm, and propose OSUB, an algorithm whose regret matches this lower bound. Our algorithm optimally exploits the unimodal structure of the problem, and surprisingly, its asymptotic regret does not depend on the number of arms. We also provide a regret upper bound for OSUB in non-stationary environments where the expected rewards smoothly evolve over time. The analytical results are supported by numerical experiments showing that OSUB performs significantly better than the state-of-the-art algorithms. For continuous sets of arms, we provide a brief discussion. We show that combining an appropriate discretization of the set of arms with the UCB algorithm yields an order-optimal regret, and in practice, outperforms recently proposed algorithms designed to exploit the unimodal structure.
[ "['Richard Combes' 'Alexandre Proutiere']", "Richard Combes and Alexandre Proutiere" ]
cs.LG cs.IR
null
1405.5147
null
null
http://arxiv.org/pdf/1405.5147v1
2014-05-20T16:32:59Z
2014-05-20T16:32:59Z
Predicting Online Video Engagement Using Clickstreams
In the nascent days of e-content delivery, having a superior product was enough to give companies an edge against the competition. With today's fiercely competitive market, one needs to be multiple steps ahead, especially when it comes to understanding consumers. Focusing on a large set of web portals owned and managed by a private communications company, we propose methods by which these sites' clickstream data can be used to provide a deep understanding of their visitors, as well as their interests and preferences. We further expand the use of this data to show that it can be effectively used to predict user engagement to video streams.
[ "['Everaldo Aguiar' 'Saurabh Nagrecha' 'Nitesh V. Chawla']", "Everaldo Aguiar, Saurabh Nagrecha, Nitesh V. Chawla" ]
cs.LG cs.AI stat.ML
null
1405.5156
null
null
http://arxiv.org/pdf/1405.5156v1
2014-05-20T17:12:56Z
2014-05-20T17:12:56Z
Gaussian Approximation of Collective Graphical Models
The Collective Graphical Model (CGM) models a population of independent and identically distributed individuals when only collective statistics (i.e., counts of individuals) are observed. Exact inference in CGMs is intractable, and previous work has explored Markov Chain Monte Carlo (MCMC) and MAP approximations for learning and inference. This paper studies Gaussian approximations to the CGM. As the population grows large, we show that the CGM distribution converges to a multivariate Gaussian distribution (GCGM) that maintains the conditional independence properties of the original CGM. If the observations are exact marginals of the CGM or marginals that are corrupted by Gaussian noise, inference in the GCGM approximation can be computed efficiently in closed form. If the observations follow a different noise model (e.g., Poisson), then expectation propagation provides efficient and accurate approximate inference. The accuracy and speed of GCGM inference is compared to the MCMC and MAP methods on a simulated bird migration problem. The GCGM matches or exceeds the accuracy of the MAP method while being significantly faster.
[ "Li-Ping Liu, Daniel Sheldon, Thomas G. Dietterich", "['Li-Ping Liu' 'Daniel Sheldon' 'Thomas G. Dietterich']" ]
cs.LG cs.CC cs.DM
null
1405.5268
null
null
http://arxiv.org/pdf/1405.5268v2
2014-07-09T19:16:57Z
2014-05-21T00:06:02Z
Approximate resilience, monotonicity, and the complexity of agnostic learning
A function $f$ is $d$-resilient if all its Fourier coefficients of degree at most $d$ are zero, i.e., $f$ is uncorrelated with all low-degree parities. We study the notion of $\mathit{approximate}$ $\mathit{resilience}$ of Boolean functions, where we say that $f$ is $\alpha$-approximately $d$-resilient if $f$ is $\alpha$-close to a $[-1,1]$-valued $d$-resilient function in $\ell_1$ distance. We show that approximate resilience essentially characterizes the complexity of agnostic learning of a concept class $C$ over the uniform distribution. Roughly speaking, if all functions in a class $C$ are far from being $d$-resilient then $C$ can be learned agnostically in time $n^{O(d)}$ and conversely, if $C$ contains a function close to being $d$-resilient then agnostic learning of $C$ in the statistical query (SQ) framework of Kearns has complexity of at least $n^{\Omega(d)}$. This characterization is based on the duality between $\ell_1$ approximation by degree-$d$ polynomials and approximate $d$-resilience that we establish. In particular, it implies that $\ell_1$ approximation by low-degree polynomials, known to be sufficient for agnostic learning over product distributions, is in fact necessary. Focusing on monotone Boolean functions, we exhibit the existence of near-optimal $\alpha$-approximately $\widetilde{\Omega}(\alpha\sqrt{n})$-resilient monotone functions for all $\alpha>0$. Prior to our work, it was conceivable even that every monotone function is $\Omega(1)$-far from any $1$-resilient function. Furthermore, we construct simple, explicit monotone functions based on ${\sf Tribes}$ and ${\sf CycleRun}$ that are close to highly resilient functions. Our constructions are based on a fairly general resilience analysis and amplification. These structural results, together with the characterization, imply nearly optimal lower bounds for agnostic learning of monotone juntas.
[ "['Dana Dachman-Soled' 'Vitaly Feldman' 'Li-Yang Tan' 'Andrew Wan'\n 'Karl Wimmer']", "Dana Dachman-Soled and Vitaly Feldman and Li-Yang Tan and Andrew Wan\n and Karl Wimmer" ]
math.OC cs.LG
null
1405.5300
null
null
http://arxiv.org/pdf/1405.5300v2
2014-07-27T12:22:28Z
2014-05-21T05:12:55Z
Fast Distributed Coordinate Descent for Non-Strongly Convex Losses
We propose an efficient distributed randomized coordinate descent method for minimizing regularized non-strongly convex loss functions. The method attains the optimal $O(1/k^2)$ convergence rate, where $k$ is the iteration counter. The core of the work is the theoretical study of stepsize parameters. We have implemented the method on Archer - the largest supercomputer in the UK - and show that the method is capable of solving a (synthetic) LASSO optimization problem with 50 billion variables.
[ "Olivier Fercoq and Zheng Qu and Peter Richt\\'arik and Martin\n Tak\\'a\\v{c}", "['Olivier Fercoq' 'Zheng Qu' 'Peter Richtárik' 'Martin Takáč']" ]
stat.ME cs.LG stat.ML
null
1405.5311
null
null
http://arxiv.org/pdf/1405.5311v1
2014-05-21T06:53:16Z
2014-05-21T06:53:16Z
Compressive Sampling Using EM Algorithm
Conventional approaches of sampling signals follow the celebrated theorem of Nyquist and Shannon. Compressive sampling, introduced by Donoho, Romberg and Tao, is a new paradigm that goes against the conventional methods in data acquisition and provides a way of recovering signals using fewer samples than the traditional methods use. Here we suggest an alternative way of reconstructing the original signals in compressive sampling using EM algorithm. We first propose a naive approach which has certain computational difficulties and subsequently modify it to a new approach which performs better than the conventional methods of compressive sampling. The comparison of the different approaches and the performance of the new approach has been studied using simulated data.
[ "['Atanu Kumar Ghosh' 'Arnab Chakraborty']", "Atanu Kumar Ghosh, Arnab Chakraborty" ]
cs.AI cs.LG
null
1405.5358
null
null
http://arxiv.org/pdf/1405.5358v1
2014-05-21T10:20:15Z
2014-05-21T10:20:15Z
Off-Policy Shaping Ensembles in Reinforcement Learning
Recent advances in gradient temporal-difference methods allow learning multiple value functions off-policy in parallel without sacrificing convergence guarantees or computational efficiency. This opens up new possibilities for sound ensemble techniques in reinforcement learning. In this work we propose learning an ensemble of policies related through potential-based shaping rewards. The ensemble induces a combination policy by using a voting mechanism on its components. Learning happens in real time, and we empirically show the combination policy to outperform the individual policies of the ensemble.
[ "Anna Harutyunyan and Tim Brys and Peter Vrancx and Ann Nowe", "['Anna Harutyunyan' 'Tim Brys' 'Peter Vrancx' 'Ann Nowe']" ]
cs.CV cs.LG
null
1405.5488
null
null
http://arxiv.org/pdf/1405.5488v1
2014-04-24T02:29:19Z
2014-04-24T02:29:19Z
On Learning Where To Look
Current automatic vision systems face two major challenges: scalability and extreme variability of appearance. First, the computational time required to process an image typically scales linearly with the number of pixels in the image, therefore limiting the resolution of input images to thumbnail size. Second, variability in appearance and pose of the objects constitute a major hurdle for robust recognition and detection. In this work, we propose a model that makes baby steps towards addressing these challenges. We describe a learning based method that recognizes objects through a series of glimpses. This system performs an amount of computation that scales with the complexity of the input rather than its number of pixels. Moreover, the proposed method is potentially more robust to changes in appearance since its parameters are learned in a data driven manner. Preliminary experiments on a handwritten dataset of digits demonstrate the computational advantages of this approach.
[ "[\"Marc'Aurelio Ranzato\"]", "Marc'Aurelio Ranzato" ]
stat.ML cs.LG
null
1405.5505
null
null
http://arxiv.org/pdf/1405.5505v3
2016-02-25T09:28:14Z
2014-05-21T18:17:37Z
Kernel Mean Shrinkage Estimators
A mean function in a reproducing kernel Hilbert space (RKHS), or a kernel mean, is central to kernel methods in that it is used by many classical algorithms such as kernel principal component analysis, and it also forms the core inference step of modern kernel methods that rely on embedding probability distributions in RKHSs. Given a finite sample, an empirical average has been used commonly as a standard estimator of the true kernel mean. Despite a widespread use of this estimator, we show that it can be improved thanks to the well-known Stein phenomenon. We propose a new family of estimators called kernel mean shrinkage estimators (KMSEs), which benefit from both theoretical justifications and good empirical performance. The results demonstrate that the proposed estimators outperform the standard one, especially in a "large d, small n" paradigm.
[ "Krikamol Muandet, Bharath Sriperumbudur, Kenji Fukumizu, Arthur\n Gretton, Bernhard Sch\\\"olkopf", "['Krikamol Muandet' 'Bharath Sriperumbudur' 'Kenji Fukumizu'\n 'Arthur Gretton' 'Bernhard Schölkopf']" ]
cs.CV cs.LG
null
1405.5769
null
null
http://arxiv.org/pdf/1405.5769v2
2015-06-24T09:16:28Z
2014-05-22T14:35:52Z
Descriptor Matching with Convolutional Neural Networks: a Comparison to SIFT
Latest results indicate that features learned via convolutional neural networks outperform previous descriptors on classification tasks by a large margin. It has been shown that these networks still work well when they are applied to datasets or recognition tasks different from those they were trained on. However, descriptors like SIFT are not only used in recognition but also for many correspondence problems that rely on descriptor matching. In this paper we compare features from various layers of convolutional neural nets to standard SIFT descriptors. We consider a network that was trained on ImageNet and another one that was trained without supervision. Surprisingly, convolutional neural networks clearly outperform SIFT on descriptor matching. This paper has been merged with arXiv:1406.6909
[ "Philipp Fischer, Alexey Dosovitskiy, Thomas Brox", "['Philipp Fischer' 'Alexey Dosovitskiy' 'Thomas Brox']" ]
cs.DB cs.LG
null
1405.5829
null
null
http://arxiv.org/pdf/1405.5829v1
2014-05-22T17:13:00Z
2014-05-22T17:13:00Z
Node Classification in Uncertain Graphs
In many real applications that use and analyze networked data, the links in the network graph may be erroneous, or derived from probabilistic techniques. In such cases, the node classification problem can be challenging, since the unreliability of the links may affect the final results of the classification process. If the information about link reliability is not used explicitly, the classification accuracy in the underlying network may be affected adversely. In this paper, we focus on situations that require the analysis of the uncertainty that is present in the graph structure. We study the novel problem of node classification in uncertain graphs, by treating uncertainty as a first-class citizen. We propose two techniques based on a Bayes model and automatic parameter selection, and show that the incorporation of uncertainty in the classification process as a first-class citizen is beneficial. We experimentally evaluate the proposed approach using different real data sets, and study the behavior of the algorithms under different conditions. The results demonstrate the effectiveness and efficiency of our approach.
[ "['Michele Dallachiesa' 'Charu Aggarwal' 'Themis Palpanas']", "Michele Dallachiesa and Charu Aggarwal and Themis Palpanas" ]
cs.LG cs.SI physics.soc-ph
null
1405.5868
null
null
http://arxiv.org/pdf/1405.5868v2
2014-11-10T18:11:10Z
2014-05-22T19:41:51Z
Learning to Generate Networks
We investigate the problem of learning to generate complex networks from data. Specifically, we consider whether deep belief networks, dependency networks, and members of the exponential random graph family can learn to generate networks whose complex behavior is consistent with a set of input examples. We find that the deep model is able to capture the complex behavior of small networks, but that no model is able to capture this behavior for networks with more than a handful of nodes.
[ "James Atwood, Don Towsley, Krista Gile, and David Jensen", "['James Atwood' 'Don Towsley' 'Krista Gile' 'David Jensen']" ]
stat.ML cs.DS cs.IR cs.LG
null
1405.5869
null
null
http://arxiv.org/pdf/1405.5869v1
2014-05-22T19:42:57Z
2014-05-22T19:42:57Z
Asymmetric LSH (ALSH) for Sublinear Time Maximum Inner Product Search (MIPS)
We present the first provably sublinear time algorithm for approximate \emph{Maximum Inner Product Search} (MIPS). Our proposal is also the first hashing algorithm for searching with (un-normalized) inner product as the underlying similarity measure. Finding hashing schemes for MIPS was considered hard. We formally show that the existing Locality Sensitive Hashing (LSH) framework is insufficient for solving MIPS, and then we extend the existing LSH framework to allow asymmetric hashing schemes. Our proposal is based on an interesting mathematical phenomenon in which inner products, after independent asymmetric transformations, can be converted into the problem of approximate near neighbor search. This key observation makes an efficient sublinear hashing scheme for MIPS possible. In the extended asymmetric LSH (ALSH) framework, we provide an explicit construction of a provably fast hashing scheme for MIPS. The proposed construction and the extended LSH framework could be of independent theoretical interest. Our proposed algorithm is simple and easy to implement. We evaluate the method, for retrieving inner products, in the collaborative filtering task of item recommendations on the Netflix and Movielens datasets.
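The sketch below illustrates the kind of asymmetric transformation the abstract refers to: database vectors are rescaled to have norm below one and padded with powers of their squared norm, while the query is padded with constants, so that a Euclidean nearest-neighbor query on the transformed vectors approximately solves MIPS (standard L2-LSH would then be applied to the transformed vectors in practice). The scaling constant U and the number of appended coordinates m are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 32))          # database vectors
q = rng.normal(size=32)                  # query
m, U = 3, 0.83                           # appended coordinates and norm bound (assumed values)

# Rescale the database so that every vector has norm at most U < 1.
X_scaled = X * (U / np.linalg.norm(X, axis=1).max())

def P(x):
    """Asymmetric transform for database items: append ||x||^2, ||x||^4, ..., ||x||^(2^m)."""
    nrm = np.linalg.norm(x)
    return np.concatenate([x, [nrm ** (2 ** (i + 1)) for i in range(m)]])

def Q(v):
    """Asymmetric transform for the (normalized) query: append m constants equal to 1/2."""
    return np.concatenate([v / np.linalg.norm(v), [0.5] * m])

P_X = np.array([P(x) for x in X_scaled])
q_t = Q(q)

# Euclidean nearest neighbor in the transformed space ~ maximum inner product in the original space,
# since ||Q(q) - P(x)||^2 = 1 + m/4 - 2 q^T x / ||q|| + ||x||^(2^(m+1)) and the last term vanishes.
candidate = int(np.argmin(np.linalg.norm(P_X - q_t, axis=1)))
exact = int(np.argmax(X @ q))
print("ALSH-style candidate:", candidate, "| exact MIPS answer:", exact)
```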
[ "['Anshumali Shrivastava' 'Ping Li']", "Anshumali Shrivastava and Ping Li" ]
cs.LG math.OC stat.ML
null
1405.5960
null
null
http://arxiv.org/pdf/1405.5960v1
2014-05-23T04:28:29Z
2014-05-23T04:28:29Z
LASS: a simple assignment model with Laplacian smoothing
We consider the problem of learning soft assignments of $N$ items to $K$ categories given two sources of information: an item-category similarity matrix, which encourages items to be assigned to categories they are similar to (and to not be assigned to categories they are dissimilar to), and an item-item similarity matrix, which encourages similar items to have similar assignments. We propose a simple quadratic programming model that captures this intuition. We give necessary conditions for its solution to be unique, define an out-of-sample mapping, and derive a simple, effective training algorithm based on the alternating direction method of multipliers. The model predicts reasonable assignments from even a few similarity values, and can be seen as a generalization of semisupervised learning. It is particularly useful when items naturally belong to multiple categories, as for example when annotating documents with keywords or pictures with tags, with partially tagged items, or when the categories have complex interrelations (e.g. hierarchical) that are unknown.
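Since the abstract describes the model only at a high level, the following is a hedged reconstruction of a quadratic program of the kind it suggests: a linear term rewarding agreement with the item-category similarities plus a Laplacian smoothing term over the item graph, with each item's assignments constrained to the simplex. The exact objective and the ADMM training algorithm from the paper may differ.

```latex
% Z in R^{N x K}: soft assignments of N items to K categories
% W in R^{N x K}: item-category similarity matrix
% L = D - S:      graph Laplacian of the item-item similarity matrix S
\min_{Z \in \mathbb{R}^{N \times K}}
    \; -\operatorname{tr}\!\left(W^{\top} Z\right)
    \;+\; \lambda \, \operatorname{tr}\!\left(Z^{\top} L Z\right)
\qquad \text{s.t.} \qquad Z \mathbf{1}_K = \mathbf{1}_N, \quad Z \ge 0 .
```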
[ "Miguel \\'A. Carreira-Perpi\\~n\\'an and Weiran Wang", "['Miguel Á. Carreira-Perpiñán' 'Weiran Wang']" ]
cs.CV cs.LG stat.ML
null
1405.6012
null
null
http://arxiv.org/pdf/1405.6012v1
2014-05-23T10:15:04Z
2014-05-23T10:15:04Z
On the Optimal Solution of Weighted Nuclear Norm Minimization
In recent years, the nuclear norm minimization (NNM) problem has been attracting much attention in computer vision and machine learning. The NNM problem capitalizes on its convexity and can be solved efficiently. The standard nuclear norm regularizes all singular values equally, which is however not flexible enough to fit real scenarios. Weighted nuclear norm minimization (WNNM) is a natural extension and generalization of NNM. By assigning different weights to different singular values appropriately, WNNM can lead to state-of-the-art results in applications such as image denoising. Nevertheless, the globally optimal solution of the WNNM problem has so far not been fully characterized, due to its non-convexity in general cases. In this article, we study the theoretical properties of WNNM and prove that WNNM can be equivalently transformed into a quadratic programming problem with linear constraints. This implies that WNNM is equivalent to a convex problem and its global optimum can be readily achieved by off-the-shelf convex optimization solvers. We further show that when the weights are non-descending, the globally optimal solution of WNNM can be obtained in closed form.
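For the non-descending-weights case mentioned at the end of the abstract, the closed-form solution of the proximal (denoising) form of WNNM is weighted soft-thresholding of the singular values. A minimal sketch, assuming the objective 0.5 * ||Y - X||_F^2 + sum_i w_i * sigma_i(X) and illustrative weight values:

```python
import numpy as np

def wnnm_prox(Y, w):
    """Closed-form minimizer of 0.5 * ||Y - X||_F^2 + sum_i w[i] * sigma_i(X)
    when w is non-descending: weighted soft-thresholding of the singular values of Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - w, 0.0)) @ Vt

rng = np.random.default_rng(5)
Y = rng.normal(size=(20, 15))
# Non-descending weights, paired with the non-increasing singular values (illustrative values).
w = np.linspace(0.5, 3.0, num=15)
X = wnnm_prox(Y, w)
print("rank after weighted thresholding:", np.linalg.matrix_rank(X, tol=1e-8))
```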
[ "Qi Xie, Deyu Meng, Shuhang Gu, Lei Zhang, Wangmeng Zuo, Xiangchu Feng\n and Zongben Xu", "['Qi Xie' 'Deyu Meng' 'Shuhang Gu' 'Lei Zhang' 'Wangmeng Zuo'\n 'Xiangchu Feng' 'Zongben Xu']" ]
cs.LG
null
1405.6076
null
null
http://arxiv.org/pdf/1405.6076v1
2014-05-23T14:33:48Z
2014-05-23T14:33:48Z
Online Linear Optimization via Smoothing
We present a new optimization-theoretic approach to analyzing Follow-the-Leader style algorithms, particularly in the setting where perturbations are used as a tool for regularization. We show that adding a strongly convex penalty function to the decision rule and adding stochastic perturbations to data correspond to deterministic and stochastic smoothing operations, respectively. We establish an equivalence between "Follow the Regularized Leader" and "Follow the Perturbed Leader" up to the smoothness properties. This intuition leads to a new generic analysis framework that recovers and improves the previously known regret bounds for the class of algorithms commonly known as Follow the Perturbed Leader.
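A minimal sketch of the Follow the Perturbed Leader rule discussed above, for online linear optimization over a finite action set: the fresh random perturbation added at each round acts as the stochastic smoothing of Follow-the-Leader. The exponential perturbation distribution, its scale, and the random losses are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n_actions, T, eta = 5, 1000, 10.0        # action-set size, horizon, perturbation scale (assumed)

cum_loss = np.zeros(n_actions)
total = 0.0

for t in range(T):
    # Follow the Perturbed Leader: perturb the cumulative losses and play the minimizer.
    perturbation = eta * rng.exponential(size=n_actions)
    action = int(np.argmin(cum_loss - perturbation))

    loss = rng.random(n_actions)         # this round's linear loss (here just random)
    total += loss[action]
    cum_loss += loss

print("regret vs. best fixed action:", round(total - cum_loss.min(), 2))
```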
[ "['Jacob Abernethy' 'Chansoo Lee' 'Abhinav Sinha' 'Ambuj Tewari']", "Jacob Abernethy, Chansoo Lee, Abhinav Sinha, Ambuj Tewari" ]
cs.CV cs.LG cs.NE
null
1405.6137
null
null
http://arxiv.org/pdf/1405.6137v1
2014-02-05T20:05:34Z
2014-02-05T20:05:34Z
An enhanced neural network based approach towards object extraction
The improvements in spectral and spatial resolution of satellite images have facilitated the automatic extraction and identification of features from satellite images and aerial photographs. An automatic object extraction method is presented for extracting and identifying the various objects from satellite images, and the accuracy of the system is verified on IRS satellite images. The system is based on a neural network and simulates the process of visual interpretation of remote sensing images, thereby increasing the efficiency of image analysis. The approach obtains the basic characteristics of the various features, and its performance is enhanced by automatic learning, intelligent interpretation, and intelligent interpolation. The major advantage of the method is its simplicity, and the fact that the system identifies features not only based on pixel values but also based on the shape, Haralick features, etc., of the objects. Further, the system allows flexibility for distinguishing features within the same category based on size and shape. Successful application of the system verified its effectiveness, and its accuracy was assessed by ground truth verification.
[ "['S. K. Katiyar' 'P. V. Arun']", "S.K. Katiyar and P.V. Arun" ]
cs.CV cs.LG stat.ML
10.1137/140967325
1405.6159
null
null
http://arxiv.org/abs/1405.6159v3
2014-08-20T22:12:15Z
2014-04-30T21:58:10Z
A Bi-clustering Framework for Consensus Problems
We consider grouping as a general characterization for problems such as clustering, community detection in networks, and multiple parametric model estimation. We are interested in merging solutions from different grouping algorithms, distilling all their good qualities into a consensus solution. In this paper, we propose a bi-clustering framework and perspective for reaching consensus in such grouping problems. In particular, this is the first time that the task of finding/fitting multiple parametric models to a dataset is formally posed as a consensus problem. We highlight the equivalence of these tasks and establish the connection with the computational Gestalt program, which seeks to provide a psychologically-inspired detection theory for visual events. We also present a simple but powerful bi-clustering algorithm, specially tuned to the nature of the problem we address, though general enough to handle many different instances inscribed within our characterization. The presentation is accompanied by diverse and extensive experimental results in clustering, community detection, and multiple parametric model estimation in image processing applications.
[ "Mariano Tepper and Guillermo Sapiro", "['Mariano Tepper' 'Guillermo Sapiro']" ]
cs.CV cs.LG
10.5121/ijfcst.2014.4102
1405.6177
null
null
http://arxiv.org/abs/1405.6177v1
2014-02-14T20:53:43Z
2014-02-14T20:53:43Z
Automated Fabric Defect Inspection: A Survey of Classifiers
Quality control at each stage of production has become a key factor for textile manufacturers to remain competitive in the global market. Manual fabric defect inspection suffers from a lack of accuracy and high time consumption, whereas early and accurate fabric defect detection is a significant phase of quality control. Computer-vision-based, i.e., automated, fabric defect inspection systems are regarded by many researchers in different countries as a promising way to resolve these problems. Two major challenges must be addressed to attain a successful automated fabric defect inspection system: defect detection and defect classification. In this work, we discuss different techniques used for automated fabric defect classification, then survey the classifiers used in automated fabric defect inspection systems, and finally compare these classifiers using performance metrics. This work is expected to be very useful for researchers in the area of automated fabric defect inspection to understand and evaluate the many potential options in this field.
[ "['Md. Tarek Habib' 'Rahat Hossain Faisal' 'M. Rokonuzzaman' 'Farruk Ahmed']", "Md. Tarek Habib, Rahat Hossain Faisal, M. Rokonuzzaman, Farruk Ahmed" ]
cs.LG cs.IR
null
1405.6223
null
null
http://arxiv.org/pdf/1405.6223v1
2014-04-08T00:42:16Z
2014-04-08T00:42:16Z
Coupled Item-based Matrix Factorization
The essence of the cold-start and sparsity challenges in Recommender Systems (RS) is that the extant techniques, such as Collaborative Filtering (CF) and Matrix Factorization (MF), mainly rely on the user-item rating matrix, which is sometimes not informative enough for predicting recommendations. To solve these challenges, objective item attributes are incorporated as complementary information. However, most of the existing methods for inferring the relationships between items assume that the attributes are "independently and identically distributed (iid)", which does not always hold in reality. In fact, the attributes are more or less coupled with each other by some implicit relationships. Therefore, in this paper we propose an attribute-based coupled similarity measure to capture the implicit relationships between items. We then integrate the implicit item coupling into MF to form the Coupled Item-based Matrix Factorization (CIMF) model. Experimental results on two open data sets demonstrate that CIMF outperforms the benchmark methods.
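The CIMF objective itself is not spelled out in the abstract, so the sketch below only illustrates the general idea of folding an item-item coupling similarity into matrix factorization: a graph-Laplacian penalty pulls the latent factors of strongly coupled items together. The toy data, the specific regularizer, and all hyperparameters are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(7)
n_users, n_items, k = 30, 20, 5
R = rng.integers(0, 6, size=(n_users, n_items)).astype(float)   # toy ratings, 0 = unobserved
mask = R > 0
S = rng.random((n_items, n_items))
S = (S + S.T) / 2                                                # stand-in for the coupled item similarity
L = np.diag(S.sum(axis=1)) - S                                   # graph Laplacian of the coupling

U = 0.1 * rng.normal(size=(n_users, k))
V = 0.1 * rng.normal(size=(n_items, k))
lr, lam, beta = 0.01, 0.05, 0.05

for _ in range(200):
    E = mask * (R - U @ V.T)                     # reconstruction error on observed ratings only
    U += lr * (E @ V - lam * U)
    # beta * L @ V penalizes differences between factors of strongly coupled items,
    # playing the role that the implicit item coupling plays in the CIMF model above.
    V += lr * (E.T @ U - lam * V - beta * L @ V)

rmse = np.sqrt(((mask * (R - U @ V.T)) ** 2).sum() / mask.sum())
print("training RMSE:", round(float(rmse), 3))
```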
[ "Fangfang Li, Guandong Xu, Longbing Cao", "['Fangfang Li' 'Guandong Xu' 'Longbing Cao']" ]