categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
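The field list above maps directly onto a simple record type. Below is a minimal sketch of how one flat row of this dump could be represented and parsed in Python; the `Paper` dataclass and the `parse_row` helper are illustrative names rather than part of any original data release, and the assumption that missing values arrive as the literal string "null" follows the rows shown below.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Paper:
    categories: str
    doi: Optional[str]
    id: str
    year: Optional[float]
    venue: Optional[str]
    link: str
    updated: str
    published: str
    title: str
    abstract: str
    authors: List[str]

# Field names in the order they appear in each record of the dump.
FIELDS = ["categories", "doi", "id", "year", "venue", "link",
          "updated", "published", "title", "abstract", "authors"]

def parse_row(values):
    """Build a Paper from one flat row of values given in schema order,
    treating the literal string 'null' (or None) as a missing value."""
    row = dict(zip(FIELDS, values))
    clean = lambda v: None if v in (None, "null") else v
    return Paper(
        categories=row["categories"],
        doi=clean(row["doi"]),
        id=row["id"],
        year=float(row["year"]) if clean(row["year"]) else None,
        venue=clean(row["venue"]),
        link=row["link"],
        updated=row["updated"],
        published=row["published"],
        title=row["title"],
        abstract=row["abstract"],
        authors=list(row["authors"]) if isinstance(row["authors"], (list, tuple)) else [row["authors"]],
    )
```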
cs.LG stat.ML
null
1605.08375
null
null
http://arxiv.org/pdf/1605.08375v1
2016-05-26T17:37:51Z
2016-05-26T17:37:51Z
Generalization Properties and Implicit Regularization for Multiple Passes SGM
We study the generalization properties of stochastic gradient methods for learning with convex loss functions and linearly parameterized functions. We show that, in the absence of penalizations or constraints, the stability and approximation properties of the algorithm can be controlled by tuning either the step-size or the number of passes over the data. In this view, these parameters can be seen to control a form of implicit regularization. Numerical results complement the theoretical findings.
[ "['Junhong Lin' 'Raffaello Camoriano' 'Lorenzo Rosasco']", "Junhong Lin, Raffaello Camoriano, Lorenzo Rosasco" ]
cs.LG physics.data-an stat.ML
null
1605.08455
null
null
http://arxiv.org/pdf/1605.08455v1
2016-05-26T21:27:11Z
2016-05-26T21:27:11Z
Suppressing Background Radiation Using Poisson Principal Component Analysis
Performance of nuclear threat detection systems based on gamma-ray spectrometry often strongly depends on the ability to identify the part of measured signal that can be attributed to background radiation. We have successfully applied a method based on Principal Component Analysis (PCA) to obtain a compact null-space model of background spectra using PCA projection residuals to derive a source detection score. We have shown the method's utility in a threat detection system using mobile spectrometers in urban scenes (Tandon et al 2012). While it is commonly assumed that measured photon counts follow a Poisson process, standard PCA makes a Gaussian assumption about the data distribution, which may be a poor approximation when photon counts are low. This paper studies whether and in what conditions PCA with a Poisson-based loss function (Poisson PCA) can outperform standard Gaussian PCA in modeling background radiation to enable more sensitive and specific nuclear threat detection.
[ "['P. Tandon' 'P. Huggins' 'A. Dubrawski' 'S. Labov' 'K. Nelson']", "P. Tandon (1), P. Huggins (1), A. Dubrawski (1), S. Labov (2), K.\n Nelson (2) ((1) Auton Lab, Carnegie Mellon University, (2) Lawrence Livermore\n National Laboratory)" ]
cs.LG cs.AI
null
1605.08478
null
null
http://arxiv.org/pdf/1605.08478v1
2016-05-26T23:43:32Z
2016-05-26T23:43:32Z
Model-Free Imitation Learning with Policy Optimization
In imitation learning, an agent learns how to behave in an environment with an unknown cost function by mimicking expert demonstrations. Existing imitation learning algorithms typically involve solving a sequence of planning or reinforcement learning problems. Such algorithms are therefore not directly applicable to large, high-dimensional environments, and their performance can significantly degrade if the planning problems are not solved to optimality. Under the apprenticeship learning formalism, we develop alternative model-free algorithms for finding a parameterized stochastic policy that performs at least as well as an expert policy on an unknown cost function, based on sample trajectories from the expert. Our approach, based on policy gradients, scales to large continuous environments with guaranteed convergence to local minima.
[ "Jonathan Ho, Jayesh K. Gupta, Stefano Ermon", "['Jonathan Ho' 'Jayesh K. Gupta' 'Stefano Ermon']" ]
cs.LG
null
1605.08481
null
null
http://arxiv.org/pdf/1605.08481v1
2016-05-27T00:23:39Z
2016-05-27T00:23:39Z
Open Problem: Best Arm Identification: Almost Instance-Wise Optimality and the Gap Entropy Conjecture
The best arm identification problem (BEST-1-ARM) is the most basic pure exploration problem in stochastic multi-armed bandits. The problem has a long history and attracted significant attention for the last decade. However, we do not yet have a complete understanding of the optimal sample complexity of the problem: The state-of-the-art algorithms achieve a sample complexity of $O(\sum_{i=2}^{n} \Delta_{i}^{-2}(\ln\delta^{-1} + \ln\ln\Delta_i^{-1}))$ ($\Delta_{i}$ is the difference between the largest mean and the $i^{th}$ mean), while the best known lower bound is $\Omega(\sum_{i=2}^{n} \Delta_{i}^{-2}\ln\delta^{-1})$ for general instances and $\Omega(\Delta^{-2} \ln\ln \Delta^{-1})$ for the two-arm instances. We propose to study the instance-wise optimality for the BEST-1-ARM problem. Previous work has proved that it is impossible to have an instance optimal algorithm for the 2-arm problem. However, we conjecture that modulo the additive term $\Omega(\Delta_2^{-2} \ln\ln \Delta_2^{-1})$ (which is an upper bound and worst case lower bound for the 2-arm problem), there is an instance optimal algorithm for BEST-1-ARM. Moreover, we introduce a new quantity, called the gap entropy for a best-arm problem instance, and conjecture that it is the instance-wise lower bound. Hence, resolving this conjecture would provide a final answer to the old and basic problem.
[ "Lijie Chen and Jian Li", "['Lijie Chen' 'Jian Li']" ]
cs.LG stat.ML
null
1605.08491
null
null
http://arxiv.org/pdf/1605.08491v1
2016-05-27T02:18:43Z
2016-05-27T02:18:43Z
Provable Algorithms for Inference in Topic Models
Recently, there has been considerable progress on designing algorithms with provable guarantees -- typically using linear algebraic methods -- for parameter learning in latent variable models. But designing provable algorithms for inference has proven to be more challenging. Here we take a first step towards provable inference in topic models. We leverage a property of topic models that enables us to construct simple linear estimators for the unknown topic proportions that have small variance, and consequently can work with short documents. Our estimators also correspond to finding an estimate around which the posterior is well-concentrated. We show lower bounds indicating that for shorter documents it can be information-theoretically impossible to find the hidden topics. Finally, we give empirical results that demonstrate that our algorithm works on realistic topic models. It yields good solutions on synthetic data and runs in time comparable to a {\em single} iteration of Gibbs sampling.
[ "['Sanjeev Arora' 'Rong Ge' 'Frederic Koehler' 'Tengyu Ma' 'Ankur Moitra']", "Sanjeev Arora, Rong Ge, Frederic Koehler, Tengyu Ma, Ankur Moitra" ]
cs.LG
null
1605.08497
null
null
http://arxiv.org/pdf/1605.08497v1
2016-05-27T03:11:41Z
2016-05-27T03:11:41Z
Universum Learning for SVM Regression
This paper extends the idea of Universum learning [18, 19] to regression problems. We propose a new Universum-SVM formulation for regression problems that incorporates a priori knowledge in the form of additional data samples. These additional data samples, or Universum, belong to the same application domain as the training samples, but they follow a different distribution. Several empirical comparisons are presented to illustrate the utility of the proposed approach.
[ "['Sauptik Dhar' 'Vladimir Cherkassky']", "Sauptik Dhar, Vladimir Cherkassky" ]
stat.ML cs.LG
null
1605.08501
null
null
http://arxiv.org/pdf/1605.08501v1
2016-05-27T03:28:41Z
2016-05-27T03:28:41Z
Local Region Sparse Learning for Image-on-Scalar Regression
Identification of regions of interest (ROI) associated with a certain disease has a great impact on public health. Imposing sparsity of pixel values and extracting active regions simultaneously greatly complicate the image analysis. We address these challenges by introducing a novel region-selection penalty in the framework of image-on-scalar regression. Our penalty combines the Smoothly Clipped Absolute Deviation (SCAD) regularization, enforcing sparsity, and the SCAD of total variation (TV) regularization, enforcing spatial contiguity, into one group, which segments contiguous spatial regions against a zero-valued background. An efficient algorithm is based on the alternating direction method of multipliers (ADMM), which decomposes the non-convex problem into two iterative optimization problems with explicit solutions. Another virtue of the proposed method is that a divide-and-conquer learning algorithm is developed, thereby allowing scaling to large images. Several examples are presented and the experimental results are compared with other state-of-the-art approaches.
[ "['Yao Chen' 'Xiao Wang' 'Linglong Kong' 'Hongtu Zhu']", "Yao Chen, Xiao Wang, Linglong Kong and Hongtu Zhu" ]
cs.LG cs.CV cs.NE
null
1605.08512
null
null
http://arxiv.org/pdf/1605.08512v1
2016-05-27T06:02:48Z
2016-05-27T06:02:48Z
SNN: Stacked Neural Networks
It has been proven that transfer learning provides an easy way to achieve state-of-the-art accuracies on several vision tasks by training a simple classifier on top of features obtained from pre-trained neural networks. The goal of this work is to generate better features for transfer learning from multiple publicly available pre-trained neural networks. To this end, we propose a novel architecture called Stacked Neural Networks which leverages the fast training time of transfer learning while simultaneously being much more accurate. We show that using a stacked NN architecture can result in up to 8% improvements in accuracy over state-of-the-art techniques using only one pre-trained network for transfer learning. A second aim of this work is to make network fine-tuning retain the generalizability of the base network to unseen tasks. To this end, we propose a new technique called "joint fine-tuning" that is able to give accuracies comparable to fine-tuning the same network individually over two datasets. We also show that a jointly fine-tuned network generalizes better to unseen tasks when compared to a network fine-tuned over a single task.
[ "Milad Mohammadi, Subhasis Das", "['Milad Mohammadi' 'Subhasis Das']" ]
math.OC cs.LG math.NA
null
1605.08527
null
null
http://arxiv.org/pdf/1605.08527v1
2016-05-27T07:47:30Z
2016-05-27T07:47:30Z
Stochastic Optimization for Large-scale Optimal Transport
Optimal transport (OT) defines a powerful framework to compare probability distributions in a geometrically faithful way. However, the practical impact of OT is still limited because of its computational burden. We propose a new class of stochastic optimization algorithms to cope with large-scale problems routinely encountered in machine learning applications. These methods are able to manipulate arbitrary distributions (either discrete or continuous) by simply requiring the ability to draw samples from them, which is the typical setup in high-dimensional learning problems. This alleviates the need to discretize these densities, while giving access to provably convergent methods that output the correct distance without discretization error. These algorithms rely on two main ideas: (a) the dual OT problem can be re-cast as the maximization of an expectation; (b) entropic regularization of the primal OT problem results in a smooth dual optimization problem which can be addressed with algorithms that have a provably faster convergence. We instantiate these ideas in three different setups: (i) when comparing a discrete distribution to another, we show that incremental stochastic optimization schemes can beat Sinkhorn's algorithm, the current state-of-the-art finite dimensional OT solver; (ii) when comparing a discrete distribution to a continuous density, a semi-discrete reformulation of the dual program is amenable to averaged stochastic gradient descent, leading to better performance than approximately solving the problem by discretization; (iii) when dealing with two continuous densities, we propose a stochastic gradient descent over a reproducing kernel Hilbert space (RKHS). This is currently the only known method to solve this problem, apart from computing OT on finite samples. We back up these claims on a set of discrete, semi-discrete and continuous benchmark problems.
[ "['Genevay Aude' 'Marco Cuturi' 'Gabriel Peyré' 'Francis Bach']", "Genevay Aude (MOKAPLAN, CEREMADE), Marco Cuturi, Gabriel Peyr\\'e\n (MOKAPLAN, CEREMADE), Francis Bach (SIERRA, LIENS)" ]
cs.SE cs.CL cs.LG cs.NE
null
1605.08535
null
null
http://arxiv.org/pdf/1605.08535v3
2017-07-14T01:22:18Z
2016-05-27T08:27:18Z
Deep API Learning
Developers often wonder how to implement a certain functionality (e.g., how to parse XML files) using APIs. Obtaining an API usage sequence based on an API-related natural language query is very helpful in this regard. Given a query, existing approaches utilize information retrieval models to search for matching API sequences. These approaches treat queries and APIs as bag-of-words (i.e., keyword matching or word-to-word alignment) and lack a deep understanding of the semantics of the query. We propose DeepAPI, a deep learning based approach to generate API usage sequences for a given natural language query. Instead of a bag-of-words assumption, it learns the sequence of words in a query and the sequence of associated APIs. DeepAPI adapts a neural language model named RNN Encoder-Decoder. It encodes a word sequence (user query) into a fixed-length context vector, and generates an API sequence based on the context vector. We also augment the RNN Encoder-Decoder by considering the importance of individual APIs. We empirically evaluate our approach with more than 7 million annotated code snippets collected from GitHub. The results show that our approach generates largely accurate API sequences and outperforms the related approaches.
[ "Xiaodong Gu, Hongyu Zhang, Dongmei Zhang, Sunghun Kim", "['Xiaodong Gu' 'Hongyu Zhang' 'Dongmei Zhang' 'Sunghun Kim']" ]
cs.LG stat.ML
null
1605.08618
null
null
http://arxiv.org/pdf/1605.08618v1
2016-05-27T13:00:31Z
2016-05-27T13:00:31Z
Variational Bayesian Inference for Hidden Markov Models With Multivariate Gaussian Output Distributions
Hidden Markov Models (HMM) have been used for several years in many time series analysis or pattern recognition tasks. HMM are often trained by means of the Baum-Welch algorithm which can be seen as a special variant of an expectation maximization (EM) algorithm. Second-order training techniques such as Variational Bayesian Inference (VI) for probabilistic models regard the parameters of the probabilistic models as random variables and define distributions over these distribution parameters, hence the name of this technique. VI can also be regarded as a special case of an EM algorithm. In this article, we bring both together and train HMM with multivariate Gaussian output distributions with VI. The article defines the new training technique for HMM. An evaluation based on some case studies and a comparison to related approaches is part of our ongoing work.
[ "['Christian Gruhl' 'Bernhard Sick']", "Christian Gruhl, Bernhard Sick" ]
stat.ML cs.LG
null
1605.08636
null
null
http://arxiv.org/pdf/1605.08636v4
2017-02-13T17:14:52Z
2016-05-27T13:41:33Z
PAC-Bayesian Theory Meets Bayesian Inference
We exhibit a strong link between frequentist PAC-Bayesian risk bounds and the Bayesian marginal likelihood. That is, for the negative log-likelihood loss function, we show that the minimization of PAC-Bayesian generalization risk bounds maximizes the Bayesian marginal likelihood. This provides an alternative explanation of the Bayesian Occam's razor criterion, under the assumption that the data is generated by an i.i.d. distribution. Moreover, as the negative log-likelihood is an unbounded loss function, we motivate and propose a PAC-Bayesian theorem tailored for the sub-gamma loss family, and we show that our approach is sound on classical Bayesian linear regression tasks.
[ "['Pascal Germain' 'Francis Bach' 'Alexandre Lacoste'\n 'Simon Lacoste-Julien']", "Pascal Germain (INRIA Paris), Francis Bach (INRIA Paris), Alexandre\n Lacoste (Google), Simon Lacoste-Julien (INRIA Paris)" ]
stat.ML cs.LG
null
1605.08671
null
null
http://arxiv.org/pdf/1605.08671v1
2016-05-27T14:35:29Z
2016-05-27T14:35:29Z
An optimal algorithm for the Thresholding Bandit Problem
We study a specific \textit{combinatorial pure exploration stochastic bandit problem} where the learner aims at finding the set of arms whose means are above a given threshold, up to a given precision, and \textit{for a fixed time horizon}. We propose a parameter-free algorithm based on an original heuristic, and prove that it is optimal for this problem by deriving matching upper and lower bounds. To the best of our knowledge, this is the first non-trivial pure exploration setting with \textit{fixed budget} for which optimal strategies are constructed.
[ "['Andrea Locatelli' 'Maurilio Gutzeit' 'Alexandra Carpentier']", "Andrea Locatelli, Maurilio Gutzeit, and Alexandra Carpentier" ]
cs.LG
null
1605.08722
null
null
http://arxiv.org/pdf/1605.08722v1
2016-05-27T17:28:35Z
2016-05-27T17:28:35Z
An algorithm with nearly optimal pseudo-regret for both stochastic and adversarial bandits
We present an algorithm that achieves almost optimal pseudo-regret bounds against adversarial and stochastic bandits. Against adversarial bandits the pseudo-regret is $O(K\sqrt{n \log n})$ and against stochastic bandits the pseudo-regret is $O(\sum_i (\log n)/\Delta_i)$. We also show that no algorithm with $O(\log n)$ pseudo-regret against stochastic bandits can achieve $\tilde{O}(\sqrt{n})$ expected regret against adaptive adversarial bandits. This complements previous results of Bubeck and Slivkins (2012) that show $\tilde{O}(\sqrt{n})$ expected adversarial regret with $O((\log n)^2)$ stochastic pseudo-regret.
[ "Peter Auer and Chao-Kai Chiang", "['Peter Auer' 'Chao-Kai Chiang']" ]
cs.DS cs.LG math.NA math.OC
null
1605.08754
null
null
http://arxiv.org/pdf/1605.08754v1
2016-05-26T03:53:00Z
2016-05-26T03:53:00Z
Faster Eigenvector Computation via Shift-and-Invert Preconditioning
We give faster algorithms and improved sample complexities for estimating the top eigenvector of a matrix $\Sigma$ -- i.e. computing a unit vector $x$ such that $x^T \Sigma x \ge (1-\epsilon)\lambda_1(\Sigma)$: Offline Eigenvector Estimation: Given an explicit $A \in \mathbb{R}^{n \times d}$ with $\Sigma = A^TA$, we show how to compute an $\epsilon$ approximate top eigenvector in time $\tilde O([nnz(A) + \frac{d*sr(A)}{gap^2} ]* \log 1/\epsilon )$ and $\tilde O([\frac{nnz(A)^{3/4} (d*sr(A))^{1/4}}{\sqrt{gap}} ] * \log 1/\epsilon )$. Here $nnz(A)$ is the number of nonzeros in $A$, $sr(A)$ is the stable rank, $gap$ is the relative eigengap. By separating the $gap$ dependence from the $nnz(A)$ term, our first runtime improves upon the classical power and Lanczos methods. It also improves prior work using fast subspace embeddings [AC09, CW13] and stochastic optimization [Sha15c], giving significantly better dependencies on $sr(A)$ and $\epsilon$. Our second running time improves these further when $nnz(A) \le \frac{d*sr(A)}{gap^2}$. Online Eigenvector Estimation: Given a distribution $D$ with covariance matrix $\Sigma$ and a vector $x_0$ which is an $O(gap)$ approximate top eigenvector for $\Sigma$, we show how to refine to an $\epsilon$ approximation using $ O(\frac{var(D)}{gap*\epsilon})$ samples from $D$. Here $var(D)$ is a natural notion of variance. Combining our algorithm with previous work to initialize $x_0$, we obtain improved sample complexity and runtime results under a variety of assumptions on $D$. We achieve our results using a general framework that we believe is of independent interest. We give a robust analysis of the classic method of shift-and-invert preconditioning to reduce eigenvector computation to approximately solving a sequence of linear systems. We then apply fast stochastic variance reduced gradient (SVRG) based system solvers to achieve our claims.
[ "Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco,\n Praneeth Netrapalli, Aaron Sidford", "['Dan Garber' 'Elad Hazan' 'Chi Jin' 'Sham M. Kakade' 'Cameron Musco'\n 'Praneeth Netrapalli' 'Aaron Sidford']" ]
cs.CL cs.CV cs.LG
null
1605.08764
null
null
http://arxiv.org/pdf/1605.08764v1
2016-05-27T19:31:54Z
2016-05-27T19:31:54Z
Stacking With Auxiliary Features
Ensembling methods are well known for improving prediction accuracy. However, they are limited in the sense that they cannot discriminate among component models effectively. In this paper, we propose stacking with auxiliary features that learns to fuse relevant information from multiple systems to improve performance. Auxiliary features enable the stacker to rely on systems that not just agree on an output but also the provenance of the output. We demonstrate our approach on three very different and difficult problems -- the Cold Start Slot Filling, the Tri-lingual Entity Discovery and Linking and the ImageNet object detection tasks. We obtain new state-of-the-art results on the first two tasks and substantial improvements on the detection task, thus verifying the power and generality of our approach.
[ "['Nazneen Fatema Rajani' 'Raymond J. Mooney']", "Nazneen Fatema Rajani and Raymond J. Mooney" ]
stat.ML cs.LG
null
1605.08798
null
null
http://arxiv.org/pdf/1605.08798v2
2016-10-14T21:09:35Z
2016-05-27T20:44:28Z
Asymptotic Analysis of Objectives based on Fisher Information in Active Learning
Obtaining labels can be costly and time-consuming. Active learning allows a learning algorithm to intelligently query samples to be labeled for efficient learning. Fisher information ratio (FIR) has been used as an objective for selecting queries in active learning. However, little is known about the theory behind the use of FIR for active learning. There is a gap between the underlying theory and the motivation of its usage in practice. In this paper, we attempt to fill this gap and provide a rigorous framework for analyzing existing FIR-based active learning methods. In particular, we show that FIR can be asymptotically viewed as an upper bound of the expected variance of the log-likelihood ratio. Additionally, our analysis suggests a unifying framework that not only enables us to make theoretical comparisons among the existing querying methods based on FIR, but also allows us to give insight into the development of new active learning approaches based on this objective.
[ "['Jamshid Sourati' 'Murat Akcakaya' 'Todd K. Leen' 'Deniz Erdogmus'\n 'Jennifer G. Dy']", "Jamshid Sourati, Murat Akcakaya, Todd K. Leen, Deniz Erdogmus,\n Jennifer G. Dy" ]
cs.LG cs.AI cs.NE stat.ML
null
1605.08803
null
null
http://arxiv.org/pdf/1605.08803v3
2017-02-27T23:21:10Z
2016-05-27T21:24:32Z
Density estimation using Real NVP
Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful invertible and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact sampling, exact inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation and latent variable manipulations.
[ "Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio", "['Laurent Dinh' 'Jascha Sohl-Dickstein' 'Samy Bengio']" ]
cs.LG stat.ML
null
1605.08833
null
null
http://arxiv.org/pdf/1605.08833v1
2016-05-28T02:39:24Z
2016-05-28T02:39:24Z
Muffled Semi-Supervised Learning
We explore a novel approach to semi-supervised learning. This approach is contrary to the common approach in that the unlabeled examples serve to "muffle," rather than enhance, the guidance provided by the labeled examples. We provide several variants of the basic algorithm and show experimentally that they can achieve significantly higher AUC than boosted trees, random forests and logistic regression when unlabeled examples are available.
[ "Akshay Balsubramani, Yoav Freund", "['Akshay Balsubramani' 'Yoav Freund']" ]
cs.LG
null
1605.08838
null
null
http://arxiv.org/pdf/1605.08838v2
2017-06-15T01:30:11Z
2016-05-28T03:21:44Z
Dueling Bandits with Dependent Arms
We study dueling bandits with weak utility-based regret when preferences over arms have a total order and carry observable feature vectors. The order is assumed to be determined by these feature vectors, an unknown preference vector, and a known utility function. This structure introduces dependence between preferences for pairs of arms, and allows learning about the preference over one pair of arms from the preference over another pair of arms. We propose an algorithm for this setting called Comparing The Best (CTB), which we show has constant expected cumulative weak utility-based regret. We provide a Bayesian interpretation for CTB, an implementation appropriate for a small number of arms, and an alternate implementation for many arms that can be used when the input parameters satisfy a decomposability condition. We demonstrate through numerical experiments that CTB with appropriate input parameters outperforms all benchmarks considered.
[ "['Bangrui Chen' 'Peter I. Frazier']", "Bangrui Chen, Peter I. Frazier" ]
cs.LG cs.IR
null
1605.08872
null
null
http://arxiv.org/pdf/1605.08872v1
2016-05-28T10:17:37Z
2016-05-28T10:17:37Z
Online Bayesian Collaborative Topic Regression
Collaborative Topic Regression (CTR) combines ideas of probabilistic matrix factorization (PMF) and topic modeling (e.g., LDA) for recommender systems, which has gained increasing success in many applications. Despite enjoying many advantages, the existing CTR algorithms have some critical limitations. First of all, they are often designed to work in a batch learning manner, making them unsuitable to deal with streaming data or big data in real-world recommender systems. Second, the document-specific topic proportions of LDA are fed to the downstream PMF, but not the reverse, which is sub-optimal as the rating information is not exploited in discovering the low-dimensional representation of documents and thus can result in a sub-optimal representation for prediction. In this paper, we propose a novel scheme of Online Bayesian Collaborative Topic Regression (OBCTR) which is efficient and scalable for learning from data streams. Particularly, we {\it jointly} optimize the combined objective function of both PMF and LDA in an online learning fashion, in which the PMF and LDA tasks can reinforce each other during the online learning process. Our encouraging experimental results on real-world data validate the effectiveness of the proposed method.
[ "Chenghao Liu, Tao Jin, Steven C.H. Hoi, Peilin Zhao, Jianling Sun", "['Chenghao Liu' 'Tao Jin' 'Steven C. H. Hoi' 'Peilin Zhao' 'Jianling Sun']" ]
cs.LG math.OC stat.ML
null
1605.08882
null
null
http://arxiv.org/pdf/1605.08882v3
2019-03-15T02:07:20Z
2016-05-28T12:11:22Z
Optimal Rates for Multi-pass Stochastic Gradient Methods
We analyze the learning properties of the stochastic gradient method when multiple passes over the data and mini-batches are allowed. We study how regularization properties are controlled by the step-size, the number of passes and the mini-batch size. In particular, we consider the square loss and show that for a universal step-size choice, the number of passes acts as a regularization parameter, and optimal finite sample bounds can be achieved by early-stopping. Moreover, we show that larger step-sizes are allowed when considering mini-batches. Our analysis is based on a unifying approach, encompassing both batch and stochastic gradient methods as special cases. As a byproduct, we derive optimal convergence results for batch gradient methods (even in the non-attainable cases).
[ "Junhong Lin, Lorenzo Rosasco", "['Junhong Lin' 'Lorenzo Rosasco']" ]
math.ST cs.LG stat.TH
null
1605.08988
null
null
http://arxiv.org/pdf/1605.08988v2
2016-11-14T12:40:20Z
2016-05-29T10:35:33Z
On Explore-Then-Commit Strategies
We study the problem of minimising regret in two-armed bandit problems with Gaussian rewards. Our objective is to use this simple setting to illustrate that strategies based on an exploration phase (up to a stopping time) followed by exploitation are necessarily suboptimal. The results hold regardless of whether or not the difference in means between the two arms is known. Besides the main message, we also refine existing deviation inequalities, which allow us to design fully sequential strategies with finite-time regret guarantees that are (a) asymptotically optimal as the horizon grows and (b) order-optimal in the minimax sense. Furthermore, we provide empirical evidence that the theory also holds in practice and discuss extensions to the non-Gaussian and multiple-armed cases.
[ "['Aurélien Garivier' 'Emilie Kaufmann' 'Tor Lattimore']", "Aur\\'elien Garivier (IMT), Emilie Kaufmann (SEQUEL, CRIStAL, CNRS),\n Tor Lattimore" ]
stat.ML cs.LG
null
1605.09004
null
null
http://arxiv.org/pdf/1605.09004v1
2016-05-29T13:59:48Z
2016-05-29T13:59:48Z
Tight (Lower) Bounds for the Fixed Budget Best Arm Identification Bandit Problem
We consider the problem of \textit{best arm identification} with a \textit{fixed budget $T$}, in the $K$-armed stochastic bandit setting, with arms distribution defined on $[0,1]$. We prove that any bandit strategy, for at least one bandit problem characterized by a complexity $H$, will misidentify the best arm with probability lower bounded by $$\exp\Big(-\frac{T}{\log(K)H}\Big),$$ where $H$ is the sum for all sub-optimal arms of the inverse of the squared gaps. Our result disproves formally the general belief - coming from results in the fixed confidence setting - that there must exist an algorithm for this problem whose probability of error is upper bounded by $\exp(-T/H)$. This also proves that some existing strategies based on the Successive Rejection of the arms are optimal - closing therefore the current gap between upper and lower bounds for the fixed budget best arm identification problem.
[ "['Alexandra Carpentier' 'Andrea Locatelli']", "Alexandra Carpentier and Andrea Locatelli" ]
cs.LG stat.ML
null
1605.09046
null
null
http://arxiv.org/pdf/1605.09046v2
2016-06-06T15:05:31Z
2016-05-29T19:07:09Z
TripleSpin - a generic compact paradigm for fast machine learning computations
We present a generic compact computational framework relying on structured random matrices that can be applied to speed up several machine learning algorithms with almost no loss of accuracy. The applications include new fast LSH-based algorithms, efficient kernel computations via random feature maps, convex optimization algorithms, quantization techniques and many more. Certain models of the presented paradigm are even more compressible since they apply only bit matrices. This makes them suitable for deploying on mobile devices. All our findings come with strong theoretical guarantees. In particular, as a byproduct of the presented techniques and by using relatively new Berry-Esseen-type CLT for random vectors, we give the first theoretical guarantees for one of the most efficient existing LSH algorithms based on the $\textbf{HD}_{3}\textbf{HD}_{2}\textbf{HD}_{1}$ structured matrix ("Practical and Optimal LSH for Angular Distance"). These guarantees as well as theoretical results for other aforementioned applications follow from the same general theoretical principle that we present in the paper. Our structured family contains as special cases all previously considered structured schemes, including the recently introduced $P$-model. Experimental evaluation confirms the accuracy and efficiency of TripleSpin matrices.
[ "['Krzysztof Choromanski' 'Francois Fagan' 'Cedric Gouy-Pailler'\n 'Anne Morvan' 'Tamas Sarlos' 'Jamal Atif']", "Krzysztof Choromanski, Francois Fagan, Cedric Gouy-Pailler, Anne\n Morvan, Tamas Sarlos, Jamal Atif" ]
cs.LG cs.NA stat.ML
null
1605.09049
null
null
http://arxiv.org/pdf/1605.09049v1
2016-05-29T19:21:22Z
2016-05-29T19:21:22Z
Recycling Randomness with Structure for Sublinear time Kernel Expansions
We propose a scheme for recycling Gaussian random vectors into structured matrices to approximate various kernel functions in sublinear time via random embeddings. Our framework includes the Fastfood construction as a special case, but also extends to Circulant, Toeplitz and Hankel matrices, and the broader family of structured matrices that are characterized by the concept of low-displacement rank. We introduce notions of coherence and graph-theoretic structural constants that control the approximation quality, and prove unbiasedness and low-variance properties of random feature maps that arise within our framework. For the case of low-displacement matrices, we show how the degree of structure and randomness can be controlled to reduce statistical variance at the cost of increased computation and storage requirements. Empirical results strongly support our theory and justify the use of a broader family of structured matrices for scaling up kernel methods using random features.
[ "['Krzysztof Choromanski' 'Vikas Sindhwani']", "Krzysztof Choromanski, Vikas Sindhwani" ]
cs.LG
null
1605.09066
null
null
http://arxiv.org/pdf/1605.09066v4
2017-10-27T03:06:07Z
2016-05-29T21:33:07Z
Distributed Asynchronous Dual Free Stochastic Dual Coordinate Ascent
The primal-dual distributed optimization methods have broad large-scale machine learning applications. Previous primal-dual distributed methods are not applicable when the dual formulation is not available, e.g. the sum-of-non-convex objectives. Moreover, these algorithms and theoretical analysis are based on the fundamental assumption that the computing speeds of multiple machines in a cluster are similar. However, the straggler problem is an unavoidable practical issue in the distributed system because of the existence of slow machines. Therefore, the total computational time of the distributed optimization methods is highly dependent on the slowest machine. In this paper, we address these two issues by proposing a distributed asynchronous dual-free stochastic dual coordinate ascent algorithm for distributed optimization. Our method does not need the dual formulation of the target problem in the optimization. We tackle the straggler problem through asynchronous communication and the negative effect of slow machines is significantly alleviated. We also analyze the convergence rate of our method and prove a linear convergence rate even if the individual functions in the objective are non-convex. Experiments on both convex and non-convex loss functions are used to validate our statements.
[ "['Zhouyuan Huo' 'Heng Huang']", "Zhouyuan Huo and Heng Huang" ]
cs.LG stat.ML
null
1605.09068
null
null
http://arxiv.org/pdf/1605.09068v3
2017-06-08T18:27:39Z
2016-05-29T21:50:25Z
A budget-constrained inverse classification framework for smooth classifiers
Inverse classification is the process of manipulating an instance such that it is more likely to conform to a specific class. Past methods that address such a problem have shortcomings. Greedy methods make changes that are overly radical, often relying on data that is strictly discrete. Other methods rely on certain data points, the presence of which cannot be guaranteed. In this paper we propose a general framework and method that overcomes these and other limitations. The formulation of our method can use any differentiable classification function. We demonstrate the method by using logistic regression and Gaussian kernel SVMs. We constrain the inverse classification to occur on features that can actually be changed, each of which incurs an individual cost. We further subject such changes to fall within a certain level of cumulative change (budget). Our framework can also accommodate the estimation of (indirectly changeable) features whose values change as a consequence of actions taken. Furthermore, we propose two methods for specifying feature-value ranges that result in different algorithmic behavior. We apply our method, and a proposed sensitivity analysis-based benchmark method, to two freely available datasets: Student Performance from the UCI Machine Learning Repository and a real world cardiovascular disease dataset. The results obtained demonstrate the validity and benefits of our framework and method.
[ "Michael T. Lash, Qihang Lin, W. Nick Street and Jennifer G. Robinson", "['Michael T. Lash' 'Qihang Lin' 'W. Nick Street' 'Jennifer G. Robinson']" ]
cs.LG stat.ML
null
1605.09080
null
null
http://arxiv.org/pdf/1605.09080v5
2016-11-13T20:24:02Z
2016-05-30T00:32:11Z
Spectral Methods for Correlated Topic Models
In this paper, we propose guaranteed spectral methods for learning a broad range of topic models, which generalize the popular Latent Dirichlet Allocation (LDA). We overcome the limitation of LDA to incorporate arbitrary topic correlations, by assuming that the hidden topic proportions are drawn from a flexible class of Normalized Infinitely Divisible (NID) distributions. NID distributions are generated through the process of normalizing a family of independent Infinitely Divisible (ID) random variables. The Dirichlet distribution is a special case obtained by normalizing a set of Gamma random variables. We prove that this flexible topic model class can be learned via spectral methods using only moments up to the third order, with (low order) polynomial sample and computational complexity. The proof is based on a key new technique derived here that allows us to diagonalize the moments of the NID distribution through an efficient procedure that requires evaluating only univariate integrals, despite the fact that we are handling high dimensional multivariate moments. In order to assess the performance of our proposed Latent NID topic model, we use two real datasets of articles collected from New York Times and Pubmed. Our experiments yield improved perplexity on both datasets compared with the baseline.
[ "Forough Arabshahi, Animashree Anandkumar", "['Forough Arabshahi' 'Animashree Anandkumar']" ]
cs.LG
null
1605.09082
null
null
http://arxiv.org/pdf/1605.09082v1
2016-05-30T01:18:47Z
2016-05-30T01:18:47Z
One-Pass Learning with Incremental and Decremental Features
In many real tasks the features are evolving, with some features vanishing and some other features being augmented. For example, in environment monitoring some sensors expire whereas new ones are deployed; in mobile game recommendation some games are dropped whereas new ones are added. Learning with such incremental and decremental features is crucial but rarely studied, particularly when the data arrive as a stream and it is thus infeasible to keep the whole data for optimization. In this paper, we study this challenging problem and present the OPID approach. Our approach attempts to compress important information of vanished features into functions of survived features, and then expands to include the augmented features. It is a one-pass learning approach, which only needs to scan each instance once and does not need to store the whole data, and thus suits the evolving, streaming nature of the data. The effectiveness of our approach is validated theoretically and empirically.
[ "Chenping Hou and Zhi-Hua Zhou", "['Chenping Hou' 'Zhi-Hua Zhou']" ]
cs.LG cs.CV stat.ML
null
1605.09085
null
null
http://arxiv.org/pdf/1605.09085v3
2019-08-30T14:38:32Z
2016-05-30T01:49:18Z
Stochastic Function Norm Regularization of Deep Networks
Deep neural networks have had an enormous impact on image analysis. State-of-the-art training methods, based on weight decay and DropOut, result in impressive performance when a very large training set is available. However, they tend to overfit severely on small data sets. Indeed, the available regularization methods deal with the complexity of the network function only indirectly. In this paper, we study the feasibility of directly using the $L_2$ function norm for regularization. Two methods to integrate this new regularization in the stochastic backpropagation are proposed. Moreover, the convergence of these new algorithms is studied. We finally show that they outperform the state-of-the-art methods in the low sample regime on benchmark datasets (MNIST and CIFAR10). The obtained results demonstrate very clear improvement, especially in the context of small sample regimes with data lying in a low-dimensional manifold. Source code of the method can be found at \url{https://github.com/AmalRT/DNN_Reg}.
[ "Amal Rannen Triki and Matthew B. Blaschko", "['Amal Rannen Triki' 'Matthew B. Blaschko']" ]
cs.LG
null
1605.09088
null
null
http://arxiv.org/pdf/1605.09088v2
2016-10-22T18:48:14Z
2016-05-30T02:35:07Z
The Bayesian Linear Information Filtering Problem
We present a Bayesian sequential decision-making formulation of the information filtering problem, in which an algorithm presents items (news articles, scientific papers, tweets) arriving in a stream, and learns relevance from user feedback on presented items. We model user preferences using a Bayesian linear model, similar in spirit to a Bayesian linear bandit. We compute a computational upper bound on the value of the optimal policy, which allows computing an optimality gap for implementable policies. We then use this analysis as motivation in introducing a pair of new Decompose-Then-Decide (DTD) heuristic policies, DTD-Dynamic-Programming (DTD-DP) and DTD-Upper-Confidence-Bound (DTD-UCB). We compare DTD-DP and DTD-UCB against several benchmarks on real and simulated data, demonstrating significant improvement, and show that the achieved performance is close to the upper bound.
[ "['Bangrui Chen' 'Peter I. Frazier']", "Bangrui Chen, Peter I. Frazier" ]
cs.LG cs.DC cs.NE math.OC stat.ML
null
1605.09114
null
null
http://arxiv.org/pdf/1605.09114v1
2016-05-30T06:31:14Z
2016-05-30T06:31:14Z
ParMAC: distributed optimisation of nested functions, with application to learning binary autoencoders
Many powerful machine learning models are based on the composition of multiple processing layers, such as deep nets, which gives rise to nonconvex objective functions. A general, recent approach to optimise such "nested" functions is the method of auxiliary coordinates (MAC). MAC introduces an auxiliary coordinate for each data point in order to decouple the nested model into independent submodels. This decomposes the optimisation into steps that alternate between training single layers and updating the coordinates. It has the advantage that it reuses existing single-layer algorithms, introduces parallelism, and does not need to use chain-rule gradients, so it works with nondifferentiable layers. With large-scale problems, or when distributing the computation is necessary for faster training, the dataset may not fit in a single machine. It is then essential to limit the amount of communication between machines so it does not obliterate the benefit of parallelism. We describe a general way to achieve this, ParMAC. ParMAC works on a cluster of processing machines with a circular topology and alternates two steps until convergence: one step trains the submodels in parallel using stochastic updates, and the other trains the coordinates in parallel. Only submodel parameters, no data or coordinates, are ever communicated between machines. ParMAC exhibits high parallelism, low communication overhead, and facilitates data shuffling, load balancing, fault tolerance and streaming data processing. We study the convergence of ParMAC and propose a theoretical model of its runtime and parallel speedup. We develop ParMAC to learn binary autoencoders for fast, approximate image retrieval. We implement it in MPI in a distributed system and demonstrate nearly perfect speedups in a 128-processor cluster with a training set of 100 million high-dimensional points.
[ "Miguel \\'A. Carreira-Perpi\\~n\\'an and Mehdi Alizadeh", "['Miguel Á. Carreira-Perpiñán' 'Mehdi Alizadeh']" ]
cs.AI cs.CV cs.LG
null
1605.09128
null
null
http://arxiv.org/pdf/1605.09128v1
2016-05-30T07:40:13Z
2016-05-30T07:40:13Z
Control of Memory, Active Perception, and Action in Minecraft
In this paper, we introduce a new set of reinforcement learning (RL) tasks in Minecraft (a flexible 3D world). We then use these tasks to systematically compare and contrast existing deep reinforcement learning (DRL) architectures with our new memory-based DRL architectures. These tasks are designed to emphasize, in a controllable manner, issues that pose challenges for RL methods including partial observability (due to first-person visual observations), delayed rewards, high-dimensional visual observations, and the need to use active perception in a correct manner so as to perform well in the tasks. While these tasks are conceptually simple to describe, by virtue of having all of these challenges simultaneously they are difficult for current DRL architectures. Additionally, we evaluate the generalization performance of the architectures on environments not used during training. The experimental results show that our new architectures generalize to unseen environments better than existing DRL architectures.
[ "['Junhyuk Oh' 'Valliappa Chockalingam' 'Satinder Singh' 'Honglak Lee']", "Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, Honglak Lee" ]
cs.LG
null
1605.09131
null
null
http://arxiv.org/pdf/1605.09131v1
2016-05-30T07:57:41Z
2016-05-30T07:57:41Z
Classification under Streaming Emerging New Classes: A Solution using Completely Random Trees
This paper investigates an important problem in stream mining, i.e., classification under streaming emerging new classes or SENC. The common approach is to treat it as a classification problem and solve it using either a supervised learner or a semi-supervised learner. We propose an alternative approach by using unsupervised learning as the basis to solve this problem. The SENC problem can be decomposed into three sub problems: detecting emerging new classes, classifying for known classes, and updating models to enable classification of instances of the new class and detection of more emerging new classes. The proposed method employs completely random trees which have been shown to work well in unsupervised learning and supervised learning independently in the literature. This is the first time, as far as we know, that completely random trees are used as a single common core to solve all three sub problems: unsupervised learning, supervised learning and model update in data streams. We show that the proposed unsupervised-learning-focused method often achieves significantly better outcomes than existing classification-focused methods.
[ "Xin Mu and Kai Ming Ting and Zhi-Hua Zhou", "['Xin Mu' 'Kai Ming Ting' 'Zhi-Hua Zhou']" ]
cs.CV cs.LG stat.ML
null
1605.09136
null
null
http://arxiv.org/pdf/1605.09136v1
2016-05-30T08:26:28Z
2016-05-30T08:26:28Z
Hyperspectral Image Classification with Support Vector Machines on Kernel Distribution Embeddings
We propose a novel approach for pixel classification in hyperspectral images, leveraging both the spatial and spectral information in the data. The introduced method relies on a recently proposed framework for learning on distributions -- by representing them with mean elements in reproducing kernel Hilbert spaces (RKHS) and formulating a classification algorithm therein. In particular, we associate each pixel to an empirical distribution of its neighbouring pixels, a judicious representation of which in an RKHS, in conjunction with the spectral information contained in the pixel itself, gives a new explicit set of features that can be fed into a suite of standard classification techniques -- we opt for a well-established framework of support vector machines (SVM). Furthermore, the computational complexity is reduced via the random Fourier features formalism. We study the consistency and the convergence rates of the proposed method and the experiments demonstrate strong performance on hyperspectral data with gains in comparison to the state-of-the-art results.
[ "['Gianni Franchi' 'Jesus Angulo' 'Dino Sejdinovic']", "Gianni Franchi, Jesus Angulo, and Dino Sejdinovic" ]
cs.CL cs.LG cs.NE
10.18653/v1/W16-2358
1605.09186
null
null
http://arxiv.org/abs/1605.09186v4
2016-08-16T12:11:29Z
2016-05-30T11:47:00Z
Does Multimodality Help Human and Machine for Translation and Image Captioning?
This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge. We explored various comparative methods, namely phrase-based systems and attentional recurrent neural networks models trained using monomodal or multimodal data. We also performed a human evaluation in order to estimate the usefulness of multimodal data for human machine translation and image description generation. Our systems obtained the best results for both tasks according to the automatic evaluation metrics BLEU and METEOR.
[ "Ozan Caglayan, Walid Aransa, Yaxing Wang, Marc Masana, Mercedes\n Garc\\'ia-Mart\\'inez, Fethi Bougares, Lo\\\"ic Barrault, Joost van de Weijer", "['Ozan Caglayan' 'Walid Aransa' 'Yaxing Wang' 'Marc Masana'\n 'Mercedes García-Martínez' 'Fethi Bougares' 'Loïc Barrault'\n 'Joost van de Weijer']" ]
stat.ML cs.LG
null
1605.09196
null
null
http://arxiv.org/pdf/1605.09196v3
2016-07-04T12:53:27Z
2016-05-30T12:24:08Z
Forest Floor Visualizations of Random Forests
We propose a novel methodology, forest floor, to visualize and interpret random forest (RF) models. RF is a popular and useful tool for non-linear multi-variate classification and regression, which yields a good trade-off between robustness (low variance) and adaptiveness (low bias). Direct interpretation of an RF model is difficult, as the explicit ensemble model of hundreds of deep trees is complex. Nonetheless, it is possible to visualize an RF model fit by its mapping from feature space to prediction space. Hereby the user is first presented with the overall geometrical shape of the model structure, and when needed one can zoom in on local details. Dimensional reduction by projection is used to visualize high dimensional shapes. The traditional method to visualize RF model structure, partial dependence plots, achieves this by averaging multiple parallel projections. We suggest first using feature contributions, a method to decompose trees by splitting features, and then subsequently performing projections. The advantage of forest floor over partial dependence plots is that interactions are not masked by averaging. As a consequence, it is possible to locate interactions, which are not visualized in a given projection. Furthermore, we introduce: a goodness-of-visualization measure, use of colour gradients to identify interactions and an out-of-bag cross validated variant of feature contributions.
[ "Soeren H. Welling, Hanne H.F. Refsgaard, Per B. Brockhoff, Line H.\n Clemmensen", "['Soeren H. Welling' 'Hanne H. F. Refsgaard' 'Per B. Brockhoff'\n 'Line H. Clemmensen']" ]
cs.LG
null
1605.09221
null
null
http://arxiv.org/pdf/1605.09221v1
2016-05-30T13:23:04Z
2016-05-30T13:23:04Z
Deep Reinforcement Learning Radio Control and Signal Detection with KeRLym, a Gym RL Agent
This paper presents research in progress investigating the viability and adaptation of reinforcement learning using deep neural network based function approximation for the task of radio control and signal detection in the wireless domain. We demonstrate a successful initial method for radio control which allows naive learning of search without the need for expert features, heuristics, or search strategies. We also introduce Kerlym, an open Keras based reinforcement learning agent collection for OpenAI's Gym.
[ "Timothy J. O'Shea, T. Charles Clancy", "[\"Timothy J. O'Shea\" 'T. Charles Clancy']" ]
cs.LG cs.DS
null
1605.09227
null
null
http://arxiv.org/pdf/1605.09227v1
2016-05-30T13:38:47Z
2016-05-30T13:38:47Z
Learning Combinatorial Functions from Pairwise Comparisons
A large body of work in machine learning has focused on the problem of learning a close approximation to an underlying combinatorial function, given a small set of labeled examples. However, for real-valued functions, cardinal labels might not be accessible, or it may be difficult for an expert to consistently assign real-valued labels over the entire set of examples. For instance, it is notoriously hard for consumers to reliably assign values to bundles of merchandise. Instead, it might be much easier for a consumer to report which of two bundles she likes better. With this motivation in mind, we consider an alternative learning model, wherein the algorithm must learn the underlying function up to pairwise comparisons, from pairwise comparisons. In this model, we present a series of novel algorithms that learn over a wide variety of combinatorial function classes. These range from graph functions to broad classes of valuation functions that are fundamentally important in microeconomic theory, the analysis of social networks, and machine learning, such as coverage, submodular, XOS, and subadditive functions, as well as functions with sparse Fourier support.
[ "['Maria-Florina Balcan' 'Ellen Vitercik' 'Colin White']", "Maria-Florina Balcan, Ellen Vitercik, Colin White" ]
cs.NA cs.LG cs.NE math.OC stat.ML
null
1605.09232
null
null
http://arxiv.org/pdf/1605.09232v3
2018-02-15T10:53:57Z
2016-05-30T13:43:59Z
Tradeoffs between Convergence Speed and Reconstruction Accuracy in Inverse Problems
Solving inverse problems with iterative algorithms is popular, especially for large data. Due to time constraints, the number of possible iterations is usually limited, potentially affecting the achievable accuracy. Given an error one is willing to tolerate, an important question is whether it is possible to modify the original iterations to obtain faster convergence to a minimizer achieving the allowed error without increasing the computational cost of each iteration considerably. Relying on recent recovery techniques developed for settings in which the desired signal belongs to some low-dimensional set, we show that using a coarse estimate of this set may lead to faster convergence at the cost of an additional reconstruction error related to the accuracy of the set approximation. Our theory ties to recent advances in sparse recovery, compressed sensing, and deep learning. Particularly, it may provide a possible explanation to the successful approximation of the l1-minimization solution by neural networks with layers representing iterations, as practiced in the learned iterative shrinkage-thresholding algorithm (LISTA).
[ "['Raja Giryes' 'Yonina C. Eldar' 'Alex M. Bronstein' 'Guillermo Sapiro']", "Raja Giryes and Yonina C. Eldar and Alex M. Bronstein and Guillermo\n Sapiro" ]
cs.LO cs.AI cs.LG
null
1605.09293
null
null
http://arxiv.org/pdf/1605.09293v1
2016-05-30T16:01:51Z
2016-05-30T16:01:51Z
Internal Guidance for Satallax
We propose a new internal guidance method for automated theorem provers based on the given-clause algorithm. Our method influences the choice of unprocessed clauses using positive and negative examples from previous proofs. To this end, we present an efficient scheme for Naive Bayesian classification by generalising label occurrences to types with monoid structure. This makes it possible to extend existing fast classifiers, which consider only positive examples, with negative ones. We implement the method in the higher-order logic prover Satallax, where we modify the delay with which propositions are processed. We evaluated our method on a simply-typed higher-order logic version of the Flyspeck project, where it solves 26% more problems than Satallax without internal guidance.
[ "['Michael Färber' 'Chad Brown']", "Michael F\\\"arber and Chad Brown" ]
cs.LG cs.CV
null
1605.09299
null
null
http://arxiv.org/pdf/1605.09299v1
2016-05-30T16:17:45Z
2016-05-30T16:17:45Z
k2-means for fast and accurate large scale clustering
We propose k^2-means, a new clustering method which efficiently copes with large numbers of clusters and achieves low energy solutions. k^2-means builds upon the standard k-means (Lloyd's algorithm) and combines a new strategy to accelerate the convergence with a new low time complexity divisive initialization. The accelerated convergence is achieved through only looking at k_n nearest clusters and using triangle inequality bounds in the assignment step while the divisive initialization employs an optimal 2-clustering along a direction. The worst-case time complexity per iteration of our k^2-means is O(nk_nd+k^2d), where d is the dimension of the n data points and k is the number of clusters and usually k_n << k << n. Compared to k-means' O(nkd) complexity, our k^2-means complexity is significantly lower, at the expense of slightly increasing the memory complexity by O(nk_n+k^2). In our extensive experiments k^2-means is order(s) of magnitude faster than standard methods in computing accurate clusterings on several standard datasets and settings with hundreds of clusters and high dimensional data. Moreover, the proposed divisive initialization generally leads to clustering energies comparable to those achieved with the standard k-means++ initialization, while being significantly faster.
[ "['Eirikur Agustsson' 'Radu Timofte' 'Luc Van Gool']", "Eirikur Agustsson, Radu Timofte and Luc Van Gool" ]
cs.NE cs.AI cs.CV cs.LG
null
1605.09304
null
null
http://arxiv.org/pdf/1605.09304v5
2016-11-23T18:41:12Z
2016-05-30T16:22:54Z
Synthesizing the preferred inputs for neurons in neural networks via deep generator networks
Deep neural networks (DNNs) have demonstrated state-of-the-art results on many pattern recognition tasks, especially vision classification problems. Understanding the inner workings of such computational brains is both fascinating basic science that is interesting in its own right - similar to why we study the human brain - and will enable researchers to further improve DNNs. One path to understanding how a neural network functions internally is to study what each of its neurons has learned to detect. One such method is called activation maximization (AM), which synthesizes an input (e.g. an image) that highly activates a neuron. Here we dramatically improve the qualitative state of the art of activation maximization by harnessing a powerful, learned prior: a deep generator network (DGN). The algorithm (1) generates qualitatively state-of-the-art synthetic images that look almost real, (2) reveals the features learned by each neuron in an interpretable way, (3) generalizes well to new datasets and somewhat well to different network architectures without requiring the prior to be relearned, and (4) can be considered as a high-quality generative method (in this case, by generating novel, creative, interesting, recognizable images).
[ "Anh Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, Jeff\n Clune", "['Anh Nguyen' 'Alexey Dosovitskiy' 'Jason Yosinski' 'Thomas Brox'\n 'Jeff Clune']" ]
cs.LG cs.CV cs.NE
null
1605.09332
null
null
http://arxiv.org/pdf/1605.09332v4
2018-01-10T15:18:48Z
2016-05-30T17:16:40Z
Parametric Exponential Linear Unit for Deep Convolutional Neural Networks
Object recognition is an important task for improving the ability of visual systems to perform complex scene understanding. Recently, the Exponential Linear Unit (ELU) has been proposed as a key component for managing bias shift in Convolutional Neural Networks (CNNs), but it defines a parameter that must be set by hand. In this paper, we propose learning a parameterization of ELU in order to learn the proper activation shape at each layer in the CNNs. Our results on the MNIST, CIFAR-10/100 and ImageNet datasets using the NiN, Overfeat, All-CNN and ResNet networks indicate that our proposed Parametric ELU (PELU) performs better than the non-parametric ELU. We have observed as much as a 7.28% relative error improvement on ImageNet with the NiN network, with only a 0.0003% parameter increase. Our visual examination of the non-linear behaviors adopted by VGG using PELU shows that the network took advantage of the added flexibility by learning different activations at different layers.
[ "Ludovic Trottier, Philippe Gigu\\`ere, Brahim Chaib-draa", "['Ludovic Trottier' 'Philippe Giguère' 'Brahim Chaib-draa']" ]
cs.LG math.OC stat.ML
null
1605.09346
null
null
http://arxiv.org/pdf/1605.09346v1
2016-05-30T18:15:30Z
2016-05-30T18:15:30Z
Minding the Gaps for Block Frank-Wolfe Optimization of Structured SVMs
In this paper, we propose several improvements on the block-coordinate Frank-Wolfe (BCFW) algorithm from Lacoste-Julien et al. (2013) recently used to optimize the structured support vector machine (SSVM) objective in the context of structured prediction, though it has wider applications. The key intuition behind our improvements is that the estimates of block gaps maintained by BCFW reveal the block suboptimality that can be used as an adaptive criterion. First, we sample objects at each iteration of BCFW in an adaptive non-uniform way via gap-based sampling. Second, we incorporate pairwise and away-step variants of Frank-Wolfe into the block-coordinate setting. Third, we cache oracle calls with a cache-hit criterion based on the block gaps. Fourth, we provide the first method to compute an approximate regularization path for SSVM. Finally, we provide an exhaustive empirical evaluation of all our methods on four structured prediction datasets.
[ "Anton Osokin, Jean-Baptiste Alayrac, Isabella Lukasewitz, Puneet K.\n Dokania, Simon Lacoste-Julien", "['Anton Osokin' 'Jean-Baptiste Alayrac' 'Isabella Lukasewitz'\n 'Puneet K. Dokania' 'Simon Lacoste-Julien']" ]
cs.LG
10.1016/j.medengphy.2016.10.014
1605.09351
null
null
http://arxiv.org/abs/1605.09351v2
2016-09-16T21:17:24Z
2016-05-30T18:45:06Z
Review of Fall Detection Techniques: A Data Availability Perspective
A fall is an abnormal activity that occurs rarely; however, failing to identify falls can have serious health and safety implications for an individual. Due to the rarity of occurrence of falls, there may be insufficient or no training data available for them. Therefore, standard supervised machine learning methods may not be directly applied to handle this problem. In this paper, we present a taxonomy for the study of fall detection from the perspective of availability of fall data. The proposed taxonomy is independent of the type of sensors used and specific feature extraction/selection methods. The taxonomy identifies different categories of classification methods for the study of fall detection based on the availability of fall data when training the classifiers. Then, we present a comprehensive literature review within those categories and identify the approach of treating a fall as an abnormal activity to be a plausible research direction. We conclude our paper by discussing several open research problems in the field and pointers for future research.
[ "Shehroz S. Khan, Jesse Hoey", "['Shehroz S. Khan' 'Jesse Hoey']" ]
stat.ML cs.AI cs.LG physics.ao-ph
null
1605.09370
null
null
http://arxiv.org/pdf/1605.09370v1
2016-05-30T19:57:56Z
2016-05-30T19:57:56Z
Unsupervised Discovery of El Nino Using Causal Feature Learning on Microlevel Climate Data
We show that the climate phenomena of El Nino and La Nina arise naturally as states of macro-variables when our recent causal feature learning framework (Chalupka 2015, Chalupka 2016) is applied to micro-level measures of zonal wind (ZW) and sea surface temperatures (SST) taken over the equatorial band of the Pacific Ocean. The method identifies these unusual climate states on the basis of the relation between ZW and SST patterns without any input about past occurrences of El Nino or La Nina. The simpler alternatives of (i) clustering the SST fields while disregarding their relationship with ZW patterns, or (ii) clustering the joint ZW-SST patterns, do not discover El Nino. We discuss the degree to which our method supports a causal interpretation and use a low-dimensional toy example to explain its success over other clustering approaches. Finally, we propose a new robust and scalable alternative to our original algorithm (Chalupka 2016), which circumvents the need for high-dimensional density learning.
[ "Krzysztof Chalupka, Tobias Bischoff, Pietro Perona, Frederick\n Eberhardt", "['Krzysztof Chalupka' 'Tobias Bischoff' 'Pietro Perona'\n 'Frederick Eberhardt']" ]
cs.LG cs.CV
null
1605.09410
null
null
http://arxiv.org/pdf/1605.09410v5
2017-07-13T00:53:33Z
2016-05-30T20:40:20Z
End-to-End Instance Segmentation with Recurrent Attention
While convolutional neural networks have gained impressive success recently in solving structured prediction problems such as semantic segmentation, it remains a challenge to differentiate individual object instances in the scene. Instance segmentation is very important in a variety of applications, such as autonomous driving, image captioning, and visual question answering. Techniques that combine large graphical models with low-level vision have been proposed to address this problem; however, we propose an end-to-end recurrent neural network (RNN) architecture with an attention mechanism to model a human-like counting process, and produce detailed instance segmentations. The network is jointly trained to sequentially produce regions of interest as well as a dominant object segmentation within each region. The proposed model achieves competitive results on the CVPPP, KITTI, and Cityscapes datasets.
[ "Mengye Ren, Richard S. Zemel", "['Mengye Ren' 'Richard S. Zemel']" ]
cs.HC cs.LG
null
1605.09432
null
null
http://arxiv.org/pdf/1605.09432v1
2016-05-30T22:05:36Z
2016-05-30T22:05:36Z
Evaluating Crowdsourcing Participants in the Absence of Ground-Truth
Given a supervised/semi-supervised learning scenario where multiple annotators are available, we consider the problem of identification of adversarial or unreliable annotators.
[ "Ramanathan Subramanian, Romer Rosales, Glenn Fung, Jennifer Dy", "['Ramanathan Subramanian' 'Romer Rosales' 'Glenn Fung' 'Jennifer Dy']" ]
cs.SY cs.LG
10.1109/SCES.2012.6199047
1605.09444
null
null
http://arxiv.org/abs/1605.09444v1
2016-05-30T23:11:00Z
2016-05-30T23:11:00Z
A Novel Fault Classification Scheme Based on Least Square SVM
This paper presents a novel approach for fault classification and section identification in a series compensated transmission line based on a least square support vector machine. The current signal corresponding to one-fourth of the post-fault cycle is used as input to the proposed modular LS-SVM classifier. The proposed scheme uses four binary classifiers: three for the selection of the three phases and a fourth for ground detection. The proposed classification scheme is found to be accurate and reliable in the presence of noise as well. The simulation results validate the efficacy of the proposed scheme for accurate classification of faults in a series compensated transmission line.
[ "['Harishchandra Dubey' 'A. K. Tiwari' 'Nandita' 'P. K. Ray'\n 'S. R. Mohanty' 'Nand Kishor']", "Harishchandra Dubey, A.K. Tiwari, Nandita, P.K. Ray, S.R. Mohanty and\n Nand Kishor" ]
cs.LG
null
1605.09458
null
null
http://arxiv.org/pdf/1605.09458v1
2016-05-31T00:58:47Z
2016-05-31T00:58:47Z
Training Auto-encoders Effectively via Eliminating Task-irrelevant Input Variables
Auto-encoders are often used as building blocks of deep network classifiers to learn feature extractors, but task-irrelevant information in the input data may lead to bad extractors and result in poor generalization performance of the network. In this paper, we show that dropping the task-irrelevant input variables can noticeably improve the performance of auto-encoders. Specifically, an importance-based variable selection method is proposed to find the task-irrelevant input variables and drop them. It first estimates the importance of each variable, and then drops the variables whose importance value is lower than a threshold. In order to obtain better performance, the method can be employed for each layer of stacked auto-encoders. Experimental results show that, when combined with our method, stacked denoising auto-encoders achieve significantly improved performance on three challenging datasets.
[ "['Hui Shen' 'Dehua Li' 'Hong Wu' 'Zhaoxiang Zang']", "Hui Shen, Dehua Li, Hong Wu, Zhaoxiang Zang" ]
cs.IR cs.LG stat.ML
null
1605.09477
null
null
http://arxiv.org/pdf/1605.09477v1
2016-05-31T03:07:06Z
2016-05-31T03:07:06Z
A Neural Autoregressive Approach to Collaborative Filtering
This paper proposes CF-NADE, a neural autoregressive architecture for collaborative filtering (CF) tasks, which is inspired by the Restricted Boltzmann Machine (RBM) based CF model and the Neural Autoregressive Distribution Estimator (NADE). We first describe the basic CF-NADE model for CF tasks. Then we propose to improve the model by sharing parameters between different ratings. A factored version of CF-NADE is also proposed for better scalability. Furthermore, we take the ordinal nature of the preferences into consideration and propose an ordinal cost to optimize CF-NADE, which shows superior performance. Finally, CF-NADE can be extended to a deep model, with only moderately increased computational complexity. Experimental results show that CF-NADE with a single hidden layer beats all previous state-of-the-art methods on MovieLens 1M, MovieLens 10M, and Netflix datasets, and adding more hidden layers can further improve the performance.
[ "Yin Zheng, Bangsheng Tang, Wenkui Ding, Hanning Zhou", "['Yin Zheng' 'Bangsheng Tang' 'Wenkui Ding' 'Hanning Zhou']" ]
cs.SD cs.CV cs.LG cs.NE
10.1109/TASLP.2016.2632307
1605.09507
null
null
http://arxiv.org/abs/1605.09507v3
2016-12-26T12:29:26Z
2016-05-31T07:11:18Z
Deep convolutional neural networks for predominant instrument recognition in polyphonic music
Identifying musical instruments in polyphonic music recordings is a challenging but important problem in the field of music information retrieval. It enables music search by instrument, helps recognize musical genres, or can make music transcription easier and more accurate. In this paper, we present a convolutional neural network framework for predominant instrument recognition in real-world polyphonic music. We train our network from fixed-length music excerpts with a single-labeled predominant instrument and estimate an arbitrary number of predominant instruments from an audio signal with a variable length. To obtain the audio-excerpt-wise result, we aggregate multiple outputs from sliding windows over the test audio. In doing so, we investigated two different aggregation methods: one takes the average for each instrument and the other takes the instrument-wise sum followed by normalization. In addition, we conducted extensive experiments on several important factors that affect the performance, including analysis window size, identification threshold, and activation functions for neural networks, to find the optimal set of parameters. Using a dataset of 10k audio excerpts from 11 instruments for evaluation, we found that convolutional neural networks are more robust than conventional methods that exploit spectral features and source separation with support vector machines. Experimental results showed that the proposed convolutional network architecture obtained F1 measures of 0.602 (micro) and 0.503 (macro), achieving 19.6% and 16.4% performance improvements, respectively, compared with other state-of-the-art algorithms.
[ "Yoonchang Han, Jaehun Kim, Kyogu Lee", "['Yoonchang Han' 'Jaehun Kim' 'Kyogu Lee']" ]
stat.ML cs.LG
10.1561/2200000060
1605.09522
null
null
null
null
null
Kernel Mean Embedding of Distributions: A Review and Beyond
A Hilbert space embedding of a distribution---in short, a kernel mean embedding---has recently emerged as a powerful tool for machine learning and inference. The basic idea behind this framework is to map distributions into a reproducing kernel Hilbert space (RKHS) in which the whole arsenal of kernel methods can be extended to probability measures. It can be viewed as a generalization of the original "feature map" common to support vector machines (SVMs) and other kernel methods. While initially closely associated with the latter, it has meanwhile found application in fields ranging from kernel machines and probabilistic modeling to statistical inference, causal discovery, and deep learning. The goal of this survey is to give a comprehensive review of existing work and recent advances in this research area, and to discuss the most challenging issues and open problems that could lead to new research directions. The survey begins with a brief introduction to the RKHS and positive definite kernels which forms the backbone of this survey, followed by a thorough discussion of the Hilbert space embedding of marginal distributions, theoretical guarantees, and a review of its applications. The embedding of distributions enables us to apply RKHS methods to probability measures which prompts a wide range of applications such as kernel two-sample testing, independent testing, and learning on distributional data. Next, we discuss the Hilbert space embedding for conditional distributions, give theoretical insights, and review some applications. The conditional mean embedding enables us to perform sum, product, and Bayes' rules---which are ubiquitous in graphical model, probabilistic inference, and reinforcement learning---in a non-parametric way. We then discuss relationships between this framework and other related areas. Lastly, we give some suggestions on future research directions.
[ "Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, Bernhard\n Sch\\\"olkopf" ]
cs.CV cs.LG cs.RO
null
1605.09533
null
null
http://arxiv.org/pdf/1605.09533v1
2016-05-31T09:00:33Z
2016-05-31T09:00:33Z
Robust Deep-Learning-Based Road-Prediction for Augmented Reality Navigation Systems
This paper proposes an approach that predicts the road course from camera sensors leveraging deep learning techniques. Road pixels are identified by training a multi-scale convolutional neural network on a large number of full-scene-labeled night-time road images including adverse weather conditions. A framework is presented that applies the proposed approach to longer distance road course estimation, which is the basis for an augmented reality navigation application. In this framework long range sensor data (radar) and data from a map database are fused with short range sensor data (camera) to produce a precise longitudinal and lateral localization and road course estimation. The proposed approach reliably detects roads with and without lane markings and thus increases the robustness and availability of road course estimations and augmented reality navigation. Evaluations on an extensive set of high precision ground truth data taken from a differential GPS and an inertial measurement unit show that the proposed approach reaches state-of-the-art performance without the limitation of requiring existing lane markings.
[ "Matthias Limmer, Julian Forster, Dennis Baudach, Florian Sch\\\"ule,\n Roland Schweiger, Hendrik P.A. Lensch", "['Matthias Limmer' 'Julian Forster' 'Dennis Baudach' 'Florian Schüle'\n 'Roland Schweiger' 'Hendrik P. A. Lensch']" ]
cs.CV cs.CL cs.LG
null
1605.09553
null
null
http://arxiv.org/pdf/1605.09553v2
2016-11-23T07:29:46Z
2016-05-31T10:04:20Z
Attention Correctness in Neural Image Captioning
Attention mechanisms have recently been introduced in deep learning for various tasks in natural language processing and computer vision. But despite their popularity, the "correctness" of the implicitly-learned attention maps has only been assessed qualitatively by visualization of several examples. In this paper we focus on evaluating and improving the correctness of attention in neural image captioning models. Specifically, we propose a quantitative evaluation metric for the consistency between the generated attention maps and human annotations, using recently released datasets with alignment between regions in images and entities in captions. We then propose novel models with different levels of explicit supervision for learning attention maps during training. The supervision can be strong when alignment between regions and caption entities is available, or weak when only object segments and categories are provided. We show on the popular Flickr30k and COCO datasets that introducing supervision of attention maps during training solidly improves both attention correctness and caption quality, showing the promise of making machine perception more human-like.
[ "Chenxi Liu, Junhua Mao, Fei Sha, Alan Yuille", "['Chenxi Liu' 'Junhua Mao' 'Fei Sha' 'Alan Yuille']" ]
cs.LG cs.AI stat.ML
10.24963/ijcai.2017/267
1605.09593
null
null
http://arxiv.org/abs/1605.09593v2
2017-09-28T10:24:50Z
2016-05-31T12:11:51Z
Adaptive Learning Rate via Covariance Matrix Based Preconditioning for Deep Neural Networks
Adaptive learning rate algorithms such as RMSProp are widely used for training deep neural networks. RMSProp offers efficient training since it uses first order gradients to approximate Hessian-based preconditioning. However, since the first order gradients include noise caused by stochastic optimization, the approximation may be inaccurate. In this paper, we propose a novel adaptive learning rate algorithm called SDProp. Its key idea is effective handling of the noise by preconditioning based on the covariance matrix. For various neural networks, our approach is more efficient and effective than RMSProp and its variant.
[ "Yasutoshi Ida, Yasuhiro Fujiwara, Sotetsu Iwamura", "['Yasutoshi Ida' 'Yasuhiro Fujiwara' 'Sotetsu Iwamura']" ]
stat.ML cs.DC cs.DM cs.LG
null
1605.09619
null
null
http://arxiv.org/pdf/1605.09619v1
2016-05-31T13:18:30Z
2016-05-31T13:18:30Z
Horizontally Scalable Submodular Maximization
A variety of large-scale machine learning problems can be cast as instances of constrained submodular maximization. Existing approaches for distributed submodular maximization have a critical drawback: The capacity - number of instances that can fit in memory - must grow with the data set size. In practice, while one can provision many machines, the capacity of each machine is limited by physical constraints. We propose a truly scalable approach for distributed submodular maximization under fixed capacity. The proposed framework applies to a broad class of algorithms and constraints and provides theoretical guarantees on the approximation factor for any available capacity. We empirically evaluate the proposed algorithm on a variety of data sets and demonstrate that it achieves performance competitive with the centralized greedy solution.
[ "['Mario Lucic' 'Olivier Bachem' 'Morteza Zadimoghaddam' 'Andreas Krause']", "Mario Lucic and Olivier Bachem and Morteza Zadimoghaddam and Andreas\n Krause" ]
cs.LG cs.CC math.ST stat.ML stat.TH
null
1605.09646
null
null
http://arxiv.org/pdf/1605.09646v1
2016-05-31T14:38:03Z
2016-05-31T14:38:03Z
Average-case Hardness of RIP Certification
The restricted isometry property (RIP) for design matrices gives guarantees for optimal recovery in sparse linear models. It is of high interest in compressed sensing and statistical learning. This property is particularly important for computationally efficient recovery methods. As a consequence, even though it is in general NP-hard to check that RIP holds, there have been substantial efforts to find tractable proxies for it. These would allow the construction of RIP matrices and the polynomial-time verification of RIP given an arbitrary matrix. We consider the framework of average-case certifiers, that never wrongly declare that a matrix is RIP, while being often correct for random instances. While there are such functions which are tractable in a suboptimal parameter regime, we show that this is a computationally hard task in any better regime. Our results are based on a new, weaker assumption on the problem of detecting dense subgraphs.
[ "Tengyao Wang, Quentin Berthet, Yaniv Plan", "['Tengyao Wang' 'Quentin Berthet' 'Yaniv Plan']" ]
cs.LG cs.CV
null
1605.09673
null
null
http://arxiv.org/pdf/1605.09673v2
2016-06-06T15:39:10Z
2016-05-31T15:29:36Z
Dynamic Filter Networks
In a traditional convolutional layer, the learned filters stay fixed after training. In contrast, we introduce a new framework, the Dynamic Filter Network, where filters are generated dynamically conditioned on an input. We show that this architecture is a powerful one, with increased flexibility thanks to its adaptive nature, yet without an excessive increase in the number of model parameters. A wide variety of filtering operations can be learned this way, including local spatial transformations, but also others like selective (de)blurring or adaptive feature extraction. Moreover, multiple such layers can be combined, e.g. in a recurrent architecture. We demonstrate the effectiveness of the dynamic filter network on the tasks of video and stereo prediction, and reach state-of-the-art performance on the moving MNIST dataset with a much smaller model. By visualizing the learned filters, we illustrate that the network has picked up flow information by only looking at unlabelled training data. This suggests that the network can be used to pretrain networks for various supervised tasks in an unsupervised way, like optical flow and depth estimation.
[ "Bert De Brabandere, Xu Jia, Tinne Tuytelaars, Luc Van Gool", "['Bert De Brabandere' 'Xu Jia' 'Tinne Tuytelaars' 'Luc Van Gool']" ]
cs.LG cs.AI cs.RO stat.ML
null
1605.09674
null
null
http://arxiv.org/pdf/1605.09674v4
2017-01-27T09:26:28Z
2016-05-31T15:34:36Z
VIME: Variational Information Maximizing Exploration
Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as epsilon-greedy exploration or adding Gaussian noise to the controls. This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent's belief of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function, and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance compared to heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards.
[ "Rein Houthooft, Xi Chen, Yan Duan, John Schulman, Filip De Turck,\n Pieter Abbeel", "['Rein Houthooft' 'Xi Chen' 'Yan Duan' 'John Schulman' 'Filip De Turck'\n 'Pieter Abbeel']" ]
cs.CV cs.LG
10.1109/TCYB.2017.2742705
1605.09696
null
null
http://arxiv.org/abs/1605.09696v3
2017-08-31T09:17:50Z
2016-05-31T16:11:16Z
Generalized Multi-view Embedding for Visual Recognition and Cross-modal Retrieval
In this paper, the problem of multi-view embedding from different visual cues and modalities is considered. We propose a unified solution for subspace learning methods using the Rayleigh quotient, which is extensible for multiple views, supervised learning, and non-linear embeddings. Numerous methods including Canonical Correlation Analysis, Partial Least Squares regression and Linear Discriminant Analysis are studied using specific intrinsic and penalty graphs within the same framework. Non-linear extensions based on kernels and (deep) neural networks are derived, achieving better performance than the linear ones. Moreover, a novel Multi-view Modular Discriminant Analysis (MvMDA) is proposed by taking the view difference into consideration. We demonstrate the effectiveness of the proposed multi-view embedding methods on visual object recognition and cross-modal image retrieval, and obtain superior results in both applications compared to related methods.
[ "['Guanqun Cao' 'Alexandros Iosifidis' 'Ke Chen' 'Moncef Gabbouj']", "Guanqun Cao, Alexandros Iosifidis, Ke Chen, Moncef Gabbouj" ]
stat.ML cs.DC cs.DS cs.LG math.OC
null
1605.09721
null
null
http://arxiv.org/pdf/1605.09721v1
2016-05-31T17:15:01Z
2016-05-31T17:15:01Z
CYCLADES: Conflict-free Asynchronous Machine Learning
We present CYCLADES, a general framework for parallelizing stochastic optimization algorithms in a shared memory setting. CYCLADES is asynchronous during shared model updates, and requires no memory locking mechanisms, similar to HOGWILD!-type algorithms. Unlike HOGWILD!, CYCLADES introduces no conflicts during the parallel execution, and offers a black-box analysis for provable speedups across a large family of algorithms. Due to its inherent conflict-free nature and cache locality, our multi-core implementation of CYCLADES consistently outperforms HOGWILD!-type algorithms on sufficiently sparse datasets, leading to up to 40% speedup gains compared to the HOGWILD! implementation of SGD, and up to 5x gains over asynchronous implementations of variance reduction algorithms.
[ "Xinghao Pan, Maximilian Lam, Stephen Tu, Dimitris Papailiopoulos, Ce\n Zhang, Michael I. Jordan, Kannan Ramchandran, Chris Re, Benjamin Recht", "['Xinghao Pan' 'Maximilian Lam' 'Stephen Tu' 'Dimitris Papailiopoulos'\n 'Ce Zhang' 'Michael I. Jordan' 'Kannan Ramchandran' 'Chris Re'\n 'Benjamin Recht']" ]
stat.ML cs.DC cs.LG math.OC
null
1605.09774
null
null
http://arxiv.org/pdf/1605.09774v2
2016-11-25T12:00:28Z
2016-05-31T19:16:56Z
Asynchrony begets Momentum, with an Application to Deep Learning
Asynchronous methods are widely used in deep learning, but have limited theoretical justification when applied to non-convex problems. We show that running stochastic gradient descent (SGD) in an asynchronous manner can be viewed as adding a momentum-like term to the SGD iteration. Our result does not assume convexity of the objective function, so it is applicable to deep learning systems. We observe that a standard queuing model of asynchrony results in a form of momentum that is commonly used by deep learning practitioners. This forges a link between queuing theory and asynchrony in deep learning systems, which could be useful for systems builders. For convolutional neural networks, we experimentally validate that the degree of asynchrony directly correlates with the momentum, confirming our main result. An important implication is that tuning the momentum parameter is important when considering different levels of asynchrony. We assert that properly tuned momentum reduces the number of steps required for convergence. Finally, our theory suggests new ways of counteracting the adverse effects of asynchrony: a simple mechanism like using negative algorithmic momentum can improve performance under high asynchrony. Since asynchronous methods have better hardware efficiency, this result may shed light on when asynchronous execution is more efficient for deep learning systems.
[ "['Ioannis Mitliagkas' 'Ce Zhang' 'Stefan Hadjis' 'Christopher Ré']", "Ioannis Mitliagkas, Ce Zhang, Stefan Hadjis, Christopher R\\'e" ]
cs.LG cs.AI cs.CV cs.NE stat.ML
null
1605.09782
null
null
http://arxiv.org/pdf/1605.09782v7
2017-04-03T20:34:36Z
2016-05-31T19:37:29Z
Adversarial Feature Learning
The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping -- projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.
[ "Jeff Donahue, Philipp Kr\\\"ahenb\\\"uhl, Trevor Darrell", "['Jeff Donahue' 'Philipp Krähenbühl' 'Trevor Darrell']" ]
cs.AI cs.LG stat.ML
null
1606.00068
null
null
http://arxiv.org/pdf/1606.00068v1
2016-05-31T22:37:43Z
2016-05-31T22:37:43Z
Quantifying the probable approximation error of probabilistic inference programs
This paper introduces a new technique for quantifying the approximation error of a broad class of probabilistic inference programs, including ones based on both variational and Monte Carlo approaches. The key idea is to derive a subjective bound on the symmetrized KL divergence between the distribution achieved by an approximate inference program and its true target distribution. The bound's validity (and subjectivity) rests on the accuracy of two auxiliary probabilistic programs: (i) a "reference" inference program that defines a gold standard of accuracy and (ii) a "meta-inference" program that answers the question "what internal random choices did the original approximate inference program probably make given that it produced a particular result?" The paper includes empirical results on inference problems drawn from linear regression, Dirichlet process mixture modeling, HMMs, and Bayesian networks. The experiments show that the technique is robust to the quality of the reference inference program and that it can detect implementation bugs that are not apparent from predictive performance.
[ "Marco F Cusumano-Towner, Vikash K Mansinghka", "['Marco F Cusumano-Towner' 'Vikash K Mansinghka']" ]
cs.LG cs.SY stat.ML
null
1606.00119
null
null
http://arxiv.org/pdf/1606.00119v3
2016-10-27T15:59:58Z
2016-06-01T05:21:40Z
Contextual Bandits with Latent Confounders: An NMF Approach
Motivated by online recommendation and advertising systems, we consider a causal model for stochastic contextual bandits with a latent low-dimensional confounder. In our model, there are $L$ observed contexts and $K$ arms of the bandit. The observed context influences the reward obtained through a latent confounder variable with cardinality $m$ ($m \ll L,K$). The arm choice and the latent confounder causally determine the reward, while the observed context is correlated with the confounder. Under this model, the $L \times K$ mean reward matrix $\mathbf{U}$ (for each context in $[L]$ and each arm in $[K]$) factorizes into non-negative factors $\mathbf{A}$ ($L \times m$) and $\mathbf{W}$ ($m \times K$). This insight enables us to propose an $\epsilon$-greedy NMF-Bandit algorithm that designs a sequence of interventions (selecting specific arms), that achieves a balance between learning this low-dimensional structure and selecting the best arm to minimize regret. Our algorithm achieves a regret of $\mathcal{O}\left(L\mathrm{poly}(m, \log K) \log T \right)$ at time $T$, as compared to $\mathcal{O}(LK\log T)$ for conventional contextual bandits, assuming a constant gap between the best arm and the rest for each context. These guarantees are obtained under mild sufficiency conditions on the factors that are weaker versions of the well-known Statistical RIP condition. We further propose a class of generative models that satisfy our sufficient conditions, and derive a lower bound of $\mathcal{O}\left(Km\log T\right)$. These are the first regret guarantees for online matrix completion with bandit feedback, when the rank is greater than one. We further compare the performance of our algorithm with the state of the art, on synthetic and real-world datasets.
[ "['Rajat Sen' 'Karthikeyan Shanmugam' 'Murat Kocaoglu'\n 'Alexandros G. Dimakis' 'Sanjay Shakkottai']", "Rajat Sen, Karthikeyan Shanmugam, Murat Kocaoglu, Alexandros G.\n Dimakis, and Sanjay Shakkottai" ]
cs.LG cs.CV
null
1606.00128
null
null
http://arxiv.org/pdf/1606.00128v3
2016-09-18T15:32:47Z
2016-06-01T06:18:29Z
Self-Paced Learning: an Implicit Regularization Perspective
Self-paced learning (SPL) mimics the cognitive mechanism of humans and animals that gradually learns from easy to hard samples. One key issue in SPL is to obtain a better weighting strategy, which is determined by the minimizer function. Existing methods usually pursue this by artificially designing the explicit form of the SPL regularizer. In this paper, we focus on the minimizer function, and study a group of new regularizers, named self-paced implicit regularizers, that are deduced from robust loss functions. Based on convex conjugacy theory, the minimizer function for a self-paced implicit regularizer can be directly learned from the latent loss function, even when the analytic form of the regularizer is unknown. A general framework (named SPL-IR) for SPL is developed accordingly. We demonstrate that the learning procedure of SPL-IR is associated with latent robust loss functions, and thus can provide some theoretical insights into its working mechanism. We further analyze the relation between SPL-IR and half-quadratic optimization. Finally, we apply SPL-IR to both supervised and unsupervised tasks, and experimental results corroborate our ideas and demonstrate the correctness and effectiveness of implicit regularizers.
[ "['Yanbo Fan' 'Ran He' 'Jian Liang' 'Bao-Gang Hu']", "Yanbo Fan, Ran He, Jian Liang, Bao-Gang Hu" ]
stat.ML cs.LG
null
1606.00136
null
null
http://arxiv.org/pdf/1606.00136v1
2016-06-01T06:56:17Z
2016-06-01T06:56:17Z
Efficiently Bounding Optimal Solutions after Small Data Modification in Large-Scale Empirical Risk Minimization
We study large-scale classification problems in changing environments where a small part of the dataset is modified, and the effect of the data modification must be quickly incorporated into the classifier. When the entire dataset is large, even if the amount of the data modification is fairly small, the computational cost of re-training the classifier would be prohibitively large. In this paper, we propose a novel method for efficiently incorporating such a data modification effect into the classifier without actually re-training it. The proposed method provides bounds on the unknown optimal classifier with the cost only proportional to the size of the data modification. We demonstrate through numerical experiments that the proposed method provides sufficiently tight bounds with negligible computational costs, especially when a small part of the dataset is modified in a large-scale classification problem.
[ "['Hiroyuki Hanada' 'Atsushi Shibagaki' 'Jun Sakuma' 'Ichiro Takeuchi']", "Hiroyuki Hanada, Atsushi Shibagaki, Jun Sakuma, Ichiro Takeuchi" ]
cs.LG cs.SI
null
1606.00182
null
null
http://arxiv.org/pdf/1606.00182v5
2017-02-28T21:33:41Z
2016-06-01T09:16:46Z
On the Troll-Trust Model for Edge Sign Prediction in Social Networks
In the problem of edge sign prediction, we are given a directed graph (representing a social network), and our task is to predict the binary labels of the edges (i.e., the positive or negative nature of the social relationships). Many successful heuristics for this problem are based on the troll-trust features, estimating at each node the fraction of outgoing and incoming positive/negative edges. We show that these heuristics can be understood, and rigorously analyzed, as approximators to the Bayes optimal classifier for a simple probabilistic model of the edge labels. We then show that the maximum likelihood estimator for this model approximately corresponds to the predictions of a Label Propagation algorithm run on a transformed version of the original social graph. Extensive experiments on a number of real-world datasets show that this algorithm is competitive against state-of-the-art classifiers in terms of both accuracy and scalability. Finally, we show that troll-trust features can also be used to derive online learning algorithms which have theoretical guarantees even when edges are adversarially labeled.
[ "G\\'eraud Le Falher, Nicol\\`o Cesa-Bianchi, Claudio Gentile, Fabio\n Vitale", "['Géraud Le Falher' 'Nicolò Cesa-Bianchi' 'Claudio Gentile' 'Fabio Vitale']" ]
stat.ML cs.HC cs.LG cs.SI
null
1606.00226
null
null
http://arxiv.org/pdf/1606.00226v2
2017-10-25T16:19:38Z
2016-06-01T11:18:21Z
A Minimax Optimal Algorithm for Crowdsourcing
We consider the problem of accurately estimating the reliability of workers based on noisy labels they provide, which is a fundamental question in crowdsourcing. We propose a novel lower bound on the minimax estimation error which applies to any estimation procedure. We further propose Triangular Estimation (TE), an algorithm for estimating the reliability of workers. TE has low complexity, may be implemented in a streaming setting when labels are provided by workers in real time, and does not rely on an iterative procedure. We further prove that TE is minimax optimal and matches our lower bound. We conclude by assessing the performance of TE and other state-of-the-art algorithms on both synthetic and real-world data sets.
[ "['Thomas Bonald' 'Richard Combes']", "Thomas Bonald and Richard Combes" ]
cs.CL cs.IR cs.LG
null
1606.00253
null
null
http://arxiv.org/pdf/1606.00253v1
2016-06-01T12:34:50Z
2016-06-01T12:34:50Z
On a Topic Model for Sentences
Probabilistic topic models are generative models that describe the content of documents by discovering the latent topics underlying them. However, the structure of the textual input, and for instance the grouping of words in coherent text spans such as sentences, contains much information which is generally lost with these models. In this paper, we propose sentenceLDA, an extension of LDA whose goal is to overcome this limitation by incorporating the structure of the text in the generative and inference processes. We illustrate the advantages of sentenceLDA by comparing it with LDA using both intrinsic (perplexity) and extrinsic (text classification) evaluation tasks on different text collections.
[ "['Georgios Balikas' 'Massih-Reza Amini' 'Marianne Clausel']", "Georgios Balikas, Massih-Reza Amini, Marianne Clausel" ]
cs.LG
null
1606.00282
null
null
http://arxiv.org/pdf/1606.00282v1
2016-06-01T13:38:04Z
2016-06-01T13:38:04Z
Multi-Label Zero-Shot Learning via Concept Embedding
Zero Shot Learning (ZSL) enables a learning model to classify instances of an unseen class during training. While most research in ZSL focuses on single-label classification, few studies have been done in multi-label ZSL, where an instance is associated with a set of labels simultaneously, due to the difficulty in modeling complex semantics conveyed by a set of labels. In this paper, we propose a novel approach to multi-label ZSL via concept embedding learned from collections of public users' annotations of multimedia. Thanks to concept embedding, multi-label ZSL can be done by efficiently mapping an instance's input features onto the concept embedding space, in a manner similar to that used in single-label ZSL. Moreover, our semantic learning model is capable of embedding an out-of-vocabulary label by inferring its meaning from its co-occurring labels. Thus, our approach allows both seen and unseen labels during the concept embedding learning to be used in the aforementioned instance mapping, which makes multi-label ZSL more flexible and suitable for real applications. Experimental results of multi-label ZSL on images and music tracks suggest that our approach outperforms a state-of-the-art multi-label ZSL model and can deal with a scenario involving out-of-vocabulary labels without re-training the semantics learning model.
[ "['Ubai Sandouk' 'Ke Chen']", "Ubai Sandouk and Ke Chen" ]
cs.SD cs.LG
null
1606.00298
null
null
http://arxiv.org/pdf/1606.00298v1
2016-06-01T14:18:08Z
2016-06-01T14:18:08Z
Automatic tagging using deep convolutional neural networks
We present a content-based automatic music tagging algorithm using fully convolutional neural networks (FCNs). We evaluate different architectures consisting of 2D convolutional layers and subsampling layers only. In the experiments, we measure the AUC-ROC scores of the architectures with different complexities and input types using the MagnaTagATune dataset, where a 4-layer architecture shows state-of-the-art performance with mel-spectrogram input. Furthermore, we evaluated the performances of the architectures with varying the number of layers on a larger dataset (Million Song Dataset), and found that deeper models outperformed the 4-layer architecture. The experiments show that mel-spectrogram is an effective time-frequency representation for automatic tagging and that more complex models benefit from more training data.
[ "Keunwoo Choi, George Fazekas, Mark Sandler", "['Keunwoo Choi' 'George Fazekas' 'Mark Sandler']" ]
null
null
1606.00313
null
null
http://arxiv.org/pdf/1606.00313v1
2016-06-01T14:47:19Z
2016-06-01T14:47:19Z
Improved Regret Bounds for Oracle-Based Adversarial Contextual Bandits
We give an oracle-based algorithm for the adversarial contextual bandit problem, where either contexts are drawn i.i.d. or the sequence of contexts is known a priori, but where the losses are picked adversarially. Our algorithm is computationally efficient, assuming access to an offline optimization oracle, and enjoys a regret of order $O((KT)^{\frac{2}{3}}(\log N)^{\frac{1}{3}})$, where $K$ is the number of actions, $T$ is the number of iterations and $N$ is the number of baseline policies. Our result is the first to break the $O(T^{\frac{3}{4}})$ barrier that is achieved by recently introduced algorithms. Breaking this barrier was left as a major open problem. Our analysis is based on the recent relaxation based approach of (Rakhlin and Sridharan, 2016).
[ "['Vasilis Syrgkanis' 'Haipeng Luo' 'Akshay Krishnamurthy'\n 'Robert E. Schapire']" ]
cs.HC cs.LG
null
1606.00370
null
null
http://arxiv.org/pdf/1606.00370v1
2016-06-01T17:52:30Z
2016-06-01T17:52:30Z
Decoding Emotional Experience through Physiological Signal Processing
There is an increasing consensus among researchers that making a computer emotionally intelligent with the ability to decode human affective states would allow a more meaningful and natural way of human-computer interactions (HCIs). One unobtrusive and non-invasive way of recognizing human affective states entails the exploration of how physiological signals vary under different emotional experiences. In particular, this paper explores the correlation between autonomically-mediated changes in multimodal body signals and discrete emotional states. In order to fully exploit the information in each modality, we have provided an innovative classification approach for three specific physiological signals including Electromyogram (EMG), Blood Volume Pressure (BVP) and Galvanic Skin Response (GSR). These signals are analyzed as inputs to an emotion recognition paradigm based on fusion of a series of weak learners. Our proposed classification approach showed 88.1% recognition accuracy, which outperformed the conventional Support Vector Machine (SVM) classifier with a 17% accuracy improvement. Furthermore, in order to avoid information redundancy and the resultant over-fitting, a feature reduction method is proposed based on a correlation analysis to optimize the number of features required for training and validating each weak learner. Results showed that despite the feature space dimensionality reduction from 27 to 18 features, our methodology preserved the recognition accuracy of about 85.0%. This reduction in complexity will get us one step closer towards embedding this human emotion encoder in wireless and wearable HCI platforms.
[ "['Maria S. Perez-Rosero' 'Behnaz Rezaei' 'Murat Akcakaya'\n 'Sarah Ostadabbas']", "Maria S. Perez-Rosero, Behnaz Rezaei, Murat Akcakaya, and Sarah\n Ostadabbas" ]
cs.CL cs.LG
null
1606.00372
null
null
http://arxiv.org/pdf/1606.00372v1
2016-06-01T18:01:14Z
2016-06-01T18:01:14Z
Conversational Contextual Cues: The Case of Personalization and History for Response Ranking
We investigate the task of modeling open-domain, multi-turn, unstructured, multi-participant, conversational dialogue. We specifically study the effect of incorporating different elements of the conversation. Unlike previous efforts, which focused on modeling messages and responses, we extend the modeling to long context and participant's history. Our system does not rely on handwritten rules or engineered features; instead, we train deep neural networks on a large conversational dataset. In particular, we exploit the structure of Reddit comments and posts to extract 2.1 billion messages and 133 million conversations. We evaluate our models on the task of predicting the next response in a conversation, and we find that modeling both context and participants improves prediction accuracy.
[ "['Rami Al-Rfou' 'Marc Pickett' 'Javier Snaider' 'Yun-hsuan Sung'\n 'Brian Strope' 'Ray Kurzweil']", "Rami Al-Rfou and Marc Pickett and Javier Snaider and Yun-hsuan Sung\n and Brian Strope and Ray Kurzweil" ]
stat.ML cs.LG math.CO
null
1606.00389
null
null
http://arxiv.org/pdf/1606.00389v3
2018-02-13T01:50:38Z
2016-06-01T18:43:13Z
Stream Clipper: Scalable Submodular Maximization on Stream
We propose a streaming submodular maximization algorithm "stream clipper" that performs as well as the offline greedy algorithm on document/video summarization in practice. It adds elements from a stream either to a solution set $S$ or to an extra buffer $B$ based on two adaptive thresholds, and improves $S$ by a final greedy step that starts from $S$ adding elements from $B$. During this process, swapping elements out of $S$ can occur if doing so yields improvements. The thresholds adapt based on whether current memory utilization exceeds a budget, e.g., it increases the lower threshold, and removes from the buffer $B$ elements below the new lower threshold. We show that, while our approximation factor in the worst case is $1/2$ (as in previous work, corresponding to the tight bound), there are data-dependent conditions under which our bound falls within the range $[1/2, 1-1/e]$. In news and video summarization experiments, the algorithm consistently outperforms other streaming methods, and, while using significantly less computation and memory, performs similarly to the offline greedy algorithm.
[ "Tianyi Zhou and Jeff Bilmes", "['Tianyi Zhou' 'Jeff Bilmes']" ]
cs.LG stat.ML
null
1606.00398
null
null
http://arxiv.org/pdf/1606.00398v1
2016-06-01T18:57:54Z
2016-06-01T18:57:54Z
Short Communication on QUIST: A Quick Clustering Algorithm
In this short communication we introduce the quick clustering algorithm (QUIST), an efficient hierarchical clustering algorithm based on sorting. QUIST is a poly-logarithmic divisive clustering algorithm that does not assume the number of clusters, and/or the cluster size to be known ahead of time. It is also insensitive to the original ordering of the input.
[ "Sherenaz W. Al-Haj Baddar", "['Sherenaz W. Al-Haj Baddar']" ]
cs.LG math.CO stat.ML
null
1606.00399
null
null
http://arxiv.org/pdf/1606.00399v1
2016-06-01T18:58:36Z
2016-06-01T18:58:36Z
Scaling Submodular Maximization via Pruned Submodularity Graphs
We propose a new random pruning method (called "submodular sparsification (SS)") to reduce the cost of submodular maximization. The pruning is applied via a "submodularity graph" over the $n$ ground elements, where each directed edge is associated with a pairwise dependency defined by the submodular function. In each step, SS prunes a $1-1/\sqrt{c}$ (for $c>1$) fraction of the nodes using weights on edges computed based on only a small number ($O(\log n)$) of randomly sampled nodes. The algorithm requires $\log_{\sqrt{c}}n$ steps with a small and highly parallelizable per-step computation. An accuracy-speed tradeoff parameter $c$, set as $c = 8$, leads to a fast shrink rate $\sqrt{2}/4$ and small iteration complexity $\log_{2\sqrt{2}}n$. Analysis shows that w.h.p., the greedy algorithm on the pruned set of size $O(\log^2 n)$ can achieve a guarantee similar to that of processing the original dataset. In news and video summarization tasks, SS is able to substantially reduce both computational costs and memory usage, while maintaining (or even slightly exceeding) the quality of the original (and much more costly) greedy algorithm.
[ "Tianyi Zhou, Hua Ouyang, Yi Chang, Jeff Bilmes, Carlos Guestrin", "['Tianyi Zhou' 'Hua Ouyang' 'Yi Chang' 'Jeff Bilmes' 'Carlos Guestrin']" ]
cs.LG cs.DC math.OC
null
1606.00511
null
null
http://arxiv.org/pdf/1606.00511v2
2017-01-15T13:51:26Z
2016-06-02T00:39:03Z
Distributed Hessian-Free Optimization for Deep Neural Network
Training a deep neural network is a high dimensional and highly non-convex optimization problem. The stochastic gradient descent (SGD) algorithm and its variants are the current state-of-the-art solvers for this task. However, due to the non-convex nature of the problem, it has been observed that SGD slows down near saddle points. Recent empirical work claims that by detecting and escaping saddle points efficiently, training performance is likely to improve. With this objective, we revisit the Hessian-free optimization method for deep networks. We also develop its distributed variant and demonstrate superior scaling potential to SGD, which allows larger computing resources to be utilized more efficiently, thus enabling larger models and a faster time to the desired solution. Furthermore, unlike the truncated Newton method (Martens' HF) that ignores negative curvature information by using the na\"ive conjugate gradient method and Gauss-Newton Hessian approximation information - we propose a novel algorithm to explore negative curvature directions by solving the sub-problem with a stabilized bi-conjugate method involving possibly indefinite stochastic Hessian information. We show that these techniques accelerate the training process for both the standard MNIST dataset and also the TIMIT speech recognition problem, demonstrating robust performance with up to an order of magnitude larger batch sizes. This increased scaling potential is illustrated with near linear speed-up on up to 16 CPU nodes for a simple 4-layer network.
[ "['Xi He' 'Dheevatsa Mudigere' 'Mikhail Smelyanskiy' 'Martin Takáč']", "Xi He and Dheevatsa Mudigere and Mikhail Smelyanskiy and Martin\n Tak\\'a\\v{c}" ]
cs.NE cs.LG
null
1606.00540
null
null
http://arxiv.org/pdf/1606.00540v1
2016-06-02T05:39:54Z
2016-06-02T05:39:54Z
Multi-pretrained Deep Neural Network
Pretraining is widely used in deep neural networks, and one of the most famous pretraining models is the Deep Belief Network (DBN). The optimization formulas used during the pretraining process differ across pretraining models. In this paper, we pretrain deep neural networks with different pretraining models and thereby investigate the difference between the DBN and the Stacked Denoising Autoencoder (SDA) when used as pretraining models. The experimental results show that the DBN yields a better initial model. However, the model converges to a relatively worse model after the finetuning process. Yet when pretrained by SDA a second time, the model converges to a better model after finetuning.
[ "['Zhen Hu' 'Zhuyin Xue' 'Tong Cui' 'Shiqiang Zong' 'Chenglong He']", "Zhen Hu, Zhuyin Xue, Tong Cui, Shiqiang Zong, Chenglong He" ]
cs.DC cs.LG cs.NE
null
1606.00575
null
null
http://arxiv.org/pdf/1606.00575v2
2017-07-18T08:50:05Z
2016-06-02T08:10:10Z
Ensemble-Compression: A New Method for Parallel Training of Deep Neural Networks
Parallelization frameworks have become a necessity to speed up the training of deep neural networks (DNN) recently. Such frameworks typically employ the Model Average approach, denoted as MA-DNN, in which parallel workers conduct respective training based on their own local data while the parameters of local models are periodically communicated and averaged to obtain a global model which serves as the new start of local models. However, since a DNN is a highly non-convex model, averaging parameters cannot ensure that such a global model will perform better than the local models. To tackle this problem, we introduce a new parallel training framework called Ensemble-Compression, denoted as EC-DNN. In this framework, we propose to aggregate the local models by ensemble, i.e., averaging the outputs of local models instead of their parameters. As most prevalent loss functions are convex in the output of the DNN, the performance of the ensemble-based global model is guaranteed to be at least as good as the average performance of the local models. However, a big challenge lies in the explosion of model size, since each round of ensembling can multiply the model size. Thus, we carry out model compression after each ensemble, implemented in this paper by a distillation-based method, to reduce the size of the global model to be the same as the local ones. Our experimental results demonstrate the prominent advantage of EC-DNN over MA-DNN in terms of both accuracy and speedup.
[ "Shizhao Sun, Wei Chen, Jiang Bian, Xiaoguang Liu, Tie-Yan Liu", "['Shizhao Sun' 'Wei Chen' 'Jiang Bian' 'Xiaoguang Liu' 'Tie-Yan Liu']" ]
cs.CL cs.IR cs.LG
null
1606.00577
null
null
http://arxiv.org/pdf/1606.00577v3
2017-05-17T21:03:06Z
2016-06-02T08:15:15Z
Source-LDA: Enhancing probabilistic topic models using prior knowledge sources
A popular approach to topic modeling involves extracting co-occurring n-grams of a corpus into semantic themes. The set of n-grams in a theme represents an underlying topic, but most topic modeling approaches are not able to label these sets of words with a single n-gram. Such labels are useful for topic identification in summarization systems. This paper introduces a novel approach to labeling a group of n-grams comprising an individual topic. The approach taken is to complement the existing topic distributions over words with a known distribution based on a predefined set of topics. This is done by integrating existing labeled knowledge sources representing known potential topics into the probabilistic topic model. These knowledge sources are translated into a distribution and used to set the hyperparameters of the Dirichlet generated distribution over words. In the inference these modified distributions guide the convergence of the latent topics to conform with the complementary distributions. This approach ensures that the topic inference process is consistent with existing knowledge. The label assignment from the complementary knowledge sources is then transferred to the latent topics of the corpus. The results show both accurate label assignment to topics and improved topic generation compared with those obtained using various labeling approaches based on Latent Dirichlet allocation (LDA).
[ "['Justin Wood' 'Patrick Tan' 'Wei Wang' 'Corey Arnold']", "Justin Wood, Patrick Tan, Wei Wang, Corey Arnold" ]
stat.ML cs.LG cs.NA
null
1606.00602
null
null
http://arxiv.org/pdf/1606.00602v2
2016-09-11T04:15:04Z
2016-06-02T09:59:16Z
Variance-Reduced Proximal Stochastic Gradient Descent for Non-convex Composite optimization
Here we study non-convex composite optimization: the objective is the sum of a finite-sum of smooth but non-convex functions and a general function that admits a simple proximal mapping. Most research on stochastic methods for composite optimization assumes convexity or strong convexity of each function. In this paper, we extend this problem to the non-convex setting using variance reduction techniques, namely prox-SVRG and prox-SAGA. We prove that, with a constant step size, both prox-SVRG and prox-SAGA are suitable for non-convex composite optimization and drive the problem to a stationary point within $O(1/\epsilon)$ iterations. This matches the convergence rate of the state-of-the-art RSAG method and is faster than stochastic gradient descent. Our analysis is also extended to the mini-batch setting, which linearly accelerates the convergence. To the best of our knowledge, this is the first convergence-rate analysis of variance-reduced proximal stochastic gradient methods for non-convex composite optimization.
[ "['Xiyu Yu' 'Dacheng Tao']", "Xiyu Yu, Dacheng Tao" ]
cs.CV cs.LG cs.NE
null
1606.00611
null
null
http://arxiv.org/pdf/1606.00611v2
2017-03-26T18:31:05Z
2016-06-02T10:37:46Z
Recursive Autoconvolution for Unsupervised Learning of Convolutional Neural Networks
In visual recognition tasks, such as image classification, unsupervised learning exploits cheap unlabeled data and can help to solve these tasks more efficiently. We show that the recursive autoconvolution operator, adopted from physics, boosts existing unsupervised methods by learning more discriminative filters. We take well-established convolutional neural networks and train their filters layer-wise. In addition, building on previous work, we design a network which extracts more than 600k features per sample, but with the total number of trainable parameters greatly reduced by introducing shared filters in higher layers. We evaluate our networks on the MNIST, CIFAR-10, CIFAR-100 and STL-10 image classification benchmarks and report several state-of-the-art results among unsupervised methods.
[ "['Boris Knyazev' 'Erhardt Barth' 'Thomas Martinetz']", "Boris Knyazev, Erhardt Barth, Thomas Martinetz" ]
stat.ML cs.LG
null
1606.00704
null
null
http://arxiv.org/pdf/1606.00704v3
2017-02-21T18:28:22Z
2016-06-02T14:43:37Z
Adversarially Learned Inference
We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process. The generation network maps samples from stochastic latent variables to the data space, while the inference network maps training examples in data space to the space of latent variables. An adversarial game is cast between these two networks, and a discriminative network is trained to distinguish between joint latent/data-space samples from the generative network and joint samples from the inference network. We illustrate the ability of the model to learn mutually coherent inference and generation networks through inspection of model samples and reconstructions, and confirm the usefulness of the learned representations by obtaining performance competitive with the state of the art on the semi-supervised SVHN and CIFAR10 tasks.
[ "Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro,\n Alex Lamb, Martin Arjovsky, Aaron Courville", "['Vincent Dumoulin' 'Ishmael Belghazi' 'Ben Poole' 'Olivier Mastropietro'\n 'Alex Lamb' 'Martin Arjovsky' 'Aaron Courville']" ]
stat.ML cs.LG stat.ME
null
1606.00709
null
null
http://arxiv.org/pdf/1606.00709v1
2016-06-02T14:53:33Z
2016-06-02T14:53:33Z
f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization
Generative neural samplers are probabilistic models that implement sampling using feedforward neural networks: they take a random input vector and produce a sample from a probability distribution defined by the network weights. These models are expressive and allow efficient computation of samples and derivatives, but cannot be used for computing likelihoods or for marginalization. The generative-adversarial training method allows such models to be trained through the use of an auxiliary discriminative neural network. We show that the generative-adversarial approach is a special case of an existing, more general variational divergence estimation approach. We show that any f-divergence can be used for training generative neural samplers. We discuss the benefits of various choices of divergence function on training complexity and the quality of the obtained generative models.
[ "Sebastian Nowozin, Botond Cseke, Ryota Tomioka", "['Sebastian Nowozin' 'Botond Cseke' 'Ryota Tomioka']" ]
stat.ML cs.LG
null
1606.00720
null
null
http://arxiv.org/pdf/1606.00720v3
2019-01-17T16:34:53Z
2016-06-02T15:26:53Z
Differentially Private Gaussian Processes
A major challenge for machine learning is increasing the availability of data while respecting the privacy of individuals. Here we combine the provable privacy guarantees of the differential privacy framework with the flexibility of Gaussian processes (GPs). We propose a method using GPs to provide differentially private (DP) regression. We then improve this method by crafting the DP noise covariance structure to efficiently protect the training data, while minimising the scale of the added noise. We find that this cloaking method achieves the greatest accuracy, while still providing privacy guarantees, and offers practical DP for regression over multi-dimensional inputs. Together these methods provide a starter toolkit for combining differential privacy and GPs.
[ "['Michael Thomas Smith' 'Max Zwiessele' 'Neil D. Lawrence']", "Michael Thomas Smith, Max Zwiessele, Neil D. Lawrence" ]
cs.CL cs.LG stat.ML
null
1606.00739
null
null
http://arxiv.org/pdf/1606.00739v2
2016-11-02T16:29:42Z
2016-06-02T16:06:29Z
Stochastic Structured Prediction under Bandit Feedback
Stochastic structured prediction under bandit feedback follows a learning protocol where on each of a sequence of iterations, the learner receives an input, predicts an output structure, and receives partial feedback in form of a task loss evaluation of the predicted structure. We present applications of this learning scenario to convex and non-convex objectives for structured prediction and analyze them as stochastic first-order methods. We present an experimental evaluation on problems of natural language processing over exponential output spaces, and compare convergence speed across different objectives under the practical criterion of optimal task performance on development data and the optimization-theoretic criterion of minimal squared gradient norm. Best results under both criteria are obtained for a non-convex objective for pairwise preference learning under bandit feedback.
[ "['Artem Sokolov' 'Julia Kreutzer' 'Christopher Lo' 'Stefan Riezler']", "Artem Sokolov and Julia Kreutzer and Christopher Lo and Stefan Riezler" ]
cs.CL cs.AI cs.LG cs.NE stat.ML
null
1606.00776
null
null
http://arxiv.org/pdf/1606.00776v2
2016-06-14T02:01:16Z
2016-06-02T17:37:31Z
Multiresolution Recurrent Neural Networks: An Application to Dialogue Response Generation
We introduce the multiresolution recurrent neural network, which extends the sequence-to-sequence framework to model natural language generation as two parallel discrete stochastic processes: a sequence of high-level coarse tokens, and a sequence of natural language tokens. There are many ways to estimate or learn the high-level coarse tokens, but we argue that a simple extraction procedure is sufficient to capture a wealth of high-level discourse semantics. Such a procedure allows training the multiresolution recurrent neural network by maximizing the exact joint log-likelihood over both sequences. In contrast to the standard log-likelihood objective w.r.t. natural language tokens (word perplexity), optimizing the joint log-likelihood biases the model towards modeling high-level abstractions. We apply the proposed model to the task of dialogue response generation in two challenging domains: the Ubuntu technical support domain, and Twitter conversations. On Ubuntu, the model outperforms competing approaches by a substantial margin, achieving state-of-the-art results according to both automatic evaluation metrics and a human evaluation study. On Twitter, the model appears to generate more relevant and on-topic responses according to automatic evaluation metrics. Finally, our experiments demonstrate that the proposed model is more adept at overcoming the sparsity of natural language and is better able to capture long-term structure.
[ "Iulian Vlad Serban, Tim Klinger, Gerald Tesauro, Kartik Talamadupula,\n Bowen Zhou, Yoshua Bengio, Aaron Courville", "['Iulian Vlad Serban' 'Tim Klinger' 'Gerald Tesauro' 'Kartik Talamadupula'\n 'Bowen Zhou' 'Yoshua Bengio' 'Aaron Courville']" ]
stat.ML cs.AI cs.LG stat.CO stat.ME
null
1606.00787
null
null
http://arxiv.org/pdf/1606.00787v2
2017-07-12T18:01:17Z
2016-06-02T18:20:35Z
Post-Inference Prior Swapping
While Bayesian methods are praised for their ability to incorporate useful prior knowledge, in practice, convenient priors that allow for computationally cheap or tractable inference are commonly used. In this paper, we investigate the following question: for a given model, is it possible to compute an inference result with any convenient false prior, and afterwards, given any target prior of interest, quickly transform this result into the target posterior? A potential solution is to use importance sampling (IS). However, we demonstrate that IS will fail for many choices of the target prior, depending on its parametric form and similarity to the false prior. Instead, we propose prior swapping, a method that leverages the pre-inferred false posterior to efficiently generate accurate posterior samples under arbitrary target priors. Prior swapping lets us apply less-costly inference algorithms to certain models, and incorporate new or updated prior information "post-inference". We give theoretical guarantees about our method, and demonstrate it empirically on a number of models and priors.
[ "['Willie Neiswanger' 'Eric Xing']", "Willie Neiswanger, Eric Xing" ]
stat.ML cs.LG
null
1606.00856
null
null
http://arxiv.org/pdf/1606.00856v1
2016-06-02T20:23:00Z
2016-06-02T20:23:00Z
Sequential Principal Curves Analysis
This work collects all the technical details of the Sequential Principal Curves Analysis (SPCA) in a single document. SPCA is an unsupervised, nonlinear and invertible feature extraction technique. The identified curvilinear features can be interpreted as a set of nonlinear sensors: the response of each sensor is the projection onto the corresponding feature. Moreover, it can be easily tuned for different optimization criteria (e.g. infomax, error minimization, decorrelation) by choosing the right way to measure distances along each curvilinear feature. Even though proposed in [Laparra et al. Neural Comp. 12] and shown to work in multiple modalities in [Laparra and Malo Frontiers Hum. Neuro. 15], the SPCA framework has its original roots in the nonlinear ICA algorithm in [Malo and Gutierrez Network 06]. Later on, the SPCA philosophy for the nonlinear generalization of PCA gave rise to substantially faster alternatives at the cost of introducing different constraints in the model, namely the Principal Polynomial Analysis (PPA) [Laparra et al. IJNS 14] and the Dimensionality Reduction via Regression (DRR) [Laparra et al. IEEE TGRS 15]. This report illustrates the reasons why we developed this family of methods and is the appropriate technical companion for the missing details in [Laparra et al., NeCo 12, Laparra and Malo, Front.Hum.Neuro. 15]. See also the data, code and examples in the dedicated sites http://isp.uv.es/spca.html and http://isp.uv.es/after effects.html
[ "Valero Laparra and Jesus Malo", "['Valero Laparra' 'Jesus Malo']" ]
cs.LG
null
1606.00868
null
null
http://arxiv.org/pdf/1606.00868v1
2016-06-02T20:42:31Z
2016-06-02T20:42:31Z
Unified Framework for Quantification
Quantification is the machine learning task of estimating test-data class proportions that are not necessarily similar to those in training. Apart from its intrinsic value as an aggregate statistic, quantification output can also be used to optimize classifier probabilities, thereby increasing classification accuracy. We unify major quantification approaches under a constrained multi-variate regression framework, and use mathematical programming to estimate class proportions for different loss functions. With this modeling approach, we extend existing binary-only quantification approaches to multi-class settings as well. We empirically verify our unified framework by experimenting with several multi-class datasets including the Stanford Sentiment Treebank and CIFAR-10.
[ "Aykut Firat", "['Aykut Firat']" ]
q-bio.QM cs.LG q-bio.TO stat.ML
null
1606.00897
null
null
http://arxiv.org/pdf/1606.00897v2
2016-12-02T20:06:14Z
2016-06-02T21:09:00Z
Multi-Organ Cancer Classification and Survival Analysis
Accurate and robust cell nuclei classification is the cornerstone for a wide range of tasks in digital and computational pathology. However, most machine learning systems require extensive labeling from expert pathologists for each individual problem at hand, with no or limited ability to transfer knowledge between datasets and organ sites. In this paper we implement and evaluate a variety of deep neural network models and model ensembles for nuclei classification in renal cell cancer (RCC) and prostate cancer (PCa). We propose a convolutional neural network system based on residual learning which significantly improves over the state of the art in cell nuclei classification. Finally, we show that combining tissue types during training improves not only classification accuracy but also overall survival analysis.
[ "['Stefan Bauer' 'Nicolas Carion' 'Peter Schüffler' 'Thomas Fuchs'\n 'Peter Wild' 'Joachim M. Buhmann']", "Stefan Bauer and Nicolas Carion and Peter Sch\\\"uffler and Thomas Fuchs\n and Peter Wild and Joachim M. Buhmann" ]
cs.SY cs.LG math.OC
null
1606.00911
null
null
http://arxiv.org/pdf/1606.00911v3
2019-09-17T04:49:44Z
2016-06-02T21:49:25Z
Distributed Cooperative Decision-Making in Multiarmed Bandits: Frequentist and Bayesian Algorithms
We study distributed cooperative decision-making under the explore-exploit tradeoff in the multiarmed bandit (MAB) problem. We extend the state-of-the-art frequentist and Bayesian algorithms for single-agent MAB problems to cooperative distributed algorithms for multi-agent MAB problems in which agents communicate according to a fixed network graph. We rely on a running consensus algorithm for each agent's estimation of mean rewards from its own rewards and the estimated rewards of its neighbors. We prove the performance of these algorithms and show that they asymptotically recover the performance of a centralized agent. Further, we rigorously characterize the influence of the communication graph structure on the decision-making performance of the group.
[ "['Peter Landgren' 'Vaibhav Srivastava' 'Naomi Ehrich Leonard']", "Peter Landgren, Vaibhav Srivastava, and Naomi Ehrich Leonard" ]
cs.LG cs.AI
null
1606.00917
null
null
http://arxiv.org/pdf/1606.00917v1
2016-06-02T22:01:50Z
2016-06-02T22:01:50Z
Towards a Job Title Classification System
Document classification for text, images and other applicable entities has long been a focus of research in academia and also finds application in many industrial settings. Amidst a plethora of approaches to such problems, machine-learning techniques have found success in a variety of scenarios. In this paper we discuss the design of a machine learning-based semi-supervised job title classification system for the online job recruitment domain, currently in production at CareerBuilder.com, and propose enhancements to it. The system leverages a varied collection of classification as well as clustering algorithms. These algorithms are encompassed in an architecture that facilitates leveraging existing off-the-shelf machine learning tools and techniques while taking into consideration the challenges of constructing a scalable classification system for a large taxonomy of categories. As a continuously evolving system that is still under development, we first discuss the existing semi-supervised classification system, which is composed of both clustering and classification components in a proximity-based classifier setup and whose results are already used across numerous products at CareerBuilder. We then elucidate our long-term goals for job title classification and propose enhancements to the existing system in the form of a two-stage coarse- and fine-level classifier augmentation to construct a cascade of hierarchical vertical classifiers. Preliminary results are presented using experimental evaluation on real-world industrial data.
[ "['Faizan Javed' 'Matt McNair' 'Ferosh Jacob' 'Meng Zhao']", "Faizan Javed, Matt McNair, Ferosh Jacob, Meng Zhao" ]
cs.LG stat.ML
null
1606.00925
null
null
http://arxiv.org/pdf/1606.00925v3
2018-06-07T21:01:10Z
2016-06-02T22:40:59Z
Convolutional Imputation of Matrix Networks
A matrix network is a family of matrices, with relatedness modeled by a weighted graph. We consider the task of completing a partially observed matrix network. We assume a novel sampling scheme where a fraction of matrices might be completely unobserved. How can we recover the entire matrix network from incomplete observations? This mathematical problem arises in many applications, including medical imaging and social networks. To recover the matrix network, we propose a structural assumption that the matrices have a graph Fourier transform which is low-rank. We formulate a convex optimization problem and prove an exact recovery guarantee for it. Furthermore, we numerically characterize the exact recovery regime for varying rank and sampling rate and discover a new phase transition phenomenon. We then give an iterative imputation algorithm to efficiently solve the optimization problem and complete large-scale matrix networks. We demonstrate the algorithm on a variety of applications, such as MRI and the Facebook user network.
[ "Qingyun Sun, Mengyuan Yan David Donoho and Stephen Boyd", "['Qingyun Sun' 'Mengyuan Yan David Donoho' 'Stephen Boyd']" ]
cs.LG cs.CV
null
1606.00930
null
null
http://arxiv.org/pdf/1606.00930v1
2016-06-02T23:01:25Z
2016-06-02T23:01:25Z
Comparison of 14 different families of classification algorithms on 115 binary datasets
We tested 14 very different classification algorithms (random forest, gradient boosting machines, SVM - linear, polynomial, and RBF - 1-hidden-layer neural nets, extreme learning machines, k-nearest neighbors and a bagging of knn, naive Bayes, learning vector quantization, elastic net logistic regression, sparse linear discriminant analysis, and a boosting of linear classifiers) on 115 real-life binary datasets. We followed the Demsar analysis and found that the three best classifiers (random forest, gbm and RBF SVM) are not significantly different from each other. We also argue that a change of less than 0.0112 in the error rate should be considered an irrelevant change, and used a Bayesian ANOVA analysis to conclude that, with high probability, the differences between these three classifiers are not of practical consequence. We also measured the execution time of "standard implementations" of these algorithms and concluded that RBF SVM is the fastest (significantly so) both in training time and in training plus testing time.
[ "['Jacques Wainer']", "Jacques Wainer" ]