categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
null
null
1608.07630
null
null
http://arxiv.org/pdf/1608.07630v1
2016-08-26T23:53:43Z
2016-08-26T23:53:43Z
Global analysis of Expectation Maximization for mixtures of two Gaussians
Expectation Maximization (EM) is among the most popular algorithms for estimating parameters of statistical models. However, EM, which is an iterative algorithm based on the maximum likelihood principle, is generally only guaranteed to find stationary points of the likelihood objective, and these points may be far from any maximizer. This article addresses this disconnect between the statistical principles behind EM and its algorithmic properties. Specifically, it provides a global analysis of EM for specific models in which the observations comprise an i.i.d. sample from a mixture of two Gaussians. This is achieved by (i) studying the sequence of parameters from an idealized execution of EM in the infinite sample limit, and fully characterizing the limit points of the sequence in terms of the initial parameters; and then (ii) based on this convergence analysis, establishing statistical consistency (or lack thereof) for the actual sequence of parameters produced by EM.
[ "['Ji Xu' 'Daniel Hsu' 'Arian Maleki']" ]
cs.LG stat.ML
null
1608.07636
null
null
http://arxiv.org/pdf/1608.07636v1
2016-08-27T00:25:54Z
2016-08-27T00:25:54Z
Learning Temporal Dependence from Time-Series Data with Latent Variables
We consider the setting where a collection of time series, modeled as random processes, evolve in a causal manner, and one is interested in learning the graph governing the relationships of these processes. A special case of wide interest and applicability is the setting where the noise is Gaussian and relationships are Markov and linear. We study this setting with two additional features: firstly, each random process has a hidden (latent) state, which we use to model the internal memory possessed by the variables (similar to hidden Markov models). Secondly, each variable can depend on its latent memory state through a random lag (rather than a fixed lag), thus modeling memory recall with differing lags at distinct times. Under this setting, we develop an estimator and prove that under a genericity assumption, the parameters of the model can be learned consistently. We also propose a practical adaptation of this estimator, which demonstrates significant performance gains on both synthetic and real-world datasets.
[ "Hossein Hosseini, Sreeram Kannan, Baosen Zhang and Radha Poovendran", "['Hossein Hosseini' 'Sreeram Kannan' 'Baosen Zhang' 'Radha Poovendran']" ]
cs.CV cs.AI cs.CL cs.LG
null
1608.07639
null
null
http://arxiv.org/pdf/1608.07639v1
2016-08-27T00:34:00Z
2016-08-27T00:34:00Z
Learning to generalize to new compositions in image understanding
Recurrent neural networks have recently been used for learning to describe images using natural language. However, it has been observed that these models generalize poorly to scenes that were not observed during training, possibly depending too strongly on the statistics of the text in the training data. Here we propose to describe images using short structured representations, aiming to capture the crux of a description. These structured representations allow us to tease out and evaluate separately two types of generalization: standard generalization to new images with similar scenes, and generalization to new combinations of known entities. We compare two learning approaches on the MS-COCO dataset: a state-of-the-art recurrent network based on an LSTM (Show, Attend and Tell), and a simple structured prediction model on top of a deep network. We find that the structured model generalizes to new compositions substantially better than the LSTM, achieving ~7 times the accuracy when predicting structured representations. By providing a concrete method to quantify generalization for unseen combinations, we argue that structured representations and compositional splits are a useful benchmark for image captioning, and advocate compositional models that capture linguistic and visual structure.
[ "Yuval Atzmon, Jonathan Berant, Vahid Kezami, Amir Globerson and Gal\n Chechik", "['Yuval Atzmon' 'Jonathan Berant' 'Vahid Kezami' 'Amir Globerson'\n 'Gal Chechik']" ]
cs.LG cs.AI
null
1608.07685
null
null
http://arxiv.org/pdf/1608.07685v8
2020-04-01T03:14:54Z
2016-08-27T09:53:38Z
KSR: A Semantic Representation of Knowledge Graph within a Novel Unsupervised Paradigm
Knowledge representation is a long-standing and important topic in AI. A variety of models have been proposed for knowledge graph embedding, which projects symbolic entities and relations into continuous vector space. However, most related methods merely focus on fitting the data of the knowledge graph and ignore interpretable semantic expression. Thus, traditional embedding methods are ill-suited for applications that require semantic analysis, such as question answering and entity retrieval. To this end, this paper proposes a semantic representation method for knowledge graphs \textbf{(KSR)}, which imposes a two-level hierarchical generative process that globally extracts many aspects and then locally assigns a specific category in each aspect for every triple. Since both aspects and categories are semantics-relevant, the collection of categories in each aspect is treated as the semantic representation of this triple. Extensive experiments show that our model outperforms other state-of-the-art baselines substantially.
[ "['Han Xiao' 'Minlie Huang' 'Xiaoyan Zhu']", "Han Xiao, Minlie Huang, Xiaoyan Zhu" ]
cs.LG stat.ML
null
1608.07690
null
null
http://arxiv.org/pdf/1608.07690v1
2016-08-27T10:44:54Z
2016-08-27T10:44:54Z
A Boundary Tilting Perspective on the Phenomenon of Adversarial Examples
Deep neural networks have been shown to suffer from a surprising weakness: their classification outputs can be changed by small, non-random perturbations of their inputs. This adversarial example phenomenon has been explained as originating from deep networks being "too linear" (Goodfellow et al., 2014). We show here that the linear explanation of adversarial examples presents a number of limitations: the formal argument is not convincing, linear classifiers do not always suffer from the phenomenon, and when they do their adversarial examples are different from the ones affecting deep networks. We propose a new perspective on the phenomenon. We argue that adversarial examples exist when the classification boundary lies close to the submanifold of sampled data, and present a mathematical analysis of this new perspective in the linear case. We define the notion of adversarial strength and show that it can be reduced to the deviation angle between the classifier considered and the nearest centroid classifier. Then, we show that the adversarial strength can be made arbitrarily high independently of the classification performance due to a mechanism that we call boundary tilting. This result leads us to defining a new taxonomy of adversarial examples. Finally, we show that the adversarial strength observed in practice is directly dependent on the level of regularisation used and the strongest adversarial examples, symptomatic of overfitting, can be avoided by using a proper level of regularisation.
[ "['Thomas Tanay' 'Lewis Griffin']" ]
cs.LG stat.ML
null
1608.07710
null
null
http://arxiv.org/pdf/1608.07710v3
2018-06-16T03:22:49Z
2016-08-27T13:32:42Z
Random Forest for Label Ranking
Label ranking aims to learn a mapping from instances to rankings over a finite number of predefined labels. Random forest is among the most powerful and successful general-purpose machine learning algorithms of modern times. In this paper, we present a powerful random forest label ranking method which uses random decision trees to retrieve nearest neighbors. We have developed a novel two-step rank aggregation strategy to effectively aggregate the neighboring rankings discovered by the random forest into a final predicted ranking. Compared with existing methods, the new random forest method has many advantages, including its intrinsically scalable tree data structure, highly parallelizable computational architecture, and superior performance. We present extensive experimental results to demonstrate that our new method achieves highly competitive performance compared with state-of-the-art methods on datasets with complete ranking and datasets with only partial ranking information.
[ "['Yangming Zhou' 'Guoping Qiu']" ]
cs.LG
null
1608.07719
null
null
http://arxiv.org/pdf/1608.07719v2
2016-09-04T00:55:52Z
2016-08-27T15:31:21Z
Temperature-Based Deep Boltzmann Machines
Deep learning techniques have been paramount in recent years, mainly due to their outstanding results in a number of applications that range from speech recognition to face-based user identification. Among the techniques employed for such purposes, Deep Boltzmann Machines (DBMs) are among the most used ones; they are composed of layers of Restricted Boltzmann Machines (RBMs) stacked on top of each other. In this work, we evaluate the concept of temperature in DBMs, which plays a key role in Boltzmann-related distributions but has never been considered in this context to date. The main contribution of this paper is therefore to take this information into account and to evaluate its influence in DBMs on the task of binary image reconstruction. We expect this work to foster future research on the usage of different temperatures during learning in DBMs.
[ "Leandro Aparecido Passos Junior and Joao Paulo Papa", "['Leandro Aparecido Passos Junior' 'Joao Paulo Papa']" ]
cs.LG stat.ML
10.1109/TSP.2017.2715000
1608.07739
null
null
http://arxiv.org/abs/1608.07739v2
2017-02-27T13:56:44Z
2016-08-27T19:59:29Z
Bayesian selection for the l2-Potts model regularization parameter: 1D piecewise constant signal denoising
Piecewise constant denoising can be solved either by deterministic optimization approaches, based on the Potts model, or by stochastic Bayesian procedures. The former lead to low computational time but require the selection of a regularization parameter, whose value significantly impacts the achieved solution, and whose automated selection remains an involved and challenging problem. Conversely, fully Bayesian formalisms encapsulate the regularization parameter selection into hierarchical models, at the price of high computational costs. This contribution proposes an operational strategy that combines hierarchical Bayesian and Potts model formulations, with the double aim of automatically tuning the regularization parameter and of maintaining computational efficiency. The proposed procedure relies on formally connecting a Bayesian framework to a l2-Potts functional. Behaviors and performance for the proposed piecewise constant denoising and regularization parameter tuning techniques are studied qualitatively and assessed quantitatively, and shown to compare favorably against those of a fully Bayesian hierarchical procedure, both in accuracy and in computational load.
[ "Jordan Frecon, Nelly Pustelnik, Nicolas Dobigeon, Herwig Wendt and\n Patrice Abry", "['Jordan Frecon' 'Nelly Pustelnik' 'Nicolas Dobigeon' 'Herwig Wendt'\n 'Patrice Abry']" ]
cs.LG math.OC
null
1608.07888
null
null
http://arxiv.org/pdf/1608.07888v1
2016-08-29T01:58:27Z
2016-08-29T01:58:27Z
Online Monotone Optimization
This paper presents a new framework for analyzing and designing no-regret algorithms for dynamic (possibly adversarial) systems. The proposed framework generalizes the popular online convex optimization framework and extends it to its natural limit, allowing it to capture a notion of regret that is intuitive for more general problems such as those encountered in game theory and variational inequalities. The framework hinges on a special choice of a system-wide loss function we have developed. Using this framework, we prove that a simple update scheme provides a no-regret algorithm for monotone systems. While previous results in game theory prove that individual agents can enjoy unilateral no-regret guarantees, our result proves that monotonicity is sufficient to guarantee no-regret when considering the adjustments of multiple agent strategies in parallel. Furthermore, to our knowledge, this is the first framework to provide a suitable notion of regret for variational inequalities. Most importantly, our proposed framework establishes monotonicity as a sufficient condition for employing multiple online learners safely in parallel.
[ "Ian Gemp and Sridhar Mahadevan", "['Ian Gemp' 'Sridhar Mahadevan']" ]
stat.ML cs.LG
null
1608.07892
null
null
http://arxiv.org/pdf/1608.07892v3
2018-02-21T03:45:44Z
2016-08-29T02:14:48Z
Optimizing Recurrent Neural Networks Architectures under Time Constraints
The architecture of a recurrent neural network (RNN) is a key factor influencing its performance. We propose algorithms to optimize hidden sizes under a running time constraint. We convert the discrete optimization into a subset selection problem. By novel transformations, the objective function becomes submodular and the constraint becomes supermodular. A greedy algorithm with bounds is suggested to solve the transformed problem, and we show how the transformations influence the bounds. To speed up optimization, surrogate functions are proposed which balance exploration and exploitation. Experiments show that our algorithms can find more accurate models or faster models than manually tuned state-of-the-art and random search. We also compare popular RNN architectures using our algorithms.
[ "Junqi Jin, Ziang Yan, Kun Fu, Nan Jiang, Changshui Zhang", "['Junqi Jin' 'Ziang Yan' 'Kun Fu' 'Nan Jiang' 'Changshui Zhang']" ]
cs.LG cs.HC
null
1608.07895
null
null
http://arxiv.org/pdf/1608.07895v1
2016-08-29T02:37:21Z
2016-08-29T02:37:21Z
Human-Algorithm Interaction Biases in the Big Data Cycle: A Markov Chain Iterated Learning Framework
Early supervised machine learning algorithms have relied on reliable expert labels to build predictive models. However, the gates of data generation have recently been opened to a wider base of users who started participating increasingly with casual labeling, rating, annotating, etc. The increased online presence and participation of humans has led not only to a democratization of unchecked inputs to algorithms, but also to a wide democratization of the "consumption" of machine learning algorithms' outputs by general users. Hence, these algorithms, many of which are becoming essential building blocks of recommender systems and other information filters, started interacting with users at unprecedented rates. The result is machine learning algorithms that consume more and more data that is unchecked, or at the very least, not fitting conventional assumptions made by various machine learning algorithms. These include biased samples, biased labels, diverging training and testing sets, and cyclical interaction between algorithms, humans, information consumed by humans, and data consumed by algorithms. Yet, the continuous interaction between humans and algorithms is rarely taken into account in machine learning algorithm design and analysis. In this paper, we present a preliminary theoretical model and analysis of the mutual interaction between humans and algorithms, based on an iterated learning framework that is inspired from the study of human language evolution. We also define the concepts of human and algorithm blind spots and outline machine learning approaches to mend iterated bias through two novel notions: antidotes and reactive learning.
[ "Olfa Nasraoui and Patrick Shafto", "['Olfa Nasraoui' 'Patrick Shafto']" ]
cs.LG stat.ML
10.1016/j.engappai.2016.06.001
1608.07934
null
null
http://arxiv.org/abs/1608.07934v1
2016-08-29T07:21:20Z
2016-08-29T07:21:20Z
Relevant based structure learning for feature selection
Feature selection is an important task in many problems occurring in pattern recognition, bioinformatics, machine learning and data mining applications. Feature selection enables us to reduce the computational burden and the accuracy degradation that come with handling a huge number of features in typical learning problems. There is a variety of techniques for feature selection in supervised learning problems based on different selection metrics. In this paper, we propose a novel unified framework for feature selection built on graphical models and information theoretic tools. The proposed approach exploits the structure learning among features to select more relevant and less redundant features for the predictive modeling problem according to a primary novel likelihood based criterion. Alongside the selection of the optimal subset of features, the proposed method provides the Bayesian network classifier without the additional cost of model training on the selected subset of features. The optimal properties of our method are established through empirical studies and computational complexity analysis. Furthermore, the proposed approach is evaluated on a range of benchmark datasets using well-known classification algorithms. Extensive experiments confirm the significant improvement of the proposed approach compared to earlier works.
[ "Hadi Zare and Mojtaba Niazi", "['Hadi Zare' 'Mojtaba Niazi']" ]
cs.NI cs.IT cs.LG math.IT
null
1608.07949
null
null
http://arxiv.org/pdf/1608.07949v1
2016-08-29T08:33:25Z
2016-08-29T08:33:25Z
Learning-Based Resource Allocation Scheme for TDD-Based CRAN System
Explosive growth in the use of smart wireless devices has necessitated the provision of higher data rates and always-on connectivity, which are the main motivators for designing the fifth generation (5G) systems. To achieve higher system efficiency, massive antenna deployment with tight coordination is one potential strategy for designing 5G systems, but has two types of associated system overhead. First is the synchronization overhead, which can be reduced by implementing a cloud radio access network (CRAN)-based architecture design that separates the baseband processing and radio access functionality to achieve better system synchronization. Second is the overhead for acquiring channel state information (CSI) of the users present in the system, which, however, increases tremendously when instantaneous CSI is used to serve high-mobility users. To serve a large number of users, a CRAN system with a dense deployment of remote radio heads (RRHs) is considered, such that each user has a line-of-sight (LOS) link with the corresponding RRH. Since the trajectory of high-mobility users is predictable, fairly accurate position estimates for those users can be obtained and used for resource allocation to serve the considered users. The resource allocation is dependent upon various correlated system parameters, and these correlations can be learned using well-known \emph{machine learning} algorithms. This paper proposes a novel \emph{learning-based resource allocation scheme} for time division duplex (TDD) based 5G CRAN systems with dense RRH deployment, by using only the users' position estimates for resource allocation, thus avoiding the need for CSI acquisition. This reduces the overall system overhead significantly, while still achieving near-optimal system performance; thus, better (effective) system efficiency is achieved. (See the paper for full abstract)
[ "Sahar Imtiaz, Hadi Ghauch, M. Mahboob Ur Rahman, George Koudouridis,\n and James Gross", "['Sahar Imtiaz' 'Hadi Ghauch' 'M. Mahboob Ur Rahman' 'George Koudouridis'\n 'James Gross']" ]
stat.ML cs.LG
null
1608.08052
null
null
http://arxiv.org/pdf/1608.08052v1
2016-08-29T14:00:21Z
2016-08-29T14:00:21Z
Robust Discriminative Clustering with Sparse Regularizers
Clustering high-dimensional data often requires some form of dimensionality reduction, where clustered variables are separated from "noise-looking" variables. We cast this problem as finding a low-dimensional projection of the data which is well-clustered. This yields a one-dimensional projection in the simplest situation with two clusters, and extends naturally to a multi-label scenario for more than two clusters. In this paper, (a) we first show that this joint clustering and dimension reduction formulation is equivalent to previously proposed discriminative clustering frameworks, thus leading to convex relaxations of the problem, (b) we propose a novel sparse extension, which is still cast as a convex relaxation and allows estimation in higher dimensions, (c) we propose a natural extension for the multi-label scenario, (d) we provide a new theoretical analysis of the performance of these formulations with a simple probabilistic model, leading to scalings of the form $d=O(\sqrt{n})$ for the affine invariant case and $d=O(n)$ for the sparse case, where $n$ is the number of examples and $d$ the ambient dimension, and finally, (e) we propose an efficient iterative algorithm with running-time complexity proportional to $O(nd^2)$, improving on earlier algorithms which had quadratic complexity in the number of examples.
[ "['Nicolas Flammarion' 'Balamurugan Palaniappan' 'Francis Bach']", "Nicolas Flammarion and Balamurugan Palaniappan and Francis Bach" ]
stat.ML cs.LG
10.1007/s10994-018-5717-1
1608.08063
null
null
http://arxiv.org/abs/1608.08063v2
2018-05-23T08:42:15Z
2016-08-29T14:18:40Z
Wasserstein Discriminant Analysis
Wasserstein Discriminant Analysis (WDA) is a new supervised method that can improve classification of high-dimensional data by computing a suitable linear map onto a lower dimensional subspace. Following the blueprint of classical Linear Discriminant Analysis (LDA), WDA selects the projection matrix that maximizes the ratio of two quantities: the dispersion of projected points coming from different classes, divided by the dispersion of projected points coming from the same class. To quantify dispersion, WDA uses regularized Wasserstein distances, rather than cross-variance measures which have been usually considered, notably in LDA. Thanks to the underlying principles of optimal transport, WDA is able to capture both global (at distribution scale) and local (at samples scale) interactions between classes. Regularized Wasserstein distances can be computed using the Sinkhorn matrix scaling algorithm; we show that the optimization of WDA can be tackled using automatic differentiation of Sinkhorn iterations. Numerical experiments show promising results both in terms of prediction and visualization on toy examples and real life datasets such as MNIST and on deep features obtained from a subset of the Caltech dataset.
[ "R\\'emi Flamary, Marco Cuturi, Nicolas Courty, Alain Rakotomamonjy", "['Rémi Flamary' 'Marco Cuturi' 'Nicolas Courty' 'Alain Rakotomamonjy']" ]
cs.LG cs.CR cs.IR
null
1608.08182
null
null
http://arxiv.org/pdf/1608.08182v2
2016-10-05T22:26:13Z
2016-08-29T19:09:27Z
Data Poisoning Attacks on Factorization-Based Collaborative Filtering
Recommendation and collaborative filtering systems are important in modern information and e-commerce applications. As these systems are becoming increasingly popular in the industry, their outputs could affect business decision making, introducing incentives for an adversarial party to compromise the availability or integrity of such systems. We introduce a data poisoning attack on collaborative filtering systems. We demonstrate how a powerful attacker with full knowledge of the learner can generate malicious data so as to maximize his/her malicious objectives, while at the same time mimicking normal user behavior to avoid being detected. While the complete knowledge assumption seems extreme, it enables a robust assessment of the vulnerability of collaborative filtering schemes to highly motivated attacks. We present efficient solutions for two popular factorization-based collaborative filtering algorithms: the \emph{alternative minimization} formulation and the \emph{nuclear norm minimization} method. Finally, we test the effectiveness of our proposed algorithms on real-world data and discuss potential defensive strategies.
[ "['Bo Li' 'Yining Wang' 'Aarti Singh' 'Yevgeniy Vorobeychik']", "Bo Li, Yining Wang, Aarti Singh, Yevgeniy Vorobeychik" ]
cond-mat.dis-nn cs.LG cs.NE stat.ML
10.1007/s10955-017-1836-5
1608.08225
null
null
http://arxiv.org/abs/1608.08225v4
2017-08-03T18:32:53Z
2016-08-29T20:00:14Z
Why does deep and cheap learning work so well?
We show how the success of deep learning could depend not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can frequently be approximated through "cheap learning" with exponentially fewer parameters than generic ones. We explore how properties frequently encountered in physics such as symmetry, locality, compositionality, and polynomial log-probability translate into exceptionally simple neural networks. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine-learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to the renormalization group. We prove various "no-flattening theorems" showing when efficient linear deep networks cannot be accurately approximated by shallow ones without efficiency loss, for example, we show that $n$ variables cannot be multiplied using fewer than $2^n$ neurons in a single hidden layer.
[ "Henry W. Lin (Harvard), Max Tegmark (MIT), David Rolnick (MIT)", "['Henry W. Lin' 'Max Tegmark' 'David Rolnick']" ]
cs.LG stat.ML
10.1007/s10994-018-5760-y
1608.08266
null
null
http://arxiv.org/abs/1608.08266v2
2018-08-24T16:27:28Z
2016-08-29T22:10:17Z
Visualizing and Understanding Sum-Product Networks
Sum-Product Networks (SPNs) are recently introduced deep tractable probabilistic models by which several kinds of inference queries can be answered exactly and in tractable time. Up to now, they have been largely used as black box density estimators, assessed only by comparing their likelihood scores. In this paper we explore and exploit the inner representations learned by SPNs. We do this with a threefold aim: first we want to get a better understanding of the inner workings of SPNs; secondly, we seek additional ways to evaluate one SPN model and compare it against other probabilistic models, providing diagnostic tools to practitioners; lastly, we want to empirically evaluate how good and meaningful the extracted representations are, as in a classic Representation Learning framework. In order to do so we revise their interpretation as deep neural networks and we propose to exploit several visualization techniques on their node activations and network outputs under different types of inference queries. To investigate these models as feature extractors, we plug some SPNs, learned in a greedy unsupervised fashion on image datasets, in supervised classification learning tasks. We extract several embedding types from node activations by filtering nodes by their type, by their associated feature abstraction level and by their scope. In a thorough empirical comparison we prove them to be competitive against those generated from popular feature extractors such as Restricted Boltzmann Machines. Finally, we investigate embeddings generated from random probabilistic marginal queries as means to compare other tractable probabilistic models on a common ground, extending our experiments to Mixtures of Trees.
[ "Antonio Vergari and Nicola Di Mauro and Floriana Esposito", "['Antonio Vergari' 'Nicola Di Mauro' 'Floriana Esposito']" ]
math.OC cs.LG stat.ML
null
1608.08337
null
null
http://arxiv.org/pdf/1608.08337v1
2016-08-30T05:58:38Z
2016-08-30T05:58:38Z
Data Dependent Convergence for Distributed Stochastic Optimization
In this dissertation we propose alternative analysis of distributed stochastic gradient descent (SGD) algorithms that rely on spectral properties of the data covariance. As a consequence we can relate questions pertaining to speedups and convergence rates for distributed SGD to the data distribution instead of the regularity properties of the objective functions. More precisely we show that this rate depends on the spectral norm of the sample covariance matrix. An estimate of this norm can provide practitioners with guidance towards a potential gain in algorithm performance. For example many sparse datasets with low spectral norm prove to be amenable to gains in distributed settings. Towards establishing this data dependence we first study a distributed consensus-based SGD algorithm and show that the rate of convergence involves the spectral norm of the sample covariance matrix when the underlying data is assumed to be independent and identically distributed (homogeneous). This dependence allows us to identify network regimes that prove to be beneficial for datasets with low sample covariance spectral norm. Existing consensus based analyses prove to be sub-optimal in the homogeneous setting. Our analysis method also allows us to find data-dependent convergence rates as we limit the amount of communication. Spreading a fixed amount of data across more nodes slows convergence; in the asymptotic regime we show that adding more machines can help when minimizing twice-differentiable losses. Since the mini-batch results don't follow from the consensus results we propose a different data dependent analysis, thereby providing theoretical validation for why certain datasets are more amenable to mini-batching. We also provide empirical evidence for results in this thesis.
[ "Avleen S. Bijral", "['Avleen S. Bijral']" ]
cs.LG cs.AI cs.NE
10.1109/ICARCV.2014.7064375
1608.08435
null
null
http://arxiv.org/abs/1608.08435v1
2016-08-30T13:08:06Z
2016-08-30T13:08:06Z
Multi-Label Classification Method Based on Extreme Learning Machines
In this paper, an Extreme Learning Machine (ELM) based technique for multi-label classification problems is proposed and discussed. In multi-label classification, each of the input data samples belongs to one or more class labels. The traditional binary and multi-class classification problems are subsets of the multi-label problem, with the number of labels corresponding to each sample limited to one. The proposed ELM based multi-label classification technique is evaluated with six different benchmark multi-label datasets from different domains such as multimedia, text and biology. A detailed comparison of the results is made by comparing the proposed method with the results from nine state-of-the-art techniques for five different evaluation metrics. The nine methods are chosen from different categories of multi-label methods. The comparative results show that the proposed Extreme Learning Machine based multi-label classification technique is a better alternative than the existing state-of-the-art methods for multi-label problems.
[ "['Rajasekar Venkatesan' 'Meng Joo Er']", "Rajasekar Venkatesan, Meng Joo Er" ]
cs.LG cs.IR
null
1608.08574
null
null
http://arxiv.org/pdf/1608.08574v1
2016-08-30T17:46:55Z
2016-08-30T17:46:55Z
Applying Naive Bayes Classification to Google Play Apps Categorization
There are over one million apps on Google Play Store and over half a million publishers. Having such a huge number of apps and developers can pose a challenge to app users and new publishers on the store. Discovering apps can be challenging if apps are not correctly published in the right category, which, in turn, reduces earnings for app developers. Additionally, with over 41 categories on Google Play Store, deciding on the right category to publish an app can be challenging for developers due to the number of categories they have to choose from. Machine learning has been very useful, especially in classification problems such as sentiment analysis, document classification and spam detection. These strategies can also be applied to app categorization on Google Play Store to suggest appropriate categories for app publishers using details from their application. In this project, we built two variations of the Naive Bayes classifier using open metadata from top developer apps on Google Play Store in order to classify new apps on the store. These classifiers are then evaluated using various evaluation methods and their results compared against each other. The results show that the Naive Bayes algorithm performs well for our classification problem and can potentially automate app categorization for Android app publishers on Google Play Store.
[ "['Babatunde Olabenjo']", "Babatunde Olabenjo" ]
cs.CV cs.AI cs.LG
null
1608.08614
null
null
http://arxiv.org/pdf/1608.08614v2
2016-12-10T13:37:06Z
2016-08-30T19:45:09Z
What makes ImageNet good for transfer learning?
The tremendous success of ImageNet-trained deep features on a wide range of transfer tasks begs the question: what are the properties of the ImageNet dataset that are critical for learning good, general-purpose features? This work provides an empirical investigation of various facets of this question: Is more pre-training data always better? How does feature quality depend on the number of training examples per class? Does adding more object classes improve performance? For the same data budget, how should the data be split into classes? Is fine-grained recognition necessary for learning good features? Given the same number of training classes, is it better to have coarse classes or fine-grained classes? Which is better: more classes or more examples per class? To answer these and related questions, we pre-trained CNN features on various subsets of the ImageNet dataset and evaluated transfer performance on PASCAL detection, PASCAL action classification, and SUN scene classification tasks. Our overall findings suggest that most changes in the choice of pre-training data long thought to be critical do not significantly affect transfer performance.
[ "['Minyoung Huh' 'Pulkit Agrawal' 'Alexei A. Efros']", "Minyoung Huh, Pulkit Agrawal, Alexei A. Efros" ]
cs.CV cs.LG
null
1608.08710
null
null
http://arxiv.org/pdf/1608.08710v3
2017-03-10T17:57:56Z
2016-08-31T02:29:59Z
Pruning Filters for Efficient ConvNets
The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks.
[ "['Hao Li' 'Asim Kadav' 'Igor Durdanovic' 'Hanan Samet' 'Hans Peter Graf']" ]
cs.AI cs.CL cs.CV cs.LG
null
1608.08716
null
null
http://arxiv.org/pdf/1608.08716v1
2016-08-31T02:56:00Z
2016-08-31T02:56:00Z
Measuring Machine Intelligence Through Visual Question Answering
As machines have become more intelligent, there has been a renewed interest in methods for measuring their intelligence. A common approach is to propose tasks for which a human excels, but one which machines find difficult. However, an ideal task should also be easy to evaluate and not be easily gameable. We begin with a case study exploring the recently popular task of image captioning and its limitations as a task for measuring machine intelligence. An alternative and more promising task is Visual Question Answering that tests a machine's ability to reason about language and vision. We describe a dataset unprecedented in size created for the task that contains over 760,000 human generated questions about images. Using around 10 million human generated answers, machines may be easily evaluated.
[ "['C. Lawrence Zitnick' 'Aishwarya Agrawal' 'Stanislaw Antol'\n 'Margaret Mitchell' 'Dhruv Batra' 'Devi Parikh']", "C. Lawrence Zitnick, Aishwarya Agrawal, Stanislaw Antol, Margaret\n Mitchell, Dhruv Batra, Devi Parikh" ]
cs.LG stat.ML
null
1608.08761
null
null
http://arxiv.org/pdf/1608.08761v2
2016-10-31T17:34:03Z
2016-08-31T08:18:48Z
hi-RF: Incremental Learning Random Forest for large-scale multi-class Data Classification
In recent years, dynamically growing data and an incrementally growing number of classes pose new challenges to large-scale data classification research. Most traditional methods struggle to balance precision and computational burden as data and the number of classes increase; some methods have weak precision, while others are time-consuming. In this paper, we propose an incremental learning method, namely heterogeneous incremental Nearest Class Mean Random Forest (hi-RF), to handle this issue. It is a heterogeneous method that either replaces trees or updates tree leaves in the random forest adaptively, to reduce computational time at comparable performance when data of new classes arrive. Specifically, to keep the accuracy, a proportion of trees are replaced by new NCM decision trees; to reduce the computational load, only the leaf probabilities of the remaining trees are updated. Above all, out-of-bag estimation and out-of-bag boosting are proposed to balance accuracy and computational efficiency. Fair experiments were conducted and demonstrated comparable precision with much less computational time.
[ "['Tingting Xie' 'Yuxing Peng' 'Changjian Wang']", "Tingting Xie, Yuxing Peng, Changjian Wang" ]
stat.ML cs.LG math.ST stat.TH
null
1608.08852
null
null
http://arxiv.org/pdf/1608.08852v1
2016-08-31T13:53:09Z
2016-08-31T13:53:09Z
A Mathematical Framework for Feature Selection from Real-World Data with Non-Linear Observations
In this paper, we study the challenge of feature selection based on a relatively small collection of sample pairs $\{(x_i, y_i)\}_{1 \leq i \leq m}$. The observations $y_i \in \mathbb{R}$ are thereby supposed to follow a noisy single-index model, depending on a certain set of signal variables. A major difficulty is that these variables usually cannot be observed directly, but rather arise as hidden factors in the actual data vectors $x_i \in \mathbb{R}^d$ (feature variables). We will prove that a successful variable selection is still possible in this setup, even when the applied estimator does not have any knowledge of the underlying model parameters and only takes the 'raw' samples $\{(x_i, y_i)\}_{1 \leq i \leq m}$ as input. The model assumptions of our results will be fairly general, allowing for non-linear observations, arbitrary convex signal structures as well as strictly convex loss functions. This is particularly appealing for practical purposes, since in many applications, already standard methods, e.g., the Lasso or logistic regression, yield surprisingly good outcomes. Apart from a general discussion of the practical scope of our theoretical findings, we will also derive a rigorous guarantee for a specific real-world problem, namely sparse feature extraction from (proteomics-based) mass spectrometry data.
[ "Martin Genzel and Gitta Kutyniok", "['Martin Genzel' 'Gitta Kutyniok']" ]
cs.LG cs.AI cs.NE
10.1007/978-3-319-28373-9_37
1608.08898
null
null
http://arxiv.org/abs/1608.08898v1
2016-08-31T14:56:12Z
2016-08-31T14:56:12Z
A High Speed Multi-label Classifier based on Extreme Learning Machines
In this paper a high speed neural network classifier based on extreme learning machines for the multi-label classification problem is proposed and discussed. Multi-label classification is a superset of traditional binary and multi-class classification problems. The proposed work extends the extreme learning machine technique to adapt to multi-label problems. As opposed to the single-label problem, both the number of labels the sample belongs to, and each of those target labels, are to be identified for multi-label classification, resulting in increased complexity. The proposed high speed multi-label classifier is applied to six benchmark datasets comprising different application areas such as multimedia, text and biology. The training time and testing time of the classifier are compared with those of the state-of-the-art methods. Experimental studies show that for all six datasets, our proposed technique has faster execution speed and better performance, thereby outperforming all the existing multi-label classification methods.
[ "['Meng Joo Er' 'Rajasekar Venkatesan' 'Ning Wang']", "Meng Joo Er, Rajasekar Venkatesan and Ning Wang" ]
cs.LG cs.AI cs.NE
null
1608.08905
null
null
http://arxiv.org/pdf/1608.08905v1
2016-08-31T15:14:06Z
2016-08-31T15:14:06Z
A Novel Online Real-time Classifier for Multi-label Data Streams
In this paper, a novel extreme learning machine based online multi-label classifier for real-time data streams is proposed. Multi-label classification is one of the actively researched machine learning paradigms and has gained much attention in recent years due to its rapidly increasing real-world applications. In contrast to traditional binary and multi-class classification, multi-label classification involves association of each of the input samples with a set of target labels simultaneously. There is no real-time online neural network based multi-label classifier available in the literature. In this paper, we exploit the inherent high speed of extreme learning machines to develop a novel online real-time classifier for multi-label data streams. The developed classifier is experimented with datasets from different application domains for consistency, performance and speed. The experimental studies show that the proposed method outperforms the existing state-of-the-art techniques in terms of speed and accuracy and can classify multi-label data streams in real-time.
[ "['Rajasekar Venkatesan' 'Meng Joo Er' 'Shiqian Wu' 'Mahardhika Pratama']", "Rajasekar Venkatesan, Meng Joo Er, Shiqian Wu, Mahardhika Pratama" ]
stat.ML cs.LG
null
1608.08925
null
null
http://arxiv.org/pdf/1608.08925v3
2017-08-01T13:12:19Z
2016-08-31T16:20:59Z
Recursive Partitioning for Personalization using Observational Data
We study the problem of learning to choose from m discrete treatment options (e.g., news item or medical drug) the one with best causal effect for a particular instance (e.g., user or patient) where the training data consists of passive observations of covariates, treatment, and the outcome of the treatment. The standard approach to this problem is regress and compare: split the training data by treatment, fit a regression model in each split, and, for a new instance, predict all m outcomes and pick the best. By reformulating the problem as a single learning task rather than m separate ones, we propose a new approach based on recursively partitioning the data into regimes where different treatments are optimal. We extend this approach to an optimal partitioning approach that finds a globally optimal partition, achieving a compact, interpretable, and impactful personalization model. We develop new tools for validating and evaluating personalization models on observational data and use these to demonstrate the power of our novel approaches in a personalized medicine and a job training application.
[ "['Nathan Kallus']", "Nathan Kallus" ]
cs.CL cs.IR cs.LG
null
1608.08940
null
null
http://arxiv.org/pdf/1608.08940v1
2016-08-31T17:01:09Z
2016-08-31T17:01:09Z
Hash2Vec, Feature Hashing for Word Embeddings
In this paper we propose the application of feature hashing to create word embeddings for natural language processing. Feature hashing has been used successfully to create document vectors in related tasks like document classification. In this work we show that feature hashing can be applied to obtain word embeddings in linear time with the size of the data. The results show that this algorithm, that does not need training, is able to capture the semantic meaning of words. We compare the results against GloVe showing that they are similar. As far as we know this is the first application of feature hashing to the word embeddings problem and the results indicate this is a scalable technique with practical results for NLP applications.
[ "['Luis Argerich' 'Joaquín Torré Zaffaroni' 'Matías J Cano']" ]
cs.LG cs.CV stat.ML
null
1608.08967
null
null
http://arxiv.org/pdf/1608.08967v1
2016-08-31T17:54:34Z
2016-08-31T17:54:34Z
Robustness of classifiers: from adversarial to random noise
Several recent works have shown that state-of-the-art classifiers are vulnerable to worst-case (i.e., adversarial) perturbations of the datapoints. On the other hand, it has been empirically observed that these same classifiers are relatively robust to random noise. In this paper, we propose to study a \textit{semi-random} noise regime that generalizes both the random and worst-case noise regimes. We propose the first quantitative analysis of the robustness of nonlinear classifiers in this general noise regime. We establish precise theoretical bounds on the robustness of classifiers in this general regime, which depend on the curvature of the classifier's decision boundary. Our bounds confirm and quantify the empirical observations that classifiers satisfying curvature constraints are robust to random noise. Moreover, we quantify the robustness of classifiers in terms of the subspace dimension in the semi-random noise regime, and show that our bounds remarkably interpolate between the worst-case and random noise regimes. We perform experiments and show that the derived bounds provide very accurate estimates when applied to various state-of-the-art deep neural networks and datasets. This result suggests bounds on the curvature of the classifiers' decision boundaries that we support experimentally, and more generally offers important insights onto the geometry of high dimensional classification problems.
[ "['Alhussein Fawzi' 'Seyed-Mohsen Moosavi-Dezfooli' 'Pascal Frossard']", "Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard" ]
cs.CV cs.AI cs.CL cs.LG
null
1608.08974
null
null
http://arxiv.org/pdf/1608.08974v2
2016-09-09T19:51:06Z
2016-08-31T18:11:29Z
Towards Transparent AI Systems: Interpreting Visual Question Answering Models
Deep neural networks have shown striking progress and obtained state-of-the-art results in many AI research fields in recent years. However, it is often unsatisfying to not know why they predict what they do. In this paper, we address the problem of interpreting Visual Question Answering (VQA) models. Specifically, we are interested in finding what part of the input (pixels in images or words in questions) the VQA model focuses on while answering the question. To tackle this problem, we use two visualization techniques -- guided backpropagation and occlusion -- to find important words in the question and important regions in the image. We then present qualitative and quantitative analyses of these importance maps. We found that even without explicit attention mechanisms, VQA models may sometimes be implicitly attending to relevant regions in the image, and often to appropriate words in the question.
[ "Yash Goyal, Akrit Mohapatra, Devi Parikh, Dhruv Batra", "['Yash Goyal' 'Akrit Mohapatra' 'Devi Parikh' 'Dhruv Batra']" ]
stat.ML cs.LG
null
1608.08984
null
null
http://arxiv.org/pdf/1608.08984v1
2016-08-31T18:34:51Z
2016-08-31T18:34:51Z
Towards Competitive Classifiers for Unbalanced Classification Problems: A Study on the Performance Scores
Although a great methodological effort has been invested in proposing competitive solutions to the class-imbalance problem, little effort has been made in pursuing a theoretical understanding of this matter. In order to shed some light on this topic, we perform, through a novel framework, an exhaustive analysis of the adequateness of the most commonly used performance scores to assess this complex scenario. We conclude that using unweighted H\"older means with exponent $p \leq 1$ to average the recalls of all the classes produces adequate scores which are capable of determining whether a classifier is competitive. Then, we review the major solutions presented in the class-imbalance literature. Since any learning task can be defined as an optimisation problem where a loss function, usually connected to a particular score, is minimised, our goal, here, is to find whether the learning tasks found in the literature are also oriented to maximise the previously detected adequate scores. We conclude that they usually maximise the unweighted H\"older mean with $p = 1$ (a-mean). Finally, we provide bounds on the values of the studied performance scores which guarantee a classifier with a higher recall than the random classifier in each and every class.
[ "Jonathan Ortigosa-Hern\\'andez, I\\~naki Inza, Jose A. Lozano", "['Jonathan Ortigosa-Hernández' 'Iñaki Inza' 'Jose A. Lozano']" ]
cs.SE cs.LG cs.PL
null
1608.09000
null
null
http://arxiv.org/pdf/1608.09000v1
2016-08-31T19:06:06Z
2016-08-31T19:06:06Z
Learning Syntactic Program Transformations from Examples
IDEs, such as Visual Studio, automate common transformations, such as Rename and Extract Method refactorings. However, extending these catalogs of transformations is complex and time-consuming. A similar phenomenon appears in intelligent tutoring systems where instructors have to write cumbersome code transformations that describe "common faults" to fix similar student submissions to programming assignments. We present REFAZER, a technique for automatically generating program transformations. REFAZER builds on the observation that code edits performed by developers can be used as examples for learning transformations. Example edits may share the same structure but involve different variables and subexpressions, which must be generalized in a transformation at the right level of abstraction. To learn transformations, REFAZER leverages state-of-the-art programming-by-example methodology using the following key components: (a) a novel domain-specific language (DSL) for describing program transformations, (b) domain-specific deductive algorithms for synthesizing transformations in the DSL, and (c) functions for ranking the synthesized transformations. We instantiate and evaluate REFAZER in two domains. First, given examples of edits used by students to fix incorrect programming assignment submissions, we learn transformations that can fix other students' submissions with similar faults. In our evaluation conducted on 4 programming tasks performed by 720 students, our technique helped to fix incorrect submissions for 87% of the students. In the second domain, we use repetitive edits applied by developers to the same project to synthesize a program transformation that applies these edits to other locations in the code. In our evaluation conducted on 59 scenarios of repetitive edits taken from 3 C# open-source projects, REFAZER learns the intended program transformation in 83% of the cases.
[ "['Reudismam Rolim' 'Gustavo Soares' \"Loris D'Antoni\" 'Oleksandr Polozov'\n 'Sumit Gulwani' 'Rohit Gheyi' 'Ryo Suzuki' 'Bjoern Hartmann']" ]
cs.LG stat.ML
null
1608.09014
null
null
http://arxiv.org/pdf/1608.09014v1
2016-08-31T19:55:35Z
2016-08-31T19:55:35Z
A Tutorial on Online Supervised Learning with Applications to Node Classification in Social Networks
We revisit the elegant observation of T. Cover '65 which, perhaps, is not as well-known to the broader community as it should be. The first goal of the tutorial is to explain---through the prism of this elementary result---how to solve certain sequence prediction problems by modeling sets of solutions rather than the unknown data-generating mechanism. We extend Cover's observation in several directions and focus on computational aspects of the proposed algorithms. The applicability of the methods is illustrated on several examples, including node classification in a network. The second aim of this tutorial is to demonstrate the following phenomenon: it is possible to predict as well as a combinatorial "benchmark" for which we have a certain multiplicative approximation algorithm, even if the exact computation of the benchmark given all the data is NP-hard. The proposed prediction methods, therefore, circumvent some of the computational difficulties associated with finding the best model given the data. These difficulties arise rather quickly when one attempts to develop a probabilistic model for graph-based or other problems with a combinatorial structure.
[ "['Alexander Rakhlin' 'Karthik Sridharan']", "Alexander Rakhlin and Karthik Sridharan" ]
stat.ML cs.LG
null
1609.00074
null
null
http://arxiv.org/pdf/1609.00074v3
2018-02-21T03:45:19Z
2016-09-01T00:59:30Z
Neural Network Architecture Optimization through Submodularity and Supermodularity
The architecture of a deep learning model, including its depth and width, is a key factor influencing the model's performance, such as test accuracy and computation time. This paper solves two problems: given a computation time budget, choose an architecture to maximize accuracy, and given an accuracy requirement, choose an architecture to minimize computation time. We convert this architecture optimization into a subset selection problem. Exploiting the submodularity of accuracy and the supermodularity of computation time, we propose efficient greedy optimization algorithms. The experiments demonstrate our algorithm's ability to find more accurate or faster models. By analyzing how architectures evolve with a growing time budget, we discuss the relationships among accuracy, time and architecture, and give suggestions on neural network architecture design.
[ "Junqi Jin, Ziang Yan, Kun Fu, Nan Jiang, Changshui Zhang", "['Junqi Jin' 'Ziang Yan' 'Kun Fu' 'Nan Jiang' 'Changshui Zhang']" ]
cs.LG cs.AI cs.NE
null
1609.00085
null
null
http://arxiv.org/pdf/1609.00085v2
2017-01-22T09:52:06Z
2016-09-01T01:50:18Z
A Novel Progressive Learning Technique for Multi-class Classification
In this paper, a progressive learning technique for multi-class classification is proposed. This newly developed learning technique is independent of the number of class constraints, and it can learn new classes while still retaining the knowledge of previous classes. Whenever a new class (non-native to the knowledge learnt thus far) is encountered, the neural network structure gets remodeled automatically by adding new neurons and interconnections, and the parameters are calculated in such a way that the knowledge learnt thus far is retained. This technique is suitable for real-world applications where the number of classes is often unknown and online learning from real-time data is required. The consistency and the complexity of the progressive learning technique are analyzed. Several standard datasets are used to evaluate the performance of the developed technique. A comparative study shows that the developed technique is superior to existing approaches.
[ "['Rajasekar Venkatesan' 'Meng Joo Er']", "Rajasekar Venkatesan, Meng Joo Er" ]
cs.LG cs.AI cs.NE
10.1007/s12530-016-9162-8
1609.00086
null
null
http://arxiv.org/abs/1609.00086v1
2016-09-01T01:58:50Z
2016-09-01T01:58:50Z
A novel online multi-label classifier for high-speed streaming data applications
In this paper, a high-speed online neural network classifier based on extreme learning machines for multi-label classification is proposed. In multi-label classification, each input sample belongs to one or more target labels. Traditional binary and multi-class classification, where each sample belongs to exactly one target class, are special cases of multi-label classification. Multi-label classification problems are far more complex than binary and multi-class problems, as both the number of target labels and the labels themselves must be identified for each input sample. The proposed work exploits the high-speed nature of extreme learning machines to achieve real-time multi-label classification of streaming data. A new threshold-based online sequential learning algorithm is proposed for high-speed classification of streaming multi-label data. The proposed method is evaluated on six different datasets from application domains such as multimedia, text, and biology. The Hamming loss, accuracy, training time and testing time of the proposed technique are compared with those of nine state-of-the-art methods. Experimental studies show that the proposed technique outperforms the existing multi-label classifiers in terms of performance and speed.
[ "Rajasekar Venkatesan, Meng Joo Er, Mihika Dave, Mahardhika Pratama,\n Shiqian Wu", "['Rajasekar Venkatesan' 'Meng Joo Er' 'Mihika Dave' 'Mahardhika Pratama'\n 'Shiqian Wu']" ]
cs.AI cs.LG stat.ML
null
1609.00116
null
null
http://arxiv.org/pdf/1609.00116v1
2016-09-01T05:34:23Z
2016-09-01T05:34:23Z
Neural Coarse-Graining: Extracting slowly-varying latent degrees of freedom with neural networks
We present a loss function for neural networks that encompasses an idea of trivial versus non-trivial predictions, such that the network jointly determines its own prediction goals and learns to satisfy them. This permits the network to focus on solving the sub-problems most amenable to its abilities, while discarding 'distracting' elements that interfere with its learning. To do this, the network first transforms the raw data into a higher-level categorical representation, and then trains a predictor from that new time series to its future. To prevent the trivial solution of mapping the signal to zero, we introduce a measure of non-triviality via a contrast between the prediction error of the learned model and that of a naive model of the overall signal statistics. The transform can learn to discard uninformative and unpredictable components of the signal in favor of the features which are both highly predictive and highly predictable. This creates a coarse-grained model of the time-series dynamics, focusing on predicting the slowly varying latent parameters which control the statistics of the time series, rather than predicting the fast details directly. The result is a semi-supervised algorithm which is capable of extracting latent parameters, segmenting sections of time series with differing statistics, and building a higher-level representation of the underlying dynamics from unlabeled data.
[ "['Nicholas Guttenberg' 'Martin Biehl' 'Ryota Kanai']", "Nicholas Guttenberg, Martin Biehl, Ryota Kanai" ]
cs.LG
null
1609.00150
null
null
http://arxiv.org/pdf/1609.00150v3
2017-01-04T18:10:36Z
2016-09-01T09:00:19Z
Reward Augmented Maximum Likelihood for Neural Structured Prediction
A key problem in structured output prediction is direct optimization of the task reward function that matters for test evaluation. This paper presents a simple and computationally efficient approach to incorporate task reward into a maximum likelihood framework. By establishing a link between the log-likelihood and expected reward objectives, we show that an optimal regularized expected reward is achieved when the conditional distribution of the outputs given the inputs is proportional to their exponentiated scaled rewards. Accordingly, we present a framework to smooth the predictive probability of the outputs using their corresponding rewards. We optimize the conditional log-probability of augmented outputs that are sampled proportionally to their exponentiated scaled rewards. Experiments on neural sequence to sequence models for speech recognition and machine translation show notable improvements over a maximum likelihood baseline by using reward augmented maximum likelihood (RAML), where the rewards are defined as the negative edit distance between the outputs and the ground truth labels.
[ "['Mohammad Norouzi' 'Samy Bengio' 'Zhifeng Chen' 'Navdeep Jaitly'\n 'Mike Schuster' 'Yonghui Wu' 'Dale Schuurmans']" ]
cs.LG
10.1016/j.jss.2016.06.016
1609.00203
null
null
http://arxiv.org/abs/1609.00203v1
2016-09-01T12:06:20Z
2016-09-01T12:06:20Z
Employing traditional machine learning algorithms for big data streams analysis: the case of object trajectory prediction
In this paper, we model the trajectory of sea vessels and provide a service that predicts in near-real time the position of any given vessel in 4', 10', 20' and 40' time intervals. We explore the necessary tradeoffs between accuracy, performance and resource utilization, given the large volume and update rates of the input data. We start by building models based on well-established machine learning algorithms using static datasets and multi-scan training approaches, and identify the best candidate to be used in implementing a single-pass predictive approach under real-time constraints. The results are measured in terms of accuracy and performance and are compared against the baseline kinematic equations. Results show that it is possible to efficiently model the trajectory of multiple vessels using a single model, which is trained and evaluated using an adequately large, static dataset, thus achieving a significant gain in resource usage without compromising accuracy.
[ "Angelos Valsamis, Konstantinos Tserpes, Dimitrios Zissis, Dimosthenis\n Anagnostopoulos, Theodora Varvarigou", "['Angelos Valsamis' 'Konstantinos Tserpes' 'Dimitrios Zissis'\n 'Dimosthenis Anagnostopoulos' 'Theodora Varvarigou']" ]
cs.LG cs.AI cs.NE
null
1609.00222
null
null
http://arxiv.org/pdf/1609.00222v2
2017-02-26T09:44:34Z
2016-09-01T13:08:47Z
Ternary Neural Networks for Resource-Efficient AI Applications
The computation and storage requirements for Deep Neural Networks (DNNs) are usually high. This issue limits their deployability on ubiquitous computing devices such as smart phones, wearables and autonomous drones. In this paper, we propose ternary neural networks (TNNs) in order to make deep learning more resource-efficient. We train these TNNs using a teacher-student approach based on a novel, layer-wise greedy methodology. Thanks to our two-stage training procedure, the teacher network is still able to use state-of-the-art methods such as dropout and batch normalization to increase accuracy and reduce training time. Using only ternary weights and activations, the student ternary network learns to mimic the behavior of its teacher network without using any multiplication. Unlike its -1,1 binary counterparts, a ternary neural network inherently prunes the smaller weights by setting them to zero during training. This makes them sparser and thus more energy-efficient. We design a purpose-built hardware architecture for TNNs and implement it on FPGA and ASIC. We evaluate TNNs on several benchmark datasets and demonstrate up to 3.1x better energy efficiency with respect to the state of the art while also improving accuracy.
[ "['Hande Alemdar' 'Vincent Leroy' 'Adrien Prost-Boucle' 'Frédéric Pétrot']", "Hande Alemdar and Vincent Leroy and Adrien Prost-Boucle and\n Fr\\'ed\\'eric P\\'etrot" ]
cs.DS cs.DM cs.LG
null
1609.00265
null
null
http://arxiv.org/pdf/1609.00265v2
2016-09-14T18:53:51Z
2016-09-01T15:11:52Z
Testing $k$-Monotonicity
A Boolean $k$-monotone function defined over a finite poset domain ${\cal D}$ alternates between the values $0$ and $1$ at most $k$ times on any ascending chain in ${\cal D}$. Therefore, $k$-monotone functions are natural generalizations of the classical monotone functions, which are the $1$-monotone functions. Motivated by the recent interest in $k$-monotone functions in the context of circuit complexity and learning theory, and by the central role that monotonicity testing plays in the context of property testing, we initiate a systematic study of $k$-monotone functions, in the property testing model. In this model, the goal is to distinguish functions that are $k$-monotone (or are close to being $k$-monotone) from functions that are far from being $k$-monotone. Our results include the following: - We demonstrate a separation between testing $k$-monotonicity and testing monotonicity, on the hypercube domain $\{0,1\}^d$, for $k\geq 3$; - We demonstrate a separation between testing and learning on $\{0,1\}^d$, for $k=\omega(\log d)$: testing $k$-monotonicity can be performed with $2^{O(\sqrt d \cdot \log d\cdot \log{1/\varepsilon})}$ queries, while learning $k$-monotone functions requires $2^{\Omega(k\cdot \sqrt d\cdot{1/\varepsilon})}$ queries (Blais et al. (RANDOM 2015)). - We present a tolerant test for functions $f\colon[n]^d\to \{0,1\}$ with complexity independent of $n$, which makes progress on a problem left open by Berman et al. (STOC 2014). Our techniques exploit the testing-by-learning paradigm, use novel applications of Fourier analysis on the grid $[n]^d$, and draw connections to distribution testing techniques.
[ "Cl\\'ement L. Canonne, Elena Grigorescu, Siyao Guo, Akash Kumar, Karl\n Wimmer", "['Clément L. Canonne' 'Elena Grigorescu' 'Siyao Guo' 'Akash Kumar'\n 'Karl Wimmer']" ]
cs.LG
null
1609.00288
null
null
http://arxiv.org/pdf/1609.00288v2
2017-09-01T08:18:33Z
2016-09-01T15:49:43Z
A Unified View of Multi-Label Performance Measures
Multi-label classification deals with the problem where each instance is associated with multiple class labels. Because evaluation in multi-label classification is more complicated than in the single-label setting, a number of performance measures have been proposed. It has been observed that an algorithm usually performs differently on different measures. Therefore, it is important to understand which algorithms perform well on which measure(s) and why. In this paper, we propose a unified margin view to revisit eleven performance measures in multi-label classification. In particular, we define the label-wise margin and the instance-wise margin, and prove that by maximizing these margins, different corresponding performance measures are optimized. Based on the defined margins, a max-margin approach called LIMO is designed, and empirical results verify our theoretical findings.
[ "['Xi-Zhu Wu' 'Zhi-Hua Zhou']", "Xi-Zhu Wu and Zhi-Hua Zhou" ]
stat.ME cs.LG stat.ML
10.1080/01621459.2017.1395341
1609.00451
null
null
http://arxiv.org/abs/1609.00451v2
2018-12-22T20:39:01Z
2016-09-02T02:46:45Z
Least Ambiguous Set-Valued Classifiers with Bounded Error Levels
In most classification tasks there are observations that are ambiguous and therefore difficult to correctly label. Set-valued classifiers output sets of plausible labels rather than a single label, thereby giving a more appropriate and informative treatment to the labeling of ambiguous instances. We introduce a framework for multiclass set-valued classification, where the classifiers guarantee user-defined levels of coverage or confidence (the probability that the true label is contained in the set) while minimizing the ambiguity (the expected size of the output). We first derive oracle classifiers assuming the true distribution to be known. We show that the oracle classifiers are obtained from level sets of the functions that define the conditional probability of each class. Then we develop estimators with good asymptotic and finite sample properties. The proposed estimators build on existing single-label classifiers. The optimal classifier can sometimes output the empty set, but we provide two solutions to fix this issue that are suitable for various practical needs.
[ "['Mauricio Sadinle' 'Jing Lei' 'Larry Wasserman']", "Mauricio Sadinle, Jing Lei, Larry Wasserman" ]
cs.SE cs.LG stat.ML
null
1609.00489
null
null
http://arxiv.org/pdf/1609.00489v2
2016-09-06T06:18:04Z
2016-09-02T07:42:29Z
A deep learning model for estimating story points
Although there has been substantial research in software analytics for effort estimation in traditional software projects, little work has been done for estimation in agile projects, especially estimating user stories or issues. Story points are the most common unit of measure used for estimating the effort involved in implementing a user story or resolving an issue. In this paper, we offer for the \emph{first} time a comprehensive dataset for story points-based estimation that contains 23,313 issues from 16 open source projects. We also propose a prediction model for estimating story points based on a novel combination of two powerful deep learning architectures: long short-term memory and recurrent highway network. Our prediction system is \emph{end-to-end} trainable from raw input data to prediction outcomes without any manual feature engineering. An empirical evaluation demonstrates that our approach consistently outperforms three common effort estimation baselines and two alternatives in both Mean Absolute Error and the Standardized Accuracy.
[ "['Morakot Choetkiertikul' 'Hoa Khanh Dam' 'Truyen Tran' 'Trang Pham'\n 'Aditya Ghose' 'Tim Menzies']", "Morakot Choetkiertikul, Hoa Khanh Dam, Truyen Tran, Trang Pham, Aditya\n Ghose and Tim Menzies" ]
cs.LG
null
1609.00585
null
null
http://arxiv.org/pdf/1609.00585v2
2016-09-14T11:58:08Z
2016-09-02T13:20:06Z
Doubly stochastic large scale kernel learning with the empirical kernel map
With the rise of big data sets, the popularity of kernel methods declined and neural networks took over again. The main problem with kernel methods is that the kernel matrix grows quadratically with the number of data points. Most attempts to scale up kernel methods solve this problem by discarding data points or basis functions of some approximation of the kernel map. Here we present a simple yet effective alternative for scaling up kernel methods that takes into account the entire data set via doubly stochastic optimization of the empirical kernel map. The algorithm is straightforward to implement, in particular in parallel execution settings; it leverages the full power and versatility of classical kernel functions without the need to explicitly formulate a kernel map approximation. We provide empirical evidence that the algorithm works on large data sets.
[ "['Nikolaas Steenbergen' 'Sebastian Schelter' 'Felix Bießmann']", "Nikolaas Steenbergen, Sebastian Schelter, Felix Bie{\\ss}mann" ]
cs.CV cs.LG stat.ML
null
1609.00629
null
null
http://arxiv.org/pdf/1609.00629v1
2016-09-02T14:48:16Z
2016-09-02T14:48:16Z
SEBOOST - Boosting Stochastic Learning Using Subspace Optimization Techniques
We present SEBOOST, a technique for boosting the performance of existing stochastic optimization methods. SEBOOST applies a secondary optimization process in the subspace spanned by the last steps and descent directions. The method was inspired by the SESOP optimization method for large-scale problems, and has been adapted for the stochastic learning framework. It can be applied on top of any existing optimization method with no need to tweak the internal algorithm. We show that the method is able to boost the performance of different algorithms, and make them more robust to changes in their hyper-parameters. As the boosting steps of SEBOOST are applied between large sets of descent steps, the additional subspace optimization hardly increases the overall computational burden. We introduce two hyper-parameters that control the balance between the baseline method and the secondary optimization process. The method was evaluated on several deep learning tasks, demonstrating promising results.
[ "['Elad Richardson' 'Rom Herskovitz' 'Boris Ginsburg' 'Michael Zibulevsky']", "Elad Richardson, Rom Herskovitz, Boris Ginsburg, Michael Zibulevsky" ]
q-bio.BM cs.LG q-bio.QM stat.ML
10.1371/journal.pcbi.1005324
1609.00680
null
null
http://arxiv.org/abs/1609.00680v6
2016-11-27T22:32:50Z
2016-09-02T17:41:54Z
Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model
Recently, exciting progress has been made on protein contact prediction, but the predicted contacts for proteins without many sequence homologs are still of low quality and not very useful for de novo structure prediction. This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual networks. This deep neural network allows us to model very complex sequence-contact relationships as well as long-range inter-contact correlations. Our method greatly outperforms existing contact prediction methods and leads to much more accurate contact-assisted protein folding. Tested on three datasets of 579 proteins, the average top L long-range prediction accuracy obtained by our method, the representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints can yield correct folds (i.e., TMscore>0.6) for 203 test proteins, while folding using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 proteins, respectively. Further, our contact-assisted models have much better quality than template-based models. Using our predicted contacts as restraints, we can (ab initio) fold 208 of the 398 membrane proteins with TMscore>0.5. By contrast, when the training proteins of our method are used as templates, homology modeling can only do so for 10 of them. One interesting finding is that even if we do not train our prediction models with any membrane proteins, our method works very well on membrane protein prediction. Finally, in the recent blind CAMEO benchmark, our method successfully folded 5 test proteins with a novel fold.
[ "['Sheng Wang' 'Siqi Sun' 'Zhen Li' 'Renyu Zhang' 'Jinbo Xu']" ]
cs.LG physics.optics quant-ph
null
1609.00686
null
null
http://arxiv.org/pdf/1609.00686v1
2016-09-01T09:32:29Z
2016-09-01T09:32:29Z
Single photon in hierarchical architecture for physical reinforcement learning: Photon intelligence
Understanding and using natural processes for intelligent functionalities, referred to as natural intelligence, has recently attracted interest from a variety of fields, including post-silicon computing for artificial intelligence and decision making in the behavioural sciences. In a past study, we successfully used the wave-particle duality of single photons to solve the two-armed bandit problem, which constitutes the foundation of reinforcement learning and decision making. In this study, we propose and confirm a hierarchical architecture for single-photon-based reinforcement learning and decision making that verifies the scalability of the principle. Specifically, the four-armed bandit problem is solved given zero prior knowledge in a two-layer hierarchical architecture, where polarization is autonomously adapted in order to effect adequate decision making using single-photon measurements. In the hierarchical structure, the notion of layer-dependent decisions emerges. The optimal solutions in the coarse layer and in the fine layer, however, conflict with each other in some contradictive problems. We show that while what we call a tournament strategy resolves such contradictions, the probabilistic nature of single photons allows for the direct location of the optimal solution even for contradictive problems, hence manifesting the exploration ability of single photons. This study provides insights into photon intelligence in hierarchical architectures for future artificial intelligence as well as the potential of natural processes for intelligent functionalities.
[ "Makoto Naruse, Martin Berthel, Aur\\'elien Drezet, Serge Huant,\n Hirokazu Hori, Song-Ju Kim", "['Makoto Naruse' 'Martin Berthel' 'Aurélien Drezet' 'Serge Huant'\n 'Hirokazu Hori' 'Song-Ju Kim']" ]
cs.CL cs.LG stat.ML
null
1609.00718
null
null
http://arxiv.org/pdf/1609.00718v1
2016-08-31T15:43:27Z
2016-08-31T15:43:27Z
Convolutional Neural Networks for Text Categorization: Shallow Word-level vs. Deep Character-level
This paper reports the performances of shallow word-level convolutional neural networks (CNN), our earlier work (2015), on the eight datasets with relatively large training data that were used for testing the very deep character-level CNN in Conneau et al. (2016). Our findings are as follows. The shallow word-level CNNs achieve better error rates than the error rates reported in Conneau et al., though the results should be interpreted with some consideration due to the unique pre-processing of Conneau et al. The shallow word-level CNN uses more parameters and therefore requires more storage than the deep character-level CNN; however, the shallow word-level CNN computes much faster.
[ "Rie Johnson and Tong Zhang", "['Rie Johnson' 'Tong Zhang']" ]
cs.CL cs.LG
null
1609.00777
null
null
http://arxiv.org/pdf/1609.00777v3
2017-04-20T17:26:35Z
2016-09-03T01:02:51Z
Towards End-to-End Reinforcement Learning of Dialogue Agents for Information Access
This paper proposes KB-InfoBot -- a multi-turn dialogue agent which helps users search Knowledge Bases (KBs) without composing complicated queries. Such goal-oriented dialogue agents typically need to interact with an external database to access real-world knowledge. Previous systems achieved this by issuing a symbolic query to the KB to retrieve entries based on their attributes. However, such symbolic operations break the differentiability of the system and prevent end-to-end training of neural dialogue agents. In this paper, we address this limitation by replacing symbolic queries with an induced "soft" posterior distribution over the KB that indicates which entities the user is interested in. Integrating the soft retrieval process with a reinforcement learner leads to higher task success rate and reward in both simulations and against real users. We also present a fully neural end-to-end agent, trained entirely from user feedback, and discuss its application towards personalized dialogue agents. The source code is available at https://github.com/MiuLab/KB-InfoBot.
[ "Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen,\n Faisal Ahmed, Li Deng", "['Bhuwan Dhingra' 'Lihong Li' 'Xiujun Li' 'Jianfeng Gao' 'Yun-Nung Chen'\n 'Faisal Ahmed' 'Li Deng']" ]
cs.LG cs.GT
10.1109/TNNLS.2016.2593488
1609.00804
null
null
http://arxiv.org/abs/1609.00804v1
2016-09-03T09:30:51Z
2016-09-03T09:30:51Z
Randomized Prediction Games for Adversarial Machine Learning
In spam and malware detection, attackers exploit randomization to obfuscate malicious data and increase their chances of evading detection at test time; e.g., malware code is typically obfuscated using random strings or byte sequences to hide known exploits. Interestingly, randomization has also been proposed to improve security of learning algorithms against evasion attacks, as it results in hiding information about the classifier to the attacker. Recent work has proposed game-theoretical formulations to learn secure classifiers, by simulating different evasion attacks and modifying the classification function accordingly. However, both the classification function and the simulated data manipulations have been modeled in a deterministic manner, without accounting for any form of randomization. In this work, we overcome this limitation by proposing a randomized prediction game, namely, a non-cooperative game-theoretic formulation in which the classifier and the attacker make randomized strategy selections according to some probability distribution defined over the respective strategy set. We show that our approach allows one to improve the trade-off between attack detection and false alarms with respect to state-of-the-art secure classifiers, even against attacks that are different from those hypothesized during design, on application examples including handwritten digit recognition, spam and malware detection.
[ "Samuel Rota Bul\\`o and Battista Biggio and Ignazio Pillai and Marcello\n Pelillo and Fabio Roli", "['Samuel Rota Bulò' 'Battista Biggio' 'Ignazio Pillai' 'Marcello Pelillo'\n 'Fabio Roli']" ]
cs.LG cs.AI cs.NE
null
1609.00843
null
null
http://arxiv.org/pdf/1609.00843v1
2016-09-03T17:03:14Z
2016-09-03T17:03:14Z
An Online Universal Classifier for Binary, Multi-class and Multi-label Classification
Classification involves learning a mapping function that associates input samples with their corresponding target labels. There are two major categories of classification problems: single-label classification and multi-label classification. Traditional binary and multi-class classification are sub-categories of single-label classification. Several classifiers have been developed for binary, multi-class and multi-label classification problems, but no classifier in the literature is capable of performing all three types of classification. In this paper, a novel online universal classifier capable of performing all three types of classification is proposed. Being a high-speed online classifier, the proposed technique can be applied to streaming data applications. The performance of the developed classifier is evaluated using datasets from binary, multi-class and multi-label problems. The results obtained are compared with state-of-the-art techniques from each of the classification types.
[ "['Meng Joo Er' 'Rajasekar Venkatesan' 'Ning Wang']", "Meng Joo Er, Rajasekar Venkatesan, Ning Wang" ]
stat.ML cs.LG
null
1609.00845
null
null
http://arxiv.org/pdf/1609.00845v1
2016-09-03T17:30:15Z
2016-09-03T17:30:15Z
Graph-Based Active Learning: A New Look at Expected Error Minimization
In graph-based active learning, algorithms based on expected error minimization (EEM) have been popular and yield good empirical performance. The exact computation of EEM optimally balances exploration and exploitation. In practice, however, EEM-based algorithms employ various approximations due to the computational hardness of exact EEM. This can result in a lack of either exploration or exploitation, which can negatively impact the effectiveness of active learning. We propose a new algorithm TSA (Two-Step Approximation) that balances between exploration and exploitation efficiently while enjoying the same computational complexity as existing approximations. Finally, we empirically show the value of balancing between exploration and exploitation in both toy and real-world datasets where our method outperforms several state-of-the-art methods.
[ "Kwang-Sung Jun and Robert Nowak", "['Kwang-Sung Jun' 'Robert Nowak']" ]
cs.CV cs.LG stat.ML
null
1609.00878
null
null
http://arxiv.org/pdf/1609.00878v1
2016-09-04T00:12:04Z
2016-09-04T00:12:04Z
A Probabilistic Optimum-Path Forest Classifier for Binary Classification Problems
Probabilistic-driven classification techniques extend the role of traditional approaches that output only labels (usually integer numbers). Such techniques are more fruitful when dealing with problems where one is not interested in recognition/identification only, but also in monitoring the behavior of consumers and/or machines, for instance. Therefore, by means of probability estimates, one can make decisions that work better in a number of scenarios. In this paper, we propose a probabilistic-based Optimum-Path Forest (OPF) classifier to handle binary classification problems, and we show it can be more accurate than the naive OPF on a number of datasets. Beyond being more accurate, probabilistic OPF turns out to be another useful tool for the scientific community.
[ "Silas E. N. Fernandes, Danillo R. Pereira, Caio C. O. Ramos, Andre N.\n Souza and Joao P. Papa", "['Silas E. N. Fernandes' 'Danillo R. Pereira' 'Caio C. O. Ramos'\n 'Andre N. Souza' 'Joao P. Papa']" ]
cs.AI cs.LG stat.ML
null
1609.00904
null
null
http://arxiv.org/pdf/1609.00904v1
2016-09-04T08:45:26Z
2016-09-04T08:45:26Z
High Dimensional Human Guided Machine Learning
Have you ever looked at a machine learning classification model and thought, I could have made that? Well, that is what we test in this project, comparing XGBoost trained on human engineered features to training directly on data. The human engineered features do not outperform XGBoost trained directly on the data, but they are comparable. This project contributes a novel method for utilizing human created classification models on high dimensional datasets.
[ "Eric Holloway and Robert Marks II", "['Eric Holloway' 'Robert Marks II']" ]
stat.ML cs.LG q-bio.NC
null
1609.00921
null
null
http://arxiv.org/pdf/1609.00921v1
2016-09-04T12:01:50Z
2016-09-04T12:01:50Z
Decoding visual stimuli in human brain by using Anatomical Pattern Analysis on fMRI images
A universal unanswered question in neuroscience and machine learning is whether computers can decode the patterns of the human brain. Multi-Voxel Pattern Analysis (MVPA) is a critical tool for addressing this question. However, previous MVPA methods face two challenges: reducing the sparsity and noise of the extracted features, and improving prediction performance. To overcome these challenges, this paper proposes Anatomical Pattern Analysis (APA) for decoding visual stimuli in the human brain. This framework develops a novel anatomical feature extraction method and a new imbalance AdaBoost algorithm for binary classification. Further, it utilizes an Error-Correcting Output Codes (ECOC) method for multi-class prediction. APA can automatically detect active regions for each category of visual stimuli. Moreover, it enables us to combine homogeneous datasets for applying advanced classification. Experimental studies on 4 visual categories (words, consonants, objects and scrambled photos) demonstrate that the proposed approach achieves superior performance to state-of-the-art methods.
[ "Muhammad Yousefnezhad and Daoqiang Zhang", "['Muhammad Yousefnezhad' 'Daoqiang Zhang']" ]
cs.LG cs.AI cs.SY math.PR physics.data-an
null
1609.00932
null
null
http://arxiv.org/pdf/1609.00932v2
2017-06-20T20:30:33Z
2016-09-04T13:31:36Z
Spectral learning of dynamic systems from nonequilibrium data
Observable operator models (OOMs) and related models are one of the most important and powerful tools for modeling and analyzing stochastic systems. They exactly describe dynamics of finite-rank systems and can be efficiently and consistently estimated through spectral learning under the assumption of identically distributed data. In this paper, we investigate the properties of spectral learning without this assumption due to the requirements of analyzing large-time scale systems, and show that the equilibrium dynamics of a system can be extracted from nonequilibrium observation data by imposing an equilibrium constraint. In addition, we propose a binless extension of spectral learning for continuous data. In comparison with the other continuous-valued spectral algorithms, the binless algorithm can achieve consistent estimation of equilibrium dynamics with only linear complexity.
[ "['Hao Wu' 'Frank Noé']", "Hao Wu and Frank No\\'e" ]
stat.ML cs.LG math.OC
null
1609.00978
null
null
http://arxiv.org/pdf/1609.00978v1
2016-09-04T19:34:56Z
2016-09-04T19:34:56Z
Local Maxima in the Likelihood of Gaussian Mixture Models: Structural Results and Algorithmic Consequences
We provide two fundamental results on the population (infinite-sample) likelihood function of Gaussian mixture models with $M \geq 3$ components. Our first main result shows that the population likelihood function has bad local maxima even in the special case of equally-weighted mixtures of well-separated and spherical Gaussians. We prove that the log-likelihood value of these bad local maxima can be arbitrarily worse than that of any global optimum, thereby resolving an open question of Srebro (2007). Our second main result shows that the EM algorithm (or a first-order variant of it) with random initialization will converge to bad critical points with probability at least $1-e^{-\Omega(M)}$. We further establish that a first-order variant of EM will not converge to strict saddle points almost surely, indicating that the poor performance of the first-order method can be attributed to the existence of bad local maxima rather than bad saddle points. Overall, our results highlight the necessity of careful initialization when using the EM algorithm in practice, even when applied in highly favorable settings.
[ "Chi Jin, Yuchen Zhang, Sivaraman Balakrishnan, Martin J. Wainwright,\n Michael Jordan", "['Chi Jin' 'Yuchen Zhang' 'Sivaraman Balakrishnan' 'Martin J. Wainwright'\n 'Michael Jordan']" ]
cs.LG
null
1609.01000
null
null
http://arxiv.org/pdf/1609.01000v1
2016-09-04T23:57:43Z
2016-09-04T23:57:43Z
Convexified Convolutional Neural Networks
We describe the class of convexified convolutional neural networks (CCNNs), which capture the parameter sharing of convolutional neural networks in a convex manner. By representing the nonlinear convolutional filters as vectors in a reproducing kernel Hilbert space, the CNN parameters can be represented as a low-rank matrix, which can be relaxed to obtain a convex optimization problem. For learning two-layer convolutional neural networks, we prove that the generalization error obtained by a convexified CNN converges to that of the best possible CNN. For learning deeper networks, we train CCNNs in a layer-wise manner. Empirically, CCNNs achieve performance competitive with CNNs trained by backpropagation, SVMs, fully-connected neural networks, stacked denoising auto-encoders, and other baseline methods.
[ "['Yuchen Zhang' 'Percy Liang' 'Martin J. Wainwright']" ]
cs.LG cs.NE stat.ML
null
1609.01037
null
null
http://arxiv.org/pdf/1609.01037v2
2017-03-09T08:56:44Z
2016-09-05T06:47:10Z
Distribution-Specific Hardness of Learning Neural Networks
Although neural networks are routinely and successfully trained in practice using simple gradient-based methods, most existing theoretical results are negative, showing that learning such networks is difficult, in a worst-case sense over all data distributions. In this paper, we take a more nuanced view, and consider whether specific assumptions on the "niceness" of the input distribution, or "niceness" of the target function (e.g. in terms of smoothness, non-degeneracy, incoherence, random choice of parameters etc.), are sufficient to guarantee learnability using gradient-based methods. We provide evidence that neither class of assumptions alone is sufficient: On the one hand, for any member of a class of "nice" target functions, there are difficult input distributions. On the other hand, we identify a family of simple target functions, which are difficult to learn even if the input distribution is "nice". To prove our results, we develop some tools which may be of independent interest, such as extending Fourier-based hardness techniques developed in the context of statistical queries \cite{blum1994weakly}, from the Boolean cube to Euclidean space and to more general classes of functions.
[ "['Ohad Shamir']", "Ohad Shamir" ]
cs.RO cs.CV cs.LG
null
1609.01044
null
null
http://arxiv.org/pdf/1609.01044v1
2016-09-05T07:44:40Z
2016-09-05T07:44:40Z
Classifying and sorting cluttered piles of unknown objects with robots: a learning approach
We consider the problem of sorting a densely cluttered pile of unknown objects using a robot. This yet unsolved problem is relevant in the robotic waste sorting business. By extending previous active learning approaches to grasping, we show a system that learns the task autonomously. Instead of predicting just whether a grasp succeeds, we predict the classes of the objects that end up being picked and thrown onto the target conveyor. Segmenting and identifying objects from the uncluttered target conveyor, as opposed to the working area, is easier due to the added structure since the thrown objects will be the only ones present. Instead of trying to segment or otherwise understand the cluttered working area in any way, we simply allow the controller to learn a mapping from an RGBD image in the neighborhood of the grasp to a predicted result---all segmentation etc. in the working area is implicit in the learned function. The grasp selection operates in two stages: The first stage is hardcoded and outputs a distribution of possible grasps that sometimes succeed. The second stage uses a purely learned criterion to choose the grasp to make from the proposal distribution created by the first stage. In an experiment, the system quickly learned to make good pickups and predict correctly, in advance, which class of object it was going to pick up and was able to sort the objects from a densely cluttered pile by color.
[ "Janne V. Kujala, Tuomas J. Lukka, Harri Holopainen", "['Janne V. Kujala' 'Tuomas J. Lukka' 'Harri Holopainen']" ]
cs.LG stat.AP
null
1609.01176
null
null
http://arxiv.org/pdf/1609.01176v1
2016-09-05T14:21:04Z
2016-09-05T14:21:04Z
The Player Kernel: Learning Team Strengths Based on Implicit Player Contributions
In this work, we draw attention to a connection between skill-based models of game outcomes and Gaussian process classification models. The Gaussian process perspective enables a) a principled way of dealing with uncertainty and b) rich models, specified through kernel functions. Using this connection, we tackle the problem of predicting outcomes of football matches between national teams. We develop a player kernel that relates any two football matches through the players lined up on the field. This makes it possible to share knowledge gained from observing matches between clubs (available in large quantities) and matches between national teams (available only in limited quantities). We evaluate our approach on the Euro 2008, 2012 and 2016 final tournaments.
[ "Lucas Maystre, Victor Kristof, Antonio J. Gonz\\'alez Ferrer, Matthias\n Grossglauser", "['Lucas Maystre' 'Victor Kristof' 'Antonio J. González Ferrer'\n 'Matthias Grossglauser']" ]
cs.LG
null
1609.01203
null
null
http://arxiv.org/pdf/1609.01203v2
2017-05-18T14:15:30Z
2016-09-05T15:58:11Z
Live Orchestral Piano, a system for real-time orchestral music generation
This paper introduces the first system for performing automatic orchestration based on real-time piano input. We believe that it is possible to learn the underlying regularities existing between piano scores and their orchestrations by celebrated composers, in order to automatically perform this task on novel piano inputs. To that end, we investigate a class of statistical inference models called conditional Restricted Boltzmann Machines (cRBM). We introduce a specific evaluation framework for orchestral generation based on a prediction task in order to assess the quality of different models. As prediction and creation are two widely different endeavours, we discuss the potential biases of evaluating temporal generative models through prediction tasks and their impact on a creative system. Finally, we introduce an implementation of the proposed model called Live Orchestral Piano (LOP), which allows one to perform real-time projective orchestration of a MIDI keyboard input.
[ "L\\'eopold Crestel and Philippe Esling", "['Léopold Crestel' 'Philippe Esling']" ]
cs.LG stat.ML
null
1609.01226
null
null
http://arxiv.org/pdf/1609.01226v1
2016-09-05T17:27:22Z
2016-09-05T17:27:22Z
The Robustness of Estimator Composition
We formalize notions of robustness for composite estimators via the notion of a breakdown point. A composite estimator successively applies two (or more) estimators: on data decomposed into disjoint parts, it applies the first estimator on each part, then the second estimator on the outputs of the first estimator. And so on, if the composition is of more than two estimators. Informally, the breakdown point is the minimum fraction of data points which if significantly modified will also significantly modify the output of the estimator, so it is typically desirable to have a large breakdown point. Our main result shows that, under mild conditions on the individual estimators, the breakdown point of the composite estimator is the product of the breakdown points of the individual estimators. We also demonstrate several scenarios, ranging from regression to statistical testing, where this analysis is easy to apply, useful in understanding worst case robustness, and sheds powerful insights onto the associated data analysis.
[ "['Pingfan Tang' 'Jeff M. Phillips']", "Pingfan Tang, Jeff M. Phillips" ]
cs.LG cs.CV cs.NE stat.ML
null
1609.01360
null
null
http://arxiv.org/pdf/1609.01360v2
2016-11-22T16:00:01Z
2016-09-06T01:08:03Z
Evolutionary Synthesis of Deep Neural Networks via Synaptic Cluster-driven Genetic Encoding
There has been significant recent interest towards achieving highly efficient deep neural network architectures. A promising paradigm for achieving this is the concept of evolutionary deep intelligence, which attempts to mimic biological evolution processes to synthesize highly-efficient deep neural networks over successive generations. An important aspect of evolutionary deep intelligence is the genetic encoding scheme used to mimic heredity, which can have a significant impact on the quality of offspring deep neural networks. Motivated by the neurobiological phenomenon of synaptic clustering, we introduce a new genetic encoding scheme where synaptic probability is driven towards the formation of a highly sparse set of synaptic clusters. Experimental results for the task of image classification demonstrated that the synthesized offspring networks using this synaptic cluster-driven genetic encoding scheme can achieve state-of-the-art performance while having network architectures that are not only significantly more efficient (with a ~125-fold decrease in synapses for MNIST) compared to the original ancestor network, but also tailored for GPU-accelerated machine learning applications.
[ "['Mohammad Javad Shafiee' 'Alexander Wong']" ]
cs.SY cs.LG math.OC
null
1609.01387
null
null
http://arxiv.org/pdf/1609.01387v7
2017-12-14T00:23:58Z
2016-09-06T04:21:33Z
Learning Model Predictive Control for iterative tasks. A Data-Driven Control Framework
A Learning Model Predictive Controller (LMPC) for iterative tasks is presented. The controller is reference-free and is able to improve its performance by learning from previous iterations. A safe set and a terminal cost function are used in order to guarantee recursive feasibility and non-increasing performance at each iteration. The paper presents the control design approach, and shows how to recursively construct terminal set and terminal cost from state and input trajectories of previous iterations. Simulation results show the effectiveness of the proposed control logic.
[ "Ugo Rosolia and Francesco Borrelli", "['Ugo Rosolia' 'Francesco Borrelli']" ]
cs.LG cs.AI stat.ML
null
1609.01468
null
null
http://arxiv.org/pdf/1609.01468v1
2016-09-06T10:03:27Z
2016-09-06T10:03:27Z
Q-Learning with Basic Emotions
Q-learning is a simple and powerful tool for solving dynamic problems where the environment is unknown. It uses a balance of exploration and exploitation to find an optimal solution to the problem. In this paper, we propose using four basic emotions: joy, sadness, fear, and anger to influence a Q-learning agent. Simulations show that the proposed affective agent requires fewer steps to find the optimal path. We found that as the affective agent finds the optimal path, the ratio of exploration to exploitation gradually decreases, indicating a lower total step count in the long run.
[ "Wilfredo Badoy Jr. and Kardi Teknomo", "['Wilfredo Badoy Jr.' 'Kardi Teknomo']" ]
cs.SE cs.LG cs.LO
10.1007/978-3-319-48989-6_10
1609.01491
null
null
http://arxiv.org/abs/1609.01491v1
2016-09-06T11:28:40Z
2016-09-06T11:28:40Z
Towards Learning and Verifying Invariants of Cyber-Physical Systems by Code Mutation
Cyber-physical systems (CPS), which integrate algorithmic control with physical processes, often consist of physically distributed components communicating over a network. A malfunctioning or compromised component in such a CPS can lead to costly consequences, especially in the context of public infrastructure. In this short paper, we argue for the importance of constructing invariants (or models) of the physical behaviour exhibited by CPS, motivated by their applications to the control, monitoring, and attestation of components. To achieve this despite the inherent complexity of CPS, we propose a new technique for learning invariants that combines machine learning with ideas from mutation testing. We present a preliminary study on a water treatment system that suggests the efficacy of this approach, propose strategies for establishing confidence in the correctness of invariants, then summarise some research questions and the steps we are taking to investigate them.
[ "['Yuqi Chen' 'Christopher M. Poskitt' 'Jun Sun']", "Yuqi Chen, Christopher M. Poskitt, Jun Sun" ]
cs.LG
null
1609.01508
null
null
http://arxiv.org/pdf/1609.01508v1
2016-09-06T12:01:30Z
2016-09-06T12:01:30Z
Low-rank Bandits with Latent Mixtures
We study the task of maximizing rewards from recommending items (actions) to users sequentially interacting with a recommender system. Users are modeled as latent mixtures of $C$ representative user classes, where each class specifies a mean reward profile across actions. Both the user features (mixture distribution over classes) and the item features (mean reward vector per class) are unknown a priori. The user identity is the only contextual information available to the learner while interacting. This induces a low-rank structure on the matrix of expected rewards $r_{a,b}$ from recommending item $a$ to user $b$. The problem reduces to the well-known linear bandit when either user or item-side features are perfectly known. In the setting where each user, with its stochastically sampled taste profile, interacts only for a small number of sessions, we develop a bandit algorithm for the two-sided uncertainty. It combines the Robust Tensor Power Method of Anandkumar et al. (2014b) with the OFUL linear bandit algorithm of Abbasi-Yadkori et al. (2011). We provide the first rigorous regret analysis of this combination, showing that its regret after T user interactions is $\tilde O(C\sqrt{BT})$, with B the number of users. An ingredient towards this result is a novel robustness property of OFUL, of independent interest.
[ "['Aditya Gopalan' 'Odalric-Ambrym Maillard' 'Mohammadi Zaki']", "Aditya Gopalan, Odalric-Ambrym Maillard and Mohammadi Zaki" ]
cs.LG cs.CL
null
1609.01586
null
null
http://arxiv.org/pdf/1609.01586v1
2016-09-06T14:54:58Z
2016-09-06T14:54:58Z
A Bootstrap Machine Learning Approach to Identify Rare Disease Patients from Electronic Health Records
Rare diseases are very difficult to identify among the large number of other possible diagnoses. Better availability of patient data and improvements in machine learning algorithms empower us to tackle this problem computationally. In this paper, we target one such rare disease - cardiac amyloidosis. We aim to automate the process of identifying potential cardiac amyloidosis patients with the help of machine learning algorithms and also to learn the most predictive factors. With the help of experienced cardiologists, we prepared a gold standard with 73 positive (cardiac amyloidosis) and 197 negative instances. We achieved a high average cross-validation F1 score of 0.98 using an ensemble machine learning classifier. Some of the predictive variables were: age, and diagnoses of cardiac arrest, chest pain, congestive heart failure, hypertension, primary open-angle glaucoma, and shoulder arthritis. Further studies are needed to validate the accuracy of the system across an entire health system and its generalizability for other diseases.
[ "['Ravi Garg' 'Shu Dong' 'Sanjiv Shah' 'Siddhartha R Jonnalagadda']", "Ravi Garg, Shu Dong, Sanjiv Shah, Siddhartha R Jonnalagadda" ]
stat.ML cs.LG
null
1609.01596
null
null
http://arxiv.org/pdf/1609.01596v5
2016-12-21T16:36:40Z
2016-09-06T15:07:32Z
Direct Feedback Alignment Provides Learning in Deep Neural Networks
Artificial neural networks are most commonly trained with the back-propagation algorithm, where the gradient for learning is provided by back-propagating the error, layer by layer, from the output layer to the hidden layers. A recently discovered method called feedback alignment shows that the weights used for propagating the error backward don't have to be symmetric with the weights used for propagating the activation forward. In fact, random feedback weights work equally well, because the network learns how to make the feedback useful. In this work, the feedback alignment principle is used for training hidden layers more independently from the rest of the network, and from a zero initial condition. The error is propagated through fixed random feedback connections directly from the output layer to each hidden layer. This simple method is able to achieve zero training error even in convolutional networks and very deep networks, completely without error back-propagation. The method is a step towards biologically plausible machine learning because the error signal is almost local, and no symmetric or reciprocal weights are required. Experiments show that the test performance on MNIST and CIFAR is almost as good as that obtained with back-propagation for fully connected networks. If combined with dropout, the method achieves 1.45% error on the permutation invariant MNIST task.
[ "Arild N{\\o}kland", "['Arild Nøkland']" ]
cs.LG
null
1609.01704
null
null
http://arxiv.org/pdf/1609.01704v7
2017-03-09T05:22:52Z
2016-09-06T19:37:57Z
Hierarchical Multiscale Recurrent Neural Networks
Learning both hierarchical and temporal representations has been among the long-standing challenges of recurrent neural networks. Multiscale recurrent neural networks have been considered a promising approach to resolve this issue, yet there has been a lack of empirical evidence showing that this type of model can actually capture the temporal dependencies by discovering the latent hierarchical structure of the sequence. In this paper, we propose a novel multiscale approach, called the hierarchical multiscale recurrent neural network, which can capture the latent hierarchical structure in the sequence by encoding the temporal dependencies with different timescales using a novel update mechanism. We show some evidence that our proposed multiscale architecture can discover underlying hierarchical structure in the sequences without using explicit boundary information. We evaluate our proposed model on character-level language modelling and handwriting sequence modelling.
[ "['Junyoung Chung' 'Sungjin Ahn' 'Yoshua Bengio']", "Junyoung Chung and Sungjin Ahn and Yoshua Bengio" ]
cs.LG cs.CV
null
1609.01819
null
null
http://arxiv.org/pdf/1609.01819v1
2016-09-07T03:35:54Z
2016-09-07T03:35:54Z
Semantic Video Trailers
Query-based video summarization is the task of creating a brief visual trailer, which captures the parts of the video (or a collection of videos) that are most relevant to the user-issued query. In this paper, we propose an unsupervised label propagation approach for this task. Our approach effectively captures the multimodal semantics of queries and videos using state-of-the-art deep neural networks and creates a summary that is both semantically coherent and visually attractive. We describe the theoretical framework of our graph-based approach and empirically evaluate its effectiveness in creating relevant and attractive trailers. Finally, we showcase example video trailers generated by our system.
[ "Harrie Oosterhuis, Sujith Ravi, Michael Bendersky", "['Harrie Oosterhuis' 'Sujith Ravi' 'Michael Bendersky']" ]
cs.LG stat.ML
null
1609.01840
null
null
http://arxiv.org/pdf/1609.01840v1
2016-09-07T05:17:30Z
2016-09-07T05:17:30Z
Learning Boltzmann Machine with EM-like Method
We propose an expectation-maximization-like (EM-like) method to train Boltzmann machines with unconstrained connectivity. It adopts Monte Carlo approximation in the E-step, and in the M-step either replaces the intractable likelihood objective with efficiently computed objectives or directly approximates the gradient of the likelihood objective. The EM-like method is a modification of alternating minimization. We prove that the EM-like method is exactly the same as contrastive divergence in the restricted Boltzmann machine if its M-step adopts a particular approximation. We also propose a new measure to assess the performance of Boltzmann machines as generative models of data; its computational complexity is O(Rmn). Finally, we demonstrate the performance of the EM-like method using numerical experiments.
[ "['Jinmeng Song' 'Chun Yuan']" ]
stat.ML cs.LG
null
1609.01872
null
null
http://arxiv.org/pdf/1609.01872v1
2016-09-07T08:18:18Z
2016-09-07T08:18:18Z
Chaining Bounds for Empirical Risk Minimization
This paper extends the standard chaining technique to prove excess risk upper bounds for empirical risk minimization with random design settings even if the magnitude of the noise and the estimates is unbounded. The bound applies to many loss functions besides the squared loss, and scales only with the sub-Gaussian or subexponential parameters without further statistical assumptions such as the bounded kurtosis condition over the hypothesis class. A detailed analysis is provided for slope-constrained and penalized linear least squares regression in a sub-Gaussian setting, which often yields sample complexity bounds that are tight up to logarithmic factors.
[ "G\\'abor Bal\\'azs, Andr\\'as Gy\\\"orgy, Csaba Szepesv\\'ari", "['Gábor Balázs' 'András György' 'Csaba Szepesvári']" ]
cs.CV cs.DB cs.IT cs.LG math.IT
null
1609.01882
null
null
http://arxiv.org/pdf/1609.01882v2
2016-10-10T23:00:00Z
2016-09-07T08:45:19Z
Polysemous codes
This paper considers the problem of approximate nearest neighbor search in the compressed domain. We introduce polysemous codes, which offer both the distance estimation quality of product quantization and the efficient comparison of binary codes with Hamming distance. Their design is inspired by algorithms introduced in the 1990s to construct channel-optimized vector quantizers. At search time, this dual interpretation accelerates the search. Most of the indexed vectors are filtered out with Hamming distance, leaving only a fraction of the vectors to be ranked with an asymmetric distance estimator. The method is complementary with a coarse partitioning of the feature space such as the inverted multi-index. This is shown by our experiments performed on several public benchmarks such as the BIGANN dataset comprising one billion vectors, for which we report state-of-the-art results for query times below 0.3 milliseconds per core. Last but not least, our approach allows the approximate computation of the k-NN graph associated with the Yahoo Flickr Creative Commons 100M, described by CNN image descriptors, in less than 8 hours on a single machine.
[ "Matthijs Douze, Herv\\'e J\\'egou and Florent Perronnin", "['Matthijs Douze' 'Hervé Jégou' 'Florent Perronnin']" ]
cs.CV cs.LG
null
1609.01885
null
null
null
null
null
DAiSEE: Towards User Engagement Recognition in the Wild
We introduce DAiSEE, the first multi-label video classification dataset comprising 9068 video snippets captured from 112 users for recognizing the user affective states of boredom, confusion, engagement, and frustration in the wild. The dataset has four levels of labels, namely very low, low, high, and very high, for each of the affective states, which are crowd annotated and correlated with a gold standard annotation created using a team of expert psychologists. We have also established benchmark results on this dataset using state-of-the-art video classification methods that are available today. We believe that DAiSEE will provide the research community with challenges in feature extraction, context-based inference, and development of suitable machine learning methods for related tasks, thus providing a springboard for further research. The dataset is available for download at https://people.iith.ac.in/vineethnb/resources/daisee/index.html.
[ "Abhay Gupta, Arjun D'Cunha, Kamal Awasthi, Vineeth Balasubramanian" ]
cs.LG
null
1609.01977
null
null
http://arxiv.org/pdf/1609.01977v2
2018-09-11T18:31:32Z
2016-09-07T13:42:26Z
Doubly Stochastic Neighbor Embedding on Spheres
Stochastic Neighbor Embedding (SNE) methods minimize the divergence between the similarity matrix of a high-dimensional data set and its counterpart from a low-dimensional embedding, leading to widely applied tools for data visualization. Despite their popularity, the current SNE methods experience a crowding problem when the data include highly imbalanced similarities. This implies that the data points with higher total similarity tend to get crowded around the display center. To solve this problem, we introduce a fast normalization method and normalize the similarity matrix to be doubly stochastic such that all the data points have equal total similarities. Furthermore, we show empirically and theoretically that the double stochasticity constraint often leads to embeddings that are approximately spherical. This suggests replacing a flat space with spheres as the embedding space. The spherical embedding eliminates the discrepancy between the center and the periphery in visualization, which efficiently resolves the crowding problem. We compared the proposed method (DOSNES) with the state-of-the-art SNE method on three real-world datasets and the results clearly indicate that our method is more favorable in terms of visualization quality.
[ "Yao Lu, Jukka Corander, Zhirong Yang", "['Yao Lu' 'Jukka Corander' 'Zhirong Yang']" ]
cs.RO cs.CV cs.LG
null
1609.01984
null
null
http://arxiv.org/pdf/1609.01984v1
2016-09-07T13:53:26Z
2016-09-07T13:53:26Z
Human Body Orientation Estimation using Convolutional Neural Network
Personal robots are expected to interact with the user by recognizing the user's face. However, in most service robot applications, the user needs to move himself/herself to allow the robot to see him/her face to face. To overcome such limitations, a method for estimating human body orientation is required. Previous studies used various components such as feature extractors and classification models to classify the orientation, which resulted in low performance. For a more robust and accurate approach, we propose a lightweight convolutional neural network, an end-to-end system, for estimating human body orientation. Our body orientation estimation model achieved 81.58% and 94% accuracy on the benchmark dataset and our own dataset, respectively. The proposed method can be used in a wide range of service robot applications which depend on the ability to estimate human body orientation. To show its usefulness in service robot applications, we designed a simple robot application which allows the robot to move towards the user's frontal plane. With this, we demonstrated an improved face detection rate.
[ "Jinyoung Choi, Beom-Jin Lee, and Byoung-Tak Zhang", "['Jinyoung Choi' 'Beom-Jin Lee' 'Byoung-Tak Zhang']" ]
stat.ML cs.LG
null
1609.02020
null
null
http://arxiv.org/pdf/1609.02020v2
2016-09-08T07:26:00Z
2016-09-07T15:39:24Z
Random matrices meet machine learning: a large dimensional analysis of LS-SVM
This article proposes a performance analysis of kernel least squares support vector machines (LS-SVMs) based on a random matrix approach, in the regime where both the dimension of data $p$ and their number $n$ grow large at the same rate. Under a two-class Gaussian mixture model for the input data, we prove that the LS-SVM decision function is asymptotically normal with means and covariances shown to depend explicitly on the derivatives of the kernel function. This provides improved understanding along with new insights into the internal workings of SVM-type methods for large datasets.
[ "['Zhenyu Liao' 'Romain Couillet']" ]
cs.CV cs.AI cs.LG
null
1609.02036
null
null
http://arxiv.org/pdf/1609.02036v1
2016-09-07T15:56:36Z
2016-09-07T15:56:36Z
Deep Markov Random Field for Image Modeling
Markov Random Fields (MRFs), a formulation widely used in generative image modeling, have long been plagued by a lack of expressive power. This issue is primarily due to the fact that conventional MRF formulations tend to use simplistic factors to capture local patterns. In this paper, we move beyond such limitations, and propose a novel MRF model that uses fully-connected neurons to express the complex interactions among pixels. Through theoretical analysis, we reveal an inherent connection between this model and recurrent neural networks, and thereon derive an approximated feed-forward network that couples multiple RNNs along opposite directions. This formulation combines the expressive power of deep neural networks and the cyclic dependency structure of MRFs in a unified model, bringing the modeling capability to a new level. The feed-forward approximation also allows it to be efficiently learned from data. Experimental results on a variety of low-level vision tasks show notable improvement over the state of the art.
[ "Zhirong Wu, Dahua Lin, Xiaoou Tang", "['Zhirong Wu' 'Dahua Lin' 'Xiaoou Tang']" ]
cs.LG cs.CL cs.SD
null
1609.02082
null
null
http://arxiv.org/pdf/1609.02082v1
2016-08-04T10:11:24Z
2016-08-04T10:11:24Z
An improved uncertainty decoding scheme with weighted samples for DNN-HMM hybrid systems
In this paper, we advance a recently-proposed uncertainty decoding scheme for DNN-HMM (deep neural network - hidden Markov model) hybrid systems. This numerical sampling concept averages DNN outputs produced by a finite set of feature samples (drawn from a probabilistic distortion model) to approximate the posterior likelihoods of the context-dependent HMM states. As main innovation, we propose a weighted DNN-output averaging based on a minimum classification error criterion and apply it to a probabilistic distortion model for spatial diffuseness features. The experimental evaluation is performed on the 8-channel REVERB Challenge task using a DNN-HMM hybrid system with multichannel front-end signal enhancement. We show that the recognition accuracy of the DNN-HMM hybrid system improves by incorporating uncertainty decoding based on random sampling and that the proposed weighted DNN-output averaging further reduces the word error rate scores.
[ "Christian Huemmer, Ram\\'on Fern\\'andez Astudillo and Walter Kellermann", "['Christian Huemmer' 'Ramón Fernández Astudillo' 'Walter Kellermann']" ]
stat.ML cs.CL cs.LG
10.1145/2959100.2959180
1609.02116
null
null
http://arxiv.org/abs/1609.02116v2
2016-09-09T19:27:19Z
2016-09-07T19:05:42Z
Ask the GRU: Multi-Task Learning for Deep Text Recommendations
In a variety of application domains the content to be recommended to users is associated with text. This includes research papers, movies with associated plot summaries, news articles, blog posts, etc. Recommendation approaches based on latent factor models can be extended naturally to leverage text by employing an explicit mapping from text to factors. This enables recommendations for new, unseen content, and may generalize better, since the factors for all items are produced by a compactly-parametrized model. Previous work has used topic models or averages of word embeddings for this mapping. In this paper we present a method leveraging deep recurrent neural networks to encode the text sequence into a latent vector, specifically gated recurrent units (GRUs) trained end-to-end on the collaborative filtering task. For the task of scientific paper recommendation, this yields models with significantly higher accuracy. In cold-start scenarios, we beat the previous state-of-the-art, all of which ignore word order. Performance is further improved by multi-task learning, where the text encoder network is trained for a combination of content recommendation and item metadata prediction. This regularizes the collaborative filtering model, ameliorating the problem of sparsity of the observed rating matrix.
[ "Trapit Bansal, David Belanger, Andrew McCallum", "['Trapit Bansal' 'David Belanger' 'Andrew McCallum']" ]
cs.CV cs.AI cs.LG
null
1609.02132
null
null
http://arxiv.org/pdf/1609.02132v1
2016-09-07T19:35:30Z
2016-09-07T19:35:30Z
UberNet: Training a `Universal' Convolutional Neural Network for Low-, Mid-, and High-Level Vision using Diverse Datasets and Limited Memory
In this work we introduce a convolutional neural network (CNN) that jointly handles low-, mid-, and high-level vision tasks in a unified architecture that is trained end-to-end. Such a universal network can act like a `swiss knife' for vision tasks; we call this architecture an UberNet to indicate its overarching nature. We address two main technical challenges that emerge when broadening the range of tasks handled by a single CNN: (i) training a deep architecture while relying on diverse training sets and (ii) training many (potentially unlimited) tasks with a limited memory budget. Properly addressing these two problems allows us to train accurate predictors for a host of tasks, without compromising accuracy. Through these advances we train in an end-to-end manner a CNN that simultaneously addresses (a) boundary detection (b) normal estimation (c) saliency estimation (d) semantic segmentation (e) human part segmentation (f) semantic boundary detection, (g) region proposal generation and object detection. We obtain competitive performance while jointly addressing all of these tasks in 0.7 seconds per frame on a single GPU. A demonstration of this system can be found at http://cvn.ecp.fr/ubernet/.
[ "['Iasonas Kokkinos']", "Iasonas Kokkinos" ]
stat.ML cs.LG
null
1609.02200
null
null
http://arxiv.org/pdf/1609.02200v2
2017-04-22T01:23:06Z
2016-09-07T21:41:32Z
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
[ "['Jason Tyler Rolfe']" ]
cs.IT cs.LG math.IT stat.ML
null
1609.02208
null
null
http://arxiv.org/pdf/1609.02208v1
2016-09-07T22:11:39Z
2016-09-07T22:11:39Z
Breaking the Bandwidth Barrier: Geometrical Adaptive Entropy Estimation
Estimators of information theoretic measures such as entropy and mutual information are a basic workhorse for many downstream applications in modern data science. State of the art approaches have been either geometric (nearest neighbor (NN) based) or kernel based (with a globally chosen bandwidth). In this paper, we combine both these approaches to design new estimators of entropy and mutual information that outperform state of the art methods. Our estimator uses local bandwidth choices of $k$-NN distances with a finite $k$, independent of the sample size. Such a local and data dependent choice improves performance in practice, but the bandwidth is vanishing at a fast rate, leading to a non-vanishing bias. We show that the asymptotic bias of the proposed estimator is universal; it is independent of the underlying distribution. Hence, it can be pre-computed and subtracted from the estimate. As a byproduct, we obtain a unified way of obtaining both kernel and NN estimators. The corresponding theoretical contribution relating the asymptotic geometry of nearest neighbors to order statistics is of independent mathematical interest.
[ "['Weihao Gao' 'Sewoong Oh' 'Pramod Viswanath']", "Weihao Gao and Sewoong Oh and Pramod Viswanath" ]