categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
cs.LG stat.ML
null
1611.05162
null
null
http://arxiv.org/pdf/1611.05162v4
2017-11-23T09:34:28Z
2016-11-16T06:34:41Z
Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee
We introduce and analyze a new technique for model reduction for deep neural networks. While large networks are theoretically capable of learning arbitrarily complex models, overfitting and model redundancy negatively affect the prediction accuracy and model variance. Our Net-Trim algorithm prunes (sparsifies) a trained network layer-wise, removing connections at each layer by solving a convex optimization program. This program seeks a sparse set of weights at each layer that keeps the layer inputs and outputs consistent with the originally trained model. The algorithms and associated analysis are applicable to neural networks operating with the rectified linear unit (ReLU) as the nonlinear activation. We present both parallel and cascade versions of the algorithm. While the latter can achieve slightly simpler models with the same generalization performance, the former can be computed in a distributed manner. In both cases, Net-Trim significantly reduces the number of connections in the network, while also providing enough regularization to slightly reduce the generalization error. We also provide a mathematical analysis of the consistency between the initial network and the retrained model. To analyze the model sample complexity, we derive the general sufficient conditions for the recovery of a sparse transform matrix. For a single layer taking independent Gaussian random vectors of length $N$ as inputs, we show that if the network response can be described using a maximum number of $s$ non-zero weights per node, these weights can be learned from $\mathcal{O}(s\log N)$ samples.
[ "Alireza Aghasi, Afshin Abdi, Nam Nguyen, Justin Romberg", "['Alireza Aghasi' 'Afshin Abdi' 'Nam Nguyen' 'Justin Romberg']" ]
cs.LG stat.ML
null
1611.05181
null
null
http://arxiv.org/pdf/1611.05181v3
2017-07-06T03:26:33Z
2016-11-16T08:11:14Z
Graph Learning from Data under Structural and Laplacian Constraints
Graphs are fundamental mathematical structures used in various fields to represent data, signals and processes. In this paper, we propose a novel framework for learning/estimating graphs from data. The proposed framework includes (i) formulation of various graph learning problems, (ii) their probabilistic interpretations and (iii) associated algorithms. Specifically, graph learning problems are posed as estimation of graph Laplacian matrices from some observed data under given structural constraints (e.g., graph connectivity and sparsity level). From a probabilistic perspective, the problems of interest correspond to maximum a posteriori (MAP) parameter estimation of Gaussian-Markov random field (GMRF) models, whose precision (inverse covariance) is a graph Laplacian matrix. For the proposed graph learning problems, specialized algorithms are developed by incorporating the graph Laplacian and structural constraints. The experimental results demonstrate that the proposed algorithms outperform the current state-of-the-art methods in terms of accuracy and computational efficiency.
[ "Hilmi E. Egilmez, Eduardo Pavez, Antonio Ortega", "['Hilmi E. Egilmez' 'Eduardo Pavez' 'Antonio Ortega']" ]
cs.LG
null
1611.05193
null
null
http://arxiv.org/pdf/1611.05193v3
2017-06-14T13:26:38Z
2016-11-16T09:25:17Z
Bayesian optimization of hyper-parameters in reservoir computing
We describe a method for searching for the optimal hyper-parameters in reservoir computing, which consists of a Gaussian process with Bayesian optimization. It provides an alternative to other frequently used optimization methods such as grid, random, or manual search. In addition to a set of optimal hyper-parameters, the method also provides a probability distribution of the cost function as a function of the hyper-parameters. We apply this method to two types of reservoirs: nonlinear delay nodes and echo state networks. It shows excellent performance on all considered benchmarks, either matching or significantly surpassing results found in the literature. In general, the algorithm achieves optimal results in fewer iterations when compared to other optimization methods. We have optimized up to six hyper-parameters simultaneously, which would have been infeasible using, e.g., grid search. Due to its automated nature, this method significantly reduces the need for expert knowledge when optimizing the hyper-parameters in reservoir computing. Existing software libraries for Bayesian optimization, such as Spearmint, make the implementation of the algorithm straightforward. A fork of the Spearmint framework along with a tutorial on how to use it in practice is available at https://bitbucket.org/uhasseltmachinelearning/spearmint/
[ "['Jan Yperman' 'Thijs Becker']", "Jan Yperman, Thijs Becker" ]
stat.ML cs.CV cs.LG
null
1611.05209
null
null
http://arxiv.org/pdf/1611.05209v1
2016-11-16T10:20:10Z
2016-11-16T10:20:10Z
Deep Variational Inference Without Pixel-Wise Reconstruction
Variational autoencoders (VAEs), which are built upon deep neural networks, have emerged as popular generative models in computer vision. Most of the work towards improving variational autoencoders has focused mainly on making the approximations to the posterior flexible and accurate, leading to tremendous progress. However, there have been limited efforts to replace pixel-wise reconstruction, which has known shortcomings. In this work, we use real-valued non-volume preserving transformations (real NVP) to exactly compute the conditional likelihood of the data given the latent distribution. We show that a simple VAE with this form of reconstruction is competitive with complicated VAE structures, on image modeling tasks. As part of our model, we develop powerful conditional coupling layers that enable real NVP to learn with fewer intermediate layers.
[ "Siddharth Agrawal, Ambedkar Dukkipati", "['Siddharth Agrawal' 'Ambedkar Dukkipati']" ]
cs.LG cs.SY
null
1611.05317
null
null
http://arxiv.org/pdf/1611.05317v2
2017-04-17T23:29:02Z
2016-11-15T04:20:08Z
A Learning Scheme for Microgrid Islanding and Reconnection
This paper introduces a potential learning scheme that can dynamically predict the stability of the reconnection of sub-networks to a main grid. As the future electrical power systems tend towards smarter and greener technology, the deployment of self-sufficient networks, or microgrids, becomes more likely. Microgrids may operate on their own or synchronized with the main grid, thus control methods need to take into account islanding and reconnecting of said networks. The ability to optimally and safely reconnect a portion of the grid is not well understood and, as of now, limited to raw synchronization between interconnection points. A support vector machine (SVM) leveraging real-time data from phasor measurement units (PMUs) is proposed to predict in real time whether the reconnection of a sub-network to the main grid would lead to stability or instability. A dynamics simulator fed with pre-acquired system parameters is used to create training data for the SVM in various operating states. The classifier was tested on a variety of cases and operating points to ensure diversity. Accuracies of approximately 85% were observed throughout most conditions when making dynamic predictions of a given network.
[ "Carter Lassetter, Eduardo Cotilla-Sanchez, Jinsub Kim", "['Carter Lassetter' 'Eduardo Cotilla-Sanchez' 'Jinsub Kim']" ]
cs.LG
null
1611.05340
null
null
http://arxiv.org/pdf/1611.05340v2
2016-11-17T02:48:04Z
2016-11-16T16:01:48Z
Approximating Wisdom of Crowds using K-RBMs
An important way to make large training sets is to gather noisy labels from crowds of non-experts. We propose a method to aggregate noisy labels collected from a crowd of workers or annotators. Eliciting labels is important in tasks such as judging web search quality and rating products. Our method assumes that labels are generated by a probability distribution over items and labels. We formulate the method by drawing parallels between Gaussian Mixture Models (GMMs) and Restricted Boltzmann Machines (RBMs) and show that the problem of vote aggregation can be viewed as one of clustering. We use K-RBMs to perform clustering. We finally show some empirical evaluations over real datasets.
[ "['Abhay Gupta']" ]
cs.CV cs.LG
null
1611.05369
null
null
http://arxiv.org/pdf/1611.05369v1
2016-11-16T17:04:35Z
2016-11-16T17:04:35Z
Fast On-Line Kernel Density Estimation for Active Object Localization
A major goal of computer vision is to enable computers to interpret visual situations---abstract concepts (e.g., "a person walking a dog," "a crowd waiting for a bus," "a picnic") whose image instantiations are linked more by their common spatial and semantic structure than by low-level visual similarity. In this paper, we propose a novel method for prior learning and active object localization for this kind of knowledge-driven search in static images. In our system, prior situation knowledge is captured by a set of flexible, kernel-based density estimations---a situation model---that represent the expected spatial structure of the given situation. These estimations are efficiently updated by information gained as the system searches for relevant objects, allowing the system to use context as it is discovered to narrow the search. More specifically, at any given time in a run on a test image, our system uses image features plus contextual information it has discovered to identify a small subset of training images---an importance cluster---that is deemed most similar to the given test image, given the context. This subset is used to generate an updated situation model in an on-line fashion, using an efficient multipole expansion technique. As a proof of concept, we apply our algorithm to a highly varied and challenging dataset consisting of instances of a "dog-walking" situation. Our results support the hypothesis that dynamically-rendered, context-based probability models can support efficient object localization in visual situations. Moreover, our approach is general enough to be applied to diverse machine learning paradigms requiring interpretable, probabilistic representations generated from partially observed data.
[ "Anthony D. Rhodes, Max H. Quinn, and Melanie Mitchell", "['Anthony D. Rhodes' 'Max H. Quinn' 'Melanie Mitchell']" ]
cs.SI cs.LG
null
1611.05373
null
null
http://arxiv.org/pdf/1611.05373v1
2016-11-16T17:14:06Z
2016-11-16T17:14:06Z
DeepCas: an End-to-end Predictor of Information Cascades
Information cascades, effectively facilitated by most social network platforms, are recognized as a major factor in almost every social success and disaster in these networks. Can cascades be predicted? While many believe that they are inherently unpredictable, recent work has shown that some key properties of information cascades, such as size, growth, and shape, can be predicted by a machine learning algorithm that combines many features. These predictors all depend on a bag of hand-crafted features to represent the cascade network and the global network structure. Such features, always carefully and sometimes mysteriously designed, are not easy to extend or to generalize to a different platform or domain. Inspired by the recent successes of deep learning in multiple data mining tasks, we investigate whether an end-to-end deep learning approach could effectively predict the future size of cascades. Such a method automatically learns the representation of individual cascade graphs in the context of the global network structure, without hand-crafted features and heuristics. We find that node embeddings fall short of predictive power, and it is critical to learn the representation of a cascade graph as a whole. We present algorithms that learn the representation of cascade graphs in an end-to-end manner, which significantly improve the performance of cascade prediction over strong baselines that include feature based methods, node embedding methods, and graph kernel methods. Our results also provide interesting implications for cascade prediction in general.
[ "Cheng Li, Jiaqi Ma, Xiaoxiao Guo, and Qiaozhu Mei", "['Cheng Li' 'Jiaqi Ma' 'Xiaoxiao Guo' 'Qiaozhu Mei']" ]
cs.CV cs.LG
null
1611.05377
null
null
http://arxiv.org/pdf/1611.05377v1
2016-11-16T17:31:44Z
2016-11-16T17:31:44Z
Fully-adaptive Feature Sharing in Multi-Task Networks with Applications in Person Attribute Classification
Multi-task learning aims to improve generalization performance of multiple prediction tasks by appropriately sharing relevant information across them. In the context of deep neural networks, this idea is often realized by hand-designed network architectures with layers that are shared across tasks and branches that encode task-specific features. However, the space of possible multi-task deep architectures is combinatorially large and often the final architecture is arrived at by manual exploration of this space, subject to the designer's bias, which can be both error-prone and tedious. In this work, we propose a principled approach for designing compact multi-task deep learning architectures. Our approach starts with a thin network and dynamically widens it in a greedy manner during training using a novel criterion that promotes grouping of similar tasks together. Our extensive evaluation on person attributes classification tasks involving facial and clothing attributes suggests that the models produced by the proposed method are fast, compact and can closely match or exceed the state-of-the-art accuracy from strong baselines by much more expensive models.
[ "['Yongxi Lu' 'Abhishek Kumar' 'Shuangfei Zhai' 'Yu Cheng' 'Tara Javidi'\n 'Rogerio Feris']", "Yongxi Lu, Abhishek Kumar, Shuangfei Zhai, Yu Cheng, Tara Javidi,\n Rogerio Feris" ]
cs.LG stat.ML
null
1611.05378
null
null
http://arxiv.org/pdf/1611.05378v1
2016-11-16T17:32:09Z
2016-11-16T17:32:09Z
Spectral Convolution Networks
Previous research has shown that computation of convolution in the frequency domain provides a significant speedup versus traditional convolution network implementations. However, this performance increase comes at the expense of repeatedly computing the transform and its inverse in order to apply other network operations such as activation, pooling, and dropout. We show, mathematically, how convolution and activation can both be implemented in the frequency domain using either the Fourier or Laplace transformation. The main contributions are a description of spectral activation under the Fourier transform and a further description of an efficient algorithm for computing both convolution and activation under the Laplace transform. By computing both the convolution and activation functions in the frequency domain, we can reduce the number of transforms required, as well as reducing overall complexity. Our description of a spectral activation function, together with existing spectral analogs of other network functions may then be used to compose a fully spectral implementation of a convolution network.
[ "Maria Francesca and Arthur Hughes and David Gregg", "['Maria Francesca' 'Arthur Hughes' 'David Gregg']" ]
cs.LG cs.NE
null
1611.05397
null
null
http://arxiv.org/pdf/1611.05397v1
2016-11-16T18:21:29Z
2016-11-16T18:21:29Z
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880% expert human performance, and a challenging suite of first-person, three-dimensional Labyrinth tasks, leading to a mean speedup in learning of 10$\times$ and averaging 87% expert human performance on Labyrinth.
[ "['Max Jaderberg' 'Volodymyr Mnih' 'Wojciech Marian Czarnecki' 'Tom Schaul'\n 'Joel Z Leibo' 'David Silver' 'Koray Kavukcuoglu']", "Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul,\n Joel Z Leibo, David Silver, Koray Kavukcuoglu" ]
cs.LG stat.ML
null
1611.05402
null
null
http://arxiv.org/pdf/1611.05402v3
2017-06-19T14:36:00Z
2016-11-16T18:45:09Z
The ZipML Framework for Training Models with End-to-End Low Precision: The Cans, the Cannots, and a Little Bit of Deep Learning
Recently there has been significant interest in training machine-learning models at low precision: by reducing precision, one can reduce computation and communication by one order of magnitude. We examine training at reduced precision, both from a theoretical and practical perspective, and ask: is it possible to train models at end-to-end low precision with provable guarantees? Can this lead to consistent order-of-magnitude speedups? We present a framework called ZipML to answer these questions. For linear models, the answer is yes. We develop a simple framework based on one simple but novel strategy called double sampling. Our framework is able to execute training at low precision with no bias, guaranteeing convergence, whereas naive quantization would introduce significant bias. We validate our framework across a range of applications, and show that it enables an FPGA prototype that is up to 6.5x faster than an implementation using full 32-bit precision. We further develop a variance-optimal stochastic quantization strategy and show that it can make a significant difference in a variety of settings. When applied to linear models together with double sampling, we save up to another 1.7x in data movement compared with uniform quantization. When training deep networks with quantized models, we achieve higher accuracy than the state-of-the-art XNOR-Net. Finally, we extend our framework through approximation to non-linear models, such as SVM. We show that, although using low-precision data induces bias, we can appropriately bound and control the bias. We find in practice 8-bit precision is often sufficient to converge to the correct solution. Interestingly, however, in practice we notice that our framework does not always outperform the naive rounding approach. We discuss this negative result in detail.
[ "['Hantian Zhang' 'Jerry Li' 'Kaan Kara' 'Dan Alistarh' 'Ji Liu' 'Ce Zhang']", "Hantian Zhang, Jerry Li, Kaan Kara, Dan Alistarh, Ji Liu, Ce Zhang" ]
cs.LG cs.AI cs.SD
10.23919/APSIPA.2018.8659792
1611.05416
null
null
http://arxiv.org/abs/1611.05416v2
2016-12-07T20:51:36Z
2016-11-16T19:42:40Z
Composing Music with Grammar Argumented Neural Networks and Note-Level Encoding
Creating aesthetically pleasing pieces of art, including music, has been a long-term goal for artificial intelligence research. Despite recent successes of long-short term memory (LSTM) recurrent neural networks (RNNs) in sequential learning, LSTM neural networks have not, by themselves, been able to generate natural-sounding music conforming to music theory. To transcend this inadequacy, we put forward a novel method for music composition that combines the LSTM with grammars motivated by music theory. The main tenets of music theory are encoded as grammar argumented (GA) filters on the training data, such that the machine can be trained to generate music inheriting the naturalness of human-composed pieces from the original dataset while adhering to the rules of music theory. Unlike previous approaches, pitches and durations are encoded as one semantic entity, which we refer to as note-level encoding. This allows easy implementation of music theory grammars, as well as closer emulation of the thinking pattern of a musician. Although the GA rules are applied to the training data and never directly to the LSTM music generation, our machine still composes music that possesses high incidences of diatonic scale notes, small pitch intervals and chords, in deference to music theory.
[ "Zheng Sun, Jiaqi Liu, Zewang Zhang, Jingwen Chen, Zhao Huo, Ching Hua\n Lee, and Xiao Zhang", "['Zheng Sun' 'Jiaqi Liu' 'Zewang Zhang' 'Jingwen Chen' 'Zhao Huo'\n 'Ching Hua Lee' 'Xiao Zhang']" ]
cs.IR cs.LG
null
1611.05480
null
null
http://arxiv.org/pdf/1611.05480v1
2016-11-16T22:03:04Z
2016-11-16T22:03:04Z
Solving Cold-Start Problem in Large-scale Recommendation Engines: A Deep Learning Approach
Collaborative Filtering (CF) is widely used in large-scale recommendation engines because of its efficiency, accuracy and scalability. However, in practice, the fact that recommendation engines based on CF require interactions between users and items before making recommendations makes it inappropriate for new items which haven't been exposed to the end users to interact with. This is known as the cold-start problem. In this paper we introduce a novel approach which employs deep learning to tackle this problem in any CF based recommendation engine. One of the most important features of the proposed technique is the fact that it can be applied on top of any existing CF based recommendation engine without changing the CF core. We successfully applied this technique to overcome the item cold-start problem in Careerbuilder's CF based recommendation engine. Our experiments show that the proposed technique is very effective at resolving the cold-start problem while maintaining high accuracy of the CF recommendations.
[ "['Jianbo Yuan' 'Walid Shalaby' 'Mohammed Korayem' 'David Lin'\n 'Khalifeh AlJadda' 'Jiebo Luo']" ]
stat.ML cs.DS cs.LG stat.CO
null
1611.05487
null
null
http://arxiv.org/pdf/1611.05487v2
2016-11-24T01:10:21Z
2016-11-16T22:32:50Z
Algebraic multigrid support vector machines
The support vector machine is a flexible optimization-based technique widely used for classification problems. In practice, its training part becomes computationally expensive on large-scale data sets because of such reasons as the complexity and number of iterations in parameter fitting methods, underlying optimization solvers, and nonlinearity of kernels. We introduce a fast multilevel framework for solving support vector machine models that is inspired by the algebraic multigrid. Significant improvement in the running time has been achieved without any loss in quality. The proposed technique is highly beneficial on imbalanced sets. We demonstrate computational results on publicly available and industrial data sets.
[ "['Ehsan Sadrfaridpour' 'Sandeep Jeereddy' 'Ken Kennedy' 'Andre Luckow'\n 'Talayeh Razzaghi' 'Ilya Safro']", "Ehsan Sadrfaridpour, Sandeep Jeereddy, Ken Kennedy, Andre Luckow,\n Talayeh Razzaghi, Ilya Safro" ]
cs.LG
null
1611.05521
null
null
http://arxiv.org/pdf/1611.05521v1
2016-11-17T01:21:26Z
2016-11-17T01:21:26Z
Robust Hashing for Multi-View Data: Jointly Learning Low-Rank Kernelized Similarity Consensus and Hash Functions
Learning hash functions/codes for similarity search over multi-view data is attracting increasing attention, where similar hash codes are assigned to data objects that exhibit a consistent neighborhood relationship across views. Traditional methods in this category inherently suffer three limitations: 1) they commonly adopt a two-stage scheme where the similarity matrix is first constructed, followed by a subsequent hash function learning; 2) these methods are commonly developed on the assumption that data samples with multiple representations are noise-free, which is not practical in real-life applications; 3) they often incur a cumbersome training model caused by the neighborhood graph construction using all $N$ points in the database ($O(N)$). In this paper, we motivate the problem of jointly and efficiently training the robust hash functions over data objects with multi-feature representations which may be noise corrupted. To achieve both the robustness and training efficiency, we propose an approach to effectively and efficiently learning low-rank kernelized \footnote{We use kernelized similarity rather than kernel, as it is not a squared symmetric matrix for data-landmark affinity matrix.} hash functions shared across views. Specifically, we utilize landmark graphs to construct tractable similarity matrices in multi-views to automatically discover neighborhood structure in the data. To learn robust hash functions, a latent low-rank kernel function is used to construct hash functions in order to accommodate linearly inseparable data. In particular, a latent kernelized similarity matrix is recovered by rank minimization on multiple kernel-based similarity matrices. Extensive experiments on real-world multi-view datasets validate the efficacy of our method in the presence of error corruptions.
[ "Lin Wu, Yang Wang", "['Lin Wu' 'Yang Wang']" ]
cs.CL cs.LG stat.ML
null
1611.05527
null
null
http://arxiv.org/pdf/1611.05527v1
2016-11-17T01:43:01Z
2016-11-17T01:43:01Z
Automatic Node Selection for Deep Neural Networks using Group Lasso Regularization
We examine the effect of the Group Lasso (gLasso) regularizer in selecting the salient nodes of Deep Neural Network (DNN) hidden layers by applying a DNN-HMM hybrid speech recognizer to TED Talks speech data. We test two types of gLasso regularization, one for outgoing weight vectors and another for incoming weight vectors, as well as two sizes of DNNs: 2048 hidden layer nodes and 4096 nodes. Furthermore, we compare gLasso and L2 regularizers. Our experiment results demonstrate that our DNN training, in which the gLasso regularizer was embedded, successfully selected the hidden layer nodes that are necessary and sufficient for achieving high classification power.
[ "['Tsubasa Ochiai' 'Shigeki Matsuda' 'Hideyuki Watanabe' 'Shigeru Katagiri']", "Tsubasa Ochiai, Shigeki Matsuda, Hideyuki Watanabe, Shigeru Katagiri" ]
cs.CV cs.LG cs.NE
null
1611.05552
null
null
http://arxiv.org/pdf/1611.05552v5
2017-08-23T14:09:55Z
2016-11-17T03:45:48Z
DelugeNets: Deep Networks with Efficient and Flexible Cross-layer Information Inflows
Deluge Networks (DelugeNets) are deep neural networks which efficiently facilitate massive cross-layer information inflows from preceding layers to succeeding layers. The connections between layers in DelugeNets are established through cross-layer depthwise convolutional layers with learnable filters, acting as a flexible yet efficient selection mechanism. DelugeNets can propagate information across many layers with greater flexibility and utilize network parameters more effectively compared to ResNets, whilst being more efficient than DenseNets. Remarkably, a DelugeNet model with a model complexity of just 4.31 GigaFLOPs and 20.2M network parameters achieves classification errors of 3.76% and 19.02% on the CIFAR-10 and CIFAR-100 datasets, respectively. Moreover, DelugeNet-122 performs competitively to ResNet-200 on the ImageNet dataset, despite costing merely half of the computations needed by the latter.
[ "Jason Kuen, Xiangfei Kong, Gang Wang, Yap-Peng Tan", "['Jason Kuen' 'Xiangfei Kong' 'Gang Wang' 'Yap-Peng Tan']" ]
stat.ML cs.LG
null
1611.05559
null
null
http://arxiv.org/pdf/1611.05559v2
2017-03-01T21:54:11Z
2016-11-17T04:19:16Z
Boosting Variational Inference
Variational inference (VI) provides fast approximations of a Bayesian posterior in part because it formulates posterior approximation as an optimization problem: to find the closest distribution to the exact posterior over some family of distributions. For practical reasons, the family of distributions in VI is usually constrained so that it does not include the exact posterior, even as a limit point. Thus, no matter how long VI is run, the resulting approximation will not approach the exact posterior. We propose to instead consider a more flexible approximating family consisting of all possible finite mixtures of a parametric base distribution (e.g., Gaussian). For efficient inference, we borrow ideas from gradient boosting to develop an algorithm we call boosting variational inference (BVI). BVI iteratively improves the current approximation by mixing it with a new component from the base distribution family and thereby yields progressively more accurate posterior approximations as more computing time is spent. Unlike a number of common VI variants including mean-field VI, BVI is able to capture multimodality, general posterior covariance, and nonstandard posterior shapes.
[ "['Fangjian Guo' 'Xiangyu Wang' 'Kai Fan' 'Tamara Broderick'\n 'David B. Dunson']", "Fangjian Guo, Xiangyu Wang, Kai Fan, Tamara Broderick and David B.\n Dunson" ]
cs.CV cs.LG
null
1611.05607
null
null
http://arxiv.org/pdf/1611.05607v3
2017-02-02T10:52:03Z
2016-11-17T08:31:56Z
Optical Flow Requires Multiple Strategies (but only one network)
We show that the matching problem that underlies optical flow requires multiple strategies, depending on the amount of image motion and other factors. We then study the implications of this observation on training a deep neural network for representing image patches in the context of descriptor based optical flow. We propose a metric learning method, which selects suitable negative samples based on the nature of the true match. This type of training produces a network that displays multiple strategies depending on the input and leads to state of the art results on the KITTI 2012 and KITTI 2015 optical flow benchmarks.
[ "['Tal Schuster' 'Lior Wolf' 'David Gadot']", "Tal Schuster, Lior Wolf and David Gadot" ]
cs.CV cs.LG
null
1611.05644
null
null
http://arxiv.org/pdf/1611.05644v1
2016-11-17T11:55:16Z
2016-11-17T11:55:16Z
Inverting The Generator Of A Generative Adversarial Network
Generative adversarial networks (GANs) learn to synthesise new samples from a high-dimensional distribution by passing samples drawn from a latent space through a generative network. When the high-dimensional distribution describes images of a particular data set, the network should learn to generate visually similar image samples for latent variables that are close to each other in the latent space. For tasks such as image retrieval and image classification, it may be useful to exploit the arrangement of the latent space by projecting images into it, and using this as a representation for discriminative tasks. GANs often consist of multiple layers of non-linear computations, making them very difficult to invert. This paper introduces techniques for projecting image samples into the latent space using any pre-trained GAN, provided that the computational graph is available. We evaluate these techniques on both MNIST digits and Omniglot handwritten characters. In the case of MNIST digits, we show that projections into the latent space maintain information about the style and the identity of the digit. In the case of Omniglot characters, we show that even characters from alphabets that have not been seen during training may be projected well into the latent space; this suggests that this approach may have applications in one-shot learning.
[ "Antonia Creswell and Anil Anthony Bharath", "['Antonia Creswell' 'Anil Anthony Bharath']" ]
cs.LG cs.AI
null
1611.05675
null
null
http://arxiv.org/pdf/1611.05675v1
2016-11-17T13:32:59Z
2016-11-17T13:32:59Z
Study on Feature Subspace of Archetypal Emotions for Speech Emotion Recognition
Feature subspace selection is an important part in speech emotion recognition. Most of the studies are devoted to finding a feature subspace for representing all emotions. However, some studies have indicated that the features associated with different emotions are not exactly the same. Hence, traditional methods may fail to distinguish some of the emotions with just one global feature subspace. In this work, we propose a new divide and conquer idea to solve the problem. First, the feature subspaces are constructed for all the combinations of every two different emotions (emotion-pair). Bi-classifiers are then trained on these feature subspaces respectively. The final emotion recognition result is derived by the voting and competition method. Experimental results demonstrate that the proposed method can get better results than the traditional multi-classification method.
[ "['Xi Ma' 'Zhiyong Wu' 'Jia Jia' 'Mingxing Xu' 'Helen Meng' 'Lianhong Cai']", "Xi Ma, Zhiyong Wu, Jia Jia, Mingxing Xu, Helen Meng, Lianhong Cai" ]
stat.ML cs.LG
null
1611.05722
null
null
http://arxiv.org/pdf/1611.05722v1
2016-11-17T14:58:35Z
2016-11-17T14:58:35Z
GENESIM: genetic extraction of a single, interpretable model
Models obtained by decision tree induction techniques excel in being interpretable. However, they can be prone to overfitting, which results in a low predictive performance. Ensemble techniques are able to achieve a higher accuracy. However, this comes at a cost of losing interpretability of the resulting model. This makes ensemble techniques impractical in applications where decision support, instead of decision making, is crucial. To bridge this gap, we present the GENESIM algorithm that transforms an ensemble of decision trees to a single decision tree with an enhanced predictive performance by using a genetic algorithm. We compared GENESIM to prevalent decision tree induction and ensemble techniques using twelve publicly available data sets. The results show that GENESIM achieves a better predictive performance on most of these data sets than decision tree induction techniques and a predictive performance in the same order of magnitude as the ensemble techniques. Moreover, the resulting model of GENESIM has a very low complexity, making it very interpretable, in contrast to ensemble techniques.
[ "['Gilles Vandewiele' 'Olivier Janssens' 'Femke Ongenae' 'Filip De Turck'\n 'Sofie Van Hoecke']", "Gilles Vandewiele, Olivier Janssens, Femke Ongenae, Filip De Turck,\n Sofie Van Hoecke" ]
cs.LG stat.ML
null
1611.05724
null
null
http://arxiv.org/pdf/1611.05724v2
2016-11-22T10:13:02Z
2016-11-17T14:59:55Z
Unimodal Thompson Sampling for Graph-Structured Arms
We study, to the best of our knowledge, the first Bayesian algorithm for unimodal Multi-Armed Bandit (MAB) problems with graph structure. In this setting, each arm corresponds to a node of a graph and each edge provides a relationship, unknown to the learner, between two nodes in terms of expected reward. Furthermore, for any node of the graph there is a path leading to the unique node providing the maximum expected reward, along which the expected reward is monotonically increasing. Previous results on this setting describe the behavior of frequentist MAB algorithms. In our paper, we design a Thompson Sampling-based algorithm whose asymptotic pseudo-regret matches the lower bound for the considered setting. We show that, as happens in a wide range of scenarios, Bayesian MAB algorithms dramatically outperform frequentist ones. In particular, we provide a thorough experimental evaluation of the performance of our and state-of-the-art algorithms as the properties of the graph vary.
[ "Stefano Paladino and Francesco Trov\\`o and Marcello Restelli and\n Nicola Gatti", "['Stefano Paladino' 'Francesco Trovò' 'Marcello Restelli' 'Nicola Gatti']" ]
cs.LG
10.1109/TSMCB.2012.2234108
1611.05743
null
null
http://arxiv.org/abs/1611.05743v1
2016-11-16T05:33:04Z
2016-11-16T05:33:04Z
Relational Multi-Manifold Co-Clustering
Co-clustering targets grouping the samples (e.g., documents, users) and the features (e.g., words, ratings) simultaneously. It employs the dual relation and the bilateral information between the samples and features. In many real-world applications, data usually reside on a submanifold of the ambient Euclidean space, but it is nontrivial to estimate the intrinsic manifold of the data space in a principled way. In this study, we focus on improving the co-clustering performance via manifold ensemble learning, which is able to maximally approximate the intrinsic manifolds of both the sample and feature spaces. To achieve this, we develop a novel co-clustering algorithm called Relational Multi-manifold Co-clustering (RMC) based on symmetric nonnegative matrix tri-factorization, which decomposes the relational data matrix into three submatrices. This method considers the inter-type relationship revealed by the relational data matrix, and also the intra-type information reflected by the affinity matrices encoded on the sample and feature data distributions. Specifically, we assume the intrinsic manifold of the sample or feature space lies in a convex hull of some pre-defined candidate manifolds. We want to learn a convex combination of them to maximally approach the desired intrinsic manifold. To optimize the objective function, the multiplicative rules are utilized to update the submatrices alternately. Besides, both the entropic mirror descent algorithm and the coordinate descent algorithm are exploited to learn the manifold coefficient vector. Extensive experiments on documents, images and gene expression data sets have demonstrated the superiority of the proposed algorithm compared to other well-established methods.
[ "['Ping Li' 'Jiajun Bu' 'Chun Chen' 'Zhanying He' 'Deng Cai']", "Ping Li, Jiajun Bu, Chun Chen, Zhanying He, Deng Cai" ]
cs.LG stat.ML
null
1611.05751
null
null
http://arxiv.org/pdf/1611.05751v1
2016-11-17T16:01:36Z
2016-11-17T16:01:36Z
A Multi-Modal Graph-Based Semi-Supervised Pipeline for Predicting Cancer Survival
Cancer survival prediction is an active area of research that can help prevent unnecessary therapies and improve patients' quality of life. Gene expression profiling is being widely used in cancer studies to discover informative biomarkers that aid in predicting different clinical endpoints. We use multiple modalities of data derived from RNA deep-sequencing (RNA-seq) to predict survival of cancer patients. Despite the wealth of information available in expression profiles of cancer tumors, fulfilling the aforementioned objective remains a big challenge, for the most part, due to the paucity of data samples compared to the high dimension of the expression profiles. As such, analysis of transcriptomic data modalities calls for state-of-the-art big-data analytics techniques that can maximally use all the available data to discover the relevant information hidden within a significant amount of noise. In this paper, we propose a pipeline that predicts cancer patients' survival by exploiting the structure of the input (manifold learning) and by leveraging the unlabeled samples using Laplacian support vector machines, a graph-based semi-supervised learning (GSSL) paradigm. We show that under certain circumstances, no single modality per se will result in the best accuracy and by fusing different models together via a stacked generalization strategy, we may boost the accuracy synergistically. We apply our approach to two cancer datasets and present promising results. We maintain that a similar pipeline can be used for predictive tasks where labeled samples are expensive to acquire.
[ "['Hamid Reza Hassanzadeh' 'John H. Phan' 'May D. Wang']", "Hamid Reza Hassanzadeh, John H. Phan, May D. Wang" ]
cs.LG cs.AI stat.ML
null
1611.05763
null
null
http://arxiv.org/pdf/1611.05763v3
2017-01-23T12:38:24Z
2016-11-17T16:29:11Z
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
[ "['Jane X Wang' 'Zeb Kurth-Nelson' 'Dhruva Tirumala' 'Hubert Soyer'\n 'Joel Z Leibo' 'Remi Munos' 'Charles Blundell' 'Dharshan Kumaran'\n 'Matt Botvinick']", "Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z\n Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick" ]
stat.ML cs.LG math.OC stat.CO
null
1611.05780
null
null
http://arxiv.org/pdf/1611.05780v4
2017-12-27T17:26:38Z
2016-11-17T16:55:12Z
Gap Safe screening rules for sparsity enforcing penalties
In high dimensional regression settings, sparsity enforcing penalties have proved useful to regularize the data-fitting term. A recently introduced technique called screening rules proposes to ignore some variables in the optimization, leveraging the expected sparsity of the solutions and consequently leading to faster solvers. When the procedure is guaranteed not to discard variables wrongly, the rules are said to be safe. In this work, we propose a unifying framework for generalized linear models regularized with standard sparsity enforcing penalties such as $\ell_1$ or $\ell_1/\ell_2$ norms. Our technique allows one to safely discard more variables than previously considered safe rules, particularly for low regularization parameters. Our proposed Gap Safe rules (so called because they rely on duality gap computation) can cope with any iterative solver but are particularly well suited to (block) coordinate descent methods. Applied to many standard learning tasks, Lasso, Sparse-Group Lasso, multi-task Lasso, binary and multinomial logistic regression, etc., we report significant speed-ups compared to previously proposed safe rules on all tested data sets.
[ "['Eugene Ndiaye' 'Olivier Fercoq' 'Alexandre Gramfort' 'Joseph Salmon']" ]
stat.AP cs.DB cs.LG
null
1611.05788
null
null
http://arxiv.org/pdf/1611.05788v1
2016-09-30T03:49:16Z
2016-09-30T03:49:16Z
Data Science in Service of Performing Arts: Applying Machine Learning to Predicting Audience Preferences
Performing arts organizations aim to enrich their communities through the arts. To do this, they strive to match their performance offerings to the taste of those communities. Success relies on understanding audience preference and predicting their behavior. Similar to most e-commerce or digital entertainment firms, arts presenters need to recommend the right performance to the right customer at the right time. As part of the Michigan Data Science Team (MDST), we partnered with the University Musical Society (UMS), a non-profit performing arts presenter housed in the University of Michigan, Ann Arbor. We are providing UMS with analysis and business intelligence, utilizing historical individual-level sales data. We built a recommendation system based on collaborative filtering, gaining insights into the artistic preferences of customers, along with the similarities between performances. To better understand audience behavior, we used statistical methods from customer-base analysis. We characterized customer heterogeneity via segmentation, and we modeled customer cohorts to understand and predict ticket purchasing patterns. Finally, we combined statistical modeling with natural language processing (NLP) to explore the impact of wording in program descriptions. These ongoing efforts provide a platform to launch targeted marketing campaigns, helping UMS carry out its mission by allocating its resources more efficiently. Celebrating its 138th season, UMS is a 2014 recipient of the National Medal of Arts, and it continues to enrich communities by connecting world-renowned artists with diverse audiences, especially students in their formative years. We aim to contribute to that mission through data science and customer analytics.
[ "Jacob Abernethy (University of Michigan), Cyrus Anderson (University\n of Michigan), Alex Chojnacki (University of Michigan), Chengyu Dai\n (University of Michigan), John Dryden (University of Michigan), Eric Schwartz\n (University of Michigan), Wenbo Shen (University of Michigan), Jonathan\n Stroud (University of Michigan), Laura Wendlandt (University of Michigan),\n Sheng Yang (University of Michigan), Daniel Zhang (University of Michigan)", "['Jacob Abernethy' 'Cyrus Anderson' 'Alex Chojnacki' 'Chengyu Dai'\n 'John Dryden' 'Eric Schwartz' 'Wenbo Shen' 'Jonathan Stroud'\n 'Laura Wendlandt' 'Sheng Yang' 'Daniel Zhang']" ]
stat.ML cs.AI cs.LG
null
1611.05817
null
null
http://arxiv.org/pdf/1611.05817v1
2016-11-17T19:07:00Z
2016-11-17T19:07:00Z
Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance
At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model's behavior. Assumed in this question are three properties of the interpretable output: coverage, precision, and effort. Coverage refers to how often humans think they can predict the model's behavior, precision to how accurate humans are in those predictions, and effort is either the up-front effort required in interpreting the model, or the effort required to make predictions about a model's behavior. In this work, we propose anchor-LIME (aLIME), a model-agnostic technique that produces high-precision rule-based explanations for which the coverage boundaries are very clear. We compare aLIME to linear LIME with simulated experiments, and demonstrate the flexibility of aLIME with qualitative examples from a variety of domains and tasks.
[ "Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin", "['Marco Tulio Ribeiro' 'Sameer Singh' 'Carlos Guestrin']" ]
cs.LG cs.AI cs.NE math.OC
null
1611.05827
null
null
http://arxiv.org/pdf/1611.05827v3
2017-11-21T22:11:13Z
2016-11-17T19:29:27Z
Towards a Mathematical Understanding of the Difficulty in Learning with Feedforward Neural Networks
Training deep neural networks for solving machine learning problems is one great challenge in the field, mainly due to its associated optimisation problem being highly non-convex. Recent developments have suggested that many training algorithms do not suffer from undesired local minima under certain scenarios, and have consequently led to great efforts in pursuing mathematical explanations for such observations. This work provides an alternative mathematical understanding of the challenge from a smooth optimisation perspective. By assuming exact learning of finite samples, sufficient conditions are identified via a critical point analysis to ensure any local minimum to be globally minimal as well. Furthermore, a state of the art algorithm, known as the Generalised Gauss-Newton (GGN) algorithm, is rigorously revisited as an approximate Newton's algorithm, which shares the property of being locally quadratically convergent to a global minimum under the condition of exact learning.
[ "['Hao Shen']", "Hao Shen" ]
cs.LG math.PR
null
1611.05898
null
null
http://arxiv.org/pdf/1611.05898v2
2017-07-05T12:53:20Z
2016-11-10T16:08:31Z
Associative Memories to Accelerate Approximate Nearest Neighbor Search
Nearest neighbor search is a very active field in machine learning for it appears in many application cases, including classification and object retrieval. In its canonical version, the complexity of the search is linear with both the dimension and the cardinal of the collection of vectors the search is performed in. Recently many works have focused on reducing the dimension of vectors using quantization techniques or hashing, while providing an approximate result. In this paper we focus instead on tackling the cardinal of the collection of vectors. Namely, we introduce a technique that partitions the collection of vectors and stores each part in its own associative memory. When a query vector is given to the system, associative memories are polled to identify which one contains the closest match. Then an exhaustive search is conducted only on the part of vectors stored in the selected associative memory. We study the effectiveness of the system when messages to store are generated from i.i.d. uniform $\pm$1 random variables or 0-1 sparse i.i.d. random variables. We also conduct experiments on both synthetic data and real data and show that it is possible to achieve interesting trade-offs between complexity and accuracy.
[ "['Vincent Gripon' 'Matthias Löwe' 'Franck Vermet']", "Vincent Gripon, Matthias L\\\"owe, Franck Vermet" ]
stat.ML cs.LG
10.1109/BigData.2016.7841024
1611.05923
null
null
http://arxiv.org/abs/1611.05923v3
2017-03-23T05:55:24Z
2016-11-17T22:23:08Z
"Influence Sketching": Finding Influential Samples In Large-Scale Regressions
There is an especially strong need in modern large-scale data analysis to prioritize samples for manual inspection. For example, the inspection could target important mislabeled samples or key vulnerabilities exploitable by an adversarial attack. In order to solve the "needle in the haystack" problem of which samples to inspect, we develop a new scalable version of Cook's distance, a classical statistical technique for identifying samples which unusually strongly impact the fit of a regression model (and its downstream predictions). In order to scale this technique up to very large and high-dimensional datasets, we introduce a new algorithm which we call "influence sketching." Influence sketching embeds random projections within the influence computation; in particular, the influence score is calculated using the randomly projected pseudo-dataset from the post-convergence Generalized Linear Model (GLM). We validate that influence sketching can reliably and successfully discover influential samples by applying the technique to a malware detection dataset of over 2 million executable files, each represented with almost 100,000 features. For example, we find that randomly deleting approximately 10% of training samples reduces predictive accuracy only slightly from 99.47% to 99.45%, whereas deleting the same number of samples with high influence sketch scores reduces predictive accuracy all the way down to 90.24%. Moreover, we find that influential samples are especially likely to be mislabeled. In the case study, we manually inspect the most influential samples, and find that influence sketching pointed us to new, previously unidentified pieces of malware.
[ "['Mike Wojnowicz' 'Ben Cruz' 'Xuan Zhao' 'Brian Wallace' 'Matt Wolff'\n 'Jay Luan' 'Caleb Crable']", "Mike Wojnowicz, Ben Cruz, Xuan Zhao, Brian Wallace, Matt Wolff, Jay\n Luan, and Caleb Crable" ]
null
null
1611.05934
null
null
http://arxiv.org/pdf/1611.05934v1
2016-11-18T00:13:32Z
2016-11-18T00:13:32Z
Increasing the Interpretability of Recurrent Neural Networks Using Hidden Markov Models
As deep neural networks continue to revolutionize various application domains, there is increasing interest in making these powerful models more understandable and interpretable, and narrowing down the causes of good and bad predictions. We focus on recurrent neural networks, state of the art models in speech recognition and translation. Our approach to increasing interpretability is by combining a long short-term memory (LSTM) model with a hidden Markov model (HMM), a simpler and more transparent model. We add the HMM state probabilities to the output layer of the LSTM, and then train the HMM and LSTM either sequentially or jointly. The LSTM can make use of the information from the HMM, and fill in the gaps when the HMM is not performing well. A small hybrid model usually performs better than a standalone LSTM of the same size, especially on smaller data sets. We test the algorithms on text data and medical time series data, and find that the LSTM and HMM learn complementary information about the features in the text.
[ "['Viktoriya Krakovna' 'Finale Doshi-Velez']" ]
cs.AI cs.LG
null
1611.05950
null
null
http://arxiv.org/pdf/1611.05950v1
2016-11-18T02:04:57Z
2016-11-18T02:04:57Z
Analysis of a Design Pattern for Teaching with Features and Labels
We study the task of teaching a machine to classify objects using features and labels. We introduce the Error-Driven-Featuring design pattern for teaching using features and labels in which a teacher prefers to introduce features only if they are needed. We analyze the potential risks and benefits of this teaching pattern through the use of teaching protocols, illustrative examples, and by providing bounds on the effort required for an optimal machine teacher using a linear learning algorithm, the most commonly used type of learner in interactive machine learning systems. Our analysis provides a deeper understanding of potential trade-offs of using different learning algorithms and between the effort required for featuring (creating new features) and labeling (providing labels for objects).
[ "['Christopher Meek' 'Patrice Simard' 'Xiaojin Zhu']" ]
cs.LG
null
1611.05955
null
null
http://arxiv.org/pdf/1611.05955v1
2016-11-18T02:33:10Z
2016-11-18T02:33:10Z
A Characterization of Prediction Errors
Understanding prediction errors and determining how to fix them is critical to building effective predictive systems. In this paper, we delineate four types of prediction errors and demonstrate that these four types characterize all prediction errors. In addition, we describe potential remedies and tools that can be used to reduce the uncertainty when trying to determine the source of a prediction error and when trying to take action to remove it.
[ "['Christopher Meek']", "Christopher Meek" ]
cs.LG cs.NA stat.AP stat.ML
null
1611.05977
null
null
http://arxiv.org/pdf/1611.05977v1
2016-11-18T05:07:21Z
2016-11-18T05:07:21Z
Robust and Scalable Column/Row Sampling from Corrupted Big Data
Conventional sampling techniques fall short of drawing descriptive sketches of the data when the data is grossly corrupted as such corruptions break the low rank structure required for them to perform satisfactorily. In this paper, we present new sampling algorithms which can locate the informative columns in presence of severe data corruptions. In addition, we develop new scalable randomized designs of the proposed algorithms. The proposed approach is simultaneously robust to sparse corruption and outliers and substantially outperforms the state-of-the-art robust sampling algorithms as demonstrated by experiments conducted using both real and synthetic data.
[ "Mostafa Rahmani, George Atia", "['Mostafa Rahmani' 'George Atia']" ]
cs.LO cs.AI cs.LG
10.1007/978-3-319-63046-5_34
1611.05990
null
null
http://arxiv.org/abs/1611.05990v2
2019-06-15T00:34:50Z
2016-11-18T06:30:09Z
Monte Carlo Tableau Proof Search
We study Monte Carlo Tree Search to guide proof search in tableau calculi. This includes proposing a number of proof-state evaluation heuristics, some of which are learnt from previous proofs. We present an implementation based on the leanCoP prover. The system is trained and evaluated on a large suite of related problems coming from the Mizar proof assistant, showing that it is capable of finding new and different proofs.
[ "['Michael Färber' 'Cezary Kaliszyk' 'Josef Urban']" ]
stat.ML cs.LG
null
1611.06080
null
null
http://arxiv.org/pdf/1611.06080v1
2016-11-18T14:00:48Z
2016-11-18T14:00:48Z
A Generalized Stochastic Variational Bayesian Hyperparameter Learning Framework for Sparse Spectrum Gaussian Process Regression
While much research effort has been dedicated to scaling up sparse Gaussian process (GP) models based on inducing variables for big data, little attention is afforded to the other less explored class of low-rank GP approximations that exploit the sparse spectral representation of a GP kernel. This paper presents such an effort to advance the state of the art of sparse spectrum GP models to achieve competitive predictive performance for massive datasets. Our generalized framework of stochastic variational Bayesian sparse spectrum GP (sVBSSGP) models addresses their shortcomings by adopting a Bayesian treatment of the spectral frequencies to avoid overfitting, modeling these frequencies jointly in its variational distribution to enable their interaction a posteriori, and exploiting local data for boosting the predictive performance. However, such structural improvements result in a variational lower bound that is intractable to optimize. To resolve this, we exploit a variational parameterization trick to make it amenable to stochastic optimization. Interestingly, the resulting stochastic gradient has a linearly decomposable structure that can be exploited to refine our stochastic optimization method to incur constant time per iteration while preserving its property of being an unbiased estimator of the exact gradient of the variational lower bound. Empirical evaluation on real-world datasets shows that sVBSSGP outperforms state-of-the-art stochastic implementations of sparse GP models.
[ "['Quang Minh Hoang' 'Trong Nghia Hoang' 'Kian Hsiang Low']" ]
cs.LG cs.AI stat.ML
null
1611.06132
null
null
http://arxiv.org/pdf/1611.06132v1
2016-11-18T15:53:50Z
2016-11-18T15:53:50Z
Faster variational inducing input Gaussian process classification
Gaussian processes (GP) provide a prior over functions and allow finding complex regularities in data. Gaussian processes are successfully used for classification/regression problems and dimensionality reduction. In this work we consider the classification problem only. The complexity of standard methods for GP-classification scales cubically with the size of the training dataset. This complexity makes them inapplicable to big data problems. Therefore, a variety of methods were introduced to overcome this limitation. In this paper we focus on methods based on so-called inducing inputs. This approach is based on variational inference and proposes a particular lower bound for the marginal likelihood (evidence). This bound is then maximized w.r.t. parameters of the kernel function of the Gaussian process, thus fitting the model to data. The computational complexity of this method is $O(nm^2)$, where $m$ is the number of inducing inputs used by the model and is assumed to be substantially smaller than the size of the dataset $n$. Recently, a new evidence lower bound for the GP-classification problem was introduced. It allows using stochastic optimization, which makes it suitable for big data problems. However, the new lower bound depends on $O(m^2)$ variational parameters, which makes optimization challenging in the case of big $m$. In this work we develop a new approach for training inducing input GP models for classification problems. Here we use a quadratic approximation of several terms in the aforementioned evidence lower bound, obtaining analytical expressions for the optimal values of most of the parameters in the optimization, thus substantially reducing the dimension of the optimization space. In our experiments we achieve results as good as or better than those of the existing method. Moreover, our method doesn't require the user to manually set the learning rate, making it more practical than the existing method.
[ "['Pavel Izmailov' 'Dmitry Kropotov']", "Pavel Izmailov and Dmitry Kropotov" ]
stat.ML cs.LG cs.NE
null
1611.06148
null
null
http://arxiv.org/pdf/1611.06148v2
2017-05-24T12:26:43Z
2016-11-18T16:20:41Z
Compacting Neural Network Classifiers via Dropout Training
We introduce dropout compaction, a novel method for training feed-forward neural networks which realizes the performance gains of training a large model with dropout regularization, yet extracts a compact neural network for run-time efficiency. In the proposed method, we introduce a sparsity-inducing prior on the per unit dropout retention probability so that the optimizer can effectively prune hidden units during training. By changing the prior hyperparameters, we can control the size of the resulting network. We performed a systematic comparison of dropout compaction and competing methods on several real-world speech recognition tasks and found that dropout compaction achieved comparable accuracy with fewer than 50% of the hidden units, translating to a 2.5x speedup in run-time.
[ "['Yotaro Kubo' 'George Tucker' 'Simon Wiesler']", "Yotaro Kubo, George Tucker, Simon Wiesler" ]
stat.ML cs.AI cs.HC cs.LG
null
1611.06175
null
null
http://arxiv.org/pdf/1611.06175v1
2016-11-18T17:52:23Z
2016-11-18T17:52:23Z
Learning Interpretability for Visualizations using Adapted Cox Models through a User Experiment
In order to be useful, visualizations need to be interpretable. This paper uses a user-based approach to combine and assess quality measures in order to better model user preferences. Results show that cluster separability measures are outperformed by a neighborhood conservation measure, even though the former are usually considered as intuitively representative of user motives. Moreover, combining measures, as opposed to using a single measure, further improves prediction performances.
[ "['Adrien Bibal' 'Benoit Frénay']", "Adrien Bibal and Benoit Fr\\'enay" ]
stat.ML cs.AI cs.CL cs.LG
null
1611.06188
null
null
http://arxiv.org/pdf/1611.06188v2
2017-03-02T19:47:59Z
2016-11-18T18:13:46Z
Variable Computation in Recurrent Neural Networks
Recurrent neural networks (RNNs) have been used extensively and with increasing success to model various types of sequential data. Much of this progress has been achieved through devising recurrent units and architectures with the flexibility to capture complex statistics in the data, such as long range dependency or localized attention phenomena. However, while many kinds of sequential data (such as video, speech or language) can have highly variable information flow, most recurrent models still consume input features at a constant rate and perform a constant number of computations per time step, which can be detrimental to both speed and model capacity. In this paper, we explore a modification to existing recurrent units which allows them to learn to vary the amount of computation they perform at each step, without prior knowledge of the sequence's time structure. We show experimentally that not only do our models require fewer operations, they also lead to better performance overall on evaluation tasks.
[ "Yacine Jernite, Edouard Grave, Armand Joulin, Tomas Mikolov", "['Yacine Jernite' 'Edouard Grave' 'Armand Joulin' 'Tomas Mikolov']" ]
cs.CL cs.LG cs.NE
null
1611.06204
null
null
http://arxiv.org/pdf/1611.06204v1
2016-11-18T19:38:59Z
2016-11-18T19:38:59Z
Visualizing and Understanding Curriculum Learning for Long Short-Term Memory Networks
Curriculum Learning emphasizes the order of training instances in a computational learning setup. The core hypothesis is that simpler instances should be learned early as building blocks to learn more complex ones. Despite its usefulness, it is still unknown how exactly the internal representations of models are affected by curriculum learning. In this paper, we study the effect of curriculum learning on Long Short-Term Memory (LSTM) networks, which have shown strong competency in many Natural Language Processing (NLP) problems. Our experiments on a sentiment analysis task and a synthetic task similar to sequence prediction tasks in NLP show that curriculum learning has a positive effect on the LSTM's internal states by biasing the model towards building constructive representations, i.e., the internal representations at previous timesteps are used as building blocks for the final prediction. We also find that smaller models improve significantly when trained with curriculum learning. Lastly, we show that curriculum learning helps more when the amount of training data is limited.
[ "['Volkan Cirik' 'Eduard Hovy' 'Louis-Philippe Morency']", "Volkan Cirik, Eduard Hovy, Louis-Philippe Morency" ]
stat.ML cs.DC cs.LG
null
1611.06213
null
null
http://arxiv.org/pdf/1611.06213v2
2017-10-03T20:30:19Z
2016-11-18T20:06:27Z
GaDei: On Scale-up Training As A Service For Deep Learning
Deep learning (DL) training-as-a-service (TaaS) is an important emerging industrial workload. The unique challenge of TaaS is that it must satisfy a wide range of customers who have no experience and resources to tune DL hyper-parameters, and meticulous tuning for each user's dataset is prohibitively expensive. Therefore, TaaS hyper-parameters must be fixed with values that are applicable to all users. IBM Watson Natural Language Classifier (NLC) service, the most popular IBM cognitive service used by thousands of enterprise-level clients around the globe, is a typical TaaS service. By evaluating the NLC workloads, we show that only a conservative hyper-parameter setup (e.g., small mini-batch size and small learning rate) can guarantee acceptable model accuracy for a wide range of customers. We further justify theoretically why such a setup guarantees better model convergence in general. Unfortunately, the small mini-batch size causes a high volume of communication traffic in a parameter-server based system. We characterize the high communication bandwidth requirement of TaaS using representative industrial deep learning workloads and demonstrate that none of the state-of-the-art scale-up or scale-out solutions can satisfy such a requirement. We then present GaDei, an optimized shared-memory based scale-up parameter server design. We prove that the designed protocol is deadlock-free and that it processes each gradient exactly once. Our implementation is evaluated on both commercial and public benchmarks to demonstrate that it significantly outperforms the state-of-the-art parameter-server based implementation while maintaining the required accuracy, and that it reaches nearly the best possible runtime performance, constrained only by hardware limitations. Furthermore, to the best of our knowledge, GaDei is the only scale-up DL system that provides fault-tolerance.
[ "['Wei Zhang' 'Minwei Feng' 'Yunhui Zheng' 'Yufei Ren' 'Yandong Wang'\n 'Ji Liu' 'Peng Liu' 'Bing Xiang' 'Li Zhang' 'Bowen Zhou' 'Fei Wang']", "Wei Zhang, Minwei Feng, Yunhui Zheng, Yufei Ren, Yandong Wang, Ji Liu,\n Peng Liu, Bing Xiang, Li Zhang, Bowen Zhou, Fei Wang" ]
stat.ME cs.AI cs.LG
10.1214/21-AOS2064
1611.06221
null
null
null
null
null
Foundations of Structural Causal Models with Cycles and Latent Variables
Structural causal models (SCMs), also known as (nonparametric) structural equation models (SEMs), are widely used for causal modeling purposes. In particular, acyclic SCMs, also known as recursive SEMs, form a well-studied subclass of SCMs that generalize causal Bayesian networks to allow for latent confounders. In this paper, we investigate SCMs in a more general setting, allowing for the presence of both latent confounders and cycles. We show that in the presence of cycles, many of the convenient properties of acyclic SCMs do not hold in general: they do not always have a solution; they do not always induce unique observational, interventional and counterfactual distributions; a marginalization does not always exist, and if it exists the marginal model does not always respect the latent projection; they do not always satisfy a Markov property; and their graphs are not always consistent with their causal semantics. We prove that for SCMs in general each of these properties does hold under certain solvability conditions. Our work generalizes results for SCMs with cycles that were only known for certain special cases so far. We introduce the class of simple SCMs that extends the class of acyclic SCMs to the cyclic setting, while preserving many of the convenient properties of acyclic SCMs. With this paper we aim to provide the foundations for a general theory of statistical causal modeling with SCMs.
[ "Stephan Bongers, Patrick Forr\\'e, Jonas Peters, Joris M. Mooij" ]
cs.DS cs.CG cs.LG math.MG
null
1611.06222
null
null
http://arxiv.org/pdf/1611.06222v2
2017-07-24T13:21:13Z
2016-11-18T20:56:26Z
Approximate Near Neighbors for General Symmetric Norms
We show that every symmetric normed space admits an efficient nearest neighbor search data structure with doubly-logarithmic approximation. Specifically, for every $n$, $d = n^{o(1)}$, and every $d$-dimensional symmetric norm $\|\cdot\|$, there exists a data structure for $\mathrm{poly}(\log \log n)$-approximate nearest neighbor search over $\|\cdot\|$ for $n$-point datasets achieving $n^{o(1)}$ query time and $n^{1+o(1)}$ space. The main technical ingredient of the algorithm is a low-distortion embedding of a symmetric norm into a low-dimensional iterated product of top-$k$ norms. We also show that our techniques cannot be extended to general norms.
[ "Alexandr Andoni, Huy L. Nguyen, Aleksandar Nikolov, Ilya Razenshteyn,\n Erik Waingarten", "['Alexandr Andoni' 'Huy L. Nguyen' 'Aleksandar Nikolov' 'Ilya Razenshteyn'\n 'Erik Waingarten']" ]
physics.ins-det cs.LG physics.acc-ph
10.1016/j.nima.2017.06.020
1611.06241
null
null
http://arxiv.org/abs/1611.06241v2
2017-06-22T20:38:36Z
2016-11-18T21:06:00Z
Using LSTM recurrent neural networks for monitoring the LHC superconducting magnets
The superconducting LHC magnets are coupled with an electronic monitoring system which records and analyses voltage time series reflecting their performance. The currently used system is based on a range of preprogrammed triggers which launch protection procedures when a misbehavior of the magnets is detected. All the procedures used in the protection equipment were designed and implemented according to known working scenarios of the system and are updated and monitored by human operators. This paper proposes a novel approach to monitoring and fault protection of the Large Hadron Collider (LHC) superconducting magnets which employs state-of-the-art Deep Learning algorithms. Consequently, the authors of the paper decided to examine the performance of LSTM recurrent neural networks for modeling the voltage time series of the magnets. In order to address this challenging task, different network architectures and hyper-parameters were used to achieve the best possible performance of the solution. The regression results were measured in terms of RMSE for different numbers of future steps and history lengths taken into account for the prediction. The best result of RMSE=0.00104 was obtained for a network of 128 LSTM cells within the internal layer and a 16-step history buffer.
[ "Maciej Wielgosz and Andrzej Skocze\\'n and Matej Mertik", "['Maciej Wielgosz' 'Andrzej Skoczeń' 'Matej Mertik']" ]
cs.NE cs.LG stat.ML
null
1611.06245
null
null
http://arxiv.org/pdf/1611.06245v1
2016-11-18T21:09:16Z
2016-11-18T21:09:16Z
Spikes as regularizers
We present a confidence-based single-layer feed-forward learning algorithm SPIRAL (Spike Regularized Adaptive Learning) relying on an encoding of activation spikes. We adaptively update a weight vector relying on confidence estimates and activation offsets relative to previous activity. We regularize updates proportionally to item-level confidence and weight-specific support, loosely inspired by the observation from neurophysiology that high spike rates are sometimes accompanied by low temporal precision. Our experiments suggest that the new learning algorithm SPIRAL is more robust and less prone to overfitting than both the averaged perceptron and AROW.
[ "['Anders Søgaard']", "Anders S{\\o}gaard" ]
cs.LG
null
1611.06256
null
null
http://arxiv.org/pdf/1611.06256v3
2017-03-02T19:12:19Z
2016-11-18T21:34:47Z
Reinforcement Learning through Asynchronous Advantage Actor-Critic on a GPU
We introduce a hybrid CPU/GPU version of the Asynchronous Advantage Actor-Critic (A3C) algorithm, currently the state-of-the-art method in reinforcement learning for various gaming tasks. We analyze its computational traits and concentrate on aspects critical to leveraging the GPU's computational power. We introduce a system of queues and a dynamic scheduling strategy, potentially helpful for other asynchronous algorithms as well. Our hybrid CPU/GPU version of A3C, based on TensorFlow, achieves a significant speed up compared to a CPU implementation; we make it publicly available to other researchers at https://github.com/NVlabs/GA3C .
[ "Mohammad Babaeizadeh, Iuri Frosio, Stephen Tyree, Jason Clemons, Jan\n Kautz", "['Mohammad Babaeizadeh' 'Iuri Frosio' 'Stephen Tyree' 'Jason Clemons'\n 'Jan Kautz']" ]
stat.ML cs.LG cs.SD
10.1109/ICASSP.2017.7952118
1611.06265
null
null
http://arxiv.org/abs/1611.06265v2
2017-06-15T16:23:58Z
2016-11-18T22:33:05Z
Deep Clustering and Conventional Networks for Music Separation: Stronger Together
Deep clustering is the first method to handle general audio separation scenarios with multiple sources of the same type and an arbitrary number of sources, performing impressively in speaker-independent speech separation tasks. However, little is known about its effectiveness in other challenging situations such as music source separation. Contrary to conventional networks that directly estimate the source signals, deep clustering generates an embedding for each time-frequency bin, and separates sources by clustering the bins in the embedding space. We show that deep clustering outperforms conventional networks on a singing voice separation task, in both matched and mismatched conditions, even though conventional networks have the advantage of end-to-end training for best signal approximation, presumably because its more flexible objective engenders better regularization. Since the strengths of deep clustering and conventional network architectures appear complementary, we explore combining them in a single hybrid network trained via an approach akin to multi-task learning. Remarkably, the combination significantly outperforms either of its components.
[ "['Yi Luo' 'Zhuo Chen' 'John R. Hershey' 'Jonathan Le Roux'\n 'Nima Mesgarani']", "Yi Luo, Zhuo Chen, John R. Hershey, Jonathan Le Roux, Nima Mesgarani" ]
cs.LG
null
1611.06306
null
null
http://arxiv.org/pdf/1611.06306v1
2016-11-19T05:24:48Z
2016-11-19T05:24:48Z
Cross-model convolutional neural network for multiple modality data representation
A novel data representation method based on convolutional neural networks (CNN) is proposed in this paper to represent data of different modalities. We learn a CNN model for the data of each modality to map the data of different modalities to a common space, and regularize the new representations in the common space by a cross-model relevance matrix. We further impose that the class labels of data points can also be predicted from the CNN representations in the common space. The learning problem is modeled as a minimization problem, which is solved by an augmented Lagrange method (ALM) with updating rules of the alternating direction method of multipliers (ADMM). The experiments over a benchmark of sequence data of multiple modalities show its advantage.
[ "Yanbin Wu, Li Wang, Fan Cui, Hongbin Zhai, Baoming Dong, Jim Jing-Yan\n Wang", "['Yanbin Wu' 'Li Wang' 'Fan Cui' 'Hongbin Zhai' 'Baoming Dong'\n 'Jim Jing-Yan Wang']" ]
stat.ML cs.LG cs.NE
null
1611.06310
null
null
http://arxiv.org/pdf/1611.06310v2
2017-02-17T14:51:54Z
2016-11-19T05:49:22Z
Local minima in training of neural networks
There has been a lot of recent interest in trying to characterize the error surface of deep models. This stems from a long standing question. Given that deep networks are highly nonlinear systems optimized by local gradient methods, why do they not seem to be affected by bad local minima? It is widely believed that training of deep models using gradient methods works so well because the error surface either has no local minima, or if they exist they need to be close in value to the global minimum. It is known that such results hold under very strong assumptions which are not satisfied by real models. In this paper we present examples showing that for such a theorem to be true additional assumptions on the data, initialization schemes and/or the model classes have to be made. We look at the particular case of finite size datasets. We demonstrate that in this scenario one can construct counter-examples (datasets or initialization schemes) where the network does become susceptible to bad local minima over the weight space.
[ "['Grzegorz Swirszcz' 'Wojciech Marian Czarnecki' 'Razvan Pascanu']" ]
cs.CV cs.LG cs.NE
null
1611.06321
null
null
http://arxiv.org/pdf/1611.06321v3
2018-10-11T07:18:09Z
2016-11-19T07:18:17Z
Learning the Number of Neurons in Deep Networks
Nowadays, the number of layers and of neurons in each layer of a deep network are typically set manually. While very deep and wide networks have proven effective in general, they come at a high memory and computation cost, thus making them impractical for constrained platforms. These networks, however, are known to have many redundant parameters, and could thus, in principle, be replaced by more compact architectures. In this paper, we introduce an approach to automatically determining the number of neurons in each layer of a deep network during learning. To this end, we propose to make use of structured sparsity during learning. More precisely, we use a group sparsity regularizer on the parameters of the network, where each group is defined to act on a single neuron. Starting from an overcomplete network, we show that our approach can reduce the number of parameters by up to 80\% while retaining or even improving the network accuracy.
[ "['Jose M Alvarez' 'Mathieu Salzmann']", "Jose M Alvarez and Mathieu Salzmann" ]
cs.LG cs.NE
null
1611.06342
null
null
http://arxiv.org/pdf/1611.06342v1
2016-11-19T11:21:25Z
2016-11-19T11:21:25Z
Quantized neural network design under weight capacity constraint
The complexity of deep neural network algorithms for hardware implementation can be lowered either by scaling the number of units or by reducing the word-length of weights. Both approaches, however, can be accompanied by performance degradation, although much research has been conducted to alleviate this problem. Thus, it is an important question which of the two, network size scaling or weight quantization, is more effective for hardware optimization. For this study, the performances of fully-connected deep neural networks (FCDNNs) and convolutional neural networks (CNNs) are evaluated while changing the network complexity and the word-length of weights. Based on these experiments, we present the effective compression ratio (ECR) to guide the trade-off between the network size and the precision of weights when the hardware resource is limited.
[ "['Sungho Shin' 'Kyuyeon Hwang' 'Wonyong Sung']", "Sungho Shin, Kyuyeon Hwang, and Wonyong Sung" ]
stat.ML cs.LG
null
1611.06426
null
null
http://arxiv.org/pdf/1611.06426v2
2017-03-04T01:28:26Z
2016-11-19T20:36:30Z
Conservative Contextual Linear Bandits
Safety is a desirable property that can immensely increase the applicability of learning algorithms in real-world decision-making problems. It is much easier for a company to deploy an algorithm that is safe, i.e., guaranteed to perform at least as well as a baseline. In this paper, we study the issue of safety in contextual linear bandits that have application in many different fields including personalized ad recommendation in online marketing. We formulate a notion of safety for this class of algorithms. We develop a safe contextual linear bandit algorithm, called conservative linear UCB (CLUCB), that simultaneously minimizes its regret and satisfies the safety constraint, i.e., maintains its performance above a fixed percentage of the performance of a baseline strategy, uniformly over time. We prove an upper-bound on the regret of CLUCB and show that it can be decomposed into two terms: 1) an upper-bound for the regret of the standard linear UCB algorithm that grows with the time horizon and 2) a constant (does not grow with the time horizon) term that accounts for the loss of being conservative in order to satisfy the safety constraint. We empirically show that our algorithm is safe and validate our theoretical analysis.
[ "Abbas Kazerouni, Mohammad Ghavamzadeh, Yasin Abbasi-Yadkori and\n Benjamin Van Roy", "['Abbas Kazerouni' 'Mohammad Ghavamzadeh' 'Yasin Abbasi-Yadkori'\n 'Benjamin Van Roy']" ]
cs.CR cs.AI cs.LG
null
1611.06439
null
null
http://arxiv.org/pdf/1611.06439v1
2016-11-19T22:46:13Z
2016-11-19T22:46:13Z
A Survey of Credit Card Fraud Detection Techniques: Data and Technique Oriented Perspective
Credit cards play a very important role in today's economy. They have become an unavoidable part of household, business and global activities. Although using credit cards provides enormous benefits when used carefully and responsibly, significant credit and financial damage may be caused by fraudulent activities. Many techniques have been proposed to confront the growth in credit card fraud. However, while all of these techniques share the goal of preventing credit card fraud, each one has its own drawbacks, advantages and characteristics. In this paper, after investigating the difficulties of credit card fraud detection, we review the state of the art in credit card fraud detection techniques, data sets and evaluation criteria. The advantages and disadvantages of fraud detection methods are enumerated and compared. Furthermore, a classification of the mentioned techniques into two main fraud detection approaches, namely misuse (supervised) and anomaly detection (unsupervised), is presented. A further classification of techniques is proposed based on their capability to process numerical and categorical data sets. Different data sets used in the literature are then described and grouped into real and synthesized data, and the effective and common attributes are extracted for further usage. Moreover, evaluation criteria employed in the literature are collected and discussed. Consequently, open issues for credit card fraud detection are identified as guidelines for new researchers.
[ "['SamanehSorournejad' 'Zahra Zojaji' 'Reza Ebrahimi Atani'\n 'Amir Hassan Monadjemi']", "SamanehSorournejad, Zahra Zojaji, Reza Ebrahimi Atani, Amir Hassan\n Monadjemi" ]
cs.LG stat.ML
null
1611.06440
null
null
http://arxiv.org/pdf/1611.06440v2
2017-06-08T19:53:26Z
2016-11-19T22:48:30Z
Pruning Convolutional Neural Networks for Resource Efficient Inference
We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with fine-tuning by backpropagation - a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on Taylor expansion that approximates the change in the cost function induced by pruning network parameters. We focus on transfer learning, where large pretrained networks are adapted to specialized tasks. The proposed criterion demonstrates superior performance compared to other criteria, e.g. the norm of kernel weights or feature map activation, for pruning large CNNs after adaptation to fine-grained classification tasks (Birds-200 and Flowers-102), relying only on first-order gradient information. We also show that pruning can lead to more than 10x theoretical (5x practical) reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier. Finally, we show results for the large-scale ImageNet dataset to emphasize the flexibility of our approach.
[ "['Pavlo Molchanov' 'Stephen Tyree' 'Tero Karras' 'Timo Aila' 'Jan Kautz']" ]
cs.CV cs.LG cs.NE
null
1611.06453
null
null
http://arxiv.org/pdf/1611.06453v2
2017-07-02T02:17:00Z
2016-11-20T00:21:32Z
Fast Video Classification via Adaptive Cascading of Deep Models
Recent advances have enabled "oracle" classifiers that can classify across many classes and input distributions with high accuracy without retraining. However, these classifiers are relatively heavyweight, so that applying them to classify video is costly. We show that day-to-day video exhibits highly skewed class distributions over the short term, and that these distributions can be classified by much simpler models. We formulate the problem of detecting the short-term skews online and exploiting models based on it as a new sequential decision making problem dubbed the Online Bandit Problem, and present a new algorithm to solve it. When applied to recognizing faces in TV shows and movies, we realize end-to-end classification speedups of 2.4-7.8x/2.6-11.2x (on GPU/CPU) relative to a state-of-the-art convolutional neural network, at competitive accuracy.
[ "Haichen Shen, Seungyeop Han, Matthai Philipose, Arvind Krishnamurthy", "['Haichen Shen' 'Seungyeop Han' 'Matthai Philipose' 'Arvind Krishnamurthy']" ]
cs.LG cs.NE stat.ML
null
1611.06455
null
null
http://arxiv.org/pdf/1611.06455v4
2016-12-14T06:58:08Z
2016-11-20T00:34:09Z
Time Series Classification from Scratch with Deep Neural Networks: A Strong Baseline
We propose a simple but strong baseline for time series classification from scratch with deep neural networks. Our proposed baseline models are pure end-to-end, without any heavy preprocessing of the raw data or feature crafting. The proposed Fully Convolutional Network (FCN) achieves premium performance compared to other state-of-the-art approaches, and our exploration of very deep neural networks with the ResNet structure is also competitive. The global average pooling in our convolutional model enables the exploitation of the Class Activation Map (CAM) to find the contributing regions in the raw data for specific labels. Our models provide a simple choice for real-world applications and a good starting point for future research. An overall analysis is provided to discuss the generalization capability of our models, learned features, network structures and the classification semantics.
[ "['Zhiguang Wang' 'Weizhong Yan' 'Tim Oates']", "Zhiguang Wang, Weizhong Yan, Tim Oates" ]
cs.LG stat.ML
null
1611.06475
null
null
http://arxiv.org/pdf/1611.06475v2
2017-08-26T02:25:41Z
2016-11-20T06:12:43Z
Dealing with Range Anxiety in Mean Estimation via Statistical Queries
We give algorithms for estimating the expectation of a given real-valued function $\phi:X\to {\bf R}$ on a sample drawn randomly from some unknown distribution $D$ over domain $X$, namely ${\bf E}_{{\bf x}\sim D}[\phi({\bf x})]$. Our algorithms work in two well-studied models of restricted access to data samples. The first one is the statistical query (SQ) model in which an algorithm has access to an SQ oracle for the input distribution $D$ over $X$ instead of i.i.d. samples from $D$. Given a query function $\phi:X \to [0,1]$, the oracle returns an estimate of ${\bf E}_{{\bf x}\sim D}[\phi({\bf x})]$ within some tolerance $\tau$. The second, is a model in which only a single bit is communicated from each sample. In both of these models the error obtained using a naive implementation would scale polynomially with the range of the random variable $\phi({\bf x})$ (which might even be infinite). In contrast, without restrictions on access to data the expected error scales with the standard deviation of $\phi({\bf x})$. Here we give a simple algorithm whose error scales linearly in standard deviation of $\phi({\bf x})$ and logarithmically with an upper bound on the second moment of $\phi({\bf x})$. As corollaries, we obtain algorithms for high dimensional mean estimation and stochastic convex optimization in these models that work in more general settings than previously known solutions.
[ "['Vitaly Feldman']", "Vitaly Feldman" ]
cs.LG
null
1611.06530
null
null
http://arxiv.org/pdf/1611.06530v2
2018-02-09T18:07:25Z
2016-11-20T15:39:43Z
Prototypical Recurrent Unit
Despite the great successes of deep learning, the effectiveness of deep neural networks has not been understood at any theoretical depth. This work is motivated by the thrust of developing a deeper understanding of recurrent neural networks, particularly LSTM/GRU-like networks. As the highly complex structure of the recurrent unit in LSTM and GRU networks makes them difficult to analyze, our methodology in this research theme is to construct an alternative recurrent unit that is as simple as possible and yet also captures the key components of LSTM/GRU recurrent units. Such a unit can then be used for the study of recurrent networks and its structural simplicity may allow easier analysis. Towards that goal, we take a system-theoretic perspective to design a new recurrent unit, which we call the prototypical recurrent unit (PRU). Not only having minimal complexity, PRU is demonstrated experimentally to have comparable performance to GRU and LSTM units. This establishes PRU networks as a prototype for future study of LSTM/GRU-like recurrent networks. This paper also studies the memorization abilities of LSTM, GRU and PRU networks, motivated by the folk belief that such networks possess long-term memory. For this purpose, we design a simple and controllable task, called the ``memorization problem'', where the networks are trained to memorize certain targeted information. We show that the memorization performance of all three networks depends on the amount of targeted information, the amount of ``interfering'' information, and the state space dimension of the recurrent unit. Experiments are also performed for another controllable task, the adding problem, and similar conclusions are obtained.
[ "['Dingkun Long' 'Richong Zhang' 'Yongyi Mao']" ]
null
null
1611.06534
null
null
http://arxiv.org/abs/1611.06534v3
2019-11-05T16:35:05Z
2016-11-20T15:52:41Z
Linear Thompson Sampling Revisited
We derive an alternative proof for the regret of Thompson sampling (TS) in the stochastic linear bandit setting. While we obtain a regret bound of order $\widetilde{O}(d^{3/2}\sqrt{T})$ as in previous results, the proof sheds new light on the functioning of TS. We leverage the structure of the problem to show how the regret is related to the sensitivity (i.e., the gradient) of the objective function and how selecting optimal arms associated to \textit{optimistic} parameters does control it. Thus we show that TS can be seen as a generic randomized algorithm where the sampling distribution is designed to have a fixed probability of being optimistic, at the cost of an additional $\sqrt{d}$ regret factor compared to a UCB-like approach. Furthermore, we show that our proof can be readily applied to regularized linear optimization and generalized linear model problems.
[ "['Marc Abeille' 'Alessandro Lazaric']" ]
cs.NE cs.LG
null
1611.06539
null
null
http://arxiv.org/pdf/1611.06539v1
2016-11-20T16:05:07Z
2016-11-20T16:05:07Z
Efficient Stochastic Inference of Bitwise Deep Neural Networks
Recently published methods enable training of bitwise neural networks which allow reduced representation of down to a single bit per weight. We present a method that exploits ensemble decisions based on multiple stochastically sampled network models to increase performance figures of bitwise neural networks in terms of classification accuracy at inference. Our experiments with the CIFAR-10 and GTSRB datasets show that the performance of such network ensembles surpasses the performance of the high-precision base model. With this technique we achieve 5.81% best classification error on CIFAR-10 test set using bitwise networks. Concerning inference on embedded systems we evaluate these bitwise networks using a hardware efficient stochastic rounding procedure. Our work contributes to efficient embedded bitwise neural networks.
[ "['Sebastian Vogel' 'Christoph Schorn' 'Andre Guntoro' 'Gerd Ascheid']", "Sebastian Vogel, Christoph Schorn, Andre Guntoro, Gerd Ascheid" ]
stat.ML cs.LG stat.ME
null
1611.06585
null
null
http://arxiv.org/pdf/1611.06585v2
2017-02-19T17:30:28Z
2016-11-20T20:25:39Z
Variational Boosting: Iteratively Refining Posterior Approximations
We propose a black-box variational inference method to approximate intractable distributions with an increasingly rich approximating class. Our method, termed variational boosting, iteratively refines an existing variational approximation by solving a sequence of optimization problems, allowing the practitioner to trade computation time for accuracy. We show how to expand the variational approximating class by incorporating additional covariance structure and by introducing new components to form a mixture. We apply variational boosting to synthetic and real statistical models, and show that resulting posterior inferences compare favorably to existing posterior approximation algorithms in both accuracy and efficiency.
[ "Andrew C. Miller, Nicholas Foti, Ryan P. Adams", "['Andrew C. Miller' 'Nicholas Foti' 'Ryan P. Adams']" ]
cs.LG cs.CV
null
1611.06624
null
null
http://arxiv.org/pdf/1611.06624v3
2017-08-18T02:32:16Z
2016-11-21T01:10:50Z
Temporal Generative Adversarial Nets with Singular Value Clipping
In this paper, we propose a generative model, Temporal Generative Adversarial Nets (TGAN), which can learn a semantic representation of unlabeled videos, and is capable of generating videos. Unlike existing Generative Adversarial Nets (GAN)-based methods that generate videos with a single generator consisting of 3D deconvolutional layers, our model exploits two different types of generators: a temporal generator and an image generator. The temporal generator takes a single latent variable as input and outputs a set of latent variables, each of which corresponds to an image frame in a video. The image generator transforms a set of such latent variables into a video. To deal with instability in training of GAN with such advanced networks, we adopt a recently proposed model, Wasserstein GAN, and propose a novel method to train it stably in an end-to-end manner. The experimental results demonstrate the effectiveness of our methods.
[ "Masaki Saito, Eiichi Matsumoto, Shunta Saito", "['Masaki Saito' 'Eiichi Matsumoto' 'Shunta Saito']" ]
q-bio.QM cs.CV cs.LG
null
1611.06651
null
null
http://arxiv.org/pdf/1611.06651v2
2016-11-26T21:43:48Z
2016-11-21T05:12:44Z
Deep Learning for the Classification of Lung Nodules
Deep learning, as a promising new area of machine learning, has attracted a rapidly increasing attention in the field of medical imaging. Compared to the conventional machine learning methods, deep learning requires no hand-tuned feature extractor, and has shown a superior performance in many visual object recognition applications. In this study, we develop a deep convolutional neural network (CNN) and apply it to thoracic CT images for the classification of lung nodules. We present the CNN architecture and classification accuracy for the original images of lung nodules. In order to understand the features of lung nodules, we further construct new datasets, based on the combination of artificial geometric nodules and some transformations of the original images, as well as a stochastic nodule shape model. It is found that simplistic geometric nodules cannot capture the important features of lung nodules.
[ "['He Yang' 'Hengyong Yu' 'Ge Wang']", "He Yang, Hengyong Yu and Ge Wang" ]
stat.ML cs.LG
null
1611.06652
null
null
http://arxiv.org/pdf/1611.06652v1
2016-11-21T05:15:50Z
2016-11-21T05:15:50Z
Scalable Adaptive Stochastic Optimization Using Random Projections
Adaptive stochastic gradient methods such as AdaGrad have gained popularity in particular for training deep neural networks. The most commonly used and studied variant maintains a diagonal matrix approximation to second order information by accumulating past gradients which are used to tune the step size adaptively. In certain situations the full-matrix variant of AdaGrad is expected to attain better performance, however in high dimensions it is computationally impractical. We present Ada-LR and RadaGrad two computationally efficient approximations to full-matrix AdaGrad based on randomized dimensionality reduction. They are able to capture dependencies between features and achieve similar performance to full-matrix AdaGrad but at a much smaller computational cost. We show that the regret of Ada-LR is close to the regret of full-matrix AdaGrad which can have an up-to exponentially smaller dependence on the dimension than the diagonal variant. Empirically, we show that Ada-LR and RadaGrad perform similarly to full-matrix AdaGrad. On the task of training convolutional neural networks as well as recurrent neural networks, RadaGrad achieves faster convergence than diagonal AdaGrad.
[ "Gabriel Krummenacher and Brian McWilliams and Yannic Kilcher and\n Joachim M. Buhmann and Nicolai Meinshausen", "['Gabriel Krummenacher' 'Brian McWilliams' 'Yannic Kilcher'\n 'Joachim M. Buhmann' 'Nicolai Meinshausen']" ]
math.ST cs.LG stat.TH
null
1611.06670
null
null
http://arxiv.org/pdf/1611.06670v1
2016-11-21T07:03:46Z
2016-11-21T07:03:46Z
Error analysis of regularized least-square regression with Fredholm kernel
Learning with the Fredholm kernel has attracted increasing attention recently since it can effectively utilize the data information to improve prediction performance. Despite rapid progress on theoretical and experimental evaluations, its generalization analysis has not been explored in the learning theory literature. In this paper, we establish the generalization bound of least-square regularized regression with the Fredholm kernel, which implies that the fast learning rate $O(l^{-1})$ can be reached under mild capacity conditions. Simulated examples show that this Fredholm regression algorithm can achieve satisfactory prediction performance.
[ "['Yanfang Tao' 'Peipei Yuan' 'Biqin Song']" ]
cs.LG math.PR stat.ML
null
1611.06684
null
null
http://arxiv.org/pdf/1611.06684v1
2016-11-21T08:57:58Z
2016-11-21T08:57:58Z
Probabilistic Duality for Parallel Gibbs Sampling without Graph Coloring
We present a new notion of probabilistic duality for random variables involving mixture distributions. Using this notion, we show how to implement a highly-parallelizable Gibbs sampler for weakly coupled discrete pairwise graphical models with strictly positive factors that requires almost no preprocessing and is easy to implement. Moreover, we show how our method can be combined with blocking to improve mixing. Even though our method leads to inferior mixing times compared to a sequential Gibbs sampler, we argue that our method is still very useful for large dynamic networks, where factors are added and removed on a continuous basis, as it is hard to maintain a graph coloring in this setup. Similarly, our method is useful for parallelizing Gibbs sampling in graphical models that do not allow for graph colorings with a small number of colors such as densely connected graphs.
[ "['Lars Mescheder' 'Sebastian Nowozin' 'Andreas Geiger']", "Lars Mescheder, Sebastian Nowozin and Andreas Geiger" ]
cs.CV cs.LG
null
1611.06694
null
null
http://arxiv.org/pdf/1611.06694v1
2016-11-21T09:24:24Z
2016-11-21T09:24:24Z
Training Sparse Neural Networks
Deep neural networks with lots of parameters are typically used for large-scale computer vision tasks such as image classification. This is a result of using dense matrix multiplications and convolutions. However, sparse computations are known to be much more efficient. In this work, we train and build neural networks which implicitly use sparse computations. We introduce additional gate variables to perform parameter selection and show that this is equivalent to using a spike-and-slab prior. We experimentally validate our method on both small and large networks and achieve state-of-the-art compression results for sparse neural network models.
[ "Suraj Srinivas, Akshayvarun Subramanya, R. Venkatesh Babu", "['Suraj Srinivas' 'Akshayvarun Subramanya' 'R. Venkatesh Babu']" ]
math.OC cs.LG math.DS
null
1611.06730
null
null
http://arxiv.org/pdf/1611.06730v2
2017-09-20T07:32:28Z
2016-11-21T11:29:40Z
On the convergence of gradient-like flows with noisy gradient input
In view of solving convex optimization problems with noisy gradient input, we analyze the asymptotic behavior of gradient-like flows under stochastic disturbances. Specifically, we focus on the widely studied class of mirror descent schemes for convex programs with compact feasible regions, and we examine the dynamics' convergence and concentration properties in the presence of noise. In the vanishing noise limit, we show that the dynamics converge to the solution set of the underlying problem (a.s.). Otherwise, when the noise is persistent, we show that the dynamics are concentrated around interior solutions in the long run, and they converge to boundary solutions that are sufficiently "sharp". Finally, we show that a suitably rectified variant of the method converges irrespective of the magnitude of the noise (or the structure of the underlying convex program), and we derive an explicit estimate for its rate of convergence.
[ "['Panayotis Mertikopoulos' 'Mathias Staudigl']" ]
physics.data-an cond-mat.dis-nn cs.LG stat.ML
10.1103/PhysRevLett.118.138301
1611.06759
null
null
http://arxiv.org/abs/1611.06759v2
2017-03-02T21:50:02Z
2016-11-21T12:46:25Z
Emergence of Compositional Representations in Restricted Boltzmann Machines
Automatically extracting the complex set of features composing real high-dimensional data is crucial for achieving high performance in machine-learning tasks. Restricted Boltzmann Machines (RBMs) are empirically known to be efficient for this purpose, and to be able to generate distributed and graded representations of the data. We characterize the structural conditions (sparsity of the weights, low effective temperature, nonlinearities in the activation functions of hidden units, and adaptation of fields maintaining the activity in the visible layer) allowing RBMs to operate in such a compositional phase. Evidence is provided by the replica analysis of an adequate statistical ensemble of random RBMs and by RBMs trained on the handwritten digits dataset MNIST.
[ "J\\'er\\^ome Tubiana (LPTENS), R\\'emi Monasson (LPTENS)", "['Jérôme Tubiana' 'Rémi Monasson']" ]
cs.LG cs.CV
null
1611.06777
null
null
http://arxiv.org/pdf/1611.06777v1
2016-11-21T13:26:37Z
2016-11-21T13:26:37Z
Effective Deterministic Initialization for $k$-Means-Like Methods via Local Density Peaks Searching
The $k$-means clustering algorithm is popular but has the following main drawbacks: 1) the number of clusters, $k$, needs to be provided by the user in advance, 2) it can easily reach local minima with randomly selected initial centers, 3) it is sensitive to outliers, and 4) it can only deal with well separated hyperspherical clusters. In this paper, we propose a Local Density Peaks Searching (LDPS) initialization framework to address these issues. The LDPS framework includes two basic components: one is the local density, which characterizes the density distribution of a data set, and the other is the local distinctiveness index (LDI), which we introduce to characterize how distinctive a data point is compared with its neighbors. Based on these two components, we search for the local density peaks, characterized by high local densities and high LDIs, to deal with 1) and 2). Moreover, we detect outliers, characterized by low local densities but high LDIs, and exclude them before clustering begins. Finally, we apply the LDPS initialization framework to $k$-medoids, which is a variant of $k$-means that chooses data samples as centers, with diverse similarity measures other than the Euclidean distance, to fix the last drawback of $k$-means. Combining the LDPS initialization framework with $k$-means and $k$-medoids, we obtain two novel clustering methods called LDPS-means and LDPS-medoids, respectively. Experiments on synthetic data sets verify the effectiveness of the proposed methods, especially when the ground-truth cluster number $k$ is large. Further, experiments on several real-world data sets (Handwritten Pendigits, Coil-20, Coil-100 and the Olivetti Face Database) illustrate that our methods outperform analogous approaches on both estimating $k$ and unsupervised object categorization.
[ "['Fengfu Li' 'Hong Qiao' 'Bo Zhang']", "Fengfu Li, Hong Qiao, and Bo Zhang" ]
cs.LG cs.AI cs.CV cs.NE
null
1611.06791
null
null
http://arxiv.org/pdf/1611.06791v1
2016-11-21T14:06:48Z
2016-11-21T14:06:48Z
Generalized Dropout
Deep Neural Networks often require good regularizers to generalize well. Dropout is one such regularizer that is widely used among Deep Learning practitioners. Recent work has shown that Dropout can also be viewed as performing Approximate Bayesian Inference over the network parameters. In this work, we generalize this notion and introduce a rich family of regularizers which we call Generalized Dropout. One set of methods in this family, called Dropout++, is a version of Dropout with trainable parameters. Classical Dropout emerges as a special case of this method. Another member of this family selects the width of neural network layers. Experiments show that these methods help in improving generalization performance over Dropout.
[ "Suraj Srinivas, R. Venkatesh Babu", "['Suraj Srinivas' 'R. Venkatesh Babu']" ]
cs.LG cs.AI
null
1611.06824
null
null
http://arxiv.org/pdf/1611.06824v3
2017-02-22T13:12:33Z
2016-11-21T15:05:55Z
Options Discovery with Budgeted Reinforcement Learning
We consider the problem of learning hierarchical policies for reinforcement learning that are able to discover options, where an option corresponds to a sub-policy over a set of primitive actions. Various models have been proposed over the last decade, but they usually rely on a predefined set of options. We specifically address the problem of automatically discovering options in decision processes. We describe a new learning model called Budgeted Option Neural Network (BONN) able to discover options based on a budgeted learning objective. The BONN model is evaluated on different classical RL problems, demonstrating interesting results both quantitatively and qualitatively.
[ "Aur\\'elia L\\'eon, Ludovic Denoyer", "['Aurélia Léon' 'Ludovic Denoyer']" ]
stat.ML cs.LG
null
1611.06863
null
null
http://arxiv.org/pdf/1611.06863v1
2016-11-21T16:08:12Z
2016-11-21T16:08:12Z
Probabilistic structure discovery in time series data
Existing methods for structure discovery in time series data construct interpretable, compositional kernels for Gaussian process regression models. While the learned Gaussian process model provides posterior mean and variance estimates, typically the structure is learned via a greedy optimization procedure. This restricts the space of possible solutions and leads to over-confident uncertainty estimates. We introduce a fully Bayesian approach, inferring a full posterior over structures, which more reliably captures the uncertainty of the model.
[ "David Janz, Brooks Paige, Tom Rainforth, Jan-Willem van de Meent,\n Frank Wood", "['David Janz' 'Brooks Paige' 'Tom Rainforth' 'Jan-Willem van de Meent'\n 'Frank Wood']" ]
cs.LG cs.AI stat.ML
null
1611.06882
null
null
http://arxiv.org/pdf/1611.06882v1
2016-11-21T16:25:34Z
2016-11-21T16:25:34Z
Learning From Graph Neighborhoods Using LSTMs
Many prediction problems can be phrased as inferences over local neighborhoods of graphs. The graph represents the interaction between entities, and the neighborhood of each entity contains information that allows the inferences or predictions. We present an approach for applying machine learning directly to such graph neighborhoods, yielding predictions for graph nodes on the basis of the structure of their local neighborhood and the features of the nodes in it. Our approach allows predictions to be learned directly from examples, bypassing the step of creating and tuning an inference model or summarizing the neighborhoods via a fixed set of hand-crafted features. The approach is based on a multi-level architecture built from Long Short-Term Memory neural nets (LSTMs); the LSTMs learn how to summarize the neighborhood from data. We demonstrate the effectiveness of the proposed technique on a synthetic example and on real-world data related to crowdsourced grading, Bitcoin transactions, and Wikipedia edit reversions.
[ "['Rakshit Agrawal' 'Luca de Alfaro' 'Vassilis Polychronopoulos']", "Rakshit Agrawal, Luca de Alfaro, Vassilis Polychronopoulos" ]
cs.LG cs.CL stat.ML
null
1611.06933
null
null
http://arxiv.org/pdf/1611.06933v1
2016-11-21T18:30:17Z
2016-11-21T18:30:17Z
Unsupervised Learning for Lexicon-Based Classification
In lexicon-based classification, documents are assigned labels by comparing the number of words that appear from two opposed lexicons, such as positive and negative sentiment. Creating such word lists is often easier than labeling instances, and they can be debugged by non-experts if classification performance is unsatisfactory. However, there is little analysis or justification of this classification heuristic. This paper describes a set of assumptions that can be used to derive a probabilistic justification for lexicon-based classification, as well as an analysis of its expected accuracy. One key assumption behind lexicon-based classification is that all words in each lexicon are equally predictive. This is rarely true in practice, which is why lexicon-based approaches are usually outperformed by supervised classifiers that learn distinct weights on each word from labeled instances. This paper shows that it is possible to learn such weights without labeled data, by leveraging co-occurrence statistics across the lexicons. This offers the best of both worlds: light supervision in the form of lexicons, and data-driven classification with higher accuracy than traditional word-counting heuristics.
[ "['Jacob Eisenstein']", "Jacob Eisenstein" ]
cs.CV cs.CL cs.LG
null
1611.06950
null
null
http://arxiv.org/pdf/1611.06950v1
2016-11-21T19:00:32Z
2016-11-21T19:00:32Z
Statistical Learning for OCR Text Correction
The accuracy of Optical Character Recognition (OCR) is crucial to the success of subsequent applications in the text analysis pipeline. Recent models of OCR post-processing significantly improve the quality of OCR-generated text, but are still prone to suggesting correction candidates from limited observations while insufficiently accounting for the characteristics of OCR errors. In this paper, we show how to enlarge the candidate suggestion space by using an external corpus and integrating OCR-specific features into a regression approach to correct OCR-generated errors. The evaluation results show that our model can correct 61.5% of the OCR errors (considering the top 1 suggestion) and 71.5% of the OCR errors (considering the top 3 suggestions), for cases where the theoretical correction upper bound is 78%.
[ "Jie Mei, Aminul Islam, Yajing Wu, Abidalrahman Moh'd, Evangelos E.\n Milios" ]
cs.LG cs.AI
null
1611.06953
null
null
http://arxiv.org/pdf/1611.06953v1
2016-11-18T02:11:40Z
2016-11-18T02:11:40Z
Associative Adversarial Networks
We propose a higher-level associative memory for learning adversarial networks. Generative adversarial network (GAN) framework has a discriminator and a generator network. The generator (G) maps white noise (z) to data samples while the discriminator (D) maps data samples to a single scalar. To do so, G learns how to map from high-level representation space to data space, and D learns to do the opposite. We argue that higher-level representation spaces need not necessarily follow a uniform probability distribution. In this work, we use Restricted Boltzmann Machines (RBMs) as a higher-level associative memory and learn the probability distribution for the high-level features generated by D. The associative memory samples its underlying probability distribution and G learns how to map these samples to data space. The proposed associative adversarial networks (AANs) are generative models in the higher-levels of the learning, and use adversarial non-stochastic models D and G for learning the mapping between data and higher-level representation spaces. Experiments show the potential of the proposed networks.
[ "Tarik Arici and Asli Celikyilmaz", "['Tarik Arici' 'Asli Celikyilmaz']" ]
stat.ML cs.LG math.PR
null
1611.06972
null
null
http://arxiv.org/pdf/1611.06972v6
2018-11-13T01:25:12Z
2016-11-21T19:47:08Z
Measuring Sample Quality with Diffusions
Stein's method for measuring convergence to a continuous target distribution relies on an operator characterizing the target and Stein factor bounds on the solutions of an associated differential equation. While such operators and bounds are readily available for a diversity of univariate targets, few multivariate targets have been analyzed. We introduce a new class of characterizing operators based on Ito diffusions and develop explicit multivariate Stein factor bounds for any target with a fast-coupling Ito diffusion. As example applications, we develop computable and convergence-determining diffusion Stein discrepancies for log-concave, heavy-tailed, and multimodal targets and use these quality measures to select the hyperparameters of biased Markov chain Monte Carlo (MCMC) samplers, compare random and deterministic quadrature rules, and quantify bias-variance tradeoffs in approximate MCMC. Our results establish a near-linear relationship between diffusion Stein discrepancies and Wasserstein distances, improving upon past work even for strongly log-concave targets. The exposed relationship between Stein factors and Markov process coupling may be of independent interest.
[ "Jackson Gorham, Andrew B. Duncan, Sebastian J. Vollmer, and Lester\n Mackey", "['Jackson Gorham' 'Andrew B. Duncan' 'Sebastian J. Vollmer'\n 'Lester Mackey']" ]
cs.CL cs.LG cs.SD
null
1611.06986
null
null
http://arxiv.org/pdf/1611.06986v1
2016-11-21T20:08:51Z
2016-11-21T20:08:51Z
Robust end-to-end deep audiovisual speech recognition
Speech is one of the most effective ways of communication among humans. Even though audio is the most common way of transmitting speech, very important information can be found in other modalities, such as vision. Vision is particularly useful when the acoustic signal is corrupted. Multi-modal speech recognition, however, has not yet found widespread use, mostly because the temporal alignment and fusion of the different information sources is challenging. This paper presents an end-to-end audiovisual speech recognizer (AVSR), based on recurrent neural networks (RNN) with a connectionist temporal classification (CTC) loss function. CTC creates sparse "peaky" output activations, and we analyze the differences in the alignments of output targets (phonemes or visemes) between audio-only, video-only, and audio-visual feature representations. We present the first such experiments on the large vocabulary IBM ViaVoice database, which outperform previously published approaches on phone accuracy in clean and noisy conditions.
[ "['Ramon Sanabria' 'Florian Metze' 'Fernando De La Torre']", "Ramon Sanabria, Florian Metze and Fernando De La Torre" ]
stat.ML cs.LG
null
1611.06996
null
null
http://arxiv.org/pdf/1611.06996v1
2016-11-21T20:24:58Z
2016-11-21T20:24:58Z
Spatial contrasting for deep unsupervised learning
Convolutional networks have established their place over the last few years as the best-performing models for various visual tasks. They are, however, most suited for supervised learning from large amounts of labeled data. Previous attempts have been made to use unlabeled data to improve model performance by applying unsupervised techniques. These attempts require different architectures and training methods. In this work we present a novel approach for unsupervised training of Convolutional networks that is based on contrasting between spatial regions within images. This criterion can be employed within conventional neural networks and trained using standard techniques such as SGD and back-propagation, thus complementing supervised methods.
[ "Elad Hoffer, Itay Hubara, Nir Ailon", "['Elad Hoffer' 'Itay Hubara' 'Nir Ailon']" ]
cs.LG stat.ML
null
1611.07012
null
null
http://arxiv.org/pdf/1611.07012v3
2017-04-01T22:21:35Z
2016-11-21T20:59:22Z
GRAM: Graph-based Attention Model for Healthcare Representation Learning
Deep learning methods exhibit promising performance for predictive modeling in healthcare, but two important challenges remain: (i) data insufficiency: often in healthcare predictive modeling, the sample size is insufficient for deep learning methods to achieve satisfactory results; and (ii) interpretation: the representations learned by deep learning methods should align with medical knowledge. To address these challenges, we propose GRAM, a GRaph-based Attention Model that supplements electronic health records (EHR) with hierarchical information inherent to medical ontologies. Based on the data volume and the ontology structure, GRAM represents a medical concept as a combination of its ancestors in the ontology via an attention mechanism. We compared the predictive performance (i.e. accuracy, data needs, interpretability) of GRAM to various methods including the recurrent neural network (RNN) in two sequential diagnoses prediction tasks and one heart failure prediction task. Compared to the basic RNN, GRAM achieved 10% higher accuracy for predicting diseases rarely observed in the training data and 3% improved area under the ROC curve for predicting heart failure using an order of magnitude less training data. Additionally, unlike other methods, the medical concept representations learned by GRAM are well aligned with the medical ontology. Finally, GRAM exhibits intuitive attention behaviors by adaptively generalizing to higher-level concepts when facing data insufficiency at the lower-level concepts.
[ "['Edward Choi' 'Mohammad Taha Bahadori' 'Le Song' 'Walter F. Stewart'\n 'Jimeng Sun']", "Edward Choi, Mohammad Taha Bahadori, Le Song, Walter F. Stewart,\n Jimeng Sun" ]
cs.LG cs.AI stat.ML
null
1611.07054
null
null
http://arxiv.org/pdf/1611.07054v1
2016-11-21T21:09:33Z
2016-11-21T21:09:33Z
An Efficient Training Algorithm for Kernel Survival Support Vector Machines
Survival analysis is a fundamental tool in medical research to identify predictors of adverse events and develop systems for clinical decision support. In order to leverage large amounts of patient data, efficient optimisation routines are paramount. We propose an efficient training algorithm for the kernel survival support vector machine (SSVM). We directly optimise the primal objective function and employ truncated Newton optimisation and order statistic trees to significantly lower computational costs compared to previous training algorithms, which require $O(n^4)$ space and $O(p n^6)$ time for datasets with $n$ samples and $p$ features. Our results demonstrate that our proposed optimisation scheme allows analysing data of a much larger scale with no loss in prediction performance. Experiments on synthetic and 5 real-world datasets show that our technique outperforms existing kernel SSVM formulations if the amount of right censoring is high ($\geq85\%$), and performs comparably otherwise.
[ "['Sebastian Pölsterl' 'Nassir Navab' 'Amin Katouzian']", "Sebastian P\\\"olsterl, Nassir Navab, Amin Katouzian" ]
stat.CO cs.LG stat.ML
10.1016/j.dsp.2017.11.012
1611.07056
null
null
http://arxiv.org/abs/1611.07056v2
2017-12-20T14:59:08Z
2016-11-21T21:13:00Z
The Recycling Gibbs Sampler for Efficient Learning
Monte Carlo methods are essential tools for Bayesian inference. Gibbs sampling is a well-known Markov chain Monte Carlo (MCMC) algorithm, extensively used in signal processing, machine learning, and statistics, employed to draw samples from complicated high-dimensional posterior distributions. The key point for the successful application of the Gibbs sampler is the ability to efficiently draw samples from the full-conditional probability density functions. Since this is not possible in the general case, auxiliary samples, whose information is eventually disregarded, must be generated in order to speed up the convergence of the chain. In this work, we show that these auxiliary samples can be recycled within the Gibbs estimators, improving their efficiency with no extra cost. This novel scheme arises naturally after pointing out the relationship between the standard Gibbs sampler and the chain rule used for sampling purposes. Numerical simulations involving simple and real inference problems confirm the excellent performance of the proposed scheme in terms of accuracy and computational efficiency. In particular, we give empirical evidence of performance in a toy example, inference of Gaussian process hyperparameters, and learning dependence graphs through regression.
[ "['Luca Martino' 'Victor Elvira' 'Gustau Camps-Valls']", "Luca Martino, Victor Elvira, Gustau Camps-Valls" ]
cs.AI cs.LG stat.ML
null
1611.07078
null
null
http://arxiv.org/pdf/1611.07078v2
2017-08-17T09:00:01Z
2016-11-21T22:06:23Z
A Deep Learning Approach for Joint Video Frame and Reward Prediction in Atari Games
Reinforcement learning is concerned with identifying reward-maximizing behaviour policies in environments that are initially unknown. State-of-the-art reinforcement learning approaches, such as deep Q-networks, are model-free and learn to act effectively across a wide range of environments such as Atari games, but require huge amounts of data. Model-based techniques are more data-efficient, but need to acquire explicit knowledge about the environment. In this paper, we take a step towards using model-based techniques in environments with a high-dimensional visual state space by demonstrating that it is possible to learn system dynamics and the reward structure jointly. Our contribution is to extend a recently developed deep neural network for video frame prediction in Atari games to enable reward prediction as well. To this end, we phrase a joint optimization problem for minimizing both video frame and reward reconstruction loss, and adapt network parameters accordingly. Empirical evaluations on five Atari games demonstrate accurate cumulative reward prediction of up to 200 frames. We consider these results as opening up important directions for model-based reinforcement learning in complex, initially unknown environments.
[ "Felix Leibfried, Nate Kushman, Katja Hofmann", "['Felix Leibfried' 'Nate Kushman' 'Katja Hofmann']" ]