categories
string
doi
string
id
string
year
float64
venue
string
link
string
updated
string
published
string
title
string
abstract
string
authors
list
cs.PL cs.LG cs.SE
null
1610.09543
null
null
http://arxiv.org/pdf/1610.09543v1
2016-10-29T17:10:15Z
2016-10-29T17:10:15Z
FEAST: An Automated Feature Selection Framework for Compilation Tasks
The success of the application of machine-learning techniques to compilation tasks can be largely attributed to the recent development and advancement of program characterization, a process that numerically or structurally quantifies a target program. While great achievements have been made in identifying key features to characterize programs, choosing a correct set of features for a specific compiler task remains an ad hoc procedure. In order to guarantee a comprehensive coverage of features, compiler engineers usually need to select an excessive number of features. This, unfortunately, can lead to a selection of multiple similar features, which in turn creates a new problem of bias that emphasizes certain aspects of a program's characteristics, hence reducing the accuracy and performance of the target compiler task. In this paper, we propose FEAture Selection for compilation Tasks (FEAST), an efficient and automated framework for determining the most relevant and representative features from a feature pool. Specifically, FEAST utilizes widely used statistics and machine-learning tools, including LASSO and sequential forward and backward selection, for automatic feature selection, and can in general be applied to any numerical feature set. This paper further proposes an automated approach to compiler parameter assignment for assessing the performance of FEAST. Extensive experimental results demonstrate that, under the compiler parameter assignment task, FEAST can achieve comparable results with about 18% of the features automatically selected from the entire feature pool. We also inspect these selected features and discuss their roles in program execution.
[ "Pai-Shun Ting, Chun-Chen Tu, Pin-Yu Chen, Ya-Yun Lo, Shin-Ming Cheng", "['Pai-Shun Ting' 'Chun-Chen Tu' 'Pin-Yu Chen' 'Ya-Yun Lo'\n 'Shin-Ming Cheng']" ]
cs.LG
null
1610.09555
null
null
http://arxiv.org/pdf/1610.09555v2
2018-05-09T13:54:12Z
2016-10-29T18:32:27Z
TensorLy: Tensor Learning in Python
Tensors are higher-order extensions of matrices. While matrix methods form the cornerstone of machine learning and data analysis, tensor methods have been gaining increasing traction. However, software support for tensor operations is not on the same footing. In order to bridge this gap, we have developed \emph{TensorLy}, a high-level API for tensor methods and deep tensorized neural networks in Python. TensorLy aims to follow the same standards adopted by the main projects of the Python scientific community, and integrates seamlessly with them. Its BSD license makes it suitable for both academic and commercial applications. TensorLy's backend system allows users to perform computations with NumPy, MXNet, PyTorch, TensorFlow and CuPy, and these computations can be scaled across multiple CPU or GPU machines. In addition, using the deep-learning frameworks as backends allows users to easily design and train deep tensorized neural networks. TensorLy is available at https://github.com/tensorly/tensorly
[ "['Jean Kossaifi' 'Yannis Panagakis' 'Anima Anandkumar' 'Maja Pantic']", "Jean Kossaifi, Yannis Panagakis, Anima Anandkumar and Maja Pantic" ]
cs.LG
null
1610.09559
null
null
http://arxiv.org/pdf/1610.09559v4
2017-06-29T15:46:55Z
2016-10-29T18:46:11Z
Fair Algorithms for Infinite and Contextual Bandits
We study fairness in linear bandit problems. Starting from the notion of meritocratic fairness introduced in Joseph et al. [2016], we carry out a more refined analysis of a more general problem, achieving better performance guarantees with fewer modelling assumptions on the number and structure of available choices as well as the number selected. We also analyze the previously-unstudied question of fairness in infinite linear bandit problems, obtaining instance-dependent regret upper bounds as well as lower bounds demonstrating that this instance-dependence is necessary. The result is a framework for meritocratic fairness in an online linear setting that is substantially more powerful, general, and realistic than the current state of the art.
[ "['Matthew Joseph' 'Michael Kearns' 'Jamie Morgenstern' 'Seth Neel'\n 'Aaron Roth']", "Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, and\n Aaron Roth" ]
cs.LG cs.NE
null
1610.09608
null
null
http://arxiv.org/pdf/1610.09608v1
2016-10-30T06:34:19Z
2016-10-30T06:34:19Z
A Theoretical Study of The Relationship Between Whole An ELM Network and Its Subnetworks
A biological neural network is constituted by numerous subnetworks and modules with different functionalities. For an artificial neural network, the relationship between a network and its subnetworks is also important and useful for both theoretical and algorithmic research, e.g., it can be exploited to develop incremental or parallel network training algorithms. In this paper we explore the relationship between an ELM neural network and its subnetworks. To the best of our knowledge, we are the first to prove a theorem showing that an ELM neural network can be scattered into subnetworks and that its optimal solution can be constructed recursively from the optimal solutions of these subnetworks. Based on this theorem, we also present two algorithms to train a large ELM neural network efficiently: one is a parallel network training algorithm and the other is an incremental network training algorithm. The experimental results demonstrate the usefulness of the theorem and the validity of the developed algorithms.
[ "['Enmei Tu' 'Guanghao Zhang' 'Lily Rachmawati' 'Eshan Rajabally'\n 'Guang-Bin Huang']", "Enmei Tu, Guanghao Zhang, Lily Rachmawati, Eshan Rajabally and\n Guang-Bin Huang" ]
q-bio.NC cs.CV cs.LG
10.1016/j.cognition.2018.11.001
1610.09625
null
null
http://arxiv.org/abs/1610.09625v1
2016-10-30T10:26:22Z
2016-10-30T10:26:22Z
Discovering containment: from infants to machines
Current artificial learning systems can recognize thousands of visual categories, or play Go at a champion's level, but cannot explain infants' learning, in particular the ability to learn complex concepts without guidance, in a specific order. A notable example is the category of 'containers' and the notion of containment, one of the earliest spatial relations to be learned, starting already at 2.5 months and preceding other common relations (e.g., support). Such spontaneous unsupervised learning stands in contrast with current highly successful computational models, which learn in a supervised manner, that is, by using large data sets of labeled examples. How can meaningful concepts be learned without guidance, and what determines the trajectory of infant learning, making some notions appear consistently earlier than others?
[ "Shimon Ullman, Nimrod Dorfman, Daniel Harari", "['Shimon Ullman' 'Nimrod Dorfman' 'Daniel Harari']" ]
cs.LG cs.NE
null
1610.09639
null
null
http://arxiv.org/pdf/1610.09639v1
2016-10-30T11:57:20Z
2016-10-30T11:57:20Z
Compact Deep Convolutional Neural Networks With Coarse Pruning
The learning capability of a neural network improves with increasing depth at higher computational costs. Wider layers with dense kernel connectivity patterns further increase this cost and may hinder real-time inference. We propose feature map and kernel level pruning for reducing the computational complexity of a deep convolutional neural network. Pruning feature maps reduces the width of a layer and hence does not need any sparse representation. Further, kernel pruning converts the dense connectivity pattern into a sparse one. Due to their coarse nature, these pruning granularities can be exploited by GPU and VLSI based implementations. We propose a simple and generic strategy to choose the least adversarial pruning masks for both granularities. The pruned networks are retrained, which compensates for the loss in accuracy. We obtain the best pruning ratios when we prune a network with both granularities. Experiments with the CIFAR-10 dataset show that more than 85% sparsity can be induced in the convolution layers with less than a 1% increase in the misclassification rate of the baseline network.
[ "Sajid Anwar, Wonyong Sung", "['Sajid Anwar' 'Wonyong Sung']" ]
cs.LG
null
1610.09650
null
null
http://arxiv.org/pdf/1610.09650v2
2016-11-02T16:32:23Z
2016-10-30T13:54:39Z
Deep Model Compression: Distilling Knowledge from Noisy Teachers
The remarkable successes of deep learning models across various applications have resulted in the design of deeper networks that can solve complex problems. However, the increasing depth of such models also results in a higher storage and runtime complexity, which restricts the deployability of such very deep models on mobile and portable devices with limited storage and battery capacity. While many methods have been proposed for deep model compression in recent years, almost all of them have focused on reducing storage complexity. In this work, we extend the teacher-student framework for deep model compression, since it has the potential to address runtime and training-time complexity too. We propose a simple methodology to include a noise-based regularizer while training the student from the teacher, which provides a healthy improvement in the performance of the student network. Our experiments on the CIFAR-10, SVHN and MNIST datasets show promising improvement, with the best performance on the CIFAR-10 dataset. We also conduct a comprehensive empirical evaluation of the proposed method under related settings on the CIFAR-10 dataset to show the promise of the proposed approach.
[ "['Bharat Bhusan Sau' 'Vineeth N. Balasubramanian']" ]
cs.LG
null
1610.09716
null
null
http://arxiv.org/pdf/1610.09716v1
2016-10-30T22:07:16Z
2016-10-30T22:07:16Z
Doubly Convolutional Neural Networks
Building large models with parameter sharing accounts for most of the success of deep convolutional neural networks (CNNs). In this paper, we propose doubly convolutional neural networks (DCNNs), which significantly improve the performance of CNNs by further exploring this idea. Instead of allocating a set of convolutional filters that are independently learned, a DCNN maintains groups of filters where filters within each group are translated versions of each other. Practically, a DCNN can be easily implemented by a two-step convolution procedure, which is supported by most modern deep learning libraries. We perform extensive experiments on three image classification benchmarks: CIFAR-10, CIFAR-100 and ImageNet, and show that DCNNs consistently outperform other competing architectures. We have also verified that replacing a convolutional layer with a doubly convolutional layer at any depth of a CNN can improve its performance. Moreover, various design choices of DCNNs are demonstrated, which shows that DCNNs can serve the dual purpose of building more accurate models and/or reducing the memory footprint without sacrificing accuracy.
[ "['Shuangfei Zhai' 'Yu Cheng' 'Weining Lu' 'Zhongfei Zhang']", "Shuangfei Zhai, Yu Cheng, Weining Lu, Zhongfei Zhang" ]
cs.LG
null
1610.09726
null
null
http://arxiv.org/pdf/1610.09726v1
2016-10-30T23:07:49Z
2016-10-30T23:07:49Z
The Multi-fidelity Multi-armed Bandit
We study a variant of the classical stochastic $K$-armed bandit where observing the outcome of each arm is expensive, but cheap approximations to this outcome are available. For example, in online advertising the performance of an ad can be approximated by displaying it for shorter time periods or to narrower audiences. We formalise this task as a multi-fidelity bandit, where, at each time step, the forecaster may choose to play an arm at any one of $M$ fidelities. The highest fidelity (desired outcome) expends cost $\lambda^{(M)}$. The $m^{\text{th}}$ fidelity (an approximation) expends $\lambda^{(m)} < \lambda^{(M)}$ and returns a biased estimate of the highest fidelity. We develop MF-UCB, a novel upper confidence bound procedure for this setting, and prove that it naturally adapts to the sequence of available approximations and costs, thus attaining better regret than naive strategies which ignore the approximations. For instance, in the above online advertising example, MF-UCB would use the lower fidelities to quickly eliminate suboptimal ads and reserve the larger, more expensive experiments for a small set of promising candidates. We complement this result with a lower bound and show that MF-UCB is nearly optimal under certain conditions.
[ "['Kirthevasan Kandasamy' 'Gautam Dasarathy' 'Jeff Schneider'\n 'Barnabás Póczos']", "Kirthevasan Kandasamy and Gautam Dasarathy and Jeff Schneider and\n Barnab\\'as P\\'oczos" ]
cs.LG stat.ML
null
1610.09730
null
null
http://arxiv.org/pdf/1610.09730v1
2016-10-30T23:39:18Z
2016-10-30T23:39:18Z
Active Learning from Imperfect Labelers
We study active learning where the labeler can not only return incorrect labels but also abstain from labeling. We consider different noise and abstention conditions of the labeler. We propose an algorithm which utilizes abstention responses, and analyze its statistical consistency and query complexity under fairly natural assumptions on the noise and abstention rate of the labeler. This algorithm is adaptive in the sense that it can automatically request fewer queries with a more informed or less noisy labeler. We couple our algorithm with lower bounds to show that under some technical conditions, it achieves nearly optimal query complexity.
[ "['Songbai Yan' 'Kamalika Chaudhuri' 'Tara Javidi']" ]
cs.CL cs.LG
null
1610.09756
null
null
http://arxiv.org/pdf/1610.09756v2
2016-11-16T17:15:14Z
2016-10-31T01:31:52Z
Towards Deep Learning in Hindi NER: An approach to tackle the Labelled Data Scarcity
In this paper we describe an end-to-end neural model for Named Entity Recognition (NER) which is based on a bi-directional RNN-LSTM. Almost all NER systems for Hindi use language-specific features and handcrafted rules with gazetteers. Our model is language independent and uses no domain-specific features or any handcrafted rules. Our models rely on semantic information in the form of word vectors which are learnt by an unsupervised learning algorithm on an unannotated corpus. Our model attained state-of-the-art performance in both English and Hindi without the use of any morphological analysis or gazetteers of any sort.
[ "['Vinayak Athavale' 'Shreenivas Bharadwaj' 'Monik Pamecha' 'Ameya Prabhu'\n 'Manish Shrivastava']", "Vinayak Athavale, Shreenivas Bharadwaj, Monik Pamecha, Ameya Prabhu\n and Manish Shrivastava" ]
cs.SI cs.LG
null
1610.09769
null
null
http://arxiv.org/pdf/1610.09769v1
2016-10-31T03:15:02Z
2016-10-31T03:15:02Z
Meta-Path Guided Embedding for Similarity Search in Large-Scale Heterogeneous Information Networks
Most real-world data can be modeled as heterogeneous information networks (HINs) consisting of vertices of multiple types and their relationships. Searching for similar vertices of the same type in large HINs, such as bibliographic networks and business-review networks, is a fundamental problem with broad applications. Although similarity search in HINs has been studied previously, most existing approaches neither explore rich semantic information embedded in the network structures nor take the user's preference as guidance. In this paper, we re-examine similarity search in HINs and propose a novel embedding-based framework. It models vertices as low-dimensional vectors to explore network structure-embedded similarity. To accommodate user preferences in defining similarity semantics, our proposed framework, ESim, accepts user-defined meta-paths as guidance to learn vertex vectors in a user-preferred embedding space. Moreover, an efficient and parallel sampling-based optimization algorithm has been developed to learn embeddings in large-scale HINs. Extensive experiments on real-world large-scale HINs demonstrate a significant improvement in the effectiveness of ESim over several state-of-the-art algorithms, as well as its scalability.
[ "['Jingbo Shang' 'Meng Qu' 'Jialu Liu' 'Lance M. Kaplan' 'Jiawei Han'\n 'Jian Peng']", "Jingbo Shang, Meng Qu, Jialu Liu, Lance M. Kaplan, Jiawei Han, Jian\n Peng" ]
cs.LG cs.AI
null
1610.09778
null
null
http://arxiv.org/pdf/1610.09778v1
2016-10-31T03:43:04Z
2016-10-31T03:43:04Z
DPPred: An Effective Prediction Framework with Concise Discriminative Patterns
In the literature, two series of models have been proposed to address prediction problems including classification and regression. Simple models, such as generalized linear models, have ordinary performance but strong interpretability on a set of simple features. The other series, including tree-based models, organizes numerical, categorical and high-dimensional features into a comprehensive structure with rich interpretable information in the data. In this paper, we propose a novel Discriminative Pattern-based Prediction framework (DPPred) that accomplishes prediction tasks by combining the advantages of both: effectiveness and interpretability. Specifically, DPPred adopts the concise discriminative patterns that lie on the prefix paths from the root to leaf nodes in tree-based models. DPPred selects a limited number of useful discriminative patterns by searching for the most effective pattern combination to fit generalized linear models. Extensive experiments show that in many scenarios, DPPred provides accuracy competitive with the state-of-the-art as well as valuable interpretability for developers and experts. In particular, taking a clinical application dataset as a case study, our DPPred outperforms the baselines by using only 40 concise discriminative patterns out of a potentially exponentially large set of patterns.
[ "Jingbo Shang, Meng Jiang, Wenzhu Tong, Jinfeng Xiao, Jian Peng, Jiawei\n Han", "['Jingbo Shang' 'Meng Jiang' 'Wenzhu Tong' 'Jinfeng Xiao' 'Jian Peng'\n 'Jiawei Han']" ]
cs.LG cs.NE stat.ML
null
1610.09887
null
null
http://arxiv.org/pdf/1610.09887v3
2020-05-13T12:08:04Z
2016-10-31T12:08:46Z
Depth-Width Tradeoffs in Approximating Natural Functions with Neural Networks
We provide several new depth-based separation results for feed-forward neural networks, proving that various types of simple and natural functions can be better approximated using deeper networks than shallower ones, even if the shallower networks are much larger. This includes indicators of balls and ellipses; non-linear functions which are radial with respect to the $L_1$ norm; and smooth non-linear functions. We also show that these gaps can be observed experimentally: Increasing the depth indeed allows better learning than increasing width, when training neural networks to learn an indicator of a unit ball.
[ "['Itay Safran' 'Ohad Shamir']", "Itay Safran, Ohad Shamir" ]
cs.CL cs.LG
null
1610.09893
null
null
http://arxiv.org/pdf/1610.09893v1
2016-10-31T12:24:13Z
2016-10-31T12:24:13Z
LightRNN: Memory and Computation-Efficient Recurrent Neural Networks
Recurrent neural networks (RNNs) have achieved state-of-the-art performance in many natural language processing tasks, such as language modeling and machine translation. However, when the vocabulary is large, the RNN model becomes very big (e.g., possibly beyond the memory capacity of a GPU device) and its training becomes very inefficient. In this work, we propose a novel technique to tackle this challenge. The key idea is to use 2-Component (2C) shared embedding for word representations. We allocate every word in the vocabulary into a table, each row of which is associated with a vector, and each column associated with another vector. Depending on its position in the table, a word is jointly represented by two components: a row vector and a column vector. Since the words in the same row share the row vector and the words in the same column share the column vector, we only need $2 \sqrt{|V|}$ vectors to represent a vocabulary of $|V|$ unique words, far fewer than the $|V|$ vectors required by existing approaches. Based on the 2-Component shared embedding, we design a new RNN algorithm and evaluate it using the language modeling task on several benchmark datasets. The results show that our algorithm significantly reduces the model size and speeds up the training process without sacrificing accuracy (it achieves similar, if not better, perplexity compared to state-of-the-art language models). Remarkably, on the One-Billion-Word benchmark dataset, our algorithm achieves comparable perplexity to previous language models whilst reducing the model size by a factor of 40-100 and speeding up the training process by a factor of 2. We name our proposed algorithm \emph{LightRNN} to reflect its very small model size and very high training speed.
[ "['Xiang Li' 'Tao Qin' 'Jian Yang' 'Tie-Yan Liu']", "Xiang Li and Tao Qin and Jian Yang and Tie-Yan Liu" ]
cs.AI cs.LG stat.ML
null
1610.09900
null
null
http://arxiv.org/pdf/1610.09900v2
2017-03-02T17:11:01Z
2016-10-31T12:53:20Z
Inference Compilation and Universal Probabilistic Programming
We introduce a method for using deep neural networks to amortize the cost of inference in models from the family induced by universal probabilistic programming languages, establishing a framework that combines the strengths of probabilistic programming and deep learning methods. We call what we do "compilation of inference" because our method transforms a denotational specification of an inference problem in the form of a probabilistic program written in a universal programming language into a trained neural network denoted in a neural network specification language. When at test time this neural network is fed observational data and executed, it performs approximate inference in the original model specified by the probabilistic program. Our training objective and learning procedure are designed to allow the trained neural network to be used as a proposal distribution in a sequential importance sampling inference engine. We illustrate our method on mixture models and Captcha solving and show significant speedups in the efficiency of inference.
[ "['Tuan Anh Le' 'Atilim Gunes Baydin' 'Frank Wood']" ]
cs.LG
null
1610.09903
null
null
http://arxiv.org/pdf/1610.09903v1
2016-10-31T12:57:25Z
2016-10-31T12:57:25Z
Learning Runtime Parameters in Computer Systems with Delayed Experience Injection
Learning effective configurations in computer systems without hand-crafting models for every parameter is a long-standing problem. This paper investigates the use of deep reinforcement learning for runtime parameters of cloud databases under latency constraints. Cloud services serve up to thousands of concurrent requests per second and can adjust critical parameters by leveraging performance metrics. In this work, we use continuous deep reinforcement learning to learn optimal cache expirations for HTTP caching in content delivery networks. To this end, we introduce a technique for asynchronous experience management called delayed experience injection, which facilitates delayed reward and next-state computation in concurrent environments where measurements are not immediately available. Evaluation results show that our approach based on normalized advantage functions and asynchronous CPU-only training outperforms a statistical estimator.
[ "['Michael Schaarschmidt' 'Felix Gessert' 'Valentin Dalibard' 'Eiko Yoneki']", "Michael Schaarschmidt, Felix Gessert, Valentin Dalibard, Eiko Yoneki" ]
stat.ML cs.LG
10.1109/TSP.2017.2726991
1610.09915
null
null
http://arxiv.org/abs/1610.09915v1
2016-10-31T13:36:53Z
2016-10-31T13:36:53Z
Complex-Valued Kernel Methods for Regression
Usually, complex-valued RKHS are presented as a straightforward application of the real-valued case. In this paper we prove that this procedure yields a limited solution for regression. We show that another kernel, here denoted as the pseudo-kernel, is needed to learn any function in complex-valued fields. Accordingly, we derive a novel RKHS that includes it, the widely RKHS (WRKHS). When the pseudo-kernel cancels, WRKHS reduces to the complex-valued RKHS of previous approaches. We address the kernel and pseudo-kernel design, paying attention to the case where the kernel and the pseudo-kernel are complex-valued. In the included experiments we report remarkable improvements in simple scenarios where real and imaginary parts have different similitude relations for given inputs, or cases where real and imaginary parts are correlated. In the context of these novel results we revisit the problem of non-linear channel equalization to show that the WRKHS helps to design more efficient solutions.
[ "Rafael Boloix-Tortosa, Juan Jos\\'e Murillo-Fuentes, Irene Santos\n Vel\\'azquez, and Fernando P\\'erez-Cruz", "['Rafael Boloix-Tortosa' 'Juan José Murillo-Fuentes'\n 'Irene Santos Velázquez' 'Fernando Pérez-Cruz']" ]
physics.data-an cs.LG hep-ex
10.1088/1742-6596/762/1/012052
1610.09932
null
null
http://arxiv.org/abs/1610.09932v1
2016-10-19T15:13:03Z
2016-10-19T15:13:03Z
Support Vector Machines and Generalisation in HEP
We review the concept of support vector machines (SVMs) and discuss examples of their use. One of the benefits of SVM algorithms, compared with neural networks and decision trees, is that they can be less susceptible to overfitting than those other algorithms are to overtraining. This issue is related to the generalisation of a multivariate algorithm (MVA); a problem that has often been overlooked in particle physics. We discuss cross validation and how it can be used to improve the generalisation of an MVA in the context of High Energy Physics analyses. The examples presented use the Toolkit for Multivariate Analysis (TMVA) based on ROOT and describe our improvements to the SVM functionality and new tools introduced for cross validation within this framework.
[ "A. Bethani, A. J. Bevan, J. Hays and T. J. Stevenson", "['A. Bethani' 'A. J. Bevan' 'J. Hays' 'T. J. Stevenson']" ]
cs.CL cs.LG cs.NE
null
1610.09975
null
null
http://arxiv.org/pdf/1610.09975v1
2016-10-31T15:36:42Z
2016-10-31T15:36:42Z
Neural Speech Recognizer: Acoustic-to-Word LSTM Model for Large Vocabulary Speech Recognition
We present results that show it is possible to build a competitive, greatly simplified, large vocabulary continuous speech recognition system with whole words as acoustic units. We model the output vocabulary of about 100,000 words directly using deep bi-directional LSTM RNNs with CTC loss. The model is trained on 125,000 hours of semi-supervised acoustic training data, which enables us to alleviate the data sparsity problem for word models. We show that the CTC word models work very well as an end-to-end all-neural speech recognition model without the use of traditional context-dependent sub-word phone units that require a pronunciation lexicon, and without any language model, removing the need to decode. We demonstrate that the CTC word models perform better than a strong, more complex, state-of-the-art baseline with sub-word units.
[ "['Hagen Soltau' 'Hank Liao' 'Hasim Sak']", "Hagen Soltau, Hank Liao, Hasim Sak" ]
stat.ML cs.LG
null
1610.10060
null
null
http://arxiv.org/pdf/1610.10060v2
2017-04-15T01:10:43Z
2016-10-31T18:43:21Z
Optimization for Large-Scale Machine Learning with Distributed Features and Observations
As the size of modern data sets exceeds the disk and memory capacities of a single computer, machine learning practitioners have resorted to parallel and distributed computing. Given that optimization is one of the pillars of machine learning and predictive modeling, distributed optimization methods have recently garnered ample attention in the literature. Although previous research has mostly focused on settings where either the observations or the features of the problem at hand are stored in distributed fashion, the situation where both are partitioned across the nodes of a computer cluster (doubly distributed) has barely been studied. In this work we propose two doubly distributed optimization algorithms. The first one falls under the umbrella of distributed dual coordinate ascent methods, while the second one belongs to the class of stochastic gradient/coordinate descent hybrid methods. We conduct numerical experiments in Spark using real-world and simulated data sets and study the scaling properties of our methods. Our empirical evaluation of the proposed algorithms demonstrates that they outperform a block distributed ADMM method, which, to the best of our knowledge, is the only other existing doubly distributed optimization algorithm.
[ "['Alexandros Nathan' 'Diego Klabjan']" ]
cs.NE cs.LG stat.ML
null
1610.10087
null
null
http://arxiv.org/pdf/1610.10087v1
2016-10-31T19:44:50Z
2016-10-31T19:44:50Z
Tensor Switching Networks
We present a novel neural network algorithm, the Tensor Switching (TS) network, which generalizes the Rectified Linear Unit (ReLU) nonlinearity to tensor-valued hidden units. The TS network copies its entire input vector to different locations in an expanded representation, with the location determined by its hidden unit activity. In this way, even a simple linear readout from the TS representation can implement a highly expressive deep-network-like function. The TS network hence avoids the vanishing gradient problem by construction, at the cost of larger representation size. We develop several methods to train the TS network, including equivalent kernels for infinitely wide and deep TS networks, a one-pass linear learning algorithm, and two backpropagation-inspired representation learning algorithms. Our experimental results demonstrate that the TS network is indeed more expressive and consistently learns faster than standard ReLU networks.
[ "['Chuan-Yung Tsai' 'Andrew Saxe' 'David Cox']", "Chuan-Yung Tsai, Andrew Saxe, David Cox" ]
cs.CL cs.LG
null
1610.10099
null
null
http://arxiv.org/pdf/1610.10099v2
2017-03-15T18:09:51Z
2016-10-31T19:56:39Z
Neural Machine Translation in Linear Time
We present a novel neural network for processing sequences. The ByteNet is a one-dimensional convolutional neural network that is composed of two parts, one to encode the source sequence and the other to decode the target sequence. The two network parts are connected by stacking the decoder on top of the encoder and preserving the temporal resolution of the sequences. To address the differing lengths of the source and the target, we introduce an efficient mechanism by which the decoder is dynamically unfolded over the representation of the encoder. The ByteNet uses dilation in the convolutional layers to increase its receptive field. The resulting network has two core properties: it runs in time that is linear in the length of the sequences and it sidesteps the need for excessive memorization. The ByteNet decoder attains state-of-the-art performance on character-level language modelling and outperforms the previous best results obtained with recurrent networks. The ByteNet also achieves state-of-the-art performance on character-to-character machine translation on the English-to-German WMT translation task, surpassing comparable neural translation models that are based on recurrent networks with attentional pooling and run in quadratic time. We find that the latent alignment structure contained in the representations reflects the expected alignment between the tokens.
[ "['Nal Kalchbrenner' 'Lasse Espeholt' 'Karen Simonyan' 'Aaron van den Oord'\n 'Alex Graves' 'Koray Kavukcuoglu']", "Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord,\n Alex Graves, Koray Kavukcuoglu" ]
cs.CL cs.AI cs.LG
null
1611.00020
null
null
http://arxiv.org/pdf/1611.00020v4
2017-04-23T07:16:13Z
2016-10-31T20:07:23Z
Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision
Harnessing the statistical power of neural networks to perform language understanding and symbolic reasoning is difficult when it requires executing efficient discrete operations against a large knowledge-base. In this work, we introduce a Neural Symbolic Machine (NSM), which contains (a) a neural "programmer", i.e., a sequence-to-sequence model that maps language utterances to programs and utilizes a key-variable memory to handle compositionality, and (b) a symbolic "computer", i.e., a Lisp interpreter that performs program execution and helps find good programs by pruning the search space. We apply REINFORCE to directly optimize the task reward of this structured prediction problem. To train with weak supervision and improve the stability of REINFORCE, we augment it with an iterative maximum-likelihood training process. NSM outperforms the state-of-the-art on the WebQuestionsSP dataset when trained from question-answer pairs only, without requiring any feature engineering or domain-specific knowledge.
[ "['Chen Liang' 'Jonathan Berant' 'Quoc Le' 'Kenneth D. Forbus' 'Ni Lao']" ]
stat.ML cs.LG cs.NE
null
1611.00035
null
null
http://arxiv.org/pdf/1611.00035v1
2016-10-31T20:43:21Z
2016-10-31T20:43:21Z
Full-Capacity Unitary Recurrent Neural Networks
Recurrent neural networks are powerful models for processing sequential data, but they are generally plagued by vanishing and exploding gradient problems. Unitary recurrent neural networks (uRNNs), which use unitary recurrence matrices, have recently been proposed as a means to avoid these issues. However, in previous experiments, the recurrence matrices were restricted to be a product of parameterized unitary matrices, and an open question remains: when does such a parameterization fail to represent all unitary matrices, and how does this restricted representational capacity limit what can be learned? To address this question, we propose full-capacity uRNNs that optimize their recurrence matrix over all unitary matrices, leading to significantly improved performance over uRNNs that use a restricted-capacity recurrence matrix. Our contribution consists of two main components. First, we provide a theoretical argument to determine if a unitary parameterization has restricted capacity. Using this argument, we show that a recently proposed unitary parameterization has restricted capacity for hidden state dimension greater than 7. Second, we show how a complete, full-capacity unitary recurrence matrix can be optimized over the differentiable manifold of unitary matrices. The resulting multiplicative gradient step is very simple and does not require gradient clipping or learning rate adaptation. We confirm the utility of our claims by empirically evaluating our new full-capacity uRNNs on both synthetic and natural data, achieving superior performance compared to both LSTMs and the original restricted-capacity uRNNs.
[ "Scott Wisdom, Thomas Powers, John R. Hershey, Jonathan Le Roux, and\n Les Atlas", "['Scott Wisdom' 'Thomas Powers' 'John R. Hershey' 'Jonathan Le Roux'\n 'Les Atlas']" ]
cs.LG cs.CV
null
1611.00050
null
null
http://arxiv.org/pdf/1611.00050v2
2017-03-15T16:01:43Z
2016-10-31T21:16:46Z
Exploiting Spatio-Temporal Structure with Recurrent Winner-Take-All Networks
We propose a convolutional recurrent neural network, with Winner-Take-All dropout, for high dimensional unsupervised feature learning in multi-dimensional time series. We apply the proposed method to object recognition with temporal context in videos and obtain better results than comparable methods in the literature, including the Deep Predictive Coding Networks previously proposed by Chalasani and Principe. Our contributions can be summarized as a scalable reinterpretation of the Deep Predictive Coding Networks trained end-to-end with backpropagation through time, an extension of the previously proposed Winner-Take-All Autoencoders to sequences in time, and a new technique for initializing and regularizing convolutional-recurrent neural networks.
[ "['Eder Santana' 'Matthew Emigh' 'Pablo Zegers' 'Jose C Principe']" ]
cs.LG stat.ML
10.1109/BigData.2017.8258344
1611.00058
null
null
http://arxiv.org/abs/1611.00058v3
2017-05-19T19:13:20Z
2016-10-31T22:04:54Z
Kernel Bandwidth Selection for SVDD: Peak Criterion Approach for Large Data
Support Vector Data Description (SVDD) provides a useful approach to constructing a description of multivariate data for single-class classification and outlier detection, with various practical applications. The Gaussian kernel used in the SVDD formulation allows a flexible data description defined by the observations designated as support vectors. The data boundary of such a description is non-spherical and conforms to the geometric features of the data. By varying the Gaussian kernel bandwidth parameter, the SVDD-generated boundary can be made either smoother (more spherical) or tighter/more jagged. The former case may lead to underfitting, whereas the latter may result in overfitting. The Peak criterion has been proposed to select an optimal value of the kernel bandwidth that strikes a balance between the smoothness of the data boundary and its ability to capture the general geometric shape of the data. The Peak criterion involves training SVDD at various values of the kernel bandwidth parameter. When training datasets are large, the time required to obtain the optimal value of the Gaussian kernel bandwidth parameter according to the Peak method can become prohibitively long. This paper proposes an extension of the Peak method for the case of large data. The proposed method gives good results when applied to several datasets. Two existing alternative methods of computing the Gaussian kernel bandwidth parameter (Coefficient of Variation and Distance to the Farthest Neighbor) were modified to allow comparison with the proposed method on convergence. The empirical comparison demonstrates the advantage of the proposed method.
[ "Sergiy Peredriy, Deovrat Kakde, Arin Chaudhuri", "['Sergiy Peredriy' 'Deovrat Kakde' 'Arin Chaudhuri']" ]
cs.LG math.PR stat.ML
null
1611.00065
null
null
http://arxiv.org/pdf/1611.00065v3
2017-03-20T20:07:55Z
2016-10-31T22:24:49Z
Bayesian Adaptive Data Analysis Guarantees from Subgaussianity
The new field of adaptive data analysis seeks to provide algorithms and provable guarantees for models of machine learning that allow researchers to reuse their data, which normally falls outside of the usual statistical paradigm of static data analysis. In 2014, Dwork, Feldman, Hardt, Pitassi, Reingold and Roth introduced one potential model and proposed several solutions based on differential privacy. In previous work in 2016, we described a problem with this model and instead proposed a Bayesian variant, but also found that the analogous Bayesian methods cannot achieve the same statistical guarantees as in the static case. In this paper, we prove the first positive results for the Bayesian model, showing that with a Dirichlet prior, the posterior mean algorithm indeed matches the statistical guarantees of the static case. The main ingredient is a new theorem showing that the $\mathrm{Beta}(\alpha,\beta)$ distribution is subgaussian with variance proxy $O(1/(\alpha+\beta+1))$, a concentration result also of independent interest. We provide two proofs of this result: a probabilistic proof utilizing a simple condition for the raw moments of a positive random variable and a learning-theoretic proof based on considering the beta distribution as a posterior, both of which have implications to other related problems.
[ "['Sam Elder']", "Sam Elder" ]
cs.CV cs.LG
null
1611.00137
null
null
http://arxiv.org/pdf/1611.00137v1
2016-11-01T06:03:48Z
2016-11-01T06:03:48Z
Embedding Deep Metric for Person Re-identification: A Study Against Large Variations
Person re-identification is challenging due to the large variations of pose, illumination, occlusion and camera view. Owing to these variations, the pedestrian data is distributed as highly-curved manifolds in the feature space, despite the current convolutional neural networks' (CNNs') capability of feature extraction. However, the distribution is unknown, so it is difficult to use the geodesic distance when comparing two samples. In practice, current deep embedding methods use the Euclidean distance for both training and test. On the other hand, manifold learning methods suggest using the Euclidean distance in the local range, combined with the graphical relationship between samples, to approximate the geodesic distance. From this point of view, selecting suitable positive (i.e., intra-class) training samples within a local range is critical for training the CNN embedding, especially when the data has large intra-class variations. In this paper, we propose a novel moderate positive sample mining method to train a robust CNN for person re-identification, dealing with the problem of large variation. In addition, we improve the learning with a metric weight constraint, so that the learned metric has a better generalization ability. Experiments show that these two strategies are effective in learning robust deep metrics for person re-identification, and accordingly our deep model significantly outperforms state-of-the-art methods on several benchmarks of person re-identification. Therefore, the study presented in this paper may be useful in inspiring new designs of deep models for person re-identification.
[ "['Hailin Shi' 'Yang Yang' 'Xiangyu Zhu' 'Shengcai Liao' 'Zhen Lei'\n 'Weishi Zheng' 'Stan Z. Li']", "Hailin Shi, Yang Yang, Xiangyu Zhu, Shengcai Liao, Zhen Lei, Weishi\n Zheng, Stan Z. Li" ]
cs.LG cs.CL cs.IR
null
1611.00138
null
null
http://arxiv.org/pdf/1611.00138v1
2016-11-01T06:05:49Z
2016-11-01T06:05:49Z
MusicMood: Predicting the mood of music from song lyrics using machine learning
Sentiment prediction of contemporary music can have a wide range of applications in modern society, for instance, selecting music for public institutions such as hospitals or restaurants to potentially improve the emotional well-being of personnel, patients, and customers, respectively. In this project, we present a music recommendation system built upon a naive Bayes classifier, trained to predict the sentiment of songs based on song lyrics alone. The experimental results show that music corresponding to a happy mood can be detected with high precision based on text features obtained from song lyrics.
[ "['Sebastian Raschka']", "Sebastian Raschka" ]
cs.LG cs.IR
null
1611.00144
null
null
http://arxiv.org/pdf/1611.00144v1
2016-11-01T07:10:22Z
2016-11-01T07:10:22Z
Product-based Neural Networks for User Response Prediction
Predicting user responses, such as clicks and conversions, is of great importance and has found its usage in many Web applications including recommender systems, web search and online advertising. The data in those applications is mostly categorical and contains multiple fields; a typical representation is to transform it into a high-dimensional sparse binary feature representation via one-hot encoding. Faced with this extreme sparsity, traditional models may be limited in their capacity to mine shallow patterns from the data, i.e. low-order feature combinations. Deep models like deep neural networks, on the other hand, cannot be directly applied to the high-dimensional input because of the huge feature space. In this paper, we propose Product-based Neural Networks (PNN) with an embedding layer to learn a distributed representation of the categorical data, a product layer to capture interactive patterns between inter-field categories, and further fully connected layers to explore high-order feature interactions. Our experimental results on two large-scale real-world ad click datasets demonstrate that PNNs consistently outperform the state-of-the-art models on various metrics.
[ "Yanru Qu, Han Cai, Kan Ren, Weinan Zhang, Yong Yu, Ying Wen, Jun Wang", "['Yanru Qu' 'Han Cai' 'Kan Ren' 'Weinan Zhang' 'Yong Yu' 'Ying Wen'\n 'Jun Wang']" ]
cs.LG cs.AI
null
1611.00175
null
null
http://arxiv.org/pdf/1611.00175v1
2016-11-01T10:06:57Z
2016-11-01T10:06:57Z
Robust Spectral Inference for Joint Stochastic Matrix Factorization
Spectral inference provides fast algorithms and provable optimality for latent topic analysis. But for real data these algorithms require additional ad-hoc heuristics, and even then often produce unusable results. We explain this poor performance by casting the problem of topic inference in the framework of Joint Stochastic Matrix Factorization (JSMF) and showing that previous methods violate the theoretical conditions necessary for a good solution to exist. We then propose a novel rectification method that learns high quality topics and their interactions even on small, noisy data. This method achieves results comparable to probabilistic techniques in several domains while maintaining scalability and provable optimality.
[ "Moontae Lee, David Bindel, David Mimno", "['Moontae Lee' 'David Bindel' 'David Mimno']" ]
cs.OH cs.LG
null
1611.00228
null
null
http://arxiv.org/pdf/1611.00228v1
2016-10-31T10:58:12Z
2016-10-31T10:58:12Z
Application Specific Instrumentation (ASIN): A Bio-inspired Paradigm to Instrumentation using recognition before detection
In this paper we present a new scheme for instrumentation, which has been inspired by the way small mammals sense their environment. We call this scheme Application Specific Instrumentation (ASIN). A conventional instrumentation system focuses on gathering as much information about the scene as possible. This is usually a generic system whose data can be used by another system to take a specific action. ASIN fuses these two steps into one. The major merit of the proposed scheme is that it uses low resolution sensors and much less computational overhead to give good performance for a highly specialised application.
[ "Amit Kumar Mishra", "['Amit Kumar Mishra']" ]
cs.LG
null
1611.00252
null
null
http://arxiv.org/pdf/1611.00252v2
2017-03-01T23:40:46Z
2016-10-30T21:09:27Z
Improving a Credit Scoring Model by Incorporating Bank Statement Derived Features
In this paper, we investigate the extent to which features derived from bank statements provided by loan applicants, and which are not declared on an application form, can enhance a credit scoring model for a New Zealand lending company. Exploring the potential of such information to improve credit scoring models in this manner has not been studied previously. We construct a baseline model based solely on the existing scoring features obtained from the loan application form, and a second baseline model based solely on the new bank statement-derived features. A combined feature model is then created by augmenting the application form features with the new bank statement derived features. Our experimental results using ROC analysis show that a combined feature model performs better than both of the two baseline models, and show that a number of the bank statement-derived features have value in improving the credit scoring model. The target data set used for modelling was highly imbalanced, and Naive Bayes was found to be the best performing model, and outperformed a number of other classifiers commonly used in credit scoring, suggesting its potential for future use on highly imbalanced data sets.
[ "Rory P. Bunker, Wenjun Zhang, M. Asif Naeem", "['Rory P. Bunker' 'Wenjun Zhang' 'M. Asif Naeem']" ]
cs.LG cs.DS stat.ML
null
1611.00255
null
null
http://arxiv.org/pdf/1611.00255v3
2019-07-08T12:34:27Z
2016-11-01T14:56:33Z
Stationary time-vertex signal processing
This paper considers regression tasks involving high-dimensional multivariate processes whose structure is dependent on some known graph topology. We put forth a new definition of time-vertex wide-sense stationarity, or joint stationarity for short, that goes beyond product graphs. Joint stationarity helps by reducing the estimation variance and recovery complexity. In particular, for any jointly stationary process (a) one reliably learns the covariance structure from as little as a single realization of the process, and (b) solves MMSE recovery problems, such as interpolation and denoising, in computational time nearly linear in the number of edges and timesteps. Experiments with three datasets suggest that joint stationarity can yield accuracy improvements in the recovery of high-dimensional processes evolving over a graph, even when the latter is only approximately known, or the process is not strictly stationary.
[ "['Andreas Loukas' 'Nathanaël Perraudin']", "Andreas Loukas and Nathana\\\"el Perraudin" ]
cs.LG
null
1611.00301
null
null
http://arxiv.org/pdf/1611.00301v1
2016-11-01T17:17:26Z
2016-11-01T17:17:26Z
Recurrent Neural Radio Anomaly Detection
We introduce a powerful recurrent neural network based method for novelty detection and apply it to the detection of radio anomalies. This approach holds promise for significantly increasing the ability of naive anomaly detection to detect small anomalies in highly complex multi-user radio bands. We demonstrate the efficacy of this approach on a number of common real over-the-air radio communication bands of interest, quantify detection performance in terms of probability of detection and false alarm rates across a range of interference-to-band-power ratios, and compare against baseline methods.
[ "[\"Timothy J O'Shea\" 'T. Charles Clancy' 'Robert W. McGwier']", "Timothy J O'Shea, T. Charles Clancy, Robert W. McGwier" ]
cs.LG cs.IT math.IT stat.ML
null
1611.00303
null
null
http://arxiv.org/pdf/1611.00303v2
2017-01-17T18:23:49Z
2016-11-01T17:21:50Z
Semi-Supervised Radio Signal Identification
Radio emitter recognition in dense multi-user environments is an important tool for optimizing spectrum utilization, identifying and minimizing interference, and enforcing spectrum policy. Radio data is readily available and easy to obtain from an antenna, but labeled and curated data is often scarce making supervised learning strategies difficult and time consuming in practice. We demonstrate that semi-supervised learning techniques can be used to scale learning beyond supervised datasets, allowing for discerning and recalling new radio signals by using sparse signal representations based on both unsupervised and supervised methods for nonlinear feature learning and clustering methods.
[ "Timothy J. O'Shea, Nathan West, Matthew Vondal, T. Charles Clancy", "[\"Timothy J. O'Shea\" 'Nathan West' 'Matthew Vondal' 'T. Charles Clancy']" ]
cs.SD cs.LG stat.ML
null
1611.00326
null
null
http://arxiv.org/pdf/1611.00326v3
2017-04-20T18:43:29Z
2016-11-01T18:38:12Z
Enhanced Factored Three-Way Restricted Boltzmann Machines for Speech Detection
In this letter, we propose enhanced factored three-way restricted Boltzmann machines (EFTW-RBMs) for speech detection. The proposed model incorporates conditional feature learning by multiplying the dynamical state of the third unit, which allows a modulation over the visible-hidden node pairs. Instead of stacking previous frames of speech as the third unit in a recursive manner, correlation-related weighting coefficients are assigned to the contextual neighboring frames. Specifically, a threshold function is designed to capture the long-term features and blend the globally stored speech structure. A factored low-rank approximation is introduced to reduce the parameters of the three-dimensional interaction tensor, on which a non-negativity constraint is imposed to address the sparsity characteristic. Validations through the area under the ROC curve (AUC) and the signal distortion ratio (SDR) show that our approach outperforms several existing 1D and 2D (i.e., time and time-frequency domain) speech detection algorithms in various noisy environments.
[ "['Pengfei Sun' 'Jun Qin']", "Pengfei Sun and Jun Qin" ]
stat.ML cs.LG stat.CO stat.ME
null
1611.00328
null
null
http://arxiv.org/pdf/1611.00328v4
2017-11-12T19:00:57Z
2016-11-01T18:40:23Z
Variational Inference via $\chi$-Upper Bound Minimization
Variational inference (VI) is widely used as an efficient alternative to Markov chain Monte Carlo. It posits a family of approximating distributions $q$ and finds the closest member to the exact posterior $p$. Closeness is usually measured via a divergence $D(q || p)$ from $q$ to $p$. While successful, this approach also has problems. Notably, it typically leads to underestimation of the posterior variance. In this paper we propose CHIVI, a black-box variational inference algorithm that minimizes $D_{\chi}(p || q)$, the $\chi$-divergence from $p$ to $q$. CHIVI minimizes an upper bound of the model evidence, which we term the $\chi$ upper bound (CUBO). Minimizing the CUBO leads to improved posterior uncertainty, and it can also be used with the classical VI lower bound (ELBO) to provide a sandwich estimate of the model evidence. We study CHIVI on three models: probit regression, Gaussian process classification, and a Cox process model of basketball plays. When compared to expectation propagation and classical VI, CHIVI produces better error rates and more accurate estimates of posterior variance.
[ "Adji B. Dieng, Dustin Tran, Rajesh Ranganath, John Paisley, David M.\n Blei", "['Adji B. Dieng' 'Dustin Tran' 'Rajesh Ranganath' 'John Paisley'\n 'David M. Blei']" ]
stat.ML cs.LG stat.ME
null
1611.00336
null
null
http://arxiv.org/pdf/1611.00336v2
2016-11-02T18:06:16Z
2016-11-01T19:04:47Z
Stochastic Variational Deep Kernel Learning
Deep kernel learning combines the non-parametric flexibility of kernel methods with the inductive biases of deep learning architectures. We propose a novel deep kernel learning model and stochastic variational inference procedure which generalizes deep kernel learning approaches to enable classification, multi-task learning, additive covariance structures, and stochastic gradient training. Specifically, we apply additive base kernels to subsets of output features from deep neural architectures, and jointly learn the parameters of the base kernels and deep network through a Gaussian process marginal likelihood objective. Within this framework, we derive an efficient form of stochastic variational inference which leverages local kernel interpolation, inducing points, and structure exploiting algebra. We show improved performance over stand alone deep networks, SVMs, and state of the art scalable Gaussian processes on several classification benchmarks, including an airline delay dataset containing 6 million training points, CIFAR, and ImageNet.
[ "Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, Eric P. Xing", "['Andrew Gordon Wilson' 'Zhiting Hu' 'Ruslan Salakhutdinov' 'Eric P. Xing']" ]
math.OC cs.LG
null
1611.00347
null
null
http://arxiv.org/pdf/1611.00347v2
2018-02-07T21:25:45Z
2016-11-01T19:40:33Z
Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
Recently, there has been growing interest in developing optimization methods for solving large-scale machine learning problems. Most of these problems boil down to the problem of minimizing an average of a finite set of smooth and strongly convex functions where the number of functions $n$ is large. The gradient descent method (GD) is successful in minimizing convex problems at a fast linear rate; however, it is not applicable to the considered large-scale optimization setting because of the high computational complexity. Incremental methods resolve this drawback of gradient methods by replacing the required gradient for the descent direction with an incremental gradient approximation. They operate by evaluating one gradient per iteration and using the average of the $n$ available gradients as a gradient approximation. Although incremental methods reduce the computational cost of GD, their convergence rates do not justify their advantage relative to GD in terms of the total number of gradient evaluations until convergence. In this paper, we introduce a Double Incremental Aggregated Gradient method (DIAG) that computes the gradient of only one function at each iteration, which is chosen based on a cyclic scheme, and uses the aggregated average gradient of all the functions to approximate the full gradient. The proposed DIAG method uses averages of both iterates and gradients, as opposed to classic incremental methods, which utilize gradient averages but do not utilize iterate averages. We prove that not only does the proposed DIAG method converge linearly to the optimal solution, but also its linear convergence factor justifies the advantage of incremental methods over GD. In particular, we prove that the worst case performance of DIAG is better than the worst case performance of GD.
[ "Aryan Mokhtari and Mert G\\\"urb\\\"uzbalaban and Alejandro Ribeiro", "['Aryan Mokhtari' 'Mert Gürbüzbalaban' 'Alejandro Ribeiro']" ]
cs.SI cs.LG stat.ML
null
1611.00350
null
null
http://arxiv.org/pdf/1611.00350v2
2019-01-19T16:55:50Z
2016-11-01T19:46:01Z
Adversarial Influence Maximization
We consider the problem of influence maximization in fixed networks for contagion models in an adversarial setting. The goal is to select an optimal set of nodes to seed the influence process, such that the number of influenced nodes at the conclusion of the campaign is as large as possible. We formulate the problem as a repeated game between a player and adversary, where the adversary specifies the edges along which the contagion may spread, and the player chooses sets of nodes to influence in an online fashion. We establish upper and lower bounds on the minimax pseudo-regret in both undirected and directed networks.
[ "Justin Khim, Varun Jog, Po-Ling Loh", "['Justin Khim' 'Varun Jog' 'Po-Ling Loh']" ]
cs.CY cs.CL cs.LG
null
1611.00356
null
null
http://arxiv.org/pdf/1611.00356v1
2016-11-01T19:59:48Z
2016-11-01T19:59:48Z
Using Artificial Intelligence to Identify State Secrets
Whether officials can be trusted to protect national security information has become a matter of great public controversy, reigniting a long-standing debate about the scope and nature of official secrecy. The declassification of millions of electronic records has made it possible to analyze these issues with greater rigor and precision. Using machine-learning methods, we examined nearly a million State Department cables from the 1970s to identify features of records that are more likely to be classified, such as international negotiations, military operations, and high-level communications. Even with incomplete data, algorithms can use such features to identify 90% of classified cables with <11% false positives. But our results also show that there are longstanding problems in the identification of sensitive information. Error analysis reveals many examples of both overclassification and underclassification. This indicates both the need for research on inter-coder reliability among officials as to what constitutes classified material and the opportunity to develop recommender systems to better manage both classification and declassification.
[ "['Renato Rocha Souza' 'Flavio Codeco Coelho' 'Rohan Shah'\n 'Matthew Connelly']", "Renato Rocha Souza, Flavio Codeco Coelho, Rohan Shah, Matthew Connelly" ]
cs.HC cs.LG
null
1611.00379
null
null
http://arxiv.org/pdf/1611.00379v1
2016-11-01T20:35:46Z
2016-11-01T20:35:46Z
The Machine Learning Algorithm as Creative Musical Tool
Machine learning is the capacity of a computational system to learn structures from datasets in order to make predictions on newly seen data. Such an approach offers a significant advantage in music scenarios in which musicians can teach the system to learn an idiosyncratic style, or can break the rules to explore the system's capacity in unexpected ways. In this chapter we draw on music, machine learning, and human-computer interaction to elucidate an understanding of machine learning algorithms as creative tools for music and the sonic arts. We motivate a new understanding of learning algorithms as human-computer interfaces. We show that, like other interfaces, learning algorithms can be characterised by the ways their affordances intersect with goals of human users. We also argue that the nature of interaction between users and algorithms impacts the usability and usefulness of those algorithms in profound ways. This human-centred view of machine learning motivates our concluding discussion of what it means to employ machine learning as a creative tool.
[ "Rebecca Fiebrink, Baptiste Caramiaux", "['Rebecca Fiebrink' 'Baptiste Caramiaux']" ]
cs.IR cs.CL cs.LG
null
1611.00384
null
null
http://arxiv.org/pdf/1611.00384v2
2019-09-21T13:59:21Z
2016-11-01T20:48:34Z
CB2CF: A Neural Multiview Content-to-Collaborative Filtering Model for Completely Cold Item Recommendations
In Recommender Systems research, algorithms are often characterized as either Collaborative Filtering (CF) or Content Based (CB). CF algorithms are trained using a dataset of user preferences while CB algorithms are typically based on item profiles. These approaches harness different data sources and therefore the resulting recommended items are generally very different. This paper presents CB2CF, a deep neural multiview model that serves as a bridge from items' content into their CF representations. CB2CF is a real-world algorithm designed for Microsoft Store services that handle around a billion users worldwide. CB2CF is demonstrated on movies and apps recommendations, where it is shown to outperform an alternative CB model on completely cold items.
[ "['Oren Barkan' 'Noam Koenigstein' 'Eylon Yogev' 'Ori Katz']", "Oren Barkan, Noam Koenigstein, Eylon Yogev and Ori Katz" ]
cs.LG
null
1611.00429
null
null
http://arxiv.org/pdf/1611.00429v3
2017-09-25T15:10:54Z
2016-11-02T00:16:18Z
Distributed Mean Estimation with Limited Communication
Motivated by the need for distributed learning and optimization algorithms with low communication cost, we study communication efficient algorithms for distributed mean estimation. Unlike previous works, we make no probabilistic assumptions on the data. We first show that for $d$ dimensional data with $n$ clients, a naive stochastic binary rounding approach yields a mean squared error (MSE) of $\Theta(d/n)$ and uses a constant number of bits per dimension per client. We then extend this naive algorithm in two ways: we show that applying a structured random rotation before quantization reduces the error to $\mathcal{O}((\log d)/n)$ and a better coding strategy further reduces the error to $\mathcal{O}(1/n)$ and uses a constant number of bits per dimension per client. We also show that the latter coding strategy is optimal up to a constant in the minimax sense, i.e., it achieves the best MSE for a given communication cost. We finally demonstrate the practicality of our algorithms by applying them to distributed Lloyd's algorithm for k-means and power iteration for PCA.
[ "['Ananda Theertha Suresh' 'Felix X. Yu' 'Sanjiv Kumar'\n 'H. Brendan McMahan']", "Ananda Theertha Suresh, Felix X. Yu, Sanjiv Kumar, H. Brendan McMahan" ]
cs.LG cs.AI cs.CL cs.CV stat.ML
null
1611.00448
null
null
http://arxiv.org/pdf/1611.00448v1
2016-11-02T02:32:05Z
2016-11-02T02:32:05Z
Natural-Parameter Networks: A Class of Probabilistic Neural Networks
Neural networks (NN) have achieved state-of-the-art performance in various applications. Unfortunately in applications where training data is insufficient, they are often prone to overfitting. One effective way to alleviate this problem is to exploit the Bayesian approach by using Bayesian neural networks (BNN). Another shortcoming of NN is the lack of flexibility to customize different distributions for the weights and neurons according to the data, as is often done in probabilistic graphical models. To address these problems, we propose a class of probabilistic neural networks, dubbed natural-parameter networks (NPN), as a novel and lightweight Bayesian treatment of NN. NPN allows the usage of arbitrary exponential-family distributions to model the weights and neurons. Different from traditional NN and BNN, NPN takes distributions as input and goes through layers of transformation before producing distributions to match the target output distributions. As a Bayesian treatment, efficient backpropagation (BP) is performed to learn the natural parameters for the distributions over both the weights and neurons. The output distributions of each layer, as byproducts, may be used as second-order representations for the associated tasks such as link prediction. Experiments on real-world datasets show that NPN can achieve state-of-the-art performance.
[ "Hao Wang, Xingjian Shi, Dit-Yan Yeung", "['Hao Wang' 'Xingjian Shi' 'Dit-Yan Yeung']" ]
cs.LG cs.AI cs.CL cs.CV stat.ML
null
1611.00454
null
null
http://arxiv.org/pdf/1611.00454v1
2016-11-02T02:49:44Z
2016-11-02T02:49:44Z
Collaborative Recurrent Autoencoder: Recommend while Learning to Fill in the Blanks
Hybrid methods that utilize both content and rating information are commonly used in many recommender systems. However, most of them use either handcrafted features or the bag-of-words representation as a surrogate for the content information but they are neither effective nor natural enough. To address this problem, we develop a collaborative recurrent autoencoder (CRAE) which is a denoising recurrent autoencoder (DRAE) that models the generation of content sequences in the collaborative filtering (CF) setting. The model generalizes recent advances in recurrent deep learning from i.i.d. input to non-i.i.d. (CF-based) input and provides a new denoising scheme along with a novel learnable pooling scheme for the recurrent autoencoder. To do this, we first develop a hierarchical Bayesian model for the DRAE and then generalize it to the CF setting. The synergy between denoising and CF enables CRAE to make accurate recommendations while learning to fill in the blanks in sequences. Experiments on real-world datasets from different domains (CiteULike and Netflix) show that, by jointly modeling the order-aware generation of sequences for the content information and performing CF for the ratings, CRAE is able to significantly outperform the state of the art on both the recommendation task based on ratings and the sequence generation task based on content information.
[ "Hao Wang, Xingjian Shi, Dit-Yan Yeung", "['Hao Wang' 'Xingjian Shi' 'Dit-Yan Yeung']" ]
cs.LG
null
1611.00481
null
null
http://arxiv.org/pdf/1611.00481v2
2016-11-06T18:05:35Z
2016-11-02T06:29:46Z
Online Multi-view Clustering with Incomplete Views
In the era of big data, it is common to have data with multiple modalities or coming from multiple sources, known as "multi-view data". Multi-view clustering provides a natural way to generate clusters from such data. Since different views share some consistency and complementary information, previous works on multi-view clustering mainly focus on how to combine various numbers of views to improve clustering performance. However, in reality, each view may be incomplete, i.e., instances may be missing in the view. Furthermore, the size of the data could be extremely large. It is unrealistic to apply multi-view clustering to large real-world applications without considering the incompleteness of views and the memory requirement. No previous work has addressed all these challenges simultaneously. In this paper, we propose an online multi-view clustering algorithm, OMVC, which deals with large-scale incomplete views. We model the multi-view clustering problem as a joint weighted nonnegative matrix factorization problem and process the multi-view data chunk by chunk to reduce the memory requirement. OMVC learns the latent feature matrices for all the views and pushes them towards a consensus. We further increase the robustness of the learned latent feature matrices in OMVC via lasso regularization. To minimize the influence of incompleteness, dynamic weight setting is introduced to give lower weights to the incoming missing instances in different views. More importantly, to reduce the computational time, we incorporate a faster projected gradient descent by utilizing the Hessian matrices in OMVC. Extensive experiments conducted on four real datasets demonstrate the effectiveness of the proposed OMVC method.
[ "['Weixiang Shao' 'Lifang He' 'Chun-Ta Lu' 'Philip S. Yu']", "Weixiang Shao, Lifang He, Chun-Ta Lu, Philip S. Yu" ]
cs.CV cs.LG cs.NE
null
1611.00591
null
null
http://arxiv.org/pdf/1611.00591v1
2016-09-04T16:20:13Z
2016-09-04T16:20:13Z
Deep Neural Networks for HDR imaging
We propose novel methods of solving two tasks using Convolutional Neural Networks: first, the task of generating an HDR map of a static scene using differently exposed LDR images of the scene captured with conventional cameras, and second, the task of finding an optimal tone mapping operator that gives a better score on the TMQI metric than existing methods. We quantitatively evaluate the performance of our networks and illustrate cases where they perform well as well as cases where they perform poorly.
[ "Kshiteej Sheth", "['Kshiteej Sheth']" ]
cs.LG cs.AI
null
1611.00625
null
null
http://arxiv.org/pdf/1611.00625v2
2016-11-03T21:54:28Z
2016-11-01T05:01:24Z
TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
We present TorchCraft, a library that enables deep learning research on Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it easier to control these games from a machine learning framework, here Torch. This white paper argues for using RTS games as a benchmark for AI research, and describes the design and components of TorchCraft.
[ "['Gabriel Synnaeve' 'Nantas Nardelli' 'Alex Auvolat' 'Soumith Chintala'\n 'Timothée Lacroix' 'Zeming Lin' 'Florian Richoux' 'Nicolas Usunier']", "Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala,\n Timoth\\'ee Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier" ]
cs.NE cs.LG
null
1611.00710
null
null
http://arxiv.org/pdf/1611.00710v1
2016-11-02T18:22:33Z
2016-11-02T18:22:33Z
Deep counter networks for asynchronous event-based processing
Despite their advantages in terms of computational resources, latency, and power consumption, event-based implementations of neural networks have not been able to achieve the same performance figures as their equivalent state-of-the-art deep network models. We propose counter neurons as minimal spiking neuron models which only require addition and comparison operations, thus avoiding costly multiplications. We show how inference carried out in deep counter networks converges to the same accuracy levels as are achieved with state-of-the-art conventional networks. As their event-based style of computation leads to reduced latency and sparse updates, counter networks are ideally suited for efficient compact and low-power hardware implementation. We present theory and training methods for counter networks, and demonstrate on the MNIST benchmark that counter networks converge quickly, both in terms of time and number of operations required, to state-of-the-art classification accuracy.
[ "Jonathan Binas, Giacomo Indiveri, Michael Pfeiffer", "['Jonathan Binas' 'Giacomo Indiveri' 'Michael Pfeiffer']" ]
cs.LG stat.ML
null
1611.00712
null
null
http://arxiv.org/pdf/1611.00712v3
2017-03-05T16:59:44Z
2016-11-02T18:25:40Z
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
The reparameterization trick enables optimizing large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack useful reparameterizations due to the discontinuous nature of discrete states. In this work we introduce Concrete random variables---continuous relaxations of discrete random variables. The Concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, Concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-probability of latent stochastic nodes) on the corresponding discrete graph. We demonstrate the effectiveness of Concrete relaxations on density estimation and structured prediction tasks using neural networks.
[ "Chris J. Maddison, Andriy Mnih, Yee Whye Teh", "['Chris J. Maddison' 'Andriy Mnih' 'Yee Whye Teh']" ]
cs.LG cs.DC
null
1611.00714
null
null
http://arxiv.org/pdf/1611.00714v1
2016-11-02T18:27:53Z
2016-11-02T18:27:53Z
Scalable Semi-Supervised Learning over Networks using Nonsmooth Convex Optimization
We propose a scalable method for semi-supervised (transductive) learning from massive network-structured datasets. Our approach to semi-supervised learning is based on representing the underlying hypothesis as a graph signal with small total variation. Requiring a small total variation of the graph signal representing the underlying hypothesis corresponds to the central smoothness assumption that forms the basis for semi-supervised learning, i.e., input points forming clusters have similar output values or labels. We formulate the learning problem as a nonsmooth convex optimization problem which we solve by appealing to Nesterov's optimal first-order method for nonsmooth optimization. We also provide a message passing formulation of the learning method which allows for a highly scalable implementation in big data frameworks.
[ "['Alexander Jung' 'Alfred O. Hero III' 'Alexandru Mara' 'Sabeur Aridhi']", "Alexander Jung and Alfred O. Hero III and Alexandru Mara and Sabeur\n Aridhi" ]
cs.LG
null
1611.00740
null
null
http://arxiv.org/pdf/1611.00740v5
2017-02-04T09:10:41Z
2016-11-02T19:35:52Z
Why and When Can Deep -- but Not Shallow -- Networks Avoid the Curse of Dimensionality: a Review
The paper characterizes classes of functions for which deep learning can be exponentially better than shallow learning. Deep convolutional networks are a special case of these conditions, though weight sharing is not the main reason for their exponential advantage.
[ "Tomaso Poggio, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda,\n  Qianli Liao", "['Tomaso Poggio' 'Hrushikesh Mhaskar' 'Lorenzo Rosasco' 'Brando Miranda'\n 'Qianli Liao']" ]
quant-ph cs.LG
null
1611.00760
null
null
http://arxiv.org/pdf/1611.00760v1
2016-11-02T03:48:01Z
2016-11-02T03:48:01Z
Quantum Laplacian Eigenmap
The Laplacian eigenmap algorithm is a typical nonlinear model for dimensionality reduction in classical machine learning. We propose an efficient quantum Laplacian eigenmap algorithm that exponentially speeds up its classical counterpart. In our work, we demonstrate that the Hermitian chain product proposed in quantum linear discriminant analysis (arXiv:1510.00113, 2015) can be applied to implement the quantum Laplacian eigenmap algorithm. While the classical Laplacian eigenmap algorithm requires polynomial time to solve the eigenvector problem, our algorithm achieves an exponential speedup for nonlinear dimensionality reduction.
[ "Yiming Huang, Xiaoyu Li", "['Yiming Huang' 'Xiaoyu Li']" ]
cs.LG cs.CV stat.ML
null
1611.00800
null
null
http://arxiv.org/pdf/1611.00800v1
2016-10-31T12:02:53Z
2016-10-31T12:02:53Z
Temporal Matrix Completion with Locally Linear Latent Factors for Medical Applications
Regular medical records are useful for medical practitioners to analyze and monitor patient health status, especially for those with chronic disease, but such records are usually incomplete due to unpunctuality and absence of patients. In order to resolve the missing data problem over time, a tensor-based model has been suggested for missing data imputation in recent papers, because this approach makes use of the low-rank tensor assumption for highly correlated data. However, when the time intervals between records are long, the data correlation is not high along the temporal direction and such an assumption is not valid. To address this problem, we propose to decompose a matrix with missing data into its latent factors. Then, the locally linear constraint is imposed on these factors for matrix completion in this paper. By using a publicly available dataset and two medical datasets collected from hospitals, experimental results show that the proposed algorithm achieves the best performance compared with the existing methods.
[ "Frodo Kin Sun Chan, Andy J Ma, Pong C Yuen, Terry Cheuk-Fung Yip,\n  Yee-Kit Tse, Vincent Wai-Sun Wong and Grace Lai-Hung Wong", "['Frodo Kin Sun Chan' 'Andy J Ma' 'Pong C Yuen' 'Terry Cheuk-Fung Yip'\n 'Yee-Kit Tse' 'Vincent Wai-Sun Wong' 'Grace Lai-Hung Wong']" ]
cs.DS cs.LG
null
1611.00829
null
null
http://arxiv.org/pdf/1611.00829v2
2017-04-26T02:29:51Z
2016-11-02T22:38:32Z
Multidimensional Binary Search for Contextual Decision-Making
We consider a multidimensional search problem that is motivated by questions in contextual decision-making, such as dynamic pricing and personalized medicine. Nature selects a state from a $d$-dimensional unit ball and then generates a sequence of $d$-dimensional directions. We are given access to the directions, but not access to the state. After receiving a direction, we have to guess the value of the dot product between the state and the direction. Our goal is to minimize the number of times when our guess is more than $\epsilon$ away from the true answer. We construct a polynomial time algorithm that we call Projected Volume achieving regret $O(d\log(d/\epsilon))$, which is optimal up to a $\log d$ factor. The algorithm combines a volume cutting strategy with a new geometric technique that we call cylindrification.
[ "['Ilan Lobel' 'Renato Paes Leme' 'Adrian Vladu']", "Ilan Lobel, Renato Paes Leme, Adrian Vladu" ]
stat.ML cs.CV cs.LG
null
1611.00838
null
null
http://arxiv.org/pdf/1611.00838v5
2019-07-18T05:34:20Z
2016-11-02T23:12:05Z
Initialization and Coordinate Optimization for Multi-way Matching
We consider the problem of consistently matching multiple sets of elements to each other, which is a common task in fields such as computer vision. To solve the underlying NP-hard objective, existing methods often relax or approximate it, but end up with unsatisfying empirical performance due to a misaligned objective. We propose a coordinate update algorithm that directly optimizes the target objective. By using pairwise alignment information to build an undirected graph and initializing the permutation matrices along the edges of its Maximum Spanning Tree, our algorithm successfully avoids bad local optima. Theoretically, with high probability our algorithm guarantees an optimal solution under reasonable noise assumptions. Empirically, our algorithm consistently and significantly outperforms existing methods on several benchmark tasks on real datasets.
[ "['Da Tang' 'Tony Jebara']", "Da Tang and Tony Jebara" ]
cs.LG cs.CV cs.NE
null
1611.00847
null
null
http://arxiv.org/pdf/1611.00847v3
2016-11-14T14:10:41Z
2016-11-02T23:48:04Z
Deep Convolutional Neural Network Design Patterns
Recent research in the deep learning field has produced a plethora of new architectures. At the same time, a growing number of groups are applying deep learning to new applications. Some of these groups are likely to be composed of inexperienced deep learning practitioners who are baffled by the dizzying array of architecture choices and therefore opt to use an older architecture (e.g., AlexNet). Here we attempt to bridge this gap by mining the collective knowledge contained in recent deep learning research to discover underlying principles for designing neural network architectures. In addition, we describe several architectural innovations, including the Fractal of FractalNet network, Stagewise Boosting Networks, and Taylor Series Networks (our Caffe code and prototxt files are available at https://github.com/iPhysicist/CNNDesignPatterns). We hope others are inspired to build on our preliminary work.
[ "Leslie N. Smith and Nicholay Topin", "['Leslie N. Smith' 'Nicholay Topin']" ]
cs.LG cs.AI
null
1611.00862
null
null
http://arxiv.org/pdf/1611.00862v1
2016-11-03T02:28:53Z
2016-11-03T02:28:53Z
Quantile Reinforcement Learning
In reinforcement learning, the standard criterion for evaluating policies in a state is the expectation of the (discounted) sum of rewards. However, this criterion may not always be suitable; we therefore consider an alternative criterion based on the notion of quantiles. In the case of episodic reinforcement learning problems, we propose an algorithm based on stochastic approximation with two timescales. We evaluate our proposition on a simple model of the TV show "Who Wants to Be a Millionaire?".
[ "['Hugo Gilbert' 'Paul Weng']", "Hugo Gilbert and Paul Weng" ]
cs.AI cs.LG
null
1611.00873
null
null
http://arxiv.org/pdf/1611.00873v1
2016-11-03T03:53:41Z
2016-11-03T03:53:41Z
Extracting Actionability from Machine Learning Models by Sub-optimal Deterministic Planning
A main focus of machine learning research has been improving the generalization accuracy and efficiency of prediction models. Many models such as SVM, random forest, and deep neural nets have been proposed and achieved great success. However, what emerges as missing in many applications is actionability, i.e., the ability to turn prediction results into actions. For example, in applications such as customer relationship management, clinical prediction, and advertisement, users need not only accurate prediction, but also actionable instructions which can transfer an input to a desirable goal (e.g., higher profit repayments, lower morbidity rates, higher ad hit rates). Existing efforts in deriving such actionable knowledge are few and limited to simple action models, which are restricted to changing only one attribute per action. The dilemma is that in many real applications those action models are often more complex, making it harder to extract an optimal solution. In this paper, we propose a novel approach that achieves actionability by combining learning with planning, two core areas of AI. In particular, we propose a framework to extract actionable knowledge from random forest, one of the most widely used and best off-the-shelf classifiers. We formulate the actionability problem as a sub-optimal action planning (SOAP) problem, which is to find a plan that alters certain features of a given input so that the random forest yields a desirable output, while minimizing the total cost of actions. Technically, the SOAP problem is formulated in the SAS+ planning formalism and solved using a Max-SAT based approach. Our experimental results demonstrate the effectiveness and efficiency of the proposed approach on a personal credit dataset and other benchmarks. Our work represents a new application of automated planning on an emerging and challenging machine learning paradigm.
[ "Qiang Lyu, Yixin Chen, Zhaorong Li, Zhicheng Cui, Ling Chen, Xing\n Zhang, Haihua Shen", "['Qiang Lyu' 'Yixin Chen' 'Zhaorong Li' 'Zhicheng Cui' 'Ling Chen'\n 'Xing Zhang' 'Haihua Shen']" ]
cs.DS cs.CC cs.LG
null
1611.00898
null
null
http://arxiv.org/pdf/1611.00898v2
2020-04-16T13:57:43Z
2016-11-03T07:13:20Z
Low Rank Approximation with Entrywise $\ell_1$-Norm Error
We study the $\ell_1$-low rank approximation problem, where for a given $n \times d$ matrix $A$ and approximation factor $\alpha \geq 1$, the goal is to output a rank-$k$ matrix $\widehat{A}$ for which $$\|A-\widehat{A}\|_1 \leq \alpha \cdot \min_{\textrm{rank-}k\textrm{ matrices}~A'}\|A-A'\|_1,$$ where for an $n \times d$ matrix $C$, we let $\|C\|_1 = \sum_{i=1}^n \sum_{j=1}^d |C_{i,j}|$. This error measure is known to be more robust than the Frobenius norm in the presence of outliers and is indicated in models where Gaussian assumptions on the noise may not apply. The problem was shown to be NP-hard by Gillis and Vavasis and a number of heuristics have been proposed. It was asked in multiple places if there are any approximation algorithms. We give the first provable approximation algorithms for $\ell_1$-low rank approximation, showing that it is possible to achieve approximation factor $\alpha = (\log d) \cdot \mathrm{poly}(k)$ in $\mathrm{nnz}(A) + (n+d) \mathrm{poly}(k)$ time, where $\mathrm{nnz}(A)$ denotes the number of non-zero entries of $A$. If $k$ is constant, we further improve the approximation ratio to $O(1)$ with a $\mathrm{poly}(nd)$-time algorithm. Under the Exponential Time Hypothesis, we show there is no $\mathrm{poly}(nd)$-time algorithm achieving a $(1+\frac{1}{\log^{1+\gamma}(nd)})$-approximation, for $\gamma > 0$ an arbitrarily small constant, even when $k = 1$. We give a number of additional results for $\ell_1$-low rank approximation: nearly tight upper and lower bounds for column subset selection, CUR decompositions, extensions to low rank approximation with respect to $\ell_p$-norms for $1 \leq p < 2$ and earthmover distance, low-communication distributed protocols and low-memory streaming algorithms, algorithms with limited randomness, and bicriteria algorithms. We also give a preliminary empirical evaluation.
[ "Zhao Song, David P. Woodruff, Peilin Zhong", "['Zhao Song' 'David P. Woodruff' 'Peilin Zhong']" ]
cs.DS cs.LG stat.ML
null
1611.00938
null
null
http://arxiv.org/pdf/1611.00938v2
2016-11-04T09:25:41Z
2016-11-03T10:08:22Z
Fast Eigenspace Approximation using Random Signals
We focus in this work on the estimation of the first $k$ eigenvectors of any graph Laplacian using filtering of Gaussian random signals. We prove that we only need $k$ such signals to be able to exactly recover as many of the smallest eigenvectors, regardless of the number of nodes in the graph. In addition, we address key issues in implementing the theoretical concepts in practice using accurate approximated methods. We also propose fast algorithms both for eigenspace approximation and for the determination of the $k$th smallest eigenvalue $\lambda_k$. The latter proves to be extremely efficient under the assumption of a locally uniform distribution of the eigenvalues over the spectrum. Finally, we present experiments which show the validity of our method in practice and compare it to state-of-the-art methods for clustering and visualization, both on synthetic small-scale datasets and on larger real-world problems with millions of nodes. We show that our method allows a better scaling with the number of nodes than all previous methods while achieving an almost perfect reconstruction of the eigenspace formed by the first $k$ eigenvectors.
[ "['Johan Paratte' 'Lionel Martin']", "Johan Paratte and Lionel Martin" ]
stat.ML cs.LG q-bio.QM
10.1109/TCBB.2017.2684127
1611.00962
null
null
http://arxiv.org/abs/1611.00962v1
2016-11-03T11:40:59Z
2016-11-03T11:40:59Z
Multitask Protein Function Prediction Through Task Dissimilarity
Automated protein function prediction is a challenging problem with distinctive features, such as the hierarchical organization of protein functions and the scarcity of annotated proteins for most biological functions. We propose a multitask learning algorithm addressing both issues. Unlike standard multitask algorithms, which use task (protein functions) similarity information as a bias to speed up learning, we show that dissimilarity information enforces separation of rare class labels from frequent class labels, and for this reason is better suited for solving unbalanced protein function prediction problems. We support our claim by showing that a multitask extension of the label propagation algorithm empirically works best when the task relatedness information is represented using a dissimilarity matrix as opposed to a similarity matrix. Moreover, the experimental comparison carried out on three model organisms shows that our method has a more stable performance in both "protein-centric" and "function-centric" evaluation settings.
[ "Marco Frasca and Nicol\\`o Cesa Bianchi", "['Marco Frasca' 'Nicolò Cesa Bianchi']" ]
stat.ML cs.LG cs.NE physics.data-an stat.ME
null
1611.01046
null
null
http://arxiv.org/pdf/1611.01046v3
2017-06-01T19:04:01Z
2016-11-03T14:41:40Z
Learning to Pivot with Adversarial Networks
Several techniques for domain adaptation have been proposed to account for differences in the distribution of the data used for training and testing. The majority of this work focuses on a binary domain label. Similar problems occur in a scientific context where there may be a continuous family of plausible data generation processes associated to the presence of systematic uncertainties. Robust inference is possible if it is based on a pivot -- a quantity whose distribution does not depend on the unknown values of the nuisance parameters that parametrize this family of data generation processes. In this work, we introduce and derive theoretical results for a training procedure based on adversarial networks for enforcing the pivotal property (or, equivalently, fairness with respect to continuous attributes) on a predictive model. The method includes a hyperparameter to control the trade-off between accuracy and robustness. We demonstrate the effectiveness of this approach with a toy example and examples from particle physics.
[ "Gilles Louppe, Michael Kagan, Kyle Cranmer", "['Gilles Louppe' 'Michael Kagan' 'Kyle Cranmer']" ]
cs.LG cs.GR cs.RO
10.1145/3099564.3099567
1611.01055
null
null
http://arxiv.org/abs/1611.01055v1
2016-11-03T15:15:00Z
2016-11-03T15:15:00Z
Learning Locomotion Skills Using DeepRL: Does the Choice of Action Space Matter?
The use of deep reinforcement learning allows for high-dimensional state descriptors, but little is known about how the choice of action representation impacts the learning difficulty and the resulting performance. We compare the impact of four different action parameterizations (torques, muscle-activations, target joint angles, and target joint-angle velocities) in terms of learning time, policy robustness, motion quality, and policy query rates. Our results are evaluated on a gait-cycle imitation task for multiple planar articulated figures and multiple gaits. We demonstrate that the local feedback provided by higher-level action parameterizations can significantly impact the learning, robustness, and quality of the resulting policies.
[ "Xue Bin Peng, Michiel van de Panne", "['Xue Bin Peng' 'Michiel van de Panne']" ]
cs.LG stat.ML
10.1016/j.ins.2016.07.076
1611.01060
null
null
http://arxiv.org/abs/1611.01060v1
2016-11-03T15:23:53Z
2016-11-03T15:23:53Z
A-Ward_p\b{eta}: Effective hierarchical clustering using the Minkowski metric and a fast k-means initialisation
In this paper we make two novel contributions to hierarchical clustering. First, we introduce an anomalous pattern initialisation method for hierarchical clustering algorithms, called A-Ward, capable of substantially reducing the time they take to converge. This method generates an initial partition with a sufficiently large number of clusters. This allows the cluster merging process to start from this partition rather than from a trivial partition composed solely of singletons. Our second contribution is an extension of the Ward and Ward p algorithms to the situation where the feature weight exponent can differ from the exponent of the Minkowski distance. This new method, called A-Ward p\b{eta}, is able to generate a much wider variety of clustering solutions. We also demonstrate that its parameters can be estimated reasonably well by using a cluster validity index. We perform numerous experiments using data sets with two types of noise: insertion of noise features and blurring of within-cluster values of some features. These experiments allow us to conclude: (i) our anomalous pattern initialisation method does indeed reduce the time a hierarchical clustering algorithm takes to complete, without negatively impacting its cluster recovery ability; (ii) A-Ward p\b{eta} provides better cluster recovery than both Ward and Ward p.
[ "Renato Cordeiro de Amorim, Vladimir Makarenkov, Boris Mirkin", "['Renato Cordeiro de Amorim' 'Vladimir Makarenkov' 'Boris Mirkin']" ]
stat.ME cs.LG math.ST stat.ML stat.TH
null
1611.01129
null
null
http://arxiv.org/pdf/1611.01129v2
2018-11-27T18:08:38Z
2016-11-03T19:02:02Z
Cross: Efficient Low-rank Tensor Completion
The completion of tensors, or high-order arrays, has attracted significant attention in recent research. Current literature on tensor completion primarily focuses on recovery from a set of uniformly randomly measured entries, and the required number of measurements to achieve recovery is not guaranteed to be optimal. In addition, the implementation of some previous methods is NP-hard. In this article, we propose a framework for low-rank tensor completion via a novel tensor measurement scheme we name Cross. The proposed procedure is efficient and easy to implement. In particular, we show that a third order tensor of Tucker rank-$(r_1, r_2, r_3)$ in $p_1$-by-$p_2$-by-$p_3$ dimensional space can be recovered from as few as $r_1r_2r_3 + r_1(p_1-r_1) + r_2(p_2-r_2) + r_3(p_3-r_3)$ noiseless measurements, which matches the sample complexity lower-bound. In the case of noisy measurements, we also develop a theoretical upper bound and the matching minimax lower bound for recovery error over certain classes of low-rank tensors for the proposed procedure. The results can be further extended to fourth or higher-order tensors. Simulation studies show that the method performs well under a variety of settings. Finally, the procedure is illustrated through a real dataset in neuroimaging.
[ "['Anru Zhang']", "Anru Zhang" ]
cs.LG cs.SY
null
1611.01142
null
null
http://arxiv.org/pdf/1611.01142v1
2016-11-03T19:46:19Z
2016-11-03T19:46:19Z
Using a Deep Reinforcement Learning Agent for Traffic Signal Control
Ensuring transportation systems are efficient is a priority for modern society. Technological advances have made it possible for transportation systems to collect large volumes of varied data on an unprecedented scale. We propose a traffic signal control system which takes advantage of this new, high quality data, with minimal abstraction compared to other proposed systems. We apply modern deep reinforcement learning methods to build a truly adaptive traffic signal control agent in the traffic microsimulator SUMO. We propose a new state space, the discrete traffic state encoding, which is information dense. The discrete traffic state encoding is used as input to a deep convolutional neural network, trained using Q-learning with experience replay. Our agent was compared against a one hidden layer neural network traffic signal control agent and reduces average cumulative delay by 82%, average queue length by 66% and average travel time by 20%.
[ "['Wade Genders' 'Saiedeh Razavi']", "Wade Genders, Saiedeh Razavi" ]
stat.ML cs.LG
null
1611.01144
null
null
http://arxiv.org/pdf/1611.01144v5
2017-08-05T22:45:19Z
2016-11-03T19:48:08Z
Categorical Reparameterization with Gumbel-Softmax
Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification.
[ "['Eric Jang' 'Shixiang Gu' 'Ben Poole']", "Eric Jang, Shixiang Gu, Ben Poole" ]
cs.LG cs.CR stat.ML
null
1611.01170
null
null
http://arxiv.org/pdf/1611.01170v1
2016-11-03T20:04:29Z
2016-11-03T20:04:29Z
PrivLogit: Efficient Privacy-preserving Logistic Regression by Tailoring Numerical Optimizers
Safeguarding privacy in machine learning is highly desirable, especially in collaborative studies across many organizations. Privacy-preserving distributed machine learning (based on cryptography) is popular to solve the problem. However, existing cryptographic protocols still incur excess computational overhead. Here, we make a novel observation that this is partially due to naive adoption of mainstream numerical optimization (e.g., Newton's method) and failure to tailor it for secure computing. This work presents a contrasting perspective: customizing numerical optimization specifically for secure settings. We propose a seemingly less-favorable optimization method that can in fact significantly accelerate privacy-preserving logistic regression. Leveraging this new method, we propose two new secure protocols for conducting logistic regression in a privacy-preserving and distributed manner. Extensive theoretical and empirical evaluations prove the competitive performance of our two secure proposals without compromising accuracy or privacy: with speedups of up to 2.3x and 8.1x, respectively, over the state of the art, and even faster as data scales up. Such drastic speedup is on top of and in addition to performance improvements from existing (and future) state-of-the-art cryptography. Our work provides a new way towards efficient and practical privacy-preserving logistic regression for large-scale studies which are common for modern science.
[ "Wei Xie, Yang Wang, Steven M. Boker, Donald E. Brown", "['Wei Xie' 'Yang Wang' 'Steven M. Boker' 'Donald E. Brown']" ]
cs.NE cs.LG stat.ML
null
1611.01186
null
null
http://arxiv.org/pdf/1611.01186v2
2017-05-20T10:18:06Z
2016-11-03T20:55:49Z
Demystifying ResNet
The Residual Network (ResNet), proposed in He et al. (2015), utilized shortcut connections to significantly reduce the difficulty of training, which resulted in great performance boosts in terms of both training and generalization error. It was empirically observed in He et al. (2015) that stacking more layers of residual blocks with shortcuts of depth 2 results in smaller training error, while this is not true for shortcuts of depth 1 or 3. We provide a theoretical explanation for the uniqueness of depth-2 shortcuts. We show that with or without nonlinearities, by adding shortcuts of depth two, the condition number of the Hessian of the loss function at the zero initial point is depth-invariant, which makes training very deep models no more difficult than shallow ones. Shortcuts of greater depth result in an extremely flat (high-order) stationary point initially, from which the optimization algorithm is hard to escape. Depth-1 shortcuts, however, are essentially equivalent to no shortcuts, with a condition number exploding to infinity as the number of layers grows. We further argue that as the number of layers tends to infinity, it suffices to only look at the loss function at the zero initial point. Extensive experiments are provided accompanying our theoretical results. We show that initializing the network to small weights with depth-2 shortcuts achieves significantly better results than random Gaussian (Xavier) initialization, orthogonal initialization, and shortcuts of greater depth, from various perspectives ranging from final loss, learning dynamics and stability, to the behavior of the Hessian along the learning process.
[ "Sihan Li, Jiantao Jiao, Yanjun Han, Tsachy Weissman", "['Sihan Li' 'Jiantao Jiao' 'Yanjun Han' 'Tsachy Weissman']" ]
cs.CC cs.CR cs.DS cs.LG
null
1611.01190
null
null
http://arxiv.org/pdf/1611.01190v1
2016-11-03T21:08:38Z
2016-11-03T21:08:38Z
Conspiracies between Learning Algorithms, Circuit Lower Bounds and Pseudorandomness
We prove several results giving new and stronger connections between learning, circuit lower bounds and pseudorandomness. Among other results, we show a generic learning speedup lemma, equivalences between various learning models in the exponential time and subexponential time regimes, a dichotomy between learning and pseudorandomness, consequences of non-trivial learning for circuit lower bounds, Karp-Lipton theorems for probabilistic exponential time, and NC$^1$-hardness for the Minimum Circuit Size Problem.
[ "Igor C. Oliveira, Rahul Santhanam", "['Igor C. Oliveira' 'Rahul Santhanam']" ]
cs.LG cs.NE stat.ML
null
1611.01211
null
null
http://arxiv.org/pdf/1611.01211v8
2018-03-13T21:24:47Z
2016-11-03T22:30:10Z
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
Many practical environments contain catastrophic states that an optimal agent would visit infrequently or never. Even on toy problems, Deep Reinforcement Learning (DRL) agents tend to periodically revisit these states upon forgetting their existence under a new policy. We introduce intrinsic fear (IF), a learned reward shaping that guards DRL agents against periodic catastrophes. IF agents possess a fear model trained to predict the probability of imminent catastrophe. This score is then used to penalize the Q-learning objective. Our theoretical analysis bounds the reduction in average return due to learning on the perturbed objective. We also prove robustness to classification errors. As a bonus, IF models tend to learn faster, owing to reward shaping. Experiments demonstrate that intrinsic-fear DQNs solve otherwise pathological environments and improve on several Atari games.
[ "['Zachary C. Lipton' 'Kamyar Azizzadenesheli' 'Abhishek Kumar' 'Lihong Li'\n 'Jianfeng Gao' 'Li Deng']", "Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li,\n Jianfeng Gao, Li Deng" ]
cs.LG
null
1611.01224
null
null
http://arxiv.org/pdf/1611.01224v2
2017-07-10T14:38:10Z
2016-11-03T23:21:32Z
Sample Efficient Actor-Critic with Experience Replay
This paper presents an actor-critic deep reinforcement learning agent with experience replay that is stable, sample efficient, and performs remarkably well on challenging environments, including the discrete 57-game Atari domain and several continuous control problems. To achieve this, the paper introduces several innovations, including truncated importance sampling with bias correction, stochastic dueling network architectures, and a new trust region policy optimization method.
[ "Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos,\n Koray Kavukcuoglu, Nando de Freitas", "['Ziyu Wang' 'Victor Bapst' 'Nicolas Heess' 'Volodymyr Mnih' 'Remi Munos'\n 'Koray Kavukcuoglu' 'Nando de Freitas']" ]
stat.ML cs.LG
null
1611.01232
null
null
http://arxiv.org/pdf/1611.01232v2
2017-04-04T19:36:14Z
2016-11-04T00:44:32Z
Deep Information Propagation
We study the behavior of untrained neural networks whose weights and biases are randomly distributed using mean field theory. We show the existence of depth scales that naturally limit the maximum depth of signal propagation through these random networks. Our main practical result is to show that random networks may be trained precisely when information can travel through them. Thus, the depth scales that we identify provide bounds on how deep a network may be trained for a specific choice of hyperparameters. As a corollary to this, we argue that in networks at the edge of chaos, one of these depth scales diverges. Thus arbitrarily deep networks may be trained only sufficiently close to criticality. We show that the presence of dropout destroys the order-to-chaos critical point and therefore strongly limits the maximum trainable depth for random networks. Finally, we develop a mean field theory for backpropagation and we show that the ordered and chaotic phases correspond to regions of vanishing and exploding gradient respectively.
[ "Samuel S. Schoenholz, Justin Gilmer, Surya Ganguli and Jascha\n Sohl-Dickstein", "['Samuel S. Schoenholz' 'Justin Gilmer' 'Surya Ganguli'\n 'Jascha Sohl-Dickstein']" ]
cs.CV cs.CR cs.LG stat.ML
null
1611.01236
null
null
http://arxiv.org/pdf/1611.01236v2
2017-02-11T00:15:46Z
2016-11-04T01:11:02Z
Adversarial Machine Learning at Scale
Adversarial examples are malicious inputs designed to fool machine learning models. They often transfer from one model to another, allowing attackers to mount black box attacks without knowledge of the target model's parameters. Adversarial training is the process of explicitly training a model on adversarial examples, in order to make it more robust to attack or to reduce its test error on clean inputs. So far, adversarial training has primarily been applied to small problems. In this research, we apply adversarial training to ImageNet. Our contributions include: (1) recommendations for how to successfully scale adversarial training to large models and datasets, (2) the observation that adversarial training confers robustness to single-step attack methods, (3) the finding that multi-step attack methods are somewhat less transferable than single-step attack methods, so single-step attacks are the best for mounting black-box attacks, and (4) resolution of a "label leaking" effect that causes adversarially trained models to perform better on adversarial examples than on clean examples, because the adversarial example construction process uses the true label and the model can learn to exploit regularities in the construction process.
[ "['Alexey Kurakin' 'Ian Goodfellow' 'Samy Bengio']", "Alexey Kurakin, Ian Goodfellow, Samy Bengio" ]
stat.ML cs.LG
null
1611.01239
null
null
http://arxiv.org/pdf/1611.01239v1
2016-11-04T01:46:47Z
2016-11-04T01:46:47Z
Reparameterization trick for discrete variables
Low-variance gradient estimation is crucial for learning directed graphical models parameterized by neural networks, where the reparameterization trick is widely used for those with continuous variables. While this technique gives low-variance gradient estimates, it has not been directly applicable to discrete variables, the sampling of which inherently requires discontinuous operations. We argue that the discontinuity can be bypassed by marginalizing out the variable of interest, which results in a new reparameterization trick for discrete variables. This reparameterization greatly reduces the variance, which is understood by regarding the method as an application of common random numbers to the estimation. The resulting estimator is theoretically guaranteed to have a variance not larger than that of the likelihood-ratio method with the optimal input-dependent baseline. We give empirical results for variational learning of sigmoid belief networks.
[ "['Seiya Tokui' 'Issei sato']", "Seiya Tokui and Issei sato" ]
cs.LG cs.CL cs.DS cs.IR
null
1611.01259
null
null
http://arxiv.org/pdf/1611.01259v1
2016-11-04T03:45:03Z
2016-11-04T03:45:03Z
Generalized Topic Modeling
Recently there has been significant activity in developing algorithms with provable guarantees for topic modeling. In standard topic models, a topic (such as sports, business, or politics) is viewed as a probability distribution $\vec a_i$ over words, and a document is generated by first selecting a mixture $\vec w$ over topics, and then generating words i.i.d. from the associated mixture $A{\vec w}$. Given a large collection of such documents, the goal is to recover the topic vectors and then to correctly classify new documents according to their topic mixture. In this work we consider a broad generalization of this framework in which words are no longer assumed to be drawn i.i.d. and instead a topic is a complex distribution over sequences of paragraphs. Since one could not hope to even represent such a distribution in general (even if paragraphs are given using some natural feature representation), we aim instead to directly learn a document classifier. That is, we aim to learn a predictor that given a new document, accurately predicts its topic mixture, without learning the distributions explicitly. We present several natural conditions under which one can do this efficiently and discuss issues such as noise tolerance and sample complexity in this model. More generally, our model can be viewed as a generalization of the multi-view or co-training setting in machine learning.
[ "Avrim Blum, Nika Haghtalab", "['Avrim Blum' 'Nika Haghtalab']" ]
cs.CV cs.LG
null
1611.01260
null
null
http://arxiv.org/pdf/1611.01260v2
2016-12-29T01:36:47Z
2016-11-04T04:34:38Z
Learning Identity Mappings with Residual Gates
We propose a new layer design by adding a linear gating mechanism to shortcut connections. By using a scalar parameter to control each gate, we provide a way to learn identity mappings by optimizing only one parameter. We build upon the motivation behind Residual Networks, where a layer is reformulated in order to make learning identity mappings less problematic to the optimizer. The augmentation introduces only one extra parameter per layer, and provides easier optimization by making degeneration into identity mappings simpler. We propose a new model, the Gated Residual Network, which is the result when augmenting Residual Networks. Experimental results show that augmenting layers provides better optimization, increased performance, and more layer independence. We evaluate our method on MNIST using fully-connected networks, showing empirical indications that our augmentation facilitates the optimization of deep models, and that it provides high tolerance to full layer removal: the model retains over 90% of its performance even after half of its layers have been randomly removed. We also evaluate our model on CIFAR-10 and CIFAR-100 using Wide Gated ResNets, achieving 3.65% and 18.27% error, respectively.
[ "Pedro H. P. Savarese and Leonardo O. Mazza and Daniel R. Figueiredo", "['Pedro H. P. Savarese' 'Leonardo O. Mazza' 'Daniel R. Figueiredo']" ]
cs.LG cs.NE
null
1611.01268
null
null
http://arxiv.org/pdf/1611.01268v1
2016-11-04T05:52:17Z
2016-11-04T05:52:17Z
Semantic Noise Modeling for Better Representation Learning
Latent representations learned by multi-layered neural networks via hierarchical feature abstraction underlie the recent success of deep learning. Under the deep learning framework, generalization performance depends heavily on the learned latent representation, which is obtained from an appropriate training scenario with a task-specific objective on a designed network model. In this work, we propose a novel latent-space modeling method to learn better latent representations. We design a neural network model based on the assumption that a good base representation can be attained by maximizing the total correlation between the input, latent, and output variables. On top of the base model, we introduce a semantic noise modeling method that enables class-conditional perturbation of the latent space to enhance the representational power of the learned latent features. During training, the latent vector representation can be stochastically perturbed by modeled class-conditional additive noise while maintaining its original semantic features, implicitly achieving semantic augmentation in the latent space. The proposed model can be easily learned by back-propagation with common gradient-based optimization algorithms. Experimental results show that the proposed method achieves performance benefits over various previous approaches. We also provide empirical analyses of the proposed class-conditional perturbation process, including t-SNE visualizations.
[ "Hyo-Eun Kim, Sangheum Hwang, Kyunghyun Cho", "['Hyo-Eun Kim' 'Sangheum Hwang' 'Kyunghyun Cho']" ]
cs.LG
null
1611.01276
null
null
http://arxiv.org/pdf/1611.01276v1
2016-11-04T07:09:03Z
2016-11-04T07:09:03Z
A Communication-Efficient Parallel Algorithm for Decision Tree
Decision tree (and its extensions such as Gradient Boosting Decision Trees and Random Forest) is a widely used machine learning algorithm, due to its practical effectiveness and model interpretability. With the emergence of big data, there is an increasing need to parallelize the training process of decision tree. However, most existing attempts along this line suffer from high communication costs. In this paper, we propose a new algorithm, called \emph{Parallel Voting Decision Tree (PV-Tree)}, to tackle this challenge. After partitioning the training data onto a number of (e.g., $M$) machines, this algorithm performs both local voting and global voting in each iteration. For local voting, the top-$k$ attributes are selected from each machine according to its local data. Then, globally top-$2k$ attributes are determined by a majority voting among these local candidates. Finally, the full-grained histograms of the globally top-$2k$ attributes are collected from local machines in order to identify the best (most informative) attribute and its split point. PV-Tree can achieve a very low communication cost (independent of the total number of attributes) and thus can scale out very well. Furthermore, theoretical analysis shows that this algorithm can learn a near optimal decision tree, since it can find the best attribute with a large probability. Our experiments on real-world datasets show that PV-Tree significantly outperforms the existing parallel decision tree algorithms in the trade-off between accuracy and efficiency.
[ "Qi Meng, Guolin Ke, Taifeng Wang, Wei Chen, Qiwei Ye, Zhi-Ming Ma and\n Tie-Yan Liu", "['Qi Meng' 'Guolin Ke' 'Taifeng Wang' 'Wei Chen' 'Qiwei Ye' 'Zhi-Ming Ma'\n 'Tie-Yan Liu']" ]
stat.ML cs.LG stat.CO
null
1611.01353
null
null
http://arxiv.org/pdf/1611.01353v3
2017-02-12T09:26:25Z
2016-11-04T12:46:37Z
Information Dropout: Learning Optimal Representations Through Noisy Computation
The cross-entropy loss commonly used in deep learning is closely related to the defining properties of optimal representations, but does not enforce some of the key properties. We show that this can be solved by adding a regularization term, which is in turn related to injecting multiplicative noise in the activations of a Deep Neural Network, a special case of which is the common practice of dropout. We show that our regularized loss function can be efficiently minimized using Information Dropout, a generalization of dropout rooted in information theoretic principles that automatically adapts to the data and can better exploit architectures of limited capacity. When the task is the reconstruction of the input, we show that our loss function yields a Variational Autoencoder as a special case, thus providing a link between representation learning, information theory and variational inference. Finally, we prove that we can promote the creation of disentangled representations simply by enforcing a factorized prior, a fact that has been observed empirically in recent work. Our experiments validate the theoretical intuitions behind our method, and we find that information dropout achieves a comparable or better generalization performance than binary dropout, especially on smaller models, since it can automatically adapt the noise to the structure of the network, as well as to the test sample.
[ "Alessandro Achille, Stefano Soatto", "['Alessandro Achille' 'Stefano Soatto']" ]
cs.IR cs.CL cs.DL cs.LG cs.SI
null
1611.01400
null
null
null
null
null
Learning to Rank Scientific Documents from the Crowd
Finding related published articles is an important task in any science, but with the explosion of new work in the biomedical domain it has become especially challenging. Most existing methodologies use text-similarity metrics to identify whether two articles are related. However, biomedical knowledge discovery is hypothesis-driven, so the most related articles may not be the ones with the highest text similarity. In this study, we first develop an innovative crowd-sourcing approach to build an expert-annotated document-ranking corpus. Using this corpus as the gold standard, we then evaluate approaches that rank the relatedness of articles by text similarity. Finally, we develop and evaluate a new supervised model to automatically rank related scientific articles. Our results show that authors' rankings differ significantly from the rankings produced by text-similarity-based models. By training a learning-to-rank model on a subset of the annotated corpus, we found that the best supervised learning-to-rank model (SVM-Rank) significantly surpassed state-of-the-art baseline systems.
[ "Jesse M Lingeman, Hong Yu" ]