categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
stat.ML cs.CL cs.LG
null
1606.05925
null
null
http://arxiv.org/pdf/1606.05925v1
2016-06-19T23:40:51Z
2016-06-19T23:40:51Z
Graph based manifold regularized deep neural networks for automatic speech recognition
Deep neural networks (DNNs) have been successfully applied to a wide variety of acoustic modeling tasks in recent years. These include the applications of DNNs either in a discriminative feature extraction or in a hybrid acoustic modeling scenario. Despite the rapid progress in this area, a number of challenges remain in training DNNs. This paper presents an effective way of training DNNs using a manifold learning based regularization framework. In this framework, the parameters of the network are optimized to preserve underlying manifold based relationships between speech feature vectors while minimizing a measure of loss between network outputs and targets. This is achieved by incorporating manifold based locality constraints in the objective criterion of DNNs. Empirical evidence is provided to demonstrate that training a network with manifold constraints preserves structural compactness in the hidden layers of the network. Manifold regularization is applied to train bottleneck DNNs for feature extraction in hidden Markov model (HMM) based speech recognition. The experiments in this work are conducted on the Aurora-2 spoken digits and the Aurora-4 read news large vocabulary continuous speech recognition tasks. The performance is measured in terms of word error rate (WER) on these tasks. It is shown that the manifold regularized DNNs result in up to 37% reduction in WER relative to standard DNNs.
[ "['Vikrant Singh Tomar' 'Richard C. Rose']", "Vikrant Singh Tomar and Richard C. Rose" ]
cs.LG
null
1606.05934
null
null
http://arxiv.org/pdf/1606.05934v1
2016-06-20T00:59:11Z
2016-06-20T00:59:11Z
Adapting ELM to Time Series Classification: A Novel Diversified Top-k Shapelets Extraction Method
ELM (Extreme Learning Machine) is a single-hidden-layer feed-forward network in which the weights between the input and hidden layers are initialized randomly. ELM is efficient because it computes the weights between the hidden and output layers analytically. However, ELM still fails to produce a semantically interpretable classification outcome. To address this limitation, we propose in this paper a diversified top-k shapelets transform framework, where the shapelets are subsequences that serve as the most representative and interpretable features of each class. As we identified, the most challenging problems are how to extract the best k shapelets from the original candidate set and how to determine the value of k automatically. Specifically, we first define similar shapelets and diversified top-k shapelets to construct a diversity shapelets graph. Then, a novel diversity-graph-based top-k shapelets extraction algorithm, named DivTopkshapelets, is proposed to search for the top-k diversified shapelets. Finally, we propose a shapelets-transformed ELM algorithm, named DivShapELM, which automatically determines the value of k and is further utilized for time series classification. Experimental results on public data sets demonstrate that the proposed approach significantly outperforms the traditional ELM algorithm in terms of effectiveness and efficiency.
[ "Qiuyan Yan and Qifa Sun and Xinming Yan", "['Qiuyan Yan' 'Qifa Sun' 'Xinming Yan']" ]
stat.ME cs.LG stat.ML
10.1016/j.csda.2018.03.015
1606.05988
null
null
http://arxiv.org/abs/1606.05988v3
2018-03-21T17:04:18Z
2016-06-20T06:52:41Z
Continuum directions for supervised dimension reduction
Dimension reduction of multivariate data supervised by auxiliary information is considered. A series of bases for dimension reduction is obtained as minimizers of a novel criterion. The proposed method is akin to continuum regression, and the resulting bases are called continuum directions. In the presence of binary supervision data, these directions continuously bridge the principal component, mean difference and linear discriminant directions, thus ranging from unsupervised to fully supervised dimension reduction. High-dimensional asymptotic studies of continuum directions for binary supervision reveal several interesting facts. We specify the conditions under which the sample continuum directions are inconsistent yet their classification performance remains good. While the proposed method can be directly used for binary and multi-category classification, generalizations that incorporate any form of auxiliary data are also presented. The proposed method enjoys fast computation, and its performance is better than or on par with more computationally intensive alternatives.
[ "Sungkyu Jung", "['Sungkyu Jung']" ]
cs.NE cs.LG
null
1606.05990
null
null
http://arxiv.org/pdf/1606.05990v2
2018-08-11T19:52:59Z
2016-06-20T07:05:14Z
A New Training Method for Feedforward Neural Networks Based on Geometric Contraction Property of Activation Functions
We propose a new training method for feedforward neural networks whose activation functions have the geometric contraction property. The method constructs a new functional that is less nonlinear than the classical one by removing the nonlinearity of the activation function from the output layer. We validate this new method with a series of experiments that show improved learning speed and lower classification error.
[ "['Petre Birtea' 'Cosmin Cernazanu-Glavan' 'Alexandru Sisu']", "Petre Birtea, Cosmin Cernazanu-Glavan, Alexandru Sisu" ]
cs.CL cs.AI cs.LG
null
1606.06031
null
null
http://arxiv.org/pdf/1606.06031v1
2016-06-20T09:37:17Z
2016-06-20T09:37:17Z
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
[ "Denis Paperno (1), Germ\\'an Kruszewski (1), Angeliki Lazaridou (1),\n Quan Ngoc Pham (1), Raffaella Bernardi (1), Sandro Pezzelle (1), Marco Baroni\n (1), Gemma Boleda (1), Raquel Fern\\'andez (2) ((1) CIMeC - Center for\n Mind/Brain Sciences, University of Trento, (2) Institute for Logic, Language\n & Computation, University of Amsterdam)", "['Denis Paperno' 'Germán Kruszewski' 'Angeliki Lazaridou' 'Quan Ngoc Pham'\n 'Raffaella Bernardi' 'Sandro Pezzelle' 'Marco Baroni' 'Gemma Boleda'\n 'Raquel Fernández']" ]
cs.DB cs.LG
10.1016/j.jides.2016.11.001
1606.06066
null
null
http://arxiv.org/abs/1606.06066v2
2017-05-16T17:23:10Z
2016-06-20T11:28:26Z
Mining Local Process Models
In this paper we describe a method to discover frequent behavioral patterns in event logs. We express these patterns as \emph{local process models}. Local process model mining can be positioned in-between process discovery and episode / sequential pattern mining. The technique presented in this paper is able to learn behavioral patterns involving sequential composition, concurrency, choice and loop, like in process mining. However, we do not look at start-to-end models, which distinguishes our approach from process discovery and creates a link to episode / sequential pattern mining. We propose an incremental procedure for building local process models capturing frequent patterns based on so-called process trees. We propose five quality dimensions and corresponding metrics for local process models, given an event log. We show monotonicity properties for some quality dimensions, enabling a speedup of local process model discovery through pruning. We demonstrate through a real life case study that mining local patterns allows us to get insights in processes where regular start-to-end process discovery techniques are only able to learn unstructured, flower-like, models.
[ "Niek Tax, Natalia Sidorova, Reinder Haakma, Wil M. P. van der Aalst", "['Niek Tax' 'Natalia Sidorova' 'Reinder Haakma' 'Wil M. P. van der Aalst']" ]
cs.LG
null
1606.06069
null
null
http://arxiv.org/pdf/1606.06069v1
2016-06-20T11:36:40Z
2016-06-20T11:36:40Z
Relative Natural Gradient for Learning Large Complex Models
Fisher information and natural gradient have provided deep insights and powerful tools for artificial neural networks. However, the related analysis becomes more and more difficult as the learner's structure grows large and complex. This paper makes a preliminary step towards a new direction. We extract a local component of a large neuron system and define its relative Fisher information metric, which describes this small component accurately and is invariant to the other parts of the system. This concept is important because the geometric structure is much simplified, and it can easily be applied to guide the learning of neural networks. We provide an analysis of a list of commonly used components and demonstrate how to use this concept to further improve optimization.
[ "['Ke Sun' 'Frank Nielsen']", "Ke Sun and Frank Nielsen" ]
cs.CL cs.LG stat.ML
null
1606.06121
null
null
http://arxiv.org/pdf/1606.06121v1
2016-06-20T13:58:45Z
2016-06-20T13:58:45Z
Quantifying and Reducing Stereotypes in Word Embeddings
Machine learning algorithms are optimized to model statistical properties of the training data. If the input data reflects stereotypes and biases of the broader society, then the output of the learning algorithm also captures these stereotypes. In this paper, we initiate the study of gender stereotypes in {\em word embedding}, a popular framework to represent text data. As their use becomes increasingly common, applications can inadvertently amplify unwanted stereotypes. We show across multiple datasets that the embeddings contain significant gender stereotypes, especially with regard to professions. We created a novel gender analogy task and combined it with crowdsourcing to systematically quantify the gender bias in a given embedding. We developed an efficient algorithm that reduces gender stereotype using just a handful of training examples while preserving the useful geometric properties of the embedding. We evaluated our algorithm on several metrics. While we focus on male/female stereotypes, our framework may be applicable to other types of embedding biases.
[ "['Tolga Bolukbasi' 'Kai-Wei Chang' 'James Zou' 'Venkatesh Saligrama'\n 'Adam Kalai']", "Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, Adam\n Kalai" ]
cs.AI cs.LG stat.ML
null
1606.06126
null
null
http://arxiv.org/pdf/1606.06126v3
2018-09-24T17:13:08Z
2016-06-20T14:06:22Z
Bootstrapping with Models: Confidence Intervals for Off-Policy Evaluation
For an autonomous agent, executing a poor policy may be costly or even dangerous. For such agents, it is desirable to determine confidence interval lower bounds on the performance of any given policy without executing said policy. Current methods for exact high confidence off-policy evaluation that use importance sampling require a substantial amount of data to achieve a tight lower bound. Existing model-based methods only address the problem in discrete state spaces. Since exact bounds are intractable for many domains we trade off strict guarantees of safety for more data-efficient approximate bounds. In this context, we propose two bootstrapping off-policy evaluation methods which use learned MDP transition models in order to estimate lower confidence bounds on policy performance with limited data in both continuous and discrete state spaces. Since direct use of a model may introduce bias, we derive a theoretical upper bound on model bias for when the model transition function is estimated with i.i.d. trajectories. This bound broadens our understanding of the conditions under which model-based methods have high bias. Finally, we empirically evaluate our proposed methods and analyze the settings in which different bootstrapping off-policy confidence interval methods succeed and fail.
[ "['Josiah P. Hanna' 'Peter Stone' 'Scott Niekum']", "Josiah P. Hanna, Peter Stone, Scott Niekum" ]
cs.NE cs.LG
null
1606.06160
null
null
http://arxiv.org/pdf/1606.06160v3
2018-02-02T01:43:54Z
2016-06-20T15:02:31Z
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during the backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during the forward and backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be implemented efficiently on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerating the training of low bitwidth neural networks on such hardware. Our experiments on the SVHN and ImageNet datasets show that DoReFa-Net can achieve prediction accuracy comparable to that of its 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights and 2-bit activations can be trained from scratch using 6-bit gradients to reach 46.1% top-1 accuracy on the ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
[ "Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou", "['Shuchang Zhou' 'Yuxin Wu' 'Zekun Ni' 'Xinyu Zhou' 'He Wen' 'Yuheng Zou']" ]
cs.LG cs.DC
null
1606.06234
null
null
http://arxiv.org/pdf/1606.06234v1
2016-06-20T18:22:09Z
2016-06-20T18:22:09Z
CNNLab: a Novel Parallel Framework for Neural Networks using GPU and FPGA-a Practical Study with Trade-off Analysis
Designing and implementing efficient, provably correct parallel neural network processing is challenging. Existing high-level parallel abstractions such as MapReduce are insufficiently expressive, while low-level tools such as MPI and Pthreads leave ML experts repeatedly solving the same design challenges. Moreover, the diversity of applications and the large scale of the data pose a significant challenge to building a flexible and high-performance implementation of deep learning neural networks. To improve performance while maintaining scalability, we present CNNLab, a novel deep learning framework using GPU- and FPGA-based accelerators. CNNLab provides a uniform programming model to users so that the hardware implementation and the scheduling are invisible to programmers. At runtime, CNNLab weighs the trade-offs between GPU and FPGA before offloading tasks to the accelerators. Experimental results on a state-of-the-art Nvidia K40 GPU and an Altera DE5 FPGA board demonstrate that CNNLab provides a universal framework with efficient support for diverse applications without increasing the burden on programmers. Moreover, we analyze in quantitative detail the performance, throughput, power, energy, and performance density of both approaches. The experimental results illuminate the trade-offs between GPU and FPGA and provide useful practical experience for the deep learning research community.
[ "Maohua Zhu, Liu Liu, Chao Wang, Yuan Xie", "['Maohua Zhu' 'Liu Liu' 'Chao Wang' 'Yuan Xie']" ]
stat.ML cs.LG
null
1606.06237
null
null
http://arxiv.org/pdf/1606.06237v4
2016-12-15T13:35:22Z
2016-06-20T18:30:10Z
Online and Differentially-Private Tensor Decomposition
In this paper, we resolve many of the key algorithmic questions regarding robustness, memory efficiency, and differential privacy of tensor decomposition. We propose simple variants of the tensor power method which enjoy these strong properties. We present the first guarantees for an online tensor power method, which has a linear memory requirement. Moreover, we present a noise-calibrated tensor power method with efficient privacy guarantees. At the heart of all these guarantees lies a careful perturbation analysis derived in this paper, which significantly improves upon existing results.
[ "['Yining Wang' 'Animashree Anandkumar']", "Yining Wang, Animashree Anandkumar" ]
cs.GT cs.LG
null
1606.06244
null
null
http://arxiv.org/pdf/1606.06244v4
2016-12-16T20:44:36Z
2016-06-20T18:54:19Z
Learning in Games: Robustness of Fast Convergence
We show that learning algorithms satisfying a $\textit{low approximate regret}$ property experience fast convergence to approximate optimality in a large class of repeated games. Our property, which simply requires that each learner has small regret compared to a $(1+\epsilon)$-multiplicative approximation to the best action in hindsight, is ubiquitous among learning algorithms; it is satisfied even by the vanilla Hedge forecaster. Our results improve upon recent work of Syrgkanis et al. [SALS15] in a number of ways. We require only that players observe payoffs under other players' realized actions, as opposed to expected payoffs. We further show that convergence occurs with high probability, and show convergence under bandit feedback. Finally, we improve upon the speed of convergence by a factor of $n$, the number of players. Both the scope of settings and the class of algorithms for which our analysis provides fast convergence are considerably broader than in previous work. Our framework applies to dynamic population games via a low approximate regret property for shifting experts. Here we strengthen the results of Lykouris et al. [LST16] in two ways: We allow players to select learning algorithms from a larger class, which includes a minor variant of the basic Hedge algorithm, and we increase the maximum churn in players for which approximate optimality is achieved. In the bandit setting we present a new algorithm which provides a "small loss"-type bound with improved dependence on the number of actions in utility settings, and is both simple and efficient. This result may be of independent interest.
[ "['Dylan J. Foster' 'Zhiyuan Li' 'Thodoris Lykouris' 'Karthik Sridharan'\n 'Eva Tardos']", "Dylan J. Foster, Zhiyuan Li, Thodoris Lykouris, Karthik Sridharan, Eva\n Tardos" ]
cs.LG stat.ML
null
1606.06250
null
null
http://arxiv.org/pdf/1606.06250v1
2016-06-20T18:59:34Z
2016-06-20T18:59:34Z
An Empirical Comparison of Sampling Quality Metrics: A Case Study for Bayesian Nonnegative Matrix Factorization
In this work, we empirically explore the question: how can we assess the quality of samples from some target distribution? We assume that the samples are provided by some valid Monte Carlo procedure, so we are guaranteed that the collection of samples will asymptotically approximate the true distribution. Most current evaluation approaches focus on two questions: (1) Has the chain mixed, that is, is it sampling from the distribution? and (2) How independent are the samples (as MCMC procedures produce correlated samples)? Focusing on the case of Bayesian nonnegative matrix factorization, we empirically evaluate standard metrics of sampler quality as well as propose new metrics to capture aspects that these measures fail to expose. The aspect of sampling that is of particular interest to us is the ability (or inability) of sampling methods to move between multiple optima in NMF problems. As a proxy, we propose and study a number of metrics that might quantify the diversity of a set of NMF factorizations obtained by a sampler through quantifying the coverage of the posterior distribution. We compare the performance of a number of standard sampling methods for NMF in terms of these new metrics.
[ "['Arjumand Masood' 'Weiwei Pan' 'Finale Doshi-Velez']", "Arjumand Masood and Weiwei Pan and Finale Doshi-Velez" ]
stat.ML cs.CL cs.LG
null
1606.06352
null
null
http://arxiv.org/pdf/1606.06352v1
2016-06-20T22:30:19Z
2016-06-20T22:30:19Z
Visualizing textual models with in-text and word-as-pixel highlighting
We explore two techniques which use color to make sense of statistical text models. One method uses in-text annotations to illustrate a model's view of particular tokens in particular documents. Another uses a high-level, "words-as-pixels" graphic to display an entire corpus. Together, these methods offer both zoomed-in and zoomed-out perspectives into a model's understanding of text. We show how these interconnected methods help diagnose a classifier's poor performance on Twitter slang, and make sense of a topic model on historical political texts.
[ "Abram Handler, Su Lin Blodgett, Brendan O'Connor", "['Abram Handler' 'Su Lin Blodgett' \"Brendan O'Connor\"]" ]
cs.AI cs.LG stat.ML
null
1606.06357
null
null
http://arxiv.org/pdf/1606.06357v1
2016-06-20T22:52:48Z
2016-06-20T22:52:48Z
Complex Embeddings for Simple Link Prediction
In statistical relational learning, the link prediction problem is key to automatically understand the structure of large knowledge bases. As in previous studies, we propose to solve this problem through latent factorization. However, here we make use of complex valued embeddings. The composition of complex embeddings can handle a large variety of binary relations, among them symmetric and antisymmetric relations. Compared to state-of-the-art models such as Neural Tensor Network and Holographic Embeddings, our approach based on complex embeddings is arguably simpler, as it only uses the Hermitian dot product, the complex counterpart of the standard dot product between real vectors. Our approach is scalable to large datasets as it remains linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks.
[ "Th\\'eo Trouillon, Johannes Welbl, Sebastian Riedel, \\'Eric Gaussier,\n Guillaume Bouchard", "['Théo Trouillon' 'Johannes Welbl' 'Sebastian Riedel' 'Éric Gaussier'\n 'Guillaume Bouchard']" ]
cs.CL cs.LG stat.ML
null
1606.06361
null
null
null
null
null
A Probabilistic Generative Grammar for Semantic Parsing
Domain-general semantic parsing is a long-standing goal in natural language processing, where the semantic parser is capable of robustly parsing sentences from domains outside of which it was trained. Current approaches largely rely on additional supervision from new domains in order to generalize to those domains. We present a generative model of natural language utterances and logical forms and demonstrate its application to semantic parsing. Our approach relies on domain-independent supervision to generalize to new domains. We derive and implement efficient algorithms for training, parsing, and sentence generation. The work relies on a novel application of hierarchical Dirichlet processes (HDPs) for structured prediction, which we also present in this manuscript. This manuscript is an excerpt of chapter 4 from the Ph.D. thesis of Saparov (2022), where the model plays a central role in a larger natural language understanding system. This manuscript provides a new simplified and more complete presentation of the work first introduced in Saparov, Saraswat, and Mitchell (2017). The description and proofs of correctness of the training algorithm, parsing algorithm, and sentence generation algorithm are much simplified in this new presentation. We also describe the novel application of hierarchical Dirichlet processes for structured prediction. In addition, we extend the earlier work with a new model of word morphology, which utilizes the comprehensive morphological data from Wiktionary.
[ "Abulhair Saparov" ]
stat.ML cs.LG
null
1606.06366
null
null
http://arxiv.org/pdf/1606.06366v1
2016-06-20T23:58:13Z
2016-06-20T23:58:13Z
FSMJ: Feature Selection with Maximum Jensen-Shannon Divergence for Text Categorization
In this paper, we present a new wrapper feature selection approach based on Jensen-Shannon (JS) divergence, termed feature selection with maximum JS-divergence (FSMJ), for text categorization. Unlike most existing feature selection approaches, the proposed FSMJ approach is based on real-valued features, which provide more information for discrimination than the binary-valued features used in conventional approaches. We show that FSMJ is a greedy approach and that the JS-divergence increases monotonically as more features are selected. We conduct several experiments on real-life data sets, comparing against state-of-the-art feature selection approaches for text categorization. The superior performance of the proposed FSMJ approach demonstrates its effectiveness and further indicates its wide potential for applications in data mining.
[ "Bo Tang, Haibo He", "['Bo Tang' 'Haibo He']" ]
cs.LG cs.AI cs.CL
null
1606.06368
null
null
http://arxiv.org/pdf/1606.06368v2
2016-06-23T07:33:01Z
2016-06-20T23:59:25Z
Unanimous Prediction for 100% Precision with Application to Learning Semantic Mappings
Can we train a system that, on any new input, either says "don't know" or makes a prediction that is guaranteed to be correct? We answer the question in the affirmative provided our model family is well-specified. Specifically, we introduce the unanimity principle: only predict when all models consistent with the training data predict the same output. We operationalize this principle for semantic parsing, the task of mapping utterances to logical forms. We develop a simple, efficient method that reasons over the infinite set of all consistent models by only checking two of the models. We prove that our method obtains 100% precision even with a modest amount of training data from a possibly adversarial distribution. Empirically, we demonstrate the effectiveness of our approach on the standard GeoQuery dataset.
[ "Fereshte Khani, Martin Rinard, Percy Liang", "['Fereshte Khani' 'Martin Rinard' 'Percy Liang']" ]
cs.CR cs.LG
null
1606.06369
null
null
http://arxiv.org/pdf/1606.06369v1
2016-06-21T00:02:45Z
2016-06-21T00:02:45Z
Contextual Weisfeiler-Lehman Graph Kernel For Malware Detection
In this paper, we propose a novel graph kernel specifically to address a challenging problem in the field of cyber-security, namely, malware detection. Previous research has revealed the following: (1) Graph representations of programs are ideally suited for malware detection as they are robust against several attacks, (2) Besides capturing topological neighbourhoods (i.e., structural information) from these graphs it is important to capture the context under which the neighbourhoods are reachable to accurately detect malicious neighbourhoods. We observe that state-of-the-art graph kernels, such as Weisfeiler-Lehman kernel (WLK) capture the structural information well but fail to capture contextual information. To address this, we develop the Contextual Weisfeiler-Lehman kernel (CWLK) which is capable of capturing both these types of information. We show that for the malware detection problem, CWLK is more expressive and hence more accurate than WLK while maintaining comparable efficiency. Through our large-scale experiments with more than 50,000 real-world Android apps, we demonstrate that CWLK outperforms two state-of-the-art graph kernels (including WLK) and three malware detection techniques by more than 5.27% and 4.87% F-measure, respectively, while maintaining high efficiency. This high accuracy and efficiency make CWLK suitable for large-scale real-world malware detection.
[ "['Annamalai Narayanan' 'Guozhu Meng' 'Liu Yang' 'Jinliang Liu'\n 'Lihui Chen']", "Annamalai Narayanan, Guozhu Meng, Liu Yang, Jinliang Liu and Lihui\n Chen" ]
stat.ML cs.LG
null
1606.06377
null
null
http://arxiv.org/pdf/1606.06377v1
2016-06-21T00:45:35Z
2016-06-21T00:45:35Z
Kernel-based Generative Learning in Distortion Feature Space
This paper presents a novel kernel-based generative classifier, defined in a distortion subspace using polynomial series expansion, named the Kernel-Distortion (KD) classifier. An iterative kernel selection algorithm is developed to steadily improve classification performance by repeatedly removing and adding kernels. Experimental results on a character recognition application show not only that the proposed generative classifier performs better than many existing classifiers, but also that its recognition capability differs from that of the state-of-the-art discriminative classifier, the deep belief network. This recognition diversity indicates that a hybrid combination of the proposed generative classifier and the discriminative classifier could further improve classification performance. Two hybrid combination methods, cascading and stacking, have been implemented to verify the diversity and the resulting improvement.
[ "['Bo Tang' 'Paul M. Baggenstoss' 'Haibo He']", "Bo Tang, Paul M. Baggenstoss, Haibo He" ]
cs.IR cs.CL cs.LG
null
1606.06424
null
null
http://arxiv.org/pdf/1606.06424v1
2016-06-21T04:56:33Z
2016-06-21T04:56:33Z
A Novel Framework to Expedite Systematic Reviews by Automatically Building Information Extraction Training Corpora
A systematic review identifies and collates various clinical studies and compares data elements and results in order to provide an evidence-based answer to a particular clinical question. The process is manual and very time-consuming, and a tool to automate it is lacking. The aim of this work is to develop a framework, using natural language processing and machine learning, for building information extraction algorithms that identify data elements in a new primary publication without the expensive task of manually annotating gold standards for each data element type. The system is developed in two stages. First, it uses information contained in existing systematic reviews to identify, via a modified Jaccard similarity measure, the sentences in the PDF files of the included references that contain specific data elements of interest; these sentences are treated as labeled data. A Support Vector Machine (SVM) classifier is then trained on this labeled data to extract data elements of interest from a new article. We conducted experiments on Cochrane Database systematic reviews related to congestive heart failure, using inclusion criteria as an example data element. The empirical results show that the proposed system automatically identifies sentences containing the data element of interest with high recall (93.75%) and reasonable precision (27.05%, which means reviewers have to read only 3.7 sentences on average). The results suggest that the tool retrieves valuable information from the reference articles, even when identifying it manually is time-consuming. We therefore hope the tool will be useful for automatic data extraction from biomedical research publications. Future work will generalize this information framework to all types of systematic reviews.
[ "Tanmay Basu, Shraman Kumar, Abhishek Kalyan, Priyanka Jayaswal, Pawan\n Goyal, Stephen Pettifer and Siddhartha R. Jonnalagadda", "['Tanmay Basu' 'Shraman Kumar' 'Abhishek Kalyan' 'Priyanka Jayaswal'\n 'Pawan Goyal' 'Stephen Pettifer' 'Siddhartha R. Jonnalagadda']" ]
cs.LG q-bio.NC stat.ML
null
1606.06564
null
null
http://arxiv.org/pdf/1606.06564v2
2017-06-30T17:52:10Z
2016-06-21T13:35:43Z
An artificial neural network to find correlation patterns in an arbitrary number of variables
Methods to find correlation among variables are of interest to many disciplines, including statistics, machine learning, (big) data mining and neurosciences. Parameters that measure correlation between two variables are of limited utility when used with multiple variables. In this work, I propose a simple criterion to measure correlation among an arbitrary number of variables, based on a data set. The central idea is to i) design a function of the variables that can take different forms depending on a set of parameters, ii) calculate the difference between a statistic associated with the function computed on the data set and the same statistic computed on a randomised ("scrambled") version of the data set, and iii) optimise the parameters to maximise this difference. Many such functions can be organised in layers, which can in turn be stacked one on top of the other, forming a neural network. The function parameters are searched with an enhanced genetic algorithm called POET, and the resulting method is tested on a cancer gene data set. The method may have potential implications for some issues that affect the field of neural networks, such as overfitting, the need to process huge amounts of data for training and the presence of "adversarial examples".
[ "Alessandro Fontana", "['Alessandro Fontana']" ]
cs.AI cs.LG
null
1606.06565
null
null
http://arxiv.org/pdf/1606.06565v2
2016-07-25T17:23:29Z
2016-06-21T13:37:05Z
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
[ "Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John\n Schulman, Dan Man\\'e", "['Dario Amodei' 'Chris Olah' 'Jacob Steinhardt' 'Paul Christiano'\n 'John Schulman' 'Dan Mané']" ]
cs.LG cs.CV
null
1606.06582
null
null
http://arxiv.org/pdf/1606.06582v1
2016-06-21T14:12:52Z
2016-06-21T14:12:52Z
Augmenting Supervised Neural Networks with Unsupervised Objectives for Large-scale Image Classification
Unsupervised learning and supervised learning are key research topics in deep learning. However, as high-capacity supervised neural networks trained with a large amount of labels have achieved remarkable success in many computer vision tasks, the availability of large-scale labeled images reduced the significance of unsupervised learning. Inspired by the recent trend toward revisiting the importance of unsupervised learning, we investigate joint supervised and unsupervised learning in a large-scale setting by augmenting existing neural networks with decoding pathways for reconstruction. First, we demonstrate that the intermediate activations of pretrained large-scale classification networks preserve almost all the information of input images except a portion of local spatial details. Then, by end-to-end training of the entire augmented architecture with the reconstructive objective, we show improvement of the network performance for supervised tasks. We evaluate several variants of autoencoders, including the recently proposed "what-where" autoencoder that uses the encoder pooling switches, to study the importance of the architecture design. Taking the 16-layer VGGNet trained under the ImageNet ILSVRC 2012 protocol as a strong baseline for image classification, our methods improve the validation-set accuracy by a noticeable margin.
[ "Yuting Zhang, Kibok Lee, Honglak Lee", "['Yuting Zhang' 'Kibok Lee' 'Honglak Lee']" ]
cs.RO cs.LG
null
1606.06588
null
null
http://arxiv.org/pdf/1606.06588v1
2016-06-21T14:23:20Z
2016-06-21T14:23:20Z
ML-based tactile sensor calibration: A universal approach
We study the responses of two tactile sensors, the fingertip sensor from the iCub and the BioTac under different external stimuli. The question of interest is to which degree both sensors i) allow the estimation of force exerted on the sensor and ii) enable the recognition of differing degrees of curvature. Making use of a force controlled linear motor affecting the tactile sensors we acquire several high-quality data sets allowing the study of both sensors under exactly the same conditions. We also examined the structure of the representation of tactile stimuli in the recorded tactile sensor data using t-SNE embeddings. The experiments show that both the iCub and the BioTac excel in different settings.
[ "Maximilian Karl, Artur Lohrer, Dhananjay Shah, Frederik Diehl, Max\n Fiedler, Saahil Ognawala, Justin Bayer, Patrick van der Smagt", "['Maximilian Karl' 'Artur Lohrer' 'Dhananjay Shah' 'Frederik Diehl'\n 'Max Fiedler' 'Saahil Ognawala' 'Justin Bayer' 'Patrick van der Smagt']" ]
cs.CV cs.CL cs.LG
null
1606.06622
null
null
http://arxiv.org/pdf/1606.06622v3
2016-09-26T15:24:28Z
2016-06-21T15:38:27Z
Question Relevance in VQA: Identifying Non-Visual And False-Premise Questions
Visual Question Answering (VQA) is the task of answering natural-language questions about images. We introduce the novel problem of determining the relevance of questions to images in VQA. Current VQA models do not reason about whether a question is even related to the given image (e.g. What is the capital of Argentina?) or if it requires information from external resources to answer correctly. This can break the continuity of a dialogue in human-machine interaction. Our approaches for determining relevance are composed of two stages. Given an image and a question, (1) we first determine whether the question is visual or not, (2) if visual, we determine whether the question is relevant to the given image or not. Our approaches, based on LSTM-RNNs, VQA model uncertainty, and caption-question similarity, are able to outperform strong baselines on both relevance tasks. We also present human studies showing that VQA models augmented with such question relevance reasoning are perceived as more intelligent, reasonable, and human-like.
[ "['Arijit Ray' 'Gordon Christie' 'Mohit Bansal' 'Dhruv Batra' 'Devi Parikh']", "Arijit Ray, Gordon Christie, Mohit Bansal, Dhruv Batra, Devi Parikh" ]
cs.LG
null
1606.06630
null
null
http://arxiv.org/pdf/1606.06630v2
2016-11-12T19:47:10Z
2016-06-21T15:55:29Z
On Multiplicative Integration with Recurrent Neural Networks
We introduce a general and simple structural design called Multiplicative Integration (MI) to improve recurrent neural networks (RNNs). MI changes the way in which information from different sources flows and is integrated in the computational building block of an RNN, while introducing almost no extra parameters. The new structure can be easily embedded into many popular RNN models, including LSTMs and GRUs. We empirically analyze its learning behaviour and conduct evaluations on several tasks using different RNN models. Our experimental results demonstrate that Multiplicative Integration can provide a substantial performance boost over many existing RNN models.
[ "['Yuhuai Wu' 'Saizheng Zhang' 'Ying Zhang' 'Yoshua Bengio'\n 'Ruslan Salakhutdinov']", "Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio and Ruslan\n Salakhutdinov" ]
cs.LG
null
1606.06653
null
null
http://arxiv.org/pdf/1606.06653v1
2016-06-21T16:48:25Z
2016-06-21T16:48:25Z
Tracking Time-Vertex Propagation using Dynamic Graph Wavelets
Graph Signal Processing generalizes classical signal processing to signals or data indexed by the vertices of a weighted graph. So far, research efforts have focused on static graph signals. However, numerous applications involve graph signals evolving in time, such as the spreading or propagation of waves on a network. The analysis of this type of data requires a new set of methods that fully take into account both the time and graph dimensions. We propose a novel class of wavelet frames named Dynamic Graph Wavelets, whose time-vertex evolution follows a dynamic process. We demonstrate that this set of functions can be combined with sparsity-based approaches such as compressive sensing to reveal information about the dynamic processes occurring on a graph. Experiments on real seismological data show the efficiency of the technique, allowing the epicenters of earthquake events recorded by a seismic network to be estimated.
[ "Francesco Grassi, Nathanael Perraudin, Benjamin Ricaud", "['Francesco Grassi' 'Nathanael Perraudin' 'Benjamin Ricaud']" ]
cs.CR cs.LG
null
1606.06771
null
null
http://arxiv.org/pdf/1606.06771v2
2016-08-10T17:11:49Z
2016-06-21T21:14:48Z
A Stackelberg Game Perspective on the Conflict Between Machine Learning and Data Obfuscation
Data is the new oil; this refrain is repeated extensively in the age of internet tracking, machine learning, and data analytics. Social network analysis, cookie-based advertising, and government surveillance are all evidence of the use of data for commercial and national interests. Public pressure, however, is mounting for the protection of privacy. Frameworks such as differential privacy offer machine learning algorithms methods to guarantee limits to information disclosure, but they are seldom implemented. Recently, however, developers have made significant efforts to undermine tracking through obfuscation tools that hide user characteristics in a sea of noise. These services highlight an emerging clash between tracking and data obfuscation. In this paper, we conceptualize this conflict through a dynamic game between users and a machine learning algorithm that uses empirical risk minimization. First, a machine learner declares a privacy protection level, and then users respond by choosing their own perturbation amounts. We study the interaction between the users and the learner using a Stackelberg game. The utility functions quantify accuracy using expected loss and privacy in terms of the bounds of differential privacy. In equilibrium, we find selfish users tend to cause significant utility loss to trackers by perturbing heavily, in a phenomenon reminiscent of public good games. Trackers, however, can improve the balance by proactively perturbing the data themselves. While other work in this area has studied privacy markets and mechanism design for truthful reporting of user information, we take a different viewpoint by considering both user and learner perturbation.
[ "['Jeffrey Pawlick' 'Quanyan Zhu']", "Jeffrey Pawlick and Quanyan Zhu" ]
cs.LG
null
1606.06793
null
null
http://arxiv.org/pdf/1606.06793v3
2017-04-06T02:40:23Z
2016-06-22T00:26:59Z
Scalable Semi-supervised Learning with Graph-based Kernel Machine
Acquiring labels is often costly, whereas unlabeled data are usually easy to obtain in modern machine learning applications. Semi-supervised learning provides a principled machine learning framework to address such situations, and has been applied successfully in many real-world applications and industries. Nonetheless, most existing semi-supervised learning methods encounter two serious limitations when applied to modern, large-scale datasets: computational burden and memory demand. To this end, we present in this paper the Graph-based semi-supervised Kernel Machine (GKM), a method that leverages the generalization ability of kernel-based methods together with the geometrical and distributive information formulated through a spectral graph induced from data for semi-supervised learning. Our proposed GKM can be solved directly in the primal form using Stochastic Gradient Descent with the ideal convergence rate $O(\frac{1}{T})$. Moreover, our formulation is suitable for a wide spectrum of important loss functions in the machine learning literature (e.g., Hinge, smooth Hinge, Logistic, L1, and $\epsilon$-insensitive) and smoothness functions (i.e., $l_p(t) = |t|^p$ with $p \ge 1$). We further show that the well-known Laplacian Support Vector Machine is a special case of our formulation. We validate our proposed method on several benchmark datasets to demonstrate that GKM is appropriate for large-scale datasets: it is optimal in memory usage and yields superior classification accuracy whilst simultaneously achieving a significant computational speed-up over state-of-the-art baselines.
[ "Trung Le, Khanh Nguyen, Van Nguyen, Vu Nguyen, Dinh Phung", "['Trung Le' 'Khanh Nguyen' 'Van Nguyen' 'Vu Nguyen' 'Dinh Phung']" ]
cs.SI cs.LG physics.soc-ph
10.1209/0295-5075/117/38002
1606.06812
null
null
http://arxiv.org/abs/1606.06812v2
2016-06-23T02:58:40Z
2016-06-22T03:55:38Z
Link Prediction via Matrix Completion
Inspired by the practical importance of social networks, economic networks, biological networks and so on, studies of large and complex networks have attracted a surge of attention in recent years. Link prediction is a fundamental issue for understanding the mechanisms by which new links are added to these networks. We introduce the method of robust principal component analysis (robust PCA) into link prediction and estimate the missing entries of the adjacency matrix. On the one hand, our algorithm is based on the sparsity and low-rank property of the matrix; on the other hand, it also performs very well when the network is dense. This is because a relatively dense real network is still sparse in comparison with the complete graph. Extensive experiments on real networks from disparate fields show that, when the target network is connected and sufficiently dense, whether weighted or unweighted, our method is very effective, with prediction accuracy considerably improved compared with many state-of-the-art algorithms.
[ "['Ratha Pech' 'Dong Hao' 'Liming Pan' 'Hong Cheng' 'Tao Zhou']", "Ratha Pech, Dong Hao, Liming Pan, Hong Cheng and Tao Zhou" ]
cs.CL cs.LG cs.SD
null
1606.06864
null
null
http://arxiv.org/pdf/1606.06864v2
2016-09-16T15:20:39Z
2016-06-22T09:29:40Z
A Curriculum Learning Method for Improved Noise Robustness in Automatic Speech Recognition
The performance of automatic speech recognition systems under noisy environments still leaves room for improvement. Speech enhancement or feature enhancement techniques for increasing the noise robustness of these systems usually add components to the recognition system that need careful optimization. In this work, we propose a relatively simple curriculum training strategy called accordion annealing (ACCAN). It uses a multi-stage training schedule in which samples at signal-to-noise ratio (SNR) values as low as 0 dB are added first, and samples at increasingly higher SNR values are gradually added up to an SNR value of 50 dB. We also use a method called per-epoch noise mixing (PEM) that generates noisy training samples online during training, and thus enables dynamically changing the SNR of our training data. Both the ACCAN and the PEM methods are evaluated on an end-to-end speech recognition pipeline on the Wall Street Journal corpus. ACCAN decreases the average word error rate (WER) on the 20 dB to -10 dB SNR range by up to 31.4% when compared to a conventional multi-condition training method.
[ "Stefan Braun, Daniel Neil, Shih-Chii Liu", "['Stefan Braun' 'Daniel Neil' 'Shih-Chii Liu']" ]
cs.NE cs.CL cs.LG cs.SD
10.1109/ICASSP.2017.7952599
1606.06871
null
null
http://arxiv.org/abs/1606.06871v2
2017-03-29T08:08:29Z
2016-06-22T10:00:14Z
A Comprehensive Study of Deep Bidirectional LSTM RNNs for Acoustic Modeling in Speech Recognition
We present a comprehensive study of deep bidirectional long short-term memory (LSTM) recurrent neural network (RNN) based acoustic models for automatic speech recognition (ASR). We study the effect of size and depth and train models of up to 8 layers. We investigate the training aspect and study different variants of optimization methods, batching, truncated backpropagation, different regularization techniques such as dropout and $L_2$ regularization, and different gradient clipping variants. The major part of the experimental analysis was performed on the Quaero corpus. Additional experiments also were performed on the Switchboard corpus. Our best LSTM model has a relative improvement in word error rate of over 14\% compared to our best feed-forward neural network (FFNN) baseline on the Quaero task. On this task, we get our best result with an 8 layer bidirectional LSTM and we show that a pretraining scheme with layer-wise construction helps for deep LSTMs. Finally we compare the training calculation time of many of the presented experiments in relation with recognition performance. All the experiments were done with RETURNN, the RWTH extensible training framework for universal recurrent neural networks in combination with RASR, the RWTH ASR toolkit.
[ "['Albert Zeyer' 'Patrick Doetsch' 'Paul Voigtlaender' 'Ralf Schlüter'\n 'Hermann Ney']", "Albert Zeyer, Patrick Doetsch, Paul Voigtlaender, Ralf Schl\\\"uter,\n Hermann Ney" ]
cs.CL cs.LG
10.1016/j.csl.2017.04.008
1606.06950
null
null
http://arxiv.org/abs/1606.06950v2
2017-09-16T09:36:02Z
2016-06-22T13:51:57Z
A segmental framework for fully-unsupervised large-vocabulary speech recognition
Zero-resource speech technology is a growing research area that aims to develop methods for speech processing in the absence of transcriptions, lexicons, or language modelling text. Early term discovery systems focused on identifying isolated recurring patterns in a corpus, while more recent full-coverage systems attempt to completely segment and cluster the audio into word-like units---effectively performing unsupervised speech recognition. This article presents the first attempt we are aware of to apply such a system to large-vocabulary multi-speaker data. Our system uses a Bayesian modelling framework with segmental word representations: each word segment is represented as a fixed-dimensional acoustic embedding obtained by mapping the sequence of feature frames to a single embedding vector. We compare our system on English and Xitsonga datasets to state-of-the-art baselines, using a variety of measures including word error rate (obtained by mapping the unsupervised output to ground truth transcriptions). Very high word error rates are reported---in the order of 70--80% for speaker-dependent and 80--95% for speaker-independent systems---highlighting the difficulty of this task. Nevertheless, in terms of cluster quality and word segmentation metrics, we show that by imposing a consistent top-down segmentation while also using bottom-up knowledge from detected syllable boundaries, both single-speaker and multi-speaker versions of our system outperform a purely bottom-up single-speaker syllable-based approach. We also show that the discovered clusters can be made less speaker- and gender-specific by using an unsupervised autoencoder-like feature extractor to learn better frame-level features (prior to embedding). Our system's discovered clusters are still less pure than those of unsupervised term discovery systems, but provide far greater coverage.
[ "Herman Kamper, Aren Jansen, Sharon Goldwater", "['Herman Kamper' 'Aren Jansen' 'Sharon Goldwater']" ]
cs.LG cs.SI stat.ML
null
1606.06962
null
null
http://arxiv.org/pdf/1606.06962v1
2016-06-22T14:33:15Z
2016-06-22T14:33:15Z
Towards stationary time-vertex signal processing
Graph-based methods for signal processing have shown promise for the analysis of data exhibiting irregular structure, such as those found in social, transportation, and sensor networks. Yet, though these systems are often dynamic, state-of-the-art methods for signal processing on graphs ignore the dimension of time, treating successive graph signals independently or taking a global average. To address this shortcoming, this paper considers the statistical analysis of time-varying graph signals. We introduce a novel definition of joint (time-vertex) stationarity, which generalizes the classical definition of time stationarity and the more recent definition appropriate for graphs. Joint stationarity gives rise to a scalable Wiener optimization framework for joint denoising, semi-supervised learning, or more generally inverting a linear operator, that is provably optimal. Experimental results on real weather data demonstrate that taking the graph and time dimensions into account jointly can yield significant accuracy improvements in the reconstruction effort.
[ "Nathanael Perraudin and Andreas Loukas and Francesco Grassi and Pierre\n Vandergheynst", "['Nathanael Perraudin' 'Andreas Loukas' 'Francesco Grassi'\n 'Pierre Vandergheynst']" ]
cs.LG cs.AI stat.ML
null
1606.07035
null
null
http://arxiv.org/pdf/1606.07035v3
2017-01-26T14:26:27Z
2016-06-22T18:26:27Z
Ancestral Causal Inference
Constraint-based causal discovery from limited data is a notoriously difficult challenge due to the many borderline independence test decisions. Several approaches to improve the reliability of the predictions by exploiting redundancy in the independence information have been proposed recently. Though promising, existing approaches can still be greatly improved in terms of accuracy and scalability. We present a novel method that reduces the combinatorial explosion of the search space by using a more coarse-grained representation of causal information, drastically reducing computation time. Additionally, we propose a method to score causal predictions based on their confidence. Crucially, our implementation also allows one to easily combine observational and interventional data and to incorporate various types of available background knowledge. We prove soundness and asymptotic consistency of our method and demonstrate that it can outperform the state-of-the-art on synthetic data, achieving a speedup of several orders of magnitude. We illustrate its practical feasibility by applying it on a challenging protein data set.
[ "Sara Magliacane, Tom Claassen, Joris M. Mooij", "['Sara Magliacane' 'Tom Claassen' 'Joris M. Mooij']" ]
stat.ML cs.CL cs.LG
null
1606.07043
null
null
http://arxiv.org/pdf/1606.07043v1
2016-06-22T19:00:38Z
2016-06-22T19:00:38Z
Toward Interpretable Topic Discovery via Anchored Correlation Explanation
Many predictive tasks, such as diagnosing a patient based on their medical chart, are ultimately defined by the decisions of human experts. Unfortunately, encoding experts' knowledge is often time consuming and expensive. We propose a simple way to use fuzzy and informal knowledge from experts to guide discovery of interpretable latent topics in text. The underlying intuition of our approach is that latent factors should be informative about both correlations in the data and a set of relevance variables specified by an expert. Mathematically, this approach is a combination of the information bottleneck and Total Correlation Explanation (CorEx). We give a preliminary evaluation of Anchored CorEx, showing that it produces more coherent and interpretable topics on two distinct corpora.
[ "Kyle Reing, David C. Kale, Greg Ver Steeg, Aram Galstyan", "['Kyle Reing' 'David C. Kale' 'Greg Ver Steeg' 'Aram Galstyan']" ]
stat.ML cs.LG
null
1606.07081
null
null
http://arxiv.org/pdf/1606.07081v1
2016-06-22T20:06:10Z
2016-06-22T20:06:10Z
Finite Sample Prediction and Recovery Bounds for Ordinal Embedding
The goal of ordinal embedding is to represent items as points in a low-dimensional Euclidean space given a set of constraints in the form of distance comparisons like "item $i$ is closer to item $j$ than item $k$". Ordinal constraints like this often come from human judgments. To account for errors and variation in judgments, we consider the noisy situation in which the given constraints are independently corrupted by reversing the correct constraint with some probability. This paper makes several new contributions to this problem. First, we derive prediction error bounds for ordinal embedding with noise by exploiting the fact that the rank of a distance matrix of points in $\mathbb{R}^d$ is at most $d+2$. These bounds characterize how well a learned embedding predicts new comparative judgments. Second, we investigate the special case of a known noise model and study the Maximum Likelihood estimator. Third, knowledge of the noise model enables us to relate prediction errors to embedding accuracy. This relationship is highly non-trivial since we show that the linear map corresponding to distance comparisons is non-invertible, but there exists a nonlinear map that is invertible. Fourth, two new algorithms for ordinal embedding are proposed and evaluated in experiments.
[ "['Lalit Jain' 'Kevin Jamieson' 'Robert Nowak']", "Lalit Jain, Kevin Jamieson, Robert Nowak" ]
cs.GR cs.LG math.DG
10.1007/s00365-019-09489-8
1606.07104
null
null
http://arxiv.org/abs/1606.07104v7
2020-02-26T16:06:19Z
2016-06-22T20:59:12Z
Manifold Approximation by Moving Least-Squares Projection (MMLS)
In order to avoid the curse of dimensionality, frequently encountered in Big Data analysis, there has been vast development in the field of linear and nonlinear dimension reduction techniques in recent years. These techniques (sometimes referred to as manifold learning) assume that the scattered input data lies on a lower dimensional manifold, so the high dimensionality problem can be overcome by learning the lower dimensional behavior. However, in real-life applications, data is often very noisy. In this work, we propose a method to approximate $\mathcal{M}$, a $d$-dimensional $C^{m+1}$ smooth submanifold of $\mathbb{R}^n$ ($d \ll n$), based upon noisy scattered data points (i.e., a data cloud). We assume that the data points are located "near" the lower dimensional manifold and suggest a non-linear moving least-squares projection on an approximating $d$-dimensional manifold. Under some mild assumptions, the resulting approximant is shown to be infinitely smooth and of high approximation order (i.e., $O(h^{m+1})$, where $h$ is the fill distance and $m$ is the degree of the local polynomial approximation). The method presented here assumes no analytic knowledge of the approximated manifold, and the approximation algorithm is linear in the large dimension $n$. Furthermore, the approximating manifold can serve as a framework to perform operations directly on the high dimensional data in a computationally efficient manner. This way, the preparatory step of dimension reduction, which induces distortions to the data, can be avoided altogether.
[ "['Barak Sober' 'David Levin']", "Barak Sober and David Levin" ]
stat.ML cs.LG
null
1606.07112
null
null
http://arxiv.org/pdf/1606.07112v1
2016-06-22T21:18:50Z
2016-06-22T21:18:50Z
Visualizing Dynamics: from t-SNE to SEMI-MDPs
Deep Reinforcement Learning (DRL) is a trending field of research, showing great promise in many challenging problems such as playing Atari, solving Go and controlling robots. While DRL agents perform well in practice, we are still missing the tools to analyze their performance and visualize the temporal abstractions that they learn. In this paper, we present a novel method that automatically discovers an internal Semi Markov Decision Process (SMDP) model in the Deep Q Network's (DQN) learned representation. We suggest a novel visualization method that represents the SMDP model by a directed graph and visualizes it above a t-SNE map. We show how we can interpret the agent's policy and give evidence for the hierarchical state aggregation that DQNs are learning automatically. Our algorithm is fully automatic, does not require any domain specific knowledge and is evaluated using a novel likelihood-based evaluation criterion.
[ "Nir Ben Zrihem, Tom Zahavy, Shie Mannor", "['Nir Ben Zrihem' 'Tom Zahavy' 'Shie Mannor']" ]
stat.ML cs.LG
null
1606.07129
null
null
http://arxiv.org/pdf/1606.07129v1
2016-06-22T22:24:30Z
2016-06-22T22:24:30Z
Explainable Restricted Boltzmann Machines for Collaborative Filtering
The most accurate recommender systems are black-box models, hiding the reasoning behind their recommendations. Yet explanations have been shown to increase the user's trust in the system, in addition to providing other benefits such as scrutability, meaning the ability to verify the validity of recommendations. This gap between accuracy and transparency or explainability has generated an interest in automated explanation generation methods. Restricted Boltzmann Machines (RBMs) are accurate models for collaborative filtering (CF) that also lack interpretability. In this paper, we focus on RBM-based collaborative filtering recommendations, and further assume the absence of any additional data source, such as item content or user attributes. We thus propose a new Explainable RBM technique that computes the top-n recommendation list from items that are explainable. Experimental results show that our method is effective in generating accurate and explainable recommendations.
[ "['Behnoush Abdollahi' 'Olfa Nasraoui']", "Behnoush Abdollahi, Olfa Nasraoui" ]
cs.NE cs.AI cs.CE cs.LG cs.SY
10.1109/TNNLS.2016.2572310
1606.07149
null
null
http://arxiv.org/abs/1606.07149v1
2016-06-23T01:07:27Z
2016-06-23T01:07:27Z
An Approach to Stable Gradient Descent Adaptation of Higher-Order Neural Units
Stability evaluation of a weight-update system of higher-order neural units (HONUs) with polynomial aggregation of neural inputs (also known as classes of polynomial neural networks) for adaptation of both feedforward and recurrent HONUs by a gradient descent method is introduced. An essential core of the approach is based on the spectral radius of the weight-update system, and it allows stability monitoring and its maintenance at every adaptation step individually. Assuring stability of the weight-update system (at every single adaptation step) naturally results in adaptation stability of the whole neural architecture that adapts to target data. As an aside, the approach used highlights the fact that the weight optimization of a HONU is a linear problem, so the proposed approach can be generally extended to any neural architecture that is linear in its adaptable parameters.
[ "['Ivo Bukovsky' 'Noriyasu Homma']", "Ivo Bukovsky and Noriyasu Homma" ]
cs.CR cs.LG
null
1606.07150
null
null
http://arxiv.org/pdf/1606.07150v2
2016-09-26T10:07:11Z
2016-06-23T01:08:10Z
Adaptive and Scalable Android Malware Detection through Online Learning
It is well-known that malware constantly evolves so as to evade detection, and this causes the entire malware population to be non-stationary. Contrary to this fact, prior works on machine learning based Android malware detection have assumed that the distribution of the observed malware characteristics (i.e., features) does not change over time. In this work, we address the problem of malware population drift and propose a novel online machine learning based framework, named DroidOL, to handle it and effectively detect malware. In order to perform accurate detection, security-sensitive behaviors are captured from apps in the form of inter-procedural control-flow sub-graph features using a state-of-the-art graph kernel. In order to perform scalable detection and to adapt to the drift and evolution in the malware population, an online passive-aggressive classifier is used. In a large-scale comparative analysis with more than 87,000 apps, DroidOL achieves 84.29% accuracy, outperforming two state-of-the-art malware detection techniques by more than 20% in their typical batch learning setting and more than 3% when they are continuously re-trained. Our experimental findings strongly indicate that online learning based approaches are highly suitable for real-world malware detection.
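The online component above is a standard passive-aggressive update. A minimal sketch of that piece alone, assuming scikit-learn and random stand-in features in place of the graph-kernel features (labels, batch sizes, and dimensions are hypothetical):

import numpy as np
from sklearn.linear_model import PassiveAggressiveClassifier

rng = np.random.default_rng(0)
clf = PassiveAggressiveClassifier(C=0.5)
classes = np.array([0, 1])  # 0 = benign, 1 = malware (hypothetical labels)

for t in range(100):                       # stream of app batches
    X_batch = rng.normal(size=(32, 500))   # stand-in for graph-kernel features
    y_batch = rng.integers(0, 2, size=32)
    clf.partial_fit(X_batch, y_batch, classes=classes)  # online update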
[ "Annamalai Narayanan, Liu Yang, Lihui Chen and Liu Jinliang", "['Annamalai Narayanan' 'Liu Yang' 'Lihui Chen' 'Liu Jinliang']" ]
stat.ML cs.LG
null
1606.07163
null
null
http://arxiv.org/pdf/1606.07163v1
2016-06-23T02:08:58Z
2016-06-23T02:08:58Z
Interpretable Machine Learning Models for the Digital Clock Drawing Test
The Clock Drawing Test (CDT) is a rapid, inexpensive, and popular neuropsychological screening tool for cognitive conditions. The Digital Clock Drawing Test (dCDT) uses novel software to analyze data from a digitizing ballpoint pen that reports its position with considerable spatial and temporal precision, making possible the analysis of both the drawing process and final product. We developed methodology to analyze pen stroke data from these drawings, and computed a large collection of features which were then analyzed with a variety of machine learning techniques. The resulting scoring systems were designed to be more accurate than the systems currently used by clinicians, but just as interpretable and easy to use. The systems also allow us to quantify the tradeoff between accuracy and interpretability. We created automated versions of the CDT scoring systems currently used by clinicians, allowing us to benchmark our models, which indicated that our machine learning models substantially outperformed the existing scoring systems.
[ "['William Souillard-Mandar' 'Randall Davis' 'Cynthia Rudin' 'Rhoda Au'\n 'Dana Penney']", "William Souillard-Mandar, Randall Davis, Cynthia Rudin, Rhoda Au, Dana\n Penney" ]
cs.IR cs.LG
null
1606.07219
null
null
http://arxiv.org/pdf/1606.07219v2
2016-07-01T21:53:43Z
2016-06-23T08:16:38Z
Learning Dynamic Classes of Events using Stacked Multilayer Perceptron Networks
People often use a web search engine to find information about events of interest, for example, sport competitions, political elections, festivals and entertainment news. In this paper, we study the problem of detecting event-related queries, which is the first step before selecting a suitable time-aware retrieval model. In general, event-related information needs can be observed in query streams through various temporal patterns of user search behavior, e.g., spiky peaks for popular events, and periodicities for repetitive events. However, it is also common that users search for non-popular events, which may not exhibit temporal variations in query streams, e.g., past events that recently occurred, historical events triggered by anniversaries or similar events, and future events anticipated to happen. To address the challenge of detecting dynamic classes of events, we propose a novel deep learning model to classify a given query into a predetermined set of multiple event types. Our proposed model, a Stacked Multilayer Perceptron (S-MLP) network, consists of multilayer perceptrons used as basic learning units. We assemble stacked units to further learn complex relationships between neurons in successive layers. To evaluate our proposed model, we conduct experiments using real-world queries and a set of manually created ground truth. Preliminary results have shown that our proposed deep learning model significantly outperforms the state-of-the-art classification models.
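As a rough stand-in for the kind of model described (not the authors' S-MLP architecture), a deep multilayer perceptron over hand-crafted temporal query features can be set up in a few lines; the features and labels here are random placeholders:

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))      # e.g. peak height, periodicity, trend
y = rng.integers(0, 4, size=1000)    # hypothetical event-type labels

clf = MLPClassifier(hidden_layer_sizes=(64, 64, 32), max_iter=300)
clf.fit(X, y)
print(clf.predict(X[:5]))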
[ "Nattiya Kanhabua and Huamin Ren and Thomas B. Moeslund", "['Nattiya Kanhabua' 'Huamin Ren' 'Thomas B. Moeslund']" ]
cs.CV cs.LG
null
1606.07230
null
null
http://arxiv.org/pdf/1606.07230v2
2017-08-08T09:24:18Z
2016-06-23T08:52:39Z
Deep Learning Markov Random Field for Semantic Segmentation
Semantic segmentation tasks can be well modeled by Markov Random Field (MRF). This paper addresses semantic segmentation by incorporating high-order relations and a mixture of label contexts into MRF. Unlike previous works that optimized MRFs using iterative algorithms, we solve MRF by proposing a Convolutional Neural Network (CNN), namely the Deep Parsing Network (DPN), which enables deterministic end-to-end computation in a single forward pass. Specifically, DPN extends a contemporary CNN to model unary terms, and additional layers are devised to approximate the mean field (MF) algorithm for pairwise terms. It has several appealing properties. First, different from recent works that required many iterations of MF during back-propagation, DPN is able to achieve high performance by approximating one iteration of MF. Second, DPN represents various types of pairwise terms, making many existing models its special cases. Furthermore, pairwise terms in DPN provide a unified framework to encode rich contextual information in high-dimensional data, such as images and videos. Third, DPN makes MF easier to parallelize and speed up, thus enabling efficient inference. DPN is thoroughly evaluated on standard semantic image/video segmentation benchmarks, where a single DPN model yields state-of-the-art segmentation accuracies on the PASCAL VOC 2012, Cityscapes and CamVid datasets.
[ "Ziwei Liu, Xiaoxiao Li, Ping Luo, Chen Change Loy, Xiaoou Tang", "['Ziwei Liu' 'Xiaoxiao Li' 'Ping Luo' 'Chen Change Loy' 'Xiaoou Tang']" ]
stat.ML cs.LG
10.13140/RG.2.1.2436.5683
1606.07251
null
null
http://arxiv.org/abs/1606.07251v1
2016-06-23T09:53:30Z
2016-06-23T09:53:30Z
Algorithmic Composition of Melodies with Deep Recurrent Neural Networks
A big challenge in algorithmic composition is to devise a model that is both easily trainable and able to reproduce the long-range temporal dependencies typical of music. Here we investigate how artificial neural networks can be trained on a large corpus of melodies and turned into automated music composers able to generate new melodies coherent with the style they have been trained on. We employ gated recurrent unit networks that have been shown to be particularly efficient in learning complex sequential activations with arbitrary long time lags. Our model processes rhythm and melody in parallel while modeling the relation between these two features. Using such an approach, we were able to generate interesting complete melodies or suggest possible continuations of a melody fragment that is coherent with the characteristics of the fragment itself.
[ "['Florian Colombo' 'Samuel P. Muscinelli' 'Alexander Seeholzer'\n 'Johanni Brea' 'Wulfram Gerstner']", "Florian Colombo, Samuel P. Muscinelli, Alexander Seeholzer, Johanni\n Brea and Wulfram Gerstner" ]
cs.NE cs.LG
null
1606.07262
null
null
http://arxiv.org/pdf/1606.07262v1
2016-06-23T10:38:49Z
2016-06-23T10:38:49Z
On the Theoretical Capacity of Evolution Strategies to Statistically Learn the Landscape Hessian
We study the theoretical capacity to statistically learn local landscape information by Evolution Strategies (ESs). Specifically, we investigate the covariance matrix when constructed by ESs operating with the selection operator alone. We model continuous generation of candidate solutions about quadratic basins of attraction, with deterministic selection of the decision vectors that minimize the objective function values. Our goal is to rigorously show that accumulation of winning individuals carries the potential to reveal valuable information about the search landscape, e.g., as already practically utilized by derandomized ES variants. We first show that the statistically-constructed covariance matrix over such winning decision vectors shares the same eigenvectors with the Hessian matrix about the optimum. We then provide an analytic approximation of this covariance matrix for a non-elitist multi-child $(1,\lambda)$-strategy, which holds for a large population size $\lambda$. Finally, we also numerically corroborate our results.
[ "['Ofer M. Shir' 'Jonathan Roslund' 'Amir Yehudayoff']", "Ofer M. Shir, Jonathan Roslund and Amir Yehudayoff" ]
stat.ML cs.LG
10.1016/j.isprsjprs.2015.01.006
1606.07279
null
null
http://arxiv.org/abs/1606.07279v1
2016-06-23T12:05:23Z
2016-06-23T12:05:23Z
Multiclass feature learning for hyperspectral image classification: sparse and hierarchical solutions
In this paper, we tackle the question of discovering an effective set of spatial filters to solve hyperspectral classification problems. Instead of fixing the filters and their parameters a priori using expert knowledge, we let the model find them within random draws in the (possibly infinite) space of possible filters. We define an active set feature learner that includes in the model only features that improve the classifier. To this end, we consider a fast and linear classifier, multiclass logistic classification, and show that with a good representation (the filters discovered), such a simple classifier can reach at least state-of-the-art performance. We apply the proposed active set learner in four hyperspectral image classification problems, including agricultural and urban classification at different resolutions, as well as multimodal data. We also propose a hierarchical setting, which allows us to generate more complex banks of features that can better describe the nonlinearities present in the data.
[ "['Devis Tuia' 'Rémi Flamary' 'Nicolas Courty']", "Devis Tuia, R\\'emi Flamary, Nicolas Courty" ]
cs.LG
10.1007/978-3-319-56994-9_18
1606.07283
null
null
http://arxiv.org/abs/1606.07283v1
2016-06-23T12:12:45Z
2016-06-23T12:12:45Z
Event Abstraction for Process Mining using Supervised Learning Techniques
Process mining techniques focus on extracting insight in processes from event logs. In many cases, events recorded in the event log are too fine-grained, causing process discovery algorithms to discover incomprehensible process models or process models that are not representative of the event log. We show that when process discovery algorithms are only able to discover an unrepresentative process model from a low-level event log, structure in the process can in some cases still be discovered by first abstracting the event log to a higher level of granularity. This gives rise to the challenge of bridging the gap between an original low-level event log and a desired high-level perspective on this log, such that a more structured or more comprehensible process model can be discovered. We show that supervised learning can be leveraged for the event abstraction task when annotations with high-level interpretations of the low-level events are available for a subset of the sequences (i.e., traces). We present a method to generate feature vector representations of events based on XES extensions, and describe an approach to abstract events in an event log with Conditional Random Fields using these event features. Furthermore, we propose a sequence-focused metric to evaluate supervised event abstraction results that fits closely to the tasks of process discovery and conformance checking. We conclude this paper by demonstrating the usefulness of supervised event abstraction for obtaining more structured and/or more comprehensible process models using both real life event data and synthetic event data.
[ "Niek Tax, Natalia Sidorova, Reinder Haakma, Wil M. P. van der Aalst", "['Niek Tax' 'Natalia Sidorova' 'Reinder Haakma' 'Wil M. P. van der Aalst']" ]
cs.LG math.OC
10.1109/CAMSAP.2015.7383796
1606.07286
null
null
http://arxiv.org/abs/1606.07286v1
2016-06-23T12:25:01Z
2016-06-23T12:25:01Z
Importance sampling strategy for non-convex randomized block-coordinate descent
As the number of samples and the dimensionality of optimization problems related to statistics and machine learning explode, block coordinate descent algorithms have gained popularity since they reduce the original problem to several smaller ones. Coordinates to be optimized are usually selected randomly according to a given probability distribution. We introduce an importance sampling strategy that helps randomized coordinate descent algorithms focus on blocks that are still far from convergence. The framework applies to problems composed of the sum of two possibly non-convex terms, one being separable and non-smooth. We have compared our algorithm to a full gradient proximal approach as well as to a randomized block coordinate algorithm that considers uniform sampling and cyclic block coordinate descent. Experimental evidence shows the clear benefit of using an importance sampling strategy.
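One plausible instantiation of the idea (our sketch, not the paper's exact scheme) is to make a block's sampling probability proportional to its most recently observed partial-gradient norm, so blocks far from convergence are visited more often; here on a simple least-squares problem:

import numpy as np

rng = np.random.default_rng(0)
n, d, n_blocks = 200, 50, 10
A = rng.normal(size=(n, d)); b = rng.normal(size=n)
x = np.zeros(d)
blocks = np.array_split(np.arange(d), n_blocks)
score = np.ones(n_blocks)                 # optimistic initial scores
lr = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from the spectral norm

for it in range(2000):
    p = score / score.sum()               # importance-sampling distribution
    k = rng.choice(n_blocks, p=p)
    idx = blocks[k]
    g = A[:, idx].T @ (A @ x - b)         # partial gradient for block k
    x[idx] -= lr * g
    score[k] = np.linalg.norm(g) + 1e-8   # importance = recent gradient norm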
[ "['Rémi Flamary' 'Alain Rakotomamonjy' 'Gilles Gasso']", "R\\'emi Flamary (LAGRANGE, OCA), Alain Rakotomamonjy (LITIS), Gilles\n Gasso (LITIS)" ]
stat.ML cs.LG
10.1109/TGRS.2016.2585201
1606.07289
null
null
http://arxiv.org/abs/1606.07289v1
2016-06-23T12:36:01Z
2016-06-23T12:36:01Z
Non-convex regularization in remote sensing
In this paper, we study the effect of different regularizers and their implications in high dimensional image classification and sparse linear unmixing. Although kernelization or sparse methods are globally accepted solutions for processing data in high dimensions, we present here a study on the impact of the form of regularization used and its parametrization. We consider regularization via traditional squared ($\ell_2$) and sparsity-promoting ($\ell_1$) norms, as well as more unconventional nonconvex regularizers ($\ell_p$ and the Log Sum Penalty). We compare their properties and advantages on several classification and linear unmixing tasks and provide advice on the choice of the best regularizer for the problem at hand. Finally, we also provide a fully functional toolbox for the community.
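For concreteness, the four penalty families compared can be written out for a weight vector w; the eps and p values below are illustrative choices, not the paper's settings:

import numpy as np

def l2(w): return np.sum(w ** 2)                   # squared norm (ridge)
def l1(w): return np.sum(np.abs(w))                # sparsity-promoting (lasso)
def lp(w, p=0.5): return np.sum(np.abs(w) ** p)    # nonconvex, 0 < p < 1
def log_sum(w, eps=1e-3):                          # Log Sum Penalty
    return np.sum(np.log(1.0 + np.abs(w) / eps))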
[ "['Devis Tuia' 'Remi Flamary' 'Michel Barlaud']", "Devis Tuia, Remi Flamary, Michel Barlaud" ]
cs.CL cs.IR cs.LG cs.NE stat.ML
null
1606.07298
null
null
http://arxiv.org/pdf/1606.07298v1
2016-06-23T12:53:31Z
2016-06-23T12:53:31Z
Explaining Predictions of Non-Linear Classifiers in NLP
Layer-wise relevance propagation (LRP) is a recently proposed technique for explaining predictions of complex non-linear classifiers in terms of input variables. In this paper, we apply LRP for the first time to natural language processing (NLP). More precisely, we use it to explain the predictions of a convolutional neural network (CNN) trained on a topic categorization task. Our analysis highlights which words are relevant for a specific prediction of the CNN. We compare our technique to standard sensitivity analysis, both qualitatively and quantitatively, using a "word deleting" perturbation experiment, a PCA analysis, and various visualizations. All experiments validate the suitability of LRP for explaining the CNN predictions, which is also in line with results reported in recent image classification studies.
[ "Leila Arras and Franziska Horn and Gr\\'egoire Montavon and\n Klaus-Robert M\\\"uller and Wojciech Samek", "['Leila Arras' 'Franziska Horn' 'Grégoire Montavon' 'Klaus-Robert Müller'\n 'Wojciech Samek']" ]
cs.RO cs.LG stat.ML
null
1606.07312
null
null
http://arxiv.org/pdf/1606.07312v1
2016-06-23T13:44:28Z
2016-06-23T13:44:28Z
Unsupervised preprocessing for Tactile Data
Tactile information is important for gripping, stable grasp, and in-hand manipulation, yet the complexity of tactile data prevents widespread use of such sensors. We make use of an unsupervised learning algorithm that transforms the complex tactile data into a compact, latent representation without the need to record ground truth reference data. These compact representations can either be used directly in a reinforcement learning based controller or can be used to calibrate the tactile sensor to physical quantities with only a few datapoints. We show the quality of our latent representation by predicting important features and with a simple control task.
[ "['Maximilian Karl' 'Justin Bayer' 'Patrick van der Smagt']", "Maximilian Karl, Justin Bayer, Patrick van der Smagt" ]
cs.LG cs.NA
null
1606.07315
null
null
http://arxiv.org/pdf/1606.07315v3
2016-12-08T19:48:40Z
2016-06-23T13:57:56Z
Nearly-optimal Robust Matrix Completion
In this paper, we consider the problem of Robust Matrix Completion (RMC), where the goal is to recover a low-rank matrix by observing a small number of its entries, out of which a few can be arbitrarily corrupted. We propose a simple projected gradient descent method to estimate the low-rank matrix that alternately performs a projected gradient descent step and cleans up a few of the corrupted entries using hard-thresholding. Our algorithm solves RMC using a nearly optimal number of observations as well as a nearly optimal number of corruptions. Our result also implies significant improvement over the existing time complexity bounds for the low-rank matrix completion problem. Finally, an application of our result to the robust PCA problem (low-rank + sparse matrix separation) leads to a nearly linear time (in the matrix dimensions) algorithm for the same; existing state-of-the-art methods require quadratic time. Our empirical results corroborate our theoretical results and show that even for moderate sized problems, our method for robust PCA is an order of magnitude faster than the existing methods.
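A toy sketch of the alternating scheme described above (illustrative only; step sizes, thresholds, and the precise projection differ from the paper's algorithm):

import numpy as np

def rmc(M_obs, mask, r=5, thresh=1.0, iters=100, lr=1.0):
    L = np.zeros_like(M_obs)                  # low-rank estimate
    S = np.zeros_like(M_obs)                  # sparse corruption estimate
    for _ in range(iters):
        R = mask * (M_obs - L - S)            # residual on observed entries
        # Gradient step followed by rank-r projection via truncated SVD.
        U, s, Vt = np.linalg.svd(L + lr * R, full_matrices=False)
        L = (U[:, :r] * s[:r]) @ Vt[:r]
        # Hard-threshold: entries with large residual are flagged as corrupted.
        R = mask * (M_obs - L)
        S = np.where(np.abs(R) > thresh, R, 0.0)
    return L, S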
[ "['Yeshwanth Cherapanamjeri' 'Kartik Gupta' 'Prateek Jain']", "Yeshwanth Cherapanamjeri, Kartik Gupta, Prateek Jain" ]
cs.CV cs.LG stat.ML
null
1606.07326
null
null
http://arxiv.org/pdf/1606.07326v3
2016-07-03T09:39:30Z
2016-06-23T14:30:36Z
DropNeuron: Simplifying the Structure of Deep Neural Networks
Deep learning using multi-layer neural network (NN) architectures manifests superb power in modern machine learning systems. The trained Deep Neural Networks (DNNs) are typically large. The question we would like to address is whether it is possible to simplify the NN during the training process to achieve reasonable performance within an acceptable computational time. We present a novel approach to optimising a deep neural network through regularisation of the network architecture. We propose regularisers which support a simple mechanism of dropping neurons during the network training process. The method supports the construction of simpler deep neural networks whose performance is comparable to that of the original network. As a proof of concept, we evaluate the proposed method with examples including sparse linear regression, deep autoencoder and convolutional neural network. The evaluations demonstrate excellent performance. The code for this work can be found in http://www.github.com/panweihit/DropNeuron
[ "Wei Pan and Hao Dong and Yike Guo", "['Wei Pan' 'Hao Dong' 'Yike Guo']" ]
cs.CL cs.AI cs.CV cs.LG
null
1606.07356
null
null
http://arxiv.org/pdf/1606.07356v2
2016-09-27T19:56:22Z
2016-06-23T16:05:16Z
Analyzing the Behavior of Visual Question Answering Models
Recently, a number of deep-learning based models have been proposed for the task of Visual Question Answering (VQA). The performance of most models is clustered around 60-70%. In this paper we propose systematic methods to analyze the behavior of these models as a first step towards recognizing their strengths and weaknesses, and identifying the most fruitful directions for progress. We analyze two models, one each from the two major classes of VQA models -- with-attention and without-attention -- and show the similarities and differences in the behavior of these models. We also analyze the winning entry of the VQA Challenge 2016. Our behavior analysis reveals that despite recent progress, today's VQA models are "myopic" (tend to fail on sufficiently novel instances), often "jump to conclusions" (converge on a predicted answer after 'listening' to just half the question), and are "stubborn" (do not change their answers across images).
[ "['Aishwarya Agrawal' 'Dhruv Batra' 'Devi Parikh']", "Aishwarya Agrawal, Dhruv Batra, Devi Parikh" ]
stat.ML cs.LG
null
1606.07365
null
null
http://arxiv.org/pdf/1606.07365v1
2016-06-23T16:23:35Z
2016-06-23T16:23:35Z
Parallel SGD: When does averaging help?
Consider a number of workers running SGD independently on the same pool of data and averaging the models every once in a while -- a common but not well understood practice. We study model averaging as a variance-reducing mechanism and describe two ways in which the frequency of averaging affects convergence. For convex objectives, we show the benefit of frequent averaging depends on the gradient variance envelope. For non-convex objectives, we illustrate that this benefit depends on the presence of multiple globally optimal points. We complement our findings with multicore experiments on both synthetic and real data.
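The setting under study is easy to reproduce in miniature. A single-process sketch (our assumptions: a least-squares objective and synthetic data) where several workers run SGD independently and their models are averaged every few steps:

import numpy as np

rng = np.random.default_rng(0)
n, d, workers = 1000, 10, 4
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

models = [np.zeros(d) for _ in range(workers)]
lr, avg_every = 0.01, 10

for step in range(1, 501):
    for m in models:
        i = rng.integers(n)                    # one stochastic sample
        m -= lr * (X[i] @ m - y[i]) * X[i]     # SGD step on squared loss
    if step % avg_every == 0:                  # periodic model averaging
        avg = np.mean(models, axis=0)
        models = [avg.copy() for _ in range(workers)]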
[ "Jian Zhang, Christopher De Sa, Ioannis Mitliagkas, Christopher R\\'e", "['Jian Zhang' 'Christopher De Sa' 'Ioannis Mitliagkas' 'Christopher Ré']" ]
stat.AP cs.LG stat.ML
null
1606.07369
null
null
http://arxiv.org/pdf/1606.07369v1
2016-06-22T15:55:22Z
2016-06-22T15:55:22Z
Personalized Prognostic Models for Oncology: A Machine Learning Approach
We have applied a little-known data transformation to subsets of the Surveillance, Epidemiology, and End Results (SEER) publicly available data of the National Cancer Institute (NCI) to make it suitable input to standard machine learning classifiers. This transformation properly treats the right-censored data in the SEER data, and the resulting Random Forest and Multi-Layer Perceptron models predict full survival curves. Treating the 6, 12, and 60 month points of the resulting survival curves as 3 binary classifiers, the 18 resulting classifiers have AUC values ranging from .765 to .885. Further evidence that the models have generalized well from the training data is provided by the extremely high levels of agreement between the random forest and neural network models' predictions on the 6, 12, and 60 month binary classifiers.
[ "David Dooling, Angela Kim, Barbara McAneny, Jennifer Webster", "['David Dooling' 'Angela Kim' 'Barbara McAneny' 'Jennifer Webster']" ]
cs.LG
null
1606.07374
null
null
http://arxiv.org/pdf/1606.07374v2
2016-07-19T18:36:49Z
2016-06-23T16:58:33Z
Multi-Stage Temporal Difference Learning for 2048-like Games
Szubert and Jaskowski successfully used temporal difference (TD) learning together with n-tuple networks for playing the game 2048. However, we observed that programs based on TD learning still rarely reach large tiles. In this paper, we propose multi-stage TD (MS-TD) learning, a kind of hierarchical reinforcement learning method, to effectively improve the rates of reaching large tiles, which are good metrics for analyzing the strength of 2048 programs. Our experiments showed significant improvements over the version without MS-TD learning. Namely, using 3-ply expectimax search, the program with MS-TD learning reached 32768-tiles at a rate of 18.31%, while the one with TD learning did not reach any. After further tuning, our 2048 program reached 32768-tiles at a rate of 31.75% in 10,000 games, and one of these games even reached a 65536-tile, which is, to our knowledge, the first time a 65536-tile has ever been reached. In addition, the MS-TD learning method can easily be applied to other 2048-like games, such as Threes. Based on MS-TD learning, our experiments for Threes also demonstrated similar performance improvements, where the program with MS-TD learning reached 6144-tiles at a rate of 7.83%, while the one with TD learning only reached 0.45%.
[ "Kun-Hao Yeh, I-Chen Wu, Chu-Hsuan Hsueh, Chia-Chuan Chang, Chao-Chin\n Liang, Han Chiang", "['Kun-Hao Yeh' 'I-Chen Wu' 'Chu-Hsuan Hsueh' 'Chia-Chuan Chang'\n 'Chao-Chin Liang' 'Han Chiang']" ]
cs.DS cs.AI cs.LG math.ST stat.TH
null
1606.07384
null
null
http://arxiv.org/pdf/1606.07384v2
2018-10-29T05:31:52Z
2016-06-23T17:47:13Z
Robust Learning of Fixed-Structure Bayesian Networks
We investigate the problem of learning Bayesian networks in a robust model where an $\epsilon$-fraction of the samples are adversarially corrupted. In this work, we study the fully observable discrete case where the structure of the network is given. Even in this basic setting, previous learning algorithms either run in exponential time or lose dimension-dependent factors in their error guarantees. We provide the first computationally efficient robust learning algorithm for this problem with dimension-independent error guarantees. Our algorithm has near-optimal sample complexity, runs in polynomial time, and achieves error that scales nearly-linearly with the fraction of adversarially corrupted samples. Finally, we show on both synthetic and semi-synthetic data that our algorithm performs well in practice.
[ "['Yu Cheng' 'Ilias Diakonikolas' 'Daniel Kane' 'Alistair Stewart']", "Yu Cheng, Ilias Diakonikolas, Daniel Kane, Alistair Stewart" ]
astro-ph.IM astro-ph.CO cs.LG physics.data-an
10.3847/2041-8213/aa603d
1606.07442
null
null
http://arxiv.org/abs/1606.07442v2
2017-05-05T18:57:31Z
2016-06-23T20:00:02Z
Deep Recurrent Neural Networks for Supernovae Classification
We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae\footnote{Code available at \href{https://github.com/adammoss/supernovae}{https://github.com/adammoss/supernovae}}. The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves, however the performance of the network is highly sensitive to the amount of training data. For a training size of 50\% of the representational SPCC dataset (around $10^4$ supernovae) we obtain a type-Ia vs. non-type-Ia classification accuracy of 94.7\%, an area under the Receiver Operating Characteristic curve AUC of 0.986 and a SPCC figure-of-merit $F_1=0.64$. When using only the data for the early-epoch challenge defined by the SPCC we achieve a classification accuracy of 93.1\%, AUC of 0.977 and $F_1=0.58$, results almost as good as with the whole light-curve. By employing bidirectional neural networks we can acquire impressive classification results between supernovae types -I,~-II and~-III at an accuracy of 90.4\% and AUC of 0.974. We also apply a pre-trained model to obtain classification probabilities as a function of time, and show it can give early indications of supernovae type. Our method is competitive with existing algorithms and has applications for future large-scale photometric surveys.
[ "['Tom Charnock' 'Adam Moss']", "Tom Charnock and Adam Moss" ]
cs.HC cs.AI cs.LG
null
1606.07487
null
null
http://arxiv.org/pdf/1606.07487v2
2016-07-03T20:04:55Z
2016-06-23T21:36:36Z
The VGLC: The Video Game Level Corpus
Levels are a key component of many different video games, and a large body of work has been produced on how to procedurally generate game levels. Recently, Machine Learning techniques have been applied to video game level generation towards the purpose of automatically generating levels that have the properties of the training corpus. Towards that end we have made available a corpus of video game levels in an easy-to-parse format ideal for different machine learning and other game AI research purposes.
[ "['Adam James Summerville' 'Sam Snodgrass' 'Michael Mateas'\n 'Santiago Ontañón']", "Adam James Summerville, Sam Snodgrass, Michael Mateas, Santiago\n Onta\\~n\\'on" ]
cs.CL cs.AI cs.CV cs.LG
null
1606.07493
null
null
http://arxiv.org/pdf/1606.07493v5
2016-11-07T18:48:13Z
2016-06-23T21:54:44Z
Sort Story: Sorting Jumbled Images and Captions into Stories
Temporal common sense has applications in AI tasks such as QA, multi-document summarization, and human-AI communication. We propose the task of sequencing -- given a jumbled set of aligned image-caption pairs that belong to a story, the task is to sort them such that the output sequence forms a coherent story. We present multiple approaches, via unary (position) and pairwise (order) predictions, and their ensemble-based combinations, achieving strong results on this task. We use both text-based and image-based features, which yield complementary improvements. Using qualitative examples, we demonstrate that our models have learnt interesting aspects of temporal common sense.
[ "Harsh Agrawal, Arjun Chandrasekaran, Dhruv Batra, Devi Parikh, Mohit\n Bansal", "['Harsh Agrawal' 'Arjun Chandrasekaran' 'Dhruv Batra' 'Devi Parikh'\n 'Mohit Bansal']" ]
cs.CV cs.CL cs.IR cs.LG cs.NE
null
1606.07496
null
null
http://arxiv.org/pdf/1606.07496v1
2016-06-23T22:04:08Z
2016-06-23T22:04:08Z
Is a Picture Worth Ten Thousand Words in a Review Dataset?
While textual reviews have become prominent in many recommendation-based systems, automated frameworks to provide relevant visual cues against text reviews where pictures are not available is a new form of task confronted by data mining and machine learning researchers. Suggestions of pictures that are relevant to the content of a review could significantly benefit the users by increasing the effectiveness of a review. We propose a deep learning-based framework to automatically: (1) tag the images available in a review dataset, (2) generate a caption for each image that does not have one, and (3) enhance each review by recommending relevant images that might not be uploaded by the corresponding reviewer. We evaluate the proposed framework using the Yelp Challenge Dataset. While a subset of the images in this particular dataset are correctly captioned, the majority of the pictures do not have any associated text. Moreover, there is no mapping between reviews and images. Each image has a corresponding business-tag where the picture was taken, though. The overall data setting and unavailability of crucial pieces required for a mapping make the problem of recommending images for reviews a major challenge. Qualitative and quantitative evaluations indicate that our proposed framework provides high quality enhancements through automatic captioning, tagging, and recommendation for mapping reviews and images.
[ "Roberto Camacho Barranco (1), Laura M. Rodriguez (1), Rebecca Urbina\n (1), and M. Shahriar Hossain (1) ((1) The University of Texas at El Paso)", "['Roberto Camacho Barranco' 'Laura M. Rodriguez' 'Rebecca Urbina'\n 'M. Shahriar Hossain']" ]
cs.LO cs.LG
10.4204/EPTCS.215.7
1606.07518
null
null
http://arxiv.org/abs/1606.07518v1
2016-06-24T00:30:59Z
2016-06-24T00:30:59Z
On the Solvability of Inductive Problems: A Study in Epistemic Topology
We investigate the issues of inductive problem-solving and learning by doxastic agents. We provide topological characterizations of solvability and learnability, and we use them to prove that AGM-style belief revision is "universal", i.e., that every solvable problem is solvable by AGM conditioning.
[ "Alexandru Baltag (Institute for logic, Language and Computation.\n University of Amsterdam), Nina Gierasimczuk (Institute for Logic, Language\n and Computation. University of Amsterdam), Sonja Smets (Institute for Logic,\n Language and Computation. University of Amsterdam)", "['Alexandru Baltag' 'Nina Gierasimczuk' 'Sonja Smets']" ]
cs.LG
null
1606.07558
null
null
http://arxiv.org/pdf/1606.07558v2
2017-05-03T23:02:56Z
2016-06-24T03:42:41Z
Satisfying Real-world Goals with Dataset Constraints
The goal of minimizing misclassification error on a training set is often just one of several real-world goals that might be defined on different datasets. For example, one may require a classifier to also make positive predictions at some specified rate for some subpopulation (fairness), or to achieve a specified empirical recall. Other real-world goals include reducing churn with respect to a previously deployed model, or stabilizing online training. In this paper we propose handling multiple goals on multiple datasets by training with dataset constraints, using the ramp penalty to accurately quantify costs, and present an efficient algorithm to approximately optimize the resulting non-convex constrained optimization problem. Experiments on both benchmark and real-world industry datasets demonstrate the effectiveness of our approach.
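The ramp penalty mentioned above is commonly defined as a clipped hinge, which bounds each example's contribution to a constraint; the exact parametrization below is our assumption, not necessarily the paper's:

import numpy as np

def ramp(z, s=-1.0):
    # hinge: max(0, 1 - z); the ramp clips it at 1 - s so each example
    # contributes at most a bounded amount to the penalized constraint.
    return np.minimum(np.maximum(0.0, 1.0 - z), 1.0 - s)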
[ "Gabriel Goh, Andrew Cotter, Maya Gupta, Michael Friedlander", "['Gabriel Goh' 'Andrew Cotter' 'Maya Gupta' 'Michael Friedlander']" ]
stat.ML cs.CV cs.LG
null
1606.07575
null
null
http://arxiv.org/pdf/1606.07575v1
2016-06-24T06:15:45Z
2016-06-24T06:15:45Z
Multipartite Ranking-Selection of Low-Dimensional Instances by Supervised Projection to High-Dimensional Space
Pruning of redundant or irrelevant instances of data is a key to every successful solution for pattern recognition. In this paper, we present a novel ranking-selection framework for short but highly correlated instances. Instead of working in the low-dimensional instance space, we learn a supervised projection to a high-dimensional space whose dimension equals the number of classes in the dataset under study. Imposing greater distinction by exposing class labels to the instances lets us deploy one-versus-all ranking for each individual class and select quality instances via adaptive thresholding of the overall scores. To prove the efficiency of our paradigm, we employ it for the purpose of texture understanding, which is a hard recognition challenge due to the high similarity of texture pixels and the low dimensionality of their color features. Our experiments show considerable improvements in recognition performance over other local descriptors on several publicly available datasets.
[ "['Arash Shahriari']", "Arash Shahriari" ]
stat.ML cs.LG
null
1606.07578
null
null
http://arxiv.org/pdf/1606.07578v1
2016-06-24T06:34:17Z
2016-06-24T06:34:17Z
Regression Trees and Random forest based feature selection for malaria risk exposure prediction
This paper deals with predicting the number of anopheles mosquitoes, the main vector of malaria risk, using environmental and climate variables. The variable selection is based on an automatic machine learning method using regression trees and random forests combined with stratified two-level cross-validation. The minimum threshold of variable importance is assessed using the quadratic distance of variable importance, while the optimal subset of selected variables is used to perform predictions. Finally, the results prove qualitatively better, in terms of selection, prediction, and CPU time, than those obtained by the GLM-Lasso method.
[ "['Bienvenue Kouwayè']", "Bienvenue Kouway\\`e" ]
cs.LG stat.ML
null
1606.07636
null
null
http://arxiv.org/pdf/1606.07636v3
2017-12-12T14:17:46Z
2016-06-24T10:54:41Z
Is the Bellman residual a bad proxy?
This paper aims at theoretically and empirically comparing two standard optimization criteria for Reinforcement Learning: i) maximization of the mean value and ii) minimization of the Bellman residual. For that purpose, we place ourselves in the framework of policy search algorithms, which are usually designed to maximize the mean value, and derive a method that minimizes the residual $\|T_* v_\pi - v_\pi\|_{1,\nu}$ over policies. A theoretical analysis shows how good a proxy this is for policy optimization, and notably that it is better than its value-based counterpart. We also propose experiments on randomly generated generic Markov decision processes, specifically designed for studying the influence of the involved concentrability coefficient. They show that the Bellman residual is generally a bad proxy for policy optimization and that directly maximizing the mean value is much better, despite the current lack of deep theoretical analysis. This might seem obvious, as directly addressing the problem of interest is usually better, but given the prevalence of (projected) Bellman residual minimization in value-based reinforcement learning, we believe that this question is worth considering.
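For reference, the two criteria can be written out using standard MDP notation (these are the usual definitions, with $\nu$ a state distribution and $\gamma$ the discount factor):

\begin{align*}
(T_* v)(s) &= \max_{a} \Big[ r(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, v(s') \Big] && \text{(optimal Bellman operator)} \\
J_\nu(\pi) &= \mathbb{E}_{s \sim \nu}\big[ v_\pi(s) \big] && \text{(mean value, to maximize)} \\
\|T_* v_\pi - v_\pi\|_{1,\nu} &= \sum_{s} \nu(s)\, \big| (T_* v_\pi)(s) - v_\pi(s) \big| && \text{(Bellman residual, to minimize)}
\end{align*}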
[ "Matthieu Geist and Bilal Piot and Olivier Pietquin", "['Matthieu Geist' 'Bilal Piot' 'Olivier Pietquin']" ]
cs.LG cs.IR
10.1145/2988450.2988456
1606.07659
null
null
http://arxiv.org/abs/1606.07659v3
2017-12-29T14:32:51Z
2016-06-24T12:37:04Z
Hybrid Recommender System based on Autoencoders
A standard model for Recommender Systems is the Matrix Completion setting: given a partially known matrix of ratings given by users (rows) to items (columns), infer the unknown ratings. In the last decades, few attempts were made to handle that objective with neural networks, but recently an architecture based on autoencoders proved to be a promising approach. In the current paper, we enhance that architecture (i) by using a loss function adapted to input data with missing values, and (ii) by incorporating side information. The experiments demonstrate that while side information only slightly improves the test error averaged over all users/items, it has more impact on cold users/items.
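Point (i) above corresponds to masking unobserved ratings out of the reconstruction loss. A minimal sketch of such a loss, assuming a NumPy setting with a 0/1 observation mask:

import numpy as np

def masked_mse(x_true, x_pred, observed_mask):
    # Only observed ratings contribute to the loss; missing entries
    # neither reward nor penalize the autoencoder's reconstruction.
    diff = (x_true - x_pred) * observed_mask
    return np.sum(diff ** 2) / np.maximum(observed_mask.sum(), 1)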
[ "['Florian Strub' 'Romaric Gaudel' 'Jérémie Mary']", "Florian Strub (CRIStAL, SEQUEL), Romaric Gaudel (CRIStAL, SEQUEL),\n J\\'er\\'emie Mary (CRIStAL, SEQUEL)" ]
cs.SI cs.LG
null
1606.07707
null
null
http://arxiv.org/pdf/1606.07707v1
2016-06-24T14:42:17Z
2016-06-24T14:42:17Z
Collective Semi-Supervised Learning for User Profiling in Social Media
The abundance of user-generated data in social media has incentivized the development of methods to infer the latent attributes of users, which are crucially useful for personalization, advertising and recommendation. However, the current user profiling approaches have limited success, due to the lack of a principled way to integrate different types of social relationships of a user, and the reliance on scarcely-available labeled data in building a prediction model. In this paper, we present a novel solution termed Collective Semi-Supervised Learning (CSL), which provides a principled means to integrate different types of social relationship and unlabeled data under a unified computational framework. The joint learning from multiple relationships and unlabeled data yields a computationally sound and accurate approach to model user attributes in social media. Extensive experiments using Twitter data have demonstrated the efficacy of our CSL approach in inferring user attributes such as account type and marital status. We also show how CSL can be used to determine important user features, and to make inference on a larger user population.
[ "['Richard J. Oentaryo' 'Ee-Peng Lim' 'Freddy Chong Tat Chua' 'Jia-Wei Low'\n 'David Lo']", "Richard J. Oentaryo, Ee-Peng Lim, Freddy Chong Tat Chua, Jia-Wei Low,\n David Lo" ]
cs.IR cs.AI cs.LG
null
1606.07722
null
null
http://arxiv.org/pdf/1606.07722v1
2016-06-24T15:25:55Z
2016-06-24T15:25:55Z
Neural Network Based Next-Song Recommendation
Recently, the next-item/basket recommendation system, which considers the sequential relation between bought items, has drawn the attention of researchers. The utilization of sequential patterns has boosted performance on several kinds of recommendation tasks. Inspired by natural language processing (NLP) techniques, we propose a novel neural network (NN) based next-song recommender, CNN-rec, in this paper. We then compare the proposed system with several NN based and classic recommendation systems on the next-song recommendation task. Verification results indicate the proposed system outperforms classic systems and has comparable performance with the state-of-the-art system.
[ "Kai-Chun Hsu, Szu-Yu Chou, Yi-Hsuan Yang, Tai-Shih Chi", "['Kai-Chun Hsu' 'Szu-Yu Chou' 'Yi-Hsuan Yang' 'Tai-Shih Chi']" ]
cs.NE cs.LG
null
1606.07767
null
null
http://arxiv.org/pdf/1606.07767v3
2017-02-13T21:25:26Z
2016-06-24T17:31:02Z
Sampling-based Gradient Regularization for Capturing Long-Term Dependencies in Recurrent Neural Networks
The vanishing (and exploding) gradient effect is a common problem for recurrent neural networks with nonlinear activation functions that use the backpropagation method for the calculation of derivatives. Deep feedforward neural networks with many hidden layers also suffer from this effect. In this paper we propose a novel universal technique that makes the norm of the gradient stay in a suitable range. We construct a way to estimate the contribution of each training example to the norm of the long-term components of the target function's gradient. Using this subroutine we can construct mini-batches for stochastic gradient descent (SGD) training that lead to high performance and accuracy of the trained network even for very complex tasks. We provide a straightforward mathematical estimate of a mini-batch's impact on the gradient norm and prove its correctness theoretically. To check our framework experimentally we use special synthetic benchmarks for testing RNNs on their ability to capture long-term dependencies. Our network can detect links between events in the (temporal) sequence at ranges of approximately 100 steps and longer.
[ "['Artem Chernodub' 'Dimitri Nowicki']", "Artem Chernodub and Dimitri Nowicki" ]
cs.NE cs.AI cs.LG
null
1606.07786
null
null
http://arxiv.org/pdf/1606.07786v2
2020-02-20T06:27:05Z
2016-06-23T18:32:43Z
Precise neural network computation with imprecise analog devices
The operations used for neural network computation map favorably onto simple analog circuits, which outshine their digital counterparts in terms of compactness and efficiency. Nevertheless, such implementations have been largely supplanted by digital designs, partly because of device mismatch effects due to material and fabrication imperfections. We propose a framework that exploits the power of deep learning to compensate for this mismatch by incorporating the measured device variations as constraints in the neural network training process. This eliminates the need for mismatch minimization strategies and allows circuit complexity and power-consumption to be reduced to a minimum. Our results, based on large-scale simulations as well as a prototype VLSI chip implementation indicate a processing efficiency comparable to current state-of-art digital implementations. This method is suitable for future technology based on nanodevices with large variability, such as memristive arrays.
[ "['Jonathan Binas' 'Daniel Neil' 'Giacomo Indiveri' 'Shih-Chii Liu'\n 'Michael Pfeiffer']", "Jonathan Binas, Daniel Neil, Giacomo Indiveri, Shih-Chii Liu, Michael\n Pfeiffer" ]
cs.LG cs.IR stat.ML
null
1606.07792
null
null
http://arxiv.org/pdf/1606.07792v1
2016-06-24T19:07:02Z
2016-06-24T19:07:02Z
Wide & Deep Learning for Recommender Systems
Generalized linear models with nonlinear feature transformations are widely used for large-scale regression and classification problems with sparse inputs. Memorization of feature interactions through a wide set of cross-product feature transformations are effective and interpretable, while generalization requires more feature engineering effort. With less feature engineering, deep neural networks can generalize better to unseen feature combinations through low-dimensional dense embeddings learned for the sparse features. However, deep neural networks with embeddings can over-generalize and recommend less relevant items when the user-item interactions are sparse and high-rank. In this paper, we present Wide & Deep learning---jointly trained wide linear models and deep neural networks---to combine the benefits of memorization and generalization for recommender systems. We productionized and evaluated the system on Google Play, a commercial mobile app store with over one billion active users and over one million apps. Online experiment results show that Wide & Deep significantly increased app acquisitions compared with wide-only and deep-only models. We have also open-sourced our implementation in TensorFlow.
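A compact toy rendering of the wide & deep idea (our own minimal NumPy version, not Google's production model): a linear model over sparse cross features and a one-hidden-layer net over dense features, trained jointly on their summed logits:

import numpy as np

rng = np.random.default_rng(0)
n, d_wide, d_deep, h = 512, 1000, 32, 64
Xw = (rng.random((n, d_wide)) < 0.01).astype(float)  # sparse cross features
Xd = rng.normal(size=(n, d_deep))                    # dense embedding features
y = rng.integers(0, 2, size=n)

w = np.zeros(d_wide)                                          # wide part
W1 = rng.normal(scale=0.1, size=(d_deep, h))                  # deep part
W2 = rng.normal(scale=0.1, size=h)

def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(200):                          # joint gradient steps (log loss)
    hdn = np.maximum(Xd @ W1, 0)              # ReLU hidden layer
    p = sigmoid(Xw @ w + hdn @ W2)            # summed wide + deep logits
    g = (p - y) / n                           # dLoss/dlogit per example
    w -= lr * Xw.T @ g
    W2 -= lr * hdn.T @ g
    W1 -= lr * Xd.T @ (np.outer(g, W2) * (hdn > 0))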
[ "Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar\n Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa\n Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu,\n Hemal Shah", "['Heng-Tze Cheng' 'Levent Koc' 'Jeremiah Harmsen' 'Tal Shaked'\n 'Tushar Chandra' 'Hrishi Aradhye' 'Glen Anderson' 'Greg Corrado'\n 'Wei Chai' 'Mustafa Ispir' 'Rohan Anil' 'Zakaria Haque' 'Lichan Hong'\n 'Vihan Jain' 'Xiaobing Liu' 'Hemal Shah']" ]
cs.CL cs.LG cs.NE
null
1606.07947
null
null
http://arxiv.org/pdf/1606.07947v4
2016-09-22T01:17:12Z
2016-06-25T18:16:39Z
Sequence-Level Knowledge Distillation
Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical approaches. However to reach competitive performance, NMT models need to be exceedingly large. In this paper we consider applying knowledge distillation approaches (Bucila et al., 2006; Hinton et al., 2015) that have proven successful for reducing the size of neural models in other domains to the problem of NMT. We demonstrate that standard knowledge distillation applied to word-level prediction can be effective for NMT, and also introduce two novel sequence-level versions of knowledge distillation that further improve performance, and somewhat surprisingly, seem to eliminate the need for beam search (even when applied on the original teacher model). Our best student model runs 10 times faster than its state-of-the-art teacher with little loss in performance. It is also significantly better than a baseline model trained without knowledge distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight pruning on top of knowledge distillation results in a student model that has 13 times fewer parameters than the original teacher model, with a decrease of 0.4 BLEU.
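In schematic form, the sequence-level variant amounts to decoding the training sources with the teacher and training the student on those outputs as if they were references. The sketch below uses placeholder interfaces (beam_search and train_step are hypothetical names, not a real toolkit's API), so it shows the procedure rather than a runnable integration:

def sequence_level_kd(teacher, student, sources, beam_size=5, epochs=10):
    # Teacher decodes each training source once; its outputs become
    # pseudo-targets for ordinary maximum-likelihood training of the student.
    distilled = [(src, teacher.beam_search(src, beam_size)) for src in sources]
    for _ in range(epochs):
        for src, pseudo_target in distilled:
            student.train_step(src, pseudo_target)
    return student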
[ "Yoon Kim, Alexander M. Rush", "['Yoon Kim' 'Alexander M. Rush']" ]
cs.CL cs.LG cs.NE
null
1606.07953
null
null
http://arxiv.org/pdf/1606.07953v2
2016-07-12T17:10:38Z
2016-06-25T19:46:28Z
Bidirectional Recurrent Neural Networks for Medical Event Detection in Electronic Health Records
Sequence labeling for extraction of medical events and their attributes from unstructured text in Electronic Health Record (EHR) notes is a key step towards semantic understanding of EHRs. It has important applications in health informatics including pharmacovigilance and drug surveillance. The state of the art supervised machine learning models in this domain are based on Conditional Random Fields (CRFs) with features calculated from fixed context windows. In this application, we explored various recurrent neural network frameworks and show that they significantly outperformed the CRF models.
[ "['Abhyuday Jagannatha' 'Hong Yu']", "Abhyuday Jagannatha, Hong Yu" ]
cs.IT cs.CE cs.LG math.IT
10.1016/j.asoc.2011.06.020
1606.07981
null
null
http://arxiv.org/abs/1606.07981v1
2016-06-26T00:16:44Z
2016-06-26T00:16:44Z
Gear fault diagnosis based on Gaussian correlation of vibrations signals and wavelet coefficients
The features of non-stationary multi-component signals are often difficult to extract for expert systems. In this paper, a new method for feature extraction based on the maximization of the local Gaussian correlation function of wavelet coefficients and the signal is presented. The effect of using empirical mode decomposition (EMD) to decompose multi-component signals into intrinsic mode functions (IMFs) before applying the local Gaussian correlation is discussed. Experimental vibration signals from two gearbox systems are used to show the efficiency of the presented method. A linear support vector machine (SVM) is utilized to classify the feature sets extracted with the presented method. The obtained results show that the features extracted by this method have an excellent ability to classify faults without any additional feature selection; it is also shown that EMD can improve or degrade features depending on the utilized feature reduction method.
[ "['Amir Hosein Zamanian' 'Abdolreza Ohadi']", "Amir Hosein Zamanian, Abdolreza Ohadi" ]
cs.LG stat.ML
null
1606.08009
null
null
http://arxiv.org/pdf/1606.08009v2
2016-11-17T14:46:39Z
2016-06-26T08:27:45Z
Fast Methods for Recovering Sparse Parameters in Linear Low Rank Models
In this paper, we investigate the recovery of a sparse weight vector (parameters vector) from a set of noisy linear combinations. However, only partial information about the matrix representing the linear combinations is available. Assuming a low-rank structure for the matrix, one natural solution would be to first apply a matrix completion on the data, and then to solve the resulting compressed sensing problem. In big data applications such as massive MIMO and medical data, the matrix completion step imposes a huge computational burden. Here, we propose to reduce the computational cost of the completion task by ignoring the columns corresponding to zero elements in the sparse vector. To this end, we employ a technique to initially approximate the support of the sparse vector. We further propose to unify the partial matrix completion and sparse vector recovery into an augmented four-step problem. Simulation results reveal that the augmented approach achieves the best performance, while both proposed methods outperform the natural two-step technique with substantially less computational requirements.
[ "['Ashkan Esmaeili' 'Arash Amini' 'Farokh Marvasti']", "Ashkan Esmaeili, Arash Amini, and Farokh Marvasti" ]
cs.LG cs.CV
null
1606.08051
null
null
http://arxiv.org/pdf/1606.08051v3
2016-09-06T11:57:06Z
2016-06-26T16:26:19Z
Training LDCRF model on unsegmented sequences using Connectionist Temporal Classification
Many machine learning problems such as speech recognition, gesture recognition, and handwriting recognition are concerned with simultaneous segmentation and labeling of sequence data. The latent-dynamic conditional random field (LDCRF) is a well-known discriminative method that has been successfully used for this task. However, LDCRF can only be trained with pre-segmented data sequences in which the label of each frame is available a priori. In the realm of neural networks, the invention of connectionist temporal classification (CTC) made it possible to train recurrent neural networks on unsegmented sequences with great success. In this paper, we use CTC to train an LDCRF model on unsegmented sequences. Experimental results on two gesture recognition tasks show that the proposed method outperforms LDCRFs, hidden Markov models, and conditional random fields.
[ "Amir Ahooye Atashin, Kamaledin Ghiasi-Shirazi, Ahad Harati", "['Amir Ahooye Atashin' 'Kamaledin Ghiasi-Shirazi' 'Ahad Harati']" ]
cs.NE cs.LG
null
1606.08061
null
null
http://arxiv.org/pdf/1606.08061v1
2016-06-26T17:57:36Z
2016-06-26T17:57:36Z
Exact gradient updates in time independent of output size for the spherical loss family
An important class of problems involves training deep neural networks with sparse prediction targets of very high dimension $D$. These occur naturally in e.g. neural language models or the learning of word-embeddings, often posed as predicting the probability of the next word among a vocabulary of size $D$ (e.g. 200,000). Computing the equally large, but typically non-sparse, $D$-dimensional output vector from a last hidden layer of reasonable dimension $d$ (e.g. 500) incurs a prohibitive $O(Dd)$ computational cost for each example, as does updating the $D \times d$ output weight matrix and computing the gradient needed for backpropagation to previous layers. While efficient handling of large sparse network inputs is trivial, the case of large sparse targets is not, and has thus so far been sidestepped with approximate alternatives such as hierarchical softmax or sampling-based approximations during training. In this work we develop an original algorithmic approach which, for a family of loss functions that includes squared error and spherical softmax, can compute the exact loss, gradient update for the output weights, and gradient for backpropagation, all in $O(d^{2})$ per example instead of $O(Dd)$, remarkably without ever computing the $D$-dimensional output. The proposed algorithm yields a speedup of up to $D/4d$, i.e. two orders of magnitude for typical sizes, for the critical part of the computations that often dominates the training time in this kind of network architecture.
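For the squared-error member of the loss family, the identity behind the $O(d^{2})$ cost can be sketched: with $Q = W^{\top}W$ (a $d \times d$ matrix, assumed cached and maintained across updates), $\|Wh - y\|^{2} = h^{\top}Qh - 2\,y^{\top}Wh + y^{\top}y$, and the middle term touches only the $K$ nonzero entries of $y$. A hedged numpy sketch of the loss and backpropagation gradient; the paper's exact bookkeeping for updating $W$ and $Q$ in $O(d^{2})$ is omitted.

```python
import numpy as np

def sparse_sq_loss_and_grad(W, Q, h, idx, val):
    """Squared error ||W h - y||^2 and its gradient w.r.t. h for a target y
    that is nonzero only at positions `idx` with values `val`; O(d^2 + K d),
    never forming the D-dimensional output.  Q = W.T @ W is assumed cached."""
    Qh = Q @ h                            # O(d^2)
    Wy = W[idx].T @ val                   # W^T y: touches only K rows of W
    y_Wh = val @ (W[idx] @ h)             # y^T W h in O(K d)
    loss = h @ Qh - 2.0 * y_Wh + val @ val
    grad_h = 2.0 * (Qh - Wy)              # signal backpropagated to lower layers
    return loss, grad_h
```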
[ "Pascal Vincent, Alexandre de Br\\'ebisson, Xavier Bouthillier", "['Pascal Vincent' 'Alexandre de Brébisson' 'Xavier Bouthillier']" ]
cs.LG
null
1606.08117
null
null
http://arxiv.org/pdf/1606.08117v2
2016-09-16T09:41:10Z
2016-06-27T03:06:44Z
Improved Recurrent Neural Networks for Session-based Recommendations
Recurrent neural networks (RNNs) were recently proposed for the session-based recommendation task. The models showed promising improvements over traditional recommendation approaches. In this work, we further study RNN-based models for session-based recommendations. We propose the application of two techniques to improve model performance, namely data augmentation and a method to account for shifts in the input data distribution. We also empirically study the use of generalised distillation, and a novel alternative model that directly predicts item embeddings. Experiments on the RecSys Challenge 2015 dataset demonstrate relative improvements of 12.8% and 14.8% over previously reported results on the Recall@20 and Mean Reciprocal Rank@20 metrics respectively.
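One common reading of the data-augmentation technique is to treat every prefix of a session as its own training example; a minimal sketch (the paper's scheme reportedly also applies dropout to the prefix items, which is omitted here).

```python
def augment_sessions(sessions):
    """Expand each session (a list of item ids in click order) into all of
    its (prefix, next item) training pairs."""
    examples = []
    for s in sessions:
        for t in range(1, len(s)):
            examples.append((s[:t], s[t]))
    return examples

# e.g. the session [5, 9, 2] yields ([5] -> 9) and ([5, 9] -> 2)
```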
[ "Yong Kiam Tan, Xinxing Xu and Yong Liu", "['Yong Kiam Tan' 'Xinxing Xu' 'Yong Liu']" ]
cs.NE cs.LG
null
1606.08165
null
null
http://arxiv.org/pdf/1606.08165v2
2017-08-16T16:15:20Z
2016-06-27T08:58:29Z
Supervised learning based on temporal coding in spiking neural networks
Gradient descent training techniques are remarkably successful in training analog-valued artificial neural networks (ANNs). Such training techniques, however, do not transfer easily to spiking networks due to the hard non-linearity of spike generation and the discrete nature of spike communication. We show that in a feedforward spiking network that uses a temporal coding scheme where information is encoded in spike times instead of spike rates, the network input-output relation is differentiable almost everywhere. Moreover, this relation is piece-wise linear after a transformation of variables. Methods for training ANNs thus carry over directly to the training of such spiking networks, as we show when training on the permutation invariant MNIST task. In contrast to rate-based spiking networks that are often used to approximate the behavior of ANNs, the networks we present spike much more sparsely and their behavior cannot be directly approximated by conventional ANNs. Our results highlight a new approach for controlling the behavior of spiking networks with realistic temporal dynamics, opening up the potential for using these networks to process spike patterns with complex temporal information.
[ "['Hesham Mostafa']", "Hesham Mostafa" ]
stat.ML cs.CG cs.CV cs.LG cs.NE
10.1109/TIP.2017.2735189
1606.08282
null
null
http://arxiv.org/abs/1606.08282v3
2017-07-29T01:37:40Z
2016-06-27T14:03:40Z
Out-of-Sample Extension for Dimensionality Reduction of Noisy Time Series
This paper proposes an out-of-sample extension framework for a global manifold learning algorithm (Isomap) that uses temporal information in out-of-sample points in order to make the embedding more robust to noise and artifacts. Given a set of noise-free training data and its embedding, the proposed framework extends the embedding for a noisy time series. This is achieved by adding a spatio-temporal compactness term to the optimization objective of the embedding. To the best of our knowledge, this is the first method for out-of-sample extension of manifold embeddings that leverages timing information available for the extension set. Experimental results demonstrate that our out-of-sample extension algorithm renders a more robust and accurate embedding of sequentially ordered image data in the presence of various noise and artifacts when compared to other timing-aware embeddings. Additionally, we show that an out-of-sample extension framework based on the proposed algorithm outperforms the state of the art in eye-gaze estimation.
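Under assumptions about notation (this is a sketch, not the paper's own equation), the spatio-temporal compactness idea can be written as a distance-matching objective for the out-of-sample embedding $y_1,\dots,y_T$ augmented with a temporal smoothness penalty:

```latex
\min_{y_1,\dots,y_T}\;
  \sum_{t=1}^{T}\sum_{j\in\mathcal{N}(x_t)}
    \bigl(\|y_t-\hat{y}_j\| - d_{\mathrm{geo}}(x_t,x_j)\bigr)^{2}
  \;+\; \mu \sum_{t=2}^{T}\|y_t - y_{t-1}\|^{2}
```

where $\hat{y}_j$ are the embeddings of the noise-free training neighbors of $x_t$, $d_{\mathrm{geo}}$ is the Isomap geodesic distance, and $\mu$ trades off fidelity to the training embedding against temporal compactness.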
[ "['Hamid Dadkhahi' 'Marco F. Duarte' 'Benjamin Marlin']", "Hamid Dadkhahi and Marco F. Duarte and Benjamin Marlin" ]
cs.LG cs.AI cs.CL
null
1606.08359
null
null
http://arxiv.org/pdf/1606.08359v2
2016-09-23T20:40:25Z
2016-06-27T16:39:23Z
Lifted Rule Injection for Relation Embeddings
Methods based on representation learning currently hold the state-of-the-art in many natural language processing and knowledge base inference tasks. Yet, a major challenge is how to efficiently incorporate commonsense knowledge into such models. A recent approach regularizes relation and entity representations by propositionalization of first-order logic rules. However, propositionalization does not scale beyond domains with only a few entities and rules. In this paper we present a highly efficient method for incorporating implication rules into distributed representations for automated knowledge base construction. We map entity-tuple embeddings into an approximately Boolean space and encourage a partial ordering over relation embeddings based on implication rules mined from WordNet. Surprisingly, we find that the strong restriction of the entity-tuple embedding space does not hurt the expressiveness of the model and even acts as a regularizer that improves generalization. By incorporating a few commonsense rules, we achieve an increase of 2 percentage points mean average precision over a matrix factorization baseline, while observing a negligible increase in runtime.
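A hedged PyTorch sketch of the lifted ordering constraint: once entity-tuple embeddings are squashed into a non-negative, approximately Boolean space, the dot-product score is monotone in the relation embedding, so ordering the relation vectors dimension-wise makes `body => head` hold for every tuple at once; the names `R` and `rules` are hypothetical.

```python
import torch

def implication_penalty(r_body, r_head):
    """Penalize dimension-wise violations of r_body <= r_head.  With tuple
    embeddings t in [0, 1]^k, r_body <= r_head implies
    dot(r_body, t) <= dot(r_head, t) for all t, i.e. the implication is
    enforced for every tuple at once."""
    return torch.relu(r_body - r_head).sum()

# hypothetical usage with rules mined from WordNet, as (body_id, head_id) pairs:
# loss = task_loss + lam * sum(implication_penalty(R[b], R[h]) for b, h in rules)
```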
[ "Thomas Demeester and Tim Rockt\\\"aschel and Sebastian Riedel", "['Thomas Demeester' 'Tim Rocktäschel' 'Sebastian Riedel']" ]
cs.DS cs.AI cs.LG
null
1606.08362
null
null
http://arxiv.org/pdf/1606.08362v2
2018-05-25T19:49:59Z
2016-06-27T16:44:44Z
A Reduction for Optimizing Lattice Submodular Functions with Diminishing Returns
A function $f: \mathbb{Z}_+^E \rightarrow \mathbb{R}_+$ is DR-submodular if it satisfies $f({\bf x} + \chi_i) - f({\bf x}) \ge f({\bf y} + \chi_i) - f({\bf y})$ for all ${\bf x}\le {\bf y}, i\in E$. Recently, the problem of maximizing a DR-submodular function $f: \mathbb{Z}_+^E \rightarrow \mathbb{R}_+$ subject to a budget constraint $\|{\bf x}\|_1 \leq B$ as well as additional constraints has received significant attention \cite{SKIK14,SY15,MYK15,SY16}. In this note, we give a generic reduction from the DR-submodular setting to the submodular setting. The running time of the reduction and the size of the resulting submodular instance depend only \emph{logarithmically} on $B$. Using this reduction, one can translate the results for unconstrained and constrained submodular maximization to the DR-submodular setting for many types of constraints in a unified manner.
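A small brute-force check of the diminishing-returns condition quoted above, together with an example function that satisfies it (any concave function of the total count does); the grid bound and tolerance are illustrative.

```python
import itertools
import numpy as np

def is_dr_submodular(f, E, B):
    """Check f(x + e_i) - f(x) >= f(y + e_i) - f(y) for all x <= y in {0..B}^E."""
    grid = list(itertools.product(range(B + 1), repeat=E))
    for x in grid:
        for y in grid:
            if any(a > b for a, b in zip(x, y)):
                continue                        # need x <= y coordinate-wise
            for i in range(E):
                xi = tuple(v + (j == i) for j, v in enumerate(x))
                yi = tuple(v + (j == i) for j, v in enumerate(y))
                if f(xi) - f(x) < f(yi) - f(y) - 1e-12:
                    return False
    return True

f = lambda x: np.sqrt(sum(x))                   # concave in the total count
print(is_dr_submodular(f, E=2, B=3))            # True
```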
[ "['Alina Ene' 'Huy L. Nguyen']", "Alina Ene, Huy L. Nguyen" ]
cs.LG
null
1606.08415
null
null
http://arxiv.org/pdf/1606.08415v5
2023-06-06T01:53:32Z
2016-06-27T19:20:40Z
Gaussian Error Linear Units (GELUs)
We propose the Gaussian Error Linear Unit (GELU), a high-performing neural network activation function. The GELU activation function is $x\Phi(x)$, where $\Phi(x)$ is the standard Gaussian cumulative distribution function. The GELU nonlinearity weights inputs by their value, rather than gating inputs by their sign as in ReLUs ($x\mathbf{1}_{x>0}$). We perform an empirical evaluation of the GELU nonlinearity against the ReLU and ELU activations and find performance improvements across all considered computer vision, natural language processing, and speech tasks.
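A minimal implementation of the activation as defined above, along with the tanh-based approximation that is widely used in practice.

```python
import math

def gelu(x):
    """Exact GELU: x * Phi(x), with Phi the standard normal CDF."""
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    """Fast tanh approximation of the GELU."""
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi)
                                      * (x + 0.044715 * x ** 3)))
```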
[ "Dan Hendrycks and Kevin Gimpel", "['Dan Hendrycks' 'Kevin Gimpel']" ]
cs.LG
null
1606.08501
null
null
http://arxiv.org/pdf/1606.08501v2
2016-10-28T02:13:31Z
2016-06-27T22:34:14Z
Symmetric and antisymmetric properties of solutions to kernel-based machine learning problems
A particularly interesting instance of supervised learning with kernels is when each training example is associated with two objects, as in pairwise classification (Brunner et al., 2012), and in supervised learning of preference relations (Herbrich et al., 1998). In these cases, one may want to embed additional prior knowledge into the optimization problem associated with the training of the learning machine, modeled, respectively, by the symmetry of its optimal solution with respect to an exchange of order between the two objects, and by its antisymmetry. Extending the approach proposed in (Brunner et al., 2012) (where only the symmetric case was considered), we show, focusing on support vector binary classification, how such embedding is possible through the choice of a suitable pairwise kernel, which takes as inputs the individual feature vectors and also the group feature vectors associated with the two objects. We also prove that the symmetry/antisymmetry constraints still hold when considering the sequence of suboptimal solutions generated by one version of the Sequential Minimal Optimization (SMO) algorithm, and we present numerical results supporting the theoretical findings. We conclude by discussing extensions of the main results to support vector regression, to transductive support vector machines, and to several kinds of graph kernels, including diffusion kernels.
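A hedged sketch of one standard symmetric/antisymmetric pairwise-kernel construction from this literature; the paper's kernel additionally takes the group feature vectors of the two objects as inputs, which this sketch omits.

```python
import numpy as np

def rbf(u, v, gamma=1.0):
    return np.exp(-gamma * np.sum((u - v) ** 2))

def pairwise_kernel(a, b, c, d, anti=False):
    """Kernel between object pairs (a, b) and (c, d).  The '+' combination
    yields decision functions symmetric under swapping a and b; the '-'
    combination yields antisymmetric ones."""
    sign = -1.0 if anti else 1.0
    return rbf(a, c) * rbf(b, d) + sign * rbf(a, d) * rbf(b, c)
```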
[ "Giorgio Gnecco", "['Giorgio Gnecco']" ]
cs.AI cs.LG stat.ML
null
1606.08531
null
null
http://arxiv.org/pdf/1606.08531v1
2016-06-28T01:43:38Z
2016-06-28T01:43:38Z
A Learning Algorithm for Relational Logistic Regression: Preliminary Results
Relational logistic regression (RLR) is a representation of conditional probability in terms of weighted formulae for modelling multi-relational data. In this paper, we develop a learning algorithm for RLR models. Learning an RLR model from data consists of two steps: 1- learning the set of formulae to be used in the model (a.k.a. structure learning) and 2- learning the weight of each formula (a.k.a. parameter learning). For structure learning, we deploy Schmidt and Murphy's hierarchical assumption: first we learn a model with simple formulae, then more complex formulae are added iteratively only if all their sub-formulae have proven effective in previously learned models. For parameter learning, we convert the problem into a non-relational learning problem and use an off-the-shelf logistic regression learning algorithm from Weka, an open-source machine learning tool, to learn the weights. We also indicate how hidden features about the individuals can be incorporated into RLR to boost the learning performance. We compare our learning algorithm to other structure and parameter learning algorithms in the literature, and compare the performance of RLR models to standard logistic regression and RDN-Boost on a modified version of the MovieLens data-set.
[ "['Bahare Fatemi' 'Seyed Mehran Kazemi' 'David Poole']", "Bahare Fatemi, Seyed Mehran Kazemi, David Poole" ]
cs.AI cs.LG stat.ML
null
1606.08538
null
null
http://arxiv.org/pdf/1606.08538v1
2016-06-28T02:23:58Z
2016-06-28T02:23:58Z
A Local Density-Based Approach for Local Outlier Detection
This paper presents a simple but effective density-based outlier detection approach with the local kernel density estimation (KDE). A Relative Density-based Outlier Score (RDOS) is introduced to measure the local outlierness of objects, in which the density distribution at the location of an object is estimated with a local KDE method based on extended nearest neighbors of the object. Instead of using only $k$ nearest neighbors, we further consider reverse nearest neighbors and shared nearest neighbors of an object for density distribution estimation. Some theoretical properties of the proposed RDOS including its expected value and false alarm probability are derived. A comprehensive experimental study on both synthetic and real-life data sets demonstrates that our approach is more effective than state-of-the-art outlier detection methods.
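A hedged numpy/scikit-learn sketch of the score using plain $k$ nearest neighbors only; the paper's extended neighborhood additionally includes reverse and shared nearest neighbors, and the bandwidth `h` here is an illustrative choice.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def rdos(X, k=10, h=1.0):
    """Relative density-based outlier score: local Gaussian KDE over each
    point's neighbors, scored as (mean neighbor density) / (own density)."""
    n, d = X.shape
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    def density(p, neigh):
        # KDE at p from its neighbors, up to a constant that cancels below
        u = (X[neigh] - p) / h
        return np.mean(np.exp(-0.5 * (u ** 2).sum(axis=1)))
    dens = np.array([density(X[i], idx[i, 1:]) for i in range(n)])
    return np.array([dens[idx[i, 1:]].mean() / dens[i] for i in range(n)])

# scores well above 1 flag points sitting in regions denser than themselves
```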
[ "Bo Tang and Haibo He", "['Bo Tang' 'Haibo He']" ]
stat.ML cs.LG
null
1606.08561
null
null
http://arxiv.org/pdf/1606.08561v2
2017-01-31T19:25:14Z
2016-06-28T05:29:25Z
Estimating the class prior and posterior from noisy positives and unlabeled data
We develop a classification algorithm for estimating posterior distributions from positive-unlabeled data that is robust to noise in the positive labels and effective for high-dimensional data. In recent years, several algorithms have been proposed to learn from positive-unlabeled data; however, many of these contributions remain theoretical, performing poorly on real high-dimensional data that is typically contaminated with noise. We build on this previous work to develop two practical classification algorithms that explicitly model the noise in the positive labels and utilize univariate transforms built on discriminative classifiers. We prove that these univariate transforms preserve the class prior, enabling estimation in the univariate space and avoiding kernel density estimation for high-dimensional data. The theoretical development and both parametric and nonparametric algorithms proposed here constitute an important step towards widespread use of robust classification algorithms for positive-unlabeled data.
[ "Shantanu Jain, Martha White, Predrag Radivojac", "['Shantanu Jain' 'Martha White' 'Predrag Radivojac']" ]
stat.ML cs.CV cs.LG cs.NE
null
1606.08571
null
null
http://arxiv.org/pdf/1606.08571v4
2016-12-06T04:04:19Z
2016-06-28T06:46:05Z
Alternating Back-Propagation for Generator Network
This paper proposes an alternating back-propagation algorithm for learning the generator network model. The model is a non-linear generalization of factor analysis. In this model, the mapping from the continuous latent factors to the observed signal is parametrized by a convolutional neural network. The alternating back-propagation algorithm iterates the following two steps: (1) Inferential back-propagation, which infers the latent factors by Langevin dynamics or gradient descent. (2) Learning back-propagation, which updates the parameters given the inferred latent factors by gradient descent. The gradient computations in both steps are powered by back-propagation, and they share most of their code in common. We show that the alternating back-propagation algorithm can learn realistic generator models of natural images, video sequences, and sounds. Moreover, it can also be used to learn from incomplete or indirect training data.
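A minimal PyTorch sketch of the inferential back-propagation step, assuming a Gaussian observation model $x = G(z) + \epsilon$ with prior $z \sim N(0, I)$; the step size, iteration count, and `sigma` are illustrative.

```python
import torch

def langevin_infer(G, x, z, step=0.1, n_steps=30, sigma=0.3):
    """Inferential back-propagation: sample z approximately from p(z | x) by
    Langevin dynamics on log p(x, z) = -||x-G(z)||^2/(2 sigma^2) - ||z||^2/2."""
    for _ in range(n_steps):
        z = z.detach().requires_grad_(True)
        log_joint = (-((x - G(z)) ** 2).sum() / (2 * sigma ** 2)
                     - (z ** 2).sum() / 2)
        grad, = torch.autograd.grad(log_joint, z)
        z = z + 0.5 * step ** 2 * grad + step * torch.randn_like(z)
    return z.detach()

# Learning back-propagation then descends the reconstruction loss in G's
# parameters with the inferred z held fixed:
#   loss = ((x - G(z)) ** 2).sum() / (2 * sigma ** 2); loss.backward(); opt.step()
```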
[ "Tian Han, Yang Lu, Song-Chun Zhu, and Ying Nian Wu", "['Tian Han' 'Yang Lu' 'Song-Chun Zhu' 'Ying Nian Wu']" ]
stat.ML cs.LG
10.24963/ijcai.2017/226
1606.08658
null
null
http://arxiv.org/abs/1606.08658v3
2017-03-08T09:21:40Z
2016-06-28T11:37:45Z
Clustering-Based Relational Unsupervised Representation Learning with an Explicit Distributed Representation
The goal of unsupervised representation learning is to extract a new representation of data, such that solving many different tasks becomes easier. Existing methods typically focus on vectorized data and offer little support for relational data, which additionally describe relationships among instances. In this work we introduce an approach for relational unsupervised representation learning. Viewing a relational dataset as a hypergraph, new features are obtained by clustering vertices and hyperedges. To find a representation suited for many relational learning tasks, a wide range of similarities between relational objects is considered, e.g. feature and structural similarities. We experimentally evaluate the proposed approach and show that models learned on such latent representations perform better, have lower complexity, and outperform the existing approaches on classification tasks.
[ "Sebastijan Dumancic and Hendrik Blockeel", "['Sebastijan Dumancic' 'Hendrik Blockeel']" ]
stat.ML cs.LG cs.LO
null
1606.08660
null
null
http://arxiv.org/pdf/1606.08660v2
2016-06-29T11:29:35Z
2016-06-28T11:41:03Z
Theory reconstruction: a representation learning view on predicate invention
With this position paper we present a representation learning view on predicate invention. The intention of this proposal is to bridge the relational and deep learning communities on the problem of predicate invention. We propose a theory reconstruction approach, a formalism that extends the autoencoder approach to representation learning to relational settings. We aim to start a discussion toward a unifying framework for predicate invention and theory revision.
[ "Sebastijan Dumancic and Wannes Meert and Hendrik Blockeel", "['Sebastijan Dumancic' 'Wannes Meert' 'Hendrik Blockeel']" ]
cs.LG stat.AP stat.ML
null
1606.08698
null
null
http://arxiv.org/pdf/1606.08698v3
2017-06-20T08:09:37Z
2016-06-28T13:49:30Z
Reviving Threshold-Moving: a Simple Plug-in Bagging Ensemble for Binary and Multiclass Imbalanced Data
Class imbalance presents a major hurdle in the application of data mining methods. A common practice to deal with it is to create ensembles of classifiers that learn from resampled balanced data, for example, bagged decision trees combined with random undersampling (RUS) or the synthetic minority oversampling technique (SMOTE). However, most resampling methods entail asymmetric changes to the examples of the different classes, which can in turn introduce their own biases into the model. Furthermore, those methods require a performance measure to be specified a priori, before learning. An alternative is to use a so-called threshold-moving method that a posteriori changes the decision threshold of a model to counteract the imbalance, and thus has the potential to adapt to the performance measure of interest. Surprisingly, little attention has been paid to the potential of combining a bagging ensemble with threshold-moving. In this paper, we present probability thresholding bagging (PT-bagging), a versatile plug-in method that fills this gap. Contrary to the usual rebalancing practice, our method preserves the natural class distribution of the data, resulting in well calibrated posterior probabilities. We also extend the proposed method to handle multiclass data. The method is validated on binary and multiclass benchmark data sets. We perform analyses that provide insights into the proposed method.
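A minimal scikit-learn sketch of the plug-in idea: bag trees on the natural class distribution, then move the decision threshold a posteriori to maximize the measure of interest (F1 on a validation split here, as an illustrative choice; the paper's threshold-selection procedure may differ).

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

def pt_bagging(X_tr, y_tr, X_val, y_val):
    """Fit bagged trees without resampling, then pick the threshold that
    maximizes the chosen measure on held-out data."""
    clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100)
    clf.fit(X_tr, y_tr)                         # natural class distribution
    p = clf.predict_proba(X_val)[:, 1]          # calibrated posteriors
    best_t = max(np.unique(p), key=lambda t: f1_score(y_val, p >= t))
    return clf, best_t

# predict on new data with: clf.predict_proba(X_te)[:, 1] >= best_t
```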
[ "Guillem Collell, Drazen Prelec, Kaustubh Patil", "['Guillem Collell' 'Drazen Prelec' 'Kaustubh Patil']" ]
cs.CL cs.AI cs.LG
10.1007/978-3-319-77113-7_17
1606.08777
null
null
http://arxiv.org/abs/1606.08777v1
2016-06-28T16:31:50Z
2016-06-28T16:31:50Z
"Show me the cup": Reference with Continuous Representations
One of the most basic functions of language is to refer to objects in a shared scene. Modeling reference with continuous representations is challenging because it requires individuation, i.e., tracking and distinguishing an arbitrary number of referents. We introduce a neural network model that, given a definite description and a set of objects represented by natural images, points to the intended object if the expression has a unique referent, or indicates a failure, if it does not. The model, directly trained on reference acts, is competitive with a pipeline manually engineered to perform the same task, both when referents are purely visual, and when they are characterized by a combination of visual and linguistic properties.
[ "['Gemma Boleda' 'Sebastian Padó' 'Marco Baroni']", "Gemma Boleda and Sebastian Pad\\'o and Marco Baroni" ]
cs.LG cs.AI
null
1606.08808
null
null
http://arxiv.org/pdf/1606.08808v2
2017-05-26T15:24:26Z
2016-06-28T18:15:32Z
Adaptive Training of Random Mapping for Data Quantization
Data quantization learns encodings of data that satisfy certain requirements, and offers a broad perspective on data handling in many real-world applications. Nevertheless, the encoder's outputs are usually limited to multivariate inputs under a random mapping, and the resulting binary codes can hardly capture the patterns of the original data in full. In the literature, cosine-based random quantization has attracted much attention due to its intrinsically bounded outputs. It usually suffers, however, from uncertain outputs, and information in the original data fails to be fully preserved in the reduced codes. In this work, a novel binary embedding method, termed adaptive training quantization (ATQ), is proposed to learn an improved transform for the random encoder, tackling the limitation of the cosine random mapping. Following an adaptive learning idea, the reduced mapping is computed adaptively over groups of data, while the bias of the random transform is refined to retain as much matching information as possible. Experimental results show that the proposed method obtains outstanding performance compared with other random quantization methods.
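A hedged sketch of the cosine-based random quantization that ATQ starts from; the adaptive, group-wise refinement of the mapping and its bias, which is the paper's contribution, is not reproduced here.

```python
import numpy as np

def cosine_random_codes(X, bits=64, seed=0):
    """Binary codes from a bounded cosine of a random Gaussian projection."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], bits))     # random mapping
    b = rng.uniform(0.0, 2.0 * np.pi, size=bits)    # random phase/bias
    return (np.cos(X @ W + b) >= 0).astype(np.uint8)
```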
[ "Miao Cheng, Ah Chung Tsoi", "['Miao Cheng' 'Ah Chung Tsoi']" ]
stat.ML cs.CY cs.LG
10.1609/aimag.v38i3.2741
1606.08813
null
null
http://arxiv.org/abs/1606.08813v3
2016-08-31T12:30:13Z
2016-06-28T18:20:06Z
European Union regulations on algorithmic decision-making and a "right to explanation"
We summarize the potential impact that the European Union's new General Data Protection Regulation will have on the routine use of machine learning algorithms. Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which "significantly affect" users. The law will also effectively create a "right to explanation," whereby a user can ask for an explanation of an algorithmic decision that was made about them. We argue that while this law will pose large challenges for industry, it highlights opportunities for computer scientists to take the lead in designing algorithms and evaluation frameworks which avoid discrimination and enable explanation.
[ "['Bryce Goodman' 'Seth Flaxman']", "Bryce Goodman and Seth Flaxman" ]