categories
string
doi
string
id
string
year
float64
venue
string
link
string
updated
string
published
string
title
string
abstract
string
authors
list
cs.LG stat.ML
null
1606.08819
null
null
http://arxiv.org/pdf/1606.08819v2
2019-01-29T11:11:43Z
2016-06-28T18:32:43Z
Multi-View Kernel Consensus For Data Analysis
The input data features set for many data driven tasks is high-dimensional while the intrinsic dimension of the data is low. Data analysis methods aim to uncover the underlying low dimensional structure imposed by the low dimensional hidden parameters by utilizing distance metrics that consider the set of attributes as a single monolithic set. However, the transformation of the low dimensional phenomena into the measured high dimensional observations might distort the distance metric. This distortion can affect the desired estimated low dimensional geometric structure. In this paper, we suggest utilizing the redundancy in the attribute domain by partitioning the attributes into multiple subsets we call views. The proposed methods utilize the agreement, also called consensus, between different views to extract valuable geometric information that unifies multiple views about the intrinsic relationships among several different observations. This unification enhances the information that a single view or a simple concatenation of views provides.
[ "['Moshe Salhov' 'Ofir Lindenbaum' 'Yariv Aizenbud' 'Avi Silberschatz'\n 'Yoel Shkolnisky' 'Amir Averbuch']", "Moshe Salhov, Ofir Lindenbaum, Yariv Aizenbud, Avi Silberschatz, Yoel\n Shkolnisky, Amir Averbuch" ]
cs.LG cs.AI cs.IT math.IT stat.ML
null
1606.08842
null
null
http://arxiv.org/pdf/1606.08842v2
2016-09-23T15:55:44Z
2016-06-28T19:59:52Z
Active Ranking from Pairwise Comparisons and when Parametric Assumptions Don't Help
We consider sequential or active ranking of a set of n items based on noisy pairwise comparisons. Items are ranked according to the probability that a given item beats a randomly chosen item, and ranking refers to partitioning the items into sets of pre-specified sizes according to their scores. This notion of ranking includes as special cases the identification of the top-k items and the total ordering of the items. We first analyze a sequential ranking algorithm that counts the number of comparisons won, and uses these counts to decide whether to stop, or to compare another pair of items, chosen based on confidence intervals specified by the data collected up to that point. We prove that this algorithm succeeds in recovering the ranking using a number of comparisons that is optimal up to logarithmic factors. This guarantee does not require any structural properties of the underlying pairwise probability matrix, unlike a significant body of past work on pairwise ranking based on parametric models such as the Thurstone or Bradley-Terry-Luce models. It has been a long-standing open question as to whether or not imposing these parametric assumptions allows for improved ranking algorithms. For stochastic comparison models, in which the pairwise probabilities are bounded away from zero, our second contribution is to resolve this issue by proving a lower bound for parametric models. This shows, perhaps surprisingly, that these popular parametric modeling choices offer at most logarithmic gains for stochastic comparisons.
[ "Reinhard Heckel and Nihar B. Shah and Kannan Ramchandran and Martin J.\n Wainwright", "['Reinhard Heckel' 'Nihar B. Shah' 'Kannan Ramchandran'\n 'Martin J. Wainwright']" ]
cs.PL cs.AI cs.LG
null
1606.08866
null
null
http://arxiv.org/pdf/1606.08866v1
2016-06-28T20:04:07Z
2016-06-28T20:04:07Z
Technical Report: Towards a Universal Code Formatter through Machine Learning
There are many declarative frameworks that allow us to implement code formatters relatively easily for any specific language, but constructing them is cumbersome. The first problem is that "everybody" wants to format their code differently, leading to either many formatter variants or a ridiculous number of configuration options. Second, the size of each implementation scales with a language's grammar size, leading to hundreds of rules. In this paper, we solve the formatter construction problem using a novel approach, one that automatically derives formatters for any given language without intervention from a language expert. We introduce a code formatter called CodeBuff that uses machine learning to abstract formatting rules from a representative corpus, using a carefully designed feature set. Our experiments on Java, SQL, and ANTLR grammars show that CodeBuff is efficient, has excellent accuracy, and is grammar invariant for a given language. It also generalizes to a 4th language tested during manuscript preparation.
[ "Terence Parr and Jurgin Vinju", "['Terence Parr' 'Jurgin Vinju']" ]
cs.DC cs.LG
null
1606.08883
null
null
http://arxiv.org/pdf/1606.08883v1
2016-06-28T20:50:08Z
2016-06-28T20:50:08Z
Defending Non-Bayesian Learning against Adversarial Attacks
This paper addresses the problem of non-Bayesian learning over multi-agent networks, where agents repeatedly collect partially informative observations about an unknown state of the world, and try to collaboratively learn the true state. We focus on the impact of the adversarial agents on the performance of consensus-based non-Bayesian learning, where non-faulty agents combine local learning updates with consensus primitives. In particular, we consider the scenario where an unknown subset of agents suffer Byzantine faults -- agents suffering Byzantine faults behave arbitrarily. Two different learning rules are proposed.
[ "['Lili Su' 'Nitin H. Vaidya']", "Lili Su, Nitin H. Vaidya" ]
cs.LG math.PR math.ST stat.TH
null
1606.08920
null
null
http://arxiv.org/pdf/1606.08920v2
2017-12-31T03:45:48Z
2016-06-29T00:19:55Z
Exact Lower Bounds for the Agnostic Probably-Approximately-Correct (PAC) Machine Learning Model
We provide an exact non-asymptotic lower bound on the minimax expected excess risk (EER) in the agnostic probably-approximately-correct (PAC) machine learning classification model and identify minimax learning algorithms as certain maximally symmetric and minimally randomized "voting" procedures. Based on this result, an exact asymptotic lower bound on the minimax EER is provided. This bound is of the simple form $c_\infty/\sqrt{\nu}$ as $\nu\to\infty$, where $c_\infty=0.16997\dots$ is a universal constant, $\nu=m/d$, $m$ is the size of the training sample, and $d$ is the Vapnik--Chervonenkis dimension of the hypothesis class. It is shown that the differences between these asymptotic and non-asymptotic bounds, as well as the differences between these two bounds and the maximum EER of any learning algorithms that minimize the empirical risk, are asymptotically negligible, and all these differences are due to ties in the mentioned "voting" procedures. A few easy to compute non-asymptotic lower bounds on the minimax EER are also obtained, which are shown to be close to the exact asymptotic lower bound $c_\infty/\sqrt{\nu}$ even for rather small values of the ratio $\nu=m/d$. As an application of these results, we substantially improve existing lower bounds on the tail probability of the excess risk. Among the tools used are Bayes estimation and apparently new identities and inequalities for binomial distributions.
[ "Aryeh Kontorovich and Iosif Pinelis", "['Aryeh Kontorovich' 'Iosif Pinelis']" ]
cs.LG cs.AI cs.CR cs.SE
null
1606.08928
null
null
http://arxiv.org/pdf/1606.08928v1
2016-06-29T01:05:36Z
2016-06-29T01:05:36Z
subgraph2vec: Learning Distributed Representations of Rooted Sub-graphs from Large Graphs
In this paper, we present subgraph2vec, a novel approach for learning latent representations of rooted subgraphs from large graphs inspired by recent advancements in Deep Learning and Graph Kernels. These latent representations encode semantic substructure dependencies in a continuous vector space, which is easily exploited by statistical models for tasks such as graph classification, clustering, link prediction and community detection. subgraph2vec leverages local information obtained from neighbourhoods of nodes to learn their latent representations in an unsupervised fashion. We demonstrate that subgraph vectors learnt by our approach could be used in conjunction with classifiers such as CNNs, SVMs and relational data clustering algorithms to achieve significantly superior accuracies. Also, we show that the subgraph vectors could be used for building a deep learning variant of the Weisfeiler-Lehman graph kernel. Our experiments on several benchmark and large-scale real-world datasets reveal that subgraph2vec achieves significant improvements in accuracies over existing graph kernels on both supervised and unsupervised learning tasks. Specifically, on two real-world program analysis tasks, namely, code clone and malware detection, subgraph2vec outperforms state-of-the-art kernels by more than 17% and 4%, respectively.
[ "Annamalai Narayanan, Mahinthan Chandramohan, Lihui Chen, Yang Liu and\n Santhoshkumar Saminathan", "['Annamalai Narayanan' 'Mahinthan Chandramohan' 'Lihui Chen' 'Yang Liu'\n 'Santhoshkumar Saminathan']" ]
cs.AI cs.LG stat.ML
null
1606.08963
null
null
http://arxiv.org/pdf/1606.08963v1
2016-06-29T06:00:35Z
2016-06-29T06:00:35Z
Non-linear Label Ranking for Large-scale Prediction of Long-Term User Interests
We consider the problem of personalization of online services from the viewpoint of ad targeting, where we seek to find the best ad categories to be shown to each user, resulting in improved user experience and increased advertisers' revenue. We propose to address this problem as a task of ranking the ad categories depending on a user's preference, and introduce a novel label ranking approach capable of efficiently learning non-linear, highly accurate models in large-scale settings. Experiments on a real-world advertising data set with more than 3.2 million users show that the proposed algorithm outperforms the existing solutions in terms of both rank loss and top-K retrieval performance, strongly suggesting the benefit of using the proposed model on large-scale ranking problems.
[ "['Nemanja Djuric' 'Mihajlo Grbovic' 'Vladan Radosavljevic'\n 'Narayan Bhamidipati' 'Slobodan Vucetic']", "Nemanja Djuric, Mihajlo Grbovic, Vladan Radosavljevic, Narayan\n Bhamidipati, Slobodan Vucetic" ]
cs.LG
null
1606.09022
null
null
http://arxiv.org/pdf/1606.09022v1
2016-06-29T09:47:29Z
2016-06-29T09:47:29Z
Decision making via semi-supervised machine learning techniques
Semi-supervised learning (SSL) is a class of supervised learning tasks and techniques that also exploits the unlabeled data for training. SSL significantly reduces labeling related costs and is able to handle large data sets. The primary objective is the extraction of robust inference rules. Decision support systems (DSSs) that utilize SSL have significant advantages. Only a small amount of labeled data is required for the initialization. Then, new (unlabeled) data can be utilized to improve the system's performance. Thus, the DSS is continuously adapted to new conditions, with minimum effort. Techniques which are cost effective and easily adapted to dynamic systems can be beneficial for many practical applications. Such application fields are: (a) industrial assembly lines monitoring, (b) sea border surveillance, (c) elders' falls detection, (d) transportation tunnels inspection, (e) concrete foundation piles defect recognition, (f) commercial sector companies financial assessment and (g) image advanced filtering for cultural heritage applications.
[ "['Eftychios Protopapadakis']", "Eftychios Protopapadakis" ]
cs.CL cs.LG
null
1606.09058
null
null
http://arxiv.org/pdf/1606.09058v1
2016-06-29T12:08:51Z
2016-06-29T12:08:51Z
A Distributional Semantics Approach to Implicit Language Learning
In the present paper we show that distributional information is particularly important when considering concept availability under implicit language learning conditions. Based on results from different behavioural experiments we argue that the implicit learnability of semantic regularities depends on the degree to which the relevant concept is reflected in language use. In our simulations, we train a Vector-Space model on either an English or a Chinese corpus and then feed the resulting representations to a feed-forward neural network. The task of the neural network was to find a mapping between the word representations and the novel words. Using datasets from four behavioural experiments, which used different semantic manipulations, we were able to obtain learning patterns very similar to those obtained by humans.
[ "Dimitrios Alikaniotis and John N. Williams", "['Dimitrios Alikaniotis' 'John N. Williams']" ]
cs.LG
null
1606.09152
null
null
http://arxiv.org/pdf/1606.09152v2
2016-08-22T11:07:23Z
2016-06-29T15:22:13Z
Actor-critic versus direct policy search: a comparison based on sample complexity
Sample efficiency is a critical property when optimizing policy parameters for the controller of a robot. In this paper, we evaluate two state-of-the-art policy optimization algorithms. One is a recent deep reinforcement learning method based on an actor-critic algorithm, Deep Deterministic Policy Gradient (DDPG), that has been shown to perform well on various control benchmarks. The other one is a direct policy search method, Covariance Matrix Adaptation Evolution Strategy (CMA-ES), a black-box optimization method that is widely used for robot learning. The algorithms are evaluated on a continuous version of the mountain car benchmark problem, so as to compare their sample complexity. From a preliminary analysis, we expect DDPG to be more sample efficient than CMA-ES, which is confirmed by our experimental results.
[ "Arnaud de Froissard de Broissia and Olivier Sigaud", "['Arnaud de Froissard de Broissia' 'Olivier Sigaud']" ]
stat.ML cs.LG stat.AP
null
1606.09184
null
null
http://arxiv.org/pdf/1606.09184v1
2016-06-29T17:06:45Z
2016-06-29T17:06:45Z
Disease Trajectory Maps
Medical researchers are coming to appreciate that many diseases are in fact complex, heterogeneous syndromes composed of subpopulations that express different variants of a related complication. Time series data extracted from individual electronic health records (EHR) offer an exciting new way to study subtle differences in the way these diseases progress over time. In this paper, we focus on answering two questions that can be asked using these databases of time series. First, we want to understand whether there are individuals with similar disease trajectories and whether there are a small number of degrees of freedom that account for differences in trajectories across the population. Second, we want to understand how important clinical outcomes are associated with disease trajectories. To answer these questions, we propose the Disease Trajectory Map (DTM), a novel probabilistic model that learns low-dimensional representations of sparse and irregularly sampled time series. We propose a stochastic variational inference algorithm for learning the DTM that allows the model to scale to large modern medical datasets. To demonstrate the DTM, we analyze data collected on patients with the complex autoimmune disease, scleroderma. We find that DTM learns meaningful representations of disease trajectories and that the representations are significantly associated with important clinical outcomes.
[ "Peter Schulam and Raman Arora", "['Peter Schulam' 'Raman Arora']" ]
stat.ML cs.LG
null
1606.09190
null
null
http://arxiv.org/pdf/1606.09190v1
2016-06-29T17:20:39Z
2016-06-29T17:20:39Z
A Semi-Definite Programming approach to low dimensional embedding for unsupervised clustering
This paper proposes a variant of the method of Guédon and Vershynin for estimating the cluster matrix in the Mixture of Gaussians framework via Semi-Definite Programming. A clustering oriented embedding is deduced from this estimate. The procedure is suitable for very high dimensional data because it is based on pairwise distances only. Theoretical guarantees are provided and an eigenvalue optimisation approach is proposed for computing the embedding. The performance of the method is illustrated via Monte Carlo experiments and comparisons with other embeddings from the literature.
[ "St\\'ephane Chr\\'etien, Cl\\'ement Dombry and Adrien Faivre", "['Stéphane Chrétien' 'Clément Dombry' 'Adrien Faivre']" ]
cs.LG cs.RO
null
1606.09197
null
null
http://arxiv.org/pdf/1606.09197v4
2018-07-02T12:40:05Z
2016-06-29T17:39:09Z
Model-Free Trajectory-based Policy Optimization with Monotonic Improvement
Many of the recent trajectory optimization algorithms alternate between linear approximation of the system dynamics around the mean trajectory and conservative policy update. One way of constraining the policy change is by bounding the Kullback-Leibler (KL) divergence between successive policies. These approaches already demonstrated great experimental success in challenging problems such as end-to-end control of physical systems. However, the linear approximation of the system dynamics can introduce a bias in the policy update and prevent convergence to the optimal policy. In this article, we propose a new model-free trajectory-based policy optimization algorithm with guaranteed monotonic improvement. The algorithm backpropagates a local, quadratic and time-dependent Q-function learned from trajectory data instead of a model of the system dynamics. Our policy update ensures exact KL-constraint satisfaction without simplifying assumptions on the system dynamics. We experimentally demonstrate on highly non-linear control tasks the improvement in performance of our algorithm in comparison to approaches linearizing the system dynamics. In order to show the monotonic improvement of our algorithm, we additionally conduct a theoretical analysis of our policy update scheme to derive a lower bound of the change in policy return between successive iterations.
[ "['Riad Akrour' 'Abbas Abdolmaleki' 'Hany Abdulsamad' 'Jan Peters'\n 'Gerhard Neumann']", "Riad Akrour, Abbas Abdolmaleki, Hany Abdulsamad, Jan Peters and\n Gerhard Neumann" ]
cs.LG stat.ML
null
1606.09202
null
null
http://arxiv.org/pdf/1606.09202v2
2016-12-28T19:54:19Z
2016-06-29T18:01:15Z
Tighter bounds lead to improved classifiers
The standard approach to supervised classification involves the minimization of a log-loss as an upper bound to the classification error. While this is a tight bound early on in the optimization, it overemphasizes the influence of incorrectly classified examples far from the decision boundary. Updating the upper bound during the optimization leads to improved classification rates while transforming the learning into a sequence of minimization problems. In addition, in the context where the classifier is part of a larger system, this modification makes it possible to link the performance of the classifier to that of the whole system, allowing the seamless introduction of external constraints.
[ "['Nicolas Le Roux']", "Nicolas Le Roux" ]
cs.CL cs.CV cs.LG
null
1606.09239
null
null
http://arxiv.org/pdf/1606.09239v1
2016-06-29T19:52:53Z
2016-06-29T19:52:53Z
Learning Concept Taxonomies from Multi-modal Data
We study the problem of automatically building hypernym taxonomies from textual and visual data. Previous works in taxonomy induction generally ignore the increasingly prominent visual data, which encode important perceptual semantics. Instead, we propose a probabilistic model for taxonomy induction by jointly leveraging text and images. To avoid hand-crafted feature engineering, we design end-to-end features based on distributed representations of images and words. The model is discriminatively trained given a small set of existing ontologies and is capable of building full taxonomies from scratch for a collection of unseen conceptual label items with associated images. We evaluate our model and features on the WordNet hierarchies, where our system outperforms previous approaches by a large gap.
[ "Hao Zhang, Zhiting Hu, Yuntian Deng, Mrinmaya Sachan, Zhicheng Yan,\n Eric P. Xing", "['Hao Zhang' 'Zhiting Hu' 'Yuntian Deng' 'Mrinmaya Sachan' 'Zhicheng Yan'\n 'Eric P. Xing']" ]
cs.CV cs.LG stat.ML
null
1606.09282
null
null
http://arxiv.org/pdf/1606.09282v3
2017-02-14T22:32:30Z
2016-06-29T20:54:04Z
Learning without Forgetting
When building a unified vision system or gradually adding new capabilities to a system, the usual assumption is that training data for all tasks is always available. However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises where we add new capabilities to a Convolutional Neural Network (CNN), but the training data for its existing capabilities are unavailable. We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities. Our method performs favorably compared to commonly used feature extraction and fine-tuning adaption techniques and performs similarly to multitask learning that uses original task data we assume unavailable. A more surprising observation is that Learning without Forgetting may be able to replace fine-tuning with similar old and new task datasets for improved new task performance.
[ "Zhizhong Li, Derek Hoiem", "['Zhizhong Li' 'Derek Hoiem']" ]
math.OC cs.LG math.NA
null
1606.09333
null
null
http://arxiv.org/pdf/1606.09333v1
2016-06-30T03:10:54Z
2016-06-30T03:10:54Z
Dimension-Free Iteration Complexity of Finite Sum Optimization Problems
Many canonical machine learning problems boil down to a convex optimization problem with a finite sum structure. However, whereas much progress has been made in developing faster algorithms for this setting, the inherent limitations of these problems are not satisfactorily addressed by existing lower bounds. Indeed, current bounds focus on first-order optimization algorithms, and only apply in the often unrealistic regime where the number of iterations is less than $\mathcal{O}(d/n)$ (where $d$ is the dimension and $n$ is the number of samples). In this work, we extend the framework of (Arjevani et al., 2015) to provide new lower bounds, which are dimension-free, and go beyond the assumptions of current bounds, thereby covering standard finite sum optimization methods, e.g., SAG, SAGA, SVRG, SDCA without duality, as well as stochastic coordinate-descent methods, such as SDCA and accelerated proximal SDCA.
[ "Yossi Arjevani and Ohad Shamir", "['Yossi Arjevani' 'Ohad Shamir']" ]
cs.LG stat.ML
null
1606.09375
null
null
http://arxiv.org/pdf/1606.09375v3
2017-02-05T17:04:39Z
2016-06-30T07:42:13Z
Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering
In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or word embeddings, represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs.
[ "['Michaël Defferrard' 'Xavier Bresson' 'Pierre Vandergheynst']", "Micha\\\"el Defferrard, Xavier Bresson, Pierre Vandergheynst" ]
cs.LG cs.SY
null
1606.09383
null
null
http://arxiv.org/pdf/1606.09383v1
2016-06-30T08:12:21Z
2016-06-30T08:12:21Z
On Approximate Dynamic Programming with Multivariate Splines for Adaptive Control
We define an SDP framework based on the RLSTD algorithm and multivariate simplex B-splines. We introduce a local forget factor capable of preserving the continuity of the simplex splines. This local forget factor is integrated with the RLSTD algorithm, resulting in a modified RLSTD algorithm that is capable of tracking time-varying systems. We present the results of two numerical experiments, one validating SDP and comparing it with NDP, and another to show the advantages of the modified RLSTD algorithm over the original. While SDP requires more computations per time-step, the experiment shows that for the same amount of function approximator parameters, there is an increase in performance in terms of stability and learning rate compared to NDP. The second experiment shows that SDP in combination with the modified RLSTD algorithm allows for faster recovery compared to the original RLSTD algorithm when system parameters are altered, paving the way for an adaptive high-performance non-linear control method.
[ "Willem Eerland, Coen de Visser, Erik-Jan van Kampen", "['Willem Eerland' 'Coen de Visser' 'Erik-Jan van Kampen']" ]
stat.ML cs.LG
null
1606.09388
null
null
http://arxiv.org/pdf/1606.09388v3
2019-09-12T12:20:09Z
2016-06-30T08:30:57Z
Asymptotically Optimal Algorithms for Budgeted Multiple Play Bandits
We study a generalization of the multi-armed bandit problem with multiple plays where there is a cost associated with pulling each arm and the agent has a budget at each time that dictates how much she can expect to spend. We derive an asymptotic regret lower bound for any uniformly efficient algorithm in our setting. We then study a variant of Thompson sampling for Bernoulli rewards and a variant of KL-UCB for both single-parameter exponential families and bounded, finitely supported rewards. We show these algorithms are asymptotically optimal, both in rate and leading problem-dependent constants, including in the thick margin setting where multiple arms fall on the decision boundary.
[ "['Alexander Luedtke' 'Emilie Kaufmann' 'Antoine Chambaz']", "Alexander Luedtke, Emilie Kaufmann (CRIStAL), Antoine Chambaz (MAP5 -\n UMR 8145)" ]
cs.LG stat.ML
null
1606.09458
null
null
http://arxiv.org/pdf/1606.09458v2
2018-02-21T12:31:01Z
2016-06-30T12:24:04Z
Vote-boosting ensembles
Vote-boosting is a sequential ensemble learning method in which the individual classifiers are built on different weighted versions of the training data. To build a new classifier, the weight of each training instance is determined in terms of the degree of disagreement among the current ensemble predictions for that instance. For low class-label noise levels, especially when simple base learners are used, emphasis should be made on instances for which the disagreement rate is high. When more flexible classifiers are used and as the noise level increases, the emphasis on these uncertain instances should be reduced. In fact, at sufficiently high levels of class-label noise, the focus should be on instances on which the ensemble classifiers agree. The optimal type of emphasis can be automatically determined using cross-validation. An extensive empirical analysis using the beta distribution as emphasis function illustrates that vote-boosting is an effective method to generate ensembles that are both accurate and robust.
[ "Maryam Sabzevari, Gonzalo Mart\\'inez-Mu\\~noz, Alberto Su\\'arez", "['Maryam Sabzevari' 'Gonzalo Martínez-Muñoz' 'Alberto Suárez']" ]
stat.ML cs.LG
null
1606.09517
null
null
http://arxiv.org/pdf/1606.09517v1
2016-06-30T14:44:26Z
2016-06-30T14:44:26Z
A Model Explanation System: Latest Updates and Extensions
We propose a general model explanation system (MES) for "explaining" the output of black box classifiers. This paper describes extensions to Turner (2015), which is referred to frequently in the text. We use the motivating example of a classifier trained to detect fraud in a credit card transaction history. The key aspect is that we provide explanations applicable to a single prediction, rather than provide an interpretable set of parameters. We focus on explaining positive predictions (alerts). However, the presented methodology is symmetrically applicable to negative predictions.
[ "['Ryan Turner']", "Ryan Turner" ]
cs.LG cs.AI cs.CY
null
1606.09581
null
null
http://arxiv.org/pdf/1606.09581v2
2016-07-18T08:14:43Z
2016-06-28T07:00:07Z
Performance Based Evaluation of Various Machine Learning Classification Techniques for Chronic Kidney Disease Diagnosis
Areas where Artificial Intelligence (AI) & related fields are finding their applications are increasing day by day; moving from core areas of computer science, they are finding their applications in various other domains. In recent times Machine Learning, i.e. a sub-domain of AI, has been widely used in order to assist medical experts and doctors in the prediction, diagnosis and prognosis of various diseases and other medical disorders. In this manuscript the authors applied various machine learning algorithms to a problem in the domain of medical diagnosis and analyzed their efficiency in predicting the results. The problem selected for the study is the diagnosis of Chronic Kidney Disease. The dataset used for the study consists of 400 instances and 24 attributes. The authors evaluated 12 classification techniques by applying them to the Chronic Kidney Disease data. In order to calculate efficiency, results of the prediction by candidate methods were compared with the actual medical results of the subject. The various metrics used for performance evaluation are predictive accuracy, precision, sensitivity and specificity. The results indicate that the decision tree performed best, with an accuracy of nearly 98.6%, sensitivity of 0.9720, precision of 1 and specificity of 1.
[ "['Sahil Sharma' 'Vinod Sharma' 'Atul Sharma']", "Sahil Sharma, Vinod Sharma and Atul Sharma" ]
cs.LG cs.AI cs.IT math.IT stat.ML
10.1109/TIT.2020.3045613
1606.09632
null
null
null
null
null
A Permutation-based Model for Crowd Labeling: Optimal Estimation and Robustness
The task of aggregating and denoising crowd-labeled data has gained increased significance with the advent of crowdsourcing platforms and massive datasets. We propose a permutation-based model for crowd labeled data that is a significant generalization of the classical Dawid-Skene model, and introduce a new error metric by which to compare different estimators. We derive global minimax rates for the permutation-based model that are sharp up to logarithmic factors, and match the minimax lower bounds derived under the simpler Dawid-Skene model. We then design two computationally-efficient estimators: the WAN estimator for the setting where the ordering of workers in terms of their abilities is approximately known, and the OBI-WAN estimator where that is not known. For each of these estimators, we provide non-asymptotic bounds on their performance. We conduct synthetic simulations and experiments on real-world crowdsourcing data, and the experimental results corroborate our theoretical findings.
[ "Nihar B. Shah, Sivaraman Balakrishnan, Martin J. Wainwright" ]
cs.IR cs.LG
null
1607.00024
null
null
http://arxiv.org/pdf/1607.00024v4
2016-07-28T07:48:46Z
2016-06-30T20:16:58Z
Review Based Rating Prediction
Recommendation systems are important components of today's e-commerce applications, such as targeted advertising, personalized marketing and information retrieval. In recent years, the importance of contextual information has motivated generation of personalized recommendations according to the available contextual information of users. Compared to traditional systems, which mainly utilize users' rating history, review-based recommendation can provide more relevant results to users. We introduce a review-based recommendation approach that obtains contextual information by mining user reviews. The proposed approach uses features obtained by analyzing textual reviews, using methods developed in Natural Language Processing (NLP) and the information retrieval discipline, to compute a utility function over a given item. An item's utility is a measure that shows how much it is preferred according to the user's current context. In our system, the context inference is modeled as similarity between the user's review history and the item's review history. As an example application, we used our method to mine contextual data from customers' reviews of movies and use it to produce review-based rating prediction. The predicted ratings can generate recommendations that are item-based and should appear in the recommended items list on the product page. Our evaluations suggest that our system can help produce better prediction rating scores in comparison to the standard prediction methods.
[ "['Tal Hadad']", "Tal Hadad" ]
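The context inference described in this abstract, similarity between a user's review history and an item's review history, can be sketched with plain term-count cosine similarity; the paper's actual NLP feature pipeline is richer, so the helper below is only an illustrative stand-in:

```python
from collections import Counter
import math

def cosine_similarity(text_a, text_b):
    """Cosine similarity between two bag-of-words term-count vectors."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Context inference: compare a user's review history with an item's review history.
user_history = "great acting but the plot was slow"
item_history = "slow plot although the acting was great"
score = cosine_similarity(user_history, item_history)
```

A real system would replace raw term counts with the NLP-derived features the abstract alludes to, but the utility computation keeps the same shape: score each item by the similarity of its review history to the user's.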
stat.ML cs.LG
null
1607.00034
null
null
http://arxiv.org/pdf/1607.00034v1
2016-06-30T20:40:24Z
2016-06-30T20:40:24Z
Ballpark Learning: Estimating Labels from Rough Group Comparisons
We are interested in estimating individual labels given only coarse, aggregated signal over the data points. In our setting, we receive sets ("bags") of unlabeled instances with constraints on label proportions. We relax the unrealistic assumption of known label proportions, made in previous work; instead, we assume only upper and lower bounds, and constraints on bag differences. We motivate the problem, propose an intuitive formulation and algorithm, and apply our methods to real-world scenarios. Across several domains, we show how using only proportion constraints and no labeled examples, we can achieve surprisingly high accuracy. In particular, we demonstrate how to predict income level using rough stereotypes and how to perform sentiment analysis using very little information. We also apply our method to guide exploratory analysis, recovering geographical differences in Twitter dialect.
[ "Tom Hope and Dafna Shahaf", "['Tom Hope' 'Dafna Shahaf']" ]
cs.LG cs.NE
null
1607.00036
null
null
http://arxiv.org/pdf/1607.00036v2
2017-03-17T05:56:48Z
2016-06-30T20:45:12Z
Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes
We extend the neural Turing machine (NTM) model into a dynamic neural Turing machine (D-NTM) by introducing a trainable memory addressing scheme. This addressing scheme maintains two separate vectors for each memory cell: a content vector and an address vector. This allows the D-NTM to learn a wide variety of location-based addressing strategies, including both linear and nonlinear ones. We implement the D-NTM with both continuous, differentiable and discrete, non-differentiable read/write mechanisms. We investigate the mechanisms and effects of learning to read and write into a memory through experiments on the Facebook bAbI tasks, using both a feedforward controller and a GRU controller. The D-NTM is evaluated on a set of Facebook bAbI tasks and shown to outperform NTM and LSTM baselines. We have done an extensive analysis of our model and of different variations of the NTM on the bAbI tasks. We also provide further experimental results on the sequential pMNIST, Stanford Natural Language Inference, associative recall and copy tasks.
[ "Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, Yoshua Bengio", "['Caglar Gulcehre' 'Sarath Chandar' 'Kyunghyun Cho' 'Yoshua Bengio']" ]
cs.LG stat.ML
null
1607.00067
null
null
http://arxiv.org/pdf/1607.00067v1
2016-06-30T22:25:20Z
2016-06-30T22:25:20Z
Unsupervised Learning with Imbalanced Data via Structure Consolidation Latent Variable Model
Unsupervised learning on imbalanced data is challenging because, when given imbalanced data, current models are often dominated by the majority category and ignore the categories with small amounts of data. We develop a latent variable model that can cope with imbalanced data by dividing the latent space into a shared space and a private space. Based on Gaussian Process Latent Variable Models, we propose a new kernel formulation that enables the separation of the latent space, and we derive an efficient variational inference method. The performance of our model is demonstrated on an imbalanced medical image dataset.
[ "Fariba Yousefi, Zhenwen Dai, Carl Henrik Ek, Neil Lawrence", "['Fariba Yousefi' 'Zhenwen Dai' 'Carl Henrik Ek' 'Neil Lawrence']" ]
math.OC cs.LG stat.ML
null
1607.00076
null
null
http://arxiv.org/pdf/1607.00076v2
2016-12-08T08:15:53Z
2016-06-30T23:12:20Z
Multi-class classification: mirror descent approach
We consider the problem of multi-class classification and a stochastic optimization approach to it. We derive risk bounds for the stochastic mirror descent algorithm and provide examples of set geometries that make the use of the algorithm efficient in terms of the dependence of the error on the number of classes $k$.
[ "['Daria Reshetova']", "Daria Reshetova" ]
cs.AI cs.LG cs.SD
null
1607.00087
null
null
http://arxiv.org/pdf/1607.00087v2
2016-12-02T13:12:35Z
2016-07-01T00:54:10Z
Fractal Dimension Pattern Based Multiresolution Analysis for Rough Estimator of Person-Dependent Audio Emotion Recognition
As a general means of expression, audio analysis and recognition have attracted much attention for their wide applications in the real world. Audio emotion recognition (AER) attempts to understand the emotional states of humans from given utterance signals, and has been widely studied for its role in developing friendly human-machine interfaces. In contrast to other existing works, person-dependent patterns of audio emotions are investigated here, and fractal dimension features are calculated for acoustic feature extraction. Furthermore, the method is able to efficiently learn intrinsic characteristics of auditory emotions, while the utterance features are learned from the fractal dimensions of each sub-band. Experimental results show the proposed method is able to provide competitive performance for audio emotion recognition.
[ "Miao Cheng and Ah Chung Tsoi", "['Miao Cheng' 'Ah Chung Tsoi']" ]
math.OC cs.LG cs.NA math.NA stat.CO stat.ML
null
1607.00101
null
null
http://arxiv.org/pdf/1607.00101v1
2016-07-01T03:16:57Z
2016-07-01T03:16:57Z
Randomized block proximal damped Newton method for composite self-concordant minimization
In this paper we consider the composite self-concordant (CSC) minimization problem, which minimizes the sum of a self-concordant function $f$ and a (possibly nonsmooth) proper closed convex function $g$. The CSC minimization is the cornerstone of the path-following interior point methods for solving a broad class of convex optimization problems. It has also found numerous applications in machine learning. The proximal damped Newton (PDN) methods, which enjoy a nice iteration complexity, have been well studied in the literature for solving this problem. Given that at each iteration these methods typically require evaluating or accessing the Hessian of $f$ and also need to solve a proximal Newton subproblem, the cost per iteration can be prohibitively high when applied to large-scale problems. Inspired by the recent success of block coordinate descent methods, we propose a randomized block proximal damped Newton (RBPDN) method for solving the CSC minimization. Compared to the PDN methods, the computational cost per iteration of RBPDN is usually significantly lower. Computational experiments on a class of regularized logistic regression problems demonstrate that RBPDN is indeed promising for solving large-scale CSC minimization problems. The convergence of RBPDN is also analyzed in the paper. In particular, we show that RBPDN is globally convergent when $g$ is Lipschitz continuous. It is also shown that RBPDN enjoys local linear convergence. Moreover, we show that for a class of $g$, including the case where $g$ is Lipschitz differentiable, RBPDN enjoys global linear convergence. As a striking consequence, this shows that the classical damped Newton methods [22,40] and the PDN [31] for such $g$ are globally linearly convergent, which was previously unknown in the literature. Moreover, this result can be used to sharpen the existing iteration complexity of these methods.
[ "Zhaosong Lu", "['Zhaosong Lu']" ]
cs.LG stat.ML
null
1607.00110
null
null
http://arxiv.org/pdf/1607.00110v1
2016-07-01T05:21:15Z
2016-07-01T05:21:15Z
Combining Gradient Boosting Machines with Collective Inference to Predict Continuous Values
Gradient boosting of regression trees is a competitive procedure for learning predictive models of continuous data that fits the data with an additive non-parametric model. The classic version of gradient boosting assumes that the data is independent and identically distributed. However, relational data with interdependent, linked instances is now common, and the dependencies in such data can be exploited to improve predictive performance. Collective inference is one approach to exploit relational correlation patterns and significantly reduce classification error. However, much of the work on collective learning and inference has focused on discrete prediction tasks rather than continuous ones. In this work, we investigate how to combine these two paradigms to improve regression in relational domains. Specifically, we propose a boosting algorithm for learning a collective inference model that predicts a continuous target variable. In the algorithm, we learn a basic relational model, collectively infer the target values, and then iteratively learn relational models to predict the residuals. We evaluate our proposed algorithm on a real network dataset and show that it outperforms alternative boosting methods. In addition, our investigation revealed that the relational features interact together to produce better predictions.
[ "['Iman Alodah' 'Jennifer Neville']", "Iman Alodah and Jennifer Neville" ]
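The iterative residual-fitting loop described above can be sketched on a toy graph; the weak learner here (averaging residuals over a node's neighborhood) is a deliberate simplification of the paper's tree-based relational learners:

```python
def collective_boosting(y, neighbors, rounds=50, lr=0.5):
    """Boosted collective inference sketch: each round fits a relational
    weak learner that predicts a node's residual from its own residual
    and the average residual of its graph neighbors, then adds the
    shrunken prediction to the ensemble."""
    n = len(y)
    pred = [0.0] * n
    for _ in range(rounds):
        resid = [y[i] - pred[i] for i in range(n)]
        new = []
        for i in range(n):
            nb = neighbors[i]
            rel = sum(resid[j] for j in nb) / len(nb)  # relational feature
            h = 0.5 * resid[i] + 0.5 * rel             # weak relational learner
            new.append(pred[i] + lr * h)
        pred = new
    return pred

# Toy chain graph 0-1-2-3 with correlated targets.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
y = [1.0, 1.2, 3.0, 3.1]
pred = collective_boosting(y, neighbors)
```

Each round computes residuals, collectively smooths them over the graph, and adds the result to the ensemble, so relational correlation steadily reduces the regression error.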
cs.LG
null
1607.00122
null
null
http://arxiv.org/pdf/1607.00122v1
2016-07-01T06:50:17Z
2016-07-01T06:50:17Z
Less-forgetting Learning in Deep Neural Networks
The catastrophic forgetting problem causes deep neural networks to forget previously learned information when learning from data collected in new environments, such as by different sensors or under different lighting conditions. This paper presents a new method for alleviating the catastrophic forgetting problem. Unlike previous research, our method does not use any information from the source domain. Surprisingly, our method is very effective at forgetting less of the source-domain information, and we show its effectiveness in several experiments. Furthermore, we observed that the forgetting problem also occurs between mini-batches during ordinary training with stochastic gradient descent, and that this problem is one of the factors that degrades the generalization performance of the network. We also address this problem with the proposed method. Finally, we show that our less-forgetting learning method is also helpful for improving the performance of deep neural networks in terms of recognition rates.
[ "['Heechul Jung' 'Jeongwoo Ju' 'Minju Jung' 'Junmo Kim']", "Heechul Jung, Jeongwoo Ju, Minju Jung, Junmo Kim" ]
stat.ML cs.CR cs.LG
10.1145/2976749.2978318
1607.00133
null
null
http://arxiv.org/abs/1607.00133v2
2016-10-24T11:59:40Z
2016-07-01T07:29:10Z
Deep Learning with Differential Privacy
Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
[ "['Martín Abadi' 'Andy Chu' 'Ian Goodfellow' 'H. Brendan McMahan'\n 'Ilya Mironov' 'Kunal Talwar' 'Li Zhang']", "Mart\\'in Abadi and Andy Chu and Ian Goodfellow and H. Brendan McMahan\n and Ilya Mironov and Kunal Talwar and Li Zhang" ]
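The core mechanism behind differentially private training, clipping each per-example gradient and adding calibrated Gaussian noise before the descent step, can be sketched as follows; the toy logistic model, clipping norm `C`, and noise multiplier `sigma` are illustrative choices, not the paper's exact configuration:

```python
import math, random

def clip(g, C):
    """Rescale a gradient so its L2 norm is at most C."""
    norm = math.sqrt(sum(x * x for x in g))
    scale = min(1.0, C / norm) if norm > 0 else 1.0
    return [x * scale for x in g]

def dp_sgd_step(w, batch, C=1.0, sigma=1.0, lr=0.1, rng=random):
    """One DP-SGD step: clip each per-example gradient to norm C, sum,
    add Gaussian noise scaled by sigma*C, average, then descend."""
    d = len(w)
    summed = [0.0] * d
    for x, y in batch:                       # logistic-loss gradient per example
        z = sum(wi * xi for wi, xi in zip(w, x))
        p = 1.0 / (1.0 + math.exp(-z))
        g = clip([(p - y) * xi for xi in x], C)
        summed = [s + gi for s, gi in zip(summed, g)]
    noisy = [(s + rng.gauss(0.0, sigma * C)) / len(batch) for s in summed]
    return [wi - lr * gi for wi, gi in zip(w, noisy)]

rng = random.Random(0)
batch = [([1.0, 0.0], 1), ([0.0, 1.0], 0)]
w = dp_sgd_step([0.0, 0.0], batch, rng=rng)
```

The clipping bounds each example's influence (sensitivity), and the noise level together with the paper's refined privacy accounting determines the overall differential-privacy budget.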
cs.AI cs.LG stat.ML
null
1607.00136
null
null
http://arxiv.org/pdf/1607.00136v1
2016-07-01T07:34:50Z
2016-07-01T07:34:50Z
Missing Data Estimation in High-Dimensional Datasets: A Swarm Intelligence-Deep Neural Network Approach
In this paper, we examine the problem of missing data in high-dimensional datasets by taking into consideration the Missing Completely at Random and Missing at Random mechanisms, as well as the Arbitrary missing pattern. Additionally, this paper employs a methodology based on Deep Learning and Swarm Intelligence algorithms in order to provide reliable estimates for missing data. The deep learning technique is used to extract features from the input data via an unsupervised learning approach, by modeling the data distribution based on the input. This deep learning technique is then used as part of the objective function for the swarm intelligence technique, in order to estimate the missing data after a supervised fine-tuning phase, by minimizing an error function based on the interrelationships and correlations between features in the dataset. The methodology investigated in this paper therefore has longer running times; however, the promising potential outcomes justify the trade-off. Basic knowledge of statistics is presumed.
[ "['Collins Leke' 'Tshilidzi Marwala']", "Collins Leke and Tshilidzi Marwala" ]
cs.LG stat.ML
null
1607.00146
null
null
http://arxiv.org/pdf/1607.00146v1
2016-07-01T08:17:27Z
2016-07-01T08:17:27Z
Efficient and Consistent Robust Time Series Analysis
We study the problem of robust time series analysis under the standard auto-regressive (AR) time series model in the presence of arbitrary outliers. We devise an efficient hard thresholding based algorithm which can obtain a consistent estimate of the optimal AR model despite a large fraction of the time series points being corrupted. Our algorithm alternately estimates the corrupted set of points and the model parameters, and is inspired by recent advances in robust regression and hard-thresholding methods. However, a direct application of existing techniques is hindered by a critical difference in the time-series domain: each point is correlated with all previous points rendering existing tools inapplicable directly. We show how to overcome this hurdle using novel proof techniques. Using our techniques, we are also able to provide the first efficient and provably consistent estimator for the robust regression problem where a standard linear observation model with white additive noise is corrupted arbitrarily. We illustrate our methods on synthetic datasets and show that our methods indeed are able to consistently recover the optimal parameters despite a large fraction of points being corrupted.
[ "Kush Bhatia, Prateek Jain, Parameswaran Kamalaruban, Purushottam Kar", "['Kush Bhatia' 'Prateek Jain' 'Parameswaran Kamalaruban' 'Purushottam Kar']" ]
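The alternating scheme described in the abstract, estimate the corrupted set and then refit the model on the points deemed clean, can be illustrated on a one-dimensional toy regression; the hard-thresholding step below (flagging the largest residuals as corruptions) is a simplified stand-in for the paper's algorithm:

```python
def robust_fit(xs, ys, n_corrupt, iters=20):
    """Alternating estimation sketch: fit least squares on the points
    currently deemed clean, then re-flag the n_corrupt largest-residual
    points as corrupted, and repeat."""
    n = len(xs)
    clean = set(range(n))
    w = 0.0
    for _ in range(iters):
        num = sum(xs[i] * ys[i] for i in clean)
        den = sum(xs[i] * xs[i] for i in clean)
        w = num / den                         # 1-D least-squares slope
        order = sorted(range(n), key=lambda i: abs(ys[i] - w * xs[i]),
                       reverse=True)
        clean = set(order[n_corrupt:])        # hard-threshold the outliers
    return w, clean

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.0, 6.0, 8.0, 50.0]   # true slope 2, last point corrupted
w, clean = robust_fit(xs, ys, n_corrupt=1)
```

On this toy data the first fit is dragged toward the outlier, but one round of thresholding identifies it and the refit recovers the true slope exactly.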
cs.AI cs.LG stat.ML
null
1607.00148
null
null
http://arxiv.org/pdf/1607.00148v2
2016-07-11T09:33:48Z
2016-07-01T08:25:48Z
LSTM-based Encoder-Decoder for Multi-sensor Anomaly Detection
Mechanical devices such as engines, vehicles, and aircraft are typically instrumented with numerous sensors to capture the behavior and health of the machine. However, there are often external factors or variables which are not captured by sensors, leading to time-series which are inherently unpredictable. For instance, manual controls and/or unmonitored environmental conditions or load may lead to inherently unpredictable time-series. Detecting anomalies in such scenarios becomes challenging using standard approaches based on mathematical models that rely on stationarity, or prediction models that utilize prediction errors to detect anomalies. We propose a Long Short-Term Memory network based encoder-decoder scheme for anomaly detection (EncDec-AD) that learns to reconstruct 'normal' time-series behavior, and thereafter uses reconstruction error to detect anomalies. We experiment with three publicly available quasi-predictable time-series datasets: power demand, space shuttle, and ECG, and two real-world engine datasets with both predictive and unpredictable behavior. We show that EncDec-AD is robust and can detect anomalies from predictable, unpredictable, periodic, aperiodic, and quasi-periodic time-series. Further, we show that EncDec-AD is able to detect anomalies from short time-series (length as small as 30) as well as long time-series (length as large as 500).
[ "Pankaj Malhotra, Anusha Ramakrishnan, Gaurangi Anand, Lovekesh Vig,\n Puneet Agarwal, Gautam Shroff", "['Pankaj Malhotra' 'Anusha Ramakrishnan' 'Gaurangi Anand' 'Lovekesh Vig'\n 'Puneet Agarwal' 'Gautam Shroff']" ]
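Assuming a trained encoder-decoder is already available, the anomaly-scoring half of EncDec-AD reduces to fitting a distribution to reconstruction errors on 'normal' data and scoring new errors against it; the one-dimensional Gaussian below is a simplified sketch of that idea:

```python
def fit_error_model(errors):
    """Fit a Gaussian to reconstruction errors observed on normal data."""
    mu = sum(errors) / len(errors)
    var = sum((e - mu) ** 2 for e in errors) / len(errors)
    return mu, var

def anomaly_score(e, mu, var):
    """Squared Mahalanobis distance of a reconstruction error (1-D case);
    a point is flagged anomalous when this score crosses a threshold."""
    return (e - mu) ** 2 / var

# Errors produced by the (assumed) trained encoder-decoder on normal windows.
normal_errors = [0.10, 0.12, 0.09, 0.11, 0.08]
mu, var = fit_error_model(normal_errors)
score_normal = anomaly_score(0.10, mu, var)
score_anomalous = anomaly_score(0.90, mu, var)
```

An unusually large reconstruction error yields a score orders of magnitude above those of normal windows, which is what makes simple thresholding effective.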
stat.ML cs.AI cs.LG
null
1607.00215
null
null
http://arxiv.org/pdf/1607.00215v3
2017-06-13T15:54:51Z
2016-07-01T11:58:28Z
Why is Posterior Sampling Better than Optimism for Reinforcement Learning?
Computational results demonstrate that posterior sampling for reinforcement learning (PSRL) dramatically outperforms algorithms driven by optimism, such as UCRL2. We provide insight into the extent of this performance boost and the phenomenon that drives it. We leverage this insight to establish an $\tilde{O}(H\sqrt{SAT})$ Bayesian expected regret bound for PSRL in finite-horizon episodic Markov decision processes, where $H$ is the horizon, $S$ is the number of states, $A$ is the number of actions and $T$ is the time elapsed. This improves upon the best previous bound of $\tilde{O}(H S \sqrt{AT})$ for any reinforcement learning algorithm.
[ "Ian Osband, Benjamin Van Roy", "['Ian Osband' 'Benjamin Van Roy']" ]
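A minimal tabular sketch of one PSRL episode, sampling a transition model from a Dirichlet posterior and solving it by finite-horizon value iteration, looks as follows; the tiny two-state MDP, known rewards, and uniform prior counts are illustrative simplifications:

```python
import random

def sample_dirichlet(alpha, rng):
    """Draw one sample from a Dirichlet distribution via Gamma draws."""
    draws = [rng.gammavariate(a, 1.0) for a in alpha]
    s = sum(draws)
    return [d / s for d in draws]

def psrl_episode(counts, rewards, S, A, H, rng):
    """One PSRL episode: sample an MDP from the Dirichlet posterior over
    transitions, then compute its optimal policy by value iteration."""
    P = [[sample_dirichlet(counts[s][a], rng) for a in range(A)]
         for s in range(S)]                      # posterior-sampled transitions
    V = [0.0] * S
    policy = [[0] * S for _ in range(H)]
    for h in range(H - 1, -1, -1):               # backward induction
        newV = [0.0] * S
        for s in range(S):
            best_a, best_q = 0, float("-inf")
            for a in range(A):
                q = rewards[s][a] + sum(P[s][a][t] * V[t] for t in range(S))
                if q > best_q:
                    best_a, best_q = a, q
            policy[h][s] = best_a
            newV[s] = best_q
        V = newV
    return policy, V

rng = random.Random(1)
S, A, H = 2, 2, 3
counts = [[[1.0, 1.0] for _ in range(A)] for _ in range(S)]  # uniform prior
rewards = [[0.0, 0.1], [1.0, 0.0]]
policy, V = psrl_episode(counts, rewards, S, A, H, rng)
```

After acting with this policy for one episode, the observed transitions would be added to `counts`, sharpening the posterior for the next episode; the randomness of the posterior sample is what drives exploration in place of optimism.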
cs.CL cs.LG cs.SD eess.AS
null
1607.00325
null
null
http://arxiv.org/pdf/1607.00325v2
2017-01-03T19:57:37Z
2016-07-01T17:34:16Z
Permutation Invariant Training of Deep Models for Speaker-Independent Multi-talker Speech Separation
We propose a novel deep learning model, which supports permutation invariant training (PIT), for speaker-independent multi-talker speech separation, commonly known as the cocktail-party problem. Different from most of the prior art that treats speech separation as a multi-class regression problem, and from the deep clustering technique that considers it a segmentation (or clustering) problem, our model optimizes for the separation regression error while ignoring the order of the mixing sources. This strategy cleverly solves the long-standing label permutation problem that has prevented progress on deep learning based techniques for speech separation. Experiments on the equal-energy mixing setup of a Danish corpus confirm the effectiveness of PIT. We believe improvements built upon PIT can eventually solve the cocktail-party problem and enable real-world adoption of, e.g., automatic meeting transcription and multi-party human-computer interaction, where overlapping speech is common.
[ "Dong Yu, Morten Kolb{\\ae}k, Zheng-Hua Tan, and Jesper Jensen", "['Dong Yu' 'Morten Kolbæk' 'Zheng-Hua Tan' 'Jesper Jensen']" ]
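The permutation-free objective at the heart of PIT, taking the minimum separation error over all assignments of output streams to reference sources, can be sketched directly (brute-force over permutations, which is feasible for small numbers of speakers):

```python
from itertools import permutations

def pit_loss(estimates, targets):
    """Permutation invariant training loss sketch: the separation error is
    the minimum mean squared error over all assignments of model output
    streams to reference source streams."""
    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    best = float("inf")
    for perm in permutations(range(len(targets))):
        err = sum(mse(estimates[i], targets[p])
                  for i, p in enumerate(perm)) / len(targets)
        best = min(best, err)
    return best

# Two estimated sources that match the references in swapped order.
est = [[1.0, 1.0], [0.0, 0.0]]
ref = [[0.0, 0.0], [1.0, 1.0]]
loss = pit_loss(est, ref)
```

Because the loss is zero whenever the outputs match the references in any order, the network is never penalized for the arbitrary labeling of sources, which is exactly how the label permutation problem is sidestepped.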
math.OC cs.LG cs.NA stat.ML
null
1607.00345
null
null
http://arxiv.org/pdf/1607.00345v1
2016-07-01T18:37:33Z
2016-07-01T18:37:33Z
Convergence Rate of Frank-Wolfe for Non-Convex Objectives
We give a simple proof that the Frank-Wolfe algorithm obtains a stationary point at a rate of $O(1/\sqrt{t})$ on non-convex objectives with a Lipschitz continuous gradient. Our analysis is affine invariant and is the first, to the best of our knowledge, giving a similar rate to what was already proven for projected gradient methods (though on slightly different measures of stationarity).
[ "['Simon Lacoste-Julien']", "Simon Lacoste-Julien" ]
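The analyzed algorithm and its stationarity measure, the Frank-Wolfe gap, can be sketched on the probability simplex, where the linear minimization oracle is just a coordinate argmin; the quadratic objective below is an illustrative choice, not tied to the paper:

```python
def frank_wolfe_simplex(grad, x0, steps=200):
    """Frank-Wolfe sketch on the probability simplex: the linear
    minimization oracle picks the vertex with the smallest gradient
    coordinate; the FW gap certifies (approximate) stationarity."""
    x = list(x0)
    gap = float("inf")
    for t in range(steps):
        g = grad(x)
        s = min(range(len(x)), key=lambda i: g[i])   # LMO over simplex vertices
        gap = sum(g[i] * (x[i] - (1.0 if i == s else 0.0))
                  for i in range(len(x)))            # FW gap <g, x - s>
        gamma = 2.0 / (t + 2.0)
        x = [(1 - gamma) * xi + (gamma if i == s else 0.0)
             for i, xi in enumerate(x)]
    return x, gap

# Minimize f(x) = sum((x_i - c_i)^2) over the simplex, with c interior.
c = [0.5, 0.3, 0.2]
grad = lambda x: [2.0 * (x[i] - c[i]) for i in range(3)]
x, gap = frank_wolfe_simplex(grad, [1.0, 0.0, 0.0])
```

The iterate stays feasible without any projection (it is always a convex combination of vertices), and the FW gap is the nonnegative quantity whose decay the paper bounds at the $O(1/\sqrt{t})$ rate for non-convex objectives.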
cs.LG stat.ML
null
1607.00360
null
null
http://arxiv.org/pdf/1607.00360v1
2016-07-01T19:27:28Z
2016-07-01T19:27:28Z
A scaled Bregman theorem with applications
Bregman divergences play a central role in the design and analysis of a range of machine learning algorithms. This paper explores the use of Bregman divergences to establish reductions between such algorithms and their analyses. We present a new scaled isodistortion theorem involving Bregman divergences (scaled Bregman theorem for short) which shows that certain "Bregman distortions" (employing a potentially non-convex generator) may be exactly re-written as a scaled Bregman divergence computed over transformed data. Admissible distortions include geodesic distances on curved manifolds and projections or gauge-normalisation, while admissible data include scalars, vectors and matrices. Our theorem allows one to leverage the wealth and convenience of Bregman divergences when analysing algorithms relying on the aforementioned Bregman distortions. We illustrate this with three novel applications of our theorem: a reduction from multi-class density ratio to class-probability estimation, a new adaptive projection free yet norm-enforcing dual norm mirror descent algorithm, and a reduction from clustering on flat manifolds to clustering on curved manifolds. Experiments on each of these domains validate the analyses and suggest that the scaled Bregman theorem might be a worthy addition to the popular handful of Bregman divergence properties that have been pervasive in machine learning.
[ "Richard Nock and Aditya Krishna Menon and Cheng Soon Ong", "['Richard Nock' 'Aditya Krishna Menon' 'Cheng Soon Ong']" ]
cs.CL cs.AI cs.LG
null
1607.00410
null
null
http://arxiv.org/pdf/1607.00410v1
2016-07-01T21:24:21Z
2016-07-01T21:24:21Z
Domain Adaptation for Neural Networks by Parameter Augmentation
We propose a simple domain adaptation method for neural networks in a supervised setting. Supervised domain adaptation is a way of improving the generalization performance on the target domain by using the source domain dataset, assuming that both of the datasets are labeled. Recently, recurrent neural networks have been shown to be successful on a variety of NLP tasks such as caption generation; however, the existing domain adaptation techniques are limited to (1) tuning the model parameters on the target dataset after training on the source dataset, or (2) designing the network to have dual outputs, one for the source domain and the other for the target domain. Reformulating the idea of the domain adaptation technique proposed by Daume (2007), we propose a simple domain adaptation method which can be applied to neural networks trained with a cross-entropy loss. On captioning datasets, we show performance improvements over other domain adaptation methods.
[ "Yusuke Watanabe, Kazuma Hashimoto, Yoshimasa Tsuruoka", "['Yusuke Watanabe' 'Kazuma Hashimoto' 'Yoshimasa Tsuruoka']" ]
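The Daume (2007) idea that the paper reformulates is the "frustratingly easy" feature augmentation: replicate each feature vector into shared, source-only, and target-only blocks, letting the learner split its weights between domain-general and domain-specific parts. A minimal sketch of that original linear-feature version:

```python
def augment(x, domain):
    """Daume (2007) feature augmentation sketch: map an input to
    [shared copy, source-only copy, target-only copy], zeroing the
    block that does not match the example's domain."""
    zeros = [0.0] * len(x)
    if domain == "source":
        return list(x) + list(x) + zeros
    return list(x) + zeros + list(x)

src = augment([1.0, 2.0], "source")
tgt = augment([1.0, 2.0], "target")
```

Weights on the shared block capture what the domains have in common, while the domain-specific blocks absorb the differences; the paper adapts this parameter-augmentation idea to neural networks trained with a cross-entropy loss.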
cs.AI cs.CL cs.LG
null
1607.00424
null
null
http://arxiv.org/pdf/1607.00424v1
2016-07-01T22:11:38Z
2016-07-01T22:11:38Z
Learning Relational Dependency Networks for Relation Extraction
We consider the task of KBP slot filling -- extracting relation information from newswire documents for knowledge base construction. We present our pipeline, which employs Relational Dependency Networks (RDNs) to learn linguistic patterns for relation extraction. Additionally, we demonstrate how several components such as weak supervision, word2vec features, joint learning and the use of human advice, can be incorporated in this relational framework. We evaluate the different components in the benchmark KBP 2015 task and show that RDNs effectively model a diverse set of features and perform competitively with current state-of-the-art relation extraction.
[ "['Dileep Viswanathan' 'Ameet Soni' 'Jude Shavlik' 'Sriraam Natarajan']", "Dileep Viswanathan and Ameet Soni and Jude Shavlik and Sriraam\n Natarajan" ]
q-bio.NC cs.LG stat.ML
null
1607.00435
null
null
http://arxiv.org/pdf/1607.00435v1
2016-07-01T23:48:35Z
2016-07-01T23:48:35Z
Decoding the Encoding of Functional Brain Networks: an fMRI Classification Comparison of Non-negative Matrix Factorization (NMF), Independent Component Analysis (ICA), and Sparse Coding Algorithms
Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet mathematical constraints such as sparse coding and positivity both provide alternate biologically-plausible frameworks for generating brain networks. Non-negative Matrix Factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms ($L1$ Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting brain networks for different constraints are used as basis functions to encode the observed functional activity at a given time point. These encodings are decoded using machine learning to compare both the algorithms and their assumptions, using the time series weights to predict whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. For classifying cognitive activity, the sparse coding algorithm of $L1$ Regularized Learning consistently outperformed 4 variations of ICA across different numbers of networks and noise levels (p$<$0.001). The NMF algorithms, which suppressed negative BOLD signal, had the poorest accuracy. Within each algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy (p$<$0.001). The success of sparse coding algorithms suggests that algorithms which enforce sparse coding, discourage multitasking, and promote local specialization may better capture the underlying source processes than those which allow inexhaustible local processes, such as ICA.
[ "Jianwen Xie, Pamela K. Douglas, Ying Nian Wu, Arthur L. Brody, Ariana\n E. Anderson", "['Jianwen Xie' 'Pamela K. Douglas' 'Ying Nian Wu' 'Arthur L. Brody'\n 'Ariana E. Anderson']" ]
cs.AI cs.LG stat.ML
null
1607.00446
null
null
http://arxiv.org/pdf/1607.00446v2
2016-10-24T19:25:51Z
2016-07-02T01:33:00Z
A Greedy Approach to Adapting the Trace Parameter for Temporal Difference Learning
One of the main obstacles to broad application of reinforcement learning methods is the parameter sensitivity of our core learning algorithms. In many large-scale applications, online computation and function approximation represent key strategies in scaling up reinforcement learning algorithms. In this setting, we have effective and reasonably well understood algorithms for adapting the learning-rate parameter, online during learning. Such meta-learning approaches can improve robustness of learning and enable specialization to the current task, improving learning speed. For the temporal-difference learning algorithms that we study here, there is yet another parameter, $\lambda$, that similarly impacts learning speed and stability in practice. Unfortunately, unlike the learning-rate parameter, $\lambda$ parametrizes the objective function that temporal-difference methods optimize. Different choices of $\lambda$ produce different fixed-point solutions, and thus adapting $\lambda$ online and characterizing the optimization is substantially more complex than adapting the learning-rate parameter. There is no meta-learning method for $\lambda$ that can achieve (1) incremental updating, (2) compatibility with function approximation, and (3) stability of learning under both on- and off-policy sampling. In this paper we contribute a novel objective function for optimizing $\lambda$ as a function of state rather than time. We derive a new incremental, linear-complexity $\lambda$-adaptation algorithm that does not require offline batch updating or access to a model of the world, and present a suite of experiments illustrating the practicality of our new algorithm in three different settings. Taken together, our contributions represent a concrete step towards the black-box application of temporal-difference learning methods in real-world problems.
[ "['Martha White' 'Adam White']", "Martha White and Adam White" ]
cs.LG q-bio.NC stat.ML
10.1109/ICIP.2016.7532332
1607.00455
null
null
http://arxiv.org/abs/1607.00455v1
2016-07-02T02:55:16Z
2016-07-02T02:55:16Z
Alzheimer's Disease Diagnostics by Adaptation of 3D Convolutional Network
Early diagnosis, which plays an important role in preventing progression and treating Alzheimer's disease (AD), is based on the classification of features extracted from brain images. The features have to accurately capture the main AD-related variations of anatomical brain structures, such as, e.g., ventricle size, hippocampus shape, cortical thickness, and brain volume. This paper proposes to predict AD with a deep 3D convolutional neural network (3D-CNN), which can learn generic features capturing AD biomarkers and adapt to different domain datasets. The 3D-CNN is built upon a 3D convolutional autoencoder, which is pre-trained to capture anatomical shape variations in structural brain MRI scans. The fully connected upper layers of the 3D-CNN are then fine-tuned for each task-specific AD classification. Experiments on the CADDementia MRI dataset without skull-stripping preprocessing have shown that our 3D-CNN outperforms several conventional classifiers in accuracy. The ability of the 3D-CNN to generalize the learnt features and adapt to other domains has been validated on the ADNI dataset.
[ "['Ehsan Hosseini-Asl' 'Robert Keynto' 'Ayman El-Baz']", "Ehsan Hosseini-Asl, Robert Keynto, Ayman El-Baz" ]
cs.LG
null
1607.00466
null
null
http://arxiv.org/pdf/1607.00466v1
2016-07-02T04:48:59Z
2016-07-02T04:48:59Z
Outlier absorbing based on a Bayesian approach
The presence of outliers is prevalent in machine learning applications and may produce misleading results. In this paper, a new method for dealing with outliers and anomalous samples is proposed. To overcome the outlier issue, the proposed method combines the global and local views of the samples. By combining these views, our algorithm performs in a robust manner. The experimental results show the capabilities of the proposed method.
[ "Parsa Bagherzadeh and Hadi Sadoghi Yazdi", "['Parsa Bagherzadeh' 'Hadi Sadoghi Yazdi']" ]
cs.SI cs.AI cs.LG
null
1607.00474
null
null
http://arxiv.org/pdf/1607.00474v1
2016-07-02T07:41:45Z
2016-07-02T07:41:45Z
Adaptive Neighborhood Graph Construction for Inference in Multi-Relational Networks
A neighborhood graph, which represents the instances as vertices and their relations as weighted edges, is the basis of many semi-supervised and relational models for node labeling and link prediction. Most methods employ a sequential process to construct the neighborhood graph. This process often consists of generating a candidate graph, pruning the candidate graph to make a neighborhood graph, and then performing inference on the variables (i.e., nodes) in the neighborhood graph. In this paper, we propose a framework that can dynamically adapt the neighborhood graph based on the states of variables from intermediate inference results, as well as structural properties of the relations connecting them. A key strength of our framework is its ability to handle multi-relational data and employ varying amounts of relations for each instance based on the intermediate inference results. We formulate the link prediction task as inference on neighborhood graphs, and include preliminary results illustrating the effects of different strategies in our proposed framework.
[ "Shobeir Fakhraei, Dhanya Sridhar, Jay Pujara, Lise Getoor", "['Shobeir Fakhraei' 'Dhanya Sridhar' 'Jay Pujara' 'Lise Getoor']" ]
stat.ML cs.LG
10.1016/j.neucom.2017.02.029
1607.00485
null
null
http://arxiv.org/abs/1607.00485v1
2016-07-02T09:55:26Z
2016-07-02T09:55:26Z
Group Sparse Regularization for Deep Neural Networks
In this paper, we consider the joint task of simultaneously optimizing (i) the weights of a deep neural network, (ii) the number of neurons for each hidden layer, and (iii) the subset of active input features (i.e., feature selection). While these problems are generally dealt with separately, we present a simple regularized formulation that allows solving all three of them in parallel, using standard optimization routines. Specifically, we extend the group Lasso penalty (which originated in the linear regression literature) in order to impose group-level sparsity on the network's connections, where each group is defined as the set of outgoing weights from a unit. Depending on the specific case, the weights can be related to an input variable, to a hidden neuron, or to a bias unit, thus performing all the aforementioned tasks simultaneously in order to obtain a compact network. We perform an extensive experimental evaluation, comparing with classical weight decay and Lasso penalties. We show that a sparse version of the group Lasso penalty is able to achieve competitive performance, while at the same time resulting in extremely compact networks with a smaller number of input features. We evaluate both on a toy dataset for handwritten digit recognition and on multiple realistic large-scale classification problems.
[ "['Simone Scardapane' 'Danilo Comminiello' 'Amir Hussain' 'Aurelio Uncini']", "Simone Scardapane, Danilo Comminiello, Amir Hussain, Aurelio Uncini" ]
cs.CY cs.LG cs.SI
null
1607.00509
null
null
http://arxiv.org/pdf/1607.00509v1
2016-07-02T13:35:02Z
2016-07-02T13:35:02Z
Big IoT and social networking data for smart cities: Algorithmic improvements on Big Data Analysis in the context of RADICAL city applications
In this paper we present a SOA (Service Oriented Architecture)-based platform, enabling the retrieval and analysis of big datasets stemming from social networking (SN) sites and Internet of Things (IoT) devices, collected by smart city applications and socially-aware data aggregation services. A large set of city applications in the areas of Participating Urbanism, Augmented Reality and Sound-Mapping has been deployed throughout participating cities, resulting in millions of user-generated events and online SN reports fed into the RADICAL platform. Moreover, we study the application of data analytics, such as sentiment analysis, to the combined IoT and SN data saved into an SQL database, further investigating algorithmic choices and configurations to minimize delays in dataset processing and results retrieval.
[ "['Evangelos Psomakelis' 'Fotis Aisopos' 'Antonios Litke'\n 'Konstantinos Tserpes' 'Magdalini Kardara' 'Pablo Martínez Campo']", "Evangelos Psomakelis, Fotis Aisopos, Antonios Litke, Konstantinos\n Tserpes, Magdalini Kardara, Pablo Mart\\'inez Campo" ]
cs.NA cs.LG math.NA stat.ML
null
1607.00514
null
null
http://arxiv.org/pdf/1607.00514v1
2016-07-02T14:25:58Z
2016-07-02T14:25:58Z
Approximate Joint Matrix Triangularization
We consider the problem of approximate joint triangularization of a set of noisy jointly diagonalizable real matrices. Approximate joint triangularizers are commonly used in the estimation of the joint eigenstructure of a set of matrices, with applications in signal processing, linear algebra, and tensor decomposition. By assuming the input matrices to be perturbations of noise-free, simultaneously diagonalizable ground-truth matrices, the approximate joint triangularizers are expected to be perturbations of the exact joint triangularizers of the ground-truth matrices. We provide a priori and a posteriori perturbation bounds on the `distance' between an approximate joint triangularizer and its exact counterpart. The a priori bounds are theoretical inequalities that involve functions of the ground-truth matrices and noise matrices, whereas the a posteriori bounds are given in terms of observable quantities that can be computed from the input matrices. From a practical perspective, the problem of finding the best approximate joint triangularizer of a set of noisy matrices amounts to solving a nonconvex optimization problem. We show that, under a condition on the noise level of the input matrices, it is possible to find a good initial triangularizer such that the solution obtained by any local descent-type algorithm has certain global guarantees. Finally, we discuss the application of approximate joint matrix triangularization to canonical tensor decomposition and we derive novel estimation error bounds.
[ "Nicolo Colombo and Nikos Vlassis", "['Nicolo Colombo' 'Nikos Vlassis']" ]
cs.LG q-bio.NC stat.ML
null
1607.00556
null
null
http://arxiv.org/pdf/1607.00556v1
2016-07-02T19:55:56Z
2016-07-02T19:55:56Z
Alzheimer's Disease Diagnostics by a Deeply Supervised Adaptable 3D Convolutional Network
Early diagnosis, playing an important role in preventing progression and treating Alzheimer's disease (AD), is based on the classification of features extracted from brain images. The features have to accurately capture the main AD-related variations of anatomical brain structures, such as ventricle size, hippocampus shape, cortical thickness, and brain volume. This paper proposes to predict AD with a deep 3D convolutional neural network (3D-CNN), which can learn generic features capturing AD biomarkers and adapt to different domain datasets. The 3D-CNN is built upon a 3D convolutional autoencoder, which is pre-trained to capture anatomical shape variations in structural brain MRI scans. Fully connected upper layers of the 3D-CNN are then fine-tuned for each task-specific AD classification. Experiments on the \emph{ADNI} MRI dataset with no skull-stripping preprocessing have shown that our 3D-CNN outperforms several conventional classifiers in accuracy and robustness. The ability of the 3D-CNN to generalize the learnt features and adapt to other domains has been validated on the \emph{CADDementia} dataset.
[ "['Ehsan Hosseini-Asl' \"Georgy Gimel'farb\" 'Ayman El-Baz']", "Ehsan Hosseini-Asl, Georgy Gimel'farb, Ayman El-Baz" ]
stat.ML cs.LG
10.1613/jair.5638
1607.00567
null
null
http://arxiv.org/abs/1607.00567v3
2018-01-25T08:37:30Z
2016-07-02T22:20:59Z
Rademacher Complexity Bounds for a Penalized Multiclass Semi-Supervised Algorithm
We propose Rademacher complexity bounds for multiclass classifiers trained with a two-step semi-supervised model. In the first step, the algorithm partitions the partially labeled data and then identifies dense clusters containing $\kappa$ predominant classes using the labeled training examples, such that the proportion of their non-predominant classes is below a fixed threshold. In the second step, a classifier is trained by minimizing a margin empirical loss over the labeled training set and a penalization term measuring the inability of the learner to predict the $\kappa$ predominant classes of the identified clusters. The resulting data-dependent generalization error bound involves the margin distribution of the classifier, the stability of the clustering technique used in the first step, and Rademacher complexity terms corresponding to partially labeled training data. Our theoretical results exhibit convergence rates extending those proposed in the literature for the binary case, and experimental results on different multiclass classification problems show empirical evidence that supports the theory.
[ "['Yury Maximov' 'Massih-Reza Amini' 'Zaid Harchaoui']", "Yury Maximov, Massih-Reza Amini, Zaid Harchaoui" ]
cs.SI cs.LG stat.ML
null
1607.00653
null
null
http://arxiv.org/pdf/1607.00653v1
2016-07-03T16:09:30Z
2016-07-03T16:09:30Z
node2vec: Scalable Feature Learning for Networks
Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.
[ "['Aditya Grover' 'Jure Leskovec']", "Aditya Grover, Jure Leskovec" ]
cs.CV cs.LG stat.ML
null
1607.00662
null
null
http://arxiv.org/pdf/1607.00662v2
2018-06-19T17:26:53Z
2016-07-03T17:53:11Z
Unsupervised Learning of 3D Structure from Images
A key goal of computer vision is to recover the underlying 3D structure from 2D observations of the world. In this paper we learn strong deep generative models of 3D structures, and recover these structures from 3D and 2D images via probabilistic inference. We demonstrate high-quality samples and report log-likelihoods on several datasets, including ShapeNet [2], and establish the first benchmarks in the literature. We also show how these models and their inference networks can be trained end-to-end from 2D images. This demonstrates for the first time the feasibility of learning to infer 3D representations of the world in a purely unsupervised manner.
[ "['Danilo Jimenez Rezende' 'S. M. Ali Eslami' 'Shakir Mohamed'\n 'Peter Battaglia' 'Max Jaderberg' 'Nicolas Heess']", "Danilo Jimenez Rezende and S. M. Ali Eslami and Shakir Mohamed and\n Peter Battaglia and Max Jaderberg and Nicolas Heess" ]
stat.ML cs.LG
null
1607.00669
null
null
http://arxiv.org/pdf/1607.00669v3
2016-08-26T21:56:03Z
2016-07-03T18:54:25Z
Understanding the Energy and Precision Requirements for Online Learning
It is well-known that the precision of data, hyperparameters, and internal representations employed in learning systems directly impacts their energy, throughput, and latency. The precision requirements for the training algorithm are also important for systems that learn on-the-fly. Prior work has shown that the data and hyperparameters can be quantized heavily without incurring much penalty in classification accuracy when compared to floating point implementations. These works suffer from two key limitations. First, they assume uniform precision for the classifier and for the training algorithm and thus miss out on the opportunity to further reduce precision. Second, prior works are empirical studies. In this article, we overcome both these limitations by deriving analytical lower bounds on the precision requirements of the commonly employed stochastic gradient descent (SGD) on-line learning algorithm in the specific context of a support vector machine (SVM). Lower bounds on the data precision are derived in terms of the desired classification accuracy and the precision of the hyperparameters used in the classifier. Additionally, lower bounds on the hyperparameter precision in the SGD training algorithm are obtained. These bounds are validated using both synthetic data and the UCI breast cancer dataset. Additionally, the impact of these precisions on the energy consumption of a fixed-point SVM with on-line training is studied.
[ "['Charbel Sakr' 'Ameya Patil' 'Sai Zhang' 'Yongjune Kim' 'Naresh Shanbhag']", "Charbel Sakr, Ameya Patil, Sai Zhang, Yongjune Kim, Naresh Shanbhag" ]
cs.LG
null
1607.00847
null
null
http://arxiv.org/pdf/1607.00847v9
2019-03-10T08:17:05Z
2016-07-04T12:21:04Z
Confidence-Weighted Bipartite Ranking
Bipartite ranking is a fundamental machine learning and data mining problem. It commonly concerns the maximization of the AUC metric. Recently, a number of studies have proposed online bipartite ranking algorithms to learn from massive streams of class-imbalanced data. These methods suggest both linear and kernel-based bipartite ranking algorithms based on first- and second-order online learning. Unlike kernelized rankers, linear rankers are more scalable learning algorithms. The existing linear online bipartite ranking algorithms either cannot handle non-separable data or fail to construct an adaptive large margin. These limitations yield unreliable bipartite ranking performance. In this work, we propose a linear online confidence-weighted bipartite ranking algorithm (CBR) that adopts soft confidence-weighted learning. The proposed algorithm leverages the same properties of soft confidence-weighted learning in a framework for bipartite ranking. We also develop a diagonal variation of the proposed confidence-weighted bipartite ranking algorithm to deal with high-dimensional data by maintaining only the diagonal elements of the covariance matrix. We empirically evaluate the effectiveness of the proposed algorithms on several benchmark and high-dimensional datasets. The experimental results validate the reliability of the proposed algorithms. The results also show that our algorithms outperform or are at least comparable to the competing online AUC maximization methods.
[ "Majdi Khalid, Indrakshi Ray, and Hamidreza Chitsaz", "['Majdi Khalid' 'Indrakshi Ray' 'Hamidreza Chitsaz']" ]
cs.LG cs.AI
null
1607.00872
null
null
http://arxiv.org/pdf/1607.00872v2
2017-07-25T04:41:44Z
2016-07-04T13:08:19Z
Neighborhood Features Help Detecting Non-Technical Losses in Big Data Sets
Electricity theft is a major problem around the world in both developed and developing countries and may range up to 40% of the total electricity distributed. More generally, electricity theft belongs to non-technical losses (NTL), which are losses that occur during the distribution of electricity in power grids. In this paper, we build features from the neighborhood of customers. We first split the area in which the customers are located into grids of different sizes. For each grid cell we then compute the proportion of inspected customers and the proportion of NTL found among the inspected customers. We then analyze the distributions of the generated features and show why they are useful to predict NTL. In addition, we compute features from the consumption time series of customers. We also use master data features of customers, such as their customer class and the voltage of their connection. We compute these features for a Big Data base of 31M meter readings, 700K customers and 400K inspection results. We then use these features to train four machine learning algorithms that are particularly suitable for Big Data sets because of their parallelizable structure: logistic regression, k-nearest neighbors, linear support vector machine and random forest. Using the neighborhood features instead of only analyzing the time series has yielded appreciable results on Big Data sets for varying NTL proportions of 1%-90%. This work can therefore be deployed to a wide range of different regions around the world.
[ "Patrick Glauner, Jorge Meira, Lautaro Dolberg, Radu State, Franck\n Bettinger, Yves Rangoni, Diogo Duarte", "['Patrick Glauner' 'Jorge Meira' 'Lautaro Dolberg' 'Radu State'\n 'Franck Bettinger' 'Yves Rangoni' 'Diogo Duarte']" ]
quant-ph cs.CC cs.LG
null
1607.00932
null
null
http://arxiv.org/pdf/1607.00932v3
2017-06-06T21:14:58Z
2016-07-04T15:31:32Z
Optimal Quantum Sample Complexity of Learning Algorithms
$ \newcommand{\eps}{\varepsilon} $In learning theory, the VC dimension of a concept class $C$ is the most common way to measure its "richness." In the PAC model $$ \Theta\Big(\frac{d}{\eps} + \frac{\log(1/\delta)}{\eps}\Big) $$ examples are necessary and sufficient for a learner to output, with probability $1-\delta$, a hypothesis $h$ that is $\eps$-close to the target concept $c$. In the related agnostic model, where the samples need not come from a $c\in C$, we know that $$ \Theta\Big(\frac{d}{\eps^2} + \frac{\log(1/\delta)}{\eps^2}\Big) $$ examples are necessary and sufficient to output an hypothesis $h\in C$ whose error is at most $\eps$ worse than the best concept in $C$. Here we analyze quantum sample complexity, where each example is a coherent quantum state. This model was introduced by Bshouty and Jackson, who showed that quantum examples are more powerful than classical examples in some fixed-distribution settings. However, Atici and Servedio, improved by Zhang, showed that in the PAC setting, quantum examples cannot be much more powerful: the required number of quantum examples is $$ \Omega\Big(\frac{d^{1-\eta}}{\eps} + d + \frac{\log(1/\delta)}{\eps}\Big)\mbox{ for all }\eta> 0. $$ Our main result is that quantum and classical sample complexity are in fact equal up to constant factors in both the PAC and agnostic models. We give two approaches. The first is a fairly simple information-theoretic argument that yields the above two classical bounds and yields the same bounds for quantum sample complexity up to a $\log(d/\eps)$ factor. We then give a second approach that avoids the log-factor loss, based on analyzing the behavior of the "Pretty Good Measurement" on the quantum state identification problems that correspond to learning. This shows classical and quantum sample complexity are equal up to constant factors.
[ "Srinivasan Arunachalam (CWI) and Ronald de Wolf (CWI and U of\n Amsterdam)", "['Srinivasan Arunachalam' 'Ronald de Wolf']" ]
cs.CL cs.LG
null
1607.00970
null
null
http://arxiv.org/pdf/1607.00970v2
2016-10-13T07:40:37Z
2016-07-04T17:42:52Z
Sequence to Backward and Forward Sequences: A Content-Introducing Approach to Generative Short-Text Conversation
Using neural networks to generate replies in human-computer dialogue systems is attracting increasing attention over the past few years. However, the performance is not satisfactory: the neural network tends to generate safe, universally relevant replies which carry little meaning. In this paper, we propose a content-introducing approach to neural network-based generative dialogue systems. We first use pointwise mutual information (PMI) to predict a noun as a keyword, reflecting the main gist of the reply. We then propose seq2BF, a "sequence to backward and forward sequences" model, which generates a reply containing the given keyword. Experimental results show that our approach significantly outperforms traditional sequence-to-sequence models in terms of human evaluation and the entropy measure, and that the predicted keyword can appear at an appropriate position in the reply.
[ "Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, Zhi Jin", "['Lili Mou' 'Yiping Song' 'Rui Yan' 'Ge Li' 'Lu Zhang' 'Zhi Jin']" ]
math.OC cs.LG cs.NA stat.ML
null
1607.01027
null
null
http://arxiv.org/pdf/1607.01027v5
2020-05-06T03:21:49Z
2016-07-04T20:01:17Z
Accelerate Stochastic Subgradient Method by Leveraging Local Growth Condition
In this paper, a new theory is developed for first-order stochastic convex optimization, showing that the global convergence rate is sufficiently quantified by a local growth rate of the objective function in a neighborhood of the optimal solutions. In particular, if the objective function $F(\mathbf w)$ in the $\epsilon$-sublevel set grows as fast as $\|\mathbf w - \mathbf w_*\|_2^{1/\theta}$, where $\mathbf w_*$ represents the closest optimal solution to $\mathbf w$ and $\theta\in(0,1]$ quantifies the local growth rate, the iteration complexity of first-order stochastic optimization for achieving an $\epsilon$-optimal solution can be $\widetilde O(1/\epsilon^{2(1-\theta)})$, which is optimal at most up to a logarithmic factor. To achieve the faster global convergence, we develop two different accelerated stochastic subgradient methods by iteratively solving the original problem approximately in a local region around a historical solution with the size of the local region gradually decreasing as the solution approaches the optimal set. Besides the theoretical improvements, this work also includes new contributions towards making the proposed algorithms practical: (i) we present practical variants of accelerated stochastic subgradient methods that can run without the knowledge of multiplicative growth constant and even the growth rate $\theta$; (ii) we consider a broad family of problems in machine learning to demonstrate that the proposed algorithms enjoy faster convergence than traditional stochastic subgradient method. We also characterize the complexity of the proposed algorithms for ensuring the gradient is small without the smoothness assumption.
[ "['Yi Xu' 'Qihang Lin' 'Tianbao Yang']", "Yi Xu, Qihang Lin, Tianbao Yang" ]
stat.ML cs.AI cs.LG
null
1607.01036
null
null
http://arxiv.org/pdf/1607.01036v4
2017-02-27T21:01:20Z
2016-07-04T20:12:41Z
Bootstrap Model Aggregation for Distributed Statistical Learning
In distributed, or privacy-preserving learning, we are often given a set of probabilistic models estimated from different local repositories, and asked to combine them into a single model that gives efficient statistical estimation. A simple method is to linearly average the parameters of the local models, which, however, tends to be degenerate or not applicable on non-convex models, or models with different parameter dimensions. One more practical strategy is to generate bootstrap samples from the local models, and then learn a joint model based on the combined bootstrap set. Unfortunately, the bootstrap procedure introduces additional noise and can significantly deteriorate the performance. In this work, we propose two variance reduction methods to correct the bootstrap noise, including a weighted M-estimator that is both statistically efficient and practically powerful. Both theoretical and empirical analysis is provided to demonstrate our methods.
[ "Jun Han, Qiang Liu", "['Jun Han' 'Qiang Liu']" ]
cs.AI cs.IR cs.LG
null
1607.01050
null
null
http://arxiv.org/pdf/1607.01050v1
2016-07-04T21:21:59Z
2016-07-04T21:21:59Z
Application of Statistical Relational Learning to Hybrid Recommendation Systems
Recommendation systems usually involve exploiting the relations among known features and content that describe items (content-based filtering) or the overlap of similar users who interacted with or rated the target item (collaborative filtering). To combine these two filtering approaches, current model-based hybrid recommendation systems typically require extensive feature engineering to construct a user profile. Statistical Relational Learning (SRL) provides a straightforward way to combine the two approaches. However, due to the large scale of the data used in real world recommendation systems, little research exists on applying SRL models to hybrid recommendation systems, and essentially none of that research has been applied on real big-data-scale systems. In this paper, we propose a way to adapt state-of-the-art SRL learning approaches to construct a real hybrid recommendation system. Furthermore, in order to satisfy a common requirement in recommendation systems (i.e. that false positives are more undesirable and therefore penalized more harshly than false negatives), our approach can also allow tuning the trade-off between the precision and recall of the system in a principled way. Our experimental results demonstrate the efficiency of our proposed approach as well as its improved performance on recommendation precision.
[ "['Shuo Yang' 'Mohammed Korayem' 'Khalifeh AlJadda' 'Trey Grainger'\n 'Sriraam Natarajan']", "Shuo Yang, Mohammed Korayem, Khalifeh AlJadda, Trey Grainger, Sriraam\n Natarajan" ]
cs.LG
null
1607.01097
null
null
http://arxiv.org/pdf/1607.01097v3
2017-02-28T02:58:11Z
2016-07-05T02:51:33Z
AdaNet: Adaptive Structural Learning of Artificial Neural Networks
We present new algorithms for adaptively learning artificial neural networks. Our algorithms (AdaNet) adaptively learn both the structure of the network and its weights. They are based on a solid theoretical analysis, including data-dependent generalization guarantees that we prove and discuss in detail. We report the results of large-scale experiments with one of our algorithms on several binary classification tasks extracted from the CIFAR-10 dataset. The results demonstrate that our algorithm can automatically learn network structures with very competitive performance accuracies when compared with those achieved for neural networks found by standard approaches.
[ "['Corinna Cortes' 'Xavi Gonzalvo' 'Vitaly Kuznetsov' 'Mehryar Mohri'\n 'Scott Yang']", "Corinna Cortes, Xavi Gonzalvo, Vitaly Kuznetsov, Mehryar Mohri and\n Scott Yang" ]
cs.LG cs.RO
null
1607.01136
null
null
http://arxiv.org/pdf/1607.01136v1
2016-07-05T07:47:37Z
2016-07-05T07:47:37Z
Minimalist Regression Network with Reinforced Gradients and Weighted Estimates: a Case Study on Parameters Estimation in Automated Welding
This paper presents a minimalist neural regression network as an aggregate of independent identical regression blocks that are trained simultaneously. Moreover, it introduces a new multiplicative parameter, shared by all the neural units of a given layer, to maintain the quality of its gradients. Furthermore, it increases its estimation accuracy via learning a weight factor whose quantity captures the redundancy between the estimated and actual values at each training iteration. We choose the estimation of the direct weld parameters of different welding techniques to show a significant improvement in the calculation of these parameters by our model in contrast to state-of-the-art techniques in the literature. Furthermore, we demonstrate the ability of our model to retain its performance when presented with combined data of different welding techniques. This is a nontrivial result in attaining a scalable model whose quality of estimation is independent of the adopted welding techniques.
[ "Soheil Keshmiri", "['Soheil Keshmiri']" ]
stat.ML cs.LG
null
1607.01152
null
null
http://arxiv.org/pdf/1607.01152v1
2016-07-05T08:58:44Z
2016-07-05T08:58:44Z
How to Evaluate the Quality of Unsupervised Anomaly Detection Algorithms?
When sufficient labeled data are available, classical criteria based on Receiver Operating Characteristic (ROC) or Precision-Recall (PR) curves can be used to compare the performance of unsupervised anomaly detection algorithms. However, in many situations, few or no data are labeled. This calls for alternative criteria that one can compute on non-labeled data. In this paper, two criteria that do not require labels are empirically shown to discriminate accurately (w.r.t. ROC or PR based criteria) between algorithms. These criteria are based on existing Excess-Mass (EM) and Mass-Volume (MV) curves, which generally cannot be well estimated in large dimension. A methodology based on feature sub-sampling and aggregating is also described and tested, extending the use of these criteria to high-dimensional datasets and solving major drawbacks inherent to standard EM and MV curves.
[ "['Nicolas Goix']", "Nicolas Goix (LTCI)" ]
math.OC cs.LG cs.NA stat.ML
null
1607.01231
null
null
http://arxiv.org/pdf/1607.01231v4
2017-05-21T06:23:50Z
2016-07-05T12:51:33Z
Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
In this paper we study stochastic quasi-Newton methods for nonconvex stochastic optimization, where we assume that noisy information about the gradients of the objective function is available via a stochastic first-order oracle (SFO). We propose a general framework for such methods, for which we prove almost sure convergence to stationary points and analyze its worst-case iteration complexity. When a randomly chosen iterate is returned as the output of such an algorithm, we prove that in the worst case, the SFO-calls complexity is $O(\epsilon^{-2})$ to ensure that the expectation of the squared norm of the gradient is smaller than the given accuracy tolerance $\epsilon$. We also propose a specific algorithm, namely a stochastic damped L-BFGS (SdLBFGS) method, that falls under the proposed framework. Moreover, we incorporate the SVRG variance reduction technique into the proposed SdLBFGS method, and analyze its SFO-calls complexity. Numerical results on a nonconvex binary classification problem using SVM, and a multiclass classification problem using neural networks, are reported.
[ "['Xiao Wang' 'Shiqian Ma' 'Donald Goldfarb' 'Wei Liu']", "Xiao Wang, Shiqian Ma, Donald Goldfarb, Wei Liu" ]
cs.CL cs.IR cs.LG
null
1607.01274
null
null
http://arxiv.org/pdf/1607.01274v1
2016-07-04T01:16:55Z
2016-07-04T01:16:55Z
Temporal Topic Analysis with Endogenous and Exogenous Processes
We consider the problem of modeling temporal textual data taking endogenous and exogenous processes into account. Such text documents arise in real world applications, including job advertisements and economic news articles, which are influenced by the fluctuations of the general economy. We propose a hierarchical Bayesian topic model which imposes a "group-correlated" hierarchical structure on the evolution of topics over time incorporating both processes, and show that this model can be estimated from Markov chain Monte Carlo sampling methods. We further demonstrate that this model captures the intrinsic relationships between the topic distribution and the time-dependent factors, and compare its performance with latent Dirichlet allocation (LDA) and two other related models. The model is applied to two collections of documents to illustrate its empirical performance: online job advertisements from DirectEmployers Association and journalists' postings on BusinessInsider.com.
[ "['Baiyang Wang' 'Diego Klabjan']", "Baiyang Wang, Diego Klabjan" ]
cs.IT cs.LG math.IT
null
1607.01346
null
null
http://arxiv.org/pdf/1607.01346v1
2016-07-05T17:43:52Z
2016-07-05T17:43:52Z
Resource Allocation in a MAC with and without security via Game Theoretic Learning
In this paper a $K$-user fading multiple access channel (F-MAC) with and without security constraints is studied. First we consider an F-MAC without security constraints. Under the assumption of individual CSI at the users, we formulate the problem of power allocation as a stochastic game in which the receiver sends an ACK or a NACK depending on whether or not it was able to decode the message. We use the multiplicative-weights no-regret algorithm to obtain a Coarse Correlated Equilibrium (CCE). Then we consider the case when the users can decode the ACK/NACK of each other. In this scenario we provide an algorithm to maximize the weighted sum-utility of all the users and obtain a Pareto optimal point (PP). The PP is socially optimal but may be unfair to individual users. Next we consider the case where the users can cooperate with each other so as to reject a policy that would be unfair to an individual user. We then obtain a Nash bargaining solution (NBS), which in addition to being Pareto optimal, is also fair to each user. Next we study a $K$-user fading multiple access wiretap channel with the CSI of Eve available to the users. We use the previous algorithms to obtain a CCE, a PP and an NBS. Next we consider the case where each user does not know the CSI of Eve but only its distribution. In that case we use secrecy outage as the criterion for the receiver to send an ACK or a NACK. Here also we use the previous algorithms to obtain a CCE, a PP or an NBS. Finally we show that our algorithms can be extended to the case where a user can transmit at different rates. At the end we provide a few examples to compute the different solutions and compare them under different CSI scenarios.
[ "['Shahid Mehraj Shah' 'Krishna Chaitanya A' 'Vinod Sharma']", "Shahid Mehraj Shah, Krishna Chaitanya A and Vinod Sharma" ]
cs.LG stat.ML
null
1607.01354
null
null
http://arxiv.org/pdf/1607.01354v1
2016-03-22T18:46:13Z
2016-03-22T18:46:13Z
Learning Discriminative Features using Encoder-Decoder type Deep Neural Nets
As machine learning is applied to an increasing variety of complex problems, which are defined by high dimensional and complex data sets, the necessity for task oriented feature learning grows in importance. With the advancement of Deep Learning algorithms, various successful feature learning techniques have evolved. In this paper, we present a novel way of learning discriminative features by training Deep Neural Nets which have Encoder or Decoder type architecture similar to an Autoencoder. We demonstrate that our approach can learn discriminative features which can perform better at pattern classification tasks when the number of training samples is relatively small in size.
[ "Vishwajeet Singh, Killamsetti Ravi Kumar and K Eswaran", "['Vishwajeet Singh' 'Killamsetti Ravi Kumar' 'K Eswaran']" ]
stat.ML cs.LG
10.1007/s10994-016-5562-z
1607.01400
null
null
http://arxiv.org/abs/1607.01400v1
2016-07-05T20:04:57Z
2016-07-05T20:04:57Z
An Aggregate and Iterative Disaggregate Algorithm with Proven Optimality in Machine Learning
We propose a clustering-based iterative algorithm to solve certain optimization problems in machine learning, where we start the algorithm by aggregating the original data, solving the problem on aggregated data, and then in subsequent steps gradually disaggregate the aggregated data. We apply the algorithm to common machine learning problems such as the least absolute deviation regression problem, support vector machines, and semi-supervised support vector machines. We derive model-specific data aggregation and disaggregation procedures. We also show optimality, convergence, and the optimality gap of the approximated solution in each iteration. A computational study is provided.
[ "Young Woong Park and Diego Klabjan", "['Young Woong Park' 'Diego Klabjan']" ]
stat.ML cs.LG
10.1287/ijoc.2016.0729
1607.01417
null
null
http://arxiv.org/abs/1607.01417v2
2016-07-11T21:38:18Z
2016-07-05T21:04:08Z
Algorithms for Generalized Cluster-wise Linear Regression
Cluster-wise linear regression (CLR), a clustering problem intertwined with regression, is to find clusters of entities such that the overall sum of squared errors from regressions performed over these clusters is minimized, where each cluster may have different variances. We generalize the CLR problem by allowing each entity to have more than one observation, and refer to it as generalized CLR. We propose an exact mathematical programming based approach relying on column generation, a column generation based heuristic algorithm that clusters predefined groups of entities, a metaheuristic genetic algorithm with adapted Lloyd's algorithm for K-means clustering, a two-stage approach, and a modified algorithm of Sp{\"a}th \cite{Spath1979} for solving generalized CLR. We examine the performance of our algorithms on a stock keeping unit (SKU) clustering problem employed in forecasting halo and cannibalization effects in promotions using real-world retail data from a large supermarket chain. In the SKU clustering problem, the retailer needs to cluster SKUs based on their seasonal effects in response to promotions. The seasonal effects are the results of regressions with predictors being promotion mechanisms and seasonal dummies performed over clusters generated. We compare the performance of all proposed algorithms for the SKU problem with real-world and synthetic data.
[ "Young Woong Park, Yan Jiang, Diego Klabjan, Loren Williams", "['Young Woong Park' 'Yan Jiang' 'Diego Klabjan' 'Loren Williams']" ]
stat.ML cs.LG
null
1607.01462
null
null
http://arxiv.org/pdf/1607.01462v1
2016-07-06T02:34:21Z
2016-07-06T02:34:21Z
An optimal learning method for developing personalized treatment regimes
A treatment regime is a function that maps individual patient information to a recommended treatment, hence explicitly incorporating the heterogeneity in need for treatment across individuals. Patient responses are dichotomous and can be predicted through an unknown relationship that depends on the patient information and the selected treatment. The goal is to find the treatments that lead to the best patient responses on average. Each experiment is expensive, forcing us to learn the most from each experiment. We adopt a Bayesian approach both to incorporate possible prior information and to update our treatment regime continuously as information accrues, with the potential to allow smaller yet more informative trials and for patients to receive better treatment. By formulating the problem as contextual bandits, we introduce a knowledge gradient policy to guide the treatment assignment by maximizing the expected value of information, for which an approximation method is used to overcome computational challenges. We provide a detailed study on how to make sequential medical decisions under uncertainty to reduce health care costs on a real world knee replacement dataset. We use clustering and LASSO to deal with the intrinsic sparsity in health datasets. We show experimentally that even though the problem is sparse, through careful selection of physicians (versus picking them at random), we can significantly improve the success rates.
[ "Yingfei Wang and Warren Powell", "['Yingfei Wang' 'Warren Powell']" ]
cs.DS cs.LG math.PR
null
1607.01551
null
null
http://arxiv.org/pdf/1607.01551v1
2016-07-06T10:40:23Z
2016-07-06T10:40:23Z
On Sampling and Greedy MAP Inference of Constrained Determinantal Point Processes
Subset selection problems ask for a small, diverse yet representative subset of the given data. When pairwise similarities are captured by a kernel, the determinants of submatrices provide a measure of diversity or independence of items within a subset. Matroid theory gives another notion of independence, thus giving rise to optimization and sampling questions about Determinantal Point Processes (DPPs) under matroid constraints. Partition constraints, as a special case, arise naturally when incorporating additional labeling or clustering information, besides the kernel, in DPPs. Finding the maximum determinant submatrix under matroid constraints on its row/column indices has been previously studied. However, the corresponding question of sampling from DPPs under matroid constraints has been unresolved, beyond the simple cardinality constrained k-DPPs. We give the first polynomial time algorithm to sample exactly from DPPs under partition constraints, for any constant number of partitions. We complement this by a complexity theoretic barrier that rules out such a result under general matroid constraints. Our experiments indicate that partition-constrained DPPs offer more flexibility and more diversity than k-DPPs and their naive extensions, while being reasonably efficient in running time. We also show that a simple greedy initialization followed by local search gives improved approximation guarantees for the problem of MAP inference from k-DPPs on well-conditioned kernels. Our experiments show that this improvement is significant for larger values of k, supporting our theoretical result.
[ "Tarun Kathuria, Amit Deshpande", "['Tarun Kathuria' 'Amit Deshpande']" ]
cs.LG
null
1607.01582
null
null
http://arxiv.org/pdf/1607.01582v1
2016-07-06T12:10:29Z
2016-07-06T12:10:29Z
Bagged Boosted Trees for Classification of Ecological Momentary Assessment Data
Ecological Momentary Assessment (EMA) data is organized in multiple levels (per-subject, per-day, etc.) and this particular structure should be taken into account in machine learning algorithms used on EMA data, such as decision trees and their variants. We propose a new algorithm called BBT (standing for Bagged Boosted Trees) that is enhanced by an over/under-sampling method and can provide better estimates for the conditional class probability function. Experimental results on a real-world dataset show that BBT can improve classification performance on EMA data.
[ "['Gerasimos Spanakis' 'Gerhard Weiss' 'Anne Roefs']", "Gerasimos Spanakis and Gerhard Weiss and Anne Roefs" ]
stat.ML cs.LG cs.NA math.NA
10.1109/TSP.2017.2690524
1607.01668
null
null
http://arxiv.org/abs/1607.01668v2
2016-12-14T15:16:53Z
2016-07-06T15:22:31Z
Tensor Decomposition for Signal Processing and Machine Learning
Tensors or {\em multi-way arrays} are functions of three or more indices $(i,j,k,\cdots)$ -- similar to matrices (two-way arrays), which are functions of two indices $(r,c)$ for (row,column). Tensors have a rich history, stretching over almost a century, and touching upon numerous disciplines; but they have only recently become ubiquitous in signal and data analytics at the confluence of signal processing, statistics, data mining and machine learning. This overview article aims to provide a good starting point for researchers and practitioners interested in learning about and working with tensors. As such, it focuses on fundamentals and motivation (using various application examples), aiming to strike an appropriate balance of breadth {\em and depth} that will enable someone having taken first graduate courses in matrix algebra and probability to get started doing research and/or developing tensor algorithms and software. Some background in applied optimization is useful but not strictly required. The material covered includes tensor rank and rank decomposition; basic tensor factorization models and their relationships and properties (including fairly good coverage of identifiability); broad coverage of algorithms ranging from alternating optimization to stochastic gradient; statistical performance analysis; and applications ranging from source separation to collaborative filtering, mixture and topic modeling, classification, and multilinear subspace learning.
[ "['Nicholas D. Sidiropoulos' 'Lieven De Lathauwer' 'Xiao Fu' 'Kejun Huang'\n 'Evangelos E. Papalexakis' 'Christos Faloutsos']", "Nicholas D. Sidiropoulos, Lieven De Lathauwer, Xiao Fu, Kejun Huang,\n Evangelos E. Papalexakis, Christos Faloutsos" ]
cs.LG cs.AI
null
1607.01690
null
null
http://arxiv.org/pdf/1607.01690v1
2016-07-06T16:00:43Z
2016-07-06T16:00:43Z
A New Hierarchical Redundancy Eliminated Tree Augmented Naive Bayes Classifier for Coping with Gene Ontology-based Features
The Tree Augmented Naive Bayes classifier is a type of probabilistic graphical model that can represent some feature dependencies. In this work, we propose a Hierarchical Redundancy Eliminated Tree Augmented Naive Bayes (HRE-TAN) algorithm, which considers removing the hierarchical redundancy during the classifier learning process, when coping with data containing hierarchically structured features. The experiments showed that HRE-TAN obtains significantly better predictive performance than the conventional Tree Augmented Naive Bayes classifier, and enhances robustness against imbalanced class distributions, in aging-related gene datasets with Gene Ontology terms used as features.
[ "['Cen Wan' 'Alex A. Freitas']", "Cen Wan and Alex A. Freitas" ]
cs.CV cs.AI cs.LG cs.NE
null
1607.01719
null
null
http://arxiv.org/pdf/1607.01719v1
2016-07-06T17:35:55Z
2016-07-06T17:35:55Z
Deep CORAL: Correlation Alignment for Deep Domain Adaptation
Deep neural networks are able to learn powerful representations from large quantities of labeled input data, however they cannot always generalize well across changes in input distributions. Domain adaptation algorithms have been proposed to compensate for the degradation in performance due to domain shift. In this paper, we address the case when the target domain is unlabeled, requiring unsupervised adaptation. CORAL is a "frustratingly easy" unsupervised domain adaptation method that aligns the second-order statistics of the source and target distributions with a linear transformation. Here, we extend CORAL to learn a nonlinear transformation that aligns correlations of layer activations in deep neural networks (Deep CORAL). Experiments on standard benchmark datasets show state-of-the-art performance.
[ "['Baochen Sun' 'Kate Saenko']", "Baochen Sun, Kate Saenko" ]
cs.CR cs.DS cs.LG
null
1607.01842
null
null
http://arxiv.org/pdf/1607.01842v4
2018-12-13T15:26:07Z
2016-07-06T23:54:40Z
Finding Significant Fourier Coefficients: Clarifications, Simplifications, Applications and Limitations
Ideas from Fourier analysis have been used in cryptography for the last three decades. Akavia, Goldwasser and Safra unified some of these ideas to give a complete algorithm that finds significant Fourier coefficients of functions on any finite abelian group. Their algorithm stimulated a lot of interest in the cryptography community, especially in the context of `bit security'. This manuscript attempts to be a friendly and comprehensive guide to the tools and results in this field. The intended readership is cryptographers who have heard about these tools and seek an understanding of their mechanics and their usefulness and limitations. A compact overview of the algorithm is presented with emphasis on the ideas behind it. We show how these ideas can be extended to a `modulus-switching' variant of the algorithm. We survey some applications of this algorithm, and explain that several results should be taken in the right context. In particular, we point out that some of the most important bit security problems are still open. Our original contributions include: a discussion of the limitations on the usefulness of these tools; an answer to an open question about the modular inversion hidden number problem.
[ "Steven D. Galbraith, Joel Laity and Barak Shani", "['Steven D. Galbraith' 'Joel Laity' 'Barak Shani']" ]
cs.CL cs.IR cs.LG
null
1607.01958
null
null
http://arxiv.org/pdf/1607.01958v1
2016-07-07T10:48:34Z
2016-07-07T10:48:34Z
Stock trend prediction using news sentiment analysis
The Efficient Market Hypothesis is the popular theory about stock prediction. With its failure, much research has been carried out in the area of prediction of stocks. This project takes non-quantifiable data, such as financial news articles about a company, and predicts its future stock trend with news sentiment classification. Assuming that news articles have an impact on the stock market, this is an attempt to study the relationship between news and stock trend. To show this, we created three different classification models which depict the polarity of news articles as being positive or negative. Observations show that RF and SVM perform well in all types of testing. Na\"ive Bayes gives good results, but not compared to the other two. Experiments are conducted to evaluate various aspects of the proposed model and encouraging results are obtained in all of the experiments. The accuracy of the prediction model is more than 80%; in comparison with random labeling of news at 50% accuracy, the model increases the accuracy by 30%.
[ "Joshi Kalyani, Prof. H. N. Bharathi, Prof. Rao Jyothi", "['Joshi Kalyani' 'Prof. H. N. Bharathi' 'Prof. Rao Jyothi']" ]
cs.CL cs.LG cs.NE
null
1607.01963
null
null
http://arxiv.org/pdf/1607.01963v5
2017-03-22T15:59:30Z
2016-07-07T11:24:51Z
Sequence Training and Adaptation of Highway Deep Neural Networks
Highway deep neural network (HDNN) is a type of depth-gated feedforward neural network, which has been shown to be easier to train with more hidden layers and also to generalise better compared to conventional plain deep neural networks (DNNs). Previously, we investigated a structured HDNN architecture for speech recognition, in which the two gate functions were tied across all the hidden layers, and we were able to train a much smaller model without sacrificing the recognition accuracy. In this paper, we carry on the study of this architecture with a sequence-discriminative training criterion and speaker adaptation techniques on the AMI meeting speech recognition corpus. We show that these two techniques improve speech recognition accuracy on top of the model trained with the cross entropy criterion. Furthermore, we demonstrate that the two gate functions that are tied across all the hidden layers are able to control the information flow over the whole network, and we can achieve considerable improvements by only updating these gate functions in both sequence training and adaptation experiments.
[ "Liang Lu", "['Liang Lu']" ]
stat.ML cs.LG
null
1607.01981
null
null
http://arxiv.org/pdf/1607.01981v2
2016-07-11T08:05:18Z
2016-07-07T12:12:11Z
Nesterov's Accelerated Gradient and Momentum as approximations to Regularised Update Descent
We present a unifying framework for adapting the update direction in gradient-based iterative optimization methods. As natural special cases we re-derive classical momentum and Nesterov's accelerated gradient method, lending a new intuitive interpretation to the latter algorithm. We show that a new algorithm, which we term Regularised Gradient Descent, can converge more quickly than either Nesterov's algorithm or the classical momentum algorithm.
[ "['Aleksandar Botev' 'Guy Lever' 'David Barber']", "Aleksandar Botev, Guy Lever, David Barber" ]
stat.ML cs.LG
null
1607.02024
null
null
http://arxiv.org/pdf/1607.02024v2
2016-08-12T12:52:39Z
2016-07-07T14:06:06Z
Mini-Batch Spectral Clustering
The cost of computing the spectrum of Laplacian matrices hinders the application of spectral clustering to large data sets. While approximations recover computational tractability, they can potentially affect clustering performance. This paper proposes a practical approach to learn spectral clustering based on adaptive stochastic gradient optimization. Crucially, the proposed approach recovers the exact spectrum of Laplacian matrices in the limit of the iterations, and the cost of each iteration is linear in the number of samples. Extensive experimental validation on data sets with up to half a million samples demonstrates its scalability and its ability to outperform state-of-the-art approximate methods to learn spectral clustering for a given computational budget.
[ "['Yufei Han' 'Maurizio Filippone']", "Yufei Han, Maurizio Filippone" ]
cs.NE cs.LG
10.1007/978-3-319-41264-1_1
1607.02028
null
null
http://arxiv.org/abs/1607.02028v1
2016-07-06T12:23:47Z
2016-07-06T12:23:47Z
Artificial neural networks and fuzzy logic for recognizing alphabet characters and mathematical symbols
Optical Character Recognition (OCR) software is an important tool for obtaining accessible texts. We propose the use of artificial neural networks (ANN) in order to develop pattern recognition algorithms capable of recognizing both normal texts and formulae. We present an original improvement of the backpropagation algorithm. Moreover, we describe a novel image segmentation algorithm that exploits fuzzy logic for separating touching characters.
[ "Giuseppe Air\\`o Farulla, Tiziana Armano, Anna Capietto, Nadir Murru,\n Rosaria Rossini", "['Giuseppe Airò Farulla' 'Tiziana Armano' 'Anna Capietto' 'Nadir Murru'\n 'Rosaria Rossini']" ]
cs.LG q-bio.GN
null
1607.02078
null
null
http://arxiv.org/pdf/1607.02078v1
2016-07-07T16:50:57Z
2016-07-07T16:50:57Z
DeepChrome: Deep-learning for predicting gene expression from histone modifications
Motivation: Histone modifications are among the most important factors that control gene regulation. Computational methods that predict gene expression from histone modification signals are highly desirable for understanding their combinatorial effects in gene regulation. This knowledge can help in developing 'epigenetic drugs' for diseases like cancer. Previous studies for quantifying the relationship between histone modifications and gene expression levels either failed to capture combinatorial effects or relied on multiple methods that separate predictions and combinatorial analysis. This paper develops a unified discriminative framework using a deep convolutional neural network to classify gene expression using histone modification data as input. Our system, called DeepChrome, allows automatic extraction of complex interactions among important features. To simultaneously visualize the combinatorial interactions among histone modifications, we propose a novel optimization-based technique that generates feature pattern maps from the learnt deep model. This provides an intuitive description of underlying epigenetic mechanisms that regulate genes. Results: We show that DeepChrome outperforms state-of-the-art models like Support Vector Machines and Random Forests for gene expression classification task on 56 different cell-types from REMC database. The output of our visualization technique not only validates the previous observations but also allows novel insights about combinatorial interactions among histone modification marks, some of which have recently been observed by experimental studies.
[ "Ritambhara Singh, Jack Lanchantin, Gabriel Robins, and Yanjun Qi", "['Ritambhara Singh' 'Jack Lanchantin' 'Gabriel Robins' 'Yanjun Qi']" ]
cs.LG cs.SD stat.ML
null
1607.02173
null
null
http://arxiv.org/pdf/1607.02173v1
2016-07-07T21:06:48Z
2016-07-07T21:06:48Z
Single-Channel Multi-Speaker Separation using Deep Clustering
Deep clustering is a recently introduced deep learning architecture that uses discriminatively trained embeddings as the basis for clustering. It was recently applied to spectrogram segmentation, resulting in impressive results on speaker-independent multi-speaker separation. In this paper we extend the baseline system with an end-to-end signal approximation objective that greatly improves performance on a challenging speech separation task. We first significantly improve upon the baseline system performance by incorporating better regularization, larger temporal context, and a deeper architecture, culminating in an overall improvement in signal to distortion ratio (SDR) of 10.3 dB compared to the baseline of 6.0 dB for two-speaker separation, as well as a 7.1 dB SDR improvement for three-speaker separation. We then extend the model to incorporate an enhancement layer to refine the signal estimates, and perform end-to-end training through both the clustering and enhancement stages to maximize signal fidelity. We evaluate the results using automatic speech recognition. The new signal approximation objective, combined with end-to-end training, produces unprecedented performance, reducing the word error rate (WER) from 89.1% down to 30.8%. This represents a major advancement towards solving the cocktail party problem.
[ "Yusuf Isik, Jonathan Le Roux, Zhuo Chen, Shinji Watanabe, John R.\n Hershey", "['Yusuf Isik' 'Jonathan Le Roux' 'Zhuo Chen' 'Shinji Watanabe'\n 'John R. Hershey']" ]
cs.LG
null
1607.02177
null
null
http://arxiv.org/pdf/1607.02177v4
2018-03-06T18:39:00Z
2016-07-07T21:44:53Z
Applying Deep Learning to the Newsvendor Problem
The newsvendor problem is one of the most basic and widely applied inventory models. There are numerous extensions of this problem. If the probability distribution of the demand is known, the problem can be solved analytically. However, approximating the probability distribution is not easy and is prone to error; therefore, the resulting solution to the newsvendor problem may not be optimal. To address this issue, we propose an algorithm based on deep learning that optimizes the order quantities for all products based on features of the demand data. Our algorithm integrates the forecasting and inventory-optimization steps, rather than solving them separately, as is typically done, and does not require knowledge of the probability distributions of the demand. Numerical experiments on real-world data suggest that our algorithm outperforms other approaches, including data-driven and machine learning approaches, especially for demands with high volatility. Finally, in order to show how this approach can be used for other inventory optimization problems, we provide an extension for (r,Q) policies.
[ "['Afshin Oroojlooyjadid' 'Lawrence Snyder' 'Martin Takáč']", "Afshin Oroojlooyjadid and Lawrence Snyder and Martin Tak\\'a\\v{c}" ]
cs.LG cs.CV
null
1607.02241
null
null
http://arxiv.org/pdf/1607.02241v1
2016-07-08T06:07:03Z
2016-07-08T06:07:03Z
Overcoming Challenges in Fixed Point Training of Deep Convolutional Networks
It is known that training deep neural networks, in particular, deep convolutional networks, with aggressively reduced numerical precision is challenging. The stochastic gradient descent algorithm becomes unstable in the presence of noisy gradient updates resulting from arithmetic with limited numeric precision. One of the well-accepted solutions facilitating the training of low precision fixed point networks is stochastic rounding. However, to the best of our knowledge, the source of the instability in training neural networks with noisy gradient updates has not been well investigated. This work is an attempt to draw a theoretical connection between low numerical precision and training algorithm stability. In doing so, we will also propose and verify through experiments methods that are able to improve the training performance of deep convolutional networks in fixed point.
[ "['Darryl D. Lin' 'Sachin S. Talathi']", "Darryl D. Lin and Sachin S. Talathi" ]
cs.NE cs.CV cs.LG cs.MM cs.SD
null
1607.02303
null
null
http://arxiv.org/pdf/1607.02303v2
2016-08-15T18:05:00Z
2016-07-08T10:39:05Z
CNN-LTE: a Class of 1-X Pooling Convolutional Neural Networks on Label Tree Embeddings for Audio Scene Recognition
We describe in this report our audio scene recognition system submitted to the DCASE 2016 challenge. Firstly, given the label set of the scenes, a label tree is automatically constructed. This category taxonomy is then used in the feature extraction step in which an audio scene instance is represented by a label tree embedding image. Different convolutional neural networks, which are tailored for the task at hand, are finally learned on top of the image features for scene recognition. Our system reaches an overall recognition accuracy of 81.2% and 83.3% and outperforms the DCASE 2016 baseline with absolute improvements of 8.7% and 6.1% on the development and test data, respectively.
[ "Huy Phan, Lars Hertel, Marco Maass, Philipp Koch, Alfred Mertins", "['Huy Phan' 'Lars Hertel' 'Marco Maass' 'Philipp Koch' 'Alfred Mertins']" ]
cs.SD cs.AI cs.LG cs.MM
null
1607.02306
null
null
http://arxiv.org/pdf/1607.02306v2
2016-08-15T18:02:09Z
2016-07-08T10:42:43Z
CaR-FOREST: Joint Classification-Regression Decision Forests for Overlapping Audio Event Detection
This report describes our submissions to Task2 and Task3 of the DCASE 2016 challenge. The systems aim at dealing with the detection of overlapping audio events in continuous streams, where the detectors are based on random decision forests. The proposed forests are jointly trained for classification and regression simultaneously. Initially, the training is classification-oriented to encourage the trees to select discriminative features from overlapping mixtures to separate positive audio segments from the negative ones. The regression phase is then carried out to let the positive audio segments vote for the event onsets and offsets, and therefore model the temporal structure of audio events. One random decision forest is specifically trained for each event category of interest. Experimental results on the development data show that our systems significantly outperform the baseline on the Task2 evaluation while they are inferior to the baseline in the Task3 evaluation.
[ "Huy Phan, Lars Hertel, Marco Maass, Philipp Koch, Alfred Mertins", "['Huy Phan' 'Lars Hertel' 'Marco Maass' 'Philipp Koch' 'Alfred Mertins']" ]
cs.CL cs.LG
null
1607.02310
null
null
http://arxiv.org/pdf/1607.02310v3
2017-05-05T11:17:57Z
2016-07-08T11:01:56Z
Collaborative Training of Tensors for Compositional Distributional Semantics
Type-based compositional distributional semantic models present an interesting line of research into functional representations of linguistic meaning. One of the drawbacks of such models, however, is the lack of training data required to train each word-type combination. In this paper we address this by introducing training methods that share parameters between similar words. We show that these methods enable zero-shot learning for words that have no training data at all, as well as enabling construction of high-quality tensors from very few training examples per word.
[ "Tamara Polajnar", "['Tamara Polajnar']" ]
cs.RO cs.LG
null
1607.02329
null
null
http://arxiv.org/pdf/1607.02329v1
2016-07-08T11:59:51Z
2016-07-08T11:59:51Z
Watch This: Scalable Cost-Function Learning for Path Planning in Urban Environments
In this work, we present an approach to learn cost maps for driving in complex urban environments from a very large number of demonstrations of driving behaviour by human experts. The learned cost maps are constructed directly from raw sensor measurements, bypassing the effort of manually designing cost maps as well as features. When deploying the learned cost maps, the trajectories generated not only replicate human-like driving behaviour but are also demonstrably robust against systematic errors in putative robot configuration. To achieve this we deploy a Maximum Entropy based, non-linear IRL framework which uses Fully Convolutional Neural Networks (FCNs) to represent the cost model underlying expert driving behaviour. Using a deep, parametric approach enables us to scale efficiently to large datasets and complex behaviours by being run-time independent of dataset extent during deployment. We demonstrate the scalability and the performance of the proposed approach on an ambitious dataset collected over the course of one year including more than 25k demonstration trajectories extracted from over 120km of driving around pedestrianised areas in the city of Milton Keynes, UK. We evaluate the resulting cost representations by showing the advantages over a carefully manually designed cost map and, in addition, demonstrate its robustness to systematic errors by learning precise cost-maps even in the presence of system calibration perturbations.
[ "Markus Wulfmeier, Dominic Zeng Wang and Ingmar Posner", "['Markus Wulfmeier' 'Dominic Zeng Wang' 'Ingmar Posner']" ]
cs.IT cs.LG cs.SI math.IT stat.ML
null
1607.02413
null
null
http://arxiv.org/pdf/1607.02413v2
2017-02-07T16:42:59Z
2016-07-08T15:25:46Z
Lower Bounds on Active Learning for Graphical Model Selection
We consider the problem of estimating the underlying graph associated with a Markov random field, with the added twist that the decoding algorithm can iteratively choose which subsets of nodes to sample based on the previous samples, resulting in an active learning setting. Considering both Ising and Gaussian models, we provide algorithm-independent lower bounds for high-probability recovery within the class of degree-bounded graphs. Our main results are minimax lower bounds for the active setting that match the best known lower bounds for the passive setting, which in turn are known to be tight in several cases of interest. Our analysis is based on Fano's inequality, along with novel mutual information bounds for the active learning setting, and the application of restricted graph ensembles. While we consider ensembles that are similar or identical to those used in the passive setting, we require different analysis techniques, with a key challenge being bounding a mutual information quantity associated with observed subsets of nodes, as opposed to full observations.
[ "['Jonathan Scarlett' 'Volkan Cevher']", "Jonathan Scarlett and Volkan Cevher" ]
cs.LG cs.AI cs.MM cs.SD
null
1607.02444
null
null
http://arxiv.org/pdf/1607.02444v1
2016-07-08T16:40:30Z
2016-07-08T16:40:30Z
Explaining Deep Convolutional Neural Networks on Music Classification
Deep convolutional neural networks (CNNs) have been actively adopted in the field of music information retrieval, e.g. genre classification, mood detection, and chord recognition. However, the process of learning and prediction is little understood, particularly when it is applied to spectrograms. We introduce auralisation of a CNN to understand its underlying mechanism, which is based on a deconvolution procedure introduced in [2]. Auralisation of a CNN is converting the learned convolutional features that are obtained from deconvolution into audio signals. In the experiments and discussions, we explain trained features of a 5-layer CNN based on the deconvolved spectrograms and auralised signals. The pairwise correlations per layer with different musical attributes are also investigated to understand the evolution of the learnt features. It is shown that in the deep layers, the features are learnt to capture textures, the patterns of continuous distributions, rather than shapes of lines.
[ "Keunwoo Choi, George Fazekas, Mark Sandler", "['Keunwoo Choi' 'George Fazekas' 'Mark Sandler']" ]
null
null
1607.02450
null
null
http://arxiv.org/pdf/1607.02450v2
2016-08-28T15:23:47Z
2016-07-08T16:55:31Z
Proceedings of the 2016 ICML Workshop on #Data4Good: Machine Learning in Social Good Applications
This is the Proceedings of the ICML Workshop on #Data4Good: Machine Learning in Social Good Applications, which was held on June 24, 2016 in New York.
[ "['Kush R. Varshney']" ]
cs.AI cs.CL cs.LG cs.NE
null
1607.02467
null
null
http://arxiv.org/pdf/1607.02467v2
2016-12-16T10:56:22Z
2016-07-08T17:35:51Z
Log-Linear RNNs: Towards Recurrent Neural Networks with Flexible Prior Knowledge
We introduce LL-RNNs (Log-Linear RNNs), an extension of Recurrent Neural Networks that replaces the softmax output layer by a log-linear output layer, of which the softmax is a special case. This conceptually simple move has two main advantages. First, it allows the learner to combat training data sparsity by allowing it to model words (or more generally, output symbols) as complex combinations of attributes without requiring that each combination be directly observed in the training data (as the softmax does). Second, it permits the inclusion of flexible prior knowledge in the form of a priori specified modular features, where the neural network component learns to dynamically control the weights of a log-linear distribution exploiting these features. We conduct experiments on French language modelling that exploit morphological prior knowledge and show a substantial decrease in perplexity relative to a baseline RNN. We provide other motivating illustrations, and finally argue that the log-linear and the neural-network components contribute complementary strengths to the LL-RNN: the LL aspect allows the model to incorporate rich prior knowledge, while the NN aspect, according to the "representation learning" paradigm, allows the model to discover novel combinations of characteristics.
[ "['Marc Dymetman' 'Chunyang Xiao']", "Marc Dymetman, Chunyang Xiao" ]
cs.LG cs.NE
null
1607.02488
null
null
http://arxiv.org/pdf/1607.02488v2
2017-03-23T18:36:50Z
2016-07-08T18:39:47Z
Adjusting for Dropout Variance in Batch Normalization and Weight Initialization
We show how to adjust for the variance introduced by dropout with corrections to weight initialization and Batch Normalization, yielding higher accuracy. Though dropout can preserve the expected input to a neuron between train and test, the variance of the input differs. We thus propose a new weight initialization by correcting for the influence of dropout rates and an arbitrary nonlinearity's influence on variance through simple corrective scalars. Since Batch Normalization trained with dropout estimates the variance of a layer's incoming distribution with some inputs dropped, the variance also differs between train and test. After training a network with Batch Normalization and dropout, we simply update Batch Normalization's variance moving averages with dropout off and obtain state-of-the-art results on CIFAR-10 and CIFAR-100 without data augmentation.
[ "Dan Hendrycks and Kevin Gimpel", "['Dan Hendrycks' 'Kevin Gimpel']" ]
stat.ML cs.LG
null
1607.02531
null
null
http://arxiv.org/pdf/1607.02531v2
2016-07-27T19:00:49Z
2016-07-08T21:07:54Z
Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016)
This is the Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), which was held in New York, NY, June 23, 2016. Invited speakers were Susan Athey, Rich Caruana, Jacob Feldman, Percy Liang, and Hanna Wallach.
[ "Been Kim, Dmitry M. Malioutov, Kush R. Varshney", "['Been Kim' 'Dmitry M. Malioutov' 'Kush R. Varshney']" ]
cs.CV cs.CR cs.LG stat.ML
null
1607.02533
null
null
http://arxiv.org/pdf/1607.02533v4
2017-02-11T00:39:39Z
2016-07-08T21:12:11Z
Adversarial examples in the physical world
Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work has assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those using signals from cameras and other sensors as input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from a cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.
[ "['Alexey Kurakin' 'Ian Goodfellow' 'Samy Bengio']", "Alexey Kurakin, Ian Goodfellow and Samy Bengio" ]
cs.LG
null
1607.02535
null
null
http://arxiv.org/pdf/1607.02535v1
2016-07-08T21:40:44Z
2016-07-08T21:40:44Z
Learning from Multiway Data: Simple and Efficient Tensor Regression
Tensor regression has been shown to be advantageous in learning tasks with multi-directional relatedness. Given massive multiway data, traditional methods are often too slow to operate on or suffer from memory bottlenecks. In this paper, we introduce subsampled tensor projected gradient to solve the problem. Our algorithm is impressively simple and efficient. It is built upon the projected gradient method with fast tensor power iterations, leveraging randomized sketching for further acceleration. Theoretical analysis shows that our algorithm converges to the correct solution in a fixed number of iterations. The memory requirement grows linearly with the size of the problem. We demonstrate superior empirical performance on both multi-linear multi-task learning and spatio-temporal applications.
[ "['Rose Yu' 'Yan Liu']", "Rose Yu, Yan Liu" ]