categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
cs.CL cs.AI cs.LG cs.NE
null
1604.04378
null
null
http://arxiv.org/pdf/1604.04378v1
2016-04-15T07:23:53Z
2016-04-15T07:23:53Z
Match-SRNN: Modeling the Recursive Matching Structure with Spatial RNN
Semantic matching, which aims to determine the matching degree between two texts, is a fundamental problem for many NLP applications. Recently, the deep learning approach has been applied to this problem and significant improvements have been achieved. In this paper, we propose to view the generation of the global interaction between two texts as a recursive process: i.e., the interaction of two texts at each position is a composition of the interactions between their prefixes as well as the word level interaction at the current position. Based on this idea, we propose a novel deep architecture, namely Match-SRNN, to model the recursive matching structure. Firstly, a tensor is constructed to capture the word level interactions. Then a spatial RNN is applied to integrate the local interactions recursively, with importance determined by four types of gates. Finally, the matching score is calculated based on the global interaction. We show that, when degenerated to the exact matching scenario, Match-SRNN can approximate the dynamic programming process of the longest common subsequence. Thus, there exists a clear interpretation for Match-SRNN. Our experiments on two semantic matching tasks show the effectiveness of Match-SRNN and its ability to visualize the learned matching structure.
[ "['Shengxian Wan' 'Yanyan Lan' 'Jun Xu' 'Jiafeng Guo' 'Liang Pang'\n 'Xueqi Cheng']", "Shengxian Wan, Yanyan Lan, Jun Xu, Jiafeng Guo, Liang Pang, Xueqi\n Cheng" ]
cs.LG cs.NE
null
1604.04428
null
null
http://arxiv.org/pdf/1604.04428v2
2016-07-14T15:18:56Z
2016-04-15T11:07:45Z
The Artificial Mind's Eye: Resisting Adversarials for Convolutional Neural Networks using Internal Projection
We introduce a novel artificial neural network architecture that integrates robustness to adversarial input in the network structure. The main idea of our approach is to force the network to make predictions on what the given instance of the class under consideration would look like and subsequently test those predictions. By forcing the network to redraw the relevant parts of the image and subsequently comparing this new image to the original, we are having the network give a "proof" of the presence of the object.
[ "Harm Berntsen, Wouter Kuijper and Tom Heskes", "['Harm Berntsen' 'Wouter Kuijper' 'Tom Heskes']" ]
cs.LG stat.ML
null
1604.04434
null
null
http://arxiv.org/pdf/1604.04434v1
2016-04-15T11:21:27Z
2016-04-15T11:21:27Z
Bayesian linear regression with Student-t assumptions
As an automatic method of determining model complexity using the training data alone, Bayesian linear regression provides a principled way to select hyperparameters. However, approximate inference is often needed once the distributional assumptions go beyond the Gaussian. In this paper, we propose a Bayesian linear regression model with Student-t assumptions (BLRS), which can be inferred exactly. In this framework, both the conjugate prior and the expectation maximization (EM) algorithm are generalized. Meanwhile, we prove that the maximum likelihood solution is equivalent to that of the standard Bayesian linear regression with Gaussian assumptions (BLRG). The $q$-EM algorithm for BLRS is nearly identical to the EM algorithm for BLRG. It is shown that $q$-EM for BLRS can converge faster than EM for BLRG on the task of predicting online news popularity.
[ "['Chaobing Song' 'Shu-Tao Xia']", "Chaobing Song, Shu-Tao Xia" ]
cs.LG cs.IT math.IT stat.ML
null
1604.04451
null
null
http://arxiv.org/pdf/1604.04451v2
2016-07-04T13:18:52Z
2016-04-15T12:06:48Z
Delta divergence: A novel decision cognizant measure of classifier incongruence
Disagreement between two classifiers regarding the class membership of an observation in pattern recognition can be indicative of an anomaly and its nuance. Since classifiers generally base their decisions on class a posteriori probabilities, the most natural approach to detecting classifier incongruence is to use a divergence. However, existing divergences are not particularly suitable for gauging classifier incongruence. In this paper, we postulate the properties that such a divergence measure should satisfy and propose a novel divergence measure, referred to as Delta divergence. In contrast to existing measures, it is decision cognizant. The focus of Delta divergence on the dominant hypotheses has a clutter-reducing property, the significance of which grows with an increasing number of classes. The proposed measure satisfies other important properties, such as symmetry and independence of classifier confidence. The relationship of the proposed divergence to some baseline measures is demonstrated experimentally, showing its superiority.
[ "['Josef Kittler' 'Cemre Zor']", "Josef Kittler and Cemre Zor" ]
stat.ML cs.LG
null
1604.04505
null
null
http://arxiv.org/pdf/1604.04505v1
2016-04-15T13:51:57Z
2016-04-15T13:51:57Z
A short note on extension theorems and their connection to universal consistency in machine learning
Statistical machine learning plays an important role in modern statistics and computer science. One main goal of statistical machine learning is to provide universally consistent algorithms, i.e., the estimator converges in probability or in some stronger sense to the Bayes risk or to the Bayes decision function. Kernel methods based on minimizing the regularized risk over a reproducing kernel Hilbert space (RKHS) belong to these statistical machine learning methods. It is in general unknown which kernel yields optimal results for a particular data set or for the unknown probability measure. Hence various kernel learning methods were proposed to choose the kernel and therefore also its RKHS in a data adaptive manner. Nevertheless, many practitioners often use the classical Gaussian RBF kernel or certain Sobolev kernels with good success. The goal of this short note is to offer one possible theoretical explanation for this empirical fact.
[ "Andreas Christmann, Florian Dumpert, and Dao-Hong Xiang", "['Andreas Christmann' 'Florian Dumpert' 'Dao-Hong Xiang']" ]
cs.CV cs.LG cs.NE cs.RO
null
1604.04528
null
null
http://arxiv.org/pdf/1604.04528v1
2016-04-15T14:55:27Z
2016-04-15T14:55:27Z
Tracking Human-like Natural Motion Using Deep Recurrent Neural Networks
The Kinect skeleton tracker is able to achieve considerable human body tracking performance in a convenient and low-cost manner. However, the tracker often captures unnatural human poses, such as discontinuous and vibrating motions, when self-occlusions occur. A majority of approaches tackle this problem by using multiple Kinect sensors in a workspace. Measurements from the different sensors are then combined in a Kalman filter framework, or an optimization problem is formulated for sensor fusion. However, these methods usually require heuristics to measure the reliability of the measurements observed from each Kinect sensor. In this paper, we developed a method to improve the Kinect skeleton using a single Kinect sensor, in which a supervised learning technique was employed to correct unnatural tracking motions. Specifically, deep recurrent neural networks were used for improving the joint positions and velocities of the Kinect skeleton, and three methods were proposed to integrate the refined positions and velocities for further enhancement. Moreover, we suggested a novel measure to evaluate the naturalness of captured motions. We evaluated the proposed approach by comparison with the ground truth obtained using a commercial optical marker-based motion capture system.
[ "Youngbin Park, Sungphill Moon and Il Hong Suh", "['Youngbin Park' 'Sungphill Moon' 'Il Hong Suh']" ]
cs.IR cs.AI cs.LG
null
1604.04558
null
null
http://arxiv.org/pdf/1604.04558v1
2016-04-15T16:27:38Z
2016-04-15T16:27:38Z
Accessing accurate documents by mining auxiliary document information
Earlier text mining techniques include algorithms like k-means, Naive Bayes and SVM, which classify and cluster text documents to mine relevant information about them. The need to improve these mining techniques motivates the search for new approaches built on the available algorithms. This paper proposes one such technique, which uses the auxiliary information present inside text documents to improve mining. This auxiliary information can be a description of the content, and it may be either useful or completely useless for mining, so the user should assess its worth before considering this technique for text mining. In this paper, a combination of classical clustering algorithms is used to mine the datasets. The algorithm runs in two stages, which carry out mining at different levels of abstraction. The clustered documents are then classified into the required groups. The proposed technique is aimed at improving the results of document clustering.
[ "Jinju Joby and Jyothi Korra", "['Jinju Joby' 'Jyothi Korra']" ]
cs.CV cs.LG cs.NE
null
1604.04573
null
null
http://arxiv.org/pdf/1604.04573v1
2016-04-15T17:10:54Z
2016-04-15T17:10:54Z
CNN-RNN: A Unified Framework for Multi-label Image Classification
While deep convolutional neural networks (CNNs) have shown great success in single-label image classification, it is important to note that real-world images generally contain multiple labels, which could correspond to different objects, scenes, actions and attributes in an image. Traditional approaches to multi-label image classification learn independent classifiers for each category and employ ranking or thresholding on the classification results. These techniques, although working well, fail to explicitly exploit the label dependencies in an image. In this paper, we utilize recurrent neural networks (RNNs) to address this problem. Combined with CNNs, the proposed CNN-RNN framework learns a joint image-label embedding to characterize the semantic label dependency as well as the image-label relevance, and it can be trained end-to-end from scratch to integrate both sources of information in a unified framework. Experimental results on public benchmark datasets demonstrate that the proposed architecture achieves better performance than state-of-the-art multi-label classification models.
[ "Jiang Wang, Yi Yang, Junhua Mao, Zhiheng Huang, Chang Huang, Wei Xu", "['Jiang Wang' 'Yi Yang' 'Junhua Mao' 'Zhiheng Huang' 'Chang Huang'\n 'Wei Xu']" ]
cs.CR cs.DS cs.LG
null
1604.04618
null
null
http://arxiv.org/pdf/1604.04618v1
2016-04-15T19:55:26Z
2016-04-15T19:55:26Z
Make Up Your Mind: The Price of Online Queries in Differential Privacy
We consider the problem of answering queries about a sensitive dataset subject to differential privacy. The queries may be chosen adversarially from a larger set Q of allowable queries in one of three ways, which we list in order from easiest to hardest to answer: Offline: The queries are chosen all at once and the differentially private mechanism answers the queries in a single batch. Online: The queries are chosen all at once, but the mechanism only receives the queries in a streaming fashion and must answer each query before seeing the next query. Adaptive: The queries are chosen one at a time and the mechanism must answer each query before the next query is chosen. In particular, each query may depend on the answers given to previous queries. Many differentially private mechanisms are just as efficient in the adaptive model as they are in the offline model. Meanwhile, most lower bounds for differential privacy hold in the offline setting. This suggests that the three models may be equivalent. We prove that these models are all, in fact, distinct. Specifically, we show that there is a family of statistical queries such that exponentially more queries from this family can be answered in the offline model than in the online model. We also exhibit a family of search queries such that exponentially more queries from this family can be answered in the online model than in the adaptive model. We also investigate whether such separations might hold for simple queries like threshold queries over the real line.
[ "['Mark Bun' 'Thomas Steinke' 'Jonathan Ullman']", "Mark Bun, Thomas Steinke, Jonathan Ullman" ]
cs.PL cs.LG
null
1604.04639
null
null
http://arxiv.org/pdf/1604.04639v1
2016-04-15T20:43:20Z
2016-04-15T20:43:20Z
ModelWizard: Toward Interactive Model Construction
Data scientists engage in model construction to discover machine learning models that well explain a dataset, in terms of predictiveness, understandability and generalization across domains. Questions such as "what if we model common cause Z" and "what if Y's dependence on X reverses" inspire many candidate models to consider and compare, yet current tools emphasize constructing a final model all at once. To more naturally reflect exploration when debating numerous models, we propose an interactive model construction framework grounded in composable operations. Primitive operations capture core steps refining data and model that, when verified, form an inductive basis to prove model validity. Derived, composite operations enable advanced model families, both generic and specialized, abstracted away from low-level details. We prototype our envisioned framework in ModelWizard, a domain-specific language embedded in F# to construct Tabular models. We enumerate language design and demonstrate its use through several applications, emphasizing how language may facilitate creation of complex models. To future engineers designing data science languages and tools, we offer ModelWizard's design as a new model construction paradigm, speeding discovery of our universe's structure.
[ "Dylan Hutchison", "['Dylan Hutchison']" ]
cs.SI cs.LG physics.soc-ph
10.1016/j.jrp.2017.12.004
1604.04696
null
null
http://arxiv.org/abs/1604.04696v1
2016-04-16T05:44:47Z
2016-04-16T05:44:47Z
Phone-based Metric as a Predictor for Basic Personality Traits
Basic personality traits are typically assessed through questionnaires. Here we consider phone-based metrics as a way to assess personality traits. We use data from smartphones with custom data-collection software distributed to 730 individuals. The data includes information about location, physical motion, face-to-face contacts, online social network friends, text messages and calls. The data is further complemented by questionnaire-based data on basic personality traits. From the phone-based metrics, we define a set of behavioral variables, which we use in a prediction of basic personality traits. We find that predominantly, the Big Five personality traits extraversion and, to some degree, neuroticism are strongly expressed in our data. As an alternative to the Big Five, we investigate whether other linear combinations of the 44 questions underlying the Big Five Inventory are more predictable. In a tertile classification problem, basic dimensionality reduction techniques, such as independent component analysis, increase the predictability relative to the baseline from $11\%$ to $23\%$. Finally, from a supervised linear classifier, we were able to further improve this predictability to $33\%$. In all cases, the most predictable projections had an overweight of the questions related to extraversion and neuroticism. In addition, our findings indicate that the score system underlying the Big Five Inventory disregards a part of the information available in the 44 questions.
[ "['Bjarke Mønsted' 'Anders Mollgaard' 'Joachim Mathiesen']", "Bjarke M{\\o}nsted, Anders Mollgaard, Joachim Mathiesen" ]
cs.LG stat.ML
null
1604.04706
null
null
http://arxiv.org/pdf/1604.04706v7
2018-08-03T22:13:06Z
2016-04-16T07:26:58Z
DS-MLR: Exploiting Double Separability for Scaling up Distributed Multinomial Logistic Regression
Scaling multinomial logistic regression to datasets with very large number of data points and classes is challenging. This is primarily because one needs to compute the log-partition function on every data point. This makes distributing the computation hard. In this paper, we present a distributed stochastic gradient descent based optimization method (DS-MLR) for scaling up multinomial logistic regression problems to massive scale datasets without hitting any storage constraints on the data and model parameters. Our algorithm exploits double-separability, an attractive property that allows us to achieve both data as well as model parallelism simultaneously. In addition, we introduce a non-blocking and asynchronous variant of our algorithm that avoids bulk-synchronization. We demonstrate the versatility of DS-MLR to various scenarios in data and model parallelism, through an extensive empirical study using several real-world datasets. In particular, we demonstrate the scalability of DS-MLR by solving an extreme multi-class classification problem on the Reddit dataset (159 GB data, 358 GB parameters) where, to the best of our knowledge, no other existing methods apply.
[ "Parameswaran Raman, Sriram Srinivasan, Shin Matsushima, Xinhua Zhang,\n Hyokun Yun, S.V.N. Vishwanathan", "['Parameswaran Raman' 'Sriram Srinivasan' 'Shin Matsushima' 'Xinhua Zhang'\n 'Hyokun Yun' 'S. V. N. Vishwanathan']" ]
cs.LG cs.CV cs.NE
10.1007/s11263-015-0799-8
1604.04767
null
null
http://arxiv.org/abs/1604.04767v1
2016-04-16T15:42:12Z
2016-04-16T15:42:12Z
Efficient Dictionary Learning with Sparseness-Enforcing Projections
Learning dictionaries suitable for sparse coding instead of using engineered bases has proven effective in a variety of image processing tasks. This paper studies the optimization of dictionaries on image data where the representation is enforced to be explicitly sparse with respect to a smooth, normalized sparseness measure. This involves the computation of Euclidean projections onto level sets of the sparseness measure. While previous algorithms for this optimization problem had at least quasi-linear time complexity, here the first algorithm with linear time complexity and constant space complexity is proposed. The key for this is the mathematically rigorous derivation of a characterization of the projection's result based on a soft-shrinkage function. This theory is applied in an original algorithm called Easy Dictionary Learning (EZDL), which learns dictionaries with a simple and fast-to-compute Hebbian-like learning rule. The new algorithm is efficient, expressive and particularly simple to implement. It is demonstrated that despite its simplicity, the proposed learning algorithm is able to generate a rich variety of dictionaries, in particular a topographic organization of atoms or separable atoms. Further, the dictionaries are as expressive as those of benchmark learning algorithms in terms of the reproduction quality on entire images, and result in an equivalent denoising performance. EZDL learns approximately 30 % faster than the already very efficient Online Dictionary Learning algorithm, and is therefore eligible for rapid data set analysis and problems with vast quantities of learning samples.
[ "['Markus Thom' 'Matthias Rapp' 'Günther Palm']", "Markus Thom, Matthias Rapp, G\\\"unther Palm" ]
cs.CL cs.LG
null
1604.04802
null
null
http://arxiv.org/pdf/1604.04802v1
2016-04-16T21:18:14Z
2016-04-16T21:18:14Z
Supervised and Unsupervised Ensembling for Knowledge Base Population
We present results on combining supervised and unsupervised methods to ensemble multiple systems for two popular Knowledge Base Population (KBP) tasks, Cold Start Slot Filling (CSSF) and Tri-lingual Entity Discovery and Linking (TEDL). We demonstrate that our combined system along with auxiliary features outperforms the best performing system for both tasks in the 2015 competition, several ensembling baselines, as well as the state-of-the-art stacking approach to ensembling KBP systems. The success of our technique on two different and challenging problems demonstrates the power and generality of our combined approach to ensembling.
[ "['Nazneen Fatema Rajani' 'Raymond J. Mooney']", "Nazneen Fatema Rajani and Raymond J. Mooney" ]
cs.LG cs.NE
null
1604.04812
null
null
http://arxiv.org/pdf/1604.04812v3
2017-01-02T18:33:43Z
2016-04-17T00:26:57Z
Structured Sparse Convolutional Autoencoder
This paper aims to improve the feature learning in Convolutional Networks (Convnet) by capturing the structure of objects. A new sparsity function is imposed on the extracted featuremap to capture the structure and shape of the learned object, extracting interpretable features to improve the prediction performance. The proposed algorithm is based on organizing the activation within and across featuremap by constraining the node activities through $\ell_{2}$ and $\ell_{1}$ normalization in a structured form.
[ "Ehsan Hosseini-Asl", "['Ehsan Hosseini-Asl']" ]
cs.CL cs.LG
null
1604.04835
null
null
http://arxiv.org/pdf/1604.04835v3
2017-06-17T04:33:41Z
2016-04-17T07:15:33Z
SSP: Semantic Space Projection for Knowledge Graph Embedding with Text Descriptions
Knowledge representation is an important, long-standing topic in AI, and there has been a large amount of work on knowledge graph embedding, which projects symbolic entities and relations into a low-dimensional, real-valued vector space. However, most embedding methods merely concentrate on data fitting and ignore explicit semantic expression, leading to uninterpretable representations. Thus, traditional embedding methods have limited potential for many applications such as question answering and entity classification. To this end, this paper proposes a semantic representation method for knowledge graphs \textbf{(KSR)}, which imposes a two-level hierarchical generative process that globally extracts many aspects and then locally assigns a specific category in each aspect for every triple. Since both aspects and categories are semantics-relevant, the collection of categories in each aspect is treated as the semantic representation of the triple. Extensive experiments show that our model substantially outperforms other state-of-the-art baselines.
[ "['Han Xiao' 'Minlie Huang' 'Xiaoyan Zhu']", "Han Xiao, Minlie Huang, Xiaoyan Zhu" ]
cs.LG
null
1604.04879
null
null
http://arxiv.org/pdf/1604.04879v1
2016-04-17T15:01:51Z
2016-04-17T15:01:51Z
Mahalanobis Distance Metric Learning Algorithm for Instance-based Data Stream Classification
With the massive data challenges nowadays and the rapid growth of technology, stream mining has recently received considerable attention. To address the large number of scenarios in which this phenomenon manifests itself, suitable tools are required in various research fields. Instance-based data stream algorithms generally employ the Euclidean distance for the classification task underlying this problem. A novel way to look into this issue is to take advantage of a more flexible metric, given the increased requirements imposed by the data stream scenario. In this paper we present a new algorithm that learns a Mahalanobis metric using similarity and dissimilarity constraints in an online manner. This approach hybridizes a Mahalanobis distance metric learning algorithm and a k-NN data stream classification algorithm with concept drift detection. First, some basic aspects of Mahalanobis distance metric learning are described, taking into account key properties as well as online distance metric learning algorithms. Second, we implement specific evaluation methodologies and comparative metrics, such as the Q statistic, for data stream classification algorithms. Finally, our algorithm is evaluated on different datasets by comparing its results with one of the best state-of-the-art instance-based data stream classification algorithms. The results demonstrate that our proposal performs better.
[ "Jorge Luis Rivero Perez, Bernardete Ribeiro, Carlos Morell Perez", "['Jorge Luis Rivero Perez' 'Bernardete Ribeiro' 'Carlos Morell Perez']" ]
cs.LG cs.DS
10.1016/j.asoc.2012.07.021
1604.04893
null
null
http://arxiv.org/abs/1604.04893v1
2016-04-17T16:25:15Z
2016-04-17T16:25:15Z
An Initial Seed Selection Algorithm for K-means Clustering of Georeferenced Data to Improve Replicability of Cluster Assignments for Mapping Application
K-means is one of the most widely used clustering algorithms in various disciplines, especially for large datasets. However, the method is known to be highly sensitive to the initial seed selection of cluster centers. K-means++ has been proposed to overcome this problem and has been shown to have better accuracy and computational efficiency than k-means. In many clustering problems, though (such as when classifying georeferenced data for mapping applications), standardization of the clustering methodology, specifically the ability to arrive at the same cluster assignment for every run of the method, i.e. replicability of the methodology, may be of greater significance than any perceived measure of accuracy, especially when the solution is known to be non-unique, as in the case of k-means clustering. Here we propose a simple initial seed selection algorithm for k-means clustering along one attribute that draws initial cluster boundaries along the 'deepest valleys', or greatest gaps, in the dataset. Thus, it incorporates a measure to maximize the distance between consecutive cluster centers, which augments the conventional k-means optimization for minimum distance between a cluster center and its cluster members. Unlike existing initialization methods, no additional parameters or degrees of freedom are introduced to the clustering algorithm. This improves the replicability of cluster assignments by as much as 100% over k-means and k-means++, virtually reducing the variance over different runs to zero, without introducing any additional parameters to the clustering process. Further, the proposed method is more computationally efficient than k-means++ and, in some cases, more accurate.
[ "Fouad Khan", "['Fouad Khan']" ]
stat.ML cs.LG math.PR
null
1604.04939
null
null
http://arxiv.org/pdf/1604.04939v1
2016-04-17T23:13:50Z
2016-04-17T23:13:50Z
Multi-view Learning as a Nonparametric Nonlinear Inter-Battery Factor Analysis
Factor analysis aims to determine latent factors, or traits, which summarize a given data set. Inter-battery factor analysis extends this notion to multiple views of the data. In this paper we show how a nonlinear, nonparametric version of these models can be recovered through the Gaussian process latent variable model. This gives us a flexible formalism for multi-view learning where the latent variables can be used both for exploratory purposes and for learning representations that enable efficient inference for ambiguous estimation tasks. Learning is performed in a Bayesian manner through the formulation of a variational compression scheme which gives a rigorous lower bound on the log likelihood. Our Bayesian framework provides strong regularization during training, allowing the structure of the latent space to be determined efficiently and automatically. We demonstrate this by producing the first (to our knowledge) published results of learning from dozens of views, even when data is scarce. We further show experimental results on several different types of multi-view data sets and for different kinds of tasks, including exploratory data analysis, generation, ambiguity modelling through latent priors and classification.
[ "['Andreas Damianou' 'Neil D. Lawrence' 'Carl Henrik Ek']", "Andreas Damianou, Neil D. Lawrence, Carl Henrik Ek" ]
stat.ML cs.LG
null
1604.04942
null
null
http://arxiv.org/pdf/1604.04942v4
2017-08-07T02:04:52Z
2016-04-17T23:46:04Z
Identifying global optimality for dictionary learning
Learning new representations of input observations in machine learning is often tackled using a factorization of the data. For many such problems, including sparse coding and matrix completion, learning these factorizations can be difficult, both in terms of efficiency and in guaranteeing that the solution is a global minimum. Recently, a general class of objectives, which we term induced dictionary learning models (DLMs), has been introduced that has an induced convex form enabling global optimization. Though attractive theoretically, this induced form is impractical, particularly for large or growing datasets. In this work, we investigate the use of practical alternating minimization algorithms for induced DLMs that ensure convergence to global optima. We characterize the stationary points of these models, and, using these insights, highlight practical choices for the objectives. We then provide theoretical and empirical evidence that alternating minimization, from a random initialization, converges to global minima for a large subclass of induced DLMs. In particular, we take advantage of the existence of the (potentially unknown) convex induced form, to identify when stationary points are global minima for the dictionary learning objective. We then provide an empirical investigation into practical optimization choices for using alternating minimization for induced DLMs, for both batch and stochastic gradient descent.
[ "Lei Le and Martha White", "['Lei Le' 'Martha White']" ]
stat.ML cs.LG
null
1604.04960
null
null
http://arxiv.org/pdf/1604.04960v1
2016-04-18T02:14:07Z
2016-04-18T02:14:07Z
Gaussian Copula Variational Autoencoders for Mixed Data
The variational autoencoder (VAE) is a generative model with continuous latent variables where a pair of probabilistic encoder (bottom-up) and decoder (top-down) is jointly learned by stochastic gradient variational Bayes. We first elaborate Gaussian VAE, approximating the local covariance matrix of the decoder as an outer product of the principal direction at a position determined by a sample drawn from Gaussian distribution. We show that this model, referred to as VAE-ROC, better captures the data manifold, compared to the standard Gaussian VAE where independent multivariate Gaussian was used to model the decoder. Then we extend the VAE-ROC to handle mixed categorical and continuous data. To this end, we employ Gaussian copula to model the local dependency in mixed categorical and continuous data, leading to {\em Gaussian copula variational autoencoder} (GCVAE). As in VAE-ROC, we use the rank-one approximation for the covariance in the Gaussian copula, to capture the local dependency structure in the mixed data. Experiments on various datasets demonstrate the useful behaviour of VAE-ROC and GCVAE, compared to the standard VAE.
[ "['Suwon Suh' 'Seungjin Choi']", "Suwon Suh and Seungjin Choi" ]
cs.CV cs.LG cs.NE
10.1109/TIP.2017.2651399
1604.04970
null
null
http://arxiv.org/abs/1604.04970v3
2016-10-21T07:46:54Z
2016-04-18T03:16:56Z
Deep Aesthetic Quality Assessment with Semantic Information
Human beings often assess the aesthetic quality of an image coupled with the identification of the image's semantic content. This paper addresses the correlation issue between automatic aesthetic quality assessment and semantic recognition. We cast the assessment problem as the main task among a multi-task deep model, and argue that semantic recognition task offers the key to address this problem. Based on convolutional neural networks, we employ a single and simple multi-task framework to efficiently utilize the supervision of aesthetic and semantic labels. A correlation item between these two tasks is further introduced to the framework by incorporating the inter-task relationship learning. This item not only provides some useful insight about the correlation but also improves assessment accuracy of the aesthetic task. Particularly, an effective strategy is developed to keep a balance between the two tasks, which facilitates to optimize the parameters of the framework. Extensive experiments on the challenging AVA dataset and Photo.net dataset validate the importance of semantic recognition in aesthetic quality assessment, and demonstrate that multi-task deep models can discover an effective aesthetic representation to achieve state-of-the-art results.
[ "['Yueying Kao' 'Ran He' 'Kaiqi Huang']", "Yueying Kao, Ran He, Kaiqi Huang" ]
cs.LG cs.AI
null
1604.05024
null
null
http://arxiv.org/pdf/1604.05024v1
2016-04-18T08:01:02Z
2016-04-18T08:01:02Z
Empirical study of PROXTONE and PROXTONE$^+$ for Fast Learning of Large Scale Sparse Models
PROXTONE is a novel and fast method for the optimization of large scale non-smooth convex problems \cite{shi2015large}. In this work, we try to use the PROXTONE method to solve large scale \emph{non-smooth non-convex} problems, for example the training of a sparse deep neural network (sparse DNN) or a sparse convolutional neural network (sparse CNN) for embedded or mobile devices. PROXTONE converges much faster than first order methods, while first order methods make it easy to derive and control the sparseness of the solutions. Thus, in some applications, in order to train sparse models fast, we propose to combine the merits of both methods: we use PROXTONE in the first several epochs to reach the neighborhood of an optimal solution, and then use a first order method to explore the possibility of sparsity in the subsequent training. We call this method PROXTONE plus (PROXTONE$^+$). Both PROXTONE and PROXTONE$^+$ are tested in our experiments, which demonstrate that both methods improve the convergence speed by at least a factor of two on diverse sparse model learning problems, while at the same time reducing the model size to 0.5\% for DNN models. The source code of all the algorithms is available upon request.
[ "['Ziqiang Shi' 'Rujie Liu']", "Ziqiang Shi and Rujie Liu" ]
cs.AI cs.LG
null
1604.05085
null
null
http://arxiv.org/pdf/1604.05085v3
2016-12-12T12:54:36Z
2016-04-18T11:06:32Z
Mastering 2048 with Delayed Temporal Coherence Learning, Multi-Stage Weight Promotion, Redundant Encoding and Carousel Shaping
2048 is an engaging single-player, nondeterministic video puzzle game, which, thanks to the simple rules and hard-to-master gameplay, has gained massive popularity in recent years. As 2048 can be conveniently embedded into the discrete-state Markov decision process framework, we treat it as a testbed for evaluating existing and new methods in reinforcement learning. With the aim of developing a strong 2048 playing program, we employ temporal difference learning with systematic n-tuple networks. We show that this basic method can be significantly improved with temporal coherence learning, a multi-stage function approximator with weight promotion, carousel shaping, and redundant encoding. In addition, we demonstrate how to take advantage of the characteristics of the n-tuple network to improve the algorithmic effectiveness of the learning process, by delaying the (decayed) update and applying lock-free optimistic parallelism to effortlessly take advantage of multiple CPU cores. This way, we were able to develop the best known 2048 playing program to date, which confirms the effectiveness of the introduced methods for discrete-state Markov decision problems.
[ "['Wojciech Jaśkowski']", "Wojciech Ja\\'skowski" ]
cs.LG cs.AI cs.CV cs.NE cs.RO
null
1604.05091
null
null
http://arxiv.org/pdf/1604.05091v2
2016-04-19T14:09:26Z
2016-04-18T11:15:56Z
End-to-End Tracking and Semantic Segmentation Using Recurrent Neural Networks
In this work we present a novel end-to-end framework for tracking and classifying a robot's surroundings in complex, dynamic and only partially observable real-world environments. The approach deploys a recurrent neural network to filter an input stream of raw laser measurements in order to directly infer object locations, along with their identity in both visible and occluded areas. To achieve this we first train the network using unsupervised Deep Tracking, a recently proposed theoretical framework for end-to-end space occupancy prediction. We show that by learning to track on a large amount of unsupervised data, the network creates a rich internal representation of its environment which we in turn exploit through the principle of inductive transfer of knowledge to perform the task of its semantic classification. As a result, we show that only a small amount of labelled data suffices to steer the network towards mastering this additional task. Furthermore we propose a novel recurrent neural network architecture specifically tailored to tracking and semantic classification in real-world robotics applications. We demonstrate the tracking and classification performance of the method on real-world data collected at a busy road junction. Our evaluation shows that the proposed end-to-end framework compares favourably to a state-of-the-art, model-free tracking solution and that it outperforms a conventional one-shot training scheme for semantic classification.
[ "Peter Ondruska, Julie Dequaire, Dominic Zeng Wang, Ingmar Posner", "['Peter Ondruska' 'Julie Dequaire' 'Dominic Zeng Wang' 'Ingmar Posner']" ]
cs.NE cs.LG stat.ML
null
1604.05198
null
null
http://arxiv.org/pdf/1604.05198v1
2016-04-18T15:11:13Z
2016-04-18T15:11:13Z
Locally Imposing Function for Generalized Constraint Neural Networks - A Study on Equality Constraints
This work is a further study of the Generalized Constraint Neural Network (GCNN) model [1], [2]. Two challenges are encountered in the study: embedding any type of prior information and selecting its imposing scheme. The work focuses on the second challenge and studies a new constraint-imposing scheme for equality constraints. A new method called the locally imposing function (LIF) is proposed to provide a local correction to the GCNN prediction function, and therefore falls within the Locally Imposing Scheme (LIS). In comparison, the conventional Lagrange multiplier method is considered a Globally Imposing Scheme (GIS) because its added constraint term exhibits a global impact on its objective function. Two advantages are gained from LIS over GIS. First, LIS enables constraints to fire locally and explicitly on the prediction function, only in the regions of the domain where they are needed. Second, constraints can be implemented directly within a network setting. We attempt to interpret several constraint methods graphically from the viewpoint of the locality principle. Numerical examples confirm the advantages of the proposed method. In solving boundary value problems with Dirichlet and Neumann constraints, the GCNN model with LIF is able to achieve exact satisfaction of the constraints.
[ "Linlin Cao, Ran He, Bao-Gang Hu", "['Linlin Cao' 'Ran He' 'Bao-Gang Hu']" ]
cs.CV cs.LG
null
1604.05242
null
null
http://arxiv.org/pdf/1604.05242v2
2016-04-22T23:03:27Z
2016-04-18T17:05:00Z
Can Boosting with SVM as Weak Learners Help?
Object recognition in images involves identifying objects under partial occlusion, viewpoint changes, varying illumination and cluttered backgrounds. Recent work in object recognition uses machine learning techniques such as SVM-KNN, Local Ensemble Kernel Learning and Multiple Kernel Learning. In this paper, we utilize SVMs as weak learners in AdaBoost. Experiments are done with classifiers like nearest neighbor, k-nearest neighbor, support vector machines, local learning (SVM-KNN) and AdaBoost. The models use scale-invariant descriptors and pyramid histogram of gradient descriptors. AdaBoost is trained with a set of weak classifiers (SVMs), each with a kernel distance function on a different descriptor. Results show that AdaBoost with SVM outperforms the other methods on the object categorization dataset.
[ "['Dinesh Govindaraj']", "Dinesh Govindaraj" ]
cs.LG
10.1109/JSTSP.2016.2592622
1604.05257
null
null
http://arxiv.org/abs/1604.05257v3
2017-08-14T21:10:14Z
2016-04-18T17:28:41Z
Risk-Averse Multi-Armed Bandit Problems under Mean-Variance Measure
The multi-armed bandit problems have been studied mainly under the measure of expected total reward accrued over a horizon of length $T$. In this paper, we address the issue of risk in multi-armed bandit problems and develop parallel results under the measure of mean-variance, a commonly adopted risk measure in economics and mathematical finance. We show that the model-specific regret and the model-independent regret in terms of the mean-variance of the reward process are lower bounded by $\Omega(\log T)$ and $\Omega(T^{2/3})$, respectively. We then show that variations of the UCB policy and the DSEE policy developed for the classic risk-neutral MAB achieve these lower bounds.
[ "['Sattar Vakili' 'Qing Zhao']", "Sattar Vakili, Qing Zhao" ]
stat.ML cs.LG
null
1604.05263
null
null
http://arxiv.org/pdf/1604.05263v1
2016-04-18T17:46:23Z
2016-04-18T17:46:23Z
Chained Gaussian Processes
Gaussian process models are flexible, Bayesian non-parametric approaches to regression. Properties of multivariate Gaussians mean that they can be combined linearly in the manner of additive models and via a link function (like in generalized linear models) to handle non-Gaussian data. However, the link function formalism is restrictive, link functions are always invertible and must convert a parameter of interest to a linear combination of the underlying processes. There are many likelihoods and models where a non-linear combination is more appropriate. We term these more general models Chained Gaussian Processes: the transformation of the GPs to the likelihood parameters will not generally be invertible, and that implies that linearisation would only be possible with multiple (localized) links, i.e. a chain. We develop an approximate inference procedure for Chained GPs that is scalable and applicable to any factorized likelihood. We demonstrate the approximation on a range of likelihood functions.
[ "['Alan D. Saul' 'James Hensman' 'Aki Vehtari' 'Neil D. Lawrence']", "Alan D. Saul, James Hensman, Aki Vehtari, Neil D. Lawrence" ]
cs.LG cs.AI math.PR
null
1604.05280
null
null
http://arxiv.org/pdf/1604.05280v4
2016-09-07T18:43:24Z
2016-04-18T19:04:59Z
Asymptotic Convergence in Online Learning with Unbounded Delays
We study the problem of predicting the results of computations that are too expensive to run, via the observation of the results of smaller computations. We model this as an online learning problem with delayed feedback, where the length of the delay is unbounded, which we study mainly in a stochastic setting. We show that in this setting, consistency is not possible in general, and that optimal forecasters might not have average regret going to zero. However, it is still possible to give algorithms that converge asymptotically to Bayes-optimal predictions, by evaluating forecasters on specific sparse independent subsequences of their predictions. We give an algorithm that does this, which converges asymptotically on good behavior, and give very weak bounds on how long it takes to converge. We then relate our results back to the problem of predicting large computations in a deterministic setting.
[ "Scott Garrabrant, Nate Soares, Jessica Taylor", "['Scott Garrabrant' 'Nate Soares' 'Jessica Taylor']" ]
cs.AI cs.LG math.PR
null
1604.05288
null
null
http://arxiv.org/pdf/1604.05288v3
2016-10-07T17:00:38Z
2016-04-18T19:37:46Z
Inductive Coherence
While probability theory is normally applied to external environments, there has been some recent interest in probabilistic modeling of the outputs of computations that are too expensive to run. Since mathematical logic is a powerful tool for reasoning about computer programs, we consider this problem from the perspective of integrating probability and logic. Recent work on assigning probabilities to mathematical statements has used the concept of coherent distributions, which satisfy logical constraints such as the probability of a sentence and its negation summing to one. Although there are algorithms which converge to a coherent probability distribution in the limit, this yields only weak guarantees about finite approximations of these distributions. In our setting, this is a significant limitation: Coherent distributions assign probability one to all statements provable in a specific logical theory, such as Peano Arithmetic, which can prove what the output of any terminating computation is; thus, a coherent distribution must assign probability one to the output of any terminating computation. To model uncertainty about computations, we propose to work with approximations to coherent distributions. We introduce inductive coherence, a strengthening of coherence that provides appropriate constraints on finite approximations, and propose an algorithm which satisfies this criterion.
[ "['Scott Garrabrant' 'Benya Fallenstein' 'Abram Demski' 'Nate Soares']", "Scott Garrabrant, Benya Fallenstein, Abram Demski, Nate Soares" ]
cs.LG cs.IT math.IT stat.ML
null
1604.05307
null
null
http://arxiv.org/pdf/1604.05307v1
2016-04-18T17:09:48Z
2016-04-18T17:09:48Z
Learning Sparse Additive Models with Interactions in High Dimensions
A function $f: \mathbb{R}^d \rightarrow \mathbb{R}$ is referred to as a Sparse Additive Model (SPAM), if it is of the form $f(\mathbf{x}) = \sum_{l \in \mathcal{S}}\phi_{l}(x_l)$, where $\mathcal{S} \subset [d]$, $|\mathcal{S}| \ll d$. Assuming $\phi_l$'s and $\mathcal{S}$ to be unknown, the problem of estimating $f$ from its samples has been studied extensively. In this work, we consider a generalized SPAM, allowing for second order interaction terms. For some $\mathcal{S}_1 \subset [d], \mathcal{S}_2 \subset {[d] \choose 2}$, the function $f$ is assumed to be of the form: $$f(\mathbf{x}) = \sum_{p \in \mathcal{S}_1}\phi_{p} (x_p) + \sum_{(l,l^{\prime}) \in \mathcal{S}_2}\phi_{(l,l^{\prime})} (x_{l},x_{l^{\prime}}).$$ Assuming $\phi_{p},\phi_{(l,l^{\prime})}$, $\mathcal{S}_1$ and, $\mathcal{S}_2$ to be unknown, we provide a randomized algorithm that queries $f$ and exactly recovers $\mathcal{S}_1,\mathcal{S}_2$. Consequently, this also enables us to estimate the underlying $\phi_p, \phi_{(l,l^{\prime})}$. We derive sample complexity bounds for our scheme and also extend our analysis to include the situation where the queries are corrupted with noise -- either stochastic, or arbitrary but bounded. Lastly, we provide simulation results on synthetic data, that validate our theoretical findings.
[ "['Hemant Tyagi' 'Anastasios Kyrillidis' 'Bernd Gärtner' 'Andreas Krause']", "Hemant Tyagi, Anastasios Kyrillidis, Bernd G\\\"artner, Andreas Krause" ]
stat.ML cs.LG cs.NE
null
1604.05377
null
null
http://arxiv.org/pdf/1604.05377v1
2016-04-18T23:18:23Z
2016-04-18T23:18:23Z
Churn analysis using deep convolutional neural networks and autoencoders
Customer temporal behavioral data was represented as images in order to perform churn prediction by leveraging deep learning architectures prominent in image classification. Supervised learning was performed on labeled data of over 6 million customers using deep convolutional neural networks, which achieved an AUC of 0.743 on the test dataset using no more than 12 temporal features for each customer. Unsupervised learning was conducted using autoencoders to better understand the reasons for customer churn. Images that maximally activate the hidden units of an autoencoder trained with churned customers reveal ample opportunities for action to be taken to prevent churn among strong-data, no-voice users.
[ "Artit Wangperawong, Cyrille Brun, Olav Laudy, Rujikorn Pavasuthipaisit", "['Artit Wangperawong' 'Cyrille Brun' 'Olav Laudy'\n 'Rujikorn Pavasuthipaisit']" ]
cs.IT cs.LG math.IT
10.14569/IJACSA.2015.060632
1604.05393
null
null
http://arxiv.org/abs/1604.05393v1
2016-04-19T00:59:54Z
2016-04-19T00:59:54Z
An Adaptive Learning Mechanism for Selection of Increasingly More Complex Systems
Recently it has been demonstrated that causal entropic forces can lead to the emergence of complex phenomena associated with human cognitive niche such as tool use and social cooperation. Here I show that even more fundamental traits associated with human cognition such as 'self-awareness' can easily be demonstrated to be arising out of merely a selection for 'better regulators'; i.e. systems which respond comparatively better to threats to their existence which are internal to themselves. A simple model demonstrates how indeed the average self-awareness for a universe of systems continues to rise as less self-aware systems are eliminated. The model also demonstrates however that the maximum attainable self-awareness for any system is limited by the plasticity and energy availability for that typology of systems. I argue that this rise in self-awareness may be the reason why systems tend towards greater complexity.
[ "Fouad Khan", "['Fouad Khan']" ]
cs.CV cs.LG stat.ML
10.1109/BTAS.2016.7791205
1604.05417
null
null
http://arxiv.org/abs/1604.05417v3
2017-01-18T03:10:44Z
2016-04-19T03:29:56Z
Triplet Probabilistic Embedding for Face Verification and Clustering
Despite significant progress made over the past twenty five years, unconstrained face verification remains a challenging problem. This paper proposes an approach that couples a deep CNN-based approach with a low-dimensional discriminative embedding learned using triplet probability constraints to solve the unconstrained face verification problem. Aside from yielding performance improvements, this embedding provides significant advantages in terms of memory and for post-processing operations like subject specific clustering. Experiments on the challenging IJB-A dataset show that the proposed algorithm performs comparably or better than the state of the art methods in verification and identification metrics, while requiring much less training data and training time. The superior performance of the proposed method on the CFP dataset shows that the representation learned by our deep CNN is robust to extreme pose variation. Furthermore, we demonstrate the robustness of the deep features to challenges including age, pose, blur and clutter by performing simple clustering experiments on both IJB-A and LFW datasets.
[ "['Swami Sankaranarayanan' 'Azadeh Alavi' 'Carlos Castillo'\n 'Rama Chellappa']", "Swami Sankaranarayanan, Azadeh Alavi, Carlos Castillo, Rama Chellappa" ]
cs.LG
null
1604.05429
null
null
http://arxiv.org/pdf/1604.05429v1
2016-04-19T04:31:55Z
2016-04-19T04:31:55Z
Comparative Study of Instance Based Learning and Back Propagation for Classification Problems
The paper presents a comparative study of the performance of Back Propagation and the Instance Based Learning algorithm for classification tasks. The study is carried out through a series of experiments with all possible combinations of parameter values for the algorithms under evaluation. The algorithms' classification accuracy is compared over a range of datasets, and measurements like cross validation, Kappa statistics, root mean squared value and true positive vs. false positive rate have been used to evaluate their performance. Along with the performance comparison, techniques for handling missing values have also been compared, including mean or mode replacement and multiple imputation. The results show that parameter adjustment plays a vital role in improving an algorithm's accuracy, and therefore Back Propagation has shown better results compared to Instance Based Learning. Furthermore, the problem of missing values was better handled by the multiple imputation method, which is, however, not suitable for small amounts of data.
[ "Nadia Kanwal and Erkan Bostanci", "['Nadia Kanwal' 'Erkan Bostanci']" ]
stat.ML cs.LG
null
1604.05449
null
null
http://arxiv.org/pdf/1604.05449v1
2016-04-19T07:12:29Z
2016-04-19T07:12:29Z
Streaming Label Learning for Modeling Labels on the Fly
It is challenging to handle a large volume of labels in multi-label learning. However, existing approaches explicitly or implicitly assume that all the labels in the learning process are given, which could be easily violated in changing environments. In this paper, we define and study streaming label learning (SLL), i.e., labels arrive on the fly, to model newly arrived labels with the help of the knowledge learned from past labels. The core of SLL is to explore and exploit the relationships between new labels and past labels and then inherit the relationship into hypotheses of labels to boost the performance of new classifiers. Specifically, we use label self-representation to model the label relationship, and SLL is divided into two steps: a regression problem and an empirical risk minimization (ERM) problem. Both problems are simple and can be efficiently solved. We further show that SLL can generate a tighter generalization error bound for new labels than the general ERM framework with trace norm or Frobenius norm regularization. Finally, we carry out extensive experiments on various benchmark datasets to validate the new setting. The results show that SLL can effectively handle constantly emerging new labels and provides excellent classification performance.
[ "Shan You, Chang Xu, Yunhe Wang, Chao Xu and Dacheng Tao", "['Shan You' 'Chang Xu' 'Yunhe Wang' 'Chao Xu' 'Dacheng Tao']" ]
cs.IR cs.AI cs.CL cs.LG
null
1604.05468
null
null
http://arxiv.org/pdf/1604.05468v1
2016-04-19T08:31:23Z
2016-04-19T08:31:23Z
Understanding Rating Behaviour and Predicting Ratings by Identifying Representative Users
Online user reviews describing various products and services are now abundant on the web. While the information conveyed through review texts and ratings is easily comprehensible, there is a wealth of hidden information in them that is not immediately obvious. In this study, we unlock this hidden value behind user reviews to understand the various dimensions along which users rate products. We learn a set of users that represent each of these dimensions and use their ratings to predict product ratings. Specifically, we work with restaurant reviews to identify users whose ratings are influenced by dimensions like 'Service', 'Atmosphere' etc. in order to predict restaurant ratings and understand the variation in rating behaviour across different cuisines. While previous approaches to obtaining product ratings require either a large number of user ratings or a few review texts, we show that it is possible to predict ratings with few user ratings and no review text. Our experiments show that our approach outperforms other conventional methods by 16-27% in terms of RMSE.
[ "Rahul Kamath, Masanao Ochi, Yutaka Matsuo", "['Rahul Kamath' 'Masanao Ochi' 'Yutaka Matsuo']" ]
cs.DS cs.CR cs.LG
10.1145/2902251.2902296
1604.05590
null
null
http://arxiv.org/abs/1604.05590v2
2017-03-13T15:51:54Z
2016-04-19T14:27:32Z
Locating a Small Cluster Privately
We present a new algorithm for locating a small cluster of points with differential privacy [Dwork, McSherry, Nissim, and Smith, 2006]. Our algorithm has implications to private data exploration, clustering, and removal of outliers. Furthermore, we use it to significantly relax the requirements of the sample and aggregate technique [Nissim, Raskhodnikova, and Smith, 2007], which allows compiling of "off the shelf" (non-private) analyses into analyses that preserve differential privacy.
[ "Kobbi Nissim, Uri Stemmer, Salil Vadhan", "['Kobbi Nissim' 'Uri Stemmer' 'Salil Vadhan']" ]
null
null
1604.05753
null
null
http://arxiv.org/pdf/1604.05753v1
2016-04-19T21:22:29Z
2016-04-19T21:22:29Z
Sketching and Neural Networks
High-dimensional sparse data present computational and statistical challenges for supervised learning. We propose compact linear sketches for reducing the dimensionality of the input, followed by a single layer neural network. We show that any sparse polynomial function can be computed, on nearly all sparse binary vectors, by a single layer neural network that takes a compact sketch of the vector as input. Consequently, when a set of sparse binary vectors is approximately separable using a sparse polynomial, there exists a single-layer neural network that takes a short sketch as input and correctly classifies nearly all the points. Previous work has proposed using sketches to reduce dimensionality while preserving the hypothesis class. However, the sketch size has an exponential dependence on the degree in the case of polynomial classifiers. In stark contrast, our approach of using improper learning, using a larger hypothesis class allows the sketch size to have a logarithmic dependence on the degree. Even in the linear case, our approach allows us to improve on the pesky $O(1/\gamma^2)$ dependence of random projections, on the margin $\gamma$. We empirically show that our approach leads to more compact neural networks than related methods such as feature hashing at equal or better performance.
[ "['Amit Daniely' 'Nevena Lazic' 'Yoram Singer' 'Kunal Talwar']" ]
stat.ML cs.LG
null
1604.05819
null
null
http://arxiv.org/pdf/1604.05819v1
2016-04-20T04:59:08Z
2016-04-20T04:59:08Z
Trading-Off Cost of Deployment Versus Accuracy in Learning Predictive Models
Predictive models are finding an increasing number of applications in many industries. As a result, a practical means for trading-off the cost of deploying a model versus its effectiveness is needed. Our work is motivated by risk prediction problems in healthcare. Cost-structures in domains such as healthcare are quite complex, posing a significant challenge to existing approaches. We propose a novel framework for designing cost-sensitive structured regularizers that is suitable for problems with complex cost dependencies. We draw upon a surprising connection to boolean circuits. In particular, we represent the problem costs as a multi-layer boolean circuit, and then use properties of boolean circuits to define an extended feature vector and a group regularizer that exactly captures the underlying cost structure. The resulting regularizer may then be combined with a fidelity function to perform model prediction, for example. For the challenging real-world application of risk prediction for sepsis in intensive care units, the use of our regularizer leads to models that are in harmony with the underlying cost structure and thus provide an excellent prediction accuracy versus cost tradeoff.
[ "['Daniel P. Robinson' 'Suchi Saria']", "Daniel P. Robinson and Suchi Saria" ]
cs.LG
null
1604.05993
null
null
http://arxiv.org/pdf/1604.05993v1
2016-04-20T15:07:00Z
2016-04-20T15:07:00Z
Greedy Criterion in Orthogonal Greedy Learning
Orthogonal greedy learning (OGL) is a stepwise learning scheme that starts by selecting a new atom from a specified dictionary via the steepest gradient descent (SGD) and then builds the estimator through orthogonal projection. In this paper, we find that SGD is not the unique greedy criterion and introduce a new greedy criterion, called the "$\delta$-greedy threshold", for learning. Based on the new greedy criterion, we derive an adaptive termination rule for OGL. Our theoretical study shows that the new learning scheme can achieve the existing (almost) optimal learning rate of OGL. Extensive numerical experiments are provided to show that the new scheme can achieve almost optimal generalization performance while requiring less computation than OGL.
[ "['Lin Xu' 'Shaobo Lin' 'Jinshan Zeng' 'Xia Liu' 'Zongben Xu']", "Lin Xu, Shaobo Lin, Jinshan Zeng, Xia Liu, Zongben Xu" ]
stat.ML cs.AI cs.LG
null
1604.06020
null
null
http://arxiv.org/pdf/1604.06020v1
2016-04-20T16:22:01Z
2016-04-20T16:22:01Z
Constructive Preference Elicitation by Setwise Max-margin Learning
In this paper we propose an approach to preference elicitation that is suitable for large configuration spaces beyond the reach of existing state-of-the-art approaches. Our setwise max-margin method can be viewed as a generalization of max-margin learning to sets, and can produce a set of "diverse" items that can be used to ask informative queries to the user. Moreover, the approach can encourage sparsity in the parameter space, in order to favor the assessment of utility towards combinations of weights that concentrate on just a few features. We present a mixed integer linear programming formulation and show how our approach compares favourably with Bayesian preference elicitation alternatives and easily scales to realistic datasets.
[ "['Stefano Teso' 'Andrea Passerini' 'Paolo Viappiani']", "Stefano Teso, Andrea Passerini, Paolo Viappiani" ]
cs.LG cs.AI cs.CV cs.NE stat.ML
null
1604.06057
null
null
http://arxiv.org/pdf/1604.06057v2
2016-05-31T14:45:58Z
2016-04-20T18:47:48Z
Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation
Learning goal-directed behavior in environments with sparse feedback is a major challenge for reinforcement learning algorithms. The primary difficulty arises due to insufficient exploration, resulting in an agent being unable to learn robust value functions. Intrinsically motivated agents can explore new behavior for its own sake rather than to directly solve problems. Such intrinsic behaviors could eventually help the agent solve tasks posed by the environment. We present hierarchical-DQN (h-DQN), a framework to integrate hierarchical value functions, operating at different temporal scales, with intrinsically motivated deep reinforcement learning. A top-level value function learns a policy over intrinsic goals, and a lower-level function learns a policy over atomic actions to satisfy the given goals. h-DQN allows for flexible goal specifications, such as functions over entities and relations. This provides an efficient space for exploration in complicated environments. We demonstrate the strength of our approach on two problems with very sparse, delayed feedback: (1) a complex discrete stochastic decision process, and (2) the classic ATARI game `Montezuma's Revenge'.
[ "['Tejas D. Kulkarni' 'Karthik R. Narasimhan' 'Ardavan Saeedi'\n 'Joshua B. Tenenbaum']", "Tejas D. Kulkarni, Karthik R. Narasimhan, Ardavan Saeedi, Joshua B.\n Tenenbaum" ]
cs.LG
null
1604.06133
null
null
http://arxiv.org/pdf/1604.06133v1
2016-04-20T22:16:34Z
2016-04-20T22:16:34Z
Embedded all relevant feature selection with Random Ferns
Many machine learning methods can produce variable importance scores expressing the usability of each feature in the context of the produced model; those scores on their own are not sufficient to perform feature selection, especially when an all relevant selection is required. Although there are wrapper methods aiming to solve this problem, they introduce a substantial increase in the required computational effort. In this paper I investigate the idea of incorporating all relevant selection within the training process by producing importance scores for implicitly generated shadows, attributes irrelevant by design. I propose and evaluate such a method in the context of the random ferns classifier. Experiment results confirm the effectiveness of this approach, although they show that the fully stochastic nature of random ferns limits its applicability either to small dimensions or as a part of a broader feature selection procedure.
[ "['Miron Bartosz Kursa']", "Miron Bartosz Kursa" ]
cs.LG
null
1604.06153
null
null
http://arxiv.org/pdf/1604.06153v1
2016-04-21T01:29:56Z
2016-04-21T01:29:56Z
Nonextensive information theoretical machine
In this paper, we propose a new discriminative model named \emph{nonextensive information theoretical machine (NITM)} based on nonextensive generalization of Shannon information theory. In NITM, weight parameters are treated as random variables. Tsallis divergence is used to regularize the distribution of weight parameters, and the maximum unnormalized Tsallis entropy distribution is used to evaluate the fitting effect. On the one hand, it is shown that some well-known margin-based loss functions such as $\ell_{0/1}$ loss, hinge loss, squared hinge loss and exponential loss can be unified by unnormalized Tsallis entropy. On the other hand, Gaussian prior regularization is generalized to Student-t prior regularization with similar computational complexity. The model can be solved efficiently by gradient-based convex optimization and its performance is illustrated on standard datasets.
[ "['Chaobing Song' 'Shu-Tao Xia']", "Chaobing Song, Shu-Tao Xia" ]
cs.LG cs.CV cs.NE
null
1604.06154
null
null
http://arxiv.org/pdf/1604.06154v1
2016-04-21T01:47:33Z
2016-04-21T01:47:33Z
Deep Adaptive Network: An Efficient Deep Neural Network with Sparse Binary Connections
Deep neural networks are state-of-the-art models for understanding the content of images, video and raw input data. However, implementing a deep neural network in embedded systems is a challenging task, because a typical deep neural network, such as a Deep Belief Network using 128x128 images as input, could exhaust gigabytes of memory and create bandwidth and computing bottlenecks. To address this challenge, this paper presents a hardware-oriented deep learning algorithm, named the Deep Adaptive Network, which attempts to exploit the sparsity in the neural connections. The proposed method adaptively reduces the weights associated with negligible features to zero, leading to a sparse feedforward network architecture. Furthermore, since the small proportion of important weights are significantly larger than zero, they can be robustly thresholded and represented using single-bit integers (-1 and +1), leading to implementations of deep neural networks with sparse and binary connections. Our experiments showed that, for the application of recognizing MNIST handwritten digits, the features extracted by a two-layer Deep Adaptive Network with about 25% reserved important connections achieved 97.2% classification accuracy, which was almost the same as the standard Deep Belief Network (97.3%). Furthermore, for efficient hardware implementations, the sparse-and-binary-weighted deep neural network could save about 99.3% memory and 99.9% computation units without significant loss of classification accuracy for pattern recognition applications.
[ "Xichuan Zhou, Shengli Li, Kai Qin, Kunping Li, Fang Tang, Shengdong\n Hu, Shujun Liu, Zhi Lin", "['Xichuan Zhou' 'Shengli Li' 'Kai Qin' 'Kunping Li' 'Fang Tang'\n 'Shengdong Hu' 'Shujun Liu' 'Zhi Lin']" ]
cs.LG
null
1604.06162
null
null
http://arxiv.org/pdf/1604.06162v3
2016-09-28T05:19:33Z
2016-04-21T02:39:22Z
The Extended Littlestone's Dimension for Learning with Mistakes and Abstentions
This paper studies classification with an abstention option in the online setting. In this setting, examples arrive sequentially, the learner is given a hypothesis class $\mathcal H$, and the goal of the learner is to either predict a label on each example or abstain, while ensuring that it does not make more than a pre-specified number of mistakes when it does predict a label. Previous work on this problem has left open two main challenges. First, not much is known about the optimality of algorithms, and in particular, about what an optimal algorithmic strategy is for any individual hypothesis class. Second, while the realizable case has been studied, the more realistic non-realizable scenario is not well-understood. In this paper, we address both challenges. First, we provide a novel measure, called the Extended Littlestone's Dimension, which captures the number of abstentions needed to ensure a certain number of mistakes. Second, we explore the non-realizable case, and provide upper and lower bounds on the number of abstentions required by an algorithm to guarantee a specified number of mistakes.
[ "['Chicheng Zhang' 'Kamalika Chaudhuri']", "Chicheng Zhang and Kamalika Chaudhuri" ]
cs.LG
null
1604.06174
null
null
http://arxiv.org/pdf/1604.06174v2
2016-04-22T19:21:36Z
2016-04-21T04:15:27Z
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train an n-layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
[ "Tianqi Chen and Bing Xu and Chiyuan Zhang and Carlos Guestrin", "['Tianqi Chen' 'Bing Xu' 'Chiyuan Zhang' 'Carlos Guestrin']" ]
cs.NE cs.LG cs.SD
null
1604.06338
null
null
http://arxiv.org/pdf/1604.06338v2
2016-06-22T13:01:16Z
2016-04-21T14:51:43Z
Robust Audio Event Recognition with 1-Max Pooling Convolutional Neural Networks
We present in this paper a simple, yet efficient convolutional neural network (CNN) architecture for robust audio event recognition. In contrast to deep CNN architectures with multiple convolutional and pooling layers topped with multiple fully connected layers, the proposed network consists of only three layers: convolutional, pooling, and softmax layer. Two further features distinguish it from the deep architectures that have been proposed for the task: varying-size convolutional filters at the convolutional layer and a 1-max pooling scheme at the pooling layer. Intuitively, the network tends to select the most discriminative features from the whole audio signals for recognition. Our proposed CNN not only shows state-of-the-art performance on the standard task of robust audio event recognition but also outperforms other deep architectures by up to 4.5% in terms of recognition accuracy, which is equivalent to 76.3% relative error reduction.
[ "['Huy Phan' 'Lars Hertel' 'Marco Maass' 'Alfred Mertins']", "Huy Phan, Lars Hertel, Marco Maass, Alfred Mertins" ]
cs.DS cs.IT cs.LG math.IT math.ST stat.ML stat.TH
null
1604.06443
null
null
http://arxiv.org/pdf/1604.06443v2
2019-03-15T02:31:22Z
2016-04-21T19:54:24Z
Robust Estimators in High Dimensions without the Computational Intractability
We study high-dimensional distribution learning in an agnostic setting where an adversary is allowed to arbitrarily corrupt an $\varepsilon$-fraction of the samples. Such questions have a rich history spanning statistics, machine learning and theoretical computer science. Even in the most basic settings, the only known approaches are either computationally inefficient or lose dimension-dependent factors in their error guarantees. This raises the following question: is high-dimensional agnostic distribution learning even possible, algorithmically? In this work, we obtain the first computationally efficient algorithms with dimension-independent error guarantees for agnostically learning several fundamental classes of high-dimensional distributions: (1) a single Gaussian, (2) a product distribution on the hypercube, (3) mixtures of two product distributions (under a natural balancedness condition), and (4) mixtures of spherical Gaussians. Our algorithms achieve error that is independent of the dimension, and in many cases scales nearly-linearly with the fraction of adversarially corrupted samples. Moreover, we develop a general recipe for detecting and correcting corruptions in high-dimensions, that may be applicable to many other problems.
[ "Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Jerry Li, Ankur\n Moitra, Alistair Stewart", "['Ilias Diakonikolas' 'Gautam Kamath' 'Daniel Kane' 'Jerry Li'\n 'Ankur Moitra' 'Alistair Stewart']" ]
math.ST cs.LG stat.TH
10.1109/TNSE.2017.2703102
1604.06474
null
null
http://arxiv.org/abs/1604.06474v1
2016-04-21T20:13:54Z
2016-04-21T20:13:54Z
On Detection and Structural Reconstruction of Small-World Random Networks
In this paper, we study detection and fast reconstruction of the celebrated Watts-Strogatz (WS) small-world random graph model \citep{watts1998collective} which aims to describe real-world complex networks that exhibit both high clustering and short average path length properties. The WS model with neighborhood size $k$ and rewiring probability $\beta$ can be viewed as a continuous interpolation between a deterministic ring lattice graph and the Erd\H{o}s-R\'{e}nyi random graph. We study both the computational and statistical aspects of detecting the deterministic ring lattice structure (or local geographical links, strong ties) in the presence of random connections (or long range links, weak ties), and of recovering it. The phase diagram in terms of $(k,\beta)$ is partitioned into several regions according to the difficulty of the problem. We propose distinct methods for the various regions.
[ "T. Tony Cai, Tengyuan Liang and Alexander Rakhlin", "['T. Tony Cai' 'Tengyuan Liang' 'Alexander Rakhlin']" ]
stat.ML cs.LG
null
1604.06498
null
null
http://arxiv.org/pdf/1604.06498v3
2017-05-09T00:50:38Z
2016-04-21T21:34:34Z
Stabilized Sparse Online Learning for Sparse Data
Stochastic gradient descent (SGD) is commonly used for optimization in large-scale machine learning problems. Langford et al. (2009) introduce a sparse online learning method to induce sparsity via truncated gradient. With high-dimensional sparse data, however, the method suffers from slow convergence and high variance due to the heterogeneity in feature sparsity. To mitigate this issue, we introduce a stabilized truncated stochastic gradient descent algorithm. We employ a soft-thresholding scheme on the weight vector where the imposed shrinkage is adaptive to the amount of information available in each feature. The variability in the resulting sparse weight vector is further controlled by stability selection integrated with the informative truncation. To facilitate better convergence, we adopt an annealing strategy on the truncation rate, which leads to a balanced trade-off between exploration and exploitation in learning a sparse weight vector. Numerical experiments show that our algorithm compares favorably with the original algorithm in terms of prediction accuracy, achieved sparsity and stability.
[ "['Yuting Ma' 'Tian Zheng']", "Yuting Ma, Tian Zheng" ]
cs.LG stat.ML
null
1604.06518
null
null
http://arxiv.org/pdf/1604.06518v4
2017-05-28T01:26:48Z
2016-04-22T01:57:01Z
Approximation Vector Machines for Large-scale Online Learning
One of the most challenging problems in kernel online learning is to bound the model size and to promote the model sparsity. Sparse models not only improve computation and memory usage, but also enhance the generalization capacity, a principle that concurs with the law of parsimony. However, inappropriate sparsity modeling may also significantly degrade the performance. In this paper, we propose Approximation Vector Machine (AVM), a model that can simultaneously encourage sparsity and safeguard against the risk of compromising the performance. When an incoming instance arrives, we approximate this instance by one of its neighbors whose distance to it is less than a predefined threshold. Our key intuition is that since the newly seen instance is expressed by its nearby neighbor, the optimal performance can be analytically formulated and maintained. We develop theoretical foundations to support this intuition and further establish an analysis to characterize the gap between the approximation and optimal solutions. This gap crucially depends on the frequency of approximation and the predefined threshold. We perform the convergence analysis for a wide spectrum of loss functions including Hinge, smooth Hinge, and Logistic for the classification task, and $l_1$, $l_2$, and $\epsilon$-insensitive for the regression task. We conducted extensive experiments for the classification task in batch and online modes, and the regression task in online mode over several benchmark datasets. The results show that our proposed AVM achieved predictive performance comparable to current state-of-the-art methods while simultaneously achieving significant computational speed-up due to the ability of the proposed AVM to maintain the model size.
[ "Trung Le and Tu Dinh Nguyen and Vu Nguyen and Dinh Phung", "['Trung Le' 'Tu Dinh Nguyen' 'Vu Nguyen' 'Dinh Phung']" ]
cs.CL cs.LG cs.NE
null
1604.06529
null
null
http://arxiv.org/pdf/1604.06529v2
2016-06-30T04:23:07Z
2016-04-22T03:20:24Z
Dependency Parsing with LSTMs: An Empirical Evaluation
We propose a transition-based dependency parser using Recurrent Neural Networks with Long Short-Term Memory (LSTM) units. This extends the feedforward neural network parser of Chen and Manning (2014) and enables modelling of entire sequences of shift/reduce transition decisions. On the Google Web Treebank, our LSTM parser is competitive with the best feedforward parser on overall accuracy and notably achieves more than 3% improvement for long-range dependencies, which has proved difficult for previous transition-based parsers due to error propagation and limited context information. Our findings additionally suggest that dropout regularisation on the embedding layer is crucial to improve the LSTM's generalisation.
[ "Adhiguna Kuncoro, Yuichiro Sawai, Kevin Duh, Yuji Matsumoto", "['Adhiguna Kuncoro' 'Yuichiro Sawai' 'Kevin Duh' 'Yuji Matsumoto']" ]
cs.SI cs.LG
null
1604.06577
null
null
http://arxiv.org/pdf/1604.06577v1
2016-04-22T08:59:43Z
2016-04-22T08:59:43Z
CT-Mapper: Mapping Sparse Multimodal Cellular Trajectories using a Multilayer Transportation Network
Mobile phone data have recently become an attractive source of information about mobility behavior. Since cell phone data can be captured in a passive way for a large user population, they can be harnessed to collect well-sampled mobility information. In this paper, we propose CT-Mapper, an unsupervised algorithm that enables the mapping of mobile phone traces over a multimodal transport network. One of the main strengths of CT-Mapper is its capability to map noisy sparse cellular multimodal trajectories over a multilayer transportation network whose layers have different physical properties, rather than only mapping trajectories associated with a single layer. Such a network is modeled by a large multilayer graph in which the nodes correspond to metro/train stations or road intersections and edges correspond to connections between them. The mapping problem is modeled by an unsupervised HMM where the observations correspond to sparse user mobile trajectories and the hidden states to the multilayer graph nodes. The HMM is unsupervised as the transition and emission probabilities are inferred using respectively the physical transportation properties and the information on the spatial coverage of antenna base stations. To evaluate CT-Mapper we collected cellular traces with their corresponding GPS trajectories for a group of volunteer users in Paris and vicinity (France). We show that CT-Mapper is able to accurately retrieve the real cell phone user paths despite the sparsity of the observed trace trajectories. Furthermore, our transition probability model is up to 20% more accurate than other naive models.
[ "['Fereshteh Asgari' 'Alexis Sultan' 'Haoyi Xiong' 'Vincent Gauthier'\n 'Mounim El-Yacoubi']", "Fereshteh Asgari and Alexis Sultan and Haoyi Xiong and Vincent\n Gauthier and Mounim El-Yacoubi" ]
cs.LG
null
1604.06602
null
null
http://arxiv.org/pdf/1604.06602v7
2018-07-07T12:09:52Z
2016-04-22T10:53:48Z
Clustering with Missing Features: A Penalized Dissimilarity Measure based approach
Many real-world clustering problems are plagued by incomplete data characterized by missing or absent features for some or all of the data instances. Traditional clustering methods cannot be directly applied to such data without preprocessing by imputation or marginalization techniques. In this article, we overcome this drawback by utilizing a penalized dissimilarity measure which we refer to as the Feature Weighted Penalty based Dissimilarity (FWPD). Using the FWPD measure, we modify the traditional k-means clustering algorithm and the standard hierarchical agglomerative clustering algorithms so as to make them directly applicable to datasets with missing features. We present time complexity analyses for these new techniques and also undertake a detailed theoretical analysis showing that the new FWPD based k-means algorithm converges to a local optimum within a finite number of iterations. We also present a detailed method for simulating random as well as feature dependent missingness. We report extensive experiments on various benchmark datasets for different types of missingness showing that the proposed clustering techniques have generally better results compared to some of the most well-known imputation methods which are commonly used to handle such incomplete data. We append a possible extension of the proposed dissimilarity measure to the case of absent features (where the unobserved features are known to be undefined).
[ "['Shounak Datta' 'Supritam Bhattacharjee' 'Swagatam Das']", "Shounak Datta, Supritam Bhattacharjee, Swagatam Das" ]
cs.LG cs.CV stat.ML
null
1604.06626
null
null
http://arxiv.org/pdf/1604.06626v2
2016-04-26T06:55:06Z
2016-04-22T12:32:37Z
The Mean Partition Theorem of Consensus Clustering
To devise efficient solutions for approximating a mean partition in consensus clustering, Dimitriadou et al. [3] presented a necessary condition of optimality for a consensus function based on least square distances. We show that their result is pivotal for deriving interesting properties of consensus clustering beyond optimization. For this, we present the necessary condition of optimality in a slightly stronger form in terms of the Mean Partition Theorem and extend it to the Expected Partition Theorem. To underpin its versatility, we show three examples that apply the Mean Partition Theorem: (i) equivalence of the mean partition and optimal multiple alignment, (ii) construction of profiles and motifs, and (iii) relationship between consensus clustering and cluster stability.
[ "['Brijnesh J. Jain']", "Brijnesh J. Jain" ]
cs.CL cs.AI cs.LG cs.NE
null
1604.06635
null
null
http://arxiv.org/pdf/1604.06635v1
2016-04-22T12:51:11Z
2016-04-22T12:51:11Z
Bridging LSTM Architecture and the Neural Dynamics during Reading
Recently, the long short-term memory neural network (LSTM) has attracted wide interest due to its success in many tasks. The LSTM architecture consists of a memory cell and three gates, which resembles the neuronal networks in the brain. However, evidence for the cognitive plausibility of the LSTM architecture, as well as for its working mechanism, is still lacking. In this paper, we study the cognitive plausibility of LSTM by aligning its internal architecture with the brain activity observed via fMRI when the subjects read a story. Experiment results show that the artificial memory vector in LSTM can accurately predict the observed sequential brain activities, indicating a correlation between the LSTM architecture and the cognitive process of story reading.
[ "Peng Qian, Xipeng Qiu, Xuanjing Huang", "['Peng Qian' 'Xipeng Qiu' 'Xuanjing Huang']" ]
cs.NE cs.LG stat.ML
null
1604.06730
null
null
http://arxiv.org/pdf/1604.06730v1
2016-04-22T16:20:29Z
2016-04-22T16:20:29Z
Developing an ICU scoring system with interaction terms using a genetic algorithm
ICU mortality scoring systems attempt to predict patient mortality using predictive models with various clinical predictors. Examples of such systems are APACHE, SAPS and MPM. However, most such scoring systems do not actively look for and include interaction terms, despite physicians intuitively taking such interactions into account when making a diagnosis. One barrier to including such terms in predictive models is the difficulty of using most variable selection methods in high-dimensional datasets. A genetic algorithm framework for variable selection with logistic regression models is used to search for two-way interaction terms in a clinical dataset of adult ICU patients, with separate models being built for each category of diagnosis upon admittance to the ICU. The models had good discrimination across all categories, with a weighted average AUC of 0.84 (>0.90 for several categories) and the genetic algorithm was able to find several significant interaction terms, which may be able to provide greater insight into mortality prediction for health practitioners. The GA-selected models showed improved performance over stepwise selection and random forest models, and the GA provides greater flexibility in variable selection by being able to optimize over any modeler-defined model performance metric instead of a specific variable importance metric.
[ "Chee Chun Gan and Gerard Learmonth", "['Chee Chun Gan' 'Gerard Learmonth']" ]
cs.LG
null
1604.06737
null
null
http://arxiv.org/pdf/1604.06737v1
2016-04-22T16:34:30Z
2016-04-22T16:34:30Z
Entity Embeddings of Categorical Variables
We map categorical variables in a function approximation problem into Euclidean spaces, which are the entity embeddings of the categorical variables. The mapping is learned by a neural network during the standard supervised training process. Entity embedding not only reduces memory usage and speeds up neural networks compared with one-hot encoding, but, more importantly, by mapping similar values close to each other in the embedding space it reveals the intrinsic properties of the categorical variables. We applied it successfully in a recent Kaggle competition and were able to reach the third position with relatively simple features. We further demonstrate in this paper that entity embedding helps the neural network to generalize better when the data is sparse and statistics are unknown. Thus it is especially useful for datasets with many high-cardinality features, where other methods tend to overfit. We also demonstrate that the embeddings obtained from the trained neural network boost the performance of all tested machine learning methods considerably when used as the input features instead. As entity embedding defines a distance measure for categorical variables, it can be used for visualizing categorical data and for data clustering.
[ "Cheng Guo and Felix Berkhahn", "['Cheng Guo' 'Felix Berkhahn']" ]
cs.LG cs.AI
null
1604.06743
null
null
http://arxiv.org/pdf/1604.06743v1
2016-04-22T16:47:04Z
2016-04-22T16:47:04Z
Latent Contextual Bandits and their Application to Personalized Recommendations for New Users
Personalized recommendations for new users, also known as the cold-start problem, can be formulated as a contextual bandit problem. Existing contextual bandit algorithms generally rely on features alone to capture user variability. Such methods are inefficient in learning new users' interests. In this paper we propose Latent Contextual Bandits. We consider both the benefit of leveraging a set of learned latent user classes for new users, and how we can learn such latent classes from prior users. We show that our approach achieves a better regret bound than existing algorithms. We also demonstrate the benefit of our approach using a large real world dataset and a preliminary user study.
[ "Li Zhou and Emma Brunskill", "['Li Zhou' 'Emma Brunskill']" ]
cs.LG cs.AI cs.RO
null
1604.06778
null
null
http://arxiv.org/pdf/1604.06778v3
2016-05-27T19:25:59Z
2016-04-22T18:57:24Z
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
[ "['Yan Duan' 'Xi Chen' 'Rein Houthooft' 'John Schulman' 'Pieter Abbeel']", "Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel" ]
cs.AI cs.LG
null
1604.06849
null
null
http://arxiv.org/pdf/1604.06849v1
2016-04-23T02:23:14Z
2016-04-23T02:23:14Z
A Computational Model for Situated Task Learning with Interactive Instruction
Learning novel tasks is a complex cognitive activity requiring the learner to acquire diverse declarative and procedural knowledge. Prior ACT-R models of acquiring task knowledge from instruction focused on learning procedural knowledge from declarative instructions encoded in semantic memory. In this paper, we identify the requirements for designing computational models that learn task knowledge from situated task-oriented interactions with an expert and then describe and evaluate a model of learning from situated interactive instruction that is implemented in the Soar cognitive architecture.
[ "['Shiwali Mohan' 'James Kirk' 'John Laird']", "Shiwali Mohan, James Kirk, John Laird" ]
cs.LG
null
1604.06915
null
null
http://arxiv.org/pdf/1604.06915v1
2016-04-23T15:13:43Z
2016-04-23T15:13:43Z
On the Sample Complexity of End-to-end Training vs. Semantic Abstraction Training
We compare the end-to-end training approach to a modular approach in which a system is decomposed into semantically meaningful components. We focus on the sample complexity aspect, in the regime where an extremely high accuracy is necessary, as is the case in autonomous driving applications. We demonstrate cases in which the number of training examples required by the end-to-end approach is exponentially larger than the number of examples required by the semantic abstraction approach.
[ "Shai Shalev-Shwartz and Amnon Shashua", "['Shai Shalev-Shwartz' 'Amnon Shashua']" ]
cs.DS cs.LG stat.ML
null
1604.06968
null
null
http://arxiv.org/pdf/1604.06968v2
2016-08-14T19:50:58Z
2016-04-24T00:23:51Z
Agnostic Estimation of Mean and Covariance
We consider the problem of estimating the mean and covariance of a distribution from iid samples in $\mathbb{R}^n$, in the presence of an $\eta$ fraction of malicious noise; this is in contrast to much recent work where the noise itself is assumed to be from a distribution of known type. The agnostic problem includes many interesting special cases, e.g., learning the parameters of a single Gaussian (or finding the best-fit Gaussian) when $\eta$ fraction of data is adversarially corrupted, agnostically learning a mixture of Gaussians, agnostic ICA, etc. We present polynomial-time algorithms to estimate the mean and covariance with error guarantees in terms of information-theoretic lower bounds. As a corollary, we also obtain an agnostic algorithm for Singular Value Decomposition.
[ "Kevin A. Lai, Anup B. Rao, Santosh Vempala", "['Kevin A. Lai' 'Anup B. Rao' 'Santosh Vempala']" ]
cs.LG
null
1604.06985
null
null
http://arxiv.org/pdf/1604.06985v3
2016-05-08T10:07:34Z
2016-04-24T05:17:04Z
Deep Learning with Eigenvalue Decay Regularizer
This paper extends our previous work on regularization of neural networks using Eigenvalue Decay by employing a soft approximation of the dominant eigenvalue in order to enable the calculation of its derivatives in relation to the synaptic weights, and therefore the application of back-propagation, which is a primary demand for deep learning. Moreover, we extend our previous theoretical analysis to deep neural networks and multiclass classification problems. Our method is implemented as an additional regularizer in Keras, a modular neural networks library written in Python, and evaluated in the benchmark data sets Reuters Newswire Topics Classification, IMDB database for binary sentiment classification, MNIST database of handwritten digits and CIFAR-10 data set for image classification.
[ "Oswaldo Ludwig", "['Oswaldo Ludwig']" ]
cs.LG math.OC stat.ML
null
1604.07070
null
null
http://arxiv.org/pdf/1604.07070v3
2016-10-16T18:56:57Z
2016-04-24T18:50:58Z
Stochastic Variance-Reduced ADMM
The alternating direction method of multipliers (ADMM) is a powerful optimization solver in machine learning. Recently, stochastic ADMM has been integrated with variance reduction methods for stochastic gradient, leading to SAG-ADMM and SDCA-ADMM that have fast convergence rates and low iteration complexities. However, their space requirements can still be high. In this paper, we propose an integration of ADMM with the method of stochastic variance reduced gradient (SVRG). Unlike another recent integration attempt called SCAS-ADMM, the proposed algorithm retains the fast convergence benefits of SAG-ADMM and SDCA-ADMM, but is more advantageous in that its storage requirement is very low, even independent of the sample size $n$. We also extend the proposed method for nonconvex problems, and obtain a convergence rate of $O(1/T)$. Experimental results demonstrate that it is as fast as SAG-ADMM and SDCA-ADMM, much faster than SCAS-ADMM, and can be used on much bigger data sets.
[ "Shuai Zheng and James T. Kwok", "['Shuai Zheng' 'James T. Kwok']" ]
cs.LG
null
1604.07078
null
null
http://arxiv.org/pdf/1604.07078v1
2016-04-24T20:32:18Z
2016-04-24T20:32:18Z
Unsupervised Representation Learning of Structured Radio Communication Signals
We explore unsupervised representation learning of radio communication signals in raw sampled time series representation. We demonstrate that we can learn modulation basis functions using convolutional autoencoders and visually recognize their relationship to the analytic bases used in digital communications. We also propose and evaluate quantitative metrics for quality of encoding using domain-relevant performance measures.
[ "[\"Timothy J. O'Shea\" 'Johnathan Corgan' 'T. Charles Clancy']", "Timothy J. O'Shea, Johnathan Corgan, T. Charles Clancy" ]
cs.CV cs.AI cs.LG stat.AP stat.ML
null
1604.07093
null
null
http://arxiv.org/pdf/1604.07093v1
2016-04-24T23:36:36Z
2016-04-24T23:36:36Z
Semi-supervised Vocabulary-informed Learning
Despite significant progress in object categorization in recent years, a number of important challenges remain, mainly the ability to learn from limited labeled data and the ability to recognize object classes within a large, potentially open, set of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited sized class vocabularies and typically requires separation between supervised and unsupervised classes, allowing the former to inform the latter but not vice versa. We propose the notion of semi-supervised vocabulary-informed learning to alleviate the above mentioned challenges and address problems of supervised, zero-shot and open set recognition using a unified framework. Specifically, we propose a maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms, ensuring that labeled samples are projected closer to their correct prototypes in the embedding space than to others. We show that the resulting model yields improvements in supervised, zero-shot, and large open set recognition, with up to 310K class vocabulary on AwA and ImageNet datasets.
[ "Yanwei Fu, Leonid Sigal", "['Yanwei Fu' 'Leonid Sigal']" ]
cs.LG stat.ML
null
1604.07101
null
null
http://arxiv.org/pdf/1604.07101v2
2016-10-27T17:36:57Z
2016-04-25T00:38:16Z
Double Thompson Sampling for Dueling Bandits
In this paper, we propose a Double Thompson Sampling (D-TS) algorithm for dueling bandit problems. As indicated by its name, D-TS selects both the first and the second candidates according to Thompson Sampling. Specifically, D-TS maintains a posterior distribution for the preference matrix, and chooses the pair of arms for comparison by sampling twice from the posterior distribution. This simple algorithm applies to general Copeland dueling bandits, including Condorcet dueling bandits as its special case. For general Copeland dueling bandits, we show that D-TS achieves $O(K^2 \log T)$ regret. For Condorcet dueling bandits, we further simplify the D-TS algorithm and show that the simplified D-TS algorithm achieves $O(K \log T + K^2 \log \log T)$ regret. Simulation results based on both synthetic and real-world data demonstrate the efficiency of the proposed D-TS algorithm.
[ "Huasen Wu and Xin Liu", "['Huasen Wu' 'Xin Liu']" ]
stat.ML cs.LG math.ST stat.TH
null
1604.07143
null
null
http://arxiv.org/pdf/1604.07143v2
2018-04-03T07:42:50Z
2016-04-25T06:43:47Z
Neural Random Forests
Given an ensemble of randomized regression trees, it is possible to restructure them as a collection of multilayered neural networks with particular connection weights. Following this principle, we reformulate the random forest method of Breiman (2001) into a neural network setting, and in turn propose two new hybrid procedures that we call neural random forests. Both predictors exploit prior knowledge of regression trees for their architecture, have fewer parameters to tune than standard networks, and fewer restrictions on the geometry of the decision boundaries than trees. Consistency results are proved, and substantial numerical evidence is provided on both synthetic and real data sets to assess the excellent performance of our methods in a large variety of prediction problems.
[ "['Gérard Biau' 'Erwan Scornet' 'Johannes Welbl']", "G\\'erard Biau (LPMA, LSTA), Erwan Scornet (LSTA), Johannes Welbl (UCL)" ]
q-bio.BM cs.AI cs.LG cs.NE q-bio.QM
null
1604.07176
null
null
http://arxiv.org/pdf/1604.07176v1
2016-04-25T09:17:18Z
2016-04-25T09:17:18Z
Protein Secondary Structure Prediction Using Cascaded Convolutional and Recurrent Neural Networks
Protein secondary structure prediction is an important problem in bioinformatics. Inspired by the recent successes of deep neural networks, in this paper, we propose an end-to-end deep network that predicts protein secondary structures from integrated local and global contextual features. Our deep architecture leverages convolutional neural networks with different kernel sizes to extract multiscale local contextual features. In addition, considering long-range dependencies existing in amino acid sequences, we set up a bidirectional neural network consisting of gated recurrent units to capture global contextual features. Furthermore, multi-task learning is utilized to predict secondary structure labels and amino-acid solvent accessibility simultaneously. Our proposed deep network demonstrates its effectiveness by achieving state-of-the-art performance, i.e., 69.7% Q8 accuracy on the public benchmark CB513, 76.9% Q8 accuracy on CASP10 and 73.1% Q8 accuracy on CASP11. Our model and results are publicly available.
[ "Zhen Li and Yizhou Yu", "['Zhen Li' 'Yizhou Yu']" ]
cs.LG cs.AI stat.ML
10.1109/ICDM.2015.145
1604.07178
null
null
http://arxiv.org/abs/1604.07178v1
2016-04-25T09:29:21Z
2016-04-25T09:29:21Z
Weighted Spectral Cluster Ensemble
Clustering explores meaningful patterns in unlabeled data sets. Cluster Ensemble Selection (CES) is a new approach that combines individual clustering results to increase the performance of the final results. Although CES can achieve better final results than individual clustering algorithms and cluster ensemble methods, its performance can be dramatically affected by its consensus diversity metric and thresholding procedure. There are two problems in CES: 1) most of the diversity metrics are based on heuristic Shannon entropy and 2) estimating threshold values is hard in practice. The main goal of this paper is to propose a robust approach for solving the above-mentioned problems. Accordingly, this paper develops a novel framework for clustering problems, called Weighted Spectral Cluster Ensemble (WSCE), by exploiting concepts from the community detection arena and graph-based clustering. Under this framework, a new version of spectral clustering, called Two Kernels Spectral Clustering, is used for generating graph-based individual clustering results. Further, by using modularity, a well-known metric in community detection, on the transformed graph representation of individual clustering results, our approach provides an effective diversity estimation for individual clustering results. Moreover, this paper introduces a new approach for combining the evaluated individual clustering results without a thresholding procedure. Experimental study on varied data sets demonstrates that the proposed approach achieves superior performance to state-of-the-art methods.
[ "['Muhammad Yousefnezhad' 'Daoqiang Zhang']", "Muhammad Yousefnezhad, Daoqiang Zhang" ]
cs.DB cs.LG
null
1604.07180
null
null
http://arxiv.org/pdf/1604.07180v1
2016-04-25T09:39:57Z
2016-04-25T09:39:57Z
Observing and Recommending from a Social Web with Biases
The research question this report addresses is: how, and to what extent, those directly involved with the design, development and employment of a specific black box algorithm can be certain that it is not unlawfully discriminating (directly and/or indirectly) against particular persons with protected characteristics (e.g. gender, race and ethnicity)?
[ "Steffen Staab and Sophie Stalla-Bourdillon and Laura Carmichael", "['Steffen Staab' 'Sophie Stalla-Bourdillon' 'Laura Carmichael']" ]
cs.IR cs.LG
null
1604.07209
null
null
http://arxiv.org/pdf/1604.07209v1
2016-04-25T11:28:21Z
2016-04-25T11:28:21Z
Unbiased Comparative Evaluation of Ranking Functions
Eliciting relevance judgments for ranking evaluation is labor-intensive and costly, motivating careful selection of which documents to judge. Unlike traditional approaches that make this selection deterministically, probabilistic sampling has shown intriguing promise since it enables the design of estimators that are provably unbiased even when reusing data with missing judgments. In this paper, we first unify and extend these sampling approaches by viewing the evaluation problem as a Monte Carlo estimation task that applies to a large number of common IR metrics. Drawing on the theoretical clarity that this view offers, we tackle three practical evaluation scenarios: comparing two systems, comparing $k$ systems against a baseline, and ranking $k$ systems. For each scenario, we derive an estimator and a variance-optimizing sampling distribution while retaining the strengths of sampling-based evaluation, including unbiasedness, reusability despite missing data, and ease of use in practice. In addition to the theoretical contribution, we empirically evaluate our methods against previously used sampling heuristics and find that they generally cut the number of required relevance judgments at least in half.
[ "Tobias Schnabel, Adith Swaminathan, Peter Frazier, Thorsten Joachims", "['Tobias Schnabel' 'Adith Swaminathan' 'Peter Frazier' 'Thorsten Joachims']" ]
cs.MM cs.LG
null
1604.07211
null
null
http://arxiv.org/pdf/1604.07211v1
2016-04-25T11:43:09Z
2016-04-25T11:43:09Z
Towards Reduced Reference Parametric Models for Estimating Audiovisual Quality in Multimedia Services
We have developed reduced reference parametric models for estimating perceived quality in audiovisual multimedia services. We have created 144 unique configurations for audiovisual content including various application and network parameters such as bitrates and distortions in terms of bandwidth, packet loss rate and jitter. To generate the data needed for model training and validation we have tasked 24 subjects, in a controlled environment, to rate the overall audiovisual quality on the absolute category rating (ACR) 5-level quality scale. We have developed models using Random Forest and Neural Network based machine learning methods in order to estimate Mean Opinion Scores (MOS) values. We have used information retrieved from the packet headers and side information provided as network parameters for model training. Random Forest based models have performed better in terms of Root Mean Square Error (RMSE) and Pearson correlation coefficient. The side information proved to be very effective in developing the model. We have found that, while the model performance might be improved by replacing the side information with more accurate bit stream level measurements, they are performing well in estimating perceived quality in audiovisual multimedia services.
[ "Edip Demirbilek and Jean-Charles Gr\\'egoire", "['Edip Demirbilek' 'Jean-Charles Grégoire']" ]
cs.LG
null
1604.07243
null
null
http://arxiv.org/pdf/1604.07243v3
2017-06-14T14:08:22Z
2016-04-25T13:22:55Z
Learning Arbitrary Sum-Product Network Leaves with Expectation-Maximization
Sum-Product Networks with complex probability distributions at the leaves have been shown to be powerful tractable-inference probabilistic models. However, while learning the internal parameters has been amply studied, learning complex leaf distributions is an open problem with only a few results available in special cases. In this paper we derive an efficient method to learn a very large class of leaf distributions with Expectation-Maximization. The EM updates have the form of simple weighted maximum likelihood problems, allowing the use of any distribution that can be learned with maximum likelihood, even approximately. The algorithm has cost linear in the model size and converges even if only partial optimizations are performed. We demonstrate this approach with experiments on twenty real-life datasets for density estimation, using tree graphical models as leaves. Our model outperforms state-of-the-art methods for parameter learning despite using SPNs with far fewer parameters.
[ "['Mattia Desana' 'Christoph Schnörr']", "Mattia Desana and Christoph Schn\\\"orr" ]
cs.AI cs.LG
null
1604.07255
null
null
http://arxiv.org/pdf/1604.07255v3
2016-11-30T17:35:27Z
2016-04-25T13:45:50Z
A Deep Hierarchical Approach to Lifelong Learning in Minecraft
We propose a lifelong learning system that has the ability to reuse and transfer knowledge from one task to another while efficiently retaining the previously learned knowledge-base. Knowledge is transferred by learning reusable skills to solve tasks in Minecraft, a popular video game which is an unsolved and high-dimensional lifelong learning problem. These reusable skills, which we refer to as Deep Skill Networks, are then incorporated into our novel Hierarchical Deep Reinforcement Learning Network (H-DRLN) architecture using two techniques: (1) a deep skill array and (2) skill distillation, our novel variation of policy distillation (Rusu et al. 2015) for learning skills. Skill distillation enables the H-DRLN to efficiently retain knowledge and therefore scale in lifelong learning, by accumulating knowledge and encapsulating multiple reusable skills into a single distilled network. The H-DRLN exhibits superior performance and lower learning sample complexity compared to the regular Deep Q Network (Mnih et al. 2015) in sub-domains of Minecraft.
[ "['Chen Tessler' 'Shahar Givony' 'Tom Zahavy' 'Daniel J. Mankowitz'\n 'Shie Mannor']", "Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J. Mankowitz, Shie\n Mannor" ]
cs.NE cs.LG
null
1604.07269
null
null
http://arxiv.org/pdf/1604.07269v1
2016-04-25T14:17:08Z
2016-04-25T14:17:08Z
CMA-ES for Hyperparameter Optimization of Deep Neural Networks
Hyperparameters of deep neural networks are often optimized by grid search, random search or Bayesian optimization. As an alternative, we propose to use the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), which is known for its state-of-the-art performance in derivative-free optimization. CMA-ES has some useful invariance properties and is friendly to parallel evaluations of solutions. We provide a toy example comparing CMA-ES and state-of-the-art Bayesian optimization algorithms for tuning the hyperparameters of a convolutional neural network for the MNIST dataset on 30 GPUs in parallel.
[ "['Ilya Loshchilov' 'Frank Hutter']", "Ilya Loshchilov and Frank Hutter" ]
cs.CV cs.LG cs.NE
null
1604.07316
null
null
http://arxiv.org/pdf/1604.07316v1
2016-04-25T16:03:56Z
2016-04-25T16:03:56Z
End to End Learning for Self-Driving Cars
We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS).
[ "['Mariusz Bojarski' 'Davide Del Testa' 'Daniel Dworakowski'\n 'Bernhard Firner' 'Beat Flepp' 'Prasoon Goyal' 'Lawrence D. Jackel'\n 'Mathew Monfort' 'Urs Muller' 'Jiakai Zhang' 'Xin Zhang' 'Jake Zhao'\n 'Karol Zieba']", "Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard\n Firner, Beat Flepp, Prasoon Goyal, Lawrence D. Jackel, Mathew Monfort, Urs\n Muller, Jiakai Zhang, Xin Zhang, Jake Zhao, Karol Zieba" ]
stat.ML cs.LG
null
1604.07356
null
null
http://arxiv.org/pdf/1604.07356v1
2016-04-25T18:33:59Z
2016-04-25T18:33:59Z
Fast nonlinear embeddings via structured matrices
We present a new paradigm for speeding up randomized computations of several frequently used functions in machine learning. In particular, our paradigm can be applied for improving computations of kernels based on random embeddings. Above that, the presented framework covers multivariate randomized functions. As a byproduct, we propose an algorithmic approach that also leads to a significant reduction of space complexity. Our method is based on careful recycling of Gaussian vectors into structured matrices that share properties of fully random matrices. The quality of the proposed structured approach follows from combinatorial properties of the graphs encoding correlations between rows of these structured matrices. Our framework covers as special cases already known structured approaches such as the Fast Johnson-Lindenstrauss Transform, but is much more general since it can be applied also to highly nonlinear embeddings. We provide strong concentration results showing the quality of the presented paradigm.
[ "Krzysztof Choromanski, Francois Fagan", "['Krzysztof Choromanski' 'Francois Fagan']" ]
cs.CV cs.AI cs.GR cs.LG
null
1604.07379
null
null
http://arxiv.org/pdf/1604.07379v2
2016-11-21T20:56:42Z
2016-04-25T19:42:46Z
Context Encoders: Feature Learning by Inpainting
We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders -- a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
[ "['Deepak Pathak' 'Philipp Krahenbuhl' 'Jeff Donahue' 'Trevor Darrell'\n 'Alexei A. Efros']", "Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell,\n Alexei A. Efros" ]
cs.LG stat.ML
null
1604.07484
null
null
http://arxiv.org/pdf/1604.07484v1
2016-04-26T00:44:42Z
2016-04-26T00:44:42Z
Deep Multi-fidelity Gaussian Processes
We develop a novel multi-fidelity framework that goes far beyond the classical AR(1) Co-kriging scheme of Kennedy and O'Hagan (2000). Our method can handle general discontinuous cross-correlations among systems with different levels of fidelity. A combination of multi-fidelity Gaussian Processes (AR(1) Co-kriging) and deep neural networks enables us to construct a method that is immune to discontinuities. We demonstrate the effectiveness of the new technology using standard benchmark problems designed to resemble the outputs of complicated high- and low-fidelity codes.
[ "Maziar Raissi and George Karniadakis", "['Maziar Raissi' 'George Karniadakis']" ]
cs.CV cs.LG stat.ML
null
1604.07554
null
null
http://arxiv.org/pdf/1604.07554v1
2016-04-26T07:43:59Z
2016-04-26T07:43:59Z
A New Approach in Persian Handwritten Letters Recognition Using Error Correcting Output Coding
Classification ensemble, which uses the weighted polling of outputs, is the art of combining a set of basic classifiers to generate high-performance, robust and more stable results. This study aims to improve the results of identifying the Persian handwritten letters using the Error Correcting Output Coding (ECOC) ensemble method. Furthermore, feature selection is used to reduce the costs of errors in our proposed method. ECOC is a method for decomposing a multi-way classification problem into many binary classification tasks, and then combining the results of the subtasks into a hypothesized solution to the original problem. Firstly, the image features are extracted by Principal Components Analysis (PCA). After that, ECOC is used to identify the Persian handwritten letters, with Support Vector Machine (SVM) as the base classifier. The empirical results of applying this ensemble method to 10 real-world data sets of Persian handwritten letters indicate that it identifies the Persian handwritten letters better than other ensemble methods and also single classifiers. Moreover, by testing a number of different features, this paper found that the additional cost of the feature selection stage can be reduced by using this method.
[ "['Maziar Kazemi' 'Muhammad Yousefnezhad' 'Saber Nourian']", "Maziar Kazemi, Muhammad Yousefnezhad, Saber Nourian" ]
cs.SI cs.DS cs.LG
10.1109/IWQoS.2016.7590438
1604.07638
null
null
http://arxiv.org/abs/1604.07638v1
2016-04-26T12:02:55Z
2016-04-26T12:02:55Z
Online Influence Maximization in Non-Stationary Social Networks
Social networks have been popular platforms for information propagation. An important use case is viral marketing: given a promotion budget, an advertiser can choose some influential users as the seed set and provide them free or discounted sample products; in this way, the advertiser hopes to increase the popularity of the product in the users' friend circles by the word-of-mouth effect, and thus maximize the number of users that information about the product can reach. There has been a body of literature studying the influence maximization problem. Nevertheless, the existing studies mostly investigate the problem on a one-off basis, assuming fixed known influence probabilities among users, or knowledge of the exact social network topology. In practice, the social network topology and the influence probabilities are typically unknown to the advertiser and can vary over time, e.g., as social ties are newly established, strengthened or weakened. In this paper, we focus on a dynamic non-stationary social network and design a randomized algorithm, RSB, based on multi-armed bandit optimization, to maximize influence propagation over time. The algorithm produces a sequence of online decisions and calibrates its explore-exploit strategy using the outcomes of previous decisions. It is rigorously proven to achieve an upper-bounded regret in reward and to be applicable to large-scale social networks. Practical effectiveness of the algorithm is evaluated using both synthetic and real-world datasets, demonstrating that our algorithm outperforms previous stationary methods under non-stationary conditions.
[ "['Yixin Bao' 'Xiaoke Wang' 'Zhi Wang' 'Chuan Wu' 'Francis C. M. Lau']", "Yixin Bao, Xiaoke Wang, Zhi Wang, Chuan Wu, Francis C.M. Lau" ]
cs.LG cs.AI stat.ML
null
1604.07706
null
null
http://arxiv.org/pdf/1604.07706v3
2016-06-07T08:06:23Z
2016-04-26T14:59:43Z
Distributed Clustering of Linear Bandits in Peer to Peer Networks
We provide two distributed confidence ball algorithms for solving linear bandit problems in peer to peer networks with limited communication capabilities. For the first, we assume that all the peers are solving the same linear bandit problem, and prove that our algorithm achieves the optimal asymptotic regret rate of any centralised algorithm that can instantly communicate information between the peers. For the second, we assume that there are clusters of peers solving the same bandit problem within each cluster, and we prove that our algorithm discovers these clusters, while achieving the optimal asymptotic regret rate within each one. Through experiments on several real-world datasets, we demonstrate the performance of the proposed algorithms compared to the state of the art.
[ "Nathan Korda and Balazs Szorenyi and Shuai Li", "['Nathan Korda' 'Balazs Szorenyi' 'Shuai Li']" ]
stat.ML cs.LG
null
1604.07711
null
null
http://arxiv.org/pdf/1604.07711v2
2016-10-10T10:06:28Z
2016-04-26T15:14:49Z
Condorcet's Jury Theorem for Consensus Clustering and its Implications for Diversity
Condorcet's Jury Theorem has been invoked for ensemble classifiers to indicate that the combination of many classifiers can have better predictive performance than a single classifier. Such a theoretical underpinning is unknown for consensus clustering. This article extends Condorcet's Jury Theorem to the mean partition approach under the additional assumptions that a unique ground-truth partition exists and sample partitions are drawn from a sufficiently small ball containing the ground-truth. As an implication of practical relevance, we question the claim that the quality of consensus clustering depends on the diversity of the sample partitions. Instead, we conjecture that limiting the diversity of the mean partitions is necessary for controlling the quality.
[ "['Brijnesh J. Jain']", "Brijnesh J. Jain" ]
cs.LG
null
1604.07759
null
null
http://arxiv.org/pdf/1604.07759v3
2016-07-01T13:17:01Z
2016-04-26T17:18:24Z
F-measure Maximization in Multi-Label Classification with Conditionally Independent Label Subsets
We discuss a method to improve the exact F-measure maximization algorithm called GFM, proposed in (Dembczynski et al. 2011) for multi-label classification, assuming the label set can be partitioned into conditionally independent subsets given the input features. If the labels were all independent, the estimation of only $m$ parameters ($m$ denoting the number of labels) would suffice to derive Bayes-optimal predictions in $O(m^2)$ operations. In the general case, $m^2+1$ parameters are required by GFM to solve the problem in $O(m^3)$ operations. In this work, we show that the number of parameters can be reduced further to $m^2/n$, in the best case, assuming the label set can be partitioned into $n$ conditionally independent subsets. As this label partition needs to be estimated from the data beforehand, we first use the procedure proposed in (Gasse et al. 2015) that finds such a partition, and then infer the required parameters locally in each label subset. The latter are aggregated and serve as input to GFM to form the Bayes-optimal prediction. We show on a synthetic experiment that the reduction in the number of parameters brings about significant benefits in terms of performance.
[ "Maxime Gasse and Alex Aussem", "['Maxime Gasse' 'Alex Aussem']" ]
cs.NE cs.LG stat.ML
null
1604.07796
null
null
http://arxiv.org/pdf/1604.07796v1
2016-04-26T19:04:59Z
2016-04-26T19:04:59Z
Scale Normalization
One of the difficulties of training deep neural networks is caused by improper scaling between layers. Scaling issues introduce exploding / vanishing gradient problems, and have typically been addressed by careful scale-preserving initialization. We investigate the value of preserving scale, or isometry, beyond the initial weights. We propose two methods of maintaining isometry, one exact and one stochastic. Preliminary experiments show that both determinant normalization and scale normalization effectively speed up learning. Results suggest that isometry is important in the beginning of learning, and maintaining it leads to faster learning.
[ "Henry Z. Lo and Kevin Amaral and Wei Ding", "['Henry Z. Lo' 'Kevin Amaral' 'Wei Ding']" ]
cs.LG cs.CV
null
1604.07866
null
null
http://arxiv.org/pdf/1604.07866v3
2016-08-04T15:01:36Z
2016-04-26T21:42:51Z
Learning by tracking: Siamese CNN for robust target association
This paper introduces a novel approach to the task of data association within the context of pedestrian tracking, by introducing a two-stage learning scheme to match pairs of detections. First, a Siamese convolutional neural network (CNN) is trained to learn descriptors encoding local spatio-temporal structures between the two input image patches, aggregating pixel values and optical flow information. Second, a set of contextual features derived from the position and size of the compared input patches are combined with the CNN output by means of a gradient boosting classifier to generate the final matching probability. This learning approach is validated by using a linear programming based multi-person tracker showing that even a simple and efficient tracker may outperform much more complex models when fed with our learned matching probabilities. Results on publicly available sequences show that our method meets state-of-the-art standards in multiple people tracking.
[ "['Laura Leal-Taixé' 'Cristian Canton Ferrer' 'Konrad Schindler']", "Laura Leal-Taix\\'e, Cristian Canton Ferrer, Konrad Schindler" ]
cs.SI cs.LG stat.ML
10.1109/IKT.2015.7288793
1604.07878
null
null
http://arxiv.org/abs/1604.07878v1
2016-04-26T22:38:47Z
2016-04-26T22:38:47Z
Evaluating the effect of topic consideration in identifying communities of rating-based social networks
Finding meaningful communities in a social network has attracted the attention of many researchers. The community structure of complex networks reveals both their organization and hidden relations among their constituents. Most research in the field of community detection focuses mainly on the topological structure of the network without performing any content analysis. Nowadays, real-world social networks contain a vast range of information, including shared objects, comments, following information, etc. In recent years, a number of studies have proposed approaches which consider both the contents that are interchanged in the networks and the topological structures of the networks in order to find more meaningful communities. In this research, the effect of topic analysis in finding more meaningful communities in social networking sites, in which users express their feelings toward different objects (like movies) by means of ratings, is demonstrated by performing extensive experiments.
[ "['Ali Reihanian' 'Behrouz Minaei-Bidgoli' 'Muhammad Yousefnezhad']", "Ali Reihanian, Behrouz Minaei-Bidgoli, Muhammad Yousefnezhad" ]
cs.CV cs.LG cs.NE
null
1604.07904
null
null
http://arxiv.org/pdf/1604.07904v1
2016-04-27T02:16:43Z
2016-04-27T02:16:43Z
Image Colorization Using a Deep Convolutional Neural Network
In this paper, we present a novel approach that uses deep learning techniques for colorizing grayscale images. By utilizing a pre-trained convolutional neural network, which is originally designed for image classification, we are able to separate content and style of different images and recombine them into a single image. We then propose a method that can add colors to a grayscale image by combining its content with the style of a color image having semantic similarity with the grayscale one. As an application, to our knowledge the first of its kind, we use the proposed method to colorize images of ukiyo-e, a genre of Japanese painting, and obtain interesting results, showing the potential of this method in the growing field of computer-assisted art.
[ "['Tung Nguyen' 'Kazuki Mori' 'Ruck Thawonmas']", "Tung Nguyen, Kazuki Mori, and Ruck Thawonmas" ]
cs.LG cs.AI cs.DC stat.ML
null
1604.07928
null
null
http://arxiv.org/pdf/1604.07928v2
2016-05-22T00:00:23Z
2016-04-27T04:18:32Z
Distributed Flexible Nonlinear Tensor Factorization
Tensor factorization is a powerful tool to analyse multi-way data. Compared with traditional multi-linear methods, nonlinear tensor factorization models are capable of capturing more complex relationships in the data. However, they are computationally expensive and may suffer severe learning bias in case of extreme data sparsity. To overcome these limitations, in this paper we propose a distributed, flexible nonlinear tensor factorization model. Our model can effectively avoid the expensive computations and structural restrictions of the Kronecker-product in existing TGP formulations, allowing an arbitrary subset of tensorial entries to be selected to contribute to the training. At the same time, we derive a tractable and tight variational evidence lower bound (ELBO) that enables highly decoupled, parallel computations and high-quality inference. Based on the new bound, we develop a distributed inference algorithm in the MapReduce framework, which is key-value-free and can fully exploit the memory cache mechanism in fast MapReduce systems such as SPARK. Experimental results fully demonstrate the advantages of our method over several state-of-the-art approaches, in terms of both predictive performance and computational efficiency. Moreover, our approach shows a promising potential in the application of Click-Through-Rate (CTR) prediction for online advertising.
[ "Shandian Zhe, Kai Zhang, Pengyuan Wang, Kuang-chih Lee, Zenglin Xu,\n Yuan Qi, Zoubin Ghahramani", "['Shandian Zhe' 'Kai Zhang' 'Pengyuan Wang' 'Kuang-chih Lee' 'Zenglin Xu'\n 'Yuan Qi' 'Zoubin Ghahramani']" ]
cs.MS cs.LG stat.ML
null
1604.08079
null
null
http://arxiv.org/pdf/1604.08079v2
2016-07-12T23:08:46Z
2016-04-27T14:13:11Z
UBL: an R package for Utility-based Learning
This document describes the R package UBL, which allows the use of several methods for handling utility-based learning problems. Classification and regression problems that assume non-uniform costs and/or benefits pose serious challenges to predictive analytic tasks. In contexts such as meteorology, finance, medicine and ecology, among many others, specific domain information concerning the preference bias of the users must be taken into account to enhance the models' predictive performance. To deal with this problem, a large number of techniques has been proposed by the research community for both classification and regression tasks. The main goal of the UBL package is to facilitate utility-based predictive analytics by providing a set of methods to deal with this type of problem in the R environment. It is a versatile tool that provides mechanisms to handle both regression and classification (binary and multiclass) tasks. Moreover, the UBL package allows the user to specify his or her domain preferences, but it also provides some automatic methods that try to infer those preference biases from the domain, considering some commonly known settings.
[ "['Paula Branco' 'Rita P. Ribeiro' 'Luis Torgo']", "Paula Branco, Rita P. Ribeiro, Luis Torgo" ]
stat.CO cs.LG stat.ML
null
1604.08098
null
null
http://arxiv.org/pdf/1604.08098v3
2018-09-13T05:08:08Z
2016-04-27T15:00:12Z
Local Uncertainty Sampling for Large-Scale Multi-Class Logistic Regression
A major challenge for building statistical models in the big data era is that the available data volume far exceeds the computational capability. A common approach for solving this problem is to employ a subsampled dataset that can be handled by available computational resources. In this paper, we propose a general subsampling scheme for large-scale multi-class logistic regression and examine the variance of the resulting estimator. We show that asymptotically, the proposed method always achieves a smaller variance than that of the uniform random sampling. Moreover, when the classes are conditionally imbalanced, significant improvement over uniform sampling can be achieved. Empirical performance of the proposed method is compared to other methods on both simulated and real-world datasets, and these results match and confirm our theoretical analysis.
[ "Lei Han, Kean Ming Tan, Ting Yang and Tong Zhang", "['Lei Han' 'Kean Ming Tan' 'Ting Yang' 'Tong Zhang']" ]
cs.LG cs.AI stat.ML
null
1604.08153
null
null
http://arxiv.org/pdf/1604.08153v3
2017-06-19T15:34:58Z
2016-04-27T17:48:39Z
Classifying Options for Deep Reinforcement Learning
In this paper we combine one method for hierarchical reinforcement learning - the options framework - with deep Q-networks (DQNs) through the use of different "option heads" on the policy network, and a supervisory network for choosing between the different options. We utilise our setup to investigate the effects of architectural constraints in subtasks with positive and negative transfer, across a range of network capacities. We empirically show that our augmented DQN has lower sample complexity when simultaneously learning subtasks with negative transfer, without degrading performance when learning subtasks with positive transfer.
[ "['Kai Arulkumaran' 'Nat Dilokthanakul' 'Murray Shanahan'\n 'Anil Anthony Bharath']", "Kai Arulkumaran, Nat Dilokthanakul, Murray Shanahan, Anil Anthony\n Bharath" ]
cs.LG cs.CV cs.NE
null
1604.08220
null
null
http://arxiv.org/pdf/1604.08220v1
2016-04-27T20:05:45Z
2016-04-27T20:05:45Z
Diving deeper into mentee networks
Modern computer vision is all about the possession of powerful image representations. Deeper and deeper convolutional neural networks have been built using larger and larger datasets and are made publicly available. A large swath of computer vision scientists use these pre-trained networks with varying degrees of success in various tasks. Even though there is tremendous success in copying these networks, the representational space is not learnt from the target dataset in a traditional manner. One of the reasons for opting to use a pre-trained network over a network learnt from scratch is that small datasets provide less supervision and require meticulous regularization and small, carefully tweaked learning rates even to achieve stable learning without weight explosion. It is often the case that large deep networks are not portable, which necessitates the ability to learn mid-sized networks from scratch. In this article, we dive deeper into training these mid-sized networks on small datasets from scratch by drawing additional supervision from a large pre-trained network. Such learning also provides better generalization accuracies than networks trained with common regularization techniques such as l2, l1 and dropout. We show that features learnt this way are more general than those learnt independently. We studied various characteristics of such networks and found some interesting behaviors.
[ "['Ragav Venkatesan' 'Baoxin Li']", "Ragav Venkatesan, Baoxin Li" ]
cs.CR cs.LG cs.NE
null
1604.08275
null
null
http://arxiv.org/pdf/1604.08275v1
2016-04-28T00:35:32Z
2016-04-28T00:35:32Z
Crafting Adversarial Input Sequences for Recurrent Neural Networks
Machine learning models are frequently used to solve complex security problems, as well as to make decisions in sensitive situations like guiding autonomous vehicles or predicting financial market behaviors. Previous efforts have shown that numerous machine learning models were vulnerable to adversarial manipulations of their inputs taking the form of adversarial samples. Such inputs are crafted by adding carefully selected perturbations to legitimate inputs so as to force the machine learning model to misbehave, for instance by outputting a wrong class if the machine learning task of interest is classification. In fact, to the best of our knowledge, all previous work on adversarial sample crafting for neural networks considered models used to solve classification tasks, most frequently in computer vision applications. In this paper, we contribute to the field of adversarial machine learning by investigating adversarial input sequences for recurrent neural networks processing sequential data. We show that the classes of algorithms introduced previously to craft adversarial samples misclassified by feed-forward neural networks can be adapted to recurrent neural networks. In an experiment, we show that adversaries can craft adversarial sequences misleading both categorical and sequential recurrent neural networks.
[ "['Nicolas Papernot' 'Patrick McDaniel' 'Ananthram Swami' 'Richard Harang']", "Nicolas Papernot and Patrick McDaniel and Ananthram Swami and Richard\n Harang" ]
stat.ML cs.LG
null
1604.08291
null
null
http://arxiv.org/pdf/1604.08291v1
2016-04-28T02:37:03Z
2016-04-28T02:37:03Z
Streaming View Learning
An underlying assumption in conventional multi-view learning algorithms is that all views can be accessed simultaneously. However, due to various factors when collecting and pre-processing data from different views, the streaming view setting, in which views arrive in a streaming manner, is becoming more common. By assuming that the subspaces of a multi-view model trained over past views are stable, we fine-tune their combination weights such that the well-trained multi-view model is compatible with new views. This largely relieves the burden of learning new view functions and updating past view functions. We theoretically examine convergence issues and the influence of streaming views in the proposed algorithm. Experimental results on real-world datasets suggest that studying the streaming views problem in multi-view learning is significant and that the proposed algorithm can effectively handle streaming views in different applications.
[ "Chang Xu, Dacheng Tao, Chao Xu", "['Chang Xu' 'Dacheng Tao' 'Chao Xu']" ]