Dataset schema: title (string), categories (string), abstract (string), authors (string), doi (string), id (string), year (float64), venue (string, 13 classes). Each record below lists these fields in this order; missing values appear as "null".
Affinity CNN: Learning Pixel-Centric Pairwise Relations for Figure/Ground Embedding
cs.CV cs.LG cs.NE
Spectral embedding provides a framework for solving perceptual organization problems, including image segmentation and figure/ground organization. From an affinity matrix describing pairwise relationships between pixels, it clusters pixels into regions, and, using a complex-valued extension, orders pixels according to layer. We train a convolutional neural network (CNN) to directly predict the pairwise relationships that define this affinity matrix. Spectral embedding then resolves these predictions into a globally-consistent segmentation and figure/ground organization of the scene. Experiments demonstrate significant benefit to this direct coupling compared to prior works which use explicit intermediate stages, such as edge detection, on the pathway from image to affinities. Our results suggest spectral embedding as a powerful alternative to the conditional random field (CRF)-based globalization schemes typically coupled to deep neural networks.
Michael Maire, Takuya Narihira, Stella X. Yu
null
1512.02767
null
null
Bigger Buffer k-d Trees on Multi-Many-Core Systems
cs.DC cs.DS cs.LG
A buffer k-d tree is a k-d tree variant for massively-parallel nearest neighbor search. While providing valuable speed-ups on modern many-core devices when both a large number of reference and query points are given, buffer k-d trees are limited by the number of points that can fit on a single device. In this work, we show how to modify the original data structure and the associated workflow to make the overall approach capable of dealing with massive data sets. We further provide a simple yet efficient way of using multiple devices in a single workstation. The applicability of the modified framework is demonstrated in the context of astronomy, a field that is faced with huge amounts of data.
Fabian Gieseke and Cosmin Eugen Oancea and Ashish Mahabal and Christian Igel and Tom Heskes
null
1512.02831
null
null
Multi-Player Bandits -- a Musical Chairs Approach
cs.LG stat.ML
We consider a variant of the stochastic multi-armed bandit problem, where multiple players simultaneously choose from the same set of arms and may collide, receiving no reward. This setting has been motivated by problems arising in cognitive radio networks, and is especially challenging under the realistic assumption that communication between players is limited. We provide a communication-free algorithm (Musical Chairs) which attains constant regret with high probability, as well as a sublinear-regret, communication-free algorithm (Dynamic Musical Chairs) for the more difficult setting of players dynamically entering and leaving throughout the game. Moreover, both algorithms do not require prior knowledge of the number of players. To the best of our knowledge, these are the first communication-free algorithms with these types of formal guarantees. We also rigorously compare our algorithms to previous works, and complement our theoretical findings with experiments.
Jonathan Rosenski, Ohad Shamir, Liran Szlak
null
1512.02866
null
null
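To make the Musical Chairs idea above concrete, here is a minimal sketch of its fixation phase, assuming the preceding uniform-exploration phase has already identified the indices of the N best arms; the function name and parameters are illustrative, not from the paper.

```python
import random

def musical_chairs(n_players, top_arms, rounds=10_000, seed=0):
    """Fixation phase of Musical Chairs (simplified sketch).

    Assumes each player already knows the indices of the N best arms
    (in the paper this comes from a preceding uniform-exploration phase).
    Players repeatedly pick a uniformly random arm among the top N; a
    player that experiences no collision keeps ("sits on") that arm forever.
    """
    rng = random.Random(seed)
    seated = [None] * n_players          # arm each player has fixed on
    for t in range(rounds):
        pulls = [seated[p] if seated[p] is not None
                 else rng.choice(top_arms) for p in range(n_players)]
        for p, arm in enumerate(pulls):
            if seated[p] is None and pulls.count(arm) == 1:
                seated[p] = arm          # collision-free pull: sit down
        if all(s is not None for s in seated):
            return seated, t + 1         # all players fixed
    return seated, rounds

players, t_fix = musical_chairs(n_players=3, top_arms=[0, 1, 2])
print(players, t_fix)
```

Once every player is seated, no further collisions occur, which is what yields the constant-regret guarantee after the learning phase.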
Where You Are Is Who You Are: User Identification by Matching Statistics
cs.LG cs.CR cs.SI stat.AP stat.ML
Most users of online services have unique behavioral or usage patterns. These behavioral patterns can be exploited to identify and track users by using only the observed patterns in the behavior. We study the task of identifying users from statistics of their behavioral patterns. Specifically, we focus on the setting in which we are given histograms of users' data collected during two different experiments. We assume that, in the first dataset, the users' identities are anonymized or hidden and that, in the second dataset, their identities are known. We study the task of identifying the users by matching the histograms of their data in the first dataset with the histograms from the second dataset. Recent work introduced the optimal algorithm for this user identification task. In this paper, we evaluate the effectiveness of this method on three different types of datasets and in multiple scenarios. Using datasets such as call data records, web browsing histories, and GPS trajectories, we show that a large fraction of users can be easily identified given only histograms of their data; hence these histograms can act as users' fingerprints. We also verify that simultaneous identification of users achieves better performance than one-by-one user identification. We show that using the optimal method for identification gives higher identification accuracy than heuristics-based approaches in practical scenarios. The accuracy obtained under this optimal method can thus be used to quantify the maximum level of user identification that is possible in such settings. We show that the key factors affecting the accuracy of the optimal identification algorithm are the duration of the data collection, the number of users in the anonymized dataset, and the resolution of the dataset. We analyze the effectiveness of k-anonymization in resisting user identification attacks on these datasets.
Farid M. Naini, Jayakrishnan Unnikrishnan, Patrick Thiran, Martin Vetterli
10.1109/TIFS.2015.2498131
1512.02896
null
null
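The simultaneous histogram-matching formulation evaluated above can be phrased as an assignment problem. A hedged sketch, assuming a multinomial log-likelihood score (an illustrative choice, not necessarily the paper's exact optimal rule) and SciPy's assignment solver:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_users(anon_counts, known_hists, eps=1e-12):
    """Match anonymized histograms to known users (hedged sketch).

    anon_counts: (n_users, n_bins) count histograms from the anonymized dataset.
    known_hists: (n_users, n_bins) histograms from the identified dataset.
    Scores each pairing by the multinomial log-likelihood of the anonymized
    counts under the known user's distribution, then solves the resulting
    assignment problem so all users are identified simultaneously.
    """
    q = known_hists / known_hists.sum(axis=1, keepdims=True)
    # loglik[i, j] = sum_k anon_counts[i, k] * log q[j, k]
    loglik = anon_counts @ np.log(q + eps).T
    rows, cols = linear_sum_assignment(-loglik)   # maximize total likelihood
    return cols  # cols[i] = identity assigned to anonymized user i
```

Solving one global assignment, rather than matching users one by one, is exactly the "simultaneous identification" the abstract reports as more accurate.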
Efficient Distributed SGD with Variance Reduction
cs.LG cs.DC math.OC stat.ML
Stochastic Gradient Descent (SGD) has become one of the most popular optimization methods for training machine learning models on massive datasets. However, SGD suffers from two main drawbacks: (i) The noisy gradient updates have high variance, which slows down convergence as the iterates approach the optimum, and (ii) SGD scales poorly in distributed settings, typically experiencing rapidly decreasing marginal benefits as the number of workers increases. In this paper, we propose a highly parallel method, CentralVR, that uses error corrections to reduce the variance of SGD gradient updates, and scales linearly with the number of worker nodes. CentralVR enjoys low iteration complexity, provably linear convergence rates, and exhibits linear performance gains up to hundreds of cores for massive datasets. We compare CentralVR to state-of-the-art parallel stochastic optimization methods on a variety of models and datasets, and find that our proposed methods exhibit stronger scaling than other SGD variants.
Soham De and Tom Goldstein
null
1512.02970
null
null
Partial Reinitialisation for Optimisers
stat.ML cs.LG cs.NE math.OC
Heuristic optimisers which search for an optimal configuration of variables relative to an objective function often get stuck in local optima where the algorithm is unable to find further improvement. The standard approach to circumvent this problem involves periodically restarting the algorithm from random initial configurations when no further improvement can be found. We propose a method of partial reinitialisation, whereby, in an attempt to find a better solution, only subsets of variables are re-initialised rather than the whole configuration. Much of the information gained from previous runs is hence retained. This leads to significant improvements in the quality of the solution found in a given time for a variety of optimisation problems in machine learning.
Ilia Zintchenko, Matthew Hastings, Nathan Wiebe, Ethan Brown, Matthias Troyer
null
1512.03025
null
null
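The partial-reinitialisation idea above is simple to sketch. A minimal illustration, assuming the configuration is a flat list of variables and `init_fn` draws a fresh random value for one variable (both names are hypothetical):

```python
import random

def partial_restart(config, frac, init_fn, rng=random):
    """Re-initialise only a random subset of variables (hedged sketch).

    config: list of current variable values.
    frac:   fraction of variables to re-initialise.
    init_fn(i): draws a fresh random value for variable i.
    Unlike a full random restart, the untouched variables retain the
    information gained from previous runs.
    """
    n = len(config)
    idx = rng.sample(range(n), max(1, int(frac * n)))
    new = list(config)
    for i in idx:
        new[i] = init_fn(i)
    return new

# e.g. re-initialise 20% of a binary configuration
cfg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
print(partial_restart(cfg, 0.2, lambda i: random.randint(0, 1)))
```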
Gated networks: an inventory
cs.LG
Gated networks are networks that contain gating connections, in which the outputs of at least two neurons are multiplied. Initially, gated networks were used to learn relationships between two input sources, such as pixels from two images. More recently, they have been applied to learning activity recognition or multi-modal representations. The aims of this paper are threefold: 1) to explain the basic computations in gated networks to the non-expert, while adopting a standpoint that insists on their symmetric nature. 2) to serve as a quick reference guide to the recent literature, by providing an inventory of applications of these networks, as well as recent extensions to the basic architecture. 3) to suggest future research directions and applications.
Olivier Sigaud and Clément Masson and David Filliat and Freek Stulp
null
1512.03201
null
null
Norm-Free Radon-Nikodym Approach to Machine Learning
cs.LG stat.ML
For the Machine Learning (ML) classification problem, where a vector of $\mathbf{x}$ observations (values of attributes) is mapped to a single value $y$ (class label), a generalized Radon--Nikodym type of solution is proposed. Quantum-mechanics-like probability states $\psi^2(\mathbf{x})$ are considered, and "Cluster Centers", corresponding to the extrema of $\langle y\psi^2(\mathbf{x})\rangle / \langle\psi^2(\mathbf{x})\rangle$, are found from a generalized eigenvalue problem. The eigenvalues give the possible outcomes $y^{[i]}$, and the corresponding eigenvectors $\psi^{[i]}(\mathbf{x})$ define the "Cluster Centers". The projection of a $\psi$ state, localized at a given $\mathbf{x}$ to classify, onto these eigenvectors defines the probability of the outcome $y^{[i]}$, thus avoiding the use of a norm ($L^2$ or other types) required as a "quality criterion" in typical Machine Learning techniques. A coverage $C^{[i]}$ of each "Cluster Center" is calculated, which potentially allows one to separate system properties (described by the outcomes $y^{[i]}$) from system testing conditions (described by the coverages $C^{[i]}$). As an example of such an application, a $y$-distribution estimator is proposed in the form of pairs $(y^{[i]}, C^{[i]})$, which can be considered a generalization of Gauss quadratures. This estimator allows $y$ probability distribution estimation in strongly non-Gaussian cases.
Vladislav Gennadievich Malyshkin
null
1512.03219
null
null
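In a fixed basis of functions $f_k(\mathbf{x})$, the extremum problem above reduces to a generalized symmetric eigenvalue problem on two moment matrices. A hedged numerical sketch, assuming the basis evaluation happens upstream:

```python
import numpy as np
from scipy.linalg import eigh

def cluster_centers(F, y):
    """Generalized eigenproblem behind the Radon-Nikodym approach (sketch).

    F: (n_samples, n_basis) values of basis functions f_k(x) on the data.
    y: (n_samples,) labels. Builds the moment matrices
        A_{jk} = <y f_j f_k>,  B_{jk} = <f_j f_k>
    and solves A psi = lambda B psi. The eigenvalues are the possible y
    outcomes; the eigenvectors define the "Cluster Centers".
    B must be positive definite, i.e. the basis functions must be
    linearly independent on the sample.
    """
    A = F.T @ (y[:, None] * F) / len(y)
    B = F.T @ F / len(y)
    lam, psi = eigh(A, B)   # generalized symmetric eigenproblem
    return lam, psi
```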
Convolutional Monte Carlo Rollouts in Go
cs.LG cs.AI
In this work, we present a MCTS-based Go-playing program which uses convolutional networks in all parts. Our method performs MCTS in batches, explores the Monte Carlo search tree using Thompson sampling and a convolutional network, and evaluates convnet-based rollouts on the GPU. We achieve strong win rates against open source Go programs and attain competitive results against state of the art convolutional net-based Go-playing programs.
Peter H. Jin and Kurt Keutzer
null
1512.03375
null
null
Boosted Sparse Non-linear Distance Metric Learning
stat.ML cs.LG
This paper proposes a boosting-based solution addressing metric learning problems for high-dimensional data. Distance measures have been used as natural measures of (dis)similarity and served as the foundation of various learning methods. The efficiency of distance-based learning methods heavily depends on the chosen distance metric. With increasing dimensionality and complexity of data, however, traditional metric learning methods suffer from poor scalability and the limitation due to linearity as the true signals are usually embedded within a low-dimensional nonlinear subspace. In this paper, we propose a nonlinear sparse metric learning algorithm via boosting. We restructure a global optimization problem into a forward stage-wise learning of weak learners based on a rank-one decomposition of the weight matrix in the Mahalanobis distance metric. A gradient boosting algorithm is devised to obtain a sparse rank-one update of the weight matrix at each step. Nonlinear features are learned by a hierarchical expansion of interactions incorporated within the boosting algorithm. Meanwhile, an early stopping rule is imposed to control the overall complexity of the learned metric. As a result, our approach guarantees three desirable properties of the final metric: positive semi-definiteness, low rank and element-wise sparsity. Numerical experiments show that our learning model compares favorably with the state-of-the-art methods in the current literature of metric learning.
Yuting Ma, Tian Zheng
null
1512.03396
null
null
Predicting proximity with ambient mobile sensors for non-invasive health diagnostics
cs.CY cs.LG
Modern smart phones are becoming helpful in the areas of Internet-of-Things (IoT) and ambient health intelligence. By learning from the data of several mobile sensors, we detect nearness of the human body to a mobile device in a three-dimensional space, with no physical contact with the device, for non-invasive health diagnostics. We show that the human body generates wave patterns that interact with other naturally occurring ambient signals that can be measured by mobile sensors, such as temperature, humidity, magnetic field, acceleration, gravity, and light. This interaction consequently alters the patterns of the naturally occurring signals and thus exhibits characteristics that can be learned to predict the nearness of the human body to a mobile device, and hence provide diagnostic information for medical practitioners. Our prediction technique achieved 88.75% accuracy and 88.3% specificity.
Sylvester Olubolu Orimaye, Foo Chuan Leong, Chen Hui Lee, Eddy Cheng Han Ng
10.1109/MICC.2015.7725398
1512.03423
null
null
A Unified Approach to Error Bounds for Structured Convex Optimization Problems
math.OC cs.LG math.NA stat.ML
Error bounds, which refer to inequalities that bound the distance of vectors in a test set to a given set by a residual function, have proven to be extremely useful in analyzing the convergence rates of a host of iterative methods for solving optimization problems. In this paper, we present a new framework for establishing error bounds for a class of structured convex optimization problems, in which the objective function is the sum of a smooth convex function and a general closed proper convex function. Such a class encapsulates not only fairly general constrained minimization problems but also various regularized loss minimization formulations in machine learning, signal processing, and statistics. Using our framework, we show that a number of existing error bound results can be recovered in a unified and transparent manner. To further demonstrate the power of our framework, we apply it to a class of nuclear-norm regularized loss minimization problems and establish a new error bound for this class under a strict complementarity-type regularity condition. We then complement this result by constructing an example to show that the said error bound could fail to hold without the regularity condition. Consequently, we obtain a rather complete answer to a question raised by Tseng. We believe that our approach will find further applications in the study of error bounds for structured convex optimization problems.
Zirui Zhou, Anthony Man-Cho So
null
1512.03518
null
null
Distilling Knowledge from Deep Networks with Applications to Healthcare Domain
stat.ML cs.LG
Exponential growth in Electronic Healthcare Records (EHR) has resulted in new opportunities and urgent needs for discovery of meaningful data-driven representations and patterns of diseases in Computational Phenotyping research. Deep Learning models have shown superior performance for robust prediction in computational phenotyping tasks, but suffer from the issue of model interpretability which is crucial for clinicians involved in decision-making. In this paper, we introduce a novel knowledge-distillation approach called Interpretable Mimic Learning, to learn interpretable phenotype features for making robust prediction while mimicking the performance of deep learning models. Our framework uses Gradient Boosting Trees to learn interpretable features from deep learning models such as Stacked Denoising Autoencoder and Long Short-Term Memory. Exhaustive experiments on a real-world clinical time-series dataset show that our method obtains similar or better performance than the deep learning models, and it provides interpretable phenotypes for clinical decision making.
Zhengping Che, Sanjay Purushotham, Robinder Khemani, Yan Liu
null
1512.03542
null
null
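A miniature version of the Interpretable Mimic Learning distillation step described above, assuming a fitted deep model that exposes `predict_proba`; this is a sketch of the idea, not the authors' full framework:

```python
from sklearn.ensemble import GradientBoostingRegressor

def mimic_learning(deep_model, X_train, X_test):
    """Distill a deep model into gradient boosting trees (hedged sketch).

    deep_model: any fitted classifier exposing predict_proba.
    The trees are regressed onto the deep model's soft predictions, so
    they mimic its behavior while remaining interpretable (feature
    importances, individual trees).
    """
    soft_labels = deep_model.predict_proba(X_train)[:, 1]  # soft targets
    gbt = GradientBoostingRegressor(n_estimators=200, max_depth=3)
    gbt.fit(X_train, soft_labels)
    return gbt, gbt.predict(X_test)
```

Training on soft targets rather than the original hard labels is what lets the student model absorb the deep model's learned decision surface.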
Words are not Equal: Graded Weighting Model for building Composite Document Vectors
cs.CL cs.LG cs.NE
Despite the success of distributional semantics, composing phrases from word vectors remains an important challenge. Several methods have been tried for benchmark tasks such as sentiment classification, including word vector averaging, matrix-vector approaches based on parsing, and on-the-fly learning of paragraph vectors. Most models usually omit stop words from the composition. Instead of such a yes-no decision, we consider several graded schemes where words are weighted according to their discriminatory relevance with respect to their use in the document (e.g., idf). Some of these methods (particularly tf-idf) are seen to result in a significant improvement in performance over the prior state of the art. Further, combining such approaches into an ensemble based on alternate classifiers such as the RNN model results in a 1.6% performance improvement on the standard IMDB movie review dataset, and a 7.01% improvement on Amazon product reviews. Since these are language-free models that can be obtained in an unsupervised manner, they are also of interest for under-resourced languages such as Hindi, among many others. We demonstrate the language-free aspect by showing a gain of 12% for two review datasets over earlier results, and also release a new larger dataset for future testing (Singh, 2015).
Pranjal Singh, Amitabha Mukerjee
null
1512.03549
null
null
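A minimal sketch of the graded weighting idea above, using idf as the per-word weight when averaging word vectors; the dictionary-based `word_vecs` and `idf` inputs are assumptions for illustration:

```python
import numpy as np

def idf_weighted_doc_vector(tokens, word_vecs, idf, dim=300):
    """Graded (idf-weighted) composition of word vectors (hedged sketch).

    Instead of a hard stop-word yes/no decision, each word's vector is
    scaled by its idf weight before averaging, as in the tf-idf style
    weighting the paper finds most effective.

    tokens:    list of words in the document.
    word_vecs: dict word -> np.ndarray of shape (dim,).
    idf:       dict word -> inverse document frequency weight.
    """
    vec, total = np.zeros(dim), 0.0
    for w in tokens:
        if w in word_vecs:
            weight = idf.get(w, 0.0)   # frequent words get small weight
            vec += weight * word_vecs[w]
            total += weight
    return vec / total if total > 0 else vec
```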
Efficient Deep Feature Learning and Extraction via StochasticNets
cs.LG stat.ML
Deep neural networks are a powerful tool for feature learning and extraction given their ability to model high-level abstractions in highly complex data. One area worth exploring in feature learning and extraction using deep neural networks is efficient neural connectivity formation for faster feature learning and extraction. Motivated by findings of stochastic synaptic connectivity formation in the brain as well as the brain's uncanny ability to efficiently represent information, we propose the efficient learning and extraction of features via StochasticNets, where sparsely-connected deep neural networks can be formed via stochastic connectivity between neurons. To evaluate the feasibility of such a deep neural network architecture for feature learning and extraction, we train deep convolutional StochasticNets to learn abstract features using the CIFAR-10 dataset, and extract the learned features from images to perform classification on the SVHN and STL-10 datasets. Experimental results show that features learned using deep convolutional StochasticNets, with fewer neural connections than conventional deep convolutional neural networks, can allow for better or comparable classification accuracy than conventional deep neural networks: a relative test error decrease of ~4.5% for classification on the STL-10 dataset and ~1% for classification on the SVHN dataset. Furthermore, the deep features extracted using deep convolutional StochasticNets can provide comparable classification accuracy even when only 10% of the training data is used for feature learning. Finally, significant gains in feature extraction speed can be achieved in embedded applications using StochasticNets. As such, StochasticNets allow for faster feature learning and extraction while achieving better or comparable accuracy.
Mohammad Javad Shafiee, Parthipan Siva, Paul Fieguth, and Alexander Wong
null
1512.03844
null
null
Active Sampler: Light-weight Accelerator for Complex Data Analytics at Scale
cs.DB cs.LG stat.ML
Recent years have witnessed amazing outcomes from "Big Models" trained by "Big Data". Most popular algorithms for model training are iterative. Due to the surging volumes of data, we can usually afford to process only a fraction of the training data in each iteration. Typically, the data are either uniformly sampled or sequentially accessed. In this paper, we study how the data access pattern can affect model training. We propose an Active Sampler algorithm, where training data with more "learning value" to the model are sampled more frequently. The goal is to focus training effort on valuable instances near the classification boundaries, rather than evident cases, noisy data or outliers. We show the correctness and optimality of Active Sampler in theory, and then develop a light-weight vectorized implementation. Active Sampler is orthogonal to most approaches optimizing the efficiency of large-scale data analytics, and can be applied to most analytics models trained by the stochastic gradient descent (SGD) algorithm. Extensive experimental evaluations demonstrate that Active Sampler can speed up the training procedure of SVM, feature selection and deep learning by 1.6-2.2x, for comparable training quality.
Jinyang Gao, H.V.Jagadish, Beng Chin Ooi
null
1512.03880
null
null
Quantum assisted Gaussian process regression
quant-ph cs.LG stat.ML
Gaussian processes (GP) are a widely used model for regression problems in supervised machine learning. Implementation of GP regression typically requires $O(n^3)$ logic gates. We show that the quantum linear systems algorithm [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)] can be applied to Gaussian process regression (GPR), leading to an exponential reduction in computation time in some instances. We show that even in some cases not ideally suited to the quantum linear systems algorithm, a polynomial increase in efficiency still occurs.
Zhikuan Zhao, Jack K. Fitzsimons and Joseph F. Fitzsimons
10.1103/PhysRevA.99.052331
1512.03929
null
null
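For reference, the classical $O(n^3)$ computation targeted above is the linear solve inside Gaussian process regression. A hedged classical sketch of the posterior mean, with the Cholesky solve marked as the bottleneck the quantum linear systems algorithm would replace; the RBF kernel and toy data are illustrative:

```python
import numpy as np

def gp_regression(X, y, X_star, kernel, noise=1e-3):
    """Classical GP regression posterior mean (hedged sketch).

    The Cholesky factorization of K + sigma^2 I is the O(n^3) step whose
    linear-system solve the paper proposes to accelerate with the HHL
    quantum linear systems algorithm.
    """
    K = kernel(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)                        # O(n^3) bottleneck
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # alpha = K^{-1} y
    return kernel(X_star, X) @ alpha                 # predictive mean

rbf = lambda A, B: np.exp(-0.5 * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
X = np.random.randn(50, 2); y = np.sin(X[:, 0])
print(gp_regression(X, y, X[:5], rbf))
```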
Active Distance-Based Clustering using K-medoids
cs.LG
The k-medoids algorithm is a partitional, centroid-based clustering algorithm which uses pairwise distances of data points and tries to directly decompose the dataset with $n$ points into a set of $k$ disjoint clusters. However, k-medoids itself requires all pairwise distances between data points, which are not easy to obtain in many applications. In this paper, we introduce a new method which requires only a small proportion of the whole set of distances and estimates upper bounds for the unknown distances using the queried ones. The algorithm makes use of the triangle inequality to calculate these upper-bound estimates. Our method is built upon a recursive approach to cluster objects, choosing some points actively from each bunch of data and acquiring the distances between these prominent points from an oracle. Experimental results show that the proposed method, using only a small subset of the distances, can find proper clusterings on many real-world and synthetic datasets.
Mehrdad Ghadiri, Amin Aghaee, Mahdieh Soleymani Baghshah
null
1512.03953
null
null
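The triangle-inequality bound at the heart of the method above can be sketched in a few lines, assuming queried distances live in a dictionary and some landmark points have known distances to both endpoints; all names are illustrative:

```python
def distance_upper_bound(i, j, known, landmarks):
    """Triangle-inequality bound on an unqueried distance (hedged sketch).

    known: dict mapping point pairs (a, b) to distances already acquired
    from the oracle (stored in either order). For an unknown pair (i, j),
    any landmark l with both d(i, l) and d(l, j) known gives
        d(i, j) <= d(i, l) + d(l, j);
    the tightest such bound over all landmarks is returned.
    """
    d = lambda a, b: known.get((a, b), known.get((b, a)))
    bounds = [d(i, l) + d(l, j) for l in landmarks
              if d(i, l) is not None and d(l, j) is not None]
    return min(bounds) if bounds else float("inf")

known = {(0, 1): 2.0, (1, 2): 3.0, (0, 3): 1.0, (3, 2): 1.5}
print(distance_upper_bound(0, 2, known, landmarks=[1, 3]))  # min(5.0, 2.5)
```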
The Power of Depth for Feedforward Neural Networks
cs.LG cs.NE stat.ML
We show that there is a simple (approximately radial) function on $\mathbb{R}^d$, expressible by a small 3-layer feedforward neural network, which cannot be approximated by any 2-layer network to more than a certain constant accuracy, unless its width is exponential in the dimension. The result holds for virtually all known activation functions, including rectified linear units, sigmoids and thresholds, and formally demonstrates that depth -- even if increased by 1 -- can be exponentially more valuable than width for standard feedforward neural networks. Moreover, compared to related results in the context of Boolean functions, our result requires fewer assumptions, and the proof techniques and construction are very different.
Ronen Eldan and Ohad Shamir
null
1512.03965
null
null
Quantum Privacy-Preserving Data Mining
quant-ph cs.CR cs.DB cs.LG
Data mining is a key technology in big data analytics and it can discover understandable knowledge (patterns) hidden in large data sets. Association rule is one of the most useful knowledge patterns, and a large number of algorithms have been developed in the data mining literature to generate association rules corresponding to different problems and situations. Privacy becomes a vital issue when data mining is applied to sensitive data sets like medical records, commercial data sets and national security. In this Letter, we present a quantum protocol for mining association rules on vertically partitioned databases. The quantum protocol can improve the privacy level preserved by known classical protocols and at the same time it can exponentially reduce the computational complexity and communication cost.
Shenggang Ying, Mingsheng Ying and Yuan Feng
null
1512.04009
null
null
L1-Regularized Distributed Optimization: A Communication-Efficient Primal-Dual Framework
cs.LG
Despite the importance of sparsity in many large-scale applications, there are few methods for distributed optimization of sparsity-inducing objectives. In this paper, we present a communication-efficient framework for L1-regularized optimization in the distributed environment. By viewing classical objectives in a more general primal-dual setting, we develop a new class of methods that can be efficiently distributed and applied to common sparsity-inducing models, such as Lasso, sparse logistic regression, and elastic net-regularized problems. We provide theoretical convergence guarantees for our framework, and demonstrate its efficiency and flexibility with a thorough experimental comparison on Amazon EC2. Our proposed framework yields speedups of up to 50x as compared to current state-of-the-art methods for distributed L1-regularized optimization.
Virginia Smith, Simone Forte, Michael I. Jordan, Martin Jaggi
null
1512.04011
null
null
Tracking Idea Flows between Social Groups
cs.SI cs.LG
In many applications, ideas that are described by a set of words often flow between different groups. To facilitate users in analyzing the flow, we present a method to model the flow behaviors that aims at identifying the lead-lag relationships between word clusters of different user groups. In particular, an improved Bayesian conditional cointegration based on dynamic time warping is employed to learn links between words in different groups. A tensor-based technique is developed to cluster these linked words into different clusters (ideas) and track the flow of ideas. The main feature of the tensor representation is that we introduce two additional dimensions to represent both time and lead-lag relationships. Experiments on both synthetic and real datasets show that our method is more effective than methods based on traditional clustering techniques and achieves better accuracy. A case study was conducted to demonstrate the usefulness of our method in helping users understand the flow of ideas between different user groups on social media.
Yangxin Zhong, Shixia Liu, Xiting Wang, Jiannan Xiao, and Yangqiu Song
null
1512.04036
null
null
Distributed Optimization with Arbitrary Local Solvers
cs.LG math.OC
With the growth of data and necessity for distributed optimization methods, solvers that work well on a single machine must be re-designed to leverage distributed computation. Recent work in this area has been limited by focusing heavily on developing highly specific methods for the distributed environment. These special-purpose methods are often unable to fully leverage the competitive performance of their well-tuned and customized single machine counterparts. Further, they are unable to easily integrate improvements that continue to be made to single machine methods. To this end, we present a framework for distributed optimization that both allows the flexibility of arbitrary solvers to be used on each (single) machine locally, and yet maintains competitive performance against other state-of-the-art special-purpose distributed methods. We give strong primal-dual convergence rate guarantees for our framework that hold for arbitrary local solvers. We demonstrate the impact of local solver selection both theoretically and in an extensive experimental comparison. Finally, we provide thorough implementation details for our framework, highlighting areas for practical performance gains.
Chenxin Ma, Jakub Konečný, Martin Jaggi, Virginia Smith, Michael I. Jordan, Peter Richtárik and Martin Takáč
null
1512.04039
null
null
Big Data Scaling through Metric Mapping: Exploiting the Remarkable Simplicity of Very High Dimensional Spaces using Correspondence Analysis
stat.ML cs.LG
We present new findings in regard to data analysis in very high dimensional spaces. We use dimensionalities up to around one million. A particular benefit of Correspondence Analysis is its suitability for carrying out an orthonormal mapping, or scaling, of power law distributed data. Power law distributed data are found in many domains. Correspondence factor analysis provides a latent semantic or principal axes mapping. Our experiments use data from digital chemistry and finance, and other statistically generated data.
Fionn Murtagh
null
1512.04052
null
null
True Online Temporal-Difference Learning
cs.AI cs.LG
The temporal-difference methods TD($\lambda$) and Sarsa($\lambda$) form a core part of modern reinforcement learning. Their appeal comes from their good performance, low computational cost, and their simple interpretation, given by their forward view. Recently, new versions of these methods were introduced, called true online TD($\lambda$) and true online Sarsa($\lambda$), respectively (van Seijen & Sutton, 2014). These new versions maintain an exact equivalence with the forward view at all times, whereas the traditional versions only approximate it for small step-sizes. We hypothesize that these true online methods not only have better theoretical properties, but also dominate the regular methods empirically. In this article, we put this hypothesis to the test by performing an extensive empirical comparison. Specifically, we compare the performance of true online TD($\lambda$)/Sarsa($\lambda$) with regular TD($\lambda$)/Sarsa($\lambda$) on random MRPs, a real-world myoelectric prosthetic arm, and a domain from the Arcade Learning Environment. We use linear function approximation with tabular, binary, and non-binary features. Our results suggest that the true online methods indeed dominate the regular methods. Across all domains/representations the learning speed of the true online methods is often better, and never worse, than that of the regular methods. An additional advantage is that no choice between traces has to be made for the true online methods. Besides the empirical results, we provide an in-depth analysis of the theory behind true online temporal-difference learning. In addition, we show that new true online temporal-difference methods can be derived by making changes to the online forward view and then rewriting the update equations.
Harm van Seijen and A. Rupam Mahmood and Patrick M. Pilarski and Marlos C. Machado and Richard S. Sutton
null
1512.04087
null
null
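For concreteness, here is a sketch of the true online TD($\lambda$) update with linear function approximation, following the dutch-trace equations of van Seijen & Sutton (2014); the transition-stream interface is an assumption for illustration:

```python
import numpy as np

def true_online_td(env_steps, n_features, alpha=0.05, gamma=0.99, lam=0.9):
    """True online TD(lambda) with linear function approximation (sketch).

    env_steps yields transitions (x, r, x_next, done), where x and x_next
    are numpy feature vectors. The dutch eligibility trace keeps these
    online updates exactly equivalent to the forward view at all times.
    """
    w = np.zeros(n_features)    # value weights
    e = np.zeros(n_features)    # dutch eligibility trace
    v_old = 0.0
    for x, r, x_next, done in env_steps:
        v, v_next = w @ x, (0.0 if done else w @ x_next)
        delta = r + gamma * v_next - v
        # dutch trace update
        e = gamma * lam * e + (1 - alpha * gamma * lam * (e @ x)) * x
        # weight update with the v_old correction term
        w += alpha * (delta + v - v_old) * e - alpha * (v - v_old) * x
        v_old = 0.0 if done else v_next
        if done:
            e[:] = 0.0
    return w
```

The extra `(v - v_old)` correction terms are what distinguish this from regular TD($\lambda$) with accumulating traces.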
Stack Exchange Tagger
cs.CL cs.LG
The goal of our project is to develop an accurate tagger for questions posted on Stack Exchange. Our problem is an instance of the more general problem of developing accurate classifiers for large scale text datasets. We are tackling the multilabel classification problem where each item (in this case, a question) can belong to multiple classes (in this case, tags). We are predicting the tags (or keywords) for a particular Stack Exchange post given only the question text and the title of the post. In the process, we compare the performance of Support Vector Classification (SVC) for different kernel functions, loss functions, etc. We found that linear SVC with the Crammer-Singer technique produces the best results.
Sanket Mehta, Shagun Sodhani
null
1512.04092
null
null
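A hedged sketch of such a tagger, using scikit-learn's one-vs-rest reduction with linear SVC over tf-idf features; the hyperparameters and the exact multilabel reduction are illustrative (scikit-learn's Crammer-Singer formulation, `LinearSVC(multi_class="crammer_singer")`, applies to the multiclass comparison):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

def train_tagger(posts, tag_lists):
    """Multilabel Stack Exchange tag prediction (hedged sketch).

    posts:     list of strings (title + question text concatenated).
    tag_lists: list of tag lists, one per post.
    One-vs-rest trains a binary linear SVC per tag over tf-idf features.
    """
    mlb = MultiLabelBinarizer()
    Y = mlb.fit_transform(tag_lists)            # binary indicator matrix
    vec = TfidfVectorizer(max_features=50_000)
    X = vec.fit_transform(posts)
    clf = OneVsRestClassifier(LinearSVC()).fit(X, Y)
    return vec, mlb, clf

# prediction: mlb.inverse_transform(clf.predict(vec.transform(new_posts)))
```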
Policy Gradient Methods for Off-policy Control
cs.AI cs.LG
Off-policy learning refers to the problem of learning the value function of a way of behaving, or policy, while following a different policy. Gradient-based off-policy learning algorithms, such as GTD and TDC/GQ, converge even when using function approximation and incremental updates. However, they have been developed for the case of a fixed behavior policy. In control problems, one would like to adapt the behavior policy over time to become more greedy with respect to the existing value function. In this paper, we present the first gradient-based learning algorithms for this problem, which rely on the framework of policy gradient in order to modify the behavior policy. We present derivations of the algorithms, a convergence theorem, and empirical evidence showing that they compare favorably to existing approaches.
Lucas Lehnert and Doina Precup
null
1512.04105
null
null
Fighting Bandits with a New Kind of Smoothness
cs.LG cs.GT stat.ML
We define a novel family of algorithms for the adversarial multi-armed bandit problem, and provide a simple analysis technique based on convex smoothing. We prove two main results. First, we show that regularization via the \emph{Tsallis entropy}, which includes EXP3 as a special case, achieves the $\Theta(\sqrt{TN})$ minimax regret. Second, we show that a wide class of perturbation methods achieve a near-optimal regret as low as $O(\sqrt{TN \log N})$ if the perturbation distribution has a bounded hazard rate. For example, the Gumbel, Weibull, Frechet, Pareto, and Gamma distributions all satisfy this key property.
Jacob Abernethy, Chansoo Lee, Ambuj Tewari
null
1512.04152
null
null
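EXP3, the special case of the Tsallis-entropy family mentioned above, fits in a dozen lines. A minimal sketch, assuming rewards in [0, 1] delivered by a callback; the interface and default learning rate are illustrative:

```python
import numpy as np

def exp3(n_arms, T, reward_fn, eta=None, rng=None):
    """EXP3 for the adversarial multi-armed bandit (hedged sketch).

    reward_fn(t, arm) returns the (possibly adversarial) reward in [0, 1]
    of the arm played at round t. Maintains exponential weights over arms
    and importance-weights the observed loss of the arm actually played.
    """
    rng = rng or np.random.default_rng(0)
    eta = eta or np.sqrt(np.log(n_arms) / (T * n_arms))
    L = np.zeros(n_arms)                    # cumulative estimated losses
    p = np.full(n_arms, 1.0 / n_arms)
    for t in range(T):
        p = np.exp(-eta * (L - L.min()))    # shift for numerical stability
        p /= p.sum()
        arm = rng.choice(n_arms, p=p)
        loss = 1.0 - reward_fn(t, arm)
        L[arm] += loss / p[arm]             # importance-weighted estimate
    return p
```

In the paper's framing, this exponential-weights distribution arises from Shannon-entropy regularization; swapping in the Tsallis entropy yields the $\Theta(\sqrt{TN})$ minimax-optimal variant.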
Preconditioned Stochastic Gradient Descent
stat.ML cs.LG
Stochastic gradient descent (SGD) is still the workhorse for many practical problems. However, it converges slowly and can be difficult to tune. It is possible to precondition SGD to accelerate its convergence remarkably. But many attempts in this direction either aim at solving specialized problems, or result in significantly more complicated methods than SGD. This paper proposes a new method to estimate a preconditioner such that the amplitudes of perturbations of the preconditioned stochastic gradient match those of the perturbations of the parameters to be optimized, in a way comparable to the Newton method for deterministic optimization. Unlike preconditioners based on secant-equation fitting, as done in deterministic quasi-Newton methods, which assume a positive definite Hessian and approximate its inverse, the new preconditioner works equally well for both convex and non-convex optimization with exact or noisy gradients. When a stochastic gradient is used, it can naturally damp the gradient noise to stabilize SGD. Efficient preconditioner estimation methods are developed, and with reasonable simplifications they are applicable to large-scale problems. Experimental results demonstrate that, equipped with the new preconditioner and without any tuning effort, preconditioned SGD can efficiently solve many challenging problems like the training of a deep neural network or a recurrent neural network requiring extremely long-term memories.
Xi-Lin Li
10.1109/TNNLS.2017.2672978
1512.04202
null
null
Small-footprint Deep Neural Networks with Highway Connections for Speech Recognition
cs.CL cs.LG cs.NE
For speech recognition, deep neural networks (DNNs) have significantly improved recognition accuracy on most benchmark datasets and application domains. However, compared to conventional Gaussian mixture models, DNN-based acoustic models usually have a much larger number of model parameters, making them challenging to deploy on resource-constrained platforms, e.g., mobile devices. In this paper, we study the application of the recently proposed highway network to train small-footprint DNNs, which are thinner and deeper and have a significantly smaller number of model parameters compared to conventional DNNs. We investigated this approach on the AMI meeting speech transcription corpus, which has around 70 hours of audio data. The highway neural networks consistently outperformed their plain DNN counterparts, and the number of model parameters can be reduced significantly without sacrificing recognition accuracy.
Liang Lu and Steve Renals
null
1512.04280
null
null
Origami: A 803 GOp/s/W Convolutional Network Accelerator
cs.CV cs.AI cs.LG cs.NE
An ever increasing number of computer vision and image/video processing challenges are being approached using deep convolutional neural networks, obtaining state-of-the-art results in object recognition and detection, semantic segmentation, action recognition, optical flow and superresolution. Hardware acceleration of these algorithms is essential to adopt these improvements in embedded and mobile computer vision systems. We present a new architecture, design and implementation as well as the first reported silicon measurements of such an accelerator, outperforming previous work in terms of power-, area- and I/O-efficiency. The manufactured device provides up to 196 GOp/s on 3.09 mm^2 of silicon in UMC 65nm technology and can achieve a power efficiency of 803 GOp/s/W. The massively reduced bandwidth requirements make it the first architecture scalable to TOp/s performance.
Lukas Cavigelli, Luca Benini
10.1109/TCSVT.2016.2592330
1512.04295
null
null
Automatic Incident Classification for Big Traffic Data by Adaptive Boosting SVM
cs.LG
Modern cities experience heavy traffic flows and congestion regularly across space and time. Monitoring traffic situations becomes an important challenge for Traffic Control and Surveillance Systems (TCSS). In advanced TCSS, it is helpful to automatically detect and classify different traffic incidents, such as severity of congestion, abnormal driving patterns, and abrupt or illegal stops on the road. Although most TCSS are equipped with basic incident detection algorithms, these are too crude to be really useful as automated tools for further classification. In the literature, there is a lack of research on Automated Incident Classification (AIC). Therefore, a novel AIC method is proposed in this paper to tackle these challenges. In the proposed method, traffic signals are first extracted from captured videos and converted into spatial-temporal (ST) signals. Based on the characteristics of the ST signals, a set of realistic simulation data is generated to construct an extended big traffic database covering a variety of traffic situations. Next, a Mean-Shift filter is introduced to suppress the effect of noise and extract significant features from the ST signals. The extracted features are then associated with various types of traffic data: one normal type (inliers) and multiple abnormal types (outliers). For the classification, an adaptive boosting classifier is trained to detect outliers in traffic data automatically. Further, a Support Vector Machine (SVM) based method is adopted to train the model for identifying the categories of outliers. In short, this hybrid approach is called the Adaptive Boosting Support Vector Machines (AB-SVM) method. Experimental results show that the proposed AB-SVM method achieves satisfactory results with more than 92% classification accuracy on average.
Li-Li Wang, Henry Y.T. Ngan, Nelson H.C. Yung
null
1512.04392
null
null
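A hedged sketch of the two-stage AB-SVM classification described above, using scikit-learn's AdaBoost and SVC; the upstream Mean-Shift feature extraction from the spatial-temporal signals is assumed to have been done, and the array interface is illustrative:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

def ab_svm(X_train, is_outlier, outlier_type, X_test):
    """Two-stage AB-SVM incident classification (hedged sketch).

    Stage 1: AdaBoost flags abnormal traffic (outliers) vs normal (inliers).
    Stage 2: an SVM assigns each flagged outlier to an incident category.

    is_outlier:   (n,) array of 0/1 labels (1 = abnormal).
    outlier_type: (n,) array of incident categories, used only where
                  is_outlier == 1.
    """
    detector = AdaBoostClassifier(n_estimators=100).fit(X_train, is_outlier)
    mask = is_outlier == 1
    categorizer = SVC(kernel="rbf").fit(X_train[mask], outlier_type[mask])
    flagged = detector.predict(X_test) == 1
    labels = np.full(len(X_test), -1)        # -1 = normal traffic
    if flagged.any():
        labels[flagged] = categorizer.predict(X_test[flagged])
    return labels
```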
We Are Humor Beings: Understanding and Predicting Visual Humor
cs.CV cs.CL cs.LG
Humor is an integral part of human lives. Despite being tremendously impactful, it is perhaps surprising that we do not have a detailed understanding of humor yet. As interactions between humans and AI systems increase, it is imperative that these systems are taught to understand subtleties of human expressions such as humor. In this work, we are interested in the question - what content in a scene causes it to be funny? As a first step towards understanding visual humor, we analyze the humor manifested in abstract scenes and design computational models for them. We collect two datasets of abstract scenes that facilitate the study of humor at both the scene-level and the object-level. We analyze the funny scenes and explore the different types of humor depicted in them via human studies. We model two tasks that we believe demonstrate an understanding of some aspects of visual humor. The tasks involve predicting the funniness of a scene and altering the funniness of a scene. We show that our models perform well quantitatively, and qualitatively through human studies. Our datasets are publicly available.
Arjun Chandrasekaran, Ashwin K. Vijayakumar, Stanislaw Antol, Mohit Bansal, Dhruv Batra, C. Lawrence Zitnick and Devi Parikh
null
1512.04407
null
null
Near-Optimal Bounds for Binary Embeddings of Arbitrary Sets
cs.LG cs.DS math.FA
We study embedding a subset $K$ of the unit sphere to the Hamming cube $\{-1,+1\}^m$. We characterize the tradeoff between distortion and sample complexity $m$ in terms of the Gaussian width $\omega(K)$ of the set. For subspaces and several structured sets we show that Gaussian maps provide the optimal tradeoff $m\sim \delta^{-2}\omega^2(K)$, in particular for $\delta$ distortion one needs $m\approx\delta^{-2}{d}$ where $d$ is the subspace dimension. For general sets, we provide sharp characterizations which reduce to $m\approx{\delta^{-4}}{\omega^2(K)}$ after simplification. We provide improved results for local embedding of points that are in close proximity of each other, which is related to locality sensitive hashing. We also discuss faster binary embedding where one takes advantage of an initial sketching procedure based on the Fast Johnson-Lindenstrauss Transform. Finally, we list several numerical observations and discuss open problems.
Samet Oymak, Ben Recht
null
1512.04433
null
null
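The Gaussian binary embedding studied above is one line of linear algebra plus a sign. A minimal sketch; the seed and helper names are illustrative:

```python
import numpy as np

def binary_embed(X, m, seed=0):
    """Gaussian binary embedding f(x) = sign(Ax) (hedged sketch).

    Maps points on the unit sphere (rows of X) to {-1, +1}^m. Normalized
    Hamming distance between the codes then approximates angular distance,
    with distortion delta controlled by choosing m on the order of
    delta^-2 * omega^2(K) in the structured-set regime of the paper.
    """
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, X.shape[1]))   # Gaussian map
    return np.sign(X @ A.T)

def hamming_distance(a, b):
    return np.mean(a != b, axis=-1)            # normalized Hamming distance
```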
Memory-based control with recurrent neural networks
cs.LG
Partially observed control problems are a challenging aspect of reinforcement learning. We extend two related, model-free algorithms for continuous control -- deterministic policy gradient and stochastic value gradient -- to solve partially observed domains using recurrent neural networks trained with backpropagation through time. We demonstrate that this approach, coupled with long short-term memory, is able to solve a variety of physical control problems exhibiting an assortment of memory requirements. These include the short-term integration of information from noisy sensors and the identification of system parameters, as well as long-term memory problems that require preserving information over many time steps. We also demonstrate success on a combined exploration and memory problem in the form of a simplified version of the well-known Morris water maze task. Finally, we show that our approach can deal with high-dimensional observations by learning directly from pixels. We find that recurrent deterministic and stochastic policies are able to learn similarly good solutions to these tasks, including the water maze where the agent must learn effective search strategies.
Nicolas Heess, Jonathan J Hunt, Timothy P Lillicrap, David Silver
null
1512.04455
null
null
Semisupervised Autoencoder for Sentiment Analysis
cs.LG
In this paper, we investigate the usage of autoencoders in modeling textual data. Traditional autoencoders suffer from at least two limitations: scalability with the high dimensionality of vocabulary size and dealing with task-irrelevant words. We address this problem by introducing supervision via the loss function of autoencoders. In particular, we first train a linear classifier on the labeled data, then define a loss for the autoencoder with the weights learned from the linear classifier. To reduce the bias brought by one single classifier, we define a posterior probability distribution on the weights of the classifier, and derive the marginalized loss of the autoencoder with Laplace approximation. We show that our choice of loss function can be rationalized from the perspective of Bregman Divergence, which justifies the soundness of our model. We evaluate the effectiveness of our model on six sentiment analysis datasets, and show that our model significantly outperforms all the competing methods with respect to classification accuracy. We also show that our model is able to take advantage of unlabeled data and get improved performance. We further show that our model successfully learns highly discriminative feature maps, which explains its superior performance.
Shuangfei Zhai, Zhongfei Zhang
null
1512.04466
null
null
\"Uber die Klassifizierung von Knoten in dynamischen Netzwerken mit Inhalt
cs.LG
This paper explains the DYCOS algorithm as introduced by Aggarwal and Li in 2011. It operates on graphs whose nodes are partially labeled and automatically adds missing labels to nodes; this procedure is called "classification". To do so, the DYCOS algorithm makes use of the structure of the graph as well as textual content assigned to the nodes. In their experimental analysis, Aggarwal and Li measured that DYCOS adds the missing labels to a graph with 19,396 nodes, of which 14,814 are labeled, and to another graph with 806,635 nodes, of which 18,999 are labeled, within less than a minute on one core of an Intel Xeon 2.5 GHz CPU with 32 GB RAM. Additionally, the publication by Aggarwal and Li is critically discussed, and possible extensions of the DYCOS algorithm are proposed.
Martin Thoma
null
1512.04469
null
null
Dropout Training of Matrix Factorization and Autoencoder for Link Prediction in Sparse Graphs
cs.LG
Matrix factorization (MF) and Autoencoder (AE) are among the most successful approaches of unsupervised learning. While MF based models have been extensively exploited in the graph modeling and link prediction literature, the AE family has not gained much attention. In this paper we investigate both MF and AE's application to the link prediction problem in sparse graphs. We show the connection between AE and MF from the perspective of multiview learning, and further propose MF+AE: a model training MF and AE jointly with shared parameters. We apply dropout to training both the MF and AE parts, and show that it can significantly prevent overfitting by acting as an adaptive regularization. We conduct experiments on six real world sparse graph datasets, and show that MF+AE consistently outperforms the competing methods, especially on datasets that demonstrate strong non-cohesive structures.
Shuangfei Zhai, Zhongfei Zhang
null
1512.04483
null
null
On non-iterative training of a neural classifier
cs.CV cs.LG cs.NE
Recently an algorithm was discovered which separates points in n dimensions by planes in such a manner that no two points are left un-separated by at least one plane [1-3]. Using this new algorithm we show that there are two ways of classification by a neural network for a large-dimension feature space, both of which are non-iterative and deterministic. To demonstrate the power of both these methods we apply them exhaustively to the classical pattern recognition problem of Fisher and Anderson's IRIS flower data set and present the results. It is expected that these methods will now be widely used for the training of neural networks for Deep Learning, not only because of their non-iterative and deterministic nature but also because of their efficiency and speed, and that they will supersede other classification methods which are iterative in nature and rely on error minimization.
K. Eswaran and K. Damodhar Rao
null
1512.04509
null
null
Relaxed Linearized Algorithms for Faster X-Ray CT Image Reconstruction
math.OC cs.LG stat.ML
Statistical image reconstruction (SIR) methods are studied extensively for X-ray computed tomography (CT) due to the potential of acquiring CT scans with reduced X-ray dose while maintaining image quality. However, the longer reconstruction time of SIR methods hinders their use in X-ray CT in practice. To accelerate statistical methods, many optimization techniques have been investigated. Over-relaxation is a common technique to speed up convergence of iterative algorithms. For instance, using a relaxation parameter that is close to two in alternating direction method of multipliers (ADMM) has been shown to speed up convergence significantly. This paper proposes a relaxed linearized augmented Lagrangian (AL) method that shows theoretical faster convergence rate with over-relaxation and applies the proposed relaxed linearized AL method to X-ray CT image reconstruction problems. Experimental results with both simulated and real CT scan data show that the proposed relaxed algorithm (with ordered-subsets [OS] acceleration) is about twice as fast as the existing unrelaxed fast algorithms, with negligible computation and memory overhead.
Hung Nien and Jeffrey A. Fessler
10.1109/TMI.2015.2508780
1512.04564
null
null
Learning optimal nonlinearities for iterative thresholding algorithms
cs.LG stat.ML
Iterative shrinkage/thresholding algorithm (ISTA) is a well-studied method for finding sparse solutions to ill-posed inverse problems. In this letter, we present a data-driven scheme for learning optimal thresholding functions for ISTA. The proposed scheme is obtained by relating iterations of ISTA to layers of a simple deep neural network (DNN) and developing a corresponding error backpropagation algorithm that allows fine-tuning of the thresholding functions. Simulations on sparse statistical signals illustrate potential gains in estimation quality due to the proposed data-adaptive ISTA.
Ulugbek S. Kamilov and Hassan Mansour
10.1109/LSP.2016.2548245
1512.04754
null
null
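For reference, here is the fixed soft-thresholding baseline that the learned nonlinearities above would replace; the step-size rule via the spectral norm is a standard choice assumed for this sketch, not taken from the letter:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=100):
    """Plain ISTA with the soft-thresholding nonlinearity (hedged sketch).

    Solves min_x 0.5 ||Ax - y||^2 + lam ||x||_1. The paper's scheme unrolls
    these iterations as DNN layers and backpropagates through them to tune
    the thresholding function; `soft` below is the fixed starting point.
    """
    soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the data term, then shrinkage
        x = soft(x - step * A.T @ (A @ x - y), step * lam)
    return x
```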
From One Point to A Manifold: Knowledge Graph Embedding For Precise Link Prediction
cs.AI cs.LG
Knowledge graph embedding aims at offering a numerical knowledge representation paradigm by transforming the entities and relations into a continuous vector space. However, existing methods cannot characterize the knowledge graph at a fine enough level to make precise predictions, for two reasons: they form an ill-posed algebraic system and apply an overstrict geometric form. As precise prediction is critical, we propose a manifold-based embedding principle (ManifoldE) which can be treated as a well-posed algebraic system that expands the position of golden triples from one point in current models to a manifold in ours. Extensive experiments show that the proposed models achieve substantial improvements over the state-of-the-art baselines, especially for the precise prediction task, while maintaining high efficiency.
Han Xiao, Minlie Huang, Xiaoyan Zhu
null
1512.04792
null
null
Causal and anti-causal learning in pattern recognition for neuroimaging
stat.ML cs.LG q-bio.NC stat.ME
Pattern recognition in neuroimaging distinguishes between two types of models: encoding and decoding models. This distinction is based on the insight that brain state features that are found to be relevant in an experimental paradigm carry a different meaning in encoding models than in decoding models. In this paper, we argue that this distinction is not sufficient: relevant features in encoding and decoding models carry a different meaning depending on whether they represent causal or anti-causal relations. We provide a theoretical justification for this argument and conclude that causal inference is essential for interpretation in neuroimaging.
Sebastian Weichwald, Bernhard Schölkopf, Tonio Ball, Moritz Grosse-Wentrup
10.1109/PRNI.2014.6858551
1512.04808
null
null
Feature-Level Domain Adaptation
stat.ML cs.LG
Domain adaptation is the supervised learning setting in which the training and test data are sampled from different distributions: training data is sampled from a source domain, whilst test data is sampled from a target domain. This paper proposes and studies an approach, called feature-level domain adaptation (FLDA), that models the dependence between the two domains by means of a feature-level transfer model that is trained to describe the transfer from source to target domain. Subsequently, we train a domain-adapted classifier by minimizing the expected loss under the resulting transfer model. For linear classifiers and a large family of loss functions and transfer models, this expected loss can be computed or approximated analytically, and minimized efficiently. Our empirical evaluation of FLDA focuses on problems comprising binary and count data in which the transfer can be naturally modeled via a dropout distribution, which allows the classifier to adapt to differences in the marginal probability of features in the source and the target domain. Our experiments on several real-world problems show that FLDA performs on par with state-of-the-art domain-adaptation techniques.
Wouter M. Kouw, Jesse H. Krijthe, Marco Loog and Laurens J.P. van der Maaten
null
1512.04829
null
null
Data Driven Resource Allocation for Distributed Learning
cs.LG cs.DS stat.ML
In distributed machine learning, data is dispatched to multiple machines for processing. Motivated by the fact that similar data points often belong to the same or similar classes, and more generally, classification rules of high accuracy tend to be "locally simple but globally complex" (Vapnik & Bottou 1993), we propose data dependent dispatching that takes advantage of such structure. We present an in-depth analysis of this model, providing new algorithms with provable worst-case guarantees, analysis proving existing scalable heuristics perform well in natural non worst-case conditions, and techniques for extending a dispatching rule from a small sample to the entire distribution. We overcome novel technical challenges to satisfy important conditions for accurate distributed learning, including fault tolerance and balancedness. We empirically compare our approach with baselines based on random partitioning, balanced partition trees, and locality sensitive hashing, showing that we achieve significantly higher accuracy on both synthetic and real world image and advertising datasets. We also demonstrate that our technique strongly scales with the available computing power.
Travis Dick, Mu Li, Venkata Krishna Pillutla, Colin White, Maria Florina Balcan, Alex Smola
null
1512.04848
null
null
Energy-Efficient Classification for Anomaly Detection: The Wireless Channel as a Helper
cs.IT cs.LG math.IT
Anomaly detection has various applications including condition monitoring and fault diagnosis. The objective is to sense the environment, learn the normal system state, and then periodically classify whether the instantaneous state deviates from the normal one or not. A flexible and cost-effective way of monitoring a system state is to use a wireless sensor network. In the traditional approach, the sensors encode their observations and transmit them to a fusion center by means of some interference avoiding channel access method. The fusion center then decodes all the data and classifies the corresponding system state. As this approach can be highly inefficient in terms of energy consumption, in this paper we propose a transmission scheme that exploits interference for carrying out the anomaly detection directly in the air. In other words, the wireless channel helps the fusion center to retrieve the sought classification outcome immediately from the channel output. To achieve this, the chosen learning model is linear support vector machines. After discussing the proposed scheme and proving its reliability, we present numerical examples demonstrating that the scheme reduces the energy consumption for anomaly detection by up to 53% compared to a strategy that uses time division multiple-access.
Kiril Ralinovski, Mario Goldenbaum, and Sławomir Stańczak
null
1512.04857
null
null
Increasing the Action Gap: New Operators for Reinforcement Learning
cs.AI cs.LG
This paper introduces new optimality-preserving operators on Q-functions. We first describe an operator for tabular representations, the consistent Bellman operator, which incorporates a notion of local policy consistency. We show that this local consistency leads to an increase in the action gap at each state; increasing this gap, we argue, mitigates the undesirable effects of approximation and estimation errors on the induced greedy policies. This operator can also be applied to discretized continuous space and time problems, and we provide empirical results evidencing superior performance in this context. Extending the idea of a locally consistent operator, we then derive sufficient conditions for an operator to preserve optimality, leading to a family of operators which includes our consistent Bellman operator. As corollaries we provide a proof of optimality for Baird's advantage learning algorithm and derive other gap-increasing operators with interesting properties. We conclude with an empirical study on 60 Atari 2600 games illustrating the strong potential of these new operators.
Marc G. Bellemare, Georg Ostrovski, Arthur Guez, Philip S. Thomas and R\'emi Munos
null
1512.04860
null
null
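For reference alongside the abstract above, here is a minimal tabular sketch of a gap-increasing operator, using the advantage-learning form T Q = T*Q - alpha (V - Q) (Baird's operator, which the paper proves to be optimality-preserving). The toy MDP, alpha, and sweep count are illustrative assumptions; a small alpha keeps the fixed-point iteration stable.

```python
# Tabular value iteration with a gap-increasing (advantage-learning) operator.
import numpy as np

def gap_increasing_iteration(P, R, gamma=0.9, alpha=0.3, sweeps=300):
    """P[a, x, y]: transition probabilities; R[x, a]: rewards. Returns Q."""
    n_actions, n_states, _ = P.shape
    Q = np.zeros((n_states, n_actions))
    for _ in range(sweeps):
        V = Q.max(axis=1)                         # V(x) = max_b Q(x, b)
        bellman = R + gamma * np.einsum('axy,y->xa', P, V)
        Q = bellman - alpha * (V[:, None] - Q)    # widen the action gap
    return Q

P = np.array([[[0.9, 0.1], [0.2, 0.8]],           # action 0
              [[0.1, 0.9], [0.8, 0.2]]])          # action 1
R = np.array([[1.0, 0.0], [0.0, 0.5]])
Q = gap_increasing_iteration(P, R)
print(np.round(Q, 3), np.round(Q.max(1) - Q.min(1), 3))  # values and gaps
```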
Strategies for Training Large Vocabulary Neural Language Models
cs.CL cs.LG
Training neural network language models over large vocabularies is still computationally very costly compared to count-based models such as Kneser-Ney. At the same time, neural language models are gaining popularity for many applications such as speech recognition and machine translation whose success depends on scalability. We present a systematic comparison of strategies to represent and train large vocabularies, including softmax, hierarchical softmax, target sampling, noise contrastive estimation and self normalization. We further extend self normalization to be a proper estimator of likelihood and introduce an efficient variant of softmax. We evaluate each method on three popular benchmarks, examining performance on rare words, the speed/accuracy trade-off and complementarity to Kneser-Ney.
Welin Chen and David Grangier and Michael Auli
null
1512.04906
null
null
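One of the strategies compared above, target sampling, is easy to sketch: the softmax normalizer is estimated over the true target plus a handful of sampled negatives instead of the full vocabulary. The uniform proposal and absence of importance weights below are simplifying assumptions, not the paper's exact recipe.

```python
# Minimal numpy sketch of a sampled softmax loss for a large vocabulary.
import numpy as np

def sampled_softmax_loss(hidden, W, target, n_samples=50, rng=None):
    """hidden: (d,) context vector, W: (V, d) output embeddings, target: word id."""
    rng = np.random.default_rng(rng)
    V = W.shape[0]
    negatives = rng.choice(V, size=n_samples, replace=False)
    negatives = negatives[negatives != target]       # avoid duplicating the target
    candidates = np.concatenate(([target], negatives))
    logits = W[candidates] @ hidden                  # scores over candidates only
    logits -= logits.max()                           # numerical stability
    log_norm = np.log(np.exp(logits).sum())
    return -(logits[0] - log_norm)                   # NLL of the true target

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(100_000, 64))        # 100k-word vocabulary
h = rng.normal(size=64)
print(sampled_softmax_loss(h, W, target=42, rng=0))
```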
A Light Touch for Heavily Constrained SGD
cs.LG
Minimizing empirical risk subject to a set of constraints can be a useful strategy for learning restricted classes of functions, such as monotonic functions, submodular functions, classifiers that guarantee a certain class label for some subset of examples, etc. However, these restrictions may result in a very large number of constraints. Projected stochastic gradient descent (SGD) is often the default choice for large-scale optimization in machine learning, but requires a projection after each update. For heavily-constrained objectives, we propose an efficient extension of SGD that stays close to the feasible region while only applying constraints probabilistically at each iteration. Theoretical analysis shows a compelling trade-off between per-iteration work and the number of iterations needed on problems with a large number of constraints.
Andrew Cotter, Maya Gupta, Jan Pfeifer
null
1512.04960
null
null
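A hedged sketch of the "light touch" idea above: instead of projecting onto the intersection of all constraints after every update, sample one constraint at random and project onto it alone. The halfspace constraint form, step sizes, and the single cleanup pass at the end are illustrative assumptions (repeated cleanup cycles would converge to exact feasibility).

```python
# SGD with probabilistic constraint enforcement on halfspaces a_i.w <= b_i.
import numpy as np

def project_halfspace(w, a, b):
    """Euclidean projection of w onto {v : a.v <= b}."""
    viol = a @ w - b
    return w if viol <= 0 else w - viol * a / (a @ a)

def lightly_constrained_sgd(grad, w0, A, b, steps=1000, lr=0.05, rng=None):
    rng = np.random.default_rng(rng)
    w = w0.copy()
    for t in range(steps):
        w -= lr / np.sqrt(t + 1) * grad(w)
        i = rng.integers(len(b))              # touch one random constraint
        w = project_halfspace(w, A[i], b[i])
    for i in range(len(b)):                   # one cleanup sweep at the end
        w = project_halfspace(w, A[i], b[i])
    return w

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 10)); bvec = np.ones(500)
w = lightly_constrained_sgd(lambda w: 2 * (w - 3.0), np.zeros(10), A, bvec, rng=0)
print((A @ w <= bvec + 1e-6).mean())          # fraction of satisfied constraints
```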
Streaming Kernel Principal Component Analysis
cs.DS cs.LG stat.ML
Kernel principal component analysis (KPCA) provides a concise set of basis vectors which capture non-linear structures within large data sets, and is a central tool in data analysis and learning. To allow for non-linear relations, typically a full $n \times n$ kernel matrix is constructed over $n$ data points, but this requires too much space and time for large values of $n$. Techniques such as the Nystr\"om method and random feature maps can help towards this goal, but they do not explicitly maintain the basis vectors in a stream and take more space than desired. We propose a new approach for streaming KPCA which maintains a small set of basis elements in a stream, requiring space only logarithmic in $n$, and also improves the dependence on the error parameter. Our technique combines together random feature maps with recent advances in matrix sketching, it has guaranteed spectral norm error bounds with respect to the original kernel matrix, and it compares favorably in practice to state-of-the-art approaches.
Mina Ghashami, Daniel Perry, Jeff M. Phillips
null
1512.05059
null
null
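In the spirit of the abstract above, the sketch below combines random Fourier features for an RBF kernel with Frequent Directions matrix sketching, so space is O(ell x D) regardless of the stream length. The exact algorithm and its spectral guarantees are in the paper; this is a simplified illustration under those two ingredients.

```python
# Streaming KPCA sketch: random Fourier features + Frequent Directions.
import numpy as np

class StreamingKPCA:
    def __init__(self, dim, D=256, ell=32, gamma=1.0, rng=None):
        r = np.random.default_rng(rng)
        self.W = r.normal(scale=np.sqrt(2 * gamma), size=(dim, D))  # RBF freqs
        self.b = r.uniform(0, 2 * np.pi, D)
        self.D, self.ell = D, ell
        self.B = np.zeros((2 * ell, D))   # Frequent Directions buffer
        self.row = 0

    def _feature(self, x):
        return np.sqrt(2.0 / self.D) * np.cos(x @ self.W + self.b)

    def partial_fit(self, x):
        self.B[self.row] = self._feature(x)
        self.row += 1
        if self.row == 2 * self.ell:      # buffer full: shrink via SVD
            _, s, Vt = np.linalg.svd(self.B, full_matrices=False)
            s2 = np.maximum(s**2 - s[self.ell - 1] ** 2, 0.0)
            self.B = np.sqrt(s2)[:, None] * Vt
            self.row = self.ell

    def components(self, k):
        _, _, Vt = np.linalg.svd(self.B[:self.row], full_matrices=False)
        return Vt[:k]                      # top-k directions in feature space

rng = np.random.default_rng(0)
model = StreamingKPCA(dim=5, rng=0)
for x in rng.normal(size=(10_000, 5)):
    model.partial_fit(x)
print(model.components(3).shape)
```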
DNA-Level Splice Junction Prediction using Deep Recurrent Neural Networks
cs.LG q-bio.GN
A eukaryotic gene consists of multiple exons (protein coding regions) and introns (non-coding regions), and a splice junction refers to the boundary between an exon and an intron. Precise identification of splice junctions on a gene is important for deciphering its primary structure, function, and interaction. Experimental techniques for determining exon/intron boundaries include RNA-seq, which is often accompanied by computational approaches. Canonical splicing signals are known, but computational junction prediction still remains challenging because of a large number of false positives and other complications. In this paper, we exploit deep recurrent neural networks (RNNs) to model DNA sequences and to detect splice junctions thereon. We test various RNN units and architectures including long short-term memory units, gated recurrent units, and recently proposed iRNN for in-depth design space exploration. According to our experimental results, the proposed approach significantly outperforms not only conventional machine learning-based methods but also a recent state-of-the-art deep belief network-based technique in terms of prediction accuracy.
Byunghan Lee, Taehoon Lee, Byunggook Na, Sungroh Yoon
null
1512.05135
null
null
Learning Games and Rademacher Observations Losses
cs.LG
It has recently been shown that supervised learning with the popular logistic loss is equivalent to optimizing the exponential loss over sufficient statistics about the class: Rademacher observations (rados). We first show that this unexpected equivalence can actually be generalized to other example / rado losses, with necessary and sufficient conditions for the equivalence, exemplified on five losses that bear popular names in various fields: exponential (boosting), mean-variance (finance), Linear Hinge (on-line learning), ReLU (deep learning), and unhinged (statistics). Second, we show that the generalization unveils a surprising new connection to regularized learning, and in particular a sufficient condition under which regularizing the loss over examples is equivalent to regularizing the rados (with Minkowski sums) in the equivalent rado loss. This brings simple and powerful rado-based learning algorithms for sparsity-controlling regularization, that we exemplify on a boosting algorithm for the regularized exponential rado-loss, which formally boosts over four types of regularization, including the popular ridge and lasso, and the recently coined slope --- we obtain the first proven boosting algorithm for this last regularization. Through our first contribution on the equivalence of rado and example-based losses, Omega-R.AdaBoost appears to be an efficient proxy to boost the regularized logistic loss over examples using whichever of the four regularizers. Experiments display that regularization consistently improves the performance of rado-based learning, and may challenge or beat the state of the art of example-based learning even when learning over small sets of rados. Finally, we connect regularization to differential privacy, and display how tiny budgets can be afforded on big domains while beating (protected) example-based learning.
Richard Nock
null
1512.05244
null
null
Blockout: Dynamic Model Selection for Hierarchical Deep Networks
cs.CV cs.LG
Most deep architectures for image classification--even those that are trained to classify a large number of diverse categories--learn shared image representations with a single model. Intuitively, however, categories that are more similar should share more information than those that are very different. While hierarchical deep networks address this problem by learning separate features for subsets of related categories, current implementations require simplified models using fixed architectures specified via heuristic clustering methods. Instead, we propose Blockout, a method for regularization and model selection that simultaneously learns both the model architecture and parameters. A generalization of Dropout, our approach gives a novel parametrization of hierarchical architectures that allows for structure learning via back-propagation. To demonstrate its utility, we evaluate Blockout on the CIFAR and ImageNet datasets, demonstrating improved classification accuracy, better regularization performance, faster training, and the clear emergence of hierarchical network structures.
Calvin Murdock, Zhen Li, Howard Zhou, Tom Duerig
10.1109/CVPR.2016.283
1512.05246
null
null
Feature Representation for ICU Mortality
cs.AI cs.LG stat.ML
Good predictors of ICU Mortality have the potential to identify high-risk patients earlier, improve ICU resource allocation, or create more accurate population-level risk models. Machine learning practitioners typically make choices about how to represent features in a particular model, but these choices are seldom evaluated quantitatively. This study compares the performance of different representations of clinical event data from MIMIC II in a logistic regression model to predict 36-hour ICU mortality. The most common representations are linear (normalized counts) and binary (yes/no). These, along with a new representation termed "hill", are compared using both L1 and L2 regularization. Results indicate that the introduced "hill" representation outperforms both the binary and linear representations; the hill representation thus has the potential to improve existing models of ICU mortality.
Harini Suresh
null
1512.05294
null
null
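The two standard representations compared above are simple enough to sketch directly. The paper's "hill" representation is its own contribution and is not reproduced here; `hill_placeholder` below is only an illustrative saturating transform standing in for it.

```python
# Hedged sketch of event-count representations for a logistic regression model.
import numpy as np

def binary_rep(counts):
    """Did the event occur at all (yes/no)."""
    return (counts > 0).astype(float)

def linear_rep(counts):
    """Counts normalized per event into [0, 1]."""
    m = counts.max(axis=0, keepdims=True)
    return counts / np.maximum(m, 1)

def hill_placeholder(counts, k=2.0):
    # Illustrative saturating curve only; see the paper for the real "hill".
    return counts / (counts + k)

counts = np.array([[0, 3, 10], [1, 0, 2]], dtype=float)  # patients x events
print(binary_rep(counts))
print(linear_rep(counts))
print(np.round(hill_placeholder(counts), 2))
```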
Unsupervised Feature Construction for Improving Data Representation and Semantics
cs.AI cs.LG
Feature-based format is the main data representation format used by machine learning algorithms. When the features do not properly describe the initial data, performance starts to degrade. Some algorithms address this problem by internally changing the representation space, but the newly-constructed features are rarely comprehensible. We seek to construct, in an unsupervised way, new features that are more appropriate for describing a given dataset and, at the same time, comprehensible for a human user. We propose two algorithms that construct the new features as conjunctions of the initial primitive features or their negations. The generated feature sets have reduced correlations between features and succeed in catching some of the hidden relations between individuals in a dataset. For example, a feature like $sky \wedge \neg building \wedge panorama$ would be true for non-urban images and is more informative than simple features expressing the presence or the absence of an object. The notion of Pareto optimality is used to evaluate feature sets and to obtain a balance between total correlation and the complexity of the resulted feature set. Statistical hypothesis testing is used in order to automatically determine the values of the parameters used for constructing a data-dependent feature set. We experimentally show that our approaches achieve the construction of informative feature sets for multiple datasets.
Marian-Andrei Rizoiu, Julien Velcin, St\'ephane Lallich
10.1007/s10844-013-0235-x
1512.05467
null
null
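A minimal sketch of the central construction above: conjunction features over binary primitives or their negations, as in the example sky AND NOT building. The paper's Pareto-based evaluation and statistical hypothesis tests are not reproduced; this simply enumerates candidate pairs and keeps the non-trivial ones by a support filter.

```python
# Enumerate two-literal conjunction features over binary primitives.
import itertools
import numpy as np

def conjunctions(X, names, min_support=0.05, max_support=0.95):
    """X: (n, d) binary matrix. Returns new features and readable names."""
    n, d = X.shape
    feats, labels = [], []
    lits = [(j, False) for j in range(d)] + [(j, True) for j in range(d)]
    for (j, nj), (k, nk) in itertools.combinations(lits, 2):
        if j == k:
            continue  # skip "f AND NOT f"
        col = (1 - X[:, j] if nj else X[:, j]) * (1 - X[:, k] if nk else X[:, k])
        s = col.mean()
        if min_support < s < max_support:      # drop trivial conjunctions
            feats.append(col)
            labels.append(f"{'NOT ' if nj else ''}{names[j]} AND "
                          f"{'NOT ' if nk else ''}{names[k]}")
    return np.array(feats).T, labels

X = np.array([[1, 0, 1], [1, 1, 0], [0, 0, 1], [1, 0, 1]])
F, L = conjunctions(X, ["sky", "building", "panorama"])
print(L)
```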
An Empirical Comparison of Neural Architectures for Reinforcement Learning in Partially Observable Environments
cs.NE cs.AI cs.LG
This paper explores the performance of fitted neural Q iteration for reinforcement learning in several partially observable environments, using three recurrent neural network architectures: Long Short-Term Memory, Gated Recurrent Unit and MUT1, a recurrent neural architecture evolved from a pool of several thousand candidate architectures. A variant of fitted Q iteration, based on Advantage values instead of Q values, is also explored. The results show that GRU performs significantly better than LSTM and MUT1 for most of the problems considered, requiring fewer training episodes and less CPU time before learning a very good policy. Advantage learning also tends to produce better results.
Denis Steckelmacher and Peter Vrancx
null
1512.05509
null
null
Deep-Spying: Spying using Smartwatch and Deep Learning
cs.CR cs.CY cs.LG
Wearable technologies are today on the rise, becoming more common and broadly available to mainstream users. In fact, wristband and armband devices such as smartwatches and fitness trackers already took an important place in the consumer electronics market and are becoming ubiquitous. By their very nature of being wearable, these devices, however, provide a new pervasive attack surface threatening users' privacy, among others. In the meantime, advances in machine learning are providing unprecedented possibilities to process complex data efficiently, allowing patterns to emerge from high-dimensional, unavoidably noisy data. The goal of this work is to raise awareness about the potential risks related to motion sensors built into wearable devices and to demonstrate abuse opportunities leveraged by advanced neural network architectures. The LSTM-based implementation presented in this research can perform touchlogging and keylogging on 12-key keypads with above-average accuracy even when confronted with raw unprocessed data. This demonstrates that deep neural networks are capable of making keystroke inference attacks based on motion sensors easier to achieve by removing the need for non-trivial pre-processing pipelines and carefully engineered feature extraction strategies. Our results suggest that the complete technological ecosystem of a user can be compromised when a wearable wristband device is worn.
Tony Beltramelli, Sebastian Risi
null
1512.05616
null
null
Probabilistic Programming with Gaussian Process Memoization
cs.LG cs.AI stat.ML
Gaussian Processes (GPs) are widely used tools in statistics, machine learning, robotics, computer vision, and scientific computation. However, despite their popularity, they can be difficult to apply; all but the simplest classification or regression applications require specification and inference over complex covariance functions that do not admit simple analytical posteriors. This paper shows how to embed Gaussian processes in any higher-order probabilistic programming language, using an idiom based on memoization, and demonstrates its utility by implementing and extending classic and state-of-the-art GP applications. The interface to Gaussian processes, called gpmem, takes an arbitrary real-valued computational process as input and returns a statistical emulator that automatically improves as the original process is invoked and its input-output behavior is recorded. The flexibility of gpmem is illustrated via three applications: (i) robust GP regression with hierarchical hyper-parameter learning, (ii) discovering symbolic expressions from time-series data by fully Bayesian structure learning over kernels generated by a stochastic grammar, and (iii) a bandit formulation of Bayesian optimization with automatic inference and action selection. All applications share a single 50-line Python library and require fewer than 20 lines of probabilistic code each.
Ulrich Schaechtle, Ben Zinberg, Alexey Radul, Kostas Stathis and Vikash K. Mansinghka
null
1512.05665
null
null
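A hedged Python sketch of the gpmem idiom described above: wrap an arbitrary real-valued function in an object that records every call and conditions a GP on the accumulated input-output pairs. Fixed RBF hyperparameters and scalar inputs are simplifying assumptions; the real gpmem does hyperparameter inference inside a probabilistic program.

```python
# A minimal GP-memoization emulator for a scalar black-box function.
import numpy as np

class GPMemoizer:
    def __init__(self, f, lengthscale=1.0, noise=1e-6):
        self.f, self.ls, self.noise = f, lengthscale, noise
        self.X, self.y = [], []

    def _k(self, A, B):
        d2 = np.subtract.outer(A, B) ** 2
        return np.exp(-0.5 * d2 / self.ls**2)

    def __call__(self, x):                 # invoke the process and record it
        y = self.f(x)
        self.X.append(x); self.y.append(y)
        return y

    def emulate(self, xs):                 # GP posterior mean/var at xs
        X, y = np.array(self.X), np.array(self.y)
        K = self._k(X, X) + self.noise * np.eye(len(X))
        Ks = self._k(np.array(xs), X)
        Kss = self._k(np.array(xs), np.array(xs))
        alpha = np.linalg.solve(K, y)
        mean = Ks @ alpha
        var = np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T))
        return mean, var

g = GPMemoizer(lambda x: np.sin(3 * x) + 0.1 * x)
for x in [0.0, 0.5, 1.2, 2.0]:
    g(x)                                   # emulator improves with each call
print(g.emulate([0.25, 1.0]))
```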
A Survey of Available Corpora for Building Data-Driven Dialogue Systems
cs.CL cs.AI cs.HC cs.LG stat.ML
During the past decade, several areas of speech and language understanding have witnessed substantial breakthroughs from the use of data-driven models. In the area of dialogue systems, the trend is less obvious, and most practical systems are still built through significant engineering and expert knowledge. Nevertheless, several recent results suggest that data-driven approaches are feasible and quite promising. To facilitate research in this area, we have carried out a wide survey of publicly available datasets suitable for data-driven learning of dialogue systems. We discuss important characteristics of these datasets, how they can be used to learn diverse dialogue strategies, and their other potential uses. We also examine methods for transfer learning between datasets and the use of external knowledge. Finally, we discuss appropriate choice of evaluation metrics for the learning objective.
Iulian Vlad Serban, Ryan Lowe, Peter Henderson, Laurent Charlin, Joelle Pineau
null
1512.05742
null
null
Successive Ray Refinement and Its Application to Coordinate Descent for LASSO
cs.LG
Coordinate descent is one of the most popular approaches for solving Lasso and its extensions due to its simplicity and efficiency. When applying coordinate descent to solving Lasso, we update one coordinate at a time while fixing the remaining coordinates. Such an update, which is usually easy to compute, greedily decreases the objective function value. In this paper, we aim to improve its computational efficiency by reducing the number of coordinate descent iterations. To this end, we propose a novel technique called Successive Ray Refinement (SRR). SRR makes use of the following ray continuation property on the successive iterations: for a particular coordinate, the value obtained in the next iteration almost always lies on a ray that starts at its previous iteration and passes through the current iteration. Motivated by this ray-continuation property, we propose that coordinate descent be performed not directly on the previous iteration but on a refined search point that has the following properties: on one hand, it lies on a ray that starts at a history solution and passes through the previous iteration, and on the other hand, it achieves the minimum objective function value among all the points on the ray. We propose two schemes for defining the search point and show that the refined search point can be efficiently obtained. Empirical results for real and synthetic data sets show that the proposed SRR can significantly reduce the number of coordinate descent iterations, especially for small Lasso regularization parameters.
Jun Liu, Zheng Zhao, Ruiwen Zhang
null
1512.05808
null
null
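To make the ray-refinement step above concrete, here is a hedged sketch: standard coordinate descent for Lasso with soft-thresholding, where each sweep restarts from the best point on the ray from a history iterate through the previous iterate. A coarse grid line search stands in for the paper's efficient closed-form refinement.

```python
# Coordinate descent for Lasso with an SRR-style ray-refined restart point.
import numpy as np

def lasso_obj(X, y, w, lam):
    r = X @ w - y
    return 0.5 * r @ r + lam * np.abs(w).sum()

def cd_sweep(X, y, w, lam, col_sq):
    for j in range(X.shape[1]):
        r_j = y - X @ w + X[:, j] * w[j]      # residual excluding feature j
        rho = X[:, j] @ r_j
        w[j] = np.sign(rho) * max(abs(rho) - lam, 0) / col_sq[j]
    return w

def lasso_srr(X, y, lam, sweeps=50):
    col_sq = (X ** 2).sum(0)
    w_hist = np.zeros(X.shape[1]); w_prev = w_hist.copy()
    for _ in range(sweeps):
        # ray refinement: best point on w_hist + t * (w_prev - w_hist)
        ts = np.linspace(0.0, 3.0, 31)
        cands = [w_hist + t * (w_prev - w_hist) for t in ts]
        w0 = min(cands, key=lambda w: lasso_obj(X, y, w, lam))
        w_hist, w_prev = w_prev, cd_sweep(X, y, w0.copy(), lam, col_sq)
    return w_prev

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
w_true = np.zeros(50); w_true[:3] = [2, -1, 1.5]
y = X @ w_true + 0.01 * rng.normal(size=200)
print(np.round(lasso_srr(X, y, lam=5.0)[:6], 2))
```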
Relay Backpropagation for Effective Learning of Deep Convolutional Neural Networks
cs.CV cs.LG
Learning deeper convolutional neural networks has become a trend in recent years. However, much empirical evidence suggests that performance improvement cannot be gained by simply stacking more layers. In this paper, we consider the issue from an information theoretical perspective, and propose a novel method, Relay Backpropagation, that encourages the propagation of effective information through the network during training. By virtue of this method, we achieved first place in the ILSVRC 2015 Scene Classification Challenge. Extensive experiments on two challenging large scale datasets demonstrate that the effectiveness of our method is not restricted to a specific dataset or network architecture. Our models will be made available to the research community.
Li Shen and Zhouchen Lin and Qingming Huang
null
1512.05830
null
null
Deep Poisson Factorization Machines: factor analysis for mapping behaviors in journalist ecosystem
cs.CY cs.LG stat.ML
The newsroom in the online ecosystem is difficult to untangle. With the prevalence of social media, interactions between journalists and individuals become visible, but a lack of understanding of the inner workings of the information feedback loop in the public sphere leaves most journalists baffled. Can we provide an organized view that characterizes journalist behaviors at the individual level, to understand the ecosystem better? To this end, I propose the Poisson Factorization Machine (PFM), a Bayesian analogue of matrix factorization that assumes a Poisson distribution for the generative process. The model generalizes recent studies on Poisson matrix factorization to account for temporal interactions, which involve a tensor-like structure, and label information. Two inference procedures are designed, one based on batch variational EM and another a stochastic variational inference scheme that scales efficiently with data size. An important novelty in this note is that I show how to stack layers of PFM to introduce a deep architecture. This work discusses some potential results of applying the model and explains how such latent factors may be useful for analyzing latent behaviors in data exploration.
Pau Perng-Hwa Kung
null
1512.05840
null
null
Complexity and Approximation of the Fuzzy K-Means Problem
cs.LG cs.DS
The fuzzy $K$-means problem is a generalization of the classical $K$-means problem to soft clusterings, i.e. clusterings where each point belongs to each cluster to some degree. Although popular in practice, prior to this work the fuzzy $K$-means problem has not been studied from a complexity theoretic or algorithmic perspective. We show that optimal solutions for fuzzy $K$-means cannot, in general, be expressed by radicals over the input points. Surprisingly, this already holds for very simple inputs in one-dimensional space. Hence, one cannot expect to compute optimal solutions exactly. We give the first $(1+\epsilon)$-approximation algorithms for the fuzzy $K$-means problem. First, we present a deterministic approximation algorithm whose runtime is polynomial in $N$ and linear in the dimension $D$ of the input set, given that $K$ is constant, i.e. a polynomial time approximation algorithm given a fixed $K$. We achieve this result by showing that for each soft clustering there exists a hard clustering with comparable properties. Second, by using techniques known from coreset constructions for the $K$-means problem, we develop a deterministic approximation algorithm that runs in time almost linear in $N$ but exponential in the dimension $D$. We complement these results with a randomized algorithm which imposes some natural restrictions on the input set and whose runtime is comparable to some of the most efficient approximation algorithms for $K$-means, i.e. linear in the number of points and the dimension, but exponential in the number of clusters.
Johannes Bl\"omer, Sascha Brauer, and Kathrin Bujna
null
1512.05947
null
null
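For concrete reference alongside the complexity results above, here are the standard fuzzy K-means (fuzzy c-means) alternating updates: memberships and centers are each other's closed-form optima, even though, as the paper shows, the joint problem admits no solution by radicals in general. The fuzzifier m = 2 and iteration budget are conventional choices.

```python
# Standard fuzzy c-means alternating optimization.
import numpy as np

def fuzzy_kmeans(X, K, m=2.0, iters=100, rng=None):
    rng = np.random.default_rng(rng)
    C = X[rng.choice(len(X), K, replace=False)].astype(float)
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1) + 1e-12
        inv = d ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)     # soft memberships
        Um = U ** m
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]     # weighted means
    return U, C

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (100, 2)), rng.normal(2, 0.5, (100, 2))])
U, C = fuzzy_kmeans(X, K=2, rng=0)
print(np.round(C, 2))
```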
Asymptotic Behavior of Mean Partitions in Consensus Clustering
cs.LG stat.ML
Although consistency is a minimum requirement of any estimator, little is known about the consistency of the mean partition approach in consensus clustering. This contribution studies the asymptotic behavior of mean partitions. We show that, under normal assumptions, the mean partition approach is consistent and asymptotically normal. To derive both results, we represent partitions as points of some geometric space, called the orbit space. Then we draw on results from the theory of Fr\'echet means and stochastic programming. The asymptotic properties hold for continuous extensions of standard cluster criteria (indices). The results justify consensus clustering using finite but sufficiently large sample sizes. Furthermore, the orbit space framework provides a mathematical foundation for studying further statistical, geometrical, and analytical properties of sets of partitions.
Brijnesh Jain
null
1512.06061
null
null
Discriminative Subnetworks with Regularized Spectral Learning for Global-state Network Data
cs.LG
Data mining practitioners are facing challenges from data with network structure. In this paper, we address a specific class of global-state networks which comprises a set of network instances sharing a similar structure yet having different values at local nodes. Each instance is associated with a global state which indicates the occurrence of an event. The objective is to uncover a small set of discriminative subnetworks that can optimally classify global network values. Unlike most existing studies which explore an exponential subnetwork space, we address this difficult problem by adopting a space transformation approach. Specifically, we present an algorithm that optimizes a constrained dual-objective function to learn a low-dimensional subspace that is capable of discriminating networks labelled by different global states, while reconciling with common network topology sharing across instances. Our algorithm takes an appealing approach from spectral graph learning and we show that the globally optimum solution can be achieved via matrix eigen-decomposition.
Xuan Hong Dang, Ambuj K. Singh, Petko Bogdanov, Hongyuan You and Bayyuan Hsu
null
1512.06173
null
null
Poseidon: A System Architecture for Efficient GPU-based Deep Learning on Multiple Machines
cs.LG cs.CV cs.DC
Deep learning (DL) has achieved notable successes in many machine learning tasks. A number of frameworks have been developed to expedite the process of designing and training deep neural networks (DNNs), such as Caffe, Torch and Theano. Currently they can harness multiple GPUs on a single machine, but are unable to use GPUs that are distributed across multiple machines; as even average-sized DNNs can take days to train on a single GPU with 100s of GBs to TBs of data, distributed GPUs present a prime opportunity for scaling up DL. However, the limited bandwidth available on commodity Ethernet networks presents a bottleneck to distributed GPU training, and prevents its trivial realization. To investigate how to adapt existing frameworks to efficiently support distributed GPUs, we propose Poseidon, a scalable system architecture for distributed inter-machine communication in existing DL frameworks. We integrate Poseidon with Caffe and evaluate its performance at training DNNs for object recognition. Poseidon features three key contributions that accelerate DNN training on clusters: (1) a three-level hybrid architecture that allows Poseidon to support both CPU-only and GPU-equipped clusters, (2) a distributed wait-free backpropagation (DWBP) algorithm to improve GPU utilization and to balance communication, and (3) a structure-aware communication protocol (SACP) to minimize communication overheads. We empirically show that Poseidon converges to the same objectives as a single machine, and achieves state-of-the-art training speedup across multiple models and well-established datasets using a commodity GPU cluster of 8 nodes (e.g. 4.5x speedup on AlexNet, 4x on GoogLeNet, 4x on CIFAR-10). On the much larger ImageNet22K dataset, Poseidon with 8 nodes achieves better speedup and competitive accuracy to recent CPU-based distributed systems such as Adam and Le et al., which use 10s to 1000s of nodes.
Hao Zhang, Zhiting Hu, Jinliang Wei, Pengtao Xie, Gunhee Kim, Qirong Ho and Eric Xing
null
1512.06216
null
null
A new robust adaptive algorithm for underwater acoustic channel equalization
cs.SD cs.IT cs.LG math.IT
We introduce a novel family of adaptive robust equalizers for highly challenging underwater acoustic (UWA) channel equalization. Since the underwater environment is highly non-stationary and subject to impulsive noise, we use adaptive filtering techniques based on a relative logarithmic cost function inspired by the competitive methods from the online learning literature. To improve the convergence performance of the conventional linear equalization methods, while mitigating the stability issues, we intrinsically combine different norms of the error in the cost function, using logarithmic functions. Hence, we achieve a comparable convergence performance to the least mean fourth (LMF) equalizer, while significantly enhancing the stability performance in such an adverse communication medium. We demonstrate the performance of our algorithms through highly realistic experiments performed on accurately simulated underwater acoustic channels.
Dariush Kari and Muhammed Omer Sayin and Suleyman Serdar Kozat
null
1512.06222
null
null
Using machine learning for medium frequency derivative portfolio trading
q-fin.TR cs.LG stat.ML
We use machine learning for designing a medium frequency trading strategy for a portfolio of 5 year and 10 year US Treasury note futures. We formulate this as a classification problem where we predict the weekly direction of movement of the portfolio using features extracted from a deep belief network trained on technical indicators of the portfolio constituents. Our experiments show that the resulting pipeline is effective in making profitable trades.
Abhijit Sharang and Chetan Rao
null
1512.06228
null
null
The Limitations of Optimization from Samples
cs.DS cs.DM cs.LG
In this paper we consider the following question: can we optimize objective functions from the training data we use to learn them? We formalize this question through a novel framework we call optimization from samples (OPS). In OPS, we are given sampled values of a function drawn from some distribution and the objective is to optimize the function under some constraint. While there are interesting classes of functions that can be optimized from samples, our main result is an impossibility. We show that there are classes of functions which are statistically learnable and optimizable, but for which no reasonable approximation for optimization from samples is achievable. In particular, our main result shows that there is no constant factor approximation for maximizing coverage functions under a cardinality constraint using polynomially-many samples drawn from any distribution. We also show tight approximation guarantees for maximization under a cardinality constraint of several interesting classes of functions including unit-demand, additive, and general monotone submodular functions, as well as a constant factor approximation for monotone submodular functions with bounded curvature.
Eric Balkanski, Aviad Rubinstein, Yaron Singer
null
1512.06238
null
null
A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction
cs.IT cs.AI cs.LG math.FA math.IT stat.ML
Deep convolutional neural networks have led to breakthrough results in numerous practical machine learning tasks such as classification of images in the ImageNet data set, control-policy-learning to play Atari games or the board game Go, and image captioning. Many of these applications first perform feature extraction and then feed the results thereof into a trainable classifier. The mathematical analysis of deep convolutional neural networks for feature extraction was initiated by Mallat, 2012. Specifically, Mallat considered so-called scattering networks based on a wavelet transform followed by the modulus non-linearity in each network layer, and proved translation invariance (asymptotically in the wavelet scale parameter) and deformation stability of the corresponding feature extractor. This paper complements Mallat's results by developing a theory that encompasses general convolutional transforms, or in more technical parlance, general semi-discrete frames (including Weyl-Heisenberg filters, curvelets, shearlets, ridgelets, wavelets, and learned filters), general Lipschitz-continuous non-linearities (e.g., rectified linear units, shifted logistic sigmoids, hyperbolic tangents, and modulus functions), and general Lipschitz-continuous pooling operators emulating, e.g., sub-sampling and averaging. In addition, all of these elements can be different in different network layers. For the resulting feature extractor we prove a translation invariance result of vertical nature in the sense of the features becoming progressively more translation-invariant with increasing network depth, and we establish deformation sensitivity bounds that apply to signal classes such as, e.g., band-limited functions, cartoon functions, and Lipschitz functions.
Thomas Wiatowski and Helmut B\"olcskei
null
1512.06293
null
null
Kernel principal component analysis network for image classification
cs.LG cs.CV
In order to classify nonlinear features with a linear classifier and improve the classification accuracy, a deep learning network named the kernel principal component analysis network (KPCANet) is proposed. First, the data is mapped into a higher-dimensional space with kernel principal component analysis to make it linearly separable. Then a two-layer KPCANet is built to obtain the principal components of the image. Finally, the principal components are classified with a linear classifier. Experimental results show that the proposed KPCANet is effective in face recognition, object recognition and handwritten digit recognition, and it also generally outperforms the principal component analysis network (PCANet). Besides, KPCANet is invariant to illumination and stable under occlusion and slight deformation.
Dan Wu, Jiasong Wu, Rui Zeng, Longyu Jiang, Lotfi Senhadji, Huazhong Shu
null
1512.06337
null
null
Revisiting Differentially Private Regression: Lessons From Learning Theory and their Consequences
cs.CR cs.DB cs.LG
Private regression has received attention from both database and security communities. Recent work by Fredrikson et al. (USENIX Security 2014) analyzed the functional mechanism (Zhang et al. VLDB 2012) for training linear regression models over medical data. Unfortunately, they found that model accuracy is already unacceptable with differential privacy when $\varepsilon = 5$. We address this issue, presenting an explicit connection between differential privacy and stable learning theory through which a substantially better privacy/utility tradeoff can be obtained. Perhaps more importantly, our theory reveals that the most basic mechanism in differential privacy, output perturbation, can be used to obtain a better tradeoff for all convex-Lipschitz-bounded learning tasks. Since output perturbation is simple to implement, it means that our approach is potentially widely applicable in practice. We go on to apply it on the same medical data as used by Fredrikson et al. Encouragingly, we achieve accurate models even for $\varepsilon = 0.1$. In the last part of this paper, we study the impact of our improved differentially private mechanisms on model inversion attacks, a privacy attack introduced by Fredrikson et al. We observe that the improved tradeoff makes the resulting differentially private model more susceptible to inversion attacks. We analyze this phenomenon formally.
Xi Wu, Matthew Fredrikson, Wentao Wu, Somesh Jha, Jeffrey F. Naughton
null
1512.06388
null
null
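The abstract above centers on output perturbation, which is simple to sketch: train an L2-regularized learner, then add noise calibrated to the L2 sensitivity of its minimizer, roughly 2L/(n*lambda) for convex-Lipschitz-bounded losses with bounded data norms (the Chaudhuri et al.-style mechanism, with norm drawn from a Gamma distribution). Constants and the toy data below are illustrative assumptions.

```python
# Output perturbation for L2-regularized logistic regression.
import numpy as np

def train_logreg(X, y, lam, iters=500, lr=0.5):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * (X.T @ (p - y) / n + lam * w)
    return w

def private_logreg(X, y, lam, eps, rng=None):
    rng = np.random.default_rng(rng)
    w = train_logreg(X, y, lam)
    d = len(w)
    sens = 2.0 / (len(y) * lam)        # L2 sensitivity, assumes ||x_i|| <= 1
    norm = rng.gamma(shape=d, scale=sens / eps)
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    return w + norm * direction        # noise density ~ exp(-eps ||b|| / sens)

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)  # ||x|| <= 1
y = (X @ np.ones(5) + 0.1 * rng.normal(size=2000) > 0).astype(float)
print(np.round(private_logreg(X, y, lam=0.1, eps=0.5, rng=1), 2))
```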
Behavioral Modeling for Churn Prediction: Early Indicators and Accurate Predictors of Customer Defection and Loyalty
cs.LG
Churn prediction, or the task of identifying customers who are likely to discontinue use of a service, is an important and lucrative concern of firms in many different industries. As these firms collect an increasing amount of large-scale, heterogeneous data on the characteristics and behaviors of customers, new methods become possible for predicting churn. In this paper, we present a unified analytic framework for detecting the early warning signs of churn, and assigning a "Churn Score" to each customer that indicates the likelihood that the particular individual will churn within a predefined amount of time. This framework employs a brute force approach to feature engineering, then winnows the set of relevant attributes via feature selection, before feeding the final feature-set into a suite of supervised learning algorithms. Using several terabytes of data from a large mobile phone network, our method identifies several intuitive - and a few surprising - early warning signs of churn, and our best model predicts whether a subscriber will churn with 89.4% accuracy.
Muhammad R. Khan, Joshua Manoj, Anikate Singh, Joshua Blumenstock
10.1109/BigDataCongress.2015.107
1512.06430
null
null
ATD: Anomalous Topic Discovery in High Dimensional Discrete Data
stat.ML cs.LG
We propose an algorithm for detecting patterns exhibited by anomalous clusters in high dimensional discrete data. Unlike most anomaly detection (AD) methods, which detect individual anomalies, our proposed method detects groups (clusters) of anomalies; i.e. sets of points which collectively exhibit abnormal patterns. In many applications this can lead to better understanding of the nature of the atypical behavior and to identifying the sources of the anomalies. Moreover, we consider the case where the atypical patterns are exhibited in only a small (salient) subset of the very high dimensional feature space. Individual AD techniques and techniques that detect anomalies using all the features typically fail to detect such anomalies, but our method can detect such instances collectively, discover the shared anomalous patterns exhibited by them, and identify the subsets of salient features. In this paper, we focus on detecting anomalous topics in a batch of text documents, developing our algorithm based on topic models. Results of our experiments show that our method can accurately detect anomalous topics and salient features (words) under each such topic in a synthetic data set and two real-world text corpora and achieves better performance compared to both standard group AD and individual AD techniques. All required code to reproduce our experiments is available from https://github.com/hsoleimani/ATD
Hossein Soleimani, David J. Miller
10.1109/TKDE.2016.2561288
1512.06452
null
null
Backward and Forward Language Modeling for Constrained Sentence Generation
cs.CL cs.LG cs.NE
Recent language models, especially those based on recurrent neural networks (RNNs), make it possible to generate natural language from a learned probability distribution. Language generation has wide applications including machine translation, summarization, question answering, conversation systems, etc. Existing methods typically learn a joint probability of words conditioned on additional information, which is (either statically or dynamically) fed to the RNN's hidden layer. In many applications, we are likely to impose hard constraints on the generated texts, i.e., a particular word must appear in the sentence. Unfortunately, existing approaches cannot solve this problem. In this paper, we propose a novel backward and forward language model. Given a specific word, we use RNNs to generate previous words and future words, either simultaneously or asynchronously, resulting in two model variants. In this way, the given word can appear at any position in the sentence. Experimental results show that the generated texts are comparable to sequential LMs in quality.
Lili Mou, Rui Yan, Ge Li, Lu Zhang, Zhi Jin
null
1512.06612
null
null
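A toy illustration of the backward-forward generation scheme above with a constrained word: a backward model extends to the left of the given word, then a forward model extends to the right. The paper uses RNN LMs; bigram count tables keep this sketch self-contained.

```python
# Constrained generation around a required word with two bigram models.
import numpy as np
from collections import defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()
fwd, bwd = defaultdict(list), defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    fwd[a].append(b)     # forward bigram samples
    bwd[b].append(a)     # backward bigram samples

def sample_chain(start, table, steps, rng):
    out, w = [], start
    for _ in range(steps):
        if not table[w]:
            break
        w = rng.choice(table[w])
        out.append(w)
    return out

rng = np.random.default_rng(3)
must_include = "sat"
left = sample_chain(must_include, bwd, 3, rng)[::-1]   # grow leftwards first
right = sample_chain(must_include, fwd, 4, rng)        # then rightwards
print(" ".join(left + [must_include] + right))
```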
Deep Learning for Surface Material Classification Using Haptic And Visual Information
cs.RO cs.CV cs.LG
When a user scratches a hand-held rigid tool across an object surface, an acceleration signal can be captured, which carries relevant information about the surface. More importantly, such a haptic signal is complementary to the visual appearance of the surface, which suggests the combination of both modalities for the recognition of the surface material. In this paper, we present a novel deep learning method dealing with the surface material classification problem based on a Fully Convolutional Network (FCN), which takes as input the aforementioned acceleration signal and a corresponding image of the surface texture. Compared to previous surface material classification solutions, which rely on a careful design of hand-crafted domain-specific features, our method automatically extracts discriminative features utilizing the advanced deep learning methodologies. Experiments performed on the TUM surface material database demonstrate that our method achieves state-of-the-art classification accuracy robustly and efficiently.
Haitian Zheng, Lu Fang, Mengqi Ji, Matti Strese, Yigitcan Ozer, Eckehard Steinbach
null
1512.06658
null
null
Multilinear Subspace Clustering
cs.IT cs.CV cs.LG math.IT stat.ML
In this paper we present a new model and an algorithm for unsupervised clustering of 2-D data such as images. We assume that the data comes from a union of multilinear subspaces (UOMS) model, which is a specific structured case of the much studied union of subspaces (UOS) model. For segmentation under this model, we develop Multilinear Subspace Clustering (MSC) algorithm and evaluate its performance on the YaleB and Olivietti image data sets. We show that MSC is highly competitive with existing algorithms employing the UOS model in terms of clustering performance while enjoying improvement in computational complexity.
Eric Kernfeld, Nathan Majumder, Shuchin Aeron, Misha Kilmer
null
1512.06730
null
null
GraphConnect: A Regularization Framework for Neural Networks
cs.CV cs.LG cs.NE
Deep neural networks have proved very successful in domains where large training sets are available, but when the number of training samples is small, their performance suffers from overfitting. Prior methods of reducing overfitting such as weight decay, Dropout and DropConnect are data-independent. This paper proposes a new method, GraphConnect, that is data-dependent, and is motivated by the observation that data of interest lie close to a manifold. The new method encourages the relationships between the learned decisions to resemble a graph representing the manifold structure. Essentially GraphConnect is designed to learn attributes that are present in data samples in contrast to weight decay, Dropout and DropConnect which are simply designed to make it more difficult to fit to random error or noise. Empirical Rademacher complexity is used to connect the generalization error of the neural network to spectral properties of the graph learned from the input data. This framework is used to show that GraphConnect is superior to weight decay. Experimental results on several benchmark datasets validate the theoretical analysis, and show that when the number of training samples is small, GraphConnect is able to significantly improve performance over weight decay.
Jiaji Huang, Qiang Qiu, Robert Calderbank, Guillermo Sapiro
null
1512.06757
null
null
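A hedged sketch of a GraphConnect-style data-dependent regularizer: build a similarity graph over a minibatch of inputs and penalize learned representations whose pairwise distances disagree with the graph, via the Laplacian quadratic form tr(Z^T L Z). The Gaussian graph construction and weighting below are assumptions, not the paper's exact recipe.

```python
# Graph-based regularization of learned representations.
import numpy as np

def graph_laplacian(X, sigma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))      # Gaussian similarity graph
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(1)) - W

def graphconnect_penalty(Z, L):
    """tr(Z^T L Z) = 0.5 * sum_ij W_ij ||z_i - z_j||^2."""
    return np.trace(Z.T @ L @ Z)

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 10))        # minibatch inputs
Z = rng.normal(size=(32, 4))         # their learned representations
L = graph_laplacian(X)
print(graphconnect_penalty(Z, L))    # add lam * this to the training loss
```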
Predicting the Co-Evolution of Event and Knowledge Graphs
cs.LG
Embedding learning, a.k.a. representation learning, has been shown to be able to model large-scale semantic knowledge graphs. A key concept is a mapping of the knowledge graph to a tensor representation whose entries are predicted by models using latent representations of generalized entities. Knowledge graphs are typically treated as static: a knowledge graph grows more links when more facts become available, but the ground truth values associated with links are considered time invariant. In this paper we address the issue of knowledge graphs where triple states depend on time. We assume that changes in the knowledge graph always arrive in the form of events, in the sense that the events are the gateway to the knowledge graph. We train an event prediction model which uses both knowledge graph background information and information on recent events. By predicting future events, we also predict likely changes in the knowledge graph and thus obtain a model for the evolution of the knowledge graph as well. Our experiments demonstrate that our approach performs well in a clinical application, a recommendation engine and a sensor network application.
Crist\'obal Esteban and Volker Tresp and Yinchong Yang and Stephan Baier and Denis Krompa{\ss}
null
1512.06900
null
null
A C++ library for Multimodal Deep Learning
cs.LG
MDL, Multimodal Deep Learning Library, is a deep learning framework that supports multiple models, and this document explains its philosophy and functionality. MDL runs on Linux, Mac, and Unix platforms. It depends on OpenCV.
Jian Jin
null
1512.06927
null
null
On the Differential Privacy of Bayesian Inference
cs.AI cs.CR cs.LG math.ST stat.ML stat.TH
We study how to communicate findings of Bayesian inference to third parties, while preserving the strong guarantee of differential privacy. Our main contributions are four different algorithms for private Bayesian inference on probabilistic graphical models. These include two mechanisms for adding noise to the Bayesian updates, either directly to the posterior parameters, or to their Fourier transform so as to preserve update consistency. We also utilise a recently introduced posterior sampling mechanism, for which we prove bounds for the specific but general case of discrete Bayesian networks; and we introduce a maximum-a-posteriori private mechanism. Our analysis includes utility and privacy bounds, with a novel focus on the influence of graph structure on privacy. Worked examples and experiments with Bayesian na{\"i}ve Bayes and Bayesian linear regression illustrate the application of our mechanisms.
Zuhe Zhang, Benjamin Rubinstein, Christos Dimitrakakis
null
1512.06992
null
null
FAASTA: A fast solver for total-variation regularization of ill-conditioned problems with application to brain imaging
q-bio.NC cs.LG stat.CO stat.ML
The total variation (TV) penalty, as many other analysis-sparsity problems, does not lead to separable factors or a proximal operator with a closed-form expression, such as soft thresholding for the $\ell_1$ penalty. As a result, in a variational formulation of an inverse problem or statistical learning estimation, it leads to challenging non-smooth optimization problems that are often solved with elaborate single-step first-order methods. When the data-fit term arises from empirical measurements, as in brain imaging, it is often very ill-conditioned and without simple structure. In this situation, in proximal splitting methods, the computation cost of the gradient step can easily dominate each iteration. Thus it is beneficial to minimize the number of gradient steps. We present fAASTA, a variant of FISTA, that relies on an internal solver for the TV proximal operator, and refines its tolerance to balance the computational cost of the gradient and the proximal steps. We give benchmarks and illustrations on "brain decoding": recovering brain maps from noisy measurements to predict observed behavior. The algorithm as well as the empirical study of convergence speed are valuable for any non-exact proximal operator, in particular analysis-sparsity problems.
Ga\"el Varoquaux (PARIETAL), Michael Eickenberg (PARIETAL), Elvis Dohmatob (PARIETAL), Bertand Thirion (PARIETAL)
null
1512.06999
null
null
Implementation of deep learning algorithm for automatic detection of brain tumors using intraoperative IR-thermal mapping data
cs.CV cs.LG q-bio.QM stat.ML
The efficiency of deep machine learning for automatic delineation of tumor areas has been demonstrated for intraoperative neuronavigation using active IR-mapping with the use of the cold test. The proposed approach employs a matrix IR-imager to remotely register the space-time distribution of the surface temperature pattern, which is determined by the dynamics of local cerebral blood flow. The advantages of this technique are non-invasiveness, zero risk for the health of patients and medical staff, low implementation and operational costs, and ease and speed of use. The traditional IR-diagnostic technique has a crucial limitation: it involves a diagnostician who determines the boundaries of tumor areas, which gives rise to considerable uncertainty and can lead to diagnosis errors that are difficult to control. The current study demonstrates that implementing deep learning algorithms makes it possible to eliminate this drawback.
A.V. Makarenko, M.G. Volovik
null
1512.07041
null
null
Move from Perturbed scheme to exponential weighting average
cs.LG
In an online decision problem, one repeatedly makes decisions, often with a pool of decision sequences called experts, but without knowledge of the future. After each step, one pays a cost based on the decision and the observed outcome. A reasonable goal is to perform as well as the best expert in the pool. The modern and well-known way to attain this goal is the exponential weighting algorithm. However, recently another algorithm, follow the perturbed leader, has been developed and achieves about the same performance. In our work, we first show the properties shared by the two algorithms, which explain the similarities in their performance. Next we show that, for a specific perturbation, the two algorithms are identical. Finally, we show with some examples that follow-the-leader style algorithms extend naturally to a large class of structured online problems for which the exponential algorithms are inefficient.
Chunyang Xiao
null
1512.07074
null
null
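The two algorithms compared above are short enough to sketch side by side. With Gumbel perturbations, follow-the-perturbed-leader plays each expert with exactly the exponential-weights probabilities (the Gumbel-max trick), which is the kind of correspondence the note studies; the learning rate and loss matrix below are illustrative.

```python
# Exponential weighting vs. follow-the-perturbed-leader on the same losses.
import numpy as np

def exp_weights(losses, eta=0.5, rng=None):
    T, n = losses.shape
    cum, total = np.zeros(n), 0.0
    for t in range(T):
        z = -eta * (cum - cum.min())         # stabilized log-weights
        w = np.exp(z); w /= w.sum()
        total += losses[t] @ w               # expected loss of the play
        cum += losses[t]
    return total

def ftpl(losses, eta=0.5, rng=None):
    rng = np.random.default_rng(rng)
    T, n = losses.shape
    cum, total = np.zeros(n), 0.0
    for t in range(T):
        noise = rng.gumbel(size=n) / eta     # perturb the cumulative losses
        i = np.argmin(cum - noise)           # follow the perturbed leader
        total += losses[t, i]
        cum += losses[t]
    return total

rng = np.random.default_rng(0)
losses = rng.uniform(size=(1000, 5))
best = losses.sum(0).min()
print(exp_weights(losses) - best, ftpl(losses, rng=2) - best)  # regrets
```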
Recent Advances in Convolutional Neural Networks
cs.CV cs.LG cs.NE
In the last few years, deep learning has led to very good performance on a variety of problems, such as visual recognition, speech recognition and natural language processing. Among different types of deep neural networks, convolutional neural networks have been most extensively studied. Leveraging the rapid growth in the amount of annotated data and the great improvements in the strength of graphics processing units, research on convolutional neural networks has advanced swiftly and achieved state-of-the-art results on various tasks. In this paper, we provide a broad survey of the recent advances in convolutional neural networks. We detail the improvements of CNNs on different aspects, including layer design, activation function, loss function, regularization, optimization and fast computation. Besides, we also introduce various applications of convolutional neural networks in computer vision, speech and natural language processing.
Jiuxiang Gu, Zhenhua Wang, Jason Kuen, Lianyang Ma, Amir Shahroudy, Bing Shuai, Ting Liu, Xingxing Wang, Li Wang, Gang Wang, Jianfei Cai, Tsuhan Chen
null
1512.07108
null
null
Refined Error Bounds for Several Learning Algorithms
cs.LG math.ST stat.ML stat.TH
This article studies the achievable guarantees on the error rates of certain learning algorithms, with particular focus on refining logarithmic factors. Many of the results are based on a general technique for obtaining bounds on the error rates of sample-consistent classifiers with monotonic error regions, in the realizable case. We prove bounds of this type expressed in terms of either the VC dimension or the sample compression size. This general technique also enables us to derive several new bounds on the error rates of general sample-consistent learning algorithms, as well as refined bounds on the label complexity of the CAL active learning algorithm. Additionally, we establish a simple necessary and sufficient condition for the existence of a distribution-free bound on the error rates of all sample-consistent learning rules, converging at a rate inversely proportional to the sample size. We also study learning in the presence of classification noise, deriving a new excess error rate guarantee for general VC classes under Tsybakov's noise condition, and establishing a simple and general necessary and sufficient condition for the minimax excess risk under bounded noise to converge at a rate inversely proportional to the sample size.
Steve Hanneke
null
1512.07146
null
null
Feature Selection for Classification under Anonymity Constraint
cs.LG cs.CR
Over the last decade, the proliferation of various online platforms and their increasing adoption by billions of users have heightened the privacy risk of a user enormously. In fact, security researchers have shown that sparse microdata containing information about the online activities of a user, although anonymous, can still be used to disclose the identity of the user by cross-referencing the data with other data sources. To preserve the privacy of a user, in existing works several methods (k-anonymity, l-diversity, differential privacy) are proposed that ensure a dataset which is meant to be shared or published bears small identity disclosure risk. However, the majority of these methods modify the data in isolation, without considering their utility in subsequent knowledge discovery tasks, which makes these datasets less informative. In this work, we consider labeled data that are generally used for classification, and propose two methods for feature selection considering two goals: first, on the reduced feature set the data has small disclosure risk, and second, the utility of the data is preserved for performing a classification task. Experimental results on various real-world datasets show that the methods are effective and useful in practice.
Baichuan Zhang, Noman Mohammed, Vachik Dave, Mohammad Al Hasan
null
1512.07158
null
null
Latent Variable Modeling with Diversity-Inducing Mutual Angular Regularization
cs.LG stat.ML
Latent Variable Models (LVMs) are a large family of machine learning models providing a principled and effective way to extract underlying patterns, structure and knowledge from observed data. Due to the dramatic growth of volume and complexity of data, several new challenges have emerged and cannot be effectively addressed by existing LVMs: (1) How to capture long-tail patterns that carry crucial information when the popularity of patterns is distributed in a power-law fashion? (2) How to reduce model complexity and computational cost without compromising the modeling power of LVMs? (3) How to improve the interpretability and reduce the redundancy of discovered patterns? To address the three challenges discussed above, we develop a novel regularization technique for LVMs, which controls the geometry of the latent space during learning to enable the learned latent components of LVMs to be diverse in the sense that they are favored to be mutually different from each other, to accomplish long-tail coverage, low redundancy, and better interpretability. We propose a mutual angular regularizer (MAR) to encourage the components in LVMs to have larger mutual angles. The MAR is non-convex and non-smooth, entailing great challenges for optimization. To cope with this issue, we derive a smooth lower bound of the MAR and optimize the lower bound instead. We show that the monotonicity of the lower bound is closely aligned with the MAR to qualify the lower bound as a desirable surrogate of the MAR. Using a neural network (NN) as an instance, we analyze how the MAR affects the generalization performance of NN. On two popular latent variable models --- restricted Boltzmann machine and distance metric learning, we demonstrate that MAR can effectively capture long-tail patterns, reduce model complexity without sacrificing expressivity and improve interpretability.
Pengtao Xie, Yuntian Deng, Eric Xing
null
1512.07336
null
null
A Deep Generative Deconvolutional Image Model
cs.CV cs.LG stat.ML
A deep generative model is developed for representation and analysis of images, based on a hierarchical convolutional dictionary-learning framework. Stochastic {\em unpooling} is employed to link consecutive layers in the model, yielding top-down image generation. A Bayesian support vector machine is linked to the top-layer features, yielding max-margin discrimination. Deep deconvolutional inference is employed when testing, to infer the latent features, and the top-layer features are connected with the max-margin classifier for discrimination tasks. The model is efficiently trained using a Monte Carlo expectation-maximization (MCEM) algorithm, with implementation on graphical processor units (GPUs) for efficient large-scale learning, and fast testing. Excellent results are obtained on several benchmark datasets, including ImageNet, demonstrating that the proposed model achieves results that are highly competitive with similarly sized convolutional neural networks.
Yunchen Pu, Xin Yuan, Andrew Stevens, Chunyuan Li, Lawrence Carin
null
1512.07344
null
null
Adaptive Algorithms for Online Convex Optimization with Long-term Constraints
stat.ML cs.LG math.OC
We present an adaptive online gradient descent algorithm to solve online convex optimization problems with long-term constraints, which are constraints that need to be satisfied when accumulated over a finite number of rounds $T$, but can be violated in intermediate rounds. For some user-defined trade-off parameter $\beta \in (0, 1)$, the proposed algorithm achieves cumulative regret bounds of $O(T^{\max\{\beta, 1-\beta\}})$ and $O(T^{1-\beta/2})$ for the loss and the constraint violations respectively. Our results hold for convex losses and can handle arbitrary convex constraints without requiring knowledge of the number of rounds in advance. Our contributions improve over the best known cumulative regret bounds by Mahdavi et al. (2012), which are respectively $O(T^{1/2})$ and $O(T^{3/4})$ for general convex domains, and respectively $O(T^{2/3})$ and $O(T^{2/3})$ when further restricting to polyhedral domains. We supplement the analysis with experiments validating the performance of our algorithm in practice.
Rodolphe Jenatton, Jim Huang, C\'edric Archambeau
null
1512.07422
null
null
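A hedged primal-dual sketch of the long-term-constraints setting above: gradient descent on an augmented Lagrangian in the primal and gradient ascent on the violation in the dual, so the constraint g(x) <= 0 only needs to hold on average over the horizon. Step sizes here are illustrative constants, not the paper's adaptive schedule.

```python
# Online convex optimization with a long-term constraint via primal-dual steps.
import numpy as np

def ocolc(grads, g, grad_g, x0, T, eta=0.05, mu=0.05):
    x, lam = x0.copy(), 0.0
    xs = []
    for t in range(T):
        x = x - eta * (grads(x, t) + lam * grad_g(x))  # primal descent
        lam = max(0.0, lam + mu * g(x))                # dual ascent on violation
        xs.append(x.copy())
    return np.array(xs)

# Toy: track moving quadratic targets while keeping ||x||^2 <= 1 on average.
rng = np.random.default_rng(0)
targets = rng.normal(scale=1.5, size=(500, 3))
xs = ocolc(lambda x, t: 2 * (x - targets[t]),
           lambda x: x @ x - 1.0,
           lambda x: 2 * x,
           np.zeros(3), T=500)
print(np.mean([x @ x - 1.0 for x in xs]))  # average constraint violation
```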
Adaptive Ensemble Learning with Confidence Bounds
cs.LG stat.ML
Extracting actionable intelligence from distributed, heterogeneous, correlated and high-dimensional data sources requires run-time processing and learning both locally and globally. In the last decade, a large number of meta-learning techniques have been proposed in which local learners make online predictions based on their locally-collected data instances, and feed these predictions to an ensemble learner, which fuses them and issues a global prediction. However, most of these works do not provide performance guarantees or, when they do, these guarantees are asymptotic. None of these existing works provide confidence estimates about the issued predictions or rate of learning guarantees for the ensemble learner. In this paper, we provide a systematic ensemble learning method called Hedged Bandits, which comes with both long run (asymptotic) and short run (rate of learning) performance guarantees. Moreover, our approach yields performance guarantees with respect to the optimal local prediction strategy, and is also able to adapt its predictions in a data-driven manner. We illustrate the performance of Hedged Bandits in the context of medical informatics and show that it outperforms numerous online and offline ensemble learning methods.
Cem Tekin, Jinsung Yoon, Mihaela van der Schaar
null
1512.07446
null
null
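At the core of a Hedged-Bandits-style ensemble is an exponential-weights (Hedge) fusion of the local learners' predictions. The sketch below shows only that fusion step with squared losses in [0, 1]; the confidence bounds and data-driven adaptivity described in the abstract are not reproduced here, and the learning rate eta is an assumption.

```python
import numpy as np

def hedge_ensemble(local_preds, labels, eta=0.5):
    """Fuse M local learners over T rounds with exponential weights:
    the ensemble issues a weighted average and downweights learners
    in proportion to their per-round loss."""
    T, M = local_preds.shape
    w = np.ones(M)
    fused = np.empty(T)
    for t in range(T):
        p = w / w.sum()
        fused[t] = p @ local_preds[t]                 # global prediction
        losses = (local_preds[t] - labels[t]) ** 2    # per-learner loss in [0, 1]
        w *= np.exp(-eta * losses)                    # Hedge update
    return fused, w / w.sum()

preds = np.random.rand(500, 3)                        # three local learners
labels = np.clip(preds[:, 0] + 0.05 * np.random.randn(500), 0, 1)
fused, weights = hedge_ensemble(preds, labels)        # weight shifts to learner 0
```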
A Latent-Variable Lattice Model
cs.LG cs.CV stat.ML
Markov random field (MRF) learning is intractable, and its approximation algorithms are computationally expensive. We target a small subset of MRFs used frequently in computer vision. We characterize this subset with three concepts: Lattice, Homogeneity, and Inertia; and design a non-Markov model as an alternative. Our goal is robust learning from small datasets. Our learning algorithm uses vector quantization and, with time complexity O(U log U) for a dataset of U pixels, is much faster than that of general-purpose MRFs.
Rajasekaran Masatran
null
1512.07587
null
null
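The abstract names vector quantization as the workhorse of the learning algorithm. As a stand-in, here is a minimal Lloyd's-algorithm quantizer over pixel feature vectors; note that its naive cost is O(Uk) per iteration rather than the O(U log U) the paper reports, since the abstract does not specify the faster procedure.

```python
import numpy as np

def vector_quantize(pixels, k=16, iters=10, rng=np.random.default_rng(0)):
    """Minimal Lloyd's algorithm: assign each pixel feature vector to the
    nearest of k codewords, then recenter the codewords."""
    pixels = pixels.astype(float)
    codebook = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        d = ((pixels[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        codes = d.argmin(1)                  # nearest codeword per pixel
        for j in range(k):
            if (codes == j).any():
                codebook[j] = pixels[codes == j].mean(0)
    return codebook, codes

codebook, codes = vector_quantize(np.random.rand(1000, 3))  # 1000 RGB pixels
```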
Satisficing in multi-armed bandit problems
cs.LG math.OC stat.ML
Satisficing is a relaxation of maximizing and allows for less risky decision making in the face of uncertainty. We propose two sets of satisficing objectives for the multi-armed bandit problem, where the objective is to achieve reward-based decision-making performance above a given threshold. We show that these new problems are equivalent to various standard multi-armed bandit problems with maximizing objectives and use the equivalence to find bounds on performance. The different objectives can result in qualitatively different behavior; for example, agents explore their options continually in one case and only a finite number of times in another. For the case of Gaussian rewards we show an additional equivalence between the two sets of satisficing objectives that allows algorithms developed for one set to be applied to the other. We then develop variants of the Upper Credible Limit (UCL) algorithm that solve the problems with satisficing objectives and show that these modified UCL algorithms achieve efficient satisficing performance.
Paul Reverdy and Vaibhav Srivastava and Naomi Ehrich Leonard
null
1512.07638
null
null
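A satisficing objective can be made concrete with a threshold rule: commit to any arm whose lower confidence bound clears the satisfaction level M, and otherwise explore with an upper-confidence rule. The sketch below is a simplified frequentist stand-in for the paper's Bayesian UCL variants.

```python
import numpy as np

def satisficing_ucb(arms, M, T, rng=np.random.default_rng(0)):
    """Threshold-based bandit sketch: once some arm's lower confidence
    bound clears the satisfaction level M, commit to it; otherwise keep
    exploring with an upper-confidence rule."""
    K = len(arms)
    n, s = np.zeros(K), np.zeros(K)
    for t in range(1, T + 1):
        if t <= K:
            a = t - 1                                  # pull each arm once
        else:
            rad = np.sqrt(2 * np.log(t) / n)
            mean = s / n
            lcb, ucb = mean - rad, mean + rad
            a = lcb.argmax() if lcb.max() >= M else ucb.argmax()
        r = arms[a](rng)
        n[a] += 1
        s[a] += r
    return s.sum()

arms = [lambda g: g.normal(0.4, 1.0), lambda g: g.normal(0.6, 1.0)]
total_reward = satisficing_ucb(arms, M=0.5, T=2000)
```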
The Max $K$-Armed Bandit: PAC Lower Bounds and Efficient Algorithms
stat.ML cs.AI cs.LG
We consider the Max $K$-Armed Bandit problem, where a learning agent is faced with several stochastic arms, each a source of i.i.d. rewards of unknown distribution. At each time step the agent chooses an arm, and observes the reward of the obtained sample. Each sample is considered here as a separate item with the reward designating its value, and the goal is to find an item with the highest possible value. Our basic assumption is a known lower bound on the {\em tail function} of the reward distributions. Under the PAC framework, we provide a lower bound on the sample complexity of any $(\epsilon,\delta)$-correct algorithm, and propose an algorithm that attains this bound up to logarithmic factors. We analyze the robustness of the proposed algorithm and in addition, we compare the performance of this algorithm to the variant in which the arms are not distinguishable by the agent and are chosen randomly at each stage. Interestingly, when the maximal rewards of the arms happen to be similar, the latter approach may provide better performance.
Yahel David and Nahum Shimkin
null
1512.07650
null
null
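In the max K-armed setting the payoff is the single largest sample seen, so a natural index ranks arms by their best observation so far plus an exploration bonus. The sketch below uses a generic log-based bonus as a stand-in for the paper's tail-function-based rule, which it does not reproduce.

```python
import numpy as np

def max_bandit(arms, T, c=1.0, rng=np.random.default_rng(0)):
    """Max K-armed sketch: rank arms by their best observation plus an
    exploration bonus, and return the overall maximum found."""
    K = len(arms)
    best, n = np.full(K, -np.inf), np.zeros(K)
    overall = -np.inf
    for t in range(1, T + 1):
        a = t - 1 if t <= K else np.argmax(best + c * np.sqrt(np.log(t) / n))
        x = arms[a](rng)
        best[a], n[a] = max(best[a], x), n[a] + 1
        overall = max(overall, x)
    return overall

arms = [lambda g: g.gumbel(0.0, 1.0), lambda g: g.gumbel(0.2, 0.5)]
best_value = max_bandit(arms, T=5000)
```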
Deep Reinforcement Learning in Large Discrete Action Spaces
cs.AI cs.LG cs.NE stat.ML
Being able to reason in an environment with a large number of discrete actions is essential to bringing reinforcement learning to a larger class of problems. Recommender systems, industrial plants and language models are only some of the many real-world tasks involving large numbers of discrete actions to which current methods are difficult or even impossible to apply. An ability to generalize over the set of actions, as well as sub-linear complexity relative to the size of the set, are both necessary to handle such tasks. Current approaches cannot provide both of these, which motivates the work in this paper. Our proposed approach leverages prior information about the actions to embed them in a continuous space upon which it can generalize. Additionally, approximate nearest-neighbor methods allow for logarithmic-time lookup complexity relative to the number of actions, which is necessary for time-wise tractable training. This combined approach allows reinforcement learning methods to be applied to large-scale learning problems previously intractable with current methods. We demonstrate our algorithm's abilities on a series of tasks having up to one million actions.
Gabriel Dulac-Arnold and Richard Evans and Hado van Hasselt and Peter Sunehag and Timothy Lillicrap and Jonathan Hunt and Timothy Mann and Theophane Weber and Thomas Degris and Ben Coppin
null
1512.07679
null
null
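The action-embedding idea above can be sketched as a two-stage lookup: an actor emits a continuous proto-action, the nearest discrete action embeddings are retrieved, and a critic Q-function picks the best candidate. The sketch uses exact nearest-neighbor search and a dummy q_fn; in practice an approximate index gives the logarithmic-time lookup the abstract mentions.

```python
import numpy as np

def select_action(proto, embeddings, q_fn, k=10):
    """Two-stage lookup: retrieve the k discrete actions whose embeddings
    are nearest the actor's continuous proto-action (exact search here),
    then let the critic pick the best candidate."""
    d = ((embeddings - proto) ** 2).sum(axis=1)
    candidates = np.argpartition(d, k)[:k]           # k nearest action ids
    return candidates[np.argmax([q_fn(a) for a in candidates])]

embeddings = np.random.rand(1_000_000, 8)            # one vector per action
action = select_action(np.random.rand(8), embeddings,
                       q_fn=lambda a: np.sin(a))     # dummy critic
```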
Fast Parallel SVM using Data Augmentation
cs.LG
Although linear SVMs are among the most popular classifiers, they still pose challenges on very large-scale problems, even though linear or sub-linear algorithms have recently been developed for single machines. Parallel computing methods have been developed for learning large-scale SVMs. However, existing methods rely on solving local sub-optimization problems. In this paper, we develop a novel parallel algorithm for learning large-scale linear SVMs. Our approach is based on an equivalent data-augmentation formulation, which casts learning an SVM as a Bayesian inference problem for which we can develop very efficient parallel sampling methods. We provide empirical results for this parallel sampling SVM, along with extensions to SVR and non-linear kernels, and a parallel implementation of the Crammer and Singer model. This approach is very promising in its own right, and is further a very useful technique for parallelizing a broader family of general maximum-margin models.
Hugh Perkins, Minjie Xu, Jun Zhu, Bo Zhang
null
1512.07716
null
null
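The data-augmentation view of the linear SVM introduces one auxiliary variable per margin term, making the weight vector conditionally Gaussian and the auxiliaries conditionally inverse-Gaussian. Below is a single-chain Gibbs sketch of that construction (unit regularization, C = 1 assumed); the paper's contribution, distributing these draws across machines, is not shown.

```python
import numpy as np

def gibbs_svm(X, y, n_iter=200, prior_var=1.0, rng=np.random.default_rng(0)):
    """Gibbs sampler for the latent-variable view of the linear SVM:
    each margin term 1 - y_n x_n'w gets an auxiliary lambda_n whose
    conditional is inverse-Gaussian, which makes w conditionally Gaussian."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        margins = 1.0 - y * (X @ w)
        inv_lam = rng.wald(1.0 / np.maximum(np.abs(margins), 1e-8), 1.0)
        B_inv = np.eye(d) / prior_var + (X.T * inv_lam) @ X   # posterior precision
        b = X.T @ (y * (1.0 + inv_lam))
        mean = np.linalg.solve(B_inv, b)
        L = np.linalg.cholesky(np.linalg.inv(B_inv))
        w = mean + L @ rng.standard_normal(d)                 # draw w | lambda
    return w

X = np.random.randn(200, 5)
y = np.sign(X[:, 0] + 0.1 * np.random.randn(200))
w = gibbs_svm(X, y)
```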
Real-Time Audio-to-Score Alignment of Music Performances Containing Errors and Arbitrary Repeats and Skips
cs.SD cs.LG cs.MM
This paper discusses real-time alignment of audio signals of a music performance to the corresponding score (a.k.a. score following), which can handle tempo changes, errors, and arbitrary repeats and/or skips (repeats/skips) in performances. This type of score following is particularly useful in automatic accompaniment for practices and rehearsals, where errors and repeats/skips are often made. Simple extensions of the algorithms previously proposed in the literature are not applicable in these situations for scores of practical length due to their large computational complexity. To cope with this problem, we present two hidden Markov models of monophonic performance with errors and arbitrary repeats/skips, and derive efficient score-following algorithms under the assumption that the prior probability distributions of score positions before and after repeats/skips are independent of each other. We confirmed real-time operation of the algorithms with music scores of practical length (around 10000 notes) on a modern laptop, and their ability to resume tracking the input performance within 0.7 s on average after repeats/skips in clarinet performance data. Further improvements and an extension to polyphonic signals are also discussed.
Tomohiko Nakamura, Eita Nakamura and Shigeki Sagayama
10.1109/TASLP.2015.2507862
1512.07748
null
null
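The independence assumption in the abstract is what makes the repeat/skip model tractable: the probability of a jump from score position i to j factorizes as p_out(i) p_in(j), so the jump mass can be pooled once and the forward update costs O(N) rather than O(N^2) per audio frame. A minimal sketch of one such update, with scalar stay/advance/jump probabilities assumed for simplicity:

```python
import numpy as np

def forward_step(alpha, like, p_stay, p_next, p_jump, p_in):
    """One forward-algorithm update for a score-following HMM whose
    repeat/skip transition factorizes as p_jump * p_in[j]: the jump mass
    is pooled in a single sum, so the update is O(N) in the number of
    score positions N instead of O(N^2)."""
    local = p_stay * alpha + p_next * np.roll(alpha, 1)   # stay / advance
    local[0] = p_stay * alpha[0]                          # no wrap-around
    jump = p_jump * alpha.sum()                           # pooled exit mass
    new = (local + jump * p_in) * like
    return new / new.sum()

N = 10000                                  # roughly the practical score length
alpha = np.full(N, 1.0 / N)
alpha = forward_step(alpha, np.random.rand(N),
                     p_stay=0.1, p_next=0.85, p_jump=0.05,
                     p_in=np.full(N, 1.0 / N))
```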
The Lov\'asz Hinge: A Novel Convex Surrogate for Submodular Losses
stat.ML cs.LG
Learning with non-modular losses is an important problem when sets of predictions are made simultaneously. The main tools for constructing convex surrogate loss functions for set prediction are margin rescaling and slack rescaling. In this work, we show that these strategies lead to tight convex surrogates iff the underlying loss function is increasing in the number of incorrect predictions. However, gradient or cutting-plane computation for these functions is NP-hard for non-supermodular loss functions. We propose instead a novel surrogate loss function for submodular losses, the Lov\'asz hinge, which leads to O(p log p) complexity with O(p) oracle accesses to the loss function to compute a gradient or cutting-plane. We prove that the Lov\'asz hinge is convex and yields an extension. As a result, we have developed the first tractable convex surrogates in the literature for submodular losses. We demonstrate the utility of this novel convex surrogate through several set prediction tasks, including experiments on the PASCAL VOC and Microsoft COCO datasets.
Jiaqian Yu (CVC, GALEN), Matthew Blaschko
null
1512.07797
null
null
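The Lov\'asz hinge itself is cheap to evaluate, matching the stated O(p log p) complexity with O(p) loss-oracle calls: sort the margin violations in decreasing order, take discrete derivatives of the submodular loss along the sorted prefix chain, and dot them with the thresholded violations. A minimal sketch (the modular Hamming case recovers the usual additive hinge, illustrating the extension property):

```python
import numpy as np

def lovasz_hinge(margin_errors, set_loss):
    """Evaluate the Lovasz hinge: margin_errors[i] = 1 - y_i * f_i, and
    set_loss maps a boolean mask of mispredicted items to a submodular
    loss value. Sorting costs O(p log p); the loop makes p oracle calls."""
    p = len(margin_errors)
    order = np.argsort(-margin_errors)     # decreasing margin violation
    mask = np.zeros(p, dtype=bool)
    grad, prev = np.empty(p), 0.0
    for rank, i in enumerate(order):
        mask[i] = True
        cur = set_loss(mask)
        grad[rank] = cur - prev            # discrete derivative along the chain
        prev = cur
    return float(grad @ np.maximum(margin_errors[order], 0.0))

hamming = lambda m: float(m.sum())         # modular case: recovers the usual hinge
value = lovasz_hinge(np.array([0.5, -0.2, 1.3]), hamming)  # 0.5 + 1.3 = 1.8
```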
Visualizations Relevant to The User By Multi-View Latent Variable Factorization
cs.LG cs.IR
A main goal of data visualization is to find, from among all the available alternatives, mappings to the 2D/3D display which are relevant to the user. Assuming user interaction data, or other auxiliary data about the items or their relationships, the goal is to identify which aspects in the primary data support the user's input and, equally importantly, which aspects of the user's potentially noisy input have support in the primary data. For solving the problem, we introduce a multi-view embedding in which a latent factorization identifies which aspects in the two data views (primary data and user data) are related and which are specific to only one of them. The factorization is a generative model in which the display is parameterized as part of the factorization and the other factors explain away the aspects not expressible in a two-dimensional display. Functioning of the model is demonstrated on several data sets.
Seppo Virtanen, Homayun Afrabandpey, Samuel Kaski
10.1109/ICASSP.2016.7472120
1512.07807
null
null
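The factorization described above separates a shared latent, which doubles as the 2-D display, from view-specific factors that explain away the rest. The generative sketch below illustrates that decomposition with linear-Gaussian views; all dimensions and noise levels are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

def generate_views(n, d1, d2, shared_dim=2, spec_dim=3,
                   rng=np.random.default_rng(0)):
    """Two-view linear-Gaussian factorization sketch: a shared latent Z
    (2-D here, playing the role of display coordinates) drives both
    views, while view-specific factors explain away the rest."""
    Z = rng.standard_normal((n, shared_dim))     # shared / display latent
    U1 = rng.standard_normal((n, spec_dim))      # specific to primary data
    U2 = rng.standard_normal((n, spec_dim))      # specific to user data
    W1 = rng.standard_normal((shared_dim, d1))
    W2 = rng.standard_normal((shared_dim, d2))
    V1 = rng.standard_normal((spec_dim, d1))
    V2 = rng.standard_normal((spec_dim, d2))
    X1 = Z @ W1 + U1 @ V1 + 0.1 * rng.standard_normal((n, d1))
    X2 = Z @ W2 + U2 @ V2 + 0.1 * rng.standard_normal((n, d2))
    return X1, X2, Z

X1, X2, Z = generate_views(n=500, d1=20, d2=5)
```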
Implementing a Bayes Filter in a Neural Circuit: The Case of Unknown Stimulus Dynamics
cs.LG stat.ML
In order to interact intelligently with objects in the world, animals must first transform neural population responses into estimates of the dynamic, unknown stimuli which caused them. The Bayesian solution to this problem is known as a Bayes filter, which applies Bayes' rule to combine population responses with the predictions of an internal model. In this paper we present a method for learning to approximate a Bayes filter when the stimulus dynamics are unknown. To do this we use the inferential properties of probabilistic population codes to compute Bayes' rule, and train a neural network to compute approximate predictions by the method of maximum likelihood. In particular, we perform stochastic gradient descent on the negative log-likelihood with a novel approximation of the gradient. We demonstrate our methods on a finite-state, a linear, and a nonlinear filtering problem, and show how the hidden layer of the neural network develops tuning curves which are consistent with findings in experimental neuroscience.
Sacha Sokoloski
null
1512.07839
null
null
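One step of the learned Bayes filter described above can be sketched over a discrete latent state: the trained network stands in for the unknown stimulus dynamics by mapping the previous posterior to a prediction, and Bayes' rule fuses that prediction with the likelihood decoded from the population response. The transition matrix below is a hypothetical stand-in for the learned map.

```python
import numpy as np

def bayes_filter_step(posterior, learned_predict, likelihood):
    """One Bayes-filter update over a discrete latent state: a trained
    network (learned_predict) replaces the unknown dynamics, and Bayes'
    rule fuses its prediction with the likelihood decoded from the
    neural population response."""
    prior = learned_predict(posterior)     # prediction step
    post = prior * likelihood              # Bayes' rule, unnormalized
    return post / post.sum()

T = np.array([[0.9, 0.1],                  # stand-in for the learned map
              [0.2, 0.8]])
post = bayes_filter_step(np.array([0.5, 0.5]),
                         lambda p: p @ T,
                         likelihood=np.array([0.7, 0.3]))
```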