categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
null
null
1611.02830
null
null
http://arxiv.org/pdf/1611.02830v1
2016-11-09T06:21:27Z
2016-11-09T06:21:27Z
Online Learning for Wireless Distributed Computing
There has been growing interest in Wireless Distributed Computing (WDC), which leverages collaborative computing over multiple wireless devices. WDC enables complex applications that a single device cannot support individually. However, the problem of assigning tasks over multiple devices becomes challenging in the dynamic environments encountered in real-world settings, considering that resource availability and channel conditions change over time in unpredictable ways due to mobility and other factors. In this paper, we formulate a task assignment problem as an online learning problem using an adversarial multi-armed bandit framework. We propose MABSTA, a novel online learning algorithm that learns the performance of unknown devices and channel qualities continually through exploratory probing and makes task assignment decisions by exploiting the gained knowledge. For maximal adaptability, MABSTA is designed to make no stochastic assumption about the environment. We analyze it mathematically and provide a worst-case performance guarantee for any dynamic environment. We also compare it with the optimal offline policy as well as other baselines via emulations on trace data obtained from a wireless IoT testbed, and show that it offers competitive and robust performance in all cases. To the best of our knowledge, MABSTA is the first online algorithm in this domain of task assignment problems and provides a provable performance guarantee.
[ "['Yi-Hsuan Kao' 'Kwame Wright' 'Bhaskar Krishnamachari' 'Fan Bai']" ]
cs.NE cs.LG
null
1611.02854
null
null
http://arxiv.org/pdf/1611.02854v2
2017-03-05T21:03:22Z
2016-11-09T08:51:54Z
Lie-Access Neural Turing Machines
External neural memory structures have recently become a popular tool for algorithmic deep learning (Graves et al. 2014, Weston et al. 2014). These models generally utilize differentiable versions of traditional discrete memory-access structures (random access, stacks, tapes) to provide the storage necessary for computational tasks. In this work, we argue that these neural memory systems lack specific structure important for relative indexing, and propose an alternative model, Lie-access memory, that is explicitly designed for the neural setting. In this paradigm, memory is accessed using a continuous head in a key-space manifold. The head is moved via Lie group actions, such as shifts or rotations, generated by a controller, and memory access is performed by linear smoothing in key space. We argue that Lie groups provide a natural generalization of discrete memory structures, such as Turing machines, as they provide inverse and identity operators while maintaining differentiability. To experiment with this approach, we implement a simplified Lie-access neural Turing machine (LANTM) with different Lie groups. We find that this approach is able to perform well on a range of algorithmic tasks.
[ "Greg Yang, Alexander M. Rush", "['Greg Yang' 'Alexander M. Rush']" ]
cs.CV cs.CL cs.LG
null
1611.02879
null
null
http://arxiv.org/pdf/1611.02879v1
2016-11-09T10:24:52Z
2016-11-09T10:24:52Z
Audio Visual Speech Recognition using Deep Recurrent Neural Networks
In this work, we propose a training algorithm for an audio-visual automatic speech recognition (AV-ASR) system using a deep recurrent neural network (RNN). First, we train a deep RNN acoustic model with a Connectionist Temporal Classification (CTC) objective function. The frame labels obtained from the acoustic model are then used to perform a non-linear dimensionality reduction of the visual features using a deep bottleneck network. Audio and visual features are fused and used to train a fusion RNN. The use of bottleneck features for the visual modality helps the model to converge properly during training. Our system is evaluated on the GRID corpus. Our results show that the presence of the visual modality gives a significant improvement in character error rate (CER) at various levels of noise, even when the model is trained without noisy data. We also provide a comparison of two fusion methods: feature fusion and decision fusion.
[ "['Abhinav Thanda' 'Shankar M Venkatesan']", "Abhinav Thanda, Shankar M Venkatesan" ]
q-bio.QM cs.LG
10.1016/j.jbi.2017.03.006
1611.02945
null
null
http://arxiv.org/abs/1611.02945v1
2016-11-08T20:18:57Z
2016-11-08T20:18:57Z
Heter-LP: A heterogeneous label propagation algorithm and its application in drug repositioning
Drug repositioning offers an effective solution to drug discovery, saving both time and resources by finding new indications for existing drugs. Typically, a drug takes effect via its protein targets in the cell. As a result, it is necessary for drug development studies to conduct an investigation into the interrelationships of drugs, protein targets, and diseases. Although previous studies have made a strong case for the effectiveness of integrative network-based methods for predicting these interrelationships, little progress has been achieved in this regard within drug repositioning research. Moreover, the interactions of new drugs and targets (lacking any known targets and drugs, respectively) cannot be accurately predicted by most established methods. In this paper, we propose a novel semi-supervised heterogeneous label propagation algorithm named Heter-LP, which applies both local and global network features for data integration. To predict drug-target, disease-target, and drug-disease associations, we use information about drugs, diseases, and targets as collected from multiple sources at different levels. Our algorithm integrates these various types of data into a heterogeneous network and implements a label propagation algorithm to find new interactions. Statistical analyses of 10-fold cross-validation results and experimental analysis support the effectiveness of the proposed algorithm.
[ "Maryam Lotfi Shahreza, Nasser Ghadiri, Seyed Rasul Mossavi, Jaleh\n Varshosaz, James Green", "['Maryam Lotfi Shahreza' 'Nasser Ghadiri' 'Seyed Rasul Mossavi'\n 'Jaleh Varshosaz' 'James Green']" ]
cs.IT cs.DS cs.LG math.IT
null
1611.02960
null
null
http://arxiv.org/pdf/1611.02960v2
2016-11-28T16:36:44Z
2016-11-09T14:59:23Z
A Unified Maximum Likelihood Approach for Optimal Distribution Property Estimation
The advent of data science has spurred interest in estimating properties of distributions over large alphabets. Fundamental symmetric properties such as support size, support coverage, entropy, and proximity to uniformity have received the most attention, with each property estimated using a different technique and often intricate analysis tools. We prove that for all these properties, a single, simple, plug-in estimator---profile maximum likelihood (PML)---performs as well as the best specialized techniques. This raises the possibility that PML may optimally estimate many other symmetric properties.
[ "['Jayadev Acharya' 'Hirakendu Das' 'Alon Orlitsky'\n 'Ananda Theertha Suresh']" ]
cs.LG cs.CR stat.AP
null
1611.03021
null
null
http://arxiv.org/pdf/1611.03021v2
2017-08-14T18:25:34Z
2016-11-07T15:26:58Z
Attributing Hacks
In this paper, we describe an algorithm for estimating the provenance of hacks on websites. That is, given properties of sites and the temporal occurrence of attacks, we are able to attribute individual attacks to joint causes and vulnerabilities, as well as estimate the evolution of these vulnerabilities over time. Specifically, we use hazard regression with a time-varying additive hazard function parameterized in a generalized linear form. The activation coefficients on each feature are continuous-time functions over time. We formulate the problem of learning these functions as a constrained variational maximum likelihood estimation problem with total variation penalty and show that the optimal solution is a 0th order spline (a piecewise constant function) with a finite number of known knots. This allows the inference problem to be solved efficiently and at scale by solving a finite dimensional optimization problem. Extensive experiments on real data sets show that our method significantly outperforms Cox's proportional hazard model. We also conduct a case study and verify that the fitted functions are indeed recovering vulnerable features and real-life events such as the release of code to exploit these features in hacker blogs.
[ "Ziqi Liu, Alexander J. Smola, Kyle Soska, Yu-Xiang Wang, Qinghua\n Zheng, Jun Zhou", "['Ziqi Liu' 'Alexander J. Smola' 'Kyle Soska' 'Yu-Xiang Wang'\n 'Qinghua Zheng' 'Jun Zhou']" ]
cs.LG cs.NE
null
1611.03068
null
null
http://arxiv.org/pdf/1611.03068v2
2016-12-01T21:33:19Z
2016-11-09T20:12:08Z
Incremental Sequence Learning
Deep learning research over the past years has shown that by increasing the scope or difficulty of the learning problem over time, increasingly complex learning problems can be addressed. We study incremental learning in the context of sequence learning, using generative RNNs in the form of multi-layer recurrent Mixture Density Networks. While the potential of incremental or curriculum learning to enhance learning is known, indiscriminate application of the principle does not necessarily lead to improvement, and it is essential therefore to know which forms of incremental or curriculum learning have a positive effect. This research contributes to that aim by comparing three instantiations of incremental or curriculum learning. We introduce Incremental Sequence Learning, a simple incremental approach to sequence learning. Incremental Sequence Learning starts out by using only the first few steps of each sequence as training data. Each time a performance criterion has been reached, the length of the parts of the sequences used for training is increased. We introduce and make available a novel sequence learning task and data set: predicting and classifying MNIST pen stroke sequences. We find that Incremental Sequence Learning greatly speeds up sequence learning and reaches the best test performance level of regular sequence learning 20 times faster, reduces the test error by 74%, and in general performs more robustly; it displays lower variance and achieves sustained progress after all three comparison methods have stopped improving. The other instantiations of curriculum learning do not result in any noticeable improvement. A trained sequence prediction model is also used in transfer learning to the task of sequence classification, where it is found that transfer learning realizes improved classification performance compared to methods that learn to classify from scratch.
[ "Edwin D. de Jong", "['Edwin D. de Jong']" ]
cs.LG
null
1611.03071
null
null
http://arxiv.org/pdf/1611.03071v4
2017-08-06T00:12:49Z
2016-11-09T20:19:45Z
Fairness in Reinforcement Learning
We initiate the study of fairness in reinforcement learning, where the actions of a learning algorithm may affect its environment and future rewards. Our fairness constraint requires that an algorithm never prefers one action over another if the long-term (discounted) reward of choosing the latter action is higher. Our first result is negative: despite the fact that fairness is consistent with the optimal policy, any learning algorithm satisfying fairness must take time exponential in the number of states to achieve a non-trivial approximation to the optimal policy. We then provide a provably fair polynomial time algorithm under an approximate notion of fairness, thus establishing an exponential gap between exact and approximate fairness.
[ "Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern,\n Aaron Roth", "['Shahin Jabbari' 'Matthew Joseph' 'Michael Kearns' 'Jamie Morgenstern'\n 'Aaron Roth']" ]
cs.LG cs.AR
null
1611.03109
null
null
http://arxiv.org/pdf/1611.03109v1
2016-10-25T18:45:32Z
2016-10-25T18:45:32Z
Energy-efficient Machine Learning in Silicon: A Communications-inspired Approach
This position paper advocates a communications-inspired approach to the design of machine learning systems on energy-constrained embedded `always-on' platforms. The communications-inspired approach has two versions: 1) a deterministic version where existing low-power communication IC design methods are repurposed, and 2) a stochastic version referred to as Shannon-inspired statistical information processing, employing information-based metrics, statistical error compensation (SEC), and retraining-based methods to implement ML systems on stochastic circuit/device fabrics operating at the limits of energy efficiency. The communications-inspired approach has the potential to fully leverage the opportunities afforded by ML algorithms and applications in order to address the challenges inherent in their deployment on energy-constrained platforms.
[ "['Naresh R. Shanbhag']", "Naresh R. Shanbhag" ]
cs.LG
null
1611.03125
null
null
http://arxiv.org/pdf/1611.03125v1
2016-11-09T22:40:15Z
2016-11-09T22:40:15Z
A Modular Theory of Feature Learning
Learning representations of data, and in particular learning features for a subsequent prediction task, has been a fruitful area of research delivering impressive empirical results in recent years. However, relatively little is understood about what makes a representation `good'. We propose the idea of a risk gap induced by representation learning for a given prediction context, which measures the difference in the risk of some learner using the learned features as compared to the original inputs. We describe a set of sufficient conditions for unsupervised representation learning to provide a benefit, as measured by this risk gap. These conditions decompose the problem of when representation learning works into its constituent parts, which can be separately evaluated using an unlabeled sample, suitable domain-specific assumptions about the joint distribution, and analysis of the feature learner and subsequent supervised learner. We provide two examples of such conditions in the context of specific properties of the unlabeled distribution, namely when the data lies close to a low-dimensional manifold and when it forms clusters. We compare our approach to a recently proposed analysis of semi-supervised learning.
[ "Daniel McNamara, Cheng Soon Ong, Robert C. Williamson", "['Daniel McNamara' 'Cheng Soon Ong' 'Robert C. Williamson']" ]
cs.LG stat.ML
null
1611.03131
null
null
http://arxiv.org/pdf/1611.03131v3
2017-03-03T04:38:29Z
2016-11-09T23:19:21Z
Diverse Neural Network Learns True Target Functions
Neural networks are a powerful class of functions that can be trained with simple gradient descent to achieve state-of-the-art performance on a variety of applications. Despite their practical success, there is a paucity of results that provide theoretical guarantees on why they are so effective. Lying in the center of the problem is the difficulty of analyzing the non-convex loss function with potentially numerous local minima and saddle points. Can neural networks corresponding to the stationary points of the loss function learn the true target function? If yes, what are the key factors contributing to such nice optimization properties? In this paper, we answer these questions by analyzing one-hidden-layer neural networks with ReLU activation, and show that despite the non-convexity, neural networks with diverse units have no spurious local minima. We bypass the non-convexity issue by directly analyzing the first order optimality condition, and show that the loss can be made arbitrarily small if the minimum singular value of the "extended feature matrix" is large enough. We make novel use of techniques from kernel methods and geometric discrepancy, and identify a new relation linking the smallest singular value to the spectrum of a kernel function associated with the activation function and to the diversity of the units. Our results also suggest a novel regularization function to promote unit diversity for potentially better generalization.
[ "['Bo Xie' 'Yingyu Liang' 'Le Song']", "Bo Xie, Yingyu Liang, Le Song" ]
cs.LG
null
1611.03158
null
null
http://arxiv.org/pdf/1611.03158v2
2017-03-27T05:21:42Z
2016-11-10T01:48:39Z
Using Neural Networks to Compute Approximate and Guaranteed Feasible Hamilton-Jacobi-Bellman PDE Solutions
To sidestep the curse of dimensionality when computing solutions to Hamilton-Jacobi-Bellman partial differential equations (HJB PDE), we propose an algorithm that leverages a neural network to approximate the value function. We show that our final approximation of the value function generates near optimal controls which are guaranteed to successfully drive the system to a target state. Our framework is not dependent on state space discretization, leading to a significant reduction in computation time and space complexity in comparison with dynamic programming-based approaches. Using this grid-free approach also enables us to plan over longer time horizons with relatively little additional computation overhead. Unlike many previous neural network HJB PDE approximating formulations, our approximation is strictly conservative and hence any trajectories we generate will be strictly feasible. For demonstration, we specialize our new general framework to the Dubins car model and discuss how the framework can be applied to other models with higher-dimensional state spaces.
[ "['Frank Jiang' 'Glen Chou' 'Mo Chen' 'Claire J. Tomlin']", "Frank Jiang, Glen Chou, Mo Chen, Claire J. Tomlin" ]
cs.CR cs.LG
null
1611.03186
null
null
http://arxiv.org/pdf/1611.03186v1
2016-11-10T05:08:02Z
2016-11-10T05:08:02Z
SoK: Applying Machine Learning in Security - A Survey
The idea of applying machine learning (ML) to solve problems in security domains is almost three decades old. As information and communications grow more ubiquitous and more data become available, many security risks arise, as well as an appetite to manage and mitigate such risks. Consequently, research on applying and designing ML algorithms and systems for security has grown fast, ranging from intrusion detection systems (IDS) and malware classification to security policy management (SPM) and information leak checking. In this paper, we systematically study the methods, algorithms, and system designs in academic publications from 2008-2015 that applied ML in security domains. 98 percent of the surveyed papers appeared in the 6 highest-ranked academic security conferences and 1 conference known for pioneering ML applications in security. We examine the generalized system designs, underlying assumptions, measurements, and use cases in active research. Our examinations lead to 1) a taxonomy on ML paradigms and security domains for future exploration and exploitation, and 2) an agenda detailing open and upcoming challenges. Based on our survey, we also suggest a point of view that treats security as a game theory problem instead of a batch-trained ML problem.
[ "Heju Jiang, Jasvir Nagra, Parvez Ahammad", "['Heju Jiang' 'Jasvir Nagra' 'Parvez Ahammad']" ]
cs.LG stat.ML
null
1611.03199
null
null
http://arxiv.org/pdf/1611.03199v1
2016-11-10T07:00:43Z
2016-11-10T07:00:43Z
Low Data Drug Discovery with One-shot Learning
Recent advances in machine learning have made significant contributions to drug discovery. Deep neural networks in particular have been demonstrated to provide significant boosts in predictive power when inferring the properties and activities of small-molecule compounds. However, the applicability of these techniques has been limited by the requirement for large amounts of training data. In this work, we demonstrate how one-shot learning can be used to significantly lower the amounts of data required to make meaningful predictions in drug discovery applications. We introduce a new architecture, the residual LSTM embedding, that, when combined with graph convolutional neural networks, significantly improves the ability to learn meaningful distance metrics over small molecules. We open source all models introduced in this work as part of DeepChem, an open-source framework for deep learning in drug discovery.
[ "['Han Altae-Tran' 'Bharath Ramsundar' 'Aneesh S. Pappu' 'Vijay Pande']", "Han Altae-Tran, Bharath Ramsundar, Aneesh S. Pappu, and Vijay Pande" ]
cs.LG
null
1611.03214
null
null
http://arxiv.org/pdf/1611.03214v1
2016-11-10T08:07:46Z
2016-11-10T08:07:46Z
Ultimate tensorization: compressing convolutional and FC layers alike
Convolutional neural networks excel in image recognition tasks, but this comes at the cost of high computational and memory complexity. To tackle this problem, [1] developed a tensor factorization framework to compress fully-connected layers. In this paper, we focus on compressing convolutional layers. We show that while the direct application of the tensor framework [1] to the 4-dimensional kernel of convolution does compress the layer, we can do better. We reshape the convolutional kernel into a tensor of higher order and factorize it. We combine the proposed approach with the previous work to compress both convolutional and fully-connected layers of a network and achieve 80x network compression rate with 1.1% accuracy drop on the CIFAR-10 dataset.
[ "['Timur Garipov' 'Dmitry Podoprikhin' 'Alexander Novikov' 'Dmitry Vetrov']", "Timur Garipov, Dmitry Podoprikhin, Alexander Novikov, Dmitry Vetrov" ]
cs.AI cs.CL cs.LG cs.MA
null
1611.03218
null
null
http://arxiv.org/pdf/1611.03218v4
2017-03-15T11:24:49Z
2016-11-10T08:44:52Z
Learning to Play Guess Who? and Inventing a Grounded Language as a Consequence
Acquiring your first language is an incredible feat and not easily duplicated. Learning to communicate using nothing but a few pictureless books, a corpus, would likely be impossible even for humans. Nevertheless, this is the dominant approach in most natural language processing today. As an alternative, we propose the use of situated interactions between agents as a driving force for communication, and the framework of Deep Recurrent Q-Networks for evolving a shared language grounded in the provided environment. We task the agents with interactive image search in the form of the game Guess Who?. The images from the game provide a non-trivial environment for the agents to discuss and a natural grounding for the concepts they decide to encode in their communication. Our experiments show that the agents learn not only to encode physical concepts in their words, i.e. grounding, but also that the agents learn to hold a multi-step dialogue, remembering the state of the dialogue from step to step.
[ "Emilio Jorge, Mikael K{\\aa}geb\\\"ack, Fredrik D. Johansson, Emil\n Gustavsson", "['Emilio Jorge' 'Mikael Kågebäck' 'Fredrik D. Johansson' 'Emil Gustavsson']" ]
cs.NA cs.DS cs.LG math.NA
null
1611.03220
null
null
http://arxiv.org/pdf/1611.03220v4
2017-07-15T06:31:03Z
2016-11-10T08:50:05Z
Faster Kernel Ridge Regression Using Sketching and Preconditioning
Kernel Ridge Regression (KRR) is a simple yet powerful technique for non-parametric regression whose computation amounts to solving a linear system. This system is usually dense and highly ill-conditioned. In addition, the dimensions of the matrix are the same as the number of data points, so direct methods are unrealistic for large-scale datasets. In this paper, we propose a preconditioning technique for accelerating the solution of the aforementioned linear system. The preconditioner is based on random feature maps, such as random Fourier features, which have recently emerged as a powerful technique for speeding up and scaling the training of kernel-based methods, such as kernel ridge regression, by resorting to approximations. However, random feature maps only provide crude approximations to the kernel function, so delivering state-of-the-art results by directly solving the approximated system requires the number of random features to be very large. We show that random feature maps can be much more effective in forming preconditioners, since under certain conditions a not-too-large number of random features is sufficient to yield an effective preconditioner. We empirically evaluate our method and show it is highly effective for datasets of up to one million training examples.
[ "['Haim Avron' 'Kenneth L. Clarkson' 'David P. Woodruff']" ]
cs.DS cs.LG cs.NA math.NA
null
1611.03225
null
null
http://arxiv.org/pdf/1611.03225v2
2017-06-26T12:55:39Z
2016-11-10T09:05:43Z
Sharper Bounds for Regularized Data Fitting
We study matrix sketching methods for regularized variants of linear regression, low rank approximation, and canonical correlation analysis. Our main focus is on sketching techniques which preserve the objective function value for regularized problems, which is an area that has remained largely unexplored. We study regularization both in a fairly broad setting, and in the specific context of the popular and widely used technique of ridge regularization; for the latter, as applied to each of these problems, we show algorithmic resource bounds in which the {\em statistical dimension} appears in places where in previous bounds the rank would appear. The statistical dimension is always smaller than the rank, and decreases as the amount of regularization increases. In particular, for the ridge low-rank approximation problem $\min_{Y,X} \lVert YX - A \rVert_F^2 + \lambda \lVert Y\rVert_F^2 + \lambda\lVert X \rVert_F^2$, where $Y\in\mathbb{R}^{n\times k}$ and $X\in\mathbb{R}^{k\times d}$, we give an approximation algorithm needing \[ O(\mathtt{nnz}(A)) + \tilde{O}((n+d)\varepsilon^{-1}k \min\{k, \varepsilon^{-1}\mathtt{sd}_\lambda(Y^*)\})+ \mathtt{poly}(\mathtt{sd}_\lambda(Y^*) \varepsilon^{-1}) \] time, where $\mathtt{sd}_\lambda(Y^*)\le k$ is the statistical dimension of $Y^*$, $Y^*$ is an optimal $Y$, $\varepsilon$ is an error parameter, and $\mathtt{nnz}(A)$ is the number of nonzero entries of $A$. This is faster than prior work, even when $\lambda=0$. We also study regularization in a much more general setting. For example, we obtain sketching-based algorithms for the low-rank approximation problem $\min_{X,Y} \lVert YX - A \rVert_F^2 + f(Y,X)$ where $f(\cdot,\cdot)$ is a regularizing function satisfying some very general conditions (chiefly, invariance under orthogonal transformations).
[ "Haim Avron and Kenneth L. Clarkson and David P. Woodruff", "['Haim Avron' 'Kenneth L. Clarkson' 'David P. Woodruff']" ]
stat.ML cs.LG
null
1611.03231
null
null
http://arxiv.org/pdf/1611.03231v1
2016-11-10T09:25:12Z
2016-11-10T09:25:12Z
Policy Search with High-Dimensional Context Variables
Direct contextual policy search methods learn to improve policy parameters and simultaneously generalize these parameters to different context or task variables. However, learning from high-dimensional context variables, such as camera images, is still a prominent problem in many real-world tasks. A naive application of unsupervised dimensionality reduction methods to the context variables, such as principal component analysis, is insufficient as task-relevant input may be ignored. In this paper, we propose a contextual policy search method in the model-based relative entropy stochastic search framework with integrated dimensionality reduction. We learn a model of the reward that is locally quadratic in both the policy parameters and the context variables. Furthermore, we perform supervised linear dimensionality reduction on the context variables by nuclear norm regularization. The experimental results show that the proposed method outperforms naive dimensionality reduction via principal component analysis and a state-of-the-art contextual policy search method.
[ "Voot Tangkaratt, Herke van Hoof, Simone Parisi, Gerhard Neumann, Jan\n Peters, Masashi Sugiyama", "['Voot Tangkaratt' 'Herke van Hoof' 'Simone Parisi' 'Gerhard Neumann'\n 'Jan Peters' 'Masashi Sugiyama']" ]
cs.LG stat.ML
null
1611.03383
null
null
http://arxiv.org/pdf/1611.03383v1
2016-11-10T16:24:16Z
2016-11-10T16:24:16Z
Disentangling factors of variation in deep representations using adversarial training
We introduce a conditional generative model for learning to disentangle the hidden factors of variation within a set of labeled observations, and separate them into complementary codes. One code summarizes the specified factors of variation associated with the labels. The other summarizes the remaining unspecified variability. During training, the only available source of supervision comes from our ability to distinguish among different observations belonging to the same class. Examples of such observations include images of a set of labeled objects captured at different viewpoints, or recordings of a set of speakers dictating multiple phrases. In both instances, the intra-class diversity is the source of the unspecified factors of variation: each object is observed at multiple viewpoints, and each speaker dictates multiple phrases. Learning to disentangle the specified factors from the unspecified ones becomes easier when strong supervision is possible. Suppose that during training, we have access to pairs of images, where each pair shows two different objects captured from the same viewpoint. This source of alignment allows us to solve our task using existing methods. However, labels for the unspecified factors are usually unavailable in realistic scenarios where data acquisition is not strictly controlled. We address the problem of disentanglement in this more general setting by combining deep convolutional autoencoders with a form of adversarial training. Both factors of variation are implicitly captured in the organization of the learned embedding space, and can be used for solving single-image analogies. Experimental results on synthetic and real datasets show that the proposed method is capable of generalizing to unseen classes and intra-class variabilities.
[ "['Michael Mathieu' 'Junbo Zhao' 'Pablo Sprechmann' 'Aditya Ramesh'\n 'Yann LeCun']", "Michael Mathieu, Junbo Zhao, Pablo Sprechmann, Aditya Ramesh, Yann\n LeCun" ]
cs.DC astro-ph.IM cs.LG stat.AP stat.ML
null
1611.03404
null
null
http://arxiv.org/pdf/1611.03404v1
2016-11-10T17:16:04Z
2016-11-10T17:16:04Z
Learning an Astronomical Catalog of the Visible Universe through Scalable Bayesian Inference
Celeste is a procedure for inferring astronomical catalogs that attains state-of-the-art scientific results. To date, Celeste has been scaled to at most hundreds of megabytes of astronomical images: Bayesian posterior inference is notoriously demanding computationally. In this paper, we report on a scalable, parallel version of Celeste, suitable for learning catalogs from modern large-scale astronomical datasets. Our algorithmic innovations include a fast numerical optimization routine for Bayesian posterior inference and a statistically efficient scheme for decomposing astronomical optimization problems into subproblems. Our scalable implementation is written entirely in Julia, a new high-level dynamic programming language designed for scientific and numerical computing. We use Julia's high-level constructs for shared and distributed memory parallelism, and demonstrate effective load balancing and efficient scaling on up to 8192 Xeon cores on the NERSC Cori supercomputer.
[ "['Jeffrey Regier' 'Kiran Pamnany' 'Ryan Giordano' 'Rollin Thomas'\n 'David Schlegel' 'Jon McAuliffe' 'Prabhat']", "Jeffrey Regier, Kiran Pamnany, Ryan Giordano, Rollin Thomas, David\n Schlegel, Jon McAuliffe and Prabhat" ]
cs.PL cs.LG cs.MS
null
1611.03410
null
null
http://arxiv.org/pdf/1611.03410v1
2016-11-10T17:29:24Z
2016-11-10T17:29:24Z
Binomial Checkpointing for Arbitrary Programs with No User Annotation
Heretofore, automatic checkpointing at procedure-call boundaries, to reduce the space complexity of reverse mode, has been provided by systems like Tapenade. However, binomial checkpointing, or treeverse, has only been provided in Automatic Differentiation (AD) systems in special cases, e.g., through user-provided pragmas on DO loops in Tapenade, or as the nested taping mechanism in adol-c for time integration processes, which requires that user code be refactored. We present a framework for applying binomial checkpointing to arbitrary code with no special annotation or refactoring required. This is accomplished by applying binomial checkpointing directly to a program trace. This trace is produced by a general-purpose checkpointing mechanism that is orthogonal to AD.
[ "['Jeffrey Mark Siskind' 'Barak A. Pearlmutter']" ]
cs.MS cs.LG
null
1611.03423
null
null
http://arxiv.org/pdf/1611.03423v1
2016-11-10T17:50:06Z
2016-11-10T17:50:06Z
DiffSharp: An AD Library for .NET Languages
DiffSharp is an algorithmic differentiation or automatic differentiation (AD) library for the .NET ecosystem, which is targeted by the C# and F# languages, among others. The library has been designed with machine learning applications in mind, allowing very succinct implementations of models and optimization routines. DiffSharp is implemented in F# and exposes forward and reverse AD operators as general nestable higher-order functions, usable by any .NET language. It provides high-performance linear algebra primitives---scalars, vectors, and matrices, with a generalization to tensors underway---that are fully supported by all the AD operators, and which use a BLAS/LAPACK backend via the highly optimized OpenBLAS library. DiffSharp currently uses operator overloading, but we are developing a transformation-based version of the library using F#'s "code quotation" metaprogramming facility. Work on a CUDA-based GPU backend is also underway.
[ "At{\\i}l{\\i}m G\\\"une\\c{s} Baydin and Barak A. Pearlmutter and Jeffrey\n Mark Siskind", "['Atılım Güneş Baydin' 'Barak A. Pearlmutter' 'Jeffrey Mark Siskind']" ]
stat.ML cs.LG
null
1611.03427
null
null
http://arxiv.org/pdf/1611.03427v2
2017-03-02T22:09:54Z
2016-11-10T17:54:22Z
Multi-Task Multiple Kernel Relationship Learning
This paper presents a novel multitask multiple kernel learning framework that efficiently learns the kernel weights leveraging the relationship across multiple tasks. The idea is to automatically infer this task relationship in the \textit{RKHS} space corresponding to the given base kernels. The problem is formulated as a regularization-based approach called \textit{Multi-Task Multiple Kernel Relationship Learning} (\textit{MK-MTRL}), which models the task relationship matrix from the weights learned from latent feature spaces of task-specific base kernels. Unlike in previous work, the proposed formulation allows one to incorporate prior knowledge for simultaneously learning several related tasks. We propose an alternating minimization algorithm to learn the model parameters, kernel weights and task relationship matrix. In order to tackle large-scale problems, we further propose a two-stage \textit{MK-MTRL} online learning algorithm and show that it significantly reduces the computational time, and also achieves performance comparable to that of the joint learning framework. Experimental results on benchmark datasets show that the proposed formulations outperform several state-of-the-art multitask learning methods.
[ "['Keerthiram Murugesan' 'Jaime Carbonell']", "Keerthiram Murugesan, Jaime Carbonell" ]
cs.LG cs.AI stat.ML
null
1611.03451
null
null
http://arxiv.org/pdf/1611.03451v1
2016-11-10T19:11:09Z
2016-11-10T19:11:09Z
Importance Sampling with Unequal Support
Importance sampling is often used in machine learning when training and testing data come from different distributions. In this paper we propose a new variant of importance sampling that can reduce the variance of importance sampling-based estimates by orders of magnitude when the supports of the training and testing distributions differ. After motivating and presenting our new importance sampling estimator, we provide a detailed theoretical analysis that characterizes both its bias and variance relative to the ordinary importance sampling estimator (in various settings, which include cases where ordinary importance sampling is biased, while our new estimator is not, and vice versa). We conclude with an example of how our new importance sampling estimator can be used to improve estimates of how well a new treatment policy for diabetes will work for an individual, using only data from when the individual used a previous treatment policy.
[ "Philip S. Thomas and Emma Brunskill", "['Philip S. Thomas' 'Emma Brunskill']" ]
cs.LG cs.CC cs.DS cs.IT math.IT math.ST stat.TH
null
1611.03473
null
null
http://arxiv.org/pdf/1611.03473v2
2017-05-17T15:48:34Z
2016-11-10T20:32:48Z
Statistical Query Lower Bounds for Robust Estimation of High-dimensional Gaussians and Gaussian Mixtures
We describe a general technique that yields the first {\em Statistical Query lower bounds} for a range of fundamental high-dimensional learning problems involving Gaussian distributions. Our main results are for the problems of (1) learning Gaussian mixture models (GMMs), and (2) robust (agnostic) learning of a single unknown Gaussian distribution. For each of these problems, we show a {\em super-polynomial gap} between the (information-theoretic) sample complexity and the computational complexity of {\em any} Statistical Query algorithm for the problem. Our SQ lower bound for Problem (1) is qualitatively matched by known learning algorithms for GMMs. Our lower bound for Problem (2) implies that the accuracy of the robust learning algorithm in~\cite{DiakonikolasKKLMS16} is essentially best possible among all polynomial-time SQ algorithms. Our SQ lower bounds are attained via a unified moment-matching technique that is useful in other contexts and may be of broader interest. Our technique yields nearly-tight lower bounds for a number of related unsupervised estimation problems. Specifically, for the problems of (3) robust covariance estimation in spectral norm, and (4) robust sparse mean estimation, we establish a quadratic {\em statistical--computational tradeoff} for SQ algorithms, matching known upper bounds. Finally, our technique can be used to obtain tight sample complexity lower bounds for high-dimensional {\em testing} problems. Specifically, for the classical problem of robustly {\em testing} an unknown mean (known covariance) Gaussian, our technique implies an information-theoretic sample lower bound that scales {\em linearly} in the dimension. Our sample lower bound matches the sample complexity of the corresponding robust {\em learning} problem and separates the sample complexity of robust testing from standard (non-robust) testing.
[ "Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart", "['Ilias Diakonikolas' 'Daniel M. Kane' 'Alistair Stewart']" ]
cs.LG
null
1611.03530
null
null
http://arxiv.org/pdf/1611.03530v2
2017-02-26T19:36:40Z
2016-11-10T22:02:36Z
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
[ "['Chiyuan Zhang' 'Samy Bengio' 'Moritz Hardt' 'Benjamin Recht'\n 'Oriol Vinyals']" ]
cs.LG cs.AI
null
1611.03553
null
null
http://arxiv.org/pdf/1611.03553v1
2016-11-11T00:46:33Z
2016-11-11T00:46:33Z
The Sum-Product Theorem: A Foundation for Learning Tractable Models
Inference in expressive probabilistic models is generally intractable, which makes them difficult to learn and limits their applicability. Sum-product networks are a class of deep models where, surprisingly, inference remains tractable even when an arbitrary number of hidden layers are present. In this paper, we generalize this result to a much broader set of learning problems: all those where inference consists of summing a function over a semiring. This includes satisfiability, constraint satisfaction, optimization, integration, and others. In any semiring, for summation to be tractable it suffices that the factors of every product have disjoint scopes. This unifies and extends many previous results in the literature. Enforcing this condition at learning time thus ensures that the learned models are tractable. We illustrate the power and generality of this approach by applying it to a new type of structured prediction problem: learning a nonconvex function that can be globally optimized in polynomial time. We show empirically that this greatly outperforms the standard approach of learning without regard to the cost of optimization.
[ "['Abram L. Friesen' 'Pedro Domingos']", "Abram L. Friesen and Pedro Domingos" ]
stat.ML cs.LG
null
1611.03578
null
null
http://arxiv.org/pdf/1611.03578v1
2016-11-11T03:54:00Z
2016-11-11T03:54:00Z
Simple and Efficient Parallelization for Probabilistic Temporal Tensor Factorization
Probabilistic Temporal Tensor Factorization (PTTF) is an effective algorithm to model temporal tensor data. It leverages a time constraint to capture the evolving properties of tensor data. Nowadays, exploding dataset sizes demand large-scale PTTF analysis, and a parallel solution is critical to accommodate the trend. However, the parallelization of PTTF remains unexplored. In this paper, we propose a simple yet efficient Parallel Probabilistic Temporal Tensor Factorization, referred to as P$^2$T$^2$F, to provide a scalable PTTF solution. P$^2$T$^2$F is fundamentally disparate from existing parallel tensor factorizations by considering the probabilistic decomposition and the temporal effects of tensor data. It adopts a new tensor data split strategy to subdivide a large tensor into independent sub-tensors, the computation of which is inherently parallel. We train P$^2$T$^2$F with an efficient stochastic Alternating Direction Method of Multipliers algorithm, and show that convergence is guaranteed. Experiments on several real-world tensor datasets demonstrate that P$^2$T$^2$F is a highly effective and efficiently scalable algorithm dedicated to large-scale probabilistic temporal tensor analysis.
[ "['Guangxi Li' 'Zenglin Xu' 'Linnan Wang' 'Jinmian Ye' 'Irwin King'\n 'Michael Lyu']", "Guangxi Li, Zenglin Xu, Linnan Wang, Jinmian Ye, Irwin King, Michael\n Lyu" ]
cs.DS cs.IT cs.LG math.IT math.ST stat.TH
null
1611.03579
null
null
http://arxiv.org/pdf/1611.03579v1
2016-11-11T03:59:24Z
2016-11-11T03:59:24Z
Collision-based Testers are Optimal for Uniformity and Closeness
We study the fundamental problems of (i) uniformity testing of a discrete distribution, and (ii) closeness testing between two discrete distributions with bounded $\ell_2$-norm. These problems have been extensively studied in distribution testing and sample-optimal estimators are known for them~\cite{Paninski:08, CDVV14, VV14, DKN:15}. In this work, we show that the original collision-based testers proposed for these problems ~\cite{GRdist:00, BFR+:00} are sample-optimal, up to constant factors. Previous analyses showed sample complexity upper bounds for these testers that are optimal as a function of the domain size $n$, but suboptimal by polynomial factors in the error parameter $\epsilon$. Our main contribution is a new tight analysis establishing that these collision-based testers are information-theoretically optimal, up to constant factors, both in the dependence on $n$ and in the dependence on $\epsilon$.
[ "['Ilias Diakonikolas' 'Themis Gouleakis' 'John Peebles' 'Eric Price']", "Ilias Diakonikolas, Themis Gouleakis, John Peebles, Eric Price" ]
cs.CL cs.AI cs.LG
null
1611.03599
null
null
http://arxiv.org/pdf/1611.03599v1
2016-11-11T07:05:49Z
2016-11-11T07:05:49Z
UTCNN: a Deep Learning Model of Stance Classification on Social Media Text
Most neural network models for document classification on social media focus on text information to the neglect of other information on these platforms. In this paper, we classify post stance on social media channels and develop UTCNN, a neural network model that incorporates user tastes, topic tastes, and user comments on posts. UTCNN not only works on social media texts, but also analyzes texts in forums and message boards. Experiments performed on Chinese Facebook data and English online debate forum data show that UTCNN achieves a 0.755 macro-average f-score for supportive, neutral, and unsupportive stance classes on Facebook data, which is significantly better than models in which either user, topic, or comment information is withheld. This model design greatly mitigates the lack of data for the minor class without the use of oversampling. In addition, UTCNN yields a 0.842 accuracy on English online debate forum data, which also significantly outperforms results from previous work as well as other deep learning models, showing that UTCNN performs well regardless of language or platform.
[ "Wei-Fan Chen and Lun-Wei Ku", "['Wei-Fan Chen' 'Lun-Wei Ku']" ]
cs.LG
null
1611.03608
null
null
http://arxiv.org/pdf/1611.03608v1
2016-11-11T08:23:30Z
2016-11-11T08:23:30Z
Greedy Step Averaging: A parameter-free stochastic optimization method
In this paper, we present the greedy step averaging (GSA) method, a parameter-free stochastic optimization algorithm for a variety of machine learning problems. As a gradient-based optimization method, GSA makes use of the information from the minimizer of a single sample's loss function, and adopts an averaging strategy to calculate a reasonable learning rate sequence. While most existing gradient-based algorithms introduce an increasing number of hyperparameters or try to make a trade-off between computational cost and convergence rate, GSA avoids the manual tuning of learning rates and brings in no more hyperparameters or extra cost. We perform exhaustive numerical experiments for logistic and softmax regression to compare our method with other state-of-the-art methods on 16 datasets. Results show that GSA is robust across various scenarios.
[ "Xiatian Zhang, Fan Yao, Yongjun Tian", "['Xiatian Zhang' 'Fan Yao' 'Yongjun Tian']" ]
cs.AI cs.CV cs.LG cs.RO
null
1611.03673
null
null
http://arxiv.org/pdf/1611.03673v3
2017-01-13T11:15:22Z
2016-11-11T12:14:45Z
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
[ "Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J.\n Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray\n Kavukcuoglu, Dharshan Kumaran and Raia Hadsell", "['Piotr Mirowski' 'Razvan Pascanu' 'Fabio Viola' 'Hubert Soyer'\n 'Andrew J. Ballard' 'Andrea Banino' 'Misha Denil' 'Ross Goroshin'\n 'Laurent Sifre' 'Koray Kavukcuoglu' 'Dharshan Kumaran' 'Raia Hadsell']" ]
cs.CV cs.LG
null
1611.03718
null
null
http://arxiv.org/pdf/1611.03718v2
2016-11-25T14:31:07Z
2016-11-11T14:25:54Z
Hierarchical Object Detection with Deep Reinforcement Learning
We present a method for performing hierarchical object detection in images guided by a deep reinforcement learning agent. The key idea is to focus on those parts of the image that contain richer information and zoom in on them. We train an intelligent agent that, given an image window, is capable of deciding where to focus the attention among five different predefined region candidates (smaller windows). This procedure is iterated, providing a hierarchical image analysis. We compare two different candidate proposal strategies to guide the object search: with and without overlap. Moreover, our work compares two different strategies to extract features from a convolutional neural network for each region proposal: a first one that computes new feature maps for each region proposal, and a second one that computes the feature maps for the whole image to later generate crops for each region proposal. Experiments indicate better results for the overlapping candidate proposal strategy and a loss of performance for the cropped image features due to the loss of spatial resolution. We argue that, while this loss seems unavoidable when working with large amounts of object candidates, the much more reduced number of region proposals generated by our reinforcement learning agent makes it feasible to extract features for each location without sharing convolutional computation among regions.
[ "['Miriam Bellver' 'Xavier Giro-i-Nieto' 'Ferran Marques' 'Jordi Torres']", "Miriam Bellver, Xavier Giro-i-Nieto, Ferran Marques and Jordi Torres" ]
cs.LG stat.ML
null
1611.03777
null
null
http://arxiv.org/pdf/1611.03777v1
2016-11-10T17:57:19Z
2016-11-10T17:57:19Z
Tricks from Deep Learning
The deep learning community has devised a diverse set of methods to make practical the gradient optimization, using large datasets, of large and highly complex models with deeply cascaded nonlinearities. Taken as a whole, these methods constitute a breakthrough, allowing computational structures which are quite wide, very deep, and with an enormous number and variety of free parameters to be effectively optimized. The result now dominates much of practical machine learning, with applications in machine translation, computer vision, and speech recognition. Many of these methods, viewed through the lens of algorithmic differentiation (AD), can be seen as either addressing issues with the gradient itself, or finding ways of achieving increased efficiency using tricks that are AD-related, but not provided by current AD systems. The goal of this paper is to explain not just those methods of most relevance to AD, but also the technical constraints and mindset which led to their discovery. After explaining this context, we present a "laundry list" of methods developed by the deep learning community. Two of these are discussed in further mathematical detail: a way to dramatically reduce the size of the tape when performing reverse-mode AD on a (theoretically) time-reversible process like an ODE integrator; and a new mathematical insight that allows for the implementation of a stochastic Newton's method.
[ "At{\\i}l{\\i}m G\\\"une\\c{s} Baydin and Barak A. Pearlmutter and Jeffrey\n Mark Siskind", "['Atılım Güneş Baydin' 'Barak A. Pearlmutter' 'Jeffrey Mark Siskind']" ]
cs.CR cs.LG
null
1611.03814
null
null
http://arxiv.org/pdf/1611.03814v1
2016-11-11T18:57:15Z
2016-11-11T18:57:15Z
Towards the Science of Security and Privacy in Machine Learning
Advances in machine learning (ML) in recent years have enabled a dizzying array of applications such as data analytics, autonomous systems, and security diagnostics. ML is now pervasive---new systems and models are being deployed in every domain imaginable, leading to rapid and widespread deployment of software based inference and decision making. There is growing recognition that ML exposes new vulnerabilities in software systems, yet the technical community's understanding of the nature and extent of these vulnerabilities remains limited. We systematize recent findings on ML security and privacy, focusing on attacks identified on these systems and defenses crafted to date. We articulate a comprehensive threat model for ML, and categorize attacks and defenses within an adversarial framework. Key insights resulting from works both in the ML and security communities are identified and the effectiveness of approaches are related to structural elements of ML algorithms and the data used to train them. We conclude by formally exploring the opposing relationship between model accuracy and resilience to adversarial manipulation. Through these explorations, we show that there are (possibly unavoidable) tensions between model complexity, accuracy, and resilience that must be calibrated for the environments in which they will be used.
[ "['Nicolas Papernot' 'Patrick McDaniel' 'Arunesh Sinha' 'Michael Wellman']", "Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, Michael Wellman" ]
cs.LG cs.DS stat.ML
null
1611.03819
null
null
http://arxiv.org/pdf/1611.03819v1
2016-11-11T19:13:37Z
2016-11-11T19:13:37Z
Recovery Guarantee of Non-negative Matrix Factorization via Alternating Updates
Non-negative matrix factorization is a popular tool for decomposing data into feature and weight matrices under non-negativity constraints. It enjoys practical success but is poorly understood theoretically. This paper proposes an algorithm that alternates between decoding the weights and updating the features, and shows that assuming a generative model of the data, it provably recovers the ground-truth under fairly mild conditions. In particular, its only essential requirement on features is linear independence. Furthermore, the algorithm uses ReLU to exploit the non-negativity for decoding the weights, and thus can tolerate adversarial noise that can potentially be as large as the signal, and can tolerate unbiased noise much larger than the signal. The analysis relies on a carefully designed coupling between two potential functions, which we believe is of independent interest.
[ "Yuanzhi Li, Yingyu Liang, Andrej Risteski", "['Yuanzhi Li' 'Yingyu Liang' 'Andrej Risteski']" ]
stat.ML cs.LG
null
1611.03824
null
null
http://arxiv.org/pdf/1611.03824v6
2017-06-12T11:19:30Z
2016-11-11T19:33:01Z
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
[ "Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha\n Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas", "['Yutian Chen' 'Matthew W. Hoffman' 'Sergio Gomez Colmenarejo'\n 'Misha Denil' 'Timothy P. Lillicrap' 'Matt Botvinick' 'Nando de Freitas']" ]
cs.LG cs.AI
null
1611.03852
null
null
http://arxiv.org/pdf/1611.03852v3
2016-11-25T08:09:55Z
2016-11-11T20:53:45Z
A Connection between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models
Generative adversarial networks (GANs) are a recently proposed class of generative models in which a generator is trained to optimize a cost function that is being simultaneously learned by a discriminator. While the idea of learning cost functions is relatively new to the field of generative modeling, learning costs has long been studied in control and reinforcement learning (RL) domains, typically for imitation learning from demonstrations. In these fields, learning the cost function underlying observed behavior is known as inverse reinforcement learning (IRL) or inverse optimal control. While at first the connection between cost learning in RL and cost learning in generative modeling may appear to be a superficial one, we show in this paper that certain IRL methods are in fact mathematically equivalent to GANs. In particular, we demonstrate an equivalence between a sample-based algorithm for maximum entropy IRL and a GAN in which the generator's density can be evaluated and is provided as an additional input to the discriminator. Interestingly, maximum entropy IRL is a special case of an energy-based model. We discuss the interpretation of GANs as an algorithm for training energy-based models, and relate this interpretation to other recent work that seeks to connect GANs and EBMs. By formally highlighting the connection between GANs, IRL, and EBMs, we hope that researchers in all three communities can better identify and apply transferable ideas from one domain to another, particularly for developing more stable and scalable algorithms: a major challenge in all three domains.
[ "['Chelsea Finn' 'Paul Christiano' 'Pieter Abbeel' 'Sergey Levine']", "Chelsea Finn, Paul Christiano, Pieter Abbeel, Sergey Levine" ]
stat.ML cs.LG
null
1611.03879
null
null
http://arxiv.org/pdf/1611.03879v1
2016-11-11T21:08:36Z
2016-11-11T21:08:36Z
Annealing Gaussian into ReLU: a New Sampling Strategy for Leaky-ReLU RBM
The Restricted Boltzmann Machine (RBM) is a bipartite graphical model that is used as the building block in energy-based deep generative models. Due to numerical stability and quantifiability of the likelihood, the RBM is commonly used with Bernoulli units. Here, we consider an alternative member of the exponential family RBM with leaky rectified linear units -- called leaky RBM. We first study the joint and marginal distributions of the leaky RBM under different leakiness, which provides us with important insights by connecting the leaky RBM model and truncated Gaussian distributions. The connection leads us to a simple yet efficient method for sampling from this model, where the basic idea is to anneal the leakiness rather than the energy; i.e., to start from a fully Gaussian/linear unit and gradually decrease the leakiness over iterations. This serves as an alternative to annealing the temperature parameter and enables numerical estimation of the likelihood that is more efficient and more accurate than the commonly used annealed importance sampling (AIS). We further demonstrate that the proposed sampling algorithm enjoys a faster mixing property than the contrastive divergence algorithm, which benefits training without any additional computational cost.
[ "['Chun-Liang Li' 'Siamak Ravanbakhsh' 'Barnabas Poczos']", "Chun-Liang Li, Siamak Ravanbakhsh, Barnabas Poczos" ]
cs.LG
null
1611.03894
null
null
http://arxiv.org/pdf/1611.03894v1
2016-11-11T22:01:41Z
2016-11-11T22:01:41Z
Unsupervised Learning For Effective User Engagement on Social Media
In this paper, we investigate the effectiveness of unsupervised feature learning techniques in predicting user engagement on social media. Specifically, we compare two methods to predict the number of feedbacks (i.e., comments) that a blog post is likely to receive. We compare Principal Component Analysis (PCA) and sparse Autoencoder to a baseline method where the data are only centered and scaled, on each of two models: Linear Regression and Regression Tree. We find that unsupervised learning techniques significantly improve the prediction accuracy on both models. For the Linear Regression model, sparse Autoencoder achieves the best result, with an improvement in the root mean squared error (RMSE) on the test set of 42% over the baseline method. For the Regression Tree model, PCA achieves the best result, with an improvement in RMSE of 15% over the baseline.
[ "Thai Pham and Camelia Simoiu", "['Thai Pham' 'Camelia Simoiu']" ]
cs.LG stat.ML
null
1611.03898
null
null
http://arxiv.org/pdf/1611.03898v1
2016-11-11T22:20:41Z
2016-11-11T22:20:41Z
Low Latency Anomaly Detection and Bayesian Network Prediction of Anomaly Likelihood
We develop a supervised machine learning model that detects anomalies in systems in real time. Our model processes unbounded streams of data into time series which then form the basis of a low-latency anomaly detection model. Moreover, we extend our preliminary goal of just anomaly detection to simultaneous anomaly prediction. We approach this very challenging problem by developing a Bayesian Network framework that captures the information about the parameters of the lagged regressors calibrated in the first part of our approach and use this structure to learn local conditional probability distributions.
[ "Derek Farren and Thai Pham and Marco Alban-Hidalgo", "['Derek Farren' 'Thai Pham' 'Marco Alban-Hidalgo']" ]
cs.AI cs.LG stat.ML
null
1611.03907
null
null
http://arxiv.org/pdf/1611.03907v4
2018-06-19T20:14:54Z
2016-11-11T22:39:01Z
Reinforcement Learning in Rich-Observation MDPs using Spectral Methods
Reinforcement learning (RL) in Markov decision processes (MDPs) with large state spaces is a challenging problem. The performance of standard RL algorithms degrades drastically with the dimensionality of state space. However, in practice, these large MDPs typically incorporate a latent or hidden low-dimensional structure. In this paper, we study the setting of rich-observation Markov decision processes (ROMDP), where there are a small number of hidden states which possess an injective mapping to the observation states. In other words, every observation state is generated through a single hidden state, and this mapping is unknown a priori. We introduce a spectral decomposition method that consistently learns this mapping, and more importantly, achieves it with low regret. The estimated mapping is integrated into an optimistic RL algorithm (UCRL), which operates on the estimated hidden space. We derive finite-time regret bounds for our algorithm with a weak dependence on the dimensionality of the observed space. In fact, our algorithm asymptotically achieves the same average regret as the oracle UCRL algorithm, which has the knowledge of the mapping from hidden to observed spaces. Thus, we derive an efficient spectral RL algorithm for ROMDPs.
[ "Kamyar Azizzadenesheli, Alessandro Lazaric, Animashree Anandkumar", "['Kamyar Azizzadenesheli' 'Alessandro Lazaric' 'Animashree Anandkumar']" ]
cs.LG
null
1611.03934
null
null
http://arxiv.org/pdf/1611.03934v1
2016-11-12T01:53:54Z
2016-11-12T01:53:54Z
Personalized Donor-Recipient Matching for Organ Transplantation
Organ transplants can improve the life expectancy and quality of life for the recipient but carry the risk of serious post-operative complications, such as septic shock and organ rejection. The probability of a successful transplant depends in a very subtle fashion on compatibility between the donor and the recipient, but current medical practice is short of domain knowledge regarding the complex nature of recipient-donor compatibility. Hence, a data-driven approach for learning compatibility has the potential for significant improvements in match quality. This paper proposes a novel system (ConfidentMatch) that is trained using data from electronic health records. ConfidentMatch predicts the success of an organ transplant (in terms of the 3 year survival rates) on the basis of clinical and demographic traits of the donor and recipient. ConfidentMatch captures the heterogeneity of the donor and recipient traits by optimally dividing the feature space into clusters and constructing different optimal predictive models for each cluster. The system controls the complexity of the learned predictive model in a way that allows for assuring more granular and confident predictions for a larger number of potential recipient-donor pairs, thereby ensuring that predictions are "personalized" and tailored to individual characteristics to the finest possible granularity. Experiments conducted on the UNOS heart transplant dataset show the superiority of the prognostic value of ConfidentMatch to other competing benchmarks; ConfidentMatch can provide predictions of success with 95% confidence for 5,489 patients of a total population of 9,620 patients, which corresponds to 410 more patients than the most competitive benchmark algorithm (DeepBoost).
[ "['Jinsung Yoon' 'Ahmed M. Alaa' 'Martin Cadeiras' 'Mihaela van der Schaar']", "Jinsung Yoon, Ahmed M. Alaa, Martin Cadeiras, and Mihaela van der\n Schaar" ]
cs.LG cs.CR
null
1611.03941
null
null
http://arxiv.org/pdf/1611.03941v2
2017-02-25T00:56:26Z
2016-11-12T02:39:41Z
Anomaly Detection in Bitcoin Network Using Unsupervised Learning Methods
The problem of anomaly detection has been studied for a long time. In short, anomalies are abnormal or unlikely things. In financial networks, thieves and illegal activities are often anomalous in nature. Members of a network want to detect anomalies as soon as possible to prevent them from harming the network's community and integrity. Many Machine Learning techniques have been proposed to deal with this problem; some results appear to be quite promising but there is no obvious superior method. In this paper, we consider anomaly detection particular to the Bitcoin transaction network. Our goal is to detect which users and transactions are the most suspicious; in this case, anomalous behavior is a proxy for suspicious behavior. To this end, we use three unsupervised learning methods including k-means clustering, Mahalanobis distance, and Unsupervised Support Vector Machine (SVM) on two graphs generated by the Bitcoin transaction network: one graph has users as nodes, and the other has transactions as nodes.
[ "['Thai Pham' 'Steven Lee']", "Thai Pham and Steven Lee" ]
stat.CO cs.LG stat.ML
null
1611.03969
null
null
http://arxiv.org/pdf/1611.03969v1
2016-11-12T08:18:38Z
2016-11-12T08:18:38Z
An Introduction to MM Algorithms for Machine Learning and Statistical Estimation
MM (majorization--minimization) algorithms are an increasingly popular tool for solving optimization problems in machine learning and statistical estimation. This article introduces the MM algorithm framework in general and via three popular example applications: Gaussian mixture regressions, multinomial logistic regressions, and support vector machines. Specific algorithms for the three examples are derived and numerical demonstrations are presented. Theoretical and practical aspects of MM algorithm design are discussed.
[ "Hien D. Nguyen", "['Hien D. Nguyen']" ]
cs.LG stat.ML
null
1611.03981
null
null
http://arxiv.org/pdf/1611.03981v1
2016-11-12T10:23:53Z
2016-11-12T10:23:53Z
Dual Teaching: A Practical Semi-supervised Wrapper Method
Semi-supervised wrapper methods are concerned with building effective supervised classifiers from partially labeled data. Though previous works have succeeded in some fields, it remains difficult to apply semi-supervised wrapper methods in practice because the assumptions those methods rely on tend to be unrealistic. For practical use, this paper proposes a novel semi-supervised wrapper method, Dual Teaching, whose assumptions are easy to set up. Dual Teaching adopts two external classifiers to estimate the false positives and false negatives of the base learner. Provided that the recall of every external classifier is greater than zero and the sum of their precisions is greater than one, Dual Teaching will train a base learner from partially labeled data as effectively as a classifier trained on fully labeled data. The effectiveness of Dual Teaching is proved in both theory and practice.
[ "['Fuqaing Liu' 'Chenwei Deng' 'Fukun Bi' 'Yiding Yang']", "Fuqaing Liu, Chenwei Deng, Fukun Bi, Yiding Yang" ]
stat.ML cs.LG cs.NA
null
1611.03993
null
null
http://arxiv.org/pdf/1611.03993v2
2017-02-23T09:09:58Z
2016-11-12T11:58:17Z
Riemannian Tensor Completion with Side Information
By restricting the iterates to a nonlinear manifold, the recently proposed Riemannian optimization methods prove to be both efficient and effective in low-rank tensor completion problems. However, existing methods fail to exploit the easily accessible side information, due to their format mismatch. Consequently, there is still room for improvement in such methods. To fill the gap, in this paper, a novel Riemannian model is proposed to organically integrate the original model and the side information by overcoming their inconsistency. For this particular model, an efficient Riemannian conjugate gradient descent solver is devised based on a new metric that captures the curvature of the objective. Numerical experiments suggest that our solver is more accurate than the state-of-the-art without compromising the efficiency.
[ "Tengfei Zhou, Hui Qian, Zebang Shen, Congfu Xu", "['Tengfei Zhou' 'Hui Qian' 'Zebang Shen' 'Congfu Xu']" ]
cs.LG
null
1611.04049
null
null
http://arxiv.org/pdf/1611.04049v1
2016-11-12T22:08:15Z
2016-11-12T22:08:15Z
Prognostics of Surgical Site Infections using Dynamic Health Data
Surgical Site Infection (SSI) is a national priority in healthcare research. Much research attention has been attracted to develop better SSI risk prediction models. However, most of the existing SSI risk prediction models are built on static risk factors such as comorbidities and operative factors. In this paper, we investigate the use of the dynamic wound data for SSI risk prediction. There have been emerging mobile health (mHealth) tools that can closely monitor the patients and generate continuous measurements of many wound-related variables and other evolving clinical variables. Since existing prediction models of SSI have quite limited capacity to utilize the evolving clinical data, we develop the corresponding solution to equip these mHealth tools with decision-making capabilities for SSI prediction with a seamless assembly of several machine learning models to tackle the analytic challenges arising from the spatial-temporal data. The basic idea is to exploit the low-rank property of the spatial-temporal data via the bilinear formulation, and further enhance it with automatic missing data imputation by the matrix completion technique. We derive efficient optimization algorithms to implement these models and demonstrate the superior performances of our new predictive model on a real-world dataset of SSI, compared to a range of state-of-the-art methods.
[ "Chuyang Ke, Yan Jin, Heather Evans, Bill Lober, Xiaoning Qian, Ji Liu,\n Shuai Huang", "['Chuyang Ke' 'Yan Jin' 'Heather Evans' 'Bill Lober' 'Xiaoning Qian'\n 'Ji Liu' 'Shuai Huang']" ]
stat.ML cs.LG
null
1611.04051
null
null
http://arxiv.org/pdf/1611.04051v1
2016-11-12T22:54:45Z
2016-11-12T22:54:45Z
GANS for Sequences of Discrete Elements with the Gumbel-softmax Distribution
Generative Adversarial Networks (GAN) have limitations when the goal is to generate sequences of discrete elements. The reason for this is that samples from a distribution on discrete objects such as the multinomial are not differentiable with respect to the distribution parameters. This problem can be avoided by using the Gumbel-softmax distribution, which is a continuous approximation to a multinomial distribution parameterized in terms of the softmax function. In this work, we evaluate the performance of GANs based on recurrent neural networks with Gumbel-softmax output distributions in the task of generating sequences of discrete elements.
[ "['Matt J. Kusner' 'José Miguel Hernández-Lobato']", "Matt J. Kusner, Jos\\'e Miguel Hern\\'andez-Lobato" ]
stat.ML cs.LG
10.1109/TMI.2017.2650960
1611.04069
null
null
http://arxiv.org/abs/1611.04069v2
2017-01-09T10:27:05Z
2016-11-13T02:21:07Z
Low-rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging
Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method.
[ "['Saiprasad Ravishankar' 'Brian E. Moore' 'Raj Rao Nadakuditi'\n 'Jeffrey A. Fessler']", "Saiprasad Ravishankar, Brian E. Moore, Raj Rao Nadakuditi, and Jeffrey\n A. Fessler" ]
cs.LG
null
1611.04088
null
null
http://arxiv.org/pdf/1611.04088v1
2016-11-13T05:52:58Z
2016-11-13T05:52:58Z
Batched Gaussian Process Bandit Optimization via Determinantal Point Processes
Gaussian Process bandit optimization has emerged as a powerful tool for optimizing noisy black box functions. One example in machine learning is hyper-parameter optimization where each evaluation of the target function requires training a model which may involve days or even weeks of computation. Most methods for this so-called "Bayesian optimization" only allow sequential exploration of the parameter space. However, it is often desirable to propose batches or sets of parameter values to explore simultaneously, especially when there are large parallel processing facilities at our disposal. Batch methods require modeling the interaction between the different evaluations in the batch, which can be expensive in complex scenarios. In this paper, we propose a new approach for parallelizing Bayesian optimization by modeling the diversity of a batch via Determinantal point processes (DPPs) whose kernels are learned automatically. This allows us to generalize a previous result as well as prove better regret bounds based on DPP sampling. Our experiments on a variety of synthetic and real-world robotics and hyper-parameter optimization tasks indicate that our DPP-based methods, especially those based on DPP sampling, outperform state-of-the-art methods.
[ "Tarun Kathuria, Amit Deshpande, Pushmeet Kohli", "['Tarun Kathuria' 'Amit Deshpande' 'Pushmeet Kohli']" ]
stat.ML cs.LG
null
1611.04149
null
null
http://arxiv.org/pdf/1611.04149v1
2016-11-13T16:01:10Z
2016-11-13T16:01:10Z
Accelerated Variance Reduced Block Coordinate Descent
Algorithms with fast convergence, a small number of data accesses, and low per-iteration complexity are particularly favorable in the big data era, due to the demand for obtaining \emph{highly accurate solutions} to problems with \emph{a large number of samples} in \emph{ultra-high} dimensional space. Existing algorithms lack at least one of these qualities, and thus are inefficient in handling such a big data challenge. In this paper, we propose a method enjoying all these merits with an accelerated convergence rate $O(\frac{1}{k^2})$. Empirical studies on large-scale datasets with more than one million features are conducted to show the effectiveness of our method in practice.
[ "Zebang Shen, Hui Qian, Chao Zhang, and Tengfei Zhou", "['Zebang Shen' 'Hui Qian' 'Chao Zhang' 'Tengfei Zhou']" ]
cs.LG stat.ML
null
1611.04199
null
null
http://arxiv.org/pdf/1611.04199v1
2016-11-13T22:53:51Z
2016-11-13T22:53:51Z
Realistic risk-mitigating recommendations via inverse classification
Inverse classification, the process of making meaningful perturbations to a test point such that it is more likely to have a desired classification, has previously been addressed using data from a single static point in time. Such an approach yields inflated probability estimates, stemming from an implicitly made assumption that recommendations are implemented instantaneously. We propose using longitudinal data to alleviate such issues in two ways. First, we use past outcome probabilities as features in the present. Use of such past probabilities ties historical behavior to the present, allowing for more information to be taken into account when making initial probability estimates and subsequently performing inverse classification. Second, following the application of inverse classification, optimized instances' unchangeable features (e.g., age) are updated using values from the next longitudinal time period. Optimized test instance probabilities are then reassessed. Updating the unchangeable features in this manner reflects the notion that improvements in outcome likelihood, which result from following the inverse classification recommendations, do not materialize instantaneously. As our experiments demonstrate, more realistic estimates of probability can be obtained by factoring in such considerations.
[ "Michael T. Lash and W. Nick Street", "['Michael T. Lash' 'W. Nick Street']" ]
cs.LG cs.CV cs.RO
null
1611.04201
null
null
http://arxiv.org/pdf/1611.04201v4
2017-06-08T07:21:39Z
2016-11-13T23:08:42Z
CAD2RL: Real Single-Image Flight without a Single Real Image
Deep reinforcement learning has emerged as a promising and powerful technique for automatically acquiring control policies that can process raw sensory inputs, such as images, and perform complex behaviors. However, extending deep RL to real-world robotic tasks has proven challenging, particularly in safety-critical domains such as autonomous flight, where a trial-and-error learning process is often impractical. In this paper, we explore the following question: can we train vision-based navigation policies entirely in simulation, and then transfer them into the real world to achieve real-world flight without a single real training image? We propose a learning method that we call CAD$^2$RL, which can be used to perform collision-free indoor flight in the real world while being trained entirely on 3D CAD models. Our method uses single RGB images from a monocular camera, without needing to explicitly reconstruct the 3D geometry of the environment or perform explicit motion planning. Our learned collision avoidance policy is represented by a deep convolutional neural network that directly processes raw monocular images and outputs velocity commands. This policy is trained entirely on simulated images, with a Monte Carlo policy evaluation algorithm that directly optimizes the network's ability to produce collision-free flight. By highly randomizing the rendering settings for our simulated training set, we show that we can train a policy that generalizes to the real world, without requiring the simulator to be particularly realistic or high-fidelity. We evaluate our method by flying a real quadrotor through indoor environments, and further evaluate the design choices in our simulator through a series of ablation studies on depth prediction. For supplementary video see: https://youtu.be/nXBWmzFrj5s
[ "['Fereshteh Sadeghi' 'Sergey Levine']", "Fereshteh Sadeghi and Sergey Levine" ]
stat.ML cs.LG
null
1611.04218
null
null
http://arxiv.org/pdf/1611.04218v1
2016-11-14T01:17:14Z
2016-11-14T01:17:14Z
Preference Completion from Partial Rankings
We propose a novel and efficient algorithm for the collaborative preference completion problem, which involves jointly estimating individualized rankings for a set of entities over a shared set of items, based on a limited number of observed affinity values. Our approach exploits the observation that while preferences are often recorded as numerical scores, the predictive quantity of interest is the underlying rankings. Thus, attempts to closely match the recorded scores may lead to overfitting and impair generalization performance. Instead, we propose an estimator that directly fits the underlying preference order, combined with nuclear norm constraints to encourage low-rank parameters. Besides (approximate) correctness of the ranking order, the proposed estimator makes no generative assumption on the numerical scores of the observations. One consequence is that the proposed estimator can fit any consistent partial ranking over a subset of the items represented as a directed acyclic graph (DAG), generalizing standard techniques that can only fit preference scores. Despite this generality, for supervision representing total or blockwise total orders, the computational complexity of our algorithm is within a $\log$ factor of the standard algorithms for nuclear norm regularization based estimates for matrix completion. We further show promising empirical results for a novel and challenging application of collaborative ranking of the associations between brain regions and cognitive neuroscience terms.
[ "['Suriya Gunasekar' 'Oluwasanmi Koyejo' 'Joydeep Ghosh']", "Suriya Gunasekar, Oluwasanmi Koyejo, Joydeep Ghosh" ]
cs.LG
null
1611.04228
null
null
http://arxiv.org/pdf/1611.04228v1
2016-11-14T02:28:13Z
2016-11-14T02:28:13Z
Learning Sparse, Distributed Representations using the Hebbian Principle
The "fire together, wire together" Hebbian model is a central principle for learning in neuroscience, but surprisingly, it has found limited applicability in modern machine learning. In this paper, we take a first step towards bridging this gap, by developing flavors of competitive Hebbian learning which produce sparse, distributed neural codes using online adaptation with minimal tuning. We propose an unsupervised algorithm, termed Adaptive Hebbian Learning (AHL). We illustrate the distributed nature of the learned representations via output entropy computations for synthetic data, and demonstrate superior performance, compared to standard alternatives such as autoencoders, in training a deep convolutional net on standard image datasets.
[ "['Aseem Wadhwa' 'Upamanyu Madhow']", "Aseem Wadhwa and Upamanyu Madhow" ]
cs.LG cs.NE stat.ML
null
1611.04231
null
null
http://arxiv.org/pdf/1611.04231v3
2018-07-20T04:38:23Z
2016-11-14T02:44:18Z
Identity Matters in Deep Learning
An emerging design principle in deep learning is that each layer of a deep artificial neural network should be able to easily express the identity transformation. This idea not only motivated various normalization techniques, such as \emph{batch normalization}, but was also key to the immense success of \emph{residual networks}. In this work, we put the principle of \emph{identity parameterization} on a more solid theoretical footing alongside further empirical progress. We first give a strikingly simple proof that arbitrarily deep linear residual networks have no spurious local optima. The same result for linear feed-forward networks in their standard parameterization is substantially more delicate. Second, we show that residual networks with ReLu activations have universal finite-sample expressivity in the sense that the network can represent any function of its sample provided that the model has more parameters than the sample size. Directly inspired by our theory, we experiment with a radically simple residual architecture consisting of only residual convolutional layers and ReLu activations, but no batch normalization, dropout, or max pool. Our model improves significantly on previous all-convolutional networks on the CIFAR10, CIFAR100, and ImageNet classification benchmarks.
[ "Moritz Hardt and Tengyu Ma", "['Moritz Hardt' 'Tengyu Ma']" ]
cs.LG
null
1611.04273
null
null
http://arxiv.org/pdf/1611.04273v2
2017-06-06T22:36:35Z
2016-11-14T07:36:22Z
On the Quantitative Analysis of Decoder-Based Generative Models
The past several years have seen remarkable progress in generative models which produce convincing samples of images and other modalities. A shared component of many powerful generative models is a decoder network, a parametric deep neural net that defines a generative distribution. Examples include variational autoencoders, generative adversarial networks, and generative moment matching networks. Unfortunately, it can be difficult to quantify the performance of these models because of the intractability of log-likelihood estimation, and inspecting samples can be misleading. We propose to use Annealed Importance Sampling for evaluating log-likelihoods for decoder-based models and validate its accuracy using bidirectional Monte Carlo. The evaluation code is provided at https://github.com/tonywu95/eval_gen. Using this technique, we analyze the performance of decoder-based models, the effectiveness of existing log-likelihood estimators, the degree of overfitting, and the degree to which these models miss important modes of the data distribution.
[ "Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, Roger Grosse", "['Yuhuai Wu' 'Yuri Burda' 'Ruslan Salakhutdinov' 'Roger Grosse']" ]
cs.CL cs.LG cs.NE
null
1611.04361
null
null
http://arxiv.org/pdf/1611.04361v1
2016-11-14T12:36:07Z
2016-11-14T12:36:07Z
Attending to Characters in Neural Sequence Labeling Models
Sequence labeling architectures use word embeddings for capturing similarity, but suffer when handling previously unseen or rare words. We investigate character-level extensions to such models and propose a novel architecture for combining alternative word representations. By using an attention mechanism, the model is able to dynamically decide how much information to use from a word- or character-level component. We evaluated different architectures on a range of sequence labeling datasets, and character-level extensions were found to improve performance on every benchmark. In addition, the proposed attention-based architecture delivered the best results even with a smaller number of trainable parameters.
[ "Marek Rei, Gamal K.O. Crichton, Sampo Pyysalo", "['Marek Rei' 'Gamal K. O. Crichton' 'Sampo Pyysalo']" ]
stat.CO cs.LG stat.ML
null
1611.04416
null
null
http://arxiv.org/pdf/1611.04416v1
2016-11-14T15:21:23Z
2016-11-14T15:21:23Z
On numerical approximation schemes for expectation propagation
Several numerical approximation strategies for the expectation-propagation algorithm are studied in the context of large-scale learning: the Laplace method, a faster variant of it, Gaussian quadrature, and a deterministic version of variational sampling (i.e., combining quadrature with variational approximation). Experiments in training linear binary classifiers show that the expectation-propagation algorithm converges best using variational sampling, while it also converges well using Laplace-style methods with smooth factors but tends to be unstable with non-differentiable ones. Gaussian quadrature yields unstable behavior or convergence to a sub-optimal solution in most experiments.
[ "Alexis Roche", "['Alexis Roche']" ]
stat.ML cs.AI cs.LG cs.NE stat.ME
null
1611.04488
null
null
null
null
null
Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy
We propose a method to optimize the representation and distinguishability of samples from two probability distributions, by maximizing the estimated power of a statistical test based on the maximum mean discrepancy (MMD). This optimized MMD is applied to the setting of unsupervised learning by generative adversarial networks (GAN), in which a model attempts to generate realistic samples, and a discriminator attempts to tell these apart from data samples. In this context, the MMD may be used in two roles: first, as a discriminator, either directly on the samples, or on features of the samples. Second, the MMD can be used to evaluate the performance of a generative model, by testing the model's samples against a reference data set. In the latter role, the optimized MMD is particularly helpful, as it gives an interpretable indication of how the model and data distributions differ, even in cases where individual model samples are not easily distinguished either by eye or by classifier.
[ "Danica J. Sutherland, Hsiao-Yu Tung, Heiko Strathmann, Soumyajit De,\n Aaditya Ramdas, Alex Smola, Arthur Gretton" ]
stat.ML cs.LG
null
1611.04499
null
null
http://arxiv.org/pdf/1611.04499v2
2017-10-31T09:25:13Z
2016-11-14T17:54:28Z
Post Training in Deep Learning with Last Kernel
One of the main challenges of deep learning methods is the choice of an appropriate training strategy. In particular, additional steps, such as unsupervised pre-training, have been shown to greatly improve the performances of deep structures. In this article, we propose an extra training step, called post-training, which only optimizes the last layer of the network. We show that this procedure can be analyzed in the context of kernel theory, with the first layers computing an embedding of the data and the last layer a statistical model to solve the task based on this embedding. This step makes sure that the embedding, or representation, of the data is used in the best possible way for the considered task. This idea is then tested on multiple architectures with various data sets, showing that it consistently provides a boost in performance.
[ "['Thomas Moreau' 'Julien Audiffren']", "Thomas Moreau and Julien Audiffren" ]
stat.ML cs.LG cs.NE
null
1611.04500
null
null
http://arxiv.org/pdf/1611.04500v3
2017-02-24T01:54:59Z
2016-11-14T17:55:34Z
Deep Learning with Sets and Point Clouds
We introduce a simple permutation equivariant layer for deep learning with set structure. This type of layer, obtained by parameter-sharing, has a simple implementation and linear-time complexity in the size of each set. We use deep permutation-invariant networks to perform point-cloud classification and MNIST-digit summation, where in both cases the output is invariant to permutations of the input. In a semi-supervised setting, where the goal is to make predictions for each instance within a set, we demonstrate the usefulness of this type of layer in set-outlier detection as well as semi-supervised learning with clustering side-information.
[ "Siamak Ravanbakhsh and Jeff Schneider and Barnabas Poczos", "['Siamak Ravanbakhsh' 'Jeff Schneider' 'Barnabas Poczos']" ]
cs.LG stat.ML
null
1611.04520
null
null
http://arxiv.org/pdf/1611.04520v2
2017-03-06T21:03:03Z
2016-11-14T19:04:58Z
Normalizing the Normalizers: Comparing and Extending Network Normalization Schemes
Normalization techniques have only recently begun to be exploited in supervised learning tasks. Batch normalization exploits mini-batch statistics to normalize the activations. This was shown to speed up training and result in better models. However, its success has been very limited when dealing with recurrent neural networks. On the other hand, layer normalization normalizes the activations across all activities within a layer. This was shown to work well in the recurrent setting. In this paper we propose a unified view of normalization techniques, as forms of divisive normalization, which includes layer and batch normalization as special cases. Our second contribution is the finding that a small modification to these normalization schemes, in conjunction with a sparse regularizer on the activations, leads to significant benefits over standard normalization techniques. We demonstrate the effectiveness of our unified divisive normalization framework in the context of convolutional neural nets and recurrent neural networks, showing improvements over baselines in image classification, language modeling as well as super-resolution.
[ "Mengye Ren, Renjie Liao, Raquel Urtasun, Fabian H. Sinz, Richard S.\n Zemel", "['Mengye Ren' 'Renjie Liao' 'Raquel Urtasun' 'Fabian H. Sinz'\n 'Richard S. Zemel']" ]
quant-ph cs.LG stat.ML
null
1611.04528
null
null
http://arxiv.org/pdf/1611.04528v1
2016-11-14T19:15:57Z
2016-11-14T19:15:57Z
Benchmarking Quantum Hardware for Training of Fully Visible Boltzmann Machines
Quantum annealing (QA) is a hardware-based heuristic optimization and sampling method applicable to discrete undirected graphical models. While similar to simulated annealing, QA relies on quantum, rather than thermal, effects to explore complex search spaces. For many classes of problems, QA is known to offer computational advantages over simulated annealing. Here we report on the ability of recent QA hardware to accelerate training of fully visible Boltzmann machines. We characterize the sampling distribution of QA hardware, and show that in many cases, the quantum distributions differ significantly from classical Boltzmann distributions. In spite of this difference, training (which seeks to match data and model statistics) using standard classical gradient updates is still effective. We investigate the use of QA for seeding Markov chains as an alternative to contrastive divergence (CD) and persistent contrastive divergence (PCD). Using $k=50$ Gibbs steps, we show that for problems with high-energy barriers between modes, QA-based seeds can improve upon chains with CD and PCD initializations. For these hard problems, QA gradient estimates are more accurate, and allow for faster learning. Furthermore, and interestingly, even the case of raw QA samples (that is, $k=0$) achieved similar improvements. We argue that this relates to the fact that we are training a quantum rather than classical Boltzmann distribution in this case. The learned parameters give rise to hardware QA distributions closely approximating classical Boltzmann distributions that are hard to train with CD/PCD.
[ "Dmytro Korenkevych, Yanbo Xue, Zhengbing Bian, Fabian Chudak, William\n G. Macready, Jason Rolfe, Evgeny Andriyash", "['Dmytro Korenkevych' 'Yanbo Xue' 'Zhengbing Bian' 'Fabian Chudak'\n 'William G. Macready' 'Jason Rolfe' 'Evgeny Andriyash']" ]
cs.DS cs.AI cs.LG
null
1611.04535
null
null
http://arxiv.org/pdf/1611.04535v4
2018-10-16T16:07:08Z
2016-11-14T19:22:21Z
Learning-Theoretic Foundations of Algorithm Configuration for Combinatorial Partitioning Problems
Max-cut, clustering, and many other partitioning problems that are of significant importance to machine learning and other scientific fields are NP-hard, a reality that has motivated researchers to develop a wealth of approximation algorithms and heuristics. Although the best algorithm to use typically depends on the specific application domain, a worst-case analysis is often used to compare algorithms. This may be misleading if worst-case instances occur infrequently, and thus there is a demand for optimization methods which return the algorithm configuration best suited for the given application's typical inputs. We address this problem for clustering, max-cut, and other partitioning problems, such as integer quadratic programming, by designing computationally efficient and sample efficient learning algorithms which receive samples from an application-specific distribution over problem instances and learn a partitioning algorithm with high expected performance. Our algorithms learn over common integer quadratic programming and clustering algorithm families: SDP rounding algorithms and agglomerative clustering algorithms with dynamic programming. For our sample complexity analysis, we provide tight bounds on the pseudo-dimension of these algorithm classes, and show that, surprisingly, even for classes of algorithms parameterized by a single parameter, the pseudo-dimension is superconstant. In this way, our work both contributes to the foundations of algorithm configuration and pushes the boundaries of learning theory, since the algorithm classes we analyze consist of multi-stage optimization procedures and are significantly more complex than classes typically studied in learning theory.
[ "Maria-Florina Balcan, Vaishnavh Nagarajan, Ellen Vitercik, and Colin\n White", "['Maria-Florina Balcan' 'Vaishnavh Nagarajan' 'Ellen Vitercik'\n 'Colin White']" ]
stat.ML cs.LG
null
1611.04561
null
null
http://arxiv.org/pdf/1611.04561v1
2016-11-14T20:34:29Z
2016-11-14T20:34:29Z
Splitting matters: how monotone transformation of predictor variables may improve the predictions of decision tree models
It is widely believed that the prediction accuracy of decision tree models is invariant under any strictly monotone transformation of the individual predictor variables. However, this statement may be false when predicting new observations with values that were not seen in the training-set and are close to the location of the split point of a tree rule. The sensitivity of the prediction error to the split point interpolation is high when the split point of the tree is estimated based on very few observations, reaching 9% misclassification error when only 10 observations are used for constructing a split, and shrinking to 1% when relying on 100 observations. This study compares the performance of alternative methods for split point interpolation and concludes that the best choice is taking the mid-point between the two closest points to the split point of the tree. Furthermore, if the (continuous) distribution of the predictor variable is known, then using its probability integral for transforming the variable ("quantile transformation") will reduce the model's interpolation error by up to about a half on average. Accordingly, this study provides guidelines for both developers and users of decision tree models (including bagging and random forest).
[ "['Tal Galili' 'Isaac Meilijson']", "Tal Galili, Isaac Meilijson" ]
cs.LG
null
1611.04578
null
null
http://arxiv.org/pdf/1611.04578v1
2016-11-14T20:55:33Z
2016-11-14T20:55:33Z
Earliness-Aware Deep Convolutional Networks for Early Time Series Classification
We present Earliness-Aware Deep Convolutional Networks (EA-ConvNets), an end-to-end deep learning framework, for early classification of time series data. Unlike most existing methods for early classification of time series data, that are designed to solve this problem under the assumption of the availability of a good set of pre-defined (often hand-crafted) features, our framework can jointly perform feature learning (by learning a deep hierarchy of \emph{shapelets} capturing the salient characteristics in each time series), along with a dynamic truncation model to help our deep feature learning architecture focus on the early parts of each time series. Consequently, our framework is able to make highly reliable early predictions, outperforming various state-of-the-art methods for early time series classification, while also being competitive when compared to the state-of-the-art time series classification algorithms that work with \emph{fully observed} time series data. To the best of our knowledge, the proposed framework is the first to perform data-driven (deep) feature learning in the context of early classification of time series data. We perform a comprehensive set of experiments, on several benchmark data sets, which demonstrate that our method yields significantly better predictions than various state-of-the-art methods designed for early time series classification. In addition to obtaining high accuracies, our experiments also show that the learned deep shapelets based features are also highly interpretable and can help gain better understanding of the underlying characteristics of time series data.
[ "Wenlin Wang, Changyou Chen, Wenqi Wang, Piyush Rai, Lawrence Carin", "['Wenlin Wang' 'Changyou Chen' 'Wenqi Wang' 'Piyush Rai' 'Lawrence Carin']" ]
cs.LG
null
1611.04581
null
null
http://arxiv.org/pdf/1611.04581v1
2016-11-14T20:59:54Z
2016-11-14T20:59:54Z
How to scale distributed deep learning?
Training time on large datasets for deep neural networks is the principal workflow bottleneck in a number of important applications of deep learning, such as object classification and detection in automatic driver assistance systems (ADAS). To minimize training time, the training of a deep neural network must be scaled beyond a single machine to as many machines as possible by distributing the optimization method used for training. While a number of approaches have been proposed for distributed stochastic gradient descent (SGD), at the current time synchronous approaches to distributed SGD appear to be showing the greatest performance at large scale. Synchronous scaling of SGD suffers from the need to synchronize all processors on each gradient step and is not resilient in the face of failing or lagging processors. In asynchronous approaches using parameter servers, training is slowed by contention to the parameter server. In this paper we compare the convergence of synchronous and asynchronous SGD for training a modern ResNet network architecture on the ImageNet classification problem. We also propose an asynchronous method, gossiping SGD, that aims to retain the positive features of both systems by replacing the all-reduce collective operation of synchronous training with a gossip aggregation algorithm. We find, perhaps counterintuitively, that asynchronous SGD, including both elastic averaging and gossiping, converges faster at fewer nodes (up to about 32 nodes), whereas synchronous SGD scales better to more nodes (up to about 100 nodes).
[ "['Peter H. Jin' 'Qiaochu Yuan' 'Forrest Iandola' 'Kurt Keutzer']", "Peter H. Jin, Qiaochu Yuan, Forrest Iandola, Kurt Keutzer" ]
cs.AI cs.CL cs.LG
null
1611.04642
null
null
http://arxiv.org/pdf/1611.04642v5
2018-04-22T05:22:58Z
2016-11-14T22:54:45Z
Link Prediction using Embedded Knowledge Graphs
Since large knowledge bases are typically incomplete, missing facts need to be inferred from observed facts in a task called knowledge base completion. The most successful approaches to this task have typically explored explicit paths through sequences of triples. These approaches have usually resorted to human-designed sampling procedures, since large knowledge graphs produce prohibitively large numbers of possible paths, most of which are uninformative. As an alternative approach, we propose performing a single, short sequence of interactive lookup operations on an embedded knowledge graph which has been trained through end-to-end backpropagation to be an optimized and compressed version of the initial knowledge base. Our proposed model, called Embedded Knowledge Graph Network (EKGN), achieves new state-of-the-art results on popular knowledge base completion benchmarks.
[ "['Yelong Shen' 'Po-Sen Huang' 'Ming-Wei Chang' 'Jianfeng Gao']", "Yelong Shen, Po-Sen Huang, Ming-Wei Chang, Jianfeng Gao" ]
cs.IR cs.LG
null
1611.04666
null
null
http://arxiv.org/pdf/1611.04666v1
2016-11-15T01:32:33Z
2016-11-15T01:32:33Z
A Generic Coordinate Descent Framework for Learning from Implicit Feedback
In recent years, interest in recommender research has shifted from explicit feedback towards implicit feedback data. A diversity of complex models has been proposed for a wide variety of applications. Despite this, learning from implicit feedback is still computationally challenging. So far, most work relies on stochastic gradient descent (SGD) solvers which are easy to derive, but in practice challenging to apply, especially for tasks with many items. For the simple matrix factorization model, an efficient coordinate descent (CD) solver has been previously proposed. However, efficient CD approaches have not been derived for more complex models. In this paper, we provide a new framework for deriving efficient CD algorithms for complex recommender models. We identify and introduce the property of k-separable models. We show that k-separability is a sufficient property to allow efficient optimization of implicit recommender problems with CD. We illustrate this framework on a variety of state-of-the-art models including factorization machines and Tucker decomposition. To summarize, our work provides the theory and building blocks to derive efficient implicit CD algorithms for complex recommender models.
[ "Immanuel Bayer, Xiangnan He, Bhargav Kanagal, Steffen Rendle", "['Immanuel Bayer' 'Xiangnan He' 'Bhargav Kanagal' 'Steffen Rendle']" ]
cs.LG
null
1611.04686
null
null
http://arxiv.org/pdf/1611.04686v1
2016-11-15T03:15:46Z
2016-11-15T03:15:46Z
Robust Matrix Regression
Modern technologies are producing datasets with complex intrinsic structures that can be naturally represented as matrices instead of vectors. To preserve the latent data structures during processing, modern regression approaches incorporate the low-rank property into the model and achieve satisfactory performance for certain applications. These approaches all assume that both predictors and labels for each pair of data within the training set are accurate. However, in real-world applications, it is common to see the training data contaminated by noise, which can affect the robustness of these matrix regression methods. In this paper, we address this issue by introducing a novel robust matrix regression method. We also derive efficient proximal algorithms for model training. To evaluate the performance of our method, we apply it to real-world applications with comparative studies. Our method achieves state-of-the-art performance, which shows its effectiveness and practical value.
[ "Hang Zhang, Fengyuan Zhu and Shixin Li", "['Hang Zhang' 'Fengyuan Zhu' 'Shixin Li']" ]
cs.AI cs.LG
null
1611.04717
null
null
http://arxiv.org/pdf/1611.04717v3
2017-12-05T16:44:47Z
2016-11-15T06:42:24Z
#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning
Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various high-dimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows their occurrences to be counted with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domain-dependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.
[ "['Haoran Tang' 'Rein Houthooft' 'Davis Foote' 'Adam Stooke' 'Xi Chen'\n 'Yan Duan' 'John Schulman' 'Filip De Turck' 'Pieter Abbeel']", "Haoran Tang, Rein Houthooft, Davis Foote, Adam Stooke, Xi Chen, Yan\n Duan, John Schulman, Filip De Turck, Pieter Abbeel" ]
cs.CR cs.LG
null
1611.04786
null
null
http://arxiv.org/pdf/1611.04786v1
2016-11-15T10:54:58Z
2016-11-15T10:54:58Z
AdversariaLib: An Open-source Library for the Security Evaluation of Machine Learning Algorithms Under Attack
We present AdversariaLib, an open-source Python library for the security evaluation of machine learning (ML) algorithms against carefully targeted attacks. It supports the implementation of several attacks proposed thus far in the adversarial learning literature, allows for the evaluation of a wide range of ML algorithms, runs on multiple platforms, and supports multi-processing. The library has a modular architecture that makes it easy to use and to extend by implementing novel attacks and countermeasures. It relies on other widely-used open-source ML libraries, including scikit-learn and FANN. Classification algorithms are implemented and optimized in C/C++, allowing for a fast evaluation of the simulated attacks. The package is distributed under the GNU General Public License v3, and it is available for download at http://sourceforge.net/projects/adversarialib.
[ "Igino Corona and Battista Biggio and Davide Maiorca", "['Igino Corona' 'Battista Biggio' 'Davide Maiorca']" ]
cs.LG math.OC stat.ML
null
1611.04831
null
null
http://arxiv.org/pdf/1611.04831v1
2016-11-15T13:56:24Z
2016-11-15T13:56:24Z
The Power of Normalization: Faster Evasion of Saddle Points
A commonly used heuristic in non-convex optimization is Normalized Gradient Descent (NGD), a variant of gradient descent in which only the direction of the gradient is taken into account and its magnitude is ignored. We analyze this heuristic and show that with carefully chosen parameters and noise injection, this method can provably evade saddle points. We establish the convergence of NGD to a local minimum, and demonstrate rates which improve upon the fastest known first-order algorithm due to Ge et al. (2015). The effectiveness of our method is demonstrated via an application to the problem of online tensor decomposition, a task for which saddle point evasion is known to result in convergence to global minima.
[ "['Kfir Y. Levy']", "Kfir Y. Levy" ]
cs.CV cs.LG stat.ML
null
1611.04835
null
null
http://arxiv.org/pdf/1611.04835v1
2016-11-15T14:05:43Z
2016-11-15T14:05:43Z
Multilinear Low-Rank Tensors on Graphs & Applications
We propose a new framework for the analysis of low-rank tensors which lies at the intersection of spectral graph theory and signal processing. As a first step, we present a new graph-based low-rank decomposition which approximates the classical low-rank SVD for matrices and the multilinear SVD for tensors. Then, building on this novel decomposition, we construct a general class of convex optimization problems for approximately solving low-rank tensor inverse problems, such as tensor Robust PCA. We name the overall framework 'Multilinear Low-Rank Tensors on Graphs (MLRTG)'. Our theoretical analysis shows that: 1) MLRTG rests on the notion of approximate stationarity of multi-dimensional signals on graphs, and 2) the approximation error depends on the eigen gaps of the graphs. We demonstrate applications on a wide variety of 4 artificial and 12 real tensor datasets, such as EEG, fMRI, BCI, surveillance videos, and hyperspectral images. The key aspects of our framework are the generalization of tensor concepts to non-Euclidean domains, orders-of-magnitude speed-up, a low memory requirement, and significantly enhanced performance at low SNR.
[ "['Nauman Shahid' 'Francesco Grassi' 'Pierre Vandergheynst']", "Nauman Shahid, Francesco Grassi, Pierre Vandergheynst" ]
cs.LG cs.DS
null
1611.04847
null
null
http://arxiv.org/pdf/1611.04847v3
2017-03-06T13:26:53Z
2016-11-10T10:13:10Z
The Power of Side-information in Subgraph Detection
In this work, we tackle the problem of hidden community detection. We consider Belief Propagation (BP) applied to the problem of detecting a hidden Erd\H{o}s-R\'enyi (ER) graph embedded in a larger and sparser ER graph, in the presence of side-information. We derive two related algorithms based on BP to perform subgraph detection in the presence of two kinds of side-information. The first variant of side-information consists of a set of nodes, called cues, known to be from the subgraph. The second variant of side-information consists of a set of nodes that are cues with a given probability. It was shown in past works that BP without side-information fails to detect the subgraph correctly when an effective signal-to-noise ratio (SNR) parameter falls below a threshold. In contrast, in the presence of non-trivial side-information, we show that the BP algorithm achieves asymptotically zero error for any value of the SNR parameter. We validate our results through simulations on synthetic datasets as well as on a few real world networks.
[ "['Arun Kadavankandy' 'Konstantin Avrachenkov' 'Laura Cottatellucci'\n 'Rajesh Sundaresan']", "Arun Kadavankandy (MAESTRO), Konstantin Avrachenkov (MAESTRO), Laura\n Cottatellucci, Rajesh Sundaresan (ECE)" ]
cs.CV cs.LG
10.1109/TCYB.2016.2623638
1611.04870
null
null
http://arxiv.org/abs/1611.04870v1
2016-11-15T14:50:31Z
2016-11-15T14:50:31Z
Constrained Low-Rank Learning Using Least Squares-Based Regularization
Low-rank learning has attracted much attention recently due to its efficacy in a rich variety of real-world tasks, e.g., subspace segmentation and image categorization. Most low-rank methods are incapable of capturing low-dimensional subspace for supervised learning tasks, e.g., classification and regression. This paper aims to learn both the discriminant low-rank representation (LRR) and the robust projecting subspace in a supervised manner. To achieve this goal, we cast the problem into a constrained rank minimization framework by adopting the least squares regularization. Naturally, the data label structure tends to resemble that of the corresponding low-dimensional representation, which is derived from the robust subspace projection of clean data by low-rank learning. Moreover, the low-dimensional representation of original data can be paired with some informative structure by imposing an appropriate constraint, e.g., Laplacian regularizer. Therefore, we propose a novel constrained LRR method. The objective function is formulated as a constrained nuclear norm minimization problem, which can be solved by the inexact augmented Lagrange multiplier algorithm. Extensive experiments on image classification, human pose estimation, and robust face recovery have confirmed the superiority of our method.
[ "Ping Li and Jun Yu and Meng Wang and Luming Zhang and Deng Cai and\n Xuelong Li", "['Ping Li' 'Jun Yu' 'Meng Wang' 'Luming Zhang' 'Deng Cai' 'Xuelong Li']" ]
cs.LG cs.CV cs.SD
null
1611.04871
null
null
http://arxiv.org/pdf/1611.04871v3
2017-02-18T07:18:32Z
2016-11-12T07:39:50Z
Audio Event and Scene Recognition: A Unified Approach using Strongly and Weakly Labeled Data
In this paper we propose a novel learning framework, called Supervised and Weakly Supervised Learning, whose goal is to learn simultaneously from weakly and strongly labeled data. Strongly labeled data can be understood as fully supervised data, where labels for all instances are available. In weakly supervised learning, only weakly labeled data is available, which prevents one from directly applying supervised learning methods. Our proposed framework is motivated by the fact that a small amount of strongly labeled data can give considerable improvement over weakly supervised learning alone. The primary problem domain of this paper is acoustic event and scene detection in audio recordings. We first propose a naive formulation for leveraging labeled data in both forms. We then propose a more general framework for Supervised and Weakly Supervised Learning (SWSL). Based on this general framework, we propose a graph-based approach for SWSL. Our main method is based on manifold regularization on graphs, in which we show that the unified learning problem can be formulated as a constrained optimization problem solvable by the iterative concave-convex procedure (CCCP). Our experiments show that our proposed framework can address several concerns of audio content analysis using weakly labeled data.
[ "['Anurag Kumar' 'Bhiksha Raj']", "Anurag Kumar, Bhiksha Raj" ]
stat.ML cs.LG
null
1611.04920
null
null
http://arxiv.org/pdf/1611.04920v2
2016-11-20T19:08:51Z
2016-11-15T16:26:17Z
Unsupervised Learning with Truncated Gaussian Graphical Models
Gaussian graphical models (GGMs) are widely used for statistical modeling, because of ease of inference and the ubiquitous use of the normal distribution in practical approximations. However, they are also known for their limited modeling abilities, due to the Gaussian assumption. In this paper, we introduce a novel variant of GGMs, which relaxes the Gaussian restriction and yet admits efficient inference. Specifically, we impose a bipartite structure on the GGM and govern the hidden variables by truncated normal distributions. The nonlinearity of the model is revealed by its connection to rectified linear unit (ReLU) neural networks. Meanwhile, thanks to the bipartite structure and appealing properties of truncated normals, we are able to train the models efficiently using contrastive divergence. We consider three output constructs, accounting for real-valued, binary and count data. We further extend the model to deep constructions and show that deep models can be used for unsupervised pre-training of rectifier neural networks. Extensive experimental results are provided to validate the proposed models and demonstrate their superiority over competing models.
[ "Qinliang Su, Xuejun Liao, Chunyuan Li, Zhe Gan, Lawrence Carin", "['Qinliang Su' 'Xuejun Liao' 'Chunyuan Li' 'Zhe Gan' 'Lawrence Carin']" ]
cs.LG
null
1611.04924
null
null
http://arxiv.org/pdf/1611.04924v2
2017-07-20T13:10:24Z
2016-11-15T16:36:29Z
Robust Semi-Supervised Graph Classifier Learning with Negative Edge Weights
In a semi-supervised learning scenario, (possibly noisy) partially observed labels are used as input to train a classifier, in order to assign labels to unclassified samples. In this paper, we study this classifier learning problem from a graph signal processing (GSP) perspective. Specifically, by viewing a binary classifier as a piecewise constant graph-signal in a high-dimensional feature space, we cast classifier learning as a signal restoration problem via a classical maximum a posteriori (MAP) formulation. Unlike previous graph-signal restoration works, we consider in addition edges with negative weights that signify anti-correlation between samples. One unfortunate consequence is that the graph Laplacian matrix $\mathbf{L}$ can be indefinite, and the previously proposed graph-signal smoothness prior $\mathbf{x}^T \mathbf{L} \mathbf{x}$ for a candidate signal $\mathbf{x}$ can lead to pathological solutions. In response, we derive an optimal perturbation matrix $\boldsymbol{\Delta}$, based on a fast lower-bound computation of the minimum eigenvalue of $\mathbf{L}$ via a novel application of the Haynsworth inertia additivity formula, so that $\mathbf{L} + \boldsymbol{\Delta}$ is positive semi-definite, resulting in a stable signal prior. Further, instead of forcing a hard binary decision for each sample, we define the notion of generalized smoothness on graphs that promotes ambiguity in the classifier signal. Finally, we propose an algorithm based on iterative reweighted least squares (IRLS) that solves the posed MAP problem efficiently. Extensive simulation results show that our proposed algorithm noticeably outperforms both SVM variants and graph-based classifiers using positive-edge graphs.
[ "['Gene Cheung' 'Weng-Tai Su' 'Yu Mao' 'Chia-Wen Lin']", "Gene Cheung, Weng-Tai Su, Yu Mao, and Chia-Wen Lin" ]
cs.LG stat.ML
null
1611.04967
null
null
http://arxiv.org/pdf/1611.04967v1
2016-11-15T18:10:24Z
2016-11-15T18:10:24Z
Iterative Orthogonal Feature Projection for Diagnosing Bias in Black-Box Models
Predictive models are increasingly deployed for the purpose of determining access to services such as credit, insurance, and employment. Despite potential gains in productivity and efficiency, several potential problems have yet to be addressed, particularly the potential for unintentional discrimination. We present an iterative procedure, based on orthogonal projection of input attributes, for enabling interpretability of black-box predictive models. Through this procedure, one can quantify the relative dependence of a black-box model on its input attributes. The relative significance of the inputs to a predictive model can then be used to assess the fairness (or discriminatory extent) of such a model.
[ "['Julius Adebayo' 'Lalana Kagal']", "Julius Adebayo, Lalana Kagal" ]
math.OC cs.LG stat.ML
null
1611.04982
null
null
http://arxiv.org/pdf/1611.04982v2
2017-03-08T11:05:59Z
2016-11-15T18:41:55Z
Oracle Complexity of Second-Order Methods for Finite-Sum Problems
Finite-sum optimization problems are ubiquitous in machine learning, and are commonly solved using first-order methods which rely on gradient computations. Recently, there has been growing interest in \emph{second-order} methods, which rely on both gradients and Hessians. In principle, second-order methods can require far fewer iterations than first-order methods, and hold the promise of more efficient algorithms. Although computing and manipulating Hessians is prohibitive for high-dimensional problems in general, the Hessians of individual functions in finite-sum problems can often be computed efficiently, e.g. because they possess a low-rank structure. Can second-order information indeed be used to solve such problems more efficiently? In this paper, we provide evidence that the answer -- perhaps surprisingly -- is negative, at least in terms of worst-case guarantees. However, we also discuss what additional assumptions and algorithmic approaches might potentially circumvent this negative result.
[ "Yossi Arjevani and Ohad Shamir", "['Yossi Arjevani' 'Ohad Shamir']" ]
cs.LG
null
1611.05013
null
null
http://arxiv.org/pdf/1611.05013v1
2016-11-15T20:16:27Z
2016-11-15T20:16:27Z
PixelVAE: A Latent Variable Model for Natural Images
Natural image modeling is a landmark challenge of unsupervised learning. Variational Autoencoders (VAEs) learn a useful latent representation and model global structure well but have difficulty capturing small details. PixelCNN models details very well, but lacks a latent code and is difficult to scale for capturing large structures. We present PixelVAE, a VAE model with an autoregressive decoder based on PixelCNN. Our model requires very few expensive autoregressive layers compared to PixelCNN and learns latent codes that are more compressed than a standard VAE while still capturing most non-trivial structure. Finally, we extend our model to a hierarchy of latent variables at different scales. Our model achieves state-of-the-art performance on binarized MNIST, competitive performance on 64x64 ImageNet, and high-quality samples on the LSUN bedrooms dataset.
[ "['Ishaan Gulrajani' 'Kundan Kumar' 'Faruk Ahmed' 'Adrien Ali Taiga'\n 'Francesco Visin' 'David Vazquez' 'Aaron Courville']", "Ishaan Gulrajani, Kundan Kumar, Faruk Ahmed, Adrien Ali Taiga,\n Francesco Visin, David Vazquez, Aaron Courville" ]
cs.SE cs.LG
null
1611.05083
null
null
http://arxiv.org/pdf/1611.05083v2
2016-11-18T11:37:29Z
2016-11-15T22:33:48Z
Probabilistic Failure Analysis in Model Validation & Verification
Automated fault localization is an important issue in model validation and verification, as it helps end users analyze the origin of a failure. In this work, we present early experiments with probabilistic analysis approaches to fault localization. Inspired by the Kullback-Leibler divergence from Bayesian probability theory, we propose a suspiciousness factor that computes the fault contribution of the transitions in the reachability graph of model checking, which we use to rank potentially faulty transitions. To automatically locate design faults in the simulation model of a detailed design, we propose to use a Hidden Markov Model (HMM), which provides information statistically identical to the component's real behavior. The core of this method is a fault localization algorithm that produces a ranked set of suspicious faulty components, together with a backward algorithm that computes the matching degree between the HMM and the simulation model to evaluate the confidence of the localization conclusion.
[ "['Ning Ge' 'Marc Pantel' 'Xavier Crégut']", "Ning Ge, Marc Pantel, Xavier Cr\\'egut" ]
cs.LG cs.RO cs.SY
null
1611.05095
null
null
http://arxiv.org/pdf/1611.05095v1
2016-11-15T23:31:40Z
2016-11-15T23:31:40Z
Learning Dexterous Manipulation Policies from Experience and Imitation
We explore learning-based approaches for feedback control of a dexterous five-finger hand performing non-prehensile manipulation. First, we learn local controllers that are able to perform the task starting at a predefined initial state. These controllers are constructed using trajectory optimization with respect to locally-linear time-varying models learned directly from sensor data. In some cases, we initialize the optimizer with human demonstrations collected via teleoperation in a virtual environment. We demonstrate that such controllers can perform the task robustly, both in simulation and on the physical platform, for a limited range of initial conditions around the trained starting state. We then consider two interpolation methods for generalizing to a wider range of initial conditions: deep learning, and nearest neighbors. We find that nearest neighbors achieve higher performance. Nevertheless, the neural network has its advantages: it uses only tactile and proprioceptive feedback but no visual feedback about the object (i.e. it performs the task blind) and learns a time-invariant policy. In contrast, the nearest neighbors method switches between time-varying local controllers based on the proximity of initial object states sensed via motion capture. While both generalization methods leave room for improvement, our work shows that (i) local trajectory-based controllers for complex non-prehensile manipulation tasks can be constructed from surprisingly small amounts of training data, and (ii) collections of such controllers can be interpolated to form more global controllers. Results are summarized in the supplementary video: https://youtu.be/E0wmO6deqjo
[ "Vikash Kumar, Abhishek Gupta, Emanuel Todorov and Sergey Levine", "['Vikash Kumar' 'Abhishek Gupta' 'Emanuel Todorov' 'Sergey Levine']" ]
null
null
1611.05132
null
null
http://arxiv.org/pdf/1611.05132v1
2016-11-16T03:28:08Z
2016-11-16T03:28:08Z
Convergence rate of stochastic k-means
We analyze online \cite{BottouBengio} and mini-batch \cite{Sculley} $k$-means variants. Both scale up the widely used $k$-means algorithm via stochastic approximation, and have become popular for large-scale clustering and unsupervised feature learning. We show, for the first time, that starting with any initial solution, they converge to a "local optimum" at rate $O(\frac{1}{t})$ (in terms of the $k$-means objective) under general conditions. In addition, we show that if the dataset is clusterable, when initialized with a simple and scalable seeding algorithm, mini-batch $k$-means converges to an optimal $k$-means solution at rate $O(\frac{1}{t})$ with high probability. The $k$-means objective is non-convex and non-differentiable: we exploit ideas from recent work on stochastic gradient descent for non-convex problems \cite{ge:sgd_tensor, balsubramani13} by providing a novel characterization of the trajectory of the $k$-means algorithm on its solution space, and circumvent the non-differentiability problem via geometric insights about the $k$-means update.
[ "['Cheng Tang' 'Claire Monteleoni']" ]
cs.LG stat.ML
null
1611.05136
null
null
http://arxiv.org/pdf/1611.05136v1
2016-11-16T03:45:12Z
2016-11-16T03:45:12Z
Machine Learning Approach for Skill Evaluation in Robotic-Assisted Surgery
Evaluating surgeon skill has predominantly been a subjective task. The development of objective methods for surgical skill assessment is of increasing interest. Recently, with technological advances such as robotic-assisted minimally invasive surgery (RMIS), new opportunities for objective and automated assessment frameworks have arisen. In this paper, we applied machine learning methods to automatically evaluate the performance of the surgeon in RMIS. Six important movement features were used in the evaluation: completion time, path length, depth perception, speed, smoothness and curvature. Different classification methods were applied to discriminate between expert and novice surgeons. We test our method on real surgical data for the suturing task and compare the classification result with ground truth data (obtained by manual labeling). The experimental results show that the proposed framework can classify surgical skill level with a relatively high accuracy of 85.7%. This study demonstrates the ability of machine learning methods to automatically classify expert and novice surgeons using movement features for different RMIS tasks. Due to the simplicity and generalizability of the introduced classification method, it is easy to implement in existing trainers.
[ "Mahtab J. Fard, Sattar Ameri, Ratna B. Chinnam, Abhilash K. Pandya,\n Michael D. Klein, and R. Darin Ellis", "['Mahtab J. Fard' 'Sattar Ameri' 'Ratna B. Chinnam' 'Abhilash K. Pandya'\n 'Michael D. Klein' 'R. Darin Ellis']" ]
cs.LG cs.CV
null
1611.05138
null
null
http://arxiv.org/pdf/1611.05138v1
2016-11-16T04:17:52Z
2016-11-16T04:17:52Z
S3Pool: Pooling with Stochastic Spatial Sampling
Feature pooling layers (e.g., max pooling) in convolutional neural networks (CNNs) serve the dual purpose of providing increasingly abstract representations as well as yielding computational savings in subsequent convolutional layers. We view the pooling operation in CNNs as a two-step procedure: first, a pooling window (e.g., $2\times 2$) slides over the feature map with stride one which leaves the spatial resolution intact, and second, downsampling is performed by selecting one pixel from each non-overlapping pooling window in an often uniform and deterministic (e.g., top-left) manner. Our starting point in this work is the observation that this regularly spaced downsampling arising from non-overlapping windows, although intuitive from a signal processing perspective (which has the goal of signal reconstruction), is not necessarily optimal for \emph{learning} (where the goal is to generalize). We study this aspect and propose a novel pooling strategy with stochastic spatial sampling (S3Pool), where the regular downsampling is replaced by a more general stochastic version. We observe that this general stochasticity acts as a strong regularizer, and can also be seen as doing implicit data augmentation by introducing distortions in the feature maps. We further introduce a mechanism to control the amount of distortion to suit different datasets and architectures. To demonstrate the effectiveness of the proposed approach, we perform extensive experiments on several popular image classification benchmarks, observing excellent improvements over baseline models. Experimental code is available at https://github.com/Shuangfei/s3pool.
[ "Shuangfei Zhai, Hui Wu, Abhishek Kumar, Yu Cheng, Yongxi Lu, Zhongfei\n Zhang, Rogerio Feris", "['Shuangfei Zhai' 'Hui Wu' 'Abhishek Kumar' 'Yu Cheng' 'Yongxi Lu'\n 'Zhongfei Zhang' 'Rogerio Feris']" ]
cs.NE cs.LG
10.13140/RG.2.2.10967.06566
1611.05141
null
null
http://arxiv.org/abs/1611.05141v1
2016-11-16T04:32:22Z
2016-11-16T04:32:22Z
Training Spiking Deep Networks for Neuromorphic Hardware
We describe a method to train spiking deep networks that can be run using leaky integrate-and-fire (LIF) neurons, achieving state-of-the-art results for spiking LIF networks on five datasets, including the large ImageNet ILSVRC-2012 benchmark. Our method for transforming deep artificial neural networks into spiking networks is scalable and works with a wide range of neural nonlinearities. We achieve these results by softening the neural response function, such that its derivative remains bounded, and by training the network with noise to provide robustness against the variability introduced by spikes. Our analysis shows that implementations of these networks on neuromorphic hardware will be many times more power-efficient than the equivalent non-spiking networks on traditional hardware.
[ "['Eric Hunsberger' 'Chris Eliasmith']", "Eric Hunsberger, Chris Eliasmith" ]
cs.LG stat.ML
null
1611.05146
null
null
http://arxiv.org/pdf/1611.05146v1
2016-11-16T05:11:36Z
2016-11-16T05:11:36Z
A Semi-Markov Switching Linear Gaussian Model for Censored Physiological Data
Critically ill patients in regular wards are vulnerable to unanticipated clinical deterioration which requires timely transfer to the intensive care unit (ICU). To allow for risk scoring and patient monitoring in such a setting, we develop a novel Semi-Markov Switching Linear Gaussian Model (SSLGM) for the inpatients' physiology. The model captures the patients' latent clinical states and their corresponding observable lab tests and vital signs. We present an efficient unsupervised learning algorithm that capitalizes on the informatively censored data in the electronic health records (EHR) to learn the parameters of the SSLGM; the learned model is then used to assess the new inpatients' risk for clinical deterioration in an online fashion, allowing for timely ICU admission. Experiments conducted on a heterogeneous cohort of 6,094 patients admitted to a large academic medical center show that the proposed model significantly outperforms the currently deployed risk scores such as Rothman index, MEWS, SOFA and APACHE.
[ "Ahmed M. Alaa, Jinsung Yoon, Scott Hu, Mihaela van der Schaar", "['Ahmed M. Alaa' 'Jinsung Yoon' 'Scott Hu' 'Mihaela van der Schaar']" ]