categories
string
doi
string
id
string
year
float64
venue
string
link
string
updated
string
published
string
title
string
abstract
string
authors
list
cs.CV cs.LG cs.NE
null
1609.05396
null
null
http://arxiv.org/pdf/1609.05396v1
2016-09-17T21:46:21Z
2016-09-17T21:46:21Z
A Deep Metric for Multimodal Registration
Multimodal registration is a challenging problem in medical imaging due to the high variability of tissue appearance under different imaging modalities. The crucial component here is the choice of the right similarity measure. We make a step towards a general learning-based solution that can be adapted to specific situations and present a metric based on a convolutional neural network. Our network can be trained from scratch even from a few aligned image pairs. The metric is validated on intersubject deformable registration on a dataset different from the one used for training, demonstrating good generalization. In this task, we outperform mutual information by a significant margin.
[ "Martin Simonovsky, Benjam\\'in Guti\\'errez-Becker, Diana Mateus, Nassir\n Navab, Nikos Komodakis", "['Martin Simonovsky' 'Benjamín Gutiérrez-Becker' 'Diana Mateus'\n 'Nassir Navab' 'Nikos Komodakis']" ]
cs.LG cs.AI
null
1609.05473
null
null
http://arxiv.org/pdf/1609.05473v6
2017-08-25T16:22:57Z
2016-09-18T11:42:23Z
SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient
As a new way of training generative models, Generative Adversarial Nets (GAN) that use a discriminative model to guide the training of the generative model have enjoyed considerable success in generating real-valued data. However, they have limitations when the goal is generating sequences of discrete tokens. A major reason is that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence it is non-trivial to balance its current score against the score it will receive once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve these problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judging a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.
[ "['Lantao Yu' 'Weinan Zhang' 'Jun Wang' 'Yong Yu']", "Lantao Yu, Weinan Zhang, Jun Wang, Yong Yu" ]
cs.LG stat.ML
10.1145/3309541
1609.05486
null
null
http://arxiv.org/abs/1609.05486v3
2018-06-13T10:10:48Z
2016-09-18T14:01:04Z
Probabilistic Feature Selection and Classification Vector Machine
Sparse Bayesian learning is a state-of-the-art supervised learning algorithm that can choose a subset of relevant samples from the input data and make reliable probabilistic predictions. However, in the presence of high-dimensional data with irrelevant features, traditional sparse Bayesian classifiers suffer from performance degradation and low efficiency by failing to eliminate irrelevant features. To tackle this problem, we propose a novel sparse Bayesian embedded feature selection method that adopts truncated Gaussian distributions as both sample and feature priors. The proposed method, called probabilistic feature selection and classification vector machine (PFCVM$_{LP}$), is able to simultaneously select relevant features and samples for classification tasks. In order to derive the analytical solutions, Laplace approximation is applied to compute approximate posteriors and marginal likelihoods. Then, parameters and hyperparameters are optimized by the type-II maximum likelihood method. Experiments on three datasets validate the performance of PFCVM$_{LP}$ along two dimensions: classification performance and effectiveness for feature selection. Finally, we analyze the generalization performance and derive a generalization error bound for PFCVM$_{LP}$. By tightening the bound, the importance of feature selection is demonstrated.
[ "['Bingbing Jiang' 'Chang Li' 'Maarten de Rijke' 'Xin Yao' 'Huanhuan Chen']", "Bingbing Jiang, Chang Li, Maarten de Rijke, Xin Yao and Huanhuan Chen" ]
cs.AI cs.LG
null
1609.05518
null
null
http://arxiv.org/pdf/1609.05518v2
2016-10-01T16:19:56Z
2016-09-18T17:28:22Z
Towards Deep Symbolic Reinforcement Learning
Deep reinforcement learning (DRL) brings the power of deep neural networks to bear on the generic task of trial-and-error learning, and its effectiveness has been convincingly demonstrated on tasks such as Atari video games and the game of Go. However, contemporary DRL systems inherit a number of shortcomings from the current generation of deep learning techniques. For example, they require very large datasets to work effectively, entailing that they are slow to learn even when such datasets are available. Moreover, they lack the ability to reason on an abstract level, which makes it difficult to implement high-level cognitive functions such as transfer learning, analogical reasoning, and hypothesis-based reasoning. Finally, their operation is largely opaque to humans, rendering them unsuitable for domains in which verifiability is important. In this paper, we propose an end-to-end reinforcement learning architecture comprising a neural back end and a symbolic front end with the potential to overcome each of these shortcomings. As proof-of-concept, we present a preliminary implementation of the architecture and apply it to several variants of a simple video game. We show that the resulting system -- though just a prototype -- learns effectively, and, by acquiring a set of symbolic rules that are easily comprehensible to humans, dramatically outperforms a conventional, fully neural DRL system on a stochastic variant of the game.
[ "['Marta Garnelo' 'Kai Arulkumaran' 'Murray Shanahan']", "Marta Garnelo, Kai Arulkumaran, Murray Shanahan" ]
cs.AI cs.LG
null
1609.05521
null
null
http://arxiv.org/pdf/1609.05521v2
2018-01-29T15:13:59Z
2016-09-18T17:52:28Z
Playing FPS Games with Deep Reinforcement Learning
Advances in deep reinforcement learning have allowed autonomous agents to perform well on Atari games, often outperforming humans, using only raw pixels to make their decisions. However, most of these games take place in 2D environments that are fully observable to the agent. In this paper, we present the first architecture to tackle 3D environments in first-person shooter games, which involve partially observable states. Typically, deep reinforcement learning methods only utilize visual input for training. We present a method to augment these models to exploit game feature information, such as the presence of enemies or items, during the training phase. Our model is trained to simultaneously learn these features while minimizing a Q-learning objective, which is shown to dramatically improve the training speed and performance of our agent. Our architecture is also modularized to allow different models to be independently trained for different phases of the game. We show that the proposed architecture substantially outperforms built-in AI agents of the game as well as humans in deathmatch scenarios.
[ "['Guillaume Lample' 'Devendra Singh Chaplot']", "Guillaume Lample, Devendra Singh Chaplot" ]
cs.LG stat.ML
null
1609.05524
null
null
http://arxiv.org/pdf/1609.05524v3
2017-03-30T05:04:42Z
2016-09-18T18:19:02Z
Principled Option Learning in Markov Decision Processes
It is well known that options can make planning more efficient, among their many benefits. Thus far, algorithms for autonomously discovering a set of useful options have been heuristic. Naturally, a principled way of finding a set of useful options may be more promising and insightful. In this paper we suggest a mathematical characterization of good sets of options using tools from information theory. This characterization enables us to find conditions under which a set of options is optimal, and yields an algorithm that outputs a useful set of options; we illustrate the proposed algorithm in simulation.
[ "Roy Fox, Michal Moshkovitz and Naftali Tishby", "['Roy Fox' 'Michal Moshkovitz' 'Naftali Tishby']" ]
cs.LG stat.ML
null
1609.05528
null
null
http://arxiv.org/pdf/1609.05528v1
2016-09-18T18:59:42Z
2016-09-18T18:59:42Z
Sequential Ensemble Learning for Outlier Detection: A Bias-Variance Perspective
Ensemble methods for classification and clustering have been effectively used for decades, while ensemble learning for outlier detection has only been studied recently. In this work, we design a new ensemble approach for outlier detection in multi-dimensional point data, which provides improved accuracy by reducing error through both bias and variance. Although classification and outlier detection appear as different problems, their theoretical underpinnings are quite similar in terms of the bias-variance trade-off [1], where outlier detection is considered as a binary classification task with unobserved labels but a similar bias-variance decomposition of error. In this paper, we propose a sequential ensemble approach called CARE that employs a two-phase aggregation of the intermediate results in each iteration to reach the final outcome. Unlike existing outlier ensembles, which solely incorporate a parallel framework by aggregating the outcomes of independent base detectors to reduce variance, our ensemble incorporates both parallel and sequential building blocks to reduce bias as well as variance by ($i$) successively eliminating outliers from the original dataset to build a better data model on which outlierness is estimated (sequentially), and ($ii$) combining the results from individual base detectors and across iterations (in parallel). Through extensive experiments on sixteen real-world datasets, mainly from the UCI machine learning repository [2], we show that CARE performs significantly better than or at least similar to the individual baselines. We also compare CARE with the state-of-the-art outlier ensembles, where it provides significant improvement when it is the winner and remains close otherwise.
[ "['Shebuti Rayana' 'Wen Zhong' 'Leman Akoglu']", "Shebuti Rayana, Wen Zhong and Leman Akoglu" ]
cs.LG stat.ML
null
1609.05536
null
null
http://arxiv.org/pdf/1609.05536v1
2016-09-18T19:58:48Z
2016-09-18T19:58:48Z
Learning Personalized Optimal Control for Repeatedly Operated Systems
We consider the problem of online learning of optimal control for repeatedly operated systems in the presence of parametric uncertainty. During each round of operation, the environment selects system parameters according to a fixed but unknown probability distribution. These parameters govern the dynamics of a plant. An agent chooses a control input to the plant and then observes the cost of that choice. In this setting, we design an agent that personalizes the control input to this plant taking into account the stochasticity involved. We demonstrate the effectiveness of our approach on a simulated system.
[ "['Theja Tulabandhula']", "Theja Tulabandhula" ]
stat.ML cs.LG
null
1609.05539
null
null
http://arxiv.org/pdf/1609.05539v2
2017-01-20T17:37:44Z
2016-09-18T20:17:00Z
On Randomized Distributed Coordinate Descent with Quantized Updates
In this paper, we study the randomized distributed coordinate descent algorithm with quantized updates. In the literature, the iteration complexity of the randomized distributed coordinate descent algorithm has been characterized under the assumption that machines can exchange updates with infinite precision. We consider a practical scenario in which the message exchange occurs over channels with finite capacity, and hence the updates have to be quantized. We derive sufficient conditions on the quantization error such that the algorithm with quantized updates still converges. We further verify our theoretical results by running an experiment in which we apply the algorithm with quantized updates to solve a linear regression problem.
[ "Mostafa El Gamal and Lifeng Lai", "['Mostafa El Gamal' 'Lifeng Lai']" ]
cs.LG
null
1609.05559
null
null
http://arxiv.org/pdf/1609.05559v1
2016-09-18T22:06:36Z
2016-09-18T22:06:36Z
Opponent Modeling in Deep Reinforcement Learning
Opponent modeling is necessary in multi-agent settings where secondary agents with competing goals also adapt their strategies, yet it remains challenging because strategies interact with each other and change. Most previous work focuses on developing probabilistic models or parameterized strategies for specific applications. Inspired by the recent success of deep reinforcement learning, we present neural-based models that jointly learn a policy and the behavior of opponents. Instead of explicitly predicting the opponent's action, we encode observations of the opponents into a deep Q-Network (DQN); however, we retain explicit modeling (if desired) using multitasking. By using a Mixture-of-Experts architecture, our model automatically discovers different strategy patterns of opponents without extra supervision. We evaluate our models on a simulated soccer game and a popular trivia game, showing superior performance over DQN and its variants.
[ "['He He' 'Jordan Boyd-Graber' 'Kevin Kwok' 'Hal Daumé III']", "He He, Jordan Boyd-Graber, Kevin Kwok, Hal Daum\\'e III" ]
cs.NA cs.IT cs.LG math.IT
null
1609.05587
null
null
http://arxiv.org/pdf/1609.05587v1
2016-09-19T03:25:33Z
2016-09-19T03:25:33Z
Tensor Completion by Alternating Minimization under the Tensor Train (TT) Model
Using the matrix product state (MPS) representation of tensor train decompositions, in this paper we propose a tensor completion algorithm which alternates over the matrices (tensors) in the MPS representation. This development is motivated in part by the success of matrix completion algorithms which alternate over the (low-rank) factors. We comment on the computational complexity of the proposed algorithm and numerically compare it with existing methods employing low rank tensor train approximation for data completion as well as several other recently proposed methods. We show that our method is superior to existing ones for a variety of real settings.
[ "['Wenqi Wang' 'Vaneet Aggarwal' 'Shuchin Aeron']", "Wenqi Wang and Vaneet Aggarwal and Shuchin Aeron" ]
cs.IR cs.LG
null
1609.05610
null
null
http://arxiv.org/pdf/1609.05610v1
2016-09-19T07:03:29Z
2016-09-19T07:03:29Z
Enhancing LambdaMART Using Oblivious Trees
Learning to rank is a machine learning technique broadly used in many areas such as document retrieval, collaborative filtering or question answering. We present experimental results which suggest that the performance of the current state-of-the-art learning to rank algorithm LambdaMART, when used for document retrieval for search engines, can be improved if standard regression trees are replaced by oblivious trees. This paper provides a comparison of both variants and our results demonstrate that the use of oblivious trees can improve the performance by more than $2.2\%$. Additional experimental analysis of the influence of the number of features and of the size of the training set is also provided and confirms the desirability of the properties of oblivious decision trees.
[ "['Michal Ferov' 'Marek Modrý']" ]
stat.ML cs.LG
null
1609.05772
null
null
http://arxiv.org/pdf/1609.05772v1
2016-09-19T15:19:44Z
2016-09-19T15:19:44Z
Stochastic Matrix Factorization
This paper considers a restriction to non-negative matrix factorization in which at least one matrix factor is stochastic. That is, the elements of the matrix factors are non-negative and the columns of one matrix factor sum to 1. This restriction includes topic models, a popular method for analyzing unstructured data. It also includes a method for storing and finding pictures. The paper presents necessary and sufficient conditions on the observed data such that the factorization is unique. In addition, the paper characterizes natural bounds on the parameters for any observed data and presents a consistent least squares estimator. The results are illustrated using a topic model analysis of PhD abstracts in economics and the problem of storing and retrieving a set of pictures of faces.
[ "Christopher Adams", "['Christopher Adams']" ]
cs.LG cs.CY stat.ML
null
1609.05807
null
null
http://arxiv.org/pdf/1609.05807v2
2016-11-17T16:41:21Z
2016-09-19T16:08:51Z
Inherent Trade-Offs in the Fair Determination of Risk Scores
Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them.
[ "Jon Kleinberg, Sendhil Mullainathan, Manish Raghavan", "['Jon Kleinberg' 'Sendhil Mullainathan' 'Manish Raghavan']" ]
cs.IT cs.CV cs.LG math.IT math.OC stat.ML
null
1609.05820
null
null
http://arxiv.org/pdf/1609.05820v3
2017-12-07T20:30:54Z
2016-09-19T16:29:46Z
The Projected Power Method: An Efficient Algorithm for Joint Alignment from Pairwise Differences
Various applications involve assigning discrete label values to a collection of objects based on some pairwise noisy data. Due to the discrete---and hence nonconvex---structure of the problem, computing the optimal assignment (e.g.~maximum likelihood assignment) becomes intractable at first sight. This paper makes progress towards efficient computation by focusing on a concrete joint alignment problem---that is, the problem of recovering $n$ discrete variables $x_i \in \{1,\cdots, m\}$, $1\leq i\leq n$ given noisy observations of their modulo differences $\{x_i - x_j~\mathsf{mod}~m\}$. We propose a low-complexity and model-free procedure, which operates in a lifted space by representing distinct label values in orthogonal directions, and which attempts to optimize quadratic functions over hypercubes. Starting with a first guess computed via a spectral method, the algorithm successively refines the iterates via projected power iterations. We prove that for a broad class of statistical models, the proposed projected power method makes no error---and hence converges to the maximum likelihood estimate---in a suitable regime. Numerical experiments have been carried out on both synthetic and real data to demonstrate the practicality of our algorithm. We expect this algorithmic framework to be effective for a broad range of discrete assignment problems.
[ "['Yuxin Chen' 'Emmanuel Candes']" ]
cs.LG cs.IR cs.NE stat.ML
null
1609.05866
null
null
http://arxiv.org/pdf/1609.05866v1
2016-09-19T18:55:18Z
2016-09-19T18:55:18Z
A Cheap Linear Attention Mechanism with Fast Lookups and Fixed-Size Representations
The softmax content-based attention mechanism has proven to be very beneficial in many applications of recurrent neural networks. Nevertheless, it suffers from two major computational limitations. First, its computations for an attention lookup scale linearly in the size of the attended sequence. Second, it does not encode the sequence into a fixed-size representation but instead requires memorizing all the hidden states. These two limitations restrict the use of the softmax attention mechanism to relatively small-scale applications with short sequences and few lookups per sequence. In this work we introduce a family of linear attention mechanisms designed to overcome these two limitations. We show that removing the softmax non-linearity from the traditional attention formulation yields constant-time attention lookups and fixed-size representations of the attended sequences. These properties make these linear attention mechanisms particularly suitable for large-scale applications with extreme query loads, real-time requirements and memory constraints. Early experiments on a question answering task show that these linear mechanisms yield significantly better accuracy than no attention, but, as expected, worse than their softmax alternative.
[ "Alexandre de Br\\'ebisson, Pascal Vincent", "['Alexandre de Brébisson' 'Pascal Vincent']" ]
cs.AI cs.LG stat.ML
null
1609.05881
null
null
http://arxiv.org/pdf/1609.05881v1
2016-09-19T19:44:06Z
2016-09-19T19:44:06Z
Online and Distributed learning of Gaussian mixture models by Bayesian Moment Matching
The Gaussian mixture model is a classic technique for clustering and data modeling that is used in numerous applications. With the rise of big data, there is a need for parameter estimation techniques that can handle streaming data and distribute the computation over several processors. While online variants of the Expectation Maximization (EM) algorithm exist, their data efficiency is reduced by a stochastic approximation of the E-step and it is not clear how to distribute the computation over multiple processors. We propose a Bayesian learning technique that lends itself naturally to online and distributed computation. Since the Bayesian posterior is not tractable, we project it onto a family of tractable distributions after each observation by matching a set of sufficient moments. This Bayesian moment matching technique compares favorably to online EM in terms of time and accuracy on a set of data modeling benchmarks.
[ "['Priyank Jaini' 'Pascal Poupart']", "Priyank Jaini and Pascal Poupart" ]
quant-ph cs.LG cs.NE
10.12743/quanta.v7i1.65
1609.05884
null
null
http://arxiv.org/abs/1609.05884v2
2018-02-02T09:39:52Z
2016-09-19T19:47:52Z
A Quantum Implementation Model for Artificial Neural Networks
The learning process for multi-layered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, learning formulas such as the Widrow-Hoff formula do not change the eigenvectors of the weight matrix while flattening the eigenvalues. In the limit, these iterative formulas result in terms formed by the principal components of the weight matrix, i.e., the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the phase estimation algorithm is known to provide speed-ups over conventional algorithms for eigenvalue-related problems. Combining quantum amplitude amplification with the phase estimation algorithm, a quantum implementation model for artificial neural networks using the Widrow-Hoff learning rule is presented. The complexity of the model is found to be linear in the size of the weight matrix. This provides a quadratic improvement over the classical algorithms.
[ "Ammar Daskin", "['Ammar Daskin']" ]
stat.ML cs.LG stat.AP
null
1609.05959
null
null
http://arxiv.org/pdf/1609.05959v1
2016-09-19T22:30:36Z
2016-09-19T22:30:36Z
Conformalized Kernel Ridge Regression
General predictive models do not provide a measure of confidence in predictions without Bayesian assumptions. A way to circumvent potential restrictions is to use conformal methods for constructing non-parametric confidence regions that offer validity guarantees. In this paper we provide a detailed description of a computationally efficient conformal procedure for Kernel Ridge Regression (KRR), and conduct a comparative numerical study to see how well conformal regions perform against Bayesian confidence sets. The results suggest that conformalized KRR can yield predictive confidence regions with a specified coverage rate, which is essential in constructing anomaly detection systems based on predictive models.
[ "['Evgeny Burnaev' 'Ivan Nazarov']", "Evgeny Burnaev and Ivan Nazarov" ]
cs.SD cs.LG cs.MM
null
1609.06026
null
null
http://arxiv.org/pdf/1609.06026v3
2017-06-27T17:09:59Z
2016-09-20T05:52:06Z
An Approach for Self-Training Audio Event Detectors Using Web Data
Audio Event Detection (AED) aims to recognize sounds within audio and video recordings. AED employs machine learning algorithms commonly trained and tested on annotated datasets. However, available datasets are limited in their number of samples, and hence it is difficult to model acoustic diversity. Therefore, we propose combining labeled audio from a dataset and unlabeled audio from the web to improve the sound models. The audio event detectors are trained on the labeled audio and run on the unlabeled audio downloaded from YouTube. Whenever the detectors recognize any of the known sounds with high confidence, the unlabeled audio is used to re-train the detectors. The performance of the re-trained detectors is compared to that of the original detectors on the annotated test set. Results showed an improvement in AED performance and uncovered challenges of using web audio from videos.
[ "Benjamin Elizalde, Ankit Shah, Siddharth Dalmia, Min Hun Lee, Rohan\n Badlani, Anurag Kumar, Bhiksha Raj and Ian Lane", "['Benjamin Elizalde' 'Ankit Shah' 'Siddharth Dalmia' 'Min Hun Lee'\n 'Rohan Badlani' 'Anurag Kumar' 'Bhiksha Raj' 'Ian Lane']" ]
cs.CE cs.LG
10.1109/EAIS.2015.7368789
1609.06086
null
null
http://arxiv.org/abs/1609.06086v1
2016-09-20T10:36:01Z
2016-09-20T10:36:01Z
Modelling Stock-market Investors as Reinforcement Learning Agents [Correction]
Decision making in uncertain and risky environments is a prominent area of research. Standard economic theories fail to fully explain human behaviour, while a potentially promising alternative may lie in the direction of Reinforcement Learning (RL) theory. We analyse data for 46 players extracted from a financial market online game and test whether Reinforcement Learning (Q-Learning) could capture these players' behaviour, using a risk measure based on financial modeling. Moreover, we test an earlier hypothesis that players are "na\"ive" (short-sighted). Our results indicate that a simple Reinforcement Learning model which considers only the selling component of the task captures the decision-making process for a subset of players, but this is not sufficient to draw any conclusion about the population. We also find no significant improvement in fit when using a full RL model over a myopic version, in which only immediate reward is valued by the players. This indicates that players, if using a Reinforcement Learning approach, do so na\"ively.
[ "Alvin Pastore, Umberto Esposito, Eleni Vasilaki", "['Alvin Pastore' 'Umberto Esposito' 'Eleni Vasilaki']" ]
cs.LG stat.ML
10.1109/TSP.2017.2708035
1609.06100
null
null
http://arxiv.org/abs/1609.06100v4
2017-05-13T21:06:26Z
2016-09-20T11:12:04Z
Distributed Adaptive Learning of Graph Signals
The aim of this paper is to propose distributed strategies for adaptive learning of signals defined over graphs. Assuming the graph signal to be bandlimited, the method enables distributed reconstruction, with guaranteed performance in terms of mean-square error, and tracking from a limited number of sampled observations taken from a subset of vertices. A detailed mean square analysis is carried out and illustrates the role played by the sampling strategy on the performance of the proposed method. Finally, some useful strategies for distributed selection of the sampling set are provided. Several numerical results validate our theoretical findings, and illustrate the performance of the proposed method for distributed adaptive learning of signals defined over graphs.
[ "['P. Di Lorenzo' 'P. Banelli' 'S. Barbarossa' 'S. Sardellitti']" ]
cs.LG
null
1609.06119
null
null
http://arxiv.org/pdf/1609.06119v1
2016-09-20T11:50:52Z
2016-09-20T11:50:52Z
FastBDT: A speed-optimized and cache-friendly implementation of stochastic gradient-boosted decision trees for multivariate classification
Stochastic gradient-boosted decision trees are widely employed for multivariate classification and regression tasks. This paper presents a speed-optimized and cache-friendly implementation for multivariate classification called FastBDT. FastBDT is one order of magnitude faster in both the fitting and application phases than popular implementations in software frameworks like TMVA, scikit-learn and XGBoost. The concepts used to optimize the execution time, together with performance studies, are discussed in detail in this paper. The key ideas include: an equal-frequency binning on the input data, which allows replacing expensive floating-point operations with integer operations, while at the same time increasing the quality of the classification; and a cache-friendly linear access pattern to the input data, in contrast to the random access pattern exhibited by usual implementations. FastBDT provides interfaces to C/C++, Python and TMVA. It is extensively used in the field of high-energy physics by the Belle II experiment.
[ "Thomas Keck", "['Thomas Keck']" ]
cs.CL cs.LG
null
1609.06127
null
null
http://arxiv.org/pdf/1609.06127v1
2016-09-20T12:29:15Z
2016-09-20T12:29:15Z
A framework for mining process models from emails logs
Due to its wide use in personal and, most importantly, professional contexts, email represents a valuable source of information that can be harvested for understanding, reengineering and repurposing undocumented business processes of companies and institutions. Towards this aim, a few researchers have investigated the problem of extracting process-oriented information from email logs in order to benefit from the many available process mining techniques and tools. In this paper we go further in this direction by proposing a new method for mining process models from email logs that leverages unsupervised machine learning techniques with little human involvement. Moreover, our method allows emails to be semi-automatically labeled with activity names, which can be used for activity recognition in new incoming emails. A use case demonstrates the usefulness of the proposed solution on a modest-sized, yet real-world, dataset containing emails that belong to two different process models.
[ "['Diana Jlailaty' 'Daniela Grigori' 'Khalid Belhajjame']", "Diana Jlailaty and Daniela Grigori and Khalid Belhajjame" ]
cs.LG
null
1609.06146
null
null
http://arxiv.org/pdf/1609.06146v1
2016-09-18T01:08:20Z
2016-09-18T01:08:20Z
mlr Tutorial
This document provides an in-depth introduction to the mlr framework for machine learning experiments in R.
[ "['Julia Schiffner' 'Bernd Bischl' 'Michel Lang' 'Jakob Richter'\n 'Zachary M. Jones' 'Philipp Probst' 'Florian Pfisterer' 'Mason Gallo'\n 'Dominik Kirchhoff' 'Tobias Kühn' 'Janek Thomas' 'Lars Kotthoff']", "Julia Schiffner, Bernd Bischl, Michel Lang, Jakob Richter, Zachary M.\n Jones, Philipp Probst, Florian Pfisterer, Mason Gallo, Dominik Kirchhoff,\n Tobias K\\\"uhn, Janek Thomas, Lars Kotthoff" ]
q-bio.MN cs.LG
null
1609.06335
null
null
http://arxiv.org/pdf/1609.06335v1
2016-09-20T20:14:15Z
2016-09-20T20:14:15Z
Unsupervised learning of transcriptional regulatory networks via latent tree graphical models
Gene expression is a readily-observed quantification of transcriptional activity and cellular state that enables the recovery of the relationships between regulators and their target genes. Reconstructing transcriptional regulatory networks from gene expression data is a problem that has attracted much attention, but previous work often makes the simplifying (but unrealistic) assumption that regulator activity is represented by mRNA levels. We use a latent tree graphical model to analyze gene expression without relying on transcription factor expression as a proxy for regulator activity. The latent tree model is a type of Markov random field that includes both observed gene variables and latent (hidden) variables, which factorize on a Markov tree. Through efficient unsupervised learning approaches, we determine which groups of genes are co-regulated by hidden regulators and the activity levels of those regulators. Post-processing annotates many of these discovered latent variables as specific transcription factors or groups of transcription factors. Other latent variables do not necessarily represent physical regulators but instead reveal hidden structure in the gene expression such as shared biological function. We apply the latent tree graphical model to a yeast stress response dataset. In addition to novel predictions, such as condition-specific binding of the transcription factor Msn4, our model recovers many known aspects of the yeast regulatory network. These include groups of co-regulated genes, condition-specific regulator activity, and combinatorial regulation among transcription factors. The latent tree graphical model is a general approach for analyzing gene expression data that requires no prior knowledge of which possible regulators exist, regulator activity, or where transcription factors physically bind.
[ "['Anthony Gitter' 'Furong Huang' 'Ragupathyraj Valluvan' 'Ernest Fraenkel'\n 'Animashree Anandkumar']", "Anthony Gitter, Furong Huang, Ragupathyraj Valluvan, Ernest Fraenkel,\n Animashree Anandkumar" ]
cs.AI cs.CY cs.HC cs.LG
null
1609.06354
null
null
http://arxiv.org/pdf/1609.06354v4
2017-09-30T15:25:23Z
2016-09-20T20:56:07Z
Recognizing Detailed Human Context In-the-Wild from Smartphones and Smartwatches
The ability to automatically recognize a person's behavioral context can contribute to health monitoring, aging care and many other domains. Validating context recognition in-the-wild is crucial to promote practical applications that work in real-life settings. We collected over 300k minutes of sensor data with context labels from 60 subjects. Unlike previous studies, our subjects used their own personal phone, in any way that was convenient to them, and engaged in their routine in their natural environments. Unscripted behavior and unconstrained phone usage resulted in situations that are harder to recognize. We demonstrate how fusion of multi-modal sensors is important for resolving such cases. We present a baseline system, and encourage researchers to use our public dataset to compare methods and improve context recognition in-the-wild.
[ "Yonatan Vaizman and Katherine Ellis and Gert Lanckriet", "['Yonatan Vaizman' 'Katherine Ellis' 'Gert Lanckriet']" ]
cs.LG cs.CV
null
1609.06377
null
null
http://arxiv.org/pdf/1609.06377v2
2017-06-12T21:52:06Z
2016-09-20T22:49:34Z
Geometry-Based Next Frame Prediction from Monocular Video
We consider the problem of next frame prediction from video input. A recurrent convolutional neural network is trained to predict depth from monocular video input, which, along with the current video image and the camera trajectory, can then be used to compute the next frame. Unlike prior next-frame prediction approaches, we take advantage of the scene geometry and use the predicted depth for generating the next frame prediction. Our approach can produce rich next frame predictions which include depth information attached to each pixel. Another novel aspect of our approach is that it predicts depth from a sequence of images (e.g. in a video), rather than from a single still image. We evaluate the proposed approach on the KITTI dataset, a standard dataset for benchmarking tasks relevant to autonomous driving. The proposed method produces results which are visually and numerically superior to existing methods that directly predict the next frame. We show that the accuracy of depth prediction improves as more prior frames are considered.
[ "Reza Mahjourian, Martin Wicke, Anelia Angelova", "['Reza Mahjourian' 'Martin Wicke' 'Anelia Angelova']" ]
stat.ML cs.LG
null
1609.06385
null
null
http://arxiv.org/pdf/1609.06385v1
2016-09-20T23:41:55Z
2016-09-20T23:41:55Z
Multiclass Classification Calibration Functions
In this paper we refine the process of computing calibration functions for a number of multiclass classification surrogate losses. Calibration functions are a powerful tool for easily converting bounds for the surrogate risk (which can be computed through well-known methods) into bounds for the true risk, the probability of making a mistake. They are particularly suitable in non-parametric settings, where the approximation error can be controlled, and provide tighter bounds than the common technique of upper-bounding the 0-1 loss by the surrogate loss. The abstract nature of the more sophisticated existing calibration function results requires calibration functions to be explicitly derived on a case-by-case basis, requiring repeated efforts whenever bounds for a new surrogate loss are required. We devise a streamlined analysis that simplifies the process of deriving calibration functions for a large number of surrogate losses that have been proposed in the literature. The effort of deriving calibration functions is then reduced to verifying, for a chosen surrogate loss, a small number of conditions that we introduce. As case studies, we recover existing calibration functions for the well-known loss of Lee et al. (2004), and also provide novel calibration functions for well-known losses, including the one-versus-all loss and the logistic regression loss, plus a number of other losses that have been shown to be classification-calibrated in the past, but for which no calibration function had been derived.
[ "Bernardo \\'Avila Pires and Csaba Szepesv\\'ari", "['Bernardo Ávila Pires' 'Csaba Szepesvári']" ]
stat.ML cs.LG
null
1609.06390
null
null
http://arxiv.org/pdf/1609.06390v1
2016-09-21T00:15:44Z
2016-09-21T00:15:44Z
Learning HMMs with Nonparametric Emissions via Spectral Decompositions of Continuous Matrices
Recently, there has been a surge of interest in using spectral methods for estimating latent variable models. However, it is usually assumed that the distribution of the observations conditioned on the latent variables is either discrete or belongs to a parametric family. In this paper, we study the estimation of an $m$-state hidden Markov model (HMM) with only smoothness assumptions, such as H\"olderian conditions, on the emission densities. By leveraging some recent advances in continuous linear algebra and numerical analysis, we develop a computationally efficient spectral algorithm for learning nonparametric HMMs. Our technique is based on computing an SVD on nonparametric estimates of density functions by viewing them as \emph{continuous matrices}. We derive sample complexity bounds via concentration results for nonparametric density estimation and novel perturbation theory results for continuous matrices. We implement our method using Chebyshev polynomial approximations. Our method is competitive with other baselines on synthetic and real problems and is also very computationally efficient.
[ "['Kirthevasan Kandasamy' 'Maruan Al-Shedivat' 'Eric P. Xing']" ]
cs.GT cs.LG
null
1609.06438
null
null
http://arxiv.org/pdf/1609.06438v1
2016-09-21T07:10:13Z
2016-09-21T07:10:13Z
Large-Scale Strategic Games and Adversarial Machine Learning
Decision making in modern large-scale and complex systems such as communication networks, smart electricity grids, and cyber-physical systems motivate novel game-theoretic approaches. This paper investigates big strategic (non-cooperative) games where a finite number of individual players each have a large number of continuous decision variables and input data points. Such high-dimensional decision spaces and big data sets lead to computational challenges, relating to efforts in non-linear optimization scaling up to large systems of variables. In addition to these computational challenges, real-world players often have limited information about their preference parameters due to the prohibitive cost of identifying them or due to operating in dynamic online settings. The challenge of limited information is exacerbated in high dimensions and big data sets. Motivated by both computational and information limitations that constrain the direct solution of big strategic games, our investigation centers around reductions using linear transformations such as random projection methods and their effect on Nash equilibrium solutions. Specific analytical results are presented for quadratic games and approximations. In addition, an adversarial learning game is presented where random projection and sampling schemes are investigated.
[ "['Tansu Alpcan' 'Benjamin I. P. Rubinstein' 'Christopher Leckie']", "Tansu Alpcan, Benjamin I. P. Rubinstein, Christopher Leckie" ]
cs.SI cs.LG stat.ML
null
1609.06457
null
null
http://arxiv.org/pdf/1609.06457v1
2016-09-21T08:14:12Z
2016-09-21T08:14:12Z
AMOS: An Automated Model Order Selection Algorithm for Spectral Graph Clustering
One of the longstanding problems in spectral graph clustering (SGC) is the so-called model order selection problem: automated selection of the correct number of clusters. This is equivalent to the problem of finding the number of connected components or communities in an undirected graph. In this paper, we propose AMOS, an automated model order selection algorithm for SGC. Based on a recent analysis of clustering reliability for SGC under the random interconnection model, AMOS works by incrementally increasing the number of clusters, estimating the quality of identified clusters, and providing a series of clustering reliability tests. Consequently, AMOS outputs clusters of minimal model order with statistical clustering reliability guarantees. Compared to three other automated graph clustering methods on real-world datasets, AMOS shows superior performance in terms of multiple external and internal clustering metrics.
[ "Pin-Yu Chen and Thibaut Gensollen and Alfred O. Hero III", "['Pin-Yu Chen' 'Thibaut Gensollen' 'Alfred O. Hero III']" ]
q-bio.GN cs.LG stat.ML
null
1609.06480
null
null
http://arxiv.org/pdf/1609.06480v1
2016-09-21T09:47:32Z
2016-09-21T09:47:32Z
Network-regularized Sparse Logistic Regression Models for Clinical Risk Prediction and Biomarker Discovery
Molecular profiling data (e.g., gene expression) has been used for clinical risk prediction and biomarker discovery. However, it is necessary to integrate other prior knowledge like biological pathways or gene interaction networks to improve the predictive ability and biological interpretability of biomarkers. Here, we first introduce a general regularized Logistic Regression (LR) framework with regularized term $\lambda \|\bm{w}\|_1 + \eta\bm{w}^T\bm{M}\bm{w}$, which can reduce to different penalties, including Lasso, elastic net, and network-regularized terms with different $\bm{M}$. This framework can be easily solved in a unified manner by a cyclic coordinate descent algorithm which avoids inverse matrix operations and accelerates the computing speed. However, if the estimated $\bm{w}_i$ and $\bm{w}_j$ have opposite signs, then the traditional network-regularized penalty may not perform well. To address this, we introduce a novel network-regularized sparse LR model with a new penalty $\lambda \|\bm{w}\|_1 + \eta|\bm{w}|^T\bm{M}|\bm{w}|$ to consider the difference between the absolute values of the coefficients. We develop two efficient algorithms to solve it. Finally, we test our methods and compare them with related ones using simulated and real data to show their efficiency.
[ "['Wenwen Min' 'Juan Liu' 'Shihua Zhang']" ]
cs.CV cs.AI cs.CL cs.LG cs.NE
null
1609.06492
null
null
http://arxiv.org/pdf/1609.06492v1
2016-09-21T10:52:03Z
2016-09-21T10:52:03Z
Document Image Coding and Clustering for Script Discrimination
The paper introduces a new method for discrimination of documents given in different scripts. The document is mapped into a uniformly coded text of numerical values, derived from the position of the letters in the text line based on their typographical characteristics. Each code is considered as a gray level. Accordingly, the coded text determines a 1-D image, on which texture analysis by run-length statistics and local binary patterns is performed. This defines feature vectors representing the script content of the document. A modified clustering approach applied to the document feature vectors groups documents written in the same script. Experimentation performed on two custom-oriented databases of historical documents in old Cyrillic, angular and round Glagolitic, as well as Antiqua and Fraktur scripts demonstrates the superiority of the proposed method with respect to well-known methods in the state-of-the-art.
[ "['Darko Brodic' 'Alessia Amelio' 'Zoran N. Milivojevic' 'Milena Jevtic']", "Darko Brodic, Alessia Amelio, Zoran N. Milivojevic, Milena Jevtic" ]
cs.DL cs.LG stat.ML
10.1007/s10994-016-5554-z
1609.06532
null
null
http://arxiv.org/abs/1609.06532v1
2016-09-21T12:44:37Z
2016-09-21T12:44:37Z
Bibliographic Analysis on Research Publications using Authors, Categorical Labels and the Citation Network
Bibliographic analysis considers the author's research areas, the citation network and the paper content, among other things. In this paper, we combine these three in a topic model that produces a bibliographic model of authors, topics and documents, using a nonparametric extension of a combination of the Poisson mixed-topic link model and the author-topic model. This gives rise to the Citation Network Topic Model (CNTM). We propose a novel and efficient inference algorithm for the CNTM to explore subsets of research publications from CiteSeerX. The publication datasets are organised into three corpora, totalling about 168k publications with about 62k authors. The queried datasets are made available online. On three publicly available corpora, in addition to the queried datasets, our proposed model demonstrates improved performance in both model fitting and document clustering, compared to several baselines. Moreover, our model allows extraction of additional useful knowledge from the corpora, such as the visualisation of the author-topics network. Additionally, we propose a simple method to incorporate supervision into topic modelling to achieve further improvement on the clustering task.
[ "Kar Wai Lim and Wray Buntine", "['Kar Wai Lim' 'Wray Buntine']" ]
stat.ML cs.LG
10.4236/am.2016.715143
1609.06533
null
null
http://arxiv.org/abs/1609.06533v1
2016-09-21T12:46:09Z
2016-09-21T12:46:09Z
On Data-Independent Properties for Density-Based Dissimilarity Measures in Hybrid Clustering
Hybrid clustering combines partitional and hierarchical clustering for computational effectiveness and versatility in cluster shape. In such clustering, a dissimilarity measure plays a crucial role in the hierarchical merging. The dissimilarity measure has great impact on the final clustering, and data-independent properties are needed to choose the right dissimilarity measure for the problem at hand. Properties for distance-based dissimilarity measures have been studied for decades, but properties for density-based dissimilarity measures have so far received little attention. Here, we propose six data-independent properties to evaluate density-based dissimilarity measures associated with hybrid clustering, regarding equality, orthogonality, symmetry, outlier and noise observations, and light-tailed models for heavy-tailed clusters. The significance of the properties is investigated, and we study some well-known dissimilarity measures based on Shannon entropy, misclassification rate, Bhattacharyya distance and Kullback-Leibler divergence with respect to the proposed properties. As none of them satisfy all the proposed properties, we introduce a new dissimilarity measure based on the Kullback-Leibler information and show that it satisfies all proposed properties. The effect of the proposed properties is also illustrated on several real and simulated data sets.
[ "Kajsa M{\\o}llersen, Subhra S. Dhar, Fred Godtliebsen", "['Kajsa Møllersen' 'Subhra S. Dhar' 'Fred Godtliebsen']" ]
cs.LG
null
1609.06570
null
null
http://arxiv.org/pdf/1609.06570v1
2016-09-21T14:16:14Z
2016-09-21T14:16:14Z
Imbalanced-learn: A Python Toolbox to Tackle the Curse of Imbalanced Datasets in Machine Learning
Imbalanced-learn is an open-source python toolbox aiming at providing a wide range of methods to cope with the problem of imbalanced datasets frequently encountered in machine learning and pattern recognition. The implemented state-of-the-art methods can be categorized into 4 groups: (i) under-sampling, (ii) over-sampling, (iii) combination of over- and under-sampling, and (iv) ensemble learning methods. The proposed toolbox only depends on numpy, scipy, and scikit-learn and is distributed under the MIT license. Furthermore, it is fully compatible with scikit-learn and is part of the scikit-learn-contrib supported project. Documentation, unit tests as well as integration tests are provided to ease usage and contribution. The toolbox is publicly available on GitHub: https://github.com/scikit-learn-contrib/imbalanced-learn.
[ "['Guillaume Lemaitre' 'Fernando Nogueira' 'Christos K. Aridas']" ]
stat.ML cs.LG
null
1609.06575
null
null
http://arxiv.org/pdf/1609.06575v2
2016-10-07T22:51:50Z
2016-09-21T14:23:15Z
Theoretical Evaluation of Feature Selection Methods based on Mutual Information
Feature selection methods are usually evaluated by wrapping specific classifiers and datasets in the evaluation process, resulting very often in unfair comparisons between methods. In this work, we develop a theoretical framework that allows obtaining the true feature ordering of two-dimensional sequential forward feature selection methods based on mutual information, which is independent of entropy or mutual information estimation methods, classifiers, or datasets, and leads to an unambiguous comparison of the methods. Moreover, the theoretical framework unveils problems intrinsic to some methods that are otherwise difficult to detect, namely inconsistencies in the construction of the objective function used to select the candidate features, due to various types of indeterminations and to the possibility of the entropy of continuous random variables taking null and negative values.
[ "Cl\\'audia Pascoal, M. Ros\\'ario Oliveira, Ant\\'onio Pacheco, and Rui\n Valadas", "['Cláudia Pascoal' 'M. Rosário Oliveira' 'António Pacheco' 'Rui Valadas']" ]
cs.CL cs.IR cs.LG
10.1145/2661829.2662005
1609.06578
null
null
http://arxiv.org/abs/1609.06578v1
2016-09-21T14:25:23Z
2016-09-21T14:25:23Z
Twitter Opinion Topic Model: Extracting Product Opinions from Tweets by Leveraging Hashtags and Sentiment Lexicon
Aspect-based opinion mining is widely applied to review data to aggregate or summarize opinions of a product, and the current state-of-the-art is achieved with Latent Dirichlet Allocation (LDA)-based models. Although social media data like tweets are laden with opinions, their "dirty" nature (as natural language) has discouraged researchers from applying LDA-based opinion models to product review mining. Tweets are often informal, unstructured and lack labeled data such as categories and ratings, making them challenging for product opinion mining. In this paper, we propose an LDA-based opinion model named Twitter Opinion Topic Model (TOTM) for opinion mining and sentiment analysis. TOTM leverages hashtags, mentions, emoticons and strong sentiment words that are present in tweets in its discovery process. It improves opinion prediction by modeling the target-opinion interaction directly, thus discovering target-specific opinion words, neglected in existing approaches. Moreover, we propose a new formulation for incorporating sentiment prior information into a topic model by utilizing an existing public sentiment lexicon. This is novel in that it learns and updates with the data. We conduct experiments on 9 million tweets on electronic products, and demonstrate the improved performance of TOTM in both quantitative evaluations and qualitative analysis. We show that aspect-based opinion analysis on a massive volume of tweets provides useful opinions on products.
[ "['Kar Wai Lim' 'Wray Buntine']", "Kar Wai Lim, Wray Buntine" ]
cs.CR cs.CY cs.LG
null
1609.06582
null
null
http://arxiv.org/pdf/1609.06582v2
2016-10-09T15:58:06Z
2016-09-21T14:31:15Z
Privacy-Friendly Mobility Analytics using Aggregate Location Data
Location data can be extremely useful to study commuting patterns and disruptions, as well as to predict real-time traffic volumes. At the same time, however, the fine-grained collection of user locations raises serious privacy concerns, as this can reveal sensitive information about the users, such as lifestyle, political and religious inclinations, or even identity. In this paper, we study the feasibility of crowd-sourced mobility analytics over aggregate location information: users periodically report their location, using a privacy-preserving aggregation protocol, so that the server can only recover aggregates -- i.e., how many, but not which, users are in a region at a given time. We experiment with real-world mobility datasets obtained from the Transport For London authority and the San Francisco Cabs network, and present a novel methodology based on time series modeling that is geared to forecast traffic volumes in regions of interest and to detect mobility anomalies in them. In the presence of anomalies, we also make enhanced traffic volume predictions by feeding our model with additional information from correlated regions. Finally, we present and evaluate a mobile app prototype, called Mobility Data Donors (MDD), in terms of computation, communication, and energy overhead, demonstrating the real-world deployability of our techniques.
[ "Apostolos Pyrgelis and Emiliano De Cristofaro and Gordon Ross", "['Apostolos Pyrgelis' 'Emiliano De Cristofaro' 'Gordon Ross']" ]
cs.RO cs.AI cs.CV cs.LG cs.NE
null
1609.06666
null
null
http://arxiv.org/pdf/1609.06666v2
2017-03-05T15:29:45Z
2016-09-21T18:32:11Z
Vote3Deep: Fast Object Detection in 3D Point Clouds Using Efficient Convolutional Neural Networks
This paper proposes a computationally efficient approach to detecting objects natively in 3D point clouds using convolutional neural networks (CNNs). In particular, this is achieved by leveraging a feature-centric voting scheme to implement novel convolutional layers which explicitly exploit the sparsity encountered in the input. To this end, we examine the trade-off between accuracy and speed for different architectures and additionally propose to use an L1 penalty on the filter activations to further encourage sparsity in the intermediate representations. To the best of our knowledge, this is the first work to propose sparse convolutional layers and L1 regularisation for efficient large-scale processing of 3D data. We demonstrate the efficacy of our approach on the KITTI object detection benchmark and show that Vote3Deep models with as few as three layers outperform the previous state of the art in both laser and laser-vision based approaches by margins of up to 40% while remaining highly competitive in terms of processing time.
[ "['Martin Engelcke' 'Dushyant Rao' 'Dominic Zeng Wang' 'Chi Hay Tong'\n 'Ingmar Posner']", "Martin Engelcke, Dushyant Rao, Dominic Zeng Wang, Chi Hay Tong, Ingmar\n Posner" ]
cs.CL cs.LG
null
1609.06686
null
null
http://arxiv.org/pdf/1609.06686v1
2016-09-21T19:08:15Z
2016-09-21T19:08:15Z
Character-level and Multi-channel Convolutional Neural Networks for Large-scale Authorship Attribution
Convolutional neural networks (CNNs) have demonstrated superior capability for extracting information from raw signals in computer vision. Recently, character-level and multi-channel CNNs have exhibited excellent performance for sentence classification tasks. We apply CNNs to large-scale authorship attribution, which aims to determine an unknown text's author among many candidate authors, motivated by their ability to process character-level signals and to differentiate between a large number of classes, while making fast predictions in comparison to state-of-the-art approaches. We extensively evaluate CNN-based approaches that leverage word and character channels and compare them against state-of-the-art methods for a large range of author numbers, shedding new light on traditional approaches. We show that character-level CNNs outperform the state-of-the-art on four out of five datasets in different domains. Additionally, we present the first application of authorship attribution to reddit.
[ "['Sebastian Ruder' 'Parsa Ghaffari' 'John G. Breslin']", "Sebastian Ruder, Parsa Ghaffari, John G. Breslin" ]
cs.LG
null
1609.06693
null
null
http://arxiv.org/pdf/1609.06693v3
2016-12-03T08:08:53Z
2016-09-21T19:31:07Z
SoftTarget Regularization: An Effective Technique to Reduce Over-Fitting in Neural Networks
Deep neural networks are learning models with a very high capacity and therefore prone to over-fitting. Many regularization techniques such as Dropout, DropConnect, and weight decay all attempt to solve the problem of over-fitting by reducing the capacity of their respective models (Srivastava et al., 2014), (Wan et al., 2013), (Krogh & Hertz, 1992). In this paper we introduce a new form of regularization that guides the learning problem in a way that reduces over-fitting without sacrificing the capacity of the model. The mistakes that models make in the early stages of training carry information about the learning problem. By adjusting the labels of the current training epoch through a weighted average of the real labels and an exponential average of the past soft targets, we achieve a regularization scheme as powerful as Dropout without necessarily reducing the capacity of the model, while simplifying the learning problem. SoftTarget regularization proved to be an effective tool in various neural network architectures.
[ "['Armen Aghajanyan']", "Armen Aghajanyan" ]
cs.CV cs.LG
null
1609.06694
null
null
http://arxiv.org/pdf/1609.06694v1
2016-09-21T19:32:46Z
2016-09-21T19:32:46Z
PixelNet: Towards a General Pixel-level Architecture
We explore architectures for general pixel-level prediction problems, from low-level edge detection to mid-level surface normal estimation to high-level semantic segmentation. Convolutional predictors, such as the fully-convolutional network (FCN), have achieved remarkable success by exploiting the spatial redundancy of neighboring pixels through convolutional processing. Though computationally efficient, we point out that such approaches are not statistically efficient during learning precisely because spatial redundancy limits the information learned from neighboring pixels. We demonstrate that (1) stratified sampling allows us to add diversity during batch updates and (2) sampled multi-scale features allow us to explore more nonlinear predictors (multiple fully-connected layers followed by ReLU) that improve overall accuracy. Finally, our objective is to show how an architecture can achieve performance better than (or comparable to) architectures designed for a particular task. Interestingly, our single architecture produces state-of-the-art results for semantic segmentation on PASCAL-Context, surface normal estimation on the NYUDv2 dataset, and edge detection on BSDS without contextual post-processing.
[ "['Aayush Bansal' 'Xinlei Chen' 'Bryan Russell' 'Abhinav Gupta'\n 'Deva Ramanan']", "Aayush Bansal, Xinlei Chen, Bryan Russell, Abhinav Gupta, Deva Ramanan" ]
stat.ML cs.CL cs.LG
10.1016/j.ijar.2016.07.007
1609.06783
null
null
http://arxiv.org/abs/1609.06783v1
2016-09-22T00:10:16Z
2016-09-22T00:10:16Z
Nonparametric Bayesian Topic Modelling with the Hierarchical Pitman-Yor Processes
The Dirichlet process and its extension, the Pitman-Yor process, are stochastic processes that take probability distributions as a parameter. These processes can be stacked up to form a hierarchical nonparametric Bayesian model. In this article, we present efficient methods for the use of these processes in this hierarchical context, and apply them to latent variable models for text analytics. In particular, we propose a general framework for designing these Bayesian models, which are called topic models in the computer science community. We then propose a specific nonparametric Bayesian topic model for modelling text from social media. We focus on tweets (posts on Twitter) in this article due to their ease of access. We find that our nonparametric model performs better than existing parametric models in both goodness of fit and real world applications.
[ "Kar Wai Lim, Wray Buntine, Changyou Chen, Lan Du", "['Kar Wai Lim' 'Wray Buntine' 'Changyou Chen' 'Lan Du']" ]
cs.LG math.OC
null
1609.06804
null
null
http://arxiv.org/pdf/1609.06804v2
2016-09-29T01:54:25Z
2016-09-22T02:50:09Z
Decoupled Asynchronous Proximal Stochastic Gradient Descent with Variance Reduction
In the era of big data, optimizing large-scale machine learning problems becomes a challenging task and draws significant attention. Asynchronous optimization algorithms come out as a promising solution. Recently, decoupled asynchronous proximal stochastic gradient descent (DAP-SGD) was proposed to minimize a composite function. It is claimed to off-load the computation bottleneck from the server to the workers by allowing workers to evaluate the proximal operators; the server then only needs to perform element-wise operations. However, it still suffers from a slow convergence rate because the variance of the stochastic gradient is nonzero. In this paper, we propose a faster method, the decoupled asynchronous proximal stochastic variance-reduced gradient descent method (DAP-SVRG). We prove that our method has linear convergence for strongly convex problems. Large-scale experiments are also conducted in this paper, and the results support our theoretical analysis.
[ "Zhouyuan Huo, Bin Gu, Heng Huang", "['Zhouyuan Huo' 'Bin Gu' 'Heng Huang']" ]
cs.DL cs.LG stat.ML
null
1609.06826
null
null
http://arxiv.org/pdf/1609.06826v1
2016-09-22T05:46:46Z
2016-09-22T05:46:46Z
Bibliographic Analysis with the Citation Network Topic Model
Bibliographic analysis considers authors' research areas, the citation network and paper content among other things. In this paper, we combine these three in a topic model that produces a bibliographic model of authors, topics and documents using a non-parametric extension of a combination of the Poisson mixed-topic link model and the author-topic model. We propose a novel and efficient inference algorithm for the model to explore subsets of research publications from CiteSeerX. Our model demonstrates improved performance in both model fitting and a clustering task compared to several baselines.
[ "['Kar Wai Lim' 'Wray Buntine']", "Kar Wai Lim, Wray Buntine" ]
cs.LG stat.ML
null
1609.06831
null
null
http://arxiv.org/pdf/1609.06831v1
2016-09-22T06:18:20Z
2016-09-22T06:18:20Z
Hawkes Processes with Stochastic Excitations
We propose an extension to Hawkes processes by treating the levels of self-excitation as a stochastic differential equation. Our new point process allows better approximation in application domains where events and intensities accelerate each other with correlated levels of contagion. We generalize a recent algorithm for simulating draws from Hawkes processes whose levels of excitation are stochastic processes, and propose a hybrid Markov chain Monte Carlo approach for model fitting. Our sampling procedure scales linearly with the number of required events and does not require stationarity of the point process. A modular inference procedure consisting of a combination of Gibbs and Metropolis-Hastings steps is put forward. We recover expectation maximization as a special case. Our general approach is illustrated for contagion following geometric Brownian motion and exponential Langevin dynamics.
[ "Young Lee, Kar Wai Lim, Cheng Soon Ong", "['Young Lee' 'Kar Wai Lim' 'Cheng Soon Ong']" ]
cs.LG math.PR stat.ML
null
1609.06840
null
null
http://arxiv.org/pdf/1609.06840v2
2018-04-17T10:21:59Z
2016-09-22T07:06:28Z
Exact Sampling from Determinantal Point Processes
Determinantal point processes (DPPs) are an important concept in random matrix theory and combinatorics. They have also recently attracted interest in the study of numerical methods for machine learning, as they offer an elegant "missing link" between independent Monte Carlo sampling and deterministic evaluation on regular grids, applicable to a general set of spaces. This is helpful whenever an algorithm explores to reduce uncertainty, such as in active learning, Bayesian optimization, reinforcement learning, and marginalization in graphical models. To draw samples from a DPP in practice, existing literature focuses on approximate schemes of low cost, or comparably inefficient exact algorithms like rejection sampling. We point out that, for many settings of relevance to machine learning, it is also possible to draw exact samples from DPPs on continuous domains. We start from an intuitive example on the real line, which is then generalized to multivariate real vector spaces. We also compare to previously studied approximations, showing that exact sampling, despite higher cost, can be preferable where precision is needed.
[ "Philipp Hennig and Roman Garnett", "['Philipp Hennig' 'Roman Garnett']" ]
stat.ML cs.LG cs.SY math.PR math.ST stat.TH
null
1609.06942
null
null
http://arxiv.org/pdf/1609.06942v1
2016-09-22T12:40:58Z
2016-09-22T12:40:58Z
Randomized Independent Component Analysis
Independent component analysis (ICA) is a method for recovering statistically independent signals from observations of unknown linear combinations of the sources. Some of the most accurate ICA decomposition methods require searching for the inverse transformation which minimizes different approximations of Mutual Information, a measure of statistical independence of random vectors. Two such approximations are the Kernel Generalized Variance and the Kernel Canonical Correlation, which have been shown to reach the highest performance among ICA methods. However, the computational effort necessary just for computing these measures is cubic in the sample size. Hence, optimizing them becomes even more computationally demanding, in terms of both space and time. Here, we propose two alternative novel measures based on randomized features of the samples - the Randomized Generalized Variance and the Randomized Canonical Correlation. The computational complexity of calculating the proposed alternatives is linear in the sample size, and they provide a controllable approximation of their kernel-based non-random versions. We also show that optimizing the proposed statistical properties yields a comparable separation error an order of magnitude faster than kernel-based measures.
[ "['Matan Sela' 'Ron Kimmel']", "Matan Sela and Ron Kimmel" ]
cs.AI cs.LG cs.LO
null
1609.06954
null
null
http://arxiv.org/pdf/1609.06954v2
2020-01-13T13:55:44Z
2016-09-21T08:17:40Z
Semiring Programming: A Declarative Framework for Generalized Sum Product Problems
To solve hard problems, AI relies on a variety of disciplines such as logic, probabilistic reasoning, machine learning and mathematical programming. Although it is widely accepted that solving real-world problems requires an integration amongst these, contemporary representation methodologies offer little support for this. In an attempt to alleviate this situation, we introduce a new declarative programming framework that provides abstractions of well-known problems such as SAT, Bayesian inference, generative models, and convex optimization. The semantics of programs is defined in terms of first-order structures with semiring labels, which allows us to freely combine and integrate problems from different AI disciplines.
[ "Vaishak Belle, Luc De Raedt", "['Vaishak Belle' 'Luc De Raedt']" ]
cs.LG stat.ML
null
1609.06957
null
null
http://arxiv.org/pdf/1609.06957v1
2016-09-21T09:35:56Z
2016-09-21T09:35:56Z
Early Warning System for Seismic Events in Coal Mines Using Machine Learning
This document describes an approach to the problem of predicting dangerous seismic events in active coal mines up to 8 hours in advance. It was developed as a part of the AAIA'16 Data Mining Challenge: Predicting Dangerous Seismic Events in Active Coal Mines. The solutions presented consist of ensembles of various predictive models trained on different sets of features. The best one achieved a winning score of 0.939 AUC.
[ "['Robert Bogucki' 'Jan Lasek' 'Jan Kanty Milczek' 'Michal Tadeusiak']", "Robert Bogucki, Jan Lasek, Jan Kanty Milczek, Michal Tadeusiak" ]
cs.CV cs.AI cs.LG stat.ML
null
1609.07042
null
null
http://arxiv.org/pdf/1609.07042v4
2016-11-14T04:10:09Z
2016-09-22T15:59:38Z
Pose-Selective Max Pooling for Measuring Similarity
In this paper, we deal with two challenges for measuring the similarity of the subject identities in practical video-based face recognition - the variation of the head pose in uncontrolled environments and the computational expense of processing videos. Since the frame-wise feature mean is unable to characterize the pose diversity among frames, we define and preserve the overall pose diversity and closeness in a video. Then, identity will be the only source of variation across videos since the pose varies even within a single video. Instead of simply using all the frames, we select those faces whose pose point is closest to the centroid of the K-means cluster containing that pose point. Then, we represent a video as a bag of frame-wise deep face features while the number of features has been reduced from hundreds to K. Since the video representation can well represent the identity, now we measure the subject similarity between two videos as the max correlation among all possible pairs in the two bags of features. On the official 5,000 video-pairs of the YouTube Face dataset for face verification, our algorithm achieves a comparable performance with VGG-face that averages over deep features of all frames. Other vision tasks can also benefit from the generic idea of employing geometric cues to improve the descriptiveness of deep features.
[ "['Xiang Xiang' 'Trac D. Tran']", "Xiang Xiang and Trac D. Tran" ]
cs.NE cs.LG
null
1609.07061
null
null
http://arxiv.org/pdf/1609.07061v1
2016-09-22T16:48:03Z
2016-09-22T16:48:03Z
Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations
We introduce a method to train Quantized Neural Networks (QNNs) --- neural networks with extremely low precision (e.g., 1-bit) weights and activations at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover, we quantize the parameter gradients to 6 bits as well, which enables gradient computation using only bit-wise operations. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved accuracy comparable to their 32-bit counterparts using only 4 bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.
[ "['Itay Hubara' 'Matthieu Courbariaux' 'Daniel Soudry' 'Ran El-Yaniv'\n 'Yoshua Bengio']", "Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv and\n Yoshua Bengio" ]
cs.LG cs.CG cs.CV
null
1609.07082
null
null
http://arxiv.org/pdf/1609.07082v2
2016-09-26T18:34:42Z
2016-09-22T17:41:03Z
Large Margin Nearest Neighbor Classification using Curved Mahalanobis Distances
We consider the supervised classification problem of machine learning in Cayley-Klein projective geometries: We show how to learn a curved Mahalanobis metric distance corresponding to either the hyperbolic geometry or the elliptic geometry using the Large Margin Nearest Neighbor (LMNN) framework. We report on our experimental results, and further consider the case of learning a mixed curved Mahalanobis distance. Besides, we show that the Cayley-Klein Voronoi diagrams are affine, and can be built from an equivalent (clipped) power diagrams, and that Cayley-Klein balls have Mahalanobis shapes with displaced centers.
[ "['Frank Nielsen' 'Boris Muzellec' 'Richard Nock']", "Frank Nielsen and Boris Muzellec and Richard Nock" ]
cs.LG stat.ML
null
1609.07087
null
null
null
null
null
(Bandit) Convex Optimization with Biased Noisy Gradient Oracles
Algorithms for bandit convex optimization and online learning often rely on constructing noisy gradient estimates, which are then used in appropriately adjusted first-order algorithms, replacing actual gradients. Depending on the properties of the function to be optimized and the nature of ``noise'' in the bandit feedback, the bias and variance of gradient estimates exhibit various tradeoffs. In this paper we propose a novel framework that replaces the specific gradient estimation methods with an abstract oracle. With the help of the new framework we unify previous works, reproducing their results in a clean and concise fashion, while, perhaps more importantly, the framework also allows us to formally show that to achieve the optimal root-$n$ rate either the algorithms that use existing gradient estimators, or the proof techniques used to analyze them have to go beyond what exists today.
[ "Xiaowei Hu, Prashanth L.A., Andr\\'as Gy\\\"orgy and Csaba Szepesv\\'ari" ]
cs.LG cs.RO
null
1609.07088
null
null
http://arxiv.org/pdf/1609.07088v1
2016-09-22T17:59:25Z
2016-09-22T17:59:25Z
Learning Modular Neural Network Policies for Multi-Task and Multi-Robot Transfer
Reinforcement learning (RL) can automate a wide variety of robotic skills, but learning each new skill requires considerable real-world data collection and manual representation engineering to design policy classes or features. Using deep reinforcement learning to train general purpose neural network policies alleviates some of the burden of manual representation engineering by using expressive policy classes, but exacerbates the challenge of data collection, since such methods tend to be less efficient than RL with low-dimensional, hand-designed representations. Transfer learning can mitigate this problem by enabling us to transfer information from one skill to another and even from one robot to another. We show that neural network policies can be decomposed into "task-specific" and "robot-specific" modules, where the task-specific modules are shared across robots, and the robot-specific modules are shared across all tasks on that robot. This allows for sharing task information, such as perception, between robots and sharing robot information, such as dynamics and kinematics, between tasks. We exploit this decomposition to train mix-and-match modules that can solve new robot-task combinations that were not seen during training. Using a novel neural network architecture, we demonstrate the effectiveness of our transfer method for enabling zero-shot generalization with a variety of robots and tasks in simulation for both visual and non-visual tasks.
[ "['Coline Devin' 'Abhishek Gupta' 'Trevor Darrell' 'Pieter Abbeel'\n 'Sergey Levine']", "Coline Devin, Abhishek Gupta, Trevor Darrell, Pieter Abbeel, Sergey\n Levine" ]
cs.LG cs.CV cs.NE stat.ML
null
1609.07093
null
null
http://arxiv.org/pdf/1609.07093v3
2017-02-06T18:46:50Z
2016-09-22T18:07:56Z
Neural Photo Editing with Introspective Adversarial Networks
The increasingly photorealistic sample quality of generative image models suggests their feasibility in applications beyond image generation. We present the Neural Photo Editor, an interface that leverages the power of generative neural networks to make large, semantically coherent changes to existing images. To tackle the challenge of achieving accurate reconstructions without loss of feature quality, we introduce the Introspective Adversarial Network, a novel hybridization of the VAE and GAN. Our model efficiently captures long-range dependencies through use of a computational block based on weight-shared dilated convolutions, and improves generalization performance with Orthogonal Regularization, a novel weight regularization method. We validate our contributions on CelebA, SVHN, and CIFAR-100, and produce samples and reconstructions with high visual fidelity.
[ "Andrew Brock, Theodore Lim, J.M. Ritchie, Nick Weston", "['Andrew Brock' 'Theodore Lim' 'J. M. Ritchie' 'Nick Weston']" ]
cs.LG
null
1609.07132
null
null
http://arxiv.org/pdf/1609.07132v1
2016-09-22T19:57:08Z
2016-09-22T19:57:08Z
A Fully Convolutional Neural Network for Speech Enhancement
In hearing aids, the presence of babble noise degrades hearing intelligibility of human speech greatly. However, removing the babble without creating artifacts in human speech is a challenging task in a low SNR environment. Here, we sought to solve the problem by finding a `mapping' between noisy speech spectra and clean speech spectra via supervised learning. Specifically, we propose using fully convolutional neural networks, which have fewer parameters than fully connected networks. The proposed network, Redundant Convolutional Encoder Decoder (R-CED), demonstrates that a convolutional network can be 12 times smaller than a recurrent network and yet achieve better performance, which shows its applicability to an embedded system: the hearing aids.
[ "['Se Rim Park' 'Jinwon Lee']", "Se Rim Park, Jinwon Lee" ]
cs.LG math.OC
null
1609.07152
null
null
http://arxiv.org/pdf/1609.07152v3
2017-06-14T17:59:12Z
2016-09-22T20:10:57Z
Input Convex Neural Networks
This paper presents the input convex neural network architecture. These are scalar-valued (potentially deep) neural networks with constraints on the network parameters such that the output of the network is a convex function of (some of) the inputs. The networks allow for efficient inference via optimization over some inputs to the network given others, and can be applied to settings including structured prediction, data imputation, reinforcement learning, and others. In this paper we lay the basic groundwork for these models, proposing methods for inference, optimization and learning, and analyze their representational power. We show that many existing neural network architectures can be made input-convex with a minor modification, and develop specialized optimization algorithms tailored to this setting. Finally, we highlight the performance of the methods on multi-label prediction, image completion, and reinforcement learning problems, where we show improvement over the existing state of the art in many cases.
[ "Brandon Amos, Lei Xu, J. Zico Kolter", "['Brandon Amos' 'Lei Xu' 'J. Zico Kolter']" ]
cs.LG cs.SI stat.ML
null
1609.07200
null
null
http://arxiv.org/pdf/1609.07200v1
2016-09-23T01:16:46Z
2016-09-23T01:16:46Z
Multilayer Spectral Graph Clustering via Convex Layer Aggregation
Multilayer graphs are commonly used for representing different relations between entities and handling heterogeneous data processing tasks. New challenges arise in multilayer graph clustering for assigning clusters to a common multilayer node set and for combining information from each layer. This paper presents a theoretical framework for multilayer spectral graph clustering of the nodes via convex layer aggregation. Under a novel multilayer signal plus noise model, we provide a phase transition analysis that establishes the existence of a critical value on the noise level that permits reliable cluster separation. The analysis also specifies analytical upper and lower bounds on the critical value, where the bounds become exact when the clusters have identical sizes. Numerical experiments on synthetic multilayer graphs are conducted to validate the phase transition analysis and study the effect of layer weights and noise levels on clustering reliability.
[ "Pin-Yu Chen and Alfred O. Hero III", "['Pin-Yu Chen' 'Alfred O. Hero III']" ]
cs.LG cs.NE
null
1609.07215
null
null
http://arxiv.org/pdf/1609.07215v1
2016-09-23T03:09:24Z
2016-09-23T03:09:24Z
A Novel Progressive Multi-label Classifier for Class-incremental Data
In this paper, a progressive learning algorithm for multi-label classification to learn new labels while retaining the knowledge of previous labels is designed. New output neurons corresponding to new labels are added and the neural network connections and parameters are automatically restructured as if the label had been introduced from the beginning. This work is the first of its kind on multi-label classification for class-incremental learning. It is useful for real-world applications such as robotics where streaming data are available and the number of labels is often unknown. Based on the Extreme Learning Machine framework, a novel universal classifier with plug-and-play capabilities for progressive multi-label classification is developed. Experimental results on various benchmark synthetic and real datasets validate the efficiency and effectiveness of our proposed algorithm.
[ "Mihika Dave, Sahil Tapiawala, Meng Joo Er, Rajasekar Venkatesan", "['Mihika Dave' 'Sahil Tapiawala' 'Meng Joo Er' 'Rajasekar Venkatesan']" ]
cs.LG stat.ML
null
1609.07257
null
null
http://arxiv.org/pdf/1609.07257v3
2017-03-07T06:38:36Z
2016-09-23T07:40:12Z
Using Neural Network Formalism to Solve Multiple-Instance Problems
Many objects in the real world are difficult to describe by a single numerical vector of a fixed length, whereas describing them by a set of vectors is more natural. Therefore, Multiple Instance Learning (MIL) techniques have been constantly gaining importance in recent years. The MIL formalism represents each object (sample) by a set (bag) of feature vectors (instances) of fixed length, where knowledge about objects (e.g., class label) is available at the bag level but not necessarily at the instance level. Many standard tools, including supervised classifiers, have already been adapted to the MIL setting since the problem was formalized in the late nineties. In this work we propose a neural network (NN) based formalism that intuitively bridges the gap between the MIL problem definition and the vast existing knowledge base of standard models and classifiers. We show that the proposed NN formalism is effectively optimizable by a modified back-propagation algorithm and can reveal unknown patterns inside bags. Comparison to eight types of classifiers from the prior art on a set of 14 publicly available benchmark datasets confirms the advantages and accuracy of the proposed solution.
[ "Tomas Pevny and Petr Somol", "['Tomas Pevny' 'Petr Somol']" ]
stat.ML cs.LG
null
1609.07272
null
null
http://arxiv.org/pdf/1609.07272v1
2016-09-23T08:51:14Z
2016-09-23T08:51:14Z
Constraint-Based Clustering Selection
Semi-supervised clustering methods incorporate a limited amount of supervision into the clustering process. Typically, this supervision is provided by the user in the form of pairwise constraints. Existing methods use such constraints in one of the following ways: they adapt their clustering procedure, their similarity metric, or both. All of these approaches operate within the scope of individual clustering algorithms. In contrast, we propose to use constraints to choose between clusterings generated by very different unsupervised clustering algorithms, run with different parameter settings. We empirically show that this simple approach often outperforms existing semi-supervised clustering methods.
[ "['Toon Van Craenendonck' 'Hendrik Blockeel']", "Toon Van Craenendonck, Hendrik Blockeel" ]
cs.SD cs.AI cs.LG
null
1609.07384
null
null
http://arxiv.org/pdf/1609.07384v2
2017-02-13T01:09:27Z
2016-09-23T14:35:17Z
Discovering Sound Concepts and Acoustic Relations In Text
In this paper we describe approaches for discovering acoustic concepts and relations in text. The first major goal is to be able to identify text phrases which contain a notion of audibility and can be termed as a sound or an acoustic concept. We also propose a method to define an acoustic scene through a set of sound concepts. We use pattern matching and parts of speech tags to generate sound concepts from large scale text corpora. We use dependency parsing and LSTM recurrent neural network to predict a set of sound concepts for a given acoustic scene. These methods are not only helpful in creating an acoustic knowledge base but in the future can also directly help acoustic event and scene detection research.
[ "['Anurag Kumar' 'Bhiksha Raj' 'Ndapandula Nakashole']", "Anurag Kumar, Bhiksha Raj, Ndapandula Nakashole" ]
q-fin.CP cs.LG q-fin.PR
null
1609.07472
null
null
http://arxiv.org/pdf/1609.07472v3
2020-03-27T09:25:28Z
2016-09-14T20:56:06Z
Gated Neural Networks for Option Pricing: Rationality by Design
We propose a neural network approach to price European call options that significantly outperforms some existing pricing models and comes with guarantees that its predictions are economically reasonable. To achieve this, we introduce a class of gated neural networks that automatically learn to divide-and-conquer the problem space for robust and accurate pricing. We then derive instantiations of these networks that are 'rational by design' in terms of naturally encoding a valid call option surface that enforces no-arbitrage principles. This integration of human insight within data-driven learning provides significantly better generalisation in pricing performance due to the encoded inductive bias in the learning, guarantees sanity in the model's predictions, and provides econometrically useful byproducts such as the risk-neutral density.
[ "Yongxin Yang, Yu Zheng, Timothy M. Hospedales", "['Yongxin Yang' 'Yu Zheng' 'Timothy M. Hospedales']" ]
math.OC cs.LG stat.ML
null
1609.07478
null
null
http://arxiv.org/pdf/1609.07478v1
2016-09-23T19:59:50Z
2016-09-23T19:59:50Z
Screening Rules for Convex Problems
We propose a new framework for deriving screening rules for convex optimization problems. Our approach covers a large class of constrained and penalized optimization formulations, and works in two steps. First, given any approximate point, the structure of the objective function and the duality gap is used to gather information on the optimal solution. In the second step, this information is used to produce screening rules, i.e. safely identifying unimportant weight variables of the optimal solution. Our general framework leads to a large variety of useful existing as well as new screening rules for many applications. For example, we provide new screening rules for general simplex and $L_1$-constrained problems, Elastic Net, squared-loss Support Vector Machines, minimum enclosing ball, as well as structured norm regularized problems, such as group lasso.
[ "Anant Raj, Jakob Olbrich, Bernd G\\\"artner, Bernhard Sch\\\"olkopf,\n Martin Jaggi", "['Anant Raj' 'Jakob Olbrich' 'Bernd Gärtner' 'Bernhard Schölkopf'\n 'Martin Jaggi']" ]
cs.CV cs.LG
null
1609.07495
null
null
http://arxiv.org/pdf/1609.07495v1
2016-09-23T20:00:23Z
2016-09-23T20:00:23Z
A Rotation Invariant Latent Factor Model for Moveme Discovery from Static Poses
We tackle the problem of learning a rotation invariant latent factor model when the training data is comprised of lower-dimensional projections of the original feature space. The main goal is the discovery of a set of 3-D bases poses that can characterize the manifold of primitive human motions, or movemes, from a training set of 2-D projected poses obtained from still images taken at various camera angles. The proposed technique for basis discovery is data-driven rather than hand-designed. The learned representation is rotation invariant, and can reconstruct any training instance from multiple viewing angles. We apply our method to modeling human poses in sports (via the Leeds Sports Dataset), and demonstrate the effectiveness of the learned bases in a range of applications such as activity classification, inference of dynamics from a single frame, and synthetic representation of movements.
[ "Matteo Ruggero Ronchi, Joon Sik Kim and Yisong Yue", "['Matteo Ruggero Ronchi' 'Joon Sik Kim' 'Yisong Yue']" ]
stat.ML cs.AI cs.LG
null
1609.07521
null
null
http://arxiv.org/pdf/1609.07521v1
2016-09-23T21:18:31Z
2016-09-23T21:18:31Z
Fast Learning of Clusters and Topics via Sparse Posteriors
Mixture models and topic models generate each observation from a single cluster, but standard variational posteriors for each observation assign positive probability to all possible clusters. This requires dense storage and runtime costs that scale with the total number of clusters, even though typically only a few clusters have significant posterior mass for any data point. We propose a constrained family of sparse variational distributions that allow at most $L$ non-zero entries, where the tunable threshold $L$ trades off speed for accuracy. Previous sparse approximations have used hard assignments ($L=1$), but we find that moderate values of $L>1$ provide superior performance. Our approach easily integrates with stochastic or incremental optimization algorithms to scale to millions of examples. Experiments training mixture models of image patches and topic models for news articles show that our approach produces better-quality models in far less time than baseline methods.
[ "Michael C. Hughes and Erik B. Sudderth", "['Michael C. Hughes' 'Erik B. Sudderth']" ]
math.OC cs.LG cs.MA cs.SI stat.ML
null
1609.07537
null
null
http://arxiv.org/pdf/1609.07537v1
2016-09-23T23:12:06Z
2016-09-23T23:12:06Z
A Tutorial on Distributed (Non-Bayesian) Learning: Problem, Algorithms and Results
We overview some results on distributed learning with a focus on a family of recently proposed algorithms known as non-Bayesian social learning. We consider different approaches to the distributed learning problem and its algorithmic solutions for the case of finitely many hypotheses. The original centralized problem is discussed first, followed by a generalization to the distributed setting. The results on convergence and convergence rate are presented for both asymptotic and finite-time regimes. Various extensions are discussed, such as those dealing with directed time-varying networks, Nesterov's acceleration technique, and continuum sets of hypotheses.
[ "['Angelia Nedić' 'Alex Olshevsky' 'César A. Uribe']", "Angelia Nedi\\'c, Alex Olshevsky and C\\'esar A. Uribe" ]
cs.LG
null
1609.07540
null
null
http://arxiv.org/pdf/1609.07540v1
2016-09-24T00:03:49Z
2016-09-24T00:03:49Z
Derivative Delay Embedding: Online Modeling of Streaming Time Series
The staggering amount of streaming time series coming from the real world calls for more efficient and effective online modeling solutions. For time series modeling, most existing works make unrealistic assumptions such as the input data being of fixed length or well aligned, which requires extra effort on segmentation or normalization of the raw streaming data. Although some works claim their approaches are invariant to data length and misalignment, they are too time-consuming to model a streaming time series in an online manner. We propose a novel and more practical online modeling and classification scheme, DDE-MGM, which makes no assumptions on the time series while maintaining high efficiency and state-of-the-art performance. The derivative delay embedding (DDE) is developed to incrementally transform time series to the embedding space, where the intrinsic characteristics of the data are preserved as recursive patterns regardless of stream length and misalignment. Then, a non-parametric Markov geographic model (MGM) is proposed to both model and classify the pattern in an online manner. Experimental results demonstrate the effectiveness and superior classification accuracy of the proposed DDE-MGM in an online setting as compared to the state-of-the-art.
[ "Zhifei Zhang, Yang Song, Wei Wang, and Hairong Qi", "['Zhifei Zhang' 'Yang Song' 'Wei Wang' 'Hairong Qi']" ]
cs.RO cs.AI cs.LG stat.ML
null
1609.07560
null
null
http://arxiv.org/pdf/1609.07560v1
2016-09-24T02:56:25Z
2016-09-24T02:56:25Z
Informative Planning and Online Learning with Sparse Gaussian Processes
A big challenge in environmental monitoring is the spatiotemporal variation of the phenomena to be observed. To enable persistent sensing and estimation in such a setting, it is beneficial to have a time-varying underlying environmental model. Here we present a planning and learning method that enables an autonomous marine vehicle to perform persistent ocean monitoring tasks by learning and refining an environmental model. To alleviate the computational bottleneck caused by the large-scale data accumulated, we propose a framework that iterates between a planning component aimed at collecting the most information-rich data, and a sparse Gaussian Process learning component where the environmental model and hyperparameters are learned online by taking advantage of only a subset of data that provides the greatest contribution. Our simulations with ground-truth ocean data show that the proposed method is both accurate and efficient.
[ "Kai-Chieh Ma, Lantao Liu, Gaurav S. Sukhatme", "['Kai-Chieh Ma' 'Lantao Liu' 'Gaurav S. Sukhatme']" ]
stat.ML cs.LG
null
1609.07574
null
null
http://arxiv.org/pdf/1609.07574v4
2018-01-01T04:00:13Z
2016-09-24T06:02:24Z
Dynamic Pricing in High-dimensions
We study the pricing problem faced by a firm that sells a large number of products, described via a wide range of features, to customers that arrive over time. Customers independently make purchasing decisions according to a general choice model that includes products features and customers' characteristics, encoded as $d$-dimensional numerical vectors, as well as the price offered. The parameters of the choice model are a priori unknown to the firm, but can be learned as the (binary-valued) sales data accrues over time. The firm's objective is to minimize the regret, i.e., the expected revenue loss against a clairvoyant policy that knows the parameters of the choice model in advance, and always offers the revenue-maximizing price. This setting is motivated in part by the prevalence of online marketplaces that allow for real-time pricing. We assume a structured choice model, parameters of which depend on $s_0$ out of the $d$ product features. We propose a dynamic policy, called Regularized Maximum Likelihood Pricing (RMLP) that leverages the (sparsity) structure of the high-dimensional model and obtains a logarithmic regret in $T$. More specifically, the regret of our algorithm is of $O(s_0 \log d \cdot \log T)$. Furthermore, we show that no policy can obtain regret better than $O(s_0 (\log d + \log T))$.
[ "Adel Javanmard and Hamid Nazerzadeh", "['Adel Javanmard' 'Hamid Nazerzadeh']" ]
cs.LG
null
1609.07672
null
null
http://arxiv.org/pdf/1609.07672v2
2017-03-30T04:57:49Z
2016-09-24T20:45:37Z
Information-Theoretic Methods for Planning and Learning in Partially Observable Markov Decision Processes
Bounded agents are limited by intrinsic constraints on their ability to process information that is available in their sensors and memory and choose actions and memory updates. In this dissertation, we model these constraints as information-rate constraints on communication channels connecting these various internal components of the agent. We make four major contributions detailed below and many smaller contributions detailed in each section. First, we formulate the problem of optimizing the agent under both extrinsic and intrinsic constraints and develop the main tools for solving it. Second, we identify another reason for the challenging convergence properties of the optimization algorithm, which is the bifurcation structure of the update operator near phase transitions. Third, we study the special case of linear-Gaussian dynamics and quadratic cost (LQG), where the optimal solution has a particularly simple and solvable form. Fourth, we explore the learning task, where the model of the world dynamics is unknown and sample-based updates are used instead.
[ "['Roy Fox']", "Roy Fox" ]
cs.NE cs.AI cs.LG
10.1371/journal.pone.0170388
1609.07706
null
null
http://arxiv.org/abs/1609.07706v2
2019-02-18T05:38:53Z
2016-09-25T06:44:42Z
Learning by Stimulation Avoidance: A Principle to Control Spiking Neural Networks Dynamics
Learning based on networks of real neurons, and by extension biologically inspired models of neural networks, has yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle allowing to steer the dynamics of a biologically inspired neural network. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle "Learning by Stimulation Avoidance" (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom [1]. We examine the mechanism's basic dynamics in a reduced network, and demonstrate how it scales up to a network of 100 neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. The surge in popularity of artificial neural networks is mostly directed to disembodied models of neurons with biologically irrelevant dynamics: to the authors' knowledge, this is the first work demonstrating sensory-motor learning with random spiking networks through pure Hebbian learning.
[ "Lana Sinapayen, Atsushi Masumori, Takashi Ikegami", "['Lana Sinapayen' 'Atsushi Masumori' 'Takashi Ikegami']" ]
cs.NE cs.LG
null
1609.07724
null
null
http://arxiv.org/pdf/1609.07724v1
2016-09-25T10:18:19Z
2016-09-25T10:18:19Z
The RNN-ELM Classifier
In this paper we examine learning methods combining the Random Neural Network, a biologically inspired neural network, and the Extreme Learning Machine that achieve state-of-the-art classification performance while requiring much shorter training time. The Random Neural Network is an integrate-and-fire computational model of a neural network whose mathematical structure permits the efficient analysis of large ensembles of neurons. An activation function is derived from the RNN and used in an Extreme Learning Machine. We compare the performance of this combination against the ELM with various activation functions, reduce the input dimensionality via PCA, and compare the resulting performance against autoencoder-based versions of the RNN-ELM.
[ "Athanasios Vlontzos", "['Athanasios Vlontzos']" ]
cs.NE cs.LG
null
1609.07750
null
null
http://arxiv.org/pdf/1609.07750v1
2016-09-25T14:30:33Z
2016-09-25T14:30:33Z
Accurate and Efficient Hyperbolic Tangent Activation Function on FPGA using the DCT Interpolation Filter
Implementing an accurate and fast activation function with low cost is a crucial aspect of implementing Deep Neural Networks (DNNs) on FPGAs. We propose a high-accuracy approximation approach for the hyperbolic tangent activation function of artificial neurons in DNNs. It is based on the Discrete Cosine Transform Interpolation Filter (DCTIF). The proposed architecture combines simple arithmetic operations on stored samples of the hyperbolic tangent function and on input data. The proposed DCTIF implementation achieves two orders of magnitude greater precision than previous work while using the same or fewer computational resources. Various combinations of DCTIF parameters can be chosen to trade off the accuracy and complexity of the hyperbolic tangent function. In one case, the proposed architecture approximates the hyperbolic tangent activation function with a 10E-5 maximum error while requiring only 1.52 Kbits of memory and 57 LUTs of a Virtex-7 FPGA. We also discuss how the activation function accuracy affects the performance of DNNs in terms of their training and testing accuracies. We show that a high-accuracy approximation can be necessary in order to maintain the same DNN training and testing performances realized by the exact function.
[ "Ahmed M. Abdelsalam, J.M. Pierre Langlois and F. Cheriet", "['Ahmed M. Abdelsalam' 'J. M. Pierre Langlois' 'F. Cheriet']" ]
cs.CR cs.LG
null
1609.07770
null
null
http://arxiv.org/pdf/1609.07770v1
2016-09-25T16:43:44Z
2016-09-25T16:43:44Z
Random Forest for Malware Classification
The challenge in countering malware activities involves the correct identification and classification of different malware variants. Many malware variants incorporate code-obfuscation methods that alter their code signatures, effectively defeating anti-malware detection techniques that rely on static methods and signature databases. In this study, we use an approach of converting a malware binary into an image and apply a Random Forest to classify various malware families. The resulting accuracy of 0.9562 demonstrates the effectiveness of the method in detecting malware.
[ "Felan Carlo C. Garcia, Felix P. Muga II" ]
cs.CV cs.LG
null
1609.07916
null
null
http://arxiv.org/pdf/1609.07916v3
2017-06-16T15:12:49Z
2016-09-26T10:33:13Z
Deep Structured Features for Semantic Segmentation
We propose a highly structured neural network architecture for semantic segmentation with an extremely small model size, suitable for low-power embedded and mobile platforms. Specifically, our architecture combines i) a Haar wavelet-based tree-like convolutional neural network (CNN), ii) a random layer realizing a radial basis function kernel approximation, and iii) a linear classifier. While stages i) and ii) are completely pre-specified, only the linear classifier is learned from data. We apply the proposed architecture to outdoor scene and aerial image semantic segmentation and show that the accuracy of our architecture is competitive with conventional pixel classification CNNs. Furthermore, we demonstrate that the proposed architecture is data efficient in the sense of matching the accuracy of pixel classification CNNs when trained on a much smaller data set.
[ "['Michael Tschannen' 'Lukas Cavigelli' 'Fabian Mentzer' 'Thomas Wiatowski'\n 'Luca Benini']", "Michael Tschannen, Lukas Cavigelli, Fabian Mentzer, Thomas Wiatowski,\n Luca Benini" ]
cs.RO cs.LG
10.1109/DEVLRN.2015.7346156
1609.08009
null
null
http://arxiv.org/abs/1609.08009v1
2016-09-26T15:05:08Z
2016-09-26T15:05:08Z
Grounding object perception in a naive agent's sensorimotor experience
Artificial object perception usually relies on a priori defined models and feature extraction algorithms. We study how the concept of an object can be grounded in the sensorimotor experience of a naive agent. Without any knowledge about itself or the world it is immersed in, the agent explores its sensorimotor space and identifies objects as consistent networks of sensorimotor transitions, independent of their context. A fundamental drive for prediction is assumed to explain the emergence of such networks from a developmental standpoint. An algorithm is proposed and tested to illustrate the approach.
[ "Alban Laflaqui\\`ere and Nikolas Hemion", "['Alban Laflaquière' 'Nikolas Hemion']" ]
cs.LG stat.ML
null
1609.08017
null
null
http://arxiv.org/pdf/1609.08017v3
2017-02-15T19:40:29Z
2016-09-26T15:14:05Z
Dropout with Expectation-linear Regularization
Dropout, a simple and effective way to train deep neural networks, has led to a number of impressive empirical successes and spawned many recent theoretical investigations. However, the gap between dropout's training and inference phases, introduced due to tractability considerations, has largely remained under-appreciated. In this work, we first formulate dropout as a tractable approximation of some latent variable model, leading to a clean view of parameter sharing and enabling further theoretical analysis. Then, we introduce (approximate) expectation-linear dropout neural networks, whose inference gap we are able to formally characterize. Algorithmically, we show that our proposed measure of the inference gap can be used to regularize the standard dropout training objective, resulting in an \emph{explicit} control of the gap. Our method is as simple and efficient as standard dropout. We further prove upper bounds on the loss in accuracy due to expectation-linearization and describe classes of input distributions that expectation-linearize easily. Experiments on three image classification benchmark datasets demonstrate that reducing the inference gap can indeed improve the performance consistently.
[ "Xuezhe Ma, Yingkai Gao, Zhiting Hu, Yaoliang Yu, Yuntian Deng, Eduard\n Hovy", "['Xuezhe Ma' 'Yingkai Gao' 'Zhiting Hu' 'Yaoliang Yu' 'Yuntian Deng'\n 'Eduard Hovy']" ]
cs.CL cs.AI cs.LG
null
1609.08144
null
null
http://arxiv.org/pdf/1609.08144v2
2016-10-08T19:10:41Z
2016-09-26T19:59:55Z
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional phrase-based translation systems. Unfortunately, NMT systems are known to be computationally expensive both in training and in translation inference. Also, most NMT systems have difficulty with rare words. These issues have hindered NMT's use in practical deployments and services, where both accuracy and speed are essential. In this work, we present GNMT, Google's Neural Machine Translation system, which attempts to address many of these issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder layers using attention and residual connections. To improve parallelism and therefore decrease training time, our attention mechanism connects the bottom layer of the decoder to the top layer of the encoder. To increase the final translation speed, we employ low-precision arithmetic during inference computations. To improve handling of rare words, we divide words into a limited set of common sub-word units ("wordpieces") for both input and output. This method provides a good balance between the flexibility of "character"-delimited models and the efficiency of "word"-delimited models, naturally handles translation of rare words, and ultimately improves the overall accuracy of the system. Our beam search technique employs a length-normalization procedure and uses a coverage penalty, which encourages generation of an output sentence that is most likely to cover all the words in the source sentence. On the WMT'14 English-to-French and English-to-German benchmarks, GNMT achieves results competitive with the state of the art. Using a human side-by-side evaluation on a set of isolated simple sentences, it reduces translation errors by an average of 60% compared to Google's phrase-based production system.
[ "['Yonghui Wu' 'Mike Schuster' 'Zhifeng Chen' 'Quoc V. Le'\n 'Mohammad Norouzi' 'Wolfgang Macherey' 'Maxim Krikun' 'Yuan Cao'\n 'Qin Gao' 'Klaus Macherey' 'Jeff Klingner' 'Apurva Shah' 'Melvin Johnson'\n 'Xiaobing Liu' 'Łukasz Kaiser' 'Stephan Gouws' 'Yoshikiyo Kato'\n 'Taku Kudo' 'Hideto Kazawa' 'Keith Stevens' 'George Kurian'\n 'Nishant Patil' 'Wei Wang' 'Cliff Young' 'Jason Smith' 'Jason Riesa'\n 'Alex Rudnick' 'Oriol Vinyals' 'Greg Corrado' 'Macduff Hughes'\n 'Jeffrey Dean']", "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi,\n Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff\n Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, {\\L}ukasz Kaiser,\n Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens,\n George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason\n Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey\n Dean" ]
cs.LG
null
1609.08151
null
null
http://arxiv.org/pdf/1609.08151v2
2016-09-29T11:02:29Z
2016-09-25T13:47:08Z
Nonnegative autoencoder with simplified random neural network
This paper proposes new nonnegative (shallow and multi-layer) autoencoders by combining the spiking Random Neural Network (RNN) model, the network architecture typically used in the deep-learning area, and a training technique inspired by nonnegative matrix factorization (NMF). The shallow autoencoder is a simplified RNN model, which is then stacked into a multi-layer architecture. The learning algorithm is based on the weight-update rules of NMF, subject to the nonnegative probability constraints of the RNN. The autoencoders equipped with this learning algorithm are tested on typical image datasets, including the MNIST, Yale face and CIFAR-10 datasets, as well as on 16 real-world datasets from different areas. The results obtained through these tests yield the desired high learning and recognition accuracy. Also, numerical simulations of the stochastic spiking behavior of this RNN autoencoder show that it can be implemented in a highly distributed manner.
[ "['Yonghua Yin' 'Erol Gelenbe']", "Yonghua Yin, Erol Gelenbe" ]
cs.CV cs.LG stat.ML
null
1609.08209
null
null
http://arxiv.org/pdf/1609.08209v1
2016-09-26T22:11:05Z
2016-09-26T22:11:05Z
Automatic Construction of a Recurrent Neural Network based Classifier for Vehicle Passage Detection
Recurrent Neural Networks (RNNs) are extensively used for time-series modeling and prediction. We propose an approach for the automatic construction of a binary classifier based on Long Short-Term Memory RNNs (LSTM-RNNs) for detecting the passage of a vehicle through a checkpoint. As input to the classifier we use multidimensional signals from various sensors installed at the checkpoint. The obtained results demonstrate that the previous approach of handcrafting a classifier, consisting of a set of deterministic rules, can be successfully replaced by automatic RNN training on appropriately labelled data.
[ "Evgeny Burnaev and Ivan Koptelov and German Novikov and Timur Khanipov", "['Evgeny Burnaev' 'Ivan Koptelov' 'German Novikov' 'Timur Khanipov']" ]
cs.CV cs.LG
null
1609.08221
null
null
http://arxiv.org/pdf/1609.08221v2
2017-01-09T03:23:43Z
2016-09-26T23:24:27Z
Simultaneous Low-rank Component and Graph Estimation for High-dimensional Graph Signals: Application to Brain Imaging
We propose an algorithm to uncover the intrinsic low-rank component of a high-dimensional, graph-smooth and grossly-corrupted dataset in the situation where the underlying graph is unknown. Based on a model with a low-rank component plus a sparse perturbation, and an initial graph estimate, our proposed algorithm simultaneously learns the low-rank component and refines the graph. Our evaluations using synthetic and real brain imaging data in unsupervised and supervised classification tasks demonstrate encouraging performance.
[ "['Rui Liu' 'Hossein Nejati' 'Seyed Hamid Safavi' 'Ngai-Man Cheung']", "Rui Liu, Hossein Nejati, Seyed Hamid Safavi, Ngai-Man Cheung" ]
cs.LG
null
1609.08281
null
null
http://arxiv.org/pdf/1609.08281v3
2017-09-06T17:53:44Z
2016-09-27T06:59:11Z
An Efficient Method for Robust Projection Matrix Design
Our objective is to efficiently design a robust projection matrix $\Phi$ for Compressive Sensing (CS) systems when applied to signals that are not exactly sparse. The optimal projection matrix is obtained mainly by minimizing the average coherence of the equivalent dictionary. In order to drop the requirement of the sparse representation error (SRE) for a set of training data as in [15], [16], we introduce a novel penalty function independent of any particular SRE matrix. Without requiring training data, we can efficiently design the robust projection matrix and apply it to most CS systems, such as a CS system for image processing with a conventional wavelet dictionary, in which the SRE matrix is generally not available. Simulation results demonstrate the efficiency and effectiveness of the proposed approach compared with state-of-the-art methods. In addition, we demonstrate experimentally with natural images that, under a similar compression rate, a CS system with a dictionary learned in high dimensions outperforms one learned in low dimensions in terms of reconstruction accuracy. This, together with the fact that our proposed method works efficiently in high dimensions, suggests that a CS system can potentially be implemented beyond the small patches used in sparsity-based image processing.
[ "['Tao Hong' 'Zhihui Zhu']", "Tao Hong and Zhihui Zhu" ]
cs.LG
null
1609.08286
null
null
http://arxiv.org/pdf/1609.08286v1
2016-09-27T07:10:16Z
2016-09-27T07:10:16Z
Online Unsupervised Multi-view Feature Selection
In the era of big data, it is becoming common to have data with multiple modalities or coming from multiple sources, known as "multi-view data". Since multi-view data are usually unlabeled and come from high-dimensional spaces (such as language vocabularies), unsupervised multi-view feature selection is crucial to many applications. However, it is nontrivial due to the following challenges. First, there may be too many instances, or the feature dimensionality may be too large; thus, the data may not fit in memory. How can useful features be selected with limited memory space? Second, how can features be selected from streaming data while handling concept drift? Third, how can the consistent and complementary information from different views be leveraged to improve feature selection when the data are too big or arrive as streams? To the best of our knowledge, none of the previous works solves all of these challenges simultaneously. In this paper, we propose Online unsupervised Multi-View Feature Selection (OMVFS), which deals with large-scale/streaming multi-view data in an online fashion. OMVFS embeds unsupervised feature selection into a clustering algorithm via NMF with sparse learning. It further incorporates graph regularization to preserve the local structure information and help select discriminative features. Instead of storing all the historical data, OMVFS processes the multi-view data chunk by chunk and aggregates all the necessary information into several small matrices. By using this buffering technique, the proposed OMVFS can reduce the computational and storage cost while taking advantage of the structure information. Furthermore, OMVFS can capture concept drifts in the data streams. Extensive experiments on four real-world datasets show the effectiveness and efficiency of the proposed OMVFS method. More importantly, OMVFS is about 100 times faster than the off-line methods.
[ "Weixiang Shao, Lifang He, Chun-Ta Lu, Xiaokai Wei, Philip S. Yu", "['Weixiang Shao' 'Lifang He' 'Chun-Ta Lu' 'Xiaokai Wei' 'Philip S. Yu']" ]
cs.IT cs.LG math.IT
null
1609.08312
null
null
http://arxiv.org/pdf/1609.08312v2
2016-10-05T14:01:49Z
2016-09-27T08:29:39Z
Duality between Feature Selection and Data Clustering
The feature-selection problem is formulated from an information-theoretic perspective. We show that the problem can be efficiently solved by an extension of the recently proposed info-clustering paradigm. This reveals the fundamental duality between feature selection and data clustering, which is a consequence of the more general duality between the principal partition and the principal lattice of partitions in combinatorial optimization.
[ "['Chung Chan' 'Ali Al-Bashabsheh' 'Qiaoqiao Zhou' 'Tie Liu']", "Chung Chan, Ali Al-Bashabsheh, Qiaoqiao Zhou, Tie Liu" ]