categories
string
doi
string
id
string
year
float64
venue
string
link
string
updated
string
published
string
title
string
abstract
string
authors
list
null
null
1606.03802
null
null
http://arxiv.org/abs/1606.03802v11
2022-02-21T20:21:30Z
2016-06-13T03:46:17Z
Open-Set Support Vector Machines
Often, when dealing with real-world recognition problems, we do not need, and often cannot have, knowledge of the entire set of possible classes that might appear during operational testing. In such cases, we need robust classification methods able to deal with the "unknown" and properly reject samples belonging to classes never seen during training. Nevertheless, existing classifiers to date were mostly developed for the closed-set scenario, i.e., the classification setup in which it is assumed that all test samples belong to one of the classes with which the classifier was trained. In the open-set scenario, however, a test sample can belong to none of the known classes and the classifier must properly reject it by classifying it as unknown. In this work, we build upon the well-known Support Vector Machines (SVM) classifier and introduce the Open-Set Support Vector Machines (OSSVM), which is suitable for recognition in open-set setups. OSSVM balances the empirical risk and the risk of the unknown and ensures that the region of the feature space in which a test sample would be classified as known (i.e., as one of the known classes) is always bounded, guaranteeing a finite risk of the unknown. We also highlight the properties of the SVM classifier related to the open-set scenario, and provide necessary and sufficient conditions for an RBF SVM to have bounded open-space risk.
[ "Pedro Ribeiro Mendes Júnior, Terrance E. Boult, Jacques Wainer, Anderson Rocha" ]
math.OC cs.LG stat.ML
null
1606.03841
null
null
http://arxiv.org/pdf/1606.03841v3
2017-02-13T01:27:29Z
2016-06-13T07:21:31Z
Efficient Learning with a Family of Nonconvex Regularizers by Redistributing Nonconvexity
The use of convex regularizers allows for easy optimization, though they often produce biased estimation and inferior prediction performance. Recently, nonconvex regularizers have attracted much attention and have outperformed convex ones. However, the resultant optimization problem is much harder. In this paper, for a large class of nonconvex regularizers, we propose to move the nonconvexity from the regularizer to the loss. The nonconvex regularizer is then transformed into a familiar convex regularizer, while the resultant loss function can still be guaranteed to be smooth. Learning with the convexified regularizer can be performed by existing efficient algorithms originally designed for convex regularizers (such as the proximal algorithm, the Frank-Wolfe algorithm, the alternating direction method of multipliers and stochastic gradient descent). Extensions are made for the cases where the convexified regularizer does not have a closed-form proximal step, and where the loss function is nonconvex and nonsmooth. Extensive experiments on a variety of machine learning application scenarios show that optimizing the transformed problem is much faster than running the state-of-the-art on the original problem.
[ "Quanming Yao, James T. Kwok" ]
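The redistribution trick this abstract describes can be sketched for one concrete penalty. The snippet below is an illustrative reading, not the paper's algorithm: for the capped-L1 penalty lam*min(|x_i|, theta), it splits off a convex L1 term, folds the concave remainder into the loss (differentiable almost everywhere), and runs proximal gradient descent whose proximal step is plain soft-thresholding. Step sizes and problem sizes are assumptions for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (elementwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_capped_l1(A, b, lam, theta, eta, iters=500):
    """Minimise 0.5*||Ax - b||^2 + lam * sum_i min(|x_i|, theta) by writing
    the capped-L1 penalty as lam*|x| + (concave remainder) and folding the
    remainder into the loss, so the proximal step stays soft-thresholding."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad_loss = A.T @ (A @ x - b)
        # Gradient (a.e.) of the redistributed concave part lam*(min(|x|,theta) - |x|):
        grad_concave = -lam * np.sign(x) * (np.abs(x) > theta)
        x = soft_threshold(x - eta * (grad_loss + grad_concave), eta * lam)
    return x
```

On a noiseless sparse recovery problem this converges near the true sparse vector, since for coordinates above theta the concave gradient cancels the L1 shrinkage, removing the usual convex-penalty bias.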
cs.LG
null
1606.03858
null
null
http://arxiv.org/pdf/1606.03858v2
2016-06-14T08:02:03Z
2016-06-13T08:55:20Z
Sorting out typicality with the inverse moment matrix SOS polynomial
We study a surprising phenomenon related to the representation of a cloud of data points using polynomials. We start with the previously unnoticed empirical observation that, given a collection (a cloud) of data points, the sublevel sets of a certain distinguished polynomial capture the shape of the cloud very accurately. This distinguished polynomial is a sum-of-squares (SOS) derived in a simple manner from the inverse of the empirical moment matrix. In fact, this SOS polynomial is directly related to orthogonal polynomials and the Christoffel function. This allows us to generalize and interpret extremality properties of orthogonal polynomials and to provide a mathematical rationale for the observed phenomenon. Among diverse potential applications, we illustrate the relevance of our results on a network intrusion detection task for which we obtain performance similar to that of existing dedicated methods reported in the literature.
[ "['Jean-Bernard Lasserre' 'Edouard Pauwels']", "Jean-Bernard Lasserre, Edouard Pauwels" ]
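A minimal version of the distinguished polynomial is easy to compute. The sketch below is an assumption-laden illustration (2-D points, degree-2 monomials, a small ridge term for numerical stability): it builds the empirical moment matrix M and evaluates q(x) = v(x)^T M^{-1} v(x), whose small values mark "typical" points of the cloud.

```python
import numpy as np

def monomials_deg2(X):
    # Monomial basis (1, x1, x2, x1^2, x1*x2, x2^2) for 2-D points.
    x1, x2 = X[:, 0], X[:, 1]
    return np.stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2], axis=1)

def sos_score(X, queries, reg=1e-8):
    """Inverse-moment-matrix SOS polynomial q(x) = v(x)^T M^{-1} v(x),
    where M is the empirical moment matrix of the cloud X."""
    V = monomials_deg2(X)
    M = V.T @ V / len(X) + reg * np.eye(V.shape[1])
    Vq = monomials_deg2(queries)
    return np.einsum('ij,jk,ik->i', Vq, np.linalg.inv(M), Vq)
```

A known sanity check: the average of q over the sample itself equals the dimension of the monomial basis (here 6), while points far outside the cloud score much higher.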
stat.ML cs.AI cs.LG
null
1606.03860
null
null
http://arxiv.org/pdf/1606.03860v3
2018-06-19T16:44:54Z
2016-06-13T08:56:35Z
Robust Probabilistic Modeling with Bayesian Data Reweighting
Probabilistic models analyze data by relying on a set of assumptions. Data that exhibit deviations from these assumptions can undermine inference and prediction quality. Robust models offer protection against mismatch between a model's assumptions and reality. We propose a way to systematically detect and mitigate mismatch of a large class of probabilistic models. The idea is to raise the likelihood of each observation to a weight and then to infer both the latent variables and the weights from data. Inferring the weights allows a model to identify observations that match its assumptions and down-weight others. This enables robust inference and improves predictive accuracy. We study four different forms of mismatch with reality, ranging from missing latent groups to structure misspecification. A Poisson factorization analysis of the Movielens 1M dataset shows the benefits of this approach in a practical scenario.
[ "Yixin Wang, Alp Kucukelbir, David M. Blei", "['Yixin Wang' 'Alp Kucukelbir' 'David M. Blei']" ]
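The reweighting mechanism can be caricatured with point estimates rather than the paper's full Bayesian inference over latent variables and weights, so the following is a loose analogue, not the authors' method: observations that fit the current model poorly receive small likelihood-shaped weights and lose influence on the fit.

```python
import numpy as np

def reweighted_mean(x, iters=20, scale=1.0):
    """Point-estimate caricature of likelihood reweighting: alternately fit a
    Gaussian mean and down-weight observations with low likelihood under the
    current fit, so mismatched observations (outliers) lose influence."""
    mu = np.median(x)
    for _ in range(iters):
        w = np.exp(-0.5 * ((x - mu) / scale) ** 2)  # likelihood-shaped weights
        mu = np.sum(w * x) / np.sum(w)
    return mu
```

With a contaminated sample, the reweighted estimate stays near the inlier mean while the plain mean is dragged toward the outliers.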
cs.NE cs.AI cs.CL cs.LG
null
1606.03864
null
null
http://arxiv.org/pdf/1606.03864v2
2016-06-14T07:59:18Z
2016-06-13T09:08:04Z
Neural Associative Memory for Dual-Sequence Modeling
Many important NLP problems can be posed as dual-sequence or sequence-to-sequence modeling tasks. Recent advances in building end-to-end neural architectures have been highly successful in solving such tasks. In this work we propose a new architecture for dual-sequence modeling that is based on associative memory. We derive AM-RNNs, a recurrent associative memory (AM) which augments generic recurrent neural networks (RNNs). This architecture is extended to the Dual AM-RNN, which operates on two AMs at once. Our models achieve very competitive results on textual entailment. A qualitative analysis demonstrates that long-range dependencies between the source and target sequence can be bridged effectively using Dual AM-RNNs. However, an initial experiment on auto-encoding reveals that these benefits are not exploited by the system when learning to solve sequence-to-sequence tasks, which indicates that additional supervision or regularization is needed.
[ "Dirk Weissenborn", "['Dirk Weissenborn']" ]
cs.IT cond-mat.dis-nn cs.LG math.IT stat.ML
10.1109/ITW.2016.7606837
1606.03956
null
null
http://arxiv.org/abs/1606.03956v1
2016-06-13T14:03:50Z
2016-06-13T14:03:50Z
Inferring Sparsity: Compressed Sensing using Generalized Restricted Boltzmann Machines
In this work, we consider compressed sensing reconstruction from $M$ measurements of $K$-sparse structured signals which do not possess a writable correlation model. Assuming that a generative statistical model, such as a Boltzmann machine, can be trained in an unsupervised manner on example signals, we demonstrate how this signal model can be used within a Bayesian framework of signal reconstruction. By deriving a message-passing inference for general distribution restricted Boltzmann machines, we are able to integrate these inferred signal models into approximate message passing for compressed sensing reconstruction. Finally, we show for the MNIST dataset that this approach can be very effective, even for $M < K$.
[ "Eric W. Tramel, Andre Manoel, Francesco Caltagirone, Marylou Gabrié, Florent Krzakala" ]
cs.LG cs.DC
null
1606.03966
null
null
http://arxiv.org/pdf/1606.03966v2
2017-05-09T14:41:15Z
2016-06-13T14:17:00Z
Making Contextual Decisions with Low Technical Debt
Applications and systems are constantly faced with decisions that require picking from a set of actions based on contextual information. Reinforcement-based learning algorithms such as contextual bandits can be very effective in these settings, but applying them in practice is fraught with technical debt, and no general system exists that supports them completely. We address this and create the first general system for contextual learning, called the Decision Service. Existing systems often suffer from technical debt that arises from issues like incorrect data collection and weak debuggability, issues we systematically address through our ML methodology and system abstractions. The Decision Service enables all aspects of contextual bandit learning using four system abstractions which connect together in a loop: explore (the decision space), log, learn, and deploy. Notably, our new explore and log abstractions ensure the system produces correct, unbiased data, which our learner uses for online learning and to enable real-time safeguards, all in a fully reproducible manner. The Decision Service has a simple user interface and works with a variety of applications: we present two live production deployments for content recommendation that achieved click-through improvements of 25-30%, another with an 18% revenue lift on the landing page, and ongoing applications in tech support and machine failure handling. The service makes real-time decisions and learns continuously and scalably, while significantly lowering technical debt.
[ "Alekh Agarwal, Sarah Bird, Markus Cozowicz, Luong Hoang, John Langford, Stephen Lee, Jiaji Li, Dan Melamed, Gal Oshri, Oswaldo Ribas, Siddhartha Sen, Alex Slivkins" ]
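The explore/log/learn loop can be sketched in a few lines. Everything below (tabular contexts, an epsilon-greedy explorer, a running value table as the "learner") is a toy stand-in for the actual service; the one load-bearing detail, mirrored from the abstract, is that each logged event records the action's propensity, which is what makes the logged data unbiased for off-policy learning.

```python
import numpy as np

def epsilon_greedy_loop(contexts, reward_fn, n_actions, eps=0.1, seed=0):
    """Miniature explore/log/learn loop: epsilon-greedy exploration, events
    logged with their propensities, and a per-(context, action) running-mean
    value table standing in for the learner."""
    rng = np.random.default_rng(seed)
    q = np.zeros((contexts.max() + 1, n_actions))  # value estimates
    n = np.zeros_like(q)
    log = []
    for x in contexts:
        greedy = int(np.argmax(q[x]))
        explore = rng.random() < eps
        a = int(rng.integers(n_actions)) if explore else greedy
        p = eps / n_actions + (1 - eps) * (a == greedy)  # logged propensity
        r = reward_fn(x, a, rng)
        log.append((x, a, r, p))
        n[x, a] += 1
        q[x, a] += (r - q[x, a]) / n[x, a]  # incremental mean update
    return q, log
```

On a toy problem where the best action equals the context, exploration lets the value table discover the right action for every context.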
stat.ML cs.AI cs.LG
null
1606.03976
null
null
http://arxiv.org/pdf/1606.03976v5
2017-05-16T15:11:15Z
2016-06-13T14:40:57Z
Estimating individual treatment effect: generalization bounds and algorithms
There is intense interest in applying machine learning to problems of causal inference in fields such as healthcare, economics and education. In particular, individual-level causal inference has important applications such as precision medicine. We give a new theoretical analysis and family of algorithms for predicting individual treatment effect (ITE) from observational data, under the assumption known as strong ignorability. The algorithms learn a "balanced" representation such that the induced treated and control distributions look similar. We give a novel, simple and intuitive generalization-error bound showing that the expected ITE estimation error of a representation is bounded by a sum of the standard generalization-error of that representation and the distance between the treated and control distributions induced by the representation. We use Integral Probability Metrics to measure distances between distributions, deriving explicit bounds for the Wasserstein and Maximum Mean Discrepancy (MMD) distances. Experiments on real and simulated data show the new algorithms match or outperform the state-of-the-art.
[ "Uri Shalit, Fredrik D. Johansson, David Sontag", "['Uri Shalit' 'Fredrik D. Johansson' 'David Sontag']" ]
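As a concrete instance of the Integral Probability Metrics used in the bound, the 1-D Wasserstein distance between (here, one-dimensional) treated and control representations has a closed-form empirical estimate for equal-size samples: sort both samples and average the absolute differences. A minimal sketch, not the paper's multivariate setting:

```python
import numpy as np

def wasserstein1_1d(u, v):
    """W1 distance between two equal-size 1-D samples: mean absolute
    difference of the sorted values (the optimal coupling in 1-D)."""
    return np.mean(np.abs(np.sort(u) - np.sort(v)))
```

Shifting one sample by a constant shifts the distance by exactly that constant, and identical samples give zero.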
cs.LG
null
1606.04038
null
null
http://arxiv.org/pdf/1606.04038v2
2017-02-17T01:33:17Z
2016-06-13T17:15:43Z
Trace Norm Regularised Deep Multi-Task Learning
We propose a framework for training multiple neural networks simultaneously. The parameters from all models are regularised by the tensor trace norm, so that each neural network is encouraged to reuse others' parameters if possible -- this is the main motivation behind multi-task learning. In contrast to many deep multi-task learning models, we do not predefine a parameter sharing strategy by specifying which layers have tied parameters. Instead, our framework considers sharing for all shareable layers, and the sharing strategy is learned in a data-driven way.
[ "['Yongxin Yang' 'Timothy M. Hospedales']", "Yongxin Yang, Timothy M. Hospedales" ]
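One simplified way to read the regulariser (the paper uses a tensor trace norm over several flattenings; only the task-mode flattening is shown here) is as the nuclear norm of the matrix whose rows are the flattened per-task parameters, which is small when tasks share low-rank structure.

```python
import numpy as np

def trace_norm(Ws):
    """Nuclear norm (sum of singular values) of the per-task weight matrices
    flattened and stacked along a task axis; low values favour parameter
    sharing across tasks."""
    stacked = np.stack([W.ravel() for W in Ws])  # tasks x params
    return np.linalg.svd(stacked, compute_uv=False).sum()
```

Two identical task matrices stack to a rank-1 matrix, so their trace norm is smaller than that of two orthogonal task matrices of the same size.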
cs.LG math.CO
null
1606.04056
null
null
http://arxiv.org/pdf/1606.04056v1
2016-06-13T18:18:08Z
2016-06-13T18:18:08Z
On the exact learnability of graph parameters: The case of partition functions
We study the exact learnability of real valued graph parameters $f$ which are known to be representable as partition functions which count the number of weighted homomorphisms into a graph $H$ with vertex weights $\alpha$ and edge weights $\beta$. M. Freedman, L. Lovász and A. Schrijver have given a characterization of these graph parameters in terms of the $k$-connection matrices $C(f,k)$ of $f$. Our model of learnability is based on D. Angluin's model of exact learning using membership and equivalence queries. Given such a graph parameter $f$, the learner can ask for the values of $f$ for graphs of their choice, and they can formulate hypotheses in terms of the connection matrices $C(f,k)$ of $f$. The teacher can accept the hypothesis as correct, or provide a counterexample consisting of a graph. Our main result shows that in this scenario, a very large class of partition functions, the rigid partition functions, can be learned in time polynomial in the size of $H$ and the size of the largest counterexample in the Blum-Shub-Smale model of computation over the reals with unit cost.
[ "['Nadia Labai' 'Johann A. Makowsky']", "Nadia Labai and Johann A. Makowsky" ]
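The partition functions in question are directly computable for tiny graphs by brute force. The sketch below is purely illustrative (exponential in |V(G)|, nothing like the paper's learning algorithm): it enumerates all maps of V(G) into H and sums the products of vertex weights alpha and edge weights beta.

```python
import itertools
import numpy as np

def partition_function(G_edges, n_G, alpha, beta):
    """Z(G) = sum over maps h: V(G) -> V(H) of
    prod_v alpha[h(v)] * prod_{uv in E(G)} beta[h(u), h(v)]."""
    n_H = len(alpha)
    total = 0.0
    for h in itertools.product(range(n_H), repeat=n_G):
        w = np.prod([alpha[i] for i in h])
        for u, v in G_edges:
            w *= beta[h[u], h[v]]
        total += w
    return total
```

With unit weights Z counts all maps (2^|V(G)| for a two-vertex H); with H = K2 (zero diagonal) it counts proper 2-colourings, so a path of length two has 2 and a triangle has 0.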
cs.LG stat.ML
null
1606.04080
null
null
http://arxiv.org/pdf/1606.04080v2
2017-12-29T17:45:19Z
2016-06-13T19:34:22Z
Matching Networks for One Shot Learning
Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank.
[ "Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, Daan Wierstra" ]
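The core mapping from a labelled support set plus an unlabelled query to a label can be sketched without the deep feature encoders or external memory (a heavy simplification of the paper's model): attention weights from softmaxed cosine similarities, then a weighted vote over support labels.

```python
import numpy as np

def matching_predict(support_x, support_y, query, n_classes):
    """One-shot prediction as attention over the support set: softmax over
    cosine similarities to the query, then a weighted vote on support labels."""
    s = support_x / np.linalg.norm(support_x, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    att = np.exp(s @ q)
    att /= att.sum()
    probs = np.zeros(n_classes)
    for a, y in zip(att, support_y):
        probs[y] += a
    return probs
```

Because the prediction is a function of the support set, adapting to new classes means swapping the support set, with no fine-tuning.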
cs.LG cs.IR cs.NE stat.ML
null
1606.04130
null
null
http://arxiv.org/pdf/1606.04130v5
2016-11-11T12:46:53Z
2016-06-13T20:34:35Z
Modeling Missing Data in Clinical Time Series with RNNs
We demonstrate a simple strategy to cope with missing data in sequential inputs, addressing the task of multilabel classification of diagnoses given clinical time series. Collected from the pediatric intensive care unit (PICU) at Children's Hospital Los Angeles, our data consists of multivariate time series of observations. The measurements are irregularly spaced, leading to missingness patterns in temporally discretized sequences. While these artifacts are typically handled by imputation, we achieve superior predictive performance by treating the artifacts as features. Unlike linear models, recurrent neural networks can realize this improvement using only simple binary indicators of missingness. For linear models, we show an alternative strategy to capture this signal. Training models on missingness patterns only, we show that for some diseases, what tests are run can be as predictive as the results themselves.
[ "['Zachary C. Lipton' 'David C. Kale' 'Randall Wetzel']", "Zachary C. Lipton, David C. Kale, Randall Wetzel" ]
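The "treat the artifacts as features" recipe amounts to one line of preprocessing. A sketch of the simple binary indicators the abstract mentions (zero-filling is one common choice of imputation here, assumed for illustration):

```python
import numpy as np

def add_missingness_features(X):
    """Replace NaNs with zeros and append binary 'was missing' indicator
    columns, so a sequence model can treat missingness itself as signal."""
    missing = np.isnan(X).astype(float)
    filled = np.where(np.isnan(X), 0.0, X)
    return np.concatenate([filled, missing], axis=-1)
```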
cs.IT cond-mat.dis-nn cs.LG math-ph math.IT math.MP
null
1606.04142
null
null
http://arxiv.org/pdf/1606.04142v1
2016-06-13T21:17:15Z
2016-06-13T21:17:15Z
Mutual information for symmetric rank-one matrix estimation: A proof of the replica formula
Factorizing low-rank matrices has many applications in machine learning and statistics. For probabilistic models in the Bayes optimal setting, a general expression for the mutual information has been proposed using heuristic statistical physics computations, and proven in a few specific cases. Here, we show how to rigorously prove the conjectured formula for the symmetric rank-one case. This allows us to express the minimal mean-square error and to characterize the detectability phase transitions in a large set of estimation problems ranging from community detection to sparse PCA. We also show that for a large set of parameters, an iterative algorithm called approximate message-passing is Bayes optimal. There exists, however, a gap between what currently known polynomial-time algorithms can do and what is expected information-theoretically. Additionally, the proof technique is of interest in its own right and exploits three essential ingredients: the interpolation method introduced in statistical physics by Guerra, the analysis of the approximate message-passing algorithm, and the theory of spatial coupling and threshold saturation in coding. Our approach is generic and applicable to other open problems in statistical estimation where heuristic statistical physics predictions are available.
[ "Jean Barbier, Mohamad Dia, Nicolas Macris, Florent Krzakala, Thibault Lesieur, Lenka Zdeborova" ]
cs.LG cs.GT
null
1606.04145
null
null
http://arxiv.org/pdf/1606.04145v1
2016-06-13T21:25:22Z
2016-06-13T21:25:22Z
Sample Complexity of Automated Mechanism Design
The design of revenue-maximizing combinatorial auctions, i.e. multi-item auctions over bundles of goods, is one of the most fundamental problems in computational economics, unsolved even for two bidders and two items for sale. In the traditional economic models, it is assumed that the bidders' valuations are drawn from an underlying distribution and that the auction designer has perfect knowledge of this distribution. Despite this strong and oftentimes unrealistic assumption, it is remarkable that the revenue-maximizing combinatorial auction remains unknown. In recent years, automated mechanism design has emerged as one of the most practical and promising approaches to designing high-revenue combinatorial auctions. The most scalable automated mechanism design algorithms take as input samples from the bidders' valuation distribution and then search for a high-revenue auction in a rich auction class. In this work, we provide the first sample complexity analysis for the standard hierarchy of deterministic combinatorial auction classes used in automated mechanism design. In particular, we provide tight sample complexity bounds on the number of samples needed to guarantee that the empirical revenue of the designed mechanism on the samples is close to its expected revenue on the underlying, unknown distribution over bidder valuations, for each of the auction classes in the hierarchy. In addition to helping set automated mechanism design on firm foundations, our results also push the boundaries of learning theory. In particular, the hypothesis functions used in our contexts are defined through multi-stage combinatorial optimization procedures, rather than simple decision boundaries, as are common in machine learning.
[ "Maria-Florina Balcan, Tuomas Sandholm, Ellen Vitercik", "['Maria-Florina Balcan' 'Tuomas Sandholm' 'Ellen Vitercik']" ]
cs.LG stat.ML
null
1606.04160
null
null
http://arxiv.org/pdf/1606.04160v2
2017-03-07T21:41:50Z
2016-06-13T22:27:36Z
The Crossover Process: Learnability and Data Protection from Inference Attacks
It is usual to consider data protection and learnability as conflicting objectives. This is not always the case: we show how to jointly control inference (seen as the attack) and learnability by a noise-free process that mixes training examples, the Crossover Process (cp). One key point is that the cp is typically able to alter joint distributions without touching the marginals, nor altering the sufficient statistic for the class. In other words, it preserves (and sometimes improves) generalization for supervised learning, but can alter the relationship between covariates, and can therefore fool measures of nonlinear independence and causal inference into misleading ad-hoc conclusions. For example, a cp can increase or decrease odds ratios, bring fairness or break fairness, tamper with disparate impact, strengthen, weaken or reverse causal directions, and change observed statistical measures of dependence. For each of these, we quantify the changes brought by a cp, as well as its statistical impact on generalization abilities, via a new complexity measure that we call the Rademacher cp complexity. Experiments on a dozen readily available domains validate the theory.
[ "Richard Nock, Giorgio Patrini, Finnian Lattimore, Tiberio Caetano", "['Richard Nock' 'Giorgio Patrini' 'Finnian Lattimore' 'Tiberio Caetano']" ]
stat.ML cs.LG
null
1606.04166
null
null
http://arxiv.org/pdf/1606.04166v1
2016-06-13T22:44:39Z
2016-06-13T22:44:39Z
Modal-set estimation with an application to clustering
We present a first procedure that can estimate -- with statistical consistency guarantees -- any local-maxima of a density, under benign distributional conditions. The procedure estimates all such local maxima, or $\textit{modal-sets}$, of any bounded shape or dimension, including usual point-modes. In practice, modal-sets can arise as dense low-dimensional structures in noisy data, and more generally serve to better model the rich variety of locally-high-density structures in data. The procedure is then shown to be competitive on clustering applications, and moreover is quite stable to a wide range of settings of its tuning parameter.
[ "Heinrich Jiang, Samory Kpotufe", "['Heinrich Jiang' 'Samory Kpotufe']" ]
cs.CV cs.LG cs.NE
null
1606.04189
null
null
http://arxiv.org/pdf/1606.04189v2
2016-07-07T18:52:57Z
2016-06-14T01:35:12Z
Inverting face embeddings with convolutional neural networks
Deep neural networks have dramatically advanced the state of the art for many areas of machine learning. Recently they have been shown to have a remarkable ability to generate highly complex visual artifacts such as images and text rather than simply recognize them. In this work we use neural networks to effectively invert low-dimensional face embeddings while producing realistic-looking, consistent images. Our contribution is twofold: first, we show that a gradient ascent style approach can be used to reproduce consistent images, with the help of a guiding image. Second, we demonstrate that we can train a separate neural network to effectively solve the minimization problem in one pass, and generate images in real time. We then evaluate the loss imposed by using a neural network instead of gradient descent by comparing the final values of the minimized loss function.
[ "['Andrey Zhmoginov' 'Mark Sandler']", "Andrey Zhmoginov and Mark Sandler" ]
cs.CL cs.LG
null
1606.04199
null
null
http://arxiv.org/pdf/1606.04199v3
2016-07-23T13:14:17Z
2016-06-14T03:53:00Z
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
[ "['Jie Zhou' 'Ying Cao' 'Xuguang Wang' 'Peng Li' 'Wei Xu']", "Jie Zhou and Ying Cao and Xuguang Wang and Peng Li and Wei Xu" ]
cs.LG
null
1606.04218
null
null
http://arxiv.org/pdf/1606.04218v1
2016-06-14T07:11:29Z
2016-06-14T07:11:29Z
Conditional Generative Moment-Matching Networks
Maximum mean discrepancy (MMD) has been successfully applied to learn deep generative models for characterizing a joint distribution of variables via kernel mean embedding. In this paper, we present conditional generative moment-matching networks (CGMMN), which learn a conditional distribution given some input variables based on a conditional maximum mean discrepancy (CMMD) criterion. The learning is performed by stochastic gradient descent with the gradient calculated by back-propagation. We evaluate CGMMN on a wide range of tasks, including predictive modeling, contextual generation, and Bayesian dark knowledge, which distills knowledge from a Bayesian model by learning a relatively small CGMMN student network. Our results demonstrate competitive performance in all the tasks.
[ "['Yong Ren' 'Jialian Li' 'Yucen Luo' 'Jun Zhu']", "Yong Ren, Jialian Li, Yucen Luo, Jun Zhu" ]
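The (unconditional) MMD criterion at the heart of moment matching has a short empirical estimator. The sketch below uses an RBF kernel and a biased V-statistic estimate; the bandwidth and estimator are assumptions, and the paper's CMMD conditions on input variables, which this omits.

```python
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    """Squared MMD between samples X and Y with an RBF kernel, the kind of
    criterion a moment-matching network drives toward zero during training."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```

Identical samples give exactly zero, and samples from well-separated distributions score far higher than two draws from the same distribution.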
cs.CV cs.LG
null
1606.04232
null
null
http://arxiv.org/pdf/1606.04232v1
2016-06-14T07:38:13Z
2016-06-14T07:38:13Z
DCNNs on a Diet: Sampling Strategies for Reducing the Training Set Size
Large-scale supervised classification algorithms, especially those based on deep convolutional neural networks (DCNNs), require vast amounts of training data to achieve state-of-the-art performance. Decreasing this data requirement would significantly speed up the training process and possibly improve generalization. Motivated by this objective, we consider the task of adaptively finding concise training subsets which will be iteratively presented to the learner. We use convex optimization methods, based on an objective criterion and feedback from the current performance of the classifier, to efficiently identify informative samples to train on. We propose an algorithm to decompose the optimization problem into smaller per-class problems, which can be solved in parallel. We test our approach on standard classification tasks and demonstrate its effectiveness in decreasing the training set size without compromising performance. We also show that our approach can make the classifier more robust in the presence of label noise and class imbalance.
[ "['Maya Kabkab' 'Azadeh Alavi' 'Rama Chellappa']", "Maya Kabkab, Azadeh Alavi, Rama Chellappa" ]
cs.NI cs.LG
10.1109/TWC.2016.2636139
1606.04236
null
null
http://arxiv.org/abs/1606.04236v2
2016-12-16T15:03:53Z
2016-06-14T07:53:47Z
Context-Aware Proactive Content Caching with Service Differentiation in Wireless Networks
Content caching in small base stations or wireless infostations is considered to be a suitable approach to improve the efficiency in wireless content delivery. Placing the optimal content into local caches is crucial due to storage limitations, but it requires knowledge about the content popularity distribution, which is often not available in advance. Moreover, local content popularity is subject to fluctuations since mobile users with different interests connect to the caching entity over time. Which content a user prefers may depend on the user's context. In this paper, we propose a novel algorithm for context-aware proactive caching. The algorithm learns context-specific content popularity online by regularly observing context information of connected users, updating the cache content and observing cache hits subsequently. We derive a sublinear regret bound, which characterizes the learning speed and proves that our algorithm converges to the optimal cache content placement strategy in terms of maximizing the number of cache hits. Furthermore, our algorithm supports service differentiation by allowing operators of caching entities to prioritize customer groups. Our numerical results confirm that our algorithm outperforms state-of-the-art algorithms on a real-world data set, with an increase in the number of cache hits of at least 14%.
[ "Sabrina Müller, Onur Atan, Mihaela van der Schaar, Anja Klein" ]
cs.LG stat.ML
10.1109/TSP.2016.2628348
1606.04268
null
null
http://arxiv.org/abs/1606.04268v1
2016-06-14T09:18:46Z
2016-06-14T09:18:46Z
Local Canonical Correlation Analysis for Nonlinear Common Variables Discovery
In this paper, we address the problem of hidden common variables discovery from multimodal data sets of nonlinear high-dimensional observations. We present a metric based on local applications of canonical correlation analysis (CCA) and incorporate it in a kernel-based manifold learning technique. We show that this metric discovers the hidden common variables underlying the multimodal observations by estimating the Euclidean distance between them. Our approach can be viewed both as an extension of CCA to a nonlinear setting and as an extension of manifold learning to multiple data sets. Experimental results show that our method indeed discovers the common variables underlying high-dimensional nonlinear observations without imposing rigid model assumptions.
[ "['Or Yair' 'Ronen Talmon']", "Or Yair, Ronen Talmon" ]
cs.DS cs.LG
10.1145/2978578
1606.04269
null
null
http://arxiv.org/abs/1606.04269v1
2016-06-14T09:27:29Z
2016-06-14T09:27:29Z
Context Trees: Augmenting Geospatial Trajectories with Context
Exposing latent knowledge in geospatial trajectories has the potential to provide a better understanding of the movements of individuals and groups. Motivated by such a desire, this work presents the context tree, a new hierarchical data structure that summarises the context behind user actions in a single model. We propose a method for context tree construction that augments geospatial trajectories with land usage data to identify such contexts. Through evaluation of the construction method and analysis of the properties of generated context trees, we demonstrate the foundation for understanding and modelling behaviour afforded. Summarising user contexts into a single data structure gives easy access to information that would otherwise remain latent, providing the basis for better understanding and predicting the actions and behaviours of individuals and groups. Finally, we also present a method for pruning context trees, for use in applications where it is desirable to reduce the size of the tree while retaining useful information.
[ "Alasdair Thomason, Nathan Griffiths, Victor Sanchez", "['Alasdair Thomason' 'Nathan Griffiths' 'Victor Sanchez']" ]
cs.LG
null
1606.04275
null
null
http://arxiv.org/pdf/1606.04275v1
2016-06-14T09:38:18Z
2016-06-14T09:38:18Z
Efficient Pairwise Learning Using Kernel Ridge Regression: an Exact Two-Step Method
Pairwise learning or dyadic prediction concerns the prediction of properties for pairs of objects. It can be seen as an umbrella covering various machine learning problems such as matrix completion, collaborative filtering, multi-task learning, transfer learning, network prediction and zero-shot learning. In this work we analyze kernel-based methods for pairwise learning, with a particular focus on a recently-suggested two-step method. We show that this method offers an appealing alternative for commonly-applied Kronecker-based methods that model dyads by means of pairwise feature representations and pairwise kernels. In a series of theoretical results, we establish correspondences between the two types of methods in terms of linear algebra and spectral filtering, and we analyze their statistical consistency. In addition, the two-step method allows us to establish novel algorithmic shortcuts for efficient training and validation on very large datasets. Putting those properties together, we believe that this simple, yet powerful method can become a standard tool for many problems. Extensive experimental results for a range of practical settings are reported.
[ "Michiel Stock and Tapio Pahikkala and Antti Airola and Bernard De\n Baets and Willem Waegeman", "['Michiel Stock' 'Tapio Pahikkala' 'Antti Airola' 'Bernard De Baets'\n 'Willem Waegeman']" ]
cs.IR cs.LG
10.1007/s10618-016-0456-z
1606.04278
null
null
http://arxiv.org/abs/1606.04278v1
2016-06-14T09:41:27Z
2016-06-14T09:41:27Z
Exact and efficient top-K inference for multi-target prediction by querying separable linear relational models
Many complex multi-target prediction problems that concern large target spaces are characterised by a need for efficient prediction strategies that avoid the computation of predictions for all targets explicitly. Examples of such problems emerge in several subfields of machine learning, such as collaborative filtering, multi-label classification, dyadic prediction and biological network inference. In this article we analyse efficient and exact algorithms for computing the top-$K$ predictions in the above problem settings, using a general class of models that we refer to as separable linear relational models. We show how to use those inference algorithms, which are modifications of well-known information retrieval methods, in a variety of machine learning settings. Furthermore, we study the possibility of scoring items incompletely, while still retaining an exact top-K retrieval. Experimental results in several application domains reveal that the so-called threshold algorithm is very scalable, performing often many orders of magnitude more efficiently than the naive approach.
[ "['Michiel Stock' 'Krzysztof Dembczynski' 'Bernard De Baets'\n 'Willem Waegeman']", "Michiel Stock and Krzysztof Dembczynski and Bernard De Baets and\n Willem Waegeman" ]
cs.CL cs.LG cs.NE
10.18653/v1/P16-1068
1606.04289
null
null
http://arxiv.org/abs/1606.04289v2
2016-06-16T16:30:33Z
2016-06-14T10:17:27Z
Automatic Text Scoring Using Neural Networks
Automated Text Scoring (ATS) provides a cost-effective and consistent alternative to human marking. However, in order to achieve good performance, the predictive features of the system need to be manually engineered by human experts. We introduce a model that forms word representations by learning the extent to which specific words contribute to the text's score. Using Long Short-Term Memory networks to represent the meaning of texts, we demonstrate that a fully automated framework is able to achieve excellent results over similar approaches. In an attempt to make our results more interpretable, and inspired by recent advances in visualizing neural networks, we introduce a novel method for identifying the regions of the text that the model has found more discriminative.
[ "['Dimitrios Alikaniotis' 'Helen Yannakoudakis' 'Marek Rei']", "Dimitrios Alikaniotis and Helen Yannakoudakis and Marek Rei" ]
stat.ML cs.LG
null
1606.04316
null
null
http://arxiv.org/pdf/1606.04316v3
2017-07-15T15:16:48Z
2016-06-14T11:35:35Z
Time for a change: a tutorial for comparing multiple classifiers through Bayesian analysis
The machine learning community adopted the use of null hypothesis significance testing (NHST) in order to ensure the statistical validity of results. Many scientific fields however realized the shortcomings of frequentist reasoning and in the most radical cases even banned its use in publications. We should do the same: just as we have embraced the Bayesian paradigm in the development of new machine learning methods, so we should also use it in the analysis of our own results. We argue for abandonment of NHST by exposing its fallacies and, more importantly, offer better - more sound and useful - alternatives for it.
[ "Alessio Benavoli, Giorgio Corani, Janez Demsar, Marco Zaffalon", "['Alessio Benavoli' 'Giorgio Corani' 'Janez Demsar' 'Marco Zaffalon']" ]
stat.ML cs.LG cs.SD
null
1606.04317
null
null
http://arxiv.org/pdf/1606.04317v1
2016-06-14T11:44:31Z
2016-06-14T11:44:31Z
Calibration of Phone Likelihoods in Automatic Speech Recognition
In this paper we study the probabilistic properties of the posteriors in a speech recognition system that uses a deep neural network (DNN) for acoustic modeling. We do this by reducing Kaldi's DNN shared pdf-id posteriors to phone likelihoods, and using test set forced alignments to evaluate these using a calibration sensitive metric. Individual frame posteriors are in principle well-calibrated, because the DNN is trained using cross entropy as the objective function, which is a proper scoring rule. When entire phones are assessed, we observe that it is best to average the log likelihoods over the duration of the phone. Further scaling of the average log likelihoods by the logarithm of the duration slightly improves the calibration, and this improvement is retained when tested on independent test data.
[ "David A. van Leeuwen and Joost van Doremalen", "['David A. van Leeuwen' 'Joost van Doremalen']" ]
cs.CL cs.IR cs.LG
null
1606.04351
null
null
http://arxiv.org/pdf/1606.04351v1
2016-06-14T13:36:00Z
2016-06-14T13:36:00Z
TwiSE at SemEval-2016 Task 4: Twitter Sentiment Classification
This paper describes the participation of the team "TwiSE" in the SemEval 2016 challenge. Specifically, we participated in Task 4, namely "Sentiment Analysis in Twitter" for which we implemented sentiment classification systems for subtasks A, B, C and D. Our approach consists of two steps. In the first step, we generate and validate diverse feature sets for twitter sentiment evaluation, inspired by the work of participants of previous editions of such challenges. In the second step, we focus on the optimization of the evaluation measures of the different subtasks. To this end, we examine different learning strategies by validating them on the data provided by the task organisers. For our final submissions we used an ensemble learning approach (stacked generalization) for Subtask A and single linear models for the rest of the subtasks. In the official leaderboard we were ranked 9/35, 8/19, 1/11 and 2/14 for subtasks A, B, C and D respectively.\footnote{We make the code available for research purposes at \url{https://github.com/balikasg/SemEval2016-Twitter\_Sentiment\_Evaluation}.}
[ "['Georgios Balikas' 'Massih-Reza Amini']", "Georgios Balikas, Massih-Reza Amini" ]
cs.CV cs.LG cs.NE stat.ML
null
1606.04393
null
null
http://arxiv.org/pdf/1606.04393v3
2017-02-06T22:51:33Z
2016-06-14T14:36:55Z
Deep Learning with Darwin: Evolutionary Synthesis of Deep Neural Networks
Taking inspiration from biological evolution, we explore the idea of "Can deep neural networks evolve naturally over successive generations into highly efficient deep neural networks?" by introducing the notion of synthesizing new highly efficient, yet powerful deep neural networks over successive generations via an evolutionary process from ancestor deep neural networks. The architectural traits of ancestor deep neural networks are encoded using synaptic probability models, which can be viewed as the `DNA' of these networks. New descendant networks with differing network architectures are synthesized based on these synaptic probability models from the ancestor networks and computational environmental factor models, in a random manner to mimic heredity, natural selection, and random mutation. These offspring networks are then trained into fully functional networks, like one would train a newborn, and have more efficient, more diverse network architectures than their ancestor networks, while achieving powerful modeling capabilities. Experimental results for the task of visual saliency demonstrated that the synthesized `evolved' offspring networks can achieve state-of-the-art performance while having network architectures that are significantly more efficient (with a staggering $\sim$48-fold decrease in synapses by the fourth generation) compared to the original ancestor network.
[ "['Mohammad Javad Shafiee' 'Akshaya Mishra' 'Alexander Wong']", "Mohammad Javad Shafiee, Akshaya Mishra, and Alexander Wong" ]
stat.ML cs.AI cs.LG
null
1606.04414
null
null
http://arxiv.org/pdf/1606.04414v4
2018-04-22T23:44:15Z
2016-06-14T15:12:01Z
The Parallel Knowledge Gradient Method for Batch Bayesian Optimization
In many applications of black-box optimization, one can evaluate multiple points simultaneously, e.g. when evaluating the performances of several different neural network architectures in a parallel computing environment. In this paper, we develop a novel batch Bayesian optimization algorithm --- the parallel knowledge gradient method. By construction, this method provides the one-step Bayes-optimal batch of points to sample. We provide an efficient strategy for computing this Bayes-optimal batch of points, and we demonstrate that the parallel knowledge gradient method finds global optima significantly faster than previous batch Bayesian optimization algorithms on both synthetic test functions and when tuning hyperparameters of practical machine learning algorithms, especially when function evaluations are noisy.
[ "['Jian Wu' 'Peter I. Frazier']", "Jian Wu, Peter I. Frazier" ]
cs.AI cs.LG cs.LO cs.NE
null
1606.04422
null
null
http://arxiv.org/pdf/1606.04422v2
2016-07-07T12:28:57Z
2016-06-14T15:25:28Z
Logic Tensor Networks: Deep Learning and Logical Reasoning from Data and Knowledge
We propose Logic Tensor Networks: a uniform framework for integrating automatic learning and reasoning. A logic formalism called Real Logic is defined on a first-order language whereby formulas have truth-value in the interval [0,1] and semantics defined concretely on the domain of real numbers. Logical constants are interpreted as feature vectors of real numbers. Real Logic promotes a well-founded integration of deductive reasoning on a knowledge-base and efficient data-driven relational machine learning. We show how Real Logic can be implemented in deep Tensor Neural Networks with the use of Google's TensorFlow primitives. The paper concludes with experiments applying Logic Tensor Networks on a simple but representative example of knowledge completion.
[ "Luciano Serafini and Artur d'Avila Garcez", "['Luciano Serafini' \"Artur d'Avila Garcez\"]" ]
cs.CR cs.LG cs.NE
null
1606.04435
null
null
http://arxiv.org/pdf/1606.04435v2
2016-06-16T08:14:12Z
2016-06-14T16:01:52Z
Adversarial Perturbations Against Deep Neural Networks for Malware Classification
Deep neural networks, like many other machine learning models, have recently been shown to lack robustness against adversarially crafted inputs. These inputs are derived from regular inputs by minor yet carefully selected perturbations that deceive machine learning models into desired misclassifications. Existing work in this emerging field was largely specific to the domain of image classification, since the high-entropy of images can be conveniently manipulated without changing the images' overall visual appearance. Yet, it remains unclear how such attacks translate to more security-sensitive applications such as malware detection - which may pose significant challenges in sample generation and arguably grave consequences for failure. In this paper, we show how to construct highly-effective adversarial sample crafting attacks for neural networks used as malware classifiers. The application domain of malware classification introduces additional constraints in the adversarial sample crafting problem when compared to the computer vision domain: (i) continuous, differentiable input domains are replaced by discrete, often binary inputs; and (ii) the loose condition of leaving visual appearance unchanged is replaced by requiring equivalent functional behavior. We demonstrate the feasibility of these attacks on many different instances of malware classifiers that we trained using the DREBIN Android malware data set. We furthermore evaluate to which extent potential defensive mechanisms against adversarial crafting can be leveraged to the setting of malware classification. While feature reduction did not prove to have a positive impact, distillation and re-training on adversarially crafted samples show promising results.
[ "['Kathrin Grosse' 'Nicolas Papernot' 'Praveen Manoharan' 'Michael Backes'\n 'Patrick McDaniel']", "Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes,\n Patrick McDaniel" ]
cs.AI cs.LG cs.LO
null
1606.04442
null
null
http://arxiv.org/pdf/1606.04442v2
2017-01-26T19:35:16Z
2016-06-14T16:27:41Z
DeepMath - Deep Sequence Models for Premise Selection
We study the effectiveness of neural sequence models for premise selection in automated theorem proving, one of the main bottlenecks in the formalization of mathematics. We propose a two stage approach for this task that yields good results for the premise selection task on the Mizar corpus while avoiding the hand-engineered features of existing state-of-the-art models. To our knowledge, this is the first time deep learning has been applied to theorem proving on a large scale.
[ "Alex A. Alemi, Francois Chollet, Niklas Een, Geoffrey Irving,\n Christian Szegedy and Josef Urban", "['Alex A. Alemi' 'Francois Chollet' 'Niklas Een' 'Geoffrey Irving'\n 'Christian Szegedy' 'Josef Urban']" ]
stat.ML cs.LG
null
1606.04443
null
null
http://arxiv.org/pdf/1606.04443v2
2016-10-28T20:05:02Z
2016-06-14T16:31:14Z
A scalable end-to-end Gaussian process adapter for irregularly sampled time series classification
We present a general framework for classification of sparse and irregularly-sampled time series. The properties of such time series can result in substantial uncertainty about the values of the underlying temporal processes, while making the data difficult to deal with using standard classification methods that assume fixed-dimensional feature spaces. To address these challenges, we propose an uncertainty-aware classification framework based on a special computational layer we refer to as the Gaussian process adapter that can connect irregularly sampled time series data to any black-box classifier learnable using gradient descent. We show how to scale up the required computations based on combining the structured kernel interpolation framework and the Lanczos approximation method, and how to discriminatively train the Gaussian process adapter in combination with a number of classifiers end-to-end using backpropagation.
[ "['Steven Cheng-Xian Li' 'Benjamin Marlin']", "Steven Cheng-Xian Li, Benjamin Marlin" ]
stat.ML cs.LG
null
1606.04449
null
null
http://arxiv.org/pdf/1606.04449v2
2016-12-08T18:56:38Z
2016-06-14T16:40:38Z
Recurrent neural network training with preconditioned stochastic gradient descent
This paper studies the performance of a recently proposed preconditioned stochastic gradient descent (PSGD) algorithm on recurrent neural network (RNN) training. PSGD adaptively estimates a preconditioner to accelerate gradient descent, and is designed to be simple, general and easy to use, as stochastic gradient descent (SGD). RNNs, especially the ones requiring extremely long term memories, are difficult to train. We have tested PSGD on a set of synthetic pathological RNN learning problems and the real world MNIST handwritten digit recognition task. Experimental results suggest that PSGD is able to achieve highly competitive performance without using any trick like preprocessing, pretraining or parameter tweaking.
[ "['Xi-Lin Li']", "Xi-Lin Li" ]
stat.ML cs.LG q-bio.NC
null
1606.04460
null
null
http://arxiv.org/pdf/1606.04460v1
2016-06-14T17:03:46Z
2016-06-14T17:03:46Z
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
[ "Charles Blundell and Benigno Uria and Alexander Pritzel and Yazhe Li\n and Avraham Ruderman and Joel Z Leibo and Jack Rae and Daan Wierstra and\n Demis Hassabis", "['Charles Blundell' 'Benigno Uria' 'Alexander Pritzel' 'Yazhe Li'\n 'Avraham Ruderman' 'Joel Z Leibo' 'Jack Rae' 'Daan Wierstra'\n 'Demis Hassabis']" ]
cs.NE cs.LG
null
1606.04474
null
null
http://arxiv.org/pdf/1606.04474v2
2016-11-30T16:45:45Z
2016-06-14T17:49:32Z
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
[ "Marcin Andrychowicz and Misha Denil and Sergio Gomez and Matthew W.\n Hoffman and David Pfau and Tom Schaul and Brendan Shillingford and Nando de\n Freitas", "['Marcin Andrychowicz' 'Misha Denil' 'Sergio Gomez' 'Matthew W. Hoffman'\n 'David Pfau' 'Tom Schaul' 'Brendan Shillingford' 'Nando de Freitas']" ]
cs.DC cs.LG
null
1606.04487
null
null
http://arxiv.org/pdf/1606.04487v4
2016-10-19T04:26:03Z
2016-06-14T18:21:04Z
Omnivore: An Optimizer for Multi-device Deep Learning on CPUs and GPUs
We study the factors affecting training time in multi-device deep learning systems. Given a specification of a convolutional neural network, our goal is to minimize the time to train this model on a cluster of commodity CPUs and GPUs. We first focus on the single-node setting and show that by using standard batching and data-parallel techniques, throughput can be improved by at least 5.5x over state-of-the-art systems on CPUs. This ensures an end-to-end training speed directly proportional to the throughput of a device regardless of its underlying hardware, allowing each node in the cluster to be treated as a black box. Our second contribution is a theoretical and empirical study of the tradeoffs affecting end-to-end training time in a multiple-device setting. We identify the degree of asynchronous parallelization as a key factor affecting both hardware and statistical efficiency. We see that asynchrony can be viewed as introducing a momentum term. Our results imply that tuning momentum is critical in asynchronous parallel configurations, and suggest that published results that have not been fully tuned might report suboptimal performance for some configurations. For our third contribution, we use our novel understanding of the interaction between system and optimization dynamics to provide an efficient hyperparameter optimizer. Our optimizer involves a predictive model for the total time to convergence and selects an allocation of resources to minimize that time. We demonstrate that the most popular distributed deep learning systems fall within our tradeoff space, but do not optimize within the space. By doing this optimization, our prototype runs 1.9x to 12x faster than the fastest state-of-the-art systems.
[ "['Stefan Hadjis' 'Ce Zhang' 'Ioannis Mitliagkas' 'Dan Iter'\n 'Christopher Ré']", "Stefan Hadjis, Ce Zhang, Ioannis Mitliagkas, Dan Iter, Christopher\n R\\'e" ]
cs.LG cs.CV
null
1606.04506
null
null
http://arxiv.org/pdf/1606.04506v1
2016-06-14T19:05:01Z
2016-06-14T19:05:01Z
Max-Margin Feature Selection
Many machine learning applications such as in vision, biology and social networking deal with data in high dimensions. Feature selection is typically employed to select a subset of features which improves generalization accuracy as well as reduces the computational cost of learning the model. One of the criteria used for feature selection is to jointly minimize the redundancy and maximize the relevance of the selected features. In this paper, we formulate the task of feature selection as a one class SVM problem in a space where features correspond to the data points and instances correspond to the dimensions. The goal is to look for a representative subset of the features (support vectors) which describes the boundary for the region where the set of the features (data points) exists. This leads to a joint optimization of relevance and redundancy in a principled max-margin framework. Additionally, our formulation enables us to leverage existing techniques for optimizing the SVM objective resulting in highly computationally efficient solutions for the task of feature selection. Specifically, we employ the dual coordinate descent algorithm (Hsieh et al., 2008), originally proposed for SVMs, for our formulation. We use a sparse representation to deal with data in very high dimensions. Experiments on seven publicly available benchmark datasets from a variety of domains show that our approach results in orders of magnitude faster solutions even while retaining the same level of accuracy compared to the state of the art feature selection techniques.
[ "Yamuna Prasad, Dinesh Khandelwal, K. K. Biswas", "['Yamuna Prasad' 'Dinesh Khandelwal' 'K. K. Biswas']" ]
cs.LG cs.NE
null
1606.04518
null
null
http://arxiv.org/pdf/1606.04518v1
2016-06-14T19:32:34Z
2016-06-14T19:32:34Z
Sparsely Connected and Disjointly Trained Deep Neural Networks for Low Resource Behavioral Annotation: Acoustic Classification in Couples' Therapy
Observational studies are based on accurate assessment of human state. A behavior recognition system that models interlocutors' state in real-time can significantly aid the mental health domain. However, behavior recognition from speech remains a challenging task since it is difficult to find generalizable and representative features because of noisy and high-dimensional data, especially when data is limited and annotated coarsely and subjectively. Deep Neural Networks (DNN) have shown promise in a wide range of machine learning tasks, but for Behavioral Signal Processing (BSP) tasks their application has been constrained due to limited quantity of data. We propose a Sparsely-Connected and Disjointly-Trained DNN (SD-DNN) framework to deal with limited data. First, we break the acoustic feature set into subsets and train multiple distinct classifiers. Then, the hidden layers of these classifiers become parts of a deeper network that integrates all feature streams. The overall system allows for full connectivity while limiting the number of parameters trained at any time, and makes convergence possible even with limited data. We present results on multiple behavior codes in the couples' therapy domain and demonstrate the benefits in behavior classification accuracy. We also show the viability of this system towards live behavior annotations.
[ "['Haoqi Li' 'Brian Baucom' 'Panayiotis Georgiou']", "Haoqi Li, Brian Baucom, Panayiotis Georgiou" ]
cs.LG
null
1606.04521
null
null
http://arxiv.org/pdf/1606.04521v1
2016-06-14T19:39:41Z
2016-06-14T19:39:41Z
Training variance and performance evaluation of neural networks in speech
In this work we study variance in the results of neural network training on a wide variety of configurations in automatic speech recognition. Although this variance itself is well known, this is, to the best of our knowledge, the first paper that performs an extensive empirical study on its effects in speech recognition. We view training as sampling from a distribution and show that these distributions can have a substantial variance. These results show the urgent need to rethink the way in which results in the literature are reported and interpreted.
[ "['Ewout van den Berg' 'Bhuvana Ramabhadran' 'Michael Picheny']", "Ewout van den Berg, Bhuvana Ramabhadran, Michael Picheny" ]
cs.LG cs.CR cs.NI
null
1606.04552
null
null
http://arxiv.org/pdf/1606.04552v1
2016-06-14T20:29:50Z
2016-06-14T20:29:50Z
A New Approach to Dimensionality Reduction for Anomaly Detection in Data Traffic
The monitoring and management of high-volume feature-rich traffic in large networks offers significant challenges in storage, transmission and computational costs. The predominant approach to reducing these costs is based on performing a linear mapping of the data to a low-dimensional subspace such that a certain large percentage of the variance in the data is preserved in the low-dimensional representation. This variance-based subspace approach to dimensionality reduction forces a fixed choice of the number of dimensions, is not responsive to real-time shifts in observed traffic patterns, and is vulnerable to normal traffic spoofing. Based on theoretical insights proved in this paper, we propose a new distance-based approach to dimensionality reduction motivated by the fact that the real-time structural differences between the covariance matrices of the observed and the normal traffic are more relevant to anomaly detection than the structure of the training data alone. Our approach, called the distance-based subspace method, allows a different number of reduced dimensions in different time windows and arrives at only the number of dimensions necessary for effective anomaly detection. We present centralized and distributed versions of our algorithm and, using simulation on real traffic traces, demonstrate the qualitative and quantitative advantages of the distance-based subspace approach.
[ "['Tingshan Huang' 'Harish Sethu' 'Nagarajan Kandasamy']", "Tingshan Huang, Harish Sethu and Nagarajan Kandasamy" ]
cs.LG cs.CE
null
1606.04561
null
null
http://arxiv.org/pdf/1606.04561v2
2016-07-18T16:04:09Z
2016-06-14T20:49:22Z
A two-stage learning method for protein-protein interaction prediction
In this paper, a new method for PPI (protein-protein interaction) prediction is proposed. In PPI prediction, a reliable and sufficient number of training samples is not available, but a large number of unlabeled samples is in hand. In the proposed method, denoising autoencoders are employed for learning robust features. The obtained robust features are used in order to train a classifier with a better performance. The experimental results demonstrate the capabilities of the proposed method. Keywords: protein-protein interaction; denoising autoencoder; robust features; unlabelled data.
[ "Amir Ahooye Atashin, Parsa Bagherzadeh, Kamaledin Ghiasi-Shirazi", "['Amir Ahooye Atashin' 'Parsa Bagherzadeh' 'Kamaledin Ghiasi-Shirazi']" ]
cs.LG cs.AI cs.NE
null
1606.04615
null
null
http://arxiv.org/pdf/1606.04615v1
2016-06-15T01:57:40Z
2016-06-15T01:57:40Z
Deep Reinforcement Learning With Macro-Actions
Deep reinforcement learning has been shown to be a powerful framework for learning policies from complex high-dimensional sensory inputs to actions in complex tasks, such as the Atari domain. In this paper, we explore output representation modeling in the form of temporal abstraction to improve convergence and reliability of deep reinforcement learning approaches. We concentrate on macro-actions, and evaluate these on different Atari 2600 games, where we show that they yield significant improvements in learning speed. Additionally, we show that they can even achieve better scores than DQN. We offer analysis and explanation for both convergence and final results, revealing a problem deep RL approaches have with sparse reward signals.
[ "['Ishan P. Durugkar' 'Clemens Rosenbaum' 'Stefan Dernbach'\n 'Sridhar Mahadevan']", "Ishan P. Durugkar, Clemens Rosenbaum, Stefan Dernbach, Sridhar\n Mahadevan" ]
stat.ML cs.LG
null
1606.04618
null
null
http://arxiv.org/pdf/1606.04618v1
2016-06-15T02:03:05Z
2016-06-15T02:03:05Z
Masking Strategies for Image Manifolds
We consider the problem of selecting an optimal mask for an image manifold, i.e., choosing a subset of the pixels of the image that preserves the manifold's geometric structure present in the original data. Such masking implements a form of compressive sensing through emerging imaging sensor platforms for which the power expense grows with the number of pixels acquired. Our goal is for the manifold learned from masked images to resemble its full image counterpart as closely as possible. More precisely, we show that one can indeed accurately learn an image manifold without having to consider a large majority of the image pixels. In doing so, we consider two masking methods that preserve the local and global geometric structure of the manifold, respectively. In each case, the process of finding the optimal masking pattern can be cast as a binary integer program, which is computationally expensive but can be approximated by a fast greedy algorithm. Numerical experiments show that the relevant manifold structure is preserved through the data-dependent masking process, even for modest mask sizes.
[ "Hamid Dadkhahi and Marco F. Duarte", "['Hamid Dadkhahi' 'Marco F. Duarte']" ]
cs.LG
null
1606.04624
null
null
http://arxiv.org/pdf/1606.04624v1
2016-06-15T02:44:02Z
2016-06-15T02:44:02Z
Finite-time Analysis for the Knowledge-Gradient Policy
We consider sequential decision problems in which we adaptively choose one of finitely many alternatives and observe a stochastic reward. We offer a new perspective of interpreting Bayesian ranking and selection problems as adaptive stochastic multi-set maximization problems and derive the first finite-time bound of the knowledge-gradient policy for adaptive submodular objective functions. In addition, we introduce the concept of prior-optimality and provide another insight into the performance of the knowledge gradient policy based on the submodular assumption on the value of information. We demonstrate submodularity for the two-alternative case and provide other conditions for more general problems, bringing out the issue and importance of submodularity in learning problems. Empirical experiments are conducted to further illustrate the finite time behavior of the knowledge gradient policy.
[ "Yingfei Wang and Warren Powell", "['Yingfei Wang' 'Warren Powell']" ]
cs.LG
null
1606.04646
null
null
http://arxiv.org/pdf/1606.04646v1
2016-06-15T05:26:29Z
2016-06-15T05:26:29Z
Unsupervised Learning of Predictors from Unpaired Input-Output Samples
Unsupervised learning is the most challenging problem in machine learning and especially in deep learning. Among many scenarios, we study an unsupervised learning problem of high economic value --- learning to predict without costly pairing of input data and corresponding labels. Part of the difficulty in this problem is a lack of solid evaluation measures. In this paper, we take a practical approach to grounding unsupervised learning by using the same success criterion as for supervised learning in prediction tasks but we do not require the presence of paired input-output training data. In particular, we propose an objective function that aims to make the predicted outputs fit well the structure of the output while preserving the correlation between the input and the predicted output. We experiment with a synthetic structural prediction problem and show that even with simple linear classifiers, the objective function is already highly non-convex. We further demonstrate the nature of this non-convex optimization problem as well as potential solutions. In particular, we show that with regularization via a generative model, learning with the proposed unsupervised objective function converges to an optimal solution.
[ "['Jianshu Chen' 'Po-Sen Huang' 'Xiaodong He' 'Jianfeng Gao' 'Li Deng']", "Jianshu Chen, Po-Sen Huang, Xiaodong He, Jianfeng Gao and Li Deng" ]
cs.LG
null
1606.04671
null
null
null
null
null
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
[ "Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert\n Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell" ]
cs.AI cs.LG
null
1606.04695
null
null
http://arxiv.org/pdf/1606.04695v1
2016-06-15T09:28:52Z
2016-06-15T09:28:52Z
Strategic Attentive Writer for Learning Macro-Actions
We present a novel deep recurrent neural network architecture that learns to build implicit plans in an end-to-end manner, purely by interacting with an environment in a reinforcement learning setting. The network builds an internal plan, which is continuously updated upon observation of the next input from the environment. It can also partition this internal representation into contiguous subsequences by learning for how long the plan can be committed to, i.e., followed without re-planning. Combining these properties, the proposed model, dubbed STRategic Attentive Writer (STRAW), can learn high-level, temporally abstracted macro-actions of varying lengths that are learnt solely from data without any prior information. These macro-actions enable both structured exploration and economic computation. We experimentally demonstrate that STRAW delivers strong improvements on several ATARI games (e.g. Ms. Pacman and Frostbite) by employing temporally extended planning strategies. It is at the same time a general algorithm that can be applied to any sequence data. To that end, we also show that when trained on a text-prediction task, STRAW naturally predicts frequent n-grams (instead of macro-actions), demonstrating the generality of the approach.
[ "Alexander (Sasha) Vezhnevets, Volodymyr Mnih, John Agapiou, Simon\n Osindero, Alex Graves, Oriol Vinyals, Koray Kavukcuoglu", "['Alexander' 'Vezhnevets' 'Volodymyr Mnih' 'John Agapiou' 'Simon Osindero'\n 'Alex Graves' 'Oriol Vinyals' 'Koray Kavukcuoglu']" ]
cs.LG cs.CR cs.DB stat.ML
null
1606.04722
null
null
http://arxiv.org/pdf/1606.04722v3
2017-03-23T17:35:09Z
2016-06-15T11:14:29Z
Bolt-on Differential Privacy for Scalable Stochastic Gradient Descent-based Analytics
While significant progress has been made separately on analytics systems for scalable stochastic gradient descent (SGD) and private SGD, none of the major scalable analytics frameworks have incorporated differentially private SGD. There are two inter-related issues for this disconnect between research and practice: (1) low model accuracy due to added noise to guarantee privacy, and (2) high development and runtime overhead of the private algorithms. This paper takes a first step to remedy this disconnect and proposes a private SGD algorithm to address \emph{both} issues in an integrated manner. In contrast to the white-box approach adopted by previous work, we revisit and use the classical technique of {\em output perturbation} to devise a novel "bolt-on" approach to private SGD. While our approach trivially addresses (2), it makes (1) even more challenging. We address this challenge by providing a novel analysis of the $L_2$-sensitivity of SGD, which allows, under the same privacy guarantees, better convergence of SGD when only a constant number of passes can be made over the data. We integrate our algorithm, as well as other state-of-the-art differentially private SGD, into Bismarck, a popular scalable SGD-based analytics system on top of an RDBMS. Extensive experiments show that our algorithm can be easily integrated, incurs virtually no overhead, scales well, and most importantly, yields substantially better (up to 4X) test accuracy than the state-of-the-art algorithms on many real datasets.
[ "['Xi Wu' 'Fengan Li' 'Arun Kumar' 'Kamalika Chaudhuri' 'Somesh Jha'\n 'Jeffrey F. Naughton']", "Xi Wu, Fengan Li, Arun Kumar, Kamalika Chaudhuri, Somesh Jha, Jeffrey\n F. Naughton" ]
cs.LG cs.NE cs.SD
null
1606.04750
null
null
http://arxiv.org/pdf/1606.04750v1
2016-06-15T13:14:05Z
2016-06-15T13:14:05Z
Multi-Modal Hybrid Deep Neural Network for Speech Enhancement
Deep Neural Networks (DNN) have been successful in enhancing noisy speech signals. Enhancement is achieved by learning a nonlinear mapping function from the features of the corrupted speech signal to those of the reference clean speech signal. The quality of predicted features can be improved by providing additional side-channel information that is robust to noise, such as visual cues. In this paper we propose a novel deep learning model inspired by insights from human audio-visual perception. In the proposed unified hybrid architecture, features from a Convolutional Neural Network (CNN) that processes the visual cues and features from a fully connected DNN that processes the audio signal are integrated using a Bidirectional Long Short-Term Memory (BiLSTM) network. The parameters of the hybrid model are jointly learned using backpropagation. We compare the quality of enhanced speech from the hybrid models with that from traditional DNN and BiLSTM models.
[ "Zhenzhou Wu, Sunil Sivadas, Yong Kiam Tan, Ma Bin, Rick Siow Mong Goh", "['Zhenzhou Wu' 'Sunil Sivadas' 'Yong Kiam Tan' 'Ma Bin'\n 'Rick Siow Mong Goh']" ]
cs.LG cs.AI cs.RO stat.ML
null
1606.04753
null
null
http://arxiv.org/pdf/1606.04753v2
2016-11-15T14:00:11Z
2016-06-15T13:18:30Z
Safe Exploration in Finite Markov Decision Processes with Gaussian Processes
In classical reinforcement learning, when exploring an environment, agents accept arbitrary short-term loss for long-term gain. This is infeasible for safety-critical applications, such as robotics, where even a single unsafe action may cause system failure. In this paper, we address the problem of safely exploring finite Markov decision processes (MDP). We define safety in terms of an a priori unknown safety constraint that depends on states and actions. We aim to explore the MDP under this constraint, assuming that the unknown function satisfies regularity conditions expressed via a Gaussian process prior. We develop a novel algorithm for this task and prove that it is able to completely explore the safely reachable part of the MDP without violating the safety constraint. To achieve this, it cautiously explores safe states and actions in order to gain statistical confidence about the safety of unvisited state-action pairs from noisy observations collected while navigating the environment. Moreover, the algorithm explicitly considers reachability when exploring the MDP, ensuring that it does not get stuck in any state with no safe way out. We demonstrate our method on digital terrain models for the task of exploring an unknown map with a rover.
[ "['Matteo Turchetta' 'Felix Berkenkamp' 'Andreas Krause']", "Matteo Turchetta, Felix Berkenkamp, Andreas Krause" ]
cs.NI cs.LG
null
1606.04778
null
null
http://arxiv.org/pdf/1606.04778v2
2017-03-28T03:52:23Z
2016-06-15T14:13:57Z
The Learning and Prediction of Application-level Traffic Data in Cellular Networks
Traffic learning and prediction is at the heart of evaluating the performance of telecommunications networks, and it attracts a lot of attention in wired broadband networks. Now, benefiting from the big data in cellular networks, it becomes possible to take the analyses one step further, to the application level. In this paper, we first collect a significant amount of application-level traffic data from cellular network operators. Afterwards, with the aid of this traffic "big data", we make a comprehensive study of the modeling and prediction framework for cellular network traffic. Our results solidly demonstrate that some statistical modeling characteristics hold universally, including an $\alpha$-stable model in the temporal domain and sparsity in the spatial domain. Meanwhile, the results also demonstrate distinctions originating from the uniqueness of different application service types. Furthermore, we propose a new traffic prediction framework to encompass and exploit these characteristics, and then develop a dictionary learning-based alternating direction method to solve it. Finally, we validate the prediction accuracy improvement and the robustness of the proposed framework through extensive simulation results.
[ "['Rongpeng Li' 'Zhifeng Zhao' 'Jianchao Zheng' 'Chengli Mei' 'Yueming Cai'\n 'Honggang Zhang']", "Rongpeng Li, Zhifeng Zhao, Jianchao Zheng, Chengli Mei, Yueming Cai,\n and Honggang Zhang" ]
cs.CV cs.LG cs.NE
null
1606.04801
null
null
http://arxiv.org/pdf/1606.04801v2
2016-06-16T06:53:55Z
2016-06-15T14:56:42Z
A Powerful Generative Model Using Random Weights for the Deep Image Representation
To what extent is the success of deep visualization due to training? Could we do deep visualization using untrained, random-weight networks? To address this issue, we explore new and powerful generative models for three popular deep visualization tasks using untrained, random-weight convolutional neural networks. First, we invert representations in feature spaces and reconstruct images from white-noise inputs. The reconstruction quality is statistically higher than that of the same method applied to well-trained networks with the same architecture. Next, we synthesize textures using scaled correlations of representations in multiple layers; our results are almost indistinguishable from the original natural texture and from textures synthesized using the trained network. Third, by recasting the content of an image in the style of various artworks, we create artistic images with high perceptual quality, highly competitive with the prior work of Gatys et al. on pretrained networks. To our knowledge, this is the first demonstration of image representations using untrained deep neural networks. Our work provides a new and fascinating tool for studying the representations of deep network architectures and sheds new light on deep visualization.
[ "Kun He and Yan Wang and John Hopcroft", "['Kun He' 'Yan Wang' 'John Hopcroft']" ]
math.OC cs.LG stat.ML
null
1606.04809
null
null
http://arxiv.org/pdf/1606.04809v3
2017-11-08T12:38:31Z
2016-06-15T15:12:01Z
ASAGA: Asynchronous Parallel SAGA
We describe ASAGA, an asynchronous parallel version of the incremental gradient algorithm SAGA that enjoys fast linear convergence rates. Through a novel perspective, we revisit and clarify a subtle but important technical issue present in a large fraction of the recent convergence rate proofs for asynchronous parallel optimization algorithms, and propose a simplification of the recently introduced "perturbed iterate" framework that resolves it. We thereby prove that ASAGA can obtain a theoretical linear speedup on multi-core systems even without sparsity assumptions. We present results of an implementation on a 40-core architecture illustrating the practical speedup as well as the hardware overhead.
[ "['Rémi Leblond' 'Fabian Pedregosa' 'Simon Lacoste-Julien']", "R\\'emi Leblond, Fabian Pedregosa and Simon Lacoste-Julien" ]
stat.ML cs.LG math.OC
null
1606.04838
null
null
http://arxiv.org/pdf/1606.04838v3
2018-02-08T20:40:22Z
2016-06-15T16:15:53Z
Optimization Methods for Large-Scale Machine Learning
This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. A major theme of our study is that large-scale machine learning represents a distinctive setting in which the stochastic gradient (SG) method has traditionally played a central role while conventional gradient-based nonlinear optimization techniques typically falter. Based on this viewpoint, we present a comprehensive theory of a straightforward, yet versatile SG algorithm, discuss its practical behavior, and highlight opportunities for designing algorithms with improved performance. This leads to a discussion about the next generation of optimization methods for large-scale machine learning, including an investigation of two main streams of research on techniques that diminish noise in the stochastic directions and methods that make use of second-order derivative approximations.
[ "L\\'eon Bottou, Frank E. Curtis, Jorge Nocedal", "['Léon Bottou' 'Frank E. Curtis' 'Jorge Nocedal']" ]
cs.LG cs.SD
null
1606.04930
null
null
http://arxiv.org/pdf/1606.04930v1
2016-06-15T19:38:14Z
2016-06-15T19:38:14Z
Deep Learning for Music
Our goal is to build a generative model from a deep neural network architecture that creates music with both harmony and melody, passable as music composed by humans. Previous work in music generation has mainly focused on creating a single melody. More recent work on polyphonic music modeling, centered around time-series probability density estimation, has met with partial success. In particular, there has been a lot of work based on Recurrent Neural Networks combined with Restricted Boltzmann Machines (RNN-RBM) and other similar recurrent energy-based models. Our approach, however, is to perform end-to-end learning and generation with deep neural nets alone.
[ "['Allen Huang' 'Raymond Wu']", "Allen Huang, Raymond Wu" ]
cs.LG stat.ML
null
1606.04934
null
null
http://arxiv.org/pdf/1606.04934v2
2017-01-30T20:36:01Z
2016-06-15T19:46:36Z
Improving Variational Inference with Inverse Autoregressive Flow
The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables. We propose a new type of normalizing flow, inverse autoregressive flow (IAF), that, in contrast to earlier published flows, scales well to high-dimensional latent spaces. The proposed flow consists of a chain of invertible transformations, where each transformation is based on an autoregressive neural network. In experiments, we show that IAF significantly improves upon diagonal Gaussian approximate posteriors. In addition, we demonstrate that a novel type of variational autoencoder, coupled with IAF, is competitive with neural autoregressive models in terms of attained log-likelihood on natural images, while allowing significantly faster synthesis.
[ "Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya\n Sutskever and Max Welling", "['Diederik P. Kingma' 'Tim Salimans' 'Rafal Jozefowicz' 'Xi Chen'\n 'Ilya Sutskever' 'Max Welling']" ]
cs.CV cs.LG stat.ML
null
1606.04985
null
null
http://arxiv.org/pdf/1606.04985v1
2016-06-15T21:19:54Z
2016-06-15T21:19:54Z
Combining multiscale features for classification of hyperspectral images: a sequence based kernel approach
Nowadays, hyperspectral image classification widely incorporates spatial information to improve accuracy. One of the most popular ways to integrate such information is to extract hierarchical features from a multiscale segmentation. In the classification context, the extracted features are commonly concatenated into a long vector (also called a stacked vector), to which a conventional vector-based machine learning technique (e.g. an SVM with a Gaussian kernel) is applied. In this paper, we instead propose to use a sequence-structured kernel: the spectrum kernel. We show that the conventional stacked-vector-based kernel is actually a special case of this kernel. Experiments conducted on various publicly available hyperspectral datasets illustrate the improvement of the proposed kernel over conventional ones using the same hierarchical spatial features.
[ "['Yanwei Cui' 'Laetitia Chapel' 'Sébastien Lefèvre']", "Yanwei Cui, Laetitia Chapel, S\\'ebastien Lef\\`evre" ]
stat.ML cs.LG
null
1606.04988
null
null
http://arxiv.org/pdf/1606.04988v2
2016-12-01T02:09:04Z
2016-06-15T21:27:43Z
Logarithmic Time One-Against-Some
We create a new online reduction of multiclass classification to binary classification for which training and prediction time scale logarithmically with the number of classes. Compared to previous approaches, we obtain substantially better statistical performance for two reasons: First, we prove a tighter and more complete boosting theorem, and second we translate the results more directly into an algorithm. We show that several simple techniques give rise to an algorithm that can compete with one-against-all in both space and predictive power while offering exponential improvements in speed when the number of classes is large.
[ "Hal Daume III, Nikos Karampatziakis, John Langford, Paul Mineiro", "['Hal Daume III' 'Nikos Karampatziakis' 'John Langford' 'Paul Mineiro']" ]
cs.LG math.OC stat.ML
null
1606.04991
null
null
http://arxiv.org/pdf/1606.04991v1
2016-06-15T21:34:46Z
2016-06-15T21:34:46Z
A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning
We consider learning problems over training sets in which both the number of training examples and the dimension of the feature vectors are large. To solve these problems we propose the random parallel stochastic algorithm (RAPSA). We call the algorithm random parallel because it utilizes multiple parallel processors to operate on a randomly chosen subset of blocks of the feature vector. We call the algorithm stochastic because processors choose training subsets uniformly at random. Algorithms that are parallel in either of these dimensions exist, but RAPSA is the first attempt at a methodology that is parallel in both the selection of blocks and the selection of elements of the training set. In RAPSA, processors utilize the randomly chosen functions to compute the stochastic gradient component associated with a randomly chosen block. The technical contribution of this paper is to show that this minimally coordinated algorithm converges to the optimal classifier when the training objective is convex. Moreover, we present an accelerated version of RAPSA (ARAPSA) that incorporates the objective function's curvature information by premultiplying the descent direction by a Hessian approximation matrix. We further extend the results to asynchronous settings and show that if the processors perform their updates without any coordination the algorithms still converge to the optimal argument. RAPSA and its extensions are then numerically evaluated on a linear estimation problem and a binary image classification task using the MNIST handwritten digit dataset.
[ "['Aryan Mokhtari' 'Alec Koppel' 'Alejandro Ribeiro']", "Aryan Mokhtari and Alec Koppel and Alejandro Ribeiro" ]
cs.CL cs.LG cs.SD
null
1606.05007
null
null
http://arxiv.org/pdf/1606.05007v1
2016-06-15T23:45:33Z
2016-06-15T23:45:33Z
Automatic Pronunciation Generation by Utilizing a Semi-supervised Deep Neural Networks
Phonemic or phonetic sub-word units are the most commonly used atomic elements for representing speech signals in modern ASR systems. However, they are not the optimal choice, for several reasons: the large amount of effort required to handcraft a pronunciation dictionary, pronunciation variations, human mistakes, and under-resourced dialects and languages. Here, we propose a data-driven pronunciation estimation and acoustic modeling method which takes only the orthographic transcription to jointly estimate a set of sub-word units and a reliable dictionary. Experimental results show that the proposed method, which is based on semi-supervised training of a deep neural network, largely outperforms phoneme-based continuous speech recognition on the TIMIT dataset.
[ "['Naoya Takahashi' 'Tofigh Naghibi' 'Beat Pfister']", "Naoya Takahashi, Tofigh Naghibi, Beat Pfister" ]
stat.ML cs.LG cs.NE
null
1606.05018
null
null
http://arxiv.org/pdf/1606.05018v1
2016-06-16T00:53:56Z
2016-06-16T00:53:56Z
Improving Power Generation Efficiency using Deep Neural Networks
Recently there has been significant research on power generation, distribution and transmission efficiency especially in the case of renewable resources. The main objective is reduction of energy losses and this requires improvements on data acquisition and analysis. In this paper we address these concerns by using consumers' electrical smart meter readings to estimate network loading and this information can then be used for better capacity planning. We compare Deep Neural Network (DNN) methods with traditional methods for load forecasting. Our results indicate that DNN methods outperform most traditional methods. This comes at the cost of additional computational complexity but this can be addressed with the use of cloud resources. We also illustrate how these results can be used to better support dynamic pricing.
[ "['Stefan Hosein' 'Patrick Hosein']", "Stefan Hosein and Patrick Hosein" ]
stat.ML cs.LG
null
1606.05027
null
null
http://arxiv.org/pdf/1606.05027v2
2017-02-22T06:12:56Z
2016-06-16T01:55:33Z
Learning Optimal Interventions
Our goal is to identify beneficial interventions from observational data. We consider interventions that are narrowly focused (impacting few covariates) and may be tailored to each individual or globally enacted over a population. For applications where harmful intervention is drastically worse than proposing no change, we propose a conservative definition of the optimal intervention. Assuming the underlying relationship remains invariant under intervention, we develop efficient algorithms to identify the optimal intervention policy from limited data and provide theoretical guarantees for our approach in a Gaussian Process setting. Although our methods assume covariates can be precisely adjusted, they remain capable of improving outcomes in misspecified settings where interventions incur unintentional downstream effects. Empirically, our approach identifies good interventions in two practical applications: gene perturbation and writing improvement.
[ "['Jonas Mueller' 'David N. Reshef' 'George Du' 'Tommi Jaakkola']", "Jonas Mueller, David N. Reshef, George Du, Tommi Jaakkola" ]
stat.ML cs.LG
null
1606.05060
null
null
http://arxiv.org/pdf/1606.05060v1
2016-06-16T05:56:36Z
2016-06-16T05:56:36Z
Pruning Random Forests for Prediction on a Budget
We propose to prune a random forest (RF) for resource-constrained prediction. We first construct an RF and then prune it to optimize expected feature cost and accuracy. We pose pruning RFs as a novel 0-1 integer program with linear constraints that encourages feature re-use. We establish total unimodularity of the constraint set to prove that the corresponding LP relaxation solves the original integer program. We then exploit connections to combinatorial optimization and develop an efficient primal-dual algorithm, scalable to large datasets. In contrast to our bottom-up approach, which benefits from good RF initialization, conventional methods are top-down, acquiring features based on their utility value, and are generally intractable, requiring heuristics. Empirically, our pruning algorithm outperforms existing state-of-the-art resource-constrained algorithms.
[ "Feng Nan, Joseph Wang, Venkatesh Saligrama", "['Feng Nan' 'Joseph Wang' 'Venkatesh Saligrama']" ]
stat.ML cs.CV cs.IT cs.LG math.IT
null
1606.05228
null
null
http://arxiv.org/pdf/1606.05228v1
2016-06-16T15:38:20Z
2016-06-16T15:38:20Z
How many faces can be recognized? Performance extrapolation for multi-class classification
The difficulty of multi-class classification generally increases with the number of classes. Using data from a subset of the classes, can we predict how well a classifier will scale with an increased number of classes? Under the assumption that the classes are sampled exchangeably, and under the assumption that the classifier is generative (e.g. QDA or Naive Bayes), we show that the expected accuracy when the classifier is trained on $k$ classes is the $(k-1)$-st moment of a \emph{conditional accuracy distribution}, which can be estimated from data. This provides the theoretical foundation for performance extrapolation based on pseudolikelihood, unbiased estimation, and high-dimensional asymptotics. We investigate the robustness of our methods to non-generative classifiers in simulations and one optical character recognition example.
[ "['Charles Y. Zheng' 'Rakesh Achanta' 'Yuval Benjamini']", "Charles Y. Zheng, Rakesh Achanta, and Yuval Benjamini" ]
cs.CV cs.LG
null
1606.05233
null
null
http://arxiv.org/pdf/1606.05233v1
2016-06-16T15:49:26Z
2016-06-16T15:49:26Z
Learning feed-forward one-shot learners
One-shot learning is usually tackled by using generative models or discriminative embeddings. Discriminative methods based on deep learning, which are very effective in other learning scenarios, are ill-suited for one-shot learning as they need large amounts of training data. In this paper, we propose a method to learn the parameters of a deep model in one shot. We construct the learner as a second deep network, called a learnet, which predicts the parameters of a pupil network from a single exemplar. In this manner we obtain an efficient feed-forward one-shot learner, trained end-to-end by minimizing a one-shot classification objective in a learning to learn formulation. In order to make the construction feasible, we propose a number of factorizations of the parameters of the pupil network. We demonstrate encouraging results by learning characters from single exemplars in Omniglot, and by tracking visual objects from a single initial exemplar in the Visual Object Tracking benchmark.
[ "Luca Bertinetto, Jo\\~ao F. Henriques, Jack Valmadre, Philip H. S.\n Torr, Andrea Vedaldi", "['Luca Bertinetto' 'João F. Henriques' 'Jack Valmadre' 'Philip H. S. Torr'\n 'Andrea Vedaldi']" ]
math.ST cs.LG stat.TH
null
1606.05302
null
null
http://arxiv.org/pdf/1606.05302v1
2016-06-16T18:21:45Z
2016-06-16T18:21:45Z
Generalized Direct Change Estimation in Ising Model Structure
We consider the problem of estimating change in the dependency structure between two $p$-dimensional Ising models, based on respectively $n_1$ and $n_2$ samples drawn from the models. The change is assumed to be structured, e.g., sparse, block sparse, node-perturbed sparse, etc., such that it can be characterized by a suitable (atomic) norm. We present and analyze a norm-regularized estimator for directly estimating the change in structure, without having to estimate the structures of the individual Ising models. The estimator can work with any norm, and can be generalized to other graphical models under mild assumptions. We show that only one set of samples, say $n_2$, needs to satisfy the sample complexity requirement for the estimator to work, and the estimation error decreases as $\frac{c}{\sqrt{\min(n_1,n_2)}}$, where $c$ depends on the Gaussian width of the unit norm ball. For example, for $\ell_1$ norm applied to $s$-sparse change, the change can be accurately estimated with $\min(n_1,n_2)=O(s \log p)$ which is sharper than an existing result $n_1= O(s^2 \log p)$ and $n_2 = O(n_1^2)$. Experimental results illustrating the effectiveness of the proposed estimator are presented.
[ "['Farideh Fazayeli' 'Arindam Banerjee']", "Farideh Fazayeli and Arindam Banerjee" ]
cs.LG cs.AI stat.ML
null
1606.05313
null
null
http://arxiv.org/pdf/1606.05313v1
2016-06-16T18:48:51Z
2016-06-16T18:48:51Z
Unsupervised Risk Estimation Using Only Conditional Independence Structure
We show how to estimate a model's test error from unlabeled data, on distributions very different from the training distribution, while assuming only that certain conditional independencies are preserved between train and test. We do not need to assume that the optimal predictor is the same between train and test, or that the true distribution lies in any parametric family. We can also efficiently differentiate the error estimate to perform unsupervised discriminative learning. Our technical tool is the method of moments, which allows us to exploit conditional independencies in the absence of a fully-specified model. Our framework encompasses a large family of losses including the log and exponential loss, and extends to structured output settings such as hidden Markov models.
[ "['Jacob Steinhardt' 'Percy Liang']", "Jacob Steinhardt and Percy Liang" ]
cs.LG
null
1606.05316
null
null
http://arxiv.org/pdf/1606.05316v2
2017-07-28T04:13:58Z
2016-06-16T19:02:14Z
Learning Infinite-Layer Networks: Without the Kernel Trick
Infinite-Layer Networks (ILN) have recently been proposed as an architecture that mimics neural networks while enjoying some of the advantages of kernel methods. ILN are networks that integrate over infinitely many nodes within a single hidden layer. It has been demonstrated by several authors that the problem of learning ILN can be reduced to the kernel trick, implying that whenever a certain integral can be computed analytically they are efficiently learnable. In this work we give an online algorithm for ILN which avoids the kernel trick assumption. More generally, and of independent interest, we show that kernel methods in general can be exploited even when the kernel cannot be efficiently computed but can only be estimated via sampling. We provide a regret analysis for our algorithm, showing that it matches the sample complexity of methods which have access to kernel values. Thus, our method is the first to demonstrate that the kernel trick is not necessary as such, and that random features suffice to obtain comparable performance.
[ "Roi Livni and Daniel Carmon and Amir Globerson", "['Roi Livni' 'Daniel Carmon' 'Amir Globerson']" ]
stat.ML cs.CL cs.LG
null
1606.05320
null
null
http://arxiv.org/pdf/1606.05320v2
2016-09-30T22:20:39Z
2016-06-16T19:13:52Z
Increasing the Interpretability of Recurrent Neural Networks Using Hidden Markov Models
As deep neural networks continue to revolutionize various application domains, there is increasing interest in making these powerful models more understandable and interpretable, and narrowing down the causes of good and bad predictions. We focus on recurrent neural networks (RNNs), state of the art models in speech recognition and translation. Our approach to increasing interpretability is by combining an RNN with a hidden Markov model (HMM), a simpler and more transparent model. We explore various combinations of RNNs and HMMs: an HMM trained on LSTM states; a hybrid model where an HMM is trained first, then a small LSTM is given HMM state distributions and trained to fill in gaps in the HMM's performance; and a jointly trained hybrid model. We find that the LSTM and HMM learn complementary information about the features in the text.
[ "['Viktoriya Krakovna' 'Finale Doshi-Velez']", "Viktoriya Krakovna, Finale Doshi-Velez" ]
stat.ML cs.LG
null
1606.05325
null
null
http://arxiv.org/pdf/1606.05325v1
2016-06-16T19:36:51Z
2016-06-16T19:36:51Z
ACDC: $\alpha$-Carving Decision Chain for Risk Stratification
In many healthcare settings, intuitive decision rules for risk stratification can support effective hospital resource allocation. This paper introduces a novel variant of decision tree algorithms that produces a chain of decisions, not a general tree. Our algorithm, $\alpha$-Carving Decision Chain (ACDC), sequentially carves out "pure" subsets of the majority-class examples. The resulting chain of decision rules yields a pure subset of the minority-class examples. Our approach is particularly effective in exploring large and class-imbalanced health datasets. Moreover, ACDC provides an interactive interpretation in conjunction with visual performance metrics such as the Receiver Operating Characteristic curve and the Lift chart.
[ "Yubin Park and Joyce Ho and Joydeep Ghosh", "['Yubin Park' 'Joyce Ho' 'Joydeep Ghosh']" ]
cs.CV cs.LG
null
1606.05328
null
null
http://arxiv.org/pdf/1606.05328v2
2016-06-18T15:44:24Z
2016-06-16T19:40:56Z
Conditional Image Generation with PixelCNN Decoders
This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.
[ "['Aaron van den Oord' 'Nal Kalchbrenner' 'Oriol Vinyals' 'Lasse Espeholt'\n 'Alex Graves' 'Koray Kavukcuoglu']", "Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt,\n Alex Graves, Koray Kavukcuoglu" ]
stat.ML cs.AI cs.LG
null
1606.05336
null
null
http://arxiv.org/pdf/1606.05336v6
2017-06-18T13:24:34Z
2016-06-16T19:55:29Z
On the Expressive Power of Deep Neural Networks
We propose a new approach to the problem of neural network expressivity, which seeks to characterize how structural properties of a neural network family affect the functions it is able to compute. Our approach is based on an interrelated set of measures of expressivity, unified by the novel notion of trajectory length, which measures how the output of a network changes as the input sweeps along a one-dimensional path. Our findings can be summarized as follows: (1) The complexity of the computed function grows exponentially with depth. (2) All weights are not equal: trained networks are more sensitive to their lower (initial) layer weights. (3) Regularizing on trajectory length (trajectory regularization) is a simpler alternative to batch normalization, with the same performance.
[ "Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, Jascha\n Sohl-Dickstein", "['Maithra Raghu' 'Ben Poole' 'Jon Kleinberg' 'Surya Ganguli'\n 'Jascha Sohl-Dickstein']" ]
stat.ML cond-mat.dis-nn cs.LG
null
1606.05340
null
null
http://arxiv.org/pdf/1606.05340v2
2016-06-17T18:13:20Z
2016-06-16T19:59:57Z
Exponential expressivity in deep neural networks through transient chaos
We combine Riemannian geometry with the mean field theory of high dimensional chaos to study the nature of signal propagation in generic, deep neural networks with random weights. Our results reveal an order-to-chaos expressivity phase transition, with networks in the chaotic phase computing nonlinear functions whose global curvature grows exponentially with depth but not width. We prove this generic class of deep random functions cannot be efficiently computed by any shallow network, going beyond prior work restricted to the analysis of single functions. Moreover, we formalize and quantitatively demonstrate the long conjectured idea that deep networks can disentangle highly curved manifolds in input space into flat manifolds in hidden space. Our theoretical analysis of the expressive power of deep networks broadly applies to arbitrary nonlinearities, and provides a quantitative underpinning for previously abstract notions about the geometry of deep functions.
[ "Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein,\n Surya Ganguli", "['Ben Poole' 'Subhaneil Lahiri' 'Maithra Raghu' 'Jascha Sohl-Dickstein'\n 'Surya Ganguli']" ]
cs.HC cs.CR cs.DS cs.GT cs.LG
null
1606.05374
null
null
http://arxiv.org/pdf/1606.05374v1
2016-06-16T21:45:14Z
2016-06-16T21:45:14Z
Avoiding Imposters and Delinquents: Adversarial Crowdsourcing and Peer Prediction
We consider a crowdsourcing model in which $n$ workers are asked to rate the quality of $n$ items previously generated by other workers. An unknown set of $\alpha n$ workers generate reliable ratings, while the remaining workers may behave arbitrarily and possibly adversarially. The manager of the experiment can also manually evaluate the quality of a small number of items, and wishes to curate together almost all of the high-quality items with at most an $\epsilon$ fraction of low-quality items. Perhaps surprisingly, we show that this is possible with an amount of work required of the manager, and each worker, that does not scale with $n$: the dataset can be curated with $\tilde{O}\Big(\frac{1}{\beta\alpha^3\epsilon^4}\Big)$ ratings per worker, and $\tilde{O}\Big(\frac{1}{\beta\epsilon^2}\Big)$ ratings by the manager, where $\beta$ is the fraction of high-quality items. Our results extend to the more general setting of peer prediction, including peer grading in online classrooms.
[ "['Jacob Steinhardt' 'Gregory Valiant' 'Moses Charikar']", "Jacob Steinhardt and Gregory Valiant and Moses Charikar" ]
cs.LG stat.AP stat.ML
10.1109/RAM.2018.8463127
1606.05382
null
null
http://arxiv.org/abs/1606.05382v3
2016-09-25T22:15:38Z
2016-06-16T23:18:23Z
Sampling Method for Fast Training of Support Vector Data Description
Support Vector Data Description (SVDD) is a popular outlier detection technique which constructs a flexible description of the input data. SVDD computation time is high for large training datasets, which limits its use in big-data process-monitoring applications. We propose a new iterative sampling-based method for SVDD training. The method incrementally learns the training data description at each iteration by computing SVDD on an independent random sample selected with replacement from the training data set. The experimental results indicate that the proposed method is extremely fast and provides a good data description.
[ "['Arin Chaudhuri' 'Deovrat Kakde' 'Maria Jahja' 'Wei Xiao' 'Hansi Jiang'\n 'Seunghyun Kong' 'Sergiy Peredriy']", "Arin Chaudhuri, Deovrat Kakde, Maria Jahja, Wei Xiao, Hansi Jiang,\n Seunghyun Kong, Sergiy Peredriy" ]
stat.ML cs.LG
null
1606.05386
null
null
http://arxiv.org/pdf/1606.05386v1
2016-06-16T23:39:41Z
2016-06-16T23:39:41Z
Model-Agnostic Interpretability of Machine Learning
Understanding why machine learning models behave the way they do empowers both system designers and end-users in many ways: in model selection, feature engineering, in order to trust and act upon the predictions, and in more intuitive user interfaces. Thus, interpretability has become a vital concern in machine learning, and work in the area of interpretable models has found renewed interest. In some applications, such models are as accurate as non-interpretable ones, and thus are preferred for their transparency. Even when they are not accurate, they may still be preferred when interpretability is of paramount importance. However, restricting machine learning to interpretable models is often a severe limitation. In this paper we argue for explaining machine learning predictions using model-agnostic approaches. By treating the machine learning models as black-box functions, these approaches provide crucial flexibility in the choice of models, explanations, and representations, improving debugging, comparison, and interfaces for a variety of users and models. We also outline the main challenges for such methods, and review a recently-introduced model-agnostic explanation approach (LIME) that addresses these challenges.
[ "Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin", "['Marco Tulio Ribeiro' 'Sameer Singh' 'Carlos Guestrin']" ]
cs.LO cs.AI cs.LG
10.4204/EPTCS.210
1606.05427
null
null
http://arxiv.org/abs/1606.05427v1
2016-06-17T06:52:32Z
2016-06-17T06:52:32Z
Proceedings First International Workshop on Hammers for Type Theories
This volume of EPTCS contains the proceedings of the First Workshop on Hammers for Type Theories (HaTT 2016), held on 1 July 2016 as part of the International Joint Conference on Automated Reasoning (IJCAR 2016) in Coimbra, Portugal. The proceedings contain four regular papers, as well as abstracts of the two invited talks by Pierre Corbineau (Verimag, France) and Aleksy Schubert (University of Warsaw, Poland).
[ "['Jasmin Christian Blanchette' 'Cezary Kaliszyk']", "Jasmin Christian Blanchette, Cezary Kaliszyk" ]
cs.CL cs.LG cs.NE
null
1606.05464
null
null
http://arxiv.org/pdf/1606.05464v2
2016-09-26T20:49:16Z
2016-06-17T09:39:47Z
Stance Detection with Bidirectional Conditional Encoding
Stance detection is the task of classifying the attitude expressed in a text towards a target such as Hillary Clinton as "positive", "negative" or "neutral". Previous work has assumed that either the target is mentioned in the text or that training data for every target is given. This paper considers the more challenging version of this task, where targets are not always mentioned and no training data is available for the test targets. We experiment with conditional LSTM encoding, which builds a representation of the tweet that is dependent on the target, and demonstrate that it outperforms encoding the tweet and the target independently. Performance is improved further when the conditional model is augmented with bidirectional encoding. We evaluate our approach on the SemEval 2016 Task 6 Twitter Stance Detection corpus, achieving performance second only to a system trained on semi-automatically labelled tweets for the test target. When such weak supervision is added, our approach achieves state-of-the-art results.
[ "Isabelle Augenstein and Tim Rockt\\\"aschel and Andreas Vlachos and\n Kalina Bontcheva", "['Isabelle Augenstein' 'Tim Rocktäschel' 'Andreas Vlachos'\n 'Kalina Bontcheva']" ]
cs.CL cs.LG cs.NE
null
1606.05554
null
null
http://arxiv.org/pdf/1606.05554v1
2016-06-17T15:15:18Z
2016-06-17T15:15:18Z
SMS Spam Filtering using Probabilistic Topic Modelling and Stacked Denoising Autoencoder
In this paper we present a novel approach to spam filtering and demonstrate its applicability with respect to SMS messages. Our approach requires minimal feature engineering and a small set of labelled data samples. Features are extracted using topic modelling based on latent Dirichlet allocation, and then a comprehensive data model is created using a Stacked Denoising Autoencoder (SDA). Topic modelling summarises the data, providing ease of use and high interpretability by visualising the topics using word clouds. Given that the SMS messages can be regarded as either spam (unwanted) or ham (wanted), the SDA is able to model the messages and accurately discriminate between the two classes without the need for a pre-labelled training set. The results are compared against state-of-the-art spam detection algorithms, with our proposed approach achieving over 97% accuracy, which compares favourably to the best reported algorithms in the literature.
[ "Noura Al Moubayed, Toby Breckon, Peter Matthews, and A. Stephen\n McGough", "['Noura Al Moubayed' 'Toby Breckon' 'Peter Matthews' 'A. Stephen McGough']" ]
stat.ML cs.LG
null
1606.05572
null
null
http://arxiv.org/pdf/1606.05572v1
2016-06-17T15:58:24Z
2016-06-17T15:58:24Z
Learning Interpretable Musical Compositional Rules and Traces
Throughout music history, theorists have identified and documented interpretable rules that capture the decisions of composers. This paper asks, "Can a machine behave like a music theorist?" It presents MUS-ROVER, a self-learning system for automatically discovering rules from symbolic music. MUS-ROVER performs feature learning via $n$-gram models to extract compositional rules --- statistical patterns over the resulting features. We evaluate MUS-ROVER on Bach's (SATB) chorales, demonstrating that it can recover known rules, as well as identify new, characteristic patterns for further study. We discuss how the extracted rules can be used in both machine and human composition.
[ "['Haizi Yu' 'Lav R. Varshney' 'Guy E. Garnett' 'Ranjitha Kumar']", "Haizi Yu, Lav R. Varshney, Guy E. Garnett, Ranjitha Kumar" ]
stat.ML cs.LG q-bio.NC
null
1606.05579
null
null
http://arxiv.org/pdf/1606.05579v3
2016-09-20T09:30:26Z
2016-06-17T16:19:46Z
Early Visual Concept Learning with Unsupervised Deep Learning
Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model by applying the same learning pressures as have been suggested to act in the ventral visual stream in the brain. By enforcing redundancy reduction, encouraging statistical independence, and exposure to data with transform continuities analogous to those to which human infants are exposed, we obtain a variational autoencoder (VAE) framework capable of learning disentangled factors. Our approach makes few assumptions and works well across a wide variety of datasets. Furthermore, our solution has useful emergent properties, such as zero-shot inference and an intuitive understanding of "objectness".
[ "['Irina Higgins' 'Loic Matthey' 'Xavier Glorot' 'Arka Pal' 'Benigno Uria'\n 'Charles Blundell' 'Shakir Mohamed' 'Alexander Lerchner']", "Irina Higgins, Loic Matthey, Xavier Glorot, Arka Pal, Benigno Uria,\n Charles Blundell, Shakir Mohamed, Alexander Lerchner" ]
stat.ML cs.LG
null
1606.05596
null
null
http://arxiv.org/pdf/1606.05596v1
2016-06-17T17:31:51Z
2016-06-17T17:31:51Z
Ground Truth Bias in External Cluster Validity Indices
It has been noticed that some external cluster validity indices (CVIs) exhibit a preferential bias towards a larger or smaller number of clusters which is monotonic (directly or inversely) in the number of clusters in candidate partitions. This type of bias is caused by the functional form of the CVI model. For example, the popular Rand index (RI) exhibits a monotone increasing (NCinc) bias, while the Jaccard index (JI) suffers from a monotone decreasing (NCdec) bias. This type of bias has been previously recognized in the literature. In this work, we identify a new type of bias arising from the distribution of the ground truth (reference) partition against which candidate partitions are compared. We call this new type of bias ground truth (GT) bias. This type of bias occurs if a change in the reference partition causes a change in the bias status (e.g., NCinc, NCdec) of a CVI. For example, NCinc bias in the RI can be changed to NCdec bias by skewing the distribution of clusters in the ground truth partition. It is important for users to be aware of this new type of biased behaviour, since it may affect the interpretation of CVI results. The objective of this article is to study the empirical and theoretical implications of GT bias. To the best of our knowledge, this is the first extensive study of such a property for external cluster validity indices.
[ "Yang Lei, James C. Bezdek, Simone Romano, Nguyen Xuan Vinh, Jeffrey\n Chan and James Bailey", "['Yang Lei' 'James C. Bezdek' 'Simone Romano' 'Nguyen Xuan Vinh'\n 'Jeffrey Chan' 'James Bailey']" ]
cs.LG cs.DS
null
1606.05615
null
null
http://arxiv.org/pdf/1606.05615v5
2019-05-06T16:06:35Z
2016-06-17T18:15:52Z
Guaranteed Non-convex Optimization: Submodular Maximization over Continuous Domains
Submodular continuous functions are a category of (generally) non-convex/non-concave functions with a wide spectrum of applications. We characterize these functions and demonstrate that they can be maximized efficiently with approximation guarantees. Specifically, i) We introduce the weak DR property that gives a unified characterization of submodularity for all set, integer-lattice and continuous functions; ii) for maximizing monotone DR-submodular continuous functions under general down-closed convex constraints, we propose a Frank-Wolfe variant with $(1-1/e)$ approximation guarantee, and sub-linear convergence rate; iii) for maximizing general non-monotone submodular continuous functions subject to box constraints, we propose a DoubleGreedy algorithm with $1/3$ approximation guarantee. Submodular continuous functions naturally find applications in various real-world settings, including influence and revenue maximization with continuous assignments, sensor energy management, multi-resolution data summarization, facility location, etc. Experimental results show that the proposed algorithms efficiently generate superior solutions compared to baseline algorithms.
[ "['Andrew An Bian' 'Baharan Mirzasoleiman' 'Joachim M. Buhmann'\n 'Andreas Krause']", "Andrew An Bian, Baharan Mirzasoleiman, Joachim M. Buhmann, Andreas\n Krause" ]
stat.ML cs.LG q-bio.NC
null
1606.05642
null
null
http://arxiv.org/pdf/1606.05642v2
2017-03-01T20:31:24Z
2016-06-17T19:54:43Z
Balancing New Against Old Information: The Role of Surprise in Learning
Surprise describes a range of phenomena from unexpected events to behavioral responses. We propose a measure of surprise and use it for surprise-driven learning. Our surprise measure takes into account data likelihood as well as the degree of commitment to a belief via the entropy of the belief distribution. We find that surprise-minimizing learning dynamically adjusts the balance between new and old information without the need of knowledge about the temporal statistics of the environment. We apply our framework to a dynamic decision-making task and a maze exploration task. Our surprise minimizing framework is suitable for learning in complex environments, even if the environment undergoes gradual or sudden changes and could eventually provide a framework to study the behavior of humans and animals encountering surprising events.
[ "['Mohammadjavad Faraji' 'Kerstin Preuschoff' 'Wulfram Gerstner']", "Mohammadjavad Faraji, Kerstin Preuschoff, Wulfram Gerstner" ]
cs.LG
10.1063/1.4972718
1606.05664
null
null
http://arxiv.org/abs/1606.05664v1
2016-05-31T09:54:46Z
2016-05-31T09:54:46Z
Linear Classification of data with Support Vector Machines and Generalized Support Vector Machines
In this paper, we study the support vector machine and introduce the notion of generalized support vector machine for classification of data. We show that the problem of the generalized support vector machine is equivalent to the problem of generalized variational inequality and establish various results for the existence of solutions. Moreover, we provide various examples to support our results.
[ "['Xiaomin Qi' 'Sergei Silvestrov' 'Talat Nazir']", "Xiaomin Qi, Sergei Silvestrov and Talat Nazir" ]
stat.ML cs.LG
null
1606.05685
null
null
http://arxiv.org/pdf/1606.05685v2
2016-06-21T18:06:13Z
2016-06-17T21:56:43Z
Using Visual Analytics to Interpret Predictive Machine Learning Models
It is commonly believed that increasing the interpretability of a machine learning model may decrease its predictive power. However, inspecting input-output relationships of those models using visual analytics, while treating them as black-box, can help to understand the reasoning behind outcomes without sacrificing predictive quality. We identify a space of possible solutions and provide two examples of where such techniques have been successfully used in practice.
[ "Josua Krause, Adam Perer, Enrico Bertini", "['Josua Krause' 'Adam Perer' 'Enrico Bertini']" ]
cs.DC cs.LG
null
1606.05688
null
null
http://arxiv.org/pdf/1606.05688v1
2016-06-17T22:16:39Z
2016-06-17T22:16:39Z
ZNNi - Maximizing the Inference Throughput of 3D Convolutional Networks on Multi-Core CPUs and GPUs
Sliding window convolutional networks (ConvNets) have become a popular approach to computer vision problems such as image segmentation, and object detection and localization. Here we consider the problem of inference, the application of a previously trained ConvNet, with emphasis on 3D images. Our goal is to maximize throughput, defined as the average number of output voxels computed per unit time. Other things being equal, processing a larger image tends to increase throughput, because fractionally less computation is wasted on the borders of the image. It follows that an apparently slower algorithm may end up having higher throughput if it can process a larger image within the constraint of the available RAM. We introduce novel CPU and GPU primitives for convolutional and pooling layers, which are designed to minimize memory overhead. The primitives include convolution based on highly efficient pruned FFTs. Our theoretical analyses and empirical tests reveal a number of interesting findings. For some ConvNet architectures, cuDNN is outperformed by our FFT-based GPU primitives, and these in turn can be outperformed by our CPU primitives. The CPU manages to achieve higher throughput because of its fast access to more RAM. A novel primitive in which the GPU accesses host RAM can significantly increase GPU throughput. Finally, a CPU-GPU algorithm achieves the greatest throughput of all, 10x or more than other publicly available implementations of sliding window 3D ConvNets. All of our code has been made available as an open source project.
[ "['Aleksandar Zlateski' 'Kisuk Lee' 'H. Sebastian Seung']", "Aleksandar Zlateski, Kisuk Lee and H. Sebastian Seung" ]
stat.ML cs.LG
null
1606.05693
null
null
http://arxiv.org/pdf/1606.05693v1
2016-06-17T22:31:01Z
2016-06-17T22:31:01Z
Structured Stochastic Linear Bandits
The stochastic linear bandit problem proceeds in rounds where at each round the algorithm selects a vector from a decision set after which it receives a noisy linear loss parameterized by an unknown vector. The goal in such a problem is to minimize the (pseudo) regret which is the difference between the total expected loss of the algorithm and the total expected loss of the best fixed vector in hindsight. In this paper, we consider settings where the unknown parameter has structure, e.g., sparse, group sparse, low-rank, which can be captured by a norm, e.g., $L_1$, $L_{(1,2)}$, nuclear norm. We focus on constructing confidence ellipsoids which contain the unknown parameter across all rounds with high-probability. We show the radius of such ellipsoids depend on the Gaussian width of sets associated with the norm capturing the structure. Such characterization leads to tighter confidence ellipsoids and, therefore, sharper regret bounds compared to bounds in the existing literature which are based on the ambient dimensionality.
[ "Nicholas Johnson, Vidyashankar Sivakumar, Arindam Banerjee", "['Nicholas Johnson' 'Vidyashankar Sivakumar' 'Arindam Banerjee']" ]
cs.LG cs.AI stat.ML
null
1606.05725
null
null
http://arxiv.org/pdf/1606.05725v1
2016-06-18T07:49:13Z
2016-06-18T07:49:13Z
An Efficient Large-scale Semi-supervised Multi-label Classifier Capable of Handling Missing labels
Multi-label classification has received considerable interest in recent years. Multi-label classifiers have to address many problems including: handling large-scale datasets with many instances and a large set of labels, compensating for missing label assignments in the training set, considering correlations between labels, as well as exploiting unlabeled data to improve prediction performance. To tackle datasets with a large set of labels, embedding-based methods have been proposed which seek to represent the label assignments in a low-dimensional space. Many state-of-the-art embedding-based methods use a linear dimensionality reduction to represent the label assignments in a low-dimensional space. However, by doing so, these methods actually neglect the tail labels - labels that are infrequently assigned to instances. We propose an embedding-based method that non-linearly embeds the label vectors using a stochastic approach, thereby predicting the tail labels more accurately. Moreover, the proposed method has excellent mechanisms for handling missing labels, dealing with large-scale datasets, as well as exploiting unlabeled data. To the best of our knowledge, our proposed method is the first multi-label classifier that simultaneously addresses all of the mentioned challenges. Experiments on real-world datasets show that our method outperforms state-of-the-art multi-label classifiers by a large margin, in terms of prediction performance as well as training time.
[ "['Amirhossein Akbarnejad' 'Mahdieh Soleymani Baghshah']", "Amirhossein Akbarnejad, Mahdieh Soleymani Baghshah" ]
cs.LG cs.AI cs.CY
null
1606.05735
null
null
http://arxiv.org/pdf/1606.05735v2
2016-11-11T14:47:28Z
2016-06-18T10:06:44Z
A Comparative Analysis of classification data mining techniques : Deriving key factors useful for predicting students performance
The number of students opting for engineering as their discipline is increasing rapidly. But due to various factors and inappropriate primary education in India, failure rates are high. Students are unable to excel in core engineering because of complex and mathematical subjects; hence, they fail in such subjects. With the help of data mining techniques, we can predict the performance of students in terms of grades and failure in subjects. This paper performs a comparative analysis of various classification techniques, such as Na\"ive Bayes, LibSVM, J48, Random Forest, and JRip, and tries to choose the best among these. Based on the results obtained, we found that Na\"ive Bayes is the most accurate method for predicting students' failure and JRip is the most accurate for predicting students' grades. We also found that JRip differs only marginally from Na\"ive Bayes in accuracy for students' failure prediction and gives us a set of rules from which we derive the key factors influencing students' performance. Finally, we suggest various ways to mitigate these factors. This study is limited to Indian education system scenarios; however, the factors found can be helpful in other scenarios as well.
[ "['Muhammed Salman Shamsi' 'Jhansi Lakshmi']", "Muhammed Salman Shamsi, Jhansi Lakshmi" ]
null
null
1606.05798
null
null
http://arxiv.org/pdf/1606.05798v1
2016-06-18T19:37:26Z
2016-06-18T19:37:26Z
Interpretable Two-level Boolean Rule Learning for Classification
As a contribution to interpretable machine learning research, we develop a novel optimization framework for learning accurate and sparse two-level Boolean rules. We consider rules in both conjunctive normal form (AND-of-ORs) and disjunctive normal form (OR-of-ANDs). A principled objective function is proposed to trade classification accuracy and interpretability, where we use Hamming loss to characterize accuracy and sparsity to characterize interpretability. We propose efficient procedures to optimize these objectives based on linear programming (LP) relaxation, block coordinate descent, and alternating minimization. Experiments show that our new algorithms provide very good tradeoffs between accuracy and interpretability.
[ "['Guolong Su' 'Dennis Wei' 'Kush R. Varshney' 'Dmitry M. Malioutov']" ]
stat.ML cs.LG
null
1606.05819
null
null
http://arxiv.org/pdf/1606.05819v1
2016-06-19T01:37:01Z
2016-06-19T01:37:01Z
Building an Interpretable Recommender via Loss-Preserving Transformation
We propose a method for building an interpretable recommender system for personalizing online content and promotions. Historical data available for the system consists of customer features, provided content (promotions), and user responses. Unlike in a standard multi-class classification setting, misclassification costs depend on both recommended actions and customers. Our method transforms such a data set to a new set which can be used with standard interpretable multi-class classification algorithms. The transformation has the desirable property that minimizing the standard misclassification penalty in this new space is equivalent to minimizing the custom cost function.
[ "Amit Dhurandhar, Sechan Oh, Marek Petrik", "['Amit Dhurandhar' 'Sechan Oh' 'Marek Petrik']" ]
cs.SD cs.LG
null
1606.05844
null
null
http://arxiv.org/pdf/1606.05844v1
2016-06-19T08:38:26Z
2016-06-19T08:38:26Z
Statistical Parametric Speech Synthesis Using Bottleneck Representation From Sequence Auto-encoder
In this paper, we describe a statistical parametric speech synthesis approach with unit-level acoustic representation. In conventional deep neural network based speech synthesis, the input text features are repeated for the entire duration of phoneme for mapping text and speech parameters. This mapping is learnt at the frame-level which is the de-facto acoustic representation. However much of this computational requirement can be drastically reduced if every unit can be represented with a fixed-dimensional representation. Using recurrent neural network based auto-encoder, we show that it is indeed possible to map units of varying duration to a single vector. We then use this acoustic representation at unit-level to synthesize speech using deep neural network based statistical parametric speech synthesis technique. Results show that the proposed approach is able to synthesize at the same quality as the conventional frame based approach at a highly reduced computational cost.
[ "Sivanand Achanta, KNRK Raju Alluri, Suryakanth V Gangashetty", "['Sivanand Achanta' 'KNRK Raju Alluri' 'Suryakanth V Gangashetty']" ]
cs.LG cs.IT math.IT stat.ML
10.3390/e18120442
1606.05850
null
null
http://arxiv.org/abs/1606.05850v2
2016-08-17T00:24:48Z
2016-06-19T09:39:30Z
Guaranteed bounds on the Kullback-Leibler divergence of univariate mixtures using piecewise log-sum-exp inequalities
Information-theoretic measures such as the entropy, cross-entropy and the Kullback-Leibler divergence between two mixture models is a core primitive in many signal processing tasks. Since the Kullback-Leibler divergence of mixtures provably does not admit a closed-form formula, it is in practice either estimated using costly Monte-Carlo stochastic integration, approximated, or bounded using various techniques. We present a fast and generic method that builds algorithmically closed-form lower and upper bounds on the entropy, the cross-entropy and the Kullback-Leibler divergence of mixtures. We illustrate the versatile method by reporting on our experiments for approximating the Kullback-Leibler divergence between univariate exponential mixtures, Gaussian mixtures, Rayleigh mixtures, and Gamma mixtures.
[ "Frank Nielsen and Ke Sun", "['Frank Nielsen' 'Ke Sun']" ]
null
null
1606.05896
null
null
http://arxiv.org/pdf/1606.05896v1
2016-06-19T18:07:15Z
2016-06-19T18:07:15Z
Clustering with a Reject Option: Interactive Clustering as Bayesian Prior Elicitation
A good clustering can help a data analyst to explore and understand a data set, but what constitutes a good clustering may depend on domain-specific and application-specific criteria. These criteria can be difficult to formalize, even when it is easy for an analyst to know a good clustering when they see one. We present a new approach to interactive clustering for data exploration called TINDER, based on a particularly simple feedback mechanism, in which an analyst can reject a given clustering and request a new one, which is chosen to be different from the previous clustering while fitting the data well. We formalize this interaction in a Bayesian framework as a method for prior elicitation, in which each different clustering is produced by a prior distribution that is modified to discourage previously rejected clusterings. We show that TINDER successfully produces a diverse set of clusterings, each of equivalent quality, that are much more diverse than would be obtained by randomized restarts.
[ "['Akash Srivastava' 'James Zou' 'Ryan P. Adams' 'Charles Sutton']" ]
stat.ML cs.LG
null
1606.05908
null
null
null
null
null
Tutorial on Variational Autoencoders
In just three years, Variational Autoencoders (VAEs) have emerged as one of the most popular approaches to unsupervised learning of complicated distributions. VAEs are appealing because they are built on top of standard function approximators (neural networks), and can be trained with stochastic gradient descent. VAEs have already shown promise in generating many kinds of complicated data, including handwritten digits, faces, house numbers, CIFAR images, physical models of scenes, segmentation, and predicting the future from static images. This tutorial introduces the intuitions behind VAEs, explains the mathematics behind them, and describes some empirical behavior. No prior knowledge of variational Bayesian methods is assumed.
[ "['Carl Doersch']" ]
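The abstract above mentions that VAEs are trainable with stochastic gradient descent on top of standard function approximators. The two terms of the (negative) ELBO for a diagonal-Gaussian encoder against a standard-normal prior can be sketched in a few lines of NumPy; the closed-form KL term is standard, and the function names are illustrative:

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), one value per sample.
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

def reparameterize(mu, logvar, rng):
    # z = mu + sigma * eps with eps ~ N(0, I): sampling stays differentiable
    # in mu and logvar, which is what makes SGD training possible.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def neg_elbo(x, x_recon, mu, logvar):
    # Unit-variance Gaussian decoder: the reconstruction term is squared error
    # (up to an additive constant); the KL term regularizes the encoder.
    recon = 0.5 * np.sum((x - x_recon) ** 2, axis=-1)
    return float(np.mean(recon + gaussian_kl(mu, logvar)))

rng = np.random.default_rng(0)
mu = np.zeros((4, 2))
logvar = np.zeros((4, 2))
z = reparameterize(mu, logvar, rng)                    # latent draws, shape (4, 2)
loss = neg_elbo(np.ones((4, 3)), np.ones((4, 3)), mu, logvar)
```

When the encoder output already matches the prior (mu = 0, logvar = 0) and the reconstruction is exact, both terms vanish and the loss is zero.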
cs.LG cs.DM
null
1606.05918
null
null
http://arxiv.org/pdf/1606.05918v2
2017-08-10T08:46:02Z
2016-06-19T22:17:05Z
Slack and Margin Rescaling as Convex Extensions of Supermodular Functions
Slack and margin rescaling are variants of the structured output SVM, which is frequently applied to problems in computer vision such as image segmentation, object localization, and learning parts-based object models. They define convex surrogates to task-specific loss functions, which, when specialized to non-additive loss functions for multi-label problems, yield extensions to increasing set functions. We demonstrate in this paper that we may use these concepts to define polynomial-time convex extensions of arbitrary supermodular functions, providing an analysis framework for the tightness of these surrogates. This analysis framework shows that, while neither margin nor slack rescaling dominates the other, known bounds on supermodular functions can be used to derive extensions that dominate both of these, indicating possible directions for defining novel structured output prediction surrogates. In addition to the analysis of structured prediction loss functions, these results imply an approach to supermodular minimization in which margin rescaling is combined with non-polynomial-time convex extensions to compute a sequence of LP relaxations reminiscent of a cutting plane method. This approach is applied to the problem of selecting representative exemplars from a set of images, validating our theoretical contributions.
[ "['Matthew B. Blaschko']" ]
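For a finite label set, the two surrogates compared in the abstract above can be written down by direct enumeration. A minimal sketch of the standard margin- and slack-rescaling losses, using toy scores and task losses that are illustrative rather than taken from the paper:

```python
def margin_rescaling(scores, delta, y_true):
    # L_MR = max_y [ Delta(y, y*) + s(y) ] - s(y*): the task loss rescales the margin.
    return max(delta[y] + scores[y] for y in scores) - scores[y_true]

def slack_rescaling(scores, delta, y_true):
    # L_SR = max_y Delta(y, y*) * (1 + s(y) - s(y*)): the task loss rescales the slack.
    return max(delta[y] * (1.0 + scores[y] - scores[y_true]) for y in scores)

# Toy problem: three labels, ground truth y* = 0.
scores = {0: 1.0, 1: 0.5, 2: -0.2}   # model scores s(y)
delta = {0: 0.0, 1: 1.0, 2: 2.0}     # task loss Delta(y, 0)

mr = margin_rescaling(scores, delta, 0)   # 0.8: y = 2 attains the max
sr = slack_rescaling(scores, delta, 0)    # 0.5: y = 1 attains the max
```

Both quantities upper-bound the task loss of the score-maximizing label, and neither dominates the other in general, which is the relationship the paper's convex-extension analysis makes precise.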