title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
End-to-End Joint Learning of Natural Language Understanding and Dialogue
Manager | cs.CL cs.LG | Natural language understanding and dialogue policy learning are both
essential in conversational systems that predict the next system actions in
response to a current user utterance. Conventional approaches aggregate
separate models of natural language understanding (NLU) and system action
prediction (SAP) as a pipeline that is sensitive to noisy outputs of
error-prone NLU. To address the issues, we propose an end-to-end deep recurrent
neural network with limited contextual dialogue memory by jointly training NLU
and SAP on DSTC4 multi-domain human-human dialogues. Experiments show that our
proposed model significantly outperforms the state-of-the-art pipeline models
for both NLU and SAP, which indicates that our joint model is capable of
mitigating the effects of noisy NLU outputs, and that the NLU model can be
refined by error flows backpropagated from the extra supervised signals of
system actions.
| Xuesong Yang, Yun-Nung Chen, Dilek Hakkani-Tur, Paul Crook, Xiujun Li,
Jianfeng Gao, Li Deng | null | 1612.00913 | null | null |
Positive blood culture detection in time series data using a BiLSTM
network | cs.LG cs.NE q-bio.QM stat.ML | The presence of bacteria or fungi in the bloodstream of patients is abnormal
and can lead to life-threatening conditions. A computational model based on a
bidirectional long short-term memory artificial neural network is explored to
assist doctors in the intensive care unit to predict whether examination of
blood cultures of patients will return positive. As input it uses nine
monitored clinical parameters, presented as time series data, collected from
2177 ICU admissions at the Ghent University Hospital. Our main goal is to
determine whether general machine learning methods and, more specifically,
temporal models can be used to create an early detection system. This
preliminary research obtains an area under the precision-recall curve of
71.95%, demonstrating the potential of temporal neural networks in this context.
| Leen De Baets, Joeri Ruyssinck, Thomas Peiffer, Johan Decruyenaere,
Filip De Turck, Femke Ongenae, Tom Dhaene | null | 1612.00962 | null | null |
Estimating latent feature-feature interactions in large feature-rich
graphs | cs.SI cs.LG stat.ML | Real-world complex networks describe connections between objects; in reality,
those objects are often endowed with features of some kind. How does the
presence or absence of such features interplay with the network link structure?
Although the situation described here is truly ubiquitous, there is a limited
body of research dealing with large graphs of this kind. Many previous works
considered homophily as the only possible transmission mechanism translating
node features into links. Other authors, instead, developed more sophisticated
models that are able to handle complex feature interactions, but are unfit to
scale to very large networks. We expand on the MGJ model, where interactions
between pairs of features can foster or discourage link formation. In this
work, we will investigate how to estimate the latent feature-feature
interactions in this model. We shall propose two solutions: the first one
assumes feature independence and is essentially based on Naive Bayes; the
second one, which relaxes the independence assumption, is based on perceptrons.
In fact, we show it is possible to cast the model equation so as to see it as
the prediction rule of a perceptron. We analyze how classical results for
perceptrons can be interpreted in this context; then,
we define a fast and simple perceptron-like algorithm for this task, which can
process $10^8$ links in minutes. We then compare these two techniques, first
with synthetic datasets that follow our model, gaining evidence that the naive
independence assumption is detrimental in practice. Second, we consider a
real, large-scale citation network where each node (i.e., paper) can be
described by different types of characteristics; there, our algorithm can
assess how well each set of features explains the links, and thus find
meaningful latent feature-feature interactions.
| Corrado Monti and Paolo Boldi | null | 1612.00984 | null | null |
Hypothesis Transfer Learning via Transformation Functions | stat.ML cs.LG | We consider the Hypothesis Transfer Learning (HTL) problem where one
incorporates a hypothesis trained on the source domain into the learning
procedure of the target domain. Existing theoretical analysis either only
studies specific algorithms or only presents upper bounds on the generalization
error but not on the excess risk. In this paper, we propose a unified
algorithm-dependent framework for HTL through a novel notion of transformation
function, which characterizes the relation between the source and the target
domains. We conduct a general risk analysis of this framework and, in
particular, we show for the first time that, if two domains are related, HTL
enjoys faster convergence rates of excess risk for Kernel Smoothing and Kernel
Ridge Regression than those of the classical non-transfer learning settings.
Experiments on real-world data demonstrate the effectiveness of our framework.
| Simon Shaolei Du, Jayanth Koushik, Aarti Singh, and Barnabas Poczos | null | 1612.0102 | null | null |
Large scale modeling of antimicrobial resistance with interpretable
classifiers | q-bio.GN cs.LG stat.ML | Antimicrobial resistance is an important public health concern that has
implications in the practice of medicine worldwide. Accurately predicting
resistance phenotypes from genome sequences shows great promise in promoting
better use of antimicrobial agents, by determining which antibiotics are likely
to be effective in specific clinical cases. In healthcare, this would allow for
the design of treatment plans tailored for specific individuals, likely
resulting in better clinical outcomes for patients with bacterial infections.
In this work, we present the recent work of Drouin et al. (2016) on using Set
Covering Machines to learn highly interpretable models of antibiotic resistance
and complement it by providing a large scale application of their method to the
entire PATRIC database. We report prediction results for 36 new datasets and
present the Kover AMR platform, a new web-based tool allowing the visualization
and interpretation of the generated models.
| Alexandre Drouin, Fr\'ed\'eric Raymond, Ga\"el Letarte St-Pierre,
Mario Marchand, Jacques Corbeil, Fran\c{c}ois Laviolette | null | 1612.0103 | null | null |
Modeling trajectories of mental health: challenges and opportunities | stat.ML cs.LG stat.AP | More than two thirds of mental health problems have their onset during
childhood or adolescence. Identifying children at risk for mental illness later
in life and predicting the type of illness is not easy. We set out to develop a
platform to define subtypes of childhood social-emotional development using
longitudinal, multifactorial trait-based measures. Subtypes discovered through
this study could ultimately advance psychiatric knowledge of the early
behavioural signs of mental illness. To this end, we have examined two types
of models: latent class mixture models (LCMMs) and GP-based models. Our
findings indicate that while GP models come close in accuracy when predicting
future trajectories, LCMMs predict the trajectories equally well in a fraction
of the time. Unfortunately, neither of the models is currently accurate enough
to lead to immediate clinical impact. The available data related to the
development of childhood mental health are often sparse, with only a few time
points measured, and require novel methods with improved efficiency and accuracy.
| Lauren Erdman, Ekansh Sharma, Eva Unternahrer, Shantala Hari Dass,
Kieran ODonnell, Sara Mostafavi, Rachel Edgar, Michael Kobor, Helene
Gaudreau, Michael Meaney, Anna Goldenberg | null | 1612.01055 | null | null |
Algorithmic Songwriting with ALYSIA | cs.AI cs.LG cs.MM cs.SD | This paper introduces ALYSIA: Automated LYrical SongwrIting Application.
ALYSIA is based on a machine learning model using Random Forests, and we
discuss its success at pitch and rhythm prediction. Next, we show how ALYSIA
was used to create original pop songs that were subsequently recorded and
produced. Finally, we discuss our vision for the future of Automated
Songwriting for both co-creative and autonomous systems.
| Margareta Ackerman and David Loker | null | 1612.01058 | null | null |
Trained Ternary Quantization | cs.LG | Deep neural networks are widely used in machine learning applications.
However, large neural network models can be difficult to deploy on mobile
devices with limited power budgets. To solve this problem, we
propose Trained Ternary Quantization (TTQ), a method that can reduce the
precision of weights in neural networks to ternary values. This method has very
little accuracy degradation and can even improve the accuracy of some models
(32-, 44-, and 56-layer ResNet) on CIFAR-10 and AlexNet on ImageNet. Our
AlexNet model is trained from scratch, which means it is as easy to train as a
normal full-precision model. We highlight our trained quantization method that can learn
both ternary values and ternary assignment. During inference, only ternary
values (2-bit weights) and scaling factors are needed, therefore our models are
nearly 16x smaller than full-precision models. Our ternary models can also be
viewed as sparse binary weight networks, which can potentially be accelerated
with custom circuits. Experiments on CIFAR-10 show that the ternary models
obtained by our trained quantization method outperform full-precision models of
ResNet-32,44,56 by 0.04%, 0.16%, 0.36%, respectively. On ImageNet, our model
outperforms full-precision AlexNet model by 0.3% of Top-1 accuracy and
outperforms previous ternary models by 3%.
| Chenzhuo Zhu, Song Han, Huizi Mao, William J. Dally | null | 1612.01064 | null | null |
Enhancing Use Case Points Estimation Method Using Soft Computing
Techniques | cs.SE cs.AI cs.LG | Software estimation is a crucial task in software engineering. Software
estimation encompasses cost, effort, schedule, and size. The importance of
software estimation becomes critical in the early stages of the software life
cycle when the details of software have not been revealed yet. Several
commercial and non-commercial tools exist to estimate software in the early
stages. Most software effort estimation methods require software size as one of
the important metric inputs and consequently, software size estimation in the
early stages becomes essential. One of the approaches that has been used for
about two decades for early size and effort estimation is called use case
points. The use case points method relies on the use case diagram to estimate the
size and effort of software projects. Although the use case points method has
been widely used, it has some limitations that might adversely affect the
accuracy of estimation. This paper presents some techniques using fuzzy logic
and neural networks to improve the accuracy of the use case points method.
Results showed that an improvement of up to 22% can be obtained using the proposed
approach.
| Ali Bou Nassif, Luiz Fernando Capretz, Danny Ho | null | 1612.01078 | null | null |
Deep Learning of Robotic Tasks without a Simulator using Strong and Weak
Human Supervision | cs.AI cs.LG cs.RO | We propose a scheme for training a computerized agent to perform complex
human tasks such as highway steering. The scheme is designed to follow a
natural learning process whereby a human instructor teaches a computerized
trainee. The learning process consists of five elements: (i) unsupervised
feature learning; (ii) supervised imitation learning; (iii) supervised reward
induction; (iv) supervised safety module construction; and (v) reinforcement
learning. We implemented the last four elements of the scheme using deep
convolutional networks and applied it to successfully create a computerized
agent capable of autonomous highway steering over the well-known racing game
Assetto Corsa. We demonstrate that the use of the last four elements is
essential to effectively carry out the steering task using vision alone,
without access to the driving simulator's internals, and operating in
wall-clock time. This is also made possible through the introduction of a safety network,
a novel way for preventing the agent from performing catastrophic mistakes
during the reinforcement learning stage.
| Bar Hilleli and Ran El-Yaniv | null | 1612.01086 | null | null |
Learning to superoptimize programs - Workshop Version | cs.LG | Superoptimization requires the estimation of the best program for a given
computational task. In order to deal with large programs, superoptimization
techniques perform a stochastic search. This involves proposing a modification
of the current program, which is accepted or rejected based on the improvement
achieved. The state-of-the-art method uses uniform proposal distributions,
which fails to exploit the problem structure to the fullest. To alleviate this
deficiency, we learn a proposal distribution over possible modifications using
Reinforcement Learning. We provide convincing results on the superoptimization
of "Hacker's Delight" programs.
| Rudy Bunel, Alban Desmaison, M. Pawan Kumar, Philip H.S. Torr, Pushmeet
Kohli | null | 1612.01094 | null | null |
Robust nonparametric nearest neighbor random process clustering | cs.LG cs.IT math.IT stat.ML | We consider the problem of clustering noisy finite-length observations of
stationary ergodic random processes according to their generative models
without prior knowledge of the model statistics and the number of generative
models. Two algorithms, both using the $L^1$-distance between estimated power
spectral densities (PSDs) as a measure of dissimilarity, are analyzed. The
first one, termed nearest neighbor process clustering (NNPC), relies on
partitioning the nearest neighbor graph of the observations via spectral
clustering. The second algorithm, simply referred to as $k$-means (KM),
consists of a single $k$-means iteration with farthest point initialization and
was considered before in the literature, albeit with a different dissimilarity
measure. We prove that both algorithms succeed with high probability in the
presence of noise and missing entries, and even when the generative process
PSDs overlap significantly, all provided that the observation length is
sufficiently large. Our results quantify the tradeoff between the overlap of
the generative process PSDs, the observation length, the fraction of missing
entries, and the noise variance. Finally, we provide extensive numerical
results for synthetic and real data and find that NNPC outperforms
state-of-the-art algorithms in human motion sequence clustering.
| Michael Tschannen and Helmut B\"olcskei | 10.1109/TSP.2017.2736513 | 1612.01103 | null | null |
Properties and Bayesian fitting of restricted Boltzmann machines | stat.ML cs.LG | A restricted Boltzmann machine (RBM) is an undirected graphical model
constructed for discrete or continuous random variables, with two layers, one
hidden and one visible, and no conditional dependency within a layer. In recent
years, RBMs have risen to prominence due to their connection to deep learning.
By treating a hidden layer of one RBM as the visible layer in a second RBM, a
deep architecture can be created. RBMs are thought to thereby have the ability
to encode very complex and rich structures in data, making them attractive for
supervised learning. However, the generative behavior of RBMs is largely
unexplored and typical fitting methodology does not easily allow for
uncertainty quantification in addition to point estimates. In this paper, we
discuss the relationship between RBM parameter specification in the binary case
and model properties such as degeneracy, instability and uninterpretability. We
also describe the associated difficulties that can arise with likelihood-based
inference and further discuss the potential Bayes fitting of such (highly
flexible) models, especially as Gibbs sampling (quasi-Bayes) methods are often
advocated for the RBM model structure.
| Andee Kaplan, Daniel Nordman, and Stephen Vardeman | 10.1002/sam.11396 | 1612.01158 | null | null |
Neural Symbolic Machines: Learning Semantic Parsers on Freebase with
Weak Supervision (Short Version) | cs.CL cs.AI cs.LG | Extending the success of deep neural networks to natural language
understanding and symbolic reasoning requires complex operations and external
memory. Recent neural program induction approaches have attempted to address
this problem, but are typically limited to differentiable memory, and
consequently cannot scale beyond small synthetic tasks. In this work, we
propose the Manager-Programmer-Computer framework, which integrates neural
networks with non-differentiable memory to support abstract, scalable and
precise operations through a friendly neural computer interface. Specifically,
we introduce a Neural Symbolic Machine, which contains a sequence-to-sequence
neural "programmer", and a non-differentiable "computer" that is a Lisp
interpreter with code assist. To successfully apply REINFORCE for training, we
augment it with approximate gold programs found by an iterative maximum
likelihood training process. NSM is able to learn a semantic parser from weak
supervision over a large knowledge base. It achieves new state-of-the-art
performance on WebQuestionsSP, a challenging semantic parsing dataset, with
weak supervision. Compared to previous approaches, NSM is end-to-end, therefore
does not rely on feature engineering or domain specific knowledge.
| Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, Ni Lao | null | 1612.01197 | null | null |
Intra-day Activity Better Predicts Chronic Conditions | stat.ML cs.LG | In this work we investigate intra-day patterns of activity on a population of
7,261 users of mobile health wearable devices and apps. We show that: (1) using
intra-day step and sleep data recorded from passive trackers significantly
improves classification performance on self-reported chronic conditions related
to mental health and nervous system disorders, (2) Convolutional Neural
Networks achieve top classification performance vs. baseline models when
trained directly on multivariate time series of activity data, and (3) jointly
predicting all condition classes via multi-task learning can be leveraged to
extract features that generalize across data sets and achieve the highest
classification performance.
| Tom Quisel, David C. Kale, Luca Foschini | null | 1612.012 | null | null |
Optimal and Adaptive Off-policy Evaluation in Contextual Bandits | stat.ML cs.LG | We study the off-policy evaluation problem---estimating the value of a target
policy using data collected by another policy---under the contextual bandit
model. We consider the general (agnostic) setting without access to a
consistent model of rewards and establish a minimax lower bound on the mean
squared error (MSE). The bound is matched up to constants by the inverse
propensity scoring (IPS) and doubly robust (DR) estimators. This highlights the
difficulty of the agnostic contextual setting, in contrast with multi-armed
bandits and contextual bandits with access to a consistent reward model, where
IPS is suboptimal. We then propose the SWITCH estimator, which can use an
existing reward model (not necessarily consistent) to achieve a better
bias-variance tradeoff than IPS and DR. We prove an upper bound on its MSE and
demonstrate its benefits empirically on a diverse collection of data sets,
often outperforming prior work by orders of magnitude.
| Yu-Xiang Wang and Alekh Agarwal and Miroslav Dudik | null | 1612.01205 | null | null |
Deep Metric Learning via Facility Location | cs.CV cs.LG | Learning the representation and the similarity metric in an end-to-end
fashion with deep networks has demonstrated outstanding results for clustering
and retrieval. However, these recent approaches still suffer from the
performance degradation stemming from the local metric training procedure which
is unaware of the global structure of the embedding space.
We propose a global metric learning scheme for optimizing the deep metric
embedding with the learnable clustering function and the clustering metric
(NMI) in a novel structured prediction framework.
Our experiments on CUB200-2011, Cars196, and Stanford online products
datasets show state-of-the-art performance both on the clustering and retrieval
tasks measured in the NMI and Recall@K evaluation metrics.
| Hyun Oh Song, Stefanie Jegelka, Vivek Rathod, Kevin Murphy | null | 1612.01213 | null | null |
Known Unknowns: Uncertainty Quality in Bayesian Neural Networks | stat.ML cs.LG cs.NE | We evaluate the uncertainty quality in neural networks using anomaly
detection. We extract uncertainty measures (e.g. entropy) from the predictions
of candidate models, use those measures as features for an anomaly detector,
and gauge how well the detector differentiates known from unknown classes. We
assign higher uncertainty quality to candidate models that lead to better
detectors. We also propose a novel method for sampling a variational
approximation of a Bayesian neural network, called One-Sample Bayesian
Approximation (OSBA). We experiment on two datasets, MNIST and CIFAR10. We
compare the following candidate neural network models: Maximum Likelihood,
Bayesian Dropout, OSBA, and --- for MNIST --- the standard variational
approximation. We show that Bayesian Dropout and OSBA provide better
uncertainty information than Maximum Likelihood, and are essentially equivalent
to the standard variational approximation, but much faster.
| Ramon Oliveira, Pedro Tabacof, Eduardo Valle | null | 1612.01251 | null | null |
Deep Image Category Discovery using a Transferred Similarity Function | cs.CV cs.LG | Automatically discovering image categories in unlabeled natural images is one
of the important goals of unsupervised learning. However, the task is
challenging and even human beings define visual categories based on a large
amount of prior knowledge. In this paper, we similarly utilize prior knowledge
to facilitate the discovery of image categories. We present a novel end-to-end
network to map unlabeled images to categories as a clustering network. We
propose that this network can be learned with contrastive loss which is only
based on weak binary pair-wise constraints. Such binary constraints can be
learned from datasets in other domains as transferred similarity functions,
which mimic a simple knowledge transfer. We first evaluate our experiments on
the MNIST dataset as a proof of concept, based on predicted similarities
trained on Omniglot, showing a 99\% accuracy which significantly outperforms
clustering-based approaches. Then we evaluate the discovery performance on
CIFAR-10, STL-10, and ImageNet, achieving state-of-the-art accuracy and showing
that the approach scales to large collections of natural images.
| Yen-Chang Hsu, Zhaoyang Lv, Zsolt Kira | null | 1612.01253 | null | null |
Deep Symbolic Representation Learning for Heterogeneous Time-series
Classification | cs.LG stat.ML | In this paper, we consider the problem of event classification with
multi-variate time series data consisting of heterogeneous (continuous and
categorical) variables. The complex temporal dependencies between the variables
combined with sparsity of the data makes the event classification problem
particularly challenging. Most state-of-the-art approaches address this either by
designing hand-engineered features or breaking up the problem over homogeneous
variates. In this work, we propose and compare three representation learning
algorithms over symbolized sequences which enable classification of
heterogeneous time-series data using a deep architecture. The proposed
representations are trained jointly along with the rest of the network
architecture in an end-to-end fashion that makes the learned features
discriminative for the given task. Experiments on three real-world datasets
demonstrate the effectiveness of the proposed approaches.
| Shengdong Zhang and Soheil Bahrampour and Naveen Ramakrishnan and
Mohak Shah | null | 1612.01254 | null | null |
Cryptocurrency Portfolio Management with Deep Reinforcement Learning | cs.LG | Portfolio management is the decision-making process of allocating an amount
of fund into different financial investment products. Cryptocurrencies are
electronic and decentralized alternatives to government-issued money, with
Bitcoin as the best-known example of a cryptocurrency. This paper presents a
model-less convolutional neural network with historic prices of a set of
financial assets as its input, outputting portfolio weights of the set. The
network is trained with 0.7 years' price data from a cryptocurrency exchange.
The training is done in a reinforcement manner, maximizing the accumulative
return, which is regarded as the reward function of the network. Backtest
trading experiments with a trading period of 30 minutes are conducted in the
same market, achieving 10-fold returns over a 1.8-month period. Some recently
published portfolio selection strategies are also used to perform the same
backtests, and their results are compared with those of the neural network. The
network is not limited to cryptocurrency, but can be applied to other financial
markets.
| Zhengyao Jiang, Jinjun Liang | null | 1612.01277 | null | null |
Message Passing Multi-Agent GANs | cs.CV cs.AI cs.LG cs.NE | Communicating and sharing intelligence among agents is an important facet of
achieving Artificial General Intelligence. As a first step towards this
challenge, we introduce a novel framework for image generation: Message Passing
Multi-Agent Generative Adversarial Networks (MPM GANs). While GANs have
recently been shown to be very effective for image generation and other tasks,
these networks have been limited to mostly single generator-discriminator
networks. We show that we can obtain multi-agent GANs that communicate through
message passing to achieve better image generation. The objectives of the
individual agents in this framework are twofold: a co-operation objective and
a competing objective. The co-operation objective ensures that the message
sharing mechanism guides the other generator to generate better than itself
while the competing objective encourages each generator to generate better than
its counterpart. We analyze and visualize the messages that these GANs share
among themselves in various scenarios. We quantitatively show that the message
sharing formulation serves as a regularizer for the adversarial training.
Qualitatively, we show that the different generators capture different traits
of the underlying data distribution.
| Arnab Ghosh and Viveka Kulharia and Vinay Namboodiri | null | 1612.01294 | null | null |
Ranking Biomarkers Through Mutual Information | stat.ML cs.LG stat.AP | We study information theoretic methods for ranking biomarkers. In clinical
trials there are two, closely related, types of biomarkers: predictive and
prognostic, and disentangling them is a key challenge. Our first step is to
phrase biomarker ranking in terms of optimizing an information theoretic
quantity. This formalization of the problem will enable us to derive rankings
of predictive/prognostic biomarkers, by estimating different, high dimensional,
conditional mutual information terms. To estimate these terms, we suggest
efficient low dimensional approximations, and we derive an empirical Bayes
estimator, which is suitable for small or sparse datasets. Finally, we
introduce a new visualisation tool that captures the prognostic and the
predictive strength of a set of biomarkers. We believe this representation will
prove to be a powerful tool in biomarker discovery.
| Konstantinos Sechidis, Emily Turner, Paul D. Metcalfe, James
Weatherall and Gavin Brown | null | 1612.01316 | null | null |
A One class Classifier based Framework using SVDD : Application to an
Imbalanced Geological Dataset | cs.LG stat.AP stat.ML | Evaluation of hydrocarbon reservoir requires classification of petrophysical
properties from available dataset. However, characterization of reservoir
attributes is difficult due to the nonlinear and heterogeneous nature of the
subsurface physical properties. In this context, the present study proposes a
generalized one-class classification framework based on Support Vector Data
Description (SVDD) to classify a reservoir characteristic, water saturation,
into two classes (Class high and Class low) from four logs, namely gamma ray,
neutron porosity, bulk density, and P-sonic, using an imbalanced dataset. A
comparison is carried out between the proposed framework and different
supervised classification algorithms in terms of g-metric mean and execution
time. Experimental results show that the proposed framework outperforms the other classifiers in terms of
these performance evaluators. It is envisaged that the classification analysis
performed in this study will be useful in further reservoir modeling.
| Soumi Chaki, Akhilesh Kumar Verma, Aurobinda Routray, William K.
Mohanty, Mamata Jenamani | null | 1612.01349 | null | null |
Diagnostic Prediction Using Discomfort Drawings | cs.LG | In this paper, we explore the possibility to apply machine learning to make
diagnostic predictions using discomfort drawings. A discomfort drawing is an
intuitive way for patients to express discomfort and pain related symptoms.
These drawings have proven to be an effective method to collect patient data
and make diagnostic decisions in real-life practice. A dataset from real-world
patient cases is collected for which medical experts provide diagnostic labels.
Next, we extend a factorized multimodal topic model, Inter-Battery Topic Model
(IBTM), to train a system that can make diagnostic predictions given an unseen
discomfort drawing. Experimental results show reasonable predictions of
diagnostic labels given an unseen discomfort drawing. The positive result
indicates a significant potential of machine learning to be used for parts of
the pain diagnostic process and to be a decision support system for physicians
and other health care personnel.
| Cheng Zhang, Hedvig Kjellstrom, Bo C. Bertilson | null | 1612.01356 | null | null |
An Asymptotically Optimal Contextual Bandit Algorithm Using Hierarchical
Structures | cs.LG | We propose online algorithms for sequential learning in the contextual
multi-armed bandit setting. Our approach is to partition the context space and
then optimally combine all of the possible mappings between the partition
regions and the set of bandit arms in a data driven manner. We show that in our
approach, the best mapping is able to approximate the best arm selection policy
to any desired degree under mild Lipschitz conditions. Therefore, we design our
algorithms based on the optimal adaptive combination and asymptotically achieve
the performance of the best mapping as well as the best arm selection policy.
This optimality is also guaranteed to hold even in adversarial environments
since we do not rely on any statistical assumptions regarding the contexts or
the loss of the bandit arms. Moreover, we design efficient implementations for
our algorithms in various hierarchical partitioning structures such as
lexicographical or arbitrary position splitting and binary trees (and several
other partitioning examples). For instance, in the case of binary tree
partitioning, the computational complexity is only log-linear in the number of
regions in the finest partition. In conclusion, we provide significant
performance improvements by introducing upper bounds (w.r.t. the best arm
selection policy) that are mathematically proven to vanish in the average loss
per round sense at a faster rate compared to the state-of-the-art. Our
experimental work extensively covers various scenarios ranging from bandit
settings to multi-class classification with real and synthetic data. In these
experiments, we show that our algorithms are highly superior to the
state-of-the-art techniques while maintaining the introduced mathematical
guarantees and decent computational scalability.
| Mohammadreza Mohaghegh Neyshabouri, Kaan Gokcesu, Huseyin Ozkan and
Suleyman S. Kozat | null | 1612.01367 | null | null |
Implicit Modeling -- A Generalization of Discriminative and Generative
Approaches | cs.LG | We propose a new modeling approach that is a generalization of generative and
discriminative models. The core idea is to use an implicit parameterization of
a joint probability distribution by specifying only the conditional
distributions. The proposed scheme combines the advantages of both worlds -- it
can use powerful complex discriminative models as its parts, having at the same
time better generalization capabilities. We thoroughly evaluate the proposed
method for a simple classification task with artificial data and illustrate its
advantages for real-world scenarios on a semantic image segmentation problem.
| Dmitrij Schlesinger and Carsten Rother | null | 1612.01397 | null | null |
Learning Adversary-Resistant Deep Neural Networks | cs.LG | Deep neural networks (DNNs) have proven to be quite effective in a vast array
of machine learning tasks, with recent examples in cyber security and
autonomous vehicles. Despite the superior performance of DNNs in these
applications, it has been recently shown that these models are susceptible to a
particular type of attack that exploits a fundamental flaw in their design.
This attack consists of generating particular synthetic examples referred to as
adversarial samples. These samples are constructed by slightly manipulating
real data-points in order to "fool" the original DNN model, forcing it to
mis-classify previously correctly classified samples with high confidence.
Addressing this flaw in the model is essential if DNNs are to be used in
critical applications such as those in cyber security.
Previous work has provided various learning algorithms to enhance the
robustness of DNN models, and they all fall into the tactic of "security
through obscurity". This means security can be guaranteed only if one can
obscure the learning algorithms from adversaries. Once the learning technique
is disclosed, DNNs protected by these defense mechanisms are still susceptible
to adversarial samples. In this work, we investigate this issue shared across
previous research work and propose a generic approach to escalate a DNN's
resistance to adversarial samples. More specifically, our approach integrates a
data transformation module with a DNN, making it robust even if we reveal the
underlying learning algorithm. To demonstrate the generality of our proposed
approach and its potential for handling cyber security applications, we
evaluate our method and several other existing solutions on datasets publicly
available. Our results indicate that our approach typically provides superior
classification performance and resistance in comparison with state-of-the-art
solutions.
| Qinglong Wang, Wenbo Guo, Kaixuan Zhang, Alexander G. Ororbia II,
Xinyu Xing, Xue Liu, C. Lee Giles | null | 1612.01401 | null | null |
Semi-Supervised Learning via Sparse Label Propagation | cs.LG stat.ML | This work proposes a novel method for semi-supervised learning from partially
labeled massive network-structured datasets, i.e., big data over networks. We
model the underlying hypothesis, which relates data points to labels, as a
graph signal, defined over some graph (network) structure intrinsic to the
dataset. Following the key principle of supervised learning, i.e., similar
inputs yield similar outputs, we require the graph signals induced by labels to
have small total variation. Accordingly, we formulate the problem of learning
the labels of data points as a non-smooth convex optimization problem which
amounts to balancing between the empirical loss, i.e., the discrepancy with
some partially available label information, and the smoothness quantified by
the total variation of the learned graph signal. We solve this optimization
problem by appealing to a recently proposed preconditioned variant of the
popular primal-dual method by Pock and Chambolle, which results in a sparse
label propagation algorithm. This learning algorithm allows for a highly
scalable implementation as message passing over the underlying data graph. By
applying concepts of compressed sensing to the learning problem, we are also
able to provide a transparent sufficient condition on the underlying network
structure such that accurate learning of the labels is possible. We also
present an implementation of the message passing formulation that is highly
scalable in big data frameworks.
| Alexander Jung, Alfred O. Hero III, Alexandru Mara, and Saeed Jahromi | null | 1612.01414 | null | null |
Zeroth-order Asynchronous Doubly Stochastic Algorithm with Variance
Reduction | cs.LG | Zeroth-order (derivative-free) optimization attracts a lot of attention in
machine learning, because explicit gradient calculations may be computationally
expensive or infeasible. To handle large scale problems both in volume and
dimension, recently asynchronous doubly stochastic zeroth-order algorithms were
proposed. The convergence rate of existing asynchronous doubly stochastic
zeroth-order algorithms is $O(\frac{1}{\sqrt{T}})$ (also for the sequential
stochastic zeroth-order optimization algorithms). In this paper, we focus on
the finite sums of smooth but not necessarily convex functions, and propose an
asynchronous doubly stochastic zeroth-order optimization algorithm using the
accelerated technology of variance reduction (AsyDSZOVR). Rigorous theoretical
analysis shows that the convergence rate can be improved from
$O(\frac{1}{\sqrt{T}})$, the best result of existing algorithms, to
$O(\frac{1}{T})$. Our theoretical results also improve upon those of the
sequential stochastic zeroth-order optimization algorithms.
| Bin Gu and Zhouyuan Huo and Heng Huang | null | 1612.01425 | null | null |
Extracting Implicit Social Relation for Social Recommendation Techniques
in User Rating Prediction | cs.SI cs.LG | Recommendation plays an increasingly important role in our daily lives.
Recommender systems automatically suggest items to users that might be
interesting for them. Recent studies illustrate that incorporating social trust
in Matrix Factorization methods demonstrably improves accuracy of rating
prediction. Such approaches mainly use the trust scores explicitly expressed by
users. However, it is often challenging to have users provide explicit trust
scores of each other. There exist quite a few works, which propose Trust
Metrics to compute and predict trust scores between users based on their
interactions. In this paper, we first present how social relations can be
extracted from users' ratings of items by computing the Hellinger distance
between users in recommender systems. Then, we propose to incorporate the
predicted trust scores into social matrix factorization models. By analyzing
social relation extraction from three well-known real-world datasets, in which
both trust and recommendation data are available, we conclude that using the
implicit social relations in social recommendation techniques yields almost the
same performance as the actual trust scores explicitly expressed by users.
Hence, we build our method, called Hell-TrustSVD, on top of the
state-of-the-art social recommendation technique to incorporate both the
extracted implicit social relations and ratings given by users on the
prediction of items for an active user. To the best of our knowledge, this is
the first work to extend TrustSVD with extracted social trust information. The
experimental results support the idea that employing implicit trust in matrix
factorization, whenever explicit trust is not available, can perform much
better than the state-of-the-art approaches in user rating prediction.
| Seyed Mohammad Taheri, Hamidreza Mahyar, Mohammad Firouzi, Elahe
Ghalebi K., Radu Grosu, Ali Movaghar | 10.1145/3041021.3051153 | 1612.01428 | null | null |
Understanding and Optimizing the Performance of Distributed Machine
Learning Applications on Apache Spark | cs.DC cs.LG | In this paper we explore the performance limits of Apache Spark for machine
learning applications. We begin by analyzing the characteristics of a
state-of-the-art distributed machine learning algorithm implemented in Spark
and compare it to an equivalent reference implementation using the high
performance computing framework MPI. We identify critical bottlenecks of the
Spark framework and carefully study their implications on the performance of
the algorithm. In order to improve Spark performance we then propose a number
of practical techniques to alleviate some of its overheads. However, optimizing
computational efficiency and framework related overheads is not the only key to
performance -- we demonstrate that in order to get the best performance out of
any implementation it is necessary to carefully tune the algorithm to the
respective trade-off between computation time and communication latency. The
optimal trade-off depends on both the properties of the distributed algorithm
as well as infrastructure and framework-related characteristics. Finally, we
apply these technical and algorithmic optimizations to three different
distributed linear machine learning algorithms that have been implemented in
Spark. We present results using five large datasets and demonstrate that by
using the proposed optimizations, we can achieve a reduction in the performance
difference between Spark and MPI from 20x to 2x.
| Celestine D\"unner, Thomas Parnell, Kubilay Atasu, Manolis Sifalakis,
Haralampos Pozidis | 10.1109/BigData.2017.8257942 | 1612.01437 | null | null |
Support vector regression model for BigData systems | cs.DC cs.LG cs.PF | Nowadays Big Data are becoming more and more important. Many sectors of our
economy are now guided by data-driven decision processes. Big Data and business
intelligence applications are facilitated by the MapReduce programming model
while, at infrastructural layer, cloud computing provides flexible and cost
effective solutions for allocating on demand large clusters. In such systems,
capacity allocation, which is the ability to optimally size minimal resources
to achieve a certain level of performance, is a key challenge to enhance
performance for MapReduce jobs and minimize cloud resource costs. In order to
do so, one of the biggest challenges is to build an accurate performance model
to estimate the job execution time of MapReduce systems. Previous works applied
simulation-based models for modeling such systems. Although this approach can
accurately describe the behavior of Big Data clusters, it is too
computationally expensive and does not scale to large systems. We try to
overcome these issues by applying machine learning techniques. More precisely
we focus on Support Vector Regression (SVR) which is intrinsically more robust
w.r.t. other techniques, e.g., neural networks, and less sensitive to
outliers in the training set. To better investigate these benefits, we compare
SVR to linear regression.
| Alessandro Maria Rizzi | null | 1612.01458 | null | null |
Simple and Scalable Predictive Uncertainty Estimation using Deep
Ensembles | stat.ML cs.LG | Deep neural networks (NNs) are powerful black box predictors that have
recently achieved impressive performance on a wide spectrum of tasks.
Quantifying predictive uncertainty in NNs is a challenging and yet unsolved
problem. Bayesian NNs, which learn a distribution over weights, are currently
the state-of-the-art for estimating predictive uncertainty; however these
require significant modifications to the training procedure and are
computationally expensive compared to standard (non-Bayesian) NNs. We propose
an alternative to Bayesian NNs that is simple to implement, readily
parallelizable, requires very little hyperparameter tuning, and yields high
quality predictive uncertainty estimates. Through a series of experiments on
classification and regression benchmarks, we demonstrate that our method
produces well-calibrated uncertainty estimates which are as good or better than
approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate
the predictive uncertainty on test examples from known and unknown
distributions, and show that our method is able to express higher uncertainty
on out-of-distribution examples. We demonstrate the scalability of our method
by evaluating predictive uncertainty estimates on ImageNet.
| Balaji Lakshminarayanan, Alexander Pritzel and Charles Blundell | null | 1612.01474 | null | null |
Generalized RBF kernel for incomplete data | cs.LG stat.ML | We construct the $\bf genRBF$ kernel, which generalizes the classical Gaussian
RBF kernel to the case of incomplete data. We model the uncertainty contained
in missing attributes by making use of the data distribution, and associate
every point with a conditional probability density function. This allows us to
embed incomplete data into the function space and to define a kernel between
two missing data points based on the scalar product in $L_2$. Experiments show
that the introduced kernel applied to an SVM classifier gives better results
than other state-of-the-art methods, especially when a large number of features
is missing. Moreover, it is easy to implement and can be used together with any
kernel approach with no additional modifications.
| {\L}ukasz Struski, Marek \'Smieja, Jacek Tabor | null | 1612.0148 | null | null |
A Nonparametric Latent Factor Model For Location-Aware Video
Recommendations | stat.ML cs.LG | We are interested in learning customers' video preferences from their
historic viewing patterns and geographical location. We consider a Bayesian
latent factor modeling approach for this task. In order to tune the complexity
of the model to best represent the data, we make use of Bayesian nonparametric
techniques. We describe an inference technique that can scale to large
real-world data sets. Finally, we show results obtained by applying the model
to a large internal Netflix data set, which illustrate that the model was able to
capture interesting relationships between viewing patterns and geographical
location.
| Ehtsham Elahi | null | 1612.01481 | null | null |
Towards the Limit of Network Quantization | cs.CV cs.LG cs.NE | Network quantization is one of network compression techniques to reduce the
redundancy of deep neural networks. It reduces the number of distinct network
parameter values by quantization in order to save the storage for them. In this
paper, we design network quantization schemes that minimize the performance
loss due to quantization given a compression ratio constraint. We analyze the
quantitative relation of quantization errors to the neural network loss
function and identify that the Hessian-weighted distortion measure is locally
the right objective function for the optimization of network quantization. As a
result, Hessian-weighted k-means clustering is proposed for clustering network
parameters to quantize. When optimal variable-length binary codes, e.g.,
Huffman codes, are employed for further compression, we derive that the network
quantization problem can be related to the entropy-constrained scalar
quantization (ECSQ) problem in information theory and consequently propose two
solutions of ECSQ for network quantization, i.e., uniform quantization and an
iterative solution similar to Lloyd's algorithm. Finally, using the simple
uniform quantization followed by Huffman coding, we show from our experiments
that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet,
32-layer ResNet and AlexNet, respectively.
| Yoojin Choi, Mostafa El-Khamy, and Jungwon Lee | null | 1612.01543 | null | null |
Improving the Performance of Neural Networks in Regression Tasks Using
Drawering | cs.LG cs.AI cs.NE stat.ML | The method presented extends a given regression neural network to improve its
performance. The modification affects the learning procedure only,
hence the extension may be easily omitted during evaluation without any change
in prediction. It means that the modified model may be evaluated as quickly as
the original one but tends to perform better.
This improvement is possible because the modification gives better expressive
power, provides better behaved gradients and works as a regularization. The
knowledge gained by the temporarily extended neural network is contained in the
parameters shared with the original neural network.
The only cost is an increase in learning time.
| Konrad Zolna | null | 1612.01589 | null | null |
Deterministic and Probabilistic Conditions for Finite Completability of
Low-Tucker-Rank Tensor | cs.NA cs.IT cs.LG math.IT | We investigate the fundamental conditions on the sampling pattern, i.e.,
locations of the sampled entries, for finite completability of a low-rank
tensor given some components of its Tucker rank. In order to find the
deterministic necessary and sufficient conditions, we propose an algebraic
geometric analysis on the Tucker manifold, which allows us to incorporate
multiple rank components in the proposed analysis in contrast with the
conventional geometric approaches on the Grassmannian manifold. This analysis
characterizes the algebraic independence of a set of polynomials defined based
on the sampling pattern, which is closely related to finite completion.
Probabilistic conditions are then studied and a lower bound on the sampling
probability is given, which guarantees that the proposed deterministic
conditions on the sampling patterns for finite completability hold with high
probability. Furthermore, using the proposed geometric approach for finite
completability, we propose a sufficient condition on the sampling pattern that
ensures there exists exactly one completion for the sampled tensor.
| Morteza Ashraphijuo and Vaneet Aggarwal and Xiaodong Wang | null | 1612.01597 | null | null |
Distributed Gaussian Learning over Time-varying Directed Graphs | math.OC cs.LG cs.MA cs.SY stat.ML | We present a distributed (non-Bayesian) learning algorithm for the problem of
parameter estimation with Gaussian noise. The algorithm is expressed as
explicit updates on the parameters of the Gaussian beliefs (i.e. means and
precision). We show a convergence rate of $O(1/k)$ with the constant term
depending on the number of agents and the topology of the network. Moreover, we
show almost sure convergence to the optimal solution of the estimation problem
for the general case of time-varying directed graphs.
| Angelia Nedi\'c, Alex Olshevsky and C\'esar A. Uribe | null | 1612.016 | null | null |
Efficient Non-oblivious Randomized Reduction for Risk Minimization with
Improved Excess Risk Guarantee | cs.LG | In this paper, we address learning problems for high dimensional data.
Previously, oblivious random projection based approaches that project high
dimensional features onto a random subspace have been used in practice for
tackling high-dimensionality challenge in machine learning. Recently, various
non-oblivious randomized reduction methods have been developed and deployed for
solving many numerical problems such as matrix product approximation, low-rank
matrix approximation, etc. However, they are less explored for the machine
learning tasks, e.g., classification. More seriously, the theoretical analysis
of excess risk bounds for risk minimization, an important measure of
generalization performance, has not been established for non-oblivious
randomized reduction methods. It therefore remains an open problem what is the
benefit of using them over previous oblivious random projection based
approaches. To tackle these challenges, we propose an algorithmic framework for
employing the non-oblivious randomized reduction method for general empirical
risk minimization in machine learning tasks, where the original high-dimensional
features are projected onto a random subspace that is derived from the data
with a small matrix approximation error. We then derive the first excess risk
bound for the proposed non-oblivious randomized reduction approach without
requiring strong assumptions on the training data. The established excess risk
bound shows that the proposed approach provides much better generalization
performance and also sheds more insight on different randomized
reduction approaches. Finally, we conduct extensive experiments on both
synthetic and real-world benchmark datasets, whose dimension scales to
$O(10^7)$, to demonstrate the efficacy of our proposed approach.
| Yi Xu, Haiqin Yang, Lijun Zhang, Tianbao Yang | null | 1612.01663 | null | null |
Statistical mechanics of unsupervised feature learning in a restricted
Boltzmann machine with binary synapses | cs.LG cond-mat.dis-nn cond-mat.stat-mech cs.NE q-bio.NC | Revealing hidden features in unlabeled data is called unsupervised feature
learning, which plays an important role in pretraining a deep neural network.
Here we provide a statistical mechanics analysis of the unsupervised learning
in a restricted Boltzmann machine with binary synapses. A message passing
equation to infer the hidden feature is derived, and furthermore, variants of
this equation are analyzed. A statistical analysis by replica theory describes
the thermodynamic properties of the model. Our analysis confirms an entropy
crisis preceding the non-convergence of the message passing equation,
suggesting a discontinuous phase transition as a key characteristic of the
restricted Boltzmann machine. Continuous phase transition is also confirmed
depending on the embedded feature strength in the data. The mean-field result
under the replica symmetric assumption agrees with that obtained by running
message passing algorithms on single instances of finite sizes. Interestingly,
in an approximate Hopfield model, the entropy crisis is absent, and a
continuous phase transition is observed instead. We also develop an iterative
equation to infer the hyper-parameter (temperature) hidden in the data, which
in physics corresponds to iteratively imposing Nishimori condition. Our study
provides insights towards understanding the thermodynamic properties of the
restricted Boltzmann machine learning, and moreover an important theoretical
basis for building simplified deep networks.
| Haiping Huang | 10.1088/1742-5468/aa6ddc | 1612.01717 | null | null |
Factored Contextual Policy Search with Bayesian Optimization | cs.LG cs.AI cs.RO stat.ML | Scarce data is a major challenge to scaling robot learning to truly complex
tasks, as we need to generalize locally learned policies over different
"contexts". Bayesian optimization approaches to contextual policy search (CPS)
offer data-efficient policy learning that generalizes over a context space. We
propose to improve data-efficiency by factoring typically considered contexts
into two components: target-type contexts that correspond to a desired outcome
of the learned behavior, e.g. target position for throwing a ball; and
environment type contexts that correspond to some state of the environment,
e.g. initial ball position or wind speed. Our key observation is that
experience can be directly generalized over target-type contexts. Based on that
we introduce Factored Contextual Policy Search with Bayesian Optimization for
both passive and active learning settings. Preliminary results show faster
policy generalization on a simulated toy problem. A full paper extension is
available at arXiv:1904.11761
| Peter Karkus, Andras Kupcsik, David Hsu, Wee Sun Lee | null | 1612.01746 | null | null |
Video Ladder Networks | cs.LG cs.CV stat.ML | We present the Video Ladder Network (VLN) for efficiently generating future
video frames. VLN is a neural encoder-decoder model augmented at all layers by
both recurrent and feedforward lateral connections. At each layer, these
connections form a lateral recurrent residual block, where the feedforward
connection represents a skip connection and the recurrent connection represents
the residual. Thanks to the recurrent connections, the decoder can exploit
temporal summaries generated from all layers of the encoder. This way, the top
layer is relieved from the pressure of modeling lower-level spatial and
temporal details. Furthermore, we extend the basic version of VLN to
incorporate ResNet-style residual blocks in the encoder and decoder, which help
improve the prediction results. VLN is trained in a self-supervised regime on
the Moving MNIST dataset, achieving competitive results while having very
simple structure and providing fast inference.
| Francesco Cricri, Xingyang Ni, Mikko Honkala, Emre Aksu, Moncef
Gabbouj | null | 1612.01756 | null | null |
Control Matching via Discharge Code Sequences | cs.LG | In this paper, we consider the patient similarity matching problem over a
cancer cohort of more than 220,000 patients. Our approach first leverages the
Word2Vec framework to embed ICD codes into vector-valued representation. We
then propose a sequential algorithm for case-control matching on this
representation space of diagnosis codes. The novel practice of applying
sequential matching in the vector representation space improved the matching
accuracy, as measured through multiple clinical outcomes. We report the results on a
large-scale dataset to demonstrate the effectiveness of our method. For such a
large dataset where most clinical information has been codified, the new method
is particularly relevant.
| Dang Nguyen, Wei Luo, Dinh Phung, Svetha Venkatesh | null | 1612.01812 | null | null |
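A minimal Python sketch of the case-control matching idea described above, assuming patient-level vectors (e.g., averaged ICD-code embeddings) are already available; the greedy nearest-neighbour matching, the cosine metric, and all names below are illustrative assumptions rather than the paper's exact algorithm.

# Minimal sketch of case-control matching on embedded diagnosis codes.
# Assumes patient-level vectors (e.g., averaged ICD-code embeddings) are given;
# the greedy 1:1 matching below is illustrative, not the paper's exact method.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def greedy_match(case_vecs, control_vecs):
    """Match each case to its nearest unused control by cosine distance."""
    nn = NearestNeighbors(metric="cosine").fit(control_vecs)
    used = set()
    pairs = []
    for i, v in enumerate(case_vecs):
        # query enough neighbours so that an unused control can always be found
        k = min(len(control_vecs), len(case_vecs) + 1)
        dist, idx = nn.kneighbors(v.reshape(1, -1), n_neighbors=k)
        for d, j in zip(dist[0], idx[0]):
            if j not in used:
                used.add(j)
                pairs.append((i, int(j), float(d)))
                break
    return pairs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cases = rng.normal(size=(10, 64))      # 10 cases, 64-dim embeddings
    controls = rng.normal(size=(100, 64))  # 100 candidate controls
    print(greedy_match(cases, controls)[:3])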
Combinatorial semi-bandit with known covariance | cs.LG | The combinatorial stochastic semi-bandit problem is an extension of the
classical multi-armed bandit problem in which an algorithm pulls more than one
arm at each stage and the rewards of all pulled arms are revealed. One
difference with the single arm variant is that the dependency structure of the
arms is crucial. Previous works on this setting either used a worst-case
approach or imposed independence of the arms. We introduce a way to quantify
the dependency structure of the problem and design an algorithm that adapts to
it. The algorithm is based on linear regression and the analysis develops
techniques from the linear bandit literature. By comparing its performance to a
new lower bound, we prove that it is optimal, up to a poly-logarithmic factor
in the number of pulled arms.
| R\'emy Degenne, Vianney Perchet | null | 1612.01859 | null | null |
Microseismic events enhancement and detection in sensor arrays using
autocorrelation based filtering | physics.geo-ph cs.LG eess.SP | Passive microseismic data are commonly buried in noise, which presents a
significant challenge for signal detection and recovery. For recordings from a
surface sensor array where each trace contains a time-delayed arrival from the
event, we propose an autocorrelation-based stacking method that designs a
denoising filter from all the traces, as well as a multi-channel detection
scheme. This approach circumvents the issue of time aligning the traces prior
to stacking because every trace's autocorrelation is centered at zero in the
lag domain. The effect of white noise is concentrated near zero lag, so the
filter design requires a predictable adjustment of the zero-lag value.
Truncation of the autocorrelation is employed to smooth the impulse response of
the denoising filter. In order to extend the applicability of the algorithm, we
also propose a noise prewhitening scheme that addresses cases with colored
noise. The simplicity and robustness of this method are validated with
synthetic and real seismic traces.
| Entao Liu, Lijun Zhu, Anupama Govinda Raj, James H. McClellan,
Abdullatif Al-Shuhail, SanLinn I. Kaka, Naveed Iqbal | null | 1612.01884 | null | null |
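A minimal numpy sketch of the autocorrelation-based stacking idea: each trace's autocorrelation is centered at zero lag regardless of arrival time, so traces can be averaged without alignment; the zero-lag noise adjustment and the Hanning taper below are simplified assumptions, not the paper's exact filter design.

# Sketch of autocorrelation-based stacking for a surface sensor array.
# Each trace's autocorrelation peaks at zero lag, so no time alignment is
# needed before averaging. The zero-lag noise adjustment and taper below are
# simplified stand-ins for the filter-design steps described in the abstract.
import numpy as np

def stacked_autocorrelation(traces, max_lag=200, noise_var=None):
    traces = np.asarray(traces, dtype=float)
    n_traces, n_samples = traces.shape
    acc = np.zeros(2 * max_lag + 1)
    for tr in traces:
        tr = tr - tr.mean()
        full = np.correlate(tr, tr, mode="full")      # lags -(N-1)..(N-1)
        mid = n_samples - 1
        acc += full[mid - max_lag: mid + max_lag + 1]
    acc /= n_traces
    if noise_var is not None:
        # white noise only contributes near zero lag; remove its estimated energy
        acc[max_lag] -= noise_var * n_samples
    # truncate/taper to smooth the impulse response of the derived filter
    acc *= np.hanning(len(acc))
    return acc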
Invariant Representations for Noisy Speech Recognition | cs.CL cs.CV cs.LG cs.SD stat.ML | Modern automatic speech recognition (ASR) systems need to be robust under
acoustic variability arising from environmental, speaker, channel, and
recording conditions. Ensuring such robustness to variability is a challenge in
modern day neural network-based ASR systems, especially when all types of
variability are not seen during training. We attempt to address this problem by
encouraging the neural network acoustic model to learn invariant feature
representations. We use ideas from recent research on image generation using
Generative Adversarial Networks and domain adaptation ideas extending
adversarial gradient-based training. A recent work from Ganin et al. proposes
to use adversarial training for image domain adaptation by using an
intermediate representation from the main target classification network to
degrade the domain classifier's performance through a separate neural
network. Our work focuses on investigating neural architectures which produce
representations invariant to noise conditions for ASR. We evaluate the proposed
architecture on the Aurora-4 task, a popular benchmark for noise robust ASR. We
show that our method generalizes better than the standard multi-condition
training especially when only a few noise categories are seen during training.
| Dmitriy Serdyuk, Kartik Audhkhasi, Phil\'emon Brakel, Bhuvana
Ramabhadran, Samuel Thomas, Yoshua Bengio | null | 1612.01928 | null | null |
A Probabilistic Framework for Deep Learning | stat.ML cs.LG cs.NE | We develop a probabilistic framework for deep learning based on the Deep
Rendering Mixture Model (DRMM), a new generative probabilistic model that
explicitly captures variations in data due to latent task nuisance variables. We
demonstrate that max-sum inference in the DRMM yields an algorithm that exactly
reproduces the operations in deep convolutional neural networks (DCNs),
providing a first principles derivation. Our framework provides new insights
into the successes and shortcomings of DCNs as well as a principled route to
their improvement. DRMM training via the Expectation-Maximization (EM)
algorithm is a powerful alternative to DCN back-propagation, and initial
training results are promising. Classification based on the DRMM and other
variants outperforms DCNs in supervised digit classification, training 2-3x
faster while achieving similar accuracy. Moreover, the DRMM is applicable to
semi-supervised and unsupervised learning tasks, achieving results that are
state-of-the-art in several categories on the MNIST benchmark and comparable to
the state of the art on the CIFAR10 benchmark.
| Ankit B. Patel, Tan Nguyen, Richard G. Baraniuk | null | 1612.01936 | null | null |
Semi-Supervised Learning with the Deep Rendering Mixture Model | stat.ML cs.LG cs.NE | Semi-supervised learning algorithms reduce the high cost of acquiring labeled
training data by using both labeled and unlabeled data during learning. Deep
Convolutional Networks (DCNs) have achieved great success in supervised tasks
and as such have been widely employed in semi-supervised learning. In this
paper we leverage the recently developed Deep Rendering Mixture Model (DRMM), a
probabilistic generative model that models latent nuisance variation, and whose
inference algorithm yields DCNs. We develop an EM algorithm for the DRMM to
learn from both labeled and unlabeled data. Guided by the theory of the DRMM,
we introduce a novel non-negativity constraint and a variational inference
term. We report state-of-the-art performance on MNIST and SVHN and competitive
results on CIFAR10. We also probe deeper into how a DRMM trained in a
semi-supervised setting represents latent nuisance variation using
synthetically rendered images. Taken together, our work provides a unified
framework for supervised, unsupervised, and semi-supervised learning.
| Tan Nguyen, Wanjia Liu, Ethan Perez, Richard G. Baraniuk, Ankit B.
Patel | null | 1612.01942 | null | null |
Segmental Convolutional Neural Networks for Detection of Cardiac
Abnormality With Noisy Heart Sound Recordings | cs.SD cs.LG stat.ML | Heart diseases constitute a global health burden, and the problem is
exacerbated by the error-prone nature of listening to and interpreting heart
sounds. This motivates the development of automated classification to screen
for abnormal heart sounds. Existing machine learning-based systems achieve
accurate classification of heart sound recordings but rely on expert features
that have not been thoroughly evaluated on noisy recordings. Here we propose a
segmental convolutional neural network architecture that achieves automatic
feature learning from noisy heart sound recordings. Our experiments show that
our best model, trained on noisy recording segments acquired with an existing
hidden semi-Markov model-based approach, attains a classification accuracy of
87.5% on the 2016 PhysioNet/CinC Challenge dataset, compared to the 84.6%
accuracy of the state-of-the-art statistical classifier trained and evaluated
on the same dataset. Our results indicate the potential of using neural
network-based methods to increase the accuracy of automated classification of
heart sound recordings for improved screening of heart diseases.
| Yuhao Zhang, Sandeep Ayyar, Long-Huei Chen, Ethan J. Li | null | 1612.01943 | null | null |
Core Sampling Framework for Pixel Classification | cs.CV cs.LG | The intermediate map responses of a Convolutional Neural Network (CNN)
contain information about an image that can be used to extract contextual
knowledge about it. In this paper, we present a core sampling framework that is
able to use these activation maps from several layers as features to another
neural network using transfer learning to provide an understanding of an input
image. Our framework creates a representation that combines features from the
test data and the contextual knowledge gained from the responses of a
pretrained network, processes it and feeds it to a separate Deep Belief
Network. We use this representation to extract more information from an image
at the pixel level, hence gaining understanding of the whole image. We
experimentally demonstrate the usefulness of our framework using a pretrained
VGG-16 model to perform segmentation on the BAERI dataset of Synthetic Aperture
Radar (SAR) imagery and the CAMVID dataset.
| Manohar Karki, Robert DiBiano, Saikat Basu, Supratik Mukhopadhyay | null | 1612.01981 | null | null |
Local Group Invariant Representations via Orbit Embeddings | cs.LG stat.ML | Invariance to nuisance transformations is one of the desirable properties of
effective representations. We consider transformations that form a \emph{group}
and propose an approach based on kernel methods to derive local group invariant
representations. Locality is achieved by defining a suitable probability
distribution over the group which in turn induces distributions in the input
feature space. We learn a decision function over these distributions by
appealing to the powerful framework of kernel methods and generate local
invariant random feature maps via kernel approximations. We show uniform
convergence bounds for kernel approximation and provide excess risk bounds for
learning with these features. We evaluate our method on three real datasets,
including Rotated MNIST and CIFAR-10, and observe that it outperforms competing
kernel based approaches. The proposed method also outperforms deep CNN on
Rotated-MNIST and performs comparably to the recently proposed
group-equivariant CNN.
| Anant Raj, Abhishek Kumar, Youssef Mroueh, P. Thomas Fletcher,
Bernhard Sch\"olkopf | null | 1612.01988 | null | null |
Statistical and Computational Guarantees of Lloyd's Algorithm and its
Variants | math.ST cs.LG stat.ML stat.TH | Clustering is a fundamental problem in statistics and machine learning.
Lloyd's algorithm, proposed in 1957, is still possibly the most widely used
clustering algorithm in practice due to its simplicity and empirical
performance. However, there has been little theoretical investigation on the
statistical and computational guarantees of Lloyd's algorithm. This paper is an
attempt to bridge this gap between practice and theory. We investigate the
performance of Lloyd's algorithm on clustering sub-Gaussian mixtures. Under an
appropriate initialization for labels or centers, we show that Lloyd's
algorithm converges to an exponentially small clustering error after an order
of $\log n$ iterations, where $n$ is the sample size. The error rate is shown
to be minimax optimal. For the two-mixture case, we only require the
initializer to be slightly better than a random guess.
In addition, we extend the Lloyd's algorithm and its analysis to community
detection and crowdsourcing, two problems that have received a lot of attention
recently in statistics and machine learning. Two variants of Lloyd's algorithm
are proposed respectively for community detection and crowdsourcing. On the
theoretical side, we provide statistical and computational guarantees of the
two algorithms, and the results improve upon some previous signal-to-noise
ratio conditions in the literature for both problems. Experimental results on
simulated and real data sets demonstrate competitive performance of our
algorithms to the state-of-the-art methods.
| Yu Lu and Harrison H. Zhou | null | 1612.02099 | null | null |
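For reference, a plain numpy implementation of the Lloyd iteration analyzed above (alternating nearest-center label assignment and cluster-mean updates); the random initialization is only for illustration, whereas the theory assumes a suitably good initializer.

# Plain Lloyd's (k-means) iteration: assign labels to nearest centers, then
# recompute centers as cluster means. The theory above assumes a reasonable
# initialization; random init is used here only for illustration.
import numpy as np

def lloyd(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # squared distances to each center, shape (n, k)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers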
Predictive Business Process Monitoring with LSTM Neural Networks | stat.AP cs.DB cs.LG cs.NE stat.ML | Predictive business process monitoring methods exploit logs of completed
cases of a process in order to make predictions about running cases thereof.
Existing methods in this space are tailor-made for specific prediction tasks.
Moreover, their relative accuracy is highly sensitive to the dataset at hand,
thus requiring users to engage in trial-and-error and tuning when applying them
in a specific setting. This paper investigates Long Short-Term Memory (LSTM)
neural networks as an approach to build consistently accurate models for a wide
range of predictive process monitoring tasks. First, we show that LSTMs
outperform existing techniques to predict the next event of a running case and
its timestamp. Next, we show how to use models for predicting the next task in
order to predict the full continuation of a running case. Finally, we apply the
same approach to predict the remaining time, and show that this approach
outperforms existing tailor-made methods.
| Niek Tax, Ilya Verenich, Marcello La Rosa, Marlon Dumas | 10.1007/978-3-319-59536-8_30 | 1612.0213 | null | null |
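A minimal Keras-style sketch of the kind of next-activity model described above, assuming each running-case prefix is encoded as a fixed-length sequence of one-hot activity vectors; the layer sizes, training setup, and suffix-prediction loop are illustrative assumptions, not the paper's configuration.

# Minimal Keras sketch of a next-activity predictor for running cases.
# Inputs are fixed-length prefixes of one-hot encoded activities; sizes and
# training details are illustrative, not the paper's exact configuration.
import numpy as np
import tensorflow as tf

MAXLEN, N_ACT = 20, 12          # prefix length, number of distinct activities

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAXLEN, N_ACT)),
    tf.keras.layers.LSTM(100),
    tf.keras.layers.Dense(N_ACT, activation="softmax"),  # next activity
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# toy data: random prefixes and next-activity targets, just to show the shapes
X = np.random.rand(256, MAXLEN, N_ACT)
y = tf.keras.utils.to_categorical(np.random.randint(N_ACT, size=256), N_ACT)
model.fit(X, y, epochs=1, verbose=0)

# predicting a full continuation: repeatedly feed back the predicted event
prefix = X[:1]
for _ in range(5):
    nxt = model.predict(prefix, verbose=0).argmax(axis=1)
    one_hot = tf.keras.utils.to_categorical(nxt, N_ACT)[:, None, :]
    prefix = np.concatenate([prefix[:, 1:, :], one_hot], axis=1)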
Mode Regularized Generative Adversarial Networks | cs.LG cs.AI cs.CV cs.NE | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training get stuck or push probability mass in the
wrong direction, towards regions of higher concentration than those of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help distribute probability mass fairly
across the modes of the data generating distribution during the early phases
of training, thus providing a unified solution to the missing modes problem.
| Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | null | 1612.02136 | null | null |
Measuring the non-asymptotic convergence of sequential Monte Carlo
samplers using probabilistic programming | cs.AI cs.LG stat.ML | A key limitation of sampling algorithms for approximate inference is that it
is difficult to quantify their approximation error. Widely used sampling
schemes, such as sequential importance sampling with resampling and
Metropolis-Hastings, produce output samples drawn from a distribution that may
be far from the target posterior distribution. This paper shows how to
upper-bound the symmetric KL divergence between the output distribution of a
broad class of sequential Monte Carlo (SMC) samplers and their target posterior
distributions, subject to assumptions about the accuracy of a separate
gold-standard sampler. The proposed method applies to samplers that combine
multiple particles, multinomial resampling, and rejuvenation kernels. The
experiments show the technique being used to estimate bounds on the divergence
of SMC samplers for posterior inference in a Bayesian linear regression model
and a Dirichlet process mixture model.
| Marco F. Cusumano-Towner, Vikash K. Mansinghka | null | 1612.02161 | null | null |
Model-based Adversarial Imitation Learning | stat.ML cs.LG | Generative adversarial learning is a popular new approach to training
generative models which has been proven successful for other related problems
as well. The general idea is to maintain an oracle $D$ that discriminates
between the expert's data distribution and that of the generative model $G$.
The generative model is trained to capture the expert's distribution by
maximizing the probability of $D$ misclassifying the data it generates.
Overall, the system is \emph{differentiable} end-to-end and is trained using
basic backpropagation. This type of learning was successfully applied to the
problem of policy imitation in a model-free setup. However, a model-free
approach does not allow the system to be differentiable, which requires the use
of high-variance gradient estimations. In this paper we introduce the Model
based Adversarial Imitation Learning (MAIL) algorithm, a model-based approach
to the problem of adversarial imitation learning. We show how to use a forward
model to make the system fully differentiable, which enables us to train
policies using the (stochastic) gradient of $D$. Moreover, our approach
requires relatively few environment interactions, and fewer hyper-parameters to
tune. We test our method on the MuJoCo physics simulator and report initial
results that surpass the current state-of-the-art.
| Nir Baram, Oron Anschel, Shie Mannor | null | 1612.02179 | null | null |
Fast Adaptation in Generative Models with Generative Matching Networks | stat.ML cs.LG | Despite recent advances, the remaining bottlenecks in deep generative models
are the necessity of extensive training and difficulties with generalization
from a small number of training examples. We develop a new generative model called
Generative Matching Network which is inspired by the recently proposed matching
networks for one-shot learning in discriminative tasks. By conditioning on the
additional input dataset, our model can instantly learn new concepts that were
not available in the training data but conform to a similar generative process.
The proposed framework does not explicitly restrict diversity of the
conditioning data and also does not require an extensive inference procedure
for training or adaptation. Our experiments on the Omniglot dataset demonstrate
that Generative Matching Networks significantly improve predictive performance
on the fly as additional data becomes available and outperform existing
state-of-the-art conditional generative models.
| Sergey Bartunov, Dmitry P. Vetrov | null | 1612.02192 | null | null |
A Communication-Efficient Parallel Method for Group-Lasso | cs.LG stat.ML | Group-Lasso (gLasso) identifies important explanatory factors in predicting
the response variable by considering the grouping structure over input
variables. However, most existing algorithms for gLasso are not scalable to
deal with large-scale datasets, which are becoming a norm in many applications.
In this paper, we present a divide-and-conquer based parallel algorithm
(DC-gLasso) to scale up gLasso in the tasks of regression with grouping
structures. DC-gLasso only needs two iterations to collect and aggregate the
local estimates on subsets of the data, and is provably correct to recover the
true model under certain conditions. We further extend it to deal with
overlaps between groups. Empirical results on a wide range of synthetic and
real-world datasets show that DC-gLasso can significantly improve the time
efficiency without sacrificing regression accuracy.
| Binghong Chen, Jun Zhu | null | 1612.02222 | null | null |
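A sketch of the divide-and-conquer idea, using an ordinary Lasso as a stand-in for a group-lasso solver (scikit-learn does not ship one); the two-round collect-and-aggregate scheme with support voting and coefficient averaging is a simplified assumption rather than the exact DC-gLasso procedure.

# Divide-and-conquer sketch: fit local sparse models on data subsets, then
# aggregate the local estimates. A plain Lasso is used as a stand-in for a
# group-lasso solver; the averaging/support-voting aggregation is simplified.
import numpy as np
from sklearn.linear_model import Lasso

def dc_sparse_regression(X, y, n_splits=4, alpha=0.1, seed=0):
    rng = np.random.default_rng(seed)
    idx = np.array_split(rng.permutation(len(y)), n_splits)
    local = []
    for part in idx:                                  # round 1: local fits
        m = Lasso(alpha=alpha).fit(X[part], y[part])
        local.append(m.coef_)
    local = np.array(local)
    support = (np.abs(local) > 0).mean(0) >= 0.5      # round 2: aggregate
    coef = np.where(support, local.mean(0), 0.0)
    return coef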
Large-Margin Softmax Loss for Convolutional Neural Networks | stat.ML cs.LG | Cross-entropy loss together with softmax is arguably one of the most commonly
used supervision components in convolutional neural networks (CNNs). Despite
its simplicity, popularity and excellent performance, the component does not
explicitly encourage discriminative learning of features. In this paper, we
propose a generalized large-margin softmax (L-Softmax) loss which explicitly
encourages intra-class compactness and inter-class separability between learned
features. Moreover, L-Softmax not only can adjust the desired margin but also
can avoid overfitting. We also show that the L-Softmax loss can be optimized by
typical stochastic gradient descent. Extensive experiments on four benchmark
datasets demonstrate that the deeply-learned features with L-softmax loss
become more discriminative, hence significantly boosting the performance on a
variety of visual classification and verification tasks.
| Weiyang Liu, Yandong Wen, Zhiding Yu, Meng Yang | null | 1612.02295 | null | null |
Spatially Adaptive Computation Time for Residual Networks | cs.CV cs.LG | This paper proposes a deep learning architecture based on Residual Network
that dynamically adjusts the number of executed layers for the regions of the
image. This architecture is end-to-end trainable, deterministic and
problem-agnostic. It is therefore applicable without any modifications to a
wide range of computer vision problems such as image classification, object
detection and image segmentation. We present experimental results showing that
this model improves the computational efficiency of Residual Networks on the
challenging ImageNet classification and COCO object detection datasets.
Additionally, we evaluate the computation time maps on the visual saliency
dataset cat2000 and find that they correlate surprisingly well with human eye
fixation positions.
| Michael Figurnov, Maxwell D. Collins, Yukun Zhu, Li Zhang, Jonathan
Huang, Dmitry Vetrov, Ruslan Salakhutdinov | null | 1612.02297 | null | null |
Extend natural neighbor: a novel classification method with
self-adaptive neighborhood parameters in different stages | cs.AI cs.LG | Various kinds of k-nearest neighbor (KNN) based classification methods are
the bases of many well-established and high-performance pattern-recognition
techniques, but all of them are sensitive to the choice of the neighborhood parameter.
Essentially, the challenge is to detect the neighborhood of various data sets,
while remaining ignorant of the data characteristics. This article introduces a
new supervised classification method: the extend natural neighbor (ENaN)
method, and shows that it provides a better classification result without
choosing the neighborhood parameter artificially. Unlike the original KNN-based
methods, which need a prior k, the ENaN method predicts a different k in
different stages. Therefore, the ENaN method is able to learn more from
flexible neighbor information both in training stage and testing stage, and
provide a better classification result.
| Ji Feng, Qingsheng Zhu, Jinlong Huang, Lijun Yang | null | 1612.0231 | null | null |
Robust Low-Complexity Randomized Methods for Locating Outliers in Large
Matrices | cs.IT cs.LG math.IT stat.ML | This paper examines the problem of locating outlier columns in a large,
otherwise low-rank matrix, in settings where the data are noisy, or where
the overall matrix has missing elements. We propose a randomized two-step
inference framework, and establish sufficient conditions on the required sample
complexities under which these methods succeed (with high probability) in
accurately locating the outliers for each task. Comprehensive numerical
experimental results are provided to verify the theoretical bounds and
demonstrate the computational efficiency of the proposed algorithm.
| Xingguo Li and Jarvis Haupt | null | 1612.02334 | null | null |
An Information-theoretic Approach to Machine-oriented Music
Summarization | cs.IR cs.LG cs.SD | Music summarization allows for higher efficiency in processing, storage, and
sharing of datasets. Machine-oriented approaches, being agnostic to human
consumption, optimize these aspects even further. Such summaries have already
been successfully validated in some MIR tasks. We now generalize previous
conclusions by evaluating the impact of generic summarization of music from a
probabilistic perspective. We estimate Gaussian distributions for original and
summarized songs and compute their relative entropy, in order to measure
information loss incurred by summarization. Our results suggest that relative
entropy is a good predictor of summarization performance in the context of
tasks relying on a bag-of-features model. Based on this observation, we further
propose a straightforward yet expressive summarizer, which minimizes relative
entropy with respect to the original song, that objectively outperforms
previous methods and is better suited to avoid potential copyright issues.
| Francisco Raposo, David Martins de Matos, Ricardo Ribeiro | 10.1016/j.patrec.2019.03.014 | 1612.0235 | null | null |
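A numpy sketch of the proposed criterion: fit Gaussians to the frame-level features of the original song and of a candidate summary, then score the summary by the KL divergence between them; the KL direction and the diagonal regularization below are assumptions for illustration.

# Sketch: estimate Gaussians over frame-level features of the original song
# and a candidate summary, then score the summary by KL divergence. The KL
# direction and the diagonal regularization are assumptions for illustration.
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) ) for multivariate Gaussians."""
    k = len(mu0)
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0)
                  + diff @ cov1_inv @ diff
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def summary_information_loss(original_feats, summary_feats, eps=1e-6):
    d = original_feats.shape[1]
    mu_o = original_feats.mean(0)
    cov_o = np.cov(original_feats.T) + eps * np.eye(d)
    mu_s = summary_feats.mean(0)
    cov_s = np.cov(summary_feats.T) + eps * np.eye(d)
    return gaussian_kl(mu_s, cov_s, mu_o, cov_o)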
Improving the Performance of Neural Machine Translation Involving
Morphologically Rich Languages | cs.CL cs.LG cs.NE | The advent of the attention mechanism in neural machine translation models
has improved the performance of machine translation systems by enabling
selective lookup into the source sentence. In this paper, the efficiencies of
translation using bidirectional encoder attention decoder models were studied
with respect to translation involving morphologically rich languages. The
English - Tamil language pair was selected for this analysis. First, the use of
Word2Vec embedding for both the English and Tamil words improved the
translation results by 0.73 BLEU points over the baseline RNNSearch model's
4.84 BLEU score. The use of morphological segmentation before word
vectorization, splitting the morphologically rich Tamil words into their
respective morphemes prior to translation, reduced the target
vocabulary size by a factor of 8. Also, this model (RNNMorph) improved the
performance of neural machine translation by 7.05 BLEU points over the
RNNSearch model used over the same corpus. Since the BLEU evaluation of the
RNNMorph model might be unreliable due to an increase in the number of matching
tokens per sentence, the performances of the translations were also compared by
means of human evaluation metrics of adequacy, fluency and relative ranking.
Further, the use of morphological segmentation also improved the efficacy of
the attention mechanism.
| Krupakar Hans, R S Milton | null | 1612.02482 | null | null |
Interactive Elicitation of Knowledge on Feature Relevance Improves
Predictions in Small Data Sets | cs.AI cs.LG stat.ML | Providing accurate predictions is challenging for machine learning algorithms
when the number of features is larger than the number of samples in the data.
Prior knowledge can improve machine learning models by indicating relevant
variables and parameter values. Yet, this prior knowledge is often tacit and
only available from domain experts. We present a novel approach that uses
interactive visualization to elicit the tacit prior knowledge and uses it to
improve the accuracy of prediction models. The main component of our approach
is a user model that models the domain expert's knowledge of the relevance of
different features for a prediction task. In particular, based on the expert's
earlier input, the user model guides the selection of the features on which to
elicit user's knowledge next. The results of a controlled user study show that
the user model significantly improves prior knowledge elicitation and
prediction accuracy, when predicting the relative citation counts of scientific
documents in a specific domain.
| Luana Micallef, Iiris Sundin, Pekka Marttinen, Muhammad Ammad-ud-din,
Tomi Peltola, Marta Soare, Giulio Jacucci, Samuel Kaski | null | 1612.02487 | null | null |
Bridging Medical Data Inference to Achilles Tendon Rupture
Rehabilitation | cs.LG stat.AP | Imputing incomplete medical tests and predicting patient outcomes are crucial
for guiding the decision making for therapy, such as after an Achilles Tendon
Rupture (ATR). We formulate the problem of data imputation and prediction for
ATR relevant medical measurements into a recommender system framework. By
applying MatchBox, which is a collaborative filtering approach, on a real
dataset collected from 374 ATR patients, we aim at offering personalized
medical data imputation and prediction. In this work, we show the feasibility
of this approach and discuss potential research directions by conducting
initial qualitative evaluations.
| An Qu and Cheng Zhang and Paul Ackermann and Hedvig Kjellstr\"om | null | 1612.0249 | null | null |
Prediction with a Short Memory | cs.LG cs.AI cs.CC stat.ML | We consider the problem of predicting the next observation given a sequence
of past observations, and consider the extent to which accurate prediction
requires complex algorithms that explicitly leverage long-range dependencies.
Perhaps surprisingly, our positive results show that for a broad class of
sequences, there is an algorithm that predicts well on average, and bases its
predictions only on the most recent few observation together with a set of
simple summary statistics of the past observations. Specifically, we show that
for any distribution over observations, if the mutual information between past
observations and future observations is upper bounded by $I$, then a simple
Markov model over the most recent $I/\epsilon$ observations obtains expected KL
error $\epsilon$---and hence $\ell_1$ error $\sqrt{\epsilon}$---with respect to
the optimal predictor that has access to the entire past and knows the data
generating distribution. For a Hidden Markov Model with $n$ hidden states, $I$
is bounded by $\log n$, a quantity that does not depend on the mixing time, and
we show that the trivial prediction algorithm based on the empirical
frequencies of length $O(\log n/\epsilon)$ windows of observations achieves
this error, provided the length of the sequence is $d^{\Omega(\log
n/\epsilon)}$, where $d$ is the size of the observation alphabet.
We also establish that this result cannot be improved upon, even for the
class of HMMs, in the following two senses: First, for HMMs with $n$ hidden
states, a window length of $\log n/\epsilon$ is information-theoretically
necessary to achieve expected $\ell_1$ error $\sqrt{\epsilon}$. Second, the
$d^{\Theta(\log n/\epsilon)}$ samples required to estimate the Markov model for
an observation alphabet of size $d$ is necessary for any computationally
tractable learning algorithm, assuming the hardness of strongly refuting a
certain class of CSPs.
| Vatsal Sharan, Sham Kakade, Percy Liang, Gregory Valiant | null | 1612.02526 | null | null |
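A small Python sketch of the simple window-based Markov predictor discussed above: condition on the most recent ell observations and predict from empirical next-symbol frequencies; the add-one smoothing for unseen windows is an assumption for illustration.

# The simple window-based Markov predictor discussed above: condition on the
# most recent ell observations and predict from empirical next-symbol counts.
# Add-one smoothing for unseen windows is an assumption for illustration.
from collections import Counter, defaultdict

def fit_window_markov(seq, ell, alphabet):
    counts = defaultdict(Counter)
    for t in range(ell, len(seq)):
        counts[tuple(seq[t - ell:t])][seq[t]] += 1
    def predict(window):
        c = counts[tuple(window[-ell:])]
        total = sum(c.values()) + len(alphabet)
        return {a: (c[a] + 1) / total for a in alphabet}
    return predict

seq = list("abcabcabcabx")
predict = fit_window_markov(seq, ell=2, alphabet=set(seq))
print(predict(list("ab")))   # probability mass should concentrate on 'c'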
Predicting brain age with deep learning from raw imaging data results in
a reliable and heritable biomarker | stat.ML cs.CV cs.LG q-bio.NC | Machine learning analysis of neuroimaging data can accurately predict
chronological age in healthy people and deviations from healthy brain ageing
have been associated with cognitive impairment and disease. Here we sought to
further establish the credentials of "brain-predicted age" as a biomarker of
individual differences in the brain ageing process, using a predictive
modelling approach based on deep learning, and specifically convolutional
neural networks (CNN), and applied to both pre-processed and raw T1-weighted
MRI data. Firstly, we aimed to demonstrate the accuracy of CNN brain-predicted
age using a large dataset of healthy adults (N = 2001). Next, we sought to
establish the heritability of brain-predicted age using a sample of monozygotic
and dizygotic female twins (N = 62). Thirdly, we examined the test-retest and
multi-centre reliability of brain-predicted age using two samples
(within-scanner N = 20; between-scanner N = 11). CNN brain-predicted ages were
generated and compared to a Gaussian Process Regression (GPR) approach, on all
datasets. Input data were grey matter (GM) or white matter (WM) volumetric maps
generated by Statistical Parametric Mapping (SPM) or raw data. Brain-predicted
age represents an accurate, highly reliable and genetically-valid phenotype,
that has potential to be used as a biomarker of brain ageing. Moreover, age
predictions can be accurately generated on raw T1-MRI data, substantially
reducing computation time for novel data, bringing the process closer to giving
real-time information on brain health in clinical settings.
| James H Cole, Rudra PK Poudel, Dimosthenis Tsagkrasoulis, Matthan WA
Caan, Claire Steves, Tim D Spector, Giovanni Montana | null | 1612.02572 | null | null |
Towards Information-Seeking Agents | cs.LG | We develop a general problem setting for training and testing the ability of
agents to gather information efficiently. Specifically, we present a collection
of tasks in which success requires searching through a partially-observed
environment, for fragments of information which can be pieced together to
accomplish various goals. We combine deep architectures with techniques from
reinforcement learning to develop agents that solve our tasks. We shape the
behavior of these agents by combining extrinsic and intrinsic rewards. We
empirically demonstrate that these agents learn to search actively and
intelligently for new information to reduce their uncertainty, and to exploit
information they have already acquired.
| Philip Bachman and Alessandro Sordoni and Adam Trischler | null | 1612.02605 | null | null |
Evaluating the Performance of ANN Prediction System at Shanghai Stock
Market in the Period 21-Sep-2016 to 11-Oct-2016 | q-fin.ST cs.LG stat.ML | This research evaluates the performance of an Artificial Neural Network based
prediction system that was employed on the Shanghai Stock Exchange for the
period 21-Sep-2016 to 11-Oct-2016. It is a follow-up to a previous paper in
which the prices were predicted and published before September 21. Stock market
price prediction remains an important quest for investors and researchers. This
research used an Artificial Intelligence system, being an Artificial Neural
Network that is feedforward multi-layer perceptron with error backpropagation
for prediction, unlike other methods such as technical, fundamental or time
series analysis. While these alternative methods tend to indicate trends rather
than exact prices, neural networks, on the other hand, have the ability
to predict actual price values, as was done in this research. Nonetheless,
determination of suitable network parameters remains a challenge in neural
network design, with this research settling on a configuration of 5:21:21:1
with 80% training data, or four years of training data, as a good enough model for
stock prediction, as already determined in previous research by the author.
The comparative results indicate that neural network can predict typical stock
market prices with mean absolute percentage errors that are as low as 1.95%
over the ten prediction instances that were studied in this research.
| Barack Wamkaya Wanjawa | null | 1612.02666 | null | null |
Towards better decoding and language model integration in sequence to
sequence models | cs.NE cs.CL cs.LG stat.ML | The recently proposed Sequence-to-Sequence (seq2seq) framework advocates
replacing complex data processing pipelines, such as an entire automatic speech
recognition system, with a single neural network trained in an end-to-end
fashion. In this contribution, we analyse an attention-based seq2seq speech
recognition system that directly transcribes recordings into characters. We
observe two shortcomings: overconfidence in its predictions and a tendency to
produce incomplete transcriptions when language models are used. We propose
practical solutions to both problems, achieving competitive speaker-independent
word error rates on the Wall Street Journal dataset: without separate language
models we reach 10.6% WER, while together with a trigram language model, we
reach 6.7% WER.
| Jan Chorowski and Navdeep Jaitly | null | 1612.02695 | null | null |
A note on the triangle inequality for the Jaccard distance | cs.DM cs.IR cs.LG stat.ML | Two simple proofs of the triangle inequality for the Jaccard distance in
terms of nonnegative, monotone, submodular functions are given and discussed.
| Sven Kosub | null | 1612.02696 | null | null |
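For reference, the Jaccard distance in question is d(A, B) = 1 - |intersection(A, B)| / |union(A, B)|, with the common convention d = 0 when both sets are empty (assumed here):

# The Jaccard distance whose triangle inequality is discussed above:
# d(A, B) = 1 - |intersection| / |union|, with d = 0 when both sets are empty
# (a common convention, assumed here).
def jaccard_distance(a, b):
    a, b = set(a), set(b)
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

assert jaccard_distance({1, 2}, {2, 3}) == 1 - 1 / 3
# spot-check the triangle inequality on a small example
A, B, C = {1, 2}, {2, 3}, {3, 4}
assert jaccard_distance(A, C) <= jaccard_distance(A, B) + jaccard_distance(B, C)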
CrowdMI: Multiple Imputation via Crowdsourcing | cs.LG cs.HC stat.ML | Can humans impute missing data with similar proficiency as machines? This is
the question we aim to answer in this paper. We present a novel idea of
converting observations with missing data into a survey questionnaire, which
is presented to crowdworkers for completion. We replicate a multiple imputation
framework by having multiple unique crowdworkers complete our questionnaire.
Experimental results demonstrate that using our method, it is possible to
generate valid imputations for qualitative and quantitative missing data, with
results comparable to imputations generated by complex statistical models.
| Lovedeep Gondara | null | 1612.02707 | null | null |
Scalable Influence Maximization for Multiple Products in Continuous-Time
Diffusion Networks | cs.SI cs.DS cs.LG stat.ML | A typical viral marketing model identifies influential users in a social
network to maximize a single product adoption assuming unlimited user
attention, campaign budgets, and time. In reality, multiple products need
campaigns, users have limited attention, convincing users incurs costs, and
advertisers have limited budgets and expect the adoptions to be maximized soon.
Facing these user, monetary, and timing constraints, we formulate the problem
as a submodular maximization task in a continuous-time diffusion model under
the intersection of a matroid and multiple knapsack constraints. We propose a
randomized algorithm estimating the user influence in a network
($|\mathcal{V}|$ nodes, $|\mathcal{E}|$ edges) to an accuracy of $\epsilon$
with $n=\mathcal{O}(1/\epsilon^2)$ randomizations and
$\tilde{\mathcal{O}}(n|\mathcal{E}|+n|\mathcal{V}|)$ computations. By
exploiting the influence estimation algorithm as a subroutine, we develop an
adaptive threshold greedy algorithm achieving an approximation factor $k_a/(2+2
k)$ of the optimal when $k_a$ out of the $k$ knapsack constraints are active.
Extensive experiments on networks of millions of nodes demonstrate that the
proposed algorithms achieve the state-of-the-art in terms of effectiveness and
scalability.
| Nan Du, Yingyu Liang, Maria-Florina Balcan, Manuel Gomez-Rodriguez,
Hongyuan Zha, Le Song | null | 1612.02712 | null | null |
Learning in the Machine: Random Backpropagation and the Deep Learning
Channel | cs.LG cs.AI cs.NE | Random backpropagation (RBP) is a variant of the backpropagation algorithm
for training neural networks, where the transpose of the forward matrices are
replaced by fixed random matrices in the calculation of the weight updates. It
is remarkable both because of its effectiveness, in spite of using random
matrices to communicate error information, and because it completely removes
the taxing requirement of maintaining symmetric weights in a physical neural
system. To better understand random backpropagation, we first connect it to the
notions of local learning and learning channels. Through this connection, we
derive several alternatives to RBP, including skipped RBP (SRBP), adaptive RBP
(ARBP), sparse RBP, and their combinations (e.g. ASRBP) and analyze their
computational complexity. We then study their behavior through simulations
using the MNIST and CIFAR-10 benchmark datasets. These simulations show that
most of these variants work robustly, almost as well as backpropagation, and
that multiplication by the derivatives of the activation functions is
important. As a follow-up, we also study the low end of the number of bits
required to communicate error information over the learning channel. We then
provide partial intuitive explanations for some of the remarkable properties of
RBP and its variations. Finally, we prove several mathematical results,
including the convergence to fixed points of linear chains of arbitrary length,
the convergence to fixed points of linear autoencoders with decorrelated data,
the long-term existence of solutions for linear systems with a single hidden
layer and convergence in special cases, and the convergence to fixed points of
non-linear chains, when the derivative of the activation functions is included.
| Pierre Baldi, Peter Sadowski, Zhiqin Lu | null | 1612.02734 | null | null |
Controlling Robot Morphology from Incomplete Measurements | cs.RO cs.AI cs.LG cs.SY | Mobile robots with complex morphology are essential for traversing rough
terrains in Urban Search & Rescue missions (USAR). Since teleoperation of the
complex morphology places a high cognitive load on the operator, the morphology
is controlled autonomously. The autonomous control measures the robot state and
surrounding terrain which is usually only partially observable, and thus the
data are often incomplete. We marginalize the control over the missing
measurements and evaluate an explicit safety condition. If the safety condition
is violated, tactile terrain exploration by the body-mounted robotic arm
gathers the missing data.
| Martin Pecka, Karel Zimmermann, Michal Rein\v{s}tein, Tom\'a\v{s}
Svoboda | 10.1109/TIE.2016.2580125 | 1612.02739 | null | null |
Coupling Distributed and Symbolic Execution for Natural Language Queries | cs.LG cs.AI cs.CL cs.NE cs.SE | Building neural networks to query a knowledge base (a table) with natural
language is an emerging research topic in deep learning. An executor for table
querying typically requires multiple steps of execution because queries may
have complicated structures. In previous studies, researchers have developed
either fully distributed executors or symbolic executors for table querying. A
distributed executor can be trained in an end-to-end fashion, but is weak in
terms of execution efficiency and explicit interpretability. A symbolic
executor is efficient in execution, but is very difficult to train especially
at initial stages. In this paper, we propose to couple distributed and symbolic
execution for natural language queries, where the symbolic executor is
pretrained with the distributed executor's intermediate execution results in a
step-by-step fashion. Experiments show that our approach significantly
outperforms both distributed and symbolic executors, exhibiting high accuracy,
high learning efficiency, high execution efficiency, and high interpretability.
| Lili Mou, Zhengdong Lu, Hang Li, Zhi Jin | null | 1612.02741 | null | null |
Protein-Ligand Scoring with Convolutional Neural Networks | stat.ML cs.LG q-bio.BM | Computational approaches to drug discovery can reduce the time and cost
associated with experimental assays and enable the screening of novel
chemotypes. Structure-based drug design methods rely on scoring functions to
rank and predict binding affinities and poses. The ever-expanding amount of
protein-ligand binding and structural data enables the use of deep machine
learning techniques for protein-ligand scoring.
We describe convolutional neural network (CNN) scoring functions that take as
input a comprehensive 3D representation of a protein-ligand interaction. A CNN
scoring function automatically learns the key features of protein-ligand
interactions that correlate with binding. We train and optimize our CNN scoring
functions to discriminate between correct and incorrect binding poses, and between known
binders and non-binders. We find that our CNN scoring function outperforms the
AutoDock Vina scoring function when ranking poses both for pose prediction and
virtual screening.
| Matthew Ragoza (1), Joshua Hochuli (1), Elisa Idrobo (2), Jocelyn
Sunseri (1) and David Ryan Koes (1) ((1) University of Pittsburgh, (2) The
College of New Jersey) | 10.1021/acs.jcim.6b00740 | 1612.02751 | null | null |
Improved generator objectives for GANs | cs.LG stat.ML | We present a framework to understand GAN training as alternating density
ratio estimation and approximate divergence minimization. This provides an
interpretation for the mismatched GAN generator and discriminator objectives
often used in practice, and explains the problem of poor sample diversity. We
also derive a family of generator objectives that target arbitrary
$f$-divergences without minimizing a lower bound, and use them to train
generative image models that target either improved sample quality or greater
sample diversity.
| Ben Poole, Alexander A. Alemi, Jascha Sohl-Dickstein, Anelia Angelova | null | 1612.0278 | null | null |
Interactive Prior Elicitation of Feature Similarities for Small Sample
Size Prediction | cs.LG cs.HC | Regression under the "small $n$, large $p$" conditions, of small sample size
$n$ and large number of features $p$ in the learning data set, is a recurring
setting in which learning from data is difficult. With prior knowledge about
relationships of the features, $p$ can effectively be reduced, but explicating
such prior knowledge is difficult for experts. In this paper we introduce a new
method for eliciting expert prior knowledge about the similarity of the roles
of features in the prediction task. The key idea is to use an interactive
multidimensional-scaling (MDS) type scatterplot display of the features to
elicit the similarity relationships, and then use the elicited relationships in
the prior distribution of prediction parameters. Specifically, for learning to
predict a target variable with Bayesian linear regression, the feature
relationships are used to construct a Gaussian prior with a full covariance
matrix for the regression coefficients. Evaluation of our method in experiments
with simulated and real users on text data confirms that prior elicitation of
feature similarities improves prediction accuracy. Furthermore, elicitation
with an interactive scatterplot display outperforms straightforward elicitation
where the users choose feature pairs from a feature list.
| Homayun Afrabandpey, Tomi Peltola, Samuel Kaski | 10.1145/3079628.3079698 | 1612.02802 | null | null |
The Physical Systems Behind Optimization Algorithms | cs.LG math.OC stat.ML | We use differential-equation-based approaches to provide some physics
insights into analyzing the dynamics of popular optimization
algorithms in machine learning. In particular, we study gradient descent,
proximal gradient descent, coordinate gradient descent, proximal coordinate
gradient, and Newton's methods as well as their Nesterov's accelerated variants
in a unified framework motivated by a natural connection of optimization
algorithms to physical systems. Our analysis is applicable to more general
algorithms and optimization problems beyond convexity and strong
convexity, e.g. Polyak-Łojasiewicz and error bound conditions (possibly
nonconvex).
| Lin F. Yang, R. Arora, V. Braverman, Tuo Zhao | null | 1612.02803 | null | null |
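To make the differential-equation viewpoint concrete, a tiny numpy sketch comparing plain gradient descent with Nesterov's accelerated variant on a quadratic; these updates can be read as discretizations of first- and second-order dynamical systems, and the step size, momentum schedule, and test function are illustrative choices only.

# Gradient descent vs. Nesterov's accelerated variant on a simple quadratic,
# which can be read as discretizations of first- and second-order dynamics.
# The step size, momentum schedule, and test function are illustrative only.
import numpy as np

A = np.diag([1.0, 100.0])          # ill-conditioned quadratic
def f(x):    return 0.5 * x @ A @ x
def grad(x): return A @ x

eta = 1.0 / 100.0                  # 1 / L for this quadratic
x_gd = x_prev = x_nes = np.array([1.0, 1.0])

for t in range(1, 201):
    # gradient descent: x' = -grad f(x), discretized with step eta
    x_gd = x_gd - eta * grad(x_gd)
    # Nesterov: the momentum term corresponds to a damped second-order ODE
    y = x_nes + (t - 1) / (t + 2) * (x_nes - x_prev)
    x_prev, x_nes = x_nes, y - eta * grad(y)

print(f(x_gd), f(x_nes))           # the accelerated variant converges faster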
Task-Guided and Path-Augmented Heterogeneous Network Embedding for
Author Identification | cs.LG cs.AI cs.IR stat.ML | In this paper, we study the problem of author identification under
double-blind review setting, which is to identify potential authors given
information of an anonymized paper. Different from existing approaches that
rely heavily on feature engineering, we propose to use network embedding
approach to address the problem, which can automatically represent nodes into
lower dimensional feature vectors. However, there are two major limitations in
recent studies on network embedding: (1) they are usually general-purpose
embedding methods, which are independent of the specific tasks; and (2) most of
these approaches can only deal with homogeneous networks, where the
heterogeneity of the network is ignored. Hence, the challenges faced here are
twofold: (1) how to embed the network under the guidance of the author
identification task, and (2) how to select the best type of information due to
the heterogeneity of the network.
To address the challenges, we propose a task-guided and path-augmented
heterogeneous network embedding model. In our model, nodes are first embedded
as vectors in latent feature space. Embeddings are then shared and jointly
trained according to task-specific and network-general objectives. We extend
the existing unsupervised network embedding to incorporate meta paths in
heterogeneous networks, and select paths according to the specific task. The
guidance from author identification task for network embedding is provided both
explicitly in joint training and implicitly during meta path selection. Our
experiments demonstrate that by using path-augmented network embedding with
task guidance, our model can obtain significantly better accuracy at
identifying the true authors compared to existing methods.
| Ting Chen and Yizhou Sun | null | 1612.02814 | null | null |
Tensor-Dictionary Learning with Deep Kruskal-Factor Analysis | stat.ML cs.LG | A multi-way factor analysis model is introduced for tensor-variate data of
any order. Each data item is represented as a (sparse) sum of Kruskal
decompositions, a Kruskal-factor analysis (KFA). KFA is nonparametric and can
infer both the tensor-rank of each dictionary atom and the number of dictionary
atoms. The model is adapted for online learning, which allows dictionary
learning on large data sets. After KFA is introduced, the model is extended to
a deep convolutional tensor-factor analysis, supervised by a Bayesian SVM. The
experiments section demonstrates the improvement of KFA over vectorized
approaches (e.g., BPFA), tensor decompositions, and convolutional neural
networks (CNN) in multi-way denoising, blind inpainting, and image
classification. The improvement in PSNR for the inpainting results over other
methods exceeds 1 dB in several cases, and we achieve state-of-the-art results on
Caltech101 image classification.
| Andrew Stevens, Yunchen Pu, Yannan Sun, Greg Spell, Lawrence Carin | null | 1612.02842 | null | null |
Learning Representations by Stochastic Meta-Gradient Descent in Neural
Networks | cs.LG cs.AI stat.ML | Representations are fundamental to artificial intelligence. The performance
of a learning system depends on the type of representation used for
representing the data. Typically, these representations are hand-engineered
using domain knowledge. More recently, the trend is to learn these
representations through stochastic gradient descent in multi-layer neural
networks, which is called backprop. Learning the representations directly from
the incoming data stream reduces the human labour involved in designing a
learning system. More importantly, this allows a learning system to scale
to difficult tasks. In this paper, we introduce a new incremental learning
algorithm called crossprop, which learns incoming weights of hidden units based
on the meta-gradient descent approach that was previously introduced by Sutton
(1992) and Schraudolph (1999) for learning step-sizes. The final update
equation introduces an additional memory parameter for each of these weights
and generalizes the backprop update equation. From our experiments, we show
that crossprop learns and reuses its feature representation while tackling new
and unseen tasks whereas backprop relearns a new feature representation.
| Vivek Veeriah, Shangtong Zhang, Richard S. Sutton | null | 1612.02879 | null | null |
A Review of Intelligent Practices for Irrigation Prediction | cs.LG cs.NE | Population growth and increasing droughts are creating unprecedented strain
on the continued availability of water resources. Since irrigation is a major
consumer of fresh water, wastage of resources in this sector could have strong
consequences. To address this issue, irrigation water management and prediction
techniques need to be employed effectively and should be able to account for
the variabilities present in the environment. The different techniques surveyed
in this paper can be classified into two categories: computational and
statistical. Computational methods deal with scientific correlations between
physical parameters whereas statistical methods involve specific prediction
algorithms that can be used to automate the process of irrigation water
prediction. These algorithms interpret semantic relationships between the
various parameters of temperature, pressure, evapotranspiration etc. and store
them as numerical precomputed entities specific to the conditions and the area
used as the data for the training corpus used to train it. We focus on
reviewing the computational methods used to determine Evapotranspiration and
its implications. We compare the efficiencies of different data mining and
machine learning methods implemented in this area, such as Logistic Regression,
Decision Tree Classifiers, SysFor, Support Vector Machines (SVM), Fuzzy Logic
techniques, Artificial Neural Networks (ANNs) and various hybrids of Genetic
Algorithms (GA) applied to irrigation prediction. We also recommend a possible
technique for this task based on its superior results in other such time series
analysis tasks.
| Hans Krupakar, Akshay Jayakumar, Dhivya G | null | 1612.02893 | null | null |
Environmental Modeling Framework using Stacked Gaussian Processes | cs.LG stat.ML | A network of independently trained Gaussian processes (StackedGP) is
introduced to obtain predictions of quantities of interest with quantified
uncertainties. The main applications of the StackedGP framework are to
integrate different datasets through model composition, enhance predictions of
quantities of interest through a cascade of intermediate predictions, and to
propagate uncertainties through emulated dynamical systems driven by uncertain
forcing variables. By using analytical first and second-order moments of a
Gaussian process with uncertain inputs using squared exponential and polynomial
kernels, approximated expectations of quantities of interests that require an
arbitrary composition of functions can be obtained. The StackedGP model is
extended to any number of layers and nodes per layer, and it provides
flexibility in kernel selection for the input nodes. The proposed nonparametric
stacked model is validated using synthetic datasets, and its performance in
model composition and cascading predictions is measured in two applications
using real data.
| Kareem Abdelfatah, Junshu Bao, Gabriel Terejanu | null | 1612.02897 | null | null |
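A two-layer stacked-GP sketch with scikit-learn, where one GP maps inputs to an intermediate quantity and a second GP maps that quantity to the output; the paper propagates uncertainty analytically via moments, whereas the simple Monte Carlo propagation below is an approximation used only for illustration, and the data and kernels are assumptions.

# Two-layer stacked-GP sketch: GP1 maps inputs to an intermediate quantity,
# GP2 maps that quantity to the output. Uncertainty is propagated here by
# simple Monte Carlo sampling through GP1, as an approximation to the
# analytical moment-based propagation described in the abstract.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(60, 1))
z = np.sin(x) + 0.05 * rng.normal(size=x.shape)          # intermediate quantity
y = (z ** 2).ravel() + 0.05 * rng.normal(size=len(x))    # final quantity

gp1 = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(x, z.ravel())
gp2 = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(z, y)

def stacked_predict(x_new, n_samples=200):
    mu1, sd1 = gp1.predict(x_new, return_std=True)
    z_samples = mu1[:, None] + sd1[:, None] * rng.normal(size=(len(x_new), n_samples))
    y_samples = np.stack([gp2.predict(z_samples[:, [i]]) for i in range(n_samples)], axis=1)
    return y_samples.mean(axis=1), y_samples.std(axis=1)

mean, std = stacked_predict(np.array([[0.5], [1.5]]))
print(mean, std)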
A series of maximum entropy upper bounds of the differential entropy | cs.IT cs.CV cs.LG math.IT | We present a series of closed-form maximum entropy upper bounds for the
differential entropy of a continuous univariate random variable and study the
properties of that series. We then show how to use those generic bounds for
upper bounding the differential entropy of Gaussian mixture models. This
requires calculating the raw moments and raw absolute moments of Gaussian
mixtures in closed form, which may also be handy in statistical machine learning
and information theory. We report on our experiments and discuss the
tightness of those bounds.
| Frank Nielsen and Richard Nock | null | 1612.02954 | null | null |
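The first (variance-based) member of such a series is the Gaussian maximum-entropy bound h(X) <= (1/2) log(2*pi*e*Var(X)); the numpy sketch below evaluates it for a univariate Gaussian mixture from its exact mean and variance (the higher-order bounds of the series are not reproduced here).

# The variance-based maximum-entropy bound: for any continuous X,
# h(X) <= 0.5 * log(2*pi*e*Var(X)). Below it is evaluated for a univariate
# Gaussian mixture using its exact mean and variance; the full series of
# higher-order bounds from the paper is not reproduced here.
import numpy as np

def gmm_mean_var(weights, means, sigmas):
    w, m, s = map(np.asarray, (weights, means, sigmas))
    mean = np.sum(w * m)
    second = np.sum(w * (s ** 2 + m ** 2))     # E[X^2] of the mixture
    return mean, second - mean ** 2

def gaussian_maxent_bound(weights, means, sigmas):
    _, var = gmm_mean_var(weights, means, sigmas)
    return 0.5 * np.log(2 * np.pi * np.e * var)

print(gaussian_maxent_bound([0.5, 0.5], [-2.0, 2.0], [1.0, 1.0]))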
BaTFLED: Bayesian Tensor Factorization Linked to External Data | stat.ML cs.LG q-bio.QM | The vast majority of current machine learning algorithms are designed to
predict single responses or a vector of responses, yet many types of response
are more naturally organized as matrices or higher-order tensor objects where
characteristics are shared across modes. We present a new machine learning
algorithm BaTFLED (Bayesian Tensor Factorization Linked to External Data) that
predicts values in a three-dimensional response tensor using input features for
each of the dimensions. BaTFLED uses a probabilistic Bayesian framework to
learn projection matrices mapping input features for each mode into latent
representations that multiply to form the response tensor. By utilizing a
Tucker decomposition, the model can capture weights for interactions between
latent factors for each mode in a small core tensor. Priors that encourage
sparsity in the projection matrices and core tensor allow for feature selection
and model regularization. This method is shown to far outperform elastic net
and neural net models on 'cold start' tasks from data simulated in a three-mode
structure. Additionally, we apply the model to predict dose-response curves in
a panel of breast cancer cell lines treated with drug compounds that was used
as a Dialogue for Reverse Engineering Assessments and Methods (DREAM)
challenge.
| Nathan H Lazar, Mehmet G\"onen, Kemal S\"onmez | null | 1612.02965 | null | null |
Clipper: A Low-Latency Online Prediction Serving System | cs.DC cs.LG | Machine learning is being deployed in a growing number of applications which
demand real-time, accurate, and robust predictions under heavy query load.
However, most machine learning frameworks and systems only address model
training and not deployment.
In this paper, we introduce Clipper, a general-purpose low-latency prediction
serving system. Interposing between end-user applications and a wide range of
machine learning frameworks, Clipper introduces a modular architecture to
simplify model deployment across frameworks and applications. Furthermore, by
introducing caching, batching, and adaptive model selection techniques, Clipper
reduces prediction latency and improves prediction throughput, accuracy, and
robustness without modifying the underlying machine learning frameworks. We
evaluate Clipper on four common machine learning benchmark datasets and
demonstrate its ability to meet the latency, accuracy, and throughput demands
of online serving applications. Finally, we compare Clipper to the TensorFlow
Serving system and demonstrate that we are able to achieve comparable
throughput and latency while enabling model composition and online learning to
improve accuracy and render more robust predictions.
| Daniel Crankshaw, Xin Wang, Giulio Zhou, Michael J. Franklin, Joseph
E. Gonzalez, Ion Stoica | null | 1612.03079 | null | null |
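The toy class below illustrates one idea mentioned in the abstract above, adaptive selection across candidate models via an exponential-weights update driven by feedback; it is not Clipper's code or API, and the models, learning rate, and loss are invented for the example.

```python
# Exponential-weights sketch of adaptive model selection in a serving layer:
# serve a weighted prediction, then down-weight models that incur loss once
# ground-truth feedback arrives.
import numpy as np

class AdaptiveEnsemble:
    def __init__(self, models, eta=0.5):
        self.models = models              # callables: x -> prediction
        self.w = np.ones(len(models))
        self.eta = eta

    def predict(self, x):
        preds = np.array([m(x) for m in self.models])
        p = self.w / self.w.sum()
        return float(p @ preds), preds

    def feedback(self, preds, y_true):
        losses = (preds - y_true) ** 2
        self.w *= np.exp(-self.eta * losses)

models = [lambda x: 2.0 * x, lambda x: x + 1.0, lambda x: 0.5 * x]
ens = AdaptiveEnsemble(models)
for x, y in [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]:
    yhat, preds = ens.predict(x)
    ens.feedback(preds, y)
print(ens.w / ens.w.sum())
```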
Advancing Bayesian Optimization: The Mixed-Global-Local (MGL) Kernel and
Length-Scale Cool Down | cs.LG cs.AI stat.ML | Bayesian Optimization (BO) has become a core method for solving expensive
black-box optimization problems. While much research has focused on the choice
of the acquisition function, we focus on online length-scale adaptation and the
choice of kernel function. Instead of choosing hyperparameters in view of
maximum likelihood on past data, we propose to use the acquisition function to
decide on hyperparameter adaptation more robustly and in view of the future
optimization progress. Further, we propose a particular kernel function that
includes non-stationarity and local anisotropy and thereby implicitly
integrates the efficiency of local convex optimization with global Bayesian
optimization. Comparisons to state-of-the-art BO methods underline the
efficiency of these mechanisms on global optimization benchmarks.
| Kim Peter Wabersich and Marc Toussaint | null | 1612.03117 | null | null |
Phase transitions in Restricted Boltzmann Machines with generic priors | cond-mat.dis-nn cs.LG physics.data-an stat.ML | We study Generalised Restricted Boltzmann Machines with generic priors for
units and weights, interpolating between Boolean and Gaussian variables. We
present a complete analysis of the replica symmetric phase diagram of these
systems, which can be regarded as Generalised Hopfield models. We underline the
role of the retrieval phase for both inference and learning processes and we
show that retrieval is robust for a large class of weight and unit priors,
beyond the standard Hopfield scenario. Furthermore we show how the paramagnetic
phase boundary is directly related to the optimal size of the training set
necessary for good generalisation in a teacher-student scenario of unsupervised
learning.
| Adriano Barra, Giuseppe Genovese, Peter Sollich, Daniele Tantari | 10.1103/PhysRevE.96.042156 | 1612.03132 | null | null |
Testing Ising Models | cs.DS cs.IT cs.LG math.IT math.PR math.ST stat.TH | Given samples from an unknown multivariate distribution $p$, is it possible
to distinguish whether $p$ is the product of its marginals versus $p$ being far
from every product distribution? Similarly, is it possible to distinguish
whether $p$ equals a given distribution $q$ versus $p$ and $q$ being far from
each other? These problems of testing independence and goodness-of-fit have
received enormous attention in statistics, information theory, and theoretical
computer science, with sample-optimal algorithms known in several interesting
regimes of parameters. Unfortunately, it has also been understood that these
problems become intractable in large dimensions, necessitating exponential
sample complexity.
Motivated by the exponential lower bounds for general distributions as well
as the ubiquity of Markov Random Fields (MRFs) in the modeling of
high-dimensional distributions, we initiate the study of distribution testing
on structured multivariate distributions, and in particular the prototypical
example of MRFs: the Ising Model. We demonstrate that, in this structured
setting, we can avoid the curse of dimensionality, obtaining sample and time
efficient testers for independence and goodness-of-fit. One of the key
technical challenges we face along the way is bounding the variance of
functions of the Ising model.
| Constantinos Daskalakis, Nishanth Dikkala, Gautam Kamath | null | 1612.03147 | null | null |
Optimal mean-based algorithms for trace reconstruction | cs.CC cs.DS cs.LG | In the (deletion-channel) trace reconstruction problem, there is an unknown
$n$-bit source string $x$. An algorithm is given access to independent traces
of $x$, where a trace is formed by deleting each bit of~$x$ independently with
probability~$\delta$. The goal of the algorithm is to recover~$x$ exactly (with
high probability), while minimizing samples (number of traces) and running
time.
Previously, the best known algorithm for the trace reconstruction problem was
due to Holenstein et al.; it uses $\exp(\tilde{O}(n^{1/2}))$ samples and
running time for any fixed $0 < \delta < 1$. It is also what we call a
"mean-based algorithm", meaning that it only uses the empirical means of the
individual bits of the traces. Holenstein et al. also gave a lower bound,
showing that any mean-based algorithm must use at least $n^{\tilde{\Omega}(\log
n)}$ samples.
In this paper we improve both of these results, obtaining matching upper and
lower bounds for mean-based trace reconstruction. For any constant deletion
rate $0 < \delta < 1$, we give a mean-based algorithm that uses
$\exp(O(n^{1/3}))$ time and traces; we also prove that any mean-based algorithm
must use at least $\exp(\Omega(n^{1/3}))$ traces. In fact, we obtain matching
upper and lower bounds even for $\delta$ subconstant and $\rho := 1-\delta$
subconstant: when $(\log^3 n)/n \ll \delta \leq 1/2$ the bound is
$\exp(\Theta(\delta n)^{1/3})$, and when $1/\sqrt{n} \ll \rho \leq 1/2$ the
bound is $\exp(\Theta(n/\rho)^{1/3})$.
Our proofs involve estimates for the maxima of Littlewood polynomials on
complex disks. We show that these techniques can also be used to perform trace
reconstruction with random insertions and bit-flips in addition to deletions.
We also find a surprising result: for deletion probabilities $\delta > 1/2$,
the presence of insertions can actually help with trace reconstruction.
| Anindya De and Ryan O'Donnell and Rocco Servedio | null | 1612.03148 | null | null |
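To make the setting concrete, the snippet below simulates the deletion channel and computes the per-position empirical means of zero-padded traces, i.e. exactly the statistic a mean-based algorithm is restricted to; recovering x from those means, which is the paper's contribution, is not attempted, and the string length, deletion rate, and trace count are arbitrary.

```python
# Deletion-channel simulation plus the empirical per-position trace means used
# by mean-based trace reconstruction algorithms.
import numpy as np

rng = np.random.default_rng(2)
n, delta, num_traces = 40, 0.3, 5000
x = rng.integers(0, 2, size=n)                 # unknown source string

def trace(bits):
    keep = rng.random(bits.shape[0]) >= delta  # delete each bit independently w.p. delta
    return bits[keep]

means = np.zeros(n)
for _ in range(num_traces):
    t = trace(x)
    padded = np.zeros(n)
    padded[:t.shape[0]] = t                    # pad each trace with zeros to length n
    means += padded
means /= num_traces
print(means[:10])
```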
Testing Bayesian Networks | cs.DS cs.IT cs.LG math.IT math.ST stat.TH | This work initiates a systematic investigation of testing high-dimensional
structured distributions by focusing on testing Bayesian networks -- the
prototypical family of directed graphical models. A Bayesian network is defined
by a directed acyclic graph, where we associate a random variable with each
node. The value at any particular node is conditionally independent of all the
other non-descendant nodes once its parents are fixed. Specifically, we study
the properties of identity testing and closeness testing of Bayesian networks.
Our main contribution is the first non-trivial efficient testing algorithms for
these problems and corresponding information-theoretic lower bounds. For a wide
range of parameter settings, our testing algorithms have sample complexity
sublinear in the dimension and are sample-optimal, up to constant factors.
| Clement Canonne, Ilias Diakonikolas, Daniel Kane, Alistair Stewart | null | 1612.03156 | null | null |
Square Hellinger Subadditivity for Bayesian Networks and its
Applications to Identity Testing | cs.LG cs.IT math.IT math.PR math.ST stat.ML stat.TH | We show that the square Hellinger distance between two Bayesian networks on
the same directed graph, $G$, is subadditive with respect to the neighborhoods
of $G$. Namely, if $P$ and $Q$ are the probability distributions defined by two
Bayesian networks on the same DAG, our inequality states that the square
Hellinger distance, $H^2(P,Q)$, between $P$ and $Q$ is upper bounded by the
sum, $\sum_v H^2(P_{\{v\} \cup \Pi_v}, Q_{\{v\} \cup \Pi_v})$, of the square
Hellinger distances between the marginals of $P$ and $Q$ on every node $v$ and
its parents $\Pi_v$ in the DAG. Importantly, our bound does not involve the
conditionals but the marginals of $P$ and $Q$. We derive a similar inequality
for more general Markov Random Fields.
As an application of our inequality, we show that distinguishing whether two
Bayesian networks $P$ and $Q$ on the same (but potentially unknown) DAG satisfy
$P=Q$ vs $d_{\rm TV}(P,Q)>\epsilon$ can be performed from
$\tilde{O}(|\Sigma|^{3/4(d+1)} \cdot n/\epsilon^2)$ samples, where $d$ is the
maximum in-degree of the DAG and $\Sigma$ the domain of each variable of the
Bayesian networks. If $P$ and $Q$ are defined on potentially different and
potentially unknown trees, the sample complexity becomes
$\tilde{O}(|\Sigma|^{4.5} n/\epsilon^2)$, whose dependence on $n, \epsilon$ is
optimal up to logarithmic factors. Lastly, if $P$ and $Q$ are product
distributions over $\{0,1\}^n$ and $Q$ is known, the sample complexity becomes
$O(\sqrt{n}/\epsilon^2)$, which is optimal up to constant factors.
| Constantinos Daskalakis, Qinxuan Pan | null | 1612.03164 | null | null |
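As a quick numerical sanity check of the stated inequality (not a proof), the snippet below builds two binary Bayesian networks on the chain $X_1 \to X_2 \to X_3$ with made-up conditional probability tables and verifies that $H^2(P,Q)$ does not exceed the sum of the squared Hellinger distances of the marginals on $\{1\}$, $\{1,2\}$, and $\{2,3\}$.

```python
# Check H^2(P,Q) <= sum over nodes v of H^2 between marginals on {v} union parents(v)
# for two chain-structured binary Bayesian networks.
import itertools
import numpy as np

def joint(p1, p2_given_1, p3_given_2):
    """Full joint over (x1, x2, x3) for the chain X1 -> X2 -> X3."""
    P = np.zeros((2, 2, 2))
    for x1, x2, x3 in itertools.product(range(2), repeat=3):
        P[x1, x2, x3] = p1[x1] * p2_given_1[x1][x2] * p3_given_2[x2][x3]
    return P

def hellinger_sq(P, Q):
    return 1.0 - np.sum(np.sqrt(P * Q))

P = joint([0.6, 0.4], [[0.7, 0.3], [0.2, 0.8]], [[0.9, 0.1], [0.5, 0.5]])
Q = joint([0.5, 0.5], [[0.6, 0.4], [0.3, 0.7]], [[0.8, 0.2], [0.4, 0.6]])

lhs = hellinger_sq(P, Q)
rhs = (hellinger_sq(P.sum(axis=(1, 2)), Q.sum(axis=(1, 2)))   # marginal on {1}
       + hellinger_sq(P.sum(axis=2), Q.sum(axis=2))           # marginal on {1,2}
       + hellinger_sq(P.sum(axis=0), Q.sum(axis=0)))          # marginal on {2,3}
print(lhs, rhs, lhs <= rhs + 1e-12)
```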
Low-Rank Inducing Norms with Optimality Interpretations | math.OC cs.LG stat.ML | Optimization problems with rank constraints appear in many diverse fields
such as control, machine learning and image analysis. Since the rank constraint
is non-convex, these problems are often approximately solved via convex
relaxations. Nuclear norm regularization is the prevailing convexifying
technique for dealing with these types of problem. This paper introduces a
family of low-rank inducing norms and regularizers which includes the nuclear
norm as a special case. A posteriori guarantees on solving an underlying rank
constrained optimization problem with these convex relaxations are provided. We
evaluate the performance of the low-rank inducing norms on three matrix
completion problems. In all examples, the nuclear norm heuristic is
outperformed by convex relaxations based on other low-rank inducing norms. For
two of the problems there exist low-rank inducing norms that succeed in
recovering the partially unknown matrix, while the nuclear norm fails. These
low-rank inducing norms are shown to be representable as semi-definite
programs. Moreover, these norms have cheaply computable proximal mappings,
which makes it possible to also solve problems of large size using first-order
methods.
| Christian Grussler and Pontus Giselsson | 10.1137/17M1115770 | 1612.03186 | null | null |
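For reference, the sketch below implements only the proximal mapping of the nuclear norm (singular-value soft-thresholding), the special case of the proposed family named in the abstract above; the proximal mappings of the other low-rank inducing norms differ and are not shown.

```python
# Proximal operator of tau * (nuclear norm): soft-threshold the singular values.
import numpy as np

def prox_nuclear(M, tau):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)     # soft-thresholding
    return (U * s_shrunk) @ Vt

rng = np.random.default_rng(3)
M = rng.normal(size=(8, 6))
print(np.linalg.matrix_rank(prox_nuclear(M, tau=1.0)))   # typically lower rank than M
```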
DeepCancer: Detecting Cancer through Gene Expressions via Deep
Generative Learning | cs.AI cs.LG q-bio.GN | Transcriptional profiling on microarrays to obtain gene expressions has been
used to facilitate cancer diagnosis. We propose a deep generative machine
learning architecture (called DeepCancer) that learns features from unlabeled
microarray data. These models have been used in conjunction with conventional
classifiers that classify the tissue samples as either cancerous or
non-cancerous. The proposed model has been tested on two different
clinical datasets. The evaluation demonstrates that the DeepCancer model achieves a
very high precision score, while significantly controlling the false positive
and false negative scores.
| Rajendra Rana Bhat, Vivek Viswanath, Xiaolin Li | null | 1612.03211 | null | null |
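A minimal stand-in for the pipeline described in the abstract above is sketched below: an unsupervised feature learner (here a Bernoulli RBM, used only as a placeholder for the paper's deeper generative architecture) feeds a conventional classifier; the data are random toy values, not gene expressions.

```python
# Unsupervised feature learning followed by a conventional classifier.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(5)
X = rng.random((200, 500))            # toy "expression" matrix scaled to [0, 1]
y = rng.integers(0, 2, size=200)      # cancerous vs non-cancerous labels

model = Pipeline([
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)                       # the RBM step ignores y; features are learned unsupervised
print(model.score(X, y))
```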
Towards deep learning with spiking neurons in energy based models with
contrastive Hebbian plasticity | cs.LG cs.NE q-bio.NC | In machine learning, error back-propagation in multi-layer neural networks
(deep learning) has been impressively successful in supervised and
reinforcement learning tasks. As a model for learning in the brain, however,
deep learning has long been regarded as implausible, since it relies in its
basic form on a non-local plasticity rule. To overcome this problem,
energy-based models with local contrastive Hebbian learning were proposed and
tested on a classification task with networks of rate neurons. We extended this
work by implementing and testing such a model with networks of leaky
integrate-and-fire neurons. Preliminary results indicate that it is possible to
learn a non-linear regression task with hidden layers, spiking neurons and a
local synaptic plasticity rule.
| Thomas Mesnard, Wulfram Gerstner, Johanni Brea | null | 1612.03214 | null | null |
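The update below is a rate-based sketch of the local contrastive Hebbian rule referred to in the abstract above: weights move toward the unit correlations of the clamped phase and away from those of the free phase. The spiking (leaky integrate-and-fire) implementation that is the paper's actual contribution is not reproduced, and the equilibrium rates here are random placeholders.

```python
# Local contrastive Hebbian weight update from free-phase and clamped-phase rates.
import numpy as np

rng = np.random.default_rng(4)
n_units, eta = 5, 0.01
W = rng.normal(scale=0.1, size=(n_units, n_units))

rho_free = rng.random(n_units)    # equilibrium rates in the free phase (placeholder)
rho_clamp = rng.random(n_units)   # equilibrium rates with outputs clamped (placeholder)

# Each synapse only needs its own pre- and post-synaptic rates: a local rule.
dW = eta * (np.outer(rho_clamp, rho_clamp) - np.outer(rho_free, rho_free))
W += dW
print(np.abs(dW).max())
```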