title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Classification of MRI data using Deep Learning and Gaussian
Process-based Model Selection | cs.LG stat.ML | The classification of MRI images according to the anatomical field of view is
a necessary task to solve when faced with the increasing quantity of medical
images. In parallel, advances in deep learning have made it a suitable tool for
computer vision problems. Using a common architecture (such as AlexNet)
provides quite good results, but these are not sufficient for clinical use. Improving the
model is not an easy task, due to the large number of hyper-parameters
governing both the architecture and the training of the network, and to the
limited understanding of their relevance. Since an exhaustive search is not
tractable, we propose to optimize the network first by random search, and then
by an adaptive search based on Gaussian Processes and Probability of
Improvement. Applying this method on a large and varied MRI dataset, we show a
substantial improvement between the baseline network and the final one (up to
20\% for the most difficult classes).
| Hadrien Bertrand, Matthieu Perrot, Roberto Ardon, Isabelle Bloch | null | 1701.04355 | null | null |
The Incredible Shrinking Neural Network: New Perspectives on Learning
Representations Through The Lens of Pruning | cs.NE cs.LG | How much can pruning algorithms teach us about the fundamentals of learning
representations in neural networks? And how much can these fundamentals help
while devising new pruning techniques? A lot, it turns out. Neural network
pruning has become a topic of great interest in recent years, and many
different techniques have been proposed to address this problem. The decision
of what to prune and when to prune necessarily forces us to confront our
assumptions about how neural networks actually learn to represent patterns in
data. In this work, we set out to test several long-held hypotheses about
neural network learning representations, approaches to pruning and the
relevance of one in the context of the other. To accomplish this, we argue in
favor of pruning whole neurons as opposed to the traditional method of pruning
weights from optimally trained networks. We first review the historical
literature, point out some common assumptions it makes, and propose methods to
demonstrate the inherent flaws in these assumptions. We then propose our novel
approach to pruning and set about analyzing the quality of the decisions it
makes. Our analysis led us to question the validity of many widely-held
assumptions behind pruning algorithms and the trade-offs we often make in the
interest of reducing computational complexity. We discovered that there is a
straightforward, albeit expensive, way to serially prune 40-70% of the neurons
in a trained network with minimal effect on the learning representation and
without any re-training. It is to be noted here that the motivation behind this
work is not to propose an algorithm that would outperform all existing methods,
but to shed light on what some inherent flaws in these methods can teach us
about learning representations and how this can lead us to superior pruning
techniques.
| Aditya Sharma, Nikolas Wolfe, Bhiksha Raj | null | 1701.04465 | null | null |
Towards a New Interpretation of Separable Convolutions | cs.LG stat.ML | In recent times, the use of separable convolutions in deep convolutional
neural network architectures has been explored. Several researchers, most
notably (Chollet, 2016) and (Ghosh, 2017) have used separable convolutions in
their deep architectures and have demonstrated state of the art or close to
state of the art performance. However, the underlying mechanism of action of
separable convolutions is still not fully understood. Although their
mathematical definition is well understood as a depthwise convolution followed
by a pointwise convolution, deeper interpretations such as the extreme
Inception hypothesis (Chollet, 2016) have failed to provide a thorough
explanation of their efficacy. In this paper, we propose a hybrid
interpretation that we believe is a better model for explaining the efficacy of
separable convolutions.
| Tapabrata Ghosh | null | 1701.04489 | null | null |
Deep Learning for Computational Chemistry | stat.ML cs.AI cs.CE cs.LG physics.chem-ph | The rise and fall of artificial neural networks is well documented in the
scientific literature of both computer science and computational chemistry. Yet
almost two decades later, we are now seeing a resurgence of interest in deep
learning, a machine learning algorithm based on multilayer neural networks.
Within the last few years, we have seen the transformative impact of deep
learning in many domains, particularly in speech recognition and computer
vision, to the extent that the majority of expert practitioners in those fields
are now regularly eschewing prior established models in favor of deep learning
models. In this review, we provide an introductory overview into the theory of
deep neural networks and their unique properties that distinguish them from
traditional machine learning algorithms used in cheminformatics. By providing
an overview of the variety of emerging applications of deep neural networks, we
highlight its ubiquity and broad applicability to a wide range of challenges in
the field, including QSAR, virtual screening, protein structure prediction,
quantum chemistry, materials design and property prediction. In reviewing the
performance of deep neural networks, we observed a consistent outperformance
against non-neural-network state-of-the-art models across disparate research
topics, and deep neural network based models often exceeded the "glass ceiling"
expectations of their respective tasks. Coupled with the maturity of
GPU-accelerated computing for training deep neural networks and the exponential
growth of chemical data on which to train these networks, we anticipate that
deep learning algorithms will be a valuable tool for computational chemistry.
| Garrett B. Goh, Nathan O. Hodas, Abhinav Vishnu | null | 1701.04503 | null | null |
Online Learning with Regularized Kernel for One-class Classification | cs.LG | This paper presents an online learning algorithm for a regularized
kernel-based one-class extreme learning machine (ELM) classifier, referred to
as online RK-OC-ELM. The baseline kernel hyperplane model considers the whole
data in a single chunk with a regularized ELM approach for offline learning in
the case of one-class classification (OCC). In this paper, the basic
hyperplane model is further adapted in an online fashion from a stream of
training samples. Two frameworks, viz., boundary and reconstruction, are
presented to detect the target class in online RK-OC-ELM. The boundary
framework based one-class classifier consists of a single output node, and the
classifier endeavors to approximate all data to any real number. In contrast,
the one-class classifier based on the reconstruction framework is an
autoencoder architecture, where the output nodes are identical to the input
nodes and the classifier endeavors to reconstruct the input layer at the
output layer. Both frameworks employ regularized kernel ELM based online
learning, and consistency-based model selection has been employed to select
the learning algorithm parameters. The performance of online RK-OC-ELM has
been evaluated on standard benchmark datasets as well as on artificial
datasets, and the results are compared with existing state-of-the-art
one-class classifiers. The results indicate that the online learning one-class
classifier is slightly better than or comparable to batch learning based
approaches. Since the base classifier of the proposed classifiers is the ELM,
the proposed classifiers also inherit its benefit, i.e., faster computation
compared to traditional autoencoder based one-class classifiers.
| Chandan Gautam, Aruna Tiwari, Sundaram Suresh and Kapil Ahuja | null | 1701.04508 | null | null |
On The Construction of Extreme Learning Machine for Online and Offline
One-Class Classification - An Expanded Toolbox | cs.LG stat.ML | One-Class Classification (OCC) has been a prime concern for researchers and
has been effectively employed in various disciplines. However, one-class
classifiers based on traditional methods are very time consuming due to their
iterative processes and the tuning of various parameters. In this paper, we
present six OCC methods based on the extreme learning machine (ELM) and the
Online Sequential ELM (OSELM). Our proposed classifiers fall into two
categories, reconstruction based and boundary based, and support both types of
learning, viz., online and offline. Of the proposed methods, four are offline
and the remaining two are online. Of the four offline methods, two perform
random feature mapping and two perform kernel feature mapping. The kernel
feature mapping based approaches have been tested with the RBF kernel, and the
online versions of the one-class classifiers have been tested with both types
of nodes, viz., additive and RBF. It is a well known fact that the threshold
decision is a crucial factor in OCC, so three different threshold-deciding
criteria have been employed, and we analyze the effectiveness of one
threshold-deciding criterion over another. Further, these methods are tested
on two artificial datasets to check their boundary construction capability,
and on eight benchmark datasets from different disciplines to evaluate the
performance of the classifiers. Our proposed classifiers exhibit better
performance compared to ten traditional one-class classifiers and two ELM
based one-class classifiers. Through the proposed one-class classifiers, we
intend to expand the functionality of the most widely used toolbox for OCC,
i.e., the DD toolbox. All of our methods are fully compatible with all the
present features of the toolbox.
| Chandan Gautam, Aruna Tiwari and Qian Leng | 10.1016/j.neucom.2016.04.070 | 1701.04516 | null | null |
Towards prediction of rapid intensification in tropical cyclones with
recurrent neural networks | cs.LG stat.AP | The problem where a tropical cyclone intensifies dramatically within a short
period of time is known as rapid intensification. This has been one of the
major challenges for tropical weather forecasting. Recurrent neural networks
have been promising for time series problems, which makes them appropriate for
modeling rapid intensification. In this paper, recurrent neural networks are
used to predict rapid intensification cases of tropical cyclones from the
South Pacific and South Indian Ocean regions. A class imbalance problem is
encountered, which makes it very challenging to achieve promising performance.
A simple strategy was proposed to include more positive cases for detection,
whereby the false positive rate was slightly improved. The limitations of
building an efficient system remain due to the challenge of addressing the
class imbalance problem encountered in rapid intensification prediction. This
motivates further
research in using innovative machine learning methods.
| Rohitash Chandra | null | 1701.04518 | null | null |
Faster K-Means Cluster Estimation | cs.LG cs.IR | There has been considerable work on improving the popular clustering
algorithm k-means in terms of both mean squared error (MSE) and speed.
However, most of the k-means variants tend to compute the distance of each
data point to each cluster centroid in every iteration. We propose a fast
heuristic to overcome this bottleneck with only a marginal increase in MSE. We
observe that across all iterations of k-means, a data point changes its
membership only among a small subset of clusters. Our heuristic predicts such
clusters for each data point by looking at nearby clusters after the first
iteration of k-means. We augment well known variants of k-means with our
heuristic to demonstrate its effectiveness. For various synthetic and
real-world datasets, our heuristic achieves speed-ups of up to 3x when
compared to efficient variants of k-means.
| Siddhesh Khandelwal, Amit Awekar | null | 1701.046 | null | null |
Incremental Learning for Robot Perception through HRI | cs.RO cs.HC cs.LG | Scene understanding and object recognition are difficult to achieve yet
crucial skills for robots. Recently, Convolutional Neural Networks (CNNs) have
shown success in this task. However, there is still a gap between their
performance on image datasets and real-world robotics scenarios. We present a
novel paradigm for incrementally improving a robot's visual perception through
active human interaction. In this paradigm, the user introduces novel objects
to the robot by means of pointing and voice commands. Given this information,
the robot visually explores the object and adds images from it to re-train the
perception module. Our base perception module is based on recent developments in
object detection and recognition using deep learning. Our method leverages
state of the art CNNs from off-line batch learning, human guidance, robot
exploration and incremental on-line learning.
| Sepehr Valipour, Camilo Perez, Martin Jagersand | null | 1701.04693 | null | null |
Adversarial Variational Bayes: Unifying Variational Autoencoders and
Generative Adversarial Networks | cs.LG | Variational Autoencoders (VAEs) are expressive latent variable models that
can be used to learn complex probability distributions from training data.
However, the quality of the resulting model crucially relies on the
expressiveness of the inference model. We introduce Adversarial Variational
Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily
expressive inference models. We achieve this by introducing an auxiliary
discriminative network that allows us to rephrase the maximum-likelihood problem
as a two-player game, hence establishing a principled connection between VAEs
and Generative Adversarial Networks (GANs). We show that in the nonparametric
limit our method yields an exact maximum-likelihood assignment for the
parameters of the generative model, as well as the exact posterior distribution
over the latent variables given an observation. Contrary to competing
approaches which combine VAEs with GANs, our approach has a clear theoretical
justification, retains most advantages of standard Variational Autoencoders and
is easy to implement.
| Lars Mescheder, Sebastian Nowozin and Andreas Geiger | null | 1701.04722 | null | null |
On the Sample Complexity of Graphical Model Selection for Non-Stationary
Processes | cs.LG stat.ML | We characterize the sample size required for accurate graphical model
selection from non-stationary samples. The observed data is modeled as a
vector-valued zero-mean Gaussian random process whose samples are uncorrelated
but have different covariance matrices. This model contains as special cases
the standard setting of i.i.d. samples as well as the case of samples forming a
stationary or underspread (non-stationary) process. More generally, our model
applies to any process model for which an efficient decorrelation can be
obtained. By analyzing a particular model selection method, we derive a
sufficient condition on the required sample size for accurate graphical model
selection based on non-stationary data.
| Nguyen Q. Tran and Oleksii Abramenko and Alexander Jung | null | 1701.04724 | null | null |
Summoning Demons: The Pursuit of Exploitable Bugs in Machine Learning | cs.CR cs.LG | Governments and businesses increasingly rely on data analytics and machine
learning (ML) for improving their competitive edge in areas such as consumer
satisfaction, threat intelligence, decision making, and product efficiency.
However, by cleverly corrupting a subset of data used as input to a target's ML
algorithms, an adversary can perturb outcomes and compromise the effectiveness
of ML technology. While prior work in the field of adversarial machine learning
has studied the impact of input manipulation on correct ML algorithms, we
consider the exploitation of bugs in ML implementations. In this paper, we
characterize the attack surface of ML programs, and we show that malicious
inputs exploiting implementation bugs enable strictly more powerful attacks
than the classic adversarial machine learning techniques. We propose a
semi-automated technique, called steered fuzzing, for exploring this attack
surface and for discovering exploitable bugs in machine learning programs, in
order to demonstrate the magnitude of this threat. As a result of our work, we
responsibly disclosed five vulnerabilities, established three new CVE-IDs, and
illuminated a common insecure practice across many machine learning systems.
Finally, we outline several research directions for further understanding and
mitigating this threat.
| Rock Stevens, Octavian Suciu, Andrew Ruef, Sanghyun Hong, Michael
Hicks, Tudor Dumitra\c{s} | null | 1701.04739 | null | null |
Joint Deep Modeling of Users and Items Using Reviews for Recommendation | cs.LG cs.IR | A large amount of information exists in reviews written by users. This source
of information has been ignored by most of the current recommender systems
while it can potentially alleviate the sparsity problem and improve the quality
of recommendations. In this paper, we present a deep model to learn item
properties and user behaviors jointly from review text. The proposed model,
named Deep Cooperative Neural Networks (DeepCoNN), consists of two parallel
neural networks coupled in the last layers. One of the networks focuses on
learning user behaviors exploiting reviews written by the user, and the other
one learns item properties from the reviews written for the item. A shared
layer is introduced on top to couple these two networks together. The
shared layer enables latent factors learned for users and items to interact
with each other in a manner similar to factorization machine techniques.
Experimental results demonstrate that DeepCoNN significantly outperforms all
baseline recommender systems on a variety of datasets.
| Lei Zheng, Vahid Noroozi, Philip S. Yu | null | 1701.04783 | null | null |
Towards Principled Methods for Training Generative Adversarial Networks | stat.ML cs.LG | The goal of this paper is not to introduce a single algorithm or method, but
to make theoretical steps towards fully understanding the training dynamics of
generative adversarial networks. In order to substantiate our theoretical
analysis, we perform targeted experiments to verify our assumptions, illustrate
our claims, and quantify the phenomena. This paper is divided into three
sections. The first section introduces the problem at hand. The second section
is dedicated to studying and proving rigorously the problems including
instability and saturation that arise when training generative adversarial
networks. The third section examines a practical and theoretically grounded
direction towards solving these problems, while introducing new tools to study
them.
| Martin Arjovsky, L\'eon Bottou | null | 1701.04862 | null | null |
3D Morphology Prediction of Progressive Spinal Deformities from
Probabilistic Modeling of Discriminant Manifolds | cs.LG stat.ML | We introduce a novel approach for predicting the progression of adolescent
idiopathic scoliosis from 3D spine models reconstructed from biplanar X-ray
images. Recent progress in machine learning has made it possible to improve
classification and prognosis rates, but existing methods lack a probabilistic framework to
measure uncertainty in the data. We propose a discriminative probabilistic
manifold embedding where locally linear mappings transform data points from
high-dimensional space to corresponding low-dimensional coordinates. A
discriminant adjacency matrix is constructed to maximize the separation between
progressive and non-progressive groups of patients diagnosed with scoliosis,
while minimizing the distance in latent variables belonging to the same class.
To predict the evolution of deformation, a baseline reconstruction is projected
onto the manifold, from which a spatiotemporal regression model is built from
parallel transport curves inferred from neighboring exemplars. Rate of
progression is modulated from the spine flexibility and curve magnitude of the
3D spine deformation. The method was tested on 745 reconstructions from 133
subjects using longitudinal 3D reconstructions of the spine, with results
demonstrating that the discriminative framework can distinguish between
progressive and non-progressive scoliotic patients with a classification rate
of 81% and prediction differences of 2.1$^{\circ}$ in main curve angulation,
outperforming
other manifold learning methods. Our method achieved a higher prediction
accuracy and improved the modeling of spatiotemporal morphological changes in
highly deformed spines compared to other learning methods.
| Samuel Kadoury, William Mandel, Marjolaine Roy-Beaudry, Marie-Lyne
Nault, Stefan Parent | null | 1701.04869 | null | null |
Agglomerative Info-Clustering | cs.IT cs.LG math.IT | An agglomerative clustering of random variables is proposed, where clusters
of random variables sharing the maximum amount of multivariate mutual
information are merged successively to form larger clusters. Compared to the
previous info-clustering algorithms, the agglomerative approach allows the
computation to stop earlier when clusters of desired size and accuracy are
obtained. An efficient algorithm is also derived based on the submodularity of
entropy and the duality between the principal sequence of partitions and the
principal sequence for submodular functions.
| Chung Chan, Ali Al-Bashabsheh, Qiaoqiao Zhou | null | 1701.04926 | null | null |
A Machine Learning Alternative to P-values | stat.ML cs.LG | This paper presents an alternative approach to p-values in regression
settings. This approach, whose origins can be traced to machine learning, is
based on the leave-one-out bootstrap for prediction error. In machine learning
this is called the out-of-bag (OOB) error. To obtain the OOB error for a model,
one draws a bootstrap sample and fits the model to the in-sample data. The
out-of-sample prediction error for the model is obtained by calculating the
prediction error for the model using the out-of-sample data. Repeating and
averaging yields the OOB error, which represents a robust cross-validated
estimate of the accuracy of the underlying model. By a simple modification to
the bootstrap data involving "noising up" a variable, the OOB method yields a
variable importance (VIMP) index, which directly measures how much a specific
variable contributes to the prediction precision of a model. VIMP provides a
scientifically interpretable measure of the effect size of a variable, which we
call the "predictive effect size", that holds whether the researcher's model is
correct or not, unlike the p-value whose calculation is based on the assumed
correctness of the model. We also discuss a marginal VIMP index, also easily
calculated, which measures the marginal effect of a variable, or what we call
"the discovery effect". The OOB procedure can be applied to both parametric and
nonparametric regression models and requires only that the researcher can
repeatedly fit their model to bootstrap and modified bootstrap data. We
illustrate this approach on a survival data set involving patients with
systolic heart failure and to a simulated survival data set where the model is
incorrectly specified to illustrate its robustness to model misspecification.
| Min Lu and Hemant Ishwaran | null | 1701.04944 | null | null |
A Deep Convolutional Auto-Encoder with Pooling - Unpooling Layers in
Caffe | cs.NE cs.CV cs.LG | This paper presents the development of several models of a deep convolutional
auto-encoder in the Caffe deep learning framework and their experimental
evaluation on the example of the MNIST dataset. We have created five models of a
convolutional auto-encoder which differ architecturally by the presence or
absence of pooling and unpooling layers in the auto-encoder's encoder and
decoder parts. Our results show that the developed models provide very good
results in dimensionality reduction and unsupervised clustering tasks, and
small classification errors when we used the learned internal code as an input
of a supervised linear classifier and multi-layer perceptron. The best results
were provided by a model where the encoder part contains convolutional and
pooling layers, followed by an analogous decoder part with deconvolution and
unpooling layers without the use of switch variables in the decoder part. The
paper also discusses practical details of the creation of a deep convolutional
auto-encoder in the very popular Caffe deep learning framework. We believe that
our approach and results presented in this paper could help other researchers
to build efficient deep neural network architectures in the future.
| Volodymyr Turchenko, Eric Chalmers, Artur Luczak | null | 1701.04949 | null | null |
Multilayer Perceptron Algebra | stat.ML cs.LG | Artificial Neural Networks (ANNs) have been phenomenally successful on
various pattern recognition tasks. However, the design of neural networks
relies heavily on the experience and intuitions of individual developers. In
this article, the author introduces a mathematical structure called MLP
algebra on the set of all Multilayer Perceptron Neural Networks (MLPs), which
can serve as a guiding principle for building MLPs suited to particular data
sets, and for building complex MLPs from simpler ones.
| Zhao Peng | null | 1701.04968 | null | null |
Highly Efficient Hierarchical Online Nonlinear Regression Using Second
Order Methods | cs.LG | We introduce highly efficient online nonlinear regression algorithms that are
suitable for real life applications. We process the data in a truly online
manner such that no storage is needed, i.e., the data is discarded after being
used. For nonlinear modeling we use a hierarchical piecewise linear approach
based on the notion of decision trees where the space of the regressor vectors
is adaptively partitioned based on performance. For the first time in the
literature, we learn both the piecewise linear partitioning of the regressor
space as well as the linear models in each region using highly effective second
order methods, i.e., Newton-Raphson methods. Hence, we avoid the well known
overfitting issues by using piecewise linear models; moreover, since both the
region boundaries and the linear models in each region are trained using
second order methods, we achieve substantial performance gains compared to the
state of the art. We demonstrate our gains over the well known benchmark data
sets and provide performance results in an individual sequence manner
guaranteed to hold without any statistical assumptions. Hence, the introduced
algorithms address computational complexity issues widely encountered in real
life applications while providing superior guaranteed performance in a strong
deterministic sense.
| Burak C. Civek, Ibrahim Delibalta and Suleyman S. Kozat | null | 1701.05053 | null | null |
Lipschitz Properties for Deep Convolutional Networks | cs.LG math.FA | In this paper we discuss the stability properties of convolutional neural
networks. Convolutional neural networks are widely used in machine learning. In
classification they are mainly used as feature extractors. Ideally, we expect
similar features when the inputs are from the same class. That is, we hope to
see a small change in the feature vector with respect to a deformation on the
input signal. This can be established mathematically, and the key step is to
derive the Lipschitz properties. Further, we establish that the stability
results can be extended for more general networks. We give a formula for
computing the Lipschitz bound, and compare it with other methods to show it is
closer to the optimal value.
| Radu Balan, Maneesh Singh, Dongmian Zou | null | 1701.05217 | null | null |
Parsimonious Inference on Convolutional Neural Networks: Learning and
applying on-line kernel activation rules | cs.CV cs.AI cs.LG cs.NE | A new, radical CNN design approach is presented in this paper, considering
the reduction of the total computational load during inference. This is
achieved by a new holistic intervention on both the CNN architecture and the
training procedure, which targets parsimonious inference by learning to
exploit or remove the redundant capacity of a CNN architecture. This is
accomplished by the introduction of a new structural element that can be
inserted as an add-on to any contemporary CNN architecture, whilst preserving
or even improving its recognition accuracy. Our approach formulates a
systematic and data-driven method for developing CNNs that are trained to
eventually change size and form in real-time during inference, targeting the
smallest possible computational footprint. Results are provided for the
optimal implementation on a few modern, high-end mobile computing platforms,
indicating a significant speed-up of up to 3x.
| I. Theodorakopoulos, V. Pothos, D. Kastaniotis and N. Fragoulis | null | 1701.05221 | null | null |
Recommendation under Capacity Constraints | stat.ML cs.IR cs.LG | In this paper, we investigate the common scenario where every candidate item
for recommendation is characterized by a maximum capacity, i.e., number of
seats in a Point-of-Interest (POI) or size of an item's inventory. Despite the
prevalence of the task of recommending items under capacity constraints in a
variety of settings, to the best of our knowledge, none of the known
recommender methods is designed to respect capacity constraints. To close this
gap, we extend three state-of-the art latent factor recommendation approaches:
probabilistic matrix factorization (PMF), geographical matrix factorization
(GeoMF), and bayesian personalized ranking (BPR), to optimize for both
recommendation accuracy and expected item usage that respects the capacity
constraints. We introduce the useful concepts of user propensity to listen and
item capacity. Our experimental results in real-world datasets, both for the
domain of item recommendation and POI recommendation, highlight the benefit of
our method for the setting of recommendation under capacity constraints.
| Konstantina Christakopoulou, Jaya Kawale, Arindam Banerjee | null | 1701.05228 | null | null |
Online Structure Learning for Sum-Product Networks with Gaussian Leaves | stat.ML cs.LG | Sum-product networks have recently emerged as an attractive representation
due to their dual view as a special type of deep neural network with clear
semantics and a special type of probabilistic graphical model for which
inference is always tractable. Those properties follow from some conditions
(i.e., completeness and decomposability) that must be respected by the
structure of the network. As a result, it is not easy to specify a valid
sum-product network by hand and therefore structure learning techniques are
typically used in practice. This paper describes the first online structure
learning technique for continuous SPNs with Gaussian leaves. We also introduce
an accompanying new parameter learning technique.
| Wilson Hsu, Agastya Kalra, Pascal Poupart | null | 1701.05265 | null | null |
Validity of Clusters Produced By kernel-$k$-means With Kernel-Trick | cs.LG stat.ML | This paper corrects the proof of Theorem 2 from Gower's paper
\cite[page 5]{Gower:1982}, as well as Theorem 7 from Gower's paper
\cite{Gower:1986}. The first correction is needed in order to establish the
existence of the kernel function commonly used in the kernel trick, e.g. for
the $k$-means clustering algorithm, on the grounds of a distance matrix. The
correction encompasses the missing if-part of the proof and drops unnecessary
conditions. The second correction deals with the transformation of the kernel
matrix into one embeddable in Euclidean space.
| Mieczys{\l}aw A. K{\l}opotek | null | 1701.05335 | null | null |
Stochastic Subsampling for Factorizing Huge Matrices | stat.ML cs.LG math.OC q-bio.NC | We present a matrix-factorization algorithm that scales to input matrices
with huge numbers of both rows and columns. Learned factors may be sparse or
dense and/or non-negative, which makes our algorithm suitable for dictionary
learning, sparse component analysis, and non-negative matrix factorization. Our
algorithm streams matrix columns while subsampling them to iteratively learn
the matrix factors. At each iteration, the row dimension of a new sample is
reduced by subsampling, resulting in lower time complexity compared to a simple
streaming algorithm. Our method comes with convergence guarantees to reach a
stationary point of the matrix-factorization problem. We demonstrate its
efficiency on massive functional Magnetic Resonance Imaging data (2 TB), and on
patches extracted from hyperspectral images (103 GB). For both problems, which
involve different penalties on rows and columns, we obtain significant
speed-ups compared to state-of-the-art algorithms.
| Arthur Mensch (PARIETAL, NEUROSPIN), Julien Mairal (Thoth), Bertrand
Thirion (PARIETAL, NEUROSPIN), Gael Varoquaux (NEUROSPIN, PARIETAL) | 10.1109/TSP.2017.2752697 | 1701.05363 | null | null |
Variational Dropout Sparsifies Deep Neural Networks | stat.ML cs.LG | We explore a recently proposed Variational Dropout technique that provided an
elegant Bayesian interpretation to Gaussian Dropout. We extend Variational
Dropout to the case when dropout rates are unbounded, propose a way to reduce
the variance of the gradient estimator and report first experimental results
with individual dropout rates per weight. Interestingly, it leads to extremely
sparse solutions both in fully-connected and convolutional layers. This effect
is similar to the automatic relevance determination effect in empirical Bayes but
has a number of advantages. We reduce the number of parameters up to 280 times
on LeNet architectures and up to 68 times on VGG-like networks with a
negligible decrease of accuracy.
| Dmitry Molchanov, Arsenii Ashukha and Dmitry Vetrov | null | 1701.05369 | null | null |
Learning first-order definable concepts over structures of small degree | cs.LG cs.LO | We consider a declarative framework for machine learning where concepts and
hypotheses are defined by formulas of a logic over some background structure.
We show that within this framework, concepts defined by first-order formulas
over a background structure of at most polylogarithmic degree can be learned in
polylogarithmic time in the "probably approximately correct" learning sense.
| Martin Grohe and Martin Ritzert | null | 1701.05487 | null | null |
Fisher consistency for prior probability shift | stat.ML cs.LG stat.CO | We introduce Fisher consistency in the sense of unbiasedness as a desirable
property for estimators of class prior probabilities. Lack of Fisher
consistency could be used as a criterion to dismiss estimators that are
unlikely to deliver precise estimates in test datasets under prior probability
shift and more general dataset shift. The usefulness of this unbiasedness concept is
demonstrated with three examples of classifiers used for quantification:
Adjusted Classify & Count, EM-algorithm and CDE-Iterate. We find that Adjusted
Classify & Count and EM-algorithm are Fisher consistent. A counter-example
shows that CDE-Iterate is not Fisher consistent and, therefore, cannot be
trusted to deliver reliable estimates of class probabilities.
| Dirk Tasche | null | 1701.05512 | null | null |
PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture
Likelihood and Other Modifications | cs.LG stat.ML | PixelCNNs are a recently proposed class of powerful generative models with
tractable likelihood. Here we discuss our implementation of PixelCNNs which we
make available at https://github.com/openai/pixel-cnn. Our implementation
contains a number of modifications to the original model that both simplify its
structure and improve its performance. 1) We use a discretized logistic mixture
likelihood on the pixels, rather than a 256-way softmax, which we find to speed
up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels,
simplifying the model structure. 3) We use downsampling to efficiently capture
structure at multiple resolutions. 4) We introduce additional short-cut
connections to further speed up optimization. 5) We regularize the model using
dropout. Finally, we present state-of-the-art log likelihood results on
CIFAR-10 to demonstrate the usefulness of these modifications.
| Tim Salimans, Andrej Karpathy, Xi Chen, Diederik P. Kingma | null | 1701.05517 | null | null |
Deep Neural Networks - A Brief History | cs.NE cs.CV cs.LG | Introduction to deep neural networks and their history.
| Krzysztof J. Cios | null | 1701.05549 | null | null |
Poisson--Gamma Dynamical Systems | stat.ML cs.LG | We introduce a new dynamical system for sequentially observed multivariate
count data. This model is based on the gamma--Poisson construction---a natural
choice for count data---and relies on a novel Bayesian nonparametric prior that
ties and shrinks the model parameters, thus avoiding overfitting. We present an
efficient MCMC inference algorithm that advances recent work on augmentation
schemes for inference in negative binomial models. Finally, we demonstrate the
model's inductive bias using a variety of real-world data sets, showing that it
exhibits superior predictive performance over other models and infers highly
interpretable latent structure.
| Aaron Schein, Mingyuan Zhou, Hanna Wallach | null | 1701.05573 | null | null |
Rare Disease Physician Targeting: A Factor Graph Approach | stat.ML cs.LG | In rare disease physician targeting, a major challenge is how to identify
physicians who are treating diagnosed or underdiagnosed rare disease patients.
Rare diseases have extremely low incidence rates. For a specified rare disease,
only a small number of patients are affected and only a fraction of physicians
are involved. The existing targeting methodologies, such as segmentation and
profiling, are developed under a mass-market assumption. They are not suitable
for the rare disease market where the target classes are extremely imbalanced. The
authors propose a graphical model approach to predict targets by jointly
modeling physician and patient features from different data spaces and
utilizing the extra relational information. Through an empirical example with
medical claim and prescription data, the proposed approach demonstrates better
accuracy in finding target physicians. The graph representation also provides
visual interpretability of the relationships among physicians and patients. The
model can be extended to incorporate more complex dependency structures. This
article contributes to the literature by exploring the benefit of utilizing
relational dependencies among entities in the healthcare industry.
| Yong Cai, Yunlong Wang, Dong Dai | null | 1701.05644 | null | null |
Git Blame Who?: Stylistic Authorship Attribution of Small, Incomplete
Source Code Fragments | cs.LG cs.CR | Program authorship attribution has implications for the privacy of
programmers who wish to contribute code anonymously. While previous work has
shown that complete files that are individually authored can be attributed, we
show here for the first time that accounts belonging to open source
contributors containing short, incomplete, and typically uncompilable fragments
can also be effectively attributed.
We propose a technique for authorship attribution of contributor accounts
containing small source code samples, such as those that can be obtained from
version control systems or other direct comparison of sequential versions. We
show that while application of previous methods to individual small source code
samples yields an accuracy of about 73% for 106 programmers as a baseline, by
ensembling and averaging the classification probabilities of a sufficiently
large set of samples belonging to the same author we achieve 99% accuracy for
assigning the set of samples to the correct author. Through these results, we
demonstrate that attribution is an important threat to privacy for programmers
even in real-world collaborative environments such as GitHub. Additionally, we
propose the use of calibration curves to identify samples by unknown and
previously unencountered authors in the open world setting. We show that we can
also use these calibration curves in the case that we do not have linking
information and thus are forced to classify individual samples directly. This
is because the calibration curves allow us to identify which samples are more
likely to have been correctly attributed. Using such a curve can help an
analyst choose a cut-off point which will prevent most misclassifications, at
the cost of causing the rejection of some of the more dubious correct
attributions.
| Edwin Dauber, Aylin Caliskan, Richard Harang, Gregory Shearer, Michael
Weisman, Frederica Nelson, Rachel Greenstadt | 10.2478/popets-2019-0053 | 1701.05681 | null | null |
Real-time Traffic Accident Risk Prediction based on Frequent Pattern
Tree | stat.AP cs.LG | Traffic accident data are usually noisy, heterogeneous, and contain missing
values. How to select the most important variables to improve real-time
traffic accident risk prediction has become a concern of many recent studies.
This paper proposes a novel variable selection method based on the Frequent
Pattern tree (FP tree) algorithm. First, all the frequent patterns in the
traffic accident dataset are discovered. Then for each frequent pattern, a new
criterion, called the Relative Object Purity Ratio (ROPR) which we proposed, is
calculated. This ROPR is added to the importance score of the variables that
differentiate one frequent pattern from the others. To test the proposed
method, a dataset was compiled from the traffic accident records detected by
only one detector on interstate highway I-64 in Virginia in 2005. This dataset
was then linked to other variables such as real-time traffic information and
weather conditions. Both the proposed method based on the FP tree algorithm, as
well as the widely utilized, random forest method, were then used to identify
the important variables for the Virginia dataset. The results indicate that
there are some differences between the variables deemed important by the FP
tree and those selected by the random forest method. Following this, two
baseline models (i.e. a nearest neighbor (k-NN) method and a Bayesian network)
were developed to predict accident risk based on the variables identified by
both the FP tree method and the random forest method. The results show that the
models based on the variable selection using the FP tree performed better than
those based on the random forest method for several versions of the k-NN and
Bayesian network models. The best results were derived from a Bayesian network
model using variables from the FP tree. That model could predict 61.11% of
accidents accurately while having a false alarm rate of 38.16%.
| Lei Lin, Qian Wang, Adel W. Sadek | null | 1701.05691 | null | null |
Empirical Study of Drone Sound Detection in Real-Life Environment with
Deep Neural Networks | cs.SD cs.LG | This work aims to investigate the use of deep neural networks to detect
commercial hobby drones in real-life environments by analyzing their sound
data. The purpose of this work is to contribute to a system for detecting drones
used for malicious purposes, such as terrorism. Specifically, we present a
method capable of detecting the presence of commercial hobby drones as a binary
classification problem based on sound event detection. We recorded the sound
produced by a few popular commercial hobby drones, and then augmented this data
with diverse environmental sound data to remedy the scarcity of drone sound
data in diverse environments. We investigated the effectiveness of
state-of-the-art event sound classification methods, i.e., a Gaussian Mixture
Model (GMM), Convolutional Neural Network (CNN), and Recurrent Neural Network
(RNN), for drone sound detection. Our empirical results, which were obtained
with a testing dataset collected on an urban street, confirmed the
effectiveness of these models for operating in a real environment. In summary,
our RNN models showed the best detection performance with an F-Score of 0.8009
with 240 ms of input audio with a short processing time, indicating their
applicability to real-time detection systems.
| Sungho Jeon, Jong-Woo Shin, Young-Jun Lee, Woong-Hee Kim, YoungHyoun
Kwon, and Hae-Yong Yang | null | 1701.05779 | null | null |
Disentangling group and link persistence in Dynamic Stochastic Block
models | cs.SI cs.LG physics.soc-ph stat.ML | We study the inference of a model of dynamic networks in which both
communities and links keep memory of previous network states. By considering
maximum likelihood inference from single snapshot observations of the network,
we show that link persistence makes the inference of communities harder,
decreasing the detectability threshold, while community persistence tends to
make it easier. We analytically show that communities inferred from a single
network snapshot can share a maximum overlap with the underlying communities of
a specific previous instant in time. This leads to time-lagged inference: the
identification of past communities rather than present ones. Finally, we compute
the time lag and propose a corrected algorithm, the Lagged Snapshot Dynamic
(LSD) algorithm, for community detection in dynamic networks. We analytically
and numerically characterize the detectability transitions of this algorithm as
a function of the memory parameters of the model, and we make a comparison with
a full dynamic inference.
| Paolo Barucca, Fabrizio Lillo, Piero Mazzarisi, Daniele Tantari | null | 1701.05804 | null | null |
Neural Offset Min-Sum Decoding | cs.IT cs.LG math.IT | Recently, it was shown that if multiplicative weights are assigned to the
edges of a Tanner graph used in belief propagation decoding, it is possible to
use deep learning techniques to find values for the weights which improve the
error-correction performance of the decoder. Unfortunately, this approach
requires many multiplications, which are generally expensive operations. In
this paper, we suggest a more hardware-friendly approach in which offset
min-sum decoding is augmented with learnable offset parameters. Our method uses
no multiplications and has a parameter count less than half that of the
multiplicative algorithm. This both speeds up training and provides a feasible
path to hardware architectures. After describing our method, we compare the
performance of the two neural decoding algorithms and show that our method
achieves error-correction performance within 0.1 dB of the multiplicative
approach and as much as 1 dB better than traditional belief propagation for the
codes under consideration.
| Loren Lugosch, Warren J. Gross | null | 1701.05931 | null | null |
Learning Policies for Markov Decision Processes from Data | math.OC cs.LG stat.ML | We consider the problem of learning a policy for a Markov decision process
consistent with data captured on the state-action pairs followed by the
policy. We assume that the policy belongs to a class of parameterized policies
which are defined using features associated with the state-action pairs. The
features are known a priori; however, only an unknown subset of them may be
relevant. The policy parameters that correspond to an observed target policy
are recovered using $\ell_1$-regularized logistic regression that best fits the
observed state-action samples. We establish bounds on the difference between
the average reward of the estimated and the original policy (regret) in terms
of the generalization error and the ergodic coefficient of the underlying
Markov chain. To that end, we combine sample complexity theory and sensitivity
analysis of the stationary distribution of Markov chains. Our analysis suggests
that to achieve regret within order $O(\sqrt{\epsilon})$, it suffices to use
training sample size on the order of $\Omega(\log n \cdot poly(1/\epsilon))$,
where $n$ is the number of the features. We demonstrate the effectiveness of
our method on a synthetic robot navigation example.
| Manjesh K. Hanawal, Hao Liu, Henghui Zhu, Ioannis Ch. Paschalidis | null | 1701.05954 | null | null |
Label Propagation on K-partite Graphs with Heterophily | cs.LG cs.AI cs.SI | In this paper, for the first time, we study label propagation in
heterogeneous graphs under the heterophily assumption. Homophily label
propagation (i.e., two connected nodes share similar labels) in homogeneous
graphs (with the same types of vertices and relations) has been extensively
studied before.
Unfortunately, real-life networks are heterogeneous: they contain different
types of vertices (e.g., users, images, texts) and relations (e.g.,
friendships, co-tagging) and allow each node to propagate both the same and
the opposite copy of labels to its neighbors. We propose a $\mathcal{K}$-partite
label propagation model to handle the mystifying combination of heterogeneous
nodes/relations and heterophily propagation. With this model, we develop a
novel label inference algorithm framework with update rules in near-linear time
complexity. Since real networks change over time, we devise an incremental
approach, which supports fast updates for both new data and evidence (e.g.,
ground truth labels) with guaranteed efficiency. We further provide a utility
function to automatically determine whether an incremental or a re-modeling
approach is favored. Extensive experiments on real datasets have verified the
effectiveness and efficiency of our approach, and its superiority over the
state-of-the-art label propagation methods.
| Dingxiong Deng, Fan Bai, Yiqi Tang, Shuigeng Zhou, Cyrus Shahabi,
Linhong Zhu | null | 1701.06075 | null | null |
Lyrics-to-Audio Alignment by Unsupervised Discovery of Repetitive
Patterns in Vowel Acoustics | cs.SD cs.AI cs.IR cs.LG eess.AS | Most of the previous approaches to lyrics-to-audio alignment used a
pre-developed automatic speech recognition (ASR) system that innately suffered
from several difficulties in adapting the speech model to individual singers. A
significant aspect missing in previous works is the self-learnability of
repetitive vowel patterns in the singing voice, where the vowel part is
more consistent than the consonant part. Based on this, our system first learns
a discriminative subspace of vowel sequences, based on weighted symmetric
non-negative matrix factorization (WS-NMF), by taking the self-similarity of a
standard acoustic feature as an input. Then, we make use of canonical time
warping (CTW), derived from a recent computer vision technique, to find an
optimal spatiotemporal transformation between the text and the acoustic
sequences. Experiments with Korean and English data sets showed that deploying
this method after a pre-developed, unsupervised singing source separation
achieved more promising results than other state-of-the-art unsupervised
approaches and an existing ASR-based system.
| Sungkyun Chang, Kyogu Lee | 10.1109/ACCESS.2017.2738558 | 1701.06078 | null | null |
Neurogenesis-Inspired Dictionary Learning: Online Model Adaption in a
Changing World | cs.LG cs.AI cs.CV cs.NE stat.ML | In this paper, we focus on online representation learning in non-stationary
environments which may require continuous adaptation of model architecture. We
propose a novel online dictionary-learning (sparse-coding) framework which
incorporates the addition and deletion of hidden units (dictionary elements),
and is inspired by the adult neurogenesis phenomenon in the dentate gyrus of
the hippocampus, known to be associated with improved cognitive function and
adaptation to new environments. In the online learning setting, where new input
instances arrive sequentially in batches, the neuronal-birth is implemented by
adding new units with random initial weights (random dictionary elements); the
number of new units is determined by the current performance (representation
error) of the dictionary, with higher error causing an increase in the birth rate.
Neuronal-death is implemented by imposing l1/l2-regularization (group sparsity)
on the dictionary within the block-coordinate descent optimization at each
iteration of our online alternating minimization scheme, which iterates between
the code and dictionary updates. Finally, hidden unit connectivity adaptation
is facilitated by introducing sparsity in dictionary elements. Our empirical
evaluation on several real-life datasets (images and language) as well as on
synthetic data demonstrates that the proposed approach can considerably
outperform the state-of-the-art fixed-size (nonadaptive) online sparse coding of
Mairal et al. (2009) in the presence of nonstationary data. Moreover, we
identify certain properties of the data (e.g., sparse inputs with nearly
non-overlapping supports) and of the model (e.g., dictionary sparsity)
associated with such improvements.
| Sahil Garg, Irina Rish, Guillermo Cecchi, Aurelie Lozano | null | 1701.06106 | null | null |
Effective and Extensible Feature Extraction Method Using Genetic
Algorithm-Based Frequency-Domain Feature Search for Epileptic EEG
Multi-classification | cs.LG cs.IT math.IT stat.ML | In this paper, a genetic algorithm-based frequency-domain feature search
(GAFDS) method is proposed for the electroencephalogram (EEG) analysis of
epilepsy. In this method, frequency-domain features are first searched and then
combined with nonlinear features. Subsequently, these features are selected and
optimized to classify EEG signals. The extracted features are analyzed
experimentally. The features extracted by GAFDS show remarkable independence,
and they are superior to the nonlinear features in terms of the ratio of
inter-class distance and intra-class distance. Moreover, the proposed feature
search method can additionally search for features of instantaneous frequency
in a signal after Hilbert transformation. The classification results achieved
using these features are reasonable; thus, GAFDS exhibits good extensibility.
Multiple classic classifiers (i.e., $k$-nearest neighbor, linear discriminant
analysis, decision tree, AdaBoost, multilayer perceptron, and Na\"ive Bayes)
achieve good results by using the features generated by the GAFDS method and the
optimized selection. Specifically, the accuracies for the two-classification
and three-classification problems may reach up to 99% and 97%, respectively.
Results of several cross-validation experiments illustrate that GAFDS is
effective in feature extraction for EEG classification. Therefore, the proposed
feature selection and optimization model can improve classification accuracy.
| Tingxi Wen, Zhongnan Zhang | null | 1701.0612 | null | null |
Optimization on Product Submanifolds of Convolution Kernels | cs.CV cs.LG cs.NE | Recent advances in optimization methods used for training convolutional
neural networks (CNNs) with kernels, which are normalized according to
particular constraints, have shown remarkable success. This work introduces an
approach for training CNNs using ensembles of joint spaces of kernels
constructed using different constraints. For this purpose, we address a problem
of optimization on ensembles of products of submanifolds (PEMs) of convolution
kernels. To this end, we first propose three strategies to construct ensembles
of PEMs in CNNs. Next, we expound their geometric properties (metric and
curvature properties) in CNNs. We make use of our theoretical results by
developing a geometry-aware SGD algorithm (G-SGD) for optimization on ensembles
of PEMs to train CNNs. Moreover, we analyze convergence properties of G-SGD
considering geometric properties of PEMs. In the experimental analyses, we
employ G-SGD to train CNNs on Cifar-10, Cifar-100 and Imagenet datasets. The
results show that geometric adaptive step size computation methods of G-SGD can
improve training loss and convergence properties of CNNs. Moreover, we observe
that classification performance of baseline CNNs can be boosted using G-SGD on
ensembles of PEMs identified by multiple constraints.
| Mete Ozay, Takayuki Okatani | null | 1701.06123 | null | null |
Predicting Demographics of High-Resolution Geographies with Geotagged
Tweets | cs.LG cs.SI stat.ML | In this paper, we consider the problem of predicting demographics of
geographic units given geotagged Tweets that are composed within these units.
Traditional survey methods that offer demographics estimates are usually
limited in terms of geographic resolution, geographic boundaries, and time
intervals. Thus, it would be highly useful to develop computational methods
that can complement traditional survey methods by offering demographics
estimates at finer geographic resolutions, with flexible geographic boundaries
(i.e. not confined to administrative boundaries), and at different time
intervals. While prior work has focused on predicting demographics and health
statistics at relatively coarse geographic resolutions such as the county-level
or state-level, we introduce an approach to predict demographics at finer
geographic resolutions such as the blockgroup-level. For the task of predicting
gender and race/ethnicity counts at the blockgroup-level, an approach adapted
from prior work to our problem achieves an average correlation of 0.389
(gender) and 0.569 (race) on a held-out test dataset. Our approach outperforms
this prior approach with an average correlation of 0.671 (gender) and 0.692
(race).
| Omar Montasser and Daniel Kifer | null | 1701.06225 | null | null |
What the Language You Tweet Says About Your Occupation | cs.CY cs.AI cs.CL cs.LG | Many aspects of people's lives have been shown to be deeply connected to their
jobs. In this paper, we first investigate the distinct characteristics of major
occupation categories based on tweets. From multiple social media platforms, we
gather several types of user information. From users' LinkedIn webpages, we
learn their proficiencies. To overcome the ambiguity of self-reported
information, a soft clustering approach is applied to extract occupations from
crowd-sourced data. Eight job categories are extracted, including Marketing,
Administrator, Start-up, Editor, Software Engineer, Public Relation, Office
Clerk, and Designer. Meanwhile, users' posts on Twitter provide cues for
understanding their linguistic styles, interests, and personalities. Our
results suggest that people of different jobs have unique tendencies in certain
language styles and interests. Our results also clearly reveal distinctive
levels in terms of Big Five Traits for different jobs. Finally, a classifier is
built to predict job types based on the features extracted from tweets. The high
accuracy indicates the strong discriminative power of language features for the
job prediction task.
| Tianran Hu, Haoyuan Xiao, Thuy-vy Thi Nguyen, Jiebo Luo | null | 1701.06233 | null | null |
A Multichannel Convolutional Neural Network For Cross-language Dialog
State Tracking | cs.CL cs.AI cs.LG | The fifth Dialog State Tracking Challenge (DSTC5) introduces a new
cross-language dialog state tracking scenario, where the participants are asked
to build their trackers based on the English training corpus, while evaluating
them with the unlabeled Chinese corpus. Although the computer-generated
translations for both the English and Chinese corpora are provided in the dataset,
these translations contain errors and careless use of them can easily hurt the
performance of the built trackers. To address this problem, we propose a
multichannel Convolutional Neural Networks (CNN) architecture, in which we
treat the English and Chinese languages as different input channels of one single
CNN model. In the evaluation of DSTC5, we found that such multichannel
architecture can effectively improve the robustness against translation errors.
Additionally, our method for DSTC5 is purely machine learning based and
requires no prior knowledge about the target language. We consider this a
desirable property for building a tracker in the cross-language context, as not
every developer will be familiar with both languages.
| Hongjie Shi, Takashi Ushio, Mitsuru Endo, Katsuyoshi Yamagami, Noriaki
Horii | null | 1701.06247 | null | null |
dna2vec: Consistent vector representations of variable-length k-mers | q-bio.QM cs.CL cs.LG stat.ML | One ubiquitous representation of a long DNA sequence is dividing it into
shorter k-mer components. Unfortunately, the straightforward vector encoding of a
k-mer as a one-hot vector is vulnerable to the curse of dimensionality. Worse
yet, any pair of one-hot vectors is equidistant. This is
particularly problematic when applying the latest machine learning algorithms
to solve problems in biological sequence analysis. In this paper, we propose a
novel method to train distributed representations of variable-length k-mers.
Our method is based on the popular word embedding model word2vec, which is
trained on a shallow two-layer neural network. Our experiments provide evidence
that the summing of dna2vec vectors is akin to nucleotides concatenation. We
also demonstrate that there is a correlation between the Needleman-Wunsch
similarity score and the cosine similarity of dna2vec vectors.
| Patrick Ng | null | 1701.06279 | null | null |
Comparative study on supervised learning methods for identifying
phytoplankton species | stat.ML cs.LG | Phytoplankton plays an important role in marine ecosystems. It is used as a
biological factor to assess marine quality. The identification of
phytoplankton species has a high potential for monitoring environmental and
climate changes and for evaluating water quality. However, phytoplankton
species identification is
not an easy task owing to their variability and ambiguity due to thousands of
micro and pico-plankton species. Therefore, the aim of this paper is to build a
framework for identifying phytoplankton species and to perform a comparison on
different feature types and classifiers. We propose a new feature type
extracted from the raw signals of phytoplankton species. We then analyze the
performance of various classifiers on the proposed feature type as well as two
other feature types to find the most robust one. Through experiments, it is
found that a Random Forest using the proposed features gives the best
classification results with average accuracy up to 98.24%.
| Thi-Thu-Hong Phan (LISIC), Emilie Poisson Caillault (LISIC), Andr\'e
Bigand (LISIC) | 10.1109/CCE.2016.7562650 | 1701.06421 | null | null |
Learning what to look in chest X-rays with a recurrent visual attention
model | stat.ML cs.CV cs.LG | X-rays are commonly performed imaging tests that use small amounts of
radiation to produce pictures of the organs, tissues, and bones of the body.
X-rays of the chest are used to detect abnormalities or diseases of the
airways, blood vessels, bones, heart, and lungs. In this work we present a
stochastic attention-based model that is capable of learning what regions
within a chest X-ray scan should be visually explored in order to conclude that
the scan contains a specific radiological abnormality. The proposed model is a
recurrent neural network (RNN) that learns to sequentially sample the entire
X-ray and focus only on informative areas that are likely to contain the
relevant information. We report on experiments carried out with more than
$100,000$ X-rays containing enlarged hearts or medical devices. The model has
been trained using reinforcement learning methods to learn task-specific
policies.
| Petros-Pavlos Ypsilantis and Giovanni Montana | null | 1701.06452 | null | null |
Aggressive Sampling for Multi-class to Binary Reduction with
Applications to Text Classification | stat.ML cs.LG | We address the problem of multi-class classification in the case where the
number of classes is very large. We propose a double sampling strategy on top
of a multi-class to binary reduction strategy, which transforms the original
multi-class problem into a binary classification problem over pairs of
examples. The aim of the sampling strategy is to overcome the curse of
long-tailed class distributions exhibited in the majority of large-scale
multi-class classification problems and to reduce the number of pairs of
examples in the expanded data. We show that this strategy does not alter the
consistency of the empirical risk minimization principle defined over the
double sample reduction. Experiments are carried out on DMOZ and Wikipedia
collections with 10,000 to 100,000 classes where we show the efficiency of the
proposed approach in terms of training and prediction time, memory consumption,
and predictive performance with respect to state-of-the-art approaches.
| Bikash Joshi, Massih-Reza Amini, Ioannis Partalas, Franck Iutzeler,
Yury Maximov | null | 1701.06511 | null | null |
ENIGMA: Efficient Learning-based Inference Guiding Machine | cs.LO cs.AI cs.LG | ENIGMA is a learning-based method for guiding given clause selection in
saturation-based theorem provers. Clauses from many proof searches are
classified as positive and negative based on their participation in the proofs.
An efficient classification model is trained on this data, using fast
feature-based characterization of the clauses. The learned model is then
tightly linked with the core prover and used as a basis of a new parameterized
evaluation heuristic that provides fast ranking of all generated clauses. The
approach is evaluated on the E prover and the CASC 2016 AIM benchmark, showing
a large increase of E's performance.
| Jan Jakub\r{u}v, Josef Urban | null | 1701.06532 | null | null |
Outrageously Large Neural Networks: The Sparsely-Gated
Mixture-of-Experts Layer | cs.LG cs.CL cs.NE stat.ML | The capacity of a neural network to absorb information is limited by its
number of parameters. Conditional computation, where parts of the network are
active on a per-example basis, has been proposed in theory as a way of
dramatically increasing model capacity without a proportional increase in
computation. In practice, however, there are significant algorithmic and
performance challenges. In this work, we address these challenges and finally
realize the promise of conditional computation, achieving greater than 1000x
improvements in model capacity with only minor losses in computational
efficiency on modern GPU clusters. We introduce a Sparsely-Gated
Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward
sub-networks. A trainable gating network determines a sparse combination of
these experts to use for each example. We apply the MoE to the tasks of
language modeling and machine translation, where model capacity is critical for
absorbing the vast quantities of knowledge available in the training corpora.
We present model architectures in which a MoE with up to 137 billion parameters
is applied convolutionally between stacked LSTM layers. On large language
modeling and machine translation benchmarks, these models achieve significantly
better results than state-of-the-art at lower computational cost.
| Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc
Le, Geoffrey Hinton, Jeff Dean | null | 1701.06538 | null | null |
Regularizing Neural Networks by Penalizing Confident Output
Distributions | cs.NE cs.LG | We systematically explore regularizing neural networks by penalizing low
entropy output distributions. We show that penalizing low entropy output
distributions, which has been shown to improve exploration in reinforcement
learning, acts as a strong regularizer in supervised learning. Furthermore, we
connect a maximum entropy based confidence penalty to label smoothing through
the direction of the KL divergence. We exhaustively evaluate the proposed
confidence penalty and label smoothing on 6 common benchmarks: image
classification (MNIST and Cifar-10), language modeling (Penn Treebank), machine
translation (WMT'14 English-to-German), and speech recognition (TIMIT and WSJ).
We find that both label smoothing and the confidence penalty improve
state-of-the-art models across benchmarks without modifying existing
hyperparameters, suggesting the wide applicability of these regularizers.
| Gabriel Pereyra, George Tucker, Jan Chorowski, {\L}ukasz Kaiser,
Geoffrey Hinton | null | 1701.06548 | null | null |
On the Parametric Study of Lubricating Oil Production using an
Artificial Neural Network (ANN) Approach | cs.LG | In this study, an Artificial Neural Network (ANN) approach is utilized to
perform a parametric study on the process of extraction of lubricants from
heavy petroleum cuts. To train the model, we used field data collected from an
industrial plant. Operational conditions of feed and solvent flow rate,
Temperature of streams and mixing rate were considered as the input to the
model, whereas the flow rate of the main product was considered as the output
of the ANN model. A feed-forward Multi-Layer Perceptron Neural Network was
successfully applied to capture the relationship between inputs and output
parameters.
| Masood Tehrani and Mary Ahmadi | null | 1701.06551 | null | null |
Identifying Nonlinear 1-Step Causal Influences in Presence of Latent
Variables | cs.IT cs.LG math.IT stat.ME | We propose an approach for learning the causal structure in stochastic
dynamical systems with a $1$-step functional dependency in the presence of
latent variables. We propose an information-theoretic approach that allows us
to recover the causal relations among the observed variables as long as the
latent variables evolve without exogenous noise. We further propose an
efficient learning method based on linear regression for the special sub-case
when the dynamics are restricted to be linear. We validate the performance of
our approach via numerical simulations.
| Saber Salehkaleybar and Jalal Etesami and Negar Kiyavash | null | 1701.06605 | null | null |
Revenue Forecasting for Enterprise Products | q-fin.GN cs.LG | For any business, planning is a continuous process, and typically
business-owners focus on making both long-term planning aligned with a
particular strategy as well as short-term planning that accommodates the
dynamic market situations. The ability to perform an accurate financial forecast
is crucial for effective planning. In this paper, we focus on providing an
intelligent and efficient solution that will help in forecasting revenue using
machine learning algorithms. We experiment with three different revenue
forecasting models, and here we provide detailed insights into the methodology
and their relative performance measured on real finance data. As a real-world
application of our models, we partner with Microsoft's Finance organization
(department that reports Microsoft's finances) to provide them guidance on
the projected revenue for upcoming quarters.
| Amita Gajewar, Gagan Bansal | null | 1701.06624 | null | null |
Convex Parameterizations and Fidelity Bounds for Nonlinear
Identification and Reduced-Order Modelling | cs.SY cs.LG math.OC | Model instability and poor prediction of long-term behavior are common
problems when modeling dynamical systems using nonlinear "black-box"
techniques. Direct optimization of the long-term predictions, often called
simulation error minimization, leads to optimization problems that are
generally non-convex in the model parameters and suffer from multiple local
minima. In this work we present methods which address these problems through
convex optimization, based on Lagrangian relaxation, dissipation inequalities,
contraction theory, and semidefinite programming. We demonstrate the proposed
methods with a model order reduction task for electronic circuit design and the
identification of a pneumatic actuator from experiment.
| Mark M. Tobenkin and Ian R. Manchester and Alexandre Megretski | null | 1701.06652 | null | null |
Patchwork Kriging for Large-scale Gaussian Process Regression | cs.LG stat.ML | This paper presents a new approach for Gaussian process (GP) regression for
large datasets. The approach involves partitioning the regression input domain
into multiple local regions with a different local GP model fitted in each
region. Unlike existing local partitioned GP approaches, we introduce a
technique for patching together the local GP models nearly seamlessly to ensure
that the local GP models for two neighboring regions produce nearly the same
response prediction and prediction error variance on the boundary between the
two regions. This largely mitigates the well-known discontinuity problem that
degrades the boundary accuracy of existing local partitioned GP methods. Our
main innovation is to represent the continuity conditions as additional
pseudo-observations stating that the differences between neighboring GP responses are
identically zero at an appropriately chosen set of boundary input locations. To
predict the response at any input location, we simply augment the actual
response observations with the pseudo-observations and apply standard GP
prediction methods to the augmented data. In contrast to heuristic continuity
adjustments, this has an advantage of working within a formal GP framework, so
that the GP-based predictive uncertainty quantification remains valid. Our
approach also inherits a sparse block-like structure for the sample covariance
matrix, which results in computationally efficient closed-form expressions for
the predictive mean and variance. In addition, we provide a new spatial
partitioning scheme based on a recursive space partitioning along local
principal component directions, which makes the proposed approach applicable
for regression domains having more than two dimensions. Using three spatial
datasets and three higher dimensional datasets, we investigate the numerical
performance of the approach and compare it to several state-of-the-art
approaches.
| Chiwoo Park and Daniel Apley | null | 1701.06655 | null | null |
A Contextual Bandit Approach for Stream-Based Active Learning | cs.LG | Contextual bandit algorithms -- a class of multi-armed bandit algorithms that
exploit the contextual information -- have been shown to be effective in
solving sequential decision making problems under uncertainty. A common
assumption adopted in the literature is that the realized (ground truth) reward
by taking the selected action is observed by the learner at no cost, which,
however, is not realistic in many practical scenarios. When observing the
ground truth reward is costly, a key challenge for the learner is how to
judiciously acquire the ground truth by assessing the benefits and costs in
order to balance learning efficiency and learning cost. From the information
theoretic perspective, a perhaps even more interesting question is how much
efficiency might be lost due to this cost. In this paper, we design a novel
contextual bandit-based learning algorithm and endow it with the active
learning capability. The key feature of our algorithm is that in addition to
sending a query to an annotator for the ground truth, prior information about
the ground truth learned by the learner is sent together, thereby reducing the
query cost. We prove that by carefully choosing the algorithm parameters, the
learning regret of the proposed algorithm achieves the same order as that of
conventional contextual bandit algorithms in cost-free scenarios, implying
that, surprisingly, the cost of acquiring the ground truth does not increase
the learning regret in the long run. Our analysis shows that prior information
about the ground truth plays a critical role in improving the system
performance in scenarios where active learning is necessary.
| Linqi Song and Jie Xu | null | 1701.06725 | null | null |
Collective Vertex Classification Using Recursive Neural Network | cs.LG cs.SI | Collective classification of vertices is a task of assigning categories to
each vertex in a graph based on both vertex attributes and link structure.
Nevertheless, some existing approaches do not use the features of neighbouring
vertices properly, due to the noise introduced by these features. In this
paper, we propose a graph-based recursive neural network framework for
collective vertex classification. In this framework, we generate hidden
representations from both attributes of vertices and representations of
neighbouring vertices via recursive neural networks. Under this framework, we
explore two types of recursive neural units, naive recursive neural unit and
long short-term memory unit. We have conducted experiments on four real-world
network datasets. The experimental results show that our framework with the long
short-term memory model achieves better results and outperforms several
competitive baseline methods.
| Qiongkai Xu, Qing Wang, Chenchen Xu and Lizhen Qu | null | 1701.06751 | null | null |
Discriminative Neural Topic Models | cs.LG | We propose a neural network based approach for learning topics from text and
image datasets. The model makes no assumptions about the conditional
distribution of the observed features given the latent topics. This allows us
to perform topic modelling efficiently using sentences of documents and patches
of images as observed features, rather than limiting ourselves to words.
Moreover, the proposed approach is online, and hence can be used for streaming
data. Furthermore, since the approach utilizes neural networks, it can be
implemented on GPU with ease, and hence it is very scalable.
| Gaurav Pandey and Ambedkar Dukkipati | null | 1701.06796 | null | null |
A Survey of Quantum Learning Theory | quant-ph cs.CC cs.LG | This paper surveys quantum learning theory: the theoretical aspects of
machine learning using quantum computers. We describe the main results known
for three models of learning: exact learning from membership queries, and
Probably Approximately Correct (PAC) and agnostic learning from classical or
quantum examples.
| Srinivasan Arunachalam (CWI) and Ronald de Wolf (CWI and U of
Amsterdam) | null | 1701.06806 | null | null |
Deep Network Guided Proof Search | cs.AI cs.LG cs.LO | Deep learning techniques lie at the heart of several significant AI advances
in recent years including object recognition and detection, image captioning,
machine translation, speech recognition and synthesis, and playing the game of
Go. Automated first-order theorem provers can aid in the formalization and
verification of mathematical theorems and play a crucial role in program
analysis, theory reasoning, security, interpolation, and system verification.
Here we suggest deep learning based guidance in the proof search of the theorem
prover E. We train and compare several deep neural network models on the traces
of existing ATP proofs of Mizar statements and use them to select processed
clauses during proof search. We give experimental evidence that with a hybrid,
two-phase approach, deep learning based guidance can significantly reduce the
average number of proof search steps while increasing the number of theorems
proved. Using a few proof guidance strategies that leverage deep neural
networks, we have found first-order proofs of 7.36% of the first-order logic
translations of the Mizar Mathematical Library theorems that did not previously
have ATP generated proofs. This increases the ratio of statements in the corpus
with ATP generated proofs from 56% to 59%.
| Sarah Loos, Geoffrey Irving, Christian Szegedy, Cezary Kaliszyk | null | 1701.06972 | null | null |
On the Effectiveness of Discretizing Quantitative Attributes in Linear
Classifiers | cs.LG | Learning algorithms that learn linear models often have high representation
bias on real-world problems. In this paper, we show that this representation
bias can be greatly reduced by discretization. Discretization is a common
procedure in machine learning that is used to convert a quantitative attribute
into a qualitative one. It is often motivated by the limitation of some
learners to qualitative data. Discretization loses information, as fewer
distinctions between instances are possible using discretized data relative to
undiscretized data. In consequence, where discretization is not essential, it
might appear desirable to avoid it. However, it has been shown that
discretization often substantially reduces the error of the linear generative
Bayesian classifier naive Bayes. This motivates a systematic study of the
effectiveness of discretizing quantitative attributes for other linear
classifiers. In this work, we study the effect of discretization on the
performance of linear classifiers optimizing three distinct discriminative
objective functions --- logistic regression (optimizing negative
log-likelihood), support vector classifiers (optimizing hinge loss) and a
zero-hidden layer artificial neural network (optimizing mean-square-error). We
show that discretization can greatly increase the accuracy of these linear
discriminative learners by reducing their representation bias, especially on
big datasets. We substantiate our claims with an empirical study on $42$
benchmark datasets.
| Nayyar A. Zaidi, Yang Du, Geoffrey I. Webb | null | 1701.07114 | null | null |
jsCoq: Towards Hybrid Theorem Proving Interfaces | cs.PL cs.HC cs.LG cs.LO | We describe jsCoq, a new platform and user environment for the Coq
interactive proof assistant. The jsCoq system targets the HTML5-ECMAScript 2015
specification, and it is typically run inside a standards-compliant browser,
without the need of external servers or services. Targeting educational use,
jsCoq allows the user to start interaction with proof scripts right away,
thanks to its self-contained nature. Indeed, a full Coq environment is packed
along the proof scripts, easing distribution and installation. Starting to use
jsCoq is as easy as clicking on a link. The current release ships more than 10
popular Coq libraries, and supports popular books such as Software Foundations
or Certified Programming with Dependent Types. The new target platform has
opened up new interaction and display possibilities. It has also fostered the
development of some new Coq-related technology. In particular, we have
implemented a new serialization-based protocol for interaction with the proof
assistant, as well as a new package format for library distribution.
| Emilio Jes\'us Gallego Arias (MINES ParisTech, PSL Research
University, France), Beno\^it Pin (MINES ParisTech, PSL Research University,
France), Pierre Jouvelot (MINES ParisTech, PSL Research University, France) | 10.4204/EPTCS.239.2 | 1701.07125 | null | null |
CP-decomposition with Tensor Power Method for Convolutional Neural
Networks Compression | cs.LG | Convolutional Neural Networks (CNNs) has shown a great success in many areas
including complex image classification tasks. However, they need a lot of
memory and computational cost, which hinders them from running in relatively
low-end smart devices such as smart phones. We propose a CNN compression method
based on CP-decomposition and Tensor Power Method. We also propose an iterative
fine tuning, with which we fine-tune the whole network after decomposing each
layer, but before decomposing the next layer. Significant reduction in memory
and computation cost is achieved compared to state-of-the-art previous work
with no more accuracy loss.
| Marcella Astrid and Seung-Ik Lee | null | 1701.07148 | null | null |
Personalized Classifier Ensemble Pruning Framework for Mobile
Crowdsourcing | cs.DC cs.HC cs.LG | Ensemble learning has been widely employed by mobile applications, ranging
from environmental sensing to activity recognition. One of the fundamental
issues in ensemble learning is the trade-off between classification accuracy and
computational costs, which is the goal of ensemble pruning. During
crowdsourcing, the centralized aggregator releases ensemble learning models to
a large number of mobile participants for task evaluation or as the
crowdsourcing learning results, while different participants may seek for
different levels of the accuracy-cost trade-off. However, most existing
ensemble pruning approaches consider only a single level of such
trade-off. In this study, we present an efficient ensemble pruning framework
for personalized accuracy-cost trade-offs via multi-objective optimization.
Specifically, for the commonly used linear-combination style of the trade-off,
we provide an objective-mixture optimization to further reduce the number of
ensemble candidates. Experimental results show that our framework is highly
efficient for personalized ensemble pruning, and achieves much better pruning
performance with objective-mixture optimization when compared to state-of-the-art
approaches.
| Shaowei Wang, Liusheng Huang, Pengzhan Wang, Hongli Xu, Wei Yang | null | 1701.07166 | null | null |
Malicious URL Detection using Machine Learning: A Survey | cs.LG cs.CR | Malicious URL, a.k.a. malicious website, is a common and serious threat to
cybersecurity. Malicious URLs host unsolicited content (spam, phishing,
drive-by exploits, etc.) and lure unsuspecting users to become victims of scams
(monetary loss, theft of private information, and malware installation), and
cause losses of billions of dollars every year. It is imperative to detect and
act on such threats in a timely manner. Traditionally, this detection is done
mostly through the usage of blacklists. However, blacklists cannot be
exhaustive, and lack the ability to detect newly generated malicious URLs. To
improve the generality of malicious URL detectors, machine learning techniques
have been explored with increasing attention in recent years. This article aims
to provide a comprehensive survey and a structural understanding of Malicious
URL Detection techniques using machine learning. We present the formal
formulation of Malicious URL Detection as a machine learning task, and
categorize and review the contributions of literature studies that address
different dimensions of this problem (feature representation, algorithm design,
etc.). Further, this article provides a timely and comprehensive survey for a
range of different audiences, not only for machine learning researchers and
engineers in academia, but also for professionals and practitioners in
cybersecurity industry, to help them understand the state of the art and
facilitate their own research and practical applications. We also discuss
practical issues in system design, open research challenges, and point out some
important directions for future research.
| Doyen Sahoo, Chenghao Liu, and Steven C.H. Hoi | null | 1701.07179 | null | null |
Privileged Multi-label Learning | stat.ML cs.LG | This paper presents privileged multi-label learning (PrML) to explore and
exploit the relationship between labels in multi-label learning problems. We
suggest that for each individual label, it can not only be implicitly connected
with other labels via the low-rank constraint over label predictors, but also
its performance on examples can receive the explicit comments from other labels
together acting as an \emph{Oracle teacher}. We generate privileged label
feature for each example and its individual label, and then integrate it into
the framework of low-rank based multi-label learning. The proposed algorithm
can therefore comprehensively explore and exploit label relationships by
inheriting all the merits of privileged information and low-rank constraints.
We show that PrML can be efficiently solved by dual coordinate descent
algorithm using iterative optimization strategy with cheap updates. Experiments
on benchmark datasets show that through privileged label features, the
performance can be significantly improved and PrML is superior to several
competing methods in most cases.
| Shan You, Chang Xu, Yunhe Wang, Chao Xu, Dacheng Tao | null | 1701.07194 | null | null |
Fast Exact k-Means, k-Medians and Bregman Divergence Clustering in 1D | cs.DS cs.AI cs.LG | The $k$-Means clustering problem on $n$ points is NP-Hard for any dimension
$d\ge 2$; however, for the 1D case there exist exact polynomial-time
algorithms. Previous literature reported an $O(kn^2)$ time dynamic programming
algorithm that uses $O(kn)$ space. It turns out that the problem has been
considered under a different name more than twenty years ago. We present all
the existing work that had been overlooked and compare the various solutions
theoretically. Moreover, we show how to reduce the space usage for some of
them, as well as generalize them to data structures that can quickly report an
optimal $k$-Means clustering for any $k$. Finally we also generalize all the
algorithms to work for the absolute distance and to work for any Bregman
Divergence. We complement our theoretical contributions by experiments that
compare the practical performance of the various algorithms.
| Allan Gr{\o}nlund and Kasper Green Larsen and Alexander Mathiasen and
Jesper Sindahl Nielsen and Stefan Schneider and Mingzhou Song | null | 1701.07204 | null | null |
Learn&Fuzz: Machine Learning for Input Fuzzing | cs.AI cs.CR cs.LG cs.PL cs.SE | Fuzzing consists of repeatedly testing an application with modified, or
fuzzed, inputs with the goal of finding security vulnerabilities in
input-parsing code. In this paper, we show how to automate the generation of an
input grammar suitable for input fuzzing using sample inputs and
neural-network-based statistical machine-learning techniques. We present a
detailed case study with a complex input format, namely PDF, and a large
complex security-critical parser for this format, namely, the PDF parser
embedded in Microsoft's new Edge browser. We discuss (and measure) the tension
between conflicting learning and fuzzing goals: learning wants to capture the
structure of well-formed inputs, while fuzzing wants to break that structure in
order to cover unexpected code paths and find bugs. We also present a new
algorithm for this learn&fuzz challenge which uses a learnt input probability
distribution to intelligently guide where to fuzz inputs.
| Patrice Godefroid, Hila Peleg, Rishabh Singh | null | 1701.07232 | null | null |
Decoding Epileptogenesis in a Reduced State Space | q-bio.NC cs.LG q-bio.QM | We describe here the recent results of a multidisciplinary effort to design a
biomarker that can actively and continuously decode the progressive changes in
neuronal organization leading to epilepsy, a process known as epileptogenesis.
Using an animal model of acquired epilepsy, we chronically record hippocampal
evoked potentials elicited by an auditory stimulus. Using a set of reduced
coordinates, our algorithm can identify universal smooth low-dimensional
configurations of the auditory evoked potentials that correspond to distinct
stages of epileptogenesis. We use a hidden Markov model to learn the dynamics
of the evoked potential, as it evolves along these smooth low-dimensional
subsets. We provide experimental evidence that the biomarker is able to exploit
subtle changes in the evoked potential to reliably decode the stage of
epileptogenesis and predict whether an animal will eventually recover from the
injury, or develop spontaneous seizures.
| Fran\c{c}ois G. Meyer, Alexander M. Benison, Zachariah Smith, and
Daniel S. Barth | null | 1701.07243 | null | null |
k*-Nearest Neighbors: From Global to Local | stat.ML cs.LG | The weighted k-nearest neighbors algorithm is one of the most fundamental
non-parametric methods in pattern recognition and machine learning. The
question of setting the optimal number of neighbors as well as the optimal
weights has received much attention throughout the years, nevertheless this
problem seems to have remained unsettled. In this paper we offer a simple
approach to locally weighted regression/classification, where we make the
bias-variance tradeoff explicit. Our formulation enables us to phrase a notion
of optimal weights, and to find these weights as well as the
optimal number of neighbors efficiently and adaptively, for each data point
whose value we wish to estimate. The applicability of our approach is
demonstrated on several datasets, showing superior performance over standard
locally weighted methods.
| Oren Anava, Kfir Y. Levy | null | 1701.07266 | null | null |
Deep Reinforcement Learning: An Overview | cs.LG | We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update.
| Yuxi Li | null | 1701.07274 | null | null |
Learning Light Transport the Reinforced Way | cs.LG cs.GR | We show that the equations of reinforcement learning and light transport
simulation are related integral equations. Based on this correspondence, a
scheme to learn importance while sampling path space is derived. The new
approach is demonstrated in a consistent light transport simulation algorithm
that uses reinforcement learning to progressively learn where light comes from.
As using this information for importance sampling includes information about
visibility, too, the number of light transport paths with zero contribution is
dramatically reduced, resulting in much less noisy images within a fixed time
budget.
| Ken Dahm and Alexander Keller | null | 1701.07403 | null | null |
A Convex Similarity Index for Sparse Recovery of Missing Image Samples | cs.LG stat.ML | This paper investigates the problem of recovering missing samples using
methods based on sparse representation adapted especially for image signals.
Instead of $l_2$-norm or Mean Square Error (MSE), a new perceptual quality
measure is used as the similarity criterion between the original and the
reconstructed images. The proposed criterion called Convex SIMilarity (CSIM)
index is a modified version of the Structural SIMilarity (SSIM) index, which
despite its predecessor, is convex and uni-modal. We derive mathematical
properties for the proposed index and show how to optimally choose the
parameters of the proposed criterion, investigating the Restricted Isometry
(RIP) and error-sensitivity properties. We also propose an iterative sparse
recovery method based on a constrained $l_1$-norm minimization problem,
incorporating CSIM as the fidelity criterion. The resulting convex optimization
problem is solved via an algorithm based on Alternating Direction Method of
Multipliers (ADMM). Taking advantage of the convexity of the CSIM index, we
also prove the convergence of the algorithm to the globally optimal solution of
the proposed optimization problem, starting from any arbitrary point.
Simulation results confirm the performance of the new similarity index as well
as the proposed algorithm for missing sample recovery of image patch signals.
| Amirhossein Javaheri, Hadi Zayyani and Farokh Marvasti | null | 1701.07422 | null | null |
Robust mixture of experts modeling using the $t$ distribution | stat.ME cs.LG stat.ML | Mixture of Experts (MoE) is a popular framework for modeling heterogeneity in
data for regression, classification, and clustering. For regression and cluster
analyses of continuous data, MoE models usually use normal experts following the
Gaussian distribution. However, for a set of data containing a group or groups
of observations with heavy tails or atypical observations, the use of normal
experts is unsuitable and can unduly affect the fit of the MoE model. We
introduce a robust MoE modeling using the $t$ distribution. The proposed $t$
MoE (TMoE) deals with these issues regarding heavy-tailed and noisy data. We
develop a dedicated expectation-maximization (EM) algorithm to estimate the
parameters of the proposed model by monotonically maximizing the observed data
log-likelihood. We describe how the presented model can be used in prediction
and in model-based clustering of regression data. The proposed model is
validated on numerical experiments carried out on simulated data, which show
the effectiveness and the robustness of the proposed model in terms of modeling
non-linear regression functions as well as in model-based clustering. Then, it
is applied to the real-world data of tone perception for musical data analysis,
and the one of temperature anomalies for the analysis of climate change data.
The obtained results show the usefulness of the TMoE model for practical
applications.
| Faicel Chamroukhi | 10.1016/j.neunet.2016.03.002 | 1701.07429 | null | null |
Exploiting Convolutional Neural Network for Risk Prediction with Medical
Feature Embedding | cs.LG stat.ML | The widespread availability of electronic health records (EHRs) promises to
usher in the era of personalized medicine. However, the problem of extracting
useful clinical representations from longitudinal EHR data remains challenging.
In this paper, we explore deep neural network models with learned medical
feature embedding to deal with the problems of high dimensionality and
temporality. Specifically, we use a multi-layer convolutional neural network
(CNN) to parameterize the model, which is thus able to capture complex non-linear
longitudinal evolution of EHRs. Our model can effectively capture local/short
temporal dependency in EHRs, which is beneficial for risk prediction. To
account for high dimensionality, we use the embedding medical features in the
CNN model which hold the natural medical concepts. Our initial experiments
produce promising results and demonstrate the effectiveness of both the medical
feature embedding and the proposed convolutional neural network in risk
prediction on cohorts of congestive heart failure and diabetes patients
compared with several strong baselines.
| Zhengping Che, Yu Cheng, Zhaonan Sun, Yan Liu | null | 1701.07474 | null | null |
A Model-based Projection Technique for Segmenting Customers | stat.ME cs.LG stat.AP stat.ML | We consider the problem of segmenting a large population of customers into
non-overlapping groups with similar preferences, using diverse preference
observations such as purchases, ratings, clicks, etc. over subsets of items. We
focus on the setting where the universe of items is large (ranging from
thousands to millions) and unstructured (lacking well-defined attributes) and
each customer provides observations for only a few items. These data
characteristics limit the applicability of existing techniques in marketing and
machine learning. To overcome these limitations, we propose a model-based
projection technique, which transforms the diverse set of observations into a
more comparable scale and deals with missing data by projecting the transformed
data onto a low-dimensional space. We then cluster the projected data to obtain
the customer segments. Theoretically, we derive precise necessary and
sufficient conditions that guarantee asymptotic recovery of the true customer
segments. Empirically, we demonstrate the speed and performance of our method
in two real-world case studies: (a) 84% improvement in the accuracy of new
movie recommendations on the MovieLens data set and (b) 6% improvement in the
performance of similar item recommendations algorithm on an offline dataset at
eBay. We show that our method outperforms standard latent-class and
demographic-based techniques.
| Srikanth Jagabathula, Lakshminarayanan Subramanian, Ashwin
Venkataraman | null | 1701.07483 | null | null |
FPGA Architecture for Deep Learning and its application to Planetary
Robotics | cs.LG astro-ph.IM cs.RO | Autonomous control systems onboard planetary rovers and spacecraft benefit
from having cognitive capabilities like learning so that they can adapt to
unexpected situations in-situ. Q-learning is a form of reinforcement learning
and it has been efficient in solving certain class of learning problems.
However, embedded systems onboard planetary rovers and spacecraft rarely
implement learning algorithms due to the constraints faced in the field, like
processing power, chip size, convergence rate and costs due to the need for
radiation hardening. These challenges present a compelling need for a portable,
low-power, area efficient hardware accelerator to make learning algorithms
practical onboard space hardware. This paper presents an FPGA implementation of
Q-learning with Artificial Neural Networks (ANN). This method matches the
massive parallelism inherent in neural network software with the fine-grain
parallelism of FPGA hardware, thereby dramatically reducing processing time.
Mars Science Laboratory currently uses Xilinx-Space-grade Virtex FPGA devices
for image processing, pyrotechnic operation control and obstacle avoidance. We
simulate and program our architecture on a Xilinx Virtex 7 FPGA. The
architectural implementation for a single neuron Q-learning and a more complex
Multilayer Perceptron (MLP) Q-learning accelerator has been demonstrated. The
results show up to a 43-fold speed up by Virtex 7 FPGAs compared to a
conventional Intel i5 2.3 GHz CPU. Finally, we simulate the proposed
architecture using the Symphony simulator and compiler from Xilinx, and
evaluate the performance and power consumption.
| Pranay Gankidi and Jekan Thangavelautham | null | 1701.07543 | null | null |
Dynamic Regret of Strongly Adaptive Methods | cs.LG | To cope with changing environments, recent developments in online learning
have introduced the concepts of adaptive regret and dynamic regret
independently. In this paper, we illustrate an intrinsic connection between
these two concepts by showing that the dynamic regret can be expressed in terms
of the adaptive regret and the functional variation. This observation implies
that strongly adaptive algorithms can be directly leveraged to minimize the
dynamic regret. As a result, we present a series of strongly adaptive
algorithms that have small dynamic regrets for convex functions, exponentially
concave functions, and strongly convex functions, respectively. To the best of
our knowledge, this is the first time that exponential concavity is utilized to
upper bound the dynamic regret. Moreover, all of those adaptive algorithms do
not need any prior knowledge of the functional variation, which is a
significant advantage over previous specialized methods for minimizing dynamic
regret.
| Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou | null | 1701.0757 | null | null |
Fast and Accurate Time Series Classification with WEASEL | cs.DS cs.LG stat.ML | Time series (TS) occur in many scientific and commercial applications,
ranging from earth surveillance to industry automation to smart grids. An
important type of TS analysis is classification, which can, for instance,
improve energy load forecasting in smart grids by detecting the types of
electronic devices based on their energy consumption profiles recorded by
automatic sensors. Such sensor-driven applications are very often characterized
by (a) very long TS and (b) very large TS datasets needing classification.
However, current methods for time series classification (TSC) cannot cope with
such data volumes at acceptable accuracy; they are either scalable but offer
only inferior classification quality, or they achieve state-of-the-art
classification quality but cannot scale to large data volumes.
In this paper, we present WEASEL (Word ExtrAction for time SEries
cLassification), a novel TSC method which is both scalable and accurate. Like
other state-of-the-art TSC methods, WEASEL transforms time series into feature
vectors, using a sliding-window approach, which are then analyzed through a
machine learning classifier. The novelty of WEASEL lies in its specific method
for deriving features, resulting in a much smaller yet much more discriminative
feature set. On the popular UCR benchmark of 85 TS datasets, WEASEL is more
accurate than the best current non-ensemble algorithms at orders-of-magnitude
lower classification and training times, and it is almost as accurate as
ensemble classifiers, whose computational complexity makes them inapplicable
even for mid-size datasets. The outstanding robustness of WEASEL is also
confirmed by experiments on two real smart grid datasets, where it
out-of-the-box achieves almost the same accuracy as highly tuned,
domain-specific methods.
| Patrick Sch\"afer and Ulf Leser | 10.1145/3132847.3132980 | 1701.07681 | null | null |
Theoretical Foundations of Forward Feature Selection Methods based on
Mutual Information | stat.ML cs.LG | Feature selection problems arise in a variety of applications, such as
microarray analysis, clinical prediction, text categorization, image
classification and face recognition, multi-label learning, and classification
of internet traffic. Among the various classes of methods, forward feature
selection methods based on mutual information have become very popular and are
widely used in practice. However, comparative evaluations of these methods have
been limited by being based on specific datasets and classifiers. In this
paper, we develop a theoretical framework that allows evaluating the methods
based on their theoretical properties. Our framework is grounded on the
properties of the target objective function that the methods try to
approximate, and on a novel categorization of features, according to their
contribution to the explanation of the class; we derive upper and lower bounds
for the target objective function and relate these bounds with the feature
types. Then, we characterize the types of approximations taken by the methods,
and analyze how these approximations cope with the good properties of the
target objective function. Additionally, we develop a distributional setting
designed to illustrate the various deficiencies of the methods, and provide
several examples of wrong feature selections. Based on our work, we identify
clearly the methods that should be avoided, and the methods that currently have
the best performance.
| Francisco Macedo and M. Ros\'ario Oliveira and Ant\'onio Pacheco and
Rui Valadas | null | 1701.07761 | null | null |
Riemannian-geometry-based modeling and clustering of network-wide
non-stationary time series: The brain-network case | cs.LG stat.ML | This paper advocates Riemannian multi-manifold modeling in the context of
network-wide non-stationary time-series analysis. Time-series data, collected
sequentially over time and across a network, yield features which are viewed as
points in or close to a union of multiple submanifolds of a Riemannian
manifold, and distinguishing disparate time series amounts to clustering
multiple Riemannian submanifolds. To support the claim that exploiting the
latent Riemannian geometry behind many statistical features of time series is
beneficial to learning from network data, this paper focuses on brain networks
and puts forth two feature-generation schemes for network-wide dynamic time
series. The first is motivated by Granger-causality arguments and uses an
auto-regressive moving average model to map low-rank linear vector subspaces,
spanned by column vectors of appropriately defined observability matrices, to
points into the Grassmann manifold. The second utilizes (non-linear)
dependencies among network nodes by introducing kernel-based partial
correlations to generate points in the manifold of positive-definite matrices.
Capitalizing on recently developed research on clustering Riemannian
submanifolds, an algorithm is provided for distinguishing time series based on
their geometrical properties, revealed within Riemannian feature spaces.
Extensive numerical tests demonstrate that the proposed framework outperforms
classical and state-of-the-art techniques in clustering brain-network
states/structures hidden beneath synthetic fMRI time series and brain-activity
signals generated from real brain-network structural connectivity matrices.
| Konstantinos Slavakis and Shiva Salsabilian and David S. Wack and
Sarah F. Muldoon and Henry E. Baidoo-Williams and Jean M. Vettel and Matthew
Cieslak and Scott T. Grafton | null | 1701.07767 | null | null |
Linear convergence of SDCA in statistical estimation | stat.ML cs.LG | In this paper, we consider stochastic dual coordinate ascent (SDCA) {\em without}
strongly convex assumption or convex assumption. We show that SDCA converges
linearly under mild conditions termed restricted strong convexity. This covers
a wide array of popular statistical models including Lasso, group Lasso, and
logistic regression with $\ell_1$ regularization, corrected Lasso and linear
regression with SCAD regularizer. This significantly improves previous
convergence results on SDCA for problems that are not strongly convex. As a
by-product, we derive a dual-free form of SDCA that can handle general
regularization terms, which is of interest in its own right.
| Chao Qu, Huan Xu | null | 1701.07808 | null | null |
DroidStar: Callback Typestates for Android Classes | cs.LO cs.LG cs.PL | Event-driven programming frameworks, such as Android, are based on components
with asynchronous interfaces. The protocols for interacting with these
components can often be described by finite-state machines we dub *callback
typestates*. Callback typestates are akin to classical typestates, with the
difference that their outputs (callbacks) are produced asynchronously. While
useful, these specifications are not commonly available, because writing them
is difficult and error-prone.
Our goal is to make the task of producing callback typestates significantly
easier. We present a callback typestate assistant tool, DroidStar, that
requires only limited user interaction to produce a callback typestate. Our
approach is based on an active learning algorithm, L*. We improved the
scalability of equivalence queries (a key component of L*), thus making active
learning tractable on the Android system.
We use DroidStar to learn callback typestates for Android classes both for
cases where one is already provided by the documentation, and for cases where
the documentation is unclear. The results show that DroidStar learns callback
typestates accurately and efficiently. Moreover, in several cases, the
synthesized callback typestates uncovered surprising and undocumented
behaviors.
| Arjun Radhakrishna, Nicholas V. Lewchenko, Shawn Meier, Sergio Mover,
Krishna Chaitanya Sripada, Damien Zufferey, Bor-Yuh Evan Chang, and Pavol
\v{C}ern\'y | null | 1701.07842 | null | null |
An Empirical Analysis of Feature Engineering for Predictive Modeling | cs.LG | Machine learning models, such as neural networks, decision trees, random
forests, and gradient boosting machines, accept a feature vector, and provide a
prediction. These models learn in a supervised fashion where we provide feature
vectors mapped to the expected output. It is common practice to engineer new
features from the provided feature set. Such engineered features will either
augment or replace portions of the existing feature vector. These engineered
features are essentially calculated fields based on the values of the other
features.
Engineering such features is primarily a manual, time-consuming task.
Additionally, each type of model will respond differently to different kinds of
engineered features. This paper reports empirical research to demonstrate what
kinds of engineered features are best suited to various machine learning model
types. We provide this recommendation by generating several datasets that we
designed to benefit from a particular type of engineered feature. The
experiment demonstrates to what degree the machine learning model can
synthesize the needed feature on its own. If a model can synthesize a planned
feature, it is not necessary to provide that feature. The research demonstrated
that the studied models do indeed perform differently with various types of
engineered features.
| Jeff Heaton | 10.1109/SECON.2016.7506650 | 1701.07852 | null | null |
Wasserstein GAN | stat.ML cs.LG | We introduce a new algorithm named WGAN, an alternative to traditional GAN
training. In this new model, we show that we can improve the stability of
learning, get rid of problems like mode collapse, and provide meaningful
learning curves useful for debugging and hyperparameter searches. Furthermore,
we show that the corresponding optimization problem is sound, and provide
extensive theoretical work highlighting the deep connections to other distances
between distributions.
| Martin Arjovsky, Soumith Chintala, L\'eon Bottou | null | 1701.07875 | null | null |
Information Theoretic Limits for Linear Prediction with Graph-Structured
Sparsity | cs.LG cs.IT math.IT stat.ML | We analyze the necessary number of samples for sparse vector recovery in a
noisy linear prediction setup. This model includes problems such as linear
regression and classification. We focus on structured graph models. In
particular, we prove that the sufficient number of samples for the weighted graph
model proposed by Hegde and others is also necessary. We use Fano's
inequality on well constructed ensembles as our main tool in establishing
information theoretic lower bounds.
| Adarsh Barik, Jean Honorio, Mohit Tawarmalani | null | 1701.07895 | null | null |
The Price of Differential Privacy For Online Learning | cs.LG stat.ML | We design differentially private algorithms for the problem of online linear
optimization in the full information and bandit settings with optimal
$\tilde{O}(\sqrt{T})$ regret bounds. In the full-information setting, our
results demonstrate that $\epsilon$-differential privacy may be ensured for
free -- in particular, the regret bounds scale as
$O(\sqrt{T})+\tilde{O}\left(\frac{1}{\epsilon}\right)$. For bandit linear
optimization, and as a special case, for non-stochastic multi-armed bandits,
the proposed algorithm achieves a regret of
$\tilde{O}\left(\frac{1}{\epsilon}\sqrt{T}\right)$, while the previously known
best regret bound was
$\tilde{O}\left(\frac{1}{\epsilon}T^{\frac{2}{3}}\right)$.
| Naman Agarwal and Karan Singh | null | 1701.07953 | null | null |
Reinforced stochastic gradient descent for deep neural network learning | cs.LG cs.NE | Stochastic gradient descent (SGD) is a standard optimization method to
minimize a training error with respect to network parameters in modern neural
network learning. However, it typically suffers from proliferation of saddle
points in the high-dimensional parameter space. Therefore, it is highly
desirable to design an efficient algorithm to escape from these saddle points
and reach a parameter region of better generalization capabilities. Here, we
propose a simple extension of SGD, namely reinforced SGD, which simply adds
previous first-order gradients in a stochastic manner with a probability that
increases with learning time. As verified in a simple synthetic dataset, this
method significantly accelerates learning compared with the original SGD.
Surprisingly, it dramatically reduces over-fitting effects, even compared with
state-of-the-art adaptive learning algorithm---Adam. For a benchmark
handwritten digits dataset, the learning performance is comparable to Adam, yet
with an extra advantage of requiring one-fold less computer memory. The
reinforced SGD is also compared with SGD with fixed or adaptive momentum
parameter and Nesterov's momentum, which shows that the proposed framework is
able to reach a similar generalization accuracy with less computational costs.
Overall, our method introduces stochastic memory into gradients, which plays an
important role in understanding how gradient-based training algorithms can work
and its relationship with generalization abilities of deep networks.
| Haiping Huang and Taro Toyoizumi | null | 1701.07974 | null | null |
Modelling Competitive Sports: Bradley-Terry-\'{E}l\H{o} Models for
Supervised and On-Line Learning of Paired Competition Outcomes | stat.ML cs.LG stat.AP stat.ME | Prediction and modelling of competitive sports outcomes has received much
recent attention, especially from the Bayesian statistics and machine learning
communities. In the real world setting of outcome prediction, the seminal
\'{E}l\H{o} update still remains, after more than 50 years, a valuable baseline
which is difficult to improve upon, though in its original form it is a
heuristic and not a proper statistical "model". Mathematically, the \'{E}l\H{o}
rating system is very closely related to the Bradley-Terry models, which are
usually used in an explanatory fashion rather than in a predictive supervised
or on-line learning setting.
Exploiting this close link between these two model classes and some newly
observed similarities, we propose a new supervised learning framework with
close similarities to logistic regression, low-rank matrix completion and
neural networks. Building on it, we formulate a class of structured log-odds
models, unifying the desirable properties found in the above: supervised
probabilistic prediction of scores and wins/draws/losses, batch/epoch and
on-line learning, as well as the possibility to incorporate features in the
prediction, without having to sacrifice simplicity, parsimony of the
Bradley-Terry models, or computational efficiency of \'{E}l\H{o}'s original
approach.
We validate the structured log-odds modelling approach in synthetic
experiments and English Premier League outcomes, where the added expressivity
yields the best predictions reported in the state of the art, close to the quality
of contemporary betting odds.
| Franz J. Kir\'aly and Zhaozhi Qian | null | 1701.08055 | null | null |
Model-Free Control of Thermostatically Controlled Loads Connected to a
District Heating Network | cs.SY cs.LG | Optimal control of thermostatically controlled loads connected to a district
heating network is considered a sequential decision-making problem under
uncertainty. The practicality of a direct model-based approach is compromised
by two challenges, namely scalability due to the large dimensionality of the
problem and the system identification required to identify an accurate model.
To help in mitigating these problems, this paper leverages on recent
developments in reinforcement learning in combination with a market-based
multi-agent system to obtain a scalable solution that obtains a significant
performance improvement in a practical learning time. The control approach is
applied on a scenario comprising 100 thermostatically controlled loads
connected to a radial district heating network supplied by a central combined
heat and power plant. Both for an energy arbitrage and a peak shaving
objective, the control approach requires 60 days to obtain a performance within
65% of a theoretical lower bound on the cost.
| Bert J. Claessens, Dirk Vanhoudt, Johan Desmedt, Frederik Ruelens | null | 1701.08074 | null | null |
Faster Discovery of Faster System Configurations with Spectral Learning | cs.SE cs.LG | Despite the huge spread and economical importance of configurable software
systems, there is unsatisfactory support in utilizing the full potential of
these systems with respect to finding performance-optimal configurations. Prior
work on predicting the performance of software configurations suffered from
either (a) requiring far too many sample configurations or (b) large variances
in their predictions. Both these problems can be avoided using the WHAT
spectral learner. WHAT's innovation is the use of the spectrum (eigenvalues) of
the distance matrix between the configurations of a configurable software
system, to perform dimensionality reduction. Within that reduced configuration
space, many closely associated configurations can be studied by executing only
a few sample configurations. For the subject systems studied here, a few dozen
samples yield accurate and stable predictors - less than 10% prediction error,
with a standard deviation of less than 2%. When compared to the state of the
art, WHAT (a) requires 2 to 10 times fewer samples to achieve similar
prediction accuracies, and (b) its predictions are more stable (i.e., have
lower standard deviation). Furthermore, we demonstrate that predictive models
generated by WHAT can be used by optimizers to discover system configurations
that closely approach the optimal performance.
| Vivek Nair, Tim Menzies, Norbert Siegmund, Sven Apel | 10.1007/s1051 | 1701.08106 | null | null |
Multiclass MinMax Rank Aggregation | cs.LG cs.AI q-bio.QM stat.ML | We introduce a new family of minmax rank aggregation problems under two
distance measures, the Kendall {\tau} and the Spearman footrule. As the
problems are NP-hard, we proceed to describe a number of constant-approximation
algorithms for solving them. We conclude with illustrative applications of the
aggregation methods on the Mallows model and genomic data.
| Pan Li and Olgica Milenkovic | null | 1701.08305 | null | null |
Deep Recurrent Neural Network for Protein Function Prediction from
Sequence | q-bio.QM cs.LG q-bio.BM stat.ML | As high-throughput biological sequencing becomes faster and cheaper, the need
to extract useful information from sequencing becomes ever more paramount,
often limited by low-throughput experimental characterizations. For proteins,
accurate prediction of their functions directly from their primary amino-acid
sequences has been a long-standing challenge. Here, machine learning using
artificial recurrent neural networks (RNN) was applied towards classification
of protein function directly from primary sequence without sequence alignment,
heuristic scoring or feature engineering. The RNN models containing
long-short-term-memory (LSTM) units trained on public, annotated datasets from
UniProt achieved high performance for in-class prediction of four important
protein functions tested, particularly compared to other machine learning
algorithms using sequence-derived protein features. RNN models were used also
for out-of-class predictions of phylogenetically distinct protein families with
similar functions, including proteins of the CRISPR-associated nuclease,
ferritin-like iron storage and cytochrome P450 families. Applying the trained
RNN models on the partially unannotated UniRef100 database predicted not only
candidates validated by existing annotations but also currently unannotated
sequences. Some RNN predictions for the ferritin-like iron sequestering
function were experimentally validated, even though their sequences differ
significantly from known, characterized proteins and from each other and cannot
be easily predicted using popular bioinformatics methods. As sequencing and
experimental characterization data increases rapidly, the machine-learning
approach based on RNN could be useful for discovery and prediction of
homologues for a wide range of protein functions.
| Xueliang Liu | null | 1701.08318 | null | null |
Feature base fusion for splicing forgery detection based on neuro fuzzy | cs.CV cs.AI cs.LG | Most research on image forensics has mainly focused on detecting
artifacts introduced by a single processing tool. This has led to the
development of many specialized algorithms, each looking for one or more particular
footprints under specific settings. Naturally, the performance of such
algorithms are not perfect, and accordingly the provided output might be noisy,
inaccurate and only partially correct. Furthermore, a forged image in practical
scenarios is often the result of applying several tools available in
image-processing software systems. Therefore, reliable tamper detection
requires developing more powerful tools to deal with various tampering
scenarios. Fusion of forgery detection tools based on a Fuzzy Inference System
has been used before for addressing this problem. Adjusting the membership
functions and defining proper fuzzy rules for attaining better results are
time-consuming processes. This can be accounted as main disadvantage of fuzzy
inference systems. In this paper, a Neuro-Fuzzy inference system for fusion of
forgery detection tools is developed. The neural network characteristic of
these systems provides an appropriate tool for automatically adjusting the
membership functions. Moreover, the initial fuzzy inference system is generated
based on fuzzy clustering techniques. The proposed framework is implemented and
validated on a benchmark image splicing data set in which three forgery
detection tools are fused based on adaptive Neuro-Fuzzy inference system. The
outcome of the proposed method reveals that applying Neuro-Fuzzy inference
systems could be a better approach for fusion of forgery detection tools.
| Habib Ghaffari Hadigheh and Ghazali bin sulong | null | 1701.08374 | null | null |
When Slepian Meets Fiedler: Putting a Focus on the Graph Spectrum | cs.LG cs.CV | The study of complex systems benefits from graph models and their analysis.
In particular, the eigendecomposition of the graph Laplacian lets emerge
properties of global organization from local interactions; e.g., the Fiedler
vector has the smallest non-zero eigenvalue and plays a key role for graph
clustering. Graph signal processing focusses on the analysis of signals that
are attributed to the graph nodes. The eigendecomposition of the graph
Laplacian allows to define the graph Fourier transform and extend conventional
signal-processing operations to graphs. Here, we introduce the design of
Slepian graph signals, by maximizing energy concentration in a predefined
subgraph for a graph spectral bandlimit. We establish a novel link with
classical Laplacian embedding and graph clustering, which provides a meaning to
localized graph frequencies.
| Dimitri Van De Ville, Robin Demesmaeker, Maria Giulia Preti | 10.1109/LSP.2017.2704359 | 1701.08401 | null | null |
On the Local Structure of Stable Clustering Instances | cs.DS cs.CG cs.LG | We study the classic $k$-median and $k$-means clustering objectives in the
beyond-worst-case scenario. We consider three well-studied notions of
structured data that aim at characterizing real-world inputs: Distribution
Stability (introduced by Awasthi, Blum, and Sheffet, FOCS 2010), Spectral
Separability (introduced by Kumar and Kannan, FOCS 2010), Perturbation
Resilience (introduced by Bilu and Linial, ICS 2010).
We prove structural results showing that inputs satisfying at least one of
the conditions are inherently "local". Namely, for any such input, any local
optimum is close both in terms of structure and in terms of objective value to
the global optimum.
As a corollary we obtain that the widely-used Local Search algorithm has
strong performance guarantees for both the tasks of recovering the underlying
optimal clustering and obtaining a clustering of small cost. This is a
significant step toward understanding the success of local search heuristics in
clustering applications.
| Vincent Cohen-Addad, Chris Schwiegelshohn | null | 1701.08423 | null | null |