categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---|
stat.ML cs.LG
| null |
1607.06280
| null | null |
http://arxiv.org/pdf/1607.06280v2
|
2016-07-26T23:01:11Z
|
2016-07-21T11:50:41Z
|
Explaining Classification Models Built on High-Dimensional Sparse Data
|
Predictive modeling applications increasingly use data representing people's
behavior, opinions, and interactions. Fine-grained behavior data often has a
different structure from traditional data, being very high-dimensional and
sparse. Models built from these data are quite difficult to interpret, since
they contain many thousands or even many millions of features. Listing features
with large model coefficients is not sufficient, because the model coefficients
do not incorporate information on feature presence, which is key when analysing
sparse data. In this paper we introduce two alternatives for explaining
predictive models by listing important features. We evaluate these alternatives
in terms of explanation "bang for the buck," i.e., how many examples'
inferences are explained for a given number of features listed. The bottom
line: (i) the proposed alternatives have double the bang for the buck compared
to just listing the high-coefficient features, and (ii) interestingly, although
they come from different sources and motivations, the two new alternatives
provide strikingly similar rankings of important features.
|
[
"Julie Moeyersoms, Brian d'Alessandro, Foster Provost, David Martens",
"['Julie Moeyersoms' \"Brian d'Alessandro\" 'Foster Provost' 'David Martens']"
] |
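
The "bang for the buck" evaluation in the abstract above can be sketched as follows; the presence-weighted ranking, names, and toy data are illustrative assumptions, not the paper's exact method.

```python
# A minimal sketch (assumed names and toy data, not the paper's exact
# method): compare "coverage" -- the fraction of examples that contain at
# least one of the top-k listed features -- for a plain coefficient
# ranking versus a presence-weighted ranking.
import numpy as np

def coverage(X, ranked_features, k):
    """Fraction of examples with at least one of the top-k features active."""
    top = ranked_features[:k]
    return float(np.mean(X[:, top].sum(axis=1) > 0))

rng = np.random.default_rng(0)
X = (rng.random((1000, 500)) < 0.01).astype(int)       # high-dimensional, sparse, binary
coef = rng.normal(size=500)                            # stand-in model coefficients

by_coef = np.argsort(-np.abs(coef))                    # rank by coefficient magnitude only
by_presence = np.argsort(-(np.abs(coef) * X.mean(axis=0)))  # also weight by feature presence

for k in (10, 50):
    print(k, coverage(X, by_coef, k), coverage(X, by_presence, k))
```
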
cs.LG stat.ML
| null |
1607.06294
| null | null |
http://arxiv.org/pdf/1607.06294v1
|
2016-07-21T12:32:47Z
|
2016-07-21T12:32:47Z
|
Hierarchical Clustering of Asymmetric Networks
|
This paper considers networks where relationships between nodes are
represented by directed dissimilarities. The goal is to study methods that,
based on the dissimilarity structure, output hierarchical clusters, i.e., a
family of nested partitions indexed by a connectivity parameter. Our
construction of hierarchical clustering methods is built around the concept of
admissible methods, which are those that abide by the axioms of value - nodes
in a network with two nodes are clustered together at the maximum of the two
dissimilarities between them - and transformation - when dissimilarities are
reduced, the network may become more clustered but not less. Two particular
methods, termed reciprocal and nonreciprocal clustering, are shown to provide
upper and lower bounds in the space of admissible methods. Furthermore,
alternative clustering methodologies and axioms are considered. In particular,
modifying the axiom of value such that clustering in two-node networks occurs
at the minimum of the two dissimilarities entails the existence of a unique
admissible clustering method.
|
[
"['Gunnar Carlsson' 'Facundo Mémoli' 'Alejandro Ribeiro' 'Santiago Segarra']",
"Gunnar Carlsson, Facundo M\\'emoli, Alejandro Ribeiro, Santiago Segarra"
] |
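
A rough sketch of the reciprocal method described above, assuming a toy directed dissimilarity matrix: symmetrize by the maximum of the two directed dissimilarities, then apply single linkage (the nonreciprocal variant, which chains directed influences, is not shown).

```python
# Reciprocal clustering sketch on a toy directed dissimilarity matrix:
# symmetrize by the maximum of the two directed dissimilarities (the
# value axiom for two-node networks), then apply single linkage.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

D = np.array([[0.0, 1.0, 4.0],
              [2.0, 0.0, 1.5],
              [3.0, 5.0, 0.0]])          # D[i, j]: dissimilarity from node i to node j

D_sym = np.maximum(D, D.T)               # cluster at the max of the two directions
Z = linkage(squareform(D_sym, checks=False), method="single")
print(fcluster(Z, t=2.0, criterion="distance"))   # partition at resolution 2.0
```
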
stat.ML cs.LG
| null |
1607.06333
| null | null |
http://arxiv.org/pdf/1607.06333v3
|
2017-05-30T00:06:14Z
|
2016-07-21T14:19:23Z
|
Uncovering Causality from Multivariate Hawkes Integrated Cumulants
|
We design a new nonparametric method that allows one to estimate the matrix
of integrated kernels of a multivariate Hawkes process. This matrix not only
encodes the mutual influences of each node of the process, but also
disentangles the causality relationships between them. Our approach is the
first that leads to an estimation of this matrix without any parametric
modeling and estimation of the kernels themselves. A consequence is that it can
give an estimation of causality relationships between nodes (or users), based
on their activity timestamps (on a social network for instance), without
knowing or estimating the shape of the activities lifetime. For that purpose,
we introduce a moment matching method that fits the third-order integrated
cumulants of the process. We show on numerical experiments that our approach is
indeed very robust to the shape of the kernels, and gives appealing results on
the MemeTracker database.
|
[
"['Massil Achab' 'Emmanuel Bacry' 'Stéphane Gaïffas' 'Iacopo Mastromatteo'\n 'Jean-Francois Muzy']",
"Massil Achab, Emmanuel Bacry, St\\'ephane Ga\\\"iffas, Iacopo\n Mastromatteo, Jean-Francois Muzy"
] |
cs.LG stat.ML
| null |
1607.06335
| null | null |
http://arxiv.org/pdf/1607.06335v1
|
2016-07-21T14:22:12Z
|
2016-07-21T14:22:12Z
|
Admissible Hierarchical Clustering Methods and Algorithms for Asymmetric
Networks
|
This paper characterizes hierarchical clustering methods that abide by two
previously introduced axioms -- thus, denominated admissible methods -- and
proposes tractable algorithms for their implementation. We leverage the fact
that, for asymmetric networks, every admissible method must be contained
between reciprocal and nonreciprocal clustering, and describe three families of
intermediate methods. Grafting methods exchange branches between dendrograms
generated by different admissible methods. The convex combination family
combines admissible methods through a convex operation in the space of
dendrograms. Thirdly, the semi-reciprocal family clusters nodes that are
related by strong cyclic influences in the network. Algorithms for the
computation of hierarchical clusters generated by reciprocal and nonreciprocal
clustering as well as the grafting, convex combination, and semi-reciprocal
families are derived using matrix operations in a dioid algebra. Finally, the
introduced clustering methods and algorithms are exemplified through their
application to a network describing the interrelation between sectors of the
United States (U.S.) economy.
|
[
"['Gunnar Carlsson' 'Facundo Mémoli' 'Alejandro Ribeiro' 'Santiago Segarra']",
"Gunnar Carlsson, Facundo M\\'emoli, Alejandro Ribeiro, Santiago Segarra"
] |
cs.LG
| null |
1607.06339
| null | null |
http://arxiv.org/pdf/1607.06339v1
|
2016-07-21T14:28:51Z
|
2016-07-21T14:28:51Z
|
Excisive Hierarchical Clustering Methods for Network Data
|
We introduce two practical properties of hierarchical clustering methods for
(possibly asymmetric) network data: excisiveness and linear scale preservation.
The latter enforces imperviousness to change in units of measure whereas the
former ensures local consistency of the clustering outcome. Algorithmically,
excisiveness implies that we can reduce computational complexity by only
clustering a data subset of interest while theoretically guaranteeing that the
same hierarchical outcome would be observed when clustering the whole dataset.
Moreover, we introduce the concept of representability, i.e. a generative model
for describing clustering methods through the specification of their action on
a collection of networks. We further show that, within a rich set of admissible
methods, requiring representability is equivalent to requiring both
excisiveness and linear scale preservation. Leveraging this equivalence, we
show that all excisive and linear scale preserving methods can be factored into
two steps: a transformation of the weights in the input network followed by the
application of a canonical clustering method. Furthermore, their factorization
can be used to show stability of excisive and linear scale preserving methods
in the sense that a bounded perturbation in the input network entails a bounded
perturbation in the clustering output.
|
[
"['Gunnar Carlsson' 'Facundo Mémoli' 'Alejandro Ribeiro' 'Santiago Segarra']",
"Gunnar Carlsson, Facundo M\\'emoli, Alejandro Ribeiro, Santiago Segarra"
] |
stat.ML cs.LG
| null |
1607.06364
| null | null |
http://arxiv.org/pdf/1607.06364v1
|
2016-07-21T15:32:47Z
|
2016-07-21T15:32:47Z
|
Distributed Supervised Learning using Neural Networks
|
Distributed learning is the problem of inferring a function in the case where
training data is distributed among multiple geographically separated sources.
Particularly, the focus is on designing learning strategies with low
computational requirements, in which communication is restricted only to
neighboring agents, with no reliance on a centralized authority. In this
thesis, we analyze multiple distributed protocols for a large number of neural
network architectures. The first part of the thesis is devoted to a definition
of the problem, followed by an extensive overview of the state-of-the-art.
Next, we introduce different strategies for a relatively simple class of single
layer neural networks, where a linear output layer is preceded by a nonlinear
layer, whose weights are stochastically assigned at the beginning of the
learning process. We consider both batch and sequential learning, with
horizontally and vertically partitioned data. In the third part, we consider
instead the more complex problem of semi-supervised distributed learning, where
each agent is provided with an additional set of unlabeled training samples. We
propose two different algorithms based on diffusion processes for linear
support vector machines and kernel ridge regression. Subsequently, the fourth
part extends the discussion to learning with time-varying data (e.g.
time-series) using recurrent neural networks. We consider two different
families of networks, namely echo state networks (extending the algorithms
introduced in the second part), and spline adaptive filters. Overall, the
algorithms presented throughout the thesis cover a wide range of possible
practical applications, and lead the way to numerous future extensions, which
are briefly summarized in the concluding chapter.
|
[
"['Simone Scardapane']",
"Simone Scardapane"
] |
stat.ML cs.LG
| null |
1607.06450
| null | null |
http://arxiv.org/pdf/1607.06450v1
|
2016-07-21T19:57:52Z
|
2016-07-21T19:57:52Z
|
Layer Normalization
|
Training state-of-the-art deep neural networks is computationally expensive.
One way to reduce the training time is to normalize the activities of the
neurons. A recently introduced technique called batch normalization uses the
distribution of the summed input to a neuron over a mini-batch of training
cases to compute a mean and variance which are then used to normalize the
summed input to that neuron on each training case. This significantly reduces
the training time in feed-forward neural networks. However, the effect of batch
normalization is dependent on the mini-batch size and it is not obvious how to
apply it to recurrent neural networks. In this paper, we transpose batch
normalization into layer normalization by computing the mean and variance used
for normalization from all of the summed inputs to the neurons in a layer on a
single training case. Like batch normalization, we also give each neuron its
own adaptive bias and gain which are applied after the normalization but before
the non-linearity. Unlike batch normalization, layer normalization performs
exactly the same computation at training and test times. It is also
straightforward to apply to recurrent neural networks by computing the
normalization statistics separately at each time step. Layer normalization is
very effective at stabilizing the hidden state dynamics in recurrent networks.
Empirically, we show that layer normalization can substantially reduce the
training time compared with previously published techniques.
|
[
"Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton",
"['Jimmy Lei Ba' 'Jamie Ryan Kiros' 'Geoffrey E. Hinton']"
] |
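
A minimal NumPy sketch of the computation summarized above; statistics are taken over the features of each single case, so training- and test-time behavior coincide (the gain and bias initializations are illustrative choices).

```python
# Layer normalization sketch: mean and variance are computed over the
# features of each single case, identically at training and test time.
import numpy as np

def layer_norm(x, gain, bias, eps=1e-5):
    """x: (batch, features); normalize each row over its own features."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gain * (x - mean) / np.sqrt(var + eps) + bias

x = np.random.randn(4, 8)
out = layer_norm(x, gain=np.ones(8), bias=np.zeros(8))
print(out.mean(axis=-1), out.std(axis=-1))   # per-example mean ~0, std ~1
```
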
cs.CL cs.AI cs.LG stat.ML
| null |
1607.06520
| null | null |
http://arxiv.org/pdf/1607.06520v1
|
2016-07-21T22:26:20Z
|
2016-07-21T22:26:20Z
|
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word
Embeddings
|
The blind application of machine learning runs the risk of amplifying biases
present in data. Such a danger is facing us with word embedding, a popular
framework to represent text data as vectors which has been used in many machine
learning and natural language processing tasks. We show that even word
embeddings trained on Google News articles exhibit female/male gender
stereotypes to a disturbing extent. This raises concerns because their
widespread use, as we describe, often tends to amplify these biases.
Geometrically, gender bias is first shown to be captured by a direction in the
word embedding. Second, gender neutral words are shown to be linearly separable
from gender definition words in the word embedding. Using these properties, we
provide a methodology for modifying an embedding to remove gender stereotypes,
such as the association between the words receptionist and female, while
maintaining desired associations such as between the words queen and female. We
define metrics to quantify both direct and indirect gender biases in
embeddings, and develop algorithms to "debias" the embedding. Using
crowd-worker evaluation as well as standard benchmarks, we empirically
demonstrate that our algorithms significantly reduce gender bias in embeddings
while preserving its useful properties such as the ability to cluster related
concepts and to solve analogy tasks. The resulting embeddings can be used in
applications without amplifying gender bias.
|
[
"Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, Adam\n Kalai",
"['Tolga Bolukbasi' 'Kai-Wei Chang' 'James Zou' 'Venkatesh Saligrama'\n 'Adam Kalai']"
] |
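
The geometric step described above amounts to projecting away a bias direction; in the sketch below the direction is a crude single difference vector on toy embeddings, not the paper's more careful construction.

```python
# Hedged sketch: project a gender-neutral word vector onto the subspace
# orthogonal to an estimated bias direction (toy data throughout).
import numpy as np

def neutralize(v, g):
    """Remove the component of v along the (normalized) bias direction g."""
    g = g / np.linalg.norm(g)
    return v - np.dot(v, g) * g

rng = np.random.default_rng(1)
emb = {w: rng.normal(size=50) for w in ("she", "he", "receptionist")}
gender_dir = emb["she"] - emb["he"]
debiased = neutralize(emb["receptionist"], gender_dir)
print(np.dot(debiased, gender_dir / np.linalg.norm(gender_dir)))  # ~0 after projection
```
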
cs.LG
| null |
1607.06525
| null | null |
http://arxiv.org/pdf/1607.06525v1
|
2016-07-21T23:09:46Z
|
2016-07-21T23:09:46Z
|
CGMOS: Certainty Guided Minority OverSampling
|
Handling imbalanced datasets is a challenging problem that, if not treated
correctly, results in reduced classification performance. Imbalanced datasets
are commonly handled using minority oversampling, and SMOTE is a successful
oversampling algorithm with numerous extensions. SMOTE
extensions do not have a theoretical guarantee during training to work better
than SMOTE and in many instances their performance is data dependent. In this
paper we propose a novel extension to the SMOTE algorithm with a theoretical
guarantee for improved classification performance. The proposed approach
considers the classification performance of both the majority and minority
classes. In the proposed approach, CGMOS (Certainty Guided Minority
OverSampling), new data points are added by considering certainty changes in the
dataset. The paper provides a proof that the proposed algorithm is guaranteed
to work better than SMOTE for training data. Further experimental results on 30
real-world datasets show that CGMOS works better than existing algorithms when
using 6 different classifiers.
|
[
"['Xi Zhang' 'Di Ma' 'Lin Gan' 'Shanshan Jiang' 'Gady Agam']",
"Xi Zhang and Di Ma and Lin Gan and Shanshan Jiang and Gady Agam"
] |
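
For context, a minimal sketch of the plain SMOTE-style interpolation that CGMOS extends; the certainty-guided selection itself is not reproduced, and the function name and toy data are assumptions.

```python
# SMOTE-style minority oversampling sketch: a synthetic point is drawn
# on the segment between a minority sample and one of its k nearest
# minority-class neighbors.
import numpy as np

def smote_sample(X_min, k=5, rng=np.random.default_rng(0)):
    i = rng.integers(len(X_min))
    d = np.linalg.norm(X_min - X_min[i], axis=1)
    neighbors = np.argsort(d)[1:k + 1]            # skip the point itself
    j = rng.choice(neighbors)
    return X_min[i] + rng.random() * (X_min[j] - X_min[i])

X_min = np.random.default_rng(2).normal(size=(20, 3))   # minority-class samples
print(smote_sample(X_min))
```
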
cs.LG
| null |
1607.06657
| null | null |
http://arxiv.org/pdf/1607.06657v4
|
2016-10-27T10:47:49Z
|
2016-07-21T02:35:57Z
|
e-Distance Weighted Support Vector Regression
|
We propose a novel support vector regression approach called e-Distance
Weighted Support Vector Regression (e-DWSVR). e-DWSVR specifically addresses two
challenging issues in support vector regression: first, the processing of noisy
data; second, how to deal with the situation when the distribution of boundary
data is different from that of the overall data. The proposed e-DWSVR optimizes
the minimum margin and the mean of functional margin simultaneously to tackle
these two issues. In addition, we use both dual coordinate descent (CD) and
averaged stochastic gradient descent (ASGD) strategies to make e-DWSVR scalable
to large scale problems. We report promising results obtained by e-DWSVR in
comparison with existing methods on several benchmark datasets.
|
[
"Yan Wang, Ge Ou, Wei Pang, Lan Huang, George Macleod Coghill",
"['Yan Wang' 'Ge Ou' 'Wei Pang' 'Lan Huang' 'George Macleod Coghill']"
] |
cs.LG stat.ML
| null |
1607.06781
| null | null |
http://arxiv.org/pdf/1607.06781v2
|
2017-09-11T14:45:16Z
|
2016-07-22T18:17:10Z
|
On the Use of Sparse Filtering for Covariate Shift Adaptation
|
In this paper we formally analyse the use of sparse filtering algorithms to
perform covariate shift adaptation. We provide a theoretical analysis of sparse
filtering by evaluating the conditions required to perform covariate shift
adaptation. We prove that sparse filtering can perform adaptation only if the
conditional distribution of the labels has a structure explained by a cosine
metric. To overcome this limitation, we propose a new algorithm, named periodic
sparse filtering, and carry out the same theoretical analysis regarding
covariate shift adaptation. We show that periodic sparse filtering can perform
adaptation under the looser and more realistic requirement that the conditional
distribution of the labels has a periodic structure, which may be satisfied,
for instance, by user-dependent data sets. We experimentally validate our
theoretical results on synthetic data. Moreover, we apply periodic sparse
filtering to real-world data sets to demonstrate that this simple and
computationally efficient algorithm is able to achieve competitive
performance.
|
[
"Fabio Massimo Zennaro, Ke Chen",
"['Fabio Massimo Zennaro' 'Ke Chen']"
] |
cs.LG stat.ML
| null |
1607.06988
| null | null |
http://arxiv.org/pdf/1607.06988v1
|
2016-07-24T01:14:19Z
|
2016-07-24T01:14:19Z
|
Interactive Learning from Multiple Noisy Labels
|
Interactive learning is a process in which a machine learning algorithm is
provided with meaningful, well-chosen examples as opposed to randomly chosen
examples typical in standard supervised learning. In this paper, we propose a
new method for interactive learning from multiple noisy labels where we exploit
the disagreement among annotators to quantify the easiness (or meaningfulness)
of an example. We demonstrate the usefulness of this method in estimating the
parameters of a latent variable classification model, and conduct experimental
analyses on a range of synthetic and benchmark datasets. Furthermore, we
theoretically analyze the performance of perceptron in this interactive
learning framework.
|
[
"Shankar Vembu, Sandra Zilles",
"['Shankar Vembu' 'Sandra Zilles']"
] |
stat.ML cs.LG
| null |
1607.06996
| null | null |
http://arxiv.org/pdf/1607.06996v6
|
2019-07-18T10:21:41Z
|
2016-07-24T04:00:30Z
|
Scaling Up Sparse Support Vector Machines by Simultaneous Feature and
Sample Reduction
|
Sparse support vector machine (SVM) is a popular classification technique
that can simultaneously learn a small set of the most interpretable features
and identify the support vectors. It has achieved great successes in many
real-world applications. However, for large-scale problems involving a huge
number of samples and ultra-high dimensional features, solving sparse SVMs
remains challenging. By noting that sparse SVMs induce sparsities in both
feature and sample spaces, we propose a novel approach, which is based on
accurate estimations of the primal and dual optima of sparse SVMs, to
simultaneously identify the inactive features and samples that are guaranteed
to be irrelevant to the outputs. Thus, we can remove the identified inactive
samples and features from the training phase, leading to substantial savings in
the computational cost without sacrificing the accuracy. Moreover, we show that
our method can be extended to multi-class sparse support vector machines. To
the best of our knowledge, the proposed method is the \emph{first}
\emph{static} feature and sample reduction method for sparse SVMs and
multi-class sparse SVMs. Experiments on both synthetic and real data sets
demonstrate that our approach significantly outperforms state-of-the-art
methods and the speedup gained by our approach can be orders of magnitude.
|
[
"Weizhong Zhang and Bin Hong and Wei Liu and Jieping Ye and Deng Cai\n and Xiaofei He and Jie Wang",
"['Weizhong Zhang' 'Bin Hong' 'Wei Liu' 'Jieping Ye' 'Deng Cai'\n 'Xiaofei He' 'Jie Wang']"
] |
cs.LG
|
10.2196/mhealth.6562
|
1607.07034
| null | null |
http://arxiv.org/abs/1607.07034v1
|
2016-07-24T12:12:03Z
|
2016-07-24T12:12:03Z
|
Impact of Physical Activity on Sleep: A Deep Learning Based Exploration
|
The importance of sleep is paramount for maintaining physical, emotional and
mental wellbeing. Though the relationship between sleep and physical activity
is known to be important, it is not yet fully understood. The explosion in
popularity of actigraphy and wearable devices provides a unique opportunity to
understand this relationship. Leveraging this information source requires new
tools to be developed to facilitate data-driven research for sleep and activity
patient-recommendations.
In this paper we explore the use of deep learning to build sleep quality
prediction models based on actigraphy data. We first use deep learning as a
pure model building device by performing human activity recognition (HAR) on
raw sensor data, and using deep learning to build sleep prediction models. We
compare the deep learning models with those built using classical approaches,
i.e. logistic regression, support vector machines, random forest and adaboost.
Secondly, we employ the advantage of deep learning with its ability to handle
high dimensional datasets. We explore several deep learning models on the raw
wearable sensor output without performing HAR or any other feature extraction.
Our results show that using a convolutional neural network on the raw
wearables output improves the predictive value of sleep quality from physical
activity by an additional 8% compared to state-of-the-art non-deep learning
approaches, which itself shows a 15% improvement over current practice.
Moreover, utilizing deep learning on raw data eliminates the need for data
pre-processing and simplifies the overall workflow to analyze actigraphy data
for sleep and physical activity research.
|
[
"['Aarti Sathyanarayana' 'Shafiq Joty' 'Luis Fernandez-Luque' 'Ferda Ofli'\n 'Jaideep Srivastava' 'Ahmed Elmagarmid' 'Shahrad Taheri' 'Teresa Arora']",
"Aarti Sathyanarayana, Shafiq Joty, Luis Fernandez-Luque, Ferda Ofli,\n Jaideep Srivastava, Ahmed Elmagarmid, Shahrad Taheri, Teresa Arora"
] |
cs.CV cs.AI cs.LG cs.NE
| null |
1607.07043
| null | null |
http://arxiv.org/pdf/1607.07043v1
|
2016-07-24T13:39:11Z
|
2016-07-24T13:39:11Z
|
Spatio-Temporal LSTM with Trust Gates for 3D Human Action Recognition
|
3D action recognition - analysis of human actions based on 3D skeleton data -
has recently become popular due to its succinctness, robustness, and
view-invariant representation. Recent attempts at this problem suggested
developing RNN-based learning methods to model the contextual dependency in the
temporal domain. In this paper, we extend this idea to spatio-temporal domains
to analyze the hidden sources of action-related information within the input
data over both domains concurrently. Inspired by the graphical structure of the
human skeleton, we further propose a more powerful tree-structure based
traversal method. To handle the noise and occlusion in 3D skeleton data, we
introduce a new gating mechanism within LSTM to learn the reliability of the
sequential input data and accordingly adjust its effect on updating the
long-term context information stored in the memory cell. Our method achieves
state-of-the-art performance on 4 challenging benchmark datasets for 3D human
action analysis.
|
[
"['Jun Liu' 'Amir Shahroudy' 'Dong Xu' 'Gang Wang']",
"Jun Liu, Amir Shahroudy, Dong Xu, and Gang Wang"
] |
cs.LG
| null |
1607.07086
| null | null |
http://arxiv.org/pdf/1607.07086v3
|
2017-03-03T15:43:52Z
|
2016-07-24T20:05:07Z
|
An Actor-Critic Algorithm for Sequence Prediction
|
We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling.
|
[
"Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan\n Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio",
"['Dzmitry Bahdanau' 'Philemon Brakel' 'Kelvin Xu' 'Anirudh Goyal'\n 'Ryan Lowe' 'Joelle Pineau' 'Aaron Courville' 'Yoshua Bengio']"
] |
cs.LG
| null |
1607.07110
| null | null |
http://arxiv.org/pdf/1607.07110v1
|
2016-07-24T23:23:32Z
|
2016-07-24T23:23:32Z
|
Deep nets for local manifold learning
|
The problem of extending a function $f$ defined on training data
$\mathcal{C}$ on an unknown manifold $\mathbb{X}$ to the entire manifold and a
tubular neighborhood of this manifold is considered in this paper. For
$\mathbb{X}$ embedded in a high dimensional ambient Euclidean space
$\mathbb{R}^D$, a deep learning algorithm is developed for finding a local
coordinate system for the manifold {\bf without eigen--decomposition}, which
reduces the problem to the classical problem of function approximation on a low
dimensional cube. Deep nets (or multilayered neural networks) are proposed to
accomplish this approximation scheme by using the training data. Our methods do
not involve such optimization techniques as back--propagation, while assuring
optimal (a priori) error bounds on the output in terms of the number of
derivatives of the target function. In addition, these methods are universal,
in that they do not require prior knowledge of the smoothness of the target
function, but adjust the accuracy of approximation locally and automatically,
depending only upon the local smoothness of the target function. Our ideas are
easily extended to solve both the pre--image problem and the out--of--sample
extension problem, with a priori bounds on the growth of the function thus
extended.
|
[
"Charles K. Chui, H. N. Mhaskar",
"['Charles K. Chui' 'H. N. Mhaskar']"
] |
cs.LG
| null |
1607.07186
| null | null |
http://arxiv.org/pdf/1607.07186v2
|
2017-05-22T08:57:04Z
|
2016-07-25T09:25:25Z
|
A Cross-Entropy-based Method to Perform Information-based Feature
Selection
|
From a machine learning point of view, identifying a subset of relevant
features from a real data set can be useful to improve the results achieved by
classification methods and to reduce their time and space complexity. To
achieve this goal, feature selection methods are usually employed. These
approaches assume that the data contains redundant or irrelevant attributes
that can be eliminated. In this work, we propose a novel algorithm to manage
the optimization problem that is at the foundation of the Mutual Information
feature selection methods. Furthermore, our novel approach is able to estimate
automatically the number of dimensions to retain. The quality of our method is
confirmed by the promising results achieved on standard real data sets.
|
[
"['Pietro Cassara' 'Alessandro Rozza' 'Mirco Nanni']",
"Pietro Cassara and Alessandro Rozza and Mirco Nanni"
] |
stat.ML cs.LG
| null |
1607.07195
| null | null |
http://arxiv.org/pdf/1607.07195v2
|
2016-10-14T06:32:13Z
|
2016-07-25T10:19:27Z
|
Higher-Order Factorization Machines
|
Factorization machines (FMs) are a supervised learning approach that can use
second-order feature combinations even when the data is very high-dimensional.
Unfortunately, despite increasing interest in FMs, there exists to date no
efficient training algorithm for higher-order FMs (HOFMs). In this paper, we
present the first generic yet efficient algorithms for training arbitrary-order
HOFMs. We also present new variants of HOFMs with shared parameters, which
greatly reduce model size and prediction times while maintaining similar
accuracy. We demonstrate the proposed approaches on four different link
prediction tasks.
|
[
"['Mathieu Blondel' 'Akinori Fujino' 'Naonori Ueda' 'Masakazu Ishihata']",
"Mathieu Blondel, Akinori Fujino, Naonori Ueda and Masakazu Ishihata"
] |
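
For reference, a second-order factorization machine prediction can be computed in O(nk) via the standard reformulation of the pairwise term; the sketch below shows only this baseline, not the paper's higher-order training algorithms, and all values are toys.

```python
# Second-order FM prediction with the standard O(nk) pairwise identity:
# sum_{i<j} <v_i, v_j> x_i x_j = 0.5 * sum_f [ (V^T x)_f^2 - (V^2)^T x^2 ]_f
import numpy as np

def fm_predict(x, w0, w, V):
    """x: (n,) features, w0: bias, w: (n,) weights, V: (n, k) factors."""
    pairwise = 0.5 * np.sum((x @ V) ** 2 - (x ** 2) @ (V ** 2))
    return w0 + w @ x + pairwise

rng = np.random.default_rng(3)
n, k = 10, 4
print(fm_predict(rng.random(n), 0.1, rng.normal(size=n), rng.normal(size=(n, k))))
```
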
cs.LG cs.CV stat.ML
| null |
1607.07270
| null | null |
http://arxiv.org/pdf/1607.07270v1
|
2016-07-25T13:48:20Z
|
2016-07-25T13:48:20Z
|
A Statistical Test for Joint Distributions Equivalence
|
We provide a distribution-free test that can be used to determine whether any
two joint distributions $p$ and $q$ are statistically different by inspection
of a large enough set of samples. Following recent efforts from Long et al.
[1], we rely on joint kernel distribution embedding to extend the kernel
two-sample test of Gretton et al. [2] to the case of joint probability
distributions. Our main result can be directly applied to verify if a
dataset-shift has occurred between training and test distributions in a
learning framework, without further assuming the shift has occurred only in the
input, in the target or in the conditional distribution.
|
[
"Francesco Solera and Andrea Palazzi",
"['Francesco Solera' 'Andrea Palazzi']"
] |
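
A minimal sketch of the (biased) kernel two-sample statistic that the test above builds on, using an RBF kernel; extending it to joint distributions would use a product of kernels over the components, which is not shown.

```python
# Biased MMD^2 estimate between two samples with an RBF kernel.
import numpy as np

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    return rbf(X, X, sigma).mean() + rbf(Y, Y, sigma).mean() - 2 * rbf(X, Y, sigma).mean()

rng = np.random.default_rng(4)
print(mmd2(rng.normal(size=(100, 2)), rng.normal(loc=1.0, size=(100, 2))))
```
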
cs.SI cs.LG physics.soc-ph stat.ME
| null |
1607.07330
| null | null |
http://arxiv.org/pdf/1607.07330v1
|
2016-07-25T16:00:32Z
|
2016-07-25T16:00:32Z
|
Evaluating Link Prediction Accuracy on Dynamic Networks with Added and
Removed Edges
|
The task of predicting future relationships in a social network, known as
link prediction, has been studied extensively in the literature. Many link
prediction methods have been proposed, ranging from common neighbors to
probabilistic models. Recent work by Yang et al. has highlighted several
challenges in evaluating link prediction accuracy. In dynamic networks where
edges are both added and removed over time, the link prediction problem is more
complex and involves predicting both newly added and newly removed edges. This
results in new challenges in the evaluation of dynamic link prediction methods,
and the recommendations provided by Yang et al. are no longer applicable,
because they do not address edge removal. In this paper, we investigate several
metrics currently used for evaluating accuracies of dynamic link prediction
methods and demonstrate why they can be misleading in many cases. We provide
several recommendations on evaluating dynamic link prediction accuracy,
including separation into two categories of evaluation. Finally, we propose a
unified metric to characterize link prediction accuracy effectively using a
single number.
|
[
"Ruthwik R. Junuthula, Kevin S. Xu, and Vijay K. Devabhaktuni",
"['Ruthwik R. Junuthula' 'Kevin S. Xu' 'Vijay K. Devabhaktuni']"
] |
cs.LG
| null |
1607.07395
| null | null |
http://arxiv.org/pdf/1607.07395v3
|
2016-07-27T03:52:43Z
|
2016-07-25T18:28:18Z
|
Seeing the Forest from the Trees in Two Looks: Matrix Sketching by
Cascaded Bilateral Sampling
|
Matrix sketching is aimed at finding close approximations of a matrix by
factors of much smaller dimensions, which has important applications in
optimization and machine learning. Given a matrix A of size m by n,
state-of-the-art randomized algorithms take O(m * n) time and space to obtain
its low-rank decomposition. Although quite useful, the need to store or
manipulate the entire matrix makes it a computational bottleneck for truly
large and dense inputs. Can we sketch an m-by-n matrix in O(m + n) cost by
accessing only a small fraction of its rows and columns, without knowing
anything about the remaining data? In this paper, we propose the cascaded
bilateral sampling (CABS) framework to solve this problem. We start from
demonstrating how the approximation quality of bilateral matrix sketching
depends on the encoding powers of sampling. In particular, the sampled rows and
columns should correspond to the code-vectors in the ground truth
decompositions. Motivated by this analysis, we propose to first generate a
pilot-sketch using simple random sampling, and then pursue more advanced,
"follow-up" sampling on the pilot-sketch factors seeking maximal encoding
powers. In this cascading process, the rise of approximation quality is shown
to be lower-bounded by the improvement of encoding powers in the follow-up
sampling step, thus theoretically guaranteeing the algorithmic boosting property.
Computationally, our framework only takes linear time and space, and at the
same time its performance rivals the quality of state-of-the-art algorithms
consuming a quadratic amount of resources. Empirical evaluations on benchmark
data fully demonstrate the potential of our methods in large scale matrix
sketching and related areas.
|
[
"['Kai Zhang' 'Chuanren Liu' 'Jie Zhang' 'Hui Xiong' 'Eric Xing'\n 'Jieping Ye']",
"Kai Zhang, Chuanren Liu, Jie Zhang, Hui Xiong, Eric Xing, Jieping Ye"
] |
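
A hedged sketch of the bilateral row/column sampling idea with O(m + n) sample cost, in the spirit of CUR decompositions; the "follow-up" sampling pass that CABS adds is omitted, and all names are illustrative.

```python
# One-pass bilateral sketch: sample rows R and columns C uniformly and
# combine them through the pseudo-inverse of their intersection W.
import numpy as np

def bilateral_sketch(A, r, c, rng=np.random.default_rng(0)):
    rows = rng.choice(A.shape[0], r, replace=False)
    cols = rng.choice(A.shape[1], c, replace=False)
    C, R = A[:, cols], A[rows, :]
    W = A[np.ix_(rows, cols)]                 # intersection of sampled rows/columns
    return C @ np.linalg.pinv(W) @ R          # approximates A

rng = np.random.default_rng(5)
A = rng.normal(size=(50, 5)) @ rng.normal(size=(5, 40))   # exactly rank 5
A_hat = bilateral_sketch(A, r=10, c=10)
print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))      # ~0 for low-rank A
```
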
cs.CV cs.LG
| null |
1607.07405
| null | null |
http://arxiv.org/pdf/1607.07405v3
|
2016-08-12T17:28:24Z
|
2016-07-25T18:57:17Z
|
gvnn: Neural Network Library for Geometric Computer Vision
|
We introduce gvnn, a neural network library in Torch aimed towards bridging
the gap between classic geometric computer vision and deep learning. Inspired
by the recent success of Spatial Transformer Networks, we propose several new
layers which are often used as parametric transformations on the data in
geometric computer vision. These layers can be inserted within a neural network
much in the spirit of the original spatial transformers and allow
backpropagation to enable end-to-end learning of a network involving any domain
knowledge in geometric computer vision. This opens up applications in learning
invariance to 3D geometric transformation for place recognition, end-to-end
visual odometry, depth estimation and unsupervised learning through warping
with a parametric transformation for image reconstruction error.
|
[
"['Ankur Handa' 'Michael Bloesch' 'Viorica Patraucean' 'Simon Stent'\n 'John McCormac' 'Andrew Davison']",
"Ankur Handa, Michael Bloesch, Viorica Patraucean, Simon Stent, John\n McCormac, Andrew Davison"
] |
cs.LG stat.AP stat.ME stat.ML
|
10.1109/RAM.2017.7889786
|
1607.07423
| null | null |
http://arxiv.org/abs/1607.07423v3
|
2016-07-29T20:31:54Z
|
2016-07-25T19:40:55Z
|
A Non-Parametric Control Chart For High Frequency Multivariate Data
|
Support Vector Data Description (SVDD) is a machine learning technique used
for single-class classification and outlier detection. The SVDD-based K-chart
was first introduced by Sun and Tsung for monitoring multivariate processes
when the underlying distribution of process parameters or quality
characteristics departs from normality. The method first trains an SVDD model
on data obtained from stable or in-control operations of the process to obtain
a threshold $R^2$ and a kernel center $a$. For each new observation, its kernel
distance from the kernel center $a$ is calculated. The kernel distance is
compared against the threshold
$R^2$ to determine if the observation is within the control limits. The
non-parametric K-chart provides an attractive alternative to the traditional
control charts such as the Hotelling's $T^2$ charts when distribution of the
underlying multivariate data is either non-normal or is unknown. But there are
challenges when the K-chart is deployed in practice. The K-chart requires
calculating the kernel distance of each new observation, but there are no
guidelines on how to interpret the kernel distance plot and infer shifts in
process mean or changes in process variation. This limits the application of K-charts
in big-data applications such as equipment health monitoring, where
observations are generated at a very high frequency. In this scenario, the
analyst using the K-chart is inundated with kernel distance results at a very
high frequency, generally without any recourse for detecting presence of any
assignable causes of variation. We propose a new SVDD-based control chart,
called the $K_T$ chart, which addresses challenges encountered when using the
K-chart for big-data applications. The $K_T$ chart can be used to
simultaneously track process variation and central tendency. We illustrate the
successful use of $K_T$ chart using the Tennessee Eastman process data.
|
[
"Deovrat Kakde, Sergriy Peredriy, Arin Chaudhuri, Anya Mcguirk",
"['Deovrat Kakde' 'Sergriy Peredriy' 'Arin Chaudhuri' 'Anya Mcguirk']"
] |
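
For illustration, the kernel distance monitored by the K-chart follows directly from the SVDD expansion; the sketch below assumes the support vectors and coefficients have already been obtained by solving the SVDD problem elsewhere, with toy values throughout.

```python
# Squared kernel distance of a new observation z from the SVDD center:
# d^2(z) = K(z,z) - 2 sum_i a_i K(x_i,z) + sum_ij a_i a_j K(x_i,x_j)
import numpy as np

def rbf(a, b, sigma=1.0):
    return np.exp(-np.linalg.norm(a - b) ** 2 / (2 * sigma ** 2))

def kernel_distance2(z, X_sv, alpha, sigma=1.0):
    k_zz = rbf(z, z, sigma)
    k_zx = sum(a * rbf(x, z, sigma) for a, x in zip(alpha, X_sv))
    k_xx = sum(ai * aj * rbf(xi, xj, sigma)
               for ai, xi in zip(alpha, X_sv) for aj, xj in zip(alpha, X_sv))
    return k_zz - 2 * k_zx + k_xx   # compare against the threshold R^2

X_sv = np.random.default_rng(9).normal(size=(5, 3))
alpha = np.full(5, 0.2)             # toy coefficients summing to one
print(kernel_distance2(np.zeros(3), X_sv, alpha))
```
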
stat.ML cs.LG
| null |
1607.07519
| null | null |
http://arxiv.org/pdf/1607.07519v1
|
2016-07-26T02:06:33Z
|
2016-07-26T02:06:33Z
|
Deepr: A Convolutional Net for Medical Records
|
Feature engineering remains a major bottleneck when creating predictive
systems from electronic medical records. At present, an important missing
element is detecting predictive regular clinical motifs from irregular episodic
records. We present Deepr (short for Deep record), a new end-to-end deep
learning system that learns to extract features from medical records and
predicts future risk automatically. Deepr transforms a record into a sequence
of discrete elements separated by coded time gaps and hospital transfers. On
top of the sequence is a convolutional neural net that detects and combines
predictive local clinical motifs to stratify the risk. Deepr permits
transparent inspection and visualization of its inner working. We validate
Deepr on hospital data to predict unplanned readmission after discharge. Deepr
achieves superior accuracy compared to traditional techniques, detects
meaningful clinical motifs, and uncovers the underlying structure of the
disease and intervention space.
|
[
"Phuoc Nguyen, Truyen Tran, Nilmini Wickramasinghe, Svetha Venkatesh",
"['Phuoc Nguyen' 'Truyen Tran' 'Nilmini Wickramasinghe' 'Svetha Venkatesh']"
] |
cs.LG
| null |
1607.07526
| null | null |
http://arxiv.org/pdf/1607.07526v5
|
2018-09-13T14:45:07Z
|
2016-07-26T02:58:16Z
|
On the Resistance of Nearest Neighbor to Random Noisy Labels
|
Nearest neighbor has always been one of the most appealing non-parametric
approaches in machine learning, pattern recognition, computer vision, etc.
Previous empirical studies partly show that nearest neighbor is resistant to
noise, yet a deep analysis has been lacking. This work presents
finite-sample and distribution-dependent bounds on the consistency of nearest
neighbor in the random noise setting. The theoretical results show that, for
asymmetric noises, k-nearest neighbor is robust enough to classify most data
correctly, except for a handful of examples, whose labels are totally misled by
random noises. For symmetric noises, however, k-nearest neighbor achieves the
same consistency rate as in the noise-free setting, which verifies the
resistance of k-nearest neighbor to random noisy labels. Motivated by the
theoretical analysis, we propose the Robust k-Nearest Neighbor (RkNN) approach
to deal with noisy labels. The basic idea is to make unilateral corrections to
examples, whose labels are totally misled by random noises, and classify the
others directly by utilizing the robustness of k-nearest neighbor. We verify
the effectiveness of the proposed algorithm both theoretically and empirically.
|
[
"Wei Gao and Bin-Bin Yang and Zhi-Hua Zhou",
"['Wei Gao' 'Bin-Bin Yang' 'Zhi-Hua Zhou']"
] |
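
A rough sketch of the unilateral-correction idea described above; this is a simplification (the paper's precise rule may differ), and the threshold and toy data are assumptions.

```python
# Relabel only the points whose k nearest neighbors near-unanimously
# disagree with the observed label; classify the rest as usual.
import numpy as np

def knn_correct(X, y, k=5, threshold=0.9):
    y_new = y.copy()
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        nn = np.argsort(d)[1:k + 1]              # k nearest neighbors, excluding i
        if np.mean(y[nn] != y[i]) >= threshold:  # unilateral: flip only clear cases
            y_new[i] = np.bincount(y[nn]).argmax()
    return y_new

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
y[0] = 1                                         # inject one random label flip
print(knn_correct(X, y)[0])                      # expected to be corrected back to 0
```
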
physics.data-an cs.LG stat.ML
|
10.7566/JPSJ.86.024001
|
1607.07590
| null | null |
http://arxiv.org/abs/1607.07590v2
|
2016-12-15T11:43:21Z
|
2016-07-26T08:36:41Z
|
Simultaneous Estimation of Noise Variance and Number of Peaks in
Bayesian Spectral Deconvolution
|
The heuristic identification of peaks from noisy complex spectra often leads
to misunderstanding of the physical and chemical properties of matter. In this
paper, we propose a framework based on Bayesian inference, which enables us to
separate multipeak spectra into single peaks statistically and consists of two
steps. The first step is estimating both the noise variance and the number of
peaks as hyperparameters based on Bayes free energy, which generally is not
analytically tractable. The second step is fitting the parameters of each peak
function to the given spectrum by calculating the posterior density, which has
a problem of local minima and saddles since multipeak models are nonlinear and
hierarchical. Our framework enables the escape from local minima or saddles by
using the exchange Monte Carlo method and calculates Bayes free energy via the
multiple histogram method. We discuss a simulation demonstrating how efficient
our framework is and show that estimating both the noise variance and the
number of peaks prevents overfitting, overpenalizing, and misunderstanding the
precision of parameter estimation.
|
[
"Satoru Tokuda, Kenji Nagata, and Masato Okada",
"['Satoru Tokuda' 'Kenji Nagata' 'Masato Okada']"
] |
cs.LG cs.NA math.NA stat.ML
|
10.1016/j.amc.2019.01.047
|
1607.07607
| null | null |
http://arxiv.org/abs/1607.07607v3
|
2019-08-29T09:11:05Z
|
2016-07-26T09:26:20Z
|
Adaptive Nonnegative Matrix Factorization and Measure Comparisons for
Recommender Systems
|
The Nonnegative Matrix Factorization (NMF) of the rating matrix has been shown to
be an effective method to tackle the recommendation problem. In this paper we
propose new methods based on the NMF of the rating matrix and we compare them
with some classical algorithms such as the SVD and the regularized and
unregularized non-negative matrix factorization approach. In particular, a new
algorithm is obtained by adaptively changing the function to be minimized at
each step, realizing a sort of dynamic prior strategy. Another algorithm is obtained
modifying the function to be minimized in the NMF formulation by enforcing the
reconstruction of the unknown ratings toward a prior term. We then combine
different methods obtaining two mixed strategies which turn out to be very
effective in the reconstruction of missing observations. We perform a
thoughtful comparison of different methods on the basis of several evaluation
measures. We consider in particular rating, classification and ranking measures
showing that the algorithm obtaining the best score for a given measure is in
general the best also when different measures are considered, lowering the
interest in designing specific evaluation measures. The algorithms have been
tested on different datasets, in particular the 1M and 10M MovieLens datasets
containing ratings of movies, the Jester dataset with ratings of jokes, and the
Amazon Fine Foods dataset with ratings of foods. The comparison of the
different algorithms shows the good performance of methods employing both an
explicit and an implicit regularization scheme. Moreover we can get a boost by
mixed strategies combining a fast method with a more accurate one.
|
[
"Gianna M. Del Corso and Francesco Romani",
"['Gianna M. Del Corso' 'Francesco Romani']"
] |
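
As background for the methods above, a minimal sketch of rating-matrix NMF with multiplicative updates restricted to observed entries; the adaptive and prior-based variants proposed in the paper modify this objective, and sizes and hyperparameters here are toys.

```python
# Masked NMF for recommendation: Lee-Seung-style multiplicative updates
# applied only to the observed entries of the rating matrix (mask M).
import numpy as np

rng = np.random.default_rng(8)
R = rng.integers(1, 6, size=(30, 20)).astype(float)   # ratings in 1..5
M = (rng.random(R.shape) < 0.3).astype(float)         # ~30% of entries observed
k, eps = 4, 1e-9
W, H = rng.random((30, k)) + 0.1, rng.random((k, 20)) + 0.1

for _ in range(200):   # updates fit W @ H to R on observed entries only
    W *= ((M * R) @ H.T) / (((M * (W @ H)) @ H.T) + eps)
    H *= (W.T @ (M * R)) / ((W.T @ (M * (W @ H))) + eps)

print(np.abs(M * (R - W @ H)).sum() / M.sum())        # mean absolute error, observed entries
```
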
cs.LG cs.RO
| null |
1607.07611
| null | null |
http://arxiv.org/pdf/1607.07611v1
|
2016-07-26T09:40:23Z
|
2016-07-26T09:40:23Z
|
Learning Null Space Projections in Operational Space Formulation
|
In recent years, a number of tools have become available that recover the
underlying control policy from constrained movements. However, few have
explicitly considered learning the constraints of the motion and ways to cope
with an unknown environment. In this paper, we consider learning the null space
projection matrix of a kinematically constrained system in the absence of any
prior knowledge either on the underlying policy, the geometry, or
dimensionality of the constraints. Our evaluations have demonstrated the
effectiveness of the proposed approach on problems of differing dimensionality,
and with different degrees of non-linearity.
|
[
"Hsiu-Chin Lin and Matthew Howard",
"['Hsiu-Chin Lin' 'Matthew Howard']"
] |
cs.GT cs.AI cs.LG
| null |
1607.07684
| null | null |
http://arxiv.org/pdf/1607.07684v1
|
2016-07-26T13:23:20Z
|
2016-07-26T13:23:20Z
|
The Price of Anarchy in Auctions
|
This survey outlines a general and modular theory for proving approximation
guarantees for equilibria of auctions in complex settings. This theory
complements traditional economic techniques, which generally focus on exact and
optimal solutions and are accordingly limited to relatively stylized settings.
We highlight three user-friendly analytical tools: smoothness-type
inequalities, which immediately yield approximation guarantees for many auction
formats of interest in the special case of complete information and
deterministic strategies; extension theorems, which extend such guarantees to
randomized strategies, no-regret learning outcomes, and incomplete-information
settings; and composition theorems, which extend such guarantees from simpler
to more complex auctions. Combining these tools yields tight worst-case
approximation guarantees for the equilibria of many widely-used auction
formats.
|
[
"Tim Roughgarden, Vasilis Syrgkanis, Eva Tardos",
"['Tim Roughgarden' 'Vasilis Syrgkanis' 'Eva Tardos']"
] |
cs.NE cs.CV cs.LG
| null |
1607.07695
| null | null |
http://arxiv.org/pdf/1607.07695v2
|
2017-01-11T20:42:47Z
|
2016-07-12T17:26:31Z
|
Hierarchical Multi-resolution Mesh Networks for Brain Decoding
|
We propose a new framework, called Hierarchical Multi-resolution Mesh
Networks (HMMNs), which establishes a set of brain networks at multiple time
resolutions of fMRI signal to represent the underlying cognitive process. The
suggested framework, first, decomposes the fMRI signal into various frequency
subbands using wavelet transforms. Then, a brain network, called mesh network,
is formed at each subband by ensembling a set of local meshes. The locality
around each anatomic region is defined with respect to a neighborhood system
based on functional connectivity. The arc weights of a mesh are estimated by
ridge regression formed among the average region time series. In the final
step, the adjacency matrices of mesh networks obtained at different subbands
are ensembled for brain decoding under a hierarchical learning architecture,
called fuzzy stacked generalization (FSG). Our results on Human Connectome
Project task-fMRI dataset reflect that the suggested HMMN model can
successfully discriminate tasks by extracting complementary information
obtained from mesh arc weights of multiple subbands. We study the topological
properties of the mesh networks at different resolutions using the network
measures, namely, node degree, node strength, betweenness centrality and global
efficiency; and investigate the connectivity of anatomic regions, during a
cognitive task. We observe significant variations among the network topologies
obtained for different subbands. We also analyze the diversity properties of
classifier ensemble, trained by the mesh networks in multiple subbands and
observe that the classifiers in the ensemble collaborate with each other to
fuse the complementary information freed at each subband. We conclude that the
fMRI data, recorded during a cognitive task, embed diverse information across
the anatomic regions at each resolution.
|
[
"['Itir Onal Ertugrul' 'Mete Ozay' 'Fatos Tunay Yarman Vural']",
"Itir Onal Ertugrul, Mete Ozay, Fatos Tunay Yarman Vural"
] |
cs.CY cs.LG
|
10.1177/0269215518771127
|
1607.07751
| null | null |
http://arxiv.org/abs/1607.07751v1
|
2016-07-05T17:10:40Z
|
2016-07-05T17:10:40Z
|
Machine Learning in Falls Prediction; A cognition-based predictor of
falls for the acute neurological in-patient population
|
Background Information: Falls are associated with high direct and indirect
costs, and significant morbidity and mortality for patients. Pathological falls
are usually a result of a compromised motor system, and/or cognition. Very
little research has been conducted on predicting falls based on this premise.
Aims: To demonstrate that cognitive and motor tests can be used to create a
robust predictive tool for falls.
Methods: Three tests of attention and executive function (Stroop, Trail
Making, and Semantic Fluency), a measure of physical function (Walk-12), a
series of questions (concerning recent falls, surgery and physical function)
and demographic information were collected from a cohort of 323 patients at a
tertiary neurological center. The principal outcome was a fall during the
in-patient stay (n = 54). Data-driven, predictive modelling was employed to
identify the statistical modelling strategies which are most accurate in
predicting falls, and which yield the most parsimonious models of clinical
relevance.
Results: The Trail test was identified as the best predictor of falls.
Moreover, the addition of any other variables to the results of the Trail test
did not improve the prediction (Wilcoxon signed-rank p < .001). The best
statistical strategy for predicting falls was the random forest (Wilcoxon
signed-rank p < .001), based solely on results of the Trail test. Tuning of the
model results in the following optimized values: 68% (+- 7.7) sensitivity, 90%
(+- 2.3) specificity, with a positive predictive value of 60%, when the
relevant data is available.
Conclusion: Predictive modelling has identified a simple yet powerful machine
learning prediction strategy based on a single clinical test, the Trail test.
Predictive evaluation shows this strategy to be robust, suggesting predictive
modelling and machine learning as the standard for future predictive tools.
|
[
"Bilal A. Mateen and Matthias Bussas and Catherine Doogan and Denise\n Waller and Alessia Saverino and Franz J Kir\\'aly and E Diane Playford",
"['Bilal A. Mateen' 'Matthias Bussas' 'Catherine Doogan' 'Denise Waller'\n 'Alessia Saverino' 'Franz J Király' 'E Diane Playford']"
] |
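As a hedged illustration of the winning strategy above -- a random forest on the Trail test alone -- the sketch below trains scikit-learn's RandomForestClassifier on a single predictor. The score distributions are invented stand-ins, not the study's data; only the cohort size (323) and fall count (54) come from the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic Trail-test scores: 269 non-fallers, 54 fallers (distributions assumed)
rng = np.random.default_rng(0)
trail_scores = np.concatenate([rng.normal(40, 10, 269),   # non-fallers
                               rng.normal(65, 15, 54)])    # fallers
fell = np.array([0] * 269 + [1] * 54)

# Random forest on the single feature, with class weighting for the imbalance
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0)
scores = cross_val_score(clf, trail_scores.reshape(-1, 1), fell,
                         cv=5, scoring="roc_auc")
print("cross-validated AUC:", scores.mean())
```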
cs.AI cs.LG cs.RO stat.AP stat.ML
| null |
1607.07762
| null | null |
http://arxiv.org/pdf/1607.07762v4
|
2016-10-23T04:05:34Z
|
2016-07-26T15:48:03Z
|
Focused Model-Learning and Planning for Non-Gaussian Continuous
State-Action Systems
|
We introduce a framework for model learning and planning in stochastic
domains with continuous state and action spaces and non-Gaussian transition
models. It is efficient because (1) local models are estimated only when the
planner requires them; (2) the planner focuses on the most relevant states to
the current planning problem; and (3) the planner focuses on the most
informative and/or high-value actions. Our theoretical analysis shows the
validity and asymptotic optimality of the proposed approach. Empirically, we
demonstrate the effectiveness of our algorithm on a simulated multi-modal
pushing problem.
|
[
"Zi Wang, Stefanie Jegelka, Leslie Pack Kaelbling, Tom\\'as\n Lozano-P\\'erez",
"['Zi Wang' 'Stefanie Jegelka' 'Leslie Pack Kaelbling' 'Tomás Lozano-Pérez']"
] |
cs.LG
| null |
1607.07804
| null | null |
http://arxiv.org/pdf/1607.07804v1
|
2016-07-03T16:34:24Z
|
2016-07-03T16:34:24Z
|
Error-Resilient Machine Learning in Near Threshold Voltage via
Classifier Ensemble
|
In this paper, we present the design of error-resilient machine learning
architectures by employing a distributed machine learning framework referred to
as classifier ensemble (CE). CE combines several simple classifiers to obtain a
strong one. In contrast, centralized machine learning employs a single complex
block. We compare the random forest (RF) and the support vector machine (SVM),
which are representative techniques from the CE and centralized frameworks,
respectively. Employing the dataset from UCI machine learning repository and
architectural-level error models in a commercial 45 nm CMOS process, it is
demonstrated that RF-based architectures are significantly more robust than SVM
architectures in the presence of timing errors due to process variations in
near-threshold voltage (NTV) regions (0.3 V - 0.7 V). In particular, the RF
architecture exhibits a detection accuracy (P_{det}) that varies by 3.2% while
maintaining a median P_{det} > 0.9 at a gate-level delay variation of 28.9%.
In comparison, SVM exhibits a P_{det} that varies by 16.8%. Additionally, we
propose an error weighted voting technique that incorporates the timing error
statistics of the NTV circuit fabric to further enhance robustness. Simulation
results confirm that the error weighted voting achieves a P_{det} that varies
by only 1.4%, which is 12X lower compared to SVM.
|
[
"Sai Zhang, Naresh Shanbhag",
"['Sai Zhang' 'Naresh Shanbhag']"
] |
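The error-weighted voting idea can be sketched as a weighted-majority rule over ensemble members whose flip probabilities come from timing-error statistics. The log-odds weighting below is a standard choice assumed for illustration; the paper's exact weighting rule may differ.

```python
import numpy as np

def error_weighted_vote(votes, p_err):
    """Combine binary votes (0/1) from an ensemble where member i flips its
    decision with probability p_err[i] (e.g., timing-error statistics of the
    NTV circuit fabric). Log-odds weights down-weight the least reliable
    members; this is one common rule, not necessarily the paper's.
    """
    votes = np.asarray(votes, dtype=float)
    p_err = np.clip(np.asarray(p_err), 1e-6, 0.5 - 1e-6)
    w = np.log((1 - p_err) / p_err)        # reliability weight per member
    score = np.dot(w, 2 * votes - 1)       # map votes to {-1,+1} and sum
    return int(score > 0)

print(error_weighted_vote([1, 1, 0, 1], [0.05, 0.30, 0.45, 0.10]))
```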
q-bio.QM cs.LG
| null |
1607.07817
| null | null |
http://arxiv.org/pdf/1607.07817v1
|
2016-02-27T00:09:21Z
|
2016-02-27T00:09:21Z
|
Prediction of future hospital admissions - what is the tradeoff between
specificity and accuracy?
|
Large amounts of electronic medical records collected by hospitals across the
developed world offer unprecedented possibilities for knowledge discovery using
computer based data mining and machine learning. Notwithstanding significant
research efforts, the use of this data in the prediction of disease development
has largely been disappointing. In this paper we examine in detail a recently
proposed method which has in preliminary experiments demonstrated highly
promising results on real-world data. We scrutinize the authors' claims that
the proposed model is scalable and investigate whether the tradeoff between
prediction specificity (i.e. the ability of the model to predict a wide number
of different ailments) and accuracy (i.e. the ability of the model to make the
correct prediction) is practically viable. Our experiments conducted on a data
corpus of nearly 3,000,000 admissions support the authors' expectations and
demonstrate that the high prediction accuracy is maintained well even when the
number of admission types explicitly included in the model is increased to
account for 98% of all admissions in the corpus. Thus several promising
directions for future work are highlighted.
|
[
"Ieva Vasiljeva and Ognjen Arandjelovic",
"['Ieva Vasiljeva' 'Ognjen Arandjelovic']"
] |
math.OC cs.DS cs.LG math.NA stat.ML
| null |
1607.07837
| null | null |
http://arxiv.org/pdf/1607.07837v4
|
2017-04-17T02:40:11Z
|
2016-07-26T18:46:21Z
|
First Efficient Convergence for Streaming k-PCA: a Global, Gap-Free, and
Near-Optimal Rate
|
We study streaming principal component analysis (PCA), that is to find, in
$O(dk)$ space, the top $k$ eigenvectors of a $d\times d$ hidden matrix $\bf
\Sigma$ with online vectors drawn from covariance matrix $\bf \Sigma$.
We provide $\textit{global}$ convergence for Oja's algorithm which is
popularly used in practice but lacks theoretical understanding for $k>1$. We
also provide a modified variant $\mathsf{Oja}^{++}$ that runs $\textit{even
faster}$ than Oja's. Our results match the information theoretic lower bound in
terms of dependency on error, on eigengap, on rank $k$, and on dimension $d$,
up to poly-log factors. In addition, our convergence rate can be made gap-free,
that is proportional to the approximation error and independent of the
eigengap.
In contrast, for general rank $k$, before our work (1) it was open to design
any algorithm with efficient global convergence rate; and (2) it was open to
design any algorithm with (even local) gap-free convergence rate in $O(dk)$
space.
|
[
"Zeyuan Allen-Zhu, Yuanzhi Li",
"['Zeyuan Allen-Zhu' 'Yuanzhi Li']"
] |
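For concreteness, here is a minimal numpy sketch of Oja's algorithm in $O(dk)$ space, the method analyzed above. The constant step-size and QR re-orthonormalization are simplifying assumptions; the paper's step-size schedules and its $\mathsf{Oja}^{++}$ initialization are omitted.

```python
import numpy as np

def oja_streaming_pca(vectors, d, k, lr=0.01, seed=0):
    """Oja's algorithm for streaming k-PCA: each arriving vector x updates an
    orthonormal d-by-k matrix Q via Q <- orth(Q + lr * x (x^T Q)). Only the
    d x k iterate is stored, so space is O(dk)."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((d, k)))
    for x in vectors:
        Q += lr * np.outer(x, x @ Q)   # rank-one stochastic update
        Q, _ = np.linalg.qr(Q)         # re-orthonormalize (O(d k^2))
    return Q

# toy usage: recover the top-2 eigenvectors of a diagonal covariance
d, k = 20, 2
scale = np.ones(d); scale[:2] = [3.0, 2.0]
rng = np.random.default_rng(1)
Q = oja_streaming_pca((scale * rng.standard_normal(d) for _ in range(5000)), d, k)
```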
cs.CR cs.LG
| null |
1607.07903
| null | null |
http://arxiv.org/pdf/1607.07903v1
|
2016-07-26T21:32:11Z
|
2016-07-26T21:32:11Z
|
Product Offerings in Malicious Hacker Markets
|
Marketplaces specializing in malicious hacking products - including malware
and exploits - have recently become more prominent on the darkweb and deepweb.
We scrape 17 such sites and collect information about such products in a
unified database schema. Using a combination of manual labeling and
unsupervised clustering, we examine a corpus of products in order to understand
their various categories and how they become specialized with respect to vendor
and marketplace. This initial study presents how we effectively employed
unsupervised techniques to this data as well as the types of insights we gained
on various categories of malicious hacking products.
|
[
"Ericsson Marin, Ahmad Diab and Paulo Shakarian",
"['Ericsson Marin' 'Ahmad Diab' 'Paulo Shakarian']"
] |
cs.RO cs.LG
| null |
1607.07939
| null | null |
http://arxiv.org/pdf/1607.07939v1
|
2016-07-27T02:29:52Z
|
2016-07-27T02:29:52Z
|
A Sensorimotor Reinforcement Learning Framework for Physical Human-Robot
Interaction
|
Modeling of physical human-robot collaborations is generally a challenging
problem due to the unpredictive nature of human behavior. To address this
issue, we present a data-efficient reinforcement learning framework which
enables a robot to learn how to collaborate with a human partner. The robot
learns the task from its own sensorimotor experiences in an unsupervised
manner. The uncertainty of the human actions is modeled using Gaussian
processes (GP) to implement action-value functions. Optimal action selection
given the uncertain GP model is ensured by Bayesian optimization. We apply the
framework to a scenario in which a human and a PR2 robot jointly control the
ball position on a plank based on vision and force/torque data. Our
experimental results show the suitability of the proposed method in terms of
fast and data-efficient model learning, optimal action selection under
uncertainties and equal role sharing between the partners.
|
[
"Ali Ghadirzadeh, Judith B\\\"utepage, Atsuto Maki, Danica Kragic and\n M{\\aa}rten Bj\\\"orkman",
"['Ali Ghadirzadeh' 'Judith Bütepage' 'Atsuto Maki' 'Danica Kragic'\n 'Mårten Björkman']"
] |
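A toy sketch of the action-selection step described above -- a GP action-value model queried with an optimistic, UCB-style criterion -- assuming scikit-learn's GaussianProcessRegressor. The UCB rule stands in for the paper's Bayesian optimization step; the 1-d action space and returns are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def select_action(gp, candidate_actions, kappa=2.0):
    """Optimistic action selection over a GP action-value model: score each
    candidate by posterior mean + kappa * std and pick the maximizer."""
    mu, sigma = gp.predict(candidate_actions, return_std=True)
    return candidate_actions[np.argmax(mu + kappa * sigma)]

# toy usage: 1-d action space with a noisy unknown action-value function
rng = np.random.default_rng(0)
A = rng.uniform(-1, 1, (15, 1))                              # actions tried so far
Q = -(A[:, 0] - 0.4) ** 2 + 0.05 * rng.standard_normal(15)   # observed returns
gp = GaussianProcessRegressor().fit(A, Q)
a_next = select_action(gp, np.linspace(-1, 1, 101).reshape(-1, 1))
```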
cs.LG stat.ML
| null |
1607.07959
| null | null |
http://arxiv.org/pdf/1607.07959v2
|
2016-09-05T12:25:00Z
|
2016-07-27T04:56:57Z
|
Using Kernel Methods and Model Selection for Prediction of Preterm Birth
|
We describe an application of machine learning to the problem of predicting
preterm birth. We conduct a secondary analysis on a clinical trial dataset
collected by the National Institute of Child Health and Human Development
(NICHD) while focusing our attention on predicting different classes of preterm
birth. We compare three approaches for deriving predictive models: a support
vector machine (SVM) approach with linear and non-linear kernels, logistic
regression with different model selection strategies, and a model based on
decision rules prescribed by physician experts for the prediction of preterm
birth. We highlight the pre-processing methods applied to handle the inherent
dynamics, noise and gaps in the data, and describe techniques used to handle
skewed class distributions. Empirical experiments demonstrate significant
improvement in predicting preterm birth compared to past work.
|
[
"['Ilia Vovsha' 'Ansaf Salleb-Aouissi' 'Anita Raja' 'Thomas Koch'\n 'Alex Rybchuk' 'Axinia Radeva' 'Ashwath Rajan' 'Yiwen Huang' 'Hatim Diab'\n 'Ashish Tomar' 'Ronald Wapner']",
"Ilia Vovsha, Ansaf Salleb-Aouissi, Anita Raja, Thomas Koch, Alex\n Rybchuk, Axinia Radeva, Ashwath Rajan, Yiwen Huang, Hatim Diab, Ashish Tomar,\n and Ronald Wapner"
] |
cs.LG cs.NA math.OC
| null |
1607.08012
| null | null |
http://arxiv.org/pdf/1607.08012v1
|
2016-07-27T09:18:25Z
|
2016-07-27T09:18:25Z
|
Learning of Generalized Low-Rank Models: A Greedy Approach
|
Learning of low-rank matrices is fundamental to many machine learning
applications. A state-of-the-art algorithm is the rank-one matrix pursuit
(R1MP). However, it can only be used in matrix completion problems with the
square loss. In this paper, we develop a more flexible greedy algorithm for
generalized low-rank models whose optimization objective can be smooth or
nonsmooth, general convex or strongly convex. The proposed algorithm has low
per-iteration time complexity and fast convergence rate. Experimental results
show that it is much faster than the state-of-the-art, with comparable or even
better prediction performance.
|
[
"['Quanming Yao' 'James T. Kwok']",
"Quanming Yao and James T. Kwok"
] |
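To make the greedy scheme concrete, here is a rank-one pursuit sketch for the square-loss (matrix completion) case that R1MP handles; the paper's contribution is generalizing this loop to other smooth and nonsmooth convex objectives. The data and iteration budget are illustrative.

```python
import numpy as np

def rank_one_pursuit(M_obs, mask, rank):
    """Greedy rank-one pursuit: repeatedly add the top singular pair of the
    negative loss gradient, with an exact line search for the square loss."""
    X = np.zeros_like(M_obs)
    for _ in range(rank):
        G = mask * (M_obs - X)            # negative gradient of 0.5||mask*(X-M)||^2
        U, s, Vt = np.linalg.svd(G)
        direction = np.outer(U[:, 0], Vt[0])   # top singular vector pair
        num = np.sum(G * direction)            # exact step size on observed entries
        den = np.sum((mask * direction) ** 2) + 1e-12
        X += (num / den) * direction
    return X

# toy usage: recover a rank-3 matrix from ~60% observed entries
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 20))
mask = (rng.random(M.shape) < 0.6).astype(float)
X = rank_one_pursuit(mask * M, mask, rank=10)
```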
cs.CV cs.LG cs.NE
| null |
1607.08064
| null | null |
http://arxiv.org/pdf/1607.08064v4
|
2020-05-25T06:28:24Z
|
2016-07-27T12:41:00Z
|
CNN-based Patch Matching for Optical Flow with Thresholded Hinge
Embedding Loss
|
Learning based approaches have not yet achieved their full potential in
optical flow estimation, where their performance still trails heuristic
approaches. In this paper, we present a CNN based patch matching approach for
optical flow estimation. An important contribution of our approach is a novel
thresholded loss for Siamese networks. We demonstrate that our loss performs
clearly better than existing losses. It also allows us to speed up training by
a factor of 2 in our tests. Furthermore, we present a novel way of calculating
CNN based features for different image scales, which performs better than
existing methods. We also discuss new ways of evaluating the robustness of
trained features for the application of patch matching for optical flow. An
interesting discovery in our paper is that low-pass filtering of feature maps
can increase the robustness of features created by CNNs. We demonstrated the
competitive performance of our approach by submitting it to the KITTI 2012,
KITTI 2015 and MPI-Sintel evaluation portals where we obtained state-of-the-art
results on all three datasets.
|
[
"['Christian Bailer' 'Kiran Varanasi' 'Didier Stricker']",
"Christian Bailer and Kiran Varanasi and Didier Stricker"
] |
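One plausible reading of a thresholded hinge embedding loss for Siamese patch descriptors, sketched in numpy; the specific thresholds t_pos and t_neg are assumptions for illustration, not the paper's tuned values.

```python
import numpy as np

def thresholded_hinge_embedding_loss(d, same, t_pos=0.3, t_neg=1.0):
    """d    : array of Euclidean distances between descriptor pairs
    same : boolean array, True for matching patch pairs.
    Matching pairs are penalized only while their distance exceeds t_pos, and
    non-matching pairs only while their distance is below t_neg, so
    already-good pairs contribute zero gradient -- one way a thresholded loss
    can speed up training, as the abstract reports.
    """
    d, same = np.asarray(d), np.asarray(same)
    pos_loss = np.maximum(0.0, d - t_pos)   # pull matches together, but not past t_pos
    neg_loss = np.maximum(0.0, t_neg - d)   # push non-matches apart, only up to t_neg
    return np.where(same, pos_loss, neg_loss).mean()

print(thresholded_hinge_embedding_loss([0.1, 0.5, 0.4, 1.2],
                                       [True, True, False, False]))
```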
cs.CV cs.AI cs.LG math.ST stat.TH
| null |
1607.08085
| null | null |
http://arxiv.org/pdf/1607.08085v1
|
2016-07-27T13:35:16Z
|
2016-07-27T13:35:16Z
|
Improving Semantic Embedding Consistency by Metric Learning for
Zero-Shot Classification
|
This paper addresses the task of zero-shot image classification. The key
contribution of the proposed approach is to control the semantic embedding of
images -- one of the main ingredients of zero-shot learning -- by formulating
it as a metric learning problem. The optimized empirical criterion associates
two types of sub-task constraints: metric discriminating capacity and accurate
attribute prediction. This results in a novel expression of zero-shot learning
not requiring the notion of class in the training phase: only pairs of
image/attributes, augmented with a consistency indicator, are given as ground
truth. At test time, the learned model can predict the consistency of a test
image with a given set of attributes, allowing flexible ways to produce
recognition inferences. Despite its simplicity, the proposed approach gives
state-of-the-art results on four challenging datasets used for zero-shot
recognition evaluation.
|
[
"Maxime Bucher (Palaiseau), St\\'ephane Herbin (Palaiseau), Fr\\'ed\\'eric\n Jurie",
"['Maxime Bucher' 'Stéphane Herbin' 'Frédéric Jurie']"
] |
stat.ML cs.LG q-bio.QM
|
10.1007/978-3-319-50478-0_16
|
1607.08161
| null | null |
http://arxiv.org/abs/1607.08161v2
|
2016-12-15T13:09:49Z
|
2016-07-27T15:53:02Z
|
Network-Guided Biomarker Discovery
|
Identifying measurable genetic indicators (or biomarkers) of a specific
condition of a biological system is a key element of precision medicine. Indeed
it allows to tailor diagnostic, prognostic and treatment choice to individual
characteristics of a patient. In machine learning terms, biomarker discovery
can be framed as a feature selection problem on whole-genome data sets.
However, classical feature selection methods are usually underpowered to
process these data sets, which contain orders of magnitude more features than
samples. This can be addressed by making the assumption that genetic features
that are linked on a biological network are more likely to work jointly towards
explaining the phenotype of interest. We review here three families of methods
for feature selection that integrate prior knowledge in the form of networks.
|
[
"Chlo\\'e-Agathe Azencott",
"['Chloé-Agathe Azencott']"
] |
stat.ML cs.LG
| null |
1607.08194
| null | null |
http://arxiv.org/pdf/1607.08194v4
|
2016-10-10T22:37:55Z
|
2016-07-27T17:44:05Z
|
Convolutional Neural Networks Analyzed via Convolutional Sparse Coding
|
Convolutional neural networks (CNN) have led to many state-of-the-art results
spanning through various fields. However, a clear and profound theoretical
understanding of the forward pass, the core algorithm of CNN, is still lacking.
In parallel, within the wide field of sparse approximation, Convolutional
Sparse Coding (CSC) has gained increasing attention in recent years. A
theoretical study of this model was recently conducted, establishing it as a
reliable and stable alternative to the commonly practiced patch-based
processing. Herein, we propose a novel multi-layer model, ML-CSC, in which
signals are assumed to emerge from a cascade of CSC layers. This is shown to be
tightly connected to CNN, so much so that the forward pass of the CNN is in
fact the thresholding pursuit serving the ML-CSC model. This connection brings
a fresh view to CNN, as we are able to attribute to this architecture
theoretical claims such as uniqueness of the representations throughout the
network, and their stable estimation, all guaranteed under simple local
sparsity conditions. Lastly, identifying the weaknesses in the above pursuit
scheme, we propose an alternative to the forward pass, which is connected to
deconvolutional, recurrent and residual networks, and has better theoretical
guarantees.
|
[
"Vardan Papyan, Yaniv Romano and Michael Elad",
"['Vardan Papyan' 'Yaniv Romano' 'Michael Elad']"
] |
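The claimed CNN/ML-CSC correspondence can be made concrete with a layered thresholding pursuit; dense random dictionaries stand in for convolutional ones in this illustrative sketch, and soft-thresholding stands in for the bias-plus-ReLU nonlinearity.

```python
import numpy as np

def soft_threshold(x, theta):
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ml_csc_forward(x, dicts, thetas):
    """Layered thresholding pursuit for a cascade of sparse coding layers:
    Gamma_i = S_theta(D_i^T Gamma_{i-1}) mirrors a CNN forward pass, with
    D_i^T playing the role of the layer's filters."""
    gamma = x
    for D, theta in zip(dicts, thetas):
        gamma = soft_threshold(D.T @ gamma, theta)
    return gamma

# toy usage: a 2-layer model on a random signal
rng = np.random.default_rng(0)
D1 = rng.standard_normal((64, 128))
D2 = rng.standard_normal((128, 256))
codes = ml_csc_forward(rng.standard_normal(64), [D1, D2], [0.5, 0.5])
```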
cs.LG
| null |
1607.08206
| null | null |
http://arxiv.org/pdf/1607.08206v2
|
2016-09-13T16:26:41Z
|
2016-07-27T18:20:01Z
|
Diagnostic Prediction Using Discomfort Drawings with IBTM
|
In this paper, we explore the possibility to apply machine learning to make
diagnostic predictions using discomfort drawings. A discomfort drawing is an
intuitive way for patients to express discomfort and pain related symptoms.
These drawings have proven to be an effective method to collect patient data
and make diagnostic decisions in real-life practice. A dataset from real-world
patient cases is collected for which medical experts provide diagnostic labels.
Next, we use a factorized multimodal topic model, Inter-Battery Topic Model
(IBTM), to train a system that can make diagnostic predictions given an unseen
discomfort drawing. The number of output diagnostic labels is determined by
using mean-shift clustering on the discomfort drawing. Experimental results
show reasonable predictions of diagnostic labels given an unseen discomfort
drawing. Additionally, we generate synthetic discomfort drawings with IBTM
given a diagnostic label, which results in typical cases of symptoms. The
positive result indicates a significant potential of machine learning to be
used for parts of the pain diagnostic process and to be a decision support
system for physicians and other health care personnel.
|
[
"Cheng Zhang, Hedvig Kjellstrom, Carl Henrik Ek, Bo C. Bertilson",
"['Cheng Zhang' 'Hedvig Kjellstrom' 'Carl Henrik Ek' 'Bo C. Bertilson']"
] |
math.OC cs.LG stat.ML
| null |
1607.08254
| null | null |
http://arxiv.org/pdf/1607.08254v2
|
2016-07-29T05:01:34Z
|
2016-07-27T20:03:47Z
|
Stochastic Frank-Wolfe Methods for Nonconvex Optimization
|
We study Frank-Wolfe methods for nonconvex stochastic and finite-sum
optimization problems. Frank-Wolfe methods (in the convex case) have gained
tremendous recent interest in machine learning and optimization communities due
to their projection-free property and their ability to exploit structured
constraints. However, our understanding of these algorithms in the nonconvex
setting is fairly limited. In this paper, we propose nonconvex stochastic
Frank-Wolfe methods and analyze their convergence properties. For objective
functions that decompose into a finite-sum, we leverage ideas from variance
reduction techniques for convex optimization to obtain new variance reduced
nonconvex Frank-Wolfe methods that have provably faster convergence than the
classical Frank-Wolfe method. Finally, we show that the faster convergence
rates of our variance reduced methods also translate into improved convergence
rates for the stochastic setting.
|
[
"['Sashank J. Reddi' 'Suvrit Sra' 'Barnabas Poczos' 'Alex Smola']",
"Sashank J. Reddi, Suvrit Sra, Barnabas Poczos, Alex Smola"
] |
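A minimal stochastic Frank-Wolfe loop, as a sketch of the projection-free scheme studied above; the variance-reduced estimators that drive the paper's faster rates are not shown, and the l1-ball oracle, step size, and toy objective are illustrative.

```python
import numpy as np

def stochastic_frank_wolfe(grad_batch, lmo, x0, n_iters=200, gamma=0.05):
    """At each step, a stochastic gradient is fed to a linear minimization
    oracle (LMO) over the constraint set, and the iterate moves toward the
    returned vertex -- no projection is ever computed."""
    x = x0.copy()
    for _ in range(n_iters):
        g = grad_batch(x)          # stochastic gradient estimate
        s = lmo(g)                 # argmin_{s in C} <g, s>
        x = (1 - gamma) * x + gamma * s
    return x

# toy usage: minimize the average of ||x - a_i||^2 over the l1 ball of radius 1
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 10)) * 0.1
def grad_batch(x, b=32):
    idx = rng.integers(0, len(A), b)
    return 2 * (x - A[idx].mean(axis=0))
def lmo_l1(g, radius=1.0):
    # vertex of the l1 ball: -radius * sign(g_j) e_j at the largest |g_j|
    s = np.zeros_like(g); j = np.argmax(np.abs(g))
    s[j] = -radius * np.sign(g[j]); return s
x = stochastic_frank_wolfe(grad_batch, lmo_l1, np.zeros(10))
```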
cs.AI cs.CY cs.HC cs.LG cs.RO
| null |
1607.08289
| null | null |
http://arxiv.org/pdf/1607.08289v4
|
2019-01-21T19:29:30Z
|
2016-07-28T01:22:26Z
|
Mammalian Value Systems
|
Characterizing human values is a topic deeply interwoven with the sciences,
humanities, art, and many other human endeavors. In recent years, a number of
thinkers have argued that accelerating trends in computer science, cognitive
science, and related disciplines foreshadow the creation of intelligent
machines which meet and ultimately surpass the cognitive abilities of human
beings, thereby entangling an understanding of human values with future
technological development. Contemporary research accomplishments suggest
sophisticated AI systems becoming widespread and responsible for managing many
aspects of the modern world, from preemptively planning users' travel schedules
and logistics, to fully autonomous vehicles, to domestic robots assisting in
daily living. The extrapolation of these trends has been most forcefully
described in the context of a hypothetical "intelligence explosion," in which
the capabilities of an intelligent software agent would rapidly increase due to
the presence of feedback loops unavailable to biological organisms. The
possibility of superintelligent agents, or simply the widespread deployment of
sophisticated, autonomous AI systems, highlights an important theoretical
problem: the need to separate the cognitive and rational capacities of an agent
from the fundamental goal structure, or value system, which constrains and
guides the agent's actions. The "value alignment problem" is to specify a goal
structure for autonomous agents compatible with human values. In this brief
article, we suggest that recent ideas from affective neuroscience and related
disciplines aimed at characterizing neurological and behavioral universals in
the mammalian class provide important conceptual foundations relevant to
describing human values. We argue that the notion of "mammalian value systems"
points to a potential avenue for fundamental research in AI safety and AI
ethics.
|
[
"['Gopal P. Sarma' 'Nick J. Hay']",
"Gopal P. Sarma and Nick J. Hay"
] |
cs.AI cs.LG stat.ML
| null |
1607.08316
| null | null |
http://arxiv.org/pdf/1607.08316v2
|
2017-01-21T03:26:06Z
|
2016-07-28T05:03:32Z
|
Efficient Hyperparameter Optimization of Deep Learning Algorithms Using
Deterministic RBF Surrogates
|
Automatically searching for optimal hyperparameter configurations is of
crucial importance for applying deep learning algorithms in practice. Recently,
Bayesian optimization has been proposed for optimizing hyperparameters of
various machine learning algorithms. Those methods adopt probabilistic
surrogate models like Gaussian processes to approximate and minimize the
validation error function of hyperparameter values. However, probabilistic
surrogates require accurate estimates of sufficient statistics (e.g.,
covariance) of the error distribution and thus need many function evaluations
with a sizeable number of hyperparameters. This makes them inefficient for
optimizing hyperparameters of deep learning algorithms, which are highly
expensive to evaluate. In this work, we propose a new deterministic and
efficient hyperparameter optimization method that employs radial basis
functions as error surrogates. The proposed mixed integer algorithm, called
HORD, searches the surrogate for the most promising hyperparameter values
through dynamic coordinate search and requires many fewer function evaluations.
HORD does well in low dimensions, and its advantage grows markedly in higher
dimensions. Extensive evaluations on MNIST and CIFAR-10 for four deep neural
networks demonstrate HORD significantly outperforms the well-established
Bayesian optimization methods such as GP, SMAC, and TPE. For instance, on
average, HORD is more than 6 times faster than GP-EI in obtaining the best
configuration of 19 hyperparameters.
|
[
"['Ilija Ilievski' 'Taimoor Akhtar' 'Jiashi Feng'\n 'Christine Annette Shoemaker']",
"Ilija Ilievski and Taimoor Akhtar and Jiashi Feng and Christine\n Annette Shoemaker"
] |
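A much-simplified sketch of an RBF-surrogate search loop in the spirit of HORD, assuming scipy's RBFInterpolator; the candidate-perturbation radius, initialization, and the real algorithm's mixed-integer handling are all simplified assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def rbf_surrogate_search(f, bounds, n_init=8, n_iters=20, seed=0):
    """Fit an RBF interpolant to (config, validation error) pairs, generate
    candidates by perturbing the incumbent's coordinates, and evaluate the
    candidate the surrogate scores best. Sketch only, not the published code."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = lo + (hi - lo) * rng.random((n_init, len(lo)))
    y = np.array([f(x) for x in X])
    for _ in range(n_iters):
        surrogate = RBFInterpolator(X, y)
        best = X[np.argmin(y)]
        cand = np.clip(best + 0.1 * (hi - lo) * rng.standard_normal((100, len(lo))),
                       lo, hi)
        x_new = cand[np.argmin(surrogate(cand))]   # most promising candidate
        X = np.vstack([X, x_new]); y = np.append(y, f(x_new))
    return X[np.argmin(y)], y.min()

# toy usage: minimize a 2-d quadratic "validation error"
f = lambda x: float((x[0] - 0.3) ** 2 + (x[1] + 0.5) ** 2)
best_x, best_y = rbf_surrogate_search(f, np.array([[-1, 1], [-1, 1]], float))
```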
cs.LG
| null |
1607.08400
| null | null | null | null | null |
Randomised Algorithm for Feature Selection and Classification
|
We here introduce a novel classification approach adopted from the nonlinear
model identification framework, which jointly addresses the feature selection
and classifier design tasks. The classifier is constructed as a polynomial
expansion of the original attributes and a model structure selection process is
applied to find the relevant terms of the model. The selection method
progressively refines a probability distribution defined on the model structure
space, by extracting sample models from the current distribution and using the
aggregate information obtained from the evaluation of the population of models
to reinforce the probability of extracting the most important terms. To reduce
the initial search space, distance correlation filtering can be applied as a
preprocessing technique. The proposed method is evaluated and compared to other
well-known feature selection and classification methods on standard benchmark
classification problems. The results show the effectiveness of the proposed
method with respect to competitor methods both in terms of classification
accuracy and model complexity. The obtained models have a simple structure,
easily amenable to interpretation and analysis.
|
[
"Aida Brankovic, Alessandro Falsone, Maria Prandini, Luigi Piroddi"
] |
null | null |
1607.08400
| null | null |
http://arxiv.org/pdf/1607.08400v1
|
2016-07-28T11:07:31Z
|
2016-07-28T11:07:31Z
|
Randomised Algorithm for Feature Selection and Classification
|
We here introduce a novel classification approach adopted from the nonlinear model identification framework, which jointly addresses the feature selection and classifier design tasks. The classifier is constructed as a polynomial expansion of the original attributes and a model structure selection process is applied to find the relevant terms of the model. The selection method progressively refines a probability distribution defined on the model structure space, by extracting sample models from the current distribution and using the aggregate information obtained from the evaluation of the population of models to reinforce the probability of extracting the most important terms. To reduce the initial search space, distance correlation filtering can be applied as a preprocessing technique. The proposed method is evaluated and compared to other well-known feature selection and classification methods on standard benchmark classification problems. The results show the effectiveness of the proposed method with respect to competitor methods both in terms of classification accuracy and model complexity. The obtained models have a simple structure, easily amenable to interpretation and analysis.
|
[
"['Aida Brankovic' 'Alessandro Falsone' 'Maria Prandini' 'Luigi Piroddi']"
] |
stat.ML cs.DS cs.LG
| null |
1607.08456
| null | null |
http://arxiv.org/pdf/1607.08456v2
|
2017-10-29T21:33:41Z
|
2016-07-28T13:46:06Z
|
Kernel functions based on triplet comparisons
|
Given only information in the form of similarity triplets "Object A is more
similar to object B than to object C" about a data set, we propose two ways of
defining a kernel function on the data set. While previous approaches construct
a low-dimensional Euclidean embedding of the data set that reflects the given
similarity triplets, we aim at defining kernel functions that correspond to
high-dimensional embeddings. These kernel functions can subsequently be used to
apply any kernel method to the data set.
|
[
"Matth\\\"aus Kleindessner and Ulrike von Luxburg",
"['Matthäus Kleindessner' 'Ulrike von Luxburg']"
] |
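One simplified instance of the idea above: embed each object by its observed triplet answers and take inner products, giving a high-dimensional feature map in the spirit of (but not identical to) the kernels proposed in the paper.

```python
import numpy as np

def triplet_feature_map(triplets, n):
    """Map each of n objects to a feature vector built purely from similarity
    triplets. triplets is a list of (a, b, c) meaning "object a is more
    similar to b than to c"; object a gets +1 at coordinate (b, c) and -1 at
    coordinate (c, b). Inner products of these vectors define a valid
    (positive semi-definite) kernel."""
    F = np.zeros((n, n * n))
    for a, b, c in triplets:
        F[a, b * n + c] += 1.0
        F[a, c * n + b] -= 1.0
    norms = np.linalg.norm(F, axis=1, keepdims=True)
    F /= np.where(norms > 0, norms, 1.0)       # normalize per object
    return F

triplets = [(0, 1, 2), (1, 0, 2), (2, 0, 1), (0, 1, 3), (3, 2, 1)]
F = triplet_feature_map(triplets, 4)
K = F @ F.T                                    # kernel matrix, usable by any kernel method
```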
cs.CR cs.LG
| null |
1607.08634
| null | null |
http://arxiv.org/pdf/1607.08634v1
|
2016-07-28T20:36:37Z
|
2016-07-28T20:36:37Z
|
Attribute Learning for Network Intrusion Detection
|
Network intrusion detection is one of the most visible uses for Big Data
analytics. One of the main problems in this application is the constant rise of
new attacks. This scenario, characterized by the fact that not enough labeled
examples are available for the new classes of attacks is hardly addressed by
traditional machine learning approaches. New findings on the capabilities of
Zero-Shot learning (ZSL) approach makes it an interesting solution for this
problem because it has the ability to classify instances of unseen classes. ZSL
has inherently two stages: the attribute learning and the inference stage. In
this paper we propose a new algorithm for the attribute learning stage of ZSL.
The idea is to learn new values for the attributes based on decision trees
(DT). Our results show that based on the rules extracted from the DT a better
distribution for the attribute values can be found. We also propose an
experimental setup for the evaluation of ZSL on network intrusion detection
(NID).
|
[
"['Jorge Luis Rivero Pérez' 'Bernardete Ribeiro']",
"Jorge Luis Rivero P\\'erez and Bernardete Ribeiro"
] |
cs.LG stat.ML
| null |
1607.08691
| null | null |
http://arxiv.org/pdf/1607.08691v2
|
2016-08-02T00:48:29Z
|
2016-07-29T06:05:08Z
|
A Non-Parametric Learning Approach to Identify Online Human Trafficking
|
Human trafficking is among the most challenging law enforcement problems,
demanding a persistent fight from across the globe. In this study, we leverage
readily available data from the website "Backpage" -- used for classified
advertisements -- to discern potential patterns of human trafficking
activities which manifest online and to identify the advertisements most
likely to be trafficking related. Due to the lack of ground truth, we rely on
two human analysts -- one a human trafficking victim survivor and one from law
enforcement -- to hand-label a small portion of the crawled data. We then present a
semi-supervised learning approach that is trained on the available labeled and
unlabeled data and evaluated on unseen data with further verification of
experts.
|
[
"['Hamidreza Alvari' 'Paulo Shakarian' 'J. E. Kelly Snyder']",
"Hamidreza Alvari, Paulo Shakarian, J.E. Kelly Snyder"
] |
cs.LG cs.CL cs.IR stat.ML
| null |
1607.08720
| null | null | null | null | null |
TopicResponse: A Marriage of Topic Modelling and Rasch Modelling for
Automatic Measurement in MOOCs
|
This paper explores the suitability of using automatically discovered topics
from MOOC discussion forums for modelling students' academic abilities. The
Rasch model from psychometrics is a popular generative probabilistic model that
relates latent student skill, latent item difficulty, and observed student-item
responses within a principled, unified framework. According to scholarly
educational theory, discovered topics can be regarded as appropriate
measurement items if (1) students' participation across the discovered topics
is well fit by the Rasch model, and if (2) the topics are interpretable to
subject-matter experts as being educationally meaningful. Such Rasch-scaled
topics, with associated difficulty levels, could be of potential benefit to
curriculum refinement, student assessment and personalised feedback. The
technical challenge that remains is to discover meaningful topics that
simultaneously achieve good statistical fit with the Rasch model. To address
this challenge, we combine the Rasch model with non-negative matrix
factorisation based topic modelling, jointly fitting both models. We
demonstrate the suitability of our approach with quantitative experiments on
data from three Coursera MOOCs, and with qualitative survey results on topic
interpretability on a Discrete Optimisation MOOC.
|
[
"Jiazhen He, Benjamin I. P. Rubinstein, James Bailey, Rui Zhang, Sandra\n Milligan"
] |
null | null |
1607.08720
| null | null |
http://arxiv.org/pdf/1607.08720v2
|
2017-03-20T04:30:38Z
|
2016-07-29T08:17:45Z
|
TopicResponse: A Marriage of Topic Modelling and Rasch Modelling for
Automatic Measurement in MOOCs
|
This paper explores the suitability of using automatically discovered topics from MOOC discussion forums for modelling students' academic abilities. The Rasch model from psychometrics is a popular generative probabilistic model that relates latent student skill, latent item difficulty, and observed student-item responses within a principled, unified framework. According to scholarly educational theory, discovered topics can be regarded as appropriate measurement items if (1) students' participation across the discovered topics is well fit by the Rasch model, and if (2) the topics are interpretable to subject-matter experts as being educationally meaningful. Such Rasch-scaled topics, with associated difficulty levels, could be of potential benefit to curriculum refinement, student assessment and personalised feedback. The technical challenge that remains is to discover meaningful topics that simultaneously achieve good statistical fit with the Rasch model. To address this challenge, we combine the Rasch model with non-negative matrix factorisation based topic modelling, jointly fitting both models. We demonstrate the suitability of our approach with quantitative experiments on data from three Coursera MOOCs, and with qualitative survey results on topic interpretability on a Discrete Optimisation MOOC.
|
[
"['Jiazhen He' 'Benjamin I. P. Rubinstein' 'James Bailey' 'Rui Zhang'\n 'Sandra Milligan']"
] |
cs.CL cs.AI cs.LG
|
10.1016/j.cognition.2017.11.008
|
1607.08723
| null | null |
http://arxiv.org/abs/1607.08723v4
|
2018-02-14T15:56:51Z
|
2016-07-29T08:33:10Z
|
Cognitive Science in the era of Artificial Intelligence: A roadmap for
reverse-engineering the infant language-learner
|
During their first years of life, infants learn the language(s) of their
environment at an amazing speed despite large cross cultural variations in
amount and complexity of the available language input. Understanding this
simple fact still escapes current cognitive and linguistic theories. Recently,
spectacular progress in the engineering science, notably, machine learning and
wearable technology, offer the promise of revolutionizing the study of
cognitive development. Machine learning offers powerful learning algorithms
that can achieve human-like performance on many linguistic tasks. Wearable
sensors can capture vast amounts of data, which enable the reconstruction of
the sensory experience of infants in their natural environment. The project of
'reverse engineering' language development, i.e., of building an effective
system that mimics an infant's achievements therefore appears to be within reach.
Here, we analyze the conditions under which such a project can contribute to
our scientific understanding of early language development. We argue that
instead of defining a sub-problem or simplifying the data, computational models
should address the full complexity of the learning situation, and take as input
the raw sensory signals available to infants. This implies that (1) accessible
but privacy-preserving repositories of home data be set up and widely shared,
(2) models be evaluated at different linguistic levels through a benchmark of
psycholinguistic tests that can be passed by machines and humans alike, and (3)
linguistically and psychologically plausible learning architectures be scaled
up to real data using probabilistic/optimization principles from machine
learning. We discuss the feasibility of this approach and present preliminary
results.
|
[
"Emmanuel Dupoux",
"['Emmanuel Dupoux']"
] |
stat.ML cs.LG
| null |
1607.08810
| null | null | null | null | null |
Polynomial Networks and Factorization Machines: New Insights and
Efficient Training Algorithms
|
Polynomial networks and factorization machines are two recently-proposed
models that can efficiently use feature interactions in classification and
regression tasks. In this paper, we revisit both models from a unified
perspective. Based on this new view, we study the properties of both models and
propose new efficient training algorithms. Key to our approach is to cast
parameter learning as a low-rank symmetric tensor estimation problem, which we
solve by multi-convex optimization. We demonstrate our approach on regression
and recommender system tasks.
|
[
"Mathieu Blondel, Masakazu Ishihata, Akinori Fujino, Naonori Ueda"
] |
null | null |
1607.08810
| null | null |
http://arxiv.org/pdf/1607.08810v1
|
2016-07-29T13:54:51Z
|
2016-07-29T13:54:51Z
|
Polynomial Networks and Factorization Machines: New Insights and
Efficient Training Algorithms
|
Polynomial networks and factorization machines are two recently-proposed models that can efficiently use feature interactions in classification and regression tasks. In this paper, we revisit both models from a unified perspective. Based on this new view, we study the properties of both models and propose new efficient training algorithms. Key to our approach is to cast parameter learning as a low-rank symmetric tensor estimation problem, which we solve by multi-convex optimization. We demonstrate our approach on regression and recommender system tasks.
|
[
"['Mathieu Blondel' 'Masakazu Ishihata' 'Akinori Fujino' 'Naonori Ueda']"
] |
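For reference, the second-order factorization machine prediction that the unified view above builds on, with its pairwise term computed in O(nk) time via the standard identity 0.5 * sum_f [(sum_i V_{if} x_i)^2 - sum_i V_{if}^2 x_i^2]; the data below are arbitrary.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Factorization machine prediction:
        y(x) = w0 + <w, x> + sum_{i<j} <v_i, v_j> x_i x_j,
    where V is (n_features, k) and row v_i is feature i's factor vector."""
    linear = w0 + w @ x
    s = V.T @ x                                # (k,) per-factor sums
    pairwise = 0.5 * (np.sum(s ** 2) - np.sum((V ** 2).T @ (x ** 2)))
    return linear + pairwise

rng = np.random.default_rng(0)
n, k = 6, 3
print(fm_predict(rng.random(n), 0.1,
                 rng.standard_normal(n), rng.standard_normal((n, k))))
```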
cs.GT cs.LG math.OC
| null |
1607.08863
| null | null |
http://arxiv.org/pdf/1607.08863v1
|
2016-07-29T16:16:49Z
|
2016-07-29T16:16:49Z
|
Exponentially fast convergence to (strict) equilibrium via hedging
|
Motivated by applications to data networks where fast convergence is
essential, we analyze the problem of learning in generic N-person games that
admit a Nash equilibrium in pure strategies. Specifically, we consider a
scenario where players interact repeatedly and try to learn from past
experience by small adjustments based on local - and possibly imperfect -
payoff information. For concreteness, we focus on the so-called "hedge" variant
of the exponential weights algorithm where players select an action with
probability proportional to the exponential of the action's cumulative payoff
over time. When players have perfect information on their mixed payoffs, the
algorithm converges locally to a strict equilibrium and the rate of convergence
is exponentially fast - of the order of
$\mathcal{O}(\exp(-a\sum_{j=1}^{t}\gamma_{j}))$ where $a>0$ is a constant and
$\gamma_{j}$ is the algorithm's step-size. In the presence of uncertainty,
convergence requires a more conservative step-size policy, but with high
probability, the algorithm remains locally convergent and achieves an
exponential convergence rate.
|
[
"Johanne Cohen and Am\\'elie H\\'eliou and Panayotis Mertikopoulos",
"['Johanne Cohen' 'Amélie Héliou' 'Panayotis Mertikopoulos']"
] |
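A minimal sketch of the hedge rule analyzed above: play each action with probability proportional to the exponential of its cumulative payoff. The constant step-size and the noise model below are illustrative; the paper studies decreasing step-size schedules gamma_j and imperfect payoff observations.

```python
import numpy as np

def hedge_play(cum_payoffs, step):
    """Exponential-weights ("hedge") mixed strategy: probability of each
    action is proportional to exp(step * cumulative payoff)."""
    z = step * (cum_payoffs - cum_payoffs.max())   # stabilize the exponent
    p = np.exp(z)
    return p / p.sum()

# toy usage: a single player facing fixed mean payoffs plus noise
rng = np.random.default_rng(0)
means = np.array([0.2, 0.8, 0.5])
cum = np.zeros(3)
for t in range(2000):
    p = hedge_play(cum, step=0.1)
    payoffs = means + 0.1 * rng.standard_normal(3)  # imperfect payoff information
    cum += payoffs
print(p)   # mass concentrates on the best action (index 1)
```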
cs.NE cs.AI cs.LG
| null |
1607.08878
| null | null |
http://arxiv.org/pdf/1607.08878v1
|
2016-07-29T18:06:39Z
|
2016-07-29T18:06:39Z
|
Identifying and Harnessing the Building Blocks of Machine Learning
Pipelines for Sensible Initialization of a Data Science Automation Tool
|
As data science continues to grow in popularity, there will be an increasing
need to make data science tools more scalable, flexible, and accessible. In
particular, automated machine learning (AutoML) systems seek to automate the
process of designing and optimizing machine learning pipelines. In this
chapter, we present a genetic programming-based AutoML system called TPOT that
optimizes a series of feature preprocessors and machine learning models with
the goal of maximizing classification accuracy on a supervised classification
problem. Further, we analyze a large database of pipelines that were previously
used to solve various supervised classification problems and identify 100 short
series of machine learning operations that appear the most frequently, which we
call the building blocks of machine learning pipelines. We harness these
building blocks to initialize TPOT with promising solutions, and find that this
sensible initialization method significantly improves TPOT's performance on one
benchmark at no cost of significantly degrading performance on the others.
Thus, sensible initialization with machine learning pipeline building blocks
shows promise for GP-based AutoML systems, and should be further refined in
future work.
|
[
"Randal S. Olson and Jason H. Moore",
"['Randal S. Olson' 'Jason H. Moore']"
] |
stat.ML cs.LG
| null |
1608.00027
| null | null |
http://arxiv.org/pdf/1608.00027v1
|
2016-07-29T20:57:06Z
|
2016-07-29T20:57:06Z
|
gLOP: the global and Local Penalty for Capturing Predictive
Heterogeneity
|
When faced with a supervised learning problem, we hope to have rich enough
data to build a model that predicts future instances well. However, in
practice, problems can exhibit predictive heterogeneity: most instances might
be relatively easy to predict, while others might be predictive outliers for
which a model trained on the entire dataset does not perform well. Identifying
these can help focus future data collection. We present gLOP, the global and
Local Penalty, a framework for capturing predictive heterogeneity and
identifying predictive outliers. gLOP is based on penalized regression for
multitask learning, which improves learning by leveraging training signal
information from related tasks. We give two optimization algorithms for gLOP,
one space-efficient, and another giving the full regularization path. We also
characterize uniqueness in terms of the data and tuning parameters, and present
empirical results on synthetic data and on two health research problems.
|
[
"Rhiannon V. Rose, Daniel J. Lizotte",
"['Rhiannon V. Rose' 'Daniel J. Lizotte']"
] |
cs.LG cs.AI
|
10.1017/S1471068416000260
|
1608.00100
| null | null | null | null | null |
Online Learning of Event Definitions
|
Systems for symbolic event recognition infer occurrences of events in time
using a set of event definitions in the form of first-order rules. The Event
Calculus is a temporal logic that has been used as a basis in event recognition
applications, providing among others, direct connections to machine learning,
via Inductive Logic Programming (ILP). We present an ILP system for online
learning of Event Calculus theories. To allow for a single-pass learning
strategy, we use the Hoeffding bound for evaluating clauses on a subset of the
input stream. We employ a decoupling scheme of the Event Calculus axioms during
the learning process, that allows to learn each clause in isolation. Moreover,
we use abductive-inductive logic programming techniques to handle unobserved
target predicates. We evaluate our approach on an activity recognition
application and compare it to a number of batch learning techniques. We obtain
results of comparable predictive accuracy with significant speed-ups in
training time. We also outperform hand-crafted rules and match the performance
of a sound incremental learner that can only operate on noise-free datasets.
This paper is under consideration for acceptance in TPLP.
|
[
"Nikos Katzouris, Alexander Artikis, Georgios Paliouras"
] |
null | null |
1608.00100
| null | null |
http://arxiv.org/abs/1608.00100v1
|
2016-07-30T10:44:58Z
|
2016-07-30T10:44:58Z
|
Online Learning of Event Definitions
|
Systems for symbolic event recognition infer occurrences of events in time using a set of event definitions in the form of first-order rules. The Event Calculus is a temporal logic that has been used as a basis in event recognition applications, providing, among others, direct connections to machine learning via Inductive Logic Programming (ILP). We present an ILP system for online learning of Event Calculus theories. To allow for a single-pass learning strategy, we use the Hoeffding bound for evaluating clauses on a subset of the input stream. We employ a decoupling scheme of the Event Calculus axioms during the learning process, which allows each clause to be learned in isolation. Moreover, we use abductive-inductive logic programming techniques to handle unobserved target predicates. We evaluate our approach on an activity recognition application and compare it to a number of batch learning techniques. We obtain results of comparable predictive accuracy with significant speed-ups in training time. We also outperform hand-crafted rules and match the performance of a sound incremental learner that can only operate on noise-free datasets. This paper is under consideration for acceptance in TPLP.
|
[
"['Nikos Katzouris' 'Alexander Artikis' 'Georgios Paliouras']"
] |
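The single-pass clause evaluation above rests on the standard Hoeffding bound. A small sketch of such a test follows; the clause-scoring range, confidence level, and decision rule are assumptions for illustration, not the system's full scoring logic.

```python
import math

def hoeffding_bound(value_range, delta, n):
    """After n examples, the observed mean score of a clause is within
    epsilon of its true mean with probability at least 1 - delta, where
        epsilon = sqrt(R^2 * ln(1/delta) / (2n))
    and R is the range of the score."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def best_clause_is_reliable(score_best, score_second,
                            value_range=1.0, delta=1e-4, n=1000):
    # Commit to the best clause only once its observed advantage over the
    # runner-up exceeds the Hoeffding epsilon.
    return (score_best - score_second) > hoeffding_bound(value_range, delta, n)

print(best_clause_is_reliable(0.82, 0.74, n=500))
```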
cs.LG cs.CL cs.IR
| null |
1608.00104
| null | null |
http://arxiv.org/pdf/1608.00104v1
|
2016-07-30T11:53:04Z
|
2016-07-30T11:53:04Z
|
World Knowledge as Indirect Supervision for Document Clustering
|
One of the key obstacles in making learning protocols realistic in
applications is the need to supervise them, a costly process that often
requires hiring domain experts. We consider a framework that uses world
knowledge as indirect supervision. World knowledge is general-purpose
knowledge, not designed for any specific domain. The key challenges are then
how to adapt the world knowledge to domains and how to represent it for
learning. In this paper, we provide an example of using world knowledge for
domain-dependent document clustering. We provide three ways to specify the
world knowledge to domains by resolving the ambiguity of the entities and their
types, and represent the data with world knowledge as a heterogeneous
information network. Then we propose a clustering algorithm that can cluster
multiple types and incorporate the sub-type information as constraints. In the
experiments, we use two existing knowledge bases as our sources of world
knowledge. One is Freebase, which is collaboratively collected knowledge about
entities and their organizations. The other is YAGO2, a knowledge base
automatically extracted from Wikipedia and maps knowledge to the linguistic
knowledge base, WordNet. Experimental results on two text benchmark datasets
(20newsgroups and RCV1) show that incorporating world knowledge as indirect
supervision can significantly outperform the state-of-the-art clustering
algorithms as well as clustering algorithms enhanced with world knowledge
features.
|
[
"['Chenguang Wang' 'Yangqiu Song' 'Dan Roth' 'Ming Zhang' 'Jiawei Han']",
"Chenguang Wang, Yangqiu Song, Dan Roth, Ming Zhang, Jiawei Han"
] |
stat.ML cs.LG
|
10.1145/3097983.3098169
|
1608.00159
| null | null |
http://arxiv.org/abs/1608.00159v4
|
2017-06-24T23:30:59Z
|
2016-07-30T19:52:56Z
|
Learning Tree-Structured Detection Cascades for Heterogeneous Networks
of Embedded Devices
|
In this paper, we present a new approach to learning cascaded classifiers for
use in computing environments that involve networks of heterogeneous and
resource-constrained, low-power embedded compute and sensing nodes. We present
a generalization of the classical linear detection cascade to the case of
tree-structured cascades where different branches of the tree execute on
different physical compute nodes in the network. Different nodes have access to
different features, as well as access to potentially different computation and
energy resources. We concentrate on the problem of jointly learning the
parameters for all of the classifiers in the cascade given a fixed cascade
architecture and a known set of costs required to carry out the computation at
each node. To accomplish the objective of joint learning of all detectors, we
propose a novel approach to combining classifier outputs during training that
better matches the hard cascade setting in which the learned system will be
deployed. This work is motivated by research in the area of mobile health where
energy efficient real time detectors integrating information from multiple
wireless on-body sensors and a smart phone are needed for real-time monitoring
and delivering just-in-time adaptive interventions. We apply our framework to
two activity recognition datasets as well as the problem of cigarette smoking
detection from a combination of wrist-worn actigraphy data and respiration
chest band data.
|
[
"['Hamid Dadkhahi' 'Benjamin M. Marlin']",
"Hamid Dadkhahi and Benjamin M. Marlin"
] |
cs.CV cs.LG
| null |
1608.00182
| null | null |
http://arxiv.org/pdf/1608.00182v1
|
2016-07-31T03:56:30Z
|
2016-07-31T03:56:30Z
|
Deep FisherNet for Object Classification
|
Despite the great success of convolutional neural networks (CNN) for the
image classification task on datasets like Cifar and ImageNet, CNN's
representation power is still somewhat limited in dealing with object images
that have large variation in size and clutter, where Fisher Vector (FV) has
shown to be an effective encoding strategy. FV encodes an image by aggregating
local descriptors with a universal generative Gaussian Mixture Model (GMM). FV
however has limited learning capability and its parameters are mostly fixed
after constructing the codebook. To combine together the best of the two
worlds, we propose in this paper a neural network structure with an FV layer
as part of an end-to-end trainable, differentiable system; we name our
network, which is learnable using backpropagation, FisherNet. Our proposed
FisherNet combines convolutional neural network training and Fisher Vector
encoding in a single end-to-end structure. We observe a clear advantage of
FisherNet over plain CNN and standard FV in terms of both classification
accuracy and computational efficiency on the challenging PASCAL VOC object
classification task.
|
[
"['Peng Tang' 'Xinggang Wang' 'Baoguang Shi' 'Xiang Bai' 'Wenyu Liu'\n 'Zhuowen Tu']",
"Peng Tang, Xinggang Wang, Baoguang Shi, Xiang Bai, Wenyu Liu, Zhuowen\n Tu"
] |
cs.LG cs.CV cs.NE stat.ML
| null |
1608.00218
| null | null |
http://arxiv.org/pdf/1608.00218v1
|
2016-07-31T14:09:17Z
|
2016-07-31T14:09:17Z
|
Hyperparameter Transfer Learning through Surrogate Alignment for
Efficient Deep Neural Network Training
|
Recently, several optimization methods have been successfully applied to the
hyperparameter optimization of deep neural networks (DNNs). The methods work by
modeling the joint distribution of hyperparameter values and corresponding
error. Those methods become less practical when applied to modern DNNs whose
training may take a few days and thus one cannot collect sufficient
observations to accurately model the distribution. To address this challenging
issue, we propose a method that learns to transfer optimal hyperparameter
values for a small source dataset to hyperparameter values with comparable
performance on a dataset of interest. As opposed to existing transfer learning
methods, our proposed method does not use hand-designed features. Instead, it
uses surrogates to model the hyperparameter-error distributions of the two
datasets and trains a neural network to learn the transfer function. Extensive
experiments on three CV benchmark datasets clearly demonstrate the efficiency
of our method.
|
[
"Ilija Ilievski and Jiashi Feng",
"['Ilija Ilievski' 'Jiashi Feng']"
] |
cs.LG cs.CV
| null |
1608.00220
| null | null | null | null | null |
Learning Robust Features using Deep Learning for Automatic Seizure
Detection
|
We present and evaluate the capacity of a deep neural network to learn robust
features from EEG to automatically detect seizures. This is a challenging
problem because seizure manifestations on EEG are extremely variable both
inter- and intra-patient. By simultaneously capturing spectral, temporal and
spatial information our recurrent convolutional neural network learns a general
spatially invariant representation of a seizure. The proposed approach
significantly exceeds previous results obtained with cross-patient
classifiers, both in terms of sensitivity and false positive rate.
Furthermore, our model proves to be robust to missing channels and variable
electrode montages.
|
[
"Pierre Thodoroff, Joelle Pineau, Andrew Lim"
] |
null | null |
1608.00220
| null | null |
http://arxiv.org/pdf/1608.00220v1
|
2016-07-31T14:28:15Z
|
2016-07-31T14:28:15Z
|
Learning Robust Features using Deep Learning for Automatic Seizure
Detection
|
We present and evaluate the capacity of a deep neural network to learn robust features from EEG to automatically detect seizures. This is a challenging problem because seizure manifestations on EEG are extremely variable both inter- and intra-patient. By simultaneously capturing spectral, temporal and spatial information our recurrent convolutional neural network learns a general spatially invariant representation of a seizure. The proposed approach significantly exceeds previous results obtained with cross-patient classifiers, both in terms of sensitivity and false positive rate. Furthermore, our model proves to be robust to missing channels and variable electrode montages.
|
[
"['Pierre Thodoroff' 'Joelle Pineau' 'Andrew Lim']"
] |
cs.LG
| null |
1608.00242
| null | null |
http://arxiv.org/pdf/1608.00242v2
|
2016-10-08T15:56:32Z
|
2016-07-31T16:58:03Z
|
Input-Output Non-Linear Dynamical Systems applied to Physiological
Condition Monitoring
|
We present a non-linear dynamical system for modelling the effect of drug
infusions on the vital signs of patients admitted in Intensive Care Units
(ICUs). More specifically we are interested in modelling the effect of a widely
used anaesthetic drug (Propofol) on a patient's monitored depth of anaesthesia
and haemodynamics. We compare our approach with one from the
Pharmacokinetics/Pharmacodynamics (PK/PD) literature and show that we can
provide significant improvements in performance without requiring the
incorporation of expert physiological knowledge in our system.
|
[
"['Konstantinos Georgatzis' 'Christopher K. I. Williams'\n 'Christopher Hawthorne']",
"Konstantinos Georgatzis, Christopher K. I. Williams, Christopher\n Hawthorne"
] |
cs.LG stat.ML
|
10.1109/ICPR.2016.7899671
|
1608.00250
| null | null | null | null | null |
On Regularization Parameter Estimation under Covariate Shift
|
This paper identifies a problem with the usual procedure for
L2-regularization parameter estimation in a domain adaptation setting. In such
a setting, there are differences between the distributions generating the
training data (source domain) and the test data (target domain). The usual
cross-validation procedure requires validation data, which cannot be obtained
from the unlabeled target data. The problem is that if one decides to use
source validation data, the regularization parameter is underestimated. One
possible solution is to scale the source validation data through importance
weighting, but we show that this correction is not sufficient. We conclude the
paper with an empirical analysis of the effect of several importance weight
estimators on the estimation of the regularization parameter.
|
[
"Wouter M. Kouw and Marco Loog"
] |
null | null |
1608.00250
| null | null |
http://arxiv.org/abs/1608.00250v1
|
2016-07-31T19:02:39Z
|
2016-07-31T19:02:39Z
|
On Regularization Parameter Estimation under Covariate Shift
|
This paper identifies a problem with the usual procedure for L2-regularization parameter estimation in a domain adaptation setting. In such a setting, there are differences between the distributions generating the training data (source domain) and the test data (target domain). The usual cross-validation procedure requires validation data, which cannot be obtained from the unlabeled target data. The problem is that if one decides to use source validation data, the regularization parameter is underestimated. One possible solution is to scale the source validation data through importance weighting, but we show that this correction is not sufficient. We conclude the paper with an empirical analysis of the effect of several importance weight estimators on the estimation of the regularization parameter.
|
[
"['Wouter M. Kouw' 'Marco Loog']"
] |
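A sketch of the importance-weighted validation procedure discussed above, with precomputed weights standing in for a density-ratio estimator w(x) = p_target(x) / p_source(x); the paper's finding is that even this reweighting can leave the regularization parameter underestimated. Model, data, and weights below are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def importance_weighted_cv_error(X_src, y_src, weights, alpha, n_splits=5):
    """Cross-validate a ridge regressor on source data, but average the
    validation errors with importance weights so they better reflect the
    target domain."""
    errs = []
    for tr, va in KFold(n_splits, shuffle=True, random_state=0).split(X_src):
        model = Ridge(alpha=alpha).fit(X_src[tr], y_src[tr])
        resid = (model.predict(X_src[va]) - y_src[va]) ** 2
        errs.append(np.average(resid, weights=weights[va]))
    return float(np.mean(errs))

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(200)
w = np.exp(X[:, 0])                      # stand-in importance weights
for a in [0.01, 1.0, 100.0]:
    print(a, importance_weighted_cv_error(X, y, w, a))
```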
cs.CL cs.LG
| null |
1608.00318
| null | null |
http://arxiv.org/pdf/1608.00318v2
|
2017-03-02T15:34:01Z
|
2016-08-01T04:42:49Z
|
A Neural Knowledge Language Model
|
Current language models have a significant limitation in the ability to
encode and decode factual knowledge. This is mainly because they acquire such
knowledge from statistical co-occurrences although most of the knowledge words
are rarely observed. In this paper, we propose a Neural Knowledge Language
Model (NKLM) which combines symbolic knowledge provided by the knowledge graph
with the RNN language model. By predicting whether the word to generate has an
underlying fact or not, the model can generate such knowledge-related words by
copying from the description of the predicted fact. In experiments, we show
that the NKLM significantly improves the performance while generating a much
smaller number of unknown words.
|
[
"['Sungjin Ahn' 'Heeyoul Choi' 'Tanel Pärnamaa' 'Yoshua Bengio']",
"Sungjin Ahn, Heeyoul Choi, Tanel P\\\"arnamaa, Yoshua Bengio"
] |
cs.RO cs.AI cs.LG
| null |
1608.00359
| null | null |
http://arxiv.org/pdf/1608.00359v1
|
2016-08-01T09:09:04Z
|
2016-08-01T09:09:04Z
|
Discovering Latent States for Model Learning: Applying Sensorimotor
Contingencies Theory and Predictive Processing to Model Context
|
Autonomous robots need to be able to adapt to unforeseen situations and to
acquire new skills through trial and error. Reinforcement learning in principle
offers a suitable methodological framework for this kind of autonomous
learning. However, current computational reinforcement learning agents mostly
learn each individual skill entirely from scratch. How can we enable artificial
agents, such as robots, to acquire some form of generic knowledge, which they
could leverage for the learning of new skills? This paper argues that, like the
brain, the cognitive system of artificial agents has to develop a world model
to support adaptive behavior and learning. Inspiration is taken from two recent
developments in the cognitive science literature: predictive processing
theories of cognition, and the sensorimotor contingencies theory of perception.
Based on these, a hypothesis is formulated about what the content of
information might be that is encoded in an internal world model, and how an
agent could autonomously acquire it. A computational model is described to
formalize this hypothesis, and is evaluated in a series of simulation
experiments.
|
[
"['Nikolas J. Hemion']",
"Nikolas J. Hemion"
] |
cs.CL cs.LG cs.NE
| null |
1608.00466
| null | null |
http://arxiv.org/pdf/1608.00466v2
|
2016-10-10T03:57:26Z
|
2016-08-01T15:14:08Z
|
Learning Semantically Coherent and Reusable Kernels in Convolution
Neural Nets for Sentence Classification
|
The state-of-the-art CNN models give good performance on sentence
classification tasks. The purpose of this work is to empirically study
desirable properties such as semantic coherence, attention mechanism and
reusability of CNNs in these tasks. Semantically coherent kernels are
preferable as they are a lot more interpretable for explaining the decision of
the learned CNN model. We observe that the learned kernels do not have semantic
coherence. Motivated by this observation, we propose to learn kernels with
semantic coherence using a clustering scheme combined with Word2Vec representations and domain knowledge such as SentiWordNet. We suggest a technique to visualize the attention mechanism of CNNs for decision explanation purposes. The reusability property enables kernels learned on one problem to be used in
another problem. This helps in efficient learning as only a few additional
domain specific filters may have to be learned. We demonstrate the efficacy of
our core ideas of learning semantically coherent kernels and leveraging
reusable kernels for efficient learning on several benchmark datasets.
Experimental results show the usefulness of our approach by achieving
performance close to the state-of-the-art methods but with semantic and
reusable properties.
|
[
"Madhusudan Lakshmana, Sundararajan Sellamanickam, Shirish Shevade,\n Keerthi Selvaraj",
"['Madhusudan Lakshmana' 'Sundararajan Sellamanickam' 'Shirish Shevade'\n 'Keerthi Selvaraj']"
] |
cs.LG cs.CR cs.CV cs.NE
| null |
1608.00530
| null | null | null | null | null |
Early Methods for Detecting Adversarial Images
|
Many machine learning classifiers are vulnerable to adversarial
perturbations. An adversarial perturbation modifies an input to change a
classifier's prediction without causing the input to seem substantially
different to human perception. We deploy three methods to detect adversarial
images. Adversaries trying to bypass our detectors must make the adversarial
image less pathological or they will fail trying. Our best detection method
reveals that adversarial images place abnormal emphasis on the lower-ranked
principal components from PCA. Other detectors and a colorful saliency map are
in an appendix.
|
[
"Dan Hendrycks, Kevin Gimpel"
] |
null | null |
1608.00530
| null | null |
http://arxiv.org/pdf/1608.00530v2
|
2017-03-23T18:03:47Z
|
2016-08-01T19:13:58Z
|
Early Methods for Detecting Adversarial Images
|
Many machine learning classifiers are vulnerable to adversarial perturbations. An adversarial perturbation modifies an input to change a classifier's prediction without causing the input to seem substantially different to human perception. We deploy three methods to detect adversarial images. Adversaries trying to bypass our detectors must make the adversarial image less pathological or they will fail trying. Our best detection method reveals that adversarial images place abnormal emphasis on the lower-ranked principal components from PCA. Other detectors and a colorful saliency map are in an appendix.
|
[
"['Dan Hendrycks' 'Kevin Gimpel']"
] |
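The detection signal named in the abstract above, abnormal emphasis on lower-ranked principal components, can be sketched directly: fit PCA on clean images, then score an input by how much of its coefficient mass falls in the trailing components. The toy data, the choice of the last 32 components, and the quantile threshold below are assumptions for illustration, not the paper's protocol.

```python
# Sketch: flag inputs whose energy in low-ranked PCA components is unusually
# large relative to clean training images.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
clean = rng.normal(size=(500, 64))            # stand-in for flattened images
pca = PCA(n_components=64).fit(clean)

def tail_energy(x, k=32):
    """Fraction of squared coefficient mass in the last k principal components."""
    z = pca.transform(x.reshape(1, -1))[0]
    return np.sum(z[-k:] ** 2) / np.sum(z ** 2)

scores = np.array([tail_energy(img) for img in clean])
threshold = np.quantile(scores, 0.99)         # assumed operating point

suspect = clean[0] + rng.normal(scale=0.5, size=64)   # crude stand-in perturbation
print(tail_energy(suspect) > threshold)               # True -> flagged
```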
stat.ME cs.DS cs.IT cs.LG math.IT
| null |
1608.00550
| null | null | null | null | null |
Theory of the GMM Kernel
|
We develop some theoretical results for a robust similarity measure named
"generalized min-max" (GMM). This similarity has direct applications in machine
learning as a positive definite kernel and can be efficiently computed via
probabilistic hashing. Owing to the discrete nature, the hashed values can also
be used for efficient near neighbor search. We prove the theoretical limit of
GMM and the consistency result, assuming that the data follow an elliptical
distribution, which is a very general family of distributions and includes the
multivariate $t$-distribution as a special case. The consistency result holds
as long as the data have bounded first moment (an assumption which essentially
holds for datasets commonly encountered in practice). Furthermore, we establish
the asymptotic normality of GMM. Compared to the "cosine" similarity which is
routinely adopted in current practice in statistics and machine learning, the
consistency of GMM requires much weaker conditions. Interestingly, when the
data follow the $t$-distribution with $\nu$ degrees of freedom, GMM typically
provides a better measure of similarity than "cosine" roughly when $\nu<8$
(which is already very close to normal). These theoretical results will help
explain the recent success of GMM in learning tasks.
|
[
"Ping Li and Cun-Hui Zhang"
] |
null | null |
1608.00550
| null | null |
http://arxiv.org/pdf/1608.00550v1
|
2016-08-01T19:45:57Z
|
2016-08-01T19:45:57Z
|
Theory of the GMM Kernel
|
We develop some theoretical results for a robust similarity measure named "generalized min-max" (GMM). This similarity has direct applications in machine learning as a positive definite kernel and can be efficiently computed via probabilistic hashing. Owing to the discrete nature, the hashed values can also be used for efficient near neighbor search. We prove the theoretical limit of GMM and the consistency result, assuming that the data follow an elliptical distribution, which is a very general family of distributions and includes the multivariate $t$-distribution as a special case. The consistency result holds as long as the data have bounded first moment (an assumption which essentially holds for datasets commonly encountered in practice). Furthermore, we establish the asymptotic normality of GMM. Compared to the "cosine" similarity which is routinely adopted in current practice in statistics and machine learning, the consistency of GMM requires much weaker conditions. Interestingly, when the data follow the $t$-distribution with $\nu$ degrees of freedom, GMM typically provides a better measure of similarity than "cosine" roughly when $\nu<8$ (which is already very close to normal). These theoretical results will help explain the recent success of GMM in learning tasks.
|
[
"['Ping Li' 'Cun-Hui Zhang']"
] |
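For reference, the generalized min-max similarity itself is cheap to compute exactly (the probabilistic hashing is only needed at scale): split each coordinate into its positive and negative parts, then take the ratio of coordinate-wise minima to maxima. A minimal sketch under that standard definition, with $t$-distributed toy data, follows; the comparison against cosine similarity mirrors the abstract's discussion.

```python
# Sketch of the generalized min-max (GMM) similarity: split each coordinate
# into positive/negative parts, then take sum(min) / sum(max) coordinate-wise.
import numpy as np

def gmm_similarity(x, y):
    xt = np.concatenate([np.maximum(x, 0), np.maximum(-x, 0)])  # nonneg expansion
    yt = np.concatenate([np.maximum(y, 0), np.maximum(-y, 0)])
    return np.minimum(xt, yt).sum() / np.maximum(xt, yt).sum()

rng = np.random.default_rng(2)
x, y = rng.standard_t(df=5, size=100), rng.standard_t(df=5, size=100)
print(gmm_similarity(x, y))                                   # value in [0, 1]
print(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))        # cosine, for contrast
```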
cs.CV cs.LG cs.NE
| null |
1608.00611
| null | null |
http://arxiv.org/pdf/1608.00611v1
|
2016-08-01T20:51:29Z
|
2016-08-01T20:51:29Z
|
Attention Tree: Learning Hierarchies of Visual Features for Large-Scale
Image Recognition
|
One of the key challenges in machine learning is to design a computationally
efficient multi-class classifier while maintaining the output accuracy and
performance. In this paper, we present a tree-based classifier: Attention Tree
(ATree) for large-scale image classification that uses recursive Adaboost
training to construct a visual attention hierarchy. The proposed attention
model is inspired by the biological 'selective tuning mechanism for cortical visual processing'. We exploit the inherent feature similarity across images in datasets to identify the input variability and use a recursive optimization procedure to determine data partitioning at each node, thereby learning the
attention hierarchy. A set of binary classifiers is organized on top of the
learnt hierarchy to minimize the overall test-time complexity. The attention
model maximizes the margins for the binary classifiers for optimal decision
boundary modelling, leading to better performance at minimal complexity. The
proposed framework has been evaluated on both Caltech-256 and SUN datasets and
achieves accuracy improvement over state-of-the-art tree-based methods at
significantly lower computational cost.
|
[
"Priyadarshini Panda, and Kaushik Roy",
"['Priyadarshini Panda' 'Kaushik Roy']"
] |
cs.LG stat.ML
| null |
1608.00619
| null | null |
http://arxiv.org/pdf/1608.00619v2
|
2016-10-11T19:55:58Z
|
2016-08-01T21:13:12Z
|
Recursion-Free Online Multiple Incremental/Decremental Analysis Based on
Ridge Support Vector Learning
|
This study presents a rapid multiple incremental and decremental mechanism
based on Weight-Error Curves (WECs) for support-vector analysis. Recursion-free
computation is proposed for predicting the Lagrangian multipliers of new
samples. This study examines Ridge Support Vector Models, subsequently devising
a recursion-free function derived from WECs. With the proposed function, all
the new Lagrangian multipliers can be computed at once without using any
gradual step sizes. Moreover, such a function relaxes a constraint from previous work, which required the increments of the new Lagrangian multipliers to be the same, thereby easily satisfying the KKT conditions. The
proposed mechanism no longer requires typical bookkeeping strategies, which
compute the step size by checking all the training samples in each incremental
round.
|
[
"Bo-Wei Chen",
"['Bo-Wei Chen']"
] |
cs.LG stat.ML
|
10.1016/j.future.2017.08.053
|
1608.00621
| null | null |
http://arxiv.org/abs/1608.00621v3
|
2017-11-09T03:14:27Z
|
2016-08-01T21:21:07Z
|
Efficient Multiple Incremental Computation for Kernel Ridge Regression
with Bayesian Uncertainty Modeling
|
This study presents an efficient incremental/decremental approach for big
streams based on Kernel Ridge Regression (KRR), a frequently used data analysis
in cloud centers. To avoid reanalyzing the whole dataset whenever sensors
receive new training data, typical incremental KRR used a single-instance
mechanism for updating an existing system. However, this inevitably increased
redundant computational time, not to mention applicability to big streams. To
this end, the proposed mechanism supports incremental/decremental processing
for both single and multiple samples (i.e., batch processing). Large-scale data can be divided into batches and processed by a machine without sacrificing accuracy. Moreover, incremental/decremental analyses in empirical and intrinsic space are also proposed in this study to handle different types of data either with a large number of samples or high feature dimensions, whereas typical methods focused only on one type. At the end of this study, we extend the proposed mechanism to statistical Kernelized Bayesian Regression, so that
uncertainty modeling with incremental/decremental computation becomes
applicable. Experimental results showed that computational time was
significantly reduced, better than the original nonincremental design and the
typical single incremental method. Furthermore, the accuracy of the proposed
method remained the same as the baselines. This implied that the system
enhanced efficiency without sacrificing the accuracy. These findings proved
that the proposed method was appropriate for variable streaming data analysis,
thereby demonstrating the effectiveness of the proposed method.
|
[
"['Bo-Wei Chen' 'Nik Nailah Binti Abdullah' 'Sangoh Park']",
"Bo-Wei Chen, Nik Nailah Binti Abdullah, and Sangoh Park"
] |
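The batch-incremental idea in the abstract above can be illustrated with a standard linear-algebra device: when a block of new samples arrives, update the inverse of the regularized kernel matrix with the block-inverse (Schur complement) identity instead of refitting from scratch. The sketch below shows that general mechanism, not the paper's exact derivation; the RBF kernel, sizes, and regularizer are assumptions.

```python
# Sketch of batch-incremental kernel ridge regression: extend (K + lam*I)^(-1)
# by a block of new samples via the Schur-complement block-inverse identity.
import numpy as np

def rbf(A, B, gamma=1.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def add_batch(K_inv, X_old, X_new, lam=0.1):
    B = rbf(X_old, X_new)                          # cross-kernel block
    C = rbf(X_new, X_new) + lam * np.eye(len(X_new))
    S_inv = np.linalg.inv(C - B.T @ K_inv @ B)     # Schur complement inverse
    TL = K_inv + K_inv @ B @ S_inv @ B.T @ K_inv   # updated top-left block
    TR = -K_inv @ B @ S_inv
    return np.block([[TL, TR], [TR.T, S_inv]])

rng = np.random.default_rng(3)
X1, X2 = rng.normal(size=(50, 4)), rng.normal(size=(20, 4))
lam = 0.1
K_inv = np.linalg.inv(rbf(X1, X1) + lam * np.eye(50))
K_inv = add_batch(K_inv, X1, X2, lam)              # incremental batch update
X_all = np.vstack([X1, X2])
full = np.linalg.inv(rbf(X_all, X_all) + lam * np.eye(70))
print(np.allclose(K_inv, full))                    # True: matches full refit
```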
cs.RO cs.AI cs.LG
| null |
1608.00627
| null | null |
http://arxiv.org/pdf/1608.00627v1
|
2016-08-01T21:53:04Z
|
2016-08-01T21:53:04Z
|
Learning Transferable Policies for Monocular Reactive MAV Control
|
The ability to transfer knowledge gained in previous tasks into new contexts
is one of the most important mechanisms of human learning. Despite this,
adapting autonomous behavior to be reused in partially similar settings is
still an open problem in current robotics research. In this paper, we take a
small step in this direction and propose a generic framework for learning
transferable motion policies. Our goal is to solve a learning problem in a
target domain by utilizing the training data in a different but related source
domain. We present this in the context of an autonomous MAV flight using
monocular reactive control, and demonstrate the efficacy of our proposed
approach through extensive real-world flight experiments in outdoor cluttered
environments.
|
[
"Shreyansh Daftry, J. Andrew Bagnell and Martial Hebert",
"['Shreyansh Daftry' 'J. Andrew Bagnell' 'Martial Hebert']"
] |
cs.LG
| null |
1608.00647
| null | null |
http://arxiv.org/pdf/1608.00647v3
|
2016-09-20T21:55:00Z
|
2016-08-02T00:09:22Z
|
Multi-task Prediction of Disease Onsets from Longitudinal Lab Tests
|
Disparate areas of machine learning have benefited from models that can take
raw data with little preprocessing as input and learn rich representations of
that raw data in order to perform well on a given prediction task. We evaluate
this approach in healthcare by using longitudinal measurements of lab tests,
one of the more raw signals of a patient's health state widely available in
clinical data, to predict disease onsets. In particular, we train a Long
Short-Term Memory (LSTM) recurrent neural network and two novel convolutional
neural networks for multi-task prediction of disease onset for 133 conditions
based on 18 common lab tests measured over time in a cohort of 298K patients
derived from 8 years of administrative claims data. We compare the neural
networks to a logistic regression with several hand-engineered, clinically
relevant features. We find that the representation-based learning approaches
significantly outperform this baseline. We believe that our work suggests a new
avenue for patient risk stratification based solely on lab results.
|
[
"Narges Razavian, Jake Marcus, David Sontag",
"['Narges Razavian' 'Jake Marcus' 'David Sontag']"
] |
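The multi-task setup described above (one recurrent encoder over 18 lab tests, 133 binary onset targets) maps naturally onto a shared LSTM with a multi-label sigmoid head. The PyTorch sketch below shows that structure only; the hidden size, sequence length, and training details are assumptions, not the paper's configuration.

```python
# Minimal sketch (sizes assumed): an LSTM over 18 lab-test values per time
# step with 133 sigmoid outputs, trained under a multi-label BCE loss.
import torch
import torch.nn as nn

class OnsetLSTM(nn.Module):
    def __init__(self, n_labs=18, n_conditions=133, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_labs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_conditions)  # one logit per condition

    def forward(self, x):            # x: (batch, time, n_labs)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])      # logits: (batch, n_conditions)

model = OnsetLSTM()
x = torch.randn(32, 24, 18)                     # 32 patients, 24 time steps
y = torch.randint(0, 2, (32, 133)).float()      # multi-label onset targets
loss = nn.BCEWithLogitsLoss()(model(x), y)
loss.backward()
```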
cs.LG cs.AI
| null |
1608.00667
| null | null |
http://arxiv.org/pdf/1608.00667v1
|
2016-08-02T01:30:25Z
|
2016-08-02T01:30:25Z
|
Can Active Learning Experience Be Transferred?
|
Active learning is an important machine learning problem in reducing the
human labeling effort. Current active learning strategies are designed from
human knowledge, and are applied on each dataset in an immutable manner. In
other words, experience about the usefulness of strategies cannot be updated
and transferred to improve active learning on other datasets. This paper
initiates a pioneering study on whether active learning experience can be
transferred. We first propose a novel active learning model that linearly
aggregates existing strategies. The linear weights can then be used to
represent the active learning experience. We equip the model with the popular
linear upper-confidence-bound (LinUCB) algorithm for contextual bandit to
update the weights. Finally, we extend our model to transfer the experience
across datasets with the technique of biased regularization. Empirical studies
demonstrate that the learned experience not only is competitive with existing
strategies on most single datasets, but also can be transferred across datasets
to improve the performance on future learning tasks.
|
[
"['Hong-Min Chu' 'Hsuan-Tien Lin']",
"Hong-Min Chu, Hsuan-Tien Lin"
] |
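The LinUCB machinery named in the abstract above is standard, so the weight-update loop can be sketched: each querying strategy is an arm, a context vector summarizes the current dataset state, and the reward after a query updates that arm's linear model. The context features and reward signal below are placeholders; only the LinUCB update itself is the standard algorithm.

```python
# Sketch of LinUCB for choosing among active-learning strategies (arms).
import numpy as np

class LinUCB:
    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]   # per-arm Gram matrix
        self.b = [np.zeros(dim) for _ in range(n_arms)]

    def choose(self, ctx):
        scores = []
        for A, b in zip(self.A, self.b):
            theta = np.linalg.solve(A, b)               # ridge estimate
            ucb = theta @ ctx + self.alpha * np.sqrt(ctx @ np.linalg.solve(A, ctx))
            scores.append(ucb)
        return int(np.argmax(scores))

    def update(self, arm, ctx, reward):
        self.A[arm] += np.outer(ctx, ctx)
        self.b[arm] += reward * ctx

bandit = LinUCB(n_arms=4, dim=8)                 # 4 candidate AL strategies
rng = np.random.default_rng(4)
for _ in range(100):
    ctx = rng.normal(size=8)                     # placeholder dataset features
    arm = bandit.choose(ctx)
    bandit.update(arm, ctx, reward=rng.random()) # placeholder accuracy gain
```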
stat.ML cs.LG
| null |
1608.00686
| null | null |
http://arxiv.org/pdf/1608.00686v3
|
2016-09-22T00:37:40Z
|
2016-08-02T03:09:59Z
|
Clinical Tagging with Joint Probabilistic Models
|
We describe a method for parameter estimation in bipartite probabilistic
graphical models for joint prediction of clinical conditions from the
electronic medical record. The method does not rely on the availability of
gold-standard labels, but rather uses noisy labels, called anchors, for
learning. We provide a likelihood-based objective and a moments-based
initialization that are effective at learning the model parameters. The learned
model is evaluated in a task of assigning a heldout clinical condition to
patients based on retrospective analysis of the records, and outperforms
baselines which do not account for the noisiness in the labels or do not model
the conditions jointly.
|
[
"['Yoni Halpern' 'Steven Horng' 'David Sontag']",
"Yoni Halpern and Steven Horng and David Sontag"
] |
stat.ML cs.LG
| null |
1608.00704
| null | null |
http://arxiv.org/pdf/1608.00704v3
|
2016-09-20T13:01:04Z
|
2016-08-02T06:03:53Z
|
Identifiable Phenotyping using Constrained Non-Negative Matrix
Factorization
|
This work proposes a new algorithm for automated and simultaneous phenotyping
of multiple co-occurring medical conditions, also referred as comorbidities,
using clinical notes from the electronic health records (EHRs). A basic latent
factor estimation technique of non-negative matrix factorization (NMF) is
augmented with domain specific constraints to obtain sparse latent factors that
are anchored to a fixed set of chronic conditions. The proposed anchoring
mechanism ensures a one-to-one identifiable and interpretable mapping between
the latent factors and the target comorbidities. Qualitative assessment of the
empirical results by clinical experts suggests that the proposed model learns
clinically interpretable phenotypes while being predictive of 30-day mortality.
The proposed method can be readily adapted to any non-negative EHR data across
various healthcare institutions.
|
[
"['Shalmali Joshi' 'Suriya Gunasekar' 'David Sontag' 'Joydeep Ghosh']",
"Shalmali Joshi, Suriya Gunasekar, David Sontag, Joydeep Ghosh"
] |
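The anchoring mechanism above can be sketched with plain multiplicative-update NMF: constrain the term-factor matrix so each anchor term loads only on its assigned latent factor, which is what keeps the factors identifiable. The toy document-term matrix and anchor sets below are illustrative, not the paper's clinical data or exact constraint formulation.

```python
# Sketch of anchor-constrained NMF: factorize a nonnegative doc-term matrix
# X ~ W @ H while forcing each anchor term to load on one factor only.
import numpy as np

rng = np.random.default_rng(5)
X = rng.random((100, 30))                         # docs x terms (nonnegative)
k = 3
anchors = {0: [0, 1], 1: [10, 11], 2: [20, 21]}   # factor -> anchor term ids

mask = np.ones((k, 30))
for factor, terms in anchors.items():             # zero anchors elsewhere
    for t in terms:
        mask[:, t] = 0.0
        mask[factor, t] = 1.0

W, H = rng.random((100, k)), rng.random((k, 30)) * mask
eps = 1e-9
for _ in range(200):                  # Lee-Seung multiplicative updates
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    H *= mask                         # re-impose the anchoring constraint
    W *= (X @ H.T) / (W @ H @ H.T + eps)

print(np.linalg.norm(X - W @ H))      # reconstruction error
```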
cs.LG
| null |
1608.00712
| null | null |
http://arxiv.org/pdf/1608.00712v1
|
2016-08-02T06:55:44Z
|
2016-08-02T06:55:44Z
|
Size-Consistent Statistics for Anomaly Detection in Dynamic Networks
|
An important task in network analysis is the detection of anomalous events in
a network time series. These events could merely be times of interest in the
network timeline or they could be examples of malicious activity or network
malfunction. Hypothesis testing using network statistics to summarize the
behavior of the network provides a robust framework for the anomaly detection
decision process. Unfortunately, choosing network statistics that are dependent
on confounding factors like the total number of nodes or edges can lead to
incorrect conclusions (e.g., false positives and false negatives). In this
dissertation we describe the challenges that face anomaly detection in dynamic
network streams regarding confounding factors. We also provide two solutions to
avoiding error due to confounding factors: the first is a randomization testing
method that controls for confounding factors, and the second is a set of
size-consistent network statistics which avoid confounding due to the most
common factors, edge count and node count.
|
[
"Timothy La Fond, Jennifer Neville, Brian Gallagher",
"['Timothy La Fond' 'Jennifer Neville' 'Brian Gallagher']"
] |
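The first of the two solutions above, randomization testing that controls for confounding factors, can be sketched for a single snapshot: hold the confounds (node and edge counts) fixed, rewire uniformly at random, and compare the observed statistic against the resulting null distribution. The triangle-count statistic and toy graph below are placeholder choices, not the dissertation's setup.

```python
# Sketch of a size-controlled randomization test for one network snapshot.
import numpy as np

rng = np.random.default_rng(6)

def random_graph(n, m, rng):
    """Uniform simple graph with n nodes and m edges, as an adjacency matrix."""
    A = np.zeros((n, n))
    while A.sum() < 2 * m:
        i, j = rng.integers(0, n, 2)
        if i != j:
            A[i, j] = A[j, i] = 1.0
    return A

def triangles(A):
    return np.trace(A @ A @ A) / 6.0      # stand-in network statistic

observed = random_graph(30, 60, rng)
observed[:5, :5] = 1 - np.eye(5)          # inject a dense clique as an "event"
stat = triangles(observed)

m = int(observed.sum() // 2)              # condition on the observed edge count
null = [triangles(random_graph(30, m, rng)) for _ in range(200)]
p_value = (1 + sum(s >= stat for s in null)) / (1 + len(null))
print(stat, p_value)                      # small p -> anomalous snapshot
```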
cs.RO cs.AI cs.LG
| null |
1608.00737
| null | null |
http://arxiv.org/pdf/1608.00737v1
|
2016-08-02T08:57:14Z
|
2016-08-02T08:57:14Z
|
Context Discovery for Model Learning in Partially Observable
Environments
|
The ability to learn a model is essential for the success of autonomous
agents. Unfortunately, learning a model is difficult in partially observable
environments, where latent environmental factors influence what the agent
observes. In the absence of a supervisory training signal, autonomous agents
therefore require a mechanism to autonomously discover these environmental
factors, or sensorimotor contexts.
This paper presents a method to discover sensorimotor contexts in partially
observable environments, by constructing a hierarchical transition model. The
method is evaluated in a simulation experiment, in which a robot learns that
different rooms are characterized by different objects that are found in them.
|
[
"['Nikolas J. Hemion']",
"Nikolas J. Hemion"
] |
stat.ML cs.LG
| null |
1608.00778
| null | null |
http://arxiv.org/pdf/1608.00778v2
|
2016-11-21T15:12:54Z
|
2016-08-02T11:44:19Z
|
Exponential Family Embeddings
|
Word embeddings are a powerful approach for capturing semantic similarity
among terms in a vocabulary. In this paper, we develop exponential family
embeddings, a class of methods that extends the idea of word embeddings to
other types of high-dimensional data. As examples, we studied neural data with
real-valued observations, count data from a market basket analysis, and ratings
data from a movie recommendation system. The main idea is to model each
observation conditioned on a set of other observations. This set is called the
context, and the way the context is defined is a modeling choice that depends
on the problem. In language the context is the surrounding words; in
neuroscience the context is close-by neurons; in market basket data the context
is other items in the shopping cart. Each type of embedding model defines the
context, the exponential family of conditional distributions, and how the
latent embedding vectors are shared across data. We infer the embeddings with a
scalable algorithm based on stochastic gradient descent. On all three
applications - neural activity of zebrafish, users' shopping behavior, and
movie ratings - we found exponential family embedding models to be more
effective than other types of dimension reduction. They better reconstruct
held-out data and find interesting qualitative structure.
|
[
"Maja R. Rudolph, Francisco J. R. Ruiz, Stephan Mandt, David M. Blei",
"['Maja R. Rudolph' 'Francisco J. R. Ruiz' 'Stephan Mandt' 'David M. Blei']"
] |
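The main idea above, modeling each observation conditioned on its context through an exponential family, can be sketched for count data: an item's count is Poisson with rate driven by the inner product of its embedding and the sum of its context items' context vectors, trained by stochastic gradient ascent on the log-likelihood. The dimensions and synthetic baskets below are assumptions for illustration, not the paper's experiments.

```python
# Sketch of a Poisson exponential family embedding on synthetic baskets.
import numpy as np

rng = np.random.default_rng(7)
n_items, dim = 50, 8
rho = 0.1 * rng.normal(size=(n_items, dim))     # embedding vectors
alpha = 0.1 * rng.normal(size=(n_items, dim))   # context vectors
baskets = [rng.choice(n_items, size=5, replace=False) for _ in range(300)]

lr = 0.01
for _ in range(20):
    for basket in baskets:
        for i in basket:
            ctx = [j for j in basket if j != i]
            c = alpha[ctx].sum(axis=0)
            rate = np.exp(rho[i] @ c)           # Poisson rate via log link
            # log-likelihood gradient for observed count x_i = 1: (x - rate)
            rho[i] += lr * (1.0 - rate) * c
            alpha[ctx] += lr * (1.0 - rate) * rho[i]  # same grad per context item
```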
cs.DC cs.LG cs.NE
| null |
1608.00781
| null | null |
http://arxiv.org/pdf/1608.00781v2
|
2017-02-26T22:01:12Z
|
2016-08-02T11:57:09Z
|
Horn: A System for Parallel Training and Regularizing of Large-Scale
Neural Networks
|
I introduce a new distributed system for effective training and regularization
of Large-Scale Neural Networks on distributed computing architectures. The
experiments demonstrate the effectiveness of flexible model partitioning and
parallelization strategies based on neuron-centric computation model, with an
implementation of the collective and parallel dropout neural networks training.
Experiments are performed on MNIST handwritten digit classification, and results are reported.
|
[
"Edward J. Yoon",
"['Edward J. Yoon']"
] |
cs.CR cs.LG
|
10.1049/iet-ifs.2014.0099
|
1608.00835
| null | null |
http://arxiv.org/abs/1608.00835v1
|
2016-08-02T14:24:47Z
|
2016-08-02T14:24:47Z
|
High Accuracy Android Malware Detection Using Ensemble Learning
|
With over 50 billion downloads and more than 1.3 million apps in the Google
official market, Android has continued to gain popularity amongst smartphone
users worldwide. At the same time there has been a rise in malware targeting
the platform, with more recent strains employing highly sophisticated detection
avoidance techniques. As traditional signature based methods become less potent
in detecting unknown malware, alternatives are needed for timely zero-day
discovery. Thus this paper proposes an approach that utilizes ensemble learning
for Android malware detection. It combines advantages of static analysis with
the efficiency and performance of ensemble machine learning to improve Android
malware detection accuracy. The machine learning models are built using a large
repository of malware samples and benign apps from a leading antivirus vendor.
Experimental results and the analysis presented show that the proposed method, which uses a large feature space to leverage the power of ensemble learning, is capable of 97.3 to 99 percent detection accuracy with very low false positive
rates.
|
[
"Suleiman Y. Yerima, Sakir Sezer, Igor Muttik",
"['Suleiman Y. Yerima' 'Sakir Sezer' 'Igor Muttik']"
] |
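The ensemble-over-static-features approach above is easy to illustrate with stock scikit-learn components: heterogeneous base classifiers combined by soft voting over binary static-analysis features (API calls, permissions, and the like). The synthetic features, labels, and specific base learners below are assumptions; the paper's corpus and exact ensemble are not reproduced.

```python
# Sketch: soft-voting ensemble over stand-in binary static-analysis features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(8)
X = (rng.random((1000, 200)) < 0.1).astype(float)   # sparse binary features
w = rng.normal(size=200)
y = ((X @ w + rng.normal(scale=0.5, size=1000)) > 0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000)),
                ("nb", BernoulliNB())],
    voting="soft",                                   # average predicted probabilities
)
ensemble.fit(Xtr, ytr)
print(ensemble.score(Xte, yte))                      # holdout accuracy
```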
cs.LG
| null |
1608.00842
| null | null |
http://arxiv.org/pdf/1608.00842v1
|
2016-08-02T14:38:02Z
|
2016-08-02T14:38:02Z
|
Mitochondria-based Renal Cell Carcinoma Subtyping: Learning from Deep
vs. Flat Feature Representations
|
Accurate subtyping of renal cell carcinoma (RCC) is of crucial importance for
understanding disease progression and for making informed treatment decisions.
New discoveries of significant alterations to mitochondria between subtypes
make immunohistochemical (IHC) staining based image classification an
imperative. Until now, accurate quantification and subtyping was made
impossible by huge IHC variations, the absence of cell membrane staining for
cytoplasm segmentation as well as the complete lack of systems for robust and
reproducible image based classification. In this paper we present a
comprehensive classification framework to overcome these challenges for tissue
microarrays (TMA) of RCCs. We compare and evaluate models based on domain
specific hand-crafted "flat"-features versus "deep" feature representations
from various layers of a pre-trained convolutional neural network (CNN). The
best model reaches a cross-validation accuracy of 89%, which demonstrates for the first time that robust mitochondria-based subtyping of renal cancer is feasible.
|
[
"Peter J. Sch\\\"uffler, Judy Sarungbam, Hassan Muhammad, Ed Reznik,\n Satish K. Tickoo, Thomas J. Fuchs",
"['Peter J. Schüffler' 'Judy Sarungbam' 'Hassan Muhammad' 'Ed Reznik'\n 'Satish K. Tickoo' 'Thomas J. Fuchs']"
] |