categories
string | doi
string | id
string | year
float64 | venue
string | link
string | updated
string | published
string | title
string | abstract
string | authors
list |
---|---|---|---|---|---|---|---|---|---|---|
null | null |
1608.04550
| null | null |
http://arxiv.org/pdf/1608.04550v1
|
2016-08-16T11:26:25Z
|
2016-08-16T11:26:25Z
|
Fast Calculation of the Knowledge Gradient for Optimization of
Deterministic Engineering Simulations
|
A novel efficient method for computing the Knowledge-Gradient policy for Continuous Parameters (KGCP) for deterministic optimization is derived. The differences with Expected Improvement (EI), a popular choice for Bayesian optimization of deterministic engineering simulations, are explored. Both policies and the Upper Confidence Bound (UCB) policy are compared on a number of benchmark functions including a problem from structural dynamics. It is empirically shown that KGCP has similar performance as the EI policy for many problems, but has better convergence properties for complex (multi-modal) optimization problems as it emphasizes more on exploration when the model is confident about the shape of optimal regions. In addition, the relationship between Maximum Likelihood Estimation (MLE) and slice sampling for estimation of the hyperparameters of the underlying models, and the complexity of the problem at hand, is studied.
|
[
"['Joachim van der Herten' 'Ivo Couckuyt' 'Dirk Deschrijver' 'Tom Dhaene']"
] |
cs.LG stat.ML
| null |
1608.04581
| null | null |
http://arxiv.org/pdf/1608.04581v1
|
2016-08-16T13:17:51Z
|
2016-08-16T13:17:51Z
|
A novel transfer learning method based on common space mapping and
weighted domain matching
|
In this paper, we propose a novel learning framework for the problem of
domain transfer learning. We map the data of two domains to one single common
space, and learn a classifier in this common space. Then we adapt the common
classifier to the two domains by adding two adaptive functions to it
respectively. In the common space, the target domain data points are weighted
and matched to the target domain in term of distributions. The weighting terms
of source domain data points and the target domain classification responses are
also regularized by the local reconstruction coefficients. The novel transfer
learning framework is evaluated over some benchmark cross-domain data sets, and
it outperforms the existing state-of-the-art transfer learning methods.
|
[
"Ru-Ze Liang, Wei Xie, Weizhi Li, Hongqi Wang, Jim Jing-Yan Wang, Lisa\n Taylor",
"['Ru-Ze Liang' 'Wei Xie' 'Weizhi Li' 'Hongqi Wang' 'Jim Jing-Yan Wang'\n 'Lisa Taylor']"
] |
stat.AP cs.LG stat.ML
| null |
1608.04585
| null | null |
http://arxiv.org/pdf/1608.04585v1
|
2016-08-16T13:32:05Z
|
2016-08-16T13:32:05Z
|
Conformalized density- and distance-based anomaly detection in
time-series data
|
Anomalies (unusual patterns) in time-series data give essential, and often
actionable information in critical situations. Examples can be found in such
fields as healthcare, intrusion detection, finance, security and flight safety.
In this paper we propose new conformalized density- and distance-based anomaly
detection algorithms for a one-dimensional time-series data. The algorithms use
a combination of a feature extraction method, an approach to assess a score
whether a new observation differs significantly from a previously observed
data, and a probabilistic interpretation of this score based on the conformal
paradigm.
|
[
"Evgeny Burnaev and Vladislav Ishimtsev",
"['Evgeny Burnaev' 'Vladislav Ishimtsev']"
] |
cs.NE cs.LG
|
10.1007/s12559-017-9450-z
|
1608.04622
| null | null |
http://arxiv.org/abs/1608.04622v1
|
2016-08-16T14:41:12Z
|
2016-08-16T14:41:12Z
|
Training Echo State Networks with Regularization through Dimensionality
Reduction
|
In this paper we introduce a new framework to train an Echo State Network to
predict real valued time-series. The method consists in projecting the output
of the internal layer of the network on a space with lower dimensionality,
before training the output layer to learn the target task. Notably, we enforce
a regularization constraint that leads to better generalization capabilities.
We evaluate the performances of our approach on several benchmark tests, using
different techniques to train the readout of the network, achieving superior
predictive performance when using the proposed framework. Finally, we provide
an insight on the effectiveness of the implemented mechanics through a
visualization of the trajectory in the phase space and relying on the
methodologies of nonlinear time-series analysis. By applying our method on well
known chaotic systems, we provide evidence that the lower dimensional embedding
retains the dynamical properties of the underlying system better than the
full-dimensional internal states of the network.
|
[
"Sigurd L{\\o}kse, Filippo Maria Bianchi and Robert Jenssen",
"['Sigurd Løkse' 'Filippo Maria Bianchi' 'Robert Jenssen']"
] |
cs.LG math.OC stat.CO stat.ML
| null |
1608.04636
| null | null | null | null | null |
Linear Convergence of Gradient and Proximal-Gradient Methods Under the
Polyak-\L{}ojasiewicz Condition
|
In 1963, Polyak proposed a simple condition that is sufficient to show a
global linear convergence rate for gradient descent. This condition is a
special case of the \L{}ojasiewicz inequality proposed in the same year, and it
does not require strong convexity (or even convexity). In this work, we show
that this much-older Polyak-\L{}ojasiewicz (PL) inequality is actually weaker
than the main conditions that have been explored to show linear convergence
rates without strong convexity over the last 25 years. We also use the PL
inequality to give new analyses of randomized and greedy coordinate descent
methods, sign-based gradient descent methods, and stochastic gradient methods
in the classic setting (with decreasing or constant step-sizes) as well as the
variance-reduced setting. We further propose a generalization that applies to
proximal-gradient methods for non-smooth optimization, leading to simple proofs
of linear convergence of these methods. Along the way, we give simple
convergence results for a wide variety of problems in machine learning: least
squares, logistic regression, boosting, resilient backpropagation,
L1-regularization, support vector machines, stochastic dual coordinate ascent,
and stochastic variance-reduced gradient methods.
|
[
"Hamed Karimi, Julie Nutini and Mark Schmidt"
] |
stat.ML cs.DC cs.LG
|
10.1109/BigData.2016.7840719
|
1608.04647
| null | null |
http://arxiv.org/abs/1608.04647v2
|
2016-08-18T02:30:07Z
|
2016-08-16T16:05:14Z
|
Enabling Factor Analysis on Thousand-Subject Neuroimaging Datasets
|
The scale of functional magnetic resonance image data is rapidly increasing
as large multi-subject datasets are becoming widely available and
high-resolution scanners are adopted. The inherent low-dimensionality of the
information in this data has led neuroscientists to consider factor analysis
methods to extract and analyze the underlying brain activity. In this work, we
consider two recent multi-subject factor analysis methods: the Shared Response
Model and Hierarchical Topographic Factor Analysis. We perform analytical,
algorithmic, and code optimization to enable multi-node parallel
implementations to scale. Single-node improvements result in 99x and 1812x
speedups on these two methods, and enables the processing of larger datasets.
Our distributed implementations show strong scaling of 3.3x and 5.5x
respectively with 20 nodes on real datasets. We also demonstrate weak scaling
on a synthetic dataset with 1024 subjects, on up to 1024 nodes and 32,768
cores.
|
[
"['Michael J. Anderson' 'Mihai Capotă' 'Javier S. Turek' 'Xia Zhu'\n 'Theodore L. Willke' 'Yida Wang' 'Po-Hsuan Chen' 'Jeremy R. Manning'\n 'Peter J. Ramadge' 'Kenneth A. Norman']",
"Michael J. Anderson, Mihai Capot\\u{a}, Javier S. Turek, Xia Zhu,\n Theodore L. Willke, Yida Wang, Po-Hsuan Chen, Jeremy R. Manning, Peter J.\n Ramadge, Kenneth A. Norman"
] |
stat.ML cs.LG stat.ME
| null |
1608.04674
| null | null |
http://arxiv.org/pdf/1608.04674v1
|
2016-08-16T17:00:48Z
|
2016-08-16T17:00:48Z
|
Shape Constrained Tensor Decompositions using Sparse Representations in
Over-Complete Libraries
|
We consider $N$-way data arrays and low-rank tensor factorizations where the
time mode is coded as a sparse linear combination of temporal elements from an
over-complete library. Our method, Shape Constrained Tensor Decomposition
(SCTD) is based upon the CANDECOMP/PARAFAC (CP) decomposition which produces
$r$-rank approximations of data tensors via outer products of vectors in each
dimension of the data. By constraining the vector in the temporal dimension to
known analytic forms which are selected from a large set of candidate
functions, more readily interpretable decompositions are achieved and analytic
time dependencies discovered. The SCTD method circumvents traditional {\em
flattening} techniques where an $N$-way array is reshaped into a matrix in
order to perform a singular value decomposition. A clear advantage of the SCTD
algorithm is its ability to extract transient and intermittent phenomena which
is often difficult for SVD-based methods. We motivate the SCTD method using
several intuitively appealing results before applying it on a number of
high-dimensional, real-world data sets in order to illustrate the efficiency of
the algorithm in extracting interpretable spatio-temporal modes. With the rise
of data-driven discovery methods, the decomposition proposed provides a viable
technique for analyzing multitudes of data in a more comprehensible fashion.
|
[
"['Bethany Lusch' 'Eric C. Chi' 'J. Nathan Kutz']",
"Bethany Lusch, Eric C. Chi, J. Nathan Kutz"
] |
cs.AI cs.LG stat.ML
| null |
1608.04689
| null | null |
http://arxiv.org/pdf/1608.04689v1
|
2016-08-16T17:54:40Z
|
2016-08-16T17:54:40Z
|
A Shallow High-Order Parametric Approach to Data Visualization and
Compression
|
Explicit high-order feature interactions efficiently capture essential
structural knowledge about the data of interest and have been used for
constructing generative models. We present a supervised discriminative
High-Order Parametric Embedding (HOPE) approach to data visualization and
compression. Compared to deep embedding models with complicated deep
architectures, HOPE generates more effective high-order feature mapping through
an embarrassingly simple shallow model. Furthermore, two approaches to
generating a small number of exemplars conveying high-order interactions to
represent large-scale data sets are proposed. These exemplars in combination
with the feature mapping learned by HOPE effectively capture essential data
variations. Moreover, through HOPE, these exemplars are employed to increase
the computational efficiency of kNN classification for fast information
retrieval by thousands of times. For classification in two-dimensional
embedding space on MNIST and USPS datasets, our shallow method HOPE with simple
Sigmoid transformations significantly outperforms state-of-the-art supervised
deep embedding models based on deep neural networks, and even achieved
historically low test error rate of 0.65% in two-dimensional space on MNIST,
which demonstrates the representational efficiency and power of supervised
shallow models with high-order feature interactions.
|
[
"['Martin Renqiang Min' 'Hongyu Guo' 'Dongjin Song']",
"Martin Renqiang Min, Hongyu Guo, Dongjin Song"
] |
q-bio.QM cs.LG stat.ME
| null |
1608.047
| null | null | null | null | null |
A Data-Driven Approach to Estimating the Number of Clusters in
Hierarchical Clustering
|
We propose two new methods for estimating the number of clusters in a
hierarchical clustering framework in the hopes of creating a fully automated
process with no human intervention. The methods are completely data-driven and
require no input from the researcher, and as such are fully automated. They are
quite easy to implement and not computationally intensive in the least. We
analyze performance on several simulated data sets and the Biobase Gene
Expression Set, comparing our methods to the established Gap statistic and
Elbow methods and outperforming both in multi-cluster scenarios.
|
[
"Antoine Zambelli"
] |
null | null |
1608.04700
| null | null |
http://arxiv.org/pdf/1608.04700v1
|
2016-08-16T18:35:09Z
|
2016-08-16T18:35:09Z
|
A Data-Driven Approach to Estimating the Number of Clusters in
Hierarchical Clustering
|
We propose two new methods for estimating the number of clusters in a hierarchical clustering framework in the hopes of creating a fully automated process with no human intervention. The methods are completely data-driven and require no input from the researcher, and as such are fully automated. They are quite easy to implement and not computationally intensive in the least. We analyze performance on several simulated data sets and the Biobase Gene Expression Set, comparing our methods to the established Gap statistic and Elbow methods and outperforming both in multi-cluster scenarios.
|
[
"['Antoine Zambelli']"
] |
cs.DS cs.LG
| null |
1608.04759
| null | null |
http://arxiv.org/pdf/1608.04759v1
|
2016-08-16T20:03:58Z
|
2016-08-16T20:03:58Z
|
Faster Sublinear Algorithms using Conditional Sampling
|
A conditional sampling oracle for a probability distribution D returns
samples from the conditional distribution of D restricted to a specified subset
of the domain. A recent line of work (Chakraborty et al. 2013 and Cannone et
al. 2014) has shown that having access to such a conditional sampling oracle
requires only polylogarithmic or even constant number of samples to solve
distribution testing problems like identity and uniformity. This significantly
improves over the standard sampling model where polynomially many samples are
necessary.
Inspired by these results, we introduce a computational model based on
conditional sampling to develop sublinear algorithms with exponentially faster
runtimes compared to standard sublinear algorithms. We focus on geometric
optimization problems over points in high dimensional Euclidean space. Access
to these points is provided via a conditional sampling oracle that takes as
input a succinct representation of a subset of the domain and outputs a
uniformly random point in that subset. We study two well studied problems:
k-means clustering and estimating the weight of the minimum spanning tree. In
contrast to prior algorithms for the classic model, our algorithms have time,
space and sample complexity that is polynomial in the dimension and
polylogarithmic in the number of points.
Finally, we comment on the applicability of the model and compare with
existing ones like streaming, parallel and distributed computational models.
|
[
"['Themistoklis Gouleakis' 'Christos Tzamos' 'Manolis Zampetakis']",
"Themistoklis Gouleakis, Christos Tzamos and Manolis Zampetakis"
] |
stat.ML cs.DS cs.LG math.NA math.OC
| null |
1608.04773
| null | null |
http://arxiv.org/pdf/1608.04773v2
|
2017-04-24T19:35:38Z
|
2016-08-16T20:48:02Z
|
Faster Principal Component Regression and Stable Matrix Chebyshev
Approximation
|
We solve principal component regression (PCR), up to a multiplicative
accuracy $1+\gamma$, by reducing the problem to $\tilde{O}(\gamma^{-1})$
black-box calls of ridge regression. Therefore, our algorithm does not require
any explicit construction of the top principal components, and is suitable for
large-scale PCR instances. In contrast, previous result requires
$\tilde{O}(\gamma^{-2})$ such black-box calls.
We obtain this result by developing a general stable recurrence formula for
matrix Chebyshev polynomials, and a degree-optimal polynomial approximation to
the matrix sign function. Our techniques may be of independent interests,
especially when designing iterative methods.
|
[
"['Zeyuan Allen-Zhu' 'Yuanzhi Li']",
"Zeyuan Allen-Zhu and Yuanzhi Li"
] |
cs.LG stat.ML
| null |
1608.04783
| null | null |
http://arxiv.org/pdf/1608.04783v1
|
2016-08-16T21:20:30Z
|
2016-08-16T21:20:30Z
|
Application of multiview techniques to NHANES dataset
|
Disease prediction or classification using health datasets involve using
well-known predictors associated with the disease as features for the models.
This study considers multiple data components of an individual's health, using
the relationship between variables to generate features that may improve the
performance of disease classification models. In order to capture information
from different aspects of the data, this project uses a multiview learning
approach, using Canonical Correlation Analysis (CCA), a technique that finds
projections with maximum correlations between two data views. Data categories
collected from the NHANES survey (1999-2014) are used as views to learn the
multiview representations. The usefulness of the representations is
demonstrated by applying them as features in a Diabetes classification task.
|
[
"['Aileme Omogbai']",
"Aileme Omogbai"
] |
cs.CY cs.LG
| null |
1608.04789
| null | null |
http://arxiv.org/pdf/1608.04789v1
|
2016-08-16T21:46:48Z
|
2016-08-16T21:46:48Z
|
Modelling Student Behavior using Granular Large Scale Action Data from a
MOOC
|
Digital learning environments generate a precise record of the actions
learners take as they interact with learning materials and complete exercises
towards comprehension. With this high quantity of sequential data comes the
potential to apply time series models to learn about underlying behavioral
patterns and trends that characterize successful learning based on the granular
record of student actions. There exist several methods for looking at
longitudinal, sequential data like those recorded from learning environments.
In the field of language modelling, traditional n-gram techniques and modern
recurrent neural network (RNN) approaches have been applied to algorithmically
find structure in language and predict the next word given the previous words
in the sentence or paragraph as input. In this paper, we draw an analogy to
this work by treating student sequences of resource views and interactions in a
MOOC as the inputs and predicting students' next interaction as outputs. In
this study, we train only on students who received a certificate of completion.
In doing so, the model could potentially be used for recommendation of
sequences eventually leading to success, as opposed to perpetuating
unproductive behavior. Given that the MOOC used in our study had over 3,500
unique resources, predicting the exact resource that a student will interact
with next might appear to be a difficult classification problem. We find that
simply following the syllabus (built-in structure of the course) gives on
average 23% accuracy in making this prediction, followed by the n-gram method
with 70.4%, and RNN based methods with 72.2%. This research lays the ground
work for recommendation in a MOOC and other digital learning environments where
high volumes of sequential data exist.
|
[
"Steven Tang, Joshua C. Peterson, Zachary A. Pardos",
"['Steven Tang' 'Joshua C. Peterson' 'Zachary A. Pardos']"
] |
stat.ML cs.LG
| null |
1608.04802
| null | null |
http://arxiv.org/pdf/1608.04802v2
|
2017-03-01T07:54:51Z
|
2016-08-16T23:11:14Z
|
Scalable Learning of Non-Decomposable Objectives
|
Modern retrieval systems are often driven by an underlying machine learning
model. The goal of such systems is to identify and possibly rank the few most
relevant items for a given query or context. Thus, such systems are typically
evaluated using a ranking-based performance metric such as the area under the
precision-recall curve, the $F_\beta$ score, precision at fixed recall, etc.
Obviously, it is desirable to train such systems to optimize the metric of
interest.
In practice, due to the scalability limitations of existing approaches for
optimizing such objectives, large-scale retrieval systems are instead trained
to maximize classification accuracy, in the hope that performance as measured
via the true objective will also be favorable. In this work we present a
unified framework that, using straightforward building block bounds, allows for
highly scalable optimization of a wide range of ranking-based objectives. We
demonstrate the advantage of our approach on several real-life retrieval
problems that are significantly larger than those considered in the literature,
while achieving substantial improvement in performance over the
accuracy-objective baseline.
|
[
"['Elad ET. Eban' 'Mariano Schain' 'Alan Mackey' 'Ariel Gordon'\n 'Rif A. Saurous' 'Gal Elidan']",
"Elad ET. Eban, Mariano Schain, Alan Mackey, Ariel Gordon, Rif A.\n Saurous, Gal Elidan"
] |
stat.ML cs.LG
| null |
1608.0483
| null | null | null | null | null |
Outlier Detection on Mixed-Type Data: An Energy-based Approach
|
Outlier detection amounts to finding data points that differ significantly
from the norm. Classic outlier detection methods are largely designed for
single data type such as continuous or discrete. However, real world data is
increasingly heterogeneous, where a data point can have both discrete and
continuous attributes. Handling mixed-type data in a disciplined way remains a
great challenge. In this paper, we propose a new unsupervised outlier detection
method for mixed-type data based on Mixed-variate Restricted Boltzmann Machine
(Mv.RBM). The Mv.RBM is a principled probabilistic method that models data
density. We propose to use \emph{free-energy} derived from Mv.RBM as outlier
score to detect outliers as those data points lying in low density regions. The
method is fast to learn and compute, is scalable to massive datasets. At the
same time, the outlier score is identical to data negative log-density up-to an
additive constant. We evaluate the proposed method on synthetic and real-world
datasets and demonstrate that (a) a proper handling mixed-types is necessary in
outlier detection, and (b) free-energy of Mv.RBM is a powerful and efficient
outlier scoring method, which is highly competitive against state-of-the-arts.
|
[
"Kien Do, Truyen Tran, Dinh Phung and Svetha Venkatesh"
] |
null | null |
1608.04830
| null | null |
http://arxiv.org/pdf/1608.04830v1
|
2016-08-17T01:41:40Z
|
2016-08-17T01:41:40Z
|
Outlier Detection on Mixed-Type Data: An Energy-based Approach
|
Outlier detection amounts to finding data points that differ significantly from the norm. Classic outlier detection methods are largely designed for single data type such as continuous or discrete. However, real world data is increasingly heterogeneous, where a data point can have both discrete and continuous attributes. Handling mixed-type data in a disciplined way remains a great challenge. In this paper, we propose a new unsupervised outlier detection method for mixed-type data based on Mixed-variate Restricted Boltzmann Machine (Mv.RBM). The Mv.RBM is a principled probabilistic method that models data density. We propose to use emph{free-energy} derived from Mv.RBM as outlier score to detect outliers as those data points lying in low density regions. The method is fast to learn and compute, is scalable to massive datasets. At the same time, the outlier score is identical to data negative log-density up-to an additive constant. We evaluate the proposed method on synthetic and real-world datasets and demonstrate that (a) a proper handling mixed-types is necessary in outlier detection, and (b) free-energy of Mv.RBM is a powerful and efficient outlier scoring method, which is highly competitive against state-of-the-arts.
|
[
"['Kien Do' 'Truyen Tran' 'Dinh Phung' 'Svetha Venkatesh']"
] |
cs.LG cs.AI stat.ML
| null |
1608.04839
| null | null |
http://arxiv.org/pdf/1608.04839v3
|
2016-11-01T07:23:16Z
|
2016-08-17T02:38:44Z
|
Dynamic Collaborative Filtering with Compound Poisson Factorization
|
Model-based collaborative filtering analyzes user-item interactions to infer
latent factors that represent user preferences and item characteristics in
order to predict future interactions. Most collaborative filtering algorithms
assume that these latent factors are static, although it has been shown that
user preferences and item perceptions drift over time. In this paper, we
propose a conjugate and numerically stable dynamic matrix factorization (DCPF)
based on compound Poisson matrix factorization that models the smoothly
drifting latent factors using Gamma-Markov chains. We propose a numerically
stable Gamma chain construction, and then present a stochastic variational
inference approach to estimate the parameters of our model. We apply our model
to time-stamped ratings data sets: Netflix, Yelp, and Last.fm, where DCPF
achieves a higher predictive accuracy than state-of-the-art static and dynamic
factorization models.
|
[
"['Ghassen Jerfel' 'Mehmet E. Basbug' 'Barbara E. Engelhardt']",
"Ghassen Jerfel, Mehmet E. Basbug, Barbara E. Engelhardt"
] |
stat.ML cs.AI cs.CV cs.LG
| null |
1608.04846
| null | null |
http://arxiv.org/pdf/1608.04846v1
|
2016-08-17T03:49:56Z
|
2016-08-17T03:49:56Z
|
A Convolutional Autoencoder for Multi-Subject fMRI Data Aggregation
|
Finding the most effective way to aggregate multi-subject fMRI data is a
long-standing and challenging problem. It is of increasing interest in
contemporary fMRI studies of human cognition due to the scarcity of data per
subject and the variability of brain anatomy and functional response across
subjects. Recent work on latent factor models shows promising results in this
task but this approach does not preserve spatial locality in the brain. We
examine two ways to combine the ideas of a factor model and a searchlight based
analysis to aggregate multi-subject fMRI data while preserving spatial
locality. We first do this directly by combining a recent factor method known
as a shared response model with searchlight analysis. Then we design a
multi-view convolutional autoencoder for the same task. Both approaches
preserve spatial locality and have competitive or better performance compared
with standard searchlight analysis and the shared response model applied across
the whole brain. We also report a system design to handle the computational
challenge of training the convolutional autoencoder.
|
[
"['Po-Hsuan Chen' 'Xia Zhu' 'Hejia Zhang' 'Javier S. Turek' 'Janice Chen'\n 'Theodore L. Willke' 'Uri Hasson' 'Peter J. Ramadge']",
"Po-Hsuan Chen, Xia Zhu, Hejia Zhang, Javier S. Turek, Janice Chen,\n Theodore L. Willke, Uri Hasson, Peter J. Ramadge"
] |
cs.IT cs.IR cs.LG math.IT
| null |
1608.04872
| null | null |
http://arxiv.org/pdf/1608.04872v1
|
2016-08-17T06:38:35Z
|
2016-08-17T06:38:35Z
|
Hard Clusters Maximize Mutual Information
|
In this paper, we investigate mutual information as a cost function for
clustering, and show in which cases hard, i.e., deterministic, clusters are
optimal. Using convexity properties of mutual information, we show that certain
formulations of the information bottleneck problem are solved by hard clusters.
Similarly, hard clusters are optimal for the information-theoretic
co-clustering problem that deals with simultaneous clustering of two dependent
data sets. If both data sets have to be clustered using the same cluster
assignment, hard clusters are not optimal in general. We point at interesting
and practically relevant special cases of this so-called pairwise clustering
problem, for which we can either prove or have evidence that hard clusters are
optimal. Our results thus show that one can relax the otherwise combinatorial
hard clustering problem to a real-valued optimization problem with the same
global optimum.
|
[
"['Bernhard C. Geiger' 'Rana Ali Amjad']",
"Bernhard C. Geiger, Rana Ali Amjad"
] |
cs.LG
| null |
1608.04929
| null | null |
http://arxiv.org/pdf/1608.04929v1
|
2016-08-17T11:35:32Z
|
2016-08-17T11:35:32Z
|
Reinforcement Learning algorithms for regret minimization in structured
Markov Decision Processes
|
A recent goal in the Reinforcement Learning (RL) framework is to choose a
sequence of actions or a policy to maximize the reward collected or minimize
the regret incurred in a finite time horizon. For several RL problems in
operation research and optimal control, the optimal policy of the underlying
Markov Decision Process (MDP) is characterized by a known structure. The
current state of the art algorithms do not utilize this known structure of the
optimal policy while minimizing regret. In this work, we develop new RL
algorithms that exploit the structure of the optimal policy to minimize regret.
Numerical experiments on MDPs with structured optimal policies show that our
algorithms have better performance, are easy to implement, have a smaller
run-time and require less number of random number generations.
|
[
"['K J Prabuchandran' 'Tejas Bodas' 'Theja Tulabandhula']",
"K J Prabuchandran, Tejas Bodas and Theja Tulabandhula"
] |
cs.LG cs.NE
| null |
1608.0498
| null | null | null | null | null |
Mollifying Networks
|
The optimization of deep neural networks can be more challenging than
traditional convex optimization problems due to the highly non-convex nature of
the loss function, e.g. it can involve pathological landscapes such as
saddle-surfaces that can be difficult to escape for algorithms based on simple
gradient descent. In this paper, we attack the problem of optimization of
highly non-convex neural networks by starting with a smoothed -- or
\textit{mollified} -- objective function that gradually has a more non-convex
energy landscape during the training. Our proposition is inspired by the recent
studies in continuation methods: similar to curriculum methods, we begin
learning an easier (possibly convex) objective function and let it evolve
during the training, until it eventually goes back to being the original,
difficult to optimize, objective function. The complexity of the mollified
networks is controlled by a single hyperparameter which is annealed during the
training. We show improvements on various difficult optimization tasks and
establish a relationship with recent works on continuation methods for neural
networks and mollifiers.
|
[
"Caglar Gulcehre, Marcin Moczulski, Francesco Visin, Yoshua Bengio"
] |
null | null |
1608.04980
| null | null |
http://arxiv.org/pdf/1608.04980v1
|
2016-08-17T14:37:34Z
|
2016-08-17T14:37:34Z
|
Mollifying Networks
|
The optimization of deep neural networks can be more challenging than traditional convex optimization problems due to the highly non-convex nature of the loss function, e.g. it can involve pathological landscapes such as saddle-surfaces that can be difficult to escape for algorithms based on simple gradient descent. In this paper, we attack the problem of optimization of highly non-convex neural networks by starting with a smoothed -- or textit{mollified} -- objective function that gradually has a more non-convex energy landscape during the training. Our proposition is inspired by the recent studies in continuation methods: similar to curriculum methods, we begin learning an easier (possibly convex) objective function and let it evolve during the training, until it eventually goes back to being the original, difficult to optimize, objective function. The complexity of the mollified networks is controlled by a single hyperparameter which is annealed during the training. We show improvements on various difficult optimization tasks and establish a relationship with recent works on continuation methods for neural networks and mollifiers.
|
[
"['Caglar Gulcehre' 'Marcin Moczulski' 'Francesco Visin' 'Yoshua Bengio']"
] |
cs.CV cs.LG cs.MM
| null |
1608.05001
| null | null |
http://arxiv.org/pdf/1608.05001v2
|
2016-10-09T02:27:20Z
|
2016-08-16T14:51:25Z
|
An image compression and encryption scheme based on deep learning
|
Stacked Auto-Encoder (SAE) is a kind of deep learning algorithm for
unsupervised learning. Which has multi layers that project the vector
representation of input data into a lower vector space. These projection
vectors are dense representations of the input data. As a result, SAE can be
used for image compression. Using chaotic logistic map, the compression ones
can further be encrypted. In this study, an application of image compression
and encryption is suggested using SAE and chaotic logistic map. Experiments
show that this application is feasible and effective. It can be used for image
transmission and image protection on internet simultaneously.
|
[
"Fei Hu, Changjiu Pu, Haowei Gao, Mengzi Tang and Li Li",
"['Fei Hu' 'Changjiu Pu' 'Haowei Gao' 'Mengzi Tang' 'Li Li']"
] |
cs.LG cs.NE stat.ML
| null |
1608.05081
| null | null |
http://arxiv.org/pdf/1608.05081v4
|
2017-11-23T10:24:17Z
|
2016-08-17T20:00:04Z
|
BBQ-Networks: Efficient Exploration in Deep Reinforcement Learning for
Task-Oriented Dialogue Systems
|
We present a new algorithm that significantly improves the efficiency of
exploration for deep Q-learning agents in dialogue systems. Our agents explore
via Thompson sampling, drawing Monte Carlo samples from a Bayes-by-Backprop
neural network. Our algorithm learns much faster than common exploration
strategies such as $\epsilon$-greedy, Boltzmann, bootstrapping, and
intrinsic-reward-based ones. Additionally, we show that spiking the replay
buffer with experiences from just a few successful episodes can make Q-learning
feasible when it might otherwise fail.
|
[
"Zachary C. Lipton, Xiujun Li, Jianfeng Gao, Lihong Li, Faisal Ahmed,\n Li Deng",
"['Zachary C. Lipton' 'Xiujun Li' 'Jianfeng Gao' 'Lihong Li' 'Faisal Ahmed'\n 'Li Deng']"
] |
cs.LG stat.AP stat.ML
| null |
1608.05127
| null | null |
http://arxiv.org/pdf/1608.05127v1
|
2016-08-17T23:30:04Z
|
2016-08-17T23:30:04Z
|
A Bayesian Network approach to County-Level Corn Yield Prediction using
historical data and expert knowledge
|
Crop yield forecasting is the methodology of predicting crop yields prior to
harvest. The availability of accurate yield prediction frameworks have enormous
implications from multiple standpoints, including impact on the crop commodity
futures markets, formulation of agricultural policy, as well as crop insurance
rating. The focus of this work is to construct a corn yield predictor at the
county scale. Corn yield (forecasting) depends on a complex, interconnected set
of variables that include economic, agricultural, management and meteorological
factors. Conventional forecasting is either knowledge-based computer programs
(that simulate plant-weather-soil-management interactions) coupled with
targeted surveys or statistical model based. The former is limited by the need
for painstaking calibration, while the latter is limited to univariate analysis
or similar simplifying assumptions that fail to capture the complex
interdependencies affecting yield. In this paper, we propose a data-driven
approach that is "gray box" i.e. that seamlessly utilizes expert knowledge in
constructing a statistical network model for corn yield forecasting. Our
multivariate gray box model is developed on Bayesian network analysis to build
a Directed Acyclic Graph (DAG) between predictors and yield. Starting from a
complete graph connecting various carefully chosen variables and yield, expert
knowledge is used to prune or strengthen edges connecting variables.
Subsequently the structure (connectivity and edge weights) of the DAG that
maximizes the likelihood of observing the training data is identified via
optimization. We curated an extensive set of historical data (1948-2012) for
each of the 99 counties in Iowa as data to train the model.
|
[
"Vikas Chawla, Hsiang Sing Naik, Adedotun Akintayo, Dermot Hayes,\n Patrick Schnable, Baskar Ganapathysubramanian, Soumik Sarkar",
"['Vikas Chawla' 'Hsiang Sing Naik' 'Adedotun Akintayo' 'Dermot Hayes'\n 'Patrick Schnable' 'Baskar Ganapathysubramanian' 'Soumik Sarkar']"
] |
cs.LG cs.DS stat.ML
| null |
1608.05152
| null | null |
http://arxiv.org/pdf/1608.05152v1
|
2016-08-18T01:30:49Z
|
2016-08-18T01:30:49Z
|
Conditional Sparse Linear Regression
|
Machine learning and statistics typically focus on building models that
capture the vast majority of the data, possibly ignoring a small subset of data
as "noise" or "outliers." By contrast, here we consider the problem of jointly
identifying a significant (but perhaps small) segment of a population in which
there is a highly sparse linear regression fit, together with the coefficients
for the linear fit. We contend that such tasks are of interest both because the
models themselves may be able to achieve better predictions in such special
cases, but also because they may aid our understanding of the data. We give
algorithms for such problems under the sup norm, when this unknown segment of
the population is described by a k-DNF condition and the regression fit is
s-sparse for constant k and s. For the variants of this problem when the
regression fit is not so sparse or using expected error, we also give a
preliminary algorithm and highlight the question as a challenge for future
work.
|
[
"Brendan Juba",
"['Brendan Juba']"
] |
cs.LG stat.ML
| null |
1608.05182
| null | null |
http://arxiv.org/pdf/1608.05182v2
|
2016-12-10T16:44:14Z
|
2016-08-18T05:31:53Z
|
A Bayesian Nonparametric Approach for Estimating Individualized
Treatment-Response Curves
|
We study the problem of estimating the continuous response over time to
interventions using observational time series---a retrospective dataset where
the policy by which the data are generated is unknown to the learner. We are
motivated by applications where response varies by individuals and therefore,
estimating responses at the individual-level is valuable for personalizing
decision-making. We refer to this as the problem of estimating individualized
treatment response (ITR) curves. In statistics, G-computation formula (Robins,
1986) has been commonly used for estimating treatment responses from
observational data containing sequential treatment assignments. However, past
studies have focused predominantly on obtaining point-in-time estimates at the
population level. We leverage the G-computation formula and develop a novel
Bayesian nonparametric (BNP) method that can flexibly model functional data and
provide posterior inference over the treatment response curves at both the
individual and population level. On a challenging dataset containing time
series from patients admitted to a hospital, we estimate responses to
treatments used in managing kidney function and show that the resulting fits
are more accurate than alternative approaches. Accurate methods for obtaining
ITRs from observational data can dramatically accelerate the pace at which
personalized treatment plans become possible.
|
[
"Yanbo Xu, Yanxun Xu and Suchi Saria",
"['Yanbo Xu' 'Yanxun Xu' 'Suchi Saria']"
] |
cs.LG stat.ML
| null |
1608.05225
| null | null |
http://arxiv.org/pdf/1608.05225v1
|
2016-08-18T10:15:54Z
|
2016-08-18T10:15:54Z
|
Active Learning for Approximation of Expensive Functions with Normal
Distributed Output Uncertainty
|
When approximating a black-box function, sampling with active learning
focussing on regions with non-linear responses tends to improve accuracy. We
present the FLOLA-Voronoi method introduced previously for deterministic
responses, and theoretically derive the impact of output uncertainty. The
algorithm automatically puts more emphasis on exploration to provide more
information to the models.
|
[
"Joachim van der Herten and Ivo Couckuyt and Dirk Deschrijver and Tom\n Dhaene",
"['Joachim van der Herten' 'Ivo Couckuyt' 'Dirk Deschrijver' 'Tom Dhaene']"
] |
stat.ML cs.LG
| null |
1608.05258
| null | null |
http://arxiv.org/pdf/1608.05258v1
|
2016-08-18T13:55:41Z
|
2016-08-18T13:55:41Z
|
Parameter Learning for Log-supermodular Distributions
|
We consider log-supermodular models on binary variables, which are
probabilistic models with negative log-densities which are submodular. These
models provide probabilistic interpretations of common combinatorial
optimization tasks such as image segmentation. In this paper, we focus
primarily on parameter estimation in the models from known upper-bounds on the
intractable log-partition function. We show that the bound based on separable
optimization on the base polytope of the submodular function is always inferior
to a bound based on "perturb-and-MAP" ideas. Then, to learn parameters, given
that our approximation of the log-partition function is an expectation (over
our own randomization), we use a stochastic subgradient technique to maximize a
lower-bound on the log-likelihood. This can also be extended to conditional
maximum likelihood. We illustrate our new results in a set of experiments in
binary image denoising, where we highlight the flexibility of a probabilistic
model to learn with missing data.
|
[
"Tatiana Shpakova and Francis Bach",
"['Tatiana Shpakova' 'Francis Bach']"
] |
cs.LG stat.ML
| null |
1608.05275
| null | null |
http://arxiv.org/pdf/1608.05275v1
|
2016-08-18T14:27:45Z
|
2016-08-18T14:27:45Z
|
A Tight Convex Upper Bound on the Likelihood of a Finite Mixture
|
The likelihood function of a finite mixture model is a non-convex function
with multiple local maxima and commonly used iterative algorithms such as EM
will converge to different solutions depending on initial conditions. In this
paper we ask: is it possible to assess how far we are from the global maximum
of the likelihood? Since the likelihood of a finite mixture model can grow
unboundedly by centering a Gaussian on a single datapoint and shrinking the
covariance, we constrain the problem by assuming that the parameters of the
individual models are members of a large discrete set (e.g. estimating a
mixture of two Gaussians where the means and variances of both Gaussians are
members of a set of a million possible means and variances). For this setting
we show that a simple upper bound on the likelihood can be computed using
convex optimization and we analyze conditions under which the bound is
guaranteed to be tight. This bound can then be used to assess the quality of
solutions found by EM (where the final result is projected on the discrete set)
or any other mixture estimation algorithm. For any dataset our method allows us
to find a finite mixture model together with a dataset-specific bound on how
far the likelihood of this mixture is from the global optimum of the likelihood
|
[
"Elad Mezuman and Yair Weiss",
"['Elad Mezuman' 'Yair Weiss']"
] |
cs.LG
| null |
1608.05277
| null | null |
http://arxiv.org/pdf/1608.05277v3
|
2017-01-03T12:31:30Z
|
2016-08-18T14:32:04Z
|
Caveats on Bayesian and hidden-Markov models (v2.8)
|
This paper describes a number of fundamental and practical problems in the
application of hidden-Markov models and Bayes when applied to cursive-script
recognition. Several problems, however, will have an effect in other
application areas. The most fundamental problem is the propagation of error in
the product of probabilities. This is a common and pervasive problem which
deserves more attention. On the basis of Monte Carlo modeling, tables for the
expected relative error are given. It seems that it is distributed according to
a continuous Poisson distribution over log probabilities. A second essential
problem is related to the appropriateness of the Markov assumption. Basic tests
will reveal whether a problem requires modeling of the stochastics of
seriality, at all. Examples are given of lexical encodings which cover 95-99%
classification accuracy of a lexicon, with removed sequence information, for
several European languages. Finally, a summary of results on a non- Bayes,
non-Markov method in handwriting recognition are presented, with very
acceptable results and minimal modeling or training requirements using
nearest-mean classification.
|
[
"['Lambert Schomaker']",
"Lambert Schomaker"
] |
cs.LG
| null |
1608.05343
| null | null |
http://arxiv.org/pdf/1608.05343v2
|
2017-07-03T10:52:04Z
|
2016-08-18T17:29:09Z
|
Decoupled Neural Interfaces using Synthetic Gradients
|
Training directed neural networks typically requires forward-propagating data
through a computation graph, followed by backpropagating error signal, to
produce weight updates. All layers, or more generally, modules, of the network
are therefore locked, in the sense that they must wait for the remainder of the
network to execute forwards and propagate error backwards before they can be
updated. In this work we break this constraint by decoupling modules by
introducing a model of the future computation of the network graph. These
models predict what the result of the modelled subgraph will produce using only
local information. In particular we focus on modelling error gradients: by
using the modelled synthetic gradient in place of true backpropagated error
gradients we decouple subgraphs, and can update them independently and
asynchronously i.e. we realise decoupled neural interfaces. We show results for
feed-forward models, where every layer is trained asynchronously, recurrent
neural networks (RNNs) where predicting one's future gradient extends the time
over which the RNN can effectively model, and also a hierarchical RNN system
with ticking at different timescales. Finally, we demonstrate that in addition
to predicting gradients, the same framework can be used to predict inputs,
resulting in models which are decoupled in both the forward and backwards pass
-- amounting to independent networks which co-learn such that they can be
composed into a single functioning corporation.
|
[
"Max Jaderberg, Wojciech Marian Czarnecki, Simon Osindero, Oriol\n Vinyals, Alex Graves, David Silver, Koray Kavukcuoglu",
"['Max Jaderberg' 'Wojciech Marian Czarnecki' 'Simon Osindero'\n 'Oriol Vinyals' 'Alex Graves' 'David Silver' 'Koray Kavukcuoglu']"
] |
cs.AI cs.LG stat.ML
| null |
1608.05347
| null | null |
http://arxiv.org/pdf/1608.05347v1
|
2016-08-18T17:47:53Z
|
2016-08-18T17:47:53Z
|
Probabilistic Data Analysis with Probabilistic Programming
|
Probabilistic techniques are central to data analysis, but different
approaches can be difficult to apply, combine, and compare. This paper
introduces composable generative population models (CGPMs), a computational
abstraction that extends directed graphical models and can be used to describe
and compose a broad class of probabilistic data analysis techniques. Examples
include hierarchical Bayesian models, multivariate kernel methods,
discriminative machine learning, clustering algorithms, dimensionality
reduction, and arbitrary probabilistic programs. We also demonstrate the
integration of CGPMs into BayesDB, a probabilistic programming platform that
can express data analysis tasks using a modeling language and a structured
query language. The practical value is illustrated in two ways. First, CGPMs
are used in an analysis that identifies satellite data records which probably
violate Kepler's Third Law, by composing causal probabilistic programs with
non-parametric Bayes in under 50 lines of probabilistic code. Second, for
several representative data analysis tasks, we report on lines of code and
accuracy measurements of various CGPMs, plus comparisons with standard baseline
solutions from Python and MATLAB libraries.
|
[
"Feras Saad, Vikash Mansinghka",
"['Feras Saad' 'Vikash Mansinghka']"
] |
cs.DC cs.LG math.OC
| null |
1608.05401
| null | null |
http://arxiv.org/pdf/1608.05401v1
|
2016-08-18T19:57:41Z
|
2016-08-18T19:57:41Z
|
Distributed Optimization of Convex Sum of Non-Convex Functions
|
We present a distributed solution to optimizing a convex function composed of
several non-convex functions. Each non-convex function is privately stored with
an agent while the agents communicate with neighbors to form a network. We show
that coupled consensus and projected gradient descent algorithm proposed in [1]
can optimize convex sum of non-convex functions under an additional assumption
on gradient Lipschitzness. We further discuss the applications of this analysis
in improving privacy in distributed optimization.
|
[
"['Shripad Gade' 'Nitin H. Vaidya']",
"Shripad Gade and Nitin H. Vaidya"
] |
cs.LG stat.ML
| null |
1608.0556
| null | null | null | null | null |
Iterative Views Agreement: An Iterative Low-Rank based Structured
Optimization Method to Multi-View Spectral Clustering
|
Multi-view spectral clustering, which aims at yielding an agreement or
consensus data objects grouping across multi-views with their graph laplacian
matrices, is a fundamental clustering problem. Among the existing methods,
Low-Rank Representation (LRR) based method is quite superior in terms of its
effectiveness, intuitiveness and robustness to noise corruptions. However, it
aggressively tries to learn a common low-dimensional subspace for multi-view
data, while inattentively ignoring the local manifold structure in each view,
which is critically important to the spectral clustering; worse still, the
low-rank minimization is enforced to achieve the data correlation consensus
among all views, failing to flexibly preserve the local manifold structure for
each view. In this paper, 1) we propose a multi-graph laplacian regularized LRR
with each graph laplacian corresponding to one view to characterize its local
manifold structure. 2) Instead of directly enforcing the low-rank minimization
among all views for correlation consensus, we separately impose low-rank
constraint on each view, coupled with a mutual structural consensus constraint,
where it is able to not only well preserve the local manifold structure but
also serve as a constraint for that from other views, which iteratively makes
the views more agreeable. Extensive experiments on real-world multi-view data
sets demonstrate its superiority.
|
[
"Yang Wang, Wenjie Zhang, Lin Wu, Xuemin Lin, Meng Fang, Shirui Pan"
] |
null | null |
1608.05560
| null | null |
http://arxiv.org/pdf/1608.05560v1
|
2016-08-19T10:25:46Z
|
2016-08-19T10:25:46Z
|
Iterative Views Agreement: An Iterative Low-Rank based Structured
Optimization Method to Multi-View Spectral Clustering
|
Multi-view spectral clustering, which aims at yielding an agreement or consensus data objects grouping across multi-views with their graph laplacian matrices, is a fundamental clustering problem. Among the existing methods, Low-Rank Representation (LRR) based method is quite superior in terms of its effectiveness, intuitiveness and robustness to noise corruptions. However, it aggressively tries to learn a common low-dimensional subspace for multi-view data, while inattentively ignoring the local manifold structure in each view, which is critically important to the spectral clustering; worse still, the low-rank minimization is enforced to achieve the data correlation consensus among all views, failing to flexibly preserve the local manifold structure for each view. In this paper, 1) we propose a multi-graph laplacian regularized LRR with each graph laplacian corresponding to one view to characterize its local manifold structure. 2) Instead of directly enforcing the low-rank minimization among all views for correlation consensus, we separately impose low-rank constraint on each view, coupled with a mutual structural consensus constraint, where it is able to not only well preserve the local manifold structure but also serve as a constraint for that from other views, which iteratively makes the views more agreeable. Extensive experiments on real-world multi-view data sets demonstrate its superiority.
|
[
"['Yang Wang' 'Wenjie Zhang' 'Lin Wu' 'Xuemin Lin' 'Meng Fang' 'Shirui Pan']"
] |
stat.ML cs.LG
| null |
1608.05581
| null | null |
http://arxiv.org/pdf/1608.05581v5
|
2017-06-02T19:07:02Z
|
2016-08-19T12:28:21Z
|
Unsupervised Feature Selection Based on the Morisita Estimator of
Intrinsic Dimension
|
This paper deals with a new filter algorithm for selecting the smallest
subset of features carrying all the information content of a data set (i.e. for
removing redundant features). It is an advanced version of the fractal
dimension reduction technique, and it relies on the recently introduced
Morisita estimator of Intrinsic Dimension (ID). Here, the ID is used to
quantify dependencies between subsets of features, which allows the effective
processing of highly non-linear data. The proposed algorithm is successfully
tested on simulated and real world case studies. Different levels of sample
size and noise are examined along with the variability of the results. In
addition, a comprehensive procedure based on random forests shows that the data
dimensionality is significantly reduced by the algorithm without loss of
relevant information. And finally, comparisons with benchmark feature selection
techniques demonstrate the promising performance of this new filter.
|
[
"['Jean Golay' 'Mikhail Kanevski']",
"Jean Golay and Mikhail Kanevski"
] |
cs.LG stat.ML
| null |
1608.0561
| null | null | null | null | null |
A Strongly Quasiconvex PAC-Bayesian Bound
|
We propose a new PAC-Bayesian bound and a way of constructing a hypothesis
space, so that the bound is convex in the posterior distribution and also
convex in a trade-off parameter between empirical performance of the posterior
distribution and its complexity. The complexity is measured by the
Kullback-Leibler divergence to a prior. We derive an alternating procedure for
minimizing the bound. We show that the bound can be rewritten as a
one-dimensional function of the trade-off parameter and provide sufficient
conditions under which the function has a single global minimum. When the
conditions are satisfied the alternating minimization is guaranteed to converge
to the global minimum of the bound. We provide experimental results
demonstrating that rigorous minimization of the bound is competitive with
cross-validation in tuning the trade-off between complexity and empirical
performance. In all our experiments the trade-off turned to be quasiconvex even
when the sufficient conditions were violated.
|
[
"Niklas Thiemann and Christian Igel and Olivier Wintenberger and\n Yevgeny Seldin"
] |
null | null |
1608.05610
| null | null |
http://arxiv.org/pdf/1608.05610v2
|
2017-08-24T09:45:07Z
|
2016-08-19T14:21:18Z
|
A Strongly Quasiconvex PAC-Bayesian Bound
|
We propose a new PAC-Bayesian bound and a way of constructing a hypothesis space, so that the bound is convex in the posterior distribution and also convex in a trade-off parameter between empirical performance of the posterior distribution and its complexity. The complexity is measured by the Kullback-Leibler divergence to a prior. We derive an alternating procedure for minimizing the bound. We show that the bound can be rewritten as a one-dimensional function of the trade-off parameter and provide sufficient conditions under which the function has a single global minimum. When the conditions are satisfied the alternating minimization is guaranteed to converge to the global minimum of the bound. We provide experimental results demonstrating that rigorous minimization of the bound is competitive with cross-validation in tuning the trade-off between complexity and empirical performance. In all our experiments the trade-off turned to be quasiconvex even when the sufficient conditions were violated.
|
[
"['Niklas Thiemann' 'Christian Igel' 'Olivier Wintenberger'\n 'Yevgeny Seldin']"
] |
cs.LG
| null |
1608.05639
| null | null |
http://arxiv.org/pdf/1608.05639v1
|
2016-08-19T15:34:43Z
|
2016-08-19T15:34:43Z
|
Operator-Valued Bochner Theorem, Fourier Feature Maps for
Operator-Valued Kernels, and Vector-Valued Learning
|
This paper presents a framework for computing random operator-valued feature
maps for operator-valued positive definite kernels. This is a generalization of
the random Fourier features for scalar-valued kernels to the operator-valued
case. Our general setting is that of operator-valued kernels corresponding to
RKHS of functions with values in a Hilbert space. We show that in general, for
a given kernel, there are potentially infinitely many random feature maps,
which can be bounded or unbounded. Most importantly, given a kernel, we present
a general, closed form formula for computing a corresponding probability
measure, which is required for the construction of the Fourier features, and
which, unlike the scalar case, is not uniquely and automatically determined by
the kernel. We also show that, under appropriate conditions, random bounded
feature maps can always be computed. Furthermore, we show the uniform
convergence, under the Hilbert-Schmidt norm, of the resulting approximate
kernel to the exact kernel on any compact subset of Euclidean space. Our
convergence requires differentiable kernels, an improvement over the
twice-differentiability requirement in previous work in the scalar setting. We
then show how operator-valued feature maps and their approximations can be
employed in a general vector-valued learning framework. The mathematical
formulation is illustrated by numerical examples on matrix-valued kernels.
|
[
"Ha Quang Minh",
"['Ha Quang Minh']"
] |
cs.LG cs.AI cs.NE
| null |
1608.05745
| null | null |
http://arxiv.org/pdf/1608.05745v4
|
2017-02-26T15:13:31Z
|
2016-08-19T21:54:46Z
|
RETAIN: An Interpretable Predictive Model for Healthcare using Reverse
Time Attention Mechanism
|
Accuracy and interpretability are two dominant features of successful
predictive models. Typically, a choice must be made in favor of complex black
box models such as recurrent neural networks (RNN) for accuracy versus less
accurate but more interpretable traditional models such as logistic regression.
This tradeoff poses challenges in medicine where both accuracy and
interpretability are important. We addressed this challenge by developing the
REverse Time AttentIoN model (RETAIN) for application to Electronic Health
Records (EHR) data. RETAIN achieves high accuracy while remaining clinically
interpretable and is based on a two-level neural attention model that detects
influential past visits and significant clinical variables within those visits
(e.g. key diagnoses). RETAIN mimics physician practice by attending the EHR
data in a reverse time order so that recent clinical visits are likely to
receive higher attention. RETAIN was tested on a large health system EHR
dataset with 14 million visits completed by 263K patients over an 8 year period
and demonstrated predictive accuracy and computational scalability comparable
to state-of-the-art methods such as RNN, and ease of interpretability
comparable to traditional models.
|
[
"Edward Choi, Mohammad Taha Bahadori, Joshua A. Kulas, Andy Schuetz,\n Walter F. Stewart, Jimeng Sun",
"['Edward Choi' 'Mohammad Taha Bahadori' 'Joshua A. Kulas' 'Andy Schuetz'\n 'Walter F. Stewart' 'Jimeng Sun']"
] |
cs.LG cs.IT math.IT math.ST stat.ML stat.TH
| null |
1608.05749
| null | null |
http://arxiv.org/pdf/1608.05749v1
|
2016-08-19T22:10:46Z
|
2016-08-19T22:10:46Z
|
Solving a Mixture of Many Random Linear Equations by Tensor
Decomposition and Alternating Minimization
|
We consider the problem of solving mixed random linear equations with $k$
components. This is the noiseless setting of mixed linear regression. The goal
is to estimate multiple linear models from mixed samples in the case where the
labels (which sample corresponds to which model) are not observed. We give a
tractable algorithm for the mixed linear equation problem, and show that under
some technical conditions, our algorithm is guaranteed to solve the problem
exactly with sample complexity linear in the dimension, and polynomial in $k$,
the number of components. Previous approaches have required either exponential
dependence on $k$, or super-linear dependence on the dimension. The proposed
algorithm is a combination of tensor decomposition and alternating
minimization. Our analysis involves proving that the initialization provided by
the tensor method allows alternating minimization, which is equivalent to EM in
our setting, to converge to the global optimum at a linear rate.
|
[
"['Xinyang Yi' 'Constantine Caramanis' 'Sujay Sanghavi']",
"Xinyang Yi, Constantine Caramanis, Sujay Sanghavi"
] |
cs.NA cs.LG math.NA
|
10.1162/NECO_a_00951
|
1608.05754
| null | null |
http://arxiv.org/abs/1608.05754v1
|
2016-08-19T23:07:08Z
|
2016-08-19T23:07:08Z
|
Fast estimation of approximate matrix ranks using spectral densities
|
In many machine learning and data related applications, it is required to
have the knowledge of approximate ranks of large data matrices at hand. In this
paper, we present two computationally inexpensive techniques to estimate the
approximate ranks of such large matrices. These techniques exploit approximate
spectral densities, popular in physics, which are probability density
distributions that measure the likelihood of finding eigenvalues of the matrix
at a given point on the real line. Integrating the spectral density over an
interval gives the eigenvalue count of the matrix in that interval. Therefore
the rank can be approximated by integrating the spectral density over a
carefully selected interval. Two different approaches are discussed to estimate
the approximate rank, one based on Chebyshev polynomials and the other based on
the Lanczos algorithm. In order to obtain the appropriate interval, it is
necessary to locate a gap between the eigenvalues that correspond to noise and
the relevant eigenvalues that contribute to the matrix rank. A method for
locating this gap and selecting the interval of integration is proposed based
on the plot of the spectral density. Numerical experiments illustrate the
performance of these techniques on matrices from typical applications.
|
[
"['Shashanka Ubaru' 'Yousef Saad' 'Abd-Krim Seghouane']",
"Shashanka Ubaru, Yousef Saad, Abd-Krim Seghouane"
] |
cs.CR cs.LG
|
10.1049/iet-ifs.2013.0095
|
1608.05812
| null | null |
http://arxiv.org/abs/1608.05812v1
|
2016-08-20T12:10:49Z
|
2016-08-20T12:10:49Z
|
Analysis of Bayesian Classification based Approaches for Android Malware
Detection
|
Mobile malware has been growing in scale and complexity spurred by the
unabated uptake of smartphones worldwide. Android is fast becoming the most
popular mobile platform resulting in sharp increase in malware targeting the
platform. Additionally, Android malware is evolving rapidly to evade detection
by traditional signature-based scanning. Despite current detection measures in
place, timely discovery of new malware is still a critical issue. This calls
for novel approaches to mitigate the growing threat of zero-day Android
malware. Hence, in this paper we develop and analyze proactive Machine Learning
approaches based on Bayesian classification aimed at uncovering unknown Android
malware via static analysis. The study, which is based on a large malware
sample set of majority of the existing families, demonstrates detection
capabilities with high accuracy. Empirical results and comparative analysis are
presented offering useful insight towards development of effective
static-analytic Bayesian classification based solutions for detecting unknown
Android malware.
|
[
"Suleiman Y. Yerima, Sakir Sezer, Gavin McWilliams",
"['Suleiman Y. Yerima' 'Sakir Sezer' 'Gavin McWilliams']"
] |
cs.CV cs.LG stat.ML
|
10.1109/TKDE.2015.2441716
|
1608.05889
| null | null |
http://arxiv.org/abs/1608.05889v1
|
2016-08-21T02:39:48Z
|
2016-08-21T02:39:48Z
|
Online Feature Selection with Group Structure Analysis
|
Online selection of dynamic features has attracted intensive interest in
recent years. However, existing online feature selection methods evaluate
features individually and ignore the underlying structure of feature stream.
For instance, in image analysis, features are generated in groups which
represent color, texture and other visual information. Simply breaking the
group structure in feature selection may degrade performance. Motivated by this
fact, we formulate the problem as an online group feature selection. The
problem assumes that features are generated individually but there are group
structure in the feature stream. To the best of our knowledge, this is the
first time that the correlation among feature stream has been considered in the
online feature selection process. To solve this problem, we develop a novel
online group feature selection method named OGFS. Our proposed approach
consists of two stages: online intra-group selection and online inter-group
selection. In the intra-group selection, we design a criterion based on
spectral analysis to select discriminative features in each group. In the
inter-group selection, we utilize a linear regression model to select an
optimal subset. This two-stage procedure continues until there are no more
features arriving or some predefined stopping conditions are met. %Our method
has been applied Finally, we apply our method to multiple tasks including image
classification %, face verification and face verification. Extensive empirical
studies performed on real-world and benchmark data sets demonstrate that our
method outperforms other state-of-the-art online feature selection %method
methods.
|
[
"Jing Wang and Meng Wang and Peipei Li and Luoqi Liu and Zhongqiu Zhao\n and Xuegang Hu and Xindong Wu",
"['Jing Wang' 'Meng Wang' 'Peipei Li' 'Luoqi Liu' 'Zhongqiu Zhao'\n 'Xuegang Hu' 'Xindong Wu']"
] |
stat.ML cs.AI cs.LG
|
10.1145/2983323.2983677
|
1608.05921
| null | null |
http://arxiv.org/abs/1608.05921v2
|
2016-09-05T04:52:33Z
|
2016-08-21T11:49:53Z
|
Probabilistic Knowledge Graph Construction: Compositional and
Incremental Approaches
|
Knowledge graph construction consists of two tasks: extracting information
from external resources (knowledge population) and inferring missing
information through a statistical analysis on the extracted information
(knowledge completion). In many cases, insufficient external resources in the
knowledge population hinder the subsequent statistical inference. The gap
between these two processes can be reduced by an incremental population
approach. We propose a new probabilistic knowledge graph factorisation method
that benefits from the path structure of existing knowledge (e.g. syllogism)
and enables a common modelling approach to be used for both incremental
population and knowledge completion tasks. More specifically, the probabilistic
formulation allows us to develop an incremental population algorithm that
trades off exploitation-exploration. Experiments on three benchmark datasets
show that the balanced exploitation-exploration helps the incremental
population, and the additional path structure helps to predict missing
information in knowledge completion.
|
[
"['Dongwoo Kim' 'Lexing Xie' 'Cheng Soon Ong']",
"Dongwoo Kim, Lexing Xie, Cheng Soon Ong"
] |
cs.LG q-bio.QM
| null |
1608.05949
| null | null |
http://arxiv.org/pdf/1608.05949v2
|
2016-09-12T07:54:51Z
|
2016-08-21T14:58:01Z
|
Distributed Representations for Biological Sequence Analysis
|
Biological sequence comparison is a key step in inferring the relatedness of
various organisms and the functional similarity of their components. Thanks to
the Next Generation Sequencing efforts, an abundance of sequence data is now
available to be processed for a range of bioinformatics applications. Embedding
a biological sequence over a nucleotide or amino acid alphabet in a lower
dimensional vector space makes the data more amenable for use by current
machine learning tools, provided the quality of embedding is high and it
captures the most meaningful information of the original sequences. Motivated
by recent advances in the text document embedding literature, we present a new
method, called seq2vec, to represent a complete biological sequence in an
Euclidean space. The new representation has the potential to capture the
contextual information of the original sequence necessary for sequence
comparison tasks. We test our embeddings with protein sequence classification
and retrieval tasks and demonstrate encouraging outcomes.
|
[
"['Dhananjay Kimothi' 'Akshay Soni' 'Pravesh Biyani' 'James M. Hogan']",
"Dhananjay Kimothi, Akshay Soni, Pravesh Biyani, James M. Hogan"
] |
cs.LG stat.ML
| null |
1608.05983
| null | null |
http://arxiv.org/pdf/1608.05983v2
|
2017-08-24T14:20:27Z
|
2016-08-21T19:02:27Z
|
Inverting Variational Autoencoders for Improved Generative Accuracy
|
Recent advances in semi-supervised learning with deep generative models have
shown promise in generalizing from small labeled datasets
($\mathbf{x},\mathbf{y}$) to large unlabeled ones ($\mathbf{x}$). In the case
where the codomain has known structure, a large unfeatured dataset
($\mathbf{y}$) is potentially available. We develop a parameter-efficient, deep
semi-supervised generative model for the purpose of exploiting this untapped
data source. Empirical results show improved performance in disentangling
latent variable semantics as well as improved discriminative prediction on
Martian spectroscopic and handwritten digit domains.
|
[
"['Ian Gemp' 'Ishan Durugkar' 'Mario Parente' 'M. Darby Dyar'\n 'Sridhar Mahadevan']",
"Ian Gemp, Ishan Durugkar, Mario Parente, M. Darby Dyar, Sridhar\n Mahadevan"
] |
stat.ML cs.LG
| null |
1608.05995
| null | null |
http://arxiv.org/pdf/1608.05995v5
|
2016-10-25T21:23:23Z
|
2016-08-21T20:28:29Z
|
A Non-convex One-Pass Framework for Generalized Factorization Machine
and Rank-One Matrix Sensing
|
We develop an efficient alternating framework for learning a generalized
version of Factorization Machine (gFM) on steaming data with provable
guarantees. When the instances are sampled from $d$ dimensional random Gaussian
vectors and the target second order coefficient matrix in gFM is of rank $k$,
our algorithm converges linearly, achieves $O(\epsilon)$ recovery error after
retrieving $O(k^{3}d\log(1/\epsilon))$ training instances, consumes $O(kd)$
memory in one-pass of dataset and only requires matrix-vector product
operations in each iteration. The key ingredient of our framework is a
construction of an estimation sequence endowed with a so-called Conditionally
Independent RIP condition (CI-RIP). As special cases of gFM, our framework can
be applied to symmetric or asymmetric rank-one matrix sensing problems, such as
inductive matrix completion and phase retrieval.
|
[
"Ming Lin and Jieping Ye",
"['Ming Lin' 'Jieping Ye']"
] |
cs.SI cs.LG cs.MA
| null |
1608.06007
| null | null |
http://arxiv.org/pdf/1608.06007v2
|
2016-12-28T02:17:22Z
|
2016-08-21T22:14:48Z
|
Distributed Probabilistic Bisection Search using Social Learning
|
We present a novel distributed probabilistic bisection algorithm using social
learning with application to target localization. Each agent in the network
first constructs a query about the target based on its local information and
obtains a noisy response. Agents then perform a Bayesian update of their
beliefs followed by an averaging of the log beliefs over local neighborhoods.
This two stage algorithm consisting of repeated querying and averaging runs
until convergence. We derive bounds on the rate of convergence of the beliefs
at the correct target location. Numerical simulations show that our method
outperforms current state of the art methods.
|
[
"Athanasios Tsiligkaridis and Theodoros Tsiligkaridis",
"['Athanasios Tsiligkaridis' 'Theodoros Tsiligkaridis']"
] |
cs.LG cs.AI cs.CV stat.ML
| null |
1608.0601
| null | null | null | null | null |
Feedback-Controlled Sequential Lasso Screening
|
One way to solve lasso problems when the dictionary does not fit into
available memory is to first screen the dictionary to remove unneeded features.
Prior research has shown that sequential screening methods offer the greatest
promise in this endeavor. Most existing work on sequential screening targets
the context of tuning parameter selection, where one screens and solves a
sequence of $N$ lasso problems with a fixed grid of geometrically spaced
regularization parameters. In contrast, we focus on the scenario where a target
regularization parameter has already been chosen via cross-validated model
selection, and we then need to solve many lasso instances using this fixed
value. In this context, we propose and explore a feedback controlled sequential
screening scheme. Feedback is used at each iteration to select the next problem
to be solved. This allows the sequence of problems to be adapted to the
instance presented and the number of intermediate problems to be automatically
selected. We demonstrate our feedback scheme using several datasets including a
dictionary of approximate size 100,000 by 300,000.
|
[
"Yun Wang, Xu Chen and Peter J. Ramadge"
] |
null | null |
1608.06010
| null | null |
http://arxiv.org/pdf/1608.06010v2
|
2016-08-25T22:52:30Z
|
2016-08-21T23:40:56Z
|
Feedback-Controlled Sequential Lasso Screening
|
One way to solve lasso problems when the dictionary does not fit into available memory is to first screen the dictionary to remove unneeded features. Prior research has shown that sequential screening methods offer the greatest promise in this endeavor. Most existing work on sequential screening targets the context of tuning parameter selection, where one screens and solves a sequence of $N$ lasso problems with a fixed grid of geometrically spaced regularization parameters. In contrast, we focus on the scenario where a target regularization parameter has already been chosen via cross-validated model selection, and we then need to solve many lasso instances using this fixed value. In this context, we propose and explore a feedback controlled sequential screening scheme. Feedback is used at each iteration to select the next problem to be solved. This allows the sequence of problems to be adapted to the instance presented and the number of intermediate problems to be automatically selected. We demonstrate our feedback scheme using several datasets including a dictionary of approximate size 100,000 by 300,000.
|
[
"['Yun Wang' 'Xu Chen' 'Peter J. Ramadge']"
] |
cs.LG cs.AI cs.CV stat.ML
| null |
1608.06014
| null | null |
http://arxiv.org/pdf/1608.06014v2
|
2016-08-25T22:05:24Z
|
2016-08-21T23:48:43Z
|
The Symmetry of a Simple Optimization Problem in Lasso Screening
|
Recently dictionary screening has been proposed as an effective way to
improve the computational efficiency of solving the lasso problem, which is one
of the most commonly used method for learning sparse representations. To
address today's ever increasing large dataset, effective screening relies on a
tight region bound on the solution to the dual lasso. Typical region bounds are
in the form of an intersection of a sphere and multiple half spaces. One way to
tighten the region bound is using more half spaces, which however, adds to the
overhead of solving the high dimensional optimization problem in lasso
screening. This paper reveals the interesting property that the optimization
problem only depends on the projection of features onto the subspace spanned by
the normals of the half spaces. This property converts an optimization problem
in high dimension to much lower dimension, and thus sheds light on reducing the
computation overhead of lasso screening based on tighter region bounds.
|
[
"['Yun Wang' 'Peter J. Ramadge']",
"Yun Wang and Peter J. Ramadge"
] |
cs.LG cs.NE
| null |
1608.06027
| null | null |
http://arxiv.org/pdf/1608.06027v4
|
2016-10-19T04:32:46Z
|
2016-08-22T01:42:45Z
|
Surprisal-Driven Feedback in Recurrent Networks
|
Recurrent neural nets are widely used for predicting temporal data. Their
inherent deep feedforward structure allows learning complex sequential
patterns. It is believed that top-down feedback might be an important missing
ingredient which in theory could help disambiguate similar patterns depending
on broader context. In this paper we introduce surprisal-driven recurrent
networks, which take into account past error information when making new
predictions. This is achieved by continuously monitoring the discrepancy
between most recent predictions and the actual observations. Furthermore, we
show that it outperforms other stochastic and fully deterministic approaches on
enwik8 character level prediction task achieving 1.37 BPC on the test portion
of the text.
|
[
"Kamil M Rocki",
"['Kamil M Rocki']"
] |
cs.LG cs.DS stat.ML
| null |
1608.06031
| null | null |
http://arxiv.org/pdf/1608.06031v2
|
2017-05-24T02:19:52Z
|
2016-08-22T02:05:10Z
|
Towards Instance Optimal Bounds for Best Arm Identification
|
In the classical best arm identification (Best-$1$-Arm) problem, we are given
$n$ stochastic bandit arms, each associated with a reward distribution with an
unknown mean. We would like to identify the arm with the largest mean with
probability at least $1-\delta$, using as few samples as possible.
Understanding the sample complexity of Best-$1$-Arm has attracted significant
attention since the last decade. However, the exact sample complexity of the
problem is still unknown.
Recently, Chen and Li made the gap-entropy conjecture concerning the instance
sample complexity of Best-$1$-Arm. Given an instance $I$, let $\mu_{[i]}$ be
the $i$th largest mean and $\Delta_{[i]}=\mu_{[1]}-\mu_{[i]}$ be the
corresponding gap. $H(I)=\sum_{i=2}^n\Delta_{[i]}^{-2}$ is the complexity of
the instance. The gap-entropy conjecture states that
$\Omega\left(H(I)\cdot\left(\ln\delta^{-1}+\mathsf{Ent}(I)\right)\right)$ is an
instance lower bound, where $\mathsf{Ent}(I)$ is an entropy-like term
determined by the gaps, and there is a $\delta$-correct algorithm for
Best-$1$-Arm with sample complexity
$O\left(H(I)\cdot\left(\ln\delta^{-1}+\mathsf{Ent}(I)\right)+\Delta_{[2]}^{-2}\ln\ln\Delta_{[2]}^{-1}\right)$.
If the conjecture is true, we would have a complete understanding of the
instance-wise sample complexity of Best-$1$-Arm.
We make significant progress towards the resolution of the gap-entropy
conjecture. For the upper bound, we provide a highly nontrivial algorithm which
requires \[O\left(H(I)\cdot\left(\ln\delta^{-1}
+\mathsf{Ent}(I)\right)+\Delta_{[2]}^{-2}\ln\ln\Delta_{[2]}^{-1}\mathrm{polylog}(n,\delta^{-1})\right)\]
samples in expectation. For the lower bound, we show that for any Gaussian
Best-$1$-Arm instance with gaps of the form $2^{-k}$, any $\delta$-correct
monotone algorithm requires $\Omega\left(H(I)\cdot\left(\ln\delta^{-1} +
\mathsf{Ent}(I)\right)\right)$ samples in expectation.
|
[
"Lijie Chen, Jian Li, Mingda Qiao",
"['Lijie Chen' 'Jian Li' 'Mingda Qiao']"
] |
stat.AP cs.LG stat.ML
| null |
1608.06048
| null | null |
http://arxiv.org/pdf/1608.06048v1
|
2016-08-22T04:27:28Z
|
2016-08-22T04:27:28Z
|
Survey of resampling techniques for improving classification performance
in unbalanced datasets
|
A number of classification problems need to deal with data imbalance between
classes. Often it is desired to have a high recall on the minority class while
maintaining a high precision on the majority class. In this paper, we review a
number of resampling techniques proposed in literature to handle unbalanced
datasets and study their effect on classification performance.
|
[
"['Ajinkya More']",
"Ajinkya More"
] |
cs.LG cs.CV
| null |
1608.06049
| null | null |
http://arxiv.org/pdf/1608.06049v2
|
2017-07-01T17:02:44Z
|
2016-08-22T04:32:21Z
|
Local Binary Convolutional Neural Networks
|
We propose local binary convolution (LBC), an efficient alternative to
convolutional layers in standard convolutional neural networks (CNN). The
design principles of LBC are motivated by local binary patterns (LBP). The LBC
layer comprises of a set of fixed sparse pre-defined binary convolutional
filters that are not updated during the training process, a non-linear
activation function and a set of learnable linear weights. The linear weights
combine the activated filter responses to approximate the corresponding
activated filter responses of a standard convolutional layer. The LBC layer
affords significant parameter savings, 9x to 169x in the number of learnable
parameters compared to a standard convolutional layer. Furthermore, the sparse
and binary nature of the weights also results in up to 9x to 169x savings in
model size compared to a standard convolutional layer. We demonstrate both
theoretically and experimentally that our local binary convolution layer is a
good approximation of a standard convolutional layer. Empirically, CNNs with
LBC layers, called local binary convolutional neural networks (LBCNN), achieves
performance parity with regular CNNs on a range of visual datasets (MNIST,
SVHN, CIFAR-10, and ImageNet) while enjoying significant computational savings.
|
[
"Felix Juefei-Xu, Vishnu Naresh Boddeti, Marios Savvides",
"['Felix Juefei-Xu' 'Vishnu Naresh Boddeti' 'Marios Savvides']"
] |
cs.LG cs.IT math.IT stat.ML
| null |
1608.06072
| null | null |
http://arxiv.org/pdf/1608.06072v2
|
2016-10-03T16:36:11Z
|
2016-08-22T07:47:56Z
|
Uniform Generalization, Concentration, and Adaptive Learning
|
One fundamental goal in any learning algorithm is to mitigate its risk for
overfitting. Mathematically, this requires that the learning algorithm enjoys a
small generalization risk, which is defined either in expectation or in
probability. Both types of generalization are commonly used in the literature.
For instance, generalization in expectation has been used to analyze
algorithms, such as ridge regression and SGD, whereas generalization in
probability is used in the VC theory, among others. Recently, a third notion of
generalization has been studied, called uniform generalization, which requires
that the generalization risk vanishes uniformly in expectation across all
bounded parametric losses. It has been shown that uniform generalization is, in
fact, equivalent to an information-theoretic stability constraint, and that it
recovers classical results in learning theory. It is achievable under various
settings, such as sample compression schemes, finite hypothesis spaces, finite
domains, and differential privacy. However, the relationship between uniform
generalization and concentration remained unknown. In this paper, we answer
this question by proving that, while a generalization in expectation does not
imply a generalization in probability, a uniform generalization in expectation
does imply concentration. We establish a chain rule for the uniform
generalization risk of the composition of hypotheses and use it to derive a
large deviation bound. Finally, we prove that the bound is tight.
|
[
"['Ibrahim Alabdulmohsin']",
"Ibrahim Alabdulmohsin"
] |
cs.LG cs.AI
| null |
1608.06154
| null | null |
http://arxiv.org/pdf/1608.06154v1
|
2016-08-22T12:59:31Z
|
2016-08-22T12:59:31Z
|
Multi-Sensor Prognostics using an Unsupervised Health Index based on
LSTM Encoder-Decoder
|
Many approaches for estimation of Remaining Useful Life (RUL) of a machine,
using its operational sensor data, make assumptions about how a system degrades
or a fault evolves, e.g., exponential degradation. However, in many domains
degradation may not follow a pattern. We propose a Long Short Term Memory based
Encoder-Decoder (LSTM-ED) scheme to obtain an unsupervised health index (HI)
for a system using multi-sensor time-series data. LSTM-ED is trained to
reconstruct the time-series corresponding to healthy state of a system. The
reconstruction error is used to compute HI which is then used for RUL
estimation. We evaluate our approach on publicly available Turbofan Engine and
Milling Machine datasets. We also present results on a real-world industry
dataset from a pulverizer mill where we find significant correlation between
LSTM-ED based HI and maintenance costs.
|
[
"['Pankaj Malhotra' 'Vishnu TV' 'Anusha Ramakrishnan' 'Gaurangi Anand'\n 'Lovekesh Vig' 'Puneet Agarwal' 'Gautam Shroff']",
"Pankaj Malhotra, Vishnu TV, Anusha Ramakrishnan, Gaurangi Anand,\n Lovekesh Vig, Puneet Agarwal, Gautam Shroff"
] |
cs.LG cs.IT math.IT stat.ML
| null |
1608.06203
| null | null |
http://arxiv.org/pdf/1608.06203v1
|
2016-08-22T15:58:31Z
|
2016-08-22T15:58:31Z
|
Computational and Statistical Tradeoffs in Learning to Rank
|
For massive and heterogeneous modern datasets, it is of fundamental interest
to provide guarantees on the accuracy of estimation when computational
resources are limited. In the application of learning to rank, we provide a
hierarchy of rank-breaking mechanisms ordered by the complexity in thus
generated sketch of the data. This allows the number of data points collected
to be gracefully traded off against computational resources available, while
guaranteeing the desired level of accuracy. Theoretical guarantees on the
proposed generalized rank-breaking implicitly provide such trade-offs, which
can be explicitly characterized under certain canonical scenarios on the
structure of the data.
|
[
"Ashish Khetan, Sewoong Oh",
"['Ashish Khetan' 'Sewoong Oh']"
] |
cs.RO cs.LG
| null |
1608.06235
| null | null |
http://arxiv.org/pdf/1608.06235v2
|
2016-09-11T23:11:23Z
|
2016-08-22T17:49:50Z
|
Adaptive Probabilistic Trajectory Optimization via Efficient Approximate
Inference
|
Robotic systems must be able to quickly and robustly make decisions when
operating in uncertain and dynamic environments. While Reinforcement Learning
(RL) can be used to compute optimal policies with little prior knowledge about
the environment, it suffers from slow convergence. An alternative approach is
Model Predictive Control (MPC), which optimizes policies quickly, but also
requires accurate models of the system dynamics and environment. In this paper
we propose a new approach, adaptive probabilistic trajectory optimization, that
combines the benefits of RL and MPC. Our method uses scalable approximate
inference to learn and updates probabilistic models in an online incremental
fashion while also computing optimal control policies via successive local
approximations. We present two variations of our algorithm based on the Sparse
Spectrum Gaussian Process (SSGP) model, and we test our algorithm on three
learning tasks, demonstrating the effectiveness and efficiency of our approach.
|
[
"['Yunpeng Pan' 'Xinyan Yan' 'Evangelos Theodorou' 'Byron Boots']",
"Yunpeng Pan, Xinyan Yan, Evangelos Theodorou and Byron Boots"
] |
cs.IR cs.LG stat.ML
| null |
1608.06253
| null | null |
http://arxiv.org/pdf/1608.06253v1
|
2016-08-22T18:20:18Z
|
2016-08-22T18:20:18Z
|
Multi-Dueling Bandits and Their Application to Online Ranker Evaluation
|
New ranking algorithms are continually being developed and refined,
necessitating the development of efficient methods for evaluating these
rankers. Online ranker evaluation focuses on the challenge of efficiently
determining, from implicit user feedback, which ranker out of a finite set of
rankers is the best. Online ranker evaluation can be modeled by dueling ban-
dits, a mathematical model for online learning under limited feedback from
pairwise comparisons. Comparisons of pairs of rankers is performed by
interleaving their result sets and examining which documents users click on.
The dueling bandits model addresses the key issue of which pair of rankers to
compare at each iteration, thereby providing a solution to the
exploration-exploitation trade-off. Recently, methods for simultaneously
comparing more than two rankers have been developed. However, the question of
which rankers to compare at each iteration was left open. We address this
question by proposing a generalization of the dueling bandits model that uses
simultaneous comparisons of an unrestricted number of rankers. We evaluate our
algorithm on synthetic data and several standard large-scale online ranker
evaluation datasets. Our experimental results show that the algorithm yields
orders of magnitude improvement in performance compared to stateof- the-art
dueling bandit algorithms.
|
[
"Brian Brost and Yevgeny Seldin and Ingemar J. Cox and Christina Lioma",
"['Brian Brost' 'Yevgeny Seldin' 'Ingemar J. Cox' 'Christina Lioma']"
] |
cs.LG q-bio.NC stat.ML
| null |
1608.06315
| null | null |
http://arxiv.org/pdf/1608.06315v1
|
2016-08-22T21:15:00Z
|
2016-08-22T21:15:00Z
|
LFADS - Latent Factor Analysis via Dynamical Systems
|
Neuroscience is experiencing a data revolution in which many hundreds or
thousands of neurons are recorded simultaneously. Currently, there is little
consensus on how such data should be analyzed. Here we introduce LFADS (Latent
Factor Analysis via Dynamical Systems), a method to infer latent dynamics from
simultaneously recorded, single-trial, high-dimensional neural spiking data.
LFADS is a sequential model based on a variational auto-encoder. By making a
dynamical systems hypothesis regarding the generation of the observed data,
LFADS reduces observed spiking to a set of low-dimensional temporal factors,
per-trial initial conditions, and inferred inputs. We compare LFADS to existing
methods on synthetic data and show that it significantly out-performs them in
inferring neural firing rates and latent dynamics.
|
[
"David Sussillo, Rafal Jozefowicz, L. F. Abbott, Chethan Pandarinath",
"['David Sussillo' 'Rafal Jozefowicz' 'L. F. Abbott' 'Chethan Pandarinath']"
] |
cs.LG cs.CV
| null |
1608.06374
| null | null |
http://arxiv.org/pdf/1608.06374v2
|
2016-10-02T03:01:51Z
|
2016-08-23T03:50:01Z
|
Deep Double Sparsity Encoder: Learning to Sparsify Not Only Features But
Also Parameters
|
This paper emphasizes the significance to jointly exploit the problem
structure and the parameter structure, in the context of deep modeling. As a
specific and interesting example, we describe the deep double sparsity encoder
(DDSE), which is inspired by the double sparsity model for dictionary learning.
DDSE simultaneously sparsities the output features and the learned model
parameters, under one unified framework. In addition to its intuitive model
interpretation, DDSE also possesses compact model size and low complexity.
Extensive simulations compare DDSE with several carefully-designed baselines,
and verify the consistently superior performance of DDSE. We further apply DDSE
to the novel application domain of brain encoding, with promising preliminary
results achieved.
|
[
"['Zhangyang Wang' 'Thomas S. Huang']",
"Zhangyang Wang, Thomas S. Huang"
] |
cs.LG
| null |
1608.06408
| null | null |
http://arxiv.org/pdf/1608.06408v1
|
2016-08-23T07:40:08Z
|
2016-08-23T07:40:08Z
|
Online Learning to Rank with Top-k Feedback
|
We consider two settings of online learning to rank where feedback is
restricted to top ranked items. The problem is cast as an online game between a
learner and sequence of users, over $T$ rounds. In both settings, the learners
objective is to present ranked list of items to the users. The learner's
performance is judged on the entire ranked list and true relevances of the
items. However, the learner receives highly restricted feedback at end of each
round, in form of relevances of only the top $k$ ranked items, where $k \ll m$.
The first setting is \emph{non-contextual}, where the list of items to be
ranked is fixed. The second setting is \emph{contextual}, where lists of items
vary, in form of traditional query-document lists. No stochastic assumption is
made on the generation process of relevances of items and contexts. We provide
efficient ranking strategies for both the settings. The strategies achieve
$O(T^{2/3})$ regret, where regret is based on popular ranking measures in first
setting and ranking surrogates in second setting. We also provide impossibility
results for certain ranking measures and a certain class of surrogates, when
feedback is restricted to the top ranked item, i.e. $k=1$. We empirically
demonstrate the performance of our algorithms on simulated and real world
datasets.
|
[
"Sougata Chaudhuri and Ambuj Tewari",
"['Sougata Chaudhuri' 'Ambuj Tewari']"
] |
cs.LG cs.IT cs.NI math.IT
| null |
1608.06409
| null | null |
http://arxiv.org/pdf/1608.06409v1
|
2016-08-23T07:41:31Z
|
2016-08-23T07:41:31Z
|
Learning to Communicate: Channel Auto-encoders, Domain Specific
Regularizers, and Attention
|
We address the problem of learning efficient and adaptive ways to communicate
binary information over an impaired channel. We treat the problem as
reconstruction optimization through impairment layers in a channel autoencoder
and introduce several new domain-specific regularizing layers to emulate common
channel impairments. We also apply a radio transformer network based attention
model on the input of the decoder to help recover canonical signal
representations. We demonstrate some promising initial capacity results from
this architecture and address several remaining challenges before such a system
could become practical.
|
[
"[\"Timothy J O'Shea\" 'Kiran Karra' 'T. Charles Clancy']",
"Timothy J O'Shea, Kiran Karra, T. Charles Clancy"
] |
cs.LG
|
10.1109/IISWC.2016.7581275
|
1608.06581
| null | null |
http://arxiv.org/abs/1608.06581v1
|
2016-08-23T17:11:07Z
|
2016-08-23T17:11:07Z
|
Fathom: Reference Workloads for Modern Deep Learning Methods
|
Deep learning has been popularized by its recent successes on challenging
artificial intelligence problems. One of the reasons for its dominance is also
an ongoing challenge: the need for immense amounts of computational power.
Hardware architects have responded by proposing a wide array of promising
ideas, but to date, the majority of the work has focused on specific algorithms
in somewhat narrow application domains. While their specificity does not
diminish these approaches, there is a clear need for more flexible solutions.
We believe the first step is to examine the characteristics of cutting edge
models from across the deep learning community.
Consequently, we have assembled Fathom: a collection of eight archetypal deep
learning workloads for study. Each of these models comes from a seminal work in
the deep learning community, ranging from the familiar deep convolutional
neural network of Krizhevsky et al., to the more exotic memory networks from
Facebook's AI research group. Fathom has been released online, and this paper
focuses on understanding the fundamental performance characteristics of each
model. We use a set of application-level modeling tools built around the
TensorFlow deep learning framework in order to analyze the behavior of the
Fathom workloads. We present a breakdown of where time is spent, the
similarities between the performance profiles of our models, an analysis of
behavior in inference and training, and the effects of parallelism on scaling.
|
[
"Robert Adolf, Saketh Rama, Brandon Reagen, Gu-Yeon Wei, David Brooks",
"['Robert Adolf' 'Saketh Rama' 'Brandon Reagen' 'Gu-Yeon Wei'\n 'David Brooks']"
] |
cs.IT cs.LG math.IT
| null |
1608.06602
| null | null |
http://arxiv.org/pdf/1608.06602v1
|
2016-08-23T18:49:03Z
|
2016-08-23T18:49:03Z
|
Self-Averaging Expectation Propagation
|
We investigate the problem of approximate Bayesian inference for a general
class of observation models by means of the expectation propagation (EP)
framework for large systems under some statistical assumptions. Our approach
tries to overcome the numerical bottleneck of EP caused by the inversion of
large matrices. Assuming that the measurement matrices are realizations of
specific types of ensembles we use the concept of freeness from random matrix
theory to show that the EP cavity variances exhibit an asymptotic
self-averaging property. They can be pre-computed using specific generating
functions, i.e. the R- and/or S-transforms in free probability, which do not
require matrix inversions. Our approach extends the framework of (generalized)
approximate message passing -- assumes zero-mean iid entries of the measurement
matrix -- to a general class of random matrix ensembles. The generalization is
via a simple formulation of the R- and/or S-transforms of the limiting
eigenvalue distribution of the Gramian of the measurement matrix. We
demonstrate the performance of our approach on a signal recovery problem of
nonlinear compressed sensing and compare it with that of EP.
|
[
"Burak \\c{C}akmak, Manfred Opper, Bernard H. Fleury and Ole Winther",
"['Burak Çakmak' 'Manfred Opper' 'Bernard H. Fleury' 'Ole Winther']"
] |
cs.LG
| null |
1608.06608
| null | null |
http://arxiv.org/pdf/1608.06608v3
|
2017-10-21T00:56:08Z
|
2016-08-23T19:14:47Z
|
Infinite-Label Learning with Semantic Output Codes
|
We develop a new statistical machine learning paradigm, named infinite-label
learning, to annotate a data point with more than one relevant labels from a
candidate set, which pools both the finite labels observed at training and a
potentially infinite number of previously unseen labels. The infinite-label
learning fundamentally expands the scope of conventional multi-label learning,
and better models the practical requirements in various real-world
applications, such as image tagging, ads-query association, and article
categorization. However, how can we learn a labeling function that is capable
of assigning to a data point the labels omitted from the training set? To
answer the question, we seek some clues from the recent work on zero-shot
learning, where the key is to represent a class/label by a vector of semantic
codes, as opposed to treating them as atomic labels. We validate the
infinite-label learning by a PAC bound in theory and some empirical studies on
both synthetic and real data.
|
[
"['Yang Zhang' 'Rupam Acharyya' 'Ji Liu' 'Boqing Gong']",
"Yang Zhang, Rupam Acharyya, Ji Liu, Boqing Gong"
] |
cs.IR cs.AI cs.CL cs.LG
|
10.1145/2872427.2882974
|
1608.06651
| null | null |
http://arxiv.org/abs/1608.06651v2
|
2017-09-17T04:57:54Z
|
2016-08-23T20:55:09Z
|
Unsupervised, Efficient and Semantic Expertise Retrieval
|
We introduce an unsupervised discriminative model for the task of retrieving
experts in online document collections. We exclusively employ textual evidence
and avoid explicit feature engineering by learning distributed word
representations in an unsupervised way. We compare our model to
state-of-the-art unsupervised statistical vector space and probabilistic
generative approaches. Our proposed log-linear model achieves the retrieval
performance levels of state-of-the-art document-centric methods with the low
inference cost of so-called profile-centric approaches. It yields a
statistically significant improved ranking over vector space and generative
models in most cases, matching the performance of supervised methods on various
benchmarks. That is, by using solely text we can do as well as methods that
work with external evidence and/or relevance feedback. A contrastive analysis
of rankings produced by discriminative and generative approaches shows that
they have complementary strengths due to the ability of the unsupervised
discriminative model to perform semantic matching.
|
[
"Christophe Van Gysel, Maarten de Rijke, Marcel Worring",
"['Christophe Van Gysel' 'Maarten de Rijke' 'Marcel Worring']"
] |
cs.LG cs.IR
| null |
1608.06664
| null | null |
http://arxiv.org/pdf/1608.06664v1
|
2016-08-23T22:44:42Z
|
2016-08-23T22:44:42Z
|
Topic Grids for Homogeneous Data Visualization
|
We propose the topic grids to detect anomaly and analyze the behavior based
on the access log content. Content-based behavioral risk is quantified in the
high dimensional space where the topics are generated from the log. The topics
are being projected homogeneously into a space that is perception- and
interaction-friendly to the human experts.
|
[
"['Shih-Chieh Su' 'Joseph Vaughn' 'Jean-Laurent Huynh']",
"Shih-Chieh Su, Joseph Vaughn and Jean-Laurent Huynh"
] |
q-bio.BM cs.LG
| null |
1608.06665
| null | null |
http://arxiv.org/pdf/1608.06665v1
|
2016-08-23T22:52:22Z
|
2016-08-23T22:52:22Z
|
Deep learning is competing random forest in computational docking
|
Computational docking is the core process of computer-aided drug design; it
aims at predicting the best orientation and conformation of a small drug
molecule when bound to a target large protein receptor. The docking quality is
typically measured by a scoring function: a mathematical predictive model that
produces a score representing the binding free energy and hence the stability
of the resulting complex molecule. We analyze the performance of both learning
techniques on the scoring power, the ranking power, docking power, and
screening power using the PDBbind 2013 database. For the scoring and ranking
powers, the proposed learning scoring functions depend on a wide range of
features (energy terms, pharmacophore, intermolecular) that entirely
characterize the protein-ligand complexes. For the docking and screening
powers, the proposed learning scoring functions depend on the intermolecular
features of the RF-Score to utilize a larger number of training complexes. For
the scoring power, the DL\_RF scoring function achieves Pearson's correlation
coefficient between the predicted and experimentally measured binding
affinities of 0.799 versus 0.758 of the RF scoring function. For the ranking
power, the DL scoring function ranks the ligands bound to fixed target protein
with accuracy 54% for the high-level ranking and with accuracy 78% for the
low-level ranking while the RF scoring function achieves (46% and 62%)
respectively. For the docking power, the DL\_RF scoring function has a success
rate when the three best-scored ligand binding poses are considered within 2
\AA\ root-mean-square-deviation from the native pose of 36.0% versus 30.2% of
the RF scoring function. For the screening power, the DL scoring function has
an average enrichment factor and success rate at the top 1% level of (2.69 and
6.45%) respectively versus (1.61 and 4.84%) respectively of the RF scoring
function.
|
[
"['Mohamed Khamis' 'Walid Gomaa' 'Basem Galal']",
"Mohamed Khamis, Walid Gomaa, Basem Galal"
] |
cs.LG
|
10.1109/TPAMI.2018.2860995
|
1608.06807
| null | null |
http://arxiv.org/abs/1608.06807v4
|
2018-03-14T08:04:58Z
|
2016-08-24T13:38:16Z
|
Efficient Training for Positive Unlabeled Learning
|
Positive unlabeled (PU) learning is useful in various practical situations,
where there is a need to learn a classifier for a class of interest from an
unlabeled data set, which may contain anomalies as well as samples from unknown
classes. The learning task can be formulated as an optimization problem under
the framework of statistical learning theory. Recent studies have theoretically
analyzed its properties and generalization performance, nevertheless, little
effort has been made to consider the problem of scalability, especially when
large sets of unlabeled data are available. In this work we propose a novel
scalable PU learning algorithm that is theoretically proven to provide the
optimal solution, while showing superior computational and memory performance.
Experimental evaluation confirms the theoretical evidence and shows that the
proposed method can be successfully applied to a large variety of real-world
problems involving PU learning.
|
[
"Emanuele Sansone, Francesco G.B. De Natale, Zhi-Hua Zhou",
"['Emanuele Sansone' 'Francesco G. B. De Natale' 'Zhi-Hua Zhou']"
] |
cs.CV cs.LG stat.ML
| null |
1608.06863
| null | null |
http://arxiv.org/pdf/1608.06863v1
|
2016-08-24T15:32:51Z
|
2016-08-24T15:32:51Z
|
Kullback-Leibler Penalized Sparse Discriminant Analysis for
Event-Related Potential Classification
|
A brain computer interface (BCI) is a system which provides direct
communication between the mind of a person and the outside world by using only
brain activity (EEG). The event-related potential (ERP)-based BCI problem
consists of a binary pattern recognition. Linear discriminant analysis (LDA) is
widely used to solve this type of classification problems, but it fails when
the number of features is large relative to the number of observations. In this
work we propose a penalized version of the sparse discriminant analysis (SDA),
called Kullback-Leibler penalized sparse discriminant analysis (KLSDA). This
method inherits both the discriminative feature selection and classification
properties of SDA and it also improves SDA performance through the addition of
Kullback-Leibler class discrepancy information. The KLSDA method is design to
automatically select the optimal regularization parameters. Numerical
experiments with two real ERP-EEG datasets show that this new method
outperforms standard SDA.
|
[
"['Victoria Peterson' 'Hugo Leonardo Rufiner' 'Ruben Daniel Spies']",
"Victoria Peterson, Hugo Leonardo Rufiner, Ruben Daniel Spies"
] |
math.OC cs.LG stat.ML
| null |
1608.06879
| null | null |
http://arxiv.org/pdf/1608.06879v1
|
2016-08-24T16:04:12Z
|
2016-08-24T16:04:12Z
|
AIDE: Fast and Communication Efficient Distributed Optimization
|
In this paper, we present two new communication-efficient methods for
distributed minimization of an average of functions. The first algorithm is an
inexact variant of the DANE algorithm that allows any local algorithm to return
an approximate solution to a local subproblem. We show that such a strategy
does not affect the theoretical guarantees of DANE significantly. In fact, our
approach can be viewed as a robustification strategy since the method is
substantially better behaved than DANE on data partition arising in practice.
It is well known that DANE algorithm does not match the communication
complexity lower bounds. To bridge this gap, we propose an accelerated variant
of the first method, called AIDE, that not only matches the communication lower
bounds but can also be implemented using a purely first-order oracle. Our
empirical results show that AIDE is superior to other communication efficient
algorithms in settings that naturally arise in machine learning applications.
|
[
"['Sashank J. Reddi' 'Jakub Konečný' 'Peter Richtárik' 'Barnabás Póczós'\n 'Alex Smola']",
"Sashank J. Reddi, Jakub Kone\\v{c}n\\'y, Peter Richt\\'arik, Barnab\\'as\n P\\'ocz\\'os, Alex Smola"
] |
stat.ML cs.CV cs.LG cs.NE
| null |
1608.06884
| null | null |
http://arxiv.org/pdf/1608.06884v2
|
2016-09-03T15:32:04Z
|
2016-08-24T16:15:22Z
|
Towards Bayesian Deep Learning: A Framework and Some Existing Methods
|
While perception tasks such as visual object recognition and text
understanding play an important role in human intelligence, the subsequent
tasks that involve inference, reasoning and planning require an even higher
level of intelligence. The past few years have seen major advances in many
perception tasks using deep learning models. For higher-level inference,
however, probabilistic graphical models with their Bayesian nature are still
more powerful and flexible. To achieve integrated intelligence that involves
both perception and inference, it is naturally desirable to tightly integrate
deep learning and Bayesian models within a principled probabilistic framework,
which we call Bayesian deep learning. In this unified framework, the perception
of text or images using deep learning can boost the performance of higher-level
inference and in return, the feedback from the inference process is able to
enhance the perception of text or images. This paper proposes a general
framework for Bayesian deep learning and reviews its recent applications on
recommender systems, topic models, and control. In this paper, we also discuss
the relationship and differences between Bayesian deep learning and other
related topics like Bayesian treatment of neural networks.
|
[
"['Hao Wang' 'Dit-Yan Yeung']",
"Hao Wang and Dit-Yan Yeung"
] |
cs.LG
| null |
1608.06984
| null | null |
http://arxiv.org/pdf/1608.06984v4
|
2017-04-26T18:00:36Z
|
2016-08-24T23:22:06Z
|
Learning an Optimization Algorithm through Human Design Iterations
|
Solving optimal design problems through crowdsourcing faces a dilemma: On one
hand, human beings have been shown to be more effective than algorithms at
searching for good solutions of certain real-world problems with
high-dimensional or discrete solution spaces; on the other hand, the cost of
setting up crowdsourcing environments, the uncertainty in the crowd's
domain-specific competence, and the lack of commitment of the crowd, all
contribute to the lack of real-world application of design crowdsourcing. We
are thus motivated to investigate a solution-searching mechanism where an
optimization algorithm is tuned based on human demonstrations on solution
searching, so that the search can be continued after human participants abandon
the problem. To do so, we model the iterative search process as a Bayesian
Optimization (BO) algorithm, and propose an inverse BO (IBO) algorithm to find
the maximum likelihood estimators of the BO parameters based on human
solutions. We show through a vehicle design and control problem that the search
performance of BO can be improved by recovering its parameters based on an
effective human search. Thus, IBO has the potential to improve the success rate
of design crowdsourcing activities, by requiring only good search strategies
instead of good solutions from the crowd.
|
[
"Thurston Sexton and Max Yi Ren",
"['Thurston Sexton' 'Max Yi Ren']"
] |
cs.CV cs.LG
| null |
1608.06993
| null | null |
http://arxiv.org/pdf/1608.06993v5
|
2018-01-28T17:12:02Z
|
2016-08-25T00:44:55Z
|
Densely Connected Convolutional Networks
|
Recent work has shown that convolutional networks can be substantially
deeper, more accurate, and efficient to train if they contain shorter
connections between layers close to the input and those close to the output. In
this paper, we embrace this observation and introduce the Dense Convolutional
Network (DenseNet), which connects each layer to every other layer in a
feed-forward fashion. Whereas traditional convolutional networks with L layers
have L connections - one between each layer and its subsequent layer - our
network has L(L+1)/2 direct connections. For each layer, the feature-maps of
all preceding layers are used as inputs, and its own feature-maps are used as
inputs into all subsequent layers. DenseNets have several compelling
advantages: they alleviate the vanishing-gradient problem, strengthen feature
propagation, encourage feature reuse, and substantially reduce the number of
parameters. We evaluate our proposed architecture on four highly competitive
object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet).
DenseNets obtain significant improvements over the state-of-the-art on most of
them, whilst requiring less computation to achieve high performance. Code and
pre-trained models are available at https://github.com/liuzhuang13/DenseNet .
|
[
"Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger",
"['Gao Huang' 'Zhuang Liu' 'Laurens van der Maaten' 'Kilian Q. Weinberger']"
] |
cs.AI cs.LG stat.ML
| null |
1608.07001
| null | null |
http://arxiv.org/pdf/1608.07001v1
|
2016-08-25T01:56:20Z
|
2016-08-25T01:56:20Z
|
Incremental Minimax Optimization based Fuzzy Clustering for Large
Multi-view Data
|
Incremental clustering approaches have been proposed for handling large data
when given data set is too large to be stored. The key idea of these approaches
is to find representatives to represent each cluster in each data chunk and
final data analysis is carried out based on those identified representatives
from all the chunks. However, most of the incremental approaches are used for
single view data. As large multi-view data generated from multiple sources
becomes prevalent nowadays, there is a need for incremental clustering
approaches to handle both large and multi-view data. In this paper we propose a
new incremental clustering approach called incremental minimax optimization
based fuzzy clustering (IminimaxFCM) to handle large multi-view data. In
IminimaxFCM, representatives with multiple views are identified to represent
each cluster by integrating multiple complementary views using minimax
optimization. The detailed problem formulation, updating rules derivation, and
the in-depth analysis of the proposed IminimaxFCM are provided. Experimental
studies on several real world multi-view data sets have been conducted. We
observed that IminimaxFCM outperforms related incremental fuzzy clustering in
terms of clustering accuracy, demonstrating the great potential of IminimaxFCM
for large multi-view data analysis.
|
[
"Yangtao Wang, Lihui Chen, Xiaoli Li",
"['Yangtao Wang' 'Lihui Chen' 'Xiaoli Li']"
] |
cs.AI cs.LG stat.ML
| null |
1608.07005
| null | null |
http://arxiv.org/pdf/1608.07005v1
|
2016-08-25T02:15:37Z
|
2016-08-25T02:15:37Z
|
Multi-View Fuzzy Clustering with Minimax Optimization for Effective
Clustering of Data from Multiple Sources
|
Multi-view data clustering refers to categorizing a data set by making good
use of related information from multiple representations of the data. It
becomes important nowadays because more and more data can be collected in a
variety of ways, in different settings and from different sources, so each data
set can be represented by different sets of features to form different views of
it. Many approaches have been proposed to improve clustering performance by
exploring and integrating heterogeneous information underlying different views.
In this paper, we propose a new multi-view fuzzy clustering approach called
MinimaxFCM by using minimax optimization based on well-known Fuzzy c means. In
MinimaxFCM the consensus clustering results are generated based on minimax
optimization in which the maximum disagreements of different weighted views are
minimized. Moreover, the weight of each view can be learned automatically in
the clustering process. In addition, there is only one parameter to be set
besides the fuzzifier. The detailed problem formulation, updating rules
derivation, and the in-depth analysis of the proposed MinimaxFCM are provided
here. Experimental studies on nine multi-view data sets including real world
image and document data sets have been conducted. We observed that MinimaxFCM
outperforms related multi-view clustering approaches in terms of clustering
accuracy, demonstrating the great potential of MinimaxFCM for multi-view data
analysis.
|
[
"Yangtao Wang, Lihui Chen",
"['Yangtao Wang' 'Lihui Chen']"
] |
cs.LG stat.ML
|
10.1016/j.compbiolchem.2016.09.010
|
1608.07019
| null | null |
http://arxiv.org/abs/1608.07019v5
|
2017-06-17T04:12:57Z
|
2016-08-25T05:14:57Z
|
Comparison among dimensionality reduction techniques based on Random
Projection for cancer classification
|
Random Projection (RP) technique has been widely applied in many scenarios
because it can reduce high-dimensional features into low-dimensional space
within short time and meet the need of real-time analysis of massive data.
There is an urgent need of dimensionality reduction with fast increase of big
genomics data. However, the performance of RP is usually lower. We attempt to
improve classification accuracy of RP through combining other reduction
dimension methods such as Principle Component Analysis (PCA), Linear
Discriminant Analysis (LDA), and Feature Selection (FS). We compared
classification accuracy and running time of different combination methods on
three microarray datasets and a simulation dataset. Experimental results show a
remarkable improvement of 14.77% in classification accuracy of FS followed by
RP compared to RP on BC-TCGA dataset. LDA followed by RP also helps RP to yield
a more discriminative subspace with an increase of 13.65% on classification
accuracy on the same dataset. FS followed by RP outperforms other combination
methods in classification accuracy on most of the datasets.
|
[
"['Haozhe Xie' 'Jie Li' 'Qiaosheng Zhang' 'Yadong Wang']",
"Haozhe Xie, Jie Li, Qiaosheng Zhang and Yadong Wang"
] |
cs.LG cs.IR
|
10.1145/2983323.2983672
|
1608.07051
| null | null |
http://arxiv.org/abs/1608.07051v1
|
2016-08-25T08:39:47Z
|
2016-08-25T08:39:47Z
|
Learning Points and Routes to Recommend Trajectories
|
The problem of recommending tours to travellers is an important and broadly
studied area. Suggested solutions include various approaches of
points-of-interest (POI) recommendation and route planning. We consider the
task of recommending a sequence of POIs, that simultaneously uses information
about POIs and routes. Our approach unifies the treatment of various sources of
information by representing them as features in machine learning algorithms,
enabling us to learn from past behaviour. Information about POIs are used to
learn a POI ranking model that accounts for the start and end points of tours.
Data about previous trajectories are used for learning transition patterns
between POIs that enable us to recommend probable routes. In addition, a
probabilistic model is proposed to combine the results of POI ranking and the
POI to POI transitions. We propose a new F$_1$ score on pairs of POIs that
capture the order of visits. Empirical results show that our approach improves
on recent methods, and demonstrate that combining points and routes enables
better trajectory recommendations.
|
[
"Dawei Chen, Cheng Soon Ong, Lexing Xie",
"['Dawei Chen' 'Cheng Soon Ong' 'Lexing Xie']"
] |
cs.LG math.OC
| null |
1608.07159
| null | null |
http://arxiv.org/pdf/1608.07159v1
|
2016-08-25T14:14:16Z
|
2016-08-25T14:14:16Z
|
Active Robust Learning
|
In many practical applications of learning algorithms, unlabeled data is
cheap and abundant whereas labeled data is expensive. Active learning
algorithms developed to achieve better performance with lower cost. Usually
Representativeness and Informativeness are used in active learning algoirthms.
Advanced recent active learning methods consider both of these criteria.
Despite its vast literature, very few active learning methods consider noisy
instances, i.e. label noisy and outlier instances. Also, these methods didn't
consider accuracy in computing representativeness and informativeness. Based on
the idea that inaccuracy in these measures and not taking noisy instances into
consideration are two sides of a coin and are inherently related, a new loss
function is proposed. This new loss function helps to decrease the effect of
noisy instances while at the same time, reduces bias. We defined "instance
complexity" as a new notion of complexity for instances of a learning problem.
It is proved that noisy instances in the data if any, are the ones with maximum
instance complexity. Based on this loss function which has two functions for
classifying ordinary and noisy instances, a new classifier, named
"Simple-Complex Classifier" is proposed. In this classifier there are a simple
and a complex function, with the complex function responsible for selecting
noisy instances. The resulting optimization problem for both learning and
active learning is highly non-convex and very challenging. In order to solve
it, a convex relaxation is proposed.
|
[
"Hossein Ghafarian and Hadi Sadoghi Yazdi",
"['Hossein Ghafarian' 'Hadi Sadoghi Yazdi']"
] |
cs.LG cs.DS stat.ML
| null |
1608.07179
| null | null |
http://arxiv.org/pdf/1608.07179v1
|
2016-08-25T14:43:17Z
|
2016-08-25T14:43:17Z
|
Minimizing Quadratic Functions in Constant Time
|
A sampling-based optimization method for quadratic functions is proposed. Our
method approximately solves the following $n$-dimensional quadratic
minimization problem in constant time, which is independent of $n$:
$z^*=\min_{\mathbf{v} \in \mathbb{R}^n}\langle\mathbf{v}, A \mathbf{v}\rangle +
n\langle\mathbf{v}, \mathrm{diag}(\mathbf{d})\mathbf{v}\rangle +
n\langle\mathbf{b}, \mathbf{v}\rangle$, where $A \in \mathbb{R}^{n \times n}$
is a matrix and $\mathbf{d},\mathbf{b} \in \mathbb{R}^n$ are vectors. Our
theoretical analysis specifies the number of samples $k(\delta, \epsilon)$ such
that the approximated solution $z$ satisfies $|z - z^*| = O(\epsilon n^2)$ with
probability $1-\delta$. The empirical performance (accuracy and runtime) is
positively confirmed by numerical experiments.
|
[
"['Kohei Hayashi' 'Yuichi Yoshida']",
"Kohei Hayashi, Yuichi Yoshida"
] |
cs.AI cs.CL cs.CY cs.LG
|
10.1126/science.aal4230
|
1608.07187
| null | null |
http://arxiv.org/abs/1608.07187v4
|
2017-05-25T17:50:31Z
|
2016-08-25T15:07:17Z
|
Semantics derived automatically from language corpora contain human-like
biases
|
Artificial intelligence and machine learning are in a period of astounding
growth. However, there are concerns that these technologies may be used, either
with or without intention, to perpetuate the prejudice and unfairness that
unfortunately characterizes many human institutions. Here we show for the first
time that human-like semantic biases result from the application of standard
machine learning to ordinary language---the same sort of language humans are
exposed to every day. We replicate a spectrum of standard human biases as
exposed by the Implicit Association Test and other well-known psychological
studies. We replicate these using a widely used, purely statistical
machine-learning model---namely, the GloVe word embedding---trained on a corpus
of text from the Web. Our results indicate that language itself contains
recoverable and accurate imprints of our historic biases, whether these are
morally neutral as towards insects or flowers, problematic as towards race or
gender, or even simply veridical, reflecting the {\em status quo} for the
distribution of gender with respect to careers or first names. These
regularities are captured by machine learning along with the rest of semantics.
In addition to our empirical findings concerning language, we also contribute
new methods for evaluating bias in text, the Word Embedding Association Test
(WEAT) and the Word Embedding Factual Association Test (WEFAT). Our results
have implications not only for AI and machine learning, but also for the fields
of psychology, sociology, and human ethics, since they raise the possibility
that mere exposure to everyday language can account for the biases we replicate
here.
|
[
"['Aylin Caliskan' 'Joanna J. Bryson' 'Arvind Narayanan']",
"Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan"
] |
cs.DC cs.LG
| null |
1608.07249
| null | null |
http://arxiv.org/pdf/1608.07249v7
|
2017-02-17T11:02:08Z
|
2016-08-25T18:48:16Z
|
Benchmarking State-of-the-Art Deep Learning Software Tools
|
Deep learning has been shown as a successful machine learning method for a
variety of tasks, and its popularity results in numerous open-source deep
learning software tools. Training a deep network is usually a very
time-consuming process. To address the computational challenge in deep
learning, many tools exploit hardware features such as multi-core CPUs and
many-core GPUs to shorten the training time. However, different tools exhibit
different features and running performance when training different types of
deep networks on different hardware platforms, which makes it difficult for end
users to select an appropriate pair of software and hardware. In this paper, we
aim to make a comparative study of the state-of-the-art GPU-accelerated deep
learning software tools, including Caffe, CNTK, MXNet, TensorFlow, and Torch.
We first benchmark the running performance of these tools with three popular
types of neural networks on two CPU platforms and three GPU platforms. We then
benchmark some distributed versions on multiple GPUs. Our contribution is
two-fold. First, for end users of deep learning tools, our benchmarking results
can serve as a guide to selecting appropriate hardware platforms and software
tools. Second, for software developers of deep learning tools, our in-depth
analysis points out possible future directions to further optimize the running
performance.
|
[
"Shaohuai Shi, Qiang Wang, Pengfei Xu, Xiaowen Chu",
"['Shaohuai Shi' 'Qiang Wang' 'Pengfei Xu' 'Xiaowen Chu']"
] |
cs.LG stat.ML
| null |
1608.07251
| null | null |
http://arxiv.org/pdf/1608.07251v1
|
2016-08-19T23:21:49Z
|
2016-08-19T23:21:49Z
|
Large-scale Collaborative Imaging Genetics Studies of Risk Genetic
Factors for Alzheimer's Disease Across Multiple Institutions
|
Genome-wide association studies (GWAS) offer new opportunities to identify
genetic risk factors for Alzheimer's disease (AD). Recently, collaborative
efforts across different institutions emerged that enhance the power of many
existing techniques on individual institution data. However, a major barrier to
collaborative studies of GWAS is that many institutions need to preserve
individual data privacy. To address this challenge, we propose a novel
distributed framework, termed Local Query Model (LQM) to detect risk SNPs for
AD across multiple research institutions. To accelerate the learning process,
we propose a Distributed Enhanced Dual Polytope Projection (D-EDPP) screening
rule to identify irrelevant features and remove them from the optimization. To
the best of our knowledge, this is the first successful run of the
computationally intensive model selection procedure to learn a consistent model
across different institutions without compromising their privacy while ranking
the SNPs that may collectively affect AD. Empirical studies are conducted on
809 subjects with 5.9 million SNP features which are distributed across three
individual institutions. D-EDPP achieved a 66-fold speed-up by effectively
identifying irrelevant features.
|
[
"Qingyang Li, Tao Yang, Liang Zhan, Derrek Paul Hibar, Neda Jahanshad,\n Yalin Wang, Jieping Ye, Paul M. Thompson, Jie Wang",
"['Qingyang Li' 'Tao Yang' 'Liang Zhan' 'Derrek Paul Hibar'\n 'Neda Jahanshad' 'Yalin Wang' 'Jieping Ye' 'Paul M. Thompson' 'Jie Wang']"
] |
math.OC cs.GT cs.LG
|
10.1007/s10107-017-1228-2
|
1608.0731
| null | null | null | null | null |
Learning in games with continuous action sets and unknown payoff
functions
|
This paper examines the convergence of no-regret learning in games with
continuous action sets. For concreteness, we focus on learning via "dual
averaging", a widely used class of no-regret learning schemes where players
take small steps along their individual payoff gradients and then "mirror" the
output back to their action sets. In terms of feedback, we assume that players
can only estimate their payoff gradients up to a zero-mean error with bounded
variance. To study the convergence of the induced sequence of play, we
introduce the notion of variational stability, and we show that stable
equilibria are locally attracting with high probability whereas globally stable
equilibria are globally attracting with probability 1. We also discuss some
applications to mixed-strategy learning in finite games, and we provide
explicit estimates of the method's convergence speed.
|
[
"Panayotis Mertikopoulos and Zhengyuan Zhou"
] |
null | null |
1608.07310
| null | null |
http://arxiv.org/abs/1608.07310v2
|
2018-01-16T03:57:37Z
|
2016-08-25T21:01:23Z
|
Learning in games with continuous action sets and unknown payoff
functions
|
This paper examines the convergence of no-regret learning in games with continuous action sets. For concreteness, we focus on learning via "dual averaging", a widely used class of no-regret learning schemes where players take small steps along their individual payoff gradients and then "mirror" the output back to their action sets. In terms of feedback, we assume that players can only estimate their payoff gradients up to a zero-mean error with bounded variance. To study the convergence of the induced sequence of play, we introduce the notion of variational stability, and we show that stable equilibria are locally attracting with high probability whereas globally stable equilibria are globally attracting with probability 1. We also discuss some applications to mixed-strategy learning in finite games, and we provide explicit estimates of the method's convergence speed.
|
[
"['Panayotis Mertikopoulos' 'Zhengyuan Zhou']"
] |
cs.LG cs.IT math.IT
| null |
1608.07328
| null | null |
http://arxiv.org/pdf/1608.07328v1
|
2016-08-25T22:43:46Z
|
2016-08-25T22:43:46Z
|
Fundamental Limits of Budget-Fidelity Trade-off in Label Crowdsourcing
|
Digital crowdsourcing (CS) is a modern approach to perform certain large
projects using small contributions of a large crowd. In CS, a taskmaster
typically breaks down the project into small batches of tasks and assigns them
to so-called workers with imperfect skill levels. The crowdsourcer then
collects and analyzes the results for inference and serving the purpose of the
project. In this work, the CS problem, as a human-in-the-loop computation
problem, is modeled and analyzed in an information theoretic rate-distortion
framework. The purpose is to identify the ultimate fidelity that one can
achieve by any form of query from the crowd and any decoding (inference)
algorithm with a given budget. The results are established by a joint source
channel (de)coding scheme, which represent the query scheme and inference, over
parallel noisy channels, which model workers with imperfect skill levels. We
also present and analyze a query scheme dubbed $k$-ary incidence coding and
study optimized query pricing in this setting.
|
[
"['Farshad Lahouti' 'Babak Hassibi']",
"Farshad Lahouti, Babak Hassibi"
] |
cs.IR cs.LG
| null |
1608.074
| null | null | null | null | null |
Collaborative Filtering with Recurrent Neural Networks
|
We show that collaborative filtering can be viewed as a sequence prediction
problem, and that given this interpretation, recurrent neural networks offer
very competitive approach. In particular we study how the long short-term
memory (LSTM) can be applied to collaborative filtering, and how it compares to
standard nearest neighbors and matrix factorization methods on movie
recommendation. We show that the LSTM is competitive in all aspects, and
largely outperforms other methods in terms of item coverage and short term
predictions.
|
[
"Robin Devooght and Hugues Bersini"
] |
null | null |
1608.07400
| null | null |
http://arxiv.org/pdf/1608.07400v2
|
2017-01-03T07:41:44Z
|
2016-08-26T09:20:21Z
|
Collaborative Filtering with Recurrent Neural Networks
|
We show that collaborative filtering can be viewed as a sequence prediction problem, and that given this interpretation, recurrent neural networks offer very competitive approach. In particular we study how the long short-term memory (LSTM) can be applied to collaborative filtering, and how it compares to standard nearest neighbors and matrix factorization methods on movie recommendation. We show that the LSTM is competitive in all aspects, and largely outperforms other methods in terms of item coverage and short term predictions.
|
[
"['Robin Devooght' 'Hugues Bersini']"
] |
cs.LG cs.AI cs.CV stat.ML
| null |
1608.07441
| null | null |
http://arxiv.org/pdf/1608.07441v1
|
2016-08-26T12:42:43Z
|
2016-08-26T12:42:43Z
|
Hard Negative Mining for Metric Learning Based Zero-Shot Classification
|
Zero-Shot learning has been shown to be an efficient strategy for domain
adaptation. In this context, this paper builds on the recent work of Bucher et
al. [1], which proposed an approach to solve Zero-Shot classification problems
(ZSC) by introducing a novel metric learning based objective function. This
objective function allows to learn an optimal embedding of the attributes
jointly with a measure of similarity between images and attributes. This paper
extends their approach by proposing several schemes to control the generation
of the negative pairs, resulting in a significant improvement of the
performance and giving above state-of-the-art results on three challenging ZSC
datasets.
|
[
"Maxime Bucher (Palaiseau), St\\'ephane Herbin (Palaiseau), Fr\\'ed\\'eric\n Jurie",
"['Maxime Bucher' 'Stéphane Herbin' 'Frédéric Jurie']"
] |
cs.LG cs.CR stat.ML
| null |
1608.07502
| null | null |
http://arxiv.org/pdf/1608.07502v1
|
2016-08-26T16:15:43Z
|
2016-08-26T16:15:43Z
|
Entity Embedding-based Anomaly Detection for Heterogeneous Categorical
Events
|
Anomaly detection plays an important role in modern data-driven security
applications, such as detecting suspicious access to a socket from a process.
In many cases, such events can be described as a collection of categorical
values that are considered as entities of different types, which we call
heterogeneous categorical events. Due to the lack of intrinsic distance
measures among entities, and the exponentially large event space, most existing
work relies heavily on heuristics to calculate abnormal scores for events.
Different from previous work, we propose a principled and unified probabilistic
model APE (Anomaly detection via Probabilistic pairwise interaction and Entity
embedding) that directly models the likelihood of events. In this model, we
embed entities into a common latent space using their observed co-occurrence in
different events. More specifically, we first model the compatibility of each
pair of entities according to their embeddings. Then we utilize the weighted
pairwise interactions of different entity types to define the event
probability. Using Noise-Contrastive Estimation with "context-dependent" noise
distribution, our model can be learned efficiently regardless of the large
event space. Experimental results on real enterprise surveillance data show
that our methods can accurately detect abnormal events compared to other
state-of-the-art abnormal detection techniques.
|
[
"Ting Chen, Lu-An Tang, Yizhou Sun, Zhengzhang Chen, Kai Zhang",
"['Ting Chen' 'Lu-An Tang' 'Yizhou Sun' 'Zhengzhang Chen' 'Kai Zhang']"
] |
cs.LG stat.ML
| null |
1608.07536
| null | null |
http://arxiv.org/pdf/1608.07536v1
|
2016-08-26T17:47:58Z
|
2016-08-26T17:47:58Z
|
Leveraging over intact priors for boosting control and dexterity of
prosthetic hands by amputees
|
Non-invasive myoelectric prostheses require a long training time to obtain
satisfactory control dexterity. These training times could possibly be reduced
by leveraging over training efforts by previous subjects. So-called domain
adaptation algorithms formalize this strategy and have indeed been shown to
significantly reduce the amount of required training data for intact subjects
for myoelectric movements classification. It is not clear, however, whether
these results extend also to amputees and, if so, whether prior information
from amputees and intact subjects is equally useful. To overcome this problem,
we evaluated several domain adaptation algorithms on data coming from both
amputees and intact subjects. Our findings indicate that: (1) the use of
previous experience from other subjects allows us to reduce the training time
by about an order of magnitude; (2) this improvement holds regardless of
whether an amputee exploits previous information from other amputees or from
intact subjects.
|
[
"Valentina Gregori and Barbara Caputo",
"['Valentina Gregori' 'Barbara Caputo']"
] |
stat.ML cs.LG cs.SI
|
10.1109/TSIPN.2016.2601022
|
1608.07605
| null | null |
http://arxiv.org/abs/1608.07605v1
|
2016-08-26T20:49:06Z
|
2016-08-26T20:49:06Z
|
Clustering and Community Detection with Imbalanced Clusters
|
Spectral clustering methods which are frequently used in clustering and
community detection applications are sensitive to the specific graph
constructions particularly when imbalanced clusters are present. We show that
ratio cut (RCut) or normalized cut (NCut) objectives are not tailored to
imbalanced cluster sizes since they tend to emphasize cut sizes over cut
values. We propose a graph partitioning problem that seeks minimum cut
partitions under minimum size constraints on partitions to deal with imbalanced
cluster sizes. Our approach parameterizes a family of graphs by adaptively
modulating node degrees on a fixed node set, yielding a set of parameter
dependent cuts reflecting varying levels of imbalance. The solution to our
problem is then obtained by optimizing over these parameters. We present
rigorous limit cut analysis results to justify our approach and demonstrate the
superiority of our method through experiments on synthetic and real datasets
for data clustering, semi-supervised learning and community detection.
|
[
"Cem Aksoylar, Jing Qian, Venkatesh Saligrama",
"['Cem Aksoylar' 'Jing Qian' 'Venkatesh Saligrama']"
] |
cs.LG
| null |
1608.07619
| null | null |
http://arxiv.org/pdf/1608.07619v1
|
2016-08-26T22:17:56Z
|
2016-08-26T22:17:56Z
|
Interacting with Massive Behavioral Data
|
In this short paper, we propose the split-diffuse (SD) algorithm that takes
the output of an existing word embedding algorithm, and distributes the data
points uniformly across the visualization space. The result improves the
perceivability and the interactability by the human.
We apply the SD algorithm to analyze the user behavior through access logs
within the cyber security domain. The result, named the topic grids, is a set
of grids on various topics generated from the logs. On the same set of grids,
different behavioral metrics can be shown on different targets over different
periods of time, to provide visualization and interaction to the human experts.
Analysis, investigation, and other types of interaction can be performed on
the topic grids more efficiently than on the output of existing dimension
reduction methods. In addition to the cyber security domain, the topic grids
can be further applied to other domains like e-commerce, credit card
transaction, customer service to analyze the behavior in a large scale.
|
[
"Shih-Chieh Su",
"['Shih-Chieh Su']"
] |
cs.LG
| null |
1608.07625
| null | null |
http://arxiv.org/pdf/1608.07625v1
|
2016-08-26T23:07:43Z
|
2016-08-26T23:07:43Z
|
Large Scale Behavioral Analytics via Topical Interaction
|
We propose the split-diffuse (SD) algorithm that takes the output of an
existing dimension reduction algorithm, and distributes the data points
uniformly across the visualization space. The result, called the topic grids,
is a set of grids on various topics which are generated from the free-form text
content of any domain of interest. The topic grids efficiently utilizes the
visualization space to provide visual summaries for massive data. Topical
analysis, comparison and interaction can be performed on the topic grids in a
more perceivable way.
|
[
"Shih-Chieh Su",
"['Shih-Chieh Su']"
] |
math.ST cs.LG stat.CO stat.ML stat.TH
| null |
1608.0763
| null | null | null | null | null |
Global analysis of Expectation Maximization for mixtures of two
Gaussians
|
Expectation Maximization (EM) is among the most popular algorithms for
estimating parameters of statistical models. However, EM, which is an iterative
algorithm based on the maximum likelihood principle, is generally only
guaranteed to find stationary points of the likelihood objective, and these
points may be far from any maximizer. This article addresses this disconnect
between the statistical principles behind EM and its algorithmic properties.
Specifically, it provides a global analysis of EM for specific models in which
the observations comprise an i.i.d. sample from a mixture of two Gaussians.
This is achieved by (i) studying the sequence of parameters from idealized
execution of EM in the infinite sample limit, and fully characterizing the
limit points of the sequence in terms of the initial parameters; and then (ii)
based on this convergence analysis, establishing statistical consistency (or
lack thereof) for the actual sequence of parameters produced by EM.
|
[
"Ji Xu, Daniel Hsu, Arian Maleki"
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.