categories
string | doi
string | id
string | year
float64 | venue
string | link
string | updated
string | published
string | title
string | abstract
string | authors
list |
---|---|---|---|---|---|---|---|---|---|---|
cs.LG
| null |
1607.02552
| null | null |
http://arxiv.org/pdf/1607.02552v2
|
2016-08-26T01:06:24Z
|
2016-07-08T23:42:22Z
|
Online Learning Schemes for Power Allocation in Energy Harvesting
Communications
|
We consider the problem of power allocation over a time-varying channel with
unknown distribution in energy harvesting communication systems. In this
problem, the transmitter has to choose the transmit power based on the amount
of stored energy in its battery with the goal of maximizing the average rate
obtained over time. We model this problem as a Markov decision process (MDP)
with the transmitter as the agent, the battery status as the state, the
transmit power as the action and the rate obtained as the reward. The average
reward maximization problem over the MDP can be solved by a linear program (LP)
that uses the transition probabilities for the state-action pairs and their
reward values to choose a power allocation policy. Since the rewards associated
the state-action pairs are unknown, we propose two online learning algorithms:
UCLP and Epoch-UCLP that learn these rewards and adapt their policies along the
way. The UCLP algorithm solves the LP at each step to decide its current policy
using the upper confidence bounds on the rewards, while the Epoch-UCLP
algorithm divides the time into epochs, solves the LP only at the beginning of
the epochs and follows the obtained policy in that epoch. We prove that the
reward losses or regrets incurred by both these algorithms are upper bounded by
constants. Epoch-UCLP incurs a higher regret compared to UCLP, but reduces the
computational requirements substantially. We also show that the presented
algorithms work for online learning in cost minimization problems like the
packet scheduling with power-delay tradeoff with minor changes.
|
[
"Pranav Sakulkar and Bhaskar Krishnamachari",
"['Pranav Sakulkar' 'Bhaskar Krishnamachari']"
] |
cs.LG
| null |
1607.02559
| null | null |
http://arxiv.org/pdf/1607.02559v1
|
2016-07-09T02:29:53Z
|
2016-07-09T02:29:53Z
|
Uncovering Locally Discriminative Structure for Feature Analysis
|
Manifold structure learning is often used to exploit geometric information
among data in semi-supervised feature learning algorithms. In this paper, we
find that local discriminative information is also of importance for
semi-supervised feature learning. We propose a method that utilizes both the
manifold structure of data and local discriminant information. Specifically, we
define a local clique for each data point. The k-Nearest Neighbors (kNN) is
used to determine the structural information within each clique. We then employ
a variant of Fisher criterion model to each clique for local discriminant
evaluation and sum all cliques as global integration into the framework. In
this way, local discriminant information is embedded. Labels are also utilized
to minimize distances between data from the same class. In addition, we use the
kernel method to extend our proposed model and facilitate feature learning in a
high-dimensional space after feature mapping. Experimental results show that
our method is superior to all other compared methods over a number of datasets.
|
[
"Sen Wang and Feiping Nie and Xiaojun Chang and Xue Li and Quan Z.\n Sheng and Lina Yao",
"['Sen Wang' 'Feiping Nie' 'Xiaojun Chang' 'Xue Li' 'Quan Z. Sheng'\n 'Lina Yao']"
] |
cs.CV cs.LG
| null |
1607.02586
| null | null |
http://arxiv.org/pdf/1607.02586v1
|
2016-07-09T08:41:40Z
|
2016-07-09T08:41:40Z
|
Visual Dynamics: Probabilistic Future Frame Synthesis via Cross
Convolutional Networks
|
We study the problem of synthesizing a number of likely future frames from a
single input image. In contrast to traditional methods, which have tackled this
problem in a deterministic or non-parametric way, we propose a novel approach
that models future frames in a probabilistic manner. Our probabilistic model
makes it possible for us to sample and synthesize many possible future frames
from a single input image. Future frame synthesis is challenging, as it
involves low- and high-level image and motion understanding. We propose a novel
network structure, namely a Cross Convolutional Network to aid in synthesizing
future frames; this network structure encodes image and motion information as
feature maps and convolutional kernels, respectively. In experiments, our model
performs well on synthetic data, such as 2D shapes and animated game sprites,
as well as on real-wold videos. We also show that our model can be applied to
tasks such as visual analogy-making, and present an analysis of the learned
network representations.
|
[
"['Tianfan Xue' 'Jiajun Wu' 'Katherine L. Bouman' 'William T. Freeman']",
"Tianfan Xue, Jiajun Wu, Katherine L. Bouman, William T. Freeman"
] |
cs.LG stat.AP stat.ML
| null |
1607.02665
| null | null |
http://arxiv.org/pdf/1607.02665v2
|
2018-02-19T20:18:35Z
|
2016-07-09T21:18:23Z
|
Classifier Risk Estimation under Limited Labeling Resources
|
In this paper we propose strategies for estimating performance of a
classifier when labels cannot be obtained for the whole test set. The number of
test instances which can be labeled is very small compared to the whole test
data size. The goal then is to obtain a precise estimate of classifier
performance using as little labeling resource as possible. Specifically, we try
to answer, how to select a subset of the large test set for labeling such that
the performance of a classifier estimated on this subset is as close as
possible to the one on the whole test set. We propose strategies based on
stratified sampling for selecting this subset. We show that these strategies
can reduce the variance in estimation of classifier accuracy by a significant
amount compared to simple random sampling (over 65% in several cases). Hence,
our proposed methods are much more precise compared to random sampling for
accuracy estimation under restricted labeling resources. The reduction in
number of samples required (compared to random sampling) to estimate the
classifier accuracy with only 1% error is high as 60% in some cases.
|
[
"['Anurag Kumar' 'Bhiksha Raj']",
"Anurag Kumar, Bhiksha Raj"
] |
cs.LG
| null |
1607.02705
| null | null |
http://arxiv.org/pdf/1607.02705v1
|
2016-07-10T07:34:27Z
|
2016-07-10T07:34:27Z
|
Dealing with Class Imbalance using Thresholding
|
We propose thresholding as an approach to deal with class imbalance. We
define the concept of thresholding as a process of determining a decision
boundary in the presence of a tunable parameter. The threshold is the maximum
value of this tunable parameter where the conditions of a certain decision are
satisfied. We show that thresholding is applicable not only for linear
classifiers but also for non-linear classifiers. We show that this is the
implicit assumption for many approaches to deal with class imbalance in linear
classifiers. We then extend this paradigm beyond linear classification and show
how non-linear classification can be dealt with under this umbrella framework
of thresholding. The proposed method can be used for outlier detection in many
real-life scenarios like in manufacturing. In advanced manufacturing units,
where the manufacturing process has matured over time, the number of instances
(or parts) of the product that need to be rejected (based on a strict regime of
quality tests) becomes relatively rare and are defined as outliers. How to
detect these rare parts or outliers beforehand? How to detect combination of
conditions leading to these outliers? These are the questions motivating our
research. This paper focuses on prediction of outliers and conditions leading
to outliers using classification. We address the problem of outlier detection
using classification. The classes are good parts (those passing the quality
tests) and bad parts (those failing the quality tests and can be considered as
outliers). The rarity of outliers transforms this problem into a
class-imbalanced classification problem.
|
[
"['Charmgil Hong' 'Rumi Ghosh' 'Soundar Srinivasan']",
"Charmgil Hong, Rumi Ghosh, Soundar Srinivasan"
] |
cs.AI cs.LG stat.ML
| null |
1607.02763
| null | null |
http://arxiv.org/pdf/1607.02763v1
|
2016-07-10T16:19:00Z
|
2016-07-10T16:19:00Z
|
How to Allocate Resources For Features Acquisition?
|
We study classification problems where features are corrupted by noise and
where the magnitude of the noise in each feature is influenced by the resources
allocated to its acquisition. This is the case, for example, when multiple
sensors share a common resource (power, bandwidth, attention, etc.). We develop
a method for computing the optimal resource allocation for a variety of
scenarios and derive theoretical bounds concerning the benefit that may arise
by non-uniform allocation. We further demonstrate the effectiveness of the
developed method in simulations.
|
[
"['Oran Richman' 'Shie Mannor']",
"Oran Richman, Shie Mannor"
] |
math.OC cs.LG stat.ML
| null |
1607.02793
| null | null |
http://arxiv.org/pdf/1607.02793v3
|
2017-11-22T15:40:31Z
|
2016-07-10T23:15:18Z
|
On Faster Convergence of Cyclic Block Coordinate Descent-type Methods
for Strongly Convex Minimization
|
The cyclic block coordinate descent-type (CBCD-type) methods, which performs
iterative updates for a few coordinates (a block) simultaneously throughout the
procedure, have shown remarkable computational performance for solving strongly
convex minimization problems. Typical applications include many popular
statistical machine learning methods such as elastic-net regression, ridge
penalized logistic regression, and sparse additive regression. Existing
optimization literature has shown that for strongly convex minimization, the
CBCD-type methods attain iteration complexity of
$\mathcal{O}(p\log(1/\epsilon))$, where $\epsilon$ is a pre-specified accuracy
of the objective value, and $p$ is the number of blocks. However, such
iteration complexity explicitly depends on $p$, and therefore is at least $p$
times worse than the complexity $\mathcal{O}(\log(1/\epsilon))$ of gradient
descent (GD) methods. To bridge this theoretical gap, we propose an improved
convergence analysis for the CBCD-type methods. In particular, we first show
that for a family of quadratic minimization problems, the iteration complexity
$\mathcal{O}(\log^2(p)\cdot\log(1/\epsilon))$ of the CBCD-type methods matches
that of the GD methods in term of dependency on $p$, up to a $\log^2 p$ factor.
Thus our complexity bounds are sharper than the existing bounds by at least a
factor of $p/\log^2(p)$. We also provide a lower bound to confirm that our
improved complexity bounds are tight (up to a $\log^2 (p)$ factor), under the
assumption that the largest and smallest eigenvalues of the Hessian matrix do
not scale with $p$. Finally, we generalize our analysis to other strongly
convex minimization problems beyond quadratic ones.
|
[
"Xingguo Li, Tuo Zhao, Raman Arora, Han Liu, Mingyi Hong",
"['Xingguo Li' 'Tuo Zhao' 'Raman Arora' 'Han Liu' 'Mingyi Hong']"
] |
cs.LG
| null |
1607.02834
| null | null |
http://arxiv.org/pdf/1607.02834v2
|
2016-07-14T03:04:51Z
|
2016-07-11T06:45:46Z
|
Tight Lower Bounds for Multiplicative Weights Algorithmic Families
|
We study the fundamental problem of prediction with expert advice and develop
regret lower bounds for a large family of algorithms for this problem. We
develop simple adversarial primitives, that lend themselves to various
combinations leading to sharp lower bounds for many algorithmic families. We
use these primitives to show that the classic Multiplicative Weights Algorithm
(MWA) has a regret of $\sqrt{\frac{T \ln k}{2}}$, there by completely closing
the gap between upper and lower bounds. We further show a regret lower bound of
$\frac{2}{3}\sqrt{\frac{T\ln k}{2}}$ for a much more general family of
algorithms than MWA, where the learning rate can be arbitrarily varied over
time, or even picked from arbitrary distributions over time. We also use our
primitives to construct adversaries in the geometric horizon setting for MWA to
precisely characterize the regret at $\frac{0.391}{\sqrt{\delta}}$ for the case
of $2$ experts and a lower bound of $\frac{1}{2}\sqrt{\frac{\ln k}{2\delta}}$
for the case of arbitrary number of experts $k$.
|
[
"Nick Gravin, Yuval Peres, Balasubramanian Sivan",
"['Nick Gravin' 'Yuval Peres' 'Balasubramanian Sivan']"
] |
cs.NE cs.LG cs.MM cs.SD
| null |
1607.02857
| null | null |
http://arxiv.org/pdf/1607.02857v1
|
2016-07-11T08:33:48Z
|
2016-07-11T08:33:48Z
|
Classifying Variable-Length Audio Files with All-Convolutional Networks
and Masked Global Pooling
|
We trained a deep all-convolutional neural network with masked global pooling
to perform single-label classification for acoustic scene classification and
multi-label classification for domestic audio tagging in the DCASE-2016
contest. Our network achieved an average accuracy of 84.5% on the four-fold
cross-validation for acoustic scene recognition, compared to the provided
baseline of 72.5%, and an average equal error rate of 0.17 for domestic audio
tagging, compared to the baseline of 0.21. The network therefore improves the
baselines by a relative amount of 17% and 19%, respectively. The network only
consists of convolutional layers to extract features from the short-time
Fourier transform and one global pooling layer to combine those features. It
particularly possesses neither fully-connected layers, besides the
fully-connected output layer, nor dropout layers.
|
[
"['Lars Hertel' 'Huy Phan' 'Alfred Mertins']",
"Lars Hertel, Huy Phan, Alfred Mertins"
] |
cs.LG cs.IR
| null |
1607.02858
| null | null |
http://arxiv.org/pdf/1607.02858v1
|
2016-07-11T08:37:42Z
|
2016-07-11T08:37:42Z
|
Incremental Factorization Machines for Persistently Cold-starting Online
Item Recommendation
|
Real-world item recommenders commonly suffer from a persistent cold-start
problem which is caused by dynamically changing users and items. In order to
overcome the problem, several context-aware recommendation techniques have been
recently proposed. In terms of both feasibility and performance, factorization
machine (FM) is one of the most promising methods as generalization of the
conventional matrix factorization techniques. However, since online algorithms
are suitable for dynamic data, the static FMs are still inadequate. Thus, this
paper proposes incremental FMs (iFMs), a general online factorization
framework, and specially extends iFMs into an online item recommender. The
proposed framework can be a promising baseline for further development of the
production recommender systems. Evaluation is done empirically both on
synthetic and real-world unstable datasets.
|
[
"Takuya Kitazawa",
"['Takuya Kitazawa']"
] |
cs.GT cs.LG stat.ML
| null |
1607.02959
| null | null |
http://arxiv.org/pdf/1607.02959v2
|
2016-10-19T19:56:19Z
|
2016-07-11T14:05:16Z
|
From Behavior to Sparse Graphical Games: Efficient Recovery of
Equilibria
|
In this paper we study the problem of exact recovery of the pure-strategy
Nash equilibria (PSNE) set of a graphical game from noisy observations of joint
actions of the players alone. We consider sparse linear influence games --- a
parametric class of graphical games with linear payoffs, and represented by
directed graphs of n nodes (players) and in-degree of at most k. We present an
$\ell_1$-regularized logistic regression based algorithm for recovering the
PSNE set exactly, that is both computationally efficient --- i.e. runs in
polynomial time --- and statistically efficient --- i.e. has logarithmic sample
complexity. Specifically, we show that the sufficient number of samples
required for exact PSNE recovery scales as $\mathcal{O}(\mathrm{poly}(k) \log
n)$. We also validate our theoretical results using synthetic experiments.
|
[
"['Asish Ghoshal' 'Jean Honorio']",
"Asish Ghoshal and Jean Honorio"
] |
cs.LG stat.ML
| null |
1607.0305
| null | null | null | null | null |
Learning a metric for class-conditional KNN
|
Naive Bayes Nearest Neighbour (NBNN) is a simple and effective framework
which addresses many of the pitfalls of K-Nearest Neighbour (KNN)
classification. It has yielded competitive results on several computer vision
benchmarks. Its central tenet is that during NN search, a query is not compared
to every example in a database, ignoring class information. Instead, NN
searches are performed within each class, generating a score per class. A key
problem with NN techniques, including NBNN, is that they fail when the data
representation does not capture perceptual (e.g.~class-based) similarity. NBNN
circumvents this by using independent engineered descriptors (e.g.~SIFT). To
extend its applicability outside of image-based domains, we propose to learn a
metric which captures perceptual similarity. Similar to how Neighbourhood
Components Analysis optimizes a differentiable form of KNN classification, we
propose "Class Conditional" metric learning (CCML), which optimizes a soft form
of the NBNN selection rule. Typical metric learning algorithms learn either a
global or local metric. However, our proposed method can be adjusted to a
particular level of locality by tuning a single parameter. An empirical
evaluation on classification and retrieval tasks demonstrates that our proposed
method clearly outperforms existing learned distance metrics across a variety
of image and non-image datasets.
|
[
"Daniel Jiwoong Im, Graham W. Taylor"
] |
null | null |
1607.03050
| null | null |
http://arxiv.org/pdf/1607.03050v1
|
2016-07-11T17:29:19Z
|
2016-07-11T17:29:19Z
|
Learning a metric for class-conditional KNN
|
Naive Bayes Nearest Neighbour (NBNN) is a simple and effective framework which addresses many of the pitfalls of K-Nearest Neighbour (KNN) classification. It has yielded competitive results on several computer vision benchmarks. Its central tenet is that during NN search, a query is not compared to every example in a database, ignoring class information. Instead, NN searches are performed within each class, generating a score per class. A key problem with NN techniques, including NBNN, is that they fail when the data representation does not capture perceptual (e.g.~class-based) similarity. NBNN circumvents this by using independent engineered descriptors (e.g.~SIFT). To extend its applicability outside of image-based domains, we propose to learn a metric which captures perceptual similarity. Similar to how Neighbourhood Components Analysis optimizes a differentiable form of KNN classification, we propose "Class Conditional" metric learning (CCML), which optimizes a soft form of the NBNN selection rule. Typical metric learning algorithms learn either a global or local metric. However, our proposed method can be adjusted to a particular level of locality by tuning a single parameter. An empirical evaluation on classification and retrieval tasks demonstrates that our proposed method clearly outperforms existing learned distance metrics across a variety of image and non-image datasets.
|
[
"['Daniel Jiwoong Im' 'Graham W. Taylor']"
] |
cs.NA cs.LG math.OC stat.ML
| null |
1607.03081
| null | null |
http://arxiv.org/pdf/1607.03081v2
|
2017-10-17T01:08:29Z
|
2016-07-11T19:20:06Z
|
Proximal Quasi-Newton Methods for Regularized Convex Optimization with
Linear and Accelerated Sublinear Convergence Rates
|
In [19], a general, inexact, efficient proximal quasi-Newton algorithm for
composite optimization problems has been proposed and a sublinear global
convergence rate has been established. In this paper, we analyze the
convergence properties of this method, both in the exact and inexact setting,
in the case when the objective function is strongly convex. We also investigate
a practical variant of this method by establishing a simple stopping criterion
for the subproblem optimization. Furthermore, we consider an accelerated
variant, based on FISTA [1], to the proximal quasi-Newton algorithm. A similar
accelerated method has been considered in [7], where the convergence rate
analysis relies on very strong impractical assumptions. We present a modified
analysis while relaxing these assumptions and perform a practical comparison of
the accelerated proximal quasi- Newton algorithm and the regular one. Our
analysis and computational results show that acceleration may not bring any
benefit in the quasi-Newton setting.
|
[
"Hiva Ghanbari, Katya Scheinberg",
"['Hiva Ghanbari' 'Katya Scheinberg']"
] |
cs.LG cs.DS math.OC stat.ML
| null |
1607.03084
| null | null |
http://arxiv.org/pdf/1607.03084v1
|
2016-07-11T19:25:07Z
|
2016-07-11T19:25:07Z
|
Kernel-based methods for bandit convex optimization
|
We consider the adversarial convex bandit problem and we build the first
$\mathrm{poly}(T)$-time algorithm with $\mathrm{poly}(n) \sqrt{T}$-regret for
this problem. To do so we introduce three new ideas in the derivative-free
optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli
convolutions, and (iii) a new annealing schedule for exponential weights (with
increasing learning rate). The basic version of our algorithm achieves
$\tilde{O}(n^{9.5} \sqrt{T})$-regret, and we show that a simple variant of this
algorithm can be run in $\mathrm{poly}(n \log(T))$-time per step at the cost of
an additional $\mathrm{poly}(n) T^{o(1)}$ factor in the regret. These results
improve upon the $\tilde{O}(n^{11} \sqrt{T})$-regret and
$\exp(\mathrm{poly}(T))$-time result of the first two authors, and the
$\log(T)^{\mathrm{poly}(n)} \sqrt{T}$-regret and
$\log(T)^{\mathrm{poly}(n)}$-time result of Hazan and Li. Furthermore we
conjecture that another variant of the algorithm could achieve
$\tilde{O}(n^{1.5} \sqrt{T})$-regret, and moreover that this regret is
unimprovable (the current best lower bound being $\Omega(n \sqrt{T})$ and it is
achieved with linear functions). For the simpler situation of zeroth order
stochastic convex optimization this corresponds to the conjecture that the
optimal query complexity is of order $n^3 / \epsilon^2$.
|
[
"['Sébastien Bubeck' 'Ronen Eldan' 'Yin Tat Lee']",
"S\\'ebastien Bubeck and Ronen Eldan and Yin Tat Lee"
] |
cs.LG cs.NE
| null |
1607.03085
| null | null |
http://arxiv.org/pdf/1607.03085v3
|
2016-10-23T02:01:55Z
|
2016-07-11T19:29:44Z
|
Recurrent Memory Array Structures
|
The following report introduces ideas augmenting standard Long Short Term
Memory (LSTM) architecture with multiple memory cells per hidden unit in order
to improve its generalization capabilities. It considers both deterministic and
stochastic variants of memory operation. It is shown that the nondeterministic
Array-LSTM approach improves state-of-the-art performance on character level
text prediction achieving 1.402 BPC on enwik8 dataset. Furthermore, this report
estabilishes baseline neural-based results of 1.12 BPC and 1.19 BPC for enwik9
and enwik10 datasets respectively.
|
[
"Kamil Rocki",
"['Kamil Rocki']"
] |
cs.LG
| null |
1607.03182
| null | null |
http://arxiv.org/pdf/1607.03182v1
|
2016-07-11T22:08:58Z
|
2016-07-11T22:08:58Z
|
Stream-based Online Active Learning in a Contextual Multi-Armed Bandit
Framework
|
We study the stream-based online active learning in a contextual multi-armed
bandit framework. In this framework, the reward depends on both the arm and the
context. In a stream-based active learning setting, obtaining the ground truth
of the reward is costly, and the conventional contextual multi-armed bandit
algorithm fails to achieve a sublinear regret due to this cost. Hence, the
algorithm needs to determine whether or not to request the ground truth of the
reward at current time slot. In our framework, we consider a stream-based
active learning setting in which a query request for the ground truth is sent
to the annotator, together with some prior information of the ground truth.
Depending on the accuracy of the prior information, the query cost varies. Our
algorithm mainly carries out two operations: the refinement of the context and
arm spaces and the selection of actions. In our algorithm, the partitions of
the context space and the arm space are maintained for a certain time slots,
and then become finer as more information about the rewards accumulates. We use
a strategic way to select the arms and to request the ground truth of the
reward, aiming to maximize the total reward. We analytically show that the
regret is sublinear and in the same order with that of the conventional
contextual multi-armed bandit algorithms, where no query cost
|
[
"Linqi Song",
"['Linqi Song']"
] |
cs.LG cs.DS stat.ML
| null |
1607.03183
| null | null |
http://arxiv.org/pdf/1607.03183v1
|
2016-07-11T22:10:04Z
|
2016-07-11T22:10:04Z
|
How to calculate partition functions using convex programming
hierarchies: provable bounds for variational methods
|
We consider the problem of approximating partition functions for Ising
models. We make use of recent tools in combinatorial optimization: the
Sherali-Adams and Lasserre convex programming hierarchies, in combination with
variational methods to get algorithms for calculating partition functions in
these families. These techniques give new, non-trivial approximation guarantees
for the partition function beyond the regime of correlation decay. They also
generalize some classical results from statistical physics about the
Curie-Weiss ferromagnetic Ising model, as well as provide a partition function
counterpart of classical results about max-cut on dense graphs
\cite{arora1995polynomial}. With this, we connect techniques from two
apparently disparate research areas -- optimization and counting/partition
function approximations. (i.e. \#-P type of problems).
Furthermore, we design to the best of our knowledge the first provable,
convex variational methods. Though in the literature there are a host of convex
versions of variational methods \cite{wainwright2003tree, wainwright2005new,
heskes2006convexity, meshi2009convexifying}, they come with no guarantees
(apart from some extremely special cases, like e.g. the graph has a single
cycle \cite{weiss2000correctness}). We consider dense and low threshold rank
graphs, and interestingly, the reason our approach works on these types of
graphs is because local correlations propagate to global correlations --
completely the opposite of algorithms based on correlation decay. In the
process we design novel entropy approximations based on the low-order moments
of a distribution.
Our proof techniques are very simple and generic, and likely to be applicable
to many other settings other than Ising models.
|
[
"['Andrej Risteski']",
"Andrej Risteski"
] |
cs.IT cs.LG math.IT stat.ML
| null |
1607.03191
| null | null |
http://arxiv.org/pdf/1607.03191v1
|
2016-07-11T22:40:31Z
|
2016-07-11T22:40:31Z
|
On Deterministic Conditions for Subspace Clustering under Missing Data
|
In this paper we present deterministic conditions for success of sparse
subspace clustering (SSC) under missing data, when data is assumed to come from
a Union of Subspaces (UoS) model. We consider two algorithms, which are
variants of SSC with entry-wise zero-filling that differ in terms of the
optimization problems used to find affinity matrix for spectral clustering. For
both the algorithms, we provide deterministic conditions for any pattern of
missing data such that perfect clustering can be achieved. We provide extensive
sets of simulation results for clustering as well as completion of data at
missing entries, under the UoS model. Our experimental results indicate that in
contrast to the full data case, accurate clustering does not imply accurate
subspace identification and completion, indicating the natural order of
relative hardness of these problems.
|
[
"Wenqi Wang and Shuchin Aeron and Vaneet Aggarwal",
"['Wenqi Wang' 'Shuchin Aeron' 'Vaneet Aggarwal']"
] |
math.OC cs.LG stat.CO
| null |
1607.03195
| null | null |
http://arxiv.org/pdf/1607.03195v1
|
2016-07-11T23:09:52Z
|
2016-07-11T23:09:52Z
|
Multi-Step Bayesian Optimization for One-Dimensional Feasibility
Determination
|
Bayesian optimization methods allocate limited sampling budgets to maximize
expensive-to-evaluate functions. One-step-lookahead policies are often used,
but computing optimal multi-step-lookahead policies remains a challenge. We
consider a specialized Bayesian optimization problem: finding the superlevel
set of an expensive one-dimensional function, with a Markov process prior. We
compute the Bayes-optimal sampling policy efficiently, and characterize the
suboptimality of one-step lookahead. Our numerical experiments demonstrate that
the one-step lookahead policy is close to optimal in this problem, performing
within 98% of optimal in the experimental settings considered.
|
[
"J. Massey Cashore, Lemuel Kumarga, Peter I. Frazier",
"['J. Massey Cashore' 'Lemuel Kumarga' 'Peter I. Frazier']"
] |
stat.ML cs.LG
| null |
1607.03204
| null | null |
http://arxiv.org/pdf/1607.03204v1
|
2016-07-12T00:11:59Z
|
2016-07-12T00:11:59Z
|
Information Projection and Approximate Inference for Structured Sparse
Variables
|
Approximate inference via information projection has been recently introduced
as a general-purpose approach for efficient probabilistic inference given
sparse variables. This manuscript goes beyond classical sparsity by proposing
efficient algorithms for approximate inference via information projection that
are applicable to any structure on the set of variables that admits enumeration
using a \emph{matroid}. We show that the resulting information projection can
be reduced to combinatorial submodular optimization subject to matroid
constraints. Further, leveraging recent advances in submodular optimization, we
provide an efficient greedy algorithm with strong optimization-theoretic
guarantees. The class of probabilistic models that can be expressed in this way
is quite broad and, as we show, includes group sparse regression, group sparse
principal components analysis and sparse canonical correlation analysis, among
others. Moreover, empirical results on simulated data and high dimensional
neuroimaging data highlight the superior performance of the information
projection approach as compared to established baselines for a range of
probabilistic models.
|
[
"['Rajiv Khanna' 'Joydeep Ghosh' 'Russell Poldrack' 'Oluwasanmi Koyejo']",
"Rajiv Khanna, Joydeep Ghosh, Russell Poldrack, Oluwasanmi Koyejo"
] |
cs.NE cs.CV cs.LG
| null |
1607.0325
| null | null | null | null | null |
Network Trimming: A Data-Driven Neuron Pruning Approach towards
Efficient Deep Architectures
|
State-of-the-art neural networks are getting deeper and wider. While their
performance increases with the increasing number of layers and neurons, it is
crucial to design an efficient deep architecture in order to reduce
computational and memory costs. Designing an efficient neural network, however,
is labor intensive requiring many experiments, and fine-tunings. In this paper,
we introduce network trimming which iteratively optimizes the network by
pruning unimportant neurons based on analysis of their outputs on a large
dataset. Our algorithm is inspired by an observation that the outputs of a
significant portion of neurons in a large network are mostly zero, regardless
of what inputs the network received. These zero activation neurons are
redundant, and can be removed without affecting the overall accuracy of the
network. After pruning the zero activation neurons, we retrain the network
using the weights before pruning as initialization. We alternate the pruning
and retraining to further reduce zero activations in a network. Our experiments
on the LeNet and VGG-16 show that we can achieve high compression ratio of
parameters without losing or even achieving higher accuracy than the original
network.
|
[
"Hengyuan Hu, Rui Peng, Yu-Wing Tai, Chi-Keung Tang"
] |
null | null |
1607.03250
| null | null |
http://arxiv.org/pdf/1607.03250v1
|
2016-07-12T07:43:01Z
|
2016-07-12T07:43:01Z
|
Network Trimming: A Data-Driven Neuron Pruning Approach towards
Efficient Deep Architectures
|
State-of-the-art neural networks are getting deeper and wider. While their performance increases with the increasing number of layers and neurons, it is crucial to design an efficient deep architecture in order to reduce computational and memory costs. Designing an efficient neural network, however, is labor intensive requiring many experiments, and fine-tunings. In this paper, we introduce network trimming which iteratively optimizes the network by pruning unimportant neurons based on analysis of their outputs on a large dataset. Our algorithm is inspired by an observation that the outputs of a significant portion of neurons in a large network are mostly zero, regardless of what inputs the network received. These zero activation neurons are redundant, and can be removed without affecting the overall accuracy of the network. After pruning the zero activation neurons, we retrain the network using the weights before pruning as initialization. We alternate the pruning and retraining to further reduce zero activations in a network. Our experiments on the LeNet and VGG-16 show that we can achieve high compression ratio of parameters without losing or even achieving higher accuracy than the original network.
|
[
"['Hengyuan Hu' 'Rui Peng' 'Yu-Wing Tai' 'Chi-Keung Tang']"
] |
stat.ML cs.LG
| null |
1607.03313
| null | null |
http://arxiv.org/pdf/1607.03313v1
|
2016-07-12T11:30:30Z
|
2016-07-12T11:30:30Z
|
Predicting the evolution of stationary graph signals
|
An emerging way of tackling the dimensionality issues arising in the modeling
of a multivariate process is to assume that the inherent data structure can be
captured by a graph. Nevertheless, though state-of-the-art graph-based methods
have been successful for many learning tasks, they do not consider
time-evolving signals and thus are not suitable for prediction. Based on the
recently introduced joint stationarity framework for time-vertex processes,
this letter considers multivariate models that exploit the graph topology so as
to facilitate the prediction. The resulting method yields similar accuracy to
the joint (time-graph) mean-squared error estimator but at lower complexity,
and outperforms purely time-based methods.
|
[
"Andreas Loukas and Nathanael Perraudin",
"['Andreas Loukas' 'Nathanael Perraudin']"
] |
cs.CV cs.LG
| null |
1607.03343
| null | null |
http://arxiv.org/pdf/1607.03343v2
|
2016-07-18T17:21:36Z
|
2016-07-12T13:14:02Z
|
DeepBinaryMask: Learning a Binary Mask for Video Compressive Sensing
|
In this paper, we propose a novel encoder-decoder neural network model
referred to as DeepBinaryMask for video compressive sensing. In video
compressive sensing one frame is acquired using a set of coded masks (sensing
matrix) from which a number of video frames is reconstructed, equal to the
number of coded masks. The proposed framework is an end-to-end model where the
sensing matrix is trained along with the video reconstruction. The encoder
learns the binary elements of the sensing matrix and the decoder is trained to
recover the unknown video sequence. The reconstruction performance is found to
improve when using the trained sensing mask from the network as compared to
other mask designs such as random, across a wide variety of compressive sensing
reconstruction algorithms. Finally, our analysis and discussion offers insights
into understanding the characteristics of the trained mask designs that lead to
the improved reconstruction quality.
|
[
"Michael Iliadis, Leonidas Spinoulas, Aggelos K. Katsaggelos",
"['Michael Iliadis' 'Leonidas Spinoulas' 'Aggelos K. Katsaggelos']"
] |
cs.LG cs.DS stat.ML
| null |
1607.0336
| null | null | null | null | null |
Approximate maximum entropy principles via Goemans-Williamson with
applications to provable variational methods
|
The well known maximum-entropy principle due to Jaynes, which states that
given mean parameters, the maximum entropy distribution matching them is in an
exponential family, has been very popular in machine learning due to its
"Occam's razor" interpretation. Unfortunately, calculating the potentials in
the maximum-entropy distribution is intractable \cite{bresler2014hardness}. We
provide computationally efficient versions of this principle when the mean
parameters are pairwise moments: we design distributions that approximately
match given pairwise moments, while having entropy which is comparable to the
maximum entropy distribution matching those moments.
We additionally provide surprising applications of the approximate maximum
entropy principle to designing provable variational methods for partition
function calculations for Ising models without any assumptions on the
potentials of the model. More precisely, we show that in every temperature, we
can get approximation guarantees for the log-partition function comparable to
those in the low-temperature limit, which is the setting of optimization of
quadratic forms over the hypercube. \cite{alon2006approximating}
|
[
"Yuanzhi Li, Andrej Risteski"
] |
null | null |
1607.03360
| null | null |
http://arxiv.org/pdf/1607.03360v1
|
2016-07-12T14:09:03Z
|
2016-07-12T14:09:03Z
|
Approximate maximum entropy principles via Goemans-Williamson with
applications to provable variational methods
|
The well known maximum-entropy principle due to Jaynes, which states that given mean parameters, the maximum entropy distribution matching them is in an exponential family, has been very popular in machine learning due to its "Occam's razor" interpretation. Unfortunately, calculating the potentials in the maximum-entropy distribution is intractable cite{bresler2014hardness}. We provide computationally efficient versions of this principle when the mean parameters are pairwise moments: we design distributions that approximately match given pairwise moments, while having entropy which is comparable to the maximum entropy distribution matching those moments. We additionally provide surprising applications of the approximate maximum entropy principle to designing provable variational methods for partition function calculations for Ising models without any assumptions on the potentials of the model. More precisely, we show that in every temperature, we can get approximation guarantees for the log-partition function comparable to those in the low-temperature limit, which is the setting of optimization of quadratic forms over the hypercube. cite{alon2006approximating}
|
[
"['Yuanzhi Li' 'Andrej Risteski']"
] |
cs.HC cs.LG cs.MM
| null |
1607.03401
| null | null |
http://arxiv.org/pdf/1607.03401v1
|
2016-07-12T15:30:10Z
|
2016-07-12T15:30:10Z
|
Parsimonious Mixed-Effects HodgeRank for Crowdsourced Preference
Aggregation
|
In crowdsourced preference aggregation, it is often assumed that all the
annotators are subject to a common preference or utility function which
generates their comparison behaviors in experiments. However, in reality
annotators are subject to variations due to multi-criteria, abnormal, or a
mixture of such behaviors. In this paper, we propose a parsimonious
mixed-effects model based on HodgeRank, which takes into account both the fixed
effect that the majority of annotators follows a common linear utility model,
and the random effect that a small subset of annotators might deviate from the
common significantly and exhibits strongly personalized preferences. HodgeRank
has been successfully applied to subjective quality evaluation of multimedia
and resolves pairwise crowdsourced ranking data into a global consensus ranking
and cyclic conflicts of interests. As an extension, our proposed methodology
further explores the conflicts of interests through the random effect in
annotator specific variations. The key algorithm in this paper establishes a
dynamic path from the common utility to individual variations, with different
levels of parsimony or sparsity on personalization, based on newly developed
Linearized Bregman Algorithms with Inverse Scale Space method. Finally the
validity of the methodology are supported by experiments with both simulated
examples and three real-world crowdsourcing datasets, which shows that our
proposed method exhibits better performance (i.e. smaller test error) compared
with HodgeRank due to its parsimonious property.
|
[
"Qianqian Xu, Jiechao Xiong, Xiaochun Cao, and Yuan Yao",
"['Qianqian Xu' 'Jiechao Xiong' 'Xiaochun Cao' 'Yuan Yao']"
] |
cs.LG cs.SY quant-ph stat.ML
|
10.1016/j.neucom.2016.12.087
|
1607.03428
| null | null |
http://arxiv.org/abs/1607.03428v3
|
2016-11-25T23:24:10Z
|
2016-07-12T16:17:38Z
|
Learning in Quantum Control: High-Dimensional Global Optimization for
Noisy Quantum Dynamics
|
Quantum control is valuable for various quantum technologies such as
high-fidelity gates for universal quantum computing, adaptive quantum-enhanced
metrology, and ultra-cold atom manipulation. Although supervised machine
learning and reinforcement learning are widely used for optimizing control
parameters in classical systems, quantum control for parameter optimization is
mainly pursued via gradient-based greedy algorithms. Although the quantum
fitness landscape is often compatible with greedy algorithms, sometimes greedy
algorithms yield poor results, especially for large-dimensional quantum
systems. We employ differential evolution algorithms to circumvent the
stagnation problem of non-convex optimization. We improve quantum control
fidelity for noisy system by averaging over the objective function. To reduce
computational cost, we introduce heuristics for early termination of runs and
for adaptive selection of search subspaces. Our implementation is massively
parallel and vectorized to reduce run time even further. We demonstrate our
methods with two examples, namely quantum phase estimation and quantum gate
design, for which we achieve superior fidelity and scalability than obtained
using greedy algorithms.
|
[
"Pantita Palittapongarnpim, Peter Wittek, Ehsan Zahedinejad, Shakib\n Vedaie, Barry C. Sanders",
"['Pantita Palittapongarnpim' 'Peter Wittek' 'Ehsan Zahedinejad'\n 'Shakib Vedaie' 'Barry C. Sanders']"
] |
cs.LG stat.ML
| null |
1607.03456
| null | null |
http://arxiv.org/pdf/1607.03456v1
|
2016-07-12T18:20:23Z
|
2016-07-12T18:20:23Z
|
Incomplete Pivoted QR-based Dimensionality Reduction
|
High-dimensional big data appears in many research fields such as image
recognition, biology and collaborative filtering. Often, the exploration of
such data by classic algorithms is encountered with difficulties due to `curse
of dimensionality' phenomenon. Therefore, dimensionality reduction methods are
applied to the data prior to its analysis. Many of these methods are based on
principal components analysis, which is statistically driven, namely they map
the data into a low-dimension subspace that preserves significant statistical
properties of the high-dimensional data. As a consequence, such methods do not
directly address the geometry of the data, reflected by the mutual distances
between multidimensional data point. Thus, operations such as classification,
anomaly detection or other machine learning tasks may be affected.
This work provides a dictionary-based framework for geometrically driven data
analysis that includes dimensionality reduction, out-of-sample extension and
anomaly detection. It embeds high-dimensional data in a low-dimensional
subspace. This embedding preserves the original high-dimensional geometry of
the data up to a user-defined distortion rate. In addition, it identifies a
subset of landmark data points that constitute a dictionary for the analyzed
dataset. The dictionary enables to have a natural extension of the
low-dimensional embedding to out-of-sample data points, which gives rise to a
distortion-based criterion for anomaly detection. The suggested method is
demonstrated on synthetic and real-world datasets and achieves good results for
classification, anomaly detection and out-of-sample tasks.
|
[
"Amit Bermanis, Aviv Rotbart, Moshe Salhov and Amir Averbuch",
"['Amit Bermanis' 'Aviv Rotbart' 'Moshe Salhov' 'Amir Averbuch']"
] |
cs.NA cs.DS cs.LG math.OC stat.ML
| null |
1607.03463
| null | null |
http://arxiv.org/pdf/1607.03463v2
|
2017-01-23T18:55:31Z
|
2016-07-12T18:41:52Z
|
LazySVD: Even Faster SVD Decomposition Yet Without Agonizing Pain
|
We study $k$-SVD that is to obtain the first $k$ singular vectors of a matrix
$A$. Recently, a few breakthroughs have been discovered on $k$-SVD: Musco and
Musco [1] proved the first gap-free convergence result using the block Krylov
method, Shamir [2] discovered the first variance-reduction stochastic method,
and Bhojanapalli et al. [3] provided the fastest $O(\mathsf{nnz}(A) +
\mathsf{poly}(1/\varepsilon))$-time algorithm using alternating minimization.
In this paper, we put forward a new and simple LazySVD framework to improve
the above breakthroughs. This framework leads to a faster gap-free method
outperforming [1], and the first accelerated and stochastic method
outperforming [2]. In the $O(\mathsf{nnz}(A) + \mathsf{poly}(1/\varepsilon))$
running-time regime, LazySVD outperforms [3] in certain parameter regimes
without even using alternating minimization.
|
[
"Zeyuan Allen-Zhu, Yuanzhi Li",
"['Zeyuan Allen-Zhu' 'Yuanzhi Li']"
] |
cs.LG cs.CL cs.NE
| null |
1607.03474
| null | null |
http://arxiv.org/pdf/1607.03474v5
|
2017-07-04T19:29:23Z
|
2016-07-12T19:36:50Z
|
Recurrent Highway Networks
|
Many sequential processing tasks require complex nonlinear transition
functions from one step to the next. However, recurrent neural networks with
'deep' transition functions remain difficult to train, even when using Long
Short-Term Memory (LSTM) networks. We introduce a novel theoretical analysis of
recurrent networks based on Gersgorin's circle theorem that illuminates several
modeling and optimization issues and improves our understanding of the LSTM
cell. Based on this analysis we propose Recurrent Highway Networks, which
extend the LSTM architecture to allow step-to-step transition depths larger
than one. Several language modeling experiments demonstrate that the proposed
architecture results in powerful and efficient models. On the Penn Treebank
corpus, solely increasing the transition depth from 1 to 10 improves word-level
perplexity from 90.6 to 65.4 using the same number of parameters. On the larger
Wikipedia datasets for character prediction (text8 and enwik8), RHNs outperform
all previous results and achieve an entropy of 1.27 bits per character.
|
[
"Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutn\\'ik and\n J\\\"urgen Schmidhuber",
"['Julian Georg Zilly' 'Rupesh Kumar Srivastava' 'Jan Koutník'\n 'Jürgen Schmidhuber']"
] |
stat.ML cs.LG
| null |
1607.03475
| null | null |
http://arxiv.org/pdf/1607.03475v1
|
2016-07-12T19:42:40Z
|
2016-07-12T19:42:40Z
|
Nystrom Method for Approximating the GMM Kernel
|
The GMM (generalized min-max) kernel was recently proposed (Li, 2016) as a
measure of data similarity and was demonstrated effective in machine learning
tasks. In order to use the GMM kernel for large-scale datasets, the prior work
resorted to the (generalized) consistent weighted sampling (GCWS) to convert
the GMM kernel to linear kernel. We call this approach as ``GMM-GCWS''.
In the machine learning literature, there is a popular algorithm which we
call ``RBF-RFF''. That is, one can use the ``random Fourier features'' (RFF) to
convert the ``radial basis function'' (RBF) kernel to linear kernel. It was
empirically shown in (Li, 2016) that RBF-RFF typically requires substantially
more samples than GMM-GCWS in order to achieve comparable accuracies.
The Nystrom method is a general tool for computing nonlinear kernels, which
again converts nonlinear kernels into linear kernels. We apply the Nystrom
method for approximating the GMM kernel, a strategy which we name as
``GMM-NYS''. In this study, our extensive experiments on a set of fairly large
datasets confirm that GMM-NYS is also a strong competitor of RBF-RFF.
|
[
"Ping Li",
"['Ping Li']"
] |
cs.CV cs.AI cs.LG stat.ML
| null |
1607.03516
| null | null |
http://arxiv.org/pdf/1607.03516v2
|
2016-08-01T09:58:13Z
|
2016-07-12T20:48:58Z
|
Deep Reconstruction-Classification Networks for Unsupervised Domain
Adaptation
|
In this paper, we propose a novel unsupervised domain adaptation algorithm
based on deep learning for visual object recognition. Specifically, we design a
new model called Deep Reconstruction-Classification Network (DRCN), which
jointly learns a shared encoding representation for two tasks: i) supervised
classification of labeled source data, and ii) unsupervised reconstruction of
unlabeled target data.In this way, the learnt representation not only preserves
discriminability, but also encodes useful information from the target domain.
Our new DRCN model can be optimized by using backpropagation similarly as the
standard neural networks.
We evaluate the performance of DRCN on a series of cross-domain object
recognition tasks, where DRCN provides a considerable improvement (up to ~8% in
accuracy) over the prior state-of-the-art algorithms. Interestingly, we also
observe that the reconstruction pipeline of DRCN transforms images from the
source domain into images whose appearance resembles the target dataset. This
suggests that DRCN's performance is due to constructing a single composite
representation that encodes information about both the structure of target
images and the classification of source images. Finally, we provide a formal
analysis to justify the algorithm's objective in domain adaptation context.
|
[
"['Muhammad Ghifary' 'W. Bastiaan Kleijn' 'Mengjie Zhang' 'David Balduzzi'\n 'Wen Li']",
"Muhammad Ghifary and W. Bastiaan Kleijn and Mengjie Zhang and David\n Balduzzi and Wen Li"
] |
cs.CV cs.LG
| null |
1607.03547
| null | null |
http://arxiv.org/pdf/1607.03547v2
|
2016-11-15T19:29:30Z
|
2016-07-12T23:56:33Z
|
Improved Multi-Class Cost-Sensitive Boosting via Estimation of the
Minimum-Risk Class
|
We present a simple unified framework for multi-class cost-sensitive
boosting. The minimum-risk class is estimated directly, rather than via an
approximation of the posterior distribution. Our method jointly optimizes
binary weak learners and their corresponding output vectors, requiring classes
to share features at each iteration. By training in a cost-sensitive manner,
weak learners are invested in separating classes whose discrimination is
important, at the expense of less relevant classification boundaries.
Additional contributions are a family of loss functions along with proof that
our algorithm is Boostable in the theoretical sense, as well as an efficient
procedure for growing decision trees for use as weak learners. We evaluate our
method on a variety of datasets: a collection of synthetic planar data, common
UCI datasets, MNIST digits, SUN scenes, and CUB-200 birds. Results show
state-of-the-art performance across all datasets against several strong
baselines, including non-boosting multi-class approaches.
|
[
"['Ron Appel' 'Xavier Burgos-Artizzu' 'Pietro Perona']",
"Ron Appel, Xavier Burgos-Artizzu, Pietro Perona"
] |
cs.LG cs.DS math.PR stat.ML
| null |
1607.03559
| null | null |
http://arxiv.org/pdf/1607.03559v1
|
2016-07-13T01:22:04Z
|
2016-07-13T01:22:04Z
|
Fast Sampling for Strongly Rayleigh Measures with Application to
Determinantal Point Processes
|
In this note we consider sampling from (non-homogeneous) strongly Rayleigh
probability measures. As an important corollary, we obtain a fast mixing Markov
Chain sampler for Determinantal Point Processes.
|
[
"['Chengtao Li' 'Stefanie Jegelka' 'Suvrit Sra']",
"Chengtao Li, Stefanie Jegelka, Suvrit Sra"
] |
cs.LG
| null |
1607.03594
| null | null |
http://arxiv.org/pdf/1607.03594v2
|
2017-01-22T04:25:30Z
|
2016-07-13T05:07:33Z
|
Estimating Uncertainty Online Against an Adversary
|
Assessing uncertainty is an important step towards ensuring the safety and
reliability of machine learning systems. Existing uncertainty estimation
techniques may fail when their modeling assumptions are not met, e.g. when the
data distribution differs from the one seen at training time. Here, we propose
techniques that assess a classification algorithm's uncertainty via calibrated
probabilities (i.e. probabilities that match empirical outcome frequencies in
the long run) and which are guaranteed to be reliable (i.e. accurate and
calibrated) on out-of-distribution input, including input generated by an
adversary. This represents an extension of classical online learning that
handles uncertainty in addition to guaranteeing accuracy under adversarial
assumptions. We establish formal guarantees for our methods, and we validate
them on two real-world problems: question answering and medical diagnosis from
genomic data.
|
[
"['Volodymyr Kuleshov' 'Stefano Ermon']",
"Volodymyr Kuleshov and Stefano Ermon"
] |
cs.AI cs.LG
| null |
1607.03611
| null | null |
http://arxiv.org/pdf/1607.03611v2
|
2016-10-08T05:21:00Z
|
2016-07-13T07:15:30Z
|
Characterizing Driving Styles with Deep Learning
|
Characterizing driving styles of human drivers using vehicle sensor data,
e.g., GPS, is an interesting research problem and an important real-world
requirement from automotive industries. A good representation of driving
features can be highly valuable for autonomous driving, auto insurance, and
many other application scenarios. However, traditional methods mainly rely on
handcrafted features, which limit machine learning algorithms to achieve a
better performance. In this paper, we propose a novel deep learning solution to
this problem, which could be the first attempt of extending deep learning to
driving behavior analysis based on GPS data. The proposed approach can
effectively extract high level and interpretable features describing complex
driving patterns. It also requires significantly less human experience and
work. The power of the learned driving style representations are validated
through the driver identification problem using a large real dataset.
|
[
"['Weishan Dong' 'Jian Li' 'Renjie Yao' 'Changsheng Li' 'Ting Yuan'\n 'Lanjun Wang']",
"Weishan Dong, Jian Li, Renjie Yao, Changsheng Li, Ting Yuan, Lanjun\n Wang"
] |
cs.LG
| null |
1607.03626
| null | null |
http://arxiv.org/pdf/1607.03626v1
|
2016-07-13T08:03:35Z
|
2016-07-13T08:03:35Z
|
San Francisco Crime Classification
|
San Francisco Crime Classification is an online competition administered by
Kaggle Inc. The competition aims at predicting the future crimes based on a
given set of geographical and time-based features. In this paper, I achieved a
an accuracy that ranks at top %18, as of May 19th, 2016. I will explore the
data, and explain in details the tools I used to achieve that result.
|
[
"Yehya Abouelnaga",
"['Yehya Abouelnaga']"
] |
cs.SD cs.CV cs.LG
|
10.1109/TASLP.2017.2690563
|
1607.03681
| null | null |
http://arxiv.org/abs/1607.03681v2
|
2016-11-29T15:56:36Z
|
2016-07-13T11:31:14Z
|
Unsupervised Feature Learning Based on Deep Models for Environmental
Audio Tagging
|
Environmental audio tagging aims to predict only the presence or absence of
certain acoustic events in the interested acoustic scene. In this paper we make
contributions to audio tagging in two parts, respectively, acoustic modeling
and feature learning. We propose to use a shrinking deep neural network (DNN)
framework incorporating unsupervised feature learning to handle the multi-label
classification task. For the acoustic modeling, a large set of contextual
frames of the chunk are fed into the DNN to perform a multi-label
classification for the expected tags, considering that only chunk (or
utterance) level rather than frame-level labels are available. Dropout and
background noise aware training are also adopted to improve the generalization
capability of the DNNs. For the unsupervised feature learning, we propose to
use a symmetric or asymmetric deep de-noising auto-encoder (sDAE or aDAE) to
generate new data-driven features from the Mel-Filter Banks (MFBs) features.
The new features, which are smoothed against background noise and more compact
with contextual information, can further improve the performance of the DNN
baseline. Compared with the standard Gaussian Mixture Model (GMM) baseline of
the DCASE 2016 audio tagging challenge, our proposed method obtains a
significant equal error rate (EER) reduction from 0.21 to 0.13 on the
development set. The proposed aDAE system can get a relative 6.7% EER reduction
compared with the strong DNN baseline on the development set. Finally, the
results also show that our approach obtains the state-of-the-art performance
with 0.15 EER on the evaluation set of the DCASE 2016 audio tagging task while
EER of the first prize of this challenge is 0.17.
|
[
"Yong Xu, Qiang Huang, Wenwu Wang, Peter Foster, Siddharth Sigtia,\n Philip J. B. Jackson, and Mark D. Plumbley",
"['Yong Xu' 'Qiang Huang' 'Wenwu Wang' 'Peter Foster' 'Siddharth Sigtia'\n 'Philip J. B. Jackson' 'Mark D. Plumbley']"
] |
cs.SD cs.CV cs.LG
| null |
1607.03682
| null | null |
http://arxiv.org/pdf/1607.03682v3
|
2016-08-13T10:37:53Z
|
2016-07-13T11:31:25Z
|
Hierarchical learning for DNN-based acoustic scene classification
|
In this paper, we present a deep neural network (DNN)-based acoustic scene
classification framework. Two hierarchical learning methods are proposed to
improve the DNN baseline performance by incorporating the hierarchical taxonomy
information of environmental sounds. Firstly, the parameters of the DNN are
initialized by the proposed hierarchical pre-training. Multi-level objective
function is then adopted to add more constraint on the cross-entropy based loss
function. A series of experiments were conducted on the Task1 of the Detection
and Classification of Acoustic Scenes and Events (DCASE) 2016 challenge. The
final DNN-based system achieved a 22.9% relative improvement on average scene
classification error as compared with the Gaussian Mixture Model (GMM)-based
benchmark system across four standard folds.
|
[
"Yong Xu, Qiang Huang, Wenwu Wang, Mark D. Plumbley",
"['Yong Xu' 'Qiang Huang' 'Wenwu Wang' 'Mark D. Plumbley']"
] |
cs.LG
| null |
1607.03691
| null | null |
http://arxiv.org/pdf/1607.03691v1
|
2016-07-13T12:10:08Z
|
2016-07-13T12:10:08Z
|
Sequential Cost-Sensitive Feature Acquisition
|
We propose a reinforcement learning based approach to tackle the
cost-sensitive learning problem where each input feature has a specific cost.
The acquisition process is handled through a stochastic policy which allows
features to be acquired in an adaptive way. The general architecture of our
approach relies on representation learning to enable performing prediction on
any partially observed sample, whatever the set of its observed features are.
The resulting model is an original mix of representation learning and of
reinforcement learning ideas. It is learned with policy gradient techniques to
minimize a budgeted inference cost. We demonstrate the effectiveness of our
proposed method with several experiments on a variety of datasets for the
sparse prediction problem where all features have the same cost, but also for
some cost-sensitive settings.
|
[
"['Gabriella Contardo' 'Ludovic Denoyer' 'Thierry Artières']",
"Gabriella Contardo, Ludovic Denoyer, Thierry Arti\\`eres"
] |
cs.AI cs.LG
| null |
1607.03705
| null | null |
http://arxiv.org/pdf/1607.03705v1
|
2016-07-13T12:45:53Z
|
2016-07-13T12:45:53Z
|
Possibilistic Networks: Parameters Learning from Imprecise Data and
Evaluation strategy
|
There has been an ever-increasing interest in multidisciplinary research on
representing and reasoning with imperfect data. Possibilistic networks present
one of the powerful frameworks of interest for representing uncertain and
imprecise information. This paper covers the problem of their parameters
learning from imprecise datasets, i.e., containing multi-valued data. We
propose in the rst part of this paper a possibilistic networks sampling
process. In the second part, we propose a likelihood function which explores
the link between random sets theory and possibility theory. This function is
then deployed to parametrize possibilistic networks.
|
[
"Maroua Haddad (LINA, LARODEC), Philippe Leray (LINA), Nahla Ben Amor\n (LARODEC)",
"['Maroua Haddad' 'Philippe Leray' 'Nahla Ben Amor']"
] |
cs.CL cs.LG
| null |
1607.03707
| null | null |
http://arxiv.org/pdf/1607.03707v1
|
2016-07-13T12:48:33Z
|
2016-07-13T12:48:33Z
|
Re-presenting a Story by Emotional Factors using Sentimental Analysis
Method
|
Remembering an event is affected by personal emotional status. We examined
the psychological status and personal factors; depression (Center for
Epidemiological Studies - Depression, Radloff, 1977), present affective
(Positive Affective and Negative Affective Schedule, Watson et al., 1988), life
orient (Life Orient Test, Scheier & Carver, 1985), self-awareness (Core Self
Evaluation Scale, Judge et al., 2003), and social factor (Social Support,
Sarason et al., 1983) of undergraduate students (N=64) and got summaries of a
story, Chronicle of a Death Foretold (Gabriel Garcia Marquez, 1981) from them.
We implement a sentimental analysis model based on convolutional neural network
(LeCun & Bengio, 1995) to evaluate each summary. From the same vein used for
transfer learning (Pan & Yang, 2010), we collected 38,265 movie review data to
train the model and then use them to score summaries of each student. The
results of CES-D and PANAS show the relationship between emotion and memory
retrieval as follows: depressed people have shown a tendency of representing a
story more negatively, and they seemed less expressive. People with full of
emotion - high in PANAS - have retrieved their memory more expressively than
others, using more negative words then others. The contributions of this study
can be summarized as follows: First, lightening the relationship between
emotion and its effect during times of storing or retrieving a memory. Second,
suggesting objective methods to evaluate the intensity of emotion in natural
language format, using a sentimental analysis model.
|
[
"Hwiyeol Jo, Yohan Moon, Jong In Kim, and Jeong Ryu",
"['Hwiyeol Jo' 'Yohan Moon' 'Jong In Kim' 'Jeong Ryu']"
] |
stat.ML cs.LG
| null |
1607.0373
| null | null | null | null | null |
Learning Shallow Detection Cascades for Wearable Sensor-Based Mobile
Health Applications
|
The field of mobile health aims to leverage recent advances in wearable
on-body sensing technology and smart phone computing capabilities to develop
systems that can monitor health states and deliver just-in-time adaptive
interventions. However, existing work has largely focused on analyzing
collected data in the off-line setting. In this paper, we propose a novel
approach to learning shallow detection cascades developed explicitly for use in
a real-time wearable-phone or wearable-phone-cloud systems. We apply our
approach to the problem of cigarette smoking detection from a combination of
wrist-worn actigraphy data and respiration chest band data using two and three
stage cascades.
|
[
"Hamid Dadkhahi, Nazir Saleheen, Santosh Kumar, Benjamin Marlin"
] |
null | null |
1607.03730
| null | null |
http://arxiv.org/pdf/1607.03730v1
|
2016-07-13T13:47:49Z
|
2016-07-13T13:47:49Z
|
Learning Shallow Detection Cascades for Wearable Sensor-Based Mobile
Health Applications
|
The field of mobile health aims to leverage recent advances in wearable on-body sensing technology and smart phone computing capabilities to develop systems that can monitor health states and deliver just-in-time adaptive interventions. However, existing work has largely focused on analyzing collected data in the off-line setting. In this paper, we propose a novel approach to learning shallow detection cascades developed explicitly for use in a real-time wearable-phone or wearable-phone-cloud systems. We apply our approach to the problem of cigarette smoking detection from a combination of wrist-worn actigraphy data and respiration chest band data using two and three stage cascades.
|
[
"['Hamid Dadkhahi' 'Nazir Saleheen' 'Santosh Kumar' 'Benjamin Marlin']"
] |
cs.CL cs.LG
| null |
1607.0378
| null | null | null | null | null |
A Vector Space for Distributional Semantics for Entailment
|
Distributional semantics creates vector-space representations that capture
many forms of semantic similarity, but their relation to semantic entailment
has been less clear. We propose a vector-space model which provides a formal
foundation for a distributional semantics of entailment. Using a mean-field
approximation, we develop approximate inference procedures and entailment
operators over vectors of probabilities of features being known (versus
unknown). We use this framework to reinterpret an existing
distributional-semantic model (Word2Vec) as approximating an entailment-based
model of the distributions of words in contexts, thereby predicting lexical
entailment relations. In both unsupervised and semi-supervised experiments on
hyponymy detection, we get substantial improvements over previous results.
|
[
"James Henderson and Diana Nicoleta Popa"
] |
null | null |
1607.03780
| null | null |
http://arxiv.org/pdf/1607.03780v1
|
2016-07-13T15:08:26Z
|
2016-07-13T15:08:26Z
|
A Vector Space for Distributional Semantics for Entailment
|
Distributional semantics creates vector-space representations that capture many forms of semantic similarity, but their relation to semantic entailment has been less clear. We propose a vector-space model which provides a formal foundation for a distributional semantics of entailment. Using a mean-field approximation, we develop approximate inference procedures and entailment operators over vectors of probabilities of features being known (versus unknown). We use this framework to reinterpret an existing distributional-semantic model (Word2Vec) as approximating an entailment-based model of the distributions of words in contexts, thereby predicting lexical entailment relations. In both unsupervised and semi-supervised experiments on hyponymy detection, we get substantial improvements over previous results.
|
[
"['James Henderson' 'Diana Nicoleta Popa']"
] |
stat.ML cs.LG
| null |
1607.03822
| null | null |
http://arxiv.org/pdf/1607.03822v1
|
2016-07-13T16:46:55Z
|
2016-07-13T16:46:55Z
|
Feature Extraction and Automated Classification of Heartbeats by Machine
Learning
|
We present algorithms for the detection of a class of heart arrhythmias with
the goal of eventual adoption by practicing cardiologists. In clinical
practice, detection is based on a small number of meaningful features extracted
from the heartbeat cycle. However, techniques proposed in the literature use
high dimensional vectors consisting of morphological, and time based features
for detection. Using electrocardiogram (ECG) signals, we found smaller subsets
of features sufficient to detect arrhythmias with high accuracy. The features
were found by an iterative step-wise feature selection method. We depart from
common literature in the following aspects: 1. As opposed to a high dimensional
feature vectors, we use a small set of features with meaningful clinical
interpretation, 2. we eliminate the necessity of short-duration
patient-specific ECG data to append to the global training data for
classification 3. We apply semi-parametric classification procedures (in an
ensemble framework) for arrhythmia detection, and 4. our approach is based on a
reduced sampling rate of ~ 115 Hz as opposed to 360 Hz in standard literature.
|
[
"Choudur Lakshminarayan and Tony Basil",
"['Choudur Lakshminarayan' 'Tony Basil']"
] |
cs.RO cs.CL cs.CV cs.LG
|
10.1089/big.2016.0028
|
1607.03827
| null | null |
http://arxiv.org/abs/1607.03827v2
|
2018-08-09T14:24:47Z
|
2016-07-13T17:08:01Z
|
The KIT Motion-Language Dataset
|
Linking human motion and natural language is of great interest for the
generation of semantic representations of human activities as well as for the
generation of robot activities based on natural language input. However, while
there have been years of research in this area, no standardized and openly
available dataset exists to support the development and evaluation of such
systems. We therefore propose the KIT Motion-Language Dataset, which is large,
open, and extensible. We aggregate data from multiple motion capture databases
and include them in our dataset using a unified representation that is
independent of the capture system or marker set, making it easy to work with
the data regardless of its origin. To obtain motion annotations in natural
language, we apply a crowd-sourcing approach and a web-based tool that was
specifically build for this purpose, the Motion Annotation Tool. We thoroughly
document the annotation process itself and discuss gamification methods that we
used to keep annotators motivated. We further propose a novel method,
perplexity-based selection, which systematically selects motions for further
annotation that are either under-represented in our dataset or that have
erroneous annotations. We show that our method mitigates the two aforementioned
problems and ensures a systematic annotation process. We provide an in-depth
analysis of the structure and contents of our resulting dataset, which, as of
October 10, 2016, contains 3911 motions with a total duration of 11.23 hours
and 6278 annotations in natural language that contain 52,903 words. We believe
this makes our dataset an excellent choice that enables more transparent and
comparable research in this important area.
|
[
"['Matthias Plappert' 'Christian Mandery' 'Tamim Asfour']",
"Matthias Plappert, Christian Mandery, Tamim Asfour"
] |
cs.LG cs.CG stat.ML
| null |
1607.03849
| null | null |
http://arxiv.org/pdf/1607.03849v2
|
2016-08-02T15:34:40Z
|
2016-07-13T18:15:52Z
|
Fitting a Simplicial Complex using a Variation of k-means
|
We give a simple and effective two stage algorithm for approximating a point
cloud $\mathcal{S}\subset\mathbb{R}^m$ by a simplicial complex $K$. The first
stage is an iterative fitting procedure that generalizes k-means clustering,
while the second stage involves deleting redundant simplices. A form of
dimension reduction of $\mathcal{S}$ is obtained as a consequence.
|
[
"Piotr Beben",
"['Piotr Beben']"
] |
cs.LG cs.CV cs.DS
| null |
1607.03967
| null | null |
http://arxiv.org/pdf/1607.03967v1
|
2016-07-14T00:24:33Z
|
2016-07-14T00:24:33Z
|
Concatenated image completion via tensor augmentation and completion
|
This paper proposes a novel framework called concatenated image completion
via tensor augmentation and completion (ICTAC), which recovers missing entries
of color images with high accuracy. Typical images are second- or third-order
tensors (2D/3D) depending if they are grayscale or color, hence tensor
completion algorithms are ideal for their recovery. The proposed framework
performs image completion by concatenating copies of a single image that has
missing entries into a third-order tensor, applying a dimensionality
augmentation technique to the tensor, utilizing a tensor completion algorithm
for recovering its missing entries, and finally extracting the recovered image
from the tensor. The solution relies on two key components that have been
recently proposed to take advantage of the tensor train (TT) rank: A tensor
augmentation tool called ket augmentation (KA) that represents a low-order
tensor by a higher-order tensor, and the algorithm tensor completion by
parallel matrix factorization via tensor train (TMac-TT), which has been
demonstrated to outperform state-of-the-art tensor completion algorithms.
Simulation results for color image recovery show the clear advantage of our
framework against current state-of-the-art tensor completion algorithms.
|
[
"Johann A. Bengua, Hoang D. Tuan, Ho N. Phien, Minh N. Do",
"['Johann A. Bengua' 'Hoang D. Tuan' 'Ho N. Phien' 'Minh N. Do']"
] |
cs.LG cs.DS math.ST stat.TH
| null |
1607.0399
| null | null | null | null | null |
Fast Algorithms for Segmented Regression
|
We study the fixed design segmented regression problem: Given noisy samples
from a piecewise linear function $f$, we want to recover $f$ up to a desired
accuracy in mean-squared error.
Previous rigorous approaches for this problem rely on dynamic programming
(DP) and, while sample efficient, have running time quadratic in the sample
size. As our main contribution, we provide new sample near-linear time
algorithms for the problem that -- while not being minimax optimal -- achieve a
significantly better sample-time tradeoff on large datasets compared to the DP
approach. Our experimental evaluation shows that, compared with the DP
approach, our algorithms provide a convergence rate that is only off by a
factor of $2$ to $4$, while achieving speedups of three orders of magnitude.
|
[
"Jayadev Acharya, Ilias Diakonikolas, Jerry Li, Ludwig Schmidt"
] |
null | null |
1607.03990
| null | null |
http://arxiv.org/pdf/1607.03990v1
|
2016-07-14T04:52:53Z
|
2016-07-14T04:52:53Z
|
Fast Algorithms for Segmented Regression
|
We study the fixed design segmented regression problem: Given noisy samples from a piecewise linear function $f$, we want to recover $f$ up to a desired accuracy in mean-squared error. Previous rigorous approaches for this problem rely on dynamic programming (DP) and, while sample efficient, have running time quadratic in the sample size. As our main contribution, we provide new sample near-linear time algorithms for the problem that -- while not being minimax optimal -- achieve a significantly better sample-time tradeoff on large datasets compared to the DP approach. Our experimental evaluation shows that, compared with the DP approach, our algorithms provide a convergence rate that is only off by a factor of $2$ to $4$, while achieving speedups of three orders of magnitude.
|
[
"['Jayadev Acharya' 'Ilias Diakonikolas' 'Jerry Li' 'Ludwig Schmidt']"
] |
cs.LG cs.IR stat.ML
|
10.1145/2959100.2959170
|
1607.04228
| null | null |
http://arxiv.org/abs/1607.04228v1
|
2016-07-14T17:55:33Z
|
2016-07-14T17:55:33Z
|
Fifty Shades of Ratings: How to Benefit from a Negative Feedback in
Top-N Recommendations Tasks
|
Conventional collaborative filtering techniques treat a top-n recommendations
problem as a task of generating a list of the most relevant items. This
formulation, however, disregards an opposite - avoiding recommendations with
completely irrelevant items. Due to that bias, standard algorithms, as well as
commonly used evaluation metrics, become insensitive to negative feedback. In
order to resolve this problem we propose to treat user feedback as a
categorical variable and model it with users and items in a ternary way. We
employ a third-order tensor factorization technique and implement a higher
order folding-in method to support online recommendations. The method is
equally sensitive to entire spectrum of user ratings and is able to accurately
predict relevant items even from a negative only feedback. Our method may
partially eliminate the need for complicated rating elicitation process as it
provides means for personalized recommendations from the very beginning of an
interaction with a recommender system. We also propose a modification of
standard metrics which helps to reveal unwanted biases and account for
sensitivity to a negative feedback. Our model achieves state-of-the-art quality
in standard recommendation tasks while significantly outperforming other
methods in the cold-start "no-positive-feedback" scenarios.
|
[
"['Evgeny Frolov' 'Ivan Oseledets']",
"Evgeny Frolov, Ivan Oseledets"
] |
cs.LG cs.CL stat.ML
| null |
1607.04315
| null | null |
http://arxiv.org/pdf/1607.04315v3
|
2017-01-05T15:41:13Z
|
2016-07-14T20:58:26Z
|
Neural Semantic Encoders
|
We present a memory augmented neural network for natural language
understanding: Neural Semantic Encoders. NSE is equipped with a novel memory
update rule and has a variable sized encoding memory that evolves over time and
maintains the understanding of input sequences through read}, compose and write
operations. NSE can also access multiple and shared memories. In this paper, we
demonstrated the effectiveness and the flexibility of NSE on five different
natural language tasks: natural language inference, question answering,
sentence classification, document sentiment analysis and machine translation
where NSE achieved state-of-the-art performance when evaluated on publically
available benchmarks. For example, our shared-memory model showed an
encouraging result on neural machine translation, improving an attention-based
baseline by approximately 1.0 BLEU.
|
[
"Tsendsuren Munkhdalai and Hong Yu",
"['Tsendsuren Munkhdalai' 'Hong Yu']"
] |
stat.ML cs.LG q-bio.NC
| null |
1607.04331
| null | null |
http://arxiv.org/pdf/1607.04331v2
|
2016-09-10T02:37:47Z
|
2016-07-14T21:43:39Z
|
Random projections of random manifolds
|
Interesting data often concentrate on low dimensional smooth manifolds inside
a high dimensional ambient space. Random projections are a simple, powerful
tool for dimensionality reduction of such data. Previous works have studied
bounds on how many projections are needed to accurately preserve the geometry
of these manifolds, given their intrinsic dimensionality, volume and curvature.
However, such works employ definitions of volume and curvature that are
inherently difficult to compute. Therefore such theory cannot be easily tested
against numerical simulations to understand the tightness of the proven bounds.
We instead study typical distortions arising in random projections of an
ensemble of smooth Gaussian random manifolds. We find explicitly computable,
approximate theoretical bounds on the number of projections required to
accurately preserve the geometry of these manifolds. Our bounds, while
approximate, can only be violated with a probability that is exponentially
small in the ambient dimension, and therefore they hold with high probability
in cases of practical interest. Moreover, unlike previous work, we test our
theoretical bounds against numerical experiments on the actual geometric
distortions that typically occur for random projections of random smooth
manifolds. We find our bounds are tighter than previous results by several
orders of magnitude.
|
[
"['Subhaneil Lahiri' 'Peiran Gao' 'Surya Ganguli']",
"Subhaneil Lahiri, Peiran Gao, Surya Ganguli"
] |
cs.AI cs.LG q-bio.QM
| null |
1607.04379
| null | null |
http://arxiv.org/pdf/1607.04379v1
|
2016-07-15T04:28:55Z
|
2016-07-15T04:28:55Z
|
DeepQA: Improving the estimation of single protein model quality with
deep belief networks
|
Protein quality assessment (QA) by ranking and selecting protein models has
long been viewed as one of the major challenges for protein tertiary structure
prediction. Especially, estimating the quality of a single protein model, which
is important for selecting a few good models out of a large model pool
consisting of mostly low-quality models, is still a largely unsolved problem.
We introduce a novel single-model quality assessment method DeepQA based on
deep belief network that utilizes a number of selected features describing the
quality of a model from different perspectives, such as energy, physio-chemical
characteristics, and structural information. The deep belief network is trained
on several large datasets consisting of models from the Critical Assessment of
Protein Structure Prediction (CASP) experiments, several publicly available
datasets, and models generated by our in-house ab initio method. Our experiment
demonstrate that deep belief network has better performance compared to Support
Vector Machines and Neural Networks on the protein model quality assessment
problem, and our method DeepQA achieves the state-of-the-art performance on
CASP11 dataset. It also outperformed two well-established methods in selecting
good outlier models from a large set of models of mostly low quality generated
by ab initio modeling methods. DeepQA is a useful tool for protein single model
quality assessment and protein structure prediction. The source code,
executable, document and training/test datasets of DeepQA for Linux is freely
available to non-commercial users at http://cactus.rnet.missouri.edu/DeepQA/.
|
[
"['Renzhi Cao' 'Debswapna Bhattacharya' 'Jie Hou' 'Jianlin Cheng']",
"Renzhi Cao, Debswapna Bhattacharya, Jie Hou, and Jianlin Cheng"
] |
cs.LG cs.IT math.IT
| null |
1607.04427
| null | null |
http://arxiv.org/pdf/1607.04427v3
|
2016-12-02T20:25:40Z
|
2016-07-15T09:22:55Z
|
A Theoretical Analysis of the BDeu Scores in Bayesian Network Structure
Learning
|
In Bayesian network structure learning (BNSL), we need the prior probability
over structures and parameters. If the former is the uniform distribution, the
latter determines the correctness of BNSL. In this paper, we compare BDeu
(Bayesian Dirichlet equivalent uniform) and Jeffreys' prior w.r.t. their
consistency. When we seek a parent set $U$ of a variable $X$, we require
regularity that if $H(X|U)\leq H(X|U')$ and $U\subsetneq U'$, then $U$ should
be chosen rather than $U'$. We prove that the BDeu scores violate the property
and cause fatal situations in BNSL. This is because for the BDeu scores, for
any sample size $n$,there exists a probability in the form
$P(X,Y,Z)={P(XZ)P(YZ)}/{P(Z)}$ such that the probability of deciding that $X$
and $Y$ are not conditionally independent given $Z$ is more than a half. For
Jeffreys' prior, the false-positive probability uniformly converges to zero
without depending on any parameter values, and no such an inconvenience occurs.
|
[
"Joe Suzuki",
"['Joe Suzuki']"
] |
cs.NI cs.LG
| null |
1607.0445
| null | null | null | null | null |
Channel Selection Algorithm for Cognitive Radio Networks with
Heavy-Tailed Idle Times
|
We consider a multichannel Cognitive Radio Network (CRN), where secondary
users sequentially sense channels for opportunistic spectrum access. In this
scenario, the Channel Selection Algorithm (CSA) allows secondary users to find
a vacant channel with the minimal number of channel switches. Most of the
existing CSA literature assumes exponential ON-OFF time distribution for
primary users (PU) channel occupancy pattern. This exponential assumption might
be helpful to get performance bounds; but not useful to evaluate the
performance of CSA under realistic conditions. An in-depth analysis of
independent spectrum measurement traces reveals that wireless channels have
typically heavy-tailed PU OFF times. In this paper, we propose an extension to
the Predictive CSA framework and its generalization for heavy tailed PU OFF
time distribution, which represents realistic scenarios. In particular, we
calculate the probability of channel being idle for hyper-exponential OFF times
to use in CSA. We implement our proposed CSA framework in a wireless test-bed
and comprehensively evaluate its performance by recreating the realistic PU
channel occupancy patterns. The proposed CSA shows significant reduction in
channel switches and energy consumption as compared to Predictive CSA which
always assumes exponential PU ON-OFF times.Through our work, we show the impact
of the PU channel occupancy pattern on the performance of CSA in multichannel
CRN.
|
[
"S. Senthilmurugan, Junaid Ansari, Petri M\\\"ah\\\"onen, T.G. Venkatesh,\n and Marina Petrova"
] |
null | null |
1607.04450
| null | null |
http://arxiv.org/pdf/1607.04450v1
|
2016-07-15T10:46:43Z
|
2016-07-15T10:46:43Z
|
Channel Selection Algorithm for Cognitive Radio Networks with
Heavy-Tailed Idle Times
|
We consider a multichannel Cognitive Radio Network (CRN), where secondary users sequentially sense channels for opportunistic spectrum access. In this scenario, the Channel Selection Algorithm (CSA) allows secondary users to find a vacant channel with the minimal number of channel switches. Most of the existing CSA literature assumes exponential ON-OFF time distribution for primary users (PU) channel occupancy pattern. This exponential assumption might be helpful to get performance bounds; but not useful to evaluate the performance of CSA under realistic conditions. An in-depth analysis of independent spectrum measurement traces reveals that wireless channels have typically heavy-tailed PU OFF times. In this paper, we propose an extension to the Predictive CSA framework and its generalization for heavy tailed PU OFF time distribution, which represents realistic scenarios. In particular, we calculate the probability of channel being idle for hyper-exponential OFF times to use in CSA. We implement our proposed CSA framework in a wireless test-bed and comprehensively evaluate its performance by recreating the realistic PU channel occupancy patterns. The proposed CSA shows significant reduction in channel switches and energy consumption as compared to Predictive CSA which always assumes exponential PU ON-OFF times.Through our work, we show the impact of the PU channel occupancy pattern on the performance of CSA in multichannel CRN.
|
[
"['S. Senthilmurugan' 'Junaid Ansari' 'Petri Mähönen' 'T. G. Venkatesh'\n 'Marina Petrova']"
] |
cs.CL cs.LG stat.ML
| null |
1607.04492
| null | null |
http://arxiv.org/pdf/1607.04492v2
|
2017-02-28T17:10:33Z
|
2016-07-15T12:59:01Z
|
Neural Tree Indexers for Text Understanding
|
Recurrent neural networks (RNNs) process input text sequentially and model
the conditional transition between word tokens. In contrast, the advantages of
recursive networks include that they explicitly model the compositionality and
the recursive structure of natural language. However, the current recursive
architecture is limited by its dependence on syntactic tree. In this paper, we
introduce a robust syntactic parsing-independent tree structured model, Neural
Tree Indexers (NTI) that provides a middle ground between the sequential RNNs
and the syntactic treebased recursive models. NTI constructs a full n-ary tree
by processing the input text with its node function in a bottom-up fashion.
Attention mechanism can then be applied to both structure and node function. We
implemented and evaluated a binarytree model of NTI, showing the model achieved
the state-of-the-art performance on three different NLP tasks: natural language
inference, answer sentence selection, and sentence classification,
outperforming state-of-the-art recurrent and recursive neural networks.
|
[
"Tsendsuren Munkhdalai and Hong Yu",
"['Tsendsuren Munkhdalai' 'Hong Yu']"
] |
cs.LG math.OC stat.ML
| null |
1607.04579
| null | null |
http://arxiv.org/pdf/1607.04579v2
|
2016-12-31T06:54:37Z
|
2016-07-15T16:56:22Z
|
Learning from Conditional Distributions via Dual Embeddings
|
Many machine learning tasks, such as learning with invariance and policy
evaluation in reinforcement learning, can be characterized as problems of
learning from conditional distributions. In such problems, each sample $x$
itself is associated with a conditional distribution $p(z|x)$ represented by
samples $\{z_i\}_{i=1}^M$, and the goal is to learn a function $f$ that links
these conditional distributions to target values $y$. These learning problems
become very challenging when we only have limited samples or in the extreme
case only one sample from each conditional distribution. Commonly used
approaches either assume that $z$ is independent of $x$, or require an
overwhelmingly large samples from each conditional distribution.
To address these challenges, we propose a novel approach which employs a new
min-max reformulation of the learning from conditional distribution problem.
With such new reformulation, we only need to deal with the joint distribution
$p(z,x)$. We also design an efficient learning algorithm, Embedding-SGD, and
establish theoretical sample complexity for such problems. Finally, our
numerical experiments on both synthetic and real-world datasets show that the
proposed approach can significantly improve over the existing algorithms.
|
[
"Bo Dai, Niao He, Yunpeng Pan, Byron Boots, Le Song",
"['Bo Dai' 'Niao He' 'Yunpeng Pan' 'Byron Boots' 'Le Song']"
] |
cs.SD cs.LG cs.NE
|
10.1109/TASLP.2016.2592698
|
1607.04589
| null | null |
http://arxiv.org/abs/1607.04589v1
|
2016-07-15T17:29:26Z
|
2016-07-15T17:29:26Z
|
Automatic Environmental Sound Recognition: Performance versus
Computational Cost
|
In the context of the Internet of Things (IoT), sound sensing applications
are required to run on embedded platforms where notions of product pricing and
form factor impose hard constraints on the available computing power. Whereas
Automatic Environmental Sound Recognition (AESR) algorithms are most often
developed with limited consideration for computational cost, this article seeks
which AESR algorithm can make the most of a limited amount of computing power
by comparing the sound classification performance em as a function of its
computational cost. Results suggest that Deep Neural Networks yield the best
ratio of sound classification accuracy across a range of computational costs,
while Gaussian Mixture Models offer a reasonable accuracy at a consistently
small cost, and Support Vector Machines stand between both in terms of
compromise between accuracy and computational cost.
|
[
"['Siddharth Sigtia' 'Adam M. Stark' 'Sacha Krstulovic' 'Mark D. Plumbley']",
"Siddharth Sigtia, Adam M. Stark, Sacha Krstulovic and Mark D. Plumbley"
] |
cs.CL cs.LG
| null |
1607.04606
| null | null |
http://arxiv.org/pdf/1607.04606v2
|
2017-06-19T17:41:07Z
|
2016-07-15T18:27:55Z
|
Enriching Word Vectors with Subword Information
|
Continuous word representations, trained on large unlabeled corpora are
useful for many natural language processing tasks. Popular models that learn
such representations ignore the morphology of words, by assigning a distinct
vector to each word. This is a limitation, especially for languages with large
vocabularies and many rare words. In this paper, we propose a new approach
based on the skipgram model, where each word is represented as a bag of
character $n$-grams. A vector representation is associated to each character
$n$-gram; words being represented as the sum of these representations. Our
method is fast, allowing to train models on large corpora quickly and allows us
to compute word representations for words that did not appear in the training
data. We evaluate our word representations on nine different languages, both on
word similarity and analogy tasks. By comparing to recently proposed
morphological word representations, we show that our vectors achieve
state-of-the-art performance on these tasks.
|
[
"Piotr Bojanowski, Edouard Grave, Armand Joulin, Tomas Mikolov",
"['Piotr Bojanowski' 'Edouard Grave' 'Armand Joulin' 'Tomas Mikolov']"
] |
cs.LG cs.RO
| null |
1607.04614
| null | null |
http://arxiv.org/pdf/1607.04614v1
|
2016-07-15T18:54:15Z
|
2016-07-15T18:54:15Z
|
Guided Policy Search as Approximate Mirror Descent
|
Guided policy search algorithms can be used to optimize complex nonlinear
policies, such as deep neural networks, without directly computing policy
gradients in the high-dimensional parameter space. Instead, these methods use
supervised learning to train the policy to mimic a "teacher" algorithm, such as
a trajectory optimizer or a trajectory-centric reinforcement learning method.
Guided policy search methods provide asymptotic local convergence guarantees by
construction, but it is not clear how much the policy improves within a small,
finite number of iterations. We show that guided policy search algorithms can
be interpreted as an approximate variant of mirror descent, where the
projection onto the constraint manifold is not exact. We derive a new guided
policy search algorithm that is simpler and provides appealing improvement and
convergence guarantees in simplified convex and linear settings, and show that
in the more general nonlinear setting, the error in the projection step can be
bounded. We provide empirical results on several simulated robotic navigation
and manipulation tasks that show that our method is stable and achieves similar
or better performance when compared to prior guided policy search methods, with
a simpler formulation and fewer hyperparameters.
|
[
"William Montgomery, Sergey Levine",
"['William Montgomery' 'Sergey Levine']"
] |
cs.LG cs.CL
| null |
1607.04683
| null | null |
http://arxiv.org/pdf/1607.04683v2
|
2016-12-17T01:31:31Z
|
2016-07-15T23:31:45Z
|
On the efficient representation and execution of deep acoustic models
|
In this paper we present a simple and computationally efficient quantization
scheme that enables us to reduce the resolution of the parameters of a neural
network from 32-bit floating point values to 8-bit integer values. The proposed
quantization scheme leads to significant memory savings and enables the use of
optimized hardware instructions for integer arithmetic, thus significantly
reducing the cost of inference. Finally, we propose a "quantization aware"
training process that applies the proposed scheme during network training and
find that it allows us to recover most of the loss in accuracy introduced by
quantization. We validate the proposed techniques by applying them to a long
short-term memory-based acoustic model on an open-ended large vocabulary speech
recognition task.
|
[
"Raziel Alvarez, Rohit Prabhavalkar, Anton Bakhtin",
"['Raziel Alvarez' 'Rohit Prabhavalkar' 'Anton Bakhtin']"
] |
cs.SI cs.LG
| null |
1607.04747
| null | null |
http://arxiv.org/pdf/1607.04747v2
|
2016-12-24T02:27:15Z
|
2016-07-16T15:00:40Z
|
Learning Social Circles in Ego Networks based on Multi-View Social
Graphs
|
In social network analysis, automatic social circle detection in ego-networks
is becoming a fundamental and important task, with many potential applications
such as user privacy protection or interest group recommendation. So far, most
studies have focused on addressing two questions, namely, how to detect
overlapping circles and how to detect circles using a combination of network
structure and network node attributes. This paper asks an orthogonal research
question, that is, how to detect circles based on network structures that are
(usually) described by multiple views? Our investigation begins with crawling
ego-networks from Twitter and employing classic techniques to model their
structures by six views, including user relationships, user interactions and
user content. We then apply both standard and our modified multi-view spectral
clustering techniques to detect social circles in these ego-networks. Based on
extensive automatic and manual experimental evaluations, we deliver two major
findings: first, multi-view clustering techniques perform better than common
single-view clustering techniques, which only use one view or naively integrate
all views for detection, second, the standard multi-view clustering technique
is less robust than our modified technique, which selectively transfers
information across views based on an assumption that sparse network structures
are (potentially) incomplete. In particular, the second finding makes us
believe a direct application of standard clustering on potentially incomplete
networks may yield biased results. We lightly examine this issue in theory,
where we derive an upper bound for such bias by integrating theories of
spectral clustering and matrix perturbation, and discuss how it may be affected
by several network characteristics.
|
[
"Chao Lan, Yuhao Yang, Xiaoli Li, Bo Luo, Jun Huan",
"['Chao Lan' 'Yuhao Yang' 'Xiaoli Li' 'Bo Luo' 'Jun Huan']"
] |
cs.CY cs.LG
|
10.13140/RG.2.1.2449.0486
|
1607.0477
| null | null | null | null | null |
Shesop Healthcare: Stress and influenza classification using support
vector machine kernel
|
Shesop is an integrated system to make human lives more easily and to help
people in terms of healthcare. Stress and influenza classification is a part of
Shesop's application for a healthcare devices such as smartwatch, polar and
fitbit. The main objective of this paper is to classify a new data and inform
whether you are stress, depressed, caught by influenza or not. We will use the
heart rate data taken for months in Bandung, analyze the data and find the
Heart rate variance that constantly related with the stress and flu level.
After we found the variable, we will use the variable as an input to the
support vector machine learning. We will use the lagrangian and kernel
technique to transform 2D data into 3D data so we can use the linear
classification in 3D space. In the end, we could use the machine learning's
result to classify new data and get the final result immediately: stress or
not, influenza or not.
|
[
"Andrien Ivander Wijaya, Ary Setijadi Prihatmanto, Rifki Wijaya"
] |
null | null |
1607.04770
| null | null |
http://arxiv.org/abs/1607.04770v1
|
2016-07-16T17:22:00Z
|
2016-07-16T17:22:00Z
|
Shesop Healthcare: Stress and influenza classification using support
vector machine kernel
|
Shesop is an integrated system to make human lives more easily and to help people in terms of healthcare. Stress and influenza classification is a part of Shesop's application for a healthcare devices such as smartwatch, polar and fitbit. The main objective of this paper is to classify a new data and inform whether you are stress, depressed, caught by influenza or not. We will use the heart rate data taken for months in Bandung, analyze the data and find the Heart rate variance that constantly related with the stress and flu level. After we found the variable, we will use the variable as an input to the support vector machine learning. We will use the lagrangian and kernel technique to transform 2D data into 3D data so we can use the linear classification in 3D space. In the end, we could use the machine learning's result to classify new data and get the final result immediately: stress or not, influenza or not.
|
[
"['Andrien Ivander Wijaya' 'Ary Setijadi Prihatmanto' 'Rifki Wijaya']"
] |
cs.CV cs.LG
| null |
1607.0478
| null | null | null | null | null |
Exploiting Multi-modal Curriculum in Noisy Web Data for Large-scale
Concept Learning
|
Learning video concept detectors automatically from the big but noisy web
data with no additional manual annotations is a novel but challenging area in
the multimedia and the machine learning community. A considerable amount of
videos on the web are associated with rich but noisy contextual information,
such as the title, which provides weak annotations or labels about the video
content. To leverage the big noisy web labels, this paper proposes a novel
method called WEbly-Labeled Learning (WELL), which is established on the
state-of-the-art machine learning algorithm inspired by the learning process of
human. WELL introduces a number of novel multi-modal approaches to incorporate
meaningful prior knowledge called curriculum from the noisy web videos. To
investigate this problem, we empirically study the curriculum constructed from
the multi-modal features of the videos collected from YouTube and Flickr. The
efficacy and the scalability of WELL have been extensively demonstrated on two
public benchmarks, including the largest multimedia dataset and the largest
manually-labeled video set. The comprehensive experimental results demonstrate
that WELL outperforms state-of-the-art studies by a statically significant
margin on learning concepts from noisy web video data. In addition, the results
also verify that WELL is robust to the level of noisiness in the video data.
Notably, WELL trained on sufficient noisy web labels is able to achieve a
comparable accuracy to supervised learning methods trained on the clean
manually-labeled data.
|
[
"Junwei Liang, Lu Jiang, Deyu Meng, Alexander Hauptmann"
] |
null | null |
1607.04780
| null | null |
http://arxiv.org/pdf/1607.04780v1
|
2016-07-16T18:14:51Z
|
2016-07-16T18:14:51Z
|
Exploiting Multi-modal Curriculum in Noisy Web Data for Large-scale
Concept Learning
|
Learning video concept detectors automatically from the big but noisy web data with no additional manual annotations is a novel but challenging area in the multimedia and the machine learning community. A considerable amount of videos on the web are associated with rich but noisy contextual information, such as the title, which provides weak annotations or labels about the video content. To leverage the big noisy web labels, this paper proposes a novel method called WEbly-Labeled Learning (WELL), which is established on the state-of-the-art machine learning algorithm inspired by the learning process of human. WELL introduces a number of novel multi-modal approaches to incorporate meaningful prior knowledge called curriculum from the noisy web videos. To investigate this problem, we empirically study the curriculum constructed from the multi-modal features of the videos collected from YouTube and Flickr. The efficacy and the scalability of WELL have been extensively demonstrated on two public benchmarks, including the largest multimedia dataset and the largest manually-labeled video set. The comprehensive experimental results demonstrate that WELL outperforms state-of-the-art studies by a statically significant margin on learning concepts from noisy web video data. In addition, the results also verify that WELL is robust to the level of noisiness in the video data. Notably, WELL trained on sufficient noisy web labels is able to achieve a comparable accuracy to supervised learning methods trained on the clean manually-labeled data.
|
[
"['Junwei Liang' 'Lu Jiang' 'Deyu Meng' 'Alexander Hauptmann']"
] |
cs.IT cs.LG cs.NE math.IT
| null |
1607.04793
| null | null |
http://arxiv.org/pdf/1607.04793v2
|
2016-09-30T14:43:52Z
|
2016-07-16T19:09:26Z
|
Learning to Decode Linear Codes Using Deep Learning
|
A novel deep learning method for improving the belief propagation algorithm
is proposed. The method generalizes the standard belief propagation algorithm
by assigning weights to the edges of the Tanner graph. These edges are then
trained using deep learning techniques. A well-known property of the belief
propagation algorithm is the independence of the performance on the transmitted
codeword. A crucial property of our new method is that our decoder preserved
this property. Furthermore, this property allows us to learn only a single
codeword instead of exponential number of code-words. Improvements over the
belief propagation algorithm are demonstrated for various high density parity
check codes.
|
[
"['Eliya Nachmani' 'Yair Beery' 'David Burshtein']",
"Eliya Nachmani, Yair Beery and David Burshtein"
] |
cs.LG
|
10.1016/j.jcp.2017.01.060
|
1607.04805
| null | null |
http://arxiv.org/abs/1607.04805v1
|
2016-07-16T22:12:26Z
|
2016-07-16T22:12:26Z
|
Inferring solutions of differential equations using noisy multi-fidelity
data
|
For more than two centuries, solutions of differential equations have been
obtained either analytically or numerically based on typically well-behaved
forcing and boundary conditions for well-posed problems. We are changing this
paradigm in a fundamental way by establishing an interface between
probabilistic machine learning and differential equations. We develop
data-driven algorithms for general linear equations using Gaussian process
priors tailored to the corresponding integro-differential operators. The only
observables are scarce noisy multi-fidelity data for the forcing and solution
that are not required to reside on the domain boundary. The resulting
predictive posterior distributions quantify uncertainty and naturally lead to
adaptive solution refinement via active learning. This general framework
circumvents the tyranny of numerical discretization as well as the consistency
and stability issues of time-integration, and is scalable to high-dimensions.
|
[
"Maziar Raissi, Paris Perdikaris, George Em. Karniadakis",
"['Maziar Raissi' 'Paris Perdikaris' 'George Em. Karniadakis']"
] |
cs.LG
|
10.1109/ICDMW.2016.0077
|
1607.04867
| null | null |
http://arxiv.org/abs/1607.04867v2
|
2016-07-19T07:01:07Z
|
2016-07-17T13:14:56Z
|
Robust Automated Human Activity Recognition and its Application to Sleep
Research
|
Human Activity Recognition (HAR) is a powerful tool for understanding human
behaviour. Applying HAR to wearable sensors can provide new insights by
enriching the feature set in health studies, and enhance the personalisation
and effectiveness of health, wellness, and fitness applications. Wearable
devices provide an unobtrusive platform for user monitoring, and due to their
increasing market penetration, feel intrinsic to the wearer. The integration of
these devices in daily life provide a unique opportunity for understanding
human health and wellbeing. This is referred to as the "quantified self"
movement. The analyses of complex health behaviours such as sleep,
traditionally require a time-consuming manual interpretation by experts. This
manual work is necessary due to the erratic periodicity and persistent
noisiness of human behaviour. In this paper, we present a robust automated
human activity recognition algorithm, which we call RAHAR. We test our
algorithm in the application area of sleep research by providing a novel
framework for evaluating sleep quality and examining the correlation between
the aforementioned and an individual's physical activity. Our results improve
the state-of-the-art procedure in sleep research by 15 percent for area under
ROC and by 30 percent for F1 score on average. However, application of RAHAR is
not limited to sleep analysis and can be used for understanding other health
problems such as obesity, diabetes, and cardiac diseases.
|
[
"['Aarti Sathyanarayana' 'Ferda Ofli' 'Luis Fernandes-Luque'\n 'Jaideep Srivastava' 'Ahmed Elmagarmid' 'Teresa Arora' 'Shahrad Taheri']",
"Aarti Sathyanarayana, Ferda Ofli, Luis Fernandes-Luque, Jaideep\n Srivastava, Ahmed Elmagarmid, Teresa Arora, Shahrad Taheri"
] |
stat.ML cs.LG
| null |
1607.04903
| null | null |
http://arxiv.org/pdf/1607.04903v3
|
2017-01-10T11:13:35Z
|
2016-07-17T18:58:12Z
|
Learning Unitary Operators with Help From u(n)
|
A major challenge in the training of recurrent neural networks is the
so-called vanishing or exploding gradient problem. The use of a norm-preserving
transition operator can address this issue, but parametrization is challenging.
In this work we focus on unitary operators and describe a parametrization using
the Lie algebra $\mathfrak{u}(n)$ associated with the Lie group $U(n)$ of $n
\times n$ unitary matrices. The exponential map provides a correspondence
between these spaces, and allows us to define a unitary matrix using $n^2$ real
coefficients relative to a basis of the Lie algebra. The parametrization is
closed under additive updates of these coefficients, and thus provides a simple
space in which to do gradient descent. We demonstrate the effectiveness of this
parametrization on the problem of learning arbitrary unitary operators,
comparing to several baselines and outperforming a recently-proposed
lower-dimensional parametrization. We additionally use our parametrization to
generalize a recently-proposed unitary recurrent neural network to arbitrary
unitary matrices, using it to solve standard long-memory tasks.
|
[
"Stephanie L. Hyland, Gunnar R\\\"atsch",
"['Stephanie L. Hyland' 'Gunnar Rätsch']"
] |
cs.LG cs.AI cs.CV
| null |
1607.04917
| null | null |
http://arxiv.org/pdf/1607.04917v2
|
2016-12-28T06:39:01Z
|
2016-07-17T21:49:00Z
|
Piecewise convexity of artificial neural networks
|
Although artificial neural networks have shown great promise in applications
including computer vision and speech recognition, there remains considerable
practical and theoretical difficulty in optimizing their parameters. The
seemingly unreasonable success of gradient descent methods in minimizing these
non-convex functions remains poorly understood. In this work we offer some
theoretical guarantees for networks with piecewise affine activation functions,
which have in recent years become the norm. We prove three main results.
Firstly, that the network is piecewise convex as a function of the input data.
Secondly, that the network, considered as a function of the parameters in a
single layer, all others held constant, is again piecewise convex. Finally,
that the network as a function of all its parameters is piecewise multi-convex,
a generalization of biconvexity. From here we characterize the local minima and
stationary points of the training objective, showing that they minimize certain
subsets of the parameter space. We then analyze the performance of two
optimization algorithms on multi-convex problems: gradient descent, and a
method which repeatedly solves a number of convex sub-problems. We prove
necessary convergence conditions for the first algorithm and both necessary and
sufficient conditions for the second, after introducing regularization to the
objective. Finally, we remark on the remaining difficulty of the global
optimization problem. Under the squared error objective, we show that by
varying the training data, a single rectifier neuron admits local minima
arbitrarily far apart, both in objective value and parameter space.
|
[
"Blaine Rister, Daniel L Rubin",
"['Blaine Rister' 'Daniel L Rubin']"
] |
cs.DS cs.DC cs.LG
| null |
1607.04984
| null | null |
http://arxiv.org/pdf/1607.04984v3
|
2019-04-11T13:47:40Z
|
2016-07-18T09:30:49Z
|
Distributed Graph Clustering by Load Balancing
|
Graph clustering is a fundamental computational problem with a number of
applications in algorithm design, machine learning, data mining, and analysis
of social networks. Over the past decades, researchers have proposed a number
of algorithmic design methods for graph clustering. However, most of these
methods are based on complicated spectral techniques or convex optimisation,
and cannot be applied directly for clustering many networks that occur in
practice, whose information is often collected on different sites. Designing a
simple and distributed clustering algorithm is of great interest, and has wide
applications for processing big datasets. In this paper we present a simple and
distributed algorithm for graph clustering: for a wide class of graphs that are
characterised by a strong cluster-structure, our algorithm finishes in a
poly-logarithmic number of rounds, and recovers a partition of the graph close
to an optimal partition. The main component of our algorithm is an application
of the random matching model of load balancing, which is a fundamental protocol
in distributed computing and has been extensively studied in the past 20 years.
Hence, our result highlights an intrinsic and interesting connection between
graph clustering and load balancing. At a technical level, we present a purely
algebraic result characterising the early behaviours of load balancing
processes for graphs exhibiting a cluster-structure. We believe that this
result can be further applied to analyse other gossip processes, such as rumour
spreading and averaging processes.
|
[
"['He Sun' 'Luca Zanetti']",
"He Sun, Luca Zanetti"
] |
stat.ML cs.LG
| null |
1607.05002
| null | null |
http://arxiv.org/pdf/1607.05002v1
|
2016-07-18T10:14:46Z
|
2016-07-18T10:14:46Z
|
Geometric Mean Metric Learning
|
We revisit the task of learning a Euclidean metric from data. We approach
this problem from first principles and formulate it as a surprisingly simple
optimization problem. Indeed, our formulation even admits a closed form
solution. This solution possesses several very attractive properties: (i) an
innate geometric appeal through the Riemannian geometry of positive definite
matrices; (ii) ease of interpretability; and (iii) computational speed several
orders of magnitude faster than the widely used LMNN and ITML methods.
Furthermore, on standard benchmark datasets, our closed-form solution
consistently attains higher classification accuracy.
|
[
"['Pourya Habib Zadeh' 'Reshad Hosseini' 'Suvrit Sra']",
"Pourya Habib Zadeh, Reshad Hosseini and Suvrit Sra"
] |
stat.ML cs.LG
| null |
1607.05047
| null | null |
http://arxiv.org/pdf/1607.05047v1
|
2016-07-18T12:43:40Z
|
2016-07-18T12:43:40Z
|
A Batch, Off-Policy, Actor-Critic Algorithm for Optimizing the Average
Reward
|
We develop an off-policy actor-critic algorithm for learning an optimal
policy from a training set composed of data from multiple individuals. This
algorithm is developed with a view towards its use in mobile health.
|
[
"['S. A. Murphy' 'Y. Deng' 'E. B. Laber' 'H. R. Maei' 'R. S. Sutton'\n 'K. Witkiewitz']",
"S.A. Murphy, Y. Deng, E.B. Laber, H.R. Maei, R.S. Sutton, K.\n Witkiewitz"
] |
cs.CL cs.LG stat.ML
| null |
1607.05241
| null | null |
http://arxiv.org/pdf/1607.05241v1
|
2016-07-18T19:01:00Z
|
2016-07-18T19:01:00Z
|
Imitation Learning with Recurrent Neural Networks
|
We present a novel view that unifies two frameworks that aim to solve
sequential prediction problems: learning to search (L2S) and recurrent neural
networks (RNN). We point out equivalences between elements of the two
frameworks. By complementing what is missing from one framework comparing to
the other, we introduce a more advanced imitation learning framework that, on
one hand, augments L2S s notion of search space and, on the other hand,
enhances RNNs training procedure to be more robust to compounding errors
arising from training on highly correlated examples.
|
[
"Khanh Nguyen",
"['Khanh Nguyen']"
] |
cs.LG
| null |
1607.05271
| null | null |
http://arxiv.org/pdf/1607.05271v1
|
2016-07-18T14:46:05Z
|
2016-07-18T14:46:05Z
|
A Semiparametric Model for Bayesian Reader Identification
|
We study the problem of identifying individuals based on their characteristic
gaze patterns during reading of arbitrary text. The motivation for this problem
is an unobtrusive biometric setting in which a user is observed during access
to a document, but no specific challenge protocol requiring the user's time and
attention is carried out. Existing models of individual differences in gaze
control during reading are either based on simple aggregate features of eye
movements, or rely on parametric density models to describe, for instance,
saccade amplitudes or word fixation durations. We develop flexible
semiparametric models of eye movements during reading in which densities are
inferred under a Gaussian process prior centered at a parametric distribution
family that is expected to approximate the true distribution well. An empirical
study on reading data from 251 individuals shows significant improvements over
the state of the art.
|
[
"Ahmed Abdelwahab, Reinhold Kliegl and Niels Landwehr",
"['Ahmed Abdelwahab' 'Reinhold Kliegl' 'Niels Landwehr']"
] |
cs.AI cs.CV cs.LG
| null |
1607.05387
| null | null |
http://arxiv.org/pdf/1607.05387v2
|
2016-11-14T07:32:35Z
|
2016-07-19T03:09:31Z
|
Generating Images Part by Part with Composite Generative Adversarial
Networks
|
Image generation remains a fundamental problem in artificial intelligence in
general and deep learning in specific. The generative adversarial network (GAN)
was successful in generating high quality samples of natural images. We propose
a model called composite generative adversarial network, that reveals the
complex structure of images with multiple generators in which each generator
generates some part of the image. Those parts are combined by alpha blending
process to create a new single image. It can generate, for example, background
and face sequentially with two generators, after training on face dataset.
Training was done in an unsupervised way without any labels about what each
generator should generate. We found possibilities of learning the structure by
using this generative model empirically.
|
[
"Hanock Kwak, Byoung-Tak Zhang",
"['Hanock Kwak' 'Byoung-Tak Zhang']"
] |
cs.DS cs.GT cs.LG
| null |
1607.05397
| null | null |
http://arxiv.org/pdf/1607.05397v3
|
2017-06-10T20:10:09Z
|
2016-07-19T04:22:00Z
|
Multidimensional Dynamic Pricing for Welfare Maximization
|
We study the problem of a seller dynamically pricing $d$ distinct types of
indivisible goods, when faced with the online arrival of unit-demand buyers
drawn independently from an unknown distribution. The goods are not in limited
supply, but can only be produced at a limited rate and are costly to produce.
The seller observes only the bundle of goods purchased at each day, but nothing
else about the buyer's valuation function. Our main result is a dynamic pricing
algorithm for optimizing welfare (including the seller's cost of production)
that runs in time and a number of rounds that are polynomial in $d$ and the
approximation parameter. We are able to do this despite the fact that (i) the
price-response function is not continuous, and even its fractional relaxation
is a non-concave function of the prices, and (ii) the welfare is not observable
to the seller.
We derive this result as an application of a general technique for optimizing
welfare over \emph{divisible} goods, which is of independent interest. When
buyers have strongly concave, H\"older continuous valuation functions over $d$
divisible goods, we give a general polynomial time dynamic pricing technique.
We are able to apply this technique to the setting of unit demand buyers
despite the fact that in that setting the goods are not divisible, and the
natural fractional relaxation of a unit demand valuation is not strongly
concave. In order to apply our general technique, we introduce a novel price
randomization procedure which has the effect of implicitly inducing buyers to
"regularize" their valuations with a strongly concave function. Finally, we
also extend our results to a limited-supply setting in which the number of
copies of each good cannot be replenished.
|
[
"Aaron Roth, Aleksandrs Slivkins, Jonathan Ullman, Zhiwei Steven Wu",
"['Aaron Roth' 'Aleksandrs Slivkins' 'Jonathan Ullman' 'Zhiwei Steven Wu']"
] |
cs.CV cs.LG stat.ML
| null |
1607.05691
| null | null |
http://arxiv.org/pdf/1607.05691v1
|
2016-07-19T18:40:01Z
|
2016-07-19T18:40:01Z
|
Information-theoretical label embeddings for large-scale image
classification
|
We present a method for training multi-label, massively multi-class image
classification models, that is faster and more accurate than supervision via a
sigmoid cross-entropy loss (logistic regression). Our method consists in
embedding high-dimensional sparse labels onto a lower-dimensional dense sphere
of unit-normed vectors, and treating the classification problem as a cosine
proximity regression problem on this sphere. We test our method on a dataset of
300 million high-resolution images with 17,000 labels, where it yields
considerably faster convergence, as well as a 7% higher mean average precision
compared to logistic regression.
|
[
"Fran\\c{c}ois Chollet",
"['François Chollet']"
] |
cs.LG
| null |
1607.05749
| null | null |
http://arxiv.org/pdf/1607.05749v1
|
2016-07-19T20:21:43Z
|
2016-07-19T20:21:43Z
|
PRIIME: A Generic Framework for Interactive Personalized Interesting
Pattern Discovery
|
The traditional frequent pattern mining algorithms generate an exponentially
large number of patterns of which a substantial proportion are not much
significant for many data analysis endeavors. Discovery of a small number of
personalized interesting patterns from the large output set according to a
particular user's interest is an important as well as challenging task.
Existing works on pattern summarization do not solve this problem from the
personalization viewpoint. In this work, we propose an interactive pattern
discovery framework named PRIIME which identifies a set of interesting patterns
for a specific user without requiring any prior input on the interestingness
measure of patterns from the user. The proposed framework is generic to support
discovery of the interesting set, sequence and graph type patterns. We develop
a softmax classification based iterative learning algorithm that uses a limited
number of interactive feedback from the user to learn her interestingness
profile, and use this profile for pattern recommendation. To handle sequence
and graph type patterns PRIIME adopts a neural net (NN) based unsupervised
feature construction approach. We also develop a strategy that combines
exploration and exploitation to select patterns for feedback. We show
experimental results on several real-life datasets to validate the performance
of the proposed method. We also compare with the existing methods of
interactive pattern discovery to show that our method is substantially superior
in performance. To portray the applicability of the framework, we present a
case study from the real-estate domain.
|
[
"['Mansurul Bhuiyan' 'Mohammad Al Hasan']",
"Mansurul Bhuiyan and Mohammad Al Hasan"
] |
cs.SI cs.LG physics.data-an physics.soc-ph stat.OT
|
10.1007/s10618-017-0548-4
|
1607.05952
| null | null |
http://arxiv.org/abs/1607.05952v3
|
2017-12-09T10:51:19Z
|
2016-07-16T11:54:27Z
|
Data-driven generation of spatio-temporal routines in human mobility
|
The generation of realistic spatio-temporal trajectories of human mobility is
of fundamental importance in a wide range of applications, such as the
developing of protocols for mobile ad-hoc networks or what-if analysis in urban
ecosystems. Current generative algorithms fail in accurately reproducing the
individuals' recurrent schedules and at the same time in accounting for the
possibility that individuals may break the routine during periods of variable
duration. In this article we present DITRAS (DIary-based TRAjectory Simulator),
a framework to simulate the spatio-temporal patterns of human mobility. DITRAS
operates in two steps: the generation of a mobility diary and the translation
of the mobility diary into a mobility trajectory. We propose a data-driven
algorithm which constructs a diary generator from real data, capturing the
tendency of individuals to follow or break their routine. We also propose a
trajectory generator based on the concept of preferential exploration and
preferential return. We instantiate DITRAS with the proposed diary and
trajectory generators and compare the resulting algorithm with real data and
synthetic data produced by other generative algorithms, built by instantiating
DITRAS with several combinations of diary and trajectory generators. We show
that the proposed algorithm reproduces the statistical properties of real
trajectories in the most accurate way, making a step forward the understanding
of the origin of the spatio-temporal patterns of human mobility.
|
[
"Luca Pappalardo and Filippo Simini",
"['Luca Pappalardo' 'Filippo Simini']"
] |
cs.SY cs.LG
| null |
1607.05962
| null | null |
http://arxiv.org/pdf/1607.05962v1
|
2016-07-20T14:00:53Z
|
2016-07-20T14:00:53Z
|
Indoor occupancy estimation from carbon dioxide concentration
|
This paper presents an indoor occupancy estimator with which we can estimate
the number of real-time indoor occupants based on the carbon dioxide (CO2)
measurement. The estimator is actually a dynamic model of the occupancy level.
To identify the dynamic model, we propose the Feature Scaled Extreme Learning
Machine (FS-ELM) algorithm, which is a variation of the standard Extreme
Learning Machine (ELM) but is shown to perform better for the occupancy
estimation problem. The measured CO2 concentration suffers from serious spikes.
We find that pre-smoothing the CO2 data can greatly improve the estimation
accuracy. In real applications, however, we cannot obtain the real-time
globally smoothed CO2 data. We provide a way to use the locally smoothed CO2
data instead, which is real-time available. We introduce a new criterion, i.e.
$x$-tolerance accuracy, to assess the occupancy estimator. The proposed
occupancy estimator was tested in an office room with 24 cubicles and 11 open
seats. The accuracy is up to 94 percent with a tolerance of four occupants.
|
[
"['Chaoyang Jiang' 'Mustafa K. Masood' 'Yeng Chai Soh' 'Hua Li']",
"Chaoyang Jiang, Mustafa K. Masood, Yeng Chai Soh, and Hua Li"
] |
cs.IT cs.LG math.IT stat.ML
| null |
1607.05966
| null | null |
http://arxiv.org/pdf/1607.05966v1
|
2016-07-20T14:14:49Z
|
2016-07-20T14:14:49Z
|
Onsager-corrected deep learning for sparse linear inverse problems
|
Deep learning has gained great popularity due to its widespread success on
many inference problems. We consider the application of deep learning to the
sparse linear inverse problem encountered in compressive sensing, where one
seeks to recover a sparse signal from a small number of noisy linear
measurements. In this paper, we propose a novel neural-network architecture
that decouples prediction errors across layers in the same way that the
approximate message passing (AMP) algorithm decouples them across iterations:
through Onsager correction. Numerical experiments suggest that our "learned
AMP" network significantly improves upon Gregor and LeCun's "learned ISTA"
network in both accuracy and complexity.
|
[
"Mark Borgerding and Philip Schniter",
"['Mark Borgerding' 'Philip Schniter']"
] |
stat.ML cs.LG
|
10.1017/S0269964816000279
|
1607.0597
| null | null | null | null | null |
On the Identification and Mitigation of Weaknesses in the Knowledge
Gradient Policy for Multi-Armed Bandits
|
The Knowledge Gradient (KG) policy was originally proposed for online ranking
and selection problems but has recently been adapted for use in online decision
making in general and multi-armed bandit problems (MABs) in particular. We
study its use in a class of exponential family MABs and identify weaknesses,
including a propensity to take actions which are dominated with respect to both
exploitation and exploration. We propose variants of KG which avoid such
errors. These new policies include an index heuristic which deploys a KG
approach to develop an approximation to the Gittins index. A numerical study
shows this policy to perform well over a range of MABs including those for
which index policies are not optimal. While KG does not make dominated actions
when bandits are Gaussian, it fails to be index consistent and appears not to
enjoy a performance advantage over competitor policies when arms are correlated
to compensate for its greater computational demands.
|
[
"James Edwards, Paul Fearnhead, Kevin Glazebrook"
] |
null | null |
1607.05970
| null | null |
http://arxiv.org/abs/1607.05970v2
|
2016-10-17T15:36:30Z
|
2016-07-20T14:21:42Z
|
On the Identification and Mitigation of Weaknesses in the Knowledge
Gradient Policy for Multi-Armed Bandits
|
The Knowledge Gradient (KG) policy was originally proposed for online ranking and selection problems but has recently been adapted for use in online decision making in general and multi-armed bandit problems (MABs) in particular. We study its use in a class of exponential family MABs and identify weaknesses, including a propensity to take actions which are dominated with respect to both exploitation and exploration. We propose variants of KG which avoid such errors. These new policies include an index heuristic which deploys a KG approach to develop an approximation to the Gittins index. A numerical study shows this policy to perform well over a range of MABs including those for which index policies are not optimal. While KG does not make dominated actions when bandits are Gaussian, it fails to be index consistent and appears not to enjoy a performance advantage over competitor policies when arms are correlated to compensate for its greater computational demands.
|
[
"['James Edwards' 'Paul Fearnhead' 'Kevin Glazebrook']"
] |
cs.LG cs.CV physics.data-an stat.ML
| null |
1607.06011
| null | null |
http://arxiv.org/pdf/1607.06011v1
|
2016-07-20T16:25:27Z
|
2016-07-20T16:25:27Z
|
On the Modeling of Error Functions as High Dimensional Landscapes for
Weight Initialization in Learning Networks
|
Next generation deep neural networks for classification hosted on embedded
platforms will rely on fast, efficient, and accurate learning algorithms.
Initialization of weights in learning networks has a great impact on the
classification accuracy. In this paper we focus on deriving good initial
weights by modeling the error function of a deep neural network as a
high-dimensional landscape. We observe that due to the inherent complexity in
its algebraic structure, such an error function may conform to general results
of the statistics of large systems. To this end we apply some results from
Random Matrix Theory to analyse these functions. We model the error function in
terms of a Hamiltonian in N-dimensions and derive some theoretical results
about its general behavior. These results are further used to make better
initial guesses of weights for the learning algorithm.
|
[
"['Julius' 'Gopinath Mahale' 'Sumana T.' 'C. S. Adityakrishna']",
"Julius, Gopinath Mahale, Sumana T., C. S. Adityakrishna"
] |
math.OC cs.DS cs.LG stat.ML
| null |
1607.06017
| null | null |
http://arxiv.org/pdf/1607.06017v2
|
2016-11-26T03:18:24Z
|
2016-07-20T16:43:18Z
|
Doubly Accelerated Methods for Faster CCA and Generalized
Eigendecomposition
|
We study $k$-GenEV, the problem of finding the top $k$ generalized
eigenvectors, and $k$-CCA, the problem of finding the top $k$ vectors in
canonical-correlation analysis. We propose algorithms $\mathtt{LazyEV}$ and
$\mathtt{LazyCCA}$ to solve the two problems with running times linearly
dependent on the input size and on $k$.
Furthermore, our algorithms are DOUBLY-ACCELERATED: our running times depend
only on the square root of the matrix condition number, and on the square root
of the eigengap. This is the first such result for both $k$-GenEV or $k$-CCA.
We also provide the first gap-free results, which provide running times that
depend on $1/\sqrt{\varepsilon}$ rather than the eigengap.
|
[
"Zeyuan Allen-Zhu, Yuanzhi Li",
"['Zeyuan Allen-Zhu' 'Yuanzhi Li']"
] |
cs.LG
| null |
1607.06123
| null | null |
http://arxiv.org/pdf/1607.06123v2
|
2016-09-07T21:15:06Z
|
2016-07-20T20:55:14Z
|
Predicting Branch Visits and Credit Card Up-selling using Temporal
Banking Data
|
There is an abundance of temporal and non-temporal data in banking (and other
industries), but such temporal activity data can not be used directly with
classical machine learning models. In this work, we perform extensive feature
extraction from the temporal user activity data in an attempt to predict user
visits to different branches and credit card up-selling utilizing user
information and the corresponding activity data, as part of \emph{ECML/PKDD
Discovery Challenge 2016 on Bank Card Usage Analysis}. Our solution ranked
\nth{4} for \emph{Task 1} and achieved an AUC of \textbf{$0.7056$} for
\emph{Task 2} on public leaderboard.
|
[
"Sandra Mitrovi\\'c and Gaurav Singh",
"['Sandra Mitrović' 'Gaurav Singh']"
] |
cs.CV cs.LG cs.NE
| null |
1607.06125
| null | null |
http://arxiv.org/pdf/1607.06125v1
|
2016-07-20T21:02:16Z
|
2016-07-20T21:02:16Z
|
Sequence to sequence learning for unconstrained scene text recognition
|
In this work we present a state-of-the-art approach for unconstrained natural
scene text recognition. We propose a cascade approach that incorporates a
convolutional neural network (CNN) architecture followed by a long short term
memory model (LSTM). The CNN learns visual features for the characters and uses
them with a softmax layer to detect sequence of characters. While the CNN gives
very good recognition results, it does not model relation between characters,
hence gives rise to false positive and false negative cases (confusing
characters due to visual similarities like "g" and "9", or confusing background
patches with characters; either removing existing characters or adding
non-existing ones) To alleviate these problems we leverage recent developments
in LSTM architectures to encode contextual information. We show that the LSTM
can dramatically reduce such errors and achieve state-of-the-art accuracy in
the task of unconstrained natural scene text recognition. Moreover we manually
remove all occurrences of the words that exist in the test set from our
training set to test whether our approach will generalize to unseen data. We
use the ICDAR 13 test set for evaluation and compare the results with the state
of the art approaches [11, 18]. We finally present an application of the work
in the domain of for traffic monitoring.
|
[
"Ahmed Mamdouh A. Hassanien",
"['Ahmed Mamdouh A. Hassanien']"
] |
cs.LG quant-ph stat.ML
| null |
1607.06146
| null | null |
http://arxiv.org/pdf/1607.06146v1
|
2016-07-20T22:46:32Z
|
2016-07-20T22:46:32Z
|
Supervised quantum gate "teaching" for quantum hardware design
|
We show how to train a quantum network of pairwise interacting qubits such
that its evolution implements a target quantum algorithm into a given network
subset. Our strategy is inspired by supervised learning and is designed to help
the physical construction of a quantum computer which operates with minimal
external classical control.
|
[
"['Leonardo Banchi' 'Nicola Pancotti' 'Sougato Bose']",
"Leonardo Banchi, Nicola Pancotti, Sougato Bose"
] |
cs.SI cs.IR cs.LG
| null |
1607.06182
| null | null |
http://arxiv.org/pdf/1607.06182v1
|
2016-07-21T04:10:38Z
|
2016-07-21T04:10:38Z
|
Streaming Recommender Systems
|
The increasing popularity of real-world recommender systems produces data
continuously and rapidly, and it becomes more realistic to study recommender
systems under streaming scenarios. Data streams present distinct properties
such as temporally ordered, continuous and high-velocity, which poses
tremendous challenges to traditional recommender systems. In this paper, we
investigate the problem of recommendation with stream inputs. In particular, we
provide a principled framework termed sRec, which provides explicit
continuous-time random process models of the creation of users and topics, and
of the evolution of their interests. A variational Bayesian approach called
recursive meanfield approximation is proposed, which permits computationally
efficient instantaneous on-line inference. Experimental results on several
real-world datasets demonstrate the advantages of our sRec over other
state-of-the-arts.
|
[
"Shiyu Chang, Yang Zhang, Jiliang Tang, Dawei Yin, Yi Chang, Mark A.\n Hasegawa-Johnson, Thomas S. Huang",
"['Shiyu Chang' 'Yang Zhang' 'Jiliang Tang' 'Dawei Yin' 'Yi Chang'\n 'Mark A. Hasegawa-Johnson' 'Thomas S. Huang']"
] |
cs.LG
|
10.1109/DSAA.2015.7344863
|
1607.0619
| null | null | null | null | null |
An ensemble of machine learning and anti-learning methods for predicting
tumour patient survival rates
|
This paper primarily addresses a dataset relating to cellular, chemical and
physical conditions of patients gathered at the time they are operated upon to
remove colorectal tumours. This data provides a unique insight into the
biochemical and immunological status of patients at the point of tumour removal
along with information about tumour classification and post-operative survival.
The relationship between severity of tumour, based on TNM staging, and survival
is still unclear for patients with TNM stage 2 and 3 tumours. We ask whether it
is possible to predict survival rate more accurately using a selection of
machine learning techniques applied to subsets of data to gain a deeper
understanding of the relationships between a patient's biochemical markers and
survival. We use a range of feature selection and single classification
techniques to predict the 5 year survival rate of TNM stage 2 and 3 patients
which initially produces less than ideal results. The performance of each model
individually is then compared with subsets of the data where agreement is
reached for multiple models. This novel method of selective ensembling
demonstrates that significant improvements in model accuracy on an unseen test
set can be achieved for patients where agreement between models is achieved.
Finally we point at a possible method to identify whether a patients prognosis
can be accurately predicted or not.
|
[
"Christopher Roadknight, Durga Suryanarayanan, Uwe Aickelin, John\n Scholefield, Lindy Durrant"
] |
null | null |
1607.06190
| null | null |
http://arxiv.org/abs/1607.06190v1
|
2016-07-21T04:57:16Z
|
2016-07-21T04:57:16Z
|
An ensemble of machine learning and anti-learning methods for predicting
tumour patient survival rates
|
This paper primarily addresses a dataset relating to cellular, chemical and physical conditions of patients gathered at the time they are operated upon to remove colorectal tumours. This data provides a unique insight into the biochemical and immunological status of patients at the point of tumour removal along with information about tumour classification and post-operative survival. The relationship between severity of tumour, based on TNM staging, and survival is still unclear for patients with TNM stage 2 and 3 tumours. We ask whether it is possible to predict survival rate more accurately using a selection of machine learning techniques applied to subsets of data to gain a deeper understanding of the relationships between a patient's biochemical markers and survival. We use a range of feature selection and single classification techniques to predict the 5 year survival rate of TNM stage 2 and 3 patients which initially produces less than ideal results. The performance of each model individually is then compared with subsets of the data where agreement is reached for multiple models. This novel method of selective ensembling demonstrates that significant improvements in model accuracy on an unseen test set can be achieved for patients where agreement between models is achieved. Finally we point at a possible method to identify whether a patients prognosis can be accurately predicted or not.
|
[
"['Christopher Roadknight' 'Durga Suryanarayanan' 'Uwe Aickelin'\n 'John Scholefield' 'Lindy Durrant']"
] |
cs.DS cs.LG
| null |
1607.06203
| null | null |
http://arxiv.org/pdf/1607.06203v1
|
2016-07-21T06:04:36Z
|
2016-07-21T06:04:36Z
|
Greedy bi-criteria approximations for $k$-medians and $k$-means
|
This paper investigates the following natural greedy procedure for clustering
in the bi-criterion setting: iteratively grow a set of centers, in each round
adding the center from a candidate set that maximally decreases clustering
cost. In the case of $k$-medians and $k$-means, the key results are as follows.
$\bullet$ When the method considers all data points as candidate centers,
then selecting $\mathcal{O}(k\log(1/\varepsilon))$ centers achieves cost at
most $2+\varepsilon$ times the optimal cost with $k$ centers.
$\bullet$ Alternatively, the same guarantees hold if each round samples
$\mathcal{O}(k/\varepsilon^5)$ candidate centers proportionally to their
cluster cost (as with $\texttt{kmeans++}$, but holding centers fixed).
$\bullet$ In the case of $k$-means, considering an augmented set of
$n^{\lceil1/\varepsilon\rceil}$ candidate centers gives $1+\varepsilon$
approximation with $\mathcal{O}(k\log(1/\varepsilon))$ centers, the entire
algorithm taking
$\mathcal{O}(dk\log(1/\varepsilon)n^{1+\lceil1/\varepsilon\rceil})$ time, where
$n$ is the number of data points in $\mathbb{R}^d$.
$\bullet$ In the case of Euclidean $k$-medians, generating a candidate set
via $n^{\mathcal{O}(1/\varepsilon^2)}$ executions of stochastic gradient
descent with adaptively determined constraint sets will once again give
approximation $1+\varepsilon$ with $\mathcal{O}(k\log(1/\varepsilon))$ centers
in $dk\log(1/\varepsilon)n^{\mathcal{O}(1/\varepsilon^2)}$ time.
Ancillary results include: guarantees for cluster costs based on powers of
metrics; a brief, favorable empirical evaluation against $\texttt{kmeans++}$;
data-dependent bounds allowing $1+\varepsilon$ in the first two bullets above,
for example with $k$-medians over finite metric spaces.
|
[
"Daniel Hsu and Matus Telgarsky",
"['Daniel Hsu' 'Matus Telgarsky']"
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.