title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---
Proximal Quasi-Newton Methods for Regularized Convex Optimization with
Linear and Accelerated Sublinear Convergence Rates | cs.NA cs.LG math.OC stat.ML | In [19], a general, inexact, efficient proximal quasi-Newton algorithm for
composite optimization problems has been proposed and a sublinear global
convergence rate has been established. In this paper, we analyze the
convergence properties of this method, both in the exact and inexact setting,
in the case when the objective function is strongly convex. We also investigate
a practical variant of this method by establishing a simple stopping criterion
for the subproblem optimization. Furthermore, we consider an accelerated
variant, based on FISTA [1], of the proximal quasi-Newton algorithm. A similar
accelerated method has been considered in [7], where the convergence rate
analysis relies on very strong, impractical assumptions. We present a modified
analysis while relaxing these assumptions and perform a practical comparison of
the accelerated proximal quasi-Newton algorithm and the regular one. Our
analysis and computational results show that acceleration may not bring any
benefit in the quasi-Newton setting.
| Hiva Ghanbari, Katya Scheinberg | null | 1607.03081 | null | null |
Kernel-based methods for bandit convex optimization | cs.LG cs.DS math.OC stat.ML | We consider the adversarial convex bandit problem and we build the first
$\mathrm{poly}(T)$-time algorithm with $\mathrm{poly}(n) \sqrt{T}$-regret for
this problem. To do so we introduce three new ideas in the derivative-free
optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli
convolutions, and (iii) a new annealing schedule for exponential weights (with
increasing learning rate). The basic version of our algorithm achieves
$\tilde{O}(n^{9.5} \sqrt{T})$-regret, and we show that a simple variant of this
algorithm can be run in $\mathrm{poly}(n \log(T))$-time per step at the cost of
an additional $\mathrm{poly}(n) T^{o(1)}$ factor in the regret. These results
improve upon the $\tilde{O}(n^{11} \sqrt{T})$-regret and
$\exp(\mathrm{poly}(T))$-time result of the first two authors, and the
$\log(T)^{\mathrm{poly}(n)} \sqrt{T}$-regret and
$\log(T)^{\mathrm{poly}(n)}$-time result of Hazan and Li. Furthermore we
conjecture that another variant of the algorithm could achieve
$\tilde{O}(n^{1.5} \sqrt{T})$-regret, and moreover that this regret is
unimprovable (the current best lower bound being $\Omega(n \sqrt{T})$ and it is
achieved with linear functions). For the simpler situation of zeroth order
stochastic convex optimization this corresponds to the conjecture that the
optimal query complexity is of order $n^3 / \epsilon^2$.
| S\'ebastien Bubeck and Ronen Eldan and Yin Tat Lee | null | 1607.03084 | null | null |
Recurrent Memory Array Structures | cs.LG cs.NE | The following report introduces ideas augmenting standard Long Short Term
Memory (LSTM) architecture with multiple memory cells per hidden unit in order
to improve its generalization capabilities. It considers both deterministic and
stochastic variants of memory operation. It is shown that the nondeterministic
Array-LSTM approach improves state-of-the-art performance on character level
text prediction, achieving 1.402 BPC on the enwik8 dataset. Furthermore, this report
establishes baseline neural-based results of 1.12 BPC and 1.19 BPC for the enwik9
and enwik10 datasets respectively.
| Kamil Rocki | null | 1607.03085 | null | null |
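The multiple-memory-cells-per-unit idea is simple to prototype. Below is a minimal numpy sketch of one deterministic Array-LSTM-style step; the gating layout, parameter shapes, and the summation of per-array outputs are our illustrative assumptions, not the report's exact equations.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def array_lstm_step(x, h, cells, params):
    """One step with k memory cells ('arrays') per hidden layer.
    cells: list of k state vectors; params[j] = (Wj, Uj, bj) with
    Wj: (4n, d), Uj: (4n, n), bj: (4n,). Layout is illustrative."""
    outs, new_cells = [], []
    for c_j, (Wj, Uj, bj) in zip(cells, params):
        z = Wj @ x + Uj @ h + bj
        i, f, o, g = np.split(z, 4)           # input/forget/output/candidate
        c = sigmoid(f) * c_j + sigmoid(i) * np.tanh(g)
        outs.append(sigmoid(o) * np.tanh(c))  # per-array output
        new_cells.append(c)
    return np.sum(outs, axis=0), new_cells    # hidden state sums the arrays
```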
Stream-based Online Active Learning in a Contextual Multi-Armed Bandit
Framework | cs.LG | We study the stream-based online active learning in a contextual multi-armed
bandit framework. In this framework, the reward depends on both the arm and the
context. In a stream-based active learning setting, obtaining the ground truth
of the reward is costly, and the conventional contextual multi-armed bandit
algorithm fails to achieve a sublinear regret due to this cost. Hence, the
algorithm needs to determine whether or not to request the ground truth of the
reward at current time slot. In our framework, we consider a stream-based
active learning setting in which a query request for the ground truth is sent
to the annotator, together with some prior information of the ground truth.
Depending on the accuracy of the prior information, the query cost varies. Our
algorithm mainly carries out two operations: the refinement of the context and
arm spaces and the selection of actions. In our algorithm, the partitions of
the context space and the arm space are maintained for a certain number of time slots,
and then become finer as more information about the rewards accumulates. We use
a strategic way to select the arms and to request the ground truth of the
reward, aiming to maximize the total reward. We analytically show that the
regret is sublinear and of the same order as that of conventional
contextual multi-armed bandit algorithms in which no query cost is incurred.
| Linqi Song | null | 1607.03182 | null | null |
How to calculate partition functions using convex programming
hierarchies: provable bounds for variational methods | cs.LG cs.DS stat.ML | We consider the problem of approximating partition functions for Ising
models. We make use of recent tools in combinatorial optimization: the
Sherali-Adams and Lasserre convex programming hierarchies, in combination with
variational methods to get algorithms for calculating partition functions in
these families. These techniques give new, non-trivial approximation guarantees
for the partition function beyond the regime of correlation decay. They also
generalize some classical results from statistical physics about the
Curie-Weiss ferromagnetic Ising model, as well as provide a partition function
counterpart of classical results about max-cut on dense graphs
\cite{arora1995polynomial}. With this, we connect techniques from two
apparently disparate research areas -- optimization and counting/partition
function approximation (i.e., \#P-type problems).
Furthermore, we design, to the best of our knowledge, the first provable,
convex variational methods. Though the literature contains a host of convex
versions of variational methods \cite{wainwright2003tree, wainwright2005new,
heskes2006convexity, meshi2009convexifying}, they come with no guarantees
(apart from some extremely special cases, like e.g. the graph has a single
cycle \cite{weiss2000correctness}). We consider dense and low threshold rank
graphs, and interestingly, the reason our approach works on these types of
graphs is because local correlations propagate to global correlations --
completely the opposite of algorithms based on correlation decay. In the
process we design novel entropy approximations based on the low-order moments
of a distribution.
Our proof techniques are very simple and generic, and likely to be applicable
to many other settings other than Ising models.
| Andrej Risteski | null | 1607.03183 | null | null |
On Deterministic Conditions for Subspace Clustering under Missing Data | cs.IT cs.LG math.IT stat.ML | In this paper we present deterministic conditions for success of sparse
subspace clustering (SSC) under missing data, when data is assumed to come from
a Union of Subspaces (UoS) model. We consider two algorithms, which are
variants of SSC with entry-wise zero-filling that differ in terms of the
optimization problems used to find the affinity matrix for spectral clustering. For
both the algorithms, we provide deterministic conditions for any pattern of
missing data such that perfect clustering can be achieved. We provide extensive
sets of simulation results for clustering as well as completion of data at
missing entries, under the UoS model. Our experimental results indicate that in
contrast to the full data case, accurate clustering does not imply accurate
subspace identification and completion, indicating the natural order of
relative hardness of these problems.
| Wenqi Wang and Shuchin Aeron and Vaneet Aggarwal | null | 1607.03191 | null | null |
Multi-Step Bayesian Optimization for One-Dimensional Feasibility
Determination | math.OC cs.LG stat.CO | Bayesian optimization methods allocate limited sampling budgets to maximize
expensive-to-evaluate functions. One-step-lookahead policies are often used,
but computing optimal multi-step-lookahead policies remains a challenge. We
consider a specialized Bayesian optimization problem: finding the superlevel
set of an expensive one-dimensional function, with a Markov process prior. We
compute the Bayes-optimal sampling policy efficiently, and characterize the
suboptimality of one-step lookahead. Our numerical experiments demonstrate that
the one-step lookahead policy is close to optimal in this problem, performing
within 98% of optimal in the experimental settings considered.
| J. Massey Cashore, Lemuel Kumarga, Peter I. Frazier | null | 1607.03195 | null | null |
Information Projection and Approximate Inference for Structured Sparse
Variables | stat.ML cs.LG | Approximate inference via information projection has been recently introduced
as a general-purpose approach for efficient probabilistic inference given
sparse variables. This manuscript goes beyond classical sparsity by proposing
efficient algorithms for approximate inference via information projection that
are applicable to any structure on the set of variables that admits enumeration
using a \emph{matroid}. We show that the resulting information projection can
be reduced to combinatorial submodular optimization subject to matroid
constraints. Further, leveraging recent advances in submodular optimization, we
provide an efficient greedy algorithm with strong optimization-theoretic
guarantees. The class of probabilistic models that can be expressed in this way
is quite broad and, as we show, includes group sparse regression, group sparse
principal components analysis and sparse canonical correlation analysis, among
others. Moreover, empirical results on simulated data and high dimensional
neuroimaging data highlight the superior performance of the information
projection approach as compared to established baselines for a range of
probabilistic models.
| Rajiv Khanna, Joydeep Ghosh, Russell Poldrack, Oluwasanmi Koyejo | null | 1607.03204 | null | null |
Network Trimming: A Data-Driven Neuron Pruning Approach towards
Efficient Deep Architectures | cs.NE cs.CV cs.LG | State-of-the-art neural networks are getting deeper and wider. While their
performance increases with the increasing number of layers and neurons, it is
crucial to design an efficient deep architecture in order to reduce
computational and memory costs. Designing an efficient neural network, however,
is labor intensive, requiring many experiments and fine-tuning. In this paper,
we introduce network trimming which iteratively optimizes the network by
pruning unimportant neurons based on analysis of their outputs on a large
dataset. Our algorithm is inspired by an observation that the outputs of a
significant portion of neurons in a large network are mostly zero, regardless
of what inputs the network received. These zero activation neurons are
redundant, and can be removed without affecting the overall accuracy of the
network. After pruning the zero activation neurons, we retrain the network
using the weights before pruning as initialization. We alternate the pruning
and retraining to further reduce zero activations in a network. Our experiments
on LeNet and VGG-16 show that we can achieve a high compression ratio of
parameters without losing accuracy, and in some cases even achieve higher
accuracy than the original network.
| Hengyuan Hu, Rui Peng, Yu-Wing Tai, Chi-Keung Tang | null | 1607.03250 | null | null |
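The pruning criterion described above is easy to state in code. A minimal sketch, assuming post-ReLU activations collected over a large dataset; the function names and the 0.9 cutoff are illustrative choices, not values from the paper.

```python
import numpy as np

def apoz(activations):
    """Average fraction of zeros per neuron. `activations` is a
    (num_examples, num_neurons) matrix of post-ReLU outputs."""
    return (activations == 0).mean(axis=0)

def neurons_to_keep(activations, threshold=0.9):
    """Keep neurons that are zero less than `threshold` of the time.
    The 0.9 cutoff is an illustrative choice, not the paper's value."""
    return np.where(apoz(activations) < threshold)[0]

# The trimming loop then alternates: prune the selected neurons, retrain
# with the surviving pre-pruning weights as initialization, and repeat.
```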
Predicting the evolution of stationary graph signals | stat.ML cs.LG | An emerging way of tackling the dimensionality issues arising in the modeling
of a multivariate process is to assume that the inherent data structure can be
captured by a graph. Nevertheless, though state-of-the-art graph-based methods
have been successful for many learning tasks, they do not consider
time-evolving signals and thus are not suitable for prediction. Based on the
recently introduced joint stationarity framework for time-vertex processes,
this letter considers multivariate models that exploit the graph topology so as
to facilitate the prediction. The resulting method yields similar accuracy to
the joint (time-graph) mean-squared error estimator but at lower complexity,
and outperforms purely time-based methods.
| Andreas Loukas and Nathanael Perraudin | null | 1607.03313 | null | null |
DeepBinaryMask: Learning a Binary Mask for Video Compressive Sensing | cs.CV cs.LG | In this paper, we propose a novel encoder-decoder neural network model
referred to as DeepBinaryMask for video compressive sensing. In video
compressive sensing one frame is acquired using a set of coded masks (sensing
matrix) from which a number of video frames is reconstructed, equal to the
number of coded masks. The proposed framework is an end-to-end model where the
sensing matrix is trained along with the video reconstruction. The encoder
learns the binary elements of the sensing matrix and the decoder is trained to
recover the unknown video sequence. The reconstruction performance is found to
improve when using the trained sensing mask from the network as compared to
other mask designs such as random, across a wide variety of compressive sensing
reconstruction algorithms. Finally, our analysis and discussion offers insights
into understanding the characteristics of the trained mask designs that lead to
the improved reconstruction quality.
| Michael Iliadis, Leonidas Spinoulas, Aggelos K. Katsaggelos | null | 1607.03343 | null | null |
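The sensing model described above can be written down compactly. A sketch, assuming T frames modulated by per-frame binary masks and summed into a single snapshot; the random mask at the end is the kind of untrained baseline the learned masks are compared against.

```python
import numpy as np

def coded_measurement(frames, masks):
    """Temporal compressive sensing forward model: `frames` and `masks`
    are (T, H, W); one 2-D snapshot is the mask-gated sum over the T
    frames. DeepBinaryMask learns the binary `masks` end-to-end together
    with a decoder that inverts this map."""
    return (masks * frames).sum(axis=0)

# Example: 16 frames, random binary masks (a common untrained baseline).
T, H, W = 16, 8, 8
frames = np.random.rand(T, H, W)
masks = (np.random.rand(T, H, W) > 0.5).astype(float)
y = coded_measurement(frames, masks)     # single (H, W) measurement
```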
Approximate maximum entropy principles via Goemans-Williamson with
applications to provable variational methods | cs.LG cs.DS stat.ML | The well known maximum-entropy principle due to Jaynes, which states that
given mean parameters, the maximum entropy distribution matching them is in an
exponential family, has been very popular in machine learning due to its
"Occam's razor" interpretation. Unfortunately, calculating the potentials in
the maximum-entropy distribution is intractable \cite{bresler2014hardness}. We
provide computationally efficient versions of this principle when the mean
parameters are pairwise moments: we design distributions that approximately
match given pairwise moments, while having entropy which is comparable to the
maximum entropy distribution matching those moments.
We additionally provide surprising applications of the approximate maximum
entropy principle to designing provable variational methods for partition
function calculations for Ising models without any assumptions on the
potentials of the model. More precisely, we show that in every temperature, we
can get approximation guarantees for the log-partition function comparable to
those in the low-temperature limit, which is the setting of optimization of
quadratic forms over the hypercube \cite{alon2006approximating}.
| Yuanzhi Li, Andrej Risteski | null | 1607.03360 | null | null |
Parsimonious Mixed-Effects HodgeRank for Crowdsourced Preference
Aggregation | cs.HC cs.LG cs.MM | In crowdsourced preference aggregation, it is often assumed that all the
annotators are subject to a common preference or utility function which
generates their comparison behaviors in experiments. However, in reality
annotators are subject to variations due to multi-criteria, abnormal, or a
mixture of such behaviors. In this paper, we propose a parsimonious
mixed-effects model based on HodgeRank, which takes into account both the fixed
effect that the majority of annotators follows a common linear utility model,
and the random effect that a small subset of annotators might deviate from the
common significantly and exhibits strongly personalized preferences. HodgeRank
has been successfully applied to subjective quality evaluation of multimedia
and resolves pairwise crowdsourced ranking data into a global consensus ranking
and cyclic conflicts of interests. As an extension, our proposed methodology
further explores the conflicts of interests through the random effect in
annotator specific variations. The key algorithm in this paper establishes a
dynamic path from the common utility to individual variations, with different
levels of parsimony or sparsity on personalization, based on newly developed
Linearized Bregman Algorithms with Inverse Scale Space method. Finally, the
validity of the methodology is supported by experiments with both simulated
examples and three real-world crowdsourcing datasets, which show that our
proposed method exhibits better performance (i.e. smaller test error) compared
with HodgeRank due to its parsimonious property.
| Qianqian Xu, Jiechao Xiong, Xiaochun Cao, and Yuan Yao | null | 1607.03401 | null | null |
Learning in Quantum Control: High-Dimensional Global Optimization for
Noisy Quantum Dynamics | cs.LG cs.SY quant-ph stat.ML | Quantum control is valuable for various quantum technologies such as
high-fidelity gates for universal quantum computing, adaptive quantum-enhanced
metrology, and ultra-cold atom manipulation. Although supervised machine
learning and reinforcement learning are widely used for optimizing control
parameters in classical systems, quantum control for parameter optimization is
mainly pursued via gradient-based greedy algorithms. Although the quantum
fitness landscape is often compatible with greedy algorithms, sometimes greedy
algorithms yield poor results, especially for large-dimensional quantum
systems. We employ differential evolution algorithms to circumvent the
stagnation problem of non-convex optimization. We improve quantum control
fidelity for noisy systems by averaging over the objective function. To reduce
computational cost, we introduce heuristics for early termination of runs and
for adaptive selection of search subspaces. Our implementation is massively
parallel and vectorized to reduce run time even further. We demonstrate our
methods with two examples, namely quantum phase estimation and quantum gate
design, for which we achieve superior fidelity and scalability compared to
greedy algorithms.
| Pantita Palittapongarnpim, Peter Wittek, Ehsan Zahedinejad, Shakib
Vedaie, Barry C. Sanders | 10.1016/j.neucom.2016.12.087 | 1607.03428 | null | null |
Incomplete Pivoted QR-based Dimensionality Reduction | cs.LG stat.ML | High-dimensional big data appears in many research fields such as image
recognition, biology and collaborative filtering. Often, the exploration of
such data by classic algorithms encounters difficulties due to the `curse
of dimensionality' phenomenon. Therefore, dimensionality reduction methods are
applied to the data prior to its analysis. Many of these methods are based on
principal components analysis, which is statistically driven, namely they map
the data into a low-dimension subspace that preserves significant statistical
properties of the high-dimensional data. As a consequence, such methods do not
directly address the geometry of the data, reflected by the mutual distances
between multidimensional data points. Thus, operations such as classification,
anomaly detection or other machine learning tasks may be affected.
This work provides a dictionary-based framework for geometrically driven data
analysis that includes dimensionality reduction, out-of-sample extension and
anomaly detection. It embeds high-dimensional data in a low-dimensional
subspace. This embedding preserves the original high-dimensional geometry of
the data up to a user-defined distortion rate. In addition, it identifies a
subset of landmark data points that constitute a dictionary for the analyzed
dataset. The dictionary enables to have a natural extension of the
low-dimensional embedding to out-of-sample data points, which gives rise to a
distortion-based criterion for anomaly detection. The suggested method is
demonstrated on synthetic and real-world datasets and achieves good results for
classification, anomaly detection and out-of-sample tasks.
| Amit Bermanis, Aviv Rotbart, Moshe Salhov and Amir Averbuch | null | 1607.03456 | null | null |
LazySVD: Even Faster SVD Decomposition Yet Without Agonizing Pain | cs.NA cs.DS cs.LG math.OC stat.ML | We study $k$-SVD that is to obtain the first $k$ singular vectors of a matrix
$A$. Recently, a few breakthroughs have been discovered on $k$-SVD: Musco and
Musco [1] proved the first gap-free convergence result using the block Krylov
method, Shamir [2] discovered the first variance-reduction stochastic method,
and Bhojanapalli et al. [3] provided the fastest $O(\mathsf{nnz}(A) +
\mathsf{poly}(1/\varepsilon))$-time algorithm using alternating minimization.
In this paper, we put forward a new and simple LazySVD framework to improve
the above breakthroughs. This framework leads to a faster gap-free method
outperforming [1], and the first accelerated and stochastic method
outperforming [2]. In the $O(\mathsf{nnz}(A) + \mathsf{poly}(1/\varepsilon))$
running-time regime, LazySVD outperforms [3] in certain parameter regimes
without even using alternating minimization.
| Zeyuan Allen-Zhu, Yuanzhi Li | null | 1607.03463 | null | null |
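The "lazy" strategy of reducing k-SVD to k sequential rank-one problems plus deflation can be sketched as follows; plain power iteration stands in for the accelerated/stochastic 1-SVD solvers the paper actually plugs into the framework.

```python
import numpy as np

def top_singular_vector(M, iters=100):
    """Power iteration on M^T M as a stand-in 1-SVD subroutine.
    (LazySVD itself plugs in faster accelerated/stochastic solvers.)"""
    v = np.random.randn(M.shape[1])
    for _ in range(iters):
        v = M.T @ (M @ v)
        v /= np.linalg.norm(v)
    return v

def lazy_k_svd(A, k):
    """Compute k right singular vectors one at a time, deflating A
    after each -- the sequential strategy sketched from the abstract."""
    A = A.copy().astype(float)
    V = []
    for _ in range(k):
        v = top_singular_vector(A)
        V.append(v)
        A = A - np.outer(A @ v, v)   # remove the found direction
    return np.column_stack(V)
```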
Recurrent Highway Networks | cs.LG cs.CL cs.NE | Many sequential processing tasks require complex nonlinear transition
functions from one step to the next. However, recurrent neural networks with
'deep' transition functions remain difficult to train, even when using Long
Short-Term Memory (LSTM) networks. We introduce a novel theoretical analysis of
recurrent networks based on Gersgorin's circle theorem that illuminates several
modeling and optimization issues and improves our understanding of the LSTM
cell. Based on this analysis we propose Recurrent Highway Networks, which
extend the LSTM architecture to allow step-to-step transition depths larger
than one. Several language modeling experiments demonstrate that the proposed
architecture results in powerful and efficient models. On the Penn Treebank
corpus, solely increasing the transition depth from 1 to 10 improves word-level
perplexity from 90.6 to 65.4 using the same number of parameters. On the larger
Wikipedia datasets for character prediction (text8 and enwik8), RHNs outperform
all previous results and achieve an entropy of 1.27 bits per character.
| Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutn\'ik and
J\"urgen Schmidhuber | null | 1607.03474 | null | null |
Nystrom Method for Approximating the GMM Kernel | stat.ML cs.LG | The GMM (generalized min-max) kernel was recently proposed (Li, 2016) as a
measure of data similarity and was demonstrated effective in machine learning
tasks. In order to use the GMM kernel for large-scale datasets, the prior work
resorted to the (generalized) consistent weighted sampling (GCWS) to convert
the GMM kernel to a linear kernel. We call this approach ``GMM-GCWS''.
In the machine learning literature, there is a popular algorithm which we
call ``RBF-RFF''. That is, one can use the ``random Fourier features'' (RFF) to
convert the ``radial basis function'' (RBF) kernel to a linear kernel. It was
empirically shown in (Li, 2016) that RBF-RFF typically requires substantially
more samples than GMM-GCWS in order to achieve comparable accuracies.
The Nystrom method is a general tool for computing nonlinear kernels, which
again converts nonlinear kernels into linear kernels. We apply the Nystrom
method for approximating the GMM kernel, a strategy which we name
``GMM-NYS''. In this study, our extensive experiments on a set of fairly large
datasets confirm that GMM-NYS is also a strong competitor of RBF-RFF.
| Ping Li | null | 1607.03475 | null | null |
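For concreteness, here is a small sketch of the GMM kernel (following the transform in Li, 2016) together with a generic Nystrom feature map of the kind the paper applies to it; landmark selection and any scaling choices are left to the caller.

```python
import numpy as np

def gmm_kernel(x, y):
    """Generalized min-max kernel: split each vector into positive and
    negative parts, then take sum-min over sum-max."""
    u = np.concatenate([np.maximum(x, 0), np.maximum(-x, 0)])
    v = np.concatenate([np.maximum(y, 0), np.maximum(-y, 0)])
    den = np.maximum(u, v).sum()
    return np.minimum(u, v).sum() / den if den > 0 else 0.0

def nystrom_features(X, landmarks, kernel=gmm_kernel):
    """Rank-m Nystrom feature map: K_nm @ K_mm^{-1/2}."""
    K_nm = np.array([[kernel(x, l) for l in landmarks] for x in X])
    K_mm = np.array([[kernel(a, b) for b in landmarks] for a in landmarks])
    w, U = np.linalg.eigh(K_mm)
    w = np.maximum(w, 1e-12)              # guard tiny/negative eigenvalues
    return K_nm @ U @ np.diag(1.0 / np.sqrt(w)) @ U.T
```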
Deep Reconstruction-Classification Networks for Unsupervised Domain
Adaptation | cs.CV cs.AI cs.LG stat.ML | In this paper, we propose a novel unsupervised domain adaptation algorithm
based on deep learning for visual object recognition. Specifically, we design a
new model called Deep Reconstruction-Classification Network (DRCN), which
jointly learns a shared encoding representation for two tasks: i) supervised
classification of labeled source data, and ii) unsupervised reconstruction of
unlabeled target data. In this way, the learnt representation not only preserves
discriminability, but also encodes useful information from the target domain.
Our new DRCN model can be optimized by using backpropagation similarly as the
standard neural networks.
We evaluate the performance of DRCN on a series of cross-domain object
recognition tasks, where DRCN provides a considerable improvement (up to ~8% in
accuracy) over the prior state-of-the-art algorithms. Interestingly, we also
observe that the reconstruction pipeline of DRCN transforms images from the
source domain into images whose appearance resembles the target dataset. This
suggests that DRCN's performance is due to constructing a single composite
representation that encodes information about both the structure of target
images and the classification of source images. Finally, we provide a formal
analysis to justify the algorithm's objective in domain adaptation context.
| Muhammad Ghifary and W. Bastiaan Kleijn and Mengjie Zhang and David
Balduzzi and Wen Li | null | 1607.03516 | null | null |
Improved Multi-Class Cost-Sensitive Boosting via Estimation of the
Minimum-Risk Class | cs.CV cs.LG | We present a simple unified framework for multi-class cost-sensitive
boosting. The minimum-risk class is estimated directly, rather than via an
approximation of the posterior distribution. Our method jointly optimizes
binary weak learners and their corresponding output vectors, requiring classes
to share features at each iteration. By training in a cost-sensitive manner,
weak learners are invested in separating classes whose discrimination is
important, at the expense of less relevant classification boundaries.
Additional contributions are a family of loss functions along with proof that
our algorithm is Boostable in the theoretical sense, as well as an efficient
procedure for growing decision trees for use as weak learners. We evaluate our
method on a variety of datasets: a collection of synthetic planar data, common
UCI datasets, MNIST digits, SUN scenes, and CUB-200 birds. Results show
state-of-the-art performance across all datasets against several strong
baselines, including non-boosting multi-class approaches.
| Ron Appel, Xavier Burgos-Artizzu, Pietro Perona | null | 1607.03547 | null | null |
Fast Sampling for Strongly Rayleigh Measures with Application to
Determinantal Point Processes | cs.LG cs.DS math.PR stat.ML | In this note we consider sampling from (non-homogeneous) strongly Rayleigh
probability measures. As an important corollary, we obtain a fast mixing Markov
Chain sampler for Determinantal Point Processes.
| Chengtao Li, Stefanie Jegelka, Suvrit Sra | null | 1607.03559 | null | null |
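For intuition, below is a naive Metropolis exchange chain for a k-DPP with kernel L -- the kind of chain whose mixing is analyzed for strongly Rayleigh measures. This is a sketch under our own simplifications: determinants are recomputed from scratch, which a practical sampler would avoid.

```python
import numpy as np

def kdpp_exchange_chain(L, k, steps=1000, rng=None):
    """Swap one in-set element for one out-of-set element per step,
    accepting with probability min(1, det(L_T) / det(L_S))."""
    rng = rng or np.random.default_rng()
    n = L.shape[0]
    S = list(rng.choice(n, size=k, replace=False))
    det_S = np.linalg.det(L[np.ix_(S, S)])
    for _ in range(steps):
        i = rng.integers(k)                       # position to drop
        j = int(rng.choice([v for v in range(n) if v not in S]))
        T = S.copy(); T[i] = j                    # proposed swap
        det_T = np.linalg.det(L[np.ix_(T, T)])
        if rng.random() < min(1.0, det_T / max(det_S, 1e-300)):
            S, det_S = T, det_T
    return S
```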
Estimating Uncertainty Online Against an Adversary | cs.LG | Assessing uncertainty is an important step towards ensuring the safety and
reliability of machine learning systems. Existing uncertainty estimation
techniques may fail when their modeling assumptions are not met, e.g. when the
data distribution differs from the one seen at training time. Here, we propose
techniques that assess a classification algorithm's uncertainty via calibrated
probabilities (i.e. probabilities that match empirical outcome frequencies in
the long run) and which are guaranteed to be reliable (i.e. accurate and
calibrated) on out-of-distribution input, including input generated by an
adversary. This represents an extension of classical online learning that
handles uncertainty in addition to guaranteeing accuracy under adversarial
assumptions. We establish formal guarantees for our methods, and we validate
them on two real-world problems: question answering and medical diagnosis from
genomic data.
| Volodymyr Kuleshov and Stefano Ermon | null | 1607.03594 | null | null |
Characterizing Driving Styles with Deep Learning | cs.AI cs.LG | Characterizing driving styles of human drivers using vehicle sensor data,
e.g., GPS, is an interesting research problem and an important real-world
requirement from automotive industries. A good representation of driving
features can be highly valuable for autonomous driving, auto insurance, and
many other application scenarios. However, traditional methods mainly rely on
handcrafted features, which limit machine learning algorithms to achieve a
better performance. In this paper, we propose a novel deep learning solution to
this problem, which could be the first attempt to extend deep learning to
driving behavior analysis based on GPS data. The proposed approach can
effectively extract high level and interpretable features describing complex
driving patterns. It also requires significantly less human experience and
work. The power of the learned driving style representations is validated
through the driver identification problem using a large real dataset.
| Weishan Dong, Jian Li, Renjie Yao, Changsheng Li, Ting Yuan, Lanjun
Wang | null | 1607.03611 | null | null |
San Francisco Crime Classification | cs.LG | San Francisco Crime Classification is an online competition administered by
Kaggle Inc. The competition aims at predicting the future crimes based on a
given set of geographical and time-based features. In this paper, I achieved
an accuracy that ranks in the top 18%, as of May 19th, 2016. I will explore the
data and explain in detail the tools I used to achieve that result.
| Yehya Abouelnaga | null | 1607.03626 | null | null |
Unsupervised Feature Learning Based on Deep Models for Environmental
Audio Tagging | cs.SD cs.CV cs.LG | Environmental audio tagging aims to predict only the presence or absence of
certain acoustic events in the interested acoustic scene. In this paper we make
contributions to audio tagging in two parts, respectively, acoustic modeling
and feature learning. We propose to use a shrinking deep neural network (DNN)
framework incorporating unsupervised feature learning to handle the multi-label
classification task. For the acoustic modeling, a large set of contextual
frames of the chunk are fed into the DNN to perform a multi-label
classification for the expected tags, considering that only chunk (or
utterance) level rather than frame-level labels are available. Dropout and
background noise aware training are also adopted to improve the generalization
capability of the DNNs. For the unsupervised feature learning, we propose to
use a symmetric or asymmetric deep de-noising auto-encoder (sDAE or aDAE) to
generate new data-driven features from the Mel-Filter Banks (MFBs) features.
The new features, which are smoothed against background noise and more compact
with contextual information, can further improve the performance of the DNN
baseline. Compared with the standard Gaussian Mixture Model (GMM) baseline of
the DCASE 2016 audio tagging challenge, our proposed method obtains a
significant equal error rate (EER) reduction from 0.21 to 0.13 on the
development set. The proposed aDAE system can get a relative 6.7% EER reduction
compared with the strong DNN baseline on the development set. Finally, the
results also show that our approach obtains the state-of-the-art performance
with 0.15 EER on the evaluation set of the DCASE 2016 audio tagging task while
EER of the first prize of this challenge is 0.17.
| Yong Xu, Qiang Huang, Wenwu Wang, Peter Foster, Siddharth Sigtia,
Philip J. B. Jackson, and Mark D. Plumbley | 10.1109/TASLP.2017.2690563 | 1607.03681 | null | null |
Hierarchical learning for DNN-based acoustic scene classification | cs.SD cs.CV cs.LG | In this paper, we present a deep neural network (DNN)-based acoustic scene
classification framework. Two hierarchical learning methods are proposed to
improve the DNN baseline performance by incorporating the hierarchical taxonomy
information of environmental sounds. Firstly, the parameters of the DNN are
initialized by the proposed hierarchical pre-training. Multi-level objective
function is then adopted to add more constraint on the cross-entropy based loss
function. A series of experiments were conducted on the Task1 of the Detection
and Classification of Acoustic Scenes and Events (DCASE) 2016 challenge. The
final DNN-based system achieved a 22.9% relative improvement on average scene
classification error as compared with the Gaussian Mixture Model (GMM)-based
benchmark system across four standard folds.
| Yong Xu, Qiang Huang, Wenwu Wang, Mark D. Plumbley | null | 1607.03682 | null | null |
Sequential Cost-Sensitive Feature Acquisition | cs.LG | We propose a reinforcement learning based approach to tackle the
cost-sensitive learning problem where each input feature has a specific cost.
The acquisition process is handled through a stochastic policy which allows
features to be acquired in an adaptive way. The general architecture of our
approach relies on representation learning to enable performing prediction on
any partially observed sample, whatever its set of observed features is.
The resulting model is an original mix of representation learning and of
reinforcement learning ideas. It is learned with policy gradient techniques to
minimize a budgeted inference cost. We demonstrate the effectiveness of our
proposed method with several experiments on a variety of datasets for the
sparse prediction problem where all features have the same cost, but also for
some cost-sensitive settings.
| Gabriella Contardo, Ludovic Denoyer, Thierry Arti\`eres | null | 1607.03691 | null | null |
Possibilistic Networks: Parameters Learning from Imprecise Data and
Evaluation strategy | cs.AI cs.LG | There has been an ever-increasing interest in multidisciplinary research on
representing and reasoning with imperfect data. Possibilistic networks present
one of the powerful frameworks of interest for representing uncertain and
imprecise information. This paper covers the problem of their parameters
learning from imprecise datasets, i.e., containing multi-valued data. We
propose in the first part of this paper a possibilistic network sampling
process. In the second part, we propose a likelihood function which explores
the link between random sets theory and possibility theory. This function is
then deployed to parametrize possibilistic networks.
| Maroua Haddad (LINA, LARODEC), Philippe Leray (LINA), Nahla Ben Amor
(LARODEC) | null | 1607.03705 | null | null |
Re-presenting a Story by Emotional Factors using Sentimental Analysis
Method | cs.CL cs.LG | Remembering an event is affected by personal emotional status. We examined
the psychological status and personal factors; depression (Center for
Epidemiological Studies - Depression, Radloff, 1977), present affective
(Positive Affective and Negative Affective Schedule, Watson et al., 1988), life
orient (Life Orient Test, Scheier & Carver, 1985), self-awareness (Core Self
Evaluation Scale, Judge et al., 2003), and social factor (Social Support,
Sarason et al., 1983) of undergraduate students (N=64) and got summaries of a
story, Chronicle of a Death Foretold (Gabriel Garcia Marquez, 1981) from them.
We implement a sentimental analysis model based on convolutional neural network
(LeCun & Bengio, 1995) to evaluate each summary. In the same vein as
transfer learning (Pan & Yang, 2010), we collected 38,265 movie reviews to
train the model and then used it to score each student's summary. The
results of CES-D and PANAS show the relationship between emotion and memory
retrieval as follows: depressed people have shown a tendency of representing a
story more negatively, and they seemed less expressive. People full of
emotion - high in PANAS - retrieved their memories more expressively than
others, using more negative words. The contributions of this study
can be summarized as follows: First, shedding light on the relationship between
emotion and its effect when storing or retrieving a memory. Second,
suggesting objective methods to evaluate the intensity of emotion in natural
language format, using a sentimental analysis model.
| Hwiyeol Jo, Yohan Moon, Jong In Kim, and Jeong Ryu | null | 1607.03707 | null | null |
Learning Shallow Detection Cascades for Wearable Sensor-Based Mobile
Health Applications | stat.ML cs.LG | The field of mobile health aims to leverage recent advances in wearable
on-body sensing technology and smart phone computing capabilities to develop
systems that can monitor health states and deliver just-in-time adaptive
interventions. However, existing work has largely focused on analyzing
collected data in the off-line setting. In this paper, we propose a novel
approach to learning shallow detection cascades developed explicitly for use in
real-time wearable-phone or wearable-phone-cloud systems. We apply our
approach to the problem of cigarette smoking detection from a combination of
wrist-worn actigraphy data and respiration chest band data using two and three
stage cascades.
| Hamid Dadkhahi, Nazir Saleheen, Santosh Kumar, Benjamin Marlin | null | 1607.03730 | null | null |
A Vector Space for Distributional Semantics for Entailment | cs.CL cs.LG | Distributional semantics creates vector-space representations that capture
many forms of semantic similarity, but their relation to semantic entailment
has been less clear. We propose a vector-space model which provides a formal
foundation for a distributional semantics of entailment. Using a mean-field
approximation, we develop approximate inference procedures and entailment
operators over vectors of probabilities of features being known (versus
unknown). We use this framework to reinterpret an existing
distributional-semantic model (Word2Vec) as approximating an entailment-based
model of the distributions of words in contexts, thereby predicting lexical
entailment relations. In both unsupervised and semi-supervised experiments on
hyponymy detection, we get substantial improvements over previous results.
| James Henderson and Diana Nicoleta Popa | null | 1607.03780 | null | null |
Feature Extraction and Automated Classification of Heartbeats by Machine
Learning | stat.ML cs.LG | We present algorithms for the detection of a class of heart arrhythmias with
the goal of eventual adoption by practicing cardiologists. In clinical
practice, detection is based on a small number of meaningful features extracted
from the heartbeat cycle. However, techniques proposed in the literature use
high dimensional vectors consisting of morphological, and time based features
for detection. Using electrocardiogram (ECG) signals, we found smaller subsets
of features sufficient to detect arrhythmias with high accuracy. The features
were found by an iterative step-wise feature selection method. We depart from
common literature in the following aspects: 1. As opposed to high dimensional
feature vectors, we use a small set of features with meaningful clinical
interpretation, 2. we eliminate the necessity of short-duration
patient-specific ECG data to append to the global training data for
classification 3. We apply semi-parametric classification procedures (in an
ensemble framework) for arrhythmia detection, and 4. our approach is based on a
reduced sampling rate of ~ 115 Hz as opposed to 360 Hz in standard literature.
| Choudur Lakshminarayan and Tony Basil | null | 1607.03822 | null | null |
The KIT Motion-Language Dataset | cs.RO cs.CL cs.CV cs.LG | Linking human motion and natural language is of great interest for the
generation of semantic representations of human activities as well as for the
generation of robot activities based on natural language input. However, while
there have been years of research in this area, no standardized and openly
available dataset exists to support the development and evaluation of such
systems. We therefore propose the KIT Motion-Language Dataset, which is large,
open, and extensible. We aggregate data from multiple motion capture databases
and include them in our dataset using a unified representation that is
independent of the capture system or marker set, making it easy to work with
the data regardless of its origin. To obtain motion annotations in natural
language, we apply a crowd-sourcing approach and a web-based tool that was
specifically built for this purpose, the Motion Annotation Tool. We thoroughly
document the annotation process itself and discuss gamification methods that we
used to keep annotators motivated. We further propose a novel method,
perplexity-based selection, which systematically selects motions for further
annotation that are either under-represented in our dataset or that have
erroneous annotations. We show that our method mitigates the two aforementioned
problems and ensures a systematic annotation process. We provide an in-depth
analysis of the structure and contents of our resulting dataset, which, as of
October 10, 2016, contains 3911 motions with a total duration of 11.23 hours
and 6278 annotations in natural language that contain 52,903 words. We believe
this makes our dataset an excellent choice that enables more transparent and
comparable research in this important area.
| Matthias Plappert, Christian Mandery, Tamim Asfour | 10.1089/big.2016.0028 | 1607.03827 | null | null |
Fitting a Simplicial Complex using a Variation of k-means | cs.LG cs.CG stat.ML | We give a simple and effective two stage algorithm for approximating a point
cloud $\mathcal{S}\subset\mathbb{R}^m$ by a simplicial complex $K$. The first
stage is an iterative fitting procedure that generalizes k-means clustering,
while the second stage involves deleting redundant simplices. A form of
dimension reduction of $\mathcal{S}$ is obtained as a consequence.
| Piotr Beben | null | 1607.03849 | null | null |
Concatenated image completion via tensor augmentation and completion | cs.LG cs.CV cs.DS | This paper proposes a novel framework called concatenated image completion
via tensor augmentation and completion (ICTAC), which recovers missing entries
of color images with high accuracy. Typical images are second- or third-order
tensors (2D/3D) depending on whether they are grayscale or color, hence tensor
completion algorithms are ideal for their recovery. The proposed framework
performs image completion by concatenating copies of a single image that has
missing entries into a third-order tensor, applying a dimensionality
augmentation technique to the tensor, utilizing a tensor completion algorithm
for recovering its missing entries, and finally extracting the recovered image
from the tensor. The solution relies on two key components that have been
recently proposed to take advantage of the tensor train (TT) rank: A tensor
augmentation tool called ket augmentation (KA) that represents a low-order
tensor by a higher-order tensor, and the algorithm tensor completion by
parallel matrix factorization via tensor train (TMac-TT), which has been
demonstrated to outperform state-of-the-art tensor completion algorithms.
Simulation results for color image recovery show the clear advantage of our
framework against current state-of-the-art tensor completion algorithms.
| Johann A. Bengua, Hoang D. Tuan, Ho N. Phien, Minh N. Do | null | 1607.03967 | null | null |
Fast Algorithms for Segmented Regression | cs.LG cs.DS math.ST stat.TH | We study the fixed design segmented regression problem: Given noisy samples
from a piecewise linear function $f$, we want to recover $f$ up to a desired
accuracy in mean-squared error.
Previous rigorous approaches for this problem rely on dynamic programming
(DP) and, while sample efficient, have running time quadratic in the sample
size. As our main contribution, we provide new sample near-linear time
algorithms for the problem that -- while not being minimax optimal -- achieve a
significantly better sample-time tradeoff on large datasets compared to the DP
approach. Our experimental evaluation shows that, compared with the DP
approach, our algorithms provide a convergence rate that is only off by a
factor of $2$ to $4$, while achieving speedups of three orders of magnitude.
| Jayadev Acharya, Ilias Diakonikolas, Jerry Li, Ludwig Schmidt | null | 1607.03990 | null | null |
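As a reference point, the dynamic-programming baseline the paper's near-linear algorithms compete against looks roughly like this. A deliberately naive sketch: each cell refits a segment from scratch, whereas the quadratic-time DP precomputes segment errors.

```python
import numpy as np

def segment_error(x, y, j, i):
    """Least-squares error of one line fit to points j..i-1."""
    A = np.column_stack([x[j:i], np.ones(i - j)])
    coef, *_ = np.linalg.lstsq(A, y[j:i], rcond=None)
    return ((y[j:i] - A @ coef) ** 2).sum()

def dp_segmented_regression(x, y, k):
    """dp[s][i] = best error fitting s pieces to the first i points."""
    n = len(x)
    dp = np.full((k + 1, n + 1), np.inf)
    dp[0][0] = 0.0
    for s in range(1, k + 1):
        for i in range(1, n + 1):
            for j in range(i):
                e = dp[s - 1][j] + segment_error(x, y, j, i)
                dp[s][i] = min(dp[s][i], e)
    return dp[k][n]
```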
Fifty Shades of Ratings: How to Benefit from a Negative Feedback in
Top-N Recommendations Tasks | cs.LG cs.IR stat.ML | Conventional collaborative filtering techniques treat a top-n recommendations
problem as a task of generating a list of the most relevant items. This
formulation, however, disregards the opposite goal - avoiding recommendations of
completely irrelevant items. Due to that bias, standard algorithms, as well as
commonly used evaluation metrics, become insensitive to negative feedback. In
order to resolve this problem we propose to treat user feedback as a
categorical variable and model it with users and items in a ternary way. We
employ a third-order tensor factorization technique and implement a higher
order folding-in method to support online recommendations. The method is
equally sensitive to the entire spectrum of user ratings and is able to accurately
predict relevant items even from negative-only feedback. Our method may
partially eliminate the need for complicated rating elicitation process as it
provides means for personalized recommendations from the very beginning of an
interaction with a recommender system. We also propose a modification of
standard metrics which helps to reveal unwanted biases and account for
sensitivity to a negative feedback. Our model achieves state-of-the-art quality
in standard recommendation tasks while significantly outperforming other
methods in the cold-start "no-positive-feedback" scenarios.
| Evgeny Frolov, Ivan Oseledets | 10.1145/2959100.2959170 | 1607.04228 | null | null |
Neural Semantic Encoders | cs.LG cs.CL stat.ML | We present a memory augmented neural network for natural language
understanding: Neural Semantic Encoders. NSE is equipped with a novel memory
update rule and has a variable sized encoding memory that evolves over time and
maintains the understanding of input sequences through read, compose and write
operations. NSE can also access multiple and shared memories. In this paper, we
demonstrate the effectiveness and the flexibility of NSE on five different
natural language tasks: natural language inference, question answering,
sentence classification, document sentiment analysis and machine translation,
where NSE achieved state-of-the-art performance when evaluated on publicly
available benchmarks. For example, our shared-memory model showed an
encouraging result on neural machine translation, improving an attention-based
baseline by approximately 1.0 BLEU.
| Tsendsuren Munkhdalai and Hong Yu | null | 1607.04315 | null | null |
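A toy numpy sketch of one read-compose-write step in the spirit of NSE; the attention-weighted erase/add write rule below is a simplified stand-in for the paper's learned update, and `compose` is any trainable function supplied by the caller.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def nse_step(x, memory, compose):
    """One read-compose-write step. `memory` is (slots, dim) and
    `compose` maps (input, read vector) to a new hidden vector."""
    attn = softmax(memory @ x)            # read: attention over slots
    m = attn @ memory                     # retrieved memory vector
    h = compose(x, m)                     # compose input with memory
    memory = memory * (1 - attn[:, None]) + np.outer(attn, h)  # write
    return h, memory

# e.g. compose = lambda x, m: np.tanh(x + m) as a parameter-free stand-in
```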
Random projections of random manifolds | stat.ML cs.LG q-bio.NC | Interesting data often concentrate on low dimensional smooth manifolds inside
a high dimensional ambient space. Random projections are a simple, powerful
tool for dimensionality reduction of such data. Previous works have studied
bounds on how many projections are needed to accurately preserve the geometry
of these manifolds, given their intrinsic dimensionality, volume and curvature.
However, such works employ definitions of volume and curvature that are
inherently difficult to compute. Therefore such theory cannot be easily tested
against numerical simulations to understand the tightness of the proven bounds.
We instead study typical distortions arising in random projections of an
ensemble of smooth Gaussian random manifolds. We find explicitly computable,
approximate theoretical bounds on the number of projections required to
accurately preserve the geometry of these manifolds. Our bounds, while
approximate, can only be violated with a probability that is exponentially
small in the ambient dimension, and therefore they hold with high probability
in cases of practical interest. Moreover, unlike previous work, we test our
theoretical bounds against numerical experiments on the actual geometric
distortions that typically occur for random projections of random smooth
manifolds. We find our bounds are tighter than previous results by several
orders of magnitude.
| Subhaneil Lahiri, Peiran Gao, Surya Ganguli | null | 1607.04331 | null | null |
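The distortion quantity studied here is easy to probe numerically. A small sketch, measuring the worst relative change in pairwise distances when points sampled from a manifold are projected to m dimensions with a Gaussian map; the 1/sqrt(m) scaling makes the projection distance-preserving in expectation.

```python
import numpy as np

def projection_distortion(X, m, rng=None):
    """Empirical worst-case relative distortion of pairwise distances
    under a random Gaussian projection of the rows of X to m dims."""
    rng = rng or np.random.default_rng()
    P = rng.standard_normal((X.shape[1], m)) / np.sqrt(m)
    Y = X @ P
    d_hi = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    d_lo = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
    iu = np.triu_indices(len(X), k=1)    # distinct pairs only
    return np.abs(d_lo[iu] / d_hi[iu] - 1.0).max()
```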
DeepQA: Improving the estimation of single protein model quality with
deep belief networks | cs.AI cs.LG q-bio.QM | Protein quality assessment (QA) by ranking and selecting protein models has
long been viewed as one of the major challenges for protein tertiary structure
prediction. Especially, estimating the quality of a single protein model, which
is important for selecting a few good models out of a large model pool
consisting of mostly low-quality models, is still a largely unsolved problem.
We introduce a novel single-model quality assessment method DeepQA based on
deep belief network that utilizes a number of selected features describing the
quality of a model from different perspectives, such as energy, physio-chemical
characteristics, and structural information. The deep belief network is trained
on several large datasets consisting of models from the Critical Assessment of
Protein Structure Prediction (CASP) experiments, several publicly available
datasets, and models generated by our in-house ab initio method. Our experiment
demonstrate that deep belief network has better performance compared to Support
Vector Machines and Neural Networks on the protein model quality assessment
problem, and our method DeepQA achieves the state-of-the-art performance on
CASP11 dataset. It also outperformed two well-established methods in selecting
good outlier models from a large set of models of mostly low quality generated
by ab initio modeling methods. DeepQA is a useful tool for protein single model
quality assessment and protein structure prediction. The source code,
executable, documentation and training/test datasets of DeepQA for Linux are freely
available to non-commercial users at http://cactus.rnet.missouri.edu/DeepQA/.
| Renzhi Cao, Debswapna Bhattacharya, Jie Hou, and Jianlin Cheng | null | 1607.04379 | null | null |
A Theoretical Analysis of the BDeu Scores in Bayesian Network Structure
Learning | cs.LG cs.IT math.IT | In Bayesian network structure learning (BNSL), we need the prior probability
over structures and parameters. If the former is the uniform distribution, the
latter determines the correctness of BNSL. In this paper, we compare BDeu
(Bayesian Dirichlet equivalent uniform) and Jeffreys' prior w.r.t. their
consistency. When we seek a parent set $U$ of a variable $X$, we require
regularity that if $H(X|U)\leq H(X|U')$ and $U\subsetneq U'$, then $U$ should
be chosen rather than $U'$. We prove that the BDeu scores violate the property
and cause fatal situations in BNSL. This is because for the BDeu scores, for
any sample size $n$, there exists a probability in the form
$P(X,Y,Z)={P(XZ)P(YZ)}/{P(Z)}$ such that the probability of deciding that $X$
and $Y$ are not conditionally independent given $Z$ is more than a half. For
Jeffreys' prior, the false-positive probability uniformly converges to zero
without depending on any parameter values, and no such inconvenience occurs.
| Joe Suzuki | null | 1607.04427 | null | null |
Channel Selection Algorithm for Cognitive Radio Networks with
Heavy-Tailed Idle Times | cs.NI cs.LG | We consider a multichannel Cognitive Radio Network (CRN), where secondary
users sequentially sense channels for opportunistic spectrum access. In this
scenario, the Channel Selection Algorithm (CSA) allows secondary users to find
a vacant channel with the minimal number of channel switches. Most of the
existing CSA literature assumes exponential ON-OFF time distribution for
primary users (PU) channel occupancy pattern. This exponential assumption might
be helpful to get performance bounds; but not useful to evaluate the
performance of CSA under realistic conditions. An in-depth analysis of
independent spectrum measurement traces reveals that wireless channels
typically have heavy-tailed PU OFF times. In this paper, we propose an extension to
the Predictive CSA framework and its generalization for heavy tailed PU OFF
time distribution, which represents realistic scenarios. In particular, we
calculate the probability of channel being idle for hyper-exponential OFF times
to use in CSA. We implement our proposed CSA framework in a wireless test-bed
and comprehensively evaluate its performance by recreating the realistic PU
channel occupancy patterns. The proposed CSA shows significant reduction in
channel switches and energy consumption as compared to Predictive CSA which
always assumes exponential PU ON-OFF times. Through our work, we show the impact
of the PU channel occupancy pattern on the performance of CSA in multichannel
CRN.
| S. Senthilmurugan, Junaid Ansari, Petri M\"ah\"onen, T.G. Venkatesh,
and Marina Petrova | null | 1607.04450 | null | null |
Neural Tree Indexers for Text Understanding | cs.CL cs.LG stat.ML | Recurrent neural networks (RNNs) process input text sequentially and model
the conditional transition between word tokens. In contrast, the advantages of
recursive networks include that they explicitly model the compositionality and
the recursive structure of natural language. However, the current recursive
architecture is limited by its dependence on a syntactic tree. In this paper, we
introduce a robust syntactic parsing-independent tree structured model, Neural
Tree Indexers (NTI) that provides a middle ground between the sequential RNNs
and the syntactic tree-based recursive models. NTI constructs a full n-ary tree
by processing the input text with its node function in a bottom-up fashion.
An attention mechanism can then be applied to both structure and node function. We
implemented and evaluated a binary-tree model of NTI, showing that the model achieved
the state-of-the-art performance on three different NLP tasks: natural language
inference, answer sentence selection, and sentence classification,
outperforming state-of-the-art recurrent and recursive neural networks.
| Tsendsuren Munkhdalai and Hong Yu | null | 1607.04492 | null | null |
Learning from Conditional Distributions via Dual Embeddings | cs.LG math.OC stat.ML | Many machine learning tasks, such as learning with invariance and policy
evaluation in reinforcement learning, can be characterized as problems of
learning from conditional distributions. In such problems, each sample $x$
itself is associated with a conditional distribution $p(z|x)$ represented by
samples $\{z_i\}_{i=1}^M$, and the goal is to learn a function $f$ that links
these conditional distributions to target values $y$. These learning problems
become very challenging when we only have limited samples or in the extreme
case only one sample from each conditional distribution. Commonly used
approaches either assume that $z$ is independent of $x$, or require an
overwhelmingly large number of samples from each conditional distribution.
To address these challenges, we propose a novel approach which employs a new
min-max reformulation of the learning from conditional distribution problem.
With such new reformulation, we only need to deal with the joint distribution
$p(z,x)$. We also design an efficient learning algorithm, Embedding-SGD, and
establish theoretical sample complexity for such problems. Finally, our
numerical experiments on both synthetic and real-world datasets show that the
proposed approach can significantly improve over the existing algorithms.
| Bo Dai, Niao He, Yunpeng Pan, Byron Boots, Le Song | null | 1607.04579 | null | null |
Automatic Environmental Sound Recognition: Performance versus
Computational Cost | cs.SD cs.LG cs.NE | In the context of the Internet of Things (IoT), sound sensing applications
are required to run on embedded platforms where notions of product pricing and
form factor impose hard constraints on the available computing power. Whereas
Automatic Environmental Sound Recognition (AESR) algorithms are most often
developed with limited consideration for computational cost, this article seeks
which AESR algorithm can make the most of a limited amount of computing power
by comparing sound classification performance as a function of
computational cost. Results suggest that Deep Neural Networks yield the best
sound classification accuracy across a range of computational costs,
while Gaussian Mixture Models offer a reasonable accuracy at a consistently
small cost, and Support Vector Machines stand between both in terms of
compromise between accuracy and computational cost.
| Siddharth Sigtia, Adam M. Stark, Sacha Krstulovic and Mark D. Plumbley | 10.1109/TASLP.2016.2592698 | 1607.04589 | null | null |
Enriching Word Vectors with Subword Information | cs.CL cs.LG | Continuous word representations, trained on large unlabeled corpora, are
useful for many natural language processing tasks. Popular models that learn
such representations ignore the morphology of words, by assigning a distinct
vector to each word. This is a limitation, especially for languages with large
vocabularies and many rare words. In this paper, we propose a new approach
based on the skipgram model, where each word is represented as a bag of
character $n$-grams. A vector representation is associated to each character
$n$-gram; words being represented as the sum of these representations. Our
method is fast, allowing models to be trained on large corpora quickly, and
allows us to compute word representations for words that did not appear in the training
data. We evaluate our word representations on nine different languages, both on
word similarity and analogy tasks. By comparing to recently proposed
morphological word representations, we show that our vectors achieve
state-of-the-art performance on these tasks.
| Piotr Bojanowski, Edouard Grave, Armand Joulin, Tomas Mikolov | null | 1607.04606 | null | null |
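A minimal sketch of the subword scheme described above (not the authors' implementation; the dimensionality and the random embedding table below are illustrative stand-ins for learned parameters):

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of a word padded with boundary symbols < and >."""
    padded = "<" + word + ">"
    return [padded[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(padded) - n + 1)]

dim = 4                                    # illustrative dimensionality
rng = np.random.default_rng(0)
table = {}                                 # n-gram -> vector (hashed in practice)

def word_vector(word):
    """A word is represented as the sum of its character n-gram vectors."""
    v = np.zeros(dim)
    for g in char_ngrams(word):
        v += table.setdefault(g, rng.normal(size=dim))
    return v

print(word_vector("where"))                # defined even for unseen words
```

Because the representation is a sum over substrings, a vector can be composed for any out-of-vocabulary word at test time, which is the key property the abstract highlights.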
Guided Policy Search as Approximate Mirror Descent | cs.LG cs.RO | Guided policy search algorithms can be used to optimize complex nonlinear
policies, such as deep neural networks, without directly computing policy
gradients in the high-dimensional parameter space. Instead, these methods use
supervised learning to train the policy to mimic a "teacher" algorithm, such as
a trajectory optimizer or a trajectory-centric reinforcement learning method.
Guided policy search methods provide asymptotic local convergence guarantees by
construction, but it is not clear how much the policy improves within a small,
finite number of iterations. We show that guided policy search algorithms can
be interpreted as an approximate variant of mirror descent, where the
projection onto the constraint manifold is not exact. We derive a new guided
policy search algorithm that is simpler and provides appealing improvement and
convergence guarantees in simplified convex and linear settings, and show that
in the more general nonlinear setting, the error in the projection step can be
bounded. We provide empirical results on several simulated robotic navigation
and manipulation tasks that show that our method is stable and achieves similar
or better performance when compared to prior guided policy search methods, with
a simpler formulation and fewer hyperparameters.
| William Montgomery, Sergey Levine | null | 1607.04614 | null | null |
On the efficient representation and execution of deep acoustic models | cs.LG cs.CL | In this paper we present a simple and computationally efficient quantization
scheme that enables us to reduce the resolution of the parameters of a neural
network from 32-bit floating point values to 8-bit integer values. The proposed
quantization scheme leads to significant memory savings and enables the use of
optimized hardware instructions for integer arithmetic, thus significantly
reducing the cost of inference. Finally, we propose a "quantization aware"
training process that applies the proposed scheme during network training and
find that it allows us to recover most of the loss in accuracy introduced by
quantization. We validate the proposed techniques by applying them to a long
short-term memory-based acoustic model on an open-ended large vocabulary speech
recognition task.
| Raziel Alvarez, Rohit Prabhavalkar, Anton Bakhtin | null | 1607.04683 | null | null |
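As a rough illustration of this kind of scheme, the sketch below linearly maps float32 weights to 8-bit integers with a per-tensor scale and zero point; the paper's exact quantization and its "quantization aware" training are not reproduced here, and all names are illustrative:

```python
import numpy as np

def quantize(w):
    """Linear quantization of float32 weights to uint8 (a generic sketch)."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 or 1.0       # guard against zero range
    zero_point = round(-w_min / scale)
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s, z = quantize(w)
print(np.abs(w - dequantize(q, s, z)).max())     # error on the order of scale/2
```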
Learning Social Circles in Ego Networks based on Multi-View Social
Graphs | cs.SI cs.LG | In social network analysis, automatic social circle detection in ego-networks
is becoming a fundamental and important task, with many potential applications
such as user privacy protection or interest group recommendation. So far, most
studies have focused on addressing two questions, namely, how to detect
overlapping circles and how to detect circles using a combination of network
structure and network node attributes. This paper asks an orthogonal research
question, that is, how to detect circles based on network structures that are
(usually) described by multiple views. Our investigation begins with crawling
ego-networks from Twitter and employing classic techniques to model their
structures by six views, including user relationships, user interactions and
user content. We then apply both standard and our modified multi-view spectral
clustering techniques to detect social circles in these ego-networks. Based on
extensive automatic and manual experimental evaluations, we deliver two major
findings: first, multi-view clustering techniques perform better than common
single-view clustering techniques, which only use one view or naively integrate
all views for detection; second, the standard multi-view clustering technique
is less robust than our modified technique, which selectively transfers
information across views based on an assumption that sparse network structures
are (potentially) incomplete. In particular, the second finding makes us
believe a direct application of standard clustering on potentially incomplete
networks may yield biased results. We lightly examine this issue in theory,
where we derive an upper bound for such bias by integrating theories of
spectral clustering and matrix perturbation, and discuss how it may be affected
by several network characteristics.
| Chao Lan, Yuhao Yang, Xiaoli Li, Bo Luo, Jun Huan | null | 1607.04747 | null | null |
Shesop Healthcare: Stress and influenza classification using support
vector machine kernel | cs.CY cs.LG | Shesop is an integrated system designed to make human lives easier and to help
people in terms of healthcare. Stress and influenza classification is a part of
Shesop's application for healthcare devices such as smartwatches, Polar and
Fitbit devices. The main objective of this paper is to classify new data and
determine whether the user is stressed, depressed, has influenza or not. We use
the heart rate data taken over several months in Bandung, analyze the data, and
find the heart rate variance that is consistently related to the stress and flu
levels. After we find this variable, we use it as an input to the support
vector machine learning. We use the Lagrangian and kernel technique to
transform 2D data into 3D data so we can use linear classification in the 3D
space. In the end, we can use the machine learning results to classify new data
and get the final result immediately: stressed or not, influenza or not.
| Andrien Ivander Wijaya, Ary Setijadi Prihatmanto, Rifki Wijaya | 10.13140/RG.2.1.2449.0486 | 1607.04770 | null | null
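For illustration only, the following sketch uses scikit-learn's SVC with an RBF kernel in the role of the kernel trick described above; the two heart-rate features and the class structure are invented for the example, not taken from the paper:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Hypothetical features per subject: [mean heart rate, heart rate variance].
X = np.vstack([rng.normal([70, 5], 2, (50, 2)),     # healthy
               rng.normal([85, 12], 2, (50, 2))])   # stressed / influenza
y = np.array([0] * 50 + [1] * 50)

# The RBF kernel implicitly lifts the 2D data into a higher-dimensional
# space where a linear separator exists, mirroring the 2D-to-3D idea above.
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[72, 6], [88, 13]]))             # classify new measurements
```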
Exploiting Multi-modal Curriculum in Noisy Web Data for Large-scale
Concept Learning | cs.CV cs.LG | Learning video concept detectors automatically from the big but noisy web
data with no additional manual annotations is a novel but challenging area in
the multimedia and the machine learning community. A considerable amount of
videos on the web are associated with rich but noisy contextual information,
such as the title, which provides weak annotations or labels about the video
content. To leverage the big noisy web labels, this paper proposes a novel
method called WEbly-Labeled Learning (WELL), which is established on the
state-of-the-art machine learning algorithm inspired by the human learning
process. WELL introduces a number of novel multi-modal approaches to incorporate
meaningful prior knowledge called curriculum from the noisy web videos. To
investigate this problem, we empirically study the curriculum constructed from
the multi-modal features of the videos collected from YouTube and Flickr. The
efficacy and the scalability of WELL have been extensively demonstrated on two
public benchmarks, including the largest multimedia dataset and the largest
manually-labeled video set. The comprehensive experimental results demonstrate
that WELL outperforms state-of-the-art studies by a statistically significant
margin on learning concepts from noisy web video data. In addition, the results
also verify that WELL is robust to the level of noisiness in the video data.
Notably, WELL trained on sufficient noisy web labels is able to achieve a
comparable accuracy to supervised learning methods trained on the clean
manually-labeled data.
| Junwei Liang, Lu Jiang, Deyu Meng, Alexander Hauptmann | null | 1607.04780 | null | null
Learning to Decode Linear Codes Using Deep Learning | cs.IT cs.LG cs.NE math.IT | A novel deep learning method for improving the belief propagation algorithm
is proposed. The method generalizes the standard belief propagation algorithm
by assigning weights to the edges of the Tanner graph. These edges are then
trained using deep learning techniques. A well-known property of the belief
propagation algorithm is that its performance is independent of the transmitted
codeword. A crucial property of our new method is that our decoder preserves
this property. Furthermore, this property allows us to learn from only a single
codeword instead of an exponential number of codewords. Improvements over the
belief propagation algorithm are demonstrated for various high-density
parity-check codes.
| Eliya Nachmani, Yair Beery and David Burshtein | null | 1607.04793 | null | null |
Inferring solutions of differential equations using noisy multi-fidelity
data | cs.LG | For more than two centuries, solutions of differential equations have been
obtained either analytically or numerically based on typically well-behaved
forcing and boundary conditions for well-posed problems. We are changing this
paradigm in a fundamental way by establishing an interface between
probabilistic machine learning and differential equations. We develop
data-driven algorithms for general linear equations using Gaussian process
priors tailored to the corresponding integro-differential operators. The only
observables are scarce noisy multi-fidelity data for the forcing and solution
that are not required to reside on the domain boundary. The resulting
predictive posterior distributions quantify uncertainty and naturally lead to
adaptive solution refinement via active learning. This general framework
circumvents the tyranny of numerical discretization as well as the consistency
and stability issues of time integration, and is scalable to high dimensions.
| Maziar Raissi, Paris Perdikaris, George Em. Karniadakis | 10.1016/j.jcp.2017.01.060 | 1607.04805 | null | null |
Robust Automated Human Activity Recognition and its Application to Sleep
Research | cs.LG | Human Activity Recognition (HAR) is a powerful tool for understanding human
behaviour. Applying HAR to wearable sensors can provide new insights by
enriching the feature set in health studies, and enhance the personalisation
and effectiveness of health, wellness, and fitness applications. Wearable
devices provide an unobtrusive platform for user monitoring, and due to their
increasing market penetration, feel intrinsic to the wearer. The integration of
these devices in daily life provides a unique opportunity for understanding
human health and wellbeing. This is referred to as the "quantified self"
movement. The analysis of complex health behaviours such as sleep
traditionally requires time-consuming manual interpretation by experts. This
manual work is necessary due to the erratic periodicity and persistent
noisiness of human behaviour. In this paper, we present a robust automated
human activity recognition algorithm, which we call RAHAR. We test our
algorithm in the application area of sleep research by providing a novel
framework for evaluating sleep quality and examining the correlation between
the aforementioned and an individual's physical activity. Our results improve
the state-of-the-art procedure in sleep research by 15 percent for the area
under the ROC curve and by 30 percent for F1 score on average. However, the
application of RAHAR is
not limited to sleep analysis and can be used for understanding other health
problems such as obesity, diabetes, and cardiac diseases.
| Aarti Sathyanarayana, Ferda Ofli, Luis Fernandes-Luque, Jaideep
Srivastava, Ahmed Elmagarmid, Teresa Arora, Shahrad Taheri | 10.1109/ICDMW.2016.0077 | 1607.04867 | null | null |
Learning Unitary Operators with Help From u(n) | stat.ML cs.LG | A major challenge in the training of recurrent neural networks is the
so-called vanishing or exploding gradient problem. The use of a norm-preserving
transition operator can address this issue, but parametrization is challenging.
In this work we focus on unitary operators and describe a parametrization using
the Lie algebra $\mathfrak{u}(n)$ associated with the Lie group $U(n)$ of $n
\times n$ unitary matrices. The exponential map provides a correspondence
between these spaces, and allows us to define a unitary matrix using $n^2$ real
coefficients relative to a basis of the Lie algebra. The parametrization is
closed under additive updates of these coefficients, and thus provides a simple
space in which to do gradient descent. We demonstrate the effectiveness of this
parametrization on the problem of learning arbitrary unitary operators,
comparing to several baselines and outperforming a recently-proposed
lower-dimensional parametrization. We additionally use our parametrization to
generalize a recently-proposed unitary recurrent neural network to arbitrary
unitary matrices, using it to solve standard long-memory tasks.
| Stephanie L. Hyland, Gunnar R\"atsch | null | 1607.04903 | null | null |
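A minimal sketch of the parametrization (one arbitrary choice of basis ordering; assumes SciPy): $n^2$ real coefficients define a skew-Hermitian matrix in $\mathfrak{u}(n)$, and the matrix exponential maps it to a unitary matrix.

```python
import numpy as np
from scipy.linalg import expm

n = 4
rng = np.random.default_rng(0)
theta = rng.normal(size=n * n)         # n^2 real coefficients

def skew_hermitian(theta, n):
    """Build L with L = -L^H: n imaginary diagonal entries plus real and
    imaginary parts for each of the n(n-1)/2 strictly upper entries."""
    L = np.zeros((n, n), dtype=complex)
    diag, off = theta[:n], theta[n:]
    L[np.diag_indices(n)] = 1j * diag
    iu = np.triu_indices(n, k=1)
    m = len(iu[0])
    L[iu] = off[:m] + 1j * off[m:]
    L[(iu[1], iu[0])] = -off[:m] + 1j * off[m:]
    return L

U = expm(skew_hermitian(theta, n))     # exponential map u(n) -> U(n)
print(np.allclose(U @ U.conj().T, np.eye(n)))   # True: U is unitary
```

Gradient descent can then act directly on `theta`, since any additive update stays inside the parametrization, which is the closure property the abstract emphasizes.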
Piecewise convexity of artificial neural networks | cs.LG cs.AI cs.CV | Although artificial neural networks have shown great promise in applications
including computer vision and speech recognition, there remains considerable
practical and theoretical difficulty in optimizing their parameters. The
seemingly unreasonable success of gradient descent methods in minimizing these
non-convex functions remains poorly understood. In this work we offer some
theoretical guarantees for networks with piecewise affine activation functions,
which have in recent years become the norm. We prove three main results.
Firstly, that the network is piecewise convex as a function of the input data.
Secondly, that the network, considered as a function of the parameters in a
single layer, all others held constant, is again piecewise convex. Finally,
that the network as a function of all its parameters is piecewise multi-convex,
a generalization of biconvexity. From here we characterize the local minima and
stationary points of the training objective, showing that they minimize certain
subsets of the parameter space. We then analyze the performance of two
optimization algorithms on multi-convex problems: gradient descent, and a
method which repeatedly solves a number of convex sub-problems. We prove
necessary convergence conditions for the first algorithm and both necessary and
sufficient conditions for the second, after introducing regularization to the
objective. Finally, we remark on the remaining difficulty of the global
optimization problem. Under the squared error objective, we show that by
varying the training data, a single rectifier neuron admits local minima
arbitrarily far apart, both in objective value and parameter space.
| Blaine Rister, Daniel L Rubin | null | 1607.04917 | null | null |
Distributed Graph Clustering by Load Balancing | cs.DS cs.DC cs.LG | Graph clustering is a fundamental computational problem with a number of
applications in algorithm design, machine learning, data mining, and analysis
of social networks. Over the past decades, researchers have proposed a number
of algorithmic design methods for graph clustering. However, most of these
methods are based on complicated spectral techniques or convex optimisation,
and cannot be applied directly for clustering many networks that occur in
practice, whose information is often collected on different sites. Designing a
simple and distributed clustering algorithm is of great interest, and has wide
applications for processing big datasets. In this paper we present a simple and
distributed algorithm for graph clustering: for a wide class of graphs that are
characterised by a strong cluster-structure, our algorithm finishes in a
poly-logarithmic number of rounds, and recovers a partition of the graph close
to an optimal partition. The main component of our algorithm is an application
of the random matching model of load balancing, which is a fundamental protocol
in distributed computing and has been extensively studied in the past 20 years.
Hence, our result highlights an intrinsic and interesting connection between
graph clustering and load balancing. At a technical level, we present a purely
algebraic result characterising the early behaviours of load balancing
processes for graphs exhibiting a cluster-structure. We believe that this
result can be further applied to analyse other gossip processes, such as rumour
spreading and averaging processes.
| He Sun, Luca Zanetti | null | 1607.04984 | null | null |
Geometric Mean Metric Learning | stat.ML cs.LG | We revisit the task of learning a Euclidean metric from data. We approach
this problem from first principles and formulate it as a surprisingly simple
optimization problem. Indeed, our formulation even admits a closed form
solution. This solution possesses several very attractive properties: (i) an
innate geometric appeal through the Riemannian geometry of positive definite
matrices; (ii) ease of interpretability; and (iii) computational speed several
orders of magnitude faster than the widely used LMNN and ITML methods.
Furthermore, on standard benchmark datasets, our closed-form solution
consistently attains higher classification accuracy.
| Pourya Habib Zadeh, Reshad Hosseini and Suvrit Sra | null | 1607.05002 | null | null |
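As a hedged sketch of the flavor of such a closed form (the paper's exact objective, weighting, and regularization may differ): with S and D denoting scatter matrices of similar and dissimilar pairs, a geometric-mean solution places the metric at the midpoint of the geodesic between S^{-1} and D on the positive definite cone.

```python
import numpy as np
from scipy.linalg import sqrtm, inv

def geometric_mean(A, B):
    """Midpoint A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    Ah = np.real(sqrtm(A))
    Ahi = inv(Ah)
    return np.real(Ah @ sqrtm(Ahi @ B @ Ahi) @ Ah)

rng = np.random.default_rng(0)
sim = rng.normal(size=(100, 5))           # hypothetical similar-pair differences
dis = 3.0 * rng.normal(size=(100, 5))     # hypothetical dissimilar-pair differences
S = sim.T @ sim / 100
D = dis.T @ dis / 100
A = geometric_mean(inv(S), D)             # candidate Mahalanobis matrix
print(np.all(np.linalg.eigvalsh(A) > 0))  # positive definite, as required
```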
A Batch, Off-Policy, Actor-Critic Algorithm for Optimizing the Average
Reward | stat.ML cs.LG | We develop an off-policy actor-critic algorithm for learning an optimal
policy from a training set composed of data from multiple individuals. This
algorithm is developed with a view towards its use in mobile health.
| S.A. Murphy, Y. Deng, E.B. Laber, H.R. Maei, R.S. Sutton, K.
Witkiewitz | null | 1607.05047 | null | null |
Imitation Learning with Recurrent Neural Networks | cs.CL cs.LG stat.ML | We present a novel view that unifies two frameworks that aim to solve
sequential prediction problems: learning to search (L2S) and recurrent neural
networks (RNN). We point out equivalences between elements of the two
frameworks. By complementing what is missing from one framework compared to
the other, we introduce a more advanced imitation learning framework that, on
one hand, augments L2S's notion of search space and, on the other hand,
enhances the training procedure of RNNs to be more robust to compounding errors
arising from training on highly correlated examples.
| Khanh Nguyen | null | 1607.05241 | null | null |
A Semiparametric Model for Bayesian Reader Identification | cs.LG | We study the problem of identifying individuals based on their characteristic
gaze patterns during reading of arbitrary text. The motivation for this problem
is an unobtrusive biometric setting in which a user is observed during access
to a document, but no specific challenge protocol requiring the user's time and
attention is carried out. Existing models of individual differences in gaze
control during reading are either based on simple aggregate features of eye
movements, or rely on parametric density models to describe, for instance,
saccade amplitudes or word fixation durations. We develop flexible
semiparametric models of eye movements during reading in which densities are
inferred under a Gaussian process prior centered at a parametric distribution
family that is expected to approximate the true distribution well. An empirical
study on reading data from 251 individuals shows significant improvements over
the state of the art.
| Ahmed Abdelwahab, Reinhold Kliegl and Niels Landwehr | null | 1607.05271 | null | null |
Generating Images Part by Part with Composite Generative Adversarial
Networks | cs.AI cs.CV cs.LG | Image generation remains a fundamental problem in artificial intelligence in
general and deep learning in particular. The generative adversarial network (GAN)
was successful in generating high quality samples of natural images. We propose
a model called composite generative adversarial network, that reveals the
complex structure of images with multiple generators in which each generator
generates some part of the image. Those parts are combined by alpha blending
process to create a new single image. It can generate, for example, background
and face sequentially with two generators, after training on face dataset.
Training was done in an unsupervised way without any labels about what each
generator should generate. We empirically found that it is possible to learn
the structure of images using this generative model.
| Hanock Kwak, Byoung-Tak Zhang | null | 1607.05387 | null | null |
Multidimensional Dynamic Pricing for Welfare Maximization | cs.DS cs.GT cs.LG | We study the problem of a seller dynamically pricing $d$ distinct types of
indivisible goods, when faced with the online arrival of unit-demand buyers
drawn independently from an unknown distribution. The goods are not in limited
supply, but can only be produced at a limited rate and are costly to produce.
The seller observes only the bundle of goods purchased at each day, but nothing
else about the buyer's valuation function. Our main result is a dynamic pricing
algorithm for optimizing welfare (including the seller's cost of production)
that runs in time and a number of rounds that are polynomial in $d$ and the
approximation parameter. We are able to do this despite the fact that (i) the
price-response function is not continuous, and even its fractional relaxation
is a non-concave function of the prices, and (ii) the welfare is not observable
to the seller.
We derive this result as an application of a general technique for optimizing
welfare over \emph{divisible} goods, which is of independent interest. When
buyers have strongly concave, H\"older continuous valuation functions over $d$
divisible goods, we give a general polynomial time dynamic pricing technique.
We are able to apply this technique to the setting of unit demand buyers
despite the fact that in that setting the goods are not divisible, and the
natural fractional relaxation of a unit demand valuation is not strongly
concave. In order to apply our general technique, we introduce a novel price
randomization procedure which has the effect of implicitly inducing buyers to
"regularize" their valuations with a strongly concave function. Finally, we
also extend our results to a limited-supply setting in which the number of
copies of each good cannot be replenished.
| Aaron Roth, Aleksandrs Slivkins, Jonathan Ullman, Zhiwei Steven Wu | null | 1607.05397 | null | null |
Information-theoretical label embeddings for large-scale image
classification | cs.CV cs.LG stat.ML | We present a method for training multi-label, massively multi-class image
classification models, that is faster and more accurate than supervision via a
sigmoid cross-entropy loss (logistic regression). Our method consists in
embedding high-dimensional sparse labels onto a lower-dimensional dense sphere
of unit-normed vectors, and treating the classification problem as a cosine
proximity regression problem on this sphere. We test our method on a dataset of
300 million high-resolution images with 17,000 labels, where it yields
considerably faster convergence, as well as a 7% higher mean average precision
compared to logistic regression.
| Fran\c{c}ois Chollet | null | 1607.05691 | null | null |
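A toy sketch of the stated recipe (the projection below is random and the sizes are illustrative; the paper's construction of the label embedding is not reproduced): sparse multi-hot labels are embedded as unit-normed dense vectors, and the model regresses onto them with a cosine-proximity objective.

```python
import numpy as np

rng = np.random.default_rng(0)
n_labels, d = 1000, 64                     # illustrative sizes
P = rng.normal(size=(n_labels, d))         # label id -> dense embedding row

def embed_labels(label_ids):
    """Map a sparse multi-hot label set to a unit-normed dense target."""
    v = P[label_ids].sum(axis=0)
    return v / np.linalg.norm(v)

def cosine_proximity_loss(pred, target):
    pred = pred / np.linalg.norm(pred)
    return 1.0 - float(pred @ target)      # minimized when directions align

t = embed_labels([3, 17, 256])
print(cosine_proximity_loss(rng.normal(size=d), t))
```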
PRIIME: A Generic Framework for Interactive Personalized Interesting
Pattern Discovery | cs.LG | The traditional frequent pattern mining algorithms generate an exponentially
large number of patterns, of which a substantial proportion are of little
significance for many data analysis endeavors. Discovery of a small number of
personalized interesting patterns from the large output set according to a
particular user's interest is an important as well as challenging task.
Existing works on pattern summarization do not solve this problem from the
personalization viewpoint. In this work, we propose an interactive pattern
discovery framework named PRIIME which identifies a set of interesting patterns
for a specific user without requiring any prior input on the interestingness
measure of patterns from the user. The proposed framework is generic to support
discovery of the interesting set, sequence and graph type patterns. We develop
a softmax classification based iterative learning algorithm that uses a limited
number of interactive feedback from the user to learn her interestingness
profile, and use this profile for pattern recommendation. To handle sequence
and graph type patterns PRIIME adopts a neural net (NN) based unsupervised
feature construction approach. We also develop a strategy that combines
exploration and exploitation to select patterns for feedback. We show
experimental results on several real-life datasets to validate the performance
of the proposed method. We also compare with the existing methods of
interactive pattern discovery to show that our method is substantially superior
in performance. To portray the applicability of the framework, we present a
case study from the real-estate domain.
| Mansurul Bhuiyan and Mohammad Al Hasan | null | 1607.05749 | null | null |
Data-driven generation of spatio-temporal routines in human mobility | cs.SI cs.LG physics.data-an physics.soc-ph stat.OT | The generation of realistic spatio-temporal trajectories of human mobility is
of fundamental importance in a wide range of applications, such as the
developing of protocols for mobile ad-hoc networks or what-if analysis in urban
ecosystems. Current generative algorithms fail in accurately reproducing the
individuals' recurrent schedules and at the same time in accounting for the
possibility that individuals may break the routine during periods of variable
duration. In this article we present DITRAS (DIary-based TRAjectory Simulator),
a framework to simulate the spatio-temporal patterns of human mobility. DITRAS
operates in two steps: the generation of a mobility diary and the translation
of the mobility diary into a mobility trajectory. We propose a data-driven
algorithm which constructs a diary generator from real data, capturing the
tendency of individuals to follow or break their routine. We also propose a
trajectory generator based on the concept of preferential exploration and
preferential return. We instantiate DITRAS with the proposed diary and
trajectory generators and compare the resulting algorithm with real data and
synthetic data produced by other generative algorithms, built by instantiating
DITRAS with several combinations of diary and trajectory generators. We show
that the proposed algorithm reproduces the statistical properties of real
trajectories in the most accurate way, taking a step forward in the
understanding of the origin of the spatio-temporal patterns of human mobility.
| Luca Pappalardo and Filippo Simini | 10.1007/s10618-017-0548-4 | 1607.05952 | null | null |
Indoor occupancy estimation from carbon dioxide concentration | cs.SY cs.LG | This paper presents an indoor occupancy estimator with which we can estimate
the number of indoor occupants in real time based on the carbon dioxide (CO2)
measurement. The estimator is actually a dynamic model of the occupancy level.
To identify the dynamic model, we propose the Feature Scaled Extreme Learning
Machine (FS-ELM) algorithm, which is a variation of the standard Extreme
Learning Machine (ELM) but is shown to perform better for the occupancy
estimation problem. The measured CO2 concentration suffers from serious spikes.
We find that pre-smoothing the CO2 data can greatly improve the estimation
accuracy. In real applications, however, we cannot obtain the real-time
globally smoothed CO2 data. We provide a way to use the locally smoothed CO2
data instead, which is real-time available. We introduce a new criterion, i.e.
$x$-tolerance accuracy, to assess the occupancy estimator. The proposed
occupancy estimator was tested in an office room with 24 cubicles and 11 open
seats. The accuracy is up to 94 percent with a tolerance of four occupants.
| Chaoyang Jiang, Mustafa K. Masood, Yeng Chai Soh, and Hua Li | null | 1607.05962 | null | null |
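For concreteness, here is a minimal standard ELM regression sketch; the exact feature scaling of FS-ELM is not specified here, so plain standardization stands in for it, and the synthetic CO2-like features are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # e.g. smoothed CO2-derived features
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=500)

X = (X - X.mean(0)) / X.std(0)                 # feature scaling step
W = rng.normal(size=(3, 50))                   # random hidden weights, never trained
b = rng.normal(size=50)
H = np.tanh(X @ W + b)                         # random hidden representation
beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # closed-form output weights

print(np.mean((H @ beta - y) ** 2))            # training mean squared error
```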
Onsager-corrected deep learning for sparse linear inverse problems | cs.IT cs.LG math.IT stat.ML | Deep learning has gained great popularity due to its widespread success on
many inference problems. We consider the application of deep learning to the
sparse linear inverse problem encountered in compressive sensing, where one
seeks to recover a sparse signal from a small number of noisy linear
measurements. In this paper, we propose a novel neural-network architecture
that decouples prediction errors across layers in the same way that the
approximate message passing (AMP) algorithm decouples them across iterations:
through Onsager correction. Numerical experiments suggest that our "learned
AMP" network significantly improves upon Gregor and LeCun's "learned ISTA"
network in both accuracy and complexity.
| Mark Borgerding and Philip Schniter | null | 1607.05966 | null | null |
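A minimal untrained AMP sketch for intuition (not the proposed learned network): the term added back into the residual is the Onsager correction that decouples errors across iterations, and the threshold schedule below is a heuristic assumption.

```python
import numpy as np

def soft(v, lam):                                 # soft-thresholding denoiser
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

rng = np.random.default_rng(0)
n, m, k = 400, 200, 20
A = rng.normal(size=(m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x0                                        # noiseless measurements

x, z = np.zeros(n), y.copy()
for _ in range(30):
    lam = 2.0 * np.linalg.norm(z) / np.sqrt(m)    # heuristic threshold
    x_new = soft(x + A.T @ z, lam)
    z = y - A @ x_new + z * np.count_nonzero(x_new) / m   # Onsager correction
    x = x_new
print(np.linalg.norm(x - x0) / np.linalg.norm(x0))        # relative error
```

In learned AMP, the matrices applied to the residual and the thresholds would become trainable, layer by layer, rather than fixed as here.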
On the Identification and Mitigation of Weaknesses in the Knowledge
Gradient Policy for Multi-Armed Bandits | stat.ML cs.LG | The Knowledge Gradient (KG) policy was originally proposed for online ranking
and selection problems but has recently been adapted for use in online decision
making in general and multi-armed bandit problems (MABs) in particular. We
study its use in a class of exponential family MABs and identify weaknesses,
including a propensity to take actions which are dominated with respect to both
exploitation and exploration. We propose variants of KG which avoid such
errors. These new policies include an index heuristic which deploys a KG
approach to develop an approximation to the Gittins index. A numerical study
shows this policy to perform well over a range of MABs including those for
which index policies are not optimal. While KG does not make dominated actions
when bandits are Gaussian, it fails to be index consistent and appears not to
enjoy a performance advantage over competitor policies when arms are correlated
to compensate for its greater computational demands.
| James Edwards, Paul Fearnhead, Kevin Glazebrook | 10.1017/S0269964816000279 | 1607.05970 | null | null
On the Modeling of Error Functions as High Dimensional Landscapes for
Weight Initialization in Learning Networks | cs.LG cs.CV physics.data-an stat.ML | Next generation deep neural networks for classification hosted on embedded
platforms will rely on fast, efficient, and accurate learning algorithms.
Initialization of weights in learning networks has a great impact on the
classification accuracy. In this paper we focus on deriving good initial
weights by modeling the error function of a deep neural network as a
high-dimensional landscape. We observe that due to the inherent complexity in
its algebraic structure, such an error function may conform to general results
of the statistics of large systems. To this end we apply some results from
Random Matrix Theory to analyse these functions. We model the error function in
terms of a Hamiltonian in N-dimensions and derive some theoretical results
about its general behavior. These results are further used to make better
initial guesses of weights for the learning algorithm.
| Julius, Gopinath Mahale, Sumana T., C. S. Adityakrishna | null | 1607.06011 | null | null |
Doubly Accelerated Methods for Faster CCA and Generalized
Eigendecomposition | math.OC cs.DS cs.LG stat.ML | We study $k$-GenEV, the problem of finding the top $k$ generalized
eigenvectors, and $k$-CCA, the problem of finding the top $k$ vectors in
canonical-correlation analysis. We propose algorithms $\mathtt{LazyEV}$ and
$\mathtt{LazyCCA}$ to solve the two problems with running times linearly
dependent on the input size and on $k$.
Furthermore, our algorithms are DOUBLY-ACCELERATED: our running times depend
only on the square root of the matrix condition number, and on the square root
of the eigengap. This is the first such result for both $k$-GenEV or $k$-CCA.
We also provide the first gap-free results, which provide running times that
depend on $1/\sqrt{\varepsilon}$ rather than the eigengap.
| Zeyuan Allen-Zhu, Yuanzhi Li | null | 1607.06017 | null | null |
Predicting Branch Visits and Credit Card Up-selling using Temporal
Banking Data | cs.LG | There is an abundance of temporal and non-temporal data in banking (and other
industries), but such temporal activity data cannot be used directly with
classical machine learning models. In this work, we perform extensive feature
extraction from the temporal user activity data in an attempt to predict user
visits to different branches and credit card up-selling utilizing user
information and the corresponding activity data, as part of the ECML/PKDD
Discovery Challenge 2016 on Bank Card Usage Analysis. Our solution ranked
4th for Task 1 and achieved an AUC of 0.7056 for Task 2 on the public
leaderboard.
| Sandra Mitrovi\'c and Gaurav Singh | null | 1607.06123 | null | null |
Sequence to sequence learning for unconstrained scene text recognition | cs.CV cs.LG cs.NE | In this work we present a state-of-the-art approach for unconstrained natural
scene text recognition. We propose a cascade approach that incorporates a
convolutional neural network (CNN) architecture followed by a long short term
memory model (LSTM). The CNN learns visual features for the characters and uses
them with a softmax layer to detect sequences of characters. While the CNN gives
very good recognition results, it does not model relations between characters,
and hence gives rise to false positive and false negative cases (confusing
characters due to visual similarities like "g" and "9", or confusing background
patches with characters; either removing existing characters or adding
non-existing ones). To alleviate these problems we leverage recent developments
in LSTM architectures to encode contextual information. We show that the LSTM
can dramatically reduce such errors and achieve state-of-the-art accuracy in
the task of unconstrained natural scene text recognition. Moreover we manually
remove all occurrences of the words that exist in the test set from our
training set to test whether our approach will generalize to unseen data. We
use the ICDAR 13 test set for evaluation and compare the results with the state
of the art approaches [11, 18]. We finally present an application of the work
in the domain of traffic monitoring.
| Ahmed Mamdouh A. Hassanien | null | 1607.06125 | null | null |
Supervised quantum gate "teaching" for quantum hardware design | cs.LG quant-ph stat.ML | We show how to train a quantum network of pairwise interacting qubits such
that its evolution implements a target quantum algorithm into a given network
subset. Our strategy is inspired by supervised learning and is designed to help
the physical construction of a quantum computer which operates with minimal
external classical control.
| Leonardo Banchi, Nicola Pancotti, Sougato Bose | null | 1607.06146 | null | null |
Streaming Recommender Systems | cs.SI cs.IR cs.LG | The increasing popularity of real-world recommender systems produces data
continuously and rapidly, and it becomes more realistic to study recommender
systems under streaming scenarios. Data streams present distinct properties
such as temporally ordered, continuous and high-velocity, which poses
tremendous challenges to traditional recommender systems. In this paper, we
investigate the problem of recommendation with stream inputs. In particular, we
provide a principled framework termed sRec, which provides explicit
continuous-time random process models of the creation of users and topics, and
of the evolution of their interests. A variational Bayesian approach called
recursive mean-field approximation is proposed, which permits computationally
efficient instantaneous on-line inference. Experimental results on several
real-world datasets demonstrate the advantages of our sRec over other
state-of-the-art methods.
| Shiyu Chang, Yang Zhang, Jiliang Tang, Dawei Yin, Yi Chang, Mark A.
Hasegawa-Johnson, Thomas S. Huang | null | 1607.06182 | null | null |
An ensemble of machine learning and anti-learning methods for predicting
tumour patient survival rates | cs.LG | This paper primarily addresses a dataset relating to cellular, chemical and
physical conditions of patients gathered at the time they are operated upon to
remove colorectal tumours. This data provides a unique insight into the
biochemical and immunological status of patients at the point of tumour removal
along with information about tumour classification and post-operative survival.
The relationship between severity of tumour, based on TNM staging, and survival
is still unclear for patients with TNM stage 2 and 3 tumours. We ask whether it
is possible to predict survival rate more accurately using a selection of
machine learning techniques applied to subsets of data to gain a deeper
understanding of the relationships between a patient's biochemical markers and
survival. We use a range of feature selection and single classification
techniques to predict the 5 year survival rate of TNM stage 2 and 3 patients
which initially produces less than ideal results. The performance of each model
individually is then compared with subsets of the data where agreement is
reached for multiple models. This novel method of selective ensembling
demonstrates that significant improvements in model accuracy on an unseen test
set can be achieved for patients where agreement between models is achieved.
Finally we point at a possible method to identify whether a patient's prognosis
can be accurately predicted or not.
| Christopher Roadknight, Durga Suryanarayanan, Uwe Aickelin, John
Scholefield, Lindy Durrant | 10.1109/DSAA.2015.7344863 | 1607.06190 | null | null
Greedy bi-criteria approximations for $k$-medians and $k$-means | cs.DS cs.LG | This paper investigates the following natural greedy procedure for clustering
in the bi-criterion setting: iteratively grow a set of centers, in each round
adding the center from a candidate set that maximally decreases clustering
cost. In the case of $k$-medians and $k$-means, the key results are as follows.
$\bullet$ When the method considers all data points as candidate centers,
then selecting $\mathcal{O}(k\log(1/\varepsilon))$ centers achieves cost at
most $2+\varepsilon$ times the optimal cost with $k$ centers.
$\bullet$ Alternatively, the same guarantees hold if each round samples
$\mathcal{O}(k/\varepsilon^5)$ candidate centers proportionally to their
cluster cost (as with $\texttt{kmeans++}$, but holding centers fixed).
$\bullet$ In the case of $k$-means, considering an augmented set of
$n^{\lceil1/\varepsilon\rceil}$ candidate centers gives $1+\varepsilon$
approximation with $\mathcal{O}(k\log(1/\varepsilon))$ centers, the entire
algorithm taking
$\mathcal{O}(dk\log(1/\varepsilon)n^{1+\lceil1/\varepsilon\rceil})$ time, where
$n$ is the number of data points in $\mathbb{R}^d$.
$\bullet$ In the case of Euclidean $k$-medians, generating a candidate set
via $n^{\mathcal{O}(1/\varepsilon^2)}$ executions of stochastic gradient
descent with adaptively determined constraint sets will once again give
approximation $1+\varepsilon$ with $\mathcal{O}(k\log(1/\varepsilon))$ centers
in $dk\log(1/\varepsilon)n^{\mathcal{O}(1/\varepsilon^2)}$ time.
Ancillary results include: guarantees for cluster costs based on powers of
metrics; a brief, favorable empirical evaluation against $\texttt{kmeans++}$;
data-dependent bounds allowing $1+\varepsilon$ in the first two bullets above,
for example with $k$-medians over finite metric spaces.
| Daniel Hsu and Matus Telgarsky | null | 1607.06203 | null | null |
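The first bullet's procedure translates directly into a brute-force sketch (all data points as candidate centers; the data below is illustrative):

```python
import numpy as np

def kmeans_cost(X, centers):
    d = ((X[:, None, :] - np.asarray(centers)[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).sum()

def greedy_centers(X, t):
    """Each round adds the candidate that maximally decreases clustering cost."""
    centers = []
    for _ in range(t):
        costs = [kmeans_cost(X, centers + [x]) for x in X]
        centers.append(X[int(np.argmin(costs))])
    return np.array(centers)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, (30, 2)) for c in [(0, 0), (4, 0), (0, 4)]])
C = greedy_centers(X, 5)       # exceeding k = 3 centers is the bi-criteria slack
print(kmeans_cost(X, C))
```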
Explaining Classification Models Built on High-Dimensional Sparse Data | stat.ML cs.LG | Predictive modeling applications increasingly use data representing people's
behavior, opinions, and interactions. Fine-grained behavior data often has
different structure from traditional data, being very high-dimensional and
sparse. Models built from these data are quite difficult to interpret, since
they contain many thousands or even many millions of features. Listing features
with large model coefficients is not sufficient, because the model coefficients
do not incorporate information on feature presence, which is key when analysing
sparse data. In this paper we introduce two alternatives for explaining
predictive models by listing important features. We evaluate these alternatives
in terms of explanation "bang for the buck", i.e., how many examples'
inferences are explained for a given number of features listed. The bottom
line: (i) The proposed alternatives have double the bang-for-the-buck as
compared to just listing the high-coefficient features, and (ii) interestingly,
although they come from different sources and motivations, the two new
alternatives provide strikingly similar rankings of important features.
| Julie Moeyersoms, Brian d'Alessandro, Foster Provost, David Martens | null | 1607.06280 | null | null
Hierarchical Clustering of Asymmetric Networks | cs.LG stat.ML | This paper considers networks where relationships between nodes are
represented by directed dissimilarities. The goal is to study methods that,
based on the dissimilarity structure, output hierarchical clusters, i.e., a
family of nested partitions indexed by a connectivity parameter. Our
construction of hierarchical clustering methods is built around the concept of
admissible methods, which are those that abide by the axioms of value - nodes
in a network with two nodes are clustered together at the maximum of the two
dissimilarities between them - and transformation - when dissimilarities are
reduced, the network may become more clustered but not less. Two particular
methods, termed reciprocal and nonreciprocal clustering, are shown to provide
upper and lower bounds in the space of admissible methods. Furthermore,
alternative clustering methodologies and axioms are considered. In particular,
modifying the axiom of value such that clustering in two-node networks occurs
at the minimum of the two dissimilarities entails the existence of a unique
admissible clustering method.
| Gunnar Carlsson, Facundo M\'emoli, Alejandro Ribeiro, Santiago Segarra | null | 1607.06294 | null | null |
Uncovering Causality from Multivariate Hawkes Integrated Cumulants | stat.ML cs.LG | We design a new nonparametric method that allows one to estimate the matrix
of integrated kernels of a multivariate Hawkes process. This matrix not only
encodes the mutual influences of each nodes of the process, but also
disentangles the causality relationships between them. Our approach is the
first that leads to an estimation of this matrix without any parametric
modeling and estimation of the kernels themselves. A consequence is that it can
give an estimation of causality relationships between nodes (or users), based
on their activity timestamps (on a social network for instance), without
knowing or estimating the shape of the activities lifetime. For that purpose,
we introduce a moment matching method that fits the third-order integrated
cumulants of the process. We show on numerical experiments that our approach is
indeed very robust to the shape of the kernels, and gives appealing results on
the MemeTracker database.
| Massil Achab, Emmanuel Bacry, St\'ephane Ga\"iffas, Iacopo
Mastromatteo, Jean-Francois Muzy | null | 1607.06333 | null | null |
Admissible Hierarchical Clustering Methods and Algorithms for Asymmetric
Networks | cs.LG stat.ML | This paper characterizes hierarchical clustering methods that abide by two
previously introduced axioms -- thus, denominated admissible methods -- and
proposes tractable algorithms for their implementation. We leverage the fact
that, for asymmetric networks, every admissible method must be contained
between reciprocal and nonreciprocal clustering, and describe three families of
intermediate methods. Grafting methods exchange branches between dendrograms
generated by different admissible methods. The convex combination family
combines admissible methods through a convex operation in the space of
dendrograms, and thirdly, the semi-reciprocal family clusters nodes that are
related by strong cyclic influences in the network. Algorithms for the
computation of hierarchical clusters generated by reciprocal and nonreciprocal
clustering as well as the grafting, convex combination, and semi-reciprocal
families are derived using matrix operations in a dioid algebra. Finally, the
introduced clustering methods and algorithms are exemplified through their
application to a network describing the interrelation between sectors of the
United States (U.S.) economy.
| Gunnar Carlsson, Facundo M\'emoli, Alejandro Ribeiro, Santiago Segarra | null | 1607.06335 | null | null |
Excisive Hierarchical Clustering Methods for Network Data | cs.LG | We introduce two practical properties of hierarchical clustering methods for
(possibly asymmetric) network data: excisiveness and linear scale preservation.
The latter enforces imperviousness to change in units of measure whereas the
former ensures local consistency of the clustering outcome. Algorithmically,
excisiveness implies that we can reduce computational complexity by only
clustering a data subset of interest while theoretically guaranteeing that the
same hierarchical outcome would be observed when clustering the whole dataset.
Moreover, we introduce the concept of representability, i.e. a generative model
for describing clustering methods through the specification of their action on
a collection of networks. We further show that, within a rich set of admissible
methods, requiring representability is equivalent to requiring both
excisiveness and linear scale preservation. Leveraging this equivalence, we
show that all excisive and linear scale preserving methods can be factored into
two steps: a transformation of the weights in the input network followed by the
application of a canonical clustering method. Furthermore, their factorization
can be used to show stability of excisive and linear scale preserving methods
in the sense that a bounded perturbation in the input network entails a bounded
perturbation in the clustering output.
| Gunnar Carlsson, Facundo M\'emoli, Alejandro Ribeiro, Santiago Segarra | null | 1607.06339 | null | null |
Distributed Supervised Learning using Neural Networks | stat.ML cs.LG | Distributed learning is the problem of inferring a function in the case where
training data is distributed among multiple geographically separated sources.
Particularly, the focus is on designing learning strategies with low
computational requirements, in which communication is restricted only to
neighboring agents, with no reliance on a centralized authority. In this
thesis, we analyze multiple distributed protocols for a large number of neural
network architectures. The first part of the thesis is devoted to a definition
of the problem, followed by an extensive overview of the state-of-the-art.
Next, we introduce different strategies for a relatively simple class of single
layer neural networks, where a linear output layer is preceded by a nonlinear
layer, whose weights are stochastically assigned in the beginning of the
learning process. We consider both batch and sequential learning, with
horizontally and vertically partitioned data. In the third part, we consider
instead the more complex problem of semi-supervised distributed learning, where
each agent is provided with an additional set of unlabeled training samples. We
propose two different algorithms based on diffusion processes for linear
support vector machines and kernel ridge regression. Subsequently, the fourth
part extends the discussion to learning with time-varying data (e.g.
time-series) using recurrent neural networks. We consider two different
families of networks, namely echo state networks (extending the algorithms
introduced in the second part), and spline adaptive filters. Overall, the
algorithms presented throughout the thesis cover a wide range of possible
practical applications, and lead the way to numerous future extensions, which
are briefly summarized in the conclusive chapter.
| Simone Scardapane | null | 1607.06364 | null | null |
Layer Normalization | stat.ML cs.LG | Training state-of-the-art, deep neural networks is computationally expensive.
One way to reduce the training time is to normalize the activities of the
neurons. A recently introduced technique called batch normalization uses the
distribution of the summed input to a neuron over a mini-batch of training
cases to compute a mean and variance which are then used to normalize the
summed input to that neuron on each training case. This significantly reduces
the training time in feed-forward neural networks. However, the effect of batch
normalization is dependent on the mini-batch size and it is not obvious how to
apply it to recurrent neural networks. In this paper, we transpose batch
normalization into layer normalization by computing the mean and variance used
for normalization from all of the summed inputs to the neurons in a layer on a
single training case. Like batch normalization, we also give each neuron its
own adaptive bias and gain which are applied after the normalization but before
the non-linearity. Unlike batch normalization, layer normalization performs
exactly the same computation at training and test times. It is also
straightforward to apply to recurrent neural networks by computing the
normalization statistics separately at each time step. Layer normalization is
very effective at stabilizing the hidden state dynamics in recurrent networks.
Empirically, we show that layer normalization can substantially reduce the
training time compared with previously published techniques.
| Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton | null | 1607.06450 | null | null
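The computation described above is compact enough to sketch directly; a minimal NumPy version with per-feature gain and bias, as in the paper:

```python
import numpy as np

def layer_norm(a, gain, bias, eps=1e-5):
    """a: (batch, features) summed inputs to a layer, one row per training case.
    Statistics are per example, so train and test computations are identical."""
    mu = a.mean(axis=-1, keepdims=True)     # mean over a layer's summed inputs
    sigma = a.std(axis=-1, keepdims=True)   # std over the same inputs
    return gain * (a - mu) / (sigma + eps) + bias

a = np.random.randn(2, 8)                   # summed inputs for 2 cases, 8 units
h = np.tanh(layer_norm(a, gain=np.ones(8), bias=np.zeros(8)))
print(h.shape)                              # normalization precedes the nonlinearity
```

For an RNN, the same function is simply applied to the summed inputs at every time step, which is what makes the method independent of mini-batch size.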
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word
Embeddings | cs.CL cs.AI cs.LG stat.ML | The blind application of machine learning runs the risk of amplifying biases
present in data. Such a danger is facing us with word embedding, a popular
framework to represent text data as vectors which has been used in many machine
learning and natural language processing tasks. We show that even word
embeddings trained on Google News articles exhibit female/male gender
stereotypes to a disturbing extent. This raises concerns because their
widespread use, as we describe, often tends to amplify these biases.
Geometrically, gender bias is first shown to be captured by a direction in the
word embedding. Second, gender neutral words are shown to be linearly separable
from gender definition words in the word embedding. Using these properties, we
provide a methodology for modifying an embedding to remove gender stereotypes,
such as the association between the words receptionist and female,
while maintaining desired associations such as between the words queen and
female. We define metrics to quantify both direct and indirect gender biases in
embeddings, and develop algorithms to "debias" the embedding. Using
crowd-worker evaluation as well as standard benchmarks, we empirically
demonstrate that our algorithms significantly reduce gender bias in embeddings
while preserving its useful properties such as the ability to cluster
related concepts and to solve analogy tasks. The resulting embeddings can be
used in applications without amplifying gender bias.
| Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, Adam
Kalai | null | 1607.06520 | null | null
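A toy sketch of the geometric step described above (random vectors for brevity; the actual method estimates the gender direction from several definitional word pairs and neutralizes only gender-neutral words):

```python
import numpy as np

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in ["he", "she", "receptionist", "queen"]}

g = emb["he"] - emb["she"]
g /= np.linalg.norm(g)                     # estimated gender direction

def neutralize(v, g):
    """Remove the component of v along the bias direction, then renormalize."""
    v = v - np.dot(v, g) * g
    return v / np.linalg.norm(v)

emb["receptionist"] = neutralize(emb["receptionist"], g)
print(np.dot(emb["receptionist"], g))      # ~0: no gender component remains
```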
CGMOS: Certainty Guided Minority OverSampling | cs.LG | Handling imbalanced datasets is a challenging problem that if not treated
correctly results in reduced classification performance. Imbalanced datasets
are commonly handled using minority oversampling, and the SMOTE algorithm
is a successful oversampling algorithm with numerous extensions. SMOTE
extensions do not have a theoretical guarantee during training to work better
than SMOTE and in many instances their performance is data dependent. In this
paper we propose a novel extension to the SMOTE algorithm with a theoretical
guarantee for improved classification performance. The proposed approach
considers the classification performance of both the majority and minority
classes. In the proposed approach CGMOS (Certainty Guided Minority
OverSampling) new data points are added by considering certainty changes in the
dataset. The paper provides a proof that the proposed algorithm is guaranteed
to work better than SMOTE for training data. Further experimental results on 30
real-world datasets show that CGMOS works better than existing algorithms when
using 6 different classifiers.
| Xi Zhang and Di Ma and Lin Gan and Shanshan Jiang and Gady Agam | null | 1607.06525 | null | null |
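For context, here is a minimal sketch of the SMOTE-style interpolation that CGMOS builds on; this is plain SMOTE only, without the proposed certainty-guided selection, which would additionally score candidate points by certainty changes:

```python
import numpy as np

def smote_sample(X_min, k=5, rng=np.random.default_rng(0)):
    """Synthesize one minority point by interpolating toward a neighbour."""
    i = rng.integers(len(X_min))
    d = np.linalg.norm(X_min - X_min[i], axis=1)
    nn = np.argsort(d)[1:k + 1]            # k nearest minority neighbours
    j = rng.choice(nn)
    lam = rng.random()                     # interpolation factor in [0, 1]
    return X_min[i] + lam * (X_min[j] - X_min[i])

X_min = np.random.default_rng(1).normal(size=(20, 3))
print(smote_sample(X_min))                 # one synthetic minority sample
```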
e-Distance Weighted Support Vector Regression | cs.LG | We propose a novel support vector regression approach called e-Distance
Weighted Support Vector Regression (e-DWSVR). e-DWSVR specifically addresses two
challenging issues in support vector regression: first, the processing of noisy
data; second, how to deal with the situation when the distribution of boundary
data is different from that of the overall data. The proposed e-DWSVR optimizes
the minimum margin and the mean of functional margin simultaneously to tackle
these two issues. In addition, we use both dual coordinate descent (CD) and
averaged stochastic gradient descent (ASGD) strategies to make e-DWSVR scalable
to large scale problems. We report promising results obtained by e-DWSVR in
comparison with existing methods on several benchmark datasets.
| Yan Wang, Ge Ou, Wei Pang, Lan Huang, George Macleod Coghill | null | 1607.06657 | null | null |
On the Use of Sparse Filtering for Covariate Shift Adaptation | cs.LG stat.ML | In this paper we formally analyse the use of sparse filtering algorithms to
perform covariate shift adaptation. We provide a theoretical analysis of sparse
filtering by evaluating the conditions required to perform covariate shift
adaptation. We prove that sparse filtering can perform adaptation only if the
conditional distribution of the labels has a structure explained by a cosine
metric. To overcome this limitation, we propose a new algorithm, named periodic
sparse filtering, and carry out the same theoretical analysis regarding
covariate shift adaptation. We show that periodic sparse filtering can perform
adaptation under the looser and more realistic requirement that the conditional
distribution of the labels has a periodic structure, which may be satisfied,
for instance, by user-dependent data sets. We experimentally validate our
theoretical results on synthetic data. Moreover, we apply periodic sparse
filtering to real-world data sets to demonstrate that this simple and
computationally efficient algorithm is able to achieve competitive
performances.
| Fabio Massimo Zennaro, Ke Chen | null | 1607.06781 | null | null |
Interactive Learning from Multiple Noisy Labels | cs.LG stat.ML | Interactive learning is a process in which a machine learning algorithm is
provided with meaningful, well-chosen examples as opposed to randomly chosen
examples typical in standard supervised learning. In this paper, we propose a
new method for interactive learning from multiple noisy labels where we exploit
the disagreement among annotators to quantify the easiness (or meaningfulness)
of an example. We demonstrate the usefulness of this method in estimating the
parameters of a latent variable classification model, and conduct experimental
analyses on a range of synthetic and benchmark datasets. Furthermore, we
theoretically analyze the performance of perceptron in this interactive
learning framework.
| Shankar Vembu, Sandra Zilles | null | 1607.06988 | null | null |
Scaling Up Sparse Support Vector Machines by Simultaneous Feature and
Sample Reduction | stat.ML cs.LG | Sparse support vector machine (SVM) is a popular classification technique
that can simultaneously learn a small set of the most interpretable features
and identify the support vectors. It has achieved great successes in many
real-world applications. However, for large-scale problems involving a huge
number of samples and ultra-high dimensional features, solving sparse SVMs
remains challenging. By noting that sparse SVMs induce sparsities in both
feature and sample spaces, we propose a novel approach, which is based on
accurate estimations of the primal and dual optima of sparse SVMs, to
simultaneously identify the inactive features and samples that are guaranteed
to be irrelevant to the outputs. Thus, we can remove the identified inactive
samples and features from the training phase, leading to substantial savings in
the computational cost without sacrificing the accuracy. Moreover, we show that
our method can be extended to multi-class sparse support vector machines. To
the best of our knowledge, the proposed method is the \emph{first}
\emph{static} feature and sample reduction method for sparse SVMs and
multi-class sparse SVMs. Experiments on both synthetic and real data sets
demonstrate that our approach significantly outperforms state-of-the-art
methods and the speedup gained by our approach can be orders of magnitude.
| Weizhong Zhang and Bin Hong and Wei Liu and Jieping Ye and Deng Cai
and Xiaofei He and Jie Wang | null | 1607.06996 | null | null |
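The flavor of such screening rules can be sketched as follows: given any dual feasible point and a radius that provably contains the dual optimum, a feature whose worst-case correlation over that ball stays below the active threshold can be removed. This generic gap-safe-style test illustrates the idea only; the paper's actual optimum estimations for sparse SVMs (and its sample screening) are more refined.

```python
import numpy as np

def safe_screen_features(X, theta, radius):
    """Generic safe feature-screening test for an L1-regularized problem.

    theta:  a dual feasible point; radius: bound on its distance to the
    dual optimum. Feature j is provably inactive if even the worst case
    over the ball keeps its correlation below 1 (illustrative rule).
    """
    corr = np.abs(X.T @ theta)                  # |x_j^T theta| per feature
    slack = radius * np.linalg.norm(X, axis=0)  # worst-case deviation
    return corr + slack < 1.0                   # True = safe to discard
```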
Impact of Physical Activity on Sleep: A Deep Learning Based Exploration | cs.LG | The importance of sleep is paramount for maintaining physical, emotional and
mental wellbeing. Though the relationship between sleep and physical activity
is known to be important, it is not yet fully understood. The explosion in
popularity of actigraphy and wearable devices provides a unique opportunity to
understand this relationship. Leveraging this information source requires new
tools to be developed to facilitate data-driven research for sleep and activity
patient recommendations.
In this paper we explore the use of deep learning to build sleep quality
prediction models based on actigraphy data. We first use deep learning as a
pure model-building device: we perform human activity recognition (HAR) on
raw sensor data and then use deep learning to build sleep prediction models. We
compare the deep learning models with those built using classical approaches,
i.e., logistic regression, support vector machines, random forests and AdaBoost.
Secondly, we exploit deep learning's ability to handle high-dimensional
datasets. We explore several deep learning models on the raw
wearable sensor output without performing HAR or any other feature extraction.
Our results show that using a convolutional neural network on the raw
wearables output improves the predictive value of sleep quality from physical
activity by an additional 8% compared to state-of-the-art non-deep-learning
approaches, which themselves show a 15% improvement over current practice.
Moreover, utilizing deep learning on raw data eliminates the need for data
pre-processing and simplifies the overall workflow to analyze actigraphy data
for sleep and physical activity research.
| Aarti Sathyanarayana, Shafiq Joty, Luis Fernandez-Luque, Ferda Ofli,
Jaideep Srivastava, Ahmed Elmagarmid, Shahrad Taheri, Teresa Arora | 10.2196/mhealth.6562 | 1607.07034 | null | null |
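A minimal PyTorch sketch of the "raw sensor input" setting described above; channel counts, kernel sizes, and the one-day window length are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ActigraphyCNN(nn.Module):
    """Minimal 1D CNN over raw accelerometer channels (illustrative)."""
    def __init__(self, in_channels=3, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # pool over the whole time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                     # x: (batch, channels, time)
        h = self.features(x).squeeze(-1)      # (batch, 64)
        return self.classifier(h)

model = ActigraphyCNN()
logits = model(torch.randn(8, 3, 1440))       # e.g. one day of minute-level data
```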
Spatio-Temporal LSTM with Trust Gates for 3D Human Action Recognition | cs.CV cs.AI cs.LG cs.NE | 3D action recognition - analysis of human actions based on 3D skeleton data -
has become popular recently due to its succinctness, robustness, and
view-invariant representation. Recent attempts at this problem have suggested
developing RNN-based learning methods to model the contextual dependency in the
temporal domain. In this paper, we extend this idea to spatio-temporal domains
to analyze the hidden sources of action-related information within the input
data over both domains concurrently. Inspired by the graphical structure of the
human skeleton, we further propose a more powerful tree-structure based
traversal method. To handle the noise and occlusion in 3D skeleton data, we
introduce a new gating mechanism within LSTM to learn the reliability of the
sequential input data and accordingly adjust its effect on updating the
long-term context information stored in the memory cell. Our method achieves
state-of-the-art performance on 4 challenging benchmark datasets for 3D human
action analysis.
| Jun Liu, Amir Shahroudy, Dong Xu, and Gang Wang | null | 1607.07043 | null | null |
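A simplified single-step sketch of the trust-gate idea: predict the current input from the previous hidden state and shrink the cell update when the observed input disagrees with the prediction. The real ST-LSTM gates over joints and time jointly and its exact form differs; everything below is a schematic assumption.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def trust_gated_lstm_step(x, h, c, P, Wi, Wf, Wo, Wg, lam=0.5):
    """One LSTM step with a scalar trust gate (schematic).

    P predicts the current input from the previous hidden state; inputs
    far from the prediction (likely noisy joints) are down-weighted.
    """
    x_hat = np.tanh(P @ h)                          # input predicted from context
    tau = np.exp(-lam * np.sum((x - x_hat) ** 2))   # low trust if x looks noisy
    z = np.concatenate([x, h])
    i, f, o = sigmoid(Wi @ z), sigmoid(Wf @ z), sigmoid(Wo @ z)
    g = np.tanh(Wg @ z)
    c_new = f * c + tau * i * g                     # noisy input -> small update
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```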
An Actor-Critic Algorithm for Sequence Prediction | cs.LG | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task and on German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling.
| Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan
Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | null | 1607.07086 | null | null |
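A heavily condensed sketch of the actor update: the actor is pushed toward tokens to which the critic assigns high task value. The surrounding training loop and the critic's own regression target are omitted; this is an assumption-laden illustration, not the paper's full procedure.

```python
import torch
import torch.nn.functional as F

def actor_loss(actor_logits, critic_values):
    """actor_logits:  (T, V) unnormalized token scores per step.
    critic_values: (T, V) critic's predicted task score (e.g. BLEU gain)
    for emitting each token. We maximize sum_a p(a) * Q(a) per step,
    so the loss is its negation, with the critic held fixed."""
    probs = F.softmax(actor_logits, dim=-1)
    return -(probs * critic_values.detach()).sum(dim=-1).mean()
```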
Deep nets for local manifold learning | cs.LG | The problem of extending a function $f$ defined on training data
$\mathcal{C}$ on an unknown manifold $\mathbb{X}$ to the entire manifold and a
tubular neighborhood of this manifold is considered in this paper. For
$\mathbb{X}$ embedded in a high dimensional ambient Euclidean space
$\mathbb{R}^D$, a deep learning algorithm is developed for finding a local
coordinate system for the manifold {\bf without eigen--decomposition}, which
reduces the problem to the classical problem of function approximation on a low
dimensional cube. Deep nets (or multilayered neural networks) are proposed to
accomplish this approximation scheme by using the training data. Our methods do
not involve such optimization techniques as back--propagation, while assuring
optimal (a priori) error bounds on the output in terms of the number of
derivatives of the target function. In addition, these methods are universal,
in that they do not require a prior knowledge of the smoothness of the target
function, but adjust the accuracy of approximation locally and automatically,
depending only upon the local smoothness of the target function. Our ideas are
easily extended to solve both the pre--image problem and the out--of--sample
extension problem, with a priori bounds on the growth of the function thus
extended.
| Charles K. Chui, H. N. Mhaskar | null | 1607.0711 | null | null |
A Cross-Entropy-based Method to Perform Information-based Feature
Selection | cs.LG | From a machine learning point of view, identifying a subset of relevant
features from a real data set can be useful to improve the results achieved by
classification methods and to reduce their time and space complexity. To
achieve this goal, feature selection methods are usually employed. These
approaches assume that the data contains redundant or irrelevant attributes
that can be eliminated. In this work, we propose a novel algorithm to solve
the optimization problem that underlies Mutual Information
feature selection methods. Furthermore, our novel approach is able to estimate
automatically the number of dimensions to retain. The quality of our method is
confirmed by the promising results achieved on standard real data sets.
| Pietro Cassara and Alessandro Rozza and Mirco Nanni | null | 1607.07186 | null | null |
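The cross-entropy method over feature-inclusion masks looks roughly as follows; note how the Bernoulli probabilities also determine the retained dimensionality automatically. The scoring function is left abstract and the update is a textbook CE step, which may differ from the paper's exact algorithm.

```python
import numpy as np

def ce_feature_selection(score, d, n_iter=50, n_samples=200, elite_frac=0.1,
                         alpha=0.7, seed=0):
    """Cross-entropy method over binary feature masks.

    score(mask) should return the quantity to maximize (e.g. a mutual
    information estimate between the selected features and the labels);
    its exact form here is an assumption standing in for the paper's
    objective.
    """
    rng = np.random.default_rng(seed)
    p = np.full(d, 0.5)                        # Bernoulli inclusion probabilities
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iter):
        masks = rng.random((n_samples, d)) < p
        scores = np.array([score(m) for m in masks])
        elite = masks[np.argsort(scores)[-n_elite:]]
        p = alpha * elite.mean(axis=0) + (1 - alpha) * p   # smoothed update
    return p > 0.5                             # final feature subset
```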
Higher-Order Factorization Machines | stat.ML cs.LG | Factorization machines (FMs) are a supervised learning approach that can use
second-order feature combinations even when the data is very high-dimensional.
Unfortunately, despite increasing interest in FMs, there exists to date no
efficient training algorithm for higher-order FMs (HOFMs). In this paper, we
present the first generic yet efficient algorithms for training arbitrary-order
HOFMs. We also present new variants of HOFMs with shared parameters, which
greatly reduce model size and prediction times while maintaining similar
accuracy. We demonstrate the proposed approaches on four different link
prediction tasks.
| Mathieu Blondel, Akinori Fujino, Naonori Ueda and Masakazu Ishihata | null | 1607.07195 | null | null |
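For orientation, the standard order-2 FM prediction (which HOFM training generalizes to arbitrary order) can be computed in O(kn) time via the usual factorization identity:

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Order-2 factorization machine, O(k n) evaluation.

    x: (n,) features, w0: bias, w: (n,) linear weights, V: (n, k) factors.
    Uses sum_{i<j} <v_i, v_j> x_i x_j
         = 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ].
    """
    s = V.T @ x                        # (k,) per-factor sums
    s2 = (V ** 2).T @ (x ** 2)         # (k,) per-factor sums of squares
    return w0 + w @ x + 0.5 * np.sum(s ** 2 - s2)
```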
A Statistical Test for Joint Distributions Equivalence | cs.LG cs.CV stat.ML | We provide a distribution-free test that can be used to determine whether any
two joint distributions $p$ and $q$ are statistically different by inspection
of a large enough set of samples. Following recent efforts from Long et al.
[1], we rely on joint kernel distribution embedding to extend the kernel
two-sample test of Gretton et al. [2] to the case of joint probability
distributions. Our main result can be directly applied to verify if a
dataset-shift has occurred between training and test distributions in a
learning framework, without further assuming the shift has occurred only in the
input, in the target or in the conditional distribution.
| Francesco Solera and Andrea Palazzi | null | 1607.0727 | null | null |
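The statistic can be sketched with a product kernel on the joint space; the bandwidths are illustrative, and an actual test still needs a null threshold (e.g. from permutations):

```python
import numpy as np

def rbf(A, B, gamma):
    """Gaussian kernel matrix between row-wise samples A (n,d) and B (m,d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def joint_mmd2(Xp, Yp, Xq, Yq, gx=1.0, gy=1.0):
    """Biased MMD^2 between joint samples (Xp, Yp) ~ p and (Xq, Yq) ~ q,
    with a product kernel on the joint space:
    k((x, y), (x', y')) = k_X(x, x') * k_Y(y, y')."""
    Kpp = rbf(Xp, Xp, gx) * rbf(Yp, Yp, gy)
    Kqq = rbf(Xq, Xq, gx) * rbf(Yq, Yq, gy)
    Kpq = rbf(Xp, Xq, gx) * rbf(Yp, Yq, gy)
    return Kpp.mean() + Kqq.mean() - 2 * Kpq.mean()
```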
Evaluating Link Prediction Accuracy on Dynamic Networks with Added and
Removed Edges | cs.SI cs.LG physics.soc-ph stat.ME | The task of predicting future relationships in a social network, known as
link prediction, has been studied extensively in the literature. Many link
prediction methods have been proposed, ranging from common neighbors to
probabilistic models. Recent work by Yang et al. has highlighted several
challenges in evaluating link prediction accuracy. In dynamic networks where
edges are both added and removed over time, the link prediction problem is more
complex and involves predicting both newly added and newly removed edges. This
results in new challenges in the evaluation of dynamic link prediction methods,
and the recommendations provided by Yang et al. are no longer applicable,
because they do not address edge removal. In this paper, we investigate several
metrics currently used for evaluating accuracies of dynamic link prediction
methods and demonstrate why they can be misleading in many cases. We provide
several recommendations on evaluating dynamic link prediction accuracy,
including separation into two categories of evaluation. Finally we propose a
unified metric to characterize link prediction accuracy effectively using a
single number.
| Ruthwik R. Junuthula, Kevin S. Xu, and Vijay K. Devabhaktuni | null | 1607.0733 | null | null |
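The two-category split recommended above can be implemented directly: score previously absent pairs for new-edge prediction and previously present pairs for removal prediction, and report each separately. The unified metric the paper proposes is not reproduced here.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def split_dynamic_lp_eval(scores, prev_adj, next_adj):
    """scores[i, j]: predicted probability that edge (i, j) exists at t+1.

    Returns (AUC over previously absent pairs, AUC over previously
    present pairs); the second measures how well the method separates
    persisting edges from removed ones.
    """
    prev, nxt = prev_adj.astype(bool), next_adj.astype(bool)
    iu = np.triu_indices_from(prev, k=1)       # undirected, no self-loops
    absent, present = ~prev[iu], prev[iu]
    auc_new = roc_auc_score(nxt[iu][absent], scores[iu][absent])
    auc_removed = roc_auc_score(nxt[iu][present], scores[iu][present])
    return auc_new, auc_removed
```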
Seeing the Forest from the Trees in Two Looks: Matrix Sketching by
Cascaded Bilateral Sampling | cs.LG | Matrix sketching is aimed at finding close approximations of a matrix by
factors of much smaller dimensions, which has important applications in
optimization and machine learning. Given a matrix A of size m by n,
state-of-the-art randomized algorithms take O(m * n) time and space to obtain
its low-rank decomposition. Although quite useful, the need to store or
manipulate the entire matrix makes it a computational bottleneck for truly
large and dense inputs. Can we sketch an m-by-n matrix in O(m + n) cost by
accessing only a small fraction of its rows and columns, without knowing
anything about the remaining data? In this paper, we propose the cascaded
bilateral sampling (CABS) framework to solve this problem. We start from
demonstrating how the approximation quality of bilateral matrix sketching
depends on the encoding powers of sampling. In particular, the sampled rows and
columns should correspond to the code-vectors in the ground truth
decompositions. Motivated by this analysis, we propose to first generate a
pilot-sketch using simple random sampling, and then pursue more advanced,
"follow-up" sampling on the pilot-sketch factors seeking maximal encoding
powers. In this cascading process, the rise of approximation quality is shown
to be lower-bounded by the improvement of encoding powers in the follow-up
sampling step, which theoretically guarantees the algorithmic boosting property.
Computationally, our framework only takes linear time and space, and at the
same time its performance rivals the quality of state-of-the-art algorithms
consuming a quadratic amount of resources. Empirical evaluations on benchmark
data fully demonstrate the potential of our methods in large scale matrix
sketching and related areas.
| Kai Zhang, Chuanren Liu, Jie Zhang, Hui Xiong, Eric Xing, Jieping Ye | null | 1607.07395 | null | null |
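Schematically, the two-look procedure can be sketched as follows. Here `get_rows` / `get_cols` return the requested rows and columns so the full matrix is never materialized, and scoring the follow-up samples by pilot-factor energy is one plausible reading of "encoding power" — an assumption, not the paper's exact criterion.

```python
import numpy as np

def cabs_sketch(get_rows, get_cols, m, n, c, seed=0):
    """Two-look bilateral (CUR-style) sketch of an m-by-n matrix A,
    touching only O(c) rows and columns."""
    rng = np.random.default_rng(seed)
    # Look 1: uniform pilot samples.
    R = get_rows(rng.choice(m, size=c, replace=False))   # (c, n)
    C = get_cols(rng.choice(n, size=c, replace=False))   # (m, c)
    # Look 2: re-sample where the pilot factors carry the most energy.
    rows = np.argsort((C ** 2).sum(axis=1))[-c:]         # strongest rows
    cols = np.argsort((R ** 2).sum(axis=0))[-c:]         # strongest columns
    R2, C2 = get_rows(rows), get_cols(cols)              # (c, n), (m, c)
    U = np.linalg.pinv(C2[rows, :])                      # core block A[rows][:, cols]
    return C2, U, R2                                     # A ~= C2 @ U @ R2
```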
gvnn: Neural Network Library for Geometric Computer Vision | cs.CV cs.LG | We introduce gvnn, a neural network library in Torch aimed towards bridging
the gap between classic geometric computer vision and deep learning. Inspired
by the recent success of Spatial Transformer Networks, we propose several new
layers which are often used as parametric transformations on the data in
geometric computer vision. These layers can be inserted within a neural network
much in the spirit of the original spatial transformers and allow
backpropagation to enable end-to-end learning of a network involving any domain
knowledge in geometric computer vision. This opens up applications in learning
invariance to 3D geometric transformation for place recognition, end-to-end
visual odometry, depth estimation and unsupervised learning through warping
with a parametric transformation for image reconstruction error.
| Ankur Handa, Michael Bloesch, Viorica Patraucean, Simon Stent, John
McCormac, Andrew Davison | null | 1607.07405 | null | null |
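As one example of the kind of differentiable geometric layer such a library provides, here is the SO(3) exponential map (Rodrigues' formula) in plain NumPy; gvnn's actual Torch API is not reproduced here.

```python
import numpy as np

def so3_exp(omega):
    """Rodrigues' formula: axis-angle vector (3,) -> rotation matrix (3, 3)."""
    theta = np.linalg.norm(omega)
    if theta < 1e-12:
        return np.eye(3)                      # near-zero rotation
    k = omega / theta                         # unit axis
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])          # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
```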