title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---
NIPS 2016 Tutorial: Generative Adversarial Networks | cs.LG | This report summarizes the tutorial presented by the author at NIPS 2016 on
generative adversarial networks (GANs). The tutorial describes: (1) Why
generative modeling is a topic worth studying, (2) how generative models work,
and how GANs compare to other generative models, (3) the details of how GANs
work, (4) research frontiers in GANs, and (5) state-of-the-art image models
that combine GANs with other methods. Finally, the tutorial contains three
exercises for readers to complete, and the solutions to these exercises.
| Ian Goodfellow | null | 1701.0016 | null | null |
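As a pointer to the material covered above, the core two-player minimax objective of GANs (stated here in the standard form from the GAN literature, not as the tutorial's exact wording) is

$$ \min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big], $$

where the discriminator $D$ and the generator $G$ are trained adversarially.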
Very Fast Kernel SVM under Budget Constraints | stat.ML cs.LG | In this paper we propose a fast online Kernel SVM algorithm under tight
budget constraints. We propose to split the input space using LVQ and train a
Kernel SVM in each cluster. To allow for online training, we propose to limit
the size of the support vector set of each cluster using different strategies.
Our experiments show that the algorithm achieves high accuracy while processing a very large number of samples per second in both training and evaluation.
| David Picard | null | 1701.00167 | null | null |
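A minimal sketch of the pipeline described above, assuming scikit-learn is available; KMeans stands in for LVQ, and the per-cluster budget is enforced by plain subsampling rather than the paper's support-vector-set strategies, so this only illustrates the overall structure.

```python
# Illustrative sketch: cluster the input space, then train one kernel SVM per cluster
# under a budget. KMeans stands in for LVQ; subsampling stands in for the paper's
# support-vector budgeting strategies. Assumes every cluster contains both classes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def fit_budgeted_svms(X, y, n_clusters=8, budget=500, seed=0):
    rng = np.random.default_rng(seed)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    models = {}
    for c in range(n_clusters):
        idx = np.flatnonzero(km.labels_ == c)
        if idx.size > budget:                     # enforce the per-cluster budget
            idx = rng.choice(idx, size=budget, replace=False)
        models[c] = SVC(kernel="rbf").fit(X[idx], y[idx])
    return km, models

def predict_budgeted_svms(km, models, X):
    clusters = km.predict(X)                      # route each sample to its cluster's SVM
    return np.array([models[c].predict(x[None, :])[0] for c, x in zip(clusters, X)])
```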
Lazily Adapted Constant Kinky Inference for Nonparametric Regression and
Model-Reference Adaptive Control | math.OC cs.AI cs.LG cs.SY stat.ML | Techniques known as Nonlinear Set Membership prediction, Lipschitz
Interpolation or Kinky Inference are approaches to machine learning that
utilise presupposed Lipschitz properties to compute inferences over unobserved
function values. Provided a bound on the true best Lipschitz constant of the
target function is known a priori, they offer convergence guarantees as well as
bounds around the predictions. Considering a more general setting that builds
on Hoelder continuity relative to pseudo-metrics, we propose a method for
estimating the Hoelder constant online from function value observations
that may be corrupted by bounded observational errors. Utilising this to
compute adaptive parameters within a kinky inference rule gives rise to a
nonparametric machine learning method, for which we establish strong universal
approximation guarantees. That is, we show that our prediction rule can learn
any continuous function in the limit of increasingly dense data to within a
worst-case error bound that depends on the level of observational uncertainty.
We apply our method in the context of nonparametric model-reference adaptive
control (MRAC). Across a range of simulated aircraft roll-dynamics and
performance metrics our approach outperforms recently proposed alternatives
that were based on Gaussian processes and RBF-neural networks. For
discrete-time systems, we provide guarantees on the tracking success of our
learning-based controllers both for the batch and the online learning setting.
| Jan-Peter Calliess | null | 1701.00178 | null | null |
Classification of Smartphone Users Using Internet Traffic | cs.LG cs.CR | Today, smartphone devices are owned by a large portion of the population and
have become a very popular platform for accessing the Internet. Smartphones
provide the user with immediate access to information and services. However,
they can easily expose the user to many privacy risks. Applications that are
installed on the device and entities with access to the device's Internet
traffic can reveal private information about the smartphone user and steal
sensitive content stored on the device or transmitted by the device over the
Internet. In this paper, we present a method to reveal various demographic
attributes and technical computer skills of smartphone users from their
Internet traffic records, using machine learning classification models. We
implement and evaluate the method on real-life data of smartphone users and show that
smartphone users can be classified by their gender, smoking habits, software
programming experience, and other characteristics.
| Andrey Finkelstein, Ron Biton, Rami Puzis, Asaf Shabtai | null | 1701.0022 | null | null |
Outlier Robust Online Learning | cs.LG stat.ML | We consider the problem of learning from noisy data in practical settings
where the size of data is too large to store on a single machine. More
challenging, the data coming from the wild may contain malicious outliers. To
address the scalability and robustness issues, we present an online robust
learning (ORL) approach. ORL is simple to implement and has a provable robustness
guarantee -- in stark contrast to existing online learning approaches that are
generally fragile to outliers. We specialize the ORL approach for two concrete
cases: online robust principal component analysis and online linear regression.
We demonstrate the efficiency and robustness advantages of ORL through
comprehensive simulations and predicting image tags on a large-scale data set.
We also discuss the extension of ORL to distributed learning and provide
experimental evaluations.
| Jiashi Feng, Huan Xu, Shie Mannor | null | 1701.00251 | null | null |
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs
by Selective Execution | cs.LG stat.ML | We introduce Dynamic Deep Neural Networks (D2NN), a new type of feed-forward
deep neural network that allows selective execution. Given an input, only a
subset of D2NN neurons are executed, and the particular subset is determined by
the D2NN itself. By pruning unnecessary computation depending on input, D2NNs
provide a way to improve computational efficiency. To achieve dynamic selective
execution, a D2NN augments a feed-forward deep neural network (directed acyclic
graph of differentiable modules) with controller modules. Each controller
module is a sub-network whose output is a decision that controls whether other
modules can execute. A D2NN is trained end to end. Both regular and controller
modules in a D2NN are learnable and are jointly trained to optimize both
accuracy and efficiency. Such training is achieved by integrating
backpropagation with reinforcement learning. With extensive experiments on
various D2NN architectures for image classification tasks, we demonstrate that
D2NNs are general and flexible, and can effectively optimize
accuracy-efficiency trade-offs.
| Lanlan Liu, Jia Deng | null | 1701.00299 | null | null |
Two-Bit Networks for Deep Learning on Resource-Constrained Embedded
Devices | cs.LG cs.CV | With the rapid proliferation of Internet of Things and intelligent edge
devices, there is an increasing need for implementing machine learning
algorithms, including deep learning, on resource-constrained mobile embedded
devices with limited memory and computation power. Typical large Convolutional
Neural Networks (CNNs) need large amounts of memory and computational power,
and cannot be deployed on embedded devices efficiently. We present Two-Bit
Networks (TBNs) for model compression of CNNs with edge weights constrained to
(-2, -1, 1, 2), which can be encoded with two bits. Our approach can reduce the
memory usage and improve computational efficiency significantly while achieving
good performance in terms of classification accuracy, thus representing a
reasonable tradeoff between model size and performance.
| Wenjia Meng, Zonghua Gu, Ming Zhang, Zhaohui Wu | null | 1701.00485 | null | null |
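A minimal numpy sketch of the quantization constraint described above, mapping real-valued weights to the nearest value in (-2, -1, 1, 2); the nearest-level rule is an assumption for illustration and is not the paper's training procedure.

```python
# Illustrative two-bit weight quantization: nearest level in {-2, -1, 1, 2}.
# The nearest-level rule is an assumption, not the paper's training procedure.
import numpy as np

LEVELS = np.array([-2.0, -1.0, 1.0, 2.0])

def quantize_two_bit(w):
    w = np.asarray(w, dtype=float)
    idx = np.argmin(np.abs(w[..., None] - LEVELS), axis=-1)  # nearest codebook entry
    return LEVELS[idx], idx.astype(np.uint8)                  # values and 2-bit codes

weights = np.random.randn(4, 4)
quantized, codes = quantize_two_bit(weights)
print(quantized)   # entries drawn from {-2, -1, 1, 2}
print(codes)       # indices 0..3, each representable with two bits
```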
Robust method for finding sparse solutions to linear inverse problems
using an L2 regularization | cs.NA cs.LG stat.ML | We analyzed the performance of a biologically inspired algorithm called the
Corrected Projections Algorithm (CPA) when a sparseness constraint is required
to unambiguously reconstruct an observed signal using atoms from an
overcomplete dictionary. By changing the geometry of the estimation problem,
CPA gives an analytical expression for a binary variable that indicates the
presence or absence of a dictionary atom using an L2 regularizer. The
regularized solution can be implemented using an efficient real-time
Kalman-filter type of algorithm. The smoother L2 regularization of CPA makes it
very robust to noise, and CPA outperforms other methods in identifying known
atoms in the presence of strong novel atoms in the signal.
| Gonzalo H Otazu | null | 1701.00573 | null | null |
HLA class I binding prediction via convolutional neural networks | q-bio.QM cs.LG | Many biological processes are governed by protein-ligand interactions. One
such example is the recognition of self and nonself cells by the immune system.
This immune response process is regulated by the major histocompatibility
complex (MHC) protein which is encoded by the human leukocyte antigen (HLA)
complex. Understanding the binding potential between MHC and peptides can lead
to the design of more potent, peptide-based vaccines and immunotherapies for
infectious and autoimmune diseases.
We apply machine learning techniques from the natural language processing
(NLP) domain to address the task of MHC-peptide binding prediction. More
specifically, we introduce a new distributed representation of amino acids,
named HLA-Vec, that can be used for a variety of downstream proteomic machine
learning tasks. We then propose a deep convolutional neural network
architecture, named HLA-CNN, for the task of HLA class I-peptide binding
prediction. Experimental results show that combining the new distributed
representation with our HLA-CNN architecture achieves state-of-the-art results
in the majority of the latest two Immune Epitope Database (IEDB) weekly
automated benchmark datasets. We further apply our model to predict binding on
the human genome and identify 15 genes with potential for self binding.
| Yeeleng Scott Vang and Xiaohui Xie | null | 1701.00593 | null | null |
Deep Convolutional Neural Networks for Pairwise Causality | cs.LG | Discovering causal models from observational and interventional data is an
important first step preceding what-if analysis or counterfactual reasoning. As
has been shown before, the direction of pairwise causal relations can, under
certain conditions, be inferred from observational data via standard
gradient-boosted classifiers (GBC) using carefully engineered statistical
features. In this paper we apply deep convolutional neural networks (CNNs) to
this problem by plotting attribute pairs as 2-D scatter plots that are fed to
the CNN as images. We evaluate our approach on the 'Cause-Effect Pairs' NIPS
2013 Data Challenge. We observe that a weighted ensemble of the CNN with the
earlier GBC approach yields significant improvement. Further, we observe that
when less training data is available, our approach performs better than the
GBC-based approach, suggesting that CNN models pre-trained to determine the
direction of pairwise causal relations could have wider applicability in causal
discovery and in enabling what-if or counterfactual analysis.
| Karamjit Singh, Garima Gupta, Lovekesh Vig, Gautam Shroff, and Puneet
Agarwal | null | 1701.00597 | null | null |
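A small numpy sketch of the featurization idea described above: an attribute pair is rendered as a 2-D scatter-plot-like image (here a normalized 2-D histogram of rank-transformed values) that a CNN could consume. The resolution and normalization choices are assumptions, not the paper's exact preprocessing.

```python
# Illustrative featurization: render an attribute pair (x, y) as a 2-D "scatter plot" image.
# The rank transform, resolution and normalization are assumptions for this sketch.
import numpy as np

def pair_to_image(x, y, bins=32):
    xr = np.argsort(np.argsort(x)) / (len(x) - 1)   # rank-transform to [0, 1]
    yr = np.argsort(np.argsort(y)) / (len(y) - 1)
    img, _, _ = np.histogram2d(xr, yr, bins=bins, range=[[0, 1], [0, 1]])
    return img / img.max()                          # pixel intensities in [0, 1]

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = x ** 2 + 0.1 * rng.normal(size=1000)            # a toy cause -> effect pair
print(pair_to_image(x, y).shape)                    # (32, 32), ready for a CNN input
```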
Akid: A Library for Neural Network Research and Production from a
Dataism Approach | cs.LG cs.DC | Neural networks are a revolutionary but immature technique that is fast
evolving and heavily relies on data. To benefit from the newest developments and
newly available data, we want the gap between research and production to be as
small as possible. On the other hand, differing from traditional machine learning
models, a neural network is not just yet another statistical model, but a model
for the natural processing engine --- the brain. In this work, we describe a
neural network library named akid. It provides a higher level of abstraction for
entities in nature (abstracted as blocks), built upon the abstraction of signals
(abstracted as tensors) provided by TensorFlow, reflecting the dataism
observation that all entities in nature process inputs and emit outputs in some
way. It includes a full software stack that provides abstraction to let
researchers focus on research instead of implementation, while at the same time
the developed program can be put into production seamlessly in a distributed
environment and be production ready. At the top of the application stack, it
provides out-of-the-box tools for neural network applications. Lower down, akid
provides a programming paradigm that lets users easily build customized models.
The distributed computing stack handles concurrency and communication, letting
models be trained or deployed on a single GPU, multiple GPUs, or a distributed
environment without affecting how a model is specified in the programming
paradigm stack. Lastly, the distributed deployment stack handles how the
distributed computation is deployed, decoupling the research prototype
environment from the actual production environment and dynamically allocating
computing resources, so that development (Dev) and operations (Ops) can be
separated. Please refer to http://akid.readthedocs.io/en/latest/
for documentation.
| Shuai Li | null | 1701.00609 | null | null |
New Methods of Enhancing Prediction Accuracy in Linear Models with
Missing Data | stat.ML cs.LG | In this paper, prediction for linear systems with missing information is
investigated. New methods are introduced to improve the Mean Squared Error
(MSE) on the test set in comparison to state-of-the-art methods, through
appropriate tuning of the bias-variance trade-off. First, the proposed Soft
Weighted Prediction (SWP) algorithm and its efficacy are described and compared
to previous works in non-missing scenarios. The algorithm is then modified and
optimized for missing scenarios. It is shown that controlled over-fitting by the
suggested algorithms improves prediction accuracy in various cases.
Simulation results confirm that our heuristics enhance the prediction accuracy.
| Mohammad Amin Fakharian, Ashkan Esmaeili, and Farokh Marvasti | null | 1701.00677 | null | null |
Using Big Data to Enhance the Bosch Production Line Performance: A
Kaggle Challenge | cs.LG | This paper describes our approach to the Bosch production line performance
challenge run by Kaggle.com. Maximizing the production yield is at the heart of
the manufacturing industry. At the Bosch assembly line, data is recorded for
products as they progress through each stage. Data science methods are applied
to this huge data repository, consisting of records of tests and measurements made
for each component along the assembly line to predict internal failures. We
found that it is possible to train a model that predicts which parts are most
likely to fail. Thus, a smarter failure detection system can be built and the
parts tagged as likely to fail can be salvaged to decrease operating costs and
increase the profit margins.
| Ankita Mangal and Nishant Kumar | 10.1109/BigData.2016.7840826 | 1701.00705 | null | null |
Deterministic and Probabilistic Conditions for Finite Completability of
Low-rank Multi-View Data | cs.IT cs.LG math.AG math.IT | We consider the multi-view data completion problem, i.e., to complete a
matrix $\mathbf{U}=[\mathbf{U}_1|\mathbf{U}_2]$ where the ranks of
$\mathbf{U},\mathbf{U}_1$, and $\mathbf{U}_2$ are given. In particular, we
investigate the fundamental conditions on the sampling pattern, i.e., locations
of the sampled entries for finite completability of such a multi-view data
given the corresponding rank constraints. In contrast with the existing
analysis on Grassmannian manifold for a single-view matrix, i.e., conventional
matrix completion, we propose a geometric analysis on the manifold structure
for multi-view data to incorporate more than one rank constraint. We provide a
deterministic necessary and sufficient condition on the sampling pattern for
finite completability. We also give a probabilistic condition in terms of the
number of samples per column that guarantees finite completability with high
probability. Finally, using the developed tools, we derive the deterministic
and probabilistic guarantees for unique completability.
| Morteza Ashraphijuo and Xiaodong Wang and Vaneet Aggarwal | null | 1701.00737 | null | null |
Using Artificial Neural Networks (ANN) to Control Chaos | cs.LG nlin.CD | Controlling chaos could be a significant factor in extracting large, stable
amounts of energy from small amounts of not necessarily stable resources. By
definition, chaos means huge changes in a system's output due to unpredictably
small changes in initial conditions, which implies that we can take advantage
of this sensitivity by selecting a proper control system that manipulates the
system's initial conditions and inputs to obtain a desirable output from an
otherwise chaotic system. This was accomplished by first building a known
chaotic circuit (the Chua circuit); NI's Multisim was then used to simulate the
ANN control system. It is shown that this technique can also be used to
stabilize some hard-to-stabilize electronic systems.
| Ibrahim Ighneiwaa, Salwa Hamidatoua, and Fadia Ben Ismaela | null | 1701.00754 | null | null |
Clustering Signed Networks with the Geometric Mean of Laplacians | stat.ML cs.LG math.NA | Signed networks allow one to model positive and negative relationships. We
analyze existing extensions of spectral clustering to signed networks. It turns
out that existing approaches do not recover the ground truth clustering in
several situations where either the positive or the negative network structures
contain no noise. Our analysis shows that these problems arise as existing
approaches take some form of arithmetic mean of the Laplacians of the positive
and negative parts. As a solution, we propose to use the geometric mean of the
Laplacians of the positive and negative parts and show that it outperforms the
existing approaches. While the geometric mean of matrices is computationally
expensive, we show that eigenvectors of the geometric mean can be computed
efficiently, leading to a numerical scheme for sparse matrices which is of
independent interest.
| Pedro Mercado, Francesco Tudisco and Matthias Hein | null | 1701.00757 | null | null |
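A small dense-matrix sketch of the central quantity above, assuming scipy: the matrix geometric mean $A \# B = A^{1/2}(A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}$ of Laplacians built from the positive and negative parts of a toy signed graph. A diagonal shift is added for positive definiteness, and the paper's scalable eigensolver and exact choice of Laplacian normalization are not reproduced here.

```python
# Illustrative geometric mean of the Laplacians of the positive and negative parts
# of a toy signed graph. Dense matrices only; a small diagonal shift ensures positive
# definiteness. The paper's sparse eigensolver and normalization are not shown.
import numpy as np
from scipy.linalg import sqrtm, inv

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

def geometric_mean(A, B):
    # A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}
    As = sqrtm(A)
    Ais = inv(As)
    return As @ sqrtm(Ais @ B @ Ais) @ As

W = np.array([[0,  1, -1,  0],
              [1,  0,  0, -1],
              [-1, 0,  0,  1],
              [0, -1,  1,  0]], dtype=float)
Wp, Wn = np.maximum(W, 0), np.maximum(-W, 0)         # positive / negative parts
shift = 1e-6 * np.eye(len(W))
Lg = np.real(geometric_mean(laplacian(Wp) + shift, laplacian(Wn) + shift))
vals, vecs = np.linalg.eigh(Lg)
print(vals)                                          # spectrum used for spectral clustering
```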
Collapsing of dimensionality | cs.LG | We analyze a new approach to Machine Learning coming from a modification of
classical regularization networks by casting the process in the time dimension,
leading to a sort of collapse of dimensionality in the problem of learning the
model parameters. This approach allows the definition of an online learning
algorithm that progressively accumulates the knowledge provided in the input
trajectory. The regularization principle leads to a solution based on a
dynamical system that is paired with a procedure to develop a graph structure
that stores the input regularities acquired from the temporal evolution. We
report an extensive experimental exploration of the behavior of the parameter
of the proposed model and an evaluation on an artificial dataset.
| Marco Gori, Marco Maggini, Alessandro Rossi | null | 1701.00831 | null | null |
Unsupervised neural and Bayesian models for zero-resource speech
processing | cs.CL cs.LG | In settings where only unlabelled speech data is available, zero-resource
speech technology needs to be developed without transcriptions, pronunciation
dictionaries, or language modelling text. There are two central problems in
zero-resource speech processing: (i) finding frame-level feature
representations which make it easier to discriminate between linguistic units
(phones or words), and (ii) segmenting and clustering unlabelled speech into
meaningful units. In this thesis, we argue that a combination of top-down and
bottom-up modelling is advantageous in tackling these two problems.
To address the problem of frame-level representation learning, we present the
correspondence autoencoder (cAE), a neural network trained with weak top-down
supervision from an unsupervised term discovery system. By combining this
top-down supervision with unsupervised bottom-up initialization, the cAE yields
much more discriminative features than previous approaches. We then present our
unsupervised segmental Bayesian model that segments and clusters unlabelled
speech into hypothesized words. By imposing a consistent top-down segmentation
while also using bottom-up knowledge from detected syllable boundaries, our
system outperforms several others on multi-speaker conversational English and
Xitsonga speech data. Finally, we show that the clusters discovered by the
segmental Bayesian model can be made less speaker- and gender-specific by using
features from the cAE instead of traditional acoustic features.
In summary, the different models and systems presented in this thesis show
that both top-down and bottom-up modelling can improve representation learning,
segmentation and clustering of unlabelled speech data.
| Herman Kamper | null | 1701.00851 | null | null |
Neural Probabilistic Model for Non-projective MST Parsing | cs.CL cs.LG stat.ML | In this paper, we propose a probabilistic parsing model, which defines a
proper conditional probability distribution over non-projective dependency
trees for a given sentence, using neural representations as inputs. The neural
network architecture is based on bi-directional LSTM-CNNs, which benefit from
both word- and character-level representations automatically by using a
combination of bidirectional LSTMs and CNNs. On top of the neural network, we
introduce a probabilistic structured layer, defining a conditional log-linear
model over non-projective trees. We evaluate our model on 17 different
datasets, across 14 different languages. By exploiting Kirchhoff's Matrix-Tree
Theorem (Tutte, 1984), the partition functions and marginals can be computed
efficiently, leading to a straightforward end-to-end model training procedure
via back-propagation. Our parser achieves state-of-the-art parsing performance
on nine datasets.
| Xuezhe Ma, Eduard Hovy | null | 1701.00874 | null | null |
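A small numpy sketch of the Matrix-Tree computation referred to above: the partition function over non-projective dependency trees as a determinant. The construction follows the commonly used Koo et al. (2007)-style root-row formulation, which is an assumption here rather than the paper's exact implementation.

```python
# Illustrative Matrix-Tree partition function for non-projective dependency trees.
# edge_scores[i, j]: score of an arc head i -> modifier j; root_scores[j]: score of j as root.
# Follows a common Koo et al. (2007)-style construction (an assumption, not the paper's code).
import numpy as np

def partition_function(edge_scores, root_scores):
    A = np.exp(edge_scores)
    np.fill_diagonal(A, 0.0)
    L = np.diag(A.sum(axis=0)) - A           # Laplacian: column sums (over heads) on the diagonal
    L_hat = L.copy()
    L_hat[0, :] = np.exp(root_scores)         # replace the first row with root potentials
    return np.linalg.det(L_hat)

rng = np.random.default_rng(0)
n = 4                                         # a toy 4-word sentence
Z = partition_function(rng.normal(size=(n, n)), rng.normal(size=n))
print(Z)                                      # sum of exp-scores over all dependency trees
```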
On the Usability of Probably Approximately Correct Implication Bases | cs.AI cs.LG cs.LO | We revisit the notion of probably approximately correct implication bases
from the literature and present a first formulation in the language of formal
concept analysis, with the goal to investigate whether such bases represent a
suitable substitute for exact implication bases in practical use-cases. To this
end, we quantitatively examine the behavior of probably approximately correct
implication bases on artificial and real-world data sets and compare their
precision and recall with respect to their corresponding exact implication
bases. Using a small example, we also provide qualitative insight that
implications from probably approximately correct bases can still represent
meaningful knowledge from a given data set.
| Daniel Borchmann, Tom Hanika, Sergei Obiedkov | 10.1007/978-3-319-59271-8_5 | 1701.00877 | null | null |
An Interval-Based Bayesian Generative Model for Human Complex Activity
Recognition | stat.ML cs.LG | Complex activity recognition is challenging due to the inherent uncertainty
and diversity of performing a complex activity. Normally, each instance of a
complex activity has its own configuration of atomic actions and their temporal
dependencies. We propose in this paper an atomic action-based Bayesian model
that constructs Allen's interval relation networks to characterize complex
activities with structural varieties in a probabilistic generative way: By
introducing latent variables from the Chinese restaurant process, our approach
is able to capture all possible styles of a particular complex activity as a
unique set of distributions over atomic actions and relations. We also show
that local temporal dependencies can be retained and are globally consistent in
the resulting interval network. Moreover, network structure can be learned from
empirical data. A new dataset of complex hand activities has been constructed
and made publicly available, which is much larger in size than any existing
datasets. Empirical evaluations on benchmark datasets as well as our in-house
dataset demonstrate the competitiveness of our approach.
| Li Liu and Yongzhong Yang and Lakshmi Narasimhan Govindarajan and Shu
Wang and Bin Hu and Li Cheng and David S. Rosenblum | null | 1701.00903 | null | null |
Dense Associative Memory is Robust to Adversarial Inputs | cs.LG cs.CR cs.CV q-bio.NC stat.ML | Deep neural networks (DNN) trained in a supervised way suffer from two known
problems. First, the minima of the objective function used in learning
correspond to data points (also known as rubbish examples or fooling images)
that lack semantic similarity with the training data. Second, a clean input can
be changed by a small, and often imperceptible for human vision, perturbation,
so that the resulting deformed input is misclassified by the network. These
findings emphasize the differences between the ways DNN and humans classify
patterns, and raise the question of designing learning algorithms that mimic
human perception more accurately than existing methods.
Our paper examines these questions within the framework of Dense Associative
Memory (DAM) models. These models are defined by the energy function, with
higher order (higher than quadratic) interactions between the neurons. We show
that in the limit when the power of the interaction vertex in the energy
function is sufficiently large, these models have the following three
properties. First, the minima of the objective function are free from rubbish
images, so that each minimum is a semantically meaningful pattern. Second,
artificial patterns poised precisely at the decision boundary look ambiguous to
human subjects and share aspects of both classes that are separated by that
decision boundary. Third, adversarial images constructed by models with small
power of the interaction vertex, which are equivalent to DNN with rectified
linear units (ReLU), fail to transfer to and fool the models with higher order
interactions. This opens up a possibility to use higher order models for
detecting and stopping malicious adversarial attacks. The presented results
suggest that DAM with higher order energy functions are closer to human visual
perception than DNN with ReLUs.
| Dmitry Krotov, John J Hopfield | 10.1162/neco_a_01143 | 1701.00939 | null | null |
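In symbols, the energy function that defines the models discussed above can be written (in the notation common to the dense associative memory literature; the polynomial choice of $F$ is one standard option) as

$$ E(\sigma) = -\sum_{\mu=1}^{K} F\Big(\sum_{i} \xi^{\mu}_{i}\,\sigma_{i}\Big), \qquad F(x) = x^{n}, $$

where $\xi^{\mu}$ are the stored patterns, $\sigma$ is the network state, and increasing the power $n$ of the interaction vertex yields the higher-order interactions referred to in the abstract.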
Online Learning Sensing Matrix and Sparsifying Dictionary Simultaneously
for Compressive Sensing | cs.LG | This paper considers the problem of simultaneously learning the Sensing
Matrix and Sparsifying Dictionary (SMSD) on a large training dataset. To
address the formulated joint learning problem, we propose an online algorithm
that consists of a closed-form solution for optimizing the sensing matrix with
a fixed sparsifying dictionary and a stochastic method for learning the
sparsifying dictionary on a large dataset when the sensing matrix is given.
Benefiting from training on a large dataset, the obtained compressive sensing
(CS) system by the proposed algorithm yields a much better performance in terms
of signal recovery accuracy than the existing ones. The simulation results on
natural images demonstrate the effectiveness of the suggested online algorithm
compared with the existing methods.
| Tao Hong and Zhihui Zhu | null | 1701.01 | null | null |
Demystifying Neural Style Transfer | cs.CV cs.LG cs.NE | Neural Style Transfer has recently demonstrated very exciting results that
have caught attention in both academia and industry. Despite the amazing
results, the principle of neural style transfer, especially why the Gram
matrices can represent style, remains unclear. In this paper, we propose a novel
interpretation of neural style transfer by treating it as a domain adaptation
problem. Specifically, we theoretically show that matching the Gram matrices of
feature maps is equivalent to minimizing the Maximum Mean Discrepancy (MMD) with
the second-order polynomial kernel. Thus, we argue that the essence of neural
style transfer is to match the feature distributions between the style images
and the generated images. To further support our standpoint, we experiment with
several other distribution alignment methods, and achieve appealing results. We
believe this novel interpretation connects these two important research fields,
and could inspire future research.
| Yanghao Li, Naiyan Wang, Jiaying Liu and Xiaodi Hou | null | 1701.01036 | null | null |
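A short numpy check of the equivalence stated above: for feature maps with the same number of spatial positions $N$, the squared Frobenius distance between Gram matrices equals $N^2$ times the (biased) squared MMD with the second-order polynomial kernel $k(x, y) = (x^{\top}y)^2$.

```python
# Numerical check: Gram-matrix matching equals (up to a constant) MMD^2 with k(x, y) = (x.y)^2.
import numpy as np

rng = np.random.default_rng(0)
N, C = 50, 8                          # N spatial positions, C channels
F = rng.normal(size=(N, C))           # style-image features
G = rng.normal(size=(N, C))           # generated-image features

gram_loss = np.sum((F.T @ F - G.T @ G) ** 2)          # Gram-matrix style loss

def k(X, Y):                                           # second-order polynomial kernel
    return (X @ Y.T) ** 2

mmd2 = k(F, F).mean() + k(G, G).mean() - 2 * k(F, G).mean()   # biased MMD^2 estimate
print(gram_loss, N ** 2 * mmd2)       # the two agree up to floating-point error
```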
Estimating Quality in Multi-Objective Bandits Optimization | cs.LG stat.ML | Many real-world applications are characterized by a number of conflicting
performance measures. As optimizing in a multi-objective setting leads to a set
of non-dominated solutions, a preference function is required for selecting the
solution with the appropriate trade-off between the objectives. The question
is: how good do estimations of these objectives have to be in order for the
solution maximizing the preference function to remain unchanged? In this paper,
we introduce the concept of preference radius to characterize the robustness of
the preference function and provide guidelines for controlling the quality of
estimations in the multi-objective setting. More specifically, we provide a
general formulation of multi-objective optimization under the bandits setting.
We show how the preference radius relates to the optimal gap and we use this
concept to provide a theoretical analysis of the Thompson sampling algorithm
from multivariate normal priors. We finally present experiments to support the
theoretical results and highlight the fact that one cannot simply scalarize
multi-objective problems into single-objective problems.
| Audrey Durand, Christian Gagné | null | 1701.01095 | null | null |
Overlapping Cover Local Regression Machines | cs.LG cs.CV | We present the Overlapping Domain Cover (ODC) notion for kernel machines, as
a set of overlapping subsets of the data that covers the entire training set
and is optimized to be as spatially cohesive as possible. We show how this
notion benefits the speed of local kernel machines for regression while
minimizing the prediction error. We propose an efficient ODC framework, which
is applicable to various regression models and in particular reduces the
complexity of Twin Gaussian Processes (TGP) regression from cubic to quadratic.
Our notion is also applicable to several other kernel methods (e.g., Gaussian
Process Regression (GPR) and IWTGP regression, as shown in our experiments). We
also theoretically justify the idea behind our method of improving local
prediction by the overlapping cover. We validate and analyze our method on
three benchmark human pose estimation datasets, and interesting findings are
discussed.
| Mohamed Elhoseiny and Ahmed Elgammal | null | 1701.01218 | null | null |
OpenML: An R Package to Connect to the Machine Learning Platform OpenML | stat.ML cs.LG | OpenML is an online machine learning platform where researchers can easily
share data, machine learning tasks and experiments as well as organize them
online to work and collaborate more efficiently. In this paper, we present an R
package to interface with the OpenML platform and illustrate its usage in
combination with the machine learning R package mlr. We show how the OpenML
package allows R users to easily search, download and upload data sets and
machine learning tasks. Furthermore, we also show how to upload results of
experiments, share them with others and download results from other users.
Beyond ensuring reproducibility of results, the OpenML platform automates much
of the drudge work, speeds up research, facilitates collaboration and increases
the users' visibility online.
| Giuseppe Casalicchio, Jakob Bossek, Michel Lang, Dominik Kirchhoff,
Pascal Kerschke, Benjamin Hofner, Heidi Seibold, Joaquin Vanschoren, Bernd
Bischl | 10.1007/s00180-017-0742-2 | 1701.01293 | null | null |
Toward negotiable reinforcement learning: shifting priorities in Pareto
optimal sequential decision-making | cs.AI cs.GT cs.LG | Existing multi-objective reinforcement learning (MORL) algorithms do not
account for objectives that arise from players with differing beliefs.
Concretely, consider two players with different beliefs and utility functions
who may cooperate to build a machine that takes actions on their behalf. A
representation is needed for how much the machine's policy will prioritize each
player's interests over time. Assuming the players have reached common
knowledge of their situation, this paper derives a recursion that any Pareto
optimal policy must satisfy. Two qualitative observations can be made from the
recursion: the machine must (1) use each player's own beliefs in evaluating how
well an action will serve that player's utility function, and (2) shift the
relative priority it assigns to each player's expected utilities over time, by
a factor proportional to how well that player's beliefs predict the machine's
inputs. Observation (2) represents a substantial divergence from naïve
linear utility aggregation (as in Harsanyi's utilitarian theorem, and existing
MORL algorithms), which is shown here to be inadequate for Pareto optimal
sequential decision-making on behalf of players with different beliefs.
| Andrew Critch | null | 1701.01302 | null | null |
Outlier Detection for Text Data : An Extended Version | cs.IR cs.LG stat.ML | The problem of outlier detection is extremely challenging in many domains
such as text, in which the attribute values are typically non-negative, and
most values are zero. In such cases, it often becomes difficult to separate the
outliers from the natural variations in the patterns in the underlying data. In
this paper, we present a matrix factorization method, which is naturally able
to distinguish the anomalies with the use of low rank approximations of the
underlying data. Our iterative algorithm, TONMF, is based on the block
coordinate descent (BCD) framework. We define blocks over the term-document
matrix such that the function becomes solvable. Given the most recently updated
values of the other matrix blocks, we always update one block at a time to its
optimal value. Our
approach has significant advantages over traditional methods for text outlier
detection. Finally, we present experimental results illustrating the
effectiveness of our method over competing methods.
| Ramakrishnan Kannan, Hyenkyun Woo, Charu C. Aggarwal, Haesun Park | null | 1701.01325 | null | null |
Generating Focussed Molecule Libraries for Drug Discovery with Recurrent
Neural Networks | cs.NE cs.AI cs.LG physics.chem-ph stat.ML | In de novo drug design, computational strategies are used to generate novel
molecules with good affinity to the desired biological target. In this work, we
show that recurrent neural networks can be trained as generative models for
molecular structures, similar to statistical language models in natural
language processing. We demonstrate that the properties of the generated
molecules correlate very well with the properties of the molecules used to
train the model. In order to enrich libraries with molecules active towards a
given biological target, we propose to fine-tune the model with small sets of
molecules, which are known to be active against that target.
Against Staphylococcus aureus, the model reproduced 14% of 6051 hold-out test
molecules that medicinal chemists designed, whereas against Plasmodium
falciparum (Malaria) it reproduced 28% of 1240 test molecules. When coupled
with a scoring function, our model can perform the complete de novo drug design
cycle to generate large sets of novel molecules for drug discovery.
| Marwin H.S. Segler, Thierry Kogej, Christian Tyrchan, Mark P. Waller | null | 1701.01329 | null | null |
NeuroRule: A Connectionist Approach to Data Mining | cs.LG | Classification, which involves finding rules that partition a given data set
into disjoint groups, is one class of data mining problems. Approaches proposed
so far for mining classification rules for large databases are mainly decision
tree based symbolic learning methods. The connectionist approach based on
neural networks has been thought not well suited for data mining. One of the
major reasons cited is that knowledge generated by neural networks is not
explicitly represented in the form of rules suitable for verification or
interpretation by humans. This paper examines this issue. With our newly
developed algorithms, rules which are similar to, or more concise than those
generated by the symbolic methods can be extracted from the neural networks.
The data mining process using neural networks with the emphasis on rule
extraction is described. Experimental results and comparison with previously
published works are presented.
| Hongjun Lu and Rudy Setiono and Huan Liu | null | 1701.01358 | null | null |
On spectral partitioning of signed graphs | cs.DS cs.LG math.NA stat.ML | We argue that the standard graph Laplacian is preferable for spectral
partitioning of signed graphs compared to the signed Laplacian. Simple examples
demonstrate that partitioning based on signs of components of the leading
eigenvectors of the signed Laplacian may be meaningless, in contrast to
partitioning based on the Fiedler vector of the standard graph Laplacian for
signed graphs. We observe that negative eigenvalues are beneficial for spectral
partitioning of signed graphs, making the Fiedler vector easier to compute.
| Andrew V. Knyazev | null | 1701.01394 | null | null |
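A toy numpy illustration of the two matrices being compared above, under the assumption that the "standard" Laplacian uses signed degrees $d_i = \sum_j a_{ij}$ (and so can be indefinite) while the signed Laplacian uses absolute degrees; the example only shows the spectra, not a full partitioning experiment.

```python
# Toy comparison of the standard Laplacian (signed degrees, possibly indefinite) with
# the signed Laplacian (absolute degrees, positive semidefinite) for a small signed graph.
# The degree convention for the "standard" Laplacian is an assumption in this sketch.
import numpy as np

A = np.array([[0,  1, -1, -1],
              [1,  0, -1, -1],
              [-1, -1, 0,  1],
              [-1, -1, 1,  0]], dtype=float)   # two groups with negative edges across

L_standard = np.diag(A.sum(axis=1)) - A            # can have negative eigenvalues
L_signed = np.diag(np.abs(A).sum(axis=1)) - A      # positive semidefinite

print(np.linalg.eigvalsh(L_standard))
print(np.linalg.eigvalsh(L_signed))
```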
Learning local trajectories for high precision robotic tasks :
application to KUKA LBR iiwa Cartesian positioning | cs.AI cs.LG cs.RO | To ease the development of robot learning in industry, two conditions need to
be fulfilled. Manipulators must be able to learn high accuracy and precision
tasks while being safe for workers in the factory. In this paper, we extend
previously submitted work, which consists of rapid learning of local
high-accuracy behaviors. By exploration and regression, linear and quadratic
models are learnt for the dynamics and the cost function, respectively. Iterative Linear
Quadratic Gaussian Regulator combined with cost quadratic regression can
converge rapidly in the final stages towards high accuracy behavior as the cost
function is modelled quite precisely. In this paper, both a different cost
function and a second order improvement method are implemented within this
framework. We also propose an analysis of the algorithm parameters through
simulation for a positioning task. Finally, an experimental validation on a
KUKA LBR iiwa robot is carried out. This collaborative robot manipulator can be
easily programmed into safety mode, which makes it qualified for the second
industry constraint stated above.
| Joris Guerin, Olivier Gibaru, Eric Nyiri and Stephane Thiery | 10.1109/IECON.2016.7793388 | 1701.01497 | null | null |
Follow the Compressed Leader: Faster Online Learning of Eigenvectors and
Faster MMWU | cs.LG cs.DS math.OC stat.ML | The online problem of computing the top eigenvector is fundamental to machine
learning. In both adversarial and stochastic settings, previous results (such
as matrix multiplicative weight update, follow the regularized leader, follow
the compressed leader, block power method) either achieve optimal regret but
run slowly, or run fast at the expense of losing a $\sqrt{d}$ factor in total
regret where $d$ is the matrix dimension.
We propose a $\textit{follow-the-compressed-leader (FTCL)}$ framework which
achieves optimal regret without sacrificing the running time. Our idea is to
"compress" the matrix strategy to dimension 3 in the adversarial setting, or
dimension 1 in the stochastic setting. These respectively resolve two open
questions regarding the design of optimal and efficient algorithms for the
online eigenvector problem.
| Zeyuan Allen-Zhu and Yuanzhi Li | null | 1701.01722 | null | null |
Deep Learning for Time-Series Analysis | cs.LG | In many real-world applications, e.g., speech recognition or sleep stage
classification, data are captured over the course of time, constituting a
Time-Series. Time-Series often contain temporal dependencies that cause two
otherwise identical points of time to belong to different classes or predict
different behavior. This characteristic generally increases the difficulty of
analysing them. Existing techniques often depended on hand-crafted features
that were expensive to create and required expert knowledge of the field. With
the advent of Deep Learning, new models for unsupervised learning of features
for Time-Series analysis and forecasting have been developed. Such new
developments are the topic of this paper: a review of the main Deep Learning
techniques is presented, and some applications to Time-Series analysis are
summarized. The
results make it clear that Deep Learning has a lot to contribute to the field.
| John Cristian Borges Gamboa | null | 1701.01887 | null | null |
See the Near Future: A Short-Term Predictive Methodology to Traffic Load
in ITS | cs.LG stat.AP | The Intelligent Transportation System (ITS) targets a coordinated traffic
system by applying advanced wireless communication technologies to road
traffic scheduling. Towards accurate road traffic control, short-term traffic
forecasting, which predicts the road traffic at a particular site over a short
period, is often useful and important. In existing works, the Seasonal
Autoregressive Integrated Moving Average (SARIMA) model is a popular approach.
The scheme, however, encounters two challenges: 1) the analysis of related data
is insufficient, so some important features of the data may be neglected; and
2) with data presenting different features, it is unlikely that one predictive
model can fit all situations. To tackle the above issues, in this work we
develop a hybrid model to improve the accuracy of SARIMA. Specifically, we
first explore the autocorrelation and distribution features existing in traffic
flow to revise the structure of the time series model. Based on the Gaussian
distribution of traffic flow, a hybrid model with a Bayesian learning algorithm
is developed, which can effectively expand the application scenarios of SARIMA.
We show the efficiency and accuracy of our proposal using both analysis and
experimental studies. Using real-world trace data, we show that the proposed
prediction approach achieves satisfactory performance in practice.
| Xun Zhou, Changle Li, Zhe Liu, Tom H. Luan, Zhifang Miao, Lina Zhu and
Lei Xiong | null | 1701.01917 | null | null |
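For context, a minimal sketch of fitting the kind of SARIMA baseline the hybrid model above builds on, assuming statsmodels and a synthetic hourly series with daily seasonality; the model orders are placeholders rather than values from the paper.

```python
# Minimal SARIMA baseline fit on a synthetic hourly traffic series (orders are placeholders).
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
hours = np.arange(24 * 30)                                    # 30 days of hourly counts
traffic = 100 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, size=hours.size)

model = SARIMAX(traffic, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))
result = model.fit(disp=False)
print(result.forecast(steps=24)[:5])                          # next-day short-term forecast
```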
Large-scale network motif analysis using compression | cs.LG | We introduce a new method for finding network motifs: interesting or
informative subgraph patterns in a network. Subgraphs are motifs when their
frequency in the data is high compared to the expected frequency under a null
model. To compute this expectation, a full or approximate count of the
occurrences of a motif is normally repeated on as many as 1000 random graphs
sampled from the null model; a prohibitively expensive step. We use ideas from
the Minimum Description Length (MDL) literature to define a new measure of
motif relevance. With our method, samples from the null model are not required.
Instead we compute the probability of the data under the null model and compare
this to the probability under a specially designed alternative model. With this
new relevance test, we can search for motifs by random sampling, rather than
requiring an accurate count of all instances of a motif. This allows motif
analysis to scale to networks with billions of links.
| Peter Bloem and Steven de Rooij | 10.1007/s10618-020-00691-y | 1701.02026 | null | null |
Tunable GMM Kernels | stat.ML cs.LG | The recently proposed "generalized min-max" (GMM) kernel can be efficiently
linearized, with direct applications in large-scale statistical learning and
fast near neighbor search. The linearized GMM kernel was extensively compared
in with linearized radial basis function (RBF) kernel. On a large number of
classification tasks, the tuning-free GMM kernel performs (surprisingly) well
compared to the best-tuned RBF kernel. Nevertheless, one would naturally expect
that the GMM kernel ought to be further improved if we introduce tuning
parameters.
In this paper, we study three simple constructions of tunable GMM kernels:
(i) the exponentiated-GMM (or eGMM) kernel, (ii) the powered-GMM (or pGMM)
kernel, and (iii) the exponentiated-powered-GMM (epGMM) kernel. The pGMM kernel
can still be efficiently linearized by modifying the original hashing procedure
for the GMM kernel. On about 60 publicly available classification datasets, we
verify that the proposed tunable GMM kernels typically improve over the
original GMM kernel. On some datasets, the improvements can be astonishingly
significant.
For example, on 11 popular datasets which were used for testing deep learning
algorithms and tree methods, our experiments show that the proposed tunable GMM
kernels are strong competitors to trees and deep nets. The previous studies
developed tree methods including "abc-robust-logitboost" and demonstrated the
excellent performance on those 11 datasets (and other datasets), by
establishing the second-order tree-split formula and new derivatives for
multi-class logistic loss. Compared to tree methods like
"abc-robust-logitboost" (which are slow and need substantial model sizes), the
tunable GMM kernels produce largely comparable results.
| Ping Li | null | 1701.02046 | null | null |
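For reference, a small numpy sketch of the base (tuning-free) GMM kernel that the tunable variants above start from: the data are split into nonnegative positive and negative parts and the kernel is a min/max ratio. The eGMM, pGMM, and epGMM variants themselves are not reproduced here.

```python
# Base generalized min-max (GMM) kernel; the tunable eGMM/pGMM/epGMM variants are not shown.
import numpy as np

def nonneg_transform(x):
    # Split a signed vector into positive and negative parts: R^D -> R^{2D}, all entries >= 0.
    return np.concatenate([np.maximum(x, 0), np.maximum(-x, 0)])

def gmm_kernel(x, y):
    u, v = nonneg_transform(x), nonneg_transform(y)
    return np.minimum(u, v).sum() / np.maximum(u, v).sum()

rng = np.random.default_rng(0)
a, b = rng.normal(size=16), rng.normal(size=16)
print(gmm_kernel(a, a))   # 1.0 for identical inputs
print(gmm_kernel(a, b))   # a similarity in [0, 1]
```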
Coupled Compound Poisson Factorization | cs.LG cs.AI stat.ML | We present a general framework, the coupled compound Poisson factorization
(CCPF), to capture the missing-data mechanism in extremely sparse data sets by
coupling a hierarchical Poisson factorization with an arbitrary data-generating
model. We derive a stochastic variational inference algorithm for the resulting
model and, as examples of our framework, implement three different
data-generating models---a mixture model, linear regression, and factor
analysis---to robustly model non-random missing data in the context of
clustering, prediction, and matrix factorization. In all three cases, we test
our framework against models that ignore the missing-data mechanism on large
scale studies with non-random missing data, and we show that explicitly
modeling the missing-data mechanism substantially improves the quality of the
results, as measured using data log likelihood on a held-out test set.
| Mehmet E. Basbug, Barbara E. Engelhardt | null | 1701.02058 | null | null |
Deep driven fMRI decoding of visual categories | stat.ML cs.LG q-bio.NC | Deep neural networks have been developed drawing inspiration from the brain
visual pathway, implementing an end-to-end approach: from image data to video
object classes. However, building an fMRI decoder with the typical structure of
a Convolutional Neural Network (CNN), i.e. learning multiple levels of
representations, seems impractical due to the lack of brain data. As a possible
solution, this work presents the first hybrid fMRI and deep features decoding
approach: collected fMRI and deep learnt representations of video object
classes are linked together by means of Kernel Canonical Correlation Analysis.
In decoding, this allows exploiting the discriminatory power of CNN by relating
the fMRI representation to the last layer of CNN (fc7). We show the
effectiveness of embedding fMRI data onto a subspace related to deep features
in distinguishing semantic visual categories based solely on brain imaging
data.
| Michele Svanera, Sergio Benini, Gal Raz, Talma Hendler, Rainer Goebel,
and Giancarlo Valente | null | 1701.02133 | null | null |
Shallow and Deep Networks Intrusion Detection System: A Taxonomy and
Survey | cs.CR cs.LG | Intrusion detection has attracted a considerable interest from researchers
and industries. The community, after many years of research, still faces the
problem of building reliable and efficient IDS that are capable of handling
large quantities of data, with changing patterns in real-time situations. The
work presented in this manuscript classifies intrusion detection systems (IDS).
Moreover, a taxonomy and survey of shallow and deep networks intrusion
detection systems is presented based on previous and current works. This
taxonomy and survey reviews machine learning techniques and their performance
in detecting anomalies. Feature selection which influences the effectiveness of
machine learning (ML) IDS is discussed to explain the role of feature selection
in the classification and training phase of ML IDS. Finally, a discussion of
the false and true positive alarm rates is presented to help researchers model
reliable and efficient machine learning based intrusion detection systems.
| Elike Hodo, Xavier Bellekens, Andrew Hamilton, Christos Tachtatzis and
Robert Atkinson | null | 1701.02145 | null | null |
DeepDSL: A Compilation-based Domain-Specific Language for Deep Learning | cs.PL cs.LG | In recent years, Deep Learning (DL) has found great success in domains such
as multimedia understanding. However, the complex nature of multimedia data
makes it difficult to develop DL-based software. The state-of-the-art tools,
such as Caffe, TensorFlow, Torch7, and CNTK, while successful in their
applicable domains, are programming libraries with a fixed user interface,
internal representation, and execution environment. This makes it difficult to
implement portable and customized DL applications.
In this paper, we present DeepDSL, a domain specific language (DSL) embedded
in Scala, that compiles deep networks written in DeepDSL to Java source code.
DeepDSL provides (1) intuitive constructs to support compact encoding of deep
networks; (2) symbolic gradient derivation of the networks; (3) static analysis
for memory consumption and error detection; and (4) DSL-level optimization to
improve memory and runtime efficiency.
DeepDSL programs are compiled into compact, efficient, customizable, and
portable Java source code, which operates the CUDA and CUDNN interfaces running
on Nvidia GPU via a Java Native Interface (JNI) library. We evaluated DeepDSL
with a number of popular DL networks. Our experiments show that the compiled
programs have very competitive runtime performance and memory efficiency
compared to the existing libraries.
| Tian Zhao, Xiaobing Huang, Yu Cao | null | 1701.02284 | null | null |
QuickNet: Maximizing Efficiency and Efficacy in Deep Architectures | cs.LG stat.ML | We present QuickNet, a fast and accurate network architecture that is both
faster and significantly more accurate than other fast deep architectures like
SqueezeNet. Furthermore, it uses fewer parameters than previous networks, making
it more memory efficient. We do this by making two major modifications to the
reference Darknet model (Redmon et al, 2015): 1) The use of depthwise separable
convolutions and 2) The use of parametric rectified linear units. We make the
observation that parametric rectified linear units are computationally
equivalent to leaky rectified linear units at test time and the observation
that separable convolutions can be interpreted as a compressed Inception
network (Chollet, 2016). Using these observations, we derive a network
architecture, which we call QuickNet, that is both faster and more accurate
than previous models. Our architecture provides at least four major advantages:
(1) A smaller model size, which is more tenable on memory-constrained systems;
(2) A significantly faster network, which is more tenable on computationally
constrained systems; (3) A high accuracy of 95.7 percent on the CIFAR-10
dataset, which outperforms all but one result published so far, although we note
that the two approaches are orthogonal and can be combined; and (4) Orthogonality
to previous model compression approaches allowing for further speed gains to be
realized.
| Tapabrata Ghosh | null | 1701.02291 | null | null |
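A minimal PyTorch sketch of the two modifications described above, combined into one block (a depthwise convolution, a pointwise convolution, and a PReLU activation); the block layout and channel sizes are placeholders, not the actual QuickNet architecture.

```python
# Illustrative depthwise-separable convolution block with PReLU (not the actual QuickNet layout).
import torch
import torch.nn as nn

class SeparableConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.act = nn.PReLU(out_ch)          # behaves like a leaky ReLU at test time

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))

block = SeparableConvBlock(32, 64)
x = torch.randn(1, 32, 28, 28)
print(block(x).shape)                        # torch.Size([1, 64, 28, 28])
```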
A Homological Theory of Functions | math.AC cs.CC cs.DM cs.LG math.CO | In computational complexity, a complexity class is given by a set of problems
or functions, and a basic challenge is to show separations of complexity
classes $A \not= B$ especially when $A$ is known to be a subset of $B$. In this
paper we introduce a homological theory of functions that can be used to
establish complexity separations, while also providing other interesting
consequences. We propose to associate a topological space $S_A$ to each class
of functions $A$, such that, to separate complexity classes $A \subseteq B'$,
it suffices to observe a change in "the number of holes", i.e. homology, in
$S_A$ as a subclass $B$ of $B'$ is added to $A$. In other words, if the
homologies of $S_A$ and $S_{A \cup B}$ are different, then $A \not= B'$. We
develop the underlying theory of functions based on combinatorial and
homological commutative algebra and Stanley-Reisner theory, and recover Minsky
and Papert's 1969 result that parity cannot be computed by nonmaximal degree
polynomial threshold functions. In the process, we derive a "maximal principle"
for polynomial threshold functions that is used to extend this result further
to arbitrary symmetric functions. A surprising coincidence is demonstrated,
where the maximal dimension of "holes" in $S_A$ upper bounds the VC dimension
of $A$, with equality for common computational cases such as the class of
polynomial threshold functions or the class of linear functionals in $\mathbb
F_2$, or common algebraic cases such as when the Stanley-Reisner ring of $S_A$
is Cohen-Macaulay. As another interesting application of our theory, we prove a
result that a priori has nothing to do with complexity separation: it
characterizes when a vector subspace intersects the positive cone, in terms of
homological conditions. By analogy to Farkas' result doing the same with
*linear conditions*, we call our theorem the Homological Farkas Lemma.
| Greg Yang | null | 1701.02302 | null | null |
The principle of cognitive action - Preliminary experimental analysis | cs.LG | In this document we show a first implementation and some preliminary results
of a new theory, which addresses Machine Learning problems within the
frameworks of Classical Mechanics and Variational Calculus. We give a general
formulation of the problem and then study basic behaviors of the model on
simple practical implementations.
| Marco Gori, Marco Maggini, Alessandro Rossi | null | 1701.02377 | null | null |
AdaGAN: Boosting Generative Models | stat.ML cs.LG | Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) are an
effective method for training generative models of complex data such as natural
images. However, they are notoriously hard to train and can suffer from the
problem of missing modes where the model is not able to produce examples in
certain regions of the space. We propose an iterative procedure, called AdaGAN,
where at every step we add a new component into a mixture model by running a
GAN algorithm on a reweighted sample. This is inspired by boosting algorithms,
where many potentially weak individual predictors are greedily aggregated to
form a strong composite predictor. We prove that such an incremental procedure
leads to convergence to the true distribution in a finite number of steps if
each step is optimal, and convergence at an exponential rate otherwise. We also
illustrate experimentally that this procedure addresses the problem of missing
modes.
| Ilya Tolstikhin, Sylvain Gelly, Olivier Bousquet, Carl-Johann
Simon-Gabriel and Bernhard Schölkopf | null | 1701.02386 | null | null |
Reinforcement Learning via Recurrent Convolutional Neural Networks | cs.LG cs.AI | Deep Reinforcement Learning has enabled the learning of policies for complex
tasks in partially observable environments, without explicitly learning the
underlying model of the tasks. While such model-free methods achieve
considerable performance, they often ignore the structure of the task. We
present a natural representation of Reinforcement Learning (RL) problems using
Recurrent Convolutional Neural Networks (RCNNs), to better exploit this
inherent structure. We define 3 such RCNNs, whose forward passes execute an
efficient Value Iteration, propagate beliefs of state in partially observable
environments, and choose optimal actions respectively. Backpropagating
gradients through these RCNNs allows the system to explicitly learn the
Transition Model and Reward Function associated with the underlying MDP,
serving as an elegant alternative to classical model-based RL. We evaluate the
proposed algorithms in simulation, considering a robot planning problem. We
demonstrate the capability of our framework to reduce the cost of replanning,
learn accurate MDP models, and finally re-plan with learnt models to achieve
near-optimal policies.
| Tanmay Shankar, Santosha K. Dwivedy, Prithwijit Guha | null | 1701.02392 | null | null |
Machine Learning of Linear Differential Equations using Gaussian
Processes | cs.LG math.NA stat.ML | This work leverages recent advances in probabilistic machine learning to
discover conservation laws expressed by parametric linear equations. Such
equations involve, but are not limited to, ordinary and partial differential,
integro-differential, and fractional order operators. Here, Gaussian process
priors are modified according to the particular form of such operators and are
employed to infer parameters of the linear equations from scarce and possibly
noisy observations. Such observations may come from experiments or "black-box"
computer simulations.
| Maziar Raissi and George Em. Karniadakis | 10.1016/j.jcp.2017.07.050 | 1701.0244 | null | null |
Multi-task Learning Of Deep Neural Networks For Audio Visual Automatic
Speech Recognition | cs.CL cs.AI cs.CV cs.LG | Multi-task learning (MTL) involves the simultaneous training of two or more
related tasks over shared representations. In this work, we apply MTL to
audio-visual automatic speech recognition(AV-ASR). Our primary task is to learn
a mapping between audio-visual fused features and frame labels obtained from
acoustic GMM/HMM model. This is combined with an auxiliary task which maps
visual features to frame labels obtained from a separate visual GMM/HMM model.
The MTL model is tested at various levels of babble noise and the results are
compared with a base-line hybrid DNN-HMM AV-ASR model. Our results indicate
that MTL is especially useful at higher levels of noise. Compared to the base-line,
up to 7\% relative improvement in WER is reported at -3 dB SNR.
| Abhinav Thanda, Shankar M Venkatesan | null | 1701.02477 | null | null |
Implicitly Incorporating Morphological Information into Word Embedding | cs.CL cs.LG | In this paper, we propose three novel models to enhance word embedding by
implicitly using morphological information. Experiments on word similarity and
syntactic analogy show that the implicit models are superior to traditional
explicit ones. Our models outperform all state-of-the-art baselines and
significantly improve the performance on both tasks. Moreover, our performance
on the smallest corpus is similar to the performance of CBOW on the corpus
which is five times the size of ours. Parameter analysis indicates that the
implicit models can supplement semantic information during the word embedding
training process.
| Yang Xu and Jiawei Liu | null | 1701.02481 | null | null |
Real-Time Bidding by Reinforcement Learning in Display Advertising | cs.LG cs.AI cs.GT | The majority of online display ads are served through real-time bidding (RTB)
--- each ad display impression is auctioned off in real-time when it is just
being generated from a user visit. To place an ad automatically and optimally,
it is critical for advertisers to devise a learning algorithm to cleverly bid
an ad impression in real-time. Most previous works consider the bid decision as
a static optimization problem of either treating the value of each impression
independently or setting a bid price to each segment of ad volume. However, the
bidding for a given ad campaign would repeatedly happen during its life span
before the budget runs out. As such, each bid is strategically correlated by
the constrained budget and the overall effectiveness of the campaign (e.g., the
rewards from generated clicks), which is only observed after the campaign has
completed. Thus, it is of great interest to devise an optimal bidding strategy
sequentially so that the campaign budget can be dynamically allocated across
all the available impressions on the basis of both the immediate and future
rewards. In this paper, we formulate the bid decision process as a
reinforcement learning problem, where the state space is represented by the
auction information and the campaign's real-time parameters, while an action is
the bid price to set. By modeling the state transition via auction competition,
we build a Markov Decision Process framework for learning the optimal bidding
policy to optimize the advertising performance in the dynamic real-time bidding
environment. Furthermore, the scalability problem from the large real-world
auction volume and campaign budget is well handled by state value approximation
using neural networks.
| Han Cai, Kan Ren, Weinan Zhang, Kleanthis Malialis, Jun Wang, Yong Yu,
Defeng Guo | 10.1145/3018661.3018702 | 1701.0249 | null | null |
Heterogeneous domain adaptation: An unsupervised approach | cs.LG stat.ML | Domain adaptation leverages the knowledge in one domain - the source domain -
to improve learning efficiency in another domain - the target domain. Existing
heterogeneous domain adaptation research is relatively well-progressed, but
only in situations where the target domain contains at least a few labeled
instances. In contrast, heterogeneous domain adaptation with an unlabeled
target domain has not been well-studied. To contribute to the research in this
emerging field, this paper presents: (1) an unsupervised knowledge transfer
theorem that guarantees the correctness of transferring knowledge; and (2) a
principal angle-based metric to measure the distance between two pairs of
domains: one pair comprises the original source and target domains and the
other pair comprises two homogeneous representations of two domains. The
theorem and the metric have been implemented in an innovative transfer model,
called a Grassmann-Linear monotonic maps-geodesic flow kernel (GLG), that is
specifically designed for heterogeneous unsupervised domain adaptation (HeUDA).
The linear monotonic maps meet the conditions of the theorem and are used to
construct homogeneous representations of the heterogeneous domains. The metric
shows the extent to which the homogeneous representations have preserved the
information in the original source and target domains. By minimizing the
proposed metric, the GLG model learns the homogeneous representations of
heterogeneous domains and transfers knowledge through these learned
representations via a geodesic flow kernel. To evaluate the model, five public
datasets were reorganized into ten HeUDA tasks across three applications:
cancer detection, credit assessment, and text classification. The experiments
demonstrate that the proposed model delivers superior performance over the
existing baselines.
| Feng Liu, Guanquan Zhang, Jie Lu | 10.1109/TNNLS.2020.2973293 | 1701.02511 | null | null |
Unsupervised Image-to-Image Translation with Generative Adversarial
Networks | cs.CV cs.LG | It's useful to automatically transform an image from its original form to
some synthetic form (style, partial contents, etc.), while keeping the original
structure or semantics. We define this requirement as the "image-to-image
translation" problem, and propose a general approach to achieve it, based on
deep convolutional and conditional generative adversarial networks (GANs),
which have achieved phenomenal success in learning to map images from noise input
since 2014. In this work, we develop a two-step (unsupervised) learning method
to translate images between different domains by using unlabeled images, without
specifying any correspondence between them, so as to avoid the cost of
acquiring labeled data. Compared with prior works, we demonstrate the generality
of our model, with which a variety of translations can be conducted by
a single type of model. Such capability is desirable in applications like
bidirectional translation.
| Hao Dong, Paarth Neekhara, Chao Wu, Yike Guo | null | 1701.02676 | null | null |
Towards End-to-End Speech Recognition with Deep Convolutional Neural
Networks | cs.CL cs.LG stat.ML | Convolutional Neural Networks (CNNs) are effective models for reducing
spectral variations and modeling spectral correlations in acoustic features for
automatic speech recognition (ASR). Hybrid speech recognition systems
incorporating CNNs with Hidden Markov Models/Gaussian Mixture Models
(HMMs/GMMs) have achieved the state-of-the-art in various benchmarks.
Meanwhile, Connectionist Temporal Classification (CTC) with Recurrent Neural
Networks (RNNs), which is proposed for labeling unsegmented sequences, makes it
feasible to train an end-to-end speech recognition system instead of hybrid
settings. However, RNNs are computationally expensive and sometimes difficult
to train. In this paper, inspired by the advantages of both CNNs and the CTC
approach, we propose an end-to-end speech framework for sequence labeling, by
combining hierarchical CNNs with CTC directly without recurrent connections. By
evaluating the approach on the TIMIT phoneme recognition task, we show that the
proposed model is not only computationally efficient, but also competitive with
the existing baseline systems. Moreover, we argue that CNNs have the capability
to model temporal correlations with appropriate context information.
| Ying Zhang, Mohammad Pezeshki, Philemon Brakel, Saizheng Zhang, Cesar
Laurent Yoshua Bengio, Aaron Courville | null | 1701.0272 | null | null |
Identifying Best Interventions through Online Importance Sampling | stat.ML cs.IT cs.LG math.IT | Motivated by applications in computational advertising and systems biology,
we consider the problem of identifying the best out of several possible soft
interventions at a source node $V$ in an acyclic causal directed graph, to
maximize the expected value of a target node $Y$ (located downstream of $V$).
Our setting imposes a fixed total budget for sampling under various
interventions, along with cost constraints on different types of interventions.
We pose this as a best arm identification bandit problem with $K$ arms where
each arm is a soft intervention at $V,$ and leverage the information leakage
among the arms to provide the first gap dependent error and simple regret
bounds for this problem. Our results are a significant improvement over the
traditional best arm identification results. We empirically show that our
algorithms outperform the state of the art in the Flow Cytometry data-set, and
also apply our algorithm for model interpretation of the Inception-v3 deep net
that classifies images.
| Rajat Sen, Karthikeyan Shanmugam, Alexandros G. Dimakis, and Sanjay
Shakkottai | null | 1701.02789 | null | null |
Similarity Function Tracking using Pairwise Comparisons | stat.ML cs.LG | Recent work in distance metric learning has focused on learning
transformations of data that best align with specified pairwise similarity and
dissimilarity constraints, often supplied by a human observer. The learned
transformations lead to improved retrieval, classification, and clustering
algorithms due to the better adapted distance or similarity measures. Here, we
address the problem of learning these transformations when the underlying
constraint generation process is nonstationary. This nonstationarity can be due
to changes in either the ground-truth clustering used to generate constraints
or changes in the feature subspaces in which the class structure is apparent.
We propose Online Convex Ensemble StrongLy Adaptive Dynamic Learning (OCELAD),
a general adaptive, online approach for learning and tracking optimal metrics
as they change over time that is highly robust to a variety of nonstationary
behaviors in the changing metric. We apply the OCELAD framework to an ensemble
of online learners. Specifically, we create a retro-initialized composite
objective mirror descent (COMID) ensemble (RICE) consisting of a set of
parallel COMID learners with different learning rates, and demonstrate
parameter-free RICE-OCELAD metric learning on both synthetic data and a highly
nonstationary Twitter dataset. We show significant performance improvements and
increased robustness to nonstationary effects relative to previously proposed
batch and online distance metric learning algorithms.
| Kristjan Greenewald, Stephen Kelley, Brandon Oselio, Alfred O. Hero
III | 10.1109/TSP.2017.2739100 | 1701.02804 | null | null |
Stochastic Generative Hashing | cs.LG cs.CV stat.ML | Learning-based binary hashing has become a powerful paradigm for fast search
and retrieval in massive databases. However, due to the requirement of discrete
outputs for the hash functions, learning such functions is known to be very
challenging. In addition, the objective functions adopted by existing hashing
techniques are mostly chosen heuristically. In this paper, we propose a novel
generative approach to learn hash functions through Minimum Description Length
principle such that the learned hash codes maximally compress the dataset and
can also be used to regenerate the inputs. We also develop an efficient
learning algorithm based on the stochastic distributional gradient, which
avoids the notorious difficulty caused by binary output constraints, to jointly
optimize the parameters of the hash function and the associated generative
model. Extensive experiments on a variety of large-scale datasets show that the
proposed method achieves better retrieval results than the existing
state-of-the-art methods.
| Bo Dai, Ruiqi Guo, Sanjiv Kumar, Niao He, Le Song | null | 1701.02815 | null | null |
The empirical Christoffel function with applications in data analysis | cs.LG | We illustrate the potential applications in machine learning of the
Christoffel function, or more precisely, its empirical counterpart associated
with a counting measure uniformly supported on a finite set of points. Firstly,
we provide a thresholding scheme which allows to approximate the support of a
measure from a finite subset of its moments with strong asymptotic guarantees.
Secondly, we provide a consistency result which relates the empirical
Christoffel function and its population counterpart in the limit of large
samples. Finally, we illustrate the relevance of our results on simulated and
real world datasets for several applications in statistics and machine
learning: (a) density and support estimation from finite samples, (b) outlier
and novelty detection and (c) affine matching.
| Jean-Bernard Lasserre and Edouard Pauwels | null | 1701.02886 | null | null |
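A minimal sketch of the empirical Christoffel function used above for support and outlier estimation: build the empirical moment matrix of monomials up to degree d, invert it, and score points by the inverse of the resulting quadratic form. The degree, regularization, and thresholding rule below are illustrative choices, not the paper's specific settings.

```python
import numpy as np
from itertools import combinations_with_replacement

def monomial_features(X, d):
    """Evaluate all monomials of degree <= d at the rows of X."""
    n, p = X.shape
    cols = [np.ones(n)]
    for deg in range(1, d + 1):
        for idx in combinations_with_replacement(range(p), deg):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

def empirical_christoffel(X, d=2, reg=1e-8):
    """Return x -> Lambda_d(x) built from the empirical moment matrix."""
    V = monomial_features(X, d)
    M = V.T @ V / len(X)                         # empirical moment matrix
    M_inv = np.linalg.inv(M + reg * np.eye(M.shape[0]))
    def christoffel(x):
        v = monomial_features(np.atleast_2d(x), d)
        return 1.0 / np.einsum('ij,jk,ik->i', v, M_inv, v)
    return christoffel

# Points with small Lambda_d(x) lie outside (or near the boundary of) the
# support and can be flagged as outliers by thresholding.
X = np.random.randn(500, 2)
score = empirical_christoffel(X, d=3)
print(score(np.array([0.0, 0.0])), score(np.array([5.0, 5.0])))
```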
Multivariate Regression with Grossly Corrupted Observations: A Robust
Approach and its Applications | stat.ML cs.CV cs.LG | This paper studies the problem of multivariate linear regression where a
portion of the observations is grossly corrupted or is missing, and the
magnitudes and locations of such occurrences are unknown a priori. To deal
with this problem, we propose a new approach that explicitly considers the error
source as well as its sparse nature. An interesting property of our
approach lies in its ability of allowing individual regression output elements
or tasks to possess their unique noise levels. Moreover, despite working with a
non-smooth optimization problem, our approach still guarantees to converge to
its optimal solution. Experiments on synthetic data demonstrate the
competitiveness of our approach compared with existing multivariate regression
models. In addition, empirically our approach has been validated with very
promising results on two exemplar real-world applications: The first concerns
the prediction of \textit{Big-Five} personality based on user behaviors at
social network sites (SNSs), while the second is 3D human hand pose estimation
from depth images. The implementation of our approach and comparison methods as
well as the involved datasets are made publicly available in support of the
open-source and reproducible research initiatives.
| Xiaowei Zhang and Chi Xu and Yu Zhang and Tingshao Zhu and Li Cheng | null | 1701.02892 | null | null |
Fast mixing for Latent Dirichlet allocation | cs.LG stat.ML | Markov chain Monte Carlo (MCMC) algorithms are ubiquitous in probability
theory in general and in machine learning in particular. A Markov chain is
devised so that its stationary distribution is some probability distribution of
interest. Then one samples from the given distribution by running the Markov
chain for a "long time" until it appears to be stationary and then collects the
sample. However these chains are often very complex and there are no
theoretical guarantees that stationarity is actually reached. In this paper we
study the Gibbs sampler of the posterior distribution of a very simple case of
Latent Dirichlet Allocation, the arguably most well known Bayesian unsupervised
learning model for text generation and text classification. It is shown that
when the corpus consists of two long documents of equal length $m$ and the
vocabulary consists of only two different words, the mixing time is at most of
order $m^2\log m$ (which corresponds to $m\log m$ rounds over the corpus). It
will be apparent from our analysis that it seems very likely that the mixing
time is not much worse in the more relevant case when the number of documents
and the size of the vocabulary are also large as long as each word is
represented a large number of times in each document, even though the computations
involved may be intractable.
| Johan Jonasson | null | 1701.0296 | null | null |
Compressive Sensing via Convolutional Factor Analysis | stat.ML cs.LG | We solve the compressive sensing problem via convolutional factor analysis,
where the convolutional dictionaries are learned {\em in situ} from the
compressed measurements. An alternating direction method of multipliers (ADMM)
paradigm for compressive sensing inversion based on convolutional factor
analysis is developed. The proposed algorithm provides reconstructed images as
well as features, which can be directly used for recognition ($e.g.$,
classification) tasks. When a deep (multilayer) model is constructed, a
stochastic unpooling process is employed to build a generative model. During
reconstruction and testing, we project the upper layer dictionary to the data
level and only a single layer deconvolution is required. We demonstrate that
using $\sim30\%$ (relative to pixel numbers) compressed measurements, the
proposed model achieves the classification accuracy comparable to the original
data on MNIST. We also observe that when the compressed measurements are very
limited ($e.g.$, $<10\%$), the upper layer dictionary can provide better
reconstruction results than the bottom layer.
| Xin Yuan, Yunchen Pu, Lawrence Carin | null | 1701.03006 | null | null |
A General and Adaptive Robust Loss Function | cs.CV cs.LG stat.ML | We present a generalization of the Cauchy/Lorentzian, Geman-McClure,
Welsch/Leclerc, generalized Charbonnier, Charbonnier/pseudo-Huber/L1-L2, and L2
loss functions. By introducing robustness as a continuous parameter, our loss
function allows algorithms built around robust loss minimization to be
generalized, which improves performance on basic vision tasks such as
registration and clustering. Interpreting our loss as the negative log of a
univariate density yields a general probability distribution that includes
normal and Cauchy distributions as special cases. This probabilistic
interpretation enables the training of neural networks in which the robustness
of the loss automatically adapts itself during training, which improves
performance on learning-based tasks such as generative image synthesis and
unsupervised monocular depth estimation, without requiring any manual parameter
tuning.
| Jonathan T. Barron | null | 1701.03077 | null | null |
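The generalized loss described above has a compact closed form, sketched below with the usual shape parameter alpha and scale c, including the L2 (alpha = 2), Cauchy/Lorentzian (alpha = 0) and Welsch/Leclerc (alpha -> -infinity) special cases. Parameter names and the special-case handling follow the commonly cited form of this loss and are not copied from the paper's reference implementation.

```python
import numpy as np

def general_robust_loss(x, alpha, c=1.0):
    """rho(x, alpha, c): robustness varies continuously with alpha."""
    z = (x / c) ** 2
    if alpha == 2.0:
        return 0.5 * z                          # ordinary L2
    if alpha == 0.0:
        return np.log1p(0.5 * z)                # Cauchy / Lorentzian
    if np.isneginf(alpha):
        return 1.0 - np.exp(-0.5 * z)           # Welsch / Leclerc
    b = abs(alpha - 2.0)
    return (b / alpha) * ((z / b + 1.0) ** (alpha / 2.0) - 1.0)

# alpha = 1 gives a smoothed L1 (Charbonnier / pseudo-Huber) behaviour; making
# alpha a learnable parameter lets robustness adapt during training.
print(general_robust_loss(np.linspace(-3, 3, 7), alpha=1.0))
```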
Linear Disentangled Representation Learning for Facial Actions | cs.CV cs.AI cs.LG stat.ML | Limited annotated data available for the recognition of facial expression and
action units hampers the training of deep networks, which can learn
disentangled invariant features. However, a linear model with just several
parameters normally is not demanding in terms of training data. In this paper,
we propose an elegant linear model to untangle confounding factors in
challenging realistic multichannel signals such as 2D face videos. The simple
yet powerful model does not rely on huge training data and is natural for
recognizing facial actions without explicitly disentangling the identity. Based
on well-understood intuitive linear models such as Sparse Representation based
Classification (SRC), previous attempts require a preprocessing step of explicit
decoupling which is practically inexact. Instead, we exploit the low-rank
property across frames to subtract the underlying neutral faces which are
modeled jointly with sparse representation on the action components with group
sparsity enforced. On the extended Cohn-Kanade dataset (CK+), our one-shot
automatic method on raw face videos performs as competitive as SRC applied on
manually prepared action components and performs even better than SRC in terms
of true positive rate. We apply the model to the even more challenging task of
facial action unit recognition, verified on the MPI Face Video Database
(MPI-VDB) achieving a decent performance. All the programs and data have been
made publicly available.
| Xiang Xiang, Trac D. Tran | null | 1701.03102 | null | null |
Real-time eSports Match Result Prediction | stat.AP cs.AI cs.LG | In this paper, we try to predict the winning team of a match in the
multiplayer eSports game Dota 2. To address the weaknesses of previous work, we
consider more aspects of prior (pre-match) features from individual players'
match history, as well as real-time (during-match) features at each minute as
the match progresses. We use logistic regression, the proposed Attribute
Sequence Model, and their combinations as the prediction models. In a dataset
of 78362 matches where 20631 matches contain replay data, our experiments show
that adding more aspects of prior features improves accuracy from 58.69% to
71.49%, and introducing real-time features achieves up to 93.73% accuracy when
predicting at the 40th minute.
| Yifan Yang and Tian Qin and Yu-Heng Lei | null | 1701.03162 | null | null |
Unsupervised Latent Behavior Manifold Learning from Acoustic Features:
audio2behavior | cs.LG cs.SD | Behavioral annotation using signal processing and machine learning is highly
dependent on training data and manual annotations of behavioral labels.
Previous studies have shown that speech information encodes significant
behavioral information and can be used in a variety of automated behavior
recognition tasks. However, extracting behavior information from speech is
still a difficult task due to the sparseness of training data coupled with the
complex, high-dimensionality of speech, and the complex and multiple
information streams it encodes. In this work we exploit the slow varying
properties of human behavior. We hypothesize that nearby segments of speech
share the same behavioral context and hence share a similar underlying
representation in a latent space. Specifically, we propose a Deep Neural
Network (DNN) model to connect behavioral context and derive the behavioral
manifold in an unsupervised manner. We evaluate the proposed manifold in the
couples therapy domain and also provide examples from publicly available data
(e.g. stand-up comedy). We further investigate training within the couples'
therapy domain and from movie data. The results are extremely encouraging and
promise improved behavioral quantification in an unsupervised manner and
warrant further investigation in a range of applications.
| Haoqi Li, Brian Baucom, Panayiotis Georgiou | null | 1701.03198 | null | null |
Sparse-TDA: Sparse Realization of Topological Data Analysis for
Multi-Way Classification | stat.ML cs.LG | Topological data analysis (TDA) has emerged as one of the most promising
techniques to reconstruct the unknown shapes of high-dimensional spaces from
observed data samples. TDA, thus, yields key shape descriptors in the form of
persistent topological features that can be used for any supervised or
unsupervised learning task, including multi-way classification. Sparse
sampling, on the other hand, provides a highly efficient technique to
reconstruct signals in the spatial-temporal domain from just a few
carefully-chosen samples. Here, we present a new method, referred to as the
Sparse-TDA algorithm, that combines favorable aspects of the two techniques.
This combination is realized by selecting an optimal set of sparse pixel
samples from the persistent features generated by a vector-based TDA algorithm.
These sparse samples are selected from a low-rank matrix representation of
persistent features using QR pivoting. We show that the Sparse-TDA method
demonstrates promising performance on three benchmark problems related to human
posture recognition and image texture classification.
| Wei Guo, Krithika Manohar, Steven L. Brunton and Ashis G. Banerjee | null | 1701.03212 | null | null |
Prior matters: simple and general methods for evaluating and improving
topic quality in topic modeling | cs.CL cs.IR cs.LG | Latent Dirichlet Allocation (LDA) models trained without stopword removal
often produce topics with high posterior probabilities on uninformative words,
obscuring the underlying corpus content. Even when canonical stopwords are
manually removed, uninformative words common in that corpus will still dominate
the most probable words in a topic. In this work, we first show how the
standard topic quality measures of coherence and pointwise mutual information
act counter-intuitively in the presence of common but irrelevant words, making
it difficult to even quantitatively identify situations in which topics may be
dominated by stopwords. We propose an additional topic quality metric that
targets the stopword problem, and show that it, unlike the standard measures,
correctly correlates with human judgements of quality. We also propose a
simple-to-implement strategy for generating topics that are evaluated to be of
much higher quality by both human assessment and our new metric. This approach,
a collection of informative priors easily introduced into most LDA-style
inference methods, automatically promotes terms with domain relevance and
demotes domain-specific stop words. We demonstrate this approach's
effectiveness in three very different domains: Department of Labor accident
reports, online health forum posts, and NIPS abstracts. Overall we find that
current practices thought to solve this problem do not do so adequately, and
that our proposal offers a substantial improvement for those interested in
interpreting their topics as objects in their own right.
| Angela Fan, Finale Doshi-Velez, Luke Miratrix | null | 1701.03227 | null | null |
Modularized Morphing of Neural Networks | cs.LG cs.NE | In this work we study the problem of network morphism, an effective learning
scheme to morph a well-trained neural network to a new one with the network
function completely preserved. Different from existing work where basic
morphing types on the layer level were addressed, we target the central
problem of network morphism at a higher level, i.e., how a convolutional layer
can be morphed into an arbitrary module of a neural network. To simplify the
representation of a network, we abstract a module as a graph with blobs as
vertices and convolutional layers as edges, based on which the morphing process
is able to be formulated as a graph transformation problem. Two atomic morphing
operations are introduced to compose the graphs, based on which modules are
classified into two families, i.e., simple morphable modules and complex
modules. We present practical morphing solutions for both of these two
families, and prove that any reasonable module can be morphed from a single
convolutional layer. Extensive experiments have been conducted based on the
state-of-the-art ResNet on benchmark datasets, and the effectiveness of the
proposed solution has been verified.
| Tao Wei, Changhu Wang, Chang Wen Chen | null | 1701.03281 | null | null |
Residual LSTM: Design of a Deep Recurrent Architecture for Distant
Speech Recognition | cs.LG cs.AI cs.SD | In this paper, a novel architecture for a deep recurrent neural network,
residual LSTM is introduced. A plain LSTM has an internal memory cell that can
learn long term dependencies of sequential data. It also provides a temporal
shortcut path to avoid vanishing or exploding gradients in the temporal domain.
The residual LSTM provides an additional spatial shortcut path from lower
layers for efficient training of deep networks with multiple LSTM layers.
Compared with the previous work, highway LSTM, residual LSTM separates the
spatial shortcut path from the temporal one by using output layers, which can help
to avoid a conflict between spatial and temporal-domain gradient flows.
Furthermore, residual LSTM reuses the output projection matrix and the output
gate of LSTM to control the spatial information flow instead of additional gate
networks, which effectively reduces more than 10% of network parameters. An
experiment for distant speech recognition on the AMI SDM corpus shows that
10-layer plain and highway LSTM networks presented 13.7% and 6.2% increase in
WER over 3-layer baselines, respectively. In contrast, 10-layer residual
LSTM networks provided the lowest WER 41.0%, which corresponds to 3.3% and 2.8%
WER reduction over plain and highway LSTM networks, respectively.
| Jaeyoung Kim, Mostafa El-Khamy, and Jungwon Lee | null | 1701.0336 | null | null |
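A rough PyTorch sketch of a layer-wise (spatial) shortcut across stacked LSTMs is given below. It only captures the residual connection between layers; the paper's specific reuse of the output projection matrix and output gate is not reproduced, so the placement of the projection and of the shortcut addition are assumptions, not the exact architecture.

```python
import torch
import torch.nn as nn

class ResidualLSTMStack(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.LSTM(input_size if i == 0 else hidden_size,
                     hidden_size, batch_first=True) for i in range(num_layers)]
        )
        self.proj = nn.ModuleList(
            [nn.Linear(hidden_size, hidden_size, bias=False)
             for _ in range(num_layers)]
        )

    def forward(self, x):
        h = x
        for lstm, proj in zip(self.layers, self.proj):
            out, _ = lstm(h)
            out = proj(out)
            # Spatial shortcut: add the lower layer's output when shapes match,
            # giving gradients a short path across depth.
            h = out + h if h.shape[-1] == out.shape[-1] else out
        return h

model = ResidualLSTMStack(input_size=40, hidden_size=256, num_layers=10)
y = model(torch.randn(8, 100, 40))   # (batch, time, features)
```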
Scaling Binarized Neural Networks on Reconfigurable Logic | cs.CV cs.LG | Binarized neural networks (BNNs) are gaining interest in the deep learning
community due to their significantly lower computational and memory cost. They
are particularly well suited to reconfigurable logic devices, which contain an
abundance of fine-grained compute resources and can result in smaller, lower
power implementations, or conversely in higher classification rates. Towards
this end, the Finn framework was recently proposed for building fast and
flexible field programmable gate array (FPGA) accelerators for BNNs. Finn
utilized a novel set of optimizations that enable efficient mapping of BNNs to
hardware and implemented fully connected, non-padded convolutional and pooling
layers, with per-layer compute resources being tailored to user-provided
throughput requirements. However, FINN was not evaluated on larger topologies
due to the size of the chosen FPGA, and exhibited decreased accuracy due to
lack of padding. In this paper, we improve upon Finn to show how padding can be
employed on BNNs while still maintaining a 1-bit datapath and high accuracy.
Based on this technique, we demonstrate numerous experiments to illustrate
flexibility and scalability of the approach. In particular, we show that a
large BNN requiring 1.2 billion operations per frame running on an ADM-PCIE-8K5
platform can classify images at 12 kFPS with 671 us latency while drawing less
than 41 W board power and classifying CIFAR-10 images at 88.7% accuracy. Our
implementation of this network achieves 14.8 trillion operations per second. We
believe this is the fastest classification rate reported to date on this
benchmark at this level of accuracy.
| Nicholas J. Fraser, Yaman Umuroglu, Giulio Gambardella, Michaela
Blott, Philip Leong, Magnus Jahre and Kees Vissers | null | 1701.034 | null | null |
Manifold Alignment Determination: finding correspondences across
different data views | stat.ML cs.LG math.PR | We present Manifold Alignment Determination (MAD), an algorithm for learning
alignments between data points from multiple views or modalities. The approach
is capable of learning correspondences between views as well as correspondences
between individual data-points. The proposed method requires only a few aligned
examples from which it is able to recover a global alignment through a
probabilistic model. The strong, yet flexible regularization provided by the
generative model is sufficient to align the views. We provide experiments on
both synthetic and real data to highlight the benefit of the proposed approach.
| Andreas Damianou, Neil D. Lawrence and Carl Henrik Ek | null | 1701.03449 | null | null |
An Asynchronous Parallel Approach to Sparse Recovery | cs.LG cs.DC | Asynchronous parallel computing and sparse recovery are two areas that have
received recent interest. Asynchronous algorithms are often studied to solve
optimization problems where the cost function takes the form $\sum_{i=1}^M
f_i(x)$, with a common assumption that each $f_i$ is sparse; that is, each
$f_i$ acts only on a small number of components of $x\in\mathbb{R}^n$. Sparse
recovery problems, such as compressed sensing, can be formulated as
optimization problems, however, the cost functions $f_i$ are dense with respect
to the components of $x$, and instead the signal $x$ is assumed to be sparse,
meaning that it has only $s$ non-zeros where $s\ll n$. Here we address how one
may use an asynchronous parallel architecture when the cost functions $f_i$ are
not sparse in $x$, but rather the signal $x$ is sparse. We propose an
asynchronous parallel approach to sparse recovery via a stochastic greedy
algorithm, where multiple processors asynchronously update a vector in shared
memory containing information on the estimated signal support. We include
numerical simulations that illustrate the potential benefits of our proposed
asynchronous method.
| Deanna Needell, Tina Woolf | null | 1701.03458 | null | null |
Perishability of Data: Dynamic Pricing under Varying-Coefficient Models | cs.GT cs.LG stat.ML | We consider a firm that sells a large number of products to its customers in
an online fashion. Each product is described by a high dimensional feature
vector, and the market value of a product is assumed to be linear in the values
of its features. Parameters of the valuation model are unknown and can change
over time. The firm sequentially observes a product's features and can use the
historical sales data (binary sale/no sale feedbacks) to set the price of
current product, with the objective of maximizing the collected revenue. We
measure the performance of a dynamic pricing policy via regret, which is the
expected revenue loss compared to a clairvoyant that knows the sequence of
model parameters in advance.
We propose a pricing policy based on projected stochastic gradient descent
(PSGD) and characterize its regret in terms of time $T$, features dimension
$d$, and the temporal variability in the model parameters, $\delta_t$. We
consider two settings. In the first one, feature vectors are chosen
antagonistically by nature and we prove that the regret of PSGD pricing policy
is of order $O(\sqrt{T} + \sum_{t=1}^T \sqrt{t}\delta_t)$. In the second
setting (referred to as stochastic features model), the feature vectors are
drawn independently from an unknown distribution. We show that in this case,
the regret of PSGD pricing policy is of order $O(d^2 \log T + \sum_{t=1}^T
t\delta_t/d)$.
| Adel Javanmard | null | 1701.03537 | null | null |
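A hedged sketch of a projected stochastic gradient pricing loop is given below: the posted price is derived from the current parameter estimate, the estimate is updated from the binary sale feedback, and then projected onto a ball. The logistic choice model, the simple markup pricing rule, and all constants are illustrative assumptions rather than the paper's exact policy.

```python
import numpy as np

def psgd_pricing(features, true_theta, eta=0.1, radius=1.0, seed=0):
    """features: (T, d) product feature vectors; true_theta: unknown valuation model."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(features.shape[1])
    revenue = 0.0
    for x in features:
        value_hat = theta @ x
        price = max(value_hat, 0.0) + 0.1            # simple markup rule (assumption)
        # Buyer purchases if the (logistically noisy) valuation exceeds the price.
        buy = float(true_theta @ x + rng.logistic() > price)
        revenue += buy * price
        # Negative log-likelihood gradient under the assumed logistic sale model.
        p_buy = 1.0 / (1.0 + np.exp(price - theta @ x))
        theta -= eta * (p_buy - buy) * x
        # Projection onto an l2 ball of known radius.
        norm = np.linalg.norm(theta)
        if norm > radius:
            theta *= radius / norm
    return theta, revenue

X = np.random.randn(2000, 5) / np.sqrt(5)
theta_hat, rev = psgd_pricing(X, true_theta=np.array([0.5, -0.2, 0.3, 0.1, 0.4]))
print(theta_hat, rev)
```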
Kernel Approximation Methods for Speech Recognition | stat.ML cs.AI cs.CL cs.LG | We study large-scale kernel methods for acoustic modeling in speech
recognition and compare their performance to deep neural networks (DNNs). We
perform experiments on four speech recognition datasets, including the TIMIT
and Broadcast News benchmark tasks, and compare these two types of models on
frame-level performance metrics (accuracy, cross-entropy), as well as on
recognition metrics (word/character error rate). In order to scale kernel
methods to these large datasets, we use the random Fourier feature method of
Rahimi and Recht (2007). We propose two novel techniques for improving the
performance of kernel acoustic models. First, in order to reduce the number of
random features required by kernel models, we propose a simple but effective
method for feature selection. The method is able to explore a large number of
non-linear features while maintaining a compact model more efficiently than
existing approaches. Second, we present a number of frame-level metrics which
correlate very strongly with recognition performance when computed on the
heldout set; we take advantage of these correlations by monitoring these
metrics during training in order to decide when to stop learning. This
technique can noticeably improve the recognition performance of both DNN and
kernel models, while narrowing the gap between them. Additionally, we show that
the linear bottleneck method of Sainath et al. (2013) improves the performance
of our kernel models significantly, in addition to speeding up training and
making the models more compact. Together, these three methods dramatically
improve the performance of kernel acoustic models, making their performance
comparable to DNNs on the tasks we explored.
| Avner May, Alireza Bagheri Garakani, Zhiyun Lu, Dong Guo, Kuan Liu,
Aur\'elien Bellet, Linxi Fan, Michael Collins, Daniel Hsu, Brian Kingsbury,
Michael Picheny, Fei Sha | null | 1701.03577 | null | null |
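The scaling device referenced above, random Fourier features (Rahimi and Recht, 2007), can be sketched in a few lines for an RBF kernel; the feature dimension, bandwidth, and choice of downstream linear model are illustrative.

```python
import numpy as np

def rff_map(X, D=2000, gamma=1.0, seed=0):
    """Map X (n x d) to D random features approximating an RBF kernel."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# z(x)^T z(y) ~= exp(-gamma * ||x - y||^2), so a linear (e.g. softmax) model on
# z(.) approximates a kernel model at a fraction of the cost.
X = np.random.randn(1000, 40)
Z = rff_map(X)
print(Z.shape)
```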
Diffusion-based nonlinear filtering for multimodal data fusion with
application to sleep stage assessment | stat.ML cs.LG physics.data-an | The problem of information fusion from multiple data-sets acquired by
multimodal sensors has drawn significant research attention over the years. In
this paper, we focus on a particular problem setting consisting of a physical
phenomenon or a system of interest observed by multiple sensors. We assume that
all sensors measure some aspects of the system of interest with additional
sensor-specific and irrelevant components. Our goal is to recover the variables
relevant to the observed system and to filter out the nuisance effects of the
sensor-specific variables. We propose an approach based on manifold learning,
which is particularly suitable for problems with multiple modalities, since it
aims to capture the intrinsic structure of the data and relies on minimal prior
model knowledge. Specifically, we propose a nonlinear filtering scheme, which
extracts the hidden sources of variability captured by two or more sensors,
that are independent of the sensor-specific components. In addition to
presenting a theoretical analysis, we demonstrate our technique on real
measured data for the purpose of sleep stage assessment based on multiple,
multimodal sensor measurements. We show that without prior knowledge on the
different modalities and on the measured system, our method gives rise to a
data-driven representation that is well correlated with the underlying sleep
process and is robust to noise and sensor-specific effects.
| Ori Katz, Ronen Talmon, Yu-Lun Lo and Hau-Tieng Wu | null | 1701.03619 | null | null |
A dissimilarity-based approach to predictive maintenance with
application to HVAC systems | cs.LG | The goal of predictive maintenance is to forecast the occurrence of faults of
an appliance, in order to proactively take the necessary actions to ensure its
availability. In many application scenarios, predictive maintenance is applied
to a set of homogeneous appliances. In this paper, we firstly review taxonomies
and main methodologies currently used for condition-based maintenance;
secondly, we argue that the mutual dissimilarities of the behaviours of all
appliances of this set (the "cohort") can be exploited to detect upcoming
faults. Specifically, inspired by dissimilarity-based representations, we
propose a novel machine learning approach based on the analysis of concurrent
mutual differences of the measurements coming from the cohort. We evaluate our
method over one year of historical data from a cohort of 17 HVAC (Heating,
Ventilation and Air Conditioning) systems installed in an Italian hospital. We
show that certain kinds of faults can be foreseen with an accuracy, measured in
terms of area under the ROC curve, as high as 0.96.
| Riccardo Satta, Stefano Cavallari, Eraldo Pomponi, Daniele Grasselli,
Davide Picheo, Carlo Annis | null | 1701.03633 | null | null |
Symbolic Regression Algorithms with Built-in Linear Regression | cs.LG | Recently, several algorithms for symbolic regression (SR) emerged which
employ a form of multiple linear regression (LR) to produce generalized linear
models. The use of LR allows the algorithms to create models with relatively
small error right from the beginning of the search; such algorithms are thus
claimed to be (sometimes by orders of magnitude) faster than SR algorithms
based on vanilla genetic programming. However, a systematic comparison of these
algorithms on a common set of problems is still missing. In this paper we
conceptually and experimentally compare several representatives of such
algorithms (GPTIPS, FFX, and EFS). They are applied as off-the-shelf,
ready-to-use techniques, mostly using their default settings. The methods are
compared on several synthetic and real-world SR benchmark problems. Their
performance is also related to the performance of three conventional machine
learning algorithms --- multiple regression, random forests and support vector
regression.
| Jan \v{Z}egklitz, Petr Po\v{s}\'ik | null | 1701.03641 | null | null |
Restricted Boltzmann Machines with Gaussian Visible Units Guided by
Pairwise Constraints | cs.LG | Restricted Boltzmann machines (RBMs) and their variants are usually trained
by contrastive divergence (CD) learning, but the training procedure is an
unsupervised learning approach, without any guidance from background
knowledge. To enhance the expressive ability of traditional RBMs, in this
paper, we propose a pairwise constraints restricted Boltzmann machine with
Gaussian visible units (pcGRBM) model, in which the learning procedure is
guided by pairwise constraints and the process of encoding is conducted under
this guidance. The pairwise constraints are encoded in hidden layer features
of pcGRBM. Then, some pairwise hidden features of pcGRBM are drawn together and
others are separated by the guidance. In order to deal with
real-valued data, the binary visible units are replaced by linear units with
Gaussian noise in the pcGRBM model. In the learning process of pcGRBM, the
pairwise constraints are iterated through transitions between visible and hidden units
during the CD learning procedure. Then, the proposed model is inferred by an
approximate gradient descent method and the corresponding learning algorithm
is designed in this paper. In order to compare the capability of pcGRBM and
traditional RBMs with Gaussian visible units, the features of the pcGRBM and
RBMs hidden layer are used as input 'data' for K-means, spectral clustering
(SP) and affinity propagation (AP) algorithms, respectively. A thorough
experimental evaluation is performed with sixteen image datasets of Microsoft
Research Asia Multimedia (MSRA-MM). The experimental results show that the
clustering performance of K-means, SP and AP algorithms based on pcGRBM model
are significantly better than traditional RBMs. In addition, the pcGRBM model
for clustering task shows better performance than some semi-supervised
clustering algorithms.
| Jielei Chu, Hongjun Wang, Hua Meng, Peng Jin and Tianrui Li (Senior
member, IEEE) | null | 1701.03647 | null | null |
Dictionary Learning from Incomplete Data | cs.LG stat.ML | This paper extends the recently proposed and theoretically justified
iterative thresholding and $K$ residual means algorithm ITKrM to learning
dictionaries from incomplete/masked training data (ITKrMM). It further adapts
the algorithm to the presence of a low rank component in the data and provides
a strategy for recovering this low rank component again from incomplete data.
Several synthetic experiments show the advantages of incorporating information
about the corruption into the algorithm. Finally, image inpainting is
considered as application example, which demonstrates the superior performance
of ITKrMM in terms of speed at similar or better reconstruction quality
compared to its closest dictionary learning counterpart.
| Valeriya Naumova and Karin Schnass | 10.1186/s13634-018-0533-0 | 1701.03655 | null | null |
Truncation-free Hybrid Inference for DPMM | cs.LG stat.ML | Dirichlet process mixture models (DPMM) are a cornerstone of Bayesian
non-parametrics. While these models are free from choosing the number of components
a priori, computationally attractive variational inference often reintroduces
the need to do so, via a truncation on the variational distribution. In this
paper we present a truncation-free hybrid inference for DPMM, combining the
advantages of sampling-based MCMC and variational methods. The proposed
hybridization enables more efficient variational updates, while increasing
model complexity only if needed. We evaluate the properties of the hybrid
updates and their empirical performance in single- as well as mixed-membership
models. Our method is easy to implement and performs favorably compared to
existing schemas.
| Arnim Bleier | null | 1701.03743 | null | null |
Deep Probabilistic Programming | stat.ML cs.AI cs.LG cs.PL stat.CO | We propose Edward, a Turing-complete probabilistic programming language.
Edward defines two compositional representations---random variables and
inference. By treating inference as a first class citizen, on a par with
modeling, we show that probabilistic programming can be as flexible and
computationally efficient as traditional deep learning. For flexibility, Edward
makes it easy to fit the same model using a variety of composable inference
methods, ranging from point estimation to variational inference to MCMC. In
addition, Edward can reuse the modeling representation as part of inference,
facilitating the design of rich variational models and generative adversarial
networks. For efficiency, Edward is integrated into TensorFlow, providing
significant speedups over existing probabilistic systems. For example, we show
on a benchmark logistic regression task that Edward is at least 35x faster than
Stan and 6x faster than PyMC3. Further, Edward incurs no runtime overhead: it
is as fast as handwritten TensorFlow.
| Dustin Tran, Matthew D. Hoffman, Rif A. Saurous, Eugene Brevdo, Kevin
Murphy, David M. Blei | null | 1701.03757 | null | null |
Long Timescale Credit Assignment in Neural Networks with External Memory | cs.AI cs.LG cs.NE | Credit assignment in traditional recurrent neural networks usually involves
back-propagating through a long chain of tied weight matrices. The length of
this chain scales linearly with the number of time-steps as the same network is
run at each time-step. This creates many problems, such as vanishing gradients,
that have been well studied. In contrast, an NNEM architecture's recurrent
activity doesn't involve a long chain of activity (though some architectures
such as the NTM do utilize a traditional recurrent architecture as a
controller). Rather, the externally stored embedding vectors are used at each
time-step, but no messages are passed from previous time-steps. This means that
vanishing gradients aren't a problem, as all of the necessary gradient paths
are short. However, these paths are extremely numerous (one per embedding
vector in memory) and reused for a very long time (until it leaves the memory).
Thus, the forward-pass information of each memory must be stored for the entire
duration of the memory. This is problematic as this additional storage far
surpasses that of the actual memories, to the extent that large memories on
infeasible to back-propagate through in high dimensional settings. One way to
get around the need to hold onto forward-pass information is to recalculate the
forward-pass whenever gradient information is available. However, if the
observations are too large to store in the domain of interest, direct
reinstatement of a forward pass cannot occur. Instead, we rely on a learned
autoencoder to reinstate the observation, and then use the embedding network to
recalculate the forward-pass. Since the recalculated embedding vector is
unlikely to perfectly match the one stored in memory, we try out 2
approximations to utilize error gradient w.r.t. the vector in memory.
| Steven Stenberg Hansen | null | 1701.03866 | null | null |
Learning to Invert: Signal Recovery via Deep Convolutional Networks | stat.ML cs.AI cs.IT cs.LG math.IT | The promise of compressive sensing (CS) has been offset by two significant
challenges. First, real-world data is not exactly sparse in a fixed basis.
Second, current high-performance recovery algorithms are slow to converge,
which limits CS to either non-real-time applications or scenarios where massive
back-end computing is available. In this paper, we attack both of these
challenges head-on by developing a new signal recovery framework we call {\em
DeepInverse} that learns the inverse transformation from measurement vectors to
signals using a {\em deep convolutional network}. When trained on a set of
representative images, the network learns both a representation for the signals
(addressing challenge one) and an inverse map approximating a greedy or convex
recovery algorithm (addressing challenge two). Our experiments indicate that
the DeepInverse network closely approximates the solution produced by
state-of-the-art CS recovery algorithms yet is hundreds of times faster in run
time. The tradeoff for the ultrafast run time is a computationally intensive,
off-line training procedure typical to deep networks. However, the training
needs to be completed only once, which makes the approach attractive for a host
of sparse recovery problems.
| Ali Mousavi, Richard G. Baraniuk | null | 1701.03891 | null | null |
On H\"older projective divergences | cs.LG cs.CV cs.IT math.IT | We describe a framework to build distances by measuring the tightness of
inequalities, and introduce the notion of proper statistical divergences and
improper pseudo-divergences. We then consider the H\"older ordinary and reverse
inequalities, and present two novel classes of H\"older divergences and
pseudo-divergences that both encapsulate the special case of the Cauchy-Schwarz
divergence. We report closed-form formulas for those statistical
dissimilarities when considering distributions belonging to the same
exponential family provided that the natural parameter space is a cone (e.g.,
multivariate Gaussians), or affine (e.g., categorical distributions). Those new
classes of H\"older distances are invariant to rescaling, and thus do not
require distributions to be normalized. Finally, we show how to compute
statistical H\"older centroids with respect to those divergences, and carry out
center-based clustering toy experiments on a set of Gaussian distributions that
demonstrate empirically that symmetrized H\"older divergences outperform the
symmetric Cauchy-Schwarz divergence.
| Frank Nielsen and Ke Sun and St\'ephane Marchand-Maillet | 10.3390/e19030122 | 1701.03916 | null | null |
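As a concrete special case of the divergences above, the Cauchy-Schwarz divergence can be estimated numerically on a grid; the sketch below does so for two 1-D densities. The closed-form exponential-family expressions derived in the paper are not reproduced here.

```python
import numpy as np

def cauchy_schwarz_divergence(p, q, dx):
    """D_CS(p, q) = -log( <p, q> / (||p||_2 ||q||_2) ), grid approximation."""
    inner = np.sum(p * q) * dx
    norm_p = np.sqrt(np.sum(p * p) * dx)
    norm_q = np.sqrt(np.sum(q * q) * dx)
    return -np.log(inner / (norm_p * norm_q))

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
gauss = lambda mu, s: np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))
print(cauchy_schwarz_divergence(gauss(0, 1), gauss(0, 1), dx))   # ~0
print(cauchy_schwarz_divergence(gauss(0, 1), gauss(3, 1), dx))   # > 0
# Note the value is invariant to rescaling p or q, as highlighted above.
```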
Marked Temporal Dynamics Modeling based on Recurrent Neural Network | cs.LG stat.ML | We are now witnessing the increasing availability of event stream data, i.e.,
a sequence of events with each event typically being denoted by the time it
occurs and its mark information (e.g., event type). A fundamental problem is to
model and predict such kind of marked temporal dynamics, i.e., when the next
event will take place and what its mark will be. Existing methods either
predict only the mark or the time of the next event, or predict both of them,
yet separately. Indeed, in marked temporal dynamics, the time and the mark of
the next event are highly dependent on each other, requiring a method that
could simultaneously predict both of them. To tackle this problem, in this
paper, we propose to model marked temporal dynamics by using a mark-specific
intensity function to explicitly capture the dependency between the mark and
the time of the next event. Extensive experiments on two datasets demonstrate
that the proposed method outperforms state-of-the-art methods at predicting
marked temporal dynamics.
| Yongqing Wang, Shenghua Liu, Huawei Shen, Xueqi Cheng | null | 1701.03918 | null | null |
Scalable and Incremental Learning of Gaussian Mixture Models | cs.LG | This work presents a fast and scalable algorithm for incremental learning of
Gaussian mixture models. By performing rank-one updates on its precision
matrices and determinants, its asymptotic time complexity is of \BigO{NKD^2}
for $N$ data points, $K$ Gaussian components and $D$ dimensions. The resulting
algorithm can be applied to high dimensional tasks, and this is confirmed by
applying it to the classification datasets MNIST and CIFAR-10. Additionally, in
order to show the algorithm's applicability to function approximation and
control tasks, it is applied to three reinforcement learning tasks and its
data-efficiency is evaluated.
| Rafael Pinto, Paulo Engel | null | 1701.0394 | null | null |
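The rank-one bookkeeping that keeps the incremental update at $O(KD^2)$ per point can be illustrated with the Sherman-Morrison identity and the matrix determinant lemma. The sketch below shows only that building block; the responsibility-weighted recursion that produces the rank-one term in the actual algorithm is not reproduced here.

```python
import numpy as np

def rank_one_update(precision, log_det, u, scale):
    """Update (Sigma + scale*u u^T)^{-1} and log|Sigma + scale*u u^T|
    from the current precision and log-determinant via Sherman-Morrison."""
    Pu = precision @ u
    denom = 1.0 + scale * (u @ Pu)
    new_precision = precision - (scale / denom) * np.outer(Pu, Pu)
    new_log_det = log_det + np.log(denom)
    return new_precision, new_log_det

# Toy check against direct recomputation.
D = 5
Sigma = np.eye(D) * 2.0
P, ld = np.linalg.inv(Sigma), np.linalg.slogdet(Sigma)[1]
u = np.random.randn(D)
P2, ld2 = rank_one_update(P, ld, u, scale=0.1)
Sigma2 = Sigma + 0.1 * np.outer(u, u)
assert np.allclose(P2, np.linalg.inv(Sigma2))
assert np.isclose(ld2, np.linalg.slogdet(Sigma2)[1])
```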
Communication-Efficient Algorithms for Decentralized and Stochastic
Optimization | math.OC cs.LG | We present a new class of decentralized first-order methods for nonsmooth and
stochastic optimization problems defined over multiagent networks. Considering
that communication is a major bottleneck in decentralized optimization, our
main goal in this paper is to develop algorithmic frameworks which can
significantly reduce the number of inter-node communications. We first propose
a decentralized primal-dual method which can find an $\epsilon$-solution both
in terms of functional optimality gap and feasibility residual in
$O(1/\epsilon)$ inter-node communication rounds when the objective functions
are convex and the local primal subproblems are solved exactly. Our major
contribution is to present a new class of decentralized primal-dual type
algorithms, namely the decentralized communication sliding (DCS) methods, which
can skip the inter-node communications while agents solve the primal
subproblems iteratively through linearizations of their local objective
functions. By employing DCS, agents can still find an $\epsilon$-solution in
$O(1/\epsilon)$ (resp., $O(1/\sqrt{\epsilon})$) communication rounds for
general convex functions (resp., strongly convex functions), while maintaining
the $O(1/\epsilon^2)$ (resp., $O(1/\epsilon)$) bound on the total number of
intra-node subgradient evaluations. We also present a stochastic counterpart
for these algorithms, denoted by SDCS, for solving stochastic optimization
problems whose objective function cannot be evaluated exactly. In comparison
with existing results for decentralized nonsmooth and stochastic optimization,
we can reduce the total number of inter-node communication rounds by orders of
magnitude while still maintaining the optimal complexity bounds on intra-node
stochastic subgradient evaluations. The bounds on the subgradient evaluations
are actually comparable to those required for centralized nonsmooth and
stochastic optimization.
| Guanghui Lan, Soomin Lee, and Yi Zhou | null | 1701.03961 | null | null |
An Online Convex Optimization Approach to Dynamic Network Resource
Allocation | cs.SY cs.LG math.OC stat.ML | Existing approaches to online convex optimization (OCO) make sequential
one-slot-ahead decisions, which lead to (possibly adversarial) losses that
drive subsequent decision iterates. Their performance is evaluated by the
so-called regret that measures the difference of losses between the online
solution and the best yet fixed overall solution in hindsight. The present
paper deals with online convex optimization involving adversarial loss
functions and adversarial constraints, where the constraints are revealed after
making decisions, and can be tolerable to instantaneous violations but must be
satisfied in the long term. Performance of an online algorithm in this setting
is assessed by: i) the difference of its losses relative to the best dynamic
solution with one-slot-ahead information of the loss function and the
constraint (that is here termed dynamic regret); and, ii) the accumulated
amount of constraint violations (that is here termed dynamic fit). In this
context, a modified online saddle-point (MOSP) scheme is developed, and proved
to simultaneously yield sub-linear dynamic regret and fit, provided that the
accumulated variations of per-slot minimizers and constraints are sub-linearly
growing with time. MOSP is also applied to the dynamic network resource
allocation task, and it is compared with the well-known stochastic dual
gradient method. Under various scenarios, numerical experiments demonstrate the
performance gain of MOSP relative to the state-of-the-art.
| Tianyi Chen, Qing Ling, Georgios B. Giannakis | 10.1109/TSP.2017.2750109 | 1701.03974 | null | null |
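The primal-dual structure behind the modified online saddle-point scheme can be illustrated with a generic online Lagrangian update; the sketch below is only a simplified stand-in (fixed step sizes, a single scalar constraint per slot, no MOSP-specific modifications) to show how primal descent and dual ascent interleave as losses and constraints are revealed.

```python
import numpy as np

def online_saddle_point(loss_grads, cons, cons_grads, x0, alpha=0.05, mu=0.05,
                        proj=lambda x: np.clip(x, 0.0, 1.0)):
    """loss_grads[t](x): gradient of f_t; cons[t](x): constraint g_t(x) <= 0;
    cons_grads[t](x): gradient of g_t. Returns the trajectory of decisions."""
    x, lam = np.array(x0, dtype=float), 0.0
    traj = []
    for gf, g, gg in zip(loss_grads, cons, cons_grads):
        # Primal descent on the instantaneous Lagrangian, then projection.
        x = proj(x - alpha * (gf(x) + lam * gg(x)))
        # Dual ascent on the just-revealed constraint violation.
        lam = max(0.0, lam + mu * g(x))
        traj.append(x.copy())
    return np.array(traj)
```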
Breeding electric zebras in the fields of Medicine | cs.LG | A few notes on the use of machine learning in medicine and the related
unintended consequences.
| Federico Cabitza | null | 1701.04077 | null | null |
Agent-Agnostic Human-in-the-Loop Reinforcement Learning | cs.LG cs.AI | Providing Reinforcement Learning agents with expert advice can dramatically
improve various aspects of learning. Prior work has developed teaching
protocols that enable agents to learn efficiently in complex environments; many
of these methods tailor the teacher's guidance to agents with a particular
representation or underlying learning scheme, offering effective but
specialized teaching procedures. In this work, we explore protocol programs, an
agent-agnostic schema for Human-in-the-Loop Reinforcement Learning. Our goal is
to incorporate the beneficial properties of a human teacher into Reinforcement
Learning without making strong assumptions about the inner workings of the
agent. We show how to represent existing approaches such as action pruning,
reward shaping, and training in simulation as special cases of our schema and
conduct preliminary experiments on simple domains.
| David Abel, John Salvatier, Andreas Stuhlm\"uller, Owain Evans | null | 1701.04079 | null | null |
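
Two of the teaching protocols named above, action pruning and reward shaping, can be written as wrappers around the environment loop that never look inside the learner. Below is a toy sketch with tabular Q-learning on a made-up chain task; the pruning rule and shaping potential are illustrative, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    n_states, n_actions, goal = 6, 2, 5            # tiny chain MDP: action 0 = left, 1 = right

    def step(s, a):
        s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        return s2, (1.0 if s2 == goal else 0.0), (s2 == goal)

    # Agent-agnostic teaching: both protocols act on actions and rewards, not on the learner.
    def prune(s, actions):                         # action pruning: forbid moving left at state 0
        return [a for a in actions if not (s == 0 and a == 0)]

    def shape(s, s2, r):                           # potential-based shaping bonus toward the goal
        phi = lambda state: -abs(goal - state)
        return r + 0.9 * phi(s2) - phi(s)

    Q = np.zeros((n_states, n_actions))
    for episode in range(200):
        s = 0
        for _ in range(50):
            allowed = prune(s, range(n_actions))
            a = rng.choice(allowed) if rng.random() < 0.2 else max(allowed, key=lambda a: Q[s, a])
            s2, r, done = step(s, a)
            r = shape(s, s2, r)
            Q[s, a] += 0.5 * (r + 0.9 * Q[s2].max() * (not done) - Q[s, a])
            s = s2
            if done:
                break

    print("greedy action per state:", Q.argmax(axis=1))
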
Field-aware Factorization Machines in a Real-world Online Advertising
System | cs.LG | Predicting user response is one of the core machine learning tasks in
computational advertising. Field-aware Factorization Machines (FFM) have
recently been established as a state-of-the-art method for that problem and in
particular won two Kaggle challenges. This paper presents some results from
implementing this method in a production system that predicts click-through and
conversion rates for display advertising and shows that this method it is not
only effective to win challenges but is also valuable in a real-world
prediction system. We also discuss some specific challenges and solutions to
reduce the training time, namely the use of an innovative seeding algorithm and
a distributed learning mechanism.
| Yuchin Juan, Damien Lefortier, Olivier Chapelle | null | 1701.04099 | null | null |
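
FFM scores each pair of active features with latent vectors that are specific to the other feature's field. A minimal numpy sketch of that prediction rule for one impression follows; the feature/field indexing and dimensions are invented for illustration and are unrelated to the production system discussed above.

    import numpy as np

    rng = np.random.default_rng(2)
    n_features, n_fields, k = 10, 3, 4                           # illustrative sizes
    W = rng.normal(scale=0.1, size=(n_features, n_fields, k))    # W[j, f] = latent vector of feature j for field f

    def ffm_score(active):
        """active: list of (field, feature, value) triples describing one impression."""
        score = 0.0
        for i in range(len(active)):
            for j in range(i + 1, len(active)):
                f1, j1, x1 = active[i]
                f2, j2, x2 = active[j]
                # Feature j1 uses its latent vector for field f2, and vice versa.
                score += np.dot(W[j1, f2], W[j2, f1]) * x1 * x2
        return score

    def predict_ctr(active):
        return 1.0 / (1.0 + np.exp(-ffm_score(active)))          # logistic link for click probability

    impression = [(0, 1, 1.0), (1, 4, 1.0), (2, 7, 1.0)]         # e.g., (publisher, advertiser, device)
    print("predicted CTR:", round(predict_ctr(impression), 4))
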
Near Optimal Behavior via Approximate State Abstraction | cs.LG cs.AI | The combinatorial explosion that plagues planning and reinforcement learning
(RL) algorithms can be moderated using state abstraction. Prohibitively large
task representations can be condensed such that essential information is
preserved, and consequently, solutions are tractably computable. However, exact
abstractions, which treat only fully-identical situations as equivalent, fail
to present opportunities for abstraction in environments where no two
situations are exactly alike. In this work, we investigate approximate state
abstractions, which treat nearly-identical situations as equivalent. We present
theoretical guarantees of the quality of behaviors derived from four types of
approximate abstractions. Additionally, we empirically demonstrate that
approximate abstractions lead to reduction in task complexity and bounded loss
of optimality of behavior in a variety of environments.
| David Abel, D. Ellis Hershkowitz, Michael L. Littman | null | 1701.04113 | null | null |
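
One way to realize an approximate abstraction of the kind studied above is to merge states whose optimal action values agree to within epsilon. Below is a small sketch of that grouping on a made-up Q* table; greedy merging is just one possible rule, not necessarily the one analyzed in the paper.

    import numpy as np

    # Illustrative optimal action-value table: 6 states x 2 actions.
    Q_star = np.array([[1.00, 0.20],
                       [0.98, 0.22],    # nearly identical to state 0
                       [0.50, 0.60],
                       [0.52, 0.58],    # nearly identical to state 2
                       [0.10, 0.90],
                       [0.75, 0.75]])
    epsilon = 0.05

    def approx_abstraction(Q, eps):
        """Greedily merge states whose Q-vectors differ by at most eps in every action."""
        clusters = []                              # each cluster is represented by its first member's Q-vector
        phi = np.empty(len(Q), dtype=int)          # phi[s] = abstract state of ground state s
        for s, q in enumerate(Q):
            for c, rep in enumerate(clusters):
                if np.max(np.abs(q - rep)) <= eps:
                    phi[s] = c
                    break
            else:
                clusters.append(q)
                phi[s] = len(clusters) - 1
        return phi

    phi = approx_abstraction(Q_star, epsilon)
    print("ground -> abstract mapping:", phi)      # states 0/1 and 2/3 collapse; the rest stay separate
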
Understanding the Effective Receptive Field in Deep Convolutional Neural
Networks | cs.CV cs.AI cs.LG | We study characteristics of receptive fields of units in deep convolutional
networks. The receptive field size is a crucial issue in many visual tasks, as
the output must respond to large enough areas in the image to capture
information about large objects. We introduce the notion of an effective
receptive field, and show that it both has a Gaussian distribution and only
occupies a fraction of the full theoretical receptive field. We analyze the
effective receptive field in several architecture designs, and the effect of
nonlinear activations, dropout, sub-sampling and skip connections on it. This
leads to suggestions for ways to address its tendency to be too small.
| Wenjie Luo and Yujia Li and Raquel Urtasun and Richard Zemel | null | 1701.04128 | null | null |
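
For a purely linear stack of convolutions, the gradient that a single central output unit sends back to the input is the repeated convolution of the layer kernels, which makes the effective receptive field easy to visualize. A one-dimensional numpy sketch of that effect follows; the uniform 3-tap kernels and 10 layers are arbitrary illustrative choices.

    import numpy as np

    layers, k = 10, 3
    kernel = np.ones(k) / k                      # uniform 3-tap kernel in every layer
    erf = np.array([1.0])                        # gradient injected at one central output unit

    # Backpropagating through a linear conv stack is just convolving the kernels together.
    for _ in range(layers):
        erf = np.convolve(erf, kernel, mode="full")

    theoretical_rf = len(erf)                    # theoretical receptive field: 1 + layers * (k - 1)
    sorted_mass = np.sort(erf)[::-1].cumsum()
    effective_rf = int((sorted_mass <= 0.5 * erf.sum()).sum()) + 1   # inputs holding half the gradient mass

    print("theoretical receptive field:", theoretical_rf, "inputs")
    print("inputs carrying half of the gradient mass:", effective_rf)
    print("gradient profile (normalized):", np.round(erf / erf.max(), 2))   # bell-shaped and concentrated
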
Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks | cs.LG cs.AI | Deep learning classifiers are known to be inherently vulnerable to
manipulation by intentionally perturbed inputs, named adversarial examples. In
this work, we establish that reinforcement learning techniques based on Deep
Q-Networks (DQNs) are also vulnerable to adversarial input perturbations, and
verify the transferability of adversarial examples across different DQN models.
Furthermore, we present a novel class of attacks based on this vulnerability
that enable policy manipulation and induction in the learning process of DQNs.
We propose an attack mechanism that exploits the transferability of adversarial
examples to implement policy induction attacks on DQNs, and demonstrate its
efficacy and impact through experimental study of a game-learning scenario.
| Vahid Behzadan and Arslan Munir | null | 1701.04143 | null | null |
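
Adversarial perturbations of the kind referred to above are usually built from the gradient of the network output with respect to its input. Below is a minimal FGSM-style sketch against a linear stand-in for a Q-network; the linear model, the chosen target action and the perturbation budget are illustrative, not the paper's attack construction.

    import numpy as np

    rng = np.random.default_rng(3)
    d, n_actions = 8, 4
    W = rng.normal(size=(n_actions, d))          # stand-in for a trained Q-network: Q(s) = W s
    s = rng.normal(size=d)                       # clean observation

    q = W @ s
    clean_action = int(q.argmax())               # action the unperturbed policy would take
    target_action = int(q.argmin())              # action the attacker would like to induce

    # FGSM-style step: move the input in the sign direction of the gradient of
    # Q[target] - Q[clean]; for a linear model this gradient is exact.
    grad = W[target_action] - W[clean_action]
    eps = 1.0                                    # per-dimension perturbation budget
    s_adv = s + eps * np.sign(grad)

    print("greedy action on clean state:    ", clean_action)
    print("greedy action on perturbed state:", int((W @ s_adv).argmax()))
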
Achieving Privacy in the Adversarial Multi-Armed Bandit | cs.LG cs.AI cs.CR | In this paper, we improve the previously best known regret bound to achieve
$\epsilon$-differential privacy in oblivious adversarial bandits from
$\mathcal{O}{(T^{2/3}/\epsilon)}$ to $\mathcal{O}{(\sqrt{T} \ln T /\epsilon)}$.
This is achieved by combining a Laplace Mechanism with EXP3. We show that
though EXP3 is already differentially private, it leaks a linear amount of
information in $T$. However, we can improve this privacy by relying on its
intrinsic exponential mechanism for selecting actions. This allows us to reach
$\mathcal{O}{(\sqrt{\ln T})}$-DP, with a regret of $\mathcal{O}{(T^{2/3})}$
that holds against an adaptive adversary, an improvement from the best known of
$\mathcal{O}{(T^{3/4})}$. This is done by using an algorithm that runs EXP3 in a
mini-batch loop. Finally, we run experiments that clearly demonstrate the
validity of our theoretical analysis.
| Aristide C. Y. Tossou and Christos Dimitrakakis | null | 1701.04222 | null | null |
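
The mechanism described above perturbs the feedback EXP3 consumes with Laplace noise. A compact numpy sketch of EXP3 with Laplace-perturbed reward estimates follows; the Bernoulli arms, horizon, noise scale and learning rate are illustrative, and this simplified variant omits details of the paper's exact parameterization and analysis.

    import numpy as np

    rng = np.random.default_rng(4)
    K, T, eps_dp = 5, 5000, 1.0
    means = np.array([0.3, 0.5, 0.45, 0.7, 0.2])       # hidden Bernoulli arms (obliviously fixed)
    eta = np.sqrt(np.log(K) / (K * T))                 # EXP3 learning rate
    weights = np.ones(K)
    total_reward = 0.0

    for t in range(T):
        p = weights / weights.sum()                    # exponential-weights distribution
        arm = rng.choice(K, p=p)
        reward = float(rng.random() < means[arm])
        total_reward += reward
        noisy = reward + rng.laplace(scale=1.0 / eps_dp)   # Laplace mechanism on the observed reward
        est = np.zeros(K)
        est[arm] = noisy / p[arm]                      # importance-weighted estimate of the noisy reward
        weights *= np.exp(eta * est)
        weights /= weights.max()                       # rescale for numerical stability

    print("pseudo-regret:", round(T * means.max() - total_reward, 1))
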
Thompson Sampling For Stochastic Bandits with Graph Feedback | cs.LG cs.AI | We present a novel extension of Thompson Sampling for stochastic sequential
decision problems with graph feedback, even when the graph structure itself is
unknown and/or changing. We provide theoretical guarantees on the Bayesian
regret of the algorithm, linking its performance to the underlying properties
of the graph. Thompson Sampling has the advantage of being applicable without
the need to construct complicated upper confidence bounds for different
problems. We illustrate its performance through extensive experimental results
on real and simulated networks with graph feedback. More specifically, we
tested our algorithms on power-law, planted-partition and Erdos-Renyi graphs,
as well as on graphs derived from Facebook and Flixster data. These all show
that our algorithms clearly outperform related methods that employ upper
confidence bounds, even if the latter use more information about the graph.
| Aristide C. Y. Tossou, Christos Dimitrakakis, Devdatt Dubhashi | null | 1701.04238 | null | null |
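
With graph feedback, playing one arm also reveals the rewards of its neighbors, so every revealed arm's posterior gets updated. Below is a short Bernoulli Thompson Sampling sketch with a fixed, known feedback graph; the graph, arm means and horizon are made-up illustrations.

    import numpy as np

    rng = np.random.default_rng(5)
    K, T = 4, 3000
    means = np.array([0.2, 0.5, 0.6, 0.8])             # hidden Bernoulli arm means
    neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]} # feedback graph: playing i also reveals these arms
    alpha = np.ones(K)                                 # Beta(1, 1) priors
    beta = np.ones(K)
    pulls = np.zeros(K, dtype=int)

    for t in range(T):
        theta = rng.beta(alpha, beta)                  # one posterior sample per arm
        arm = int(theta.argmax())                      # Thompson Sampling: play the sampled best arm
        pulls[arm] += 1
        for j in [arm] + neighbors[arm]:               # graph feedback: observe the arm and its neighbors
            r = float(rng.random() < means[j])
            alpha[j] += r
            beta[j] += 1.0 - r

    print("pull counts:", pulls)
    print("posterior means:", np.round(alpha / (alpha + beta), 3))
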
Learning Traffic as Images: A Deep Convolutional Neural Network for
Large-Scale Transportation Network Speed Prediction | cs.LG stat.ML | This paper proposes a convolutional neural network (CNN)-based method that
learns traffic as images and predicts large-scale, network-wide traffic speed
with a high accuracy. Spatiotemporal traffic dynamics are converted to images
describing the time and space relations of traffic flow via a two-dimensional
time-space matrix. A CNN is applied to the image following two consecutive
steps: abstract traffic feature extraction and network-wide traffic speed
prediction. The effectiveness of the proposed method is evaluated by taking two
real-world transportation networks, the second ring road and north-east
transportation network in Beijing, as examples, and comparing the method with
four prevailing algorithms, namely, ordinary least squares, k-nearest
neighbors, artificial neural network, and random forest, and three deep
learning architectures, namely, stacked autoencoder, recurrent neural network,
and long-short-term memory network. The results show that the proposed method
outperforms other algorithms by an average accuracy improvement of 42.91%
within an acceptable execution time. The CNN can train the model in a
reasonable time and, thus, is suitable for large-scale transportation networks.
| Xiaolei Ma, Zhuang Dai, Zhengbing He, Jihui Na, Yong Wang and Yunpeng
Wang | null | 1701.04245 | null | null |
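
The central preprocessing step above is turning raw (time, segment, speed) records into a two-dimensional time-space matrix that can be treated as an image, from which sliding windows become training samples. A small numpy sketch of that conversion on synthetic records follows; the segment count, time resolution and window length are illustrative.

    import numpy as np

    rng = np.random.default_rng(6)
    n_segments, n_steps = 20, 288                 # e.g., 288 five-minute intervals in one day
    # Synthetic raw records: (time index, segment index, speed in km/h) with a morning slowdown.
    records = [(t, s, 60 - 25 * np.exp(-((t - 100) ** 2) / 800) + rng.normal(0, 3))
               for t in range(n_steps) for s in range(n_segments)]

    # Time-space matrix: rows = time, columns = road segments, values = speed (the "image").
    image = np.zeros((n_steps, n_segments))
    for t, s, v in records:
        image[t, s] = v

    # Sliding windows: P past rows form the input "image", the next row is the prediction target.
    P = 12
    X = np.stack([image[t - P:t] for t in range(P, n_steps)])    # shape: (samples, P, n_segments)
    y = np.stack([image[t] for t in range(P, n_steps)])          # shape: (samples, n_segments)
    print("input windows:", X.shape, " targets:", y.shape)
    # X can now be fed to a 2-D CNN exactly like grayscale images of size P x n_segments.
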
Geometric features for voxel-based surface recognition | cs.CV cs.LG | We introduce a library of geometric voxel features for CAD surface
recognition/retrieval tasks. Our features include local versions of the
intrinsic volumes (the usual 3D volume, surface area, integrated mean and
Gaussian curvature) and a few closely related quantities. We also compute Haar
wavelet and statistical distribution features by aggregating raw voxel
features. We apply our features to object classification on the ESB data set
and demonstrate accurate results with a small number of shallow decision trees.
| Dmitry Yarotsky | null | 1701.04249 | null | null |
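
Two of the simplest local quantities mentioned above, volume and surface area, can be read directly off a binary voxel grid: occupied voxels contribute to volume and exposed voxel faces to surface area. A short numpy sketch on a solid cube follows; the grid and shape are illustrative, and the paper's full feature library is much richer.

    import numpy as np

    # Binary voxel grid: a 6x6x6 solid cube embedded in a 10x10x10 volume.
    grid = np.zeros((10, 10, 10), dtype=bool)
    grid[2:8, 2:8, 2:8] = True

    def voxel_volume(g):
        return int(g.sum())                        # number of occupied unit voxels

    def voxel_surface_area(g):
        """Count faces of occupied voxels whose neighbor along an axis is empty."""
        padded = np.pad(g, 1, constant_values=False)
        area = 0
        for axis in range(3):
            for shift in (-1, 1):
                neighbor = np.roll(padded, shift, axis=axis)
                area += int(np.logical_and(padded, ~neighbor).sum())
        return area

    print("volume:", voxel_volume(grid))                # 6^3 = 216 voxels
    print("surface area:", voxel_surface_area(grid))    # 6 faces * 6^2 = 216 exposed unit faces
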
Fast Rates for Empirical Risk Minimization of Strict Saddle Problems | cs.LG | We derive bounds on the sample complexity of empirical risk minimization
(ERM) in the context of minimizing non-convex risks that admit the strict
saddle property. Recent progress in non-convex optimization has yielded
efficient algorithms for minimizing such functions. Our results imply that
these efficient algorithms are statistically stable and also generalize well.
In particular, we derive fast rates which resemble the bounds that are often
attained in the strongly convex setting. We specify our bounds to Principal
Component Analysis and Independent Component Analysis. Our results and
techniques may pave the way for statistical analyses of additional strict
saddle problems.
| Alon Gonen and Shai Shalev-Shwartz | null | 1701.04271 | null | null |
End-to-End ASR-free Keyword Search from Speech | cs.CL cs.IR cs.LG cs.NE | End-to-end (E2E) systems have achieved competitive results compared to
conventional hybrid hidden Markov model (HMM)-deep neural network based
automatic speech recognition (ASR) systems. Such E2E systems are attractive due
to the lack of dependence on alignments between input acoustic and output
grapheme or HMM state sequence during training. This paper explores the design
of an ASR-free end-to-end system for text query-based keyword search (KWS) from
speech trained with minimal supervision. Our E2E KWS system consists of three
sub-systems. The first sub-system is a recurrent neural network (RNN)-based
acoustic auto-encoder trained to reconstruct the audio through a
finite-dimensional representation. The second sub-system is a character-level
RNN language model using embeddings learned from a convolutional neural
network. Since the acoustic and text query embeddings occupy different
representation spaces, they are input to a third feed-forward neural network
that predicts whether the query occurs in the acoustic utterance or not. This
E2E ASR-free KWS system performs respectably despite lacking a conventional ASR
system and trains much faster.
| Kartik Audhkhasi, Andrew Rosenberg, Abhinav Sethy, Bhuvana
Ramabhadran, Brian Kingsbury | 10.1109/JSTSP.2017.2759726 | 1701.04313 | null | null |
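
The three-sub-system wiring described above can be sketched at the level of shapes: one encoder maps the utterance to a fixed-size embedding, another maps the character query to its own embedding, and a feed-forward net decides whether the query occurs. The numpy stand-ins below use mean pooling and random weights purely to show the data flow; they are not the RNN auto-encoder, character language model, or trained classifier of the actual system.

    import numpy as np

    rng = np.random.default_rng(8)
    d_audio, d_text, d_emb, d_hidden = 40, 30, 64, 32       # illustrative dimensions

    A_enc = rng.normal(scale=0.1, size=(d_emb, d_audio))    # stand-in acoustic encoder
    T_enc = rng.normal(scale=0.1, size=(d_emb, d_text))     # stand-in text-query encoder
    W1 = rng.normal(scale=0.1, size=(d_hidden, 2 * d_emb))  # feed-forward combiner, layer 1
    W2 = rng.normal(scale=0.1, size=(1, d_hidden))          # feed-forward combiner, layer 2

    def embed_audio(frames):        # frames: (n_frames, d_audio) acoustic features
        return np.tanh(A_enc @ frames.mean(axis=0))         # crude pooling in place of an RNN

    def embed_query(chars):         # chars: (n_chars, d_text) character features
        return np.tanh(T_enc @ chars.mean(axis=0))

    def keyword_score(frames, chars):
        # The two embeddings live in different representation spaces, so a learned
        # feed-forward net maps the pair to a single occurrence probability.
        z = np.concatenate([embed_audio(frames), embed_query(chars)])
        h = np.maximum(0.0, W1 @ z)
        return 1.0 / (1.0 + np.exp(-(W2 @ h)[0]))

    utterance = rng.normal(size=(200, d_audio))
    query = rng.normal(size=(7, d_text))
    print("P(query occurs in utterance):", round(float(keyword_score(utterance, query)), 3))
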