title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
BranchyNet: Fast Inference via Early Exiting from Deep Neural Networks | cs.NE cs.CV cs.LG | Deep neural networks are state-of-the-art methods for many learning tasks due
to their ability to extract increasingly better features at each network layer.
However, the improved performance of additional layers in a deep network comes
at the cost of added latency and energy usage in feedforward inference. As
networks continue to get deeper and larger, these costs become more prohibitive
for real-time and energy-sensitive applications. To address this issue, we
present BranchyNet, a novel deep network architecture that is augmented with
additional side branch classifiers. The architecture allows prediction results
for a large portion of test samples to exit the network early via these
branches when samples can already be inferred with high confidence. BranchyNet
exploits the observation that features learned at an early layer of a network
may often be sufficient for the classification of many data points. For more
difficult samples, which are expected less frequently, BranchyNet will use
further or all network layers to provide the best likelihood of correct
prediction. We study the BranchyNet architecture using several well-known
networks (LeNet, AlexNet, ResNet) and datasets (MNIST, CIFAR10) and show that
it can both improve accuracy and significantly reduce the inference time of the
network.
| Surat Teerapittayanon, Bradley McDanel, H.T. Kung | null | 1709.01686 | null | null |
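The early-exit rule described in the BranchyNet abstract above — let a test sample leave at the first side branch that is already confident — can be sketched with an entropy threshold on each branch's softmax output. This is an illustrative sketch, not the authors' implementation; the threshold value, the toy logits, and all function names are assumptions:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    """Shannon entropy of a probability vector."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def early_exit_predict(branch_logits, threshold=0.5):
    """Return (prediction, exit_index): exit at the first branch whose
    softmax entropy falls below the threshold; otherwise fall through
    to the final classifier."""
    for i, logits in enumerate(branch_logits):
        p = softmax(np.asarray(logits, dtype=float))
        if entropy(p) < threshold:
            return int(np.argmax(p)), i
    p = softmax(np.asarray(branch_logits[-1], dtype=float))
    return int(np.argmax(p)), len(branch_logits) - 1

# A confident early branch exits immediately; an uncertain one falls through.
confident = [[5.0, 0.0, 0.0], [0.1, 0.2, 0.3]]
uncertain = [[0.1, 0.0, 0.2], [4.0, 0.0, 0.0]]
print(early_exit_predict(confident))  # exits at branch 0
print(early_exit_predict(uncertain))  # falls through to branch 1
```

In the paper the exit thresholds are tuned per branch; a single global threshold keeps the sketch minimal.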
Conditional Generative Adversarial Networks for Speech Enhancement and
Noise-Robust Speaker Verification | eess.AS cs.LG cs.SD eess.SP stat.ML | Improving speech system performance in noisy environments remains a
challenging task, and speech enhancement (SE) is one of the effective
techniques to solve the problem. Motivated by the promising results of
generative adversarial networks (GANs) in a variety of image processing tasks,
we explore the potential of conditional GANs (cGANs) for SE, and in particular,
we make use of the image processing framework proposed by Isola et al. [1] to
learn a mapping from the spectrogram of noisy speech to an enhanced
counterpart. The SE cGAN consists of two networks, trained in an adversarial
manner: a generator that tries to enhance the input noisy spectrogram, and a
discriminator that tries to distinguish between enhanced spectrograms provided
by the generator and clean ones from the database using the noisy spectrogram
as a condition. We evaluate the performance of the cGAN method in terms of
perceptual evaluation of speech quality (PESQ), short-time objective
intelligibility (STOI), and equal error rate (EER) of speaker verification (an
example application). Experimental results show that the cGAN method overall
outperforms the classical short-time spectral amplitude minimum mean square
error (STSA-MMSE) SE algorithm, and is comparable to a deep neural
network-based SE approach (DNN-SE).
| Daniel Michelsanti and Zheng-Hua Tan | 10.21437/Interspeech.2017-1620 | 1709.01703 | null | null |
Optimal Sub-sampling with Influence Functions | stat.ML cs.LG | Sub-sampling is a common and often effective method to deal with the
computational challenges of large datasets. However, for most statistical
models, there is no well-motivated approach for drawing a non-uniform
subsample. We show that the concept of an asymptotically linear estimator and
the associated influence function leads to optimal sampling procedures for a
wide class of popular models. Furthermore, for linear regression models which
have well-studied procedures for non-uniform sub-sampling, we show our optimal
influence-function-based method outperforms previous approaches. We empirically
show the improved performance of our method on real datasets.
| Daniel Ting and Eric Brochu | null | 1709.01716 | null | null |
Temporal Pattern Discovery for Accurate Sepsis Diagnosis in ICU Patients | cs.LG stat.AP | Sepsis is a condition caused by the body's overwhelming and life-threatening
response to infection, which can lead to tissue damage, organ failure, and
finally death. Common signs and symptoms include fever, increased heart rate,
increased breathing rate, and confusion. Sepsis is difficult to predict,
diagnose, and treat. Patients who develop sepsis have an increased risk of
complications and death and face higher health care costs and longer
hospitalization. Today, sepsis is one of the leading causes of mortality among
populations in intensive care units (ICUs). In this paper, we look at the
problem of early detection of sepsis by using temporal data mining. We focus on
the use of knowledge-based temporal abstraction to create meaningful
interval-based abstractions, and on time-interval mining to discover frequent
interval-based patterns. We used 2,560 cases derived from the MIMIC-III
database. We found that the distribution of the temporal patterns whose
frequency is above 10%, discovered in the records of septic patients during the
last 6 and 12 hours before the onset of sepsis, differs significantly from the
distribution discovered during an equivalent time window of hospitalization in
the records of non-septic patients. This discovery is
encouraging for the purpose of performing an early diagnosis of sepsis using
the discovered patterns as constructed features.
| Eitam Sheetrit, Nir Nissim, Denis Klimov, Lior Fuchs, Yuval Elovici,
Yuval Shahar | null | 1709.01720 | null | null |
Deep learning from crowds | stat.ML cs.CV cs.HC cs.LG | Over the last few years, deep learning has revolutionized the field of
machine learning by dramatically improving the state-of-the-art in various
domains. However, as the size of supervised artificial neural networks grows,
typically so does the need for larger labeled datasets. Recently, crowdsourcing
has established itself as an efficient and cost-effective solution for labeling
large sets of data in a scalable manner, but it often requires aggregating
labels from multiple noisy contributors with different levels of expertise. In
this paper, we address the problem of learning deep neural networks from
crowds. We begin by describing an EM algorithm for jointly learning the
parameters of the network and the reliabilities of the annotators. Then, a
novel general-purpose crowd layer is proposed, which allows us to train deep
neural networks end-to-end, directly from the noisy labels of multiple
annotators, using only backpropagation. We empirically show that the proposed
approach is able to internally capture the reliability and biases of different
annotators and achieve new state-of-the-art results for various crowdsourced
datasets across different settings, namely classification, regression and
sequence labeling.
| Filipe Rodrigues and Francisco Pereira | null | 1709.01779 | null | null |
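The EM baseline mentioned in the abstract above — alternating between estimating the true labels and the annotators' reliabilities — can be sketched as follows. This is a simplified illustration, not the paper's crowd layer: it uses a single scalar accuracy per annotator rather than a full confusion matrix, and all names and constants are assumptions:

```python
import numpy as np

def em_annotator_reliability(labels, n_classes, n_iter=10):
    """labels: (n_items, n_annotators) integer label matrix.
    Alternate between (E) estimating true labels by reliability-weighted
    voting and (M) re-estimating each annotator's accuracy."""
    n_items, n_annot = labels.shape
    acc = np.full(n_annot, 0.8)            # initial annotator accuracies
    for _ in range(n_iter):
        # E-step: per-item class scores weighted by log-odds of accuracy
        scores = np.zeros((n_items, n_classes))
        for a in range(n_annot):
            w = np.log(acc[a] / (1 - acc[a]))
            scores[np.arange(n_items), labels[:, a]] += w
        truth = scores.argmax(axis=1)
        # M-step: accuracy = agreement with the current truth estimate
        acc = np.clip((labels == truth[:, None]).mean(axis=0), 0.05, 0.95)
    return truth, acc

# Annotator 2 is random noise; EM should downweight it.
rng = np.random.default_rng(0)
true = rng.integers(0, 3, size=200)
good1 = np.where(rng.random(200) < 0.90, true, rng.integers(0, 3, 200))
good2 = np.where(rng.random(200) < 0.85, true, rng.integers(0, 3, 200))
bad = rng.integers(0, 3, size=200)
labels = np.stack([good1, good2, bad], axis=1)
truth, acc = em_annotator_reliability(labels, n_classes=3)
```

The paper's contribution is to replace this outer EM loop with a crowd layer trained end-to-end by backpropagation.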
Symmetric Variational Autoencoder and Connections to Adversarial
Learning | stat.ML cs.LG | A new form of the variational autoencoder (VAE) is proposed, based on the
symmetric Kullback-Leibler divergence. It is demonstrated that learning of the
resulting symmetric VAE (sVAE) has close connections to previously developed
adversarial-learning methods. This relationship helps unify the previously
distinct techniques of VAEs and adversarial learning, and provides insights
that allow us to ameliorate shortcomings of some previously developed
adversarial methods. In addition to an analysis that motivates and explains the
sVAE, an extensive set of experiments validate the utility of the approach.
| Liqun Chen, Shuyang Dai, Yunchen Pu, Chunyuan Li, Qinliang Su,
Lawrence Carin | null | 1709.01846 | null | null |
The low-rank hurdle model | stat.ML cs.LG | A composite loss framework is proposed for low-rank modeling of data
consisting of interesting and common values, such as excess zeros or missing
values. The methodology is motivated by the generalized low-rank framework and
the hurdle method which is commonly used to analyze zero-inflated counts. The
model is demonstrated on a manufacturing data set and applied to the problem of
missing value imputation.
| Christopher Dienes | null | 1709.01860 | null | null |
Neural Networks Regularization Through Class-wise Invariant
Representation Learning | cs.LG stat.ML | Training deep neural networks is known to require a large number of training
samples. However, in many applications only few training samples are available.
In this work, we tackle the issue of training neural networks for
classification task when few training samples are available. We attempt to
solve this issue by proposing a new regularization term that constrains the
hidden layers of a network to learn class-wise invariant representations. In
our regularization framework, learning invariant representations is generalized
to the class membership where samples with the same class should have the same
representation. Numerical experiments over MNIST and its variants showed that
our proposal helps improve the generalization of neural networks, particularly
when trained with few samples. The source code of our framework is available at
https://github.com/sbelharbi/learning-class-invariant-features .
| Soufiane Belharbi, Cl\'ement Chatelain, Romain H\'erault, S\'ebastien
Adam | null | 1709.01867 | null | null |
Clustering of Data with Missing Entries using Non-convex Fusion
Penalties | cs.CV cs.LG stat.ML | The presence of missing entries in data often creates challenges for pattern
recognition algorithms. Traditional algorithms for clustering data assume that
all the feature values are known for every data point. We propose a method to
cluster data in the presence of missing information. Unlike conventional
clustering techniques where every feature is known for each point, our
algorithm can handle cases where a few feature values are unknown for every
point. For this more challenging problem, we provide theoretical guarantees for
clustering using an $\ell_0$ fusion-penalty-based optimization problem.
Furthermore, we propose an algorithm to solve a relaxation of this problem
using saturating non-convex fusion penalties. It is observed that this
algorithm produces solutions that degrade gradually with an increase in the
fraction of missing feature values. We demonstrate the utility of the proposed
method using a simulated dataset, the Wine dataset and also an under-sampled
cardiac MRI dataset. It is shown that the proposed method is a promising
clustering technique for datasets with large fractions of missing entries.
| Sunrita Poddar, Mathews Jacob | null | 1709.01870 | null | null |
Language Modeling by Clustering with Word Embeddings for Text
Readability Assessment | cs.CL cs.LG | We present a clustering-based language model using word embeddings for text
readability prediction. Presumably, a Euclidean semantic space hypothesis
holds true for word embeddings whose training is done by observing word
co-occurrences. We argue that clustering with word embeddings in the metric
space should yield feature representations in a higher semantic space
appropriate for text regression. Also, by representing features in terms of
histograms, our approach can naturally address documents of varying lengths. An
empirical evaluation using the Common Core Standards corpus reveals that the
features formed on our clustering-based language model significantly improve
the previously known results for the same corpus in readability prediction. We
also evaluate the task of sentence matching based on semantic relatedness using
the Wiki-SimpleWiki corpus and find that our features lead to superior matching
performance.
| Miriam Cha, Youngjune Gwon, H.T. Kung | null | 1709.01888 | null | null |
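The histogram representation described in the abstract above — cluster the word embeddings, then describe a document by how its words distribute over the clusters — can be sketched as a bag-of-clusters feature. In practice the centroids would come from, e.g., k-means over the embedding vocabulary; the toy vectors and function name below are assumptions:

```python
import numpy as np

def cluster_histogram_features(doc_vectors, centroids):
    """Represent a document as a normalized histogram of nearest-centroid
    assignments of its word vectors (a bag-of-clusters feature); the
    histogram has a fixed length regardless of document length."""
    d = np.linalg.norm(doc_vectors[:, None, :] - centroids[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    hist = np.bincount(assign, minlength=len(centroids)).astype(float)
    return hist / hist.sum()

centroids = np.array([[0.0, 0.0], [1.0, 1.0]])
doc = np.array([[0.1, 0.0], [0.9, 1.0], [1.1, 0.9]])
hist = cluster_histogram_features(doc, centroids)
print(hist)  # one word near centroid 0, two near centroid 1
```

Because the output length depends only on the number of clusters, documents of varying lengths map to features of the same dimension, as the abstract notes.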
Convolutional Gaussian Processes | stat.ML cs.LG | We present a practical way of introducing convolutional structure into
Gaussian processes, making them more suited to high-dimensional inputs like
images. The main contribution of our work is the construction of an
inter-domain inducing point approximation that is well-tailored to the
convolutional kernel. This allows us to gain the generalisation benefit of a
convolutional kernel, together with fast but accurate posterior inference. We
investigate several variations of the convolutional kernel, and apply it to
MNIST and CIFAR-10, which have both been known to be challenging for Gaussian
processes. We also show how the marginal likelihood can be used to find an
optimal weighting between convolutional and RBF kernels to further improve
performance. We hope that this illustration of the usefulness of a marginal
likelihood will help automate discovering architectures in larger models.
| Mark van der Wilk, Carl Edward Rasmussen, James Hensman | null | 1709.01894 | null | null |
Estimation of a Low-rank Topic-Based Model for Information Cascades | stat.ML cs.LG cs.SI | We consider the problem of estimating the latent structure of a social
network based on the observed information diffusion events, or cascades, where
the observations for a given cascade consist of only the timestamps of
infection for infected nodes but not the source of the infection. Most of the
existing work on this problem has focused on estimating a diffusion matrix
without any structural assumptions on it. In this paper, we propose a novel
model based on the intuition that a piece of information is more likely to
propagate between two nodes if they are interested in similar topics that are also
prominent in the information content. In particular, our model endows each node
with an influence vector (which measures how authoritative the node is on each
topic) and a receptivity vector (which measures how susceptible the node is to
each topic). We show how this node-topic structure can be estimated from the
observed cascades, and prove the consistency of the estimator. Experiments on
synthetic and real data demonstrate the improved performance and better
interpretability of our model compared to existing state-of-the-art methods.
| Ming Yu, Varun Gupta, Mladen Kolar | null | 1709.01919 | null | null |
A Comparison of Audio Signal Preprocessing Methods for Deep Neural
Networks on Music Tagging | cs.SD cs.CV cs.IR cs.LG | In this paper, we empirically investigate the effect of audio preprocessing
on music tagging with deep neural networks. We perform comprehensive
experiments involving audio preprocessing using different time-frequency
representations, logarithmic magnitude compression, frequency weighting, and
scaling. We show that many commonly used input preprocessing techniques are
redundant, with the exception of magnitude compression.
| Keunwoo Choi, Gy\"orgy Fazekas, Kyunghyun Cho and Mark Sandler | null | 1709.01922 | null | null |
Implicit Regularization in Deep Learning | cs.LG | In an attempt to better understand generalization in deep learning, we study
several possible explanations. We show that implicit regularization induced by
the optimization method is playing a key role in generalization and success of
deep learning models. Motivated by this view, we study how different complexity
measures can ensure generalization and explain how optimization algorithms can
implicitly regularize complexity measures. We empirically investigate the
ability of these measures to explain different observed phenomena in deep
learning. We further study the invariances in neural networks, suggest
complexity measures and optimization algorithms that have similar invariances
to those in neural networks and evaluate them on a number of learning tasks.
| Behnam Neyshabur | null | 1709.01953 | null | null |
A Quasi-isometric Embedding Algorithm | stat.ML cs.CG cs.LG | The Whitney embedding theorem gives an upper bound on the smallest embedding
dimension of a manifold. If a data set lies on a manifold, a random projection
into this reduced dimension will retain the manifold structure. Here we present
an algorithm to find a projection that distorts the data as little as possible.
| David W. Dreisigmeyer | null | 1709.01972 | null | null |
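The premise of the abstract above — that a random projection into the reduced dimension roughly preserves pairwise distances — can be checked numerically. The sketch below measures the worst-case pairwise distortion of a Gaussian random projection; it is not the paper's embedding algorithm, and the dimensions and names are assumptions:

```python
from itertools import combinations
import numpy as np

def random_projection(X, k, rng):
    """Project rows of X to k dimensions with a Gaussian random matrix,
    scaled so pairwise distances are preserved in expectation."""
    d = X.shape[1]
    R = rng.standard_normal((d, k)) / np.sqrt(k)
    return X @ R

def max_pairwise_distortion(X, Y):
    """Worst-case ratio between corresponding pairwise distances."""
    worst = 1.0
    for i, j in combinations(range(len(X)), 2):
        dx = np.linalg.norm(X[i] - X[j])
        dy = np.linalg.norm(Y[i] - Y[j])
        worst = max(worst, dy / dx, dx / dy)
    return worst

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 200))
Y = random_projection(X, k=64, rng=rng)
dist = max_pairwise_distortion(X, Y)
print(dist)  # modest distortion, close to 1
```

A random projection only controls distortion probabilistically; the paper's contribution is to search for a projection whose distortion is as small as possible.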
On Fairness and Calibration | cs.LG cs.CY stat.ML | The machine learning community has become increasingly concerned with the
potential for bias and discrimination in predictive models. This has motivated
a growing line of work on what it means for a classification procedure to be
"fair." In this paper, we investigate the tension between minimizing error
disparity across different population groups while maintaining calibrated
probability estimates. We show that calibration is compatible only with a
single error constraint (i.e., equal false-negative rates across groups), and
show that any algorithm that satisfies this relaxation is no better than
randomizing a percentage of predictions for an existing classifier. These
unsettling findings, which extend and generalize existing results, are
empirically confirmed on several datasets.
| Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, Kilian Q.
Weinberger | null | 1709.02012 | null | null |
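The two quantities in tension in the abstract above can be made concrete: a per-group false-negative rate and a per-group calibration gap (mean predicted probability minus observed base rate). The sketch below computes both on a toy dataset; the function name, threshold, and data are assumptions, not the paper's algorithm:

```python
import numpy as np

def group_metrics(y_true, p_pred, group, threshold=0.5):
    """Per-group false-negative rate and calibration gap
    (mean predicted probability minus observed base rate)."""
    out = {}
    for g in np.unique(group):
        m = group == g
        yhat = p_pred[m] >= threshold
        pos = y_true[m] == 1
        fnr = float((~yhat[pos]).mean()) if pos.any() else float("nan")
        calib_gap = float(p_pred[m].mean() - y_true[m].mean())
        out[int(g)] = {"fnr": fnr, "calib_gap": calib_gap}
    return out

y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
p_pred = np.array([0.9, 0.4, 0.2, 0.1, 0.8, 0.7, 0.3, 0.6])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
m = group_metrics(y_true, p_pred, group)
print(m)  # group 0 misses half its positives; the groups' calibration gaps differ
```

The paper's result is precisely that driving the per-group FNRs together while keeping both calibration gaps at zero is in general impossible.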
CausalGAN: Learning Causal Implicit Generative Models with Adversarial
Training | cs.LG cs.AI cs.IT math.IT stat.ML | We propose an adversarial training procedure for learning a causal implicit
generative model for a given causal graph. We show that adversarial training
can be used to learn a generative model with true observational and
interventional distributions if the generator architecture is consistent with
the given causal graph. We consider the application of generating faces based
on given binary labels where the dependency structure between the labels is
preserved with a causal graph. This problem can be seen as learning a causal
implicit generative model for the image and labels. We devise a two-stage
procedure for this problem. First we train a causal implicit generative model
over binary labels using a neural network consistent with a causal graph as the
generator. We empirically show that WassersteinGAN can be used to output
discrete labels. Later, we propose two new conditional GAN architectures, which
we call CausalGAN and CausalBEGAN. We show that the optimal generator of the
CausalGAN, given the labels, samples from the image distributions conditioned
on these labels. The conditional GAN combined with a trained causal implicit
generative model for the labels is then a causal implicit generative model over
the labels and the generated image. We show that the proposed architectures can
be used to sample from observational and interventional image distributions,
even for interventions which do not naturally occur in the dataset.
| Murat Kocaoglu, Christopher Snyder, Alexandros G. Dimakis, Sriram
Vishwanath | null | 1709.02023 | null | null |
Formulation of Deep Reinforcement Learning Architecture Toward
Autonomous Driving for On-Ramp Merge | cs.LG cs.AI | Multiple automakers have automated driving systems (ADS) in development or in
production that offer freeway-pilot functions. This type of ADS is typically
limited to restricted-access freeways only, that is, the transition from manual
to automated modes takes place only after the ramp merging process is completed
manually. One major challenge to extend the automation to ramp merging is that
the automated vehicle needs to incorporate and optimize long-term objectives
(e.g. successful and smooth merge) when near-term actions must be safely
executed. Moreover, the merging process involves interactions with other
vehicles whose behaviors are sometimes hard to predict but may influence the
merging vehicle's optimal actions. To tackle such a complicated control problem,
we propose to apply Deep Reinforcement Learning (DRL) techniques for finding an
optimal driving policy by maximizing the long-term reward in an interactive
environment. Specifically, we apply a Long Short-Term Memory (LSTM)
architecture to model the interactive environment, from which an internal state
containing historical driving information is conveyed to a Deep Q-Network
(DQN). The DQN is used to approximate the Q-function, which takes the internal
state as input and generates Q-values as output for action selection. With this
DRL architecture, the historical impact of interactive environment on the
long-term reward can be captured and taken into account for deciding the
optimal control policy. The proposed architecture has the potential to be
extended and applied to other autonomous driving scenarios such as driving
through a complex intersection or changing lanes under varying traffic flow
conditions.
| Pin Wang, Ching-Yao Chan | 10.1109/ITSC.2017.8317735 | 1709.02066 | null | null |
A deep generative model for gene expression profiles from single-cell
RNA sequencing | cs.LG q-bio.GN stat.ML | We propose a probabilistic model for interpreting gene expression levels that
are observed through single-cell RNA sequencing. In the model, each cell has a
low-dimensional latent representation. Additional latent variables account for
technical effects that may erroneously set some observations of gene expression
levels to zero. Conditional distributions are specified by neural networks,
giving the proposed model enough flexibility to fit the data well. We use
variational inference and stochastic optimization to approximate the posterior
distribution. The inference procedure scales to over one million cells, whereas
competing algorithms do not. Even for smaller datasets, for several tasks, the
proposed procedure outperforms state-of-the-art methods like ZIFA and
ZINB-WaVE. We also extend our framework to account for batch effects and other
confounding factors, and propose a Bayesian hypothesis test for differential
expression that outperforms DESeq2.
| Romain Lopez, Jeffrey Regier, Michael Cole, Michael Jordan and Nir
Yosef | null | 1709.02082 | null | null |
Sharp Bounds for Generalized Uniformity Testing | cs.DS cs.IT cs.LG math.IT math.ST stat.TH | We study the problem of generalized uniformity testing \cite{BC17} of a
discrete probability distribution: Given samples from a probability
distribution $p$ over an {\em unknown} discrete domain $\mathbf{\Omega}$, we
want to distinguish, with probability at least $2/3$, between the case that $p$
is uniform on some {\em subset} of $\mathbf{\Omega}$ versus $\epsilon$-far, in
total variation distance, from any such uniform distribution.
We establish tight bounds on the sample complexity of generalized uniformity
testing. In more detail, we present a computationally efficient tester whose
sample complexity is optimal, up to constant factors, and a matching
information-theoretic lower bound. Specifically, we show that the sample
complexity of generalized uniformity testing is
$\Theta\left(1/(\epsilon^{4/3}\|p\|_3) + 1/(\epsilon^{2} \|p\|_2) \right)$.
| Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart | null | 1709.02087 | null | null |
Integrating Specialized Classifiers Based on Continuous Time Markov
Chain | cs.LG cs.CV | Specialized classifiers, namely those dedicated to a subset of classes, are
often adopted in real-world recognition systems. However, integrating such
classifiers is nontrivial. Existing methods, e.g. weighted average, usually
implicitly assume that all constituents of an ensemble cover the same set of
classes. Such methods can produce misleading predictions when used to combine
specialized classifiers. This work explores a novel approach. Instead of
combining predictions from individual classifiers directly, it first decomposes
the predictions into sets of pairwise preferences, treating them as transition
channels between classes, and thereon constructs a continuous-time Markov
chain, and uses the equilibrium distribution of this chain as the final
prediction. This approach allows us to form a coherent picture over all specialized
predictions. On large public datasets, the proposed method obtains considerable
improvement compared to mainstream ensemble methods, especially when the
classifier coverage is highly unbalanced.
| Zhizhong Li and Dahua Lin | null | 1709.02123 | null | null |
Bayesian Optimisation for Safe Navigation under Localisation Uncertainty | cs.RO cs.AI cs.LG | In outdoor environments, mobile robots are required to navigate through
terrain with varying characteristics, some of which might significantly affect
the integrity of the platform. Ideally, the robot should be able to identify
areas that are safe for navigation based on its own percepts about the
environment while avoiding damage to itself. Bayesian optimisation (BO) has
been successfully applied to the task of learning a model of terrain
traversability while guiding the robot through more traversable areas. An
issue, however, is that localisation uncertainty can end up guiding the robot
to unsafe areas and distort the model being learnt. In this paper, we address
this problem and present a novel method that allows BO to consider localisation
uncertainty by applying a Gaussian process model for uncertain inputs as a
prior. We evaluate the proposed method in simulation and in experiments with a
real robot navigating over rough terrain and compare it against standard BO
methods.
| Rafael Oliveira, Lionel Ott, Vitor Guizilini and Fabio Ramos | null | 1709.02169 | null | null |
Approximating meta-heuristics with homotopic recurrent neural networks | stat.ML cs.DM cs.LG | Many combinatorial optimisation problems are NP-hard, i.e., they cannot be
solved in polynomial time. One
such problem is finding the shortest route between two nodes on a graph.
Meta-heuristic algorithms such as $A^{*}$ along with mixed-integer programming
(MIP) methods are often employed for these problems. Our work demonstrates that
it is possible to approximate solutions generated by a meta-heuristic algorithm
using a deep recurrent neural network. We compare different methodologies based
on reinforcement learning (RL) and recurrent neural networks (RNN) to gauge
their respective quality of approximation. We show the viability of recurrent
neural network solutions on a graph that has over 300 nodes and argue that a
sequence-to-sequence network rather than other recurrent networks has improved
approximation quality. Additionally, we argue that homotopy continuation --
that increases chances of hitting an extremum -- further improves the estimate
generated by a vanilla RNN.
| Alessandro Bay and Biswa Sengupta | null | 1709.02194 | null | null |
RNN-based Early Cyber-Attack Detection for the Tennessee Eastman Process | cs.CR cs.LG | An RNN-based forecasting approach is used to early detect anomalies in
industrial multivariate time series data from a simulated Tennessee Eastman
Process (TEP) with many cyber-attacks. This work continues a previously
proposed LSTM-based approach to fault detection on simpler data. It is
considered necessary to adapt the RNN network to deal with data containing
stochastic, stationary, transitive and a rich variety of anomalous behaviours.
Particular focus is placed on early detection, evaluated with the NAB metric. A
comparison with the DPCA approach is provided. The generated data set is made
publicly available.
| Pavel Filonov, Fedor Kitashov, Andrey Lavrentyev | null | 1709.02232 | null | null |
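The forecasting-based detection scheme described in the abstract above — forecast each point, then flag observations whose residual is abnormally large — can be sketched with a simple moving-average forecaster standing in for the paper's RNN. The window size, threshold multiplier, and injected anomaly are assumptions:

```python
import numpy as np

def residual_anomalies(series, window=5, k=5.0):
    """Forecast each point as the mean of the previous `window` points,
    then flag residuals more than k standard deviations from the median
    residual."""
    s = np.asarray(series, dtype=float)
    resid = np.array([s[t] - s[t - window:t].mean()
                      for t in range(window, len(s))])
    sigma = np.std(resid)
    flags = np.zeros(len(s), dtype=bool)
    flags[window:] = np.abs(resid - np.median(resid)) > k * sigma
    return flags

# A smooth sine with one injected spike: only the spike should be flagged.
t = np.linspace(0, 4 * np.pi, 200)
x = np.sin(t)
x[120] += 5.0
flags = residual_anomalies(x)
print(np.flatnonzero(flags))
```

An RNN forecaster replaces the moving average when the dynamics are too complex for a linear predictor, which is the setting of the TEP data above.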
Visual Cues to Improve Myoelectric Control of Upper Limb Prostheses | cs.CV cs.LG | The instability of myoelectric signals over time complicates their use to
control highly articulated prostheses. To address this problem, studies have
tried to combine surface electromyography with modalities that are less
affected by the amputation and environment, such as accelerometry or gaze
information. In the latter case, the hypothesis is that a subject looks at the
object he or she intends to manipulate and that knowing this object's
affordances allows one to constrain the set of possible grasps. In this paper, we
develop an automated way to detect stable fixations and show that gaze
information is indeed helpful in predicting hand movements. In our multimodal
approach, we automatically detect stable gazes and segment an object of
interest around the subject's fixation in the visual frame. The patch extracted
around this object is subsequently fed through an off-the-shelf deep
convolutional neural network to obtain a high level feature representation,
which is then combined with traditional surface electromyography in the
classification stage. Tests have been performed on a dataset acquired from five
intact subjects who performed ten types of grasps on various objects as well as
in a functional setting. They show that the addition of gaze information
increases the classification accuracy considerably. Further analysis
demonstrates that this improvement is consistent for all grasps and
concentrated during the movement onset and offset.
| Andrea Gigli, Arjan Gijsberts, Valentina Gregori, Matteo Cognolato,
Manfredo Atzori, Barbara Caputo | null | 1709.02236 | null | null |
Uncertainty-Aware Learning from Demonstration using Mixture Density
Networks with Sampling-Free Variance Modeling | cs.CV cs.AI cs.LG cs.RO | In this paper, we propose an uncertainty-aware learning from demonstration
method by presenting a novel uncertainty estimation method utilizing a mixture
density network appropriate for modeling complex and noisy human behaviors. The
proposed uncertainty estimation can be done with a single forward pass without
Monte Carlo sampling and is suitable for real-time robotics applications. The
properties of the proposed uncertainty measure are analyzed through three
different synthetic examples, absence of data, heavy measurement noise, and
composition of functions scenarios. We show that each case can be distinguished
using the proposed uncertainty measure and present an uncertainty-aware
learning from demonstration method for autonomous driving using this
property. The proposed uncertainty-aware learning from demonstration method
outperforms other compared methods in terms of safety using a complex
real-world driving dataset.
| Sungjoon Choi, Kyungjae Lee, Sungbin Lim, Songhwai Oh | null | 1709.02249 | null | null |
Multi-modal Conditional Attention Fusion for Dimensional Emotion
Prediction | cs.CV cs.LG cs.MM | Continuous dimensional emotion prediction is a challenging task where the
fusion of various modalities usually achieves state-of-the-art performance such
as early fusion or late fusion. In this paper, we propose a novel multi-modal
fusion strategy named conditional attention fusion, which can dynamically pay
attention to different modalities at each time step. Long-short term memory
recurrent neural networks (LSTM-RNN) is applied as the basic uni-modality model
to capture long time dependencies. The weights assigned to different modalities
are automatically decided by the current input features and recent history
information rather than being fixed at any kinds of situation. Our experimental
results on a benchmark dataset AVEC2015 show the effectiveness of our method
which outperforms several common fusion strategies for valence prediction.
| Shizhe Chen, Qin Jin | null | 1709.02251 | null | null |
Linear vs Nonlinear Extreme Learning Machine for Spectral-Spatial
Classification of Hyperspectral Image | cs.CV cs.LG | As a new machine learning approach, extreme learning machine (ELM) has
received wide attentions due to its good performances. However, when directly
applied to the hyperspectral image (HSI) classification, the recognition rate
is too low. This is because ELM does not use the spatial information which is
very important for HSI classification. In view of this, this paper proposes a
new framework for spectral-spatial classification of HSI by combining ELM with
loopy belief propagation (LBP). The original ELM is linear, and the nonlinear
ELMs (or Kernel ELMs) are the improvement of linear ELM (LELM). However, based
on extensive experiments and analysis, we found that LELM is a better
choice than nonlinear ELM for spectral-spatial classification of HSI.
Furthermore, we exploit the marginal probability distribution that uses the
whole information in the HSI and learn such distribution using the LBP. The
proposed method not only maintains the fast speed of ELM, but also greatly
improves the accuracy of classification. The experimental results on the
well-known HSI data sets, Indian Pines and Pavia University, demonstrate the
good performances of the proposed method.
| Faxian Cao, Zhijing Yang, Jinchang Ren, Mengying Jiang, Wing-Kuen Ling | 10.3390/s17112603 | 1709.02253 | null | null |
Intraoperative Organ Motion Models with an Ensemble of Conditional
Generative Adversarial Networks | cs.CV cs.LG | In this paper, we describe how a patient-specific, ultrasound-probe-induced
prostate motion model can be directly generated from a single preoperative MR
image. Our motion model allows for sampling from the conditional distribution
of dense displacement fields, is encoded by a generative neural network
conditioned on a medical image, and accepts random noise as additional input.
The generative network is trained by a minimax optimisation with a second
discriminative neural network, tasked to distinguish generated samples from
training motion data. In this work, we propose that 1) jointly optimising a
third conditioning neural network that pre-processes the input image, can
effectively extract patient-specific features for conditioning; and 2)
combining multiple generative models trained separately with heuristically
pre-disjointed training data sets can adequately mitigate the problem of mode
collapse. Trained with diagnostic T2-weighted MR images from 143 real patients
and 73,216 3D dense displacement fields from finite element simulations of
intraoperative prostate motion due to transrectal ultrasound probe pressure,
the proposed models produced physically-plausible patient-specific motion of
prostate glands. The ability to capture biomechanically simulated motion was
evaluated using two errors representing generalisability and specificity of the
model. The median values, calculated from a 10-fold cross-validation, were
2.8+/-0.3 mm and 1.7+/-0.1 mm, respectively. We conclude that the introduced
approach demonstrates the feasibility of applying state-of-the-art machine
learning algorithms to generate organ motion models from patient images, and
shows significant promise for future research.
| Yipeng Hu, Eli Gibson, Tom Vercauteren, Hashim U. Ahmed, Mark
Emberton, Caroline M. Moore, J. Alison Noble, Dean C. Barratt | 10.1007/978-3-319-66185-8_42 | 1709.02255 | null | null |
Embedded Binarized Neural Networks | cs.CV cs.LG | We study embedded Binarized Neural Networks (eBNNs) with the aim of allowing
current binarized neural networks (BNNs) in the literature to perform
feedforward inference efficiently on small embedded devices. We focus on
minimizing the required memory footprint, given that these devices often have
memory as small as tens of kilobytes (KB). Beyond minimizing the memory
required to store weights, as in a BNN, we show that it is essential to
minimize the memory used for temporaries which hold intermediate results
between layers in feedforward inference. To accomplish this, eBNN reorders the
computation of inference while preserving the original BNN structure, and uses
just a single floating-point temporary for the entire neural network. All
intermediate results from a layer are stored as binary values, as opposed to
floating-points used in current BNN implementations, leading to a 32x reduction
in required temporary space. We provide empirical evidence that our proposed
eBNN approach allows efficient inference (10s of ms) on devices with severely
limited memory (10s of KB). For example, eBNN achieves 95\% accuracy on the
MNIST dataset running on an Intel Curie with only 15 KB of usable memory with
an inference runtime of under 50 ms per sample. To ease the development of
applications in embedded contexts, we make our source code available that
allows users to train and discover eBNN models for a learning task at hand,
which fit within the memory constraint of the target device.
| Bradley McDanel, Surat Teerapittayanon, H.T. Kung | null | 1709.0226 | null | null |
Phylogenetic Convolutional Neural Networks in Metagenomics | q-bio.QM cs.LG cs.NE q-bio.GN | Background: Convolutional Neural Networks can be effectively used only when
data are endowed with an intrinsic concept of neighbourhood in the input space,
as is the case of pixels in images. We introduce here Ph-CNN, a novel deep
learning architecture for the classification of metagenomics data based on the
Convolutional Neural Networks, with the patristic distance defined on the
phylogenetic tree being used as the proximity measure. The patristic distance
between variables is used together with a sparsified version of
MultiDimensional Scaling to embed the phylogenetic tree in a Euclidean space.
Results: Ph-CNN is tested with a domain adaptation approach on synthetic data
and on a metagenomics collection of gut microbiota of 38 healthy subjects and
222 Inflammatory Bowel Disease patients, divided into 6 subclasses.
Classification performance is promising when compared to classical algorithms
like Support Vector Machines and Random Forest and a baseline fully connected
neural network, e.g. the Multi-Layer Perceptron. Conclusion: Ph-CNN represents
a novel deep learning approach for the classification of metagenomics data.
Operatively, the algorithm has been implemented as a custom Keras layer taking
care of passing to the following convolutional layer not only the data but also
the ranked list of neighbours of each sample, thus mimicking the case of
image data, transparently to the user. Keywords: Metagenomics; Deep learning;
Convolutional Neural Networks; Phylogenetic trees
| Diego Fioravanti, Ylenia Giarratano, Valerio Maggio, Claudio
Agostinelli, Marco Chierici, Giuseppe Jurman and Cesare Furlanello | null | 1709.02268 | null | null |
Basic Filters for Convolutional Neural Networks Applied to Music:
Training or Design? | cs.LG cs.IR | When convolutional neural networks are used to tackle learning problems based
on music or, more generally, time series data, raw one-dimensional data are
commonly pre-processed to obtain spectrogram or mel-spectrogram coefficients,
which are then used as input to the actual neural network. In this
contribution, we investigate, both theoretically and experimentally, the
influence of this pre-processing step on the network's performance and pose the
question whether replacing it by applying adaptive or learned filters directly
to the raw data can improve learning success. The theoretical results show
that approximately reproducing mel-spectrogram coefficients by applying
adaptive filters and subsequent time-averaging is in principle possible. We
also conducted extensive experimental work on the task of singing voice
detection in music. The results of these experiments show that for
classification based on Convolutional Neural Networks the features obtained
from adaptive filter banks followed by time-averaging perform better than the
canonical Fourier-transform-based mel-spectrogram coefficients. Alternative
adaptive approaches with center frequencies or time-averaging lengths learned
from training data perform equally well.
| Monika Doerfler, Thomas Grill, Roswitha Bammer, Arthur Flexer | null | 1709.02291 | null | null |
Answering Visual-Relational Queries in Web-Extracted Knowledge Graphs | cs.LG cs.AI | A visual-relational knowledge graph (KG) is a multi-relational graph whose
entities are associated with images. We explore novel machine learning
approaches for answering visual-relational queries in web-extracted knowledge
graphs. To this end, we have created ImageGraph, a KG with 1,330 relation
types, 14,870 entities, and 829,931 images crawled from the web. With
visual-relational KGs such as ImageGraph one can introduce novel probabilistic
query types in which images are treated as first-class citizens. Both the
prediction of relations between unseen images as well as multi-relational image
retrieval can be expressed with specific families of visual-relational queries.
We introduce novel combinations of convolutional networks and knowledge graph
embedding methods to answer such queries. We also explore a zero-shot learning
scenario where an image of an entirely new entity is linked with multiple
relations to entities of an existing KG. The resulting multi-relational
grounding of unseen entity images into a knowledge graph serves as a semantic
entity representation. We conduct experiments to demonstrate that the proposed
methods can answer these visual-relational queries efficiently and accurately.
| Daniel O\~noro-Rubio, Mathias Niepert, Alberto Garc\'ia-Dur\'an,
Roberto Gonz\'alez and Roberto J. L\'opez-Sastre | null | 1709.02314 | null | null |
Feature selection in high-dimensional dataset using MapReduce | cs.DC cs.LG stat.ML | This paper describes a distributed MapReduce implementation of the minimum
Redundancy Maximum Relevance algorithm, a popular feature selection method in
bioinformatics and network inference problems. The proposed approach handles
both tall/narrow and wide/short datasets. We further provide an open source
implementation based on Hadoop/Spark, and illustrate its scalability on
datasets involving millions of observations or features.
| Claudio Reggiani, Yann-A\"el Le Borgne, Gianluca Bontempi | null | 1709.02327 | null | null |
A Deep Reinforcement Learning Chatbot | cs.CL cs.AI cs.LG cs.NE stat.ML | We present MILABOT: a deep reinforcement learning chatbot developed by the
Montreal Institute for Learning Algorithms (MILA) for the Amazon Alexa Prize
competition. MILABOT is capable of conversing with humans on popular small talk
topics through both speech and text. The system consists of an ensemble of
natural language generation and retrieval models, including template-based
models, bag-of-words models, sequence-to-sequence neural network and latent
variable neural network models. By applying reinforcement learning to
crowdsourced data and real-world user interactions, the system has been trained
to select an appropriate response from the models in its ensemble. The system
has been evaluated through A/B testing with real-world users, where it
performed significantly better than many competing systems. Due to its machine
learning architecture, the system is likely to improve with additional data.
| Iulian V. Serban, Chinnadhurai Sankar, Mathieu Germain, Saizheng
Zhang, Zhouhan Lin, Sandeep Subramanian, Taesup Kim, Michael Pieper, Sarath
Chandar, Nan Rosemary Ke, Sai Rajeshwar, Alexandre de Brebisson, Jose M. R.
Sotelo, Dendi Suhubdy, Vincent Michalski, Alexandre Nguyen, Joelle Pineau,
Yoshua Bengio | null | 1709.02349 | null | null |
Adaptive PCA for Time-Varying Data | stat.ML cs.CV cs.LG | In this paper, we present an online adaptive PCA algorithm that is able to
compute the full dimensional eigenspace per new time-step of sequential data.
The algorithm is based on a one-step update rule that considers all second
order correlations between previous samples and the new time-step. Our
algorithm has O(n) complexity per new time-step in its deterministic mode and
O(1) complexity per new time-step in its stochastic mode. We test our algorithm
on a number of time-varying datasets of different physical phenomena. Explained
variance curves indicate that our technique provides an excellent approximation
to the original eigenspace computed using standard PCA in batch mode. In
addition, our experiments show that the stochastic mode, despite its much lower
computational complexity, converges to the same eigenspace computed using the
deterministic mode.
| Salaheddin Alakkari and John Dingliana | null | 1709.02373 | null | null |
How Does Knowledge of the AUC Constrain the Set of Possible Ground-truth
Labelings? | cs.LG | Recent work on privacy-preserving machine learning has considered how
data-mining competitions such as Kaggle could potentially be "hacked", either
intentionally or inadvertently, by using information from an oracle that
reports a classifier's accuracy on the test set. For binary classification
tasks in particular, one of the most common accuracy metrics is the Area Under
the ROC Curve (AUC), and in this paper we explore the mathematical structure of
how the AUC is computed from an n-vector of real-valued "guesses" with respect
to the ground-truth labels. We show how knowledge of a classifier's AUC on the
test set can constrain the set of possible ground-truth labelings, and we
derive an algorithm both to compute the exact number of such labelings and to
enumerate efficiently over them. Finally, we provide empirical evidence that,
surprisingly, the number of compatible labelings can actually decrease as n
grows, until a test set-dependent threshold is reached.
| Jacob Whitehill | null | 1709.02418 | null | null |
Deep Learning the Physics of Transport Phenomena | cs.LG physics.comp-ph | We have developed a new data-driven paradigm for the rapid inference,
modeling and simulation of the physics of transport phenomena by deep learning.
Using conditional generative adversarial networks (cGAN), we train models for
the direct generation of solutions to steady state heat conduction and
incompressible fluid flow purely on observation without knowledge of the
underlying governing equations. Rather than using iterative numerical methods
to approximate the solution of the constitutive equations, cGANs learn to
directly generate the solutions to these phenomena, given arbitrary boundary
conditions and domain, with high test accuracy (MAE$<$1\%) and state-of-the-art
computational performance. The cGAN framework can be used to learn causal
models directly from experimental observations where the underlying physical
model is complex or unknown.
| Amir Barati Farimani, Joseph Gomes, and Vijay S. Pande | null | 1709.02432 | null | null |
An Analysis of ISO 26262: Using Machine Learning Safely in Automotive
Software | cs.AI cs.LG cs.SE cs.SY | Machine learning (ML) plays an ever-increasing role in advanced automotive
functionality for driver assistance and autonomous operation; however, its
adequacy from the perspective of safety certification remains controversial. In
this paper, we analyze the impacts that the use of ML as an implementation
approach has on ISO 26262 safety lifecycle and ask what could be done to
address them. We then provide a set of recommendations on how to adapt the
standard to accommodate ML.
| Rick Salay, Rodrigo Queiroz, Krzysztof Czarnecki | null | 1709.02435 | null | null |
Network Vector: Distributed Representations of Networks with Global
Context | cs.SI cs.LG | We propose a neural embedding algorithm called Network Vector, which learns
distributed representations of nodes and the entire networks simultaneously. By
embedding networks in a low-dimensional space, the algorithm allows us to
compare networks in terms of structural similarity and to solve outstanding
predictive problems. Unlike alternative approaches that focus on node level
features, we learn a continuous global vector that captures each node's global
context by maximizing the predictive likelihood of random walk paths in the
network. Our algorithm is scalable to real world graphs with many nodes. We
evaluate our algorithm on datasets from diverse domains, and compare it with
state-of-the-art techniques in node classification, role discovery and concept
analogy tasks. The empirical results show the effectiveness and the efficiency
of our algorithm.
| Hao Wu, Kristina Lerman | null | 1709.02448 | null | null |
Reservoir of Diverse Adaptive Learners and Stacking Fast Hoeffding Drift
Detection Methods for Evolving Data Streams | stat.ML cs.DB cs.LG | The last decade has seen a surge of interest in adaptive learning algorithms
for data stream classification, with applications ranging from predicting ozone
level peaks, learning stock market indicators, to detecting computer security
violations. In addition, a number of methods have been developed to detect
concept drifts in these streams. Consider a scenario where we have a number of
classifiers with diverse learning styles and different drift detectors.
Intuitively, the current 'best' (classifier, detector) pair is application
dependent and may change as a result of the stream evolution. Our research
builds on this observation. We introduce the $\mbox{Tornado}$ framework that
implements a reservoir of diverse classifiers, together with a variety of drift
detection algorithms. In our framework, all (classifier, detector) pairs
proceed, in parallel, to construct models against the evolving data streams. At
any point in time, we select the pair which currently yields the best
performance. We further incorporate two novel stacking-based drift detection
methods, namely the $\mbox{FHDDMS}$ and $\mbox{FHDDMS}_{add}$ approaches. The
experimental evaluation confirms that the current 'best' (classifier, detector)
pair is not only heavily dependent on the characteristics of the stream, but
also that this selection evolves as the stream flows. Further, our
$\mbox{FHDDMS}$ variants detect concept drifts accurately in a timely fashion
while outperforming the state-of-the-art.
| Ali Pesaranghader, Herna Viktor and Eric Paquet | 10.1007/s10994-018-5719-z | 1709.02457 | null | null |
Inferring Generative Model Structure with Static Analysis | cs.LG cs.AI stat.ML | Obtaining enough labeled data to robustly train complex discriminative models
is a major bottleneck in the machine learning pipeline. A popular solution is
combining multiple sources of weak supervision using generative models. The
structure of these models affects training label quality, but is difficult to
learn without any ground truth labels. We instead rely on these weak
supervision sources having some structure by virtue of being encoded
programmatically. We present Coral, a paradigm that infers generative model
structure by statically analyzing the code for these heuristics, thus reducing
the data required to learn structure significantly. We prove that Coral's
sample complexity scales quasilinearly with the number of heuristics and number
of relations found, improving over the standard sample complexity, which is
exponential in $n$ for identifying $n^{\textrm{th}}$ degree relations.
Experimentally, Coral matches or outperforms traditional structure learning
approaches by up to 3.81 F1 points. Using Coral to model dependencies instead
of assuming independence results in better performance than a fully supervised
model by 3.07 accuracy points when heuristics are used to label radiology data
without ground truth labels.
| Paroma Varma, Bryan He, Payal Bajaj, Imon Banerjee, Nishith Khandwala,
Daniel L. Rubin, Christopher R\'e | null | 1709.02477 | null | null |
Mirror Descent Search and its Acceleration | cs.LG | In recent years, attention has been focused on the relationship between
black-box optimization problems and reinforcement learning problems. In this
research, we propose the Mirror Descent Search (MDS) algorithm which is
applicable both to black-box optimization problems and reinforcement
learning problems. Our method is based on the mirror descent method, which is a
general optimization algorithm. The contribution of this research is roughly
twofold. We propose two essential algorithms, called MDS and Accelerated Mirror
Descent Search (AMDS), and two more approximate algorithms: Gaussian Mirror
Descent Search (G-MDS) and Gaussian Accelerated Mirror Descent Search (G-AMDS).
This research shows that the advanced methods developed in the context of
mirror descent research can be applied to reinforcement learning problems. We
also clarify the relationship between an existing reinforcement learning
algorithm and our method. With two evaluation experiments, we show our proposed
algorithms converge faster than some state-of-the-art methods.
| Megumi Miyashita, Shiro Yano, Toshiyuki Kondo | 10.1016/j.robot.2018.04.009 | 1709.02535 | null | null |
DeepFense: Online Accelerated Defense Against Adversarial Deep Learning | cs.CR cs.LG stat.ML | Recent advances in adversarial Deep Learning (DL) have opened up a largely
unexplored surface for malicious attacks jeopardizing the integrity of
autonomous DL systems. With the wide-spread usage of DL in critical and
time-sensitive applications, including unmanned vehicles, drones, and video
surveillance systems, online detection of malicious inputs is of utmost
importance. We propose DeepFense, the first end-to-end automated framework that
simultaneously enables efficient and safe execution of DL models. DeepFense
formalizes the goal of thwarting adversarial attacks as an optimization problem
that minimizes the rarely observed regions in the latent feature space spanned
by a DL network. To solve the aforementioned minimization problem, a set of
complementary but disjoint modular redundancies are trained to validate the
legitimacy of the input samples in parallel with the victim DL model. DeepFense
leverages hardware/software/algorithm co-design and customized acceleration to
achieve just-in-time performance in resource-constrained settings. The proposed
countermeasure is unsupervised, meaning that no adversarial sample is leveraged
to train modular redundancies. We further provide an accompanying API to reduce
the non-recurring engineering cost and ensure automated adaptation to various
platforms. Extensive evaluations on FPGAs and GPUs demonstrate up to two orders
of magnitude performance improvement while enabling online adversarial sample
detection.
| Bita Darvish Rouhani, Mohammad Samragh, Mojan Javaheripi, Tara Javidi,
Farinaz Koushanfar | null | 1709.02538 | null | null |
The Expressive Power of Neural Networks: A View from the Width | cs.LG | The expressive power of neural networks is important for understanding deep
learning. Most existing works consider this problem from the view of the depth
of a network. In this paper, we study how width affects the expressiveness of
neural networks. Classical results state that depth-bounded (e.g. depth-$2$)
networks with suitable activation functions are universal approximators. We
show a universal approximation theorem for width-bounded ReLU networks:
width-$(n+4)$ ReLU networks, where $n$ is the input dimension, are universal
approximators. Moreover, except for a measure zero set, all functions cannot be
approximated by width-$n$ ReLU networks, which exhibits a phase transition.
Several recent works demonstrate the benefits of depth by proving the
depth-efficiency of neural networks. That is, there are classes of deep
networks which cannot be realized by any shallow network whose size is no more
than an exponential bound. Here we pose the dual question on the
width-efficiency of ReLU networks: Are there wide networks that cannot be
realized by narrow networks whose size is not substantially larger? We show
that there exist classes of wide networks which cannot be realized by any
narrow network whose depth is no more than a polynomial bound. On the other
hand, we demonstrate by extensive experiments that narrow networks whose size
exceed the polynomial bound by a constant factor can approximate wide and
shallow network with high accuracy. Our results provide more comprehensive
evidence that depth is more effective than width for the expressiveness of ReLU
networks.
| Zhou Lu, Hongming Pu, Feicheng Wang, Zhiqiang Hu, Liwei Wang | null | 1709.0254 | null | null |
Causality-Aided Falsification | cs.SY cs.AI cs.LG cs.LO | Falsification is drawing attention in quality assurance of heterogeneous
systems whose complexities are beyond most verification techniques'
scalability. In this paper we introduce the idea of causality aid in
falsification: by providing a falsification solver -- that relies on stochastic
optimization of a certain cost function -- with suitable causal information
expressed by a Bayesian network, the search for a falsifying input value can be
efficient. Our experiment results show the idea's viability.
| Takumi Akazaki (1), Yoshihiro Kumazawa (1), Ichiro Hasuo (2) ((1)
University of Tokyo, (2) National Institute of Informatics) | 10.4204/EPTCS.257.2 | 1709.02555 | null | null |
Deep learning for undersampled MRI reconstruction | stat.ML cs.LG physics.med-ph | This paper presents a deep learning method for faster magnetic resonance
imaging (MRI) by reducing k-space data with sub-Nyquist sampling strategies and
provides a rationale for why the proposed approach works well. Uniform
subsampling is used in the time-consuming phase-encoding direction to capture
high-resolution image information, while permitting the image-folding problem
dictated by the Poisson summation formula. To deal with the localization
uncertainty due to image folding, very few low-frequency k-space data are
added. Training the deep learning net involves input and output images that are
pairs of Fourier transforms of the subsampled and fully sampled k-space data.
Numerous experiments show the remarkable performance of the proposed method;
only 29% of k-space data can generate images of high quality as effectively as
standard MRI reconstruction with fully sampled data.
| Chang Min Hyun, Hwa Pyung Kim, Sung Min Lee, Sungchul Lee and Jin Keun
Seo | 10.1088/1361-6560/aac71a | 1709.02576 | null | null |
Gaussian Quadrature for Kernel Features | cs.LG | Kernel methods have recently attracted resurgent interest, showing
performance competitive with deep neural networks in tasks such as speech
recognition. The random Fourier features map is a technique commonly used to
scale up kernel machines, but employing the randomized feature map means that
$O(\epsilon^{-2})$ samples are required to achieve an approximation error of at
most $\epsilon$. We investigate some alternative schemes for constructing
feature maps that are deterministic, rather than random, by approximating the
kernel in the frequency domain using Gaussian quadrature. We show that
deterministic feature maps can be constructed, for any $\gamma > 0$, to achieve
error $\epsilon$ with $O(e^{e^\gamma} + \epsilon^{-1/\gamma})$ samples as
$\epsilon$ goes to 0. Our method works particularly well with sparse ANOVA
kernels, which are inspired by the convolutional layer of CNNs. We validate our
methods on datasets in different domains, such as MNIST and TIMIT, showing that
deterministic features are faster to generate and achieve accuracy comparable
to the state-of-the-art kernel methods based on random Fourier features.
| Tri Dao, Christopher De Sa, Christopher R\'e | null | 1709.02605 | null | null |
Deep Packet: A Novel Approach For Encrypted Traffic Classification Using
Deep Learning | cs.LG cs.CR cs.NI | Internet traffic classification has become more important with rapid growth
of current Internet network and online applications. There have been numerous
studies on this topic which have led to many different approaches. Most of
these approaches use predefined features extracted by an expert in order to
classify network traffic. In contrast, in this study, we propose a \emph{deep
learning} based approach which integrates both feature extraction and
classification phases into one system. Our proposed scheme, called "Deep
Packet," can handle both \emph{traffic characterization} in which the network
traffic is categorized into major classes (\eg, FTP and P2P) and application
identification in which end-user applications (\eg, BitTorrent and Skype)
identification is desired. Contrary to most of the current methods, Deep Packet
can identify encrypted traffic and also distinguishes between VPN and non-VPN
network traffic. After an initial pre-processing phase on data, packets are fed
into Deep Packet framework that embeds stacked autoencoder and convolution
neural network in order to classify network traffic. Deep packet with CNN as
its classification model achieved recall of $0.98$ in application
identification task and $0.94$ in traffic categorization task. To the best of
our knowledge, Deep Packet outperforms all of the proposed classification
methods on UNB ISCX VPN-nonVPN dataset.
| Mohammad Lotfollahi, Ramin Shirali Hossein Zade, Mahdi Jafari
Siavoshani, Mohammdsadegh Saberian | null | 1709.02656 | null | null |
Multi-level Feedback Web Links Selection Problem: Learning and
Optimization | cs.LG | Selecting the right web links for a website is important because appropriate
links not only can provide high attractiveness but can also increase the
website's revenue. In this work, we first show that web links have an intrinsic
\emph{multi-level feedback structure}. For example, consider a $2$-level
feedback web link: the $1$st level feedback provides the Click-Through Rate
(CTR) and the $2$nd level feedback provides the potential revenue, which
collectively produce the compound $2$-level revenue. We consider the
context-free links selection problem of selecting links for a homepage so as to
maximize the total compound $2$-level revenue while keeping the total $1$st
level feedback above a preset threshold. We further generalize the problem to
links with $n~(n\ge2)$-level feedback structure. The key challenge is that the
links' multi-level feedback structures are unobservable unless the links are
selected on the homepage. To the best of our knowledge, we are the first to model the
links selection problem as a constrained multi-armed bandit problem and design
an effective links selection algorithm by learning the links' multi-level
structure with provable \emph{sub-linear} regret and violation bounds. We
uncover the multi-level feedback structures of web links in two real-world
datasets. We also conduct extensive experiments on the datasets to compare our
proposed \textbf{LExp} algorithm with two state-of-the-art context-free bandit
algorithms and show that \textbf{LExp} algorithm is the most effective in links
selection while satisfying the constraint.
| Kechao Cai, Kun Chen, Longbo Huang, John C.S. Lui | null | 1709.02664 | null | null |
Learning Populations of Parameters | cs.LG | Consider the following estimation problem: there are $n$ entities, each with
an unknown parameter $p_i \in [0,1]$, and we observe $n$ independent random
variables, $X_1,\ldots,X_n$, with $X_i \sim $ Binomial$(t, p_i)$. How
accurately can one recover the "histogram" (i.e. cumulative density function)
of the $p_i$'s? While the empirical estimates would recover the histogram to
earth mover distance $\Theta(\frac{1}{\sqrt{t}})$ (equivalently, $\ell_1$
distance between the CDFs), we show that, provided $n$ is sufficiently large,
we can achieve error $O(\frac{1}{t})$ which is information theoretically
optimal. We also extend our results to the multi-dimensional parameter case,
capturing settings where each member of the population has multiple associated
parameters. Beyond the theoretical results, we demonstrate that the recovery
algorithm performs well in practice on a variety of datasets, providing
illuminating insights into several domains, including politics, sports
analytics, and variation in the gender ratio of offspring.
| Kevin Tian, Weihao Kong, Gregory Valiant | null | 1709.02707 | null | null |
A Modular Analysis of Adaptive (Non-)Convex Optimization: Optimism,
Composite Objectives, and Variational Bounds | cs.LG math.OC stat.ML | Recently, much work has been done on extending the scope of online learning
and incremental stochastic optimization algorithms. In this paper we contribute
to this effort in two ways: First, based on a new regret decomposition and a
generalization of Bregman divergences, we provide a self-contained, modular
analysis of the two workhorses of online learning: (general) adaptive versions
of Mirror Descent (MD) and the Follow-the-Regularized-Leader (FTRL) algorithms.
The analysis is done with extra care so as not to introduce assumptions not
needed in the proofs and allows to combine, in a straightforward way, different
algorithmic ideas (e.g., adaptivity, optimism, implicit updates) and learning
settings (e.g., strongly convex or composite objectives). This way we are able
to reprove, extend and refine a large body of the literature, while keeping the
proofs concise. The second contribution is a byproduct of this careful
analysis: We present algorithms with improved variational bounds for smooth,
composite objectives, including a new family of optimistic MD algorithms with
only one projection step per round. Furthermore, we provide a simple extension
of adaptive regret bounds to practically relevant non-convex problem settings
with essentially no extra effort.
| Pooria Joulani, Andr\'as Gy\"orgy, Csaba Szepesv\'ari | null | 1709.02726 | null | null |
Cycles in adversarial regularized learning | cs.GT cs.LG | Regularized learning is a fundamental technique in online optimization,
machine learning and many other fields of computer science. A natural question
that arises in these settings is how regularized learning algorithms behave
when faced against each other. We study a natural formulation of this problem
by coupling regularized learning dynamics in zero-sum games. We show that the
system's behavior is Poincar\'e recurrent, implying that almost every
trajectory revisits any (arbitrarily small) neighborhood of its starting point
infinitely often. This cycling behavior is robust to the agents' choice of
regularization mechanism (each agent could be using a different regularizer),
to positive-affine transformations of the agents' utilities, and it also
persists in the case of networked competition, i.e., for zero-sum polymatrix
games.
| Panayotis Mertikopoulos, Christos Papadimitriou and Georgios Piliouras | null | 1709.02738 | null | null |
Privacy Loss in Apple's Implementation of Differential Privacy on MacOS
10.12 | cs.CR cs.CY cs.LG | In June 2016, Apple announced that it will deploy differential privacy for
some user data collection in order to ensure privacy of user data, even from
Apple. The details of Apple's approach remained sparse. Although several
patents have since appeared hinting at the algorithms that may be used to
achieve differential privacy, they did not include a precise explanation of the
approach taken to privacy parameter choice. Such choice and the overall
approach to privacy budget use and management are key questions for
understanding the privacy protections provided by any deployment of
differential privacy.
In this work, through a combination of experiments, static and dynamic code
analysis of macOS Sierra (Version 10.12) implementation, we shed light on the
choices Apple made for privacy budget management. We discover and describe
Apple's set-up for differentially private data processing, including the
overall data pipeline, the parameters used for differentially private
perturbation of each piece of data, and the frequency with which such data is
sent to Apple's servers.
We find that although Apple's deployment ensures that the (differential)
privacy loss per each datum submitted to its servers is $1$ or $2$, the overall
privacy loss permitted by the system is significantly higher, as high as $16$
per day for the four initially announced applications of Emojis, New words,
Deeplinks and Lookup Hints. Furthermore, Apple renews the privacy budget
available every day, which leads to a possible privacy loss of 16 times the
number of days since user opt-in to differentially private data collection for
those four applications.
We advocate that in order to claim the full benefits of differentially
private data collection, Apple must give full transparency of its
implementation, enable user choice in areas related to privacy loss, and set
meaningful defaults on the privacy loss permitted.
| Jun Tang, Aleksandra Korolova, Xiaolong Bai, Xueqiang Wang, Xiaofeng
Wang | null | 1709.02753 | null | null |
Semantic Preserving Embeddings for Generalized Graphs | cs.AI cs.LG | A new approach to the study of Generalized Graphs as semantic data structures
using machine learning techniques is presented. We show how vector
representations maintaining semantic characteristics of the original data can
be obtained from a given graph using neural encoding architectures and
considering the topological properties of the graph. Semantic features of these
new representations are tested by using some machine learning tasks and new
directions on efficient link discovery, entity retrieval and long-distance
query methodologies on large relational datasets are investigated using real
datasets.
----
In this work, a new approach in the context of multi-relational machine
learning for the study of Generalized Graphs is presented. We show how vector
representations that preserve semantic characteristics of the original graph
can be obtained using neural encoders and considering the topological
properties of the graph. In addition, the semantic features captured by these
new representations are evaluated, and new efficient methodologies related to
Link Discovery, Entity Retrieval and long-distance queries on large relational
datasets are investigated using real databases.
| Pedro Almagro-Blanco, Fernando Sancho-Caparrini | null | 1709.02759 | null | null |
On the exact relationship between the denoising function and the data
distribution | cs.NE cs.LG stat.ML | We prove an exact relationship between the optimal denoising function and the
data distribution in the case of additive Gaussian noise, showing that
denoising implicitly models the structure of data allowing it to be exploited
in the unsupervised learning of representations. This result generalizes a
known relationship [2], which is valid only in the limit of small corruption
noise.
| Heikki Arponen, Matti Herranen, Harri Valpola | null | 1709.02797 | null | null |
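For additive Gaussian noise, the relationship referred to above is classically expressed by the Tweedie/Miyasawa identity: the MMSE denoiser is $g(y) = y + \sigma^2 \nabla_y \log p_y(y)$. The snippet below checks this identity in the one case where everything is closed form, Gaussian data; it is a numerical sanity check of the known relation, not the paper's proof.

```python
import numpy as np

# Tweedie/Miyasawa identity: for y = x + eps, eps ~ N(0, sigma^2), the MMSE
# denoiser is g(y) = y + sigma^2 * d/dy log p_y(y), where p_y is the noisy
# marginal. With Gaussian data x ~ N(mu0, s^2), both sides are closed form.
mu0, s, sigma = 1.5, 2.0, 0.7
y = np.linspace(-5.0, 8.0, 200)

# Score of the noisy marginal p_y = N(mu0, s^2 + sigma^2).
score = -(y - mu0) / (s**2 + sigma**2)
denoiser_tweedie = y + sigma**2 * score

# Direct Gaussian posterior mean E[x | y] for comparison.
denoiser_posterior = (s**2 * y + sigma**2 * mu0) / (s**2 + sigma**2)

print("max discrepancy:", float(np.max(np.abs(denoiser_tweedie - denoiser_posterior))))
```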
GOOWE: Geometrically Optimum and Online-Weighted Ensemble Classifier for
Evolving Data Streams | cs.LG | Designing adaptive classifiers for an evolving data stream is a challenging
task due to the data size and its dynamically changing nature. Combining
individual classifiers in an online setting, the ensemble approach, is a
well-known solution. It is possible that a subset of classifiers in the
ensemble outperforms others in a time-varying fashion. However, optimum weight
assignment for component classifiers is a problem which is not yet fully
addressed in online evolving environments. We propose a novel data stream
ensemble classifier, called Geometrically Optimum and Online-Weighted Ensemble
(GOOWE), which assigns optimum weights to the component classifiers using a
sliding window containing the most recent data instances. We map vote scores of
individual classifiers and true class labels into a spatial environment. Based
on the Euclidean distance between vote scores and ideal-points, and using the
linear least squares (LSQ) solution, we present a novel, dynamic, and online
weighting approach. While LSQ has been used for batch-mode ensemble
classifiers, this is the first time it is adapted to online environments, by
providing a spatial modeling of online ensembles. In order to show the robustness of the
proposed algorithm, we use real-world datasets and synthetic data generators
using the MOA libraries. First, we analyze the impact of our weighting system
on prediction accuracy through two scenarios. Second, we compare GOOWE with 8
state-of-the-art ensemble classifiers in a comprehensive experimental
environment. Our experiments show that GOOWE provides improved reactions to
different types of concept drift compared to our baselines. The statistical
tests indicate a significant improvement in accuracy, with conservative time
and memory requirements.
| Hamed R. Bonab and Fazli Can | null | 1709.028 | null | null |
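The core weighting step described in the abstract, mapping component vote scores and one-hot "ideal points" into a linear least-squares problem, can be sketched as follows. Window management, drift handling, and the online update are omitted, and all shapes and scores here are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Window of vote scores: n_inst instances, n_clf component classifiers, each
# emitting a score vector over n_cls class labels (synthetic placeholders).
n_inst, n_clf, n_cls = 50, 4, 3
scores = rng.dirichlet(np.ones(n_cls), size=(n_inst, n_clf))
labels = rng.integers(0, n_cls, size=n_inst)
ideal = np.eye(n_cls)[labels]              # one-hot "ideal points"

# Solve min_w sum_j || sum_k w_k * scores[j, k] - ideal[j] ||^2 as a linear LSQ.
A = scores.transpose(0, 2, 1).reshape(n_inst * n_cls, n_clf)
b = ideal.reshape(n_inst * n_cls)
w, *_ = np.linalg.lstsq(A, b, rcond=None)

ensemble = scores.transpose(0, 2, 1) @ w   # weighted combined scores, (n_inst, n_cls)
print("component weights:", np.round(w, 3))
```

By construction the least-squares weights do at least as well on the window as uniform weighting, which is the point of the geometric fitting step.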
Towards Proving the Adversarial Robustness of Deep Neural Networks | cs.LG cs.CR cs.LO stat.ML | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed.
| Guy Katz (Stanford University), Clark Barrett (Stanford University),
David L. Dill (Stanford University), Kyle Julian (Stanford University), Mykel
J. Kochenderfer (Stanford University) | 10.4204/EPTCS.257.3 | 1709.02802 | null | null |
A Brief Introduction to Machine Learning for Engineers | cs.LG cs.IT math.IT stat.ML | This monograph aims at providing an introduction to key concepts, algorithms,
and theoretical results in machine learning. The treatment concentrates on
probabilistic models for supervised and unsupervised learning problems. It
introduces fundamental concepts and algorithms by building on first principles,
while also exposing the reader to more advanced topics with extensive pointers
to the literature, within a unified notation and mathematical framework. The
material is organized according to clearly defined categories, such as
discriminative and generative models, frequentist and Bayesian approaches,
exact and approximate inference, as well as directed and undirected models.
This monograph is meant as an entry point for researchers with a background in
probability and linear algebra.
| Osvaldo Simeone | null | 1709.0284 | null | null |
TensorFlow Agents: Efficient Batched Reinforcement Learning in
TensorFlow | cs.LG cs.AI | We introduce TensorFlow Agents, an efficient infrastructure paradigm for
building parallel reinforcement learning algorithms in TensorFlow. We simulate
multiple environments in parallel, and group them to perform the neural network
computation on a batch rather than individual observations. This allows the
TensorFlow execution engine to parallelize computation, without the need for
manual synchronization. Environments are stepped in separate Python processes
to progress them in parallel without interference of the global interpreter
lock. As part of this project, we introduce BatchPPO, an efficient
implementation of the proximal policy optimization algorithm. By open sourcing
TensorFlow Agents, we hope to provide a flexible starting point for future
projects that accelerates future research in the field.
| Danijar Hafner, James Davidson, Vincent Vanhoucke | null | 1709.02878 | null | null |
Optimization assisted MCMC | stat.CO cs.LG | Markov Chain Monte Carlo (MCMC) sampling methods are widely used but often
encounter either slow convergence or biased sampling when applied to multimodal
high dimensional distributions. In this paper, we present a general framework
of improving classical MCMC samplers by employing a global optimization method.
The global optimization method first reduces a high-dimensional search to a
one-dimensional geodesic to find a starting point close to a local mode. The
search is accelerated and completed by using a local search method such as
BFGS. We modify the target distribution by extracting a local Gaussian
distribution around the found mode. The process is repeated to find all the
modes during sampling on the fly. We integrate the optimization algorithm into
the Wormhole Hamiltonian Monte Carlo (WHMC) method. Experimental results show
that, when applied to high dimensional, multimodal Gaussian mixture models and
the network sensor localization problem, the proposed method achieves much
faster convergence, with relative error from the mean improved by about an
order of magnitude over WHMC in some cases.
| Ricky Fok, Aijun An, Xiaogang Wang | null | 1709.02888 | null | null |
Convolutional Dictionary Learning: A Comparative Review and New
Algorithms | cs.LG eess.IV stat.ML | Convolutional sparse representations are a form of sparse representation with
a dictionary that has a structure that is equivalent to convolution with a set
of linear filters. While effective algorithms have recently been developed for
the convolutional sparse coding problem, the corresponding dictionary learning
problem is substantially more challenging. Furthermore, although a number of
different approaches have been proposed, the absence of thorough comparisons
between them makes it difficult to determine which of them represents the
current state of the art. The present work both addresses this deficiency and
proposes some new approaches that outperform existing ones in certain contexts.
A thorough set of performance comparisons indicates a very wide range of
performance differences among the existing and proposed methods, and clearly
identifies those that are the most effective.
| Cristina Garcia-Cardona and Brendt Wohlberg | 10.1109/TCI.2018.2840334 | 1709.02893 | null | null |
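As a reference point for the model being learned, the snippet below synthesizes a 1-D signal from the convolutional sparse representation itself: a sum of short filters convolved with sparse coefficient maps. It illustrates only the representation; the coding and dictionary-learning algorithms compared in the paper are far more involved, and all sizes here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Convolutional sparse model: a signal is a sum of short dictionary filters
# d_k convolved with sparse coefficient maps x_k (all values synthetic).
sig_len, filt_len, n_filt, n_active = 256, 9, 3, 5
d = rng.standard_normal((n_filt, filt_len))
x = np.zeros((n_filt, sig_len))
for k in range(n_filt):
    idx = rng.choice(sig_len - filt_len, size=n_active, replace=False)
    x[k, idx] = rng.standard_normal(n_active)

s = sum(np.convolve(x[k], d[k])[:sig_len] for k in range(n_filt))
print(f"signal length {s.shape[0]}, nonzero coefficients {np.count_nonzero(x)}")
```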
Simultaneously Learning Neighborship and Projection Matrix for
Supervised Dimensionality Reduction | cs.CV cs.LG stat.ML | Explicitly or implicitly, most dimensionality reduction methods need to
determine which samples are neighbors and the similarity between the neighbors
in the original high-dimensional space. The projection matrix is then learned on
the assumption that the neighborhood information (e.g., the similarity) is
known and fixed prior to learning. However, it is difficult to precisely
measure the intrinsic similarity of samples in high-dimensional space because
of the curse of dimensionality. Consequently, the neighbors selected according
to such similarity, and the projection matrix obtained from that similarity
and those neighbors, may not be optimal in terms of classification and
generalization. To overcome the drawbacks, in this paper we propose to let the
similarity and neighbors be variables and model them in low-dimensional space.
Both the optimal similarity and projection matrix are obtained by minimizing a
unified objective function. Nonnegative and sum-to-one constraints on the
similarity are adopted. Instead of empirically setting the regularization
parameter, we treat it as a variable to be optimized. It is interesting that
the optimal regularization parameter is adaptive to the neighbors in
low-dimensional space and has intuitive meaning. Experimental results on the
YALE B, COIL-100, and MNIST datasets demonstrate the effectiveness of the
proposed method.
| Yanwei Pang, Bo Zhou, and Feiping Nie | null | 1709.02896 | null | null |
A Simple Analysis for Exp-concave Empirical Minimization with Arbitrary
Convex Regularizer | stat.ML cs.LG math.OC | In this paper, we present a simple analysis of {\bf fast rates} with {\it
high probability} of {\bf empirical minimization} for {\it stochastic composite
optimization} over a finite-dimensional bounded convex set with exponential
concave loss functions and an arbitrary convex regularization. To the best of
our knowledge, this result is the first of its kind. As a byproduct, we can
directly obtain the fast rate with {\it high probability} for exponential
concave empirical risk minimization with and without any convex regularization,
which not only extends existing results of empirical risk minimization but also
provides a unified framework for analyzing exponential concave empirical risk
minimization with and without {\it any} convex regularization. Our proof is
very simple, exploiting only the covering number of a finite-dimensional
bounded set and a concentration inequality for random vectors.
| Tianbao Yang, Zhe Li, Lijun Zhang | null | 1709.02909 | null | null |
Less Is More: A Comprehensive Framework for the Number of Components of
Ensemble Classifiers | cs.LG stat.ML | The number of component classifiers chosen for an ensemble greatly impacts
the prediction ability. In this paper, we use a geometric framework for a
priori determining the ensemble size, which is applicable to most of existing
batch and online ensemble classifiers. There are only a limited number of
studies on the ensemble size examining Majority Voting (MV) and Weighted
Majority Voting (WMV). Almost all of them are designed for batch-mode, hardly
addressing online environments. Big data dimensions and resource limitations,
in terms of time and memory, make determination of ensemble size crucial,
especially for online environments. For the MV aggregation rule, our framework
proves that the more strong components we add to the ensemble, the more
accurate predictions we can achieve. For the WMV aggregation rule, our
framework proves the existence of an ideal number of components, which is equal
to the number of class labels, with the premise that components are completely
independent of each other and strong enough. While giving the exact definition
for a strong and independent classifier in the context of an ensemble is a
challenging task, our proposed geometric framework provides a theoretical
explanation of diversity and its impact on the accuracy of predictions. We
conduct a series of experimental evaluations to show the practical value of our
theorems and existing challenges.
| Hamed Bonab and Fazli Can | null | 1709.02925 | null | null |
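The MV-side claim, that adding more strong, independent components keeps improving accuracy, is easy to see in a Condorcet-style simulation (a numerical illustration of the theorem's premise, not its proof; the component accuracy is an arbitrary value above 0.5):

```python
import numpy as np

rng = np.random.default_rng(3)

# Majority voting with independent components of accuracy p_correct > 0.5:
# in a Condorcet-style simulation, accuracy rises with the ensemble size.
p_correct, n_trials = 0.7, 20_000

def mv_accuracy(n_components: int) -> float:
    votes = rng.random((n_trials, n_components)) < p_correct  # True = correct vote
    return float(np.mean(votes.sum(axis=1) > n_components / 2))

sizes = (1, 5, 15, 51)                      # odd sizes avoid ties
accs = [mv_accuracy(m) for m in sizes]
print({m: round(a, 3) for m, a in zip(sizes, accs)})
```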
Deep Residual Networks and Weight Initialization | cs.LG stat.ML | Residual Network (ResNet) is the state-of-the-art architecture that realizes
successful training of very deep neural networks. It is also known that good
weight initialization of neural network avoids problem of vanishing/exploding
gradients. In this paper, simplified models of ResNets are analyzed. We argue
that the effectiveness of ResNets is correlated with the fact that they are
relatively insensitive to the choice of initial weights. We also demonstrate how batch
normalization improves backpropagation of deep ResNets without tuning initial
values of weights.
| Masato Taki | null | 1709.02956 | null | null |
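The claimed insensitivity to initialization can be illustrated with a toy forward pass: with a deliberately too-small weight scale, a plain deep linear stack drives the signal toward zero, while identity skip connections keep it alive. The width, depth, and scale below are arbitrary, and this linear toy is only a sketch of the mechanism, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(4)

# With a deliberately too-small weight scale, a plain deep linear stack drives
# the signal toward zero, while identity skip connections preserve it.
width, depth = 64, 50
scale = 0.5 / np.sqrt(width)               # bad initialization on purpose

x = rng.standard_normal(width)
plain, resid = x.copy(), x.copy()
for _ in range(depth):
    w = scale * rng.standard_normal((width, width))
    plain = w @ plain                      # plain layer: x <- W x
    resid = resid + w @ resid              # residual block: x <- x + W x

print(f"plain ||x|| = {np.linalg.norm(plain):.2e}, "
      f"residual ||x|| = {np.linalg.norm(resid):.2e}")
```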
RDeepSense: Reliable Deep Mobile Computing Models with Uncertainty
Estimations | cs.LG cs.NI | Recent advances in deep learning have led various applications to
unprecedented achievements, which could potentially bring higher intelligence
to a broad spectrum of mobile and ubiquitous applications. Although existing
studies have demonstrated the effectiveness and feasibility of running deep
neural network inference operations on mobile and embedded devices, they
overlooked the reliability of mobile computing models. Reliability measurements
such as predictive uncertainty estimations are key factors for improving the
decision accuracy and user experience. In this work, we propose RDeepSense, the
first deep learning model that provides well-calibrated uncertainty estimations
for resource-constrained mobile and embedded devices. RDeepSense enables the
predictive uncertainty by adopting a tunable proper scoring rule as the
training criterion and dropout as the implicit Bayesian approximation, which
theoretically proves its correctness. To reduce the computational complexity,
RDeepSense employs efficient dropout and predictive distribution estimation
instead of model ensembles or sampling-based methods for inference operations. We
evaluate RDeepSense with four mobile sensing applications using Intel Edison
devices. Results show that RDeepSense can reduce around 90% of the energy
consumption while producing superior uncertainty estimations and preserving at
least the same model accuracy compared with other state-of-the-art methods.
| Shuochao Yao, Yiran Zhao, Huajie Shao, Aston Zhang, Chao Zhang, Shen
Li, Tarek Abdelzaher | null | 1709.0298 | null | null |
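The mechanism RDeepSense builds on, dropout kept active at inference as an implicit Bayesian approximation, can be sketched in a few lines: run many stochastic forward passes and report the spread as an uncertainty estimate. The tiny tanh network, weights, and dropout rate below are illustrative stand-ins, and this naive sampling loop is exactly the inference cost the paper's distribution-estimation trick avoids.

```python
import numpy as np

rng = np.random.default_rng(5)

# Dropout kept active at inference: average many stochastic forward passes of
# a tiny fixed network and report the spread as a predictive uncertainty.
w1 = rng.standard_normal((16, 3))          # toy weights, arbitrary values
w2 = rng.standard_normal(16)
drop_p, n_samples = 0.5, 500
x_in = np.array([0.2, -1.0, 0.5])

h = np.tanh(w1 @ x_in)                     # deterministic hidden activations
preds = np.empty(n_samples)
for i in range(n_samples):
    mask = (rng.random(16) > drop_p) / (1.0 - drop_p)   # inverted dropout
    preds[i] = w2 @ (h * mask)

print(f"prediction {preds.mean():.3f} +/- {preds.std():.3f}")
```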
Identifying Irregular Power Usage by Turning Predictions into
Holographic Spatial Visualizations | cs.LG cs.AI cs.HC | Power grids are critical infrastructure assets that face non-technical losses
(NTL) such as electricity theft or faulty meters. NTL may range up to 40% of
the total electricity distributed in emerging countries. Industrial NTL
detection systems are still largely based on expert knowledge when deciding
whether to carry out costly on-site inspections of customers. Electricity
providers are reluctant to move to large-scale deployments of automated systems
that learn NTL profiles from data due to the latter's propensity to suggest a
large number of unnecessary inspections. In this paper, we propose a novel
system that combines automated statistical decision making with expert
knowledge. First, we propose a machine learning framework that classifies
customers into NTL or non-NTL using a variety of features derived from the
customers' consumption data. The methodology used is specifically tailored to
the level of noise in the data. Second, in order to allow human experts to feed
their knowledge in the decision loop, we propose a method for visualizing
prediction results at various granularity levels in a spatial hologram. Our
approach allows domain experts to put the classification results into the
context of the data and to incorporate their knowledge for making the final
decisions of which customers to inspect. This work has resulted in appreciable
results on a real-world data set of 3.6M customers. Our system is being
deployed in a commercial NTL detection software.
| Patrick Glauner, Niklas Dahringer, Oleksandr Puhachov, Jorge Augusto
Meira, Petko Valtchev, Radu State, Diogo Duarte | null | 1709.03008 | null | null |
Classifying Unordered Feature Sets with Convolutional Deep Averaging
Networks | cs.LG stat.ML | Unordered feature sets are a nonstandard data structure that traditional
neural networks are incapable of addressing in a principled manner. Providing a
concatenation of features in an arbitrary order may lead to the learning of
spurious patterns or biases that do not actually exist. Another complication is
introduced if the number of features varies from set to set. We propose
convolutional deep averaging networks (CDANs) for classifying and learning
representations of datasets whose instances comprise variable-size, unordered
feature sets. CDANs are efficient, permutation-invariant, and capable of
accepting sets of arbitrary size. We emphasize the importance of nonlinear
feature embeddings for obtaining effective CDAN classifiers and illustrate
their advantages in experiments versus linear embeddings and alternative
permutation-invariant and -equivariant architectures.
| Andrew Gardner and Jinko Kanno and Christian A. Duncan and Rastko R.
Selmic | null | 1709.03019 | null | null |
Robust Sparse Coding via Self-Paced Learning | cs.LG cs.CV | Sparse coding (SC) is attracting more and more attention due to its
comprehensive theoretical studies and its excellent performance in many signal
processing applications. However, most existing sparse coding algorithms are
nonconvex and are thus prone to becoming stuck into bad local minima,
especially when there are outliers and noisy data. To enhance the learning
robustness, in this paper, we propose a unified framework named Self-Paced
Sparse Coding (SPSC), which gradually includes matrix elements into SC
learning from easy to complex. We also generalize the self-paced learning
schema to different levels of dynamic selection, on samples, features, and elements
respectively. Experimental results on real-world data demonstrate the efficacy
of the proposed algorithms.
| Xiaodong Feng, Zhiwei Tang, Sen Wu | null | 1709.0303 | null | null |
Abductive Matching in Question Answering | cs.CL cs.LG | We study question-answering over semi-structured data. We introduce a new way
to apply semantic parsing, using machine learning only to
provide annotations that the system infers to be missing; all the other parsing
logic is in the form of manually authored rules. In effect, the machine
learning is used to provide non-syntactic matches, a step that is ill-suited to
manual rules. The advantage of this approach is in its debuggability and in its
transparency to the end-user. We demonstrate the effectiveness of the approach
by achieving state-of-the-art performance of 40.42% accuracy on a standard
benchmark dataset over tables from Wikipedia.
| Kedar Dhamdhere and Kevin S. McCurley and Mukund Sundararajan and
Ankur Taly | null | 1709.03036 | null | null |
A Neural Network Architecture Combining Gated Recurrent Unit (GRU) and
Support Vector Machine (SVM) for Intrusion Detection in Network Traffic Data | cs.NE cs.CR cs.LG stat.ML | Gated Recurrent Unit (GRU) is a recently-developed variation of the long
short-term memory (LSTM) unit, both of which are types of recurrent neural
network (RNN). Through empirical evidence, both models have been proven to be
effective in a wide variety of machine learning tasks such as natural language
processing (Wen et al., 2015), speech recognition (Chorowski et al., 2015), and
text classification (Yang et al., 2016). Conventionally, like most neural
networks, both of the aforementioned RNN variants employ the Softmax function
as its final output layer for its prediction, and the cross-entropy function
for computing its loss. In this paper, we present an amendment to this norm by
introducing linear support vector machine (SVM) as the replacement for Softmax
in the final output layer of a GRU model. Furthermore, the cross-entropy
function shall be replaced with a margin-based function. While there have been
similar studies (Alalshekmubarak & Smith, 2013; Tang, 2013), this proposal is
primarily intended for binary classification on intrusion detection using the
2013 network traffic data from the honeypot systems of Kyoto University.
Results show that the GRU-SVM model performs relatively higher than the
conventional GRU-Softmax model. The proposed model reached a training accuracy
of ~81.54% and a testing accuracy of ~84.15%, while the latter was able to
reach a training accuracy of ~63.07% and a testing accuracy of ~70.75%. In
addition, the juxtaposition of these two final output layers indicate that the
SVM would outperform Softmax in prediction time - a theoretical implication
which was supported by the actual training and testing time in the study.
| Abien Fred Agarap | 10.1145/3195106.3195117 | 1709.03082 | null | null |
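The substitution the abstract describes, a margin-based objective on a raw linear output in place of softmax plus cross-entropy, amounts to an L2-regularized squared hinge loss. The sketch below applies one sub-gradient step to random features standing in for the GRU's final hidden states; all names and hyperparameters are illustrative, and this is not the study's actual model.

```python
import numpy as np

rng = np.random.default_rng(6)

# L2-regularized squared hinge loss on a raw linear output, the margin-based
# head that replaces softmax + cross-entropy. `h` stands in for the final GRU
# hidden states; all names and values are illustrative.
n, d, C = 200, 8, 1.0
h = rng.standard_normal((n, d))
y = np.where(rng.random(n) < 0.5, -1.0, 1.0)   # labels in {-1, +1}
w, b = np.zeros(d), 0.0

def svm_loss(w, b):
    margins = np.maximum(0.0, 1.0 - y * (h @ w + b))
    return 0.5 * w @ w + C * np.mean(margins ** 2)

# One sub-gradient step on w.
margins = np.maximum(0.0, 1.0 - y * (h @ w + b))
grad_w = w - 2.0 * C * np.mean((margins * y)[:, None] * h, axis=0)
loss_before = svm_loss(w, b)
w = w - 0.1 * grad_w
print(f"loss: {loss_before:.4f} -> {svm_loss(w, b):.4f}")
```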
Efficient Online Linear Optimization with Approximation Algorithms | cs.LG math.OC | We revisit the problem of \textit{online linear optimization} in case the set
of feasible actions is accessible through an approximated linear optimization
oracle with a factor $\alpha$ multiplicative approximation guarantee. This
setting is in particular interesting since it captures natural online
extensions of well-studied \textit{offline} linear optimization problems which
are NP-hard, yet admit efficient approximation algorithms. The goal here is to
minimize the $\alpha$\textit{-regret} which is the natural extension of the
standard \textit{regret} in \textit{online learning} to this setting.
We present new algorithms with significantly improved oracle complexity for
both the full information and bandit variants of the problem. Mainly, for both
variants, we present $\alpha$-regret bounds of $O(T^{-1/3})$, where $T$ is the
number of prediction rounds, using only $O(\log{T})$ calls to the approximation
oracle per iteration, on average. These are the first results to obtain both
average oracle complexity of $O(\log{T})$ (or even poly-logarithmic in $T$) and
$\alpha$-regret bound $O(T^{-c})$ for a constant $c>0$, for both variants.
| Dan Garber | null | 1709.03093 | null | null |
Robust Emotion Recognition from Low Quality and Low Bit Rate Video: A
Deep Learning Approach | cs.CV cs.AI cs.LG | Emotion recognition from facial expressions is tremendously useful,
especially when coupled with smart devices and wireless multimedia
applications. However, the inadequate network bandwidth often limits the
spatial resolution of the transmitted video, which will heavily degrade the
recognition reliability. We develop a novel framework to achieve robust emotion
recognition from low bit rate video. While video frames are downsampled at the
encoder side, the decoder is embedded with a deep network model for joint
super-resolution (SR) and recognition. Notably, we propose a novel max-mix
training strategy, leading to a single "One-for-All" model that is remarkably
robust to a vast range of downsampling factors. That makes our framework well
adapted for the varied bandwidths in real transmission scenarios, without
hampering scalability or efficiency. The proposed framework is evaluated on the
AVEC 2016 benchmark, and demonstrates significantly improved stand-alone
recognition performance, as well as rate-distortion (R-D) performance, than
either directly recognizing from LR frames, or separating SR and recognition.
| Bowen Cheng, Zhangyang Wang, Zhaobin Zhang, Zhu Li, Ding Liu, Jianchao
Yang, Shuai Huang, Thomas S. Huang | null | 1709.03126 | null | null |
MBMF: Model-Based Priors for Model-Free Reinforcement Learning | cs.LG cs.AI cs.RO cs.SY | Reinforcement Learning is divided in two main paradigms: model-free and
model-based. Each of these two paradigms has strengths and limitations, and has
been successfully applied to real world domains that are appropriate to its
corresponding strengths. In this paper, we present a new approach aimed at
bridging the gap between these two paradigms. We aim to take the best of the
two paradigms and combine them in an approach that is at the same time
data-efficient and cost-savvy. We do so by learning a probabilistic dynamics
model and leveraging it as a prior for the intertwined model-free optimization.
As a result, our approach can exploit the generality and structure of the
dynamics model, but is also capable of ignoring its inevitable inaccuracies, by
directly incorporating the evidence provided by the direct observation of the
cost. Preliminary results demonstrate that our approach outperforms purely
model-based and model-free approaches, as well as the approach of simply
switching from a model-based to a model-free setting.
| Somil Bansal, Roberto Calandra, Kurtland Chua, Sergey Levine, Claire
Tomlin | null | 1709.03153 | null | null |
R2N2: Residual Recurrent Neural Networks for Multivariate Time Series
Forecasting | cs.LG stat.ML | Multivariate time-series modeling and forecasting is an important problem
with numerous applications. Traditional approaches such as VAR (vector
auto-regressive) models and more recent approaches such as RNNs (recurrent
neural networks) are indispensable tools in modeling time-series data. In many
multivariate time series modeling problems, there is usually a significant
linear dependency component, for which VARs are suitable, and a nonlinear
component, for which RNNs are suitable. Modeling such times series with only
VAR or only RNNs can lead to poor predictive performance or complex models with
large training times. In this work, we propose a hybrid model called R2N2
(Residual RNN), which first models the time series with a simple linear model
(like VAR) and then models its residual errors using RNNs. R2N2s can be trained
using existing algorithms for VARs and RNNs. Through an extensive empirical
evaluation on two real world datasets (aviation and climate domains), we show
that R2N2 is competitive, usually better than VAR or RNN, used alone. We also
show that R2N2 is faster to train than an RNN, while requiring fewer hidden
units.
| Hardik Goel, Igor Melnyk, Arindam Banerjee | null | 1709.03159 | null | null |
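Stage one of the R2N2 recipe, fit a simple linear model and pass its residual errors to a nonlinear second stage, can be sketched with a least-squares VAR(1); the RNN stage that would be trained on `residuals` is omitted. The synthetic series, its mild nonlinearity, and all coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stage 1 of the hybrid: fit a linear VAR(1) by least squares; its residual
# errors are what the RNN stage (omitted) would be trained on. The synthetic
# series mixes a linear part with a mild nonlinearity.
T, d = 500, 2
A_true = np.array([[0.6, 0.1], [-0.2, 0.5]])
y = np.zeros((T, d))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + 0.3 * np.tanh(y[t - 1]) + 0.1 * rng.standard_normal(d)

X, Y = y[:-1], y[1:]                       # regress y_t on y_{t-1}
A_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
residuals = Y - X @ A_hat                  # training targets for the RNN stage
print("VAR residual RMS:", round(float(np.sqrt(np.mean(residuals ** 2))), 4))
```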
Bayesian bandits: balancing the exploration-exploitation tradeoff via
double sampling | stat.ML cs.LG stat.CO | Reinforcement learning studies how to balance exploration and exploitation in
real-world systems, optimizing interactions with the world while simultaneously
learning how the world operates. One general class of algorithms for such
learning is the multi-armed bandit setting. Randomized probability matching,
based upon the Thompson sampling approach introduced in the 1930s, has recently
been shown to perform well and to enjoy provable optimality properties. It
permits generative, interpretable modeling in a Bayesian setting, where prior
knowledge is incorporated, and the computed posteriors naturally capture the
full state of knowledge. In this work, we harness the information contained in
the Bayesian posterior and estimate its sufficient statistics via sampling. In
several application domains, for example in health and medicine, each
interaction with the world can be expensive and invasive, whereas drawing
samples from the model is relatively inexpensive. Exploiting this viewpoint, we
develop a double sampling technique driven by the uncertainty in the learning
process: it favors exploitation when certain about the properties of each arm,
exploring otherwise. The proposed algorithm does not make any distributional
assumption and it is applicable to complex reward distributions, as long as
Bayesian posterior updates are computable. Utilizing the estimated posterior
sufficient statistics, double sampling autonomously balances the
exploration-exploitation tradeoff to make better informed decisions. We
empirically show its reduced cumulative regret when compared to
state-of-the-art alternatives in representative bandit settings.
| I\~nigo Urteaga and Chris H. Wiggins | null | 1709.03162 | null | null |
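For context, the Thompson sampling baseline that the double-sampling scheme above refines can be sketched for Bernoulli rewards as follows. The double-sampling step itself is not reproduced; the arm probabilities and horizon are made-up illustration values.

```python
import numpy as np

def thompson_bernoulli(true_probs, horizon, rng):
    """Thompson sampling for a Bernoulli bandit with Beta(1, 1) priors.
    Returns the total reward and per-arm pull counts."""
    k = len(true_probs)
    alpha, beta = np.ones(k), np.ones(k)
    pulls = np.zeros(k, dtype=int)
    total = 0.0
    for _ in range(horizon):
        theta = rng.beta(alpha, beta)        # one posterior draw per arm
        arm = int(np.argmax(theta))          # randomized probability matching
        r = float(rng.random() < true_probs[arm])
        alpha[arm] += r                      # conjugate posterior update
        beta[arm] += 1.0 - r
        pulls[arm] += 1
        total += r
    return total, pulls

rng = np.random.default_rng(0)
total, pulls = thompson_bernoulli([0.3, 0.7], horizon=2000, rng=rng)
```

As the posterior for the better arm concentrates, its samples win the argmax more often, which is exactly the exploitation-when-certain behavior the abstract describes.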
Variational inference for the multi-armed contextual bandit | stat.ML cs.LG stat.CO | In many biomedical, science, and engineering problems, one must sequentially
decide which action to take next so as to maximize rewards. One general class
of algorithms for optimizing interactions with the world, while simultaneously
learning how the world operates, is the multi-armed bandit setting and, in
particular, the contextual bandit case. In this setting, for each executed
action, one observes rewards that are dependent on a given 'context', available
at each interaction with the world. The Thompson sampling algorithm has
recently been shown to enjoy provable optimality properties for this set of
problems, and to perform well in real-world settings. It facilitates generative
and interpretable modeling of the problem at hand. Nevertheless, the design and
complexity of the model limit its application, since one must both sample from
the distributions modeled and calculate their expected rewards. We here show
how these limitations can be overcome using variational inference to
approximate complex models, applying to the reinforcement learning setting
advances developed for approximate inference in the machine learning community
over the past two decades. We consider contextual multi-armed bandit
applications where the true reward distribution is unknown and complex, which
we approximate with a mixture model whose parameters are inferred via
variational inference. We show how the proposed variational Thompson sampling
approach is accurate in approximating the true distribution, and attains
reduced regrets even with complex reward distributions. The proposed algorithm
is valuable for practical scenarios where restrictive modeling assumptions are
undesirable.
| I\~nigo Urteaga and Chris H. Wiggins | null | 1709.03163 | null | null |
Rates of Convergence of Spectral Methods for Graphon Estimation | stat.ML cs.LG cs.SI math.ST stat.TH | This paper studies the problem of estimating the graphon model - the
underlying generating mechanism of a network. Graphon estimation arises in many
applications such as predicting missing links in networks and learning user
preferences in recommender systems. The graphon model deals with a random graph
of $n$ vertices such that each pair of two vertices $i$ and $j$ are connected
independently with probability $\rho \times f(x_i,x_j)$, where $x_i$ is the
unknown $d$-dimensional label of vertex $i$, $f$ is an unknown symmetric
function, and $\rho$ is a scaling parameter characterizing the graph sparsity.
Recent studies have identified the minimax error rate of estimating the graphon
from a single realization of the random graph. However, there exists a wide gap
between the known error rates of computationally efficient estimation
procedures and the minimax optimal error rate.
Here we analyze a spectral method, namely universal singular value
thresholding (USVT) algorithm, in the relatively sparse regime with the average
vertex degree $n\rho=\Omega(\log n)$. When $f$ belongs to H\"{o}lder or Sobolev
space with smoothness index $\alpha$, we show the error rate of USVT is at most
$(n\rho)^{ -2 \alpha / (2\alpha+d)}$, approaching the minimax optimal error
rate $\log (n\rho)/(n\rho)$ for $d=1$ as $\alpha$ increases. Furthermore, when
$f$ is analytic, we show the error rate of USVT is at most $\log^d
(n\rho)/(n\rho)$. In the special case of stochastic block model with $k$
blocks, the error rate of USVT is at most $k/(n\rho)$, which is larger than the
minimax optimal error rate by at most a multiplicative factor $k/\log k$. This
coincides with the computational gap observed for community detection. A key
step of our analysis is to derive the eigenvalue decaying rate of the edge
probability matrix using piecewise polynomial approximations of the graphon
function $f$.
| Jiaming Xu | null | 1709.03183 | null | null |
Semi-Supervised Active Clustering with Weak Oracles | stat.ML cs.LG | Semi-supervised active clustering (SSAC) utilizes the knowledge of a domain
expert to cluster data points by interactively making pairwise "same-cluster"
queries. However, it is impractical to ask human oracles to answer every
pairwise query. In this paper, we study the influence of allowing "not-sure"
answers from a weak oracle and propose algorithms to efficiently handle
uncertainties. Different types of model assumptions are analyzed to cover
realistic scenarios of oracle abstention. In the first model, the random-weak
oracle, an oracle randomly abstains with a certain probability. We also
propose two distance-weak oracle models which simulate the case of getting
confused based on the distance between two points in a pairwise query. For each
weak oracle model, we show that a small query complexity is adequate for the
effective $k$-means clustering with high probability. Sufficient conditions for
the guarantee include a $\gamma$-margin property of the data, and an existence
of a point close to each cluster center. Furthermore, we provide a sample
complexity with a reduced effect of the cluster's margin and only a logarithmic
dependency on the data dimension. Our results allow a significantly smaller
number of same-cluster queries if the margin of the clusters is tight, i.e. $\gamma
\approx 1$. Experimental results on synthetic data show the effective
performance of our approach in overcoming uncertainties.
| Taewan Kim, Joydeep Ghosh | null | 1709.03202 | null | null |
Fairness Testing: Testing Software for Discrimination | cs.SE cs.AI cs.CY cs.DB cs.LG | This paper defines software fairness and discrimination and develops a
testing-based method for measuring if and how much software discriminates,
focusing on causality in discriminatory behavior. Evidence of software
discrimination has been found in modern software systems that recommend
criminal sentences, grant access to financial products, and determine who is
allowed to participate in promotions. Our approach, Themis, generates efficient
test suites to measure discrimination. Given a schema describing valid system
inputs, Themis generates discrimination tests automatically and does not
require an oracle. We evaluate Themis on 20 software systems, 12 of which come
from prior work with explicit focus on avoiding discrimination. We find that
(1) Themis is effective at discovering software discrimination, (2)
state-of-the-art techniques for removing discrimination from algorithms fail in
many situations, at times discriminating against as much as 98% of an input
subdomain, (3) Themis optimizations are effective at producing efficient test
suites for measuring discrimination, and (4) Themis is more efficient on
systems that exhibit more discrimination. We thus demonstrate that fairness
testing is a critical aspect of the software development cycle in domains with
possible discrimination and provide initial tools for measuring software
discrimination.
| Sainyam Galhotra, Yuriy Brun, Alexandra Meliou | 10.1145/3106237.3106277 | 1709.03221 | null | null |
On better training the infinite restricted Boltzmann machines | cs.LG cs.AI stat.ML | The infinite restricted Boltzmann machine (iRBM) is an extension of the
classic RBM. It enjoys a good property of automatically deciding the size of
the hidden layer according to specific training data. With sufficient training,
the iRBM can achieve a competitive performance with that of the classic RBM.
However, the convergence of learning the iRBM is slow, due to the fact that the
iRBM is sensitive to the ordering of its hidden units: the learned filters
change slowly from the left-most hidden unit to the right. To break this dependency
between neighboring hidden units and speed up the convergence of training, a
novel training strategy is proposed. The key idea of the proposed training
strategy is randomly regrouping the hidden units before each gradient descent
step. Potentially, a mixture of infinitely many iRBMs with different permutations
of the hidden units can be achieved by this learning method, which has a
similar effect of preventing the model from over-fitting as the dropout. The
original iRBM is also modified to be capable of carrying out discriminative
training. To evaluate the impact of our method on convergence speed of learning
and the model's generalization ability, several experiments have been performed
on the binarized MNIST and CalTech101 Silhouettes datasets. Experimental
results indicate that the proposed training strategy can greatly accelerate
learning and enhance generalization ability of iRBMs.
| Xuan Peng, Xunzhang Gao, Xiang Li | 10.1007/s10994-018-5696-2 | 1709.03239 | null | null |
Gigamachine: incremental machine learning on desktop computers | cs.AI cs.LG | We present a concrete design for Solomonoff's incremental machine learning
system suitable for desktop computers. We use R5RS Scheme and its standard
library with a few omissions as the reference machine. We introduce a Levin
Search variant based on a stochastic Context Free Grammar together with new
update algorithms that use the same grammar as a guiding probability
distribution for incremental machine learning. The updates include adjusting
production probabilities, re-using previous solutions, learning programming
idioms and discovery of frequent subprograms. The issues of extending the a
priori probability distribution and bootstrapping are discussed. We have
implemented a good portion of the proposed algorithms. Experiments with toy
problems show that the update algorithms work as expected.
| Eray \"Ozkural | null | 1709.03413 | null | null |
Ensemble Methods as a Defense to Adversarial Perturbations Against Deep
Neural Networks | stat.ML cs.LG | Deep learning has become the state of the art approach in many machine
learning problems such as classification. It has recently been shown that deep
learning is highly vulnerable to adversarial perturbations. Taking the camera
systems of self-driving cars as an example, small adversarial perturbations can
cause the system to make errors in important tasks, such as classifying traffic
signs or detecting pedestrians. Hence, in order to use deep learning without
safety concerns, a proper defense strategy is required. We propose to use
ensemble methods as a defense strategy against adversarial perturbations. We
find that an attack leading one model to misclassify does not imply the same
for other networks performing the same task. This makes ensemble methods an
attractive defense strategy against adversarial attacks. We empirically show
for the MNIST and the CIFAR-10 data sets that ensemble methods not only improve
the accuracy of neural networks on test data but also increase their robustness
against adversarial perturbations.
| Thilo Strauss, Markus Hanselmann, Andrej Junginger, Holger Ulmer | null | 1709.03423 | null | null |
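The core of the defense described above is prediction averaging across models. A toy sketch: the linear sigmoid "models" and the crafted input are invented for illustration only; the paper uses deep networks on MNIST and CIFAR-10.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def make_linear_model(w):
    """A toy binary classifier returning [P(class 0), P(class 1)]."""
    return lambda x: np.array([1.0 - sigmoid(w @ x), sigmoid(w @ x)])

def ensemble_predict(models, x):
    """Average class probabilities over the ensemble, then take argmax."""
    probs = np.mean([m(x) for m in models], axis=0)
    return int(np.argmax(probs))

# Three models trained to the same task but with different decision boundaries
models = [make_linear_model(np.array(w))
          for w in ([1.0, 0.1], [0.1, 1.0], [1.0, 1.0])]

# Input crafted to cross only the first model's boundary
x_adv = np.array([-0.5, 1.5])
```

The point the abstract makes is visible even in this toy: an input that fools one member usually does not fool the others, so the averaged prediction recovers the correct class.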
The Diverse Cohort Selection Problem | cs.LG | How should a firm allocate its limited interviewing resources to select the
optimal cohort of new employees from a large set of job applicants? How should
that firm allocate cheap but noisy resume screenings and expensive but in-depth
in-person interviews? We view this problem through the lens of combinatorial
pure exploration (CPE) in the multi-armed bandit setting, where a central
learning agent performs costly exploration of a set of arms before selecting a
final subset with some combinatorial structure. We generalize a recent CPE
algorithm to the setting where arm pulls can have different costs and return
different levels of information. We then prove theoretical upper bounds for a
general class of arm-pulling strategies in this new setting. We apply our
general algorithm to a real-world problem with combinatorial structure:
incorporating diversity into university admissions. We take real data from
admissions at one of the largest US-based computer science graduate programs
and show that a simulation of our algorithm produces a cohort with higher
overall utility while spending a comparable budget to the current admissions
process at that university.
| Candice Schumann, Samsara N. Counts, Jeffrey S. Foster and John P.
Dickerson | null | 1709.03441 | null | null |
UI-Net: Interactive Artificial Neural Networks for Iterative Image
Segmentation Based on a User Model | cs.CV cs.AI cs.LG cs.NE | For complex segmentation tasks, fully automatic systems are inherently
limited in their achievable accuracy for extracting relevant objects.
Especially in cases where only few data sets need to be processed for a highly
accurate result, semi-automatic segmentation techniques exhibit a clear benefit
for the user. One area of application is medical image processing during an
intervention for a single patient. We propose a learning-based cooperative
segmentation approach which includes the computing entity as well as the user
into the task. Our system builds upon a state-of-the-art fully convolutional
artificial neural network (FCN) as well as an active user model for training.
During the segmentation process, a user of the trained system can iteratively
add additional hints in form of pictorial scribbles as seed points into the FCN
system to achieve an interactive and precise segmentation result. The
segmentation quality of interactive FCNs is evaluated. Iterative FCN approaches
can yield superior results compared to networks without the user input channel
component, due to a consistent improvement in segmentation quality after each
interaction.
| Mario Amrehn, Sven Gaube, Mathias Unberath, Frank Schebesch, Tim Horz,
Maddalena Strumia, Stefan Steidl, Markus Kowarschik, Andreas Maier | 10.2312/vcbm.20171248 | 1709.03450 | null | null |
NiftyNet: a deep-learning platform for medical imaging | cs.CV cs.LG cs.NE | Medical image analysis and computer-assisted intervention problems are
increasingly being addressed with deep-learning-based solutions. Established
deep-learning platforms are flexible but do not provide specific functionality
for medical image analysis and adapting them for this application requires
substantial implementation effort. Thus, there has been substantial duplication
of effort and incompatible infrastructure developed across many research
groups. This work presents the open-source NiftyNet platform for deep learning
in medical imaging. The ambition of NiftyNet is to accelerate and simplify the
development of these solutions, and to provide a common mechanism for
disseminating research outputs for the community to use, adapt and build upon.
NiftyNet provides a modular deep-learning pipeline for a range of medical
imaging applications including segmentation, regression, image generation and
representation learning applications. Components of the NiftyNet pipeline
including data loading, data augmentation, network architectures, loss
functions and evaluation metrics are tailored to, and take advantage of, the
idiosyncrasies of medical image analysis and computer-assisted intervention.
NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D
and 3D images and computational graphs by default.
We present 3 illustrative medical image analysis applications built using
NiftyNet: (1) segmentation of multiple abdominal organs from computed
tomography; (2) image regression to predict computed tomography attenuation
maps from brain magnetic resonance images; and (3) generation of simulated
ultrasound images for specified anatomical poses.
NiftyNet enables researchers to rapidly develop and distribute deep learning
solutions for segmentation, regression, image generation and representation
learning applications, or extend the platform to new applications.
| Eli Gibson, Wenqi Li, Carole Sudre, Lucas Fidon, Dzhoshkun I. Shakir,
Guotai Wang, Zach Eaton-Rosen, Robert Gray, Tom Doel, Yipeng Hu, Tom Whyntie,
Parashkev Nachev, Marc Modat, Dean C. Barratt, S\'ebastien Ourselin, M. Jorge
Cardoso and Tom Vercauteren | 10.1016/j.cmpb.2018.01.025 | 1709.03485 | null | null |
GIANT: Globally Improved Approximate Newton Method for Distributed
Optimization | cs.LG cs.DC math.OC stat.ML | For distributed computing environment, we consider the empirical risk
minimization problem and propose a distributed and communication-efficient
Newton-type optimization method. At every iteration, each worker locally finds
an Approximate NewTon (ANT) direction, which is sent to the main driver. The
main driver, then, averages all the ANT directions received from workers to
form a {\it Globally Improved ANT} (GIANT) direction. GIANT is highly
communication efficient and naturally exploits the trade-offs between local
computations and global communications in that more local computations result
in fewer overall rounds of communications. Theoretically, we show that GIANT
enjoys an improved convergence rate as compared with first-order methods and
existing distributed Newton-type methods. Further, and in sharp contrast with
many existing distributed Newton-type methods, as well as popular first-order
methods, a highly advantageous practical feature of GIANT is that it only
involves one tuning parameter. We conduct large-scale experiments on a computer
cluster and, empirically, demonstrate the superior performance of GIANT.
| Shusen Wang, Farbod Roosta-Khorasani, Peng Xu and Michael W. Mahoney | null | 1709.03528 | null | null |
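A minimal single-machine simulation of one GIANT iteration for ridge regression follows. The quadratic loss, regularizer, and data splits are illustrative assumptions; the paper covers more general losses in a real distributed setting.

```python
import numpy as np

def giant_step(X_parts, y_parts, w, lam=1e-3):
    """One GIANT iteration: each 'worker' solves its local Hessian against
    the global gradient (an ANT direction); the driver averages the directions."""
    n = sum(len(yp) for yp in y_parts)
    d = len(w)
    # Global gradient of (1/2n)||Xw - y||^2 + (lam/2)||w||^2
    g = sum(Xp.T @ (Xp @ w - yp) for Xp, yp in zip(X_parts, y_parts)) / n + lam * w
    dirs = [np.linalg.solve(Xp.T @ Xp / len(yp) + lam * np.eye(d), g)
            for Xp, yp in zip(X_parts, y_parts)]
    return w - np.mean(dirs, axis=0)

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 5))
w_true = rng.standard_normal(5)
y = X @ w_true + 0.1 * rng.standard_normal(400)

X_parts, y_parts = np.split(X, 4), np.split(y, 4)
lam = 1e-3
w_star = np.linalg.solve(X.T @ X / 400 + lam * np.eye(5), X.T @ y / 400)

w = np.zeros(5)
err0 = np.linalg.norm(w - w_star)
for _ in range(5):
    w = giant_step(X_parts, y_parts, w, lam)
err5 = np.linalg.norm(w - w_star)
```

When the local data are statistically similar, each local Hessian approximates the global one, so the averaged direction is close to an exact Newton step yet needs only one round of communication per iteration.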
Learning Graph Topological Features via GAN | cs.SI cs.LG | Inspired by the generation power of generative adversarial networks (GANs) in
image domains, we introduce a novel hierarchical architecture for learning
characteristic topological features from a single arbitrary input graph via
GANs. The hierarchical architecture consisting of multiple GANs preserves both
local and global topological features and automatically partitions the input
graph into representative stages for feature learning. The stages facilitate
reconstruction and can be used as indicators of the importance of the
associated topological structures. Experiments show that our method produces
subgraphs retaining a wide range of topological features, even in early
reconstruction stages (unlike a single GAN, which cannot easily identify such
features, let alone reconstruct the original graph). This paper is first-line
research on combining the use of GANs and graph topological analysis.
| Weiyi Liu, Hal Cooper, Min Hwan Oh, Sailung Yeung, Pin-Yu Chen,
Toyotaro Suzumura, Lingli Chen | null | 1709.03545 | null | null |
False arrhythmia alarm reduction in the intensive care unit | cs.LG | Research has shown that false alarms constitute more than 80% of the alarms
triggered in the intensive care unit (ICU). The high false arrhythmia alarm
rate has severe implications such as disruption of patient care, caregiver
alarm fatigue, and desensitization of clinical staff to real life-threatening
alarms. A method to reduce the false alarm rate would therefore greatly benefit
patients as well as nurses in their ability to provide care. We here develop
and describe a robust false arrhythmia alarm reduction system for use in the
ICU. Building off of work previously described in the literature, we make use
of signal processing and machine learning techniques to identify true and false
alarms for five arrhythmia types. This baseline algorithm alone is able to
perform remarkably well, with a sensitivity of 0.908, a specificity of 0.838,
and a PhysioNet/CinC challenge score of 0.756. We additionally explore dynamic
time warping techniques on both the entire alarm signal as well as on a
beat-by-beat basis in an effort to improve performance of ventricular
tachycardia, which has in the literature been one of the hardest arrhythmias to
classify. Such an algorithm with strong performance and efficiency could
potentially be translated for use in the ICU to promote overall patient care
and recovery.
| Andrea S. Li, Alistair E. W. Johnson, Roger G. Mark | 10.5281/zenodo.889036 | 1709.03562 | null | null |
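The dynamic time warping component mentioned above is the standard dynamic-programming recurrence; a minimal 1-D sketch follows (the toy sequences are illustrative, not ECG data).

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences,
    using the absolute difference as the local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three allowed warping moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

pulse = [0, 1, 3, 1, 0, 0, 0]
shifted = [0, 0, 1, 3, 1, 0, 0]   # same shape, delayed by one step
flat = [1, 1, 1, 1, 1, 1, 1]
```

A time-shifted copy of a waveform warps onto the original at zero cost, while a pointwise (Euclidean-style) comparison would penalize the shift; this is why DTW suits beat-to-beat alarm comparison.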
Anomaly Detection in Hierarchical Data Streams under Unknown Models | cs.LG | We consider the problem of detecting a few targets among a large number of
hierarchical data streams. The data streams are modeled as random processes
with unknown and potentially heavy-tailed distributions. The objective is an
active inference strategy that determines, sequentially, which data stream to
collect samples from in order to minimize the sample complexity under a
reliability constraint. We propose an active inference strategy that induces a
biased random walk on the tree-structured hierarchy based on confidence bounds
of sample statistics. We then establish its order optimality in terms of both
the size of the search space (i.e., the number of data streams) and the
reliability requirement. The results find applications in hierarchical heavy
hitter detection, noisy group testing, and adaptive sampling for active
learning, classification, and stochastic root finding.
| Sattar Vakili, Qing Zhao, Chang Liu, Chen-Nee Chuah | null | 1709.03573 | null | null |
Art of singular vectors and universal adversarial perturbations | cs.CV cs.AI cs.LG | Vulnerability of Deep Neural Networks (DNNs) to adversarial attacks has been
attracting a lot of attention in recent studies. It has been shown that for
many state of the art DNNs performing image classification there exist
universal adversarial perturbations: image-agnostic perturbations whose mere
addition to natural images leads, with high probability, to their
misclassification. In this work we propose a new algorithm for constructing
such universal perturbations. Our approach is based on computing the so-called
$(p, q)$-singular vectors of the Jacobian matrices of hidden layers of a
network. Resulting perturbations present interesting visual patterns, and by
using only 64 images we were able to construct universal perturbations with
more than 60\% fooling rate on the dataset consisting of 50000 images. We also
investigate a correlation between the maximal singular value of the Jacobian
matrix and the fooling rate of the corresponding singular vector, and show that
the constructed perturbations generalize across networks.
| Valentin Khrulkov and Ivan Oseledets | null | 1709.03582 | null | null |
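The perturbations above come from a generalized power iteration for $(p, q)$-singular vectors of hidden-layer Jacobians. The familiar $(p, q) = (2, 2)$ special case is sketched below on a synthetic low-rank matrix; the generalized $p$-norm update and the DNN Jacobian are not reproduced.

```python
import numpy as np

def top_right_singular_vector(J, iters=100, seed=0):
    """Plain power iteration on J^T J: the (p, q) = (2, 2) case of the
    generalized method. Returns the leading right singular vector and value."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(J.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = J.T @ (J @ v)       # one multiply by J^T J
        v /= np.linalg.norm(v)  # renormalize to stay on the unit sphere
    return v, float(np.linalg.norm(J @ v))

# Synthetic rank-2 'Jacobian' with known singular values 5 and 1
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((20, 2)))
V, _ = np.linalg.qr(rng.standard_normal((10, 2)))
J = 5.0 * np.outer(U[:, 0], V[:, 0]) + 1.0 * np.outer(U[:, 1], V[:, 1])

v, sigma = top_right_singular_vector(J)
```

The direction in input space that the largest singular value picks out is exactly the direction in which a small perturbation is most amplified by the layer, which motivates its use as an adversarial perturbation.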
Budgeted Experiment Design for Causal Structure Learning | cs.LG cs.AI stat.ML | We study the problem of causal structure learning when the experimenter is
limited to perform at most $k$ non-adaptive experiments of size $1$. We
formulate the problem of finding the best intervention target set as an
optimization problem, which aims to maximize the average number of edges whose
directions are resolved. We prove that the corresponding objective function is
submodular and a greedy algorithm suffices to achieve
$(1-\frac{1}{e})$-approximation of the optimal value. We further present an
accelerated variant of the greedy algorithm, which can lead to orders of
magnitude performance speedup. We validate our proposed approach on synthetic
and real graphs. The results show that compared to the purely observational
setting, our algorithm orients the majority of the edges through a remarkably
small number of interventions.
| AmirEmad Ghassami, Saber Salehkaleybar, Negar Kiyavash, Elias
Bareinboim | null | 1709.03625 | null | null |
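The greedy scheme with the $(1-\frac{1}{e})$ guarantee mentioned above is the textbook algorithm for monotone submodular maximization. A sketch with a toy coverage objective follows; the paper's actual objective (expected number of newly oriented edges) is not reproduced.

```python
def greedy_max(ground_set, f, k):
    """Greedy maximization of a monotone submodular set function f:
    repeatedly add the element with the largest marginal gain."""
    S = set()
    for _ in range(k):
        best = max((x for x in ground_set if x not in S),
                   key=lambda x: f(S | {x}) - f(S))
        S.add(best)
    return S

# Toy submodular objective: set cover (coverage functions are submodular)
covers = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}, "d": {1, 5}}

def coverage(S):
    return len(set().union(*(covers[x] for x in S))) if S else 0

chosen = greedy_max(list(covers), coverage, k=2)
```

Submodularity (diminishing marginal gains) is the only property the guarantee needs, which is why proving it for the edge-orientation objective immediately yields the approximation bound stated in the abstract.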
What were you expecting? Using Expectancy Features to Predict Expressive
Performances of Classical Piano Music | cs.SD cs.IT cs.LG math.IT | In this paper we present preliminary work examining the relationship between
the formation of expectations and the realization of musical performances,
paying particular attention to expressive tempo and dynamics. To compute
features that reflect what a listener is expecting to hear, we employ a
computational model of auditory expectation called the Information Dynamics of
Music model (IDyOM). We then explore how well these expectancy features -- when
combined with score descriptors using the Basis-Function modeling approach --
can predict expressive tempo and dynamics in a dataset of Mozart piano sonata
performances. Our results suggest that using expectancy features significantly
improves the predictions for tempo.
| Carlos Cancino-Chac\'on, Maarten Grachten, David R. W. Sears, Gerhard
Widmer | null | 1709.03629 | null | null |
Identifying Genetic Risk Factors via Sparse Group Lasso with Group Graph
Structure | stat.ML cs.LG q-bio.GN | Genome-wide association studies (GWA studies or GWAS) investigate the
relationships between genetic variants such as single-nucleotide polymorphisms
(SNPs) and individual traits. Recently, incorporating biological priors
together with machine learning methods in GWA studies has attracted increasing
attention. However, in the real world, nucleotide-level bio-priors have not been
well-studied to date. Alternatively, studies at gene-level, for example,
protein--protein interactions and pathways, are more rigorous and legitimate,
and it is potentially beneficial to utilize such gene-level priors in GWAS. In
this paper, we proposed a novel two-level structured sparse model, called
Sparse Group Lasso with Group-level Graph structure (SGLGG), for GWAS. It can
be considered as a sparse group Lasso along with a group-level graph Lasso.
Essentially, SGLGG penalizes nucleotide-level sparsity and takes
advantage of gene-level priors (both gene groups and networks) to identify
phenotype-associated risk SNPs. We employ the alternating direction method of
multipliers algorithm to optimize the proposed model. Our experiments on the
Alzheimer's Disease Neuroimaging Initiative whole genome sequence data and
neuroimage data demonstrate the effectiveness of SGLGG. As a regression model,
it is competitive with state-of-the-art sparse models; as a variable
selection method, SGLGG is promising for identifying Alzheimer's
disease-related risk SNPs.
| Tao Yang, Paul Thompson, Sihai Zhao, Jieping Ye | null | 1709.03645 | null | null |
A Denoising Loss Bound for Neural Network based Universal Discrete
Denoisers | cs.LG cs.IT math.IT | We obtain a denoising loss bound of the recently proposed neural network
based universal discrete denoiser, Neural DUDE, which can adaptively learn its
parameters solely from the noise-corrupted data, by minimizing the
\emph{empirical estimated loss}. The resulting bound resembles the
generalization error bound of the standard empirical risk minimizers (ERM) in
supervised learning, and we show that the well-known bias-variance tradeoff
also exists in our loss bound. The key tool we develop is the concentration of
the unbiased estimated loss on the true denoising loss, which is shown to hold
\emph{uniformly} for \emph{all} bounded network parameters and \emph{all}
underlying clean sequences. For proving our main results, we make a novel
application of tools from statistical learning theory. Finally, we show
that the hyperparameters of Neural DUDE can be chosen from a small validation
set to significantly improve the denoising performance, as predicted by the
theoretical result of this paper.
| Taesup Moon | null | 1709.03657 | null | null |
End-to-End Waveform Utterance Enhancement for Direct Evaluation Metrics
Optimization by Fully Convolutional Neural Networks | stat.ML cs.LG cs.SD | Speech enhancement model is used to map a noisy speech to a clean speech. In
the training stage, an objective function is often adopted to optimize the
model parameters. However, in most studies, there is an inconsistency between
the model optimization criterion and the evaluation criterion on the enhanced
speech. For example, in measuring speech intelligibility, most of the
evaluation metric is based on a short-time objective intelligibility (STOI)
measure, while the frame based minimum mean square error (MMSE) between
estimated and clean speech is widely used in optimizing the model. Due to the
inconsistency, there is no guarantee that the trained model can provide optimal
performance in applications. In this study, we propose an end-to-end
utterance-based speech enhancement framework using fully convolutional neural
networks (FCN) to reduce the gap between the model optimization and evaluation
criterion. Because of the utterance-based optimization, temporal correlation
information of long speech segments, or even at the entire utterance level, can
be considered when perception-based objective functions are used for the direct
optimization. As an example, we implement the proposed FCN enhancement
framework to optimize the STOI measure. Experimental results show that the STOI
of test speech is better than conventional MMSE-optimized speech due to the
consistency between the training and evaluation target. Moreover, by
integrating the STOI in model optimization, the intelligibility of human
subjects and automatic speech recognition (ASR) system on the enhanced speech
is also substantially improved compared to those generated by the MMSE
criterion.
| Szu-Wei Fu, Tao-Wei Wang, Yu Tsao, Xugang Lu, and Hisashi Kawai | null | 1709.03658 | null | null |
Multi-view Graph Embedding with Hub Detection for Brain Network Analysis | cs.LG | Multi-view graph embedding has become a widely studied problem in the area of
graph learning. Most of the existing works on multi-view graph embedding aim to
find a shared common node embedding across all the views of the graph by
combining the different views in a specific way. Hub detection, as another
essential topic in graph mining, has also drawn extensive attention in recent
years, especially in the context of brain network analysis. Both the graph
embedding and hub detection relate to the node clustering structure of graphs.
The multi-view graph embedding usually implies the node clustering structure of
the graph based on the multiple views, while the hubs are the boundary-spanning
nodes across different node clusters in the graph and thus may potentially
influence the clustering structure of the graph. However, none of the existing
works in multi-view graph embedding considered the hubs when learning the
multi-view embeddings. In this paper, we propose to incorporate the hub
detection task into the multi-view graph embedding framework so that the two
tasks could benefit each other. Specifically, we propose an auto-weighted
framework of Multi-view Graph Embedding with Hub Detection (MVGE-HD) for brain
network analysis. The MVGE-HD framework learns a unified graph embedding across
all the views while reducing the potential influence of the hubs on blurring
the boundaries between node clusters in the graph, thus leading to a clear and
discriminative node clustering structure for the graph. We apply MVGE-HD on two
real multi-view brain network datasets (i.e., HIV and Bipolar). The
experimental results demonstrate the superior performance of the proposed
framework in brain network analysis for clinical investigation and application.
| Guixiang Ma, Chun-Ta Lu, Lifang He, Philip S. Yu, Ann B. Ragin | null | 1709.03659 | null | null |
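A minimal sketch of the intuition, not the MVGE-HD algorithm itself: one simple way to fuse views while reducing hub influence is to average the view adjacencies and then normalize each edge by the degrees of its endpoints, so edges incident to high-degree, boundary-spanning hubs contribute less to the fused graph. The function names and the degree-normalization choice are assumptions for illustration.

```python
def fuse_views(views):
    # average the adjacency matrices of all views
    n = len(views[0])
    return [[sum(v[i][j] for v in views) / len(views) for j in range(n)]
            for i in range(n)]

def hub_downweight(W):
    # normalize each edge by sqrt(deg_i * deg_j); hub-incident edges shrink
    deg = [sum(row) for row in W]
    return [[W[i][j] / (deg[i] * deg[j]) ** 0.5 if W[i][j] else 0.0
             for j in range(len(W))] for i in range(len(W))]

# two tiny identical views: nodes 0-1 and 2-3 form clusters,
# node 4 is a hub connected to everything
view = [[0, 1, 0, 0, 1],
        [1, 0, 0, 0, 1],
        [0, 0, 0, 1, 1],
        [0, 0, 1, 0, 1],
        [1, 1, 1, 1, 0]]
W = hub_downweight(fuse_views([view, view]))
print(W[0][1] > W[0][4])  # within-cluster edge now outweighs the hub edge
```

With the hub's influence damped, the within-cluster edges dominate, which is the kind of clear, discriminative clustering structure the abstract describes.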
Community Recovery in Hypergraphs | cs.IT cs.LG math.IT stat.ML | Community recovery is a central problem that arises in a wide variety of
applications such as network clustering, motion segmentation, face clustering
and protein complex detection. The objective of the problem is to cluster data
points into distinct communities based on a set of measurements, each of which
is associated with the values of a certain number of data points. While most of
the prior works focus on a setting in which the number of data points involved
in a measurement is two, this work explores a generalized setting in which the
number can be more than two. Motivated by applications particularly in machine
learning and channel coding, we consider two types of measurements: (1)
homogeneity measurement which indicates whether or not the associated data
points belong to the same community; (2) parity measurement which denotes the
modulo-2 sum of the values of the data points. Such measurements are possibly
corrupted by Bernoulli noise. We characterize the fundamental limits on the
number of measurements required to reconstruct the communities for the
considered models.
| Kwangjun Ahn, Kangwook Lee, Changho Suh | null | 1709.0367 | null | null |
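A toy simulation (an illustrative assumption, not the paper's method, which characterizes information-theoretic limits): homogeneity measurements corrupted by Bernoulli noise can be aggregated by majority voting against a single anchor node. Each repeated query of "are i and j in the same community?" is flipped with probability `flip_p`.

```python
import random

random.seed(0)
n, flip_p, reps = 40, 0.1, 25
truth = [i % 2 for i in range(n)]   # two hidden communities

def homogeneity(i, j):
    # noisy "same community?" measurement, flipped with probability flip_p
    same = truth[i] == truth[j]
    flip = random.random() < flip_p
    return same != flip              # XOR with Bernoulli noise

# majority-vote each node's label against anchor node 0 (known label 0)
recovered = [0] * n
for i in range(1, n):
    votes = sum(homogeneity(0, i) for _ in range(reps))
    recovered[i] = 0 if votes > reps // 2 else 1

print(sum(r == t for r, t in zip(recovered, truth)))  # correctly labeled nodes
```

The interesting question the paper answers is how few such measurements (including the modulo-2 parity variant over more than two nodes) suffice for exact recovery; the naive per-node repetition above is far from that fundamental limit.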
Rapid Near-Neighbor Interaction of High-dimensional Data via
Hierarchical Clustering | cs.LG | Calculation of near-neighbor interactions among high dimensional, irregularly
distributed data points is a fundamental task to many graph-based or
kernel-based machine learning algorithms and applications. Such calculations,
involving large, sparse interaction matrices, expose the limitation of
conventional data-and-computation reordering techniques for improving space and
time locality on modern computer memory hierarchies. We introduce a novel
method for obtaining a matrix permutation that renders a desirable sparsity
profile. The method is distinguished by the guiding principle to obtain a
profile that is block-sparse with dense blocks. Our profile model and measure
capture the essential properties affecting space and time locality, and permit
variation in sparsity profile without imposing a restriction to a fixed
pattern. The second distinction lies in an efficient algorithm for obtaining a
desirable profile, via exploring and exploiting multi-scale cluster structure
hidden in but intrinsic to the data. The algorithm accomplishes its task with
key components for lower-dimensional embedding with data-specific principal
feature axes, hierarchical data clustering, multi-level matrix compression
storage, and multi-level interaction computations. We provide experimental
results from case studies with two important data analysis algorithms. The
resulting performance is remarkably comparable to the BLAS performance for the
best-case interaction governed by a regularly banded matrix with the same
sparsity.
| Nikos Pitsianis, Dimitris Floros, Alexandros-Stavros Iliopoulos,
Kostas Mylonakis, Nikos Sismanis and Xiaobai Sun | null | 1709.03671 | null | null |
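A small sketch of the guiding principle, under simplifying assumptions (1-D data, a sort in place of the paper's hierarchical clustering): reordering points by cluster turns a scattered near-neighbor interaction matrix into one that is block-sparse with dense diagonal blocks, which is what improves space and time locality. All names here are illustrative.

```python
import random

random.seed(1)
# two well-separated 1-D clusters, stored in shuffled (locality-poor) order
pts = ([random.gauss(0, 1) for _ in range(30)] +
       [random.gauss(10, 1) for _ in range(30)])
random.shuffle(pts)
n = len(pts)

def interacts(a, b, radius=4.0):
    # near-neighbor interaction: nonzero entry in the sparse matrix
    return abs(a - b) < radius

def in_block_fraction(order):
    # fraction of nonzeros landing inside the two n/2-sized diagonal blocks
    inside = total = 0
    for i in range(n):
        for j in range(n):
            if interacts(pts[order[i]], pts[order[j]]):
                total += 1
                inside += (i < n // 2) == (j < n // 2)
    return inside / total

shuffled = list(range(n))
clustered = sorted(shuffled, key=lambda k: pts[k])  # stand-in for clustering

print(in_block_fraction(shuffled), in_block_fraction(clustered))
```

After the cluster-aware permutation, nearly all interactions fall in the dense diagonal blocks, so block-wise storage and computation (as in the paper's multi-level scheme) touch far fewer zero regions.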