title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Hybrid Clustering based on Content and Connection Structure using Joint
Nonnegative Matrix Factorization | cs.LG stat.ML | We present a hybrid method for latent information discovery on data sets
containing both text content and connection structure based on constrained low
rank approximation. The new method jointly optimizes the Nonnegative Matrix
Factorization (NMF) objective function for text clustering and the Symmetric
NMF (SymNMF) objective function for graph clustering. We propose an effective
algorithm for the joint NMF objective function, based on a block coordinate
descent (BCD) framework. The proposed hybrid method discovers content
associations via latent connections found using SymNMF. The method can also be
applied with a natural conversion of the problem when a hypergraph formulation
is used or the content is associated with hypergraph edges.
Experimental results show that by simultaneously utilizing both content and
connection structure, our hybrid method produces higher quality clustering
results compared to other NMF clustering methods that use content alone
(standard NMF) or connection structure alone (SymNMF). We also present some
interesting applications to several types of real world data such as citation
recommendations of papers. The hybrid method proposed in this paper can also be
applied to general data expressed with both feature space vectors and pairwise
similarities and can be extended to the case with multiple feature spaces or
multiple similarity measures.
| Rundong Du, Barry Drake, Haesun Park | null | 1703.09646 | null | null |
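A minimal sketch of the joint objective above, min over nonnegative W, H of ||X - WH||_F^2 + alpha ||S - H^T H||_F^2, optimized here with simple projected gradient steps on synthetic data rather than the authors' BCD algorithm; all sizes, the trade-off weight, and the step size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 50, 30, 4                      # terms x documents, k clusters
X = rng.random((m, n))                   # nonnegative content matrix (e.g., tf-idf)
A = rng.random((n, n))
S = (A + A.T) / 2                        # symmetric nonnegative similarity matrix
alpha, lr = 1.0, 1e-3                    # trade-off weight and step size

W = rng.random((m, k))
H = rng.random((k, n))
for _ in range(500):
    R = W @ H - X                                         # content residual
    grad_W = 2 * R @ H.T                                  # d/dW ||X - WH||^2
    grad_H = 2 * W.T @ R - 4 * alpha * H @ (S - H.T @ H)  # adds SymNMF term
    W = np.maximum(W - lr * grad_W, 0.0)                  # project onto W >= 0
    H = np.maximum(H - lr * grad_H, 0.0)                  # project onto H >= 0

clusters = H.argmax(axis=0)              # hybrid cluster assignment per document
```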
Structural Damage Identification Using Artificial Neural Network and
Synthetic data | cs.LG cs.CE | This paper presents a real-time vibration-based identification technique using
measured frequency response functions (FRFs) under random vibration loading.
Artificial Neural Networks (ANNs) are trained to map damage fingerprints to
damage characteristic parameters. The principal component analysis (PCA)
technique was used to tackle the problem of high dimensionality and high noise
in the data, which is common for industrial structures. The present study
considers crack, rivet hole expansion, and redundant uniform mass as damages on
the structure. Frequency response function data, after being reduced in size
using PCA, are fed to individual neural networks to localize and predict the
severity of damage on the structure. The system of ANNs is trained with both
numerical and experimental model data to make the system reliable and robust.
The methodology
is applied to a numerical model of stiffened panel structure, where damages are
confined close to the stiffener. The results showed that, in all the cases
considered, it is possible to localize and predict the severity of the damage
occurrence with very good accuracy and reliability.
| Divya Shyam Singh, G.B.L. Chowdary, D. Roy Mahapatra | null | 1703.09651 | null | null |
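A minimal sketch of the pipeline above, with synthetic stand-ins for the measured FRFs and hypothetical targets (location, severity); the authors' exact network sizes and data are not reproduced here:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1024))   # 200 synthetic FRFs, 1024 frequency bins
y = rng.random((200, 2))           # [location, severity] targets, hypothetical

model = make_pipeline(
    PCA(n_components=20),          # tackle high dimensionality and noise
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
model.fit(X, y)
loc, severity = model.predict(X[:1])[0]   # localize and grade the damage
```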
Inverse Reinforcement Learning from Summary Data | cs.LG cs.AI stat.ML | Inverse reinforcement learning (IRL) aims to explain observed strategic
behavior by fitting reinforcement learning models to behavioral data. However,
traditional IRL methods are only applicable when the observations are in the
form of state-action paths. This assumption may not hold in many real-world
modeling settings, where only partial or summarized observations are available.
In general, we may assume that there is a summarizing function $\sigma$, which
acts as a filter between us and the true state-action paths that constitute the
demonstration. Some initial approaches to extending IRL to such situations have
been presented, but with very specific assumptions about the structure of
$\sigma$, such as that only certain state observations are missing. This paper
instead focuses on the most general case of the problem, where no assumptions
are made about the summarizing function, except that it can be evaluated. We
demonstrate that inference is still possible. The paper presents exact and
approximate inference algorithms that allow full posterior inference, which is
particularly important for assessing parameter uncertainty in this challenging
inference situation. Empirical scalability is demonstrated to reasonably sized
problems, and practical applicability is demonstrated by estimating the
posterior for a cognitive science RL model based on an observed user's task
completion time only.
| Antti Kangasr\"a\"asi\"o, Samuel Kaski | 10.1007/s10994-018-5730-4 | 1703.097 | null | null |
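A rough illustration of inference from summary data in the spirit described above: likelihood-free (ABC-style) posterior sampling in which candidate reward parameters are kept when the summary sigma of a simulated demonstration matches the observed summary. The toy agent, prior, and tolerance are all illustrative choices, not the paper's algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_path_length(theta, n_steps=50):
    # Toy agent: per-step probability of finishing the task grows with theta.
    p_finish = 1.0 / (1.0 + np.exp(-theta))
    for t in range(1, n_steps + 1):
        if rng.random() < p_finish:
            return t
    return n_steps

sigma = lambda path_len: path_len          # summary: task completion time only
observed_summary = 7

posterior_samples = []
for _ in range(20000):
    theta = rng.normal(0.0, 2.0)           # prior over the reward parameter
    if abs(sigma(simulate_path_length(theta)) - observed_summary) <= 1:
        posterior_samples.append(theta)    # accepted: summary matches

print(np.mean(posterior_samples), np.std(posterior_samples))
```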
Collective Anomaly Detection based on Long Short Term Memory Recurrent
Neural Network | cs.LG cs.CR | Intrusion detection for computer network systems has become one of the most
critical tasks for network administrators today. It plays an important role for
organizations, governments and society at large, given the valuable resources
hosted on computer networks. Traditional misuse detection strategies are unable
to detect new and unknown intrusions. Anomaly detection in network security, in
turn, aims to distinguish between illegal or malicious events and the normal
behavior of network systems. Anomaly detection can be considered a
classification problem in which models of normal network behavior are built and
used to detect new patterns that significantly deviate from them. Most current
research on anomaly detection is based on learning normal and anomalous
behaviors; it does not take the most recent preceding events into account when
detecting a new incoming one. In this paper, we propose a real-time
collective anomaly detection model based on neural network learning and feature
operating. Normally a Long Short Term Memory Recurrent Neural Network (LSTM
RNN) is trained only on normal data and it is capable of predicting several
time steps ahead of an input. In our approach, an LSTM RNN is trained with
normal time series data before performing a live prediction for each time step.
Instead of considering each time step separately, the observation of prediction
errors from a certain number of time steps is now proposed as a new idea for
detecting collective anomalies. The prediction errors from a number of the
latest time steps above a threshold will indicate a collective anomaly. The
model is built on a time series version of the KDD 1999 dataset. The
experiments demonstrate that it is possible to offer reliable and efficient
collective anomaly detection.
| Loic Bontemps, Van Loi Cao, James McDermott, Nhien-An Le-Khac | null | 1703.09752 | null | null |
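A minimal sketch of the detection rule above, with the LSTM forecaster stubbed out: given a stream of one-step prediction errors, a collective anomaly is flagged when the errors over the last w steps all exceed a threshold (one plausible reading of the rule; the paper's exact criterion may aggregate the window differently):

```python
import numpy as np

def collective_anomaly(errors, w=5, threshold=0.5):
    """errors: 1-D array of one-step prediction errors over time."""
    flags = np.zeros(len(errors), dtype=bool)
    for t in range(w - 1, len(errors)):
        window = errors[t - w + 1 : t + 1]
        flags[t] = np.all(window > threshold)   # sustained, not isolated, error
    return flags

rng = np.random.default_rng(0)
errors = np.abs(rng.normal(0.1, 0.05, size=200))
errors[120:130] += 1.0                           # inject a collective anomaly
print(np.flatnonzero(collective_anomaly(errors)))
```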
Unifying the Stochastic Spectral Descent for Restricted Boltzmann
Machines with Bernoulli or Gaussian Inputs | stat.ML cs.LG | Stochastic gradient descent based algorithms are typically used as the
general optimization tools for most deep learning models. A Restricted
Boltzmann Machine (RBM) is a probabilistic generative model that can be stacked
to construct deep architectures. For RBMs with Bernoulli inputs, non-Euclidean
algorithms such as stochastic spectral descent (SSD) have been specifically
designed to speed up the convergence with improved use of the gradient
estimation by sampling methods. However, the existing algorithm and
corresponding theoretical justification depend on the assumption that the
possible configurations of inputs are finite, like binary variables. The
purpose of this paper is to generalize SSD to the Gaussian RBM, which is
capable of modeling continuous data, without the previous assumption. We
propose gradient descent methods in a non-Euclidean space of parameters, by
deriving upper bounds of the logarithmic partition function for RBMs based on
the Schatten-infinity norm. We empirically show the advantage and improvement
of SSD over stochastic gradient descent (SGD).
| Kai Fan | null | 1703.09766 | null | null |
Particle Filtering for PLCA model with Application to Music
Transcription | stat.ML cs.LG cs.SD | Automatic Music Transcription (AMT) consists in automatically estimating the
notes in an audio recording, through three attributes: onset time, duration and
pitch. Probabilistic Latent Component Analysis (PLCA) has become very popular
for this task. PLCA is a spectrogram factorization method, able to model a
magnitude spectrogram as a linear combination of spectral vectors from a
dictionary. Such methods use the Expectation-Maximization (EM) algorithm to
estimate the parameters of the acoustic model. This algorithm presents
well-known inherent drawbacks (local convergence, initialization dependency),
making EM-based systems limited in their applications to AMT, particularly with
regard to the mathematical form and number of priors. To overcome such limits,
we propose in this paper to employ a different estimation framework based on
Particle Filtering (PF), which consists in sampling the posterior distribution
over larger parameter ranges. This framework proves to be more robust in
parameter estimation, more flexible and unifying in the integration of prior
knowledge in the system. Note-level transcription accuracies of 61.8 $\%$ and
59.5 $\%$ were achieved on evaluation sound datasets of two different
instrument repertoires, including the classical piano (from MAPS dataset) and
the marovany zither, and direct comparisons to previous PLCA-based approaches
are provided. Steps for further development are also outlined.
| D. Cazau, G. Revillon, W. Yuancheng, O. Adam | null | 1703.09772 | null | null |
Two-Stream RNN/CNN for Action Recognition in 3D Videos | cs.CV cs.LG | The recognition of actions from video sequences has many applications in
health monitoring, assisted living, surveillance, and smart homes. Despite
advances in sensing, in particular related to 3D video, the methodologies to
process the data are still subject to research. We demonstrate superior results
by a system which combines recurrent neural networks with convolutional neural
networks in a voting approach. The gated-recurrent-unit-based neural networks
are particularly well-suited to distinguish actions based on long-term
information from optical tracking data; the 3D-CNNs focus more on detailed,
recent information from video data. The resulting features are merged in an SVM
which then classifies the movement. In this architecture, our method improves
recognition rates of state-of-the-art methods by 14% on standard data sets.
| Rui Zhao, Haider Ali, Patrick van der Smagt | 10.1109/IROS.2017.8206288 | 1703.09783 | null | null |
Perception Driven Texture Generation | cs.CV cs.AI cs.LG | This paper investigates a novel task of generating texture images from
perceptual descriptions. Previous work on texture generation focused on either
synthesis from examples or generation from procedural models. Generating
textures from perceptual attributes has not been well studied yet. Meanwhile,
perceptual attributes, such as directionality, regularity and roughness are
important factors for human observers to describe a texture. In this paper, we
propose a joint deep network model that combines adversarial training and
perceptual feature regression for texture generation, while only random noise
and user-defined perceptual attributes are required as input. In this model, a
preliminarily trained convolutional neural network is essentially integrated with
the adversarial framework, which can drive the generated textures to possess
given perceptual attributes. An important aspect of the proposed model is that,
if we change one of the input perceptual features, the corresponding appearance
of the generated textures will also be changed. We design several experiments
to validate the effectiveness of the proposed method. The results show that the
proposed method can produce high quality texture images with desired perceptual
properties.
| Yanhai Gan, Huifang Chi, Ying Gao, Jun Liu, Guoqiang Zhong, Junyu Dong | null | 1703.09784 | null | null |
Deceiving Google's Cloud Video Intelligence API Built for Summarizing
Videos | cs.CV cs.LG | Despite the rapid progress of the techniques for image classification, video
annotation has remained a challenging task. Automated video annotation would be
a breakthrough technology, enabling users to search within the videos.
Recently, Google introduced the Cloud Video Intelligence API for video
analysis. As per the website, the system can be used to "separate signal from
noise, by retrieving relevant information at the video, shot or per frame"
level. A demonstration website has also been launched, which allows anyone to
select a video for annotation. The API then detects the video labels (objects
within the video) as well as shot labels (description of the video events over
time). In this paper, we examine the usability of Google's Cloud Video
Intelligence API in adversarial environments. In particular, we investigate
whether an adversary can subtly manipulate a video in such a way that the API
will return only the adversary-desired labels. For this, we select an image,
which is different from the video content, and insert it, periodically and at a
very low rate, into the video. We found that if we insert one image every two
seconds, the API is deceived into annotating the video as if it only contained
the inserted image. Note that the modification to the video is hardly
noticeable as, for instance, for a typical frame rate of 25, we insert only one
image per 50 video frames. We also found that, by inserting one image per
second, all the shot labels returned by the API are related to the inserted
image. We perform the experiments on the sample videos provided by the API
demonstration website and show that our attack is successful with different
videos and images.
| Hossein Hosseini, Baicen Xiao and Radha Poovendran | null | 1703.09793 | null | null |
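A minimal sketch of the manipulation above: insert a chosen image once every `every` frames (e.g., one per 50 frames, i.e., one per two seconds at 25 fps). Frame I/O and the API call are omitted; frames are plain numpy arrays:

```python
import numpy as np

def insert_image(frames, image, every=50):
    """Replace one frame every `every` frames with the adversary's image."""
    attacked = [f.copy() for f in frames]
    for i in range(0, len(attacked), every):
        attacked[i] = image.copy()
    return attacked

frames = [np.zeros((64, 64, 3), dtype=np.uint8) for _ in range(250)]
image = np.full((64, 64, 3), 255, dtype=np.uint8)    # stand-in inserted image
attacked = insert_image(frames, image, every=50)     # ~1 image per 2 s at 25 fps
```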
Disruptive Event Classification using PMU Data in Distribution Networks | cs.LG cs.SY | Proliferation of advanced metering devices with high sampling rates in
distribution grids, e.g., micro-phasor measurement units ({\mu}PMU), provides
unprecedented potential for wide-area monitoring and diagnostic applications,
e.g., situational awareness, health monitoring of distribution assets.
Unexpected disruptive events interrupting the normal operation of assets in
distribution grids can eventually lead to permanent failure with expensive
replacement cost over time. Therefore, disruptive event classification provides
useful information for preventive maintenance of the assets in distribution
networks. Preventive maintenance provides a wide range of benefits in terms of
time, avoiding unexpected outages, maintenance crew utilization, and equipment
replacement cost. In this paper, a PMU-data-driven framework is proposed for
classification of disruptive events in distribution networks. The two
disruptive events, i.e., malfunctioned capacitor bank switching and
malfunctioned regulator on-load tap changer (OLTC) switching, are considered and
distinguished from the normal abrupt load change in distribution grids. The
performance of the proposed framework is verified using the simulation of the
events in the IEEE 13-bus distribution network. The event classification is
formulated using two different algorithms: i) principal component analysis
(PCA) together with multi-class support vector machine (SVM), and ii)
autoencoder along with softmax classifier. The results demonstrate the
effectiveness of the proposed algorithms and satisfactory classification
accuracies.
| Iman Niazazari and Hanif Livani | null | 1703.098 | null | null |
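A minimal sketch of algorithm (i) above, PCA followed by a multi-class SVM, on synthetic stand-ins for measurement windows; the three classes correspond to capacitor bank switching, OLTC switching, and normal abrupt load change:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 256))        # 300 event windows, 256 samples each
y = rng.integers(0, 3, size=300)       # event labels (synthetic)

clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                    SVC(kernel="rbf", decision_function_shape="ovo"))
clf.fit(X, y)
print(clf.predict(X[:5]))              # predicted event classes
```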
A Deep Compositional Framework for Human-like Language Acquisition in
Virtual Environment | cs.CL cs.LG | We tackle a task where an agent learns to navigate in a 2D maze-like
environment called XWORLD. In each session, the agent perceives a sequence of
raw-pixel frames, a natural language command issued by a teacher, and a set of
rewards. The agent learns the teacher's language from scratch in a grounded and
compositional manner, such that after training it is able to correctly execute
zero-shot commands: 1) the combination of words in the command never appeared
before, and/or 2) the command contains new object concepts that are learned
from another task but never learned from navigation. Our deep framework for the
agent is trained end to end: it learns simultaneously the visual
representations of the environment, the syntax and semantics of the language,
and the action module that outputs actions. The zero-shot learning capability
of our framework results from its compositionality and modularity with
parameter tying. We visualize the intermediate outputs of the framework,
demonstrating that the agent truly understands how to solve the problem. We
believe that our results provide some preliminary insights on how to train an
agent with similar abilities in a 3D environment.
| Haonan Yu, Haichao Zhang, and Wei Xu | null | 1703.09831 | null | null |
Theory II: Landscape of the Empirical Risk in Deep Learning | cs.LG cs.CV cs.NE | Previous theoretical work on deep learning and neural network optimization
tends to focus on avoiding saddle points and local minima. However, the
practical observation is that, at least in the case of the most successful Deep
Convolutional Neural Networks (DCNNs), practitioners can always increase the
network size to fit the training data (an extreme example would be [1]). The
most successful DCNNs such as VGG and ResNets are best used with a degree of
"overparametrization". In this work, we characterize with a mix of theory and
experiments, the landscape of the empirical risk of overparametrized DCNNs. We
first prove in the regression framework the existence of a large number of
degenerate global minimizers with zero empirical error (modulo inconsistent
equations). The argument, which relies on the Bezout theorem, is rigorous
when the RELUs are replaced by a polynomial nonlinearity (which empirically
works as well). As described in our Theory III [2] paper, the same minimizers
are degenerate and thus very likely to be found by SGD that will furthermore
select with higher probability the most robust zero-minimizer. We further
experimentally explored and visualized the landscape of empirical risk of a
DCNN on CIFAR-10 during the entire training process and especially the global
minima. Finally, based on our theoretical and experimental results, we propose
an intuitive model of the landscape of DCNN's empirical loss surface, which
might not be as complicated as people commonly believe.
| Qianli Liao and Tomaso Poggio | null | 1703.09833 | null | null |
Inverse Risk-Sensitive Reinforcement Learning | cs.LG stat.ML | We address the problem of inverse reinforcement learning in Markov decision
processes where the agent is risk-sensitive. In particular, we model
risk-sensitivity in a reinforcement learning framework by making use of models
of human decision-making having their origins in behavioral psychology,
behavioral economics, and neuroscience. We propose a gradient-based inverse
reinforcement learning algorithm that minimizes a loss function defined on the
observed behavior. We demonstrate the performance of the proposed technique on
two examples, the first of which is the canonical Grid World example and the
second of which is a Markov decision process modeling passengers' decisions
regarding ride-sharing. In the latter, we use pricing and travel time data from
a ride-sharing company to construct the transition probabilities and rewards of
the Markov decision process.
| Lillian J. Ratliff and Eric Mazumdar | null | 1703.09842 | null | null |
Multi-Scale Dense Networks for Resource Efficient Image Classification | cs.LG | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings.
| Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten
and Kilian Q. Weinberger | null | 1703.09844 | null | null |
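A minimal sketch of the anytime/early-exit inference pattern above: classifiers attached at increasing depths are evaluated in order, and computation stops as soon as one is confident enough, so "easier" inputs spend less compute. The per-exit models are stand-in callables, not the paper's multi-scale network:

```python
import numpy as np

def anytime_predict(x, exits, confidence=0.9):
    """exits: list of functions x -> class-probability vector, cheap to costly."""
    probs = None
    for clf in exits:
        probs = clf(x)
        if probs.max() >= confidence:    # "easy" input: exit early
            break
    return probs.argmax(), probs.max()

# Stand-in exits that grow more confident with depth.
exits = [lambda x, s=s: np.array([0.5 + 0.1 * s, 0.5 - 0.1 * s])
         for s in range(5)]
print(anytime_predict(np.zeros(3), exits))
```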
Solar Power Forecasting Using Support Vector Regression | cs.LG cs.CE stat.AP | Generation and load balance is required in the economic scheduling of
generating units in the smart grid. Variable energy generations, particularly
from wind and solar energy resources, are witnessing a rapid boost, and it is
anticipated that with a certain level of their penetration, they can become
noteworthy sources of uncertainty. As in the case of load demand, energy
forecasting can also be used to mitigate some of the challenges that arise from
the uncertainty in the resource. While wind energy forecasting research is
considered mature, solar energy forecasting is witnessing a steadily growing
attention from the research community. This paper presents a support vector
regression model to produce solar power forecasts on a rolling basis for 24
hours ahead over an entire year, to mimic the practical business of energy
forecasting. Twelve weather variables are considered from a high-quality
benchmark dataset and new variables are extracted. The added value of the heat
index and wind speed as additional variables to the model is studied across
different seasons. The support vector regression model performance is compared
with artificial neural networks and multiple linear regression models for
energy forecasting.
| Mohamed Abuella and Badrul Chowdhury | null | 1703.09851 | null | null |
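A minimal sketch of a rolling 24-hour-ahead SVR forecast in the spirit described above, on synthetic data; the paper's actual weather variables, kernel settings, and the heat-index/wind-speed features are not reproduced:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
days = 60
X = rng.normal(size=(days, 12))          # daily weather feature vectors
y = rng.random((days, 24))               # next-day hourly solar power

model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0))
forecasts = []
for d in range(30, days):                # rolling origin over the year
    model.fit(X[:d], y[:d])              # fit on all history seen so far
    forecasts.append(model.predict(X[d : d + 1])[0])
forecasts = np.array(forecasts)          # (days - 30) x 24 hourly forecasts
```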
LabelBank: Revisiting Global Perspectives for Semantic Segmentation | cs.CV cs.AI cs.LG | Semantic segmentation requires a detailed labeling of image pixels by object
category. Information derived from local image patches is necessary to describe
the detailed shape of individual objects. However, this information is
ambiguous and can result in noisy labels. Global inference of image content can
instead capture the general semantic concepts present. We advocate that
holistic inference of image concepts provides valuable information for detailed
pixel labeling. We propose a generic framework to leverage holistic information
in the form of a LabelBank for pixel-level segmentation.
We show the ability of our framework to improve semantic segmentation
performance in a variety of settings. We learn models for extracting a holistic
LabelBank from visual cues, attributes, and/or textual descriptions. We
demonstrate improvements in semantic segmentation accuracy on standard datasets
across a range of state-of-the-art segmentation architectures and holistic
inference approaches.
| Hexiang Hu, Zhiwei Deng, Guang-Tong Zhou, Fei Sha, Greg Mori | null | 1703.09891 | null | null |
Grouped Convolutional Neural Networks for Multivariate Time Series | cs.LG | Analyzing multivariate time series data is important for many applications
such as automated control, fault diagnosis and anomaly detection. One of the
key challenges is to learn latent features automatically from dynamically
changing multivariate input. In visual recognition tasks, convolutional neural
networks (CNNs) have been successful to learn generalized feature extractors
with shared parameters over the spatial domain. However, when high-dimensional
multivariate time series is given, designing an appropriate CNN model structure
becomes challenging because the kernels may need to be extended through the
full dimension of the input volume. To address this issue, we present two
structure learning algorithms for deep CNN models. Our algorithms exploit the
covariance structure over multiple time series to partition input volume into
groups. The first algorithm learns the group CNN structures explicitly by
clustering individual input sequences. The second algorithm learns the group
CNN structures implicitly from the error backpropagation. In experiments with
two real-world datasets, we demonstrate that our group CNNs outperform existing
CNN based regression methods.
| Subin Yi, Janghoon Ju, Man-Ki Yoon, Jaesik Choi | null | 1703.09938 | null | null |
Efficient Private ERM for Smooth Objectives | cs.LG cs.DS stat.ML | In this paper, we consider efficient differentially private empirical risk
minimization from the viewpoint of optimization algorithms. For strongly convex
and smooth objectives, we prove that gradient descent with output perturbation
not only achieves nearly optimal utility, but also significantly improves the
running time of previous state-of-the-art private optimization algorithms, for
both $\epsilon$-DP and $(\epsilon, \delta)$-DP. For non-convex but smooth
objectives, we propose an RRPSGD (Random Round Private Stochastic Gradient
Descent) algorithm, which provably converges to a stationary point with privacy
guarantee. Besides the expected utility bounds, we also provide guarantees in
high probability form. Experiments demonstrate that our algorithm consistently
outperforms existing methods in both utility and running time.
| Jiaqi Zhang, Kai Zheng, Wenlong Mou, Liwei Wang | null | 1703.09947 | null | null |
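A minimal sketch of output perturbation for (epsilon, delta)-DP ERM on a strongly convex, smooth objective (L2-regularized logistic regression with row-normalized features): run plain gradient descent, then add Gaussian noise calibrated to the L2 sensitivity of the minimizer, roughly 2L/(n*lam) for an L-Lipschitz loss. The constants here are illustrative, not the paper's exact analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 10
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # rows have unit norm -> L = 1
y = rng.choice([-1.0, 1.0], size=n)
lam, L, eps, delta = 0.1, 1.0, 1.0, 1e-5

w = np.zeros(d)
for _ in range(2000):                       # gradient descent to the optimum
    margins = y * (X @ w)
    grad = -(X * (y / (1 + np.exp(margins)))[:, None]).mean(0) + lam * w
    w -= 0.5 * grad

sensitivity = 2 * L / (n * lam)             # L2 sensitivity of the minimizer
sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps
w_private = w + rng.normal(0, sigma, size=d)   # Gaussian mechanism output
```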
Marginal likelihood based model comparison in Fuzzy Bayesian Learning | stat.ML cs.LG | In a recent paper [1] we introduced the Fuzzy Bayesian Learning (FBL)
paradigm where expert opinions can be encoded in the form of fuzzy rule bases
and the hyper-parameters of the fuzzy sets can be learned from data using a
Bayesian approach. The present paper extends this work for selecting the most
appropriate rule base among a set of competing alternatives, which best
explains the data, by calculating the model evidence or marginal likelihood. We
explain why this is an attractive alternative over simply minimizing a mean
squared error metric of prediction and show the validity of the proposition
using synthetic examples and a real world case study in the financial services
sector.
| Indranil Pan and Dirk Bester | null | 1703.09956 | null | null |
Cohesion-based Online Actor-Critic Reinforcement Learning for mHealth
Intervention | cs.LG | In the wake of the vast population of smart device users worldwide, mobile
health (mHealth) technologies are expected to generate a positive and wide
influence on people's health. They are able to provide flexible, affordable and
portable health guides to device users. Current online decision-making methods
for mHealth assume that the users are completely heterogeneous. They share no
information among users and learn a separate policy for each user. However,
the data for each user is too limited in size to support separate online
learning, leading to unstable policies with high variance. Moreover, we observe
that a user may be similar to some, but not all, users, and that
connected users tend to have similar behaviors. In this paper, we propose a
network cohesion constrained (actor-critic) Reinforcement Learning (RL) method
for mHealth. The goal is to explore how to share information among similar
users to better convert the limited user information into sharper learned
policies. To the best of our knowledge, this is the first online actor-critic
RL for mHealth and first network cohesion constrained (actor-critic) RL method
in all applications. The network cohesion is important to derive effective
policies. We come up with a novel method to learn the network by using the warm
start trajectory, which directly reflects the users' property. The optimization
of our model is difficult and very different from the general supervised
learning due to the indirect observation of values. As a contribution, we
propose two algorithms for the proposed online RLs. Apart from mHealth, the
proposed methods can be easily applied or adapted to other health-related
tasks. Extensive experimental results on the HeartSteps dataset demonstrate
that, in a variety of parameter settings, the two proposed methods obtain clear
improvements over the state-of-the-art methods.
| Feiyun Zhu, Peng Liao, Xinliang Zhu, Yaowen Yao and Junzhou Huang | null | 1703.10039 | null | null |
Multiagent Bidirectionally-Coordinated Nets: Emergence of Human-level
Coordination in Learning to Play StarCraft Combat Games | cs.AI cs.LG | Many artificial intelligence (AI) applications often require multiple
intelligent agents to work in a collaborative effort. Efficient learning for
intra-agent communication and coordination is an indispensable step towards
general AI. In this paper, we take StarCraft combat game as a case study, where
the task is to coordinate multiple agents as a team to defeat their enemies. To
maintain a scalable yet effective communication protocol, we introduce a
Multiagent Bidirectionally-Coordinated Network (BiCNet ['bIknet]) with a
vectorised extension of actor-critic formulation. We show that BiCNet can
handle different types of combats with arbitrary numbers of AI agents for both
sides. Our analysis demonstrates that without any supervision such as human
demonstrations or labelled data, BiCNet could learn various types of advanced
coordination strategies that have been commonly used by experienced game
players. In our experiments, we evaluate our approach against multiple
baselines under different scenarios; it shows state-of-the-art performance, and
possesses potential values for large-scale real-world applications.
| Peng Peng, Ying Wen, Yaodong Yang, Quan Yuan, Zhenkun Tang, Haitao
Long, Jun Wang | null | 1703.10069 | null | null |
Position-based Content Attention for Time Series Forecasting with
Sequence-to-sequence RNNs | cs.LG cs.NE | We propose here an extended attention model for sequence-to-sequence
recurrent neural networks (RNNs) designed to capture (pseudo-)periods in time
series. This extended attention model can be deployed on top of any RNN and is
shown to yield state-of-the-art performance for time series forecasting on
several univariate and multivariate time series.
| Yagmur G. Cinar, Hamid Mirisaee, Parantapa Goswami, Eric Gaussier, Ali
Ait-Bachir, and Vadim Strijov | null | 1703.10089 | null | null |
Learning Inverse Mapping by Autoencoder based Generative Adversarial
Nets | cs.LG | The inverse mapping of a GAN's (Generative Adversarial Net's) generator has
great potential value. Hence, some works have been developed to construct the
inverse function of the generator by direct learning or adversarial learning.
While the results are encouraging, the problem is highly challenging, and the
existing ways of training inverse models of GANs have many disadvantages, such
as being hard to train or having poor performance. For these reasons, we
propose a new approach that uses an inverse generator ($IG$) model as the
encoder and a pre-trained generator ($G$) as the decoder of an AutoEncoder
network to train the $IG$ model. In the proposed model, the difference between
the input and output of the AutoEncoder, which are both generated images of the
pre-trained GAN's generator, is directly minimized. This optimization method
can overcome the difficulty of training an inverse model of a non-one-to-one
function. We also apply the inverse model of the GAN's generator to image
searching and translation. The experimental results show that the proposed
approach works better than the traditional approaches in image searching.
| Junyu Luo, Yong Xu, Chenwei Tang, and Jiancheng Lv | null | 1703.10094 | null | null |
The Top 10 Topics in Machine Learning Revisited: A Quantitative
Meta-Study | cs.LG cs.AI stat.ML | Which topics of machine learning are most commonly addressed in research?
This question was initially answered in 2007 by doing a qualitative survey
among distinguished researchers. In our study, we revisit this question from a
quantitative perspective. Concretely, we collect 54K abstracts of papers
published between 2007 and 2016 in leading machine learning journals and
conferences. We then use machine learning in order to determine the top 10
topics in machine learning. We not only include models, but provide a holistic
view across optimization, data, features, etc. This quantitative approach
allows us to reduce the bias of surveys. It reveals new and up-to-date insights
into what the 10 most prolific topics in machine learning research are. This
allows researchers to identify popular topics as well as new and rising topics
for their research.
| Patrick Glauner, Manxing Du, Victor Paraschiv, Andrey Boytsov, Isabel
Lopez Andrade, Jorge Meira, Petko Valtchev, Radu State | null | 1703.10121 | null | null |
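A plausible sketch of such a pipeline (the concrete topic model here, LDA over bag-of-words abstracts, is an assumption, not necessarily the authors' method):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [                      # stand-ins for the 54K collected abstracts
    "deep neural networks for image classification",
    "support vector machines and kernel methods",
    "bayesian inference for probabilistic graphical models",
    "reinforcement learning with policy gradients",
]
X = CountVectorizer(stop_words="english").fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=4, random_state=0).fit(X)
# With the real corpus, n_components=10 would extract the top 10 topics.
for topic in lda.components_:
    print(topic.argsort()[-3:])    # indices of the top words per topic
```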
Priv'IT: Private and Sample Efficient Identity Testing | cs.DS cs.CR cs.IT cs.LG math.IT math.ST stat.TH | We develop differentially private hypothesis testing methods for the small
sample regime. Given a sample $\cal D$ from a categorical distribution $p$ over
some domain $\Sigma$, an explicitly described distribution $q$ over $\Sigma$,
some privacy parameter $\varepsilon$, accuracy parameter $\alpha$, and
requirements $\beta_{\rm I}$ and $\beta_{\rm II}$ for the type I and type II
errors of our test, the goal is to distinguish between $p=q$ and
$d_{\rm{TV}}(p,q) \geq \alpha$.
We provide theoretical bounds for the sample size $|{\cal D}|$ so that our
method both satisfies $(\varepsilon,0)$-differential privacy, and guarantees
$\beta_{\rm I}$ and $\beta_{\rm II}$ type I and type II errors. We show that
differential privacy may come for free in some regimes of parameters, and we
always beat the sample complexity resulting from running the $\chi^2$-test with
noisy counts, or standard approaches such as repetition for endowing
non-private $\chi^2$-style statistics with differential privacy guarantees. We
experimentally compare the sample complexity of our method to that of recently
proposed methods for private hypothesis testing.
| Bryan Cai, Constantinos Daskalakis, Gautam Kamath | null | 1703.10127 | null | null |
Tacotron: Towards End-to-End Speech Synthesis | cs.CL cs.LG cs.SD | A text-to-speech synthesis system typically consists of multiple stages, such
as a text analysis frontend, an acoustic model and an audio synthesis module.
Building these components often requires extensive domain expertise and may
contain brittle design choices. In this paper, we present Tacotron, an
end-to-end generative text-to-speech model that synthesizes speech directly
from characters. Given <text, audio> pairs, the model can be trained completely
from scratch with random initialization. We present several key techniques to
make the sequence-to-sequence framework perform well for this challenging task.
Tacotron achieves a 3.82 subjective 5-scale mean opinion score on US English,
outperforming a production parametric system in terms of naturalness. In
addition, since Tacotron generates speech at the frame level, it's
substantially faster than sample-level autoregressive methods.
| Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J. Weiss,
Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, Quoc Le,
Yannis Agiomyrgiannakis, Rob Clark, Rif A. Saurous | null | 1703.10135 | null | null |
Enter the Matrix: Safely Interruptible Autonomous Systems via
Virtualization | cs.AI cs.LG | Autonomous systems that operate around humans will likely always rely on kill
switches that stop their execution and allow them to be remote-controlled for
the safety of humans or to prevent damage to the system. It is theoretically
possible for an autonomous system with sufficient sensor and effector
capability, which learns online using reinforcement learning, to discover that the
kill switch deprives it of long-term reward and thus learn to disable the
switch or otherwise prevent a human operator from using the switch. This is
referred to as the big red button problem. We present a technique that prevents
a reinforcement learning agent from learning to disable the kill switch. We
introduce an interruption process in which the agent's sensors and effectors
are redirected to a virtual simulation where it continues to believe it is
receiving reward. We illustrate our technique in a simple grid world
environment.
| Mark O. Riedl, Brent Harrison | null | 1703.10284 | null | null |
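A minimal sketch of the virtualization idea above: when the button is pressed, the agent's step() calls are silently redirected to a deep copy of the environment, so from the agent's perspective reward keeps flowing and the button press is never associated with lost reward. The interface is a generic step-based environment, not a specific library API:

```python
import copy

class InterruptibleEnv:
    def __init__(self, real_env):
        self.real = real_env
        self.sim = None                       # virtual environment when interrupted

    def press_button(self):
        self.sim = copy.deepcopy(self.real)   # snapshot -> "the Matrix"

    def release_button(self):
        self.sim = None                       # back to the real environment

    def step(self, action):
        if self.sim is not None:
            return self.sim.step(action)      # agent believes it still acts/earns
        return self.real.step(action)
```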
From Deep to Shallow: Transformations of Deep Rectifier Networks | cs.LG stat.ML | In this paper, we introduce transformations of deep rectifier networks,
enabling the conversion of deep rectifier networks into shallow rectifier
networks. We subsequently prove that any rectifier net of any depth can be
represented by a maximum of a number of functions that can be realized by a
shallow network with a single hidden layer. The transformations of both deep
rectifier nets and deep residual nets are conducted to demonstrate the
advantages of the residual nets over the conventional neural nets and the
advantages of the deep neural nets over the shallow neural nets. In summary,
for two rectifier nets with different depths but with same total number of
hidden units, the corresponding single hidden layer representation of the
deeper net is much more complex than the corresponding single hidden
representation of the shallower net. Similarly, for a residual net and a
conventional rectifier net with the same structure except for the skip
connections in the residual net, the corresponding single hidden layer
representation of the residual net is much more complex than the corresponding
single hidden layer representation of the conventional net.
| Senjian An, Farid Boussaid, Mohammed Bennamoun, and Jiankun Hu | null | 1703.10355 | null | null |
Simplified End-to-End MMI Training and Voting for ASR | cs.LG cs.CL cs.NE | A simplified speech recognition system that uses the maximum mutual
information (MMI) criterion is considered. End-to-end training using gradient
descent is suggested, similarly to the training of connectionist temporal
classification (CTC). We use an MMI criterion with a simple language model in
the training stage, and a standard HMM decoder. Our method compares favorably
to CTC in terms of performance, robustness, decoding time, disk footprint and
quality of alignments. The good alignments enable the use of a straightforward
ensemble method, obtained by simply averaging the predictions of several neural
network models, that were trained separately end-to-end. The ensemble method
yields a considerable reduction in the word error rate.
| Lior Fritz, David Burshtein | null | 1703.10356 | null | null |
On Fundamental Limits of Robust Learning | cs.LG stat.ML | We consider the problems of robust PAC learning from distributed and
streaming data, which may contain malicious errors and outliers, and analyze
their fundamental complexity questions. In particular, we establish lower
bounds on the communication complexity for distributed robust learning
performed on multiple machines, and on the space complexity for robust learning
from streaming data on a single machine. These results demonstrate that gaining
robustness of learning algorithms is usually at the expense of increased
complexities. As far as we know, this work gives the first complexity results
for distributed and online robust PAC learning.
| Jiashi Feng | null | 1703.10444 | null | null |
Application of a Shallow Neural Network to Short-Term Stock Trading | cs.NE cs.LG | Machine learning is increasingly prevalent in stock market trading. Though
neural networks have seen success in computer vision and natural language
processing, they have not been as useful in stock market trading. To
demonstrate the applicability of a neural network in stock trading, we made a
single-layer neural network that recommends buying or selling shares of a stock
by comparing the highest high of 10 consecutive days with that of the next 10
days, a process repeated for the stock's year-long historical data. A
chi-squared analysis found that the neural network can accurately and
appropriately decide whether to buy or sell shares for a given stock, showing
that a neural network can make simple decisions about the stock market.
| Abhinav Madahar, Yuze Ma, and Kunal Patel | null | 1703.10458 | null | null |
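A minimal sketch of the labeling scheme above on synthetic prices: each example compares the highest high of 10 consecutive days with that of the next 10 days, and a single-layer (logistic) unit is fit to recommend buy/sell; the actual data and network details are not reproduced:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
highs = np.cumsum(rng.normal(0, 1, size=365)) + 100   # synthetic daily highs

X, y = [], []
for t in range(0, len(highs) - 20):
    past, future = highs[t : t + 10], highs[t + 10 : t + 20]
    X.append(past)
    y.append(int(future.max() > past.max()))          # 1 = buy, 0 = sell
clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
print(clf.predict(np.array(X[-5:])))                  # latest recommendations
```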
On Bayesian Exponentially Embedded Family for Model Order Selection | stat.ML cs.LG | In this paper, we derive a Bayesian model order selection rule by using the
exponentially embedded family method, termed Bayesian EEF. Unlike many other
Bayesian model selection methods, the Bayesian EEF can use vague proper priors
and improper noninformative priors to be objective in the elicitation of
parameter priors. Moreover, the penalty term of the rule is shown to be the sum
of half of the parameter dimension and the estimated mutual information between
parameter and observed data. This helps to reveal the EEF mechanism in
selecting model orders and may provide new insights into the open problems of
choosing an optimal penalty term for model order selection and choosing a good
prior from information theoretic viewpoints. The important example of linear
model order selection is given to illustrate the algorithms and arguments.
Lastly, the Bayesian EEF that uses Jeffreys prior coincides with the EEF rule
derived by frequentist strategies. This shows another interesting relationship
between the frequentist and Bayesian philosophies for model selection.
| Zhenghan Zhu and Steven Kay | 10.1109/TSP.2017.2781642 | 1703.10513 | null | null |
The Informativeness of K -Means for Learning Mixture Models | stat.ML cs.IT cs.LG math.IT stat.ME | The learning of mixture models can be viewed as a clustering problem. Indeed,
given data samples independently generated from a mixture of distributions, we
often would like to find the {\it correct target clustering} of the samples
according to which component distribution they were generated from. For a
clustering problem, practitioners often choose to use the simple $k$-means
algorithm. $k$-means attempts to find an {\it optimal clustering} that
minimizes the sum-of-squares distance between each point and its cluster
center. In this paper, we consider fundamental (i.e., information-theoretic)
limits of the solutions (clusterings) obtained by optimizing the sum-of-squares
distance. In particular, we provide sufficient conditions for the closeness of
any optimal clustering and the correct target clustering assuming that the data
samples are generated from a mixture of spherical Gaussian distributions. We
also generalize our results to log-concave distributions. Moreover, we show
that under similar or even weaker conditions on the mixture model, any optimal
clustering for the samples with reduced dimensionality is also close to the
correct target clustering. These results provide intuition for the
informativeness of $k$-means (with and without dimensionality reduction) as an
algorithm for learning mixture models.
| Zhaoqiang Liu, Vincent Y. F. Tan | 10.1109/TIT.2019.2927560 | 1703.10534 | null | null |
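A minimal sketch of the setting analyzed above: samples from a well-separated mixture of spherical Gaussians, a k-means clustering, and the adjusted Rand index as a proxy for closeness between the optimal and the correct target clustering:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
centers = np.array([[0, 0], [8, 0], [0, 8]])
target = rng.integers(0, 3, size=600)                   # true components
X = centers[target] + rng.normal(size=(600, 2))         # spherical noise

pred = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(adjusted_rand_score(target, pred))                # ~1.0: well separated
```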
Bootstrapping Labelled Dataset Construction for Cow Tracking and
Behavior Analysis | cs.CV cs.AI cs.LG | This paper introduces a new approach to the long-term tracking of an object
in a challenging environment. The object is a cow and the environment is an
enclosure in a cowshed. Some of the key challenges in this domain are a
cluttered background, low contrast and high similarity between moving objects
which greatly reduces the efficiency of most existing approaches, including
those based on background subtraction. Our approach is split into object
localization, instance segmentation, learning and tracking stages. Our solution
is compared to a range of semi-supervised object tracking algorithms and we
show that the performance is strong and well suited to subsequent analysis. We
present our solution as a first step towards broader tracking and behavior
monitoring for cows in precision agriculture with the ultimate objective of
early detection of lameness.
| Aram Ter-Sarkisov and Robert Ross and John Kelleher | null | 1703.10571 | null | null |
Atomic Convolutional Networks for Predicting Protein-Ligand Binding
Affinity | cs.LG physics.chem-ph stat.ML | Empirical scoring functions based on either molecular force fields or
cheminformatics descriptors are widely used, in conjunction with molecular
docking, during the early stages of drug discovery to predict potency and
binding affinity of a drug-like molecule to a given target. These models
require expert-level knowledge of physical chemistry and biology to be encoded
as hand-tuned parameters or features rather than allowing the underlying model
to select features in a data-driven procedure. Here, we develop a general
3-dimensional spatial convolution operation for learning atomic-level chemical
interactions directly from atomic coordinates and demonstrate its application
to structure-based bioactivity prediction. The atomic convolutional neural
network is trained to predict the experimentally determined binding affinity of
a protein-ligand complex by direct calculation of the energy associated with
the complex, protein, and ligand given the crystal structure of the binding
pose. Non-covalent interactions present in the complex that are absent in the
protein-ligand sub-structures are identified and the model learns the
interaction strength associated with these features. We test our model by
predicting the binding free energy of a subset of protein-ligand complexes
found in the PDBBind dataset and compare with state-of-the-art cheminformatics
and machine learning-based approaches. We find that all methods achieve
experimental accuracy and that atomic convolutional networks either outperform
or perform competitively with the cheminformatics based methods. Unlike all
previous protein-ligand prediction systems, atomic convolutional networks are
end-to-end and fully-differentiable. They represent a new data-driven,
physics-based deep learning model paradigm that offers a strong foundation for
future improvements in structure-based bioactivity prediction.
| Joseph Gomes, Bharath Ramsundar, Evan N. Feinberg, Vijay S. Pande | null | 1703.10603 | null | null |
Diving into the shallows: a computational perspective on large-scale
shallow learning | stat.ML cs.LG | In this paper we first identify a basic limitation in gradient descent-based
optimization methods when used in conjunction with smooth kernels. An analysis
based on the spectral properties of the kernel demonstrates that only a
vanishingly small portion of the function space is reachable after a polynomial
number of gradient descent iterations. This lack of approximating power
drastically limits gradient descent for a fixed computational budget leading to
serious over-regularization/underfitting. The issue is purely algorithmic,
persisting even in the limit of infinite data.
To address this shortcoming in practice, we introduce EigenPro iteration,
based on a preconditioning scheme using a small number of approximately
computed eigenvectors. It can also be viewed as learning a new kernel optimized
for gradient descent. It turns out that injecting this small (computationally
inexpensive and SGD-compatible) amount of approximate second-order information
leads to major improvements in convergence. For large data, this translates
into significant performance boost over the standard kernel methods. In
particular, we are able to consistently match or improve the state-of-the-art
results recently reported in the literature with a small fraction of their
computational budget.
Finally, we feel that these results show a need for a broader computational
perspective on modern large-scale learning to complement more traditional
statistical and convergence analyses. In particular, many phenomena of
large-scale high-dimensional inference are best understood in terms of
optimization on infinite dimensional Hilbert spaces, where standard algorithms
can sometimes have properties at odds with finite-dimensional intuition. A
systematic analysis concentrating on the approximation power of such algorithms
within a budget of computation may lead to progress both in theory and
practice.
| Siyuan Ma, Mikhail Belkin | null | 1703.10622 | null | null |
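A minimal sketch of the preconditioning idea above for kernel least squares: dampen the top-k eigendirections of the kernel matrix so that a much larger step size becomes stable, then run preconditioned fixed-point (Richardson) iterations to solve Ka = y. The damping and step-size choices follow the general EigenPro recipe, but the exact constants are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = rng.normal(size=100)
K = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))   # Gaussian kernel

k = 10
lams, vecs = np.linalg.eigh(K)
lams, vecs = lams[::-1], vecs[:, ::-1]                     # descending order
E = vecs[:, :k]                                            # top-k eigenvectors
D = 1.0 - lams[k] / lams[:k]                               # damping factors

def precondition(g):
    return g - E @ (D * (E.T @ g))                         # apply P to a vector

a = np.zeros(100)
eta = 1.0 / lams[k]                  # larger step, safe after preconditioning
for _ in range(200):
    a -= eta * precondition(K @ a - y)
```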
Interpretable Learning for Self-Driving Cars by Visualizing Causal
Attention | cs.CV cs.LG | Deep neural perception and control networks are likely to be a key component
of self-driving vehicles. These models need to be explainable - they should
provide easy-to-interpret rationales for their behavior - so that passengers,
insurance companies, law enforcement, developers etc., can understand what
triggered a particular behavior. Here we explore the use of visual
explanations. These explanations take the form of real-time highlighted regions
of an image that causally influence the network's output (steering control).
Our approach is two-stage. In the first stage, we use a visual attention model
to train a convolution network end-to-end from images to steering angle. The
attention model highlights image regions that potentially influence the
network's output. Some of these are true influences, but some are spurious. We
then apply a causal filtering step to determine which input regions actually
influence the output. This produces more succinct visual explanations and more
accurately exposes the network's behavior. We demonstrate the effectiveness of
our model on three datasets totaling 16 hours of driving. We first show that
training with attention does not degrade the performance of the end-to-end
network. Then we show that the network causally cues on a variety of features
that are used by humans while driving.
| Jinkyu Kim and John Canny | null | 1703.10631 | null | null |
Reliable Decision Support using Counterfactual Models | stat.ML cs.AI cs.LG | Decision-makers are faced with the challenge of estimating what is likely to
happen when they take an action. For instance, if I choose not to treat this
patient, are they likely to die? Practitioners commonly use supervised learning
algorithms to fit predictive models that help decision-makers reason about
likely future outcomes, but we show that this approach is unreliable, and
sometimes even dangerous. The key issue is that supervised learning algorithms
are highly sensitive to the policy used to choose actions in the training data,
which causes the model to capture relationships that do not generalize. We
propose using a different learning objective that predicts counterfactuals
instead of predicting outcomes under an existing action policy as in supervised
learning. To support decision-making in temporal settings, we introduce the
Counterfactual Gaussian Process (CGP) to predict the counterfactual future
progression of continuous-time trajectories under sequences of future actions.
We demonstrate the benefits of the CGP on two important decision-support tasks:
risk prediction and "what if?" reasoning for individualized treatment planning.
| Peter Schulam and Suchi Saria | null | 1703.10651 | null | null |
Near Perfect Protein Multi-Label Classification with Deep Neural
Networks | q-bio.BM cs.LG stat.ML | Artificial neural networks (ANNs) have gained a well-deserved popularity
among machine learning tools upon their recent successful applications in
image- and sound processing and classification problems. ANNs have also been
applied for predicting the family or function of a protein, knowing its residue
sequence. Here we present two new ANNs with multi-label classification ability,
showing impressive accuracy when classifying protein sequences into 698 UniProt
families (AUC=99.99%) and 983 Gene Ontology classes (AUC=99.45%).
| Balazs Szalkai and Vince Grolmusz | null | 1703.10663 | null | null |
QoS-Aware Multi-Armed Bandits | cs.LG cs.SE | Motivated by runtime verification of QoS requirements in self-adaptive and
self-organizing systems that are able to reconfigure their structure and
behavior in response to runtime data, we propose a QoS-aware variant of
Thompson sampling for multi-armed bandits. It is applicable in settings where
the QoS satisfaction of an arm has to be ensured efficiently and with high
confidence, rather than where the goal is to find the optimal arm while
minimizing regret. Preliminary
experimental results encourage further research in the field of QoS-aware
decision making.
| Lenz Belzner, Thomas Gabor | 10.1109/FAS-W.2016.36 | 1703.10669 | null | null |
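A minimal sketch in the spirit of the approach above, with Beta-Bernoulli arms: sample arms Thompson-style, but stop once some arm's posterior probability of meeting the QoS threshold q exceeds 1 - delta. The stopping rule and priors here are illustrative assumptions, not the authors' exact variant:

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
true_rates = [0.6, 0.75, 0.9]          # unknown per-arm QoS satisfaction rates
q, delta = 0.8, 0.05                   # QoS requirement and risk level
a = np.ones(3); b = np.ones(3)         # Beta(1, 1) priors

for t in range(2000):
    conf = beta.sf(q, a, b)            # P(rate >= q) per arm under the posterior
    if conf.max() >= 1 - delta:
        print(f"arm {conf.argmax()} certified after {t} pulls")
        break
    arm = rng.beta(a, b).argmax()      # Thompson sample to pick the next pull
    reward = rng.random() < true_rates[arm]
    a[arm] += reward; b[arm] += 1 - reward
```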
Applying Ricci Flow to High Dimensional Manifold Learning | cs.LG | Traditional manifold learning algorithms often bear an assumption that the
local neighborhood of any point on an embedded manifold is roughly equal to the
tangent space at that point, without considering the curvature. This
curvature-indifferent way of processing the manifold often makes traditional
dimension reduction poorly neighborhood-preserving. To overcome this drawback,
we propose a new algorithm called RF-ML that performs an operation on the
manifold with the help of Ricci flow before reducing the dimension of the
manifold.
| Yangyang Li and Ruqian Lu | null | 1703.10675 | null | null |
BEGAN: Boundary Equilibrium Generative Adversarial Networks | cs.LG stat.ML | We propose a new equilibrium enforcing method paired with a loss derived from
the Wasserstein distance for training auto-encoder based Generative Adversarial
Networks. This method balances the generator and discriminator during training.
Additionally, it provides a new approximate convergence measure, fast and
stable training and high visual quality. We also derive a way of controlling
the trade-off between image diversity and visual quality. We focus on the image
generation task, setting a new milestone in visual quality, even at higher
resolutions. This is achieved while using a relatively simple model
architecture and a standard training procedure.
| David Berthelot, Thomas Schumm, Luke Metz | null | 1703.10717 | null | null |
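A minimal sketch of the equilibrium bookkeeping described above, with the networks and optimizers stubbed out: the discriminator is an autoencoder with pixel loss L(v) = |v - D(v)|, a control variable k_t balances the two players, and M gives the convergence measure; the gamma and lambda_k values are illustrative:

```python
gamma, lambda_k = 0.5, 1e-3     # diversity ratio and proportional gain for k
k = 0.0                         # equilibrium control variable, kept in [0, 1]

def began_step(L_real, L_fake):
    """L_real = L(x), L_fake = L(G(z)): autoencoder (pixel) losses from D."""
    global k
    loss_D = L_real - k * L_fake                   # discriminator objective
    loss_G = L_fake                                # generator objective
    k = min(max(k + lambda_k * (gamma * L_real - L_fake), 0.0), 1.0)
    M = L_real + abs(gamma * L_real - L_fake)      # convergence measure
    return loss_D, loss_G, M

# Example: one bookkeeping step with stand-in loss values.
print(began_step(0.8, 0.3))
```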
Fundamental Conditions for Low-CP-Rank Tensor Completion | cs.LG cs.NA math.NA stat.ML | We consider the problem of low canonical polyadic (CP) rank tensor
completion. A completion is a tensor whose entries agree with the observed
entries and its rank matches the given CP rank. We analyze the manifold
structure corresponding to the tensors with the given rank and define a set of
polynomials based on the sampling pattern and CP decomposition. Then, we show
that finite completability of the sampled tensor is equivalent to having a
certain number of algebraically independent polynomials among the defined
polynomials. Our proposed approach results in characterizing the maximum number
of algebraically independent polynomials in terms of a simple geometric
structure of the sampling pattern, and therefore we obtain the deterministic
necessary and sufficient condition on the sampling pattern for finite
completability of the sampled tensor. Moreover, assuming that the entries of
the tensor are sampled independently with probability $p$ and using the
mentioned deterministic analysis, we propose a combinatorial method to derive a
lower bound on the sampling probability $p$, or equivalently, the number of
sampled entries that guarantees finite completability with high probability. We
also show that the existing result for the matrix completion problem can be
used to obtain a loose lower bound on the sampling probability $p$. In
addition, we obtain deterministic and probabilistic conditions for unique
completability. It is seen that the number of samples required for finite or
unique completability obtained by the proposed analysis on the CP manifold is
orders-of-magnitude lower than that is obtained by the existing analysis on the
Grassmannian manifold.
| Morteza Ashraphijuo, Xiaodong Wang | null | 1703.1074 | null | null |
Diabetic Retinopathy Detection via Deep Convolutional Networks for
Discriminative Localization and Visual Explanation | cs.CV cs.LG cs.NE | We proposed a deep learning method for interpretable diabetic retinopathy
(DR) detection. The visual-interpretable feature of the proposed method is
achieved by adding a regression activation map (RAM) after the global average
pooling layer of the convolutional neural network (CNN). With RAM, the
proposed model can localize the discriminative regions of a retina image to
show the specific region of interest in terms of its severity level. We believe
this advantage of the proposed deep learning model is highly desired for DR
detection because, in practice, users are not only interested in high
prediction performance but are also keen to understand the insights of DR
detection and why the adopted learning model works. In experiments
conducted on a large-scale retina image dataset, we show that the proposed
CNN model can achieve high performance on DR detection compared with the
state-of-the-art while achieving the merits of providing the RAM to highlight
the salient regions of the input image.
| Zhiguang Wang, Jianbo Yang | null | 1703.10757 | null | null |
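As a rough illustration of the regression activation map idea: with a global average pooling (GAP) layer followed by a linear regression head, the severity score can be redistributed over spatial locations by weighting the final convolutional feature maps with the regression weights. The sketch below assumes this CAM-style formulation; array shapes and names are ours, not the authors'.

```python
import numpy as np

def regression_activation_map(feature_maps: np.ndarray,
                              reg_weights: np.ndarray) -> np.ndarray:
    """feature_maps: (C, H, W) activations of the last conv layer;
    reg_weights: (C,) weights of the linear regression head after GAP.
    Returns an (H, W) map highlighting regions driving the severity score."""
    ram = np.tensordot(reg_weights, feature_maps, axes=([0], [0]))  # (H, W)
    ram -= ram.min()
    if ram.max() > 0:
        ram /= ram.max()  # normalize to [0, 1] for visualization
    return ram
```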
Bi-class classification of humpback whale sound units against complex
background noise with Deep Convolution Neural Network | stat.ML cs.LG cs.SD | Automatically detecting sound units of humpback whales in complex
time-varying background noises is a current challenge for scientists. In this
paper, we explore the applicability of the Convolutional Neural Network (CNN)
method for this task. In the evaluation stage, we present 6 bi-class
classification experiments on whale sound detection against different
background noise types (e.g., rain, wind). In comparison to classical
FFT-based representations such as spectrograms, we show that the use of
image-based pretrained CNN features brings higher performance in classifying
whale sounds against background noise.
| Cazau Dorian, Riwal Lefort, Julien Bonnel, Jean-Luc Zarader and
Olivier Adam | null | 1703.10887 | null | null |
Feature functional theory - binding predictor (FFT-BP) for the blind
prediction of binding free energies | q-bio.QM cs.LG physics.chem-ph | We present a feature functional theory - binding predictor (FFT-BP) for the
protein-ligand binding affinity prediction. The underpinning assumptions of
FFT-BP are as follows: i) representability: there exists a microscopic feature
vector that can uniquely characterize and distinguish one protein-ligand
complex from another; ii) feature-function relationship: the macroscopic
features, including binding free energy, of a complex are functionals of
microscopic feature vectors; and iii) similarity: molecules with similar
microscopic features have similar macroscopic features, such as binding
affinity. Physical models, such as implicit solvent models and quantum theory,
are utilized to extract microscopic features, while machine learning algorithms
are employed to rank the similarity among protein-ligand complexes. A large
variety of numerical validations and tests confirms the accuracy and robustness
of the proposed FFT-BP model. The root mean square errors (RMSEs) of FFT-BP
blind predictions of a benchmark set of 100 complexes, the PDBBind v2007 core
set of 195 complexes and the PDBBind v2015 core set of 195 complexes are 1.99,
2.02 and 1.92 kcal/mol, respectively. Their corresponding Pearson correlation
coefficients are 0.75, 0.80, and 0.78, respectively.
| Bao Wang, Zhixiong Zhao, Duc D. Nguyen, Guo-Wei Wei | null | 1703.10927 | null | null |
Sentence Simplification with Deep Reinforcement Learning | cs.CL cs.LG | Sentence simplification aims to make sentences easier to read and understand.
Most recent approaches draw on insights from machine translation to learn
simplification rewrites from monolingual corpora of complex and simple
sentences. We address the simplification problem with an encoder-decoder model
coupled with a deep reinforcement learning framework. Our model, which we call
{\sc Dress} (as shorthand for {\bf D}eep {\bf RE}inforcement {\bf S}entence
{\bf S}implification), explores the space of possible simplifications while
learning to optimize a reward function that encourages outputs which are
simple, fluent, and preserve the meaning of the input. Experiments on three
datasets demonstrate that our model outperforms competitive simplification
systems.
| Xingxing Zhang, Mirella Lapata | null | 1703.10931 | null | null |
Comparison of multi-task convolutional neural network (MT-CNN) and a few
other methods for toxicity prediction | q-bio.QM cs.LG stat.ML | Toxicity analysis and prediction are of paramount importance to human health
and environmental protection. Existing computational methods are built from a
wide variety of descriptors and regressors, which makes their performance
analysis difficult. For example, the deep neural network (DNN), successful
on many occasions, acts like a black box and offers little conceptual
elegance or physical understanding. The present work constructs a common set of
microscopic descriptors based on established physical models for charges,
surface areas and free energies to assess the performance of multi-task
convolutional neural network (MT-CNN) architectures and a few other approaches,
including random forest (RF) and gradient boosting decision tree (GBDT), on an
equal footing. Comparison is also given to convolutional neural network (CNN)
and non-convolutional deep neural network (DNN) algorithms. Four benchmark
toxicity data sets (i.e., endpoints) are used to evaluate various approaches.
Extensive numerical studies indicate that the present MT-CNN architecture is
able to outperform the state-of-the-art methods.
| Kedi Wu, Guo-Wei Wei | null | 1703.10951 | null | null |
Learning Visual Servoing with Deep Features and Fitted Q-Iteration | cs.LG cs.AI cs.RO | Visual servoing involves choosing actions that move a robot in response to
observations from a camera, in order to reach a goal configuration in the
world. Standard visual servoing approaches typically rely on manually designed
features and analytical dynamics models, which limits their generalization
capability and often requires extensive application-specific feature and model
engineering. In this work, we study how learned visual features, learned
predictive dynamics models, and reinforcement learning can be combined to learn
visual servoing mechanisms. We focus on target following, with the goal of
designing algorithms that can learn a visual servo using small amounts of data of
the target in question, to enable quick adaptation to new targets. Our approach
is based on servoing the camera in the space of learned visual features, rather
than image pixels or manually-designed keypoints. We demonstrate that standard
deep features, in our case taken from a model trained for object
classification, can be used together with a bilinear predictive model to learn
an effective visual servo that is robust to visual variation, changes in
viewing angle and appearance, and occlusions. A key component of our approach
is to use a sample-efficient fitted Q-iteration algorithm to learn which
features are best suited for the task at hand. We show that we can learn an
effective visual servo on a complex synthetic car following benchmark using
just 20 training trajectory samples for reinforcement learning. We demonstrate
substantial improvement over a conventional approach based on image pixels or
hand-designed keypoints, and we show an improvement in sample-efficiency of
more than two orders of magnitude over standard model-free deep reinforcement
learning algorithms. Videos are available at
http://rll.berkeley.edu/visual_servoing .
| Alex X. Lee, Sergey Levine, Pieter Abbeel | null | 1703.11000 | null | null |
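As a rough sketch of the fitted Q-iteration component, independent of the servoing specifics: Q-values are repeatedly regressed onto bootstrapped targets computed from a fixed batch of transitions. The regressor choice, feature layout, and names below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fitted_q_iteration(transitions, n_iters: int = 50, gamma: float = 0.95):
    """transitions: list of (sa_features, reward, next_sa_features_per_action),
    where features are precomputed for each state-action pair."""
    X = np.array([t[0] for t in transitions])
    q = Ridge(alpha=1.0).fit(X, np.zeros(len(X)))          # initialize Q ~ 0
    for _ in range(n_iters):
        targets = [r + gamma * max(q.predict(np.array(next_feats)))
                   for (_, r, next_feats) in transitions]  # greedy backup
        q = Ridge(alpha=1.0).fit(X, np.array(targets))     # refit to targets
    return q
```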
Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural
Networks with Many More Parameters than Training Data | cs.LG | One of the defining properties of deep learning is that models are chosen to
have many more parameters than available training data. In light of this
capacity for overfitting, it is remarkable that simple algorithms like SGD
reliably return solutions with low test error. One roadblock to explaining
these phenomena in terms of implicit regularization, structural properties of
the solution, and/or easiness of the data is that many learning bounds are
quantitatively vacuous when applied to networks learned by SGD in this "deep
learning" regime. Logically, in order to explain generalization, we need
nonvacuous bounds. We return to an idea by Langford and Caruana (2001), who
used PAC-Bayes bounds to compute nonvacuous numerical bounds on generalization
error for stochastic two-layer two-hidden-unit neural networks via a
sensitivity analysis. By optimizing the PAC-Bayes bound directly, we are able
to extend their approach and obtain nonvacuous generalization bounds for deep
stochastic neural network classifiers with millions of parameters trained on
only tens of thousands of examples. We connect our findings to recent and old
work on flat minima and MDL-based explanations of generalization.
| Gintare Karolina Dziugaite, Daniel M. Roy | null | 1703.11008 | null | null |
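For reference, one standard form of the PAC-Bayes bound optimized in this line of work (the Langford-Seeger/Maurer form, in our notation; the logarithmic constant varies slightly across statements) says that, with probability at least $1-\delta$ over an i.i.d. sample of size $n$, simultaneously for all posteriors $Q$ over classifiers,

\[
\mathrm{kl}\big(\hat{e}(Q)\,\big\|\,e(Q)\big) \;\le\; \frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{n}}{\delta}}{n},
\]

where $\hat{e}(Q)$ and $e(Q)$ are the empirical and true errors of the Gibbs classifier, $P$ is a prior fixed before seeing the data, and $\mathrm{kl}$ is the binary KL divergence; optimizing the right-hand side over a tractable family for $Q$ (e.g., Gaussians over network weights) yields numerical bounds.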
Spectral Methods for Nonparametric Models | cs.LG stat.ML | Nonparametric models are versatile, albeit computationally expensive, tools
for modeling mixture models. In this paper, we introduce spectral methods for
the two most popular nonparametric models: the Indian Buffet Process (IBP) and
the Hierarchical Dirichlet Process (HDP). We show that using spectral methods
for the inference of nonparametric models is computationally and statistically
efficient. In particular, we derive the lower-order moments of the IBP and the
HDP, propose spectral algorithms for both models, and provide reconstruction
guarantees for the algorithms. For the HDP, we further show that applying
hierarchical models to datasets with hierarchical structure, which can be
solved with the generalized spectral HDP, produces better solutions than flat
models in terms of likelihood performance.
| Hsiao-Yu Fish Tung and Chao-Yuan Wu and Manzil Zaheer and Alexander J.
Smola | null | 1704.00003 | null | null |
On the Reliable Detection of Concept Drift from Streaming Unlabeled Data | stat.ML cs.AI cs.LG | Classifiers deployed in the real world operate in a dynamic environment,
where the data distribution can change over time. These changes, referred to as
concept drift, can cause the predictive performance of the classifier to drop
over time, thereby making it obsolete. To be of any real use, these classifiers
need to detect drifts and be able to adapt to them, over time. Detecting drifts
has traditionally been approached as a supervised task, with labeled data
constantly being used for validating the learned model. Although effective in
detecting drifts, these techniques are impractical, as labeling is a difficult,
costly and time-consuming activity. On the other hand, unsupervised change
detection techniques are unreliable, as they produce a large number of false
alarms. The inefficacy of the unsupervised techniques stems from the exclusion
of the characteristics of the learned classifier, from the detection process.
In this paper, we propose the Margin Density Drift Detection (MD3) algorithm,
which tracks the number of samples in the uncertainty region of a classifier,
as a metric to detect drift. The MD3 algorithm is a distribution independent,
application independent, model independent, unsupervised and incremental
algorithm for reliably detecting drifts from data streams. Experimental
evaluation on 6 drift induced datasets and 4 additional datasets from the
cybersecurity domain demonstrates that the MD3 approach can reliably detect
drifts, with significantly fewer false alarms compared to unsupervised feature
based drift detectors. The reduced false alarms enables the signaling of drifts
only when they are most likely to affect classification performance. As such,
the MD3 approach leads to a detection scheme which is credible, label efficient
and general in its applicability.
| Tegjyot Singh Sethi, Mehmed Kantardzic | null | 1704.00023 | null | null |
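The core margin-density signal can be sketched in a few lines: track the fraction of unlabeled stream samples falling inside the classifier's margin (its uncertainty region), and flag drift when that fraction deviates from a reference value by more than a tolerance. The threshold rule and names below are illustrative assumptions, not the exact MD3 specification.

```python
import numpy as np

def margin_density(svm, X: np.ndarray) -> float:
    """Fraction of samples inside the margin |w.x + b| <= 1 of a linear SVM."""
    return float(np.mean(np.abs(svm.decision_function(X)) <= 1.0))

def md3_alarm(svm, window: np.ndarray, md_ref: float, sigma_ref: float,
              theta: float = 3.0) -> bool:
    """Signal drift when the window's margin density moves theta reference
    standard deviations away from the value established at training time."""
    return abs(margin_density(svm, window) - md_ref) > theta * sigma_ref
```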
Improved Training of Wasserstein GANs | cs.LG stat.ML | Generative Adversarial Networks (GANs) are powerful generative models, but
suffer from training instability. The recently proposed Wasserstein GAN (WGAN)
makes progress toward stable training of GANs, but sometimes can still generate
only low-quality samples or fail to converge. We find that these problems are
often due to the use of weight clipping in WGAN to enforce a Lipschitz
constraint on the critic, which can lead to undesired behavior. We propose an
alternative to clipping weights: penalize the norm of gradient of the critic
with respect to its input. Our proposed method performs better than standard
WGAN and enables stable training of a wide variety of GAN architectures with
almost no hyperparameter tuning, including 101-layer ResNets and language
models over discrete data. We also achieve high quality generations on CIFAR-10
and LSUN bedrooms.
| Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin,
Aaron Courville | null | 1704.00028 | null | null |
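The gradient penalty itself is compact enough to state directly. A minimal PyTorch sketch of the penalty term, evaluated at random interpolates between real and generated samples (variable names are ours):

```python
import torch

def gradient_penalty(critic, real: torch.Tensor, fake: torch.Tensor,
                     lam: float = 10.0) -> torch.Tensor:
    """WGAN-GP style penalty: lam * E[(||grad_xhat D(xhat)||_2 - 1)^2]."""
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    xhat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    d_xhat = critic(xhat)
    grads, = torch.autograd.grad(d_xhat.sum(), xhat, create_graph=True)
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lam * ((grad_norm - 1) ** 2).mean()
```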
SafetyNet: Detecting and Rejecting Adversarial Examples Robustly | cs.CV cs.LG | We describe a method to produce a network where current methods such as
DeepFool have great difficulty producing adversarial samples. Our construction
suggests some insights into how deep networks work. We provide a reasonable
analyses that our construction is difficult to defeat, and show experimentally
that our method is hard to defeat with both Type I and Type II attacks using
several standard networks and datasets. This SafetyNet architecture is used to
an important and novel application SceneProof, which can reliably detect
whether an image is a picture of a real scene or not. SceneProof applies to
images captured with depth maps (RGBD images) and checks if a pair of image and
depth map is consistent. It relies on the relative difficulty of producing
naturalistic depth maps for images in post processing. We demonstrate that our
SafetyNet is robust to adversarial examples built from currently known
attacking approaches.
| Jiajun Lu, Theerasit Issaranon, David Forsyth | null | 1704.00103 | null | null |
Assortment Optimization under Unknown MultiNomial Logit Choice Models | cs.LG | Motivated by e-commerce, we study the online assortment optimization problem.
The seller offers an assortment, i.e. a subset of products, to each arriving
customer, who then purchases one or no product from her offered assortment. A
customer's purchase decision is governed by the underlying MultiNomial Logit
(MNL) choice model. The seller aims to maximize the total revenue in a finite
sales horizon, subject to resource constraints and uncertainty in the MNL
choice model. We first propose an efficient online policy which incurs a regret
$\tilde{O}(T^{2/3})$, where $T$ is the number of customers in the sales
horizon. Then, we propose a UCB policy that achieves a regret
$\tilde{O}(T^{1/2})$. Both regret bounds are sublinear in the number of
assortments.
| Wang Chi Cheung, David Simchi-Levi | null | 1704.00108 | null | null |
Snapshot Ensembles: Train 1, get M for free | cs.LG | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
achieve the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively.
| Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft,
Kilian Q. Weinberger | null | 1704.00109 | null | null |
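The cyclic schedule that produces the snapshots is simple to write down: a shifted-cosine annealing restarted M times over T total iterations, with a snapshot saved at the end of each cycle. A sketch (function signature is ours):

```python
import math

def snapshot_lr(t: int, T: int, M: int, alpha0: float) -> float:
    """Cyclic cosine learning rate: alpha0/2 * (cos(pi * cycle_fraction) + 1).
    t is the current iteration in [0, T); the rate restarts every T/M steps."""
    cycle_len = math.ceil(T / M)
    frac = (t % cycle_len) / cycle_len
    return alpha0 / 2.0 * (math.cos(math.pi * frac) + 1.0)
```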
Clustering-based Source-aware Assessment of True Robustness for Learning
Models | cs.LG | We introduce a novel validation framework to measure the true robustness of
learning models for real-world applications by creating source-inclusive and
source-exclusive partitions in a dataset via clustering. We develop a
robustness metric derived from source-aware lower and upper bounds of model
accuracy even when data source labels are not readily available. We clearly
demonstrate that even on a well-explored dataset like MNIST, challenging
training scenarios can be constructed under the proposed assessment framework
for two separate yet equally important applications: i) more rigorous learning
model comparison and ii) dataset adequacy evaluation. In addition, our findings
not only promise a more complete identification of trade-offs between model
complexity, accuracy and robustness but can also help researchers optimize
their efforts in data collection by identifying the less robust and more
challenging class labels.
| Ozsel Kilinc, Ismail Uysal | null | 1704.00158 | null | null |
Faster Subgradient Methods for Functions with H\"olderian Growth | math.OC cs.LG cs.NA math.NA | The purpose of this manuscript is to derive new convergence results for
several subgradient methods applied to minimizing nonsmooth convex functions
with H\"olderian growth. The growth condition is satisfied in many applications
and includes functions with quadratic growth and weakly sharp minima as special
cases. To this end there are three main contributions. First, for a constant
and sufficiently small stepsize, we show that the subgradient method achieves
linear convergence up to a certain region including the optimal set, with error
of the order of the stepsize. Second, if appropriate problem parameters are
known, we derive a decaying stepsize which obtains a much faster convergence
rate than is suggested by the classical $O(1/\sqrt{k})$ result for the
subgradient method. Third, we develop a novel "descending stairs" stepsize
which obtains this faster convergence rate and also obtains linear convergence
for the special case of weakly sharp functions. We also develop an adaptive
variant of the "descending stairs" stepsize which achieves the same convergence
rate without requiring an error bound constant which is difficult to estimate
in practice.
| Patrick R. Johnstone and Pierre Moulin | 10.1007/s10107-018-01361-0 | 1704.00196 | null | null |
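For context, the growth condition in question can be written as follows (standard definition, in our notation): a convex $f$ with minimum value $f^*$ and solution set $X^*$ satisfies H\"olderian growth with exponent $\theta \ge 1$ and constant $c > 0$ if

\[
f(x) - f^* \;\ge\; c \,\operatorname{dist}(x, X^*)^{\theta} \quad \text{for all } x,
\]

with $\theta = 2$ recovering quadratic growth and $\theta = 1$ recovering weakly sharp minima.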
Adversarial Connective-exploiting Networks for Implicit Discourse
Relation Classification | cs.CL cs.AI cs.LG stat.ML | Implicit discourse relation classification is of great challenge due to the
lack of connectives as strong linguistic cues, which motivates the use of
annotated implicit connectives to improve the recognition. We propose a feature
imitation framework in which an implicit relation network is driven to learn
from another neural network with access to connectives, and thus encouraged to
extract similarly salient features for accurate classification. We develop an
adversarial model to enable an adaptive imitation scheme through competition
between the implicit network and a rival feature discriminator. Our method
effectively transfers discriminability of connectives to the implicit features,
and achieves state-of-the-art performance on the PDTB benchmark.
| Lianhui Qin, Zhisong Zhang, Hai Zhao, Zhiting Hu, Eric P. Xing | null | 1704.00217 | null | null |
Online and Stable Learning of Analysis Operators | cs.LG math.NA | In this paper four iterative algorithms for learning analysis operators are
presented. They are built upon the same optimisation principle underlying both
Analysis K-SVD and Analysis SimCO. The Forward and Sequential Analysis Operator
Learning (FAOL and SAOL) algorithms are based on projected gradient descent
with optimally chosen step size. The Implicit AOL (IAOL) algorithm is inspired
by the implicit Euler scheme for solving ordinary differential equations and
does not require choosing a step size. The fourth algorithm, Singular Value
AOL (SVAOL), uses a similar strategy as Analysis K-SVD while avoiding its high
computational cost. All algorithms are proven to decrease or preserve the
target function in each step and a characterisation of their stationary points
is provided. Further, they are tested on synthetic and image data, compared to
Analysis SimCO and found to give better recovery rates and faster decay of the
objective function respectively. In a final denoising experiment the presented
algorithms are again shown to perform similar to or better than the
state-of-the-art algorithm ASimCO.
| Michael Sandbichler, Karin Schnass | null | 1704.00227 | null | null |
Aligned Image-Word Representations Improve Inductive Transfer Across
Vision-Language Tasks | cs.CV cs.AI cs.LG cs.NE stat.ML | An important goal of computer vision is to build systems that learn visual
representations over time that can be applied to many tasks. In this paper, we
investigate a vision-language embedding as a core representation and show that
it leads to better cross-task transfer than standard multi-task learning. In
particular, the task of visual recognition is aligned to the task of visual
question answering by forcing each to use the same word-region embeddings. We
show this leads to greater inductive transfer from recognition to VQA than
standard multitask learning. Visual recognition also improves, especially for
categories that have relatively few recognition training labels but appear
often in the VQA setting. Thus, our paper takes a small step towards creating
more general vision systems by showing the benefit of interpretable, flexible,
and trainable core representations.
| Tanmay Gupta, Kevin Shih, Saurabh Singh, and Derek Hoiem | null | 1704.00260 | null | null |
Understanding Concept Drift | cs.LG | Concept drift is a major issue that greatly affects the accuracy and
reliability of many real-world applications of machine learning. We argue that
to tackle concept drift it is important to develop the capacity to describe and
analyze it. We propose tools for this purpose, arguing for the importance of
quantitative descriptions of drift in marginal distributions. We present
quantitative drift analysis techniques along with methods for communicating
their results. We demonstrate their effectiveness by application to three
real-world learning tasks.
| Geoffrey I. Webb, Loong Kuan Lee, Fran\c{c}ois Petitjean, Bart
Goethals | null | 1704.00362 | null | null |
Provable Inductive Robust PCA via Iterative Hard Thresholding | cs.LG cs.IT math.IT stat.ML | The robust PCA problem, wherein, given an input data matrix that is the
superposition of a low-rank matrix and a sparse matrix, we aim to separate out
the low-rank and sparse components, is a well-studied problem in machine
learning. One natural question that arises is whether, as in the inductive
setting, we can hope to do better if features are provided as input as well.
Answering this in the affirmative, the main goal of this paper is to study the
robust PCA problem while incorporating feature information. In contrast to
previous works in which recovery guarantees are based on the convex relaxation
of the problem, we propose a simple iterative algorithm based on
hard-thresholding of appropriate residuals. Under weaker assumptions than
previous works, we prove the global convergence of our iterative procedure;
moreover, it admits a much faster convergence rate and lower computational
complexity per iteration. In practice, through systematic synthetic and real
data simulations, we confirm our theoretical findings regarding improvements
obtained by using feature information.
| U.N. Niranjan, Arun Rajkumar, Theja Tulabandhula | null | 1704.00367 | null | null |
Hidden Two-Stream Convolutional Networks for Action Recognition | cs.CV cs.LG cs.MM | Analyzing videos of human actions involves understanding the temporal
relationships among video frames. State-of-the-art action recognition
approaches rely on traditional optical flow estimation methods to pre-compute
motion information for CNNs. Such a two-stage approach is computationally
expensive, storage demanding, and not end-to-end trainable. In this paper, we
present a novel CNN architecture that implicitly captures motion information
between adjacent frames. We name our approach hidden two-stream CNNs because it
only takes raw video frames as input and directly predicts action classes
without explicitly computing optical flow. Our end-to-end approach is 10x
faster than its two-stage baseline. Experimental results on four challenging
action recognition datasets: UCF101, HMDB51, THUMOS14 and ActivityNet v1.2 show
that our approach significantly outperforms the previous best real-time
approaches.
| Yi Zhu, Zhenzhong Lan, Shawn Newsam, Alexander G. Hauptmann | null | 1704.00389 | null | null |
On Kernelized Multi-armed Bandits | cs.LG | We consider the stochastic bandit problem with a continuous set of arms, with
the expected reward function over the arms assumed to be fixed but unknown. We
provide two new Gaussian process-based algorithms for continuous bandit
optimization: Improved GP-UCB (IGP-UCB) and GP-Thompson sampling (GP-TS), and
derive corresponding regret bounds. Specifically, the bounds hold when the
expected reward function belongs to the reproducing kernel Hilbert space (RKHS)
that naturally corresponds to a Gaussian process kernel used as input by the
algorithms. Along the way, we derive a new self-normalized concentration
inequality for vector-valued martingales of arbitrary, possibly infinite,
dimension. Finally, experimental evaluation and comparisons to existing
algorithms on synthetic and real-world environments are carried out that
highlight the favorable gains of the proposed strategies in many cases.
| Sayak Ray Chowdhury and Aditya Gopalan | null | 1704.00445 | null | null |
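The IGP-UCB selection rule can be sketched as maximizing an upper confidence bound built from the GP posterior. The sketch below uses scikit-learn's GP regressor over a finite candidate grid, with the exploration coefficient beta left as a user-supplied constant rather than the theoretically prescribed sequence from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def gp_ucb_select(X_obs, y_obs, candidates: np.ndarray, beta: float = 2.0):
    """Pick the candidate arm maximizing mu(x) + beta * sigma(x).
    X_obs: (t, d) past arms; y_obs: (t,) rewards; candidates: (m, d) grid."""
    gp = GaussianProcessRegressor().fit(np.asarray(X_obs), np.asarray(y_obs))
    mu, sigma = gp.predict(candidates, return_std=True)
    return candidates[np.argmax(mu + beta * sigma)]
```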
Clustering in Hilbert simplex geometry | cs.LG cs.CV | Clustering categorical distributions in the finite-dimensional probability
simplex is a fundamental task met in many applications dealing with normalized
histograms. Traditionally, the differential-geometric structures of the
probability simplex have been used either by (i) setting the Riemannian metric
tensor to the Fisher information matrix of the categorical distributions, or
(ii) defining the dualistic information-geometric structure induced by a smooth
dissimilarity measure, the Kullback-Leibler divergence. In this work, we
introduce for clustering tasks a novel computationally-friendly framework for
modeling geometrically the probability simplex: The {\em Hilbert simplex
geometry}. In the Hilbert simplex geometry, the distance is the non-separable
Hilbert's metric distance which satisfies the property of information
monotonicity with distance level set functions described by polytope
boundaries. We show that both the Aitchison and Hilbert simplex distances are
norm distances on normalized logarithmic representations with respect to the
$\ell_2$ and variation norms, respectively. We discuss the pros and cons of
those different statistical modelings, and benchmark experimentally these
different kinds of geometries for center-based $k$-means and $k$-center
clustering. Furthermore, since a canonical Hilbert distance can be defined on
any bounded convex subset of the Euclidean space, we also consider Hilbert's
geometry of the elliptope of correlation matrices and study its clustering
performance compared to Frobenius and log-det divergences.
| Frank Nielsen and Ke Sun | 10.1007/978-3-030-02520-5_11 | 1704.00454 | null | null |
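On the open probability simplex, Hilbert's metric admits a particularly simple closed form in terms of coordinate-wise ratios, which is what makes the framework computationally friendly. A sketch of that closed form as we understand it, assuming strictly positive histograms:

```python
import numpy as np

def hilbert_simplex_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Hilbert metric between two points of the open probability simplex:
    log of the ratio between the largest and smallest coordinate ratios."""
    r = p / q  # requires all entries strictly positive
    return float(np.log(r.max() / r.min()))
```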
Are Key-Foreign Key Joins Safe to Avoid when Learning High-Capacity
Classifiers? | cs.DB cs.LG | Machine learning (ML) over relational data is a booming area of the database
industry and academia. While several projects aim to build scalable and fast ML
systems, little work has addressed the pains of sourcing data and features for
ML tasks. Real-world relational databases typically have many tables (often,
dozens) and data scientists often struggle to even obtain and join all possible
tables that provide features for ML. In this context, Kumar et al. showed
recently that key-foreign key dependencies (KFKDs) between tables often let us
avoid such joins without significantly affecting prediction accuracy--an idea
they called avoiding joins safely. While initially controversial, this idea has
since been used by multiple companies to reduce the burden of data sourcing for
ML. But their work applied only to linear classifiers. In this work, we verify
if their results hold for three popular complex classifiers: decision trees,
SVMs, and ANNs. We conduct an extensive experimental study using both
real-world datasets and simulations to analyze the effects of avoiding KFK
joins on such models. Our results show that these high-capacity classifiers are
surprisingly and counter-intuitively more robust to avoiding KFK joins compared
to linear classifiers, refuting an intuition from the prior work's analysis. We
explain this behavior intuitively and identify open questions at the
intersection of data management and ML theoretical research. All of our code
and datasets are available for download from
http://cseweb.ucsd.edu/~arunkk/hamlet.
| Vraj Shah, Arun Kumar, Xiaojin Zhu | null | 1704.00485 | null | null |
A New Measure of Conditional Dependence | stat.ML cs.LG | Measuring conditional dependencies among the variables of a network is of
great interest to many disciplines. This paper studies some shortcomings of
existing dependency measures, namely in detecting direct causal influences and
in their lack of ability to perform group selection to capture strong
dependencies, and accordingly introduces a new statistical dependency measure
to overcome them.
This measure is inspired by Dobrushin's coefficients and based on the fact that
there is no dependency between $X$ and $Y$ given another variable $Z$, if and
only if the conditional distribution of $Y$ given $X=x$ and $Z=z$ does not
change when $X$ takes another realization $x'$ while $Z$ takes the same
realization $z$. We show the advantages of this measure over the related
measures in the literature. Moreover, we establish the connection between our
measure and the integral probability metric (IPM) that helps to develop
estimators of the measure with lower complexity compared to other relevant
information theoretic based measures. Finally, we show the performance of this
measure through numerical simulations.
| Jalal Etesami, Kun Zhang, Negar Kiyavash | null | 1704.00607 | null | null |
Semi-Supervised Generation with Cluster-aware Generative Models | stat.ML cs.AI cs.LG | Deep generative models trained with large amounts of unlabelled data have
proven to be powerful within the domain of unsupervised learning. Many
real-life data sets contain a small number of labelled data points, which are
typically disregarded when training generative models. We propose the
Cluster-aware Generative Model, which uses unlabelled information to infer a
latent representation that models the natural clustering of the data, and
additional labelled data points to refine this clustering. The generative
performances of the model significantly improve when labelled information is
exploited, obtaining a log-likelihood of -79.38 nats on permutation invariant
MNIST, while also achieving competitive semi-supervised classification
accuracies. The model can also be trained fully unsupervised, and still improve
the log-likelihood performance with respect to related methods.
| Lars Maal{\o}e and Marco Fraccaro and Ole Winther | null | 1704.00637 | null | null |
Local nearest neighbour classification with applications to
semi-supervised learning | math.ST cs.CV cs.LG stat.ME stat.TH | We derive a new asymptotic expansion for the global excess risk of a
local-$k$-nearest neighbour classifier, where the choice of $k$ may depend upon
the test point. This expansion elucidates conditions under which the dominant
contribution to the excess risk comes from the decision boundary of the optimal
Bayes classifier, but we also show that if these conditions are not satisfied,
then the dominant contribution may arise from the tails of the marginal
distribution of the features. Moreover, we prove that, provided the
$d$-dimensional marginal distribution of the features has a finite $\rho$th
moment for some $\rho > 4$ (as well as other regularity conditions), a local
choice of $k$ can yield a rate of convergence of the excess risk of
$O(n^{-4/(d+4)})$, where $n$ is the sample size, whereas for the standard
$k$-nearest neighbour classifier, our theory would require $d \geq 5$ and $\rho
> 4d/(d-4)$ finite moments to achieve this rate. These results motivate a new
$k$-nearest neighbour classifier for semi-supervised learning problems, where
the unlabelled data are used to obtain an estimate of the marginal feature
density, and fewer neighbours are used for classification when this density
estimate is small. Our worst-case rates are complemented by a minimax lower
bound, which reveals that the local, semi-supervised $k$-nearest neighbour
classifier attains the minimax optimal rate over our classes for the excess
risk, up to a subpolynomial factor in $n$. These theoretical improvements over
the standard $k$-nearest neighbour classifier are also illustrated through a
simulation study.
| Timothy I. Cannings, Thomas B. Berrett and Richard J. Samworth | null | 1704.00642 | null | null |
Soft-to-Hard Vector Quantization for End-to-End Learning Compressible
Representations | cs.LG cs.CV | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both.
| Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli,
Radu Timofte, Luca Benini and Luc Van Gool | null | 1704.00648 | null | null |
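The soft relaxation of quantization can be sketched as a softmax over negative, scaled distances to a learned codebook, annealed toward the hard nearest-center assignment as the scale grows. Names and the annealing policy below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def soft_quantize(z: np.ndarray, centers: np.ndarray, sigma: float) -> np.ndarray:
    """Soft assignment of each entry of z (shape (N,)) to codebook centers
    (shape (L,)). As sigma grows during training, this approaches hard VQ."""
    d2 = (z[:, None] - centers[None, :]) ** 2          # squared distances
    logits = -sigma * d2
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                  # softmax weights
    return w @ centers                                 # soft-quantized values

def hard_quantize(z: np.ndarray, centers: np.ndarray) -> np.ndarray:
    return centers[np.argmin((z[:, None] - centers[None, :]) ** 2, axis=1)]
```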
No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified
Geometric Analysis | cs.LG math.OC stat.ML | In this paper we develop a new framework that captures the common landscape
underlying popular non-convex low-rank matrix problems, including matrix
sensing, matrix completion and robust PCA. In particular, we show for all the above
problems (including asymmetric cases): 1) all local minima are also globally
optimal; 2) no high-order saddle points exist. These results explain why
simple algorithms such as stochastic gradient descent converge globally and
efficiently optimize these non-convex objective functions in practice. Our
framework connects and simplifies the existing analyses on optimization
landscapes for matrix sensing and symmetric matrix completion. The framework
naturally leads to new results for asymmetric matrix completion and robust PCA.
| Rong Ge, Chi Jin, Yi Zheng | null | 1704.00708 | null | null |
Multi-Advisor Reinforcement Learning | cs.LG cs.AI stat.ML | We consider tackling a single-agent RL problem by distributing it to $n$
learners. These learners, called advisors, endeavour to solve the problem from
a different focus. Their advice, taking the form of action values, is then
communicated to an aggregator, which is in control of the system. We show that
the local planning method for the advisors is critical and that none of the
ones found in the literature is flawless: the egocentric planning overestimates
values of states where the other advisors disagree, and the agnostic planning
is inefficient around danger zones. We introduce a novel approach called
empathic and discuss its theoretical aspects. We empirically examine and
validate our theoretical findings on a fruit collection task.
| Romain Laroche and Mehdi Fatemi and Joshua Romoff and Harm van Seijen | null | 1704.00756 | null | null |
Geometric Insights into Support Vector Machine Behavior using the KKT
Conditions | stat.ML cs.LG | The support vector machine (SVM) is a powerful and widely used classification
algorithm. This paper uses the Karush-Kuhn-Tucker conditions to provide
rigorous mathematical proof for new insights into the behavior of SVM. These
insights provide perhaps unexpected relationships between SVM and two other
linear classifiers: the mean difference and the maximal data piling direction.
For example, we show that in many cases SVM can be viewed as a cropped version
of these classifiers. By carefully exploring these connections we show how SVM
tuning behavior is affected by characteristics including: balanced vs.
unbalanced classes, low vs. high dimension, separable vs. non-separable data.
These results provide further insights into tuning SVM via cross-validation by
explaining observed pathological behavior and motivating improved
cross-validation methodology. Finally, we also provide new results on the
geometry of complete data piling directions in high dimensional space.
| Iain Carmichael and J.S. Marron | null | 1704.00767 | null | null |
A comparative study of counterfactual estimators | stat.ML cs.LG | We provide a comparative study of several widely used off-policy estimators
(Empirical Average, Basic Importance Sampling and Normalized Importance
Sampling), detailing the different regimes where they are individually
suboptimal. We then exhibit properties optimal estimators should possess. In
the case where examples have been gathered using multiple policies, we show
that fused estimators dominate basic ones but can still be improved.
| Thomas Nedelec, Nicolas Le Roux and Vianney Perchet | null | 1704.00773 | null | null |
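For concreteness, the three estimators compared above can be written in a few lines. Given logged rewards r_i, logging-policy propensities mu_i, and target-policy probabilities pi_i of the logged actions (array names are ours):

```python
import numpy as np

def empirical_average(r):
    return np.mean(r)                  # ignores the policy mismatch entirely

def basic_importance_sampling(r, pi, mu):
    w = pi / mu
    return np.mean(w * r)              # unbiased, but high variance

def normalized_importance_sampling(r, pi, mu):
    w = pi / mu
    return np.sum(w * r) / np.sum(w)   # biased, usually lower variance
```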
Brief Notes on Hard Takeoff, Value Alignment, and Coherent Extrapolated
Volition | cs.AI cs.CY cs.LG | I make some basic observations about hard takeoff, value alignment, and
coherent extrapolated volition, concepts which have been central in analyses of
superintelligent AI systems.
| Gopal P. Sarma | null | 1704.00783 | null | null |
Online and Linear-Time Attention by Enforcing Monotonic Alignments | cs.LG cs.CL | Recurrent neural network models with an attention mechanism have proven to be
extremely effective on a wide variety of sequence-to-sequence problems.
However, the fact that soft attention mechanisms perform a pass over the entire
input sequence when producing each element in the output sequence precludes
their use in online settings and results in a quadratic time complexity. Based
on the insight that the alignment between input and output sequence elements is
monotonic in many problems of interest, we propose an end-to-end differentiable
method for learning monotonic alignments which, at test time, enables computing
attention online and in linear time. We validate our approach on sentence
summarization, machine translation, and online speech recognition problems and
achieve results competitive with existing sequence-to-sequence models.
| Colin Raffel, Minh-Thang Luong, Peter J. Liu, Ron J. Weiss, Douglas
Eck | null | 1704.00784 | null | null |
Time Series Cluster Kernel for Learning Similarities between
Multivariate Time Series with Missing Data | stat.ML cs.LG | Similarity-based approaches represent a promising direction for time series
analysis. However, many such methods rely on parameter tuning, and some have
shortcomings if the time series are multivariate (MTS), due to dependencies
between attributes, or the time series contain missing data. In this paper, we
address these challenges within the powerful context of kernel methods by
proposing the robust \emph{time series cluster kernel} (TCK). The approach
taken leverages the missing data handling properties of Gaussian mixture models
(GMM) augmented with informative prior distributions. An ensemble learning
approach is exploited to ensure robustness to parameters by combining the
clustering results of many GMM to form the final kernel.
We evaluate the TCK on synthetic and real data and compare to other
state-of-the-art techniques. The experimental results demonstrate that the TCK
is robust to parameter choices, provides competitive results for MTS without
missing data and outstanding results for missing data.
| Karl {\O}yvind Mikalsen, Filippo Maria Bianchi, Cristina Soguero-Ruiz
and Robert Jenssen | null | 1704.00794 | null | null |
On the Properties of the Softmax Function with Application in Game
Theory and Reinforcement Learning | math.OC cs.LG | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning.
| Bolin Gao, Lacra Pavel | null | 1704.00805 | null | null |
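The central identity can be stated compactly (in our notation): with inverse temperature $\eta > 0$ and $\mathrm{lse}_\eta(x) = \frac{1}{\eta}\log\sum_j e^{\eta x_j}$,

\[
\sigma_\eta(x)_i \;=\; \frac{e^{\eta x_i}}{\sum_j e^{\eta x_j}} \;=\; \big(\nabla \mathrm{lse}_\eta(x)\big)_i,
\]

so the softmax is the gradient map of a convex function and hence monotone, with Lipschitz and co-coercivity constants governed by $\eta$.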
Polynomial Time and Sample Complexity for Non-Gaussian Component
Analysis: Spectral Methods | cs.LG math.PR stat.ML | The problem of Non-Gaussian Component Analysis (NGCA) is about finding a
maximal low-dimensional subspace $E$ in $\mathbb{R}^n$ so that data points
projected onto $E$ follow a non-gaussian distribution. Although this is an
appropriate model for some real world data analysis problems, there has been
little progress on this problem over the last decade.
In this paper, we attempt to address this state of affairs in two ways.
First, we give a new characterization of standard gaussian distributions in
high dimensions, which leads to effective tests for non-gaussianness. Second, we
propose a simple algorithm, \emph{Reweighted PCA}, as a method for solving the
NGCA problem. We prove that for a general unknown non-gaussian distribution,
this algorithm recovers at least one direction in $E$, with sample and time
complexity depending polynomially on the dimension of the ambient space. We
conjecture that the algorithm actually recovers the entire $E$.
| Yan Shuo Tan, Roman Vershynin | null | 1704.01041 | null | null |
Homotopy Parametric Simplex Method for Sparse Learning | cs.LG math.OC stat.ML | High dimensional sparse learning has imposed a great computational challenge
to large scale data analysis. In this paper, we are interested in a broad class
of sparse learning approaches formulated as linear programs parametrized by a
{\em regularization factor}, and solve them by the parametric simplex method
(PSM). Our parametric simplex method offers significant advantages over other
competing methods: (1) PSM naturally obtains the complete solution path for all
values of the regularization parameter; (2) PSM provides a high precision dual
certificate stopping criterion; (3) PSM yields sparse solutions through very
few iterations, and the solution sparsity significantly reduces the
computational cost per iteration. Particularly, we demonstrate the superiority
of PSM over various sparse learning approaches, including Dantzig selector for
sparse linear regression, LAD-Lasso for sparse robust linear regression, CLIME
for sparse precision matrix estimation, sparse differential network estimation,
and sparse Linear Programming Discriminant (LPD) analysis. We then provide
sufficient conditions under which PSM always outputs sparse solutions such that
its computational performance can be significantly boosted. Thorough numerical
experiments are provided to demonstrate the outstanding performance of the PSM
method.
| Haotian Pang, Robert Vanderbei, Han Liu, Tuo Zhao | null | 1704.01079 | null | null |
Probabilistic Search for Structured Data via Probabilistic Programming
and Nonparametric Bayes | cs.AI cs.DB cs.LG stat.ML | Databases are widespread, yet extracting relevant data can be difficult.
Without substantial domain knowledge, multivariate search queries often return
sparse or uninformative results. This paper introduces an approach for
searching structured data based on probabilistic programming and nonparametric
Bayes. Users specify queries in a probabilistic language that combines standard
SQL database search operators with an information theoretic ranking function
called predictive relevance. Predictive relevance can be calculated by a fast
sparse matrix algorithm based on posterior samples from CrossCat, a
nonparametric Bayesian model for high-dimensional, heterogeneously-typed data
tables. The result is a flexible search technique that applies to a broad class
of information retrieval problems, which we integrate into BayesDB, a
probabilistic programming platform for probabilistic data analysis. This paper
demonstrates applications to databases of US colleges, global macroeconomic
indicators of public health, and classic cars. We found that human evaluators
often prefer the results from probabilistic search to results from a standard
baseline.
| Feras Saad, Leonardo Casarsa, Vikash Mansinghka | null | 1704.01087 | null | null |
Satellite Image-based Localization via Learned Embeddings | cs.RO cs.CV cs.LG | We propose a vision-based method that localizes a ground vehicle using
publicly available satellite imagery as the only prior knowledge of the
environment. Our approach takes as input a sequence of ground-level images
acquired by the vehicle as it navigates, and outputs an estimate of the
vehicle's pose relative to a georeferenced satellite image. We overcome the
significant viewpoint and appearance variations between the images through a
neural multi-view model that learns location-discriminative embeddings in which
ground-level images are matched with their corresponding satellite view of the
scene. We use this learned function as an observation model in a filtering
framework to maintain a distribution over the vehicle's pose. We evaluate our
method on different benchmark datasets and demonstrate its ability to localize
ground-level images in environments that are novel relative to training, despite the
challenges of significant viewpoint and appearance variations.
| Dong-Ki Kim and Matthew R. Walter | null | 1704.01133 | null | null |
DyVEDeep: Dynamic Variable Effort Deep Neural Networks | cs.NE cs.CV cs.LG | Deep Neural Networks (DNNs) have advanced the state-of-the-art in a variety
of machine learning tasks and are deployed in increasing numbers of products
and services. However, the computational requirements of training and
evaluating large-scale DNNs are growing at a much faster pace than the
capabilities of the underlying hardware platforms that they are executed upon.
In this work, we propose Dynamic Variable Effort Deep Neural Networks
(DyVEDeep) to reduce the computational requirements of DNNs during inference.
Previous efforts propose specialized hardware implementations for DNNs,
statically prune the network, or compress the weights. Complementary to these
approaches, DyVEDeep is a dynamic approach that exploits the heterogeneity in
the inputs to DNNs to improve their compute efficiency with comparable
classification accuracy. DyVEDeep equips DNNs with dynamic effort mechanisms
that, in the course of processing an input, identify how critical a group of
computations are to classify the input. DyVEDeep dynamically focuses its
compute effort only on the critical computations, while skipping or
approximating the rest. We propose 3 effort knobs that operate at different
levels of granularity, viz. the neuron, feature and layer levels. We build DyVEDeep
versions for 5 popular image recognition benchmarks - one for CIFAR-10 and four
for ImageNet (AlexNet, OverFeat and VGG-16, weight-compressed AlexNet). Across
all benchmarks, DyVEDeep achieves 2.1x-2.6x reduction in the number of scalar
operations, which translates to 1.8x-2.3x performance improvement over a
Caffe-based implementation, with < 0.5% loss in accuracy.
| Sanjay Ganapathy, Swagath Venkataramani, Balaraman Ravindran, Anand
Raghunathan | null | 1704.01137 | null | null |
Feature Squeezing: Detecting Adversarial Examples in Deep Neural
Networks | cs.CV cs.CR cs.LG | Although deep neural networks (DNNs) have achieved great success in many
tasks, they can often be fooled by \emph{adversarial examples} that are
generated by adding small but purposeful distortions to natural examples.
Previous studies to defend against adversarial examples mostly focused on
refining the DNN models, but have either shown limited success or required
expensive computation. We propose a new strategy, \emph{feature squeezing},
that can be used to harden DNN models by detecting adversarial examples.
Feature squeezing reduces the search space available to an adversary by
coalescing samples that correspond to many different feature vectors in the
original space into a single sample. By comparing a DNN model's prediction on
the original input with that on squeezed inputs, feature squeezing detects
adversarial examples with high accuracy and few false positives. This paper
explores two feature squeezing methods: reducing the color bit depth of each
pixel and spatial smoothing. These simple strategies are inexpensive and
complementary to other defenses, and can be combined in a joint detection
framework to achieve high detection rates against state-of-the-art attacks.
| Weilin Xu, David Evans, Yanjun Qi | 10.14722/ndss.2018.23198 | 1704.01155 | null | null |
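The detection recipe can be sketched as: squeeze the input (e.g., reduce bit depth, apply spatial smoothing), run the model on the original and squeezed versions, and flag the input as adversarial when the prediction vectors disagree by more than a threshold. The threshold value and the specific squeezer settings below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def reduce_bit_depth(x: np.ndarray, bits: int = 4) -> np.ndarray:
    """Quantize pixel values in [0, 1] to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def is_adversarial(predict_fn, x: np.ndarray, threshold: float = 1.0) -> bool:
    """predict_fn maps an image to a probability vector. Flag the input when
    the max L1 gap between original and squeezed predictions exceeds it."""
    p0 = predict_fn(x)
    gaps = [np.abs(p0 - predict_fn(reduce_bit_depth(x))).sum(),
            np.abs(p0 - predict_fn(median_filter(x, size=2))).sum()]
    return max(gaps) > threshold
```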
On the Unreported-Profile-is-Negative Assumption for Predictive
Cheminformatics | cs.LG physics.chem-ph stat.ML | In cheminformatics, compound-target binding profiles have been a main source
of data for research. For data repositories that only provide positive
profiles, a popular assumption is that unreported profiles are all negative. In
this paper, we caution the audience not to take this assumption for granted, and
present empirical evidence of its ineffectiveness from a machine learning
perspective. Our examination is based on a setting where binding profiles are
used as features to train predictive models; we show (1) prediction performance
degrades when the assumption fails and (2) explicit recovery of unreported
profiles improves prediction performance. In particular, we propose a framework
that jointly recovers profiles and learns predictive model, and show it
achieves further performance improvement. The presented study not only suggests
applying matrix recovery methods to recover unreported profiles, but also
initiates a new missing feature problem, which we call Learning with Positive
and Unknown Features.
| Chao Lan, Sai Nivedita Chandrasekaran, Jun Huan | null | 1704.01184 | null | null |
Neural Message Passing for Quantum Chemistry | cs.LG | Supervised learning on molecules has incredible potential to be useful in
chemistry, drug discovery, and materials science. Luckily, several promising
and closely related neural network models invariant to molecular symmetries
have already been described in the literature. These models learn a message
passing algorithm and aggregation procedure to compute a function of their
entire input graph. At this point, the next step is to find a particularly
effective variant of this general approach and apply it to chemical prediction
benchmarks until we either solve them or reach the limits of the approach. In
this paper, we reformulate existing models into a single common framework we
call Message Passing Neural Networks (MPNNs) and explore additional novel
variations within this framework. Using MPNNs we demonstrate state of the art
results on an important molecular property prediction benchmark; these results
are strong enough that we believe future work should focus on datasets with
larger molecules or more accurate ground truth labels.
| Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals,
George E. Dahl | null | 1704.01212 | null | null |
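The MPNN abstraction described above boils down to two learned functions applied for T rounds, followed by a readout. A dependency-free sketch with sum aggregation, where the message, update and readout maps M, U, R are user-supplied placeholders:

```python
import numpy as np

def mpnn_forward(h: np.ndarray, edges, e_feat, M, U, R, T: int = 3):
    """h: (n_nodes, d) initial node states; edges: list of (v, w) pairs with
    edge features e_feat[(v, w)]. M, U, R: message, update and readout maps."""
    for _ in range(T):
        m = np.zeros_like(h)
        for (v, w) in edges:
            m[v] += M(h[v], h[w], e_feat[(v, w)])  # aggregate incoming messages
        h = np.stack([U(h[v], m[v]) for v in range(len(h))])  # node update
    return R(h)  # graph-level readout, e.g. a sum of per-node embeddings
```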
Linear Additive Markov Processes | cs.LG stat.ML | We introduce LAMP: the Linear Additive Markov Process. Transitions in LAMP
may be influenced by states visited in the distant history of the process, but
unlike higher-order Markov processes, LAMP retains an efficient
parametrization. LAMP also allows the specific dependence on history to be
learned efficiently from data. We characterize some theoretical properties of
LAMP, including its steady-state and mixing time. We then give an algorithm
based on alternating minimization to learn LAMP models from data. Finally, we
perform a series of real-world experiments to show that LAMP is more powerful
than first-order Markov processes, and even holds its own against deep
sequential models (LSTMs) with a negligible increase in parameter complexity.
| Ravi Kumar, Maithra Raghu, Tamas Sarlos, Andrew Tomkins | null | 1704.01255 | null | null |
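As we understand the model, a LAMP keeps a single first-order transition matrix M plus a weight vector over lags: the next state is drawn by first sampling a lag i with probability w_i and then transitioning from the state visited i steps ago according to M. A generative sketch under that reading (names and the renormalization for short histories are our assumptions):

```python
import numpy as np

def lamp_step(history, M: np.ndarray, w: np.ndarray, rng) -> int:
    """history: list of past states, most recent last; M: (S, S) row-stochastic
    transition matrix; w: lag weights, index 0 = most recent state."""
    k = min(len(history), len(w))
    probs = w[:k] / w[:k].sum()              # renormalize over available lags
    lag = rng.choice(k, p=probs)
    src = history[-1 - lag]                  # state visited `lag` steps back
    return int(rng.choice(M.shape[1], p=M[src]))
```

Usage: with rng = np.random.default_rng(), repeatedly appending lamp_step(...) to the history simulates the chain.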
Geometry of Factored Nuclear Norm Regularization | cs.NA cs.IT cs.LG math.IT math.OC | This work investigates the geometry of a nonconvex reformulation of
minimizing a general convex loss function $f(X)$ regularized by the matrix
nuclear norm $\|X\|_*$. Nuclear-norm regularized matrix inverse problems are at
the heart of many applications in machine learning, signal processing, and
control. The statistical performance of nuclear norm regularization has been
studied extensively in the literature using convex analysis techniques. Despite its
optimal performance, the resulting optimization has high computational
complexity when solved using standard or even tailored fast convex solvers. To
develop faster and more scalable algorithms, we follow the proposal of
Burer-Monteiro to factor the matrix variable $X$ into the product of two
smaller rectangular matrices $X=UV^T$ and also replace the nuclear norm
$\|X\|_*$ with $(\|U\|_F^2+\|V\|_F^2)/2$. In spite of the nonconvexity of the
factored formulation, we prove that when the convex loss function $f(X)$ is
$(2r,4r)$-restricted well-conditioned, each critical point of the factored
problem either corresponds to the optimal solution $X^\star$ of the original
convex optimization or is a strict saddle point where the Hessian matrix has a
strictly negative eigenvalue. Such a geometric structure of the factored
formulation allows many local search algorithms to converge to the global
optimum with random initializations.
| Qiuwei Li, Zhihui Zhu and Gongguo Tang | null | 1704.01265 | null | null |
Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders | cs.LG cs.AI cs.SD | Generative models in vision have seen rapid progress due to algorithmic
improvements and the availability of high-quality image datasets. In this
paper, we offer contributions in both these areas to enable similar progress in
audio modeling. First, we detail a powerful new WaveNet-style autoencoder model
that conditions an autoregressive decoder on temporal codes learned from the
raw audio waveform. Second, we introduce NSynth, a large-scale and high-quality
dataset of musical notes that is an order of magnitude larger than comparable
public datasets. Using NSynth, we demonstrate improved qualitative and
quantitative performance of the WaveNet autoencoder over a well-tuned spectral
autoencoder baseline. Finally, we show that the model learns a manifold of
embeddings that allows for morphing between instruments, meaningfully
interpolating in timbre to create new types of sounds that are realistic and
expressive.
| Jesse Engel, Cinjon Resnick, Adam Roberts, Sander Dieleman, Douglas
Eck, Karen Simonyan, Mohammad Norouzi | null | 1704.01279 | null | null |
Revisiting the problem of audio-based hit song prediction using
convolutional neural networks | cs.SD cs.LG stat.ML | Being able to predict whether a song can be a hit has important
applications in the music industry. Although it is true that the popularity of
a song can be greatly affected by external factors such as social and
commercial influences, to what degree audio features computed from musical
signals (which we regard as internal factors) can predict song popularity is an
interesting research question on its own. Motivated by the recent success of
deep learning techniques, we attempt to extend previous work on hit song
prediction by jointly learning the audio features and prediction models using
deep learning. Specifically, we experiment with a convolutional neural
network model that takes the primitive mel-spectrogram as the input for feature
learning, a more advanced JYnet model that uses an external song dataset for
supervised pre-training and auto-tagging, and the combination of these two
models. We also consider the inception model to characterize audio
information at different scales. Our experiments suggest that deep structures are
indeed more accurate than shallow structures in predicting the popularity of
either Chinese or Western Pop songs in Taiwan. We also use the tags predicted
by JYnet to gain insights into the result of different models.
| Li-Chia Yang, Szu-Yu Chou, Jen-Yu Liu, Yi-Hsuan Yang, Yi-An Chen | null | 1704.01280 | null | null |
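As a hedged illustration of the kind of model the abstract describes, the following PyTorch sketch maps a mel-spectrogram to a scalar popularity score; the architecture and the name `HitNet` are illustrative assumptions, not the paper's exact network.

```python
# Hedged PyTorch sketch of a CNN that regresses a popularity score from a
# mel-spectrogram; layer sizes are illustrative, not the paper's network.
import torch
import torch.nn as nn

class HitNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),     # pool over frequency and time
        )
        self.head = nn.Linear(32, 1)     # scalar popularity score

    def forward(self, x):                # x: (batch, 1, n_mels, frames)
        return self.head(self.features(x).flatten(1))

mel = torch.randn(4, 1, 128, 430)        # four fake mel-spectrogram clips
print(HitNet()(mel).shape)               # torch.Size([4, 1])
```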
On Generalization and Regularization in Deep Learning | stat.ML cs.LG math.ST stat.TH | Why do large neural networks generalize so well on complex tasks such as image
classification or speech recognition? What exactly is the role of regularization
for them? These are arguably among the most important open questions in machine
learning today. In a recent and thought-provoking paper [C. Zhang et al.],
several authors performed a number of numerical experiments that hint at the
need for novel theoretical concepts to account for this phenomenon. The paper
stirred quite a lot of excitement within the machine learning community, but at
the same time it created some confusion, as discussions on OpenReview.net
testify. The aim of this pedagogical paper is to make this debate accessible
to a wider audience of data scientists without advanced theoretical knowledge
in statistical learning. The focus here is on explicit mathematical definitions
and on a discussion of relevant concepts, not on proofs for which we provide
references.
| Pirmin Lemberger | null | 1704.01312 | null | null |
Not All Pixels Are Equal: Difficulty-aware Semantic Segmentation via
Deep Layer Cascade | cs.CV cs.LG | We propose a novel deep layer cascade (LC) method to improve the accuracy and
speed of semantic segmentation. Unlike the conventional model cascade (MC) that
is composed of multiple independent models, LC treats a single deep model as a
cascade of several sub-models. Earlier sub-models are trained to handle easy
and confident regions, and they progressively feed-forward harder regions to
the next sub-model for processing. Convolutions are only calculated on these
regions to reduce computations. The proposed method possesses several
advantages. First, LC classifies most of the easy regions in the shallow stage
and lets deeper stages focus on a few hard regions. Such an adaptive and
'difficulty-aware' learning improves segmentation performance. Second, LC
accelerates both training and testing of deep networks thanks to early decisions
in the shallow stage. Third, in comparison to MC, LC is an end-to-end trainable
framework, allowing joint learning of all sub-models. We evaluate our method on
PASCAL VOC and Cityscapes datasets, achieving state-of-the-art performance and
fast speed.
| Xiaoxiao Li, Ziwei Liu, Ping Luo, Chen Change Loy, Xiaoou Tang | null | 1704.01344 | null | null |
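A minimal NumPy sketch of the per-pixel gating idea behind LC, with random stand-in "sub-models" rather than trained networks; the confidence threshold `tau` and the two-stage setup are assumptions for illustration only.

```python
# Sketch of difficulty-aware gating: stage 1 keeps pixels it is confident
# about; low-confidence pixels are forwarded to stage 2. Stand-in models.
import numpy as np

rng = np.random.default_rng(0)
H, W, C, tau = 8, 8, 3, 0.9

def stage(sharpness):
    """Stand-in sub-model: per-pixel class probabilities; a higher
    `sharpness` mimics a deeper, more confident sub-model."""
    logits = rng.standard_normal((H, W, C)) * sharpness
    e = np.exp(logits - logits.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

probs1 = stage(3.0)
labels = probs1.argmax(-1)
hard = probs1.max(-1) < tau             # low-confidence pixels are forwarded
probs2 = stage(6.0)                     # in the real model, convolutions
labels[hard] = probs2.argmax(-1)[hard]  # would run only on the hard regions
print(f"stage 1 resolved {(~hard).mean():.0%} of pixels, "
      f"stage 2 handled {hard.mean():.0%}")
```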
Embodied Artificial Intelligence through Distributed Adaptive Control:
An Integrated Framework | cs.AI cs.LG cs.MA | In this paper, we argue that the future of Artificial Intelligence research
resides in two keywords: integration and embodiment. We support this claim by
analyzing the recent advances of the field. Regarding integration, we note that
the most impactful recent contributions have been made possible through the
integration of recent Machine Learning methods (based in particular on Deep
Learning and Recurrent Neural Networks) with more traditional ones (e.g.
Monte-Carlo tree search, goal babbling exploration or addressable memory
systems). Regarding embodiment, we note that the traditional benchmark tasks
(e.g. visual classification or board games) are becoming obsolete as
state-of-the-art learning algorithms approach or even surpass human performance
in most of them, having recently encouraged the development of first-person 3D
game platforms embedding realistic physics. Building upon this analysis, we
first propose an embodied cognitive architecture integrating heterogeneous
sub-fields of Artificial Intelligence into a unified framework. We demonstrate
the utility of our approach by showing how major contributions of the field can
be expressed within the proposed framework. We then claim that benchmarking
environments need to reproduce ecologically-valid conditions for bootstrapping
the acquisition of increasingly complex cognitive skills through the concept of
a cognitive arms race between embodied agents.
| Cl\'ement Moulin-Frier, Jordi-Ysard Puigb\`o, Xerxes D. Arsiwalla,
Mart\`i Sanchez-Fibla, Paul F. M. J. Verschure | null | 1704.01407 | null | null |
Multi-Label Learning with Global and Local Label Correlation | cs.LG cs.AI | It is well-known that exploiting label correlations is important to
multi-label learning. Existing approaches either assume that the label
correlations are global and shared by all instances, or that the label
correlations are local and shared only by a data subset. In fact, in
real-world applications both cases may occur: some label correlations are
globally applicable while others are shared only within a local group of
instances. Moreover, it is common that only partial labels are observed, which
makes the exploitation of the label correlations much more difficult. That is,
it is hard to estimate the label correlations when many labels are absent. In
this paper, we propose a new multi-label approach GLOCAL dealing with both the
full-label and the missing-label cases, exploiting global and local label
correlations simultaneously, through learning a latent label representation and
optimizing label manifolds. The extensive experimental studies validate the
effectiveness of our approach on both full-label and missing-label data.
| Yue Zhu and James T. Kwok and Zhi-Hua Zhou | null | 1704.01415 | null | null |
The Many Faces of Link Fraud | cs.SI cs.LG | Most past work on social network link fraud detection tries to separate
genuine users from fraudsters, implicitly assuming that there is only one type
of fraudulent behavior. But is this assumption true? And, in either case, what
are the characteristics of such fraudulent behaviors? In this work, we set up
honeypots ("dummy" social network accounts), and buy fake followers (after
careful IRB approval). We report the signs of such behaviors including oddities
in local network connectivity, account attributes, and similarities and
differences across fraud providers. Most valuably, we discover and characterize
several types of fraud behaviors. We discuss how to leverage our insights in
practice by engineering strongly performing entropy-based features and
demonstrating high classification accuracy. Our contributions are (a)
instrumentation: we detail our experimental setup and carefully engineered data
collection process to scrape Twitter data while respecting API rate-limits, (b)
observations on fraud multimodality: we analyze our honeypot fraudster
ecosystem and give surprising insights into the multifaceted behaviors of these
fraudster types, and (c) features: we propose novel features that give strong
(>0.95 precision/recall) discriminative power on ground-truth Twitter data.
| Neil Shah, Hemank Lamba, Alex Beutel, Christos Faloutsos | null | 1704.01420 | null | null |
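As a hedged sketch of an entropy-based feature of the kind the abstract mentions (the paper's exact attributes and features may differ), the following computes Shannon entropy over a hypothetical follower attribute:

```python
# Sketch of an entropy-based feature: Shannon entropy of the empirical
# distribution of a follower attribute. The attribute here is hypothetical.
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of the empirical distribution of `values`."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical example: hour of day at which an account's followers were
# created. Organic followers tend to spread out (high entropy); followers
# bought in bulk are often created in a narrow window (low entropy).
organic_hours = [h % 24 for h in range(0, 240, 7)]
bought_hours = [3] * 30 + [4] * 5
print(f"organic: {entropy(organic_hours):.2f} bits, "
      f"bought: {entropy(bought_hours):.2f} bits")
```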
AMIDST: a Java Toolbox for Scalable Probabilistic Machine Learning | cs.LG stat.ML | The AMIDST Toolbox is a software for scalable probabilistic machine learning
with a special focus on (massive) streaming data. The toolbox supports a
flexible modeling language based on probabilistic graphical models with latent
variables and temporal dependencies. The specified models can be learnt from
large data sets using parallel or distributed implementations of Bayesian
learning algorithms for either streaming or batch data. These algorithms are
based on a flexible variational message passing scheme, which supports discrete
and continuous variables from a wide range of probability distributions.
AMIDST also leverages existing functionality and algorithms by interfacing to
software tools such as Flink, Spark, MOA, Weka, R and HUGIN. AMIDST is an open
source toolbox written in Java and available at http://www.amidsttoolbox.com
under the Apache Software License version 2.0.
| Andr\'es R. Masegosa, Ana M. Mart\'inez, Dar\'io Ramos-L\'opez, Rafael
Caba\~nas, Antonio Salmer\'on, Thomas D. Nielsen, Helge Langseth, Anders L.
Madsen | 10.1016/j.knosys.2018.09.019 | 1704.01427 | null | null |
Learning to Generate Reviews and Discovering Sentiment | cs.LG cs.CL cs.NE | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment.
| Alec Radford, Rafal Jozefowicz, Ilya Sutskever | null | 1704.01444 | null | null |
Comparison Based Nearest Neighbor Search | stat.ML cs.DS cs.LG | We consider machine learning in a comparison-based setting where we are given
a set of points in a metric space, but we have no access to the actual
distances between the points. Instead, we can only ask an oracle whether the
distance between two points $i$ and $j$ is smaller than the distance between
the points $i$ and $k$. We are concerned with data structures and algorithms to
find nearest neighbors based on such comparisons. We focus on a simple yet
effective algorithm that recursively splits the space by first selecting two
random pivot points and then assigning all other points to the closer of the
two (comparison tree). We prove that if the metric space satisfies certain
expansion conditions, then with high probability the height of the comparison
tree is logarithmic in the number of points, leading to efficient search
performance. We also provide an upper bound for the failure probability to
return the true nearest neighbor. Experiments show that the comparison tree is
competitive with algorithms that have access to the actual distance values, and
needs fewer triplet comparisons than its competitors.
| Siavash Haghiri, Debarghya Ghoshdastidar and Ulrike von Luxburg | null | 1704.01460 | null | null |
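A minimal Python sketch of the comparison-tree construction described above; here actual distances stand in for the triplet-comparison oracle, and `leaf_size` and the helper names are our own illustrative choices.

```python
# Sketch of a comparison tree: two random pivots split the point set, each
# point going to the closer pivot; `dist` plays the role of the oracle.
import random

def build_comparison_tree(points, dist, leaf_size=4, rng=random.Random(0)):
    """Recursively split `points` by assigning each to the closer of two
    randomly chosen pivots."""
    if len(points) <= leaf_size:
        return {"leaf": points}
    p1, p2 = rng.sample(points, 2)
    left = [x for x in points if dist(x, p1) <= dist(x, p2)]
    right = [x for x in points if dist(x, p1) > dist(x, p2)]
    if not left or not right:          # degenerate split; stop here
        return {"leaf": points}
    return {"pivots": (p1, p2),
            "left": build_comparison_tree(left, dist, leaf_size, rng),
            "right": build_comparison_tree(right, dist, leaf_size, rng)}

def query(tree, q, dist):
    """Descend to a leaf; its points are the nearest-neighbor candidates."""
    while "leaf" not in tree:
        p1, p2 = tree["pivots"]
        tree = tree["left"] if dist(q, p1) <= dist(q, p2) else tree["right"]
    return tree["leaf"]

pts = [(random.random(), random.random()) for _ in range(200)]
d = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
tree = build_comparison_tree(pts, d)
print(query(tree, (0.5, 0.5), d)[:3])
```

Note that only the relative order of the two distances is ever used, which is why a comparison oracle suffices in place of actual distance values.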
Automatic Breast Ultrasound Image Segmentation: A Survey | cs.CV cs.LG | Breast cancer is one of the leading causes of cancer death among women
worldwide. In clinical routine, automatic breast ultrasound (BUS) image
segmentation is very challenging and essential for cancer diagnosis and
treatment planning. Many BUS segmentation approaches have been studied in the
last two decades, and have been proved to be effective on private datasets.
Currently, the advancement of BUS image segmentation seems to have reached a
bottleneck: improving performance is increasingly challenging, and only a few
new approaches have been published in the last several years. It is time to
review previous approaches comprehensively and to investigate future
directions. In this paper, we study the basic ideas,
theories, pros and cons of the approaches, group them into categories, and
extensively review each category in depth by discussing the principles,
application issues, and advantages/disadvantages.
| Min Xian, Yingtao Zhang, H.D. Cheng, Fei Xu, Boyu Zhang, Jianrui Ding | null | 1704.01472 | null | null |