title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Learning with Changing Features | cs.LG stat.CO stat.ML | In this paper we study the setting where features are added or change
interpretation over time, which has applications in multiple domains such as
retail, manufacturing, and finance. In particular, we propose an approach to
provably determine the time instant from which the new/changed features start
becoming relevant with respect to an output variable in an agnostic
(supervised) learning setting. We also suggest an efficient version of our
approach which has the same asymptotic performance. Moreover, our theory also
applies when we have more than one such change point. Independent post-analysis of a change point identified by our method for a large retailer revealed that it corresponded in time with certain unflattering news stories about a brand that resulted in a change in customer behavior. We also applied our method to
data from an advanced manufacturing plant identifying the time instant from
which downstream features became relevant. To the best of our knowledge this is
the first work that formally studies change point detection in a distribution
independent agnostic setting, where the change point is based on the changing
relationship between input and output.
| Amit Dhurandhar, Steve Hanneke and Liu Yang | null | 1705.00219 | null | null |
Generalization Guarantees for Multi-item Profit Maximization: Pricing,
Auctions, and Randomized Mechanisms | cs.LG cs.GT | We study multi-item profit maximization when there is an underlying
distribution over buyers' values. In practice, a full description of the
distribution is typically unavailable, so we study the setting where the
mechanism designer only has samples from the distribution. If the designer uses
the samples to optimize over a complex mechanism class -- such as the set of
all multi-item, multi-buyer mechanisms -- a mechanism may have high average
profit over the samples but low expected profit. This raises the central
question of this paper: how many samples are sufficient to ensure that a
mechanism's average profit is close to its expected profit? To answer this
question, we uncover structure shared by many pricing, auction, and lottery
mechanisms: for any set of buyers' values, profit is piecewise linear in the
mechanism's parameters. Using this structure, we prove new bounds for mechanism
classes not yet studied in the sample-based mechanism design literature and
match or improve over the best-known guarantees for many classes.
| Maria-Florina Balcan, Tuomas Sandholm, and Ellen Vitercik | null | 1705.00243 | null | null |
Multi-dueling Bandits with Dependent Arms | cs.LG | The dueling bandits problem is an online learning framework for learning from
pairwise preference feedback, and is particularly well-suited for modeling
settings that elicit subjective or implicit human feedback. In this paper, we
study the problem of multi-dueling bandits with dependent arms, which extends
the original dueling bandits setting by simultaneously dueling multiple arms as
well as modeling dependencies between arms. These extensions capture key
characteristics found in many real-world applications, and allow for the
opportunity to develop significantly more efficient algorithms than were
possible in the original setting. We propose the SelfSparring algorithm, which
reduces the multi-dueling bandits problem to a conventional bandit setting that
can be solved using a stochastic bandit algorithm such as Thompson Sampling,
and can naturally model dependencies using a Gaussian process prior. We present
a no-regret analysis for the multi-dueling setting, and demonstrate the
effectiveness of our algorithm empirically on a wide range of simulation
settings.
| Yanan Sui, Vincent Zhuang, Joel W. Burdick, Yisong Yue | null | 1705.00253 | null | null |
Tree-Structured Neural Machine for Linguistics-Aware Sentence Generation | cs.AI cs.CL cs.LG | Different from other sequential data, sentences in natural language are
structured by linguistic grammars. Previous generative conversational models
with chain-structured decoders ignore this structure in human language and might
generate plausible responses with less satisfactory relevance and fluency. In
this study, we aim to incorporate the results from linguistic analysis into the
process of sentence generation for high-quality conversation generation.
Specifically, we use a dependency parser to transform each response sentence
into a dependency tree and construct a training corpus of sentence-tree pairs.
A tree-structured decoder is developed to learn the mapping from a sentence to
its tree, where different types of hidden states are used to depict the local
dependencies from an internal tree node to its children. For training
acceleration, we propose a tree canonicalization method, which transforms trees
into equivalent ternary trees. Then, with a proposed tree-structured search
method, the model is able to generate the most probable responses in the form
of dependency trees, which are finally flattened into sequences as the system
output. Experimental results demonstrate that the proposed X2Tree framework
outperforms baseline methods with an 11.15% increase in acceptance ratio.
| Ganbin Zhou, Ping Luo, Rongyu Cao, Yijun Xiao, Fen Lin, Bo Chen, Qing
He | null | 1705.00321 | null | null |
Scaling Active Search using Linear Similarity Functions | stat.ML cs.LG | Active Search has become an increasingly useful tool in information retrieval
problems where the goal is to discover as many target elements as possible
using only limited label queries. With the advent of big data, there is a
growing emphasis on the scalability of such techniques to handle very large and
very complex datasets.
In this paper, we consider the problem of Active Search where we are given a
similarity function between data points. We look at an algorithm introduced by
Wang et al. [2013] for Active Search over graphs and propose crucial
modifications which allow it to scale significantly. Their approach selects
points by minimizing an energy function over the graph induced by the
similarity function on the data. Our modifications require the similarity
function to be a dot-product between feature vectors of data points, equivalent
to having a linear kernel for the adjacency matrix. With this, we are able to
scale tremendously: for $n$ data points, the original algorithm runs in
$O(n^2)$ time per iteration while ours runs in only $O(nr + r^2)$ given
$r$-dimensional features.
We also describe a simple alternate approach using a weighted-neighbor
predictor which also scales well. In our experiments, we show that our method
is competitive with existing semi-supervised approaches. We also briefly
discuss conditions under which our algorithm performs well.
| Sibi Venkatesan, James K. Miller, Jeff Schneider and Artur Dubrawski | null | 1705.00334 | null | null |
Stabiliser states are efficiently PAC-learnable | quant-ph cs.LG | The exponential scaling of the wave function is a fundamental property of
quantum systems with far reaching implications in our ability to process
quantum information. A problem where these are particularly relevant is quantum
state tomography. State tomography, whose objective is to obtain a full
description of a quantum system, can be analysed in the framework of
computational learning theory. In this model, quantum states have been shown to
be Probably Approximately Correct (PAC)-learnable with sample complexity linear
in the number of qubits. However, it is conjectured that in general quantum
states require an exponential amount of computation to be learned. Here, using
results from the literature on the efficient classical simulation of quantum
systems, we show that stabiliser states are efficiently PAC-learnable. Our
results solve an open problem formulated by Aaronson [Proc. R. Soc. A, 2088,
(2007)] and propose learning theory as a tool for exploring the power of
quantum computation.
| Andrea Rocchetto | null | 1705.00345 | null | null |
Deep Learning in the Automotive Industry: Applications and Tools | cs.LG cs.CV cs.DC | Deep Learning refers to a set of machine learning techniques that utilize
neural networks with many hidden layers for tasks such as image classification, speech recognition, and language understanding. Deep learning has
been proven to be very effective in these domains and is pervasively used by
many Internet services. In this paper, we describe different automotive use cases for deep learning, in particular in the domain of computer vision. We
survey the current state-of-the-art in libraries, tools and infrastructures
(e.\,g.\ GPUs and clouds) for implementing, training and deploying deep neural
networks. We particularly focus on convolutional neural networks and computer
vision use cases, such as the visual inspection process in manufacturing plants
and the analysis of social media data. To train neural networks, curated and
labeled datasets are essential. In particular, both the availability and scope
of such datasets is typically very limited. A main contribution of this paper
is the creation of an automotive dataset that allows us to learn and
automatically recognize different vehicle properties. We describe an end-to-end
deep learning application utilizing a mobile app for data collection and
process support, and an Amazon-based cloud backend for storage and training.
For training we evaluate the use of cloud and on-premises infrastructures
(including multiple GPUs) in conjunction with different neural network
architectures and frameworks. We assess both the training times as well as the
accuracy of the classifier. Finally, we demonstrate the effectiveness of the
trained classifier in a real-world setting during the manufacturing process.
| Andre Luckow and Matthew Cook and Nathan Ashcraft and Edwin Weill and
Emil Djerekarov and Bennie Vorster | 10.1109/BigData.2016.7841045 | 1705.00346 | null | null |
Scalable Twin Neural Networks for Classification of Unbalanced Data | cs.LG | Twin Support Vector Machines (TWSVMs) have emerged as an efficient alternative
to Support Vector Machines (SVM) for learning from imbalanced datasets. The
TWSVM learns two non-parallel classifying hyperplanes by solving a couple of
smaller sized problems. However, it is unsuitable for large datasets, as it
involves matrix operations. In this paper, we discuss a Twin Neural Network
(Twin NN) architecture for learning from large unbalanced datasets. The Twin NN
also learns an optimal feature map, allowing for better discrimination between
classes. We also present an extension of this network architecture for
multiclass datasets. Results presented in the paper demonstrate that the Twin
NN generalizes well and scales well on large unbalanced datasets.
| Jayadeva, Himanshu Pant, Sumit Soman and Mayank Sharma | 10.1016/j.neucom.2018.07.089 | 1705.00347 | null | null |
Targeted matrix completion | cs.LG stat.ML | Matrix completion is a problem that arises in many data-analysis settings
where the input consists of a partially-observed matrix (e.g., recommender
systems, traffic matrix analysis etc.). Classical approaches to matrix
completion assume that the input partially-observed matrix is low rank. The
success of these methods depends on the number of observed entries and the rank
of the matrix; the larger the rank, the more entries need to be observed in
order to accurately complete the matrix. In this paper, we deal with matrices
that are not necessarily low rank themselves, but rather they contain low-rank
submatrices. We propose Targeted, which is a general framework for completing
such matrices. In this framework, we first extract the low-rank submatrices and
then apply a matrix-completion algorithm to these low-rank submatrices as well
as the remainder matrix separately. Although for the completion itself we use
state-of-the-art completion methods, our results demonstrate that Targeted
achieves significantly smaller reconstruction errors than other classical
matrix-completion methods. One of the key technical contributions of the paper
lies in the identification of the low-rank submatrices from the input
partially-observed matrices.
| Natali Ruchansky and Mark Crovella and Evimaria Terzi | null | 1705.00375 | null | null |
Matrix completion with queries | cs.LG cs.SI | In many applications, e.g., recommender systems and traffic monitoring, the
data comes in the form of a matrix that is only partially observed and low
rank. A fundamental data-analysis task for these datasets is matrix completion,
where the goal is to accurately infer the entries missing from the matrix. Even
when the data satisfies the low-rank assumption, classical matrix-completion
methods may output completions with significant error -- in that the
reconstructed matrix differs significantly from the true underlying matrix.
Often, this is due to the fact that the information contained in the observed
entries is insufficient. In this work, we address this problem by proposing an
active version of matrix completion, where queries can be made to the true
underlying matrix. Subsequently, we design Order&Extend, which is the first
algorithm to unify a matrix-completion approach and a querying strategy into a
single algorithm. Order&Extend is able to identify and alleviate insufficient
information by judiciously querying a small number of additional entries. In an
extensive experimental evaluation on real-world datasets, we demonstrate that
our algorithm is efficient and is able to accurately reconstruct the true
matrix while asking only a small number of queries.
| Natali Ruchansky and Mark Crovella and Evimaria Terzi | 10.1145/2783258.2783259 | 1705.00399 | null | null |
Spectrum Monitoring for Radar Bands using Deep Convolutional Neural
Networks | cs.NI cs.IT cs.LG math.IT | In this paper, we present a spectrum monitoring framework for the detection
of radar signals in spectrum sharing scenarios. The core of our framework is a
deep convolutional neural network (CNN) model that enables Measurement Capable
Devices to identify the presence of radar signals in the radio spectrum, even
when these signals are overlapped with other sources of interference, such as
commercial LTE and WLAN. We collected a large dataset of RF measurements, which
include the transmissions of multiple radar pulse waveforms, downlink LTE,
WLAN, and thermal noise. We propose a pre-processing data representation that
leverages the amplitude and phase shifts of the collected samples. This
representation allows our CNN model to achieve a classification accuracy of
99.6% on our testing dataset. The trained CNN model is then tested under
various SNR values, outperforming other models, such as spectrogram-based CNN
models.
| Ahmed Selim, Francisco Paisana, Jerome A. Arokkiam, Yi Zhang, Linda
Doyle, Luiz A. DaSilva | null | 1705.00462 | null | null |
A Riemannian gossip approach to subspace learning on Grassmann manifold | cs.LG math.OC | In this paper, we focus on subspace learning problems on the Grassmann
manifold. Interesting applications in this setting include low-rank matrix
completion and low-dimensional multivariate regression, among others. Motivated
by privacy concerns, we aim to solve such problems in a decentralized setting
where multiple agents have access to (and solve) only a part of the whole
optimization problem. The agents communicate with each other to arrive at a
consensus, i.e., agree on a common quantity, via the gossip protocol.
We propose a novel cost function for subspace learning on the Grassmann
manifold, which is a weighted sum of several sub-problems (each solved by an
agent) and the communication cost among the agents. The cost function has a
finite-sum structure. In the proposed modeling approach, different agents learn individual local subspaces but achieve asymptotic consensus on the global
learned subspace. The approach is scalable and parallelizable. Numerical
experiments show the efficacy of the proposed decentralized algorithms on
various matrix completion and multivariate regression benchmarks.
| Bamdev Mishra, Hiroyuki Kasai, Pratik Jawanpuria, and Atul Saroop | null | 1705.00467 | null | null |
Learning Multimodal Transition Dynamics for Model-Based Reinforcement
Learning | stat.ML cs.LG | In this paper we study how to learn stochastic, multimodal transition
dynamics in reinforcement learning (RL) tasks. We focus on evaluating
transition function estimation, while we defer planning over this model to
future work. Stochasticity is a fundamental property of many task environments.
However, discriminative function approximators have difficulty estimating
multimodal stochasticity. In contrast, deep generative models do capture
complex high-dimensional outcome distributions. First we discuss why, amongst
such models, conditional variational inference (VI) is theoretically most
appealing for model-based RL. Subsequently, we compare different VI models on
their ability to learn complex stochasticity on simulated functions, as well as
on a typical RL gridworld with multimodal dynamics. Results show VI
successfully predicts multimodal outcomes, but also robustly ignores these for
deterministic parts of the transition dynamics. In summary, we show a robust
method to learn multimodal transitions using function approximation, which is a
key preliminary for model-based RL in stochastic domains.
| Thomas M. Moerland, Joost Broekens and Catholijn M. Jonker | null | 1705.0047 | null | null |
Regularized Residual Quantization: a multi-layer sparse dictionary
learning approach | cs.LG cs.CV | The Residual Quantization (RQ) framework is revisited where the quantization
distortion is being successively reduced in multi-layers. Inspired by the
reverse-water-filling paradigm in rate-distortion theory, an efficient
regularization on the variances of the codewords is introduced, which allows the RQ to be extended to very large numbers of layers and also to high-dimensional
data, without getting over-trained. The proposed Regularized Residual
Quantization (RRQ) results in multi-layer dictionaries which are additionally
sparse, thanks to the soft-thresholding nature of the regularization when
applied to variance-decaying data which can arise from de-correlating
transformations applied to correlated data. Furthermore, we also propose a
general-purpose pre-processing for natural images which makes them suitable for
such quantization. The RRQ framework is first tested on synthetic
variance-decaying data to show its efficiency in quantization of
high-dimensional data. Next, we use the RRQ in super-resolution of a database
of facial images where it is shown that low-resolution facial images from the
test set quantized with codebooks trained on high-resolution images from the
training set show relevant high-frequency content when reconstructed with those
codebooks.
| Sohrab Ferdowsi, Slava Voloshynovskiy, Dimche Kostadinov | null | 1705.00522 | null | null |
Single image depth estimation by dilated deep residual convolutional
neural network and soft-weight-sum inference | cs.CV cs.LG | This paper proposes a new residual convolutional neural network (CNN)
architecture for single image depth estimation. Compared with existing deep CNN
based methods, our method achieves much better results with fewer training
examples and model parameters. The advantages of our method come from the usage
of dilated convolution, skip connection architecture and soft-weight-sum
inference. Experimental evaluation on the NYU Depth V2 dataset shows that our
method outperforms other state-of-the-art methods by a margin.
| Bo Li, Yuchao Dai, Huahui Chen, Mingyi He | null | 1705.00534 | null | null |
Discourse-Based Objectives for Fast Unsupervised Sentence Representation
Learning | cs.CL cs.LG cs.NE stat.ML | This work presents a novel objective function for the unsupervised training
of neural network sentence encoders. It exploits signals from paragraph-level
discourse coherence to train these models to understand text. Our objective is
purely discriminative, allowing us to train models many times faster than was
possible under prior methods, and it yields models which perform well in
extrinsic evaluations.
| Yacine Jernite, Samuel R. Bowman and David Sontag | null | 1705.00557 | null | null |
Forced to Learn: Discovering Disentangled Representations Without
Exhaustive Labels | cs.LG cs.NE | Learning a better representation with neural networks is a challenging
problem, which was tackled extensively from different perspectives in the past
few years. In this work, we focus on learning a representation that could be
used for a clustering task and introduce two novel loss components that
substantially improve the quality of produced clusters, are simple to apply to
an arbitrary model and cost function, and do not require a complicated training
procedure. We evaluate them on the two most common types of models, Recurrent
Neural Networks and Convolutional Neural Networks, showing that the approach we
propose consistently improves the quality of KMeans clustering in terms of
Adjusted Mutual Information score and outperforms previously proposed methods.
| Alexey Romanov and Anna Rumshisky | null | 1705.00574 | null | null |
Towards well-specified semi-supervised model-based classifiers via
structural adaptation | cs.LG cs.AI | Semi-supervised learning plays an important role in large-scale machine
learning. Properly using additional unlabeled data (largely available nowadays)
often can improve the machine learning accuracy. However, if the machine
learning model is misspecified for the underlying true data distribution, the
model performance could be seriously jeopardized. This issue is known as model
misspecification. To address this issue, we focus on generative models and
propose a criterion to detect the onset of model misspecification by measuring
the performance difference between models obtained using supervised and
semi-supervised learning. Then, we propose to automatically modify the
generative models during model training to achieve an unbiased generative
model. Rigorous experiments were carried out to evaluate the proposed method
using two image classification data sets, PASCAL VOC'07 and MIR Flickr. Our
proposed method has been demonstrated to outperform a number of
state-of-the-art semi-supervised learning approaches for the classification
task.
| Zhaocai Sun, William K. Cheung, Xiaofeng Zhang, Jun Yang | null | 1705.00597 | null | null |
Determinantal Point Processes for Mini-Batch Diversification | cs.LG stat.ML | We study a mini-batch diversification scheme for stochastic gradient descent
(SGD). While classical SGD relies on uniformly sampling data points to form a
mini-batch, we propose a non-uniform sampling scheme based on the Determinantal
Point Process (DPP). The DPP relies on a similarity measure between data points
and gives low probabilities to mini-batches which contain redundant data, and
higher probabilities to mini-batches with more diverse data. This
simultaneously balances the data and leads to stochastic gradients with lower
variance. We term this approach Diversified Mini-Batch SGD (DM-SGD). We show
that regular SGD and a biased version of stratified sampling emerge as special
cases. Furthermore, DM-SGD generalizes stratified sampling to cases where no
discrete features exist to bin the data into groups. We show experimentally
that our method results in more interpretable and diverse features in unsupervised
setups, and in better classification accuracies in supervised setups.
| Cheng Zhang, Hedvig Kjellstrom, Stephan Mandt | null | 1705.00607 | null | null |
Twin Learning for Similarity and Clustering: A Unified Kernel Approach | cs.LG cs.CV stat.ML | Many similarity-based clustering methods work in two separate steps including
similarity matrix computation and subsequent spectral clustering. However,
similarity measurement is challenging because it is usually impacted by many
factors, e.g., the choice of similarity metric, neighborhood size, scale of
data, noise and outliers. Thus the learned similarity matrix is often not
suitable, let alone optimal, for the subsequent clustering. In addition,
nonlinear similarity often exists in many real world data which, however, has
not been effectively considered by most existing methods. To tackle these two
challenges, we propose a model to simultaneously learn cluster indicator matrix
and similarity information in kernel spaces in a principled way. We show
theoretical relationships to kernel k-means, k-means, and spectral clustering
methods. Then, to address the practical issue of how to select the most
suitable kernel for a particular clustering task, we further extend our model
with a multiple kernel learning ability. With this joint model, we can
automatically accomplish three subtasks of finding the best cluster indicator
matrix, the most accurate similarity relations and the optimal combination of
multiple kernels. By leveraging the interactions between these three subtasks
in a joint framework, each subtask can be iteratively boosted by using the
results of the others towards an overall optimal solution. Extensive
experiments are performed to demonstrate the effectiveness of our method.
| Zhao Kang, Chong Peng, Qiang Cheng | null | 1705.00678 | null | null |
Convex-constrained Sparse Additive Modeling and Its Extensions | cs.LG stat.ML | Sparse additive modeling is a class of effective methods for performing
high-dimensional nonparametric regression. In this work we show how shape
constraints such as convexity/concavity and their extensions, can be integrated
into additive models. The proposed sparse difference of convex additive models
(SDCAM) can estimate most continuous functions without any a priori smoothness
assumption. Motivated by a characterization of difference of convex functions,
our method incorporates a natural regularization functional to avoid
overfitting and to reduce model complexity. Computationally, we develop an
efficient backfitting algorithm with linear per-iteration complexity.
Experiments on both synthetic and real data verify that our method is
competitive against state-of-the-art sparse additive models, with improved
performance in most scenarios.
| Junming Yin and Yaoliang Yu | null | 1705.00687 | null | null |
Regularizing Model Complexity and Label Structure for Multi-Label Text
Classification | stat.ML cs.LG | Multi-label text classification is a popular machine learning task where each
document is assigned multiple relevant labels. This task is challenging
due to high dimensional features and correlated labels. Multi-label text
classifiers need to be carefully regularized to prevent the severe over-fitting
in the high dimensional space, and also need to take into account label
dependencies in order to make accurate predictions under uncertainty. We
demonstrate significant and practical improvement by carefully regularizing the
model complexity during training phase, and also regularizing the label search
space during prediction phase. Specifically, we regularize the classifier
training using Elastic-net (L1+L2) penalty for reducing model complexity/size,
and employ early stopping to prevent overfitting. At prediction time, we apply
support inference to restrict the search space to label sets encountered in the
training set, and F-optimizer GFM to make optimal predictions for the F1
metric. We show that although support inference only provides density
estimations on existing label combinations, when combined with GFM predictor,
the algorithm can output unseen label combinations. Taken collectively, our
experiments show state-of-the-art results on many benchmark datasets. Beyond
performance and practical contributions, we make some interesting observations.
Contrary to the prior belief, which deems support inference as purely an
approximate inference procedure, we show that support inference acts as a
strong regularizer on the label prediction structure. It allows the classifier
to take into account label dependencies during prediction even if the
classifiers had not modeled any label dependencies during training.
| Bingyu Wang, Cheng Li, Virgil Pavlu, Javed Aslam | null | 1705.0074 | null | null |
A Strategy for an Uncompromising Incremental Learner | cs.CV cs.LG | Multi-class supervised learning systems require the knowledge of the entire
range of labels they predict. Often when learnt incrementally, they suffer from
catastrophic forgetting. To avoid this, generous leeways have to be made to the
philosophy of incremental learning that either forces a part of the machine to
not learn, or to retrain the machine again with a selection of the historic
data. While these hacks work to various degrees, they do not adhere to the
spirit of incremental learning. In this article, we redefine incremental
learning with stringent conditions that do not allow for any undesirable
relaxations and assumptions. We design a strategy involving generative models
and the distillation of dark knowledge as a means of hallucinating data along
with appropriate targets from past distributions. We call this technique,
phantom sampling. We show that phantom sampling helps avoid catastrophic
forgetting during incremental learning. Using an implementation based on deep
neural networks, we demonstrate that phantom sampling dramatically avoids
catastrophic forgetting. We apply these strategies to competitive multi-class
incremental learning of deep neural networks. Using various benchmark datasets
and through our strategy, we demonstrate that strict incremental learning could
be achieved. We further put our strategy to test on challenging cases,
including cross-domain increments and incrementing on a novel label space. We
also propose a trivial extension to unbounded-continual learning and identify
potential for future development.
| Ragav Venkatesan, Hemanth Venkateswara, Sethuraman Panchanathan,
Baoxin Li | null | 1705.00744 | null | null |
One-Class Semi-Supervised Learning: Detecting Linearly Separable Class
by its Mean | stat.ML cs.LG | In this paper, we presented a novel semi-supervised one-class classification
algorithm which assumes that the class is linearly separable from other elements. We proved theoretically that the class is linearly separable if and only if it is
maximal by probability within the sets with the same mean. Furthermore, we
presented an algorithm for identifying such linearly separable class utilizing
linear programming. We described three application cases including an
assumption of linear separability, Gaussian distribution, and the case of
linear separability in transformed space of kernel functions. Finally, we
demonstrated the work of the proposed algorithm on the USPS dataset and
analyzed the relationship of the performance of the algorithm and the size of
the initially labeled sample.
| Evgeny Bauman and Konstantin Bauman | null | 1705.00797 | null | null |
Transforming Bell's Inequalities into State Classifiers with Machine
Learning | quant-ph cs.LG | Quantum information science has profoundly changed the ways we understand,
store, and process information. A major challenge in this field is to look for
an efficient means for classifying quantum states. For instance, one may want to
determine if a given quantum state is entangled or not. However, the process of
a complete characterization of quantum states, known as quantum state
tomography, is a resource-consuming operation in general. An attractive
proposal would be the use of Bell's inequalities as an entanglement witness,
where only partial information of the quantum state is needed. The problem is
that entanglement is necessary but not sufficient for violating Bell's
inequalities, making it an unreliable state classifier. Here we aim at solving
this problem by the methods of machine learning. More precisely, given a family
of quantum states, we randomly picked a subset of it to construct a
quantum-state classifier, accepting only partial information of each quantum
state. Our results indicated that these transformed Bell-type inequalities can
perform significantly better than the original Bell's inequalities in
classifying entangled states. We further extended our analysis to three-qubit
and four-qubit systems, performing classification of quantum states into
multiple species. These results demonstrate how the tools in machine learning
can be applied to solving problems in quantum information science.
| Yue-Chi Ma, Man-Hong Yung | null | 1705.00813 | null | null |
Multi-view Unsupervised Feature Selection by Cross-diffused Matrix
Alignment | cs.LG | Multi-view high-dimensional data have become increasingly popular in the big data
era. Feature selection is a useful technique for alleviating the curse of
dimensionality in multi-view learning. In this paper, we study unsupervised
feature selection for multi-view data, as class labels are usually expensive to
obtain. Traditional feature selection methods are mostly designed for
single-view data and cannot fully exploit the rich information from multi-view
data. Existing multi-view feature selection methods are usually based on noisy
cluster labels which might not preserve sufficient information from multi-view
data. To better utilize multi-view information, we propose a method, CDMA-FS,
to select features for each view by performing alignment on a cross diffused
matrix. We formulate it as a constrained optimization problem and solve it
using a Quasi-Newton-based method. Experimental results on four real-world datasets show that the proposed method is more effective than state-of-the-art methods in the multi-view setting.
| Xiaokai Wei, Bokai Cao and Philip S. Yu | null | 1705.00825 | null | null |
BLENDER: Enabling Local Search with a Hybrid Differential Privacy Model | cs.CR cs.CY cs.IR cs.LG | We propose a hybrid model of differential privacy that considers a
combination of regular and opt-in users who desire the differential privacy
guarantees of the local privacy model and the trusted curator model,
respectively. We demonstrate that within this model, it is possible to design a
new type of blended algorithm for the task of privately computing the head of a
search log. This blended approach provides significant improvements in the
utility of obtained data compared to related work while providing users with
their desired privacy guarantees. Specifically, on two large search click data
sets, comprising 1.75 and 16 GB respectively, our approach attains NDCG values
exceeding 95% across a range of privacy budget values.
| Brendan Avent, Aleksandra Korolova, David Zeber, Torgeir Hovden,
Benjamin Livshits | 10.29012/jpc.680 | 1705.00831 | null | null |
Pointed subspace approach to incomplete data | cs.LG | Incomplete data are often represented as vectors with filled missing
attributes joined with flag vectors indicating missing components. In this
paper we generalize this approach and represent incomplete data as pointed
affine subspaces. This allows us to perform various affine transformations of the data, such as whitening or dimensionality reduction. We embed such generalized
missing data into a vector space by mapping pointed affine subspace
(generalized missing data point) to a vector containing imputed values joined
with a corresponding projection matrix. Such an operation preserves the scalar
product of the embedding defined for flag vectors and allows transformed incomplete data to be input to typical classification methods.
| {\L}ukasz Struski, Marek \'Smieja, Jacek Tabor | null | 1705.0084 | null | null |
Random active path model of deep neural networks with diluted binary
synapses | cs.LG cond-mat.stat-mech stat.ML | Deep learning has become a powerful and popular tool for a variety of machine
learning tasks. However, it is challenging to understand the mechanism of deep
learning from a theoretical perspective. In this work, we propose a random
active path model to study collective properties of deep neural networks with
binary synapses, under the removal perturbation of connections between layers.
In the model, the path from input to output is randomly activated, and the
corresponding input unit constrains the weights along the path into the form of
a $p$-weight interaction glass model. A critical value of the perturbation is
observed to separate a spin glass regime from a paramagnetic regime, with the
transition being of the first order. The paramagnetic phase is conjectured to
have a poor generalization performance.
| Haiping Huang, and Alireza Goudarzi | 10.1103/PhysRevE.98.042311 | 1705.0085 | null | null |
Deep Neural Machine Translation with Linear Associative Unit | cs.CL cs.LG | Deep Neural Networks (DNNs) have provably enhanced the state-of-the-art
Neural Machine Translation (NMT) with their capability in modeling complex
functions and capturing complex linguistic structures. However NMT systems with
deep architecture in their encoder or decoder RNNs often suffer from severe
gradient diffusion due to the non-linear recurrent activations, which often
make the optimization much more difficult. To address this problem we propose
novel linear associative units (LAU) to reduce the gradient propagation length
inside the recurrent unit. Different from conventional approaches (LSTM unit
and GRU), LAUs utilize linear associative connections between the input and output
of the recurrent unit, which allows unimpeded information flow through both
space and time direction. The model is quite simple, but it is surprisingly
effective. Our empirical study on Chinese-English translation shows that our
model with proper configuration can improve by 11.7 BLEU upon Groundhog and the
best reported results in the same setting. On WMT14 English-German task and a
larger WMT14 English-French task, our model achieves comparable results with
the state-of-the-art.
| Mingxuan Wang, Zhengdong Lu, Jie Zhou, Qun Liu | null | 1705.00861 | null | null |
Show, Adapt and Tell: Adversarial Training of Cross-domain Image
Captioner | cs.CV cs.AI cs.LG | Impressive image captioning results are achieved in domains with plenty of
training image and sentence pairs (e.g., MSCOCO). However, transferring to a
target domain with significant domain shifts but no paired training data
(referred to as cross-domain image captioning) remains largely unexplored. We
propose a novel adversarial training procedure to leverage unpaired data in the
target domain. Two critic networks are introduced to guide the captioner,
namely a domain critic and a multi-modal critic. The domain critic assesses whether
the generated sentences are indistinguishable from sentences in the target
domain. The multi-modal critic assesses whether an image and its generated
sentence are a valid pair. During training, the critics and captioner act as
adversaries -- captioner aims to generate indistinguishable sentences, whereas
critics aim at distinguishing them. The assessment improves the captioner
through policy gradient updates. During inference, we further propose a novel
critic-based planning method to select high-quality sentences without
additional supervision (e.g., tags). To evaluate, we use MSCOCO as the source
domain and four other datasets (CUB-200-2011, Oxford-102, TGIF, and Flickr30k)
as the target domains. Our method consistently performs well on all datasets.
In particular, on CUB-200-2011, we achieve 21.8% CIDEr-D improvement after
adaptation. Utilizing critics during inference further gives another 4.5%
boost.
| Tseng-Hung Chen, Yuan-Hong Liao, Ching-Yao Chuang, Wan-Ting Hsu,
Jianlong Fu, Min Sun | null | 1705.0093 | null | null |
Deep Learning for Tumor Classification in Imaging Mass Spectrometry | stat.ML cs.LG | Motivation: Tumor classification using Imaging Mass Spectrometry (IMS) data
has a high potential for future applications in pathology. Due to the
complexity and size of the data, automated feature extraction and
classification steps are required to fully process the data. Deep learning
offers an approach to learn feature extraction and classification combined in a
single model. Commonly these steps are handled separately in IMS data analysis,
hence deep learning offers an alternative strategy worthwhile to explore.
Results: Methodologically, we propose an adapted architecture based on deep
convolutional networks to handle the characteristics of mass spectrometry data,
as well as a strategy to interpret the learned model in the spectral domain
based on a sensitivity analysis. The proposed methods are evaluated on two
challenging tumor classification tasks and compared to a baseline approach.
The competitiveness of the proposed methods is shown on both tasks by studying the
performance via cross-validation. Moreover, the learned models are analyzed by
the proposed sensitivity analysis revealing biologically plausible effects as
well as confounding factors of the considered task. Thus, this study may serve
as a starting point for further development of deep learning approaches in IMS
classification tasks.
| Jens Behrmann, Christian Etmann, Tobias Boskamp, Rita Casadonte,
J\"org Kriegsmann, Peter Maass | 10.1093/bioinformatics/btx724 | 1705.01015 | null | null |
Maximum Resilience of Artificial Neural Networks | cs.LG cs.AI cs.LO cs.SE | The deployment of Artificial Neural Networks (ANNs) in safety-critical
applications poses a number of new verification and certification challenges.
In particular, for ANN-enabled self-driving vehicles it is important to
establish properties about the resilience of ANNs to noisy or even maliciously
manipulated sensory input. We are addressing these challenges by defining
resilience properties of ANN-based classifiers as the maximal amount of input
or sensor perturbation which is still tolerated. This problem of computing
maximal perturbation bounds for ANNs is then reduced to solving mixed integer
optimization problems (MIP). A number of MIP encoding heuristics are developed
for drastically reducing MIP-solver runtimes, and using parallelization of
MIP-solvers results in an almost linear speed-up with the number of computing cores (up to a certain limit) in our experiments. We demonstrate the effectiveness
and scalability of our approach by means of computing maximal resilience bounds
for a number of ANN benchmark sets ranging from typical image recognition
scenarios to the autonomous maneuvering of robots.
| Chih-Hong Cheng, Georg N\"uhrenberg, Harald Ruess | null | 1705.0104 | null | null |
PDE approach to the problem of online prediction with expert advice: a
construction of potential-based strategies | cs.LG | We consider a sequence of repeated prediction games and formally pass to the
limit. The supersolutions of the resulting non-linear parabolic partial
differential equation are closely related to the potential functions in the
sense of N.\,Cesa-Bianchi, G.\,Lugosi (2003). Any such supersolution gives an
upper bound for forecaster's regret and suggests a potential-based prediction
strategy, satisfying the Blackwell condition. A conventional upper bound for
the worst-case regret is justified by a simple verification argument.
| Dmitry B. Rokhlin | null | 1705.01091 | null | null |
Summarized Network Behavior Prediction | cs.LG stat.ML | This work studies the entity-wise topical behavior from massive network logs.
Both the temporal and the spatial relationships of the behavior are explored
with the learning architectures combing the recurrent neural network (RNN) and
the convolutional neural network (CNN). To make the behavioral data appropriate
for the spatial learning in CNN, several reduction steps are taken to form the
topical metrics and place them homogeneously like pixels in the images. The
experimental results show both temporal and spatial gains when compared to a multilayer perceptron (MLP) network. A new learning framework
called spatially connected convolutional networks (SCCN) is introduced to more
efficiently predict the behavior.
| Shih-Chieh Su | null | 1705.01143 | null | null |
Analyzing Knowledge Transfer in Deep Q-Networks for Autonomously
Handling Multiple Intersections | cs.LG cs.AI | We analyze how the knowledge to autonomously handle one type of intersection,
represented as a Deep Q-Network, translates to other types of intersections
(tasks). We view intersection handling as a deep reinforcement learning
problem, which approximates the state action Q function as a deep neural
network. Using a traffic simulator, we show that directly copying a network
trained for one type of intersection to another type of intersection decreases
the success rate. We also show that when a network is pre-trained on Task A and then fine-tuned on Task B, the resulting network not only performs better on Task B than a network exclusively trained on Task A, but also retains knowledge of Task A. Finally, we examine a lifelong learning
setting, where we train a single network on five different types of
intersections sequentially and show that the resulting network exhibited
catastrophic forgetting of knowledge on previous tasks. This result suggests a
need for a long-term memory component to preserve knowledge.
| David Isele, Akansel Cosgun, Kikuo Fujimura | null | 1705.01197 | null | null |
Local Shrunk Discriminant Analysis (LSDA) | cs.LG | Dimensionality reduction is a crucial step for pattern recognition and data
mining tasks to overcome the curse of dimensionality. Principal component
analysis (PCA) is a traditional technique for unsupervised dimensionality
reduction, which is often employed to seek a projection to best represent the
data in a least-squares sense, but if the original data has a nonlinear structure, the performance of PCA will quickly drop. A supervised dimensionality
reduction algorithm called Linear discriminant analysis (LDA) seeks for an
embedding transformation, which can work well with Gaussian distribution data
or single-modal data, but for non-Gaussian distribution data or multimodal
data, it gives undesired results. What is worse, the dimension of LDA cannot be
more than the number of classes. In order to solve these issues, Local shrunk
discriminant analysis (LSDA) is proposed in this work to process the
non-Gaussian distribution data or multimodal data, which not only incorporates both the linear and nonlinear structures of the original data, but also learns pattern shrinking to make the data more flexible in fitting the manifold structure.
Further, LSDA has stronger generalization performance, as its objective function reduces to local LDA and traditional LDA, respectively, when different extreme parameters are utilized. What is more, a new efficient
optimization algorithm is introduced to solve the non-convex objective function
with low computational cost. Compared with other related approaches, such as
PCA, LDA and local LDA, the proposed method can derive a subspace which is more
suitable for non-Gaussian distribution and real data. Promising experimental
results on different kinds of data sets demonstrate the effectiveness of the
proposed approach.
| Zan Gao, Guotai Zhang, Feiping Nie, Hua Zhang | null | 1705.01206 | null | null |
Lifelong Metric Learning | cs.LG cs.AI | The state-of-the-art online learning approaches are only capable of learning
the metric for predefined tasks. In this paper, we consider the lifelong learning
problem to mimic "human learning", i.e., endowing a new capability to the
learned metric for a new task from new online samples and incorporating
previous experiences and knowledge. Therefore, we propose a new metric learning
framework: lifelong metric learning (LML), which only utilizes the data of the
new task to train the metric model while preserving the original capabilities.
More specifically, the proposed LML maintains a common subspace for all learned
metrics, named lifelong dictionary, transfers knowledge from the common
subspace to each new metric task with task-specific idiosyncrasy, and redefines
the common subspace over time to maximize performance across all metric tasks.
For model optimization, we apply an online passive-aggressive optimization
algorithm to solve the proposed LML framework, where the lifelong dictionary
and task-specific partition are optimized alternatively and consecutively.
Finally, we evaluate our approach by analyzing several multi-task metric
learning datasets. Extensive experimental results demonstrate effectiveness and
efficiency of the proposed framework.
| Gan Sun, Yang Cong, Ji Liu and Xiaowei Xu | null | 1705.01209 | null | null |
Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks | cs.LO cs.AI cs.LG | We present an approach for the verification of feed-forward neural networks
in which all nodes have a piece-wise linear activation function. Such networks
are often used in deep learning and have been shown to be hard to verify for
modern satisfiability modulo theory (SMT) and integer linear programming (ILP)
solvers.
The starting point of our approach is the addition of a global linear
approximation of the overall network behavior to the verification problem that
helps with SMT-like reasoning over the network behavior. We present a
specialized verification algorithm that employs this approximation in a search
process in which it infers additional node phases for the non-linear nodes in
the network from partial node phase assignments, similar to unit propagation in
classical SAT solving. We also show how to infer additional conflict clauses
and safe node fixtures from the results of the analysis steps performed during
the search. The resulting approach is evaluated on collision avoidance and
handwritten digit recognition case studies.
| Ruediger Ehlers | null | 1705.0132 | null | null |
Going Wider: Recurrent Neural Network With Parallel Cells | cs.CL cs.LG cs.NE | Recurrent Neural Network (RNN) has been widely applied for sequence modeling.
In an RNN, the hidden states at the current step are fully connected to those at the previous step; thus, the influence of less related features from the previous step may potentially decrease the model's learning ability. We propose a simple
technique called parallel cells (PCs) to enhance the learning ability of
Recurrent Neural Network (RNN). In each layer, we run multiple small RNN cells
rather than one single large cell. In this paper, we evaluate PCs on 2 tasks.
On the language modeling task on PTB (Penn Tree Bank), our model outperforms state-of-the-art models by decreasing perplexity from 78.6 to 75.3. On the Chinese-English translation task, our model improves the BLEU score by 0.39 points over the baseline model.
| Danhao Zhu, Si Shen, Xin-Yu Dai and Jiajun Chen | null | 1705.01346 | null | null |
Data-Driven Synthesis of Smoke Flows with CNN-based Feature Descriptors | cs.GR cs.LG | We present a novel data-driven algorithm to synthesize high-resolution flow
simulations with reusable repositories of space-time flow data. In our work, we
employ a descriptor learning approach to encode the similarity between fluid
regions with differences in resolution and numerical viscosity. We use
convolutional neural networks to generate the descriptors from fluid data such
as smoke density and flow velocity. At the same time, we present a deformation
limiting patch advection method which allows us to robustly track deformable
fluid regions. With the help of this patch advection, we generate stable
space-time data sets from detailed fluids for our repositories. We can then use
our learned descriptors to quickly localize a suitable data set when running a
new simulation. This makes our approach very efficient, and resolution
independent. We will demonstrate with several examples that our method yields
volumes with very high effective resolutions, and non-dissipative small scale
details that naturally integrate into the motions of the underlying flow.
| Mengyu Chu and Nils Thuerey | 10.1145/3072959.3073643 | 1705.01425 | null | null |
Ternary Neural Networks with Fine-Grained Quantization | cs.LG cs.NE | We propose a novel fine-grained quantization (FGQ) method to ternarize
pre-trained full precision models, while also constraining activations to 8 and
4-bits. Using this method, we demonstrate a minimal loss in classification
accuracy on state-of-the-art topologies without additional training. We provide
an improved theoretical formulation that forms the basis for a higher quality
solution using FGQ. Our method involves ternarizing the original weight tensor
in groups of $N$ weights. Using $N=4$, we achieve Top-1 accuracy within $3.7\%$
and $4.2\%$ of the baseline full precision result for Resnet-101 and Resnet-50
respectively, while eliminating $75\%$ of all multiplications. These results
enable a full 8/4-bit inference pipeline, with best-reported accuracy using
ternary weights on ImageNet dataset, with a potential of $9\times$ improvement
in performance. Also, for smaller networks like AlexNet, FGQ achieves
state-of-the-art results. We further study the impact of group size on both
performance and accuracy. With a group size of $N=64$, we eliminate
$\approx99\%$ of the multiplications; however, this introduces a noticeable
drop in accuracy, which necessitates fine tuning the parameters at lower
precision. We address this by fine-tuning Resnet-50 with 8-bit activations and
ternary weights at $N=64$, improving the Top-1 accuracy to within $4\%$ of the
full precision result with $<30\%$ additional training overhead. Our final
quantized model can run on a full 8-bit compute pipeline using 2-bit weights
and has the potential of up to $15\times$ improvement in performance compared
to baseline full-precision models.
| Naveen Mellempudi, Abhisek Kundu, Dheevatsa Mudigere, Dipankar Das,
Bharat Kaul, Pradeep Dubey | null | 1705.01462 | null | null |
Efficient Spatio-Temporal Gaussian Regression via Kalman Filtering | cs.LG cs.AI stat.ML | In this work we study the non-parametric reconstruction of spatio-temporal
dynamical Gaussian processes (GPs) via GP regression from sparse and noisy
data. GPs have been mainly applied to spatial regression where they represent
one of the most powerful estimation approaches, thanks also to their universal representation properties. Their extension to dynamical processes has instead been elusive so far, since classical implementations lead to unscalable
algorithms. We then propose a novel procedure to address this problem by
coupling GP regression and Kalman filtering. In particular, assuming space/time
separability of the covariance (kernel) of the process and rational time
spectrum, we build a finite-dimensional discrete-time state-space process
representation amenable to Kalman filtering. With sampling over a finite set of
fixed spatial locations, our major finding is that the Kalman filter state at
instant $t_k$ represents a sufficient statistic to compute the minimum variance
estimate of the process at any $t \geq t_k$ over the entire spatial domain.
This result can be interpreted as a novel Kalman representer theorem for
dynamical GPs. We then extend the study to situations where the set of spatial
input locations can vary over time. The proposed algorithms are finally tested
on both synthetic and real field data, also providing comparisons with standard
GP and truncated GP regression techniques.
| Marco Todescato, Andrea Carron, Ruggero Carli, Gianluigi Pillonetto,
Luca Schenato | 10.1016/j.automatica.2020.109032 | 1705.01485 | null | null |
Balanced Excitation and Inhibition are Required for High-Capacity,
Noise-Robust Neuronal Selectivity | q-bio.NC cond-mat.dis-nn cs.LG cs.NE stat.ML | Neurons and networks in the cerebral cortex must operate reliably despite
multiple sources of noise. To evaluate the impact of both input and output
noise, we determine the robustness of single-neuron stimulus selective
responses, as well as the robustness of attractor states of networks of neurons
performing memory tasks. We find that robustness to output noise requires
synaptic connections to be in a balanced regime in which excitation and
inhibition are strong and largely cancel each other. We evaluate the conditions
required for this regime to exist and determine the properties of networks
operating within it. A plausible synaptic plasticity rule for learning that
balances weight configurations is presented. Our theory predicts an optimal
ratio of the number of excitatory and inhibitory synapses for maximizing the
encoding capacity of balanced networks for a given statistics of afferent
activations. Previous work has shown that balanced networks amplify
spatio-temporal variability and account for observed asynchronous irregular
states. Here we present a novel type of balanced network that amplifies small
changes in the impinging signals, and emerges automatically from learning to
perform neuronal and network functions robustly.
| Ran Rubin, L.F. Abbott and Haim Sompolinsky | 10.1073/pnas.1705841114 | 1705.01502 | null | null |
XES Tensorflow - Process Prediction using the Tensorflow Deep-Learning
Framework | cs.LG | Predicting the next activity of a running process is an important aspect of
process management. Recently, artificial neural networks, so called
deep-learning approaches, have been proposed to address this challenge. This
demo paper describes a software application that applies the Tensorflow
deep-learning framework to process prediction. The software application reads
industry-standard XES files for training and presents the user with an
easy-to-use graphical user interface for both training and prediction. The
system provides several improvements over earlier work. This demo paper focuses
on the software implementation and describes the architecture and user
interface.
| Joerg Evermann and Jana-Rebecca Rehse and Peter Fettke | null | 1705.01507 | null | null |
Semi-supervised cross-entropy clustering with information bottleneck
constraint | cs.LG stat.ML | In this paper, we propose a semi-supervised clustering method, CEC-IB, that
models data with a set of Gaussian distributions and that retrieves clusters
based on a partial labeling provided by the user (partition-level side
information). By combining the ideas from cross-entropy clustering (CEC) with
those from the information bottleneck method (IB), our method trades between
three conflicting goals: the accuracy with which the data set is modeled, the
simplicity of the model, and the consistency of the clustering with side
information. Experiments demonstrate that CEC-IB has a performance comparable
to Gaussian mixture models (GMM) in a classical semi-supervised scenario, but
is faster, more robust to noisy labels, automatically determines the optimal
number of clusters, and performs well when not all classes are present in the
side information. Moreover, in contrast to other semi-supervised models, it can
be successfully applied in discovering natural subgroups if the partition-level
side information is derived from the top levels of a hierarchical clustering.
| Marek \'Smieja and Bernhard C. Geiger | 10.1016/j.ins.2017.07.016 | 1705.01601 | null | null |
Compressing DMA Engine: Leveraging Activation Sparsity for Training Deep
Neural Networks | cs.LG cs.AR | Popular deep learning frameworks require users to fine-tune their memory
usage so that the training data of a deep neural network (DNN) fits within the
GPU physical memory. Prior work tries to address this restriction by
virtualizing the memory usage of DNNs, enabling both CPU and GPU memory to be
utilized for memory allocations. Despite its merits, virtualizing memory can
incur significant performance overheads when the time needed to copy data back
and forth from CPU memory is higher than the latency to perform the
computations required for DNN forward and backward propagation. We introduce a
high-performance virtualization strategy based on a "compressing DMA engine"
(cDMA) that drastically reduces the size of the data structures that are
targeted for CPU-side allocations. The cDMA engine offers an average 2.6x
(maximum 13.8x) compression ratio by exploiting the sparsity inherent in
offloaded data, improving the performance of virtualized DNNs by an average 32%
(maximum 61%).
| Minsoo Rhu, Mike O'Connor, Niladrish Chatterjee, Jeff Pool, Stephen W.
Keckler | null | 1705.01626 | null | null |
Generative Convolutional Networks for Latent Fingerprint Reconstruction | cs.CV cs.LG | Performance of fingerprint recognition depends heavily on the extraction of
minutiae points. Enhancement of the fingerprint ridge pattern is thus an
essential pre-processing step that noticeably reduces false positive and
negative detection rates. A particularly challenging setting is when the
fingerprint images are corrupted or partially missing. In this work, we apply
generative convolutional networks to denoise visible minutiae and predict the
missing parts of the ridge pattern. The proposed enhancement approach is tested
as a pre-processing step in combination with several standard feature
extraction methods such as MINDTCT, followed by biometric comparison using MCC
and BOZORTH3. We evaluate our method on several publicly available latent
fingerprint datasets captured using different sensors.
| Jan Svoboda, Federico Monti, Michael M. Bronstein | null | 1705.01707 | null | null |
Semi-Supervised AUC Optimization based on Positive-Unlabeled Learning | stat.ML cs.LG | Maximizing the area under the receiver operating characteristic curve (AUC)
is a standard approach to imbalanced classification. So far, various supervised
AUC optimization methods have been developed and they are also extended to
semi-supervised scenarios to cope with small sample problems. However, existing
semi-supervised AUC optimization methods rely on strong distributional
assumptions, which are rarely satisfied in real-world problems. In this paper,
we propose a novel semi-supervised AUC optimization method that does not
require such restrictive assumptions. We first develop an AUC optimization
method based only on positive and unlabeled data (PU-AUC) and then extend it to
semi-supervised learning by combining it with a supervised AUC optimization
method. We theoretically prove that, without the restrictive distributional
assumptions, unlabeled data contribute to improving the generalization
performance in PU and semi-supervised AUC optimization methods. Finally, we
demonstrate the practical usefulness of the proposed methods through
experiments.
| Tomoya Sakai, Gang Niu, Masashi Sugiyama | 10.1007/s10994-017-5678-9 | 1705.01708 | null | null |
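As a brief formal sketch of the idea above (notation chosen here, not taken from the paper): with class priors $\theta_P$ and $\theta_N = 1 - \theta_P$, a scoring function $f$, and a surrogate loss $\ell$ for the 0-1 indicator, the AUC risk and its positive-unlabeled rewriting read

```latex
\[
  R(f) \;=\; \mathbb{E}_{x^{+}\sim p_{+}}\,\mathbb{E}_{x^{-}\sim p_{-}}
  \bigl[\ell\bigl(f(x^{+})-f(x^{-})\bigr)\bigr],
\]
% Substituting p = \theta_P\, p_{+} + \theta_N\, p_{-} for the unlabeled marginal
% eliminates the expectation over the (unavailable) negative distribution:
\[
  R(f) \;=\; \frac{1}{\theta_N}\Bigl(
    \mathbb{E}_{x^{+}\sim p_{+},\; x\sim p}\bigl[\ell\bigl(f(x^{+})-f(x)\bigr)\bigr]
    \;-\; \theta_P\,\mathbb{E}_{x^{+},\,\tilde{x}^{+}\sim p_{+}}
      \bigl[\ell\bigl(f(x^{+})-f(\tilde{x}^{+})\bigr)\bigr]\Bigr).
\]
```

The second expression can be estimated from positive and unlabeled samples alone; the paper's exact estimators, loss choices, and theoretical conditions may differ from this sketch.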
Optimal Approximation with Sparsely Connected Deep Neural Networks | cs.LG cs.IT math.FA math.IT | We derive fundamental lower bounds on the connectivity and the memory
requirements of deep neural networks guaranteeing uniform approximation rates
for arbitrary function classes in $L^2(\mathbb R^d)$. In other words, we
establish a connection between the complexity of a function class and the
complexity of deep neural networks approximating functions from this class to
within a prescribed accuracy. Additionally, we prove that our lower bounds are
achievable for a broad family of function classes. Specifically, all function
classes that are optimally approximated by a general class of representation
systems---so-called \emph{affine systems}---can be approximated by deep neural
networks with minimal connectivity and memory requirements. Affine systems
encompass a wealth of representation systems from applied harmonic analysis
such as wavelets, ridgelets, curvelets, shearlets, $\alpha$-shearlets, and more
generally $\alpha$-molecules. Our central result elucidates a remarkable
universality property of neural networks and shows that they achieve the
optimum approximation properties of all affine systems combined. As a specific
example, we consider the class of $\alpha^{-1}$-cartoon-like functions, which
is approximated optimally by $\alpha$-shearlets. We also explain how our
results can be extended to the case of functions on low-dimensional immersed
manifolds. Finally, we present numerical experiments demonstrating that the
standard stochastic gradient descent algorithm generates deep neural networks
providing close-to-optimal approximation rates. Moreover, these results
indicate that stochastic gradient descent can actually learn approximations
that are sparse in the representation systems optimally sparsifying the
function class the network is trained on.
| Helmut B\"olcskei, Philipp Grohs, Gitta Kutyniok, Philipp Petersen | null | 1705.01714 | null | null |
Fast k-means based on KNN Graph | cs.LG cs.AI cs.CV | In the era of big data, k-means clustering has been widely adopted as a basic
processing tool in various contexts. However, its computational cost could be
prohibitively high when the data size and the number of clusters are large. It is
well known that the processing bottleneck of k-means lies in the operation of
seeking the closest centroid in each iteration. In this paper, a novel solution
to the scalability issue of k-means is presented. In the proposal, k-means
is supported by an approximate k-nearest neighbors graph. In each k-means
iteration, a data sample is only compared to the clusters in which its nearest
neighbors reside. Since the number of nearest neighbors we consider is much
less than k, the processing cost in this step becomes minor and irrelevant to
k. The processing bottleneck is therefore overcome. Notably, the k-nearest
neighbor graph is itself constructed by iteratively calling the fast
$k$-means. Compared with existing fast k-means variants, the proposed
algorithm achieves a hundreds- to thousands-fold speed-up while maintaining high
clustering quality. When tested on 10 million 512-dimensional samples, it
takes only 5.2 hours to produce 1 million clusters. In contrast, to fulfill the
same scale of clustering, traditional k-means would take 3 years.
| Cheng-Hao Deng and Wan-Lei Zhao | null | 1705.01813 | null | null |
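A minimal sketch of the assignment idea described above: in each iteration a sample is compared only to the centroids of clusters that its (approximate) nearest neighbours currently belong to, rather than to all k centroids. The brute-force KNN step, initialization, and all names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def knn_graph(X, n_neighbors=10):
    # Brute-force KNN graph; the paper builds this graph approximately and
    # iteratively with the fast k-means itself.
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :n_neighbors]

def graph_kmeans(X, k, n_neighbors=10, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    neighbors = knn_graph(X, n_neighbors)
    assign = rng.integers(0, k, size=n)              # random initial assignment
    for _ in range(n_iter):
        centroids = np.vstack([X[assign == c].mean(0) if np.any(assign == c)
                               else X[rng.integers(n)] for c in range(k)])
        for i in range(n):
            # candidate clusters: the sample's own cluster plus the clusters
            # its nearest neighbours currently belong to
            cand = np.unique(np.append(assign[neighbors[i]], assign[i]))
            dists = ((X[i] - centroids[cand]) ** 2).sum(-1)
            assign[i] = cand[np.argmin(dists)]
    return assign
```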
Semi-supervised model-based clustering with controlled clusters leakage | cs.LG stat.ML | In this paper, we focus on finding clusters in partially categorized data
sets. We propose a semi-supervised version of Gaussian mixture model, called
C3L, which retrieves natural subgroups of given categories. In contrast to
other semi-supervised models, C3L is parametrized by user-defined leakage
level, which controls maximal inconsistency between initial categorization and
resulting clustering. Our method can be implemented as a module in practical
expert systems to detect clusters, which combine expert knowledge with true
distribution of data. Moreover, it can be used for improving the results of
less flexible clustering techniques, such as projection pursuit clustering. The
paper presents extensive theoretical analysis of the model and fast algorithm
for its efficient optimization. Experimental results show that C3L finds high
quality clustering model, which can be applied in discovering meaningful groups
in partially classified data.
| Marek \'Smieja and {\L}ukasz Struski and Jacek Tabor | null | 1705.01877 | null | null |
Learning with Confident Examples: Rank Pruning for Robust Classification
with Noisy Labels | stat.ML cs.LG | Noisy PN learning is the problem of binary classification when training
examples may be mislabeled (flipped) uniformly with noise rate rho1 for
positive examples and rho0 for negative examples. We propose Rank Pruning (RP)
to solve noisy PN learning and the open problem of estimating the noise rates,
i.e. the fraction of wrong positive and negative labels. Unlike prior
solutions, RP is time-efficient and general, requiring O(T) for any
unrestricted choice of probabilistic classifier with T fitting time. We prove
RP achieves consistent noise estimation and an expected risk equivalent to learning
with uncorrupted labels in ideal conditions, and derive closed-form solutions
when conditions are non-ideal. RP achieves state-of-the-art noise estimation
and F1, error, and AUC-PR for both MNIST and CIFAR datasets, regardless of the
amount of noise and performs similarly impressively when a large portion of
training examples are noise drawn from a third distribution. To highlight, RP
with a CNN classifier can predict whether an MNIST digit is a "one" or "not" with
only 0.25% error, and 0.46% error across all digits, even when 50% of positive
examples are mislabeled and 50% of observed positive labels are mislabeled
negative examples.
| Curtis G. Northcutt, Tailin Wu, Isaac L. Chuang | null | 1705.01936 | null | null |
On Identifying Disaster-Related Tweets: Matching-based or
Learning-based? | cs.IR cs.LG | Social media such as tweets are emerging as platforms contributing to
situational awareness during disasters. Information shared on Twitter by both
affected population (e.g., requesting assistance, warning) and those outside
the impact zone (e.g., providing assistance) would help first responders,
decision makers, and the public to understand the situation first-hand.
Effective use of such information requires timely selection and analysis of
tweets that are relevant to a particular disaster. Even though abundant tweets
are promising as a data source, it is challenging to automatically identify
relevant messages since tweets are short and unstructured, resulting in
unsatisfactory classification performance of conventional learning-based
approaches. Thus, we propose a simple yet effective algorithm to identify
relevant messages based on matching keywords and hashtags, and provide a
comparison between matching-based and learning-based approaches. To evaluate
the two approaches, we put them into a framework specifically proposed for
analyzing disaster-related tweets. Analysis results on eleven datasets with
various disaster types show that our technique provides relevant tweets of
higher quality and more interpretable results on sentiment analysis tasks when
compared to the learning-based approach.
| Hien To, Sumeet Agrawal, Seon Ho Kim, Cyrus Shahabi | null | 1705.02009 | null | null |
KATE: K-Competitive Autoencoder for Text | stat.ML cs.LG | Autoencoders have been successful in learning meaningful representations from
image datasets. However, their performance on text datasets has not been widely
studied. Traditional autoencoders tend to learn possibly trivial
representations of text documents due to their confounding properties such as
high-dimensionality, sparsity and power-law word distributions. In this paper,
we propose a novel k-competitive autoencoder, called KATE, for text documents.
Due to the competition between the neurons in the hidden layer, each neuron
becomes specialized in recognizing specific data patterns, and overall the
model can learn meaningful representations of textual data. A comprehensive set
of experiments show that KATE can learn better representations than traditional
autoencoders including denoising, contractive, variational, and k-sparse
autoencoders. Our model also outperforms deep generative models, probabilistic
topic models, and even word representation models (e.g., Word2Vec) in terms of
several downstream tasks such as document classification, regression, and
retrieval.
| Yu Chen and Mohammed J. Zaki | null | 1705.02033 | null | null |
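A simplified sketch of the competition mechanism described above: per sample, only the k hidden neurons with the largest-magnitude activations stay active. KATE as described additionally reallocates the suppressed neurons' "energy" to the winners with an amplification factor, which is omitted here; this is a rough illustration, not the paper's layer.

```python
import numpy as np

def k_competitive(h, k):
    # h: (batch, hidden) activations; keep the k largest-magnitude entries per row.
    out = np.zeros_like(h)
    winners = np.argsort(np.abs(h), axis=1)[:, -k:]
    rows = np.arange(h.shape[0])[:, None]
    out[rows, winners] = h[rows, winners]
    return out
```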
Matrix Completion via Factorizing Polynomials | stat.ML cs.LG | Predicting unobserved entries of a partially observed matrix has found wide
applicability in several areas, such as recommender systems, computational
biology, and computer vision. Many scalable methods with rigorous theoretical
guarantees have been developed for algorithms where the matrix is factored into
low-rank components, and embeddings are learned for the row and column
entities. While there has been recent research on incorporating explicit side
information in the low-rank matrix factorization setting, often implicit
information can be gleaned from the data, via higher-order interactions among
entities. Such implicit information is especially useful in cases where the
data is very sparse, as is often the case in real-world datasets. In this
paper, we design a method to learn embeddings in the context of recommendation
systems, using the observation that higher powers of a graph transition
probability matrix encode the probability that a random walker will hit that
node in a given number of steps. We develop a coordinate descent algorithm to
solve the resulting optimization, that makes explicit computation of the higher
order powers of the matrix redundant, preserving sparsity and making
computations efficient. Experiments on several datasets show that our method,
that can use higher order information, outperforms methods that only use
explicitly available side information, those that use only second-order
implicit information and in some cases, methods based on deep neural networks
as well.
| Vatsal Shah, Nikhil Rao, Weicong Ding | null | 1705.02047 | null | null |
SLDR-DL: A Framework for SLD-Resolution with Deep Learning | cs.AI cs.LG cs.LO | This paper introduces an SLD-resolution technique based on deep learning.
This technique enables neural networks to learn from old and successful
resolution processes and to use learnt experiences to guide new resolution
processes. An implementation of this technique is named SLDR-DL. It includes a
Prolog library of deep feedforward neural networks and some essential functions
of resolution. In the SLDR-DL framework, users can define logical rules in the
form of definite clauses and teach neural networks to use the rules in
reasoning processes.
| Cheng-Hao Cai | null | 1705.0221 | null | null |
Group invariance principles for causal generative models | stat.ML cs.AI cs.LG math.ST stat.TH | The postulate of independence of cause and mechanism (ICM) has recently led
to several new causal discovery algorithms. The interpretation of independence
and the way it is utilized, however, varies across these methods. Our aim in
this paper is to propose a group theoretic framework for ICM to unify and
generalize these approaches. In our setting, the cause-mechanism relationship
is assessed by comparing it against a null hypothesis through the application
of random generic group transformations. We show that the group theoretic view
provides a very general tool to study the structure of data generating
mechanisms with direct applications to machine learning.
| Michel Besserve, Naji Shajarisales, Bernhard Sch\"olkopf and Dominik
Janzing | null | 1705.02212 | null | null |
Detecting Adversarial Samples Using Density Ratio Estimates | cs.LG cs.CV stat.ML | Machine learning models, especially based on deep architectures are used in
everyday applications ranging from self driving cars to medical diagnostics. It
has been shown that such models are dangerously susceptible to adversarial
samples, indistinguishable from real samples to human eye, adversarial samples
lead to incorrect classifications with high confidence. Impact of adversarial
samples is far-reaching and their efficient detection remains an open problem.
We propose to use direct density ratio estimation as an efficient model
agnostic measure to detect adversarial samples. Our proposed method works
equally well with single and multi-channel samples, and with different
adversarial sample generation methods. We also propose a method to use density
ratio estimates for generating adversarial samples with an added constraint of
preserving density ratio.
| Lovedeep Gondara | null | 1705.02224 | null | null |
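One standard way to estimate a density ratio p(x)/q(x), shown here only as an illustration of the general technique: fit a probabilistic classifier separating samples of the two distributions. The paper relies on direct density-ratio estimation, so the particular estimator, features, and decision rule below are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_density_ratio(X_p, X_q):
    # Samples of p get label 1, samples of q get label 0; then
    # p(x)/q(x) ~= P(y=1|x) / P(y=0|x) * (n_q / n_p).
    X = np.vstack([X_p, X_q])
    y = np.concatenate([np.ones(len(X_p)), np.zeros(len(X_q))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    n_p, n_q = len(X_p), len(X_q)

    def ratio(x):
        prob = np.clip(clf.predict_proba(np.atleast_2d(x))[:, 1], 1e-6, 1 - 1e-6)
        return prob / (1.0 - prob) * (n_q / n_p)
    return ratio
```

A detector in this spirit would estimate the ratio between clean reference data and incoming inputs, flagging inputs whose estimated ratio is unusually low.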
Spherical Wards clustering and generalized Voronoi diagrams | cs.LG | Gaussian mixture model is very useful in many practical problems.
Nevertheless, it cannot be directly generalized to non-Euclidean spaces. To
overcome this problem, we present a spherical Gaussian-based clustering approach
for partitioning data sets with respect to an arbitrary dissimilarity measure. The
proposed method is a combination of spherical Cross-Entropy Clustering with a
generalized Wards approach. The algorithm finds the optimal number of clusters
by automatically removing groups which carry no information. Moreover, it is
scale invariant and allows for the formation of spherically shaped clusters of
arbitrary sizes. In order to graphically represent and interpret the results,
the notion of the Voronoi diagram is generalized to non-Euclidean spaces and
applied to the introduced clustering method.
| Marek \'Smieja and Jacek Tabor | null | 1705.02232 | null | null |
Data Readiness Levels | cs.DB cs.AI cs.CY cs.LG | Application of models to data is fraught. Data-generating collaborators often
only have a very basic understanding of the complications of collating,
processing and curating data. Challenges include: poor data collection
practices, missing values, inconvenient storage mechanisms, intellectual
property, security and privacy. All these aspects obstruct the sharing and
interconnection of data, and the eventual interpretation of data through
machine learning or other approaches. In project reporting, a major challenge
is in encapsulating these problems and enabling goals to be built around the
processing of data. Project overruns can occur due to failure to account for
the amount of time required to curate and collate. But to understand these
failures we need to have a common language for assessing the readiness of a
particular data set. This position paper proposes the use of data readiness
levels: it gives a rough outline of three stages of data preparedness and
speculates on how formalisation of these levels into a common language for data
readiness could facilitate project management.
| Neil D. Lawrence | null | 1705.02245 | null | null |
Sequential Attention: A Context-Aware Alignment Function for Machine
Reading | cs.CL cs.LG | In this paper we propose a neural network model with a novel Sequential
Attention layer that extends soft attention by assigning weights to words in an
input sequence in a way that takes into account not just how well that word
matches a query, but how well surrounding words match. We evaluate this
approach on the task of reading comprehension (on the Who did What and CNN
datasets) and show that it dramatically improves a strong baseline--the
Stanford Reader--and is competitive with the state of the art.
| Sebastian Brarda, Philip Yeres, Samuel R. Bowman | null | 1705.02269 | null | null |
Analysis and Design of Convolutional Networks via Hierarchical Tensor
Decompositions | cs.LG cs.NE | The driving force behind convolutional networks - the most successful deep
learning architecture to date, is their expressive power. Despite its wide
acceptance and vast empirical evidence, formal analyses supporting this belief
are scarce. The primary notions for formally reasoning about expressiveness are
efficiency and inductive bias. Expressive efficiency refers to the ability of a
network architecture to realize functions that require an alternative
architecture to be much larger. Inductive bias refers to the prioritization of
some functions over others given prior knowledge regarding a task at hand. In
this paper we overview a series of works written by the authors, that through
an equivalence to hierarchical tensor decompositions, analyze the expressive
efficiency and inductive bias of various convolutional network architectural
features (depth, width, strides and more). The results presented shed light on
the demonstrated effectiveness of convolutional networks, and in addition,
provide new tools for network design.
| Nadav Cohen, Or Sharir, Yoav Levine, Ronen Tamari, David Yakira, Amnon
Shashua | null | 1705.02302 | null | null |
A Time-Vertex Signal Processing Framework | cs.LG | An emerging way to deal with high-dimensional non-Euclidean data is to assume
that the underlying structure can be captured by a graph. Recently, ideas have
begun to emerge related to the analysis of time-varying graph signals. This
work aims to elevate the notion of joint harmonic analysis to a full-fledged
framework denoted as Time-Vertex Signal Processing, that links together the
time-domain signal processing techniques with the new tools of graph signal
processing. This entails three main contributions: (a) We provide a formal
motivation for harmonic time-vertex analysis as an analysis tool for the state
evolution of simple Partial Differential Equations on graphs. (b) We improve
the accuracy of joint filtering operators by up to two orders of magnitude. (c)
Using our joint filters, we construct time-vertex dictionaries analyzing the
different scales and the local time-frequency content of a signal. The utility
of our tools is illustrated in numerous applications and datasets, such as
dynamic mesh denoising and classification, still-video inpainting, and source
localization in seismic events. Our results suggest that joint analysis of
time-vertex signals can bring benefits to regression and learning.
| Francesco Grassi, Andreas Loukas, Nathana\"el Perraudin, Benjamin
Ricaud | null | 1705.02307 | null | null |
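A minimal sketch of the joint transform underlying the framework described above: a graph Fourier transform along the vertex dimension combined with a DFT along the time dimension. Normalisation and conjugation conventions below are assumptions and may differ from the paper's definitions.

```python
import numpy as np

def joint_time_vertex_fourier(X, L):
    # X: (N, T) signal, one row per graph vertex, one column per time step.
    # L: (N, N) symmetric graph Laplacian.
    lam, U = np.linalg.eigh(L)                                # graph Fourier basis
    X_gft = U.T @ X                                           # GFT along vertices
    X_jft = np.fft.fft(X_gft, axis=1) / np.sqrt(X.shape[1])   # DFT along time
    return lam, X_jft
```

Joint filters can then be implemented by reweighting the coefficients as a function of graph frequency and temporal frequency and inverting both transforms.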
Learning Representations of Emotional Speech with Deep Convolutional
Generative Adversarial Networks | cs.CL cs.LG stat.ML | Automatically assessing emotional valence in human speech has historically
been a difficult task for machine learning algorithms. The subtle changes in
the voice of the speaker that are indicative of positive or negative emotional
states are often "overshadowed" by voice characteristics relating to emotional
intensity or emotional activation. In this work we explore a representation
learning approach that automatically derives discriminative representations of
emotional speech. In particular, we investigate two machine learning strategies
to improve classifier performance: (1) utilization of unlabeled data using a
deep convolutional generative adversarial network (DCGAN), and (2) multitask
learning. Within our extensive experiments we leverage a multitask annotated
emotional corpus as well as a large unlabeled meeting corpus (around 100
hours). Our speaker-independent classification experiments show that in
particular the use of unlabeled data in our investigations improves performance
of the classifiers and both fully supervised baseline approaches are
outperformed considerably. We improve the classification of emotional valence
on a discrete 5-point scale to 43.88% and on a 3-point scale to 49.80%, which
is competitive to state-of-the-art performance.
| Jonathan Chang, Stefan Scherer | null | 1705.02394 | null | null |
On Using Active Learning and Self-Training when Mining Performance
Discussions on Stack Overflow | cs.CL cs.HC cs.LG cs.SE | Abundant data is the key to successful machine learning. However, supervised
learning requires annotated data that are often hard to obtain. In a
classification task with limited resources, Active Learning (AL) promises to
guide annotators to examples that bring the most value for a classifier. AL can
be successfully combined with self-training, i.e., extending a training set
with the unlabelled examples for which a classifier is the most certain. We
report our experiences on using AL in a systematic manner to train an SVM
classifier for Stack Overflow posts discussing performance of software
components. We show that the training examples deemed as the most valuable to
the classifier are also the most difficult for humans to annotate. Despite
carefully evolved annotation criteria, we report low inter-rater agreement, but
we also propose mitigation strategies. Finally, based on one annotator's work,
we show that self-training can improve the classification accuracy. We conclude
the paper by discussing implication for future text miners aspiring to use AL
and self-training.
| Markus Borg, Iben Lennerstad, Rasmus Ros, Elizabeth Bjarnason | null | 1705.02395 | null | null |
Max-Pooling Loss Training of Long Short-Term Memory Networks for
Small-Footprint Keyword Spotting | cs.CL cs.LG stat.ML | We propose a max-pooling based loss function for training Long Short-Term
Memory (LSTM) networks for small-footprint keyword spotting (KWS), with low
CPU, memory, and latency requirements. The max-pooling loss training can be
further guided by initializing with a cross-entropy loss trained network. A
posterior smoothing based evaluation approach is employed to measure keyword
spotting performance. Our experimental results show that LSTM models trained
using cross-entropy loss or max-pooling loss outperform a cross-entropy loss
trained baseline feed-forward Deep Neural Network (DNN). In addition,
max-pooling loss trained LSTM with randomly initialized network performs better
compared to cross-entropy loss trained LSTM. Finally, the max-pooling loss
trained LSTM initialized with a cross-entropy pre-trained network shows the
best performance, which yields $67.6\%$ relative reduction compared to baseline
feed-forward DNN in Area Under the Curve (AUC) measure.
| Ming Sun, Anirudh Raju, George Tucker, Sankaran Panchapagesan,
Gengshen Fu, Arindam Mandal, Spyros Matsoukas, Nikko Strom, Shiv Vitaladevuni | 10.1109/SLT.2016.7846306 | 1705.02411 | null | null |
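A hedged sketch of the loss described above, for one utterance of frame-level scores: only the frame with the highest keyword posterior contributes to the cross-entropy for keyword utterances, while background utterances are trained at every frame. Class indices and the exact handling of background frames are assumptions here, not the paper's recipe.

```python
import torch
import torch.nn.functional as F

def max_pooling_loss(logits, is_keyword, keyword_idx=1, background_idx=0):
    # logits: (T, C) frame-level scores produced by the LSTM for one utterance.
    if is_keyword:
        post = F.softmax(logits, dim=-1)[:, keyword_idx]
        t = int(torch.argmax(post))                # frame with max keyword posterior
        return F.cross_entropy(logits[t:t + 1], torch.tensor([keyword_idx]))
    targets = torch.full((logits.shape[0],), background_idx, dtype=torch.long)
    return F.cross_entropy(logits, targets)
```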
A comprehensive study of batch construction strategies for recurrent
neural networks in MXNet | cs.LG stat.ML | In this work we compare different batch construction methods for mini-batch
training of recurrent neural networks. While popular implementations like
TensorFlow and MXNet suggest a bucketing approach to improve the
parallelization capabilities of the recurrent training process, we propose a
simple ordering strategy that arranges the training sequences in a stochastic
alternatingly sorted way. We compare our method to sequence bucketing as well
as various other batch construction strategies on the CHiME-4 noisy speech
recognition corpus. The experiments show that our alternated sorting approach
is able to compete both in training time and recognition performance while
being conceptually simpler to implement.
| Patrick Doetsch, Pavel Golik, Hermann Ney | null | 1705.02414 | null | null |
Analogical Inference for Multi-Relational Embeddings | cs.LG cs.AI cs.CL | Large-scale multi-relational embedding refers to the task of learning the
latent representations for entities and relations in large knowledge graphs. An
effective and scalable solution for this problem is crucial for the true
success of knowledge-based inference in a broad range of applications. This
paper proposes a novel framework for optimizing the latent representations with
respect to the \textit{analogical} properties of the embedded entities and
relations. By formulating the learning objective in a differentiable fashion,
our model enjoys both theoretical power and computational scalability, and
significantly outperformed a large number of representative baseline methods on
benchmark datasets. Furthermore, the model offers an elegant unification of
several well-known methods in multi-relational embedding, which can be proven
to be special instantiations of our framework.
| Hanxiao Liu, Yuexin Wu, Yiming Yang | null | 1705.02426 | null | null |
Nonlinear Information Bottleneck | cs.IT cs.LG math.IT stat.ML | Information bottleneck (IB) is a technique for extracting information in one
random variable $X$ that is relevant for predicting another random variable
$Y$. IB works by encoding $X$ in a compressed "bottleneck" random variable $M$
from which $Y$ can be accurately decoded. However, finding the optimal
bottleneck variable involves a difficult optimization problem, which until
recently has been considered for only two limited cases: discrete $X$ and $Y$
with small state spaces, and continuous $X$ and $Y$ with a Gaussian joint
distribution (in which case optimal encoding and decoding maps are linear). We
propose a method for performing IB on arbitrarily-distributed discrete and/or
continuous $X$ and $Y$, while allowing for nonlinear encoding and decoding
maps. Our approach relies on a novel non-parametric upper bound for mutual
information. We describe how to implement our method using neural networks. We
then show that it achieves better performance than the recently-proposed
"variational IB" method on several real-world datasets.
| Artemy Kolchinsky, Brendan D. Tracey, David H. Wolpert | 10.3390/e21121181 | 1705.02436 | null | null |
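For reference, the underlying objective in its standard Lagrangian form; sign and trade-off conventions vary across papers, so this is a generic statement rather than the paper's exact formulation:

```latex
\[
  \min_{p(m \mid x)} \;\; I(X;M) \;-\; \beta\, I(M;Y), \qquad \beta > 0,
\]
% i.e. the bottleneck variable M should compress X (small I(X;M)) while staying
% predictive of Y (large I(M;Y)); the method above makes this trainable with
% nonlinear neural encoders/decoders via a non-parametric upper bound on I(X;M).
```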
Face Super-Resolution Through Wasserstein GANs | cs.LG stat.ML | Generative adversarial networks (GANs) have received a tremendous amount of
attention in the past few years, and have inspired applications addressing a
wide range of problems. Despite its great potential, GANs are difficult to
train. Recently, a series of papers (Arjovsky & Bottou, 2017a; Arjovsky et al.
2017b; and Gulrajani et al. 2017) proposed using Wasserstein distance as the
training objective and promised easy, stable GAN training across architectures
with minimal hyperparameter tuning. In this paper, we compare the performance
of Wasserstein distance with other training objectives on a variety of GAN
architectures in the context of single image super-resolution. Our results
agree that Wasserstein GAN with gradient penalty (WGAN-GP) provides stable and
converging GAN training and that Wasserstein distance is an effective metric to
gauge training progress.
| Zhimin Chen, Yuguang Tong | null | 1705.02438 | null | null |
Experimental results : Reinforcement Learning of POMDPs using Spectral
Methods | cs.AI cs.LG stat.ML | We propose a new reinforcement learning algorithm for partially observable
Markov decision processes (POMDP) based on spectral decomposition methods.
While spectral methods have been previously employed for consistent learning of
(passive) latent variable models such as hidden Markov models, POMDPs are more
challenging since the learner interacts with the environment and possibly
changes the future observations in the process. We devise a learning algorithm
running through epochs; in each epoch we employ spectral techniques to learn
the POMDP parameters from a trajectory generated by a fixed policy. At the end
of the epoch, an optimization oracle returns the optimal memoryless planning
policy which maximizes the expected reward based on the estimated POMDP model.
We prove an order-optimal regret bound with respect to the optimal memoryless
policy and efficient scaling with respect to the dimensionality of observation
and action spaces.
| Kamyar Azizzadenesheli, Alessandro Lazaric, Animashree Anandkumar | null | 1705.02553 | null | null |
Classification and Representation via Separable Subspaces: Performance
Limits and Algorithms | cs.IT cs.LG math.IT stat.ML | We study the classification performance of Kronecker-structured models in two
asymptotic regimes and develop an algorithm for separable, fast and compact
K-S dictionary learning for better classification and representation of
multidimensional signals by exploiting the structure in the signal. First, we
study the classification performance in terms of diversity order and pairwise
geometry of the subspaces. We derive an exact expression for the diversity
order as a function of the signal and subspace dimensions of a K-S model. Next,
we study the classification capacity, the maximum rate at which the number of
classes can grow as the signal dimension goes to infinity. Then we describe a
fast algorithm for Kronecker-Structured Learning of Discriminative Dictionaries
(K-SLD2). Finally, we evaluate the empirical classification performance of K-S
models for the synthetic data, showing that they agree with the diversity order
analysis. We also evaluate the performance of K-SLD2 on synthetic and
real-world datasets showing that the K-SLD2 balances compact signal
representation and good classification performance.
| Ishan Jindal and Matthew Nokleby | 10.1109/JSTSP.2018.2838549 | 1705.02556 | null | null |
Learning Discriminative Relational Features for Sequence Labeling | cs.LG | Discovering relational structure between input features in sequence labeling
models has shown to improve their accuracy in several problem settings.
However, the search space of relational features is exponential in the number
of basic input features. Consequently, approaches that learn relational
features tend to follow a greedy search strategy. In this paper, we study the
possibility of optimally learning and applying discriminative relational
features for sequence labeling. For learning features derived from inputs at a
particular sequence position, we propose a Hierarchical Kernels-based approach
(referred to as Hierarchical Kernel Learning for Structured Output Spaces -
StructHKL). This approach optimally and efficiently explores the hierarchical
structure of the feature space for problems with structured output spaces such
as sequence labeling. Since the StructHKL approach has limitations in learning
complex relational features derived from inputs at relative positions, we
propose two solutions to learn relational features namely, (i) enumerating
simple component features of complex relational features and discovering their
compositions using StructHKL and (ii) leveraging relational kernels, that
compute the similarity between instances implicitly, in the sequence labeling
problem. We perform extensive empirical evaluation on publicly available
datasets and record our observations on settings in which certain approaches
are effective.
| Naveen Nair, Ajay Nagesh, Ganesh Ramakrishnan | null | 1705.02562 | null | null |
A Design Methodology for Efficient Implementation of Deconvolutional
Neural Networks on an FPGA | cs.LG cs.NE | In recent years deep learning algorithms have shown extremely high
performance on machine learning tasks such as image classification and speech
recognition. In support of such applications, various FPGA accelerator
architectures have been proposed for convolutional neural networks (CNNs) that
enable high performance for classification tasks at lower power than CPU and
GPU processors. However, to date, there has been little research on the use of
FPGA implementations of deconvolutional neural networks (DCNNs). DCNNs, also
known as generative CNNs, encode high-dimensional probability distributions and
have been widely used for computer vision applications such as scene
completion, scene segmentation, image creation, image denoising, and
super-resolution imaging. We propose an FPGA architecture for deconvolutional
networks built around an accelerator which effectively handles the complex
memory access patterns needed to perform strided deconvolutions, and that
supports convolution as well. We also develop a three-step design optimization
method that systematically exploits statistical analysis, design space
exploration and VLSI optimization. To verify our FPGA deconvolutional
accelerator design methodology we train DCNNs offline on two representative
datasets using the generative adversarial network method (GAN) run on
Tensorflow, and then map these DCNNs to an FPGA DCNN-plus-accelerator
implementation to perform generative inference on a Xilinx Zynq-7000 FPGA. Our
DCNN implementation achieves a peak performance density of 0.012 GOPs/DSP.
| Xinyu Zhang, Srinjoy Das, Ojash Neopane and Ken Kreutz-Delgado | null | 1705.02583 | null | null |
Learning of Gaussian Processes in Distributed and Communication Limited
Systems | stat.ML cs.IT cs.LG math.IT | It is of fundamental importance to find algorithms obtaining optimal
performance for learning of statistical models in distributed and communication
limited systems. Aiming at characterizing the optimal strategies, we consider
learning of Gaussian Processes (GPs) in distributed systems as a pivotal
example. We first address a very basic problem: how many bits are required to
estimate the inner-products of Gaussian vectors across distributed machines?
Using information theoretic bounds, we obtain an optimal solution for the
problem which is based on vector quantization. Two suboptimal and more
practical schemes are also presented as substitutes for the vector quantization
scheme. In particular, it is shown that the performance of one of the practical
schemes which is called per-symbol quantization is very close to the optimal
one. Schemes provided for the inner-product calculations are incorporated into
our proposed distributed learning methods for GPs. Experimental results show
that by spending a few bits per symbol in our communication scheme, our
proposed methods outperform previous zero rate distributed GP learning schemes
such as Bayesian Committee Model (BCM) and Product of experts (PoE).
| Mostafa Tavassolipour, Seyed Abolfazl Motahari, Mohammad-Taghi Manzuri
Shalmani | null | 1705.02627 | null | null |
TrajectoryNet: An Embedded GPS Trajectory Representation for Point-based
Classification Using Recurrent Neural Networks | cs.CV cs.AI cs.LG | Understanding and discovering knowledge from GPS (Global Positioning System)
traces of human activities is an essential topic in mobility-based urban
computing. We propose TrajectoryNet-a neural network architecture for
point-based trajectory classification to infer real world human transportation
modes from GPS traces. To overcome the challenge of capturing the underlying
latent factors in the low-dimensional and heterogeneous feature space imposed
by GPS data, we develop a novel representation that embeds the original feature
space into another space that can be understood as a form of basis expansion.
We also enrich the feature space via segment-based information and use Maxout
activations to improve the predictive power of Recurrent Neural Networks
(RNNs). We achieve over 98% classification accuracy when detecting four types
of transportation modes, outperforming existing models without additional
sensory data or location-based prior knowledge.
| Xiang Jiang, Erico N de Souza, Ahmad Pesaranghader, Baifan Hu, Daniel
L. Silver and Stan Matwin | null | 1705.02636 | null | null |
DropIn: Making Reservoir Computing Neural Networks Robust to Missing
Inputs by Dropout | cs.LG cs.NE stat.ML | The paper presents a novel, principled approach to train recurrent neural
networks from the Reservoir Computing family that are robust to missing part of
the input features at prediction time. By building on the ensembling properties
of Dropout regularization, we propose a methodology, named DropIn, which
efficiently trains a neural model as a committee machine of subnetworks, each
capable of predicting with a subset of the original input features. We discuss
the application of the DropIn methodology in the context of Reservoir Computing
models and targeting applications characterized by input sources that are
unreliable or prone to be disconnected, such as in pervasive wireless sensor
networks and ambient intelligence. We provide an experimental assessment using
real-world data from such application domains, showing how the DropIn
methodology allows to maintain predictive performances comparable to those of a
model without missing features, even when 20\%-50\% of the inputs are not
available.
| Davide Bacciu and Francesco Crecchi and Davide Morelli | null | 1705.02643 | null | null |
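A minimal reading of the training-time idea above: whole input features are randomly zeroed during training so the model implicitly becomes a committee of subnetworks, each predicting from a subset of inputs. The drop probability and masking granularity below are illustrative assumptions, not the authors' exact recipe.

```python
import numpy as np

def dropin_corrupt(batch, drop_prob=0.3, rng=None):
    # batch: (n_samples, n_features); zero a random subset of input features
    # per sample, mimicking disconnected or unreliable sensors.
    rng = np.random.default_rng() if rng is None else rng
    keep = rng.random(batch.shape) >= drop_prob
    return batch * keep
```

At prediction time, genuinely missing inputs can then be fed as zeros, matching the corruption pattern seen during training.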
Metacontrol for Adaptive Imagination-Based Optimization | cs.LG cs.AI | Many machine learning systems are built to solve the hardest examples of a
particular task, which often makes them large and expensive to run---especially
with respect to the easier examples, which might require much less computation.
For an agent with a limited computational budget, this "one-size-fits-all"
approach may result in the agent wasting valuable computation on easy examples,
while not spending enough on hard examples. Rather than learning a single,
fixed policy for solving all instances of a task, we introduce a metacontroller
which learns to optimize a sequence of "imagined" internal simulations over
predictive models of the world in order to construct a more informed, and more
economical, solution. The metacontroller component is a model-free
reinforcement learning agent, which decides both how many iterations of the
optimization procedure to run, as well as which model to consult on each
iteration. The models (which we call "experts") can be state transition models,
action-value functions, or any other mechanism that provides information useful
for solving the task, and can be learned on-policy or off-policy in parallel
with the metacontroller. When the metacontroller, controller, and experts were
trained with "interaction networks" (Battaglia et al., 2016) as expert models,
our approach was able to solve a challenging decision-making problem under
complex non-linear dynamics. The metacontroller learned to adapt the amount of
computation it performed to the difficulty of the task, and learned how to
choose which experts to consult by factoring in both their reliability and
individual computational resource costs. This allowed the metacontroller to
achieve a lower overall cost (task loss plus computational cost) than more
traditional fixed policy approaches. These results demonstrate that our
approach is a powerful framework for using rich forward models for efficient
model-based reinforcement learning.
| Jessica B. Hamrick, Andrew J. Ballard, Razvan Pascanu, Oriol Vinyals,
Nicolas Heess, Peter W. Battaglia | null | 1705.0267 | null | null |
Finding Bottlenecks: Predicting Student Attrition with Unsupervised
Classifier | stat.ML cs.AI cs.CY cs.LG stat.AP | With pressure to increase graduation rates and reduce time to degree in
higher education, it is important to identify at-risk students early. Automated
early warning systems are therefore highly desirable. In this paper, we use
unsupervised clustering techniques to predict the graduation status of declared
majors in five departments at California State University Northridge (CSUN),
based on a minimal number of lower division courses in each major. In addition,
we use the detected clusters to identify hidden bottleneck courses.
| Seyed Sajjadi, Bruce Shapiro, Christopher McKinlay, Allen Sarkisyan,
Carol Shubin, Efunwande Osoba | null | 1705.02687 | null | null |
Multimodal Affect Analysis for Product Feedback Assessment | cs.HC cs.AI cs.CV cs.LG | Consumers often react expressively to products such as food samples, perfume,
jewelry, sunglasses, and clothing accessories. This research discusses a
multimodal affect recognition system developed to classify whether a consumer
likes or dislikes a product tested at a counter or kiosk, by analyzing the
consumer's facial expression, body posture, hand gestures, and voice after
testing the product. A depth-capable camera and microphone system - Kinect for
Windows - is utilized. An emotion identification engine has been developed to
analyze the images and voice to determine the affective state of the customer. The
image is segmented using skin color and adaptive threshold. Face, body and
hands are detected using the Haar cascade classifier. Canny edges are
identified and the lip, body and hand contours are extracted using spatial
filtering. Edge count and orientation around the mouth, cheeks, eyes,
shoulders, fingers and the location of the edges are used as features.
Classification is done by an emotion template mapping algorithm and training a
classifier using support vector machines. The real-time performance, accuracy
and feasibility for multimodal affect recognition in feedback assessment are
evaluated.
| Amol S Patwardhan and Gerald M Knapp | null | 1705.02694 | null | null |
Spatiotemporal Recurrent Convolutional Networks for Traffic Prediction
in Transportation Networks | cs.LG | Predicting large-scale transportation network traffic has become an important
and challenging topic in recent decades. Inspired by the domain knowledge of
motion prediction, in which the future motion of an object can be predicted
based on previous scenes, we propose a network grid representation method that
can retain the fine-scale structure of a transportation network. Network-wide
traffic speeds are converted into a series of static images and input into a
novel deep architecture, namely, spatiotemporal recurrent convolutional
networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the
advantages of deep convolutional neural networks (DCNNs) and long short-term
memory (LSTM) neural networks. The spatial dependencies of network-wide traffic
can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An
experiment on a Beijing transportation network with 278 links demonstrates that
SRCNs outperform other deep learning-based algorithms in both short-term and
long-term traffic prediction.
| Haiyang Yu, Zhihai Wu, Shuqin Wang, Yunpeng Wang, Xiaolei Ma | null | 1705.02699 | null | null |
MIDA: Multiple Imputation using Denoising Autoencoders | cs.LG stat.ML | Missing data is a significant problem impacting all domains. The state-of-the-art
framework for minimizing missing data bias is multiple imputation, for which
the choice of an imputation model remains nontrivial. We propose a multiple
imputation model based on overcomplete deep denoising autoencoders. Our
proposed model is capable of handling different data types, missingness
patterns, missingness proportions and distributions. Evaluation on several real
life datasets shows that our proposed model significantly outperforms current
state-of-the-art methods under varying conditions while simultaneously
improving end of the line analytics.
| Lovedeep Gondara, Ke Wang | null | 1705.02737 | null | null |
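A compact sketch in the spirit of the abstract above: an overcomplete denoising autoencoder trained to reconstruct data from corrupted inputs, then used to fill missing cells. Layer sizes, the dropout-based corruption, and the multiple-imputation strategy are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DenoisingImputer(nn.Module):
    def __init__(self, n_features, hidden=None):
        super().__init__()
        hidden = hidden or 2 * n_features          # overcomplete hidden layer
        self.net = nn.Sequential(
            nn.Dropout(0.5),                        # input corruption during training
            nn.Linear(n_features, hidden), nn.Tanh(),
            nn.Linear(hidden, n_features),
        )

    def forward(self, x):
        return self.net(x)

def train_imputer(model, X, mask, epochs=200, lr=1e-3):
    # X: (n, d) tensor with missing entries filled by 0; mask: 1 where observed.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        recon = model(X)
        loss = (((recon - X) ** 2) * mask).sum() / mask.sum()  # observed cells only
        loss.backward()
        opt.step()
    return model
```

Multiple imputations can be drawn by keeping dropout active (model.train()) and running several stochastic forward passes, filling the missing cells with each reconstruction.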
Deep Descriptor Transforming for Image Co-Localization | cs.CV cs.LG | Reusable model design becomes desirable with the rapid expansion of machine
learning applications. In this paper, we focus on the reusability of
pre-trained deep convolutional models. Specifically, different from treating
pre-trained models as feature extractors, we reveal more treasures beneath
convolutional layers, i.e., the convolutional activations could act as a
detector for the common object in the image co-localization problem. We propose
a simple but effective method, named Deep Descriptor Transforming (DDT), for
evaluating the correlations of descriptors and then obtaining the
category-consistent regions, which can accurately locate the common object in a
set of images. Empirical studies validate the effectiveness of the proposed DDT
method. On benchmark image co-localization datasets, DDT consistently
outperforms existing state-of-the-art methods by a large margin. Moreover, DDT
also demonstrates good generalization ability for unseen categories and
robustness for dealing with noisy data.
| Xiu-Shen Wei, Chen-Lin Zhang, Yao Li, Chen-Wei Xie, Jianxin Wu,
Chunhua Shen, Zhi-Hua Zhou | null | 1705.02758 | null | null |
Graph Embedding Techniques, Applications, and Performance: A Survey | cs.SI cs.LG physics.data-an | Graphs, such as social networks, word co-occurrence networks, and
communication networks, occur naturally in various real-world applications.
Analyzing them yields insight into the structure of society, language, and
different patterns of communication. Many approaches have been proposed to
perform the analysis. Recently, methods which use the representation of graph
nodes in vector space have gained traction from the research community. In this
survey, we provide a comprehensive and structured analysis of various graph
embedding techniques proposed in the literature. We first introduce the
embedding task and its challenges such as scalability, choice of
dimensionality, and features to be preserved, and their possible solutions. We
then present three categories of approaches based on factorization methods,
random walks, and deep learning, with examples of representative algorithms in
each category and analysis of their performance on various tasks. We evaluate
these state-of-the-art methods on a few common datasets and compare their
performance against one another. Our analysis concludes by suggesting some
potential applications and future directions. We finally present the
open-source Python library we developed, named GEM (Graph Embedding Methods,
available at https://github.com/palash1992/GEM), which provides all presented
algorithms within a unified interface to foster and facilitate research on the
topic.
| Palash Goyal, Emilio Ferrara | 10.1016/j.knosys.2018.03.022 | 1705.02801 | null | null |
Geometry and Dynamics for Markov Chain Monte Carlo | stat.CO cs.LG hep-lat math.NA stat.ML | Markov Chain Monte Carlo methods have revolutionised mathematical computation
and enabled statistical inference within many previously intractable models. In
this context, Hamiltonian dynamics have been proposed as an efficient way of
building chains which can explore probability densities efficiently. The method
emerges from physics and geometry and these links have been extensively studied
by a series of authors through the last thirty years. However, there is
currently a gap between the intuitions and knowledge of users of the
methodology and our deep understanding of these theoretical foundations. The
aim of this review is to provide a comprehensive introduction to the geometric
tools used in Hamiltonian Monte Carlo at a level accessible to statisticians,
machine learners and other users of the methodology with only a basic
understanding of Monte Carlo methods. This will be complemented with some
discussion of the most recent advances in the field which we believe will
become increasingly relevant to applied scientists.
| Alessandro Barp, Francois-Xavier Briol, Anthony D. Kennedy, Mark
Girolami | null | 1705.02891 | null | null |
Geometric GAN | stat.ML cond-mat.dis-nn cs.AI cs.CV cs.LG | Generative Adversarial Nets (GANs) represent an important milestone for
effective generative models, which has inspired numerous variants seemingly
different from each other. One of the main contributions of this paper is to
reveal a unified geometric structure in GAN and its variants. Specifically, we
show that the adversarial generative model training can be decomposed into
three geometric steps: separating hyperplane search, discriminator parameter
update away from the separating hyperplane, and the generator update along the
normal vector direction of the separating hyperplane. This geometric intuition
reveals the limitations of the existing approaches and leads us to propose a
new formulation called geometric GAN using SVM separating hyperplane that
maximizes the margin. Our theoretical analysis shows that the geometric GAN
converges to a Nash equilibrium between the discriminator and generator. In
addition, extensive numerical results show the superior performance of
geometric GAN.
| Jae Hyun Lim and Jong Chul Ye | null | 1705.02894 | null | null |
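In practice, the SVM-hyperplane view described above leads to margin (hinge) losses for the two players; a hedged sketch of that reading, with D(x) denoting the raw discriminator score:

```python
import torch.nn.functional as F

def discriminator_loss(d_real, d_fake):
    # push real scores above the +1 margin and fake scores below the -1 margin
    return F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()

def generator_loss(d_fake):
    # move generated samples along the normal direction of the separating hyperplane
    return -d_fake.mean()
```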
Machine Learning with World Knowledge: The Position and Survey | cs.AI cs.LG stat.ML | Machine learning has become pervasive in multiple domains, impacting a wide
variety of applications, such as knowledge discovery and data mining, natural
language processing, information retrieval, computer vision, social and health
informatics, ubiquitous computing, etc. Two essential problems of machine
learning are how to generate features and how to acquire labels for machines to
learn. Particularly, labeling large amounts of data for each domain-specific
problem can be very time consuming and costly. It has become a key obstacle in
making learning protocols realistic in applications. In this paper, we will
discuss how to use the existing general-purpose world knowledge to enhance
machine learning processes, by enriching the features or reducing the labeling
work. We start from the comparison of world knowledge with domain-specific
knowledge, and then introduce three key problems in using world knowledge in
learning processes, i.e., explicit and implicit feature representation,
inference for knowledge linking and disambiguation, and learning with direct or
indirect supervision. Finally we discuss the future directions of this research
topic.
| Yangqiu Song and Dan Roth | null | 1705.02908 | null | null |
Cross-label Suppression: A Discriminative and Fast Dictionary Learning
with Group Regularization | cs.LG cs.CV stat.ML | This paper addresses image classification through learning a compact and
discriminative dictionary efficiently. Given a structured dictionary with each
atom (columns in the dictionary matrix) related to some label, we propose
cross-label suppression constraint to enlarge the difference among
representations for different classes. Meanwhile, we introduce group
regularization to enforce representations to preserve label properties of
original samples, meaning the representations for the same class are encouraged
to be similar. Upon the cross-label suppression, we don't resort to
frequently-used $\ell_0$-norm or $\ell_1$-norm for coding, and obtain
computational efficiency without losing the discriminative power for
categorization. Moreover, two simple classification schemes are also developed
to take full advantage of the learnt dictionary. Extensive experiments on six
data sets including face recognition, object categorization, scene
classification, texture recognition and sport action categorization are
conducted, and the results show that the proposed approach can outperform lots
of recently presented dictionary algorithms on both recognition accuracy and
computational efficiency.
| Xiudong Wang and Yuantao Gu | 10.1109/TIP.2017.2703101 | 1705.02928 | null | null |
Non-negative Matrix Factorization via Archetypal Analysis | stat.ML cs.LG | Given a collection of data points, non-negative matrix factorization (NMF)
suggests to express them as convex combinations of a small set of `archetypes'
with non-negative entries. This decomposition is unique only if the true
archetypes are non-negative and sufficiently sparse (or the weights are
sufficiently sparse), a regime that is captured by the separability condition
and its generalizations.
In this paper, we study an approach to NMF that can be traced back to the
work of Cutler and Breiman (1994) and does not require the data to be
separable, while providing a generally unique decomposition. We optimize the
trade-off between two objectives: we minimize the distance of the data points
from the convex envelope of the archetypes (which can be interpreted as an
empirical risk), while minimizing the distance of the archetypes from the
convex envelope of the data (which can be interpreted as a data-dependent
regularization). The archetypal analysis method of (Cutler, Breiman, 1994) is
recovered as the limiting case in which the last term is given infinite weight.
We introduce a `uniqueness condition' on the data which is necessary for
exactly recovering the archetypes from noiseless data. We prove that, under
uniqueness (plus additional regularity conditions on the geometry of the
archetypes), our estimator is robust. While our approach requires solving a
non-convex optimization problem, we find that standard optimization methods
succeed in finding good solutions both for real and synthetic data.
| Hamid Javadi and Andrea Montanari | null | 1705.02994 | null | null |
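The trade-off described above can be made concrete with a small numpy sketch: projected gradient descent on ||X - WH||^2 + lam ||H - BX||^2, where the rows of W and B lie on the probability simplex, so WH projects the data onto the convex envelope of the archetypes H and BX keeps the archetypes near the convex envelope of the data. The squared-Frobenius loss, step size, and alternating scheme are illustrative assumptions, not the estimator or optimizer analyzed in the paper.

```python
import numpy as np

def simplex_project(V):
    """Project each row of V onto the probability simplex (Duchi et al., 2008)."""
    u = np.sort(V, axis=1)[:, ::-1]
    css = np.cumsum(u, axis=1) - 1.0
    idx = np.arange(1, V.shape[1] + 1)
    rho = ((u - css / idx) > 0).sum(axis=1)
    theta = css[np.arange(V.shape[0]), rho - 1] / rho
    return np.maximum(V - theta[:, None], 0.0)

def archetypal_nmf(X, k, lam=0.1, step=1e-3, n_iter=2000, seed=0):
    """Minimize ||X - W H||^2 + lam ||H - B X||^2 with rows of W, B on the simplex."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = simplex_project(rng.random((n, k)))
    B = simplex_project(rng.random((k, n)))
    H = B @ X
    for _ in range(n_iter):
        W = simplex_project(W - step * 2.0 * (W @ H - X) @ H.T)
        B = simplex_project(B - step * 2.0 * lam * (B @ X - H) @ X.T)
        H = H - step * (2.0 * W.T @ (W @ H - X) + 2.0 * lam * (H - B @ X))
    return W, H

# Toy usage: data that really are convex mixtures of 3 non-negative archetypes.
rng = np.random.default_rng(1)
true_H = rng.random((3, 5))
X = simplex_project(rng.random((200, 3))) @ true_H
W, H = archetypal_nmf(X, k=3)
```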
Geometry of Optimization and Implicit Regularization in Deep Learning | cs.LG | We argue that optimization plays a crucial role in the generalization of deep
learning models through implicit regularization. We do this by demonstrating
that generalization ability is not controlled by network size but rather by
some other implicit control. We then demonstrate how changing the empirical
optimization procedure can improve generalization, even if actual optimization
quality is not affected. We do so by studying the geometry of the parameter
space of deep networks, and devising an optimization algorithm attuned to this
geometry.
| Behnam Neyshabur, Ryota Tomioka, Ruslan Salakhutdinov, Nathan Srebro | null | 1705.03071 | null | null |
Phonetic Temporal Neural Model for Language Identification | cs.CL cs.LG cs.NE | Deep neural models, particularly the LSTM-RNN model, have shown great
potential for language identification (LID). However, the use of phonetic
information has been largely overlooked by most existing neural LID methods,
although this information has been used very successfully in conventional
phonetic LID systems. We present a phonetic temporal neural model for LID,
which is an LSTM-RNN LID system that accepts phonetic features produced by a
phone-discriminative DNN as the input, rather than raw acoustic features. This
new model is similar to traditional phonetic LID methods, but the phonetic
knowledge here is much richer: it is at the frame level and involves compacted
information of all phones. Our experiments conducted on the Babel database and
the AP16-OLR database demonstrate that the temporal phonetic neural approach is
very effective, and significantly outperforms existing acoustic neural models.
It also outperforms the conventional i-vector approach on short utterances and
in noisy conditions.
| Zhiyuan Tang, Dong Wang, Yixiang Chen, Lantian Li and Andrew Abel | null | 1705.03151 | null | null |
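The phonetic temporal idea above can be sketched as an LSTM-RNN classifier that consumes frame-level phonetic features (for example, bottleneck outputs of a phone-discriminative DNN) instead of raw acoustic features. The PyTorch sketch below is a minimal illustration under assumed layer sizes and frame averaging; it is not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class PhoneticTemporalLID(nn.Module):
    """LSTM-RNN language-identification model over frame-level phonetic features."""
    def __init__(self, phonetic_dim=40, hidden_dim=256, num_languages=10):
        super().__init__()
        self.lstm = nn.LSTM(phonetic_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_languages)

    def forward(self, phonetic_feats):           # (batch, frames, phonetic_dim)
        h, _ = self.lstm(phonetic_feats)
        return self.out(h.mean(dim=1))           # average frames -> utterance-level logits
```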
Phone-aware Neural Language Identification | cs.CL cs.LG cs.NE | Pure acoustic neural models, particularly the LSTM-RNN model, have shown
great potential in language identification (LID). However, the phonetic
information has been largely overlooked by most existing neural LID models,
although this information has been used in conventional phonetic LID
systems with great success. We present a phone-aware neural LID architecture,
which is a deep LSTM-RNN LID system but accepts output from an RNN-based ASR
system. By utilizing the phonetic knowledge, the LID performance can be
significantly improved. Interestingly, even if the test language is not
involved in the ASR training, the phonetic knowledge still makes a large
contribution. Our experiments conducted on four languages within the Babel
corpus demonstrated that the phone-aware approach is highly effective.
| Zhiyuan Tang, Dong Wang, Yixiang Chen, Ying Shi, Lantian Li | null | 1705.03152 | null | null |
Improving drug sensitivity predictions in precision medicine through
active expert knowledge elicitation | cs.AI cs.HC cs.LG stat.ML | Predicting the efficacy of a drug for a given individual, using
high-dimensional genomic measurements, is at the core of precision medicine.
However, identifying features on which to base the predictions remains a
challenge, especially when the sample size is small. Incorporating expert
knowledge offers a promising alternative to improve a prediction model, but
collecting such knowledge is laborious to the expert if the number of candidate
features is very large. We introduce a probabilistic model that can incorporate
expert feedback about the impact of genomic measurements on the sensitivity of
a cancer cell to a given drug. We also present two methods to intelligently
collect this feedback from the expert, using experimental design and
multi-armed bandit models. In a multiple myeloma blood cancer data set (n=51),
expert knowledge decreased the prediction error by 8%. Furthermore, the
intelligent approaches can be used to reduce the workload of feedback
collection to less than 30% on average compared to a naive approach.
| Iiris Sundin, Tomi Peltola, Muntasir Mamun Majumder, Pedram Daee,
Marta Soare, Homayun Afrabandpey, Caroline Heckman, Samuel Kaski and Pekka
Marttinen | 10.1093/bioinformatics/bty257 | 1705.0329 | null | null |
MotifMark: Finding Regulatory Motifs in DNA Sequences | q-bio.QM cs.LG q-bio.GN | The interaction between proteins and DNA is a key driving force in a
significant number of biological processes such as transcriptional regulation,
repair, recombination, splicing, and DNA modification. The identification of
DNA-binding sites and the specificity of target proteins in binding to these
regions are two important steps in understanding the mechanisms of these
biological activities. A number of high-throughput technologies have recently
emerged that try to quantify the affinity between proteins and DNA motifs.
Despite their success, these technologies have their own limitations and fall
short in precise characterization of motifs, and as a result, require further
downstream analysis to extract useful and interpretable information from a
haystack of noisy and inaccurate data. Here we propose MotifMark, a new
algorithm based on graph theory and machine learning, that can find binding
sites on candidate probes and rank their specificity in regard to the
underlying transcription factor. We developed a pipeline to analyze
experimental data derived from compact universal protein binding microarrays
and benchmarked it against two of the most accurate motif search methods. Our
results indicate that MotifMark can be a viable alternative technique for
predicting motifs from protein binding microarrays and possibly other related
high-throughput techniques.
| Hamid Reza Hassanzadeh, Pushkar Kolhe, Charles L. Isbell, May D. Wang | null | 1705.03321 | null | null |
Stable Architectures for Deep Neural Networks | cs.LG cs.NA math.NA math.OC | Deep neural networks have become invaluable tools for supervised machine
learning, e.g., classification of text or images. While often offering superior
results over traditional techniques and successfully expressing complicated
patterns in data, deep architectures are known to be challenging to design and
train such that they generalize well to new data. Important issues with deep
architectures are numerical instabilities in derivative-based learning
algorithms commonly called exploding or vanishing gradients. In this paper we
propose new forward propagation techniques inspired by systems of Ordinary
Differential Equations (ODE) that overcome this challenge and lead to
well-posed learning problems for arbitrarily deep networks.
The backbone of our approach is our interpretation of deep learning as a
parameter estimation problem of nonlinear dynamical systems. Given this
formulation, we analyze stability and well-posedness of deep learning and use
this new understanding to develop new network architectures. We relate the
exploding and vanishing gradient phenomenon to the stability of the discrete
ODE and present several strategies for stabilizing deep learning for very deep
networks. While our new architectures restrict the solution space, several
numerical experiments show their competitiveness with state-of-the-art
networks.
| Eldad Haber and Lars Ruthotto | 10.1088/1361-6420/aa9a90 | 1705.03341 | null | null |
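To make the ODE view above concrete, the numpy sketch below treats a residual forward pass as a forward-Euler discretization of dY/dt = act(Y K(t) + b(t)), and shows one commonly cited way to keep such dynamics stable: an (approximately) antisymmetric layer matrix whose eigenvalues have non-positive real part. Function names, the damping parameter `gamma`, and the random toy parameters are illustrative assumptions, not the paper's exact architectures.

```python
import numpy as np

def forward_euler_net(Y, Ks, bs, h=0.1, act=np.tanh):
    """Residual forward propagation Y_{j+1} = Y_j + h * act(Y_j K_j + b_j),
    i.e. a forward-Euler discretization of the ODE dY/dt = act(Y K(t) + b(t))."""
    for K, b in zip(Ks, bs):
        Y = Y + h * act(Y @ K + b)
    return Y

def antisymmetric(W, gamma=1e-2):
    """Antisymmetric (plus small damping) parameterization: eigenvalues of the
    resulting matrix have non-positive real part, which keeps the forward
    dynamics from exploding."""
    return 0.5 * (W - W.T) - gamma * np.eye(W.shape[0])

# Hypothetical usage: a 10-layer, 4-feature network with random parameters.
rng = np.random.default_rng(0)
Ks = [antisymmetric(rng.standard_normal((4, 4))) for _ in range(10)]
bs = [np.zeros(4) for _ in range(10)]
Y_out = forward_euler_net(rng.standard_normal((32, 4)), Ks, bs)
```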
Generative Adversarial Trainer: Defense to Adversarial Perturbations
with GAN | cs.LG stat.ML | We propose a novel technique to make neural network robust to adversarial
examples using a generative adversarial network. We alternately train both
classifier and generator networks. The generator network generates an
adversarial perturbation that can easily fool the classifier network by using a
gradient of each image. Simultaneously, the classifier network is trained to
classify correctly both original and adversarial images generated by the
generator. These procedures help the classifier network to become more robust
to adversarial perturbations. Furthermore, our adversarial training framework
efficiently reduces overfitting and outperforms other regularization methods
such as Dropout. We applied our method to supervised learning on the CIFAR
datasets, and experimental results show that our method significantly lowers
the generalization error of the network. To the best of our knowledge, this is
the first method which uses GAN to improve supervised learning.
| Hyeungill Lee, Sungyeob Han, Jungwoo Lee | null | 1705.03387 | null | null |
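A rough PyTorch sketch of the alternating scheme described above: the generator maps the classifier's input gradient to a bounded perturbation meant to fool the classifier, and the classifier is then trained on both clean and perturbed images. The perturbation bound `eps`, loss weight `alpha`, tanh bounding, and the generator's interface are assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def gat_training_step(classifier, generator, x, y, opt_c, opt_g, eps=8 / 255, alpha=0.3):
    """One alternating update of the classifier and the perturbation generator."""
    # Gradient of the classification loss w.r.t. the input image.
    x = x.requires_grad_(True)
    grad_x = torch.autograd.grad(F.cross_entropy(classifier(x), y), x)[0].detach()
    x = x.detach()

    # Generator update: produce a bounded perturbation that increases the loss.
    delta = eps * torch.tanh(generator(grad_x))
    loss_g = -F.cross_entropy(classifier(x + delta), y)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

    # Classifier update: classify both clean and perturbed images correctly.
    # (zero_grad also clears any gradients spilled into the classifier above.)
    delta = eps * torch.tanh(generator(grad_x)).detach()
    loss_c = F.cross_entropy(classifier(x), y) + alpha * F.cross_entropy(classifier(x + delta), y)
    opt_c.zero_grad()
    loss_c.backward()
    opt_c.step()
    return loss_c.item()
```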
A Distributed Learning Dynamics in Social Groups | cs.LG cs.CY cs.DC cs.DS | We study a distributed learning process observed in human groups and other
social animals. This learning process appears in settings in which each
individual in a group is trying to decide over time, in a distributed manner,
which option to select among a shared set of options. Specifically, we consider
a stochastic dynamics in a group in which every individual selects an option in
the following two-step process: (1) select a random individual and observe the
option that individual chose in the previous time step, and (2) adopt that
option if its stochastic quality was good at that time step. Various
instantiations of such distributed learning appear in nature, and have also
been studied in the social science literature. From the perspective of an
individual, an attractive feature of this learning process is that it is a
simple heuristic that requires extremely limited computational capacities. But
what does it mean for the group -- could such a simple, distributed and
essentially memoryless process lead the group as a whole to perform optimally?
We show that the answer to this question is yes -- this distributed learning is
highly effective at identifying the best option and is close to optimal for the
group overall. Our analysis also gives quantitative bounds that show fast
convergence of these stochastic dynamics. Prior to our work the only
theoretical work related to such learning dynamics has been either in
deterministic special cases or in the asymptotic setting. Finally, we observe
that our infinite population dynamics is a stochastic variant of the classic
multiplicative weights update (MWU) method. Consequently, we arrive at the
following interesting converse: the learning dynamics on a finite population
considered here can be viewed as a novel distributed and low-memory
implementation of the classic MWU method.
| L. Elisa Celis, Peter M. Krafft, Nisheeth K. Vishnoi | null | 1705.03414 | null | null |
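The two-step dynamics above are easy to simulate directly. The numpy sketch below uses synchronous updates and models "quality was good at that time step" as an independent Bernoulli draw per agent; the function name, population size, and this synchronous Bernoulli reading are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def simulate_dynamics(qualities, n_agents=1000, n_steps=200, seed=0):
    """Copy-and-test dynamics: each agent picks a random peer, observes the option
    that peer held in the previous step, and adopts it if that option's stochastic
    quality succeeded at this step. Returns option frequencies over time."""
    rng = np.random.default_rng(seed)
    qualities = np.asarray(qualities)
    opts = rng.integers(len(qualities), size=n_agents)        # initial options
    history = []
    for _ in range(n_steps):
        peers = rng.integers(n_agents, size=n_agents)         # step 1: pick a random peer
        observed = opts[peers]                                 # option the peer held last step
        good = rng.random(n_agents) < qualities[observed]      # step 2: stochastic quality test
        opts = np.where(good, observed, opts)                  # adopt only if quality was good
        history.append(np.bincount(opts, minlength=len(qualities)) / n_agents)
    return np.array(history)

# Hypothetical usage: the best option (quality 0.9) should come to dominate.
freqs = simulate_dynamics([0.5, 0.7, 0.9])
print(freqs[-1])
```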
Learning Deep Networks from Noisy Labels with Dropout Regularization | cs.CV cs.LG stat.ML | Large datasets often have unreliable labels, such as those obtained from
Amazon's Mechanical Turk or social media platforms, and classifiers trained on
mislabeled datasets often exhibit poor performance. We present a simple,
effective technique for accounting for label noise when training deep neural
networks. We augment a standard deep network with a softmax layer that models
the label noise statistics. Then, we train the deep network and noise model
jointly via end-to-end stochastic gradient descent on the (perhaps mislabeled)
dataset. The augmented model is overdetermined, so in order to encourage the
learning of a non-trivial noise model, we apply dropout regularization to the
weights of the noise model during training. Numerical experiments on noisy
versions of the CIFAR-10 and MNIST datasets show that the proposed dropout
technique outperforms state-of-the-art methods.
| Ishan Jindal, Matthew Nokleby and Xuewen Chen | null | 1705.03419 | null | null |
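A minimal PyTorch sketch of the noise-model idea above: a row-stochastic class-transition layer is stacked on the base classifier's softmax output, dropout is applied to the noise model's weights, and the stack is trained end-to-end against the (possibly mislabeled) targets. The near-identity initialization, dropout rate, and layer interface are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseAdaptationLayer(nn.Module):
    """Label-noise model on top of a base classifier: a learned class-transition
    matrix maps the clean-label distribution to the noisy-label distribution,
    with dropout on its weights to encourage a non-trivial noise model."""
    def __init__(self, num_classes, dropout=0.5):
        super().__init__()
        # Initialized near the identity so training starts from "no noise".
        self.transition = nn.Parameter(torch.eye(num_classes) * 5.0)
        self.dropout = nn.Dropout(dropout)

    def forward(self, clean_log_probs):
        T = F.softmax(self.dropout(self.transition), dim=1)   # row-stochastic transition
        noisy_probs = clean_log_probs.exp() @ T                # P(noisy label | input)
        return torch.log(noisy_probs + 1e-12)

def noisy_label_loss(base_net, noise_layer, x, y_noisy):
    clean_log_probs = F.log_softmax(base_net(x), dim=1)
    return F.nll_loss(noise_layer(clean_log_probs), y_noisy)
```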
Frequentist Consistency of Variational Bayes | stat.ML cs.LG math.ST stat.TH | A key challenge for modern Bayesian statistics is how to perform scalable
inference of posterior distributions. To address this challenge, variational
Bayes (VB) methods have emerged as a popular alternative to the classical
Markov chain Monte Carlo (MCMC) methods. VB methods tend to be faster while
achieving comparable predictive performance. However, there are few theoretical
results around VB. In this paper, we establish frequentist consistency and
asymptotic normality of VB methods. Specifically, we connect VB methods to
point estimates based on variational approximations, called frequentist
variational approximations, and we use the connection to prove a variational
Bernstein-von Mises theorem. The theorem leverages the theoretical
characterizations of frequentist variational approximations to understand
asymptotic properties of VB. In summary, we prove that (1) the VB posterior
converges to the Kullback-Leibler (KL) minimizer of a normal distribution,
centered at the truth and (2) the corresponding variational expectation of the
parameter is consistent and asymptotically normal. As applications of the
theorem, we derive asymptotic properties of VB posteriors in Bayesian mixture
models, Bayesian generalized linear mixed models, and Bayesian stochastic block
models. We conduct a simulation study to illustrate these theoretical results.
| Yixin Wang, David M. Blei | 10.1080/01621459.2018.1473776 | 1705.03439 | null | null |