title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Building Diversified Multiple Trees for Classification in High
Dimensional Noisy Biomedical Data | cs.LG stat.ML | It is common that a trained classification model is applied to operating
data that deviates from the training data because of noise. This paper
demonstrates that an ensemble classifier, Diversified Multiple Tree (DMT), is
more robust in classifying noisy data than other widely used ensemble methods.
DMT is tested on three real world biomedical data sets from different
laboratories in comparison with four benchmark ensemble classifiers.
Experimental results show that DMT is significantly more accurate than other
benchmark ensemble classifiers on noisy test data. We also discuss a limitation
of DMT and its possible variations.
| Jiuyong Li, Lin Liu, Jixue Liu and Ryan Green | null | 1612.05888 | null | null |
Deep Multi-instance Networks with Sparse Label Assignment for Whole
Mammogram Classification | cs.CV cs.LG | Mammogram classification is directly related to computer-aided diagnosis of
breast cancer. Traditional methods require great effort to annotate the
training data through costly manual labeling, and rely on specialized
computational models to detect these annotations at test time. Inspired by the success of using deep
convolutional features for natural image analysis and multi-instance learning
for labeling a set of instances/patches, we propose end-to-end trained deep
multi-instance networks for mass classification based on the whole mammogram,
without the aforementioned costly need to annotate the training data. We
explore three different schemes to construct deep multi-instance networks for
whole mammogram classification. Experimental results on the INbreast dataset
demonstrate the robustness of the proposed deep networks compared to previous
work that uses segmentation and detection annotations during training.
| Wentao Zhu, Qi Lou, Yeeleng Scott Vang, Xiaohui Xie | null | 1612.05968 | null | null |
Adversarial Deep Structural Networks for Mammographic Mass Segmentation | cs.CV cs.LG | Mass segmentation is an important task in mammogram analysis, providing
effective morphological features and regions of interest (ROI) for mass
detection and classification. Inspired by the success of using deep
convolutional features for natural image analysis and conditional random fields
(CRF) for structural learning, we propose an end-to-end network for
mammographic mass segmentation. The network employs a fully convolutional
network (FCN) to model the potential function, followed by a CRF to perform
structural learning. Because the mass distribution varies greatly with pixel
position, the FCN is combined with a position prior for the task. Due to the
small size of mammogram datasets, we use adversarial training to control
over-fitting. Four models with different convolutional kernels are further
fused to improve the segmentation results. Experimental results on two public
datasets, INbreast and DDSM-BCRP, show that our end-to-end network combined
with adversarial training achieves state-of-the-art results.
| Wentao Zhu, Xiang Xiang, Trac D. Tran, Xiaohui Xie | null | 1612.05970 | null | null |
An IoT Endpoint System-on-Chip for Secure and Energy-Efficient
Near-Sensor Analytics | cs.AR cs.CR cs.LG cs.NE | Near-sensor data analytics is a promising direction for IoT endpoints, as it
minimizes energy spent on communication and reduces network load - but it also
poses security concerns, as valuable data is stored or sent over the network at
various stages of the analytics pipeline. Using encryption to protect sensitive
data at the boundary of the on-chip analytics engine is a way to address data
security issues. To cope with the combined workload of analytics and encryption
in a tight power envelope, we propose Fulmine, a System-on-Chip based on a
tightly-coupled multi-core cluster augmented with specialized blocks for
compute-intensive data processing and encryption functions, supporting software
programmability for regular computing tasks. The Fulmine SoC, fabricated in
65nm technology, consumes less than 20mW on average at 0.8V, achieving an
efficiency of up to 70pJ/B in encryption, 50pJ/px in convolution, or up to
25MIPS/mW in software. As a strong argument for real-life flexible application
of our platform, we show experimental results for three secure analytics use
cases: secure autonomous aerial surveillance with a state-of-the-art deep CNN
consuming 3.16pJ per equivalent RISC op; local CNN-based face detection with
secured remote recognition in 5.74pJ/op; and seizure detection with encrypted
data collection from EEG within 12.7pJ/op.
| Francesco Conti, Robert Schilling, Pasquale Davide Schiavone, Antonio
Pullini, Davide Rossi, Frank Kagan G\"urkaynak, Michael Muehlberghuber,
Michael Gautschi, Igor Loi, Germain Haugou, Stefan Mangard, Luca Benini | 10.1109/TCSI.2017.2698019 | 1612.05974 | null | null |
Sample-efficient Deep Reinforcement Learning for Dialog Control | cs.AI cs.LG stat.ML | Representing a dialog policy as a recurrent neural network (RNN) is
attractive because it handles partial observability, infers a latent
representation of state, and can be optimized with supervised learning (SL) or
reinforcement learning (RL). For RL, a policy gradient approach is natural, but
is sample inefficient. In this paper, we present 3 methods for reducing the
number of dialogs required to optimize an RNN-based dialog policy with RL. The
key idea is to maintain a second RNN which predicts the value of the current
policy, and to apply experience replay to both networks. On two tasks, these
methods reduce the number of dialogs/episodes required by about a third, vs.
standard policy gradient methods.
| Kavosh Asadi, Jason D. Williams | null | 1612.06000 | null | null |
Inexact Proximal Gradient Methods for Non-convex and Non-smooth
Optimization | cs.LG stat.ML | In machine learning research, the proximal gradient methods are popular for
solving various optimization problems with non-smooth regularization. Inexact
proximal gradient methods are extremely important when exactly solving the
proximal operator is time-consuming, or the proximal operator does not have an
analytic solution. However, existing inexact proximal gradient methods only
consider convex problems. The knowledge of inexact proximal gradient methods in
the non-convex setting is very limited. Moreover, for some machine learning
models, there is still no proposed solver for exactly solving the proximal
operator. To address this challenge, in this paper, we first propose three
inexact proximal gradient algorithms, including the basic version and
Nesterov's accelerated version. After that, we provide a theoretical analysis
of the basic and Nesterov's accelerated versions. The theoretical results show
that our inexact proximal gradient algorithms can have the same convergence
rates as the ones of exact proximal gradient algorithms in the non-convex
setting.
Finally, we show the applications of our inexact proximal gradient algorithms
on three representative non-convex learning problems. All experimental results
confirm the superiority of our new inexact proximal gradient algorithms.
| Bin Gu and De Wang and Zhouyuan Huo and Heng Huang | null | 1612.06003 | null | null |
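For readers who want the template these methods build on, here is a minimal sketch of a proximal gradient loop in which the proximal operator may be solved only approximately. The L1/soft-thresholding example, the function names, and the step-size choice are illustrative assumptions rather than the paper's algorithms, which target non-convex problems and include a Nesterov-accelerated variant.

```python
import numpy as np

def soft_threshold(v, t):
    # exact proximal operator of t * ||x||_1, used here as the reference prox
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_prox_gradient(grad_f, prox, x0, step, n_iters=200):
    # generic proximal gradient loop; `prox` may be an inexact solver for the
    # proximal operator of the (possibly non-smooth) regularizer
    x = x0
    for _ in range(n_iters):
        x = prox(x - step * grad_f(x), step)
    return x

# Toy usage: least squares with L1 regularization (prox solved exactly here;
# in the inexact setting `prox` would return only an approximate minimizer).
rng = np.random.default_rng(0)
A, b, lam = rng.normal(size=(50, 20)), rng.normal(size=50), 0.1
grad_f = lambda x: A.T @ (A @ x - b)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of grad_f
x_hat = inexact_prox_gradient(grad_f, lambda v, t: soft_threshold(v, lam * t),
                              np.zeros(20), step)
```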
Self-Correcting Models for Model-Based Reinforcement Learning | cs.LG cs.AI | When an agent cannot represent a perfectly accurate model of its
environment's dynamics, model-based reinforcement learning (MBRL) can fail
catastrophically. Planning involves composing the predictions of the model;
when flawed predictions are composed, even minor errors can compound and render
the model useless for planning. Hallucinated Replay (Talvitie 2014) trains the
model to "correct" itself when it produces errors, substantially improving MBRL
with flawed models. This paper theoretically analyzes this approach,
illuminates settings in which it is likely to be effective or ineffective, and
presents a novel error bound, showing that a model's ability to self-correct is
more tightly related to MBRL performance than one-step prediction error. These
results inspire an MBRL algorithm for deterministic MDPs with performance
guarantees that are robust to model class limitations.
| Erik Talvitie | null | 1612.06018 | null | null |
Quantization and Training of Low Bit-Width Convolutional Neural Networks
for Object Detection | cs.LG | We present LBW-Net, an efficient optimization based method for quantization
and training of the low bit-width convolutional neural networks (CNNs).
Specifically, we quantize the weights to zero or powers of two by minimizing
the Euclidean distance between full-precision weights and quantized weights
during backpropagation. We characterize the combinatorial nature of the low
bit-width quantization problem. For 2-bit (ternary) CNNs, the quantization of
$N$ weights can be done by an exact formula in $O(N\log N)$ complexity. When
the bit-width is three and above, we further propose a semi-analytical
thresholding scheme with a single free parameter for quantization that is
computationally inexpensive. The free parameter is further determined by
network retraining and object detection tests. LBW-Net has several desirable
advantages over full-precision CNNs, including considerable memory savings,
energy efficiency, and faster deployment. Our experiments on the PASCAL VOC
dataset show that, compared with its 32-bit floating-point counterpart, the
performance of the 6-bit LBW-Net is nearly lossless in object detection tasks,
and can even be better in some real-world visual scenes, while empirically enjoying
more than 4$\times$ faster deployment.
| Penghang Yin, Shuai Zhang, Yingyong Qi, Jack Xin | null | 1612.06052 | null | null |
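As a rough illustration of the quantization idea, the sketch below projects each weight onto the set {0, ±2^k} by nearest-neighbor rounding in Euclidean distance. This is only an assumed per-weight projection for intuition; LBW-Net itself derives thresholds exactly for the ternary case and semi-analytically for three bits and above.

```python
import numpy as np

def quantize_pow2(w, min_exp=-4, max_exp=0):
    # project each weight onto {0, +/-2^k, k = min_exp..max_exp} by
    # nearest-neighbor rounding in Euclidean distance (naive per-weight
    # projection, for intuition only)
    exps = np.arange(min_exp, max_exp + 1)
    levels = np.concatenate(([0.0], 2.0 ** exps, -(2.0 ** exps)))
    idx = np.argmin(np.abs(w[..., None] - levels), axis=-1)
    return levels[idx]

w = np.random.default_rng(0).normal(scale=0.2, size=(4, 4))
print(quantize_pow2(w))   # entries are 0 or signed powers of two
```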
On Random Weights for Texture Generation in One Layer Neural Networks | cs.CV cs.LG | Recent work in the literature has shown experimentally that one can use the
lower layers of a trained convolutional neural network (CNN) to model natural
textures. More interestingly, it has also been experimentally shown that only
one layer with random filters can also model textures although with less
variability. In this paper we ask why one-layer CNNs with random filters are so
effective at generating textures. We theoretically show that one-layer
convolutional architectures (without a non-linearity), paired with an energy
function used in the previous literature, can in fact preserve
and modulate frequency coefficients in a manner so that random weights and
pretrained weights will generate the same type of images. Based on the results
of this analysis we question whether similar properties hold in the case where
one uses one convolution layer with a non-linearity. We show that in the case
of a ReLU non-linearity there are situations where only one input will give the
minimum possible energy, whereas in the case of no non-linearity there are
always infinitely many solutions that will give the minimum possible energy. Thus we
can show that in certain situations adding a ReLU non-linearity generates less
variable images.
| Mihir Mongia and Kundan Kumar and Akram Erraqabi and Yoshua Bengio | null | 1612.06070 | null | null |
Hierarchical Partitioning of the Output Space in Multi-label Data | stat.ML cs.LG | Hierarchy Of Multi-label classifiers (HOMER) is a multi-label learning
algorithm that breaks the initial learning task into several easier sub-tasks by
first constructing a hierarchy of labels from a given label set and secondly
employing a given base multi-label classifier (MLC) to the resulting
sub-problems. The primary goal is to effectively address class imbalance and
scalability issues that often arise in real-world multi-label classification
problems. In this work, we present the general setup for a HOMER model and a
simple extension of the algorithm that is suited for MLCs that output rankings.
Furthermore, we provide a detailed analysis of the properties of the algorithm,
both from an aspect of effectiveness and computational complexity. A secondary
contribution involves the presentation of a balanced variant of the k-means
algorithm, which serves in the first step of the label hierarchy construction.
We conduct extensive experiments on six real-world datasets, studying
empirically HOMER's parameters and providing examples of instantiations of the
algorithm with different clustering approaches and MLCs. The empirical results
demonstrate a significant improvement over the given base MLC.
| Yannis Papanikolaou, Ioannis Katakis, Grigorios Tsoumakas | null | 1612.06083 | null | null |
A recurrent neural network without chaos | cs.NE cs.CL cs.LG | We introduce an exceptionally simple gated recurrent neural network (RNN)
that achieves performance comparable to well-known gated architectures, such as
LSTMs and GRUs, on the word-level language modeling task. We prove that our
model has simple, predictable and non-chaotic dynamics. This stands in stark
contrast to more standard gated architectures, whose underlying dynamical
systems exhibit chaotic behavior.
| Thomas Laurent and James von Brecht | null | 1612.06212 | null | null |
Corralling a Band of Bandit Algorithms | cs.LG stat.ML | We study the problem of combining multiple bandit algorithms (that is, online
learning algorithms with partial feedback) with the goal of creating a master
algorithm that performs almost as well as the best base algorithm if it were to
be run on its own. The main challenge is that when run with a master, base
algorithms unavoidably receive much less feedback and it is thus critical that
the master not starve a base algorithm that might perform uncompetitively
initially but would eventually outperform others if given enough feedback. We
address this difficulty by devising a version of Online Mirror Descent with a
special mirror map together with a sophisticated learning rate scheme. We show
that this approach manages to achieve a more delicate balance between
exploiting and exploring base algorithms than previous works, yielding superior
regret bounds.
Our results are applicable to many settings, such as multi-armed bandits,
contextual bandits, and convex bandits. As examples, we present two main
applications. The first is to create an algorithm that enjoys worst-case
robustness while at the same time performing much better when the environment
is relatively easy. The second is to create an algorithm that works
simultaneously under different assumptions of the environment, such as
different priors or different loss structures.
| Alekh Agarwal, Haipeng Luo, Behnam Neyshabur and Robert E. Schapire | null | 1612.06246 | null | null |
VAST: The Virtual Acoustic Space Traveler Dataset | cs.SD cs.LG | This paper introduces a new paradigm for sound source localization referred
to as virtual acoustic space traveling (VAST) and presents a first dataset
designed for this purpose. Existing sound source localization methods are
either based on an approximate physical model (physics-driven) or on a
specific-purpose calibration set (data-driven). With VAST, the idea is to learn
a mapping from audio features to desired audio properties using a massive
dataset of simulated room impulse responses. This virtual dataset is designed
to be maximally representative of the potential audio scenes that the
considered system may evolve in, while remaining reasonably compact. We
show that virtually-learned mappings on this dataset generalize to real data,
overcoming some intrinsic limitations of traditional binaural sound
localization methods based on time differences of arrival.
| Cl\'ement Gaultier (PANAMA), Saurabh Kataria (PANAMA, IIT Kanpur),
Antoine Deleforge (PANAMA) | null | 1612.06287 | null | null |
Simple Black-Box Adversarial Perturbations for Deep Networks | cs.LG cs.CR stat.ML | Deep neural networks are powerful and popular learning models that achieve
state-of-the-art pattern recognition performance on many computer vision,
speech, and language processing tasks. However, these networks have also been
shown to be susceptible to carefully crafted adversarial perturbations which force
misclassification of the inputs. Adversarial examples enable adversaries to
subvert the expected system behavior leading to undesired consequences and
could pose a security risk when these systems are deployed in the real world.
In this work, we focus on deep convolutional neural networks and demonstrate
that adversaries can easily craft adversarial examples even without any
internal knowledge of the target network. Our attacks treat the network as an
oracle (black-box) and only assume that the output of the network can be
observed on the probed inputs. Our first attack is based on a simple idea of
adding perturbation to a randomly selected single pixel or a small set of them.
We then improve the effectiveness of this attack by carefully constructing a
small set of pixels to perturb by using the idea of greedy local-search. Our
proposed attacks also naturally extend to a stronger notion of
misclassification. Our extensive experimental results illustrate that even
these elementary attacks can reveal a deep neural network's vulnerabilities.
The simplicity and effectiveness of our proposed schemes mean that they could
serve as a litmus test for designing robust networks.
| Nina Narodytska, Shiva Prasad Kasiviswanathan | null | 1612.06299 | null | null |
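The first attack described above is simple enough to sketch. Below is an assumed minimal version: perturb one randomly chosen pixel per trial and query a black-box `predict` oracle until the label flips. All names and the perturbation budget are illustrative, and the paper's stronger greedy local-search variant is omitted.

```python
import numpy as np

def random_pixel_attack(predict, image, true_label, eps=1.0, n_trials=500, seed=0):
    # black-box attack: perturb one randomly chosen pixel per trial and query
    # the model (`predict` returns a class label) until it misclassifies
    rng = np.random.default_rng(seed)
    h, w, c = image.shape
    for _ in range(n_trials):
        x = image.copy()
        i, j = rng.integers(h), rng.integers(w)
        x[i, j] = np.clip(x[i, j] + rng.choice([-eps, eps], size=c), 0.0, 1.0)
        if predict(x) != true_label:
            return x          # adversarial example found
    return None               # attack failed within the query budget
```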
Computing Human-Understandable Strategies | cs.GT cs.AI cs.LG cs.MA stat.ML | Algorithms for equilibrium computation generally make no attempt to ensure
that the computed strategies are understandable by humans. For instance, the
strategies for the strongest poker agents are represented as massive binary
files. In many situations, we would like to compute strategies that can
actually be implemented by humans, who may have computational limitations and
may only be able to remember a small number of features or components of the
strategies that have been computed. We study poker games where private
information distributions can be arbitrary. We create a large training set of
game instances and solutions, by randomly selecting the information
probabilities, and present algorithms that learn from the training instances in
order to perform well in games with unseen information distributions. We are
able to conclude several new fundamental rules about poker strategy that can be
easily implemented by humans.
| Sam Ganzfried and Farzana Yusuf | null | 1612.06340 | null | null |
Learning Features by Watching Objects Move | cs.CV cs.AI cs.LG cs.NE stat.ML | This paper presents a novel yet intuitive approach to unsupervised feature
learning. Inspired by the human visual system, we explore whether low-level
motion-based grouping cues can be used to learn an effective visual
representation. Specifically, we use unsupervised motion-based segmentation on
videos to obtain segments, which we use as 'pseudo ground truth' to train a
convolutional network to segment objects from a single frame. Given the
extensive evidence that motion plays a key role in the development of the human
visual system, we hope that this straightforward approach to unsupervised
learning will be more effective than cleverly designed 'pretext' tasks studied
in the literature. Indeed, our extensive experiments show that this is the
case. When used for transfer learning on object detection, our representation
significantly outperforms previous unsupervised approaches across multiple
settings, especially when training data for the target task is scarce.
| Deepak Pathak, Ross Girshick, Piotr Doll\'ar, Trevor Darrell, Bharath
Hariharan | null | 1612.06370 | null | null |
Randomized Clustered Nystrom for Large-Scale Kernel Machines | stat.ML cs.LG | The Nystrom method has been popular for generating the low-rank approximation
of kernel matrices that arise in many machine learning problems. The
approximation quality of the Nystrom method depends crucially on the number of
selected landmark points and the selection procedure. In this paper, we present
a novel algorithm to compute the optimal Nystrom low-rank approximation when the
number of landmark points exceeds the target rank. Moreover, we introduce a
randomized algorithm for generating landmark points that is scalable to
large-scale data sets. The proposed method performs K-means clustering on
low-dimensional random projections of a data set and, thus, leads to
significant savings for high-dimensional data sets. Our theoretical results
characterize the tradeoffs between the accuracy and efficiency of our proposed
method. Extensive experiments demonstrate the competitive performance as well
as the efficiency of our proposed method.
| Farhad Pourkamali-Anaraki, Stephen Becker | null | 1612.06470 | null | null |
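A hedged sketch of the overall recipe follows: random projection, then K-means on the projected points, then the standard Nystrom formula K ≈ C W⁺ Cᵀ. Details such as using cluster means as landmarks and an RBF kernel are our assumptions, not necessarily the paper's exact choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def clustered_nystrom(X, n_landmarks, proj_dim, gamma, seed=0):
    rng = np.random.default_rng(seed)
    # K-means on a low-dimensional random projection of the data
    R = rng.normal(size=(X.shape[1], proj_dim)) / np.sqrt(proj_dim)
    km = KMeans(n_clusters=n_landmarks, n_init=5, random_state=seed).fit(X @ R)
    # landmarks: mean of the original-space points in each cluster (assumed)
    L = np.stack([X[km.labels_ == k].mean(axis=0) for k in range(n_landmarks)])
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)
    C, W = rbf(X, L), rbf(L, L)
    return C @ np.linalg.pinv(W) @ C.T   # K_approx ~ C W^+ C^T
```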
Parallelized Tensor Train Learning of Polynomial Classifiers | cs.LG cs.AI | In pattern classification, polynomial classifiers are well-studied methods as
they are capable of generating complex decision surfaces. Unfortunately, the
use of multivariate polynomials is limited to kernels as in support vector
machines, because polynomials quickly become impractical for high-dimensional
problems. In this paper, we effectively overcome the curse of dimensionality by
employing the tensor train format to represent a polynomial classifier. Based
on the structure of tensor trains, two learning algorithms are proposed which
involve solving different optimization problems of low computational
complexity. Furthermore, we show how both regularization to prevent overfitting
and parallelization, which enables the use of large training sets, are
incorporated into these methods. Both the efficiency and efficacy of our
tensor-based polynomial classifier are then demonstrated on the two popular
datasets USPS and MNIST.
| Zhongming Chen, Kim Batselier, Johan A.K. Suykens, Ngai Wong | null | 1612.06505 | null | null |
Exploring the Design Space of Deep Convolutional Neural Networks at
Large Scale | cs.CV cs.LG cs.NE | In recent years, the research community has discovered that deep neural
networks (DNNs) and convolutional neural networks (CNNs) can yield higher
accuracy than all previous solutions to a broad array of machine learning
problems. To our knowledge, there is no single CNN/DNN architecture that solves
all problems optimally. Instead, the "right" CNN/DNN architecture varies
depending on the application at hand. CNN/DNNs comprise an enormous design
space. Quantitatively, we find that a small region of the CNN design space
contains 30 billion different CNN architectures.
In this dissertation, we develop a methodology that enables systematic
exploration of the design space of CNNs. Our methodology comprises the
following four themes.
1. Judiciously choosing benchmarks and metrics.
2. Rapidly training CNN models.
3. Defining and describing the CNN design space.
4. Exploring the design space of CNN architectures.
Taken together, these four themes comprise an effective methodology for
discovering the "right" CNN architectures to meet the needs of practical
applications.
| Forrest Iandola | null | 1612.06519 | null | null |
WoCE: a framework for clustering ensemble by exploiting the wisdom of
Crowds theory | stat.ML cs.LG | The Wisdom of Crowds (WOC), a theory from the social sciences, has found a new
paradigm in computer science. The WOC theory explains that the aggregate
decision made by a group is often better than those of its individual members
if specific conditions are satisfied. This paper presents a novel framework for
unsupervised and semi-supervised cluster ensemble by exploiting the WOC theory.
We employ four conditions in the WOC theory, i.e., diversity, independency,
decentralization and aggregation, to guide both the constructing of individual
clustering results and the final combination for clustering ensemble. Firstly,
the independency criterion, realized as a novel mapping of the raw data set, removes
the correlation between features in our proposed method. Then decentralization,
as a novel mechanism, generates high-quality individual clustering results.
Next, uniformity as a new diversity metric evaluates the generated clustering
results. Further, weighted evidence accumulation clustering method is proposed
for the final aggregation without using thresholding procedure. Experimental
study on varied data sets demonstrates that the proposed approach achieves
superior performance to state-of-the-art methods.
| Muhammad Yousefnezhad, Sheng-Jun Huang, Daoqiang Zhang | null | 1612.06598 | null | null |
Supervised Learning for Optimal Power Flow as a Real-Time Proxy | cs.LG | In this work we design and compare different supervised learning algorithms
to compute the cost of Alternating Current Optimal Power Flow (ACOPF). The
motivation for quick calculation of OPF cost outcomes stems from the growing
need for algorithm-based long-term and medium-term planning methodologies in
power networks. Integrated in a multiple time-horizon coordination framework,
we refer to this approximation module as a proxy for predicting short-term
decision outcomes without the need of actual simulation and optimization of
them. Our method enables fast approximate calculation of OPF cost with less
than 1% error on average, achieved in run-times that are several orders of
magnitude lower than that of exact computation. Several test cases such as
IEEE-RTS96 are used to demonstrate the efficiency of our approach.
| Raphael Canyasse, Gal Dalal, Shie Mannor | null | 1612.06623 | null | null |
Enhancing Observability in Distribution Grids using Smart Meter Data | math.OC cs.LG stat.ML | Due to limited metering infrastructure, distribution grids are currently
challenged by observability issues. On the other hand, smart meter data,
including local voltage magnitudes and power injections, are communicated to
the utility operator from grid buses with renewable generation and
demand-response programs. This work employs grid data from metered buses
towards inferring the underlying grid state. To this end, a coupled formulation
of the power flow problem (CPF) is put forth. Exploiting the high variability
of injections at metered buses, the controllability of solar inverters, and the
relative time-invariance of conventional loads, the idea is to solve the
non-linear power flow equations jointly over consecutive time instants. An
intuitive and easily verifiable rule pertaining to the locations of metered and
non-metered buses on the physical grid is shown to be a necessary and
sufficient criterion for local observability in radial networks. To account for
noisy smart meter readings, a coupled power system state estimation (CPSSE)
problem is further developed. Both CPF and CPSSE tasks are tackled via
augmented semi-definite program relaxations. The observability criterion along
with the CPF and CPSSE solvers are numerically corroborated using synthetic and
actual solar generation and load data on the IEEE 34-bus benchmark feeder.
| Siddharth Bhela, Vassilis Kekatos, Sriharsha Veeramachaneni | null | 1612.06669 | null | null |
Multivariate Industrial Time Series with Cyber-Attack Simulation: Fault
Detection Using an LSTM-based Predictive Data Model | cs.LG stat.ML | We adopted an approach based on an LSTM neural network to monitor and detect
faults in industrial multivariate time series data. To validate the approach we
created a Modelica model of part of a real gasoil plant. By introducing hacks
into the logic of the Modelica model, we were able to generate both the roots
and causes of fault behavior in the plant. Having a self-consistent data set
with labeled faults, we used an LSTM architecture with a forecasting error
threshold to obtain precision and recall quality metrics. The dependency of the
quality metric on the threshold level is considered. An appropriate mechanism
such as "one handle" was introduced for filtering faults that are outside of
the plant operator's field of interest.
| Pavel Filonov, Andrey Lavrentyev, Artem Vorontsov | null | 1612.06676 | null | null |
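To make the thresholding step concrete, here is a minimal sketch of how faults might be flagged from forecasting residuals; the smoothing window and the threshold (the "one handle") are assumed tuning choices, and the trained LSTM forecaster itself is taken as given.

```python
import numpy as np

def fault_flags(y_true, y_pred, threshold, window=10):
    # y_pred would come from an LSTM forecaster trained on normal operation
    err = np.linalg.norm(y_true - y_pred, axis=1)                # per-step error
    smooth = np.convolve(err, np.ones(window) / window, "same")  # damp spikes
    return smooth > threshold                                    # fault mask
```

Sweeping the threshold over its range traces out the dependence of the precision and recall metrics on the threshold level that the abstract refers to.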
Action-Driven Object Detection with Top-Down Visual Attentions | cs.CV cs.AI cs.LG | A dominant paradigm for deep learning based object detection relies on a
"bottom-up" approach using "passive" scoring of class agnostic proposals. These
approaches are efficient but lack of holistic analysis of scene-level context.
In this paper, we present an "action-driven" detection mechanism using our
"top-down" visual attention model. We localize an object by taking sequential
actions that the attention model provides. The attention model conditioned with
an image region provides required actions to get closer toward a target object.
An action at each time step is weak by itself, but an ensemble of the sequential
actions makes a bounding-box accurately converge to a target object boundary.
This attention model we call AttentionNet is composed of a convolutional neural
network. During our whole detection procedure, we only utilize the actions from
a single AttentionNet, without any modules for object proposals or post
bounding-box regression. We evaluate our top-down detection mechanism over the
PASCAL VOC series and ILSVRC CLS-LOC dataset, and achieve state-of-the-art
performances compared to the major bottom-up detection methods. In particular,
our detection mechanism shows a strong advantage in elaborate localization by
outperforming Faster R-CNN with a margin of +7.1% over PASCAL VOC 2007 when we
increase the IoU threshold for positive detection to 0.7.
| Donggeun Yoo, Sunggyun Park, Kyunghyun Paeng, Joon-Young Lee, In So
Kweon | null | 1612.06704 | null | null |
Beyond Skip Connections: Top-Down Modulation for Object Detection | cs.CV cs.LG | In recent years, we have seen tremendous progress in the field of object
detection. Most of the recent improvements have been achieved by targeting
deeper feedforward networks. However, many hard object categories such as
bottle, remote, etc. require representation of fine details and not just
coarse, semantic representations. But most of these fine details are lost in
the early convolutional layers. What we need is a way to incorporate finer
details from lower layers into the detection architecture. Skip connections
have been proposed to combine high-level and low-level features, but we argue
that selecting the right features from low-level requires top-down contextual
information. Inspired by the human visual pathway, in this paper we propose
top-down modulations as a way to incorporate fine details into the detection
framework. Our approach supplements the standard bottom-up, feedforward ConvNet
with a top-down modulation (TDM) network, connected using lateral connections.
These connections are responsible for the modulation of lower layer filters,
and the top-down network handles the selection and integration of contextual
information and low-level features. The proposed TDM architecture provides a
significant boost on the COCO testdev benchmark, achieving 28.6 AP for VGG16,
35.2 AP for ResNet101, and 37.3 for InceptionResNetv2 network, without any
bells and whistles (e.g., multi-scale, iterative box refinement, etc.).
| Abhinav Shrivastava, Rahul Sukthankar, Jitendra Malik, Abhinav Gupta | null | 1612.06851 | null | null |
Temporal Feature Selection on Networked Time Series | cs.LG | This paper formulates the problem of learning discriminative features
(\textit{i.e.,} segments) from networked time series data considering the
linked information among time series. For example, social network users are
considered to be social sensors that continuously generate social signals
(tweets) represented as a time series. The discriminative segments are often
referred to as \emph{shapelets} in a time series. Extracting shapelets for time
series classification has been widely studied. However, existing works on
shapelet selection assume that the time series are independent and identically
distributed (i.i.d.). This assumption restricts their applications to social
networked time series analysis, since a user's actions can be correlated to
his/her social affiliations. In this paper we propose a new Network Regularized
Least Squares (NetRLS) feature selection model that combines typical time
series data and user network data for analysis. Experiments on real-world
networked time series Twitter and DBLP data demonstrate the performance of the
proposed method. NetRLS performs better than LTS, the state-of-the-art time
series feature selection approach, on real-world data.
| Haishuai Wang, Jia Wu, Peng Zhang, Chengqi Zhang | null | 1612.06856 | null | null |
Robust mixture of experts modeling using the skew $t$ distribution | stat.ME cs.LG stat.ML | Mixture of Experts (MoE) is a popular framework in the fields of statistics
and machine learning for modeling heterogeneity in data for regression,
classification and clustering. MoE for continuous data are usually based on the
normal distribution. However, it is known that for data with asymmetric
behavior, heavy tails and atypical observations, the use of the normal
distribution is unsuitable. We introduce a new robust non-normal mixture of
experts modeling using the skew $t$ distribution. The proposed skew $t$ mixture
of experts, named STMoE, handles these issues of the normal mixture of experts
regarding possibly skewed, heavy-tailed and noisy data. We develop a dedicated
expectation conditional maximization (ECM) algorithm to estimate the model
parameters by monotonically maximizing the observed data log-likelihood. We
describe how the presented model can be used in prediction and in model-based
clustering of regression data. Numerical experiments carried out on simulated
data show the effectiveness and the robustness of the proposed model in fitting
non-linear regression functions as well as in model-based clustering. Then, the
proposed model is applied to real-world tone-perception data for musical
data analysis, and to temperature-anomaly data for the analysis of climate
change data. The obtained results confirm the usefulness of the model for
practical data analysis applications.
| Faicel Chamroukhi | null | 1612.06879 | null | null |
CLEVR: A Diagnostic Dataset for Compositional Language and Elementary
Visual Reasoning | cs.CV cs.CL cs.LG | When building artificial intelligence systems that can reason and answer
questions about visual data, we need diagnostic tests to analyze our progress
and discover shortcomings. Existing benchmarks for visual question answering
can help, but have strong biases that models can exploit to correctly answer
questions without reasoning. They also conflate multiple sources of error,
making it hard to pinpoint model weaknesses. We present a diagnostic dataset
that tests a range of visual reasoning abilities. It contains minimal biases
and has detailed annotations describing the kind of reasoning each question
requires. We use this dataset to analyze a variety of modern visual reasoning
systems, providing novel insights into their abilities and limitations.
| Justin Johnson and Bharath Hariharan and Laurens van der Maaten and Li
Fei-Fei and C. Lawrence Zitnick and Ross Girshick | null | 1612.06890 | null | null |
Personalized Video Recommendation Using Rich Contents from Videos | cs.IR cs.LG | Video recommendation has become an essential way of helping people explore
the massive videos and discover the ones that may be of interest to them. In
the existing video recommender systems, the models make the recommendations
based on the user-video interactions and single specific content features. When
the specific content features are unavailable, the performance of the existing
models will seriously deteriorate. Inspired by the fact that rich contents
(e.g., text, audio, motion, and so on) exist in videos, in this paper, we
explore how to use these rich contents to overcome the limitations caused by
the unavailability of the specific ones. Specifically, we propose a novel
general framework that incorporates arbitrary single content feature with
user-video interactions, named as collaborative embedding regression (CER)
model, to make effective video recommendation in both in-matrix and
out-of-matrix scenarios. Our extensive experiments on two real-world
large-scale datasets show that CER beats the existing recommender models with
any single content feature and is more time efficient. In addition, we propose
a priority-based late fusion (PRI) method to gain the benefit brought by
integrating the multiple content features. The corresponding experiment shows
that PRI brings real performance improvement to the baseline and outperforms
the existing fusion methods.
| Xingzhong Du, Hongzhi Yin, Ling Chen, Yang Wang, Yi Yang, Xiaofang
Zhou | null | 1612.06935 | null | null |
Robust Learning with Kernel Mean p-Power Error Loss | stat.ML cs.LG | Correntropy is a second order statistical measure in kernel space, which has
been successfully applied in robust learning and signal processing. In this
paper, we define a non-second-order statistical measure in kernel space, called
the kernel mean-p power error (KMPE), including the correntropic loss (CLoss)
as a special case. Some basic properties of KMPE are presented. In particular,
we apply the KMPE to extreme learning machine (ELM) and principal component
analysis (PCA), and develop two robust learning algorithms, namely ELM-KMPE and
PCA-KMPE. Experimental results on synthetic and benchmark data show that the
developed algorithms can achieve consistently better performance when compared
with some existing methods.
| Badong Chen, Lei Xing, Xin Wang, Jing Qin, Nanning Zheng | null | 1612.07019 | null | null |
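For intuition, a minimal sketch of what a KMPE-style loss could look like for a Gaussian kernel is given below, using the identity ||φ(x) − φ(y)||² = 2(1 − κσ(x − y)) for a normalized kernel. The exact scaling used in the paper may differ, so treat this form as an assumption.

```python
import numpy as np

def kmpe_loss(e, sigma=1.0, p=2.0):
    # Gaussian kernel evaluated at the errors; for a normalized kernel,
    # ||phi(x) - phi(y)||^2 = 2 * (1 - k(x - y)), so averaging the p-th
    # power of the kernel-space distance gives the expression below
    # (scaling constants assumed away; p = 2 recovers a correntropic
    # loss up to a factor of 2).
    k = np.exp(-np.asarray(e) ** 2 / (2.0 * sigma ** 2))
    return np.mean((2.0 * (1.0 - k)) ** (p / 2.0))
```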
An Empirical Study of Language CNN for Image Captioning | cs.CV cs.LG | Language Models based on recurrent neural networks have dominated recent
image caption generation tasks. In this paper, we introduce a Language CNN
model which is suitable for statistical language modeling tasks and shows
competitive performance in image captioning. In contrast to previous models
which predict the next word based on one previous word and the hidden state, our
language CNN is fed with all the previous words and can model the long-range
dependencies of history words, which are critical for image captioning. The
effectiveness of our approach is validated on two datasets MS COCO and
Flickr30K. Our extensive experimental results show that our method outperforms
the vanilla recurrent neural network based language models and is competitive
with the state-of-the-art methods.
| Jiuxiang Gu, Gang Wang, Jianfei Cai, Tsuhan Chen | null | 1612.07086 | null | null |
Classification and Learning-to-rank Approaches for Cross-Device Matching
at CIKM Cup 2016 | cs.IR cs.LG | In this paper, we propose two methods for tackling the problem of
cross-device matching for online advertising at CIKM Cup 2016. The first method
considers the matching problem as a binary classification task and solve it by
utilizing ensemble learning techniques. The second method defines the matching
problem as a ranking task and effectively solve it with using learning-to-rank
algorithms. The results show that the proposed methods obtain promising
results, in which the ranking-based method outperforms the classification-based
method for the task.
| Nam Khanh Tran | null | 1612.07117 | null | null |
FINN: A Framework for Fast, Scalable Binarized Neural Network Inference | cs.CV cs.AR cs.LG | Research has shown that convolutional neural networks contain significant
redundancy, and high classification accuracy can be obtained even when weights
and activations are reduced from floating point to binary values. In this
paper, we present FINN, a framework for building fast and flexible FPGA
accelerators using a flexible heterogeneous streaming architecture. By
utilizing a novel set of optimizations that enable efficient mapping of
binarized neural networks to hardware, we implement fully connected,
convolutional and pooling layers, with per-layer compute resources being
tailored to user-provided throughput requirements. On a ZC706 embedded FPGA
platform drawing less than 25 W total system power, we demonstrate up to 12.3
million image classifications per second with 0.31 {\mu}s latency on the MNIST
dataset with 95.8% accuracy, and 21906 image classifications per second with
283 {\mu}s latency on the CIFAR-10 and SVHN datasets with respectively 80.1%
and 94.9% accuracy. To the best of our knowledge, ours are the fastest
classification rates reported to date on these benchmarks.
| Yaman Umuroglu, Nicholas J. Fraser, Giulio Gambardella, Michaela
Blott, Philip Leong, Magnus Jahre, Kees Vissers | 10.1145/3020078.3021744 | 1612.07119 | null | null |
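To see why binarization enables such throughput, consider the sketch below of a single binarized fully connected layer: with weights and activations in {−1, +1}, the multiply-accumulate reduces to XNOR-and-popcount in hardware (here emulated with an ordinary matrix product). The layer shape and threshold are illustrative assumptions.

```python
import numpy as np

def binary_dense(x, W_bin, threshold=0):
    # one fully connected layer of a binarized network (illustrative only;
    # FINN realizes this as XNOR + popcount in FPGA logic)
    acc = W_bin @ x                       # +/-1 products accumulate like popcounts
    return np.where(acc >= threshold, 1, -1)

rng = np.random.default_rng(0)
x = rng.choice([-1, 1], size=64)          # binary activations
W = rng.choice([-1, 1], size=(32, 64))    # binary weights
h = binary_dense(x, W)                    # next layer's binary activations
```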
A Survey of Deep Network Solutions for Learning Control in Robotics:
From Reinforcement to Imitation | cs.RO cs.AI cs.LG cs.SY | Deep learning techniques have been widely applied, achieving state-of-the-art
results in various fields of study. This survey focuses on deep learning
solutions that target learning control policies for robotics applications. We
carry out our discussions on the two main paradigms for learning control with
deep networks: deep reinforcement learning and imitation learning. For deep
reinforcement learning (DRL), we begin from traditional reinforcement learning
algorithms, showing how they are extended to the deep context and effective
mechanisms that could be added on top of the DRL algorithms. We then introduce
representative works that utilize DRL to solve navigation and manipulation
tasks in robotics. We continue our discussion on methods addressing the
challenge of the reality gap for transferring DRL policies trained in
simulation to real-world scenarios, and summarize robotics simulation platforms
for conducting DRL research. For imitation learning, we go through its three
main categories, behavior cloning, inverse reinforcement learning and
generative adversarial imitation learning, by introducing their formulations
and their corresponding robotics applications. Finally, we discuss the open
challenges and research frontiers.
| Lei Tai and Jingwei Zhang and Ming Liu and Joschka Boedecker and
Wolfram Burgard | null | 1612.07139 | null | null |
Robust Classification of Graph-Based Data | cs.LG | A graph-based classification method is proposed for semi-supervised learning
in the case of Euclidean data and for classification in the case of graph data.
Our manifold learning technique is based on a convex optimization problem
involving a convex quadratic regularization term and a concave quadratic loss
function with a trade-off parameter carefully chosen so that the objective
function remains convex. As shown empirically, the advantage of considering a
concave loss function is that the learning problem becomes more robust in the
presence of noisy labels. Furthermore, the loss function considered here is
then more similar to a classification loss while several other methods treat
graph-based classification problems as regression problems.
| Carlos M. Ala\'iz, Micha\"el Fanuel, Johan A. K. Suykens | null | 1612.07141 | null | null |
Collaborative Filtering with User-Item Co-Autoregressive Models | cs.LG | Deep neural networks have shown promise in collaborative filtering (CF).
However, existing neural approaches are either user-based or item-based, which
cannot leverage all the underlying information explicitly. We propose CF-UIcA,
a neural co-autoregressive model for CF tasks, which exploits the structural
correlation in the domains of both users and items. The co-autoregression
allows extra desired properties to be incorporated for different tasks.
Furthermore, we develop an efficient stochastic learning algorithm to handle
large scale datasets. We evaluate CF-UIcA on two popular benchmarks: MovieLens
1M and Netflix, and achieve state-of-the-art performance in both rating
prediction and top-N recommendation tasks, which demonstrates the effectiveness
of CF-UIcA.
| Chao Du, Chongxuan Li, Yin Zheng, Jun Zhu, Bo Zhang | null | 1612.07146 | null | null |
Multi-Agent Cooperation and the Emergence of (Natural) Language | cs.CL cs.CV cs.GT cs.LG cs.MA | The current mainstream approach to train natural language systems is to
expose them to large amounts of text. This passive learning is problematic if
we are interested in developing interactive machines, such as conversational
agents. We propose a framework for language learning that relies on multi-agent
communication. We study this learning in the context of referential games. In
these games, a sender and a receiver see a pair of images. The sender is told
one of them is the target and is allowed to send a message from a fixed,
arbitrary vocabulary to the receiver. The receiver must rely on this message to
identify the target. Thus, the agents develop their own language interactively
out of the need to communicate. We show that two networks with simple
configurations are able to learn to coordinate in the referential game. We
further explore how to make changes to the game environment to cause the "word
meanings" induced in the game to better reflect intuitive semantic properties
of the images. In addition, we present a simple strategy for grounding the
agents' code into natural language. Both of these are necessary steps towards
developing machines that are able to communicate with humans productively.
| Angeliki Lazaridou, Alexander Peysakhovich, Marco Baroni | null | 1612.07182 | null | null |
Bayesian Decision Process for Cost-Efficient Dynamic Ranking via
Crowdsourcing | stat.ML cs.LG stat.ME | Rank aggregation based on pairwise comparisons over a set of items has a wide
range of applications. Although considerable research has been devoted to the
development of rank aggregation algorithms, one basic question is how to
efficiently collect a large amount of high-quality pairwise comparisons for the
ranking purpose. Because of the advent of many crowdsourcing services, a crowd
of workers are often hired to conduct pairwise comparisons with a small
monetary reward for each pair they compare. Since different workers have
different levels of reliability and different pairs have different levels of
ambiguity, it is desirable to wisely allocate the limited budget for
comparisons among the pairs of items and workers so that the global ranking can
be accurately inferred from the comparison results. To this end, we model the
active sampling problem in crowdsourced ranking as a Bayesian Markov decision
process, which dynamically selects item pairs and workers to improve the
ranking accuracy under a budget constraint. We further develop a
computationally efficient sampling policy based on knowledge gradient as well
as a moment matching technique for posterior approximation. Experimental
evaluations on both synthetic and real data show that the proposed policy
achieves high ranking accuracy with a lower labeling cost.
| Xi Chen, Kevin Jiao, Qihang Lin | null | 1612.07222 | null | null |
Loss is its own Reward: Self-Supervision for Reinforcement Learning | cs.LG | Reinforcement learning optimizes policies for expected cumulative reward.
Need the supervision be so narrow? Reward is delayed and sparse for many tasks,
making it a difficult and impoverished signal for end-to-end optimization. To
augment reward, we consider a range of self-supervised tasks that incorporate
states, actions, and successors to provide auxiliary losses. These losses offer
ubiquitous and instantaneous supervision for representation learning even in
the absence of reward. While current results show that learning from reward
alone is feasible, pure reinforcement learning methods are constrained by
computational and data efficiency issues that can be remedied by auxiliary
losses. Self-supervised pre-training and joint optimization improve the data
efficiency and policy returns of end-to-end reinforcement learning.
| Evan Shelhamer, Parsa Mahmoudieh, Max Argus, Trevor Darrell | null | 1612.07307 | null | null |
Distributed Dictionary Learning | math.OC cs.LG | The paper studies distributed Dictionary Learning (DL) problems where the
learning task is distributed over a multi-agent network with time-varying
(nonsymmetric) connectivity. This formulation is relevant, for instance, in
big-data scenarios where massive amounts of data are collected/stored in
different spatial locations and it is unfeasible to aggregate and/or process
all the data in a fusion center, due to resource limitations, communication
overhead or privacy considerations. We develop a general distributed
algorithmic framework for the (nonconvex) DL problem and establish its
asymptotic convergence. The new method hinges on Successive Convex
Approximation (SCA) techniques coupled with i) a gradient tracking mechanism
instrumental to locally estimate the missing global information; and ii) a
consensus step, as a mechanism to distribute the computations among the agents.
To the best of our knowledge, this is the first distributed algorithm with
provable convergence for the DL problem and, more generally, bi-convex
optimization problems over (time-varying) directed graphs.
| Amir Daneshmand, Gesualdo Scutari, Francisco Facchinei | null | 1612.07335 | null | null |
Detecting Unusual Input-Output Associations in Multivariate Conditional
Data | cs.LG stat.ML | Despite tremendous progress in outlier detection research in recent years,
the majority of existing methods are designed only to detect unconditional
outliers that correspond to unusual data patterns expressed in the joint space
of all data attributes. Such methods are not applicable when we seek to detect
conditional outliers that reflect unusual responses associated with a given
context or condition. This work focuses on multivariate conditional outlier
detection, a special type of the conditional outlier detection problem, where
data instances consist of multi-dimensional input (context) and output
(responses) pairs. We present a novel outlier detection framework that
identifies abnormal input-output associations in data with the help of a
decomposable conditional probabilistic model that is learned from all data
instances. Since components of this model can vary in their quality, we combine
them with the help of weights reflecting their reliability in assessment of
outliers. We study two ways of calculating the component weights: global that
relies on all data, and local that relies only on instances similar to the
target instance. Experimental results on data from various domains demonstrate
the ability of our framework to successfully identify multivariate conditional
outliers.
| Charmgil Hong, Milos Hauskrecht | null | 1612.07374 | null | null |
Microstructure Representation and Reconstruction of Heterogeneous
Materials via Deep Belief Network for Computational Material Design | cond-mat.mtrl-sci cs.LG stat.ML | Integrated Computational Materials Engineering (ICME) aims to accelerate
optimal design of complex material systems by integrating material science and
design automation. For tractable ICME, it is required that (1) a structural
feature space be identified to allow reconstruction of new designs, and (2) the
reconstruction process be property-preserving. The majority of existing
structural representation schemes rely on the designer's understanding of
specific material systems to identify geometric and statistical features, which
could be biased and insufficient for reconstructing physically meaningful
microstructures of complex material systems. In this paper, we develop a
feature learning mechanism based on convolutional deep belief network to
automate a two-way conversion between microstructures and their
lower-dimensional feature representations, and to achieve a 1000-fold
dimension reduction from the microstructure space. The proposed model is
applied to a wide spectrum of heterogeneous material systems with distinct
microstructural features including Ti-6Al-4V alloy, Pb63-Sn37 alloy,
Fontainebleau sandstone, and Spherical colloids, to produce material
reconstructions that are close to the original samples with respect to 2-point
correlation functions and mean critical fracture strength. This capability is
not achieved by existing synthesis methods that rely on the Markovian
assumption of material microstructures.
| Ruijin Cang, Yaopengxiao Xu, Shaohua Chen, Yongming Liu, Yang Jiao,
Max Yi Ren | null | 1612.07401 | null | null |
A Context-aware Attention Network for Interactive Question Answering | cs.CL cs.LG | Neural network based sequence-to-sequence models in an encoder-decoder
framework have been successfully applied to solve Question Answering (QA)
problems, predicting answers from statements and questions. However, almost all
previous models have failed to consider detailed context information and
unknown states under which systems do not have enough information to answer
given questions. These scenarios with incomplete or ambiguous information are
very common in the setting of Interactive Question Answering (IQA). To address
this challenge, we develop a novel model, employing context-dependent
word-level attention for more accurate statement representations and
question-guided sentence-level attention for better context modeling. We also
generate unique IQA datasets to test our model, which will be made publicly
available. Employing these attention mechanisms, our model accurately
understands when it can output an answer or when it requires generating a
supplementary question for additional input depending on different contexts.
When available, user's feedback is encoded and directly applied to update
sentence-level attention to infer an answer. Extensive experiments on QA and
IQA datasets quantitatively demonstrate the effectiveness of our model with
significant improvement over state-of-the-art conventional QA models.
| Huayu Li, Martin Renqiang Min, Yong Ge, Asim Kadav | 10.1145/3097983.3098115 | 1612.07411 | null | null |
How to Train Your Deep Neural Network with Dictionary Learning | cs.LG stat.ML | Currently there are two predominant ways to train deep neural networks. The
first one uses restricted Boltzmann machine (RBM) and the second one
autoencoders. RBMs are stacked in layers to form deep belief network (DBN); the
final representation layer is attached to the target to complete the deep
neural network. Autoencoders are nested one inside the other to form stacked
autoencoders; once the stacked autoencoder is learnt, the decoder portion is
detached and the target attached to the deepest layer of the encoder to form
the deep neural network. This work proposes a new approach to train deep neural
networks using dictionary learning as the basic building block; the idea is to
use the features from the shallower layer as inputs for training the next
deeper layer. One can use any type of dictionary learning (unsupervised,
supervised, discriminative etc.) as basic units till the pre-final layer. In
the final layer one needs to use the label consistent dictionary learning
formulation for classification. We compare our proposed framework with existing
state-of-the-art deep learning techniques on benchmark problems; we are always
within the top 10 results. In actual problems of age and gender classification,
we are better than the best known techniques.
| Vanika Singhal, Shikha Singh and Angshul Majumdar | null | 1612.07454 | null | null |
On Coreset Constructions for the Fuzzy $K$-Means Problem | cs.LG cs.DS | The fuzzy $K$-means problem is a popular generalization of the well-known
$K$-means problem to soft clusterings. We present the first coresets for fuzzy
$K$-means with size linear in the dimension, polynomial in the number of
clusters, and poly-logarithmic in the number of points. We show that these
coresets can be employed in the computation of a $(1+\epsilon)$-approximation
for fuzzy $K$-means, improving previously presented results. We further show
that our coresets can be maintained in an insertion-only streaming setting,
where data points arrive one-by-one.
| Johannes Bl\"omer, Sascha Brauer, Kathrin Bujna | null | 1612.07516 | null | null |
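For context, the sketch below shows plain fuzzy K-means: the standard alternating updates of soft memberships U and centers C with fuzzifier m. The paper's coresets would let the same objective be approximated on a much smaller weighted point set; the initialization and iteration count here are assumed.

```python
import numpy as np

def fuzzy_kmeans(X, k, m=2.0, n_iters=50, seed=0):
    # alternating updates of soft memberships U and cluster centers C
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]     # random initial centers
    for _ in range(n_iters):
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)           # soft memberships
        W = U ** m
        C = (W.T @ X) / W.sum(axis=0)[:, None]      # weighted center update
    return C, U
```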
Robustness of Voice Conversion Techniques Under Mismatched Conditions | cs.SD cs.LG stat.ML | Most of the existing studies on voice conversion (VC) are conducted in
acoustically matched conditions between source and target signal. However, the
robustness of VC methods in presence of mismatch remains unknown. In this
paper, we report a comparative analysis of different VC techniques under
mismatched conditions. The extensive experiments with five different VC
techniques on CMU ARCTIC corpus suggest that performance of VC methods
substantially degrades in noisy conditions. We have found that bilinear
frequency warping with amplitude scaling (BLFWAS) outperforms other methods in
most of the noisy conditions. We further explore the suitability of different
speech enhancement techniques for robust conversion. The objective evaluation
results indicate that spectral subtraction and log minimum mean square error
(logMMSE) based speech enhancement techniques can be used to improve the
performance in specific noisy conditions.
| Monisankha Pal, Dipjyoti Paul, Md Sahidullah, Goutam Saha | null | 1612.07523 | null | null |
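
As an illustration of the enhancement front-ends mentioned above, here is a
minimal spectral subtraction sketch using scipy; estimating the noise spectrum
from a leading speech-free segment and applying a spectral floor are common
textbook choices, not parameters from the paper.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(x, fs, noise_secs=0.3, nperseg=512, floor=0.02):
    """Classic magnitude spectral subtraction.

    Estimates the noise magnitude spectrum from the first noise_secs of
    the signal (assumed speech-free), subtracts it frame-wise, and keeps
    the noisy phase. floor is a spectral floor limiting musical noise.
    """
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    mag, phase = np.abs(Z), np.angle(Z)
    hop = nperseg // 2                            # scipy's default overlap
    n_frames = max(1, int(noise_secs * fs / hop))
    noise_mag = mag[:, :n_frames].mean(axis=1, keepdims=True)
    clean_mag = np.maximum(mag - noise_mag, floor * noise_mag)
    _, x_hat = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return x_hat
```
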
Non-Deterministic Policy Improvement Stabilizes Approximated
Reinforcement Learning | cs.AI cs.LG stat.ML | This paper investigates a type of instability that is linked to the greedy
policy improvement in approximated reinforcement learning. We show empirically
that non-deterministic policy improvement can stabilize methods like LSPI by
controlling the improvements' stochasticity. Additionally we show that a
suitable representation of the value function also stabilizes the solution to
some degree. The presented approach is simple and should also be easily
transferable to more sophisticated algorithms like deep reinforcement learning.
| Wendelin B\"ohmer and Rong Guo and Klaus Obermayer | null | 1612.07548 | null | null |
On the function approximation error for risk-sensitive reinforcement
learning | cs.LG | In this paper we obtain several informative error bounds on function
approximation for the policy evaluation algorithm proposed by Basu et al. when
the aim is to find the risk-sensitive cost represented using exponential
utility. The main idea is to use the classical Bapat inequality and
Perron-Frobenius eigenvectors (which exist if we assume an irreducible Markov
chain) to obtain the new bounds. The novelty of our approach is that we exploit
the irreducibility of the Markov chain, whereas the earlier work by Basu et al.
used a spectral variation bound that holds for any matrix. We also give
examples where all our bounds achieve the "actual error" whereas the earlier
bound given by Basu et al. is much weaker in comparison. We show that this
happens due to the absence of a difference term in the earlier bound, a term
that is always present in all our bounds when the state space is large.
Additionally, we discuss how our bounds compare with each other. As a corollary
of our main result, we provide a bound relating the largest eigenvalues of two
irreducible matrices in terms of their entries.
| Prasenjit Karmakar, Shalabh Bhatnagar | null | 1612.07562 | null | null |
Finding Statistically Significant Attribute Interactions | stat.ML cs.LG | In many data exploration tasks it is meaningful to identify groups of
attribute interactions that are specific to a variable of interest. For
instance, in a dataset where the attributes are medical markers and the
variable of interest (class variable) is binary indicating presence/absence of
disease, we would like to know which medical markers interact with respect to
the binary class label. These interactions are useful in several practical
applications, for example, to gain insight into the structure of the data, in
feature selection, and in data anonymisation. We present a novel method, based
on statistical significance testing, that can be used to verify if the data set
has been created by a given factorised class-conditional joint distribution,
where the distribution is parametrised by a partition of its attributes.
Furthermore, we provide a method, named ASTRID, for automatically finding a
partition of attributes describing the distribution that has generated the
data. State-of-the-art classifiers are utilised to capture the interactions
present in the data by systematically breaking attribute interactions and
observing the effect of this breaking on classifier performance. We empirically
demonstrate the utility of the proposed method with examples using real and
synthetic data.
| Andreas Henelius, Antti Ukkonen, Kai Puolam\"aki | null | 1612.07597 | null | null |
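
A rough sketch of the central mechanism, systematically breaking interactions
between attribute groups and measuring the drop in classifier performance,
might look as follows in Python; the random forest, the cross-validation, and
the class-conditional permutation scheme are illustrative stand-ins rather than
the authors' implementation, which additionally wraps the scores in a
significance test. A good partition should lose little accuracy relative to the
score on the unpermuted data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def grouped_permutation_score(X, y, partition, cv=5, seed=0):
    """Score a candidate attribute partition by breaking interactions
    *between* groups: within each class, rows of each group are permuted
    independently, preserving within-group structure and class-conditional
    marginals while destroying between-group dependence.
    """
    rng = np.random.default_rng(seed)
    Xp = X.copy()
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        for group in partition:                   # e.g. [[0, 1], [2], [3, 4]]
            perm = rng.permutation(idx)
            Xp[np.ix_(idx, group)] = X[np.ix_(perm, group)]
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    return cross_val_score(clf, Xp, y, cv=cv).mean()
```
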
Deep Learning and Its Applications to Machine Health Monitoring: A
Survey | cs.LG stat.ML | Since 2006, deep learning (DL) has become a rapidly growing research
direction, redefining state-of-the-art performances in a wide range of areas
such as object recognition, image segmentation, speech recognition and machine
translation. In modern manufacturing systems, data-driven machine health
monitoring is gaining in popularity due to the widespread deployment of
low-cost sensors and their connection to the Internet. Meanwhile, deep learning
provides useful tools for processing and analyzing these big machinery data.
The main purpose of this paper is to review and summarize the emerging research
work of deep learning on machine health monitoring. After the brief
introduction of deep learning techniques, the applications of deep learning in
machine health monitoring systems are reviewed mainly from the following
aspects: the Auto-encoder (AE) and its variants; Restricted Boltzmann Machines
and their variants, including the Deep Belief Network (DBN) and Deep Boltzmann
Machines (DBM); Convolutional Neural Networks (CNN); and Recurrent Neural
Networks (RNN).
Finally, some new trends of DL-based machine health monitoring methods are
discussed.
| Rui Zhao, Ruqiang Yan, Zhenghua Chen, Kezhi Mao, Peng Wang and Robert
X. Gao | null | 1612.0764 | null | null |
Structured Sequence Modeling with Graph Convolutional Recurrent Networks | stat.ML cs.LG | This paper introduces Graph Convolutional Recurrent Network (GCRN), a deep
learning model able to predict structured sequences of data. More precisely,
GCRN is
a generalization of classical recurrent neural networks (RNN) to data
structured by an arbitrary graph. Such structured sequences can represent
series of frames in videos, spatio-temporal measurements on a network of
sensors, or random walks on a vocabulary graph for natural language modeling.
The proposed model combines convolutional neural networks (CNN) on graphs to
identify spatial structures and RNN to find dynamic patterns. We study two
possible architectures of GCRN, and apply the models to two practical problems:
predicting moving MNIST data, and modeling natural language with the Penn
Treebank dataset. Experiments show that exploiting simultaneously graph spatial
and dynamic information about data can improve both precision and learning
speed.
| Youngjoo Seo, Micha\"el Defferrard, Pierre Vandergheynst, Xavier
Bresson | null | 1612.07659 | null | null |
Stacking machine learning classifiers to identify Higgs bosons at the
LHC | hep-ph cs.LG physics.data-an | Machine learning (ML) algorithms have been employed in the problem of
classifying signal and background events with high accuracy in particle
physics. In this paper, we compare the performance of a widespread ML
technique, namely \emph{stacked generalization}, against the results of two
state-of-the-art algorithms: (1) a deep neural network (DNN) in the task of
discovering a new neutral Higgs boson and (2) a scalable machine learning
system for tree boosting, in the Standard Model Higgs to tau leptons channel,
both at the 8 TeV LHC. In a cut-and-count analysis, \emph{stacking} three
algorithms performed around 16\% worse than the DNN while demanding far less
computational effort; however, the same \emph{stacking} outperforms boosted
decision trees. Using the stacked classifiers in a multivariate statistical
analysis (MVA), on the other hand, significantly enhances the statistical
significance compared to cut-and-count in both Higgs processes, suggesting that
combining an ensemble of simpler and faster ML algorithms with MVA tools is a
better approach than building a complex state-of-the-art algorithm for
cut-and-count.
| Alexandre Alves | 10.1088/1748-0221/12/05/T05005 | 1612.07725 | null | null |
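
Stacked generalization itself is straightforward to reproduce with
scikit-learn; the sketch below uses synthetic data as a stand-in for the
kinematic features, and the particular level-0 learners are arbitrary choices,
not the ones benchmarked in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for signal/background kinematic features.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stacked generalization: level-0 learners feed a level-1 meta-learner
# that is fit on their out-of-fold predictions.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
print("stacked accuracy:", stack.fit(X_tr, y_tr).score(X_te, y_te))
```
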
Highway and Residual Networks learn Unrolled Iterative Estimation | cs.NE cs.AI cs.LG | The past year saw the introduction of new architectures such as Highway
networks and Residual networks which, for the first time, enabled the training
of feedforward networks with dozens to hundreds of layers using simple gradient
descent. While depth of representation has been posited as a primary reason for
their success, there are indications that these architectures defy a popular
view of deep learning as a hierarchical computation of increasingly abstract
features at each layer.
In this report, we argue that this view is incomplete and does not adequately
explain several recent findings. We propose an alternative viewpoint based on
unrolled iterative estimation -- a group of successive layers iteratively
refine their estimates of the same features instead of computing an entirely
new representation. We demonstrate that this viewpoint directly leads to the
construction of Highway and Residual networks. Finally we provide preliminary
experiments to discuss the similarities and differences between the two
architectures.
| Klaus Greff and Rupesh K. Srivastava and J\"urgen Schmidhuber | null | 1612.07771 | null | null |
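
The iterative-estimation reading is easy to make concrete: a stack of residual
layers that share weights literally iterates one refinement map. The numpy
sketch below is a toy illustration of that viewpoint; the weight sharing is our
simplification (real residual networks use distinct weights per layer).

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

rng = np.random.default_rng(0)
d = 16
# Small weights so each layer applies a gentle correction.
W1 = 0.1 * rng.standard_normal((d, d))
W2 = 0.1 * rng.standard_normal((d, d))

x = rng.standard_normal(d)
for layer in range(8):
    update = W2 @ relu(W1 @ x)   # F(x): the residual branch
    x = x + update               # x <- x + F(x): refine, don't replace
    # Each update is small relative to x: refinement, not recomputation.
    print(layer, round(float(np.linalg.norm(update)), 4))
```
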
Logic-based Clustering and Learning for Time-Series Data | cs.LG cs.LO | To effectively analyze and design cyberphysical systems (CPS), designers
today have to combat the data deluge problem, i.e., the burden of processing
intractably large amounts of data produced by complex models and experiments.
In this work, we utilize monotonic Parametric Signal Temporal Logic (PSTL) to
design features for unsupervised classification of time series data. This
enables using off-the-shelf machine learning tools to automatically cluster
similar traces with respect to a given PSTL formula. We demonstrate how this
technique produces interpretable formulas that are amenable to analysis and
understanding using a few representative examples. We illustrate this with case
studies related to automotive engine testing, highway traffic analysis, and
auto-grading massively open online courses.
| Marcell Vazquez-Chanlatte, Jyotirmoy V. Deshmukh, Xiaoqing Jin, Sanjit
A. Seshia | null | 1612.07823 | null | null |
Learning from Simulated and Unsupervised Images through Adversarial
Training | cs.CV cs.LG cs.NE | With recent progress in graphics, it has become more tractable to train
models on synthetic images, potentially avoiding the need for expensive
annotations. However, learning from synthetic images may not achieve the
desired performance due to a gap between synthetic and real image
distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U)
learning, where the task is to learn a model to improve the realism of a
simulator's output using unlabeled real data, while preserving the annotation
information from the simulator. We develop a method for S+U learning that uses
an adversarial network similar to Generative Adversarial Networks (GANs), but
with synthetic images as inputs instead of random vectors. We make several key
modifications to the standard GAN algorithm to preserve annotations, avoid
artifacts, and stabilize training: (i) a 'self-regularization' term, (ii) a
local adversarial loss, and (iii) updating the discriminator using a history of
refined images. We show that this enables generation of highly realistic
images, which we demonstrate both qualitatively and with a user study. We
quantitatively evaluate the generated images by training models for gaze
estimation and hand pose estimation. We show a significant improvement over
using synthetic images, and achieve state-of-the-art results on the MPIIGaze
dataset without any labeled real data.
| Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Josh Susskind, Wenda
Wang, Russ Webb | null | 1612.07828 | null | null |
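
Of the three modifications listed, the history of refined images is the
easiest to isolate; a possible minimal version is sketched below. The buffer
capacity and the half-fresh/half-replayed split follow the common description
of this trick, so treat the specifics as assumptions rather than the authors'
exact code.

```python
import numpy as np

class RefinedImageHistory:
    """Buffer of previously refined images for discriminator updates:
    half of each discriminator mini-batch is replayed from history
    rather than taken only from the freshest refiner output.
    Capacity and the half/half split are illustrative choices."""
    def __init__(self, capacity=1024, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.rng = np.random.default_rng(seed)

    def sample_and_replace(self, refined_batch):
        if len(self.buffer) < self.capacity:      # warm-up: just fill
            self.buffer.extend(refined_batch)
            return list(refined_batch)
        half = len(refined_batch) // 2
        idx = self.rng.choice(len(self.buffer), size=half, replace=False)
        replay = [self.buffer[i] for i in idx]
        for i, img in zip(idx, refined_batch[:half]):
            self.buffer[i] = img                  # overwrite with fresh images
        return list(refined_batch[half:]) + replay
```
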
"What is Relevant in a Text Document?": An Interpretable Machine
Learning Approach | cs.CL cs.IR cs.LG stat.ML | Text documents can be described by a number of abstract concepts such as
semantic category, writing style, or sentiment. Machine learning (ML) models
have been trained to automatically map documents to these abstract concepts,
allowing to annotate very large text collections, more than could be processed
by a human in a lifetime. Besides predicting the text's category very
accurately, it is also highly desirable to understand how and why the
categorization process takes place. In this paper, we demonstrate that such
understanding can be achieved by tracing the classification decision back to
individual words using layer-wise relevance propagation (LRP), a recently
developed technique for explaining predictions of complex non-linear
classifiers. We train two word-based ML models, a convolutional neural network
(CNN) and a bag-of-words SVM classifier, on a topic categorization task and
adapt the LRP method to decompose the predictions of these models onto words.
Resulting scores indicate how much individual words contribute to the overall
classification decision. This enables one to distill relevant information from
text documents without an explicit semantic information extraction step. We
further use the word-wise relevance scores for generating novel vector-based
document representations which capture semantic information. Based on these
document vectors, we introduce a measure of model explanatory power and show
that, although the SVM and CNN models perform similarly in terms of
classification accuracy, the latter exhibits a higher level of explainability
which makes it more comprehensible for humans and potentially more useful for
other applications.
| Leila Arras, Franziska Horn, Gr\'egoire Montavon, Klaus-Robert
M\"uller, Wojciech Samek | 10.1371/journal.pone.0181142 | 1612.07843 | null | null |
Human Action Attribute Learning From Video Data Using Low-Rank
Representations | stat.ML cs.LG | Representation of human actions as a sequence of human body movements or
action attributes enables the development of models for human activity
recognition and summarization. We present an extension of the low-rank
representation (LRR) model, termed the clustering-aware structure-constrained
low-rank representation (CS-LRR) model, for unsupervised learning of human
action attributes from video data. Our model is based on the union-of-subspaces
(UoS) framework, and integrates spectral clustering into the LRR optimization
problem for better subspace clustering results. We lay out an efficient linear
alternating direction method to solve the CS-LRR optimization problem. We also
introduce a hierarchical subspace clustering approach, termed hierarchical
CS-LRR, to learn the attributes without the need for a priori specification of
their number. By visualizing and labeling these action attributes, the
hierarchical model can be used to semantically summarize long video sequences
of human actions at multiple resolutions. A human action or activity can also
be uniquely represented as a sequence of transitions from one action attribute
to another, which can then be used for human action recognition. We demonstrate
the effectiveness of the proposed model for semantic summarization and action
recognition through comprehensive experiments on five real-world human action
datasets.
| Tong Wu, Prudhvi Gurram, Raghuveer M. Rao, and Waheed U. Bajwa | 10.7282/t3-t7fe-4a02 | 1612.07857 | null | null |
A Base Camp for Scaling AI | cs.AI cs.LG | Modern statistical machine learning (SML) methods share a major limitation
with the early approaches to AI: there is no scalable way to adapt them to new
domains. Human learning solves this in part by leveraging a rich, shared,
updateable world model. Such scalability requires modularity: updating part of
the world model should not impact unrelated parts. We have argued that such
modularity will require both "correctability" (so that errors can be corrected
without introducing new errors) and "interpretability" (so that we can
understand what components need correcting).
To achieve this, one could attempt to adapt state of the art SML systems to
be interpretable and correctable; or one could see how far the simplest
possible interpretable, correctable learning methods can take us, and try to
control the limitations of SML methods by applying them only where needed. Here
we focus on the latter approach and we investigate two main ideas: "Teacher
Assisted Learning", which leverages crowd sourcing to learn language; and
"Factored Dialog Learning", which factors the process of application
development into roles where the language competencies needed are isolated,
enabling non-experts to quickly create new applications.
We test these ideas in an "Automated Personal Assistant" (APA) setting, with
two scenarios: that of detecting user intent from a user-APA dialog; and that
of creating a class of event reminder applications, where a non-expert
"teacher" can then create specific apps. For the intent detection task, we use
a dataset of a thousand labeled utterances from user dialogs with Cortana, and
we show that our approach matches state of the art SML methods, but in addition
provides full transparency: the whole (editable) model can be summarized on one
human-readable page. For the reminder app task, we ran small user studies to
verify the efficacy of the approach.
| C.J.C. Burges, T. Hart, Z. Yang, S. Cucerzan, R.W. White, A.
Pastusiak, J. Lewis | null | 1612.07896 | null | null |
Supervised Opinion Aspect Extraction by Exploiting Past Extraction
Results | cs.CL cs.LG | One of the key tasks of sentiment analysis of product reviews is to extract
product aspects or features that users have expressed opinions on. In this
work, we focus on using supervised sequence labeling as the base approach to
performing the task. Although several extraction methods using sequence
labeling methods such as Conditional Random Fields (CRF) and Hidden Markov
Models (HMM) have been proposed, we show that this supervised approach can be
significantly improved by exploiting the idea of concept sharing across
multiple domains. For example, "screen" is an aspect of the iPhone, but the
iPhone is not the only product with a screen; many electronic devices have
screens too. When "screen"
appears in a review of a new domain (or product), it is likely to be an aspect
too. Knowing this information enables us to do much better extraction in the
new domain. This paper proposes a novel extraction method exploiting this idea
in the context of supervised sequence labeling. Experimental results show that
it produces markedly better results than without using the past information.
| Lei Shu, Bing Liu, Hu Xu, Annice Kim | null | 1612.0794 | null | null |
DeMIAN: Deep Modality Invariant Adversarial Network | cs.LG stat.ML | Obtaining common representations from different modalities is important in
that they are interchangeable with each other in a classification problem. For
example, we can train a classifier on image features in the common
representations and apply it to the testing of the text features in the
representations. Existing multi-modal representation learning methods mainly
aim to extract rich information from paired samples and train a classifier by
the corresponding labels; however, collecting paired samples and their labels
simultaneously involves high labor costs. Addressing paired modal samples
without their labels and single modal data with their labels independently is
much easier than addressing labeled multi-modal data. To obtain the common
representations under such a situation, we propose to make the distributions
over different modalities similar in the learned representations, namely
modality-invariant representations. In particular, we propose a novel algorithm
for modality-invariant representation learning, named Deep Modality Invariant
Adversarial Network (DeMIAN), which utilizes the idea of Domain Adaptation
(DA). Using the modality-invariant representations learned by DeMIAN, we
achieved better classification accuracy than with the state-of-the-art methods,
especially for some benchmark datasets of zero-shot learning.
| Kuniaki Saito, Yusuke Mukuta, Yoshitaka Ushiku, Tatsuya Harada | null | 1612.07976 | null | null |
RSSL: Semi-supervised Learning in R | stat.ML cs.LG | In this paper, we introduce a package for semi-supervised learning research
in the R programming language called RSSL. We cover the purpose of the package,
the methods it includes and comment on their use and implementation. We then
show, using several code examples, how the package can be used to replicate
well-known results from the semi-supervised learning literature.
| Jesse H. Krijthe | null | 1612.07993 | null | null |
Constructing Effective Personalized Policies Using Counterfactual
Inference from Biased Data Sets with Many Features | stat.ML cs.LG | This paper proposes a novel approach for constructing effective personalized
policies when the observed data lacks counter-factual information, is biased
and possesses many features. The approach is applicable in a wide variety of
settings from healthcare to advertising to education to finance. These settings
have in common that the decision maker can observe, for each previous instance,
an array of features of the instance, the action taken in that instance, and
the reward realized -- but not the rewards of actions that were not taken: the
counterfactual information. Learning in such settings is made even more
difficult because the observed data is typically biased by the existing policy
(that generated the data) and because the array of features that might affect
the reward in a particular instance -- and hence should be taken into account
in deciding on an action in each particular instance -- is often vast. The
approach presented here estimates propensity scores for the observed data,
infers counterfactuals, identifies a (relatively small) number of features that
are (most) relevant for each possible action and instance, and prescribes a
policy to be followed. Comparison of the proposed algorithm against the
state-of-art algorithm on actual datasets demonstrates that the proposed
algorithm achieves a significant improvement in performance.
| Onur Atan, William R. Zame, Qiaojun Feng, Mihaela van der Schaar | null | 1612.08082 | null | null |
On Spectral Analysis of Directed Signed Graphs | cs.SI cs.LG physics.soc-ph | It has been shown that the adjacency eigenspace of a network contains key
information of its underlying structure. However, there has been no study on
spectral analysis of the adjacency matrices of directed signed graphs. In this
paper, we derive theoretical approximations of spectral projections from such
directed signed networks using matrix perturbation theory. We use the derived
theoretical results to study the influence of negative intra-cluster and
inter-cluster directed edges on node spectral projections. We then develop a
spectral-clustering-based graph partition algorithm, SC-DSG, and conduct
evaluations on
both synthetic and real datasets. Both theoretical analysis and empirical
evaluation demonstrate the effectiveness of the proposed algorithm.
| Yuemeng Li, Xintao Wu, Aidong Lu | null | 1612.08102 | null | null |
Image-Text Multi-Modal Representation Learning by Adversarial
Backpropagation | cs.CV cs.CL cs.LG | We present a novel method for image-text multi-modal representation learning.
To our knowledge, this work is the first to apply the adversarial learning
concept to multi-modal learning without exploiting image-text pair information
to learn multi-modal features. We only use category information, in contrast
with most previous methods, which use image-text pair information for
multi-modal embedding. In this paper, we show that multi-modal features can be
learned without image-text pair information, and that our method brings the
image and text distributions closer together in the multi-modal feature space
than other methods that use image-text pair information. We also show that our
multi-modal features carry universal semantic information, even though they
were trained for category prediction. Our model is trained end-to-end by
backpropagation, is intuitive, and is easily extended to other multi-modal
learning work.
| Gwangbeen Park, Woobin Im | null | 1612.08354 | null | null |
Clustering Algorithms: A Comparative Approach | cs.LG stat.ML | Many real-world systems can be studied in terms of pattern recognition tasks,
so that proper use (and understanding) of machine learning methods in practical
applications becomes essential. While a myriad of classification methods have
been proposed, there is no consensus on which methods are more suitable for a
given dataset. As a consequence, it is important to comprehensively compare
methods in many possible scenarios. In this context, we performed a systematic
comparison of 7 well-known clustering methods available in the R language. In
order to account for the many possible variations of data, we considered
artificial datasets with several tunable properties (number of classes,
separation between classes, etc). In addition, we also evaluated the
sensitivity of the clustering methods with regard to their parameters
configuration. The results revealed that, when considering the default
configurations of the adopted methods, the spectral approach usually
outperformed the other clustering algorithms. We also found that the default
configuration of the adopted implementations was not accurate. In these cases,
a simple approach based on random selection of parameter values proved to be a
good alternative to improve the performance. All in all, the reported approach
provides subsidies guiding the choice of clustering algorithms.
| Mayra Z. Rodriguez, Cesar H. Comin, Dalcimar Casanova, Odemir M.
Bruno, Diego R. Amancio, Francisco A. Rodrigues, Luciano da F. Costa | null | 1612.08388 | null | null |
Multi-Region Neural Representation: A novel model for decoding visual
stimuli in human brains | stat.ML cs.LG q-bio.NC | Multivariate Pattern (MVP) classification holds enormous potential for
decoding visual stimuli in the human brain by employing task-based fMRI data
sets. There is a wide range of challenges in the MVP techniques, i.e.
decreasing noise and sparsity, defining effective regions of interest (ROIs),
visualizing results, and the cost of brain studies. In overcoming these
challenges, this paper proposes a novel model of neural representation, which
can automatically detect the active regions for each visual stimulus and then
utilize these anatomical regions for visualizing and analyzing the functional
activities. Therefore, this model provides an opportunity for neuroscientists
to ask what the effect of a stimulus is on each of the detected regions,
instead of just studying the fluctuation of voxels in manually selected ROIs.
Moreover, our method introduces the analysis of snapshots of brain images to
decrease sparsity, rather than using the whole fMRI time series. Further, a new
Gaussian smoothing method is proposed for removing voxel noise at the level of
ROIs. The proposed method enables us to combine
different fMRI data sets for reducing the cost of brain studies. Experimental
studies on 4 visual categories (words, consonants, objects and nonsense photos)
confirm that the proposed method achieves superior performance to
state-of-the-art methods.
| Muhammad Yousefnezhad, Daoqiang Zhang | null | 1612.08392 | null | null |
Correlated signal inference by free energy exploration | stat.ML astro-ph.IM cs.IT cs.LG math.IT | The inference of correlated signal fields with unknown correlation structures
is of high scientific and technological relevance, but poses significant
conceptual and numerical challenges. To address these, we develop the
correlated signal inference (CSI) algorithm within information field theory
(IFT) and discuss its numerical implementation. To this end, we introduce the
free energy exploration (FrEE) strategy for numerical information field theory
(NIFTy) applications. The FrEE strategy is to let the mathematical structure of
the inference problem determine the dynamics of the numerical solver. FrEE uses
the Gibbs free energy formalism for all involved unknown fields and correlation
structures without marginalization of nuisance quantities. It thereby avoids
the complexity that marginalization often imposes on IFT equations. FrEE
simultaneously solves for the mean and the uncertainties of signal, nuisance,
and auxiliary fields, while exploiting any analytically calculable quantity.
Finally, FrEE uses a problem specific and self-tuning exploration strategy to
swiftly identify the optimal field estimates as well as their uncertainty maps.
For all estimated fields, properly weighted posterior samples drawn from their
exact, fully non-Gaussian distributions can be generated. Here, we develop the
FrEE strategies for the CSI of a normal, a log-normal, and a Poisson log-normal
IFT signal inference problem and demonstrate their performances via their NIFTy
implementations.
| Torsten A. En{\ss}lin, Jakob Knollm\"uller | null | 1612.08406 | null | null |
Unsupervised Learning for Computational Phenotyping | stat.ML cs.LG | With large volumes of health care data comes the research area of
computational phenotyping, making use of techniques such as machine learning to
describe illnesses and other clinical concepts from the data itself. The
"traditional" approach of using supervised learning relies on a domain expert,
and has two main limitations: requiring skilled humans to supply correct labels
limits its scalability and accuracy, and relying on existing clinical
descriptions limits the sorts of patterns that can be found. For instance, it
may fail to acknowledge that a disease treated as a single condition may really
have several subtypes with different phenotypes, as seems to be the case with
asthma and heart disease. Some recent papers cite successes instead using
unsupervised learning. This shows great potential for finding patterns in
Electronic Health Records that would otherwise be hidden and that can lead to
greater understanding of conditions and treatments. This work adopts a method
derived strongly from Lasko et al., implementing it in Apache Spark and Python
and generalizing it to laboratory time-series data in MIMIC-III. It
is released as an open-source tool for exploration, analysis, and
visualization, available at https://github.com/Hodapp87/mimic3_phenotyping
| Chris Hodapp | null | 1612.08425 | null | null |
Randomized Block Frank-Wolfe for Convergent Large-Scale Learning | math.OC cs.LG cs.NA | Owing to their low-complexity iterations, Frank-Wolfe (FW) solvers are well
suited for various large-scale learning tasks. When block-separable constraints
are present, randomized block FW (RB-FW) has been shown to further reduce
complexity by updating only a fraction of coordinate blocks per iteration. To
circumvent the limitations of existing methods, the present work develops step
sizes for RB-FW that enable a flexible selection of the number of blocks to
update per iteration while ensuring convergence and feasibility of the
iterates. To this end, convergence rates of RB-FW are established through
computational bounds on a primal sub-optimality measure and on the duality gap.
The novel bounds extend the existing convergence analysis, which only applies
to a step-size sequence that does not generally lead to feasible iterates.
Furthermore, two classes of step-size sequences that guarantee feasibility of
the iterates are also proposed to enhance flexibility in choosing decay rates.
The novel convergence results are markedly broadened to encompass also
nonconvex objectives, and further assert that RB-FW with exact line-search
reaches a stationary point at rate $\mathcal{O}(1/\sqrt{t})$. Performance of
RB-FW with different step sizes and number of blocks is demonstrated in two
applications, namely charging of electrical vehicles and structural support
vector machines. Extensive simulated tests demonstrate the performance
improvement of RB-FW relative to existing randomized single-block FW methods.
| Liang Zhang, Gang Wang, Daniel Romero, Georgios B. Giannakis | 10.1109/TSP.2017.2755597 | 1612.08461 | null | null |
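
A bare-bones version of the randomized block update is sketched below on a toy
block-separable problem (each block constrained to a probability simplex). The
step-size sequence shown is one feasibility-preserving choice in the spirit of
the paper; the exact sequences and their analysis are the paper's contribution
and are not reproduced here.

```python
import numpy as np

def rb_frank_wolfe(grad, x0, n_blocks, lmo, n_update=1, iters=200, seed=0):
    """Randomized block Frank-Wolfe on a block-separable feasible set:
    per iteration, only n_update randomly chosen blocks call their
    linear minimization oracle (LMO) and take a convex-combination
    step, which keeps every block feasible."""
    rng = np.random.default_rng(seed)
    x = [b.copy() for b in x0]
    B = n_blocks / n_update
    for t in range(iters):
        gamma = 2.0 * B / (t + 2.0 * B)           # assumed decaying step
        g = grad(x)   # all block gradients (only selected ones are needed)
        for i in rng.choice(n_blocks, size=n_update, replace=False):
            s_i = lmo(i, g[i])                    # best atom for block i
            x[i] = (1 - gamma) * x[i] + gamma * s_i
    return x

# Toy problem: min sum_i ||x_i - c_i||^2, each block on a simplex.
rng = np.random.default_rng(1)
c = [rng.random(5) for _ in range(4)]
grad = lambda x: [2 * (x[i] - c[i]) for i in range(4)]

def simplex_lmo(i, g_i):                          # simplex LMO: a vertex
    s = np.zeros_like(g_i)
    s[np.argmin(g_i)] = 1.0
    return s

x = rb_frank_wolfe(grad, [np.ones(5) / 5 for _ in range(4)], 4, simplex_lmo)
```
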
Steerable CNNs | cs.LG stat.ML | It has long been recognized that the invariance and equivariance properties
of a representation are critically important for success in many vision tasks.
In this paper we present Steerable Convolutional Neural Networks, an efficient
and flexible class of equivariant convolutional networks. We show that
steerable CNNs achieve state of the art results on the CIFAR image
classification benchmark. The mathematical theory of steerable representations
reveals a type system in which any steerable representation is a composition of
elementary feature types, each one associated with a particular kind of
symmetry. We show how the parameter cost of a steerable filter bank depends on
the types of the input and output features, and show how to use this knowledge
to construct CNNs that utilize parameters effectively.
| Taco S. Cohen, Max Welling | null | 1612.08498 | null | null |
Theory-guided Data Science: A New Paradigm for Scientific Discovery from
Data | cs.LG cs.AI stat.ML | Data science models, although successful in a number of commercial domains,
have had limited applicability in scientific problems involving complex
physical phenomena. Theory-guided data science (TGDS) is an emerging paradigm
that aims to leverage the wealth of scientific knowledge for improving the
effectiveness of data science models in enabling scientific discovery. The
overarching vision of TGDS is to introduce scientific consistency as an
essential component for learning generalizable models. Further, by producing
scientifically interpretable models, TGDS aims to advance our scientific
understanding by discovering novel domain insights. Indeed, the paradigm of
TGDS has started to gain prominence in a number of scientific disciplines such
as turbulence modeling, material discovery, quantum chemistry, bio-medical
science, bio-marker discovery, climate science, and hydrology. In this paper,
we formally conceptualize the paradigm of TGDS and present a taxonomy of
research themes in TGDS. We describe several approaches for integrating domain
knowledge in different research themes using illustrative examples from
different disciplines. We also highlight some of the promising avenues of novel
research for realizing the full potential of theory-guided data science.
| Anuj Karpatne, Gowtham Atluri, James Faghmous, Michael Steinbach,
Arindam Banerjee, Auroop Ganguly, Shashi Shekhar, Nagiza Samatova, and Vipin
Kumar | 10.1109/TKDE.2017.2720168 | 1612.08544 | null | null |
ASAP: Asynchronous Approximate Data-Parallel Computation | cs.DC cs.LG | Emerging workloads, such as graph processing and machine learning are
approximate because of the scale of data involved and the stochastic nature of
the underlying algorithms. These algorithms are often distributed over multiple
machines using bulk-synchronous processing (BSP) or other synchronous
processing paradigms such as map-reduce. However, data parallel processing
primitives such as repeated barrier and reduce operations introduce high
synchronization overheads. Hence, many existing data-processing platforms use
asynchrony and staleness to improve data-parallel job performance. Often, these
systems simply change the synchronous communication to asynchronous between the
worker nodes in the cluster. This improves the throughput of data processing
but results in poor accuracy of the final output since different workers may
progress at different speeds and process inconsistent intermediate outputs.
In this paper, we present ASAP, a model that provides asynchronous and
approximate processing semantics for data-parallel computation. ASAP provides
fine-grained worker synchronization using NOTIFY-ACK semantics that allows
independent workers to run asynchronously. ASAP also provides stochastic reduce
that provides approximate but guaranteed convergence to the same result as an
aggregated all-reduce. In our results, we show that ASAP can reduce
synchronization costs and provides 2-10X speedups in convergence and up to 10X
savings in network costs for distributed machine learning applications and
provides strong convergence guarantees.
| Asim Kadav, Erik Kruus | null | 1612.08608 | null | null |
A Sparse Nonlinear Classifier Design Using AUC Optimization | cs.AI cs.LG stat.ML | AUC (Area under the ROC curve) is an important performance measure for
applications where the data is highly imbalanced. Learning to maximize AUC
performance is thus an important research problem. Using a max-margin based
surrogate loss function, AUC optimization problem can be approximated as a
pairwise rankSVM learning problem. Batch learning methods for solving the
kernelized version of this problem suffer from scalability and may not result
in sparse classifiers. Recent years have witnessed an increased interest in the
development of online or single-pass online learning algorithms that design a
classifier by maximizing the AUC performance. The AUC performance of nonlinear
classifiers, designed using online methods, is not comparable with that of
nonlinear classifiers designed using batch learning algorithms on many
real-world datasets. Motivated by these observations, we design a scalable
algorithm for maximizing AUC performance by greedily adding the required number
of basis functions into the classifier model. The resulting sparse classifiers
perform faster inference. Our experimental results show that the number of
basis functions required can be an order of magnitude smaller than in the
kernel RankSVM model without affecting the AUC performance much.
| Vishal Kakkar, Shirish K. Shevade, S Sundararajan, Dinesh Garg | null | 1612.08633 | null | null |
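
For concreteness, the pairwise max-margin surrogate for AUC that underlies the
rankSVM view can be written directly; the linear, full-batch sketch below is
only the loss and gradient, whereas the paper's contribution, greedily growing
a sparse set of kernel basis functions, is not shown.

```python
import numpy as np

def auc_hinge_loss_and_grad(w, X_pos, X_neg, margin=1.0):
    """Pairwise max-margin AUC surrogate (rankSVM-style): hinge loss on
    the score difference of every positive/negative pair. Returns the
    mean loss and its gradient in w."""
    s_pos = X_pos @ w                             # (P,) scores
    s_neg = X_neg @ w                             # (N,) scores
    diff = s_pos[:, None] - s_neg[None, :]        # (P, N) pairwise margins
    active = (margin - diff) > 0                  # violated pairs
    loss = np.maximum(margin - diff, 0).sum() / diff.size
    grad = (X_neg.T @ active.sum(axis=0)
            - X_pos.T @ active.sum(axis=1)) / diff.size
    return loss, grad

rng = np.random.default_rng(0)
X_pos = rng.normal(1.0, 1.0, size=(30, 5))        # minority class
X_neg = rng.normal(0.0, 1.0, size=(300, 5))       # imbalanced majority
w = np.zeros(5)
for _ in range(200):                              # plain gradient descent
    loss, g = auc_hinge_loss_and_grad(w, X_pos, X_neg)
    w -= 0.5 * g
```
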
Reproducible Pattern Recognition Research: The Case of Optimistic SSL | stat.ML cs.LG | In this paper, we discuss the approaches we took and trade-offs involved in
making a paper on a conceptual topic in pattern recognition research fully
reproducible. We discuss our definition of reproducibility, the tools used, how
the analysis was set up, show some examples of alternative analyses the code
enables and discuss our views on reproducibility.
| Jesse H. Krijthe and Marco Loog | null | 1612.0865 | null | null |
A Hybrid Both Filter and Wrapper Feature Selection Method for Microarray
Classification | cs.LG | Gene expression data is widely used in disease analysis and cancer diagnosis.
However, since gene expression data could contain thousands of genes
simultaneously, successful microarray classification is rather difficult.
Feature selection is an important preprocessing step for any classification process.
Selecting a useful gene subset as a classifier not only decreases the
computational time and cost, but also increases classification accuracy. In
this study, we applied the information gain method as a filter approach, and an
improved binary particle swarm optimization as a wrapper approach to implement
feature selection; selected gene subsets were used to evaluate the performance
of classification. Experimental results show that the proposed method needs to
select smaller gene subsets while obtaining better classification accuracy.
| Li-Yeh Chuang, Chao-Hsuan Ke, and Cheng-Hong Yang | null | 1612.08669 | null | null |
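
The filter stage is simple to sketch with scikit-learn, using mutual
information as a stand-in estimator of information gain; the cutoff n_keep and
the estimator choice are assumptions for illustration, and the binary-PSO
wrapper stage is not reproduced.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def information_gain_filter(X, y, n_keep=50):
    """Filter stage of a filter+wrapper pipeline: rank genes by
    (estimated) information gain with the class label and keep the
    top n_keep for the wrapper search."""
    scores = mutual_info_classif(X, y, random_state=0)
    keep = np.argsort(scores)[::-1][:n_keep]
    return X[:, keep], keep
```
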
Clustering with Confidence: Finding Clusters with Statistical Guarantees | stat.ML cs.LG | Clustering is a widely used unsupervised learning method for finding
structure in the data. However, the resulting clusters are typically presented
without any guarantees on their robustness; slightly changing the used data
sample or re-running a clustering algorithm involving some stochastic component
may lead to completely different clusters. There is, hence, a need for
techniques that can quantify the instability of the generated clusters. In this
study, we propose a technique for quantifying the instability of a clustering
solution and for finding robust clusters, termed core clusters, which
correspond to clusters where the co-occurrence probability of each data item
within a cluster is at least $1 - \alpha$. We demonstrate how solving the core
clustering problem is linked to finding the largest maximal cliques in a graph.
We show that the method can be used with both clustering and classification
algorithms. The proposed method is tested on both simulated and real datasets.
The results show that the obtained clusters indeed meet the guarantees on
robustness.
| Andreas Henelius, Kai Puolam\"aki, Henrik Bostr\"om, Panagiotis
Papapetrou | null | 1612.08714 | null | null |
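
The co-occurrence probabilities at the heart of the construction can be
estimated by simple resampling of a stochastic clustering algorithm, as in the
numpy/scikit-learn sketch below; K-means with random restarts is an
illustrative choice, and the maximal-clique search over the thresholded matrix
is left out.

```python
import numpy as np
from sklearn.cluster import KMeans

def cooccurrence_matrix(X, k, n_runs=50):
    """C[i, j] = fraction of clustering runs in which points i and j land
    in the same cluster; O(n^2) memory. Groups whose pairwise values all
    exceed 1 - alpha are candidates for robust ('core') clusters."""
    n = len(X)
    C = np.zeros((n, n))
    for r in range(n_runs):
        labels = KMeans(n_clusters=k, n_init=1, random_state=r).fit_predict(X)
        C += labels[:, None] == labels[None, :]
    return C / n_runs
```
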
Automatic Composition and Optimization of Multicomponent Predictive
Systems With an Extended Auto-WEKA | cs.LG | Composition and parameterization of multicomponent predictive systems (MCPSs)
consisting of chains of data transformation steps are a challenging task.
Auto-WEKA is a tool to automate the combined algorithm selection and
hyperparameter (CASH) optimization problem. In this paper, we extend the CASH
problem and Auto-WEKA to support the MCPS, including preprocessing steps for
both classification and regression tasks. We define the optimization problem in
which the search space consists of suitably parameterized Petri nets forming
the sought MCPS solutions. In the experimental analysis, we focus on examining
the impact of considerably extending the search space (from approximately
22,000 to 812 billion possible combinations of methods and categorical
hyperparameters). In a range of extensive experiments, three different
optimization strategies are used to automatically compose MCPSs for 21 publicly
available data sets. The diversity of the composed MCPSs found is an indication
that fully and automatically exploiting different combinations of data cleaning
and preprocessing techniques is possible and highly beneficial for different
predictive models. We also present the results on seven data sets from real
chemical production processes. Our findings can have a major impact on the
development of high-quality predictive models as well as their maintenance and
scalability aspects needed in modern applications and deployment scenarios.
| Manuel Martin Salvador, Marcin Budka, Bogdan Gabrys | 10.1109/TASE.2018.2876430 | 1612.08789 | null | null |
Provable learning of Noisy-or Networks | cs.LG cs.DS stat.ML | Many machine learning applications use latent variable models to explain
structure in data, whereby visible variables (= coordinates of the given
datapoint) are explained as a probabilistic function of some hidden variables.
Finding parameters with the maximum likelihood is NP-hard even in very simple
settings. In recent years, provably efficient algorithms were nevertheless
developed for models with linear structures: topic models, mixture models,
hidden Markov models, etc. These algorithms use matrix or tensor decomposition,
and make some reasonable assumptions about the parameters of the underlying
model.
But matrix or tensor decomposition seems of little use when the latent
variable model has nonlinearities. The current paper shows how to make
progress: tensor decomposition is applied for learning the single-layer {\em
noisy or} network, which is a textbook example of a Bayes net, and used for
example in the classic QMR-DT software for diagnosing which disease(s) a
patient may have by observing the symptoms he/she exhibits.
The technical novelty here, which should be useful in other settings in
future, is analysis of tensor decomposition in presence of systematic error
(i.e., where the noise/error is correlated with the signal and does not
decrease as the number of samples goes to infinity). This requires rethinking all
steps of tensor decomposition methods from the ground up.
For simplicity our analysis is stated assuming that the network parameters
were chosen from a probability distribution but the method seems more generally
applicable.
| Sanjeev Arora, Rong Ge, Tengyu Ma, Andrej Risteski | null | 1612.08795 | null | null |
The Predictron: End-To-End Learning and Planning | cs.LG cs.AI cs.NE | One of the key challenges of artificial intelligence is to learn models that
are effective in the context of planning. In this document we introduce the
predictron architecture. The predictron consists of a fully abstract model,
represented by a Markov reward process, that can be rolled forward multiple
"imagined" planning steps. Each forward pass of the predictron accumulates
internal rewards and values over multiple planning depths. The predictron is
trained end-to-end so as to make these accumulated values accurately
approximate the true value function. We applied the predictron to procedurally
generated random mazes and a simulator for the game of pool. The predictron
yielded significantly more accurate predictions than conventional deep neural
network architectures.
| David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur
Guez, Tim Harley, Gabriel Dulac-Arnold, David Reichert, Neil Rabinowitz,
Andre Barreto, Thomas Degris | null | 1612.0881 | null | null |
The Pessimistic Limits and Possibilities of Margin-based Losses in
Semi-supervised Learning | stat.ML cs.LG | Consider a classification problem where we have both labeled and unlabeled
data available. We show that for linear classifiers defined by convex
margin-based surrogate losses that are decreasing, it is impossible to
construct any semi-supervised approach that is able to guarantee an improvement
over the supervised classifier measured by this surrogate loss on the labeled
and unlabeled data. For convex margin-based loss functions that also increase,
we demonstrate that safe improvements are possible.
| Jesse H. Krijthe and Marco Loog | null | 1612.08875 | null | null |
Efficient iterative policy optimization | cs.AI cs.LG cs.RO | We tackle the issue of finding a good policy when the number of policy
updates is limited. This is done by approximating the expected policy reward as
a sequence of concave lower bounds which can be efficiently maximized,
drastically reducing the number of policy updates required to achieve good
performance. We also extend existing methods to negative rewards, enabling the
use of control variates.
| Nicolas Le Roux | null | 1612.08967 | null | null |
A Deep Learning Approach To Multiple Kernel Fusion | stat.ML cs.LG | Kernel fusion is a popular and effective approach for combining multiple
features that characterize different aspects of data. Traditional approaches
for Multiple Kernel Learning (MKL) attempt to learn the parameters for
combining the kernels through sophisticated optimization procedures. In this
paper, we propose an alternative approach that creates dense embeddings for
data using the kernel similarities and adopts a deep neural network
architecture for fusing the embeddings. In order to improve the effectiveness
of this network, we introduce the kernel dropout regularization strategy
coupled with the use of an expanded set of composition kernels. Experiment
results on a real-world activity recognition dataset show that the proposed
architecture is effective in fusing kernels and achieves state-of-the-art
performance.
| Huan Song, Jayaraman J. Thiagarajan, Prasanna Sattigeri, Karthikeyan
Natesan Ramamurthy, Andreas Spanias | null | 1612.09007 | null | null |
Meta-Unsupervised-Learning: A supervised approach to unsupervised
learning | cs.LG cs.AI cs.CV | We introduce a new paradigm to investigate unsupervised learning, reducing
unsupervised learning to supervised learning. Specifically, we mitigate the
subjectivity in unsupervised decision-making by leveraging knowledge acquired
from prior, possibly heterogeneous, supervised learning tasks. We demonstrate
the versatility of our framework via comprehensive expositions and detailed
experiments on several unsupervised problems such as (a) clustering, (b)
outlier detection, and (c) similarity prediction under a common umbrella of
meta-unsupervised-learning. We also provide rigorous PAC-agnostic bounds to
establish the theoretical foundations of our framework, and show that our
framing of meta-clustering circumvents Kleinberg's impossibility theorem for
clustering.
| Vikas K. Garg and Adam Tauman Kalai | null | 1612.0903 | null | null |
Geometric descent method for convex composite minimization | math.OC cs.LG stat.ML | In this paper, we extend the geometric descent method recently proposed by
Bubeck, Lee and Singh to tackle nonsmooth and strongly convex composite
problems. We prove that our proposed algorithm, dubbed geometric proximal
gradient method (GeoPG), converges with a linear rate $(1-1/\sqrt{\kappa})$ and
thus achieves the optimal rate among first-order methods, where $\kappa$ is the
condition number of the problem. Numerical results on linear regression and
logistic regression with elastic net regularization show that GeoPG compares
favorably with Nesterov's accelerated proximal gradient method, especially when
the problem is ill-conditioned.
| Shixiang Chen, Shiqian Ma, Wei Liu | null | 1612.09034 | null | null |
Deep Learning and Hierarchal Generative Models | cs.LG | It is argued that deep learning is efficient for data that is generated from
hierarchical generative models. Examples of such generative models include
wavelet scattering networks, functions of compositional structure, and deep
rendering models. Unfortunately so far, for all such models, it is either not
rigorously known that they can be learned efficiently, or it is not known that
"deep algorithms" are required in order to learn them.
We propose a simple family of "generative hierarchical models" which can be
efficiently learned and where "deep" algorithm are necessary for learning. Our
definition of "deep" algorithms is based on the empirical observation that deep
nets necessarily use correlations between features. More formally, we show that
in a semi-supervised setting, given access to low-order moments of the labeled
data and all of the unlabeled data, it is information-theoretically impossible
to perform classification, while at the same time there is an efficient
algorithm that, given all labeled and unlabeled data, perfectly labels all
unlabeled data with high probability.
For the proof, we use and strengthen the fact that Belief Propagation does
not admit a good approximation in terms of linear functions.
| Elchanan Mossel | null | 1612.09057 | null | null |
Selecting Bases in Spectral learning of Predictive State Representations
via Model Entropy | cs.LG stat.ML | Predictive State Representations (PSRs) are powerful techniques for modelling
dynamical systems, which represent a state as a vector of predictions about
future observable events (tests). In PSRs, one of the fundamental problems is
the learning of the PSR model of the underlying system. Recently, spectral
methods have been successfully used to address this issue by treating the
learning problem as the task of computing a singular value decomposition (SVD)
over a submatrix of a special type of matrix called the Hankel matrix. Under
the assumptions that the rows and columns of the submatrix of the Hankel matrix
are sufficient (which usually means a very large number of rows and columns,
and almost always fails in practice) and that the entries of the matrix can be
estimated accurately, it has been proven that the spectral approach for
learning PSRs is statistically consistent and that the learned parameters
converge to the true parameters. However, in practice, due to limited
computational resources, only a finite set of rows or columns can be used for
the spectral learning. Since different sets of columns usually lead to
different accuracies of the learned model, in this paper we propose an approach
for selecting the set of columns, namely basis selection, by adopting a concept
of model entropy to
measure the accuracy of the learned model. Experimental results are shown to
demonstrate the effectiveness of the proposed approach.
| Yunlong Liu and Hexing Zhu | null | 1612.09076 | null | null |
Sequence-to-point learning with neural networks for nonintrusive load
monitoring | stat.AP cs.LG | Energy disaggregation (a.k.a nonintrusive load monitoring, NILM), a
single-channel blind source separation problem, aims to decompose the mains
signal, which records the whole-house electricity consumption, into
appliance-wise readings. This problem is difficult because it is inherently
unidentifiable. Recent approaches have shown that the identifiability problem
can be reduced by introducing domain knowledge into the model. Deep neural
networks have been shown to be a promising approach for these problems, but
sliding windows are necessary to handle the long sequences that arise in
signal processing, raising the issue of how to combine predictions from
different sliding windows. In this paper, we propose sequence-to-point
learning, where
the input is a window of the mains and the output is a single point of the
target appliance. We use convolutional neural networks to train the model.
Interestingly, we systematically show that the convolutional neural networks
can inherently learn the signatures of the target appliances, which are
automatically added into the model to reduce the identifiability problem. We
applied the proposed neural network approaches to real-world household energy
data, and show that the methods achieve state-of-the-art performance, improving
two standard error measures by 84% and 92%.
| Chaoyun Zhang, Mingjun Zhong, Zongzuo Wang, Nigel Goddard, Charles
Sutton | null | 1612.09106 | null | null |
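
The data layout that distinguishes sequence-to-point from sequence-to-sequence
training is easy to sketch: each input window maps to the single target
reading at its midpoint, so no overlapping output windows need to be combined.
In the numpy sketch below the window width is an assumed illustrative value,
not one taken from the paper.

```python
import numpy as np

def seq2point_windows(mains, appliance, width=599):
    """Sequence-to-point training pairs: each input is a sliding window
    of the mains signal and the target is the appliance reading at the
    window midpoint. width=599 is an assumed (odd) illustrative size."""
    assert width % 2 == 1, "odd width gives a well-defined midpoint"
    half = width // 2
    X = np.lib.stride_tricks.sliding_window_view(mains, width)
    y = appliance[half : half + len(X)]
    return X, y

# Toy usage with synthetic readings of equal length.
mains = np.random.default_rng(0).random(10_000)
appliance = 0.3 * mains
X, y = seq2point_windows(mains, appliance)
print(X.shape, y.shape)    # (9402, 599) (9402,)
```
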
Modeling documents with Generative Adversarial Networks | cs.LG | This paper describes a method for using Generative Adversarial Networks to
learn distributed representations of natural language documents. We propose a
model that is based on the recently proposed Energy-Based GAN, but instead uses
a Denoising Autoencoder as the discriminator network. Document representations
are extracted from the hidden layer of the discriminator and evaluated both
quantitatively and qualitatively.
| John Glover | null | 1612.09122 | null | null |
Linear Learning with Sparse Data | cs.LG | Linear predictors are especially useful when the data is high-dimensional and
sparse. One of the standard techniques used to train a linear predictor is the
Averaged Stochastic Gradient Descent (ASGD) algorithm. We present an efficient
implementation of ASGD that avoids dense vector operations. We also describe a
translation invariant extension called Centered Averaged Stochastic Gradient
Descent (CASGD).
| Ofer Dekel | null | 1612.09147 | null | null |
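
One way to keep the averaging step sparse is to accumulate the gradients and
their time-weighted sum, so that both the current and the averaged weights are
recoverable in closed form; the constant-step sketch below illustrates this
idea and is our own derivation, not necessarily the implementation the
abstract describes.

```python
import numpy as np

class SparseASGD:
    """Averaged SGD with only sparse work per step, using
      w_t   = w_0 - eta * G_t,                         G_t = sum_r g_r,
      avg_t = w_0 - (eta / t) * ((t + 1) * G_t - H_t), H_t = sum_r r * g_r,
    so both accumulators receive only the nonzero gradient coordinates
    and no dense pass is needed until the weights are materialized.
    Constant step size kept for clarity."""
    def __init__(self, dim, eta=0.1):
        self.G = np.zeros(dim)        # dense storage, sparse updates
        self.H = np.zeros(dim)
        self.eta, self.t = eta, 0

    def step(self, idx, grad_vals):
        """idx: nonzero gradient coordinates; grad_vals: their values."""
        self.t += 1
        self.G[idx] += grad_vals
        self.H[idx] += self.t * grad_vals

    def averaged_weights(self, w0):
        t = max(self.t, 1)
        return w0 - (self.eta / t) * ((t + 1) * self.G - self.H)

opt = SparseASGD(dim=10_000, eta=0.05)
opt.step(np.array([3, 17, 42]), np.array([0.5, -1.0, 0.25]))
w_bar = opt.averaged_weights(np.zeros(10_000))   # dense only at read time
```
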
The interplay between system identification and machine learning | cs.SY cs.LG stat.ML | Learning from examples is one of the key problems in science and engineering.
It deals with function reconstruction from a finite set of direct and noisy
samples. Regularization in reproducing kernel Hilbert spaces (RKHSs) is widely
used to solve this task and includes powerful estimators such as regularization
networks. Recent achievements include the proof of the statistical consistency
of these kernel-based approaches. In parallel, many different system
identification techniques have been developed, but their interaction with
machine learning does not yet appear strong. One reason is that the RKHSs
usually
employed in machine learning do not embed the information available on dynamic
systems, e.g. BIBO stability. In addition, in system identification the
independent data assumptions routinely adopted in machine learning are never
satisfied in practice. This paper provides new results which strengthen the
connection between system identification and machine learning. Our starting
point is the introduction of RKHSs of dynamic systems. They contain functionals
over spaces defined by system inputs and allow to interpret system
identification as learning from examples. In both linear and nonlinear
settings, it is shown that this perspective permits deriving, in a relatively
simple way, conditions on RKHS stability (i.e., the property of containing only
BIBO-stable systems or predictors), also facilitating the design of new kernels
for system identification. Furthermore, we prove the convergence of the
regularized estimator to the optimal predictor under conditions typical of
dynamic systems.
| Gianluigi Pillonetto | null | 1612.09158 | null | null |
Deep neural heart rate variability analysis | cs.NE cs.AI cs.LG | Despite the pain and limited accuracy of blood tests for early recognition
of cardiovascular disease, they dominate risk screening and triage. On the
other hand, heart rate variability is non-invasive and cheap, but not
considered accurate enough for clinical practice. Here, we tackle heart beat
interval based classification with deep learning. We introduce an end-to-end
differentiable hybrid architecture, consisting of a layer of biological neuron
models of cardiac dynamics (modified FitzHugh Nagumo neurons) and several
layers of a standard feed-forward neural network. The proposed model is
evaluated on ECGs from 474 stable at-risk (coronary artery disease) patients,
and 1172 chest pain patients of an emergency department. We show that it can
significantly outperform models based on traditional heart rate variability
predictors, and can approach, or in some cases outperform, clinical blood
tests, based only on 60 seconds of inter-beat intervals.
| Tamas Madl | null | 1612.09205 | null | null |
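A minimal PyTorch sketch of the hybrid idea: a layer of Euler-integrated FitzHugh-Nagumo neurons driven by the inter-beat-interval sequence, feeding a small feed-forward classifier. The ODE parameterization, input coupling, and layer sizes are assumptions for illustration; the paper's modified neurons may differ.

```python
import torch
import torch.nn as nn

class FHNLayer(nn.Module):
    """Layer of FitzHugh-Nagumo neurons driven by inter-beat intervals,
    integrated with the forward Euler method; a, b, tau are learned end to
    end along with the rest of the network."""
    def __init__(self, n_units=8, dt=0.1):
        super().__init__()
        self.a = nn.Parameter(0.7 * torch.ones(n_units))
        self.b = nn.Parameter(0.8 * torch.ones(n_units))
        self.tau = nn.Parameter(12.5 * torch.ones(n_units))
        self.w_in = nn.Parameter(0.1 * torch.randn(n_units))
        self.dt = dt

    def forward(self, rr):                       # rr: (batch, T) inter-beat intervals
        B, T = rr.shape
        v = torch.zeros(B, self.a.numel())       # membrane variable
        w = torch.zeros_like(v)                  # recovery variable
        for t in range(T):
            I = rr[:, t:t + 1] * self.w_in       # input current per neuron
            v = v + self.dt * (v - v ** 3 / 3 - w + I)
            w = w + self.dt * (v + self.a - self.b * w) / self.tau
        return v                                 # final membrane state as features

# Hybrid model: biological layer followed by a standard feed-forward classifier.
model = nn.Sequential(FHNLayer(), nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
```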
Generalized Intersection Kernel | stat.ML cs.LG | Following the very recent line of work on the ``generalized min-max'' (GMM)
kernel, this study proposes the ``generalized intersection'' (GInt) kernel and
the related ``normalized generalized min-max'' (NGMM) kernel. In computer
vision, the (histogram) intersection kernel has been popular, and the GInt
kernel generalizes it to data which can have both negative and positive
entries. Through an extensive empirical classification study on 40 datasets
from the UCI repository, we are able to show that this (tuning-free) GInt
kernel performs fairly well.
The empirical results also demonstrate that the NGMM kernel typically
outperforms the GInt kernel. Interestingly, the NGMM kernel has another
interpretation --- it is the ``asymmetrically transformed'' version of the GInt
kernel, based on the idea of ``asymmetric hashing''. Just like the GMM kernel,
the NGMM kernel can be efficiently linearized through (e.g.,) generalized
consistent weighted sampling (GCWS), as empirically validated in our study.
Owing to the discrete nature of hashed values, it also provides a scheme for
approximate near neighbor search.
| Ping Li | null | 1612.09283 | null | null |
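On our reading of the GMM-kernel line of work, the generalization to signed data proceeds by splitting each vector into its positive and negative parts before applying the base kernel; the sketch below is written under that assumption, and the paper's exact definitions of GInt and NGMM should be consulted.

```python
import numpy as np

def pos_neg_expand(x):
    """Map a general vector to a nonnegative one, (x+, x-), with
    x+ = max(x, 0) and x- = max(-x, 0), as in the GMM-kernel literature."""
    return np.concatenate([np.maximum(x, 0), np.maximum(-x, 0)])

def gint(x, y):
    """Generalized intersection kernel (assumed form): the histogram
    intersection kernel applied after the pos/neg expansion, so vectors
    with negative entries are handled. Tuning-free."""
    u, v = pos_neg_expand(x), pos_neg_expand(y)
    return np.minimum(u, v).sum()

def gmm(x, y):
    """Generalized min-max kernel, shown for comparison."""
    u, v = pos_neg_expand(x), pos_neg_expand(y)
    return np.minimum(u, v).sum() / np.maximum(u, v).sum()
```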
Symmetry, Saddle Points, and Global Optimization Landscape of Nonconvex
Matrix Factorization | cs.LG math.OC stat.ML | We propose a general theory for studying the landscape of nonconvex
optimization with underlying symmetric structures for a class of machine
learning problems (e.g., low-rank matrix factorization, phase retrieval, and
deep linear neural networks). Specifically, we characterize the locations of
stationary points and the null space of Hessian matrices of the objective
function via the lens of invariant groups. As a major motivating example, we
apply the proposed general theory to characterize the global landscape of the
nonconvex optimization in the low-rank matrix factorization problem. In
particular, we illustrate how the rotational symmetry group gives rise to
infinitely many nonisolated strict saddle points and equivalent global minima
of the objective function. By explicitly identifying all stationary points, we
divide the entire parameter space into three regions: ($\mathcal{R}_1$) the region
containing the neighborhoods of all strict saddle points, where the objective
has negative curvatures; ($\mathcal{R}_2$) the region containing neighborhoods of all
global minima, where the objective enjoys strong convexity along certain
directions; and ($\mathcal{R}_3$) the complement of the above regions, where the
gradient has sufficiently large magnitudes. We further extend our result to the
matrix sensing problem. Such a global landscape implies strong global convergence
guarantees for popular iterative algorithms with arbitrary initial solutions.
| Xingguo Li, Junwei Lu, Raman Arora, Jarvis Haupt, Han Liu, Zhaoran
Wang, Tuo Zhao | null | 1612.09296 | null | null |
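For concreteness, the rotational symmetry behind the nonisolated stationary points can be written out for the motivating example; this is a standard formulation, and the paper's exact objective and normalization may differ.

```latex
f(U, V) = \tfrac{1}{2}\,\lVert U V^\top - M^* \rVert_F^2 ,
\qquad
f(UQ, VQ) = f(U, V)
\quad \text{for all } Q \in \mathbb{R}^{r \times r} \text{ with } Q^\top Q = I .
```

Every stationary point $(U, V)$ therefore lies on a continuous orbit $\{(UQ, VQ)\}$ of equal objective value, which is why the strict saddle points and global minima come in nonisolated families.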
The Neural Hawkes Process: A Neurally Self-Modulating Multivariate Point
Process | cs.LG stat.ML | Many events occur in the world. Some event types are stochastically excited
or inhibited---in the sense of having their probabilities elevated or
decreased---by patterns in the sequence of previous events. Discovering such
patterns can help us predict which type of event will happen next and when. We
model streams of discrete events in continuous time, by constructing a neurally
self-modulating multivariate point process in which the intensities of multiple
event types evolve according to a novel continuous-time LSTM. This generative
model allows past events to influence the future in complex and realistic ways,
by conditioning future event intensities on the hidden state of a recurrent
neural network that has consumed the stream of past events. Our model has
desirable qualitative properties. It achieves competitive likelihood and
predictive accuracy on real and synthetic datasets, including under
missing-data conditions.
| Hongyuan Mei and Jason Eisner | null | 1612.09328 | null | null |
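The model's key quantity is the intensity of each event type, read off a continuous-time LSTM state that decays between events. Below is a sketch of just that readout; the LSTM's event-update equations are omitted, and the shapes and scaled-softplus form are our rendering of the construction.

```python
import torch
import torch.nn.functional as F

def intensity(t, t_prev, c, c_bar, delta, o, W, s):
    """Event intensities of a neurally self-modulating point process at time
    t > t_prev (the time of the last event).

    Between events the LSTM cell c decays exponentially toward a target c_bar
    at learned rate delta; intensities are a scaled-softplus readout of the
    decayed hidden state, which keeps them positive while still allowing past
    events to inhibit future ones. Shapes: c, c_bar, delta, o are (hidden,);
    W is (num_types, hidden); s is (num_types,).
    """
    c_t = c_bar + (c - c_bar) * torch.exp(-delta * (t - t_prev))
    h_t = o * torch.tanh(c_t)                  # decayed hidden state
    return s * F.softplus(W @ h_t / s)         # lambda_k(t) for each event type k
```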
Data driven estimation of Laplace-Beltrami operator | cs.CG cs.LG math.ST stat.TH | Approximations of Laplace-Beltrami operators on manifolds through graph
Laplacians have become popular tools in data analysis and machine learning.
These discretized operators usually depend on bandwidth parameters whose tuning
remains a theoretical and practical problem. In this paper, we address this
problem for the unnormalized graph Laplacian by establishing an oracle
inequality that opens the door to a well-founded data-driven procedure for the
bandwidth selection. Our approach relies on recent results by Lacour and
Massart [LM15] on the so-called Lepski's method.
| Fr\'ed\'eric Chazal (DATASHAPE), Ilaria Giulini (DATASHAPE), Bertrand
Michel (LSTA) | null | 1612.09434 | null | null |
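For reference, here is a sketch of the object under study: the unnormalized graph Laplacian built from a Gaussian kernel with bandwidth h, the parameter the paper's oracle inequality is designed to select. The particular scaling convention below is one common choice, not necessarily the paper's.

```python
import numpy as np

def unnormalized_graph_laplacian(X, h):
    """Unnormalized graph Laplacian L = D - W on points X (n, d), with
    Gaussian weights W_ij = exp(-||x_i - x_j||^2 / (4 h)); scalings of the
    bandwidth vary across the literature. The choice of h is exactly the
    tuning problem a data-driven selection procedure must solve."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (4 * h))
    np.fill_diagonal(W, 0.0)          # no self-loops
    return np.diag(W.sum(1)) - W
```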
Automatic Discoveries of Physical and Semantic Concepts via Association
Priors of Neuron Groups | cs.LG | The recent successful deep neural networks are largely trained in a
supervised manner. It {\it associates} complex patterns of input samples with
neurons in the last layer, which form representations of {\it concepts}. In
spite of these successes, the properties of the complex patterns associated
with a learned concept remain elusive. In this work, by analyzing how neurons
are associated with concepts in supervised networks, we hypothesize that, with
proper priors to regulate learning, neural networks can automatically associate
neurons in the intermediate layers with concepts aligned with real-world
concepts, even when trained only with labels that associate concepts with
top-level neurons, which is a plausible route toward unsupervised learning. We
develop a prior to verify this hypothesis and experimentally find that the
proposed prior helps neural networks automatically learn both basic physical
concepts at the lower
layers, e.g., rotation of filters, and highly semantic concepts at the higher
layers, e.g., fine-grained categories of an entry-level category.
| Shuai Li, Kui Jia, Xiaogang Wang | null | 1612.09438 | null | null |
Adaptive Lambda Least-Squares Temporal Difference Learning | cs.LG cs.AI stat.ML | Temporal Difference learning or TD($\lambda$) is a fundamental algorithm in
the field of reinforcement learning. However, setting TD's $\lambda$ parameter,
which controls the timescale of TD updates, is generally left up to the
practitioner. We formalize the $\lambda$ selection problem as a bias-variance
trade-off where the solution is the value of $\lambda$ that leads to the
smallest Mean Squared Value Error (MSVE). To solve this trade-off we suggest
applying Leave-One-Trajectory-Out Cross-Validation (LOTO-CV) to search the
space of $\lambda$ values. Unfortunately, this approach is too computationally
expensive for most practical applications. For Least Squares TD (LSTD) we show
that LOTO-CV can be implemented efficiently to automatically tune $\lambda$ and
apply function optimization methods to efficiently search the space of
$\lambda$ values. The resulting algorithm, ALLSTD, is parameter-free, and our
experiments demonstrate that ALLSTD is significantly computationally faster
than the na\"{i}ve LOTO-CV implementation while achieving similar performance.
| Timothy A. Mann and Hugo Penedones and Shie Mannor and Todd Hester | null | 1612.09465 | null | null |
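Here is a sketch of plain LSTD($\lambda$), the estimator that ALLSTD tunes. The decomposition of $A$ and $b$ into per-trajectory sums is what makes leave-one-trajectory-out refits cheap: each held-out trajectory's statistics can simply be subtracted. The regularization constant and interface are our assumptions.

```python
import numpy as np

def lstd_lambda(trajectories, lam, gamma=0.99, reg=1e-3):
    """LSTD(lambda): solve A theta = b with
        A = sum_t z_t (phi_t - gamma * phi_{t+1})^T,   b = sum_t z_t * r_t,
        z_t = gamma * lam * z_{t-1} + phi_t            (eligibility trace).

    trajectories: list of trajectories, each a list of (phi, r, phi_next)
    triples of feature vector, reward, and next-state feature vector.
    """
    d = len(trajectories[0][0][0])
    A = reg * np.eye(d)                 # small ridge term for invertibility
    b = np.zeros(d)
    for traj in trajectories:
        z = np.zeros(d)                 # trace resets at trajectory start
        for phi, r, phi_next in traj:
            z = gamma * lam * z + phi
            A += np.outer(z, phi - gamma * phi_next)
            b += z * r
    return np.linalg.solve(A, b)        # value-function weights theta
```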
Linking the Neural Machine Translation and the Prediction of Organic
Chemistry Reactions | cs.LG | Finding the main product of a chemical reaction is one of the important
problems of organic chemistry. This paper describes a method of applying a
neural machine translation model to the prediction of organic chemical
reactions. In order to translate 'reactants and reagents' to 'products', a
gated recurrent unit based sequence-to-sequence model and a parser to generate
input tokens for model from reaction SMILES strings were built. Training sets
are composed of reactions from the patent databases, and reactions manually
generated applying the elementary reactions in an organic chemistry textbook of
Wade. The trained models were tested by examples and problems in the textbook.
The prediction process does not need manual encoding of rules (e.g., SMARTS
transformations) to predict products, hence it only needs sufficient training
reaction sets to learn new types of reactions.
| Juno Nam and Jurae Kim | null | 1612.09529 | null | null |
Counterfactual Prediction with Deep Instrumental Variables Networks | stat.AP cs.LG stat.ML | We are in the middle of a remarkable rise in the use and capability of
artificial intelligence. Much of this growth has been fueled by the success of
deep learning architectures: models that map from observables to outputs via
multiple layers of latent representations. These deep learning algorithms are
effective tools for unstructured prediction, and they can be combined in AI
systems to solve complex automated reasoning problems. This paper provides a
recipe for combining ML algorithms to solve for causal effects in the presence
of instrumental variables -- sources of treatment randomization that are
conditionally independent from the response. We show that a flexible IV
specification resolves into two prediction tasks that can be solved with deep
neural nets: a first-stage network for treatment prediction and a second-stage
network whose loss function involves integration over the conditional treatment
distribution. This Deep IV framework imposes some specific structure on the
stochastic gradient descent routine used for training, but it is general enough
that we can take advantage of off-the-shelf ML capabilities and avoid extensive
algorithm customization. We outline how to obtain out-of-sample causal
validation in order to avoid over-fitting. We also introduce schemes for both
Bayesian and frequentist inference: the former via a novel adaptation of
dropout training, and the latter via a data splitting routine.
| Jason Hartford, Greg Lewis, Kevin Leyton-Brown, Matt Taddy | null | 1612.09596 | null | null |
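A minimal PyTorch sketch of the two-stage recipe: a first-stage network fits the conditional treatment distribution given covariates and instruments, and the second-stage loss integrates the outcome network over that distribution via Monte Carlo. The single-Gaussian first stage and all sizes are simplifying assumptions; the paper uses a mixture density network, and an unbiased gradient of the second-stage loss would typically use two independent treatment samples.

```python
import torch
import torch.nn as nn

# Stage 1: model p(t | x, z) with a Gaussian head (a simplifying assumption;
# the Deep IV paper uses a mixture density network here).
treat_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))  # -> (mu, log_sigma)
# Stage 2: outcome network h(t, x).
h_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

def stage1_loss(x, z, t):
    """Gaussian negative log-likelihood for the treatment network."""
    mu, log_sig = treat_net(torch.cat([x, z], 1)).chunk(2, 1)
    return (((t - mu) / log_sig.exp()) ** 2 / 2 + log_sig).mean()

def stage2_loss(x, z, y, n_samples=8):
    """Squared loss against the integral of h over the fitted treatment
    distribution, estimated by Monte Carlo sampling from stage 1."""
    mu, log_sig = treat_net(torch.cat([x, z], 1)).chunk(2, 1)
    t_samp = mu + log_sig.exp() * torch.randn(x.size(0), n_samples)
    h_bar = torch.stack([h_net(torch.cat([t_samp[:, j:j + 1], x], 1))
                         for j in range(n_samples)]).mean(0)
    return ((y - h_bar) ** 2).mean()
```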
Deep Neural Networks to Enable Real-time Multimessenger Astrophysics | astro-ph.IM astro-ph.GA astro-ph.HE cs.LG gr-qc | Gravitational wave astronomy has set in motion a scientific revolution. To
further enhance the science reach of this emergent field, there is a pressing
need to increase the depth and speed of the gravitational wave algorithms that
have enabled these groundbreaking discoveries. To contribute to this effort, we
introduce Deep Filtering, a new highly scalable method for end-to-end
time-series signal processing, based on a system of two deep convolutional
neural networks, which we designed for classification and regression to rapidly
detect and estimate parameters of signals in highly noisy time-series data
streams. We demonstrate a novel training scheme with gradually increasing noise
levels, and a transfer learning procedure between the two networks. We showcase
the application of this method for the detection and parameter estimation of
gravitational waves from binary black hole mergers. Our results indicate that
Deep Filtering significantly outperforms conventional machine learning
techniques and achieves performance similar to matched filtering while being
several orders of magnitude faster, thus allowing real-time processing of raw
big data with minimal resources. More importantly, Deep Filtering extends
the range of gravitational wave signals that can be detected with ground-based
gravitational wave detectors. This framework leverages recent advances in
artificial intelligence algorithms and emerging hardware architectures, such as
deep-learning-optimized GPUs, to facilitate real-time searches of gravitational
wave sources and their electromagnetic and astro-particle counterparts.
| Daniel George, E. A. Huerta | 10.1103/PhysRevD.97.044039 | 1701.00008 | null | null |
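A sketch of the two-network system the abstract describes: one small 1-D convolutional network for detection (classification) and one for parameter estimation (regression) over whitened strain time series. The architectures, input length, and output parameterization are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

def conv_branch():
    # Small 1-D CNN over a whitened detector-strain time series; kernel sizes
    # and depths here are illustrative, not the paper's configuration.
    return nn.Sequential(
        nn.Conv1d(1, 16, 16), nn.ReLU(), nn.MaxPool1d(4),
        nn.Conv1d(16, 32, 8), nn.ReLU(), nn.MaxPool1d(4),
        nn.Conv1d(32, 64, 8), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        nn.Flatten(), nn.Linear(64, 64), nn.ReLU())

classifier = nn.Sequential(conv_branch(), nn.Linear(64, 2))  # signal vs. noise
regressor = nn.Sequential(conv_branch(), nn.Linear(64, 2))   # e.g., the two component masses
```

Following the abstract's training scheme, one would start training at high signal-to-noise ratio and gradually increase the noise level, then transfer the classifier's convolutional weights to initialize the regressor.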