categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list) |
---|---|---|---|---|---|---|---|---|---|---|
cs.RO cs.AI cs.LG
| null |
1612.05533
| null | null |
http://arxiv.org/pdf/1612.05533v3
|
2017-07-23T16:36:33Z
|
2016-12-16T16:15:26Z
|
Deep Reinforcement Learning with Successor Features for Navigation
across Similar Environments
|
In this paper we consider the problem of robot navigation in simple maze-like
environments where the robot has to rely on its onboard sensors to perform the
navigation task. In particular, we are interested in solutions to this problem
that do not require localization, mapping or planning. Additionally, we require
that our solution can quickly adapt to new situations (e.g., changing
navigation goals and environments). To meet these criteria we frame this
problem as a sequence of related reinforcement learning tasks. We propose a
successor feature based deep reinforcement learning algorithm that can learn to
transfer knowledge from previously mastered navigation tasks to new problem
instances. Our algorithm substantially decreases the required learning time
after the first task instance has been solved, which makes it easily adaptable
to changing environments. We validate our method in both simulated and real
robot experiments with a Robotino and compare it to a set of baseline methods
including classical planning-based navigation.
|
[
"Jingwei Zhang, Jost Tobias Springenberg, Joschka Boedecker, Wolfram\n Burgard",
"['Jingwei Zhang' 'Jost Tobias Springenberg' 'Joschka Boedecker'\n 'Wolfram Burgard']"
] |
cs.LG
| null |
1612.05627
| null | null |
http://arxiv.org/pdf/1612.05627v1
|
2016-12-13T00:54:03Z
|
2016-12-13T00:54:03Z
|
Models, networks and algorithmic complexity
|
I aim to show that models, classification or generating functions,
invariances and datasets are algorithmically equivalent concepts once properly
defined, and provide some concrete examples of them. I then show that a) neural
networks (NNs) of different kinds can be seen to implement models, b) that
perturbations of inputs and nodes in NNs trained to optimally implement simple
models propagate strongly, c) that there is a framework in which recurrent,
deep and shallow networks can be seen to fall into a descriptive power
hierarchy in agreement with notions from the theory of recursive functions. The
motivation for these definitions and following analysis lies in the context of
cognitive neuroscience, and in particular in Ruffini (2016), where the concept
of model is used extensively, as is the concept of algorithmic complexity.
|
[
"Giulio Ruffini",
"['Giulio Ruffini']"
] |
cs.AI cs.LG stat.ML
| null |
1612.05628
| null | null |
http://arxiv.org/pdf/1612.05628v5
|
2017-06-14T14:29:04Z
|
2016-12-16T20:49:35Z
|
An Alternative Softmax Operator for Reinforcement Learning
|
A softmax operator applied to a set of values acts somewhat like the
maximization function and somewhat like an average. In sequential decision
making, softmax is often used in settings where it is necessary to maximize
utility but also to hedge against problems that arise from putting all of one's
weight behind a single maximum utility decision. The Boltzmann softmax operator
is the most commonly used softmax operator in this setting, but we show that
this operator is prone to misbehavior. In this work, we study a differentiable
softmax operator that, among other properties, is a non-expansion ensuring a
convergent behavior in learning and planning. We introduce a variant of the SARSA
algorithm that, by utilizing the new operator, computes a Boltzmann policy with
a state-dependent temperature parameter. We show that the algorithm is
convergent and that it performs favorably in practice.
|
[
"['Kavosh Asadi' 'Michael L. Littman']",
"Kavosh Asadi, Michael L. Littman"
] |
cs.LG cs.AI cs.CL
| null |
1612.05688
| null | null |
http://arxiv.org/pdf/1612.05688v3
|
2017-11-13T05:52:42Z
|
2016-12-17T01:03:55Z
|
A User Simulator for Task-Completion Dialogues
|
Despite widespread interest in reinforcement learning for task-oriented
dialogue systems, several obstacles can frustrate research and development
progress. First, reinforcement learners typically require interaction with the
environment, so conventional dialogue corpora cannot be used directly. Second,
each task presents specific challenges, requiring a separate corpus of
task-specific annotated data. Third, collecting and annotating human-machine or
human-human conversations for task-oriented dialogues requires extensive domain
knowledge. Because building an appropriate dataset can be both financially
costly and time-consuming, one popular approach is to build a user simulator
based upon a corpus of example dialogues. Then, one can train reinforcement
learning agents in an online fashion as they interact with the simulator.
Dialogue agents trained on these simulators can serve as an effective starting
point. Once agents master the simulator, they may be deployed in a real
environment to interact with humans, and continue to be trained online. To ease
empirical algorithmic comparisons in dialogues, this paper introduces a new,
publicly available simulation framework, where our simulator, designed for the
movie-booking domain, leverages both rules and collected data. The simulator
supports two tasks: movie ticket booking and movie seeking. Finally, we
demonstrate several agents and detail the procedure to add and test your own
agent in the proposed framework.
|
[
"['Xiujun Li' 'Zachary C. Lipton' 'Bhuwan Dhingra' 'Lihong Li'\n 'Jianfeng Gao' 'Yun-Nung Chen']",
"Xiujun Li, Zachary C. Lipton, Bhuwan Dhingra, Lihong Li, Jianfeng Gao,\n Yun-Nung Chen"
] |
quant-ph cs.AI cs.LG cs.NE math.OC
| null |
1612.05695
| null | null |
http://arxiv.org/pdf/1612.05695v3
|
2019-01-03T20:49:47Z
|
2016-12-17T02:33:41Z
|
Reinforcement Learning Using Quantum Boltzmann Machines
|
We investigate whether quantum annealers with select chip layouts can
outperform classical computers in reinforcement learning tasks. We associate a
transverse field Ising spin Hamiltonian with a layout of qubits similar to that
of a deep Boltzmann machine (DBM) and use simulated quantum annealing (SQA) to
numerically simulate quantum sampling from this system. We design a
reinforcement learning algorithm in which the set of visible nodes representing
the states and actions of an optimal policy are the first and last layers of
the deep network. In the absence of a transverse field, our simulations show that
DBMs are trained more effectively than restricted Boltzmann machines (RBM) with
the same number of nodes. We then develop a framework for training the network
as a quantum Boltzmann machine (QBM) in the presence of a significant
transverse field for reinforcement learning. This method also outperforms the
reinforcement learning method that uses RBMs.
|
[
"Daniel Crawford, Anna Levit, Navid Ghadermarzy, Jaspreet S. Oberoi,\n Pooya Ronagh",
"['Daniel Crawford' 'Anna Levit' 'Navid Ghadermarzy' 'Jaspreet S. Oberoi'\n 'Pooya Ronagh']"
] |
math.OC cs.LG stat.ML
| null |
1612.05708
| null | null |
http://arxiv.org/pdf/1612.05708v1
|
2016-12-17T05:26:46Z
|
2016-12-17T05:26:46Z
|
Mutual information for fitting deep nonlinear models
|
Deep nonlinear models pose a challenge for fitting parameters due to lack of
knowledge of the hidden layer and the potentially non-affine relation of the
initial and observed layers. In the present work we investigate the use of
information theoretic measures such as mutual information and Kullback-Leibler
(KL) divergence as objective functions for fitting such models without
knowledge of the hidden layer. We investigate one model as a proof of concept
and one application to cognitive performance. We further investigate the use of
optimizers with these methods. Mutual information is largely successful as an
objective, depending on the parameters. KL divergence is found to be similarly
successful, given some knowledge of the statistics of the hidden layer.
|
[
"['Jacob S. Hunter' 'Nathan O. Hodas']",
"Jacob S. Hunter (1) and Nathan O. Hodas (1) ((1) Pacific Northwest\n National Laboratory)"
] |
cs.IR cs.AI cs.LG
| null |
1612.05729
| null | null |
http://arxiv.org/pdf/1612.05729v1
|
2016-12-17T10:50:41Z
|
2016-12-17T10:50:41Z
|
Exploiting sparsity to build efficient kernel based collaborative
filtering for top-N item recommendation
|
The increasing availability of implicit feedback datasets has raised the
interest in developing effective collaborative filtering techniques able to
deal asymmetrically with unambiguous positive feedback and ambiguous negative
feedback. In this paper, we propose a principled kernel-based collaborative
filtering method for top-N item recommendation with implicit feedback. We
present an efficient implementation using the linear kernel, and we show how to
generalize it to kernels of the dot product family preserving the efficiency.
We also investigate the elements which influence the sparsity of a standard
cosine kernel. This analysis shows that the sparsity of the kernel strongly
depends on the properties of the dataset, in particular on the long tail
distribution. We compare our method with state-of-the-art algorithms, achieving
good results both in terms of efficiency and effectiveness.
|
[
"Mirko Polato and Fabio Aiolli",
"['Mirko Polato' 'Fabio Aiolli']"
] |
stat.ML cs.LG
| null |
1612.05730
| null | null |
http://arxiv.org/pdf/1612.05730v2
|
2016-12-21T13:53:15Z
|
2016-12-17T11:00:49Z
|
Towards Wide Learning: Experiments in Healthcare
|
In this paper, a Wide Learning architecture is proposed that attempts to
automate the feature engineering portion of the machine learning (ML) pipeline.
Feature engineering is widely considered as the most time consuming and expert
knowledge demanding portion of any ML task. The proposed feature recommendation
approach is tested on 3 healthcare datasets: a) PhysioNet Challenge 2016
dataset of phonocardiogram (PCG) signals, b) MIMIC II blood pressure
classification dataset of photoplethysmogram (PPG) signals and c) an emotion
classification dataset of PPG signals. While the proposed method beats the
state-of-the-art techniques for the 2nd and 3rd datasets, it reaches 94.38% of the
accuracy level of the winner of PhysioNet Challenge 2016. In all cases, the
effort to reach a satisfactory performance was drastically less (a few days)
than manual feature engineering.
|
[
"Snehasis Banerjee, Tanushyam Chattopadhyay, Swagata Biswas, Rohan\n Banerjee, Anirban Dutta Choudhury, Arpan Pal and Utpal Garain"
] |
cs.LG stat.ML
| null |
1612.05740
| null | null |
http://arxiv.org/pdf/1612.05740v1
|
2016-12-17T11:57:45Z
|
2016-12-17T11:57:45Z
|
Machine Learning, Linear and Bayesian Models for Logistic Regression in
Failure Detection Problems
|
In this work, we study the use of logistic regression for manufacturing
failure detection. As a data set for the analysis, we used the data from the
Kaggle competition Bosch Production Line Performance. We considered the use of
machine learning, linear and Bayesian models. For the machine learning approach, we
analyzed the XGBoost tree-based classifier to obtain highly scored classification.
Using the generalized linear model for logistic regression makes it possible to
analyze the influence of the factors under study. The Bayesian approach for
logistic regression gives the statistical distribution for the parameters of
the model. It can be useful in the probabilistic analysis, e.g. risk
assessment.
|
[
"B. Pavlyshenko"
] |
cs.CV cs.LG
| null |
1612.05753
| null | null |
http://arxiv.org/pdf/1612.05753v2
|
2017-02-18T21:23:14Z
|
2016-12-17T13:29:59Z
|
Learning to predict where to look in interactive environments using deep
recurrent q-learning
|
Bottom-Up (BU) saliency models do not perform well in complex interactive
environments where humans are actively engaged in tasks (e.g., sandwich making
and playing video games). In this paper, we leverage Reinforcement Learning
(RL) to highlight task-relevant locations of input frames. We propose a soft
attention mechanism combined with the Deep Q-Network (DQN) model to teach an RL
agent how to play a game and where to look by focusing on the most pertinent
parts of its visual input. Our evaluations on several Atari 2600 games show
that the soft attention based model could predict fixation locations
significantly better than bottom-up models such as the Itti-Koch saliency and
Graph-Based Visual Saliency (GBVS) models.
|
[
"['Sajad Mousavi' 'Michael Schukat' 'Enda Howley' 'Ali Borji'\n 'Nasser Mozayani']",
"Sajad Mousavi, Michael Schukat, Enda Howley, Ali Borji and Nasser\n Mozayani"
] |
cs.LG
| null |
1612.05794
| null | null |
http://arxiv.org/pdf/1612.05794v1
|
2016-12-17T17:01:08Z
|
2016-12-17T17:01:08Z
|
A new recurrent neural network based predictive model for Faecal
Calprotectin analysis: A retrospective study
|
Faecal Calprotectin (FC) is a surrogate marker for intestinal inflammation,
as seen in Inflammatory Bowel Disease (IBD), but not for cancer. In this
retrospective study of 804 patients, an enhanced benchmark predictive model for
analyzing FC is developed, based on a novel state-of-the-art Echo State Network
(ESN), an advanced dynamic recurrent neural network which implements a
biologically plausible architecture, and a supervised learning mechanism. The
proposed machine learning driven predictive model is benchmarked against a
conventional logistic regression model, demonstrating statistically significant
performance improvements.
|
[
"Zeeshan Khawar Malik, Zain U. Hussain, Ziad Kobti, Charlie W. Lees,\n Newton Howard and Amir Hussain",
"['Zeeshan Khawar Malik' 'Zain U. Hussain' 'Ziad Kobti' 'Charlie W. Lees'\n 'Newton Howard' 'Amir Hussain']"
] |
cs.CV cs.LG cs.NE
| null |
1612.05836
| null | null |
http://arxiv.org/pdf/1612.05836v1
|
2016-12-17T23:33:37Z
|
2016-12-17T23:33:37Z
|
EgoTransfer: Transferring Motion Across Egocentric and Exocentric
Domains using Deep Neural Networks
|
Mirror neurons have been observed in the primary motor cortex of primate
species, in particular in humans and monkeys. A mirror neuron fires when a
person performs a certain action, and also when he observes the same action
being performed by another person. A crucial step towards building fully
autonomous intelligent systems with human-like learning abilities is the
capability in modeling the mirror neuron. On one hand, the abundance of
egocentric cameras in the past few years has offered the opportunity to study a
lot of vision problems from the first-person perspective. A great deal of
interesting research has been done during the past few years, trying to explore
various computer vision tasks from the perspective of the self. On the other
hand, videos recorded by traditional static cameras, capture humans performing
different actions from an exocentric third-person perspective. In this work, we
take the first step towards relating motion information across these two
perspectives. We train models that predict motion in an egocentric view, by
observing it from an exocentric view, and vice versa. This allows models to
predict what an egocentric motion would look like from outside. To do so, we
train linear and nonlinear models and evaluate their performance in terms of
retrieving the egocentric (exocentric) motion features, while having access to
an exocentric (egocentric) motion feature. Our experimental results demonstrate
that motion information can be successfully transferred across the two views.
|
[
"Shervin Ardeshir, Krishna Regmi, and Ali Borji",
"['Shervin Ardeshir' 'Krishna Regmi' 'Ali Borji']"
] |
cs.LG stat.ML
| null |
1612.05888
| null | null |
http://arxiv.org/pdf/1612.05888v2
|
2017-06-26T10:48:23Z
|
2016-12-18T10:21:20Z
|
Building Diversified Multiple Trees for Classification in High
Dimensional Noisy Biomedical Data
|
It is common that a trained classification model is applied to the operating
data that deviates from the training data because of noise. This paper
demonstrates that an ensemble classifier, Diversified Multiple Tree (DMT), is
more robust in classifying noisy data than other widely used ensemble methods.
DMT is tested on three real world biomedical data sets from different
laboratories in comparison with four benchmark ensemble classifiers.
Experimental results show that DMT is significantly more accurate than other
benchmark ensemble classifiers on noisy test data. We also discuss a limitation
of DMT and its possible variations.
|
[
"['Jiuyong Li' 'Lin Liu' 'Jixue Liu' 'Ryan Green']",
"Jiuyong Li, Lin Liu, Jixue Liu and Ryan Green"
] |
cs.CV cs.LG
| null |
1612.05968
| null | null |
http://arxiv.org/pdf/1612.05968v1
|
2016-12-18T18:31:11Z
|
2016-12-18T18:31:11Z
|
Deep Multi-instance Networks with Sparse Label Assignment for Whole
Mammogram Classification
|
Mammogram classification is directly related to computer-aided diagnosis of
breast cancer. Traditional methods require great effort to annotate the
training data by costly manual labeling and specialized computational models to
detect these annotations during testing. Inspired by the success of using deep
convolutional features for natural image analysis and multi-instance learning
for labeling a set of instances/patches, we propose end-to-end trained deep
multi-instance networks for mass classification based on the whole mammogram
without the aforementioned costly need to annotate the training data. We
explore three different schemes to construct deep multi-instance networks for
whole mammogram classification. Experimental results on the INbreast dataset
demonstrate the robustness of the proposed deep networks compared to previous work
using segmentation and detection annotations in the training.
|
[
"Wentao Zhu, Qi Lou, Yeeleng Scott Vang, Xiaohui Xie",
"['Wentao Zhu' 'Qi Lou' 'Yeeleng Scott Vang' 'Xiaohui Xie']"
] |
cs.CV cs.LG
| null |
1612.05970
| null | null |
http://arxiv.org/pdf/1612.05970v2
|
2017-06-09T21:32:38Z
|
2016-12-18T18:40:21Z
|
Adversarial Deep Structural Networks for Mammographic Mass Segmentation
|
Mass segmentation is an important task in mammogram analysis, providing
effective morphological features and regions of interest (ROI) for mass
detection and classification. Inspired by the success of using deep
convolutional features for natural image analysis and conditional random fields
(CRF) for structural learning, we propose an end-to-end network for
mammographic mass segmentation. The network employs a fully convolutional
network (FCN) to model the potential function, followed by a CRF to perform
structural learning. Because the mass distribution varies greatly with pixel
position, the FCN is combined with a position prior for the task. Due to the
small size of mammogram datasets, we use adversarial training to control
over-fitting. Four models with different convolutional kernels are further
fused to improve the segmentation results. Experimental results on two public
datasets, INbreast and DDSM-BCRP, show that our end-to-end network combined
with adversarial training achieves state-of-the-art results.
|
[
"Wentao Zhu, Xiang Xiang, Trac D. Tran, Xiaohui Xie"
] |
cs.AR cs.CR cs.LG cs.NE
|
10.1109/TCSI.2017.2698019
|
1612.05974
| null | null |
http://arxiv.org/abs/1612.05974v3
|
2017-04-23T17:39:09Z
|
2016-12-18T19:20:42Z
|
An IoT Endpoint System-on-Chip for Secure and Energy-Efficient
Near-Sensor Analytics
|
Near-sensor data analytics is a promising direction for IoT endpoints, as it
minimizes energy spent on communication and reduces network load - but it also
poses security concerns, as valuable data is stored or sent over the network at
various stages of the analytics pipeline. Using encryption to protect sensitive
data at the boundary of the on-chip analytics engine is a way to address data
security issues. To cope with the combined workload of analytics and encryption
in a tight power envelope, we propose Fulmine, a System-on-Chip based on a
tightly-coupled multi-core cluster augmented with specialized blocks for
compute-intensive data processing and encryption functions, supporting software
programmability for regular computing tasks. The Fulmine SoC, fabricated in
65nm technology, consumes less than 20mW on average at 0.8V, achieving an
efficiency of up to 70pJ/B in encryption, 50pJ/px in convolution, or up to
25MIPS/mW in software. As a strong argument for real-life flexible application
of our platform, we show experimental results for three secure analytics use
cases: secure autonomous aerial surveillance with a state-of-the-art deep CNN
consuming 3.16pJ per equivalent RISC op; local CNN-based face detection with
secured remote recognition in 5.74pJ/op; and seizure detection with encrypted
data collection from EEG within 12.7pJ/op.
|
[
"Francesco Conti, Robert Schilling, Pasquale Davide Schiavone, Antonio\n Pullini, Davide Rossi, Frank Kagan G\\\"urkaynak, Michael Muehlberghuber,\n Michael Gautschi, Igor Loi, Germain Haugou, Stefan Mangard, Luca Benini",
"['Francesco Conti' 'Robert Schilling' 'Pasquale Davide Schiavone'\n 'Antonio Pullini' 'Davide Rossi' 'Frank Kagan Gürkaynak'\n 'Michael Muehlberghuber' 'Michael Gautschi' 'Igor Loi' 'Germain Haugou'\n 'Stefan Mangard' 'Luca Benini']"
] |
cs.AI cs.LG stat.ML
| null |
1612.06000
| null | null |
http://arxiv.org/pdf/1612.06000v1
|
2016-12-18T21:51:10Z
|
2016-12-18T21:51:10Z
|
Sample-efficient Deep Reinforcement Learning for Dialog Control
|
Representing a dialog policy as a recurrent neural network (RNN) is
attractive because it handles partial observability, infers a latent
representation of state, and can be optimized with supervised learning (SL) or
reinforcement learning (RL). For RL, a policy gradient approach is natural, but
is sample inefficient. In this paper, we present 3 methods for reducing the
number of dialogs required to optimize an RNN-based dialog policy with RL. The
key idea is to maintain a second RNN which predicts the value of the current
policy, and to apply experience replay to both networks. On two tasks, these
methods reduce the number of dialogs/episodes required by about a third, vs.
standard policy gradient methods.
|
[
"Kavosh Asadi, Jason D. Williams"
] |
cs.LG stat.ML
| null |
1612.06003
| null | null |
http://arxiv.org/pdf/1612.06003v2
|
2018-09-08T12:38:31Z
|
2016-12-18T22:14:36Z
|
Inexact Proximal Gradient Methods for Non-convex and Non-smooth
Optimization
|
In machine learning research, the proximal gradient methods are popular for
solving various optimization problems with non-smooth regularization. Inexact
proximal gradient methods are extremely important when exactly solving the
proximal operator is time-consuming, or the proximal operator does not have an
analytic solution. However, existing inexact proximal gradient methods only
consider convex problems. The knowledge of inexact proximal gradient methods in
the non-convex setting is very limited. Moreover, for some machine learning
models, there is still no proposed solver for exactly solving the proximal
operator. To address this challenge, in this paper, we first propose three
inexact proximal gradient algorithms, including the basic version and
Nesterov's accelerated version. After that, we provide a theoretical analysis
of the basic and Nesterov's accelerated versions. The theoretical results show
that our inexact proximal gradient algorithms can have the same convergence
rates as the ones of exact proximal gradient algorithms in the non-convex
setting.
Finally, we show the applications of our inexact proximal gradient algorithms
on three representative non-convex learning problems. All experimental results
confirm the superiority of our new inexact proximal gradient algorithms.
|
[
"Bin Gu and De Wang and Zhouyuan Huo and Heng Huang",
"['Bin Gu' 'De Wang' 'Zhouyuan Huo' 'Heng Huang']"
] |
cs.LG cs.AI
| null |
1612.06018
| null | null |
http://arxiv.org/pdf/1612.06018v2
|
2017-07-26T18:53:51Z
|
2016-12-19T01:09:23Z
|
Self-Correcting Models for Model-Based Reinforcement Learning
|
When an agent cannot represent a perfectly accurate model of its
environment's dynamics, model-based reinforcement learning (MBRL) can fail
catastrophically. Planning involves composing the predictions of the model;
when flawed predictions are composed, even minor errors can compound and render
the model useless for planning. Hallucinated Replay (Talvitie 2014) trains the
model to "correct" itself when it produces errors, substantially improving MBRL
with flawed models. This paper theoretically analyzes this approach,
illuminates settings in which it is likely to be effective or ineffective, and
presents a novel error bound, showing that a model's ability to self-correct is
more tightly related to MBRL performance than one-step prediction error. These
results inspire an MBRL algorithm for deterministic MDPs with performance
guarantees that are robust to model class limitations.
|
[
"Erik Talvitie",
"['Erik Talvitie']"
] |
cs.LG
| null |
1612.06052
| null | null |
http://arxiv.org/pdf/1612.06052v2
|
2017-08-17T06:56:17Z
|
2016-12-19T05:54:18Z
|
Quantization and Training of Low Bit-Width Convolutional Neural Networks
for Object Detection
|
We present LBW-Net, an efficient optimization based method for quantization
and training of the low bit-width convolutional neural networks (CNNs).
Specifically, we quantize the weights to zero or powers of two by minimizing
the Euclidean distance between full-precision weights and quantized weights
during backpropagation. We characterize the combinatorial nature of the low
bit-width quantization problem. For 2-bit (ternary) CNNs, the quantization of
$N$ weights can be done by an exact formula in $O(N\log N)$ complexity. When
the bit-width is three and above, we further propose a semi-analytical
thresholding scheme with a single free parameter for quantization that is
computationally inexpensive. The free parameter is further determined by
network retraining and object detection tests. LBW-Net has several desirable
advantages over full-precision CNNs, including considerable memory savings,
energy efficiency, and faster deployment. Our experiments on PASCAL VOC dataset
show that compared with its 32-bit floating-point counterpart, the performance
of the 6-bit LBW-Net is nearly lossless in the object detection tasks, and can
even do better in some real-world visual scenes, while empirically enjoying
more than 4$\times$ faster deployment.
|
[
"['Penghang Yin' 'Shuai Zhang' 'Yingyong Qi' 'Jack Xin']",
"Penghang Yin, Shuai Zhang, Yingyong Qi, Jack Xin"
] |
cs.CV cs.LG
| null |
1612.06070
| null | null |
http://arxiv.org/pdf/1612.06070v1
|
2016-12-19T08:21:04Z
|
2016-12-19T08:21:04Z
|
On Random Weights for Texture Generation in One Layer Neural Networks
|
Recent work in the literature has shown experimentally that one can use the
lower layers of a trained convolutional neural network (CNN) to model natural
textures. More interestingly, it has also been experimentally shown that only
one layer with random filters can also model textures although with less
variability. In this paper we ask why one-layer CNNs with
random filters are so effective in generating textures. We theoretically show
that one-layer convolutional architectures (without a non-linearity), paired
with an energy function used in previous literature, can in fact preserve
and modulate frequency coefficients in a manner so that random weights and
pretrained weights will generate the same type of images. Based on the results
of this analysis we question whether similar properties hold in the case where
one uses one convolution layer with a non-linearity. We show that in the case
of a ReLU non-linearity there are situations where only one input will give the
minimum possible energy whereas in the case of no nonlinearity, there are
always infinite solutions that will give the minimum possible energy. Thus we
can show that in certain situations adding a ReLU non-linearity generates less
variable images.
|
[
"Mihir Mongia and Kundan Kumar and Akram Erraqabi and Yoshua Bengio"
] |
stat.ML cs.LG
| null |
1612.06083
| null | null |
http://arxiv.org/pdf/1612.06083v1
|
2016-12-19T09:08:59Z
|
2016-12-19T09:08:59Z
|
Hierarchical Partitioning of the Output Space in Multi-label Data
|
Hierarchy Of Multi-label classifiers (HOMER) is a multi-label learning
algorithm that breaks the initial learning task to several, easier sub-tasks by
first constructing a hierarchy of labels from a given label set and secondly
employing a given base multi-label classifier (MLC) to the resulting
sub-problems. The primary goal is to effectively address class imbalance and
scalability issues that often arise in real-world multi-label classification
problems. In this work, we present the general setup for a HOMER model and a
simple extension of the algorithm that is suited for MLCs that output rankings.
Furthermore, we provide a detailed analysis of the properties of the algorithm,
both from an aspect of effectiveness and computational complexity. A secondary
contribution involves the presentation of a balanced variant of the k-means
algorithm, which serves in the first step of the label hierarchy construction.
We conduct extensive experiments on six real-world datasets, studying
empirically HOMER's parameters and providing examples of instantiations of the
algorithm with different clustering approaches and MLCs. The empirical results
demonstrate a significant improvement over the given base MLC.
|
[
"Yannis Papanikolaou, Ioannis Katakis, Grigorios Tsoumakas",
"['Yannis Papanikolaou' 'Ioannis Katakis' 'Grigorios Tsoumakas']"
] |
cs.NE cs.CL cs.LG
| null |
1612.06212
| null | null |
http://arxiv.org/pdf/1612.06212v1
|
2016-12-19T14:59:14Z
|
2016-12-19T14:59:14Z
|
A recurrent neural network without chaos
|
We introduce an exceptionally simple gated recurrent neural network (RNN)
that achieves performance comparable to well-known gated architectures, such as
LSTMs and GRUs, on the word-level language modeling task. We prove that our
model has simple, predictable and non-chaotic dynamics. This stands in stark
contrast to more standard gated architectures, whose underlying dynamical
systems exhibit chaotic behavior.
|
[
"Thomas Laurent and James von Brecht",
"['Thomas Laurent' 'James von Brecht']"
] |
cs.LG stat.ML
| null |
1612.06246
| null | null |
http://arxiv.org/pdf/1612.06246v3
|
2017-06-06T03:21:09Z
|
2016-12-19T16:17:56Z
|
Corralling a Band of Bandit Algorithms
|
We study the problem of combining multiple bandit algorithms (that is, online
learning algorithms with partial feedback) with the goal of creating a master
algorithm that performs almost as well as the best base algorithm if it were to
be run on its own. The main challenge is that when run with a master, base
algorithms unavoidably receive much less feedback and it is thus critical that
the master not starve a base algorithm that might perform uncompetitively
initially but would eventually outperform others if given enough feedback. We
address this difficulty by devising a version of Online Mirror Descent with a
special mirror map together with a sophisticated learning rate scheme. We show
that this approach manages to achieve a more delicate balance between
exploiting and exploring base algorithms than previous works, yielding superior
regret bounds.
Our results are applicable to many settings, such as multi-armed bandits,
contextual bandits, and convex bandits. As examples, we present two main
applications. The first is to create an algorithm that enjoys worst-case
robustness while at the same time performing much better when the environment
is relatively easy. The second is to create an algorithm that works
simultaneously under different assumptions of the environment, such as
different priors or different loss structures.
|
[
"['Alekh Agarwal' 'Haipeng Luo' 'Behnam Neyshabur' 'Robert E. Schapire']",
"Alekh Agarwal, Haipeng Luo, Behnam Neyshabur and Robert E. Schapire"
] |
cs.SD cs.LG
| null |
1612.06287
| null | null |
http://arxiv.org/pdf/1612.06287v1
|
2016-12-14T15:40:44Z
|
2016-12-14T15:40:44Z
|
VAST : The Virtual Acoustic Space Traveler Dataset
|
This paper introduces a new paradigm for sound source localization referred
to as virtual acoustic space traveling (VAST) and presents a first dataset
designed for this purpose. Existing sound source localization methods are
either based on an approximate physical model (physics-driven) or on a
specific-purpose calibration set (data-driven). With VAST, the idea is to learn
a mapping from audio features to desired audio properties using a massive
dataset of simulated room impulse responses. This virtual dataset is designed
to be maximally representative of the potential audio scenes that the
considered system may be evolving in, while remaining reasonably compact. We
show that virtually-learned mappings on this dataset generalize to real data,
overcoming some intrinsic limitations of traditional binaural sound
localization methods based on time differences of arrival.
|
[
"Cl\\'ement Gaultier (PANAMA), Saurabh Kataria (PANAMA, IIT Kanpur),\n Antoine Deleforge (PANAMA)",
"['Clément Gaultier' 'Saurabh Kataria' 'Antoine Deleforge']"
] |
cs.LG cs.CR stat.ML
| null |
1612.06299
| null | null |
http://arxiv.org/pdf/1612.06299v1
|
2016-12-19T18:12:20Z
|
2016-12-19T18:12:20Z
|
Simple Black-Box Adversarial Perturbations for Deep Networks
|
Deep neural networks are powerful and popular learning models that achieve
state-of-the-art pattern recognition performance on many computer vision,
speech, and language processing tasks. However, these networks have also been
shown to be susceptible to carefully crafted adversarial perturbations which force
misclassification of the inputs. Adversarial examples enable adversaries to
subvert the expected system behavior leading to undesired consequences and
could pose a security risk when these systems are deployed in the real world.
In this work, we focus on deep convolutional neural networks and demonstrate
that adversaries can easily craft adversarial examples even without any
internal knowledge of the target network. Our attacks treat the network as an
oracle (black-box) and only assume that the output of the network can be
observed on the probed inputs. Our first attack is based on a simple idea of
adding perturbation to a randomly selected single pixel or a small set of them.
We then improve the effectiveness of this attack by carefully constructing a
small set of pixels to perturb by using the idea of greedy local-search. Our
proposed attacks also naturally extend to a stronger notion of
misclassification. Our extensive experimental results illustrate that even
these elementary attacks can reveal a deep neural network's vulnerabilities.
The simplicity and effectiveness of our proposed schemes mean that they could
serve as a litmus test for designing robust networks.
|
[
"['Nina Narodytska' 'Shiva Prasad Kasiviswanathan']",
"Nina Narodytska, Shiva Prasad Kasiviswanathan"
] |
cs.GT cs.AI cs.LG cs.MA stat.ML
| null |
1612.06340
| null | null |
http://arxiv.org/pdf/1612.06340v2
|
2017-02-20T17:54:11Z
|
2016-12-19T20:40:19Z
|
Computing Human-Understandable Strategies
|
Algorithms for equilibrium computation generally make no attempt to ensure
that the computed strategies are understandable by humans. For instance, the
strategies for the strongest poker agents are represented as massive binary
files. In many situations, we would like to compute strategies that can
actually be implemented by humans, who may have computational limitations and
may only be able to remember a small number of features or components of the
strategies that have been computed. We study poker games where private
information distributions can be arbitrary. We create a large training set of
game instances and solutions, by randomly selecting the information
probabilities, and present algorithms that learn from the training instances in
order to perform well in games with unseen information distributions. We are
able to conclude several new fundamental rules about poker strategy that can be
easily implemented by humans.
|
[
"Sam Ganzfried and Farzana Yusuf"
] |
cs.CV cs.AI cs.LG cs.NE stat.ML
| null |
1612.06370
| null | null |
http://arxiv.org/pdf/1612.06370v2
|
2017-04-12T04:28:47Z
|
2016-12-19T20:56:04Z
|
Learning Features by Watching Objects Move
|
This paper presents a novel yet intuitive approach to unsupervised feature
learning. Inspired by the human visual system, we explore whether low-level
motion-based grouping cues can be used to learn an effective visual
representation. Specifically, we use unsupervised motion-based segmentation on
videos to obtain segments, which we use as 'pseudo ground truth' to train a
convolutional network to segment objects from a single frame. Given the
extensive evidence that motion plays a key role in the development of the human
visual system, we hope that this straightforward approach to unsupervised
learning will be more effective than cleverly designed 'pretext' tasks studied
in the literature. Indeed, our extensive experiments show that this is the
case. When used for transfer learning on object detection, our representation
significantly outperforms previous unsupervised approaches across multiple
settings, especially when training data for the target task is scarce.
|
[
"Deepak Pathak, Ross Girshick, Piotr Doll\\'ar, Trevor Darrell, Bharath\n Hariharan"
] |
stat.ML cs.LG
| null |
1612.06470
| null | null |
http://arxiv.org/pdf/1612.06470v1
|
2016-12-20T01:07:04Z
|
2016-12-20T01:07:04Z
|
Randomized Clustered Nystrom for Large-Scale Kernel Machines
|
The Nystrom method has been popular for generating the low-rank approximation
of kernel matrices that arise in many machine learning problems. The
approximation quality of the Nystrom method depends crucially on the number of
selected landmark points and the selection procedure. In this paper, we present
a novel algorithm to compute the optimal Nystrom low-rank approximation when the
number of landmark points exceeds the target rank. Moreover, we introduce a
randomized algorithm for generating landmark points that is scalable to
large-scale data sets. The proposed method performs K-means clustering on
low-dimensional random projections of a data set and, thus, leads to
significant savings for high-dimensional data sets. Our theoretical results
characterize the tradeoffs between the accuracy and efficiency of our proposed
method. Extensive experiments demonstrate the competitive performance as well
as the efficiency of our proposed method.
|
[
"Farhad Pourkamali-Anaraki, Stephen Becker"
] |
cs.LG cs.AI
| null |
1612.06505
| null | null |
http://arxiv.org/pdf/1612.06505v4
|
2017-11-06T01:03:44Z
|
2016-12-20T04:54:49Z
|
Parallelized Tensor Train Learning of Polynomial Classifiers
|
In pattern classification, polynomial classifiers are well-studied methods as
they are capable of generating complex decision surfaces. Unfortunately, the
use of multivariate polynomials is limited to kernels as in support vector
machines, because polynomials quickly become impractical for high-dimensional
problems. In this paper, we effectively overcome the curse of dimensionality by
employing the tensor train format to represent a polynomial classifier. Based
on the structure of tensor trains, two learning algorithms are proposed which
involve solving different optimization problems of low computational
complexity. Furthermore, we show how both regularization to prevent overfitting
and parallelization, which enables the use of large training sets, are
incorporated into these methods. Both the efficiency and efficacy of our
tensor-based polynomial classifier are then demonstrated on the two popular
datasets USPS and MNIST.
|
[
"Zhongming Chen, Kim Batselier, Johan A.K. Suykens, Ngai Wong",
"['Zhongming Chen' 'Kim Batselier' 'Johan A. K. Suykens' 'Ngai Wong']"
] |
cs.CV cs.LG cs.NE
| null |
1612.06519
| null | null |
http://arxiv.org/pdf/1612.06519v1
|
2016-12-20T06:20:43Z
|
2016-12-20T06:20:43Z
|
Exploring the Design Space of Deep Convolutional Neural Networks at
Large Scale
|
In recent years, the research community has discovered that deep neural
networks (DNNs) and convolutional neural networks (CNNs) can yield higher
accuracy than all previous solutions to a broad array of machine learning
problems. To our knowledge, there is no single CNN/DNN architecture that solves
all problems optimally. Instead, the "right" CNN/DNN architecture varies
depending on the application at hand. CNN/DNNs comprise an enormous design
space. Quantitatively, we find that a small region of the CNN design space
contains 30 billion different CNN architectures.
In this dissertation, we develop a methodology that enables systematic
exploration of the design space of CNNs. Our methodology is comprised of the
following four themes.
1. Judiciously choosing benchmarks and metrics.
2. Rapidly training CNN models.
3. Defining and describing the CNN design space.
4. Exploring the design space of CNN architectures.
Taken together, these four themes comprise an effective methodology for
discovering the "right" CNN architectures to meet the needs of practical
applications.
|
[
"['Forrest Iandola']",
"Forrest Iandola"
] |
stat.ML cs.LG
| null |
1612.06598
| null | null |
http://arxiv.org/pdf/1612.06598v1
|
2016-12-20T10:39:45Z
|
2016-12-20T10:39:45Z
|
WoCE: a framework for clustering ensemble by exploiting the wisdom of
Crowds theory
|
The Wisdom of Crowds (WOC), as a theory in the social science, gets a new
paradigm in computer science. The WOC theory explains that the aggregate
decision made by a group is often better than those of its individual members
if specific conditions are satisfied. This paper presents a novel framework for
unsupervised and semi-supervised cluster ensemble by exploiting the WOC theory.
We employ four conditions in the WOC theory, i.e., diversity, independency,
decentralization and aggregation, to guide both the constructing of individual
clustering results and the final combination for clustering ensemble. Firstly,
the independency criterion, as a novel mapping system on the raw data set, removes
the correlation between features in our proposed method. Then, decentralization
as a novel mechanism generates high-quality individual clustering results.
Next, uniformity as a new diversity metric evaluates the generated clustering
results. Further, weighted evidence accumulation clustering method is proposed
for the final aggregation without using thresholding procedure. Experimental
study on varied data sets demonstrates that the proposed approach achieves
superior performance to state-of-the-art methods.
|
[
"Muhammad Yousefnezhad, Sheng-Jun Huang, Daoqiang Zhang",
"['Muhammad Yousefnezhad' 'Sheng-Jun Huang' 'Daoqiang Zhang']"
] |
cs.LG
| null |
1612.06623
| null | null |
http://arxiv.org/pdf/1612.06623v1
|
2016-12-20T12:15:17Z
|
2016-12-20T12:15:17Z
|
Supervised Learning for Optimal Power Flow as a Real-Time Proxy
|
In this work we design and compare different supervised learning algorithms
to compute the cost of Alternating Current Optimal Power Flow (ACOPF). The
motivation for quick calculation of OPF cost outcomes stems from the growing
need for algorithm-based long-term and medium-term planning methodologies in
power networks. Integrated in a multiple time-horizon coordination framework,
we refer to this approximation module as a proxy for predicting short-term
decision outcomes without the need of actual simulation and optimization of
them. Our method enables fast approximate calculation of OPF cost with less
than 1% error on average, achieved in run-times that are several orders of
magnitude lower than those of exact computation. Several test-cases such as
IEEE-RTS96 are used to demonstrate the efficiency of our approach.
|
[
"['Raphael Canyasse' 'Gal Dalal' 'Shie Mannor']",
"Raphael Canyasse, Gal Dalal, Shie Mannor"
] |
math.OC cs.LG stat.ML
| null |
1612.06669
| null | null |
http://arxiv.org/pdf/1612.06669v1
|
2016-12-20T14:12:58Z
|
2016-12-20T14:12:58Z
|
Enhancing Observability in Distribution Grids using Smart Meter Data
|
Due to limited metering infrastructure, distribution grids are currently
challenged by observability issues. On the other hand, smart meter data,
including local voltage magnitudes and power injections, are communicated to
the utility operator from grid buses with renewable generation and
demand-response programs. This work employs grid data from metered buses
towards inferring the underlying grid state. To this end, a coupled formulation
of the power flow problem (CPF) is put forth. Exploiting the high variability
of injections at metered buses, the controllability of solar inverters, and the
relative time-invariance of conventional loads, the idea is to solve the
non-linear power flow equations jointly over consecutive time instants. An
intuitive and easily verifiable rule pertaining to the locations of metered and
non-metered buses on the physical grid is shown to be a necessary and
sufficient criterion for local observability in radial networks. To account for
noisy smart meter readings, a coupled power system state estimation (CPSSE)
problem is further developed. Both CPF and CPSSE tasks are tackled via
augmented semi-definite program relaxations. The observability criterion along
with the CPF and CPSSE solvers are numerically corroborated using synthetic and
actual solar generation and load data on the IEEE 34-bus benchmark feeder.
|
[
"Siddharth Bhela, Vassilis Kekatos, Sriharsha Veeramachaneni",
"['Siddharth Bhela' 'Vassilis Kekatos' 'Sriharsha Veeramachaneni']"
] |
cs.LG stat.ML
| null |
1612.06676
| null | null |
http://arxiv.org/pdf/1612.06676v2
|
2016-12-26T11:26:03Z
|
2016-12-20T14:24:49Z
|
Multivariate Industrial Time Series with Cyber-Attack Simulation: Fault
Detection Using an LSTM-based Predictive Data Model
|
We adopted an approach based on an LSTM neural network to monitor and detect
faults in industrial multivariate time series data. To validate the approach we
created a Modelica model of part of a real gasoil plant. By introducing hacks
into the logic of the Modelica model, we were able to generate both the roots
and causes of fault behavior in the plant. Having a self-consistent data set
with labeled faults, we used an LSTM architecture with a forecasting error
threshold to obtain precision and recall quality metrics. The dependency of the
quality metric on the threshold level is considered. An appropriate mechanism
such as "one handle" was introduced for filtering faults that are outside of
the plant operator's field of interest.
|
[
"['Pavel Filonov' 'Andrey Lavrentyev' 'Artem Vorontsov']",
"Pavel Filonov, Andrey Lavrentyev, Artem Vorontsov"
] |
cs.CV cs.AI cs.LG
| null |
1612.06704
| null | null |
http://arxiv.org/pdf/1612.06704v1
|
2016-12-20T15:24:46Z
|
2016-12-20T15:24:46Z
|
Action-Driven Object Detection with Top-Down Visual Attentions
|
A dominant paradigm for deep learning based object detection relies on a
"bottom-up" approach using "passive" scoring of class agnostic proposals. These
approaches are efficient but lack a holistic analysis of scene-level context.
In this paper, we present an "action-driven" detection mechanism using our
"top-down" visual attention model. We localize an object by taking sequential
actions that the attention model provides. The attention model conditioned with
an image region provides required actions to get closer toward a target object.
An action at each time step is weak itself but an ensemble of the sequential
actions makes a bounding-box accurately converge to a target object boundary.
This attention model we call AttentionNet is composed of a convolutional neural
network. During our whole detection procedure, we only utilize the actions from
a single AttentionNet without any modules for object proposals or post
bounding-box regression. We evaluate our top-down detection mechanism over the
PASCAL VOC series and ILSVRC CLS-LOC dataset, and achieve state-of-the-art
performance compared to the major bottom-up detection methods. In particular,
our detection mechanism shows a strong advantage in elaborate localization by
outperforming Faster R-CNN with a margin of +7.1% over PASCAL VOC 2007 when we
increase the IoU threshold for positive detection to 0.7.
|
[
"['Donggeun Yoo' 'Sunggyun Park' 'Kyunghyun Paeng' 'Joon-Young Lee'\n 'In So Kweon']",
"Donggeun Yoo, Sunggyun Park, Kyunghyun Paeng, Joon-Young Lee, In So\n Kweon"
] |
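The sequential-action loop described above can be pictured as below; the corner-wise action set, fixed step size, and `predict_action` interface are illustrative assumptions rather than the paper's exact design.

```python
STEP = 8       # pixels moved per action (illustrative)
STOP = "stop"

def localize(image, box, predict_action, max_steps=50):
    """box = [x1, y1, x2, y2]; predict_action(image, box) returns STOP or a
    tuple (corner, dx, dy) with corner 0 (top-left) or 1 (bottom-right)."""
    for _ in range(max_steps):
        action = predict_action(image, box)
        if action == STOP:
            break                         # the ensemble of weak moves has converged
        corner, dx, dy = action
        box[2 * corner] += STEP * dx      # nudge one corner toward the object
        box[2 * corner + 1] += STEP * dy
    return box
```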
cs.CV cs.LG
| null |
1612.06851
| null | null |
http://arxiv.org/pdf/1612.06851v2
|
2017-09-19T22:37:40Z
|
2016-12-20T20:57:59Z
|
Beyond Skip Connections: Top-Down Modulation for Object Detection
|
In recent years, we have seen tremendous progress in the field of object
detection. Most of the recent improvements have been achieved by targeting
deeper feedforward networks. However, many hard object categories such as
bottle, remote, etc. require representation of fine details and not just
coarse, semantic representations. But most of these fine details are lost in
the early convolutional layers. What we need is a way to incorporate finer
details from lower layers into the detection architecture. Skip connections
have been proposed to combine high-level and low-level features, but we argue
that selecting the right features from low-level requires top-down contextual
information. Inspired by the human visual pathway, in this paper we propose
top-down modulations as a way to incorporate fine details into the detection
framework. Our approach supplements the standard bottom-up, feedforward ConvNet
with a top-down modulation (TDM) network, connected using lateral connections.
These connections are responsible for the modulation of lower layer filters,
and the top-down network handles the selection and integration of contextual
information and low-level features. The proposed TDM architecture provides a
significant boost on the COCO testdev benchmark, achieving 28.6 AP for VGG16,
35.2 AP for ResNet101, and 37.3 for InceptionResNetv2 network, without any
bells and whistles (e.g., multi-scale, iterative box refinement, etc.).
|
[
"['Abhinav Shrivastava' 'Rahul Sukthankar' 'Jitendra Malik' 'Abhinav Gupta']",
"Abhinav Shrivastava, Rahul Sukthankar, Jitendra Malik, Abhinav Gupta"
] |
cs.LG
| null |
1612.06856
| null | null |
http://arxiv.org/pdf/1612.06856v2
|
2016-12-22T20:58:04Z
|
2016-12-20T19:33:35Z
|
Temporal Feature Selection on Networked Time Series
|
This paper formulates the problem of learning discriminative features
(\textit{i.e.,} segments) from networked time series data considering the
linked information among time series. For example, social network users are
considered to be social sensors that continuously generate social signals
(tweets) represented as a time series. The discriminative segments are often
referred to as \emph{shapelets} in a time series. Extracting shapelets for time
series classification has been widely studied. However, existing works on
shapelet selection assume that the time series are independent and identically
distributed (i.i.d.). This assumption restricts their applications to social
networked time series analysis, since a user's actions can be correlated to
his/her social affiliations. In this paper we propose a new Network Regularized
Least Squares (NetRLS) feature selection model that combines typical time
series data and user network data for analysis. Experiments on real-world
networked time series Twitter and DBLP data demonstrate the performance of the
proposed method. NetRLS performs better than LTS, the state-of-the-art time
series feature selection approach, on real-world data.
|
[
"Haishuai Wang, Jia Wu, Peng Zhang, Chengqi Zhang",
"['Haishuai Wang' 'Jia Wu' 'Peng Zhang' 'Chengqi Zhang']"
] |
stat.ME cs.LG stat.ML
| null |
1612.06879
| null | null |
http://arxiv.org/pdf/1612.06879v1
|
2016-12-09T19:25:27Z
|
2016-12-09T19:25:27Z
|
Robust mixture of experts modeling using the skew $t$ distribution
|
Mixture of Experts (MoE) is a popular framework in the fields of statistics
and machine learning for modeling heterogeneity in data for regression,
classification and clustering. MoE for continuous data are usually based on the
normal distribution. However, it is known that for data with asymmetric
behavior, heavy tails and atypical observations, the use of the normal
distribution is unsuitable. We introduce a new robust non-normal mixture of
experts modeling using the skew $t$ distribution. The proposed skew $t$ mixture
of experts, named STMoE, handles these issues of the normal mixture of experts
regarding possibly skewed, heavy-tailed and noisy data. We develop a dedicated
expectation conditional maximization (ECM) algorithm to estimate the model
parameters by monotonically maximizing the observed data log-likelihood. We
describe how the presented model can be used in prediction and in model-based
clustering of regression data. Numerical experiments carried out on simulated
data show the effectiveness and the robustness of the proposed model in fitting
non-linear regression functions as well as in model-based clustering. Then, the
proposed model is applied to the real-world data of tone perception for musical
data analysis, and the one of temperature anomalies for the analysis of climate
change data. The obtained results confirm the usefulness of the model for
practical data analysis applications.
|
[
"['Faicel Chamroukhi']",
"Faicel Chamroukhi"
] |
cs.CV cs.CL cs.LG
| null |
1612.06890
| null | null |
http://arxiv.org/pdf/1612.06890v1
|
2016-12-20T21:40:40Z
|
2016-12-20T21:40:40Z
|
CLEVR: A Diagnostic Dataset for Compositional Language and Elementary
Visual Reasoning
|
When building artificial intelligence systems that can reason and answer
questions about visual data, we need diagnostic tests to analyze our progress
and discover shortcomings. Existing benchmarks for visual question answering
can help, but have strong biases that models can exploit to correctly answer
questions without reasoning. They also conflate multiple sources of error,
making it hard to pinpoint model weaknesses. We present a diagnostic dataset
that tests a range of visual reasoning abilities. It contains minimal biases
and has detailed annotations describing the kind of reasoning each question
requires. We use this dataset to analyze a variety of modern visual reasoning
systems, providing novel insights into their abilities and limitations.
|
[
"Justin Johnson and Bharath Hariharan and Laurens van der Maaten and Li\n Fei-Fei and C. Lawrence Zitnick and Ross Girshick",
"['Justin Johnson' 'Bharath Hariharan' 'Laurens van der Maaten'\n 'Li Fei-Fei' 'C. Lawrence Zitnick' 'Ross Girshick']"
] |
cs.IR cs.LG
| null |
1612.06935
| null | null |
http://arxiv.org/pdf/1612.06935v6
|
2018-12-05T03:56:00Z
|
2016-12-21T01:01:49Z
|
Personalized Video Recommendation Using Rich Contents from Videos
|
Video recommendation has become an essential way of helping people explore
the massive videos and discover the ones that may be of interest to them. In
the existing video recommender systems, the models make the recommendations
based on the user-video interactions and single specific content features. When
the specific content features are unavailable, the performance of the existing
models will seriously deteriorate. Inspired by the fact that rich contents
(e.g., text, audio, motion, and so on) exist in videos, in this paper, we
explore how to use these rich contents to overcome the limitations caused by
the unavailability of the specific ones. Specifically, we propose a novel
general framework that incorporates an arbitrary single content feature with
user-video interactions, named as collaborative embedding regression (CER)
model, to make effective video recommendation in both in-matrix and
out-of-matrix scenarios. Our extensive experiments on two real-world
large-scale datasets show that CER beats the existing recommender models with
any single content feature and is more time efficient. In addition, we propose
a priority-based late fusion (PRI) method to gain the benefit brought by
integrating multiple content features. The corresponding experiment shows
that PRI brings real performance improvement to the baseline and outperforms
the existing fusion methods.
|
[
"Xingzhong Du, Hongzhi Yin, Ling Chen, Yang Wang, Yi Yang, Xiaofang\n Zhou",
"['Xingzhong Du' 'Hongzhi Yin' 'Ling Chen' 'Yang Wang' 'Yi Yang'\n 'Xiaofang Zhou']"
] |
stat.ML cs.LG
| null |
1612.07019
| null | null |
http://arxiv.org/pdf/1612.07019v1
|
2016-12-21T09:10:48Z
|
2016-12-21T09:10:48Z
|
Robust Learning with Kernel Mean p-Power Error Loss
|
Correntropy is a second-order statistical measure in kernel space, which has
been successfully applied in robust learning and signal processing. In this
paper, we define a non-second-order statistical measure in kernel space, called
the kernel mean-p power error (KMPE), including the correntropic loss (CLoss)
as a special case. Some basic properties of KMPE are presented. In particular,
we apply the KMPE to extreme learning machine (ELM) and principal component
analysis (PCA), and develop two robust learning algorithms, namely ELM-KMPE and
PCA-KMPE. Experimental results on synthetic and benchmark data show that the
developed algorithms can achieve consistently better performance when compared
with some existing methods.
|
[
"['Badong Chen' 'Lei Xing' 'Xin Wang' 'Jing Qin' 'Nanning Zheng']",
"Badong Chen, Lei Xing, Xin Wang, Jing Qin, Nanning Zheng"
] |
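Assuming KMPE is the kernel-space mean of the p-th power of the feature-space error, a Gaussian-kernel instance follows from the identity ||phi(x) - phi(y)||^2 = 2(1 - kappa_sigma(x - y)); a minimal sketch (scaling conventions may differ from the paper):

```python
import numpy as np

def kmpe(x, y, sigma=1.0, p=2.0):
    """Sample kernel mean p-power error between x and y (same-shape arrays)."""
    e = x - y
    kappa = np.exp(-(e ** 2) / (2.0 * sigma ** 2))        # Gaussian kernel on errors
    return float(np.mean((2.0 * (1.0 - kappa)) ** (p / 2.0)))

# p = 2 recovers a correntropy-style loss; p < 2 further de-emphasizes large
# errors, which is the source of the robustness to outliers.
```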
cs.CV cs.LG
| null |
1612.07086
| null | null |
http://arxiv.org/pdf/1612.07086v3
|
2017-08-02T12:33:50Z
|
2016-12-21T13:04:18Z
|
An Empirical Study of Language CNN for Image Captioning
|
Language Models based on recurrent neural networks have dominated recent
image caption generation tasks. In this paper, we introduce a Language CNN
model which is suitable for statistical language modeling tasks and shows
competitive performance in image captioning. In contrast to previous models
which predict the next word based on one previous word and the hidden state, our
language CNN is fed with all the previous words and can model the long-range
dependencies of history words, which are critical for image captioning. The
effectiveness of our approach is validated on two datasets, MS COCO and
Flickr30K. Our extensive experimental results show that our method outperforms
the vanilla recurrent neural network based language models and is competitive
with the state-of-the-art methods.
|
[
"['Jiuxiang Gu' 'Gang Wang' 'Jianfei Cai' 'Tsuhan Chen']",
"Jiuxiang Gu, Gang Wang, Jianfei Cai, Tsuhan Chen"
] |
cs.IR cs.LG
| null |
1612.07117
| null | null |
http://arxiv.org/pdf/1612.07117v1
|
2016-12-20T15:02:41Z
|
2016-12-20T15:02:41Z
|
Classification and Learning-to-rank Approaches for Cross-Device Matching
at CIKM Cup 2016
|
In this paper, we propose two methods for tackling the problem of
cross-device matching for online advertising at CIKM Cup 2016. The first method
considers the matching problem as a binary classification task and solves it by
utilizing ensemble learning techniques. The second method defines the matching
problem as a ranking task and effectively solves it using learning-to-rank
algorithms. The results show that the proposed methods obtain promising
results, with the ranking-based method outperforming the classification-based
one for the task.
|
[
"Nam Khanh Tran",
"['Nam Khanh Tran']"
] |
cs.CV cs.AR cs.LG
|
10.1145/3020078.3021744
|
1612.07119
| null | null |
http://arxiv.org/abs/1612.07119v1
|
2016-12-01T22:19:47Z
|
2016-12-01T22:19:47Z
|
FINN: A Framework for Fast, Scalable Binarized Neural Network Inference
|
Research has shown that convolutional neural networks contain significant
redundancy, and high classification accuracy can be obtained even when weights
and activations are reduced from floating point to binary values. In this
paper, we present FINN, a framework for building fast and flexible FPGA
accelerators using a flexible heterogeneous streaming architecture. By
utilizing a novel set of optimizations that enable efficient mapping of
binarized neural networks to hardware, we implement fully connected,
convolutional and pooling layers, with per-layer compute resources being
tailored to user-provided throughput requirements. On a ZC706 embedded FPGA
platform drawing less than 25 W total system power, we demonstrate up to 12.3
million image classifications per second with 0.31 {\mu}s latency on the MNIST
dataset with 95.8% accuracy, and 21906 image classifications per second with
283 {\mu}s latency on the CIFAR-10 and SVHN datasets with respectively 80.1%
and 94.9% accuracy. To the best of our knowledge, ours are the fastest
classification rates reported to date on these benchmarks.
|
[
"['Yaman Umuroglu' 'Nicholas J. Fraser' 'Giulio Gambardella'\n 'Michaela Blott' 'Philip Leong' 'Magnus Jahre' 'Kees Vissers']",
"Yaman Umuroglu, Nicholas J. Fraser, Giulio Gambardella, Michaela\n Blott, Philip Leong, Magnus Jahre, Kees Vissers"
] |
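The arithmetic that makes binarized inference cheap is replacing multiply-accumulate with XNOR and popcount. A minimal sketch of a binary dot product under the usual bit encoding (bit 1 for +1, bit 0 for -1); the packing convention is an assumption:

```python
def binary_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two {-1, +1} vectors packed as n-bit integers."""
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # bit set where the signs agree
    return 2 * bin(xnor).count("1") - n

# (+1, -1, +1, +1) -> 0b1011 and (+1, +1, -1, +1) -> 0b1101:
assert binary_dot(0b1011, 0b1101, 4) == 0  # (1)(1) + (-1)(1) + (1)(-1) + (1)(1)
```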
cs.RO cs.AI cs.LG cs.SY
| null |
1612.07139
| null | null |
http://arxiv.org/pdf/1612.07139v4
|
2018-04-09T03:46:53Z
|
2016-12-21T14:31:47Z
|
A Survey of Deep Network Solutions for Learning Control in Robotics:
From Reinforcement to Imitation
|
Deep learning techniques have been widely applied, achieving state-of-the-art
results in various fields of study. This survey focuses on deep learning
solutions that target learning control policies for robotics applications. We
carry out our discussions on the two main paradigms for learning control with
deep networks: deep reinforcement learning and imitation learning. For deep
reinforcement learning (DRL), we begin from traditional reinforcement learning
algorithms, showing how they are extended to the deep context and effective
mechanisms that could be added on top of the DRL algorithms. We then introduce
representative works that utilize DRL to solve navigation and manipulation
tasks in robotics. We continue our discussion on methods addressing the
challenge of the reality gap for transferring DRL policies trained in
simulation to real-world scenarios, and summarize robotics simulation platforms
for conducting DRL research. For imitation learning, we go through its three
main categories, behavior cloning, inverse reinforcement learning and
generative adversarial imitation learning, by introducing their formulations
and their corresponding robotics applications. Finally, we discuss the open
challenges and research frontiers.
|
[
"['Lei Tai' 'Jingwei Zhang' 'Ming Liu' 'Joschka Boedecker'\n 'Wolfram Burgard']",
"Lei Tai and Jingwei Zhang and Ming Liu and Joschka Boedecker and\n Wolfram Burgard"
] |
cs.LG
| null |
1612.07141
| null | null |
http://arxiv.org/pdf/1612.07141v3
|
2019-01-28T13:20:31Z
|
2016-12-21T14:33:32Z
|
Robust Classification of Graph-Based Data
|
A graph-based classification method is proposed for semi-supervised learning
in the case of Euclidean data and for classification in the case of graph data.
Our manifold learning technique is based on a convex optimization problem
involving a convex quadratic regularization term and a concave quadratic loss
function with a trade-off parameter carefully chosen so that the objective
function remains convex. As shown empirically, the advantage of considering a
concave loss function is that the learning problem becomes more robust in the
presence of noisy labels. Furthermore, the loss function considered here is
thus more similar to a classification loss, while several other methods treat
graph-based classification problems as regression problems.
|
[
"['Carlos M. Alaíz' 'Michaël Fanuel' 'Johan A. K. Suykens']",
"Carlos M. Ala\\'iz, Micha\\\"el Fanuel, Johan A. K. Suykens"
] |
cs.LG
| null |
1612.07146
| null | null |
http://arxiv.org/pdf/1612.07146v3
|
2018-07-05T08:28:41Z
|
2016-12-21T14:35:26Z
|
Collaborative Filtering with User-Item Co-Autoregressive Models
|
Deep neural networks have shown promise in collaborative filtering (CF).
However, existing neural approaches are either user-based or item-based, which
cannot leverage all the underlying information explicitly. We propose CF-UIcA,
a neural co-autoregressive model for CF tasks, which exploits the structural
correlation in the domains of both users and items. The co-autoregression
allows extra desired properties to be incorporated for different tasks.
Furthermore, we develop an efficient stochastic learning algorithm to handle
large scale datasets. We evaluate CF-UIcA on two popular benchmarks: MovieLens
1M and Netflix, and achieve state-of-the-art performance in both rating
prediction and top-N recommendation tasks, which demonstrates the effectiveness
of CF-UIcA.
|
[
"Chao Du, Chongxuan Li, Yin Zheng, Jun Zhu, Bo Zhang",
"['Chao Du' 'Chongxuan Li' 'Yin Zheng' 'Jun Zhu' 'Bo Zhang']"
] |
cs.CL cs.CV cs.GT cs.LG cs.MA
| null |
1612.07182
| null | null |
http://arxiv.org/pdf/1612.07182v2
|
2017-03-05T21:40:51Z
|
2016-12-21T15:27:06Z
|
Multi-Agent Cooperation and the Emergence of (Natural) Language
|
The current mainstream approach to train natural language systems is to
expose them to large amounts of text. This passive learning is problematic if
we are interested in developing interactive machines, such as conversational
agents. We propose a framework for language learning that relies on multi-agent
communication. We study this learning in the context of referential games. In
these games, a sender and a receiver see a pair of images. The sender is told
one of them is the target and is allowed to send a message from a fixed,
arbitrary vocabulary to the receiver. The receiver must rely on this message to
identify the target. Thus, the agents develop their own language interactively
out of the need to communicate. We show that two networks with simple
configurations are able to learn to coordinate in the referential game. We
further explore how to make changes to the game environment to cause the "word
meanings" induced in the game to better reflect intuitive semantic properties
of the images. In addition, we present a simple strategy for grounding the
agents' code into natural language. Both of these are necessary steps towards
developing machines that are able to communicate with humans productively.
|
[
"['Angeliki Lazaridou' 'Alexander Peysakhovich' 'Marco Baroni']",
"Angeliki Lazaridou, Alexander Peysakhovich, Marco Baroni"
] |
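A schematic round of the referential game described above; the `sender` and `receiver` interfaces and the reward convention are illustrative assumptions, not the paper's implementation.

```python
import random

def play_round(images, sender, receiver):
    """One round: the sender knows the target and emits a symbol from a fixed
    vocabulary; the receiver sees the shuffled pair and must point at the target."""
    target, distractor = random.sample(images, 2)
    symbol = sender(target, distractor)
    pair = [target, distractor]
    random.shuffle(pair)                            # hide the target's position
    guess = receiver(pair, symbol)                  # index into `pair`
    reward = 1.0 if pair[guess] is target else 0.0  # shared reward for both agents
    return reward
```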
stat.ML cs.LG stat.ME
| null |
1612.07222
| null | null |
http://arxiv.org/pdf/1612.07222v1
|
2016-12-21T16:24:27Z
|
2016-12-21T16:24:27Z
|
Bayesian Decision Process for Cost-Efficient Dynamic Ranking via
Crowdsourcing
|
Rank aggregation based on pairwise comparisons over a set of items has a wide
range of applications. Although considerable research has been devoted to the
development of rank aggregation algorithms, one basic question is how to
efficiently collect a large amount of high-quality pairwise comparisons for the
ranking purpose. Because of the advent of many crowdsourcing services, a crowd
of workers are often hired to conduct pairwise comparisons with a small
monetary reward for each pair they compare. Since different workers have
different levels of reliability and different pairs have different levels of
ambiguity, it is desirable to wisely allocate the limited budget for
comparisons among the pairs of items and workers so that the global ranking can
be accurately inferred from the comparison results. To this end, we model the
active sampling problem in crowdsourced ranking as a Bayesian Markov decision
process, which dynamically selects item pairs and workers to improve the
ranking accuracy under a budget constraint. We further develop a
computationally efficient sampling policy based on knowledge gradient as well
as a moment matching technique for posterior approximation. Experimental
evaluations on both synthetic and real data show that the proposed policy
achieves high ranking accuracy with a lower labeling cost.
|
[
"['Xi Chen' 'Kevin Jiao' 'Qihang Lin']",
"Xi Chen, Kevin Jiao, Qihang Lin"
] |
cs.LG
| null |
1612.07307
| null | null |
http://arxiv.org/pdf/1612.07307v2
|
2017-03-09T18:29:09Z
|
2016-12-21T20:29:26Z
|
Loss is its own Reward: Self-Supervision for Reinforcement Learning
|
Reinforcement learning optimizes policies for expected cumulative reward.
Need the supervision be so narrow? Reward is delayed and sparse for many tasks,
making it a difficult and impoverished signal for end-to-end optimization. To
augment reward, we consider a range of self-supervised tasks that incorporate
states, actions, and successors to provide auxiliary losses. These losses offer
ubiquitous and instantaneous supervision for representation learning even in
the absence of reward. While current results show that learning from reward
alone is feasible, pure reinforcement learning methods are constrained by
computational and data efficiency issues that can be remedied by auxiliary
losses. Self-supervised pre-training and joint optimization improve the data
efficiency and policy returns of end-to-end reinforcement learning.
|
[
"Evan Shelhamer, Parsa Mahmoudieh, Max Argus, Trevor Darrell",
"['Evan Shelhamer' 'Parsa Mahmoudieh' 'Max Argus' 'Trevor Darrell']"
] |
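The augmentation described above amounts to adding weighted self-supervised terms to the RL objective; a one-function sketch with illustrative names:

```python
def total_loss(rl_loss, transitions, aux_losses, weights):
    """rl_loss: scalar policy/value loss; aux_losses: callables mapping a batch
    of (state, action, successor) transitions to scalar self-supervised losses."""
    return rl_loss + sum(w * aux(transitions) for aux, w in zip(aux_losses, weights))
```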
math.OC cs.LG
| null |
1612.07335
| null | null |
http://arxiv.org/pdf/1612.07335v1
|
2016-12-21T21:12:27Z
|
2016-12-21T21:12:27Z
|
Distributed Dictionary Learning
|
The paper studies distributed Dictionary Learning (DL) problems where the
learning task is distributed over a multi-agent network with time-varying
(nonsymmetric) connectivity. This formulation is relevant, for instance, in
big-data scenarios where massive amounts of data are collected/stored in
different spatial locations and it is infeasible to aggregate and/or process
all the data in a fusion center, due to resource limitations, communication
overhead or privacy considerations. We develop a general distributed
algorithmic framework for the (nonconvex) DL problem and establish its
asymptotic convergence. The new method hinges on Successive Convex
Approximation (SCA) techniques coupled with i) a gradient tracking mechanism
instrumental to locally estimate the missing global information; and ii) a
consensus step, as a mechanism to distribute the computations among the agents.
To the best of our knowledge, this is the first distributed algorithm with
provable convergence for the DL problem and, more generally, for bi-convex
optimization problems over (time-varying) directed graphs.
|
[
"['Amir Daneshmand' 'Gesualdo Scutari' 'Francisco Facchinei']",
"Amir Daneshmand, Gesualdo Scutari, Francisco Facchinei"
] |
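Ingredients i) and ii) above can be illustrated with a generic gradient-tracking consensus update (a simpler, DIGing-style relative of the paper's SCA-based scheme, shown for intuition rather than as their algorithm):

```python
import numpy as np

def gradient_tracking_step(W, X, Y, grad, alpha=0.1):
    """W: (n, n) doubly stochastic mixing matrix of the network; X, Y: (n, d)
    per-agent iterates and gradient trackers; grad maps stacked iterates to the
    stacked local gradients, each agent evaluating only its own function."""
    X_next = W @ X - alpha * Y               # consensus + descent along tracked direction
    Y_next = W @ Y + grad(X_next) - grad(X)  # Y tracks the network-average gradient
    return X_next, Y_next
```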
cs.LG stat.ML
| null |
1612.07374
| null | null |
http://arxiv.org/pdf/1612.07374v1
|
2016-12-21T22:43:08Z
|
2016-12-21T22:43:08Z
|
Detecting Unusual Input-Output Associations in Multivariate Conditional
Data
|
Despite tremendous progress in outlier detection research in recent years,
the majority of existing methods are designed only to detect unconditional
outliers that correspond to unusual data patterns expressed in the joint space
of all data attributes. Such methods are not applicable when we seek to detect
conditional outliers that reflect unusual responses associated with a given
context or condition. This work focuses on multivariate conditional outlier
detection, a special type of the conditional outlier detection problem, where
data instances consist of multi-dimensional input (context) and output
(responses) pairs. We present a novel outlier detection framework that
identifies abnormal input-output associations in data with the help of a
decomposable conditional probabilistic model that is learned from all data
instances. Since components of this model can vary in their quality, we combine
them with the help of weights reflecting their reliability in assessment of
outliers. We study two ways of calculating the component weights: global that
relies on all data, and local that relies only on instances similar to the
target instance. Experimental results on data from various domains demonstrate
the ability of our framework to successfully identify multivariate conditional
outliers.
|
[
"Charmgil Hong, Milos Hauskrecht",
"['Charmgil Hong' 'Milos Hauskrecht']"
] |
cond-mat.mtrl-sci cs.LG stat.ML
| null |
1612.07401
| null | null |
http://arxiv.org/pdf/1612.07401v3
|
2017-04-28T00:11:29Z
|
2016-12-22T00:29:25Z
|
Microstructure Representation and Reconstruction of Heterogeneous
Materials via Deep Belief Network for Computational Material Design
|
Integrated Computational Materials Engineering (ICME) aims to accelerate
optimal design of complex material systems by integrating material science and
design automation. For tractable ICME, it is required that (1) a structural
feature space be identified to allow reconstruction of new designs, and (2) the
reconstruction process be property-preserving. The majority of existing
structural representation schemes rely on the designer's understanding of
specific material systems to identify geometric and statistical features, which
could be biased and insufficient for reconstructing physically meaningful
microstructures of complex material systems. In this paper, we develop a
feature learning mechanism based on a convolutional deep belief network to
automate a two-way conversion between microstructures and their
lower-dimensional feature representations, and to achieve a 1000-fold
dimension reduction from the microstructure space. The proposed model is
applied to a wide spectrum of heterogeneous material systems with distinct
microstructural features including Ti-6Al-4V alloy, Pb63-Sn37 alloy,
Fontainebleau sandstone, and spherical colloids, to produce material
reconstructions that are close to the original samples with respect to 2-point
correlation functions and mean critical fracture strength. This capability is
not achieved by existing synthesis methods that rely on the Markovian
assumption of material microstructures.
|
[
"['Ruijin Cang' 'Yaopengxiao Xu' 'Shaohua Chen' 'Yongming Liu' 'Yang Jiao'\n 'Max Yi Ren']",
"Ruijin Cang, Yaopengxiao Xu, Shaohua Chen, Yongming Liu, Yang Jiao,\n Max Yi Ren"
] |
cs.CL cs.LG
|
10.1145/3097983.3098115
|
1612.07411
| null | null |
http://arxiv.org/abs/1612.07411v2
|
2017-09-03T21:41:07Z
|
2016-12-22T01:25:20Z
|
A Context-aware Attention Network for Interactive Question Answering
|
Neural network based sequence-to-sequence models in an encoder-decoder
framework have been successfully applied to solve Question Answering (QA)
problems, predicting answers from statements and questions. However, almost all
previous models have failed to consider detailed context information and
unknown states under which systems do not have enough information to answer
given questions. These scenarios with incomplete or ambiguous information are
very common in the setting of Interactive Question Answering (IQA). To address
this challenge, we develop a novel model, employing context-dependent
word-level attention for more accurate statement representations and
question-guided sentence-level attention for better context modeling. We also
generate unique IQA datasets to test our model, which will be made publicly
available. Employing these attention mechanisms, our model accurately
understands when it can output an answer or when it requires generating a
supplementary question for additional input depending on different contexts.
When available, user's feedback is encoded and directly applied to update
sentence-level attention to infer an answer. Extensive experiments on QA and
IQA datasets quantitatively demonstrate the effectiveness of our model with
significant improvement over state-of-the-art conventional QA models.
|
[
"Huayu Li, Martin Renqiang Min, Yong Ge, Asim Kadav",
"['Huayu Li' 'Martin Renqiang Min' 'Yong Ge' 'Asim Kadav']"
] |
cs.LG stat.ML
| null |
1612.07454
| null | null |
http://arxiv.org/pdf/1612.07454v1
|
2016-12-22T06:17:01Z
|
2016-12-22T06:17:01Z
|
How to Train Your Deep Neural Network with Dictionary Learning
|
Currently there are two predominant ways to train deep neural networks. The
first one uses restricted Boltzmann machines (RBMs) and the second one uses
autoencoders. RBMs are stacked in layers to form a deep belief network (DBN); the
final representation layer is attached to the target to complete the deep
neural network. Autoencoders are nested one inside the other to form stacked
autoencoders; once the stacked autoencoder is learnt, the decoder portion is
detached and the target attached to the deepest layer of the encoder to form
the deep neural network. This work proposes a new approach to train deep neural
networks using dictionary learning as the basic building block; the idea is to
use the features from the shallower layer as inputs for training the next
deeper layer. One can use any type of dictionary learning (unsupervised,
supervised, discriminative etc.) as basic units till the pre-final layer. In
the final layer one needs to use the label consistent dictionary learning
formulation for classification. We compare our proposed framework with existing
state-of-the-art deep learning techniques on benchmark problems; we are always
within the top 10 results. In actual problems of age and gender classification,
we are better than the best known techniques.
|
[
"['Vanika Singhal' 'Shikha Singh' 'Angshul Majumdar']",
"Vanika Singhal, Shikha Singh and Angshul Majumdar"
] |
cs.LG cs.DS
| null |
1612.07516
| null | null |
http://arxiv.org/pdf/1612.07516v3
|
2018-09-27T06:50:30Z
|
2016-12-22T10:10:11Z
|
On Coreset Constructions for the Fuzzy $K$-Means Problem
|
The fuzzy $K$-means problem is a popular generalization of the well-known
$K$-means problem to soft clusterings. We present the first coresets for fuzzy
$K$-means with size linear in the dimension, polynomial in the number of
clusters, and poly-logarithmic in the number of points. We show that these
coresets can be employed in the computation of a $(1+\epsilon)$-approximation
for fuzzy $K$-means, improving previously presented results. We further show
that our coresets can be maintained in an insertion-only streaming setting,
where data points arrive one-by-one.
|
[
"Johannes Bl\\\"omer, Sascha Brauer, Kathrin Bujna",
"['Johannes Blömer' 'Sascha Brauer' 'Kathrin Bujna']"
] |
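For reference, the fuzzy $K$-means cost such coresets must approximate, with the standard soft memberships for fuzzifier m, can be sketched as:

```python
import numpy as np

def memberships(X, C, m=2.0):
    """Soft assignments u_ij of points X (n, d) to centers C (k, d)."""
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=-1) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

def fuzzy_kmeans_cost(X, C, m=2.0):
    """J(C) = sum_ij u_ij^m ||x_i - c_j||^2, the objective the coresets preserve."""
    U = memberships(X, C, m)
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
    return float((U ** m * d2).sum())

# A coreset is a small weighted point set whose weighted cost approximates
# fuzzy_kmeans_cost(X, C) up to (1 +/- eps) for every candidate center set C.
```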
cs.SD cs.LG stat.ML
| null |
1612.07523
| null | null |
http://arxiv.org/pdf/1612.07523v1
|
2016-12-22T10:14:59Z
|
2016-12-22T10:14:59Z
|
Robustness of Voice Conversion Techniques Under Mismatched Conditions
|
Most of the existing studies on voice conversion (VC) are conducted in
acoustically matched conditions between source and target signal. However, the
robustness of VC methods in presence of mismatch remains unknown. In this
paper, we report a comparative analysis of different VC techniques under
mismatched conditions. The extensive experiments with five different VC
techniques on CMU ARCTIC corpus suggest that performance of VC methods
substantially degrades in noisy conditions. We have found that bilinear
frequency warping with amplitude scaling (BLFWAS) outperforms other methods in
most of the noisy conditions. We further explore the suitability of different
speech enhancement techniques for robust conversion. The objective evaluation
results indicate that spectral subtraction and log minimum mean square error
(logMMSE) based speech enhancement techniques can be used to improve the
performance in specific noisy conditions.
|
[
"['Monisankha Pal' 'Dipjyoti Paul' 'Md Sahidullah' 'Goutam Saha']",
"Monisankha Pal, Dipjyoti Paul, Md Sahidullah, Goutam Saha"
] |
cs.AI cs.LG stat.ML
| null |
1612.07548
| null | null |
http://arxiv.org/pdf/1612.07548v1
|
2016-12-22T11:30:35Z
|
2016-12-22T11:30:35Z
|
Non-Deterministic Policy Improvement Stabilizes Approximated
Reinforcement Learning
|
This paper investigates a type of instability that is linked to the greedy
policy improvement in approximated reinforcement learning. We show empirically
that non-deterministic policy improvement can stabilize methods like LSPI by
controlling the improvements' stochasticity. Additionally we show that a
suitable representation of the value function also stabilizes the solution to
some degree. The presented approach is simple and should also be easily
transferable to more sophisticated algorithms like deep reinforcement learning.
|
[
"Wendelin B\\\"ohmer and Rong Guo and Klaus Obermayer",
"['Wendelin Böhmer' 'Rong Guo' 'Klaus Obermayer']"
] |
cs.LG
| null |
1612.07562
| null | null |
http://arxiv.org/pdf/1612.07562v15
|
2019-10-22T14:48:35Z
|
2016-12-22T12:05:29Z
|
On the function approximation error for risk-sensitive reinforcement
learning
|
In this paper we obtain several informative error bounds on function
approximation for the policy evaluation algorithm proposed by Basu et al. when
the aim is to find the risk-sensitive cost represented using exponential
utility. The main idea is to use the classical Bapat's inequality and
Perron-Frobenius eigenvectors (which exist if we assume an irreducible Markov
chain) to obtain the new bounds. The novelty of our approach is that we use the
irreducibility of the Markov chain to get the new bounds, whereas the earlier
work by Basu et al. used a spectral variation bound that holds for any matrix.
We also give examples where all our bounds achieve the "actual error" whereas
the earlier bound given by Basu et al. is much weaker in comparison. We show
that this happens due to the absence of a difference term in the earlier bound,
a term which is always present in all our bounds when the state space is large.
Additionally, we discuss how all our bounds compare with each other. As a
corollary of our main result we provide a bound between the largest eigenvalues
of two irreducible matrices in terms of the matrix entries.
|
[
"Prasenjit Karmakar, Shalabh Bhatnagar",
"['Prasenjit Karmakar' 'Shalabh Bhatnagar']"
] |
stat.ML cs.LG
| null |
1612.07597
| null | null |
http://arxiv.org/pdf/1612.07597v2
|
2017-03-16T12:21:36Z
|
2016-12-22T13:53:42Z
|
Finding Statistically Significant Attribute Interactions
|
In many data exploration tasks it is meaningful to identify groups of
attribute interactions that are specific to a variable of interest. For
instance, in a dataset where the attributes are medical markers and the
variable of interest (class variable) is binary indicating presence/absence of
disease, we would like to know which medical markers interact with respect to
the binary class label. These interactions are useful in several practical
applications, for example, to gain insight into the structure of the data, in
feature selection, and in data anonymisation. We present a novel method, based
on statistical significance testing, that can be used to verify if the data set
has been created by a given factorised class-conditional joint distribution,
where the distribution is parametrised by a partition of its attributes.
Furthermore, we provide a method, named ASTRID, for automatically finding a
partition of attributes describing the distribution that has generated the
data. State-of-the-art classifiers are utilised to capture the interactions
present in the data by systematically breaking attribute interactions and
observing the effect of this breaking on classifier performance. We empirically
demonstrate the utility of the proposed method with examples using real and
synthetic data.
|
[
"['Andreas Henelius' 'Antti Ukkonen' 'Kai Puolamäki']",
"Andreas Henelius, Antti Ukkonen, Kai Puolam\\\"aki"
] |
cs.LG stat.ML
| null |
1612.07640
| null | null |
http://arxiv.org/pdf/1612.07640v1
|
2016-12-16T04:56:30Z
|
2016-12-16T04:56:30Z
|
Deep Learning and Its Applications to Machine Health Monitoring: A
Survey
|
Since 2006, deep learning (DL) has become a rapidly growing research
direction, redefining state-of-the-art performances in a wide range of areas
such as object recognition, image segmentation, speech recognition and machine
translation. In modern manufacturing systems, data-driven machine health
monitoring is gaining in popularity due to the widespread deployment of
low-cost sensors and their connection to the Internet. Meanwhile, deep learning
provides useful tools for processing and analyzing these big machinery data.
The main purpose of this paper is to review and summarize the emerging research
work of deep learning on machine health monitoring. After a brief introduction
of deep learning techniques, the applications of deep learning in machine
health monitoring systems are reviewed mainly from the following aspects: the
Auto-encoder (AE) and its variants, Restricted Boltzmann Machines and their
variants including the Deep Belief Network (DBN) and Deep Boltzmann Machines
(DBM), Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN).
Finally, some new trends of DL-based machine health monitoring methods are
discussed.
|
[
"Rui Zhao, Ruqiang Yan, Zhenghua Chen, Kezhi Mao, Peng Wang and Robert\n X. Gao",
"['Rui Zhao' 'Ruqiang Yan' 'Zhenghua Chen' 'Kezhi Mao' 'Peng Wang'\n 'Robert X. Gao']"
] |
stat.ML cs.LG
| null |
1612.07659
| null | null |
http://arxiv.org/pdf/1612.07659v1
|
2016-12-22T15:53:57Z
|
2016-12-22T15:53:57Z
|
Structured Sequence Modeling with Graph Convolutional Recurrent Networks
|
This paper introduces Graph Convolutional Recurrent Network (GCRN), a deep
learning model able to predict structured sequences of data. Precisely, GCRN is
a generalization of classical recurrent neural networks (RNN) to data
structured by an arbitrary graph. Such structured sequences can represent
series of frames in videos, spatio-temporal measurements on a network of
sensors, or random walks on a vocabulary graph for natural language modeling.
The proposed model combines convolutional neural networks (CNN) on graphs to
identify spatial structures and RNN to find dynamic patterns. We study two
possible architectures of GCRN, and apply the models to two practical problems:
predicting moving MNIST data, and modeling natural language with the Penn
Treebank dataset. Experiments show that simultaneously exploiting graph spatial
and dynamic information about data can improve both precision and learning
speed.
|
[
"Youngjoo Seo, Micha\\\"el Defferrard, Pierre Vandergheynst, Xavier\n Bresson",
"['Youngjoo Seo' 'Michaël Defferrard' 'Pierre Vandergheynst'\n 'Xavier Bresson']"
] |
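The core construction, swapping the dense transforms of a recurrent cell for graph convolutions, can be sketched as follows; the single-hop propagation with a normalized adjacency A_hat is an illustrative simplification of the Chebyshev graph filters typically used:

```python
import numpy as np

def graph_recurrent_step(A_hat, X_t, H_prev, W_x, W_h, b):
    """One step of a graph-convolutional recurrent cell.
    A_hat: (n, n) normalized adjacency; X_t: (n, d_in) node features at time t;
    H_prev: (n, d_hid) previous hidden state; W_x, W_h, b: learnable parameters."""
    return np.tanh(A_hat @ X_t @ W_x + A_hat @ H_prev @ W_h + b)

# Unrolling this step over a sequence couples spatial structure (through A_hat)
# with temporal dynamics (through the recurrence), the combination GCRN exploits.
```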
hep-ph cs.LG physics.data-an
|
10.1088/1748-0221/12/05/T05005
|
1612.07725
| null | null |
http://arxiv.org/abs/1612.07725v3
|
2017-05-30T19:03:03Z
|
2016-12-21T20:01:37Z
|
Stacking machine learning classifiers to identify Higgs bosons at the
LHC
|
Machine learning (ML) algorithms have been employed in the problem of
classifying signal and background events with high accuracy in particle
physics. In this paper, we compare the performance of a widespread ML
technique, namely, \emph{stacked generalization}, against the results of two
state-of-art algorithms: (1) a deep neural network (DNN) in the task of
discovering a new neutral Higgs boson and (2) a scalable machine learning
system for tree boosting, in the Standard Model Higgs to tau leptons channel,
both at the 8 TeV LHC. In a cut-and-count analysis, \emph{stacking} three
algorithms performed around 16\% worse than the DNN while demanding far less
computational effort; however, the same \emph{stacking} outperforms boosted
decision trees. Using the stacked classifiers in a multivariate statistical
analysis (MVA), on the other hand, significantly enhances the statistical
significance compared to cut-and-count in both Higgs processes, suggesting that
combining an ensemble of simpler and faster ML algorithms with MVA tools is a
better approach than building a complex state-of-art algorithm for
cut-and-count.
|
[
"Alexandre Alves",
"['Alexandre Alves']"
] |
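A generic stacked-generalization setup of the kind compared against the DNN can be assembled in scikit-learn; the base learners and hyperparameters here are illustrative, not the paper's:

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("svm", SVC(probability=True)),
    ],
    final_estimator=LogisticRegression(),  # meta-learner over base predictions
    cv=5,  # out-of-fold predictions keep training labels from leaking upward
)
# Used like any classifier: stack.fit(X_train, y_train); stack.predict(X_test).
```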
cs.NE cs.AI cs.LG
| null |
1612.07771
| null | null |
http://arxiv.org/pdf/1612.07771v3
|
2017-03-14T21:27:03Z
|
2016-12-22T19:57:35Z
|
Highway and Residual Networks learn Unrolled Iterative Estimation
|
The past year saw the introduction of new architectures such as Highway
networks and Residual networks which, for the first time, enabled the training
of feedforward networks with dozens to hundreds of layers using simple gradient
descent. While depth of representation has been posited as a primary reason for
their success, there are indications that these architectures defy a popular
view of deep learning as a hierarchical computation of increasingly abstract
features at each layer.
In this report, we argue that this view is incomplete and does not adequately
explain several recent findings. We propose an alternative viewpoint based on
unrolled iterative estimation -- a group of successive layers iteratively
refine their estimates of the same features instead of computing an entirely
new representation. We demonstrate that this viewpoint directly leads to the
construction of Highway and Residual networks. Finally we provide preliminary
experiments to discuss the similarities and differences between the two
architectures.
|
[
"Klaus Greff and Rupesh K. Srivastava and J\\\"urgen Schmidhuber",
"['Klaus Greff' 'Rupesh K. Srivastava' 'Jürgen Schmidhuber']"
] |
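The unrolled-iterative-estimation reading can be stated in one line per layer: each residual block nudges the current estimate instead of computing a new representation.

```python
import numpy as np

def residual_forward(x, blocks):
    """blocks: list of functions F_i; each layer computes x <- x + F_i(x),
    refining the same representation instead of replacing it."""
    for F in blocks:
        x = x + F(x)
    return x

# With one contractive correction repeated at every "layer", the iterates
# approach a fixed point, the behavior the iterative-estimation view predicts.
blocks = [lambda v: -0.5 * (v - 1.0)] * 10
print(residual_forward(np.array([4.0]), blocks))  # ~[1.0]
```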
cs.LG cs.LO
| null |
1612.07823
| null | null |
http://arxiv.org/pdf/1612.07823v3
|
2017-05-15T20:07:13Z
|
2016-12-22T21:58:32Z
|
Logic-based Clustering and Learning for Time-Series Data
|
To effectively analyze and design cyber-physical systems (CPS), designers
today have to combat the data deluge problem, i.e., the burden of processing
intractably large amounts of data produced by complex models and experiments.
In this work, we utilize monotonic Parametric Signal Temporal Logic (PSTL) to
design features for unsupervised classification of time series data. This
enables using off-the-shelf machine learning tools to automatically cluster
similar traces with respect to a given PSTL formula. We demonstrate how this
technique produces interpretable formulas that are amenable to analysis and
understanding using a few representative examples. We illustrate this with case
studies related to automotive engine testing, highway traffic analysis, and
auto-grading massively open online courses.
|
[
"Marcell Vazquez-Chanlatte, Jyotirmoy V. Deshmukh, Xiaoqing Jin, Sanjit\n A. Seshia",
"['Marcell Vazquez-Chanlatte' 'Jyotirmoy V. Deshmukh' 'Xiaoqing Jin'\n 'Sanjit A. Seshia']"
] |
cs.CV cs.LG cs.NE
| null |
1612.07828
| null | null |
http://arxiv.org/pdf/1612.07828v2
|
2017-07-19T21:24:52Z
|
2016-12-22T22:10:51Z
|
Learning from Simulated and Unsupervised Images through Adversarial
Training
|
With recent progress in graphics, it has become more tractable to train
models on synthetic images, potentially avoiding the need for expensive
annotations. However, learning from synthetic images may not achieve the
desired performance due to a gap between synthetic and real image
distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U)
learning, where the task is to learn a model to improve the realism of a
simulator's output using unlabeled real data, while preserving the annotation
information from the simulator. We develop a method for S+U learning that uses
an adversarial network similar to Generative Adversarial Networks (GANs), but
with synthetic images as inputs instead of random vectors. We make several key
modifications to the standard GAN algorithm to preserve annotations, avoid
artifacts, and stabilize training: (i) a 'self-regularization' term, (ii) a
local adversarial loss, and (iii) updating the discriminator using a history of
refined images. We show that this enables generation of highly realistic
images, which we demonstrate both qualitatively and with a user study. We
quantitatively evaluate the generated images by training models for gaze
estimation and hand pose estimation. We show a significant improvement over
using synthetic images, and achieve state-of-the-art results on the MPIIGaze
dataset without any labeled real data.
|
[
"['Ashish Shrivastava' 'Tomas Pfister' 'Oncel Tuzel' 'Josh Susskind'\n 'Wenda Wang' 'Russ Webb']",
"Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Josh Susskind, Wenda\n Wang, Russ Webb"
] |
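Modification (iii), updating the discriminator with a history of refined images, can be sketched as a small replay buffer; the capacity and the half-fresh/half-history split are illustrative assumptions:

```python
import random

class ImageHistoryBuffer:
    """Replay buffer of previously refined images for discriminator updates."""

    def __init__(self, capacity=512):
        self.capacity, self.buffer = capacity, []

    def sample_and_store(self, refined_batch):
        """Return a batch that is half fresh refinements and half history,
        while inserting half of the fresh images into the buffer."""
        half = len(refined_batch) // 2
        old = random.sample(self.buffer, min(half, len(self.buffer)))
        for img in refined_batch[:half]:
            if len(self.buffer) < self.capacity:
                self.buffer.append(img)
            else:  # overwrite a random slot once the buffer is full
                self.buffer[random.randrange(self.capacity)] = img
        return refined_batch[half:] + old
```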
cs.CL cs.IR cs.LG stat.ML
|
10.1371/journal.pone.0181142
|
1612.07843
| null | null |
http://arxiv.org/abs/1612.07843v1
|
2016-12-23T00:31:30Z
|
2016-12-23T00:31:30Z
|
"What is Relevant in a Text Document?": An Interpretable Machine
Learning Approach
|
Text documents can be described by a number of abstract concepts such as
semantic category, writing style, or sentiment. Machine learning (ML) models
have been trained to automatically map documents to these abstract concepts,
allowing to annotate very large text collections, more than could be processed
by a human in a lifetime. Besides predicting the text's category very
accurately, it is also highly desirable to understand how and why the
categorization process takes place. In this paper, we demonstrate that such
understanding can be achieved by tracing the classification decision back to
individual words using layer-wise relevance propagation (LRP), a recently
developed technique for explaining predictions of complex non-linear
classifiers. We train two word-based ML models, a convolutional neural network
(CNN) and a bag-of-words SVM classifier, on a topic categorization task and
adapt the LRP method to decompose the predictions of these models onto words.
Resulting scores indicate how much individual words contribute to the overall
classification decision. This enables one to distill relevant information from
text documents without an explicit semantic information extraction step. We
further use the word-wise relevance scores for generating novel vector-based
document representations which capture semantic information. Based on these
document vectors, we introduce a measure of model explanatory power and show
that, although the SVM and CNN models perform similarly in terms of
classification accuracy, the latter exhibits a higher level of explainability
which makes it more comprehensible for humans and potentially more useful for
other applications.
|
[
"Leila Arras, Franziska Horn, Gr\\'egoire Montavon, Klaus-Robert\n M\\\"uller, Wojciech Samek",
"['Leila Arras' 'Franziska Horn' 'Grégoire Montavon' 'Klaus-Robert Müller'\n 'Wojciech Samek']"
] |
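The word-level decomposition rests on propagating relevance backwards layer by layer; a minimal sketch of the LRP epsilon rule for a single linear layer (conventions follow the common formulation, which may differ in detail from the paper's adaptation):

```python
import numpy as np

def lrp_linear(a, W, R_out, eps=1e-6):
    """a: (d_in,) input activations; W: (d_in, d_out) weights; R_out: (d_out,)
    relevance arriving at the layer's outputs. Returns relevance for the inputs."""
    z = a @ W                                            # pre-activations
    s = R_out / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilized ratios
    return a * (W @ s)                                   # redistribute to inputs

# Chaining lrp_linear backwards through a network yields per-input (here,
# per-word) relevance scores whose sum is approximately conserved layer to layer.
```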
stat.ML cs.LG
|
10.7282/t3-t7fe-4a02
|
1612.07857
| null | null | null | null | null |
Human Action Attribute Learning From Video Data Using Low-Rank
Representations
|
Representation of human actions as a sequence of human body movements or
action attributes enables the development of models for human activity
recognition and summarization. We present an extension of the low-rank
representation (LRR) model, termed the clustering-aware structure-constrained
low-rank representation (CS-LRR) model, for unsupervised learning of human
action attributes from video data. Our model is based on the union-of-subspaces
(UoS) framework, and integrates spectral clustering into the LRR optimization
problem for better subspace clustering results. We lay out an efficient linear
alternating direction method to solve the CS-LRR optimization problem. We also
introduce a hierarchical subspace clustering approach, termed hierarchical
CS-LRR, to learn the attributes without the need for a priori specification of
their number. By visualizing and labeling these action attributes, the
hierarchical model can be used to semantically summarize long video sequences
of human actions at multiple resolutions. A human action or activity can also
be uniquely represented as a sequence of transitions from one action attribute
to another, which can then be used for human action recognition. We demonstrate
the effectiveness of the proposed model for semantic summarization and action
recognition through comprehensive experiments on five real-world human action
datasets.
|
[
"Tong Wu, Prudhvi Gurram, Raghuveer M. Rao, and Waheed U. Bajwa"
] |
cs.AI cs.LG
| null |
1612.07896
| null | null |
http://arxiv.org/pdf/1612.07896v1
|
2016-12-23T08:03:20Z
|
2016-12-23T08:03:20Z
|
A Base Camp for Scaling AI
|
Modern statistical machine learning (SML) methods share a major limitation
with the early approaches to AI: there is no scalable way to adapt them to new
domains. Human learning solves this in part by leveraging a rich, shared,
updateable world model. Such scalability requires modularity: updating part of
the world model should not impact unrelated parts. We have argued that such
modularity will require both "correctability" (so that errors can be corrected
without introducing new errors) and "interpretability" (so that we can
understand what components need correcting).
To achieve this, one could attempt to adapt state-of-the-art SML systems to
be interpretable and correctable; or one could see how far the simplest
possible interpretable, correctable learning methods can take us, and try to
control the limitations of SML methods by applying them only where needed. Here
we focus on the latter approach and we investigate two main ideas: "Teacher
Assisted Learning", which leverages crowd sourcing to learn language; and
"Factored Dialog Learning", which factors the process of application
development into roles where the language competencies needed are isolated,
enabling non-experts to quickly create new applications.
We test these ideas in an "Automated Personal Assistant" (APA) setting, with
two scenarios: that of detecting user intent from a user-APA dialog; and that
of creating a class of event reminder applications, where a non-expert
"teacher" can then create specific apps. For the intent detection task, we use
a dataset of a thousand labeled utterances from user dialogs with Cortana, and
we show that our approach matches state of the art SML methods, but in addition
provides full transparency: the whole (editable) model can be summarized on one
human-readable page. For the reminder app task, we ran small user studies to
verify the efficacy of the approach.
|
[
"['C. J. C. Burges' 'T. Hart' 'Z. Yang' 'S. Cucerzan' 'R. W. White'\n 'A. Pastusiak' 'J. Lewis']",
"C.J.C. Burges, T. Hart, Z. Yang, S. Cucerzan, R.W. White, A.\n Pastusiak, J. Lewis"
] |
cs.CL cs.LG
| null |
1612.07940
| null | null |
http://arxiv.org/pdf/1612.07940v1
|
2016-12-23T11:32:37Z
|
2016-12-23T11:32:37Z
|
Supervised Opinion Aspect Extraction by Exploiting Past Extraction
Results
|
One of the key tasks of sentiment analysis of product reviews is to extract
product aspects or features that users have expressed opinions on. In this
work, we focus on using supervised sequence labeling as the base approach to
performing the task. Although several extraction methods using sequence
labeling methods such as Conditional Random Fields (CRF) and Hidden Markov
Models (HMM) have been proposed, we show that this supervised approach can be
significantly improved by exploiting the idea of concept sharing across
multiple domains. For example, "screen" is an aspect of the iPhone, but not
only the iPhone has a screen; many other electronic devices have screens too.
When "screen" appears in a review of a new domain (or product), it is likely to
be an aspect too. Knowing this information enables us to do much better
extraction in the new domain. This paper proposes a novel extraction method
exploiting this idea in the context of supervised sequence labeling.
Experimental results show that it produces markedly better results than without
using the past information.
|
[
"Lei Shu, Bing Liu, Hu Xu, Annice Kim",
"['Lei Shu' 'Bing Liu' 'Hu Xu' 'Annice Kim']"
] |
cs.LG stat.ML
| null |
1612.07976
| null | null |
http://arxiv.org/pdf/1612.07976v2
|
2016-12-28T02:29:15Z
|
2016-12-23T14:07:01Z
|
DeMIAN: Deep Modality Invariant Adversarial Network
|
Obtaining common representations from different modalities is important in
that they are interchangeable with each other in a classification problem. For
example, we can train a classifier on image features in the common
representations and apply it to the testing of the text features in the
representations. Existing multi-modal representation learning methods mainly
aim to extract rich information from paired samples and train a classifier by
the corresponding labels; however, collecting paired samples and their labels
simultaneously involves high labor costs. Addressing paired modal samples
without their labels and single modal data with their labels independently is
much easier than addressing labeled multi-modal data. To obtain the common
representations under such a situation, we propose to make the distributions
over different modalities similar in the learned representations, namely
modality-invariant representations. In particular, we propose a novel algorithm
for modality-invariant representation learning, named Deep Modality Invariant
Adversarial Network (DeMIAN), which utilizes the idea of Domain Adaptation
(DA). Using the modality-invariant representations learned by DeMIAN, we
achieved better classification accuracy than with the state-of-the-art methods,
especially for some benchmark datasets of zero-shot learning.
|
[
"['Kuniaki Saito' 'Yusuke Mukuta' 'Yoshitaka Ushiku' 'Tatsuya Harada']",
"Kuniaki Saito, Yusuke Mukuta, Yoshitaka Ushiku, Tatsuya Harada"
] |
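DeMIAN borrows from Domain Adaptation, where a common implementation trick is the gradient-reversal layer: identity on the forward pass, sign-flipped gradients on the backward pass, so the encoder learns representations the modality discriminator cannot tell apart. The PyTorch sketch below illustrates that generic trick under stated assumptions; it is not the authors' exact network.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient trains the encoder to fool the modality classifier.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Toy usage: a modality discriminator on top of reversed features.
encoder = torch.nn.Linear(16, 8)
discriminator = torch.nn.Linear(8, 2)  # predicts image vs. text
features = encoder(torch.randn(4, 16))
logits = discriminator(grad_reverse(features))
loss = torch.nn.functional.cross_entropy(logits, torch.tensor([0, 1, 0, 1]))
loss.backward()  # encoder receives reversed (adversarial) gradients
```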
stat.ML cs.LG
| null |
1612.07993
| null | null |
http://arxiv.org/pdf/1612.07993v1
|
2016-12-23T15:02:54Z
|
2016-12-23T15:02:54Z
|
RSSL: Semi-supervised Learning in R
|
In this paper, we introduce a package for semi-supervised learning research
in the R programming language called RSSL. We cover the purpose of the package,
the methods it includes and comment on their use and implementation. We then
show, using several code examples, how the package can be used to replicate
well-known results from the semi-supervised learning literature.
|
[
"['Jesse H. Krijthe']",
"Jesse H. Krijthe"
] |
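RSSL is an R package, so the snippet below is only a language-neutral illustration of one method family such packages cover: self-training, sketched in Python with scikit-learn, where -1 marks unlabeled points.

```python
# Sketch of a standard semi-supervised baseline (self-training), shown in
# Python for illustration only; RSSL implements related methods in R.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=300, random_state=0)
y_partial = y.copy()
rng = np.random.RandomState(0)
y_partial[rng.rand(len(y)) < 0.9] = -1  # hide 90% of labels (-1 = unlabeled)

model = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
model.fit(X, y_partial)
print("accuracy on all points:", accuracy_score(y, model.predict(X)))
```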
stat.ML cs.LG
| null |
1612.08082
| null | null |
http://arxiv.org/pdf/1612.08082v3
|
2018-07-10T08:16:55Z
|
2016-12-23T20:29:52Z
|
Constructing Effective Personalized Policies Using Counterfactual
Inference from Biased Data Sets with Many Features
|
This paper proposes a novel approach for constructing effective personalized
policies when the observed data lacks counter-factual information, is biased
and possesses many features. The approach is applicable in a wide variety of
settings from healthcare to advertising to education to finance. These settings
have in common that the decision maker can observe, for each previous instance,
an array of features of the instance, the action taken in that instance, and
the reward realized -- but not the rewards of actions that were not taken: the
counterfactual information. Learning in such settings is made even more
difficult because the observed data is typically biased by the existing policy
(that generated the data) and because the array of features that might affect
the reward in a particular instance -- and hence should be taken into account
in deciding on an action in each particular instance -- is often vast. The
approach presented here estimates propensity scores for the observed data,
infers counterfactuals, identifies a (relatively small) number of features that
are (most) relevant for each possible action and instance, and prescribes a
policy to be followed. Comparison of the proposed algorithm against the
state-of-art algorithm on actual datasets demonstrates that the proposed
algorithm achieves a significant improvement in performance.
|
[
"['Onur Atan' 'William R. Zame' 'Qiaojun Feng' 'Mihaela van der Schaar']",
"Onur Atan, William R. Zame, Qiaojun Feng, Mihaela van der Schaar"
] |
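The setting described above (features, the logged action, and only the realized reward) is the classic logged-bandit-feedback problem. As a hedged, textbook-style illustration of the propensity-score ingredient the abstract mentions, and not the paper's algorithm, here is an inverse-propensity estimate of a candidate policy's value on synthetic logs:

```python
# Inverse propensity scoring (IPS): a standard building block for learning
# from logged bandit feedback. This is a generic sketch, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.randn(5000, 4)                     # observed features
a = (rng.rand(5000) < 0.3).astype(int)     # logged (biased) binary actions
r = (X[:, 0] > 0).astype(float) * a        # reward seen only for taken actions

# Estimate the logging policy's propensities P(a=1 | x) from the data.
prop_model = LogisticRegression().fit(X, a)
p_a1 = prop_model.predict_proba(X)[:, 1]
propensity = np.where(a == 1, p_a1, 1.0 - p_a1)

# Evaluate a candidate policy pi(x): here, "act when x0 > 0" (an assumption).
pi_action = (X[:, 0] > 0).astype(int)
match = (pi_action == a).astype(float)
value_ips = np.mean(match * r / np.clip(propensity, 1e-3, None))
print("IPS estimate of policy value:", value_ips)
```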
cs.SI cs.LG physics.soc-ph
| null |
1612.08102
| null | null |
http://arxiv.org/pdf/1612.08102v1
|
2016-12-23T21:20:55Z
|
2016-12-23T21:20:55Z
|
On Spectral Analysis of Directed Signed Graphs
|
It has been shown that the adjacency eigenspace of a network contains key
information of its underlying structure. However, there has been no study on
spectral analysis of the adjacency matrices of directed signed graphs. In this
paper, we derive theoretical approximations of spectral projections from such
directed signed networks using matrix perturbation theory. We use the derived
theoretical results to study the influences of negative intra-cluster and
inter-cluster directed edges on node spectral projections. We then develop a spectral
clustering based graph partition algorithm, SC-DSG, and conduct evaluations on
both synthetic and real datasets. Both theoretical analysis and empirical
evaluation demonstrate the effectiveness of the proposed algorithm.
|
[
"['Yuemeng Li' 'Xintao Wu' 'Aidong Lu']",
"Yuemeng Li, Xintao Wu, Aidong Lu"
] |
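The sketch below shows only the generic machinery the paper analyzes: embedding nodes via the leading eigenvectors of a directed, signed adjacency matrix and clustering the embedding. It is a baseline illustration, not the proposed SC-DSG algorithm or its perturbation-theoretic analysis.

```python
# Generic spectral embedding + clustering of a directed signed adjacency
# matrix; the paper's SC-DSG and its theory are not reproduced here.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
n, k = 60, 2
labels_true = np.repeat([0, 1], n // 2)
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        same = labels_true[i] == labels_true[j]
        if rng.rand() < (0.2 if same else 0.05):
            A[i, j] = 1.0 if same else -1.0  # negative inter-cluster edges

# Leading eigenvectors of the asymmetric adjacency (may be complex; keep real part).
vals, vecs = np.linalg.eig(A)
order = np.argsort(-np.abs(vals))
embedding = np.real(vecs[:, order[:k]])

pred = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embedding)
print("cluster sizes:", np.bincount(pred))
```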
cs.CV cs.CL cs.LG
| null |
1612.08354
| null | null |
http://arxiv.org/pdf/1612.08354v1
|
2016-12-26T09:51:18Z
|
2016-12-26T09:51:18Z
|
Image-Text Multi-Modal Representation Learning by Adversarial
Backpropagation
|
We present a novel method for image-text multi-modal representation learning.
To our knowledge, this work is the first to apply the adversarial learning
concept to multi-modal learning without exploiting image-text pair information
to learn multi-modal features. We use only category information, in contrast
with most previous methods, which use image-text pair information for
multi-modal embedding. In this paper, we show that multi-modal features can be
learned without image-text pair information, and that our method brings the
image and text distributions closer together in the multi-modal feature space
than other methods that use image-text pair information. We also show that our
multi-modal features carry universal semantic information, even though they
were trained for category prediction. Our model is trained end-to-end by
backpropagation, is intuitive, and is easily extended to other multi-modal
learning work.
|
[
"Gwangbeen Park, Woobin Im",
"['Gwangbeen Park' 'Woobin Im']"
] |
cs.LG stat.ML
| null |
1612.08388
| null | null |
http://arxiv.org/pdf/1612.08388v1
|
2016-12-26T14:25:32Z
|
2016-12-26T14:25:32Z
|
Clustering Algorithms: A Comparative Approach
|
Many real-world systems can be studied in terms of pattern recognition tasks,
so that proper use (and understanding) of machine learning methods in practical
applications becomes essential. While a myriad of classification methods have
been proposed, there is no consensus on which methods are more suitable for a
given dataset. As a consequence, it is important to comprehensively compare
methods in many possible scenarios. In this context, we performed a systematic
comparison of 7 well-known clustering methods available in the R language. In
order to account for the many possible variations of data, we considered
artificial datasets with several tunable properties (number of classes,
separation between classes, etc). In addition, we also evaluated the
sensitivity of the clustering methods with regard to their parameter
configuration. The results revealed that, when considering the default
configurations of the adopted methods, the spectral approach usually
outperformed the other clustering algorithms. We also found that the default
configuration of the adopted implementations was not accurate. In these cases,
a simple approach based on random selection of parameter values proved to be a
good alternative to improve the performance. All in all, the reported approach
provides guidance for the choice of clustering algorithms.
|
[
"['Mayra Z. Rodriguez' 'Cesar H. Comin' 'Dalcimar Casanova'\n 'Odemir M. Bruno' 'Diego R. Amancio' 'Francisco A. Rodrigues'\n 'Luciano da F. Costa']",
"Mayra Z. Rodriguez, Cesar H. Comin, Dalcimar Casanova, Odemir M.\n Bruno, Diego R. Amancio, Francisco A. Rodrigues, Luciano da F. Costa"
] |
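The study itself is carried out in R; the following Python sketch mirrors its protocol in miniature, assuming synthetic blob data and a handful of scikit-learn clusterers scored by the adjusted Rand index under near-default settings.

```python
# Sketch of the comparison protocol: synthetic data with tunable separation,
# several clustering methods at default-ish settings, scored against the truth.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, SpectralClustering, AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

X, y = make_blobs(n_samples=300, centers=4, cluster_std=2.0, random_state=0)

methods = {
    "kmeans": KMeans(n_clusters=4, n_init=10, random_state=0),
    "spectral": SpectralClustering(n_clusters=4, random_state=0),
    "agglomerative": AgglomerativeClustering(n_clusters=4),
}
for name, method in methods.items():
    pred = method.fit_predict(X)
    print(f"{name}: ARI = {adjusted_rand_score(y, pred):.3f}")
```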
stat.ML cs.LG q-bio.NC
| null |
1612.08392
| null | null |
http://arxiv.org/pdf/1612.08392v1
|
2016-12-26T14:37:57Z
|
2016-12-26T14:37:57Z
|
Multi-Region Neural Representation: A novel model for decoding visual
stimuli in human brains
|
Multivariate Pattern (MVP) classification holds enormous potential for
decoding visual stimuli in the human brain by employing task-based fMRI data
sets. There is a wide range of challenges in the MVP techniques, i.e.
decreasing noise and sparsity, defining effective regions of interest (ROIs),
visualizing results, and the cost of brain studies. In overcoming these
challenges, this paper proposes a novel model of neural representation, which
can automatically detect the active regions for each visual stimulus and then
utilize these anatomical regions for visualizing and analyzing the functional
activities. This model therefore gives neuroscientists the opportunity to ask
what the effect of a stimulus is on each of the detected regions, rather than
just studying the fluctuation of voxels in manually selected ROIs. Moreover,
our method analyzes snapshots of the brain image to decrease sparsity, rather
than using whole fMRI time series. Further, a new Gaussian smoothing method is
proposed for removing voxel noise at the level of ROIs. The proposed method
enables us to combine
different fMRI data sets for reducing the cost of brain studies. Experimental
studies on 4 visual categories (words, consonants, objects and nonsense photos)
confirm that the proposed method achieves superior performance to
state-of-the-art methods.
|
[
"['Muhammad Yousefnezhad' 'Daoqiang Zhang']",
"Muhammad Yousefnezhad, Daoqiang Zhang"
] |
stat.ML astro-ph.IM cs.IT cs.LG math.IT
| null |
1612.08406
| null | null |
http://arxiv.org/pdf/1612.08406v2
|
2017-02-13T22:21:17Z
|
2016-12-26T15:42:22Z
|
Correlated signal inference by free energy exploration
|
The inference of correlated signal fields with unknown correlation structures
is of high scientific and technological relevance, but poses significant
conceptual and numerical challenges. To address these, we develop the
correlated signal inference (CSI) algorithm within information field theory
(IFT) and discuss its numerical implementation. To this end, we introduce the
free energy exploration (FrEE) strategy for numerical information field theory
(NIFTy) applications. The FrEE strategy is to let the mathematical structure of
the inference problem determine the dynamics of the numerical solver. FrEE uses
the Gibbs free energy formalism for all involved unknown fields and correlation
structures without marginalization of nuisance quantities. It thereby avoids
the complexity that marginalization often imposes on IFT equations. FrEE
simultaneously solves for the mean and the uncertainties of signal, nuisance,
and auxiliary fields, while exploiting any analytically calculable quantity.
Finally, FrEE uses a problem specific and self-tuning exploration strategy to
swiftly identify the optimal field estimates as well as their uncertainty maps.
For all estimated fields, properly weighted posterior samples drawn from their
exact, fully non-Gaussian distributions can be generated. Here, we develop the
FrEE strategies for the CSI of a normal, a log-normal, and a Poisson log-normal
IFT signal inference problem and demonstrate their performances via their NIFTy
implementations.
|
[
"['Torsten A. Enßlin' 'Jakob Knollmüller']",
"Torsten A. En{\\ss}lin, Jakob Knollm\\\"uller"
] |
stat.ML cs.LG
| null |
1612.08425
| null | null |
http://arxiv.org/pdf/1612.08425v2
|
2016-12-29T16:25:34Z
|
2016-12-26T18:47:11Z
|
Unsupervised Learning for Computational Phenotyping
|
With large volumes of health care data comes the research area of
computational phenotyping, making use of techniques such as machine learning to
describe illnesses and other clinical concepts from the data itself. The
"traditional" approach of using supervised learning relies on a domain expert,
and has two main limitations: requiring skilled humans to supply correct labels
limits its scalability and accuracy, and relying on existing clinical
descriptions limits the sorts of patterns that can be found. For instance, it
may fail to acknowledge that a disease treated as a single condition may really
have several subtypes with different phenotypes, as seems to be the case with
asthma and heart disease. Some recent papers cite successes instead using
unsupervised learning. This shows great potential for finding patterns in
Electronic Health Records that would otherwise be hidden and that can lead to
greater understanding of conditions and treatments. This work implements a
method derived strongly from Lasko et al., but implements it in Apache Spark
and Python and generalizes it to laboratory time-series data in MIMIC-III. It
is released as an open-source tool for exploration, analysis, and
visualization, available at https://github.com/Hodapp87/mimic3_phenotyping
|
[
"Chris Hodapp",
"['Chris Hodapp']"
] |
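The released tool targets Apache Spark and MIMIC-III; as a much smaller hedged stand-in for the general recipe (summarize each patient's lab time series into features, then cluster without labels), here is a toy pipeline on synthetic series:

```python
# Toy unsupervised-phenotyping pipeline: summarize each patient's lab series
# into features, then cluster. Stands in for (but is not) the released tool.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
# Synthetic lab series: two hypothetical "subtypes" with different trends.
series = np.vstack([
    np.cumsum(rng.randn(50, 30) + 0.1, axis=1),   # drifting-up subtype
    np.cumsum(rng.randn(50, 30) - 0.1, axis=1),   # drifting-down subtype
])

# Simple per-patient features: mean, std, and slope of the series.
t = np.arange(series.shape[1])
slope = np.polyfit(t, series.T, 1)[0]
feats = np.column_stack([series.mean(1), series.std(1), slope])

reduced = PCA(n_components=2).fit_transform(feats)
subtype = GaussianMixture(n_components=2, random_state=0).fit_predict(reduced)
print("patients per discovered subtype:", np.bincount(subtype))
```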
math.OC cs.LG cs.NA
|
10.1109/TSP.2017.2755597
|
1612.08461
| null | null |
http://arxiv.org/abs/1612.08461v2
|
2017-09-22T21:59:46Z
|
2016-12-27T00:01:13Z
|
Randomized Block Frank-Wolfe for Convergent Large-Scale Learning
|
Owing to their low-complexity iterations, Frank-Wolfe (FW) solvers are well
suited for various large-scale learning tasks. When block-separable constraints
are present, randomized block FW (RB-FW) has been shown to further reduce
complexity by updating only a fraction of coordinate blocks per iteration. To
circumvent the limitations of existing methods, the present work develops step
sizes for RB-FW that enable a flexible selection of the number of blocks to
update per iteration while ensuring convergence and feasibility of the
iterates. To this end, convergence rates of RB-FW are established through
computational bounds on a primal sub-optimality measure and on the duality gap.
The novel bounds extend the existing convergence analysis, which only applies
to a step-size sequence that does not generally lead to feasible iterates.
Furthermore, two classes of step-size sequences that guarantee feasibility of
the iterates are also proposed to enhance flexibility in choosing decay rates.
The novel convergence results are markedly broadened to encompass also
nonconvex objectives, and further assert that RB-FW with exact line-search
reaches a stationary point at rate $\mathcal{O}(1/\sqrt{t})$. Performance of
RB-FW with different step sizes and number of blocks is demonstrated in two
applications, namely charging of electrical vehicles and structural support
vector machines. Extensive simulated tests demonstrate the performance
improvement of RB-FW relative to existing randomized single-block FW methods.
|
[
"['Liang Zhang' 'Gang Wang' 'Daniel Romero' 'Georgios B. Giannakis']",
"Liang Zhang, Gang Wang, Daniel Romero, Georgios B. Giannakis"
] |
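A hedged skeleton of randomized block Frank-Wolfe for a toy problem whose blocks each live on a probability simplex. It uses the classic 2/(t+2) decay for simplicity; the paper's contribution is precisely a more flexible family of feasibility-preserving step sizes, which this sketch does not reproduce.

```python
# Randomized block Frank-Wolfe on min_x ||x - b||^2 with each block of x in a
# simplex. Generic skeleton only; the paper's step-size design is not shown.
import numpy as np

rng = np.random.RandomState(0)
n_blocks, dim = 10, 5
b = rng.rand(n_blocks, dim)
x = np.full((n_blocks, dim), 1.0 / dim)  # feasible start: uniform per block

for t in range(200):
    grad = 2.0 * (x - b)                                  # objective gradient
    blocks = rng.choice(n_blocks, size=3, replace=False)  # update 3 blocks
    gamma = 2.0 / (t + 2.0)                               # classic FW decay
    for i in blocks:
        # Linear minimization oracle over the simplex: a vertex (basis vector).
        s = np.zeros(dim)
        s[np.argmin(grad[i])] = 1.0
        x[i] = (1.0 - gamma) * x[i] + gamma * s  # convex combo stays feasible

print("objective:", np.sum((x - b) ** 2))
```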
cs.LG stat.ML
| null |
1612.08498
| null | null |
http://arxiv.org/pdf/1612.08498v1
|
2016-12-27T04:38:28Z
|
2016-12-27T04:38:28Z
|
Steerable CNNs
|
It has long been recognized that the invariance and equivariance properties
of a representation are critically important for success in many vision tasks.
In this paper we present Steerable Convolutional Neural Networks, an efficient
and flexible class of equivariant convolutional networks. We show that
steerable CNNs achieve state of the art results on the CIFAR image
classification benchmark. The mathematical theory of steerable representations
reveals a type system in which any steerable representation is a composition of
elementary feature types, each one associated with a particular kind of
symmetry. We show how the parameter cost of a steerable filter bank depends on
the types of the input and output features, and show how to use this knowledge
to construct CNNs that utilize parameters effectively.
|
[
"['Taco S. Cohen' 'Max Welling']",
"Taco S. Cohen, Max Welling"
] |
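Steerable CNNs generalize the commutation of convolution with a symmetry group. The numpy sketch below checks the simplest instance of that equivariance property, translation, using periodic boundaries so the identity holds exactly; it illustrates the property only, not the paper's steerable filter construction.

```python
# Translation equivariance of convolution: conv(shift(x)) == shift(conv(x)).
# Steerable CNNs extend this commutation property to larger symmetry groups.
import numpy as np
from scipy.ndimage import convolve

rng = np.random.RandomState(0)
image = rng.randn(16, 16)
kernel = rng.randn(3, 3)

def shift(img, dy, dx):
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

# Periodic ('wrap') boundaries make the equality exact.
lhs = convolve(shift(image, 2, 3), kernel, mode="wrap")
rhs = shift(convolve(image, kernel, mode="wrap"), 2, 3)
print("equivariant:", np.allclose(lhs, rhs))  # True
```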
cs.LG cs.AI stat.ML
|
10.1109/TKDE.2017.2720168
|
1612.08544
| null | null |
http://arxiv.org/abs/1612.08544v2
|
2017-11-13T17:42:12Z
|
2016-12-27T09:14:16Z
|
Theory-guided Data Science: A New Paradigm for Scientific Discovery from
Data
|
Data science models, although successful in a number of commercial domains,
have had limited applicability in scientific problems involving complex
physical phenomena. Theory-guided data science (TGDS) is an emerging paradigm
that aims to leverage the wealth of scientific knowledge for improving the
effectiveness of data science models in enabling scientific discovery. The
overarching vision of TGDS is to introduce scientific consistency as an
essential component for learning generalizable models. Further, by producing
scientifically interpretable models, TGDS aims to advance our scientific
understanding by discovering novel domain insights. Indeed, the paradigm of
TGDS has started to gain prominence in a number of scientific disciplines such
as turbulence modeling, material discovery, quantum chemistry, bio-medical
science, bio-marker discovery, climate science, and hydrology. In this paper,
we formally conceptualize the paradigm of TGDS and present a taxonomy of
research themes in TGDS. We describe several approaches for integrating domain
knowledge in different research themes using illustrative examples from
different disciplines. We also highlight some of the promising avenues of novel
research for realizing the full potential of theory-guided data science.
|
[
"Anuj Karpatne, Gowtham Atluri, James Faghmous, Michael Steinbach,\n Arindam Banerjee, Auroop Ganguly, Shashi Shekhar, Nagiza Samatova, and Vipin\n Kumar",
"['Anuj Karpatne' 'Gowtham Atluri' 'James Faghmous' 'Michael Steinbach'\n 'Arindam Banerjee' 'Auroop Ganguly' 'Shashi Shekhar' 'Nagiza Samatova'\n 'Vipin Kumar']"
] |
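One concrete TGDS pattern the survey covers is augmenting a data-fit loss with a scientific-consistency penalty. The sketch below is an invented toy whose assumed "theory" is monotonicity of the response; it shows only the shape of such a hybrid objective.

```python
# Theory-guided loss sketch: data-fit term plus a penalty for violating a
# (toy, assumed) physical constraint: monotonicity of the response.
import numpy as np

rng = np.random.RandomState(0)
x = np.sort(rng.rand(100))
y = x ** 2 + 0.05 * rng.randn(100)  # noisy data from a monotone process

def hybrid_loss(w, lam=10.0):
    pred = np.polyval(w, x)                       # polynomial model
    mse = np.mean((pred - y) ** 2)                # data-fit term
    violations = np.maximum(0.0, -np.diff(pred))  # decreases violate theory
    return mse + lam * np.mean(violations)        # scientific-consistency term

# Crude random search just to exercise the loss (not a serious optimizer).
best = min((rng.randn(3) for _ in range(2000)), key=hybrid_loss)
print("best hybrid loss:", hybrid_loss(best))
```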
cs.DC cs.LG
| null |
1612.08608
| null | null |
http://arxiv.org/pdf/1612.08608v1
|
2016-12-27T12:40:39Z
|
2016-12-27T12:40:39Z
|
ASAP: Asynchronous Approximate Data-Parallel Computation
|
Emerging workloads, such as graph processing and machine learning, are
approximate because of the scale of data involved and the stochastic nature of
the underlying algorithms. These algorithms are often distributed over multiple
machines using bulk-synchronous processing (BSP) or other synchronous
processing paradigms such as map-reduce. However, data parallel processing
primitives such as repeated barrier and reduce operations introduce high
synchronization overheads. Hence, many existing data-processing platforms use
asynchrony and staleness to improve data-parallel job performance. Often, these
systems simply change the synchronous communication to asynchronous between the
worker nodes in the cluster. This improves the throughput of data processing
but results in poor accuracy of the final output since different workers may
progress at different speeds and process inconsistent intermediate outputs.
In this paper, we present ASAP, a model that provides asynchronous and
approximate processing semantics for data-parallel computation. ASAP provides
fine-grained worker synchronization using NOTIFY-ACK semantics that allows
independent workers to run asynchronously. ASAP also provides stochastic reduce
that provides approximate but guaranteed convergence to the same result as an
aggregated all-reduce. In our results, we show that ASAP can reduce
synchronization costs and provides 2-10X speedups in convergence and up to 10X
savings in network costs for distributed machine learning applications and
provides strong convergence guarantees.
|
[
"['Asim Kadav' 'Erik Kruus']",
"Asim Kadav, Erik Kruus"
] |
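A toy rendering of the NOTIFY-ACK idea: each worker notifies its peer with an update, consumes and acknowledges the peer's message, and waits only for acks on its own messages rather than at a global barrier. This illustrates the semantics under simplifying assumptions and is not the ASAP system.

```python
# Toy NOTIFY-ACK handshake between two workers; no global barrier is used.
import threading
import queue

STEPS = 3

def worker(name, inbox, acks, peer_inbox, peer_acks):
    value = 0.0
    for step in range(STEPS):
        update = value + 1.0                  # stand-in for local computation
        peer_inbox.put((name, step, update))  # NOTIFY the consumer
        sender, s, peer_update = inbox.get()  # consume the peer's update...
        peer_acks.put((name, s))              # ...and ACK it back to the sender
        acks.get()                            # wait only for our own ACK
        value = 0.5 * (update + peer_update)  # fold in the peer's contribution
    print(f"{name} final value: {value}")

in_a, in_b = queue.Queue(), queue.Queue()
ack_a, ack_b = queue.Queue(), queue.Queue()
ta = threading.Thread(target=worker, args=("A", in_a, ack_a, in_b, ack_b))
tb = threading.Thread(target=worker, args=("B", in_b, ack_b, in_a, ack_a))
ta.start(); tb.start(); ta.join(); tb.join()
```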
cs.AI cs.LG stat.ML
| null |
1612.08633
| null | null |
http://arxiv.org/pdf/1612.08633v1
|
2016-12-27T13:52:56Z
|
2016-12-27T13:52:56Z
|
A Sparse Nonlinear Classifier Design Using AUC Optimization
|
AUC (Area under the ROC curve) is an important performance measure for
applications where the data is highly imbalanced. Learning to maximize AUC
performance is thus an important research problem. Using a max-margin based
surrogate loss function, the AUC optimization problem can be approximated as a
pairwise rankSVM learning problem. Batch learning methods for solving the
kernelized version of this problem suffer from scalability issues and may not result
in sparse classifiers. Recent years have witnessed an increased interest in the
development of online or single-pass online learning algorithms that design a
classifier by maximizing the AUC performance. The AUC performance of nonlinear
classifiers, designed using online methods, is not comparable with that of
nonlinear classifiers designed using batch learning algorithms on many
real-world datasets. Motivated by these observations, we design a scalable
algorithm for maximizing AUC performance by greedily adding the required number
of basis functions into the classifier model. The resulting sparse classifiers
perform faster inference. Our experimental results show that the level of
sparsity achievable can be an order of magnitude smaller than that of the
Kernel RankSVM model, without affecting the AUC performance much.
|
[
"['Vishal Kakkar' 'Shirish K. Shevade' 'S Sundararajan' 'Dinesh Garg']",
"Vishal Kakkar, Shirish K. Shevade, S Sundararajan, Dinesh Garg"
] |
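The max-margin surrogate mentioned above replaces the AUC's indicator over positive-negative pairs with a pairwise hinge. Below is a hedged numpy sketch of that surrogate with plain subgradient steps for a linear scorer; it is the generic construction, not the paper's sparse greedy algorithm.

```python
# Pairwise hinge surrogate for AUC: penalize positive/negative score pairs
# that are not separated by a margin. Generic sketch, not the paper's method.
import numpy as np

rng = np.random.RandomState(0)
X_pos = rng.randn(40, 5) + 0.5   # positive-class points
X_neg = rng.randn(60, 5) - 0.5   # negative-class points
w = np.zeros(5)

for _ in range(100):
    diff = (X_pos @ w)[:, None] - (X_neg @ w)[None, :]  # f(x+) - f(x-) pairs
    active = diff < 1.0                                  # margin violations
    # Subgradient of mean over pairs of max(0, 1 - (f(x+) - f(x-))).
    grad = -(active[:, :, None] *
             (X_pos[:, None, :] - X_neg[None, :, :])).mean(axis=(0, 1))
    w -= 0.1 * grad

auc = ((X_pos @ w)[:, None] > (X_neg @ w)[None, :]).mean()
print("empirical AUC after training:", auc)
```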
stat.ML cs.LG
| null |
1612.08650
| null | null |
http://arxiv.org/pdf/1612.08650v1
|
2016-12-27T14:57:22Z
|
2016-12-27T14:57:22Z
|
Reproducible Pattern Recognition Research: The Case of Optimistic SSL
|
In this paper, we discuss the approaches we took and trade-offs involved in
making a paper on a conceptual topic in pattern recognition research fully
reproducible. We discuss our definition of reproducibility, the tools used, how
the analysis was set up, show some examples of alternative analyses the code
enables and discuss our views on reproducibility.
|
[
"Jesse H. Krijthe and Marco Loog",
"['Jesse H. Krijthe' 'Marco Loog']"
] |
cs.LG
| null |
1612.08669
| null | null |
http://arxiv.org/pdf/1612.08669v1
|
2016-12-27T16:25:28Z
|
2016-12-27T16:25:28Z
|
A Hybrid Both Filter and Wrapper Feature Selection Method for Microarray
Classification
|
Gene expression data is widely used in disease analysis and cancer diagnosis.
However, since gene expression data could contain thousands of genes
simultaneously, successful microarray classification is rather difficult.
Feature selection is an important pre-treatment for any classification process.
Selecting a useful gene subset as a classifier not only decreases the
computational time and cost, but also increases classification accuracy. In
this study, we applied the information gain method as a filter approach, and an
improved binary particle swarm optimization as a wrapper approach to implement
feature selection; selected gene subsets were used to evaluate the performance
of classification. Experimental results show that, by employing the proposed
method, fewer gene subsets needed to be selected and better classification
accuracy could be obtained.
|
[
"['Li-Yeh Chuang' 'Chao-Hsuan Ke' 'Cheng-Hong Yang']",
"Li-Yeh Chuang, Chao-Hsuan Ke, and Cheng-Hong Yang"
] |
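As a hedged sketch of the filter stage only, the snippet below ranks features with scikit-learn's mutual information estimator (a close relative of information gain) and keeps the top candidates; the improved binary PSO wrapper stage is not reproduced.

```python
# Filter stage only: rank features by (an estimate of) information gain and
# keep the top-k for the downstream wrapper search / classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

# Stand-in for microarray data: many features, few informative "genes".
X, y = make_classification(n_samples=100, n_features=500, n_informative=10,
                           random_state=0)

scores = mutual_info_classif(X, y, random_state=0)  # ~ information gain
top_k = np.argsort(scores)[::-1][:20]               # keep 20 candidate genes
print("selected gene indices:", sorted(top_k))
X_filtered = X[:, top_k]  # subset passed on to the wrapper stage (e.g., BPSO)
```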