title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Convex Formulation of Multiple Instance Learning from Positive and
Unlabeled Bags | cs.LG | Multiple instance learning (MIL) is a variation of traditional supervised
learning problems where data (referred to as bags) are composed of sub-elements
(referred to as instances) and only bag labels are available. MIL has a variety
of applications such as content-based image retrieval, text categorization and
medical diagnosis. Most previous work on MIL assumes that the training bags
are fully labeled. However, it is often difficult to obtain a sufficient
number of labeled bags in practical situations, while many unlabeled bags are
available. A learning framework called PU learning (positive and unlabeled
learning) can address this problem. In this paper, we propose a convex PU
learning method to solve an MIL problem. We experimentally show that the
proposed method achieves better performance with significantly lower
computational costs than an existing method for PU-MIL.
| Han Bao, Tomoya Sakai, Issei Sato, Masashi Sugiyama | null | 1704.06767 | null | null |
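
For a concrete handle on the PU-learning component, below is a minimal NumPy sketch of the generic unbiased PU risk estimator that methods in this family build on. It illustrates the PU principle on a linear scorer only; it is not the authors' convex PU-MIL objective, and the known class prior `pi_p` and the squared-hinge-style loss are assumptions.

```python
import numpy as np

def pu_risk(w, X_pos, X_unl, pi_p,
            loss=lambda z: np.maximum(0.0, 1.0 - z) ** 2):
    """Unbiased PU risk: pi_p * R_pos(+1) + (R_unl(-1) - pi_p * R_pos(-1)).

    Only positive (X_pos) and unlabeled (X_unl) data are needed; the
    negative-class risk is estimated indirectly from the unlabeled set."""
    s_pos, s_unl = X_pos @ w, X_unl @ w
    risk_pos_as_pos = loss(s_pos).mean()    # positives treated as +1
    risk_pos_as_neg = loss(-s_pos).mean()   # positives treated as -1
    risk_unl_as_neg = loss(-s_unl).mean()   # unlabeled treated as -1
    return pi_p * risk_pos_as_pos + risk_unl_as_neg - pi_p * risk_pos_as_neg
```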
Geometric Matrix Completion with Recurrent Multi-Graph Neural Networks | cs.LG cs.IR cs.NA stat.ML | Matrix completion models are among the most common formulations of
recommender systems. Recent works have shown a boost in the performance of
these techniques when the pairwise relationships between users/items are
introduced in the form of graphs, and smoothness priors are imposed on these
graphs. However,
such techniques do not fully exploit the local stationarity structures of
user/item graphs, and the number of parameters to learn is linear w.r.t. the
number of users and items. We propose a novel approach to overcome these
limitations by using geometric deep learning on graphs. Our matrix completion
architecture combines graph convolutional neural networks and recurrent neural
networks to learn meaningful statistical graph-structured patterns and the
non-linear diffusion process that generates the known ratings. This neural
network system requires a constant number of parameters independent of the
matrix size. We apply our method on both synthetic and real datasets, showing
that it outperforms state-of-the-art techniques.
| Federico Monti, Michael M. Bronstein, Xavier Bresson | null | 1704.06803 | null | null |
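
As a reference for the graph-convolutional building block such architectures use, here is a minimal NumPy sketch of a Chebyshev-polynomial graph filter. It assumes the Laplacian has already been rescaled to have spectrum in [-1, 1], and it omits the recurrent diffusion part of the paper's architecture.

```python
import numpy as np

def cheb_graph_conv(X, L_scaled, theta):
    """Chebyshev graph convolution of order K = len(theta) - 1 (K >= 1).

    X: (n, f) node features; L_scaled: (n, n) graph Laplacian rescaled so
    its spectrum lies in [-1, 1]; theta: (K+1,) shared filter coefficients."""
    Tx_prev, Tx = X, L_scaled @ X
    out = theta[0] * Tx_prev + theta[1] * Tx
    for k in range(2, len(theta)):
        Tx_prev, Tx = Tx, 2.0 * (L_scaled @ Tx) - Tx_prev
        out = out + theta[k] * Tx
    return out
```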
Testing Symmetric Markov Chains from a Single Trajectory | cs.LG cs.DS | Classical distribution testing assumes access to i.i.d. samples from the
distribution that is being tested. We initiate the study of Markov chain
testing, assuming access to a single trajectory of a Markov Chain. In
particular, we observe a single trajectory X0,...,Xt,... of an unknown,
symmetric, and finite state Markov Chain M. We do not control the starting
state X0, and we cannot restart the chain. Given our single trajectory, the
goal is to test whether M is identical to a model Markov Chain M0, or far from
it under an appropriate notion of difference. We propose a measure of
difference between two Markov chains, motivated by the early work of Kazakos
[Kaz78], which captures the scaling behavior of the total variation distance
between trajectories sampled from the Markov chains as the length of these
trajectories grows. We provide efficient testers and information-theoretic
lower bounds for testing identity of symmetric Markov chains under our proposed
measure of difference, which are tight up to logarithmic factors if the hitting
times of the model chain M0 are O(n) in the size of the state space n.
| Constantinos Daskalakis, Nishanth Dikkala, Nick Gravin | null | 1704.0685 | null | null |
Learning to Skim Text | cs.CL cs.LG | Recurrent Neural Networks are showing much promise in many sub-areas of
natural language processing, ranging from document classification to machine
translation to automatic question answering. Despite their promise, many
recurrent models have to read the whole text word by word, making it slow to
handle long documents. For example, it is difficult to use a recurrent network
to read a book and answer questions about it. In this paper, we present an
approach of reading text while skipping irrelevant information if needed. The
underlying model is a recurrent network that learns how far to jump after
reading a few words of the input text. We employ a standard policy gradient
method to train the model to make discrete jumping decisions. In our benchmarks
on four different tasks, including number prediction, sentiment analysis, news
article classification and automatic Q&A, our proposed model, a modified LSTM
with jumping, is up to 6 times faster than the standard sequential LSTM, while
maintaining the same or even better accuracy.
| Adams Wei Yu, Hongrae Lee, Quoc V. Le | null | 1704.06877 | null | null |
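
A hedged PyTorch sketch of the jumping mechanism the abstract describes: an LSTM reads a few tokens, then samples a discrete jump size from a policy head. The reading length, jump range, and dimensions are illustrative, and the summed log-probabilities would be scaled by a task reward (e.g., classification correctness) in the REINFORCE update.

```python
import torch
import torch.nn as nn

class JumpLSTM(nn.Module):
    """Reads `read_len` tokens, then samples how many tokens to skip."""
    def __init__(self, vocab, dim=64, max_jump=5, read_len=3):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTMCell(dim, dim)
        self.jump = nn.Linear(dim, max_jump)   # jump-size policy head
        self.read_len = read_len

    def forward(self, tokens):                 # tokens: 1-D LongTensor
        h = c = torch.zeros(1, self.emb.embedding_dim)
        log_probs, i = [], 0
        while i < len(tokens):
            for t in tokens[i:i + self.read_len]:     # read a few words
                h, c = self.lstm(self.emb(t).unsqueeze(0), (h, c))
            dist = torch.distributions.Categorical(logits=self.jump(h))
            jump = dist.sample()                      # discrete decision
            log_probs.append(dist.log_prob(jump))
            i += self.read_len + int(jump)            # skip `jump` tokens
        return h, torch.stack(log_probs).sum()        # state, policy term
```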
Misspecified Linear Bandits | cs.LG | We consider the problem of online learning in misspecified linear stochastic
multi-armed bandit problems. Regret guarantees for state-of-the-art linear
bandit algorithms such as Optimism in the Face of Uncertainty Linear bandit
(OFUL) hold under the assumption that the arms' expected rewards are perfectly
linear in their features. It is, however, of interest to investigate the impact
of potential misspecification in linear bandit models, where the expected
rewards are perturbed away from the linear subspace determined by the arms'
features. Although OFUL has recently been shown to be robust to relatively
small deviations from linearity, we show that any linear bandit algorithm that
enjoys optimal regret performance in the perfectly linear setting (e.g., OFUL)
must suffer linear regret under a sparse additive perturbation of the linear
model. In an attempt to overcome this negative result, we define a natural
class of bandit models characterized by a non-sparse deviation from linearity.
We argue that the OFUL algorithm can fail to achieve sublinear regret even
under models that have non-sparse deviation. We finally develop a novel bandit
algorithm, comprising a hypothesis test for linearity followed by a decision to
use either the OFUL or Upper Confidence Bound (UCB) algorithm. For perfectly
linear bandit models, the algorithm provably exhibits OFUL's favorable regret
performance, while for misspecified models satisfying the non-sparse deviation
property, the algorithm avoids the linear regret phenomenon and falls back on
UCB's sublinear regret scaling. Numerical experiments on synthetic data, and on
recommendation data from the public Yahoo! Learning to Rank Challenge dataset,
empirically support our findings.
| Avishek Ghosh, Sayak Ray Chowdhury, Aditya Gopalan | null | 1704.0688 | null | null |
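
The paper's combined algorithm is not reproduced here, but the model-free component it falls back on is the classic UCB1 rule, sketched below; the exploration constant and the `pull` interface are illustrative assumptions.

```python
import numpy as np

def ucb1(pull, n_arms, horizon, c=2.0):
    """Classic UCB1: play each arm once, then the arm with the highest
    mean-plus-exploration-bonus. `pull(a)` returns a stochastic reward."""
    counts = np.zeros(n_arms)
    sums = np.zeros(n_arms)
    for t in range(horizon):
        if t < n_arms:
            a = t                                   # initial round-robin
        else:
            bonus = np.sqrt(c * np.log(t) / counts)
            a = int(np.argmax(sums / counts + bonus))
        counts[a] += 1
        sums[a] += pull(a)
    return sums / counts                            # empirical arm means
```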
A General Theory for Training Learning Machine | stat.ML cs.AI cs.CV cs.LG | Though deep learning is pushing machine learning to a new stage,
basic theories of machine learning are still limited. The principle of
learning, the role of a priori knowledge, the role of neuron bias, and the
basis for choosing the neural transfer function and cost function, etc., are still
far from clear. In this paper, we present a general theoretical framework for
machine learning. We classify the prior knowledge into common and
problem-dependent parts, and consider that the aim of learning is to maximally
incorporate them. The principle we suggested for maximizing the former is the
design risk minimization principle, while the neural transfer function, the
cost function, as well as the pretreatment of samples, are endowed with the role
of maximizing the latter. The role of the neuron bias is explained from a
different angle. We develop a Monte Carlo algorithm to establish the
input-output responses, and we control the input-output sensitivity of a
learning machine by controlling that of individual neurons. Applications to
function approximation and smoothing, and to pattern recognition and
classification, are provided to illustrate how to train general learning
machines based on our theory and algorithm. Our method may in addition enable
new applications, such as transductive inference.
| Hong Zhao | null | 1704.06885 | null | null |
Learning weakly supervised multimodal phoneme embeddings | cs.CL cs.LG | Recent works have explored deep architectures for learning multimodal speech
representation (e.g. audio and images, articulation and audio) in a supervised
way. Here we investigate the role of combining different speech modalities,
i.e. audio and visual information representing the lip movements, in a weakly
supervised way using Siamese networks and lexical same-different side
information. In particular, we ask whether one modality can benefit from the
other to provide a richer representation for phone recognition in a weakly
supervised setting. We introduce mono-task and multi-task methods for merging
speech and visual modalities for phone recognition. The mono-task learning
consists of applying a Siamese network to the concatenation of the two
modalities, while the multi-task learning receives several different
combinations of modalities at train time. We show that multi-task learning
enhances discriminability for visual and multimodal inputs while minimally
impacting auditory inputs. Furthermore, we present a qualitative analysis of
the obtained phone embeddings, and show that cross-modal visual input can
improve the discriminability of phonological features which are visually
discernable (rounding, open/close, labial place of articulation), resulting in
representations that are closer to abstract linguistic features than those
based on audio only.
| Rahma Chaabouni, Ewan Dunbar, Neil Zeghidour, Emmanuel Dupoux | null | 1704.06913 | null | null |
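
As an illustration of the weakly supervised setup, below is a sketch of the standard contrastive (same-different) loss typically used with Siamese networks; the paper's exact objective and margin may differ.

```python
import torch
import torch.nn.functional as F

def siamese_same_different_loss(emb_a, emb_b, same, margin=1.0):
    """Contrastive loss on pairs of embeddings.

    emb_a, emb_b: (batch, dim) outputs of the shared encoder;
    same: (batch,) 1.0 if the pair shares a label, else 0.0."""
    d = F.pairwise_distance(emb_a, emb_b)
    loss_same = same * d.pow(2)                         # pull same pairs in
    loss_diff = (1 - same) * F.relu(margin - d).pow(2)  # push others apart
    return (loss_same + loss_diff).mean()
```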
Adversarial Neural Machine Translation | cs.CL cs.LG stat.ML | In this paper, we study a new learning paradigm for Neural Machine
Translation (NMT). Instead of maximizing the likelihood of the human
translation as in previous works, we minimize the distinction between human
translation and the translation given by an NMT model. To achieve this goal,
inspired by the recent success of generative adversarial networks (GANs), we
employ an adversarial training architecture and name it Adversarial-NMT. In
Adversarial-NMT, the training of the NMT model is assisted by an adversary,
which is an elaborately designed Convolutional Neural Network (CNN). The goal
of the adversary is to differentiate the translation result generated by the
NMT model from that produced by a human. The goal of the NMT model is to
produce high-quality translations so as to fool the adversary. A policy gradient method is
leveraged to co-train the NMT model and the adversary. Experimental results on
English$\rightarrow$French and German$\rightarrow$English translation tasks
show that Adversarial-NMT can achieve significantly better translation quality
than several strong baselines.
| Lijun Wu, Yingce Xia, Li Zhao, Fei Tian, Tao Qin, Jianhuang Lai,
Tie-Yan Liu | null | 1704.06933 | null | null |
Naturalizing a Programming Language via Interactive Learning | cs.CL cs.AI cs.HC cs.LG | Our goal is to create a convenient natural language interface for performing
well-specified but complex actions such as analyzing data, manipulating text,
and querying databases. However, existing natural language interfaces for such
tasks are quite primitive compared to the power one wields with a programming
language. To bridge this gap, we start with a core programming language and
allow users to "naturalize" the core language incrementally by defining
alternative, more natural syntax and increasingly complex concepts in terms of
compositions of simpler ones. In a voxel world, we show that a community of
users can simultaneously teach a common system a diverse language and use it to
build hundreds of complex voxel structures. Over the course of three days,
these users went from using only the core language to using the naturalized
language in 85.9% of the last 10K utterances.
| Sida I. Wang and Samuel Ginn and Percy Liang and Christopher D. Manning | null | 1704.06956 | null | null |
Differentiable Scheduled Sampling for Credit Assignment | cs.CL cs.LG cs.NE | We demonstrate that a continuous relaxation of the argmax operation can be
used to create a differentiable approximation to greedy decoding for
sequence-to-sequence (seq2seq) models. By incorporating this approximation into
the scheduled sampling training procedure (Bengio et al., 2015)--a well-known
technique for correcting exposure bias--we introduce a new training objective
that is continuous and differentiable everywhere and that can provide
informative gradients near points where previous decoding decisions change
their value. In addition, by using a related approximation, we demonstrate a
similar approach to sample-based training. Finally, we show that our approach
outperforms cross-entropy training and scheduled sampling procedures in two
sequence prediction tasks: named entity recognition and machine translation.
| Kartik Goyal, Chris Dyer and Taylor Berg-Kirkpatrick | null | 1704.0697 | null | null |
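
A minimal PyTorch sketch of the continuous relaxation the abstract describes: instead of embedding the argmax token, feed back the expected embedding under a peaked softmax, so gradients flow through the decoding decision. The temperature and exact parametrization are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def soft_argmax_embedding(logits, embedding, temperature=0.1):
    """Differentiable stand-in for embedding(argmax(logits)).

    logits: (batch, vocab); embedding: nn.Embedding with weight
    (vocab, dim). Returns the probability-weighted embedding."""
    probs = F.softmax(logits / temperature, dim=-1)   # peaked distribution
    return probs @ embedding.weight                   # (batch, dim)
```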
Probabilistic Vehicle Trajectory Prediction over Occupancy Grid Map via
Recurrent Neural Network | cs.LG | In this paper, we propose an efficient vehicle trajectory prediction
framework based on recurrent neural network. Basically, the characteristic of
the vehicle's trajectory is different from that of regular moving objects since
it is affected by various latent factors including road structure, traffic
rules, and the driver's intention. Previous state-of-the-art approaches use
sophisticated vehicle behavior model describing these factors and derive the
complex trajectory prediction algorithm, which requires a system designer to
conduct intensive model optimization for practical use. Our approach is
data-driven and simple to use in that it learns complex behavior of the
vehicles from a massive amount of trajectory data through a deep neural
network model. The proposed trajectory prediction method employs the recurrent neural
network called long short-term memory (LSTM) to analyze the temporal behavior
and predict the future coordinate of the surrounding vehicles. The proposed
scheme feeds the sequence of vehicles' coordinates obtained from sensor
measurements to the LSTM and produces probabilistic information on the
future location of the vehicles over an occupancy grid map. The experiments
conducted using data collected from highway driving show that the proposed
method can produce reasonably good estimates of future trajectories.
| ByeoungDo Kim, Chang Mook Kang, Seung Hi Lee, Hyunmin Chae, Jaekyum
Kim, Chung Choo Chung, and Jun Won Choi | null | 1704.07049 | null | null |
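
To make the input/output structure concrete, here is a minimal PyTorch sketch of an LSTM that maps observed coordinates to a log-distribution over occupancy-grid cells; the grid and hidden sizes are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn

class GridTrajectoryLSTM(nn.Module):
    """Maps a sequence of observed (x, y) coordinates to a distribution
    over the cells of an occupancy grid for the next position."""
    def __init__(self, grid_h=16, grid_w=16, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, grid_h * grid_w)

    def forward(self, coords):               # coords: (batch, T, 2)
        _, (h, _) = self.lstm(coords)
        logits = self.head(h[-1])            # (batch, H*W)
        return logits.log_softmax(dim=-1)    # train with NLL on true cell
```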
Using Global Constraints and Reranking to Improve Cognates Detection | cs.CL cs.LG stat.ML | Global constraints and reranking have not been used in cognates detection
research to date. We propose methods for using global constraints by performing
rescoring of the score matrices produced by state of the art cognates detection
systems. Rescoring with global constraints is complementary to
state-of-the-art methods for cognates detection and yields significant
improvements beyond current state-of-the-art performance on publicly
available datasets, across different language pairs and conditions such as
different levels of baseline performance and different data sizes, including
more realistic large-data conditions than have been evaluated in the past.
| Michael Bloodgood and Benjamin Strauss | 10.18653/v1/P17-1181 | 1704.0705 | null | null |
k-FFNN: A priori knowledge infused Feed-forward Neural Networks | cs.LG cs.NE | Recurrent neural networks (RNN) are being used extensively over feed-forward
neural networks (FFNN) because of their inherent capability to capture temporal
relationships that exist in the sequential data such as speech. This aspect of
RNN is advantageous especially when there is no a priori knowledge about the
temporal correlations within the data. However, RNNs require a large amount of
data to learn these temporal correlations, limiting their advantage in low
resource scenarios. It is not immediately clear (a) how a priori temporal
knowledge can be used in a FFNN architecture (b) how a FFNN performs when
provided with this knowledge about temporal correlations (assuming available)
during training. The objective of this paper is to explore k-FFNN, namely a
FFNN architecture that can incorporate the a priori knowledge of the temporal
relationships within the data sequence during training and compare k-FFNN
performance with RNN in a low resource scenario. We evaluate the performance of
k-FFNN and RNN by extensive experimentation on MediaEval 2016 audio data
("Emotional Impact of Movies" task). Experimental results show that the
performance of k-FFNN is comparable to RNN, and in some scenarios k-FFNN
performs better than RNN when temporal knowledge is injected into FFNN
architecture. The main contributions of this paper are (a) fusing a priori
knowledge into FFNN architecture to construct a k-FFNN and (b) analyzing the
performance of k-FFNN with respect to RNN for different sizes of training data.
| Sri Harsha Dumpala, Rupayan Chakraborty, Sunil Kumar Kopparapu | null | 1704.07055 | null | null |
Diffusion geometry unravels the emergence of functional clusters in
collective phenomena | physics.soc-ph cond-mat.dis-nn cs.LG cs.SI | Collective phenomena emerge from the interaction of natural or artificial
units with a complex organization. The interplay between structural patterns
and dynamics might induce functional clusters that, in general, are different
from topological ones. In biological systems, like the human brain, the overall
functionality is often favored by the interplay between connectivity and
synchronization dynamics, with functional clusters that do not coincide with
anatomical modules in most cases. In social, socio-technical and engineering
systems, the quest for consensus favors the emergence of clusters.
Despite the unquestionable evidence for mesoscale organization of many
complex systems and the heterogeneity of their inter-connectivity, a way to
predict and identify the emergence of functional modules in collective
phenomena continues to elude us. Here, we propose an approach based on random
walk dynamics to define the diffusion distance between any pair of units in a
networked system. Such a metric allows one to exploit the underlying diffusion
geometry to provide a unifying framework for the intimate relationship between
metastable synchronization, consensus and random search dynamics in complex
networks, pinpointing the functional mesoscale organization of synthetic and
biological systems.
| Manlio De Domenico | 10.1103/PhysRevLett.118.168301 | 1704.07068 | null | null |
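
A small NumPy sketch of a diffusion distance in the spirit of the abstract: units are compared through the distributions of a t-step random walk. The paper's precise definition may weight or normalize differently; the plain Euclidean norm here is an assumption.

```python
import numpy as np

def diffusion_distance(A, t=3):
    """Pairwise diffusion distance after t random-walk steps.

    A: (n, n) adjacency matrix with no zero-degree nodes. P = D^{-1} A is
    the transition matrix; units i and j are close if walkers started at
    i and j have similar distributions over the network after t steps."""
    P = A / A.sum(axis=1, keepdims=True)
    Pt = np.linalg.matrix_power(P, t)          # rows: t-step distributions
    sq = (Pt ** 2).sum(axis=1)                 # D[i,j] = ||Pt[i] - Pt[j]||
    D2 = sq[:, None] + sq[None, :] - 2.0 * Pt @ Pt.T
    return np.sqrt(np.maximum(D2, 0.0))
```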
Being Negative but Constructively: Lessons Learnt from Creating Better
Visual Question Answering Datasets | cs.CL cs.AI cs.CV cs.LG | Visual question answering (Visual QA) has attracted a lot of attention
lately, seen essentially as a form of (visual) Turing test that artificial
intelligence should strive to achieve. In this paper, we study a crucial
component of this task: how can we design good datasets for the task? We focus
on the design of multiple-choice based datasets where the learner has to select
the right answer from a set of candidates including the target (i.e., the
correct one) and the decoys (i.e., the incorrect ones). Through careful analysis
of the results attained by state-of-the-art learning models and human
annotators on existing datasets, we show that the design of the decoy answers
has a significant impact on how and what the learning models learn from the
datasets. In particular, the resulting learner can ignore the visual
information, the question, or both while still doing well on the task. Inspired
by this, we propose automatic procedures to remedy such design deficiencies. We
apply the procedures to re-construct decoy answers for two popular Visual QA
datasets as well as to create a new Visual QA dataset from the Visual Genome
project, resulting in the largest dataset for this task. Extensive empirical
studies show that the design deficiencies have been alleviated in the remedied
datasets and the performance on them is likely a more faithful indicator of the
difference among learning models. The datasets are released and publicly
available via http://www.teds.usc.edu/website_vqa/.
| Wei-Lun Chao, Hexiang Hu, Fei Sha | null | 1704.07121 | null | null |
An Aposteriorical Clusterability Criterion for $k$-Means++ and
Simplicity of Clustering | cs.LG | We define the notion of a well-clusterable data set combining the point of
view of the objective of $k$-means clustering algorithm (minimising the centric
spread of data elements) and common sense (clusters shall be separated by
gaps). We identify conditions under which the optimum of $k$-means objective
coincides with a clustering under which the data is separated by predefined
gaps.
We investigate two cases: when the whole clusters are separated by some gap
and when only the cores of the clusters meet some separation condition.
We overcome a major obstacle in using clusterability criteria: known
approaches to clusterability checking are tied to the optimal clustering,
which is NP-hard to identify.
Compared to other approaches to clusterability, the novelty consists in the
possibility of an a posteriori (after running $k$-means) check if the data set
is well-clusterable or not. As the $k$-means algorithm applied for this purpose
has polynomial complexity, so does the corresponding check.
Additionally, if $k$-means++ fails to identify a clustering that meets
clusterability criteria, with high probability the data is not
well-clusterable.
| Mieczysław A. Kłopotek | 10.1007/s42979-020-0079-8 | 1704.07139 | null | null |
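
A hedged sketch of what an a posteriori check can look like: run $k$-means++, then test whether the between-cluster gap dominates the within-cluster spread. The specific separation condition and `gap_factor` are illustrative, not the paper's criteria.

```python
import numpy as np
from sklearn.cluster import KMeans

def a_posteriori_gap_check(X, k, gap_factor=2.0, seed=0):
    """Run k-means++, then check whether clusters are gap-separated:
    the smallest between-cluster point distance must exceed `gap_factor`
    times the largest within-cluster distance to the centroid."""
    km = KMeans(n_clusters=k, init="k-means++", n_init=10,
                random_state=seed).fit(X)
    labels, centers = km.labels_, km.cluster_centers_
    spread = max(np.linalg.norm(X[labels == j] - centers[j], axis=1).max()
                 for j in range(k))
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    gap = d[labels[:, None] != labels[None, :]].min()
    return bool(gap > gap_factor * spread)
```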
A Neural Network model with Bidirectional Whitening | stat.ML cs.LG | We present a new model and algorithm which perform efficient natural
gradient descent for multilayer perceptrons. Natural gradient descent was
originally proposed from a point of view of information geometry, and it
performs the steepest descent updates on manifolds in a Riemannian space. In
particular, we extend an approach taken by the "Whitened neural networks"
model. We make the whitening process not only in feed-forward direction as in
the original model, but also in the back-propagation phase. Its efficacy is
shown by an application of this "Bidirectional whitened neural networks" model
to handwritten character recognition data (the MNIST dataset).
| Yuki Fujimoto and Toru Ohira | 10.1007/978-3-319-91253-0_5 | 1704.07147 | null | null |
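
For context, whitened neural networks apply a decorrelating transform of the following kind to layer inputs; the paper's contribution is to apply it in the back-propagation phase as well. A minimal ZCA-whitening sketch in NumPy:

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA-whiten a batch of activations X (batch, features):
    decorrelate the features and scale them to unit variance."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T   # symmetric whitener
    return Xc @ W
```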
Semi-supervised Multitask Learning for Sequence Labeling | cs.CL cs.LG cs.NE | We propose a sequence labeling framework with a secondary training objective,
learning to predict surrounding words for every word in the dataset. This
language modeling objective incentivises the system to learn general-purpose
patterns of semantic and syntactic composition, which are also useful for
improving accuracy on different sequence labeling tasks. The architecture was
evaluated on a range of datasets, covering the tasks of error detection in
learner texts, named entity recognition, chunking and POS-tagging. The novel
language modeling objective provided consistent performance improvements on
every benchmark, without requiring any additional annotated or unannotated
data.
| Marek Rei | null | 1704.07156 | null | null |
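
A compact PyTorch sketch of the idea: a bidirectional tagger whose forward and backward states also predict the next and previous words. Dimensions and wiring are illustrative, and the paper's exact architecture may differ; the total loss would combine the tagging loss with the two auxiliary LM losses.

```python
import torch
import torch.nn as nn

class LMAugmentedTagger(nn.Module):
    """Bi-LSTM tagger with an auxiliary language-modeling objective."""
    def __init__(self, vocab, n_tags, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.fwd = nn.LSTM(dim, dim, batch_first=True)
        self.bwd = nn.LSTM(dim, dim, batch_first=True)
        self.tag = nn.Linear(2 * dim, n_tags)
        self.next_word = nn.Linear(dim, vocab)   # auxiliary LM heads
        self.prev_word = nn.Linear(dim, vocab)

    def forward(self, tokens):                   # tokens: (batch, T)
        e = self.emb(tokens)
        hf, _ = self.fwd(e)
        hb, _ = self.bwd(e.flip(1))
        hb = hb.flip(1)                          # align to positions
        tag_logits = self.tag(torch.cat([hf, hb], dim=-1))
        lm_fwd = self.next_word(hf[:, :-1])      # predicts tokens[:, 1:]
        lm_bwd = self.prev_word(hb[:, 1:])       # predicts tokens[:, :-1]
        return tag_logits, lm_fwd, lm_bwd
```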
Reinforcement Learning Based Dynamic Selection of Auxiliary Objectives
with Preserving of the Best Found Solution | cs.NE cs.LG | Efficiency of single-objective optimization can be improved by introducing
some auxiliary objectives. Ideally, auxiliary objectives should be helpful.
However, in practice, objectives may be efficient on some optimization stages
but obstructive on others. In this paper we propose a modification of the EA+RL
method which dynamically selects optimized objectives using reinforcement
learning. The proposed modification prevents losing the best found
solution. We analysed the proposed modification and compared it with the EA+RL
method and Random Local Search on XdivK, Generalized OneMax and LeadingOnes
problems. The proposed modification outperforms the EA+RL method on all problem
instances. It also outperforms the single-objective approach on most
problem instances. We also provide a detailed analysis of how different
components of the considered algorithms influence efficiency of optimization.
In addition, we present theoretical analysis of the proposed modification on
the XdivK problem.
| Irina Petrova, Arina Buzdalova | null | 1704.07187 | null | null |
Predicting membrane protein contacts from non-membrane proteins by deep
transfer learning | q-bio.BM cs.LG cs.NE q-bio.QM | Computational prediction of membrane protein (MP) structures is very
challenging partially due to lack of sufficient solved structures for homology
modeling. Recently direct evolutionary coupling analysis (DCA) sheds some light
on protein contact prediction and accordingly, contact-assisted folding, but
DCA is effective only on some very large-sized families since it uses
information only in a single protein family. This paper presents a deep
transfer learning method that can significantly improve MP contact prediction
by learning contact patterns and complex sequence-contact relationship from
thousands of non-membrane proteins (non-MPs). Tested on 510 non-redundant MPs,
our deep model (learned from only non-MPs) has top L/10 long-range contact
prediction accuracy 0.69, better than our deep model trained by only MPs (0.63)
and much better than a representative DCA method CCMpred (0.47) and the CASP11
winner MetaPSICOV (0.55). The accuracy of our deep model can be further
improved to 0.72 when trained by a mix of non-MPs and MPs. When only contacts
in transmembrane regions are evaluated, our method has top L/10 long-range
accuracy 0.62, 0.57, and 0.53 when trained by a mix of non-MPs and MPs, by
non-MPs only, and by MPs only, respectively, still much better than MetaPSICOV
(0.45) and CCMpred (0.40). All these results suggest that sequence-structure
relationship learned by our deep model from non-MPs generalizes well to MP
contact prediction. Improved contact prediction also leads to better
contact-assisted folding. Using only top predicted contacts as restraints, our
deep learning method can fold 160 and 200 of 510 MPs with TMscore>0.6 when
trained by non-MPs only and by a mix of non-MPs and MPs, respectively, while
CCMpred and MetaPSICOV can do so for only 56 and 77 MPs, respectively. Our
contact-assisted folding also greatly outperforms homology modeling.
| Zhen Li, Sheng Wang, Yizhou Yu and Jinbo Xu | null | 1704.07207 | null | null |
Learning from Comparisons and Choices | stat.ML cs.LG | When tracking user-specific online activities, each user's preference is
revealed in the form of choices and comparisons. For example, a user's purchase
history is a record of her choices, i.e. which item was chosen among a subset
of offerings. A user's preferences can be observed either explicitly as in
movie ratings or implicitly as in viewing times of news articles. Given such
individualized ordinal data in the form of comparisons and choices, we address
the problem of collaboratively learning representations of the users and the
items. The learned features can be used to predict a user's preference of an
unseen item to be used in recommendation systems. This also allows one to
compute similarities among users and items to be used for categorization and
search. Motivated by the empirical successes of the MultiNomial Logit (MNL)
model in marketing and transportation, and also more recent successes in word
embedding and crowdsourced image embedding, we pose this problem as learning
the MNL model parameters that best explain the data. We propose a convex
relaxation for learning the MNL model, and show that it is minimax optimal up
to a logarithmic factor by comparing its performance to a fundamental lower
bound. This characterizes the minimax sample complexity of the problem, and
proves that the proposed estimator cannot be improved upon other than by a
logarithmic factor. Further, the analysis identifies how the accuracy depends
on the topology of sampling via the spectrum of the sampling graph. This
provides a guideline for designing surveys when one can choose which items are
to be compared. This is accompanied by numerical simulations on synthetic and
real data sets, confirming our theoretical predictions.
| Sahand Negahban and Sewoong Oh and Kiran K. Thekumparampil and Jiaming
Xu | null | 1704.07228 | null | null |
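
To fix notation, the MNL choice likelihood is a softmax over the utilities of the offered subset; below is a minimal NumPy sketch of this log-likelihood over a low-rank utility matrix. The nuclear-norm relaxation and optimization are omitted, and the data layout is an assumption.

```python
import numpy as np

def mnl_log_likelihood(Theta, choices):
    """Log-likelihood of choice data under a MultiNomial Logit model.

    Theta: (n_users, n_items) utility matrix (the low-rank parameter a
    convex relaxation would estimate). choices: iterable of
    (user, offered_item_indices, chosen_item); P(chosen | offered) is a
    softmax of the user's utilities over the offered subset."""
    ll = 0.0
    for u, offered, chosen in choices:
        utils = Theta[u, offered]
        m = utils.max()                               # log-sum-exp trick
        ll += Theta[u, chosen] - (m + np.log(np.exp(utils - m).sum()))
    return ll
```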
Parsing Speech: A Neural Approach to Integrating Lexical and
Acoustic-Prosodic Information | cs.CL cs.LG cs.SD | In conversational speech, the acoustic signal provides cues that help
listeners disambiguate difficult parses. For automatically parsing spoken
utterances, we introduce a model that integrates transcribed text and
acoustic-prosodic features using a convolutional neural network over energy and
pitch trajectories coupled with an attention-based recurrent neural network
that accepts text and prosodic features. We find that different types of
acoustic-prosodic features are individually helpful, and together give
statistically significant improvements in parse and disfluency detection F1
scores over a strong text-only baseline. For this study with known sentence
boundaries, error analyses show that the main benefit of acoustic-prosodic
features is in sentences with disfluencies, attachment decisions are most
improved, and transcription errors obscure gains from prosody.
| Trang Tran, Shubham Toshniwal, Mohit Bansal, Kevin Gimpel, Karen
Livescu, Mari Ostendorf | null | 1704.07287 | null | null |
Structured low-rank matrix learning: algorithms and applications | stat.ML cs.LG | We consider the problem of learning a low-rank matrix, constrained to lie in
a linear subspace, and introduce a novel factorization for modeling such
matrices. A salient feature of the proposed factorization scheme is it
decouples the low-rank and the structural constraints onto separate factors. We
formulate the optimization problem on the Riemannian spectrahedron manifold,
where the Riemannian framework allows us to develop computationally efficient
conjugate gradient and trust-region algorithms. Experiments on problems such as
standard/robust/non-negative matrix completion, Hankel matrix learning and
multi-task learning demonstrate the efficacy of our approach. A shorter version
of this work has been published in ICML'18.
| Pratik Jawanpuria, Bamdev Mishra | null | 1704.07352 | null | null |
Active Bias: Training More Accurate Neural Networks by Emphasizing High
Variance Samples | stat.ML cs.LG | Self-paced learning and hard example mining re-weight training instances to
improve learning accuracy. This paper presents two improved alternatives based
on lightweight estimates of sample uncertainty in stochastic gradient descent
(SGD): the variance in predicted probability of the correct class across
iterations of mini-batch SGD, and the proximity of the correct class
probability to the decision threshold. Extensive experimental results on six
datasets show that our methods reliably improve accuracy in various network
architectures, including additional gains on top of other popular training
techniques, such as residual learning, momentum, ADAM, batch normalization,
dropout, and distillation.
| Haw-Shiuan Chang and Erik Learned-Miller and Andrew McCallum | null | 1704.07433 | null | null |
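
A minimal sketch of the variance-based re-weighting idea: weight each sample by the variance of its predicted correct-class probability across recent SGD snapshots. The smoothing constant and normalization are assumptions, and the paper's actual estimator includes further corrections.

```python
import numpy as np

def variance_weights(prob_history, eps=0.05):
    """Per-sample weights from prediction variance.

    prob_history: (n_samples, n_snapshots) predicted probability of the
    correct class at several recent mini-batch SGD iterations."""
    var = prob_history.var(axis=1)
    w = var + eps                  # eps keeps low-variance samples in play
    return w * len(w) / w.sum()    # normalize to mean weight 1
```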
GaKCo: a Fast GApped k-mer string Kernel using COunting | cs.LG cs.AI cs.CC cs.CL cs.DS | String Kernel (SK) techniques, especially those using gapped $k$-mers as
features (gk), have obtained great success in classifying sequences like DNA,
protein, and text. However, the state-of-the-art gk-SK runs extremely slow when
we increase the dictionary size ($\Sigma$) or allow more mismatches ($M$). This
is because current gk-SK uses a trie-based algorithm to calculate co-occurrence
of mismatched substrings, resulting in a time cost proportional to
$O(\Sigma^{M})$. We propose a fast algorithm for calculating the
Gapped k-mer Kernel using Counting (GaKCo). GaKCo uses associative
arrays to calculate the co-occurrence of
substrings using cumulative counting. This algorithm is fast, scalable to
larger $\Sigma$ and $M$, and naturally parallelizable. We provide a rigorous
asymptotic analysis that compares GaKCo with the state-of-the-art gk-SK.
Theoretically, the time cost of GaKCo is independent of the $\Sigma^{M}$ term
that slows down the trie-based approach. Experimentally, we observe that GaKCo
achieves the same accuracy as the state-of-the-art and outperforms its speed by
factors of 2, 100, and 4, on classifying sequences of DNA (5 datasets), protein
(12 datasets), and character-based English text (2 datasets), respectively.
GaKCo is shared as an open-source tool at
https://github.com/QData/GaKCo-SVM
| Ritambhara Singh, Arshdeep Sekhon, Kamran Kowsari, Jack Lanchantin,
Beilun Wang and Yanjun Qi | null | 1704.07468 | null | null |
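
To make the feature space concrete, here is a small sketch of gapped k-mer counting: each length-g window contributes all of its masked versions with k informative positions. GaKCo's actual contribution is counting these cumulatively with associative arrays and handling mismatches; this sketch only defines the features.

```python
from collections import Counter
from itertools import combinations

def gapped_kmer_counts(seq, g=4, k=2):
    """Count gapped k-mers: every length-g substring with g-k positions
    masked out ('_'), keeping k informative positions."""
    counts = Counter()
    for i in range(len(seq) - g + 1):
        window = seq[i:i + g]
        for keep in combinations(range(g), k):
            feat = "".join(window[j] if j in keep else "_" for j in range(g))
            counts[feat] += 1
    return counts

# Kernel value between two sequences = dot product of their count vectors.
```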
Continuously Differentiable Exponential Linear Units | cs.LG | Exponential Linear Units (ELUs) are a useful rectifier for constructing deep
learning architectures, as they may speed up and otherwise improve learning by
virtue of not having vanishing gradients and by having mean activations near
zero. However, the ELU activation as parametrized in [1] is not continuously
differentiable with respect to its input when the shape parameter alpha is not
equal to 1. We present an alternative parametrization which is C1 continuous
for all values of alpha, making the rectifier easier to reason about and making
alpha easier to tune. This alternative parametrization has several other useful
properties that the original parametrization of ELU does not: 1) its derivative
with respect to x is bounded, 2) it contains both the linear transfer function
and ReLU as special cases, and 3) it is scale-similar with respect to alpha.
| Jonathan T. Barron | null | 1704.07483 | null | null |
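
The C1-continuous parametrization the abstract refers to can be written in a few lines; the sketch below follows the standard CELU form, which rescales the input of the exponential by alpha so the derivative is 1 at x = 0 for every alpha.

```python
import numpy as np

def celu(x, alpha=1.0):
    """C1-continuous ELU variant: for x < 0 the slope exp(x / alpha)
    approaches 1 as x -> 0, matching the linear branch for any alpha."""
    return np.where(x >= 0, x, alpha * (np.exp(x / alpha) - 1.0))
```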
Bootstrapping Graph Convolutional Neural Networks for Autism Spectrum
Disorder Classification | stat.ML cs.LG | Using predictive models to identify patterns that can act as biomarkers for
different neuropathological conditions is becoming highly prevalent. In this
paper, we consider the problem of Autism Spectrum Disorder (ASD) classification
where previous work has shown that it can be beneficial to incorporate a wide
variety of meta features, such as socio-cultural traits, into predictive
modeling. A graph-based approach naturally suits these scenarios, where a
contextual graph captures traits that characterize a population, while the
specific brain activity patterns are utilized as a multivariate signal at the
nodes. Graph neural networks have shown improvements in inference with
graph-structured data. Though the underlying graph strongly dictates the
overall performance, there exists no systematic way of choosing an appropriate
graph in practice, thus making predictive models non-robust. To address this,
we propose a bootstrapped version of graph convolutional neural networks
(G-CNNs) that utilizes an ensemble of weakly trained G-CNNs and reduces the
sensitivity of models to the choice of graph construction. We demonstrate its
effectiveness on the challenging Autism Brain Imaging Data Exchange (ABIDE)
dataset and show that our approach improves upon recently proposed graph-based
neural networks. We also show that our method remains more robust to noisy
graphs.
| Rushil Anirudh, Jayaraman J. Thiagarajan | null | 1704.07487 | null | null |
Leveraging Patient Similarity and Time Series Data in Healthcare
Predictive Models | cs.AI cs.LG | Patient time series classification faces challenges in high degrees of
dimensionality and missingness. In light of patient similarity theory, this
study explores effective temporal feature engineering and reduction, missing
value imputation, and change point detection methods that can afford
similarity-based classification models with desirable accuracy enhancement. We
select a piecewise aggregation approximation method to extract fine-grain
temporal features and propose a minimalist method to impute missing values in
temporal features. For dimensionality reduction, we adopt a gradient descent
search method for feature weight assignment. We propose new patient status and
directional change definitions based on medical knowledge or clinical
guidelines about the value ranges for different patient status levels, and
develop a method to detect change points indicating positive or negative
patient status changes. We evaluate the effectiveness of the proposed methods
in the context of early Intensive Care Unit mortality prediction. The
evaluation results show that the k-Nearest Neighbor algorithm that incorporates
the methods we select and propose significantly outperforms the relevant benchmarks
for early ICU mortality prediction. This study makes contributions to time
series classification and early ICU mortality prediction via identifying and
enhancing temporal feature engineering and reduction methods for
similarity-based time series classification.
| Mohammad Amin Morid, Olivia R. Liu Sheng, Samir Abdelrahman | null | 1704.07498 | null | null |
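
As one concrete piece of the pipeline, below is a minimal NumPy sketch of piecewise aggregate approximation (PAA), the fine-grain temporal feature extraction step the abstract selects; the segment handling is illustrative.

```python
import numpy as np

def paa(series, n_segments):
    """Piecewise Aggregate Approximation: compress a time series into
    n_segments means over (nearly) equal-width windows."""
    series = np.asarray(series, dtype=float)
    idx = np.array_split(np.arange(len(series)), n_segments)
    return np.array([series[i].mean() for i in idx])
```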
PPMF: A Patient-based Predictive Modeling Framework for Early ICU
Mortality Prediction | cs.LG cs.AI | To date, developing a good model for early intensive care unit (ICU)
mortality prediction is still challenging. This paper presents a patient-based
predictive modeling framework (PPMF) to improve the performance of ICU
mortality prediction using data collected during the first 48 hours of ICU
admission. PPMF consists of three main components verifying three related
research hypotheses. The first component captures dynamic changes of patients'
status in the ICU using their time series data (e.g., vital signs and
laboratory tests). The second component is a local approximation algorithm that
classifies patients based on their similarities. The third component is a
Gradient Descent wrapper that updates feature weights according to the
classification feedback. Experiments using data from MIMIC-III show that PPMF
significantly outperforms: (1) the severity score systems, namely SAPS III,
APACHE IV, and MPM0-III, (2) the aggregation-based classifiers that utilize
summarized time series, and (3) baseline feature selection methods.
| Mohammad Amin Morid, Olivia R. Liu Sheng, Samir Abdelrahman | null | 1704.07499 | null | null |
Learning of Human-like Algebraic Reasoning Using Deep Feedforward Neural
Networks | cs.AI cs.LG cs.LO | There is a wide gap between symbolic reasoning and deep learning. In this
research, we explore the possibility of using deep learning to improve symbolic
reasoning. Briefly, in a reasoning system, a deep feedforward neural network is
used to guide rewriting processes after learning from algebraic reasoning
examples produced by humans. To enable the neural network to recognise patterns
of algebraic expressions with non-deterministic sizes, reduced partial trees
are used to represent the expressions. Also, to represent both top-down and
bottom-up information of the expressions, a centralisation technique is used to
improve the reduced partial trees. Besides, symbolic association vectors and
rule application records are used to improve the rewriting processes.
Experimental results reveal that the algebraic reasoning examples can be
accurately learnt only if the feedforward neural network has enough hidden
layers. Also, the centralisation technique, the symbolic association vectors
and the rule application records can reduce error rates of reasoning. In
particular, the above approaches have led to a 4.6% error rate of reasoning on a
dataset of linear equations, differentials and integrals.
| Cheng-Hao Cai, Dengfeng Ke, Yanyan Xu, Kaile Su | 10.1016/j.bica.2018.07.004 | 1704.07503 | null | null |
Dynamic Model Selection for Prediction Under a Budget | stat.ML cs.LG | We present a dynamic model selection approach for resource-constrained
prediction. Given an input instance at test-time, a gating function identifies
a prediction model for the input among a collection of models. Our objective is
to minimize overall average cost without sacrificing accuracy. We learn gating
and prediction models on fully labeled training data by means of a bottom-up
strategy. Our novel bottom-up method is a recursive scheme whereby a
high-accuracy complex model is first trained. Then low-complexity gating and
prediction models are subsequently learnt to adaptively approximate the
high-accuracy model in regions where low-cost models are capable of making
highly accurate predictions. We pose an empirical loss minimization problem
with cost constraints to jointly train gating and prediction models. On a
number of benchmark datasets our method outperforms the state of the art,
achieving higher accuracy for the same cost.
| Feng Nan and Venkatesh Saligrama | null | 1704.07505 | null | null |
Some Like it Hoax: Automated Fake News Detection in Social Networks | cs.LG cs.HC cs.SI | In recent years, the reliability of information on the Internet has emerged
as a crucial issue of modern society. Social network sites (SNSs) have
revolutionized the way in which information is spread by allowing users to
freely share content. As a consequence, SNSs are also increasingly used as
vectors for the diffusion of misinformation and hoaxes. The amount of
disseminated information and the rapidity of its diffusion make it practically
impossible to assess reliability in a timely manner, highlighting the need for
automatic hoax detection systems.
As a contribution towards this objective, we show that Facebook posts can be
classified with high accuracy as hoaxes or non-hoaxes on the basis of the users
who "liked" them. We present two classification techniques, one based on
logistic regression, the other on a novel adaptation of boolean crowdsourcing
algorithms. On a dataset consisting of 15,500 Facebook posts and 909,236 users,
we obtain classification accuracies exceeding 99% even when the training set
contains less than 1% of the posts. We further show that our techniques are
robust: they work even when we restrict our attention to the users who like
both hoax and non-hoax posts. These results suggest that mapping the diffusion
pattern of information can be a useful component of automatic hoax detection
systems.
| Eugenio Tacchini, Gabriele Ballarin, Marco L. Della Vedova, Stefano
Moret, Luca de Alfaro | null | 1704.07506 | null | null |
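
A toy scikit-learn sketch of the logistic-regression variant: each post is represented only by the set of users who liked it. The tiny dataset here is purely illustrative.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MultiLabelBinarizer

posts_likers = [{"u1", "u2"}, {"u2", "u3"}, {"u4"}, {"u1", "u4"}]
labels = [1, 1, 0, 0]          # 1 = hoax, 0 = non-hoax (toy labels)

X = MultiLabelBinarizer().fit_transform(posts_likers)  # binary user matrix
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))          # classify posts from their likers alone
```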
Scalable Planning with Tensorflow for Hybrid Nonlinear Domains | cs.LG | Given recent deep learning results that demonstrate the ability to
effectively optimize high-dimensional non-convex functions with gradient
descent optimization on GPUs, we ask in this paper whether symbolic gradient
optimization tools such as Tensorflow can be effective for planning in hybrid
(mixed discrete and continuous) nonlinear domains with high dimensional state
and action spaces. To this end, we demonstrate that hybrid planning with
Tensorflow and RMSProp gradient descent is competitive with mixed integer
linear program (MILP) based optimization on piecewise linear planning domains
(where we can compute optimal solutions) and substantially outperforms
state-of-the-art interior point methods for nonlinear planning domains.
Furthermore, we remark that Tensorflow is highly scalable, converging to a
strong plan on a large-scale concurrent domain with a total of 576,000
continuous action parameters distributed over a horizon of 96 time steps and
100 parallel instances in only 4 minutes. We provide a number of insights that
clarify such strong performance including observations that despite long
horizons, RMSProp avoids both the vanishing and exploding gradient problems.
Together these results suggest a new frontier for highly scalable planning in
nonlinear hybrid domains by leveraging GPUs and the power of recent advances in
gradient descent with highly optimized toolkits like Tensorflow.
| Ga Wu, Buser Say, Scott Sanner | null | 1704.07511 | null | null |
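
A minimal PyTorch sketch of planning by gradient ascent as the abstract describes: unroll a differentiable dynamics model over the horizon and optimize the action sequence directly with RMSProp. The `dynamics` and `reward` functions and all hyper-parameters are stand-ins supplied by the caller.

```python
import torch

def plan_by_gradient(dynamics, reward, x0, horizon=96, steps=200, lr=0.1):
    """Optimize a horizon-length action sequence through a differentiable
    model by gradient ascent on the cumulative reward."""
    actions = torch.zeros(horizon, requires_grad=True)
    opt = torch.optim.RMSprop([actions], lr=lr)
    for _ in range(steps):
        x, total = x0, 0.0
        for a in actions:                 # unroll the trajectory
            x = dynamics(x, a)
            total = total + reward(x, a)
        opt.zero_grad()
        (-total).backward()               # maximize cumulative reward
        opt.step()
    return actions.detach()

# e.g.: plan_by_gradient(lambda x, a: x + a,
#                        lambda x, a: -(x - 1.0) ** 2, torch.tensor(0.0))
```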
Deep Over-sampling Framework for Classifying Imbalanced Data | cs.LG stat.ML | Class imbalance is a challenging issue in practical classification problems
for deep learning models as well as traditional models. Traditionally
successful countermeasures such as synthetic over-sampling have had limited
success with complex, structured data handled by deep learning models. In this
paper, we propose Deep Over-sampling (DOS), a framework for extending the
synthetic over-sampling method to exploit the deep feature space acquired by a
convolutional neural network (CNN). Its key feature is an explicit, supervised
representation learning, for which the training data presents each raw input
sample with a synthetic embedding target in the deep feature space, which is
sampled from the linear subspace of in-class neighbors. We implement an
iterative process of training the CNN and updating the targets, which induces
smaller in-class variance among the embeddings, to increase the discriminative
power of the deep representation. We present an empirical study using public
benchmarks, which shows that the DOS framework not only counteracts class
imbalance better than the existing method, but also improves the performance of
the CNN in the standard, balanced settings.
| Shin Ando and Chun-Yuan Huang | null | 1704.07515 | null | null |
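
A hedged sketch of the embedding-space sampling step: each synthetic target is a convex combination of an in-class embedding and one of its in-class nearest neighbors. DOS couples this with iterative CNN retraining, which is omitted here; `k` and the sampling scheme are illustrative.

```python
import numpy as np

def deep_oversample(embeddings, n_new, k=5, rng=np.random.default_rng(0)):
    """SMOTE-style over-sampling in a learned deep feature space.

    embeddings: (n, dim) deep features of one minority class, n > k."""
    out = []
    for _ in range(n_new):
        i = rng.integers(len(embeddings))
        d = np.linalg.norm(embeddings - embeddings[i], axis=1)
        nn = np.argsort(d)[1:k + 1]          # skip the point itself
        j = rng.choice(nn)
        lam = rng.random()                   # random interpolation weight
        out.append(embeddings[i] + lam * (embeddings[j] - embeddings[i]))
    return np.array(out)
```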
Abstract Syntax Networks for Code Generation and Semantic Parsing | cs.CL cs.AI cs.LG stat.ML | Tasks like code generation and semantic parsing require mapping unstructured
(or partially structured) inputs to well-formed, executable outputs. We
introduce abstract syntax networks, a modeling framework for these problems.
The outputs are represented as abstract syntax trees (ASTs) and constructed by
a decoder with a dynamically-determined modular structure paralleling the
structure of the output tree. On the benchmark Hearthstone dataset for code
generation, our model obtains 79.2 BLEU and 22.7% exact match accuracy,
compared to previous state-of-the-art values of 67.1 and 6.1%. Furthermore, we
perform competitively on the Atis, Jobs, and Geo semantic parsing datasets with
no task-specific engineering.
| Maxim Rabinovich, Mitchell Stern, Dan Klein | null | 1704.07535 | null | null |
Semi-supervised Bayesian Deep Multi-modal Emotion Recognition | cs.AI cs.LG stat.ML | In emotion recognition, it is difficult to recognize human emotional states
using just a single modality. Besides, the annotation of physiological
emotional data is particularly expensive. These two aspects make the building
of effective emotion recognition model challenging. In this paper, we first
build a multi-view deep generative model to simulate the generative process of
multi-modality emotional data. By imposing a mixture of Gaussians assumption on
the posterior approximation of the latent variables, our model can learn the
shared deep representation from multiple modalities. To solve the
labeled-data-scarcity problem, we further extend our multi-view model to
semi-supervised learning scenario by casting the semi-supervised classification
problem as a specialized missing data imputation task. Our semi-supervised
multi-view deep generative framework can leverage both labeled and unlabeled
data from multiple modalities, where the weight factor for each modality can be
learned automatically. Compared with previous emotion recognition methods, our
method is more robust and flexible. The experiments conducted on two real
multi-modal emotion datasets have demonstrated the superiority of our framework
over a number of competitors.
| Changde Du, Changying Du, Jinpeng Li, Wei-long Zheng, Bao-liang Lu,
Huiguang He | null | 1704.07548 | null | null |
Learning Agents in Black-Scholes Financial Markets: Consensus Dynamics
and Volatility Smiles | q-fin.MF cs.LG cs.MA | Black-Scholes (BS) is the standard mathematical model for option pricing in
financial markets. Option prices are calculated using an analytical formula
whose main inputs are strike (at which price to exercise) and volatility. The
BS framework assumes that volatility remains constant across all strikes,
however, in practice it varies. How do traders come to learn these parameters?
We introduce natural models of learning agents, in which they update their
beliefs about the true implied volatility based on the opinions of other
traders. We prove convergence of these opinion dynamics using techniques from
control theory and leader-follower models, thus providing a resolution between
theory and market practices. We allow for two different models, one with
feedback and one with an unknown leader.
| Tushar Vaidya and Carlos Murguia and Georgios Piliouras | null | 1704.07597 | null | null |
Decision Stream: Cultivating Deep Decision Trees | cs.LG | Various modifications of decision trees have been extensively used during the
past years due to their high efficiency and interpretability. Tree node
splitting based on relevant feature selection is a key step of decision tree
learning, while at the same time being their major shortcoming: recursive node
partitioning leads to a geometric reduction of data quantity in the leaf nodes,
which causes excessive model complexity and data overfitting. In this paper,
we present a novel architecture, the Decision Stream, aimed at overcoming this
problem. Instead of building a tree structure during the learning process, we
propose merging nodes from different branches based on their similarity that is
estimated with two-sample test statistics, which leads to generation of a deep
directed acyclic graph of decision rules that can consist of hundreds of
levels. To evaluate the proposed solution, we test it on several common machine
learning problems - credit scoring, twitter sentiment analysis, aircraft flight
control, MNIST and CIFAR image classification, synthetic data classification
and regression. Our experimental results reveal that the proposed approach
significantly outperforms the standard decision tree learning methods on both
regression and classification tasks, yielding a prediction error decrease up to
35%.
| Dmitry Ignatov and Andrey Ignatov | null | 1704.07657 | null | null |
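
The merge criterion can be sketched in a few lines with a two-sample test; the Kolmogorov-Smirnov test and the significance level below are illustrative stand-ins for the paper's statistics.

```python
from scipy.stats import ks_2samp

def should_merge(targets_a, targets_b, alpha=0.05):
    """Merge two branches if a two-sample test cannot distinguish the
    target distributions of their training samples."""
    stat, p = ks_2samp(targets_a, targets_b)
    return p > alpha   # statistically similar -> merge the nodes
```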
An All-Pair Quantum SVM Approach for Big Data Multiclass Classification | cs.LG quant-ph | In this paper, we discuss a quantum approach to the all-pair
multiclass classification problem. We show that the multiclass support
vector machine for big data classification with a quantum all-pair approach can
be implemented with logarithmic runtime complexity on a quantum computer. In an
all-pair approach, there is one binary classification problem for each pair of
classes, and so there are k (k-1)/2 classifiers for a k-class problem. As
compared to the classical multiclass support vector machine that can be
implemented with polynomial run time complexity, our approach exhibits
exponential speed up in the quantum version. The quantum all-pair algorithm can
be used with other classification algorithms, and a speed up gain can be
achieved as compared to their classical counterparts.
| Arit Kumar Bishwas, Ashish Mani, Vasile Palade | 10.1007/s11128-018-2046-z | 1704.07664 | null | null |
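
The quantum speed-up itself cannot be illustrated classically, but the all-pair (one-vs-one) structure the paper quantizes is easy to sketch with scikit-learn: one binary SVM per pair of classes and a majority vote. Integer class labels 0..k-1 are assumed.

```python
from itertools import combinations
import numpy as np
from sklearn.svm import SVC

def all_pair_svm(X, y):
    """Train k(k-1)/2 binary SVMs, one per pair of classes; predict by
    majority vote over the pairwise classifiers."""
    classes = np.unique(y)
    clfs = {}
    for a, b in combinations(classes, 2):
        m = (y == a) | (y == b)
        clfs[(a, b)] = SVC(kernel="linear").fit(X[m], y[m])

    def predict(Xq):
        votes = np.array([clf.predict(Xq) for clf in clfs.values()])
        return np.array([np.bincount(col.astype(int)).argmax()
                         for col in votes.T])
    return predict
```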
Single-Pass PCA of Large High-Dimensional Data | cs.DS cs.LG math.NA | Principal component analysis (PCA) is a fundamental dimension reduction tool
in statistics and machine learning. For large and high-dimensional data,
computing the PCA (i.e., the singular vectors corresponding to a number of
dominant singular values of the data matrix) becomes a challenging task. In
this work, a single-pass randomized algorithm is proposed to compute PCA with
only one pass over the data. It is suitable for processing extremely large and
high-dimensional data stored in slow memory (hard disk) or the data generated
in a streaming fashion. Experiments with synthetic and real data validate the
algorithm's accuracy, which has orders of magnitude smaller error than an
existing single-pass algorithm. For a set of high-dimensional data stored as a
150 GB file, the proposed algorithm is able to compute the first 50 principal
components in just 24 minutes on a typical 24-core computer, with less than 1
GB memory cost.
| Wenjian Yu, Yu Gu, Jian Li, Shenghua Liu, and Yaohang Li | null | 1704.07669 | null | null |
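
For orientation, "single pass" can be illustrated with the simpler Gram-matrix baseline below, which streams over row blocks once. This is not the paper's randomized algorithm: the baseline costs O(d^2) memory in the dimension d, which the paper's method avoids for truly high-dimensional data.

```python
import numpy as np

def single_pass_pca(row_chunks, d, k):
    """One pass over the data: accumulate the d x d Gram matrix and the
    column means, then eigendecompose the covariance.

    row_chunks: iterable of (chunk_size, d) blocks, e.g. read from disk."""
    gram = np.zeros((d, d))
    mean = np.zeros(d)
    n = 0
    for X in row_chunks:
        gram += X.T @ X
        mean += X.sum(axis=0)
        n += len(X)
    mean /= n
    cov = gram / n - np.outer(mean, mean)     # covariance from one pass
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:k]
    return vals[order], vecs[:, order]        # top-k principal components
```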
Automatic Anomaly Detection in the Cloud Via Statistical Learning | cs.LG | Performance and high availability have become increasingly important drivers,
amongst others, of user retention in the context of web services such
as social networks and web search. Exogenous and/or endogenous factors often
give rise to anomalies, making it very challenging to maintain high
availability, while also delivering high performance. Given that
service-oriented architectures (SOA) typically have a large number of services,
with each service having a large set of metrics, automatic detection of
anomalies is non-trivial.
Although there exists a large body of prior research in anomaly detection,
existing techniques are not applicable in the context of social network data,
owing to the inherent seasonal and trend components in the time series data.
To this end, we developed two novel statistical techniques for automatically
detecting anomalies in cloud infrastructure data. Specifically, the techniques
employ statistical learning to detect anomalies in both application, and system
metrics. Seasonal decomposition is employed to filter the trend and seasonal
components of the time series, followed by the use of robust statistical
metrics -- median and median absolute deviation (MAD) -- to accurately detect
anomalies, even in the presence of seasonal spikes.
We demonstrate the efficacy of the proposed techniques from three different
perspectives, viz., capacity planning, user behavior, and supervised learning.
In particular, we used production data for evaluation, and we report Precision,
Recall, and F-measure in each case.
| Jordan Hochenbaum, Owen S. Vallis, Arun Kejariwal | null | 1704.07706 | null | null |
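
A minimal sketch of the described pipeline with statsmodels: seasonally decompose the series, then flag residuals that exceed a MAD-based robust threshold. The period, threshold, and decomposition method are illustrative, not the authors' settings.

```python
import numpy as np
from statsmodels.tsa.seasonal import seasonal_decompose

def mad_anomalies(series, period, threshold=3.0):
    """Flag indices whose seasonal-decomposition residual lies more than
    `threshold` robust deviations from the median. MAD is scaled by
    1.4826 for consistency with the std. dev. under normality."""
    resid = seasonal_decompose(series, period=period).resid
    med = np.nanmedian(resid)
    mad = 1.4826 * np.nanmedian(np.abs(resid - med))
    return np.where(np.abs(resid - med) > threshold * mad)[0]
```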
Fine-Grained Entity Typing with High-Multiplicity Assignments | cs.CL cs.AI cs.IR cs.LG stat.ML | As entity type systems become richer and more fine-grained, we expect the
number of types assigned to a given entity to increase. However, most
fine-grained typing work has focused on datasets that exhibit a low degree of
type multiplicity. In this paper, we consider the high-multiplicity regime
inherent in data sources such as Wikipedia that have semi-open type systems. We
introduce a set-prediction approach to this problem and show that our model
outperforms unstructured baselines on a new Wikipedia-based fine-grained typing
corpus.
| Maxim Rabinovich, Dan Klein | null | 1704.07751 | null | null |
FWDA: a Fast Wishart Discriminant Analysis with its Application to
Electronic Health Records Data Classification | cs.LG | Linear Discriminant Analysis (LDA) on Electronic Health Records (EHR) data is
widely-used for early detection of diseases. Classical LDA for EHR data
classification, however, suffers from two handicaps: the ill-posed estimation
of LDA parameters (e.g., covariance matrix), and the "linear inseparability" of
EHR data. To handle these two issues, in this paper, we propose a novel
classifier FWDA -- Fast Wishart Discriminant Analysis, that makes predictions
in an ensemble way. Specifically, FWDA first surrogates the distribution of
inverse covariance matrices using a Wishart distribution estimated from the
training data, then "weighted-averages" the classification results of multiple
LDA classifiers parameterized by the sampled inverse covariance matrices via a
Bayesian Voting scheme. The weights for voting are optimally updated to adapt to
each new input, so as to enable nonlinear classification. Theoretical
analysis indicates that FWDA possesses a fast convergence rate and a robust
performance on high dimensional data. Extensive experiments on large-scale EHR
dataset show that our approach outperforms state-of-the-art algorithms by a
large margin.
| Haoyi Xiong, Wei Cheng, Wenqing Hu, Jiang Bian, and Zhishan Guo | null | 1704.0779 | null | null |
A decentralized proximal-gradient method with network independent
step-sizes and separated convergence rates | math.OC cs.DC cs.LG cs.NA math.NA stat.ML | This paper proposes a novel proximal-gradient algorithm for a decentralized
optimization problem with a composite objective containing smooth and
non-smooth terms. Specifically, the smooth and nonsmooth terms are dealt with
by gradient and proximal updates, respectively. The proposed algorithm is
closely related to a previous algorithm, PG-EXTRA \cite{shi2015proximal}, but
has a few advantages. First of all, agents use uncoordinated step-sizes, and
the stable upper bounds on step-sizes are independent of network topologies.
The step-sizes depend on local objective functions, and they can be as large as
those of the gradient descent. Secondly, for the special case without
non-smooth terms, linear convergence can be achieved under the strong convexity
assumption. The dependence of the convergence rate on the objective functions
and the network is separated, and the convergence rate of the new algorithm is
as good as one of the two convergence rates that match the typical rates for
general gradient descent and consensus averaging. We provide numerical
experiments to demonstrate the efficacy of the introduced algorithm and
validate our theoretical discoveries.
| Zhi Li and Wei Shi and Ming Yan | 10.1109/TSP.2019.2926022 | 1704.07807 | null | null |
Introspective Classification with Convolutional Nets | cs.CV cs.LG cs.NE | We propose introspective convolutional networks (ICN) that emphasize the
importance of having convolutional neural networks empowered with generative
capabilities. We employ a reclassification-by-synthesis algorithm to perform
training using a formulation stemming from Bayes' theory. Our ICN iteratively
(1) synthesizes pseudo-negative samples and (2) enhances itself by improving
the classification. The single CNN classifier learned is at the same
time generative --- being able to directly synthesize new samples within its
own discriminative model. We conduct experiments on benchmark datasets
including MNIST, CIFAR-10, and SVHN using state-of-the-art CNN architectures,
and observe improved classification results.
| Long Jin, Justin Lazarow, Zhuowen Tu | null | 1704.07816 | null | null |
Introspective Generative Modeling: Decide Discriminatively | cs.CV cs.LG cs.NE | We study unsupervised learning by developing introspective generative
modeling (IGM) that attains a generator using progressively learned deep
convolutional neural networks. The generator is itself a discriminator, capable
of introspection: being able to self-evaluate the difference between its
generated samples and the given training data. When followed by repeated
discriminative learning, desirable properties of modern discriminative
classifiers are directly inherited by the generator. IGM learns a cascade of
CNN classifiers using a synthesis-by-classification algorithm. In the
experiments, we observe encouraging results on a number of applications
including texture modeling, artistic style transferring, face modeling, and
semi-supervised learning.
| Justin Lazarow, Long Jin, Zhuowen Tu | null | 1704.0782 | null | null |
Generating Liquid Simulations with Deformation-aware Neural Networks | cs.GR cs.LG | We propose a novel approach for deformation-aware neural networks that learn
the weighting and synthesis of dense volumetric deformation fields. Our method
specifically targets the space-time representation of physical surfaces from
liquid simulations. Liquids exhibit highly complex, non-linear behavior under
changing simulation conditions such as different initial conditions. Our
algorithm captures these complex phenomena in two stages: a first neural
network computes a weighting function for a set of pre-computed deformations,
while a second network directly generates a deformation field for refining the
surface. Key to successful training in this setting is a suitable loss
function that encodes the effect of the deformations, and a robust calculation
of the corresponding gradients. To demonstrate the effectiveness of our
approach, we showcase our method with several complex examples of flowing
liquids with topology changes. Our representation makes it possible to rapidly
generate the desired implicit surfaces. We have implemented a mobile
application to demonstrate that real-time interactions with complex liquid
effects are possible with our approach.
| Lukas Prantl, Boris Bonev, Nils Thuerey | null | 1704.07854 | null | null |
Stochastic Optimization from Distributed, Streaming Data in Rate-limited
Networks | stat.ML cs.LG | Motivated by machine learning applications in networks of sensors,
internet-of-things (IoT) devices, and autonomous agents, we propose techniques
for distributed stochastic convex learning from high-rate data streams. The
setup involves a network of nodes---each one of which has a stream of data
arriving at a constant rate---that solve a stochastic convex optimization
problem by collaborating with each other over rate-limited communication links.
To this end, we present and analyze two algorithms---termed distributed
stochastic approximation mirror descent (D-SAMD) and accelerated distributed
stochastic approximation mirror descent (AD-SAMD)---that are based on two
stochastic variants of mirror descent and in which nodes collaborate via
approximate averaging of the local, noisy subgradients using distributed
consensus. Our main contributions are (i) bounds on the convergence rates of
D-SAMD and AD-SAMD in terms of the number of nodes, network topology, and ratio
of the data streaming and communication rates, and (ii) sufficient conditions
for order-optimum convergence of these algorithms. In particular, we show that
for sufficiently well-connected networks, distributed learning schemes can
obtain order-optimum convergence even if the communications rate is small.
Further we find that the use of accelerated methods significantly enlarges the
regime in which order-optimum convergence is achieved; this is in contrast to
the centralized setting, where accelerated methods usually offer only a modest
improvement. Finally, we demonstrate the effectiveness of the proposed
algorithms using numerical experiments.
| Matthew Nokleby and Waheed U. Bajwa | 10.1109/TSIPN.2018.2866320 | 1704.07888 | null | null |
Explaining How a Deep Neural Network Trained with End-to-End Learning
Steers a Car | cs.CV cs.LG cs.NE cs.RO | As part of a complete software stack for autonomous driving, NVIDIA has
created a neural-network-based system, known as PilotNet, which outputs
steering angles given images of the road ahead. PilotNet is trained using road
images paired with the steering angles generated by a human driving a
data-collection car. It derives the necessary domain knowledge by observing
human drivers. This eliminates the need for human engineers to anticipate what
is important in an image and foresee all the necessary rules for safe driving.
Road tests demonstrated that PilotNet can successfully perform lane keeping in
a wide variety of driving conditions, regardless of whether lane markings are
present or not.
The goal of the work described here is to explain what PilotNet learns and
how it makes its decisions. To this end we developed a method for determining
which elements in the road image most influence PilotNet's steering decision.
Results show that PilotNet indeed learns to recognize relevant objects on the
road.
In addition to learning the obvious features such as lane markings, edges of
roads, and other cars, PilotNet learns more subtle features that would be hard
to anticipate and program by engineers, for example, bushes lining the edge of
the road and atypical vehicle classes.
| Mariusz Bojarski, Philip Yeres, Anna Choromanska, Krzysztof
Choromanski, Bernhard Firner, Lawrence Jackel, Urs Muller | null | 1704.07911 | null | null |
From Language to Programs: Bridging Reinforcement Learning and Maximum
Marginal Likelihood | cs.AI cs.LG stat.ML | Our goal is to learn a semantic parser that maps natural language utterances
into executable programs when only indirect supervision is available: examples
are labeled with the correct execution result, but not the program itself.
Consequently, we must search the space of programs for those that output the
correct result, while not being misled by spurious programs: incorrect programs
that coincidentally output the correct result. We connect two common learning
paradigms, reinforcement learning (RL) and maximum marginal likelihood (MML),
and then present a new learning algorithm that combines the strengths of both.
The new algorithm guards against spurious programs by combining the systematic
search traditionally employed in MML with the randomized exploration of RL, and
by updating parameters such that probability is spread more evenly across
consistent programs. We apply our learning algorithm to a new neural semantic
parser and show significant gains over existing state-of-the-art results on a
recent context-dependent semantic parsing task.
| Kelvin Guu, Panupong Pasupat, Evan Zheran Liu, Percy Liang | null | 1704.07926 | null | null |
An ensemble-based online learning algorithm for streaming data | cs.LG | In this study, we introduce an ensemble-based approach for online machine
learning. The ensemble of base classifiers in our approach is obtained by
learning Naive Bayes classifiers on different training sets which are generated
by projecting the original training set onto lower-dimensional spaces. We
propose a mechanism to learn sequences of data using the data-chunk paradigm. The
experiments conducted on a number of UCI datasets and one synthetic dataset
demonstrate that the proposed approach performs significantly better than some
well-known online learning algorithms.
| Tien Thanh Nguyen, Thi Thu Thuy Nguyen, Xuan Cuong Pham, Alan
Wee-Chung Liew, James C. Bezdek | null | 1704.07938 | null | null |
Reward Maximization Under Uncertainty: Leveraging Side-Observations on
Networks | cs.LG stat.ML | We study the stochastic multi-armed bandit (MAB) problem in the presence of
side-observations across actions that occur as a result of an underlying
network structure. In our model, a bipartite graph captures the relationship
between actions and a common set of unknowns such that choosing an action
reveals observations for the unknowns that it is connected to. This models a
common scenario in online social networks where users respond to their friends'
activity, thus providing side information about each other's preferences. Our
contributions are as follows: 1) We derive an asymptotic lower bound (with
respect to time) as a function of the bipartite network structure on the
regret of any uniformly good policy that achieves the maximum long-term average
reward. 2) We propose two policies - a randomized policy; and a policy based on
the well-known upper confidence bound (UCB) policies - both of which explore
each action at a rate that is a function of its network position. We show,
under mild assumptions, that these policies achieve the asymptotic lower bound
on the regret up to a multiplicative factor, independent of the network
structure. Finally, we use numerical examples on a real-world social network
and a routing example network to demonstrate the benefits obtained by our
policies over other existing policies.
| Swapna Buccapatnam, Fang Liu, Atilla Eryilmaz, Ness B. Shroff | null | 1704.07943 | null | null |
Linear Convergence of Accelerated Stochastic Gradient Descent for
Nonconvex Nonsmooth Optimization | math.OC cs.LG stat.ML | In this paper, we study the stochastic gradient descent (SGD) method for the
nonconvex nonsmooth optimization, and propose an accelerated SGD method by
combining the variance reduction technique with Nesterov's extrapolation
technique. Moreover, based on the local error bound condition, we establish the
linear convergence of our method to obtain a stationary point of the nonconvex
optimization. In particular, we prove not only that the generated sequence
converges linearly to a stationary point of the problem, but also that the
corresponding sequence of objective values converges linearly. Finally,
numerical experiments demonstrate the effectiveness of our method. To the
best of our knowledge, this is the first proof that an accelerated SGD method
converges linearly to a local minimum of a nonconvex optimization problem.
| Feihu Huang and Songcan Chen | null | 1704.07953 | null | null |
A Flexible Framework for Hypothesis Testing in High-dimensions | math.ST cs.LG stat.AP stat.ME stat.ML stat.TH | Hypothesis testing in the linear regression model is a fundamental
statistical problem. We consider linear regression in the high-dimensional
regime where the number of parameters exceeds the number of samples ($p> n$).
In order to make informative inference, we assume that the model is
approximately sparse, that is the effect of covariates on the response can be
well approximated by conditioning on a relatively small number of covariates
whose identities are unknown. We develop a framework for testing very general
hypotheses regarding the model parameters. Our framework encompasses testing
whether the parameter lies in a convex cone, testing the signal strength, and
testing arbitrary functionals of the parameter. We show that the proposed
procedure controls the type I error, and also analyze the power of the
procedure. Our numerical experiments confirm our theoretical findings and
demonstrate that we control false positive rate (type I error) near the nominal
level, and have high power. By the duality between hypothesis testing and
confidence intervals, the proposed framework can be used to obtain valid
confidence intervals for various functionals of the model parameters. For
linear functionals, the length of confidence intervals is shown to be minimax
rate optimal.
| Adel Javanmard and Jason D. Lee | null | 1704.07971 | null | null |
On Improving Deep Reinforcement Learning for POMDPs | cs.LG | Deep Reinforcement Learning (RL) recently emerged as one of the most
competitive approaches for learning in sequential decision making problems with
fully observable environments, e.g., computer Go. However, very little work has
been done in deep RL to handle partially observable environments. We propose a
new architecture called Action-specific Deep Recurrent Q-Network (ADRQN) to
enhance learning performance in partially observable domains. Actions are
encoded by a fully connected layer and coupled with a convolutional encoding
of the observation to form an action-observation pair. The time series of
action-observation pairs
are then integrated by an LSTM layer that learns latent states based on which a
fully connected layer computes Q-values as in conventional Deep Q-Networks
(DQNs). We demonstrate the effectiveness of our new architecture in several
partially observable domains, including flickering Atari games.
| Pengfei Zhu, Xin Li, Pascal Poupart, Guanghui Miao | null | 1704.07978 | null | null |
Training L1-Regularized Models with Orthant-Wise Passive Descent
Algorithms | cs.LG stat.ML | The $L_1$-regularized models are widely used for sparse regression or
classification tasks. In this paper, we propose the orthant-wise passive
descent algorithm (OPDA) for optimizing $L_1$-regularized models, as an
improved substitute of proximal algorithms, which are the standard tools for
optimizing the models nowadays. OPDA uses a stochastic variance-reduced
gradient (SVRG) to initialize the descent direction, then applies a novel
alignment operator that encourages each element to keep its sign after one
update iteration, so that the parameter remains in the same orthant as before. It
also explicitly suppresses the magnitude of each element to impose sparsity.
The quasi-Newton update can be utilized to incorporate curvature information
and accelerate the speed. We prove a linear convergence rate for OPDA on
general smooth and strongly-convex loss functions. By conducting experiments on
$L_1$-regularized logistic regression and convolutional neural networks, we
show that OPDA outperforms state-of-the-art stochastic proximal algorithms,
implying a wide range of applications in training sparse models.
| Jianqiao Wangni | null | 1704.07987 | null | null |
Deep Text Classification Can be Fooled | cs.CR cs.LG | In this paper, we present an effective method to craft text adversarial
samples, revealing an important yet underestimated fact: DNN-based text
classifiers are also prone to adversarial sample attacks. Specifically,
confronted with different adversarial scenarios, the text items that are
important for classification are identified by computing the cost gradients of
the input (white-box attack) or generating a series of occluded test samples
(black-box attack). Based on these items, we design three perturbation
strategies, namely insertion, modification, and removal, to generate
adversarial samples. The experiment results show that the adversarial samples
generated by our method can successfully fool both state-of-the-art
character-level and word-level DNN-based text classifiers. The adversarial
samples can be perturbed toward any desired class without compromising their
utility. At the same time, the introduced perturbation is difficult to
perceive.
| Bin Liang and Hongcheng Li and Miaoqiang Su and Pan Bian and Xirong Li
and Wenchang Shi | 10.24963/ijcai.2018/585 | 1704.08006 | null | null |
The loss surface of deep and wide neural networks | cs.LG cs.AI cs.CV cs.NE stat.ML | While the optimization problem behind deep neural networks is highly
non-convex, it is frequently observed in practice that training deep networks
seems possible without getting stuck in suboptimal points. It has been argued
that this is the case as all local minima are close to being globally optimal.
We show that this is (almost) true, in fact almost all local minima are
globally optimal, for a fully connected network with squared loss and analytic
activation function given that the number of hidden units of one layer of the
network is larger than the number of training points and the network structure
from this layer on is pyramidal.
| Quynh Nguyen and Matthias Hein | null | 1704.08045 | null | null |
Exploiting random projections and sparsity with random forests and
gradient boosting methods -- Application to multi-label and multi-output
learning, random forest model compression and leveraging input sparsity | stat.ML cs.LG | Within machine learning, the supervised learning field aims at modeling the
input-output relationship of a system, from past observations of its behavior.
Decision trees characterize the input-output relationship through a series of
nested if-then-else questions (the testing nodes), leading to a set of
predictions (the leaf nodes). Several such trees are often combined together
for state-of-the-art performance: random forest ensembles average the
predictions of randomized decision trees trained independently in parallel,
while tree boosting ensembles train decision trees sequentially to refine the
predictions made by the previous ones. The emergence of new applications
requires scalable supervised learning algorithms in terms of computational
power and memory space with respect to the number of inputs, outputs, and
observations without sacrificing accuracy. In this thesis, we identify three
main areas where decision tree methods could be improved for which we provide
and evaluate original algorithmic solutions: (i) learning over high dimensional
output spaces, (ii) learning with large sample datasets and stringent memory
constraints at prediction time and (iii) learning over high dimensional sparse
input spaces.
| Arnaud Joly | null | 1704.08067 | null | null |
Understanding the Feedforward Artificial Neural Network Model From the
Perspective of Network Flow | cs.LG | In recent years, deep learning based on artificial neural network (ANN) has
achieved great success in pattern recognition. However, there is no clear
understanding of such neural computational models. In this paper, we try to
unravel the "black-box" structure of the ANN model from a network-flow
perspective. Specifically, we consider the feedforward ANN as a network flow
model, which consists of many directional class-pathways. Each class-pathway
encodes one class. The class-pathway of a class is obtained by connecting the
activated neural nodes in each layer from input to output, where the
activation value of a neural node (node-value) is defined by the weights of
each layer in a trained ANN classifier. From the perspective of the
class-pathway, training an ANN classifier can be regarded as the formation
process of the class-pathways of different classes. By analyzing the distances
between each pair of class-pathways in a trained ANN classifier, we try to
answer the question: why does the classifier perform as it does? Finally, from
the neural-encoding view, we define the importance of each neural node through
the class-pathways, which is helpful for optimizing the structure of a
classifier. Experiments on two types of ANN models, an MLP and a CNN, verify
that the network flow based on class-pathways is a reasonable explanation for
ANN models.
| Dawei Dai and Weimin Tan and Hong Zhan | null | 1704.08068 | null | null |
A Recurrent Neural Model with Attention for the Recognition of Chinese
Implicit Discourse Relations | cs.CL cs.AI cs.LG cs.NE | We introduce an attention-based Bi-LSTM for Chinese implicit discourse
relations and demonstrate that modeling argument pairs as a joint sequence can
outperform word order-agnostic approaches. Our model benefits from a partial
sampling scheme and is conceptually simple, yet achieves state-of-the-art
performance on the Chinese Discourse Treebank. We also visualize its attention
activity to illustrate the model's ability to selectively focus on the relevant
parts of an input sequence.
| Samuel R\"onnqvist, Niko Schenk, Christian Chiarcos | 10.18653/v1/P17-2040 | 1704.08092 | null | null |
Multimodal MRI brain tumor segmentation using random forests with
features learned from fully convolutional neural network | cs.CV cs.LG | In this paper, we propose a novel learning-based method for automated
segmentation of brain tumors in multimodal MRI images. The machine-learned
features from a fully convolutional neural network (FCN) and hand-designed
texton features are used to classify the MRI image voxels. The score map with
pixel-wise predictions is used as a feature map which is learned from the
multimodal MRI training dataset using the FCN. The learned features are then
applied to random forests to classify each MRI image voxel into normal brain
tissues and different parts of tumor. The method was evaluated on the BRATS
2013 challenge dataset. The results show that applying the random forest
classifier to multimodal MRI images using machine-learned features based on
the FCN and hand-designed features based on textons provides promising
segmentations. The Dice overlap measure for automatic brain tumor segmentation
against ground truth is 0.88, 0.80, and 0.73 for complete tumor, core, and
enhancing tumor, respectively.
| Mohammadreza Soltaninejad, Lei Zhang, Tryphon Lambrou, Nigel Allinson,
Xujiong Ye | null | 1704.08134 | null | null |
A Generalization of Convolutional Neural Networks to Graph-Structured
Data | stat.ML cs.AI cs.CV cs.LG | This paper introduces a generalization of Convolutional Neural Networks
(CNNs) from low-dimensional grid data, such as images, to graph-structured
data. We propose a novel spatial convolution utilizing a random walk to uncover
the relations within the input, analogous to the way the standard convolution
uses the spatial neighborhood of a pixel on the grid. The convolution has an
intuitive interpretation, is efficient and scalable and can also be used on
data with varying graph structure. Furthermore, this generalization can be
applied to many standard regression or classification problems, by learning
the underlying graph. We empirically demonstrate the performance of the
proposed CNN on MNIST, and challenge the state-of-the-art on Merck molecular
activity data set.
| Yotam Hechtlinger, Purvasha Chakravarti and Jining Qin | null | 1704.08165 | null | null |
Accelerating Stochastic Gradient Descent For Least Squares Regression | stat.ML cs.LG math.OC math.ST stat.TH | There is widespread sentiment that it is not possible to effectively utilize
fast gradient methods (e.g. Nesterov's acceleration, conjugate gradient, heavy
ball) for the purposes of stochastic optimization due to their instability and
error accumulation, a notion made precise in d'Aspremont 2008 and Devolder,
Glineur, and Nesterov 2014. This work considers these issues for the special
case of stochastic approximation for the least squares regression problem, and
our main result refutes the conventional wisdom by showing that acceleration
can be made robust to statistical errors. In particular, this work introduces
an accelerated stochastic gradient method that provably achieves the minimax
optimal statistical risk faster than stochastic gradient descent. Critical to
the analysis is a sharp characterization of accelerated stochastic gradient
descent as a stochastic process. We hope this characterization gives insights
towards the broader question of designing simple and effective accelerated
stochastic methods for more general convex and non-convex optimization
problems.
| Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli and
Aaron Sidford | null | 1704.08227 | null | null |
C-VQA: A Compositional Split of the Visual Question Answering (VQA) v1.0
Dataset | cs.CV cs.AI cs.CL cs.LG | Visual Question Answering (VQA) has received a lot of attention over the past
couple of years. A number of deep learning models have been proposed for this
task. However, it has been shown that these models are heavily driven by
superficial correlations in the training data and lack compositionality -- the
ability to answer questions about unseen compositions of seen concepts. This
compositionality is desirable and central to intelligence. In this paper, we
propose a new setting for Visual Question Answering where the test
question-answer pairs are compositionally novel compared to training
question-answer pairs. To facilitate developing models under this setting, we
present a new compositional split of the VQA v1.0 dataset, which we call
Compositional VQA (C-VQA). We analyze the distribution of questions and answers
in the C-VQA splits. Finally, we evaluate several existing VQA models under
this new setting and show that the performances of these models degrade by a
significant amount compared to the original VQA setting.
| Aishwarya Agrawal, Aniruddha Kembhavi, Dhruv Batra, Devi Parikh | null | 1704.08243 | null | null |
Relative Error Tensor Low Rank Approximation | cs.DS cs.CC cs.LG | We consider relative error low rank approximation of $tensors$ with respect
to the Frobenius norm: given an order-$q$ tensor $A \in
\mathbb{R}^{\prod_{i=1}^q n_i}$, output a rank-$k$ tensor $B$ for which
$\|A-B\|_F^2 \leq (1+\epsilon)$OPT, where OPT $= \inf_{\textrm{rank-}k~A'}
\|A-A'\|_F^2$. Despite the success on obtaining relative error low rank
approximations for matrices, no such results were known for tensors. One
structural issue is that there may be no rank-$k$ tensor $A_k$ achieving the
above infimum. Another, computational, issue is that an efficient relative
error low rank approximation algorithm for tensors would allow one to compute
the rank of a tensor, which is NP-hard. We bypass these issues via (1)
bicriteria and (2) parameterized complexity solutions:
(1) We give an algorithm which outputs a rank $k' = O((k/\epsilon)^{q-1})$
tensor $B$ for which $\|A-B\|_F^2 \leq (1+\epsilon)$OPT in $nnz(A) + n \cdot
\textrm{poly}(k/\epsilon)$ time in the real RAM model. Here $nnz(A)$ is the
number of non-zero entries in $A$.
(2) We give an algorithm for any $\delta >0$ which outputs a rank $k$ tensor
$B$ for which $\|A-B\|_F^2 \leq (1+\epsilon)$OPT and runs in $ ( nnz(A) + n
\cdot \textrm{poly}(k/\epsilon) + \exp(k^2/\epsilon) ) \cdot n^\delta$ time in
the unit cost RAM model.
For outputting a rank-$k$ tensor, or even a bicriteria solution with
rank-$Ck$ for a certain constant $C > 1$, we show a $2^{\Omega(k^{1-o(1)})}$
time lower bound under the Exponential Time Hypothesis.
Our results give the first relative error low rank approximations for tensors
for a large number of robust error measures for which nothing was known, as
well as column, row, and tube subset selection. We also obtain new results for
matrices, such as $nnz(A)$-time CUR decompositions, improving previous
$nnz(A)\log n$-time algorithms, which may be of independent interest.
| Zhao Song, David P. Woodruff, Peilin Zhong | null | 1704.08246 | null | null |
Pruning variable selection ensembles | stat.ML cs.LG | In the context of variable selection, ensemble learning has gained increasing
interest due to its great potential to improve selection accuracy and to reduce
false discovery rate. A novel ordering-based selective ensemble learning
strategy is designed in this paper to obtain smaller but more accurate
ensembles. In particular, a greedy sorting strategy is proposed to rearrange
the order in which the members are included in the integration process.
Through stopping the fusion process early, a smaller subensemble with higher
selection accuracy can be obtained. More importantly, the sequential inclusion
criterion reveals the fundamental strength-diversity trade-off among ensemble
members. By taking stability selection (abbreviated as StabSel) as an example,
some experiments are conducted with both simulated and real-world data to
examine the performance of the novel algorithm. Experimental results
demonstrate that pruned StabSel generally achieves higher selection accuracy
and lower false discovery rates than StabSel and several other benchmark
methods.
| Chunxia Zhang, Yilei Wu and Mu Zhu | null | 1704.08265 | null | null |
Spectral Ergodicity in Deep Learning Architectures via Surrogate Random
Matrices | stat.ML cond-mat.stat-mech cs.LG | In this work a novel method to quantify spectral ergodicity for random
matrices is presented. The new methodology combines approaches rooted in the
metrics of Thirumalai-Mountain (TM) and Kullback-Leibler (KL) divergence. The
method is applied to a general study of deep and recurrent neural networks via
the analysis of random matrix ensembles mimicking typical weight matrices of
those systems. In particular, we examine circular random matrix ensembles:
circular unitary ensemble (CUE), circular orthogonal ensemble (COE), and
circular symplectic ensemble (CSE). Eigenvalue spectra and spectral ergodicity
are computed for those ensembles as a function of network size. It is observed
that as the matrix size increases, the level of spectral ergodicity of the
ensemble rises, i.e., the eigenvalue spectra obtained for a single realisation
drawn at random from the ensemble are closer to the spectra obtained by
averaging over the whole ensemble. Based on previous results, we conjecture
that the success of
deep learning architectures is strongly bound to the concept of spectral
ergodicity. The method to compute spectral ergodicity proposed in this work
could be used to optimise the size and architecture of deep as well as
recurrent neural networks.
| Mehmet S\"uzen, Cornelius Weber and Joan J. Cerd\`a | 10.5281/zenodo.822411 and 10.5281/zenodo.579642 | 1704.08303 | null | null |
Limits of End-to-End Learning | cs.LG stat.ML | End-to-end learning refers to training a possibly complex learning system by
applying gradient-based learning to the system as a whole. An end-to-end
learning system is specifically designed so that all modules are
differentiable. In
effect, not only a central learning machine, but also all "peripheral" modules
like representation learning and memory formation are covered by a holistic
learning process. The power of end-to-end learning has been demonstrated on
many tasks, like playing a whole array of Atari video games with a single
architecture. While pushing for solutions to more challenging tasks, network
architectures keep growing more and more complex.
In this paper we ask the question whether and to what extent end-to-end
learning is a future-proof technique in the sense of scaling to complex and
diverse data processing architectures. We point out potential inefficiencies,
and we argue in particular that end-to-end learning does not make optimal use
of the modular design of present neural networks. Our surprisingly simple
experiments demonstrate these inefficiencies, up to the complete breakdown of
learning.
| Tobias Glasmachers | null | 1704.08305 | null | null |
Identifying Similarities in Epileptic Patients for Drug Resistance
Prediction | cs.LG stat.ML | Currently, approximately 30% of epileptic patients treated with antiepileptic
drugs (AEDs) remain resistant to treatment (known as refractory patients). This
project seeks to understand the underlying similarities in refractory patients
vs. other epileptic patients, identify features contributing to drug resistance
across underlying phenotypes for refractory patients, and develop predictive
models for drug resistance in epileptic patients. In this study, epileptic
patient data was examined to attempt to observe discernible similarities or
differences between refractory patients (cases) and other non-refractory
patients (controls) to map underlying mechanisms of causality. For the first
part of the study, unsupervised algorithms such as K-means, Spectral
Clustering, and Gaussian Mixture Models were used to examine patient features
projected into a
lower dimensional space. Results from this study showed a high degree of
non-linearity in the underlying feature space. For the second part of this
study, classification algorithms such as Logistic Regression, Gradient Boosted
Decision Trees, and SVMs, were tested on the reduced-dimensionality features,
with accuracy results of 0.83 (+/-0.3) using 7-fold cross-validation.
Observations of the test results indicate that using a radial basis function
kernel PCA to reduce the features ingested by a Gradient Boosted Decision Tree
ensemble leads to gains in accuracy when mapping a binary decision to highly
non-linear features collected from epileptic patients.
| David Von Dollen | null | 1704.08361 | null | null |
A New Type of Neurons for Machine Learning | cs.NE cs.LG | In machine learning, the use of an artificial neural network is the
mainstream approach. Such a network consists of layers of neurons. These
neurons are of the same type characterized by the two features: (1) an inner
product of an input vector and a matching weighting vector of trainable
parameters and (2) a nonlinear excitation function. Here we investigate the
possibility of replacing the inner product with a quadratic function of the
input vector, thereby upgrading the 1st order neuron to the 2nd order neuron,
empowering individual neurons, and facilitating the optimization of neural
networks. Also, numerical examples are provided to illustrate the feasibility
and merits of the 2nd order neurons. Finally, further topics are discussed.
| Fenglei Fan, Wenxiang Cong, Ge Wang | null | 1704.08362 | null | null |
Large-scale Feature Selection of Risk Genetic Factors for Alzheimer's
Disease via Distributed Group Lasso Regression | cs.LG stat.ML | Genome-wide association studies (GWAS) have achieved great success in the
genetic study of Alzheimer's disease (AD). Collaborative imaging genetics
studies across different research institutions show the effectiveness of
detecting genetic risk factors. However, the high dimensionality of GWAS data
poses significant challenges in detecting risk SNPs for AD. Selecting relevant
features is crucial in predicting the response variable. In this study, we
propose a novel Distributed Feature Selection Framework (DFSF) to conduct the
large-scale imaging genetics studies across multiple institutions. To speed up
the learning process, we propose a family of distributed group Lasso screening
rules to identify irrelevant features and remove them from the optimization.
Then we select the relevant group features by performing the group Lasso
feature selection process in a sequence of parameters. Finally, we employ the
stability selection to rank the top risk SNPs that might help detect the early
stage of AD. To the best of our knowledge, this is the first distributed
feature selection model integrated with group Lasso feature selection that
detects risk genetic factors across a system of multiple research
institutions. Empirical studies are conducted on 809 subjects with 5.9 million SNPs
which are distributed across several individual institutions, demonstrating the
efficiency and effectiveness of the proposed method.
| Qingyang Li, Dajiang Zhu, Jie Zhang, Derrek Paul Hibar, Neda
Jahanshad, Yalin Wang, Jieping Ye, Paul M. Thompson, Jie Wang | null | 1704.08383 | null | null |
Multimodal Word Distributions | stat.ML cs.AI cs.CL cs.LG | Word embeddings provide point representations of words containing useful
semantic information. We introduce multimodal word distributions formed from
Gaussian mixtures, for multiple word meanings, entailment, and rich uncertainty
information. To learn these distributions, we propose an energy-based
max-margin objective. We show that the resulting approach captures uniquely
expressive semantic information, and outperforms alternatives, such as word2vec
skip-grams, and Gaussian embeddings, on benchmark datasets such as word
similarity and entailment.
| Ben Athiwaratkun, Andrew Gordon Wilson | null | 1704.08424 | null | null |
DeepCCI: End-to-end Deep Learning for Chemical-Chemical Interaction
Prediction | cs.LG | Chemical-chemical interaction (CCI) plays a key role in predicting candidate
drugs, toxicity, therapeutic effects, and biological functions. In various
types of chemical analyses, computational approaches are often required due to
the amount of data that needs to be handled. The recent remarkable growth and
outstanding performance of deep learning have attracted considerable research
attention. However, even in state-of-the-art drug analysis methods, deep
learning continues to be used only as a classifier, although deep learning is
capable of not only simple classification but also automated feature
extraction. In this paper, we propose the first end-to-end learning method for
CCI, named DeepCCI. Hidden features are derived from a simplified molecular
input line entry system (SMILES), which is a string notation representing the
chemical structure, instead of learning from crafted features. To discover
hidden representations for the SMILES strings, we use convolutional neural
networks (CNNs). To guarantee the commutative property for homogeneous
interaction, we apply model sharing and hidden representation merging
techniques. The performance of DeepCCI was compared with a plain deep
classifier and conventional machine learning methods. The proposed DeepCCI
showed the best performance in all seven evaluation metrics used. In addition,
the commutative property was experimentally validated. The automatically
extracted features through end-to-end SMILES learning alleviate the
significant effort required for manual feature engineering and are expected to
improve prediction performance in drug analyses.
| Sunyoung Kwon, Sungroh Yoon | null | 1704.08432 | null | null |
DNA Steganalysis Using Deep Recurrent Neural Networks | cs.LG cs.MM | Recent advances in next-generation sequencing technologies have facilitated
the use of deoxyribonucleic acid (DNA) as a novel covert channel in
steganography. There are various methods that exist in other domains to detect
hidden messages in conventional covert channels. However, they have not been
applied to DNA steganography. The current most common detection approaches,
namely frequency analysis-based methods, often overlook important signals when
directly applied to DNA steganography because those methods depend on the
distribution of the number of sequence characters. To address this limitation,
we propose a general sequence learning-based DNA steganalysis framework. The
proposed approach learns the intrinsic distribution of coding and non-coding
sequences and detects hidden messages by exploiting distribution variations
after hiding these messages. Using deep recurrent neural networks (RNNs), our
framework identifies the distribution variations by using the classification
score to predict whether a sequence is a coding or non-coding sequence.
We compare our proposed method to various existing methods and biological
sequence analysis methods implemented on top of our framework. According to our
experimental results, our approach delivers a robust detection performance
compared to other tools.
| Ho Bae, Byunghan Lee, Sunyoung Kwon, Sungroh Yoon | null | 1704.08443 | null | null |
Optimal client recommendation for market makers in illiquid financial
products | q-fin.CP cs.LG stat.ML | The process of liquidity provision in financial markets can result in
prolonged exposure to illiquid instruments for market makers. In this case,
where a proprietary position is not desired, pro-actively targeting the right
client who is likely to be interested can be an effective means to offset this
position, rather than relying on commensurate interest arising through natural
demand. In this paper, we consider the inference of a client profile for the
purpose of corporate bond recommendation, based on typical recorded information
available to the market maker. Given a historical record of corporate bond
transactions and bond meta-data, we use a topic-modelling analogy to develop a
probabilistic technique for compiling a curated list of client recommendations
for a particular bond that needs to be traded, ranked by probability of
interest. We show that a model based on Latent Dirichlet Allocation offers
promising performance to deliver relevant recommendations for sales traders.
| Dieter Hendricks and Stephen J. Roberts | null | 1704.08488 | null | null |
Complex spectrogram enhancement by convolutional neural network with
multi-metrics learning | stat.ML cs.LG cs.SD | This paper aims to address two issues existing in the current speech
enhancement methods: 1) the difficulty of phase estimations; 2) a single
objective function cannot consider multiple metrics simultaneously. To solve
the first problem, we propose a novel convolutional neural network (CNN) model
for complex spectrogram enhancement, namely estimating clean real and imaginary
(RI) spectrograms from noisy ones. The reconstructed RI spectrograms are
directly used to synthesize enhanced speech waveforms. In addition, since
log-power spectrogram (LPS) can be represented as a function of RI
spectrograms, its reconstruction is also considered as another target. Thus a
unified objective function, which combines these two targets (reconstruction of
RI spectrograms and LPS), is equivalent to simultaneously optimizing two
commonly used objective metrics: segmental signal-to-noise ratio (SSNR) and
log-spectral distortion (LSD). Therefore, the learning process is called
multi-metrics learning (MML). Experimental results confirm the effectiveness of
the proposed CNN with RI spectrograms and MML in terms of improved standardized
evaluation metrics on a speech enhancement task.
| Szu-Wei Fu, Ting-yao Hu, Yu Tsao, and Xugang Lu | null | 1704.08504 | null | null |
EEG-Based User Reaction Time Estimation Using Riemannian Geometry
Features | cs.HC cs.LG | Riemannian geometry has been successfully used in many brain-computer
interface (BCI) classification problems and demonstrated superior performance.
In this paper, for the first time, it is applied to BCI regression problems, an
important category of BCI applications. More specifically, we propose a new
feature extraction approach for Electroencephalogram (EEG) based BCI regression
problems: a spatial filter is first used to increase the signal quality of the
EEG trials and also to reduce the dimensionality of the covariance matrices,
and then Riemannian tangent space features are extracted. We validate the
performance of the proposed approach in reaction time estimation from EEG
signals measured in a large-scale sustained-attention psychomotor vigilance
task, and show that compared with the traditional powerband features, the
tangent space features can reduce the root mean square estimation error by
4.30-8.30%, and increase the estimation correlation coefficient by 6.59-11.13%.
| Dongrui Wu and Brent J. Lance and Vernon J. Lawhern and Stephen Gordon
and Tzyy-Ping Jung and Chin-Teng Lin | null | 1704.08533 | null | null |
Learning the structure of Bayesian Networks: A quantitative assessment
of the effect of different algorithmic schemes | cs.LG cs.AI stat.ML | One of the most challenging tasks when adopting Bayesian Networks (BNs) is
the one of learning their structure from data. This task is complicated by the
huge search space of possible solutions, and by the fact that the problem is
NP-hard. Hence, full enumeration of all the possible solutions is not always
feasible and approximations are often required. However, to the best of our
knowledge, a quantitative analysis of the performance and characteristics of
the different heuristics to solve this problem has never been done before.
For this reason, in this work, we provide a detailed comparison of many
different state-of-the-art methods for structural learning on simulated data
considering both BNs with discrete and continuous variables, and with different
rates of noise in the data. In particular, we investigate the performance of
different widespread scores and algorithmic approaches proposed for the
inference and the statistical pitfalls within them.
| Stefano Beretta and Mauro Castelli and Ivo Goncalves and Roberto
Henriques and Daniele Ramazzotti | null | 1704.08676 | null | null |
Matrix Completion and Related Problems via Strong Duality | cs.DS cs.LG stat.ML | This work studies the strong duality of non-convex matrix factorization
problems: we show that under certain dual conditions, these problems and
their duals have the same optimum. This has been well understood for convex
optimization, but little was known for non-convex problems. We propose a novel
analytical framework and show that under certain dual conditions, the optimal
solution of the matrix factorization program is the same as its bi-dual and
thus the global optimality of the non-convex program can be achieved by solving
its bi-dual which is convex. These dual conditions are satisfied by a wide
class of matrix factorization problems, although matrix factorization problems
are hard to solve in full generality. This analytical framework may be of
independent interest to non-convex optimization more broadly.
We apply our framework to two prototypical matrix factorization problems:
matrix completion and robust Principal Component Analysis (PCA). These are
examples of efficiently recovering a hidden matrix given limited reliable
observations of it. Our framework shows that exact recoverability and strong
duality hold with nearly-optimal sample complexity guarantees for matrix
completion and robust PCA.
| Maria-Florina Balcan and Yingyu Liang and David P. Woodruff and
Hongyang Zhang | null | 1704.08683 | null | null |
A Siamese Deep Forest | stat.ML cs.LG | A Siamese Deep Forest (SDF) is proposed in the paper. It is based on the Deep
Forest or gcForest proposed by Zhou and Feng and can be viewed as a gcForest
modification. It can also be regarded as an alternative to the well-known
Siamese neural networks. The SDF uses a modified training set consisting of
concatenated pairs of vectors. Moreover, it defines the class distributions in
the deep forest as the weighted sum of the tree class probabilities such that
the weights are determined in order to reduce distances between similar pairs
and to increase them between dissimilar points. We show that the weights can be
obtained by solving a quadratic optimization problem. The SDF aims to prevent
overfitting which takes place in neural networks when only limited training
data are available. The numerical experiments illustrate the proposed distance
metric method.
| Lev V. Utkin and Mikhail A. Ryabinin | null | 1704.08715 | null | null |
A Network Perspective on Stratification of Multi-Label Data | stat.ML cs.LG stat.ME | In the recent years, we have witnessed the development of multi-label
classification methods which utilize the structure of the label space in a
divide and conquer approach to improve classification performance and allow
large data sets to be classified efficiently. Yet most of the available data
sets have been provided in train/test splits that did not account for
maintaining a distribution of higher-order relationships between labels among
splits or folds. We present a new approach to stratifying multi-label data for
classification purposes based on the iterative stratification approach
proposed by Sechidis et al. in an ECML PKDD 2011 paper. Our method extends the
iterative approach to take into account second-order relationships between
labels. Obtained results are evaluated using statistical properties of obtained
strata as presented by Sechidis. We also propose new statistical measures
relevant to second-order quality: the label-pair distribution, the percentage
of label pairs without positive evidence in folds, and the number of label
pair-fold pairs that have no positive evidence for the label pair. We verify
the impact of the new
methods on classification performance of Binary Relevance, Label Powerset and a
fast greedy community detection based label space partitioning classifier.
Random Forests serve as base classifiers. We check the variation of the number
of communities obtained per fold, and the stability of their modularity score.
Second-Order Iterative Stratification is compared to standard k-fold, label
set, and iterative stratification. The proposed approach lowers the variance of
classification quality, improves label pair oriented measures and example
distribution while maintaining a competitive quality in label-oriented
measures. We also witness an increase in stability of network characteristics.
| Piotr Szyma\'nski, Tomasz Kajdanowicz | null | 1704.08756 | null | null |
Deep Face Deblurring | cs.CV cs.AI cs.LG | Blind deblurring constitutes a long-studied task, however the outcomes of
generic methods are not effective on real-world blurred images.
Domain-specific methods for deblurring targeted object categories, e.g. text
or faces, frequently outperform their generic counterparts, hence they are
attracting an increasing amount of attention. In this work, we develop such a
domain-specific method to tackle the deblurring of human faces, henceforth
referred to as face deblurring. Studying faces is of tremendous significance
in computer vision, however face deblurring has yet to demonstrate convincing
results. This can be partly attributed to the combination of (i) poor texture
and (ii) highly structured shape, which render the typically used
contour/gradient priors sub-optimal. In our work, instead of making
assumptions about the prior, we
adopt a learning approach by inserting weak supervision that exploits the
well-documented structure of the face. Namely, we utilise a deep network to
perform the deblurring and employ a face alignment technique to pre-process
each face. We additionally surpass the deep network's requirement for
thousands of training samples by introducing an efficient framework that allows
the generation of a large dataset. We utilised this framework to create 2MF2, a
dataset of over two million frames. We conducted experiments with real world
blurred facial images and report that our method returns a result close to the
sharp natural latent image.
| Grigorios G. Chrysos, Stefanos Zafeiriou | null | 1704.08772 | null | null |
Learning Quadratic Variance Function (QVF) DAG models via OverDispersion
Scoring (ODS) | stat.ML cs.LG | Learning DAG or Bayesian network models is an important problem in
multi-variate causal inference. However, a number of challenges arise in
learning large-scale DAG models, including model identifiability and
computational complexity, since the space of directed graphs is huge. In this
paper, we address these issues in a number of steps for a broad class of DAG
models where the noise or variance is signal-dependent. Firstly we introduce a
new class of identifiable DAG models, where each node has a distribution where
the variance is a quadratic function of the mean (QVF DAG models). Our QVF DAG
models include many interesting classes of distributions such as Poisson,
Binomial, Geometric, Exponential, Gamma and many other distributions in which
the noise variance depends on the mean. We prove that this class of QVF DAG
models is identifiable, and introduce a new algorithm, the OverDispersion
Scoring (ODS) algorithm, for learning large-scale QVF DAG models. Our algorithm
is based on firstly learning the moralized or undirected graphical model
representation of the DAG to reduce the DAG search-space, and then exploiting
the quadratic variance property to learn the causal ordering. We show through
theoretical results and simulations that our algorithm is statistically
consistent in the high-dimensional p>n setting provided that the degree of the
moralized graph is bounded and performs well compared to state-of-the-art
DAG-learning algorithms.
| Gunwoong Park, Garvesh Raskutti | null | 1704.08783 | null | null |
DeepArchitect: Automatically Designing and Training Deep Architectures | stat.ML cs.LG | In deep learning, performance is strongly affected by the choice of
architecture and hyperparameters. While there has been extensive work on
automatic hyperparameter optimization for simple spaces, complex spaces such as
the space of deep architectures remain largely unexplored. As a result, the
choice of architecture is done manually by the human expert through a slow
trial and error process guided mainly by intuition. In this paper we describe a
framework for automatically designing and training deep models. We propose an
extensible and modular language that allows the human expert to compactly
represent complex search spaces over architectures and their hyperparameters.
The resulting search spaces are tree-structured and therefore easy to traverse.
Models can be automatically compiled to computational graphs once values for
all hyperparameters have been chosen. We can leverage the structure of the
search space to introduce different model search algorithms, such as random
search, Monte Carlo tree search (MCTS), and sequential model-based optimization
(SMBO). We present experiments comparing the different algorithms on CIFAR-10
and show that MCTS and SMBO outperform random search. In addition, these
experiments show that our framework can be used effectively for model
discovery, as it is possible to describe expressive search spaces and discover
competitive models without much effort from the human expert. Code for our
framework and experiments has been made publicly available.
| Renato Negrinho, Geoff Gordon | null | 1704.08792 | null | null |
Risk Stratification of Lung Nodules Using 3D CNN-Based Multi-task
Learning | cs.CV cs.LG | Risk stratification of lung nodules is a task of primary importance in lung
cancer diagnosis. Any improvement in robust and accurate nodule
characterization can assist in identifying cancer stage, prognosis, and
improving treatment planning. In this study, we propose a 3D Convolutional
Neural Network (CNN) based nodule characterization strategy. With a completely
3D approach, we utilize the volumetric information from a CT scan which would
be otherwise lost in the conventional 2D CNN based approaches. In order to
address the need for a large amount of training data for the CNN, we resort to
transfer learning to obtain highly discriminative features. Moreover, we also
acquire the task dependent feature representation for six high-level nodule
attributes and fuse this complementary information via a Multi-task learning
(MTL) framework. Finally, we propose to incorporate potential disagreement
among radiologists while scoring different nodule attributes in a
graph-regularized sparse multi-task learning framework. We evaluated our
proposed approach on
one of the largest publicly available lung nodule datasets comprising 1018
scans and obtained state-of-the-art results in regressing the malignancy
scores.
| Sarfaraz Hussein, Kunlin Cao, Qi Song, Ulas Bagci | 10.1007/978-3-319-59050-9_20 | 1704.08797 | null | null |
Neural Ranking Models with Weak Supervision | cs.IR cs.CL cs.LG | Despite the impressive improvements achieved by unsupervised deep neural
networks in computer vision and NLP tasks, such improvements have not yet been
observed in ranking for information retrieval. The reason may be the complexity
of the ranking problem, as it is not obvious how to learn from queries and
documents when no supervised signal is available. Hence, in this paper, we
propose to train a neural ranking model using weak supervision, where labels
are obtained automatically without human annotators or any external resources
(e.g., click data). To this aim, we use the output of an unsupervised ranking
model, such as BM25, as a weak supervision signal. We further train a set of
simple yet effective ranking models based on feed-forward neural networks. We
study their effectiveness under various learning scenarios (point-wise and
pair-wise models) and using different input representations (i.e., from
encoding query-document pairs into dense/sparse vectors to using word embedding
representation). We train our networks using tens of millions of training
instances and evaluate them on two standard collections: a homogeneous news
collection (Robust) and a heterogeneous large-scale web collection (ClueWeb).
Our experiments indicate that employing proper objective functions and letting
the networks learn the input representation based on weakly supervised data
leads to impressive performance, with over 13% and 35% MAP improvements over
the BM25 model on the Robust and the ClueWeb collections. Our findings also
suggest that supervised neural ranking models can greatly benefit from
pre-training on large amounts of weakly labeled data that can be easily
obtained from unsupervised IR models.
| Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn, Jaap Kamps, W. Bruce
Croft | null | 1704.08803 | null | null |
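A minimal sketch of the pair-wise weak-supervision setup from the abstract above: for a query and two documents, the network is trained to rank them the way BM25 does. The scorer architecture and feature dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Feed-forward scorer over a dense encoding of a (query, document) pair;
# the 300-dim input and layer sizes are assumptions for this sketch.
scorer = nn.Sequential(nn.Linear(300, 128), nn.ReLU(), nn.Linear(128, 1))
optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-3)
margin_loss = nn.MarginRankingLoss(margin=1.0)

def train_step(qd_pos, qd_neg):
    """qd_pos/qd_neg: encodings of (query, doc) pairs where the weak BM25
    signal scores the 'pos' document above the 'neg' one."""
    s_pos, s_neg = scorer(qd_pos), scorer(qd_neg)
    target = torch.ones_like(s_pos)       # require s_pos > s_neg by the margin
    loss = margin_loss(s_pos, s_neg, target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

loss = train_step(torch.randn(16, 300), torch.randn(16, 300))
```

No human labels appear anywhere in the loop; the ordering induced by the unsupervised ranker is the only supervision signal.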
A Tribe Competition-Based Genetic Algorithm for Feature Selection in
Pattern Classification | cs.LG cs.NE | Feature selection has always been a critical step in pattern recognition, in
which evolutionary algorithms, such as the genetic algorithm (GA), are most
commonly used. However, the individual encoding scheme used in various GAs
would either pose a bias on the solution or require a pre-specified number of
features, and hence may lead to less accurate results. In this paper, a tribe
competition-based genetic algorithm (TCbGA) is proposed for feature selection
in pattern classification. The population of individuals is divided into
multiple tribes, and the initialization and evolutionary operations are
modified to ensure that the number of selected features in each tribe follows a
Gaussian distribution. Thus each tribe focuses on exploring a specific part of
the solution space. Meanwhile, tribe competition is introduced to the evolution
process, which allows the winning tribes, which produce better individuals, to
enlarge their sizes, i.e., to have more individuals searching their parts of the
solution space. This algorithm, therefore, avoids the bias on solutions and
requirement of a pre-specified number of features. We have evaluated our
algorithm against several state-of-the-art feature selection approaches on 20
benchmark datasets. Our results suggest that the proposed TCbGA algorithm can
identify the optimal feature subset more effectively and produce more accurate
pattern classification.
| Benteng Ma, Yong Xia | 10.1016/j.asoc.2017.04.042 | 1704.08818 | null | null |
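The sketch below illustrates the tribe initialization described in the abstract above: each tribe draws its number of selected features from a Gaussian centered on a tribe-specific target, so different tribes explore different subset sizes. The tribe sizes, means, and the fitness stand-in are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, tribe_means, tribe_size = 50, [5, 15, 30], 20

def init_tribe(mean_k, std=2.0):
    """Each individual is a boolean mask selecting ~Gaussian(mean_k) features."""
    tribe = []
    for _ in range(tribe_size):
        k = int(np.clip(round(rng.normal(mean_k, std)), 1, n_features))
        mask = np.zeros(n_features, dtype=bool)
        mask[rng.choice(n_features, size=k, replace=False)] = True
        tribe.append(mask)
    return tribe

tribes = [init_tribe(m) for m in tribe_means]

def fitness(mask):
    return rng.random()   # stand-in for classifier accuracy on the feature subset

# Tribe competition: tribes producing better individuals would gain population
# share in later generations; here we only score one round.
scores = [max(fitness(ind) for ind in t) for t in tribes]
print("winning tribe:", int(np.argmax(scores)))
```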
Deep Feature Learning for Graphs | stat.ML cs.LG cs.SI | This paper presents a general graph representation learning framework called
DeepGL for learning deep node and edge representations from large (attributed)
graphs. In particular, DeepGL begins by deriving a set of base features (e.g.,
graphlet features) and automatically learns a multi-layered hierarchical graph
representation where each successive layer leverages the output from the
previous layer to learn features of a higher-order. Contrary to previous work,
DeepGL learns relational functions (each representing a feature) that
generalize across networks and are therefore useful for graph-based transfer
learning tasks. Moreover, DeepGL naturally supports attributed graphs, learns
interpretable features, and is space-efficient (by learning sparse feature
vectors). In addition, DeepGL is expressive, flexible with many interchangeable
components, efficient with a time complexity of $\mathcal{O}(|E|)$, and
scalable for large networks via an efficient parallel implementation. Compared
with the state-of-the-art method, DeepGL is (1) effective for across-network
transfer learning tasks and attributed graph representation learning, (2)
space-efficient, requiring up to 6x less memory, (3) fast, with up to 182x
speedup in runtime performance, and (4) accurate with an average improvement of
20% or more on many learning tasks.
| Ryan A. Rossi, Rong Zhou, and Nesreen K. Ahmed | null | 1704.08829 | null | null |
Parseval Networks: Improving Robustness to Adversarial Examples | stat.ML cs.AI cs.CR cs.LG | We introduce Parseval networks, a form of deep neural networks in which the
Lipschitz constant of linear, convolutional and aggregation layers is
constrained to be smaller than 1. Parseval networks are empirically and
theoretically motivated by an analysis of the robustness of the predictions
made by deep neural networks when their input is subject to an adversarial
perturbation. The most important feature of Parseval networks is to maintain
weight matrices of linear and convolutional layers to be (approximately)
Parseval tight frames, which are extensions of orthogonal matrices to
non-square matrices. We describe how these constraints can be maintained
efficiently during SGD. We show that Parseval networks match the
state-of-the-art in terms of accuracy on CIFAR-10/100 and Street View House
Numbers (SVHN) while being more robust than their vanilla counterpart against
adversarial examples. Incidentally, Parseval networks also tend to train faster
and make better use of the full capacity of the networks.
| Moustapha Cisse, Piotr Bojanowski, Edouard Grave, Yann Dauphin,
Nicolas Usunier | null | 1704.08847 | null | null |
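The constraint in the abstract above can be maintained with a cheap retraction applied after each SGD step, of the form W <- (1 + beta) * W - beta * W W^T W with small beta. The sketch below demonstrates that iteration in isolation; the starting scale and beta are demo choices, and in training beta must stay small because the retraction is only a local correction around the constraint set.

```python
import torch

def parseval_retraction(W, beta):
    """One approximate projection step toward a Parseval tight frame
    (rows of W approximately orthonormal)."""
    with torch.no_grad():
        W.copy_((1 + beta) * W - beta * (W @ W.t() @ W))

W = 0.05 * torch.randn(64, 128)          # spectral norm safely below 1
for _ in range(300):
    parseval_retraction(W, beta=0.05)
print(torch.dist(W @ W.t(), torch.eye(64)))  # -> near 0 as rows orthonormalize
```

Keeping every layer's Lipschitz constant at most 1 this way bounds how much an input perturbation can be amplified through the network, which is the source of the robustness to adversarial examples.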
On weight initialization in deep neural networks | cs.LG | A proper initialization of the weights in a neural network is critical to its
convergence. Current insights into weight initialization come primarily from
linear activation functions. In this paper, I develop a theory for weight
initializations with non-linear activations. First, I derive a general weight
initialization strategy for any neural network using activation functions
differentiable at 0. Next, I derive the weight initialization strategy for the
Rectified Linear Unit (RELU), and provide theoretical insights into why the
Xavier initialization is a poor choice with RELU activations. My analysis
provides a clear demonstration of the role of non-linearities in determining
the proper weight initializations.
| Siddharth Krishna Kumar | null | 1704.08863 | null | null |
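The following sketch illustrates the core point of the abstract above: because ReLU zeroes roughly half of the pre-activations, the variance-preserving scale needs an extra factor of 2 relative to Xavier, and the difference compounds with depth. Layer width and depth are demo choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier(fan_in, fan_out):
    return rng.normal(0.0, np.sqrt(2.0 / (fan_in + fan_out)), (fan_in, fan_out))

def he(fan_in, fan_out):     # ReLU-aware scale, sqrt(2 / fan_in)
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), (fan_in, fan_out))

x0 = rng.normal(size=(2000, 512))
for name, init in [("xavier", xavier), ("he", he)]:
    h = x0
    for _ in range(20):                  # a 20-layer ReLU stack
        h = np.maximum(h @ init(512, 512), 0.0)
    print(name, "layer-20 activation std: %.4f" % h.std())
```

With Xavier, the activation scale shrinks by roughly half in variance per ReLU layer and has collapsed by layer 20; with the ReLU-aware scale it stays of order one.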
Traffic Light Control Using Deep Policy-Gradient and Value-Function
Based Reinforcement Learning | cs.LG | Recent advances in combining deep neural network architectures with
reinforcement learning techniques have shown promising potential results in
solving complex control problems with high dimensional state and action spaces.
Inspired by these successes, in this paper, we build two kinds of reinforcement
learning algorithms: deep policy-gradient and value-function based agents which
can predict the best possible traffic signal for a traffic intersection. At
each time step, these adaptive traffic light control agents receive a snapshot
of the current state of a graphical traffic simulator and produce control
signals. The policy-gradient based agent maps its observation directly to the
control signal, whereas the value-function based agent first estimates values
for all legal control signals. The agent then selects the optimal control
action with the highest value. Our methods show promising results in a traffic
network simulated in the SUMO traffic simulator, without suffering from
instability issues during the training process.
| Seyed Sajad Mousavi, Michael Schukat, Enda Howley | null | 1704.08883 | null | null |
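A minimal sketch of the value-function based agent from the abstract above: a network estimates a value for every legal signal phase, and the agent picks the argmax. The state encoding (a flattened snapshot of the intersection) and the number of phases are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

n_phases, state_dim = 4, 128
q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, n_phases))

def select_phase(state, legal_mask):
    """state: (state_dim,) tensor; legal_mask: (n_phases,) boolean tensor."""
    with torch.no_grad():
        q = q_net(state)
        q[~legal_mask] = float("-inf")   # never select an illegal phase
        return int(q.argmax())

phase = select_phase(torch.randn(state_dim),
                     torch.tensor([True, True, False, True]))
```

The policy-gradient agent would instead output a distribution over phases directly and sample from it, skipping the value-estimation step.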
Adaptation and learning over networks for nonlinear system modeling | stat.ML cs.LG | In this chapter, we analyze nonlinear filtering problems in distributed
environments, e.g., sensor networks or peer-to-peer protocols. In these
scenarios, the agents in the environment receive measurements in a streaming
fashion, and they are required to estimate a common (nonlinear) model by
alternating local computations and communications with their neighbors. We
focus on the important distinction between single-task problems, where the
underlying model is common to all agents, and multitask problems, where each
agent might converge to a different model due to, e.g., spatial dependencies or
other factors. Currently, most of the literature on distributed learning in the
nonlinear case has focused on the single-task case, which may be a strong
limitation in real-world scenarios. After introducing the problem and reviewing
the existing approaches, we describe a simple kernel-based algorithm tailored
for the multitask case. We evaluate the proposal on a simulated benchmark task,
and we conclude by detailing currently open problems and lines of research.
| Simone Scardapane, Jie Chen, C\'edric Richard | null | 1704.08913 | null | null |
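The sketch below illustrates the adapt-then-combine pattern for distributed nonlinear estimation in the single-task case: each agent takes a local LMS step on a shared random-feature approximation of a kernel model, then averages weights with its neighbors. The network topology, step size, and feature map are assumptions for this sketch; the chapter's multitask algorithm differs in detail.

```python
import numpy as np

rng = np.random.default_rng(0)
D, mu = 100, 0.1
Omega = rng.normal(size=(D, 1))           # random Fourier frequencies (1-D input)
b = rng.uniform(0, 2 * np.pi, D)

def phi(x):                               # approximate Gaussian-kernel features
    return np.sqrt(2.0 / D) * np.cos(Omega @ np.atleast_1d(x) + b)

A = np.array([[0.5, 0.5, 0.0],            # row-stochastic combination matrix
              [0.3, 0.4, 0.3],            # (3 agents on a line topology)
              [0.0, 0.5, 0.5]])
w = np.zeros((3, D))                      # one weight vector per agent

for t in range(2000):
    x = rng.uniform(-3, 3)
    y = np.sin(x) + 0.05 * rng.normal()   # common nonlinear model (single-task)
    f = phi(x)
    psi = np.array([wk + mu * (y - f @ wk) * f for wk in w])  # adapt: LMS step
    w = A @ psi                                               # combine step

print("agent 0 estimate at x=1:", phi(1.0) @ w[0], "truth:", np.sin(1.0))
```

In the multitask setting, the combination matrix would be replaced by a similarity-weighted coupling so that agents with different underlying models are not forced to a single consensus.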
Past, Present, Future: A Computational Investigation of the Typology of
Tense in 1000 Languages | cs.CL cs.AI cs.LG | We present SuperPivot, an analysis method for low-resource languages that
occur in a superparallel corpus, i.e., in a corpus that contains an order of
magnitude more languages than parallel corpora currently in use. We show that
SuperPivot performs well for the crosslingual analysis of the linguistic
phenomenon of tense. We produce analysis results for more than 1000 languages,
conducting - to the best of our knowledge - the largest crosslingual
computational study performed to date. We extend existing methodology for
leveraging parallel corpora for typological analysis by overcoming a limiting
assumption of earlier work: We only require that a linguistic feature is
overtly marked in only a few of the thousands of languages, as opposed to requiring that
it be marked in all languages under investigation.
| Ehsaneddin Asgari and Hinrich Sch\"utze | null | 1704.08914 | null | null |
Mostly Exploration-Free Algorithms for Contextual Bandits | stat.ML cs.LG | The contextual bandit literature has traditionally focused on algorithms that
address the exploration-exploitation tradeoff. In particular, greedy algorithms
that exploit current estimates without any exploration may be sub-optimal in
general. However, exploration-free greedy algorithms are desirable in practical
settings where exploration may be costly or unethical (e.g., clinical trials).
Surprisingly, we find that a simple greedy algorithm can be rate optimal
(achieves asymptotically optimal regret) if there is sufficient randomness in
the observed contexts (covariates). We prove that this is always the case for a
two-armed bandit under a general class of context distributions that satisfy a
condition we term covariate diversity. Furthermore, even absent this condition,
we show that a greedy algorithm can be rate optimal with positive probability.
Thus, standard bandit algorithms may unnecessarily explore. Motivated by these
results, we introduce Greedy-First, a new algorithm that uses only observed
contexts and rewards to determine whether to follow a greedy algorithm or to
explore. We prove that this algorithm is rate optimal without any additional
assumptions on the context distribution or the number of arms. Extensive
simulations demonstrate that Greedy-First successfully reduces exploration and
outperforms existing (exploration-based) contextual bandit algorithms such as
Thompson sampling or upper confidence bound (UCB).
| Hamsa Bastani and Mohsen Bayati and Khashayar Khosravi | null | 1704.09011 | null | null |
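A simplified sketch of the Greedy-First idea from the abstract above, for a linear contextual bandit: act greedily on per-arm least-squares estimates while the observed contexts keep every arm's design matrix well conditioned, and fall back to forced exploration otherwise. The diversity threshold and the fallback rule here are illustrative assumptions, not the paper's exact statistical test.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T = 3, 2, 4000
theta = rng.normal(size=(K, d))           # unknown true arm parameters

V = np.stack([np.eye(d)] * K)             # per-arm (ridge-primed) design matrices
b = np.zeros((K, d))
switched = False

for t in range(1, T + 1):
    x = rng.normal(size=d)                # diverse covariates: greedy suffices
    if switched:
        a = rng.integers(K)               # fallback: uniform exploration
    else:
        est = np.array([np.linalg.solve(V[k], b[k]) for k in range(K)])
        a = int(np.argmax(est @ x))       # purely greedy choice
    r = theta[a] @ x + 0.1 * rng.normal()
    V[a] += np.outer(x, x)
    b[a] += r * x
    if not switched and t % 500 == 0:     # periodic covariate-diversity check:
        # min eigenvalue of each arm's design matrix should grow ~ linearly
        if min(np.linalg.eigvalsh(V[k])[0] for k in range(K)) < 0.01 * t:
            switched = True

print("fell back to exploration:", switched)
```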
Time-Sensitive Bandit Learning and Satisficing Thompson Sampling | cs.LG | The literature on bandit learning and regret analysis has focused on contexts
where the goal is to converge on an optimal action in a manner that limits
exploration costs. One shortcoming imposed by this orientation is that it does
not treat time preference in a coherent manner. Time preference plays an
important role when the optimal action is costly to learn relative to
near-optimal actions. This limitation has not only restricted the relevance of
theoretical results but has also influenced the design of algorithms. Indeed,
popular approaches such as Thompson sampling and UCB can fare poorly in such
situations. In this paper, we consider discounted rather than cumulative
regret, where a discount factor encodes time preference. We propose satisficing
Thompson sampling -- a variation of Thompson sampling -- and establish a strong
discounted regret bound for this new algorithm.
| Daniel Russo and David Tse and Benjamin Van Roy | null | 1704.09028 | null | null |
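The sketch below conveys the satisficing idea for a Bernoulli bandit: sample means from the posterior as in ordinary Thompson sampling, but instead of chasing the exact sampled maximizer, accept any arm whose sampled mean is within epsilon of it, breaking ties toward the arm already known best. This particular acceptance and tie-breaking rule is an illustrative simplification, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T, eps = 5, 3000, 0.05
alpha, beta = np.ones(K), np.ones(K)      # Beta posterior parameters

for t in range(T):
    sample = rng.beta(alpha, beta)        # one posterior sample per arm
    near_opt = np.flatnonzero(sample >= sample.max() - eps)
    pulls = alpha + beta                  # prefer well-explored near-optimal arms
    a = near_opt[np.argmax(pulls[near_opt])]
    r = rng.random() < [0.3, 0.5, 0.55, 0.58, 0.6][a]   # hidden true means
    alpha[a] += r
    beta[a] += 1 - r

print("posterior means:", (alpha / (alpha + beta)).round(2))
```

When the top arms are nearly tied, this agent stops paying exploration costs to distinguish them, which is exactly the behavior a discounted-regret objective rewards.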
Random Forest Ensemble of Support Vector Regression Models for Solar
Power Forecasting | cs.LG cs.CE | To mitigate the uncertainty of variable renewable resources, two
off-the-shelf machine learning tools are deployed to forecast the solar power
output of a solar photovoltaic system. The support vector machines generate the
forecasts and the random forest acts as an ensemble learning method to combine
the forecasts. The common ensemble technique in wind and solar power
forecasting is the blending of meteorological data from several sources. In
this study, though, the present and past solar power forecasts from several
models, as well as the associated meteorological data, are incorporated into
the random forest to combine and improve the accuracy of the day-ahead solar
power forecasts. The performance of the combined model is evaluated over the
entire year and compared with other combining techniques.
| Mohamed Abuella and Badrul Chowdhury | null | 1705.00033 | null | null |
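A minimal scikit-learn sketch of the combining scheme described above: several SVR models produce forecasts from meteorological inputs, and a random forest learns to map those forecasts, together with the weather features, to the actual power output. The synthetic data, SVR settings, and train/test split are purely illustrative.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_weather = rng.normal(size=(500, 4))     # e.g., irradiance, temperature, ...
y_power = 2 * X_weather[:, 0] + np.sin(X_weather[:, 1]) \
          + 0.1 * rng.normal(size=500)

# Base forecasters; real use would generate out-of-sample forecasts here.
svr_models = [SVR(C=c).fit(X_weather[:400], y_power[:400])
              for c in (0.1, 1.0, 10.0)]
base_forecasts = np.column_stack([m.predict(X_weather) for m in svr_models])

# Random forest combines the forecasts with the meteorological features.
X_combined = np.hstack([base_forecasts, X_weather])
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_combined[:400], y_power[:400])
print("ensemble test R^2:", rf.score(X_combined[400:], y_power[400:]))
```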
Deep Multi-view Models for Glitch Classification | cs.LG cs.CV | Non-cosmic, non-Gaussian disturbances known as "glitches" show up in
gravitational-wave data of the Advanced Laser Interferometer Gravitational-wave
Observatory, or aLIGO. In this paper, we propose a deep multi-view
convolutional neural network to classify glitches automatically. The primary
purpose of classifying glitches is to understand their characteristics and
origin, which facilitates their removal from the data or from the detector
entirely. We visualize glitches as spectrograms and leverage the
state-of-the-art image classification techniques in our model. The suggested
classifier is a multi-view deep neural network that exploits four different
views for classification. The experimental results demonstrate that the
proposed model improves the overall accuracy of the classification compared to
traditional single view algorithms.
| Sara Bahaadini, Neda Rohani, Scott Coughlin, Michael Zevin, Vicky
Kalogera, and Aggelos K Katsaggelos | null | 1705.00034 | null | null |
Cnvlutin2: Ineffectual-Activation-and-Weight-Free Deep Neural Network
Computing | cs.LG | We discuss several modifications and extensions over the previously proposed
Cnvlutin (CNV) accelerator for convolutional and fully-connected layers of Deep
Learning Network. We first describe different encodings of the activations that
are deemed ineffectual. The encodings have different memory overhead and energy
characteristics. We propose using a level of indirection when accessing
activations from memory to reduce their memory footprint by storing only the
effectual activations. We also present a modified organization that detects the
activations that are deemed ineffectual while fetching them from memory. This
differs from the original design, which instead detected them at the
output of the preceding layer. Finally, we present an extended CNV that can
also skip ineffectual weights.
| Patrick Judd, Alberto Delmas, Sayeh Sharify and Andreas Moshovos | null | 1705.00125 | null | null |
Online Learning with Automata-based Expert Sequences | cs.LG | We consider a general framework of online learning with expert advice where
regret is defined with respect to sequences of experts accepted by a weighted
automaton. Our framework covers several problems previously studied, including
competing against k-shifting experts. We give a series of algorithms for this
problem, including an automata-based algorithm extending weighted-majority and
more efficient algorithms based on the notion of failure transitions. We
further present efficient algorithms based on an approximation of the
competitor automaton, in particular n-gram models obtained by minimizing the
\infty-R\'{e}nyi divergence, and present an extensive study of the
approximation properties of such models. Finally, we also extend our algorithms
and results to the framework of sleeping experts.
| Mehryar Mohri, Scott Yang | null | 1705.00132 | null | null |
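As context for the abstract above, the sketch below implements the classical fixed-share update for tracking k-shifting experts, the kind of competitor class the paper's automata framework generalizes: an exponential-weights update followed by a small mix toward the uniform distribution, which lets the learner follow switches in the best expert. The loss process is a synthetic stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, eta, alpha = 4, 1000, 0.5, 0.01
w = np.full(N, 1.0 / N)                   # weights over the N experts

total_loss = 0.0
for t in range(T):
    losses = rng.uniform(size=N)          # stand-in per-expert losses in [0, 1]
    losses[0 if t < T // 2 else 3] *= 0.2  # best expert shifts mid-stream
    total_loss += w @ losses              # learner's expected loss this round
    w = w * np.exp(-eta * losses)         # exponential-weights update
    w /= w.sum()
    w = (1 - alpha) * w + alpha / N       # fixed-share: mix toward uniform

print("average loss:", total_loss / T)
```

The mixing step corresponds to allowing a transition between experts with probability alpha; the paper's weighted automata replace this single parameter with an arbitrary accepted set of expert sequences.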