title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Local Maxima in the Likelihood of Gaussian Mixture Models: Structural
Results and Algorithmic Consequences | stat.ML cs.LG math.OC | We provide two fundamental results on the population (infinite-sample)
likelihood function of Gaussian mixture models with $M \geq 3$ components. Our
first main result shows that the population likelihood function has bad local
maxima even in the special case of equally-weighted mixtures of well-separated
and spherical Gaussians. We prove that the log-likelihood value of these bad
local maxima can be arbitrarily worse than that of any global optimum, thereby
resolving an open question of Srebro (2007). Our second main result shows that
the EM algorithm (or a first-order variant of it) with random initialization
will converge to bad critical points with probability at least
$1-e^{-\Omega(M)}$. We further establish that a first-order variant of EM will
not converge to strict saddle points almost surely, indicating that the poor
performance of the first-order method can be attributed to the existence of bad
local maxima rather than bad saddle points. Overall, our results highlight the
necessity of careful initialization when using the EM algorithm in practice,
even when applied in highly favorable settings.
| Chi Jin, Yuchen Zhang, Sivaraman Balakrishnan, Martin J. Wainwright,
Michael Jordan | null | 1609.00978 | null | null |
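
A minimal numpy sketch of the phenomenon described above (our own illustration, not the authors' code): EM on an equally-weighted mixture of spherical unit-variance Gaussians, updating only the means. Running it from several random initializations shows how EM can settle on different critical points, which is the failure mode the paper quantifies.

```python
# Sketch: EM for an equally-weighted mixture of spherical unit-variance
# Gaussians, updating means only. Different random initializations can
# converge to different critical points.
import numpy as np

def em_spherical_means(X, mu0, iters=200):
    mu = mu0.copy()
    for _ in range(iters):
        # E-step: responsibilities under equal weights and unit variance
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)   # (n, M)
        logp = -0.5 * d2
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted mean update
        mu = (r.T @ X) / r.sum(axis=0)[:, None]
    return mu

rng = np.random.default_rng(0)
true_mu = np.array([[-10.0, 0.0], [0.0, 10.0], [10.0, 0.0]])  # well separated
X = np.concatenate([rng.normal(m, 1.0, size=(500, 2)) for m in true_mu])

for seed in range(3):
    init = np.random.default_rng(seed).normal(0, 10, size=(3, 2))
    print(f"seed {seed}: fitted means =\n{np.round(em_spherical_means(X, init), 2)}")
```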
Convexified Convolutional Neural Networks | cs.LG | We describe the class of convexified convolutional neural networks (CCNNs),
which capture the parameter sharing of convolutional neural networks in a
convex manner. By representing the nonlinear convolutional filters as vectors
in a reproducing kernel Hilbert space, we can represent the CNN parameters as
a low-rank matrix, which can be relaxed to obtain a convex optimization
problem. For learning two-layer convolutional neural networks, we prove that
the generalization error obtained by a convexified CNN converges to that of the
best possible CNN. For learning deeper networks, we train CCNNs in a layer-wise
manner. Empirically, CCNNs achieve performance competitive with CNNs trained by
backpropagation, SVMs, fully-connected neural networks, stacked denoising
auto-encoders, and other baseline methods.
| Yuchen Zhang, Percy Liang, Martin J. Wainwright | null | 1609.01 | null | null |
Distribution-Specific Hardness of Learning Neural Networks | cs.LG cs.NE stat.ML | Although neural networks are routinely and successfully trained in practice
using simple gradient-based methods, most existing theoretical results are
negative, showing that learning such networks is difficult, in a worst-case
sense over all data distributions. In this paper, we take a more nuanced view,
and consider whether specific assumptions on the "niceness" of the input
distribution, or "niceness" of the target function (e.g. in terms of
smoothness, non-degeneracy, incoherence, random choice of parameters etc.), are
sufficient to guarantee learnability using gradient-based methods. We provide
evidence that neither class of assumptions alone is sufficient: On the one
hand, for any member of a class of "nice" target functions, there are difficult
input distributions. On the other hand, we identify a family of simple target
functions, which are difficult to learn even if the input distribution is
"nice". To prove our results, we develop some tools which may be of independent
interest, such as extending Fourier-based hardness techniques developed in the
context of statistical queries \cite{blum1994weakly}, from the Boolean cube to
Euclidean space and to more general classes of functions.
| Ohad Shamir | null | 1609.01037 | null | null |
Classifying and sorting cluttered piles of unknown objects with robots:
a learning approach | cs.RO cs.CV cs.LG | We consider the problem of sorting a densely cluttered pile of unknown
objects using a robot. This as-yet-unsolved problem is relevant to the robotic
waste sorting business.
By extending previous active learning approaches to grasping, we show a
system that learns the task autonomously. Instead of predicting just whether a
grasp succeeds, we predict the classes of the objects that end up being picked
and thrown onto the target conveyor. Segmenting and identifying objects from
the uncluttered target conveyor, as opposed to the working area, is easier due
to the added structure since the thrown objects will be the only ones present.
Instead of trying to segment or otherwise understand the cluttered working
area in any way, we simply allow the controller to learn a mapping from an RGBD
image in the neighborhood of the grasp to a predicted result---all segmentation
etc. in the working area is implicit in the learned function. The grasp
selection operates in two stages: The first stage is hardcoded and outputs a
distribution of possible grasps that sometimes succeed. The second stage uses a
purely learned criterion to choose the grasp to make from the proposal
distribution created by the first stage.
In an experiment, the system quickly learned to make good pickups and predict
correctly, in advance, which class of object it was going to pick up and was
able to sort the objects from a densely cluttered pile by color.
| Janne V. Kujala, Tuomas J. Lukka, Harri Holopainen | null | 1609.01044 | null | null |
The Player Kernel: Learning Team Strengths Based on Implicit Player
Contributions | cs.LG stat.AP | In this work, we draw attention to a connection between skill-based models of
game outcomes and Gaussian process classification models. The Gaussian process
perspective enables a) a principled way of dealing with uncertainty and b) rich
models, specified through kernel functions. Using this connection, we tackle
the problem of predicting outcomes of football matches between national teams.
We develop a player kernel that relates any two football matches through the
players lined up on the field. This makes it possible to share knowledge gained
from observing matches between clubs (available in large quantities) and
matches between national teams (available only in limited quantities). We
evaluate our approach on the Euro 2008, 2012 and 2016 final tournaments.
| Lucas Maystre, Victor Kristof, Antonio J. Gonz\'alez Ferrer, Matthias
Grossglauser | null | 1609.01176 | null | null |
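
A minimal assumed form of the player-kernel idea (our own toy, not necessarily the authors' exact construction): represent each match by a signed indicator vector over all players lined up, so the kernel between two matches is the inner product of these vectors, and matches sharing players become correlated.

```python
# Sketch: matches as signed player-indicator vectors; kernel = inner product.
import numpy as np

players = {"Ronaldo": 0, "Messi": 1, "Modric": 2, "Busquets": 3}  # toy roster

def match_vector(home_lineup, away_lineup, n_players=len(players)):
    v = np.zeros(n_players)
    for p in home_lineup:
        v[players[p]] += 1.0      # +1 for the home side
    for p in away_lineup:
        v[players[p]] -= 1.0      # -1 for the away side
    return v

m1 = match_vector(["Ronaldo", "Modric"], ["Messi", "Busquets"])  # club match
m2 = match_vector(["Ronaldo"], ["Messi"])                        # national match
print("player kernel k(m1, m2) =", m1 @ m2)  # shared players induce similarity
```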
Live Orchestral Piano, a system for real-time orchestral music
generation | cs.LG | This paper introduces the first system for performing automatic orchestration
based on a real-time piano input. We believe that it is possible to learn the
underlying regularities between piano scores and their orchestrations by
renowned composers, in order to perform this task automatically on novel
piano inputs. To that end, we investigate a class of statistical inference
models called conditional Restricted Boltzmann Machine (cRBM). We introduce a
specific evaluation framework for orchestral generation based on a prediction
task in order to assess the quality of different models. As prediction and
creation are two widely different endeavours, we discuss the potential biases
in evaluating temporal generative models through prediction tasks and their
impact on a creative system. Finally, we introduce an implementation of the
proposed model, called Live Orchestral Piano (LOP), which performs real-time
projective orchestration of a MIDI keyboard input.
| L\'eopold Crestel and Philippe Esling | null | 1609.01203 | null | null |
The Robustness of Estimator Composition | cs.LG stat.ML | We formalize notions of robustness for composite estimators via the notion of
a breakdown point. A composite estimator successively applies two (or more)
estimators: on data decomposed into disjoint parts, it applies the first
estimator on each part, then the second estimator on the outputs of the first
estimator, and so on if the composition involves more than two estimators.
Informally, the breakdown point is the minimum fraction of data points which if
significantly modified will also significantly modify the output of the
estimator, so it is typically desirable to have a large breakdown point. Our
main result shows that, under mild conditions on the individual estimators, the
breakdown point of the composite estimator is the product of the breakdown
points of the individual estimators. We also demonstrate several scenarios,
ranging from regression to statistical testing, where this analysis is easy to
apply, useful for understanding worst-case robustness, and yields powerful
insights into the associated data analysis.
| Pingfan Tang, Jeff M. Phillips | null | 1609.01226 | null | null |
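
A worked toy illustration of the product rule above (our own example, not the paper's code): the median has breakdown point roughly 1/2, so a composite that takes the median of part-wise medians should break at roughly 1/2 × 1/2 = 1/4 of adversarially placed corruption.

```python
import numpy as np

def median_of_medians(x, parts=10):
    chunks = np.array_split(np.asarray(x, dtype=float), parts)
    return np.median([np.median(c) for c in chunks])

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=(10, 100))   # 10 parts of 100 points each

# Adversarial corruption: just over half the points in just over half the
# parts, i.e. roughly 1/4 of all the data, already breaks the composite,
# matching the product-of-breakdown-points result.
y = x.copy()
y[:6, :51] = 1e6                           # ~30.6% of the data, placed adversarially
print("clean    :", median_of_medians(x.ravel()))
print("corrupted:", median_of_medians(y.ravel()))
```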
Evolutionary Synthesis of Deep Neural Networks via Synaptic
Cluster-driven Genetic Encoding | cs.LG cs.CV cs.NE stat.ML | There has been significant recent interest in achieving highly efficient
deep neural network architectures. A promising paradigm for achieving this is
the concept of evolutionary deep intelligence, which attempts to mimic
biological evolution processes to synthesize highly-efficient deep neural
networks over successive generations. An important aspect of evolutionary deep
intelligence is the genetic encoding scheme used to mimic heredity, which can
have a significant impact on the quality of offspring deep neural networks.
Motivated by the neurobiological phenomenon of synaptic clustering, we
introduce a new genetic encoding scheme where synaptic probability is driven
towards the formation of a highly sparse set of synaptic clusters. Experimental
results for the task of image classification demonstrated that the synthesized
offspring networks using this synaptic cluster-driven genetic encoding scheme
can achieve state-of-the-art performance while having network architectures
that are not only significantly more efficient (with a ~125-fold decrease in
synapses for MNIST) compared to the original ancestor network, but also
tailored for GPU-accelerated machine learning applications.
| Mohammad Javad Shafiee and Alexander Wong | null | 1609.0136 | null | null |
Learning Model Predictive Control for iterative tasks. A Data-Driven
Control Framework | cs.SY cs.LG math.OC | A Learning Model Predictive Controller (LMPC) for iterative tasks is
presented. The controller is reference-free and is able to improve its
performance by learning from previous iterations. A safe set and a terminal
cost function are used in order to guarantee recursive feasibility and
non-increasing performance at each iteration. The paper presents the control
design approach, and shows how to recursively construct terminal set and
terminal cost from state and input trajectories of previous iterations.
Simulation results show the effectiveness of the proposed control logic.
| Ugo Rosolia and Francesco Borrelli | null | 1609.01387 | null | null |
Q-Learning with Basic Emotions | cs.LG cs.AI stat.ML | Q-learning is a simple and powerful tool in solving dynamic problems where
environments are unknown. It uses a balance of exploration and exploitation to
find an optimal solution to the problem. In this paper, we propose using four
basic emotions: joy, sadness, fear, and anger to influence a Q-learning agent.
Simulations show that the proposed affective agent requires fewer steps to find
the optimal path. We found that once the affective agent finds the optimal
path, the ratio of exploration to exploitation gradually decreases, indicating
a lower total step count in the long run.
| Wilfredo Badoy Jr. and Kardi Teknomo | null | 1609.01468 | null | null |
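
A very loose sketch of the mechanism (our own toy: the paper models four distinct emotions, which we collapse here into a single reward-surprise signal that modulates exploration, purely to illustrate the idea).

```python
# Sketch: tabular Q-learning on a line world where a crude "emotion" scalar
# (reward surprise) modulates the exploration rate epsilon.
import numpy as np

n_states, n_actions, goal = 10, 2, 9
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.95, 0.5
rng = np.random.default_rng(0)

for episode in range(200):
    s = 0
    while s != goal:
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(s + 1, goal) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == goal else -0.01
        td = r + gamma * Q[s2].max() - Q[s, a]
        Q[s, a] += alpha * td
        # "Emotion" as reward surprise: positive TD error shrinks exploration
        # (a loose analogue of joy), negative TD error grows it (fear).
        eps = float(np.clip(eps - 0.01 * np.sign(td), 0.05, 0.9))
        s = s2
print("greedy actions per state:", Q.argmax(axis=1))
```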
Towards Learning and Verifying Invariants of Cyber-Physical Systems by
Code Mutation | cs.SE cs.LG cs.LO | Cyber-physical systems (CPS), which integrate algorithmic control with
physical processes, often consist of physically distributed components
communicating over a network. A malfunctioning or compromised component in such
a CPS can lead to costly consequences, especially in the context of public
infrastructure. In this short paper, we argue for the importance of
constructing invariants (or models) of the physical behaviour exhibited by CPS,
motivated by their applications to the control, monitoring, and attestation of
components. To achieve this despite the inherent complexity of CPS, we propose
a new technique for learning invariants that combines machine learning with
ideas from mutation testing. We present a preliminary study on a water
treatment system that suggests the efficacy of this approach, propose
strategies for establishing confidence in the correctness of invariants, then
summarise some research questions and the steps we are taking to investigate
them.
| Yuqi Chen, Christopher M. Poskitt, Jun Sun | 10.1007/978-3-319-48989-6_10 | 1609.01491 | null | null |
Low-rank Bandits with Latent Mixtures | cs.LG | We study the task of maximizing rewards from recommending items (actions) to
users sequentially interacting with a recommender system. Users are modeled as
latent mixtures of $C$ representative user classes, where each class
specifies a mean reward profile across actions. Both the user features (mixture
distribution over classes) and the item features (mean reward vector per class)
are unknown a priori. The user identity is the only contextual information
available to the learner while interacting. This induces a low-rank structure
on the matrix of expected rewards $r_{a,b}$ from recommending item $a$ to user $b$. The
problem reduces to the well-known linear bandit when either user or item-side
features are perfectly known. In the setting where each user, with its
stochastically sampled taste profile, interacts only for a small number of
sessions, we develop a bandit algorithm for the two-sided uncertainty. It
combines the Robust Tensor Power Method of Anandkumar et al. (2014b) with the
OFUL linear bandit algorithm of Abbasi-Yadkori et al. (2011). We provide the
first rigorous regret analysis of this combination, showing that its regret
after T user interactions is $\tilde O(C\sqrt{BT})$, with B the number of
users. An ingredient towards this result is a novel robustness property of
OFUL, of independent interest.
| Aditya Gopalan, Odalric-Ambrym Maillard and Mohammadi Zaki | null | 1609.01508 | null | null |
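
For context, a minimal OFUL-style linear bandit loop (the standard confidence-ellipsoid algorithm the paper builds on, not its combined method, which first recovers latent item features via the robust tensor power method; the confidence width below is a crude stand-in).

```python
import numpy as np

d, T, lam = 3, 2000, 1.0
rng = np.random.default_rng(0)
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)     # unknown parameter
arms = rng.normal(size=(20, d))              # fixed action set

V, b = lam * np.eye(d), np.zeros(d)          # ridge-regression statistics
for t in range(T):
    Vinv = np.linalg.inv(V)
    theta_hat = Vinv @ b
    beta = 1.0 + np.sqrt(np.log(t + 2))      # crude confidence width
    width = np.sqrt(np.einsum('ij,jk,ik->i', arms, Vinv, arms))
    x = arms[int((arms @ theta_hat + beta * width).argmax())]   # optimism
    reward = x @ theta_star + 0.1 * rng.normal()
    V += np.outer(x, x)
    b += reward * x
print("estimation error:", np.linalg.norm(theta_hat - theta_star))
```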
A Bootstrap Machine Learning Approach to Identify Rare Disease Patients
from Electronic Health Records | cs.LG cs.CL | Rare diseases are very difficult to identify among the large number of other
possible diagnoses. Better availability of patient data and improvement in
machine learning algorithms empower us to tackle this problem computationally.
In this paper, we target one such rare disease - cardiac amyloidosis. We aim to
automate the process of identifying potential cardiac amyloidosis patients with
the help of machine learning algorithms and also learn most predictive factors.
With the help of experienced cardiologists, we prepared a gold standard with 73
positive (cardiac amyloidosis) and 197 negative instances. We achieved a high
average cross-validation F1 score of 0.98 using an ensemble machine learning
classifier. Some of the predictive variables were: Age and Diagnosis of cardiac
arrest, chest pain, congestive heart failure, hypertension, primary open-angle
glaucoma, and shoulder arthritis. Further studies are needed to validate the
accuracy of the system across an entire health system and its generalizability
for other diseases.
| Ravi Garg, Shu Dong, Sanjiv Shah, Siddhartha R Jonnalagadda | null | 1609.01586 | null | null |
Direct Feedback Alignment Provides Learning in Deep Neural Networks | stat.ML cs.LG | Artificial neural networks are most commonly trained with the
back-propagation algorithm, where the gradient for learning is provided by
back-propagating the error, layer by layer, from the output layer to the hidden
layers. A recently discovered method called feedback-alignment shows that the
weights used for propagating the error backward do not have to be symmetric
with the weights used for propagating the activation forward. In fact, random
feedback weights work equally well, because the network learns how to make the
feedback useful. In this work, the feedback alignment principle is used for
training hidden layers more independently from the rest of the network, and
from a zero initial condition. The error is propagated through fixed random
feedback connections directly from the output layer to each hidden layer. This
simple method is able to achieve zero training error even in convolutional
networks and very deep networks, completely without error back-propagation. The
method is a step towards biologically plausible machine learning because the
error signal is almost local, and no symmetric or reciprocal weights are
required. Experiments show that the test performance on MNIST and CIFAR is
almost as good as that obtained with back-propagation for fully connected
networks. If combined with dropout, the method achieves 1.45% error on the
permutation invariant MNIST task.
| Arild N{\o}kland | null | 1609.01596 | null | null |
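
A compact numpy illustration of the mechanism described above (our own toy data and hyperparameters, not the paper's experiments): the output error reaches the hidden layer through a fixed random matrix B rather than the transposed forward weights, and this alone can suffice to learn.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 2))
y = ((X[:, 0] * X[:, 1]) > 0).astype(float)[:, None]   # XOR-like target

n_h = 32
W1 = rng.normal(0, 0.5, (2, n_h));  b1 = np.zeros(n_h)
W2 = rng.normal(0, 0.5, (n_h, 1)); b2 = np.zeros(1)
B = rng.normal(0, 0.5, (1, n_h))    # fixed random feedback, never trained

lr = 0.1
for step in range(2000):
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))
    e = p - y                                  # output error (BCE gradient)
    dh = (e @ B) * (1 - h ** 2)                # DFA: error routed via B, not W2.T
    W2 -= lr * h.T @ e / len(X); b2 -= lr * e.mean(0)
    W1 -= lr * X.T @ dh / len(X); b1 -= lr * dh.mean(0)
print("train accuracy:", ((p > 0.5) == y).mean())
```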
Hierarchical Multiscale Recurrent Neural Networks | cs.LG | Learning both hierarchical and temporal representation has been among the
long-standing challenges of recurrent neural networks. Multiscale recurrent
neural networks have been considered as a promising approach to resolve this
issue, yet there has been a lack of empirical evidence showing that this type
of models can actually capture the temporal dependencies by discovering the
latent hierarchical structure of the sequence. In this paper, we propose a
novel multiscale approach, called the hierarchical multiscale recurrent neural
networks, which can capture the latent hierarchical structure in the sequence
by encoding the temporal dependencies with different timescales using a novel
update mechanism. We show some evidence that our proposed multiscale
architecture can discover underlying hierarchical structure in the sequences
without using explicit boundary information. We evaluate our proposed model on
character-level language modelling and handwriting sequence modelling.
| Junyoung Chung and Sungjin Ahn and Yoshua Bengio | null | 1609.01704 | null | null |
Semantic Video Trailers | cs.LG cs.CV | Query-based video summarization is the task of creating a brief visual
trailer, which captures the parts of the video (or a collection of videos) that
are most relevant to the user-issued query. In this paper, we propose an
unsupervised label propagation approach for this task. Our approach effectively
captures the multimodal semantics of queries and videos using state-of-the-art
deep neural networks and creates a summary that is both semantically coherent
and visually attractive. We describe the theoretical framework of our
graph-based approach and empirically evaluate its effectiveness in creating
relevant and attractive trailers. Finally, we showcase example video trailers
generated by our system.
| Harrie Oosterhuis, Sujith Ravi, Michael Bendersky | null | 1609.01819 | null | null |
Learning Boltzmann Machine with EM-like Method | cs.LG stat.ML | We propose an expectation-maximization-like (EM-like) method to train Boltzmann
machines with unconstrained connectivity. It adopts a Monte Carlo approximation
in the E-step, and in the M-step replaces the intractable likelihood objective
with efficiently computed objectives or directly approximates the gradient of
the likelihood objective. The EM-like method is a modification of alternating
minimization. We prove that the EM-like method is exactly equivalent to
contrastive divergence in the restricted Boltzmann machine if the M-step adopts
a particular approximation. We also propose a new measure to assess the
performance of Boltzmann machines as generative models of data, with
computational complexity $O(Rmn)$. Finally, we demonstrate the performance of
the EM-like method with numerical experiments.
| Jinmeng Song, Chun Yuan | null | 1609.0184 | null | null |
Chaining Bounds for Empirical Risk Minimization | stat.ML cs.LG | This paper extends the standard chaining technique to prove excess risk upper
bounds for empirical risk minimization in random design settings even if the
magnitudes of the noise and the estimates are unbounded. The bound applies to
many loss functions besides the squared loss, and scales only with the
sub-Gaussian or subexponential parameters without further statistical
assumptions such as the bounded kurtosis condition over the hypothesis class. A
detailed analysis is provided for slope constrained and penalized linear least
squares regression in a sub-Gaussian setting, which often yields sample
complexity bounds that are tight up to logarithmic factors.
| G\'abor Bal\'azs, Andr\'as Gy\"orgy, Csaba Szepesv\'ari | null | 1609.01872 | null | null |
Polysemous codes | cs.CV cs.DB cs.IT cs.LG math.IT | This paper considers the problem of approximate nearest neighbor search in
the compressed domain. We introduce polysemous codes, which offer both the
distance estimation quality of product quantization and the efficient
comparison of binary codes with Hamming distance. Their design is inspired by
algorithms introduced in the 90's to construct channel-optimized vector
quantizers. At search time, this dual interpretation accelerates the search.
Most of the indexed vectors are filtered out with the Hamming distance, so that
only a fraction of the vectors need to be ranked with an asymmetric distance
estimator.
The method is complementary with a coarse partitioning of the feature space
such as the inverted multi-index. This is shown by our experiments performed on
several public benchmarks such as the BIGANN dataset comprising one billion
vectors, for which we report state-of-the-art results for query times below
0.3\,millisecond per core. Last but not least, our approach allows the
approximate computation of the k-NN graph associated with the Yahoo Flickr
Creative Commons 100M, described by CNN image descriptors, in less than 8 hours
on a single machine.
| Matthijs Douze, Herv\'e J\'egou and Florent Perronnin | null | 1609.01882 | null | null |
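
A schematic two-stage search in the spirit of the paper (an assumed toy, not the authors' implementation: real polysemous codes are learned product-quantizer indexes that double as binary codes, whereas here we binarize with plain sign bits): filter the database with the Hamming distance, then re-rank the survivors with an exact float distance.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 32, 10000
db = rng.normal(size=(n, d)).astype(np.float32)   # database vectors
q = rng.normal(size=d).astype(np.float32)         # query

def pack(x):
    # Stand-in binarization: sign bits packed into bytes.
    return np.packbits((x > 0).astype(np.uint8), axis=-1)

codes, qcode = pack(db), pack(q)
ham = np.unpackbits(codes ^ qcode, axis=-1).sum(axis=-1)  # Hamming distances

keep = np.argsort(ham)[: n // 50]            # stage 1: cheap binary filter
exact = ((db[keep] - q) ** 2).sum(axis=1)    # stage 2: exact re-ranking
print("approximate top-5 ids:", keep[np.argsort(exact)[:5]])
```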
DAiSEE: Towards User Engagement Recognition in the Wild | cs.CV cs.LG | We introduce DAiSEE, the first multi-label video classification dataset
comprising 9068 video snippets captured from 112 users for recognizing the
user affective states of boredom, confusion, engagement, and frustration in the
wild. The dataset has four levels of labels namely - very low, low, high, and
very high for each of the affective states, which are crowd annotated and
correlated with a gold standard annotation created using a team of expert
psychologists. We have also established benchmark results on this dataset using
state-of-the-art video classification methods that are available today. We
believe that DAiSEE will provide the research community with challenges in
feature extraction, context-based inference, and development of suitable
machine learning methods for related tasks, thus providing a springboard for
further research. The dataset is available for download at
https://people.iith.ac.in/vineethnb/resources/daisee/index.html.
| Abhay Gupta, Arjun D'Cunha, Kamal Awasthi, Vineeth Balasubramanian | null | 1609.01885 | null | null |
Doubly Stochastic Neighbor Embedding on Spheres | cs.LG | Stochastic Neighbor Embedding (SNE) methods minimize the divergence between
the similarity matrix of a high-dimensional data set and its counterpart from a
low-dimensional embedding, leading to widely applied tools for data
visualization. Despite their popularity, the current SNE methods experience a
crowding problem when the data include highly imbalanced similarities. This
implies that the data points with higher total similarity tend to get crowded
around the display center. To solve this problem, we introduce a fast
normalization method and normalize the similarity matrix to be doubly
stochastic such that all the data points have equal total similarities.
Furthermore, we show empirically and theoretically that the doubly
stochasticity constraint often leads to embeddings which are approximately
spherical. This suggests replacing a flat space with spheres as the embedding
space. The spherical embedding eliminates the discrepancy between the center
and the periphery in visualization, which efficiently resolves the crowding
problem. We compared the proposed method (DOSNES) with the state-of-the-art SNE
method on three real-world datasets and the results clearly indicate that our
method is more favorable in terms of visualization quality.
| Yao Lu, Jukka Corander, Zhirong Yang | null | 1609.01977 | null | null |
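
A minimal sketch of the doubly stochastic normalization step (our own illustration via standard Sinkhorn-style iterations; the paper's own fast normalization method may differ): alternately normalize rows and columns of a symmetric similarity matrix until every point has equal total similarity.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((6, 6))
A = (A + A.T) / 2                   # symmetric similarities
np.fill_diagonal(A, 0.0)

P = A.copy()
for _ in range(200):                # Sinkhorn-style iterations
    P /= P.sum(axis=1, keepdims=True)   # normalize rows
    P /= P.sum(axis=0, keepdims=True)   # normalize columns
print("row sums:", np.round(P.sum(axis=1), 4))
print("col sums:", np.round(P.sum(axis=0), 4))
```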
Human Body Orientation Estimation using Convolutional Neural Network | cs.RO cs.CV cs.LG | Personal robots are expected to interact with the user by recognizing the
user's face. However, in most of the service robot applications, the user needs
to move himself/herself to allow the robot to see him/her face to face. To
overcome such limitations, a method for estimating human body orientation is
required. Previous studies used various components such as feature extractors
and classification models to classify the orientation, which resulted in low
performance. For a more robust and accurate approach, we propose a lightweight
convolutional neural network, an end-to-end system, for estimating human body
orientation. Our body orientation estimation model achieved 81.58%
and 94% accuracy with the benchmark dataset and our own dataset respectively.
The proposed method can be used in a wide range of service robot applications
which depend on the ability to estimate human body orientation. To show its
usefulness in service robot applications, we designed a simple robot
application which allows the robot to move towards the user's frontal plane.
With this, we demonstrated an improved face detection rate.
| Jinyoung Choi, Beom-Jin Lee, and Byoung-Tak Zhang | null | 1609.01984 | null | null |
Random matrices meet machine learning: a large dimensional analysis of
LS-SVM | stat.ML cs.LG | This article proposes a performance analysis of kernel least squares support
vector machines (LS-SVMs) based on a random matrix approach, in the regime
where both the dimension of data $p$ and their number $n$ grow large at the
same rate. Under a two-class Gaussian mixture model for the input data, we
prove that the LS-SVM decision function is asymptotically normal with means and
covariances shown to depend explicitly on the derivatives of the kernel
function. This provides improved understanding along with new insights into the
internal workings of SVM-type methods for large datasets.
| Zhenyu Liao, Romain Couillet | null | 1609.0202 | null | null |
Deep Markov Random Field for Image Modeling | cs.CV cs.AI cs.LG | Markov Random Fields (MRFs), a formulation widely used in generative image
modeling, have long been plagued by the lack of expressive power. This issue is
primarily due to the fact that conventional MRF formulations tend to use
simplistic factors to capture local patterns. In this paper, we move beyond
such limitations, and propose a novel MRF model that uses fully-connected
neurons to express the complex interactions among pixels. Through theoretical
analysis, we reveal an inherent connection between this model and recurrent
neural networks, and thereon derive an approximated feed-forward network that
couples multiple RNNs along opposite directions. This formulation combines the
expressive power of deep neural networks and the cyclic dependency structure of
MRF in a unified model, bringing the modeling capability to a new level. The
feed-forward approximation also allows it to be efficiently learned from data.
Experimental results on a variety of low-level vision tasks show notable
improvements over the state of the art.
| Zhirong Wu, Dahua Lin, Xiaoou Tang | null | 1609.02036 | null | null |
An improved uncertainty decoding scheme with weighted samples for
DNN-HMM hybrid systems | cs.LG cs.CL cs.SD | In this paper, we advance a recently-proposed uncertainty decoding scheme for
DNN-HMM (deep neural network - hidden Markov model) hybrid systems. This
numerical sampling concept averages DNN outputs produced by a finite set of
feature samples (drawn from a probabilistic distortion model) to approximate
the posterior likelihoods of the context-dependent HMM states. As main
innovation, we propose a weighted DNN-output averaging based on a minimum
classification error criterion and apply it to a probabilistic distortion model
for spatial diffuseness features. The experimental evaluation is performed on
the 8-channel REVERB Challenge task using a DNN-HMM hybrid system with
multichannel front-end signal enhancement. We show that the recognition
accuracy of the DNN-HMM hybrid system improves by incorporating uncertainty
decoding based on random sampling and that the proposed weighted DNN-output
averaging further reduces the word error rate scores.
| Christian Huemmer, Ram\'on Fern\'andez Astudillo and Walter Kellermann | null | 1609.02082 | null | null |
Ask the GRU: Multi-Task Learning for Deep Text Recommendations | stat.ML cs.CL cs.LG | In a variety of application domains the content to be recommended to users is
associated with text. This includes research papers, movies with associated
plot summaries, news articles, blog posts, etc. Recommendation approaches based
on latent factor models can be extended naturally to leverage text by employing
an explicit mapping from text to factors. This enables recommendations for new,
unseen content, and may generalize better, since the factors for all items are
produced by a compactly-parametrized model. Previous work has used topic models
or averages of word embeddings for this mapping. In this paper we present a
method leveraging deep recurrent neural networks to encode the text sequence
into a latent vector, specifically gated recurrent units (GRUs) trained
end-to-end on the collaborative filtering task. For the task of scientific
paper recommendation, this yields models with significantly higher accuracy. In
cold-start scenarios, we beat the previous state-of-the-art, all of which
ignore word order. Performance is further improved by multi-task learning,
where the text encoder network is trained for a combination of content
recommendation and item metadata prediction. This regularizes the collaborative
filtering model, ameliorating the problem of sparsity of the observed rating
matrix.
| Trapit Bansal, David Belanger, Andrew McCallum | 10.1145/2959100.2959180 | 1609.02116 | null | null |
UberNet: Training a `Universal' Convolutional Neural Network for Low-,
Mid-, and High-Level Vision using Diverse Datasets and Limited Memory | cs.CV cs.AI cs.LG | In this work we introduce a convolutional neural network (CNN) that jointly
handles low-, mid-, and high-level vision tasks in a unified architecture that
is trained end-to-end. Such a universal network can act like a `swiss knife'
for vision tasks; we call this architecture an UberNet to indicate its
overarching nature.
We address two main technical challenges that emerge when broadening the
range of tasks handled by a single CNN: (i) training a deep architecture while
relying on diverse training sets and (ii) training many (potentially unlimited)
tasks with a limited memory budget. Properly addressing these two problems
allows us to train accurate predictors for a host of tasks, without
compromising accuracy.
Through these advances we train in an end-to-end manner a CNN that
simultaneously addresses (a) boundary detection (b) normal estimation (c)
saliency estimation (d) semantic segmentation (e) human part segmentation (f)
semantic boundary detection, (g) region proposal generation and object
detection. We obtain competitive performance while jointly addressing all of
these tasks in 0.7 seconds per frame on a single GPU. A demonstration of this
system can be found at http://cvn.ecp.fr/ubernet/.
| Iasonas Kokkinos | null | 1609.02132 | null | null |
Discrete Variational Autoencoders | stat.ML cs.LG | Probabilistic models with discrete latent variables naturally capture
datasets composed of discrete classes. However, they are difficult to train
efficiently, since backpropagation through discrete variables is generally not
possible. We present a novel method to train a class of probabilistic models
with discrete latent variables using the variational autoencoder framework,
including backpropagation through the discrete latent variables. The associated
class of probabilistic models comprises an undirected discrete component and a
directed hierarchical continuous component. The discrete component captures the
distribution over the disconnected smooth manifolds induced by the continuous
component. As a result, this class of models efficiently learns both the class
of objects in an image, and their specific realization in pixels, from
unsupervised data, and outperforms state-of-the-art methods on the
permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
| Jason Tyler Rolfe | null | 1609.022 | null | null |
Breaking the Bandwidth Barrier: Geometrical Adaptive Entropy Estimation | cs.IT cs.LG math.IT stat.ML | Estimators of information theoretic measures such as entropy and mutual
information are a basic workhorse for many downstream applications in modern
data science. State of the art approaches have been either geometric (nearest
neighbor (NN) based) or kernel based (with a globally chosen bandwidth). In
this paper, we combine both these approaches to design new estimators of
entropy and mutual information that outperform state of the art methods. Our
estimator uses local bandwidth choices of $k$-NN distances with a finite $k$,
independent of the sample size. Such a local and data dependent choice improves
performance in practice, but the bandwidth is vanishing at a fast rate, leading
to a non-vanishing bias. We show that the asymptotic bias of the proposed
estimator is universal; it is independent of the underlying distribution.
Hence, it can be pre-computed and subtracted from the estimate. As a byproduct,
we obtain a unified way of obtaining both kernel and NN estimators. The
corresponding theoretical contribution relating the asymptotic geometry of
nearest neighbors to order statistics is of independent mathematical interest.
| Weihao Gao and Sewoong Oh and Pramod Viswanath | null | 1609.02208 | null | null |
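
For background, the classical $k$-NN (Kozachenko-Leonenko) entropy estimator that this line of work builds on (a standard baseline, not the paper's improved estimator, which uses local bandwidth choices and a universal bias correction): $\hat H = \psi(N) - \psi(k) + \log c_d + \frac{d}{N}\sum_i \log r_{i,k}$, with $r_{i,k}$ the distance to the $k$-th neighbor and $c_d$ the unit-ball volume.

```python
import numpy as np
from scipy.special import digamma, gammaln

def knn_entropy(X, k=3):
    N, d = X.shape
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # exclude self-distances
    r = np.sqrt(np.sort(d2, axis=1)[:, k - 1])   # distance to k-th neighbor
    log_cd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # log unit-ball volume
    return digamma(N) - digamma(k) + log_cd + d * np.mean(np.log(r))

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 1))
true_H = 0.5 * np.log(2 * np.pi * np.e)          # entropy of N(0,1) in nats
print(f"estimate {knn_entropy(X):.3f} vs true {true_H:.3f}")
```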
Fitted Learning: Models with Awareness of their Limits | cs.AI cs.LG cs.NE | Though deep learning has pushed the boundaries of classification forward, in
recent years hints of the limits of standard classification have begun to
emerge. Problems such as fooling, adding new classes over time, and the need to
retrain learning models only for small changes to the original problem all
point to a potential shortcoming in the classic classification regime, where a
comprehensive a priori knowledge of the possible classes or concepts is
critical. Without such knowledge, classifiers misjudge the limits of their
knowledge and overgeneralization therefore becomes a serious obstacle to
consistent performance. In response to these challenges, this paper extends the
classic regime by reframing classification instead with the assumption that
concepts present in the training set are only a sample of the hypothetical
final set of concepts. To bring learning models into this new paradigm, a novel
elaboration of standard architectures called the competitive overcomplete
output layer (COOL) neural network is introduced. Experiments demonstrate the
effectiveness of COOL by applying it to fooling, separable concept learning,
one-class neural networks, and standard classification benchmarks. The results
suggest that, unlike conventional classifiers, the amount of generalization in
COOL networks can be tuned to match the problem.
| Navid Kardan, Kenneth O. Stanley | null | 1609.02226 | null | null |
Learning to learn with backpropagation of Hebbian plasticity | cs.NE cs.AI cs.LG q-bio.NC | Hebbian plasticity is a powerful principle that allows biological brains to
learn from their lifetime experience. By contrast, artificial neural networks
trained with backpropagation generally have fixed connection weights that do
not change once training is complete. While recent methods can endow neural
networks with long-term memories, Hebbian plasticity is currently not amenable
to gradient descent. Here we derive analytical expressions for activity
gradients in neural networks with Hebbian plastic connections. Using these
expressions, we can use backpropagation to train not just the baseline weights
of the connections, but also their plasticity. As a result, the networks "learn
how to learn" in order to solve the problem at hand: the trained networks
automatically perform fast learning of unpredictable environmental features
during their lifetime, expanding the range of solvable problems. We test the
algorithm on various on-line learning tasks, including pattern completion,
one-shot learning, and reversal learning. The algorithm successfully learns how
to learn the relevant associations from one-shot instruction, and fine-tunes
the temporal dynamics of plasticity to allow for continual learning in response
to changing environmental parameters. We conclude that backpropagation of
Hebbian plasticity offers a powerful model for lifelong learning.
| Thomas Miconi | null | 1609.02228 | null | null |
Improved Optimistic Mirror Descent for Sparsity and Curvature | cs.LG | Online Convex Optimization plays a key role in large scale machine learning.
Early approaches to this problem were conservative, in which the main focus was
protection against the worst case scenario. But recently several algorithms
have been developed for tightening the regret bounds in easy data instances
such as sparsity, predictable sequences, and curved losses. In this work we
unify some of these existing techniques to obtain new update rules for the
cases when these easy instances occur together. First we analyse an adaptive
and optimistic update rule which achieves tighter regret bound when the loss
sequence is sparse and predictable. Then we explain an update rule that
dynamically adapts to the curvature of the loss function and utilizes the
predictable nature of the loss sequence as well. Finally we extend these
results to composite losses.
| Parameswaran Kamalaruban | null | 1609.02383 | null | null |
Non-Backtracking Spectrum of Degree-Corrected Stochastic Block Models | math.PR cs.LG cs.SI stat.ML | Motivated by community detection, we characterise the spectrum of the
non-backtracking matrix $B$ in the Degree-Corrected Stochastic Block Model.
Specifically, we consider a random graph on $n$ vertices partitioned into two
equal-sized clusters. The vertices have i.i.d. weights $\{ \phi_u \}_{u=1}^n$
with second moment $\Phi^{(2)}$. The intra-cluster connection probability for
vertices $u$ and $v$ is $\frac{\phi_u \phi_v}{n}a$ and the inter-cluster
connection probability is $\frac{\phi_u \phi_v}{n}b$.
We show that with high probability, the following holds: The leading
eigenvalue of the non-backtracking matrix $B$ is asymptotic to $\rho =
\frac{a+b}{2} \Phi^{(2)}$. The second eigenvalue is asymptotic to $\mu_2 =
\frac{a-b}{2} \Phi^{(2)}$ when $\mu_2^2 > \rho$, but asymptotically bounded by
$\sqrt{\rho}$ when $\mu_2^2 \leq \rho$. All the remaining eigenvalues are
asymptotically bounded by $\sqrt{\rho}$. As a result, a clustering
positively-correlated with the true communities can be obtained based on the
second eigenvector of $B$ in the regime where $\mu_2^2 > \rho.$
In a previous work we obtained that detection is impossible when $\mu_2^2 <
\rho,$ meaning that there occurs a phase-transition in the sparse regime of the
Degree-Corrected Stochastic Block Model.
As a corollary, we obtain that Degree-Corrected Erd\H{o}s-R\'enyi graphs
asymptotically satisfy the graph Riemann hypothesis, a quasi-Ramanujan
property.
A by-product of our proof is a weak law of large numbers for
local-functionals on Degree-Corrected Stochastic Block Models, which could be
of independent interest.
| Lennart Gulikers, Marc Lelarge, Laurent Massouli\'e | null | 1609.02487 | null | null |
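
A small self-contained sketch (our own toy graph, not the paper's analysis): build the non-backtracking matrix $B$, indexed by directed edges with $B_{(u \to v),(v \to w)} = 1$ iff $w \neq u$, and inspect its leading eigenvalues numerically.

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]   # a toy graph
darts = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
idx = {e: i for i, e in enumerate(darts)}

B = np.zeros((len(darts), len(darts)))
for (u, v) in darts:
    for (a, b) in darts:
        if a == v and b != u:     # continue the walk, but never backtrack
            B[idx[(u, v)], idx[(a, b)]] = 1.0

eigs = np.linalg.eigvals(B)
print("leading |eigenvalues|:", np.round(np.sort(np.abs(eigs))[::-1][:4], 3))
```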
Fashion DNA: Merging Content and Sales Data for Recommendation and
Article Mapping | cs.IR cs.LG | We present a method to determine Fashion DNA, coordinate vectors locating
fashion items in an abstract space. Our approach is based on a deep neural
network architecture that ingests curated article information such as tags and
images, and is trained to predict sales for a large set of frequent customers.
In the process, a dual space of customer style preferences naturally arises.
Interpretation of the metric of these spaces is straightforward: The product of
Fashion DNA and customer style vectors yields the forecast purchase likelihood
for the customer-item pair, while the angle between Fashion DNA vectors is a
measure of item similarity. Importantly, our models are able to generate
unbiased purchase probabilities for fashion items based solely on article
information, even in absence of sales data, thus circumventing the "cold-start
problem" of collaborative recommendation approaches. Likewise, it generalizes
easily and reliably to customers outside the training set. We experiment with
Fashion DNA models based on visual and/or tag item data, evaluate their
recommendation power, and discuss the resulting article similarities.
| Christian Bracher, Sebastian Heinz and Roland Vollgraf | null | 1609.02489 | null | null |
Functorial Hierarchical Clustering with Overlaps | cs.LG stat.ML | This work draws inspiration from three important sources of research on
dissimilarity-based clustering and intertwines those three threads into a
consistent principled functorial theory of clustering. Those three are the
overlapping clustering of Jardine and Sibson, the functorial approach of
Carlsson and M\'{e}moli to partition-based clustering, and the Isbell/Dress
school's study of injective envelopes. Carlsson and M\'{e}moli introduce the
idea of viewing clustering methods as functors from a category of metric spaces
to a category of clusters, with functoriality subsuming many desirable
properties. Our first series of results extends their theory of functorial
clustering schemes to methods that allow overlapping clusters in the spirit of
Jardine and Sibson. This obviates some of the unpleasant effects of chaining
that occur, for example with single-linkage clustering. We prove an equivalence
between these general overlapping clustering functors and projections of weight
spaces to what we term clustering domains, by focusing on the order structure
determined by the morphisms. As a specific application of this machinery, we
are able to prove that there are no functorial projections to cut metrics, or
even to tree metrics. Finally, although we focus less on the construction of
clustering methods (clustering domains) derived from injective envelopes, we
lay out some preliminary results, that hopefully will give a feel for how the
third leg of the stool comes into play.
| Jared Culbertson, Dan P. Guralnik, Peter F. Stiller | 10.1016/j.dam.2017.10.015 | 1609.02513 | null | null |
DiSMEC - Distributed Sparse Machines for Extreme Multi-label
Classification | stat.ML cs.LG | Extreme multi-label classification refers to supervised multi-label learning
involving hundreds of thousands or even millions of labels. Datasets in extreme
classification exhibit a power-law label distribution, i.e. a large fraction of
labels have very few positive instances in the data distribution. Most
state-of-the-art approaches for extreme multi-label classification attempt to
capture correlation among labels by embedding the label matrix to a
low-dimensional linear sub-space. However, in the presence of power-law
distributed extremely large and diverse label spaces, structural assumptions
such as low rank can be easily violated.
In this work, we present DiSMEC, which is a large-scale distributed framework
for learning one-versus-rest linear classifiers coupled with explicit capacity
control to control model size. Unlike most state-of-the-art methods, DiSMEC
does not make any low rank assumptions on the label matrix. Using double layer
of parallelization, DiSMEC can learn classifiers for datasets consisting of
hundreds of thousands of labels within a few hours. The explicit capacity
control mechanism filters out spurious parameters, which keeps the model
compact in size without losing prediction accuracy. We conduct an extensive
empirical evaluation on publicly available real-world datasets consisting of
up to 670,000 labels. We
compare DiSMEC with recent state-of-the-art approaches, including - SLEEC which
is a leading approach for learning sparse local embeddings, and FastXML which
is a tree-based approach optimizing ranking based loss function. On some of the
datasets, DiSMEC significantly boosts prediction accuracy - 10% better than
SLEEC and 15% better than FastXML, in absolute terms.
| Rohit Babbar and Bernhard Shoelkopf | null | 1609.02521 | null | null |
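
A schematic single-machine sketch of the core recipe (an assumed simplification, not DiSMEC itself, which distributes training over many cores; the pruning threshold `delta` below is a placeholder): train an independent linear classifier per label, then prune near-zero weights as explicit capacity control.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.datasets import make_multilabel_classification

X, Y = make_multilabel_classification(n_samples=300, n_features=50,
                                      n_classes=8, random_state=0)
delta = 0.01                                   # assumed pruning threshold
models = []
for j in range(Y.shape[1]):
    yj = Y[:, j]
    if yj.min() == yj.max():                   # skip labels with no class split
        continue
    clf = LinearSVC(C=1.0, max_iter=5000).fit(X, yj)  # one binary problem per label
    w = clf.coef_.ravel().copy()
    w[np.abs(w) < delta] = 0.0                 # capacity control: drop tiny weights
    models.append((w, clf.intercept_[0]))

nnz = sum(int((w != 0).sum()) for w, _ in models)
print(f"nonzero weights after pruning: {nnz} / {len(models) * X.shape[1]}")
```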
Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical
Models | quant-ph cs.LG | Mainstream machine-learning techniques such as deep learning and
probabilistic programming rely heavily on sampling from generally intractable
probability distributions. There is increasing interest in the potential
advantages of using quantum computing technologies as sampling engines to speed
up these tasks or to make them more effective. However, some pressing
challenges in state-of-the-art quantum annealers have to be overcome before we
can assess their actual performance. The sparse connectivity, resulting from
the local interaction between quantum bits in physical hardware
implementations, is considered the most severe limitation to the quality of
constructing powerful generative unsupervised machine-learning models. Here we
use embedding techniques to add redundancy to data sets, allowing us to
increase the modeling capacity of quantum annealers. We illustrate our findings
by training hardware-embedded graphical models on a binarized data set of
handwritten digits and two synthetic data sets in experiments with up to 940
quantum bits. Our model can be trained in quantum hardware without full
knowledge of the effective parameters specifying the corresponding quantum
Gibbs-like distribution; therefore, this approach avoids the need to infer the
effective temperature at each iteration, speeding up learning; it also
mitigates the effect of noise in the control parameters, making it robust to
deviations from the reference Gibbs distribution. Our approach demonstrates the
feasibility of using quantum annealers for implementing generative models, and
it provides a suitable framework for benchmarking these quantum technologies on
machine-learning-related tasks.
| Marcello Benedetti, John Realpe-G\'omez, Rupak Biswas, Alejandro
Perdomo-Ortiz | 10.1103/PhysRevX.7.041052 | 1609.02542 | null | null |
On Sequential Elimination Algorithms for Best-Arm Identification in
Multi-Armed Bandits | stat.ML cs.LG | We consider the best-arm identification problem in multi-armed bandits, which
focuses purely on exploration. A player is given a fixed budget to explore a
finite set of arms, and the rewards of each arm are drawn independently from a
fixed, unknown distribution. The player aims to identify the arm with the
largest expected reward. We propose a general framework to unify sequential
elimination algorithms, where the arms are dismissed iteratively until a unique
arm is left. Our analysis reveals a novel performance measure expressed in
terms of the sampling mechanism and number of eliminated arms at each round.
Based on this result, we develop an algorithm that divides the budget according
to a nonlinear function of remaining arms at each round. We provide theoretical
guarantees for the algorithm, characterizing the suitable nonlinearity for
different problem environments described by the number of competitive arms.
Matching the theoretical results, our experiments show that the nonlinear
algorithm outperforms the state-of-the-art. We finally study the
side-observation model, where pulling an arm reveals the rewards of its related
arms, and we establish improved theoretical guarantees in the pure-exploration
setting.
| Shahin Shahrampour, Mohammad Noshad, Vahid Tarokh | 10.1109/TSP.2017.2706192 | 1609.02606 | null | null |
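
A minimal fixed-budget elimination sketch for context (the classical uniform-split scheme that the paper's framework generalizes, not its nonlinear-allocation algorithm): split the budget across rounds, sample surviving arms equally, and drop the empirically worst arm each round until one remains.

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.1, 0.3, 0.5, 0.52, 0.9])      # unknown to the player
budget, K = 2000, len(means)

alive = list(range(K))
counts, sums = np.zeros(K), np.zeros(K)
per_round = budget // (K - 1)
for _ in range(K - 1):
    pulls = per_round // len(alive)               # equal split among survivors
    for a in alive:
        sums[a] += rng.binomial(1, means[a], size=pulls).sum()
        counts[a] += pulls
    emp = sums[np.array(alive)] / counts[np.array(alive)]
    alive.pop(int(emp.argmin()))                  # eliminate the worst arm
print("identified best arm:", alive[0], "(true best: 4)")
```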
Generating Videos with Scene Dynamics | cs.CV cs.GR cs.LG | We capitalize on large amounts of unlabeled video in order to learn a model
of scene dynamics for both video recognition tasks (e.g. action classification)
and video generation tasks (e.g. future prediction). We propose a generative
adversarial network for video with a spatio-temporal convolutional architecture
that untangles the scene's foreground from the background. Experiments suggest
this model can generate tiny videos up to a second at full frame rate better
than simple baselines, and we show its utility at predicting plausible futures
of static images. Moreover, experiments and visualizations show the model
internally learns useful features for recognizing actions with minimal
supervision, suggesting scene dynamics are a promising signal for
representation learning. We believe generative video models can impact many
applications in video understanding and simulation.
| Carl Vondrick and Hamed Pirsiavash and Antonio Torralba | null | 1609.02612 | null | null |
Why is Differential Evolution Better than Grid Search for Tuning Defect
Predictors? | cs.SE cs.LG stat.ML | Context: One of the black arts of data mining is learning the magic
parameters which control the learners. In software analytics, at least for
defect prediction, several methods, like grid search and differential evolution
(DE), have been proposed to learn these parameters and have been shown to
improve the performance scores of learners.
Objective: We want to evaluate which method can find better parameters in
terms of performance score and runtime cost.
Methods: This paper compares grid search to differential evolution, which is
an evolutionary algorithm that makes extensive use of stochastic jumps around
the search space.
Results: We find that the seemingly complete approach of grid search does no
better, and sometimes worse, than the stochastic search. When repeated 20 times
to check for conclusion validity, DE was over 210 times faster than grid search
to tune Random Forests on 17 testing data sets with F-Measure
Conclusions: These results are puzzling: why is a quick, partial search just
as effective as a much slower and more extensive search? To answer
that question, we turned to the theoretical optimization literature. Bergstra
and Bengio conjecture that grid search is not more effective than more
randomized searchers if the underlying search space is inherently low
dimensional. This is significant since recent results show that defect
prediction exhibits very low intrinsic dimensionality-- an observation that
explains why a fast method like DE may work as well as a seemingly more
thorough grid search. This suggests, as a future research direction, that it
might be possible to peek at data sets before doing any optimization in order
to match the optimization algorithm to the problem at hand.
| Wei Fu and Vivek Nair and Tim Menzies | null | 1609.02613 | null | null |
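
A tiny generic differential-evolution loop for illustration (our own setup: the parameter ranges and the F/CR constants below are assumptions, not the paper's exact configuration), tuning two Random Forest hyperparameters by cross-validated score.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
rng = np.random.default_rng(0)
lo = np.array([10.0, 2.0])     # assumed bounds: n_estimators, min_samples_leaf
hi = np.array([100.0, 20.0])

def score(v):
    clf = RandomForestClassifier(n_estimators=int(v[0]),
                                 min_samples_leaf=int(v[1]), random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

pop = lo + rng.random((10, 2)) * (hi - lo)
fit = np.array([score(v) for v in pop])
for _ in range(5):                                      # a few DE generations
    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        trial = np.clip(a + 0.8 * (b - c), lo, hi)      # mutation (F = 0.8)
        mask = rng.random(2) < 0.9                      # crossover (CR = 0.9)
        trial = np.where(mask, trial, pop[i])
        s = score(trial)
        if s > fit[i]:                                  # greedy selection
            pop[i], fit[i] = trial, s
print("best params:", pop[fit.argmax()].round(1), "score:", fit.max().round(3))
```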
Distributed Processing of Biosignal-Database for Emotion Recognition
with Mahout | stat.ML cs.LG | This paper investigates the use of distributed processing on the problem of
emotion recognition from physiological sensors using a popular machine learning
library on distributed mode. Specifically, we run a random forests classifier
on the biosignal-data, which have been pre-processed to form exclusive groups
in an unsupervised fashion, on a Cloudera cluster using Mahout. The use of
distributed processing significantly reduces the time required for the offline
training of the classifier, enabling processing of large physiological datasets
through many iterations.
| Varvara Kollia, Oguz H. Elibol | null | 1609.02631 | null | null |
Machine Learning with Guarantees using Descriptive Complexity and SMT
Solvers | cs.LG cs.LO | Machine learning is a thriving part of computer science. There are many
efficient approaches to machine learning that do not provide strong theoretical
guarantees, and a beautiful general learning theory. Unfortunately, machine
learning approaches that give strong theoretical guarantees have not been
efficient enough to be applicable. In this paper we introduce a logical
approach to machine learning. Models are represented by tuples of logical
formulas and inputs and outputs are logical structures. We present our
framework together with several applications where we evaluate it using SAT and
SMT solvers. We argue that this approach to machine learning is particularly
suited to bridge the gap between efficiency and theoretical soundness. We
exploit results from descriptive complexity theory to prove strong theoretical
guarantees for our approach. To show its applicability, we present experimental
results, including learning complexity-theoretic reduction rules for board
games. We also explain how neural networks fit into our framework, although the
current implementation does not scale to provide guarantees for real-world
neural networks.
| Charles Jordan and {\L}ukasz Kaiser | null | 1609.02664 | null | null |
Identifying Topology of Power Distribution Networks Based on Smart Meter
Data | cs.SY cs.LG | In a power distribution network, the network topology information is
essential for efficient operation of the network. At the low-voltage level,
accurate network connectivity information is often unavailable due to
unreported changes that happen from time to time. In this paper, we propose a
novel data-driven approach to identify the underlying network topology,
including the load phase connectivity, from time series of energy measurements.
The proposed method involves the application of Principal Component Analysis
(PCA) and its graph-theoretic interpretation to infer the topology from smart
meter energy measurements. The method is demonstrated through simulation on
randomly generated networks and also on IEEE recognized Roy Billinton
distribution test system.
| Jayadev P Satya and Nirav Bhatt and Ramkrishna Pasumarthy and Aravind
Rajeswaran | null | 1609.02678 | null | null |
Automatic Selection of Stochastic Watershed Hierarchies | cs.CV cs.LG | The segmentation, seen as the association of a partition with an image, is a
difficult task. It can be decomposed in two steps: at first, a family of
contours associated with a series of nested partitions (or hierarchy) is
created and organized, then pertinent contours are extracted. A coarser
partition is obtained by merging adjacent regions of a finer partition. The
strength of a contour is then measured by the level of the hierarchy for which
its two adjacent regions merge. We present an automatic segmentation strategy
using a wide range of stochastic watershed hierarchies. For a given set of
homogeneous images, our approach selects automatically the best hierarchy and
cut level to perform image simplification given an evaluation score.
Experimental results illustrate the advantages of our approach on several
real-life images datasets.
| Amin Fehri (CMM), Santiago Velasco-Forero (CMM), Fernand Meyer (CMM) | null | 1609.02715 | null | null |
Detecting Singleton Review Spammers Using Semantic Similarity | cs.CL cs.LG | Online reviews have increasingly become a very important resource for
consumers when making purchases. However, it is becoming more and more difficult
for people to make well-informed buying decisions without being deceived by
fake reviews. Prior works on the opinion spam problem mostly considered
classifying fake reviews using behavioral user patterns. They focused on
prolific users who write more than a couple of reviews, discarding one-time
reviewers. The number of singleton reviewers however is expected to be high for
many review websites. While behavioral patterns are effective when dealing with
elite users, for one-time reviewers, the review text needs to be exploited. In
this paper we tackle the problem of detecting fake reviews written by the same
person using multiple names, posting each review under a different name. We
propose two methods to detect similar reviews and show the results generally
outperform the vectorial similarity measures used in prior works. The first
method extends the semantic similarity between words to the reviews level. The
second method is based on topic modeling and exploits the similarity of the
reviews topic distributions using two models: bag-of-words and
bag-of-opinion-phrases. The experiments were conducted on reviews from three
different datasets: Yelp (57K reviews), Trustpilot (9K reviews) and Ott dataset
(800 reviews).
| Vlad Sandulescu, Martin Ester | 10.1145/2740908.2742570 | 1609.02727 | null | null |
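A rough sketch of the topic-distribution comparison described in the abstract
above (the toy reviews, two-topic model, and threshold are illustrative
assumptions, not values from the paper): reviews by the same author under
different names should have unusually close topic distributions.

```python
# Flag review pairs whose LDA topic distributions are suspiciously similar.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from scipy.spatial.distance import jensenshannon

reviews = [                                    # hypothetical toy reviews
    "great room friendly staff clean hotel",
    "clean hotel great room staff friendly",
    "terrible food slow service never again",
]
X = CountVectorizer().fit_transform(reviews)
theta = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(X)

threshold = 0.1                                # tuning knob, not from the paper
for i in range(len(reviews)):
    for j in range(i + 1, len(reviews)):
        if jensenshannon(theta[i], theta[j]) < threshold:
            print(f"reviews {i} and {j} may share an author")
```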
Predicting the future relevance of research institutions - The winning
solution of the KDD Cup 2016 | cs.LG cs.DL cs.SI physics.soc-ph | The world's collective knowledge is evolving through research and new
scientific discoveries. It is becoming increasingly difficult to objectively
rank the impact research institutes have on global advancements. However,
since funding, governmental support, and the quality of staff and students all
mirror the projected quality of an institution, it becomes essential to measure
an affiliation's rating in a transparent and widely accepted way. We propose and
investigate several methods to rank affiliations based on the number of their
accepted papers at future academic conferences. We carry out our investigation
using publicly available datasets such as the Microsoft Academic Graph, a
heterogeneous graph which contains various information about academic papers.
We analyze several models, starting with a simple probabilities-based method
and then gradually expand our training dataset, engineer many more features and
use mixed models and gradient boosted decision trees models to improve our
predictions.
| Vlad Sandulescu, Mihai Chiru | null | 1609.02728 | null | null |
A Hierarchical Model of Reviews for Aspect-based Sentiment Analysis | cs.CL cs.LG | Opinion mining from customer reviews has become pervasive in recent years.
Sentences in reviews, however, are usually classified independently, even
though they form part of a review's argumentative structure. Intuitively,
sentences in a review build and elaborate upon each other; knowledge of the
review structure and sentential context should thus inform the classification
of each sentence. We demonstrate this hypothesis for the task of aspect-based
sentiment analysis by modeling the interdependencies of sentences in a review
with a hierarchical bidirectional LSTM. We show that the hierarchical model
outperforms two non-hierarchical baselines, obtains results competitive with
the state-of-the-art, and outperforms the state-of-the-art on five
multilingual, multi-domain datasets without any hand-engineered features or
external resources.
| Sebastian Ruder, Parsa Ghaffari, and John G. Breslin | null | 1609.02745 | null | null |
INSIGHT-1 at SemEval-2016 Task 4: Convolutional Neural Networks for
Sentiment Classification and Quantification | cs.CL cs.LG | This paper describes our deep learning-based approach to sentiment analysis
in Twitter as part of SemEval-2016 Task 4. We use a convolutional neural
network to determine sentiment and participate in all subtasks, i.e. two-point,
three-point, and five-point scale sentiment classification and two-point and
five-point scale sentiment quantification. We achieve competitive results for
two-point scale sentiment classification and quantification, ranking fifth and
a close fourth (third and second by alternative metrics) respectively despite
using only pre-trained embeddings that contain no sentiment information. We
achieve good performance on three-point scale sentiment classification, ranking
eighth out of 35, while performing poorly on five-point scale sentiment
classification and quantification. An error analysis reveals that this is due
to low expressiveness of the model to capture negative sentiment as well as an
inability to take into account ordinal information. We propose improvements in
order to address these and other issues.
| Sebastian Ruder, Parsa Ghaffari, and John G. Breslin | null | 1609.02746 | null | null |
INSIGHT-1 at SemEval-2016 Task 5: Deep Learning for Multilingual
Aspect-based Sentiment Analysis | cs.CL cs.LG | This paper describes our deep learning-based approach to multilingual
aspect-based sentiment analysis as part of SemEval 2016 Task 5. We use a
convolutional neural network (CNN) for both aspect extraction and aspect-based
sentiment analysis. We cast aspect extraction as a multi-label classification
problem, outputting probabilities over aspects parameterized by a threshold. To
determine the sentiment towards an aspect, we concatenate an aspect vector with
every word embedding and apply a convolution over it. Our constrained system
(unconstrained for English) achieves competitive results across all languages
and domains, placing first or second in 5 and 7 out of 11 language-domain pairs
for aspect category detection (slot 1) and sentiment polarity (slot 3)
respectively, thereby demonstrating the viability of a deep learning-based
approach for multilingual aspect-based sentiment analysis.
| Sebastian Ruder, Parsa Ghaffari, and John G. Breslin | null | 1609.02748 | null | null |
By-passing the Kohn-Sham equations with machine learning | physics.comp-ph cs.LG physics.chem-ph stat.ML | Last year, at least 30,000 scientific papers used the Kohn-Sham scheme of
density functional theory to solve electronic structure problems in a wide
variety of scientific fields, ranging from materials science to biochemistry to
astrophysics. Machine learning holds the promise of learning the kinetic energy
functional via examples, by-passing the need to solve the Kohn-Sham equations.
This should yield substantial savings in computer time, allowing either larger
systems or longer time-scales to be tackled, but attempts to machine-learn this
functional have been limited by the need to find its derivative. The present
work overcomes this difficulty by directly learning the density-potential and
energy-density maps for test systems and various molecules. Both improved
accuracy and lower computational cost with this method are demonstrated by
reproducing DFT energies for a range of molecular geometries generated during
molecular dynamics simulations. Moreover, the methodology could be applied
directly to quantum chemical calculations, allowing construction of density
functionals of quantum-chemical accuracy.
| Felix Brockherde, Leslie Vogt, Li Li, Mark E. Tuckerman, Kieron Burke,
Klaus-Robert M\"uller | 10.1038/s41467-017-00839-3 | 1609.02815 | null | null |
Distributed Online Optimization in Dynamic Environments Using Mirror
Descent | math.OC cs.DC cs.LG stat.ML | This work addresses decentralized online optimization in non-stationary
environments. A network of agents aim to track the minimizer of a global
time-varying convex function. The minimizer evolves according to a known
dynamics corrupted by an unknown, unstructured noise. At each time, the global
function can be cast as a sum of a finite number of local functions, each of
which is assigned to one agent in the network. Moreover, the local functions
become available to agents sequentially, and agents do not have a prior
knowledge of the future cost functions. Therefore, agents must communicate with
each other to build an online approximation of the global function. We propose
a decentralized variation of the celebrated Mirror Descent, developed by
Nemirovski and Yudin. Using the notion of Bregman divergence in lieu of
Euclidean distance for projection, Mirror Descent has been shown to be a
powerful tool in large-scale optimization. Our algorithm builds on Mirror
Descent, while ensuring that agents perform a consensus step to follow the
global function and take into account the dynamics of the global minimizer. To
measure the performance of the proposed online algorithm, we compare it to its
offline counterpart, where the global functions are available a priori. The gap
between the two is called dynamic regret. We establish a regret bound that
scales inversely in the spectral gap of the network and, more notably,
captures the deviation of the minimizer sequence from the given
dynamics. We then show that our results subsume a number of results in
distributed optimization. We demonstrate the application of our method to
decentralized tracking of dynamic parameters and verify the results via
numerical experiments.
| Shahin Shahrampour, Ali Jadbabaie | null | 1609.02845 | null | null |
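For reference, the centralized mirror descent step that the decentralized
algorithm builds on (a standard textbook statement, not quoted from the paper)
replaces the Euclidean projection with a Bregman divergence $D_\psi$:
$$x_{t+1} = \arg\min_{x \in \mathcal{X}} \big\{ \eta \langle \nabla f_t(x_t), x \rangle + D_\psi(x, x_t) \big\}, \qquad D_\psi(x, y) = \psi(x) - \psi(y) - \langle \nabla \psi(y), x - y \rangle.$$
Choosing $\psi(x) = \tfrac{1}{2}\|x\|_2^2$ recovers projected gradient descent,
while the negative entropy yields multiplicative-weights updates on the simplex.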
Robust Spectral Detection of Global Structures in the Data by Learning a
Regularization | stat.ML cs.LG cs.SI physics.soc-ph | Spectral methods are popular in detecting global structures in the given data
that can be represented as a matrix. However when the data matrix is sparse or
noisy, classic spectral methods usually fail to work, due to localization of
eigenvectors (or singular vectors) induced by the sparsity or noise. In this
work, we propose a general method to solve the localization problem by learning
a regularization matrix from the localized eigenvectors. Using matrix
perturbation analysis, we demonstrate that the learned regularizations
suppress the eigenvalues associated with localized eigenvectors and enable us to
recover the informative eigenvectors representing the global structure. We show
applications of our method in several inference problems: community detection
in networks, clustering from pairwise similarities, rank estimation and matrix
completion problems. Using extensive experiments, we illustrate that our method
solves the localization problem and works down to the theoretical detectability
limits in different kinds of synthetic data. This is in contrast with existing
spectral algorithms based on data matrix, non-backtracking matrix, Laplacians
and those with rank-one regularizations, which perform poorly in the sparse
case with noise.
| Pan Zhang | null | 1609.02906 | null | null |
Semi-Supervised Classification with Graph Convolutional Networks | cs.LG stat.ML | We present a scalable approach for semi-supervised learning on
graph-structured data that is based on an efficient variant of convolutional
neural networks which operate directly on graphs. We motivate the choice of our
convolutional architecture via a localized first-order approximation of
spectral graph convolutions. Our model scales linearly in the number of graph
edges and learns hidden layer representations that encode both local graph
structure and features of nodes. In a number of experiments on citation
networks and on a knowledge graph dataset we demonstrate that our approach
outperforms related methods by a significant margin.
| Thomas N. Kipf, Max Welling | null | 1609.02907 | null | null |
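The renormalized propagation rule at the heart of the model above,
$H^{(l+1)} = \sigma(\hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2} H^{(l)} W^{(l)})$
with $\hat{A} = A + I$, is compact enough to sketch directly (toy graph and
random weights for illustration only):

```python
# One GCN layer on a toy 4-node path graph.
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])                   # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0)  # ReLU

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.eye(4)                                        # featureless nodes: identity
W = np.random.default_rng(0).normal(size=(4, 2))
print(gcn_layer(A, H, W))                            # 2-d hidden state per node
```

Stacking two such layers and training the final softmax on the few labeled
nodes gives the semi-supervised classifier described in the abstract.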
Stealing Machine Learning Models via Prediction APIs | cs.CR cs.LG stat.ML | Machine learning (ML) models may be deemed confidential due to their
sensitive training data, commercial value, or use in security applications.
Increasingly often, confidential ML models are being deployed with publicly
accessible query interfaces. ML-as-a-service ("predictive analytics") systems
are an example: Some allow users to train models on potentially sensitive data
and charge others for access on a pay-per-query basis.
The tension between model confidentiality and public access motivates our
investigation of model extraction attacks. In such attacks, an adversary with
black-box access, but no prior knowledge of an ML model's parameters or
training data, aims to duplicate the functionality of (i.e., "steal") the
model. Unlike in classical learning theory settings, ML-as-a-service offerings
may accept partial feature vectors as inputs and include confidence values with
predictions. Given these practices, we show simple, efficient attacks that
extract target ML models with near-perfect fidelity for popular model classes
including logistic regression, neural networks, and decision trees. We
demonstrate these attacks against the online services of BigML and Amazon
Machine Learning. We further show that the natural countermeasure of omitting
confidence values from model outputs still admits potentially harmful model
extraction attacks. Our results highlight the need for careful ML model
deployment and new model extraction countermeasures.
| Florian Tram\`er, Fan Zhang, Ari Juels, Michael K. Reiter, Thomas
Ristenpart | null | 1609.02943 | null | null |
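One of the simplest attacks in this family, equation solving against a
logistic regression that returns confidence values, can be sketched as follows
(a local model stands in for the remote API; data and sizes are illustrative).
Since the returned probability satisfies $\mathrm{logit}(p(x)) = w \cdot x + b$,
$d+1$ queries suffice to recover $(w, b)$ exactly:

```python
# Equation-solving extraction of a logistic regression "oracle".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 5
X_train = rng.normal(size=(200, d))
y_train = (X_train @ rng.normal(size=d) > 0).astype(int)
oracle = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # the target

Q = rng.normal(size=(d + 1, d))                   # d+1 arbitrary queries
p = oracle.predict_proba(Q)[:, 1]                 # confidence values returned
logits = np.log(p / (1 - p))
A = np.hstack([Q, np.ones((d + 1, 1))])           # solve A @ [w, b] = logits
z = np.linalg.solve(A, logits)
print(np.allclose(z[:d], oracle.coef_[0]),        # recovered weights match
      np.isclose(z[-1], oracle.intercept_[0]))    # recovered bias matches
```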
An Integrated Classification Model for Financial Data Mining | cs.AI cs.LG | Nowadays, financial data analysis is becoming increasingly important in the
business market. As companies collect more and more data from daily operations,
they expect to extract useful knowledge from existing collected data to help
make reasonable decisions for new customer requests, e.g. user credit category,
churn analysis, real estate analysis, etc. Financial institutes have applied
different data mining techniques to enhance their business performance.
However, a naive application of these techniques can raise performance issues.
Moreover, there are very few general models for both understanding and
forecasting across different financial fields. We present in this paper a new
classification model for analyzing financial data. We also evaluate this model
on different real-world datasets to demonstrate its performance.
| Fan Cai, Nhien-An Le-Khac, M-T. Kechadi | null | 1609.02976 | null | null |
Episodic Exploration for Deep Deterministic Policies: An Application to
StarCraft Micromanagement Tasks | cs.AI cs.LG | We consider scenarios from the real-time strategy game StarCraft as new
benchmarks for reinforcement learning algorithms. We propose micromanagement
tasks, which present the problem of the short-term, low-level control of army
members during a battle. From a reinforcement learning point of view, these
scenarios are challenging because the state-action space is very large, and
because there is no obvious feature representation for the state-action
evaluation function. We describe our approach to tackle the micromanagement
scenarios with deep neural network controllers from raw state features given by
the game engine. In addition, we present a heuristic reinforcement learning
algorithm which combines direct exploration in the policy space and
backpropagation. This algorithm allows for the collection of traces for
learning using deterministic policies, which appears much more efficient than,
for example, $\epsilon$-greedy exploration. Experiments show that with this
algorithm, we successfully learn non-trivial strategies for scenarios with
armies of up to 15 agents, where both Q-learning and REINFORCE struggle.
| Nicolas Usunier, Gabriel Synnaeve, Zeming Lin, Soumith Chintala | null | 1609.02993 | null | null |
Iteratively Reweighted Least Squares Algorithms for L1-Norm Principal
Component Analysis | stat.ML cs.LG | Principal component analysis (PCA) is often used to reduce the dimension of
data by selecting a few orthonormal vectors that explain most of the variance
structure of the data. L1 PCA uses the L1 norm to measure error, whereas the
conventional PCA uses the L2 norm. For the L1 PCA problem minimizing the
fitting error of the reconstructed data, we propose an exact reweighted and an
approximate algorithm based on iteratively reweighted least squares. We provide
convergence analyses, and compare their performance against benchmark
algorithms in the literature. The computational experiment shows that the
proposed algorithms consistently perform best.
| Young Woong Park, Diego Klabjan | 10.1109/ICDM.2016.0054 | 1609.02997 | null | null |
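A minimal rank-one sketch of the iteratively reweighted least squares idea (a
simplified illustration, not the paper's exact or approximate algorithm):
weight each entry by the inverse of its current absolute residual, so repeated
weighted least-squares fits approach an L1-optimal fit that resists outliers.

```python
# Rank-1 L1 fitting by IRLS: alternate weighted least-squares updates of u, v.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
X[0] += 20                                   # an outlier row L1 should resist

u, v, eps = rng.normal(size=50), rng.normal(size=4), 1e-6
for _ in range(50):
    R = X - np.outer(u, v)
    W = 1.0 / np.maximum(np.abs(R), eps)     # IRLS weights ~ 1 / |residual|
    u = ((W * X) @ v) / (W @ v**2)           # weighted LS update of u given v
    v = ((W * X).T @ u) / (W.T @ u**2)       # weighted LS update of v given u
print(np.abs(X - np.outer(u, v)).sum())      # L1 reconstruction error
```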
New Steps on the Exact Learning of CNF | cs.LG | A major problem in computational learning theory is whether the class of
formulas in conjunctive normal form (CNF) is efficiently learnable. Although it
is known that this class cannot be polynomially learned using either membership
or equivalence queries alone, it is open whether CNF can be polynomially
learned using both types of queries. One of the most important results
concerning a restriction of the class CNF is that propositional Horn formulas
are polynomial time learnable in Angluin's exact learning model with membership
and equivalence queries. In this work we push this boundary and show that the
class of multivalued dependency formulas (MVDF) is polynomially learnable from
interpretations. We then provide a notion of reduction between learning
problems in Angluin's model, showing that a transformation of the algorithm
suffices to efficiently learn multivalued database dependencies from data
relations. We also show via reductions that our main result extends well known
previous results and allows us to find alternative solutions for them.
| Montserrat Hermo and Ana Ozaki | null | 1609.03054 | null | null |
Energy-based Generative Adversarial Network | cs.LG stat.ML | We introduce the "Energy-based Generative Adversarial Network" model (EBGAN)
which views the discriminator as an energy function that attributes low
energies to the regions near the data manifold and higher energies to other
regions. Similar to the probabilistic GANs, a generator is seen as being
trained to produce contrastive samples with minimal energies, while the
discriminator is trained to assign high energies to these generated samples.
Viewing the discriminator as an energy function allows the use of a wide
variety of architectures and loss functionals in addition to the usual binary
classifier with logistic output. Among them, we show one instantiation of the
EBGAN framework that uses an auto-encoder architecture, with the energy being the reconstruction
error, in place of the discriminator. We show that this form of EBGAN exhibits
more stable behavior than regular GANs during training. We also show that a
single-scale architecture can be trained to generate high-resolution images.
| Junbo Zhao, Michael Mathieu and Yann LeCun | null | 1609.03126 | null | null |
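The two objectives described above are simple enough to state directly; here
is a sketch over scalar energies (the margin value is a hypothetical
hyper-parameter, and the auto-encoder energy is the instantiation named in the
abstract):

```python
# EBGAN losses over scalar energies.
import numpy as np

def discriminator_loss(energy_real, energy_fake, m=10.0):
    # push real energies down and fake energies up, but only up to margin m
    return energy_real + np.maximum(0.0, m - energy_fake)

def generator_loss(energy_fake):
    # the generator seeks samples the discriminator assigns low energy
    return energy_fake

def autoencoder_energy(x, x_rec):
    # auto-encoder instantiation: energy = reconstruction error
    return np.mean((x - x_rec) ** 2)

x = np.ones(4)
print(discriminator_loss(autoencoder_energy(x, 0.9 * x), energy_fake=5.0))
```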
On the Relationship between Online Gaussian Process Regression and
Kernel Least Mean Squares Algorithms | stat.ML cs.IT cs.LG math.IT | We study the relationship between online Gaussian process (GP) regression and
kernel least mean squares (KLMS) algorithms. While the latter have no capacity
of storing the entire posterior distribution during online learning, we
discover that their operation corresponds to the assumption of a fixed
posterior covariance that follows a simple parametric model. Interestingly,
several well-known KLMS algorithms correspond to specific cases of this model.
The probabilistic perspective allows us to understand how each of them handles
uncertainty, which could explain some of their performance differences.
| Steven Van Vaerenbergh, Jesus Fernandez-Bes, V\'ictor Elvira | null | 1609.03164 | null | null |
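For context, the standard KLMS recursion (a textbook form, not quoted from the
paper) is, for a kernel $k$ and step size $\eta$,
$$e_t = y_t - f_t(x_t), \qquad f_{t+1}(\cdot) = f_t(\cdot) + \eta\, e_t\, k(x_t, \cdot),$$
so the estimate grows by one kernel unit per sample while no posterior
covariance is stored, which is consistent with the fixed-covariance
interpretation the paper develops.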
Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | cs.LG cs.AI cs.CL | This paper presents a simple end-to-end model for speech recognition,
combining a convolutional network based acoustic model with graph decoding. It
is trained to output letters directly from transcribed speech, without the need
for forced alignment of phonemes. We introduce an automatic segmentation criterion
for training from sequence annotation without alignment that is on par with CTC
while being simpler. We show competitive results in word error rate on the
Librispeech corpus with MFCC features, and promising results from raw waveform.
| Ronan Collobert, Christian Puhrsch, Gabriel Synnaeve | null | 1609.03193 | null | null |
Multiplex lexical networks reveal patterns in early word acquisition in
children | physics.soc-ph cond-mat.dis-nn cs.CL cs.LG | Network models of language have provided a way of linking cognitive processes
to the structure and connectivity of language. However, one shortcoming of
current approaches is focusing on only one type of linguistic relationship at a
time, missing the complex multi-relational nature of language. In this work, we
overcome this limitation by modelling the mental lexicon of English-speaking
toddlers as a multiplex lexical network, i.e. a multi-layered network where
N=529 words/nodes are connected according to four types of relationships: (i)
free associations, (ii) feature sharing, (iii) co-occurrence, and (iv)
phonological similarity. We provide an analysis of the topology of the resulting
multiplex and then proceed to evaluate single layers as well as the full
multiplex structure on their ability to predict empirically observed age of
acquisition data of English speaking toddlers. We find that the emerging
multiplex network topology is an important proxy of the cognitive processes of
acquisition, capable of capturing emergent lexicon structure. In fact, we show
that the multiplex topology is fundamentally more powerful than individual
layers in predicting the ordering with which words are acquired. Furthermore,
multiplex analysis allows for a quantification of distinct phases of lexical
acquisition in early learners: while initially all the multiplex layers
contribute to word learning, after about month 23 free associations take the
lead in driving word acquisition.
| Massimo Stella, Nicole M. Beckage and Markus Brede | 10.1038/srep46730 | 1609.03207 | null | null |
Sharing Hash Codes for Multiple Purposes | stat.ML cs.LG | Locality sensitive hashing (LSH) is a powerful tool for sublinear-time
approximate nearest neighbor search, and a variety of hashing schemes have been
proposed for different dissimilarity measures. However, hash codes
significantly depend on the dissimilarity, which prohibits users from adjusting
the dissimilarity at query time. In this paper, we propose multiple purpose
LSH (mp-LSH), which shares the hash codes for different dissimilarities. mp-LSH
supports L2, cosine, and inner product dissimilarities, and their corresponding
weighted sums, where the weights can be adjusted at query time. It also allows
us to modify the importance of pre-defined groups of features. Thus, mp-LSH
enables us, for example, to retrieve similar items to a query with the user
preference taken into account, to find a similar material to a query with some
properties (stability, utility, etc.) optimized, and to turn on or off a part
of multi-modal information (brightness, color, audio, text, etc.) in
image/video retrieval. We theoretically and empirically analyze the performance
of three variants of mp-LSH, and demonstrate their usefulness on real-world
data sets.
| Wikor Pronobis, Danny Panknin, Johannes Kirschnick, Vignesh
Srinivasan, Wojciech Samek, Volker Markl, Manohar Kaul, Klaus-Robert Mueller,
Shinichi Nakajima | null | 1609.03219 | null | null |
Non-square matrix sensing without spurious local minima via the
Burer-Monteiro approach | stat.ML cs.IT cs.LG math.IT math.NA math.OC | We consider the non-square matrix sensing problem, under restricted isometry
property (RIP) assumptions. We focus on the non-convex formulation, where any
rank-$r$ matrix $X \in \mathbb{R}^{m \times n}$ is represented as $UV^\top$,
where $U \in \mathbb{R}^{m \times r}$ and $V \in \mathbb{R}^{n \times r}$. In
this paper, we complement recent findings on the non-convex geometry of the
analogous PSD setting [5], and show that matrix factorization does not
introduce any spurious local minima, under RIP.
| Dohyung Park, Anastasios Kyrillidis, Constantine Caramanis, Sujay
Sanghavi | null | 1609.0324 | null | null |
Less than a Single Pass: Stochastically Controlled Stochastic Gradient
Method | math.OC cs.DS cs.LG stat.ML | We develop and analyze a procedure for gradient-based optimization that we
refer to as stochastically controlled stochastic gradient (SCSG). As a member
of the SVRG family of algorithms, SCSG makes use of gradient estimates at two
scales, with the number of updates at the faster scale being governed by a
geometric random variable. Unlike most existing algorithms in this family, both
the computation cost and the communication cost of SCSG do not necessarily
scale linearly with the sample size $n$; indeed, these costs are independent of
$n$ when the target accuracy is low. An experimental evaluation on real
datasets confirms the effectiveness of SCSG.
| Lihua Lei and Michael I. Jordan | null | 1609.03261 | null | null |
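A simplified sketch of SCSG on a least-squares problem (assumptions: inner
indices drawn from the full dataset, a geometric inner-loop length with mean
$B$, and illustrative problem sizes and step size):

```python
# SCSG sketch: batch anchor gradient + geometric-length SVRG-style inner loop.
import numpy as np

rng = np.random.default_rng(0)
n, d, B, eta = 1000, 10, 32, 0.02
A = rng.normal(size=(n, d))
y = A @ rng.normal(size=d)

def grad_i(x, i):
    return A[i] * (A[i] @ x - y[i])            # gradient of 0.5*(a_i.x - y_i)^2

x = np.zeros(d)
for _ in range(300):
    batch = rng.choice(n, size=B, replace=False)
    g = np.mean([grad_i(x, i) for i in batch], axis=0)    # anchor gradient
    x0 = x.copy()
    for _ in range(rng.geometric(1.0 / B)):               # N ~ Geom, mean B
        i = rng.integers(n)
        x = x - eta * (grad_i(x, i) - grad_i(x0, i) + g)  # corrected step
print(float(np.linalg.norm(A @ x - y)))                   # residual shrinks
```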
CompAdaGrad: A Compressed, Complementary, Computationally-Efficient
Adaptive Gradient Method | cs.LG stat.ML | The adaptive gradient online learning method known as AdaGrad has seen
widespread use in the machine learning community in stochastic and adversarial
online learning problems and more recently in deep learning methods. The
method's full-matrix incarnation offers much better theoretical guarantees and
potentially better empirical performance than its diagonal version; however,
this version is computationally prohibitive and so the simpler diagonal version
often is used in practice. We introduce a new method, CompAdaGrad, that
navigates the space between these two schemes and show that this method can
yield results much better than diagonal AdaGrad while avoiding the (effectively
intractable) $O(n^3)$ computational complexity of full-matrix AdaGrad for
dimension $n$. CompAdaGrad essentially performs full-matrix regularization in a
low-dimensional subspace while performing diagonal regularization in the
complementary subspace. We derive CompAdaGrad's updates for composite mirror
descent in case of the squared $\ell_2$ norm and the $\ell_1$ norm, demonstrate
that its complexity per iteration is linear in the dimension, and establish
guarantees for the method independent of the choice of composite regularizer.
Finally, we show preliminary results on several datasets.
| Nishant A. Mehta and Alistair Rendell and Anish Varghese and
Christfried Webers | null | 1609.03319 | null | null |
Stride Length Estimation with Deep Learning | cs.LG | Accurate estimation of spatial gait characteristics is critical to assess
motor impairments resulting from neurological or musculoskeletal disease.
Currently, however, methodological constraints limit clinical applicability of
state-of-the-art double integration approaches to gait patterns with a clear
zero-velocity phase. We describe a novel approach to stride length estimation
that uses deep convolutional neural networks to map stride-specific inertial
sensor data to the resulting stride length. The model is trained on a publicly
available and clinically relevant benchmark dataset consisting of 1220 strides
from 101 geriatric patients. Evaluation is done in a 10-fold cross validation
and for three different stride definitions. Even though best results are
achieved with strides defined from mid-stance to mid-stance with average
accuracy and precision of 0.01 $\pm$ 5.37 cm, performance does not strongly
depend on stride definition. The achieved precision outperforms
state-of-the-art methods evaluated on this benchmark dataset by 3.0 cm (36%).
Due to the independence of stride definition, the proposed method is not
subject to the methodological constraints that limit the applicability of
state-of-the-art double integration methods. Furthermore, precision on the
benchmark dataset could be improved. With more precise mobile stride length
estimation, new insights into the progression of neurological disease or early
indications might be gained. Due to the independence of stride definition,
previously uncharted diseases in terms of mobile gait analysis can now be
investigated by re-training and applying the proposed method.
| Julius Hannink, Thomas Kautz, Cristian F. Pasluosta, Jens Barth,
Samuel Sch\"ulein, Karl-G\"unter Ga{\ss}mann, Jochen Klucken, Bjoern M.
Eskofier | 10.1109/JBHI.2017.2679486 | 1609.03321 | null | null |
Sensor-based Gait Parameter Extraction with Deep Convolutional Neural
Networks | cs.LG | Measurement of stride-related, biomechanical parameters is the common
rationale for objective gait impairment scoring. State-of-the-art double
integration approaches to extract these parameters from inertial sensor data
are, however, limited in their clinical applicability due to the underlying
assumptions. To overcome this, we present a method to translate the abstract
information provided by wearable sensors to context-related expert features
based on deep convolutional neural networks. Regarding mobile gait analysis,
this enables integration-free and data-driven extraction of a set of 8
spatio-temporal stride parameters. To this end, two modelling approaches are
compared: A combined network estimating all parameters of interest and an
ensemble approach that spawns less complex networks for each parameter
individually. The ensemble approach is outperforming the combined modelling in
the current application. On a clinically relevant and publicly available
benchmark dataset, we estimate stride length, width and medio-lateral change in
foot angle up to ${-0.15\pm6.09}$ cm, ${-0.09\pm4.22}$ cm and ${0.13 \pm
3.78^\circ}$ respectively. Stride, swing and stance time as well as heel and
toe contact times are estimated up to ${\pm 0.07}$, ${\pm0.05}$, ${\pm 0.07}$,
${\pm0.07}$ and ${\pm0.12}$ s respectively. This is comparable to, and in
parts outperforms or defines, the state of the art. Our results further indicate that
the proposed change in methodology could substitute assumption-driven
double-integration methods and enable mobile assessment of spatio-temporal
stride parameters in clinically critical situations as e.g. in the case of
spastic gait impairments.
| Julius Hannink, Thomas Kautz, Cristian F. Pasluosta, Karl-G\"unter
Ga{\ss}mann, Jochen Klucken, Bjoern M. Eskofier | 10.1109/JBHI.2016.2636456 | 1609.03323 | null | null |
Finite-sample and asymptotic analysis of generalization ability with an
application to penalized regression | stat.ML cs.LG math.ST q-fin.EC stat.CO stat.TH | In this paper, we study the performance of extremum estimators from the
perspective of generalization ability (GA): the ability of a model to predict
outcomes in new samples from the same population. By adapting the classical
concentration inequalities, we derive upper bounds on the empirical
out-of-sample prediction errors as a function of the in-sample errors,
in-sample data size, heaviness in the tails of the error distribution, and
model complexity. We show that the error bounds may be used for tuning key
estimation hyper-parameters, such as the number of folds $K$ in
cross-validation. We also show how $K$ affects the bias-variance trade-off for
cross-validation. We demonstrate that the $\mathcal{L}_2$-norm difference
between penalized and the corresponding un-penalized regression estimates is
directly explained by the GA of the estimates and the GA of empirical moment
conditions. Lastly, we prove that all penalized regression estimates are
$L_2$-consistent for both the $n \geqslant p$ and the $n < p$ cases.
Simulations are used to demonstrate key results.
Keywords: generalization ability, upper bound of generalization error,
penalized regression, cross-validation, bias-variance trade-off,
$\mathcal{L}_2$ difference between penalized and unpenalized regression, lasso,
high-dimensional data.
| Ning Xu, Jian Hong, Timothy C.G. Fisher | null | 1609.03344 | null | null |
A Threshold-based Scheme for Reinforcement Learning in Neural Networks | cs.LG cs.NE | A generic and scalable Reinforcement Learning scheme for Artificial Neural
Networks is presented, providing a general purpose learning machine. By
reference to a node threshold, three features are described: 1) a mechanism for
Primary Reinforcement, capable of solving linearly inseparable problems; 2) the
learning scheme is extended to include a mechanism for Conditioned
Reinforcement, capable of forming long-term strategy; and 3) the learning scheme is
modified to use a threshold-based deep learning algorithm, providing a robust
and biologically inspired alternative to backpropagation. The model may be used
for supervised as well as unsupervised training regimes.
| Thomas H. Ward | null | 1609.03348 | null | null |
Multi-Label Learning with Provable Guarantee | cs.LG | Here we study the problem of learning labels for large text corpora where
each text can be assigned a variable number of labels. The problem might seem
trivial when the label dimensionality is small and can be easily solved using a
series of one-vs-all classifiers. However, as the label dimensionality
increases to several thousand, the parameter space becomes extremely large, and
it is no longer possible to use the one-vs-all technique. Here we propose a
model based on the factorization of higher order moments of the words in the
corpora, as well as the cross moment between the labels and the words for
multi-label prediction. Our model provides guaranteed convergence bounds on the
estimated parameters. Further, our model takes only three passes through the
training dataset to extract the parameters, resulting in a highly scalable
algorithm that can train on gigabytes of data consisting of millions of documents
with hundreds of thousands of labels using the nominal resources of a single
processor with 16GB RAM. Our model achieves a 10x-15x speed-up on
large-scale datasets while producing competitive performance in comparison with
existing benchmark algorithms.
| Sayantan Dasgupta | null | 1609.03426 | null | null |
Learning Sparse Graphs Under Smoothness Prior | cs.LG eess.SP | In this paper, we are interested in learning the underlying graph structure
behind training data. Solving this basic problem is essential to carry out any
graph signal processing or machine learning task. To realize this, we assume
that the data is smooth with respect to the graph topology, and we parameterize
the graph topology using an edge sampling function. That is, the graph
Laplacian is expressed in terms of a sparse edge selection vector, which
provides an explicit handle to control the sparsity level of the graph. We
solve the sparse graph learning problem given some training data in both the
noiseless and noisy settings. Given the true smooth data, the posed sparse
graph learning problem can be solved optimally and is based on simple rank
ordering. Given the noisy data, we show that the joint sparse graph learning
and denoising problem can be simplified to designing only the sparse edge
selection vector, which can be solved using convex optimization.
| Sundeep Prabhakar Chepuri, Sijia Liu, Geert Leus, Alfred O. Hero III | null | 1609.03448 | null | null |
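The noiseless rank-ordering step described above admits a very short sketch
(hypothetical node signals; $K$ is the desired number of edges): score every
candidate edge by the smoothness cost it would contribute to
$\mathrm{tr}(X^\top L X)$ and keep the $K$ smallest.

```python
# Sparse graph learning by rank-ordering edge smoothness scores.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, K = 6, 5
X = rng.normal(size=(n, 20))                          # hypothetical node signals

scores = {(i, j): float(np.sum((X[i] - X[j]) ** 2))   # edge smoothness cost
          for i, j in combinations(range(n), 2)}
edges = sorted(scores, key=scores.get)[:K]            # keep the K smoothest edges
print(edges)
```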
Transfer String Kernel for Cross-Context DNA-Protein Binding Prediction | cs.LG | Through sequence-based classification, this paper tries to accurately predict
the DNA binding sites of transcription factors (TFs) in an unannotated cellular
context. Related methods in the literature fail to perform such predictions
accurately, since they do not consider sample distribution shift of sequence
segments from an annotated (source) context to an unannotated (target) context.
We, therefore, propose a method called "Transfer String Kernel" (TSK) that
achieves improved prediction of transcription factor binding site (TFBS) using
knowledge transfer via cross-context sample adaptation. TSK maps sequence
segments to a high-dimensional feature space using a discriminative mismatch
string kernel framework. In this high-dimensional space, labeled examples of
the source context are re-weighted so that the revised sample distribution
matches the target context more closely. We have experimentally verified TSK
for TFBS identifications on fourteen different TFs under a cross-organism
setting. We find that TSK consistently outperforms the state-of-the-art TFBS
tools, especially when working with TFs whose binding sequences are not
conserved across contexts. We also demonstrate the generalizability of TSK by
showing its cutting-edge performance on a different set of cross-context tasks
for the MHC peptide binding predictions.
| Ritambhara Singh, Jack Lanchantin, Gabriel Robins, and Yanjun Qi | 10.1109/TCBB.2016.2609918 | 1609.0349 | null | null |
WaveNet: A Generative Model for Raw Audio | cs.SD cs.LG | This paper introduces WaveNet, a deep neural network for generating raw audio
waveforms. The model is fully probabilistic and autoregressive, with the
predictive distribution for each audio sample conditioned on all previous ones;
nonetheless we show that it can be efficiently trained on data with tens of
thousands of samples per second of audio. When applied to text-to-speech, it
yields state-of-the-art performance, with human listeners rating it as
significantly more natural sounding than the best parametric and concatenative
systems for both English and Mandarin. A single WaveNet can capture the
characteristics of many different speakers with equal fidelity, and can switch
between them by conditioning on the speaker identity. When trained to model
music, we find that it generates novel and often highly realistic musical
fragments. We also show that it can be employed as a discriminative model,
returning promising results for phoneme recognition.
| Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol
Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, Koray Kavukcuoglu | null | 1609.03499 | null | null |
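The core building block, a stack of dilated causal convolutions whose
receptive field doubles per layer, can be sketched as follows (random 2-tap
filters and a tanh nonlinearity stand in for the trained gated units):

```python
# Dilated causal 1-D convolution stack: output at t sees only samples <= t.
import numpy as np

def causal_dilated_conv(x, w, dilation):
    pad = dilation * (len(w) - 1)                # left-pad so no future leaks in
    xp = np.concatenate([np.zeros(pad), x])
    return sum(w[k] * xp[pad - k * dilation : len(xp) - k * dilation]
               for k in range(len(w)))

rng = np.random.default_rng(0)
x = rng.normal(size=64)                          # stand-in for raw audio
for dilation in [1, 2, 4, 8]:
    w = rng.normal(size=2)                       # 2-tap filter per layer
    x = np.tanh(causal_dilated_conv(x, w, dilation))
print(x.shape)                                   # receptive field is 16 samples
```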
ZaliQL: A SQL-Based Framework for Drawing Causal Inference from Big Data | cs.DB cs.AI cs.LG cs.PF | Causal inference from observational data is a subject of active research and
development in statistics and computer science. Many toolkits have been
developed for this purpose that depend on statistical software. However, these
toolkits do not scale to large datasets. In this paper we describe a suite of
techniques for expressing causal inference tasks from observational data in
SQL. This suite supports state-of-the-art methods for causal inference and
runs at scale within a database engine. In addition, we introduce several
optimization techniques that significantly speedup causal inference, both in
the online and offline setting. We evaluate the quality and performance of our
techniques through experiments on real datasets.
| Babak Salimi, Dan Suciu | null | 1609.0354 | null | null |
Comment on "Why does deep and cheap learning work so well?"
[arXiv:1608.08225] | cond-mat.dis-nn cs.LG stat.ML | In a recent paper, "Why does deep and cheap learning work so well?", Lin and
Tegmark claim to show that the mapping between deep belief networks and the
variational renormalization group derived in [arXiv:1410.3831] is invalid, and
present a "counterexample" that claims to show that this mapping does not hold.
In this comment, we show that these claims are incorrect and stem from a
misunderstanding of the variational RG procedure proposed by Kadanoff. We also
explain why the "counterexample" of Lin and Tegmark is compatible with the
mapping proposed in [arXiv:1410.3831].
| David J. Schwab, Pankaj Mehta | null | 1609.03541 | null | null |
Online Data Thinning via Multi-Subspace Tracking | stat.ML cs.LG | In an era of ubiquitous large-scale streaming data, the availability of data
far exceeds the capacity of expert human analysts. In many settings, such data
is either discarded or stored unprocessed in datacenters. This paper proposes a
method of online data thinning, in which large-scale streaming datasets are
winnowed to preserve unique, anomalous, or salient elements for timely expert
analysis. At the heart of this proposed approach is an online anomaly detection
method based on dynamic, low-rank Gaussian mixture models. Specifically, the
high-dimensional covariance matrices of the Gaussian components
are modeled as low-rank. Under this model, most observations
lie near a union of subspaces. The low-rank modeling mitigates the curse of
dimensionality associated with anomaly detection for high-dimensional data, and
recent advances in subspace clustering and subspace tracking allow the proposed
method to adapt to dynamic environments. Furthermore, the proposed method
allows subsampling, is robust to missing data, and uses a mini-batch online
optimization approach. The resulting algorithms are scalable, efficient, and
are capable of operating in real time. Experiments on wide-area motion imagery
and e-mail databases illustrate the efficacy of the proposed approach.
| Xin Jiang, Rebecca Willett | null | 1609.03544 | null | null |
Co-active Learning to Adapt Humanoid Movement for Manipulation | cs.RO cs.LG cs.SY | In this paper we address the problem of robot movement adaptation under
various environmental constraints interactively. Motion primitives are
generally adopted to generate target motion from demonstrations. However, their
generalization capability is weak when facing novel environments.
Additionally, traditional motion generation methods do not consider the
versatile constraints from various users, tasks, and environments. In this
work, we propose a co-active learning framework for learning to adapt robot
end-effector's movement for manipulation tasks. It is designed to adapt the
original imitation trajectories, which are learned from demonstrations, to
novel situations with various constraints. The framework also considers user's
feedback towards the adapted trajectories, and it learns to adapt movement
through human-in-the-loop interactions. The implemented system generalizes
trained motion primitives to various situations with different constraints
considering user preferences. Experiments on a humanoid platform validate the
effectiveness of our approach.
| Ren Mao, John S. Baras, Yezhou Yang, Cornelia Fermuller | null | 1609.03628 | null | null |
An Experimental Study of LSTM Encoder-Decoder Model for Text
Simplification | cs.CL cs.LG | Text simplification (TS) aims to reduce the lexical and structural complexity
of a text, while still retaining the semantic meaning. Current automatic TS
techniques are limited to either lexical-level applications or manually
defining a large amount of rules. Since deep neural networks are powerful
models that have achieved excellent performance over many difficult tasks, in
this paper, we propose to use the Long Short-Term Memory (LSTM) Encoder-Decoder
model for sentence level TS, which makes minimal assumptions about word
sequence. We conduct preliminary experiments to find that the model is able to
learn operation rules such as reversing, sorting and replacing from sequence
pairs, which shows that the model may potentially discover and apply rules such
as modifying sentence structure, substituting words, and removing words for TS.
| Tong Wang, Ping Chen, Kevin Amaral, Jipeng Qiang | null | 1609.03663 | null | null |
A Greedy Algorithm to Cluster Specialists | cs.LG stat.ML | Several recent deep neural network experiments leverage the
generalist-specialist paradigm for classification. However, no formal study
compared the performance of different clustering algorithms for class
assignment. In this paper we perform such a study, suggest slight modifications
to the clustering procedures, and propose a novel algorithm designed to
optimize the performance of the specialist-generalist classification system.
Our experiments on the CIFAR-10 and CIFAR-100 datasets allow us to investigate
situations for varying number of classes on similar data. We find that our
\emph{greedy pairs} clustering algorithm consistently outperforms other
alternatives, while the choice of the confusion matrix has little impact on the
final performance.
| S\'ebastien Arnold | null | 1609.03666 | null | null |
Deep Coevolutionary Network: Embedding User and Item Features for
Recommendation | cs.LG cs.IR | Recommender systems often use latent features to explain the behaviors of
users and capture the properties of items. As users interact with different
items over time, user and item features can influence each other, evolve and
co-evolve over time. The compatibility of user and item features further
influences future interactions between users and items. Recently, point
process based models have been proposed in the literature aiming to capture the
temporally evolving nature of these latent features. However, these models
often make strong parametric assumptions about the evolution process of the
user and item latent features, which may not reflect the reality, and has
limited power in expressing the complex and nonlinear dynamics underlying these
processes. To address these limitations, we propose a novel deep coevolutionary
network model (DeepCoevolve), for learning user and item features based on
their interaction graph. DeepCoevolve uses a recurrent neural network (RNN) over
evolving networks to define the intensity function in point processes, which
allows the model to capture complex mutual influence between users and items,
and the feature evolution over time. We also develop an efficient procedure for
training the model parameters, and show that the learned models lead to
significant improvements in recommendation and activity prediction compared to
previous state-of-the-art parametric models.
| Hanjun Dai, Yichen Wang, Rakshit Trivedi, Le Song | null | 1609.03675 | null | null |
Unsupervised Monocular Depth Estimation with Left-Right Consistency | cs.CV cs.LG stat.ML | Learning based methods have shown very promising results for the task of
depth estimation in single images. However, most existing approaches treat
depth prediction as a supervised regression problem and as a result, require
vast quantities of corresponding ground truth depth data for training. Just
recording quality depth data in a range of environments is a challenging
problem. In this paper, we innovate beyond existing approaches, replacing the
use of explicit depth data during training with easier-to-obtain binocular
stereo footage.
We propose a novel training objective that enables our convolutional neural
network to learn to perform single image depth estimation, despite the absence
of ground truth depth data. Exploiting epipolar geometry constraints, we
generate disparity images by training our network with an image reconstruction
loss. We show that solving for image reconstruction alone results in poor
quality depth images. To overcome this problem, we propose a novel training
loss that enforces consistency between the disparities produced relative to
both the left and right images, leading to improved performance and robustness
compared to existing approaches. Our method produces state of the art results
for monocular depth estimation on the KITTI driving dataset, even outperforming
supervised methods that have been trained with ground truth depth.
| Cl\'ement Godard, Oisin Mac Aodha and Gabriel J. Brostow | null | 1609.03677 | null | null |
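The left-right consistency idea can be sketched in a few lines
(nearest-neighbour sampling here; the paper's training loss uses
differentiable bilinear sampling, and the toy disparities are illustrative):
the left disparity at pixel $j$ should agree with the right disparity sampled
at the location $j - d_l(j)$ it points to.

```python
# Left-right disparity consistency check (disparities in pixels).
import numpy as np

def lr_consistency(d_left, d_right):
    total, count = 0.0, 0
    rows, cols = d_left.shape
    for i in range(rows):
        for j in range(cols):
            k = int(round(j - d_left[i, j]))   # where the left pixel maps
            if 0 <= k < cols:
                total += abs(d_left[i, j] - d_right[i, k])
                count += 1
    return total / max(count, 1)

d = np.full((4, 8), 2.0)                       # constant-disparity toy pair
print(lr_consistency(d, d))                    # 0.0: perfectly consistent
```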
Making Deep Neural Networks Robust to Label Noise: a Loss Correction
Approach | stat.ML cs.LG | We present a theoretically grounded approach to train deep neural networks,
including recurrent networks, subject to class-dependent label noise. We
propose two procedures for loss correction that are agnostic to both
application domain and network architecture. They simply amount to at most a
matrix inversion and multiplication, provided that we know the probability of
each class being corrupted into another. We further show how one can estimate
these probabilities, adapting a recent technique for noise estimation to the
multi-class setting, and thus providing an end-to-end framework. Extensive
experiments on MNIST, IMDB, CIFAR-10, CIFAR-100 and a large scale dataset of
clothing images employing a diversity of architectures --- stacking dense,
convolutional, pooling, dropout, batch normalization, word embedding, LSTM and
residual layers --- demonstrate the noise robustness of our proposals.
Incidentally, we also prove that, when ReLU is the only non-linearity, the loss
curvature is immune to class-dependent label noise.
| Giorgio Patrini, Alessandro Rozza, Aditya Menon, Richard Nock, Lizhen
Qu | null | 1609.03683 | null | null |
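The two corrections amount to one matrix operation each, given the noise
matrix $T$ with $T_{ij} = P(\tilde{y} = j \mid y = i)$; a minimal sketch with
a hypothetical 2-class $T$ (the paper applies the same idea inside deep
network training):

```python
# Backward and forward loss correction for class-dependent label noise.
import numpy as np

def backward_corrected_loss(p, noisy_label, T):
    # multiply the vector of per-class losses by T^{-1}, read off noisy label
    ell = -np.log(p)
    return (np.linalg.inv(T) @ ell)[noisy_label]

def forward_corrected_loss(p, noisy_label, T):
    # push predictions through the noise process, then apply the usual loss
    return -np.log((T.T @ p)[noisy_label])

T = np.array([[0.8, 0.2],                      # hypothetical noise matrix
              [0.1, 0.9]])
p = np.array([0.7, 0.3])                       # model's class probabilities
print(backward_corrected_loss(p, 0, T), forward_corrected_loss(p, 0, T))
```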
3D Simulation for Robot Arm Control with Deep Q-Learning | cs.RO cs.CV cs.LG | Recent trends in robot arm control have seen a shift towards end-to-end
solutions, using deep reinforcement learning to learn a controller directly
from raw sensor data, rather than relying on a hand-crafted, modular pipeline.
However, the high dimensionality of the state space often means that it is
impractical to generate sufficient training data with real-world experiments.
As an alternative solution, we propose to learn a robot controller in
simulation, with the potential of then transferring this to a real robot.
Building upon the recent success of deep Q-networks, we present an approach
which uses 3D simulations to train a 7-DOF robotic arm in a control task
without any prior knowledge. The controller accepts images of the environment
as its only input, and outputs motor actions for the task of locating and
grasping a cube, over a range of initial configurations. To encourage efficient
learning, a structured reward function is designed with intermediate rewards.
We also present preliminary results in direct transfer of policies over to a
real robot, without any further training.
| Stephen James and Edward Johns | null | 1609.03759 | null | null |
Analysis of Kelner and Levin graph sparsification algorithm for a
streaming setting | stat.ML cs.DS cs.LG | We derive a new proof to show that the incremental resparsification algorithm
proposed by Kelner and Levin (2013) produces a spectral sparsifier with high
probability. We rigorously take into account the dependencies across subsequent
resparsifications using martingale inequalities, fixing a flaw in the original
analysis.
| Daniele Calandriello, Alessandro Lazaric and Michal Valko | null | 1609.03769 | null | null |
Learning conditional independence structure for high-dimensional
uncorrelated vector processes | stat.ML cs.LG | We formulate and analyze a graphical model selection method for inferring the
conditional independence graph of a high-dimensional nonstationary Gaussian
random process (time series) from a finite-length observation. The observed
process samples are assumed to be uncorrelated over time, with a time-varying
marginal distribution. The selection method is based on testing conditional
variances obtained for small subsets of process components. This allows us to cope
with the high-dimensional regime, where the sample size can be (drastically)
smaller than the process dimension. We characterize the required sample size
such that the proposed selection method is successful with high probability.
| Nguyen Tran Quang and Alexander Jung | null | 1609.03772 | null | null |
Character-Level Language Modeling with Hierarchical Recurrent Neural
Networks | cs.LG cs.CL cs.NE | Recurrent neural network (RNN) based character-level language models (CLMs)
are extremely useful for modeling out-of-vocabulary words by nature. However,
their performance is generally much worse than the word-level language models
(WLMs), since CLMs need to consider longer history of tokens to properly
predict the next one. We address this problem by proposing hierarchical RNN
architectures, which consist of multiple modules with different timescales.
Despite the multi-timescale structures, the input and output layers operate
with the character-level clock, which allows the existing RNN CLM training
approaches to be directly applicable without any modifications. Our CLM models
show better perplexity than Kneser-Ney (KN) 5-gram WLMs on the One Billion Word
Benchmark with only 2% of parameters. Also, we present real-time
character-level end-to-end speech recognition examples on the Wall Street
Journal (WSJ) corpus, where replacing traditional mono-clock RNN CLMs with the
proposed models results in better recognition accuracies even though the number
of parameters is reduced to 30%.
| Kyuyeon Hwang, Wonyong Sung | null | 1609.03777 | null | null |
Crafting a multi-task CNN for viewpoint estimation | cs.CV cs.LG cs.NE | Convolutional Neural Networks (CNNs) were recently shown to provide
state-of-the-art results for object category viewpoint estimation. However
different ways of formulating this problem have been proposed and the competing
approaches have been explored with very different design choices. This paper
presents a comparison of these approaches in a unified setting as well as a
detailed analysis of the key factors that impact performance. We then
present a new method for joint training with the detection task and demonstrate its
benefit. We also highlight the superiority of classification approaches over
regression approaches, quantify the benefits of deeper architectures and
extended training data, and demonstrate that synthetic data is beneficial even
when using ImageNet training data. By combining all these elements, we
demonstrate an improvement of approximately 5% mAVP over previous
state-of-the-art results on the Pascal3D+ dataset. In particular for their most
challenging 24 view classification task we improve the results from 31.1% to
36.1% mAVP.
| Francisco Massa, Renaud Marlet, Mathieu Aubry | null | 1609.03894 | null | null |
Information Theoretic Structure Learning with Confidence | cs.IT cs.LG math.IT stat.ML | Information theoretic measures (e.g. the Kullback-Leibler divergence and
Shannon mutual information) have been used for exploring possibly nonlinear
multivariate dependencies in high dimension. If these dependencies are assumed
to follow a Markov factor graph model, this exploration process is called
structure discovery. For discrete-valued samples, estimates of the information
divergence over the parametric class of multinomial models lead to structure
discovery methods whose mean squared error achieves parametric convergence
rates as the sample size grows. However, a naive application of this method to
continuous nonparametric multivariate models converges much more slowly. In
this paper we introduce a new method for nonparametric structure discovery that
uses weighted ensemble divergence estimators that achieve parametric
convergence rates and obey an asymptotic central limit theorem that facilitates
hypothesis testing and other types of statistical validation.
| Kevin R. Moon, Morteza Noshad, Salimeh Yasaei Sekeh, Alfred O. Hero
III | 10.1109/ICASSP.2017.7953327 | 1609.03912 | null | null |
Noisy Inductive Matrix Completion Under Sparse Factor Models | stat.ML cs.LG math.ST stat.TH | Inductive Matrix Completion (IMC) is an important class of matrix completion
problems that allows direct inclusion of available features to enhance
estimation capabilities. These models have found applications in personalized
recommendation systems, multilabel learning, dictionary learning, etc. This
paper examines a general class of noisy matrix completion tasks where the
underlying matrix follows an IMC model, i.e., it is formed by a mixing
matrix (a priori unknown) sandwiched between two known feature matrices. The
mixing matrix here is assumed to be well approximated by the product of two
sparse matrices---referred to here as "sparse factor models." We leverage the
main theorem of Soni:2016:NMC and extend it to provide theoretical error bounds
for the sparsity-regularized maximum likelihood estimators for the class of
problems discussed in this paper. The main result is general in the sense that
it can be used to derive error bounds for various noise models. In this paper,
we instantiate our main result for the case of Gaussian noise and provide
corresponding error bounds in terms of squared loss.
| Akshay Soni, Troy Chevalier, Swayambhoo Jain | null | 1609.03958 | null | null |
Self-Sustaining Iterated Learning | math.OC cs.LG stat.ML | An important result from psycholinguistics (Griffiths & Kalish, 2005) states
that no language can be learned iteratively by rational agents in a
self-sustaining manner. We show how to modify the learning process slightly in
order to achieve self-sustainability. Our work is in two parts. First, we
characterize iterated learnability in geometric terms and show how a slight,
steady increase in the lengths of the training sessions ensures
self-sustainability for any discrete language class. In the second part, we
tackle the nondiscrete case and investigate self-sustainability for iterated
linear regression. We discuss the implications of our findings to issues of
non-equilibrium dynamics in natural algorithms.
| Bernard Chazelle, Chu Wang | null | 1609.0396 | null | null |
Tracking Tensor Subspaces with Informative Random Sampling for Real-Time
MR Imaging | cs.LG cs.CV cs.IT math.IT stat.CO | Magnetic resonance imaging (MRI) nowadays serves as an important modality for
diagnostic and therapeutic guidance in clinics. However, the {\it slow
acquisition} process, the dynamic deformation of organs, as well as the need
for {\it real-time} reconstruction, pose major challenges toward obtaining
artifact-free images. To cope with these challenges, the present paper
advocates a novel subspace learning framework that permeates benefits from
parallel factor (PARAFAC) decomposition of tensors (multiway data) to low-rank
modeling of temporal sequence of images. Treating images as multiway data
arrays, the novel method preserves spatial structures and unravels the latent
correlations across various dimensions by means of the tensor subspace.
Leveraging the spatio-temporal correlation of images, Tikhonov regularization
is adopted as a rank surrogate for a least-squares optimization program.
Alteranating majorization minimization is adopted to develop online algorithms
that recursively procure the reconstruction upon arrival of a new undersampled
$k$-space frame. The developed algorithms are {\it provably convergent} and
highly {\it parallelizable} with lightweight FFT tasks per iteration. To
further accelerate the acquisition process, randomized subsampling policies are
devised that leverage intermediate estimates of the tensor subspace, offered by
the online scheme, to {\it randomly} acquire {\it informative} $k$-space
samples. In a nutshell, the novel approach enables tracking motion dynamics
under low acquisition rates `on the fly.' GPU-based tests with real {\it in
vivo} MRI datasets of cardiac cine images corroborate the merits of the novel
approach relative to state-of-the-art alternatives.
| Morteza Mardani, Georgios B. Giannakis, and Kamil Ugurbil | null | 1609.04104 | null | null |
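A simplified matrix analogue of one ingredient, offered as a hedged sketch: GROUSE-style online subspace tracking from undersampled frames. The paper's actual algorithm tracks a PARAFAC tensor subspace via Tikhonov-regularized majorization-minimization; here each masked frame is fit on its observed entries, imputed, and the subspace is refined by a rank-one gradient step.

```python
import numpy as np

def track_subspace(frames, masks, r=3, step=0.5):
    """frames: (T, n) vectorized frames; masks: (T, n) boolean sampling masks."""
    n = frames.shape[1]
    U, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(n, r)))
    recon = []
    for y, m in zip(frames, masks):
        w, *_ = np.linalg.lstsq(U[m], y[m], rcond=None)     # fit observed entries
        x = U @ w                                           # impute the full frame
        recon.append(x)
        resid = np.zeros(n)
        resid[m] = y[m] - x[m]                              # observed residual only
        U, _ = np.linalg.qr(U + step * np.outer(resid, w))  # rank-1 update + retraction
    return np.array(recon), U

# toy usage: a rank-2 sequence with 40% of entries observed per frame
rng = np.random.default_rng(1)
Utrue = np.linalg.qr(rng.normal(size=(100, 2)))[0]
frames = np.array([Utrue @ rng.normal(size=2) for _ in range(300)])
masks = rng.random(frames.shape) < 0.4
recon, U = track_subspace(frames, masks)
print(np.linalg.norm(recon[-50:] - frames[-50:]) / np.linalg.norm(frames[-50:]))
```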
Network learning via multi-agent inverse transportation problems | cs.MA cs.CE cs.LG math.ST stat.TH | Despite the ubiquity of transportation data, methods to infer the state
parameters of a network either ignore sensitivity of route decisions, require
route enumeration for parameterizing descriptive models of route selection, or
require complex bilevel models of route assignment behavior. These limitations
prevent modelers from fully exploiting ubiquitous data in monitoring
transportation networks. Inverse optimization methods that capture network
route choice behavior can address this gap, but they are designed to take
observations of the same model to learn the parameters of that model, which is
statistically inefficient (e.g. requires estimating population route and link
flows). New inverse optimization models and supporting algorithms are proposed
to learn the parameters of heterogeneous travelers' route behavior to infer
shared network state parameters (e.g. link capacity dual prices). The inferred
values are consistent with observations of each agent's optimization behavior.
We prove that the method can obtain unique dual prices for a network shared by
these agents in polynomial time. Four experiments are conducted. The first one,
conducted on a 4-node network, verifies the methodology to obtain heterogeneous
link cost parameters even when multinomial or mixed logit models would not be
meaningfully estimated. The second is a parameter recovery test on the
Nguyen-Dupuis network that shows that unique latent link capacity dual prices
can be inferred using the proposed method. The third test on the same network
demonstrates how a monitoring system in an online learning environment can be
designed using this method. The last test demonstrates this learning on real
data obtained from a freeway network in Queens, New York, using only real-time
Google Maps queries.
| Susan Jia Xu, Mehdi Nourinejad, Xuebo Lai, Joseph Y. J. Chow | 10.1287/trsc.2017.0805 | 1609.04117 | null | null |
Proceedings of the third "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST'16) | cs.NA cs.CV cs.IT cs.LG math.IT math.OC | The third edition of the "international - Traveling Workshop on Interactions
between Sparse models and Technology" (iTWIST) took place in Aalborg, the 4th
largest city in Denmark situated beautifully in the northern part of the
country, from the 24th to 26th of August 2016. The workshop venue was at the
Aalborg University campus. One implicit objective of this biennial workshop is
to foster collaboration between international scientific teams by disseminating
ideas through both specific oral/poster presentations and free discussions. For
this third edition, iTWIST'16 gathered about 50 international participants and
featured 8 invited talks, 12 oral presentations, and 12 posters on the
following themes, all related to the theory, application and generalization of
the "sparsity paradigm": Sparsity-driven data sensing and processing (e.g.,
optics, computer vision, genomics, biomedical, digital communication, channel
estimation, astronomy); Application of sparse models in non-convex/non-linear
inverse problems (e.g., phase retrieval, blind deconvolution, self
calibration); Approximate probabilistic inference for sparse problems; Sparse
machine learning and inference; "Blind" inverse problems and dictionary
learning; Optimization for sparse modelling; Information theory, geometry and
randomness; Sparsity? What's next? (Discrete-valued signals; Union of
low-dimensional spaces, Cosparsity, mixed/group norm, model-based,
low-complexity models, ...); Matrix/manifold sensing/processing (graph,
low-rank approximation, ...); Complexity/accuracy tradeoffs in numerical
methods/optimization; Electronic/optical compressive sensors (hardware).
| V. Abrol, O. Absil, P.-A. Absil, S. Anthoine, P. Antoine, T. Arildsen,
N. Bertin, F. Bleichrodt, J. Bobin, A. Bol, A. Bonnefoy, F. Caltagirone, V.
Cambareri, C. Chenot, V. Crnojevi\'c, M. Da\v{n}kov\'a, K. Degraux, J.
Eisert, J. M. Fadili, M. Gabri\'e, N. Gac, D. Giacobello, A. Gonzalez, C. A.
Gomez Gonzalez, A. Gonz\'alez, P.-Y. Gousenbourger, M. Gr{\ae}sb{\o}ll
Christensen, R. Gribonval, S. Gu\'erit, S. Huang, P. Irofti, L. Jacques, U.
S. Kamilov, S. Kiti\'c, M. Kliesch, F. Krzakala, J. A. Lee, W. Liao, T.
Lindstr{\o}m Jensen, A. Manoel, H. Mansour, A. Mohammad-Djafari, A.
Moshtaghpour, F. Ngol\`e, B. Pairet, M. Pani\'c, G. Peyr\'e, A. Pi\v{z}urica,
P. Rajmic, M. Roblin, I. Roth, A. K. Sao, P. Sharma, J.-L. Starck, E. W.
Tramel, T. van Waterschoot, D. Vukobratovic, L. Wang, B. Wirth, G. Wunder, H.
Zhang | null | 1609.04167 | null | null |
Formalizing Neurath's Ship: Approximate Algorithms for Online Causal
Learning | cs.LG | Higher-level cognition depends on the ability to learn models of the world.
We can characterize this at the computational level as a structure-learning
problem with the goal of best identifying the prevailing causal relationships
among a set of relata. However, the computational cost of performing exact
Bayesian inference over causal models grows rapidly as the number of relata
increases. This implies that the cognitive processes underlying causal learning
must be substantially approximate. A powerful class of approximations that
focuses on the sequential absorption of successive inputs is captured by the
Neurath's ship metaphor in philosophy of science, where theory change is cast
as a stochastic and gradual process shaped as much by people's limited
willingness to abandon their current theory when considering alternatives as by
the ground truth they hope to approach. Inspired by this metaphor and by
algorithms for approximating Bayesian inference in machine learning, we propose
an algorithmic-level model of causal structure learning under which learners
represent only a single global hypothesis that they update locally as they
gather evidence. We propose a related scheme for understanding how, under these
limitations, learners choose informative interventions that manipulate the
causal system to help elucidate its workings. We find support for our approach
in the analysis of four experiments.
| Neil R. Bramley, Peter Dayan, Thomas L. Griffiths and David A. Lagnado | 10.1037/rev0000061 | 1609.04212 | null | null |
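The single-hypothesis, locally-updating learner can be caricatured as follows; the hypothesis encoding, scoring function, and laziness parameter below are illustrative stand-ins, not the paper's model.

```python
import numpy as np

def neurath_step(h, nb, loglik, laziness, rng):
    """Propose one local edit of the current hypothesis and move with a
    Metropolis-style probability damped by a 'laziness' factor in [0, 1)."""
    h2 = nb[rng.integers(len(nb))]
    p = min(1.0, np.exp(loglik(h2) - loglik(h))) * (1.0 - laziness)
    return h2 if rng.random() < p else h

# toy instance: a hypothesis is an edge set over 3 variables, encoded in 6
# bits; neighbours flip a single edge, and the score simply rewards
# agreement with a fixed ground-truth graph
TRUE = 0b000101
loglik = lambda h: -float(bin(h ^ TRUE).count("1"))
rng = np.random.default_rng(0)
h = 0b111111
for _ in range(300):
    nb = [h ^ (1 << i) for i in range(6)]   # single-edge edits
    h = neurath_step(h, nb, loglik, laziness=0.3, rng=rng)
print(bin(h))   # typically ends at or near bin(TRUE)
```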
Convolutional Recurrent Neural Networks for Music Classification | cs.NE cs.LG cs.MM cs.SD | We introduce a convolutional recurrent neural network (CRNN) for music
tagging. CRNNs take advantage of convolutional neural networks (CNNs) for local
feature extraction and recurrent neural networks for temporal summarisation of
the extracted features. We compare CRNN with three CNN structures that have
been used for music tagging, controlling for the number of parameters while
measuring performance and training time per sample. Overall, we found
that CRNNs show strong performance relative to their number of parameters
and training time, indicating the effectiveness of their hybrid structure for
music feature extraction and summarisation.
| Keunwoo Choi, George Fazekas, Mark Sandler, Kyunghyun Cho | null | 1609.04243 | null | null |
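A hedged PyTorch sketch of the architecture family described above: a small CNN front-end extracts local time-frequency features from a log-mel spectrogram, a GRU summarises them over time, and a sigmoid layer emits multi-label tag probabilities. Layer sizes are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_mels=96, n_tags=50):
        super().__init__()
        self.cnn = nn.Sequential(                 # local feature extraction
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d((2, 2)),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d((2, 2)),
        )
        self.rnn = nn.GRU(64 * (n_mels // 4), 128, batch_first=True)
        self.out = nn.Linear(128, n_tags)

    def forward(self, x):                         # x: (batch, 1, n_mels, n_frames)
        z = self.cnn(x)                           # (batch, 64, n_mels//4, n_frames//4)
        z = z.permute(0, 3, 1, 2).flatten(2)      # (batch, time, features)
        _, h = self.rnn(z)                        # final state summarises time
        return torch.sigmoid(self.out(h[-1]))    # multi-label tag probabilities

probs = CRNN()(torch.randn(2, 1, 96, 256))
print(probs.shape)                                # torch.Size([2, 50])
```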
Efficient softmax approximation for GPUs | cs.CL cs.LG | We propose an approximate strategy to efficiently train neural network based
language models over very large vocabularies. Our approach, called adaptive
softmax, circumvents the linear dependency on the vocabulary size by exploiting
the unbalanced word distribution to form clusters that explicitly minimize the
expectation of computation time. Our approach further reduces the computational
time by exploiting the specificities of modern architectures and matrix-matrix
vector operations, making it particularly suited for graphical processing
units. Our experiments carried out on standard benchmarks, such as EuroParl and
One Billion Word, show that our approach brings a large gain in efficiency over
standard approximations while achieving an accuracy close to that of the full
softmax. The code of our method is available at
https://github.com/facebookresearch/adaptive-softmax.
| Edouard Grave, Armand Joulin, Moustapha Ciss\'e, David Grangier,
Herv\'e J\'egou | null | 1609.04309 | null | null |
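PyTorch ships an implementation of this method as nn.AdaptiveLogSoftmaxWithLoss; a minimal usage sketch follows, with illustrative cutoffs. The method assumes the vocabulary is indexed by decreasing frequency, so the head holds the most frequent words and the tails get progressively smaller projections.

```python
import torch
import torch.nn as nn

vocab, dim = 100_000, 512
crit = nn.AdaptiveLogSoftmaxWithLoss(
    in_features=dim,
    n_classes=vocab,
    cutoffs=[2_000, 20_000],   # head = 2k most frequent words, two tail clusters
    div_value=4.0,             # tail i projects down to dim / 4**i dimensions
)
hidden = torch.randn(32, dim)              # final hidden states for 32 tokens
targets = torch.randint(vocab, (32,))      # word indices, frequency-ordered
out = crit(hidden, targets)
print(out.loss)                            # negative log-likelihood to backprop
```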
Very Simple Classifier: a Concept Binary Classifier to Investigate
Features Based on Subsampling and Locality | cs.LG stat.ML | We propose Very Simple Classifier (VSC), a novel method designed to
incorporate the concepts of subsampling and locality in the definition of
features to be used as the input of a perceptron. The rationale is that
locality theoretically guarantees a bound on the generalization error. Each
feature in VSC is a max-margin classifier built on randomly-selected pairs of
samples. The locality in VSC is achieved by multiplying the value of the
feature by a confidence measure that can be characterized in terms of the
Chebyshev inequality. The outputs of this feature layer are then fed into an
output layer of neurons, whose weights are determined by a regularized
pseudoinverse. Extensive comparison of VSC against 9 competitors in the task of
binary classification is carried out. Results on 22 benchmark datasets with
fixed parameters show that VSC is competitive with the Multi-Layer Perceptron
(MLP) and outperforms the other competitors. An exploration of the parameter
space shows VSC can outperform MLP.
| Luca Masera, Enrico Blanzieri | null | 1609.04321 | null | null |
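A hedged reconstruction from the abstract alone; in particular the exact confidence function below is a guess (the paper characterizes its confidence measure via the Chebyshev inequality). Each hidden feature is the max-margin separator of one randomly selected opposite-label pair (the perpendicular bisector hyperplane), damped by a locality confidence, and the output weights come from a ridge-regularized pseudoinverse.

```python
import numpy as np

def vsc_features(X, pairs, scale=1.0):
    feats = []
    for a, b in pairs:
        w = a - b                          # max-margin direction for two points
        mid = (a + b) / 2.0
        margin = X @ w - w @ mid           # signed distance to the bisector
        conf = np.exp(-scale * np.sum((X - mid) ** 2, axis=1) / (w @ w))
        feats.append(np.tanh(margin / (w @ w)) * conf)   # locality-weighted feature
    return np.column_stack(feats)

def train_vsc(X, y, n_feats=200, lam=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    pos, neg = X[y > 0], X[y <= 0]
    pairs = [(pos[rng.integers(len(pos))], neg[rng.integers(len(neg))])
             for _ in range(n_feats)]
    F = vsc_features(X, pairs)
    # regularized pseudoinverse for the output layer (ridge regression)
    W = np.linalg.solve(F.T @ F + lam * np.eye(n_feats), F.T @ y)
    return pairs, W

# toy usage on two Gaussian blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.r_[-np.ones(100), np.ones(100)]
pairs, W = train_vsc(X, y)
print(np.mean(np.sign(vsc_features(X, pairs) @ W) == y))   # training accuracy
```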
A Perspective on Deep Imaging | q-bio.QM cs.CV cs.LG | The combination of tomographic imaging and deep learning, or machine learning
in general, promises to empower not only image analysis but also image
reconstruction. The latter aspect is considered in this perspective article
with an emphasis on medical imaging to develop a new generation of image
reconstruction theories and techniques. This direction might lead to
intelligent utilization of domain knowledge from big data, innovative
approaches for image reconstruction, and superior performance in clinical and
preclinical applications. To realize the full impact of machine learning on
medical imaging, major challenges must be addressed.
| Ge Wang | null | 1609.04375 | null | null |
Bayesian Reinforcement Learning: A Survey | cs.AI cs.LG stat.ML | Bayesian methods for machine learning have been widely investigated, yielding
principled methods for incorporating prior information into inference
algorithms. In this survey, we provide an in-depth review of the role of
Bayesian methods for the reinforcement learning (RL) paradigm. The major
incentives for incorporating Bayesian reasoning in RL are: 1) it provides an
elegant approach to action-selection (exploration/exploitation) as a function
of the uncertainty in learning; and 2) it provides machinery to incorporate
prior knowledge into the algorithms. We first discuss models and methods for
Bayesian inference in the simple single-step Bandit model. We then review the
extensive recent literature on Bayesian methods for model-based RL, where prior
information can be expressed on the parameters of the Markov model. We also
present Bayesian methods for model-free RL, where priors are expressed over the
value function or policy class. The objective of the paper is to provide a
comprehensive survey on Bayesian RL algorithms and their theoretical and
empirical properties.
| Mohammad Ghavamzadeh, Shie Mannor, Joelle Pineau, and Aviv Tamar | 10.1561/2200000049 | 1609.04436 | null | null |
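The survey's starting point, Bayesian inference in the single-step bandit, is easy to make concrete: a minimal sketch of Thompson sampling for Bernoulli arms with Beta priors, where posterior uncertainty directly drives the exploration/exploitation trade-off. Arm means and horizon are illustrative.

```python
import numpy as np

def thompson(means, horizon=5000, seed=0):
    rng = np.random.default_rng(seed)
    a = np.ones(len(means))                    # Beta posterior: successes + 1
    b = np.ones(len(means))                    # Beta posterior: failures + 1
    reward = 0.0
    for _ in range(horizon):
        arm = int(np.argmax(rng.beta(a, b)))   # sample one plausible world, act greedily
        r = float(rng.random() < means[arm])   # pull the arm
        a[arm] += r
        b[arm] += 1.0 - r
        reward += r
    return reward / horizon

print(thompson([0.3, 0.5, 0.6]))               # approaches the best mean, 0.6
```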