title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Transformation-Based Models of Video Sequences | cs.LG cs.CV | In this work we propose a simple unsupervised approach for next frame
prediction in video. Instead of directly predicting the pixels in a frame given
past frames, we predict the transformations needed for generating the next
frame in a sequence, given the transformations of the past frames. This leads
to sharper results, while using a smaller prediction model. In order to enable
a fair comparison between different video frame prediction models, we also
propose a new evaluation protocol. We use generated frames as input to a
classifier trained with ground truth sequences. This criterion guarantees that
models scoring high are those producing sequences which preserve discriminative
features, as opposed to merely penalizing any deviation, plausible or not, from
the ground truth. Our proposed approach compares favourably against more
sophisticated ones on the UCF-101 data set, while also being more efficient in
terms of the number of parameters and computational cost.
| Joost van Amersfoort, Anitha Kannan, Marc'Aurelio Ranzato, Arthur
Szlam, Du Tran and Soumith Chintala | null | 1701.08435 | null | null |
Predicting SMT Solver Performance for Software Verification | cs.SE cs.LG cs.LO | The Why3 IDE and verification system facilitates the use of a wide range of
Satisfiability Modulo Theories (SMT) solvers through a driver-based
architecture. We present Where4: a portfolio-based approach to discharge Why3
proof obligations. We use data analysis and machine learning techniques on
static metrics derived from program source code. Our approach benefits software
engineers by providing a single utility to delegate proof obligations to the
solvers most likely to return a useful result. It does this in a time-efficient
way using existing Why3 and solver installations - without requiring low-level
knowledge about SMT solver operation from the user.
| Andrew Healy (Maynooth University), Rosemary Monahan (Maynooth
University), James F. Power (Maynooth University) | 10.4204/EPTCS.240.2 | 1701.08466 | null | null |
Model-based Classification and Novelty Detection For Point Pattern Data | cs.LG stat.ML | Point patterns are sets or multi-sets of unordered elements that can be found
in numerous data sources. However, in data analysis tasks such as
classification and novelty detection, appropriate statistical models for point
pattern data have not received much attention. This paper proposes the
modelling of point pattern data via random finite sets (RFS). In particular, we
propose appropriate likelihood functions, and a maximum likelihood estimator
for learning a tractable family of RFS models. In novelty detection, we propose
novel ranking functions based on RFS models, which substantially improve
performance.
| Ba-Ngu Vo, Quang N. Tran, Dinh Phung, Ba-Tuong Vo | null | 1701.08473 | null | null |
Binary adaptive embeddings from order statistics of random projections | cs.LG cs.IR | We use some of the largest order statistics of the random projections of a
reference signal to construct a binary embedding that is adapted to signals
correlated with that signal. The embedding is characterized from the analytical
standpoint and shown to provide improved performance on tasks such as
classification in a reduced-dimensionality space.
| Diego Valsesia, Enrico Magli | 10.1109/LSP.2016.2639036 | 1701.08511 | null | null |
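
An illustrative NumPy sketch of the construction described above (not the authors' exact scheme): keep the projection directions on which a reference signal has the largest-magnitude responses, then binarize new signals by the sign of those projections. The selection rule and the zero threshold are assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_binary_embedding(reference, signals, n_projections=512, n_keep=64):
    """Binary embedding adapted to a reference signal (illustrative sketch)."""
    d = reference.shape[0]
    P = rng.standard_normal((n_projections, d))          # random projection matrix
    scores = P @ reference                               # projections of the reference
    keep = np.argsort(np.abs(scores))[-n_keep:]          # largest order statistics
    P_adapted = P[keep]                                  # retain those directions
    return (signals @ P_adapted.T > 0).astype(np.uint8)  # sign binarization

ref = rng.standard_normal(128)
batch = ref + 0.1 * rng.standard_normal((10, 128))       # signals correlated with ref
codes = adaptive_binary_embedding(ref, batch)
print(codes.shape)  # (10, 64)
```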
Self-Adaptation of Activity Recognition Systems to New Sensors | cs.CV cs.LG stat.ML | Traditional activity recognition systems work on the basis of training,
taking a fixed set of sensors into account. In this article, we focus on the
question of how pattern recognition can leverage new information sources without
any, or with minimal user input. Thus, we present an approach for opportunistic
activity recognition, where ubiquitous sensors lead to dynamically changing
input spaces. Our method is a variation of well-established principles of
machine learning, relying on unsupervised clustering to discover structure in
data and inferring cluster labels from a small number of labeled samples in a
semi-supervised manner. Elaborating the challenges, evaluations of over 3000
sensor combinations from three multi-user experiments are presented in detail
and show the potential benefit of our approach.
| David Bannach, Martin J\"anicke, Vitor F. Rey, Sven Tomforde, Bernhard
Sick, Paul Lukowicz | null | 1701.08528 | null | null |
Variational Policy for Guiding Point Processes | cs.LG cs.SI cs.SY math.OC | Temporal point processes have been widely applied to model event sequence
data generated by online users. In this paper, we consider the problem of how
to design the optimal control policy for point processes, such that the
stochastic system driven by the point process is steered to a target state. In
particular, we exploit the key insight to view the stochastic optimal control
problem from the perspective of optimal measure and variational inference. We
further propose a convex optimization framework and an efficient algorithm to
update the policy adaptively to the current system state. Experiments on
synthetic and real-world data show that our algorithm can steer the user
activities much more accurately and efficiently than other stochastic control
methods.
| Yichen Wang, Grady Williams, Evangelos Theodorou, Le Song | null | 1701.08585 | null | null |
A Comparative Study on Different Types of Approaches to Bengali document
Categorization | cs.CL cs.LG | Document categorization is a technique where the category of a document is
determined. In this paper, three well-known supervised learning techniques,
Support Vector Machine (SVM), Na\"ive Bayes (NB) and Stochastic Gradient
Descent (SGD), are compared for Bengali document categorization. Besides the
classifier, classification performance also depends on how features are
selected from the dataset. To analyze the performance of these classifiers in
assigning a document to one of twelve categories, several feature selection
techniques are also applied, namely the Chi-square statistic and normalized
TF-IDF (term frequency-inverse document frequency) with a word analyzer. We
thus explore the efficiency of the three classification algorithms under two
different feature selection techniques.
| Md. Saiful Islam, Fazla Elahi Md Jubayer and Syed Ikhtiar Ahmed | null | 1701.08694 | null | null |
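
A minimal scikit-learn sketch of the kind of pipeline this paper compares: TF-IDF features with a word analyzer, Chi-square feature selection, and the three classifiers. The tiny English corpus and parameter values are placeholders; the paper works with Bengali documents across twelve categories.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import SGDClassifier

# Placeholder toy corpus with 2 classes; the paper uses Bengali documents
# across twelve categories.
docs = ["sports news about cricket", "political debate in parliament",
        "cricket match report", "election results announced"]
labels = [0, 1, 0, 1]

for name, clf in [("SVM", LinearSVC()),
                  ("NB", MultinomialNB()),
                  ("SGD", SGDClassifier())]:
    pipe = Pipeline([
        ("tfidf", TfidfVectorizer(analyzer="word")),  # normalized TF-IDF, word analyzer
        ("chi2", SelectKBest(chi2, k=2)),             # Chi-square feature selection
        ("clf", clf),
    ])
    pipe.fit(docs, labels)
    print(name, pipe.predict(["cricket highlights"]))
```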
Predicting Auction Price of Vehicle License Plate with Deep Recurrent
Neural Network | cs.CL cs.LG q-fin.EC stat.ML | In Chinese societies, superstition is of paramount importance, and vehicle
license plates with desirable numbers can fetch very high prices in auctions.
Unlike other valuable items, license plates are not allocated an estimated
price before auction. I propose that the task of predicting plate prices can be
viewed as a natural language processing (NLP) task, as the value depends on the
meaning of each individual character on the plate and its semantics. I
construct a deep recurrent neural network (RNN) to predict the prices of
vehicle license plates in Hong Kong, based on the characters on a plate. I
demonstrate the importance of having a deep network and of retraining.
Evaluated on 13 years of historical auction prices, the deep RNN's predictions
can explain over 80 percent of price variations, outperforming previous models
by a significant margin. I also demonstrate how the model can be extended to
become a search engine for plates and to provide estimates of the expected
price distribution.
| Vinci Chow | 10.1016/j.eswa.2019.113008 | 1701.08711 | null | null |
Does Weather Matter? Causal Analysis of TV Logs | cs.CY cs.LG | Weather affects our mood and behaviors, and many aspects of our life. When it
is sunny, most people become happier; but when it rains, some people get
depressed. Despite this evidence and the abundance of data, weather has mostly
been overlooked in machine learning and data science research. This work
presents a causal analysis of how weather affects TV watching patterns. We show
that some weather attributes, such as pressure and precipitation, cause major
changes in TV watching patterns. To the best of our knowledge, this is the
first large-scale causal study of the impact of weather on TV watching
patterns.
| Shi Zong, Branislav Kveton, Shlomo Berkovsky, Azin Ashkan, Nikos
Vlassis, Zheng Wen | null | 1701.08716 | null | null |
Memory Augmented Neural Networks with Wormhole Connections | cs.LG cs.NE stat.ML | Recent empirical results on long-term dependency tasks have shown that neural
networks augmented with an external memory can learn the long-term dependency
tasks more easily and achieve better generalization than vanilla recurrent
neural networks (RNN). We suggest that memory augmented neural networks can
reduce the effects of vanishing gradients by creating shortcut (or wormhole)
connections. Based on this observation, we propose a novel memory augmented
neural network model called TARDIS (Temporal Automatic Relation Discovery in
Sequences). The controller of TARDIS can store a selective set of embeddings of
its own previous hidden states into an external memory and revisit them as and
when needed. For TARDIS, memory acts as a storage for wormhole connections to
the past to propagate the gradients more effectively and it helps to learn the
temporal dependencies. The memory structure of TARDIS has similarities to both
Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but
both read and write operations of TARDIS are simpler and more efficient. We use
discrete addressing for read/write operations, which helps to substantially
reduce the vanishing gradient problem with very long sequences. Read and write
operations in TARDIS are tied with a heuristic once the memory becomes full,
and this makes the learning problem simpler when compared to NTM or D-NTM type
of architectures. We provide a detailed analysis on the gradient propagation in
general for MANNs. We evaluate our models on different long-term dependency
tasks and report competitive results in all of them.
| Caglar Gulcehre, Sarath Chandar, Yoshua Bengio | null | 1701.08718 | null | null |
PathNet: Evolution Channels Gradient Descent in Super Neural Networks | cs.NE cs.LG | For artificial general intelligence (AGI) it would be efficient if multiple
users trained the same giant neural network, permitting parameter reuse,
without catastrophic forgetting. PathNet is a first step in this direction. It
is a neural network algorithm that uses agents embedded in the neural network
whose task is to discover which parts of the network to re-use for new tasks.
Agents are pathways (views) through the network which determine the subset of
parameters that are used and updated by the forwards and backwards passes of
the backpropagation algorithm. During learning, a tournament selection genetic
algorithm is used to select pathways through the neural network for replication
and mutation. Pathway fitness is the performance of that pathway measured
according to a cost function. We demonstrate successful transfer learning;
fixing the parameters along a path learned on task A and re-evolving a new
population of paths for task B, allows task B to be learned faster than it
could be learned from scratch or after fine-tuning. Paths evolved on task B
re-use parts of the optimal path evolved on task A. Positive transfer was
demonstrated for binary MNIST, CIFAR, and SVHN supervised learning
classification tasks, and a set of Atari and Labyrinth reinforcement learning
tasks, suggesting PathNets have general applicability for neural network
training. Finally, PathNet also significantly improves the robustness to
hyperparameter choices of a parallel asynchronous reinforcement learning
algorithm (A3C).
| Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols,
David Ha, Andrei A. Rusu, Alexander Pritzel, Daan Wierstra | null | 1701.08734 | null | null |
Click Through Rate Prediction for Contextual Advertisement Using Linear
Regression | cs.IR cs.AI cs.LG | This research presents an innovative and unique way of solving the
advertisement prediction problem which is considered as a learning problem over
the past several years. Online advertising is a multi-billion-dollar industry
and is growing every year with a rapid pace. The goal of this research is to
enhance click through rate of the contextual advertisements using Linear
Regression. To address this problem, a new technique is proposed in this
paper to predict the CTR, increasing the overall revenue of the system by
serving advertisements better suited to viewers, with the help of feature
extraction, and by displaying advertisements based on the context of the
publishers. The important steps include data collection, feature
extraction, CTR prediction and advertisement serving. The statistical results
obtained from the dynamically used technique show an efficient outcome by
fitting the data close to perfection for the LR technique using optimized
feature selection.
| Muhammad Junaid Effendi and Syed Abbas Ali | null | 1701.08744 | null | null |
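
A minimal sketch of the core idea: fit a linear regression from extracted ad/context features to observed click-through rates, then serve the ad with the highest predicted CTR. The feature set below is hypothetical, chosen only to illustrate the shape of the model, not the paper's actual features.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical features per (ad, publisher-context) pair: keyword-match score,
# ad position, historical publisher CTR. Names and values are illustrative.
X = np.array([[0.9, 1, 0.04],
              [0.2, 3, 0.01],
              [0.7, 2, 0.03],
              [0.1, 4, 0.02]])
y = np.array([0.05, 0.004, 0.03, 0.006])  # observed click-through rates

model = LinearRegression().fit(X, y)
new_ad = np.array([[0.8, 1, 0.035]])
print(model.predict(new_ad))              # estimated CTR; serve the highest-CTR ad
```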
Bayesian Learning of Consumer Preferences for Residential Demand
Response | cs.LG cs.SY stat.ML | In coming years residential consumers will face real-time electricity tariffs
with energy prices varying day to day, and effective energy saving will require
automation - a recommender system, which learns consumer's preferences from her
actions. A consumer chooses a scenario of home appliance use to balance her
comfort level and the energy bill. We propose a Bayesian learning algorithm to
estimate the comfort level function from the history of appliance use. In
numeric experiments with datasets generated from a simulation model of a
consumer interacting with small home appliances, the algorithm outperforms
popular regression analysis tools. Our approach can be extended to control an
air heating and conditioning system, which is responsible for up to half of a
household's energy bill.
| Mikhail V. Goubko and Sergey O. Kuznetsov and Alexey A. Neznanov and
Dmitry I. Ignatov | 10.1016/j.ifacol.2016.12.184 | 1701.08757 | null | null |
Dynamic Task Allocation for Crowdsourcing Settings | cs.LG stat.ML | We consider the problem of optimal budget allocation for crowdsourcing
problems, allocating users to tasks to maximize our final confidence in the
crowdsourced answers. Such an optimized worker assignment method allows us to
boost the efficacy of any popular crowdsourcing estimation algorithm. We
consider a mutual information interpretation of the crowdsourcing problem,
which leads to a stochastic subset selection problem with a submodular
objective function. We present experimental simulation results which
demonstrate the effectiveness of our dynamic task allocation method for
achieving higher accuracy, possibly requiring fewer labels, as well as
improving upon a previous method which is sensitive to the proportion of users
to questions.
| Angela Zhou, Irineo Cabreros, Karan Singh | null | 1701.08795 | null | null |
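
The mutual-information view leads to stochastic subset selection with a submodular objective, for which greedy maximization is the standard workhorse. A generic sketch follows; the `gain` function here is a stand-in for the estimated information gain of assigning a worker to a task, not the paper's estimator.

```python
def greedy_submodular(candidates, budget, gain):
    """Greedy maximization of a monotone submodular objective (generic sketch).

    `gain(selected, c)` returns the marginal value of adding assignment `c`;
    in the crowdsourcing setting it would be an estimated mutual-information
    gain of asking worker w question q (a stand-in here)."""
    selected = set()
    for _ in range(budget):
        best = max((c for c in candidates if c not in selected),
                   key=lambda c: gain(selected, c), default=None)
        if best is None:
            break
        selected.add(best)
    return selected

# Toy usage: diminishing-returns gain over (worker, question) pairs.
pairs = [(w, q) for w in range(3) for q in range(4)]
gain = lambda S, c: 1.0 / (1 + sum(1 for (w, q) in S if q == c[1]))
print(greedy_submodular(pairs, budget=5, gain=gain))
```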
Learning from various labeling strategies for suicide-related messages
on social media: An experimental study | cs.LG cs.CY cs.SI | Suicide is an important but often misunderstood problem, one that researchers
are now seeking to better understand through social media. Due in large part to
the fuzzy nature of what constitutes suicidal risks, most supervised approaches
for learning to automatically detect suicide-related activity in social media
require a great deal of human labor to train. However, humans themselves have
diverse or conflicting views on what constitutes suicidal thoughts. So how to
obtain reliable gold standard labels is fundamentally challenging and, we
hypothesize, depends largely on what is asked of the annotators and what slice
of the data they label. We conducted multiple rounds of data labeling and
collected annotations from crowdsourcing workers and domain experts. We
aggregated the resulting labels in various ways to train a series of supervised
models. Our preliminary evaluations show that using unanimously agreed labels
from multiple annotators is helpful for achieving robust machine learning models.
| Tong Liu and Qijin Cheng and Christopher M. Homan and Vincent M.B.
Silenzio | null | 1701.08796 | null | null |
Reinforcement Learning Algorithm Selection | stat.ML cs.AI cs.LG math.OC | This paper formalises the problem of online algorithm selection in the
context of Reinforcement Learning. The setup is as follows: given an episodic
task and a finite number of off-policy RL algorithms, a meta-algorithm has to
decide which RL algorithm is in control during the next episode so as to
maximize the expected return. The article presents a novel meta-algorithm,
called Epochal Stochastic Bandit Algorithm Selection (ESBAS). Its principle is
to freeze the policy updates at each epoch, and to leave a rebooted stochastic
bandit in charge of the algorithm selection. Under some assumptions, a thorough
theoretical analysis demonstrates its near-optimality considering the
structural sampling budget limitations. ESBAS is first empirically evaluated on
a dialogue task where it is shown to outperform each individual algorithm in
most configurations. ESBAS is then adapted to a true online setting where
algorithms update their policies after each transition, which we call SSBAS.
SSBAS is evaluated on a fruit collection task where it is shown to adapt the
stepsize parameter more efficiently than the classical hyperbolic decay, and on
an Atari game, where it improves the performance by a wide margin.
| Romain Laroche and Raphael Feraud | null | 1701.0881 | null | null |
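
A rough sketch of the ESBAS control loop as described in the abstract: within each epoch the candidate algorithms' policies are frozen and a freshly rebooted stochastic bandit decides which algorithm controls each episode. The UCB1 selection rule below is a standard stand-in; the paper's meta-algorithm and its analysis are more specific.

```python
import math, random

def esbas_like(algos, run_episode, episodes_per_epoch=16, n_epochs=10):
    """Epoch-wise RL algorithm selection with a rebooted UCB bandit (sketch).

    `run_episode(a)` runs one episode under algorithm a's frozen policy and
    returns its return; between epochs each algorithm may update its policy."""
    for epoch in range(n_epochs):
        counts = {a: 0 for a in algos}   # the bandit is rebooted at each epoch
        sums = {a: 0.0 for a in algos}
        for t in range(1, episodes_per_epoch + 1):
            ucb = lambda a: (float("inf") if counts[a] == 0 else
                             sums[a] / counts[a]
                             + math.sqrt(2 * math.log(t) / counts[a]))
            a = max(algos, key=ucb)
            r = run_episode(a)
            counts[a] += 1
            sums[a] += r
        # ... here each algorithm would perform its (frozen until now) updates

# Toy usage with two fake "algorithms" of different quality.
esbas_like(["algoA", "algoB"],
           run_episode=lambda a: random.gauss(1.0 if a == "algoA" else 0.5, 0.1))
```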
Fully Convolutional Architectures for Multi-Class Segmentation in Chest
Radiographs | cs.CV cs.LG | The success of deep convolutional neural networks on image classification and
recognition tasks has led to new applications in very diversified contexts,
including the field of medical imaging. In this paper we investigate and
propose neural network architectures for automated multi-class segmentation of
anatomical organs in chest radiographs, namely for lungs, clavicles and heart.
We address several open challenges, including model overfitting, reducing the number
of parameters and handling of severely imbalanced data in CXR by fusing recent
concepts in convolutional networks and adapting them to the segmentation
problem task in CXR. We demonstrate that our architecture combining delayed
subsampling, exponential linear units, highly restrictive regularization and a
large number of high resolution low level abstract features outperforms
state-of-the-art methods on all considered organs, as well as the human
observer on lungs and heart. The models use a multi-class configuration with
three target classes and are trained and tested on the publicly available JSRT
database, consisting of 247 X-ray images, the ground-truth masks for which are
available in the SCR database. Our best performing model, trained with the loss
function based on the Dice coefficient, reached mean Jaccard overlap scores of
95.0\% for lungs, 86.8\% for clavicles and 88.2\% for heart. This architecture
outperformed the human observer results for lungs and heart.
| Alexey A. Novikov, Dimitrios Lenis, David Major, Jiri Hlad\r{u}vka,
Maria Wimmer, Katja B\"uhler | null | 1701.08816 | null | null |
Emergence of Selective Invariance in Hierarchical Feed Forward Networks | cs.LG cs.CV | Many theories have emerged which investigate how invariance is generated in
hierarchical networks through simple schemes such as max and mean pooling.
The restriction to max/mean pooling in theoretical and empirical studies has
diverted attention away from a more general way of generating invariance to
nuisance transformations. We conjecture that hierarchically building
selective invariance (i.e. carefully choosing the range of the transformation
to be invariant to at each layer of a hierarchical network) is important
for pattern recognition. We utilize a novel pooling layer called adaptive
pooling to find linear pooling weights within networks. These networks with the
learnt pooling weights have performances on object categorization tasks that
are comparable to max/mean pooling networks. Interestingly, adaptive pooling
can converge to mean pooling (when initialized with random pooling weights),
find more general linear pooling schemes or even decide not to pool at all. We
illustrate the general notion of selective invariance through object
categorization experiments on large-scale datasets such as SVHN and ILSVRC
2012.
| Dipan K. Pal, Vishnu Boddeti, Marios Savvides | null | 1701.08837 | null | null |
Spatial Projection of Multiple Climate Variables using Hierarchical
Multitask Learning | cs.LG stat.ML | Future projection of climate is typically obtained by combining outputs from
multiple Earth System Models (ESMs) for several climate variables such as
temperature and precipitation. While IPCC has traditionally used a simple model
output average, recent work has illustrated potential advantages of using a
multitask learning (MTL) framework for projections of individual climate
variables. In this paper we introduce a framework for hierarchical multitask
learning (HMTL) with two levels of tasks such that each super-task, i.e., task
at the top level, is itself a multitask learning problem over sub-tasks. For
climate projections, each super-task focuses on projections of specific climate
variables spatially using an MTL formulation. For the proposed HMTL approach, a
group lasso regularization is added to couple parameters across the
super-tasks, which in the climate context helps exploit relationships among the
behavior of different climate variables at a given spatial location. We show
that some recent works on MTL based on learning task dependency structures can
be viewed as special cases of HMTL. Experiments on synthetic and real climate
data show that HMTL produces better results than decoupled MTL methods applied
separately on the super-tasks and HMTL significantly outperforms baselines for
climate projection.
| Andr\'e R. Gon\c{c}alves, Arindam Banerjee, Fernando J. Von Zuben | null | 1701.0884 | null | null |
Flow Navigation by Smart Microswimmers via Reinforcement Learning | physics.flu-dyn cond-mat.stat-mech cs.LG nlin.CD | Smart active particles can acquire some limited knowledge of the fluid
environment from simple mechanical cues and exert a control on their preferred
steering direction. Their goal is to learn the best way to navigate by
exploiting the underlying flow whenever possible. As an example, we focus our
attention on smart gravitactic swimmers. These are active particles whose task
is to reach the highest altitude within some time horizon, given the
constraints enforced by fluid mechanics. By means of numerical experiments, we
show that swimmers indeed learn nearly optimal strategies just by experience. A
reinforcement learning algorithm allows particles to learn effective strategies
even in difficult situations when, in the absence of control, they would end up
being trapped by flow structures. These strategies are highly nontrivial and
cannot be easily guessed in advance. This Letter illustrates the potential of
reinforcement learning algorithms to model adaptive behavior in complex flows
and paves the way towards the engineering of smart microswimmers that solve
difficult navigation problems.
| Simona Colabrese, Kristian Gustavsson, Antonio Celani and Luca
Biferale | 10.1103/PhysRevLett.118.158004 | 1701.08848 | null | null |
SenseGen: A Deep Learning Architecture for Synthetic Sensor Data
Generation | cs.LG cs.CV | Our ability to synthesize sensory data that preserves specific statistical
properties of the real data has had tremendous implications on data privacy and
big data analytics. The synthetic data can be used as a substitute for
selective real data segments that are sensitive to the user, thus protecting
privacy and resulting in improved analytics. However, increasingly adversarial
roles taken by data recipients such as mobile apps, or other cloud-based
analytics services, mandate that the synthetic data, in addition to preserving
statistical properties, should also be difficult to distinguish from the real
data. Typically, visual inspection has been used as a test to distinguish
between datasets. But more recently, sophisticated classifier models
(discriminators), corresponding to a set of events, have also been employed to
distinguish between synthesized and real data. The model operates on both
datasets and the respective event outputs are compared for consistency. In this
paper, we take a step towards generating sensory data that can pass a deep
learning based discriminator model test, and make two specific contributions:
first, we present a deep learning based architecture for synthesizing sensory
data. This architecture comprises a generator model, which is a stack of
multiple Long-Short-Term-Memory (LSTM) networks and a Mixture Density Network.
Second, we use another LSTM network based discriminator model for
distinguishing between the true and the synthesized data. Using a dataset of
accelerometer traces, collected using smartphones of users doing their daily
activities, we show that the deep learning based discriminator model can only
distinguish between the real and synthesized traces with an accuracy in the
neighborhood of 50%.
| Moustafa Alzantot, Supriyo Chakraborty, Mani B. Srivastava | null | 1701.08886 | null | null |
Deep Reinforcement Learning for Visual Object Tracking in Videos | cs.CV cs.LG | In this paper we introduce a fully end-to-end approach for visual tracking in
videos that learns to predict the bounding box locations of a target object at
every frame. An important insight is that the tracking problem can be
considered as a sequential decision-making process and historical semantics
encode highly relevant information for future decisions. Based on this
intuition, we formulate our model as a recurrent convolutional neural network
agent that interacts with a video over time, and our model can be trained with
reinforcement learning (RL) algorithms to learn good tracking policies that pay
attention to continuous, inter-frame correlation and maximize tracking
performance in the long run. The proposed tracking algorithm achieves
state-of-the-art performance in an existing tracking benchmark and operates at
frame-rates faster than real-time. To the best of our knowledge, our tracker is
the first neural-network tracker that combines convolutional and recurrent
networks with RL algorithms.
| Da Zhang, Hamid Maei, Xin Wang, Yuan-Fang Wang | null | 1701.08936 | null | null |
Deep Submodular Functions | cs.LG | We start with an overview of a class of submodular functions called SCMMs
(sums of concave composed with non-negative modular functions plus a final
arbitrary modular). We then define a new class of submodular functions we call
{\em deep submodular functions} or DSFs. We show that DSFs are a flexible
parametric family of submodular functions that share many of the properties and
advantages of deep neural networks (DNNs). DSFs can be motivated by considering
a hierarchy of descriptive concepts over ground elements and where one wishes
to allow submodular interaction throughout this hierarchy. Results in this
paper show that DSFs constitute a strictly larger class of submodular functions
than SCMMs. We show that, for any integer $k>0$, there are $k$-layer DSFs that
cannot be represented by a $k'$-layer DSF for any $k'<k$. This implies that,
like DNNs, there is a utility to depth, but unlike DNNs, the family of DSFs
strictly increases with depth. Despite this, we show (using a "backpropagation"
like method) that DSFs, even with arbitrarily large $k$, do not comprise all
submodular functions. In offering the above results, we also define the notion
of an antitone superdifferential of a concave function and show how this
relates to submodular functions (in general), DSFs (in particular), negative
second-order partial derivatives, continuous submodularity, and concave
extensions. To further motivate our analysis, we provide various special case
results from matroid theory, comparing DSFs with forms of matroid rank, in
particular the laminar matroid. Lastly, we discuss strategies to learn DSFs,
and define the classes of deep supermodular functions, deep difference of
submodular functions, and deep multivariate submodular functions, and discuss
where these can be useful in applications.
| Jeffrey Bilmes, Wenruo Bai | null | 1701.08939 | null | null |
Variable selection for clustering with Gaussian mixture models: state of
the art | stat.ML cs.LG | Mixture models have become widely used in clustering, given the
probabilistic framework on which they are based. However, for modern databases
characterized by their large size, these models behave disappointingly when
specifying the model, making the selection of relevant variables essential for
this type of clustering. After recalling the basics of model-based clustering,
this article examines variable selection methods for model-based clustering,
as well as presenting opportunities for improving these methods.
| Abdelghafour Talibi and Boujem\^aa Achchab and Rafik Lasri | null | 1701.08946 | null | null |
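
One simple baseline in this literature is to compare candidate variable subsets by BIC under a Gaussian mixture fit. The scikit-learn sketch below is illustrative only, using exhaustive search over 2-variable subsets on toy data; the reviewed methods use more scalable search strategies.

```python
import numpy as np
from itertools import combinations
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two informative variables carrying cluster structure plus two noise variables.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
X = np.hstack([X, rng.normal(0, 1, (200, 2))])

def bic_of(subset, n_components=2):
    gm = GaussianMixture(n_components=n_components, random_state=0)
    gm.fit(X[:, list(subset)])
    return gm.bic(X[:, list(subset)])

# Exhaustive comparison of 2-variable subsets by BIC (lower is better).
best = min(combinations(range(X.shape[1]), 2), key=bic_of)
print(best)  # expected: the informative variables (0, 1)
```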
CommAI: Evaluating the first steps towards a useful general AI | cs.LG cs.AI cs.CL | With machine learning successfully applied to new daunting problems almost
every day, general AI starts looking like an attainable goal. However, most
current research focuses instead on important but narrow applications, such as
image classification or machine translation. We believe this to be largely due
to the lack of objective ways to measure progress towards broad machine
intelligence. In order to fill this gap, we propose here a set of concrete
desiderata for general AI, together with a platform to test machines on how
well they satisfy such desiderata, while keeping all further complexities to a
minimum.
| Marco Baroni, Armand Joulin, Allan Jabri, Germ\`an Kruszewski,
Angeliki Lazaridou, Klemen Simonic, Tomas Mikolov | null | 1701.08954 | null | null |
Towards Adversarial Retinal Image Synthesis | cs.CV cs.LG stat.ML | Synthesizing images of the eye fundus is a challenging task that has been
previously approached by formulating complex models of the anatomy of the eye.
New images can then be generated by sampling a suitable parameter space. In
this work, we propose a method that learns to synthesize eye fundus images
directly from data. For that, we pair true eye fundus images with their
respective vessel trees, by means of a vessel segmentation technique. These
pairs are then used to learn a mapping from a binary vessel tree to a new
retinal image. For this purpose, we use a recent image-to-image translation
technique, based on the idea of adversarial learning. Experimental results show
that the original and the generated images are visually different in terms of
their global appearance, in spite of sharing the same vessel tree.
Additionally, a quantitative quality analysis of the synthetic retinal images
confirms that the produced images retain a high proportion of the true image
set quality.
| Pedro Costa, Adrian Galdran, Maria In\^es Meyer, Michael David
Abr\`amoff, Meindert Niemeijer, Ana Maria Mendon\c{c}a, Aur\'elio Campilho | null | 1701.08974 | null | null |
Mixed Low-precision Deep Learning Inference using Dynamic Fixed Point | cs.LG cs.NE | We propose a cluster-based quantization method to convert pre-trained full
precision weights into ternary weights with minimal impact on the accuracy. In
addition, we also constrain the activations to 8-bits thus enabling sub 8-bit
full integer inference pipeline. Our method uses smaller clusters of N filters
with a common scaling factor to minimize the quantization loss, while also
maximizing the number of ternary operations. We show that, with a cluster size
of N=4 on Resnet-101, we can achieve 71.8% TOP-1 accuracy, within 6% of the
best full precision results, while replacing ~85% of all multiplications with
8-bit accumulations. Using the same method with 4-bit weights achieves 76.3%
TOP-1 accuracy, which is within 2% of the full precision result. We also study
the impact of cluster size on both performance and accuracy: larger cluster
sizes (N=64) can replace ~98% of the multiplications with ternary operations
but introduce a significant drop in accuracy, which necessitates fine-tuning
the parameters by retraining the network at lower precision. To address this we
have also trained low-precision Resnet-50 with 8-bit activations and ternary
weights by pre-initializing the network with full precision weights and achieve
68.9% TOP-1 accuracy within 4 additional epochs. Our final quantized model can
run on a full 8-bit compute pipeline, with a potential 16x improvement in
performance compared to baseline full-precision models.
| Naveen Mellempudi, Abhisek Kundu, Dipankar Das, Dheevatsa Mudigere,
and Bharat Kaul | null | 1701.08978 | null | null |
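
A NumPy sketch of the cluster-based idea: group filters into clusters of N, ternarize each weight to {-1, 0, +1}, and share one scaling factor per cluster. The threshold rule (a fraction of the mean magnitude) is a common ternary-weight heuristic assumed here, not necessarily the paper's exact quantizer.

```python
import numpy as np

def ternarize_clustered(weights, cluster_size=4, delta_frac=0.7):
    """Ternarize conv filters with one shared scale per cluster of N filters.

    weights: (num_filters, fan_in). The threshold delta = frac * mean|w|
    follows common ternary-weight practice and is an assumption here."""
    q = np.zeros_like(weights)
    scales = []
    for start in range(0, len(weights), cluster_size):
        w = weights[start:start + cluster_size]
        delta = delta_frac * np.abs(w).mean()                  # cluster threshold
        mask = np.abs(w) > delta
        alpha = np.abs(w[mask]).mean() if mask.any() else 0.0  # shared scale
        q[start:start + cluster_size] = np.where(mask, np.sign(w), 0.0)
        scales.append(alpha)
    return q, np.array(scales)  # inference computes alpha * (ternary ops)

w = np.random.default_rng(0).standard_normal((16, 27))
q, a = ternarize_clustered(w, cluster_size=4)
print(np.unique(q), a.shape)    # values in {-1, 0, 1}, one scale per cluster
```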
Efficient Rank Aggregation via Lehmer Codes | cs.LG cs.AI | We propose a novel rank aggregation method based on converting permutations
into their corresponding Lehmer codes or other subdiagonal images. Lehmer
codes, also known as inversion vectors, are vector representations of
permutations in which each coordinate can take values not restricted by the
values of other coordinates. This transformation allows for decoupling of the
coordinates and for performing aggregation via simple scalar median or mode
computations. We present simulation results illustrating the performance of
this completely parallelizable approach and analytically prove that both the
mode and median aggregation procedure recover the correct centroid aggregate
with small sample complexity when the permutations are drawn according to the
well-known Mallows models. The proposed Lehmer code approach may also be used
on partial rankings, with similar performance guarantees.
| Pan Li, Arya Mazumdar and Olgica Milenkovic | null | 1701.09083 | null | null |
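
A small self-contained sketch of the pipeline: convert each permutation to its Lehmer code (inversion vector), aggregate coordinate-wise with a scalar median, and decode the result back into a permutation. With an even number of voters the coordinate-wise median would need a rounding convention; truncation is assumed here.

```python
import numpy as np

def to_lehmer(perm):
    """Lehmer code: L[i] = #{j > i : perm[j] < perm[i]} (inversion vector)."""
    n = len(perm)
    return [sum(perm[j] < perm[i] for j in range(i + 1, n)) for i in range(n)]

def from_lehmer(code):
    """Decode: perm[i] is the code[i]-th smallest of the remaining items."""
    items = list(range(len(code)))
    return [items.pop(c) for c in code]

def aggregate_rankings(perms):
    codes = np.array([to_lehmer(p) for p in perms])
    median_code = np.median(codes, axis=0).astype(int)  # coordinate-wise median
    return from_lehmer(list(median_code))

votes = [[0, 1, 2, 3], [1, 0, 2, 3], [0, 1, 3, 2]]
print(aggregate_rankings(votes))  # -> [0, 1, 2, 3]
```

Because each Lehmer coordinate ranges over its own interval independently of the others, the coordinate-wise median always decodes to a valid permutation, which is what makes this aggregation fully parallelizable.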
Skip Connections Eliminate Singularities | cs.NE cs.LG | Skip connections made the training of very deep networks possible and have
become an indispensable component in a variety of neural architectures. A
completely satisfactory explanation for their success remains elusive. Here, we
present a novel explanation for the benefits of skip connections in training
very deep networks. The difficulty of training deep networks is partly due to
the singularities caused by the non-identifiability of the model. Several such
singularities have been identified in previous works: (i) overlap singularities
caused by the permutation symmetry of nodes in a given layer, (ii) elimination
singularities corresponding to the elimination, i.e. consistent deactivation,
of nodes, (iii) singularities generated by the linear dependence of the nodes.
These singularities cause degenerate manifolds in the loss landscape that slow
down learning. We argue that skip connections eliminate these singularities by
breaking the permutation symmetry of nodes, by reducing the possibility of node
elimination and by making the nodes less linearly dependent. Moreover, for
typical initializations, skip connections move the network away from the
"ghosts" of these singularities and sculpt the landscape around them to
alleviate the learning slow-down. These hypotheses are supported by evidence
from simplified models, as well as from experiments with deep networks trained
on real-world datasets.
| A. Emin Orhan, Xaq Pitkow | null | 1701.09175 | null | null |
A Dirichlet Mixture Model of Hawkes Processes for Event Sequence
Clustering | cs.LG stat.ML | We propose an effective method to solve the event sequence clustering
problems based on a novel Dirichlet mixture model of a special but significant
type of point processes --- Hawkes process. In this model, each event sequence
belonging to a cluster is generated via the same Hawkes process with specific
parameters, and different clusters correspond to different Hawkes processes.
The prior distribution of the Hawkes processes is controlled via a Dirichlet
distribution. We learn the model via a maximum likelihood estimator (MLE) and
propose an effective variational Bayesian inference algorithm. We specifically
analyze the resulting EM-type algorithm in the context of inner-outer
iterations and discuss several inner iteration allocation strategies. The
identifiability of our model, the convergence of our learning method, and its
sample complexity are analyzed in both theoretical and empirical ways, which
demonstrate the superiority of our method to other competitors. The proposed
method learns the number of clusters automatically and is robust to model
misspecification. Experiments on both synthetic and real-world data show that
our method can learn diverse triggering patterns hidden in asynchronous event
sequences and achieve encouraging performance on clustering purity and
consistency.
| Hongteng Xu and Hongyuan Zha | null | 1701.09177 | null | null |
Learning the distribution with largest mean: two bandit frameworks | cs.LG math.ST stat.ML stat.TH | Over the past few years, the multi-armed bandit model has become increasingly
popular in the machine learning community, partly because of applications
including online content optimization. This paper reviews two different
sequential learning tasks that have been considered in the bandit literature;
they can be formulated as (sequentially) learning which distribution has the
highest mean among a set of distributions, with some constraints on the
learning process. For both of them (regret minimization and best arm
identification) we present recent, asymptotically optimal algorithms. We
compare the behaviors of the sampling rule of each algorithm as well as the
complexity terms associated to each problem.
| Emilie Kaufmann (SEQUEL, CRIStAL, CNRS), Aur\'elien Garivier (IMT) | null | 1702.00001 | null | null |
Towards "AlphaChem": Chemical Synthesis Planning with Tree Search and
Deep Neural Network Policies | cs.AI cs.LG physics.chem-ph | Retrosynthesis is a technique to plan the chemical synthesis of organic
molecules, for example drugs, agro- and fine chemicals. In retrosynthesis, a
search tree is built by analysing molecules recursively and dissecting them
into simpler molecular building blocks until one obtains a set of known
building blocks. The search space is intractably large, and it is difficult to
determine the value of retrosynthetic positions. Here, we propose to model
retrosynthesis as a Markov Decision Process. In combination with a Deep Neural
Network policy learned from essentially the complete published knowledge of
chemistry, Monte Carlo Tree Search (MCTS) can be used to evaluate positions. In
exploratory studies, we demonstrate that MCTS with neural network policies
outperforms the traditionally used best-first search with hand-coded
heuristics.
| Marwin Segler, Mike Preu{\ss}, Mark P. Waller | null | 1702.0002 | null | null |
Representation of big data by dimension reduction | cs.IT cs.LG math.IT stat.ML | Suppose the data consist of a set $S$ of points $x_j, 1 \leq j \leq J$,
distributed in a bounded domain $D \subset R^N$, where $N$ and $J$ are large
numbers. In this paper an algorithm is proposed for checking whether there
exists a manifold $\mathbb{M}$ of low dimension near which many of the points
of $S$ lie and finding such $\mathbb{M}$ if it exists. There are many dimension
reduction algorithms, both linear and non-linear. Our algorithm is simple to
implement and has some advantages compared with the known algorithms. If there
is a manifold of low dimension near which most of the data points lie, the
proposed algorithm will find it. Some numerical results are presented
illustrating the algorithm and analyzing its performance compared to the
classical PCA (principal component analysis) and Isomap.
| A.G.Ramm, C. Van | null | 1702.00027 | null | null |
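
For the linear special case, the question "do most points lie near a low-dimensional manifold?" can be probed by measuring each point's distance to the best-fit PCA subspace. The sketch below is such a PCA-based proxy, not the proposed algorithm, which also targets non-linear manifolds.

```python
import numpy as np

def near_linear_manifold(X, dim, tol=0.1):
    """Fraction of points of X lying within tol of the best-fit dim-dimensional
    affine subspace (a PCA-based proxy for the linear case only)."""
    Xc = X - X.mean(axis=0)
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    residual = Xc @ Vt[dim:].T               # components orthogonal to subspace
    dist = np.linalg.norm(residual, axis=1)  # distance of each point to subspace
    return (dist < tol).mean()

rng = np.random.default_rng(0)
basis = rng.standard_normal((2, 50))         # 2-D manifold embedded in R^50
X = rng.standard_normal((1000, 2)) @ basis + 0.01 * rng.standard_normal((1000, 50))
print(near_linear_manifold(X, dim=2, tol=0.1))  # close to 1.0
```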
On orthogonality and learning recurrent networks with long term
dependencies | cs.LG cs.NE | It is well known that it is challenging to train deep neural networks and
recurrent neural networks for tasks that exhibit long term dependencies. The
vanishing or exploding gradient problem is a well known issue associated with
these challenges. One approach to addressing vanishing and exploding gradients
is to use either soft or hard constraints on weight matrices so as to encourage
or enforce orthogonality. Orthogonal matrices preserve gradient norm during
backpropagation and may therefore be a desirable property. This paper explores
issues with optimization convergence, speed and gradient stability when
encouraging or enforcing orthogonality. To perform this analysis, we propose a
weight matrix factorization and parameterization strategy through which we can
bound matrix norms and therein control the degree of expansivity induced during
backpropagation. We find that hard constraints on orthogonality can negatively
affect the speed of convergence and model performance.
| Eugene Vorontsov, Chiheb Trabelsi, Samuel Kadoury, Chris Pal | null | 1702.00071 | null | null |
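
A PyTorch sketch of the soft-constraint baseline the paper analyzes: penalize ||W^T W - I||_F^2 so the weight matrix is pushed toward orthogonality. The paper's actual contribution, a factorization that bounds matrix norms and controls expansivity, is more involved than this.

```python
import torch

def orthogonality_penalty(W):
    """Soft orthogonality constraint: ||W^T W - I||_F^2."""
    eye = torch.eye(W.shape[1])
    return ((W.T @ W - eye) ** 2).sum()

W = (0.2 * torch.randn(32, 32)).requires_grad_()
opt = torch.optim.SGD([W], lr=1e-2)
for step in range(500):
    # In a real RNN this penalty would be added to the task loss with a
    # coefficient controlling how hard the constraint is enforced.
    loss = orthogonality_penalty(W)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(orthogonality_penalty(W).item())  # near 0: W is now close to orthogonal
```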
Stochastic Graphlet Embedding | cs.CV cs.LG stat.ML | Graph-based methods are known to be successful in many machine learning and
pattern classification tasks. These methods consider semi-structured data as
graphs where nodes correspond to primitives (parts, interest points, segments,
etc.) and edges characterize the relationships between these primitives.
However, these non-vectorial graph data cannot be straightforwardly plugged
into off-the-shelf machine learning algorithms without a preliminary step of --
explicit/implicit -- graph vectorization and embedding. This embedding process
should be resilient to intra-class graph variations while being highly
discriminant. In this paper, we propose a novel high-order stochastic graphlet
embedding (SGE) that maps graphs into vector spaces. Our main contribution
includes a new stochastic search procedure that efficiently parses a given
graph and extracts/samples unlimitedly high-order graphlets. We consider these
graphlets, with increasing orders, to model local primitives as well as their
increasingly complex interactions. In order to build our graph representation,
we measure the distribution of these graphlets into a given graph, using
particular hash functions that efficiently assign sampled graphlets into
isomorphic sets with a very low probability of collision. When combined with
maximum margin classifiers, these graphlet-based representations have positive
impact on the performance of pattern comparison and recognition as corroborated
through extensive experiments using standard benchmark databases.
| Anjan Dutta and Hichem Sahbi | 10.1109/TNNLS.2018.2884700 | 1702.00156 | null | null |
PCA-Initialized Deep Neural Networks Applied To Document Image Analysis | cs.LG stat.ML | In this paper, we present a novel approach for initializing deep neural
networks, i.e., by turning PCA into neural layers. Usually, the initialization
of the weights of a deep neural network is done in one of the three following
ways: 1) with random values, 2) layer-wise, usually as Deep Belief Network or
as auto-encoder, and 3) re-use of layers from another network (transfer
learning). Therefore, typically, many training epochs are needed before
meaningful weights are learned, or a rather similar dataset is required for
seeding a fine-tuning of transfer learning. In this paper, we describe how to
turn a PCA into an auto-encoder, by generating an encoder layer of the PCA
parameters and furthermore adding a decoding layer. We analyze the
initialization technique on real documents. First, we show that a PCA-based
initialization is quick and leads to a very stable initialization. Furthermore,
for the task of layout analysis we investigate the effectiveness of PCA-based
initialization and show that it outperforms state-of-the-art random weight
initialization methods.
| Mathias Seuret, Michele Alberti, Rolf Ingold, Marcus Liwicki | 10.1109/ICDAR.2017.148 | 1702.00177 | null | null |
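
A minimal sketch of turning a fitted PCA into an encoder layer plus a tied decoding layer: the principal components become the weights and the projected mean becomes the bias, so a plain linear layer reproduces `pca.transform`. Nonlinearities and the full training procedure from the paper are omitted.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))

pca = PCA(n_components=16).fit(X)
W_enc = pca.components_                 # (16, 64): encoder weight matrix
b_enc = -pca.components_ @ pca.mean_    # encoder bias, so h = W x + b
W_dec, b_dec = W_enc.T, pca.mean_       # tied decoder reconstructs the input

h = X @ W_enc.T + b_enc                 # equals pca.transform(X)
X_rec = h @ W_dec.T + b_dec             # equals pca.inverse_transform(h)
assert np.allclose(h, pca.transform(X))
assert np.allclose(X_rec, pca.inverse_transform(h))
# These W, b arrays can seed the first (auto-encoder style) layer of a network.
```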
On the Futility of Learning Complex Frame-Level Language Models for
Chord Recognition | cs.SD cs.LG | Chord recognition systems use temporal models to post-process frame-wise
chord predictions from acoustic models. Traditionally, first-order models such
as Hidden Markov Models were used for this task, with recent works suggesting
to apply Recurrent Neural Networks instead. Due to their ability to learn
longer-term dependencies, these models are supposed to learn and to apply
musical knowledge, instead of just smoothing the output of the acoustic model.
In this paper, we argue that learning complex temporal models at the level of
audio frames is futile on principle, and that non-Markovian models do not
perform better than their first-order counterparts. We support our argument
through three experiments on the McGill Billboard dataset. The first two show
1) that when learning complex temporal models at the frame level, improvements
in chord sequence modelling are marginal; and 2) that these improvements do not
translate when applied within a full chord recognition system. The third, still
rather preliminary experiment gives first indications that the use of complex
sequential models for chord prediction at higher temporal levels might be more
promising.
| Filip Korzeniowski and Gerhard Widmer | 10.17743/aesconf.2017.978-1-942220-15-2 | 1702.00178 | null | null |
Communication-Optimal Distributed Clustering | cs.DS cs.LG | Clustering large datasets is a fundamental problem with a number of
applications in machine learning. Data is often collected on different sites
and clustering needs to be performed in a distributed manner with low
communication. We would like the quality of the clustering in the distributed
setting to match that in the centralized setting for which all the data resides
on a single site. In this work, we study both graph and geometric clustering
problems in two distributed models: (1) a point-to-point model, and (2) a model
with a broadcast channel. We give protocols in both models which we show are
nearly optimal by proving almost matching communication lower bounds. Our work
highlights the surprising power of a broadcast channel for clustering problems;
roughly speaking, to spectrally cluster $n$ points or $n$ vertices in a graph
distributed across $s$ servers, for a worst-case partitioning the communication
complexity in a point-to-point model is $n \cdot s$, while in the broadcast
model it is $n + s$. A similar phenomenon holds for the geometric setting as
well. We implement our algorithms and demonstrate this phenomenon on real life
datasets, showing that our algorithms are also very efficient in practice.
| Jiecao Chen and He Sun and David P. Woodruff and Qin Zhang | null | 1702.00196 | null | null |
Machine learning based compact photonic structure design for strong
light confinement | physics.optics cs.LG | We present a novel approach based on machine learning for designing photonic
structures. In particular, we focus on strong light confinement that allows the
design of an efficient free-space-to-waveguide coupler made of a Si slab
overlying the top of a silica substrate. The learning algorithm is
implemented using bitwise square Si cells, and the whole optimized device has a
footprint of $\boldsymbol{2 \, \mu m \times 1\, \mu m}$, which is the smallest
size ever achieved numerically. To find the effect of Si- slab thickness on the
sub-wavelength focusing and strong coupling characteristics of optimized
photonic structure, we carried out three-dimensional time-domain numerical
calculations. Corresponding optimum values of full width at half maximum and
coupling efficiency were calculated as $\boldsymbol{0.158 \lambda}$ and
$\boldsymbol{-1.87\,dB}$ with slab thickness of $\boldsymbol{280nm}$. Compared
to the conventional counterparts, the optimized lens and coupler designs are
easy-to-fabricate via optical lithography techniques, quite compact, and can
operate at telecommunication wavelengths. The outcomes of the presented study
show that machine learning can be beneficial for efficient photonic designs in
various potential applications such as polarization-division, beam manipulation
and optical interconnects.
| Mirbek Turduev, \c{C}a\u{g}r{\i} Latifo\u{g}lu, \.Ibrahim Halil Giden,
Y. Sinan Hanay | null | 1702.0026 | null | null |
On SGD's Failure in Practice: Characterizing and Overcoming Stalling | stat.ML cs.LG math.OC stat.CO | Stochastic Gradient Descent (SGD) is widely used in machine learning problems
to efficiently perform empirical risk minimization, yet, in practice, SGD is
known to stall before reaching the actual minimizer of the empirical risk. SGD
stalling has often been attributed to its sensitivity to the conditioning of
the problem; however, as we demonstrate, SGD will stall even when applied to a
simple linear regression problem with unity condition number for standard
learning rates. Thus, in this work, we numerically demonstrate and
mathematically argue that stalling is a crippling and generic limitation of SGD
and its variants in practice. Once we have established the problem of stalling,
we generalize an existing framework for hedging against its effects, which (1)
deters SGD and its variants from stalling, (2) still provides convergence
guarantees, and (3) makes SGD and its variants more practical methods for
minimization.
| Vivak Patel | null | 1702.00317 | null | null |
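
The stalling phenomenon is easy to reproduce: on a well-conditioned linear least-squares problem, constant-step SGD settles into a noise ball around the minimizer instead of converging to it. A NumPy sketch under these assumptions (constant step size, single-sample gradients):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
A = rng.standard_normal((n, d))             # roughly unit condition number
x_star = rng.standard_normal(d)
b = A @ x_star + 0.5 * rng.standard_normal(n)

x = np.zeros(d)
lr = 0.05
for it in range(20000):
    i = rng.integers(n)                     # single-sample stochastic gradient
    g = (A[i] @ x - b[i]) * A[i]
    x -= lr * g

x_opt = np.linalg.lstsq(A, b, rcond=None)[0]  # actual empirical risk minimizer
print(np.linalg.norm(x - x_opt))            # stalls at a noise floor, not ~0
```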
Generative Adversarial Networks recover features in astrophysical images
of galaxies beyond the deconvolution limit | astro-ph.IM astro-ph.GA cs.LG stat.ML | Observations of astrophysical objects such as galaxies are limited by various
sources of random and systematic noise from the sky background, the optical
system of the telescope and the detector used to record the data. Conventional
deconvolution techniques are limited in their ability to recover features in
imaging data by the Shannon-Nyquist sampling theorem. Here we train a
generative adversarial network (GAN) on a sample of $4,550$ images of nearby
galaxies at $0.01<z<0.02$ from the Sloan Digital Sky Survey and conduct
$10\times$ cross validation to evaluate the results. We present a method using
a GAN trained on galaxy images that can recover features from artificially
degraded images with worse seeing and higher noise than the original with a
performance which far exceeds simple deconvolution. The ability to better
recover detailed features such as galaxy morphology from low-signal-to-noise
and low angular resolution imaging data significantly increases our ability to
study existing data sets of astrophysical objects as well as future
observations with observatories such as the Large Synoptic Survey Telescope (LSST)
and the Hubble and James Webb space telescopes.
| Kevin Schawinski, Ce Zhang, Hantian Zhang, Lucas Fowler and Gokula
Krishnan Santhanam | 10.1093/mnrasl/slx008 | 1702.00403 | null | null |
Convergence Results for Neural Networks via Electrodynamics | cs.DS cs.LG physics.data-an | We study whether a depth two neural network can learn another depth two
network using gradient descent. Assuming a linear output node, we show that the
question of whether gradient descent converges to the target function is
equivalent to the following question in electrodynamics: Given $k$ fixed
protons in $\mathbb{R}^d,$ and $k$ electrons, each moving due to the attractive
force from the protons and repulsive force from the remaining electrons,
whether at equilibrium all the electrons will be matched up with the protons,
up to a permutation. Under the standard electrical force, this follows from the
classic Earnshaw's theorem. In our setting, the force is determined by the
activation function and the input distribution. Building on this equivalence,
we prove the existence of an activation function such that gradient descent
learns at least one of the hidden nodes in the target network. Iterating, we
show that gradient descent can be used to learn the entire network one node at
a time.
| Rina Panigrahy, Sushant Sachdeva, Qiuyi Zhang | null | 1702.00458 | null | null |
Algorithmic Performance-Accuracy Trade-off in 3D Vision Applications
Using HyperMapper | cs.CV cs.DC cs.LG cs.PF | In this paper we investigate an emerging application, 3D scene understanding,
likely to be significant in the mobile space in the near future. The goal of
this exploration is to reduce execution time while meeting our quality of
result objectives. In previous work we showed for the first time that it is
possible to map this application to power constrained embedded systems,
highlighting that decision choices made at the algorithmic design-level have
the most impact.
As the algorithmic design space is too large to be exhaustively evaluated, we
use a previously introduced multi-objective Random Forest Active Learning
prediction framework dubbed HyperMapper, to find good algorithmic designs. We
show that HyperMapper generalizes on a recent cutting edge 3D scene
understanding algorithm and on a modern GPU-based computer architecture.
HyperMapper is able to automatically beat an expert human hand-tuning the
algorithmic parameters of the class of Computer Vision applications considered
in this paper. In addition, we use crowd-sourcing
using a 3D scene understanding Android app to show that the Pareto front
obtained on an embedded system can be used to accelerate the same application
on all 83 crowd-sourced smart-phones and tablets, with speedups ranging
from 2x to over 12x.
| Luigi Nardi, Bruno Bodin, Sajad Saeedi, Emanuele Vespa, Andrew J.
Davison, Paul H. J. Kelly | null | 1702.00505 | null | null |
Segmentation of optic disc, fovea and retinal vasculature using a single
convolutional neural network | cs.CV cs.LG | We have developed and trained a convolutional neural network to automatically
and simultaneously segment optic disc, fovea and blood vessels. Fundus images
were normalised before segmentation was performed to enforce consistency in
background lighting and contrast. For every effective point in the fundus
image, our algorithm extracted three channels of input from the neighbourhood
of the point and forwarded the response across the 7-layer network. On
average, our segmentation achieved an accuracy of 92.68 percent on the testing
set from the DRIVE database.
| Jen Hong Tan, U. Rajendra Acharya, Sulatha V. Bhandary, Kuang Chua
Chua, Sobha Sivaprasad | null | 1702.00509 | null | null |
Recovering True Classifier Performance in Positive-Unlabeled Learning | stat.ML cs.LG | A common approach in positive-unlabeled learning is to train a classification
model between labeled and unlabeled data. This strategy is in fact known to
give an optimal classifier under mild conditions; however, it results in biased
empirical estimates of the classifier performance. In this work, we show that
the typically used performance measures such as the receiver operating
characteristic curve, or the precision-recall curve obtained on such data can
be corrected with the knowledge of class priors; i.e., the proportions of the
positive and negative examples in the unlabeled data. We extend the results to
a noisy setting where some of the examples labeled positive are in fact
negative and show that the correction also requires the knowledge of the
proportion of noisy examples in the labeled positives. Using state-of-the-art
algorithms to estimate the positive class prior and the proportion of noise, we
experimentally evaluate two correction approaches and demonstrate their
efficacy on real-life data.
| Shantanu Jain, Martha White, Predrag Radivojac | null | 1702.00518 | null | null |
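
A sketch of the ROC-style correction with a known class prior: recall measured on the labeled positives is unbiased, while the "false positive rate" measured on unlabeled data mixes in hidden positives and can be inverted once the prior alpha is known. The mixture identity used below follows from the definition of the unlabeled set; scores and thresholds are synthetic.

```python
import numpy as np

def corrected_roc_point(scores_pos, scores_unl, threshold, alpha):
    """Correct an ROC operating point computed from positive-unlabeled data.

    alpha: class prior = fraction of true positives inside the unlabeled set.
    The unlabeled 'false positive rate' mixes true negatives and hidden
    positives: fpr_u = (1 - alpha) * fpr + alpha * tpr, which we invert."""
    tpr = np.mean(scores_pos >= threshold)       # unbiased on labeled positives
    fpr_u = np.mean(scores_unl >= threshold)     # biased: unlabeled != negative
    fpr = (fpr_u - alpha * tpr) / (1.0 - alpha)  # corrected false positive rate
    return tpr, max(fpr, 0.0)

rng = np.random.default_rng(0)
pos = rng.normal(1, 1, 1000)                     # classifier scores, positives
neg = rng.normal(-1, 1, 1000)
alpha = 0.3                                      # 30% of unlabeled are positive
unl = np.concatenate([rng.normal(1, 1, 300), neg[:700]])
print(corrected_roc_point(pos, unl, threshold=0.0, alpha=alpha))
```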
Deep Learning the Indus Script | cs.CV cs.CL cs.LG | Standardized corpora of undeciphered scripts, a necessary starting point for
computational epigraphy, requires laborious human effort for their preparation
from raw archaeological records. Automating this process through machine
learning algorithms can be of significant aid to epigraphical research. Here,
we take the first steps in this direction and present a deep learning pipeline
that takes as input images of the undeciphered Indus script, as found in
archaeological artifacts, and returns as output a string of graphemes, suitable
for inclusion in a standard corpus. The image is first decomposed into regions
using Selective Search and these regions are classified as containing textual
and/or graphical information using a convolutional neural network. Regions
classified as potentially containing text are hierarchically merged and trimmed
to remove non-textual information. The remaining textual part of the image is
segmented using standard image processing techniques to isolate individual
graphemes. This set is finally passed to a second convolutional neural network
to classify the graphemes, based on a standard corpus. The classifier can
identify the presence or absence of the most frequent Indus grapheme, the "jar"
sign, with an accuracy of 92%. Our results demonstrate the great potential of
deep learning approaches in computational epigraphy and, more generally, in the
digital humanities.
| Satish Palaniappan and Ronojoy Adhikari | null | 1702.00523 | null | null |
Optimal Schemes for Discrete Distribution Estimation under Locally
Differential Privacy | cs.LG cs.IT math.IT | We consider the minimax estimation problem of a discrete distribution with
support size $k$ under privacy constraints. A privatization scheme is applied
to each raw sample independently, and we need to estimate the distribution of
the raw samples from the privatized samples. A positive number $\epsilon$
measures the privacy level of a privatization scheme. For a given $\epsilon,$
we consider the problem of constructing optimal privatization schemes with
$\epsilon$-privacy level, i.e., schemes that minimize the expected estimation
loss for the worst-case distribution. Two schemes in the literature provide
order optimal performance in the high privacy regime where $\epsilon$ is very
close to $0,$ and in the low privacy regime where $e^{\epsilon}\approx k,$
respectively.
In this paper, we propose a new family of schemes which substantially improve
the performance of the existing schemes in the medium privacy regime when $1\ll
e^{\epsilon} \ll k.$ More concretely, we prove that when $3.8 < \epsilon
<\ln(k/9) ,$ our schemes reduce the expected estimation loss by $50\%$ under
$\ell_2^2$ metric and by $30\%$ under $\ell_1$ metric over the existing
schemes. We also prove a lower bound for the region $e^{\epsilon} \ll k,$ which
implies that our schemes are order optimal in this regime.
| Min Ye and Alexander Barg | null | 1702.0061 | null | null |
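One of the classical schemes this abstract refers to, k-ary randomized response (order-optimal in the low privacy regime $e^{\epsilon}\approx k$), can be sketched as follows. This is the baseline scheme, not the authors' new family; the estimator simply inverts the known response probabilities.

```python
import numpy as np

def k_rr_privatize(x, k, eps, rng):
    """k-ary randomized response: keep x w.p. e^eps/(e^eps + k - 1),
    otherwise report one of the other k-1 symbols uniformly at random."""
    p_keep = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.random() < p_keep:
        return x
    others = [v for v in range(k) if v != x]
    return rng.choice(others)

def estimate_distribution(reports, k, eps):
    """Unbiased estimate: E[freq_j]/n = q + (p - q) * pi_j, so invert."""
    n = len(reports)
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    q = 1.0 / (np.exp(eps) + k - 1)
    freq = np.bincount(reports, minlength=k) / n
    return (freq - q) / (p - q)

rng = np.random.default_rng(0)
k, eps = 10, 1.0
true = rng.dirichlet(np.ones(k))
samples = rng.choice(k, size=50000, p=true)
reports = np.array([k_rr_privatize(s, k, eps, rng) for s in samples])
print(np.abs(estimate_distribution(reports, k, eps) - true).sum())
```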
IQN: An Incremental Quasi-Newton Method with Local Superlinear
Convergence Rate | math.OC cs.LG | The problem of minimizing an objective that can be written as the sum of a
set of $n$ smooth and strongly convex functions is considered. The Incremental
Quasi-Newton (IQN) method proposed here belongs to the family of stochastic and
incremental methods that have a cost per iteration independent of $n$. IQN
iterations are a stochastic version of BFGS iterations that use memory to
reduce the variance of stochastic approximations. The convergence properties of
IQN bridge a gap between deterministic and stochastic quasi-Newton methods.
Deterministic quasi-Newton methods exploit the possibility of approximating the
Newton step using objective gradient differences. They are appealing because
they have a smaller computational cost per iteration relative to Newton's
method and achieve a superlinear convergence rate under customary regularity
assumptions. Stochastic quasi-Newton methods utilize stochastic gradient
differences in lieu of actual gradient differences. This makes their
computational cost per iteration independent of the number of objective
functions $n$. However, existing stochastic quasi-Newton methods have sublinear
or linear convergence at best. IQN is the first stochastic quasi-Newton method
proven to converge superlinearly in a local neighborhood of the optimal
solution. IQN differs from state-of-the-art incremental quasi-Newton methods in
three aspects: (i) The use of aggregated information of variables, gradients,
and quasi-Newton Hessian approximation matrices to reduce the noise of gradient
and Hessian approximations. (ii) The approximation of each individual function
by its Taylor's expansion in which the linear and quadratic terms are evaluated
with respect to the same iterate. (iii) The use of a cyclic scheme to update
the functions in lieu of a random selection routine. We use these fundamental
properties of IQN to establish its local superlinear convergence rate.
| Aryan Mokhtari and Mark Eisen and Alejandro Ribeiro | null | 1702.00709 | null | null |
HashNet: Deep Learning to Hash by Continuation | cs.LG cs.CV | Learning to hash has been widely applied to approximate nearest neighbor
search for large-scale multimedia retrieval, due to its computation efficiency
and retrieval quality. Deep learning to hash, which improves retrieval quality
by end-to-end representation learning and hash encoding, has received
increasing attention recently. Subject to the ill-posed gradient difficulty in
the optimization with sign activations, existing deep learning to hash methods
need to first learn continuous representations and then generate binary hash
codes in a separate binarization step, which suffers from a substantial loss of
retrieval quality. This work presents HashNet, a novel deep architecture for
deep learning to hash by continuation method with convergence guarantees, which
learns exactly binary hash codes from imbalanced similarity data. The key idea
is to attack the ill-posed gradient problem in optimizing deep networks with
non-smooth binary activations by continuation method, in which we begin from
learning an easier network with smoothed activation function and let it evolve
during the training, until it eventually goes back to being the original,
difficult to optimize, deep network with the sign activation function.
Comprehensive empirical evidence shows that HashNet can generate exactly binary
hash codes and yield state-of-the-art multimedia retrieval performance on
standard benchmarks.
| Zhangjie Cao, Mingsheng Long, Jianmin Wang, Philip S. Yu | null | 1702.00758 | null | null |
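A minimal sketch of the continuation idea described above: replace the sign activation by a scaled tanh whose sharpness grows during training, so the smoothed network gradually approaches the original network with binary activations. The doubling schedule and layer sizes are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class ScaledTanh(nn.Module):
    """Smoothed surrogate for sign: tanh(beta * x) -> sign(x) as beta -> inf."""
    def __init__(self, beta=1.0):
        super().__init__()
        self.beta = beta

    def forward(self, x):
        return torch.tanh(self.beta * x)

hash_layer = nn.Sequential(nn.Linear(512, 48), ScaledTanh(beta=1.0))

for stage in range(5):            # continuation: train, then sharpen
    # ... train a few epochs with the current beta ...
    hash_layer[1].beta *= 2.0     # illustrative schedule, not from the paper

codes = torch.sign(hash_layer(torch.randn(4, 512)))  # exactly binary at test time
print(codes.shape)
```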
Natasha: Faster Non-Convex Stochastic Optimization Via Strongly
Non-Convex Parameter | math.OC cs.DS cs.LG stat.ML | Given a nonconvex function that is an average of $n$ smooth functions, we
design stochastic first-order methods to find its approximate stationary
points. The convergence of our new methods depends on the smallest (negative)
eigenvalue $-\sigma$ of the Hessian, a parameter that describes how nonconvex
the function is.
Our methods outperform known results for a range of parameter $\sigma$, and
can be used to find approximate local minima. Our result implies an interesting
dichotomy: there exists a threshold $\sigma_0$ so that the currently fastest
methods for $\sigma>\sigma_0$ and for $\sigma<\sigma_0$ have different
behaviors: the former scales with $n^{2/3}$ and the latter scales with
$n^{3/4}$.
| Zeyuan Allen-Zhu | null | 1702.00763 | null | null |
Pixel Recursive Super Resolution | cs.CV cs.LG | We present a pixel recursive super resolution model that synthesizes
realistic details into images while enhancing their resolution. A low
resolution image may correspond to multiple plausible high resolution images,
thus modeling the super resolution process with a pixel independent conditional
model often results in averaging different details--hence blurry edges. By
contrast, our model is able to represent a multimodal conditional distribution
by properly modeling the statistical dependencies among the high resolution
image pixels, conditioned on a low resolution input. We employ a PixelCNN
architecture to define a strong prior over natural images and jointly optimize
this prior with a deep conditioning convolutional network. Human evaluations
indicate that samples from our proposed model look more photo realistic than a
strong L2 regression baseline.
| Ryan Dahl, Mohammad Norouzi, Jonathon Shlens | null | 1702.00783 | null | null |
An Introduction to Deep Learning for the Physical Layer | cs.IT cs.LG cs.NI math.IT | We present and discuss several novel applications of deep learning for the
physical layer. By interpreting a communications system as an autoencoder, we
develop a fundamental new way to think about communications system design as an
end-to-end reconstruction task that seeks to jointly optimize transmitter and
receiver components in a single process. We show how this idea can be extended
to networks of multiple transmitters and receivers and present the concept of
radio transformer networks as a means to incorporate expert domain knowledge in
the machine learning model. Lastly, we demonstrate the application of
convolutional neural networks on raw IQ samples for modulation classification
which achieves competitive accuracy with respect to traditional schemes relying
on expert features. The paper is concluded with a discussion of open challenges
and areas for future investigation.
| Timothy J. O'Shea, Jakob Hoydis | null | 1702.00832 | null | null |
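A minimal sketch of the autoencoder view of a communications system described above, assuming Keras: one-hot messages are encoded into a few power-normalized channel uses, passed through an AWGN channel modeled as a noise layer, and decoded back to a message distribution. All dimensions and the noise level are illustrative.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

M, n = 16, 7                      # M messages encoded into n channel uses
inputs = keras.Input(shape=(M,))  # one-hot message
x = layers.Dense(M, activation="relu")(inputs)
x = layers.Dense(n, activation="linear")(x)
# average-power normalization of the transmitted symbols
x = layers.Lambda(lambda v: v / keras.backend.sqrt(
    keras.backend.mean(v ** 2, axis=-1, keepdims=True)))(x)
x = layers.GaussianNoise(stddev=0.1)(x)   # AWGN channel (active at train time)
x = layers.Dense(M, activation="relu")(x)
outputs = layers.Dense(M, activation="softmax")(x)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="categorical_crossentropy")

msgs = np.eye(M)[np.random.randint(0, M, 10000)]
autoencoder.fit(msgs, msgs, epochs=5, batch_size=256, verbose=0)
```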
Recurrent Neural Networks for anomaly detection in the Post-Mortem time
series of LHC superconducting magnets | physics.ins-det cs.LG physics.acc-ph | This paper presents a model based on Deep Learning algorithms of LSTM and GRU
for facilitating an anomaly detection in Large Hadron Collider superconducting
magnets. We used high resolution data available in Post Mortem database to
train a set of models and chose the best possible set of their
hyper-parameters. Using a deep learning approach allowed us to examine a vast
body of data and extract the fragments which require further expert examination
and are regarded as anomalies. The presented method does not require tedious
manual threshold setting or operator attention at the stage of system setup.
Instead, an automatic approach is proposed, which according to our experiments
achieves an accuracy of 99%. This is reached for the largest dataset of 302 MB
and the following network architecture: single-layer LSTM, 128 cells, 20
epochs of training, look_back=16, look_ahead=128, grid=100 and the Adam
optimizer. All the experiments were run on an Nvidia Tesla K80 GPU.
| Maciej Wielgosz and Andrzej Skocze\'n and Matej Mertik | null | 1702.00833 | null | null |
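A sketch of the reported configuration (single-layer LSTM with 128 cells, look_back=16, look_ahead=128, Adam), assuming Keras. The toy signal and the three-sigma threshold on prediction error are illustrative assumptions; the abstract does not specify how anomalies are flagged from the extracted fragments.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

look_back, look_ahead = 16, 128

model = keras.Sequential([
    layers.LSTM(128, input_shape=(look_back, 1)),  # single-layer LSTM, 128 cells
    layers.Dense(look_ahead),                      # predict the next window
])
model.compile(optimizer="adam", loss="mse")

# toy signal standing in for a Post Mortem voltage time series
signal = np.sin(np.linspace(0, 100, 5000)).astype("float32")
N = len(signal) - look_back - look_ahead
X = np.stack([signal[i:i + look_back] for i in range(N)])
y = np.stack([signal[i + look_back:i + look_back + look_ahead] for i in range(N)])
model.fit(X[..., None], y, epochs=2, batch_size=64, verbose=0)

# fragments with unusually large prediction error are flagged for experts
errors = np.mean((model.predict(X[..., None], verbose=0) - y) ** 2, axis=1)
anomalies = np.where(errors > errors.mean() + 3 * errors.std())[0]
```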
Structured Attention Networks | cs.CL cs.LG cs.NE | Attention networks have proven to be an effective approach for embedding
categorical inference within a deep neural network. However, for many tasks we
may want to model richer structural dependencies without abandoning end-to-end
training. In this work, we experiment with incorporating richer structural
distributions, encoded using graphical models, within deep networks. We show
that these structured attention networks are simple extensions of the basic
attention procedure, and that they allow for extending attention beyond the
standard soft-selection approach, such as attending to partial segmentations or
to subtrees. We experiment with two different classes of structured attention
networks: a linear-chain conditional random field and a graph-based parsing
model, and describe how these models can be practically implemented as neural
network layers. Experiments show that this approach is effective for
incorporating structural biases, and structured attention networks outperform
baseline attention models on a variety of synthetic and real tasks: tree
transduction, neural machine translation, question answering, and natural
language inference. We further find that models trained in this way learn
interesting unsupervised hidden representations that generalize simple
attention.
| Yoon Kim, Carl Denton, Luong Hoang, Alexander M. Rush | null | 1702.00887 | null | null |
Deep Learning with Low Precision by Half-wave Gaussian Quantization | cs.CV cs.AI cs.LG | The problem of quantizing the activations of a deep neural network is
considered. An examination of the popular binary quantization approach shows
that this consists of approximating a classical non-linearity, the hyperbolic
tangent, by two functions: a piecewise constant sign function, which is used in
feedforward network computations, and a piecewise linear hard tanh function,
used in the backpropagation step during network learning. The problem of
approximating the ReLU non-linearity, widely used in the recent deep learning
literature, is then considered. A half-wave Gaussian quantizer (HWGQ) is
proposed for forward approximation and shown to have an efficient
implementation, by exploiting the statistics of network activations and batch normalization
operations commonly used in the literature. To overcome the problem of gradient
mismatch, due to the use of different forward and backward approximations,
several piecewise backward approximators are then investigated. The
implementation of the resulting quantized network, denoted as HWGQ-Net, is
shown to achieve much closer performance to full precision networks, such as
AlexNet, ResNet, GoogLeNet and VGG-Net, than previously available low-precision
networks, with 1-bit binary weights and 2-bit quantized activations.
| Zhaowei Cai, Xiaodong He, Jian Sun, Nuno Vasconcelos | null | 1702.00953 | null | null |
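An illustrative forward pass for a half-wave Gaussian quantizer in the spirit of the abstract: negatives are zeroed (the half-wave part) and positives are snapped to a few levels fitted to the half-normal distribution, exploiting the fact that batch-normalized activations are roughly unit Gaussian. The Lloyd-style level fitting below is an assumption; the paper's exact construction may differ.

```python
import numpy as np

def hwgq_levels(bits=2, n_samples=200_000, iters=50, seed=0):
    """Lloyd quantizer for the positive half of a unit Gaussian."""
    rng = np.random.default_rng(seed)
    x = np.abs(rng.standard_normal(n_samples))     # half-wave Gaussian samples
    levels = np.linspace(0.5, 2.0, 2 ** bits - 1)  # one code is reserved for zero
    for _ in range(iters):
        idx = np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)
        levels = np.array([x[idx == j].mean() for j in range(len(levels))])
    return levels

def hwgq_forward(x, levels):
    """Half-wave: negatives -> 0; positives -> nearest quantization level."""
    q = levels[np.argmin(np.abs(x[..., None] - levels), axis=-1)]
    return np.where(x <= 0, 0.0, q)

levels = hwgq_levels(bits=2)
print(levels)                      # three positive levels for 2-bit activations
print(hwgq_forward(np.array([-0.3, 0.2, 0.9, 2.5]), levels))
```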
Intrinsic Grassmann Averages for Online Linear, Robust and Nonlinear
Subspace Learning | cs.LG cs.CV | Principal Component Analysis (PCA) and Kernel Principal Component Analysis
(KPCA) are fundamental methods in machine learning for dimensionality
reduction. The former finds a low-dimensional linear approximation in finite
dimensions, and the latter often operates in an infinite-dimensional Reproducing
Kernel Hilbert-space (RKHS). In this paper, we present a geometric framework
for computing the principal linear subspaces in both situations as well as for
the robust PCA case, that amounts to computing the intrinsic average on the
space of all subspaces: the Grassmann manifold. Points on this manifold are
defined as the subspaces spanned by $K$-tuples of observations. The intrinsic
Grassmann average of these subspaces are shown to coincide with the principal
components of the observations when they are drawn from a Gaussian
distribution. We show similar results in the RKHS case and provide an efficient
algorithm for computing the projection onto this average subspace. The
result is a method akin to KPCA which is substantially faster. Further, we
present a novel online version of the KPCA using our geometric framework.
Competitive performance of all our algorithms is demonstrated on a variety of
real and synthetic data sets.
| Rudrasis Chakraborty, S{\o}ren Hauberg, Baba C. Vemuri | null | 1702.01005 | null | null |
Uncertainty-Aware Reinforcement Learning for Collision Avoidance | cs.LG cs.RO | Reinforcement learning can enable complex, adaptive behavior to be learned
automatically for autonomous robotic platforms. However, practical deployment
of reinforcement learning methods must contend with the fact that the training
process itself can be unsafe for the robot. In this paper, we consider the
specific case of a mobile robot learning to navigate an a priori unknown
environment while avoiding collisions. In order to learn collision avoidance,
the robot must experience collisions at training time. However, high-speed
collisions, even at training time, could damage the robot. A successful
learning method must therefore proceed cautiously, experiencing only low-speed
collisions until it gains confidence. To this end, we present an
uncertainty-aware model-based learning algorithm that estimates the probability
of collision together with a statistical estimate of uncertainty. By
formulating an uncertainty-dependent cost function, we show that the algorithm
naturally chooses to proceed cautiously in unfamiliar environments, and
increases the velocity of the robot in settings where it has high confidence.
Our predictive model is based on bootstrapped neural networks using dropout,
allowing it to process raw sensory inputs from high-bandwidth sensors such as
cameras. Our experimental evaluation demonstrates that our method effectively
minimizes dangerous collisions at training time in an obstacle avoidance task
for a simulated and real-world quadrotor, and a real-world RC car. Videos of
the experiments can be found at https://sites.google.com/site/probcoll.
| Gregory Kahn, Adam Villaflor, Vitchyr Pong, Pieter Abbeel, Sergey
Levine | null | 1702.01182 | null | null |
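A sketch of an uncertainty-dependent cost in the spirit of the abstract: average the collision probability over a bootstrap ensemble and penalize the ensemble's disagreement, so unfamiliar states discourage high speed. The model interface (predict_proba) and the linear penalty form are assumptions, not the paper's formulation.

```python
import numpy as np

def collision_cost(models, state, action, speed, risk_weight=1.0):
    """Uncertainty-aware cost: penalize both the mean predicted collision
    probability and the ensemble's disagreement (a proxy for unfamiliarity)."""
    preds = np.array([m.predict_proba(state, action) for m in models])
    mean_p, std_p = preds.mean(), preds.std()
    # high uncertainty or high collision probability discourages high speed
    return speed * (mean_p + risk_weight * std_p)

def pick_action(models, state, candidate_actions, speeds):
    """Choose the candidate action with the lowest uncertainty-aware cost."""
    costs = [collision_cost(models, state, a, s)
             for a, s in zip(candidate_actions, speeds)]
    return candidate_actions[int(np.argmin(costs))]
```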
Fuzzy Clustering Data Given on the Ordinal Scale Based on Membership and
Likelihood Functions Sharing | cs.LG | The task of clustering data given on the ordinal scale under conditions of
overlapping clusters is considered. We propose an approach based on sharing
membership and likelihood functions. A number of experiments demonstrated the
effectiveness of the proposed method. The method is robust to outliers owing to
the way values are ordered while constructing the membership functions.
| Zhengbing Hu, Yevgeniy V. Bodyanskiy, Oleksii K. Tyshchenko and
Viktoriia O. Samitova | 10.5815/ijisa.2017.02.01 | 1702.012 | null | null |
Traffic Lights with Auction-Based Controllers: Algorithms and Real-World
Data | cs.AI cs.LG cs.SY | Real-time optimization of traffic flow addresses important practical
problems: reducing a driver's wasted time, improving city-wide efficiency,
reducing gas emissions and improving air quality. Much of the current research
in traffic-light optimization relies on extending the capabilities of traffic
lights to either communicate with each other or communicate with vehicles.
However, before such capabilities become ubiquitous, opportunities exist to
improve traffic lights by being more responsive to current traffic situations
within the current, already deployed, infrastructure. In this paper, we
introduce a traffic light controller that employs bidding within micro-auctions
to efficiently incorporate traffic sensor information; no other outside sources
of information are assumed. We train and test traffic light controllers on
large-scale data collected from opted-in Android cell-phone users over a period
of several months in Mountain View, California and the River North neighborhood
of Chicago, Illinois. The learned auction-based controllers surpass (in both
the relevant metrics of road-capacity and mean travel time) the currently
deployed lights, optimized static-program lights, and longer-term planning
approaches, in both cities, measured using real user driving data.
| Shumeet Baluja, Michele Covell, Rahul Sukthankar | null | 1702.01205 | null | null |
A Theoretical Analysis of First Heuristics of Crowdsourced Entity
Resolution | cs.DB cs.AI cs.LG | Entity resolution (ER) is the task of identifying all records in a database
that refer to the same underlying entity, and are therefore duplicates of each
other. Due to inherent ambiguity of data representation and poor data quality,
ER is a challenging task for any automated process. As a remedy, human-powered
ER via crowdsourcing has become popular in recent years. Using crowd to answer
queries is costly and time consuming. Furthermore, crowd-answers can often be
faulty. Therefore, crowd-based ER methods aim to minimize human participation
without sacrificing the quality and use a computer generated similarity matrix
actively. While some of these methods perform well in practice, no theoretical
analysis exists for them, and further, their worst-case performances do not
reflect the experimental findings. This creates a disparity in the
understanding of the popular heuristics for this problem. In this paper, we
make the first attempt to close this gap. We provide a thorough analysis of the
prominent heuristic algorithms for crowd-based ER. We justify experimental
observations with our analysis and information theoretic lower bounds.
| Arya Mazumdar, Barna Saha | null | 1702.01208 | null | null |
Towards Better Analysis of Machine Learning Models: A Visual Analytics
Perspective | cs.LG stat.ML | Interactive model analysis, the process of understanding, diagnosing, and
refining a machine learning model with the help of interactive visualization,
is very important for users to efficiently solve real-world artificial
intelligence and data mining problems. Dramatic advances in big data analytics
have led to a wide variety of interactive model analysis tasks. In this paper,
we present a comprehensive analysis and interpretation of this rapidly
developing area. Specifically, we classify the relevant work into three
categories: understanding, diagnosis, and refinement. Each category is
exemplified by recent influential work. Possible future research opportunities
are also explored and discussed.
| Shixia Liu, Xiting Wang, Mengchen Liu, Jun Zhu | null | 1702.01226 | null | null |
A Learning-Based Approach for Lane Departure Warning Systems with a
Personalized Driver Model | cs.LG cs.SY | Misunderstanding of driver correction behaviors (DCB) is the primary reason
for false warnings of lane-departure-prediction systems. We propose a
learning-based approach to predicting unintended lane-departure behaviors (LDB)
and the chance for drivers to bring the vehicle back to the lane. First, in
this approach, a personalized driver model for lane-departure and lane-keeping
behavior is established by combining the Gaussian mixture model and the hidden
Markov model. Second, based on this model, we develop an online model-based
prediction algorithm to predict the forthcoming vehicle trajectory and judge
whether the driver will demonstrate an LDB or a DCB. We also develop a warning
strategy based on the model-based prediction algorithm that allows the
lane-departure warning system to be acceptable for drivers according to the
predicted trajectory. In addition, the naturalistic driving data of 10 drivers
is collected through the University of Michigan Safety Pilot Model Deployment
program to train the personalized driver model and validate this approach. We
compare the proposed method with a basic time-to-lane-crossing (TLC) method and
a TLC-directional sequence of piecewise lateral slopes (TLC-DSPLS) method. The
results show that the proposed approach can reduce the false-warning rate to
3.07\%.
| Wenshuo Wang and Ding Zhao and Junqiang Xi and Wei Han | null | 1702.01228 | null | null |
Simple to Complex Cross-modal Learning to Rank | cs.LG stat.ML | The heterogeneity-gap between different modalities brings a significant
challenge to multimedia information retrieval. Some studies formalize the
cross-modal retrieval tasks as a ranking problem and learn a shared multi-modal
embedding space to measure the cross-modality similarity. However, previous
methods often establish the shared embedding space based on linear mapping
functions which might not be sophisticated enough to reveal more complicated
inter-modal correspondences. Additionally, current studies assume that the
rankings are of equal importance, and thus all rankings are used
simultaneously, or a small number of rankings are selected randomly to train
the embedding space at each iteration. Such strategies, however, always suffer
from outliers as well as reduced generalization capability due to their lack of
insightful understanding of procedure of human cognition. In this paper, we
involve the self-paced learning theory with diversity into the cross-modal
learning to rank and learn an optimal multi-modal embedding space based on
non-linear mapping functions. This strategy enhances the model's robustness to
outliers and achieves better generalization via training the model gradually
from easy rankings by diverse queries to more complex ones. An efficient
alternative algorithm is exploited to solve the proposed challenging problem
with fast convergence in practice. Extensive experimental results on several
benchmark datasets indicate that the proposed method achieves significant
improvements over the state of the art in this literature.
| Minnan Luo and Xiaojun Chang and Zhihui Li and Liqiang Nie and
Alexander G. Hauptmann and Qinghua Zheng | null | 1702.01229 | null | null |
Network-based methods for outcome prediction in the "sample space" | cs.LG stat.ML | In this thesis we present the novel semi-supervised network-based algorithm
P-Net, which is able to rank and classify patients with respect to a specific
phenotype or clinical outcome under study. The peculiar and innovative
characteristic of this method is that it builds a network of samples/patients,
where the nodes represent the samples and the edges are functional or genetic
relationships between individuals (e.g. similarity of expression profiles), to
predict the phenotype under study. In other words, it constructs the network in
the "sample space" and not in the "biomarker space" (where nodes represent
biomolecules (e.g. genes, proteins) and edges represent functional or genetic
relationships between nodes), as usual in state-of-the-art methods. To assess
the performance of P-Net, we apply it to three different publicly available
datasets from patients afflicted with a specific type of tumor: pancreatic
cancer, melanoma and ovarian cancer, using the data and following
the experimental set-up proposed in two recently published papers [Barter et
al., 2014, Winter et al., 2012]. We show that network-based methods in the
"sample space" can achieve results competitive with classical supervised
inductive systems. Moreover, the graph representation of the samples can be
easily visualized through networks and can be used to gain visual clues about
the relationships between samples, taking into account the phenotype associated
or predicted for each sample. To our knowledge this is one of the first works
that proposes graph-based algorithms working in the "sample space" of the
biomolecular profiles of the patients to predict their phenotype or outcome,
thus contributing to a novel research line in the framework of the Network
Medicine.
| Jessica Gliozzo | null | 1702.01268 | null | null |
Latent Hinge-Minimax Risk Minimization for Inference from a Small Number
of Training Samples | cs.LG cs.CV | Deep Learning (DL) methods show very good performance when trained on large,
balanced data sets. However, many practical problems involve imbalanced data
sets, or/and classes with a small number of training samples. The performance
of DL methods as well as more traditional classifiers drops significantly in
such settings. Most of the existing solutions for imbalanced problems focus on
customizing the data for training. A more principled solution is to use mixed
Hinge-Minimax risk [19] specifically designed to solve binary problems with
imbalanced training sets. Here we propose a Latent Hinge Minimax (LHM) risk and
a training algorithm that generalizes this paradigm to an ensemble of
hyperplanes that can form arbitrary complex, piecewise linear boundaries. To
extract good features, we combine LHM model with CNN via transfer learning. To
solve the multi-class problem, we map pre-trained category-specific LHM
classifiers to a multi-class neural network and adjust the weights with very
fast tuning. The LHM classifier enables the use of unlabeled data in its training, and the
mapping allows for multi-class inference, resulting in a classifier that
performs better than alternatives when trained on a small number of training
samples.
| Dolev Raviv and Margarita Osadchy | null | 1702.01293 | null | null |
Cluster-based Kriging Approximation Algorithms for Complexity Reduction | cs.LG cs.AI stat.ML | Kriging or Gaussian Process Regression is applied in many fields as a
non-linear regression model as well as a surrogate model in the field of
evolutionary computation. However, the computational and space complexity of
Kriging, that is cubic and quadratic in the number of data points respectively,
becomes a major bottleneck with more and more data available nowadays. In this
paper, we propose a general methodology for the complexity reduction, called
cluster Kriging, where the whole data set is partitioned into smaller clusters
and multiple Kriging models are built on top of them. In addition, four Kriging
approximation algorithms are proposed as candidate algorithms within the new
framework. Each of these algorithms can be applied to much larger data sets
while maintaining the advantages and power of Kriging. The proposed algorithms
are explained in detail and compared empirically against a broad set of
existing state-of-the-art Kriging approximation methods on a well-defined
testing framework. According to the empirical study, the proposed algorithms
consistently outperform the existing algorithms. Moreover, some practical
suggestions are provided for using the proposed algorithms.
| Bas van Stein, Hao Wang, Wojtek Kowalczyk, Michael Emmerich, Thomas
B\"ack | null | 1702.01313 | null | null |
An Experimental Study of Deep Convolutional Features For Iris
Recognition | cs.CV cs.LG | Iris is one of the popular biometrics that is widely used for identity
authentication. Different features have been used to perform iris recognition
in the past. Most of them are based on hand-crafted features designed by
biometrics experts. Due to tremendous success of deep learning in computer
vision problems, there has been a lot of interest in applying features learned
by convolutional neural networks on general image recognition to other tasks
such as segmentation, face recognition, and object detection. In this paper, we
have investigated the application of deep features extracted from VGG-Net for
iris recognition. The proposed scheme has been tested on two well-known iris
databases, and has shown promising results with the best accuracy rate of
99.4\%, which outperforms the previous best result.
| Shervin Minaee, Amirali Abdolrashidi and Yao Wang | null | 1702.01334 | null | null |
Deep learning and the Schr\"odinger equation | cond-mat.mtrl-sci cs.LG physics.chem-ph | We have trained a deep (convolutional) neural network to predict the
ground-state energy of an electron in four classes of confining two-dimensional
electrostatic potentials. On randomly generated potentials, for which there is
no analytic form for either the potential or the ground-state energy, the
neural network model was able to predict the ground-state energy to within
chemical accuracy, with a median absolute error of 1.49 mHa. We also
investigate the performance of the model in predicting other quantities such as
the kinetic energy and the first excited-state energy of random potentials.
| Kyle Mills, Michael Spanner, and Isaac Tamblyn | 10.1103/PhysRevA.96.042113 | 1702.01361 | null | null |
A scikit-based Python environment for performing multi-label
classification | cs.LG cs.MS | scikit-multilearn is a Python library for performing multi-label
classification. The library is compatible with the scikit/scipy ecosystem and
uses sparse matrices for all internal operations. It provides native Python
implementations of popular multi-label classification methods alongside a novel
framework for label space partitioning and division. It includes modern
algorithm adaptation methods, network-based label space division approaches
that extract label dependency information, and multi-label embedding
classifiers. It provides Python-wrapped access to the extensive multi-label
method stack from Java libraries and makes it possible to extend deep learning
single-label methods for multi-label tasks. The library allows multi-label
stratification and data set management. The implementation is more efficient in
problem transformation than other established libraries, has good test coverage
and follows PEP8. Source code and documentation can be downloaded from
http://scikit.ml and also via pip. The library follows BSD licensing scheme.
| Piotr Szyma\'nski, Tomasz Kajdanowicz | null | 1702.0146 | null | null |
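A minimal usage sketch, assuming the library's problem-transformation API (pip install scikit-multilearn) and any scikit-learn base classifier; the data here is random and purely illustrative.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from skmultilearn.problem_transform import BinaryRelevance

X = np.random.rand(100, 20)                     # 100 samples, 20 features
y = (np.random.rand(100, 5) > 0.7).astype(int)  # 5 binary labels per sample

clf = BinaryRelevance(classifier=GaussianNB())  # one binary model per label
clf.fit(X, y)
predictions = clf.predict(X)                    # returns a sparse matrix
print(predictions.toarray()[:3])
```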
Optimizing Cost-Sensitive SVM for Imbalanced Data: Connecting Cluster to
Classification | cs.LG | Class imbalance is one of the challenging problems for machine learning in
many real-world applications, such as coal and gas burst accident monitoring:
the burst premonition data is far smaller than the normal data, yet it is
precisely the class we truly focus on. The cost-sensitive adjustment approach
is a typical algorithm-level method for resisting data set imbalance. For the
SVM classifier, the model is modified to incorporate a varying penalty
parameter (C) for each of the considered groups of examples. However, the C
value is determined empirically, or is calculated according to the evaluation
metric, which needs to be computed iteratively and is time-consuming. This paper presents a novel
cost-sensitive SVM method whose penalty parameter C optimized on the basis of
cluster probability density function(PDF) and the cluster PDF is estimated only
according to similarity matrix and some predefined hyper-parameters.
Experimental results on various standard benchmark data sets and real-world
data with different ratios of imbalance show that the proposed method is
effective in comparison with commonly used cost-sensitive techniques.
| Qiuyan Yan, Shixiong Xia, Fanrong Meng | null | 1702.01504 | null | null |
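In scikit-learn, per-class penalties of the kind discussed above are exposed through class_weight, which scales C per class. The paper's actual contribution, deriving these penalties from cluster probability density functions, is not implemented in this sketch; the weights below are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

X = np.random.randn(1000, 10)
y = (np.random.rand(1000) > 0.95).astype(int)   # ~5% positives: imbalanced

# class_weight multiplies C per class, i.e. C_pos = 19 * C_neg here;
# the paper would instead derive these penalties from cluster PDFs.
clf = SVC(C=1.0, class_weight={0: 1.0, 1: 19.0})
clf.fit(X, y)
```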
Calibrating Energy-based Generative Adversarial Networks | cs.LG | In this paper, we propose to equip Generative Adversarial Networks with the
ability to produce direct energy estimates for samples. Specifically, we
propose a flexible adversarial training framework, and prove that this
framework not only ensures the generator converges to the true data
distribution, but also enables the discriminator to retain the density
information at the global optimum. We
derive the analytic form of the induced solution, and analyze the properties.
In order to make the proposed framework trainable in practice, we introduce two
effective approximation techniques. Empirically, the experiment results closely
match our theoretical analysis, verifying the discriminator is able to recover
the energy of data distribution.
| Zihang Dai, Amjad Almahairi, Philip Bachman, Eduard Hovy, Aaron
Courville | null | 1702.01691 | null | null |
Search Intelligence: Deep Learning For Dominant Category Prediction | cs.IR cs.LG stat.ML | Deep Neural Networks, and specifically fully-connected convolutional neural
networks are achieving remarkable results across a wide variety of domains.
They have been trained to achieve state-of-the-art performance when applied to
problems such as speech recognition, image classification, natural language
processing and bioinformatics. Most of these deep learning models when applied
to classification employ the softmax activation function for prediction and aim
to minimize cross-entropy loss. In this paper, we have proposed a supervised
model for dominant category prediction to improve search recall across all eBay
classifieds platforms. The dominant category label for each query in the last
90 days is first calculated by summing the total number of collaborative clicks
among all categories. The category having the highest number of collaborative
clicks for the given query will be considered its dominant category. Second,
each query is transformed to a numeric vector by mapping each unique word in
the query document to a unique integer value; all padded to equal length based
on the maximum document length within the pre-defined vocabulary size. A
fully-connected deep convolutional neural network (CNN) is then applied for
classification. The proposed model achieves very high classification accuracy
compared to other state-of-the-art machine learning techniques.
| Zeeshan Khawar Malik, Mo Kobrosli and Peter Maas | null | 1702.01717 | null | null |
Toward the automated analysis of complex diseases in genome-wide
association studies using genetic programming | cs.NE cs.LG q-bio.QM stat.ML | Machine learning has been gaining traction in recent years to meet the demand
for tools that can efficiently analyze and make sense of the ever-growing
databases of biomedical data in health care systems around the world. However,
effectively using machine learning methods requires considerable domain
expertise, which can be a barrier of entry for bioinformaticians new to
computational data science methods. Therefore, off-the-shelf tools that make
machine learning more accessible can prove invaluable for bioinformaticians. To
this end, we have developed an open source pipeline optimization tool
(TPOT-MDR) that uses genetic programming to automatically design machine
learning pipelines for bioinformatics studies. In TPOT-MDR, we implement
Multifactor Dimensionality Reduction (MDR) as a feature construction method for
modeling higher-order feature interactions, and combine it with a new expert
knowledge-guided feature selector for large biomedical data sets. We
demonstrate TPOT-MDR's capabilities using a combination of simulated and real
world data sets from human genetics and find that TPOT-MDR significantly
outperforms modern machine learning methods such as logistic regression and
eXtreme Gradient Boosting (XGBoost). We further analyze the best pipeline
discovered by TPOT-MDR for a real world problem and highlight TPOT-MDR's
ability to produce a high-accuracy solution that is also easily interpretable.
| Andrew Sohn and Randal S. Olson and Jason H. Moore | null | 1702.0178 | null | null |
Predicting Pairwise Relations with Neural Similarity Encoders | stat.ML cs.LG | Matrix factorization is at the heart of many machine learning algorithms, for
example, dimensionality reduction (e.g. kernel PCA) or recommender systems
relying on collaborative filtering. Understanding a singular value
decomposition (SVD) of a matrix as a neural network optimization problem
enables us to decompose large matrices efficiently while dealing naturally with
missing values in the given matrix. But most importantly, it allows us to learn
the connection between data points' feature vectors and the matrix containing
information about their pairwise relations. In this paper we introduce a novel
neural network architecture termed Similarity Encoder (SimEc), which is
designed to simultaneously factorize a given target matrix while also learning
the mapping to project the data points' feature vectors into a similarity
preserving embedding space. This makes it possible to, for example, easily
compute out-of-sample solutions for new data points. Additionally, we
demonstrate that SimEc can preserve non-metric similarities and even predict
multiple pairwise relations between data points at once.
| Franziska Horn and Klaus-Robert M\"uller | 10.24425/bpas.2018.125929 | 1702.01824 | null | null |
Neural Discourse Structure for Text Categorization | cs.CL cs.LG | We show that discourse structure, as defined by Rhetorical Structure Theory
and provided by an existing discourse parser, benefits text categorization. Our
approach uses a recursive neural network and a newly proposed attention
mechanism to compute a representation of the text that focuses on salient
content, from the perspective of both RST and the task. Experiments consider
variants of the approach and illustrate its strengths and weaknesses.
| Yangfeng Ji, Noah Smith | null | 1702.01829 | null | null |
Low Rank Matrix Recovery with Simultaneous Presence of Outliers and
Sparse Corruption | stat.ML cs.CV cs.LG | We study a data model in which the data matrix D can be expressed as D = L +
S + C, where L is a low rank matrix, S an element-wise sparse matrix and C a
matrix whose non-zero columns are outlying data points. To date, robust PCA
algorithms have solely considered models with either S or C, but not both. As
such, existing algorithms cannot account for simultaneous element-wise and
column-wise corruptions. In this paper, a new robust PCA algorithm that is
robust to simultaneous types of corruption is proposed. Our approach hinges on
the sparse approximation of a sparsely corrupted column so that the sparse
expansion of a column with respect to the other data points is used to
distinguish a sparsely corrupted inlier column from an outlying data point. We
also develop a randomized design which provides a scalable implementation of
the proposed approach. The core idea of sparse approximation is analyzed
analytically, where we show that the underlying $\ell_1$-norm minimization can
obtain the representation of an inlier in presence of sparse corruptions.
| Mostafa Rahmani, George Atia | 10.1109/JSTSP.2018.2876604 | 1702.01847 | null | null |
A multi-channel approach for automatic microseismic event association
using RANSAC-based arrival time event clustering(RATEC) | physics.geo-ph cs.LG | In the presence of background noise, arrival times picked from a surface
microseismic data set usually include a number of false picks that can lead to
uncertainty in location estimation. To eliminate false picks and improve the
accuracy of location estimates, we develop an association algorithm termed
RANSAC-based Arrival Time Event Clustering (RATEC) that clusters picked arrival
times into event groups based on random sampling and fitting moveout curves
that approximate hyperbolas. Arrival times far from the fitted hyperbolas are
classified as false picks and removed from the data set prior to location
estimation. Simulations of synthetic data for a 1-D linear array show that
RATEC is robust under different noise conditions and generally applicable to
various types of subsurface structures. By generalizing the underlying moveout
model, RATEC is extended to the case of a 2-D surface monitoring array. The
effectiveness of event location for the 2-D case is demonstrated using a data
set collected by the 5200-element dense Long Beach array. The obtained results
suggest that RATEC is effective in removing false picks and hence can be used
for phase association before location estimates.
| Lijun Zhu, Lindsay Chuang, James H. McClellan, Entao Liu, and Zhigang
Peng | null | 1702.01856 | null | null |
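A sketch of the RANSAC-style association loop described above. Because squared arrival time is quadratic in receiver offset for a hyperbolic moveout curve, each candidate fit reduces to linear least squares on (x, t^2); the tolerance and iteration counts are illustrative, not the paper's settings.

```python
import numpy as np

def ransac_moveout(x, t, n_iter=500, tol=0.01, min_inliers=8, seed=0):
    """Cluster picks on a hyperbolic moveout curve t(x) = sqrt(a x^2 + b x + c).
    Since t^2 is quadratic in x, each candidate fit is linear least squares."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(x), size=3, replace=False)  # minimal sample
        coeffs = np.polyfit(x[idx], t[idx] ** 2, 2)
        t2 = np.polyval(coeffs, x)
        inliers = (t2 > 0) & (np.abs(np.sqrt(np.maximum(t2, 0)) - t) < tol)
        if inliers.sum() > best.sum():
            best = inliers
    return best if best.sum() >= min_inliers else np.zeros(len(x), dtype=bool)

# picks on a true hyperbola plus scattered false picks
x = np.linspace(-1, 1, 30)
t_true = np.sqrt(0.25 + x ** 2 / 4.0)
t = np.concatenate([t_true, np.random.uniform(0.4, 1.0, 15)])
x = np.concatenate([x, np.random.uniform(-1, 1, 15)])
print(ransac_moveout(x, t).sum(), "picks kept")
```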
Sparse Algorithm for Robust LSSVM in Primal Space | cs.LG stat.ML | Enjoying a closed-form solution, the least squares support vector machine
(LSSVM) has been widely used for classification and regression problems, with
performance comparable to other types of SVMs. However, LSSVM has two
drawbacks: it is sensitive to outliers and lacks sparseness. Robust LSSVM (R-LSSVM)
partly overcomes the first drawback via a nonconvex truncated loss function, but the
current algorithms for R-LSSVM with the dense solution are faced with the
second drawback and are inefficient for training large-scale problems. In this
paper, we interpret the robustness of R-LSSVM from a re-weighted viewpoint and
give a primal R-LSSVM by the representer theorem. The new model may have sparse
solution if the corresponding kernel matrix has low rank. Then approximating
the kernel matrix by a low-rank matrix and smoothing the loss function by
entropy penalty function, we propose a convergent sparse R-LSSVM (SR-LSSVM)
algorithm to achieve the sparse solution of primal R-LSSVM, which overcomes two
drawbacks of LSSVM simultaneously. The proposed algorithm has lower complexity
than the existing algorithms and is very efficient for training large-scale
problems. Many experimental results illustrate that SR-LSSVM can achieve better
or comparable performance with less training time than related algorithms,
especially for training large scale problems.
| Li Chen and Shuisheng Zhou | null | 1702.01935 | null | null |
Continuous-Time User Modeling in the Presence of Badges: A Probabilistic
Approach | cs.SI cs.LG | User modeling plays an important role in delivering customized web services
to the users and improving their engagement. However, most user models in the
literature do not explicitly consider the temporal behavior of users. More
recently, continuous-time user modeling has gained considerable attention and
many user behavior models have been proposed based on temporal point processes.
However, typical point-process-based models often consider only the impact of
peer influence and content on user participation and neglect other factors.
Gamification elements are among those neglected factors, although they
have a strong impact on user participation in online services. In this paper,
we propose interdependent multi-dimensional temporal point processes that
capture the impact of badges on user participation besides the peer influence
and content factors. We extend the proposed processes to model user actions
over community-based question-and-answering websites, and propose an
inference algorithm based on Variational-EM that can efficiently learn the
model parameters. Extensive experiments on both synthetic and real data
gathered from Stack Overflow show that our inference algorithm learns the
parameters efficiently and the proposed method can better predict the user
behavior compared to the alternatives.
| Ali Khodadadi, Seyed Abbas Hosseini, Erfan Tavakoli, Hamid R. Rabiee | null | 1702.01948 | null | null |
Representations of language in a model of visually grounded speech
signal | cs.CL cs.AI cs.LG | We present a visually grounded model of speech perception which projects
spoken utterances and images to a joint semantic space. We use a multi-layer
recurrent highway network to model the temporal nature of spoken speech, and
show that it learns to extract both form and meaning-based linguistic knowledge
from the input signal. We carry out an in-depth analysis of the representations
used by different components of the trained model and show that encoding of
semantic aspects tends to become richer as we go up the hierarchy of layers,
whereas encoding of form-related aspects of the language input tends to
initially increase and then plateau or decrease.
| Grzegorz Chrupa{\l}a, Lieke Gelderloos, Afra Alishahi | 10.18653/v1/P17-1057 | 1702.01991 | null | null |
Gated Multimodal Units for Information Fusion | stat.ML cs.LG | This paper presents a novel model for multimodal learning based on gated
neural networks. The Gated Multimodal Unit (GMU) model is intended to be used
as an internal unit in a neural network architecture whose purpose is to find
an intermediate representation based on a combination of data from different
modalities. The GMU learns to decide how modalities influence the activation of
the unit using multiplicative gates. It was evaluated on a multilabel scenario
for genre classification of movies using the plot and the poster. The GMU
improved the macro f-score performance of single-modality approaches and
outperformed other fusion strategies, including mixture of experts models.
Along with this work, the MM-IMDb dataset is released which, to the best of our
knowledge, is the largest publicly available multimodal dataset for genre
prediction on movies.
| John Arevalo, Thamar Solorio, Manuel Montes-y-G\'omez, Fabio A.
Gonz\'alez | null | 1702.01992 | null | null |
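A sketch of a bimodal gated unit following the abstract's description, assuming PyTorch: a learned sigmoid gate decides, per output dimension, how much the visual versus textual representation contributes. Layer sizes are illustrative (e.g., CNN image features and word-embedding text features).

```python
import torch
import torch.nn as nn

class GatedMultimodalUnit(nn.Module):
    """Bimodal GMU: a learned gate z mixes the two modality representations."""
    def __init__(self, dim_visual, dim_text, dim_out):
        super().__init__()
        self.fc_v = nn.Linear(dim_visual, dim_out)
        self.fc_t = nn.Linear(dim_text, dim_out)
        self.fc_z = nn.Linear(dim_visual + dim_text, dim_out)

    def forward(self, x_v, x_t):
        h_v = torch.tanh(self.fc_v(x_v))
        h_t = torch.tanh(self.fc_t(x_t))
        z = torch.sigmoid(self.fc_z(torch.cat([x_v, x_t], dim=-1)))
        return z * h_v + (1.0 - z) * h_t   # multiplicative gate per dimension

gmu = GatedMultimodalUnit(dim_visual=4096, dim_text=300, dim_out=512)
h = gmu(torch.randn(8, 4096), torch.randn(8, 300))
print(h.shape)  # torch.Size([8, 512])
```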
Truncated Variational EM for Semi-Supervised Neural Simpletrons | stat.ML cs.LG | Inference and learning for probabilistic generative networks is often very
challenging and typically prevents scaling to networks as large as those used
for deep discriminative approaches. To obtain efficiently trainable, large-scale
and well performing generative networks for semi-supervised learning, we here
combine two recent developments: a neural network reformulation of hierarchical
Poisson mixtures (Neural Simpletrons), and a novel truncated variational EM
approach (TV-EM). TV-EM provides theoretical guarantees for learning in
generative networks, and its application to Neural Simpletrons results in
particularly compact, yet approximately optimal, modifications of learning
equations. If applied to standard benchmarks, we empirically find, that
learning converges in fewer EM iterations, that the complexity per EM iteration
is reduced, and that final likelihood values are higher on average. For the
task of classification on data sets with few labels, learning improvements
result in consistently lower error rates if compared to applications without
truncation. Experiments on the MNIST data set herein allow for comparison to
standard and state-of-the-art models in the semi-supervised setting. Further
experiments on the NIST SD19 data set show the scalability of the approach when
a large amount of additional unlabeled data is available.
| Dennis Forster and J\"org L\"ucke | null | 1702.01997 | null | null |
Empirical Risk Minimization for Stochastic Convex Optimization:
$O(1/n)$- and $O(1/n^2)$-type of Risk Bounds | cs.LG | Although there exist plentiful theories of empirical risk minimization (ERM)
for supervised learning, current theoretical understandings of ERM for a
related problem---stochastic convex optimization (SCO), are limited. In this
work, we strengthen the realm of ERM for SCO by exploiting smoothness and
strong convexity conditions to improve the risk bounds. First, we establish an
$\widetilde{O}(d/n + \sqrt{F_*/n})$ risk bound when the random function is
nonnegative, convex and smooth, and the expected function is Lipschitz
continuous, where $d$ is the dimensionality of the problem, $n$ is the number
of samples, and $F_*$ is the minimal risk. Thus, when $F_*$ is small we obtain
an $\widetilde{O}(d/n)$ risk bound, which is analogous to the
$\widetilde{O}(1/n)$ optimistic rate of ERM for supervised learning. Second, if
the objective function is also $\lambda$-strongly convex, we prove an
$\widetilde{O}(d/n + \kappa F_*/n )$ risk bound where $\kappa$ is the condition
number, and improve it to $O(1/[\lambda n^2] + \kappa F_*/n)$ when
$n=\widetilde{\Omega}(\kappa d)$. As a result, we obtain an $O(\kappa/n^2)$
risk bound under the condition that $n$ is large and $F_*$ is small, which to
the best of our knowledge, is the first $O(1/n^2)$-type of risk bound of ERM.
Third, we stress that the above results are established in a unified framework,
which allows us to derive new risk bounds under weaker conditions, e.g.,
without convexity of the random function and Lipschitz continuity of the
expected function. Finally, we demonstrate that to achieve an $O(1/[\lambda
n^2] + \kappa F_*/n)$ risk bound for supervised learning, the
$\widetilde{\Omega}(\kappa d)$ requirement on $n$ can be replaced with
$\Omega(\kappa^2)$, which is dimensionality-independent.
| Lijun Zhang, Tianbao Yang, Rong Jin | null | 1702.0203 | null | null |
Preference-based Teaching | cs.LG | We introduce a new model of teaching named "preference-based teaching" and a
corresponding complexity parameter---the preference-based teaching dimension
(PBTD)---representing the worst-case number of examples needed to teach any
concept in a given concept class. Although the PBTD coincides with the
well-known recursive teaching dimension (RTD) on finite classes, it is
radically different on infinite ones: the RTD becomes infinite already for
trivial infinite classes (such as half-intervals) whereas the PBTD evaluates to
reasonably small values for a wide collection of infinite classes including
classes consisting of so-called closed sets w.r.t. a given closure operator,
including various classes related to linear sets over $\mathbb{N}_0$ (whose RTD
had been studied quite recently) and including the class of Euclidean
half-spaces. On top of presenting these concrete results, we provide the reader
with a theoretical framework (of a combinatorial flavor) which helps to derive
bounds on the PBTD.
| Ziyuan Gao, Christoph Ries, Hans Ulrich Simon and Sandra Zilles | null | 1702.02047 | null | null |
Knowledge Adaptation: Teaching to Adapt | cs.CL cs.LG | Domain adaptation is crucial in many real-world applications where the
distribution of the training data differs from the distribution of the test
data. Previous Deep Learning-based approaches to domain adaptation need to be
trained jointly on source and target domain data and are therefore unappealing
in scenarios where models need to be adapted to a large number of domains or
where a domain is evolving, e.g. spam detection where attackers continuously
change their tactics.
To fill this gap, we propose Knowledge Adaptation, an extension of Knowledge
Distillation (Bucilua et al., 2006; Hinton et al., 2015) to the domain
adaptation scenario. We show how a student model achieves state-of-the-art
results on unsupervised domain adaptation from multiple sources on a standard
sentiment analysis benchmark by taking into account the domain-specific
expertise of multiple teachers and the similarities between their domains.
When learning from a single teacher, using domain similarity to gauge
trustworthiness is inadequate. To this end, we propose a simple metric that
correlates well with the teacher's accuracy in the target domain. We
demonstrate that incorporating high-confidence examples selected by this metric
enables the student model to achieve state-of-the-art performance in the
single-source scenario.
| Sebastian Ruder, Parsa Ghaffari, and John G. Breslin | null | 1702.02052 | null | null |
Estimation of classrooms occupancy using a multi-layer perceptron | cs.NE cs.LG | This paper presents a multi-layer perceptron model for the estimation of
the number of occupants in classrooms from sensed indoor environmental data:
relative humidity, air temperature, and carbon dioxide concentration. The modelling
datasets were collected from two classrooms in the Secondary School of Pombal,
Portugal. The number of occupants and occupation periods were obtained from
class attendance reports. However, post-class occupancy was unknown and the
developed model is used to reconstruct the classrooms occupancy by filling the
unreported periods. Different model structures and combinations of environment
variables were tested. The most accurate model had an input vector of 10
variables, consisting of five averaged time intervals of relative humidity and carbon
dioxide concentration. The model presented a mean square error of 1.99,
coefficient of determination of 0.96 with a significance of p-value < 0.001,
and a mean absolute error of 1 occupant. These results show promising
estimation capabilities in uncertain indoor environment conditions.
| Eug\'enio Rodrigues and Lu\'isa Dias Pereira and Ad\'elio Rodrigues
Gaspar and \'Alvaro Gomes and Manuel Carlos Gameiro da Silva | null | 1702.02125 | null | null |
Rapid parametric density estimation | cs.LG | Parametric density estimation, for example as a Gaussian distribution, is the
basis of the field of statistics. Machine learning requires inexpensive
estimation of much more complex densities, and the basic approach, maximum
likelihood estimation (MLE), is relatively costly. We discuss inexpensive
density estimation, for example literally fitting a polynomial (or Fourier
series) to the sample, whose coefficients are calculated by just averaging
monomials (or sines/cosines) over the sample. Another basic application
discussed is fitting a distortion to some standard distribution like the
Gaussian, analogously to ICA, but additionally allowing reconstruction of the
disturbed density. Finally, by using a weighted average, the approach can also be applied for the estimation of
non-probabilistic densities, like modelling mass distribution, or for various
clustering problems by using negative (or complex) weights: fitting a function
which sign (or argument) determines clusters. The estimated parameters are
approaching the optimal values with error dropping like $1/\sqrt{n}$, where $n$
is the sample size.
| Jarek Duda | null | 1702.02144 | null | null |
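A sketch of the coefficients-by-averaging idea above, using an orthonormal Legendre basis on [-1, 1]: orthonormality makes each coefficient the expectation of a basis function, estimated by a sample mean with error dropping like $1/\sqrt{n}$. The degree and the toy sample are illustrative.

```python
import numpy as np
from numpy.polynomial import legendre

def fit_density(sample, degree=8):
    """rho(x) ~= sum_k a_k b_k(x) with orthonormal Legendre b_k on [-1, 1];
    orthonormality gives a_k = E[b_k(X)], estimated by a sample mean."""
    norms = np.sqrt((2 * np.arange(degree + 1) + 1) / 2.0)
    coeffs = np.array([
        norms[k] * legendre.legval(sample, np.eye(degree + 1)[k]).mean()
        for k in range(degree + 1)
    ])
    return lambda x: sum(
        coeffs[k] * norms[k] * legendre.legval(x, np.eye(degree + 1)[k])
        for k in range(degree + 1)
    )

rng = np.random.default_rng(0)
sample = np.clip(rng.normal(0.2, 0.3, 20000), -1, 1)  # support within [-1, 1]
rho = fit_density(sample, degree=10)
print(rho(np.array([0.0, 0.2, 0.8])))
```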
Deep Learning with Dynamic Computation Graphs | cs.NE cs.LG stat.ML | Neural networks that compute over graph structures are a natural fit for
problems in a variety of domains, including natural language (parse trees) and
cheminformatics (molecular graphs). However, since the computation graph has a
different shape and size for every input, such networks do not directly support
batched training or inference. They are also difficult to implement in popular
deep learning libraries, which are based on static data-flow graphs. We
introduce a technique called dynamic batching, which not only batches together
operations between different input graphs of dissimilar shape, but also between
different nodes within a single input graph. The technique allows us to create
static graphs, using popular libraries, that emulate dynamic computation graphs
of arbitrary shape and size. We further present a high-level library of
compositional blocks that simplifies the creation of dynamic graph models.
Using the library, we demonstrate concise and batch-wise parallel
implementations for a variety of models from the literature.
| Moshe Looks, Marcello Herreshoff, DeLesley Hutchins, Peter Norvig | null | 1702.02181 | null | null |
Transfer from Multiple Linear Predictive State Representations (PSR) | cs.LG | In this paper, we tackle the problem of transferring policy from multiple
partially observable source environments to a partially observable target
environment modeled as predictive state representation. This is an entirely new
approach with no previous work, other than the case of transfer in fully
observable domains. We develop algorithms to successfully achieve policy
transfer when we have the model of both the source and target tasks and discuss
in detail their performance and shortcomings. These algorithms could be a
starting point for the field of transfer learning in partial observability.
| Sri Ramana Sekharan, Ramkumar Natarajan, Siddharthan Rajasekaran | null | 1702.02184 | null | null |
Semi-Supervised QA with Generative Domain-Adaptive Nets | cs.CL cs.LG | We study the problem of semi-supervised question answering -- utilizing
unlabeled text to boost the performance of question answering models. We
propose a novel training framework, the Generative Domain-Adaptive Nets. In
this framework, we train a generative model to generate questions based on the
unlabeled text, and combine model-generated questions with human-generated
questions for training question answering models. We develop novel domain
adaptation algorithms, based on reinforcement learning, to alleviate the
discrepancy between the model-generated data distribution and the
human-generated data distribution. Experiments show that our proposed framework
obtains substantial improvement from unlabeled text.
| Zhilin Yang, Junjie Hu, Ruslan Salakhutdinov, William W. Cohen | null | 1702.02206 | null | null |
Integration of Machine Learning Techniques to Evaluate Dynamic Customer
Segmentation Analysis for Mobile Customers | cs.CY cs.LG stat.ML | The telecommunications industry is highly competitive, which means that the
mobile providers need a business intelligence model that can be used to achieve
an optimal level of churn, as well as a minimal level of cost in marketing
activities. Machine learning applications can be used to provide guidance on
marketing strategies. Furthermore, data mining techniques can be used in the
process of customer segmentation. The purpose of this paper is to provide a
detailed analysis of the C5.0 algorithm, combined with naive Bayesian
modelling, for the task of segmenting telecommunication customers' behavioural
profiles according to their billing and socio-demographic attributes. The
results have been validated experimentally.
| Cormac Dullaghan and Eleni Rozaki | 10.5121/ijdkp.2017.7102 | 1702.02215 | null | null |
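A minimal sketch of the two model families this abstract pairs, on synthetic data I invented for illustration; scikit-learn ships no C5.0, so a CART tree with entropy splits stands in for it:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 2000
# Hypothetical billing / socio-demographic features.
X = np.column_stack([
    rng.gamma(2.0, 30.0, n),        # monthly bill
    rng.integers(18, 80, n),        # age
    rng.integers(0, 24, n),         # tenure in months
])
# Hypothetical churn label, loosely tied to high bills and short tenure.
y = ((X[:, 0] > 70) & (X[:, 2] < 6)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [("tree", DecisionTreeClassifier(criterion="entropy")),
                    ("naive Bayes", GaussianNB())]:
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```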
Clustering For Point Pattern Data | cs.LG stat.ML | Clustering is one of the most common unsupervised learning tasks in machine
learning and data mining. Clustering algorithms have been used in a plethora of
applications across several scientific fields. However, there has been limited
research in the clustering of point patterns - sets or multi-sets of unordered
elements - that are found in numerous applications and data sources. In this
paper, we propose two approaches for clustering point patterns. The first is a
non-parametric method based on novel distances for sets. The second is a
model-based approach, formulated via random finite set theory, and solved by
the Expectation-Maximization algorithm. Numerical experiments show that the
proposed methods perform well on both simulated and real data.
| Quang N. Tran, Ba-Ngu Vo, Dinh Phung and Ba-Tuong Vo | null | 1702.02262 | null | null |
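The non-parametric route the abstract mentions can be sketched as follows; I use the classical Hausdorff distance as a stand-in for the paper's novel set distances, and note that the "precomputed" keyword is `metric` in scikit-learn 1.2+ (`affinity` in older versions):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from sklearn.cluster import AgglomerativeClustering

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(A, B)[0], directed_hausdorff(B, A)[0])

rng = np.random.default_rng(1)
# Point patterns of varying cardinality, drawn around two different centers.
patterns = [rng.normal(0.0, 0.3, (rng.integers(3, 8), 2)) for _ in range(10)] \
         + [rng.normal(3.0, 0.3, (rng.integers(3, 8), 2)) for _ in range(10)]

n = len(patterns)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = hausdorff(patterns[i], patterns[j])

labels = AgglomerativeClustering(
    n_clusters=2, metric="precomputed", linkage="average").fit_predict(D)
print(labels)   # patterns 0-9 vs 10-19 should fall into two clusters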
Matrix Completion from $O(n)$ Samples in Linear Time | stat.ML cs.DS cs.LG math.OC | We consider the problem of reconstructing a rank-$k$ $n \times n$ matrix $M$
from a sampling of its entries. Under a certain incoherence assumption on $M$
and for the case when both the rank and the condition number of $M$ are
bounded, it was shown in \cite{CandesRecht2009, CandesTao2010, keshavan2010,
Recht2011, Jain2012, Hardt2014} that $M$ can be recovered exactly or
approximately (depending on some trade-off between accuracy and computational
complexity) using $O(n \, \text{poly}(\log n))$ samples in super-linear time
$O(n^{a} \, \text{poly}(\log n))$ for some constant $a \geq 1$.
In this paper, we propose a new matrix completion algorithm using a novel
sampling scheme based on a union of independent sparse random regular bipartite
graphs. We show that under the same conditions w.h.p. our algorithm recovers an
$\epsilon$-approximation of $M$ in terms of the Frobenius norm using $O(n
\log^2(1/\epsilon))$ samples and in linear time $O(n \log^2(1/\epsilon))$. This
provides the best known bounds both on the sample complexity and computational
complexity for reconstructing (approximately) an unknown low-rank matrix.
The novelty of our algorithm is two new steps of thresholding singular values
and rescaling singular vectors in the application of the "vanilla" alternating
minimization algorithm. The structure of sparse random regular graphs is used
heavily for controlling the impact of these regularization steps.
| David Gamarnik, Quan Li and Hongyi Zhang | null | 1702.02267 | null | null |
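A hedged sketch of the algorithmic skeleton the abstract names: vanilla alternating minimization preceded by rank-k singular-value thresholding and a row-norm rescaling of the factors. The constants and the exact clipping rule here are ad hoc stand-ins, not the paper's:

```python
import numpy as np

def als_complete(M, mask, k, iters=25, clip=2.0):
    """M: observed values (zeros elsewhere); mask: boolean observed entries."""
    n, m = M.shape
    p = mask.mean()
    # Spectral initialization, keeping only the top-k singular values.
    U0, s, V0t = np.linalg.svd(M / p, full_matrices=False)
    U = U0[:, :k] * np.sqrt(s[:k])
    V = V0t[:k].T * np.sqrt(s[:k])
    for _ in range(iters):
        # Rescaling step: clip row norms so the factors stay incoherent.
        for A, dim in ((U, n), (V, m)):
            bound = clip * np.linalg.norm(A) / np.sqrt(dim)
            norms = np.linalg.norm(A, axis=1, keepdims=True) + 1e-12
            A *= np.minimum(1.0, bound / norms)
        # Vanilla alternating least squares over observed entries only.
        for i in range(n):
            J = mask[i]
            U[i] = np.linalg.lstsq(V[J], M[i, J], rcond=None)[0]
        for j in range(m):
            I = mask[:, j]
            V[j] = np.linalg.lstsq(U[I], M[I, j], rcond=None)[0]
    return U @ V.T

rng = np.random.default_rng(0)
n, k = 60, 3
truth = rng.normal(size=(n, k)) @ rng.normal(size=(k, n))
mask = rng.random((n, n)) < 0.35
M = np.where(mask, truth, 0.0)
est = als_complete(M, mask, k)
print(np.linalg.norm(est - truth) / np.linalg.norm(truth))  # small rel. error
```

Note this toy uses i.i.d. Bernoulli sampling for simplicity; the paper's linear-time guarantee relies on its union-of-sparse-regular-bipartite-graphs sampling scheme.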
Adversarial Attacks on Neural Network Policies | cs.LG cs.CR stat.ML | Machine learning classifiers are known to be vulnerable to inputs maliciously
constructed by adversaries to force misclassification. Such adversarial
examples have been extensively studied in the context of computer vision
applications. In this work, we show adversarial attacks are also effective when
targeting neural network policies in reinforcement learning. Specifically, we
show existing adversarial example crafting techniques can be used to
significantly degrade test-time performance of trained policies. Our threat
model considers adversaries capable of introducing small perturbations to the
raw input of the policy. We characterize the degree of vulnerability across
tasks and training algorithms, for a subclass of adversarial-example attacks in
white-box and black-box settings. Regardless of the learned task or training
algorithm, we observe a significant drop in performance, even with small
adversarial perturbations that do not interfere with human perception. Videos
are available at http://rll.berkeley.edu/adversarial.
| Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, Pieter Abbeel | null | 1702.02284 | null | null |
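A minimal white-box sketch of the kind of crafting technique the abstract reuses, here the fast gradient sign method (FGSM) against a toy softmax policy; the single linear layer and random weights are stand-ins for a trained policy network:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def fgsm_perturb(W, obs, eps):
    """Perturb obs to reduce the probability of the policy's chosen action."""
    probs = softmax(W @ obs)
    a = probs.argmax()                        # action the clean policy takes
    # Gradient of the cross-entropy loss -log p(a | obs) w.r.t. the input:
    grad = W.T @ (probs - np.eye(len(probs))[a])
    return obs + eps * np.sign(grad)          # FGSM step

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 16))                  # 4 actions, 16-dim observation
obs = rng.normal(size=16)

clean_action = softmax(W @ obs).argmax()
adv_action = softmax(W @ fgsm_perturb(W, obs, eps=0.1)).argmax()
print(clean_action, adv_action)               # often differ even for small eps
```

In the reinforcement-learning setting of the paper, the same perturbation is applied to the raw observation at every timestep of an episode.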
Data Selection Strategies for Multi-Domain Sentiment Analysis | cs.CL cs.LG | Domain adaptation is important in sentiment analysis as sentiment-indicating
words vary between domains. Recently, multi-domain adaptation has become more
pervasive, but existing approaches train on all available source domains
including dissimilar ones. However, the selection of appropriate training data
is as important as the choice of algorithm. We undertake -- to our knowledge
for the first time -- an extensive study of domain similarity metrics in the
context of sentiment analysis and propose novel representations, metrics, and a
new scope for data selection. We evaluate the proposed methods on two
large-scale multi-domain adaptation settings on tweets and reviews and
demonstrate that they consistently outperform strong random and balanced
baselines, while our proposed selection strategy outperforms instance-level
selection and yields the best score on a large reviews corpus.
| Sebastian Ruder, Parsa Ghaffari, and John G. Breslin | null | 1702.02426 | null | null |
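One common domain similarity metric of the kind this line of work studies is Jensen-Shannon divergence between term distributions; below is my own minimal variant of similarity-based data selection, with toy example texts:

```python
import numpy as np
from collections import Counter

def term_dist(texts, vocab):
    """Smoothed unigram distribution over a fixed vocabulary."""
    counts = Counter(w for t in texts for w in t.split())
    p = np.array([counts[w] for w in vocab], dtype=float) + 1.0  # add-one
    return p / p.sum()

def js_divergence(p, q):
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

target = ["the battery life is great", "screen is too dim"]
sources = {
    "electronics": ["battery drains fast", "great screen quality"],
    "restaurants": ["the soup was cold", "great service and food"],
}
vocab = sorted({w for t in target + sum(sources.values(), [])
                for w in t.split()})
p_t = term_dist(target, vocab)
ranked = sorted(sources,
                key=lambda d: js_divergence(term_dist(sources[d], vocab), p_t))
print(ranked)   # most similar source domain first -> prefer it for training
```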
Trainable Greedy Decoding for Neural Machine Translation | cs.CL cs.LG | Recent research in neural machine translation has largely focused on two
aspects: neural network architectures and end-to-end learning algorithms. The
problem of decoding, however, has received relatively little attention from the
research community. In this paper, we solely focus on the problem of decoding
given a trained neural machine translation model. Instead of trying to build a
new decoding algorithm for any specific decoding objective, we propose the idea
of a trainable decoding algorithm, in which we train a decoder to find
a translation that maximizes an arbitrary decoding objective. More
specifically, we design an actor that observes and manipulates the hidden state
of the neural machine translation decoder and propose to train it using a
variant of deterministic policy gradient. We extensively evaluate the proposed
algorithm using four language pairs and two decoding objectives and show that
we can indeed train a trainable greedy decoder that generates a better
translation (in terms of a target decoding objective) with minimal
computational overhead.
| Jiatao Gu, Kyunghyun Cho and Victor O.K. Li | null | 1702.02429 | null | null |
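A structural sketch of the actor-in-the-loop arrangement (PyTorch, toy sizes): the actor observes the decoder's hidden state and nudges it before each greedy step. Training via deterministic policy gradient is omitted, and all shapes and names are my assumptions:

```python
import torch
import torch.nn as nn

hid, vocab = 32, 100
decoder = nn.GRUCell(input_size=hid, hidden_size=hid)
embed = nn.Embedding(vocab, hid)
out_proj = nn.Linear(hid, vocab)
actor = nn.Sequential(nn.Linear(hid, hid), nn.Tanh(), nn.Linear(hid, hid))

def greedy_decode(h, max_len=10, use_actor=True):
    tok = torch.zeros(1, dtype=torch.long)   # <bos> placeholder id 0
    out = []
    for _ in range(max_len):
        h = decoder(embed(tok), h)
        if use_actor:
            h = h + actor(h)                 # actor manipulates the state
        tok = out_proj(h).argmax(dim=-1)     # plain greedy choice afterwards
        out.append(tok.item())
    return out

h0 = torch.zeros(1, hid)
print(greedy_decode(h0))   # with the (here untrained) actor in the loop
```

The appeal of this decomposition is that the underlying translation model is frozen; only the small actor is trained toward the chosen decoding objective.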
Preparing for the Unknown: Learning a Universal Policy with Online
System Identification | cs.LG cs.RO cs.SY | We present a new method of learning control policies that successfully
operate under unknown dynamic models. We create such policies by leveraging a
large number of training examples that are generated using a physical
simulator. Our system is made of two components: a Universal Policy (UP) and a
function for Online System Identification (OSI). We describe our control policy
as universal because it is trained over a wide array of dynamic models. These
variations in the dynamic model may include differences in mass and inertia of
the robots' components, variable friction coefficients, or unknown mass of an
object to be manipulated. By training the Universal Policy with this variation,
the control policy is prepared for a wider array of possible conditions when
executed in an unknown environment. The second part of our system uses the
recent state and action history of the system to predict the dynamics model
parameters $\mu$. The value of $\mu$ from the Online System Identification is then
provided as input to the control policy (along with the system state).
Together, UP-OSI is a robust control policy that can be used across a wide
range of dynamic models, and that is also responsive to sudden changes in the
environment. We have evaluated the performance of this system on a variety of
tasks, including the problem of cart-pole swing-up, the double inverted
pendulum, locomotion of a hopper, and block-throwing of a manipulator. UP-OSI
is effective at these tasks across a wide range of dynamic models. Moreover,
when tested with dynamic models outside of the training range, UP-OSI
outperforms the Universal Policy alone, even when UP is given the actual value
of the model dynamics. In addition to the benefits of creating more robust
controllers, UP-OSI also holds out promise of narrowing the Reality Gap between
simulated and real physical systems.
| Wenhao Yu, Jie Tan, C. Karen Liu, Greg Turk | null | 1702.02453 | null | null |
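A hedged sketch of the UP-OSI wiring (PyTorch, toy dimensions, my own naming): the OSI net maps a recent state-action history to predicted dynamics parameters $\mu$, which are concatenated with the state as input to the universal policy:

```python
import torch
import torch.nn as nn

state_dim, act_dim, mu_dim, hist_len = 8, 2, 3, 5

osi = nn.Sequential(                       # history -> mu
    nn.Linear(hist_len * (state_dim + act_dim), 64), nn.ReLU(),
    nn.Linear(64, mu_dim))
policy = nn.Sequential(                    # (state, mu) -> action
    nn.Linear(state_dim + mu_dim, 64), nn.ReLU(),
    nn.Linear(64, act_dim), nn.Tanh())

def act(state, history):
    mu_hat = osi(history.flatten(start_dim=1))       # online system ID
    return policy(torch.cat([state, mu_hat], dim=1))

state = torch.randn(1, state_dim)
history = torch.randn(1, hist_len, state_dim + act_dim)
print(act(state, history).shape)   # torch.Size([1, 2])
```

Re-estimating $\mu$ from the rolling history at every step is what lets the combined controller react to sudden changes in the environment.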
Video Frame Synthesis using Deep Voxel Flow | cs.CV cs.GR cs.LG | We address the problem of synthesizing new video frames in an existing video,
either in-between existing frames (interpolation), or subsequent to them
(extrapolation). This problem is challenging because video appearance and
motion can be highly complex. Traditional optical-flow-based solutions often
fail where flow estimation is challenging, while newer neural-network-based
methods that hallucinate pixel values directly often produce blurry results. We
combine the advantages of these two methods by training a deep network that
learns to synthesize video frames by flowing pixel values from existing ones,
which we call deep voxel flow. Our method requires no human supervision, and
any video can be used as training data by dropping, and then learning to
predict, existing frames. The technique is efficient, and can be applied at any
video resolution. We demonstrate that our method produces results that both
quantitatively and qualitatively improve upon the state-of-the-art.
| Ziwei Liu, Raymond A. Yeh, Xiaoou Tang, Yiming Liu, Aseem Agarwala | null | 1702.02463 | null | null |
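The sampling step at the heart of this approach can be sketched in a few lines (NumPy, my own minimal version): synthesize a frame by bilinearly sampling an existing frame at flow-displaced coordinates; in the paper the flow itself comes from a CNN and the sampling is differentiable, so everything trains end to end:

```python
import numpy as np

def warp(frame, flow):
    """frame: (H, W); flow: (H, W, 2) per-pixel offsets (dy, dx)."""
    H, W = frame.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    y = np.clip(ys + flow[..., 0], 0, H - 1)
    x = np.clip(xs + flow[..., 1], 0, W - 1)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0
    # Bilinear blend of the four neighboring source pixels.
    return ((1 - wy) * (1 - wx) * frame[y0, x0]
            + (1 - wy) * wx * frame[y0, x1]
            + wy * (1 - wx) * frame[y1, x0]
            + wy * wx * frame[y1, x1])

frame = np.zeros((8, 8)); frame[2, 2] = 1.0
flow = np.full((8, 8, 2), 1.0)      # constant flow: sample one pixel down-right
print(warp(frame, flow)[1, 1])      # -> 1.0: the bright pixel moved to (1, 1)
```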
Deep Generalized Canonical Correlation Analysis | cs.LG cs.AI stat.ML | We present Deep Generalized Canonical Correlation Analysis (DGCCA) -- a
method for learning nonlinear transformations of arbitrarily many views of
data, such that the resulting transformations are maximally informative of each
other. While methods for nonlinear two-view representation learning (Deep CCA,
(Andrew et al., 2013)) and linear many-view representation learning
(Generalized CCA (Horst, 1961)) exist, DGCCA is the first CCA-style multiview
representation learning technique that combines the flexibility of nonlinear
(deep) representation learning with the statistical power of incorporating
information from many independent sources, or views. We present the DGCCA
formulation as well as an efficient stochastic optimization algorithm for
solving it. We learn DGCCA representations on two distinct datasets for three
downstream tasks: phonetic transcription from acoustic and articulatory
measurements, and recommending hashtags and friends on a dataset of Twitter
users. We find that DGCCA representations soundly beat existing methods at
phonetic transcription and hashtag recommendation, and in general perform no
worse than standard linear many-view techniques.
| Adrian Benton, Huda Khayrallah, Biman Gujral, Dee Ann Reisinger, Sheng
Zhang, Raman Arora | null | 1702.02519 | null | null |
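For reference, here is the classical linear GCCA solve that DGCCA wraps around deep networks: the shared representation $G$ consists of the top-$r$ eigenvectors of the sum of the views' projection matrices. The ridge term and naming are my additions for numerical stability:

```python
import numpy as np

def gcca(views, r, reg=1e-6):
    """views: list of (n, d_j) matrices, one per view; returns G of shape (n, r)."""
    n = views[0].shape[0]
    M = np.zeros((n, n))
    for X in views:
        X = X - X.mean(axis=0)                    # center each view
        C = X.T @ X + reg * np.eye(X.shape[1])
        M += X @ np.linalg.solve(C, X.T)          # projection onto column space
    vals, vecs = np.linalg.eigh(M)                # ascending eigenvalues
    return vecs[:, -r:]                           # top-r eigenvectors

rng = np.random.default_rng(0)
z = rng.normal(size=(200, 2))                     # shared latent signal
views = [z @ rng.normal(size=(2, d)) + 0.1 * rng.normal(size=(200, d))
         for d in (5, 8, 12)]                     # three noisy views of z
G = gcca(views, r=2)
print(G.shape)                                    # (200, 2)
```

In DGCCA, each $X_j$ is the output of a view-specific deep network, trained by backpropagating through this objective.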
Deep Kernelized Autoencoders | stat.ML cs.LG cs.NE | In this paper we introduce the deep kernelized autoencoder, a neural network
model that allows an explicit approximation of (i) the mapping from an input
space to an arbitrary, user-specified kernel space and (ii) the back-projection
from such a kernel space to input space. The proposed method is based on
traditional autoencoders and is trained through a new unsupervised loss
function. During training, we optimize both the reconstruction accuracy of
input samples and the alignment between a kernel matrix given as prior and the
inner products of the hidden representations computed by the autoencoder.
Kernel alignment provides control over the hidden representation learned by the
autoencoder. Experiments have been performed to evaluate both reconstruction
and kernel alignment performance. Additionally, we applied our method to
emulate kPCA on a denoising task, obtaining promising results.
| Michael Kampffmeyer, Sigurd L{\o}kse, Filippo Maria Bianchi, Robert
Jenssen and Lorenzo Livi | null | 1702.02526 | null | null |
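A hedged sketch of the training loss described above (PyTorch, my own minimal form): mean-squared reconstruction plus normalized alignment between the codes' inner-product matrix and a prior kernel matrix for the batch; the RBF prior here merely stands in for the user's kernel of choice:

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(20, 8), nn.ReLU(), nn.Linear(8, 4))
dec = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 20))

def dkae_loss(x, K_prior, lam=0.5):
    z = enc(x)
    recon = nn.functional.mse_loss(dec(z), x)      # reconstruction accuracy
    K_hidden = z @ z.T                             # inner products of codes
    align = (K_prior * K_hidden).sum() / (
        K_prior.norm() * K_hidden.norm() + 1e-8)   # normalized kernel alignment
    return recon + lam * (1.0 - align)

x = torch.randn(16, 20)
K = torch.exp(-torch.cdist(x, x) ** 2 / 2.0)       # RBF prior on the inputs
loss = dkae_loss(x, K)
loss.backward()
print(float(loss))
```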