categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---
cs.LG cs.SI
| null |
1701.06751
| null | null |
http://arxiv.org/pdf/1701.06751v1
|
2017-01-24T07:07:15Z
|
2017-01-24T07:07:15Z
|
Collective Vertex Classification Using Recursive Neural Network
|
Collective classification of vertices is the task of assigning categories to
each vertex in a graph based on both vertex attributes and link structure.
However, some existing approaches do not use the features of neighbouring
vertices properly, due to the noise introduced by these features. In this
paper, we propose a graph-based recursive neural network framework for
collective vertex classification. In this framework, we generate hidden
representations from both attributes of vertices and representations of
neighbouring vertices via recursive neural networks. Under this framework, we
explore two types of recursive neural units: the naive recursive neural unit
and the long short-term memory unit. We have conducted experiments on four
real-world network datasets. The experimental results show that our framework
with the long short-term memory model achieves better results and outperforms
several competitive baseline methods.
|
[
"Qiongkai Xu, Qing Wang, Chenchen Xu and Lizhen Qu",
"['Qiongkai Xu' 'Qing Wang' 'Chenchen Xu' 'Lizhen Qu']"
] |
cs.LG
| null |
1701.06796
| null | null |
http://arxiv.org/pdf/1701.06796v2
|
2017-02-28T14:17:16Z
|
2017-01-24T10:29:31Z
|
Discriminative Neural Topic Models
|
We propose a neural network based approach for learning topics from text and
image datasets. The model makes no assumptions about the conditional
distribution of the observed features given the latent topics. This allows us
to perform topic modelling efficiently using sentences of documents and patches
of images as observed features, rather than limiting ourselves to words.
Moreover, the proposed approach is online, and hence can be used for streaming
data. Furthermore, since the approach utilizes neural networks, it can be
implemented on GPU with ease, and hence it is very scalable.
|
[
"['Gaurav Pandey' 'Ambedkar Dukkipati']",
"Gaurav Pandey and Ambedkar Dukkipati"
] |
quant-ph cs.CC cs.LG
| null |
1701.06806
| null | null |
http://arxiv.org/pdf/1701.06806v3
|
2017-07-28T09:40:37Z
|
2017-01-24T10:53:07Z
|
A Survey of Quantum Learning Theory
|
This paper surveys quantum learning theory: the theoretical aspects of
machine learning using quantum computers. We describe the main results known
for three models of learning: exact learning from membership queries, and
Probably Approximately Correct (PAC) and agnostic learning from classical or
quantum examples.
|
[
"Srinivasan Arunachalam (CWI) and Ronald de Wolf (CWI and U of\n Amsterdam)",
"['Srinivasan Arunachalam' 'Ronald de Wolf']"
] |
cs.AI cs.LG cs.LO
| null |
1701.06972
| null | null |
http://arxiv.org/pdf/1701.06972v1
|
2017-01-24T16:39:05Z
|
2017-01-24T16:39:05Z
|
Deep Network Guided Proof Search
|
Deep learning techniques lie at the heart of several significant AI advances
in recent years including object recognition and detection, image captioning,
machine translation, speech recognition and synthesis, and playing the game of
Go. Automated first-order theorem provers can aid in the formalization and
verification of mathematical theorems and play a crucial role in program
analysis, theory reasoning, security, interpolation, and system verification.
Here we suggest deep learning based guidance in the proof search of the theorem
prover E. We train and compare several deep neural network models on the traces
of existing ATP proofs of Mizar statements and use them to select processed
clauses during proof search. We give experimental evidence that with a hybrid,
two-phase approach, deep learning based guidance can significantly reduce the
average number of proof search steps while increasing the number of theorems
proved. Using a few proof guidance strategies that leverage deep neural
networks, we have found first-order proofs of 7.36% of the first-order logic
translations of the Mizar Mathematical Library theorems that did not previously
have ATP generated proofs. This increases the ratio of statements in the corpus
with ATP generated proofs from 56% to 59%.
|
[
"['Sarah Loos' 'Geoffrey Irving' 'Christian Szegedy' 'Cezary Kaliszyk']",
"Sarah Loos, Geoffrey Irving, Christian Szegedy, Cezary Kaliszyk"
] |
cs.LG
| null |
1701.07114
| null | null |
http://arxiv.org/pdf/1701.07114v1
|
2017-01-24T23:57:32Z
|
2017-01-24T23:57:32Z
|
On the Effectiveness of Discretizing Quantitative Attributes in Linear
Classifiers
|
Learning algorithms that learn linear models often have high representation
bias on real-world problems. In this paper, we show that this representation
bias can be greatly reduced by discretization. Discretization is a common
procedure in machine learning that is used to convert a quantitative attribute
into a qualitative one. It is often motivated by the fact that some learners
can only handle qualitative data. Discretization loses information, as fewer
distinctions between instances are possible using discretized data relative to
undiscretized data. In consequence, where discretization is not essential, it
might appear desirable to avoid it. However, it has been shown that
discretization often substantially reduces the error of the linear generative
Bayesian classifier naive Bayes. This motivates a systematic study of the
effectiveness of discretizing quantitative attributes for other linear
classifiers. In this work, we study the effect of discretization on the
performance of linear classifiers optimizing three distinct discriminative
objective functions: logistic regression (optimizing negative
log-likelihood), support vector classifiers (optimizing hinge loss) and a
zero-hidden-layer artificial neural network (optimizing mean squared error). We
show that discretization can greatly increase the accuracy of these linear
discriminative learners by reducing their representation bias, especially on
big datasets. We substantiate our claims with an empirical study on $42$
benchmark datasets.
|
[
"Nayyar A. Zaidi, Yang Du, Geoffrey I. Webb",
"['Nayyar A. Zaidi' 'Yang Du' 'Geoffrey I. Webb']"
] |
cs.PL cs.HC cs.LG cs.LO
|
10.4204/EPTCS.239.2
|
1701.07125
| null | null |
http://arxiv.org/abs/1701.07125v1
|
2017-01-25T01:21:14Z
|
2017-01-25T01:21:14Z
|
jsCoq: Towards Hybrid Theorem Proving Interfaces
|
We describe jsCoq, a new platform and user environment for the Coq
interactive proof assistant. The jsCoq system targets the HTML5-ECMAScript 2015
specification, and it is typically run inside a standards-compliant browser,
without the need for external servers or services. Targeting educational use,
jsCoq allows the user to start interacting with proof scripts right away,
thanks to its self-contained nature. Indeed, a full Coq environment is packaged
along with the proof scripts, easing distribution and installation. Starting to use
jsCoq is as easy as clicking on a link. The current release ships more than 10
popular Coq libraries, and supports popular books such as Software Foundations
or Certified Programming with Dependent Types. The new target platform has
opened up new interaction and display possibilities. It has also fostered the
development of some new Coq-related technology. In particular, we have
implemented a new serialization-based protocol for interaction with the proof
assistant, as well as a new package format for library distribution.
|
[
"Emilio Jes\\'us Gallego Arias (MINES ParisTech, PSL Research\n University, France), Beno\\^it Pin (MINES ParisTech, PSL Research University,\n France), Pierre Jouvelot (MINES ParisTech, PSL Research University, France)",
"['Emilio Jesús Gallego Arias' 'Benoît Pin' 'Pierre Jouvelot']"
] |
cs.LG
| null |
1701.07148
| null | null |
http://arxiv.org/pdf/1701.07148v1
|
2017-01-25T02:58:06Z
|
2017-01-25T02:58:06Z
|
CP-decomposition with Tensor Power Method for Convolutional Neural
Networks Compression
|
Convolutional Neural Networks (CNNs) have shown great success in many areas,
including complex image classification tasks. However, they require large
amounts of memory and computation, which hinders them from running on
relatively low-end smart devices such as smartphones. We propose a CNN
compression method based on CP-decomposition and the Tensor Power Method. We
also propose iterative fine-tuning, with which we fine-tune the whole network
after decomposing each layer, but before decomposing the next layer.
Significant reductions in memory and computation cost are achieved compared to
state-of-the-art previous work, with no additional loss of accuracy.
|
[
"Marcella Astrid and Seung-Ik Lee",
"['Marcella Astrid' 'Seung-Ik Lee']"
] |
cs.DC cs.HC cs.LG
| null |
1701.07166
| null | null |
http://arxiv.org/pdf/1701.07166v1
|
2017-01-25T05:22:35Z
|
2017-01-25T05:22:35Z
|
Personalized Classifier Ensemble Pruning Framework for Mobile
Crowdsourcing
|
Ensemble learning has been widely employed by mobile applications, ranging
from environmental sensing to activity recognition. One of the fundamental
issues in ensemble learning is the trade-off between classification accuracy
and computational cost, which is the goal of ensemble pruning. During
crowdsourcing, the centralized aggregator releases ensemble learning models to
a large number of mobile participants, for task evaluation or as the
crowdsourcing learning results, while different participants may seek
different levels of the accuracy-cost trade-off. However, most existing
ensemble pruning approaches consider only one identical level of such
trade-off. In this study, we present an efficient ensemble pruning framework
for personalized accuracy-cost trade-offs via multi-objective optimization.
Specifically, for the commonly used linear-combination style of the trade-off,
we provide an objective-mixture optimization to further reduce the number of
ensemble candidates. Experimental results show that our framework is highly
efficient for personalized ensemble pruning, and achieves much better pruning
performance with objective-mixture optimization when compared to
state-of-the-art approaches.
|
[
"Shaowei Wang, Liusheng Huang, Pengzhan Wang, Hongli Xu, Wei Yang",
"['Shaowei Wang' 'Liusheng Huang' 'Pengzhan Wang' 'Hongli Xu' 'Wei Yang']"
] |
cs.LG cs.CR
| null |
1701.07179
| null | null |
http://arxiv.org/pdf/1701.07179v3
|
2019-08-21T10:38:24Z
|
2017-01-25T06:46:14Z
|
Malicious URL Detection using Machine Learning: A Survey
|
Malicious URL, a.k.a. malicious website, is a common and serious threat to
cybersecurity. Malicious URLs host unsolicited content (spam, phishing,
drive-by exploits, etc.) and lure unsuspecting users to become victims of scams
(monetary loss, theft of private information, and malware installation), and
cause losses of billions of dollars every year. It is imperative to detect and
act on such threats in a timely manner. Traditionally, this detection is done
mostly through the usage of blacklists. However, blacklists cannot be
exhaustive, and lack the ability to detect newly generated malicious URLs. To
improve the generality of malicious URL detectors, machine learning techniques
have been explored with increasing attention in recent years. This article aims
to provide a comprehensive survey and a structural understanding of Malicious
URL Detection techniques using machine learning. We present the formal
formulation of Malicious URL Detection as a machine learning task, and
categorize and review the contributions of literature studies that address
different dimensions of this problem (feature representation, algorithm design,
etc.). Further, this article provides a timely and comprehensive survey for a
range of different audiences, not only for machine learning researchers and
engineers in academia, but also for professionals and practitioners in
the cybersecurity industry, to help them understand the state of the art and
facilitate their own research and practical applications. We also discuss
practical issues in system design, open research challenges, and point out some
important directions for future research.
|
[
"['Doyen Sahoo' 'Chenghao Liu' 'Steven C. H. Hoi']",
"Doyen Sahoo, Chenghao Liu, and Steven C.H. Hoi"
] |
stat.ML cs.LG
| null |
1701.07194
| null | null |
http://arxiv.org/pdf/1701.07194v1
|
2017-01-25T07:43:13Z
|
2017-01-25T07:43:13Z
|
Privileged Multi-label Learning
|
This paper presents privileged multi-label learning (PrML) to explore and
exploit the relationship between labels in multi-label learning problems. We
suggest that each individual label can not only be implicitly connected
with other labels via the low-rank constraint over label predictors, but can
also receive explicit comments on its performance on examples from the other
labels, which together act as an \emph{Oracle teacher}. We generate a privileged label
feature for each example and its individual label, and then integrate it into
the framework of low-rank based multi-label learning. The proposed algorithm
can therefore comprehensively explore and exploit label relationships by
inheriting all the merits of privileged information and low-rank constraints.
We show that PrML can be efficiently solved by a dual coordinate descent
algorithm using an iterative optimization strategy with cheap updates. Experiments
on benchmark datasets show that through privileged label features, the
performance can be significantly improved and PrML is superior to several
competing methods in most cases.
|
[
"Shan You, Chang Xu, Yunhe Wang, Chao Xu, Dacheng Tao",
"['Shan You' 'Chang Xu' 'Yunhe Wang' 'Chao Xu' 'Dacheng Tao']"
] |
cs.DS cs.AI cs.LG
| null |
1701.07204
| null | null |
http://arxiv.org/pdf/1701.07204v4
|
2018-04-25T10:36:08Z
|
2017-01-25T08:44:04Z
|
Fast Exact k-Means, k-Medians and Bregman Divergence Clustering in 1D
|
The $k$-Means clustering problem on $n$ points is NP-hard for any dimension
$d\ge 2$; for the 1D case, however, exact polynomial-time algorithms exist.
Previous literature reported an $O(kn^2)$-time dynamic programming
algorithm that uses $O(kn)$ space. It turns out that the problem was
considered under a different name more than twenty years ago. We present all
the existing work that had been overlooked and compare the various solutions
theoretically. Moreover, we show how to reduce the space usage for some of
them, as well as generalize them to data structures that can quickly report an
optimal $k$-Means clustering for any $k$. Finally, we also generalize all the
algorithms to work for the absolute distance and for any Bregman
divergence. We complement our theoretical contributions by experiments that
compare the practical performance of the various algorithms.
|
[
"['Allan Grønlund' 'Kasper Green Larsen' 'Alexander Mathiasen'\n 'Jesper Sindahl Nielsen' 'Stefan Schneider' 'Mingzhou Song']",
"Allan Gr{\\o}nlund and Kasper Green Larsen and Alexander Mathiasen and\n Jesper Sindahl Nielsen and Stefan Schneider and Mingzhou Song"
] |
cs.AI cs.CR cs.LG cs.PL cs.SE
| null |
1701.07232
| null | null |
http://arxiv.org/pdf/1701.07232v1
|
2017-01-25T10:01:39Z
|
2017-01-25T10:01:39Z
|
Learn&Fuzz: Machine Learning for Input Fuzzing
|
Fuzzing consists of repeatedly testing an application with modified, or
fuzzed, inputs with the goal of finding security vulnerabilities in
input-parsing code. In this paper, we show how to automate the generation of an
input grammar suitable for input fuzzing using sample inputs and
neural-network-based statistical machine-learning techniques. We present a
detailed case study with a complex input format, namely PDF, and a large
complex security-critical parser for this format, namely, the PDF parser
embedded in Microsoft's new Edge browser. We discuss (and measure) the tension
between conflicting learning and fuzzing goals: learning wants to capture the
structure of well-formed inputs, while fuzzing wants to break that structure in
order to cover unexpected code paths and find bugs. We also present a new
algorithm for this learn&fuzz challenge which uses a learnt input probability
distribution to intelligently guide where to fuzz inputs.
|
[
"['Patrice Godefroid' 'Hila Peleg' 'Rishabh Singh']",
"Patrice Godefroid, Hila Peleg, Rishabh Singh"
] |
q-bio.NC cs.LG q-bio.QM
| null |
1701.07243
| null | null |
http://arxiv.org/pdf/1701.07243v1
|
2017-01-25T10:25:59Z
|
2017-01-25T10:25:59Z
|
Decoding Epileptogenesis in a Reduced State Space
|
We describe here the recent results of a multidisciplinary effort to design a
biomarker that can actively and continuously decode the progressive changes in
neuronal organization leading to epilepsy, a process known as epileptogenesis.
Using an animal model of acquired epilepsy, wechronically record hippocampal
evoked potentials elicited by an auditory stimulus. Using a set of reduced
coordinates, our algorithm can identify universal smooth low-dimensional
configurations of the auditory evoked potentials that correspond to distinct
stages of epileptogenesis. We use a hidden Markov model to learn the dynamics
of the evoked potential, as it evolves along these smooth low-dimensional
subsets. We provide experimental evidence that the biomarker is able to exploit
subtle changes in the evoked potential to reliably decode the stage of
epileptogenesis and predict whether an animal will eventually recover from the
injury, or develop spontaneous seizures.
|
[
"['François G. Meyer' 'Alexander M. Benison' 'Zachariah Smith'\n 'Daniel S. Barth']",
"Fran\\c{c}ois G. Meyer, Alexander M. Benison, Zachariah Smith, and\n Daniel S. Barth"
] |
stat.ML cs.LG
| null |
1701.07266
| null | null |
http://arxiv.org/pdf/1701.07266v1
|
2017-01-25T11:18:18Z
|
2017-01-25T11:18:18Z
|
k*-Nearest Neighbors: From Global to Local
|
The weighted k-nearest neighbors algorithm is one of the most fundamental
non-parametric methods in pattern recognition and machine learning. The
question of setting the optimal number of neighbors as well as the optimal
weights has received much attention throughout the years; nevertheless, this
problem seems to have remained unsettled. In this paper we offer a simple
approach to locally weighted regression/classification, where we make the
bias-variance tradeoff explicit. Our formulation enables us to phrase a notion
of optimal weights, and to find these weights as well as the
optimal number of neighbors efficiently and adaptively, for each data point
whose value we wish to estimate. The applicability of our approach is
demonstrated on several datasets, showing superior performance over standard
locally weighted methods.
|
[
"['Oren Anava' 'Kfir Y. Levy']",
"Oren Anava, Kfir Y. Levy"
] |
cs.LG
| null |
1701.07274
| null | null |
http://arxiv.org/pdf/1701.07274v6
|
2018-11-26T04:56:31Z
|
2017-01-25T11:52:11Z
|
Deep Reinforcement Learning: An Overview
|
We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with the background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update.
|
[
"Yuxi Li",
"['Yuxi Li']"
] |
cs.LG cs.GR
| null |
1701.07403
| null | null |
http://arxiv.org/pdf/1701.07403v2
|
2017-08-15T12:57:10Z
|
2017-01-25T17:50:19Z
|
Learning Light Transport the Reinforced Way
|
We show that the equations of reinforcement learning and light transport
simulation are related integral equations. Based on this correspondence, a
scheme to learn importance while sampling path space is derived. The new
approach is demonstrated in a consistent light transport simulation algorithm
that uses reinforcement learning to progressively learn where light comes from.
Since using this information for importance sampling also accounts for
visibility, the number of light transport paths with zero contribution is
dramatically reduced, resulting in much less noisy images within a fixed time
budget.
|
[
"['Ken Dahm' 'Alexander Keller']",
"Ken Dahm and Alexander Keller"
] |
cs.LG stat.ML
| null |
1701.07422
| null | null |
http://arxiv.org/pdf/1701.07422v3
|
2017-10-17T15:17:59Z
|
2017-01-25T18:49:45Z
|
A Convex Similarity Index for Sparse Recovery of Missing Image Samples
|
This paper investigates the problem of recovering missing samples using
methods based on sparse representation adapted especially for image signals.
Instead of $l_2$-norm or Mean Square Error (MSE), a new perceptual quality
measure is used as the similarity criterion between the original and the
reconstructed images. The proposed criterion, called the Convex SIMilarity (CSIM)
index, is a modified version of the Structural SIMilarity (SSIM) index which,
unlike its predecessor, is convex and unimodal. We derive mathematical
properties for the proposed index and show how to optimally choose the
parameters of the proposed criterion, investigating the Restricted Isometry
Property (RIP) and error-sensitivity properties. We also propose an iterative sparse
recovery method based on a constrained $l_1$-norm minimization problem,
incorporating CSIM as the fidelity criterion. The resulting convex optimization
problem is solved via an algorithm based on Alternating Direction Method of
Multipliers (ADMM). Taking advantage of the convexity of the CSIM index, we
also prove the convergence of the algorithm to the globally optimal solution of
the proposed optimization problem, starting from any arbitrary point.
Simulation results confirm the performance of the new similarity index as well
as the proposed algorithm for missing sample recovery of image patch signals.
|
[
"Amirhossein Javaheri, Hadi Zayyani and Farokh Marvasti",
"['Amirhossein Javaheri' 'Hadi Zayyani' 'Farokh Marvasti']"
] |
stat.ME cs.LG stat.ML
|
10.1016/j.neunet.2016.03.002
|
1701.07429
| null | null |
http://arxiv.org/abs/1701.07429v1
|
2016-12-09T14:42:40Z
|
2016-12-09T14:42:40Z
|
Robust mixture of experts modeling using the $t$ distribution
|
Mixture of Experts (MoE) is a popular framework for modeling heterogeneity in
data for regression, classification, and clustering. For regression and cluster
analyses of continuous data, MoE models usually use normal experts following
the Gaussian distribution. However, for a set of data containing a group or groups
of observations with heavy tails or atypical observations, the use of normal
experts is unsuitable and can unduly affect the fit of the MoE model. We
introduce a robust MoE modeling using the $t$ distribution. The proposed $t$
MoE (TMoE) deals with these issues regarding heavy-tailed and noisy data. We
develop a dedicated expectation-maximization (EM) algorithm to estimate the
parameters of the proposed model by monotonically maximizing the observed data
log-likelihood. We describe how the presented model can be used in prediction
and in model-based clustering of regression data. The proposed model is
validated on numerical experiments carried out on simulated data, which show
the effectiveness and the robustness of the proposed model in terms of modeling
non-linear regression functions as well as in model-based clustering. Then, it
is applied to real-world tone-perception data for musical data analysis,
and to temperature-anomaly data for the analysis of climate change.
The obtained results show the usefulness of the TMoE model for practical
applications.
|
[
"['Faicel Chamroukhi']",
"Faicel Chamroukhi"
] |
cs.LG stat.ML
| null |
1701.07474
| null | null |
http://arxiv.org/pdf/1701.07474v1
|
2017-01-25T20:25:29Z
|
2017-01-25T20:25:29Z
|
Exploiting Convolutional Neural Network for Risk Prediction with Medical
Feature Embedding
|
The widespread availability of electronic health records (EHRs) promises to
usher in the era of personalized medicine. However, the problem of extracting
useful clinical representations from longitudinal EHR data remains challenging.
In this paper, we explore deep neural network models with learned medical
feature embedding to deal with the problems of high dimensionality and
temporality. Specifically, we use a multi-layer convolutional neural network
(CNN) to parameterize the model, which is thus able to capture the complex
non-linear longitudinal evolution of EHRs. Our model can effectively capture
local/short temporal dependencies in EHRs, which is beneficial for risk
prediction. To account for high dimensionality, we use embedded medical
features in the CNN model, which capture natural medical concepts. Our initial experiments
produce promising results and demonstrate the effectiveness of both the medical
feature embedding and the proposed convolutional neural network in risk
prediction on cohorts of congestive heart failure and diabetes patients
compared with several strong baselines.
|
[
"Zhengping Che, Yu Cheng, Zhaonan Sun, Yan Liu",
"['Zhengping Che' 'Yu Cheng' 'Zhaonan Sun' 'Yan Liu']"
] |
stat.ME cs.LG stat.AP stat.ML
| null |
1701.07483
| null | null |
http://arxiv.org/pdf/1701.07483v1
|
2017-01-25T20:47:40Z
|
2017-01-25T20:47:40Z
|
A Model-based Projection Technique for Segmenting Customers
|
We consider the problem of segmenting a large population of customers into
non-overlapping groups with similar preferences, using diverse preference
observations such as purchases, ratings, clicks, etc. over subsets of items. We
focus on the setting where the universe of items is large (ranging from
thousands to millions) and unstructured (lacking well-defined attributes) and
each customer provides observations for only a few items. These data
characteristics limit the applicability of existing techniques in marketing and
machine learning. To overcome these limitations, we propose a model-based
projection technique, which transforms the diverse set of observations into a
more comparable scale and deals with missing data by projecting the transformed
data onto a low-dimensional space. We then cluster the projected data to obtain
the customer segments. Theoretically, we derive precise necessary and
sufficient conditions that guarantee asymptotic recovery of the true customer
segments. Empirically, we demonstrate the speed and performance of our method
in two real-world case studies: (a) 84% improvement in the accuracy of new
movie recommendations on the MovieLens data set and (b) 6% improvement in the
performance of a similar-item recommendation algorithm on an offline dataset at
eBay. We show that our method outperforms standard latent-class and
demographic-based techniques.
|
[
"['Srikanth Jagabathula' 'Lakshminarayanan Subramanian'\n 'Ashwin Venkataraman']",
"Srikanth Jagabathula, Lakshminarayanan Subramanian, Ashwin\n Venkataraman"
] |
cs.LG astro-ph.IM cs.RO
| null |
1701.07543
| null | null |
http://arxiv.org/pdf/1701.07543v1
|
2017-01-26T01:52:11Z
|
2017-01-26T01:52:11Z
|
FPGA Architecture for Deep Learning and its application to Planetary
Robotics
|
Autonomous control systems onboard planetary rovers and spacecraft benefit
from having cognitive capabilities like learning so that they can adapt to
unexpected situations in-situ. Q-learning is a form of reinforcement learning
and it has been efficient in solving certain class of learning problems.
However, embedded systems onboard planetary rovers and spacecraft rarely
implement learning algorithms due to the constraints faced in the field, like
processing power, chip size, convergence rate and costs due to the need for
radiation hardening. These challenges present a compelling need for a portable,
low-power, area-efficient hardware accelerator to make learning algorithms
practical onboard space hardware. This paper presents an FPGA implementation of
Q-learning with Artificial Neural Networks (ANN). This method matches the
massive parallelism inherent in neural network software with the fine-grain
parallelism of FPGA hardware, thereby dramatically reducing processing time.
Mars Science Laboratory currently uses Xilinx-Space-grade Virtex FPGA devices
for image processing, pyrotechnic operation control and obstacle avoidance. We
simulate and program our architecture on a Xilinx Virtex 7 FPGA. The
architectural implementation for a single-neuron Q-learning accelerator and a
more complex Multilayer Perceptron (MLP) Q-learning accelerator has been demonstrated. The
results show up to a 43-fold speed up by Virtex 7 FPGAs compared to a
conventional Intel i5 2.3 GHz CPU. Finally, we simulate the proposed
architecture using the Symphony simulator and compiler from Xilinx, and
evaluate the performance and power consumption.
|
[
"Pranay Gankidi and Jekan Thangavelautham",
"['Pranay Gankidi' 'Jekan Thangavelautham']"
] |
cs.LG
| null |
1701.07570
| null | null |
http://arxiv.org/pdf/1701.07570v3
|
2018-06-04T13:03:22Z
|
2017-01-26T03:54:21Z
|
Dynamic Regret of Strongly Adaptive Methods
|
To cope with changing environments, recent developments in online learning
have introduced the concepts of adaptive regret and dynamic regret
independently. In this paper, we illustrate an intrinsic connection between
these two concepts by showing that the dynamic regret can be expressed in terms
of the adaptive regret and the functional variation. This observation implies
that strongly adaptive algorithms can be directly leveraged to minimize the
dynamic regret. As a result, we present a series of strongly adaptive
algorithms that have small dynamic regrets for convex functions, exponentially
concave functions, and strongly convex functions, respectively. To the best of
our knowledge, this is the first time that exponential concavity is utilized to
upper bound the dynamic regret. Moreover, all of those adaptive algorithms do
not need any prior knowledge of the functional variation, which is a
significant advantage over previous specialized methods for minimizing dynamic
regret.
|
[
"['Lijun Zhang' 'Tianbao Yang' 'Rong Jin' 'Zhi-Hua Zhou']"
] |
cs.DS cs.LG stat.ML
|
10.1145/3132847.3132980
|
1701.07681
| null | null |
http://arxiv.org/abs/1701.07681v1
|
2017-01-26T13:09:48Z
|
2017-01-26T13:09:48Z
|
Fast and Accurate Time Series Classification with WEASEL
|
Time series (TS) occur in many scientific and commercial applications,
ranging from earth surveillance to industry automation to smart grids. An
important type of TS analysis is classification, which can, for instance,
improve energy load forecasting in smart grids by detecting the types of
electronic devices based on their energy consumption profiles recorded by
automatic sensors. Such sensor-driven applications are very often characterized
by (a) very long TS and (b) very large TS datasets needing classification.
However, current methods for time series classification (TSC) cannot cope with
such data volumes at acceptable accuracy; they are either scalable but offer
only inferior classification quality, or they achieve state-of-the-art
classification quality but cannot scale to large data volumes.
In this paper, we present WEASEL (Word ExtrAction for time SEries
cLassification), a novel TSC method which is both scalable and accurate. Like
other state-of-the-art TSC methods, WEASEL transforms time series into feature
vectors, using a sliding-window approach, which are then analyzed through a
machine learning classifier. The novelty of WEASEL lies in its specific method
for deriving features, resulting in a much smaller yet much more discriminative
feature set. On the popular UCR benchmark of 85 TS datasets, WEASEL is more
accurate than the best current non-ensemble algorithms at orders-of-magnitude
lower classification and training times, and it is almost as accurate as
ensemble classifiers, whose computational complexity makes them inapplicable
even for mid-size datasets. The outstanding robustness of WEASEL is also
confirmed by experiments on two real smart grid datasets, where it
out-of-the-box achieves almost the same accuracy as highly tuned,
domain-specific methods.
|
[
"['Patrick Schäfer' 'Ulf Leser']",
"Patrick Sch\\\"afer and Ulf Leser"
] |
stat.ML cs.LG
| null |
1701.07761
| null | null |
http://arxiv.org/pdf/1701.07761v2
|
2018-02-14T16:19:23Z
|
2017-01-26T16:23:39Z
|
Theoretical Foundations of Forward Feature Selection Methods based on
Mutual Information
|
Feature selection problems arise in a variety of applications, such as
microarray analysis, clinical prediction, text categorization, image
classification and face recognition, multi-label learning, and classification
of internet traffic. Among the various classes of methods, forward feature
selection methods based on mutual information have become very popular and are
widely used in practice. However, comparative evaluations of these methods have
been limited by being based on specific datasets and classifiers. In this
paper, we develop a theoretical framework that allows evaluating the methods
based on their theoretical properties. Our framework is grounded on the
properties of the target objective function that the methods try to
approximate, and on a novel categorization of features, according to their
contribution to the explanation of the class; we derive upper and lower bounds
for the target objective function and relate these bounds with the feature
types. Then, we characterize the types of approximations taken by the methods,
and analyze how these approximations cope with the good properties of the
target objective function. Additionally, we develop a distributional setting
designed to illustrate the various deficiencies of the methods, and provide
several examples of wrong feature selections. Based on our work, we clearly
identify the methods that should be avoided, and the methods that currently have
the best performance.
|
[
"Francisco Macedo and M. Ros\\'ario Oliveira and Ant\\'onio Pacheco and\n Rui Valadas",
"['Francisco Macedo' 'M. Rosário Oliveira' 'António Pacheco' 'Rui Valadas']"
] |
cs.LG stat.ML
| null |
1701.07767
| null | null |
http://arxiv.org/pdf/1701.07767v1
|
2017-01-26T16:38:00Z
|
2017-01-26T16:38:00Z
|
Riemannian-geometry-based modeling and clustering of network-wide
non-stationary time series: The brain-network case
|
This paper advocates Riemannian multi-manifold modeling in the context of
network-wide non-stationary time-series analysis. Time-series data, collected
sequentially over time and across a network, yield features which are viewed as
points in or close to a union of multiple submanifolds of a Riemannian
manifold, and distinguishing disparate time series amounts to clustering
multiple Riemannian submanifolds. To support the claim that exploiting the
latent Riemannian geometry behind many statistical features of time series is
beneficial to learning from network data, this paper focuses on brain networks
and puts forth two feature-generation schemes for network-wide dynamic time
series. The first is motivated by Granger-causality arguments and uses an
auto-regressive moving average model to map low-rank linear vector subspaces,
spanned by column vectors of appropriately defined observability matrices, to
points in the Grassmann manifold. The second utilizes (non-linear)
dependencies among network nodes by introducing kernel-based partial
correlations to generate points in the manifold of positive-definite matrices.
Capitalizing on recently developed research on clustering Riemannian
submanifolds, an algorithm is provided for distinguishing time series based on
their geometrical properties, revealed within Riemannian feature spaces.
Extensive numerical tests demonstrate that the proposed framework outperforms
classical and state-of-the-art techniques in clustering brain-network
states/structures hidden beneath synthetic fMRI time series and brain-activity
signals generated from real brain-network structural connectivity matrices.
|
[
"['Konstantinos Slavakis' 'Shiva Salsabilian' 'David S. Wack'\n 'Sarah F. Muldoon' 'Henry E. Baidoo-Williams' 'Jean M. Vettel'\n 'Matthew Cieslak' 'Scott T. Grafton']",
"Konstantinos Slavakis and Shiva Salsabilian and David S. Wack and\n Sarah F. Muldoon and Henry E. Baidoo-Williams and Jean M. Vettel and Matthew\n Cieslak and Scott T. Grafton"
] |
stat.ML cs.LG
| null |
1701.07808
| null | null |
http://arxiv.org/pdf/1701.07808v4
|
2017-04-02T18:43:11Z
|
2017-01-26T18:37:34Z
|
Linear convergence of SDCA in statistical estimation
|
In this paper, we consider stochastic dual coordinate ascent (SDCA) {\em without}
the strong convexity assumption or even the convexity assumption. We show that SDCA converges
linearly under mild conditions termed restricted strong convexity. This covers
a wide array of popular statistical models including Lasso, group Lasso, and
logistic regression with $\ell_1$ regularization, corrected Lasso and linear
regression with SCAD regularizer. This significantly improves previous
convergence results on SDCA for problems that are not strongly convex. As a
byproduct, we derive a dual-free form of SDCA that can handle a general
regularization term, which is of interest in its own right.
|
[
"Chao Qu, Huan Xu",
"['Chao Qu' 'Huan Xu']"
] |
cs.LO cs.LG cs.PL
| null |
1701.07842
| null | null |
http://arxiv.org/pdf/1701.07842v3
|
2018-03-02T18:45:04Z
|
2017-01-26T19:06:45Z
|
DroidStar: Callback Typestates for Android Classes
|
Event-driven programming frameworks, such as Android, are based on components
with asynchronous interfaces. The protocols for interacting with these
components can often be described by finite-state machines we dub *callback
typestates*. Callback typestates are akin to classical typestates, with the
difference that their outputs (callbacks) are produced asynchronously. While
useful, these specifications are not commonly available, because writing them
is difficult and error-prone.
Our goal is to make the task of producing callback typestates significantly
easier. We present a callback typestate assistant tool, DroidStar, that
requires only limited user interaction to produce a callback typestate. Our
approach is based on an active learning algorithm, L*. We improved the
scalability of equivalence queries (a key component of L*), thus making active
learning tractable on the Android system.
We use DroidStar to learn callback typestates for Android classes both for
cases where one is already provided by the documentation, and for cases where
the documentation is unclear. The results show that DroidStar learns callback
typestates accurately and efficiently. Moreover, in several cases, the
synthesized callback typestates uncovered surprising and undocumented
behaviors.
|
[
"Arjun Radhakrishna, Nicholas V. Lewchenko, Shawn Meier, Sergio Mover,\n Krishna Chaitanya Sripada, Damien Zufferey, Bor-Yuh Evan Chang, and Pavol\n \\v{C}ern\\'y",
"['Arjun Radhakrishna' 'Nicholas V. Lewchenko' 'Shawn Meier' 'Sergio Mover'\n 'Krishna Chaitanya Sripada' 'Damien Zufferey' 'Bor-Yuh Evan Chang'\n 'Pavol Černý']"
] |
cs.LG
|
10.1109/SECON.2016.7506650
|
1701.07852
| null | null | null | null | null |
An Empirical Analysis of Feature Engineering for Predictive Modeling
|
Machine learning models, such as neural networks, decision trees, random
forests, and gradient boosting machines, accept a feature vector, and provide a
prediction. These models learn in a supervised fashion where we provide feature
vectors mapped to the expected output. It is common practice to engineer new
features from the provided feature set. Such engineered features will either
augment or replace portions of the existing feature vector. These engineered
features are essentially calculated fields based on the values of the other
features.
Engineering such features is primarily a manual, time-consuming task.
Additionally, each type of model will respond differently to different kinds of
engineered features. This paper reports empirical research to demonstrate what
kinds of engineered features are best suited to various machine learning model
types. We provide this recommendation by generating several datasets that we
designed to benefit from a particular type of engineered feature. The
experiment demonstrates to what degree the machine learning model can
synthesize the needed feature on its own. If a model can synthesize a planned
feature, it is not necessary to provide that feature. The research demonstrated
that the studied models do indeed perform differently with various types of
engineered features.
|
[
"Jeff Heaton"
] |
stat.ML cs.LG
| null |
1701.07875
| null | null |
http://arxiv.org/pdf/1701.07875v3
|
2017-12-06T20:01:54Z
|
2017-01-26T21:10:29Z
|
Wasserstein GAN
|
We introduce a new algorithm named WGAN, an alternative to traditional GAN
training. In this new model, we show that we can improve the stability of
learning, get rid of problems like mode collapse, and provide meaningful
learning curves useful for debugging and hyperparameter searches. Furthermore,
we show that the corresponding optimization problem is sound, and provide
extensive theoretical work highlighting the deep connections to other distances
between distributions.
|
[
"['Martin Arjovsky' 'Soumith Chintala' 'Léon Bottou']",
"Martin Arjovsky, Soumith Chintala, L\\'eon Bottou"
] |
cs.LG cs.IT math.IT stat.ML
| null |
1701.07895
| null | null |
http://arxiv.org/pdf/1701.07895v2
|
2017-05-06T23:11:33Z
|
2017-01-26T22:43:20Z
|
Information Theoretic Limits for Linear Prediction with Graph-Structured
Sparsity
|
We analyze the necessary number of samples for sparse vector recovery in a
noisy linear prediction setup. This model includes problems such as linear
regression and classification. We focus on structured graph models. In
particular, we prove that the sufficient number of samples for the weighted graph
model proposed by Hegde and others is also necessary. We use Fano's
inequality on well-constructed ensembles as our main tool in establishing
information theoretic lower bounds.
|
[
"Adarsh Barik, Jean Honorio, Mohit Tawarmalani",
"['Adarsh Barik' 'Jean Honorio' 'Mohit Tawarmalani']"
] |
cs.LG stat.ML
| null |
1701.07953
| null | null |
http://arxiv.org/pdf/1701.07953v2
|
2017-06-13T21:25:12Z
|
2017-01-27T06:17:14Z
|
The Price of Differential Privacy For Online Learning
|
We design differentially private algorithms for the problem of online linear
optimization in the full information and bandit settings with optimal
$\tilde{O}(\sqrt{T})$ regret bounds. In the full-information setting, our
results demonstrate that $\epsilon$-differential privacy may be ensured for
free -- in particular, the regret bounds scale as
$O(\sqrt{T})+\tilde{O}\left(\frac{1}{\epsilon}\right)$. For bandit linear
optimization, and as a special case, for non-stochastic multi-armed bandits,
the proposed algorithm achieves a regret of
$\tilde{O}\left(\frac{1}{\epsilon}\sqrt{T}\right)$, while the previously known
best regret bound was
$\tilde{O}\left(\frac{1}{\epsilon}T^{\frac{2}{3}}\right)$.
|
[
"['Naman Agarwal' 'Karan Singh']",
"Naman Agarwal and Karan Singh"
] |
cs.LG cs.NE
| null |
1701.07974
| null | null |
http://arxiv.org/pdf/1701.07974v5
|
2017-11-22T08:27:39Z
|
2017-01-27T08:49:19Z
|
Reinforced stochastic gradient descent for deep neural network learning
|
Stochastic gradient descent (SGD) is a standard optimization method to
minimize a training error with respect to network parameters in modern neural
network learning. However, it typically suffers from proliferation of saddle
points in the high-dimensional parameter space. Therefore, it is highly
desirable to design an efficient algorithm to escape from these saddle points
and reach a parameter region of better generalization capabilities. Here, we
propose a simple extension of SGD, namely reinforced SGD, which simply adds
previous first-order gradients in a stochastic manner with a probability that
increases with learning time. As verified in a simple synthetic dataset, this
method significantly accelerates learning compared with the original SGD.
Surprisingly, it dramatically reduces over-fitting effects, even compared with
Adam, a state-of-the-art adaptive learning algorithm. For a benchmark
handwritten digits dataset, the learning performance is comparable to Adam, yet
with the extra advantage of requiring less computer memory. The
reinforced SGD is also compared with SGD with a fixed or adaptive momentum
parameter and with Nesterov's momentum, which shows that the proposed framework is
able to reach a similar generalization accuracy at lower computational cost.
Overall, our method introduces stochastic memory into gradients, which plays an
important role in understanding how gradient-based training algorithms can work
and its relationship with generalization abilities of deep networks.
|
[
"['Haiping Huang' 'Taro Toyoizumi']",
"Haiping Huang and Taro Toyoizumi"
] |
stat.ML cs.LG stat.AP stat.ME
| null |
1701.08055
| null | null |
http://arxiv.org/pdf/1701.08055v1
|
2017-01-27T14:01:53Z
|
2017-01-27T14:01:53Z
|
Modelling Competitive Sports: Bradley-Terry-Élő Models for
Supervised and On-Line Learning of Paired Competition Outcomes
|
Prediction and modelling of competitive sports outcomes has received much
recent attention, especially from the Bayesian statistics and machine learning
communities. In the real-world setting of outcome prediction, the seminal
Élő update still remains, after more than 50 years, a valuable baseline
which is difficult to improve upon, though in its original form it is a
heuristic and not a proper statistical "model". Mathematically, the Élő
rating system is very closely related to the Bradley-Terry models, which are
usually used in an explanatory fashion rather than in a predictive supervised
or on-line learning setting.
Exploiting this close link between these two model classes and some newly
observed similarities, we propose a new supervised learning framework with
close similarities to logistic regression, low-rank matrix completion and
neural networks. Building on it, we formulate a class of structured log-odds
models, unifying the desirable properties found in the above: supervised
probabilistic prediction of scores and wins/draws/losses, batch/epoch and
on-line learning, as well as the possibility to incorporate features in the
prediction, without having to sacrifice simplicity, parsimony of the
Bradley-Terry models, or the computational efficiency of Élő's original
approach.
We validate the structured log-odds modelling approach in synthetic
experiments and English Premier League outcomes, where the added expressivity
yields the best predictions reported in the state of the art, close to the quality
of contemporary betting odds.
|
[
"Franz J. Kir\\'aly and Zhaozhi Qian",
"['Franz J. Király' 'Zhaozhi Qian']"
] |
cs.SY cs.LG
| null |
1701.08074
| null | null |
http://arxiv.org/pdf/1701.08074v2
|
2017-02-24T15:59:34Z
|
2017-01-27T15:15:54Z
|
Model-Free Control of Thermostatically Controlled Loads Connected to a
District Heating Network
|
Optimal control of thermostatically controlled loads connected to a district
heating network is considered as a sequential decision-making problem under
uncertainty. The practicality of a direct model-based approach is compromised
by two challenges: scalability, due to the large dimensionality of the
problem, and the system identification required to obtain an accurate model.
To help mitigate these problems, this paper leverages recent
developments in reinforcement learning in combination with a market-based
multi-agent system to obtain a scalable solution that achieves a significant
performance improvement in a practical learning time. The control approach is
applied on a scenario comprising 100 thermostatically controlled loads
connected to a radial district heating network supplied by a central combined
heat and power plant. Both for an energy arbitrage and a peak shaving
objective, the control approach requires 60 days to obtain a performance within
65% of a theoretical lower bound on the cost.
|
[
"Bert J. Claessens, Dirk Vanhoudt, Johan Desmedt, Frederik Ruelens",
"['Bert J. Claessens' 'Dirk Vanhoudt' 'Johan Desmedt' 'Frederik Ruelens']"
] |
cs.SE cs.LG
|
10.1007/s1051
|
1701.08106
| null | null |
http://arxiv.org/abs/1701.08106v2
|
2017-08-03T21:15:47Z
|
2017-01-27T16:36:09Z
|
Faster Discovery of Faster System Configurations with Spectral Learning
|
Despite the huge spread and economic importance of configurable software
systems, there is unsatisfactory support for utilizing the full potential of
these systems with respect to finding performance-optimal configurations. Prior
work on predicting the performance of software configurations suffered from
either (a) requiring far too many sample configurations or (b) large variances
in their predictions. Both these problems can be avoided using the WHAT
spectral learner. WHAT's innovation is the use of the spectrum (eigenvalues) of
the distance matrix between the configurations of a configurable software
system, to perform dimensionality reduction. Within that reduced configuration
space, many closely associated configurations can be studied by executing only
a few sample configurations. For the subject systems studied here, a few dozen
samples yield accurate and stable predictors: less than 10% prediction error,
with a standard deviation of less than 2%. When compared to the state of the
art, WHAT (a) requires 2 to 10 times fewer samples to achieve similar
prediction accuracies, and (b) its predictions are more stable (i.e., have
lower standard deviation). Furthermore, we demonstrate that predictive models
generated by WHAT can be used by optimizers to discover system configurations
that closely approach the optimal performance.
|
[
"Vivek Nair, Tim Menzies, Norbert Siegmund, Sven Apel",
"['Vivek Nair' 'Tim Menzies' 'Norbert Siegmund' 'Sven Apel']"
] |
cs.LG cs.AI q-bio.QM stat.ML
| null |
1701.08305
| null | null |
http://arxiv.org/pdf/1701.08305v1
|
2017-01-28T17:45:58Z
|
2017-01-28T17:45:58Z
|
Multiclass MinMax Rank Aggregation
|
We introduce a new family of minmax rank aggregation problems under two
distance measures, the Kendall $\tau$ and the Spearman footrule. As the
problems are NP-hard, we proceed to describe a number of constant-approximation
algorithms for solving them. We conclude with illustrative applications of the
aggregation methods on the Mallows model and genomic data.
|
[
"Pan Li and Olgica Milenkovic",
"['Pan Li' 'Olgica Milenkovic']"
] |
q-bio.QM cs.LG q-bio.BM stat.ML
| null |
1701.08318
| null | null |
http://arxiv.org/pdf/1701.08318v1
|
2017-01-28T19:33:59Z
|
2017-01-28T19:33:59Z
|
Deep Recurrent Neural Network for Protein Function Prediction from
Sequence
|
As high-throughput biological sequencing becomes faster and cheaper, the need
to extract useful information from sequencing becomes ever more paramount,
often limited by low-throughput experimental characterizations. For proteins,
accurate prediction of their functions directly from their primary amino-acid
sequences has been a long-standing challenge. Here, machine learning using
artificial recurrent neural networks (RNN) was applied towards classification
of protein function directly from primary sequence without sequence alignment,
heuristic scoring or feature engineering. The RNN models containing
long short-term memory (LSTM) units trained on public, annotated datasets from
UniProt achieved high performance for in-class prediction of four important
protein functions tested, particularly compared to other machine learning
algorithms using sequence-derived protein features. RNN models were used also
for out-of-class predictions of phylogenetically distinct protein families with
similar functions, including proteins of the CRISPR-associated nuclease,
ferritin-like iron storage and cytochrome P450 families. Applying the trained
RNN models on the partially unannotated UniRef100 database predicted not only
candidates validated by existing annotations but also currently unannotated
sequences. Some RNN predictions for the ferritin-like iron sequestering
function were experimentally validated, even though their sequences differ
significantly from known, characterized proteins and from each other and cannot
be easily predicted using popular bioinformatics methods. As sequencing and
experimental characterization data increases rapidly, the machine-learning
approach based on RNN could be useful for discovery and prediction of
homologues for a wide range of protein functions.
|
[
"Xueliang Liu",
"['Xueliang Liu']"
] |
cs.CV cs.AI cs.LG
| null |
1701.08374
| null | null |
http://arxiv.org/pdf/1701.08374v1
|
2017-01-29T13:19:07Z
|
2017-01-29T13:19:07Z
|
Feature base fusion for splicing forgery detection based on neuro fuzzy
|
Most research on image forensics has focused on detecting
artifacts introduced by a single processing tool. This has led to the
development of many specialized algorithms looking for one or more particular
footprints under specific settings. Naturally, the performance of such
algorithms is not perfect, and accordingly the provided output might be noisy,
inaccurate and only partially correct. Furthermore, a forged image in practical
scenarios is often the result of using several of the tools provided by
image-processing software systems. Therefore, reliable tamper detection
requires developing more powerful tools to deal with various tampering
scenarios. Fusion of forgery detection tools based on a Fuzzy Inference System
has been used before for addressing this problem. Adjusting the membership
functions and defining proper fuzzy rules to attain better results are
time-consuming processes; this can be regarded as the main disadvantage of
fuzzy inference systems. In this paper, a Neuro-Fuzzy inference system for the
fusion of forgery detection tools is developed. The neural network
characteristic of these systems provides an appropriate means for
automatically adjusting the membership functions. Moreover, the initial fuzzy
inference system is generated using fuzzy clustering techniques. The proposed
framework is implemented and validated on a benchmark image-splicing data set
in which three forgery detection tools are fused based on an adaptive
Neuro-Fuzzy inference system. The outcome of the proposed method reveals that
applying Neuro-Fuzzy inference systems could be a better approach for the
fusion of forgery detection tools.
|
[
"['Habib Ghaffari Hadigheh' 'Ghazali bin sulong']",
"Habib Ghaffari Hadigheh and Ghazali bin sulong"
] |
cs.LG cs.CV
|
10.1109/LSP.2017.2704359
|
1701.08401
| null | null |
http://arxiv.org/abs/1701.08401v2
|
2017-03-21T12:53:26Z
|
2017-01-29T17:11:13Z
|
When Slepian Meets Fiedler: Putting a Focus on the Graph Spectrum
|
The study of complex systems benefits from graph models and their analysis.
In particular, the eigendecomposition of the graph Laplacian lets properties of
global organization emerge from local interactions; e.g., the Fiedler vector
has the smallest non-zero eigenvalue and plays a key role in graph clustering.
Graph signal processing focuses on the analysis of signals that are attributed
to the graph nodes. The eigendecomposition of the graph Laplacian makes it
possible to define the graph Fourier transform and extend conventional
signal-processing operations to graphs. Here, we introduce the design of
Slepian graph signals by maximizing energy concentration in a predefined
subgraph for a given graph spectral bandlimit. We establish a novel link with
classical Laplacian embedding and graph clustering, which provides a meaning to
localized graph frequencies.
|
[
"Dimitri Van De Ville, Robin Demesmaeker, Maria Giulia Preti",
"['Dimitri Van De Ville' 'Robin Demesmaeker' 'Maria Giulia Preti']"
] |
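As a point of reference for the Fiedler vector discussed in the abstract above, here is a minimal sketch (not from the paper) of computing it with NumPy/SciPy and using its sign pattern for a two-way spectral cut; the toy adjacency matrix is an assumption for illustration.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian

def fiedler_vector(A):
    """Eigenvector of the smallest non-zero Laplacian eigenvalue
    (a connected graph is assumed)."""
    L = laplacian(A)                 # L = D - A
    _, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return eigvecs[:, 1]             # index 0 is the constant vector (eigenvalue 0)

# Toy graph: two triangles joined by one edge; the sign of the
# Fiedler vector recovers the two communities.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
print(np.sign(fiedler_vector(A)))
```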
cs.DS cs.CG cs.LG
| null |
1701.08423
| null | null |
http://arxiv.org/pdf/1701.08423v3
|
2017-08-10T09:46:07Z
|
2017-01-29T19:55:27Z
|
On the Local Structure of Stable Clustering Instances
|
We study the classic $k$-median and $k$-means clustering objectives in the
beyond-worst-case scenario. We consider three well-studied notions of
structured data that aim at characterizing real-world inputs: Distribution
Stability (introduced by Awasthi, Blum, and Sheffet, FOCS 2010), Spectral
Separability (introduced by Kumar and Kannan, FOCS 2010), and Perturbation
Resilience (introduced by Bilu and Linial, ICS 2010).
We prove structural results showing that inputs satisfying at least one of
the conditions are inherently "local". Namely, for any such input, any local
optimum is close to the global optimum both in terms of structure and in terms
of objective value.
As a corollary we obtain that the widely-used Local Search algorithm has
strong performance guarantees for both the tasks of recovering the underlying
optimal clustering and obtaining a clustering of small cost. This is a
significant step toward understanding the success of local search heuristics in
clustering applications.
|
[
"['Vincent Cohen-Addad' 'Chris Schwiegelshohn']",
"Vincent Cohen-Addad, Chris Schwiegelshohn"
] |
cs.LG cs.CV
| null |
1701.08435
| null | null | null | null | null |
Transformation-Based Models of Video Sequences
|
In this work we propose a simple unsupervised approach for next frame
prediction in video. Instead of directly predicting the pixels in a frame given
past frames, we predict the transformations needed for generating the next
frame in a sequence, given the transformations of the past frames. This leads
to sharper results, while using a smaller prediction model. In order to enable
a fair comparison between different video frame prediction models, we also
propose a new evaluation protocol. We use generated frames as input to a
classifier trained with ground truth sequences. This criterion guarantees that
models scoring high are those producing sequences which preserve discriminative
features, as opposed to merely penalizing any deviation, plausible or not, from
the ground truth. Our proposed approach compares favourably against more
sophisticated ones on the UCF-101 data set, while also being more efficient in
terms of the number of parameters and computational cost.
|
[
"Joost van Amersfoort, Anitha Kannan, Marc'Aurelio Ranzato, Arthur\n Szlam, Du Tran and Soumith Chintala"
] |
cs.SE cs.LG cs.LO
|
10.4204/EPTCS.240.2
|
1701.08466
| null | null |
http://arxiv.org/abs/1701.08466v1
|
2017-01-30T03:32:24Z
|
2017-01-30T03:32:24Z
|
Predicting SMT Solver Performance for Software Verification
|
The Why3 IDE and verification system facilitates the use of a wide range of
Satisfiability Modulo Theories (SMT) solvers through a driver-based
architecture. We present Where4: a portfolio-based approach to discharge Why3
proof obligations. We use data analysis and machine learning techniques on
static metrics derived from program source code. Our approach benefits software
engineers by providing a single utility to delegate proof obligations to the
solvers most likely to return a useful result. It does this in a time-efficient
way using existing Why3 and solver installations - without requiring low-level
knowledge about SMT solver operation from the user.
|
[
"Andrew Healy (Maynooth University), Rosemary Monahan (Maynooth\n University), James F. Power (Maynooth University)",
"['Andrew Healy' 'Rosemary Monahan' 'James F. Power']"
] |
cs.LG stat.ML
| null |
1701.08473
| null | null |
http://arxiv.org/pdf/1701.08473v2
|
2017-02-08T03:44:39Z
|
2017-01-30T03:47:44Z
|
Model-based Classification and Novelty Detection For Point Pattern Data
|
Point patterns are sets or multi-sets of unordered elements that can be found
in numerous data sources. However, in data analysis tasks such as
classification and novelty detection, appropriate statistical models for point
pattern data have not received much attention. This paper proposes the
modelling of point pattern data via random finite sets (RFS). In particular, we
propose appropriate likelihood functions, and a maximum likelihood estimator
for learning a tractable family of RFS models. In novelty detection, we propose
novel ranking functions based on RFS models, which substantially improve
performance.
|
[
"Ba-Ngu Vo, Quang N. Tran, Dinh Phung, Ba-Tuong Vo",
"['Ba-Ngu Vo' 'Quang N. Tran' 'Dinh Phung' 'Ba-Tuong Vo']"
] |
cs.LG cs.IR
|
10.1109/LSP.2016.2639036
|
1701.08511
| null | null |
http://arxiv.org/abs/1701.08511v1
|
2017-01-30T08:37:25Z
|
2017-01-30T08:37:25Z
|
Binary adaptive embeddings from order statistics of random projections
|
We use some of the largest order statistics of the random projections of a
reference signal to construct a binary embedding that is adapted to signals
correlated with that reference signal. The embedding is characterized from an
analytical standpoint and shown to provide improved performance on tasks such as
classification in a reduced-dimensionality space.
|
[
"['Diego Valsesia' 'Enrico Magli']",
"Diego Valsesia, Enrico Magli"
] |
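One possible reading of the construction above, sketched in NumPy: project a reference signal through a random Gaussian matrix, keep only the rows yielding the largest-magnitude projections (the top order statistics), and binarize new signals by the sign of those adapted projections. The selection rule and sign quantization here are assumptions for illustration, not the authors' exact design.

```python
import numpy as np

def adapted_binary_embedding(reference, m=256, k=64, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((m, reference.size))  # m random projections
    scores = np.abs(P @ reference)                # projections of the reference
    keep = np.argsort(scores)[-k:]                # k largest order statistics
    P_adapted = P[keep]
    return lambda x: (P_adapted @ x > 0).astype(np.uint8)

rng = np.random.default_rng(1)
ref = rng.standard_normal(128)
embed = adapted_binary_embedding(ref)
near = ref + 0.1 * rng.standard_normal(128)  # correlated with the reference
far = rng.standard_normal(128)               # unrelated signal
# signals correlated with the reference land close in Hamming distance
print(np.sum(embed(near) != embed(ref)), np.sum(embed(far) != embed(ref)))
```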
cs.CV cs.LG stat.ML
| null |
1701.08528
| null | null |
http://arxiv.org/pdf/1701.08528v1
|
2017-01-30T10:01:38Z
|
2017-01-30T10:01:38Z
|
Self-Adaptation of Activity Recognition Systems to New Sensors
|
Traditional activity recognition systems work on the basis of training,
taking a fixed set of sensors into account. In this article, we focus on the
question of how pattern recognition can leverage new information sources
without any, or with minimal, user input. Thus, we present an approach for
opportunistic activity recognition, where ubiquitous sensors lead to
dynamically changing input spaces. Our method is a variation of
well-established principles of machine learning, relying on unsupervised
clustering to discover structure in data and inferring cluster labels from a
small number of labeled data points in a semi-supervised manner. Elaborating
the challenges, evaluations of over 3000
sensor combinations from three multi-user experiments are presented in detail
and show the potential benefit of our approach.
|
[
"David Bannach, Martin J\\\"anicke, Vitor F. Rey, Sven Tomforde, Bernhard\n Sick, Paul Lukowicz",
"['David Bannach' 'Martin Jänicke' 'Vitor F. Rey' 'Sven Tomforde'\n 'Bernhard Sick' 'Paul Lukowicz']"
] |
cs.LG cs.SI cs.SY math.OC
| null |
1701.08585
| null | null |
http://arxiv.org/pdf/1701.08585v4
|
2017-11-10T15:09:15Z
|
2017-01-30T13:24:07Z
|
Variational Policy for Guiding Point Processes
|
Temporal point processes have been widely applied to model event sequence
data generated by online users. In this paper, we consider the problem of how
to design the optimal control policy for point processes, such that the
stochastic system driven by the point process is steered to a target state. In
particular, we exploit the key insight to view the stochastic optimal control
problem from the perspective of optimal measure and variational inference. We
further propose a convex optimization framework and an efficient algorithm to
update the policy adaptively to the current system state. Experiments on
synthetic and real-world data show that our algorithm can steer the user
activities much more accurately and efficiently than other stochastic control
methods.
|
[
"['Yichen Wang' 'Grady Williams' 'Evangelos Theodorou' 'Le Song']",
"Yichen Wang, Grady Williams, Evangelos Theodorou, Le Song"
] |
cs.CL cs.LG
| null |
1701.08694
| null | null |
http://arxiv.org/pdf/1701.08694v1
|
2017-01-27T13:08:08Z
|
2017-01-27T13:08:08Z
|
A Comparative Study on Different Types of Approaches to Bengali document
Categorization
|
Document categorization is a technique for determining the category of a
document. In this paper, three well-known supervised learning techniques,
Support Vector Machine (SVM), Na\"ive Bayes (NB) and Stochastic Gradient
Descent (SGD), are compared for Bengali document categorization. Besides the
classifier, classification also depends on how features are selected from the
dataset. To analyze the performance of those classifiers in predicting a
document against twelve categories, several feature selection techniques are
also applied in this article, namely the Chi-square distribution and
normalized TF-IDF (term frequency-inverse document frequency) with a word
analyzer. We thus attempt to explore the efficiency of the three
classification algorithms using two different feature selection techniques.
|
[
"Md. Saiful Islam, Fazla Elahi Md Jubayer and Syed Ikhtiar Ahmed",
"['Md. Saiful Islam' 'Fazla Elahi Md Jubayer' 'Syed Ikhtiar Ahmed']"
] |
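For readers who want to reproduce the general recipe in the abstract above, here is a minimal scikit-learn sketch combining TF-IDF features, chi-square feature selection, and the three classifiers; the toy documents and the choice of k are placeholders, not the paper's Bengali corpus or settings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import SGDClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

docs = ["goal scored in the final", "stocks fell sharply today",
        "the team won the cup", "markets rallied on earnings"]
labels = ["sports", "business", "sports", "business"]

for name, clf in [("SVM", LinearSVC()), ("NB", MultinomialNB()),
                  ("SGD", SGDClassifier())]:
    pipe = Pipeline([("tfidf", TfidfVectorizer(analyzer="word")),
                     ("chi2", SelectKBest(chi2, k=5)),  # keep top-k features
                     ("clf", clf)])
    pipe.fit(docs, labels)
    print(name, pipe.predict(["the cup final was won"]))
```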
cs.CL cs.LG q-fin.EC stat.ML
|
10.1016/j.eswa.2019.113008
|
1701.08711
| null | null |
http://arxiv.org/abs/1701.08711v5
|
2019-10-08T16:25:45Z
|
2017-01-30T17:14:25Z
|
Predicting Auction Price of Vehicle License Plate with Deep Recurrent
Neural Network
|
In Chinese societies, superstition is of paramount importance, and vehicle
license plates with desirable numbers can fetch very high prices in auctions.
Unlike other valuable items, license plates are not allocated an estimated
price before auction. I propose that the task of predicting plate prices can be
viewed as a natural language processing (NLP) task, as the value depends on the
meaning of each individual character on the plate and its semantics. I
construct a deep recurrent neural network (RNN) to predict the prices of
vehicle license plates in Hong Kong, based on the characters on a plate. I
demonstrate the importance of having a deep network and of retraining.
Evaluated on 13 years of historical auction prices, the deep RNN's predictions
can explain over 80 percent of price variations, outperforming previous models
by a significant margin. I also demonstrate how the model can be extended to
become a search engine for plates and to provide estimates of the expected
price distribution.
|
[
"Vinci Chow",
"['Vinci Chow']"
] |
cs.CY cs.LG
| null |
1701.08716
| null | null |
http://arxiv.org/pdf/1701.08716v2
|
2017-03-24T18:14:26Z
|
2017-01-25T17:07:33Z
|
Does Weather Matter? Causal Analysis of TV Logs
|
Weather affects our mood and behaviors, and many aspects of our life. When it
is sunny, most people become happier; but when it rains, some people get
depressed. Despite this evidence and the abundance of data, weather has mostly
been overlooked in the machine learning and data science research. This work
presents a causal analysis of how weather affects TV watching patterns. We show
that some weather attributes, such as pressure and precipitation, cause major
changes in TV watching patterns. To the best of our knowledge, this is the
first large-scale causal study of the impact of weather on TV watching
patterns.
|
[
"Shi Zong, Branislav Kveton, Shlomo Berkovsky, Azin Ashkan, Nikos\n Vlassis, Zheng Wen",
"['Shi Zong' 'Branislav Kveton' 'Shlomo Berkovsky' 'Azin Ashkan'\n 'Nikos Vlassis' 'Zheng Wen']"
] |
cs.LG cs.NE stat.ML
| null |
1701.08718
| null | null |
http://arxiv.org/pdf/1701.08718v1
|
2017-01-30T17:34:51Z
|
2017-01-30T17:34:51Z
|
Memory Augmented Neural Networks with Wormhole Connections
|
Recent empirical results on long-term dependency tasks have shown that neural
networks augmented with an external memory can learn the long-term dependency
tasks more easily and achieve better generalization than vanilla recurrent
neural networks (RNN). We suggest that memory augmented neural networks can
reduce the effects of vanishing gradients by creating shortcut (or wormhole)
connections. Based on this observation, we propose a novel memory augmented
neural network model called TARDIS (Temporal Automatic Relation Discovery in
Sequences). The controller of TARDIS can store a selective set of embeddings of
its own previous hidden states into an external memory and revisit them as and
when needed. For TARDIS, memory acts as a storage for wormhole connections to
the past to propagate the gradients more effectively and it helps to learn the
temporal dependencies. The memory structure of TARDIS has similarities to both
Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but
both read and write operations of TARDIS are simpler and more efficient. We use
discrete addressing for read/write operations, which helps to substantially
reduce the vanishing gradient problem with very long sequences. Read and write
operations in TARDIS are tied with a heuristic once the memory becomes full,
and this makes the learning problem simpler when compared to NTM or D-NTM type
of architectures. We provide a detailed analysis on the gradient propagation in
general for MANNs. We evaluate our models on different long-term dependency
tasks and report competitive results in all of them.
|
[
"['Caglar Gulcehre' 'Sarath Chandar' 'Yoshua Bengio']",
"Caglar Gulcehre, Sarath Chandar, Yoshua Bengio"
] |
cs.NE cs.LG
| null |
1701.08734
| null | null |
http://arxiv.org/pdf/1701.08734v1
|
2017-01-30T18:06:07Z
|
2017-01-30T18:06:07Z
|
PathNet: Evolution Channels Gradient Descent in Super Neural Networks
|
For artificial general intelligence (AGI) it would be efficient if multiple
users trained the same giant neural network, permitting parameter reuse,
without catastrophic forgetting. PathNet is a first step in this direction. It
is a neural network algorithm that uses agents embedded in the neural network
whose task is to discover which parts of the network to re-use for new tasks.
Agents are pathways (views) through the network which determine the subset of
parameters that are used and updated by the forwards and backwards passes of
the backpropagation algorithm. During learning, a tournament selection genetic
algorithm is used to select pathways through the neural network for replication
and mutation. Pathway fitness is the performance of that pathway measured
according to a cost function. We demonstrate successful transfer learning;
fixing the parameters along a path learned on task A and re-evolving a new
population of paths for task B, allows task B to be learned faster than it
could be learned from scratch or after fine-tuning. Paths evolved on task B
re-use parts of the optimal path evolved on task A. Positive transfer was
demonstrated for binary MNIST, CIFAR, and SVHN supervised learning
classification tasks, and a set of Atari and Labyrinth reinforcement learning
tasks, suggesting PathNets have general applicability for neural network
training. Finally, PathNet also significantly improves the robustness to
hyperparameter choices of a parallel asynchronous reinforcement learning
algorithm (A3C).
|
[
"['Chrisantha Fernando' 'Dylan Banarse' 'Charles Blundell' 'Yori Zwols'\n 'David Ha' 'Andrei A. Rusu' 'Alexander Pritzel' 'Daan Wierstra']",
"Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols,\n David Ha, Andrei A. Rusu, Alexander Pritzel, Daan Wierstra"
] |
cs.IR cs.AI cs.LG
| null |
1701.08744
| null | null |
http://arxiv.org/pdf/1701.08744v1
|
2017-01-30T18:32:59Z
|
2017-01-30T18:32:59Z
|
Click Through Rate Prediction for Contextual Advertisment Using Linear
Regression
|
This research presents an innovative and unique way of solving the
advertisement prediction problem, which has been treated as a learning problem
over the past several years. Online advertising is a multi-billion-dollar
industry and is growing every year at a rapid pace. The goal of this research
is to enhance the click-through rate of contextual advertisements using Linear
Regression. To address this problem, a new technique is proposed in this paper
to predict the CTR, which will increase the overall revenue of the system by
serving advertisements more suitable to the viewers, with the help of feature
extraction, and by displaying the advertisements based on the context of the
publishers. The important steps include data collection, feature extraction,
CTR prediction and advertisement serving. The statistical results obtained
from the dynamically used technique show an efficient outcome by fitting the
data close to perfection for the LR technique using optimized feature
selection.
|
[
"['Muhammad Junaid Effendi' 'Syed Abbas Ali']",
"Muhammad Junaid Effendi and Syed Abbas Ali"
] |
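A minimal illustration of the core idea above, CTR prediction via linear regression over context features; the features and numbers below are hypothetical stand-ins, since the paper's actual feature set comes from its own extraction pipeline.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical context features: [ad/page topic similarity, slot position, hour]
X = np.array([[0.9, 1, 10],
              [0.2, 3, 22],
              [0.7, 2, 9],
              [0.1, 5, 23],
              [0.8, 1, 12]])
y = np.array([0.080, 0.010, 0.050, 0.005, 0.070])  # observed CTRs

model = LinearRegression().fit(X, y)
candidate_ads = np.array([[0.85, 1, 11], [0.15, 4, 11]])
print(model.predict(candidate_ads))  # rank candidates by predicted CTR
```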
cs.LG cs.SY stat.ML
|
10.1016/j.ifacol.2016.12.184
|
1701.08757
| null | null |
http://arxiv.org/abs/1701.08757v1
|
2017-01-27T20:45:31Z
|
2017-01-27T20:45:31Z
|
Bayesian Learning of Consumer Preferences for Residential Demand
Response
|
In coming years residential consumers will face real-time electricity tariffs
with energy prices varying day to day, and effective energy saving will require
automation - a recommender system that learns a consumer's preferences from her
actions. A consumer chooses a scenario of home appliance use to balance her
comfort level and the energy bill. We propose a Bayesian learning algorithm to
estimate the comfort level function from the history of appliance use. In
numeric experiments with datasets generated from a simulation model of a
consumer interacting with small home appliances the algorithm outperforms
popular regression analysis tools. Our approach can be extended to control an
air heating and conditioning system, which is responsible for up to half of a
household's energy bill.
|
[
"Mikhail V. Goubko and Sergey O. Kuznetsov and Alexey A. Neznanov and\n Dmitry I. Ignatov",
"['Mikhail V. Goubko' 'Sergey O. Kuznetsov' 'Alexey A. Neznanov'\n 'Dmitry I. Ignatov']"
] |
cs.LG stat.ML
| null |
1701.08795
| null | null |
http://arxiv.org/pdf/1701.08795v2
|
2017-02-25T07:02:37Z
|
2017-01-30T19:37:21Z
|
Dynamic Task Allocation for Crowdsourcing Settings
|
We consider the problem of optimal budget allocation for crowdsourcing
problems, allocating users to tasks to maximize our final confidence in the
crowdsourced answers. Such an optimized worker assignment method allows us to
boost the efficacy of any popular crowdsourcing estimation algorithm. We
consider a mutual information interpretation of the crowdsourcing problem,
which leads to a stochastic subset selection problem with a submodular
objective function. We present experimental simulation results which
demonstrate the effectiveness of our dynamic task allocation method for
achieving higher accuracy, possibly requiring fewer labels, as well as
improving upon a previous method which is sensitive to the proportion of users
to questions.
|
[
"['Angela Zhou' 'Irineo Cabreros' 'Karan Singh']",
"Angela Zhou, Irineo Cabreros, Karan Singh"
] |
cs.LG cs.CY cs.SI
| null |
1701.08796
| null | null |
http://arxiv.org/pdf/1701.08796v1
|
2017-01-30T19:41:04Z
|
2017-01-30T19:41:04Z
|
Learning from various labeling strategies for suicide-related messages
on social media: An experimental study
|
Suicide is an important but often misunderstood problem, one that researchers
are now seeking to better understand through social media. Due in large part to
the fuzzy nature of what constitutes suicidal risks, most supervised approaches
for learning to automatically detect suicide-related activity in social media
require a great deal of human labor to train. However, humans themselves have
diverse or conflicting views on what constitutes suicidal thoughts. So how to
obtain reliable gold standard labels is fundamentally challenging and, we
hypothesize, depends largely on what is asked of the annotators and what slice
of the data they label. We conducted multiple rounds of data labeling and
collected annotations from crowdsourcing workers and domain experts. We
aggregated the resulting labels in various ways to train a series of supervised
models. Our preliminary evaluations show that using unanimously agreed labels
from multiple annotators is helpful to achieve robust machine models.
|
[
"Tong Liu and Qijin Cheng and Christopher M. Homan and Vincent M.B.\n Silenzio",
"['Tong Liu' 'Qijin Cheng' 'Christopher M. Homan' 'Vincent M. B. Silenzio']"
] |
stat.ML cs.AI cs.LG math.OC
| null |
1701.08810
| null | null |
http://arxiv.org/pdf/1701.08810v3
|
2017-11-14T21:08:17Z
|
2017-01-30T20:13:17Z
|
Reinforcement Learning Algorithm Selection
|
This paper formalises the problem of online algorithm selection in the
context of Reinforcement Learning. The setup is as follows: given an episodic
task and a finite number of off-policy RL algorithms, a meta-algorithm has to
decide which RL algorithm is in control during the next episode so as to
maximize the expected return. The article presents a novel meta-algorithm,
called Epochal Stochastic Bandit Algorithm Selection (ESBAS). Its principle is
to freeze the policy updates at each epoch, and to leave a rebooted stochastic
bandit in charge of the algorithm selection. Under some assumptions, a thorough
theoretical analysis demonstrates its near-optimality considering the
structural sampling budget limitations. ESBAS is first empirically evaluated on
a dialogue task where it is shown to outperform each individual algorithm in
most configurations. ESBAS is then adapted to a true online setting where
algorithms update their policies after each transition, which we call SSBAS.
SSBAS is evaluated on a fruit collection task where it is shown to adapt the
stepsize parameter more efficiently than the classical hyperbolic decay, and on
an Atari game, where it improves the performance by a wide margin.
|
[
"Romain Laroche and Raphael Feraud"
] |
null | null |
1701.08810
| null | null |
http://arxiv.org/pdf/1701.08810v3
|
2017-11-14T21:08:17Z
|
2017-01-30T20:13:17Z
|
Reinforcement Learning Algorithm Selection
|
This paper formalises the problem of online algorithm selection in the context of Reinforcement Learning. The setup is as follows: given an episodic task and a finite number of off-policy RL algorithms, a meta-algorithm has to decide which RL algorithm is in control during the next episode so as to maximize the expected return. The article presents a novel meta-algorithm, called Epochal Stochastic Bandit Algorithm Selection (ESBAS). Its principle is to freeze the policy updates at each epoch, and to leave a rebooted stochastic bandit in charge of the algorithm selection. Under some assumptions, a thorough theoretical analysis demonstrates its near-optimality considering the structural sampling budget limitations. ESBAS is first empirically evaluated on a dialogue task where it is shown to outperform each individual algorithm in most configurations. ESBAS is then adapted to a true online setting where algorithms update their policies after each transition, which we call SSBAS. SSBAS is evaluated on a fruit collection task where it is shown to adapt the stepsize parameter more efficiently than the classical hyperbolic decay, and on an Atari game, where it improves the performance by a wide margin.
|
[
"['Romain Laroche' 'Raphael Feraud']"
] |
cs.CV cs.LG
| null |
1701.08816
| null | null |
http://arxiv.org/pdf/1701.08816v4
|
2018-02-13T16:12:40Z
|
2017-01-30T20:21:57Z
|
Fully Convolutional Architectures for Multi-Class Segmentation in Chest
Radiographs
|
The success of deep convolutional neural networks on image classification and
recognition tasks has led to new applications in very diversified contexts,
including the field of medical imaging. In this paper we investigate and
propose neural network architectures for automated multi-class segmentation of
anatomical organs in chest radiographs, namely for lungs, clavicles and heart.
We address several open challenges, including model overfitting, reducing the
number of parameters and handling severely imbalanced data in CXR, by fusing
recent concepts in convolutional networks and adapting them to the segmentation
task in CXR. We demonstrate that our architecture, combining delayed
subsampling, exponential linear units, highly restrictive regularization and a
large number of high-resolution low-level abstract features, outperforms
state-of-the-art methods on all considered organs, as well as the human
observer on lungs and heart. The models use a multi-class configuration with
three target classes and are trained and tested on the publicly available JSRT
database, consisting of 247 X-ray images, the ground-truth masks for which are
available in the SCR database. Our best-performing model, trained with the loss
function based on the Dice coefficient, reached mean Jaccard overlap scores of
95.0\% for lungs, 86.8\% for clavicles and 88.2\% for heart. This architecture
outperformed the human observer results for lungs and heart.
|
[
"Alexey A. Novikov, Dimitrios Lenis, David Major, Jiri Hlad\\r{u}vka,\n Maria Wimmer, Katja B\\\"uhler",
"['Alexey A. Novikov' 'Dimitrios Lenis' 'David Major' 'Jiri Hladůvka'\n 'Maria Wimmer' 'Katja Bühler']"
] |
cs.LG cs.CV
| null |
1701.08837
| null | null |
http://arxiv.org/pdf/1701.08837v1
|
2017-01-30T21:44:27Z
|
2017-01-30T21:44:27Z
|
Emergence of Selective Invariance in Hierarchical Feed Forward Networks
|
Many theories have emerged which investigate how invariance is generated in
hierarchical networks through simple schemes such as max and mean pooling. The
restriction to max/mean pooling in theoretical and empirical studies has
diverted attention away from a more general way of generating invariance to
nuisance transformations. We conjecture that hierarchically building selective
invariance (i.e. carefully choosing the range of the transformation to be
invariant to at each layer of a hierarchical network) is important for pattern
recognition. We utilize a novel pooling layer called adaptive pooling to find
linear pooling weights within networks. These networks with the learnt pooling
weights have performances on object categorization tasks that are comparable
to max/mean pooling networks. Interestingly, adaptive pooling can converge to
mean pooling (when initialized with random pooling weights), find more general
linear pooling schemes or even decide not to pool at all. We illustrate the
general notion of selective invariance through object categorization
experiments on large-scale datasets such as SVHN and ILSVRC 2012.
|
[
"['Dipan K. Pal' 'Vishnu Boddeti' 'Marios Savvides']",
"Dipan K. Pal, Vishnu Boddeti, Marios Savvides"
] |
cs.LG stat.ML
| null |
1701.08840
| null | null |
http://arxiv.org/pdf/1701.08840v1
|
2017-01-30T21:56:18Z
|
2017-01-30T21:56:18Z
|
Spatial Projection of Multiple Climate Variables using Hierarchical
Multitask Learning
|
Future projection of climate is typically obtained by combining outputs from
multiple Earth System Models (ESMs) for several climate variables such as
temperature and precipitation. While the IPCC has traditionally used a simple model
output average, recent work has illustrated potential advantages of using a
multitask learning (MTL) framework for projections of individual climate
variables. In this paper we introduce a framework for hierarchical multitask
learning (HMTL) with two levels of tasks such that each super-task, i.e., task
at the top level, is itself a multitask learning problem over sub-tasks. For
climate projections, each super-task focuses on projections of specific climate
variables spatially using an MTL formulation. For the proposed HMTL approach, a
group lasso regularization is added to couple parameters across the
super-tasks, which in the climate context helps exploit relationships among the
behavior of different climate variables at a given spatial location. We show
that some recent works on MTL based on learning task dependency structures can
be viewed as special cases of HMTL. Experiments on synthetic and real climate
data show that HMTL produces better results than decoupled MTL methods applied
separately on the super-tasks and HMTL significantly outperforms baselines for
climate projection.
|
[
"Andr\\'e R. Gon\\c{c}alves, Arindam Banerjee, Fernando J. Von Zuben"
] |
null | null |
1701.08840
| null | null |
http://arxiv.org/pdf/1701.08840v1
|
2017-01-30T21:56:18Z
|
2017-01-30T21:56:18Z
|
Spatial Projection of Multiple Climate Variables using Hierarchical
Multitask Learning
|
Future projection of climate is typically obtained by combining outputs from multiple Earth System Models (ESMs) for several climate variables such as temperature and precipitation. While IPCC has traditionally used a simple model output average, recent work has illustrated potential advantages of using a multitask learning (MTL) framework for projections of individual climate variables. In this paper we introduce a framework for hierarchical multitask learning (HMTL) with two levels of tasks such that each super-task, i.e., task at the top level, is itself a multitask learning problem over sub-tasks. For climate projections, each super-task focuses on projections of specific climate variables spatially using an MTL formulation. For the proposed HMTL approach, a group lasso regularization is added to couple parameters across the super-tasks, which in the climate context helps exploit relationships among the behavior of different climate variables at a given spatial location. We show that some recent works on MTL based on learning task dependency structures can be viewed as special cases of HMTL. Experiments on synthetic and real climate data show that HMTL produces better results than decoupled MTL methods applied separately on the super-tasks and HMTL significantly outperforms baselines for climate projection.
|
[
"['André R. Gonçalves' 'Arindam Banerjee' 'Fernando J. Von Zuben']"
] |
physics.flu-dyn cond-mat.stat-mech cs.LG nlin.CD
|
10.1103/PhysRevLett.118.158004
|
1701.08848
| null | null |
http://arxiv.org/abs/1701.08848v3
|
2017-07-26T14:14:43Z
|
2017-01-30T22:09:04Z
|
Flow Navigation by Smart Microswimmers via Reinforcement Learning
|
Smart active particles can acquire some limited knowledge of the fluid
environment from simple mechanical cues and exert a control on their preferred
steering direction. Their goal is to learn the best way to navigate by
exploiting the underlying flow whenever possible. As an example, we focus our
attention on smart gravitactic swimmers. These are active particles whose task
is to reach the highest altitude within some time horizon, given the
constraints enforced by fluid mechanics. By means of numerical experiments, we
show that swimmers indeed learn nearly optimal strategies just by experience. A
reinforcement learning algorithm allows particles to learn effective strategies
even in difficult situations when, in the absence of control, they would end up
being trapped by flow structures. These strategies are highly nontrivial and
cannot be easily guessed in advance. This Letter illustrates the potential of
reinforcement learning algorithms to model adaptive behavior in complex flows
and paves the way towards the engineering of smart microswimmers that solve
difficult navigation problems.
|
[
"['Simona Colabrese' 'Kristian Gustavsson' 'Antonio Celani' 'Luca Biferale']",
"Simona Colabrese, Kristian Gustavsson, Antonio Celani and Luca\n Biferale"
] |
cs.LG cs.CV
| null |
1701.08886
| null | null |
http://arxiv.org/pdf/1701.08886v1
|
2017-01-31T01:59:58Z
|
2017-01-31T01:59:58Z
|
SenseGen: A Deep Learning Architecture for Synthetic Sensor Data
Generation
|
Our ability to synthesize sensory data that preserves specific statistical
properties of the real data has had tremendous implications on data privacy and
big data analytics. The synthetic data can be used as a substitute for
selective real data segments that are sensitive to the user, thus protecting
privacy and resulting in improved analytics. However, increasingly adversarial
roles taken by data recipients, such as mobile apps or other cloud-based
analytics services, mandate that the synthetic data, in addition to preserving
statistical properties, should also be difficult to distinguish from the real
data. Typically, visual inspection has been used as a test to distinguish
between datasets. But more recently, sophisticated classifier models
(discriminators), corresponding to a set of events, have also been employed to
distinguish between synthesized and real data. The model operates on both
datasets and the respective event outputs are compared for consistency. In this
paper, we take a step towards generating sensory data that can pass a deep
learning based discriminator model test, and make two specific contributions.
First, we present a deep learning based architecture for synthesizing sensory
data. This architecture comprises a generator model, which is a stack of
multiple Long Short-Term Memory (LSTM) networks and a Mixture Density Network.
Second, we use another LSTM network based discriminator model for
distinguishing between the true and the synthesized data. Using a dataset of
accelerometer traces, collected using smartphones of users doing their daily
activities, we show that the deep learning based discriminator model can only
distinguish between the real and synthesized traces with an accuracy in the
neighborhood of 50%.
|
[
"['Moustafa Alzantot' 'Supriyo Chakraborty' 'Mani B. Srivastava']",
"Moustafa Alzantot, Supriyo Chakraborty, Mani B. Srivastava"
] |
cs.CV cs.LG
| null |
1701.08936
| null | null |
http://arxiv.org/pdf/1701.08936v2
|
2017-04-10T20:34:43Z
|
2017-01-31T07:48:56Z
|
Deep Reinforcement Learning for Visual Object Tracking in Videos
|
In this paper we introduce a fully end-to-end approach for visual tracking in
videos that learns to predict the bounding box locations of a target object at
every frame. An important insight is that the tracking problem can be
considered as a sequential decision-making process and historical semantics
encode highly relevant information for future decisions. Based on this
intuition, we formulate our model as a recurrent convolutional neural network
agent that interacts with a video over time, and our model can be trained with
reinforcement learning (RL) algorithms to learn good tracking policies that pay
attention to continuous, inter-frame correlation and maximize tracking
performance in the long run. The proposed tracking algorithm achieves
state-of-the-art performance in an existing tracking benchmark and operates at
frame-rates faster than real-time. To the best of our knowledge, our tracker is
the first neural-network tracker that combines convolutional and recurrent
networks with RL algorithms.
|
[
"['Da Zhang' 'Hamid Maei' 'Xin Wang' 'Yuan-Fang Wang']",
"Da Zhang, Hamid Maei, Xin Wang, Yuan-Fang Wang"
] |
cs.LG
| null |
1701.08939
| null | null |
http://arxiv.org/pdf/1701.08939v1
|
2017-01-31T08:06:33Z
|
2017-01-31T08:06:33Z
|
Deep Submodular Functions
|
We start with an overview of a class of submodular functions called SCMMs
(sums of concave composed with non-negative modular functions plus a final
arbitrary modular). We then define a new class of submodular functions we call
{\em deep submodular functions} or DSFs. We show that DSFs are a flexible
parametric family of submodular functions that share many of the properties and
advantages of deep neural networks (DNNs). DSFs can be motivated by considering
a hierarchy of descriptive concepts over ground elements and where one wishes
to allow submodular interaction throughout this hierarchy. Results in this
paper show that DSFs constitute a strictly larger class of submodular functions
than SCMMs. We show that, for any integer $k>0$, there are $k$-layer DSFs that
cannot be represented by a $k'$-layer DSF for any $k'<k$. This implies that,
like DNNs, there is a utility to depth, but unlike DNNs, the family of DSFs
strictly increases with depth. Despite this, we show (using a "backpropagation"
like method) that DSFs, even with arbitrarily large $k$, do not comprise all
submodular functions. In offering the above results, we also define the notion
of an antitone superdifferential of a concave function and show how this
relates to submodular functions (in general), DSFs (in particular), negative
second-order partial derivatives, continuous submodularity, and concave
extensions. To further motivate our analysis, we provide various special case
results from matroid theory, comparing DSFs with forms of matroid rank, in
particular the laminar matroid. Lastly, we discuss strategies to learn DSFs,
and define the classes of deep supermodular functions, deep difference of
submodular functions, and deep multivariate submodular functions, and discuss
where these can be useful in applications.
|
[
"['Jeffrey Bilmes' 'Wenruo Bai']",
"Jeffrey Bilmes, Wenruo Bai"
] |
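To make the SCMM and DSF definitions in the abstract above concrete, here is a small NumPy sketch: an SCMM sums concave functions of non-negative modular functions of a set, and a two-layer DSF composes such concave-of-modular stages. Using sqrt as the concave function and random non-negative weights is an illustrative assumption.

```python
import numpy as np

def scmm(S, W, concave=np.sqrt):
    """Sum of concave composed with non-negative modular functions.
    W: (num_components, num_ground_elements), all entries >= 0."""
    x = np.zeros(W.shape[1])
    x[list(S)] = 1.0
    return float(np.sum(concave(W @ x)))

def dsf_two_layer(S, W1, W2, concave=np.sqrt):
    """Two-layer deep submodular function: concave-of-modular, stacked."""
    x = np.zeros(W1.shape[1])
    x[list(S)] = 1.0
    h = concave(W1 @ x)                    # first layer features
    return float(np.sum(concave(W2 @ h)))  # second layer aggregates them

rng = np.random.default_rng(0)
W1 = rng.random((4, 6))  # non-negative weights keep the function submodular
W2 = rng.random((3, 4))
print(scmm({0, 2}, W1), dsf_two_layer({0, 2}, W1, W2))
```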
stat.ML cs.LG
| null |
1701.08946
| null | null |
http://arxiv.org/pdf/1701.08946v1
|
2017-01-31T08:51:59Z
|
2017-01-31T08:51:59Z
|
Variable selection for clustering with Gaussian mixture models: state of
the art
|
Mixture models have become widely used in clustering, given the probabilistic
framework on which they are based. However, for modern databases characterized
by their large size, these models can behave disappointingly when specifying
the model, making the selection of relevant variables essential for this type
of clustering. After recalling the basics of model-based clustering, this
article examines variable selection methods for model-based clustering, as
well as opportunities for improving these methods.
|
[
"Abdelghafour Talibi and Boujem\\^aa Achchab and Rafik Lasri",
"['Abdelghafour Talibi' 'Boujemâa Achchab' 'Rafik Lasri']"
] |
cs.LG cs.AI cs.CL
| null |
1701.08954
| null | null |
http://arxiv.org/pdf/1701.08954v2
|
2017-03-27T18:47:01Z
|
2017-01-31T09:20:17Z
|
CommAI: Evaluating the first steps towards a useful general AI
|
With machine learning successfully applied to new daunting problems almost
every day, general AI starts looking like an attainable goal. However, most
current research focuses instead on important but narrow applications, such as
image classification or machine translation. We believe this to be largely due
to the lack of objective ways to measure progress towards broad machine
intelligence. In order to fill this gap, we propose here a set of concrete
desiderata for general AI, together with a platform to test machines on how
well they satisfy such desiderata, while keeping all further complexities to a
minimum.
|
[
"['Marco Baroni' 'Armand Joulin' 'Allan Jabri' 'Germàn Kruszewski'\n 'Angeliki Lazaridou' 'Klemen Simonic' 'Tomas Mikolov']",
"Marco Baroni, Armand Joulin, Allan Jabri, Germ\\`an Kruszewski,\n Angeliki Lazaridou, Klemen Simonic, Tomas Mikolov"
] |
cs.CV cs.LG stat.ML
| null |
1701.08974
| null | null |
http://arxiv.org/pdf/1701.08974v1
|
2017-01-31T10:17:13Z
|
2017-01-31T10:17:13Z
|
Towards Adversarial Retinal Image Synthesis
|
Synthesizing images of the eye fundus is a challenging task that has been
previously approached by formulating complex models of the anatomy of the eye.
New images can then be generated by sampling a suitable parameter space. In
this work, we propose a method that learns to synthesize eye fundus images
directly from data. For that, we pair true eye fundus images with their
respective vessel trees, by means of a vessel segmentation technique. These
pairs are then used to learn a mapping from a binary vessel tree to a new
retinal image. For this purpose, we use a recent image-to-image translation
technique, based on the idea of adversarial learning. Experimental results show
that the original and the generated images are visually different in terms of
their global appearance, in spite of sharing the same vessel tree.
Additionally, a quantitative quality analysis of the synthetic retinal images
confirms that the produced images retain a high proportion of the true image
set quality.
|
[
"Pedro Costa, Adrian Galdran, Maria In\\^es Meyer, Michael David\n Abr\\`amoff, Meindert Niemeijer, Ana Maria Mendon\\c{c}a, Aur\\'elio Campilho",
"['Pedro Costa' 'Adrian Galdran' 'Maria Inês Meyer'\n 'Michael David Abràmoff' 'Meindert Niemeijer' 'Ana Maria Mendonça'\n 'Aurélio Campilho']"
] |
cs.LG cs.NE
| null |
1701.08978
| null | null |
http://arxiv.org/pdf/1701.08978v2
|
2017-02-01T04:09:31Z
|
2017-01-31T10:28:37Z
|
Mixed Low-precision Deep Learning Inference using Dynamic Fixed Point
|
We propose a cluster-based quantization method to convert pre-trained full
precision weights into ternary weights with minimal impact on the accuracy. In
addition, we also constrain the activations to 8-bits thus enabling sub 8-bit
full integer inference pipeline. Our method uses smaller clusters of N filters
with a common scaling factor to minimize the quantization loss, while also
maximizing the number of ternary operations. We show that with a cluster size
of N=4 on Resnet-101, we can achieve 71.8% TOP-1 accuracy, within 6% of the
best full precision results, while replacing ~85% of all multiplications with
8-bit accumulations. Using the same method with 4-bit weights achieves 76.3%
TOP-1 accuracy, which is within 2% of the full precision result. We also study
the impact of the cluster size on both performance and accuracy: larger cluster
sizes such as N=64 can replace ~98% of the multiplications with ternary
operations but introduce a significant drop in accuracy, which necessitates
fine-tuning the parameters by retraining the network at lower precision. To
address this we have also trained a low-precision Resnet-50 with 8-bit
activations and ternary weights by pre-initializing the network with full
precision weights, achieving 68.9% TOP-1 accuracy within 4 additional epochs.
Our final quantized model can
run on a full 8-bit compute pipeline, with a potential 16x improvement in
performance compared to baseline full-precision models.
|
[
"['Naveen Mellempudi' 'Abhisek Kundu' 'Dipankar Das' 'Dheevatsa Mudigere'\n 'Bharat Kaul']",
"Naveen Mellempudi, Abhisek Kundu, Dipankar Das, Dheevatsa Mudigere,\n and Bharat Kaul"
] |
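A rough NumPy sketch of cluster-wise ternarization in the spirit of the abstract above: filters are grouped into clusters of N, and each cluster shares one scaling factor for weights quantized to {-1, 0, +1}. The threshold rule below (a fraction of the mean absolute weight, as in ternary weight networks) is an assumption; the paper's loss-minimizing choice may differ.

```python
import numpy as np

def ternarize_cluster(w, threshold_ratio=0.7):
    """Quantize a weight block to alpha * {-1, 0, +1} with one shared alpha."""
    t = threshold_ratio * np.mean(np.abs(w))
    mask = np.abs(w) > t
    alpha = np.mean(np.abs(w[mask])) if mask.any() else 0.0
    return alpha, np.sign(w) * mask

def quantize_layer(weights, cluster_size=4):
    """weights: (num_filters, fan_in); one scaling factor per N-filter cluster."""
    out = np.zeros_like(weights)
    for start in range(0, weights.shape[0], cluster_size):
        block = weights[start:start + cluster_size]
        alpha, q = ternarize_cluster(block)
        out[start:start + cluster_size] = alpha * q
    return out

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 9))
print(np.unique(np.round(quantize_layer(W), 6)).size)  # few distinct values
```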
cs.LG cs.AI
| null |
1701.09083
| null | null |
http://arxiv.org/pdf/1701.09083v1
|
2017-01-28T19:28:29Z
|
2017-01-28T19:28:29Z
|
Efficient Rank Aggregation via Lehmer Codes
|
We propose a novel rank aggregation method based on converting permutations
into their corresponding Lehmer codes or other subdiagonal images. Lehmer
codes, also known as inversion vectors, are vector representations of
permutations in which each coordinate can take values not restricted by the
values of other coordinates. This transformation allows for decoupling of the
coordinates and for performing aggregation via simple scalar median or mode
computations. We present simulation results illustrating the performance of
this completely parallelizable approach and analytically prove that both the
mode and median aggregation procedures recover the correct centroid aggregate
with small sample complexity when the permutations are drawn according to the
well-known Mallows models. The proposed Lehmer code approach may also be used
on partial rankings, with similar performance guarantees.
|
[
"Pan Li, Arya Mazumdar and Olgica Milenkovic",
"['Pan Li' 'Arya Mazumdar' 'Olgica Milenkovic']"
] |
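The decoupling trick described in the abstract above is easy to state in code. Below is a minimal sketch (not the authors' implementation) that encodes permutations as Lehmer codes, aggregates coordinate-wise with the median, and decodes back to a permutation.

```python
import numpy as np

def lehmer_encode(perm):
    """c[i] = number of entries after position i that are smaller than perm[i]."""
    n = len(perm)
    return [int(sum(perm[j] < perm[i] for j in range(i + 1, n))) for i in range(n)]

def lehmer_decode(code):
    """Invert the encoding: repeatedly pick the c-th smallest remaining item."""
    items = list(range(len(code)))
    return [items.pop(c) for c in code]

def aggregate(perms):
    codes = np.array([lehmer_encode(p) for p in perms])
    median = np.round(np.median(codes, axis=0)).astype(int)
    # each coordinate is valid independently: 0 <= c[i] <= n-1-i
    return lehmer_decode(list(median))

rankings = [[2, 0, 1, 3], [2, 1, 0, 3], [2, 0, 3, 1]]
print(aggregate(rankings))  # -> [2, 0, 1, 3], a consensus ranking
```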
cs.NE cs.LG
| null |
1701.09175
| null | null |
http://arxiv.org/pdf/1701.09175v8
|
2018-03-04T22:23:18Z
|
2017-01-31T18:41:07Z
|
Skip Connections Eliminate Singularities
|
Skip connections made the training of very deep networks possible and have
become an indispensable component in a variety of neural architectures. A
completely satisfactory explanation for their success remains elusive. Here, we
present a novel explanation for the benefits of skip connections in training
very deep networks. The difficulty of training deep networks is partly due to
the singularities caused by the non-identifiability of the model. Several such
singularities have been identified in previous works: (i) overlap singularities
caused by the permutation symmetry of nodes in a given layer, (ii) elimination
singularities corresponding to the elimination, i.e. consistent deactivation,
of nodes, (iii) singularities generated by the linear dependence of the nodes.
These singularities cause degenerate manifolds in the loss landscape that slow
down learning. We argue that skip connections eliminate these singularities by
breaking the permutation symmetry of nodes, by reducing the possibility of node
elimination and by making the nodes less linearly dependent. Moreover, for
typical initializations, skip connections move the network away from the
"ghosts" of these singularities and sculpt the landscape around them to
alleviate the learning slow-down. These hypotheses are supported by evidence
from simplified models, as well as from experiments with deep networks trained
on real-world datasets.
|
[
"['A. Emin Orhan' 'Xaq Pitkow']",
"A. Emin Orhan, Xaq Pitkow"
] |
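For concreteness, this is what the skip connections analyzed above look like in a minimal NumPy forward pass (a generic residual block, not the paper's experimental architecture): the identity path keeps the mapping non-degenerate even when the learned branch collapses.

```python
import numpy as np

def dense_relu(x, W, b):
    return np.maximum(0.0, x @ W + b)

def plain_block(x, W, b):
    return dense_relu(x, W, b)        # can suffer node elimination/overlap

def residual_block(x, W, b):
    # the identity shortcut breaks permutation symmetry between units and
    # keeps the block informative even if the learned branch outputs ~0
    return x + dense_relu(x, W, b)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
W, b = np.zeros((8, 8)), np.zeros(8)  # "eliminated" branch: all-zero weights
print(np.allclose(plain_block(x, W, b), 0.0))    # True: signal destroyed
print(np.allclose(residual_block(x, W, b), x))   # True: identity preserved
```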
cs.LG stat.ML
| null |
1701.09177
| null | null |
http://arxiv.org/pdf/1701.09177v5
|
2017-09-21T15:47:34Z
|
2017-01-31T18:42:19Z
|
A Dirichlet Mixture Model of Hawkes Processes for Event Sequence
Clustering
|
We propose an effective method to solve the event sequence clustering
problems based on a novel Dirichlet mixture model of a special but significant
type of point process, the Hawkes process. In this model, each event sequence
belonging to a cluster is generated via the same Hawkes process with specific
parameters, and different clusters correspond to different Hawkes processes.
The prior distribution of the Hawkes processes is controlled via a Dirichlet
distribution. We learn the model via a maximum likelihood estimator (MLE) and
propose an effective variational Bayesian inference algorithm. We specifically
analyze the resulting EM-type algorithm in the context of inner-outer
iterations and discuss several inner iteration allocation strategies. The
identifiability of our model, the convergence of our learning method, and its
sample complexity are analyzed in both theoretical and empirical ways, which
demonstrate the superiority of our method to other competitors. The proposed
method learns the number of clusters automatically and is robust to model
misspecification. Experiments on both synthetic and real-world data show that
our method can learn diverse triggering patterns hidden in asynchronous event
sequences and achieve encouraging performance on clustering purity and
consistency.
|
[
"['Hongteng Xu' 'Hongyuan Zha']",
"Hongteng Xu and Hongyuan Zha"
] |
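As background for the model above, the following sketch computes the log-likelihood of one event sequence under an exponential-kernel Hawkes process, the quantity an MLE or variational scheme would optimize inside each mixture component; the exponential kernel and the parameter values are illustrative assumptions.

```python
import numpy as np

def hawkes_loglik(times, T, mu, alpha, beta):
    """Log-likelihood of events `times` in [0, T] under
    lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))."""
    times = np.asarray(times, dtype=float)
    ll = 0.0
    for i, t in enumerate(times):
        excitation = np.sum(np.exp(-beta * (t - times[:i])))
        ll += np.log(mu + alpha * excitation)
    # subtract the compensator, i.e. the integral of lambda over [0, T]
    ll -= mu * T + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - times)))
    return ll

print(hawkes_loglik([0.5, 0.6, 2.3], T=4.0, mu=0.5, alpha=0.8, beta=1.0))
```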
cs.LG math.ST stat.ML stat.TH
| null |
1702.00001
| null | null |
http://arxiv.org/pdf/1702.00001v3
|
2017-11-07T07:06:06Z
|
2017-01-31T07:45:32Z
|
Learning the distribution with largest mean: two bandit frameworks
|
Over the past few years, the multi-armed bandit model has become increasingly
popular in the machine learning community, partly because of applications
including online content optimization. This paper reviews two different
sequential learning tasks that have been considered in the bandit literature;
they can be formulated as (sequentially) learning which distribution has the
highest mean among a set of distributions, with some constraints on the
learning process. For both of them (regret minimization and best arm
identification) we present recent, asymptotically optimal algorithms. We
compare the behaviors of the sampling rule of each algorithm as well as the
complexity terms associated to each problem.
|
[
"Emilie Kaufmann (SEQUEL, CRIStAL, CNRS), Aur\\'elien Garivier (IMT)",
"['Emilie Kaufmann' 'Aurélien Garivier']"
] |
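To ground the regret-minimization task reviewed above, here is a classic UCB1 sketch on Gaussian arms; the paper surveys asymptotically optimal algorithms (e.g. KL-UCB and Track-and-Stop style rules), so UCB1 is used here only as a simple, well-known stand-in.

```python
import numpy as np

def ucb1(means, horizon=5000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(means)
    counts, sums = np.zeros(k), np.zeros(k)
    for t in range(horizon):
        if t < k:
            arm = t                                  # play each arm once
        else:
            index = sums / counts + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(index))
        r = rng.normal(means[arm], 1.0)              # Gaussian reward
        counts[arm] += 1
        sums[arm] += r
    return counts

print(ucb1([0.2, 0.5, 0.55]))  # pull counts concentrate on the best arm
```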
cs.AI cs.LG physics.chem-ph
| null |
1702.00020
| null | null |
http://arxiv.org/pdf/1702.00020v1
|
2017-01-31T19:07:43Z
|
2017-01-31T19:07:43Z
|
Towards "AlphaChem": Chemical Synthesis Planning with Tree Search and
Deep Neural Network Policies
|
Retrosynthesis is a technique to plan the chemical synthesis of organic
molecules, for example drugs, agro- and fine chemicals. In retrosynthesis, a
search tree is built by analysing molecules recursively and dissecting them
into simpler molecular building blocks until one obtains a set of known
building blocks. The search space is intractably large, and it is difficult to
determine the value of retrosynthetic positions. Here, we propose to model
retrosynthesis as a Markov Decision Process. In combination with a Deep Neural
Network policy learned from essentially the complete published knowledge of
chemistry, Monte Carlo Tree Search (MCTS) can be used to evaluate positions. In
exploratory studies, we demonstrate that MCTS with neural network policies
outperforms the traditionally used best-first search with hand-coded
heuristics.
|
[
"Marwin Segler, Mike Preu{\\ss}, Mark P. Waller"
] |
null | null |
1702.00020
| null | null |
http://arxiv.org/pdf/1702.00020v1
|
2017-01-31T19:07:43Z
|
2017-01-31T19:07:43Z
|
Towards "AlphaChem": Chemical Synthesis Planning with Tree Search and
Deep Neural Network Policies
|
Retrosynthesis is a technique to plan the chemical synthesis of organic molecules, for example drugs, agro- and fine chemicals. In retrosynthesis, a search tree is built by analysing molecules recursively and dissecting them into simpler molecular building blocks until one obtains a set of known building blocks. The search space is intractably large, and it is difficult to determine the value of retrosynthetic positions. Here, we propose to model retrosynthesis as a Markov Decision Process. In combination with a Deep Neural Network policy learned from essentially the complete published knowledge of chemistry, Monte Carlo Tree Search (MCTS) can be used to evaluate positions. In exploratory studies, we demonstrate that MCTS with neural network policies outperforms the traditionally used best-first search with hand-coded heuristics.
|
[
"['Marwin Segler' 'Mike Preuß' 'Mark P. Waller']"
] |
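The position-evaluation loop sketched in the abstract above combines MCTS with a learned policy. Below is a tiny illustration of a PUCT-style selection score of the kind used in AlphaGo-like searches; whether the paper uses exactly this formula is not stated here, so treat it as an assumed stand-in.

```python
import math

def puct_score(total_value, visits, parent_visits, prior, c_puct=1.5):
    """Select the child maximizing exploitation (mean value) plus an
    exploration bonus proportional to the neural-network policy prior."""
    q = total_value / visits if visits else 0.0
    u = c_puct * prior * math.sqrt(parent_visits) / (1 + visits)
    return q + u

# Two candidate disconnections of a molecule, scored by the policy network
children = [dict(total_value=3.0, visits=10, prior=0.7),
            dict(total_value=0.9, visits=2, prior=0.3)]
best = max(children, key=lambda c: puct_score(c["total_value"], c["visits"],
                                              parent_visits=12, prior=c["prior"]))
print(best)
```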
cs.IT cs.LG math.IT stat.ML
| null |
1702.00027
| null | null |
http://arxiv.org/pdf/1702.00027v1
|
2017-01-31T19:25:44Z
|
2017-01-31T19:25:44Z
|
Representation of big data by dimension reduction
|
Suppose the data consist of a set $S$ of points $x_j, 1 \leq j \leq J$,
distributed in a bounded domain $D \subset R^N$, where $N$ and $J$ are large
numbers. In this paper an algorithm is proposed for checking whether there
exists a manifold $\mathbb{M}$ of low dimension near which many of the points
of $S$ lie and finding such $\mathbb{M}$ if it exists. There are many dimension
reduction algorithms, both linear and non-linear. Our algorithm is simple to
implement and has some advantages compared with the known algorithms. If there
is a manifold of low dimension near which most of the data points lie, the
proposed algorithm will find it. Some numerical results are presented
illustrating the algorithm and analyzing its performance compared to the
classical PCA (principal component analysis) and Isomap.
|
[
"A.G.Ramm, C. Van",
"['A. G. Ramm' 'C. Van']"
] |
cs.LG cs.NE
| null |
1702.00071
| null | null |
http://arxiv.org/pdf/1702.00071v4
|
2017-10-12T17:18:51Z
|
2017-01-31T22:14:59Z
|
On orthogonality and learning recurrent networks with long term
dependencies
|
It is well known that it is challenging to train deep neural networks and
recurrent neural networks for tasks that exhibit long term dependencies. The
vanishing or exploding gradient problem is a well known issue associated with
these challenges. One approach to addressing vanishing and exploding gradients
is to use either soft or hard constraints on weight matrices so as to encourage
or enforce orthogonality. Orthogonal matrices preserve gradient norm during
backpropagation and may therefore be a desirable property. This paper explores
issues with optimization convergence, speed and gradient stability when
encouraging or enforcing orthogonality. To perform this analysis, we propose a
weight matrix factorization and parameterization strategy through which we can
bound matrix norms and therein control the degree of expansivity induced during
backpropagation. We find that hard constraints on orthogonality can negatively
affect the speed of convergence and model performance.
|
[
"Eugene Vorontsov, Chiheb Trabelsi, Samuel Kadoury, Chris Pal",
"['Eugene Vorontsov' 'Chiheb Trabelsi' 'Samuel Kadoury' 'Chris Pal']"
] |
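As a companion to the abstract above, here is a minimal NumPy example of a hard orthogonality constraint at initialization (QR of a Gaussian matrix, sign-corrected to sample uniformly); the paper's factorized parameterization for controlling expansivity during training is more general than this sketch.

```python
import numpy as np

def orthogonal_init(n, seed=0):
    """Haar-uniform random orthogonal matrix via sign-corrected QR."""
    g = np.random.default_rng(seed).standard_normal((n, n))
    q, r = np.linalg.qr(g)
    return q * np.sign(np.diag(r))  # fix column signs for uniformity

W = orthogonal_init(64)
v = np.random.default_rng(1).standard_normal(64)
print(np.allclose(W.T @ W, np.eye(64)))           # True: exactly orthogonal
print(np.linalg.norm(W @ v) / np.linalg.norm(v))  # 1.0: norm preserved
```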
cs.CV cs.LG stat.ML
|
10.1109/TNNLS.2018.2884700
|
1702.00156
| null | null |
http://arxiv.org/abs/1702.00156v3
|
2018-12-05T01:59:34Z
|
2017-02-01T08:16:03Z
|
Stochastic Graphlet Embedding
|
Graph-based methods are known to be successful in many machine learning and
pattern classification tasks. These methods consider semi-structured data as
graphs where nodes correspond to primitives (parts, interest points, segments,
etc.) and edges characterize the relationships between these primitives.
However, these non-vectorial graph data cannot be straightforwardly plugged
into off-the-shelf machine learning algorithms without a preliminary step of --
explicit/implicit -- graph vectorization and embedding. This embedding process
should be resilient to intra-class graph variations while being highly
discriminant. In this paper, we propose a novel high-order stochastic graphlet
embedding (SGE) that maps graphs into vector spaces. Our main contribution
includes a new stochastic search procedure that efficiently parses a given
graph and extracts/samples graphlets of arbitrarily high order. We consider these
graphlets, with increasing orders, to model local primitives as well as their
increasingly complex interactions. In order to build our graph representation,
we measure the distribution of these graphlets into a given graph, using
particular hash functions that efficiently assign sampled graphlets into
isomorphic sets with a very low probability of collision. When combined with
maximum margin classifiers, these graphlet-based representations have positive
impact on the performance of pattern comparison and recognition as corroborated
through extensive experiments using standard benchmark databases.
|
[
"['Anjan Dutta' 'Hichem Sahbi']",
"Anjan Dutta and Hichem Sahbi"
] |
cs.LG stat.ML
|
10.1109/ICDAR.2017.148
|
1702.00177
| null | null |
http://arxiv.org/abs/1702.00177v1
|
2017-02-01T09:41:52Z
|
2017-02-01T09:41:52Z
|
PCA-Initialized Deep Neural Networks Applied To Document Image Analysis
|
In this paper, we present a novel approach for initializing deep neural
networks, namely by turning PCA into neural layers. Usually, the initialization
of the weights of a deep neural network is done in one of the three following
ways: 1) with random values, 2) layer-wise, usually as Deep Belief Network or
as auto-encoder, and 3) re-use of layers from another network (transfer
learning). Therefore, typically, many training epochs are needed before
meaningful weights are learned, or a rather similar dataset is required for
seeding a fine-tuning of transfer learning. In this paper, we describe how to
turn a PCA into an auto-encoder, by generating an encoder layer of the PCA
parameters and furthermore adding a decoding layer. We analyze the
initialization technique on real documents. First, we show that a PCA-based
initialization is quick and leads to a very stable initialization. Furthermore,
for the task of layout analysis we investigate the effectiveness of PCA-based
initialization and show that it outperforms state-of-the-art random weight
initialization methods.
|
[
"['Mathias Seuret' 'Michele Alberti' 'Rolf Ingold' 'Marcus Liwicki']",
"Mathias Seuret, Michele Alberti, Rolf Ingold, Marcus Liwicki"
] |
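A compact NumPy sketch of the idea described above: the top-k principal directions become the encoder weights of a linear auto-encoder whose decoder is their transpose, with the data mean folded into the biases. This reproduces the PCA projection exactly; the paper then uses such layers to seed deeper, nonlinear networks.

```python
import numpy as np

def pca_autoencoder_init(X, k):
    """Return (W_enc, b_enc), (W_dec, b_dec) implementing k-component PCA."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W_enc, b_enc = Vt[:k].T, -mu @ Vt[:k].T  # encode: h = x @ W_enc + b_enc
    W_dec, b_dec = Vt[:k], mu                # decode: x~ = h @ W_dec + b_dec
    return (W_enc, b_enc), (W_dec, b_dec)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 10))
(W_enc, b_enc), (W_dec, b_dec) = pca_autoencoder_init(X, k=4)
h = X @ W_enc + b_enc                 # initialized encoder layer
X_hat = h @ W_dec + b_dec             # initialized decoder layer
print(np.mean((X - X_hat) ** 2))      # error from the discarded components
```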
cs.SD cs.LG
|
10.17743/aesconf.2017.978-1-942220-15-2
|
1702.00178
| null | null |
http://arxiv.org/abs/1702.00178v2
|
2017-03-31T11:24:42Z
|
2017-02-01T09:44:44Z
|
On the Futility of Learning Complex Frame-Level Language Models for
Chord Recognition
|
Chord recognition systems use temporal models to post-process frame-wise
chord predictions from acoustic models. Traditionally, first-order models such
as Hidden Markov Models were used for this task, with recent works suggesting
to apply Recurrent Neural Networks instead. Due to their ability to learn
longer-term dependencies, these models are supposed to learn and to apply
musical knowledge, instead of just smoothing the output of the acoustic model.
In this paper, we argue that learning complex temporal models at the level of
audio frames is futile on principle, and that non-Markovian models do not
perform better than their first-order counterparts. We support our argument
through three experiments on the McGill Billboard dataset. The first two show
1) that when learning complex temporal models at the frame level, improvements
in chord sequence modelling are marginal; and 2) that these improvements do not
translate when applied within a full chord recognition system. The third, still
rather preliminary experiment gives first indications that the use of complex
sequential models for chord prediction at higher temporal levels might be more
promising.
|
[
"Filip Korzeniowski and Gerhard Widmer",
"['Filip Korzeniowski' 'Gerhard Widmer']"
] |
cs.DS cs.LG
| null |
1702.00196
| null | null |
http://arxiv.org/pdf/1702.00196v1
|
2017-02-01T10:30:32Z
|
2017-02-01T10:30:32Z
|
Communication-Optimal Distributed Clustering
|
Clustering large datasets is a fundamental problem with a number of
applications in machine learning. Data is often collected on different sites
and clustering needs to be performed in a distributed manner with low
communication. We would like the quality of the clustering in the distributed
setting to match that in the centralized setting for which all the data resides
on a single site. In this work, we study both graph and geometric clustering
problems in two distributed models: (1) a point-to-point model, and (2) a model
with a broadcast channel. We give protocols in both models which we show are
nearly optimal by proving almost matching communication lower bounds. Our work
highlights the surprising power of a broadcast channel for clustering problems;
roughly speaking, to spectrally cluster $n$ points or $n$ vertices in a graph
distributed across $s$ servers, for a worst-case partitioning the communication
complexity in a point-to-point model is $n \cdot s$, while in the broadcast
model it is $n + s$. A similar phenomenon holds for the geometric setting as
well. We implement our algorithms and demonstrate this phenomenon on real-life
datasets, showing that our algorithms are also very efficient in practice.
|
[
"['Jiecao Chen' 'He Sun' 'David P. Woodruff' 'Qin Zhang']",
"Jiecao Chen and He Sun and David P. Woodruff and Qin Zhang"
] |
physics.optics cs.LG
| null |
1702.00260
| null | null |
http://arxiv.org/pdf/1702.00260v1
|
2017-01-31T10:48:39Z
|
2017-01-31T10:48:39Z
|
Machine learning based compact photonic structure design for strong
light confinement
|
We present a novel approach based on machine learning for designing photonic
structures. In particular, we focus on strong light confinement that allows the
design of an efficient free-space-to-waveguide coupler made of a Si slab lying
on top of a silica substrate. The learning algorithm is implemented using
bitwise square Si cells, and the whole optimized device has a footprint of
$\boldsymbol{2 \, \mu m \times 1\, \mu m}$, which is the smallest size ever
achieved numerically. To find the effect of the Si slab thickness on the
sub-wavelength focusing and strong coupling characteristics of the optimized
photonic structure, we carried out three-dimensional time-domain numerical
calculations. The corresponding optimum values of full width at half maximum
and coupling efficiency were calculated as $\boldsymbol{0.158 \lambda}$ and
$\boldsymbol{-1.87\,dB}$ with a slab thickness of $\boldsymbol{280nm}$.
Compared to their conventional counterparts, the optimized lens and coupler
designs are easy to fabricate via optical lithography techniques, quite
compact, and can operate at telecommunication wavelengths. The outcomes of the
presented study show that machine learning can be beneficial for efficient
photonic designs in various potential applications such as
polarization-division, beam manipulation and optical interconnects.
|
[
"Mirbek Turduev, \\c{C}a\\u{g}r{\\i} Latifo\\u{g}lu, \\.Ibrahim Halil Giden,\n Y. Sinan Hanay",
"['Mirbek Turduev' 'Çağrı Latifoğlu' 'İbrahim Halil Giden' 'Y. Sinan Hanay']"
] |
stat.ML cs.LG math.OC stat.CO
| null |
1702.00317
| null | null |
http://arxiv.org/pdf/1702.00317v2
|
2017-02-07T22:13:25Z
|
2017-02-01T15:33:01Z
|
On SGD's Failure in Practice: Characterizing and Overcoming Stalling
|
Stochastic Gradient Descent (SGD) is widely used in machine learning problems
to efficiently perform empirical risk minimization, yet, in practice, SGD is
known to stall before reaching the actual minimizer of the empirical risk. SGD
stalling has often been attributed to its sensitivity to the conditioning of
the problem; however, as we demonstrate, SGD will stall even when applied to a
simple linear regression problem with unity condition number for standard
learning rates. Thus, in this work, we numerically demonstrate and
mathematically argue that stalling is a crippling and generic limitation of SGD
and its variants in practice. Once we have established the problem of stalling,
we generalize an existing framework for hedging against its effects, which (1)
deters SGD and its variants from stalling, (2) still provides convergence
guarantees, and (3) makes SGD and its variants more practical methods for
minimization.
|
[
"Vivak Patel",
"['Vivak Patel']"
] |
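The stalling phenomenon described above is easy to reproduce. Below is a
minimal simulation, under my own parameter choices, of the abstract's
linear-regression-with-unity-condition-number example: constant-step SGD's
final iterate fluctuates around the empirical risk minimizer at a noise floor
set by the learning rate, rather than converging to it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta_star = 10_000, 2.0
x = np.ones(n)                       # unity condition number by construction
y = theta_star * x + rng.normal(0, 1, n)
theta_hat = y.mean()                 # the empirical risk minimizer

def sgd(lr, steps=20_000, theta=0.0):
    for _ in range(steps):
        i = rng.integers(n)
        theta -= lr * (theta * x[i] - y[i]) * x[i]   # grad of 0.5*(theta*x_i - y_i)^2
    return theta

for lr in (0.1, 0.01):
    gap = [sgd(lr) - theta_hat for _ in range(20)]
    # iterates hover around theta_hat at a noise floor that shrinks with the
    # learning rate, instead of converging to theta_hat itself: stalling
    print(f"lr={lr}: stddev of final iterate = {np.std(gap):.4f}")
```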
astro-ph.IM astro-ph.GA cs.LG stat.ML
|
10.1093/mnrasl/slx008
|
1702.00403
| null | null |
http://arxiv.org/abs/1702.00403v1
|
2017-02-01T19:00:02Z
|
2017-02-01T19:00:02Z
|
Generative Adversarial Networks recover features in astrophysical images
of galaxies beyond the deconvolution limit
|
Observations of astrophysical objects such as galaxies are limited by various
sources of random and systematic noise from the sky background, the optical
system of the telescope and the detector used to record the data. Conventional
deconvolution techniques are limited in their ability to recover features in
imaging data by the Shannon-Nyquist sampling theorem. Here we train a
generative adversarial network (GAN) on a sample of $4,550$ images of nearby
galaxies at $0.01<z<0.02$ from the Sloan Digital Sky Survey and conduct
$10\times$ cross validation to evaluate the results. We present a method using
a GAN trained on galaxy images that can recover features from artificially
degraded images with worse seeing and higher noise than the original with a
performance which far exceeds simple deconvolution. The ability to better
recover detailed features such as galaxy morphology from low-signal-to-noise
and low angular resolution imaging data significantly increases our ability to
study existing data sets of astrophysical objects as well as future
observations with observatories such as the Large Synoptic Survey Telescope (LSST)
and the Hubble and James Webb space telescopes.
|
[
"['Kevin Schawinski' 'Ce Zhang' 'Hantian Zhang' 'Lucas Fowler'\n 'Gokula Krishnan Santhanam']",
"Kevin Schawinski, Ce Zhang, Hantian Zhang, Lucas Fowler and Gokula\n Krishnan Santhanam"
] |
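The training pairs described above come from artificial degradation: each
clean galaxy image is blurred (worse seeing) and noised, and the GAN learns
the inverse map. A minimal sketch of the degradation half, with a Gaussian
blur standing in for a seeing/PSF convolution and all parameter values my own:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(img, seeing_sigma=2.5, noise_sigma=0.05, rng=None):
    """Simulate worse seeing (Gaussian PSF blur) plus higher noise,
    producing the degraded input of a (degraded, clean) training pair."""
    rng = rng or np.random.default_rng(0)
    blurred = gaussian_filter(img, sigma=seeing_sigma)
    return blurred + rng.normal(0.0, noise_sigma, img.shape)

clean = np.random.rand(64, 64)       # stand-in for a galaxy cutout
pair = (degrade(clean), clean)       # (input, target) pair for GAN training
```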
cs.DS cs.LG physics.data-an
| null |
1702.00458
| null | null |
http://arxiv.org/pdf/1702.00458v5
|
2018-12-04T23:04:49Z
|
2017-02-01T21:25:13Z
|
Convergence Results for Neural Networks via Electrodynamics
|
We study whether a depth two neural network can learn another depth two
network using gradient descent. Assuming a linear output node, we show that the
question of whether gradient descent converges to the target function is
equivalent to the following question in electrodynamics: Given $k$ fixed
protons in $\mathbb{R}^d,$ and $k$ electrons, each moving due to the attractive
force from the protons and repulsive force from the remaining electrons,
whether at equilibrium all the electrons will be matched up with the protons,
up to a permutation. Under the standard electrical force, this follows from the
classic Earnshaw's theorem. In our setting, the force is determined by the
activation function and the input distribution. Building on this equivalence,
we prove the existence of an activation function such that gradient descent
learns at least one of the hidden nodes in the target network. Iterating, we
show that gradient descent can be used to learn the entire network one node at
a time.
|
[
"['Rina Panigrahy' 'Sushant Sachdeva' 'Qiuyi Zhang']",
"Rina Panigrahy, Sushant Sachdeva, Qiuyi Zhang"
] |
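The electron/proton picture above can be simulated directly. The following toy
2D simulation is my own construction, not the paper's proof: k fixed protons
attract k mobile electrons, which also repel each other; under overdamped
(gradient-flow) dynamics with a softened Coulomb force, the electrons
typically, though not for every seed, settle onto distinct protons.

```python
import numpy as np

rng = np.random.default_rng(1)
k, eps = 4, 1e-2                              # eps softens the 1/r^2 singularity
protons = rng.normal(size=(k, 2))             # fixed charges
electrons = rng.normal(size=(k, 2))           # mobile charges

def net_force(electrons):
    """Coulomb-style forces: each electron is attracted by every proton and
    repelled by every other electron (softened to keep the simulation stable)."""
    f = np.zeros_like(electrons)
    for i in range(k):
        for p in protons:                     # attraction toward protons
            r = p - electrons[i]
            f[i] += r / (np.linalg.norm(r) ** 3 + eps)
        for j in range(k):                    # repulsion from other electrons
            if j != i:
                r = electrons[i] - electrons[j]
                f[i] += r / (np.linalg.norm(r) ** 3 + eps)
    return f

for _ in range(20_000):                       # overdamped dynamics = gradient flow
    electrons += 2e-3 * net_force(electrons)

# if the matching phenomenon holds, each electron ends near a distinct proton
dist = np.linalg.norm(electrons[:, None] - protons[None, :], axis=-1)
print(np.round(dist.min(axis=1), 2))
```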
cs.CV cs.DC cs.LG cs.PF
| null |
1702.00505
| null | null |
http://arxiv.org/pdf/1702.00505v2
|
2017-03-21T21:58:41Z
|
2017-02-02T00:01:46Z
|
Algorithmic Performance-Accuracy Trade-off in 3D Vision Applications
Using HyperMapper
|
In this paper we investigate an emerging application, 3D scene understanding,
likely to be significant in the mobile space in the near future. The goal of
this exploration is to reduce execution time while meeting our quality of
result objectives. In previous work we showed for the first time that it is
possible to map this application to power constrained embedded systems,
highlighting that decision choices made at the algorithmic design-level have
the most impact.
As the algorithmic design space is too large to be exhaustively evaluated, we
use a previously introduced multi-objective Random Forest Active Learning
prediction framework, dubbed HyperMapper, to find good algorithmic designs. We
show that HyperMapper generalizes to a recent cutting-edge 3D scene
understanding algorithm and to a modern GPU-based computer architecture.
HyperMapper is able to automatically beat an expert human hand-tuning the
algorithmic parameters of the class of Computer Vision applications considered
in this paper. In addition, using a 3D scene understanding Android app for
crowd-sourcing, we show that the Pareto front obtained on an embedded system
can be used to accelerate the same application on all 83 crowd-sourced
smart-phones and tablets, with speedups ranging from 2 to over 12.
|
[
"Luigi Nardi, Bruno Bodin, Sajad Saeedi, Emanuele Vespa, Andrew J.\n Davison, Paul H. J. Kelly",
"['Luigi Nardi' 'Bruno Bodin' 'Sajad Saeedi' 'Emanuele Vespa'\n 'Andrew J. Davison' 'Paul H. J. Kelly']"
] |
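The random-forest active-learning loop named in the abstract reduces to:
sample designs, measure them, fit a surrogate, and use the surrogate to pick
the next designs to measure. A single-objective sketch with scikit-learn,
where the toy objective `measure` and every name are placeholders for the real
3D-vision pipeline and its configuration knobs (HyperMapper itself is
multi-objective and more elaborate):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def measure(cfg):
    """Stand-in for running the 3D-vision pipeline with a configuration
    (e.g. volume resolution, ICP iterations) and timing it."""
    return (cfg ** 2).sum() + rng.normal(0, 0.1)

# seed the surrogate with random designs
X = rng.uniform(-1, 1, size=(32, 4))
y = np.array([measure(c) for c in X])

for _ in range(5):                    # active-learning rounds
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    cand = rng.uniform(-1, 1, size=(2048, 4))
    best = cand[np.argsort(model.predict(cand))[:8]]  # most promising designs
    X = np.vstack([X, best])                          # evaluate them for real
    y = np.append(y, [measure(c) for c in best])      # and refit next round

print("best measured design:", X[np.argmin(y)])
```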
cs.CV cs.LG
| null |
1702.00509
| null | null |
http://arxiv.org/pdf/1702.00509v1
|
2017-02-02T00:37:22Z
|
2017-02-02T00:37:22Z
|
Segmentation of optic disc, fovea and retinal vasculature using a single
convolutional neural network
|
We have developed and trained a convolutional neural network to automatically
and simultaneously segment optic disc, fovea and blood vessels. Fundus images
were normalised before segmentation was performed to enforce consistency in
background lighting and contrast. For every effective point in the fundus
image, our algorithm extracted three channels of input from the neighbourhood
of the point and forwarded the response across the 7-layer network. On
average, our segmentation achieved an accuracy of 92.68 percent on the testing
set from the DRIVE database.
|
[
"['Jen Hong Tan' 'U. Rajendra Acharya' 'Sulatha V. Bhandary'\n 'Kuang Chua Chua' 'Sobha Sivaprasad']",
"Jen Hong Tan, U. Rajendra Acharya, Sulatha V. Bhandary, Kuang Chua\n Chua, Sobha Sivaprasad"
] |
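The per-point input construction described above, a neighbourhood patch taken
around every effective pixel, can be sketched as follows; the 25x25 patch
size and zero-padding at the borders are my assumptions, not the paper's
stated values.

```python
import numpy as np

def extract_patch(img, row, col, size=25):
    """Neighbourhood of one 'effective point': a size x size patch per channel,
    zero-padded at the image borders (patch size here is an assumption)."""
    half = size // 2
    padded = np.pad(img, ((half, half), (half, half), (0, 0)), mode="constant")
    return padded[row:row + size, col:col + size, :]

fundus = np.random.rand(584, 565, 3)   # stand-in for a normalised fundus image
x = extract_patch(fundus, 100, 200)    # shape (25, 25, 3): three input channels
# each patch is classified by the CNN into
# {optic disc, fovea, vessel, background} for its centre pixel
```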
stat.ML cs.LG
| null |
1702.00518
| null | null |
http://arxiv.org/pdf/1702.00518v1
|
2017-02-02T01:22:18Z
|
2017-02-02T01:22:18Z
|
Recovering True Classifier Performance in Positive-Unlabeled Learning
|
A common approach in positive-unlabeled learning is to train a classification
model between labeled and unlabeled data. This strategy is in fact known to
give an optimal classifier under mild conditions; however, it results in biased
empirical estimates of the classifier performance. In this work, we show that
the typically used performance measures such as the receiver operating
characteristic curve, or the precision-recall curve obtained on such data can
be corrected with the knowledge of class priors; i.e., the proportions of the
positive and negative examples in the unlabeled data. We extend the results to
a noisy setting where some of the examples labeled positive are in fact
negative and show that the correction also requires the knowledge of the
proportion of noisy examples in the labeled positives. Using state-of-the-art
algorithms to estimate the positive class prior and the proportion of noise, we
experimentally evaluate two correction approaches and demonstrate their
efficacy on real-life data.
|
[
"Shantanu Jain, Martha White, Predrag Radivojac",
"['Shantanu Jain' 'Martha White' 'Predrag Radivojac']"
] |
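For the noise-free case, the correction the abstract describes follows from a
standard mixture identity: if the unlabeled set contains a fraction alpha of
positives, the "false positive" rate measured on unlabeled data mixes in true
positives and can be unmixed given alpha, while the TPR measured on labeled
positives is already unbiased. The sketch below encodes that identity; it is
my reading of the abstract, not the authors' code.

```python
import numpy as np

def corrected_roc(tpr, fpr_unlabeled, alpha):
    """Recover the true ROC from PU estimates (noise-free labeled positives).

    On unlabeled data, a mixture with positive-class prior `alpha`,
        fpr_unlabeled = alpha * tpr + (1 - alpha) * fpr_true,
    so the true FPR is recovered by unmixing; TPR needs no correction.
    """
    fpr_true = (fpr_unlabeled - alpha * tpr) / (1.0 - alpha)
    return tpr, np.clip(fpr_true, 0.0, 1.0)

tpr = np.array([0.2, 0.5, 0.8, 0.95])      # measured on labeled positives
fpr_u = np.array([0.08, 0.2, 0.4, 0.62])   # measured on unlabeled data
print(corrected_roc(tpr, fpr_u, alpha=0.3))
```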
cs.CV cs.CL cs.LG
| null |
1702.00523
| null | null |
http://arxiv.org/pdf/1702.00523v1
|
2017-02-02T01:56:22Z
|
2017-02-02T01:56:22Z
|
Deep Learning the Indus Script
|
Standardized corpora of undeciphered scripts, a necessary starting point for
computational epigraphy, require laborious human effort for their preparation
from raw archaeological records. Automating this process through machine
learning algorithms can be of significant aid to epigraphical research. Here,
we take the first steps in this direction and present a deep learning pipeline
that takes as input images of the undeciphered Indus script, as found in
archaeological artifacts, and returns as output a string of graphemes, suitable
for inclusion in a standard corpus. The image is first decomposed into regions
using Selective Search and these regions are classified as containing textual
and/or graphical information using a convolutional neural network. Regions
classified as potentially containing text are hierarchically merged and trimmed
to remove non-textual information. The remaining textual part of the image is
segmented using standard image processing techniques to isolate individual
graphemes. This set is finally passed to a second convolutional neural network
to classify the graphemes, based on a standard corpus. The classifier can
identify the presence or absence of the most frequent Indus grapheme, the "jar"
sign, with an accuracy of 92%. Our results demonstrate the great potential of
deep learning approaches in computational epigraphy and, more generally, in the
digital humanities.
|
[
"['Satish Palaniappan' 'Ronojoy Adhikari']",
"Satish Palaniappan and Ronojoy Adhikari"
] |
cs.LG cs.IT math.IT
| null |
1702.00610
| null | null |
http://arxiv.org/pdf/1702.00610v1
|
2017-02-02T10:37:55Z
|
2017-02-02T10:37:55Z
|
Optimal Schemes for Discrete Distribution Estimation under Locally
Differential Privacy
|
We consider the minimax estimation problem of a discrete distribution with
support size $k$ under privacy constraints. A privatization scheme is applied
to each raw sample independently, and we need to estimate the distribution of
the raw samples from the privatized samples. A positive number $\epsilon$
measures the privacy level of a privatization scheme. For a given $\epsilon,$
we consider the problem of constructing optimal privatization schemes with
$\epsilon$-privacy level, i.e., schemes that minimize the expected estimation
loss for the worst-case distribution. Two schemes in the literature provide
order optimal performance in the high privacy regime where $\epsilon$ is very
close to $0,$ and in the low privacy regime where $e^{\epsilon}\approx k,$
respectively.
  In this paper, we propose a new family of schemes which substantially improve
the performance of the existing schemes in the medium privacy regime when $1\ll
e^{\epsilon} \ll k.$ More concretely, we prove that when $3.8 < \epsilon
<\ln(k/9) ,$ our schemes reduce the expected estimation loss by $50\%$ under
$\ell_2^2$ metric and by $30\%$ under $\ell_1$ metric over the existing
schemes. We also prove a lower bound for the region $e^{\epsilon} \ll k,$ which
implies that our schemes are order optimal in this regime.
|
[
"Min Ye and Alexander Barg",
"['Min Ye' 'Alexander Barg']"
] |
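To make the setup concrete, here is the classical low-privacy baseline the
abstract compares against, k-ary randomized response, together with its
unbiased distribution estimator. This illustrates epsilon-LDP privatization
and estimation only; it is not the paper's new family of schemes.

```python
import numpy as np

def krr_privatize(x, k, eps, rng):
    """k-ary randomized response: report the true value with probability
    e^eps/(e^eps+k-1), otherwise a uniformly random other value (eps-LDP)."""
    p_true = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.random() < p_true:
        return x
    return rng.choice([v for v in range(k) if v != x])

def estimate(reports, k, eps):
    """Unbiased estimate: E[freq_j] = q + (p - q) * pi_j, so unmix linearly."""
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    q = 1.0 / (np.exp(eps) + k - 1)
    freq = np.bincount(reports, minlength=k) / len(reports)
    return (freq - q) / (p - q)

rng = np.random.default_rng(0)
k, eps, n = 6, 1.0, 100_000
truth = np.array([0.4, 0.2, 0.15, 0.1, 0.1, 0.05])
samples = rng.choice(k, size=n, p=truth)
reports = np.array([krr_privatize(x, k, eps, rng) for x in samples])
print(np.round(estimate(reports, k, eps), 3))
```

With eps = 1 and k = 6 this sits in the high-to-medium privacy range, exactly
the regime where the paper claims its new schemes beat such baselines.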
math.OC cs.LG
| null |
1702.00709
| null | null |
http://arxiv.org/pdf/1702.00709v2
|
2017-03-27T19:16:55Z
|
2017-02-02T15:13:06Z
|
IQN: An Incremental Quasi-Newton Method with Local Superlinear
Convergence Rate
|
The problem of minimizing an objective that can be written as the sum of a
set of $n$ smooth and strongly convex functions is considered. The Incremental
Quasi-Newton (IQN) method proposed here belongs to the family of stochastic and
incremental methods that have a cost per iteration independent of $n$. IQN
iterations are a stochastic version of BFGS iterations that use memory to
reduce the variance of stochastic approximations. The convergence properties of
IQN bridge a gap between deterministic and stochastic quasi-Newton methods.
Deterministic quasi-Newton methods exploit the possibility of approximating the
Newton step using objective gradient differences. They are appealing because
they have a smaller computational cost per iteration relative to Newton's
method and achieve a superlinear convergence rate under customary regularity
assumptions. Stochastic quasi-Newton methods utilize stochastic gradient
differences in lieu of actual gradient differences. This makes their
computational cost per iteration independent of the number of objective
functions $n$. However, existing stochastic quasi-Newton methods have sublinear
or linear convergence at best. IQN is the first stochastic quasi-Newton method
proven to converge superlinearly in a local neighborhood of the optimal
solution. IQN differs from state-of-the-art incremental quasi-Newton methods in
three aspects: (i) The use of aggregated information of variables, gradients,
and quasi-Newton Hessian approximation matrices to reduce the noise of gradient
and Hessian approximations. (ii) The approximation of each individual function
by its Taylor's expansion in which the linear and quadratic terms are evaluated
with respect to the same iterate. (iii) The use of a cyclic scheme to update
the functions in lieu of a random selection routine. We use these fundamental
properties of IQN to establish its local superlinear convergence rate.
|
[
"Aryan Mokhtari and Mark Eisen and Alejandro Ribeiro",
"['Aryan Mokhtari' 'Mark Eisen' 'Alejandro Ribeiro']"
] |
cs.LG cs.CV
| null |
1702.00758
| null | null |
http://arxiv.org/pdf/1702.00758v4
|
2017-07-29T17:55:50Z
|
2017-02-02T17:29:24Z
|
HashNet: Deep Learning to Hash by Continuation
|
Learning to hash has been widely applied to approximate nearest neighbor
search for large-scale multimedia retrieval, due to its computation efficiency
and retrieval quality. Deep learning to hash, which improves retrieval quality
by end-to-end representation learning and hash encoding, has received
increasing attention recently. Owing to the ill-posed gradient problem in
optimization with sign activations, existing deep learning to hash methods
need to first learn continuous representations and then generate binary hash
codes in a separate binarization step, which causes a substantial loss of
retrieval quality. This work presents HashNet, a novel deep architecture for
deep learning to hash by a continuation method with convergence guarantees,
which learns exactly binary hash codes from imbalanced similarity data. The
key idea is to attack the ill-posed gradient problem in optimizing deep
networks with non-smooth binary activations by a continuation method, in which
we begin by learning an easier network with a smoothed activation function and
let it evolve during training, until it eventually becomes the original,
difficult-to-optimize deep network with the sign activation function.
Comprehensive empirical evidence shows that HashNet can generate exactly binary
hash codes and yield state-of-the-art multimedia retrieval performance on
standard benchmarks.
|
[
"['Zhangjie Cao' 'Mingsheng Long' 'Jianmin Wang' 'Philip S. Yu']",
"Zhangjie Cao, Mingsheng Long, Jianmin Wang, Philip S. Yu"
] |
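The continuation idea in isolation: replace sign(z) with tanh(beta * z) and
grow beta during training so the smooth surrogate approaches the sign
function. A schematic numpy illustration, where the specific beta schedule is
an assumption, not the paper's:

```python
import numpy as np

def smoothed_hash(z, beta):
    """Continuation activation: tanh(beta * z) -> sign(z) as beta -> inf,
    while staying differentiable at every finite beta."""
    return np.tanh(beta * z)

z = np.linspace(-2, 2, 9)
for beta in (1, 5, 25, 125):          # e.g. increase beta stage by stage
    print(beta, np.round(smoothed_hash(z, beta), 2))
# after the final stage the codes are numerically indistinguishable from
# sign(z), so exactly binary hash codes can be emitted without a separate
# binarization step
```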
math.OC cs.DS cs.LG stat.ML
| null |
1702.00763
| null | null |
http://arxiv.org/pdf/1702.00763v5
|
2018-09-27T09:55:54Z
|
2017-02-02T17:45:09Z
|
Natasha: Faster Non-Convex Stochastic Optimization Via Strongly
Non-Convex Parameter
|
Given a nonconvex function that is an average of $n$ smooth functions, we
design stochastic first-order methods to find its approximate stationary
points. The convergence of our new methods depends on the smallest (negative)
eigenvalue $-\sigma$ of the Hessian, a parameter that describes how nonconvex
the function is.
Our methods outperform known results for a range of parameter $\sigma$, and
can be used to find approximate local minima. Our result implies an interesting
dichotomy: there exists a threshold $\sigma_0$ so that the currently fastest
methods for $\sigma>\sigma_0$ and for $\sigma<\sigma_0$ have different
behaviors: the former scales with $n^{2/3}$ and the latter scales with
$n^{3/4}$.
|
[
"['Zeyuan Allen-Zhu']",
"Zeyuan Allen-Zhu"
] |
cs.CV cs.LG
| null |
1702.00783
| null | null |
http://arxiv.org/pdf/1702.00783v2
|
2017-03-22T16:13:21Z
|
2017-02-02T18:59:17Z
|
Pixel Recursive Super Resolution
|
We present a pixel recursive super resolution model that synthesizes
realistic details into images while enhancing their resolution. A low
resolution image may correspond to multiple plausible high resolution images,
thus modeling the super resolution process with a pixel independent conditional
model often results in averaging different details--hence blurry edges. By
contrast, our model is able to represent a multimodal conditional distribution
by properly modeling the statistical dependencies among the high resolution
image pixels, conditioned on a low resolution input. We employ a PixelCNN
architecture to define a strong prior over natural images and jointly optimize
this prior with a deep conditioning convolutional network. Human evaluations
indicate that samples from our proposed model look more photo realistic than a
strong L2 regression baseline.
|
[
"Ryan Dahl, Mohammad Norouzi, Jonathon Shlens",
"['Ryan Dahl' 'Mohammad Norouzi' 'Jonathon Shlens']"
] |
cs.IT cs.LG cs.NI math.IT
| null |
1702.00832
| null | null |
http://arxiv.org/pdf/1702.00832v2
|
2017-07-11T21:57:19Z
|
2017-02-02T21:30:08Z
|
An Introduction to Deep Learning for the Physical Layer
|
We present and discuss several novel applications of deep learning for the
physical layer. By interpreting a communications system as an autoencoder, we
develop a fundamental new way to think about communications system design as an
end-to-end reconstruction task that seeks to jointly optimize transmitter and
receiver components in a single process. We show how this idea can be extended
to networks of multiple transmitters and receivers and present the concept of
radio transformer networks as a means to incorporate expert domain knowledge in
the machine learning model. Lastly, we demonstrate the application of
convolutional neural networks on raw IQ samples for modulation classification
which achieves competitive accuracy with respect to traditional schemes relying
on expert features. The paper is concluded with a discussion of open challenges
and areas for future investigation.
|
[
"[\"Timothy J. O'Shea\" 'Jakob Hoydis']",
"Timothy J. O'Shea, Jakob Hoydis"
] |
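The "communications system as an autoencoder" view reduces to: messages in,
an encoder producing n normalized channel uses, an additive-noise channel
layer, a decoder producing message estimates. The sketch below makes that
end-to-end view concrete with a fixed random codebook and nearest-neighbor
decoding standing in for the trained encoder/decoder networks; all parameter
values are my own.

```python
import numpy as np

rng = np.random.default_rng(0)
M, n = 16, 7                          # M messages sent over n real channel uses

# "encoder": a codebook of M points in R^n with unit average power per use
# (in the paper this mapping is a trained neural network, not a random code)
codebook = rng.normal(size=(M, n))
codebook *= np.sqrt(n) / np.linalg.norm(codebook, axis=1, keepdims=True)

def channel(x, snr_db):
    """AWGN channel layer: unit signal power, noise std set by the SNR."""
    sigma = 10 ** (-snr_db / 20)
    return x + rng.normal(0, sigma, x.shape)

def decode(y):
    """'Decoder': nearest codeword (again a trained network in the paper)."""
    return np.argmin(np.linalg.norm(codebook - y, axis=1))

msgs = rng.integers(M, size=20_000)
errors = sum(decode(channel(codebook[m], snr_db=6.0)) != m for m in msgs)
print("block error rate:", errors / len(msgs))
```

Training replaces both the codebook and the nearest-neighbor rule with neural
networks optimized jointly through the (differentiable) noise layer, which is
the reconstruction task the abstract describes.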
physics.ins-det cs.LG physics.acc-ph
| null |
1702.00833
| null | null |
http://arxiv.org/pdf/1702.00833v1
|
2017-02-02T21:32:32Z
|
2017-02-02T21:32:32Z
|
Recurrent Neural Networks for anomaly detection in the Post-Mortem time
series of LHC superconducting magnets
|
This paper presents a model based on the Deep Learning algorithms LSTM and GRU
for facilitating anomaly detection in Large Hadron Collider superconducting
magnets. We used high resolution data available in the Post Mortem database to
train a set of models and chose the best possible set of their
hyper-parameters. Using a Deep Learning approach allowed us to examine a vast
body of data and extract the fragments which require further expert
examination and are regarded as anomalies. The presented method does not
require tedious manual threshold setting and operator attention at the stage
of system setup. Instead, an automatic approach is proposed which, according
to our experiments, achieves an accuracy of 99%. This is reached for the
largest dataset of 302 MB and the following network architecture: a single
layer LSTM with 128 cells, 20 epochs of training, look_back=16,
look_ahead=128, grid=100 and the Adam optimizer. All the experiments were run
on an Nvidia Tesla K80 GPU.
|
[
"Maciej Wielgosz and Andrzej Skocze\\'n and Matej Mertik",
"['Maciej Wielgosz' 'Andrzej Skoczeń' 'Matej Mertik']"
] |
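The look_back/look_ahead hyper-parameters quoted above correspond to a
standard windowing of the time series into (past window, future window)
training pairs; a data-preparation sketch, with the LSTM training itself
omitted and the toy signal a stand-in for a magnet voltage trace:

```python
import numpy as np

def make_windows(series, look_back=16, look_ahead=128):
    """Slice a 1-D signal into (past -> future) pairs matching the quoted
    look_back/look_ahead settings."""
    X, y = [], []
    for t in range(len(series) - look_back - look_ahead + 1):
        X.append(series[t:t + look_back])
        y.append(series[t + look_back:t + look_back + look_ahead])
    return np.array(X)[..., None], np.array(y)  # (N, look_back, 1), (N, look_ahead)

signal = np.sin(np.linspace(0, 50, 5000))       # stand-in for a voltage trace
X, y = make_windows(signal)
print(X.shape, y.shape)
# an LSTM trained on (X, y) predicts the next look_ahead samples; windows
# whose prediction error is far above the training error are flagged as
# anomalies, replacing manual threshold setting
```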
cs.CL cs.LG cs.NE
| null |
1702.00887
| null | null |
http://arxiv.org/pdf/1702.00887v3
|
2017-02-16T17:52:03Z
|
2017-02-03T01:40:45Z
|
Structured Attention Networks
|
Attention networks have proven to be an effective approach for embedding
categorical inference within a deep neural network. However, for many tasks we
may want to model richer structural dependencies without abandoning end-to-end
training. In this work, we experiment with incorporating richer structural
distributions, encoded using graphical models, within deep networks. We show
that these structured attention networks are simple extensions of the basic
attention procedure, and that they allow for extending attention beyond the
standard soft-selection approach, such as attending to partial segmentations or
to subtrees. We experiment with two different classes of structured attention
networks: a linear-chain conditional random field and a graph-based parsing
model, and describe how these models can be practically implemented as neural
network layers. Experiments show that this approach is effective for
incorporating structural biases, and structured attention networks outperform
baseline attention models on a variety of synthetic and real tasks: tree
transduction, neural machine translation, question answering, and natural
language inference. We further find that models trained in this way learn
interesting unsupervised hidden representations that generalize simple
attention.
|
[
"['Yoon Kim' 'Carl Denton' 'Luong Hoang' 'Alexander M. Rush']",
"Yoon Kim, Carl Denton, Luong Hoang, Alexander M. Rush"
] |
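For the linear-chain CRF case named above, the computational core of a
structured attention layer is forward-backward inference: per-position
marginals computed from unary and pairwise log-potentials replace the usual
softmax attention weights. A minimal log-space numpy sketch, my own reduction
of the mechanism rather than the paper's implementation:

```python
import numpy as np
from scipy.special import logsumexp

def crf_marginals(unary, pairwise):
    """Marginals p(z_t = k) of a linear-chain CRF via forward-backward.

    unary: (T, K) log-potentials; pairwise: (K, K) log transition potentials.
    In a structured attention layer, these marginals play the role that
    softmax weights play in standard attention.
    """
    T, K = unary.shape
    fwd = np.zeros((T, K))
    bwd = np.zeros((T, K))
    fwd[0] = unary[0]
    for t in range(1, T):       # alpha recursion (sum over previous state)
        fwd[t] = unary[t] + logsumexp(fwd[t - 1][:, None] + pairwise, axis=0)
    for t in range(T - 2, -1, -1):  # beta recursion (sum over next state)
        bwd[t] = logsumexp(pairwise + (unary[t + 1] + bwd[t + 1])[None, :], axis=1)
    log_marg = fwd + bwd - logsumexp(fwd[-1])   # normalize by log-partition
    return np.exp(log_marg)

rng = np.random.default_rng(0)
marg = crf_marginals(rng.normal(size=(6, 2)), rng.normal(size=(2, 2)))
print(marg.sum(axis=1))   # each row sums to 1
```

Every step above is differentiable, which is what lets such an inference
routine sit inside a network as an attention layer and train end-to-end.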