title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---
End-to-End Differentiable Proving | cs.NE cs.AI cs.LG cs.LO | We introduce neural networks for end-to-end differentiable proving of queries
to knowledge bases by operating on dense vector representations of symbols.
These neural networks are constructed recursively by taking inspiration from
the backward chaining algorithm as used in Prolog. Specifically, we replace
symbolic unification with a differentiable computation on vector
representations of symbols using a radial basis function kernel, thereby
combining symbolic reasoning with learning subsymbolic vector representations.
By using gradient descent, the resulting neural network can be trained to infer
facts from a given incomplete knowledge base. It learns to (i) place
representations of similar symbols in close proximity in a vector space, (ii)
make use of such similarities to prove queries, (iii) induce logical rules, and
(iv) use provided and induced logical rules for multi-hop reasoning. We
demonstrate that this architecture outperforms ComplEx, a state-of-the-art
neural link prediction model, on three out of four benchmark knowledge bases
while at the same time inducing interpretable function-free first-order logic
rules.
| Tim Rockt\"aschel and Sebastian Riedel | null | 1705.1104 | null | null |
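For readers who want the core idea concretely, here is a minimal numpy sketch of the soft unification the abstract describes: symbolic matching replaced by an RBF kernel on symbol embeddings. The function name, the example vectors, and the exact kernel form exp(-gamma * ||a - b||^2) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rbf_unify(theta_a, theta_b, gamma=1.0):
    # Soft unification score in (0, 1]: similar symbol embeddings
    # "unify" with a score near 1 instead of failing symbolically.
    return np.exp(-gamma * np.sum((theta_a - theta_b) ** 2))

# Embeddings that training has pulled close together unify softly.
grandfather = np.array([0.90, 0.10, 0.30])
grandpa     = np.array([0.85, 0.12, 0.28])
unrelated   = np.array([-0.70, 0.90, -0.20])
print(rbf_unify(grandfather, grandpa))    # high score (~0.997)
print(rbf_unify(grandfather, unrelated))  # low score
```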
Greedy Algorithms for Cone Constrained Optimization with Convergence
Guarantees | cs.LG stat.ML | Greedy optimization methods such as Matching Pursuit (MP) and Frank-Wolfe
(FW) algorithms regained popularity in recent years due to their simplicity,
effectiveness and theoretical guarantees. MP and FW address optimization over
the linear span and the convex hull of a set of atoms, respectively. In this
paper, we consider the intermediate case of optimization over the convex cone,
parametrized as the conic hull of a generic atom set, leading to the first
principled definitions of non-negative MP algorithms for which we give explicit
convergence rates and demonstrate excellent empirical performance. In
particular, we derive sublinear ($\mathcal{O}(1/t)$) convergence on general
smooth and convex objectives, and linear convergence ($\mathcal{O}(e^{-t})$) on
strongly convex objectives, in both cases for general sets of atoms.
Furthermore, we establish a clear correspondence of our algorithms to known
algorithms from the MP and FW literature. Our novel algorithms and analyses
target general atom sets and general objective functions, and hence are
directly applicable to a large variety of learning settings.
| Francesco Locatello, Michael Tschannen, Gunnar R\"atsch, Martin Jaggi | null | 1705.11041 | null | null |
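As a concrete illustration of optimization over a conic hull, here is a toy non-negative Matching Pursuit for least squares, sketched under stated assumptions: the greedy selection rule, exact line search, and stopping test are textbook choices, not necessarily the exact variants analyzed in the paper.

```python
import numpy as np

def nonneg_mp(A, y, steps=50):
    """Toy non-negative MP for f(w) = 0.5 * ||y - A w||^2 over the conic
    hull of the columns of A: only non-negative steps are taken, so the
    iterate stays in the cone."""
    w = np.zeros(A.shape[1])
    for _ in range(steps):
        r = y - A @ w                 # residual = negative gradient direction
        scores = A.T @ r              # alignment of each atom with the residual
        i = int(np.argmax(scores))    # greedy atom selection
        if scores[i] <= 0:            # no atom improves within the cone
            break
        w[i] += scores[i] / (A[:, i] @ A[:, i])  # exact line search, step > 0
    return w

rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(20, 8)))
y = A @ np.array([1.0, 0.0, 2.0, 0.0, 0.0, 0.5, 0.0, 0.0])
print(np.round(nonneg_mp(A, y), 2))
```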
HiNet: Hierarchical Classification with Neural Network | cs.LG | Traditionally, classifying large sets of hierarchical labels with more than 10000
distinct traces could only be achieved with flattened labels. Although
flattening the labels is feasible, it discards the hierarchical information
they contain. Hierarchical models such as the HSVM of
\cite{vural2004hierarchical} become impossible to train because of the sheer
number of SVMs in the whole architecture. We developed a hierarchical
architecture based on neural networks that is simple to train. We also derived
an inference algorithm that can efficiently infer the MAP (maximum a
posteriori) trace, guaranteed by our theorems. Furthermore, the complexity of
the model is only $O(n^2)$, compared to $O(n^h)$ in a flattened model, where
$h$ is the height of the hierarchy.
| Zhenzhou Wu, Sean Saito | null | 1705.11105 | null | null |
Information Theoretic Properties of Markov Random Fields, and their
Algorithmic Applications | cs.LG cs.DS cs.IT math.IT math.ST stat.TH | Markov random fields are a popular model for high-dimensional probability
distributions. Over the years, many mathematical, statistical and algorithmic
problems on them have been studied. Until recently, the only known algorithms
for provably learning them relied on exhaustive search, correlation decay or
various incoherence assumptions. Bresler gave an algorithm for learning general
Ising models on bounded degree graphs. His approach was based on a structural
result about mutual information in Ising models.
Here we take a more conceptual approach to proving lower bounds on the mutual
information through setting up an appropriate zero-sum game. Our proof
generalizes well beyond Ising models, to arbitrary Markov random fields with
higher order interactions. As an application, we obtain algorithms for learning
Markov random fields on bounded degree graphs on $n$ nodes with $r$-order
interactions in $n^r$ time and $\log n$ sample complexity. The sample
complexity is information theoretically optimal up to the dependence on the
maximum degree. The running time is nearly optimal under standard conjectures
about the hardness of learning parity with noise.
| Linus Hamilton, Frederic Koehler, Ankur Moitra | null | 1705.11107 | null | null |
Controllable Invariance through Adversarial Feature Learning | cs.LG cs.AI cs.CL | Learning meaningful representations that maintain the content necessary for a
particular task while filtering away detrimental variations is a problem of
great interest in machine learning. In this paper, we tackle the problem of
learning representations invariant to a specific factor or trait of data. The
representation learning process is formulated as an adversarial minimax game.
We analyze the optimal equilibrium of such a game and find that it amounts to
maximizing the uncertainty of inferring the detrimental factor given the
representation while maximizing the certainty of making task-specific
predictions. On three benchmark tasks, namely fair and bias-free
classification, language-independent generation, and lighting-independent image
classification, we show that the proposed framework induces an invariant
representation, and leads to better generalization evidenced by the improved
performance.
| Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, Graham Neubig | null | 1705.11122 | null | null |
SuperSpike: Supervised learning in multi-layer spiking neural networks | q-bio.NC cs.LG cs.NE stat.ML | A vast majority of computation in the brain is performed by spiking neural
networks. Despite the ubiquity of such spiking, we currently lack an
understanding of how biological spiking neural circuits learn and compute
in-vivo, as well as how we can instantiate such capabilities in artificial
spiking circuits in-silico. Here we revisit the problem of supervised learning
in temporally coding multi-layer spiking neural networks. First, by using a
surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based
three factor learning rule capable of training multi-layer networks of
deterministic integrate-and-fire neurons to perform nonlinear computations on
spatiotemporal spike patterns. Second, inspired by recent results on feedback
alignment, we compare the performance of our learning rule under different
credit assignment strategies for propagating output errors to hidden units.
Specifically, we test uniform, symmetric and random feedback, finding that
simpler tasks can be solved with any type of feedback, while more complex tasks
require symmetric feedback. In summary, our results open the door to obtaining
a better scientific understanding of learning and computation in spiking neural
networks by advancing our ability to train them to solve nonlinear problems
involving transformations between different spatiotemporal spike-time patterns.
| Friedemann Zenke and Surya Ganguli | 10.1162/neco_a_01086 | 1705.11146 | null | null |
Reinforcement Learning for Learning Rate Control | cs.LG | Stochastic gradient descent (SGD), which updates the model parameters by
adding a local gradient times a learning rate at each step, is widely used in
model training of machine learning algorithms such as neural networks. It is
observed that the models trained by SGD are sensitive to learning rates and
good learning rates are problem specific. We propose an algorithm to
automatically learn learning rates using neural network based actor-critic
methods from deep reinforcement learning (RL). In particular, we train a
policy network, called the actor, to decide the learning rate at each step
during training, and a value network, called the critic, to give feedback
about the quality of that decision (e.g., the goodness of the learning rate
output by the actor). The introduction of auxiliary actor and critic networks helps
the main network achieve better performance. Experiments on different datasets
and network architectures show that our approach leads to better convergence of
SGD than human-designed competitors.
| Chang Xu, Tao Qin, Gang Wang and Tie-Yan Liu | null | 1705.11159 | null | null |
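The control loop the abstract describes can be sketched independently of the RL machinery. In the toy below, the state features (current loss and gradient norm), the actor interface, and the hand-coded stand-in policy are all assumptions for illustration; the paper trains the actor jointly with a critic, which is omitted here.

```python
import numpy as np

def train_with_lr_controller(w0, grad_fn, actor, steps=100):
    # SGD where the learning rate at each step is chosen by an "actor"
    # from a summary of the current training state.
    w = w0.copy()
    for _ in range(steps):
        loss, grad = grad_fn(w)
        state = np.array([loss, np.linalg.norm(grad)])
        lr = actor(state)                   # a trained policy network would go here
        w -= lr * grad
    return w

actor = lambda s: 0.1 / (1.0 + s[1])        # stand-in: shrink lr for big grads
grad_fn = lambda w: (0.5 * w @ w, w)        # toy quadratic objective
print(train_with_lr_controller(np.ones(3), grad_fn, actor))
```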
Emergence of Language with Multi-agent Games: Learning to Communicate
with Sequences of Symbols | cs.LG cs.CL cs.CV cs.MA | Learning to communicate through interaction, rather than relying on explicit
supervision, is often considered a prerequisite for developing a general AI. We
study a setting where two agents engage in playing a referential game and, from
scratch, develop a communication protocol necessary to succeed in this game.
Unlike previous work, we require that the messages they exchange, both at
training and test time, are in the form of a language (i.e. sequences of
discrete symbols). We compare a reinforcement learning approach with one using
a differentiable relaxation (the straight-through Gumbel-softmax estimator)
and observe that the latter converges much faster and results in more effective protocols.
Interestingly, we also observe that the protocol we induce by optimizing the
communication success exhibits a degree of compositionality and variability
(i.e. the same information can be phrased in different ways), both properties
characteristic of natural languages. As the ultimate goal is to ensure that
communication is accomplished in natural language, we also perform experiments
where we inject prior information about natural language into our model and
study properties of the resulting protocol.
| Serhii Havrylov, Ivan Titov | null | 1705.11192 | null | null |
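The straight-through Gumbel-softmax estimator mentioned above samples a discrete symbol in the forward pass while keeping a soft surrogate for gradients. A minimal numpy sketch of the sampling follows; the gradient rerouting is a framework-level trick (noted in the comment) and the logits are illustrative.

```python
import numpy as np

def gumbel_softmax_st(logits, tau=1.0, rng=np.random.default_rng(0)):
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0,1) noise
    y_soft = np.exp((logits + g) / tau)
    y_soft /= y_soft.sum()                 # relaxed (differentiable) sample
    y_hard = np.zeros_like(y_soft)
    y_hard[np.argmax(y_soft)] = 1.0        # discrete symbol actually emitted
    # Straight-through: forward pass uses y_hard, backward pass uses y_soft
    # (in an autodiff framework: y_hard + y_soft - stop_gradient(y_soft)).
    return y_hard, y_soft

hard, soft = gumbel_softmax_st(np.array([2.0, 0.5, 0.1]))
print(hard, np.round(soft, 2))
```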
Feature Extraction for Machine Learning Based Crackle Detection in Lung
Sounds from a Health Survey | cs.SD cs.LG | In recent years, many innovative solutions for recording and viewing sounds
from a stethoscope have become available. However, to fully utilize such
devices, there is a need for an automated approach for detecting abnormal lung
sounds, which is better than the existing methods that typically have been
developed and evaluated using a small and non-diverse dataset.
We propose a machine learning based approach for detecting crackles in lung
sounds recorded using a stethoscope in a large health survey. Our method is
trained and evaluated using 209 files with crackles classified by expert
listeners. Our analysis pipeline is based on features extracted from small
windows in audio files. We evaluated several feature extraction methods and
classifiers. We evaluated the pipeline using a training set of 175 crackle
windows and 208 normal windows. We ran 100 cycles of cross-validation,
shuffling the training sets between cycles. In all cycles, the split between
training and evaluation data was 70%-30%.
We found and evaluated a 5-dimensional vector with four features from the
time domain and one from the spectral domain. We evaluated several classifiers
and found SVM with a Radial Basis Function Kernel to perform best. Our approach
had a precision of 86% and recall of 84% for classifying a crackle in a window,
which is more accurate than found in studies of health personnel. The
low-dimensional feature vector makes the SVM very fast. The model can be
trained on a regular computer in 1.44 seconds, and 319 crackles can be
classified in 1.08 seconds.
Our approach detects and visualizes individual crackles in recorded audio
files. It is accurate, fast, and has low resource requirements. It can be used
to train health personnel or as part of a smartphone application for Bluetooth
stethoscopes.
| Morten Gr{\o}nnesby, Juan Carlos Aviles Solis, Einar Holsb{\o}, Hasse
Melbye, Lars Ailo Bongo | null | 1706.00005 | null | null |
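The classification stage lends itself to a compact sketch: a 5-dimensional feature vector per window, classified by an SVM with an RBF kernel, as the abstract reports. The features below are synthetic placeholders, since the four time-domain features and one spectral feature are not specified here.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(383, 5))               # 175 crackle + 208 normal windows
y = np.concatenate([np.ones(175), np.zeros(208)])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)                               # small model -> fast to train
print(clf.predict(X[:3]))
```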
Toward Robustness against Label Noise in Training Deep Discriminative
Neural Networks | cs.LG stat.ML | Collecting large training datasets, annotated with high-quality labels, is
costly and time-consuming. This paper proposes a novel framework for training
deep convolutional neural networks from noisy labeled datasets that can be
obtained cheaply. The problem is formulated using an undirected graphical model
that represents the relationship between noisy and clean labels, trained in a
semi-supervised setting. In our formulation, the inference over latent clean
labels is tractable and is regularized during training using auxiliary sources
of information. The proposed model is applied to the image labeling problem and
is shown to be effective in labeling unseen images as well as reducing label
noise in training on CIFAR-10 and MS COCO datasets.
| Arash Vahdat | null | 1706.00038 | null | null |
Biased Importance Sampling for Deep Neural Network Training | cs.LG | Importance sampling has been successfully used to accelerate stochastic
optimization in many convex problems. However, the lack of an efficient way to
calculate the importance still hinders its application to Deep Learning.
In this paper, we show that the loss value can be used as an alternative
importance metric, and propose a way to efficiently approximate it for a deep
model, using a small model trained for that purpose in parallel.
In particular, this method makes it possible to utilize a biased gradient
estimate that implicitly optimizes a soft max-loss and leads to better
generalization performance. While such a method suffers from a prohibitively
high variance of the gradient estimate when using a standard stochastic
optimizer, we show that, when combined with our sampling mechanism, it results
in a reliable procedure.
We showcase the generality of our method by testing it on both image
classification and language modeling tasks using deep convolutional and
recurrent neural networks. In particular, our method results in 30% faster
training of a CNN for CIFAR10 than when using uniform sampling.
| Angelos Katharopoulos and Fran\c{c}ois Fleuret | null | 1706.00043 | null | null |
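The sampling mechanism can be sketched in a few lines: per-sample losses stand in for importance, and gradients are reweighted accordingly. The proportional scheme and the 1/(N p_i) weights below are the textbook importance-sampling choices, assumed here rather than taken from the paper's exact recipe.

```python
import numpy as np

def loss_based_sampling(losses, batch_size, rng=np.random.default_rng(0)):
    p = losses / losses.sum()                 # sample hard examples more often
    idx = rng.choice(len(losses), size=batch_size, p=p)
    weights = 1.0 / (len(losses) * p[idx])    # reweight the sampled gradients
    return idx, weights

losses = np.array([0.1, 0.1, 2.0, 0.3, 5.0, 0.2])
idx, w = loss_based_sampling(losses, batch_size=4)
print(idx, np.round(w, 3))
```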
Learning Time/Memory-Efficient Deep Architectures with Budgeted Super
Networks | cs.LG | We propose to focus on the problem of discovering neural network
architectures efficient in terms of both prediction quality and cost. For
instance, our approach is able to solve the following tasks: learn a neural
network able to predict well in less than 100 milliseconds or learn an
efficient model that fits in a 50 Mb memory. Our contribution is a novel family
of models called Budgeted Super Networks (BSN). They are learned using gradient
descent techniques applied on a budgeted learning objective function which
integrates a maximum authorized cost, while making no assumption on the nature
of this cost. We present a set of experiments on computer vision problems and
analyze the ability of our technique to deal with three different costs: the
computation cost, the memory consumption cost and a distributed computation
cost. We particularly show that our model can discover neural network
architectures that have a better accuracy than the ResNet and Convolutional
Neural Fabrics architectures on CIFAR-10 and CIFAR-100, at a lower cost.
| Tom Veniat and Ludovic Denoyer | null | 1706.00046 | null | null |
Deep Generative Adversarial Networks for Compressed Sensing Automates
MRI | cs.CV cs.LG stat.ML | Magnetic resonance image (MRI) reconstruction is a severely ill-posed linear
inverse task demanding time and resource intensive computations that can
substantially trade off {\it accuracy} for {\it speed} in real-time imaging. In
addition, state-of-the-art compressed sensing (CS) analytics are not cognizant
of the image {\it diagnostic quality}. To cope with these challenges we put
forth a novel CS framework that permeates benefits from generative adversarial
networks (GAN) to train a (low-dimensional) manifold of diagnostic-quality MR
images from historical patients. Leveraging a mixture of least-squares (LS)
GANs and pixel-wise $\ell_1$ cost, a deep residual network with skip
connections is trained as the generator that learns to remove the {\it
aliasing} artifacts by projecting onto the manifold. LSGAN learns the texture
details, while $\ell_1$ controls the high-frequency noise. A multilayer
convolutional neural network is then jointly trained based on diagnostic
quality images to discriminate the projection quality. The test phase performs
feed-forward propagation over the generator network that demands a very low
computational overhead. Extensive evaluations are performed on a large
contrast-enhanced MR dataset of pediatric patients. In particular, ratings by
expert radiologists corroborate that GANCS retrieves high-contrast images with
detailed texture relative to conventional CS and pixel-wise schemes. In
addition, it offers reconstruction in a few milliseconds, two
orders of magnitude faster than state-of-the-art CS-MRI schemes.
| Morteza Mardani, Enhao Gong, Joseph Y. Cheng, Shreyas Vasanawala, Greg
Zaharchuk, Marcus Alley, Neil Thakur, Song Han, William Dally, John M. Pauly,
and Lei Xing | null | 1706.00051 | null | null |
The Sample Complexity of Online One-Class Collaborative Filtering | cs.LG cs.AI cs.IT math.IT stat.ML | We consider the online one-class collaborative filtering (CF) problem that
consists of recommending items to users over time in an online fashion based on
positive ratings only. This problem arises when users respond only occasionally
to a recommendation with a positive rating, and never with a negative one. We
study the impact of the probability of a user responding to a recommendation,
p_f, on the sample complexity, i.e., the number of ratings required to make
`good' recommendations, and ask whether receiving positive and negative
ratings, instead of positive ratings only, improves the sample complexity. Both
questions arise in the design of recommender systems. We introduce a simple
probabilistic user model, and analyze the performance of an online user-based
CF algorithm. We prove that after an initial cold start phase, where
recommendations are invested in exploring the user's preferences, this
algorithm makes---up to a fraction of the recommendations required for updating
the user's preferences---perfect recommendations. The number of ratings
required for the cold start phase is nearly proportional to 1/p_f, and that for
updating the user's preferences is essentially independent of p_f. As a
consequence, we find that receiving both positive and negative ratings instead
of only positive ones improves the number of ratings required for initial
exploration by a factor of 1/p_f, which can be significant.
| Reinhard Heckel and Kannan Ramchandran | null | 1706.00061 | null | null |
Free energy-based reinforcement learning using a quantum processor | cs.LG cs.AI cs.NE math.OC quant-ph | Recent theoretical and experimental results suggest the possibility of using
current and near-future quantum hardware in challenging sampling tasks. In this
paper, we introduce free energy-based reinforcement learning (FERL) as an
application of quantum hardware. We propose a method for processing a quantum
annealer's measured qubit spin configurations in approximating the free energy
of a quantum Boltzmann machine (QBM). We then apply this method to perform
reinforcement learning on the grid-world problem using the D-Wave 2000Q quantum
annealer. The experimental results show that our technique is a promising
method for harnessing the power of quantum sampling in reinforcement learning
tasks.
| Anna Levit, Daniel Crawford, Navid Ghadermarzy, Jaspreet S. Oberoi,
Ehsan Zahedinejad, Pooya Ronagh | null | 1706.00074 | null | null |
Low-Rank Matrix Approximation in the Infinity Norm | cs.CC cs.LG math.NA math.OC | The low-rank matrix approximation problem with respect to the entry-wise
$\ell_{\infty}$-norm is the following: given a matrix $M$ and a factorization
rank $r$, find a matrix $X$ whose rank is at most $r$ and that minimizes
$\max_{i,j} |M_{ij} - X_{ij}|$. In this paper, we prove that the decision
variant of this problem for $r=1$ is NP-complete using a reduction from the
problem `not all equal 3SAT'. We also analyze several cases when the problem
can be solved in polynomial time, and propose a simple practical heuristic
algorithm which we apply on the problem of the recovery of a quantized low-rank
matrix.
| Nicolas Gillis, Yaroslav Shitov | 10.1016/j.laa.2019.07.017 | 1706.00078 | null | null |
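To make the objective concrete, here is a naive local search for the rank-1 case, min over x, y of max_{i,j} |M_ij - x_i y_j|. Since the abstract proves the decision problem NP-complete for r = 1, this alternating least-squares loop is only an illustrative heuristic, not the paper's algorithm, and it optimizes an l2 surrogate while reporting the l_inf error.

```python
import numpy as np

def rank1_linf(M, iters=200):
    x = M[:, 0].astype(float) + 1e-9
    for _ in range(iters):
        y = (M.T @ x) / (x @ x)       # best y for fixed x (least squares)
        x = (M @ y) / (y @ y)         # best x for fixed y (least squares)
    return x, y, np.max(np.abs(M - np.outer(x, y)))

M = np.outer([1.0, 2.0, 3.0], [1.0, 0.5]) + 0.01
print(rank1_linf(M)[2])               # entrywise l_inf approximation error
```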
Megapixel Size Image Creation using Generative Adversarial Networks | cs.CV cs.GR cs.LG | Since their appearance, Generative Adversarial Networks (GANs) have received a
lot of interest in the AI community. In image generation, several projects
have shown that GANs are able to generate photorealistic images, but so far
the results have not met the quality standards of the visual media production
industry. We present an optimized image generation process based on Deep
Convolutional Generative Adversarial Networks (DCGANs), in order to create
photorealistic high-resolution images (up to 1024x1024 pixels). Furthermore,
the system was trained on a limited dataset of fewer than two thousand images.
These results give further clues about the future exploitation of GANs in
Computer Graphics and Visual Effects.
| Marco Marchesi | null | 1706.00082 | null | null |
Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization | stat.ML cs.IT cs.LG math.IT | In this paper, we consider the problem of sequentially optimizing a black-box
function $f$ based on noisy samples and bandit feedback. We assume that $f$ is
smooth in the sense of having a bounded norm in some reproducing kernel Hilbert
space (RKHS), yielding a commonly-considered non-Bayesian form of Gaussian
process bandit optimization. We provide algorithm-independent lower bounds on
the simple regret, measuring the suboptimality of a single point reported after
$T$ rounds, and on the cumulative regret, measuring the sum of regrets over the
$T$ chosen points. For the isotropic squared-exponential kernel in $d$
dimensions, we find that an average simple regret of $\epsilon$ requires $T =
\Omega\big(\frac{1}{\epsilon^2} (\log\frac{1}{\epsilon})^{d/2}\big)$, and the
average cumulative regret is at least $\Omega\big( \sqrt{T(\log T)^{d/2}}
\big)$, thus matching existing upper bounds up to the replacement of $d/2$ by
$2d+O(1)$ in both cases. For the Mat\'ern-$\nu$ kernel, we give analogous
bounds of the form $\Omega\big( (\frac{1}{\epsilon})^{2+d/\nu}\big)$ and
$\Omega\big( T^{\frac{\nu + d}{2\nu + d}} \big)$, and discuss the resulting
gaps to the existing upper bounds.
| Jonathan Scarlett, Ilijia Bogunovic, Volkan Cevher | null | 1706.0009 | null | null |
Using GPI-2 for Distributed Memory Parallelization of the Caffe Toolbox
to Speed up Deep Neural Network Training | cs.LG cs.DC | Deep Neural Networks (DNN) are currently of great interest in research and
application. The training of these networks is a compute-intensive and
time-consuming task. To reduce training times to a bearable amount at
reasonable cost, we extend the popular Caffe toolbox for DNNs with an
efficient distributed memory communication pattern. To achieve good
scalability we emphasize the overlap of computation and communication and
prefer fine-granular synchronization patterns over global barriers. To
implement these communication patterns we rely on the Global address space
Programming Interface version 2 (GPI-2) communication library. This interface
provides a light-weight set of asynchronous one-sided communication primitives
supplemented by non-blocking fine-granular data synchronization mechanisms.
CaffeGPI is the name of our parallel version of Caffe. First benchmarks
demonstrate better scaling behavior compared with other extensions, e.g., the
Intel Caffe. Even within a single symmetric multiprocessing machine with four
graphics processing units, CaffeGPI scales better than the standard Caffe
toolbox. These first results demonstrate that the use of standard High
Performance Computing (HPC) hardware is a valid cost-saving approach to train
large DNNs. I/O is another bottleneck for working with DNNs in a standard
parallel HPC setting, which we will consider in more detail in a forthcoming
paper.
| Martin Kuehn, Janis Keuper and Franz-Josef Pfreundt | null | 1706.00095 | null | null |
Bayesian fairness | cs.LG stat.ML | We consider the problem of how decision making can be fair when the
underlying probabilistic model of the world is not known with certainty. We
argue that recent notions of fairness in machine learning need to explicitly
incorporate parameter uncertainty, hence we introduce the notion of {\em
Bayesian fairness} as a suitable candidate for fair decision rules. Using
balance, a definition of fairness introduced by Kleinberg et al (2016), we show
how a Bayesian perspective can lead to well-performing, fair decision rules
even under high uncertainty.
| Christos Dimitrakakis and Yang Liu and David Parkes and Goran
Radanovic | null | 1706.00119 | null | null |
Scalable Generalized Linear Bandits: Online Computation and Hashing | stat.ML cs.LG | Generalized Linear Bandits (GLBs), a natural extension of the stochastic
linear bandit, have been popular and successful in recent years. However,
existing GLBs scale poorly with the number of rounds and the number of arms,
limiting their utility in practice. This paper proposes new, scalable solutions
to the GLB problem in two respects. First, unlike existing GLBs, whose
per-time-step space and time complexity grow at least linearly with time $t$,
we propose a new algorithm that performs online computations to enjoy a
constant space and time complexity. At its heart is a novel Generalized Linear
extension of the Online-to-confidence-set Conversion (GLOC method) that takes
\emph{any} online learning algorithm and turns it into a GLB algorithm. As a
special case, we apply GLOC to the online Newton step algorithm, which results
in a low-regret GLB algorithm with much lower time and memory complexity than
prior work. Second, for the case where the number $N$ of arms is very large, we
propose new algorithms in which each next arm is selected via an inner product
search. Such methods can be implemented via hashing algorithms (i.e.,
"hash-amenable") and result in a time complexity sublinear in $N$. While a
Thompson sampling extension of GLOC is hash-amenable, its regret bound for
$d$-dimensional arm sets scales with $d^{3/2}$, whereas GLOC's regret bound
scales with $d$. Towards closing this gap, we propose a new hash-amenable
algorithm whose regret bound scales with $d^{5/4}$. Finally, we propose a fast
approximate hash-key computation (inner product) with a better accuracy than
the state-of-the-art, which can be of independent interest. We conclude the
paper with preliminary experimental results confirming the merits of our
methods.
| Kwang-Sung Jun, Aniruddha Bhargava, Robert Nowak, Rebecca Willett | null | 1706.00136 | null | null |
Cross-modal Common Representation Learning by Hybrid Transfer Network | cs.MM cs.CV cs.LG | DNN-based cross-modal retrieval is a research hotspot for retrieving across
different modalities such as image and text, but existing methods often face
the challenge of insufficient cross-modal training data. In the single-modal
scenario, a similar problem is usually alleviated by transferring knowledge
from large-scale auxiliary datasets (such as ImageNet). Knowledge from such
single-modal datasets is also very useful for cross-modal retrieval, as it can
provide rich general semantic information that can be shared across different
modalities. However, it is challenging to transfer useful knowledge from a
single-modal (such as image) source domain to a cross-modal (such as
image/text) target domain. Knowledge in the source domain cannot be
transferred directly to the two different modalities in the target domain, and
the inherent cross-modal correlation contained in the target domain provides
key hints for cross-modal retrieval which should be preserved during the
transfer process. This paper proposes the Cross-modal Hybrid Transfer Network
(CHTN) with two subnetworks: a modal-sharing transfer subnetwork utilizes the
modality shared by the source and target domains as a bridge, transferring
knowledge to both modalities simultaneously; a layer-sharing correlation
subnetwork preserves the inherent cross-modal semantic correlation to further
adapt to the cross-modal retrieval task. Cross-modal data can be converted to
a common representation by CHTN for retrieval, and comprehensive experiments
on 3 datasets show its effectiveness.
| Xin Huang, Yuxin Peng, and Mingkuan Yuan | null | 1706.00153 | null | null |
Krylov Subspace Recycling for Fast Iterative Least-Squares in Machine
Learning | cs.LG math.NA stat.ML | Solving symmetric positive definite linear problems is a fundamental
computational task in machine learning. The exact solution, famously, is
cubically expensive in the size of the matrix. To alleviate this problem, several
linear-time approximations, such as spectral and inducing-point methods, have
been suggested and are now in wide use. These are low-rank approximations that
choose the low-rank space a priori and do not refine it over time. While this
allows linear cost in the data-set size, it also causes a finite, uncorrected
approximation error. Authors from numerical linear algebra have explored ways
to iteratively refine such low-rank approximations, at a cost of a small number
of matrix-vector multiplications. This idea is particularly interesting in the
many situations in machine learning where one has to solve a sequence of
related symmetric positive definite linear problems. From the machine learning
perspective, such deflation methods can be interpreted as transfer learning of
a low-rank approximation across a time-series of numerical tasks. We study the
use of such methods for our field. Our empirical results show that, on
regression and classification problems of intermediate size, this approach can
interpolate between low computational cost and numerical precision.
| Filip de Roos and Philipp Hennig | null | 1706.00241 | null | null |
Supervised Quantile Normalisation | stat.ML cs.LG q-bio.QM | Quantile normalisation is a popular normalisation method for data subject to
unwanted variations such as images, speech, or genomic data. It applies a
monotonic transformation to the feature values of each sample to ensure that
after normalisation, they follow the same target distribution for each sample.
Choosing a "good" target distribution remains however largely empirical and
heuristic, and is usually done independently of the subsequent analysis of
normalised data. We propose instead to couple the quantile normalisation step
with the subsequent analysis, and to optimise the target distribution jointly
with the other parameters in the analysis. We illustrate this principle on the
problem of estimating a linear model over normalised data, and show that it
leads to a particular low-rank matrix regression problem that can be solved
efficiently. We illustrate the potential of our method, which we term SUQUAN,
on simulated data, images and genomic data, where it outperforms standard
quantile normalisation.
| Marine Le Morvan (CBIO), Jean-Philippe Vert (DMA, CBIO) | null | 1706.00244 | null | null |
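For reference, here is plain (unsupervised) quantile normalisation, the step SUQUAN proposes to couple with the downstream model: each sample's values are replaced by the target distribution's quantiles, matched by rank. In SUQUAN the target would be optimised jointly with the model; in this sketch it is fixed.

```python
import numpy as np

def quantile_normalise(X, target):
    Xn = np.empty_like(X, dtype=float)
    t = np.sort(target)
    for i, row in enumerate(X):
        ranks = np.argsort(np.argsort(row))  # rank of each value in its sample
        Xn[i] = t[ranks]                     # substitute the target quantiles
    return Xn

X = np.array([[5.0, 2.0, 3.0],
              [1.0, 8.0, 4.0]])
print(quantile_normalise(X, target=np.array([0.0, 0.5, 1.0])))
```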
Learning to Compute Word Embeddings On the Fly | cs.LG cs.CL | Words in natural language follow a Zipfian distribution whereby some words
are frequent but most are rare. Learning representations for words in the "long
tail" of this distribution requires enormous amounts of data. Representations
of rare words trained directly on end tasks are usually poor, requiring us to
pre-train embeddings on external data, or treat all rare words as
out-of-vocabulary words with a unique representation. We provide a method for
predicting embeddings of rare words on the fly from small amounts of auxiliary
data with a network trained end-to-end for the downstream task. We show that
this improves results against baselines where embeddings are trained on the end
task for reading comprehension, recognizing textual entailment and language
modeling.
| Dzmitry Bahdanau, Tom Bosc, Stanis{\l}aw Jastrz\k{e}bski, Edward
Grefenstette, Pascal Vincent, Yoshua Bengio | null | 1706.00286 | null | null |
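The mechanism can be illustrated with a deliberately simple composer: a rare word missing from the embedding table gets a representation predicted from auxiliary data, here a dictionary definition. The mean-pooling composer and all names below are stand-ins; the paper trains the predictor end-to-end with the downstream task network.

```python
import numpy as np

def embedding_on_the_fly(word, emb, definitions, dim=4):
    if word in emb:
        return emb[word]                      # frequent word: use the table
    tokens = [t for t in definitions.get(word, "").split() if t in emb]
    if not tokens:
        return np.zeros(dim)                  # true OOV fallback
    return np.mean([emb[t] for t in tokens], axis=0)

emb = {"small": np.ones(4), "dog": np.full(4, 2.0)}
definitions = {"chihuahua": "small dog"}
print(embedding_on_the_fly("chihuahua", emb, definitions))
```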
Transfer Learning for Speech Recognition on a Budget | cs.LG cs.CL cs.NE stat.ML | End-to-end training of automated speech recognition (ASR) systems requires
massive data and compute resources. We explore transfer learning based on model
adaptation as an approach for training ASR models under constrained GPU memory,
throughput and training data. We conduct several systematic experiments
adapting a Wav2Letter convolutional neural network originally trained for
English ASR to the German language. We show that this technique allows faster
training on consumer-grade resources while requiring less training data in
order to achieve the same accuracy, thereby lowering the cost of training ASR
models in other languages. Model introspection revealed that small adaptations
to the network's weights were sufficient for good performance, especially for
inner layers.
| Julius Kunze, Louis Kirsch, Ilia Kurenkov, Andreas Krug, Jens
Johannsmeier and Sebastian Stober | null | 1706.0029 | null | null |
Discriminative k-shot learning using probabilistic models | stat.ML cs.LG | This paper introduces a probabilistic framework for k-shot image
classification. The goal is to generalise from an initial large-scale
classification task to a separate task comprising new classes and small numbers
of examples. The new approach not only leverages the feature-based
representation learned by a neural network from the initial task
(representational transfer), but also information about the classes (concept
transfer). The concept information is encapsulated in a probabilistic model for
the final layer weights of the neural network which acts as a prior for
probabilistic k-shot learning. We show that even a simple probabilistic model
achieves state-of-the-art results on a standard k-shot learning dataset by a large
margin. Moreover, it is able to accurately model uncertainty, leading to well
calibrated classifiers, and is easily extensible and flexible, unlike many
recent approaches to k-shot learning.
| Matthias Bauer, Mateo Rojas-Carulla, Jakub Bart{\l}omiej
\'Swi\k{a}tkowski, Bernhard Sch\"olkopf, Richard E. Turner | null | 1706.00326 | null | null |
On the stable recovery of deep structured linear networks under sparsity
constraints | math.OC cs.AI cs.LG math.ST stat.TH | We consider a deep structured linear network under sparsity constraints. We
study sharp conditions guaranteeing the stability of the optimal parameters
defining the network. More precisely, we provide sharp conditions on the
network architecture and the sample under which the error on the parameters
defining the network scales linearly with the reconstruction error (i.e. the
risk). Therefore, under these conditions, the weights obtained with a
successful algorithm are well defined and only depend on the architecture of
the network and the sample. The features in the latent spaces are stably
defined. The stability property is required in order to interpret the features
defined in the latent spaces. It can also lead to a guarantee on the
statistical risk. This is what motivates this study. The analysis is based on
the recently proposed Tensorial Lifting. The particularity of this paper is to
consider a sparsity prior. This leads to a better stability constant. As an
illustration, we detail the analysis and provide sharp stability guarantees for
convolutional linear network under sparsity prior. In this analysis, we
distinguish the role of the network architecture and the sample input. This
highlights the requirements on the data in connection to parameter stability.
| Francois Malgouyres (IMT) | null | 1706.00342 | null | null |
Discovering Discrete Latent Topics with Neural Variational Inference | cs.CL cs.AI cs.IR cs.LG | Topic models have been widely explored as probabilistic generative models of
documents. Traditional inference methods have sought closed-form derivations
for updating the models, however as the expressiveness of these models grows,
so does the difficulty of performing fast and accurate inference over their
parameters. This paper presents alternative neural approaches to topic
modelling by providing parameterisable distributions over topics which permit
training by backpropagation in the framework of neural variational inference.
In addition, with the help of a stick-breaking construction, we propose a
recurrent network that is able to discover a notionally unbounded number of
topics, analogous to Bayesian non-parametric topic models. Experimental results
on the MXM Song Lyrics, 20NewsGroups and Reuters News datasets demonstrate the
effectiveness and efficiency of these neural topic models.
| Yishu Miao, Edward Grefenstette, Phil Blunsom | null | 1706.00359 | null | null |
Semantic Specialisation of Distributional Word Vector Spaces using
Monolingual and Cross-Lingual Constraints | cs.CL cs.AI cs.LG | We present Attract-Repel, an algorithm for improving the semantic quality of
word vectors by injecting constraints extracted from lexical resources.
Attract-Repel facilitates the use of constraints from mono- and cross-lingual
resources, yielding semantically specialised cross-lingual vector spaces. Our
evaluation shows that the method can make use of existing cross-lingual
lexicons to construct high-quality vector spaces for a plethora of different
languages, facilitating semantic transfer from high- to lower-resource ones.
The effectiveness of our approach is demonstrated with state-of-the-art results
on semantic similarity datasets in six languages. We next show that
Attract-Repel-specialised vectors boost performance in the downstream task of
dialogue state tracking (DST) across multiple languages. Finally, we show that
cross-lingual vector spaces produced by our algorithm facilitate the training
of multilingual DST models, which brings further performance improvements.
| Nikola Mrk\v{s}i\'c, Ivan Vuli\'c, Diarmuid \'O S\'eaghdha, Ira
Leviant, Roi Reichart, Milica Ga\v{s}i\'c, Anna Korhonen and Steve Young | null | 1706.00374 | null | null |
Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient
Estimation for Deep Reinforcement Learning | cs.LG cs.AI cs.RO | Off-policy model-free deep reinforcement learning methods using previously
collected data can improve sample efficiency over on-policy policy gradient
techniques. On the other hand, on-policy algorithms are often more stable and
easier to use. This paper examines, both theoretically and empirically,
approaches to merging on- and off-policy updates for deep reinforcement
learning. Theoretical results show that off-policy updates with a value
function estimator can be interpolated with on-policy policy gradient updates
whilst still satisfying performance bounds. Our analysis uses control variate
methods to produce a family of policy gradient algorithms, with several
recently proposed algorithms being special cases of this family. We then
provide an empirical comparison of these techniques with the remaining
algorithmic details fixed, and show how different mixing of off-policy gradient
estimates with on-policy samples contribute to improvements in empirical
performance. The final algorithm provides a generalization and unification of
existing deep policy gradient techniques, has theoretical guarantees on the
bias introduced by off-policy updates, and improves on the state-of-the-art
model-free deep RL methods on a number of OpenAI Gym continuous control
benchmarks.
| Shixiang Gu and Timothy Lillicrap and Zoubin Ghahramani and Richard E.
Turner and Bernhard Sch\"olkopf and Sergey Levine | null | 1706.00387 | null | null |
Learning Disentangled Representations with Semi-Supervised Deep
Generative Models | stat.ML cs.AI cs.LG | Variational autoencoders (VAEs) learn representations of data by jointly
training a probabilistic encoder and decoder network. Typically these models
encode all features of the data into a single variable. Here we are interested
in learning disentangled representations that encode distinct aspects of the
data into separate variables. We propose to learn such representations using
model architectures that generalise from standard VAEs, employing a general
graphical model structure in the encoder and decoder. This allows us to train
partially-specified models that make relatively strong assumptions about a
subset of interpretable variables and rely on the flexibility of neural
networks to learn representations for the remaining variables. We further
define a general objective for semi-supervised learning in this model class,
which can be approximated using an importance sampling procedure. We evaluate
our framework's ability to learn disentangled representations, both by
qualitative exploration of its generative capacity, and quantitative evaluation
of its discriminative ability on a variety of models and datasets.
| N. Siddharth, Brooks Paige, Jan-Willem van de Meent, Alban Desmaison,
Noah D. Goodman, Pushmeet Kohli, Frank Wood, Philip H.S. Torr | null | 1706.004 | null | null |
Tensor Contraction Layers for Parsimonious Deep Nets | cs.LG | Tensors offer a natural representation for many kinds of data frequently
encountered in machine learning. Images, for example, are naturally represented
as third order tensors, where the modes correspond to height, width, and
channels. Tensor methods are noted for their ability to discover
multi-dimensional dependencies, and tensor decompositions in particular have
been used to produce compact low-rank approximations of data. In this paper, we
explore the use of tensor contractions as neural network layers and investigate
several ways to apply them to activation tensors. Specifically, we propose the
Tensor Contraction Layer (TCL), the first attempt to incorporate tensor
contractions as end-to-end trainable neural network layers. Applied to existing
networks, TCLs reduce the dimensionality of the activation tensors and thus the
number of model parameters. We evaluate the TCL on the task of image
recognition, augmenting two popular networks (AlexNet, VGG). The resulting
models are trainable end-to-end. Applying the TCL to the task of image
recognition, using the CIFAR100 and ImageNet datasets, we evaluate the effect
of parameter reduction via tensor contraction on performance. We demonstrate
significant model compression without significant impact on the accuracy and,
in some cases, improved performance.
| Jean Kossaifi, Aran Khanna, Zachary C. Lipton, Tommaso Furlanello and
Anima Anandkumar | null | 1706.00439 | null | null |
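A single TCL is essentially a Tucker-style contraction of the activation tensor along each non-batch mode, which the sketch below expresses with einsum. Shapes and factor sizes are illustrative assumptions; in the paper the factor matrices are trained end-to-end with the rest of the network.

```python
import numpy as np

def tensor_contraction_layer(X, W1, W2, W3):
    # X: (batch, H, W, C); factors map H->H', W->W', C->C'.
    return np.einsum('bhwc,hi,wj,ck->bijk', X, W1, W2, W3)

rng = np.random.default_rng(0)
X = rng.normal(size=(2, 8, 8, 16))
out = tensor_contraction_layer(X, rng.random((8, 4)),
                                  rng.random((8, 4)),
                                  rng.random((16, 8)))
print(out.shape)   # (2, 4, 4, 8): fewer activations, fewer downstream params
```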
Deep Learning: A Bayesian Perspective | stat.ML cs.LG stat.ME | Deep learning is a form of machine learning for nonlinear high dimensional
pattern matching and prediction. By taking a Bayesian probabilistic
perspective, we provide a number of insights into more efficient algorithms for
optimisation and hyper-parameter tuning. Traditional high-dimensional data
reduction techniques, such as principal component analysis (PCA), partial least
squares (PLS), reduced rank regression (RRR), projection pursuit regression
(PPR) are all shown to be shallow learners. Their deep learning counterparts
exploit multiple deep layers of data reduction which provide predictive
performance gains. Stochastic gradient descent (SGD) training optimisation and
Dropout (DO) regularization provide estimation and variable selection. Bayesian
regularization is central to finding weights and connections in networks to
optimize the predictive bias-variance trade-off. To illustrate our methodology,
we provide an analysis of international bookings on Airbnb. Finally, we
conclude with directions for future research.
| Nicholas Polson and Vadim Sokolov | 10.1214/17-BA1082 | 1706.00473 | null | null |
The Mixing method: low-rank coordinate descent for semidefinite
programming with diagonal constraints | math.OC cs.LG stat.ML | In this paper, we propose a low-rank coordinate descent approach to
structured semidefinite programming with diagonal constraints. The approach,
which we call the Mixing method, is extremely simple to implement, has no free
parameters, and typically attains an order of magnitude or better improvement
in optimization performance over the current state of the art. We show that the
method is strictly decreasing, converges to a critical point, and further that
for sufficient rank all non-optimal critical points are unstable. Moreover, we
prove that, with a suitable step size, the Mixing method converges to the
global optimum of the semidefinite program almost surely at a locally linear
rate under random
initialization. This is the first low-rank semidefinite programming method that
has been shown to achieve a global optimum on the spherical manifold without
assumption. We apply our algorithm to two related domains: solving the maximum
cut semidefinite relaxation, and solving a maximum satisfiability relaxation
(we also briefly consider additional applications such as learning word
embeddings). In all settings, we demonstrate substantial improvement over the
existing state of the art along various dimensions, and in total, this work
expands the scope and scale of problems that can be solved using semidefinite
programming methods.
| Po-Wei Wang, Wei-Cheng Chang, J. Zico Kolter | null | 1706.00476 | null | null |
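The coordinate update itself is short enough to sketch. For min <C, V^T V> with unit-norm columns of V (the MAXCUT-style SDP with diagonal constraints), the closed-form block minimiser sets each column to the normalised negative combination of its neighbours; the rank k and random initialisation follow the low-rank framing in the abstract, while details such as the step size analysis are omitted.

```python
import numpy as np

def mixing_method(C, k, iters=100, rng=np.random.default_rng(0)):
    n = C.shape[0]
    V = rng.normal(size=(k, n))
    V /= np.linalg.norm(V, axis=0)            # feasible: unit-norm columns
    for _ in range(iters):
        for i in range(n):
            g = V @ C[:, i] - C[i, i] * V[:, i]   # sum_{j != i} C_ij v_j
            nrm = np.linalg.norm(g)
            if nrm > 0:
                V[:, i] = -g / nrm            # closed-form block update
    return V, np.sum(C * (V.T @ V))           # objective <C, V^T V>

C = np.array([[0.0, 1.0, -1.0],
              [1.0, 0.0, 1.0],
              [-1.0, 1.0, 0.0]])
print(round(mixing_method(C, k=2)[1], 3))
```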
Personalized Pancreatic Tumor Growth Prediction via Group Learning | cs.CV cs.LG | Tumor growth prediction, a highly challenging task, has long been viewed as a
mathematical modeling problem, where the tumor growth pattern is personalized
based on imaging and clinical data of a target patient. Though mathematical
models yield promising results, their prediction accuracy may be limited by the
absence of population trend data and personalized clinical characteristics. In
this paper, we propose a statistical group learning approach to predict the
tumor growth pattern that incorporates both the population trend and
personalized data, in order to discover high-level features from multimodal
imaging data. A deep convolutional neural network approach is developed to
model the voxel-wise spatio-temporal tumor progression. The deep features are
combined with the time intervals and the clinical factors to feed a process of
feature selection. Our predictive model is pretrained on a group data set and
personalized on the target patient data to estimate the future spatio-temporal
progression of the patient's tumor. Multimodal imaging data at multiple time
points are used in the learning, personalization and inference stages. Our
method achieves a Dice similarity coefficient (DSC) of 86.8% +- 3.6% and an
RVD of 7.9% +- 5.4% on a pancreatic tumor data set, outperforming the DSC of
84.4% +- 4.0% and RVD of 13.9% +- 9.8% obtained by a previous
state-of-the-art model-based method.
| Ling Zhang, Le Lu, Ronald M. Summers, Electron Kebebew, Jianhua Yao | null | 1706.00493 | null | null |
Dynamic Stripes: Exploiting the Dynamic Precision Requirements of
Activation Values in Neural Networks | cs.NE cs.LG | Stripes is a Deep Neural Network (DNN) accelerator that uses bit-serial
computation to offer performance that is proportional to the fixed-point
precision of the activation values. The fixed-point precisions are determined a
priori using profiling and are selected at a per layer granularity. This paper
presents Dynamic Stripes, an extension to Stripes that detects precision
variance at runtime and at a finer granularity. This extra level of precision
reduction increases performance by 41% over Stripes.
| Alberto Delmas, Patrick Judd, Sayeh Sharify, Andreas Moshovos | null | 1706.00504 | null | null |
Discriminative conditional restricted Boltzmann machine for discrete
choice and latent variable modelling | cs.LG | Conventional methods of estimating latent behaviour generally use attitudinal
questions, which are subjective, and these survey questions may not always be
available. We hypothesize that an alternative approach to latent variable
estimation can be taken through undirected graphical models, for instance
non-parametric artificial neural networks. In this study, we explore the use of
generative non-parametric modelling methods to estimate latent variables from
prior choice distribution without the conventional use of measurement
indicators. A restricted Boltzmann machine is used to represent latent
behaviour factors by analyzing the relationship information between the
observed choices and explanatory variables. The algorithm is adapted for latent
behaviour analysis in discrete choice scenario and we use a graphical approach
to evaluate and understand the semantic meaning from estimated parameter vector
values. We illustrate our methodology on a financial instrument choice dataset
and perform statistical analysis on parameter sensitivity and stability. Our
findings show that, through non-parametric statistical tests, we can extract
useful latent information on the behaviour of latent constructs with machine
learning methods, and that these constructs have a strong and significant
influence on the choice process. Furthermore, our modelling framework shows
robustness to input
variability through sampling and validation.
| Melvin Wong and Bilal Farooq and Guillaume-Alexandre Bilodeau | 10.1016/j.jocm.2017.11.003 | 1706.00505 | null | null |
CATERPILLAR: Coarse Grain Reconfigurable Architecture for Accelerating
the Training of Deep Neural Networks | cs.DC cs.LG cs.NE | Accelerating the inference of a trained DNN is a well-studied subject. In
this paper we switch the focus to the training of DNNs. The training phase is
compute intensive, demands complicated data communication, and contains
multiple levels of data dependencies and parallelism. This paper presents an
algorithm/architecture space exploration of efficient accelerators to achieve
better network convergence rates and higher energy efficiency for training
DNNs. We further demonstrate that an architecture with hierarchical support for
collective communication semantics provides flexibility in training various
networks performing both stochastic and batched gradient descent based
techniques. Our results suggest that smaller networks favor non-batched
techniques while performance for larger networks is higher using batched
operations. At 45nm technology, CATERPILLAR achieves performance efficiencies
of 177 GFLOPS/W at over 80% utilization for SGD training on small networks and
211 GFLOPS/W at over 90% utilization for pipelined SGD/CP training on larger
networks using a total area of 103.2 mm$^2$ and 178.9 mm$^2$ respectively.
| Yuanfang Li and Ardavan Pedram | null | 1706.00517 | null | null |
PixelGAN Autoencoders | cs.LG | In this paper, we describe the "PixelGAN autoencoder", a generative
autoencoder in which the generative path is a convolutional autoregressive
neural network on pixels (PixelCNN) that is conditioned on a latent code, and
the recognition path uses a generative adversarial network (GAN) to impose a
prior distribution on the latent code. We show that different priors result in
different decompositions of information between the latent code and the
autoregressive decoder. For example, by imposing a Gaussian distribution as the
prior, we can achieve a global vs. local decomposition, or by imposing a
categorical distribution as the prior, we can disentangle the style and content
information of images in an unsupervised fashion. We further show how the
PixelGAN autoencoder with a categorical prior can be directly used in
semi-supervised settings and achieve competitive semi-supervised classification
results on the MNIST, SVHN and NORB datasets.
| Alireza Makhzani, Brendan Frey | null | 1706.00531 | null | null |
Bias-Variance Tradeoff of Graph Laplacian Regularizer | stat.ML cs.LG cs.SI | This paper presents a bias-variance tradeoff of graph Laplacian regularizer,
which is widely used in graph signal processing and semi-supervised learning
tasks. The scaling law of the optimal regularization parameter is specified in
terms of the spectral graph properties and a novel signal-to-noise ratio
parameter, which suggests selecting a mediocre regularization parameter is
often suboptimal. The analysis is applied to three applications, including
random, band-limited, and multiple-sampled graph signals. Experiments on
synthetic and real-world graphs demonstrate near-optimal performance of the
established analysis.
| Pin-Yu Chen and Sijia Liu | 10.1109/LSP.2017.2712141 | 1706.00544 | null | null |
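The estimator under analysis has a convenient closed form, which makes the role of the regularization parameter explicit: x* = argmin_x ||y - x||^2 + gamma x^T L x = (I + gamma L)^{-1} y, with larger gamma trading variance for bias. The graph and signal below are illustrative.

```python
import numpy as np

def laplacian_denoise(y, L, gamma):
    # Closed-form solution of min_x ||y - x||^2 + gamma * x^T L x.
    return np.linalg.solve(np.eye(len(y)) + gamma * L, y)

# Path graph on 4 nodes: L = D - A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
y = np.array([1.0, 0.2, 0.9, 0.1])        # noisy signal on the graph
print(np.round(laplacian_denoise(y, L, gamma=0.5), 3))
```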
On Unifying Deep Generative Models | cs.LG stat.ML | Deep generative models have achieved impressive success in recent years.
Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), as
emerging families for generative model learning, have largely been considered
as two distinct paradigms and received extensive independent studies
respectively. This paper aims to establish formal connections between GANs and
VAEs through a new formulation of them. We interpret sample generation in GANs
as performing posterior inference, and show that GANs and VAEs involve
minimizing KL divergences of respective posterior and inference distributions
with opposite directions, extending the two learning phases of classic
wake-sleep algorithm, respectively. The unified view provides a powerful tool
to analyze a diverse set of existing model variants, and enables transferring
techniques across research lines in a principled way. For example, we apply
the importance weighting method from the VAE literature for improved GAN
learning, and
enhance VAEs with an adversarial mechanism that leverages generated samples.
Experiments show generality and effectiveness of the transferred techniques.
| Zhiting Hu, Zichao Yang, Ruslan Salakhutdinov, Eric P. Xing | null | 1706.0055 | null | null |
Learning-based Surgical Workflow Detection from Intra-Operative Signals | cs.LG | A modern operating room (OR) provides a plethora of advanced medical devices.
In order to better facilitate the information offered by them, they need to
automatically react to the intra-operative context. To this end, the progress
of the surgical workflow must be detected and interpreted, so that the current
status can be given in machine-readable form. In this work, Random Forests (RF)
and Hidden Markov Models (HMM) are compared and combined to detect the surgical
workflow phase of a laparoscopic cholecystectomy. Various combinations of data
were tested, from using only raw sensor data to filtered and augmented
datasets. Achieved accuracies ranged from 64% to 72% for the RF approach, and
from 80% to 82% for the combination of RF and HMM.
| Ralf Stauder, Erg\"un Kayis, Nassir Navab | null | 1706.00587 | null | null |
Towards Robust Detection of Adversarial Examples | cs.LG | Although the recent progress is substantial, deep learning methods can be
vulnerable to the maliciously generated adversarial examples. In this paper, we
present a novel training procedure and a thresholding test strategy, towards
robust detection of adversarial examples. In training, we propose to minimize
the reverse cross-entropy (RCE), which encourages a deep network to learn
latent representations that better distinguish adversarial examples from normal
ones. In testing, we propose to use a thresholding strategy as the detector to
filter out adversarial examples for reliable predictions. Our method is simple
to implement using standard algorithms, with little extra training cost
compared to the common cross-entropy minimization. We apply our method to
defend various attacking methods on the widely used MNIST and CIFAR-10
datasets, and achieve significant improvements on robust predictions under all
the threat models in the adversarial setting.
| Tianyu Pang, Chao Du, Yinpeng Dong, Jun Zhu | null | 1706.00633 | null | null |
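As we read the abstract, the RCE term replaces the usual one-hot target with a "reversed" label vector that is uniform over the K-1 wrong classes and zero on the true class. The sketch below implements that loss term only, leaving out the full training procedure and the thresholding detector; it is our hedged reading, not the paper's code.

```python
import numpy as np

def reverse_cross_entropy(probs, y):
    K = probs.shape[-1]
    r = np.full(K, 1.0 / (K - 1))      # uniform mass on the wrong classes
    r[y] = 0.0                         # zero mass on the true class
    return -np.sum(r * np.log(probs + 1e-12))

p = np.array([0.7, 0.1, 0.1, 0.1])     # softmax output for a 4-class problem
print(round(reverse_cross_entropy(p, y=0), 3))
```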
Dataflow Matrix Machines as a Model of Computations with Linear Streams | cs.NE cs.LG cs.PL | We overview dataflow matrix machines as a Turing complete generalization of
recurrent neural networks and as a programming platform. We describe a vector
space of finite prefix trees with numerical leaves, which allows us to combine
the expressive power of dataflow matrix machines with the simplicity of traditional
recurrent neural networks.
| Michael Bukatin, Jon Anthony | null | 1706.00648 | null | null |
Weight Sharing is Crucial to Successful Optimization | cs.LG | Exploiting the great expressive power of Deep Neural Network architectures
relies on the ability to train them. While current theoretical work mostly
provides results showing the hardness of this task, empirical evidence usually
differs from this line, with success stories in abundance. A strong position
among empirically successful architectures is captured by networks where
extensive weight sharing is used, either by Convolutional or Recurrent layers.
Additionally, characterizing specific aspects of different tasks, making them
"harder" or "easier", is an interesting direction explored both theoretically
and empirically. We consider a family of ConvNet architectures, and prove that
weight sharing can be crucial, from an optimization point of view. We explore
different notions of the frequency of the target function, proving that the
target function must have some low-frequency components. This necessity is not
sufficient - only with weight sharing can it be exploited, thus theoretically
separating architectures that use it from those that do not. Our theoretical
results are aligned with empirical experiments in an even more general
setting, suggesting that examining the interplay of these aspects in broader
families of tasks is a viable direction.
| Shai Shalev-Shwartz, Ohad Shamir, Shaked Shammah | null | 1706.00687 | null | null |
Convolutional Neural Networks for Medical Image Analysis: Full Training
or Fine Tuning? | cs.CV cs.LG | Training a deep convolutional neural network (CNN) from scratch is difficult
because it requires a large amount of labeled training data and a great deal of
expertise to ensure proper convergence. A promising alternative is to fine-tune
a CNN that has been pre-trained using, for instance, a large set of labeled
natural images. However, the substantial differences between natural and
medical images may advise against such knowledge transfer. In this paper, we
seek to answer the following central question in the context of medical image
analysis: \emph{Can the use of pre-trained deep CNNs with sufficient
fine-tuning eliminate the need for training a deep CNN from scratch?} To
address this question, we considered 4 distinct medical imaging applications in
3 specialties (radiology, cardiology, and gastroenterology) involving
classification, detection, and segmentation from 3 different imaging
modalities, and investigated how the performance of deep CNNs trained from
scratch compared with the pre-trained CNNs fine-tuned in a layer-wise manner.
Our experiments consistently demonstrated that (1) the use of a pre-trained CNN
with adequate fine-tuning outperformed or, in the worst case, performed as well
as a CNN trained from scratch; (2) fine-tuned CNNs were more robust to the size
of training sets than CNNs trained from scratch; (3) neither shallow tuning nor
deep tuning was the optimal choice for a particular application; and (4) our
layer-wise fine-tuning scheme could offer a practical way to reach the best
performance for the application at hand based on the amount of available data.
| Nima Tajbakhsh, Jae Y. Shin, Suryakanth R. Gurudu, R. Todd Hurst,
Christopher B. Kendall, Michael B. Gotway, and Jianming Liang | 10.1109/TMI.2016.2535302 | 1706.00712 | null | null |
Automating Carotid Intima-Media Thickness Video Interpretation with
Convolutional Neural Networks | cs.CV cs.LG | Cardiovascular disease (CVD) is the leading cause of mortality yet is largely
preventable, but the key to prevention is to identify at-risk individuals
before adverse events. For predicting individual CVD risk, carotid intima-media
thickness (CIMT), a noninvasive ultrasound method, has proven to be valuable,
offering several advantages over CT coronary artery calcium score. However,
each CIMT examination includes several ultrasound videos, and interpreting each
of these CIMT videos involves three operations: (1) select three end-diastolic
ultrasound frames (EUF) in the video, (2) localize a region of interest (ROI)
in each selected frame, and (3) trace the lumen-intima interface and the
media-adventitia interface in each ROI to measure CIMT. These operations are
tedious, laborious, and time consuming, a serious limitation that hinders the
widespread utilization of CIMT in clinical practice. To overcome this
limitation, this paper presents a new system to automate CIMT video
interpretation. Our extensive experiments demonstrate that the suggested system
significantly outperforms the state-of-the-art methods. The superior
performance is attributable to our unified framework based on convolutional
neural networks (CNNs) coupled with our informative image representation and
effective post-processing of the CNN outputs, which are uniquely designed for
each of the above three operations.
| Jae Y. Shin, Nima Tajbakhsh, R. Todd Hurst, Christopher B. Kendall,
and Jianming Liang | null | 1706.00719 | null | null |
Parameter identification in Markov chain choice models | math.ST cs.LG stat.ML stat.TH | This work studies the parameter identification problem for the Markov chain
choice model of Blanchet, Gallego, and Goyal used in assortment planning. In
this model, the product selected by a customer is determined by a Markov chain
over the products, where the products in the offered assortment are absorbing
states. The underlying parameters of the model were previously shown to be
identifiable from the choice probabilities for the all-products assortment,
together with choice probabilities for assortments of all-but-one products.
Obtaining and estimating choice probabilities for such large assortments is not
desirable in many settings. The main result of this work is that the parameters
may be identified from assortments of sizes two and three, regardless of the
total number of products. The result is obtained via a simple and efficient
parameter recovery algorithm.
| Arushi Gupta, Daniel Hsu | null | 1706.00729 | null | null |
Computationally and statistically efficient learning of causal Bayes
nets using path queries | cs.LG stat.ML | Causal discovery from empirical data is a fundamental problem in many
scientific domains. Observational data allows for identifiability only up to
the Markov equivalence class. In this paper we first propose a polynomial-time
algorithm for learning the exact correctly-oriented structure of the transitive
reduction of any causal Bayesian network with high probability, by using
interventional path queries. Each path query takes as input an origin node and
a target node, and answers whether there is a directed path from the origin to
the target. This is done by intervening on the origin node and observing
samples from the target node. We theoretically show the logarithmic sample
complexity for the size of interventional data per path query, for continuous
and discrete networks. We then show how to learn the transitive edges using
also logarithmic sample complexity (albeit in time exponential in the maximum
number of parents for discrete networks), which allows us to learn the full
network. We further extend our work by reducing the number of interventional
path queries for learning rooted trees. We also provide an analysis of
imperfect interventions.
| Kevin Bello and Jean Honorio | null | 1706.00754 | null | null |
Hyperparameter Optimization: A Spectral Approach | cs.LG cs.AI math.OC stat.ML | We give a simple, fast algorithm for hyperparameter optimization inspired by
techniques from the analysis of Boolean functions. We focus on the
high-dimensional regime where the canonical example is training a neural
network with a large number of hyperparameters. The algorithm --- an iterative
application of compressed sensing techniques for orthogonal polynomials ---
requires only uniform sampling of the hyperparameters and is thus easily
parallelizable.
Experiments for training deep neural networks on Cifar-10 show that compared
to state-of-the-art tools (e.g., Hyperband and Spearmint), our algorithm finds
significantly improved solutions, in some cases better than what is attainable
by hand-tuning. In terms of overall running time (i.e., time required to sample
various settings of hyperparameters plus additional computation time), we are
at least an order of magnitude faster than Hyperband and Bayesian Optimization.
We also outperform Random Search by 8x.
Additionally, our method comes with provable guarantees and yields the first
improvements on the sample complexity of learning decision trees in over two
decades. In particular, we obtain the first quasi-polynomial time algorithm for
learning noisy decision trees with polynomial sample complexity.
| Elad Hazan, Adam Klivans, Yang Yuan | null | 1706.00764 | null | null |
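The compressed-sensing step above can be illustrated with a small stand-in: evaluate a hypothetical objective at uniformly random Boolean configurations, expand the samples in low-degree parity (monomial) features, and use Lasso to recover the few large Fourier coefficients. This is only a sketch of the sparse-recovery idea, not the authors' full procedure; the objective, dimensions, and regularization strength are made up.

```python
import itertools
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m = 10, 300              # binary hyperparameters in {-1,+1}; sample count

def objective(x):           # hypothetical sparse low-degree objective
    return (2.0 * x[0] * x[3] - 1.5 * x[1] + 0.5 * x[2] * x[4]
            + 0.05 * rng.standard_normal())

X = rng.choice([-1.0, 1.0], size=(m, n))   # uniform sampling, parallelizable
y = np.array([objective(x) for x in X])

# degree-<=2 parity features chi_S(x) = prod_{i in S} x_i
subsets = ([()] + [(i,) for i in range(n)]
           + list(itertools.combinations(range(n), 2)))
Phi = np.array([[np.prod(x[list(S)]) if S else 1.0 for S in subsets]
                for x in X])

# Lasso picks out the few influential (combinations of) hyperparameters
model = Lasso(alpha=0.05).fit(Phi, y)
for S, c in zip(subsets, model.coef_):
    if abs(c) > 0.1:
        print(S, round(c, 2))   # should recover (0,3), (1,), (2,4)
```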
Information, Privacy and Stability in Adaptive Data Analysis | cs.LG stat.ML | Traditional statistical theory assumes that the analysis to be performed on a
given data set is selected independently of the data themselves. This
assumption breaks down when data are re-used across analyses and the analysis
to be performed at a given stage depends on the results of earlier stages. Such
dependency can arise when the same data are used by several scientific studies,
or when a single analysis consists of multiple stages.
How can we draw statistically valid conclusions when data are re-used? This
is the focus of a recent and active line of work. At a high level, these
results show that limiting the information revealed by earlier stages of
analysis controls the bias introduced in later stages by adaptivity.
Here we review some known results in this area and highlight the role of
information-theoretic concepts, notably several one-shot notions of mutual
information.
| Adam Smith | null | 1706.0082 | null | null |
Online Dynamic Programming | cs.LG | We consider the problem of repeatedly solving a variant of the same dynamic
programming problem in successive trials. An instance of the type of problems
we consider is to find a good binary search tree in a changing environment. At
the beginning of each trial, the learner probabilistically chooses a tree with
the $n$ keys at the internal nodes and the $n+1$ gaps between keys at the
leaves. The learner is then told the frequencies of the keys and gaps and is
charged by the average search cost for the chosen tree. The problem is online
because the frequencies can change between trials. The goal is to develop
algorithms with the property that their total average search cost (loss) in all
trials is close to the total loss of the best tree chosen in hindsight for all
trials. The challenge, of course, is that the algorithm has to deal with
an exponential number of trees. We develop a general methodology for tackling such
problems for a wide class of dynamic programming algorithms. Our framework
allows us to extend online learning algorithms like Hedge and Component Hedge
to a significantly wider class of combinatorial objects than was possible
before.
| Holakou Rahmanian, Manfred K. Warmuth | null | 1706.00834 | null | null |
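For reference, the record above extends Hedge to combinatorial objects such as trees; a minimal sketch of the vanilla Hedge (exponential weights) algorithm being generalized is given below. The learning rate and loss matrix are illustrative.

```python
import numpy as np

def hedge(loss_matrix, eta=0.5):
    """Vanilla Hedge over K experts. loss_matrix has shape (T, K) with
    losses in [0, 1]; returns the algorithm's expected loss per trial."""
    T, K = loss_matrix.shape
    log_w = np.zeros(K)
    alg_loss = np.empty(T)
    for t in range(T):
        p = np.exp(log_w - log_w.max())      # stable softmax over weights
        p /= p.sum()
        alg_loss[t] = p @ loss_matrix[t]     # expected loss of sampled expert
        log_w -= eta * loss_matrix[t]        # multiplicative weight update
    return alg_loss

rng = np.random.default_rng(1)
losses = rng.random((1000, 5))
losses[:, 2] *= 0.3                          # expert 2 is best in hindsight
print(hedge(losses).sum(), losses.sum(axis=0).min())  # total loss vs. best
```

The paper's contribution replaces this explicit weight vector over experts with an implicit representation over exponentially many structured objects defined by a dynamic program.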
Multiple Kernel Learning and Automatic Subspace Relevance Determination
for High-dimensional Neuroimaging Data | cs.LG q-bio.NC stat.ML | Alzheimer's disease is a major cause of dementia. Its diagnosis requires
accurate biomarkers that are sensitive to disease stages. In this respect, we
regard probabilistic classification as a method of designing a probabilistic
biomarker for disease staging. Probabilistic biomarkers naturally support the
interpretation of decisions and evaluation of uncertainty associated with them.
In this paper, we obtain probabilistic biomarkers via Gaussian Processes.
Gaussian Processes enable probabilistic kernel machines that offer flexible
means to accomplish Multiple Kernel Learning. Exploiting this flexibility, we
propose a new variation of Automatic Relevance Determination and tackle the
challenges of high dimensionality through multiple kernels. Our research
results demonstrate that the Gaussian Process models are competitive with or
better than the well-known Support Vector Machine in terms of classification
performance even in the cases of single kernel learning. Extending the basic
scheme towards the Multiple Kernel Learning, we improve the efficacy of the
Gaussian Process models and their interpretability in terms of the known
anatomical correlates of the disease. For instance, the disease pathology
starts in and around the hippocampus and entorhinal cortex. Through the use of
Gaussian Processes and Multiple Kernel Learning, we have automatically and
efficiently determined those portions of neuroimaging data. In addition to
their interpretability, our Gaussian Process models are competitive with recent
deep learning solutions under similar settings.
| Murat Seckin Ayhan and Vijay Raghavan and Alzheimer's disease
Neuroimaging Initiative | null | 1706.00856 | null | null |
MobiRNN: Efficient Recurrent Neural Network Execution on Mobile GPU | cs.DC cs.LG cs.NE | In this paper, we explore optimizations to run Recurrent Neural Network (RNN)
models locally on mobile devices. RNN models are widely used for Natural
Language Processing, Machine Translation, and other tasks. However, existing
mobile applications that use RNN models do so on the cloud. To address privacy
and efficiency concerns, we show how RNN models can be run locally on mobile
devices. Existing work on porting deep learning models to mobile devices
focuses on Convolutional Neural Networks (CNNs) and cannot be applied directly
to RNN
models. In response, we present MobiRNN, a mobile-specific optimization
framework that implements GPU offloading specifically for mobile GPUs.
Evaluations using an RNN model for activity recognition show that MobiRNN
significantly decreases the latency of running RNN models on phones.
| Qingqing Cao, Niranjan Balasubramanian, Aruna Balasubramanian | null | 1706.00878 | null | null |
Task-specific Word Identification from Short Texts Using a Convolutional
Neural Network | cs.CL cs.IR cs.LG | Task-specific word identification aims to choose the task-related words that
best describe a short text. Existing approaches require well-defined seed words
or lexical dictionaries (e.g., WordNet), which are often unavailable for many
applications such as social discrimination detection and fake review detection.
However, we often have a set of labeled short texts where each short text has a
task-related class label, e.g., discriminatory or non-discriminatory, specified
by users or learned by classification algorithms. In this paper, we focus on
identifying task-specific words and phrases from short texts by exploiting
their class labels rather than using seed words or lexical dictionaries. We
consider the task-specific word and phrase identification as feature learning.
We train a convolutional neural network over a set of labeled texts and use
score vectors to localize the task-specific words and phrases. Experimental
results on sentiment word identification show that our approach significantly
outperforms existing methods. We further conduct two case studies to show the
effectiveness of our approach. One case study on a crawled tweets dataset
demonstrates that our approach can successfully capture the
discrimination-related words/phrases. The other case study on fake review
detection shows that our approach can identify the fake-review words/phrases.
| Shuhan Yuan, Xintao Wu, Yang Xiang | null | 1706.00884 | null | null |
IDK Cascades: Fast Deep Learning by Learning not to Overthink | cs.CV cs.LG | Advances in deep learning have led to substantial increases in prediction
accuracy but have been accompanied by increases in the cost of rendering
predictions. We conjecture that for a majority of real-world inputs, the recent
advances in deep learning have created models that effectively "overthink" on
simple inputs. In this paper, we revisit the classic question of building model
cascades that primarily leverage class asymmetry to reduce cost. We introduce
the "I Don't Know"(IDK) prediction cascades framework, a general framework to
systematically compose a set of pre-trained models to accelerate inference
without a loss in prediction accuracy. We propose two search-based methods for
constructing cascades as well as a new cost-aware objective within this
framework. The proposed IDK cascade framework can be easily adopted in the
existing model serving systems without additional model re-training. We
evaluate the proposed techniques on a range of benchmarks to demonstrate the
effectiveness of the proposed framework.
| Xin Wang, Yujia Luo, Daniel Crankshaw, Alexey Tumanov, Fisher Yu,
Joseph E. Gonzalez | null | 1706.00885 | null | null |
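A sketch of the cascade mechanism described above, reduced to two stages with a softmax-confidence gate; the models, threshold, and inputs are placeholders, and the paper's search over cascade compositions and its cost-aware objective are not shown.

```python
import numpy as np

def idk_cascade(x, fast_model, slow_model, threshold=0.9):
    """Run the cheap model first; escalate to the expensive model only
    when the cheap model is not confident (effectively 'I don't know')."""
    probs = fast_model(x)
    if probs.max() >= threshold:              # confident: answer immediately
        return int(np.argmax(probs)), "fast"
    return int(np.argmax(slow_model(x))), "slow"

# hypothetical stand-ins for pre-trained models
fast = lambda x: (np.array([0.95, 0.03, 0.02]) if x < 0.5
                  else np.array([0.40, 0.35, 0.25]))
slow = lambda x: np.array([0.1, 0.8, 0.1])

print(idk_cascade(0.2, fast, slow))   # (0, 'fast'): cheap model suffices
print(idk_cascade(0.7, fast, slow))   # (1, 'slow'): escalated to the big model
```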
Spectrum-based deep neural networks for fraud detection | cs.CR cs.LG cs.SI | In this paper, we focus on fraud detection on a signed graph with only a
small set of labeled training data. We propose a novel framework that combines
deep neural networks and spectral graph analysis. In particular, we use the
node projection (called the spectral coordinate) in the low-dimensional
spectral space of the graph's adjacency matrix as the input of deep neural
networks.
Spectral coordinates in the spectral space capture the most useful topology
information of the network. Due to the small dimension of spectral coordinates
(compared with the dimension of the adjacency matrix derived from a graph),
training deep neural networks becomes feasible. We develop and evaluate two
neural networks, deep autoencoder and convolutional neural network, in our
fraud detection framework. Experimental results on a real signed graph show
that our spectrum based deep neural networks are effective in fraud detection.
| Shuhan Yuan, Xintao Wu, Jun Li, Aidong Lu | null | 1706.00891 | null | null |
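A sketch of the spectral-coordinate input described above: each node is represented by its entries in the leading eigenvectors of the adjacency matrix, and these low-dimensional rows would be fed to the deep autoencoder or CNN. The tiny signed graph is a placeholder.

```python
import numpy as np

def spectral_coordinates(adj, k=2):
    """Node i's spectral coordinate is (u_1[i], ..., u_k[i]), where u_1..u_k
    are the k leading eigenvectors of the symmetric adjacency matrix."""
    vals, vecs = np.linalg.eigh(adj)          # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]        # indices of the k largest
    return vecs[:, order]

# toy signed graph: +1 trust edges, -1 distrust edges
A = np.array([[ 0,  1,  1, -1],
              [ 1,  0,  1, -1],
              [ 1,  1,  0, -1],
              [-1, -1, -1,  0]], dtype=float)

print(spectral_coordinates(A, k=2))           # one low-dimensional row per node
```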
Learning by Association - A versatile semi-supervised training method
for neural networks | cs.CV cs.LG | In many real-world scenarios, labeled data for a specific machine learning
task is costly to obtain. Semi-supervised training methods make use of
abundantly available unlabeled data and a smaller number of labeled examples.
We propose a new framework for semi-supervised training of deep neural networks
inspired by learning in humans. "Associations" are made from embeddings of
labeled samples to those of unlabeled ones and back. The optimization schedule
encourages correct association cycles that end up at the same class from which
the association was started and penalizes wrong associations ending at a
different class. The implementation is easy to use and can be added to any
existing end-to-end training setup. We demonstrate the capabilities of learning
by association on several data sets and show that it can improve performance on
classification tasks tremendously by making use of additionally available
unlabeled data. In particular, for cases with few labeled data, our training
scheme outperforms the current state of the art on SVHN.
| Philip H\"ausser and Alexander Mordvintsev and Daniel Cremers | null | 1706.00909 | null | null |
Context-aware, Adaptive and Scalable Android Malware Detection through
Online Learning (extended version) | cs.CR cs.LG cs.SE | It is well-known that Android malware constantly evolves so as to evade
detection. This causes the entire malware population to be non-stationary.
Contrary to this fact, most of the prior works on Machine Learning based
Android malware detection have assumed that the distribution of the observed
malware characteristics (i.e., features) does not change over time. In this
work, we address the problem of malware population drift and propose a novel
online learning based framework to detect malware, named CASANDRA
(Context-aware, Adaptive and Scalable ANDRoid mAlware detector). In order to
perform accurate detection, a novel graph kernel that facilitates capturing
apps' security-sensitive behaviors along with their context information from
dependency graphs is proposed. Besides being accurate and scalable, CASANDRA
has specific advantages: i) being adaptive to the evolution in malware features
over time ii) explaining the significant features that led to an app's
classification as being malicious or benign. In a large-scale comparative
analysis, CASANDRA outperforms two state-of-the-art techniques on a benchmark
dataset achieving 99.23% F-measure. When evaluated with more than 87,000 apps
collected in-the-wild, CASANDRA achieves 89.92% accuracy, outperforming
existing techniques by more than 25% in their typical batch learning setting
and more than 7% when they are continuously retrained, while maintaining
comparable efficiency.
| Annamalai Narayanan, Mahinthan Chandramohan, Lihui Chen, Yang Liu | null | 1706.00947 | null | null |
Financial Series Prediction: Comparison Between Precision of Time Series
Models and Machine Learning Methods | cs.LG q-fin.ST | Precise prediction of financial series has long been a difficult problem
because the series are unstable and noisy. Although traditional time series
models like ARIMA and GARCH have been researched and proven to be effective in
prediction, their performance is still far from satisfying. Machine learning,
a research field that has emerged in recent years, has brought about many
substantial improvements in tasks such as regression and classification, and
it is also promising to apply this methodology to financial time series
prediction. In this paper, the prediction precision of traditional time series
models and of mainstream machine learning models, including some
state-of-the-art deep learning models, is compared through experiments on
historical stock index data. The results show that machine learning, as a
modern method, far surpasses traditional models in
precision.
| Xin-Yao Qian | null | 1706.00948 | null | null |
Thompson Sampling for the MNL-Bandit | cs.LG | We consider a sequential subset selection problem under parameter
uncertainty, where at each time step, the decision maker selects a subset of
cardinality $K$ from $N$ possible items (arms), and observes a (bandit)
feedback in the form of the index of one of the items in said subset, or none.
Each item in the index set is ascribed a certain value (reward), and the
feedback is governed by a Multinomial Logit (MNL) choice model whose parameters
are a priori unknown. The objective of the decision maker is to maximize the
expected cumulative rewards over a finite horizon $T$, or alternatively,
minimize the regret relative to an oracle that knows the MNL parameters. We
refer to this as the MNL-Bandit problem. This problem is representative of a
larger family of exploration-exploitation problems that involve a combinatorial
objective, and arise in several important application domains. We present an
approach to adapt Thompson Sampling to this problem and show that it achieves
near-optimal regret as well as attractive numerical performance.
| Shipra Agrawal, Vashist Avadhanula, Vineet Goyal, Assaf Zeevi | null | 1706.00977 | null | null |
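A sketch of the MNL feedback the bandit observes each round: given an offered assortment and attraction weights, the customer choice (or no purchase) is sampled from $P(i \mid S) = v_i / (1 + \sum_{j \in S} v_j)$. The weights below are made up, and the paper's epoch-based Thompson sampling over these unknown parameters is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.array([0.8, 0.5, 0.3, 0.2, 0.1])      # hypothetical MNL attraction weights

def mnl_choice(assortment, v, rng):
    """Sample bandit feedback under the MNL model: the index of the chosen
    item, or None for the 'no purchase' outside option."""
    w = v[assortment]
    denom = 1.0 + w.sum()
    probs = np.append(w / denom, 1.0 / denom)  # last slot = no purchase
    pick = rng.choice(len(assortment) + 1, p=probs)
    return int(assortment[pick]) if pick < len(assortment) else None

S = np.array([0, 2, 4])                        # offered subset of size K = 3
print([mnl_choice(S, v, rng) for _ in range(10)])
```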
Semi-supervised Classification: Cluster and Label Approach using
Particle Swarm Optimization | cs.LG | Classification predicts classes of objects using the knowledge learned during
the training phase. This process requires learning from labeled samples.
However, labeled samples are usually limited. The annotation process is
tedious, expensive, and requires human experts. Meanwhile, unlabeled data is
available and almost free. Semi-supervised learning approaches make use of both
labeled and unlabeled data. This paper introduces a cluster-and-label approach
using PSO for semi-supervised classification. PSO is competitive with
traditional clustering algorithms. A new local best PSO is presented to cluster
the unlabeled data. The available labeled data guides the learning process. The
experiments are conducted using four state-of-the-art datasets from different
domains. The results are compared with Label Propagation, a popular
semi-supervised classifier, and two state-of-the-art supervised classification
models, namely k-nearest neighbors and decision trees. The experiments show the
efficiency of
the proposed model.
| Shahira Shaaban Azab, Mohamed Farouk Abdel Hady, Hesham Ahmed Hefny | 10.5120/ijca2017913013 | 1706.00996 | null | null |
Center of Gravity PSO for Partitioning Clustering | cs.LG | This paper presents the local best model of PSO for partition-based
clustering. The proposed model avoids the drawbacks of gbest PSO for
clustering. The model uses a pre-specified number of clusters K. The LPOSC has
K neighborhoods, each of which represents one of the clusters. The goal of
the particles in each neighborhood is to optimize the position of the centroid
of the cluster. The performance of the proposed algorithm is measured using
the adjusted Rand index. The results are compared with k-means and the global
best model of PSO.
| Shahira Shaaban Azab, Hesham Ahmed Hefny | null | 1706.00997 | null | null |
Swarm Intelligence in Semi-supervised Classification | cs.LG | This paper presents a literature review of swarm intelligence (SI) algorithms
in the area of semi-supervised classification. There are many research papers
on applying swarm intelligence algorithms in the area of machine learning. Some
SI algorithms are applied in the area of ML either on their own or hybridized
with other ML algorithms. SI algorithms are also used for tuning the parameters
of ML algorithms, or as a backbone for ML algorithms. This paper introduces a
brief literature review of applying swarm intelligence algorithms in the field
of semi-supervised learning.
| Shahira Shaaban Azab, Hesham Ahmed Hefny | null | 1706.00998 | null | null |
DeepSF: deep convolutional neural network for mapping protein sequences
to folds | cs.LG q-bio.BM | Motivation
Protein fold recognition is an important problem in structural
bioinformatics. Almost all traditional fold recognition methods use sequence
(homology) comparison to indirectly predict the fold of a target protein based
on the fold of a template protein with known structure, which cannot explain
the relationship between sequence and fold. Only a few methods have been
developed to classify protein sequences into a small number of folds, due to
methodological limitations, and these are not generally useful in practice.
Results
We develop a deep 1D-convolutional neural network (DeepSF) to directly
classify any protein sequence into one of 1195 known folds, which is useful for
both fold recognition and the study of the sequence-structure relationship.
Different
from traditional sequence alignment (comparison) based methods, our method
automatically extracts fold-related features from a protein sequence of any
length and maps them to the fold space. We train and test our method on the
datasets curated from SCOP1.75, yielding a classification accuracy of 80.4%. On
the independent testing dataset curated from SCOP2.06, the classification
accuracy is 77.0%. We compare our method with a top profile-profile alignment
method - HHSearch on hard template-based and template-free modeling targets of
CASP9-12 in terms of fold recognition accuracy. The accuracy of our method is
14.5%-29.1% higher than HHSearch on template-free modeling targets and
4.5%-16.7% higher on hard template-based modeling targets for top 1, 5, and 10
predicted folds. The hidden features extracted from the sequence by our method
are robust against sequence mutation, insertion, deletion and truncation, and can
be used for other protein pattern recognition problems such as protein
clustering, comparison and ranking.
| Jie Hou, Badri Adhikari, Jianlin Cheng | null | 1706.0101 | null | null |
Nonconvex penalties with analytical solutions for one-bit compressive
sensing | cs.LG stat.ML | One-bit measurements widely exist in the real world, and they can be used to
recover sparse signals. This task is known as the problem of learning
halfspaces in learning theory and one-bit compressive sensing (1bit-CS) in
signal processing. In this paper, we propose novel algorithms based on both
convex and nonconvex sparsity-inducing penalties for robust 1bit-CS. We provide
a sufficient condition to verify whether a solution is globally optimal or not.
Then we show that the globally optimal solution for positive homogeneous
penalties can be obtained in two steps: a proximal operator and a normalization
step. For several nonconvex penalties, including minimax concave penalty (MCP),
$\ell_0$ norm, and sorted $\ell_1$ penalty, we provide fast algorithms for
finding the analytical solutions by solving the dual problem. Specifically, our
algorithm is more than $200$ times faster than the existing algorithm for MCP.
Its efficiency is comparable to the algorithm for the $\ell_1$ penalty in time,
while its performance is much better. Among these penalties, the sorted
$\ell_1$ penalty is most robust to noise in different settings.
| Xiaolin Huang and Ming Yan | 10.1016/j.sigpro.2017.10.023 | 1706.01014 | null | null |
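The two-step recipe stated in the abstract (a proximal operator followed by normalization) can be sketched for the $\ell_1$ penalty, whose proximal operator is soft-thresholding. The vector the prox is applied to and the threshold level are illustrative choices here, not the paper's exact formulation.

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def one_bit_cs_l1(Phi, y, lam):
    """Prox-then-normalize recovery from one-bit measurements y = sign(Phi x):
    soft-threshold the back-projection Phi^T y, then project onto the sphere."""
    z = soft_threshold(Phi.T @ y, lam)
    nrm = np.linalg.norm(z)
    return z / nrm if nrm > 0 else z

rng = np.random.default_rng(0)
n, d, s = 500, 100, 5
x = np.zeros(d); x[:s] = rng.standard_normal(s); x /= np.linalg.norm(x)
Phi = rng.standard_normal((n, d))
y = np.sign(Phi @ x)                           # one-bit measurements
x_hat = one_bit_cs_l1(Phi, y, lam=2.0 * np.sqrt(n))
print(np.dot(x, x_hat))                        # cosine near 1 means good recovery
```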
Adaptive Multiple-Arm Identification | cs.LG | We study the problem of selecting $K$ arms with the highest expected rewards
in a stochastic $n$-armed bandit game. This problem has a wide range of
applications, e.g., A/B testing, crowdsourcing, simulation optimization. Our
goal is to develop a PAC algorithm, which, with probability at least
$1-\delta$, identifies a set of $K$ arms with the aggregate regret at most
$\epsilon$. The notion of aggregate regret for multiple-arm identification was
first introduced in \cite{Zhou:14}, which is defined as the difference of the
averaged expected rewards between the selected set of arms and the best $K$
arms. In contrast to \cite{Zhou:14} that only provides instance-independent
sample complexity, we introduce a new hardness parameter for characterizing the
difficulty of any given instance. We further develop two algorithms and
establish the corresponding sample complexity in terms of this hardness
parameter. The derived sample complexity can be significantly smaller than
state-of-the-art results for a large class of instances and matches the
instance-independent lower bound up to a $\log(\epsilon^{-1})$ factor in the
worst case. We also prove a lower bound result showing that the extra
$\log(\epsilon^{-1})$ is necessary for instance-dependent algorithms using the
introduced hardness parameter.
| Jiecao Chen, Xi Chen, Qin Zhang, Yuan Zhou | null | 1706.01026 | null | null |
Nearly Optimal Sampling Algorithms for Combinatorial Pure Exploration | cs.LG cs.DS stat.ML | We study the combinatorial pure exploration problem Best-Set in stochastic
multi-armed bandits. In a Best-Set instance, we are given $n$ arms with unknown
reward distributions, as well as a family $\mathcal{F}$ of feasible subsets
over the arms. Our goal is to identify the feasible subset in $\mathcal{F}$
with the maximum total mean using as few samples as possible. The problem
generalizes the classical best arm identification problem and the top-$k$ arm
identification problem, both of which have attracted significant attention in
recent years. We provide a novel instance-wise lower bound for the sample
complexity of the problem, as well as a nontrivial sampling algorithm, matching
the lower bound up to a factor of $\ln|\mathcal{F}|$. For an important class of
combinatorial families, we also provide polynomial time implementation of the
sampling algorithm, using the equivalence of separation and optimization for
convex program, and approximate Pareto curves in multi-objective optimization.
We also show that the $\ln|\mathcal{F}|$ factor is inevitable in general
through a nontrivial lower bound construction. Our results significantly
improve several previous results for several important combinatorial
constraints, and provide a tighter understanding of the general Best-Set
problem.
We further introduce an even more general problem, formulated in geometric
terms. We are given $n$ Gaussian arms with unknown means and unit variance.
Consider the $n$-dimensional Euclidean space $\mathbb{R}^n$, and a collection
$\mathcal{O}$ of disjoint subsets. Our goal is to determine the subset in
$\mathcal{O}$ that contains the $n$-dimensional vector of the means. The
problem generalizes most pure exploration bandit problems studied in the
literature. We provide the first nearly optimal sample complexity upper and
lower bounds for the problem.
| Lijie Chen, Anupam Gupta, Jian Li, Mingda Qiao, Ruosong Wang | null | 1706.01081 | null | null |
Joint Text Embedding for Personalized Content-based Recommendation | cs.IR cs.CL cs.LG | Learning a good representation of text is key to many recommendation
applications. Examples include news recommendation where texts to be
recommended are constantly published every day. However, most existing
recommendation techniques, such as matrix factorization based methods, mainly
rely on interaction histories to learn representations of items. While latent
factors of items can be learned effectively from user interaction data, in many
cases, such data is not available, especially for newly emerged items.
In this work, we aim to address the problem of personalized recommendation
for completely new items with text information available. We cast the problem
as a personalized text ranking problem and propose a general framework that
combines text embedding with personalized recommendation. Users and textual
content are embedded into latent feature space. The text embedding function can
be learned end-to-end by predicting user interactions with items. To alleviate
sparsity in interaction data, and leverage large amount of text data with
little or no user interactions, we further propose a joint text embedding model
that incorporates unsupervised text embedding with a combination module.
Experimental results show that our model can significantly improve the
effectiveness of recommendation systems on real-world datasets.
| Ting Chen, Liangjie Hong, Yue Shi, Yizhou Sun | null | 1706.01084 | null | null |
Stochastic Reformulations of Linear Systems: Algorithms and Convergence
Theory | math.NA cs.LG cs.NA stat.ML | We develop a family of reformulations of an arbitrary consistent linear
system into a stochastic problem. The reformulations are governed by two
user-defined parameters: a positive definite matrix defining a norm, and an
arbitrary discrete or continuous distribution over random matrices. Our
reformulation has several equivalent interpretations, allowing researchers
from various communities to leverage their domain specific insights. In
particular, our reformulation can be equivalently seen as a stochastic
optimization problem, stochastic linear system, stochastic fixed point problem
and a probabilistic intersection problem. We prove sufficient, and necessary
and sufficient conditions for the reformulation to be exact. Further, we
propose and analyze three stochastic algorithms for solving the reformulated
problem---basic, parallel and accelerated methods---with global linear
convergence rates. The rates can be interpreted as condition numbers of a
matrix which depends on the system matrix and on the reformulation parameters.
This gives rise to a new phenomenon which we call stochastic preconditioning,
and which refers to the problem of finding parameters (matrix and distribution)
leading to a sufficiently small condition number. Our basic method can be
equivalently interpreted as stochastic gradient descent, stochastic Newton
method, stochastic proximal point method, stochastic fixed point method, and
stochastic projection method, with fixed stepsize (relaxation parameter),
applied to the reformulations.
| Peter Richt\'arik and Martin Tak\'a\v{c} | null | 1706.01108 | null | null |
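Specializing the basic method above (norm matrix B = I, and the random-matrix distribution supported on unit coordinate vectors sampled with probability proportional to squared row norms) recovers randomized Kaczmarz, sketched below; the dimensions and iteration count are illustrative.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Each step picks a row i at random and projects the iterate onto
    the hyperplane {x : a_i^T x = b_i} of that single equation."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    p = np.sum(A**2, axis=1)
    p /= p.sum()                               # row sampling probabilities
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=p)
        a = A[i]
        x -= ((a @ x - b[i]) / (a @ a)) * a
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
x_true = rng.standard_normal(50)
b = A @ x_true                                 # consistent linear system
print(np.linalg.norm(randomized_kaczmarz(A, b) - x_true))  # small error
```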
InfiniteBoost: building infinite ensembles with gradient descent | stat.ML cs.LG | In machine learning ensemble methods have demonstrated high accuracy for the
variety of problems in different areas. Two notable ensemble methods widely
used in practice are gradient boosting and random forests. In this paper we
present InfiniteBoost - a novel algorithm, which combines important properties
of these two approaches. The algorithm constructs the ensemble of trees for
which two properties hold: trees of the ensemble incorporate the mistakes made
by others; at the same time, the ensemble can contain an infinite number of
trees without overfitting. The proposed algorithm is evaluated on
the regression, classification, and ranking tasks using large scale, publicly
available datasets.
| Alex Rogozhnikov and Tatiana Likhomanenko | null | 1706.01109 | null | null |
Evolving imputation strategies for missing data in classification
problems with TPOT | cs.LG stat.ML | Missing data has a ubiquitous presence in real-life applications of machine
learning techniques. Imputation methods are algorithms conceived for restoring
missing values in the data, based on other entries in the database. The choice
of the imputation method has an influence on the performance of the machine
learning technique, e.g., it influences the accuracy of the classification
algorithm applied to the data. Therefore, selecting and applying the right
imputation method is important and usually requires a substantial amount of
human intervention. In this paper we propose the use of genetic programming
techniques to search for the right combination of imputation and classification
algorithms. We build our work on the recently introduced Python-based TPOT
library, and incorporate a heterogeneous set of imputation algorithms as part
of the machine learning pipeline search. We show that genetic programming can
automatically find increasingly better pipelines that include the most
effective combinations of imputation methods, feature pre-processing, and
classifiers for a variety of classification problems with missing data.
| Unai Garciarena, Roberto Santana, Alexander Mendiburu | null | 1706.0112 | null | null |
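The search space being evolved above can be illustrated with plain scikit-learn: place an imputer in front of a classifier and search jointly over imputation strategies and model settings. This grid search only mimics the shape of the problem; the authors' genetic programming machinery inside TPOT is not shown, and the dataset, missingness rate, and grid are made up.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.impute import SimpleImputer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
X = X.copy()
X[rng.random(X.shape) < 0.1] = np.nan          # knock out 10% of the entries

pipe = Pipeline([("impute", SimpleImputer()),
                 ("clf", DecisionTreeClassifier(random_state=0))])
grid = {"impute__strategy": ["mean", "median", "most_frequent"],
        "clf__max_depth": [3, 5, None]}

# jointly choose the imputation method and the classifier settings
search = GridSearchCV(pipe, grid, cv=5).fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```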
Deep MIMO Detection | stat.ML cs.IT cs.LG math.IT | In this paper, we consider the use of deep neural networks in the context of
Multiple-Input-Multiple-Output (MIMO) detection. We give a brief introduction
to deep learning and propose a modern neural network architecture suitable for
this detection task. First, we consider the case in which the MIMO channel is
constant, and we learn a detector for a specific system. Next, we consider the
harder case in which the parameters are known yet changing and a single
detector must be learned for all multiple varying channels. We demonstrate the
performance of our deep MIMO detector using numerical simulations in comparison
to competing methods including approximate message passing and semidefinite
relaxation. The results show that deep networks can achieve state-of-the-art
accuracy with significantly lower complexity while providing robustness against
ill conditioned channels and mis-specified noise variance.
| Neev Samuel, Tzvi Diskin and Ami Wiesel | null | 1706.01151 | null | null |
PReP: Path-Based Relevance from a Probabilistic Perspective in
Heterogeneous Information Networks | cs.SI cs.LG | As a powerful representation paradigm for networked and multi-typed data, the
heterogeneous information network (HIN) is ubiquitous. Meanwhile, defining
proper relevance measures has always been a fundamental problem and of great
pragmatic importance for network mining tasks. Inspired by our probabilistic
interpretation of existing path-based relevance measures, we propose to study
HIN relevance from a probabilistic perspective. We also identify, from
real-world data, and propose to model cross-meta-path synergy, which is a
characteristic important for defining path-based HIN relevance and has not been
modeled by existing methods. A generative model is established to derive a
novel path-based relevance measure, which is data-driven and tailored for each
HIN. We develop an inference algorithm to find the maximum a posteriori (MAP)
estimate of the model parameters, which entails non-trivial tricks. Experiments
on two real-world datasets demonstrate the effectiveness of the proposed model
and relevance measure.
| Yu Shi, Po-Wei Chan, Honglei Zhuang, Huan Gui and Jiawei Han | 10.1145/3097983.3097990 | 1706.01177 | null | null |
Inconsistent Node Flattening for Improving Top-down Hierarchical
Classification | cs.LG stat.ML | Large-scale classification of data where classes are structurally organized
in a hierarchy is an important area of research. Top-down approaches that
exploit the hierarchy during the learning and prediction phase are efficient
for large scale hierarchical classification. However, accuracy of top-down
approaches is poor due to error propagation i.e., prediction errors made at
higher levels in the hierarchy cannot be corrected at lower levels. One of the
main reasons behind errors at the higher levels is the presence of inconsistent
nodes that are introduced due to the arbitrary process of creating these
hierarchies by domain experts. In this paper, we propose two different
data-driven approaches (local and global) for hierarchical structure
modification that identifies and flattens inconsistent nodes present within the
hierarchy. Our extensive empirical evaluation of the proposed approaches on
several image and text datasets with varying distribution of features, classes
and training instances per class shows improved classification performance over
competing hierarchical modification approaches. Specifically, we see an
improvement of up to 7% in Macro-F1 score with our approach over the best TD baseline.
SOURCE CODE: http://www.cs.gmu.edu/~mlbio/InconsistentNodeFlattening
| Azad Naik, Huzefa Rangwala | null | 1706.01214 | null | null |
DeepIoT: Compressing Deep Neural Network Structures for Sensing Systems
with a Compressor-Critic Framework | cs.LG cs.NE cs.NI | Recent advances in deep learning motivate the use of deep neural networks in
sensing applications, but their excessive resource needs on constrained
embedded devices remain an important impediment. A recently explored solution
space lies in compressing (approximating or simplifying) deep neural networks
in some manner before use on the device. We propose a new compression solution,
called DeepIoT, that makes two key contributions in that space. First, unlike
current solutions geared for compressing specific types of neural networks,
DeepIoT presents a unified approach that compresses all commonly used deep
learning structures for sensing applications, including fully-connected,
convolutional, and recurrent neural networks, as well as their combinations.
Second, unlike solutions that either sparsify weight matrices or assume linear
structure within weight matrices, DeepIoT compresses neural network structures
into smaller dense matrices by finding the minimum number of non-redundant
hidden elements, such as filters and dimensions required by each layer, while
keeping the performance of sensing applications the same. Importantly, it does
so using an approach that obtains a global view of parameter redundancies,
which is shown to produce superior compression. We conduct experiments with
five different sensing-related tasks on Intel Edison devices. DeepIoT
outperforms all compared baseline algorithms with respect to execution time and
energy consumption by a significant margin. It reduces the size of deep neural
networks by 90% to 98.9%. It is thus able to shorten execution time by 71.4% to
94.5%, and decrease energy consumption by 72.2% to 95.7%. These improvements
are achieved without loss of accuracy. The results underscore the potential of
DeepIoT for advancing the exploitation of deep neural networks on
resource-constrained embedded devices.
| Shuochao Yao, Yiran Zhao, Aston Zhang, Lu Su, Tarek Abdelzaher | null | 1706.01215 | null | null |
Bayesian LSTMs in medicine | stat.ML cs.LG stat.AP | The medical field stands to see significant benefits from the recent advances
in deep learning. Knowing the uncertainty in the decision made by any machine
learning algorithm is of utmost importance for medical practitioners. This
study demonstrates the utility of using Bayesian LSTMs for classification of
medical time series. Four medical time series datasets are used to show the
accuracy improvement Bayesian LSTMs provide over standard LSTMs. Moreover, we
show cherry-picked examples of confident and uncertain classifications of the
medical time series. With simple modifications of the common practice for deep
learning, significant improvements can be made for the medical practitioner and
patient.
| Jos van der Westhuizen and Joan Lasenby | null | 1706.01242 | null | null |
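Classification uncertainty for a "Bayesian LSTM" is commonly obtained via Monte Carlo dropout: keep dropout stochastic at test time and average many forward passes, reading the spread as uncertainty. The PyTorch sketch below follows that common reading with a toy model in place of the paper's networks; whether the paper uses exactly this variational scheme is an assumption.

```python
import torch
import torch.nn as nn

class TinyLSTM(nn.Module):
    def __init__(self, n_features=4, n_hidden=32, n_classes=3, p=0.3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, n_hidden, batch_first=True)
        self.drop = nn.Dropout(p)
        self.head = nn.Linear(n_hidden, n_classes)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(self.drop(out[:, -1]))   # classify the last time step

def mc_dropout_predict(model, x, n_samples=50):
    """Keep dropout stochastic at test time and average softmax outputs;
    the spread across samples is the uncertainty estimate."""
    model.train()                                  # dropout stays active
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)

x = torch.randn(1, 20, 4)                 # one series: 20 steps, 4 features
mean, std = mc_dropout_predict(TinyLSTM(), x)
print(mean, std)                          # high std -> uncertain prediction
```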
Towards Synthesizing Complex Programs from Input-Output Examples | cs.LG cs.AI cs.PL | In recent years, deep learning techniques have been developed to improve the
performance of program synthesis from input-output examples. Albeit its
significant progress, the programs that can be synthesized by state-of-the-art
approaches are still simple in terms of their complexity. In this work, we move
a significant step forward along this direction by proposing a new class of
challenging tasks in the domain of program synthesis from input-output
examples: learning a context-free parser from pairs of input programs and their
parse trees. We show that this class of tasks is much more challenging than
previously studied tasks, and the test accuracy of existing approaches is
almost 0%.
We tackle the challenges by developing three novel techniques inspired by
three novel observations, which reveal the key ingredients of using deep
learning to synthesize a complex program. First, the use of a
non-differentiable machine is the key to effectively restrict the search space.
Thus our proposed approach learns a neural program operating a domain-specific
non-differentiable machine. Second, recursion is the key to achieve
generalizability. Thus, we bake the notion of recursion into the design of our
non-differentiable machine. Third, reinforcement learning is the key to learn
how to operate the non-differentiable machine, but it is also hard to train the
model effectively with existing reinforcement learning algorithms from a cold
boot. We develop a novel two-phase reinforcement learning-based search
algorithm to overcome this issue. In our evaluation, we show that using our
novel approach, neural parsing programs can be learned to achieve 100% test
accuracy on test inputs that are 500x longer than the training samples.
| Xinyun Chen, Chang Liu, Dawn Song | null | 1706.01284 | null | null |
Deep learning evaluation using deep linguistic processing | cs.CL cs.AI cs.CV cs.LG | We discuss problems with the standard approaches to evaluation for tasks like
visual question answering, and argue that artificial data can be used to
address these as a complement to current practice. We demonstrate that with the
help of existing 'deep' linguistic processing technology we are able to create
challenging abstract datasets, which enable us to investigate the language
understanding abilities of multimodal deep learning models in detail, as
compared to a single performance value on a static and monolithic dataset.
| Alexander Kuhnle and Ann Copestake | null | 1706.01322 | null | null |
Event Representations for Automated Story Generation with Deep Neural
Nets | cs.CL cs.AI cs.LG cs.NE | Automated story generation is the problem of automatically selecting a
sequence of events, actions, or words that can be told as a story. We seek to
develop a system that can generate stories by learning everything it needs to
know from textual story corpora. To date, recurrent neural networks that learn
language models at character, word, or sentence levels have had little success
generating coherent stories. We explore the question of event representations
that provide a mid-level of abstraction between words and sentences in order to
retain the semantic information of the original data while minimizing event
sparsity. We present a technique for preprocessing textual story data into
event sequences. We then present a technique for automated story generation
whereby we decompose the problem into the generation of successive events
(event2event) and the generation of natural language sentences from events
(event2sentence). We give empirical results comparing different event
representations and their effects on event successor generation and the
translation of events to natural language.
| Lara J. Martin, Prithviraj Ammanabrolu, Xinyu Wang, William Hancock,
Shruti Singh, Brent Harrison, Mark O. Riedl | 10.1609/aaai.v32i1.11430 | 1706.01331 | null | null |
Yeah, Right, Uh-Huh: A Deep Learning Backchannel Predictor | cs.CL cs.CV cs.HC cs.LG cs.SD | Using supporting backchannel (BC) cues can make human-computer interaction
more social. BCs provide feedback from the listener to the speaker, indicating
that the speaker is still being listened to. BCs can be expressed in different
ways, depending on the modality of the interaction, for example as gestures or
acoustic cues. In this work, we only considered acoustic cues. We are proposing
an approach towards detecting BC opportunities based on acoustic input features
like power and pitch. While other works in the field rely on the use of a
hand-written rule set or specialized features, we made use of artificial neural
networks. They are capable of deriving higher order features from input
features themselves. In our setup, we first used a fully connected feed-forward
network to establish an updated baseline in comparison to our previously
proposed setup. We also extended this setup by the use of Long Short-Term
Memory (LSTM) networks, which have been shown to outperform feed-forward based setups
on various tasks. Our best system achieved an F1-Score of 0.37 using power and
pitch features. Adding linguistic information using word2vec, the score
increased to 0.39.
| Robin Ruede, Markus M\"uller, Sebastian St\"uker, Alex Waibel | null | 1706.0134 | null | null |
Emergence of Invariance and Disentanglement in Deep Representations | cs.LG cs.AI stat.ML | Using established principles from Statistics and Information Theory, we show
that invariance to nuisance factors in a deep neural network is equivalent to
information minimality of the learned representation, and that stacking layers
and injecting noise during training naturally bias the network towards learning
invariant representations. We then decompose the cross-entropy loss used during
training and highlight the presence of an inherent overfitting term. We propose
regularizing the loss by bounding such a term in two equivalent ways: One with
a Kullback-Leibler term, which relates to a PAC-Bayes perspective; the other
using the information in the weights as a measure of complexity of a learned
model, yielding a novel Information Bottleneck for the weights. Finally, we
show that invariance and independence of the components of the representation
learned by the network are bounded above and below by the information in the
weights, and therefore are implicitly optimized during training. The theory
enables us to quantify and predict sharp phase transitions between underfitting
and overfitting of random labels when using our regularized loss, which we
verify in experiments, and sheds light on the relation between the geometry of
the loss function, invariance properties of the learned representation, and
generalization error.
| Alessandro Achille and Stefano Soatto | null | 1706.0135 | null | null |
Automatic Response Assessment in Regions of Language Cortex in Epilepsy
Patients Using ECoG-based Functional Mapping and Machine Learning | q-bio.NC cs.CV cs.LG | Accurate localization of brain regions responsible for language and cognitive
functions in Epilepsy patients should be carefully determined prior to surgery.
Electrocorticography (ECoG)-based Real Time Functional Mapping (RTFM) has been
shown to be a safer alternative to the electrical cortical stimulation mapping
(ESM), which is currently the clinical/gold standard. Conventional methods for
analyzing RTFM signals are based on statistical comparison of signal power at
certain frequency bands. Compared to the gold standard (ESM), they have limited
accuracies when assessing channel responses.
In this study, we address the accuracy limitation of the current RTFM signal
estimation methods by analyzing the full frequency spectrum of the signal and
replacing signal power estimation methods with machine learning algorithms,
specifically random forest (RF), as a proof of concept. We train RF with power
spectral density of the time-series RTFM signal in supervised learning
framework where ground truth labels are obtained from the ESM. Results obtained
from RTFM of six adult patients in a strictly controlled experimental setup
reveal the state of the art detection accuracy of $\approx 78\%$ for the
language comprehension task, an improvement of $23\%$ over the conventional
RTFM estimation method. To the best of our knowledge, this is the first study
exploring the use of machine learning approaches for determining RTFM signal
characteristics, and using the whole-frequency band for better region
localization. Our results demonstrate the feasibility of machine learning based
RTFM signal analysis method over the full spectrum to be a clinical routine in
the near future.
| Harish RaviPrakash, Milena Korostenskaja, Eduardo Castillo, Ki Lee,
James Baumgartner, Ulas Bagci | null | 1706.0138 | null | null |
Sparse Stochastic Bandits | cs.LG | In the classical multi-armed bandit problem, d arms are available to the
decision maker who pulls them sequentially in order to maximize his cumulative
reward. Guarantees can be obtained on a relative quantity called regret, which
scales linearly with d (or with sqrt(d) in the minimax sense). We here consider
the sparse case of this classical problem in the sense that only a small number
of arms, namely s < d, have a positive expected reward. We are able to leverage
this additional assumption to provide an algorithm whose regret scales with s
instead of d. Moreover, we prove that this algorithm is optimal by providing a
matching lower bound - at least for a wide and pertinent range of parameters
that we determine - and by evaluating its performance on simulated data.
| Joon Kwon, Vianney Perchet, Claire Vernade | null | 1706.01383 | null | null |
Multi-Observation Elicitation | cs.LG | We study loss functions that measure the accuracy of a prediction based on
multiple data points simultaneously. To our knowledge, such loss functions have
not been studied before in the area of property elicitation or in machine
learning more broadly. As compared to traditional loss functions that take only
a single data point, these multi-observation loss functions can in some cases
drastically reduce the dimensionality of the hypothesis required. In
elicitation, this corresponds to requiring many fewer reports; in empirical
risk minimization, it corresponds to algorithms on a hypothesis space of much
smaller dimension. We explore some examples of the tradeoff between
dimensionality and number of observations, give some geometric
characterizations and intuition for relating loss functions and the properties
that they elicit, and discuss some implications for both elicitation and
machine-learning contexts.
| Sebastian Casalaina-Martin, Rafael Frongillo, Tom Morgan, Bo Waggoner | null | 1706.01394 | null | null |
Learning Whenever Learning is Possible: Universal Learning under General
Stochastic Processes | stat.ML cs.LG math.PR math.ST stat.TH | This work initiates a general study of learning and generalization without
the i.i.d. assumption, starting from first principles. While the traditional
approach to statistical learning theory typically relies on standard
assumptions from probability theory (e.g., i.i.d. or stationary ergodic), in
this work we are interested in developing a theory of learning based only on
the most fundamental and necessary assumptions implicit in the requirements of
the learning problem itself. We specifically study universally consistent
function learning, where the objective is to obtain low long-run average loss
for any target function, when the data follow a given stochastic process. We
are then interested in the question of whether there exist learning rules
guaranteed to be universally consistent given only the assumption that
universally consistent learning is possible for the given data process. The
reasoning that motivates this criterion emanates from a kind of optimist's
decision theory, and so we refer to such learning rules as being optimistically
universal. We study this question in three natural learning settings:
inductive, self-adaptive, and online. Remarkably, as our strongest positive
result, we find that optimistically universal learning rules do indeed exist in
the self-adaptive learning setting. Establishing this fact requires us to
develop new approaches to the design of learning algorithms. Along the way, we
also identify concise characterizations of the family of processes under which
universally consistent learning is possible in the inductive and self-adaptive
settings. We additionally pose a number of enticing open problems, particularly
for the online learning setting.
| Steve Hanneke | null | 1706.01418 | null | null |
A simple neural network module for relational reasoning | cs.CL cs.LG | Relational reasoning is a central component of generally intelligent
behavior, but has proven difficult for neural networks to learn. In this paper
we describe how to use Relation Networks (RNs) as a simple plug-and-play module
to solve problems that fundamentally hinge on relational reasoning. We tested
RN-augmented networks on three tasks: visual question answering using a
challenging dataset called CLEVR, on which we achieve state-of-the-art,
super-human performance; text-based question answering using the bAbI suite of
tasks; and complex reasoning about dynamic physical systems. Then, using a
curated dataset called Sort-of-CLEVR we show that powerful convolutional
networks do not have a general capacity to solve relational questions, but can
gain this capacity when augmented with RNs. Our work shows how a deep learning
architecture equipped with an RN module can implicitly discover and learn to
reason about entities and their relations.
| Adam Santoro, David Raposo, David G.T. Barrett, Mateusz Malinowski,
Razvan Pascanu, Peter Battaglia, Timothy Lillicrap | null | 1706.01427 | null | null |
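The core of the RN module is the composite function
$\mathrm{RN}(O) = f_\phi\big(\sum_{i,j} g_\theta(o_i, o_j)\big)$ over all pairs
of objects. Below is a minimal PyTorch sketch of that computation; the layer
widths and depths are illustrative assumptions, not the paper's exact
configuration.

```python
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    """Minimal RN sketch: RN(O) = f_phi( sum_{i,j} g_theta(o_i, o_j) ).
    Layer sizes are illustrative, not the paper's configuration."""
    def __init__(self, obj_dim, hidden=256, out_dim=10):
        super().__init__()
        self.g = nn.Sequential(
            nn.Linear(2 * obj_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.f = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, objects):                       # (batch, n, obj_dim)
        b, n, d = objects.shape
        oi = objects.unsqueeze(2).expand(b, n, n, d)  # o_i replicated
        oj = objects.unsqueeze(1).expand(b, n, n, d)  # o_j replicated
        pairs = torch.cat([oi, oj], dim=-1)           # all ordered pairs
        rel = self.g(pairs).sum(dim=(1, 2))           # aggregate relations
        return self.f(rel)
```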
Batched Large-scale Bayesian Optimization in High-dimensional Spaces | stat.ML cs.LG math.OC | Bayesian optimization (BO) has become an effective approach for black-box
function optimization problems when function evaluations are expensive and the
optimum can be achieved within a relatively small number of queries. However,
many cases, such as the ones with high-dimensional inputs, may require a much
larger number of observations for optimization. Despite an abundance of
observations thanks to parallel experiments, current BO techniques have been
limited to merely a few thousand observations. In this paper, we propose
ensemble Bayesian optimization (EBO) to address three current challenges in BO
simultaneously: (1) large-scale observations; (2) high dimensional input
spaces; and (3) selections of batch queries that balance quality and diversity.
The key idea of EBO is to operate on an ensemble of additive Gaussian process
models, each of which possesses a randomized strategy to divide and conquer. We
show unprecedented, previously impossible results of scaling up BO to tens of
thousands of observations within minutes of computation.
| Zi Wang and Clement Gehring and Pushmeet Kohli and Stefanie Jegelka | null | 1706.01445 | null | null |
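A heavily simplified sketch of the ensemble idea only: fit several GP
surrogates on random subsets of the observations, average their UCB
acquisition over a candidate pool, and select a batch greedily with a simple
diversity penalty. EBO's actual additive-GP partitioning and batch-selection
machinery are considerably more sophisticated; every constant and helper below
is an illustrative assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def ensemble_ucb_batch(X, y, candidates, n_models=8, subset=500,
                       batch_size=4, beta=2.0, seed=0):
    """Sketch of ensemble BO: average the UCB acquisition of several GPs
    fit on random data subsets, then pick a diverse batch greedily.
    X, y: observations; candidates: (m, dim) query pool (numpy arrays)."""
    rng = np.random.default_rng(seed)
    acq = np.zeros(len(candidates))
    for _ in range(n_models):
        idx = rng.choice(len(X), size=min(subset, len(X)), replace=False)
        gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
        gp.fit(X[idx], y[idx])
        mu, sd = gp.predict(candidates, return_std=True)
        acq += (mu + beta * sd) / n_models        # averaged UCB
    batch = []
    for _ in range(batch_size):
        i = int(np.argmax(acq))
        batch.append(candidates[i])
        # Penalize candidates near the chosen point to keep the batch diverse.
        dist = np.linalg.norm(candidates - candidates[i], axis=1)
        acq -= np.exp(-dist ** 2)
        acq[i] = -np.inf
    return np.array(batch)
```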
A Joint Model for Question Answering and Question Generation | cs.CL cs.AI cs.LG cs.NE | We propose a generative machine comprehension model that learns jointly to
ask and answer questions based on documents. The proposed model uses a
sequence-to-sequence framework that encodes the document and generates a
question (answer) given an answer (question). Significant improvement in model
performance is observed empirically on the SQuAD corpus, confirming our
hypothesis that the model benefits from jointly learning to perform both tasks.
We believe the joint model's novelty offers a new perspective on machine
comprehension beyond architectural engineering, and serves as a first step
towards autonomous information seeking.
| Tong Wang and Xingdi Yuan and Adam Trischler | null | 1706.01450 | null | null
Stochastic Gradient Monomial Gamma Sampler | stat.ML cs.LG stat.AP | Recent advances in stochastic gradient techniques have made it possible to
estimate posterior distributions from large datasets via Markov Chain Monte
Carlo (MCMC). However, when the target posterior is multimodal, mixing
performance is often poor. This results in inadequate exploration of the
posterior distribution. A framework is proposed to improve the sampling
efficiency of stochastic gradient MCMC, based on Hamiltonian Monte Carlo. A
generalized kinetic function is leveraged, delivering superior stationary
mixing, especially for multimodal distributions. Techniques are also discussed
to overcome the practical issues introduced by this generalization. It is shown
that the proposed approach is better at exploring complex multimodal posterior
distributions, as demonstrated on multiple applications and in comparison with
other stochastic gradient MCMC methods.
| Yizhe Zhang, Changyou Chen, Zhe Gan, Ricardo Henao, Lawrence Carin | null | 1706.01498 | null | null |
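The mechanism can be sketched with a single update step: replacing the
quadratic kinetic energy of standard HMC with a generalized form changes how
momentum drives parameter updates. The monomial kinetic $K(p) = |p|^c / c$
used below (so $\nabla K(p) = \mathrm{sign}(p)\,|p|^{c-1}$) is an assumption
made for illustration; the paper's kinetic family, friction, and noise scaling
differ in detail.

```python
import numpy as np

def sg_generalized_kinetic_step(theta, p, grad_logpost, step=1e-3,
                                friction=1.0, c=1.5, rng=None):
    """One SGHMC-style update with a monomial kinetic K(p) = |p|^c / c
    (illustrative assumption; c = 2 recovers standard quadratic dynamics).
    grad_logpost(theta) returns a stochastic gradient of the log-posterior."""
    rng = rng or np.random.default_rng()
    grad_K = lambda m: np.sign(m) * np.abs(m) ** (c - 1)   # dK/dp
    g = grad_logpost(theta)                                 # minibatch gradient
    noise = np.sqrt(2 * friction * step) * rng.standard_normal(p.shape)
    p = p + step * g - step * friction * grad_K(p) + noise  # momentum update
    theta = theta + step * grad_K(p)                        # position update
    return theta, p
```

Heavier-tailed kinetics (smaller $c$) let the sampler take larger position
steps from small momenta, which is one intuition for improved multimodal
mixing.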
UCB Exploration via Q-Ensembles | cs.LG stat.ML | We show how an ensemble of $Q^*$-functions can be leveraged for more
effective exploration in deep reinforcement learning. We build on
well-established algorithms from the bandit setting, and adapt them to the
$Q$-learning setting. We propose an exploration strategy based on
upper-confidence bounds (UCB). Our experiments show significant gains on the
Atari benchmark.
| Richard Y. Chen, Szymon Sidor, Pieter Abbeel, John Schulman | null | 1706.01502 | null | null |
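The action-selection rule itself is compact: act greedily with respect to the
ensemble mean plus a multiple of the ensemble standard deviation of the
Q-values. A sketch, where `lam` is an assumed exploration coefficient:

```python
import numpy as np

def ucb_action(q_ensemble, state, lam=1.0):
    """UCB exploration over a Q-ensemble: score each action by the
    ensemble mean plus lam times the ensemble standard deviation.
    Each q in q_ensemble maps a state to a vector of action values."""
    qs = np.stack([q(state) for q in q_ensemble])    # (K, n_actions)
    score = qs.mean(axis=0) + lam * qs.std(axis=0)   # upper confidence bound
    return int(np.argmax(score))
```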
Beyond Volume: The Impact of Complex Healthcare Data on the Machine
Learning Pipeline | cs.CY cs.LG stat.ML | From medical charts to national census, healthcare has traditionally operated
under a paper-based paradigm. However, the past decade has marked a long and
arduous transformation bringing healthcare into the digital age. Ranging from
electronic health records, to digitized imaging and laboratory reports, to
public health datasets, healthcare today generates an incredible amount of
digital information. Such a wealth of data presents an exciting opportunity for
integrated machine learning solutions to address problems across multiple
facets of healthcare practice and administration. Unfortunately, the ability to
derive accurate and informative insights requires more than the ability to
execute machine learning models. Rather, a deeper understanding of the data on
which the models are run is imperative for their success. While a significant
effort has been undertaken to develop models able to process the volume of data
obtained during the analysis of millions of digitized patient records, it is
important to remember that volume represents only one aspect of the data. In
fact, drawing on data from an increasingly diverse set of sources, healthcare
data presents an incredibly complex set of attributes that must be accounted
for throughout the machine learning pipeline. This chapter focuses on
highlighting such challenges, and is broken down into three distinct
components, each representing a phase of the pipeline. We begin with attributes
of the data accounted for during preprocessing, then move to considerations
during model building, and end with challenges to the interpretation of model
output. For each component, we present a discussion around data as it relates
to the healthcare domain and offer insight into the challenges each may impose
on the efficiency of machine learning techniques.
| Keith Feldman, Louis Faust, Xian Wu, Chao Huang, and Nitesh V. Chawla | 10.1007/978-3-319-69775-8_9 | 1706.01513 | null | null |
Progressive Boosting for Class Imbalance | cs.LG cs.CV | Pattern recognition applications often suffer from skewed data distributions
between classes, which may vary during operations w.r.t. the design data.
Two-class classification systems designed using skewed data tend to recognize
the majority class better than the minority class of interest. Several
data-level techniques have been proposed to alleviate this issue by up-sampling
minority samples or under-sampling majority samples. However, some informative
samples may be neglected by random under-sampling and adding synthetic positive
samples through up-sampling adds to training complexity. In this paper, a new
ensemble learning algorithm called Progressive Boosting (PBoost) is proposed
that progressively inserts uncorrelated groups of samples into a Boosting
procedure to avoid loss of information while generating a diverse pool of
classifiers. Base classifiers in this ensemble are generated from one iteration
to the next, using subsets from a validation set that grows gradually in size
and imbalance. Consequently, PBoost is more robust to unknown and variable
levels of skew in operational data, and has lower computational complexity
than Boosting ensembles in the literature. In PBoost, a new loss factor is
proposed to
avoid bias of performance towards the negative class. Using this loss factor,
the weight update of samples and classifier contribution in final predictions
are set based on the ability to recognize both classes. Using the proposed loss
factor instead of standard accuracy can avoid biasing performance in any
Boosting ensemble. The proposed approach was validated and compared using
synthetic data, videos from the FIA dataset that emulates face
re-identification applications, and KEEL collection of datasets. Results show
that PBoost can outperform state-of-the-art techniques in terms of both
accuracy and complexity over different levels of imbalance and overlap between
classes.
| Roghayeh Soleymani, Eric Granger, Giorgio Fumera | null | 1706.01531 | null | null |
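A very loose sketch of the progressive-insertion idea: each round adds an
unseen group of samples to the training pool, fits a base classifier, and
weights it by a class-balanced loss (the mean of per-class error rates) rather
than overall error. The sampling scheme, weight updates, and validation
handling of the actual PBoost algorithm are simplified away here; treat
everything below as an assumption-laden illustration for binary labels in
{0, 1}.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def pboost_sketch(X, y, groups):
    """Loose PBoost-style sketch (not the paper's exact procedure): insert
    one new group of sample indices per round, train a base learner on all
    data seen so far, and weight it by a class-balanced loss factor."""
    seen = np.zeros(len(X), dtype=bool)
    models, alphas = [], []
    for g in groups:
        seen[g] = True                              # progressively insert a group
        Xi, yi = X[seen], y[seen]
        clf = DecisionTreeClassifier(max_depth=3).fit(Xi, yi)
        pred = clf.predict(Xi)
        # Class-balanced loss: average of per-class error rates, so the
        # minority class is not drowned out by the majority class.
        loss = float(np.mean([np.mean(pred[yi == c] != c)
                              for c in np.unique(yi)]))
        alpha = 0.5 * np.log((1 - loss) / max(loss, 1e-12))
        models.append(clf)
        alphas.append(alpha)
    def predict(Xq):
        votes = sum(a * (2 * m.predict(Xq) - 1)     # map {0,1} -> {-1,+1}
                    for a, m in zip(alphas, models))
        return (votes > 0).astype(int)
    return predict
```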
Deep learning for extracting protein-protein interactions from
biomedical literature | cs.CL cs.LG q-bio.QM | State-of-the-art methods for protein-protein interaction (PPI) extraction are
primarily feature-based or kernel-based by leveraging lexical and syntactic
information. But how to incorporate such knowledge in the recent deep learning
methods remains an open question. In this paper, we propose a multichannel
dependency-based convolutional neural network model (McDepCNN). It applies one
channel to the embedding vector of each word in the sentence, and another
channel to the embedding vector of the head of the corresponding word.
Therefore, the model can use richer information obtained from different
channels. Experiments on two public benchmarking datasets, AIMed and BioInfer,
demonstrate that McDepCNN compares favorably to the state-of-the-art
rich-feature and single-kernel based methods. In addition, McDepCNN achieves
24.4% relative improvement in F1-score over the state-of-the-art methods on
cross-corpus evaluation and 12% improvement in F1-score over kernel-based
methods on "difficult" instances. These results suggest that McDepCNN
generalizes more easily over different corpora, and is capable of capturing
long-distance features in sentences.
| Yifan Peng and Zhiyong Lu | null | 1706.01556 | null | null |
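A minimal two-channel sketch in the spirit of McDepCNN: one channel carries
each word's embedding, the other the embedding of its syntactic head, and a
shared convolution with max-over-time pooling feeds the classifier. All
dimensions, and the use of a single shared vocabulary for both channels, are
illustrative assumptions.

```python
import torch
import torch.nn as nn

class TwoChannelDepCNN(nn.Module):
    """Sketch of a multichannel dependency-based CNN: channel 1 embeds
    each word, channel 2 embeds its syntactic head; a shared convolution
    plus max pooling feeds the classifier. Sizes are illustrative."""
    def __init__(self, vocab, emb=100, filters=200, width=3, classes=2):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, emb)
        self.head_emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv2d(2, filters, kernel_size=(width, emb),
                              padding=(width // 2, 0))
        self.out = nn.Linear(filters, classes)

    def forward(self, words, heads):          # both: (batch, seq_len)
        ch1 = self.word_emb(words)            # word channel
        ch2 = self.head_emb(heads)            # head-word channel
        x = torch.stack([ch1, ch2], dim=1)    # (batch, 2, seq, emb)
        h = torch.relu(self.conv(x)).squeeze(-1)   # (batch, filters, seq)
        h = h.max(dim=-1).values              # max-over-time pooling
        return self.out(h)
```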
Open Loop Hyperparameter Optimization and Determinantal Point Processes | stat.ML cs.LG | Driven by the need for parallelizable hyperparameter optimization methods,
this paper studies \emph{open loop} search methods: sequences that are
predetermined and can be generated before a single configuration is evaluated.
Examples include grid search, uniform random search, low discrepancy sequences,
and other sampling distributions. In particular, we propose the use of
$k$-determinantal point processes in hyperparameter optimization via random
search. Compared to conventional uniform random search where hyperparameter
settings are sampled independently, a $k$-DPP promotes diversity. We describe
an approach that transforms hyperparameter search spaces for efficient use with
a $k$-DPP. In addition, we introduce a novel Metropolis-Hastings algorithm
which can sample from $k$-DPPs defined over any space from which uniform
samples can be drawn, including spaces with a mixture of discrete and
continuous dimensions or tree structure. Our experiments show significant
benefits in realistic scenarios with a limited budget for training supervised
learners, whether in serial or parallel.
| Jesse Dodge, Kevin Jamieson, Noah A. Smith | null | 1706.01566 | null | null |
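For a finite candidate pool with kernel matrix K, a k-DPP assigns a size-k
subset S probability proportional to det(K_S). A standard Metropolis-Hastings
swap chain (shown here for the discrete case only; the paper's sampler also
covers mixed continuous and tree-structured spaces) proposes exchanging one
selected item for an unselected one and accepts with probability
min(1, det(K_new)/det(K_cur)):

```python
import numpy as np

def kdpp_mh_sample(K, k, steps=2000, seed=0):
    """MH swap chain for a k-DPP over a finite ground set with kernel K.
    Swap one selected item for an unselected one; accept with the ratio
    of principal-minor determinants (the proposal is symmetric)."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    S = list(rng.choice(n, size=k, replace=False))
    det_S = np.linalg.det(K[np.ix_(S, S)])
    for _ in range(steps):
        i = int(rng.integers(k))                      # position to swap out
        j = int(rng.choice(np.setdiff1d(np.arange(n), S)))
        T = S.copy()
        T[i] = j
        det_T = np.linalg.det(K[np.ix_(T, T)])
        if rng.random() < min(1.0, det_T / max(det_S, 1e-300)):
            S, det_S = T, det_T                       # accept the swap
    return S
```

Because det(K_S) grows when the selected items are dissimilar under the
kernel, the sampled hyperparameter configurations are pushed apart, unlike
i.i.d. uniform random search.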
Embedding Feature Selection for Large-scale Hierarchical Classification | cs.LG stat.ML | Large-scale Hierarchical Classification (HC) involves datasets consisting of
thousands of classes and millions of training instances with high-dimensional
features posing several big data challenges. Feature selection that aims to
select the subset of discriminant features is an effective strategy to deal
with large-scale HC problem. It speeds up the training process, reduces the
prediction time and minimizes the memory requirements by compressing the total
size of the learned model's weight vectors. The majority of studies have also shown
feature selection to be competent and successful in improving the
classification accuracy by removing irrelevant features. In this work, we
investigate various filter-based feature selection methods for dimensionality
reduction to solve the large-scale HC problem. Our experimental evaluation on
text and image datasets with varying distribution of features, classes and
instances shows up to a 3x speed-up on massive datasets and up to 45% lower
memory requirements for storing the weight vectors of the learned model without any
significant loss (improvement for some datasets) in the classification
accuracy. Source Code: https://cs.gmu.edu/~mlbio/featureselection.
| Azad Naik and Huzefa Rangwala | null | 1706.01581 | null | null |
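Filter-based selection of this kind is straightforward to reproduce with
standard tooling; a small hedged sketch using chi-squared scores on a public
text corpus (the dataset and the value of k below are illustrative, not the
paper's benchmarks):

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2

# Score all features once with a filter statistic (chi2 here), keep the
# top k, and train the per-node classifiers on the reduced matrix.
data = fetch_20newsgroups(subset="train")
X = TfidfVectorizer(max_features=50000).fit_transform(data.data)
X_sel = SelectKBest(chi2, k=5000).fit_transform(X, data.target)
print(X.shape, "->", X_sel.shape)   # smaller models, faster training
```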
Classifying Documents within Multiple Hierarchical Datasets using
Multi-Task Learning | cs.LG stat.ML | Multi-task learning (MTL) is a supervised learning paradigm in which the
prediction models for several related tasks are learned jointly to achieve
better generalization performance. When there are only a few training examples
per task, MTL considerably outperforms traditional single-task learning
(STL) in terms of prediction accuracy. In this work we develop an MTL based
approach for classifying documents that are archived within dual concept
hierarchies, namely, DMOZ and Wikipedia. We solve the multi-class
classification problem by defining one-versus-rest binary classification tasks
for each of the different classes across the two hierarchical datasets. Instead
of learning a linear discriminant for each of the different tasks
independently, we use a MTL approach with relationships between the different
tasks across the datasets established using the non-parametric, lazy, nearest
neighbor approach. We also develop and evaluate a transfer learning (TL)
approach and compare the MTL (and TL) methods against the standard single task
learning and semi-supervised learning approaches. Our empirical results
demonstrate the strength of our developed methods, which show an improvement
especially when there are fewer training examples per classification
task.
| Azad Naik, Anveshi Charuvaka and Huzefa Rangwala | null | 1706.01583 | null | null |
Sample-Efficient Learning of Mixtures | cs.LG | We consider PAC learning of probability distributions (a.k.a. density
estimation), where we are given an i.i.d. sample generated from an unknown
target distribution, and want to output a distribution that is close to the
target in total variation distance. Let $\mathcal F$ be an arbitrary class of
probability distributions, and let $\mathcal{F}^k$ denote the class of
$k$-mixtures of elements of $\mathcal F$. Assuming the existence of a method
for learning $\mathcal F$ with sample complexity $m_{\mathcal{F}}(\epsilon)$,
we provide a method for learning $\mathcal F^k$ with sample complexity
$O({k\log k \cdot m_{\mathcal F}(\epsilon) }/{\epsilon^{2}})$. Our mixture
learning algorithm has the property that, if the $\mathcal F$-learner is
proper/agnostic, then the $\mathcal F^k$-learner would be proper/agnostic as
well.
This general result enables us to improve the best known sample complexity
upper bounds for a variety of important mixture classes. First, we show that
the class of mixtures of $k$ axis-aligned Gaussians in $\mathbb{R}^d$ is
PAC-learnable in the agnostic setting with $\widetilde{O}({kd}/{\epsilon ^ 4})$
samples, which is tight in $k$ and $d$ up to logarithmic factors. Second, we
show that the class of mixtures of $k$ Gaussians in $\mathbb{R}^d$ is
PAC-learnable in the agnostic setting with sample complexity
$\widetilde{O}({kd^2}/{\epsilon ^ 4})$, which improves the previous known
bounds of $\widetilde{O}({k^3d^2}/{\epsilon ^ 4})$ and
$\widetilde{O}(k^4d^4/\epsilon ^ 2)$ in its dependence on $k$ and $d$. Finally,
we show that the class of mixtures of $k$ log-concave distributions over
$\mathbb{R}^d$ is PAC-learnable using
$\widetilde{O}(d^{(d+5)/2}\epsilon^{-(d+9)/2}k)$ samples.
| Hassan Ashtiani, Shai Ben-David and Abbas Mehrabian | null | 1706.01596 | null | null |
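To see how the general reduction yields the first concrete bound: assuming the
known $\widetilde{O}(d/\epsilon^2)$ sample complexity for agnostically
learning a single axis-aligned Gaussian in $\mathbb{R}^d$ (an assumption
stated here for illustration), plugging
$m_{\mathcal F}(\epsilon) = \widetilde{O}(d/\epsilon^2)$ into the mixture
bound gives

```latex
O\!\left(\frac{k\log k \cdot m_{\mathcal F}(\epsilon)}{\epsilon^{2}}\right)
= O\!\left(\frac{k\log k \cdot \widetilde{O}(d/\epsilon^{2})}{\epsilon^{2}}\right)
= \widetilde{O}\!\left(\frac{kd}{\epsilon^{4}}\right),
```

matching the stated bound for mixtures of $k$ axis-aligned Gaussians up to
logarithmic factors.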
Hyperplane Clustering Via Dual Principal Component Pursuit | cs.CV cs.LG stat.ML | We extend the theoretical analysis of a recently proposed single subspace
learning algorithm, called Dual Principal Component Pursuit (DPCP), to the case
where the data are drawn from a union of hyperplanes. To gain insight into
the properties of the $\ell_1$ non-convex problem associated with DPCP, we
develop a geometric analysis of a closely related continuous optimization
problem. Then transferring this analysis to the discrete problem, our results
state that as long as the hyperplanes are sufficiently separated, the dominant
hyperplane is sufficiently dominant and the points are uniformly distributed
inside the associated hyperplanes, then the non-convex DPCP problem has a
unique global solution, equal to the normal vector of the dominant hyperplane.
This suggests the correctness of a sequential hyperplane learning algorithm
based on DPCP. A thorough experimental evaluation reveals that hyperplane
learning schemes based on DPCP dramatically improve over state-of-the-art
methods on synthetic data, while remaining competitive with the
state of the art for 3D plane clustering on Kinect data.
| Manolis C. Tsakiris and Rene Vidal | null | 1706.01604 | null | null |
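The DPCP problem itself is $\min_{\|b\|=1} \|X^\top b\|_1$, whose global
solution (under the stated conditions) is the normal of the dominant
hyperplane. One common way to attack such problems is iteratively reweighted
least squares; the sketch below, including its initialization and fixed
iteration count, is an illustrative solver rather than the paper's exact
procedure.

```python
import numpy as np

def dpcp_irls(X, iters=50, delta=1e-8):
    """IRLS sketch for min_{||b||=1} ||X^T b||_1: each step takes b as the
    smallest eigenvector of X diag(w) X^T with weights w_i = 1/|x_i^T b|.
    X: (d, N) matrix whose columns are the data points."""
    # Initialize with the least significant principal direction.
    _, _, Vt = np.linalg.svd(X.T, full_matrices=False)
    b = Vt[-1]
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(X.T @ b), delta)   # reweighting
        M = (X * w) @ X.T                              # X diag(w) X^T
        _, vecs = np.linalg.eigh(M)
        b = vecs[:, 0]                                 # smallest eigenvector
    return b           # estimated normal of the dominant hyperplane
```

Points lying on the dominant hyperplane contribute near-zero residuals
$|x_i^\top b|$ and thus large weights, pulling $b$ toward that hyperplane's
normal; a sequential scheme removes the recovered hyperplane's points and
repeats.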