title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Multi-modal Sensor Registration for Vehicle Perception via Deep Neural
Networks | cs.CV cs.LG cs.NE | The ability to simultaneously leverage multiple modes of sensor information
is critical for perception of an automated vehicle's physical surroundings.
Spatio-temporal alignment, or registration, of the incoming information is often
a prerequisite to analyzing the fused data. The persistence and reliability of
multi-modal registration is therefore the key to the stability of decision
support systems ingesting the fused information. LiDAR-video systems, like
those on many driverless cars, are a common example where keeping the LiDAR and
video channels registered to common physical features is important. We develop
a deep learning method that takes multiple channels of heterogeneous data to
detect misalignment of the LiDAR-video inputs. A number of variations were
tested on the Ford LiDAR-video driving test data set and will be discussed. To
the best of our knowledge, the use of multi-modal deep convolutional neural
networks for dynamic real-time LiDAR-video registration has not been presented.
| Michael Giering, Vivek Venugopalan, Kishore Reddy | 10.1109/HPEC.2015.7322485 | 1412.7006 | null | null |
Occlusion Edge Detection in RGB-D Frames using Deep Convolutional
Networks | cs.CV cs.LG cs.NE | Occlusion edges in images, which correspond to range discontinuities in the
scene from the point of view of the observer, are an important prerequisite for
many vision and mobile robot tasks. Although they can be extracted from range
data, extracting them from images and videos would be extremely
beneficial. We trained a deep convolutional neural network (CNN) to identify
occlusion edges in images and videos with both RGB-D and RGB inputs. The use of
a CNN avoids hand-crafting features for automatically isolating occlusion
edges and distinguishing them from appearance edges. In addition to quantitative
occlusion edge detection results, qualitative results are provided to
demonstrate the trade-off between high resolution analysis and frame-level
computation time which is critical for real-time robotics applications.
| Soumik Sarkar, Vivek Venugopalan, Kishore Reddy, Michael Giering,
Julian Ryde, Navdeep Jaitly | null | 1412.7007 | null | null |
Generative Class-conditional Autoencoders | cs.NE cs.LG | Recent work by Bengio et al. (2013) proposes a sampling procedure for
denoising autoencoders which involves learning the transition operator of a
Markov chain. The transition operator is typically unimodal, which limits its
capacity to model complex data. In order to perform efficient sampling from
conditional distributions, we extend this work, both theoretically and
algorithmically, to gated autoencoders (Memisevic, 2013). The proposed model is
able to generate convincing class-conditional samples when trained on both the
MNIST and TFD datasets.
| Jan Rudy, Graham Taylor | null | 1412.7009 | null | null |
Audio Source Separation with Discriminative Scattering Networks | cs.SD cs.LG | In this report we describe an ongoing line of research for solving
single-channel source separation problems. Many monaural signal decomposition
techniques proposed in the literature operate on a feature space consisting of
a time-frequency representation of the input data. A challenge faced by these
approaches is to effectively exploit the temporal dependencies of the signals
at scales larger than the duration of a time-frame. In this work we propose to
tackle this problem by modeling the signals using a time-frequency
representation with multiple temporal resolutions. The proposed representation
consists of a pyramid of wavelet scattering operators, which generalizes
Constant Q Transforms (CQT) with extra layers of convolution and complex
modulus. We first show that learning standard models with this multi-resolution
setting improves source separation results over fixed-resolution methods. As a
case study, we use Non-Negative Matrix Factorization (NMF), which has been
widely considered in many audio applications. Then, we investigate the inclusion
of the proposed multi-resolution setting into a discriminative training regime.
We discuss several alternatives using different deep neural network
architectures.
| Pablo Sprechmann, Joan Bruna, Yann LeCun | null | 1412.7022 | null | null |
Training deep neural networks with low precision multiplications | cs.LG cs.CV cs.NE | Multipliers are the most space- and power-hungry arithmetic operators in the
digital implementation of deep neural networks. We train a set of
state-of-the-art neural networks (Maxout networks) on three benchmark datasets:
MNIST, CIFAR-10 and SVHN. They are trained with three distinct formats:
floating point, fixed point and dynamic fixed point. For each of those datasets
and for each of those formats, we assess the impact of the precision of the
multiplications on the final error after training. We find that very low
precision is sufficient not just for running trained networks but also for
training them. For example, it is possible to train Maxout networks with
10-bit multiplications.
| Matthieu Courbariaux, Yoshua Bengio and Jean-Pierre David | null | 1412.7024 | null | null |
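A minimal sketch of the fixed-point quantization idea described in the abstract above, assuming arbitrary bit widths and random data (illustrative only, not the paper's Maxout training setup):

```python
import numpy as np

def to_fixed_point(x, integer_bits=2, fractional_bits=8):
    """Round x onto a signed fixed-point grid with the given bit budget."""
    scale = 2.0 ** fractional_bits
    max_val = 2.0 ** integer_bits - 1.0 / scale
    return np.clip(np.round(x * scale) / scale, -max_val, max_val)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(256, 784))   # hypothetical weight matrix
x = rng.normal(size=784)                     # hypothetical input vector

exact = W @ x
quantized = to_fixed_point(W) @ to_fixed_point(x)
print("mean absolute error:", np.abs(exact - quantized).mean())
```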
Language Recognition using Random Indexing | cs.CL cs.LG | Random Indexing is a simple implementation of Random Projections with a wide
range of applications. It can solve a variety of problems with good accuracy
without introducing much complexity. Here we use it for identifying the
language of text samples. We present a novel method of generating language
representation vectors using letter blocks. Further, we show that the method is
easily implemented and requires little computational power and space.
Experiments on a number of model parameters illustrate certain properties of
high-dimensional sparse vector representations of data. The statistical
relevance of the language vectors is demonstrated by the extremely high success
on various language recognition tasks. On a difficult data set of 21,000 short
sentences from 21 different languages, our model performs a language
recognition task and achieves 97.8% accuracy, comparable to state-of-the-art
methods.
| Aditya Joshi, Johan Halseth, Pentti Kanerva | null | 1412.7026 | null | null |
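A toy sketch of letter-block Random Indexing as described above; the dimensionality, sparsity, block size, and the two sample sentences are invented for illustration:

```python
import numpy as np

DIM = 10_000
rng = np.random.default_rng(0)
_block_vectors = {}

def block_vector(block):
    """Sparse random ternary index vector for a letter block (cached)."""
    if block not in _block_vectors:
        v = np.zeros(DIM)
        idx = rng.choice(DIM, size=20, replace=False)
        v[idx] = rng.choice([-1.0, 1.0], size=20)
        _block_vectors[block] = v
    return _block_vectors[block]

def text_vector(text, n=3):
    """Sum the index vectors of all letter n-grams (blocks) in the text."""
    v = np.zeros(DIM)
    for i in range(len(text) - n + 1):
        v += block_vector(text[i:i + n])
    return v

# Toy "training": one profile vector per language from a sample sentence.
samples = {"en": "the quick brown fox jumps over the lazy dog",
           "de": "der schnelle braune fuchs springt ueber den faulen hund"}
profiles = {lang: text_vector(s) for lang, s in samples.items()}

def classify(text):
    q = text_vector(text)
    sim = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return max(profiles, key=lambda lang: sim(q, profiles[lang]))

print(classify("the dog jumps over the fox"))   # expected: "en"
```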
Joint RNN-Based Greedy Parsing and Word Composition | cs.LG cs.CL cs.NE | This paper introduces a greedy parser based on neural networks, which
leverages a new compositional sub-tree representation. The greedy parser and
the compositional procedure are jointly trained and tightly depend on
each other. The composition procedure outputs a vector representation that
summarizes sub-trees both syntactically (parsing tags) and semantically (words).
Composition and tagging are achieved over continuous (word or tag)
representations, and recurrent neural networks. We reach F1 performance on par
with well-known existing parsers, while having the advantage of speed, thanks
to the greedy nature of the parser. We provide a fully functional
implementation of the method described in this paper.
| Jo\"el Legrand and Ronan Collobert | null | 1412.7028 | null | null |
Attention for Fine-Grained Categorization | cs.CV cs.LG cs.NE | This paper presents experiments extending the work of Ba et al. (2014) on
recurrent neural models for attention into less constrained visual
environments, specifically fine-grained categorization on the Stanford Dogs
data set. In this work we use an RNN of the same structure but substitute a
more powerful visual network and perform large-scale pre-training of the visual
network outside of the attention RNN. Most work in attention models to date
focuses on tasks with toy or more constrained visual environments, whereas we
present results for fine-grained categorization better than the
state-of-the-art GoogLeNet classification model. We show that our model learns
to direct high resolution attention to the most discriminative regions without
any spatial supervision such as bounding boxes, and it is able to discriminate
fine-grained dog breeds moderately well even when given only an initial
low-resolution context image and narrow, inexpensive glimpses at faces and fur
patterns. This and similar attention models have the major advantage of being
trained end-to-end, as opposed to other current detection and recognition
pipelines with hand-engineered components where information is lost. While our
model is state-of-the-art, further work is needed to fully leverage the
sequential input.
| Pierre Sermanet, Andrea Frome, Esteban Real | null | 1412.7054 | null | null |
Clustering multi-way data: a novel algebraic approach | cs.LG cs.CV cs.IT math.IT stat.ML | In this paper, we develop a method for unsupervised clustering of two-way
(matrix) data by combining two recent innovations from different fields: the
Sparse Subspace Clustering (SSC) algorithm [10], which groups points coming
from a union of subspaces into their respective subspaces, and the t-product
[18], which was introduced to provide a matrix-like multiplication for third
order tensors. Our algorithm is analogous to SSC in that an "affinity" between
different data points is built using a sparse self-representation of the data.
Unlike SSC, we employ the t-product in the self-representation. This allows us
more flexibility in modeling; in fact, SSC is a special case of our method. When
using the t-product, three-way arrays are treated as matrices whose elements
(scalars) are n-tuples or tubes. Convolutions take the place of scalar
multiplication. This framework allows us to embed the 2-D data into a
vector-space-like structure called a free module over a commutative ring. These
free modules retain many properties of complex inner-product spaces, and we
leverage that to provide theoretical guarantees on our algorithm. We show that
compared to vector-space counterparts, SSmC achieves higher accuracy and is
better able to cluster data with less preprocessing in some image clustering
problems. In particular, we show the performance of the proposed method on the
Weizmann face database, the Extended Yale B face database and the MNIST handwritten digits
database.
| Eric Kernfeld and Shuchin Aeron and Misha Kilmer | null | 1412.7056 | null | null |
Semantic Image Segmentation with Deep Convolutional Nets and Fully
Connected CRFs | cs.CV cs.LG cs.NE | Deep Convolutional Neural Networks (DCNNs) have recently shown
state-of-the-art performance in high-level vision tasks, such as image classification and
object detection. This work brings together methods from DCNNs and
probabilistic graphical models for addressing the task of pixel-level
classification (also called "semantic image segmentation"). We show that
responses at the final layer of DCNNs are not sufficiently localized for
accurate object segmentation. This is due to the very invariance properties
that make DCNNs good for high level tasks. We overcome this poor localization
property of deep networks by combining the responses at the final DCNN layer
with a fully connected Conditional Random Field (CRF). Qualitatively, our
"DeepLab" system is able to localize segment boundaries at a level of accuracy
which is beyond previous methods. Quantitatively, our method sets the new state
of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching
71.6% IOU accuracy on the test set. We show how these results can be obtained
efficiently: Careful network re-purposing and a novel application of the 'hole'
algorithm from the wavelet community allow dense computation of neural net
responses at 8 frames per second on a modern GPU.
| Liang-Chieh Chen and George Papandreou and Iasonas Kokkinos and Kevin
Murphy and Alan L. Yuille | null | 1412.7062 | null | null |
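The 'hole' (atrous) algorithm referenced in the abstract amounts to convolving with a kernel dilated by inserting zeros, which enlarges the receptive field without extra parameters. A minimal 1-D sketch, not the authors' GPU implementation:

```python
import numpy as np

def atrous_conv1d(signal, kernel, rate):
    """1-D convolution with the kernel dilated by `rate` (zeros inserted)."""
    dilated = np.zeros((len(kernel) - 1) * rate + 1)
    dilated[::rate] = kernel
    return np.convolve(signal, dilated, mode="valid")

x = np.arange(16, dtype=float)
k = np.array([1.0, -2.0, 1.0])
print(atrous_conv1d(x, k, rate=1))  # ordinary convolution
print(atrous_conv1d(x, k, rate=2))  # same kernel, doubled receptive field
```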
Diverse Embedding Neural Network Language Models | cs.CL cs.LG cs.NE | We propose Diverse Embedding Neural Network (DENN), a novel architecture for
language models (LMs). A DENNLM projects the input word history vector onto
multiple diverse low-dimensional sub-spaces instead of a single
higher-dimensional sub-space as in conventional feed-forward neural network
LMs. We encourage these sub-spaces to be diverse during network training
through an augmented loss function. Our language modeling experiments on the
Penn Treebank data set show the performance benefit of using a DENNLM.
| Kartik Audhkhasi, Abhinav Sethy, Bhuvana Ramabhadran | null | 1412.7063 | null | null |
Efficient Exact Gradient Update for training Deep Networks with Very
Large Sparse Targets | cs.NE cs.CL cs.LG | An important class of problems involves training deep neural networks with
sparse prediction targets of very high dimension D. These occur naturally in
e.g. neural language models or the learning of word-embeddings, often posed as
predicting the probability of next words among a vocabulary of size D (e.g.,
200,000). Computing the equally large, but typically non-sparse, D-dimensional
output vector from a last hidden layer of reasonable dimension d (e.g. 500)
incurs a prohibitive O(Dd) computational cost for each example, as does
updating the D x d output weight matrix and computing the gradient needed for
backpropagation to previous layers. While efficient handling of large sparse
network inputs is trivial, the case of large sparse targets is not, and has
thus so far been sidestepped with approximate alternatives such as hierarchical
softmax or sampling-based approximations during training. In this work we
develop an original algorithmic approach which, for a family of loss functions
that includes squared error and spherical softmax, can compute the exact loss,
gradient update for the output weights, and gradient for backpropagation, all
in O(d^2) per example instead of O(Dd), remarkably without ever computing the
D-dimensional output. The proposed algorithm yields a speedup of D/4d, i.e.
two orders of magnitude for typical sizes, for that critical part of the
computations that often dominates the training time in this kind of network
architecture.
| Pascal Vincent, Alexandre de Br\'ebisson, Xavier Bouthillier | null | 1412.7091 | null | null |
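For the squared-error case described above, the loss decomposes as ||Wh - y||^2 = h^T (W^T W) h - 2 y^T (W h) + ||y||^2, so with a maintained Q = W^T W and a k-sparse target y it never requires the D-dimensional output. A hedged numpy sketch of that identity alone (sizes invented; the paper additionally updates Q and the weights incrementally in O(d^2) per example):

```python
import numpy as np

rng = np.random.default_rng(0)
D, d, k = 20_000, 100, 5                    # vocabulary, hidden dim, nonzeros
W = rng.normal(scale=0.01, size=(D, d))     # output weight matrix
Q = W.T @ W                                 # d x d; kept up to date in O(d^2)
h = rng.normal(size=d)                      # last hidden layer activation

nz = rng.choice(D, size=k, replace=False)   # indices of the k active targets
y_nz = np.ones(k)                           # their values

# Exact squared-error loss without forming the D-dimensional output W @ h:
loss = h @ Q @ h - 2.0 * y_nz @ (W[nz] @ h) + y_nz @ y_nz   # O(d^2 + kd)

# Sanity check against the naive O(Dd) computation:
y = np.zeros(D); y[nz] = y_nz
print(abs(loss - np.sum((W @ h - y) ** 2)))  # ~1e-12
```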
Learning linearly separable features for speech recognition using
convolutional neural networks | cs.LG cs.CL cs.NE | Automatic speech recognition systems usually rely on spectral-based features,
such as MFCC or PLP. These features are extracted based on prior knowledge such
as speech perception and/or speech production. Recently, convolutional neural
networks have been shown to be able to estimate phoneme conditional
probabilities in a completely data-driven manner, i.e. using the raw temporal
speech signal directly as input. Such a system was shown to yield similar or
better performance than HMM/ANN-based systems on phoneme recognition and on
large-scale continuous speech recognition tasks, while using fewer parameters.
Motivated by these studies, we investigate the use of a simple linear
classifier in the CNN-based framework. Thus, the network learns linearly
separable features from raw speech. We show that such a system yields similar
or better performance than an MLP-based system using cepstral-based features as
input.
| Dimitri Palaz, Mathew Magimai Doss and Ronan Collobert | null | 1412.7110 | null | null |
Learning Deep Object Detectors from 3D Models | cs.CV cs.LG cs.NE | Crowdsourced 3D CAD models are becoming easily accessible online, and can
potentially generate an infinite number of training images for almost any
object category. We show that augmenting the training data of contemporary Deep
Convolutional Neural Net (DCNN) models with such synthetic data can be
effective, especially when real training data is limited or not well matched to
the target domain. Most freely available CAD models capture 3D shape but are
often missing other low level cues, such as realistic object texture, pose, or
background. In a detailed analysis, we use synthetic CAD-rendered images to
probe the ability of DCNNs to learn without these cues, with surprising
findings. In particular, we show that when the DCNN is fine-tuned on the target
detection task, it exhibits a large degree of invariance to missing low-level
cues, but, when pretrained on generic ImageNet classification, it learns better
when the low-level cues are simulated. We show that our synthetic DCNN training
approach significantly outperforms previous methods on the PASCAL VOC2007
dataset when learning in the few-shot scenario and improves performance in a
domain shift scenario on the Office benchmark.
| Xingchao Peng, Baochen Sun, Karim Ali, and Kate Saenko | null | 1412.7122 | null | null |
Fully Convolutional Multi-Class Multiple Instance Learning | cs.CV cs.LG cs.NE | Multiple instance learning (MIL) can reduce the need for costly annotation in
tasks such as semantic segmentation by weakening the required degree of
supervision. We propose a novel MIL formulation of multi-class semantic
segmentation learning by a fully convolutional network. In this setting, we
seek to learn a semantic segmentation model from just weak image-level labels.
The model is trained end-to-end to jointly optimize the representation while
disambiguating the pixel-image label assignment. Fully convolutional training
accepts inputs of any size, does not need object proposal pre-processing, and
offers a pixelwise loss map for selecting latent instances. Our multi-class MIL
loss exploits the further supervision given by images with multiple labels. We
evaluate this approach through preliminary experiments on the PASCAL VOC
segmentation challenge.
| Deepak Pathak, Evan Shelhamer, Jonathan Long and Trevor Darrell | null | 1412.7144 | null | null |
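One common way to instantiate the multi-class MIL loss described above is to treat the max-scoring pixel of each image-level class as its latent instance. A hedged sketch (the paper's exact loss may differ):

```python
import numpy as np

def multiclass_mil_loss(scores, image_labels):
    """scores: (C, H, W) pixelwise class scores from a fully convolutional net.
    image_labels: iterable of class indices present in the image.
    For each present class, pick its max-scoring pixel as the latent instance
    and apply a per-pixel softmax cross-entropy there."""
    C, H, W = scores.shape
    loss = 0.0
    for c in image_labels:
        i, j = np.unravel_index(np.argmax(scores[c]), (H, W))
        logits = scores[:, i, j]
        shifted = logits - logits.max()
        log_probs = shifted - np.log(np.exp(shifted).sum())
        loss -= log_probs[c]
    return loss / max(len(image_labels), 1)

rng = np.random.default_rng(0)
scores = rng.normal(size=(21, 8, 8))        # e.g. 21 PASCAL VOC classes
print(multiclass_mil_loss(scores, [3, 7]))
```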
Deep Fried Convnets | cs.LG cs.NE stat.ML | The fully connected layers of a deep convolutional neural network typically
contain over 90% of the network parameters, and consume the majority of the
memory required to store the network parameters. Reducing the number of
parameters while preserving essentially the same predictive performance is
critically important for operating deep neural networks in memory constrained
environments such as GPUs or embedded devices.
In this paper we show how kernel methods, in particular a single Fastfood
layer, can be used to replace all fully connected layers in a deep
convolutional neural network. This novel Fastfood layer is also end-to-end
trainable in conjunction with convolutional layers, allowing us to combine them
into a new architecture, named deep fried convolutional networks, which
substantially reduces the memory footprint of convolutional networks trained on
MNIST and ImageNet with no drop in predictive performance.
| Zichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex
Smola, Le Song, Ziyu Wang | null | 1412.7149 | null | null |
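A Fastfood layer approximates a dense Gaussian random projection with the structured product S H G Pi H B, where H is the Walsh-Hadamard transform, reducing the cost from O(d^2) to O(d log d). A rough sketch under simplifying assumptions (in particular, the scaling S here is plain Gaussian and the normalisation is crude, unlike the paper's construction):

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform; len(x) must be a power of two."""
    x = x.copy()
    h, n = 1, len(x)
    while h < n:
        for i in range(0, n, 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

rng = np.random.default_rng(0)
d = 16                               # input dimension, a power of two
B = rng.choice([-1.0, 1.0], size=d)  # random sign flips
P = rng.permutation(d)               # random permutation Pi
G = rng.normal(size=d)               # random Gaussian diagonal
S = rng.normal(size=d)               # simplified output scaling (assumption)

def fastfood(x):
    """Approximate Gaussian random projection in O(d log d)."""
    v = fwht(B * x)                  # H B x
    v = G * v[P]                     # G Pi (H B x)
    return S * fwht(v) / d           # S H (...), with a crude 1/d normalisation

print(fastfood(rng.normal(size=d)))
```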
Learning Compact Convolutional Neural Networks with Nested Dropout | cs.CV cs.LG cs.NE | Recently, nested dropout was proposed as a method for ordering representation
units in autoencoders by their information content, without diminishing
reconstruction cost. However, it has only been applied to training
fully-connected autoencoders in an unsupervised setting. We explore the impact
of nested dropout on the convolutional layers in a CNN trained by
backpropagation, investigating whether nested dropout can provide a simple and
systematic way to determine the optimal representation size with respect to
the desired accuracy, task, and data complexity.
| Chelsea Finn, Lisa Anne Hendricks, Trevor Darrell | null | 1412.7155 | null | null |
Representation Learning for cold-start recommendation | cs.IR cs.LG | A standard approach to Collaborative Filtering (CF), i.e. prediction of user
ratings on items, relies on Matrix Factorization techniques. Representations
for both users and items are computed from the observed ratings and used for
prediction. Unfortunately, these transductive approaches cannot handle the case
of new users arriving in the system, with no known rating, a problem known as
user cold-start. A common approach in this context is to ask these incoming
users for a few initialization ratings. This paper presents a model to tackle
this twofold problem of (i) finding good questions to ask, (ii) building
efficient representations from this small amount of information. The model can
also be used in a more standard (warm) context. Our approach is evaluated on
the classical CF problem and on the cold-start problem on four different
datasets showing its ability to improve baseline performance in both cases.
| Gabriella Contardo and Ludovic Denoyer and Thierry Artieres | null | 1412.7156 | null | null |
Bayesian Optimisation for Machine Translation | cs.CL cs.LG | This paper presents novel Bayesian optimisation algorithms for minimum error
rate training of statistical machine translation systems. We explore two
classes of algorithms for efficiently exploring the translation space, with the
first based on N-best lists and the second based on a hypergraph representation
that compactly represents an exponential number of translation options. Our
algorithms exhibit faster convergence and are capable of obtaining lower error
rates than the existing translation model specific approaches, all within a
generic Bayesian optimisation framework. Furthermore, we also introduce a
random embedding algorithm to scale our approach to sparse high dimensional
feature sets.
| Yishu Miao, Ziyu Wang, Phil Blunsom | null | 1412.7180 | null | null |
Convolutional Neural Networks for joint object detection and pose
estimation: A comparative study | cs.CV cs.LG cs.NE | In this paper we study the application of convolutional neural networks for
jointly detecting objects depicted in still images and estimating their 3D
pose. We identify different feature representations of oriented objects, and
energies that lead a network to learn these representations. The choice of the
representation is crucial since the pose of an object has a natural, continuous
structure while its category is a discrete variable. We evaluate the different
approaches on the joint object detection and pose estimation task of the
Pascal3D+ benchmark using Average Viewpoint Precision. We show that a
classification approach on discretized viewpoints achieves state-of-the-art
performance for joint object detection and pose estimation, and significantly
outperforms existing baselines on this benchmark.
| Francisco Massa, Mathieu Aubry, Renaud Marlet | null | 1412.7190 | null | null |
Audio Source Separation Using a Deep Autoencoder | cs.SD cs.LG cs.NE | This paper proposes a novel framework for unsupervised audio source
separation using a deep autoencoder. The characteristics of the unknown source
signals mixed in the input are automatically learned by properly configured
autoencoders implemented by a network with many layers, and separated by
clustering the coefficient vectors in the code layer. By investigating the
weight vectors to the final target (representation) layer, the primitive
components of the audio signals in the frequency domain are observed. By
clustering the activation coefficients in the code layer, the previously
unknown source signals are segregated. The original source sounds are then
separated and reconstructed by using code vectors which belong to different
clusters. The restored sounds are not perfect but yield promising results,
suggesting the possibility of success in many practical applications.
| Giljin Jang, Han-Gyu Kim, Yung-Hwan Oh | null | 1412.7193 | null | null |
Denoising autoencoder with modulated lateral connections learns
invariant representations of natural images | cs.NE cs.CV cs.LG stat.ML | Suitable lateral connections between encoder and decoder are shown to allow
higher layers of a denoising autoencoder (dAE) to focus on invariant
representations. In regular autoencoders, detailed information needs to be
carried through the highest layers but lateral connections from encoder to
decoder relieve this pressure. It is shown that abstract invariant features can
be translated to detailed reconstructions when invariant features are allowed
to modulate the strength of the lateral connection. Three dAE structures with
modulated and additive lateral connections, and without lateral connections
were compared in experiments using real-world images. The experiments verify
that adding modulated lateral connections to the model 1) improves the accuracy
of the probability model for inputs, as measured by denoising performance; 2)
results in representations whose degree of invariance grows faster towards the
higher layers; and 3) supports the formation of diverse invariant poolings.
| Antti Rasmus, Tapani Raiko, Harri Valpola | null | 1412.7210 | null | null |
Online Distributed Optimization on Dynamic Networks | math.OC cs.DS cs.LG cs.MA cs.SY | This paper presents a distributed optimization scheme over a network of
agents in the presence of cost uncertainties and over switching communication
topologies. Inspired by recent advances in distributed convex optimization, we
propose a distributed algorithm based on a dual sub-gradient averaging. The
objective of this algorithm is to minimize a cost function cooperatively.
Furthermore, the algorithm changes the weights on the communication links in
the network to adapt to varying reliability of neighboring agents. A
convergence rate analysis as a function of the underlying network topology is
then presented, followed by simulation results for representative classes of
sensor networks.
| Saghar Hosseini, Airlie Chapman, and Mehran Mesbahi | 10.1109/TAC.2016.2525928 | 1412.7215 | null | null |
Unsupervised Feature Learning with C-SVDDNet | cs.CV cs.LG cs.NE | In this paper, we investigate the problem of learning feature representation
from unlabeled data using a single-layer K-means network. A K-means network
maps the input data into a feature representation by finding the nearest
centroid for each input point, which has attracted researchers' great attention
recently due to its simplicity, effectiveness, and scalability. However, one
drawback of this feature mapping is that it tends to be unreliable when the
training data contains noise. To address this issue, we propose a SVDD based
feature learning algorithm that describes the density and distribution of each
cluster from K-means with an SVDD ball for more robust feature representation.
For this purpose, we present a new SVDD algorithm called C-SVDD that centers
the SVDD ball towards the mode of local density of each cluster, and we show
that the objective of C-SVDD can be solved very efficiently as a linear
programming problem. Additionally, traditional unsupervised feature learning
methods usually take an average or sum of local representations to obtain
a global representation, which ignores the spatial relationships among them. To
use spatial information, we propose a global representation with a variant of the SIFT
descriptor. The architecture is also extended with multiple receptive field
scales and multiple pooling sizes. Extensive experiments on several popular
object recognition benchmarks, such as STL-10, MNIST, Holiday and Copydays,
show that the proposed C-SVDDNet method yields comparable or better
performance than previous state-of-the-art methods.
| Dong Wang and Xiaoyang Tan | null | 1412.7259 | null | null |
Learning Non-deterministic Representations with Energy-based Ensembles | cs.LG cs.NE | The goal of a generative model is to capture the distribution underlying the
data, typically through latent variables. After training, these variables are
often used as a new representation, more effective than the original features
in a variety of learning tasks. However, the representations constructed by
contemporary generative models are usually point-wise deterministic mappings
from the original feature space. Thus, even with representations robust to
class-specific transformations, statistically driven models trained on them
would not be able to generalize when the labeled data is scarce. Inspired by
the stochasticity of the synaptic connections in the brain, we introduce
Energy-based Stochastic Ensembles. These ensembles can learn non-deterministic
representations, i.e., mappings from the feature space to a family of
distributions in the latent space. These mappings are encoded in a distribution
over a (possibly infinite) collection of models. By conditionally sampling
models from the ensemble, we obtain multiple representations for every input
example and effectively augment the data. We propose an algorithm similar to
contrastive divergence for training restricted Boltzmann stochastic ensembles.
Finally, we demonstrate the concept of the stochastic representations on a
synthetic dataset as well as test them in the one-shot learning scenario on
MNIST.
| Maruan Al-Shedivat, Emre Neftci and Gert Cauwenberghs | null | 1412.7272 | null | null |
Microbial community pattern detection in human body habitats via
ensemble clustering framework | q-bio.QM cs.CE cs.LG q-bio.GN | The human habitat is a host where microbial species evolve, function, and
continue to evolve. Elucidating how microbial communities respond to human
habitats is a fundamental and critical task, as establishing baselines of human
microbiome is essential in understanding its role in human disease and health.
However, current studies usually overlook the complex and interconnected
landscape of the human microbiome and are limited to particular body habitats
and learning models with specific criteria. Therefore, these methods cannot
capture the real-world underlying microbial patterns effectively. To obtain a
comprehensive view, we propose a novel ensemble clustering framework to mine
the structure of microbial community pattern on large-scale metagenomic data.
Particularly, we first build a microbial similarity network via integrating
1920 metagenomic samples from three body habitats of healthy adults. Then a
novel symmetric Nonnegative Matrix Factorization (NMF) based ensemble model is
proposed and applied onto the network to detect clustering pattern. Extensive
experiments are conducted to evaluate the effectiveness of our model on
deriving microbial community with respect to body habitat and host gender. From
clustering results, we observed that body habitat exhibits strong but
non-unique microbial structural patterns. Meanwhile, the human microbiome reveals
different degrees of structural variation over body habitat and host gender. In
summary, our ensemble clustering framework could efficiently explore integrated
clustering results to accurately identify microbial communities, and provide a
comprehensive view for a set of microbial communities. Such trends depict an
integrated biography of microbial communities, which offer a new insight
towards uncovering pathogenic model of human microbiome.
| Peng Yang, Xiaoquan Su, Le Ou-Yang, Hon-Nian Chua, Xiao-Li Li, Kang
Ning | 10.1186/1752-0509-8-S4-S7 | 1412.7384 | null | null |
ADASECANT: Robust Adaptive Secant Method for Stochastic Gradient | cs.LG cs.NE stat.ML | Stochastic gradient algorithms have been the main focus of large-scale
learning problems and they led to important successes in machine learning. The
convergence of SGD depends on the careful choice of the learning rate and the
amount of noise in the stochastic estimates of the gradients. In this paper, we
propose a new adaptive learning rate algorithm, which utilizes curvature
information for automatically tuning the learning rates. The information about
the element-wise curvature of the loss function is estimated from the local
statistics of the stochastic first order gradients. We further propose a new
variance reduction technique to speed up the convergence. In our preliminary
experiments with deep neural networks, we obtained better performance compared
to the popular stochastic gradient algorithms.
| Caglar Gulcehre, Marcin Moczulski and Yoshua Bengio | null | 1412.7419 | null | null |
Grammar as a Foreign Language | cs.CL cs.LG stat.ML | Syntactic constituency parsing is a fundamental problem in natural language
processing and has been the subject of intensive research and engineering for
decades. As a result, the most accurate parsers are domain specific, complex,
and inefficient. In this paper we show that a domain-agnostic
attention-enhanced sequence-to-sequence model achieves state-of-the-art results
on the most widely used syntactic constituency parsing dataset, when trained on
a large synthetic corpus that was annotated using existing parsers. It also
matches the performance of standard parsers when trained only on a small
human-annotated dataset, which shows that this model is highly data-efficient,
in contrast to sequence-to-sequence models without the attention mechanism. Our
parser is also fast, processing over a hundred sentences per second with an
unoptimized CPU implementation.
| Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever,
Geoffrey Hinton | null | 1412.7449 | null | null |
Pathwise Coordinate Optimization for Sparse Learning: Algorithm and
Theory | stat.ML cs.LG math.OC | The pathwise coordinate optimization is one of the most important
computational frameworks for high dimensional convex and nonconvex sparse
learning problems. It differs from the classical coordinate optimization
algorithms in three salient features: {\it warm start initialization}, {\it
active set updating}, and {\it strong rule for coordinate preselection}. Such a
complex algorithmic structure grants superior empirical performance, but also
poses a significant challenge to theoretical analysis. To tackle this
long-standing problem, we develop a new theory showing that these three features play
pivotal roles in guaranteeing the outstanding statistical and computational
performance of the pathwise coordinate optimization framework. Particularly, we
analyze the existing pathwise coordinate optimization algorithms and provide
new theoretical insights into them. The obtained insights further motivate the
development of several modifications to improve the pathwise coordinate
optimization framework, which guarantees linear convergence to a unique sparse
local optimum with optimal statistical properties in parameter estimation and
support recovery. This is the first result on the computational and statistical
guarantees of the pathwise coordinate optimization framework in high
dimensions. Thorough numerical experiments are provided to support our theory.
| Tuo Zhao, Han Liu, Tong Zhang | null | 1412.7477 | null | null |
Deep Networks With Large Output Spaces | cs.NE cs.LG | Deep neural networks have been extremely successful at various image, speech,
and video recognition tasks because of their ability to model deep structures
within the data. However, they are still prohibitively expensive to train and
apply for problems containing millions of classes in the output layer. Based on
the observation that the key computation common to most neural network layers
is a vector/matrix product, we propose a fast locality-sensitive hashing
technique to approximate the actual dot product enabling us to scale up the
training and inference to millions of output classes. We evaluate our technique
on three diverse large-scale recognition tasks and show that our approach can
train large-scale models at a faster rate (in terms of steps/total time)
compared to baseline methods.
| Sudheendra Vijayanarasimhan and Jonathon Shlens and Rajat Monga and
Jay Yagnik | null | 1412.7479 | null | null |
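A hedged sketch of the signed-random-projection hashing idea described above: hash the output rows once, then score only the classes whose hash matches the hidden activation's (the hash size, dimensions, and single-table design are illustrative simplifications):

```python
import numpy as np

rng = np.random.default_rng(0)
D, d, n_bits = 50_000, 128, 16              # classes, hidden dim, hash bits
W = rng.normal(size=(D, d))                 # output layer: one row per class
planes = rng.normal(size=(n_bits, d))       # random hyperplanes for hashing

def signature(v):
    """Signed-random-projection hash: one bit per hyperplane."""
    return (planes @ v > 0).tobytes()

# Index every output row once into hash buckets.
buckets = {}
for c in range(D):
    buckets.setdefault(signature(W[c]), []).append(c)

def approx_top_classes(h, n=10):
    """Score only classes whose hash matches h's hash, instead of all D."""
    candidates = buckets.get(signature(h), [])
    return sorted(candidates, key=lambda c: -(W[c] @ h))[:n]

h = W[1234] + 0.01 * rng.normal(size=d)     # activation close to class 1234
print(approx_top_classes(h))                # should include 1234
# Real systems use several hash tables / multi-probe to reduce misses.
```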
A Unified Perspective on Multi-Domain and Multi-Task Learning | stat.ML cs.LG cs.NE | In this paper, we provide a new neural-network based perspective on
multi-task learning (MTL) and multi-domain learning (MDL). By introducing the
concept of a semantic descriptor, this framework unifies MDL and MTL as well as
encompassing various classic and recent MTL/MDL algorithms by interpreting them
as different ways of constructing semantic descriptors. Our interpretation
provides an alternative pipeline for zero-shot learning (ZSL), where a model
for a novel class can be constructed without training data. Moreover, it leads
to a new and practically relevant problem setting of zero-shot domain
adaptation (ZSDA), which is analogous to ZSL but for novel domains: A model
for an unseen domain can be generated by its semantic descriptor. Experiments
across this range of problems demonstrate that our framework outperforms a
variety of alternatives.
| Yongxin Yang and Timothy M. Hospedales | null | 1412.7489 | null | null |
Learning Deep Temporal Representations for Brain Decoding | cs.LG cs.NE | Functional magnetic resonance imaging produces high dimensional data, with a
less than ideal number of labelled samples for brain decoding tasks (predicting
brain states). In this study, we propose a new deep temporal convolutional
neural network architecture with spatial pooling for brain decoding which aims
to reduce dimensionality of feature space along with improved classification
performance. Temporal representations (filters) for each layer of the
convolutional model are learned by leveraging unlabelled fMRI data in an
unsupervised fashion with regularized autoencoders. Learned temporal
representations in multiple levels capture the regularities in the temporal
domain and are observed to be a rich bank of activation patterns which also
exhibit similarities to the actual hemodynamic responses. Further, spatial
pooling layers in the convolutional architecture reduce the dimensionality
without losing excessive information. By employing the proposed temporal
convolutional architecture with spatial pooling, raw input fMRI data is mapped
to a non-linear, highly-expressive and low-dimensional feature space where the
final classification is conducted. In addition, we propose a simple heuristic
approach for hyper-parameter tuning when no validation data is available.
The proposed method is tested on a ten-class recognition memory experiment with
nine subjects. The results support the efficiency and potential of the proposed
model, compared to the baseline multi-voxel pattern analysis techniques.
| Orhan Firat, Emre Aksan, Ilke Oztekin, Fatos T. Yarman Vural | null | 1412.7522 | null | null |
Difference Target Propagation | cs.LG cs.NE | Back-propagation has been the workhorse of recent successes of deep learning
but it relies on infinitesimal effects (partial derivatives) in order to
perform credit assignment. This could become a serious issue as one considers
deeper and more non-linear functions, e.g., consider the extreme case of
nonlinearity where the relation between parameters and cost is actually
discrete. Inspired by the biological implausibility of back-propagation, a few
approaches have been proposed in the past that could play a similar credit
assignment role. In this spirit, we explore a novel approach to credit
assignment in deep networks that we call target propagation. The main idea is
to compute targets rather than gradients, at each layer. Like gradients, they
are propagated backwards. In a way that is related but different from
previously proposed proxies for back-propagation which rely on a backwards
network with symmetric weights, target propagation relies on auto-encoders at
each layer. Unlike back-propagation, it can be applied even when units exchange
stochastic bits rather than real numbers. We show that a linear correction for
the imperfection of the auto-encoders, called difference target propagation,
is very effective in making target propagation actually work, leading to results
comparable to back-propagation for deep networks with discrete and continuous
units and denoising auto-encoders and achieving state of the art for stochastic
networks.
| Dong-Hyun Lee, Saizheng Zhang, Asja Fischer, Yoshua Bengio | null | 1412.7525 | null | null |
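The difference correction referred to above is target_{l-1} = h_{l-1} + g(target_l) - g(h_l), where g is the layer's learned approximate inverse; the extra terms cancel g's reconstruction error. A toy sketch with an artificially imperfect inverse (all sizes and perturbations are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
W = rng.normal(scale=0.4, size=(n, n))                   # forward weights
V = np.linalg.inv(W) + 0.05 * rng.normal(size=(n, n))    # imperfect inverse

f = lambda h: np.tanh(W @ h)                             # forward mapping
g = lambda h: V @ np.arctanh(np.clip(h, -0.999, 0.999))  # approximate inverse

h_prev = rng.normal(scale=0.1, size=n)
h = f(h_prev)
target = 0.9 * h + 0.1 * rng.normal(scale=0.1, size=n)   # some layer-l target

# Vanilla target propagation would use g(target) directly; the *difference*
# correction below cancels the reconstruction error g(f(h_prev)) - h_prev:
target_prev = h_prev + g(target) - g(h)
print("inverse's reconstruction error:", np.linalg.norm(g(h) - h_prev))
```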
Fast Convolutional Nets With fbfft: A GPU Performance Evaluation | cs.LG cs.DC cs.NE | We examine the performance profile of Convolutional Neural Network training
on the current generation of NVIDIA Graphics Processing Units. We introduce two
new Fast Fourier Transform convolution implementations: one based on NVIDIA's
cuFFT library, and another based on a Facebook authored FFT implementation,
fbfft, that provides significant speedups over cuFFT (over 1.5x) for whole
CNNs. Both of these convolution implementations are available in open source,
and are faster than NVIDIA's cuDNN implementation for many common convolutional
layers (up to 23.5x for some synthetic kernel configurations). We discuss
different performance regimes of convolutions, comparing areas where
straightforward time domain convolutions outperform Fourier frequency domain
convolutions. Details on algorithmic applications of NVIDIA GPU hardware
specifics in the implementation of fbfft are also provided.
| Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala,
Serkan Piantino, Yann LeCun | null | 1412.7580 | null | null |
Differential Privacy and Machine Learning: a Survey and Review | cs.LG cs.CR cs.DB | The objective of machine learning is to extract useful information from data,
while privacy is preserved by concealing information. Thus it seems hard to
reconcile these competing interests. However, they frequently must be balanced
when mining sensitive data. For example, medical research represents an
important application where it is necessary both to extract useful information
and protect patient privacy. One way to resolve the conflict is to extract
general characteristics of whole populations without disclosing the private
information of individuals.
In this paper, we consider differential privacy, one of the most popular and
powerful definitions of privacy. We explore the interplay between machine
learning and differential privacy, namely privacy-preserving machine learning
algorithms and learning-based data release mechanisms. We also describe some
theoretical results that address what can be learned differentially privately
and upper bounds of loss functions for differentially private algorithms.
Finally, we present some open questions, including how to incorporate public
data, how to deal with missing data in private datasets, and whether, as the
number of observed samples grows arbitrarily large, differentially private
machine learning algorithms can be achieved at no cost to utility as compared
to corresponding non-differentially private algorithms.
| Zhanglong Ji, Zachary C. Lipton, Charles Elkan | null | 1412.7584 | null | null |
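As a concrete instance of the privacy-preserving computations surveyed above, a minimal Laplace-mechanism sketch (the bounds, epsilon values, and data are illustrative):

```python
import numpy as np

def private_mean(values, lower, upper, epsilon, rng):
    """Differentially private mean via the Laplace mechanism. Clipping each
    record to [lower, upper] bounds its influence, so the sensitivity of
    the mean is (upper - lower) / n."""
    values = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(values)
    return values.mean() + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
ages = rng.integers(18, 90, size=1_000)
print("true mean :", ages.mean())
print("eps = 0.1 :", private_mean(ages, 18, 90, 0.1, rng))
print("eps = 1.0 :", private_mean(ages, 18, 90, 1.0, rng))
```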
An Effective Semi-supervised Divisive Clustering Algorithm | cs.LG cs.CV stat.ML | Nowadays, data are generated massively and rapidly, from scientific fields such
as bioinformatics, neuroscience and astronomy to business and engineering fields.
Cluster analysis, as one of the major data analysis tools, is therefore more
significant than ever. We propose in this work an effective Semi-supervised
Divisive Clustering algorithm (SDC). Data points are first organized by a
minimal spanning tree. Next, this tree structure is transitioned to the in-tree
structure, and then divided into sub-trees under the supervision of the labeled
data, and in the end, all points in the sub-trees are directly associated with
specific cluster centers. SDC is fully automatic, non-iterative, involving no
free parameter, insensitive to noise, able to detect irregularly shaped cluster
structures, applicable to the data sets of high dimensionality and different
attributes. The power of SDC is demonstrated on several datasets.
| Teng Qiu, Yongjie Li | null | 1412.7625 | null | null |
Transformation Properties of Learned Visual Representations | cs.LG cs.CV cs.NE | When a three-dimensional object moves relative to an observer, a change
occurs on the observer's image plane and in the visual representation computed
by a learned model. Starting with the idea that a good visual representation is
one that transforms linearly under scene motions, we show, using the theory of
group representations, that any such representation is equivalent to a
combination of the elementary irreducible representations. We derive a striking
relationship between irreducibility and the statistical dependency structure of
the representation, by showing that under restricted conditions, irreducible
representations are decorrelated. Under partial observability, as induced by
the perspective projection of a scene onto the image plane, the motion group
does not have a linear action on the space of images, so that it becomes
necessary to perform inference over a latent representation that does transform
linearly. This idea is demonstrated in a model of rotating NORB objects that
employs a latent representation of the non-commutative 3D rotation group SO(3).
| Taco S. Cohen and Max Welling | null | 1412.7659 | null | null |
Automatic Photo Adjustment Using Deep Neural Networks | cs.CV cs.GR cs.LG eess.IV | Photo retouching enables photographers to invoke dramatic visual impressions
by artistically enhancing their photos through stylistic color and tone
adjustments. However, it is also a time-consuming and challenging task that
requires advanced skills beyond the abilities of casual photographers. Using an
automated algorithm is an appealing alternative to manual work but such an
algorithm faces many hurdles. Many photographic styles rely on subtle
adjustments that depend on the image content and even its semantics. Further,
these adjustments are often spatially varying. Because of these
characteristics, existing automatic algorithms are still limited and cover only
a subset of these challenges. Recently, deep machine learning has shown unique
abilities to address hard problems that have long resisted machine algorithms.
This motivated us to explore the use of deep learning in the context of photo
editing. In this paper, we explain how to formulate the automatic photo
adjustment problem in a way suitable for this approach. We also introduce an
image descriptor that accounts for the local semantics of an image. Our
experiments demonstrate that our deep learning formulation applied using these
descriptors successfully captures sophisticated photographic styles. In
particular and unlike previous techniques, it can model local adjustments that
depend on the image semantics. We show on several examples that this yields
results that are qualitatively and quantitatively better than previous work.
| Zhicheng Yan and Hao Zhang and Baoyuan Wang and Sylvain Paris and
Yizhou Yu | null | 1412.7725 | null | null |
Learning Longer Memory in Recurrent Neural Networks | cs.NE cs.LG | A recurrent neural network is a powerful model that learns temporal patterns in
sequential data. For a long time, it was believed that recurrent networks are
difficult to train using simple optimizers, such as stochastic gradient
descent, due to the so-called vanishing gradient problem. In this paper, we
show that learning longer term patterns in real data, such as in natural
language, is perfectly possible using gradient descent. This is achieved by
using a slight structural modification of the simple recurrent neural network
architecture. We encourage some of the hidden units to change their state
slowly by making part of the recurrent weight matrix close to identity, thus
forming a kind of longer-term memory. We evaluate our model in language
modeling experiments, where we obtain similar performance to the much more
complex Long Short Term Memory (LSTM) networks (Hochreiter & Schmidhuber,
1997).
| Tomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu,
Marc'Aurelio Ranzato | null | 1412.7753 | null | null |
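A hedged sketch of the structural modification described above: a block of context units whose recurrence is fixed close to the identity, so their state changes slowly (the dimensions, alpha, and initialisation are invented; this is not the authors' exact model):

```python
import numpy as np

def scrn_step(x, h, s, params, alpha=0.05):
    """One step of a simple recurrent net with an extra 'slow' context
    layer: the context recurrence is fixed near the identity, so s changes
    slowly and acts as a longer-term memory."""
    A, R, B, P = params
    s_new = (1.0 - alpha) * s + alpha * (B @ x)     # slow, near-identity
    h_new = np.tanh(A @ x + R @ h + P @ s_new)      # fast hidden units
    return h_new, s_new

rng = np.random.default_rng(0)
nx, nh, ns = 10, 20, 5
params = tuple(rng.normal(scale=0.1, size=shape)
               for shape in [(nh, nx), (nh, nh), (ns, nx), (nh, ns)])
h, s = np.zeros(nh), np.zeros(ns)
for _ in range(100):                                # run on random inputs
    h, s = scrn_step(rng.normal(size=nx), h, s, params)
print(h[:3], s[:3])
```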
Multiple Object Recognition with Visual Attention | cs.LG cs.CV cs.NE | We present an attention-based model for recognizing multiple objects in
images. The proposed model is a deep recurrent neural network trained with
reinforcement learning to attend to the most relevant regions of the input
image. We show that the model learns to both localize and recognize multiple
objects despite being given only class labels during training. We evaluate the
model on the challenging task of transcribing house number sequences from
Google Street View images and show that it is both more accurate than the
state-of-the-art convolutional networks and uses fewer parameters and less
computation.
| Jimmy Ba, Volodymyr Mnih, Koray Kavukcuoglu | null | 1412.7755 | null | null |
Protein Secondary Structure Prediction with Long Short Term Memory
Networks | q-bio.QM cs.LG cs.NE | Prediction of protein secondary structure from the amino acid sequence is a
classical bioinformatics problem. Common methods use feed forward neural
networks or SVMs combined with a sliding window, as these models do not
naturally handle sequential data. Recurrent neural networks are a
generalization of the feed forward neural network that naturally handle
sequential data. We use a bidirectional recurrent neural network with long
short term memory cells for prediction of secondary structure and evaluate
using the CB513 dataset. On the secondary structure 8-class problem we report
better performance (0.674) than state of the art (0.664). Our model includes
feed forward networks between the long short term memory cells, a path that can
be further explored.
| S{\o}ren Kaae S{\o}nderby and Ole Winther | null | 1412.7828 | null | null |
Cloud K-SVD: A Collaborative Dictionary Learning Algorithm for Big,
Distributed Data | cs.LG cs.IT math.IT stat.ML | This paper studies the problem of data-adaptive representations for big,
distributed data. It is assumed that a number of geographically-distributed,
interconnected sites have massive local data and they are interested in
collaboratively learning a low-dimensional geometric structure underlying these
data. In contrast to previous works on subspace-based data representations,
this paper focuses on the geometric structure of a union of subspaces (UoS). In
this regard, it proposes a distributed algorithm---termed cloud K-SVD---for
collaborative learning of a UoS structure underlying distributed data of
interest. The goal of cloud K-SVD is to learn a common overcomplete dictionary
at each individual site such that every sample in the distributed data can be
represented through a small number of atoms of the learned dictionary. Cloud
K-SVD accomplishes this goal without requiring exchange of individual samples
between sites. This makes it suitable for applications where sharing of raw
data is discouraged due to either privacy concerns or large volumes of data.
This paper also provides an analysis of cloud K-SVD that gives insights into
its properties as well as deviations of the dictionaries learned at individual
sites from a centralized solution in terms of different measures of
local/global data and topology of interconnections. Finally, the paper
numerically illustrates the efficacy of cloud K-SVD on real and synthetic
distributed data.
| Haroon Raja and Waheed U. Bajwa | 10.1109/TSP.2015.2472372 | 1412.7839 | null | null |
Gaussian Process Pseudo-Likelihood Models for Sequence Labeling | cs.LG stat.ML | Several machine learning problems arising in natural language processing can
be modeled as a sequence labeling problem. We provide Gaussian process models
based on pseudo-likelihood approximation to perform sequence labeling. Gaussian
processes (GPs) provide a Bayesian approach to learning in a kernel based
framework. The pseudo-likelihood model enables one to capture long range
dependencies among the output components of the sequence without becoming
computationally intractable. We use an efficient variational Gaussian
approximation method to perform inference in the proposed model. We also
provide an iterative algorithm which can effectively make use of the
information from the neighboring labels to perform prediction. The ability to
capture long range dependencies makes the proposed approach useful for a wide
range of sequence labeling problems. Numerical experiments on some sequence
labeling data sets demonstrate the usefulness of the proposed approach.
| P. K. Srijith, P. Balamurugan and Shirish Shevade | null | 1412.7868 | null | null |
Polyphonic Music Generation by Modeling Temporal Dependencies Using a
RNN-DBN | cs.LG cs.AI cs.NE | In this paper, we propose a generic technique to model temporal dependencies
and sequences using a combination of a recurrent neural network and a Deep
Belief Network. Our technique, RNN-DBN, is an amalgamation of the memory state
of the RNN that allows it to provide temporal information and a multi-layer DBN
that helps in high level representation of the data. This makes RNN-DBNs ideal
for sequence generation. Further, the use of a DBN in conjunction with the RNN
makes this model capable of significantly more complex data representation than
an RBM. We apply this technique to the task of polyphonic music generation.
| Kratarth Goel, Raunaq Vohra, and J.K. Sahoo | 10.1007/978-3-319-11179-7_28 | 1412.7927 | null | null |
A Novel Feature Selection and Extraction Technique for Classification | cs.LG cs.CV | This paper presents a versatile technique for the purpose of feature
selection and extraction - Class Dependent Features (CDFs). We use CDFs to
improve the accuracy of classification and at the same time control
computational expense by tackling the curse of dimensionality. In order to
demonstrate the generality of this technique, it is applied to handwritten
digit recognition and text categorization.
| Kratarth Goel, Raunaq Vohra, and Ainesh Bakshi | 10.1109/SMC.2014.6974562 | 1412.7934 | null | null |
Adjusting Leverage Scores by Row Weighting: A Practical Approach to
Coherent Matrix Completion | cs.LG stat.ML | Low-rank matrix completion is an important problem with extensive real-world
applications. When observations are uniformly sampled from the underlying
matrix entries, existing methods all require the matrix to be incoherent. This
paper provides the first working method for coherent matrix completion under
the standard uniform sampling model. Our approach is based on the weighted
nuclear norm minimization idea proposed in several recent works, and our key
contribution is a practical method to compute the weighting matrices so that
the leverage scores become more uniform after weighting. Under suitable
conditions, we are able to derive theoretical results, showing the
effectiveness of our approach. Experiments on synthetic data show that our
approach recovers highly coherent matrices with high precision, whereas the
standard unweighted method fails even on noise-free data.
| Shusen Wang, Tong Zhang, Zhihua Zhang | null | 1412.7938 | null | null |
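A toy sketch of the leverage-score intuition above: compute row leverage scores from the top singular subspace, then reweight rows so the scores become more uniform (the single reweighting step and the weight formula here are illustrative simplifications of the paper's method):

```python
import numpy as np

def leverage_scores(M, r):
    """Row leverage scores of M w.r.t. its top rank-r left singular subspace."""
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    return np.sum(U[:, :r] ** 2, axis=1)

rng = np.random.default_rng(0)
# A coherent rank-2 matrix: row 0 is far more 'important' than the rest.
M = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 50))
M[0] *= 20.0

mu = leverage_scores(M, r=2)
weights = 1.0 / np.sqrt(mu / mu.mean())      # down-weight high-leverage rows
mu_w = leverage_scores(weights[:, None] * M, r=2)

# The max/mean leverage ratio should drop after weighting.
print(mu.max() / mu.mean(), mu_w.max() / mu_w.mean())
```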
The Computational Theory of Intelligence: Information Entropy | cs.AI cs.LG | This paper presents an information theoretic approach to the concept of
intelligence in the computational sense. We introduce a probabilistic framework
from which computational intelligence is shown to be an entropy minimizing
process at the local level. Using this new scheme, we develop a simple data
driven clustering example and discuss its applications.
| Daniel Kovach | 10.4236/ijmnta.2014.34020 | 1412.7978 | null | null |
Predicting User Engagement in Twitter with Collaborative Ranking | cs.IR cs.CY cs.LG | Collaborative Filtering (CF) is a core component of popular web-based
services such as Amazon, YouTube, Netflix, and Twitter. Most applications use
CF to recommend a small set of items to the user. For instance, YouTube
presents to a user a list of top-n videos she would likely watch next based on
her rating and viewing history. Current methods of CF evaluation have been
focused on assessing the quality of a predicted rating or the ranking
performance for top-n recommended items. However, restricting the recommender
system evaluation to these two aspects is rather limiting and neglects other
dimensions that could better characterize a well-perceived recommendation. In
this paper, instead of optimizing rating or top-n recommendation, we focus on
the task of predicting which items generate the highest user engagement. In
particular, we use Twitter as our testbed and cast the problem as a
Collaborative Ranking task where the rich features extracted from the metadata
of the tweets help to complement the transaction information limited to user
ids, item ids, ratings and timestamps. We learn a scoring function that
directly optimizes the user engagement in terms of nDCG@10 on the predicted
ranking. Experiments conducted on an extended version of the MovieTweetings
dataset, released as part of the RecSys Challenge 2014, show the effectiveness
of our approach.
| Ernesto Diaz-Aviles, Hoang Thanh Lam, Fabio Pinelli, Stefano Braghin,
Yiannis Gkoufas, Michele Berlingerio, and Francesco Calabrese | 10.1145/2668067.2668072 | 1412.7990 | null | null |
Coordinate Descent with Arbitrary Sampling I: Algorithms and Complexity | math.OC cs.LG cs.NA math.NA | We study the problem of minimizing the sum of a smooth convex function and a
convex block-separable regularizer and propose a new randomized coordinate
descent method, which we call ALPHA. Our method at every iteration updates a
random subset of coordinates, following an arbitrary distribution. No
coordinate descent methods capable of handling an arbitrary sampling have been
studied in the literature before for this problem. ALPHA is a remarkably
flexible algorithm: in special cases, it reduces to deterministic and
randomized methods such as gradient descent, coordinate descent, parallel
coordinate descent and distributed coordinate descent -- both in nonaccelerated
and accelerated variants. The variants with arbitrary (or importance) sampling
are new. We provide a complexity analysis of ALPHA, from which we deduce as a
direct corollary complexity bounds for its many variants, all matching or
improving best known bounds.
| Zheng Qu and Peter Richt\'arik | null | 1412.8060 | null | null |
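For intuition, the sketch below shows the simplest serial special case of such
a method: randomized coordinate descent on a smooth quadratic with importance
sampling proportional to the coordinate-wise Lipschitz constants. The
quadratic objective and sampling distribution are illustrative assumptions,
not the paper's general setting; ALPHA's updates generalize this
single-coordinate rule to arbitrary random subsets of coordinates.

    import numpy as np

    def coordinate_descent(A, b, n_iters=1000, seed=0):
        # Minimize f(x) = 0.5 * ||Ax - b||^2 by updating one random
        # coordinate per iteration, sampled with probability proportional
        # to the coordinate-wise Lipschitz constants L_i = ||A[:, i]||^2.
        # Assumes A has no all-zero columns.
        rng = np.random.default_rng(seed)
        n = A.shape[1]
        L = np.sum(A**2, axis=0)      # coordinate-wise smoothness constants
        p = L / L.sum()               # importance sampling distribution
        x = np.zeros(n)
        r = A @ x - b                 # residual, kept up to date
        for _ in range(n_iters):
            i = rng.choice(n, p=p)
            g = A[:, i] @ r           # partial derivative along coordinate i
            x[i] -= g / L[i]          # exact minimization along e_i
            r -= (g / L[i]) * A[:, i]
        return x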
Coordinate Descent with Arbitrary Sampling II: Expected Separable
Overapproximation | math.OC cs.LG cs.NA math.NA math.PR | The design and complexity analysis of randomized coordinate descent methods,
and in particular of variants which update a random subset (sampling) of
coordinates in each iteration, depends on the notion of expected separable
overapproximation (ESO). This refers to an inequality involving the objective
function and the sampling, capturing in a compact way certain smoothness
properties of the function in a random subspace spanned by the sampled
coordinates. ESO inequalities were previously established for special classes
of samplings only, almost invariably for uniform samplings. In this paper we
develop a systematic technique for deriving these inequalities for a large
class of functions and for arbitrary samplings. We demonstrate that one can
recover existing ESO results using our general approach, which is based on the
study of eigenvalues associated with samplings and the data describing the
function.
| Zheng Qu and Peter Richt\'arik | null | 1412.8063 | null | null |
Complex support vector machines regression for robust channel estimation
in LTE downlink system | cs.IT cs.LG math.IT | In this paper, we address the problem of channel estimation for the LTE
Downlink system in high-mobility environments where non-Gaussian impulse noise
interferes with the reference signals. The estimation of the frequency
selective time varying multipath fading channel is performed by using a channel
estimator based on a nonlinear complex Support Vector Machine Regression (SVR)
which is applied to Long Term Evolution (LTE) downlink. The estimation
algorithm makes use of the pilot signals to estimate the total frequency
response of the highly selective fading multipath channel. Thus, the algorithm
maps training data into a high-dimensional feature space and uses the structural
risk minimization principle to carry out the regression estimation for the
frequency response function of the fading channel. The obtained results show
the effectiveness of the proposed method which has better performance than the
conventional Least Squares (LS) and Decision Feedback methods in tracking the
variations of the fading multipath channel.
| Anis Charrada, Abdelaziz Samet | 10.5121/ijcnc.2012.4115 | 1412.8109 | null | null |
Improving Persian Document Classification Using Semantic Relations
between Words | cs.IR cs.LG | With the growth of information, document classification, as one of the
methods of text mining, plays a vital role in managing and organizing
information. Document classification is the process of assigning a document to
one or more predefined category labels. Document classification includes
different parts such as text processing, term selection, term weighting and
final classification. The accuracy of document classification is very
important. Thus improvement in each part of classification should lead to
better results and higher precision. Term weighting has a great impact on the
accuracy of the classification. Most of the existing weighting methods exploit
the statistical information of terms in documents and do not consider semantic
relations between words. In this paper, an automated document classification
system is presented that uses a novel term weighting method based on semantic
relations between terms. To evaluate the proposed method, three standard
Persian corpora are used. Experimental results show a 2 to 4 percent
improvement in classification accuracy compared with the best previously
designed system for
Persian documents.
| Saeed Parseh and Ahmad Baraani | null | 1412.8147 | null | null |
Improving approximate RPCA with a k-sparsity prior | cs.NE cs.LG | A process-centric view of robust PCA (RPCA) allows its fast approximate
implementation based on a special form of a deep neural network with weights
shared across all layers. However, empirically this fast approximation to RPCA
fails to find representations that are parsimonious. We resolve these bad local
minima by relaxing the elementwise L1 and L2 priors and instead utilize a
structure-inducing k-sparsity prior. In a discriminative classification task
the newly learned representations significantly outperform those from the
original approximate RPCA formulation.
| Maximilian Karl and Christian Osendorfer | null | 1412.8291 | null | null |
Quasi-Monte Carlo Feature Maps for Shift-Invariant Kernels | stat.ML cs.LG math.NA stat.CO | We consider the problem of improving the efficiency of randomized Fourier
feature maps to accelerate training and testing speed of kernel methods on
large datasets. These approximate feature maps arise as Monte Carlo
approximations to integral representations of shift-invariant kernel functions
(e.g., Gaussian kernel). In this paper, we propose to use Quasi-Monte Carlo
(QMC) approximations instead, where the relevant integrands are evaluated on a
low-discrepancy sequence of points as opposed to random point sets as in the
Monte Carlo approach. We derive a new discrepancy measure called box
discrepancy based on theoretical characterizations of the integration error
with respect to a given sequence. We then propose to learn QMC sequences
adapted to our setting based on explicit box discrepancy minimization. Our
theoretical analyses are complemented with empirical results that demonstrate
the effectiveness of classical and adaptive QMC techniques for this problem.
| Haim Avron, Vikas Sindhwani, Jiyan Yang, Michael Mahoney | null | 1412.8293 | null | null |
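As a rough illustration of the QMC idea (not the paper's learned sequences),
the sketch below swaps the i.i.d. Gaussian frequencies of standard random
Fourier features for a scrambled Halton sequence pushed through the inverse
Gaussian CDF; scipy.stats.qmc (SciPy 1.7+) provides the low-discrepancy
generator, and the scrambling is assumed to keep points strictly inside the
unit cube.

    import numpy as np
    from scipy.stats import norm, qmc

    def qmc_fourier_features(X, n_features, sigma=1.0, seed=0):
        # Approximate the Gaussian kernel exp(-||x - y||^2 / (2 sigma^2))
        # with Fourier features whose frequencies come from a scrambled
        # Halton sequence instead of i.i.d. Gaussian sampling.
        d = X.shape[1]
        halton = qmc.Halton(d=d, scramble=True, seed=seed)
        U = halton.random(n_features)   # low-discrepancy points in (0,1)^d
        W = norm.ppf(U) / sigma         # map to N(0, sigma^{-2} I) frequencies
        b = np.random.default_rng(seed).uniform(0, 2 * np.pi, n_features)
        return np.sqrt(2.0 / n_features) * np.cos(X @ W.T + b)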
Fast, simple and accurate handwritten digit classification by training
shallow neural network classifiers with the 'extreme learning machine'
algorithm | cs.NE cs.CV cs.LG | Recent advances in training deep (multi-layer) architectures have inspired a
renaissance in neural network use. For example, deep convolutional networks are
becoming the default option for difficult tasks on large datasets, such as
image and speech recognition. However, here we show that error rates below 1%
on the MNIST handwritten digit benchmark can be replicated with shallow
non-convolutional neural networks. This is achieved by training such networks
using the 'Extreme Learning Machine' (ELM) approach, which also enables a very
rapid training time (~10 minutes). Adding distortions, as is common practice
for MNIST, reduces error rates even further. Our methods are also shown to be
capable of achieving less than 5.5% error rates on the NORB image database. To
achieve these results, we introduce several enhancements to the standard ELM
algorithm, which individually and in combination can significantly improve
performance. The main innovation is to ensure each hidden-unit operates only on
a randomly sized and positioned patch of each image. This form of random
`receptive field' sampling of the input ensures the input weight matrix is
sparse, with about 90% of weights equal to zero. Furthermore, combining our
methods with a small number of iterations of a single-batch backpropagation
method can significantly reduce the number of hidden-units required to achieve
a particular performance. Our close to state-of-the-art results for MNIST and
NORB suggest that the ease of use and accuracy of the ELM algorithm for
designing a single-hidden-layer neural network classifier should cause it to be
given greater consideration either as a standalone method for simpler problems,
or as the final classification stage in deep neural networks applied to more
difficult problems.
| Mark D. McDonnell, Migel D. Tissera, Tony Vladusich, Andr\'e van
Schaik, and Jonathan Tapson | 10.1371/journal.pone.0134254 | 1412.8307 | null | null |
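A minimal sketch of the base algorithm follows: random, mostly-zero input
weights, a fixed hidden nonlinearity, and output weights solved in one
least-squares step. The random zeroing below only approximates the paper's
receptive-field sampling (the actual method uses contiguous image patches),
and the single-batch backpropagation refinement is omitted.

    import numpy as np

    def train_elm(X, Y, n_hidden=1000, seed=0):
        # X: (n_samples, d) inputs; Y: (n_samples, n_classes) one-hot targets.
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))
        W *= rng.random(W.shape) < 0.1   # ~90% of input weights set to zero
        H = np.tanh(X @ W)               # fixed random hidden layer
        beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # output weights
        return W, beta

    def predict_elm(X, W, beta):
        return np.tanh(X @ W) @ beta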
A simple coding for cross-domain matching with dimension reduction via
spectral graph embedding | stat.ML cs.CV cs.LG | Data vectors are obtained from multiple domains; for example, they may be
feature vectors of images or vector representations of words. Domains may have
different numbers
of data vectors with different dimensions. These data vectors from multiple
domains are projected to a common space by linear transformations in order to
search closely related vectors across domains. We would like to find projection
matrices to minimize distances between closely related data vectors. This
formulation of cross-domain matching is regarded as an extension of the
spectral graph embedding to multi-domain setting, and it includes several
multivariate analysis methods of statistics such as multiset canonical
correlation analysis, correspondence analysis, and principal component
analysis. Similar approaches have recently become very popular in pattern
recognition and vision. In this paper, instead of proposing a novel method, we will
introduce an embarrassingly simple idea of coding the data vectors for
explaining all the above mentioned approaches. A data vector is concatenated
with zero vectors from all other domains to make an augmented vector. The
cross-domain matching is solved by applying the single-domain version of
spectral graph embedding to these augmented vectors of all the domains. An
interesting connection to the classical associative memory model of neural
networks is also discussed by noticing a coding for association. A
cross-validation method for choosing the dimension of the common space and a
regularization parameter will be discussed in an illustrative numerical
example.
| Hidetoshi Shimodaira | null | 1412.8380 | null | null |
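The coding itself is a one-liner per domain, sketched below under the
assumption that each domain's data arrive as a row-matrix; the spectral graph
embedding applied to the stacked output is omitted.

    import numpy as np

    def augment(X_list):
        # Concatenate each domain's vectors with zero blocks for all other
        # domains, so every vector lives in one space of dimension sum_k d_k.
        dims = [X.shape[1] for X in X_list]
        total = sum(dims)
        offsets = np.cumsum([0] + dims[:-1])
        blocks = []
        for X, off in zip(X_list, offsets):
            Z = np.zeros((X.shape[0], total))
            Z[:, off:off + X.shape[1]] = X   # place X in its own block
            blocks.append(Z)
        return np.vstack(blocks)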
Spy vs. Spy: Rumor Source Obfuscation | cs.SI cs.LG | Anonymous messaging platforms, such as Secret and Whisper, have emerged as
important social media for sharing one's thoughts without the fear of being
judged by friends, family, or the public. Further, such anonymous platforms are
crucial in nations with authoritarian governments; the right to free expression
and sometimes the personal safety of the author of the message depend on
anonymity. Whether for fear of judgment or personal endangerment, it is crucial
to keep anonymous the identity of the user who initially posted a sensitive
message. In this paper, we consider an adversary who observes a snapshot of the
spread of a message at a certain time. Recent advances in rumor source
detection show that the existing messaging protocols are vulnerable to
such an adversary. We introduce a novel messaging protocol, which we call
adaptive diffusion, and show that it spreads the messages fast and achieves a
perfect obfuscation of the source when the underlying contact network is an
infinite regular tree: all users with the message are nearly equally likely to
have been the origin of the message. Experiments on a sampled Facebook network
show that it effectively hides the location of the source even when the graph
is finite, irregular and has cycles. We further consider a stronger adversarial
model where a subset of colluding users track the reception of messages. We
show that the adaptive diffusion provides a strong protection of the anonymity
of the source even under this scenario.
| Giulia Fanti, Peter Kairouz, Sewoong Oh, Pramod Viswanath | null | 1412.8439 | null | null |
An ADMM algorithm for solving a proximal bound-constrained quadratic
program | math.OC cs.LG stat.ML | We consider a proximal operator given by a quadratic function subject to
bound constraints and give an optimization algorithm using the alternating
direction method of multipliers (ADMM). The algorithm is particularly efficient
to solve a collection of proximal operators that share the same quadratic form,
or if the quadratic program is the relaxation of a binary quadratic problem.
| Miguel \'A. Carreira-Perpi\~n\'an | null | 1412.8493 | null | null |
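A minimal sketch of the splitting, assuming the quadratic term is symmetric
positive semidefinite: x carries the quadratic objective, z carries the box
constraint, and the single Cholesky factorization is reused across iterations
(and could be shared across a whole collection of proximal problems with the
same quadratic form, which is the efficiency the abstract points to).

    import numpy as np

    def admm_box_qp(A, b, lo, hi, rho=1.0, n_iters=200):
        # Minimize 0.5 * x'Ax + b'x subject to lo <= x <= hi via ADMM
        # with the splitting x = z, where z is constrained to the box.
        n = len(b)
        L = np.linalg.cholesky(A + rho * np.eye(n))  # factor once, reuse
        z = np.zeros(n)
        u = np.zeros(n)
        for _ in range(n_iters):
            rhs = rho * (z - u) - b
            x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update
            z = np.clip(x + u, lo, hi)                         # projection
            u = u + x - z                                      # dual update
        return z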
Disjunctive Normal Networks | cs.LG cs.NE | Artificial neural networks are powerful pattern classifiers; however, they
have been surpassed in accuracy by methods such as support vector machines and
random forests that are also easier to use and faster to train.
Backpropagation, which is used to train artificial neural networks, suffers
from the herd effect problem, which leads to long training times and limits
classification accuracy. We use the disjunctive normal form and approximate the
boolean conjunction operations with products to construct a novel network
architecture. The proposed model can be trained by minimizing an error function
and it allows an effective and intuitive initialization which solves the
herd-effect problem associated with backpropagation. This leads to
state-of-the-art classification accuracy and fast training times. In addition,
our model can be jointly optimized with convolutional features in a unified
structure, leading to state-of-the-art results on computer vision problems with
fast convergence rates. A GPU implementation of LDNN with optional
convolutional features is also available.
| Mehdi Sajjadi, Mojtaba Seyedhosseini, Tolga Tasdizen | null | 1412.8534 | null | null |
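The forward pass implied by the abstract can be sketched as follows; the
shapes and the exact soft-AND/soft-OR form are my reading of the abstract, not
the paper's stated parameterization. Each group of sigmoid half-space
indicators is soft-ANDed by a product, and the groups are soft-ORed through
De Morgan's law.

    import numpy as np

    def dnn_forward(X, W, B):
        # X: (n, d) inputs; W: (G, M, d) and B: (G, M) define G conjunctions
        # of M half-space indicators each.
        sig = lambda a: 1.0 / (1.0 + np.exp(-a))
        h = sig(np.einsum('gmd,nd->gmn', W, X) + B[:, :, None])
        conj = np.prod(h, axis=1)                 # soft AND within groups
        return 1.0 - np.prod(1.0 - conj, axis=0)  # soft OR across groups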
Accurate and Conservative Estimates of MRF Log-likelihood using Reverse
Annealing | cs.LG stat.ML | Markov random fields (MRFs) are difficult to evaluate as generative models
because computing the test log-probabilities requires the intractable partition
function. Annealed importance sampling (AIS) is widely used to estimate MRF
partition functions, and often yields quite accurate results. However, AIS is
prone to overestimate the log-likelihood with little indication that anything
is wrong. We present the Reverse AIS Estimator (RAISE), a stochastic lower
bound on the log-likelihood of an approximation to the original MRF model.
RAISE requires only the same MCMC transition operators as standard AIS.
Experimental results indicate that RAISE agrees closely with AIS
log-probability estimates for RBMs, DBMs, and DBNs, but typically errs on the
side of underestimating, rather than overestimating, the log-likelihood.
| Yuri Burda and Roger B. Grosse and Ruslan Salakhutdinov | null | 1412.8566 | null | null |
Breaking the Curse of Dimensionality with Convex Neural Networks | cs.LG math.OC math.ST stat.TH | We consider neural networks with a single hidden layer and non-decreasing
homogeneous activation functions like the rectified linear units. By letting
the number of hidden units grow unbounded and using classical non-Euclidean
regularization tools on the output weights, we provide a detailed theoretical
analysis of their generalization performance, with a study of both the
approximation and the estimation errors. We show in particular that they are
adaptive to unknown underlying linear structures, such as the dependence on the
projection of the input variables onto a low-dimensional subspace. Moreover,
when using sparsity-inducing norms on the input weights, we show that
high-dimensional non-linear variable selection may be achieved, without any
strong assumption regarding the data and with a total number of variables
potentially exponential in the number of observations. In addition, we provide
a simple geometric interpretation to the non-convex problem of addition of a
new unit, which is the core potentially hard computational element in the
framework of learning from continuously many basis functions. We provide simple
conditions for convex relaxations to achieve the same generalization error
bounds, even when constant-factor approximations cannot be found (e.g.,
because it is NP-hard such as for the zero-homogeneous activation function). We
were not able to find strong enough convex relaxations and leave open the
existence or non-existence of polynomial-time algorithms.
| Francis Bach (LIENS, SIERRA) | null | 1412.8690 | null | null |
Discriminative Clustering with Relative Constraints | cs.LG | We study the problem of clustering with relative constraints, where each
constraint specifies relative similarities among instances. In particular, each
constraint $(x_i, x_j, x_k)$ is acquired by posing a query: is instance $x_i$
more similar to $x_j$ than to $x_k$? We consider the scenario where answers to
such queries are based on an underlying (but unknown) class concept, which we
aim to discover via clustering. Different from most existing methods that only
consider constraints derived from yes and no answers, we also incorporate don't
know responses. We introduce a Discriminative Clustering method with Relative
Constraints (DCRC) which assumes a natural probabilistic relationship between
instances, their underlying cluster memberships, and the observed constraints.
The objective is to maximize the model likelihood given the constraints, and in
the meantime enforce cluster separation and cluster balance by also making use
of the unlabeled instances. We evaluated the proposed method using constraints
generated from ground-truth class labels, and from (noisy) human judgments from
a user study. Experimental results demonstrate: 1) the usefulness of relative
constraints, in particular when don't know answers are considered; 2) the
improved performance of the proposed method over state-of-the-art methods that
utilize either relative or pairwise constraints; and 3) the robustness of our
method in the presence of noisy constraints, such as those provided by human
judgement.
| Yuanli Pei, Xiaoli Z. Fern, R\'omer Rosales, Teresa Vania Tjahja | null | 1501.00037 | null | null |
Detailed Derivations of Small-Variance Asymptotics for some Hierarchical
Bayesian Nonparametric Models | stat.ML cs.LG | In this note we provide detailed derivations of two versions of
small-variance asymptotics for hierarchical Dirichlet process (HDP) mixture
models and the HDP hidden Markov model (HDP-HMM, a.k.a. the infinite HMM). We
include derivations for the probabilities of certain CRP and CRF partitions,
which are of more general interest.
| Jonathan H. Huggins, Ardavan Saeedi, and Matthew J. Johnson | null | 1501.00052 | null | null |
ModDrop: adaptive multi-modal gesture recognition | cs.CV cs.HC cs.LG | We present a method for gesture detection and localisation based on
multi-scale and multi-modal deep learning. Each visual modality captures
spatial information at a particular spatial scale (such as motion of the upper
body or a hand), and the whole system operates at three temporal scales. Key to
our technique is a training strategy which exploits: i) careful initialization
of individual modalities; and ii) gradual fusion involving random dropping of
separate channels (dubbed ModDrop) for learning cross-modality correlations
while preserving uniqueness of each modality-specific representation. We
present experiments on the ChaLearn 2014 Looking at People Challenge gesture
recognition track, in which we placed first out of 17 teams. Fusing multiple
modalities at several spatial and temporal scales leads to a significant
increase in recognition rates, allowing the model to compensate for errors of
the individual classifiers as well as noise in the separate channels.
Furthermore, the proposed ModDrop training technique ensures robustness of the
classifier to missing signals in one or several channels to produce meaningful
predictions from any number of available modalities. In addition, we
demonstrate the applicability of the proposed fusion scheme to modalities of
arbitrary nature by experiments on the same dataset augmented with audio.
| Natalia Neverova and Christian Wolf and Graham W. Taylor and Florian
Nebout | null | 1501.00102 | null | null |
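The core of the ModDrop idea reduces to a per-modality Bernoulli mask applied
during fusion training, sketched below; the drop probability and the choice to
leave inputs untouched at test time are illustrative assumptions.

    import numpy as np

    def moddrop(modalities, p_drop=0.1, training=True, rng=None):
        # Zero out each modality's input independently with probability
        # p_drop during training, so the fusion layers learn to predict
        # from any subset of available channels.
        if not training:
            return modalities
        rng = rng or np.random.default_rng()
        return [m * (rng.random() >= p_drop) for m in modalities]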
Maximum Margin Clustering for State Decomposition of Metastable Systems | cs.LG cs.NA cs.SY math.NA physics.data-an | When studying a metastable dynamical system, a prime concern is how to
decompose the phase space into a set of metastable states. Unfortunately, the
metastable state decomposition based on simulation or experimental data is
still a challenge. The most popular and simplest approach is geometric
clustering which is developed based on the classical clustering technique.
However, the prerequisites of this approach are: (1) data are obtained from
simulations or experiments which are in global equilibrium and (2) the
coordinate system is appropriately selected. Recently, the kinetic clustering
approach based on phase space discretization and transition probability
estimation has drawn much attention due to its applicability to more general
cases, but the choice of discretization policy is a difficult task. In this
paper, a new decomposition method designated as maximum margin metastable
clustering is proposed, which converts the problem of metastable state
decomposition to a semi-supervised learning problem so that the large margin
technique can be utilized to search for the optimal decomposition without phase
space discretization. Moreover, several simulation examples are given to
illustrate the effectiveness of the proposed method.
| Hao Wu | null | 1501.00125 | null | null |
ACCAMS: Additive Co-Clustering to Approximate Matrices Succinctly | cs.LG stat.ML | Matrix completion and approximation are popular tools to capture a user's
preferences for recommendation and to approximate missing data. Instead of
using low-rank factorization we take a drastically different approach, based on
the simple insight that an additive model of co-clusterings allows one to
approximate matrices efficiently. This allows us to build a concise model that,
per bit of model learned, significantly beats all factorization approaches to
matrix approximation. Even more surprisingly, we find that summing over small
co-clusterings is more effective in modeling matrices than classic
co-clustering, which uses just one large partitioning of the matrix.
Following Occam's razor principle suggests that the simple structure induced
by our model better captures the latent preferences and decision making
processes present in the real world than classic co-clustering or matrix
factorization. We provide an iterative minimization algorithm, a collapsed
Gibbs sampler, theoretical guarantees for matrix approximation, and excellent
empirical evidence for the efficacy of our approach. We achieve
state-of-the-art results on the Netflix problem with a fraction of the model
complexity.
| Alex Beutel, Amr Ahmed and Alexander J. Smola | null | 1501.00199 | null | null |
Communication-Efficient Distributed Optimization of Self-Concordant
Empirical Loss | math.OC cs.LG stat.ML | We consider distributed convex optimization problems originated from sample
average approximation of stochastic optimization, or empirical risk
minimization in machine learning. We assume that each machine in the
distributed computing system has access to a local empirical loss function,
constructed with i.i.d. data sampled from a common distribution. We propose a
communication-efficient distributed algorithm to minimize the overall empirical
loss, which is the average of the local empirical losses. The algorithm is
based on an inexact damped Newton method, where the inexact Newton steps are
computed by a distributed preconditioned conjugate gradient method. We analyze
its iteration complexity and communication efficiency for minimizing
self-concordant empirical loss functions, and discuss the results for
distributed ridge regression, logistic regression and binary classification
with a smoothed hinge loss. In a standard setting for supervised learning, the
required number of communication rounds of the algorithm does not increase with
the sample size, and only grows slowly with the number of machines.
| Yuchen Zhang and Lin Xiao | null | 1501.00263 | null | null |
Consistent Classification Algorithms for Multi-class Non-Decomposable
Performance Metrics | cs.LG stat.ML | We study consistency of learning algorithms for a multi-class performance
metric that is a non-decomposable function of the confusion matrix of a
classifier and cannot be expressed as a sum of losses on individual data
points; examples of such performance metrics include the macro F-measure
popular in information retrieval and the G-mean metric used in class-imbalanced
problems. While there has been much work in recent years in understanding the
consistency properties of learning algorithms for `binary' non-decomposable
metrics, little is known either about the form of the optimal classifier for a
general multi-class non-decomposable metric, or about how these learning
algorithms generalize to the multi-class case. In this paper, we provide a
unified framework for analysing a multi-class non-decomposable performance
metric, where the problem of finding the optimal classifier for the performance
metric is viewed as an optimization problem over the space of all confusion
matrices achievable under the given distribution. Using this framework, we show
that (under a continuous distribution) the optimal classifier for a multi-class
performance metric can be obtained as the solution of a cost-sensitive
classification problem, thus generalizing several previous results on specific
binary non-decomposable metrics. We then design a consistent learning algorithm
for concave multi-class performance metrics that proceeds via a sequence of
cost-sensitive classification problems, and can be seen as applying the
conditional gradient (CG) optimization method over the space of feasible
confusion matrices. To our knowledge, this is the first efficient learning
algorithm (whose running time is polynomial in the number of classes) that is
consistent for a large family of multi-class non-decomposable metrics. Our
consistency proof uses a novel technique based on the convergence analysis of
the CG method.
| Harish G. Ramaswamy, Harikrishna Narasimhan, Shivani Agarwal | null | 1501.00287 | null | null |
Sequence Modeling using Gated Recurrent Neural Networks | cs.NE cs.LG | In this paper, we have used Recurrent Neural Networks to capture and model
human motion data and generate motions by prediction of the next immediate data
point at each time-step. Our RNN is armed with recently proposed Gated
Recurrent Units, which have shown promising results in some sequence modeling
problems such as Machine Translation and Speech Synthesis. We demonstrate that
this model is able to capture long-term dependencies in data and generate
realistic motions.
| Mohammad Pezeshki | null | 1501.00299 | null | null |
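For reference, one step of a GRU in the common formulation (biases omitted for
brevity): the update gate z interpolates between the previous hidden state and
a candidate state computed from a reset-gated history.

    import numpy as np

    def gru_step(x, h, Wz, Uz, Wr, Ur, Wc, Uc):
        sig = lambda a: 1.0 / (1.0 + np.exp(-a))
        z = sig(Wz @ x + Uz @ h)            # update gate
        r = sig(Wr @ x + Ur @ h)            # reset gate
        c = np.tanh(Wc @ x + Uc @ (r * h))  # candidate state
        return (1.0 - z) * h + z * c        # new hidden state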
A robust sub-linear time R-FFAST algorithm for computing a sparse DFT | cs.IT cs.LG math.IT | The Fast Fourier Transform (FFT) is the most efficient known way to compute
the Discrete Fourier Transform (DFT) of an arbitrary n-length signal, and has a
computational complexity of O(n log n). If the DFT X of the signal x has only k
non-zero coefficients (where k < n), can we do better? In [1], we addressed
this question and presented a novel FFAST (Fast Fourier Aliasing-based Sparse
Transform) algorithm that cleverly induces sparse graph alias codes in the DFT
domain, via a Chinese-Remainder-Theorem (CRT)-guided sub-sampling operation of
the time-domain samples. The resulting sparse graph alias codes are then
exploited to devise a fast and iterative onion-peeling style decoder that
computes an n length DFT of a signal using only O(k) time-domain samples and
O(klog k) computations. The FFAST algorithm is applicable whenever k is
sub-linear in n (i.e. k = o(n)), but is obviously most attractive when k is
much smaller than n.
In this paper, we adapt the FFAST framework of [1] to the case where the
time-domain samples are corrupted by a white Gaussian noise. In particular, we
show that the extended noise robust algorithm R-FFAST computes an n-length
k-sparse DFT X using O(klog ^3 n) noise-corrupted time-domain samples, in
O(klog^4n) computations, i.e., sub-linear time complexity. While our
theoretical results are for signals with a uniformly random support of the
non-zero DFT coefficients and additive white Gaussian noise, we provide
simulation results which demonstrate that the R-FFAST algorithm performs well
even for signals like MR images, that have an approximately sparse Fourier
spectrum with a non-uniform support for the dominant DFT coefficients.
| Sameer Pawar and Kannan Ramchandran | null | 1501.00320 | null | null |
Multi-Access Communications with Energy Harvesting: A Multi-Armed Bandit
Model and the Optimality of the Myopic Policy | cs.IT cs.LG math.IT | A multi-access wireless network with N transmitting nodes, each equipped with
an energy harvesting (EH) device and a rechargeable battery of finite capacity,
is studied. At each time slot (TS) a node is operative with a certain
probability, which may depend on the availability of data, or the state of its
channel. The energy arrival process at each node is modelled as an independent
two-state Markov process, such that, at each TS, a node either harvests one
unit of energy, or none. At each TS a subset of the nodes is scheduled by the
access point (AP). The scheduling policy that maximises the total throughput is
studied assuming that the AP does not know the states of either the EH
processes or the batteries. The problem is identified as a restless multiarmed
bandit (RMAB) problem, and an upper bound on the optimal scheduling policy is
found. Under certain assumptions regarding the EH processes and the battery
sizes, the optimality of the myopic policy (MP) is proven. For the general
case, the performance of MP is compared numerically to the upper bound.
| Pol Blasco and Deniz Gunduz | null | 1501.00329 | null | null |
Comprehend DeepWalk as Matrix Factorization | cs.LG | Word2vec, as an efficient tool for learning vector representations of words,
has shown its effectiveness in many natural language processing tasks. Mikolov
et al. proposed the Skip-Gram and Negative Sampling models on which this
toolbox is built. Perozzi et al. introduced the Skip-Gram model into the study
of social networks for the first time, and designed an algorithm named DeepWalk for
learning node embedding on a graph. We prove that the DeepWalk algorithm is
actually factoring a matrix M where each entry M_{ij} is logarithm of the
average probability that node i randomly walks to node j in a fixed number of steps.
| Cheng Yang and Zhiyuan Liu | null | 1501.00358 | null | null |
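The matrix in question can be built directly from the adjacency matrix, as the
sketch below does for small graphs (the small epsilon keeping the logarithm
finite is my own assumption, and isolated nodes are assumed absent); a
low-rank factorization of M then yields node embeddings.

    import numpy as np

    def deepwalk_matrix(A, T):
        # M[i, j] = log of the average probability that a random walk
        # from node i is at node j after t steps, averaged over t = 1..T.
        P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transitions
        Pt = np.eye(A.shape[0])
        acc = np.zeros_like(P, dtype=float)
        for _ in range(T):
            Pt = Pt @ P
            acc += Pt
        return np.log(acc / T + 1e-12)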
Passing Expectation Propagation Messages with Kernel Methods | stat.ML cs.LG | We propose to learn a kernel-based message operator which takes as input all
expectation propagation (EP) incoming messages to a factor node and produces an
outgoing message. In ordinary EP, computing an outgoing message involves
estimating a multivariate integral which may not have an analytic expression.
Learning such an operator allows one to bypass the expensive computation of the
integral during inference by directly mapping all incoming messages into an
outgoing message. The operator can be learned from training data (examples of
input and output messages) which allows automated inference to be made on any
kind of factor that can be sampled.
| Wittawat Jitkrittum, Arthur Gretton, Nicolas Heess | null | 1501.00375 | null | null |
Efficiently Discovering Frequent Motifs in Large-scale Sensor Data | cs.DB cs.LG | While analyzing vehicular sensor data, we found that frequently occurring
waveforms could serve as features for further analysis, such as rule mining,
classification, and anomaly detection. The discovery of waveform patterns, also
known as time-series motifs, has been studied extensively; however, available
techniques for discovering frequently occurring time-series motifs were found
lacking in either efficiency or quality: Standard subsequence clustering
results in poor quality, to the extent that it has even been termed
'meaningless'. Variants of hierarchical clustering using techniques for
efficient discovery of 'exact pair motifs' find high-quality frequent motifs,
but at the cost of high computational complexity, making such techniques
unusable for our voluminous vehicular sensor data. We show that good quality
frequent motifs can be discovered using bounded spherical clustering of
time-series subsequences, which we refer to as COIN clustering, with near
linear complexity in time-series size. COIN clustering addresses many of the
challenges that previously led to subsequence clustering being viewed as
meaningless. We describe an end-to-end motif-discovery procedure using a
sequence of pre and post-processing techniques that remove trivial-matches and
shifted-motifs, which also plagued previous subsequence-clustering approaches.
We demonstrate that our technique efficiently discovers frequent motifs in
voluminous vehicular sensor data as well as in publicly available data sets.
| Puneet Agarwal, Gautam Shroff, Sarmimala Saikia, and Zaigham Khan | null | 1501.00405 | null | null |
Computational Feasibility of Clustering under Clusterability Assumptions | cs.CC cs.LG | It is well known that most of the common clustering objectives are NP-hard to
optimize. In practice, however, clustering is being routinely carried out. One
approach for providing theoretical understanding of this seeming discrepancy is
to come up with notions of clusterability that distinguish realistically
interesting input data from worst-case data sets. The hope is that there will
be clustering algorithms that are provably efficient on such 'clusterable'
instances. In other words, hope that "Clustering is difficult only when it does
not matter" (CDNM thesis, for short).
We believe that to some extent this may indeed be the case. This paper
provides a survey of recent papers along this line of research and a critical
evaluation of their results. Our bottom-line conclusion is that the CDNM thesis
is still far from being formally substantiated. We start by discussing which
requirements should be met in order to formally support the validity of the
CDNM thesis. In particular, we list some implied requirements for notions
of clusterability. We then examine existing results in view of those
requirements and outline some research challenges and open questions.
| Shai Ben-David | null | 1501.00437 | null | null |
An Empirical Study of the L2-Boost technique with Echo State Networks | cs.LG cs.NE | A particular case of Recurrent Neural Network (RNN) was introduced at the
beginning of the 2000s under the name of Echo State Networks (ESNs). The ESN
model overcomes the limitations during the training of the RNNs while
introducing no significant disadvantages, although the model presents some
well-identified drawbacks when the parameters are not well initialised. The
performance of an ESN is highly dependent on its internal parameters and on the
pattern of connectivity of the hidden-hidden weights. Often, the tuning of the
network parameters can be hard and can impact the accuracy of the models.
In this work, we investigate the performance of a specific boosting technique
(called L2-Boost) with ESNs as single predictors. The L2-Boost technique has
been shown to be an effective tool to combine "weak" predictors in regression
problems. In this study, we use an ensemble of randomly initialized ESNs
(without controlling their parameters) as "weak" predictors of the boosting
procedure. We evaluate our approach on five well-known time-series benchmark
problems.
Additionally, we compare this technique with a baseline approach that consists
of averaging the prediction of an ensemble of ESNs.
| Sebasti\'an Basterrech | null | 1501.00503 | null | null |
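The L2-Boost procedure itself is generic and can be sketched independently of
the base learner; fit_weak below is a placeholder standing in for training one
randomly initialized ESN on the current residuals, and the shrinkage factor nu
is an illustrative choice.

    import numpy as np

    def l2_boost(X, y, fit_weak, n_rounds=50, nu=0.1):
        # fit_weak(X, r) returns a callable predictor trained on (X, r).
        residual = np.asarray(y, dtype=float).copy()
        learners = []
        for _ in range(n_rounds):
            h = fit_weak(X, residual)   # fit a weak learner to residuals
            residual -= nu * h(X)       # take a shrunken boosting step
            learners.append(h)
        return lambda Xn: nu * sum(h(Xn) for h in learners)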
The Learnability of Unknown Quantum Measurements | quant-ph cs.LG stat.ML | Quantum machine learning has received significant attention in recent years,
and promising progress has been made in the development of quantum algorithms
to speed up traditional machine learning tasks. In this work, however, we focus
on investigating the information-theoretic upper bounds of sample complexity -
how many training samples are sufficient to predict the future behaviour of an
unknown target function. This kind of problem is, arguably, one of the most
fundamental problems in statistical learning theory and the bounds for
practical settings can be completely characterised by a simple measure of
complexity.
Our main result in the paper is that, for learning an unknown quantum
measurement, the upper bound, given by the fat-shattering dimension, is
linearly proportional to the dimension of the underlying Hilbert space.
Learning an unknown quantum state becomes a dual problem to ours, and as a
byproduct, we can recover Aaronson's famous result [Proc. R. Soc. A
463:3089-3144 (2007)] solely using a classical machine learning technique. In
addition, other famous complexity measures like covering numbers and Rademacher
complexities are derived explicitly. We are able to connect measures of sample
complexity with various areas in quantum information science, e.g. quantum
state/measurement tomography, quantum state discrimination and quantum random
access codes, which may be of independent interest. Lastly, with the assistance
of general Bloch-sphere representation, we show that learning quantum
measurements/states can be mathematically formulated as a neural network.
Consequently, classical ML algorithms can be applied to efficiently accomplish
the two quantum learning tasks.
| Hao-Chung Cheng, Min-Hsiu Hsieh, Ping-Cheng Yeh | null | 1501.00559 | null | null |
Evaluation of Predictive Data Mining Algorithms in Erythemato-Squamous
Disease Diagnosis | cs.LG cs.CE | A lot of time is spent searching for the best-performing data mining
algorithms applied in clinical diagnosis. The study set out to identify the
best-performing predictive data mining algorithms applied in the diagnosis of
Erythemato-squamous diseases. The study used Naive Bayes, Multilayer Perceptron
and J48 decision tree induction to build predictive data mining models on 366
instances of Erythemato-squamous diseases datasets. Also, 10-fold
cross-validation and sets of performance metrics were used to evaluate the
baseline predictive performance of the classifiers. The comparative analysis
shows that the Naive Bayes performed best with accuracy of 97.4%, Multilayer
Perceptron came out second with accuracy of 96.6%, and J48 came out the worst
with accuracy of 93.5%. The evaluation of these classifiers on clinical
datasets gave an insight into the predictive ability of different data mining
algorithms applicable in clinical diagnosis especially in the diagnosis of
Erythemato-squamous diseases.
| Kwetishe Danjuma and Adenike O. Osofisan | null | 1501.00607 | null | null |
On Enhancing The Performance Of Nearest Neighbour Classifiers Using
Hassanat Distance Metric | cs.LG | We showed in this work how the Hassanat distance metric enhances the
performance of the nearest neighbour classifiers. The results demonstrate the
superiority of this distance metric over the traditional and most-used
distances, such as Manhattan distance and Euclidean distance. Moreover, we
proved that the Hassanat distance metric is invariant to data scale, noise and
outliers. Throughout this work, it is clearly notable that both ENN and IINC
performed very well with the distance investigated, as their accuracy increased
significantly by 3.3% and 3.1% respectively, with no significant advantage of
the ENN over the IINC in terms of accuracy. Correspondingly, it can be noted
from our results that there is no optimal algorithm that can solve all
real-life problems perfectly; this is supported by the no-free-lunch theorem.
| Mouhammd Alkasassbeh, Ghada A. Altarawneh, Ahmad B. A. Hassanat | null | 1501.00687 | null | null |
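For reference, a sketch of the metric as commonly stated in the literature
(the exact formula here is quoted from memory and should be checked against
the original definition): each dimension contributes a value in [0, 1), which
is what makes the distance insensitive to data scale and to large outlier
differences.

    import numpy as np

    def hassanat_distance(a, b):
        lo, hi = np.minimum(a, b), np.maximum(a, b)
        shift = np.where(lo >= 0, 0.0, -lo)  # shift negative pairs to zero
        # Per-dimension term: 1 - (1 + lo + shift) / (1 + hi + shift)
        return float(np.sum(1.0 - (1.0 + lo + shift) / (1.0 + hi + shift)))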
Differential Search Algorithm-based Parametric Optimization of Fuzzy
Generalized Eigenvalue Proximal Support Vector Machine | cs.LG | Support Vector Machine (SVM) is an effective model for many classification
problems. However, SVM needs the solution of a quadratic program, which
requires specialized code. In addition, SVM has many parameters, which affect
the performance of the SVM classifier. Recently, the Generalized Eigenvalue
Proximal SVM (GEPSVM) has been presented to reduce the complexity of SVM. In
real-world applications, data may be affected by error or noise; working with
such data is a challenging problem. In this paper, an approach is proposed to overcome
this problem. This method is called DSA-GEPSVM. The main improvements are
carried out based on the following: 1) novel fuzzy membership values in the linear case.
2) A new Kernel function in the nonlinear case. 3) Differential Search
Algorithm (DSA) is reformulated to find near optimal values of the GEPSVM
parameters and its kernel parameters. The experimental results show that the
proposed approach is able to find the suitable parameter values, and has higher
classification accuracy compared with some other algorithms.
| M. H. Marghny, Rasha M. Abd ElAziz, Ahmed I. Taloba | null | 1501.00728 | null | null |
A Deep-structured Conditional Random Field Model for Object Silhouette
Tracking | cs.CV cs.LG stat.ML | In this work, we introduce a deep-structured conditional random field
(DS-CRF) model for the purpose of state-based object silhouette tracking. The
proposed DS-CRF model consists of a series of state layers, where each state
layer spatially characterizes the object silhouette at a particular point in
time. The interactions between adjacent state layers are established by
inter-layer connectivity dynamically determined based on inter-frame optical
flow. By incorporating both spatial and temporal context in a dynamic fashion
within such a deep-structured probabilistic graphical model, the proposed
DS-CRF model allows us to develop a framework that can accurately and
efficiently track object silhouettes that can change greatly over time, as well
as under different situations such as occlusion and multiple targets within the
scene. Experimental results using video surveillance datasets containing
different scenarios such as occlusion and multiple targets showed that the
proposed DS-CRF approach provides strong object silhouette tracking performance
when compared to baseline methods such as mean-shift tracking, as well as
state-of-the-art methods such as context tracking and boosted particle
filtering.
| Mohammad Shafiee, Zohreh Azimifar, and Alexander Wong | 10.1371/journal.pone.0133036 | 1501.00752 | null | null |
Hashing with binary autoencoders | cs.LG cs.CV math.OC stat.ML | An attractive approach for fast search in image databases is binary hashing,
where each high-dimensional, real-valued image is mapped onto a
low-dimensional, binary vector and the search is done in this binary space.
Finding the optimal hash function is difficult because it involves binary
constraints, and most approaches approximate the optimization by relaxing the
constraints and then binarizing the result. Here, we focus on the binary
autoencoder model, which seeks to reconstruct an image from the binary code
produced by the hash function. We show that the optimization can be simplified
with the method of auxiliary coordinates. This reformulates the optimization as
alternating two easier steps: one that learns the encoder and decoder
separately, and one that optimizes the code for each image. Image retrieval
experiments, using precision/recall and a measure of code utilization, show the
resulting hash function outperforms or is competitive with state-of-the-art
methods for binary hashing.
| Miguel \'A. Carreira-Perpi\~n\'an, Ramin Raziperchikolaei | null | 1501.00756 | null | null |
Sparse Deep Stacking Network for Image Classification | cs.CV cs.LG cs.NE | Sparse coding can learn representations that are robust to noise and can
model higher-order structure for image classification. However, the inference
algorithm is computationally expensive even though the supervised signals are
used to learn compact and discriminative dictionaries in sparse coding
techniques. Luckily, a simplified neural network module (SNNM) has been
proposed to directly learn the discriminative dictionaries for avoiding the
expensive inference. But the SNNM module ignores the sparse representations.
Therefore, we propose a sparse SNNM module by adding the mixed-norm
regularization (l1/l2 norm). The sparse SNNM modules are further stacked to
build a sparse deep stacking network (S-DSN). In the experiments, we evaluate
S-DSN with four databases, including Extended YaleB, AR, 15 scene and
Caltech101. Experimental results show that our model outperforms related
classification methods with only a linear classifier. It is worth noting that
we reach 98.8% recognition accuracy on 15 scene.
| Jun Li, Heyou Chang, Jian Yang | null | 1501.00777 | null | null |
Reinforcement Learning and Nonparametric Detection of Game-Theoretic
Equilibrium Play in Social Networks | cs.GT cs.LG cs.SI stat.ML | This paper studies two important signal processing aspects of equilibrium
behavior in non-cooperative games arising in social networks, namely,
reinforcement learning and detection of equilibrium play. The first part of the
paper presents a reinforcement learning (adaptive filtering) algorithm that
facilitates learning an equilibrium by resorting to diffusion cooperation
strategies in a social network. Agents form homophilic social groups, within
which they exchange past experiences over an undirected graph. It is shown
that, if all agents follow the proposed algorithm, their global behavior is
attracted to the correlated equilibria set of the game. The second part of the
paper provides a test to detect if the actions of agents are consistent with
play from the equilibrium of a concave potential game. The theory of revealed
preference from microeconomics is used to construct a non-parametric decision
test and statistical test which only require the probe and associated actions
of agents. A stochastic gradient algorithm is given to optimize the probe in
real time to minimize the Type-II error probabilities of the detection test
subject to specified Type-I error probability. We provide a real-world example
using the energy market, and a numerical example to detect malicious agents in
an online social network.
| Omid Namvar Gharehshiran and William Hoiles and Vikram Krishnamurthy | null | 1501.01209 | null | null |
Efficient Online Relative Comparison Kernel Learning | cs.LG | Learning a kernel matrix from relative comparison human feedback is an
important problem with applications in collaborative filtering, object
retrieval, and search. For learning a kernel over a large number of objects,
existing methods face significant scalability issues inhibiting the application
of these methods to settings where a kernel is learned in an online and timely
fashion. In this paper we propose a novel framework called Efficient online
Relative comparison Kernel LEarning (ERKLE), for efficiently learning the
similarity of a large set of objects in an online manner. We learn a kernel
from relative comparisons via stochastic gradient descent, one query response
at a time, by taking advantage of the sparse and low-rank properties of the
gradient to efficiently restrict the kernel to lie in the space of positive
semidefinite matrices. In addition, we derive a passive-aggressive online
update for minimally satisfying new relative comparisons so as not to disrupt the
influence of previously obtained comparisons. Experimentally, we demonstrate a
considerable improvement in speed while obtaining improved or comparable
accuracy compared to current methods in the online learning setting.
| Eric Heim (1), Matthew Berger (2), Lee M. Seversky (2), and Milos
Hauskrecht (1) ((1) University of Pittsburgh, (2) Air Force Research
Laboratory, Information Directorate) | null | 1501.01242 | null | null |
ITCM: A Real Time Internet Traffic Classifier Monitor | cs.NI cs.LG | The continual growth of high speed networks is a challenge for real-time
network analysis systems. Real-time traffic classification is an issue for
corporations and ISPs (Internet Service Providers). This work presents the
design and implementation of a real time flow-based network traffic
classification system. The classifier monitor acts as a pipeline consisting of
three modules: packet capture and pre-processing, flow reassembly, and
classification with Machine Learning (ML). The modules are built as concurrent
processes with well defined data interfaces between them so that any module can
be improved and updated independently. In this pipeline, the flow reassembly
function becomes the performance bottleneck. In this implementation, an
efficient reassembly method was used, resulting in an average delivery delay of
approximately 0.49 seconds. For the classification module, the performances
of the K-Nearest Neighbor (KNN), C4.5 Decision Tree, Naive Bayes (NB), Flexible
Naive Bayes (FNB) and AdaBoost Ensemble Learning Algorithm are compared in
order to validate our approach.
| Silas Santiago Lopes Pereira, Jos\'e Everardo Bessa Maia and Jorge
Luiz de Castro e Silva | 10.5121/ijcsit.2014.6602 | 1501.01321 | null | null |
Deep Autoencoders for Dimensionality Reduction of High-Content Screening
Data | cs.LG | High-content screening uses large collections of unlabeled cell image data to
reason about genetics or cell biology. Two important tasks are to identify
those cells which bear interesting phenotypes, and to identify sub-populations
enriched for these phenotypes. This exploratory data analysis usually involves
dimensionality reduction followed by clustering, in the hope that clusters
represent a phenotype. We propose the use of stacked de-noising auto-encoders
to perform dimensionality reduction for high-content screening. We demonstrate
the superior performance of our approach over PCA, Local Linear Embedding,
Kernel PCA and Isomap.
| Lee Zamparo and Zhaolei Zhang | null | 1501.01348 | null | null |
Sparse Solutions to Nonnegative Linear Systems and Applications | cs.DS cs.IT cs.LG math.IT | We give an efficient algorithm for finding sparse approximate solutions to
linear systems of equations with nonnegative coefficients. Unlike most known
results for sparse recovery, we do not require {\em any} assumption on the
matrix other than non-negativity. Our algorithm is combinatorial in nature,
inspired by techniques for the set cover problem, as well as the multiplicative
weight update method.
We then present a natural application to learning mixture models in the PAC
framework. For learning a mixture of $k$ axis-aligned Gaussians in $d$
dimensions, we give an algorithm that outputs a mixture of $O(k/\epsilon^3)$
Gaussians that is $\epsilon$-close in statistical distance to the true
distribution, without any separation assumptions. The time and sample
complexity is roughly $O(kd/\epsilon^3)^{d}$. This is polynomial when $d$ is
constant -- precisely the regime in which known methods fail to identify the
components efficiently.
Given that non-negativity is a natural assumption, we believe that our result
may find use in other settings in which we wish to approximately explain data
using a small number of a (large) candidate set of components.
| Aditya Bhaskara, Ananda Theertha Suresh, Morteza Zadimoghaddam | null | 1501.01689 | null | null |
Less is More: Building Selective Anomaly Ensembles | cs.DB cs.LG | Ensemble techniques for classification and clustering have long proven
effective, yet anomaly ensembles have been barely studied. In this work, we tap
into this gap and propose a new ensemble approach for anomaly mining, with
application to event detection in temporal graphs. Our method aims to combine
results from heterogeneous detectors with varying outputs, and leverage the
evidence from multiple sources to yield better performance. However, trusting
all the results may deteriorate the overall ensemble accuracy, as some
detectors may fall short and provide inaccurate results depending on the nature
of the data at hand. This suggests that being selective in which results to
combine is vital in building effective ensembles---hence "less is more".
In this paper we propose SELECT; an ensemble approach for anomaly mining that
employs novel techniques to automatically and systematically select the results
to assemble in a fully unsupervised fashion. We apply our method to event
detection in temporal graphs, where SELECT successfully utilizes five base
detectors and seven consensus methods under a unified ensemble framework. We
provide extensive quantitative evaluation of our approach on five real-world
datasets (four with ground truth), including Enron email communications, New
York Times news corpus, and World Cup 2014 Twitter news feed. Thanks to its
selection mechanism, SELECT yields superior performance compared to individual
detectors alone, the full ensemble (naively combining all results), and an
existing diversity-based ensemble.
| Shebuti Rayana and Leman Akoglu | null | 1501.01924 | null | null |
Sequential Kernel Herding: Frank-Wolfe Optimization for Particle
Filtering | stat.ML cs.LG | Recently, the Frank-Wolfe optimization algorithm was suggested as a procedure
to obtain adaptive quadrature rules for integrals of functions in a reproducing
kernel Hilbert space (RKHS) with a potentially faster rate of convergence than
Monte Carlo integration (and "kernel herding" was shown to be a special case of
this procedure). In this paper, we propose to replace the random sampling step
in a particle filter by Frank-Wolfe optimization. By optimizing the position of
the particles, we can obtain better accuracy than random or quasi-Monte Carlo
sampling. In applications where the evaluation of the emission probabilities is
expensive (such as in robot localization), the additional computational cost to
generate the particles through optimization can be justified. Experiments on
standard synthetic examples as well as on a robot localization task indeed
indicate an improvement in accuracy over random and quasi-Monte Carlo sampling.
| Simon Lacoste-Julien (LIENS, INRIA Paris - Rocquencourt, MSR - INRIA),
Fredrik Lindsten, Francis Bach (LIENS, INRIA Paris - Rocquencourt, MSR -
INRIA) | null | 1501.02056 | null | null |
HOG based Fast Human Detection | cs.RO cs.CV cs.LG | Object recognition in images is one of the most difficult problems in
computer vision. It is also an important step for the implementation of several
existing applications that require high-level image interpretation. Therefore,
there has been growing interest in this research area in recent years. In
this paper, we present an algorithm for human detection and recognition in
real-time, from images taken by a CCD camera mounted on a car-like mobile
robot. The proposed technique is based on Histograms of Oriented Gradient (HOG)
and SVM classifier. The implementation of our detector has provided good
results, and can be used in robotics tasks.
| M. Kachouane (USTHB), S. Sahki, M. Lakrouf (CDTA, USTHB), N. Ouadah
(CDTA) | 10.1109/ICM.2012.6471380 | 1501.02058 | null | null |
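A minimal sketch with OpenCV's built-in HOG pedestrian detector follows; note
that the paper trains its own SVM on HOG features, whereas this uses the
library's pre-trained default people model for illustration, and the file name
is a placeholder.

    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    frame = cv2.imread("frame.png")  # placeholder for a grabbed camera frame
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:       # draw one box per detected person
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)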
Riemannian Metric Learning for Symmetric Positive Definite Matrices | cs.CV cs.LG | Over the past few years, symmetric positive definite (SPD) matrices have been
receiving considerable attention from computer vision community. Though various
distance measures have been proposed in the past for comparing SPD matrices,
the two most widely-used measures are affine-invariant distance and
log-Euclidean distance. This is because these two measures are true geodesic
distances induced by Riemannian geometry. In this work, we focus on the
log-Euclidean Riemannian geometry and propose a data-driven approach for
learning Riemannian metrics/geodesic distances for SPD matrices. We show that
the geodesic distance learned using the proposed approach performs better than
various existing distance measures when evaluated on face matching and
clustering tasks.
| Raviteja Vemulapalli, David W. Jacobs | null | 1501.02393 | null | null |
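The baseline log-Euclidean distance the paper starts from is simple to state:
map each SPD matrix through the matrix logarithm and compare in the resulting
flat space, as sketched below. The paper's data-driven metric then generalizes
this baseline, presumably by learning a norm on these logarithms in place of
the plain Frobenius norm.

    import numpy as np
    from scipy.linalg import logm

    def log_euclidean_distance(A, B):
        # d(A, B) = || logm(A) - logm(B) ||_F for SPD matrices A, B.
        D = np.real(logm(A) - logm(B))  # discard tiny imaginary residue
        return np.linalg.norm(D, ord="fro")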
A Gaussian Particle Filter Approach for Sensors to Track Multiple Moving
Targets | cs.LG | In a variety of problems, the number and state of multiple moving targets are
unknown and must be inferred from measurements obtained by a sensor with
limited sensing ability. This type of problem arises in a variety of
applications, including monitoring of endangered species, cleaning, and
surveillance. Particle filters are widely used to estimate target state
from its prior information and its measurements that recently become available,
especially for the cases when the measurement model and the prior distribution
of state of interest are non-Gaussian. However, the problem of estimating
number of total targets and their state becomes intractable when the number of
total targets and the measurement-target association are unknown. This paper
presents a novel Gaussian particle filter technique that combines Kalman filter
and particle filter for estimating the number and state of total targets based
on the measurement obtained online. The estimation is represented by a set of
weighted particles, different from classical particle filter, where each
particle is a Gaussian distribution instead of a point mass.
| Haojun Li | null | 1501.02411 | null | null |
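A minimal single-target sketch of the core idea: each "particle" is a Gaussian (mean, covariance) propagated with Kalman predict/update steps and reweighted by its measurement likelihood. The linear-Gaussian model below is illustrative only; the paper's handling of unknown target counts and data association is not shown.

```python
import numpy as np

def kalman_step(mean, cov, z, F, Q, H, R):
    # Kalman predict + update for one Gaussian particle.
    m_pred = F @ mean
    P_pred = F @ cov @ F.T + Q
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    resid = z - H @ m_pred
    mean_new = m_pred + K @ resid
    cov_new = (np.eye(len(mean)) - K @ H) @ P_pred
    # Likelihood of z under the predicted measurement distribution,
    # used to reweight this Gaussian particle.
    k = len(z)
    lik = np.exp(-0.5 * resid @ np.linalg.solve(S, resid)) / \
          np.sqrt((2 * np.pi) ** k * np.linalg.det(S))
    return mean_new, cov_new, lik

def gpf_update(particles, weights, z, F, Q, H, R):
    """One Gaussian-particle-filter step: Kalman-update every
    (mean, cov) particle and reweight by its measurement likelihood."""
    new_particles, liks = [], []
    for (m, P) in particles:
        m2, P2, lik = kalman_step(m, P, z, F, Q, H, R)
        new_particles.append((m2, P2))
        liks.append(lik)
    w = weights * np.array(liks)
    return new_particles, w / w.sum()
```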
Learning a Fuzzy Hyperplane Fat Margin Classifier with Minimum VC
dimension | cs.LG | The Vapnik-Chervonenkis (VC) dimension measures the complexity of a learning
machine, and a low VC dimension leads to good generalization. The recently
proposed Minimal Complexity Machine (MCM) learns a hyperplane classifier by
minimizing an exact bound on the VC dimension. This paper extends the MCM
classifier to the fuzzy domain. The use of a fuzzy membership is known to
reduce the effect of outliers, and to reduce the effect of noise on learning.
Experimental results show that, on a number of benchmark datasets, the
fuzzy MCM classifier outperforms SVMs and the conventional MCM in terms of
generalization, and that the fuzzy MCM uses fewer support vectors. On several
benchmark datasets, the fuzzy MCM classifier yields excellent test set
accuracies while using one-tenth the number of support vectors used by SVMs.
| Jayadeva, Sanjit Singh Batra, and Siddarth Sabharwal | null | 1501.02432 | null | null |
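The MCM itself is solved as a linear program minimizing an exact VC-dimension bound, which is not reproduced here. The sketch below only illustrates the fuzzy-membership idea the abstract invokes: weight each training point by its distance to its class mean so that outliers contribute less, here plugged into a standard SVM via sample weights as a stand-in for the fuzzy MCM.

```python
import numpy as np
from sklearn.svm import SVC

def fuzzy_memberships(X, y, delta=1e-3):
    """Distance-to-class-mean memberships in (0, 1]: points far from
    their class mean (likely outliers/noise) receive small weights."""
    s = np.empty(len(y), dtype=float)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        d = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
        r = d.max() + 1e-12                      # class "radius"
        s[idx] = 1.0 - d / (r + delta)
    return s

# Toy usage: memberships downweight an injected outlier in an SVM fit.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
X[0] = [5.0, 5.0]                                # inject an outlier
clf = SVC(kernel='linear').fit(X, y, sample_weight=fuzzy_memberships(X, y))
```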
Crowd-ML: A Privacy-Preserving Learning Framework for a Crowd of Smart
Devices | cs.LG cs.CR cs.DC cs.NI | Smart devices with built-in sensors, computational capabilities, and network
connectivity have become increasingly pervasive. The crowds of smart devices
offer opportunities to collectively sense and perform computing tasks in an
unprecedented scale. This paper presents Crowd-ML, a privacy-preserving machine
learning framework for a crowd of smart devices, which can solve a wide range
of learning problems for crowdsensing data with differential privacy
guarantees. Crowd-ML endows a crowdsensing system with an ability to learn
classifiers or predictors online from crowdsensing data privately with minimal
computational overhead on devices and servers, making the framework suitable for
practical, large-scale deployment. We analyze the performance and the
scalability of Crowd-ML, and implement the system with off-the-shelf
smartphones as a proof of concept. We demonstrate the advantages of Crowd-ML
with real and simulated experiments under various conditions.
| Jihun Hamm, Adam Champion, Guoxing Chen, Mikhail Belkin, Dong Xuan | null | 1501.02484 | null | null |
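Crowd-ML's core pattern — devices compute local gradients, perturb them for differential privacy, and a server aggregates them into an online classifier update — can be sketched as follows. The logistic-loss gradient, norm clipping, and Laplace noise scale are illustrative choices, not the paper's exact mechanism; in particular, the noise scale must be calibrated to the gradient's true sensitivity in any real deployment.

```python
import numpy as np

def private_gradient(w, x, y, eps, clip=1.0, rng=np.random.default_rng()):
    """Device-side: logistic-loss gradient for one example (y in {-1,+1}),
    norm-clipped and perturbed with Laplace noise for privacy."""
    g = -y * x / (1.0 + np.exp(y * (w @ x)))
    g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # bound sensitivity
    # Noise scale is illustrative; calibrate to sensitivity and eps in practice.
    return g + rng.laplace(scale=2.0 * clip / eps, size=g.shape)

def server_update(w, grads, lr=0.1):
    # Server-side: average the noisy device gradients, take one SGD step.
    return w - lr * np.mean(grads, axis=0)

# Toy usage: 100 devices each contribute one privatized gradient per round.
rng = np.random.default_rng(0)
d = 10
w_true = rng.normal(size=d)
w = np.zeros(d)
for _ in range(50):                      # 50 communication rounds
    X = rng.normal(size=(100, d))
    y = np.sign(X @ w_true)
    grads = [private_gradient(w, X[i], y[i], eps=1.0, rng=rng) for i in range(100)]
    w = server_update(w, grads)
```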
Photonic Delay Systems as Machine Learning Implementations | cs.NE cs.LG | Nonlinear photonic delay systems present interesting implementation platforms
for machine learning models. They can be extremely fast, offer a high degree of
parallelism, and potentially consume far less power than digital processors. So
far they have been successfully employed for signal processing using the
Reservoir Computing paradigm. In this paper we show that their range of
applicability can be greatly extended if we use gradient descent with
backpropagation through time on a model of the system to optimize the input
encoding of such systems. We perform physical experiments that demonstrate that
the obtained input encodings work well in reality, and we show that optimized
systems perform significantly better than the common Reservoir Computing
approach. The results presented here demonstrate that common gradient descent
techniques from machine learning may well be applicable on physical
neuro-inspired analog computers.
| Michiel Hermans, Miguel Soriano, Joni Dambre, Peter Bienstman, Ingo
Fischer | null | 1501.02592 | null | null |
Combining Language and Vision with a Multimodal Skip-gram Model | cs.CL cs.CV cs.LG | We extend the SKIP-GRAM model of Mikolov et al. (2013a) by taking visual
information into account. Like SKIP-GRAM, our multimodal models (MMSKIP-GRAM)
build vector-based word representations by learning to predict linguistic
contexts in text corpora. However, for a restricted set of words, the models
are also exposed to visual representations of the objects they denote
(extracted from natural images), and must predict linguistic and visual
features jointly. The MMSKIP-GRAM models achieve good performance on a variety
of semantic benchmarks. Moreover, since they propagate visual information to
all words, we use them to improve image labeling and retrieval in the zero-shot
setup, where the test concepts are never seen during model training. Finally,
the MMSKIP-GRAM models discover intriguing visual properties of abstract words,
paving the way to realistic implementations of embodied theories of meaning.
| Angeliki Lazaridou, Nghia The Pham, Marco Baroni | null | 1501.02598 | null | null |
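The multimodal objective couples the usual skip-gram prediction of context words with a term pushing (a mapped version of) the word vector toward the visual vector of its referent. Below is a toy NumPy rendering of one such combined loss with a max-margin visual term; the dimensions, the mapping matrix M, the margin, and the single negative sample are all chosen for illustration and do not reproduce the paper's training setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mmskipgram_loss(w, ctx_pos, ctx_neg, M, v_img, v_neg, margin=0.5):
    """Toy multimodal skip-gram loss for one target word:
    - linguistic term: negative-sampling skip-gram loss over context vectors;
    - visual term: max-margin loss pulling the mapped word vector M @ w
      toward its image vector and away from a random one."""
    ling = -np.log(sigmoid(w @ ctx_pos)) - np.log(sigmoid(-w @ ctx_neg))
    u = M @ w
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    vis = max(0.0, margin - cos(u, v_img) + cos(u, v_neg))
    return ling + vis

# Toy usage with random 50-d word and 100-d visual vectors.
rng = np.random.default_rng(0)
w, cp, cn = rng.normal(size=(3, 50))
M = rng.normal(size=(100, 50)) * 0.1
vi, vn = rng.normal(size=(2, 100))
print(mmskipgram_loss(w, cp, cn, M, vi, vn))
```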
Scaling-up Empirical Risk Minimization: Optimization of Incomplete
U-statistics | stat.ML cs.AI cs.LG | In a wide range of statistical learning problems such as ranking, clustering
or metric learning among others, the risk is accurately estimated by
$U$-statistics of degree $d\geq 1$, i.e. functionals of the training data with
low variance that take the form of averages over $d$-tuples. From a
computational perspective, the calculation of such statistics is highly
expensive even for a moderate sample size $n$, as it requires averaging
$O(n^d)$ terms. This makes learning procedures relying on the optimization of
such data functionals hardly feasible in practice. It is the major goal of this
paper to show that, strikingly, such empirical risks can be replaced by
drastically computationally simpler Monte-Carlo estimates based on $O(n)$ terms
only, usually referred to as incomplete $U$-statistics, without damaging the
$O_{\mathbb{P}}(1/\sqrt{n})$ learning rate of Empirical Risk Minimization (ERM)
procedures. For this purpose, we establish uniform deviation results describing
the error made when approximating a $U$-process by its incomplete version under
appropriate complexity assumptions. Extensions to model selection, fast rate
situations and various sampling techniques are also considered, as well as an
application to stochastic gradient descent for ERM. Finally, numerical examples
are displayed in order to provide strong empirical evidence that the approach
we promote largely surpasses more naive subsampling techniques.
| St\'ephan Cl\'emen\c{c}on, Aur\'elien Bellet, Igor Colin | null | 1501.02629 | null | null |
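The computational point of the abstract is easy to see in code: instead of averaging a kernel h over all O(n^d) d-tuples, sample B = O(n) tuples at random and average those. A minimal sketch for degree d = 2, with a variance-type kernel chosen purely for illustration:

```python
import numpy as np
from itertools import combinations

def complete_U(X, h):
    # Full degree-2 U-statistic: average of h over all n-choose-2 pairs.
    return np.mean([h(X[i], X[j]) for i, j in combinations(range(len(X)), 2)])

def incomplete_U(X, h, B, rng=np.random.default_rng()):
    # Incomplete U-statistic: average of h over B pairs drawn at random,
    # an O(B) estimate of the O(n^2) complete statistic.
    n = len(X)
    idx = rng.integers(0, n, size=(B, 2))
    idx = idx[idx[:, 0] != idx[:, 1]]       # drop degenerate pairs
    return np.mean([h(X[i], X[j]) for i, j in idx])

# Toy kernel: h(x, y) = (x - y)^2 / 2, whose U-statistic estimates the variance.
h = lambda x, y: 0.5 * (x - y) ** 2
X = np.random.default_rng(0).normal(size=500)
print(complete_U(X, h), incomplete_U(X, h, B=500))
```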
Max-Cost Discrete Function Evaluation Problem under a Budget | cs.LG | We propose novel methods for the max-cost Discrete Function Evaluation Problem
(DFEP) under budget constraints. We are motivated by applications such as
clinical diagnosis where a patient is subjected to a sequence of (possibly
expensive) tests before a decision is made. Our goal is to develop strategies
for minimizing max-costs. The problem is known to be NP-hard, and greedy methods
based on specialized impurity functions have been proposed. We develop a broad
class of \emph{admissible} impurity functions that admit monomials, classes of
polynomials, and hinge-loss functions that allow for flexible impurity design
with provably optimal approximation bounds. This flexibility is important for
datasets where max-cost can be overly sensitive to "outliers." Outliers bias
max-cost to a few examples that require a large number of tests for
classification. We design admissible functions that allow for accuracy-cost
trade-off and result in $O(\log n)$ guarantees of the optimal cost among trees
with corresponding classification accuracy levels.
| Feng Nan, Joseph Wang, Venkatesh Saligrama | null | 1501.02702 | null | null |
$\ell_0$ Sparsifying Transform Learning with Efficient Optimal Updates
and Convergence Guarantees | stat.ML cs.LG | Many applications in signal processing benefit from the sparsity of signals
in a certain transform domain or dictionary. Synthesis sparsifying dictionaries
that are directly adapted to data have been popular in applications such as
image denoising, inpainting, and medical image reconstruction. In this work, we
focus instead on the sparsifying transform model, and study the learning of
well-conditioned square sparsifying transforms. The proposed algorithms
alternate between a $\ell_0$ "norm"-based sparse coding step, and a non-convex
transform update step. We derive the exact analytical solution for each of
these steps. The proposed solution for the transform update step achieves the
global minimum in that step, and also provides speedups over iterative
solutions involving conjugate gradients. We establish that our alternating
algorithms are globally convergent to the set of local minimizers of the
non-convex transform learning problems. In practice, the algorithms are
insensitive to initialization. We present results illustrating the promising
performance and significant speed-ups of transform learning over synthesis
K-SVD in image denoising.
| Saiprasad Ravishankar and Yoram Bresler | 10.1109/TSP.2015.2405503 | 1501.02859 | null | null |
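The $\ell_0$ sparse coding step that the abstract says admits an exact analytical solution is simply hard thresholding: keep the s largest-magnitude entries of each transformed signal and zero out the rest. A minimal sketch of that step follows; the non-convex transform update step, whose closed-form solution the paper derives, is not reproduced here.

```python
import numpy as np

def sparse_code(W, Y, s):
    """Exact solution of the l0 sparse coding step in transform learning:
    for each signal y (a column of Y), keep the s largest-magnitude
    entries of the transformed signal W @ y and zero out the rest."""
    Z = W @ Y
    X = np.zeros_like(Z)
    for j in range(Z.shape[1]):
        keep = np.argsort(np.abs(Z[:, j]))[-s:]   # indices of top-s entries
        X[keep, j] = Z[keep, j]
    return X

# Toy usage: a random square transform and 5-sparse codes for 20 signals.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 16))
Y = rng.normal(size=(16, 20))
X = sparse_code(W, Y, s=5)
```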