categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list) |
---|---|---|---|---|---|---|---|---|---|---|
stat.ML cs.LG
| null |
1701.03212
| null | null |
http://arxiv.org/pdf/1701.03212v4
|
2017-11-13T01:31:28Z
|
2017-01-12T02:22:45Z
|
Sparse-TDA: Sparse Realization of Topological Data Analysis for
Multi-Way Classification
|
Topological data analysis (TDA) has emerged as one of the most promising
techniques to reconstruct the unknown shapes of high-dimensional spaces from
observed data samples. TDA, thus, yields key shape descriptors in the form of
persistent topological features that can be used for any supervised or
unsupervised learning task, including multi-way classification. Sparse
sampling, on the other hand, provides a highly efficient technique to
reconstruct signals in the spatial-temporal domain from just a few
carefully-chosen samples. Here, we present a new method, referred to as the
Sparse-TDA algorithm, that combines favorable aspects of the two techniques.
This combination is realized by selecting an optimal set of sparse pixel
samples from the persistent features generated by a vector-based TDA algorithm.
These sparse samples are selected from a low-rank matrix representation of
persistent features using QR pivoting. We show that the Sparse-TDA method
demonstrates promising performance on three benchmark problems related to human
posture recognition and image texture classification.
|
[
"['Wei Guo' 'Krithika Manohar' 'Steven L. Brunton' 'Ashis G. Banerjee']",
"Wei Guo, Krithika Manohar, Steven L. Brunton and Ashis G. Banerjee"
] |
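A minimal sketch (not the authors' code) of the selection step described in the abstract above: column-pivoted QR applied to a low-rank basis of vectorized persistent-feature images to pick informative pixel samples. The matrix sizes and rank are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import qr, svd

# Toy stand-in for vectorized persistent-feature images: one column per training sample.
rng = np.random.default_rng(0)
features = rng.standard_normal((4096, 200))   # (num_pixels, num_samples)

# Low-rank basis of the feature matrix (the rank r is a modeling choice).
r = 20
U, s, Vt = svd(features, full_matrices=False)
basis = U[:, :r]                              # (num_pixels, r)

# Column-pivoted QR on the transposed basis ranks pixels by how much
# information they carry about the low-rank structure.
_, _, pivots = qr(basis.T, pivoting=True)
selected_pixels = pivots[:r]                  # indices of the r most informative pixels
print(selected_pixels)
```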
cs.CL cs.IR cs.LG
| null |
1701.03227
| null | null |
http://arxiv.org/pdf/1701.03227v3
|
2017-10-14T18:25:03Z
|
2017-01-12T04:26:00Z
|
Prior matters: simple and general methods for evaluating and improving
topic quality in topic modeling
|
Latent Dirichlet Allocation (LDA) models trained without stopword removal
often produce topics with high posterior probabilities on uninformative words,
obscuring the underlying corpus content. Even when canonical stopwords are
manually removed, uninformative words common in that corpus will still dominate
the most probable words in a topic. In this work, we first show how the
standard topic quality measures of coherence and pointwise mutual information
act counter-intuitively in the presence of common but irrelevant words, making
it difficult to even quantitatively identify situations in which topics may be
dominated by stopwords. We propose an additional topic quality metric that
targets the stopword problem, and show that it, unlike the standard measures,
correctly correlates with human judgements of quality. We also propose a
simple-to-implement strategy for generating topics that are evaluated to be of
much higher quality by both human assessment and our new metric. This approach,
a collection of informative priors easily introduced into most LDA-style
inference methods, automatically promotes terms with domain relevance and
demotes domain-specific stop words. We demonstrate this approach's
effectiveness in three very different domains: Department of Labor accident
reports, online health forum posts, and NIPS abstracts. Overall we find that
current practices thought to solve this problem do not do so adequately, and
that our proposal offers a substantial improvement for those interested in
interpreting their topics as objects in their own right.
|
[
"Angela Fan, Finale Doshi-Velez, Luke Miratrix",
"['Angela Fan' 'Finale Doshi-Velez' 'Luke Miratrix']"
] |
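A hedged illustration of the informative-prior idea from the abstract above: placing a tiny topic-word prior on known domain stop words before fitting LDA. The corpus, the stop-word list, the prior magnitudes and the use of gensim's eta matrix are all illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from gensim import corpora, models

docs = [["patient", "reported", "pain", "pain", "ladder"],
        ["worker", "fell", "ladder", "reported", "injury"],
        ["patient", "injury", "treatment", "reported"]]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

num_topics = 2
domain_stopwords = {"reported", "patient"}   # hypothetical domain-specific stop words

# Asymmetric topic-word prior: tiny prior mass on domain stop words demotes them.
eta = np.full((num_topics, len(dictionary)), 0.1)
for word in domain_stopwords:
    eta[:, dictionary.token2id[word]] = 1e-4

lda = models.LdaModel(corpus, num_topics=num_topics, id2word=dictionary,
                      eta=eta, passes=10, random_state=0)
print(lda.show_topics())
```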
cs.LG cs.NE
| null |
1701.03281
| null | null |
http://arxiv.org/pdf/1701.03281v1
|
2017-01-12T09:48:53Z
|
2017-01-12T09:48:53Z
|
Modularized Morphing of Neural Networks
|
In this work we study the problem of network morphism, an effective learning
scheme to morph a well-trained neural network to a new one with the network
function completely preserved. Unlike existing work that addressed basic
morphing types at the layer level, we target the central problem of network
morphism at a higher level, i.e., how a convolutional layer can be morphed into
an arbitrary module of a neural network. To simplify the representation of a
network, we abstract a module as a graph with blobs as vertices and
convolutional layers as edges, so that the morphing process can be formulated
as a graph transformation problem. Two atomic morphing
operations are introduced to compose the graphs, based on which modules are
classified into two families, i.e., simple morphable modules and complex
modules. We present practical morphing solutions for both of these two
families, and prove that any reasonable module can be morphed from a single
convolutional layer. Extensive experiments have been conducted based on the
state-of-the-art ResNet on benchmark datasets, and the effectiveness of the
proposed solution has been verified.
|
[
"['Tao Wei' 'Changhu Wang' 'Chang Wen Chen']",
"Tao Wei, Changhu Wang, Chang Wen Chen"
] |
cs.LG cs.AI cs.SD
| null |
1701.0336
| null | null | null | null | null |
Residual LSTM: Design of a Deep Recurrent Architecture for Distant
Speech Recognition
|
In this paper, a novel architecture for a deep recurrent neural network,
residual LSTM is introduced. A plain LSTM has an internal memory cell that can
learn long term dependencies of sequential data. It also provides a temporal
shortcut path to avoid vanishing or exploding gradients in the temporal domain.
The residual LSTM provides an additional spatial shortcut path from lower
layers for efficient training of deep networks with multiple LSTM layers.
Compared with the previous work, highway LSTM, residual LSTM separates the
spatial shortcut path from the temporal one by using output layers, which helps
to avoid a conflict between spatial and temporal-domain gradient flows.
Furthermore, residual LSTM reuses the output projection matrix and the output
gate of the LSTM to control the spatial information flow instead of additional
gate networks, which effectively reduces network parameters by more than 10%.
An experiment on distant speech recognition with the AMI SDM corpus shows that
10-layer plain and highway LSTM networks exhibited 13.7% and 6.2% increases in
WER over 3-layer baselines, respectively. In contrast, the 10-layer residual
LSTM network achieved the lowest WER of 41.0%, which corresponds to 3.3% and 2.8%
WER reduction over plain and highway LSTM networks, respectively.
|
[
"Jaeyoung Kim, Mostafa El-Khamy, and Jungwon Lee"
] |
null | null |
1701.03360
| null | null |
http://arxiv.org/pdf/1701.03360v3
|
2017-06-05T18:51:08Z
|
2017-01-10T20:03:37Z
|
Residual LSTM: Design of a Deep Recurrent Architecture for Distant
Speech Recognition
|
In this paper, a novel architecture for a deep recurrent neural network, residual LSTM is introduced. A plain LSTM has an internal memory cell that can learn long term dependencies of sequential data. It also provides a temporal shortcut path to avoid vanishing or exploding gradients in the temporal domain. The residual LSTM provides an additional spatial shortcut path from lower layers for efficient training of deep networks with multiple LSTM layers. Compared with the previous work, highway LSTM, residual LSTM separates the spatial shortcut path from the temporal one by using output layers, which helps to avoid a conflict between spatial and temporal-domain gradient flows. Furthermore, residual LSTM reuses the output projection matrix and the output gate of the LSTM to control the spatial information flow instead of additional gate networks, which effectively reduces network parameters by more than 10%. An experiment on distant speech recognition with the AMI SDM corpus shows that 10-layer plain and highway LSTM networks exhibited 13.7% and 6.2% increases in WER over 3-layer baselines, respectively. In contrast, the 10-layer residual LSTM network achieved the lowest WER of 41.0%, which corresponds to 3.3% and 2.8% WER reduction over plain and highway LSTM networks, respectively.
|
[
"['Jaeyoung Kim' 'Mostafa El-Khamy' 'Jungwon Lee']"
] |
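A rough PyTorch sketch of the spatial-shortcut idea in the Residual LSTM abstract above, under a simplified wiring assumption: each layer adds its projected LSTM output to the layer input, and the reuse of the output gate to modulate the shortcut is omitted.

```python
import torch
import torch.nn as nn

class ResidualLSTMLayer(nn.Module):
    """Simplified residual LSTM layer: temporal recurrence inside nn.LSTM,
    plus a spatial shortcut from the layer input added after the output
    projection (an approximation of the architecture in the abstract)."""
    def __init__(self, size, hidden):
        super().__init__()
        self.lstm = nn.LSTM(size, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, size, bias=False)   # output projection

    def forward(self, x):                  # x: (batch, time, size)
        h, _ = self.lstm(x)
        return self.proj(h) + x            # spatial shortcut from the lower layer

layers = nn.Sequential(*[ResidualLSTMLayer(80, 256) for _ in range(10)])
frames = torch.randn(4, 50, 80)            # toy acoustic features
print(layers(frames).shape)                # torch.Size([4, 50, 80])
```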
cs.CV cs.LG
| null |
1701.034
| null | null | null | null | null |
Scaling Binarized Neural Networks on Reconfigurable Logic
|
Binarized neural networks (BNNs) are gaining interest in the deep learning
community due to their significantly lower computational and memory cost. They
are particularly well suited to reconfigurable logic devices, which contain an
abundance of fine-grained compute resources and can result in smaller, lower
power implementations, or conversely in higher classification rates. Towards
this end, the Finn framework was recently proposed for building fast and
flexible field programmable gate array (FPGA) accelerators for BNNs. Finn
utilized a novel set of optimizations that enable efficient mapping of BNNs to
hardware and implemented fully connected, non-padded convolutional and pooling
layers, with per-layer compute resources being tailored to user-provided
throughput requirements. However, FINN was not evaluated on larger topologies
due to the size of the chosen FPGA, and exhibited decreased accuracy due to
lack of padding. In this paper, we improve upon Finn to show how padding can be
employed on BNNs while still maintaining a 1-bit datapath and high accuracy.
Based on this technique, we demonstrate numerous experiments to illustrate
flexibility and scalability of the approach. In particular, we show that a
large BNN requiring 1.2 billion operations per frame running on an ADM-PCIE-8K5
platform can classify images at 12 kFPS with 671 us latency while drawing less
than 41 W board power and classifying CIFAR-10 images at 88.7% accuracy. Our
implementation of this network achieves 14.8 trillion operations per second. We
believe this is the fastest classification rate reported to date on this
benchmark at this level of accuracy.
|
[
"Nicholas J. Fraser, Yaman Umuroglu, Giulio Gambardella, Michaela\n Blott, Philip Leong, Magnus Jahre and Kees Vissers"
] |
null | null |
1701.03400
| null | null |
http://arxiv.org/pdf/1701.03400v2
|
2017-01-27T09:12:48Z
|
2017-01-12T16:42:47Z
|
Scaling Binarized Neural Networks on Reconfigurable Logic
|
Binarized neural networks (BNNs) are gaining interest in the deep learning community due to their significantly lower computational and memory cost. They are particularly well suited to reconfigurable logic devices, which contain an abundance of fine-grained compute resources and can result in smaller, lower power implementations, or conversely in higher classification rates. Towards this end, the Finn framework was recently proposed for building fast and flexible field programmable gate array (FPGA) accelerators for BNNs. Finn utilized a novel set of optimizations that enable efficient mapping of BNNs to hardware and implemented fully connected, non-padded convolutional and pooling layers, with per-layer compute resources being tailored to user-provided throughput requirements. However, FINN was not evaluated on larger topologies due to the size of the chosen FPGA, and exhibited decreased accuracy due to lack of padding. In this paper, we improve upon Finn to show how padding can be employed on BNNs while still maintaining a 1-bit datapath and high accuracy. Based on this technique, we demonstrate numerous experiments to illustrate flexibility and scalability of the approach. In particular, we show that a large BNN requiring 1.2 billion operations per frame running on an ADM-PCIE-8K5 platform can classify images at 12 kFPS with 671 us latency while drawing less than 41 W board power and classifying CIFAR-10 images at 88.7% accuracy. Our implementation of this network achieves 14.8 trillion operations per second. We believe this is the fastest classification rate reported to date on this benchmark at this level of accuracy.
|
[
"['Nicholas J. Fraser' 'Yaman Umuroglu' 'Giulio Gambardella'\n 'Michaela Blott' 'Philip Leong' 'Magnus Jahre' 'Kees Vissers']"
] |
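A small NumPy illustration (unrelated to the FINN code base) of why binarized layers map well onto fine-grained logic: with weights and activations constrained to +/-1, a dot product reduces to XNOR plus popcount on packed bits.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.choice([-1, 1], size=256)   # binarized activations
w = rng.choice([-1, 1], size=256)   # binarized weights

# Reference: ordinary dot product over {-1, +1}.
ref = int(a @ w)

# Bit-packed equivalent: encode +1 as bit 1, -1 as bit 0, then XNOR + popcount.
a_bits = np.packbits(a > 0)
w_bits = np.packbits(w > 0)
xnor = np.bitwise_not(np.bitwise_xor(a_bits, w_bits)).astype(np.uint8)
matches = int(np.unpackbits(xnor)[: a.size].sum())   # popcount over the valid bits
assert ref == 2 * matches - a.size                   # dot = matches - mismatches
print(ref, 2 * matches - a.size)
```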
stat.ML cs.LG math.PR
| null |
1701.03449
| null | null |
http://arxiv.org/pdf/1701.03449v1
|
2017-01-12T18:36:47Z
|
2017-01-12T18:36:47Z
|
Manifold Alignment Determination: finding correspondences across
different data views
|
We present Manifold Alignment Determination (MAD), an algorithm for learning
alignments between data points from multiple views or modalities. The approach
is capable of learning correspondences between views as well as correspondences
between individual data points. The proposed method requires only a few aligned
examples, from which it is able to recover a global alignment through a
probabilistic model. The strong, yet flexible regularization provided by the
generative model is sufficient to align the views. We provide experiments on
both synthetic and real data to highlight the benefit of the proposed approach.
|
[
"Andreas Damianou, Neil D. Lawrence and Carl Henrik Ek",
"['Andreas Damianou' 'Neil D. Lawrence' 'Carl Henrik Ek']"
] |
cs.LG cs.DC
| null |
1701.03458
| null | null |
http://arxiv.org/pdf/1701.03458v1
|
2017-01-12T05:14:40Z
|
2017-01-12T05:14:40Z
|
An Asynchronous Parallel Approach to Sparse Recovery
|
Asynchronous parallel computing and sparse recovery are two areas that have
received recent interest. Asynchronous algorithms are often studied to solve
optimization problems where the cost function takes the form $\sum_{i=1}^M
f_i(x)$, with a common assumption that each $f_i$ is sparse; that is, each
$f_i$ acts only on a small number of components of $x\in\mathbb{R}^n$. Sparse
recovery problems, such as compressed sensing, can be formulated as
optimization problems, however, the cost functions $f_i$ are dense with respect
to the components of $x$, and instead the signal $x$ is assumed to be sparse,
meaning that it has only $s$ non-zeros where $s\ll n$. Here we address how one
may use an asynchronous parallel architecture when the cost functions $f_i$ are
not sparse in $x$, but rather the signal $x$ is sparse. We propose an
asynchronous parallel approach to sparse recovery via a stochastic greedy
algorithm, where multiple processors asynchronously update a vector in shared
memory containing information on the estimated signal support. We include
numerical simulations that illustrate the potential benefits of our proposed
asynchronous method.
|
[
"['Deanna Needell' 'Tina Woolf']",
"Deanna Needell, Tina Woolf"
] |
cs.GT cs.LG stat.ML
| null |
1701.03537
| null | null |
http://arxiv.org/pdf/1701.03537v2
|
2017-04-24T21:53:21Z
|
2017-01-13T01:08:35Z
|
Perishability of Data: Dynamic Pricing under Varying-Coefficient Models
|
We consider a firm that sells a large number of products to its customers in
an online fashion. Each product is described by a high dimensional feature
vector, and the market value of a product is assumed to be linear in the values
of its features. Parameters of the valuation model are unknown and can change
over time. The firm sequentially observes a product's features and can use the
historical sales data (binary sale/no-sale feedback) to set the price of the
current product, with the objective of maximizing the collected revenue. We
measure the performance of a dynamic pricing policy via regret, which is the
expected revenue loss compared to a clairvoyant that knows the sequence of
model parameters in advance.
We propose a pricing policy based on projected stochastic gradient descent
(PSGD) and characterize its regret in terms of time $T$, features dimension
$d$, and the temporal variability in the model parameters, $\delta_t$. We
consider two settings. In the first one, feature vectors are chosen
antagonistically by nature and we prove that the regret of PSGD pricing policy
is of order $O(\sqrt{T} + \sum_{t=1}^T \sqrt{t}\delta_t)$. In the second
setting (referred to as stochastic features model), the feature vectors are
drawn independently from an unknown distribution. We show that in this case,
the regret of PSGD pricing policy is of order $O(d^2 \log T + \sum_{t=1}^T
t\delta_t/d)$.
|
[
"['Adel Javanmard']",
"Adel Javanmard"
] |
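A toy NumPy sketch of a projected stochastic gradient descent (PSGD) pricing loop in the spirit of the abstract above, under simplifying assumptions: a logistic market-noise model, a gradient step on the negative log-likelihood of the binary sale feedback, and projection onto a norm ball. The step sizes and the pricing rule are illustrative, not the paper's exact policy.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, W = 5, 2000, 1.0                                 # feature dim, horizon, norm bound
theta_true = rng.standard_normal(d)
theta_true *= W / np.linalg.norm(theta_true)
theta_hat = np.zeros(d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for t in range(1, T + 1):
    x = rng.standard_normal(d) / np.sqrt(d)            # product features
    price = float(x @ theta_hat)                       # price at the estimated valuation
    # Binary feedback: sale iff market value (linear part + logistic noise) exceeds the price.
    sale = rng.random() < sigmoid(x @ theta_true - price)
    # Stochastic gradient of the negative log-likelihood of the observed feedback.
    grad = (sigmoid(x @ theta_hat - price) - sale) * x
    theta_hat -= grad / np.sqrt(t)                     # decaying step size
    norm = np.linalg.norm(theta_hat)                   # projection onto the W-ball
    if norm > W:
        theta_hat *= W / norm

print(np.round(theta_hat, 2), np.round(theta_true, 2))
```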
stat.ML cs.AI cs.CL cs.LG
| null |
1701.03577
| null | null |
http://arxiv.org/pdf/1701.03577v1
|
2017-01-13T07:24:18Z
|
2017-01-13T07:24:18Z
|
Kernel Approximation Methods for Speech Recognition
|
We study large-scale kernel methods for acoustic modeling in speech
recognition and compare their performance to deep neural networks (DNNs). We
perform experiments on four speech recognition datasets, including the TIMIT
and Broadcast News benchmark tasks, and compare these two types of models on
frame-level performance metrics (accuracy, cross-entropy), as well as on
recognition metrics (word/character error rate). In order to scale kernel
methods to these large datasets, we use the random Fourier feature method of
Rahimi and Recht (2007). We propose two novel techniques for improving the
performance of kernel acoustic models. First, in order to reduce the number of
random features required by kernel models, we propose a simple but effective
method for feature selection. The method is able to explore a large number of
non-linear features while maintaining a compact model more efficiently than
existing approaches. Second, we present a number of frame-level metrics which
correlate very strongly with recognition performance when computed on the
heldout set; we take advantage of these correlations by monitoring these
metrics during training in order to decide when to stop learning. This
technique can noticeably improve the recognition performance of both DNN and
kernel models, while narrowing the gap between them. Additionally, we show that
the linear bottleneck method of Sainath et al. (2013) improves the performance
of our kernel models significantly, in addition to speeding up training and
making the models more compact. Together, these three methods dramatically
improve the performance of kernel acoustic models, making their performance
comparable to DNNs on the tasks we explored.
|
[
"Avner May, Alireza Bagheri Garakani, Zhiyun Lu, Dong Guo, Kuan Liu,\n Aur\\'elien Bellet, Linxi Fan, Michael Collins, Daniel Hsu, Brian Kingsbury,\n Michael Picheny, Fei Sha",
"['Avner May' 'Alireza Bagheri Garakani' 'Zhiyun Lu' 'Dong Guo' 'Kuan Liu'\n 'Aurélien Bellet' 'Linxi Fan' 'Michael Collins' 'Daniel Hsu'\n 'Brian Kingsbury' 'Michael Picheny' 'Fei Sha']"
] |
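For reference, a compact NumPy sketch of the Rahimi-Recht random Fourier feature map used by the kernel acoustic models above, approximating a Gaussian (RBF) kernel; the bandwidth and feature count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, D, gamma = 200, 40, 2000, 0.1          # samples, input dim, random features, RBF gamma
X = rng.standard_normal((n, d))

# z(x) = sqrt(2/D) * cos(W x + b), with W ~ N(0, 2*gamma*I) and b ~ U[0, 2*pi)
W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))
b = rng.uniform(0, 2 * np.pi, size=D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)

# The inner product of the random features approximates the RBF kernel.
K_approx = Z @ Z.T
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-gamma * sq_dists)
print(np.abs(K_approx - K_exact).max())      # small approximation error
```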
stat.ML cs.LG physics.data-an
| null |
1701.03619
| null | null |
http://arxiv.org/pdf/1701.03619v2
|
2018-11-20T16:39:50Z
|
2017-01-13T10:45:01Z
|
Diffusion-based nonlinear filtering for multimodal data fusion with
application to sleep stage assessment
|
The problem of information fusion from multiple data-sets acquired by
multimodal sensors has drawn significant research attention over the years. In
this paper, we focus on a particular problem setting consisting of a physical
phenomenon or a system of interest observed by multiple sensors. We assume that
all sensors measure some aspects of the system of interest with additional
sensor-specific and irrelevant components. Our goal is to recover the variables
relevant to the observed system and to filter out the nuisance effects of the
sensor-specific variables. We propose an approach based on manifold learning,
which is particularly suitable for problems with multiple modalities, since it
aims to capture the intrinsic structure of the data and relies on minimal prior
model knowledge. Specifically, we propose a nonlinear filtering scheme, which
extracts the hidden sources of variability captured by two or more sensors,
that are independent of the sensor-specific components. In addition to
presenting a theoretical analysis, we demonstrate our technique on real
measured data for the purpose of sleep stage assessment based on multiple,
multimodal sensor measurements. We show that without prior knowledge on the
different modalities and on the measured system, our method gives rise to a
data-driven representation that is well correlated with the underlying sleep
process and is robust to noise and sensor-specific effects.
|
[
"Ori Katz, Ronen Talmon, Yu-Lun Lo and Hau-Tieng Wu",
"['Ori Katz' 'Ronen Talmon' 'Yu-Lun Lo' 'Hau-Tieng Wu']"
] |
cs.LG
| null |
1701.03633
| null | null |
http://arxiv.org/pdf/1701.03633v1
|
2017-01-13T11:31:35Z
|
2017-01-13T11:31:35Z
|
A dissimilarity-based approach to predictive maintenance with
application to HVAC systems
|
The goal of predictive maintenance is to forecast the occurrence of faults of
an appliance, in order to proactively take the necessary actions to ensure its
availability. In many application scenarios, predictive maintenance is applied
to a set of homogeneous appliances. In this paper, we firstly review taxonomies
and main methodologies currently used for condition-based maintenance;
secondly, we argue that the mutual dissimilarities of the behaviours of all
appliances of this set (the "cohort") can be exploited to detect upcoming
faults. Specifically, inspired by dissimilarity-based representations, we
propose a novel machine learning approach based on the analysis of concurrent
mutual differences of the measurements coming from the cohort. We evaluate our
method over one year of historical data from a cohort of 17 HVAC (Heating,
Ventilation and Air Conditioning) systems installed in an Italian hospital. We
show that certain kinds of faults can be foreseen with an accuracy, measured in
terms of area under the ROC curve, as high as 0.96.
|
[
"Riccardo Satta, Stefano Cavallari, Eraldo Pomponi, Daniele Grasselli,\n Davide Picheo, Carlo Annis",
"['Riccardo Satta' 'Stefano Cavallari' 'Eraldo Pomponi' 'Daniele Grasselli'\n 'Davide Picheo' 'Carlo Annis']"
] |
cs.LG
| null |
1701.03641
| null | null |
http://arxiv.org/pdf/1701.03641v3
|
2017-03-10T11:14:17Z
|
2017-01-13T12:23:10Z
|
Symbolic Regression Algorithms with Built-in Linear Regression
|
Recently, several algorithms for symbolic regression (SR) emerged which
employ a form of multiple linear regression (LR) to produce generalized linear
models. The use of LR allows the algorithms to create models with relatively
small error right from the beginning of the search; such algorithms are thus
claimed to be (sometimes by orders of magnitude) faster than SR algorithms
based on vanilla genetic programming. However, a systematic comparison of these
algorithms on a common set of problems is still missing. In this paper we
conceptually and experimentally compare several representatives of such
algorithms (GPTIPS, FFX, and EFS). They are applied as off-the-shelf,
ready-to-use techniques, mostly using their default settings. The methods are
compared on several synthetic and real-world SR benchmark problems. Their
performance is also related to the performance of three conventional machine
learning algorithms --- multiple regression, random forests and support vector
regression.
|
[
"['Jan Žegklitz' 'Petr Pošík']",
"Jan \\v{Z}egklitz, Petr Po\\v{s}\\'ik"
] |
cs.LG
| null |
1701.03647
| null | null |
http://arxiv.org/pdf/1701.03647v2
|
2018-12-05T13:43:12Z
|
2017-01-13T12:43:58Z
|
Restricted Boltzmann Machines with Gaussian Visible Units Guided by
Pairwise Constraints
|
Restricted Boltzmann machines (RBMs) and their variants are usually trained
by contrastive divergence (CD) learning, but this training procedure is an
unsupervised learning approach that receives no guidance from background
knowledge. To enhance the expressive ability of traditional RBMs, in this
paper we propose the pairwise-constrained restricted Boltzmann machine with
Gaussian visible units (pcGRBM), a model in which the learning procedure is
guided by pairwise constraints and the encoding process is conducted under
this guidance. The pairwise constraints are encoded in the hidden-layer
features of pcGRBM, so that some pairs of hidden features are pulled together
while others are pushed apart according to the constraints. In order to handle
real-valued data, the binary visible units are replaced by linear units with
Gaussian noise in the pcGRBM model. During learning, the pairwise constraints
are iterated through the transitions between visible and hidden units of the
CD procedure. The proposed model is then inferred by an approximate gradient
descent method, and the corresponding learning algorithm is designed in this
paper. To compare the usefulness of pcGRBM and traditional RBMs with Gaussian
visible units, the hidden-layer features of pcGRBM and the RBMs are used as
input 'data' for the K-means, spectral clustering (SP) and affinity
propagation (AP) algorithms, respectively. A thorough experimental evaluation
is performed with sixteen image datasets from Microsoft Research Asia
Multimedia (MSRA-MM). The experimental results show that the clustering
performance of the K-means, SP and AP algorithms based on the pcGRBM model is
significantly better than with traditional RBMs. In addition, the pcGRBM model
shows better performance on the clustering task than some semi-supervised
clustering algorithms.
|
[
"Jielei Chu, Hongjun Wang, Hua Meng, Peng Jin and Tianrui Li (Senior\n member, IEEE)",
"['Jielei Chu' 'Hongjun Wang' 'Hua Meng' 'Peng Jin' 'Tianrui Li']"
] |
cs.LG stat.ML
|
10.1186/s13634-018-0533-0
|
1701.03655
| null | null |
http://arxiv.org/abs/1701.03655v2
|
2017-01-19T13:37:00Z
|
2017-01-13T13:06:47Z
|
Dictionary Learning from Incomplete Data
|
This paper extends the recently proposed and theoretically justified
iterative thresholding and $K$ residual means algorithm ITKrM to learning
dictionaries from incomplete/masked training data (ITKrMM). It further adapts
the algorithm to the presence of a low rank component in the data and provides
a strategy for recovering this low rank component again from incomplete data.
Several synthetic experiments show the advantages of incorporating information
about the corruption into the algorithm. Finally, image inpainting is
considered as application example, which demonstrates the superior performance
of ITKrMM in terms of speed at similar or better reconstruction quality
compared to its closest dictionary learning counterpart.
|
[
"['Valeriya Naumova' 'Karin Schnass']",
"Valeriya Naumova and Karin Schnass"
] |
cs.LG stat.ML
| null |
1701.03743
| null | null |
http://arxiv.org/pdf/1701.03743v1
|
2017-01-13T17:28:09Z
|
2017-01-13T17:28:09Z
|
Truncation-free Hybrid Inference for DPMM
|
Dirichlet process mixture models (DPMM) are a cornerstone of Bayesian
non-parametrics. While these models free us from choosing the number of
components a priori, computationally attractive variational inference often
reintroduces the need to do so, via a truncation on the variational
distribution. In this
paper we present a truncation-free hybrid inference for DPMM, combining the
advantages of sampling-based MCMC and variational methods. The proposed
hybridization enables more efficient variational updates, while increasing
model complexity only if needed. We evaluate the properties of the hybrid
updates and their empirical performance in single- as well as mixed-membership
models. Our method is easy to implement and performs favorably compared to
existing schemas.
|
[
"['Arnim Bleier']",
"Arnim Bleier"
] |
stat.ML cs.AI cs.LG cs.PL stat.CO
| null |
1701.03757
| null | null |
http://arxiv.org/pdf/1701.03757v2
|
2017-03-07T18:41:45Z
|
2017-01-13T17:52:07Z
|
Deep Probabilistic Programming
|
We propose Edward, a Turing-complete probabilistic programming language.
Edward defines two compositional representations---random variables and
inference. By treating inference as a first class citizen, on a par with
modeling, we show that probabilistic programming can be as flexible and
computationally efficient as traditional deep learning. For flexibility, Edward
makes it easy to fit the same model using a variety of composable inference
methods, ranging from point estimation to variational inference to MCMC. In
addition, Edward can reuse the modeling representation as part of inference,
facilitating the design of rich variational models and generative adversarial
networks. For efficiency, Edward is integrated into TensorFlow, providing
significant speedups over existing probabilistic systems. For example, we show
on a benchmark logistic regression task that Edward is at least 35x faster than
Stan and 6x faster than PyMC3. Further, Edward incurs no runtime overhead: it
is as fast as handwritten TensorFlow.
|
[
"Dustin Tran, Matthew D. Hoffman, Rif A. Saurous, Eugene Brevdo, Kevin\n Murphy, David M. Blei",
"['Dustin Tran' 'Matthew D. Hoffman' 'Rif A. Saurous' 'Eugene Brevdo'\n 'Kevin Murphy' 'David M. Blei']"
] |
cs.AI cs.LG cs.NE
| null |
1701.03866
| null | null |
http://arxiv.org/pdf/1701.03866v1
|
2017-01-14T01:47:54Z
|
2017-01-14T01:47:54Z
|
Long Timescale Credit Assignment in Neural Networks with External Memory
|
Credit assignment in traditional recurrent neural networks usually involves
back-propagating through a long chain of tied weight matrices. The length of
this chain scales linearly with the number of time-steps as the same network is
run at each time-step. This creates many problems, such as vanishing gradients,
that have been well studied. In contrast, the recurrent activity of a neural
network with external memory (NNEM) doesn't involve a long chain of activity
(though some architectures such as the NTM do utilize a traditional recurrent
architecture as a controller). Rather, the externally stored embedding vectors
are used at each time-step, but no messages are passed from previous
time-steps. This means that vanishing gradients aren't a problem, as all of the
necessary gradient paths are short. However, these paths are extremely numerous
(one per embedding vector in memory) and reused for a very long time (until the
vector leaves the memory). Thus, the forward-pass information of each memory
must be stored for the entire duration of the memory. This is problematic, as
this additional storage far surpasses that of the actual memories, to the
extent that large memories are infeasible to back-propagate through in high
dimensional settings. One way to
get around the need to hold onto forward-pass information is to recalculate the
forward-pass whenever gradient information is available. However, if the
observations are too large to store in the domain of interest, direct
reinstatement of a forward pass cannot occur. Instead, we rely on a learned
autoencoder to reinstate the observation, and then use the embedding network to
recalculate the forward-pass. Since the recalculated embedding vector is
unlikely to perfectly match the one stored in memory, we try out two
approximations for utilizing the error gradient w.r.t. the vector in memory.
|
[
"Steven Stenberg Hansen",
"['Steven Stenberg Hansen']"
] |
stat.ML cs.AI cs.IT cs.LG math.IT
| null |
1701.03891
| null | null |
http://arxiv.org/pdf/1701.03891v1
|
2017-01-14T08:42:19Z
|
2017-01-14T08:42:19Z
|
Learning to Invert: Signal Recovery via Deep Convolutional Networks
|
The promise of compressive sensing (CS) has been offset by two significant
challenges. First, real-world data is not exactly sparse in a fixed basis.
Second, current high-performance recovery algorithms are slow to converge,
which limits CS to either non-real-time applications or scenarios where massive
back-end computing is available. In this paper, we attack both of these
challenges head-on by developing a new signal recovery framework we call {\em
DeepInverse} that learns the inverse transformation from measurement vectors to
signals using a {\em deep convolutional network}. When trained on a set of
representative images, the network learns both a representation for the signals
(addressing challenge one) and an inverse map approximating a greedy or convex
recovery algorithm (addressing challenge two). Our experiments indicate that
the DeepInverse network closely approximates the solution produced by
state-of-the-art CS recovery algorithms yet is hundreds of times faster in run
time. The tradeoff for the ultrafast run time is a computationally intensive,
off-line training procedure typical to deep networks. However, the training
needs to be completed only once, which makes the approach attractive for a host
of sparse recovery problems.
|
[
"['Ali Mousavi' 'Richard G. Baraniuk']",
"Ali Mousavi, Richard G. Baraniuk"
] |
cs.LG cs.CV cs.IT math.IT
|
10.3390/e19030122
|
1701.03916
| null | null |
http://arxiv.org/abs/1701.03916v1
|
2017-01-14T12:57:44Z
|
2017-01-14T12:57:44Z
|
On H\"older projective divergences
|
We describe a framework to build distances by measuring the tightness of
inequalities, and introduce the notion of proper statistical divergences and
improper pseudo-divergences. We then consider the H\"older ordinary and reverse
inequalities, and present two novel classes of H\"older divergences and
pseudo-divergences that both encapsulate the special case of the Cauchy-Schwarz
divergence. We report closed-form formulas for those statistical
dissimilarities when considering distributions belonging to the same
exponential family provided that the natural parameter space is a cone (e.g.,
multivariate Gaussians), or affine (e.g., categorical distributions). Those new
classes of H\"older distances are invariant to rescaling, and thus do not
require distributions to be normalized. Finally, we show how to compute
statistical H\"older centroids with respect to those divergences, and carry out
center-based clustering toy experiments on a set of Gaussian distributions that
demonstrate empirically that symmetrized H\"older divergences outperform the
symmetric Cauchy-Schwarz divergence.
|
[
"['Frank Nielsen' 'Ke Sun' 'Stéphane Marchand-Maillet']",
"Frank Nielsen and Ke Sun and St\\'ephane Marchand-Maillet"
] |
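For intuition, a tiny NumPy sketch of the Cauchy-Schwarz divergence, the special case that the abstract above says both proposed classes encapsulate, written for discrete (categorical) distributions; the general Hölder divergences replace the exponent 2 by conjugate exponents and are not reproduced here.

```python
import numpy as np

def cauchy_schwarz_divergence(p, q):
    """D_CS(p, q) = -log( <p, q> / (||p||_2 ||q||_2) ) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return -np.log(p @ q / (np.linalg.norm(p) * np.linalg.norm(q)))

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.1, 0.6, 0.3])
print(cauchy_schwarz_divergence(p, q))   # > 0 unless p and q are proportional
print(cauchy_schwarz_divergence(p, p))   # exactly 0 (also scale-invariant)
```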
cs.LG stat.ML
| null |
1701.03918
| null | null |
http://arxiv.org/pdf/1701.03918v1
|
2017-01-14T13:26:39Z
|
2017-01-14T13:26:39Z
|
Marked Temporal Dynamics Modeling based on Recurrent Neural Network
|
We are now witnessing the increasing availability of event stream data, i.e.,
a sequence of events with each event typically being denoted by the time it
occurs and its mark information (e.g., event type). A fundamental problem is to
model and predict such kind of marked temporal dynamics, i.e., when the next
event will take place and what its mark will be. Existing methods either
predict only the mark or the time of the next event, or predict both of them,
yet separately. Indeed, in marked temporal dynamics, the time and the mark of
the next event are highly dependent on each other, requiring a method that
could simultaneously predict both of them. To tackle this problem, in this
paper, we propose to model marked temporal dynamics by using a mark-specific
intensity function to explicitly capture the dependency between the mark and
the time of the next event. Extensive experiments on two datasets demonstrate
that the proposed method outperforms state-of-the-art methods at predicting
marked temporal dynamics.
|
[
"['Yongqing Wang' 'Shenghua Liu' 'Huawei Shen' 'Xueqi Cheng']",
"Yongqing Wang, Shenghua Liu, Huawei Shen, Xueqi Cheng"
] |
cs.LG
| null |
1701.0394
| null | null | null | null | null |
Scalable and Incremental Learning of Gaussian Mixture Models
|
This work presents a fast and scalable algorithm for incremental learning of
Gaussian mixture models. By performing rank-one updates on its precision
matrices and determinants, its asymptotic time complexity is $\mathcal{O}(NKD^2)$
for $N$ data points, $K$ Gaussian components and $D$ dimensions. The resulting
algorithm can be applied to high dimensional tasks, and this is confirmed by
applying it to the classification datasets MNIST and CIFAR-10. Additionally, in
order to show the algorithm's applicability to function approximation and
control tasks, it is applied to three reinforcement learning tasks and its
data-efficiency is evaluated.
|
[
"Rafael Pinto, Paulo Engel"
] |
null | null |
1701.03940
| null | null |
http://arxiv.org/pdf/1701.03940v1
|
2017-01-14T16:15:44Z
|
2017-01-14T16:15:44Z
|
Scalable and Incremental Learning of Gaussian Mixture Models
|
This work presents a fast and scalable algorithm for incremental learning of Gaussian mixture models. By performing rank-one updates on its precision matrices and determinants, its asymptotic time complexity is $\mathcal{O}(NKD^2)$ for $N$ data points, $K$ Gaussian components and $D$ dimensions. The resulting algorithm can be applied to high dimensional tasks, and this is confirmed by applying it to the classification datasets MNIST and CIFAR-10. Additionally, in order to show the algorithm's applicability to function approximation and control tasks, it is applied to three reinforcement learning tasks and its data-efficiency is evaluated.
|
[
"['Rafael Pinto' 'Paulo Engel']"
] |
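A NumPy sketch of the rank-one bookkeeping behind the $\mathcal{O}(NKD^2)$ cost quoted above: after a rank-one change to a component covariance, the cached precision matrix is updated with the Sherman-Morrison formula and the log-determinant with the matrix determinant lemma, avoiding any $\mathcal{O}(D^3)$ inversion. The update weight is illustrative, and the full incremental GMM recursion (which also rescales the old covariance) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 6
A = rng.standard_normal((D, D))
cov = A @ A.T + np.eye(D)            # current component covariance
prec = np.linalg.inv(cov)            # cached precision
logdet = np.linalg.slogdet(cov)[1]   # cached log-determinant

# Rank-one covariance update cov' = cov + w * u u^T (e.g. from absorbing a point).
w = 0.1
u = rng.standard_normal(D)

# Sherman-Morrison: O(D^2) precision update, no matrix inversion.
Pu = prec @ u
denom = 1.0 + w * (u @ Pu)
prec_new = prec - (w / denom) * np.outer(Pu, Pu)
# Matrix determinant lemma: O(D^2) log-determinant update.
logdet_new = logdet + np.log(denom)

cov_new = cov + w * np.outer(u, u)
print(np.allclose(prec_new, np.linalg.inv(cov_new)))          # True
print(np.isclose(logdet_new, np.linalg.slogdet(cov_new)[1]))  # True
```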
math.OC cs.LG
| null |
1701.03961
| null | null |
http://arxiv.org/pdf/1701.03961v2
|
2017-02-04T16:45:06Z
|
2017-01-14T19:48:49Z
|
Communication-Efficient Algorithms for Decentralized and Stochastic
Optimization
|
We present a new class of decentralized first-order methods for nonsmooth and
stochastic optimization problems defined over multiagent networks. Considering
that communication is a major bottleneck in decentralized optimization, our
main goal in this paper is to develop algorithmic frameworks which can
significantly reduce the number of inter-node communications. We first propose
a decentralized primal-dual method which can find an $\epsilon$-solution both
in terms of functional optimality gap and feasibility residual in
$O(1/\epsilon)$ inter-node communication rounds when the objective functions
are convex and the local primal subproblems are solved exactly. Our major
contribution is to present a new class of decentralized primal-dual type
algorithms, namely the decentralized communication sliding (DCS) methods, which
can skip the inter-node communications while agents solve the primal
subproblems iteratively through linearizations of their local objective
functions. By employing DCS, agents can still find an $\epsilon$-solution in
$O(1/\epsilon)$ (resp., $O(1/\sqrt{\epsilon})$) communication rounds for
general convex functions (resp., strongly convex functions), while maintaining
the $O(1/\epsilon^2)$ (resp., $O(1/\epsilon)$) bound on the total number of
intra-node subgradient evaluations. We also present a stochastic counterpart
for these algorithms, denoted by SDCS, for solving stochastic optimization
problems whose objective function cannot be evaluated exactly. In comparison
with existing results for decentralized nonsmooth and stochastic optimization,
we can reduce the total number of inter-node communication rounds by orders of
magnitude while still maintaining the optimal complexity bounds on intra-node
stochastic subgradient evaluations. The bounds on the subgradient evaluations
are actually comparable to those required for centralized nonsmooth and
stochastic optimization.
|
[
"Guanghui Lan, Soomin Lee, and Yi Zhou",
"['Guanghui Lan' 'Soomin Lee' 'Yi Zhou']"
] |
cs.SY cs.LG math.OC stat.ML
|
10.1109/TSP.2017.2750109
|
1701.03974
| null | null |
http://arxiv.org/abs/1701.03974v2
|
2017-01-27T05:33:29Z
|
2017-01-14T23:28:21Z
|
An Online Convex Optimization Approach to Dynamic Network Resource
Allocation
|
Existing approaches to online convex optimization (OCO) make sequential
one-slot-ahead decisions, which lead to (possibly adversarial) losses that
drive subsequent decision iterates. Their performance is evaluated by the
so-called regret that measures the difference of losses between the online
solution and the best yet fixed overall solution in hindsight. The present
paper deals with online convex optimization involving adversarial loss
functions and adversarial constraints, where the constraints are revealed after
making decisions, may tolerate instantaneous violations, but must be
satisfied in the long term. Performance of an online algorithm in this setting
is assessed by: i) the difference of its losses relative to the best dynamic
solution with one-slot-ahead information of the loss function and the
constraint (that is here termed dynamic regret); and, ii) the accumulated
amount of constraint violations (that is here termed dynamic fit). In this
context, a modified online saddle-point (MOSP) scheme is developed, and proved
to simultaneously yield sub-linear dynamic regret and fit, provided that the
accumulated variations of per-slot minimizers and constraints are sub-linearly
growing with time. MOSP is also applied to the dynamic network resource
allocation task, and it is compared with the well-known stochastic dual
gradient method. Under various scenarios, numerical experiments demonstrate the
performance gain of MOSP relative to the state-of-the-art.
|
[
"['Tianyi Chen' 'Qing Ling' 'Georgios B. Giannakis']",
"Tianyi Chen, Qing Ling, Georgios B. Giannakis"
] |
cs.LG
| null |
1701.04077
| null | null |
http://arxiv.org/pdf/1701.04077v3
|
2017-01-27T09:11:59Z
|
2017-01-15T17:06:08Z
|
Breeding electric zebras in the fields of Medicine
|
A few notes on the use of machine learning in medicine and the related
unintended consequences.
|
[
"['Federico Cabitza']",
"Federico Cabitza"
] |
cs.LG cs.AI
| null |
1701.04079
| null | null |
http://arxiv.org/pdf/1701.04079v1
|
2017-01-15T17:14:40Z
|
2017-01-15T17:14:40Z
|
Agent-Agnostic Human-in-the-Loop Reinforcement Learning
|
Providing Reinforcement Learning agents with expert advice can dramatically
improve various aspects of learning. Prior work has developed teaching
protocols that enable agents to learn efficiently in complex environments; many
of these methods tailor the teacher's guidance to agents with a particular
representation or underlying learning scheme, offering effective but
specialized teaching procedures. In this work, we explore protocol programs, an
agent-agnostic schema for Human-in-the-Loop Reinforcement Learning. Our goal is
to incorporate the beneficial properties of a human teacher into Reinforcement
Learning without making strong assumptions about the inner workings of the
agent. We show how to represent existing approaches such as action pruning,
reward shaping, and training in simulation as special cases of our schema and
conduct preliminary experiments on simple domains.
|
[
"David Abel, John Salvatier, Andreas Stuhlm\\\"uller, Owain Evans",
"['David Abel' 'John Salvatier' 'Andreas Stuhlmüller' 'Owain Evans']"
] |
cs.LG
| null |
1701.04099
| null | null |
http://arxiv.org/pdf/1701.04099v3
|
2017-02-23T05:26:04Z
|
2017-01-15T19:13:22Z
|
Field-aware Factorization Machines in a Real-world Online Advertising
System
|
Predicting user response is one of the core machine learning tasks in
computational advertising. Field-aware Factorization Machines (FFM) have
recently been established as a state-of-the-art method for that problem and in
particular won two Kaggle challenges. This paper presents some results from
implementing this method in a production system that predicts click-through and
conversion rates for display advertising, and shows that this method is not
only effective for winning challenges but is also valuable in a real-world
prediction system. We also discuss some specific challenges and solutions to
reduce the training time, namely the use of an innovative seeding algorithm and
a distributed learning mechanism.
|
[
"Yuchin Juan, Damien Lefortier, Olivier Chapelle",
"['Yuchin Juan' 'Damien Lefortier' 'Olivier Chapelle']"
] |
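A minimal NumPy version of the field-aware factorization machine scoring rule referred to above: every (feature, field) pair keeps its own latent vector, and feature i interacts with feature j through the vector it holds for j's field. The dimensions are arbitrary and the linear and bias terms are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_fields, k = 6, 3, 4
field_of = np.array([0, 0, 1, 1, 2, 2])                     # field of each feature
V = 0.1 * rng.standard_normal((n_features, n_fields, k))    # latent vectors w_{i,f}

def ffm_score(active):
    """FFM score for a set of active binary features (x_i = 1)."""
    score = 0.0
    for a, i in enumerate(active):
        for j in active[a + 1:]:
            # feature i uses its vector for j's field, and vice versa
            score += V[i, field_of[j]] @ V[j, field_of[i]]
    return score

print(ffm_score([0, 2, 4]))   # e.g. a user feature, an ad feature, a publisher feature
```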
cs.LG cs.AI
| null |
1701.04113
| null | null |
http://arxiv.org/pdf/1701.04113v1
|
2017-01-15T21:24:45Z
|
2017-01-15T21:24:45Z
|
Near Optimal Behavior via Approximate State Abstraction
|
The combinatorial explosion that plagues planning and reinforcement learning
(RL) algorithms can be moderated using state abstraction. Prohibitively large
task representations can be condensed such that essential information is
preserved, and consequently, solutions are tractably computable. However, exact
abstractions, which treat only fully-identical situations as equivalent, fail
to present opportunities for abstraction in environments where no two
situations are exactly alike. In this work, we investigate approximate state
abstractions, which treat nearly-identical situations as equivalent. We present
theoretical guarantees of the quality of behaviors derived from four types of
approximate abstractions. Additionally, we empirically demonstrate that
approximate abstractions lead to reduction in task complexity and bounded loss
of optimality of behavior in a variety of environments.
|
[
"David Abel, D. Ellis Hershkowitz, Michael L. Littman",
"['David Abel' 'D. Ellis Hershkowitz' 'Michael L. Littman']"
] |
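A toy NumPy sketch of one natural instance of the idea above: merging states whose Q-values agree within epsilon on every action. The greedy clustering below is an illustrative choice, not the paper's construction or one of its four abstraction types specifically.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, eps = 100, 4, 0.05
Q = rng.random((n_states, n_actions))            # stand-in for optimal Q-values

# Greedy aggregation: a state joins the first abstract state whose
# representative has Q-values within eps on every action.
representatives, assignment = [], np.empty(n_states, dtype=int)
for s in range(n_states):
    for k, rep in enumerate(representatives):
        if np.max(np.abs(Q[s] - Q[rep])) <= eps:
            assignment[s] = k
            break
    else:
        assignment[s] = len(representatives)
        representatives.append(s)

print(f"{n_states} ground states -> {len(representatives)} abstract states")
```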
cs.CV cs.AI cs.LG
| null |
1701.04128
| null | null |
http://arxiv.org/pdf/1701.04128v2
|
2017-01-25T06:32:29Z
|
2017-01-15T23:52:49Z
|
Understanding the Effective Receptive Field in Deep Convolutional Neural
Networks
|
We study characteristics of receptive fields of units in deep convolutional
networks. The receptive field size is a crucial issue in many visual tasks, as
the output must respond to large enough areas in the image to capture
information about large objects. We introduce the notion of an effective
receptive field, and show that it both has a Gaussian distribution and only
occupies a fraction of the full theoretical receptive field. We analyze the
effective receptive field in several architecture designs, and the effect of
nonlinear activations, dropout, sub-sampling and skip connections on it. This
leads to suggestions for ways to address its tendency to be too small.
|
[
"Wenjie Luo and Yujia Li and Raquel Urtasun and Richard Zemel",
"['Wenjie Luo' 'Yujia Li' 'Raquel Urtasun' 'Richard Zemel']"
] |
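A short PyTorch sketch of how an effective receptive field can be measured empirically, in the spirit of the abstract above: backpropagate from the single centre output unit of a convolution stack and inspect the input-gradient magnitudes. The depth and initialization are arbitrary.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(*[nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
                      for _ in range(10)])

x = torch.zeros(1, 1, 65, 65, requires_grad=True)
y = net(x)
# Backpropagate a unit gradient from the centre output pixel only.
grad_out = torch.zeros_like(y)
grad_out[0, 0, 32, 32] = 1.0
y.backward(grad_out)

erf = x.grad.abs()[0, 0]
# The theoretical receptive field here is 21x21, but most gradient mass
# concentrates near the centre (the "effective" receptive field).
print(erf.sum().item(), erf[32, 32].item())
```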
cs.LG cs.AI
| null |
1701.04143
| null | null |
http://arxiv.org/pdf/1701.04143v1
|
2017-01-16T02:39:01Z
|
2017-01-16T02:39:01Z
|
Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks
|
Deep learning classifiers are known to be inherently vulnerable to
manipulation by intentionally perturbed inputs, named adversarial examples. In
this work, we establish that reinforcement learning techniques based on Deep
Q-Networks (DQNs) are also vulnerable to adversarial input perturbations, and
verify the transferability of adversarial examples across different DQN models.
Furthermore, we present a novel class of attacks based on this vulnerability
that enable policy manipulation and induction in the learning process of DQNs.
We propose an attack mechanism that exploits the transferability of adversarial
examples to implement policy induction attacks on DQNs, and demonstrate its
efficacy and impact through experimental study of a game-learning scenario.
|
[
"Vahid Behzadan and Arslan Munir",
"['Vahid Behzadan' 'Arslan Munir']"
] |
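A brief PyTorch sketch of the kind of input perturbation involved, using a fast-gradient-sign step against a toy Q-network; the paper's attack additionally exploits transferability across DQN models to induce specific policies, which this snippet does not reproduce.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
q_net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))  # toy DQN head

state = torch.randn(1, 8, requires_grad=True)
q_values = q_net(state)
greedy_action = q_values.argmax(dim=1)

# FGSM-style perturbation that pushes the network away from its current greedy action.
loss = F.cross_entropy(q_values, greedy_action)
loss.backward()
epsilon = 0.1
adversarial_state = state + epsilon * state.grad.sign()

with torch.no_grad():
    new_action = q_net(adversarial_state).argmax(dim=1)
print(greedy_action.item(), "->", new_action.item())
```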
cs.LG cs.AI cs.CR
| null |
1701.04222
| null | null |
http://arxiv.org/pdf/1701.04222v1
|
2017-01-16T10:04:05Z
|
2017-01-16T10:04:05Z
|
Achieving Privacy in the Adversarial Multi-Armed Bandit
|
In this paper, we improve the previously best known regret bound to achieve
$\epsilon$-differential privacy in oblivious adversarial bandits from
$\mathcal{O}{(T^{2/3}/\epsilon)}$ to $\mathcal{O}{(\sqrt{T} \ln T /\epsilon)}$.
This is achieved by combining a Laplace Mechanism with EXP3. We show that
though EXP3 is already differentially private, it leaks a linear amount of
information in $T$. However, we can improve this privacy by relying on its
intrinsic exponential mechanism for selecting actions. This allows us to reach
$\mathcal{O}{(\sqrt{\ln T})}$-DP, with a regret of $\mathcal{O}{(T^{2/3})}$
that holds against an adaptive adversary, an improvement from the best known of
$\mathcal{O}{(T^{3/4})}$. This is done by using an algorithm that runs EXP3 in a
mini-batch loop. Finally, we run experiments that clearly demonstrate the
validity of our theoretical analysis.
|
[
"['Aristide C. Y. Tossou' 'Christos Dimitrakakis']",
"Aristide C. Y. Tossou and Christos Dimitrakakis"
] |
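A rough NumPy sketch of the flavour of algorithm discussed above: EXP3 in which each observed reward passes through a Laplace mechanism before the importance-weighted update. The placement and scale of the noise are illustrative assumptions, not the exact construction achieving the stated regret bounds.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T, gamma, eps = 5, 10000, 0.05, 1.0
means = rng.uniform(0.2, 0.8, size=K)               # toy (oblivious) Bernoulli reward means
weights = np.ones(K)

for t in range(T):
    probs = (1 - gamma) * weights / weights.sum() + gamma / K
    arm = rng.choice(K, p=probs)
    reward = float(rng.random() < means[arm])
    noisy = reward + rng.laplace(scale=1.0 / eps)   # Laplace mechanism on the reward
    estimate = noisy / probs[arm]                   # importance-weighted estimate
    weights[arm] *= np.exp(gamma * estimate / K)
    weights /= weights.max()                        # rescale for numerical stability

print("empirical best arm:", int(weights.argmax()), "true best arm:", int(means.argmax()))
```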
cs.LG cs.AI
| null |
1701.04238
| null | null |
http://arxiv.org/pdf/1701.04238v1
|
2017-01-16T10:52:51Z
|
2017-01-16T10:52:51Z
|
Thompson Sampling For Stochastic Bandits with Graph Feedback
|
We present a novel extension of Thompson Sampling for stochastic sequential
decision problems with graph feedback, even when the graph structure itself is
unknown and/or changing. We provide theoretical guarantees on the Bayesian
regret of the algorithm, linking its performance to the underlying properties
of the graph. Thompson Sampling has the advantage of being applicable without
the need to construct complicated upper confidence bounds for different
problems. We illustrate its performance through extensive experimental results
on real and simulated networks with graph feedback. More specifically, we
tested our algorithms on power law, planted partition and Erdős-Rényi graphs,
as well as on graphs derived from Facebook and Flixster data. These all show
that our algorithms clearly outperform related methods that employ upper
confidence bounds, even if the latter use more information about the graph.
|
[
"['Aristide C. Y. Tossou' 'Christos Dimitrakakis' 'Devdatt Dubhashi']",
"Aristide C. Y. Tossou, Christos Dimitrakakis, Devdatt Dubhashi"
] |
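A small NumPy sketch of Thompson Sampling with graph feedback on Bernoulli arms: playing an arm also reveals the rewards of its neighbours, and every revealed reward updates the corresponding Beta posterior. The fixed, known graph is a simplification; the paper also covers unknown and changing graphs.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T = 6, 5000
means = rng.uniform(0.1, 0.9, size=K)
adjacency = np.eye(K, dtype=bool)                 # self-loops: you always see your own reward
adjacency[0, 1] = adjacency[1, 0] = True          # toy feedback graph
adjacency[2, 3] = adjacency[3, 2] = True

alpha = np.ones(K)                                # Beta posterior parameters per arm
beta = np.ones(K)

for t in range(T):
    samples = rng.beta(alpha, beta)               # Thompson sampling draw
    arm = int(samples.argmax())
    # Graph feedback: observe the chosen arm and all its neighbours.
    for j in np.flatnonzero(adjacency[arm]):
        reward = rng.random() < means[j]
        alpha[j] += reward
        beta[j] += 1 - reward

print("posterior means:", np.round(alpha / (alpha + beta), 2))
print("true means:     ", np.round(means, 2))
```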
cs.LG stat.ML
| null |
1701.04245
| null | null |
http://arxiv.org/pdf/1701.04245v4
|
2017-04-10T06:25:18Z
|
2017-01-16T11:22:38Z
|
Learning Traffic as Images: A Deep Convolutional Neural Network for
Large-Scale Transportation Network Speed Prediction
|
This paper proposes a convolutional neural network (CNN)-based method that
learns traffic as images and predicts large-scale, network-wide traffic speed
with a high accuracy. Spatiotemporal traffic dynamics are converted to images
describing the time and space relations of traffic flow via a two-dimensional
time-space matrix. A CNN is applied to the image following two consecutive
steps: abstract traffic feature extraction and network-wide traffic speed
prediction. The effectiveness of the proposed method is evaluated by taking two
real-world transportation networks, the second ring road and north-east
transportation network in Beijing, as examples, and comparing the method with
four prevailing algorithms, namely, ordinary least squares, k-nearest
neighbors, artificial neural network, and random forest, and three deep
learning architectures, namely, stacked autoencoder, recurrent neural network,
and long-short-term memory network. The results show that the proposed method
outperforms other algorithms by an average accuracy improvement of 42.91%
within an acceptable execution time. The CNN can train the model in a
reasonable time and, thus, is suitable for large-scale transportation networks.
|
[
"['Xiaolei Ma' 'Zhuang Dai' 'Zhengbing He' 'Jihui Na' 'Yong Wang'\n 'Yunpeng Wang']",
"Xiaolei Ma, Zhuang Dai, Zhengbing He, Jihui Na, Yong Wang and Yunpeng\n Wang"
] |
cs.CV cs.LG
| null |
1701.04249
| null | null |
http://arxiv.org/pdf/1701.04249v1
|
2017-01-16T11:30:31Z
|
2017-01-16T11:30:31Z
|
Geometric features for voxel-based surface recognition
|
We introduce a library of geometric voxel features for CAD surface
recognition/retrieval tasks. Our features include local versions of the
intrinsic volumes (the usual 3D volume, surface area, integrated mean and
Gaussian curvature) and a few closely related quantities. We also compute Haar
wavelet and statistical distribution features by aggregating raw voxel
features. We apply our features to object classification on the ESB data set
and demonstrate accurate results with a small number of shallow decision trees.
|
[
"['Dmitry Yarotsky']",
"Dmitry Yarotsky"
] |
cs.LG
| null |
1701.04271
| null | null |
http://arxiv.org/pdf/1701.04271v4
|
2017-06-04T16:11:24Z
|
2017-01-16T12:55:23Z
|
Fast Rates for Empirical Risk Minimization of Strict Saddle Problems
|
We derive bounds on the sample complexity of empirical risk minimization
(ERM) in the context of minimizing non-convex risks that admit the strict
saddle property. Recent progress in non-convex optimization has yielded
efficient algorithms for minimizing such functions. Our results imply that
these efficient algorithms are statistically stable and also generalize well.
In particular, we derive fast rates which resemble the bounds that are often
attained in the strongly convex setting. We specify our bounds to Principal
Component Analysis and Independent Component Analysis. Our results and
techniques may pave the way for statistical analyses of additional strict
saddle problems.
|
[
"Alon Gonen and Shai Shalev-Shwartz",
"['Alon Gonen' 'Shai Shalev-Shwartz']"
] |
cs.CL cs.IR cs.LG cs.NE
|
10.1109/JSTSP.2017.2759726
|
1701.04313
| null | null |
http://arxiv.org/abs/1701.04313v1
|
2017-01-13T15:05:39Z
|
2017-01-13T15:05:39Z
|
End-to-End ASR-free Keyword Search from Speech
|
End-to-end (E2E) systems have achieved competitive results compared to
conventional hybrid hidden Markov model (HMM)-deep neural network based
automatic speech recognition (ASR) systems. Such E2E systems are attractive due
to the lack of dependence on alignments between input acoustic and output
grapheme or HMM state sequence during training. This paper explores the design
of an ASR-free end-to-end system for text query-based keyword search (KWS) from
speech trained with minimal supervision. Our E2E KWS system consists of three
sub-systems. The first sub-system is a recurrent neural network (RNN)-based
acoustic auto-encoder trained to reconstruct the audio through a
finite-dimensional representation. The second sub-system is a character-level
RNN language model using embeddings learned from a convolutional neural
network. Since the acoustic and text query embeddings occupy different
representation spaces, they are input to a third feed-forward neural network
that predicts whether the query occurs in the acoustic utterance or not. This
E2E ASR-free KWS system performs respectably despite lacking a conventional ASR
system and trains much faster.
|
[
"Kartik Audhkhasi, Andrew Rosenberg, Abhinav Sethy, Bhuvana\n Ramabhadran, Brian Kingsbury",
"['Kartik Audhkhasi' 'Andrew Rosenberg' 'Abhinav Sethy'\n 'Bhuvana Ramabhadran' 'Brian Kingsbury']"
] |
cs.LG stat.ML
| null |
1701.04355
| null | null |
http://arxiv.org/pdf/1701.04355v1
|
2017-01-16T17:02:31Z
|
2017-01-16T17:02:31Z
|
Classification of MRI data using Deep Learning and Gaussian
Process-based Model Selection
|
The classification of MRI images according to the anatomical field of view is
a necessary task to solve when faced with the increasing quantity of medical
images. In parallel, advances in deep learning make it a suitable tool for
computer vision problems. Using a common architecture (such as AlexNet)
provides quite good results, but not sufficient for clinical use. Improving the
model is not an easy task, due to the large number of hyper-parameters
governing both the architecture and the training of the network, and to the
limited understanding of their relevance. Since an exhaustive search is not
tractable, we propose to optimize the network first by random search, and then
by an adaptive search based on Gaussian Processes and Probability of
Improvement. Applying this method on a large and varied MRI dataset, we show a
substantial improvement between the baseline network and the final one (up to
20\% for the most difficult classes).
|
[
"['Hadrien Bertrand' 'Matthieu Perrot' 'Roberto Ardon' 'Isabelle Bloch']",
"Hadrien Bertrand, Matthieu Perrot, Roberto Ardon, Isabelle Bloch"
] |
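A condensed sketch of the adaptive search step described above: fit a Gaussian process to the (hyper-parameter, validation-score) pairs evaluated so far and pick the next candidate by Probability of Improvement. The one-dimensional search space, kernel, and margin xi are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical past trials: log learning rate -> validation accuracy.
X_seen = np.array([[-5.0], [-4.0], [-2.5], [-1.0]])
y_seen = np.array([0.62, 0.71, 0.78, 0.55])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_seen, y_seen)

candidates = np.linspace(-6, 0, 200).reshape(-1, 1)
mu, sigma = gp.predict(candidates, return_std=True)

xi = 0.01
best = y_seen.max()
# Probability of Improvement over the best score observed so far.
prob_improve = norm.cdf((mu - best - xi) / np.maximum(sigma, 1e-9))
next_trial = candidates[prob_improve.argmax()]
print("next log-learning-rate to evaluate:", float(next_trial))
```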
cs.NE cs.LG
| null |
1701.04465
| null | null |
http://arxiv.org/pdf/1701.04465v2
|
2017-11-25T09:15:28Z
|
2017-01-16T21:49:47Z
|
The Incredible Shrinking Neural Network: New Perspectives on Learning
Representations Through The Lens of Pruning
|
How much can pruning algorithms teach us about the fundamentals of learning
representations in neural networks? And how much can these fundamentals help
while devising new pruning techniques? A lot, it turns out. Neural network
pruning has become a topic of great interest in recent years, and many
different techniques have been proposed to address this problem. The decision
of what to prune and when to prune necessarily forces us to confront our
assumptions about how neural networks actually learn to represent patterns in
data. In this work, we set out to test several long-held hypotheses about
neural network learning representations, approaches to pruning and the
relevance of one in the context of the other. To accomplish this, we argue in
favor of pruning whole neurons as opposed to the traditional method of pruning
weights from optimally trained networks. We first review the historical
literature, point out some common assumptions it makes, and propose methods to
demonstrate the inherent flaws in these assumptions. We then propose our novel
approach to pruning and set about analyzing the quality of the decisions it
makes. Our analysis led us to question the validity of many widely-held
assumptions behind pruning algorithms and the trade-offs we often make in the
interest of reducing computational complexity. We discovered that there is a
straightforward, albeit expensive, way to serially prune 40-70% of the neurons
in a trained network with minimal effect on the learning representation and
without any re-training. Note that the motivation behind this
work is not to propose an algorithm that would outperform all existing methods,
but to shed light on what some inherent flaws in these methods can teach us
about learning representations and how this can lead us to superior pruning
techniques.
|
[
"Aditya Sharma, Nikolas Wolfe, Bhiksha Raj",
"['Aditya Sharma' 'Nikolas Wolfe' 'Bhiksha Raj']"
] |
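To make the whole-neuron pruning idea in the abstract above concrete, here is a brute-force sketch that serially silences hidden units of a trained one-hidden-layer scikit-learn MLP, without any retraining, always removing the unit whose removal hurts validation accuracy the least. The greedy criterion and the toy dataset are illustrative; this is not the authors' exact procedure.

```python
# Sketch: serial whole-neuron pruning of a trained one-hidden-layer MLP with no
# retraining. The greedy "least validation damage" criterion is an illustrative
# choice, not the paper's exact ranking rule.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
Xtr, Xval, ytr, yval = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(Xtr, ytr)
alive = list(range(64))
baseline = net.score(Xval, yval)

while len(alive) > 16:                        # prune down to 25% of the neurons
    scores = []
    for j in alive:
        saved = net.coefs_[1][j].copy()
        net.coefs_[1][j] = 0.0                # zeroing outgoing weights silences neuron j
        scores.append((net.score(Xval, yval), j))
        net.coefs_[1][j] = saved
    acc, j = max(scores)                      # neuron whose removal hurts least
    net.coefs_[1][j] = 0.0
    alive.remove(j)
    print(f"pruned neuron {j:2d}, {len(alive)} left, val acc {acc:.3f} (baseline {baseline:.3f})")
```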
cs.LG stat.ML
| null |
1701.04489
| null | null |
http://arxiv.org/pdf/1701.04489v1
|
2017-01-16T23:57:33Z
|
2017-01-16T23:57:33Z
|
Towards a New Interpretation of Separable Convolutions
|
In recent times, the use of separable convolutions in deep convolutional
neural network architectures has been explored. Several researchers, most
notably (Chollet, 2016) and (Ghosh, 2017) have used separable convolutions in
their deep architectures and have demonstrated state-of-the-art or close to
state-of-the-art performance. However, the underlying mechanism of action of
separable convolutions is still not fully understood. Although their
mathematical definition is well understood as a depthwise convolution followed
by a pointwise convolution, deeper interpretations such as the extreme
Inception hypothesis (Chollet, 2016) have failed to provide a thorough
explanation of their efficacy. In this paper, we propose a hybrid
interpretation that we believe is a better model for explaining the efficacy of
separable convolutions.
|
[
"['Tapabrata Ghosh']",
"Tapabrata Ghosh"
] |
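For reference, the depthwise-convolution-followed-by-pointwise-convolution definition cited in the abstract above can be written in a few lines of PyTorch; the channel counts and kernel size below are arbitrary choices.

```python
# Sketch: a depthwise separable convolution block (depthwise conv followed by a
# 1x1 pointwise conv), matching the mathematical definition mentioned above.
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # one filter per input channel (groups == in_ch): spatial filtering only
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch, bias=False)
        # 1x1 convolution mixes information across channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(2, 32, 56, 56)               # batch of feature maps
print(SeparableConv2d(32, 64)(x).shape)      # torch.Size([2, 64, 56, 56])
```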
stat.ML cs.AI cs.CE cs.LG physics.chem-ph
| null |
1701.04503
| null | null |
http://arxiv.org/pdf/1701.04503v1
|
2017-01-17T01:15:14Z
|
2017-01-17T01:15:14Z
|
Deep Learning for Computational Chemistry
|
The rise and fall of artificial neural networks is well documented in the
scientific literature of both computer science and computational chemistry. Yet
almost two decades later, we are now seeing a resurgence of interest in deep
learning, a machine learning algorithm based on multilayer neural networks.
Within the last few years, we have seen the transformative impact of deep
learning in many domains, particularly in speech recognition and computer
vision, to the extent that the majority of expert practitioners in those fields
are now regularly eschewing prior established models in favor of deep learning
models. In this review, we provide an introductory overview into the theory of
deep neural networks and their unique properties that distinguish them from
traditional machine learning algorithms used in cheminformatics. By providing
an overview of the variety of emerging applications of deep neural networks, we
highlight its ubiquity and broad applicability to a wide range of challenges in
the field, including QSAR, virtual screening, protein structure prediction,
quantum chemistry, materials design and property prediction. In reviewing the
performance of deep neural networks, we observed a consistent outperformance
against non-neural-network state-of-the-art models across disparate research
topics, and deep neural network based models often exceeded the "glass ceiling"
expectations of their respective tasks. Coupled with the maturity of
GPU-accelerated computing for training deep neural networks and the exponential
growth of chemical data on which to train these networks, we anticipate that
deep learning algorithms will be a valuable tool for computational chemistry.
|
[
"Garrett B. Goh, Nathan O. Hodas, Abhinav Vishnu",
"['Garrett B. Goh' 'Nathan O. Hodas' 'Abhinav Vishnu']"
] |
cs.LG
| null |
1701.04508
| null | null |
http://arxiv.org/pdf/1701.04508v2
|
2018-04-09T05:29:03Z
|
2017-01-17T01:40:07Z
|
Online Learning with Regularized Kernel for One-class Classification
|
This paper presents an online-learning, regularized-kernel-based one-class
extreme learning machine (ELM) classifier, referred to as online RK-OC-ELM. The
baseline kernel hyperplane model considers the whole data in a single chunk
with a regularized ELM approach for offline learning in the case of one-class
classification (OCC). In this paper, the basic hyperplane model is further
adapted in an online fashion from a stream of training samples. Two frameworks,
boundary and reconstruction, are presented to detect the target class in online
RK-OC-ELM. The boundary-framework-based one-class classifier uses a
single-output-node architecture and endeavors to approximate all data to an
arbitrarily chosen real number. In contrast, the reconstruction-framework-based
one-class classifier is an autoencoder architecture, where the output nodes are
identical to the input nodes and the classifier endeavors to reconstruct the
input layer at the output layer. Both frameworks employ regularized-kernel-ELM
based online learning, and consistency-based model selection is employed to
select the learning algorithm parameters. The performance of online RK-OC-ELM
has been evaluated on standard benchmark datasets as well as on artificial
datasets, and the results are compared with existing state-of-the-art one-class
classifiers. The results indicate that the online-learning one-class classifier
performs slightly better than, or on par with, batch-learning-based approaches.
Since the proposed classifiers are built on the ELM, they also inherit its main
benefit, namely faster computation compared to traditional autoencoder-based
one-class classifiers.
|
[
"['Chandan Gautam' 'Aruna Tiwari' 'Sundaram Suresh' 'Kapil Ahuja']",
"Chandan Gautam, Aruna Tiwari, Sundaram Suresh and Kapil Ahuja"
] |
cs.LG stat.ML
|
10.1016/j.neucom.2016.04.070
|
1701.04516
| null | null |
http://arxiv.org/abs/1701.04516v1
|
2017-01-17T02:55:51Z
|
2017-01-17T02:55:51Z
|
On The Construction of Extreme Learning Machine for Online and Offline
One-Class Classification - An Expanded Toolbox
|
One-Class Classification (OCC) has been a prime concern for researchers and
has been effectively employed in various disciplines. However, traditional
one-class classifiers are very time consuming due to their iterative training
process and extensive parameter tuning. In this paper, we present six OCC
methods based on the extreme learning machine (ELM) and the Online Sequential
ELM (OSELM). Our proposed classifiers fall into two categories, reconstruction
based and boundary based, and support both online and offline learning. Of the
proposed methods, four are offline and the remaining two are online. Of the
four offline methods, two perform random feature mapping and two perform kernel
feature mapping. The kernel-feature-mapping approaches have been tested with an
RBF kernel, and the online one-class classifiers have been tested with both
additive and RBF nodes. It is a well-known fact that the decision threshold is
a crucial factor in OCC, so three different threshold-deciding criteria have
been employed, and we analyze the effectiveness of one threshold-deciding
criterion over another. Further, these methods are tested on two artificial
datasets to check their boundary construction capability and on eight benchmark
datasets from different disciplines to evaluate the performance of the
classifiers. Our proposed classifiers exhibit better performance compared to
ten traditional one-class classifiers and two ELM-based one-class classifiers.
Through the proposed one-class classifiers, we intend to expand the
functionality of the most widely used toolbox for OCC, the DD toolbox. All of
our methods are fully compatible with the existing features of the toolbox.
|
[
"Chandan Gautam, Aruna Tiwari and Qian Leng",
"['Chandan Gautam' 'Aruna Tiwari' 'Qian Leng']"
] |
cs.LG stat.AP
| null |
1701.04518
| null | null |
http://arxiv.org/pdf/1701.04518v1
|
2017-01-17T03:08:12Z
|
2017-01-17T03:08:12Z
|
Towards prediction of rapid intensification in tropical cyclones with
recurrent neural networks
|
The problem where a tropical cyclone intensifies dramatically within a short
period of time is known as rapid intensification. This has been one of the
major challenges for tropical weather forecasting. Recurrent neural networks
have been promising for time series problems, which makes them appropriate for
predicting rapid intensification. In this paper, recurrent neural networks are used to
predict rapid intensification cases of tropical cyclones from the South Pacific
and South Indian Ocean regions. A class imbalance problem is encountered, which
makes it very challenging to achieve promising performance. A simple strategy
was proposed to include more positive cases for detection, which slightly
improved the false positive rate. Limitations in building an efficient
system remain due to the challenge of addressing the class imbalance problem
encountered in rapid intensification prediction. This motivates further
research in using innovative machine learning methods.
|
[
"['Rohitash Chandra']",
"Rohitash Chandra"
] |
cs.LG cs.IR
| null |
1701.04600
| null | null |
http://arxiv.org/pdf/1701.04600v1
|
2017-01-17T10:00:51Z
|
2017-01-17T10:00:51Z
|
Faster K-Means Cluster Estimation
|
There has been considerable work on improving the popular clustering
algorithm K-means in terms of both mean squared error (MSE) and speed. However,
most k-means variants tend to compute the distance of each data point to each
cluster centroid in every iteration. We propose a fast heuristic to overcome
this bottleneck with only a marginal increase in MSE. We observe that across
all iterations of K-means, a data point changes its membership only among a
small subset of clusters. Our heuristic predicts such clusters for each data
point by looking at nearby clusters after the first iteration of k-means. We
augment well-known variants of k-means with our heuristic to demonstrate its
effectiveness. For various synthetic and real-world datasets, our heuristic
achieves a speed-up of up to 3 times when compared to efficient variants of
k-means.
|
[
"Siddhesh Khandelwal, Amit Awekar",
"['Siddhesh Khandelwal' 'Amit Awekar']"
] |
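A minimal NumPy sketch of the candidate-cluster heuristic described in the abstract above: after one standard iteration, each point only ever considers its few nearest centroids from that iteration. The candidate-set size `t` and the toy data are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the candidate-cluster heuristic: after one full iteration, each
# point only ever considers its t nearest centroids from that iteration.
import numpy as np

def fast_kmeans(X, k, t=2, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]           # initial centroids
    d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)    # full distances (iteration 1)
    cand = np.argsort(d, axis=1)[:, :t]                   # t nearest clusters per point
    labels = cand[:, 0]
    for _ in range(iters):
        for j in range(k):                                 # centroid update step
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
        # assignment step restricted to each point's candidate clusters
        d_cand = ((X[:, None, :] - C[cand]) ** 2).sum(-1)
        labels = cand[np.arange(len(X)), np.argmin(d_cand, axis=1)]
    return labels, C

blobs = [np.random.randn(200, 2) + m for m in ([0, 0], [8, 0], [0, 8], [8, 8])]
labels, C = fast_kmeans(np.vstack(blobs), k=4)
print(np.bincount(labels))
```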
cs.RO cs.HC cs.LG
| null |
1701.04693
| null | null |
http://arxiv.org/pdf/1701.04693v1
|
2017-01-17T14:29:05Z
|
2017-01-17T14:29:05Z
|
Incremental Learning for Robot Perception through HRI
|
Scene understanding and object recognition are difficult to achieve yet
crucial skills for robots. Recently, Convolutional Neural Networks (CNNs) have
shown success in this task. However, there is still a gap between their
performance on image datasets and real-world robotics scenarios. We present a
novel paradigm for incrementally improving a robot's visual perception through
active human interaction. In this paradigm, the user introduces novel objects
to the robot by means of pointing and voice commands. Given this information,
the robot visually explores the object and adds images from it to re-train the
perception module. Our base perception module is based on recent developments in
object detection and recognition using deep learning. Our method leverages
state-of-the-art CNNs from off-line batch learning, human guidance, robot
exploration, and incremental on-line learning.
|
[
"['Sepehr Valipour' 'Camilo Perez' 'Martin Jagersand']",
"Sepehr Valipour, Camilo Perez, Martin Jagersand"
] |
cs.LG
| null |
1701.04722
| null | null |
http://arxiv.org/pdf/1701.04722v4
|
2018-06-11T12:19:02Z
|
2017-01-17T15:18:31Z
|
Adversarial Variational Bayes: Unifying Variational Autoencoders and
Generative Adversarial Networks
|
Variational Autoencoders (VAEs) are expressive latent variable models that
can be used to learn complex probability distributions from training data.
However, the quality of the resulting model crucially relies on the
expressiveness of the inference model. We introduce Adversarial Variational
Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily
expressive inference models. We achieve this by introducing an auxiliary
discriminative network that allows us to rephrase the maximum-likelihood problem
as a two-player game, hence establishing a principled connection between VAEs
and Generative Adversarial Networks (GANs). We show that in the nonparametric
limit our method yields an exact maximum-likelihood assignment for the
parameters of the generative model, as well as the exact posterior distribution
over the latent variables given an observation. Contrary to competing
approaches which combine VAEs with GANs, our approach has a clear theoretical
justification, retains most advantages of standard Variational Autoencoders and
is easy to implement.
|
[
"['Lars Mescheder' 'Sebastian Nowozin' 'Andreas Geiger']",
"Lars Mescheder, Sebastian Nowozin and Andreas Geiger"
] |
cs.LG stat.ML
| null |
1701.04724
| null | null |
http://arxiv.org/pdf/1701.04724v5
|
2019-06-27T13:10:36Z
|
2017-01-17T15:19:44Z
|
On the Sample Complexity of Graphical Model Selection for Non-Stationary
Processes
|
We characterize the sample size required for accurate graphical model
selection from non-stationary samples. The observed data is modeled as a
vector-valued zero-mean Gaussian random process whose samples are uncorrelated
but have different covariance matrices. This model contains as special cases
the standard setting of i.i.d. samples as well as the case of samples forming a
stationary or underspread (non-stationary) process. More generally, our model
applies to any process model for which an efficient decorrelation can be
obtained. By analyzing a particular model selection method, we derive a
sufficient condition on the required sample size for accurate graphical model
selection based on non-stationary data.
|
[
"Nguyen Q. Tran and Oleksii Abramenko and Alexander Jung",
"['Nguyen Q. Tran' 'Oleksii Abramenko' 'Alexander Jung']"
] |
cs.CR cs.LG
| null |
1701.04739
| null | null |
http://arxiv.org/pdf/1701.04739v1
|
2017-01-17T15:59:17Z
|
2017-01-17T15:59:17Z
|
Summoning Demons: The Pursuit of Exploitable Bugs in Machine Learning
|
Governments and businesses increasingly rely on data analytics and machine
learning (ML) for improving their competitive edge in areas such as consumer
satisfaction, threat intelligence, decision making, and product efficiency.
However, by cleverly corrupting a subset of data used as input to a target's ML
algorithms, an adversary can perturb outcomes and compromise the effectiveness
of ML technology. While prior work in the field of adversarial machine learning
has studied the impact of input manipulation on correct ML algorithms, we
consider the exploitation of bugs in ML implementations. In this paper, we
characterize the attack surface of ML programs, and we show that malicious
inputs exploiting implementation bugs enable strictly more powerful attacks
than the classic adversarial machine learning techniques. We propose a
semi-automated technique, called steered fuzzing, for exploring this attack
surface and for discovering exploitable bugs in machine learning programs, in
order to demonstrate the magnitude of this threat. As a result of our work, we
responsibly disclosed five vulnerabilities, established three new CVE-IDs, and
illuminated a common insecure practice across many machine learning systems.
Finally, we outline several research directions for further understanding and
mitigating this threat.
|
[
"['Rock Stevens' 'Octavian Suciu' 'Andrew Ruef' 'Sanghyun Hong'\n 'Michael Hicks' 'Tudor Dumitraş']",
"Rock Stevens, Octavian Suciu, Andrew Ruef, Sanghyun Hong, Michael\n Hicks, Tudor Dumitra\\c{s}"
] |
cs.LG cs.IR
| null |
1701.04783
| null | null |
http://arxiv.org/pdf/1701.04783v1
|
2017-01-17T17:46:04Z
|
2017-01-17T17:46:04Z
|
Joint Deep Modeling of Users and Items Using Reviews for Recommendation
|
A large amount of information exists in reviews written by users. This source
of information has been ignored by most current recommender systems,
even though it can potentially alleviate the sparsity problem and improve the quality
of recommendations. In this paper, we present a deep model to learn item
properties and user behaviors jointly from review text. The proposed model,
named Deep Cooperative Neural Networks (DeepCoNN), consists of two parallel
neural networks coupled in the last layers. One of the networks focuses on
learning user behaviors exploiting reviews written by the user, and the other
one learns item properties from the reviews written for the item. A shared
layer is introduced on top to couple these two networks together. The
shared layer enables latent factors learned for users and items to interact
with each other in a manner similar to factorization machine techniques.
Experimental results demonstrate that DeepCoNN significantly outperforms all
baseline recommender systems on a variety of datasets.
|
[
"['Lei Zheng' 'Vahid Noroozi' 'Philip S. Yu']",
"Lei Zheng, Vahid Noroozi, Philip S. Yu"
] |
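A skeletal PyTorch rendition of the architecture sketched in the abstract above: two parallel review-text CNNs, one modeling the user and one the item, coupled at the top by a shared interaction layer. The hyper-parameters and the simple dot-product coupling are illustrative simplifications; the paper couples the towers with a factorization-machine-style layer.

```python
# Sketch of a DeepCoNN-style model: two parallel review-text CNNs coupled by a
# shared interaction layer. Dimensions and the plain dot-product coupling are
# illustrative; the paper uses a factorization-machine layer on top.
import torch
import torch.nn as nn

class ReviewTower(nn.Module):
    def __init__(self, vocab, emb=64, filters=100, width=3, latent=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, filters, width, padding=1)
        self.proj = nn.Linear(filters, latent)

    def forward(self, tokens):                          # tokens: (batch, seq_len)
        h = self.emb(tokens).transpose(1, 2)            # (batch, emb, seq_len)
        h = torch.relu(self.conv(h)).max(dim=2).values  # max-pool over time
        return self.proj(h)                             # (batch, latent)

class DeepCoNNSketch(nn.Module):
    def __init__(self, vocab=5000):
        super().__init__()
        self.user_net = ReviewTower(vocab)   # learns user behavior from the user's reviews
        self.item_net = ReviewTower(vocab)   # learns item properties from the item's reviews
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, user_reviews, item_reviews):
        u, v = self.user_net(user_reviews), self.item_net(item_reviews)
        return (u * v).sum(dim=1) + self.bias           # predicted rating

model = DeepCoNNSketch()
u_tok = torch.randint(0, 5000, (4, 120))     # 4 users, 120 tokens of concatenated reviews
i_tok = torch.randint(0, 5000, (4, 120))
print(model(u_tok, i_tok).shape)             # torch.Size([4])
```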
stat.ML cs.LG
| null |
1701.04862
| null | null |
http://arxiv.org/pdf/1701.04862v1
|
2017-01-17T20:46:21Z
|
2017-01-17T20:46:21Z
|
Towards Principled Methods for Training Generative Adversarial Networks
|
The goal of this paper is not to introduce a single algorithm or method, but
to make theoretical steps towards fully understanding the training dynamics of
generative adversarial networks. In order to substantiate our theoretical
analysis, we perform targeted experiments to verify our assumptions, illustrate
our claims, and quantify the phenomena. This paper is divided into three
sections. The first section introduces the problem at hand. The second section
is dedicated to rigorously studying and proving the problems, including
instability and saturation, that arise when training generative adversarial
networks. The third section examines a practical and theoretically grounded
direction towards solving these problems, while introducing new tools to study
them.
|
[
"Martin Arjovsky, L\\'eon Bottou",
"['Martin Arjovsky' 'Léon Bottou']"
] |
cs.LG stat.ML
| null |
1701.04869
| null | null |
http://arxiv.org/pdf/1701.04869v2
|
2017-01-23T12:37:29Z
|
2017-01-17T21:15:56Z
|
3D Morphology Prediction of Progressive Spinal Deformities from
Probabilistic Modeling of Discriminant Manifolds
|
We introduce a novel approach for predicting the progression of adolescent
idiopathic scoliosis from 3D spine models reconstructed from biplanar X-ray
images. Recent progress in machine learning has improved
classification and prognosis rates, but lacks a probabilistic framework to
measure uncertainty in the data. We propose a discriminative probabilistic
manifold embedding where locally linear mappings transform data points from
high-dimensional space to corresponding low-dimensional coordinates. A
discriminant adjacency matrix is constructed to maximize the separation between
progressive and non-progressive groups of patients diagnosed with scoliosis,
while minimizing the distance in latent variables belonging to the same class.
To predict the evolution of deformation, a baseline reconstruction is projected
onto the manifold, from which a spatiotemporal regression model is built from
parallel transport curves inferred from neighboring exemplars. Rate of
progression is modulated from the spine flexibility and curve magnitude of the
3D spine deformation. The method was tested on 745 reconstructions from 133
subjects using longitudinal 3D reconstructions of the spine, with results
demonstrating the discriminative framework can distinguish between progressive and
non-progressive scoliotic patients with a classification rate of 81% and
prediction differences of 2.1$^{o}$ in main curve angulation, outperforming
other manifold learning methods. Our method achieved a higher prediction
accuracy and improved the modeling of spatiotemporal morphological changes in
highly deformed spines compared to other learning methods.
|
[
"Samuel Kadoury, William Mandel, Marjolaine Roy-Beaudry, Marie-Lyne\n Nault, Stefan Parent",
"['Samuel Kadoury' 'William Mandel' 'Marjolaine Roy-Beaudry'\n 'Marie-Lyne Nault' 'Stefan Parent']"
] |
cs.IT cs.LG math.IT
| null |
1701.04926
| null | null |
http://arxiv.org/pdf/1701.04926v3
|
2017-02-24T13:25:40Z
|
2017-01-18T02:18:08Z
|
Agglomerative Info-Clustering
|
An agglomerative clustering of random variables is proposed, where clusters
of random variables sharing the maximum amount of multivariate mutual
information are merged successively to form larger clusters. Compared to the
previous info-clustering algorithms, the agglomerative approach allows the
computation to stop earlier when clusters of desired size and accuracy are
obtained. An efficient algorithm is also derived based on the submodularity of
entropy and the duality between the principal sequence of partitions and the
principal sequence for submodular functions.
|
[
"['Chung Chan' 'Ali Al-Bashabsheh' 'Qiaoqiao Zhou']",
"Chung Chan, Ali Al-Bashabsheh, Qiaoqiao Zhou"
] |
stat.ML cs.LG
| null |
1701.04944
| null | null |
http://arxiv.org/pdf/1701.04944v5
|
2017-02-20T19:12:33Z
|
2017-01-18T05:07:03Z
|
A Machine Learning Alternative to P-values
|
This paper presents an alternative approach to p-values in regression
settings. This approach, whose origins can be traced to machine learning, is
based on the leave-one-out bootstrap for prediction error. In machine learning
this is called the out-of-bag (OOB) error. To obtain the OOB error for a model,
one draws a bootstrap sample and fits the model to the in-sample data. The
out-of-sample prediction error for the model is obtained by calculating the
prediction error for the model using the out-of-sample data. Repeating and
averaging yields the OOB error, which represents a robust cross-validated
estimate of the accuracy of the underlying model. By a simple modification to
the bootstrap data involving "noising up" a variable, the OOB method yields a
variable importance (VIMP) index, which directly measures how much a specific
variable contributes to the prediction precision of a model. VIMP provides a
scientifically interpretable measure of the effect size of a variable, which we
call the "predictive effect size", and which holds whether the researcher's model is
correct or not, unlike the p-value whose calculation is based on the assumed
correctness of the model. We also discuss a marginal VIMP index, also easily
calculated, which measures the marginal effect of a variable, or what we call
"the discovery effect". The OOB procedure can be applied to both parametric and
nonparametric regression models and requires only that the researcher can
repeatedly fit their model to bootstrap and modified bootstrap data. We
illustrate this approach on a survival data set involving patients with
systolic heart failure and on a simulated survival data set where the model is
incorrectly specified to illustrate its robustness to model misspecification.
|
[
"Min Lu and Hemant Ishwaran",
"['Min Lu' 'Hemant Ishwaran']"
] |
cs.NE cs.CV cs.LG
| null |
1701.04949
| null | null |
http://arxiv.org/pdf/1701.04949v1
|
2017-01-18T05:24:24Z
|
2017-01-18T05:24:24Z
|
A Deep Convolutional Auto-Encoder with Pooling - Unpooling Layers in
Caffe
|
This paper presents the development of several models of a deep convolutional
auto-encoder in the Caffe deep learning framework and their experimental
evaluation on the example of MNIST dataset. We have created five models of a
convolutional auto-encoder which differ architecturally by the presence or
absence of pooling and unpooling layers in the auto-encoder's encoder and
decoder parts. Our results show that the developed models provide very good
results in dimensionality reduction and unsupervised clustering tasks, and
small classification errors when we used the learned internal code as an input
of a supervised linear classifier and multi-layer perceptron. The best results
were provided by a model where the encoder part contains convolutional and
pooling layers, followed by an analogous decoder part with deconvolution and
unpooling layers without the use of switch variables in the decoder part. The
paper also discusses practical details of the creation of a deep convolutional
auto-encoder in the very popular Caffe deep learning framework. We believe that
our approach and results presented in this paper could help other researchers
to build efficient deep neural network architectures in the future.
|
[
"['Volodymyr Turchenko' 'Eric Chalmers' 'Artur Luczak']",
"Volodymyr Turchenko, Eric Chalmers, Artur Luczak"
] |
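The models in the paper are built in Caffe; purely as an illustration of the layout described above (convolution and pooling in the encoder, a mirrored deconvolution decoder without switch variables), here is a rough PyTorch analogue in which unpooling without switches is approximated by nearest-neighbor upsampling.

```python
# Rough PyTorch analogue of the described layout: convolution + pooling in the
# encoder, deconvolution + "unpooling" in the decoder. Unpooling without switch
# variables is approximated here by nearest-neighbor upsampling.
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self, code_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28 -> 14
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14 -> 7
            nn.Flatten(), nn.Linear(16 * 7 * 7, code_dim))               # internal code
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 16 * 7 * 7), nn.ReLU(),
            nn.Unflatten(1, (16, 7, 7)),
            nn.Upsample(scale_factor=2), nn.ConvTranspose2d(16, 8, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.ConvTranspose2d(8, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

x = torch.rand(5, 1, 28, 28)                 # MNIST-sized inputs
recon, code = ConvAE()(x)
print(recon.shape, code.shape)               # (5, 1, 28, 28) and (5, 10)
```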
stat.ML cs.LG
| null |
1701.04968
| null | null |
http://arxiv.org/pdf/1701.04968v1
|
2017-01-18T06:49:03Z
|
2017-01-18T06:49:03Z
|
Multilayer Perceptron Algebra
|
Artificial Neural Networks (ANNs) have been phenomenally successful on various
pattern recognition tasks. However, the design of neural networks relies heavily
on the experience and intuition of individual developers. In this article, the
author introduces a mathematical structure called MLP algebra on the set of all
Multilayer Perceptron Neural Networks (MLPs), which can serve as a guiding
principle for building MLPs tailored to particular data sets, and for building
complex MLPs from simpler ones.
|
[
"Zhao Peng",
"['Zhao Peng']"
] |
cs.LG
| null |
1701.05053
| null | null |
http://arxiv.org/pdf/1701.05053v1
|
2017-01-18T13:23:21Z
|
2017-01-18T13:23:21Z
|
Highly Efficient Hierarchical Online Nonlinear Regression Using Second
Order Methods
|
We introduce highly efficient online nonlinear regression algorithms that are
suitable for real life applications. We process the data in a truly online
manner such that no storage is needed, i.e., the data is discarded after being
used. For nonlinear modeling we use a hierarchical piecewise linear approach
based on the notion of decision trees where the space of the regressor vectors
is adaptively partitioned based on performance. For the first time in the
literature, we learn both the piecewise linear partitioning of the regressor
space and the linear models in each region using highly effective second-order
methods, i.e., Newton-Raphson methods. Hence, we avoid the well-known
overfitting issues by using piecewise linear models; moreover, since both the
region boundaries and the linear models in each region are trained using
second-order methods, we achieve substantial performance gains compared to the
state of the art. We demonstrate our gains on well-known benchmark data
sets and provide performance results in an individual sequence manner
guaranteed to hold without any statistical assumptions. Hence, the introduced
algorithms address computational complexity issues widely encountered in real
life applications while providing superior guaranteed performance in a strong
deterministic sense.
|
[
"Burak C. Civek, Ibrahim Delibalta and Suleyman S. Kozat",
"['Burak C. Civek' 'Ibrahim Delibalta' 'Suleyman S. Kozat']"
] |
cs.LG math.FA
| null |
1701.05217
| null | null |
http://arxiv.org/pdf/1701.05217v1
|
2017-01-18T19:51:28Z
|
2017-01-18T19:51:28Z
|
Lipschitz Properties for Deep Convolutional Networks
|
In this paper we discuss the stability properties of convolutional neural
networks. Convolutional neural networks are widely used in machine learning. In
classification they are mainly used as feature extractors. Ideally, we expect
similar features when the inputs are from the same class. That is, we hope to
see a small change in the feature vector with respect to a deformation on the
input signal. This can be established mathematically, and the key step is to
derive the Lipschitz properties. Further, we establish that the stability
results can be extended to more general networks. We give a formula for
computing the Lipschitz bound, and compare it with other methods to show it is
closer to the optimal value.
|
[
"Radu Balan, Maneesh Singh, Dongmian Zou",
"['Radu Balan' 'Maneesh Singh' 'Dongmian Zou']"
] |
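For orientation, the coarsest Lipschitz bound for a feed-forward network with 1-Lipschitz activations (such as ReLU) is simply the product of the layers' operator norms; the paper derives a formula that is closer to the optimal value than estimates of this kind. A toy computation of the coarse baseline bound:

```python
# Toy computation of the coarse Lipschitz upper bound for a fully-connected
# network with 1-Lipschitz activations: the product of the layers' spectral
# norms. This is only the standard baseline, not the paper's tighter bound.
import numpy as np

rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 128)),   # weight matrices of three layers
          rng.standard_normal((32, 64)),
          rng.standard_normal((10, 32))]

bound = 1.0
for W in layers:
    bound *= np.linalg.norm(W, 2)           # largest singular value = operator norm
print("coarse Lipschitz bound:", bound)
```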
cs.CV cs.AI cs.LG cs.NE
| null |
1701.05221
| null | null |
http://arxiv.org/pdf/1701.05221v5
|
2017-01-31T12:15:43Z
|
2017-01-18T20:03:12Z
|
Parsimonious Inference on Convolutional Neural Networks: Learning and
applying on-line kernel activation rules
|
A new, radical CNN design approach is presented in this paper, considering
the reduction of the total computational load during inference. This is
achieved by a new holistic intervention on both the CNN architecture and the
training procedure, which targets parsimonious inference by learning to
exploit or remove the redundant capacity of a CNN architecture. This is
accomplished by the introduction of a new structural element that can be
inserted as an add-on to any contemporary CNN architecture, whilst preserving
or even improving its recognition accuracy. Our approach formulates a
systematic and data-driven method for developing CNNs that are trained to
eventually change size and form in real time during inference, targeting the
smallest possible computational footprint. Results are provided for the optimal
implementation on a few modern, high-end mobile computing platforms, indicating
a significant speed-up of up to 3x.
|
[
"['I. Theodorakopoulos' 'V. Pothos' 'D. Kastaniotis' 'N. Fragoulis']",
"I. Theodorakopoulos, V. Pothos, D. Kastaniotis and N. Fragoulis"
] |
stat.ML cs.IR cs.LG
| null |
1701.05228
| null | null |
http://arxiv.org/pdf/1701.05228v2
|
2017-03-12T23:33:18Z
|
2017-01-18T20:45:57Z
|
Recommendation under Capacity Constraints
|
In this paper, we investigate the common scenario where every candidate item
for recommendation is characterized by a maximum capacity, i.e., number of
seats in a Point-of-Interest (POI) or size of an item's inventory. Despite the
prevalence of the task of recommending items under capacity constraints in a
variety of settings, to the best of our knowledge, none of the known
recommender methods is designed to respect capacity constraints. To close this
gap, we extend three state-of-the-art latent factor recommendation approaches:
probabilistic matrix factorization (PMF), geographical matrix factorization
(GeoMF), and Bayesian personalized ranking (BPR), to optimize for both
recommendation accuracy and expected item usage that respects the capacity
constraints. We introduce the useful concepts of user propensity to listen and
item capacity. Our experimental results in real-world datasets, both for the
domain of item recommendation and POI recommendation, highlight the benefit of
our method for the setting of recommendation under capacity constraints.
|
[
"Konstantina Christakopoulou, Jaya Kawale, Arindam Banerjee",
"['Konstantina Christakopoulou' 'Jaya Kawale' 'Arindam Banerjee']"
] |
stat.ML cs.LG
| null |
1701.05265
| null | null |
http://arxiv.org/pdf/1701.05265v1
|
2017-01-19T00:42:01Z
|
2017-01-19T00:42:01Z
|
Online Structure Learning for Sum-Product Networks with Gaussian Leaves
|
Sum-product networks have recently emerged as an attractive representation
due to their dual view as a special type of deep neural network with clear
semantics and a special type of probabilistic graphical model for which
inference is always tractable. Those properties follow from some conditions
(i.e., completeness and decomposability) that must be respected by the
structure of the network. As a result, it is not easy to specify a valid
sum-product network by hand and therefore structure learning techniques are
typically used in practice. This paper describes the first online structure
learning technique for continuous SPNs with Gaussian leaves. We also introduce
an accompanying new parameter learning technique.
|
[
"['Wilson Hsu' 'Agastya Kalra' 'Pascal Poupart']",
"Wilson Hsu, Agastya Kalra, Pascal Poupart"
] |
cs.LG stat.ML
| null |
1701.05335
| null | null |
http://arxiv.org/pdf/1701.05335v3
|
2018-12-21T12:00:27Z
|
2017-01-19T08:55:20Z
|
Validity of Clusters Produced By kernel-$k$-means With Kernel-Trick
|
This paper corrects the proof of Theorem 2 from Gower's paper
\cite[page 5]{Gower:1982} as well as Theorem 7 from Gower's paper
\cite{Gower:1986}. The first correction is needed in order to establish the
existence of the kernel function commonly used in the kernel trick, e.g. for
the $k$-means clustering algorithm, on the grounds of a distance matrix. The
correction supplies the missing if-part of the proof and drops unnecessary
conditions. The second correction deals with the transformation of the kernel
matrix into one embeddable in Euclidean space.
|
[
"Mieczys{\\l}aw A. K{\\l}opotek",
"['Mieczysław A. Kłopotek']"
] |
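For orientation, the classical construction behind both corrected theorems turns a squared-distance matrix into a candidate kernel (Gram) matrix by double centering, K = -(1/2) J D2 J with J the centering matrix; the paper's corrections concern the precise conditions under which this yields a valid, Euclidean-embeddable kernel. The snippet below only verifies the identity numerically on points that are Euclidean by construction.

```python
# Illustration of the classical double-centering step that turns a squared
# Euclidean distance matrix into a Gram (kernel) matrix: K = -1/2 * J @ D2 @ J.
# The paper's corrections concern the exact conditions for this construction;
# here the points are Euclidean by design, so the check succeeds.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 3))                        # 6 points in R^3
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)    # squared distance matrix

n = len(X)
J = np.eye(n) - np.ones((n, n)) / n                    # centering matrix
K = -0.5 * J @ D2 @ J                                  # double-centered kernel matrix

Xc = X - X.mean(axis=0)
print(np.allclose(K, Xc @ Xc.T))                       # True: K is the centered Gram matrix
print(np.linalg.eigvalsh(K).min() >= -1e-9)            # positive semi-definite
```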
stat.ML cs.LG math.OC q-bio.NC
|
10.1109/TSP.2017.2752697
|
1701.05363
| null | null |
http://arxiv.org/abs/1701.05363v3
|
2017-10-30T09:24:27Z
|
2017-01-19T10:35:01Z
|
Stochastic Subsampling for Factorizing Huge Matrices
|
We present a matrix-factorization algorithm that scales to input matrices
with huge numbers of both rows and columns. Learned factors may be sparse or
dense and/or non-negative, which makes our algorithm suitable for dictionary
learning, sparse component analysis, and non-negative matrix factorization. Our
algorithm streams matrix columns while subsampling them to iteratively learn
the matrix factors. At each iteration, the row dimension of a new sample is
reduced by subsampling, resulting in lower time complexity compared to a simple
streaming algorithm. Our method comes with convergence guarantees to reach a
stationary point of the matrix-factorization problem. We demonstrate its
efficiency on massive functional Magnetic Resonance Imaging data (2 TB), and on
patches extracted from hyperspectral images (103 GB). For both problems, which
involve different penalties on rows and columns, we obtain significant
speed-ups compared to state-of-the-art algorithms.
|
[
"Arthur Mensch (PARIETAL, NEUROSPIN), Julien Mairal (Thoth), Bertrand\n Thirion (PARIETAL, NEUROSPIN), Gael Varoquaux (NEUROSPIN, PARIETAL)",
"['Arthur Mensch' 'Julien Mairal' 'Bertrand Thirion' 'Gael Varoquaux']"
] |
stat.ML cs.LG
| null |
1701.05369
| null | null |
http://arxiv.org/pdf/1701.05369v3
|
2017-06-13T11:01:55Z
|
2017-01-19T10:44:55Z
|
Variational Dropout Sparsifies Deep Neural Networks
|
We explore a recently proposed Variational Dropout technique that provided an
elegant Bayesian interpretation to Gaussian Dropout. We extend Variational
Dropout to the case when dropout rates are unbounded, propose a way to reduce
the variance of the gradient estimator and report first experimental results
with individual dropout rates per weight. Interestingly, it leads to extremely
sparse solutions both in fully-connected and convolutional layers. This effect
is similar to the automatic relevance determination effect in empirical Bayes but
has a number of advantages. We reduce the number of parameters up to 280 times
on LeNet architectures and up to 68 times on VGG-like networks with a
negligible decrease of accuracy.
|
[
"Dmitry Molchanov, Arsenii Ashukha and Dmitry Vetrov",
"['Dmitry Molchanov' 'Arsenii Ashukha' 'Dmitry Vetrov']"
] |
cs.LG cs.LO
| null |
1701.05487
| null | null |
http://arxiv.org/pdf/1701.05487v1
|
2017-01-19T15:48:11Z
|
2017-01-19T15:48:11Z
|
Learning first-order definable concepts over structures of small degree
|
We consider a declarative framework for machine learning where concepts and
hypotheses are defined by formulas of a logic over some background structure.
We show that within this framework, concepts defined by first-order formulas
over a background structure of at most polylogarithmic degree can be learned in
polylogarithmic time in the "probably approximately correct" learning sense.
|
[
"Martin Grohe and Martin Ritzert",
"['Martin Grohe' 'Martin Ritzert']"
] |
stat.ML cs.LG stat.CO
| null |
1701.05512
| null | null |
http://arxiv.org/pdf/1701.05512v2
|
2017-04-25T20:04:39Z
|
2017-01-19T17:07:21Z
|
Fisher consistency for prior probability shift
|
We introduce Fisher consistency in the sense of unbiasedness as a desirable
property for estimators of class prior probabilities. Lack of Fisher
consistency could be used as a criterion to dismiss estimators that are
unlikely to deliver precise estimates in test datasets under prior probability
and more general dataset shift. The usefulness of this unbiasedness concept is
demonstrated with three examples of classifiers used for quantification:
Adjusted Classify & Count, EM-algorithm and CDE-Iterate. We find that Adjusted
Classify & Count and EM-algorithm are Fisher consistent. A counter-example
shows that CDE-Iterate is not Fisher consistent and, therefore, cannot be
trusted to deliver reliable estimates of class probabilities.
|
[
"['Dirk Tasche']",
"Dirk Tasche"
] |
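One of the three quantifiers examined above, Adjusted Classify & Count, is simple enough to state in two lines: correct the raw rate of positive predictions by the classifier's true- and false-positive rates. A small sketch (the clipping to [0, 1] is a common practical addition, not something taken from the abstract):

```python
# Sketch of Adjusted Classify & Count: estimate the positive-class prior on a
# test set by correcting the raw rate of positive predictions with the
# classifier's true/false positive rates. Clipping to [0, 1] is a practical
# addition, not part of the abstract.
import numpy as np

def adjusted_classify_and_count(y_pred_test, tpr, fpr):
    raw_rate = np.mean(y_pred_test)                     # plain Classify & Count
    return float(np.clip((raw_rate - fpr) / (tpr - fpr), 0.0, 1.0))

# Toy example: a classifier with tpr=0.8, fpr=0.1 flags about 45% of test items.
y_pred_test = np.random.default_rng(0).random(1000) < 0.45
print(adjusted_classify_and_count(y_pred_test, tpr=0.8, fpr=0.1))  # about 0.5
```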
cs.LG stat.ML
| null |
1701.05517
| null | null |
http://arxiv.org/pdf/1701.05517v1
|
2017-01-19T17:29:06Z
|
2017-01-19T17:29:06Z
|
PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture
Likelihood and Other Modifications
|
PixelCNNs are a recently proposed class of powerful generative models with
tractable likelihood. Here we discuss our implementation of PixelCNNs which we
make available at https://github.com/openai/pixel-cnn. Our implementation
contains a number of modifications to the original model that both simplify its
structure and improve its performance. 1) We use a discretized logistic mixture
likelihood on the pixels, rather than a 256-way softmax, which we find to speed
up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels,
simplifying the model structure. 3) We use downsampling to efficiently capture
structure at multiple resolutions. 4) We introduce additional short-cut
connections to further speed up optimization. 5) We regularize the model using
dropout. Finally, we present state-of-the-art log likelihood results on
CIFAR-10 to demonstrate the usefulness of these modifications.
|
[
"['Tim Salimans' 'Andrej Karpathy' 'Xi Chen' 'Diederik P. Kingma']",
"Tim Salimans, Andrej Karpathy, Xi Chen, Diederik P. Kingma"
] |
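Modification (1) above, the discretized logistic likelihood, assigns each integer pixel value the probability mass that a logistic distribution places on the bin around it, with the edge bins absorbing the tails. A single-component sketch for 8-bit pixels (the released code works on rescaled pixels and mixes several components; those details are omitted here):

```python
# Sketch of the discretized logistic likelihood for 8-bit pixels: the mass a
# logistic(mu, s) distribution assigns to the bin [x-0.5, x+0.5], with the edge
# bins absorbing the tails. Single component only; the paper mixes several.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discretized_logistic_pmf(x, mu, s):
    # x: integer pixel values in [0, 255]; mu, s: location and scale
    upper = np.where(x == 255, 1.0, sigmoid((x + 0.5 - mu) / s))
    lower = np.where(x == 0, 0.0, sigmoid((x - 0.5 - mu) / s))
    return upper - lower

x = np.arange(256)
p = discretized_logistic_pmf(x, mu=100.0, s=8.0)
print(p.sum())             # sums to 1.0 over the 256 bins
print(p[95:106].round(4))  # mass concentrated around mu
```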
cs.NE cs.CV cs.LG
| null |
1701.05549
| null | null |
http://arxiv.org/pdf/1701.05549v1
|
2017-01-19T18:43:56Z
|
2017-01-19T18:43:56Z
|
Deep Neural Networks - A Brief History
|
Introduction to deep neural networks and their history.
|
[
"['Krzysztof J. Cios']",
"Krzysztof J. Cios"
] |
stat.ML cs.LG
| null |
1701.05573
| null | null |
http://arxiv.org/pdf/1701.05573v1
|
2017-01-19T19:28:37Z
|
2017-01-19T19:28:37Z
|
Poisson--Gamma Dynamical Systems
|
We introduce a new dynamical system for sequentially observed multivariate
count data. This model is based on the gamma--Poisson construction---a natural
choice for count data---and relies on a novel Bayesian nonparametric prior that
ties and shrinks the model parameters, thus avoiding overfitting. We present an
efficient MCMC inference algorithm that advances recent work on augmentation
schemes for inference in negative binomial models. Finally, we demonstrate the
model's inductive bias using a variety of real-world data sets, showing that it
exhibits superior predictive performance over other models and infers highly
interpretable latent structure.
|
[
"['Aaron Schein' 'Mingyuan Zhou' 'Hanna Wallach']",
"Aaron Schein, Mingyuan Zhou, Hanna Wallach"
] |
stat.ML cs.LG
| null |
1701.05644
| null | null |
http://arxiv.org/pdf/1701.05644v1
|
2017-01-19T23:43:54Z
|
2017-01-19T23:43:54Z
|
Rare Disease Physician Targeting: A Factor Graph Approach
|
In rare disease physician targeting, a major challenge is how to identify
physicians who are treating diagnosed or underdiagnosed rare diseases patients.
Rare diseases have extremely low incidence rates. For a specified rare disease,
only a small number of patients are affected and only a fraction of physicians are
involved. The existing targeting methodologies, such as segmentation and
profiling, are developed under a mass-market assumption. They are not suitable
for the rare disease market, where the target classes are extremely imbalanced. The
authors propose a graphical model approach to predict targets by jointly
modeling physician and patient features from different data spaces and
utilizing the extra relational information. Through an empirical example with
medical claim and prescription data, the proposed approach demonstrates better
accuracy in finding target physicians. The graph representation also provides
visual interpretability of relationship among physicians and patients. The
model can be extended to incorporate more complex dependency structures. This
article contributes to the literature of exploring the benefit of utilizing
relational dependencies among entities in healthcare industry.
|
[
"Yong Cai, Yunlong Wang, Dong Dai",
"['Yong Cai' 'Yunlong Wang' 'Dong Dai']"
] |
cs.LG cs.CR
|
10.2478/popets-2019-0053
|
1701.05681
| null | null |
http://arxiv.org/abs/1701.05681v3
|
2019-07-26T00:43:15Z
|
2017-01-20T04:17:30Z
|
Git Blame Who?: Stylistic Authorship Attribution of Small, Incomplete
Source Code Fragments
|
Program authorship attribution has implications for the privacy of
programmers who wish to contribute code anonymously. While previous work has
shown that complete files that are individually authored can be attributed, we
show here for the first time that accounts belonging to open source
contributors containing short, incomplete, and typically uncompilable fragments
can also be effectively attributed.
We propose a technique for authorship attribution of contributor accounts
containing small source code samples, such as those that can be obtained from
version control systems or other direct comparison of sequential versions. We
show that while application of previous methods to individual small source code
samples yields an accuracy of about 73% for 106 programmers as a baseline, by
ensembling and averaging the classification probabilities of a sufficiently
large set of samples belonging to the same author we achieve 99% accuracy for
assigning the set of samples to the correct author. Through these results, we
demonstrate that attribution is an important threat to privacy for programmers
even in real-world collaborative environments such as GitHub. Additionally, we
propose the use of calibration curves to identify samples by unknown and
previously unencountered authors in the open world setting. We show that we can
also use these calibration curves in the case that we do not have linking
information and thus are forced to classify individual samples directly. This
is because the calibration curves allow us to identify which samples are more
likely to have been correctly attributed. Using such a curve can help an
analyst choose a cut-off point which will prevent most misclassifications, at
the cost of causing the rejection of some of the more dubious correct
attributions.
|
[
"['Edwin Dauber' 'Aylin Caliskan' 'Richard Harang' 'Gregory Shearer'\n 'Michael Weisman' 'Frederica Nelson' 'Rachel Greenstadt']",
"Edwin Dauber, Aylin Caliskan, Richard Harang, Gregory Shearer, Michael\n Weisman, Frederica Nelson, Rachel Greenstadt"
] |
stat.AP cs.LG
| null |
1701.05691
| null | null |
http://arxiv.org/pdf/1701.05691v2
|
2017-10-31T17:13:09Z
|
2017-01-20T05:05:20Z
|
Real-time Traffic Accident Risk Prediction based on Frequent Pattern
Tree
|
Traffic accident data are usually noisy, heterogeneous, and contain missing
values. How to select the most important variables to improve real-time
traffic accident risk prediction has become a concern of many recent studies.
This paper proposes a novel variable selection method based on the Frequent
Pattern tree (FP tree) algorithm. First, all the frequent patterns in the
traffic accident dataset are discovered. Then for each frequent pattern, a new
criterion that we propose, called the Relative Object Purity Ratio (ROPR), is
calculated. This ROPR is added to the importance score of the variables that
differentiate one frequent pattern from the others. To test the proposed
method, a dataset was compiled from the traffic accidents records detected by
only one detector on interstate highway I-64 in Virginia in 2005. This dataset
was then linked to other variables such as real-time traffic information and
weather conditions. Both the proposed method based on the FP tree algorithm, as
well as the widely utilized random forest method, were then used to identify
the important variables for the Virginia dataset. The results indicate that
there are some differences between the variables deemed important by the FP
tree and those selected by the random forest method. Following this, two
baseline models (i.e., a k-nearest neighbor (k-NN) method and a Bayesian network)
were developed to predict accident risk based on the variables identified by
both the FP tree method and the random forest method. The results show that the
models based on the variable selection using the FP tree performed better than
those based on the random forest method for several versions of the k-NN and
Bayesian network models. The best results were derived from a Bayesian network
model using variables from the FP tree. That model could predict 61.11% of
accidents accurately while having a false alarm rate of 38.16%.
|
[
"Lei Lin, Qian Wang, Adel W. Sadek",
"['Lei Lin' 'Qian Wang' 'Adel W. Sadek']"
] |
cs.SD cs.LG
| null |
1701.05779
| null | null |
http://arxiv.org/pdf/1701.05779v1
|
2017-01-20T12:48:02Z
|
2017-01-20T12:48:02Z
|
Empirical Study of Drone Sound Detection in Real-Life Environment with
Deep Neural Networks
|
This work aims to investigate the use of deep neural network to detect
commercial hobby drones in real-life environments by analyzing their sound
data. The purpose of this work is to contribute to a system for detecting drones
used for malicious purposes, such as for terrorism. Specifically, we present a
method capable of detecting the presence of commercial hobby drones as a binary
classification problem based on sound event detection. We recorded the sound
produced by a few popular commercial hobby drones, and then augmented this data
with diverse environmental sound data to remedy the scarcity of drone sound
data in diverse environments. We investigated the effectiveness of
state-of-the-art event sound classification methods, i.e., a Gaussian Mixture
Model (GMM), Convolutional Neural Network (CNN), and Recurrent Neural Network
(RNN), for drone sound detection. Our empirical results, which were obtained
with a testing dataset collected on an urban street, confirmed the
effectiveness of these models for operating in a real environment. In summary,
our RNN models showed the best detection performance with an F-Score of 0.8009
with 240 ms of input audio with a short processing time, indicating their
applicability to real-time detection systems.
|
[
"['Sungho Jeon' 'Jong-Woo Shin' 'Young-Jun Lee' 'Woong-Hee Kim'\n 'YoungHyoun Kwon' 'Hae-Yong Yang']",
"Sungho Jeon, Jong-Woo Shin, Young-Jun Lee, Woong-Hee Kim, YoungHyoun\n Kwon, and Hae-Yong Yang"
] |
cs.SI cs.LG physics.soc-ph stat.ML
| null |
1701.05804
| null | null |
http://arxiv.org/pdf/1701.05804v4
|
2018-12-19T17:53:42Z
|
2017-01-20T14:33:45Z
|
Disentangling group and link persistence in Dynamic Stochastic Block
models
|
We study the inference of a model of dynamic networks in which both
communities and links keep memory of previous network states. By considering
maximum likelihood inference from single snapshot observations of the network,
we show that link persistence makes the inference of communities harder,
decreasing the detectability threshold, while community persistence tends to
make it easier. We analytically show that communities inferred from single
network snapshot can share a maximum overlap with the underlying communities of
a specific previous instant in time. This leads to time-lagged inference: the
identification of past communities rather than present ones. Finally we compute
the time lag and propose a corrected algorithm, the Lagged Snapshot Dynamic
(LSD) algorithm, for community detection in dynamic networks. We analytically
and numerically characterize the detectability transitions of this algorithm as
a function of the memory parameters of the model and we make a comparison with
a full dynamic inference.
|
[
"Paolo Barucca, Fabrizio Lillo, Piero Mazzarisi, Daniele Tantari",
"['Paolo Barucca' 'Fabrizio Lillo' 'Piero Mazzarisi' 'Daniele Tantari']"
] |
cs.IT cs.LG math.IT
| null |
1701.05931
| null | null |
http://arxiv.org/pdf/1701.05931v3
|
2017-07-27T19:46:30Z
|
2017-01-20T21:55:03Z
|
Neural Offset Min-Sum Decoding
|
Recently, it was shown that if multiplicative weights are assigned to the
edges of a Tanner graph used in belief propagation decoding, it is possible to
use deep learning techniques to find values for the weights which improve the
error-correction performance of the decoder. Unfortunately, this approach
requires many multiplications, which are generally expensive operations. In
this paper, we suggest a more hardware-friendly approach in which offset
min-sum decoding is augmented with learnable offset parameters. Our method uses
no multiplications and has a parameter count less than half that of the
multiplicative algorithm. This both speeds up training and provides a feasible
path to hardware architectures. After describing our method, we compare the
performance of the two neural decoding algorithms and show that our method
achieves error-correction performance within 0.1 dB of the multiplicative
approach and as much as 1 dB better than traditional belief propagation for the
codes under consideration.
|
[
"['Loren Lugosch' 'Warren J. Gross']",
"Loren Lugosch, Warren J. Gross"
] |
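For reference, the check-node rule being parameterized is the standard offset min-sum update: for each edge, the product of the signs of the other incoming messages times their minimum magnitude minus an offset, floored at zero. In the paper the offsets become learnable parameters; in the NumPy sketch below the offset is just a constant.

```python
# Sketch of the offset min-sum check-node update: for each edge, take the
# product of the signs and the minimum magnitude of the *other* incoming
# messages, subtract an offset beta, and floor at zero. In the paper the
# offsets are learned; here beta is a fixed constant.
import numpy as np

def offset_min_sum_check_node(msgs, beta=0.15):
    msgs = np.asarray(msgs, dtype=float)
    out = np.empty_like(msgs)
    for i in range(len(msgs)):
        others = np.delete(msgs, i)                    # extrinsic messages only
        sign = np.prod(np.sign(others))
        out[i] = sign * max(np.min(np.abs(others)) - beta, 0.0)
    return out

print(offset_min_sum_check_node([1.2, -0.4, 2.5, -3.0]))
```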
math.OC cs.LG stat.ML
| null |
1701.05954
| null | null |
http://arxiv.org/pdf/1701.05954v1
|
2017-01-21T00:11:06Z
|
2017-01-21T00:11:06Z
|
Learning Policies for Markov Decision Processes from Data
|
We consider the problem of learning a policy for a Markov decision process
consistent with data captured on the state-actions pairs followed by the
policy. We assume that the policy belongs to a class of parameterized policies
which are defined using features associated with the state-action pairs. The
features are known a priori, however, only an unknown subset of them could be
relevant. The policy parameters that correspond to an observed target policy
are recovered using $\ell_1$-regularized logistic regression that best fits the
observed state-action samples. We establish bounds on the difference between
the average reward of the estimated and the original policy (regret) in terms
of the generalization error and the ergodic coefficient of the underlying
Markov chain. To that end, we combine sample complexity theory and sensitivity
analysis of the stationary distribution of Markov chains. Our analysis suggests
that to achieve regret within order $O(\sqrt{\epsilon})$, it suffices to use
training sample size on the order of $\Omega(\log n \cdot poly(1/\epsilon))$,
where $n$ is the number of the features. We demonstrate the effectiveness of
our method on a synthetic robot navigation example.
|
[
"['Manjesh K. Hanawal' 'Hao Liu' 'Henghui Zhu' 'Ioannis Ch. Paschalidis']",
"Manjesh K. Hanawal, Hao Liu, Henghui Zhu, Ioannis Ch. Paschalidis"
] |
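The estimation step described above amounts to fitting an $\ell_1$-regularized logistic regression from state(-action) features to the observed actions and reading off a sparse parameter vector. A sketch on synthetic data (the data-generating process, feature map, and regularization strength are invented for illustration):

```python
# Sketch of the policy-recovery step: fit an l1-regularized logistic regression
# mapping state features to observed actions, recovering a sparse parameter
# vector. The synthetic data and chosen regularization are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 20
states = rng.standard_normal((n, d))
theta_true = np.zeros(d)
theta_true[:3] = [2.0, -1.5, 1.0]                       # only 3 relevant features
actions = (rng.random(n) < 1 / (1 + np.exp(-states @ theta_true))).astype(int)

policy = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
policy.fit(states, actions)
print("nonzero coefficients:", np.flatnonzero(policy.coef_[0]))
```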
cs.LG cs.AI cs.SI
| null |
1701.06075
| null | null |
http://arxiv.org/pdf/1701.06075v1
|
2017-01-21T19:47:38Z
|
2017-01-21T19:47:38Z
|
Label Propagation on K-partite Graphs with Heterophily
|
In this paper, for the first time, we study label propagation in
heterogeneous graphs under the heterophily assumption. Homophily label propagation
(i.e., two connected nodes share similar labels) in homogeneous graphs (with the
same types of vertices and relations) has been extensively studied before.
Unfortunately, real-life networks are heterogeneous: they contain different
types of vertices (e.g., users, images, texts) and relations (e.g.,
friendships, co-tagging) and allow for each node to propagate both the same and
opposite copy of labels to its neighbors. We propose a $\mathcal{K}$-partite
label propagation model to handle the mystifying combination of heterogeneous
nodes/relations and heterophily propagation. With this model, we develop a
novel label inference algorithm framework with update rules in near-linear time
complexity. Since real networks change over time, we devise an incremental
approach, which supports fast updates for both new data and evidence (e.g.,
ground truth labels) with guaranteed efficiency. We further provide a utility
function to automatically determine whether an incremental or a re-modeling
approach is favored. Extensive experiments on real datasets have verified the
effectiveness and efficiency of our approach, and its superiority over the
state-of-the-art label propagation methods.
|
[
"['Dingxiong Deng' 'Fan Bai' 'Yiqi Tang' 'Shuigeng Zhou' 'Cyrus Shahabi'\n 'Linhong Zhu']",
"Dingxiong Deng, Fan Bai, Yiqi Tang, Shuigeng Zhou, Cyrus Shahabi,\n Linhong Zhu"
] |
cs.SD cs.AI cs.IR cs.LG eess.AS
|
10.1109/ACCESS.2017.2738558
|
1701.06078
| null | null |
http://arxiv.org/abs/1701.06078v2
|
2017-01-24T16:25:15Z
|
2017-01-21T20:15:08Z
|
Lyrics-to-Audio Alignment by Unsupervised Discovery of Repetitive
Patterns in Vowel Acoustics
|
Most of the previous approaches to lyrics-to-audio alignment used a
pre-developed automatic speech recognition (ASR) system that inherently suffered
from several difficulties in adapting the speech model to individual singers. A
significant aspect missing in previous works is the self-learnability of
repetitive vowel patterns in the singing voice, where the vowel part used is
more consistent than the consonant part. Based on this, our system first learns
a discriminative subspace of vowel sequences, based on weighted symmetric
non-negative matrix factorization (WS-NMF), by taking the self-similarity of a
standard acoustic feature as an input. Then, we make use of canonical time
warping (CTW), derived from a recent computer vision technique, to find an
optimal spatiotemporal transformation between the text and the acoustic
sequences. Experiments with Korean and English data sets showed that deploying
this method after a pre-developed, unsupervised, singing source separation
achieved more promising results than other state-of-the-art unsupervised
approaches and an existing ASR-based system.
|
[
"['Sungkyun Chang' 'Kyogu Lee']",
"Sungkyun Chang, Kyogu Lee"
] |
cs.LG cs.AI cs.CV cs.NE stat.ML
| null |
1701.06106
| null | null |
http://arxiv.org/pdf/1701.06106v2
|
2017-02-19T08:15:55Z
|
2017-01-22T00:35:24Z
|
Neurogenesis-Inspired Dictionary Learning: Online Model Adaption in a
Changing World
|
In this paper, we focus on online representation learning in non-stationary
environments which may require continuous adaptation of model architecture. We
propose a novel online dictionary-learning (sparse-coding) framework which
incorporates the addition and deletion of hidden units (dictionary elements),
and is inspired by the adult neurogenesis phenomenon in the dentate gyrus of
the hippocampus, known to be associated with improved cognitive function and
adaptation to new environments. In the online learning setting, where new input
instances arrive sequentially in batches, the neuronal-birth is implemented by
adding new units with random initial weights (random dictionary elements); the
number of new units is determined by the current performance (representation
error) of the dictionary, with higher error causing an increase in the birth rate.
Neuronal-death is implemented by imposing l1/l2-regularization (group sparsity)
on the dictionary within the block-coordinate descent optimization at each
iteration of our online alternating minimization scheme, which iterates between
the code and dictionary updates. Finally, hidden unit connectivity adaptation
is facilitated by introducing sparsity in dictionary elements. Our empirical
evaluation on several real-life datasets (images and language) as well as on
synthetic data demonstrates that the proposed approach can considerably
outperform the state-of-the-art fixed-size (nonadaptive) online sparse coding of
Mairal et al. (2009) in the presence of nonstationary data. Moreover, we
identify certain properties of the data (e.g., sparse inputs with nearly
non-overlapping supports) and of the model (e.g., dictionary sparsity)
associated with such improvements.
|
[
"['Sahil Garg' 'Irina Rish' 'Guillermo Cecchi' 'Aurelie Lozano']",
"Sahil Garg, Irina Rish, Guillermo Cecchi, Aurelie Lozano"
] |
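The birth/death mechanism described above can be illustrated with a toy online dictionary learner: atoms are added when the reconstruction error of a batch is high and pruned when their codes go unused. The sparse-coding step (a few ISTA iterations), the thresholds, and the synthetic data stream below are illustrative assumptions, not the authors' settings:

```python
import numpy as np

# Toy sketch of online dictionary learning with "neuronal birth/death":
# atoms are added when the batch reconstruction error is high and removed
# when their usage (row norm of the codes) falls below a threshold.
rng = np.random.default_rng(1)
dim, k0 = 20, 5
D = rng.standard_normal((dim, k0))
D /= np.linalg.norm(D, axis=0)

def sparse_code(D, X, lam=0.1, iters=50):
    """ISTA for min_C 0.5||X - D C||_F^2 + lam ||C||_1."""
    L = np.linalg.norm(D, 2) ** 2 + 1e-8          # step size from spectral norm
    C = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(iters):
        G = C - (D.T @ (D @ C - X)) / L
        C = np.sign(G) * np.maximum(np.abs(G) - lam / L, 0.0)
    return C

for t in range(50):                               # stream of mini-batches
    X = rng.standard_normal((dim, 16))            # stand-in for nonstationary data
    C = sparse_code(D, X)
    err = np.linalg.norm(X - D @ C) / np.linalg.norm(X)
    if err > 0.5:                                 # birth: add random atoms
        new = rng.standard_normal((dim, 2))
        D = np.hstack([D, new / np.linalg.norm(new, axis=0)])
        C = sparse_code(D, X)
    D -= 0.01 * (D @ C - X) @ C.T                 # one gradient step on the dictionary
    usage = np.linalg.norm(C, axis=1)
    D = D[:, usage > 1e-3]                        # death: drop unused atoms
    D /= np.linalg.norm(D, axis=0) + 1e-12        # keep atoms unit-norm

print("final dictionary size:", D.shape[1])
```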
cs.LG cs.IT math.IT stat.ML
| null |
1701.06120
| null | null |
http://arxiv.org/pdf/1701.06120v1
|
2017-01-22T04:20:52Z
|
2017-01-22T04:20:52Z
|
Effective and Extensible Feature Extraction Method Using Genetic
Algorithm-Based Frequency-Domain Feature Search for Epileptic EEG
Multi-classification
|
In this paper, a genetic algorithm-based frequency-domain feature search
(GAFDS) method is proposed for the electroencephalogram (EEG) analysis of
epilepsy. In this method, frequency-domain features are first searched and then
combined with nonlinear features. Subsequently, these features are selected and
optimized to classify EEG signals. The extracted features are analyzed
experimentally. The features extracted by GAFDS show remarkable independence,
and they are superior to the nonlinear features in terms of the ratio of
inter-class distance and intra-class distance. Moreover, the proposed feature
search method can additionally search for features of instantaneous frequency
in a signal after Hilbert transformation. The classification results achieved
using these features are reasonable; thus, GAFDS exhibits good extensibility.
Multiple classic classifiers (i.e., $k$-nearest neighbor, linear discriminant
analysis, decision tree, AdaBoost, multilayer perceptron, and Na\"ive Bayes)
achieve good results by using the features generated by the GAFDS method and
the optimized selection. Specifically, the accuracies for the two-classification
and three-classification problems may reach up to 99% and 97%, respectively.
Results of several cross-validation experiments illustrate that GAFDS is
effective in feature extraction for EEG classification. Therefore, the proposed
feature selection and optimization model can improve classification accuracy.
|
[
"['Tingxi Wen' 'Zhongnan Zhang']",
"Tingxi Wen, Zhongnan Zhang"
] |
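A rough sketch of the kind of search GAFDS performs: a small genetic algorithm evolves (low, high) frequency bands, scoring each band by the inter-/intra-class distance ratio of the band-power feature it induces on labeled epochs. The synthetic signals, band encoding, and GA settings below are made-up stand-ins for the paper's pipeline:

```python
import numpy as np

# Toy GA-style search over frequency bands: each individual is a (low, high)
# band; its fitness is the inter-/intra-class distance ratio of the band-power
# feature it induces. Signals and settings are illustrative only.
rng = np.random.default_rng(2)
fs, n = 256, 512                                  # sampling rate, samples per epoch
t = np.arange(n) / fs

def make_epoch(label):
    """Two synthetic classes differing mainly in the 8-12 Hz band."""
    amp = 2.0 if label == 1 else 0.5
    return (amp * np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
            + rng.standard_normal(n))

X = np.array([make_epoch(lbl) for lbl in ([0] * 40 + [1] * 40)])
y = np.array([0] * 40 + [1] * 40)
freqs = np.fft.rfftfreq(n, 1 / fs)
P = np.abs(np.fft.rfft(X, axis=1)) ** 2           # power spectra of all epochs

def fitness(band):
    lo, hi = sorted(band)
    mask = (freqs >= lo) & (freqs <= hi)
    if not mask.any():
        return 0.0
    feat = P[:, mask].mean(axis=1)                # band-power feature per epoch
    inter = abs(feat[y == 0].mean() - feat[y == 1].mean())
    intra = feat[y == 0].std() + feat[y == 1].std() + 1e-12
    return inter / intra

pop = rng.uniform(1, 40, size=(20, 2))            # initial random bands (Hz)
for gen in range(30):
    scores = np.array([fitness(b) for b in pop])
    parents = pop[np.argsort(scores)[-10:]]       # keep the best half
    children = parents[rng.integers(0, 10, 10)] + rng.normal(0, 1.0, (10, 2))
    pop = np.vstack([parents, np.clip(children, 0.5, fs / 2)])

best = pop[np.argmax([fitness(b) for b in pop])]
print("selected band (Hz):", np.sort(best))       # should bracket 10 Hz
```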
cs.CV cs.LG cs.NE
| null |
1701.06123
| null | null |
http://arxiv.org/pdf/1701.06123v2
|
2017-11-27T09:08:19Z
|
2017-01-22T05:35:39Z
|
Optimization on Product Submanifolds of Convolution Kernels
|
Recent advances in optimization methods used for training convolutional
neural networks (CNNs) with kernels, which are normalized according to
particular constraints, have shown remarkable success. This work introduces an
approach for training CNNs using ensembles of joint spaces of kernels
constructed using different constraints. For this purpose, we address a problem
of optimization on ensembles of products of submanifolds (PEMs) of convolution
kernels. To this end, we first propose three strategies to construct ensembles
of PEMs in CNNs. Next, we expound their geometric properties (metric and
curvature properties) in CNNs. We make use of our theoretical results by
developing a geometry-aware SGD algorithm (G-SGD) for optimization on ensembles
of PEMs to train CNNs. Moreover, we analyze convergence properties of G-SGD
considering geometric properties of PEMs. In the experimental analyses, we
employ G-SGD to train CNNs on Cifar-10, Cifar-100 and Imagenet datasets. The
results show that geometric adaptive step size computation methods of G-SGD can
improve training loss and convergence properties of CNNs. Moreover, we observe
that classification performance of baseline CNNs can be boosted using G-SGD on
ensembles of PEMs identified by multiple constraints.
|
[
"['Mete Ozay' 'Takayuki Okatani']",
"Mete Ozay, Takayuki Okatani"
] |
cs.LG cs.SI stat.ML
| null |
1701.06225
| null | null |
http://arxiv.org/pdf/1701.06225v1
|
2017-01-22T22:16:46Z
|
2017-01-22T22:16:46Z
|
Predicting Demographics of High-Resolution Geographies with Geotagged
Tweets
|
In this paper, we consider the problem of predicting demographics of
geographic units given geotagged Tweets that are composed within these units.
Traditional survey methods that offer demographics estimates are usually
limited in terms of geographic resolution, geographic boundaries, and time
intervals. Thus, it would be highly useful to develop computational methods
that can complement traditional survey methods by offering demographics
estimates at finer geographic resolutions, with flexible geographic boundaries
(i.e. not confined to administrative boundaries), and at different time
intervals. While prior work has focused on predicting demographics and health
statistics at relatively coarse geographic resolutions such as the county-level
or state-level, we introduce an approach to predict demographics at finer
geographic resolutions such as the blockgroup-level. For the task of predicting
gender and race/ethnicity counts at the blockgroup-level, an approach adapted
from prior work to our problem achieves an average correlation of 0.389
(gender) and 0.569 (race) on a held-out test dataset. Our approach outperforms
this prior approach with an average correlation of 0.671 (gender) and 0.692
(race).
|
[
"Omar Montasser and Daniel Kifer",
"['Omar Montasser' 'Daniel Kifer']"
] |
cs.CY cs.AI cs.CL cs.LG
| null |
1701.06233
| null | null |
http://arxiv.org/pdf/1701.06233v1
|
2017-01-22T23:03:11Z
|
2017-01-22T23:03:11Z
|
What the Language You Tweet Says About Your Occupation
|
Many aspects of people's lives are proven to be deeply connected to their
jobs. In this paper, we first investigate the distinct characteristics of major
occupation categories based on tweets. From multiple social media platforms, we
gather several types of user information. From users' LinkedIn webpages, we
learn their proficiencies. To overcome the ambiguity of self-reported
information, a soft clustering approach is applied to extract occupations from
crowd-sourced data. Eight job categories are extracted, including Marketing,
Administrator, Start-up, Editor, Software Engineer, Public Relation, Office
Clerk, and Designer. Meanwhile, users' posts on Twitter provide cues for
understanding their linguistic styles, interests, and personalities. Our
results suggest that people of different jobs have unique tendencies in certain
language styles and interests. Our results also clearly reveal distinctive
levels in terms of Big Five Traits for different jobs. Finally, a classifier is
built to predict job types based on the features extracted from tweets. A high
accuracy indicates the strong discriminative power of language features for the
job prediction task.
|
[
"['Tianran Hu' 'Haoyuan Xiao' 'Thuy-vy Thi Nguyen' 'Jiebo Luo']",
"Tianran Hu, Haoyuan Xiao, Thuy-vy Thi Nguyen, Jiebo Luo"
] |
cs.CL cs.AI cs.LG
| null |
1701.06247
| null | null |
http://arxiv.org/pdf/1701.06247v1
|
2017-01-23T01:36:10Z
|
2017-01-23T01:36:10Z
|
A Multichannel Convolutional Neural Network For Cross-language Dialog
State Tracking
|
The fifth Dialog State Tracking Challenge (DSTC5) introduces a new
cross-language dialog state tracking scenario, where the participants are asked
to build their trackers based on the English training corpus, while evaluating
them with the unlabeled Chinese corpus. Although computer-generated
translations for both the English and Chinese corpora are provided in the
dataset, these translations contain errors, and careless use of them can
easily hurt the performance of the built trackers. To address this problem,
we propose a
multichannel Convolutional Neural Networks (CNN) architecture, in which we
treat English and Chinese language as different input channels of one single
CNN model. In the evaluation of DSTC5, we found that such multichannel
architecture can effectively improve the robustness against translation errors.
Additionally, our method for DSTC5 is purely machine learning based and
requires no prior knowledge about the target language. We consider this a
desirable property for building a tracker in the cross-language context, as not
every developer will be familiar with both languages.
|
[
"['Hongjie Shi' 'Takashi Ushio' 'Mitsuru Endo' 'Katsuyoshi Yamagami'\n 'Noriaki Horii']",
"Hongjie Shi, Takashi Ushio, Mitsuru Endo, Katsuyoshi Yamagami, Noriaki\n Horii"
] |
q-bio.QM cs.CL cs.LG stat.ML
| null |
1701.06279
| null | null |
http://arxiv.org/pdf/1701.06279v1
|
2017-01-23T07:21:43Z
|
2017-01-23T07:21:43Z
|
dna2vec: Consistent vector representations of variable-length k-mers
|
One of the ubiquitous representations of a long DNA sequence is dividing it
into shorter k-mer components. Unfortunately, the straightforward encoding of a
k-mer as a one-hot vector is vulnerable to the curse of dimensionality. Worse
yet, any two distinct one-hot vectors are equidistant from each other. This is
particularly problematic when applying the latest machine learning algorithms
to solve problems in biological sequence analysis. In this paper, we propose a
novel method to train distributed representations of variable-length k-mers.
Our method is based on the popular word embedding model word2vec, which is
trained on a shallow two-layer neural network. Our experiments provide evidence
that summing dna2vec vectors is akin to nucleotide concatenation. We also
demonstrate that there is a correlation between the Needleman-Wunsch similarity
score and the cosine similarity of dna2vec vectors.
|
[
"['Patrick Ng']",
"Patrick Ng"
] |
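The core recipe described above is to fragment long sequences into variable-length k-mers and train an ordinary skip-gram model on the resulting "sentences". A minimal sketch, assuming gensim >= 4.0 for the word2vec training and using random sequences in place of real genomes:

```python
import numpy as np
from gensim.models import Word2Vec          # assumes gensim >= 4.0 API

# Sketch of the dna2vec idea: fragment long DNA sequences into consecutive
# k-mers of random length (k in 3..8) and feed the resulting token lists to a
# skip-gram word2vec model. Sequences, sizes and training settings are made-up.
rng = np.random.default_rng(3)
alphabet = np.array(list("ACGT"))
sequences = ["".join(rng.choice(alphabet, size=2000)) for _ in range(20)]

def fragment(seq, k_low=3, k_high=8):
    """Split one sequence into consecutive k-mers of random length."""
    kmers, i = [], 0
    while i + k_high <= len(seq):
        k = rng.integers(k_low, k_high + 1)
        kmers.append(seq[i:i + k])
        i += k
    return kmers

corpus = [fragment(s) for s in sequences]
model = Word2Vec(sentences=corpus, vector_size=32, window=5,
                 min_count=1, sg=1, epochs=10)

probe = corpus[0][0]                         # a k-mer guaranteed to be in the vocabulary
print(probe, "->", model.wv.most_similar(probe, topn=3))
```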
stat.ML cs.LG
|
10.1109/CCE.2016.7562650
|
1701.06421
| null | null |
http://arxiv.org/abs/1701.06421v1
|
2017-01-23T14:45:20Z
|
2017-01-23T14:45:20Z
|
Comparative study on supervised learning methods for identifying
phytoplankton species
|
Phytoplankton plays an important role in the marine ecosystem and is used as a
biological factor to assess marine quality. The identification of phytoplankton
species has a high potential for monitoring environmental and climate changes
and for evaluating water quality. However, phytoplankton species identification
is not an easy task owing to the variability and ambiguity of thousands of
micro- and pico-plankton species. Therefore, the aim of this paper is to build
a framework for identifying phytoplankton species and to compare different
feature types and classifiers. We propose a new feature type extracted from the
raw signals of phytoplankton species. We then analyze the performance of
various classifiers on the proposed feature type as well as two other feature
types to find the most robust one. Through experiments, it is found that Random
Forest using the proposed features gives the best classification results, with
an average accuracy of up to 98.24%.
|
[
"['Thi-Thu-Hong Phan' 'Emilie Poisson Caillault' 'André Bigand']",
"Thi-Thu-Hong Phan (LISIC), Emilie Poisson Caillault (LISIC), Andr\\'e\n Bigand (LISIC)"
] |
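The comparison protocol amounts to cross-validating several off-the-shelf classifiers on a common feature matrix. A minimal scikit-learn sketch with placeholder features and labels standing in for the signal-derived features described above:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Sketch of the comparison: several classifiers are scored by 5-fold
# cross-validation on the same feature matrix. The features and labels here
# are random placeholders; in the paper they come from raw phytoplankton signals.
rng = np.random.default_rng(4)
X = rng.standard_normal((300, 12))            # placeholder feature vectors
y = rng.integers(0, 3, 300)                   # placeholder species labels

for name, clf in [("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM (RBF)", SVC(kernel="rbf", C=1.0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```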
stat.ML cs.CV cs.LG
| null |
1701.06452
| null | null |
http://arxiv.org/pdf/1701.06452v1
|
2017-01-23T15:29:47Z
|
2017-01-23T15:29:47Z
|
Learning what to look in chest X-rays with a recurrent visual attention
model
|
X-rays are commonly performed imaging tests that use small amounts of
radiation to produce pictures of the organs, tissues, and bones of the body.
X-rays of the chest are used to detect abnormalities or diseases of the
airways, blood vessels, bones, heart, and lungs. In this work we present a
stochastic attention-based model that is capable of learning what regions
within a chest X-ray scan should be visually explored in order to conclude that
the scan contains a specific radiological abnormality. The proposed model is a
recurrent neural network (RNN) that learns to sequentially sample the entire
X-ray and focus only on informative areas that are likely to contain the
relevant information. We report on experiments carried out with more than
$100,000$ X-rays containing enlarged hearts or medical devices. The model has
been trained using reinforcement learning methods to learn task-specific
policies.
|
[
"['Petros-Pavlos Ypsilantis' 'Giovanni Montana']",
"Petros-Pavlos Ypsilantis and Giovanni Montana"
] |
stat.ML cs.LG
| null |
1701.06511
| null | null |
http://arxiv.org/pdf/1701.06511v3
|
2017-09-14T09:34:40Z
|
2017-01-23T17:14:02Z
|
Aggressive Sampling for Multi-class to Binary Reduction with
Applications to Text Classification
|
We address the problem of multi-class classification in the case where the
number of classes is very large. We propose a double sampling strategy on top
of a multi-class to binary reduction strategy, which transforms the original
multi-class problem into a binary classification problem over pairs of
examples. The aim of the sampling strategy is to overcome the curse of
long-tailed class distributions exhibited in the majority of large-scale
multi-class classification problems and to reduce the number of pairs of
examples in the expanded data. We show that this strategy does not alter the
consistency of the empirical risk minimization principle defined over the
double sample reduction. Experiments are carried out on DMOZ and Wikipedia
collections with 10,000 to 100,000 classes where we show the efficiency of the
proposed approach in terms of training and prediction time, memory consumption,
and predictive performance with respect to state-of-the-art approaches.
|
[
"['Bikash Joshi' 'Massih-Reza Amini' 'Ioannis Partalas' 'Franck Iutzeler'\n 'Yury Maximov']",
"Bikash Joshi, Massih-Reza Amini, Ioannis Partalas, Franck Iutzeler,\n Yury Maximov"
] |
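The reduction described above turns a many-class problem into a binary one over pairs of examples, with sampling to keep the expanded data small. The sketch below uses a simplified pair feature (|x_i - x_j|) and a prototype-based prediction rule, which are illustrative simplifications rather than the paper's exact construction:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Multi-class -> binary reduction over sampled pairs: a binary classifier
# learns whether two examples share a class; only a few partners are sampled
# per example so the expanded data stays small. Data are synthetic blobs.
rng = np.random.default_rng(5)
n_classes, per_class = 20, 30
centers = rng.standard_normal((n_classes, 10)) * 4
X = np.vstack([c + rng.standard_normal((per_class, 10)) for c in centers])
y = np.repeat(np.arange(n_classes), per_class)

pairs, labels = [], []
for i in range(len(X)):                       # sample a few partners per example
    same = rng.choice(np.flatnonzero(y == y[i]), size=2)
    diff = rng.choice(np.flatnonzero(y != y[i]), size=2)
    for j in same:
        pairs.append(np.abs(X[i] - X[j])); labels.append(1)
    for j in diff:
        pairs.append(np.abs(X[i] - X[j])); labels.append(0)

clf = LogisticRegression(max_iter=1000).fit(np.array(pairs), np.array(labels))

def predict(x, prototypes_per_class=3):
    """Score each class by the 'same-class' probability against a few prototypes."""
    scores = []
    for c in range(n_classes):
        idx = rng.choice(np.flatnonzero(y == c), size=prototypes_per_class)
        scores.append(clf.predict_proba(np.abs(X[idx] - x))[:, 1].mean())
    return int(np.argmax(scores))

test = centers[7] + 0.1 * rng.standard_normal(10)
print("predicted class:", predict(test), "(true class 7)")
```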
cs.LO cs.AI cs.LG
| null |
1701.06532
| null | null |
http://arxiv.org/pdf/1701.06532v1
|
2017-01-23T18:03:52Z
|
2017-01-23T18:03:52Z
|
ENIGMA: Efficient Learning-based Inference Guiding Machine
|
ENIGMA is a learning-based method for guiding given clause selection in
saturation-based theorem provers. Clauses from many proof searches are
classified as positive and negative based on their participation in the proofs.
An efficient classification model is trained on this data, using fast
feature-based characterization of the clauses. The learned model is then
tightly linked with the core prover and used as a basis of a new parameterized
evaluation heuristic that provides fast ranking of all generated clauses. The
approach is evaluated on the E prover and the CASC 2016 AIM benchmark, showing
a large increase in E's performance.
|
[
"['Jan Jakubův' 'Josef Urban']",
"Jan Jakub\\r{u}v, Josef Urban"
] |
cs.LG cs.CL cs.NE stat.ML
| null |
1701.06538
| null | null |
http://arxiv.org/pdf/1701.06538v1
|
2017-01-23T18:10:00Z
|
2017-01-23T18:10:00Z
|
Outrageously Large Neural Networks: The Sparsely-Gated
Mixture-of-Experts Layer
|
The capacity of a neural network to absorb information is limited by its
number of parameters. Conditional computation, where parts of the network are
active on a per-example basis, has been proposed in theory as a way of
dramatically increasing model capacity without a proportional increase in
computation. In practice, however, there are significant algorithmic and
performance challenges. In this work, we address these challenges and finally
realize the promise of conditional computation, achieving greater than 1000x
improvements in model capacity with only minor losses in computational
efficiency on modern GPU clusters. We introduce a Sparsely-Gated
Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward
sub-networks. A trainable gating network determines a sparse combination of
these experts to use for each example. We apply the MoE to the tasks of
language modeling and machine translation, where model capacity is critical for
absorbing the vast quantities of knowledge available in the training corpora.
We present model architectures in which a MoE with up to 137 billion parameters
is applied convolutionally between stacked LSTM layers. On large language
modeling and machine translation benchmarks, these models achieve significantly
better results than state-of-the-art at lower computational cost.
|
[
"Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc\n Le, Geoffrey Hinton, Jeff Dean",
"['Noam Shazeer' 'Azalia Mirhoseini' 'Krzysztof Maziarz' 'Andy Davis'\n 'Quoc Le' 'Geoffrey Hinton' 'Jeff Dean']"
] |
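A minimal numpy sketch of the gating mechanism described above: the gate scores all experts, only the top-k experts are evaluated per example, and their outputs are combined with renormalized gate weights. The noisy gating and load-balancing losses of the paper are omitted; sizes and initialization are arbitrary:

```python
import numpy as np

# Minimal sparsely-gated mixture-of-experts forward pass: per example, only
# the k experts with the largest gate logits are evaluated and mixed.
rng = np.random.default_rng(6)
d_in, d_out, n_experts, k = 32, 16, 8, 2

W_gate = rng.standard_normal((d_in, n_experts)) * 0.1
experts = [rng.standard_normal((d_in, d_out)) * 0.1 for _ in range(n_experts)]

def moe_forward(x):
    """x: (batch, d_in) -> (batch, d_out), using only the top-k experts per row."""
    logits = x @ W_gate                               # (batch, n_experts)
    topk = np.argsort(logits, axis=1)[:, -k:]         # indices of the k largest gates
    out = np.zeros((x.shape[0], d_out))
    for i in range(x.shape[0]):
        g = logits[i, topk[i]]
        g = np.exp(g - g.max()); g /= g.sum()         # softmax over the selected gates
        for weight, e in zip(g, topk[i]):
            out[i] += weight * (x[i] @ experts[e])    # conditional computation
    return out

x = rng.standard_normal((4, d_in))
print(moe_forward(x).shape)                           # (4, 16)
```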
cs.NE cs.LG
| null |
1701.06548
| null | null |
http://arxiv.org/pdf/1701.06548v1
|
2017-01-23T18:35:28Z
|
2017-01-23T18:35:28Z
|
Regularizing Neural Networks by Penalizing Confident Output
Distributions
|
We systematically explore regularizing neural networks by penalizing low
entropy output distributions. We show that penalizing low entropy output
distributions, which has been shown to improve exploration in reinforcement
learning, acts as a strong regularizer in supervised learning. Furthermore, we
connect a maximum entropy based confidence penalty to label smoothing through
the direction of the KL divergence. We exhaustively evaluate the proposed
confidence penalty and label smoothing on 6 common benchmarks: image
classification (MNIST and Cifar-10), language modeling (Penn Treebank), machine
translation (WMT'14 English-to-German), and speech recognition (TIMIT and WSJ).
We find that both label smoothing and the confidence penalty improve
state-of-the-art models across benchmarks without modifying existing
hyperparameters, suggesting the wide applicability of these regularizers.
|
[
"Gabriel Pereyra, George Tucker, Jan Chorowski, {\\L}ukasz Kaiser,\n Geoffrey Hinton",
"['Gabriel Pereyra' 'George Tucker' 'Jan Chorowski' 'Łukasz Kaiser'\n 'Geoffrey Hinton']"
] |
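The two regularizers compared above are easy to state on a single softmax output: the confidence penalty subtracts beta times the output entropy from the negative log-likelihood, while label smoothing mixes the one-hot target with the uniform distribution. A small sketch with illustrative (not tuned) beta and epsilon values:

```python
import numpy as np

# Confidence penalty vs. label smoothing on one softmax output.
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def confidence_penalty_loss(logits, target, beta=0.1):
    p = softmax(logits)
    nll = -np.log(p[target])
    entropy = -np.sum(p * np.log(p + 1e-12))
    return nll - beta * entropy              # low-entropy (over-confident) outputs are penalized

def label_smoothing_loss(logits, target, eps=0.1):
    p = softmax(logits)
    n = logits.size
    q = np.full(n, eps / n)                  # smoothed target distribution
    q[target] += 1.0 - eps
    return -np.sum(q * np.log(p + 1e-12))    # cross-entropy against the smoothed target

logits = np.array([4.0, 0.5, -1.0, 0.0])
print(confidence_penalty_loss(logits, target=0))
print(label_smoothing_loss(logits, target=0))
```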
cs.LG
| null |
1701.06551
| null | null |
http://arxiv.org/pdf/1701.06551v1
|
2016-11-21T18:13:42Z
|
2016-11-21T18:13:42Z
|
On the Parametric Study of Lubricating Oil Production using an
Artificial Neural Network (ANN) Approach
|
In this study, an Artificial Neural Network (ANN) approach is utilized to
perform a parametric study on the process of extraction of lubricants from
heavy petroleum cuts. To train the model, we used field data collected from an
industrial plant. Operational conditions of feed and solvent flow rate,
temperature of streams, and mixing rate were considered as the inputs to the
model, whereas the flow rate of the main product was considered as the output
of the ANN model. A feed-forward Multi-Layer Perceptron Neural Network was
successfully applied to capture the relationship between the input and output
parameters.
|
[
"Masood Tehrani and Mary Ahmadi",
"['Masood Tehrani' 'Mary Ahmadi']"
] |
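The setup reduces to fitting a feed-forward MLP from four operating conditions to the product flow rate. Since the plant data are not available here, the sketch below uses a made-up response surface and scikit-learn's MLPRegressor purely to illustrate the workflow:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Feed-forward MLP mapping operating conditions (feed/solvent flow rates,
# stream temperature, mixing rate) to the product flow rate. The synthetic
# relationship below stands in for the confidential plant data.
rng = np.random.default_rng(7)
X = rng.uniform(0, 1, size=(400, 4))            # [feed, solvent, temperature, mixing]
y = (2.0 * X[:, 0] + 1.5 * X[:, 1] - 0.5 * X[:, 2] * X[:, 3]
     + 0.05 * rng.standard_normal(400))         # made-up plant response

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
model.fit(X[:300], y[:300])
print("held-out R^2:", model.score(X[300:], y[300:]))
```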
cs.IT cs.LG math.IT stat.ME
| null |
1701.06605
| null | null |
http://arxiv.org/pdf/1701.06605v1
|
2017-01-23T19:48:11Z
|
2017-01-23T19:48:11Z
|
Identifying Nonlinear 1-Step Causal Influences in Presence of Latent
Variables
|
We propose an approach for learning the causal structure in stochastic
dynamical systems with a $1$-step functional dependency in the presence of
latent variables. We propose an information-theoretic approach that allows us
to recover the causal relations among the observed variables as long as the
latent variables evolve without exogenous noise. We further propose an
efficient learning method based on linear regression for the special sub-case
when the dynamics are restricted to be linear. We validate the performance of
our approach via numerical simulations.
|
[
"['Saber Salehkaleybar' 'Jalal Etesami' 'Negar Kiyavash']",
"Saber Salehkaleybar and Jalal Etesami and Negar Kiyavash"
] |
q-fin.GN cs.LG
| null |
1701.06624
| null | null |
http://arxiv.org/pdf/1701.06624v1
|
2016-11-21T20:41:12Z
|
2016-11-21T20:41:12Z
|
Revenue Forecasting for Enterprise Products
|
For any business, planning is a continuous process, and typically
business-owners focus on making both long-term planning aligned with a
particular strategy as well as short-term planning that accommodates the
dynamic market situations. An ability to perform an accurate financial forecast
is crucial for effective planning. In this paper, we focus on providing an
intelligent and efficient solution that will help in forecasting revenue using
machine learning algorithms. We experiment with three different revenue
forecasting models, and here we provide detailed insights into the methodology
and their relative performance measured on real finance data. As a real-world
application of our models, we partner with Microsoft's Finance organization
(the department that reports Microsoft's finances) to provide them with
guidance on the projected revenue for upcoming quarters.
|
[
"['Amita Gajewar' 'Gagan Bansal']",
"Amita Gajewar, Gagan Bansal"
] |
cs.SY cs.LG math.OC
| null |
1701.06652
| null | null |
http://arxiv.org/pdf/1701.06652v1
|
2017-01-23T22:13:59Z
|
2017-01-23T22:13:59Z
|
Convex Parameterizations and Fidelity Bounds for Nonlinear
Identification and Reduced-Order Modelling
|
Model instability and poor prediction of long-term behavior are common
problems when modeling dynamical systems using nonlinear "black-box"
techniques. Direct optimization of the long-term predictions, often called
simulation error minimization, leads to optimization problems that are
generally non-convex in the model parameters and suffer from multiple local
minima. In this work we present methods which address these problems through
convex optimization, based on Lagrangian relaxation, dissipation inequalities,
contraction theory, and semidefinite programming. We demonstrate the proposed
methods with a model order reduction task for electronic circuit design and the
identification of a pneumatic actuator from experiment.
|
[
"Mark M. Tobenkin and Ian R. Manchester and Alexandre Megretski",
"['Mark M. Tobenkin' 'Ian R. Manchester' 'Alexandre Megretski']"
] |
cs.LG stat.ML
| null |
1701.06655
| null | null |
http://arxiv.org/pdf/1701.06655v4
|
2018-07-07T16:55:04Z
|
2017-01-23T22:20:47Z
|
Patchwork Kriging for Large-scale Gaussian Process Regression
|
This paper presents a new approach for Gaussian process (GP) regression for
large datasets. The approach involves partitioning the regression input domain
into multiple local regions with a different local GP model fitted in each
region. Unlike existing local partitioned GP approaches, we introduce a
technique for patching together the local GP models nearly seamlessly to ensure
that the local GP models for two neighboring regions produce nearly the same
response prediction and prediction error variance on the boundary between the
two regions. This largely mitigates the well-known discontinuity problem that
degrades the boundary accuracy of existing local partitioned GP methods. Our
main innovation is to represent the continuity conditions as additional
pseudo-observations that the differences between neighboring GP responses are
identically zero at an appropriately chosen set of boundary input locations. To
predict the response at any input location, we simply augment the actual
response observations with the pseudo-observations and apply standard GP
prediction methods to the augmented data. In contrast to heuristic continuity
adjustments, this has an advantage of working within a formal GP framework, so
that the GP-based predictive uncertainty quantification remains valid. Our
approach also inherits a sparse block-like structure for the sample covariance
matrix, which results in computationally efficient closed-form expressions for
the predictive mean and variance. In addition, we provide a new spatial
partitioning scheme based on a recursive space partitioning along local
principal component directions, which makes the proposed approach applicable
for regression domains having more than two dimensions. Using three spatial
datasets and three higher dimensional datasets, we investigate the numerical
performance of the approach and compare it to several state-of-the-art
approaches.
|
[
"['Chiwoo Park' 'Daniel Apley']",
"Chiwoo Park and Daniel Apley"
] |
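The central device above is to treat boundary continuity as pseudo-observations stating that the difference of two neighboring local GPs is zero at chosen boundary inputs, and then to apply standard GP conditioning to the augmented data. A minimal two-region sketch with an assumed RBF kernel, noise level, and synthetic 1-D data (the recursive spatial partitioning of the paper is not shown):

```python
import numpy as np

def rbf(a, b, ls=0.15, amp=1.0):
    d = a[:, None] - b[None, :]
    return amp * np.exp(-0.5 * (d / ls) ** 2)

# Two independent local GPs f1 on [0, 0.5] and f2 on [0.5, 1], patched together
# by the pseudo-observation f1(B) - f2(B) = 0 at the boundary input B.
rng = np.random.default_rng(8)
X1 = rng.uniform(0.0, 0.5, 25); X2 = rng.uniform(0.5, 1.0, 25)
f = lambda x: np.sin(2 * np.pi * x)
noise = 0.05
y1 = f(X1) + noise * rng.standard_normal(X1.size)
y2 = f(X2) + noise * rng.standard_normal(X2.size)
B = np.array([0.5])                            # boundary input location(s)

K11 = rbf(X1, X1) + noise**2 * np.eye(X1.size)
K22 = rbf(X2, X2) + noise**2 * np.eye(X2.size)
K1B = rbf(X1, B); K2B = rbf(X2, B)
KBB = 2 * rbf(B, B) + 1e-8 * np.eye(B.size)    # Var(f1(B) - f2(B)) under independent priors

# Joint covariance of z = [y1, y2, d], where d = f1(B) - f2(B) is "observed" as 0.
Kz = np.block([
    [K11,                          np.zeros((X1.size, X2.size)),  K1B],
    [np.zeros((X2.size, X1.size)), K22,                          -K2B],
    [K1B.T,                       -K2B.T,                         KBB],
])
z = np.concatenate([y1, y2, np.zeros(B.size)])

# Predict with the region-1 model at test points in [0, 0.5].
Xs = np.linspace(0.0, 0.5, 5)
ks = np.hstack([rbf(Xs, X1), np.zeros((Xs.size, X2.size)), rbf(Xs, B)])
alpha = np.linalg.solve(Kz, z)
mean = ks @ alpha
var = rbf(Xs, Xs).diagonal() - np.einsum('ij,ji->i', ks, np.linalg.solve(Kz, ks.T))
print(np.c_[Xs, mean, var])                    # test input, predictive mean, variance
```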
cs.LG
| null |
1701.06725
| null | null |
http://arxiv.org/pdf/1701.06725v1
|
2017-01-24T04:12:25Z
|
2017-01-24T04:12:25Z
|
A Contextual Bandit Approach for Stream-Based Active Learning
|
Contextual bandit algorithms -- a class of multi-armed bandit algorithms that
exploit the contextual information -- have been shown to be effective in
solving sequential decision making problems under uncertainty. A common
assumption adopted in the literature is that the realized (ground truth) reward
by taking the selected action is observed by the learner at no cost, which,
however, is not realistic in many practical scenarios. When observing the
ground truth reward is costly, a key challenge for the learner is how to
judiciously acquire the ground truth by assessing the benefits and costs in
order to balance learning efficiency and learning cost. From the information
theoretic perspective, a perhaps even more interesting question is how much
efficiency might be lost due to this cost. In this paper, we design a novel
contextual bandit-based learning algorithm and endow it with the active
learning capability. The key feature of our algorithm is that in addition to
sending a query to an annotator for the ground truth, prior information about
the ground truth learned by the learner is sent together, thereby reducing the
query cost. We prove that by carefully choosing the algorithm parameters, the
learning regret of the proposed algorithm achieves the same order as that of
conventional contextual bandit algorithms in cost-free scenarios, implying
that, surprisingly, the cost of acquiring the ground truth does not increase
the learning regret in the long run. Our analysis shows that prior information
about the ground truth plays a critical role in improving the system
performance in scenarios where active learning is necessary.
|
[
"['Linqi Song' 'Jie Xu']",
"Linqi Song and Jie Xu"
] |
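A rough sketch of bandit learning with costly labels: arms are scored with LinUCB-style upper confidence bounds, and the learner queries the ground-truth reward (and updates) only when the confidence width of the chosen arm exceeds a threshold. The environment, threshold, and bookkeeping are illustrative choices and omit the paper's mechanism of sending prior information along with each query:

```python
import numpy as np

# Contextual bandit with selective label acquisition: act greedily on a
# LinUCB-style score, but pay for the ground-truth reward only when uncertain.
rng = np.random.default_rng(9)
d, n_arms, alpha, width_threshold = 5, 4, 1.0, 0.3
theta_true = rng.standard_normal((n_arms, d))           # unknown reward parameters

A = [np.eye(d) for _ in range(n_arms)]                   # per-arm design matrices
b = [np.zeros(d) for _ in range(n_arms)]
queries, reward_sum = 0, 0.0

for t in range(2000):
    x = rng.standard_normal(d) / np.sqrt(d)              # context
    ucb, widths = [], []
    for a in range(n_arms):
        A_inv = np.linalg.inv(A[a])
        mu = A_inv @ b[a]
        w = alpha * np.sqrt(x @ A_inv @ x)               # confidence width
        ucb.append(x @ mu + w); widths.append(w)
    arm = int(np.argmax(ucb))
    reward = theta_true[arm] @ x + 0.1 * rng.standard_normal()
    reward_sum += reward
    if widths[arm] > width_threshold:                    # query the annotator only when uncertain
        queries += 1
        A[arm] += np.outer(x, x)
        b[arm] += reward * x

print(f"average reward {reward_sum / 2000:.3f} using {queries} label queries out of 2000 rounds")
```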