title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
A Semi-Definite Programming approach to low dimensional embedding for
unsupervised clustering | stat.ML cs.LG | This paper proposes a variant of the method of Gu\'edon and Vershynin for
estimating the cluster matrix in the Mixture of Gaussians framework via
Semi-Definite Programming. A clustering oriented embedding is deduced from this
estimate. The procedure is suitable for very high dimensional data because it
is based on pairwise distances only. Theoretical guarantees are provided and an
eigenvalue optimisation approach is proposed for computing the embedding. The
performance of the method is illustrated via Monte Carlo experiments and
comparisons with other embeddings from the literature.
| St\'ephane Chr\'etien, Cl\'ement Dombry and Adrien Faivre | null | 1606.09190 | null | null |
Model-Free Trajectory-based Policy Optimization with Monotonic
Improvement | cs.LG cs.RO | Many of the recent trajectory optimization algorithms alternate between
linear approximation of the system dynamics around the mean trajectory and
conservative policy update. One way of constraining the policy change is by
bounding the Kullback-Leibler (KL) divergence between successive policies.
These approaches already demonstrated great experimental success in challenging
problems such as end-to-end control of physical systems. However, the linear
approximation of the system dynamics can introduce a bias in the policy update
and prevent convergence to the optimal policy. In this article, we propose a
new model-free trajectory-based policy optimization algorithm with guaranteed
monotonic improvement. The algorithm backpropagates a local, quadratic and
time-dependent Q-function learned from trajectory data instead of a model of the
system dynamics. Our policy update ensures exact KL-constraint satisfaction
without simplifying assumptions on the system dynamics. We experimentally
demonstrate on highly non-linear control tasks the improvement in performance
of our algorithm in comparison to approaches linearizing the system dynamics.
In order to show the monotonic improvement of our algorithm, we additionally
conduct a theoretical analysis of our policy update scheme to derive a lower
bound of the change in policy return between successive iterations.
| Riad Akrour, Abbas Abdolmaleki, Hany Abdulsamad, Jan Peters and
Gerhard Neumann | null | 1606.09197 | null | null |
Tighter bounds lead to improved classifiers | cs.LG stat.ML | The standard approach to supervised classification involves the minimization
of a log-loss as an upper bound to the classification error. While this is a
tight bound early on in the optimization, it overemphasizes the influence of
incorrectly classified examples far from the decision boundary. Updating the
upper bound during the optimization leads to improved classification rates
while transforming the learning into a sequence of minimization problems. In
addition, in the context where the classifier is part of a larger system, this
modification makes it possible to link the performance of the classifier to
that of the whole system, allowing the seamless introduction of external
constraints.
| Nicolas Le Roux | null | 1606.09202 | null | null |
Learning Concept Taxonomies from Multi-modal Data | cs.CL cs.CV cs.LG | We study the problem of automatically building hypernym taxonomies from
textual and visual data. Previous works in taxonomy induction generally ignore
the increasingly prominent visual data, which encode important perceptual
semantics. Instead, we propose a probabilistic model for taxonomy induction by
jointly leveraging text and images. To avoid hand-crafted feature engineering,
we design end-to-end features based on distributed representations of images
and words. The model is discriminatively trained given a small set of existing
ontologies and is capable of building full taxonomies from scratch for a
collection of unseen conceptual label items with associated images. We evaluate
our model and features on the WordNet hierarchies, where our system outperforms
previous approaches by a large gap.
| Hao Zhang, Zhiting Hu, Yuntian Deng, Mrinmaya Sachan, Zhicheng Yan,
Eric P. Xing | null | 1606.09239 | null | null |
Learning without Forgetting | cs.CV cs.LG stat.ML | When building a unified vision system or gradually adding new capabilities to
a system, the usual assumption is that training data for all tasks is always
available. However, as the number of tasks grows, storing and retraining on
such data becomes infeasible. A new problem arises where we add new
capabilities to a Convolutional Neural Network (CNN), but the training data for
its existing capabilities are unavailable. We propose our Learning without
Forgetting method, which uses only new task data to train the network while
preserving the original capabilities. Our method performs favorably compared to
commonly used feature extraction and fine-tuning adaptation techniques and
performs similarly to multitask learning that uses original task data we assume
unavailable. A more surprising observation is that Learning without Forgetting
may be able to replace fine-tuning with similar old and new task datasets for
improved new task performance.
| Zhizhong Li, Derek Hoiem | null | 1606.09282 | null | null |
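A minimal sketch of a Learning-without-Forgetting-style objective may clarify the abstract above. Assumptions: a shared backbone with one head per task, the "old" network's responses on new-task images recorded as soft targets before training, and illustrative sizes, temperature, and loss weight (LwF's warm-up of the new head is omitted); this is not the authors' exact implementation.

```python
# Hedged sketch of an LwF-style objective: distill the old head's recorded
# responses while training on the new task, using only new-task data.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
backbone = torch.nn.Linear(32, 16)          # stands in for shared CNN layers
old_head = torch.nn.Linear(16, 5)           # head for the original task(s)
new_head = torch.nn.Linear(16, 3)           # head for the newly added task

x_new = torch.randn(64, 32)                 # toy stand-in for new-task images
y_new = torch.randint(0, 3, (64,))          # new-task labels
with torch.no_grad():
    old_logits = old_head(backbone(x_new))  # soft targets for the old task

T = 2.0                                     # distillation temperature (illustrative)
opt = torch.optim.SGD(list(backbone.parameters()) + list(old_head.parameters())
                      + list(new_head.parameters()), lr=0.01)

for step in range(100):
    feats = backbone(x_new)
    loss_new = F.cross_entropy(new_head(feats), y_new)
    # Distillation keeps old-task responses close to the recorded ones,
    # preserving old capabilities without access to old data.
    loss_old = F.kl_div(F.log_softmax(old_head(feats) / T, dim=1),
                        F.softmax(old_logits / T, dim=1),
                        reduction="batchmean") * T * T
    loss = loss_new + 1.0 * loss_old
    opt.zero_grad(); loss.backward(); opt.step()
```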
Dimension-Free Iteration Complexity of Finite Sum Optimization Problems | math.OC cs.LG math.NA | Many canonical machine learning problems boil down to a convex optimization
problem with a finite sum structure. However, whereas much progress has been
made in developing faster algorithms for this setting, the inherent limitations
of these problems are not satisfactorily addressed by existing lower bounds.
Indeed, current bounds focus on first-order optimization algorithms, and only
apply in the often unrealistic regime where the number of iterations is less
than $\mathcal{O}(d/n)$ (where $d$ is the dimension and $n$ is the number of
samples). In this work, we extend the framework of (Arjevani et al., 2015) to
provide new lower bounds, which are dimension-free, and go beyond the
assumptions of current bounds, thereby covering standard finite sum
optimization methods, e.g., SAG, SAGA, SVRG, SDCA without duality, as well as
stochastic coordinate-descent methods, such as SDCA and accelerated proximal
SDCA.
| Yossi Arjevani and Ohad Shamir | null | 1606.09333 | null | null |
Convolutional Neural Networks on Graphs with Fast Localized Spectral
Filtering | cs.LG stat.ML | In this work, we are interested in generalizing convolutional neural networks
(CNNs) from low-dimensional regular grids, where image, video and speech are
represented, to high-dimensional irregular domains, such as social networks,
brain connectomes or word embeddings, represented by graphs. We present a
formulation of CNNs in the context of spectral graph theory, which provides the
necessary mathematical background and efficient numerical schemes to design
fast localized convolutional filters on graphs. Importantly, the proposed
technique offers the same linear computational complexity and constant learning
complexity as classical CNNs, while being universal to any graph structure.
Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep
learning system to learn local, stationary, and compositional features on
graphs.
| Micha\"el Defferrard, Xavier Bresson, Pierre Vandergheynst | null | 1606.09375 | null | null |
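The fast localized filtering described above can be sketched compactly: a K-order Chebyshev polynomial of the rescaled graph Laplacian is applied to a signal via the three-term recurrence, costing K sparse multiplies and no eigendecomposition. The toy graph and the filter weights `theta` below are illustrative, not from the paper.

```python
# Hedged sketch of a Chebyshev spectral graph filter y = sum_k theta_k T_k(L~) x.
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # adjacency of a toy graph
d = A.sum(axis=1)
L = np.diag(d) - A                          # combinatorial Laplacian
lmax = np.linalg.eigvalsh(L).max()          # in practice an estimate suffices
L_tilde = 2.0 * L / lmax - np.eye(len(A))   # rescale spectrum into [-1, 1]

def cheb_filter(x, theta):
    """y = sum_k theta[k] * T_k(L_tilde) x via the Chebyshev recurrence."""
    t_prev, t_curr = x, L_tilde @ x          # T_0 x and T_1 x
    y = theta[0] * t_prev
    if len(theta) > 1:
        y = y + theta[1] * t_curr
    for k in range(2, len(theta)):
        t_next = 2.0 * (L_tilde @ t_curr) - t_prev  # T_k = 2 L~ T_{k-1} - T_{k-2}
        y = y + theta[k] * t_next
        t_prev, t_curr = t_curr, t_next
    return y

x = np.array([1.0, 0.0, 0.0, 0.0])            # delta signal on node 0
print(cheb_filter(x, theta=[0.5, 0.3, 0.2]))  # K=3 filter is 2-hop localized
```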
On Approximate Dynamic Programming with Multivariate Splines for
Adaptive Control | cs.LG cs.SY | We define an SDP framework based on the RLSTD algorithm and multivariate
simplex B-splines. We introduce a local forget factor capable of preserving the
continuity of the simplex splines. This local forget factor is integrated with
the RLSTD algorithm, resulting in a modified RLSTD algorithm that is capable of
tracking time-varying systems. We present the results of two numerical
experiments, one validating SDP and comparing it with NDP and another to show
the advantages of the modified RLSTD algorithm over the original. While SDP
requires more computations per time-step, the experiment shows that for the
same amount of function approximator parameters, there is an increase in
performance in terms of stability and learning rate compared to NDP. The second
experiment shows that SDP in combination with the modified RLSTD algorithm
allows for faster recovery compared to the original RLSTD algorithm when system
parameters are altered, paving the way for an adaptive high-performance
non-linear control method.
| Willem Eerland, Coen de Visser, Erik-Jan van Kampen | null | 1606.09383 | null | null |
Asymptotically Optimal Algorithms for Budgeted Multiple Play Bandits | stat.ML cs.LG | We study a generalization of the multi-armed bandit problem with multiple
plays where there is a cost associated with pulling each arm and the agent has
a budget at each time that dictates how much she can expect to spend. We derive
an asymptotic regret lower bound for any uniformly efficient algorithm in our
setting. We then study a variant of Thompson sampling for Bernoulli rewards and
a variant of KL-UCB for both single-parameter exponential families and bounded,
finitely supported rewards. We show these algorithms are asymptotically
optimal, both in rate and in leading problem-dependent constants, including in the
thick margin setting where multiple arms fall on the decision boundary.
| Alexander Luedtke, Emilie Kaufmann (CRIStAL), Antoine Chambaz (MAP5 -
UMR 8145) | null | 1606.09388 | null | null |
Vote-boosting ensembles | cs.LG stat.ML | Vote-boosting is a sequential ensemble learning method in which the
individual classifiers are built on different weighted versions of the training
data. To build a new classifier, the weight of each training instance is
determined in terms of the degree of disagreement among the current ensemble
predictions for that instance. For low class-label noise levels, especially
when simple base learners are used, emphasis should be placed on instances for
which the disagreement rate is high. When more flexible classifiers are used
and as the noise level increases, the emphasis on these uncertain instances
should be reduced. In fact, at sufficiently high levels of class-label noise,
the focus should be on instances on which the ensemble classifiers agree. The
optimal type of emphasis can be automatically determined using
cross-validation. An extensive empirical analysis using the beta distribution
as emphasis function illustrates that vote-boosting is an effective method to
generate ensembles that are both accurate and robust.
| Maryam Sabzevari, Gonzalo Mart\'inez-Mu\~noz, Alberto Su\'arez | null | 1606.09458 | null | null |
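A toy sketch of the weighting step described above: each instance's weight is an emphasis function of the disagreement rate among current ensemble votes. The beta-shaped emphasis and its parameters are illustrative; the paper selects the emphasis via cross-validation.

```python
# Hedged sketch of vote-boosting's instance weighting from ensemble disagreement.
import numpy as np

rng = np.random.default_rng(0)
votes = rng.integers(0, 2, size=(11, 200))       # 11 classifiers x 200 instances
p_pos = votes.mean(axis=0)                       # fraction voting for class 1
disagreement = 1.0 - np.abs(2.0 * p_pos - 1.0)   # 0 = unanimous, 1 = evenly split

def emphasis(t, a, b):
    """Unnormalized beta-distribution-shaped emphasis on disagreement t."""
    return t ** (a - 1) * (1 - t) ** (b - 1)

# a > b emphasizes contested instances (low label noise); a < b shifts the
# emphasis toward instances the ensemble agrees on (high label noise).
w = emphasis(np.clip(disagreement, 1e-6, 1 - 1e-6), a=2.0, b=1.5)
w = w / w.sum()                                  # weights for the next learner
```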
A Model Explanation System: Latest Updates and Extensions | stat.ML cs.LG | We propose a general model explanation system (MES) for "explaining" the
output of black box classifiers. This paper describes extensions to Turner
(2015), which is referred to frequently in the text. We use the motivating
example of a classifier trained to detect fraud in a credit card transaction
history. The key aspect is that we provide explanations applicable to a single
prediction, rather than provide an interpretable set of parameters. We focus on
explaining positive predictions (alerts). However, the presented methodology is
symmetrically applicable to negative predictions.
| Ryan Turner | null | 1606.09517 | null | null |
Performance Based Evaluation of Various Machine Learning Classification
Techniques for Chronic Kidney Disease Diagnosis | cs.LG cs.AI cs.CY | Areas where Artificial Intelligence (AI) and related fields find application
are increasing day by day: moving beyond the core areas of computer science,
they are being applied in various other domains. In recent times, Machine
Learning, a sub-domain of AI, has been widely used to assist medical experts
and doctors in the prediction, diagnosis and prognosis of various diseases and
other medical disorders. In this manuscript the authors applied various machine
learning algorithms to a problem in the domain of medical diagnosis and
analyzed their efficiency in predicting the results. The problem selected for
the study is the diagnosis of Chronic Kidney Disease. The dataset used for the
study consists of 400 instances and 24 attributes. The authors evaluated 12
classification techniques by applying them to the Chronic Kidney Disease data.
To calculate efficiency, the predictions of the candidate methods were compared
with the actual medical results of the subjects. The metrics used for
performance evaluation are predictive accuracy, precision, sensitivity and
specificity. The results indicate that the decision tree performed best, with
an accuracy of nearly 98.6%, a sensitivity of 0.9720, and precision and
specificity of 1.
| Sahil Sharma, Vinod Sharma and Atul Sharma | null | 1606.09581 | null | null |
A Permutation-based Model for Crowd Labeling: Optimal Estimation and
Robustness | cs.LG cs.AI cs.IT math.IT stat.ML | The task of aggregating and denoising crowd-labeled data has gained increased
significance with the advent of crowdsourcing platforms and massive datasets.
We propose a permutation-based model for crowd labeled data that is a
significant generalization of the classical Dawid-Skene model, and introduce a
new error metric by which to compare different estimators. We derive global
minimax rates for the permutation-based model that are sharp up to logarithmic
factors, and match the minimax lower bounds derived under the simpler
Dawid-Skene model. We then design two computationally-efficient estimators: the
WAN estimator for the setting where the ordering of workers in terms of their
abilities is approximately known, and the OBI-WAN estimator where that is not
known. For each of these estimators, we provide non-asymptotic bounds on their
performance. We conduct synthetic simulations and experiments on real-world
crowdsourcing data, and the experimental results corroborate our theoretical
findings.
| Nihar B. Shah, Sivaraman Balakrishnan, Martin J. Wainwright | 10.1109/TIT.2020.3045613 | 1606.09632 | null | null |
Review Based Rating Prediction | cs.IR cs.LG | Recommendation systems are important components of today's e-commerce
applications, such as targeted advertising, personalized marketing and
information retrieval. In recent years, the importance of contextual
information has motivated the generation of personalized recommendations based
on the contextual information available about users.
Compared to traditional systems, which mainly utilize users' rating history,
review-based recommendation can provide more relevant results to users. We
introduce a review-based recommendation approach that obtains contextual
information by mining user reviews. The proposed approach relies on features
obtained by analyzing textual reviews, using methods developed in Natural
Language Processing (NLP) and information retrieval, to compute a utility
function over a given item. An item's utility is a measure of how much it is
preferred according to the user's current context.
In our system, context inference is modeled as the similarity between the
user's review history and the item's review history. As an example application,
we used our method to mine contextual data from customers' reviews of movies
and used it to produce review-based rating predictions. The predicted ratings
can generate item-based recommendations that should appear in the recommended
items list on the product page. Our evaluations suggest that our system can
produce better rating predictions than standard prediction methods.
| Tal Hadad | null | 1607.00024 | null | null |
Ballpark Learning: Estimating Labels from Rough Group Comparisons | stat.ML cs.LG | We are interested in estimating individual labels given only coarse,
aggregated signal over the data points. In our setting, we receive sets
("bags") of unlabeled instances with constraints on label proportions. We relax
the unrealistic assumption of known label proportions, made in previous work;
instead, we assume only to have upper and lower bounds, and constraints on bag
differences. We motivate the problem, propose an intuitive formulation and
algorithm, and apply our methods to real-world scenarios. Across several
domains, we show how using only proportion constraints and no labeled examples,
we can achieve surprisingly high accuracy. In particular, we demonstrate how to
predict income level using rough stereotypes and how to perform sentiment
analysis using very little information. We also apply our method to guide
exploratory analysis, recovering geographical differences in Twitter dialect.
| Tom Hope and Dafna Shahaf | null | 1607.00034 | null | null |
Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | cs.LG cs.NE | We extend the neural Turing machine (NTM) model into a dynamic neural Turing
machine (D-NTM) by introducing a trainable memory addressing scheme. This
addressing scheme maintains for each memory cell two separate vectors, content
and address vectors. This allows the D-NTM to learn a wide variety of
location-based addressing strategies including both linear and nonlinear ones.
We implement the D-NTM with both continuous, differentiable and discrete,
non-differentiable read/write mechanisms. We investigate the mechanisms and
effects of learning to read and write into a memory through experiments on
Facebook bAbI tasks using both a feedforward and a GRU controller. The D-NTM is
evaluated on a set of Facebook bAbI tasks and shown to outperform NTM and LSTM
baselines. We have carried out extensive analysis of our model and of different
variations of the NTM on the bAbI tasks. We also provide further experimental results on
sequential pMNIST, Stanford Natural Language Inference, associative recall and
copy tasks.
| Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, Yoshua Bengio | null | 1607.00036 | null | null |
Unsupervised Learning with Imbalanced Data via Structure Consolidation
Latent Variable Model | cs.LG stat.ML | Unsupervised learning on imbalanced data is challenging because, when given
imbalanced data, current models are often dominated by the majority category and
ignore the categories with small amounts of data. We develop a latent variable
model that can cope with imbalanced data by dividing the latent space into a
shared space and a private space. Based on Gaussian Process Latent Variable
Models, we propose a new kernel formulation that enables the separation of the
latent space, and we derive an efficient variational inference method. The
performance of our model is demonstrated with an imbalanced medical image
dataset.
| Fariba Yousefi, Zhenwen Dai, Carl Henrik Ek, Neil Lawrence | null | 1607.00067 | null | null |
Multi-class classification: mirror descent approach | math.OC cs.LG stat.ML | We consider the problem of multi-class classification and a stochastic
optimization approach to it. We derive risk bounds for the stochastic mirror descent
algorithm and provide examples of set geometries that make the use of the
algorithm efficient in terms of error in k.
| Daria Reshetova | null | 1607.00076 | null | null |
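A brief sketch of one set geometry such bounds cover: stochastic mirror descent with the entropic mirror map on the probability simplex, i.e., the multiplicative-weights update. The linear loss, noise level, and step size below are illustrative placeholders, not the paper's setup.

```python
# Hedged sketch of entropic stochastic mirror descent on the simplex.
import numpy as np

rng = np.random.default_rng(0)
k = 10
c = rng.normal(size=k)            # linear loss f(w) = <c, w> (illustrative)
w = np.ones(k) / k                # uniform start on the simplex
eta = 0.1
avg = np.zeros(k)
for t in range(500):
    g = c + 0.5 * rng.normal(size=k)   # stochastic gradient
    w = w * np.exp(-eta * g)           # entropic mirror step
    w /= w.sum()                       # Bregman projection back onto the simplex
    avg += w
print((avg / 500).round(3))            # mass concentrates on argmin of c
```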
Fractal Dimension Pattern Based Multiresolution Analysis for Rough
Estimator of Person-Dependent Audio Emotion Recognition | cs.AI cs.LG cs.SD | As a general means of expression, audio analysis and recognition have
attracted much attention for their wide applications in the real world. Audio
emotion recognition (AER) attempts to understand the emotional states of humans
from given utterance signals, and has been studied widely for the further
development of friendly human-machine interfaces. In contrast to other
existing works, person-dependent patterns of audio emotions are investigated,
and fractal dimension features are calculated for acoustic feature extraction.
Furthermore, the method is able to efficiently learn intrinsic characteristics of
auditory emotions, while the utterance features are learned from the fractal
dimensions of each sub-band. Experimental results show the proposed method is
able to provide comparable performance for audio emotion recognition.
| Miao Cheng and Ah Chung Tsoi | null | 1607.00087 | null | null |
Randomized block proximal damped Newton method for composite
self-concordant minimization | math.OC cs.LG cs.NA math.NA stat.CO stat.ML | In this paper we consider the composite self-concordant (CSC) minimization
problem, which minimizes the sum of a self-concordant function $f$ and a
(possibly nonsmooth) proper closed convex function $g$. The CSC minimization is
the cornerstone of the path-following interior point methods for solving a
broad class of convex optimization problems. It has also found numerous
applications in machine learning. The proximal damped Newton (PDN) methods have
been well studied in the literature for solving this problem that enjoy a nice
iteration complexity. Given that at each iteration these methods typically
require evaluating or accessing the Hessian of $f$ and also need to solve a
proximal Newton subproblem, the cost per iteration can be prohibitively high
when applied to large-scale problems. Inspired by the recent success of block
coordinate descent methods, we propose a randomized block proximal damped
Newton (RBPDN) method for solving the CSC minimization. Compared to the PDN
methods, the computational cost per iteration of RBPDN is usually significantly
lower. Computational experiments on a class of regularized logistic
regression problems demonstrate that RBPDN is indeed promising in solving
large-scale CSC minimization problems. The convergence of RBPDN is also
analyzed in the paper. In particular, we show that RBPDN is globally convergent
when $g$ is Lipschitz continuous. It is also shown that RBPDN enjoys a local
linear convergence. Moreover, we show that for a class of $g$ including the
case where $g$ is Lipschitz differentiable, RBPDN enjoys a global linear
convergence. As a striking consequence, it shows that the classical damped
Newton methods [22,40] and the PDN [31] for such $g$ are globally linearly
convergent, which was previously unknown in the literature. Moreover, this
result can be used to sharpen the existing iteration complexity of these
methods.
| Zhaosong Lu | null | 1607.00101 | null | null |
Combining Gradient Boosting Machines with Collective Inference to
Predict Continuous Values | cs.LG stat.ML | Gradient boosting of regression trees is a competitive procedure for learning
predictive models of continuous data that fits the data with an additive
non-parametric model. The classic version of gradient boosting assumes that the
data is independent and identically distributed. However, relational data with
interdependent, linked instances is now common and the dependencies in such
data can be exploited to improve predictive performance. Collective inference
is one approach to exploit relational correlation patterns and significantly
reduce classification error. However, much of the work on collective learning
and inference has focused on discrete prediction tasks rather than continuous.
In this work, we investigate how to combine these two paradigms together to
improve regression in relational domains. Specifically, we propose a boosting
algorithm for learning a collective inference model that predicts a continuous
target variable. In the algorithm, we learn a basic relational model,
collectively infer the target values, and then iteratively learn relational
models to predict the residuals. We evaluate our proposed algorithm on a real
network dataset and show that it outperforms alternative boosting methods.
Moreover, our investigation also revealed that the relational features interact
together to produce better predictions.
| Iman Alodah and Jennifer Neville | null | 1607.00110 | null | null |
Less-forgetting Learning in Deep Neural Networks | cs.LG | The catastrophic forgetting problem causes deep neural networks to forget
previously learned information when learning from data collected in new
environments, such as by different sensors or under different light conditions.
This paper presents a new method for alleviating the catastrophic forgetting
problem. Unlike previous research, our method does not use any information from
the source domain. Surprisingly, our method is very effective at forgetting
less of the information in the source domain, and we show the effectiveness of
our method using several experiments. Furthermore, we observed that the
forgetting problem also occurs between mini-batches when performing general
training processes using stochastic gradient descent methods, and that this
problem is one of the factors that degrade the generalization performance of
the network. We also try to solve this problem using the proposed method.
Finally, we show that our less-forgetting learning method is also helpful for
improving the performance of deep neural networks in terms of recognition
rates.
| Heechul Jung, Jeongwoo Ju, Minju Jung, Junmo Kim | null | 1607.00122 | null | null |
Deep Learning with Differential Privacy | stat.ML cs.CR cs.LG | Machine learning techniques based on neural networks are achieving remarkable
results in a wide variety of domains. Often, the training of models requires
large, representative datasets, which may be crowdsourced and contain sensitive
information. The models should not expose private information in these
datasets. Addressing this goal, we develop new algorithmic techniques for
learning and a refined analysis of privacy costs within the framework of
differential privacy. Our implementation and experiments demonstrate that we
can train deep neural networks with non-convex objectives, under a modest
privacy budget, and at a manageable cost in software complexity, training
efficiency, and model quality.
| Mart\'in Abadi and Andy Chu and Ian Goodfellow and H. Brendan McMahan
and Ilya Mironov and Kunal Talwar and Li Zhang | 10.1145/2976749.2978318 | 1607.00133 | null | null |
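The core training step of the paper can be sketched concisely: clip each per-example gradient to a fixed norm, then add Gaussian noise to the summed gradient before the update. Below is a minimal sketch for logistic regression; the clip norm `C`, noise multiplier `sigma`, and data are illustrative, and the paper's privacy accounting (the moments accountant) is not reproduced.

```python
# Hedged sketch of a DP-SGD step: per-example clipping + Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 10))
y = rng.integers(0, 2, size=256).astype(float)
w = np.zeros(10)
C, sigma, lr = 1.0, 1.1, 0.1          # clip norm, noise multiplier, step size

for step in range(200):
    batch = rng.choice(len(X), size=32, replace=False)
    grads = []
    for i in batch:
        p = 1.0 / (1.0 + np.exp(-X[i] @ w))
        g = (p - y[i]) * X[i]                       # per-example gradient
        g = g / max(1.0, np.linalg.norm(g) / C)     # clip to norm at most C
        grads.append(g)
    g_sum = np.sum(grads, axis=0)
    noise = rng.normal(scale=sigma * C, size=w.shape)   # Gaussian mechanism
    w -= lr * (g_sum + noise) / len(batch)
# Tracking the cumulative (epsilon, delta) cost of these noisy steps is the
# job of the paper's moments accountant, omitted from this sketch.
```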
Missing Data Estimation in High-Dimensional Datasets: A Swarm
Intelligence-Deep Neural Network Approach | cs.AI cs.LG stat.ML | In this paper, we examine the problem of missing data in high-dimensional
datasets by taking into consideration the Missing Completely at Random and
Missing at Random mechanisms, as well as the Arbitrary missing pattern.
Additionally, this paper employs a methodology based on Deep Learning and Swarm
Intelligence algorithms in order to provide reliable estimates for missing
data. The deep learning technique is used to extract features from the input
data via an unsupervised learning approach by modeling the data distribution
based on the input. This deep learning technique is then used as part of the
objective function for the swarm intelligence technique in order to estimate
the missing data after a supervised fine-tuning phase by minimizing an error
function based on the interrelationship and correlation between features in the
dataset. The methodology investigated in this paper therefore has longer
running times; however, the promising potential outcomes justify the trade-off.
Also, basic knowledge of statistics is presumed.
| Collins Leke and Tshilidzi Marwala | null | 1607.00136 | null | null |
Efficient and Consistent Robust Time Series Analysis | cs.LG stat.ML | We study the problem of robust time series analysis under the standard
auto-regressive (AR) time series model in the presence of arbitrary outliers.
We devise an efficient hard thresholding based algorithm which can obtain a
consistent estimate of the optimal AR model despite a large fraction of the
time series points being corrupted. Our algorithm alternately estimates the
corrupted set of points and the model parameters, and is inspired by recent
advances in robust regression and hard-thresholding methods. However, a direct
application of existing techniques is hindered by a critical difference in the
time-series domain: each point is correlated with all previous points rendering
existing tools inapplicable directly. We show how to overcome this hurdle using
novel proof techniques. Using our techniques, we are also able to provide the
first efficient and provably consistent estimator for the robust regression
problem where a standard linear observation model with white additive noise is
corrupted arbitrarily. We illustrate our methods on synthetic datasets and show
that our methods indeed are able to consistently recover the optimal parameters
despite a large fraction of points being corrupted.
| Kush Bhatia, Prateek Jain, Parameswaran Kamalaruban, Purushottam Kar | null | 1607.00146 | null | null |
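The alternating scheme described above admits a compact sketch: fit AR(p) parameters by least squares on the points currently deemed clean, then re-estimate the corrupted set as the points with the largest residuals. This is a hedged toy version with an assumed known corruption budget `k`; the paper's algorithm and its thresholding schedule differ in detail.

```python
# Hedged sketch of alternating AR estimation with hard thresholding of residuals.
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 500, 2, 25                     # series length, AR order, #outliers
a_true = np.array([0.6, -0.3])
x = np.zeros(n)
for t in range(p, n):
    x[t] = a_true @ x[t - p:t][::-1] + 0.1 * rng.normal()
x[rng.choice(n - p, k, replace=False) + p] += rng.normal(scale=5.0, size=k)

Y = x[p:]
Z = np.column_stack([x[p - j - 1:n - j - 1] for j in range(p)])  # lag matrix
clean = np.ones(len(Y), dtype=bool)
for it in range(10):
    a_hat, *_ = np.linalg.lstsq(Z[clean], Y[clean], rcond=None)  # fit on clean set
    resid = np.abs(Y - Z @ a_hat)
    clean = resid <= np.sort(resid)[len(Y) - k - 1]  # drop the top-k residuals
print(a_true, a_hat)   # the estimate recovers the AR parameters despite outliers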
LSTM-based Encoder-Decoder for Multi-sensor Anomaly Detection | cs.AI cs.LG stat.ML | Mechanical devices such as engines, vehicles, aircraft, etc., are typically
instrumented with numerous sensors to capture the behavior and health of the
machine. However, there are often external factors or variables which are not
captured by sensors leading to time-series which are inherently unpredictable.
For instance, manual controls and/or unmonitored environmental conditions or
load may lead to inherently unpredictable time-series. Detecting anomalies in
such scenarios becomes challenging using standard approaches based on
mathematical models that rely on stationarity, or prediction models that
utilize prediction errors to detect anomalies. We propose a Long Short Term
Memory Networks based Encoder-Decoder scheme for Anomaly Detection (EncDec-AD)
that learns to reconstruct 'normal' time-series behavior, and thereafter uses
reconstruction error to detect anomalies. We experiment with three publicly
available quasi-predictable time-series datasets: power demand, space shuttle,
and ECG, and two real-world engine datasets with both predictive and
unpredictable behavior. We show that EncDec-AD is robust and can detect
anomalies from predictable, unpredictable, periodic, aperiodic, and
quasi-periodic time-series. Further, we show that EncDec-AD is able to detect
anomalies from short time-series (length as small as 30) as well as long
time-series (length as large as 500).
| Pankaj Malhotra, Anusha Ramakrishnan, Gaurangi Anand, Lovekesh Vig,
Puneet Agarwal, Gautam Shroff | null | 1607.00148 | null | null |
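An EncDec-AD-style detector can be sketched as follows: an LSTM autoencoder is trained to reconstruct normal windows, and reconstruction error then serves as the anomaly score. This is a simplified repeat-vector autoencoder with illustrative sizes; the paper's decoder is run differently (reverse-order reconstruction) and the threshold is set by fitting a distribution to validation errors rather than by eye.

```python
# Hedged sketch of reconstruction-error anomaly scoring with an LSTM autoencoder.
import torch

torch.manual_seed(0)

class LSTMAutoencoder(torch.nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.enc = torch.nn.LSTM(n_features, hidden, batch_first=True)
        self.dec = torch.nn.LSTM(hidden, hidden, batch_first=True)
        self.out = torch.nn.Linear(hidden, n_features)

    def forward(self, x):                    # x: (batch, time, features)
        _, (h, _) = self.enc(x)              # summarize the window into h
        rep = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)
        dec_out, _ = self.dec(rep)
        return self.out(dec_out)             # reconstructed window

t = torch.linspace(0, 12.56, 30)
normal = torch.sin(t).unsqueeze(-1).repeat(128, 1, 1) \
         + 0.05 * torch.randn(128, 30, 1)    # 'normal' sine windows (toy data)

model = LSTMAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(200):
    recon = model(normal)
    loss = torch.mean((recon - normal) ** 2)
    opt.zero_grad(); loss.backward(); opt.step()

anomaly = normal[:1].clone()
anomaly[0, 10:15] += 2.0                     # inject a spike
score = torch.mean((model(anomaly) - anomaly) ** 2)  # reconstruction error
print(float(score))                          # a high score flags the anomaly
```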
Why is Posterior Sampling Better than Optimism for Reinforcement
Learning? | stat.ML cs.AI cs.LG | Computational results demonstrate that posterior sampling for reinforcement
learning (PSRL) dramatically outperforms algorithms driven by optimism, such as
UCRL2. We provide insight into the extent of this performance boost and the
phenomenon that drives it. We leverage this insight to establish an
$\tilde{O}(H\sqrt{SAT})$ Bayesian expected regret bound for PSRL in
finite-horizon episodic Markov decision processes, where $H$ is the horizon,
$S$ is the number of states, $A$ is the number of actions and $T$ is the time
elapsed. This improves upon the best previous bound of $\tilde{O}(H S
\sqrt{AT})$ for any reinforcement learning algorithm.
| Ian Osband, Benjamin Van Roy | null | 1607.00215 | null | null |
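The PSRL loop analyzed above is short enough to sketch in full on a toy finite-horizon MDP: sample an MDP from the posterior (Dirichlet over transitions, Gaussian over rewards), solve it by backward induction, act greedily, and update the posterior. Sizes, priors, and the reward-posterior approximation are illustrative.

```python
# Hedged sketch of posterior sampling for reinforcement learning (PSRL).
import numpy as np

rng = np.random.default_rng(0)
S, A, H = 3, 2, 5
P_true = rng.dirichlet(np.ones(S), size=(S, A))  # true transition kernel
R_true = rng.normal(size=(S, A))                 # true mean rewards

alpha = np.ones((S, A, S))                       # Dirichlet posterior counts
r_sum = np.zeros((S, A)); r_cnt = np.ones((S, A))  # crude reward posterior stats

for episode in range(500):
    # Sample an MDP from the posterior.
    P = np.array([[rng.dirichlet(alpha[s, a]) for a in range(A)] for s in range(S)])
    R = rng.normal(r_sum / r_cnt, 1.0 / np.sqrt(r_cnt))
    # Solve the sampled MDP by backward induction over the horizon.
    V = np.zeros(S)
    policy = np.zeros((H, S), dtype=int)
    for h in range(H - 1, -1, -1):
        Q = R + P @ V                            # Q[s, a] at stage h
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    # Act in the real MDP and update the posterior with observed data.
    s = 0
    for h in range(H):
        a = policy[h, s]
        r = R_true[s, a] + rng.normal(scale=0.1)
        s_next = rng.choice(S, p=P_true[s, a])
        alpha[s, a, s_next] += 1
        r_sum[s, a] += r; r_cnt[s, a] += 1
        s = s_next
```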
Permutation Invariant Training of Deep Models for Speaker-Independent
Multi-talker Speech Separation | cs.CL cs.LG cs.SD eess.AS | We propose a novel deep learning model, which supports permutation invariant
training (PIT), for speaker independent multi-talker speech separation,
commonly known as the cocktail-party problem. Different from most prior
approaches, which treat speech separation as a multi-class regression problem,
and from the deep clustering technique, which considers it a segmentation (or
clustering) problem, our model optimizes for the separation regression error,
ignoring the
order of mixing sources. This strategy cleverly solves the long-lasting label
permutation problem that has prevented progress on deep learning based
techniques for speech separation. Experiments on the equal-energy mixing setup
of a Danish corpus confirm the effectiveness of PIT. We believe improvements
built upon PIT can eventually solve the cocktail-party problem and enable
real-world adoption of, e.g., automatic meeting transcription and multi-party
human-computer interaction, where overlapping speech is common.
| Dong Yu, Morten Kolb{\ae}k, Zheng-Hua Tan, and Jesper Jensen | null | 1607.00325 | null | null |
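The PIT criterion itself is a one-liner: compute the separation loss for every assignment of network outputs to reference sources and take the minimum over permutations. The sketch below uses MSE over toy waveforms; shapes and the criterion are illustrative.

```python
# Hedged sketch of a permutation-invariant training (PIT) loss.
import itertools
import numpy as np

def pit_mse(estimates, references):
    """estimates, references: (n_sources, n_frames) arrays."""
    n = len(references)
    best = np.inf
    for perm in itertools.permutations(range(n)):
        loss = np.mean((estimates[list(perm)] - references) ** 2)
        best = min(best, loss)          # ignore the order of the sources
    return best

refs = np.stack([np.sin(np.linspace(0, 6, 100)),
                 np.cos(np.linspace(0, 6, 100))])
ests = refs[::-1] + 0.01                # outputs produced in the "wrong" order
print(pit_mse(ests, refs))              # small: PIT resolves the permutation
```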
Convergence Rate of Frank-Wolfe for Non-Convex Objectives | math.OC cs.LG cs.NA stat.ML | We give a simple proof that the Frank-Wolfe algorithm obtains a stationary
point at a rate of $O(1/\sqrt{t})$ on non-convex objectives with a Lipschitz
continuous gradient. Our analysis is affine invariant and is the first, to the
best of our knowledge, to give a rate similar to what was already proven for
projected gradient methods (though on slightly different measures of
stationarity).
| Simon Lacoste-Julien | null | 1607.00345 | null | null |
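For concreteness, here is a minimal Frank-Wolfe loop on a quadratic over the probability simplex (convex for simplicity; the paper's point is that the same projection-free step attains $O(1/\sqrt{t})$ stationarity on non-convex objectives). Problem data and the classical $2/(t+2)$ step size are illustrative.

```python
# Hedged sketch of Frank-Wolfe with a simplex linear-minimization oracle.
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5)); Q = M @ M.T      # positive semidefinite quadratic
b = rng.normal(size=5)
grad = lambda x: Q @ x + b

x = np.ones(5) / 5                            # start inside the simplex
for t in range(200):
    g = grad(x)
    s = np.zeros(5); s[np.argmin(g)] = 1.0    # LMO over the simplex: a vertex
    gamma = 2.0 / (t + 2.0)                   # standard FW step size
    x = (1 - gamma) * x + gamma * s           # stays feasible, no projection

# The Frank-Wolfe gap max_s <x - s, grad f(x)> is the stationarity measure
# the paper bounds at rate O(1/sqrt(t)) in the non-convex case.
print(grad(x) @ x - grad(x).min())            # FW gap at the final iterate
```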
A scaled Bregman theorem with applications | cs.LG stat.ML | Bregman divergences play a central role in the design and analysis of a range
of machine learning algorithms. This paper explores the use of Bregman
divergences to establish reductions between such algorithms and their analyses.
We present a new scaled isodistortion theorem involving Bregman divergences
(scaled Bregman theorem for short) which shows that certain "Bregman
distortions" (employing a potentially non-convex generator) may be exactly
re-written as a scaled Bregman divergence computed over transformed data.
Admissible distortions include geodesic distances on curved manifolds and
projections or gauge-normalisation, while admissible data include scalars,
vectors and matrices.
Our theorem allows one to leverage the wealth and convenience of Bregman
divergences when analysing algorithms relying on the aforementioned Bregman
distortions. We illustrate this with three novel applications of our theorem: a
reduction from multi-class density ratio to class-probability estimation, a new
adaptive projection free yet norm-enforcing dual norm mirror descent algorithm,
and a reduction from clustering on flat manifolds to clustering on curved
manifolds. Experiments on each of these domains validate the analyses and
suggest that the scaled Bregman theorem might be a worthy addition to the
popular handful of Bregman divergence properties that have been pervasive in
machine learning.
| Richard Nock and Aditya Krishna Menon and Cheng Soon Ong | null | 1607.00360 | null | null |
Domain Adaptation for Neural Networks by Parameter Augmentation | cs.CL cs.AI cs.LG | We propose a simple domain adaptation method for neural networks in a
supervised setting. Supervised domain adaptation is a way of improving the
generalization performance on the target domain by using the source domain
dataset, assuming that both of the datasets are labeled. Recently, recurrent
neural networks have been shown to be successful on a variety of NLP tasks such
as caption generation; however, the existing domain adaptation techniques are
limited to (1) tuning the model parameters on the target dataset after
training on the source dataset, or (2) designing the network to have dual outputs,
one for the source domain and the other for the target domain. Reformulating
the idea of the domain adaptation technique proposed by Daume (2007), we
propose a simple domain adaptation method, which can be applied to neural
networks trained with a cross-entropy loss. On captioning datasets, we show
performance improvements over other domain adaptation methods.
| Yusuke Watanabe, Kazuma Hashimoto, Yoshimasa Tsuruoka | null | 1607.00410 | null | null |
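Since the abstract builds on the feature-augmentation idea of Daume (2007), a sketch of that original linear trick may help: each input is mapped to a shared copy plus a domain-specific copy, so one model can learn shared and domain-specific weights. This shows the classic version only, not the paper's neural reformulation; dimensions are illustrative.

```python
# Hedged sketch of Daume's "frustratingly easy" feature augmentation.
import numpy as np

def augment(x, domain, n_domains=2):
    """Map x (d,) to [shared copy | domain-0 copy | domain-1 copy | ...]."""
    d = len(x)
    z = np.zeros(d * (1 + n_domains))
    z[:d] = x                                   # shared block, seen by all domains
    z[d * (1 + domain): d * (2 + domain)] = x   # block active for this domain only
    return z

x = np.array([1.0, 2.0, 3.0])
print(augment(x, domain=0))  # [x, x, 0]: a source-domain example
print(augment(x, domain=1))  # [x, 0, x]: a target-domain example
```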
Learning Relational Dependency Networks for Relation Extraction | cs.AI cs.CL cs.LG | We consider the task of KBP slot filling -- extracting relation information
from newswire documents for knowledge base construction. We present our
pipeline, which employs Relational Dependency Networks (RDNs) to learn
linguistic patterns for relation extraction. Additionally, we demonstrate how
several components such as weak supervision, word2vec features, joint learning
and the use of human advice, can be incorporated in this relational framework.
We evaluate the different components in the benchmark KBP 2015 task and show
that RDNs effectively model a diverse set of features and perform competitively
with current state-of-the-art relation extraction.
| Dileep Viswanathan and Ameet Soni and Jude Shavlik and Sriraam
Natarajan | null | 1607.00424 | null | null |
Decoding the Encoding of Functional Brain Networks: an fMRI
Classification Comparison of Non-negative Matrix Factorization (NMF),
Independent Component Analysis (ICA), and Sparse Coding Algorithms | q-bio.NC cs.LG stat.ML | Brain networks in fMRI are typically identified using spatial independent
component analysis (ICA), yet mathematical constraints such as sparse coding
and positivity both provide alternate biologically-plausible frameworks for
generating brain networks. Non-negative Matrix Factorization (NMF) would
suppress negative BOLD signal by enforcing positivity. Spatial sparse coding
algorithms ($L1$ Regularized Learning and K-SVD) would impose local
specialization and a discouragement of multitasking, where the total observed
activity in a single voxel originates from a restricted number of possible
brain networks.
The assumptions of independence, positivity, and sparsity to encode
task-related brain networks are compared; the resulting brain networks for
different constraints are used as basis functions to encode the observed
functional activity at a given time point. These encodings are decoded using
machine learning to compare both the algorithms and their assumptions, using
the time series weights to predict whether a subject is viewing a video,
listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects.
For classifying cognitive activity, the sparse coding algorithm of $L1$
Regularized Learning consistently outperformed 4 variations of ICA across
different numbers of networks and noise levels (p$<$0.001). The NMF algorithms,
which suppressed negative BOLD signal, had the poorest accuracy. Within each
algorithm, encodings using sparser spatial networks (containing more
zero-valued voxels) had higher classification accuracy (p$<$0.001). The success
of sparse coding algorithms may suggest that algorithms which enforce sparse
coding, discourage multitasking, and promote local specialization may better
capture the underlying source processes than those which allow inexhaustible
local processes such as ICA.
| Jianwen Xie, Pamela K. Douglas, Ying Nian Wu, Arthur L. Brody, Ariana
E. Anderson | null | 1607.00435 | null | null |
A Greedy Approach to Adapting the Trace Parameter for Temporal
Difference Learning | cs.AI cs.LG stat.ML | One of the main obstacles to broad application of reinforcement learning
methods is the parameter sensitivity of our core learning algorithms. In many
large-scale applications, online computation and function approximation
represent key strategies in scaling up reinforcement learning algorithms. In
this setting, we have effective and reasonably well understood algorithms for
adapting the learning-rate parameter, online during learning. Such
meta-learning approaches can improve robustness of learning and enable
specialization to the current task, improving learning speed. For
temporal-difference learning algorithms which we study here, there is yet
another parameter, $\lambda$, that similarly impacts learning speed and
stability in practice. Unfortunately, unlike the learning-rate parameter,
$\lambda$ parametrizes the objective function that temporal-difference methods
optimize. Different choices of $\lambda$ produce different fixed-point
solutions, and thus adapting $\lambda$ online and characterizing the
optimization is substantially more complex than adapting the learning-rate
parameter. There is no meta-learning method for $\lambda$ that can achieve (1)
incremental updating, (2) compatibility with function approximation, and (3)
stability of learning under both on- and off-policy sampling. In this
paper we contribute a novel objective function for optimizing $\lambda$ as a
function of state rather than time. We derive a new incremental, linear
complexity $\lambda$-adaption algorithm that does not require offline batch
updating or access to a model of the world, and present a suite of experiments
illustrating the practicality of our new algorithm in three different settings.
Taken together, our contributions represent a concrete step towards black-box
application of temporal-difference learning methods in real world problems.
| Martha White and Adam White | null | 1607.00446 | null | null |
Alzheimer's Disease Diagnostics by Adaptation of 3D Convolutional
Network | cs.LG q-bio.NC stat.ML | Early diagnosis, playing an important role in preventing progress and
treating Alzheimer's disease (AD), is based on classification of
features extracted from brain images. The features have to accurately capture
main AD-related variations of anatomical brain structures, such as, e.g.,
ventricles size, hippocampus shape, cortical thickness, and brain volume. This
paper proposes to predict AD with a deep 3D convolutional neural network
(3D-CNN), which can learn generic features capturing AD biomarkers and adapt to
different domain datasets. The 3D-CNN is built upon a 3D convolutional
autoencoder, which is pre-trained to capture anatomical shape variations in
structural brain MRI scans. Fully connected upper layers of the 3D-CNN are then
fine-tuned for each task-specific AD classification. Experiments on the
CADDementia MRI dataset with no skull-stripping preprocessing have shown our
3D-CNN outperforms several conventional classifiers by accuracy. Abilities of
the 3D-CNN to generalize the features learnt and adapt to other domains have
been validated on the ADNI dataset.
| Ehsan Hosseini-Asl, Robert Keynto, Ayman El-Baz | 10.1109/ICIP.2016.7532332 | 1607.00455 | null | null |
Outlier absorbing based on a Bayesian approach | cs.LG | The presence of outliers is prevalent in machine learning applications and
may produce misleading results. In this paper a new method for dealing with
outliers and anomalous samples is proposed. To overcome the outlier issue, the
proposed method combines the global and local views of the samples. By
combination of these views, our algorithm performs in a robust manner. The
experimental results show the capabilities of the proposed method.
| Parsa Bagherzadeh and Hadi Sadoghi Yazdi | null | 1607.00466 | null | null |
Adaptive Neighborhood Graph Construction for Inference in
Multi-Relational Networks | cs.SI cs.AI cs.LG | A neighborhood graph, which represents the instances as vertices and their
relations as weighted edges, is the basis of many semi-supervised and
relational models for node labeling and link prediction. Most methods employ a
sequential process to construct the neighborhood graph. This process often
consists of generating a candidate graph, pruning the candidate graph to make a
neighborhood graph, and then performing inference on the variables (i.e.,
nodes) in the neighborhood graph. In this paper, we propose a framework that
can dynamically adapt the neighborhood graph based on the states of variables
from intermediate inference results, as well as structural properties of the
relations connecting them. A key strength of our framework is its ability to
handle multi-relational data and employ varying amounts of relations for each
instance based on the intermediate inference results. We formulate the link
prediction task as inference on neighborhood graphs, and include preliminary
results illustrating the effects of different strategies in our proposed
framework.
| Shobeir Fakhraei, Dhanya Sridhar, Jay Pujara, Lise Getoor | null | 1607.00474 | null | null |
Group Sparse Regularization for Deep Neural Networks | stat.ML cs.LG | In this paper, we consider the joint task of simultaneously optimizing (i)
the weights of a deep neural network, (ii) the number of neurons for each
hidden layer, and (iii) the subset of active input features (i.e., feature
selection). While these problems are generally dealt with separately, we
present a simple regularized formulation that allows solving all three of them in
parallel, using standard optimization routines. Specifically, we extend the
group Lasso penalty (originated in the linear regression literature) in order
to impose group-level sparsity on the network's connections, where each group
is defined as the set of outgoing weights from a unit. Depending on the
specific case, the weights can be related to an input variable, to a hidden
neuron, or to a bias unit, thus performing simultaneously all the
aforementioned tasks in order to obtain a compact network. We perform an
extensive experimental evaluation, by comparing with classical weight decay and
Lasso penalties. We show that a sparse version of the group Lasso penalty is
able to achieve competitive performance, while at the same time resulting in
extremely compact networks with a smaller number of input features. We evaluate
both on a toy dataset for handwritten digit recognition, and on multiple
realistic large-scale classification problems.
| Simone Scardapane, Danilo Comminiello, Amir Hussain, Aurelio Uncini | 10.1016/j.neucom.2017.02.029 | 1607.00485 | null | null |
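The group-level penalty described above can be sketched directly: each group is the set of outgoing weights from one input feature or hidden unit, and its unsquared L2 norm enters the loss, driving whole units toward zero. This is the plain group penalty with illustrative sizes and strength; the paper's sparse group Lasso variant and bias groups are omitted.

```python
# Hedged sketch of a group-sparse (group Lasso) penalty on a small network.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(20, 50), torch.nn.ReLU(),
                          torch.nn.Linear(50, 3))

def group_lasso_penalty(model):
    penalty = 0.0
    for layer in model:
        if isinstance(layer, torch.nn.Linear):
            # weight has shape (out, in); column j holds the outgoing weights
            # of unit j, so penalizing column norms prunes whole units.
            penalty = penalty + layer.weight.norm(dim=0).sum()
    return penalty

x = torch.randn(128, 20); y = torch.randint(0, 3, (128,))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(300):
    loss = torch.nn.functional.cross_entropy(net(x), y) \
           + 1e-2 * group_lasso_penalty(net)
    opt.zero_grad(); loss.backward(); opt.step()

# Columns whose norm collapses correspond to input features (first layer) or
# hidden neurons (second layer) that can be removed for a compact network.
print((net[0].weight.norm(dim=0) < 1e-3).sum().item(), "prunable inputs")
```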
Big IoT and social networking data for smart cities: Algorithmic
improvements on Big Data Analysis in the context of RADICAL city applications | cs.CY cs.LG cs.SI | In this paper we present a SOA (Service Oriented Architecture)-based
platform, enabling the retrieval and analysis of big datasets stemming from
social networking (SN) sites and Internet of Things (IoT) devices, collected by
smart city applications and socially-aware data aggregation services. A large
set of city applications in the areas of Participating Urbanism, Augmented
Reality and Sound-Mapping throughout participating cities is being applied,
resulting in sets of millions of user-generated events and online SN
reports fed into the RADICAL platform. Moreover, we study the application of
data analytics such as sentiment analysis to the combined IoT and SN data saved
in an SQL database, further investigating algorithms and configurations to
minimize delays in dataset processing and results retrieval.
| Evangelos Psomakelis, Fotis Aisopos, Antonios Litke, Konstantinos
Tserpes, Magdalini Kardara, Pablo Mart\'inez Campo | null | 1607.00509 | null | null |
Approximate Joint Matrix Triangularization | cs.NA cs.LG math.NA stat.ML | We consider the problem of approximate joint triangularization of a set of
noisy jointly diagonalizable real matrices. Approximate joint triangularizers
are commonly used in the estimation of the joint eigenstructure of a set of
matrices, with applications in signal processing, linear algebra, and tensor
decomposition. By assuming the input matrices to be perturbations of
noise-free, simultaneously diagonalizable ground-truth matrices, the
approximate joint triangularizers are expected to be perturbations of the exact
joint triangularizers of the ground-truth matrices. We provide a priori and a
posteriori perturbation bounds on the `distance' between an approximate joint
triangularizer and its exact counterpart. The a priori bounds are theoretical
inequalities that involve functions of the ground-truth matrices and noise
matrices, whereas the a posteriori bounds are given in terms of observable
quantities that can be computed from the input matrices. From a practical
perspective, the problem of finding the best approximate joint triangularizer
of a set of noisy matrices amounts to solving a nonconvex optimization problem.
We show that, under a condition on the noise level of the input matrices, it is
possible to find a good initial triangularizer such that the solution obtained
by any local descent-type algorithm has certain global guarantees. Finally, we
discuss the application of approximate joint matrix triangularization to
canonical tensor decomposition and we derive novel estimation error bounds.
| Nicolo Colombo and Nikos Vlassis | null | 1607.00514 | null | null |
Alzheimer's Disease Diagnostics by a Deeply Supervised Adaptable 3D
Convolutional Network | cs.LG q-bio.NC stat.ML | Early diagnosis, playing an important role in preventing progress and
treating the Alzheimer's disease (AD), is based on classification of features
extracted from brain images. The features have to accurately capture main
AD-related variations of anatomical brain structures, such as, e.g., ventricles
size, hippocampus shape, cortical thickness, and brain volume. This paper
proposes to predict AD with a deep 3D convolutional neural network
(3D-CNN), which can learn generic features capturing AD biomarkers and adapt to
different domain datasets. The 3D-CNN is built upon a 3D convolutional
autoencoder, which is pre-trained to capture anatomical shape variations in
structural brain MRI scans. Fully connected upper layers of the 3D-CNN are then
fine-tuned for each task-specific AD classification. Experiments on the
\emph{ADNI} MRI dataset with no skull-stripping preprocessing have shown our
3D-CNN outperforms several conventional classifiers in accuracy and robustness.
The abilities of the 3D-CNN to generalize the features learnt and adapt to other
domains have been validated on the \emph{CADDementia} dataset.
| Ehsan Hosseini-Asl, Georgy Gimel'farb, Ayman El-Baz | null | 1607.00556 | null | null |
Rademacher Complexity Bounds for a Penalized Multiclass Semi-Supervised
Algorithm | stat.ML cs.LG | We propose Rademacher complexity bounds for multiclass classifiers trained
with a two-step semi-supervised model. In the first step, the algorithm
partitions the partially labeled data and then identifies dense clusters
containing $\kappa$ predominant classes using the labeled training examples
such that the proportion of their non-predominant classes is below a fixed
threshold. In the second step, a classifier is trained by minimizing a margin
empirical loss over the labeled training set and a penalization term measuring
the disability of the learner to predict the $\kappa$ predominant classes of
the identified clusters. The resulting data-dependent generalization error
bound involves the margin distribution of the classifier, the stability of the
clustering technique used in the first step and Rademacher complexity terms
corresponding to partially labeled training data. Our theoretical results
exhibit convergence rates extending those proposed in the literature for the
binary case, and experimental results on different multiclass classification
problems show empirical evidence that supports the theory.
| Yury Maximov, Massih-Reza Amini, Zaid Harchaoui | 10.1613/jair.5638 | 1607.00567 | null | null |
node2vec: Scalable Feature Learning for Networks | cs.SI cs.LG stat.ML | Prediction tasks over nodes and edges in networks require careful effort in
engineering features used by learning algorithms. Recent research in the
broader field of representation learning has led to significant progress in
automating prediction by learning the features themselves. However, present
feature learning approaches are not expressive enough to capture the diversity
of connectivity patterns observed in networks. Here we propose node2vec, an
algorithmic framework for learning continuous feature representations for nodes
in networks. In node2vec, we learn a mapping of nodes to a low-dimensional
space of features that maximizes the likelihood of preserving network
neighborhoods of nodes. We define a flexible notion of a node's network
neighborhood and design a biased random walk procedure, which efficiently
explores diverse neighborhoods. Our algorithm generalizes prior work which is
based on rigid notions of network neighborhoods, and we argue that the added
flexibility in exploring neighborhoods is the key to learning richer
representations. We demonstrate the efficacy of node2vec over existing
state-of-the-art techniques on multi-label classification and link prediction
in several real-world networks from diverse domains. Taken together, our work
represents a new way for efficiently learning state-of-the-art task-independent
representations in complex networks.
| Aditya Grover, Jure Leskovec | null | 1607.00653 | null | null |
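The biased second-order random walk at the heart of node2vec fits in a few lines: the return parameter `p` and in-out parameter `q` reweight candidate next steps by their distance to the previous node. The walks would then be fed to a skip-gram model (e.g., word2vec), which this sketch omits; the toy graph and parameter values are illustrative.

```python
# Hedged sketch of the node2vec biased random walk (unweighted toy graph).
import random

random.seed(0)
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4], 4: [3]}

def node2vec_walk(start, length, p=1.0, q=0.5):
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        if len(walk) == 1:                       # first step: uniform neighbor
            walk.append(random.choice(graph[cur]))
            continue
        prev = walk[-2]
        weights = []
        for nxt in graph[cur]:
            if nxt == prev:                      # distance 0: return to previous
                weights.append(1.0 / p)
            elif nxt in graph[prev]:             # distance 1: stay close (BFS-like)
                weights.append(1.0)
            else:                                # distance 2: move outward (DFS-like)
                weights.append(1.0 / q)
        walk.append(random.choices(graph[cur], weights=weights)[0])
    return walk

print(node2vec_walk(0, 10))  # low q biases the walk toward exploration
```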
Unsupervised Learning of 3D Structure from Images | cs.CV cs.LG stat.ML | A key goal of computer vision is to recover the underlying 3D structure from
2D observations of the world. In this paper we learn strong deep generative
models of 3D structures, and recover these structures from 3D and 2D images via
probabilistic inference. We demonstrate high-quality samples and report
log-likelihoods on several datasets, including ShapeNet [2], and establish the
first benchmarks in the literature. We also show how these models and their
inference networks can be trained end-to-end from 2D images. This demonstrates
for the first time the feasibility of learning to infer 3D representations of
the world in a purely unsupervised manner.
| Danilo Jimenez Rezende and S. M. Ali Eslami and Shakir Mohamed and
Peter Battaglia and Max Jaderberg and Nicolas Heess | null | 1607.00662 | null | null |
Understanding the Energy and Precision Requirements for Online Learning | stat.ML cs.LG | It is well-known that the precision of data, hyperparameters, and internal
representations employed in learning systems directly impacts their energy,
throughput, and latency. The precision requirements for the training algorithm
are also important for systems that learn on-the-fly. Prior work has shown that
the data and hyperparameters can be quantized heavily without incurring much
penalty in classification accuracy when compared to floating point
implementations. These works suffer from two key limitations. First, they
assume uniform precision for the classifier and for the training algorithm and
thus miss out on the opportunity to further reduce precision. Second, prior
works are empirical studies. In this article, we overcome both these
limitations by deriving analytical lower bounds on the precision requirements
of the commonly employed stochastic gradient descent (SGD) on-line learning
algorithm in the specific context of a support vector machine (SVM). Lower
bounds on the data precision are derived in terms of the desired
classification accuracy and precision of the hyperparameters used in the
classifier. Additionally, lower bounds on the hyperparameter precision in the
SGD training algorithm are obtained. These bounds are validated using both
synthetic and the UCI breast cancer dataset. Additionally, the impact of these
precisions on the energy consumption of a fixed-point SVM with on-line training
is studied.
| Charbel Sakr, Ameya Patil, Sai Zhang, Yongjune Kim, Naresh Shanbhag | null | 1607.00669 | null | null |
Confidence-Weighted Bipartite Ranking | cs.LG | Bipartite ranking is a fundamental machine learning and data mining problem.
It commonly concerns the maximization of the AUC metric. Recently, a number of
studies have proposed online bipartite ranking algorithms to learn from massive
streams of class-imbalanced data. These methods suggest both linear and
kernel-based bipartite ranking algorithms based on first and second-order
online learning. Unlike kernelized rankers, linear rankers are more scalable
learning algorithms. The existing linear online bipartite ranking algorithms
fail either to handle non-separable data or to construct an adaptive large margin.
These limitations yield unreliable bipartite ranking performance. In this work,
we propose a linear online confidence-weighted bipartite ranking algorithm
(CBR) that adopts soft confidence-weighted learning. The proposed algorithm
leverages the same properties of soft confidence-weighted learning in a
framework for bipartite ranking. We also develop a diagonal variation of the
proposed confidence-weighted bipartite ranking algorithm to deal with
high-dimensional data by maintaining only the diagonal elements of the
covariance matrix. We empirically evaluate the effectiveness of the proposed
algorithms on several benchmark and high-dimensional datasets. The experimental
results validate the reliability of the proposed algorithms. The results also
show that our algorithms outperform or are at least comparable to the competing
online AUC maximization methods.
| Majdi Khalid, Indrakshi Ray, and Hamidreza Chitsaz | null | 1607.00847 | null | null |
Neighborhood Features Help Detecting Non-Technical Losses in Big Data
Sets | cs.LG cs.AI | Electricity theft is a major problem around the world in both developed and
developing countries and may range up to 40% of the total electricity
distributed. More generally, electricity theft belongs to non-technical losses
(NTL), which are losses that occur during the distribution of electricity in
power grids. In this paper, we build features from the neighborhood of
customers. We first split the area in which the customers are located into
grids of different sizes. For each grid cell we then compute the proportion of
inspected customers and the proportion of NTL found among the inspected
customers. We then analyze the distributions of features generated and show why
they are useful to predict NTL. In addition, we compute features from the
consumption time series of customers. We also use master data features of
customers, such as their customer class and voltage of their connection. We
compute these features for a Big Data base of 31M meter readings, 700K
customers and 400K inspection results. We then use these features to train four
machine learning algorithms that are particularly suitable for Big Data sets
because of their parallelizable structure: logistic regression, k-nearest
neighbors, linear support vector machine and random forest. Using the
neighborhood features instead of only analyzing the time series yields
appreciable results on Big Data sets, for NTL proportions varying from 1% to 90%.
This work can therefore be deployed to a wide range of different regions around
the world.
| Patrick Glauner, Jorge Meira, Lautaro Dolberg, Radu State, Franck
Bettinger, Yves Rangoni, Diogo Duarte | null | 1607.00872 | null | null |
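A minimal sketch (with a hypothetical data layout and column names) of the two neighborhood features described above: per grid cell, the proportion of inspected customers and the proportion of NTL found among the inspected ones.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 10_000
customers = pd.DataFrame({
    "lat": rng.uniform(0, 1, n),
    "lon": rng.uniform(0, 1, n),
    "inspected": rng.random(n) < 0.1,
})
customers["ntl_found"] = customers["inspected"] & (rng.random(n) < 0.3)

cell_size = 0.1                                   # one of several grid sizes
customers["cell"] = (
    (customers["lat"] // cell_size).astype(int).astype(str) + "_"
    + (customers["lon"] // cell_size).astype(int).astype(str)
)
grp = customers.groupby("cell")
per_cell = pd.DataFrame({
    "inspected_rate": grp["inspected"].mean(),
    "ntl_rate_among_inspected":
        grp["ntl_found"].sum() / grp["inspected"].sum().clip(lower=1),
})
customers = customers.join(per_cell, on="cell")   # attach features per customer
print(customers[["inspected_rate", "ntl_rate_among_inspected"]].head())
```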
Optimal Quantum Sample Complexity of Learning Algorithms | quant-ph cs.CC cs.LG | $ \newcommand{\eps}{\varepsilon} $In learning theory, the VC dimension of a
concept class $C$ is the most common way to measure its "richness." In the PAC
model $$ \Theta\Big(\frac{d}{\eps} + \frac{\log(1/\delta)}{\eps}\Big) $$
examples are necessary and sufficient for a learner to output, with probability
$1-\delta$, a hypothesis $h$ that is $\eps$-close to the target concept $c$. In
the related agnostic model, where the samples need not come from a $c\in C$, we
know that $$ \Theta\Big(\frac{d}{\eps^2} + \frac{\log(1/\delta)}{\eps^2}\Big)
$$ examples are necessary and sufficient to output a hypothesis $h\in C$ whose
error is at most $\eps$ worse than the best concept in $C$.
Here we analyze quantum sample complexity, where each example is a coherent
quantum state. This model was introduced by Bshouty and Jackson, who showed
that quantum examples are more powerful than classical examples in some
fixed-distribution settings. However, Atici and Servedio (with an improvement by Zhang)
showed that in the PAC setting, quantum examples cannot be much more powerful:
the required number of quantum examples is $$
\Omega\Big(\frac{d^{1-\eta}}{\eps} + d + \frac{\log(1/\delta)}{\eps}\Big)\mbox{
for all }\eta> 0. $$ Our main result is that quantum and classical sample
complexity are in fact equal up to constant factors in both the PAC and
agnostic models. We give two approaches. The first is a fairly simple
information-theoretic argument that recovers the above two classical bounds and
gives the same bounds for quantum sample complexity up to a $\log(d/\eps)$
factor. We then give a second approach that avoids the log-factor loss, based
on analyzing the behavior of the "Pretty Good Measurement" on the quantum state
identification problems that correspond to learning. This shows classical and
quantum sample complexity are equal up to constant factors.
| Srinivasan Arunachalam (CWI) and Ronald de Wolf (CWI and U of
Amsterdam) | null | 1607.00932 | null | null |
Sequence to Backward and Forward Sequences: A Content-Introducing
Approach to Generative Short-Text Conversation | cs.CL cs.LG | Using neural networks to generate replies in human-computer dialogue systems
has been attracting increasing attention over the past few years. However, the
performance is not satisfactory: the neural network tends to generate safe,
universally relevant replies which carry little meaning. In this paper, we
propose a content-introducing approach to neural network-based generative
dialogue systems. We first use pointwise mutual information (PMI) to predict a
noun as a keyword, reflecting the main gist of the reply. We then propose
seq2BF, a "sequence to backward and forward sequences" model, which generates a
reply containing the given keyword. Experimental results show that our approach
significantly outperforms traditional sequence-to-sequence models in terms of
human evaluation and the entropy measure, and that the predicted keyword can
appear at an appropriate position in the reply.
| Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, Zhi Jin | null | 1607.00970 | null | null |
Accelerate Stochastic Subgradient Method by Leveraging Local Growth
Condition | math.OC cs.LG cs.NA stat.ML | In this paper, a new theory is developed for first-order stochastic convex
optimization, showing that the global convergence rate is sufficiently
quantified by a local growth rate of the objective function in a neighborhood
of the optimal solutions. In particular, if the objective function $F(\mathbf
w)$ in the $\epsilon$-sublevel set grows as fast as $\|\mathbf w - \mathbf
w_*\|_2^{1/\theta}$, where $\mathbf w_*$ represents the closest optimal
solution to $\mathbf w$ and $\theta\in(0,1]$ quantifies the local growth rate,
the iteration complexity of first-order stochastic optimization for achieving
an $\epsilon$-optimal solution can be $\widetilde O(1/\epsilon^{2(1-\theta)})$,
which is optimal at most up to a logarithmic factor. To achieve the faster
global convergence, we develop two different accelerated stochastic subgradient
methods by iteratively solving the original problem approximately in a local
region around a historical solution with the size of the local region gradually
decreasing as the solution approaches the optimal set. Besides the theoretical
improvements, this work also includes new contributions towards making the
proposed algorithms practical: (i) we present practical variants of accelerated
stochastic subgradient methods that can run without the knowledge of
multiplicative growth constant and even the growth rate $\theta$; (ii) we
consider a broad family of problems in machine learning to demonstrate that the
proposed algorithms enjoy faster convergence than traditional stochastic
subgradient method. We also characterize the complexity of the proposed
algorithms for ensuring the gradient is small without the smoothness
assumption.
| Yi Xu, Qihang Lin, Tianbao Yang | null | 1607.01027 | null | null |
Bootstrap Model Aggregation for Distributed Statistical Learning | stat.ML cs.AI cs.LG | In distributed, or privacy-preserving learning, we are often given a set of
probabilistic models estimated from different local repositories, and asked to
combine them into a single model that gives efficient statistical estimation. A
simple method is to linearly average the parameters of the local models, which,
however, tends to be degenerate or not applicable on non-convex models, or
models with different parameter dimensions. One more practical strategy is to
generate bootstrap samples from the local models, and then learn a joint model
based on the combined bootstrap set. Unfortunately, the bootstrap procedure
introduces additional noise and can significantly deteriorate the performance.
In this work, we propose two variance reduction methods to correct the
bootstrap noise, including a weighted M-estimator that is both statistically
efficient and practically powerful. Both theoretical and empirical analyses are
provided to demonstrate our methods.
| Jun Han, Qiang Liu | null | 1607.01036 | null | null |
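For concreteness, a minimal sketch of the baseline bootstrap-aggregation pipeline the abstract starts from; the paper's contribution (a variance-reduced, weighted M-estimator) would replace the plain refit in the last step. Local models and sample sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# K local repositories, each having estimated a univariate Gaussian (mu, sigma).
local_models = [(0.9, 1.1), (1.1, 0.9), (1.0, 1.0)]

# Step 1: draw bootstrap samples from every local model.
boot = np.concatenate([
    rng.normal(mu, sigma, size=500) for mu, sigma in local_models
])

# Step 2: refit a single joint model on the pooled bootstrap set. The
# bootstrap draws add extra noise, which the paper's estimators correct.
mu_joint, sigma_joint = boot.mean(), boot.std(ddof=1)
print(f"joint model: mu={mu_joint:.3f}, sigma={sigma_joint:.3f}")

# Naive alternative: linear parameter averaging, which can degenerate for
# non-convex models or mismatched parameterizations, as the abstract notes.
mu_avg = np.mean([mu for mu, _ in local_models])
```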
Application of Statistical Relational Learning to Hybrid Recommendation
Systems | cs.AI cs.IR cs.LG | Recommendation systems usually involve exploiting the relations among known
features and content that describe items (content-based filtering) or the
overlap of similar users who interacted with or rated the target item
(collaborative filtering). To combine these two filtering approaches, current
model-based hybrid recommendation systems typically require extensive feature
engineering to construct a user profile. Statistical Relational Learning (SRL)
provides a straightforward way to combine the two approaches. However, due to
the large scale of the data used in real world recommendation systems, little
research exists on applying SRL models to hybrid recommendation systems, and
essentially none of that research has been applied on real big-data-scale
systems. In this paper, we propose a way to adapt state-of-the-art SRL
learning approaches to construct a real hybrid recommendation system.
Furthermore, in order to satisfy a common requirement in recommendation systems
(i.e. that false positives are more undesirable and therefore penalized more
harshly than false negatives), our approach can also allow tuning the trade-off
between the precision and recall of the system in a principled way. Our
experimental results demonstrate the efficiency of our proposed approach as
well as its improved performance on recommendation precision.
| Shuo Yang, Mohammed Korayem, Khalifeh AlJadda, Trey Grainger, Sriraam
Natarajan | null | 1607.01050 | null | null |
AdaNet: Adaptive Structural Learning of Artificial Neural Networks | cs.LG | We present new algorithms for adaptively learning artificial neural networks.
Our algorithms (AdaNet) adaptively learn both the structure of the network and
its weights. They are based on a solid theoretical analysis, including
data-dependent generalization guarantees that we prove and discuss in detail.
We report the results of large-scale experiments with one of our algorithms on
several binary classification tasks extracted from the CIFAR-10 dataset. The
results demonstrate that our algorithm can automatically learn network
structures with very competitive performance accuracies when compared with
those achieved for neural networks found by standard approaches.
| Corinna Cortes, Xavi Gonzalvo, Vitaly Kuznetsov, Mehryar Mohri and
Scott Yang | null | 1607.01097 | null | null |
Minimalist Regression Network with Reinforced Gradients and Weighted
Estimates: a Case Study on Parameters Estimation in Automated Welding | cs.LG cs.RO | This paper presents a minimalist neural regression network as an aggregate of
independent identical regression blocks that are trained simultaneously.
Moreover, it introduces a new multiplicative parameter, shared by all the
neural units of a given layer, to maintain the quality of its gradients.
Furthermore, it increases its estimation accuracy via learning a weight factor
whose value captures the redundancy between the estimated and actual values
at each training iteration. We choose the estimation of the direct weld
parameters of different welding techniques to show a significant improvement in
calculation of these parameters by our model in contrast to state-of-the-art
techniques in the literature. Furthermore, we demonstrate the ability of our
model to retain its performance when presented with combined data of different
welding techniques. This is a nontrivial result in attaining a scalable model
whose estimation quality is independent of the adopted welding technique.
| Soheil Keshmiri | null | 1607.01136 | null | null |
How to Evaluate the Quality of Unsupervised Anomaly Detection
Algorithms? | stat.ML cs.LG | When sufficient labeled data are available, classical criteria based on
Receiver Operating Characteristic (ROC) or Precision-Recall (PR) curves can be
used to compare the performance of unsupervised anomaly detection algorithms.
However, in many situations, few or no data are labeled. This calls for
alternative criteria one can compute on non-labeled data. In this paper, two
criteria that do not require labels are empirically shown to discriminate
accurately (w.r.t. ROC or PR based criteria) between algorithms. These criteria
are based on existing Excess-Mass (EM) and Mass-Volume (MV) curves, which
generally cannot be well estimated in large dimension. A methodology based on
feature sub-sampling and aggregating is also described and tested, extending
the use of these criteria to high-dimensional datasets and solving major
drawbacks inherent to standard EM and MV curves.
| Nicolas Goix (LTCI) | null | 1607.01152 | null | null |
Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization | math.OC cs.LG cs.NA stat.ML | In this paper we study stochastic quasi-Newton methods for nonconvex
stochastic optimization, where we assume that noisy information about the
gradients of the objective function is available via a stochastic first-order
oracle (SFO). We propose a general framework for such methods, for which we
prove almost sure convergence to stationary points and analyze its worst-case
iteration complexity. When a randomly chosen iterate is returned as the output
of such an algorithm, we prove that in the worst-case, the SFO-calls complexity
is $O(\epsilon^{-2})$ to ensure that the expectation of the squared norm of the
gradient is smaller than the given accuracy tolerance $\epsilon$. We also
propose a specific algorithm, namely a stochastic damped L-BFGS (SdLBFGS)
method, that falls under the proposed framework. Moreover, we incorporate the
SVRG variance reduction technique into the proposed SdLBFGS method, and analyze
its SFO-calls complexity. Numerical results on a nonconvex binary
classification problem using SVM, and a multiclass classification problem using
neural networks are reported.
| Xiao Wang, Shiqian Ma, Donald Goldfarb, Wei Liu | null | 1607.01231 | null | null |
Temporal Topic Analysis with Endogenous and Exogenous Processes | cs.CL cs.IR cs.LG | We consider the problem of modeling temporal textual data taking endogenous
and exogenous processes into account. Such text documents arise in real world
applications, including job advertisements and economic news articles, which
are influenced by the fluctuations of the general economy. We propose a
hierarchical Bayesian topic model which imposes a "group-correlated"
hierarchical structure on the evolution of topics over time incorporating both
processes, and show that this model can be estimated from Markov chain Monte
Carlo sampling methods. We further demonstrate that this model captures the
intrinsic relationships between the topic distribution and the time-dependent
factors, and compare its performance with latent Dirichlet allocation (LDA) and
two other related models. The model is applied to two collections of documents
to illustrate its empirical performance: online job advertisements from
DirectEmployers Association and journalists' postings on BusinessInsider.com.
| Baiyang Wang, Diego Klabjan | null | 1607.01274 | null | null |
Resource Allocation in a MAC with and without security via Game
Theoretic Learning | cs.IT cs.LG math.IT | In this paper a $K$-user fading multiple access channel with and without
security constraints is studied. First, we consider an F-MAC without security
constraints. Under the assumption of individual CSI of users, we propose the
problem of power allocation as a stochastic game when the receiver sends an ACK
or a NACK depending on whether it was able to decode the message or not. We
have used Multiplicative weight no-regret algorithm to obtain a Coarse
Correlated Equilibrium (CCE). Then we consider the case when the users can
decode ACK/NACK of each other. In this scenario we provide an algorithm to
maximize the weighted sum-utility of all the users and obtain a Pareto optimal
point (PP). The PP is socially optimal but may be unfair to individual users. Next we
consider the case where the users can cooperate with each other so as to
reject a policy that would be unfair to an individual user. We then
obtain a Nash bargaining solution, which in addition to being Pareto optimal,
is also fair to each user.
Next we study a $K$-user fading multiple access wiretap channel with CSI of
Eve available to the users. We use the previous algorithms to obtain a CCE, PP
and a NBS.
Next we consider the case where each user does not know the CSI of Eve but
only its distribution. In that case we use secrecy outage as the criterion for
the receiver to send an ACK or a NACK. Here also we use the previous algorithms
to obtain a CCE, PP or a NBS. Finally we show that our algorithms can be
extended to the case where a user can transmit at different rates. At the end
we provide a few examples to compute different solutions and compare them under
different CSI scenarios.
| Shahid Mehraj Shah, Krishna Chaitanya A and Vinod Sharma | null | 1607.01346 | null | null |
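A minimal sketch of the multiplicative-weights no-regret update used above to reach a coarse correlated equilibrium. The action set and utility function are placeholders; in the paper the reward comes from the receiver's ACK/NACK feedback.

```python
import numpy as np

rng = np.random.default_rng(3)
power_levels = np.array([0.5, 1.0, 2.0])   # hypothetical discrete actions
w = np.ones(len(power_levels))
eta = 0.1                                   # learning rate

for t in range(1000):
    # Placeholder full-information utility: a rate term (random channel
    # gain) minus a power cost, evaluated for every candidate power level.
    reward = np.log1p(power_levels * rng.exponential(1.0)) - 0.3 * power_levels
    w *= np.exp(eta * reward)               # multiplicative-weights update

print("limiting mixed strategy:", w / w.sum())
```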
Learning Discriminative Features using Encoder-Decoder type Deep Neural
Nets | cs.LG stat.ML | As machine learning is applied to an increasing variety of complex problems,
which are defined by high dimensional and complex data sets, the necessity for
task oriented feature learning grows in importance. With the advancement of
Deep Learning algorithms, various successful feature learning techniques have
evolved. In this paper, we present a novel way of learning discriminative
features by training Deep Neural Nets which have Encoder or Decoder type
architecture similar to an Autoencoder. We demonstrate that our approach can
learn discriminative features which can perform better at pattern
classification tasks when the number of training samples is relatively small.
| Vishwajeet Singh, Killamsetti Ravi Kumar and K Eswaran | null | 1607.01354 | null | null |
An Aggregate and Iterative Disaggregate Algorithm with Proven Optimality
in Machine Learning | stat.ML cs.LG | We propose a clustering-based iterative algorithm to solve certain
optimization problems in machine learning, where we start the algorithm by
aggregating the original data, solving the problem on aggregated data, and then
in subsequent steps gradually disaggregate the aggregated data. We apply the
algorithm to common machine learning problems such as the least absolute
deviation regression problem, support vector machines, and semi-supervised
support vector machines. We derive model-specific data aggregation and
disaggregation procedures. We also show optimality, convergence, and the
optimality gap of the approximated solution in each iteration. A computational
study is provided.
| Young Woong Park and Diego Klabjan | 10.1007/s10994-016-5562-z | 1607.01400 | null | null |
Algorithms for Generalized Cluster-wise Linear Regression | stat.ML cs.LG | Cluster-wise linear regression (CLR), a clustering problem intertwined with
regression, is to find clusters of entities such that the overall sum of
squared errors from regressions performed over these clusters is minimized,
where each cluster may have different variances. We generalize the CLR problem
by allowing each entity to have more than one observation, and refer to it as
generalized CLR. We propose an exact mathematical programming based approach
relying on column generation, a column generation based heuristic algorithm
that clusters predefined groups of entities, a metaheuristic genetic algorithm
with adapted Lloyd's algorithm for K-means clustering, a two-stage approach,
and a modified algorithm of Sp{\"a}th \cite{Spath1979} for solving generalized
CLR. We examine the performance of our algorithms on a stock keeping unit (SKU)
clustering problem employed in forecasting halo and cannibalization effects in
promotions using real-world retail data from a large supermarket chain. In the
SKU clustering problem, the retailer needs to cluster SKUs based on their
seasonal effects in response to promotions. The seasonal effects are the
results of regressions with predictors being promotion mechanisms and seasonal
dummies performed over clusters generated. We compare the performance of all
proposed algorithms for the SKU problem with real-world and synthetic data.
| Young Woong Park, Yan Jiang, Diego Klabjan, Loren Williams | 10.1287/ijoc.2016.0729 | 1607.01417 | null | null |
An optimal learning method for developing personalized treatment regimes | stat.ML cs.LG | A treatment regime is a function that maps individual patient information to
a recommended treatment, hence explicitly incorporating the heterogeneity in
need for treatment across individuals. Patient responses are dichotomous and
can be predicted through an unknown relationship that depends on the patient
information and the selected treatment. The goal is to find the treatments that
lead to the best patient responses on average. Each experiment is expensive,
forcing us to learn the most from each experiment. We adopt a Bayesian approach
both to incorporate possible prior information and to update our treatment
regime continuously as information accrues, with the potential to allow smaller
yet more informative trials and for patients to receive better treatment. By
formulating the problem as contextual bandits, we introduce a knowledge
gradient policy to guide the treatment assignment by maximizing the expected
value of information, for which an approximation method is used to overcome
computational challenges. We provide a detailed study on how to make sequential
medical decisions under uncertainty to reduce health care costs on a real world
knee replacement dataset. We use clustering and LASSO to deal with the
intrinsic sparsity in health datasets. We show experimentally that even though
the problem is sparse, through careful selection of physicians (versus picking
them at random), we can significantly improve the success rates.
| Yingfei Wang and Warren Powell | null | 1607.01462 | null | null |
On Sampling and Greedy MAP Inference of Constrained Determinantal Point
Processes | cs.DS cs.LG math.PR | Subset selection problems ask for a small, diverse yet representative subset
of the given data. When pairwise similarities are captured by a kernel, the
determinants of submatrices provide a measure of diversity or independence of
items within a subset. Matroid theory gives another notion of independence,
thus giving rise to optimization and sampling questions about Determinantal
Point Processes (DPPs) under matroid constraints. Partition constraints, as a
special case, arise naturally when incorporating additional labeling or
clustering information, besides the kernel, in DPPs. Finding the maximum
determinant submatrix under matroid constraints on its row/column indices has
been previously studied. However, the corresponding question of sampling from
DPPs under matroid constraints has been unresolved, beyond the simple
cardinality constrained k-DPPs. We give the first polynomial time algorithm to
sample exactly from DPPs under partition constraints, for any constant number
of partitions. We complement this with a complexity-theoretic barrier that rules
out such a result under general matroid constraints. Our experiments indicate
that partition-constrained DPPs offer more flexibility and more diversity than
k-DPPs and their naive extensions, while being reasonably efficient in running
time. We also show that a simple greedy initialization followed by local search
gives improved approximation guarantees for the problem of MAP inference from
k-DPPs on well-conditioned kernels. Our experiments show that this improvement
is significant for larger values of k, supporting our theoretical result.
| Tarun Kathuria, Amit Deshpande | null | 1607.01551 | null | null |
Bagged Boosted Trees for Classification of Ecological Momentary
Assessment Data | cs.LG | Ecological Momentary Assessment (EMA) data is organized in multiple levels
(per-subject, per-day, etc.) and this particular structure should be taken into
account in machine learning algorithms used in EMA, such as decision trees and their
variants. We propose a new algorithm called BBT (standing for Bagged Boosted
Trees) that is enhanced by an over/under-sampling method and can provide better
estimates for the conditional class probability function. Experimental results
on a real-world dataset show that BBT improves classification performance on EMA data.
| Gerasimos Spanakis and Gerhard Weiss and Anne Roefs | null | 1607.01582 | null | null |
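A minimal sketch (assumed design; details differ from the paper) of bagging boosted trees with per-bag under-sampling of the majority class, averaging class probabilities across bags. It assumes the positive class is the minority one.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def fit_bbt(X, y, n_bags=10, seed=0):
    rng = np.random.default_rng(seed)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    bags = []
    for _ in range(n_bags):
        # Under-sample the majority class so each bag is balanced.
        neg_s = rng.choice(neg, size=len(pos), replace=True)
        idx = np.concatenate([pos, neg_s])
        clf = GradientBoostingClassifier(n_estimators=50)
        bags.append(clf.fit(X[idx], y[idx]))
    return bags

def predict_proba_bbt(bags, X):
    # Average the positive-class probability across all bagged boosters.
    return np.mean([b.predict_proba(X)[:, 1] for b in bags], axis=0)

X = np.random.default_rng(1).normal(size=(300, 5))
y = (np.random.default_rng(2).random(300) < 0.1).astype(int)
bags = fit_bbt(X, y)
print(predict_proba_bbt(bags, X[:5]))
```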
Tensor Decomposition for Signal Processing and Machine Learning | stat.ML cs.LG cs.NA math.NA | Tensors or {\em multi-way arrays} are functions of three or more indices
$(i,j,k,\cdots)$ -- similar to matrices (two-way arrays), which are functions
of two indices $(r,c)$ for (row,column). Tensors have a rich history,
stretching over almost a century, and touching upon numerous disciplines; but
they have only recently become ubiquitous in signal and data analytics at the
confluence of signal processing, statistics, data mining and machine learning.
This overview article aims to provide a good starting point for researchers and
practitioners interested in learning about and working with tensors. As such,
it focuses on fundamentals and motivation (using various application examples),
aiming to strike an appropriate balance of breadth {\em and depth} that will
enable someone having taken first graduate courses in matrix algebra and
probability to get started doing research and/or developing tensor algorithms
and software. Some background in applied optimization is useful but not
strictly required. The material covered includes tensor rank and rank
decomposition; basic tensor factorization models and their relationships and
properties (including fairly good coverage of identifiability); broad coverage
of algorithms ranging from alternating optimization to stochastic gradient;
statistical performance analysis; and applications ranging from source
separation to collaborative filtering, mixture and topic modeling,
classification, and multilinear subspace learning.
| Nicholas D. Sidiropoulos, Lieven De Lathauwer, Xiao Fu, Kejun Huang,
Evangelos E. Papalexakis, Christos Faloutsos | 10.1109/TSP.2017.2690524 | 1607.01668 | null | null |
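As a concrete example of the material surveyed above, a minimal NumPy sketch of rank-R CP decomposition of a 3-way tensor by alternating least squares (ALS), written with einsum in place of an explicit Khatri-Rao product. Sizes and iteration counts are illustrative.

```python
import numpy as np

def cp_als(T, R, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.normal(size=(I, R))
    B = rng.normal(size=(J, R))
    C = rng.normal(size=(K, R))
    for _ in range(n_iter):
        # Update each factor with the other two fixed (least squares per mode).
        A = np.einsum('ijk,jr,kr->ir', T, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', T, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', T, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Sanity check on an exactly rank-2 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.normal(size=(s, 2)) for s in (5, 6, 7))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, R=2)
print(np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(T))
```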
A New Hierarchical Redundancy Eliminated Tree Augmented Naive Bayes
Classifier for Coping with Gene Ontology-based Features | cs.LG cs.AI | The Tree Augmented Naive Bayes classifier is a type of probabilistic
graphical model that can represent some feature dependencies. In this work, we
propose a Hierarchical Redundancy Eliminated Tree Augmented Naive Bayes
(HRE-TAN) algorithm, which considers removing the hierarchical redundancy
during the classifier learning process, when coping with data containing
hierarchically structured features. The experiments showed that HRE-TAN obtains
significantly better predictive performance than the conventional Tree
Augmented Naive Bayes classifier, and enhances robustness against
imbalanced class distributions, in aging-related gene datasets with Gene
Ontology terms used as features.
| Cen Wan and Alex A. Freitas | null | 1607.01690 | null | null |
Deep CORAL: Correlation Alignment for Deep Domain Adaptation | cs.CV cs.AI cs.LG cs.NE | Deep neural networks are able to learn powerful representations from large
quantities of labeled input data; however, they cannot always generalize well
across changes in input distributions. Domain adaptation algorithms have been
proposed to compensate for the degradation in performance due to domain shift.
In this paper, we address the case when the target domain is unlabeled,
requiring unsupervised adaptation. CORAL is a "frustratingly easy" unsupervised
domain adaptation method that aligns the second-order statistics of the source
and target distributions with a linear transformation. Here, we extend CORAL to
learn a nonlinear transformation that aligns correlations of layer activations
in deep neural networks (Deep CORAL). Experiments on standard benchmark
datasets show state-of-the-art performance.
| Baochen Sun, Kate Saenko | null | 1607.01719 | null | null |
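A minimal NumPy sketch of the CORAL distance this work builds on: the squared Frobenius distance between source and target feature covariances, with the customary 1/(4d^2) scaling. Deep CORAL uses this quantity as a differentiable loss on layer activations; the arrays below are synthetic stand-ins for those activations.

```python
import numpy as np

def coral_loss(Ds, Dt):
    """Ds, Dt: (n_s, d) and (n_t, d) matrices of layer activations."""
    d = Ds.shape[1]
    Cs = np.cov(Ds, rowvar=False)      # source covariance
    Ct = np.cov(Dt, rowvar=False)      # target covariance
    return np.sum((Cs - Ct) ** 2) / (4 * d * d)

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(256, 64))
target = rng.normal(0.5, 2.0, size=(256, 64))
print(coral_loss(source, target))      # large for mismatched second-order stats
```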
Finding Significant Fourier Coefficients: Clarifications,
Simplifications, Applications and Limitations | cs.CR cs.DS cs.LG | Ideas from Fourier analysis have been used in cryptography for the last three
decades. Akavia, Goldwasser and Safra unified some of these ideas to give a
complete algorithm that finds significant Fourier coefficients of functions on
any finite abelian group. Their algorithm stimulated a lot of interest in the
cryptography community, especially in the context of `bit security'. This
manuscript attempts to be a friendly and comprehensive guide to the tools and
results in this field. The intended readership is cryptographers who have heard
about these tools and seek an understanding of their mechanics and their
usefulness and limitations. A compact overview of the algorithm is presented
with emphasis on the ideas behind it. We show how these ideas can be extended
to a `modulus-switching' variant of the algorithm. We survey some applications
of this algorithm, and explain that several results should be taken in the
right context. In particular, we point out that some of the most important bit
security problems are still open. Our original contributions include: a
discussion of the limitations on the usefulness of these tools; an answer to an
open question about the modular inversion hidden number problem.
| Steven D. Galbraith, Joel Laity and Barak Shani | null | 1607.01842 | null | null |
Stock trend prediction using news sentiment analysis | cs.CL cs.IR cs.LG | The Efficient Market Hypothesis is the popular theory about stock prediction.
With its failure, much research has been carried out in the area of stock
prediction. This project is about taking non-quantifiable data such as financial
news articles about a company and predicting its future stock trend with news
sentiment classification. Assuming that news articles have an impact on the stock
market, this is an attempt to study the relationship between news and stock trends.
To show this, we created three different classification models which depict
polarity of news articles as positive or negative. Observations show that random
forest (RF) and support vector machine (SVM) classifiers perform well in all types
of testing, while Na\"ive Bayes gives good results but falls short of the other
two. Experiments are conducted to evaluate
various aspects of the proposed model and encouraging results are obtained in
all of the experiments. The accuracy of the prediction model is more than 80%;
compared with random labeling of news (50% accuracy), the model increases the
accuracy by 30 percentage points.
| Joshi Kalyani, Prof. H. N. Bharathi, Prof. Rao Jyothi | null | 1607.01958 | null | null |
Sequence Training and Adaptation of Highway Deep Neural Networks | cs.CL cs.LG cs.NE | Highway deep neural network (HDNN) is a type of depth-gated feedforward
neural network, which has been shown to be easier to train with more hidden layers
and also generalise better compared to conventional plain deep neural networks
(DNNs). Previously, we investigated a structured HDNN architecture for speech
recognition, in which the two gate functions were tied across all the hidden
layers, and we were able to train a much smaller model without sacrificing the
recognition accuracy. In this paper, we carry on the study of this architecture
with sequence-discriminative training criterion and speaker adaptation
techniques on the AMI meeting speech recognition corpus. We show that these two
techniques improve speech recognition accuracy on top of the model trained with
the cross entropy criterion. Furthermore, we demonstrate that the two gate
functions that are tied across all the hidden layers are able to control the
information flow over the whole network, and we can achieve considerable
improvements by only updating these gate functions in both sequence training
and adaptation experiments.
| Liang Lu | null | 1607.01963 | null | null |
Nesterov's Accelerated Gradient and Momentum as approximations to
Regularised Update Descent | stat.ML cs.LG | We present a unifying framework for adapting the update direction in
gradient-based iterative optimization methods. As natural special cases we
re-derive classical momentum and Nesterov's accelerated gradient method,
lending a new intuitive interpretation to the latter algorithm. We show that a
new algorithm, which we term Regularised Gradient Descent, can converge more
quickly than either Nesterov's algorithm or the classical momentum algorithm.
| Aleksandar Botev, Guy Lever, David Barber | null | 1607.01981 | null | null |
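For reference, a minimal sketch of the two classical updates the paper re-derives, shown on a toy quadratic objective with illustrative step sizes; the only difference is where the gradient is evaluated.

```python
import numpy as np

def grad(w):                            # toy objective 0.5 * ||w||^2
    return w

def momentum(w, steps=100, lr=0.1, mu=0.9):
    v = np.zeros_like(w)
    for _ in range(steps):
        v = mu * v - lr * grad(w)              # classical momentum
        w = w + v
    return w

def nesterov(w, steps=100, lr=0.1, mu=0.9):
    v = np.zeros_like(w)
    for _ in range(steps):
        v = mu * v - lr * grad(w + mu * v)     # gradient at the look-ahead point
        w = w + v
    return w

w0 = np.array([5.0, -3.0])
print(momentum(w0.copy()), nesterov(w0.copy()))   # both approach the optimum 0
```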
Mini-Batch Spectral Clustering | stat.ML cs.LG | The cost of computing the spectrum of Laplacian matrices hinders the
application of spectral clustering to large data sets. While approximations
recover computational tractability, they can potentially affect clustering
performance. This paper proposes a practical approach to learn spectral
clustering based on adaptive stochastic gradient optimization. Crucially, the
proposed approach recovers the exact spectrum of Laplacian matrices in the
limit of the iterations, and the cost of each iteration is linear in the number
of samples. Extensive experimental validation on data sets with up to half a
million samples demonstrate its scalability and its ability to outperform
state-of-the-art approximate methods to learn spectral clustering for a given
computational budget.
| Yufei Han, Maurizio Filippone | null | 1607.02024 | null | null |
Artificial neural networks and fuzzy logic for recognizing alphabet
characters and mathematical symbols | cs.NE cs.LG | Optical Character Recognition (OCR) software packages are important tools for
obtaining accessible texts. We propose the use of artificial neural networks
(ANN) in order to develop pattern recognition algorithms capable of recognizing
both normal texts and formulae. We present an original improvement of the
backpropagation algorithm. Moreover, we describe a novel image segmentation
algorithm that exploits fuzzy logic for separating touching characters.
| Giuseppe Air\`o Farulla, Tiziana Armano, Anna Capietto, Nadir Murru,
Rosaria Rossini | 10.1007/978-3-319-41264-1_1 | 1607.02028 | null | null |
DeepChrome: Deep-learning for predicting gene expression from histone
modifications | cs.LG q-bio.GN | Motivation: Histone modifications are among the most important factors that
control gene regulation. Computational methods that predict gene expression
from histone modification signals are highly desirable for understanding their
combinatorial effects in gene regulation. This knowledge can help in developing
'epigenetic drugs' for diseases like cancer. Previous studies for quantifying
the relationship between histone modifications and gene expression levels
either failed to capture combinatorial effects or relied on multiple methods
that separate predictions and combinatorial analysis. This paper develops a
unified discriminative framework using a deep convolutional neural network to
classify gene expression using histone modification data as input. Our system,
called DeepChrome, allows automatic extraction of complex interactions among
important features. To simultaneously visualize the combinatorial interactions
among histone modifications, we propose a novel optimization-based technique
that generates feature pattern maps from the learnt deep model. This provides
an intuitive description of underlying epigenetic mechanisms that regulate
genes. Results: We show that DeepChrome outperforms state-of-the-art models
like Support Vector Machines and Random Forests for gene expression
classification task on 56 different cell-types from REMC database. The output
of our visualization technique not only validates the previous observations but
also allows novel insights about combinatorial interactions among histone
modification marks, some of which have recently been observed by experimental
studies.
| Ritambhara Singh, Jack Lanchantin, Gabriel Robins, and Yanjun Qi | null | 1607.02078 | null | null |
Single-Channel Multi-Speaker Separation using Deep Clustering | cs.LG cs.SD stat.ML | Deep clustering is a recently introduced deep learning architecture that uses
discriminatively trained embeddings as the basis for clustering. It was
recently applied to spectrogram segmentation, resulting in impressive results
on speaker-independent multi-speaker separation. In this paper we extend the
baseline system with an end-to-end signal approximation objective that greatly
improves performance on a challenging speech separation task. We first significantly
improve upon the baseline system performance by incorporating better
regularization, larger temporal context, and a deeper architecture, culminating
in an overall improvement in signal to distortion ratio (SDR) of 10.3 dB
compared to the baseline of 6.0 dB for two-speaker separation, as well as a 7.1
dB SDR improvement for three-speaker separation. We then extend the model to
incorporate an enhancement layer to refine the signal estimates, and perform
end-to-end training through both the clustering and enhancement stages to
maximize signal fidelity. We evaluate the results using automatic speech
recognition. The new signal approximation objective, combined with end-to-end
training, produces unprecedented performance, reducing the word error rate
(WER) from 89.1% down to 30.8%. This represents a major advancement towards
solving the cocktail party problem.
| Yusuf Isik, Jonathan Le Roux, Zhuo Chen, Shinji Watanabe, John R.
Hershey | null | 1607.02173 | null | null |
Applying Deep Learning to the Newsvendor Problem | cs.LG | The newsvendor problem is one of the most basic and widely applied inventory
models. There are numerous extensions of this problem. If the probability
distribution of the demand is known, the problem can be solved analytically.
However, approximating the probability distribution is not easy and is prone
to error; therefore, the resulting solution to the newsvendor problem may not be
optimal. To address this issue, we propose an algorithm based on deep
learning that optimizes the order quantities for all products based on features
of the demand data. Our algorithm integrates the forecasting and
inventory-optimization steps, rather than solving them separately, as is
typically done, and does not require knowledge of the probability distributions
of the demand. Numerical experiments on real-world data suggest that our
algorithm outperforms other approaches, including data-driven and machine
learning approaches, especially for demands with high volatility. Finally, in
order to show how this approach can be used for other inventory optimization
problems, we provide an extension for (r,Q) policies.
| Afshin Oroojlooyjadid and Lawrence Snyder and Martin Tak\'a\v{c} | null | 1607.02177 | null | null |
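A minimal sketch (illustrative; not the paper's network) of the underlying idea: the newsvendor cost is an asymmetric pinball-type loss, so the data-driven optimal order quantity is the critical-fractile quantile of the observed demand, with no distribution fitting required. Cost values and the demand distribution are hypothetical.

```python
import numpy as np

cp, ch = 4.0, 1.0                        # per-unit shortage and holding costs

def newsvendor_cost(q, demand):
    return np.mean(cp * np.maximum(demand - q, 0) +
                   ch * np.maximum(q - demand, 0))

rng = np.random.default_rng(0)
demand = rng.gamma(shape=2.0, scale=50.0, size=10_000)

# Data-driven optimum: the cp/(cp+ch) empirical quantile of demand.
q_star = np.quantile(demand, cp / (cp + ch))
print(q_star, newsvendor_cost(q_star, demand))
```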
Overcoming Challenges in Fixed Point Training of Deep Convolutional
Networks | cs.LG cs.CV | It is known that training deep neural networks, in particular, deep
convolutional networks, with aggressively reduced numerical precision is
challenging. The stochastic gradient descent algorithm becomes unstable in the
presence of noisy gradient updates resulting from arithmetic with limited
numeric precision. One of the well-accepted solutions facilitating the training
of low precision fixed point networks is stochastic rounding. However, to the
best of our knowledge, the source of the instability in training neural
networks with noisy gradient updates has not been well investigated. This work
is an attempt to draw a theoretical connection between low numerical precision
and training algorithm stability. In doing so, we will also propose and verify
through experiments methods that are able to improve the training performance
of deep convolutional networks in fixed point.
| Darryl D. Lin and Sachin S. Talathi | null | 1607.02241 | null | null |
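A minimal NumPy sketch of stochastic rounding, the widely adopted remedy discussed above: each value is rounded up with probability equal to its fractional distance to the lower grid point, making the rounding unbiased in expectation.

```python
import numpy as np

def stochastic_round(x, frac_bits=8, rng=np.random.default_rng(0)):
    scale = 2.0 ** frac_bits
    scaled = x * scale
    floor = np.floor(scaled)
    prob_up = scaled - floor                  # fractional part in [0, 1)
    return (floor + (rng.random(x.shape) < prob_up)) / scale

x = np.array([0.1, 0.2, 0.3])
rounded = np.stack([stochastic_round(x, frac_bits=2) for _ in range(10_000)])
print(rounded.mean(axis=0))                   # approximately recovers x
```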
CNN-LTE: a Class of 1-X Pooling Convolutional Neural Networks on Label
Tree Embeddings for Audio Scene Recognition | cs.NE cs.CV cs.LG cs.MM cs.SD | We describe in this report our audio scene recognition system submitted to
the DCASE 2016 challenge. Firstly, given the label set of the scenes, a label
tree is automatically constructed. This category taxonomy is then used in the
feature extraction step in which an audio scene instance is represented by a
label tree embedding image. Different convolutional neural networks, which are
tailored for the task at hand, are finally learned on top of the image features
for scene recognition. Our system reaches an overall recognition accuracy of
81.2% and 83.3% and outperforms the DCASE 2016 baseline with absolute
improvements of 8.7% and 6.1% on the development and test data, respectively.
| Huy Phan, Lars Hertel, Marco Maass, Philipp Koch, Alfred Mertins | null | 1607.02303 | null | null |
CaR-FOREST: Joint Classification-Regression Decision Forests for
Overlapping Audio Event Detection | cs.SD cs.AI cs.LG cs.MM | This report describes our submissions to Task2 and Task3 of the DCASE 2016
challenge. The systems aim at dealing with the detection of overlapping audio
events in continuous streams, where the detectors are based on random decision
forests. The proposed forests are jointly trained for classification and
regression simultaneously. Initially, the training is classification-oriented
to encourage the trees to select discriminative features from overlapping
mixtures to separate positive audio segments from the negative ones. The
regression phase is then carried out to let the positive audio segments vote
for the event onsets and offsets, and therefore model the temporal structure of
audio events. One random decision forest is specifically trained for each event
category of interest. Experimental results on the development data show that
our systems significantly outperform the baseline on the Task2 evaluation while
they are inferior to the baseline in the Task3 evaluation.
| Huy Phan, Lars Hertel, Marco Maass, Philipp Koch, Alfred Mertins | null | 1607.02306 | null | null |
Collaborative Training of Tensors for Compositional Distributional
Semantics | cs.CL cs.LG | Type-based compositional distributional semantic models present an
interesting line of research into functional representations of linguistic
meaning. One of the drawbacks of such models, however, is the lack of training
data required to train each word-type combination. In this paper we address
this by introducing training methods that share parameters between similar
words. We show that these methods enable zero-shot learning for words that have
no training data at all, as well as enabling construction of high-quality
tensors from very few training examples per word.
| Tamara Polajnar | null | 1607.02310 | null | null |
Watch This: Scalable Cost-Function Learning for Path Planning in Urban
Environments | cs.RO cs.LG | In this work, we present an approach to learn cost maps for driving in
complex urban environments from a very large number of demonstrations of
driving behaviour by human experts. The learned cost maps are constructed
directly from raw sensor measurements, bypassing the effort of manually
designing cost maps as well as features. When deploying the learned cost maps,
the trajectories generated not only replicate human-like driving behaviour but
are also demonstrably robust against systematic errors in putative robot
configuration. To achieve this we deploy a Maximum Entropy based, non-linear
IRL framework which uses Fully Convolutional Neural Networks (FCNs) to
represent the cost model underlying expert driving behaviour. Using a deep,
parametric approach enables us to scale efficiently to large datasets and
complex behaviours by being run-time independent of dataset extent during
deployment. We demonstrate the scalability and the performance of the proposed
approach on an ambitious dataset collected over the course of one year
including more than 25k demonstration trajectories extracted from over 120km of
driving around pedestrianised areas in the city of Milton Keynes, UK. We
evaluate the resulting cost representations by showing the advantages over a
carefully manually designed cost map and, in addition, demonstrate its
robustness to systematic errors by learning precise cost-maps even in the
presence of system calibration perturbations.
| Markus Wulfmeier, Dominic Zeng Wang and Ingmar Posner | null | 1607.02329 | null | null |
Lower Bounds on Active Learning for Graphical Model Selection | cs.IT cs.LG cs.SI math.IT stat.ML | We consider the problem of estimating the underlying graph associated with a
Markov random field, with the added twist that the decoding algorithm can
iteratively choose which subsets of nodes to sample based on the previous
samples, resulting in an active learning setting. Considering both Ising and
Gaussian models, we provide algorithm-independent lower bounds for
high-probability recovery within the class of degree-bounded graphs. Our main
results are minimax lower bounds for the active setting that match the best
known lower bounds for the passive setting, which in turn are known to be tight
in several cases of interest. Our analysis is based on Fano's inequality, along
with novel mutual information bounds for the active learning setting, and the
application of restricted graph ensembles. While we consider ensembles that are
similar or identical to those used in the passive setting, we require different
analysis techniques, with a key challenge being bounding a mutual information
quantity associated with observed subsets of nodes, as opposed to full
observations.
| Jonathan Scarlett and Volkan Cevher | null | 1607.02413 | null | null |
Explaining Deep Convolutional Neural Networks on Music Classification | cs.LG cs.AI cs.MM cs.SD | Deep convolutional neural networks (CNNs) have been actively adopted in the
field of music information retrieval, e.g. genre classification, mood
detection, and chord recognition. However, the process of learning and
prediction is little understood, particularly when it is applied to
spectrograms. We introduce auralisation of a CNN to understand its underlying
mechanism, which is based on a deconvolution procedure introduced in [2].
Auralisation of a CNN converts the learned convolutional features obtained from
deconvolution into audio signals. In the experiments and
discussions, we explain trained features of a 5-layer CNN based on the
deconvolved spectrograms and auralised signals. The pairwise correlations per
layers with varying different musical attributes are also investigated to
understand the evolution of the learnt features. It is shown that in the deep
layers, the features are learnt to capture textures, the patterns of continuous
distributions, rather than shapes of lines.
| Keunwoo Choi, George Fazekas, Mark Sandler | null | 1607.02444 | null | null |
Log-Linear RNNs: Towards Recurrent Neural Networks with Flexible Prior
Knowledge | cs.AI cs.CL cs.LG cs.NE | We introduce LL-RNNs (Log-Linear RNNs), an extension of Recurrent Neural
Networks that replaces the softmax output layer by a log-linear output layer,
of which the softmax is a special case. This conceptually simple move has two
main advantages. First, it allows the learner to combat training data sparsity
by allowing it to model words (or more generally, output symbols) as complex
combinations of attributes without requiring that each combination is directly
observed in the training data (as the softmax does). Second, it permits the
inclusion of flexible prior knowledge in the form of a priori specified modular
features, where the neural network component learns to dynamically control the
weights of a log-linear distribution exploiting these features.
We conduct experiments in the domain of language modelling of French that
exploit morphological prior knowledge, and show a substantial decrease in
perplexity relative to a baseline RNN.
We provide other motivating illustrations, and finally argue that the
log-linear and the neural-network components contribute complementary strengths
to the LL-RNN: the LL aspect allows the model to incorporate rich prior
knowledge, while the NN aspect, according to the "representation learning"
paradigm, allows the model to discover novel combination of characteristics.
| Marc Dymetman, Chunyang Xiao | null | 1607.02467 | null | null |
Adjusting for Dropout Variance in Batch Normalization and Weight
Initialization | cs.LG cs.NE | We show how to adjust for the variance introduced by dropout with corrections
to weight initialization and Batch Normalization, yielding higher accuracy.
Though dropout can preserve the expected input to a neuron between train and
test, the variance of the input differs. We thus propose a new weight
initialization by correcting for the influence of dropout rates and an
arbitrary nonlinearity's influence on variance through simple corrective
scalars. Since Batch Normalization trained with dropout estimates the variance
of a layer's incoming distribution with some inputs dropped, the variance also
differs between train and test. After training a network with Batch
Normalization and dropout, we simply update Batch Normalization's variance
moving averages with dropout off and obtain state of the art on CIFAR-10 and
CIFAR-100 without data augmentation.
| Dan Hendrycks and Kevin Gimpel | null | 1607.02488 | null | null |
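A minimal sketch of the flavor of correction proposed (the exact corrective scalars are derived in the paper): widening a He-style initialization by the dropout keep probability so the pre-activation variance is comparable whether or not units are dropped. The keep probability and layer sizes are illustrative.

```python
import numpy as np

def init_weights(fan_in, fan_out, keep_prob=0.8, rng=np.random.default_rng(0)):
    # He initialization variance 2/fan_in, widened by 1/keep_prob to offset
    # the variance introduced by dropping inputs during training.
    std = np.sqrt(2.0 / (fan_in * keep_prob))
    return rng.normal(0.0, std, size=(fan_in, fan_out))

W = init_weights(512, 256, keep_prob=0.5)
print(W.std())   # approximately sqrt(2 / (512 * 0.5))
```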
Proceedings of the 2016 ICML Workshop on Human Interpretability in
Machine Learning (WHI 2016) | stat.ML cs.LG | This is the Proceedings of the 2016 ICML Workshop on Human Interpretability
in Machine Learning (WHI 2016), which was held in New York, NY, June 23, 2016.
Invited speakers were Susan Athey, Rich Caruana, Jacob Feldman, Percy Liang,
and Hanna Wallach.
| Been Kim, Dmitry M. Malioutov, Kush R. Varshney | null | 1607.02531 | null | null |
Adversarial examples in the physical world | cs.CV cs.CR cs.LG stat.ML | Most existing machine learning classifiers are highly vulnerable to
adversarial examples. An adversarial example is a sample of input data which
has been modified very slightly in a way that is intended to cause a machine
learning classifier to misclassify it. In many cases, these modifications can
be so subtle that a human observer does not even notice the modification at
all, yet the classifier still makes a mistake. Adversarial examples pose
security concerns because they could be used to perform an attack on machine
learning systems, even if the adversary has no access to the underlying model.
Up to now, all previous work has assumed a threat model in which the adversary
can feed data directly into the machine learning classifier. This is not always
the case for systems operating in the physical world, for example those which
are using signals from cameras and other sensors as an input. This paper shows
that even in such physical world scenarios, machine learning systems are
vulnerable to adversarial examples. We demonstrate this by feeding adversarial
images obtained from a cell-phone camera to an ImageNet Inception classifier and
measuring the classification accuracy of the system. We find that a large
fraction of adversarial examples are classified incorrectly even when perceived
through the camera.
| Alexey Kurakin, Ian Goodfellow and Samy Bengio | null | 1607.02533 | null | null |
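A minimal sketch of the fast gradient sign method (FGSM), the basic attack whose printed-and-photographed variants this paper studies, shown here on a linear logistic model so the input gradient is available in closed form. Weights, inputs, and epsilon are synthetic.

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """Perturb x by eps in the sign of the loss gradient w.r.t. the input."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # P(y=1 | x)
    grad_x = (p - y) * w                      # d(cross-entropy)/dx for sigmoid
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w, b = rng.normal(size=10), 0.0
x, y = rng.normal(size=10), 1
x_adv = fgsm(x, y, w, b, eps=0.1)
print("clean score:", w @ x + b, " adversarial score:", w @ x_adv + b)
```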
Learning from Multiway Data: Simple and Efficient Tensor Regression | cs.LG | Tensor regression has shown to be advantageous in learning tasks with
multi-directional relatedness. Given massive multiway data, traditional methods
are often too slow to run or suffer from memory bottlenecks. In this
paper, we introduce subsampled tensor projected gradient to solve the problem.
Our algorithm is impressively simple and efficient. It is built upon projected
gradient method with fast tensor power iterations, leveraging randomized
sketching for further acceleration. Theoretical analysis shows that our
algorithm converges to the correct solution in fixed number of iterations. The
memory requirement grows linearly with the size of the problem. We demonstrate
superior empirical performance on both multi-linear multi-task learning and
spatio-temporal applications.
| Rose Yu, Yan Liu | null | 1607.02535 | null | null |
Online Learning Schemes for Power Allocation in Energy Harvesting
Communications | cs.LG | We consider the problem of power allocation over a time-varying channel with
unknown distribution in energy harvesting communication systems. In this
problem, the transmitter has to choose the transmit power based on the amount
of stored energy in its battery with the goal of maximizing the average rate
obtained over time. We model this problem as a Markov decision process (MDP)
with the transmitter as the agent, the battery status as the state, the
transmit power as the action and the rate obtained as the reward. The average
reward maximization problem over the MDP can be solved by a linear program (LP)
that uses the transition probabilities for the state-action pairs and their
reward values to choose a power allocation policy. Since the rewards associated
with the state-action pairs are unknown, we propose two online learning algorithms:
UCLP and Epoch-UCLP that learn these rewards and adapt their policies along the
way. The UCLP algorithm solves the LP at each step to decide its current policy
using the upper confidence bounds on the rewards, while the Epoch-UCLP
algorithm divides the time into epochs, solves the LP only at the beginning of
the epochs and follows the obtained policy in that epoch. We prove that the
reward losses or regrets incurred by both these algorithms are upper bounded by
constants. Epoch-UCLP incurs a higher regret compared to UCLP, but reduces the
computational requirements substantially. We also show that the presented
algorithms work for online learning in cost minimization problems like the
packet scheduling with a power-delay tradeoff, with minor changes.
| Pranav Sakulkar and Bhaskar Krishnamachari | null | 1607.02552 | null | null |
Uncovering Locally Discriminative Structure for Feature Analysis | cs.LG | Manifold structure learning is often used to exploit geometric information
among data in semi-supervised feature learning algorithms. In this paper, we
find that local discriminative information is also of importance for
semi-supervised feature learning. We propose a method that utilizes both the
manifold structure of data and local discriminant information. Specifically, we
define a local clique for each data point. The k-Nearest Neighbors (kNN) algorithm is
used to determine the structural information within each clique. We then employ
a variant of Fisher criterion model to each clique for local discriminant
evaluation and sum all cliques as global integration into the framework. In
this way, local discriminant information is embedded. Labels are also utilized
to minimize distances between data from the same class. In addition, we use the
kernel method to extend our proposed model and facilitate feature learning in a
high-dimensional space after feature mapping. Experimental results show that
our method is superior to all other compared methods over a number of datasets.
| Sen Wang and Feiping Nie and Xiaojun Chang and Xue Li and Quan Z.
Sheng and Lina Yao | null | 1607.02559 | null | null |
Visual Dynamics: Probabilistic Future Frame Synthesis via Cross
Convolutional Networks | cs.CV cs.LG | We study the problem of synthesizing a number of likely future frames from a
single input image. In contrast to traditional methods, which have tackled this
problem in a deterministic or non-parametric way, we propose a novel approach
that models future frames in a probabilistic manner. Our probabilistic model
makes it possible for us to sample and synthesize many possible future frames
from a single input image. Future frame synthesis is challenging, as it
involves low- and high-level image and motion understanding. We propose a novel
network structure, namely a Cross Convolutional Network to aid in synthesizing
future frames; this network structure encodes image and motion information as
feature maps and convolutional kernels, respectively. In experiments, our model
performs well on synthetic data, such as 2D shapes and animated game sprites,
as well as on real-world videos. We also show that our model can be applied to
tasks such as visual analogy-making, and present an analysis of the learned
network representations.
| Tianfan Xue, Jiajun Wu, Katherine L. Bouman, William T. Freeman | null | 1607.02586 | null | null |
Classifier Risk Estimation under Limited Labeling Resources | cs.LG stat.AP stat.ML | In this paper we propose strategies for estimating performance of a
classifier when labels cannot be obtained for the whole test set. The number of
test instances which can be labeled is very small compared to the whole test
data size. The goal then is to obtain a precise estimate of classifier
performance using as little labeling resource as possible. Specifically, we try
to answer, how to select a subset of the large test set for labeling such that
the performance of a classifier estimated on this subset is as close as
possible to the one on the whole test set. We propose strategies based on
stratified sampling for selecting this subset. We show that these strategies
can reduce the variance in estimation of classifier accuracy by a significant
amount compared to simple random sampling (over 65% in several cases). Hence,
our proposed methods are much more precise compared to random sampling for
accuracy estimation under restricted labeling resources. The reduction in
the number of samples required (compared to random sampling) to estimate
the classifier accuracy to within 1% error is as high as 60% in some cases.
| Anurag Kumar, Bhiksha Raj | null | 1607.02665 | null | null |
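The stratified estimator above can be sketched as follows: stratify the unlabeled test set (here by predicted class, one plausible choice of strata), label a proportional sample from each stratum, and combine the per-stratum accuracies with stratum weights. The names and the stratification variable are assumptions.

```python
# Sketch: stratified sampling for accuracy estimation. Strata are the
# predicted classes (an assumed choice); get_label plays the labeler.
import numpy as np

def stratified_accuracy_estimate(preds, get_label, budget, seed=0):
    rng = np.random.default_rng(seed)
    total, acc = len(preds), 0.0
    for c in np.unique(preds):
        idx = np.flatnonzero(preds == c)
        n_c = min(len(idx), max(1, round(budget * len(idx) / total)))
        sample = rng.choice(idx, size=n_c, replace=False)
        correct = np.mean([get_label(i) == preds[i] for i in sample])
        acc += (len(idx) / total) * correct   # stratum-weighted accuracy
    return acc

# Toy usage: ground-truth labels stand in for a human labeler.
true = np.random.randint(0, 3, size=1000)
preds = np.where(np.random.rand(1000) < 0.8, true, (true + 1) % 3)
print(stratified_accuracy_estimate(preds, lambda i: true[i], budget=100))
```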
Dealing with Class Imbalance using Thresholding | cs.LG | We propose thresholding as an approach to deal with class imbalance. We
define the concept of thresholding as a process of determining a decision
boundary in the presence of a tunable parameter. The threshold is the maximum
value of this tunable parameter where the conditions of a certain decision are
satisfied. We show that thresholding is applicable not only for linear
classifiers but also for non-linear classifiers. We show that this is the
implicit assumption for many approaches to deal with class imbalance in linear
classifiers. We then extend this paradigm beyond linear classification and show
how non-linear classification can be dealt with under this umbrella framework
of thresholding. The proposed method can be used for outlier detection in many
real-life scenarios, such as manufacturing. In advanced manufacturing units,
where the manufacturing process has matured over time, instances (or parts)
of the product that need to be rejected (based on a strict regime of quality
tests) become relatively rare and are defined as outliers. How to detect
these rare parts or outliers beforehand? How to detect combinations of
conditions leading to these outliers? These are the questions motivating our
research. This paper focuses on prediction of outliers and conditions leading
to outliers using classification. We address the problem of outlier detection
using classification. The classes are good parts (those passing the quality
tests) and bad parts (those failing the quality tests and can be considered as
outliers). The rarity of outliers transforms this problem into a
class-imbalanced classification problem.
| Charmgil Hong, Rumi Ghosh, Soundar Srinivasan | null | 1607.02705 | null | null |
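A minimal sketch of the thresholding idea: treat the decision threshold on an outlier score as the tunable parameter and take the largest value for which a decision condition still holds. Here the condition is recall on the rare class and the scorer is a logistic regression; both are illustrative assumptions, not the paper's specification.

```python
# Sketch: the threshold as the largest tunable value for which a
# decision condition holds (assumed condition: recall on the rare
# class stays above a target).
import numpy as np
from sklearn.linear_model import LogisticRegression

def max_threshold_with_recall(scores, y, min_recall=0.9):
    best = 0.0
    for t in np.unique(scores):
        recall = (scores[y == 1] >= t).mean()
        if recall >= min_recall:
            best = max(best, t)
    return best

X = np.random.randn(2000, 5)
y = (np.random.rand(2000) < 0.05).astype(int)          # rare "bad parts"
scores = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
t = max_threshold_with_recall(scores, y)
print(f"threshold={t:.3f}, flagged fraction={np.mean(scores >= t):.3f}")
```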
How to Allocate Resources For Features Acquisition? | cs.AI cs.LG stat.ML | We study classification problems where features are corrupted by noise and
where the magnitude of the noise in each feature is influenced by the resources
allocated to its acquisition. This is the case, for example, when multiple
sensors share a common resource (power, bandwidth, attention, etc.). We develop
a method for computing the optimal resource allocation for a variety of
scenarios and derive theoretical bounds concerning the benefit that may arise
by non-uniform allocation. We further demonstrate the effectiveness of the
developed method in simulations.
| Oran Richman, Shie Mannor | null | 1607.02763 | null | null |
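One concrete instance of this setting, under an assumed noise model where feature i is observed with variance sigma_i^2 / r_i when given resources r_i: minimizing the importance-weighted total noise sum_i w_i sigma_i^2 / r_i subject to sum_i r_i = B gives r_i proportional to sigma_i sqrt(w_i) (from the Lagrangian condition w_i sigma_i^2 / r_i^2 = const). This closed form is an illustrative special case, not the paper's general method.

```python
# Sketch under an assumed noise model: feature i observed with variance
# sigma_i^2 / r_i given resources r_i; minimize sum_i w_i*sigma_i^2/r_i
# subject to sum_i r_i = B, solved by r_i ~ sigma_i * sqrt(w_i).
import numpy as np

def allocate_budget(sigma, w, budget):
    raw = sigma * np.sqrt(w)
    return budget * raw / raw.sum()

sigma = np.array([1.0, 0.5, 2.0])   # per-feature noise scales
w = np.array([1.0, 4.0, 1.0])       # per-feature importance weights
r = allocate_budget(sigma, w, budget=10.0)
print(r, (w * sigma**2 / r).sum())  # allocation and resulting total noise
```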
On Faster Convergence of Cyclic Block Coordinate Descent-type Methods
for Strongly Convex Minimization | math.OC cs.LG stat.ML | The cyclic block coordinate descent-type (CBCD-type) methods, which perform
iterative updates for a few coordinates (a block) simultaneously throughout the
procedure, have shown remarkable computational performance for solving strongly
convex minimization problems. Typical applications include many popular
statistical machine learning methods such as elastic-net regression, ridge
penalized logistic regression, and sparse additive regression. Existing
optimization literature has shown that for strongly convex minimization, the
CBCD-type methods attain iteration complexity of
$\mathcal{O}(p\log(1/\epsilon))$, where $\epsilon$ is a pre-specified accuracy
of the objective value, and $p$ is the number of blocks. However, such
iteration complexity explicitly depends on $p$, and therefore is at least $p$
times worse than the complexity $\mathcal{O}(\log(1/\epsilon))$ of gradient
descent (GD) methods. To bridge this theoretical gap, we propose an improved
convergence analysis for the CBCD-type methods. In particular, we first show
that for a family of quadratic minimization problems, the iteration complexity
$\mathcal{O}(\log^2(p)\cdot\log(1/\epsilon))$ of the CBCD-type methods matches
that of the GD methods in terms of the dependency on $p$, up to a $\log^2 p$ factor.
Thus our complexity bounds are sharper than the existing bounds by at least a
factor of $p/\log^2(p)$. We also provide a lower bound to confirm that our
improved complexity bounds are tight (up to a $\log^2 (p)$ factor), under the
assumption that the largest and smallest eigenvalues of the Hessian matrix do
not scale with $p$. Finally, we generalize our analysis to other strongly
convex minimization problems beyond quadratic ones.
| Xingguo Li, Tuo Zhao, Raman Arora, Han Liu, Mingyi Hong | null | 1607.02793 | null | null |
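A minimal sketch of a CBCD-type method on a strongly convex quadratic f(x) = (1/2) x^T A x - b^T x: cycle over coordinate blocks and minimize exactly over each block with the others fixed. The block size and problem data are illustrative.

```python
# Sketch: cyclic block coordinate descent on a strongly convex quadratic
# f(x) = 0.5 * x'Ax - b'x, minimizing exactly over one block per step.
import numpy as np

def cbcd_quadratic(A, b, block_size=2, sweeps=200):
    p = len(b)
    x = np.zeros(p)
    blocks = [np.arange(i, min(i + block_size, p))
              for i in range(0, p, block_size)]
    for _ in range(sweeps):
        for idx in blocks:
            rest = np.setdiff1d(np.arange(p), idx)
            rhs = b[idx] - A[np.ix_(idx, rest)] @ x[rest]  # fix other coords
            x[idx] = np.linalg.solve(A[np.ix_(idx, idx)], rhs)
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)          # strongly convex quadratic
b = rng.standard_normal(6)
print(np.allclose(cbcd_quadratic(A, b), np.linalg.solve(A, b), atol=1e-8))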
Tight Lower Bounds for Multiplicative Weights Algorithmic Families | cs.LG | We study the fundamental problem of prediction with expert advice and develop
regret lower bounds for a large family of algorithms for this problem. We
construct simple adversarial primitives that lend themselves to various
combinations leading to sharp lower bounds for many algorithmic families. We
use these primitives to show that the classic Multiplicative Weights Algorithm
(MWA) has a regret of $\sqrt{\frac{T \ln k}{2}}$, thereby completely closing
the gap between upper and lower bounds. We further show a regret lower bound of
$\frac{2}{3}\sqrt{\frac{T\ln k}{2}}$ for a much more general family of
algorithms than MWA, where the learning rate can be arbitrarily varied over
time, or even picked from arbitrary distributions over time. We also use our
primitives to construct adversaries in the geometric horizon setting for MWA to
precisely characterize the regret at $\frac{0.391}{\sqrt{\delta}}$ for the case
of $2$ experts and a lower bound of $\frac{1}{2}\sqrt{\frac{\ln k}{2\delta}}$
for the case of arbitrary number of experts $k$.
| Nick Gravin, Yuval Peres, Balasubramanian Sivan | null | 1607.02834 | null | null |
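For reference, a minimal sketch of the MWA under discussion, with losses in [0, 1] and a fixed learning rate eta; the regret against the best expert then scales like sqrt(T ln(k) / 2) for a suitable eta. The synthetic losses and tuning choice are for illustration only.

```python
# Sketch: Multiplicative Weights Algorithm with fixed learning rate eta
# and losses in [0, 1]; synthetic data for illustration only.
import numpy as np

def mwa_loss(losses, eta):
    T, k = losses.shape
    w = np.ones(k)
    total = 0.0
    for t in range(T):
        p = w / w.sum()                   # play distribution over experts
        total += p @ losses[t]
        w *= np.exp(-eta * losses[t])     # multiplicative update
    return total

rng = np.random.default_rng(0)
T, k = 10000, 8
losses = rng.random((T, k))
eta = np.sqrt(8 * np.log(k) / T)          # one standard tuning choice
regret = mwa_loss(losses, eta) - losses.sum(axis=0).min()
print(regret, np.sqrt(T * np.log(k) / 2))  # regret vs. sqrt(T ln(k)/2) scale
```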
Classifying Variable-Length Audio Files with All-Convolutional Networks
and Masked Global Pooling | cs.NE cs.LG cs.MM cs.SD | We trained a deep all-convolutional neural network with masked global pooling
to perform single-label classification for acoustic scene classification and
multi-label classification for domestic audio tagging in the DCASE-2016
contest. Our network achieved an average accuracy of 84.5% on the four-fold
cross-validation for acoustic scene recognition, compared to the provided
baseline of 72.5%, and an average equal error rate of 0.17 for domestic audio
tagging, compared to the baseline of 0.21. The network therefore improves on
the baselines by relative margins of 17% and 19%, respectively. The network only
consists of convolutional layers to extract features from the short-time
Fourier transform and one global pooling layer to combine those features. In
particular, it contains neither fully-connected layers (apart from the
fully-connected output layer) nor dropout layers.
| Lars Hertel, Huy Phan, Alfred Mertins | null | 1607.02857 | null | null |
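The masked global pooling step can be sketched as follows: average the convolutional features over time, but only over the valid frames of each variable-length input. This PyTorch snippet assumes (batch, channels, time) features and is not the authors' code.

```python
# Sketch: masked global average pooling over variable-length inputs.
# feats: (batch, channels, time); lengths: valid frames per example.
import torch

def masked_global_avg_pool(feats, lengths):
    B, C, T = feats.shape
    mask = (torch.arange(T)[None, :] < lengths[:, None]).float()  # (B, T)
    summed = (feats * mask[:, None, :]).sum(dim=2)
    return summed / lengths[:, None].float()       # (B, C) pooled features

feats = torch.randn(3, 16, 100)
lengths = torch.tensor([100, 60, 37])
print(masked_global_avg_pool(feats, lengths).shape)  # torch.Size([3, 16])
```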
Incremental Factorization Machines for Persistently Cold-starting Online
Item Recommendation | cs.LG cs.IR | Real-world item recommenders commonly suffer from a persistent cold-start
problem which is caused by dynamically changing users and items. In order to
overcome the problem, several context-aware recommendation techniques have been
recently proposed. In terms of both feasibility and performance, factorization
machine (FM) is one of the most promising methods as generalization of the
conventional matrix factorization techniques. However, standard FMs are
static models and hence inadequate for such dynamic data, which calls for an
online algorithm. This paper therefore proposes incremental FMs (iFMs), a
general online factorization framework, and further extends iFMs into an
online item recommender. The
proposed framework can be a promising baseline for further development of the
production recommender systems. Evaluation is performed empirically on both
synthetic and real-world unstable datasets.
| Takuya Kitazawa | null | 1607.02858 | null | null |
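A minimal sketch of the building block behind an incremental FM: the standard second-order FM prediction together with a per-example SGD update, so the model can be refreshed as new interactions stream in. The hyperparameters and squared-error objective are assumptions.

```python
# Sketch: a second-order factorization machine with a per-example SGD
# step, the basic building block of an incremental FM.
import numpy as np

class TinyFM:
    def __init__(self, n_features, n_factors=4, lr=0.01, reg=1e-4, seed=0):
        rng = np.random.default_rng(seed)
        self.w0, self.w = 0.0, np.zeros(n_features)
        self.V = 0.01 * rng.standard_normal((n_features, n_factors))
        self.lr, self.reg = lr, reg

    def predict(self, x):
        s = self.V.T @ x                   # factor sums, shape (n_factors,)
        pair = 0.5 * np.sum(s ** 2 - (self.V ** 2).T @ (x ** 2))
        return self.w0 + self.w @ x + pair

    def update(self, x, y):                # one online SGD step
        err = self.predict(x) - y
        self.w0 -= self.lr * err
        self.w -= self.lr * (err * x + self.reg * self.w)
        s = self.V.T @ x
        grad_V = np.outer(x, s) - self.V * (x ** 2)[:, None]
        self.V -= self.lr * (err * grad_V + self.reg * self.V)

fm = TinyFM(n_features=10)
for _ in range(1000):
    x = np.random.rand(10)
    fm.update(x, y=x[0] * x[1])            # toy pairwise-interaction target
print(fm.predict(np.ones(10)))
```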
From Behavior to Sparse Graphical Games: Efficient Recovery of
Equilibria | cs.GT cs.LG stat.ML | In this paper we study the problem of exact recovery of the pure-strategy
Nash equilibria (PSNE) set of a graphical game from noisy observations of joint
actions of the players alone. We consider sparse linear influence games --- a
parametric class of graphical games with linear payoffs, and represented by
directed graphs of n nodes (players) and in-degree of at most k. We present an
$\ell_1$-regularized logistic regression based algorithm for recovering the
PSNE set exactly, that is both computationally efficient --- i.e. runs in
polynomial time --- and statistically efficient --- i.e. has logarithmic sample
complexity. Specifically, we show that the sufficient number of samples
required for exact PSNE recovery scales as $\mathcal{O}(\mathrm{poly}(k) \log
n)$. We also validate our theoretical results using synthetic experiments.
| Asish Ghoshal and Jean Honorio | null | 1607.02959 | null | null |
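The statistical core described above can be sketched as one l1-regularized logistic regression per player: regress a player's action on all other players' actions and read off the support. The scikit-learn estimator and the toy game below are illustrative stand-ins for the paper's estimator and its guarantees.

```python
# Sketch: neighborhood recovery via l1-regularized logistic regression,
# one regression per player; estimator and toy game are stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

def recover_neighbors(actions, player, C=0.1):
    # actions: (m, n) matrix of joint actions in {-1, +1}.
    y = (actions[:, player] + 1) // 2
    X = np.delete(actions, player, axis=1)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
    others = np.delete(np.arange(actions.shape[1]), player)
    return others[np.abs(clf.coef_[0]) > 1e-6]

rng = np.random.default_rng(0)
acts = rng.choice([-1, 1], size=(500, 6))
acts[:, 0] = np.sign(acts[:, 1] + acts[:, 2] + 0.1)   # player 0 follows 1, 2
print(recover_neighbors(acts, player=0))              # likely [1 2]
```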
Learning a metric for class-conditional KNN | cs.LG stat.ML | Naive Bayes Nearest Neighbour (NBNN) is a simple and effective framework
which addresses many of the pitfalls of K-Nearest Neighbour (KNN)
classification. It has yielded competitive results on several computer vision
benchmarks. Its central tenet is that, during NN search, a query should not
be compared against every example in the database while ignoring class
information. Instead, NN searches are performed within each class, generating
a score per class. A key
problem with NN techniques, including NBNN, is that they fail when the data
representation does not capture perceptual (e.g.~class-based) similarity. NBNN
circumvents this by using independent engineered descriptors (e.g.~SIFT). To
extend its applicability outside of image-based domains, we propose to learn a
metric which captures perceptual similarity. Similar to how Neighbourhood
Components Analysis optimizes a differentiable form of KNN classification, we
propose "Class Conditional" metric learning (CCML), which optimizes a soft form
of the NBNN selection rule. Typical metric learning algorithms learn either a
global or local metric. However, our proposed method can be adjusted to a
particular level of locality by tuning a single parameter. An empirical
evaluation on classification and retrieval tasks demonstrates that our proposed
method clearly outperforms existing learned distance metrics across a variety
of image and non-image datasets.
| Daniel Jiwoong Im, Graham W. Taylor | null | 1607.0305 | null | null |
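For context, a minimal sketch of the NBNN selection rule that CCML softens: sum per-descriptor nearest-neighbour distances within each class and pick the class with the smallest total. The Euclidean distance here is a placeholder for the learned metric.

```python
# Sketch: the NBNN selection rule; Euclidean distance stands in for the
# learned metric.
import numpy as np

def nbnn_predict(query_descs, db_descs, db_labels):
    # query_descs: (q, d) descriptors extracted from one query item.
    scores = {}
    for c in np.unique(db_labels):
        Xc = db_descs[db_labels == c]
        d2 = ((query_descs[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        scores[c] = d2.min(axis=1).sum()   # sum of per-descriptor NN dists
    return min(scores, key=scores.get)     # class with the smallest total

rng = np.random.default_rng(0)
db = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
labels = np.array([0] * 50 + [1] * 50)
query = rng.normal(3, 1, (10, 8))
print(nbnn_predict(query, db, labels))     # expect class 1
```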