title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Learning detectors of malicious web requests for intrusion detection in
network traffic | stat.ML cs.LG | This paper proposes a generic classification system designed to detect
security threats based on the behavior of malware samples. The system relies on
statistical features computed from proxy log fields to train detectors using a
database of malware samples. The behavior detectors serve as basic reusable
building blocks of the multi-level detection architecture. The detectors
identify malicious communication exploiting encrypted URL strings and domains
generated by a Domain Generation Algorithm (DGA) which are frequently used in
Command and Control (C&C), phishing, and click fraud. Surprisingly, very
precise detectors can be built given only a limited amount of information
extracted from a single proxy log. This way, the computational requirements of
the detectors are kept low, which allows for deployment on a wide range of
security devices without depending on traffic context such as DNS logs,
Whois records, or webpage content. Results on several weeks of live traffic
from 100+ companies with 350k+ hosts show correct detection with a precision
exceeding 95% for malicious flows, 95% for malicious URLs, and 90% for infected
hosts. In addition, a comparison with a signature- and rule-based solution shows
that our system is able to detect a significant amount of new threats.
| Lukas Machlica and Karel Bartos and Michal Sofka | null | 1702.02530 | null | null |
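A toy sketch of the kind of low-cost statistics such detectors can compute from a single proxy-log field. The specific features (string entropy, length, digit ratio) and the example URLs are illustrative assumptions, not the paper's actual feature set:

```python
import math
from collections import Counter

def string_entropy(s: str) -> float:
    """Shannon entropy of the character distribution of a string."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def proxy_log_features(url: str) -> dict:
    """Cheap statistical features computable from a single proxy-log field."""
    domain = url.split("/")[0]
    return {
        "length": len(url),
        "entropy": string_entropy(url),
        "digit_ratio": sum(ch.isdigit() for ch in url) / max(len(url), 1),
        "domain_entropy": string_entropy(domain),
    }

# DGA-generated domains tend to score high on entropy and digit ratio.
print(proxy_log_features("qx7f3kz9vbn2.com/a?id=1"))
print(proxy_log_features("example.com/index.html"))
```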
A Modified Construction for a Support Vector Classifier to Accommodate
Class Imbalances | stat.ML cs.LG | Given a training set with binary classification, the Support Vector Machine
identifies the hyperplane maximizing the margin between the two classes of
training data. This general formulation is useful in that it can be applied
without regard to variance differences between the classes. Ignoring these
differences is not optimal, however, as the general SVM will give the class
with lower variance an unjustifiably wide berth. This increases the chance of
misclassification of the other class and results in an overall loss of
predictive performance. An alternate construction is proposed in which the
margins of the separating hyperplane are different for each class, each
proportional to the standard deviation of its class along the direction
perpendicular to the hyperplane. The construction agrees with the SVM in the
case of equal class variances. The paper then examines the impact of the
modified constraint equations on the dual representation.
| Matt Parker, Colin Parker | null | 1702.02555 | null | null |
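A hedged sketch of the modified construction. Because the class-proportional margins depend on the hyperplane itself, the joint problem is not convex; the sketch below fixes the margin scales from an initial SVM direction and re-solves once, an approximation of the idea rather than the paper's algorithm. Assumes `cvxpy` and `scikit-learn`:

```python
import cvxpy as cp
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Two classes with very different variances.
X = np.vstack([rng.normal(0, 0.3, (50, 2)) - 2.0,
               rng.normal(0, 1.5, (50, 2)) + 2.0])
y = np.hstack([-np.ones(50), np.ones(50)])

# Step 1: a standard SVM gives an initial direction w0.
w0 = LinearSVC(C=10.0).fit(X, y).coef_.ravel()

# Step 2: the per-class standard deviation along w0 sets the margin scales.
scale = {c: np.std(X[y == c] @ w0 / np.linalg.norm(w0)) for c in (-1, 1)}
margins = np.array([scale[c] for c in y])

# Step 3: re-solve with class-proportional margins (soft constraints).
w, b = cp.Variable(2), cp.Variable()
xi = cp.Variable(100, nonneg=True)
cons = [cp.multiply(y, X @ w + b) >= margins - xi]
cp.Problem(cp.Minimize(cp.sum_squares(w) + 10.0 * cp.sum(xi)), cons).solve()
print(w.value, b.value)
```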
Causal Regularization | cs.LG cs.AI cs.NE stat.ML | In application domains such as healthcare, we want accurate predictive models
that are also causally interpretable. In pursuit of such models, we propose a
causal regularizer to steer predictive models towards causally-interpretable
solutions and theoretically study its properties. In a large-scale analysis of
Electronic Health Records (EHR), our causally-regularized model outperforms its
L1-regularized counterpart in causal accuracy and is competitive in predictive
performance. We perform non-linear causality analysis by causally regularizing
a special neural network architecture. We also show that the proposed causal
regularizer can be used together with neural representation learning algorithms
to yield up to 20% improvement over a multilayer perceptron in detecting
multivariate causation, a situation common in healthcare, where many causal
factors must occur simultaneously to have an effect on the target variable.
| Mohammad Taha Bahadori, Krzysztof Chalupka, Edward Choi, Robert Chen,
Walter F. Stewart, Jimeng Sun | null | 1702.02604 | null | null |
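One concrete way to read "causal regularizer" in code: penalize each coefficient in proportion to how non-causal its feature is believed to be, so that predictive-but-non-causal features are shrunk harder. The causal scores below are stand-ins (the paper estimates them); this is a sketch of the idea, not the authors' implementation. PyTorch:

```python
import torch

def causally_regularized_loss(model, X, y, causal_scores, lam=0.1):
    """Logistic loss plus an L1 penalty weighted by (1 - causal score).

    causal_scores[j] in [0, 1] is an assumed estimate of how likely
    feature j is to be a cause of the outcome.
    """
    logits = model(X).squeeze(-1)
    pred_loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, y)
    w = model.weight.squeeze(0)                       # (d,) coefficients
    penalty = ((1.0 - causal_scores) * w.abs()).sum()
    return pred_loss + lam * penalty

d = 5
model = torch.nn.Linear(d, 1)
X, y = torch.randn(64, d), torch.randint(0, 2, (64,)).float()
scores = torch.tensor([0.9, 0.1, 0.5, 0.2, 0.8])      # assumed causal scores
loss = causally_regularized_loss(model, X, y, scores)
loss.backward()
```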
Character-level Deep Conflation for Business Data Analytics | cs.CL cs.LG | Connecting different text attributes associated with the same entity
(conflation) is important in business data analytics since it could help merge
two different tables in a database to provide a more comprehensive profile of
an entity. However, the conflation task is challenging because two text strings
that describe the same entity could be quite different from each other for
reasons such as misspelling. It is therefore critical to develop a conflation
model that is able to truly understand the semantic meaning of the strings and
match them at the semantic level. To this end, we develop a character-level
deep conflation model that encodes the input text strings from character level
into finite dimension feature vectors, which are then used to compute the
cosine similarity between the text strings. The model is trained in an
end-to-end manner using back propagation and stochastic gradient descent to
maximize the likelihood of the correct association. Specifically, we propose
two variants of the deep conflation model, based on a long short-term memory
(LSTM) recurrent neural network (RNN) and a convolutional neural network (CNN),
respectively. Both models perform well on a real-world business analytics
dataset and significantly outperform the baseline bag-of-character (BoC) model.
| Zhe Gan, P. D. Singh, Ameet Joshi, Xiaodong He, Jianshu Chen, Jianfeng
Gao, Li Deng | null | 1702.02640 | null | null |
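A condensed sketch of the LSTM variant under stated assumptions: raw bytes as the character vocabulary, a single-layer encoder shared by both strings, and cosine similarity as the matching score. The dimensions and details are illustrative, not taken from the paper. PyTorch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharEncoder(nn.Module):
    """Encode a string, character by character, into a fixed-size vector."""
    def __init__(self, vocab=256, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)

    def forward(self, s: str) -> torch.Tensor:
        ids = torch.tensor([[min(ord(c), 255) for c in s]])
        _, (h, _) = self.lstm(self.embed(ids))
        return h[-1, 0]                     # final hidden state as the code

enc = CharEncoder()
a = enc("Acme Corporation Ltd.")
b = enc("ACME Corp Limited")               # same entity, differently written
# Training would maximize this score for correct pairs (ranking loss).
print(F.cosine_similarity(a, b, dim=0))
```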
EEG Representation Using Multi-instance Framework on The Manifold of
Symmetric Positive Definite Matrices for EEG-based Computer Aided Diagnosis | cs.LG | The generalization and robustness of an electroencephalogram (EEG)-based
computer aided diagnostic system are crucial requirements in actual clinical
practice. To reach these goals, we propose a new EEG representation that
provides a more realistic view of brain functionality by applying
multi-instance (MI) framework to consider the non-stationarity of the EEG
signal. The non-stationary characteristic of EEG is considered by describing
the signal as a bag of relevant and irrelevant concepts. The concepts are
provided by a robust representation of homogeneous segments of the EEG signal using
spatial covariance matrices. Due to the nonlinear geometry of the space of
covariance matrices, we determine the boundaries of the homogeneous segments
based on adaptive segmentation of the signal in a Riemannian framework. Each
subject is described as a bag of covariance matrices of homogeneous segments, and
the bag-level discriminative information is used for classification. To
evaluate the performance of the proposed approach, we examine it in attention
deficit hyperactivity/bipolar mood disorder detection and depression/normal
diagnosis applications. Experimental results confirm the superiority of the
proposed approach, which stems from the robustness of the covariance
descriptor, the effectiveness of Riemannian geometry, and the benefits of
accounting for the inherently non-stationary nature of the brain.
| Khadijeh Sadatnejad, Saeed S. Ghidary, Reza Rostami, and Reza Kazemi | null | 1702.02655 | null | null |
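The building blocks here are symmetric positive definite (SPD) spatial covariance matrices compared under a Riemannian metric. A minimal sketch of both pieces, using the affine-invariant Riemannian metric as one concrete choice of geometry (the paper's adaptive segmentation and bag-level classifier are not reproduced):

```python
import numpy as np
from scipy.linalg import eigvalsh

def spatial_covariance(segment: np.ndarray) -> np.ndarray:
    """SPD covariance of one EEG segment of shape (channels, samples)."""
    segment = segment - segment.mean(axis=1, keepdims=True)
    C = segment @ segment.T / segment.shape[1]
    return C + 1e-8 * np.eye(C.shape[0])    # regularize to stay SPD

def airm_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Affine-invariant Riemannian distance between SPD matrices:
    d(A, B) = sqrt(sum_i log^2 lambda_i), with lambda_i the generalized
    eigenvalues of the pencil (B, A), i.e. the eigenvalues of A^{-1} B.
    """
    lam = eigvalsh(B, A)
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))

rng = np.random.default_rng(0)
seg1, seg2 = rng.standard_normal((8, 500)), rng.standard_normal((8, 500))
print(airm_distance(spatial_covariance(seg1), spatial_covariance(seg2)))
```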
Inductive Pairwise Ranking: Going Beyond the n log(n) Barrier | cs.LG cs.IT math.IT stat.ML | We study the problem of ranking a set of items from nonactively chosen
pairwise preferences, where each item has feature information associated with it. We
propose and characterize a very broad class of preference matrices giving rise
to the Feature Low Rank (FLR) model, which subsumes several models ranging from
the classic Bradley-Terry-Luce (BTL) (Bradley and Terry 1952) and Thurstone
(Thurstone 1927) models to the recently proposed blade-chest (Chen and Joachims
2016) and generic low-rank preference (Rajkumar and Agarwal 2016) models. We
use the technique of matrix completion in the presence of side information to
develop the Inductive Pairwise Ranking (IPR) algorithm that provably learns a
good ranking under the FLR model, in a sample-efficient manner. In practice,
through systematic synthetic simulations, we confirm our theoretical findings
regarding improvements in the sample complexity due to the use of feature
information. Moreover, on popular real-world preference learning datasets, our
method recovers a good ranking with as little as 10% of the pairwise
comparisons sampled.
| U.N. Niranjan, Arun Rajkumar | null | 1702.02661 | null | null |
Energy Saving Additive Neural Network | cs.NE cs.AI cs.LG | In recent years, machine learning techniques based on neural networks for
mobile computing become increasingly popular. Classical multi-layer neural
networks require matrix multiplications at each stage. Multiplication operation
is not an energy efficient operation and consequently it drains the battery of
the mobile device. In this paper, we propose a new energy-efficient neural
network with the universal approximation property over the space of
Lebesgue-integrable functions. This network, called an additive neural network,
is well suited for mobile computing. The neural structure is based on a novel vector
product definition, called ef-operator, that permits a multiplier-free
implementation. In ef-operation, the "product" of two real numbers is defined
as the sum of their absolute values, with the sign determined by the sign of
the product of the numbers. This "product" is used to construct a vector
product in $R^N$. The vector product induces the $l_1$ norm. The proposed
additive neural network successfully solves the XOR problem. The experiments on
MNIST dataset show that the classification performances of the proposed
additive neural networks are very similar to the corresponding multi-layer
perceptron and convolutional neural networks (LeNet).
| Arman Afrasiyabi, Ozan Yildiz, Baris Nasir, Fatos T. Yarman Vural and
A. Enis Cetin | null | 1702.02676 | null | null |
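The ef-operator is concrete enough to state directly: the "product" of a and b is sign(a*b) * (|a| + |b|), which needs only sign logic and addition. A small sketch of the operator and the induced vector product (function names and vectorization are mine, not the paper's code):

```python
import numpy as np

def ef(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Multiplier-free "product": sign(a*b) * (|a| + |b|).

    sign(a*b) equals sign(a)*sign(b), so the operation reduces to sign
    logic plus addition, with no true multiplication.
    """
    return np.sign(a) * np.sign(b) * (np.abs(a) + np.abs(b))

def ef_dot(w: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Vector "product" in R^N: the sum of element-wise ef-operations.
    Note ef_dot(w, w) = 2 * ||w||_1, the l1 norm mentioned in the abstract.
    """
    return ef(w, x).sum(axis=-1)

w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 3.0, -0.25])
print(ef_dot(w, x))    # replaces the usual dot product inside each neuron
```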
Rate Optimal Estimation and Confidence Intervals for High-dimensional
Regression with Missing Covariates | stat.ML cs.LG stat.ME | Although a majority of the theoretical literature in high-dimensional
statistics has focused on settings which involve fully-observed data, settings
with missing values and corruptions are common in practice. We consider the
problems of estimation and of constructing component-wise confidence intervals
in a sparse high-dimensional linear regression model when some covariates of
the design matrix are missing completely at random. We analyze a variant of the
Dantzig selector [9] for estimating the regression model and we use a
de-biasing argument to construct component-wise confidence intervals. Our first
main result is to establish upper bounds on the estimation error as a function
of the model parameters (the sparsity level s, the expected fraction of
observed covariates $\rho_*$, and a measure of the signal strength
$\|\beta^*\|_2$). We find that even in an idealized setting where the
covariates are assumed to be missing completely at random, somewhat
surprisingly and in contrast to the fully-observed setting, there is a
dichotomy in the dependence on model parameters and much faster rates are
obtained if the covariance matrix of the random design is known. To study this
issue further, our second main contribution is to provide lower bounds on the
estimation error showing that this discrepancy in rates is unavoidable in a
minimax sense. We then consider the problem of high-dimensional inference in
the presence of missing data. We construct and analyze confidence intervals
using a de-biased estimator. In the presence of missing data, inference is
complicated by the fact that the de-biasing matrix is correlated with the pilot
estimator and this necessitates the design of a new estimator and a novel
analysis. We also complement our mathematical study with extensive simulations
on synthetic and semi-synthetic data that show the accuracy of our asymptotic
predictions for finite sample sizes.
| Yining Wang, Jialei Wang, Sivaraman Balakrishnan, Aarti Singh | null | 1702.02686 | null | null |
A Fast and Scalable Joint Estimator for Learning Multiple Related Sparse
Gaussian Graphical Models | stat.ML cs.LG cs.PF | Estimating multiple sparse Gaussian Graphical Models (sGGMs) jointly for many
related tasks (large $K$) under a high-dimensional (large $p$) situation is an
important task. Most previous studies for the joint estimation of multiple
sGGMs rely on penalized log-likelihood estimators that involve expensive and
difficult non-smooth optimizations. We propose a novel approach, FASJEM for
\underline{fa}st and \underline{s}calable \underline{j}oint
structure-\underline{e}stimation of \underline{m}ultiple sGGMs at a large
scale. As the first study of joint sGGM estimation using the Elementary Estimator
framework, our work has three major contributions: (1) We solve FASJEM in
an entry-wise manner, which is parallelizable. (2) We choose a proximal
algorithm to optimize FASJEM. This improves the computational efficiency from
$O(Kp^3)$ to $O(Kp^2)$ and reduces the memory requirement from $O(Kp^2)$ to
$O(K)$. (3) We theoretically prove that FASJEM achieves a consistent estimation
with a convergence rate of $O(\log(Kp)/n_{tot})$. On several synthetic and four
real-world datasets, FASJEM shows significant improvements over baselines on
accuracy, computational complexity, and memory costs.
| Beilun Wang, Ji Gao, Yanjun Qi | null | 1702.02715 | null | null |
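A hedged sketch of the entry-wise, parallelizable ingredient: the proximal operator of the l1 penalty is soft-thresholding, applied independently to every matrix entry. Only this single step is shown, not the full FASJEM procedure:

```python
import numpy as np

def soft_threshold(M: np.ndarray, lam: float) -> np.ndarray:
    """Proximal operator of lam * ||.||_1, applied entry-wise.

    Every entry is updated independently of all others, which is what
    makes the update embarrassingly parallel.
    """
    return np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)

# One proximal step applied jointly to K estimated precision matrices.
K, p = 3, 5
rng = np.random.default_rng(0)
Omegas = rng.standard_normal((K, p, p))
Omegas = soft_threshold(Omegas, lam=0.5)   # sparsifies all K estimates at once
print((Omegas != 0).mean())
```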
Joint Discovery of Object States and Manipulation Actions | cs.CV cs.LG | Many human activities involve object manipulations aiming to modify the
object state. Examples of common state changes include full/empty bottle,
open/closed door, and attached/detached car wheel. In this work, we seek to
automatically discover the states of objects and the associated manipulation
actions. Given a set of videos for a particular task, we propose a joint model
that learns to identify object states and to localize state-modifying actions.
Our model is formulated as a discriminative clustering cost with constraints.
We assume a consistent temporal order for the changes in object states and
manipulation actions, and introduce new optimization techniques to learn model
parameters without additional supervision. We demonstrate successful discovery
of seven manipulation actions and corresponding object states on a new dataset
of videos depicting real-life object manipulations. We show that our joint
formulation results in an improvement of object state discovery by action
recognition and vice versa.
| Jean-Baptiste Alayrac, Josef Sivic, Ivan Laptev, Simon Lacoste-Julien | null | 1702.02738 | null | null |
Graph Based Relational Features for Collective Classification | cs.IR cs.AI cs.LG | Statistical Relational Learning (SRL) methods have shown that classification
accuracy can be improved by integrating relations between samples. Techniques
such as iterative classification or relaxation labeling achieve this by
propagating information between related samples during the inference process.
When only a few samples are labeled and connections between samples are sparse,
collective inference methods have shown large improvements over standard
feature-based ML methods. However, in contrast to feature-based ML, collective
inference methods require complex inference procedures and often depend on the
strong assumption of label consistency among related samples. In this paper, we
introduce new relational features for standard ML methods by extracting
information from direct and indirect relations. We show empirically on three
standard benchmark datasets that our relational features yield results
comparable to collective inference methods. Finally, we show that our proposal
outperforms these methods when additional information is available.
| Immanuel Bayer, Uwe Nagel, Steffen Rendle | 10.1007/978-3-319-18032-8_35 | 1702.02817 | null | null |
Minimax Lower Bounds for Ridge Combinations Including Neural Nets | stat.ML cs.LG | Estimation of functions of $ d $ variables is considered using ridge
combinations of the form $ \textstyle\sum_{k=1}^m c_{1,k}
\phi(\textstyle\sum_{j=1}^d c_{0,j,k}x_j-b_k) $ where the activation function $
\phi $ is a function with bounded value and derivative. These include
single-hidden layer neural networks, polynomials, and sinusoidal models. From a
sample of size $ n $ of possibly noisy values at random sites $ X \in B =
[-1,1]^d $, the minimax mean square error is examined for functions in the
closure of the $ \ell_1 $ hull of ridge functions with activation $ \phi $. It
is shown to be of order $ d/n $ to a fractional power (when $ d $ is of smaller
order than $ n $), and to be of order $ (\log d)/n $ to a fractional power
(when $ d $ is of larger order than $ n $). Dependence on constraints $ v_0 $
and $ v_1 $ on the $ \ell_1 $ norms of inner parameter $ c_0 $ and outer
parameter $ c_1 $, respectively, is also examined. Also, lower and upper bounds
on the fractional power are given. The heart of the analysis is development of
information-theoretic packing numbers for these classes of functions.
| Jason M. Klusowski and Andrew R. Barron | null | 1702.02828 | null | null |
Coordinated Online Learning With Applications to Learning User
Preferences | cs.LG stat.ML | We study an online multi-task learning setting, in which instances of related
tasks arrive sequentially, and are handled by task-specific online learners. We
consider an algorithmic framework to model the relationship of these tasks via
a set of convex constraints. To exploit this relationship, we design a novel
algorithm -- COOL -- for coordinating the individual online learners: Our key
idea is to coordinate their parameters via weighted projections onto a convex
set. By adjusting the rate and accuracy of the projection, the COOL algorithm
allows for a trade-off between the benefit of coordination and the required
computation/communication. We derive regret bounds for our approach and analyze
how they are influenced by these trade-off factors. We apply our results to the
task of learning users' preferences on the Airbnb marketplace, with the goal of
incentivizing users to explore under-reviewed apartments.
| Christoph Hirnschall, Adish Singla, Sebastian Tschiatschek, Andreas
Krause | null | 1702.02849 | null | null |
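The coordination step has a compact special case: projecting all learners' parameters onto the consensus set {theta_1 = ... = theta_K} under task weights yields the weighted average, and projecting only part-way trades the benefit of coordination against computation and communication. A sketch under that assumption (the paper's COOL algorithm handles general convex constraint sets; consensus is only the simplest instance):

```python
import numpy as np

def coordinate(thetas: np.ndarray, weights: np.ndarray, rate: float) -> np.ndarray:
    """Move each learner part-way toward the weighted projection of the
    stacked parameters onto the consensus set {theta_1 = ... = theta_K}.

    rate=1.0 is a full projection (maximal coordination); rate=0.0 keeps
    the learners fully independent (no communication needed).
    """
    consensus = np.average(thetas, axis=0, weights=weights)  # the projection
    return (1.0 - rate) * thetas + rate * consensus

K, d = 4, 3
rng = np.random.default_rng(1)
thetas = rng.standard_normal((K, d))       # one parameter vector per task
thetas = coordinate(thetas, np.ones(K), rate=0.5)
print(thetas)
```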
Multi-feature classifiers for burst detection in single EEG channels
from preterm infants | q-bio.NC cs.LG | The study of electroencephalographic (EEG) bursts in preterm infants provides
valuable information about maturation or prognostication after perinatal
asphyxia. Over the last two decades, a number of works proposed algorithms to
automatically detect EEG bursts in preterm infants, but they were designed for
populations under 35 weeks of postmenstrual age (PMA). However, as the brain
activity evolves rapidly during postnatal life, these solutions might be
under-performing with increasing PMA. In this work we focused on preterm
infants reaching term ages (PMA $\geq$ 36 weeks) using multi-feature
classification on a single EEG channel. Five EEG burst detectors relying on
different machine learning approaches were compared: Logistic regression (LR),
linear discriminant analysis (LDA), k-nearest neighbors (kNN), support vector
machines (SVM) and thresholding (Th). Classifiers were trained by visually
labeled EEG recordings from 14 very preterm infants (born after 28 weeks of
gestation) with 36 - 41 weeks PMA. The best-performing classifiers reached
about 95\% accuracy (kNN, SVM and LR), whereas Th obtained 84\%. In terms of
human-automatic agreement, LR provided the highest scores (Cohen's kappa =
0.71) and the best computational efficiency, using only three EEG features.
Applying this classifier to a test database of 21 infants $\geq$ 36 weeks PMA,
we show that long EEG bursts and short inter-burst periods are characteristic of
infants with the highest PMA and weights. In view of these results, LR-based
burst detection could be a suitable tool to study maturation in monitoring or
portable devices using a single EEG channel.
| X. Navarro, F. Por\'ee, M. Kuchenbuch, M. Chavez, A. Beuch\'ee, G.
Carrault | 10.1088/1741-2552/aa714a | 1702.02873 | null | null |
Policy Learning with Observational Data | math.ST cs.LG econ.EM stat.ML stat.TH | In many areas, practitioners seek to use observational data to learn a
treatment assignment policy that satisfies application-specific constraints,
such as budget, fairness, simplicity, or other functional form constraints. For
example, policies may be restricted to take the form of decision trees based on
a limited set of easily observable individual characteristics. We propose a new
approach to this problem motivated by the theory of semiparametrically
efficient estimation. Our method can be used to optimize either binary
treatments or infinitesimal nudges to continuous treatments, and can leverage
observational data where causal effects are identified using a variety of
strategies, including selection on observables and instrumental variables.
Given a doubly robust estimator of the causal effect of assigning everyone to
treatment, we develop an algorithm for choosing whom to treat, and establish
strong guarantees for the asymptotic utilitarian regret of the resulting
policy.
| Susan Athey and Stefan Wager | null | 1702.02896 | null | null |
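A hedged sketch of the overall recipe: doubly robust (AIPW) scores for the effect of treatment, then a simple constrained policy class, here a shallow decision tree, fit by weighted classification. The nuisance models, the omission of cross-fitting, and the tree depth are illustrative simplifications, not the paper's estimator. Uses scikit-learn:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n, d = 2000, 4
X = rng.standard_normal((n, d))
W = rng.binomial(1, 0.5, n)                  # observed binary treatment
tau = X[:, 0]                                # true heterogeneous effect
Y = X.sum(axis=1) + W * tau + rng.standard_normal(n)

# Nuisance estimates: per-arm outcome models and a propensity model.
mu1 = RandomForestRegressor().fit(X[W == 1], Y[W == 1]).predict(X)
mu0 = RandomForestRegressor().fit(X[W == 0], Y[W == 0]).predict(X)
e = RandomForestClassifier().fit(X, W).predict_proba(X)[:, 1].clip(0.05, 0.95)

# Doubly robust scores for the benefit of treating each individual.
gamma = mu1 - mu0 + W * (Y - mu1) / e - (1 - W) * (Y - mu0) / (1 - e)

# Constrained policy: a depth-2 decision tree trained to treat whenever
# the estimated benefit is positive, weighting each person by |gamma|.
policy = DecisionTreeClassifier(max_depth=2)
policy.fit(X, (gamma > 0).astype(int), sample_weight=np.abs(gamma))
print("fraction treated:", policy.predict(X).mean())
```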
Online and Offline Domain Adaptation for Reducing BCI Calibration Effort | cs.LG cs.HC | Many real-world brain-computer interface (BCI) applications rely on
single-trial classification of event-related potentials (ERPs) in EEG signals.
However, because different subjects have different neural responses to even the
same stimulus, it is very difficult to build a generic ERP classifier whose
parameters fit all subjects. The classifier needs to be calibrated for each
individual subject, using some labeled subject-specific data. This paper
proposes both online and offline weighted adaptation regularization (wAR)
algorithms to reduce this calibration effort, i.e., to minimize the amount of
labeled subject-specific EEG data required in BCI calibration, and hence to
increase the utility of the BCI system. We demonstrate using a visually-evoked
potential oddball task and three different EEG headsets that both online and
offline wAR algorithms significantly outperform several other algorithms.
Moreover, through source domain selection, we can reduce their computational
cost by about 50%, making them more suitable for real-time applications.
| Dongrui Wu | 10.1109/THMS.2016.2608931 | 1702.02897 | null | null |
Driver Drowsiness Estimation from EEG Signals Using Online Weighted
Adaptation Regularization for Regression (OwARR) | cs.LG cs.HC | One big challenge that hinders the transition of brain-computer interfaces
(BCIs) from laboratory settings to real-life applications is the availability
of high-performance and robust learning algorithms that can effectively handle
individual differences, i.e., algorithms that can be applied to a new subject
with zero or very little subject-specific calibration data. Transfer learning
and domain adaptation have been extensively used for this purpose. However,
most previous works focused on classification problems. This paper considers an
important regression problem in BCI, namely, online driver drowsiness
estimation from EEG signals. By integrating fuzzy sets with domain adaptation,
we propose a novel online weighted adaptation regularization for regression
(OwARR) algorithm to reduce the amount of subject-specific calibration data,
and also a source domain selection (SDS) approach to save about half of the
computational cost of OwARR. Using a simulated driving dataset with 15
subjects, we show that OwARR and OwARR-SDS can achieve significantly smaller
estimation errors than several other approaches. We also provide comprehensive
analyses on the robustness of OwARR and OwARR-SDS.
| Dongrui Wu, Vernon J. Lawhern, Stephen Gordon, Brent J. Lance,
Chin-Teng Lin | 10.1109/TFUZZ.2016.2633379 | 1702.02901 | null | null |
Switching EEG Headsets Made Easy: Reducing Offline Calibration Effort
Using Active Weighted Adaptation Regularization | cs.LG cs.HC | Electroencephalography (EEG) headsets are the most commonly used sensing
devices for Brain-Computer Interface. In real-world applications, there are
advantages to extrapolating data from one user session to another. However,
these advantages are limited if the data arise from different hardware systems,
which often vary between application spaces. Currently, this creates a need to
recalibrate classifiers, which negatively affects people's interest in using
such systems. In this paper, we employ active weighted adaptation
regularization (AwAR), which integrates weighted adaptation regularization
(wAR) and active learning, to expedite the calibration process. wAR makes use
of labeled data from the previous headset and handles class-imbalance, and
active learning selects the most informative samples from the new headset to
label. Experiments on single-trial event-related potential classification show
that AwAR can significantly increase the classification accuracy, given the
same number of labeled samples from the new headset. In other words, AwAR can
effectively reduce the number of labeled samples required from the new headset,
given a desired classification accuracy, suggesting value in collating data for
use in wide scale transfer-learning applications.
| Dongrui Wu, Vernon J. Lawhern, W. David Hairston, Brent J. Lance | 10.1109/TNSRE.2016.2544108 | 1702.02906 | null | null |
Spatial Filtering for EEG-Based Regression Problems in Brain-Computer
Interface (BCI) | cs.LG cs.HC | Electroencephalogram (EEG) signals are frequently used in brain-computer
interfaces (BCIs), but they are easily contaminated by artifacts and noises, so
preprocessing must be done before they are fed into a machine learning
algorithm for classification or regression. Spatial filters have been widely
used to increase the signal-to-noise ratio of EEG for BCI classification
problems, but their applications in BCI regression problems have been very
limited. This paper proposes two common spatial pattern (CSP) filters for
EEG-based regression problems in BCI, which are extended from the CSP filter
for classification, by making use of fuzzy sets. Experimental results on
EEG-based response speed estimation from a large-scale study, which collected
143 sessions of sustained-attention psychomotor vigilance task data from 17
subjects during a 5-month period, demonstrate that the two proposed spatial
filters can significantly increase the EEG signal quality. When used in LASSO
and k-nearest neighbors regression for user response speed estimation, the
spatial filters can reduce the root mean square estimation error by
10.02-19.77%, and at the same time increase the correlation to the true
response speed by 19.39-86.47%.
| Dongrui Wu, Jung-Tai King, Chun-Hsiang Chuang, Chin-Teng Lin,
Tzyy-Ping Jung | null | 1702.02914 | null | null |
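The classification version of CSP, which the paper extends, is itself compact: the spatial filters are generalized eigenvectors of the two class covariance matrices. A sketch of that classical baseline (the paper's fuzzy-set extension to regression problems is not shown):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1: np.ndarray, X2: np.ndarray, n_filters: int = 4) -> np.ndarray:
    """Classic two-class CSP. X1, X2: (trials, channels, samples) arrays.

    Returns spatial filters (rows) maximizing the variance for one class
    while minimizing it for the other.
    """
    def mean_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)

    C1, C2 = mean_cov(X1), mean_cov(X2)
    # Generalized eigenproblem: C1 w = lambda (C1 + C2) w.
    vals, vecs = eigh(C1, C1 + C2)
    order = np.argsort(vals)
    # Keep the extremal eigenvectors, discriminative in either direction.
    picks = np.concatenate([order[: n_filters // 2], order[-(n_filters // 2):]])
    return vecs[:, picks].T

rng = np.random.default_rng(0)
X1 = rng.standard_normal((20, 8, 256))
X2 = 2.0 * rng.standard_normal((20, 8, 256))
W = csp_filters(X1, X2)
print((W @ X1[0]).shape)    # spatially filtered trial: (n_filters, samples)
```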
Fixing an error in Caponnetto and de Vito (2007) | stat.ML cs.LG math.ST stat.TH | The seminal paper of Caponnetto and de Vito (2007) provides minimax-optimal
rates for kernel ridge regression in a very general setting. Its proof,
however, contains an error in its bound on the effective dimensionality. In
this note, we explain the mistake, provide a correct bound, and show that the
main theorem remains true.
| Danica J. Sutherland | null | 1702.02982 | null | null |
Multi-step Off-policy Learning Without Importance Sampling Ratios | cs.LG | To estimate the value functions of policies from exploratory data, most
model-free off-policy algorithms rely on importance sampling, where the use of
importance sampling ratios often leads to estimates with severe variance. It is
thus desirable to learn off-policy without using the ratios. However, such an
algorithm does not exist for multi-step learning with function approximation.
In this paper, we introduce the first such algorithm based on
temporal-difference (TD) learning updates. We show that an explicit use of
importance sampling ratios can be eliminated by varying the amount of
bootstrapping in TD updates in an action-dependent manner. Our new algorithm
achieves stability using a two-timescale gradient-based TD update. A prior
algorithm based on lookup table representation called Tree Backup can also be
retrieved using action-dependent bootstrapping, becoming a special case of our
algorithm. In two challenging off-policy tasks, we demonstrate that our
algorithm is stable, effectively avoids the large variance issue, and can
perform substantially better than its state-of-the-art counterpart.
| Ashique Rupam Mahmood, Huizhen Yu, Richard S. Sutton | null | 1702.03006 | null | null |
Multi-agent Reinforcement Learning in Sequential Social Dilemmas | cs.MA cs.AI cs.GT cs.LG | Matrix games like Prisoner's Dilemma have guided research on social dilemmas
for decades. However, they necessarily treat the choice to cooperate or defect
as an atomic action. In real-world social dilemmas these choices are temporally
extended. Cooperativeness is a property that applies to policies, not
elementary actions. We introduce sequential social dilemmas that share the
mixed incentive structure of matrix game social dilemmas but also require
agents to learn policies that implement their strategic intentions. We analyze
the dynamics of policies learned by multiple self-interested independent
learning agents, each using its own deep Q-network, on two Markov games we
introduce here: 1. a fruit Gathering game and 2. a Wolfpack hunting game. We
characterize how learned behavior in each domain changes as a function of
environmental factors including resource abundance. Our experiments show how
conflict can emerge from competition over shared resources and shed light on
how the sequential nature of real world social dilemmas affects cooperation.
| Joel Z. Leibo, Vinicius Zambaldi, Marc Lanctot, Janusz Marecki, Thore
Graepel | null | 1702.03037 | null | null |
Following the Leader and Fast Rates in Linear Prediction: Curved
Constraint Sets and Other Regularities | cs.LG | The follow the leader (FTL) algorithm, perhaps the simplest of all online
learning algorithms, is known to perform well when the loss functions it is
used on are convex and positively curved. In this paper we ask whether there
are other "lucky" settings when FTL achieves sublinear, "small" regret. In
particular, we study the fundamental problem of linear prediction over a
non-empty convex, compact domain. Amongst other results, we prove that the
curvature of the boundary of the domain can act as if the losses were curved:
In this case, we prove that as long as the mean of the loss vectors has
positive length bounded away from zero, FTL enjoys a logarithmic growth rate
of regret, while, e.g., for polytope domains and stochastic data it enjoys
finite expected regret. Building on a previously known meta-algorithm, we also
get an algorithm that simultaneously enjoys the worst-case guarantees and the
bound available for FTL.
| Ruitong Huang, Tor Lattimore, Andr\'as Gy\"orgy, Csaba Szepesv\'ari | null | 1702.03040 | null | null |
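For linear losses on the unit Euclidean ball, one curved domain in the sense above, FTL has a closed form: play the unit vector opposite the running sum of the loss vectors. A tiny sketch of that special case, with an illustrative loss distribution whose mean has positive length:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 3, 5000
mu = np.array([0.4, 0.1, -0.2])             # mean loss vector, length > 0
losses = mu + 0.5 * rng.standard_normal((T, d))

ftl_loss, cum = 0.0, np.zeros(d)
for t in range(T):
    norm = np.linalg.norm(cum)
    # FTL on the unit ball: argmin_{||w|| <= 1} <cum, w> = -cum / ||cum||.
    w = -cum / norm if norm > 0 else np.zeros(d)
    ftl_loss += losses[t] @ w
    cum += losses[t]

best = -np.linalg.norm(losses.sum(axis=0))  # best fixed action in hindsight
# Per the paper's result, regret grows only logarithmically in this regime.
print("FTL regret:", ftl_loss - best)
```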
Sigmoid-Weighted Linear Units for Neural Network Function Approximation
in Reinforcement Learning | cs.LG | In recent years, neural networks have enjoyed a renaissance as function
approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon
achieved near top-level human performance in backgammon, the deep reinforcement
learning algorithm DQN achieved human-level performance in many Atari 2600
games. The purpose of this study is twofold. First, we propose two activation
functions for neural network function approximation in reinforcement learning:
the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU).
The activation of the SiLU is computed by the sigmoid function multiplied by
its input. Second, we suggest that the more traditional approach of using
on-policy learning with eligibility traces, instead of experience replay, and
softmax action selection with simple annealing can be competitive with DQN,
without the need for a separate target network. We validate our proposed
approach by, first, achieving new state-of-the-art results in both stochastic
SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$)
learning and shallow dSiLU network agents, and, then, by outperforming DQN in
the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and
dSiLU hidden units.
| Stefan Elfwing, Eiji Uchibe and Kenji Doya | null | 1702.03118 | null | null |
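Both activations are fully specified in the abstract: SiLU is the input multiplied by its sigmoid, and dSiLU is the SiLU's derivative, used as an activation in its own right. A direct sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def silu(x):
    """Sigmoid-weighted linear unit: x * sigmoid(x)."""
    return x * sigmoid(x)

def dsilu(x):
    """Derivative of the SiLU, used as an activation itself:
    d/dx [x * s(x)] = s(x) * (1 + x * (1 - s(x))).
    """
    s = sigmoid(x)
    return s * (1.0 + x * (1.0 - s))

x = np.linspace(-6, 6, 5)
print(silu(x))    # unbounded above, bounded below, like a smoothed ReLU
print(dsilu(x))   # sigmoid-like curve with a peak slightly above 1
```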
Supervised Learning Based Algorithm Selection for Deep Neural Networks | cs.DC cs.LG | Many recent deep learning platforms rely on third-party libraries (such as
cuBLAS) to utilize the computing power of modern hardware accelerators (such as
GPUs). However, we observe that they may achieve suboptimal performance because
the library functions are not used appropriately. In this paper, we focus on
optimizing the operation of multiplying a matrix with the transpose of another
matrix (referred to as the NT operation hereafter), which contributes about half of
the training time of fully connected deep neural networks. Rather than directly
calling the library function, we propose a supervised learning based algorithm
selection approach named MTNN, which uses a gradient boosted decision tree to
select one from two alternative NT implementations intelligently: (1) calling
the cuBLAS library function; (2) calling our proposed algorithm TNN that uses
an efficient out-of-place matrix transpose. We evaluate the performance of MTNN
on two modern GPUs: NVIDIA GTX 1080 and NVIDIA Titan X Pascal. MTNN can achieve
96\% of prediction accuracy with very low computational overhead, which results
in an average of 54\% performance improvement on a range of NT operations. To
further evaluate the impact of MTNN on the training process of deep neural
networks, we have integrated MTNN into a popular deep learning platform Caffe.
Our experimental results show that the revised Caffe can outperform the
original one by an average of 28\%. Both MTNN and the revised Caffe are
open-source.
| Shaohuai Shi, Pengfei Xu, Xiaowen Chu | null | 1702.03192 | null | null |
Adaptive and Resilient Soft Tensegrity Robots | cs.RO cs.LG cs.SY | Living organisms intertwine soft (e.g., muscle) and hard (e.g., bones)
materials, giving them an intrinsic flexibility and resiliency often lacking in
conventional rigid robots. The emerging field of soft robotics seeks to harness
these same properties in order to create resilient machines. The nature of soft
materials, however, presents considerable challenges to aspects of design,
construction, and control -- and up until now, the vast majority of gaits for
soft robots have been hand-designed through empirical trial-and-error. This
manuscript describes an easy-to-assemble tensegrity-based soft robot capable of
highly dynamic locomotive gaits and demonstrating structural and behavioral
resilience in the face of physical damage. Enabling this is the use of a
machine learning algorithm able to discover effective gaits with a minimal
number of physical trials. These results lend further credence to soft-robotic
approaches that seek to harness the interaction of complex material dynamics in
order to generate a wealth of dynamical behaviors.
| John Rieffel and Jean-Baptiste Mouret | null | 1702.03258 | null | null |
A Deterministic and Generalized Framework for Unsupervised Learning with
Restricted Boltzmann Machines | cs.LG cond-mat.dis-nn cs.NE stat.ML | Restricted Boltzmann machines (RBMs) are energy-based neural networks which
are commonly used as the building blocks of deep neural
architectures. In this work, we derive a deterministic framework for the
training, evaluation, and use of RBMs based upon the Thouless-Anderson-Palmer
(TAP) mean-field approximation of widely-connected systems with weak
interactions coming from spin-glass theory. While the TAP approach has been
extensively studied for fully-visible binary spin systems, our construction is
generalized to latent-variable models, as well as to arbitrarily distributed
real-valued spin systems with bounded support. In our numerical experiments, we
demonstrate the effective deterministic training of our proposed models and are
able to show interesting features of unsupervised learning which could not be
directly observed with sampling. Additionally, we demonstrate how to utilize
our TAP-based framework for leveraging trained RBMs as joint priors in
denoising problems.
| Eric W. Tramel and Marylou Gabri\'e and Andre Manoel and Francesco
Caltagirone and Florent Krzakala | 10.1103/PhysRevX.8.041006 | 1702.03260 | null | null |
Batch Renormalization: Towards Reducing Minibatch Dependence in
Batch-Normalized Models | cs.LG | Batch Normalization is quite effective at accelerating and improving the
training of deep models. However, its effectiveness diminishes when the
training minibatches are small, or do not consist of independent samples. We
hypothesize that this is due to the dependence of model layer inputs on all the
examples in the minibatch, and different activations being produced between
training and inference. We propose Batch Renormalization, a simple and
effective extension to ensure that the training and inference models generate
the same outputs that depend on individual examples rather than the entire
minibatch. Models trained with Batch Renormalization perform substantially
better than batchnorm when training with small or non-i.i.d. minibatches. At
the same time, Batch Renormalization retains the benefits of batchnorm such as
insensitivity to initialization and training efficiency.
| Sergey Ioffe | null | 1702.03275 | null | null |
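A sketch of the batch renormalization forward pass: the minibatch-normalized activation is corrected by factors r and d computed from running statistics and treated as constants during backpropagation, so training outputs match what inference would produce per example. The clipping limits below are illustrative:

```python
import numpy as np

def batch_renorm(x, mu, sigma, r_max=3.0, d_max=5.0, eps=1e-5):
    """One batch-renormalization step for a (batch, features) array.

    mu, sigma: running mean / std estimates, updated elsewhere.
    r and d pull the minibatch statistics toward the running ones; both
    are treated as constants (no gradient) during backpropagation.
    """
    mu_b = x.mean(axis=0)
    sigma_b = x.std(axis=0) + eps
    r = np.clip(sigma_b / sigma, 1.0 / r_max, r_max)
    d = np.clip((mu_b - mu) / sigma, -d_max, d_max)
    # When unclipped, this equals (x - mu) / sigma, the inference output.
    return (x - mu_b) / sigma_b * r + d

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8)) * 2.0 + 1.0
y = batch_renorm(x, mu=np.ones(8), sigma=2.0 * np.ones(8))
print(y.mean(axis=0).round(2))
```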
Generative Mixture of Networks | cs.LG stat.ML | A generative model based on training deep architectures is proposed. The
model consists of K networks that are trained together to learn the underlying
distribution of a given data set. The process starts with dividing the input
data into K clusters and feeding each of them into a separate network. After
a few iterations of training the networks separately, we use an EM-like algorithm to
train the networks together and update the clusters of the data. We call this
model Mixture of Networks. The provided model is a platform that can be used
for any deep structure and be trained by any conventional objective function
for distribution modeling. As the components of the model are neural networks,
it is highly capable of characterizing complicated data distributions as well
as clustering data. We apply the algorithm to the MNIST hand-written digits and
Yale face datasets. We also demonstrate the clustering ability of the model
using some real-world and toy examples.
| Ershad Banijamali, Ali Ghodsi, Pascal Poupart | null | 1702.03307 | null | null |
Batch Policy Gradient Methods for Improving Neural Conversation Models | stat.ML cs.LG | We study reinforcement learning of chatbots with recurrent neural network
architectures when the rewards are noisy and expensive to obtain. For instance,
a chatbot used in automated customer service support can be scored by quality
assurance agents, but this process can be expensive, time-consuming, and noisy.
Previous reinforcement learning work for natural language processing uses
on-policy updates and/or is designed for on-line learning settings. We
demonstrate empirically that such strategies are not appropriate for this
setting and develop an off-policy batch policy gradient method (BPG). We
demonstrate the efficacy of our method via a series of synthetic experiments
and an Amazon Mechanical Turk experiment on a restaurant recommendations
dataset.
| Kirthevasan Kandasamy, Yoram Bachrach, Ryota Tomioka, Daniel Tarlow,
David Carter | null | 1702.03334 | null | null |
Training Deep Neural Networks via Optimization Over Graphs | cs.LG cs.DC | In this work, we propose to train a deep neural network by distributed
optimization over a graph. Two nonlinear functions are considered: the
rectified linear unit (ReLU) and a linear unit with both lower and upper
cutoffs (DCutLU). The problem reformulation over a graph is realized by
explicitly representing ReLU or DCutLU using a set of slack variables. We then
apply the alternating direction method of multipliers (ADMM) to update the
weights of the network layerwise by solving subproblems of the reformulated
problem. Empirical results suggest that the ADMM-based method is less sensitive
to overfitting than the stochastic gradient descent (SGD) and Adam methods.
| Guoqiang Zhang and W. Bastiaan Kleijn | null | 1702.03380 | null | null |
Parallel Long Short-Term Memory for Multi-stream Classification | cs.LG cs.CL | Recently, machine learning methods have provided a broad spectrum of original
and efficient algorithms based on Deep Neural Networks (DNN) to automatically
predict an outcome with respect to a sequence of inputs. Recurrent hidden cells
allow these DNN-based models to manage long-term dependencies such as Recurrent
Neural Networks (RNN) and Long Short-Term Memory (LSTM). Nevertheless, these
RNNs process a single input stream in one (LSTM) or two (Bidirectional LSTM)
directions. However, much of the information available nowadays comes from
multiple streams or multimedia documents and requires RNNs to process these
streams synchronously during training. This paper presents an original LSTM-based
architecture, named Parallel LSTM (PLSTM), that carries out multiple parallel
synchronized input sequences in order to predict a common output. The proposed
PLSTM method could be used for parallel sequence classification purposes. The
PLSTM approach is evaluated on an automatic telecast genre sequences
classification task and compared with different state-of-the-art architectures.
Results show that the proposed PLSTM method outperforms the baseline n-gram
models as well as the state-of-the-art LSTM approach.
| Mohamed Bouaziz, Mohamed Morchid, Richard Dufour, Georges Linar\`es,
Renato De Mori | null | 1702.03402 | null | null |
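A condensed sketch of the parallel-LSTM idea under stated assumptions: one LSTM per synchronized input stream, with the final hidden states concatenated into a shared classifier. The layer sizes and the fusion-by-concatenation choice are illustrative; the paper's exact architecture may differ. PyTorch:

```python
import torch
import torch.nn as nn

class ParallelLSTM(nn.Module):
    """One LSTM per synchronized stream; fuse final states to classify."""
    def __init__(self, n_streams=3, in_dim=16, hidden=32, n_classes=5):
        super().__init__()
        self.lstms = nn.ModuleList(
            [nn.LSTM(in_dim, hidden, batch_first=True) for _ in range(n_streams)]
        )
        self.head = nn.Linear(n_streams * hidden, n_classes)

    def forward(self, streams):
        # streams: list of (batch, time, in_dim) tensors, time-aligned.
        finals = []
        for lstm, x in zip(self.lstms, streams):
            _, (h, _) = lstm(x)
            finals.append(h[-1])            # (batch, hidden) final state
        return self.head(torch.cat(finals, dim=1))

model = ParallelLSTM()
streams = [torch.randn(8, 20, 16) for _ in range(3)]
print(model(streams).shape)                 # (8, 5) class logits
```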
A Collective, Probabilistic Approach to Schema Mapping: Appendix | cs.DB cs.LG | In this appendix we provide additional supplementary material to "A
Collective, Probabilistic Approach to Schema Mapping." We include an additional
extended example, supplementary experiment details, and a proof of the
complexity result stated in the main paper.
| Angelika Kimmig, Alex Memory, Renee J. Miller, Lise Getoor | null | 1702.03447 | null | null |
Enabling Robots to Communicate their Objectives | cs.RO cs.LG | The overarching goal of this work is to efficiently enable end-users to
correctly anticipate a robot's behavior in novel situations. Since a robot's
behavior is often a direct result of its underlying objective function, our
insight is that end-users need to have an accurate mental model of this
objective function in order to understand and predict what the robot will do.
While people naturally develop such a mental model over time through observing
the robot act, this familiarization process may be lengthy. Our approach
reduces this time by having the robot model how people infer objectives from
observed behavior, and then it selects those behaviors that are maximally
informative. The problem of computing a posterior over objectives from observed
behavior is known as Inverse Reinforcement Learning (IRL), and has been applied
to robots learning human objectives. We consider the problem where the roles of
human and robot are swapped. Our main contribution is to recognize that unlike
robots, humans will not be exact in their IRL inference. We thus introduce two
factors to define candidate approximate-inference models for human learning in
this setting, and analyze them in a user study in the autonomous driving
domain. We show that certain approximate-inference models lead to the robot
generating example behaviors that better enable users to anticipate what it
will do in novel situations. Our results also suggest, however, that additional
research is needed in modeling how humans extrapolate from examples of robot
behavior.
| Sandy H. Huang, David Held, Pieter Abbeel, Anca D. Dragan | 10.15607/RSS.2017.XIII.059 | 1702.03465 | null | null |
Concept Drift Adaptation by Exploiting Historical Knowledge | cs.LG | Incremental learning with concept drift has often been tackled by ensemble
methods, where models built in the past can be re-trained to attain new models
for the current data. Two design questions need to be addressed in developing
ensemble methods for incremental learning with concept drift, i.e., which
historical (i.e., previously trained) models should be preserved and how to
utilize them. A novel ensemble learning method, namely Diversity and Transfer
based Ensemble Learning (DTEL), is proposed in this paper. Given newly arrived
data, DTEL uses each preserved historical model as an initial model and further
trains it with the new data via transfer learning. Furthermore, DTEL preserves
a diverse set of historical models, rather than a set of historical models that
are merely accurate in terms of classification accuracy. Empirical studies on
15 synthetic data streams and 4 real-world data streams (all with concept
drifts) demonstrate that DTEL can handle concept drift more effectively than 4
other state-of-the-art methods.
| Yu Sun, Ke Tang, Zexuan Zhu, Xin Yao | null | 1702.03500 | null | null |
On Consistency of Compressive Spectral Clustering | stat.ML cs.IT cs.LG math.IT | Spectral clustering is one of the most popular methods for community
detection in graphs. A key step in spectral clustering algorithms is the eigen
decomposition of the $n{\times}n$ graph Laplacian matrix to extract its $k$
leading eigenvectors, where $k$ is the desired number of clusters among $n$
objects. This is prohibitively complex to implement for very large datasets.
However, it has recently been shown that it is possible to bypass the eigen
decomposition by computing an approximate spectral embedding through graph
filtering of random signals. In this paper, we analyze the working of spectral
clustering performed via graph filtering on the stochastic block model.
Specifically, we characterize the effects of sparsity, dimensionality and
filter approximation error on the consistency of the algorithm in recovering
planted clusters.
| Muni Sreenivas Pydi and Ambedkar Dukkipati | null | 1702.03522 | null | null |
Similarity Preserving Representation Learning for Time Series Clustering | cs.AI cs.LG | A considerable amount of clustering algorithms take instance-feature matrices
as their inputs. As such, they cannot directly analyze time series data due to
its temporal nature, usually unequal lengths, and complex properties. This is a
great pity since many of these algorithms are effective, robust, efficient, and
easy to use. In this paper, we bridge this gap by proposing an efficient
representation learning framework that is able to convert a set of time series
with various lengths to an instance-feature matrix. In particular, we guarantee
that the pairwise similarities between time series are well preserved after the
transformation, thus the learned feature representation is particularly
suitable for the time series clustering task. Given a set of $n$ time series,
we first construct an $n\times n$ partially-observed similarity matrix by
randomly sampling $\mathcal{O}(n \log n)$ pairs of time series and computing
their pairwise similarities. We then propose an efficient algorithm that solves
a non-convex and NP-hard problem to learn new features based on the
partially-observed similarity matrix. By conducting extensive empirical
studies, we show that the proposed framework is more effective, efficient, and
flexible, compared to other state-of-the-art time series clustering methods.
| Qi Lei, Jinfeng Yi, Roman Vaculin, Lingfei Wu, Inderjit S. Dhillon | null | 1702.03584 | null | null |
Nearly Instance Optimal Sample Complexity Bounds for Top-k Arm Selection | cs.LG cs.DS stat.ML | In the Best-$k$-Arm problem, we are given $n$ stochastic bandit arms, each
associated with an unknown reward distribution. We are required to identify the
$k$ arms with the largest means by taking as few samples as possible. In this
paper, we make progress towards a complete characterization of the
instance-wise sample complexity bounds for the Best-$k$-Arm problem. On the
lower bound side, we obtain a novel complexity term to measure the sample
complexity that every Best-$k$-Arm instance requires. This is derived by an
interesting and nontrivial reduction from the Best-$1$-Arm problem. We also
provide an elimination-based algorithm that matches the instance-wise lower
bound within doubly-logarithmic factors. The sample complexity of our algorithm
strictly dominates the state-of-the-art for Best-$k$-Arm (modulo constant
factors).
| Lijie Chen, Jian Li, Mingda Qiao | null | 1702.03605 | null | null |
A Multi-model Combination Approach for Probabilistic Wind Power
Forecasting | cs.LG stat.AP | Short-term probabilistic wind power forecasting can provide critical
quantified uncertainty information of wind generation for power system
operation and control. Owing to the complicated characteristics of wind power
prediction error, it is difficult to develop a universal forecasting model
that dominates all alternative models. Therefore, a novel multi-model
combination (MMC) approach for short-term probabilistic wind generation
forecasting is proposed in this paper to exploit the advantages of different
forecasting models. The proposed approach combines forecasting models that
provide different kinds of probability density functions to
improve the probabilistic forecast accuracy. Three probabilistic forecasting
models based on the sparse Bayesian learning, kernel density estimation and
beta distribution fitting are used to form the combined model. The parameters
of the MMC model are solved based on Bayesian framework. Numerical tests
illustrate the effectiveness of the proposed MMC approach.
| You Lin, Ming Yang, Can Wan, Jianhui Wang, Yonghua Song | null | 1702.03613 | null | null |
Coresets for Kernel Regression | cs.LG cs.DS | Kernel regression is an essential and ubiquitous tool for non-parametric data
analysis, particularly popular among time series and spatial data. However, the
central operation which is performed many times, evaluating a kernel on the
data set, takes linear time. This is impractical for modern large data sets.
In this paper we describe coresets for kernel regression: compressed data
sets which can be used as proxy for the original data and have provably bounded
worst case error. The size of the coresets is independent of the raw number of
data points; rather, it depends only on the error guarantee and, in some cases,
the size of domain and amount of smoothing. We evaluate our methods on very
large time series and spatial data, and demonstrate that they incur negligible
error, can be constructed extremely efficiently, and allow for great
computational gains.
| Yan Zheng and Jeff M. Phillips | null | 1702.03644 | null | null |
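A hedged sketch of the object being compressed: Nadaraya-Watson kernel regression evaluated on the full data versus on a small weighted subset. Uniform sampling is used purely for illustration; the paper's coresets are constructed to give provably bounded worst-case error, which uniform sampling does not:

```python
import numpy as np

def kernel_regress(xq, X, y, w, bandwidth=0.3):
    """Nadaraya-Watson estimate at query points xq with point weights w."""
    K = np.exp(-((xq[:, None] - X[None, :]) ** 2) / (2 * bandwidth ** 2))
    K = K * w[None, :]
    return (K @ y) / K.sum(axis=1)

rng = np.random.default_rng(0)
n = 20000
X = rng.uniform(-3, 3, n)
y = np.sin(X) + 0.2 * rng.standard_normal(n)
xq = np.linspace(-3, 3, 7)

full = kernel_regress(xq, X, y, np.ones(n))          # linear time per query

# Stand-in "coreset": m weighted points used as a proxy for the raw data.
m = 200
idx = rng.choice(n, m, replace=False)
small = kernel_regress(xq, X[idx], y[idx], np.full(m, n / m))
print(np.max(np.abs(full - small)))                  # small error at 1% size
```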
Is Big Data Sufficient for a Reliable Detection of Non-Technical Losses? | cs.LG cs.AI | Non-technical losses (NTL) occur during the distribution of electricity in
power grids and include, but are not limited to, electricity theft and faulty
meters. In emerging countries, they may range up to 40% of the total
electricity distributed. In order to detect NTLs, machine learning methods are
used that learn irregular consumption patterns from customer data and
inspection results. The Big Data paradigm followed in modern machine learning
reflects the desire of deriving better conclusions from simply analyzing more
data, without the necessity of looking at theory and models. However, the
sample of inspected customers may be biased, i.e. it does not represent the
population of all customers. As a consequence, machine learning models trained
on these inspection results are biased as well and therefore lead to unreliable
predictions of whether customers cause NTL or not. In machine learning, this
issue is called covariate shift and has not been addressed in the literature on
NTL detection yet. In this work, we present a novel framework for quantifying
and visualizing covariate shift. We apply it to a commercial data set from
Brazil that consists of 3.6M customers and 820K inspection results. We show
that some features have a stronger covariate shift than others, making
predictions less reliable. In particular, previous inspections were focused on
certain neighborhoods or customer classes and were not sufficiently
spread among the population of customers. This framework is about to be
deployed in a commercial product for NTL detection.
| Patrick Glauner, Angelo Migliosi, Jorge Meira, Petko Valtchev, Radu
State, Franck Bettinger | null | 1702.03767 | null | null |
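One standard way to quantify covariate shift, offered as a hedged stand-in for the paper's framework: train a classifier to distinguish inspected customers from the overall population; the further its AUC is above 0.5, the stronger the shift. Uses scikit-learn with toy data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
# Toy stand-ins: a population and an inspection sample biased in feature 0
# (e.g., inspections concentrated in certain neighborhoods).
population = rng.standard_normal((5000, 3))
inspected = rng.standard_normal((800, 3)) + np.array([1.5, 0.0, 0.0])

X = np.vstack([population, inspected])
z = np.hstack([np.zeros(len(population)), np.ones(len(inspected))])

# AUC of a "was this customer inspected?" classifier measures the shift:
# 0.5 means no detectable shift; values near 1.0 mean severe shift.
probs = cross_val_predict(LogisticRegression(max_iter=1000), X, z, cv=5,
                          method="predict_proba")[:, 1]
print("shift AUC:", round(roc_auc_score(z, probs), 3))
```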
DNN Filter Bank Cepstral Coefficients for Spoofing Detection | cs.SD cs.CR cs.LG | With the development of speech synthesis techniques, automatic speaker
verification systems face the serious challenge of spoofing attacks. In order to
improve the reliability of speaker verification systems, we develop a new
filter bank based cepstral feature, deep neural network filter bank cepstral
coefficients (DNN-FBCC), to distinguish between natural and spoofed speech. The
deep neural network filter bank is automatically generated by training a filter
bank neural network (FBNN) using natural and synthetic speech. By adding
restrictions on the training rules, the learned weight matrix of FBNN is
band-limited and sorted by frequency, similar to the normal filter bank. Unlike
the manually designed filter bank, the learned filter bank has different filter
shapes in different channels, which can capture the differences between natural
and synthetic speech more effectively. The experimental results on the ASVspoof
{2015} database show that the Gaussian mixture model maximum-likelihood
(GMM-ML) classifier trained by the new feature performs better than the
state-of-the-art linear frequency cepstral coefficients (LFCC) based
classifier, especially on detecting unknown attacks.
| Hong Yu, Zheng-Hua Tan, Zhanyu Ma, Jun Guo | null | 1702.03791 | null | null |
Non-convex learning via Stochastic Gradient Langevin Dynamics: a
nonasymptotic analysis | cs.LG math.OC math.PR stat.ML | Stochastic Gradient Langevin Dynamics (SGLD) is a popular variant of
Stochastic Gradient Descent, where properly scaled isotropic Gaussian noise is
added to an unbiased estimate of the gradient at each iteration. This modest
change allows SGLD to escape local minima and suffices to guarantee asymptotic
convergence to global minimizers for sufficiently regular non-convex objectives
(Gelfand and Mitter, 1991). The present work provides a nonasymptotic analysis
in the context of non-convex learning problems, giving finite-time guarantees
for SGLD to find approximate minimizers of both empirical and population risks.
As in the asymptotic setting, our analysis relates the discrete-time SGLD
Markov chain to a continuous-time diffusion process. A new tool that drives the
results is the use of weighted transportation cost inequalities to quantify the
rate of convergence of SGLD to a stationary distribution in the Euclidean
$2$-Wasserstein distance.
| Maxim Raginsky, Alexander Rakhlin, Matus Telgarsky | null | 1702.03849 | null | null |
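The SGLD update itself is one line: a stochastic gradient step plus properly scaled isotropic Gaussian noise. A minimal sketch on a non-convex toy objective; the step size, inverse temperature, and objective are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_f(x):
    """Gradient of the non-convex double-well objective f(x) = (x^2 - 1)^2."""
    return 4.0 * x * (x ** 2 - 1.0)

eta, beta = 1e-3, 20.0            # step size and inverse temperature
x = 1.0                           # start inside the right-hand well
for _ in range(50000):
    noise = np.sqrt(2.0 * eta / beta) * rng.standard_normal()
    x = x - eta * grad_f(x) + noise    # the SGLD update

# Unlike plain gradient descent, the iterate can hop between the minima
# at -1 and +1; its long-run law approaches exp(-beta * f(x)).
print(x)
```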
Next-Step Conditioned Deep Convolutional Neural Networks Improve Protein
Secondary Structure Prediction | cs.LG q-bio.BM | Recently developed deep learning techniques have significantly improved the
accuracy of various speech and image recognition systems. In this paper we show
how to adapt some of these techniques to create a novel chained convolutional
architecture with next-step conditioning for improving performance on protein
sequence prediction problems. We explore its value by demonstrating its ability
to improve performance on eight-class secondary structure prediction. We first
establish a state-of-the-art baseline by adapting recent advances in
convolutional neural networks which were developed for vision tasks. This model
achieves 70.0% per amino acid accuracy on the CB513 benchmark dataset without
use of standard performance-boosting techniques such as ensembling or multitask
learning. We then improve upon this state-of-the-art result using a novel
chained prediction approach which frames the secondary structure prediction as
a next-step prediction problem. This sequential model achieves 70.3% Q8
accuracy on CB513 with a single model; an ensemble of these models produces
71.4% Q8 accuracy on the same test set, improving upon the previous overall
state of the art for the eight-class secondary structure problem. Our models
are implemented using TensorFlow, an open-source machine learning software
library available at TensorFlow.org; we aim to release the code for these
experiments as part of the TensorFlow repository.
| Akosua Busia and Navdeep Jaitly | null | 1702.03865 | null | null |
Cognitive Mapping and Planning for Visual Navigation | cs.CV cs.AI cs.LG cs.RO | We introduce a neural architecture for navigation in novel environments. Our
proposed architecture learns to map from first-person views and plans a
sequence of actions towards goals in the environment. The Cognitive Mapper and
Planner (CMP) is based on two key ideas: a) a unified joint architecture for
mapping and planning, such that the mapping is driven by the needs of the task,
and b) a spatial memory with the ability to plan given an incomplete set of
observations about the world. CMP constructs a top-down belief map of the world
and applies a differentiable neural net planner to produce the next action at
each time step. The accumulated belief of the world enables the agent to track
visited regions of the environment. We train and test CMP on navigation
problems in simulation environments derived from scans of real world buildings.
Our experiments demonstrate that CMP outperforms alternate learning-based
architectures, as well as, classical mapping and path planning approaches in
many cases. Furthermore, it naturally extends to semantically specified goals,
such as 'going to a chair'. We also deploy CMP on physical robots in indoor
environments, where it achieves reasonable performance, even though it is
trained entirely in simulation.
| Saurabh Gupta, Varun Tolani, James Davidson, Sergey Levine, Rahul
Sukthankar, Jitendra Malik | null | 1702.0392 | null | null |
Soft Weight-Sharing for Neural Network Compression | stat.ML cs.LG | The success of deep learning in numerous application domains created the
desire to run and train them on mobile devices. This, however, conflicts with
their computationally, memory- and energy-intensive nature, leading to a growing
interest in compression. Recent work by Han et al. (2015a) proposes a pipeline
that involves retraining, pruning and quantization of neural network weights,
obtaining state-of-the-art compression rates. In this paper, we show that
competitive compression rates can be achieved by using a version of soft
weight-sharing (Nowlan & Hinton, 1992). Our method achieves both quantization
and pruning in one simple (re-)training procedure. This point of view also
exposes the relation between compression and the minimum description length
(MDL) principle.
| Karen Ullrich, Edward Meeds, Max Welling | null | 1702.04008 | null | null |
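To make the idea above concrete: soft weight-sharing penalizes the negative log-likelihood of the network weights under a mixture-of-Gaussians prior, so weights cluster around a few shared values (quantization) while a component pinned near zero encourages pruning. A minimal sketch of that penalty, with fixed mixture parameters as a simplifying assumption (in the method they are learned jointly with the weights):

    import numpy as np

    def mixture_nll(w, means, log_sigmas, logits):
        """Negative log-likelihood of weights w under a 1-D Gaussian mixture prior."""
        pis = np.exp(logits) / np.exp(logits).sum()   # mixture proportions
        sigmas = np.exp(log_sigmas)
        d = w[:, None] - means[None, :]               # shape (n_weights, n_components)
        comp = pis * np.exp(-0.5 * (d / sigmas) ** 2) / (np.sqrt(2 * np.pi) * sigmas)
        return -np.log(comp.sum(axis=1) + 1e-12).sum()

    w = np.random.randn(1000) * 0.1                   # stand-in network weights
    means = np.array([0.0, -0.2, 0.2])                # component at zero encourages pruning
    penalty = mixture_nll(w, means, log_sigmas=np.full(3, -3.0), logits=np.zeros(3))
    # The training objective would be: task_loss + tau * penalty, tau a trade-off factor.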
Is a Data-Driven Approach still Better than Random Choice with Naive
Bayes classifiers? | cs.LG stat.ML | We study the performance of data-driven, a priori and random approaches to
label space partitioning for multi-label classification with a Gaussian Naive
Bayes classifier. Experiments were performed on 12 benchmark data sets and
evaluated on 5 established measures of classification quality: micro and macro
averaged F1 score, Subset Accuracy and Hamming loss. Data-driven methods are
significantly better than an average run of the random baseline. In the case of
F1 scores and Subset Accuracy, data-driven approaches were more likely than not
to outperform random approaches in the worst case. There always exists a method
that performs better than a priori methods in the worst case. The advantage of
data-driven methods over a priori methods is smaller with a weak classifier
than when tree classifiers are used.
| Piotr Szyma\'nski and Tomasz Kajdanowicz | null | 1702.04013 | null | null |
Mutual Kernel Matrix Completion | cs.LG cs.NA stat.ML | With the huge influx of various data nowadays, extracting knowledge from them
has become an interesting but tedious task among data scientists, particularly
when the data come in heterogeneous form and have missing information. Many
data completion techniques have been introduced, especially with the advent of
kernel methods. However, among the many data completion techniques available in
the literature, studies about mutually completing several incomplete kernel
matrices have not been given much attention yet. In this paper, we present a
new method, called Mutual Kernel Matrix Completion (MKMC) algorithm, that
tackles this problem of mutually inferring the missing entries of multiple
kernel matrices by combining the notions of data fusion and kernel matrix
completion, applied on biological data sets to be used for classification task.
We first introduce an objective function that is minimized by exploiting
the EM algorithm, which in turn results in an estimate of the missing entries
of the kernel matrices involved. The completed kernel matrices are then
combined to produce a model matrix that can be used to further improve the
obtained estimates. An interesting result of our study is that the E-step and
the M-step are given in closed form, which makes our algorithm efficient in
terms of time and memory. After completion, the (completed) kernel matrices are
then used to train an SVM classifier to test how well the relationships among
the entries are preserved. Our empirical results show that the proposed
algorithm bested the traditional completion techniques in preserving the
relationships among the data points, and in accurately recovering the missing
kernel matrix entries. By far, MKMC offers a promising solution to the problem
of mutual estimation of a number of relevant incomplete kernel matrices.
| Tsuyoshi Kato and Rachelle Rivero | null | 1702.04077 | null | null |
Practical Learning of Predictive State Representations | stat.ML cs.LG | Over the past decade there has been considerable interest in spectral
algorithms for learning Predictive State Representations (PSRs). Spectral
algorithms have appealing theoretical guarantees; however, the resulting models
do not always perform well on inference tasks in practice. One reason for this
behavior is the mismatch between the intended task (accurate filtering or
prediction) and the loss function being optimized by the algorithm (estimation
error in model parameters).
A natural idea is to improve performance by refining PSRs using an algorithm
such as EM. Unfortunately it is not obvious how to apply an EM-style
algorithm in the context of PSRs as the log likelihood is not well defined for
all PSRs. We show that it is possible to overcome this problem using ideas from
Predictive State Inference Machines.
We combine spectral algorithms for PSRs as a consistent and efficient
initialization with PSIM-style updates to refine the resulting model
parameters. By combining these two ideas we develop Inference Gradients, a
simple, fast, and robust method for practical learning of PSRs. Inference
Gradients performs gradient descent in the PSR parameter space to optimize an
inference-based loss function like PSIM. Because Inference Gradients uses a
spectral initialization we get the same consistency benefits as PSRs. We show
that Inference Gradients outperforms both PSRs and PSIMs on real and synthetic
data sets.
| Carlton Downey, Ahmed Hefny, Geoffrey Gordon | null | 1702.04121 | null | null |
Gaussian-Dirichlet Posterior Dominance in Sequential Learning | stat.ML cs.LG math.PR | We consider the problem of sequential learning from categorical observations
bounded in [0,1]. We establish an ordering between the Dirichlet posterior over
categorical outcomes and a Gaussian posterior under observations with N(0,1)
noise. We establish that, conditioned upon identical data with at least two
observations, the posterior mean of the categorical distribution will always
second-order stochastically dominate the posterior mean of the Gaussian
distribution. These results provide a useful tool for the analysis of
sequential learning under categorical outcomes.
| Ian Osband and Benjamin Van Roy | null | 1702.04126 | null | null |
Crossmatching variable objects with the Gaia data | astro-ph.IM cs.LG | Tens of millions of new variable objects are expected to be identified in
over a billion time series from the Gaia mission. Crossmatching known variable
sources with those from Gaia is crucial to incorporate current knowledge,
understand how these objects appear in the Gaia data, train supervised
classifiers to recognise known classes, and validate the results of the
Variability Processing and Analysis Coordination Unit (CU7) within the Gaia
Data Analysis and Processing Consortium (DPAC). The method employed by CU7 to
crossmatch variables for the first Gaia data release includes a binary
classifier to take into account positional uncertainties, proper motion,
targeted variability signals, and artefacts present in the early calibration of
the Gaia data. Crossmatching with a classifier makes it possible to automate
all those decisions which are typically made during visual inspection. The
classifier can be trained with objects characterized by a variety of attributes
to ensure similarity in multiple dimensions (astrometry, photometry,
time-series features), with no need for a-priori transformations to compare
different photometric bands, or of predictive models of the motion of objects
to compare positions. Other advantages as well as some disadvantages of the
method are discussed. Implementation steps from the training to the assessment
of the crossmatch classifier and selection of results are described.
| Lorenzo Rimoldini, Krzysztof Nienartowicz, Maria S\"uveges, Jonathan
Charnas, Leanne P. Guy, Gr\'egory Jevardat de Fombelle, Berry Holl, Isabelle
Lecoeur-Ta\"ibi, Nami Mowlavi, Diego Ord\'o\~nez-Blanco, and Laurent Eyer | null | 1702.04165 | null | null |
On Detecting Adversarial Perturbations | stat.ML cs.AI cs.CV cs.LG | Machine learning, and deep learning in particular, has advanced tremendously on
perceptual tasks in recent years. However, it remains vulnerable against
adversarial perturbations of the input that have been crafted specifically to
fool the system while being quasi-imperceptible to a human. In this work, we
propose to augment deep neural networks with a small "detector" subnetwork
which is trained on the binary classification task of distinguishing genuine
data from data containing adversarial perturbations. Our method is orthogonal
to prior work on addressing adversarial perturbations, which has mostly focused
on making the classification network itself more robust. We show empirically
that adversarial perturbations can be detected surprisingly well even though
they are quasi-imperceptible to humans. Moreover, while the detectors have been
trained to detect only a specific adversary, they generalize to similar and
weaker adversaries. In addition, we propose an adversarial attack that fools
both the classifier and the detector and a novel training procedure for the
detector that counteracts this attack.
| Jan Hendrik Metzen, Tim Genewein, Volker Fischer, Bastian Bischoff | null | 1702.04267 | null | null |
Exploring loss function topology with cyclical learning rates | cs.LG cs.NE | We present observations and discussion of previously unreported phenomena
discovered while training residual networks. The goal of this work is to better
understand the nature of neural networks through the examination of these new
empirical results. These behaviors were identified through the application of
Cyclical Learning Rates (CLR) and linear network interpolation. Among these
behaviors are counterintuitive increases and decreases in training loss and
instances of rapid training. For example, we demonstrate how CLR can produce
greater testing accuracy than traditional training despite using large learning
rates. Files to replicate these results are available at
https://github.com/lnsmith54/exploring-loss
| Leslie N. Smith and Nicholay Topin | null | 1702.04283 | null | null |
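The triangular CLR policy behind these experiments is straightforward to reproduce. A minimal sketch, assuming the triangular schedule (linear ramps between a base and a maximum rate); the bounds and half-cycle length below are illustrative:

    def triangular_clr(iteration, base_lr=0.001, max_lr=0.006, step_size=2000):
        """Learning rate at a given iteration under the triangular CLR policy.

        step_size is the number of iterations in half a cycle.
        """
        cycle = iteration // (2 * step_size)
        x = abs(iteration / step_size - 2 * cycle - 1)  # position in cycle, in [0, 1]
        return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

    # The rate ramps from 0.001 up to 0.006 over 2000 iterations, then back down.
    for it in (0, 1000, 2000, 3000, 4000):
        print(it, triangular_clr(it))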
Small Boxes Big Data: A Deep Learning Approach to Optimize Variable
Sized Bin Packing | cs.LG stat.ML | Bin Packing problems have been widely studied because of their broad
applications in different domains. Known as a set of NP-hard problems, they
have different variations and many heuristics have been proposed for
obtaining approximate solutions. Specifically, for the 1D variable sized bin
packing problem, the two key sets of optimization heuristics are the bin
assignment and the bin allocation. Usually the performance of a single static
optimization heuristic cannot beat that of a dynamic one which is tailored for
each bin packing instance. Building such an adaptive system requires modeling
the relationship between bin features and packing performance profiles. The primary
drawbacks of traditional machine learning approaches for this task are the natural
limitations of feature engineering, such as the curse of dimensionality and
feature selection quality. We introduce a deep learning approach to overcome
the drawbacks by applying a large training data set, auto feature selection and
fast, accurate labeling. We show in this paper how to build such a system by
both theoretical formulation and engineering practices. Our prediction system
achieves up to 89% training accuracy and 72% validation accuracy to select the
best heuristic that can generate a better quality bin packing solution.
| Feng Mao, Edgar Blanco, Mingang Fu, Rohit Jain, Anurag Gupta,
Sebastien Mancel, Rong Yuan, Stephen Guo, Sai Kumar, Yayang Tian | null | 1702.04415 | null | null |
Efficient Multitask Feature and Relationship Learning | cs.LG cs.AI | We consider a multitask learning problem, in which several predictors are
learned jointly. Prior research has shown that learning the relations between
tasks, and between the input features, together with the predictor, can lead to
better generalization and interpretability, which proved to be useful for
applications in many domains. In this paper, we consider a formulation of
multitask learning that learns the relationships both between tasks and between
features, represented through a task covariance and a feature covariance
matrix, respectively. First, we demonstrate that existing methods proposed for
this problem present an issue that may lead to ill-posed optimization. We then
propose an alternative formulation, as well as an efficient algorithm to
optimize it. Using ideas from optimization and graph theory, we propose an
efficient coordinate-wise minimization algorithm that has a closed form
solution for each block subproblem. Our experiments show that the proposed
optimization method is orders of magnitude faster than its competitors. We also
provide a nonlinear extension that is able to achieve better generalization
than existing methods.
| Han Zhao, Otilia Stretcu, Alex Smola, Geoff Gordon | null | 1702.04423 | null | null |
Robust Stochastic Configuration Networks with Kernel Density Estimation | cs.NE cs.LG stat.ML | Neural networks have been widely used as predictive models to fit data
distribution, and they could be implemented through learning a collection of
samples. In many applications, however, the given dataset may contain noisy
samples or outliers which may result in a poor learner model in terms of
generalization. This paper contributes to a development of robust stochastic
configuration networks (RSCNs) for resolving uncertain data regression
problems. RSCNs are built on original stochastic configuration networks with
weighted least squares method for evaluating the output weights, and the input
weights and biases are incrementally and randomly generated by satisfying
a set of inequality constraints. The kernel density estimation (KDE) method is
employed to set the penalty weights for each training sample, so that some
negative impacts, caused by noisy data or outliers, on the resulting learner
model can be reduced. The alternating optimization technique is applied for
updating a RSCN model with improved penalty weights computed from the kernel
density estimation function. Performance evaluation is carried out by a
function approximation, four benchmark datasets and a case study on engineering
application. Comparisons to other robust randomised neural modelling
techniques, including the probabilistic robust learning algorithm for neural
networks with random weights and improved RVFL networks, indicate that the
proposed RSCNs with KDE perform favourably and demonstrate good potential for
real-world applications.
| Dianhui Wang, Ming Li | null | 1702.04459 | null | null |
Frustratingly Short Attention Spans in Neural Language Modeling | cs.CL cs.AI cs.LG cs.NE | Neural language models predict the next token using a latent representation
of the immediate token history. Recently, various methods for augmenting neural
language models with an attention mechanism over a differentiable memory have
been proposed. For predicting the next token, these models query information
from a memory of the recent history which can facilitate learning mid- and
long-range dependencies. However, conventional attention mechanisms used in
memory-augmented neural language models produce a single output vector per time
step. This vector is used both for predicting the next token as well as for the
key and value of a differentiable memory of a token history. In this paper, we
propose a neural language model with a key-value attention mechanism that
outputs separate representations for the key and value of a differentiable
memory, as well as for encoding the next-word distribution. This model
outperforms existing memory-augmented neural language models on two corpora.
Yet, we found that our method mainly utilizes a memory of the five most recent
output representations. This led to the unexpected main finding that a much
simpler model based only on the concatenation of recent output representations
from previous time steps is on par with more sophisticated memory-augmented
neural language models.
| Micha{\l} Daniluk, Tim Rockt\"aschel, Johannes Welbl, Sebastian Riedel | null | 1702.04521 | null | null |
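To make the key-value separation concrete, the sketch below runs one attention step in NumPy: past outputs are projected into distinct key and value representations, the current step attends over the keys of the last N outputs, and a third projection encodes the next-word distribution. This is a generic sketch of the mechanism, not the paper's exact model; the dimensions and random projection matrices are placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    d, N = 64, 5                       # hidden size and attention window
    W_k, W_v, W_q, W_p = (rng.normal(scale=0.1, size=(d, d)) for _ in range(4))

    H = rng.normal(size=(N + 1, d))    # outputs h_{t-N}, ..., h_t of an RNN
    h_t, H_past = H[-1], H[:-1]

    keys = H_past @ W_k.T              # separate key representations of the memory
    values = H_past @ W_v.T            # separate value representations of the memory
    query = h_t @ W_q.T

    scores = keys @ query / np.sqrt(d)
    attn = np.exp(scores - scores.max()); attn /= attn.sum()
    context = attn @ values            # value read-out, distinct from the keys

    predict = h_t @ W_p.T              # third representation for the next-word distribution
    logits_input = np.tanh(context + predict)  # fed to the output softmax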
On the Discrepancy Between Kleinberg's Clustering Axioms and $k$-Means
Clustering Algorithm Behavior | cs.LG cs.AI | This paper investigates the validity of Kleinberg's axioms for clustering
functions with respect to the quite popular clustering algorithm called
$k$-means. While Kleinberg's axioms have been discussed heavily in the past, we
concentrate here on the case predominantly relevant for the $k$-means algorithm,
that is, behavior embedded in Euclidean space. We point at some contradictions
and counterintuitive aspects of this axiomatic set within $\mathbb{R}^m$
that were evidently not discussed so far. Our results suggest that apparently
without defining clearly what kind of clusters we expect we will not be able to
construct a valid axiomatic system. In particular we look at the shape and the
gaps between the clusters. Finally we demonstrate that there exist several ways
to reconcile the formulation of the axioms with their intended meaning and that
under this reformulation the axioms cease to be contradictory and the real-world
$k$-means algorithm conforms to this axiomatic system.
| Robert K{\l}opotek and Mieczys{\l}aw K{\l}opotek | null | 1702.04577 | null | null |
A Spacetime Approach to Generalized Cognitive Reasoning in Multi-scale
Learning | cs.AI cs.LG | In modern machine learning, pattern recognition replaces realtime semantic
reasoning. The mapping from input to output is learned with fixed semantics by
training outcomes deliberately. This is an expensive and static approach which
depends heavily on the availability of a very particular kind of prior training
data to make inferences in a single step. Conventional semantic network
approaches, on the other hand, base multi-step reasoning on modal logics and
handcrafted ontologies, which are ad hoc, expensive to construct, and fragile
to inconsistency. Both approaches may be enhanced by a hybrid approach, which
completely separates reasoning from pattern recognition. In this report, a
quasi-linguistic approach to knowledge representation is discussed, motivated
by spacetime structure. Tokenized patterns from diverse sources are integrated
to build a lightly constrained and approximately scale-free network. This is
then parsed with very simple recursive algorithms to generate
`brainstorming' sets of reasoned knowledge.
| Mark Burgess | null | 1702.04638 | null | null |
Generative Temporal Models with Memory | cs.LG cs.NE stat.ML | We consider the general problem of modeling temporal data with long-range
dependencies, wherein new observations are fully or partially predictable based
on temporally-distant, past observations. A sufficiently powerful temporal
model should separate predictable elements of the sequence from unpredictable
elements, express uncertainty about those unpredictable elements, and rapidly
identify novel elements that may help to predict the future. To create such
models, we introduce Generative Temporal Models augmented with external memory
systems. They are developed within the variational inference framework, which
provides both a practical training methodology and methods to gain insight into
the models' operation. We show, on a range of problems with sparse, long-term
temporal dependencies, that these models store information from early in a
sequence, and reuse this stored information efficiently. This allows them to
perform substantially better than existing models based on well-known recurrent
neural networks, like LSTMs.
| Mevlana Gemici, Chia-Chun Hung, Adam Santoro, Greg Wayne, Shakir
Mohamed, Danilo J. Rezende, David Amos, Timothy Lillicrap | null | 1702.04649 | null | null |
Distributed deep learning on edge-devices: feasibility via adaptive
compression | cs.LG | A large portion of data mining and analytic services use modern machine
learning techniques, such as deep learning. The state-of-the-art results by
deep learning come at the price of an intensive use of computing resources. The
leading frameworks (e.g., TensorFlow) are executed on GPUs or on high-end
servers in datacenters. On the other end, there is a proliferation of personal
devices with possibly free CPU cycles; this can enable services to run in
users' homes, embedding machine learning operations. In this paper, we ask the
following question: Is distributed deep learning computation on WAN connected
devices feasible, in spite of the traffic caused by learning tasks? We show
that such a setup raises some important challenges, most notably the ingress
traffic that the servers hosting the up-to-date model have to sustain.
In order to reduce this stress, we propose adaComp, a novel algorithm for
compressing worker updates to the model on the server. Applicable to stochastic
gradient descent based approaches, it combines efficient gradient selection and
learning rate modulation. We then experiment and measure the impact of
compression, device heterogeneity and reliability on the accuracy of learned
models, with an emulator platform that embeds TensorFlow into Linux containers.
We report a reduction of the total amount of data sent by workers to the server
by two orders of magnitude (e.g., a 191-fold reduction for a convolutional network
on the MNIST dataset), when compared to a standard asynchronous stochastic
gradient descent, while preserving model accuracy.
| Corentin Hardy, Erwan Le Merrer and Bruno Sericola | null | 1702.04683 | null | null |
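The exact selection and learning-rate-modulation rules of adaComp are not spelled out above, so the sketch below shows only the generic ingredient such schemes build on: sparsifying a worker's update by keeping the largest-magnitude gradient entries and shipping them as (index, value) pairs. The keep ratio is an arbitrary illustrative choice.

    import numpy as np

    def compress_update(grad, keep_ratio=0.01):
        """Keep only the largest-magnitude entries of a flattened gradient.

        Returns (indices, values); everything else is treated as zero server-side.
        """
        k = max(1, int(keep_ratio * grad.size))
        idx = np.argpartition(np.abs(grad), -k)[-k:]
        return idx, grad[idx]

    def apply_update(model, idx, values, lr):
        model[idx] -= lr * values          # server applies the sparse update
        return model

    grad = np.random.randn(1_000_000)
    idx, vals = compress_update(grad)      # ~100x less traffic at keep_ratio=0.01
    model = apply_update(np.zeros(1_000_000), idx, vals, lr=0.1)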
Nearest Labelset Using Double Distances for Multi-label Classification | stat.ML cs.LG | Multi-label classification is a type of supervised learning where an instance
may belong to multiple labels simultaneously. Predicting each label
independently has been criticized for not exploiting any correlation between
labels. In this paper we propose a novel approach, Nearest Labelset using
Double Distances (NLDD), that predicts the labelset observed in the training
data that minimizes a weighted sum of the distances in both the feature space
and the label space to the new instance. The weights specify the relative
tradeoff between the two distances. The weights are estimated from a binomial
regression of the number of misclassified labels as a function of the two
distances. Model parameters are estimated by maximum likelihood. NLDD only
considers labelsets observed in the training data, thus implicitly taking into
account label dependencies. Experiments on benchmark multi-label data sets show
that the proposed method on average outperforms other well-known approaches in
terms of Hamming loss, 0/1 loss, and multi-label accuracy and ranks second
after ECC on the F-measure.
| Hyukjun Gweon and Matthias Schonlau and Stefan Steiner | null | 1702.04684 | null | null |
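A minimal sketch of the NLDD prediction rule described above: score every labelset observed in training by a weighted sum of its feature-space and label-space distances to the new instance, and return the minimizer. Here the label-space position of the new instance comes from assumed per-label probability estimates, and the weights are fixed constants rather than fitted by the paper's binomial regression.

    import numpy as np

    def nldd_predict(x_new, X_train, Y_train, prob_new, w_feat=1.0, w_lab=1.0):
        """Return the training labelset minimizing w_feat*d_x + w_lab*d_y.

        prob_new: estimated per-label probabilities for x_new (e.g. from
        binary relevance classifiers), giving its position in label space.
        """
        d_x = np.linalg.norm(X_train - x_new, axis=1)       # feature-space distance
        d_y = np.linalg.norm(Y_train - prob_new, axis=1)    # label-space distance
        return Y_train[np.argmin(w_feat * d_x + w_lab * d_y)]

    X_train = np.random.randn(100, 10)
    Y_train = (np.random.rand(100, 4) > 0.7).astype(float)  # observed labelsets
    pred = nldd_predict(np.random.randn(10), X_train, Y_train,
                        prob_new=np.array([0.9, 0.1, 0.2, 0.8]))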
Support Vector Machines and generalisation in HEP | physics.data-an cs.LG hep-ex | We review the concept of Support Vector Machines (SVMs) and discuss examples
of their use in a number of scenarios. Several SVM implementations have been
used in HEP and we exemplify this algorithm using the Toolkit for Multivariate
Analysis (TMVA) implementation. We discuss examples relevant to HEP including
background suppression for $H\to\tau^+\tau^-$ at the LHC with several different
kernel functions. Performance benchmarking leads to the issue of generalisation
of hyper-parameter selection. The avoidance of fine tuning (overtraining or
overfitting) in MVA hyper-parameter optimisation, i.e. the ability to ensure
generalised performance of an MVA that is independent of the training,
validation and test samples, is of utmost importance. We discuss this issue and
compare and contrast performance of hold-out and k-fold cross-validation. We
have extended the SVM functionality and introduced tools to facilitate cross
validation in TMVA and present results based on these improvements.
| Adrian Bevan, Rodrigo Gamboa Go\~ni, Jon Hays, Tom Stevenson | 10.1088/1742-6596/898/7/072021 | 1702.04686 | null | null |
Linear Time Computation of Moments in Sum-Product Networks | cs.LG cs.AI | Bayesian online algorithms for Sum-Product Networks (SPNs) need to update
their posterior distribution after seeing one single additional instance. To do
so, they must compute moments of the model parameters under this distribution.
The best existing method for computing such moments scales quadratically in the
size of the SPN, although it scales linearly for trees. This unfortunate
scaling makes Bayesian online algorithms prohibitively expensive, except for
small or tree-structured SPNs. We propose an optimal linear-time algorithm that
works even when the SPN is a general directed acyclic graph (DAG), which
significantly broadens the applicability of Bayesian online algorithms for
SPNs. There are three key ingredients in the design and analysis of our
algorithm: 1). For each edge in the graph, we construct a linear time reduction
from the moment computation problem to a joint inference problem in SPNs. 2).
Using the property that each SPN computes a multilinear polynomial, we give an
efficient procedure for polynomial evaluation by differentiation without
expanding the network that may contain exponentially many monomials. 3). We
propose a dynamic programming method to further reduce the computation of the
moments of all the edges in the graph from quadratic to linear. We demonstrate
the usefulness of our linear time algorithm by applying it to develop a linear
time assume density filter (ADF) for SPNs.
| Han Zhao, Geoff Gordon | null | 1702.04767 | null | null |
Training Language Models Using Target-Propagation | cs.CL cs.LG cs.NE | While Truncated Back-Propagation through Time (BPTT) is the most popular
approach to training Recurrent Neural Networks (RNNs), it suffers from being
inherently sequential (making parallelization difficult) and from truncating
gradient flow between distant time-steps. We investigate whether Target
Propagation (TPROP) style approaches can address these shortcomings.
Unfortunately, extensive experiments suggest that TPROP generally underperforms
BPTT, and we end with an analysis of this phenomenon, and suggestions for
future work.
| Sam Wiseman, Sumit Chopra, Marc'Aurelio Ranzato, Arthur Szlam, Ruoyu
Sun, Soumith Chintala, Nicolas Vasilache | null | 1702.0477 | null | null |
Precise Recovery of Latent Vectors from Generative Adversarial Networks | cs.LG cs.NE stat.ML | Generative adversarial networks (GANs) transform latent vectors into visually
plausible images. It is generally thought that the original GAN formulation
gives no out-of-the-box method to reverse the mapping, projecting images back
into latent space. We introduce a simple, gradient-based technique called
stochastic clipping. In experiments, for images generated by the GAN, we
precisely recover their latent vector pre-images 100% of the time. Additional
experiments demonstrate that this method is robust to noise. Finally, we show
that even for unseen images, our method appears to recover unique encodings.
| Zachary C. Lipton, Subarna Tripathi | null | 1702.04782 | null | null |
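The recovery procedure above is plain gradient descent on the reconstruction error in latent space, with one twist: after each step, latent components that leave the prior's support are re-drawn uniformly at random inside it rather than clipped to the boundary. A sketch follows, assuming a grad_fn that returns the gradient of ||G(z) - x||^2 with respect to z (in practice obtained by autodiff through the generator); the step size, range [-1, 1] and iteration count are illustrative.

    import numpy as np

    def recover_latent(grad_fn, dim, steps=10_000, lr=0.01, rng=None):
        """Gradient descent on z with stochastic clipping to recover a pre-image."""
        rng = rng or np.random.default_rng()
        z = rng.uniform(-1.0, 1.0, size=dim)
        for _ in range(steps):
            z = z - lr * grad_fn(z)                     # descend on ||G(z) - x||^2
            out = np.abs(z) > 1.0                       # components outside the support
            z[out] = rng.uniform(-1.0, 1.0, out.sum())  # stochastic clipping: resample
        return z

    # Toy check with a linear "generator" G(z) = A z, whose gradient is closed-form:
    A = np.random.default_rng(0).normal(size=(32, 8))
    z_true = np.random.default_rng(1).uniform(-1, 1, 8)
    x = A @ z_true
    z_hat = recover_latent(lambda z: 2 * A.T @ (A @ z - x), dim=8)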
Learning to Use Learners' Advice | cs.LG | In this paper, we study a variant of the framework of online learning using
expert advice with limited/bandit feedback. We consider each expert as a
learning entity, seeking to more accurately reflect certain real-world
applications. In our setting, the feedback at any time $t$ is limited in a
sense that it is only available to the expert $i^t$ that has been selected by
the central algorithm (forecaster), \emph{i.e.}, only the expert $i^t$ receives
feedback from the environment and gets to learn at time $t$. We consider a
generic black-box approach whereby the forecaster does not control or know the
learning dynamics of the experts apart from knowing the following no-regret
learning property: the average regret of any expert $j$ vanishes at a rate of
at least $O(t_j^{\alpha-1})$ with $t_j$ learning steps, where $\alpha
\in [0, 1]$ is a parameter.
In the spirit of competing against the best action in hindsight in
multi-armed bandits problem, our goal here is to be competitive w.r.t. the
cumulative losses the algorithm could receive by following the policy of always
selecting one expert. We prove the following hardness result: without any
coordination between the forecaster and the experts, it is impossible to design
a forecaster achieving no-regret guarantees. In order to circumvent this
hardness result, we consider a practical assumption allowing the forecaster to
"guide" the learning process of the experts by filtering/blocking some of the
feedback observed by them from the environment, \emph{i.e.}, not allowing the
selected expert $i^t$ to learn at time $t$ for some time steps. Then, we design
a novel no-regret learning algorithm for this problem setting by
carefully guiding the feedback observed by experts. We prove that this algorithm
achieves a worst-case expected cumulative regret of $O(T^{\frac{1}{2 -
\alpha}})$ after $T$ time steps.
| Adish Singla, Hamed Hassani, Andreas Krause | null | 1702.04825 | null | null |
Dynamic Partition Models | stat.ML cs.LG | We present a new approach for learning compact and intuitive distributed
representations with binary encoding. Rather than summing up expert votes as in
products of experts, we employ for each variable the opinion of the most
reliable expert. Data points are hence explained through a partitioning of the
variables into expert supports. The partitions are dynamically adapted based on
which experts are active. During the learning phase we adopt a smoothed version
of this model that uses separate mixtures for each data dimension. In our
experiments we achieve accurate reconstructions of high-dimensional data points
with at most a dozen experts.
| Marc Goessling, Yali Amit | null | 1702.04832 | null | null |
Sketched Ridge Regression: Optimization Perspective, Statistical
Perspective, and Model Averaging | stat.ML cs.LG cs.NA | We address the statistical and optimization impacts of the classical sketch
and Hessian sketch used to approximately solve the Matrix Ridge Regression
(MRR) problem. Prior research has quantified the effects of classical sketch on
the strictly simpler least squares regression (LSR) problem. We establish that
classical sketch has a similar effect upon the optimization properties of MRR
as it does on those of LSR: namely, it recovers nearly optimal solutions. By
contrast, Hessian sketch does not have this guarantee, instead, the
approximation error is governed by a subtle interplay between the "mass" in the
responses and the optimal objective value.
For both types of approximation, the regularization in the sketched MRR
problem results in significantly different statistical properties from those of
the sketched LSR problem. In particular, there is a bias-variance trade-off in
sketched MRR that is not present in sketched LSR. We provide upper and lower
bounds on the bias and variance of sketched MRR, these bounds show that
classical sketch significantly increases the variance, while Hessian sketch
significantly increases the bias. Empirically, sketched MRR solutions can have
risks that are higher by an order-of-magnitude than those of the optimal MRR
solutions.
We establish theoretically and empirically that model averaging greatly
decreases the gap between the risks of the true and sketched solutions to the
MRR problem. Thus, in parallel or distributed settings, sketching combined with
model averaging is a powerful technique that quickly obtains near-optimal
solutions to the MRR problem while greatly mitigating the increased statistical
risk incurred by sketching.
| Shusen Wang and Alex Gittens and Michael W. Mahoney | null | 1702.04837 | null | null |
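The two sketches being compared are easy to state in code: with a sketching matrix S, classical sketch solves the ridge problem on (SA, Sb), while Hessian sketch sketches only the quadratic term and keeps the exact A^T b; model averaging solves several independently sketched problems and averages the solutions. A minimal NumPy sketch with a Gaussian sketching matrix (structured sketches would be used for speed in practice); the sizes are illustrative.

    import numpy as np

    def classical_sketch(A, b, S, gamma):
        SA, Sb = S @ A, S @ b
        return np.linalg.solve(SA.T @ SA + gamma * np.eye(A.shape[1]), SA.T @ Sb)

    def hessian_sketch(A, b, S, gamma):
        SA = S @ A
        return np.linalg.solve(SA.T @ SA + gamma * np.eye(A.shape[1]), A.T @ b)

    rng = np.random.default_rng(0)
    n, d, s, gamma = 5000, 50, 500, 1.0
    A, b = rng.normal(size=(n, d)), rng.normal(size=n)

    # Model averaging: solve g independently sketched problems and average.
    g = 10
    sols = [classical_sketch(A, b, rng.normal(size=(s, n)) / np.sqrt(s), gamma)
            for _ in range(g)]
    x_avg = np.mean(sols, axis=0)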
Generalizing Jensen and Bregman divergences with comparative convexity
and the statistical Bhattacharyya distances with comparable means | cs.IT cs.LG math.IT | Comparative convexity is a generalization of convexity relying on abstract
notions of means. We define the Jensen divergence and the Jensen diversity from
the viewpoint of comparative convexity, and show how to obtain the generalized
Bregman divergences as limit cases of skewed Jensen divergences. In particular,
we report explicit formulas for these generalized Bregman divergences when
considering quasi-arithmetic means. Finally, we introduce a generalization of
the Bhattacharyya statistical distances based on comparative means using
relative convexity.
| Frank Nielsen and Richard Nock | null | 1702.04877 | null | null |
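For orientation, the classical arithmetic-mean special case of these objects is easy to write down: the skewed Jensen divergence of a convex generator F recovers, after rescaling, the Bregman divergence in the limit of extreme skew. The sketch below checks this numerically for the squared-norm generator; it does not implement the quasi-arithmetic generalization introduced in the paper.

    import numpy as np

    F = lambda x: float(x @ x)      # convex generator; its Bregman divergence
    gradF = lambda x: 2 * x         # is the squared Euclidean distance

    def skewed_jensen(p, q, alpha):
        return alpha * F(p) + (1 - alpha) * F(q) - F(alpha * p + (1 - alpha) * q)

    def bregman(p, q):
        return F(p) - F(q) - gradF(q) @ (p - q)

    p, q = np.array([1.0, 2.0]), np.array([0.0, 1.0])
    for alpha in (0.5, 0.1, 0.01, 0.001):
        print(alpha, skewed_jensen(p, q, alpha) / alpha)  # -> bregman(p, q)
    print("Bregman:", bregman(p, q))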
Reflexive Regular Equivalence for Bipartite Data | cs.LG cs.AI stat.ML | Bipartite data is common in data engineering and brings unique challenges,
particularly when it comes to clustering tasks that impose strong structural
assumptions. This work presents an unsupervised method for assessing similarity
in bipartite data. Similar to some co-clustering methods, the method is based
on regular equivalence in graphs. The algorithm uses spectral properties of a
bipartite adjacency matrix to estimate similarity in both dimensions. The
method is reflexive in that similarity in one dimension is used to inform
similarity in the other. Reflexive regular equivalence can also use the
structure of transitivities -- in a network sense -- the contribution of which
is controlled by the algorithm's only free parameter, $\alpha$. The method is
completely unsupervised and can be used to validate assumptions of
co-similarity, which are required but often untested, in co-clustering
analyses. Three variants of the method with different normalizations are tested
on synthetic data. The method is found to be robust to noise and well-suited to
asymmetric co-similar structure, making it particularly informative for cluster
analysis and recommendation in bipartite data of unknown structure. In
experiments, the convergence and speed of the algorithm are found to be stable
for different levels of noise. Real-world data from a network of malaria genes
are analyzed, where the similarity produced by the reflexive method is shown to
outperform other measures at correctly classifying genes.
| Aaron Gerow, Mingyang Zhou, Stan Matwin, Feng Shi | null | 1702.04956 | null | null |
Unbiased Online Recurrent Optimization | cs.NE cs.LG | The novel Unbiased Online Recurrent Optimization (UORO) algorithm allows for
online learning of general recurrent computational graphs such as recurrent
network models. It works in a streaming fashion and avoids backtracking through
past activations and inputs. UORO is computationally as costly as Truncated
Backpropagation Through Time (truncated BPTT), a widespread algorithm for
online learning of recurrent networks. UORO is a modification of NoBackTrack
that bypasses the need for model sparsity and makes implementation easy in
current deep learning frameworks, even for complex models.
Like NoBackTrack, UORO provides unbiased gradient estimates; unbiasedness is
the core hypothesis in stochastic gradient descent theory, without which
convergence to a local optimum is not guaranteed. On the contrary, truncated
BPTT does not provide this property, leading to possible divergence.
On synthetic tasks where truncated BPTT is shown to diverge, UORO converges.
For instance, when a parameter has a positive short-term but negative long-term
influence, truncated BPTT diverges unless the truncation span is very
significantly longer than the intrinsic temporal range of the interactions,
while UORO performs well thanks to the unbiasedness of its gradients.
| Corentin Tallec and Yann Ollivier | null | 1702.05043 | null | null |
Discovering objects and their relations from entangled scene
representations | cs.LG cs.CV | Our world can be succinctly and compactly described as structured scenes of
objects and relations. A typical room, for example, contains salient objects
such as tables, chairs and books, and these objects typically relate to each
other by their underlying causes and semantics. This gives rise to correlated
features, such as position, function and shape. Humans exploit knowledge of
objects and their relations for learning a wide spectrum of tasks, and more
generally when learning the structure underlying observed data. In this work,
we introduce relation networks (RNs) - a general purpose neural network
architecture for object-relation reasoning. We show that RNs are capable of
learning object relations from scene description data. Furthermore, we show
that RNs can act as a bottleneck that induces the factorization of objects from
entangled scene description inputs, and from distributed deep representations
of scene images provided by a variational autoencoder. The model can also be
used in conjunction with differentiable memory mechanisms for implicit relation
discovery in one-shot learning tasks. Our results suggest that relation
networks are a potentially powerful architecture for solving a variety of
problems that require object relation reasoning.
| David Raposo, Adam Santoro, David Barrett, Razvan Pascanu, Timothy
Lillicrap, Peter Battaglia | null | 1702.05068 | null | null |
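The core relation network computation is compact: apply a shared function g to every ordered pair of objects, sum the results, and pass the sum through a second function f. A minimal NumPy sketch with randomly initialized two-layer MLPs standing in for g and f; all sizes and initializations are placeholders.

    import numpy as np

    rng = np.random.default_rng(0)

    def mlp(sizes):
        Ws = [rng.normal(scale=0.1, size=(m, n)) for n, m in zip(sizes, sizes[1:])]
        def f(x):
            for W in Ws[:-1]:
                x = np.maximum(0.0, x @ W.T)   # ReLU hidden layers
            return x @ Ws[-1].T
        return f

    d_obj, d_g, d_out = 8, 32, 10
    g = mlp([2 * d_obj, d_g, d_g])             # relation function over object pairs
    f = mlp([d_g, d_g, d_out])                 # aggregation function

    objects = rng.normal(size=(6, d_obj))      # a scene of 6 objects
    pairs = np.array([np.concatenate([oi, oj])
                      for oi in objects for oj in objects])
    output = f(g(pairs).sum(axis=0))           # RN(O) = f( sum_{i,j} g(o_i, o_j) )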
Semi-supervised Learning for Discrete Choice Models | stat.ML cs.LG | We introduce a semi-supervised discrete choice model to calibrate discrete
choice models when relatively few requests have both choice sets and stated
preferences but the majority only have the choice sets. Two classic
semi-supervised learning algorithms, the expectation maximization algorithm and
the cluster-and-label algorithm, have been adapted to our choice modeling
problem setting. We also develop two new algorithms based on the
cluster-and-label algorithm. The new algorithms use the Bayesian Information
Criterion to evaluate a clustering setting to automatically adjust the number
of clusters. Two computational studies including a hotel booking case and a
large-scale airline itinerary shopping case are presented to evaluate the
prediction accuracy and computational effort of the proposed algorithms.
Algorithmic recommendations are rendered under various scenarios.
| Jie Yang, Sergey Shebalov, Diego Klabjan | null | 1702.05137 | null | null |
Latent Laplacian Maximum Entropy Discrimination for Detection of
High-Utility Anomalies | stat.ML cs.CR cs.LG | Data-driven anomaly detection methods suffer from the drawback of detecting
all instances that are statistically rare, irrespective of whether the detected
instances have real-world significance or not. In this paper, we are interested
in the problem of specifically detecting anomalous instances that are known to
have high real-world utility, while ignoring the low-utility statistically
anomalous instances. To this end, we propose a novel method called Latent
Laplacian Maximum Entropy Discrimination (LatLapMED) as a potential solution.
This method uses the EM algorithm to simultaneously incorporate the Geometric
Entropy Minimization principle for identifying statistical anomalies, and the
Maximum Entropy Discrimination principle to incorporate utility labels, in
order to detect high-utility anomalies. We apply our method in both simulated
and real datasets to demonstrate that it has superior performance over existing
alternatives that independently pre-process with unsupervised anomaly detection
algorithms before classifying.
| Elizabeth Hou, Kumar Sricharan, and Alfred O. Hero | null | 1702.05148 | null | null |
RIPML: A Restricted Isometry Property based Approach to Multilabel
Learning | cs.IR cs.LG stat.ML | The multilabel learning problem with large number of labels, features, and
data-points has generated a tremendous interest recently. A recurring theme of
these problems is that only a few labels are active in any given datapoint as
compared to the total number of labels. However, only a small number of
existing works take direct advantage of this inherent extreme sparsity in the
label space. By the virtue of Restricted Isometry Property (RIP), satisfied by
many random ensembles, we propose a novel procedure for multilabel learning
known as RIPML. During the training phase, in RIPML, labels are projected onto
a random low-dimensional subspace followed by solving a least-square problem in
this subspace. Inference is done by a k-nearest neighbor (kNN) based approach.
We demonstrate the effectiveness of RIPML by conducting extensive simulations
and comparing results with the state-of-the-art linear dimensionality reduction
based approaches.
| Akshay Soni and Yashar Mehdad | null | 1702.05181 | null | null |
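A minimal sketch of the RIPML recipe as described above: project the label matrix with a Gaussian random ensemble (which satisfies RIP with high probability), solve a least-squares problem in the projected subspace, and predict by voting among the labelsets of the k nearest training points in that subspace. Dimensions and k are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, L, m, k = 500, 20, 100, 15, 5        # samples, features, labels, sketch dim, kNN

    X = rng.normal(size=(n, d))
    Y = (rng.random((n, L)) > 0.97).astype(float)   # sparse label matrix

    Phi = rng.normal(size=(m, L)) / np.sqrt(m) # random RIP projection
    Z = Y @ Phi.T                              # labels in the low-dimensional subspace
    W, *_ = np.linalg.lstsq(X, Z, rcond=None)  # least squares in the subspace

    def predict(x_new):
        z = x_new @ W                          # map the instance into label subspace
        d2 = np.linalg.norm(Z - z, axis=1)     # compare to projected training labels
        nn = np.argsort(d2)[:k]
        return (Y[nn].mean(axis=0) >= 0.5).astype(float)  # vote among kNN labelsets

    pred = predict(rng.normal(size=d))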
Completing a joint PMF from projections: a low-rank coupled tensor
factorization approach | cs.LG cs.DM stat.ML | There has recently been considerable interest in completing a low-rank matrix
or tensor given only a small fraction (or few linear combinations) of its
entries. Related approaches have found considerable success in the area of
recommender systems, under machine learning. From a statistical estimation
point of view, the gold standard is to have access to the joint probability
distribution of all pertinent random variables, from which any desired optimal
estimator can be readily derived. In practice high-dimensional joint
distributions are very hard to estimate, and only estimates of low-dimensional
projections may be available. We show that it is possible to identify
higher-order joint PMFs from lower-order marginalized PMFs using coupled
low-rank tensor factorization. Our approach features guaranteed identifiability
when the full joint PMF is of low-enough rank, and effective approximation
otherwise. We provide an algorithmic approach to compute the sought factors,
and illustrate the merits of our approach using rating prediction as an
example.
| Nikos Kargas, Nicholas D. Sidiropoulos | null | 1702.05184 | null | null |
The Simulator: Understanding Adaptive Sampling in the
Moderate-Confidence Regime | cs.LG stat.ML | We propose a novel technique for analyzing adaptive sampling called the {\em
Simulator}. Our approach differs from the existing methods by considering not
how much information could be gathered by any fixed sampling strategy, but how
difficult it is to distinguish a good sampling strategy from a bad one given
the limited amount of data collected up to any given time. This change of
perspective allows us to match the strength of both Fano and change-of-measure
techniques, without succumbing to the limitations of either method. For
concreteness, we apply our techniques to a structured multi-arm bandit problem
in the fixed-confidence pure exploration setting, where we show that the
constraints on the means imply a substantial gap between the
moderate-confidence sample complexity, and the asymptotic sample complexity as
$\delta \to 0$ found in the literature. We also prove the first instance-based
lower bounds for the top-k problem which incorporate the appropriate
log-factors. Moreover, our lower bounds zero-in on the number of times each
\emph{individual} arm needs to be pulled, uncovering new phenomena which are
drowned out in the aggregate sample complexity. Our new analysis inspires a
simple and near-optimal algorithm for the best-arm and top-k identification,
the first {\em practical} algorithm of its kind for the latter problem which
removes extraneous log factors, and outperforms the state-of-the-art in
experiments.
| Max Simchowitz and Kevin Jamieson and Benjamin Recht | null | 1702.05186 | null | null |
Cloud-based Deep Learning of Big EEG Data for Epileptic Seizure
Prediction | cs.LG stat.ML | Developing a Brain-Computer Interface~(BCI) for seizure prediction can help
epileptic patients have a better quality of life. However, there are many
difficulties and challenges in developing such a system as a real-life support
for patients. Because of the nonstationary nature of EEG signals, normal and
seizure patterns vary across different patients. Thus, finding a group of
manually extracted features for the prediction task is not practical. Moreover,
when using implanted electrodes for brain recording massive amounts of data are
produced. This big data calls for the need for safe storage and high
computational resources for real-time processing. To address these challenges,
a cloud-based BCI system for the analysis of this big EEG data is presented.
First, a dimensionality-reduction technique is developed to increase
classification accuracy as well as to decrease the communication bandwidth and
computation time. Second, following a deep-learning approach, a stacked
autoencoder is trained in two steps for unsupervised feature extraction and
classification. Third, a cloud-computing solution is proposed for real-time
analysis of big EEG data. The results on a benchmark clinical dataset
illustrate the superiority of the proposed patient-specific BCI as an
alternative method and its expected usefulness in real-life support of epilepsy
patients.
| Mohammad-Parsa Hosseini, Hamid Soltanian-Zadeh, Kost Elisevich, and
Dario Pompili | null | 1702.05192 | null | null |
Solving Equations of Random Convex Functions via Anchored Regression | cs.LG cs.IT math.IT math.OC math.PR stat.ML | We consider the question of estimating a solution to a system of equations
that involve convex nonlinearities, a problem that is common in machine
learning and signal processing. Because of these nonlinearities, conventional
estimators based on empirical risk minimization generally involve solving a
non-convex optimization program. We propose anchored regression, a new approach
based on convex programming that amounts to maximizing a linear functional
(perhaps augmented by a regularizer) over a convex set. The proposed convex
program is formulated in the natural space of the problem, and avoids the
introduction of auxiliary variables, making it computationally favorable.
Working in the native space also provides great flexibility as structural
priors (e.g., sparsity) can be seamlessly incorporated.
For our analysis, we model the equations as being drawn from a fixed set
according to a probability law. Our main results provide guarantees on the
accuracy of the estimator in terms of the number of equations we are solving,
the amount of noise present, a measure of statistical complexity of the random
equations, and the geometry of the regularizer at the true solution. We also
provide recipes for constructing the anchor vector (that determines the linear
functional to maximize) directly from the observed data.
| Sohail Bahmani and Justin Romberg | null | 1702.05327 | null | null |
Predicting Surgery Duration with Neural Heteroscedastic Regression | stat.ML cs.LG cs.NE | Scheduling surgeries is a challenging task due to the fundamental uncertainty
of the clinical environment, as well as the risks and costs associated with
under- and over-booking. We investigate neural regression algorithms to
estimate the parameters of surgery case durations, focusing on the issue of
heteroscedasticity. We seek to simultaneously estimate the duration of each
surgery, as well as a surgery-specific notion of our uncertainty about its
duration. Estimating this uncertainty can lead to more nuanced and effective
scheduling strategies, as we are able to schedule surgeries more efficiently
while allowing an informed and case-specific margin of error. Using surgery
records from a large United States health
system, we demonstrate potential improvements on the order of 20% (in terms of
minutes overbooked) compared to current scheduling techniques. Moreover, we
demonstrate that surgery durations are indeed heteroscedastic. We show that
models that estimate case-specific uncertainty better fit the data (log
likelihood). Additionally, we show that the heteroscedastic predictions can
more optimally trade off between over- and under-booking minutes, especially
when idle minutes and scheduling collisions confer disparate costs.
| Nathan Ng, Rodney A Gabriel, Julian McAuley, Charles Elkan, Zachary C
Lipton | null | 1702.05386 | null | null |
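The heteroscedastic setup argued for above has a standard formulation: the model emits a per-case mean and log-variance, and training minimizes the Gaussian negative log-likelihood so that the variance head learns case-specific uncertainty. A sketch of that loss (the generic technique, with the network producing mu and log_var left abstract):

    import numpy as np

    def heteroscedastic_nll(y, mu, log_var):
        """Gaussian negative log-likelihood with a per-case predicted variance.

        y, mu, log_var: arrays of shape (n,). Minimizing this jointly fits the
        duration estimate mu and a case-specific uncertainty exp(log_var).
        """
        return 0.5 * np.mean(log_var + (y - mu) ** 2 / np.exp(log_var))

    # A constant variance is penalized relative to case-specific variances:
    y = np.array([60.0, 75.0, 240.0])               # surgery durations in minutes
    mu = np.array([65.0, 70.0, 180.0])
    print(heteroscedastic_nll(y, mu, log_var=np.log(np.full(3, 100.0))))
    print(heteroscedastic_nll(y, mu, log_var=np.log(np.array([50.0, 50.0, 4000.0]))))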
A Random Matrix Approach to Neural Networks | math.PR cs.LG | This article studies the Gram random matrix model $G=\frac1T\Sigma^{\rm
T}\Sigma$, $\Sigma=\sigma(WX)$, classically found in the analysis of random
feature maps and random neural networks, where $X=[x_1,\ldots,x_T]\in{\mathbb
R}^{p\times T}$ is a (data) matrix of bounded norm, $W\in{\mathbb R}^{n\times
p}$ is a matrix of independent zero-mean unit variance entries, and
$\sigma:{\mathbb R}\to{\mathbb R}$ is a Lipschitz continuous (activation)
function --- $\sigma(WX)$ being understood entry-wise. By means of a key
concentration of measure lemma arising from non-asymptotic random matrix
arguments, we prove that, as $n,p,T$ grow large at the same rate, the resolvent
$Q=(G+\gamma I_T)^{-1}$, for $\gamma>0$, has a similar behavior as that met in
sample covariance matrix models, involving notably the moment
$\Phi=\frac{T}n{\mathbb E}[G]$, which provides in passing a deterministic
equivalent for the empirical spectral measure of $G$. Application-wise, this
result enables the estimation of the asymptotic performance of single-layer
random neural networks. This in turn provides practical insights into the
underlying mechanisms into play in random neural networks, entailing several
unexpected consequences, as well as a fast practical means to tune the network
hyperparameters.
| Cosme Louart, Zhenyu Liao, Romain Couillet | null | 1702.05419 | null | null |
Maximally Correlated Principal Component Analysis | stat.ML cs.IT cs.LG math.IT | In the era of big data, reducing data dimensionality is critical in many
areas of science. Widely used Principal Component Analysis (PCA) addresses this
problem by computing a low dimensional data embedding that maximally explains
the variance of the data. However, PCA has two major weaknesses. Firstly, it only
considers linear correlations among variables (features), and secondly it is
not suitable for categorical data. We resolve these issues by proposing
Maximally Correlated Principal Component Analysis (MCPCA). MCPCA computes
transformations of variables whose covariance matrix has the largest Ky Fan
norm. Variable transformations are unknown, can be nonlinear and are computed
in an optimization. MCPCA can also be viewed as a multivariate extension of
Maximal Correlation. For jointly Gaussian variables we show that the covariance
matrix corresponding to the identity (or the negative of the identity)
transformations majorizes covariance matrices of non-identity functions. Using
this result we characterize global MCPCA optimizers for nonlinear functions of
jointly Gaussian variables for every rank constraint. For categorical variables
we characterize global MCPCA optimizers for the rank one constraint based on
the leading eigenvector of a matrix computed using pairwise joint
distributions. For a general rank constraint we propose a block coordinate
descent algorithm and show its convergence to stationary points of the MCPCA
optimization. We compare MCPCA with PCA and other state-of-the-art
dimensionality reduction methods including Isomap, LLE, multilayer autoencoders
(neural networks), kernel PCA, probabilistic PCA and diffusion maps on several
synthetic and real datasets. We show that MCPCA consistently provides improved
performance compared to other methods.
| Soheil Feizi and David Tse | null | 1702.05471 | null | null |
Beyond the Hazard Rate: More Perturbation Algorithms for Adversarial
Multi-armed Bandits | cs.LG cs.GT stat.ML | Recent work on follow the perturbed leader (FTPL) algorithms for the
adversarial multi-armed bandit problem has highlighted the role of the hazard
rate of the distribution generating the perturbations. Assuming that the hazard
rate is bounded, it is possible to provide regret analyses for a variety of
FTPL algorithms for the multi-armed bandit problem. This paper pushes the
inquiry into regret bounds for FTPL algorithms beyond the bounded hazard rate
condition. There are good reasons to do so: natural distributions such as the
uniform and Gaussian violate the condition. We give regret bounds for both
bounded support and unbounded support distributions without assuming the hazard
rate condition. We also disprove a conjecture that the Gaussian distribution
cannot lead to a low-regret algorithm. In fact, it turns out that it leads to
near optimal regret, up to logarithmic factors. A key ingredient in our
approach is the introduction of a new notion called the generalized hazard
rate.
| Zifan Li, Ambuj Tewari | null | 1702.05536 | null | null |
Dataset Augmentation in Feature Space | stat.ML cs.LG | Dataset augmentation, the practice of applying a wide array of
domain-specific transformations to synthetically expand a training set, is a
standard tool in supervised learning. While effective in tasks such as visual
recognition, the set of transformations must be carefully designed,
implemented, and tested for every new domain, limiting its re-use and
generality. In this paper, we adopt a simpler, domain-agnostic approach to
dataset augmentation. We start with existing data points and apply simple
transformations such as adding noise, interpolating, or extrapolating between
them. Our main insight is to perform the transformation not in input space, but
in a learned feature space. A re-kindling of interest in unsupervised
representation learning makes this technique timely and more effective. It is a
simple proposal, but to-date one that has not been tested empirically. Working
in the space of context vectors generated by sequence-to-sequence models, we
demonstrate a technique that is effective for both static and sequential data.
| Terrance DeVries, Graham W. Taylor | null | 1702.05538 | null | null |
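The transformations themselves are one-liners once points are mapped into the learned feature space. A minimal sketch, assuming an encoder/decoder pair exists elsewhere to map examples in and out of that space; the mixing factor lambda and noise scale are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)

    def augment(c_i, c_j, lam=0.5, noise_scale=0.1, rng=rng):
        """Simple feature-space augmentations between context vectors c_i, c_j."""
        noisy = c_i + noise_scale * rng.normal(size=c_i.shape)
        interp = c_i + lam * (c_j - c_i)       # move toward a same-class neighbor
        extrap = c_i + lam * (c_i - c_j)       # move away from it, beyond c_i
        return noisy, interp, extrap

    # c_i, c_j would be encoder outputs for two nearby same-class examples;
    # each augmented vector is decoded (or used directly) as a new training point.
    c_i, c_j = rng.normal(size=64), rng.normal(size=64)
    new_points = augment(c_i, c_j)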
Bi-Level Online Control without Regret | math.OC cs.LG cs.SY | This paper considers a bi-level discrete-time control framework with
real-time constraints, consisting of several local controllers and a central
controller. The objective is to bridge the gap between the online convex
optimization and real-time control literature by proposing an online control
algorithm with small dynamic regret, which is a natural performance criterion
in nonstationary environments related to real-time control problems. We
illustrate how the proposed algorithm can be applied to real-time control of
power setpoints in an electrical grid.
| Andrey Bernstein | null | 1702.05548 | null | null |
On the Equivalence of Holographic and Complex Embeddings for Link
Prediction | cs.LG | We show the equivalence of two state-of-the-art link prediction/knowledge
graph completion methods: Nickel et al.'s holographic embedding and Trouillon et
al.'s complex embedding. We first consider a spectral version of the
holographic embedding, exploiting the frequency domain in the Fourier transform
for efficient computation. The analysis of the resulting method reveals that it
can be viewed as an instance of the complex embedding with certain constraints
cast on the initial vectors upon training. Conversely, any complex embedding
can be converted to an equivalent holographic embedding.
| Katsuhiko Hayashi and Masashi Shimbo | null | 1702.05563 | null | null |
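Editor's note: the claimed correspondence is easy to check numerically. Below is a self-contained sketch: the holographic score uses circular correlation computed in the Fourier domain, and it coincides with a complex (trilinear) score evaluated on the FFTs of the same real vectors.

```python
import numpy as np

def hole_score(r, h, t):
    # holographic embedding: r . (h * t), circular correlation via FFT
    corr = np.fft.ifft(np.conj(np.fft.fft(h)) * np.fft.fft(t)).real
    return r @ corr

def complex_score(w, e_s, e_o):
    # complex embedding: Re(sum_k w_k * e_{s,k} * conj(e_{o,k}))
    return np.sum(w * e_s * np.conj(e_o)).real

n = 8
rng = np.random.default_rng(0)
r, h, t = rng.standard_normal((3, n))

# spectral view: HolE equals ComplEx applied to Fourier-transformed vectors
assert np.isclose(
    hole_score(r, h, t),
    complex_score(np.fft.fft(r) / n, np.fft.fft(h), np.fft.fft(t)),
)
```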
Thresholding based Efficient Outlier Robust PCA | cs.LG | We consider the problem of outlier robust PCA (OR-PCA) where the goal is to
recover principal directions despite the presence of outlier data points. That
is, given a data matrix $M^*$, where $(1-\alpha)$ fraction of the points are
noisy samples from a low-dimensional subspace while $\alpha$ fraction of the
points can be arbitrary outliers, the goal is to recover the subspace
accurately. Existing results for OR-PCA have serious drawbacks: while some
results are quite weak in the presence of noise, other results have runtime
quadratic in dimension, rendering them impractical for large scale
applications.
In this work, we provide a novel thresholding based iterative algorithm with
per-iteration complexity at most linear in the data size. Moreover, the
fraction of outliers, $\alpha$, that our method can handle is tight up to
constants while providing nearly optimal computational complexity for a general
noise setting. For the special case where the inliers are obtained from a
low-dimensional subspace with additive Gaussian noise, we show that a
modification of our thresholding based method leads to significant improvement
in recovery error (of the subspace) even in the presence of a large fraction of
outliers.
| Yeshwanth Cherapanamjeri and Prateek Jain and Praneeth Netrapalli | null | 1702.05571 | null | null |
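Editor's note: a simplified sketch of the thresholding idea, not the paper's exact algorithm or its guarantees: alternate between fitting a rank-r subspace to presumed inliers and re-flagging the alpha fraction of points with the largest residuals.

```python
import numpy as np

def thresholding_or_pca(M, rank, alpha, n_iter=20):
    """M: (n, d) data where an alpha fraction of rows may be outliers.
    Returns a (d, rank) orthonormal basis estimate and the inlier indices."""
    n = M.shape[0]
    k = int(np.ceil(alpha * n))
    inliers = np.arange(n)
    for _ in range(n_iter):
        mu = M[inliers].mean(axis=0)
        _, _, Vt = np.linalg.svd(M[inliers] - mu, full_matrices=False)
        U = Vt[:rank].T                         # current subspace estimate
        resid = np.linalg.norm((M - mu) - (M - mu) @ U @ U.T, axis=1)
        inliers = np.argsort(resid)[: n - k]    # threshold out top residuals
    return U, inliers
```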
A Hitting Time Analysis of Stochastic Gradient Langevin Dynamics | cs.LG math.OC stat.ML | We study the Stochastic Gradient Langevin Dynamics (SGLD) algorithm for
non-convex optimization. The algorithm performs stochastic gradient descent,
where in each step it injects appropriately scaled Gaussian noise to the
update. We analyze the algorithm's hitting time to an arbitrary subset of the
parameter space. Two results follow from our general theory: First, we prove
that for empirical risk minimization, if the empirical risk is point-wise close
to the (smooth) population risk, then the algorithm achieves an approximate
local minimum of the population risk in polynomial time, escaping suboptimal
local minima that only exist in the empirical risk. Second, we show that SGLD
improves on one of the best known learnability results for learning linear
classifiers under the zero-one loss.
| Yuchen Zhang, Percy Liang, Moses Charikar | null | 1702.05575 | null | null |
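Editor's note: the SGLD update itself is one line; this minimal sketch assumes a stochastic gradient oracle `grad_fn` and an inverse temperature `beta` (the hitting-time analysis is of course not captured by the code).

```python
import numpy as np

def sgld(grad_fn, x0, step=1e-3, beta=1.0, n_steps=10000, seed=0):
    """x_{t+1} = x_t - step * g_t + sqrt(2 * step / beta) * N(0, I),
    i.e. stochastic gradient descent with injected Gaussian noise."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    noise_scale = np.sqrt(2.0 * step / beta)
    for _ in range(n_steps):
        g = grad_fn(x)                       # stochastic gradient estimate
        x = x - step * g + noise_scale * rng.standard_normal(x.shape)
    return x
```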
Revisiting Perceptron: Efficient and Label-Optimal Learning of
Halfspaces | cs.LG stat.ML | It has been a long-standing problem to efficiently learn a halfspace using as
few labels as possible in the presence of noise. In this work, we propose an
efficient Perceptron-based algorithm for actively learning homogeneous
halfspaces under the uniform distribution over the unit sphere. Under the
bounded noise condition~\cite{MN06}, where each label is flipped with
probability at most $\eta < \frac 1 2$, our algorithm achieves a near-optimal
label complexity of
$\tilde{O}\left(\frac{d}{(1-2\eta)^2}\ln\frac{1}{\epsilon}\right)$ in time
$\tilde{O}\left(\frac{d^2}{\epsilon(1-2\eta)^3}\right)$. Under the adversarial
noise condition~\cite{ABL14, KLS09, KKMS08}, where at most a $\tilde
\Omega(\epsilon)$ fraction of labels can be flipped, our algorithm achieves a
near-optimal label complexity of $\tilde{O}\left(d\ln\frac{1}{\epsilon}\right)$
in time $\tilde{O}\left(\frac{d^2}{\epsilon}\right)$. Furthermore, we show that
our active learning algorithm can be converted to an efficient passive learning
algorithm that has near-optimal sample complexities with respect to $\epsilon$
and $d$.
| Songbai Yan and Chicheng Zhang | null | 1702.05581 | null | null |
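Editor's note: a much-simplified sketch of margin-based active halfspace learning, with a fixed sampling band and no epoch schedule, so it does not reproduce the paper's label-complexity guarantees; `label_oracle` is a placeholder for the (possibly noisy) labeler.

```python
import numpy as np

def active_halfspace(X_stream, label_oracle, band=0.05):
    """Query labels only near the current hyperplane; on a mistake apply the
    modified Perceptron update w <- w - 2 (w.x) x, which preserves ||w|| = 1
    for unit-norm x."""
    w, n_queries = None, 0
    for x in X_stream:                 # x assumed unit-norm (e.g. on sphere)
        if w is None:
            w = x / np.linalg.norm(x)
            continue
        margin = w @ x
        if abs(margin) > band:         # confident region: skip the label
            continue
        y = label_oracle(x)            # y in {-1, +1}, possibly noisy
        n_queries += 1
        if y * margin < 0:             # mistake: modified Perceptron step
            w = w - 2.0 * margin * x
    return w, n_queries
```

In the actual algorithm the band shrinks across epochs, which is what drives the logarithmic dependence on 1/epsilon.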
Riemannian stochastic variance reduced gradient algorithm with
retraction and vector transport | cs.LG math.OC stat.ML | In recent years, stochastic variance reduction algorithms have attracted
considerable attention for minimizing the average of a large but finite number
of loss functions. This paper proposes a novel Riemannian extension of the
Euclidean stochastic variance reduced gradient (R-SVRG) algorithm to a manifold
search space. The key challenges of averaging, adding, and subtracting multiple
gradients are addressed with retraction and vector transport. For the proposed
algorithm, we present a global convergence analysis with a decaying step size
as well as a local convergence rate analysis with a fixed step size under some
natural assumptions. In addition, the proposed algorithm is applied to the
computation problem of the Riemannian centroid on the symmetric positive
definite (SPD) manifold as well as the principal component analysis and
low-rank matrix completion problems on the Grassmann manifold. The results show
that the proposed algorithm outperforms the standard Riemannian stochastic
gradient descent algorithm in each case.
| Hiroyuki Sato, Hiroyuki Kasai, Bamdev Mishra | 10.1137/17M1116787 | 1702.05594 | null | null |
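Editor's note: a sketch of the R-SVRG recursion on the simplest manifold, the unit sphere, for a PCA-type objective. Here retraction is normalization and vector transport is orthogonal projection onto the new tangent space; step-size schedules and the general-manifold machinery of the paper are omitted.

```python
import numpy as np

def rsvrg_top_eigvec(A, outer=30, inner=100, step=0.05, seed=0):
    """R-SVRG on the sphere for min f(x) = -(1/n) sum_i (a_i . x)^2, ||x||=1,
    i.e. the leading eigenvector of A^T A / n."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    egrad = lambda rows, x: -2.0 * rows.T @ (rows @ x) / len(rows)
    rgrad = lambda x, g: g - (x @ g) * x           # project onto T_x
    transp = lambda y, v: v - (y @ v) * y          # vector transport to T_y
    retract = lambda x, v: (x + v) / np.linalg.norm(x + v)

    x_snap = rng.standard_normal(d)
    x_snap /= np.linalg.norm(x_snap)
    for _ in range(outer):
        g_full = rgrad(x_snap, egrad(A, x_snap))   # full-gradient snapshot
        x = x_snap.copy()
        for _ in range(inner):
            i = rng.integers(n)
            rows = A[i : i + 1]                    # one random sample
            g = rgrad(x, egrad(rows, x))
            g_snap = rgrad(x_snap, egrad(rows, x_snap))
            v = g - transp(x, g_snap - g_full)     # variance-reduced direction
            x = retract(x, -step * v)
        x_snap = x
    return x_snap
```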
Deep Stochastic Configuration Networks with Universal Approximation
Property | cs.LG cs.NE | This paper develops a randomized approach for incrementally building deep
neural networks, where a supervisory mechanism is proposed to constrain the
random assignment of the weights and biases, and all the hidden layers have
direct links to the output layer. A fundamental result on the universal
approximation property is established for such a class of randomized learner
models, namely deep stochastic configuration networks (DeepSCNs). A learning
algorithm is presented to implement DeepSCNs with either a specific architecture
or self-organization. The read-out weights attached to all direct links from
each hidden layer to the output layer are evaluated by the least squares
method. Given a set of training examples, DeepSCNs can speedily produce a
learning representation, that is, a collection of random basis functions with
the cascaded inputs together with the read-out weights. An empirical study on
function approximation is carried out to demonstrate some properties of the
proposed deep learner model.
| Dianhui Wang, Ming Li | null | 1702.05639 | null | null |
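Editor's note: a single-layer sketch of the stochastic-configuration idea (DeepSCN stacks such layers, with every hidden layer linked directly to the output). The candidate-scoring rule below is a crude stand-in for the paper's supervisory inequality constraint.

```python
import numpy as np

def scn_fit(X, y, n_nodes=50, n_candidates=20, scale=1.0, seed=0):
    """Add random tanh nodes one at a time: among several random candidates,
    keep the one whose activation correlates most with the current residual,
    then refit the read-out weights over all nodes by least squares."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    H = np.empty((n, 0))
    resid = y.copy()
    params = []
    for _ in range(n_nodes):
        best, best_score = None, -np.inf
        for _ in range(n_candidates):
            w = rng.uniform(-scale, scale, d)
            b = rng.uniform(-scale, scale)
            h = np.tanh(X @ w + b)
            score = abs(h @ resid) / (np.linalg.norm(h) + 1e-12)
            if score > best_score:
                best, best_score = (w, b, h), score
        w, b, h = best
        params.append((w, b))
        H = np.column_stack([H, h])
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # read-out weights
        resid = y - H @ beta
    return params, beta
```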
On Loss Functions for Deep Neural Networks in Classification | cs.LG | Deep neural networks are currently among the most commonly used classifiers.
Despite easily achieving very good performance, one of their main selling
points is their modular design - one can conveniently adapt the architecture
to specific needs, change connectivity patterns, attach specialised layers,
and experiment with a large number of activation functions, normalisation
schemes, and many other options. While one can find an impressively wide
spread of configurations for almost every aspect of deep nets, one element is,
in the authors' opinion, underrepresented: when solving classification
problems, the vast majority of papers and applications simply use the log
loss. In this paper we investigate how particular choices of loss functions
affect deep models and their learning dynamics, as well as the resulting
classifiers' robustness to various effects. We perform experiments on classical
datasets, as well as provide some additional, theoretical insights into the
problem. In particular we show that L1 and L2 losses are, quite surprisingly,
justified classification objectives for deep nets, by providing a probabilistic
interpretation in terms of expected misclassification. We also introduce two
losses which are not typically used as deep nets objectives and show that they
are viable alternatives to the existing ones.
| Katarzyna Janocha, Wojciech Marian Czarnecki | null | 1702.05659 | null | null |
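Editor's note: the losses the paper compares are straightforward to compute side by side on softmax outputs; a minimal sketch, with one-hot targets assumed:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def classification_losses(logits, y_onehot):
    """Per-sample log (cross-entropy), L1 and L2 losses, all evaluated on
    the softmax output against one-hot targets."""
    p = softmax(logits)
    return {
        "log": -np.sum(y_onehot * np.log(p + 1e-12), axis=1),
        "L1": np.sum(np.abs(p - y_onehot), axis=1),
        "L2": np.sum((p - y_onehot) ** 2, axis=1),
    }
```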
Quadratic Upper Bound for Recursive Teaching Dimension of Finite VC
Classes | cs.LG cs.AI stat.ML | In this work we study the quantitative relation between the recursive
teaching dimension (RTD) and the VC dimension (VCD) of concept classes of
finite sizes. The RTD of a concept class $\mathcal C \subseteq \{0, 1\}^n$,
introduced by Zilles et al. (2011), is a combinatorial complexity measure
characterized by the worst-case number of examples necessary to identify a
concept in $\mathcal C$ according to the recursive teaching model.
For any finite concept class $\mathcal C \subseteq \{0,1\}^n$ with
$\mathrm{VCD}(\mathcal C)=d$, Simon & Zilles (2015) posed an open problem
$\mathrm{RTD}(\mathcal C) = O(d)$, i.e., is RTD linearly upper bounded by VCD?
Previously, the best known result is an exponential upper bound
$\mathrm{RTD}(\mathcal C) = O(d \cdot 2^d)$, due to Chen et al. (2016). In this
paper, we show a quadratic upper bound: $\mathrm{RTD}(\mathcal C) = O(d^2)$,
much closer to an answer to the open problem. We also discuss the challenges in
fully solving the problem.
| Lunjia Hu, Ruihan Wu, Tianhong Li, Liwei Wang | null | 1702.05677 | null | null |
An Adaptivity Hierarchy Theorem for Property Testing | cs.DS cs.LG | Adaptivity is known to play a crucial role in property testing. In
particular, there exist properties for which there is an exponential gap
between the power of \emph{adaptive} testing algorithms, wherein each query may
be determined by the answers received to prior queries, and their
\emph{non-adaptive} counterparts, in which all queries are independent of
answers obtained from previous queries.
In this work, we investigate the role of adaptivity in property testing at a
finer level. We first quantify the degree of adaptivity of a testing algorithm
by considering the number of "rounds of adaptivity" it uses. More precisely,
we say that a tester is $k$-(round) adaptive if it makes queries in $k+1$
rounds, where the queries in the $i$'th round may depend on the answers
obtained in the previous $i-1$ rounds. Then, we ask the following question:
Does the power of testing algorithms smoothly grow with the number of rounds
of adaptivity?
We provide a positive answer to the foregoing question by proving an
adaptivity hierarchy theorem for property testing. Specifically, our main
result shows that for every $n\in \mathbb{N}$ and $0 \le k \le n^{0.99}$ there
exists a property $\mathcal{P}_{n,k}$ of functions for which (1) there exists a
$k$-adaptive tester for $\mathcal{P}_{n,k}$ with query complexity
$\tilde{O}(k)$, yet (2) any $(k-1)$-adaptive tester for $\mathcal{P}_{n,k}$
must make $\Omega(n)$ queries. In addition, we show that such a qualitative
adaptivity hierarchy can be witnessed for testing natural properties of graphs.
| Clement Canonne and Tom Gur | null | 1702.05678 | null | null |
Non-negative Tensor Factorization for Human Behavioral Pattern Mining in
Online Games | cs.LG cs.HC cs.SI physics.soc-ph | Multiplayer online battle arena games have become a popular genre. They have
also received increasing attention from the research community because they
provide a wealth of information about human interactions and behaviors. A major problem
is extracting meaningful patterns of activity from this type of data, in a way
that is also easy to interpret. Here, we propose to exploit tensor
decomposition techniques, and in particular Non-negative Tensor Factorization,
to discover hidden correlated behavioral patterns of play in a popular game:
League of Legends. We first collect the entire gaming history of a group of
about one thousand players, totaling roughly $100K$ matches. By applying our
methodological framework, we then separate players into groups that exhibit
similar features and playing strategies, as well as similar temporal
trajectories, i.e., behavioral progressions over the course of their gaming
history: this will allow us to investigate how players learn and improve their
skills.
| Anna Sapienza, Alessandro Bessi, Emilio Ferrara | null | 1702.05695 | null | null |
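Editor's note: a standard multiplicative-update baseline for 3-way non-negative CP decomposition (e.g. players x features x time); this is a generic NTF routine, not the authors' pipeline.

```python
import numpy as np

def khatri_rao(B, C):
    # column-wise Kronecker product: (J*K, R)
    R = B.shape[1]
    return (B[:, None, :] * C[None, :, :]).reshape(-1, R)

def ntf(X, rank, n_iter=200, eps=1e-9, seed=0):
    """Non-negative CP decomposition X ~ [[A, B, C]] of a 3-way tensor
    via Lee-Seung-style multiplicative updates."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.random((I, rank)); B = rng.random((J, rank)); C = rng.random((K, rank))
    X0 = X.reshape(I, J * K)                       # mode-0 unfolding
    X1 = np.moveaxis(X, 1, 0).reshape(J, I * K)    # mode-1 unfolding
    X2 = np.moveaxis(X, 2, 0).reshape(K, I * J)    # mode-2 unfolding
    for _ in range(n_iter):
        A *= (X0 @ khatri_rao(B, C)) / (A @ ((B.T @ B) * (C.T @ C)) + eps)
        B *= (X1 @ khatri_rao(A, C)) / (B @ ((A.T @ A) * (C.T @ C)) + eps)
        C *= (X2 @ khatri_rao(A, B)) / (C @ ((A.T @ A) * (B.T @ B)) + eps)
    return A, B, C
```

Because all factors stay non-negative, each rank-one component can be read directly as a group of players, a bundle of co-occurring features, and a temporal activation profile.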
Online Robust Principal Component Analysis with Change Point Detection | cs.LG cs.CV stat.AP stat.CO stat.ML | Robust PCA methods are typically batch algorithms which require loading all
observations into memory before processing. This makes them inefficient for
processing big data. In this paper, we develop an efficient online robust
principal component method, namely online moving window robust principal
component analysis (OMWRPCA). Unlike existing algorithms, OMWRPCA can
successfully track not only slowly changing subspace but also abruptly changed
subspace. By embedding hypothesis testing into the algorithm, OMWRPCA can
detect change points of the underlying subspaces. Extensive simulation studies
demonstrate the superior performance of OMWRPCA compared with other
state-of-the-art approaches. We also apply the algorithm to real-time background
subtraction of surveillance video.
| Wei Xiao, Xiaolin Huang, Jorge Silva, Saba Emrani and Arin Chaudhuri | null | 1702.05698 | null | null |
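Editor's note: a sketch of the per-sample decomposition step at the heart of online robust PCA, splitting each observation into a low-rank part on the current basis plus a sparse outlier part; the basis updates over the moving window and the change-point hypothesis test of OMWRPCA are omitted.

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def online_rpca_step(U, v, lam=0.1, n_iter=50):
    """Decompose one observation v into U @ w (low-rank) plus s (sparse)
    by alternating least squares and soft-thresholding."""
    s = np.zeros_like(v)
    pinv = np.linalg.pinv(U)
    for _ in range(n_iter):
        w = pinv @ (v - s)                  # least-squares coefficients
        s = soft_threshold(v - U @ w, lam)  # sparse residual / outliers
    return w, s
```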
Collaborative Deep Reinforcement Learning | cs.LG | Besides independent learning, the human learning process is greatly improved by
summarizing what has been learned, communicating it with peers, and
subsequently fusing knowledge from different sources to assist the current
learning goal. This collaborative learning procedure ensures that the knowledge
is shared, continuously refined, and concluded from different perspectives to
construct a more profound understanding. The idea of knowledge transfer has led
to many advances in machine learning and data mining, but significant
challenges remain, especially when it comes to reinforcement learning,
heterogeneous model structures, and different learning tasks. Motivated by
human collaborative learning, in this paper we propose a collaborative deep
reinforcement learning (CDRL) framework that performs adaptive knowledge
transfer among heterogeneous learning agents. Specifically, the proposed CDRL
conducts a novel deep knowledge distillation method to address the
heterogeneity among different learning tasks with a deep alignment network.
Furthermore, we present an efficient collaborative Asynchronous Advantage
Actor-Critic (cA3C) algorithm to incorporate deep knowledge distillation into
the online training of agents, and demonstrate the effectiveness of the CDRL
framework using extensive empirical evaluation on OpenAI gym.
| Kaixiang Lin, Shu Wang and Jiayu Zhou | null | 1702.05796 | null | null |
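Editor's note: the basic ingredient of knowledge distillation between agents is a KL loss on temperature-softened policies; a numpy sketch (the deep alignment network and the cA3C training loop are not reproduced here).

```python
import numpy as np

def soften(logits, tau):
    z = logits / tau
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, tau=2.0):
    """Mean KL(teacher || student) over a batch of states, computed on
    policies softened with temperature tau."""
    p_t = soften(teacher_logits, tau)
    p_s = soften(student_logits, tau)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return kl.mean()
```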
Compressive Embedding and Visualization using Graphs | cs.LG stat.ML | Visualizing high-dimensional data has been a focus in data analysis
communities for decades, which has led to the design of many algorithms, some
of which are now considered reference methods (such as t-SNE). In our era
of overwhelming data volumes, the scalability of such methods has become more
and more important. In this work, we present a method which allows to apply any
visualization or embedding algorithm on very large datasets by considering only
a fraction of the data as input and then extending the information to all data
points using a graph encoding its global similarity. We show that in most
cases, using only $\mathcal{O}(\log(N))$ samples is sufficient to diffuse the
information to all $N$ data points. In addition, we propose quantitative
methods to measure the quality of embeddings and demonstrate the validity of
our technique on both synthetic and real-world datasets.
| Johan Paratte, Nathana\"el Perraudin, Pierre Vandergheynst | null | 1702.05815 | null | null |
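Editor's note: a sketch of the diffusion step, assuming the base embedding algorithm has already placed the O(log N) sampled "seed" points; non-seed coordinates are filled in by harmonic interpolation on the similarity graph, with seed coordinates clamped.

```python
import numpy as np

def diffuse_embedding(W, idx_seed, Y_seed, n_iter=200):
    """Extend an embedding from a few seed points to all N graph nodes.

    W: (N, N) nonnegative symmetric similarity matrix (e.g. kNN weights).
    idx_seed: indices embedded by the base algorithm; Y_seed: their coords.
    """
    N = W.shape[0]
    Y = np.zeros((N, Y_seed.shape[1]))
    Y[idx_seed] = Y_seed
    free = np.ones(N, dtype=bool)
    free[idx_seed] = False
    deg = W.sum(axis=1, keepdims=True) + 1e-12
    for _ in range(n_iter):
        Y_avg = (W @ Y) / deg        # weighted neighbourhood average
        Y[free] = Y_avg[free]        # seeds stay clamped to their embedding
    return Y
```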
Robust Sparse Estimation Tasks in High Dimensions | cs.LG cs.DS | In this paper we initiate the study of whether or not sparse estimation tasks
can be performed efficiently in high dimensions, in the robust setting where an
$\epsilon$-fraction of samples are corrupted adversarially. We study the natural
robust version of two classical sparse estimation problems, namely, sparse mean
estimation and sparse PCA in the spiked covariance model. For both of these
problems, we provide the first efficient algorithms that provide non-trivial
error guarantees in the presence of noise, using only a number of samples which
is similar to the number required for these problems without noise. In
particular, our sample complexities are sublinear in the ambient dimension $d$.
Our work also suggests evidence for new computational-vs-statistical gaps for
these problems (similar to those for sparse PCA without noise) which only arise
in the presence of noise.
| Jerry Li | null | 1702.0586 | null | null |
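Editor's note: for intuition only, a naive baseline for robust sparse mean estimation, coordinate-wise median followed by hard thresholding to the top k coordinates; the paper's algorithm is a more sophisticated filter with substantially stronger guarantees.

```python
import numpy as np

def naive_robust_sparse_mean(X, k):
    """Coordinate-wise median (robust to a small fraction of arbitrary
    corruptions per coordinate), then keep the k largest-magnitude
    coordinates to enforce sparsity."""
    med = np.median(X, axis=0)
    mu = np.zeros_like(med)
    top = np.argsort(np.abs(med))[-k:]
    mu[top] = med[top]
    return mu
```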