title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
DeepKey: An EEG and Gait Based Dual-Authentication System | cs.LG | Biometric authentication involves various technologies to identify
individuals by exploiting their unique, measurable physiological and behavioral
characteristics. However, traditional biometric authentication systems (e.g.,
face recognition, iris, retina, voice, and fingerprint) face an increasing
risk of being tricked by spoofing tools such as anti-surveillance masks,
contact lenses, vocoders, or fingerprint films. In this paper, we design a
multimodal biometric authentication system named DeepKey, which uses both
Electroencephalography (EEG) and gait signals to better protect against such
risks. DeepKey consists of two key components: an Invalid ID Filter Model to
block unauthorized subjects and an identification model based on an
attention-based Recurrent Neural Network (RNN) to identify a subject's EEG IDs
and gait IDs in parallel. A subject is granted access only when all components
produce consistent evidence matching the user's proclaimed identity.
We implement DeepKey with a live deployment in our university and conduct
extensive empirical experiments to study its technical feasibility in practice.
DeepKey achieves a False Acceptance Rate (FAR) of 0 and a False Rejection Rate
(FRR) of 1.0%. The preliminary results demonstrate that DeepKey is feasible,
shows consistently superior performance compared to a set of baseline methods,
and has the potential to be applied to authentication deployments in real-world
settings.
| Xiang Zhang, Lina Yao, Chaoran Huang, Tao Gu, Zheng Yang and Yunhao
Liu | null | 1706.01606 | null | null |
Retrosynthetic reaction prediction using neural sequence-to-sequence
models | cs.LG q-bio.QM stat.ML | We describe a fully data-driven model that learns to perform a retrosynthetic
reaction prediction task, which is treated as a sequence-to-sequence mapping
problem. The end-to-end trained model has an encoder-decoder architecture that
consists of two recurrent neural networks, which has previously shown great
success in solving other sequence-to-sequence prediction tasks such as machine
translation. The model is trained on 50,000 experimental reaction examples from
the United States patent literature, which span 10 broad reaction types that
are commonly used by medicinal chemists. We find that our model performs
comparably with a rule-based expert system baseline model, and also overcomes
certain limitations associated with rule-based expert systems and with any
machine learning approach that contains a rule-based expert system component.
Our model provides an important first step towards solving the challenging
problem of computational retrosynthetic analysis.
| Bowen Liu, Bharath Ramsundar, Prasad Kawthekar, Jade Shi, Joseph
Gomes, Quang Luu Nguyen, Stephen Ho, Jack Sloane, Paul Wender, Vijay Pande | null | 1706.01643 | null | null |
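The abstract above describes a standard encoder-decoder architecture. As a rough illustration (not the authors' code), a minimal PyTorch sketch might look as follows, assuming integer-encoded SMILES tokens; the vocabulary size and layer dimensions are invented for the example:

```python
# Minimal GRU encoder-decoder sketch for sequence-to-sequence prediction.
# Hypothetical sizes; assumes integer-encoded SMILES tokens.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab_size=64, emb=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt):
        _, h = self.encoder(self.embed(src))       # encode the product tokens
        dec, _ = self.decoder(self.embed(tgt), h)  # decode reactants from h
        return self.out(dec)                       # (batch, tgt_len, vocab)

model = Seq2Seq()
src = torch.randint(0, 64, (8, 40))  # batch of product token ids
tgt = torch.randint(0, 64, (8, 35))  # shifted reactant token ids
print(model(src, tgt).shape)         # torch.Size([8, 35, 64])
```

Training would minimize cross-entropy between the predicted logits and the true reactant tokens; an attention mechanism would normally be added on top of this skeleton.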
Learning Pairwise Disjoint Simple Languages from Positive Examples | cs.LG cs.FL | A classical problem in grammatical inference is to identify a deterministic
finite automaton (DFA) from a set of positive and negative examples. In this
paper, we address the related - yet seemingly novel - problem of identifying a
set of DFAs from examples that belong to different unknown simple regular
languages. We propose two methods based on compression for clustering the
observed positive examples. We apply our methods to a set of print jobs
submitted to large industrial printers.
| Alexis Linard, Rick Smetsers, Frits Vaandrager, Umar Waqas, Joost van
Pinxten, Sicco Verwer | null | 1706.01663 | null | null |
Limitations on Variance-Reduction and Acceleration Schemes for Finite
Sum Optimization | math.OC cs.LG stat.ML | We study the conditions under which one is able to efficiently apply
variance-reduction and acceleration schemes on finite sum optimization
problems. First, we show that, perhaps surprisingly, the finite sum structure
by itself is not sufficient for obtaining a complexity bound of
$\tilde{\mathcal{O}}((n+L/\mu)\ln(1/\epsilon))$ for $L$-smooth and $\mu$-strongly
convex individual functions - one must also know which individual function is
being referred to by the oracle at each iteration. Next, we show that for a
broad class of first-order and coordinate-descent finite sum algorithms
(including, e.g., SDCA, SVRG, SAG), it is not possible to get an `accelerated'
complexity bound of $\tilde{\mathcal{O}}((n+\sqrt{n L/\mu})\ln(1/\epsilon))$, unless
the strong convexity parameter is given explicitly. Lastly, we show that when
this class of algorithms is used for minimizing $L$-smooth and convex finite
sums, the optimal complexity bound is $\tilde{\mathcal{O}}(n+L/\epsilon)$, assuming
that (on average) the same update rule is used in every iteration, and
$\tilde{\mathcal{O}}(n+\sqrt{nL/\epsilon})$ otherwise.
| Yossi Arjevani | null | 1706.01686 | null | null |
Deep Latent Dirichlet Allocation with Topic-Layer-Adaptive Stochastic
Gradient Riemannian MCMC | stat.ML cs.LG stat.CO | It is challenging to develop stochastic gradient based scalable inference for
deep discrete latent variable models (LVMs), due to the difficulties in not
only computing the gradients, but also adapting the step sizes to different
latent factors and hidden layers. For the Poisson gamma belief network (PGBN),
a recently proposed deep discrete LVM, we derive an alternative representation
that is referred to as deep latent Dirichlet allocation (DLDA). Exploiting data
augmentation and marginalization techniques, we derive a block-diagonal Fisher
information matrix and its inverse for the simplex-constrained global model
parameters of DLDA. Exploiting that Fisher information matrix with stochastic
gradient MCMC, we present topic-layer-adaptive stochastic gradient Riemannian
(TLASGR) MCMC that jointly learns simplex-constrained global parameters across
all layers and topics, with topic and layer specific learning rates.
State-of-the-art results are demonstrated on big data sets.
| Yulai Cong, Bo Chen, Hongwei Liu, Mingyuan Zhou | null | 1706.01724 | null | null |
Multi-View Kernels for Low-Dimensional Modeling of Seismic Events | cs.LG | The problem of learning from seismic recordings has been studied for years.
There is a growing interest in developing automatic mechanisms for identifying
the properties of a seismic event. One main motivation is the ability to
reliably identify man-made explosions. The availability of multiple
high-dimensional observations has increased the use of machine learning
techniques in a variety of fields. In this work, we propose to use a
kernel-fusion based dimensionality reduction framework for generating
meaningful seismic representations from raw data. The proposed method is tested
on 2023 events that were recorded in Israel and in Jordan. The method achieves
promising results in classification of event type as well as in estimating the
location of the event. The proposed fusion and dimensionality reduction tools
may be applied to other types of geophysical data.
| Ofir Lindenbaum, Yuri Bregman, Neta Rabin, Amir Averbuch | 10.1109/TGRS.2018.2797537 | 1706.01750 | null | null |
Adversarial-Playground: A Visualization Suite for Adversarial Sample
Generation | cs.CR cs.AI cs.LG | With growing interest in adversarial machine learning, it is important for
machine learning practitioners and users to understand how their models may be
attacked. We propose a web-based visualization tool, Adversarial-Playground, to
demonstrate the efficacy of common adversarial methods against a deep neural
network (DNN) model, built on top of the TensorFlow library.
Adversarial-Playground provides users an efficient and effective experience in
exploring techniques for generating adversarial examples, which are inputs crafted
by an adversary to fool a machine learning system. To enable
Adversarial-Playground to generate quick and accurate responses for users, we
use two primary tactics: (1) We propose a faster variant of the
state-of-the-art Jacobian saliency map approach that maintains a comparable
evasion rate. (2) Our visualization does not transmit the generated adversarial
images to the client, but rather only the matrix describing the sample and the
vector representing classification likelihoods.
The source code, along with the data from all of our experiments, is available
at \url{https://github.com/QData/AdversarialDNN-Playground}.
| Andrew Norton and Yanjun Qi | null | 1706.01763 | null | null |
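For readers unfamiliar with the Jacobian saliency map the abstract refers to, the textbook computation looks roughly like the sketch below (the paper's faster variant differs; the tiny model, input, and target class are purely illustrative):

```python
# Textbook Jacobian saliency map for a tiny classifier (the paper's faster
# variant differs; model, input, and target class are illustrative).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
x = torch.rand(1, 784, requires_grad=True)
target = 3  # class the adversary wants the model to predict

logits = model(x)
# One Jacobian row per class: gradient of that class logit w.r.t. the input.
jac = torch.stack([torch.autograd.grad(logits[0, c], x, retain_graph=True)[0][0]
                   for c in range(10)])
# Salient pixels raise the target logit while lowering the combined others.
others = jac.sum(dim=0) - jac[target]
saliency = torch.where((jac[target] > 0) & (others < 0),
                       jac[target] * others.abs(),
                       torch.zeros_like(jac[target]))
print(saliency.argmax())  # index of the most promising pixel to perturb
```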
Deep Factorization for Speech Signal | cs.SD cs.LG | Speech signals are a complex intermingling of various informative factors, and
this information blending makes decoding any of the individual factors
extremely difficult. A natural idea is to factorize each speech frame into
independent factors, though it turns out to be even more difficult than
decoding each individual factor. A major encumbrance is that the speaker trait,
a dominant factor in speech signals, has been suspected to be a long-term
distributional pattern and thus not identifiable at the frame level. In this
paper, we demonstrated that the speaker factor is also a short-time spectral
pattern and can be largely identified with just a few frames using a simple
deep neural network (DNN). This discovery motivated a cascade deep
factorization (CDF) framework that infers speech factors in a sequential way,
and factors previously inferred are used as conditional variables when
inferring other factors. Our experiment on an automatic emotion recognition
(AER) task demonstrated that this approach can effectively factorize speech
signals, and using these factors, the original speech spectrum can be recovered
with high accuracy. This factorization and reconstruction approach provides a
novel tool for many speech processing tasks.
| Dong Wang and Lantian Li and Ying Shi and Yixiang Chen and Zhiyuan
Tang | null | 1706.01777 | null | null |
Robust Online Multi-Task Learning with Correlative and Personalized
Structures | cs.LG stat.ML | Multi-Task Learning (MTL) can enhance a classifier's generalization
performance by learning multiple related tasks simultaneously. Conventional MTL
works under the offline or batch setting, and suffers from expensive training
cost and poor scalability. To address such inefficiency issues, online learning
techniques have been applied to solve MTL problems. However, most existing
algorithms of online MTL constrain task relatedness into a presumed structure
via a single weight matrix, which is a strict restriction that does not always
hold in practice. In this paper, we propose a robust online MTL framework that
overcomes this restriction by decomposing the weight matrix into two
components: the first one captures the low-rank common structure among tasks
via a nuclear norm and the second one identifies the personalized patterns of
outlier tasks via a group lasso. Theoretical analysis shows the proposed
algorithm can achieve a sub-linear regret with respect to the best linear model
in hindsight. Even though the above framework achieves good performance, the
nuclear norm that simply adds all nonzero singular values together may not be a
good low-rank approximation. To improve the results, we use a log-determinant
function as a non-convex rank approximation. A gradient scheme is applied to
optimize the log-determinant function and obtains a closed-form solution for
this refined problem. Experimental results on a number of real-world
applications verify the efficacy of our method.
| Peng Yang, Peilin Zhao, Xin Gao | 10.1109/TKDE.2017.2703106 | 1706.01824 | null | null |
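The decomposition described above suggests two proximal updates, one per component. A minimal sketch, assuming a proximal-gradient style solver (function names, step sizes, and shapes are illustrative, not the authors' algorithm):

```python
# Sketch of the two proximal updates implied by the decomposition W = U + V:
# singular-value soft-thresholding for the shared low-rank part, group-lasso
# shrinkage for the outlier-task part (step sizes and shapes are illustrative).
import numpy as np

def prox_nuclear(U, tau):
    P, s, Qt = np.linalg.svd(U, full_matrices=False)
    return P @ np.diag(np.maximum(s - tau, 0.0)) @ Qt  # shrink singular values

def prox_group_lasso(V, tau):
    norms = np.linalg.norm(V, axis=0, keepdims=True)   # one group per task
    return V * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

rng = np.random.default_rng(0)
U = prox_nuclear(rng.standard_normal((20, 5)), tau=0.5)
V = prox_group_lasso(rng.standard_normal((20, 5)), tau=0.5)
print(np.linalg.matrix_rank(U), np.count_nonzero(V.any(axis=0)))
```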
Efficient Antihydrogen Detection in Antimatter Physics by Deep Learning | physics.ins-det cs.LG hep-ex | Antihydrogen is at the forefront of antimatter research at the CERN
Antiproton Decelerator. Experiments aiming to test the fundamental CPT symmetry
and antigravity effects require the efficient detection of antihydrogen
annihilation events, which is performed using highly granular tracking
detectors installed around an antimatter trap. Improving the efficiency of the
antihydrogen annihilation detection plays a central role in the final
sensitivity of the experiments. We propose deep learning as a novel technique
to analyze antihydrogen annihilation data, and compare its performance with a
traditional track and vertex reconstruction method. We report that the deep
learning approach yields significant improvement, tripling event coverage while
simultaneously improving performance by over 5% in terms of Area Under Curve
(AUC).
| Peter Sadowski, Balint Radics, Ananya, Yasunori Yamazaki, Pierre Baldi | null | 1706.01826 | null | null |
Online Adaptive Machine Learning Based Algorithm for Implied Volatility
Surface Modeling | stat.ML cs.LG q-fin.CP | In this work, we design a machine learning based method, online adaptive
primal support vector regression (SVR), to model the implied volatility surface
(IVS). The algorithm proposed is the first derivation and implementation of an
online primal kernel SVR. It features enhancements that allow efficient online
adaptive learning by embedding the idea of local fitness and budget maintenance
to dynamically update support vectors upon pattern drifts. For algorithm
acceleration, we implement its most computationally intensive parts on
Field-Programmable Gate Array (FPGA) hardware, where a 132x speedup over CPU is
achieved during online prediction. Using intraday tick data from the E-mini
S&P 500 options market, we show that the Gaussian kernel outperforms the
linear kernel in regulating the size of support vectors, and that our
empirical IVS algorithm beats two competing online methods with regard to
model complexity and regression errors (the mean absolute percentage error of
our algorithm is up to 13%). Best results are obtained at the center of the
IVS grid due to its larger number of adjacent support vectors compared to the
edges of the grid. Sensitivity analysis is also presented to demonstrate how
hyperparameters affect the error
rates and model complexity.
| Yaxiong Zeng, Diego Klabjan | null | 1706.01833 | null | null |
Attributed Network Embedding for Learning in a Dynamic Environment | cs.SI cs.LG stat.ML | Network embedding leverages the node proximity manifested in the network structure to learn a
low-dimensional node vector representation for each node in the network. The
learned embeddings could advance various learning tasks such as node
classification, network clustering, and link prediction. Most, if not all, of
the existing works are performed in the context of plain and
static networks. Nonetheless, in reality, network structure often evolves over
time with addition/deletion of links and nodes. Also, a vast majority of
real-world networks are associated with a rich set of node attributes, and
their attribute values are also naturally changing, with the emerging of new
content patterns and the fading of old content patterns. These changing
characteristics motivate us to seek an effective embedding representation to
capture network and attribute evolving patterns, which is of fundamental
importance for learning in a dynamic environment. To the best of our knowledge, we are
the first to tackle this problem with the following two challenges: (1) the
inherently correlated network and node attributes could be noisy and
incomplete, necessitating a robust consensus representation to capture their
individual properties and correlations; (2) the embedding learning needs to be
performed in an online fashion to adapt to the changes accordingly. In this
paper, we tackle this problem by proposing a novel dynamic attributed network
embedding framework - DANE. In particular, DANE first provides an offline
method for a consensus embedding and then leverages matrix perturbation theory
to maintain the freshness of the end embedding results in an online manner. We
perform extensive experiments on both synthetic and real attributed networks to
corroborate the effectiveness and efficiency of the proposed framework.
| Jundong Li, Harsh Dani, Xia Hu, Jiliang Tang, Yi Chang, Huan Liu | 10.1145/3132847.3132919 | 1706.01860 | null | null |
A generalized method toward drug-target interaction prediction via
low-rank matrix projection | cs.LG cs.CE physics.bio-ph | Drug-target interaction (DTI) prediction plays a very important role in drug
development and drug discovery. Biochemical experiments or \textit{in vitro}
methods are very expensive, laborious and time-consuming. Therefore, \textit{in
silico} approaches including docking simulation and machine learning have been
proposed to solve this problem. In particular, machine learning approaches have
attracted increasing attention recently. However, in addition to the known
drug-target interactions, most of the machine learning methods require extra
characteristic information such as chemical structures, genome sequences,
binding types and so on. Whenever such information is not available, they may
perform poorly. Very recently, the similarity-based link prediction methods were
extended to bipartite networks, which can be applied to solve the DTI
prediction problem by using topological information only. In this work, we
propose a method based on low-rank matrix projection to solve the DTI
prediction problem. On one hand, when there is no extra characteristic
information of drugs or targets, the proposed method utilizes only the known
interactions. On the other hand, the proposed method can also utilize the extra
characteristic information when it is available and the performances will be
remarkably improved. Moreover, the proposed method can predict the interactions
associated with new drugs or targets of which we know nothing about their
associated interactions, but only some characteristic information. We compare
the proposed method with ten baseline methods, e.g., six similarity-based
methods that utilize only the known interactions and four methods that utilize
the extra characteristic information. The datasets and codes implementing the
simulations are available at https://github.com/rathapech/DTI_LMP.
| Ratha Pech, Dong Hao, Yan-Li Lee, Maryna Po, Tao Zhou | null | 1706.01876 | null | null |
Parameter Space Noise for Exploration | cs.LG cs.AI cs.NE cs.RO stat.ML | Deep reinforcement learning (RL) methods generally engage in exploratory
behavior through noise injection in the action space. An alternative is to add
noise directly to the agent's parameters, which can lead to more consistent
exploration and a richer set of behaviors. Methods such as evolutionary
strategies use parameter perturbations, but discard all temporal structure in
the process and require significantly more samples. Combining parameter noise
with traditional RL methods allows us to combine the best of both worlds. We
demonstrate that both off- and on-policy methods benefit from this approach
through experimental comparison of DQN, DDPG, and TRPO on high-dimensional
discrete action environments as well as continuous control tasks. Our results
show that RL with parameter noise learns more efficiently than traditional RL
with action space noise and evolutionary strategies individually.
| Matthias Plappert, Rein Houthooft, Prafulla Dhariwal, Szymon Sidor,
Richard Y. Chen, Xi Chen, Tamim Asfour, Pieter Abbeel, Marcin Andrychowicz | null | 1706.01905 | null | null |
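The core idea is compact enough to sketch: perturb the policy parameters once and keep them fixed for the whole episode, adapting the noise scale over time. The adaptation rule below is a crude stand-in for the paper's distance-based scheme, and all sizes are illustrative:

```python
# Sketch of parameter-space exploration: perturb the policy weights once per
# episode instead of adding per-step action noise. The sigma update below is
# a crude stand-in for the paper's adaptive scaling.
import numpy as np

def perturb(params, sigma, rng):
    return {k: w + sigma * rng.standard_normal(w.shape) for k, w in params.items()}

rng = np.random.default_rng(0)
params = {"W": rng.standard_normal((4, 2)), "b": np.zeros(2)}
sigma = 0.1
for episode in range(3):
    noisy = perturb(params, sigma, rng)   # held fixed for the whole episode
    obs = rng.standard_normal(4)
    action = np.argmax(obs @ noisy["W"] + noisy["b"])
    # Adapt sigma so the induced change in actions stays near a target level
    # (placeholder rule; the paper measures an action-space distance).
    sigma *= 1.01 if rng.random() < 0.5 else 0.99
```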
Deep Learning: Generalization Requires Deep Compositional Feature Space
Design | cs.LG stat.ML | Generalization error defines the discriminability and the representation
power of a deep model. In this work, we claim that feature space design using
deep compositional function plays a significant role in generalization along
with explicit and implicit regularizations. We support these claims
with several image classification experiments. We show that the information
loss due to convolution and max pooling can be marginalized with the
compositional design, improving generalization performance. We also show
that learning rate decay acts as an implicit regularizer in deep model
training.
| Mrinal Haloi | null | 1706.01983 | null | null |
Stacked Convolutional and Recurrent Neural Networks for Bird Audio
Detection | cs.SD cs.LG | This paper studies the detection of bird calls in audio segments using
stacked convolutional and recurrent neural networks. Data augmentation by
block mixing and domain adaptation using a novel method of test mixing are
proposed and evaluated with regard to making the method robust to unseen data.
The contributions of two kinds of acoustic features (dominant frequency and log
mel-band energy) and their combinations are studied in the context of bird
audio detection. Our best AUC is 95.5% on five cross-validation folds of the
development data and 88.1% on the unseen evaluation data.
| Sharath Adavanne, Konstantinos Drossos, Emre \c{C}ak{\i}r, Tuomas
Virtanen | null | 1706.02047 | null | null |
Are Saddles Good Enough for Deep Learning? | stat.ML cs.LG cs.NE | Recent years have seen a growing interest in understanding deep neural
networks from an optimization perspective. It is understood now that converging
to low-cost local minima is sufficient for such models to become effective in
practice. However, in this work, we propose a new hypothesis based on recent
theoretical findings and empirical studies that deep neural network models
actually converge to saddle points with high degeneracy. Our findings from this
work are new, and can have a significant impact on the development of gradient
descent based methods for training deep networks. We validated our hypotheses
using an extensive experimental evaluation on standard datasets such as MNIST
and CIFAR-10, and also showed that recent efforts that attempt to escape
saddles finally converge to saddles with high degeneracy, which we define as
`good saddles'. We also verified Wigner's famous semicircle law in our
experimental results.
| Adepu Ravi Sankar, Vineeth N Balasubramanian | null | 1706.02052 | null | null |
Semi-Supervised Phoneme Recognition with Recurrent Ladder Networks | cs.CL cs.LG cs.NE | Ladder networks are a notable new concept in the field of semi-supervised
learning, showing state-of-the-art results in image recognition tasks while
being compatible with many existing neural architectures. We present the
recurrent ladder network, a novel modification of the ladder network, for
semi-supervised learning of recurrent neural networks which we evaluate with a
phoneme recognition task on the TIMIT corpus. Our results show that the model
is able to consistently outperform the baseline and achieve fully-supervised
baseline performance with only 75% of all labels, which demonstrates that the
model is capable of using unsupervised data as an effective regulariser.
| Marian Tietz, Tayfun Alpay, Johannes Twiefel, Stefan Wermter | 10.1007/978-3-319-68600-4_1 | 1706.02124 | null | null |
Inductive Representation Learning on Large Graphs | cs.SI cs.LG stat.ML | Low-dimensional embeddings of nodes in large graphs have proved extremely
useful in a variety of prediction tasks, from content recommendation to
identifying protein functions. However, most existing approaches require that
all nodes in the graph are present during training of the embeddings; these
previous approaches are inherently transductive and do not naturally generalize
to unseen nodes. Here we present GraphSAGE, a general, inductive framework that
leverages node feature information (e.g., text attributes) to efficiently
generate node embeddings for previously unseen data. Instead of training
individual embeddings for each node, we learn a function that generates
embeddings by sampling and aggregating features from a node's local
neighborhood. Our algorithm outperforms strong baselines on three inductive
node-classification benchmarks: we classify the category of unseen nodes in
evolving information graphs based on citation and Reddit post data, and we show
that our algorithm generalizes to completely unseen graphs using a multi-graph
dataset of protein-protein interactions.
| William L. Hamilton, Rex Ying, Jure Leskovec | null | 1706.02216 | null | null |
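The sample-and-aggregate step the abstract describes can be sketched in a few lines. The mean aggregator below is one of several the paper considers; the toy graph and sizes are illustrative:

```python
# One GraphSAGE layer with a mean aggregator: sample a fixed number of
# neighbours, average their features, concatenate with the node's own
# features, project, and l2-normalise (toy graph and sizes are illustrative).
import numpy as np

def sage_layer(features, adj_list, W, num_samples=5, rng=np.random.default_rng(0)):
    out = []
    for v, neigh in enumerate(adj_list):
        sampled = rng.choice(neigh, size=num_samples, replace=True)
        agg = features[sampled].mean(axis=0)         # aggregate the neighbourhood
        h = np.concatenate([features[v], agg]) @ W   # combine self + neighbours
        out.append(np.maximum(h, 0.0))               # ReLU
    out = np.stack(out)
    return out / np.linalg.norm(out, axis=1, keepdims=True)

feats = np.random.randn(4, 8)
adj = [[1, 2], [0, 2, 3], [0, 1], [1]]   # adjacency lists of a toy graph
W = np.random.randn(16, 8)               # learned in practice; random here
print(sage_layer(feats, adj, W).shape)   # (4, 8)
```

Because the embedding comes from a learned function of local features rather than a per-node lookup table, the same layer applies unchanged to nodes unseen during training.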
Gated Recurrent Neural Tensor Network | cs.LG cs.CL stat.ML | Recurrent Neural Networks (RNNs), which are a powerful scheme for modeling
temporal and sequential data, need to capture long-term dependencies on datasets
and represent them in hidden layers with a powerful model to capture more
information from inputs. For modeling long-term dependencies in a dataset, the
gating mechanism concept can help RNNs remember and forget previous
information. Representing the hidden layers of an RNN with more expressive
operations (i.e., tensor products) helps it learn a more complex relationship
between the current input and the previous hidden layer information. These
ideas can generally improve RNN performance. In this paper, we propose a
novel RNN architecture that combines the concepts of the gating mechanism and the
tensor product into a single model. By combining these two concepts into a
single RNN, our proposed models learn long-term dependencies by modeling with
gating units and obtain more expressive and direct interaction between input
and hidden layers using a tensor product on 3-dimensional array (tensor) weight
parameters. We use the Long Short-Term Memory (LSTM) RNN and Gated Recurrent Unit
(GRU) RNN and combine them with a tensor product inside their formulations. Our
proposed RNNs, which are called the Long Short-Term Memory Recurrent Neural
Tensor Network (LSTMRNTN) and Gated Recurrent Unit Recurrent Neural Tensor
Network (GRURNTN), are made by combining the LSTM and GRU RNN models with the
tensor product. We conducted experiments with our proposed models on word-level
and character-level language modeling tasks and revealed that our proposed
models significantly improved their performance compared to our baseline
models.
| Andros Tjandra, Sakriani Sakti, Ruli Manurung, Mirna Adriani and
Satoshi Nakamura | 10.1109/IJCNN.2016.7727233 | 1706.02222 | null | null |
Efficient Reinforcement Learning via Initial Pure Exploration | cs.LG stat.ML | In several realistic situations, an interactive learning agent can practice
and refine its strategy before going on to be evaluated. For instance, consider
a student preparing for a series of tests. She would typically take a few
practice tests to know which areas she needs to improve upon. Based on the
scores she obtains in these practice tests, she would formulate a strategy for
maximizing her scores in the actual tests. We treat this scenario in the
context of an agent exploring a fixed-horizon episodic Markov Decision Process
(MDP), where the agent can practice on the MDP for some number of episodes (not
necessarily known in advance) before starting to incur regret for its actions.
During practice, the agent's goal must be to maximize the probability of
following an optimal policy. This is akin to the problem of Pure Exploration
(PE). We extend the PE problem of Multi-Armed Bandits (MAB) to MDPs and propose
a Bayesian algorithm called Posterior Sampling for Pure Exploration (PSPE),
which is similar to its bandit counterpart. We show that the Bayesian simple
regret converges at an optimal exponential rate when using PSPE.
When the agent starts being evaluated, its goal would be to minimize the
cumulative regret incurred. This is akin to the problem of Reinforcement
Learning (RL). The agent uses the Posterior Sampling for Reinforcement Learning
algorithm (PSRL) initialized with the posteriors of the practice phase. We
hypothesize that this PSPE + PSRL combination is an optimal strategy for
minimizing regret in RL problems with an initial practice phase. We present
empirical results showing that having a lower simple regret at the end of
the practice phase results in having lower cumulative regret during evaluation.
| Sudeep Raja Putta, Theja Tulabandhula | null | 1706.02237 | null | null |
Recurrent computations for visual pattern completion | q-bio.NC cs.AI cs.CV cs.LG | Making inferences from partial information constitutes a critical aspect of
cognition. During visual perception, pattern completion enables recognition of
poorly visible or occluded objects. We combined psychophysics, physiology and
computational models to test the hypothesis that pattern completion is
implemented by recurrent computations and present three pieces of evidence that
are consistent with this hypothesis. First, subjects robustly recognized
objects even when rendered <15% visible, but recognition was largely impaired
when processing was interrupted by backward masking. Second, invasive
physiological responses along the human ventral cortex exhibited visually
selective responses to partially visible objects that were delayed compared to
whole objects, suggesting the need for additional computations. These
physiological delays were correlated with the effects of backward masking.
Third, state-of-the-art feed-forward computational architectures were not
robust to partial visibility. However, recognition performance was recovered
when the model was augmented with attractor-based recurrent connectivity. These
results provide a strong argument of plausibility for the role of recurrent
computations in making visual inferences from partial information.
| Hanlin Tang, Martin Schrimpf, Bill Lotter, Charlotte Moerman, Ana
Paredes, Josue Ortega Caro, Walter Hardesty, David Cox, Gabriel Kreiman | 10.1073/pnas.1719397115 | 1706.02240 | null | null |
Comparative Analysis of Open Source Frameworks for Machine Learning with
Use Case in Single-Threaded and Multi-Threaded Modes | cs.LG cs.CV cs.DC | The basic features of some of the most versatile and popular open source
frameworks for machine learning (TensorFlow, Deep Learning4j, and H2O) are
considered and compared. Their comparative analysis was performed and
conclusions were made as to the advantages and disadvantages of these
platforms. The performance tests for the de facto standard MNIST data set were
carried out on the H2O framework for deep learning algorithms designed for CPU and
GPU platforms for single-threaded and multithreaded modes of operation.
| Yuriy Kochura, Sergii Stirenko, Anis Rojbi, Oleg Alienin, Michail
Novotarskiy, and Yuri Gordienko | 10.1109/STC-CSIT.2017.8098808 | 1706.02248 | null | null |
Driver Action Prediction Using Deep (Bidirectional) Recurrent Neural
Network | stat.ML cs.AI cs.CV cs.LG cs.NE | Advanced driver assistance systems (ADAS) can be significantly improved with
effective driver action prediction (DAP). Predicting driver actions early and
accurately can help mitigate the effects of potentially unsafe driving
behaviors and avoid possible accidents. In this paper, we formulate driver
action prediction as a time-series anomaly prediction problem. While the anomaly
(driver actions of interest) detection might be trivial in this context,
finding patterns that consistently precede an anomaly requires searching for or
extracting features across multi-modal sensory inputs. We present such a driver
action prediction system, including a real-time data acquisition, processing
and learning framework for predicting future or impending driver action. The
proposed system incorporates camera-based knowledge of the driving environment
and the driver themselves, in addition to traditional vehicle dynamics. It then
uses a deep bidirectional recurrent neural network (DBRNN) to learn the
correlation between sensory inputs and impending driver behavior, achieving
accurate, long-horizon action prediction. The proposed system performs
better than other existing systems on driver action prediction tasks and can
accurately predict key driver actions including acceleration, braking, lane
change and turning up to 5 seconds before the action is executed by the
driver.
| Oluwatobi Olabiyi, Eric Martinson, Vijay Chintalapudi, Rui Guo | null | 1706.02257 | null | null |
InfoVAE: Information Maximizing Variational Autoencoders | cs.LG cs.AI stat.ML | A key advance in learning generative models is the use of amortized inference
distributions that are jointly trained with the models. We find that existing
training objectives for variational autoencoders can lead to inaccurate
amortized inference distributions and, in some cases, improving the objective
provably degrades the inference quality. In addition, it has been observed that
variational autoencoders tend to ignore the latent variables when combined with
a decoding distribution that is too flexible. We again identify the cause in
existing training criteria and propose a new class of objectives (InfoVAE) that
mitigate these problems. We show that our model can significantly improve the
quality of the variational posterior and can make effective use of the latent
features regardless of the flexibility of the decoding distribution. Through
extensive qualitative and quantitative analyses, we demonstrate that our models
outperform competing approaches on multiple performance metrics.
| Shengjia Zhao, Jiaming Song, Stefano Ermon | null | 1706.02262 | null | null |
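One member of the InfoVAE objective family matches the aggregate posterior to the prior with a maximum mean discrepancy (MMD) penalty. A minimal sketch of an RBF-kernel MMD term follows; the dimensions, bandwidth, and stand-in samples are made up for the example:

```python
# Sketch of an RBF-kernel MMD penalty between encoder samples and the prior,
# the divergence used by the MMD member of the InfoVAE family (illustrative;
# dimensions, bandwidth, and the stand-in samples are made up).
import torch

def rbf_mmd(z_q, z_p, bandwidth=1.0):
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return torch.exp(-d / (2 * bandwidth ** 2))
    return k(z_q, z_q).mean() + k(z_p, z_p).mean() - 2 * k(z_q, z_p).mean()

z_q = torch.randn(128, 16) * 0.5 + 1.0  # stand-in for aggregate posterior samples
z_p = torch.randn(128, 16)              # samples from the prior p(z)
penalty = rbf_mmd(z_q, z_p)             # added to the reconstruction loss
print(penalty.item())
```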
Graph Convolutional Matrix Completion | stat.ML cs.DB cs.IR cs.LG | We consider matrix completion for recommender systems from the point of view
of link prediction on graphs. Interaction data such as movie ratings can be
represented by a bipartite user-item graph with labeled edges denoting observed
ratings. Building on recent progress in deep learning on graph-structured data,
we propose a graph auto-encoder framework based on differentiable message
passing on the bipartite interaction graph. Our model shows competitive
performance on standard collaborative filtering benchmarks. In settings where
complementary feature information or structured data such as a social network
is available, our framework outperforms recent state-of-the-art methods.
| Rianne van den Berg, Thomas N. Kipf, Max Welling | null | 1706.02263 | null | null |
Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments | cs.LG cs.AI cs.NE | We explore deep reinforcement learning methods for multi-agent domains. We
begin by analyzing the difficulty of traditional algorithms in the multi-agent
case: Q-learning is challenged by an inherent non-stationarity of the
environment, while policy gradient suffers from a variance that increases as
the number of agents grows. We then present an adaptation of actor-critic
methods that considers action policies of other agents and is able to
successfully learn policies that require complex multi-agent coordination.
Additionally, we introduce a training regimen utilizing an ensemble of policies
for each agent that leads to more robust multi-agent policies. We show the
strength of our approach compared to existing methods in cooperative as well as
competitive scenarios, where agent populations are able to discover various
physical and informational coordination strategies.
| Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, Igor Mordatch | null | 1706.02275 | null | null |
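The key architectural idea, a centralised critic per agent that sees all agents' observations and actions while each actor stays decentralised, can be sketched as follows (sizes and network shapes are illustrative, not the authors' code):

```python
# Sketch of the centralised-critic layout: each critic sees the joint
# observation-action of all agents; each actor sees only its own observation.
# Sizes and network shapes are illustrative.
import torch
import torch.nn as nn

n_agents, obs_dim, act_dim = 3, 10, 2
actors = [nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                        nn.Linear(64, act_dim), nn.Tanh())
          for _ in range(n_agents)]
critic_in = n_agents * (obs_dim + act_dim)
critics = [nn.Sequential(nn.Linear(critic_in, 64), nn.ReLU(),
                         nn.Linear(64, 1))
           for _ in range(n_agents)]

obs = [torch.randn(1, obs_dim) for _ in range(n_agents)]
acts = [actor(o) for actor, o in zip(actors, obs)]   # decentralised acting
joint = torch.cat(obs + acts, dim=-1)                # centralised critic input
q_values = [critic(joint) for critic in critics]
print([q.item() for q in q_values])
```

At execution time only the actors are needed, which is what sidesteps the non-stationarity that plain Q-learning suffers from in this setting.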
Meta-Learning for Resampling Recommendation Systems | cs.LG stat.AP stat.CO stat.ME | One possible approach to tackle the class imbalance in classification tasks
is to resample a training dataset, i.e., to drop some of its elements or to
synthesize new ones. There exist several widely-used resampling methods. Recent
research showed that the choice of resampling method significantly affects the
quality of classification, which raises the resampling selection problem.
Exhaustive search for optimal resampling is time-consuming and hence it is of
limited use. In this paper, we describe an alternative approach to the
resampling selection. We follow the meta-learning concept to build resampling
recommendation systems, i.e., algorithms recommending resampling for datasets
on the basis of their properties.
| Smolyakov Dmitry, Alexander Korotin, Pavel Erofeev, Artem Papanov,
Evgeny Burnaev | null | 1706.02289 | null | null |
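As a concrete example of one resampling operation such a recommender would choose among, here is a minimal random-undersampling sketch (illustrative only; the paper compares several widely-used methods):

```python
# Random undersampling of the majority class, one of the candidate resampling
# operations a recommender would choose among (illustrative only).
import numpy as np

def undersample(X, y, majority=0, seed=0):
    rng = np.random.default_rng(seed)
    maj = np.where(y == majority)[0]
    minority = np.where(y != majority)[0]
    keep = rng.choice(maj, size=len(minority), replace=False)
    idx = np.concatenate([keep, minority])
    return X[idx], y[idx]

X = np.random.randn(100, 3)
y = np.array([0] * 90 + [1] * 10)
Xb, yb = undersample(X, y)
print(np.bincount(yb))  # balanced classes: [10 10]
```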
Sound Event Detection Using Spatial Features and Convolutional Recurrent
Neural Network | cs.SD cs.LG | This paper proposes to use low-level spatial features extracted from
multichannel audio for sound event detection. We extend the convolutional
recurrent neural network to handle more than one type of these multichannel
features by learning from each of them separately in the initial stages. We
show that instead of concatenating the features of each channel into a single
feature vector the network learns sound events in multichannel audio better
when they are presented as separate layers of a volume. Using the proposed
spatial features over monaural features on the same network gives an absolute
F-score improvement of 6.1% on the publicly available TUT-SED 2016 dataset and
2.7% on the TUT-SED 2009 dataset, which is fifteen times larger.
| Sharath Adavanne, Pasi Pertil\"a, Tuomas Virtanen | null | 1706.02291 | null | null |
Stacked Convolutional and Recurrent Neural Networks for Music Emotion
Recognition | cs.SD cs.LG | This paper studies the emotion recognition from musical tracks in the
2-dimensional valence-arousal (V-A) emotional space. We propose a method based
on convolutional (CNN) and recurrent neural networks (RNN), having
significantly fewer parameters compared with the state-of-the-art method for
the same task. We utilize one CNN layer followed by two branches of RNNs
trained separately for arousal and valence. The method was evaluated using the
'MediaEval2015 emotion in music' dataset. We achieved an RMSE of 0.202 for
arousal and 0.268 for valence, which is the best result reported on this
dataset.
| Miroslav Malik, Sharath Adavanne, Konstantinos Drossos, Tuomas
Virtanen, Dasa Ticha, Roman Jarina | null | 1706.02292 | null | null |
Sound Event Detection in Multichannel Audio Using Spatial and Harmonic
Features | cs.SD cs.LG | In this paper, we propose the use of spatial and harmonic features in
combination with long short term memory (LSTM) recurrent neural network (RNN)
for automatic sound event detection (SED) task. Real life sound recordings
typically have many overlapping sound events, making them hard to recognize with
just mono channel audio. Human listeners successfully recognize such
mixtures of overlapping sound events using pitch cues and exploiting the stereo
(multichannel) audio signal available at their ears to spatially localize these
events. Traditionally, SED systems have used only mono channel audio;
motivated by the human listener, we propose to extend them to use multichannel
audio. The proposed SED system is compared against the state-of-the-art mono
channel method on the development subset of the TUT sound events detection 2016
database. The usage of spatial and harmonic features is shown to improve the
performance of SED.
| Sharath Adavanne, Giambattista Parascandolo, Pasi Pertil\"a, Toni
Heittola, Tuomas Virtanen | null | 1706.02293 | null | null |
Generative-Discriminative Variational Model for Visual Recognition | cs.LG | The paradigm shift from shallow classifiers with hand-crafted features to
end-to-end trainable deep learning models has shown significant improvements on
supervised learning tasks. Despite the promising power of deep neural networks
(DNN), how to alleviate overfitting during training has been a research topic
of interest. In this paper, we present a Generative-Discriminative Variational
Model (GDVM) for visual classification, in which we introduce a latent variable
inferred from inputs for exhibiting generative abilities towards prediction. In
other words, our GDVM casts the supervised learning task as a generative
learning process, with data discrimination to be jointly exploited for improved
classification. In our experiments, we consider the tasks of multi-class
classification, multi-label classification, and zero-shot learning. We show
that our GDVM performs favorably against the baselines or recent generative DNN
models.
| Chih-Kuan Yeh and Yao-Hung Hubert Tsai and Yu-Chiang Frank Wang | null | 1706.02295 | null | null |
Low-shot learning with large-scale diffusion | cs.CV cs.LG stat.ML | This paper considers the problem of inferring image labels from images when
only a few annotated examples are available at training time. This setup is
often referred to as low-shot learning, where a standard approach is to
re-train the last few layers of a convolutional neural network learned on
separate classes for which training examples are abundant. We consider a
semi-supervised setting based on a large collection of images to support label
propagation. This is possible by leveraging the recent advances on large-scale
similarity graph construction.
We show that despite its conceptual simplicity, scaling label propagation up
to hundreds of millions of images leads to state-of-the-art accuracy in the
low-shot learning regime.
| Matthijs Douze and Arthur Szlam and Bharath Hariharan and Herv\'e
J\'egou | null | 1706.02332 | null | null |
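The label-propagation mechanism being scaled up can be sketched on a toy affinity matrix (illustrative; the paper's graph has hundreds of millions of nodes and is built with approximate nearest-neighbour search):

```python
# Label propagation on a toy affinity matrix: diffuse labels along graph
# edges, re-anchoring the labelled seeds each step (illustrative only).
import numpy as np

def propagate(W, Y, alpha=0.9, iters=50):
    S = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)  # row-normalise
    F = Y.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y    # diffuse, then re-anchor seeds
    return F.argmax(axis=1)

rng = np.random.default_rng(0)
W = rng.random((6, 6)); W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)                        # toy symmetric affinity matrix
Y = np.zeros((6, 2)); Y[0, 0] = 1; Y[5, 1] = 1  # two labelled seed images
print(propagate(W, Y))                          # inferred labels for all nodes
```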
Learning to Extract Semantic Structure from Documents Using Multimodal
Fully Convolutional Neural Network | cs.CV cs.LG | We present an end-to-end, multimodal, fully convolutional network for
extracting semantic structures from document images. We consider document
semantic structure extraction as a pixel-wise segmentation task, and propose a
unified model that classifies pixels based not only on their visual appearance,
as in the traditional page segmentation task, but also on the content of
underlying text. Moreover, we propose an efficient synthetic document
generation process that we use to generate pretraining data for our network.
Once the network is trained on a large set of synthetic documents, we fine-tune
the network on unlabeled real documents using a semi-supervised approach. We
systematically study the optimum network architecture and show that both our
multimodal approach and the synthetic data pretraining significantly boost the
performance.
| Xiao Yang, Ersin Yumer, Paul Asente, Mike Kraley, Daniel Kifer, C. Lee
Giles | null | 1706.02337 | null | null |
The Effects of Noisy Labels on Deep Convolutional Neural Networks for
Music Tagging | cs.IR cs.LG cs.MM cs.SD | Deep neural networks (DNN) have been successfully applied to music
classification including music tagging. However, there are several open
questions regarding the training, evaluation, and analysis of DNNs. In this
article, we investigate a specific aspect of neural networks, the effect of
noisy labels, to deepen our understanding of their properties. We analyse and
(re-)validate a large music tagging dataset to investigate the reliability of
training and evaluation. Using a trained network, we compute label vector
similarities, which are compared to ground-truth similarities.
The results highlight several important aspects of music tagging and neural
networks. We show that networks can be effective despite relatively large error
rates in groundtruth datasets, while conjecturing that label noise can be the
cause of varying tag-wise performance differences. Lastly, the analysis of our
trained network provides valuable insight into the relationships between music
tags. These results highlight the benefit of using data-driven methods to
address automatic music tagging.
| Keunwoo Choi and George Fazekas and Kyunghyun Cho and Mark Sandler | null | 1706.02361 | null | null |
Fast Black-box Variational Inference through Stochastic Trust-Region
Optimization | cs.LG stat.ML | We introduce TrustVI, a fast second-order algorithm for black-box variational
inference based on trust-region optimization and the reparameterization trick.
At each iteration, TrustVI proposes and assesses a step based on minibatches of
draws from the variational distribution. The algorithm provably converges to a
stationary point. We implemented TrustVI in the Stan framework and compared it
to two alternatives: Automatic Differentiation Variational Inference (ADVI) and
Hessian-free Stochastic Gradient Variational Inference (HFSGVI). The former is
based on stochastic first-order optimization. The latter uses second-order
information, but lacks convergence guarantees. TrustVI typically converged at
least one order of magnitude faster than ADVI, demonstrating the value of
stochastic second-order information. TrustVI often found substantially better
variational distributions than HFSGVI, demonstrating that our convergence
theory can matter in practice.
| Jeffrey Regier and Michael I. Jordan and Jon McAuliffe | null | 1706.02375 | null | null |
Training Quantized Nets: A Deeper Understanding | cs.LG cs.CV stat.ML | Currently, deep neural networks are deployed on low-power portable devices by
first training a full-precision model using powerful hardware, and then
deriving a corresponding low-precision model for efficient inference on such
systems. However, training models directly with coarsely quantized weights is a
key step towards learning on embedded platforms that have limited computing
resources, memory capacity, and power consumption. Numerous recent publications
have studied methods for training quantized networks, but these studies have
mostly been empirical. In this work, we investigate training methods for
quantized neural networks from a theoretical viewpoint. We first explore
accuracy guarantees for training methods under convexity assumptions. We then
look at the behavior of these algorithms for non-convex problems, and show that
training algorithms that exploit high-precision representations have an
important greedy search phase that purely quantized training methods lack,
which explains the difficulty of training using low-precision arithmetic.
| Hao Li, Soham De, Zheng Xu, Christoph Studer, Hanan Samet, Tom
Goldstein | null | 1706.02379 | null | null |
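The "high-precision representation" the analysis refers to is the full-precision weight buffer kept alongside the quantized weights used in the forward pass. A BinaryConnect-style sketch of that pattern, for one linear neuron with made-up data and rates:

```python
# BinaryConnect-style pattern: a full-precision weight buffer accumulates
# updates while a coarsely quantized copy is used in the forward pass
# (one linear neuron; data and rates are illustrative).
import numpy as np

w_full = np.random.randn(4) * 0.1        # high-precision "shadow" weights
x, y, lr = np.array([1.0, -2.0, 0.5, 0.0]), 1.0, 0.05
for step in range(100):
    w_q = np.sign(w_full)                # quantized weights for the forward pass
    pred = w_q @ x
    grad = 2.0 * (pred - y) * x          # gradient taken at the quantized point
    w_full -= lr * grad                  # ...but applied to the full-precision copy
    w_full = np.clip(w_full, -1.0, 1.0)  # keep the buffer in a representable range
```

Updating w_q directly instead would discard the small accumulated steps, illustrating what purely low-precision training loses.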
Learning the structure of Bayesian Networks via the bootstrap | cs.LG stat.ML | Learning the structure of dependencies among multiple random variables is a
problem of considerable theoretical and practical interest. Within the context
of Bayesian Networks, a practical and surprisingly successful solution to this
learning problem is achieved by adopting a score-function optimisation schema,
augmented with multiple restarts to avoid local optima. Yet, the conditions
under which such strategies work well are poorly understood, and there are also
some intrinsic limitations to learning the directionality of the interaction
among the variables. Following an early intuition of Friedman and Koller, we
propose to decouple the learning problem into two steps: first, we identify a
partial ordering among input variables which constrains the structural learning
problem, and then propose an effective bootstrap-based algorithm to simulate
augmented data sets, and select the most important dependencies among the
variables. By using several synthetic data sets, we show that our algorithm
yields better recovery performance than the state of the art, increasing the
chances of identifying a globally-optimal solution to the learning problem, and
solving also well-known identifiability issues that affect the standard
approach. We use our new algorithm to infer statistical dependencies between
cancer driver somatic mutations detected by high-throughput genome sequencing
data of multiple colorectal cancer patients. In this way, we also show how the
proposed methods can shed new light on cancer initiation and
progression. Code: https://github.com/caravagn/Bootstrap-based-Learning
| Giulio Caravagna and Daniele Ramazzotti | null | 1706.02386 | null | null |
CosmoGAN: creating high-fidelity weak lensing convergence maps using
Generative Adversarial Networks | astro-ph.IM cs.LG | Inferring model parameters from experimental data is a grand challenge in
many sciences, including cosmology. This often relies critically on high
fidelity numerical simulations, which are prohibitively computationally
expensive. The application of deep learning techniques to generative modeling
is renewing interest in using high dimensional density estimators as
computationally inexpensive emulators of fully-fledged simulations. These
generative models have the potential to make a dramatic shift in the field of
scientific simulations, but for that shift to happen we need to study the
performance of such generators in the precision regime needed for science
applications. To this end, in this work we apply Generative Adversarial
Networks to the problem of generating weak lensing convergence maps. We show
that our generator network produces maps that are described, with high
statistical confidence, by the same summary statistics as the fully simulated
maps.
| Mustafa Mustafa, Deborah Bard, Wahid Bhimji, Zarija Luki\'c, Rami
Al-Rfou, Jan M. Kratochvil | 10.1186/s40668-019-0029-9 | 1706.02390 | null | null |
A Convex Framework for Fair Regression | cs.LG stat.ML | We introduce a flexible family of fairness regularizers for (linear and
logistic) regression problems. These regularizers all enjoy convexity,
permitting fast optimization, and they span the range from notions of group
fairness to strong individual fairness. By varying the weight on the fairness
regularizer, we can compute the efficient frontier of the accuracy-fairness
trade-off on any given dataset, and we measure the severity of this trade-off
via a numerical quantity we call the Price of Fairness (PoF). The centerpiece
of our results is an extensive comparative study of the PoF across six
different datasets in which fairness is a primary consideration.
| Richard Berk, Hoda Heidari, Shahin Jabbari, Matthew Joseph, Michael
Kearns, Jamie Morgenstern, Seth Neel, Aaron Roth | null | 1706.02409 | null | null |
Generalized Value Iteration Networks: Life Beyond Lattices | cs.LG cs.AI | In this paper, we introduce a generalized value iteration network (GVIN),
which is an end-to-end neural network planning module. GVIN emulates the value
iteration algorithm by using a novel graph convolution operator, which enables
GVIN to learn and plan on irregular spatial graphs. We propose three novel
differentiable kernels as graph convolution operators and show that the
embedding based kernel achieves the best performance. We further propose
episodic Q-learning, an improvement upon traditional n-step Q-learning that
stabilizes training for networks that contain a planning module. Lastly, we
evaluate GVIN on planning problems in 2D mazes, irregular graphs, and
real-world street networks, showing that GVIN generalizes well for both
arbitrary graphs and unseen graphs of larger scale and outperforms a naive
generalization of VIN (discretizing a spatial graph into a 2D image).
| Sufeng Niu, Siheng Chen, Hanyu Guo, Colin Targonski, Melissa C. Smith,
Jelena Kova\v{c}evi\'c | null | 1706.02416 | null | null |
Seamless Integration and Coordination of Cognitive Skills in Humanoid
Robots: A Deep Learning Approach | cs.AI cs.LG cs.RO | This study investigates how adequate coordination among the different
cognitive processes of a humanoid robot can be developed through end-to-end
learning of direct perception of the visuomotor stream. We propose a deep dynamic
neural network model built on a dynamic vision network, a motor generation
network, and a higher-level network. The proposed model was designed to process
and to integrate direct perception of dynamic visuomotor patterns in a
hierarchical model characterized by different spatial and temporal constraints
imposed on each level. We conducted synthetic robotic experiments in which a
robot learned to read a human's intention through observing gestures and then
to generate the corresponding goal-directed actions. Results verify that the
proposed model is able to learn the tutored skills and to generalize them to
novel situations. The model showed synergic coordination of perception, action
and decision making, and it integrated and coordinated a set of cognitive
skills including visual perception, intention reading, attention switching,
working memory, action preparation and execution in a seamless manner. Analysis
reveals that coherent internal representations emerged at each level of the
hierarchy. Higher-level representation reflecting actional intention developed
by means of continuous integration of the lower-level visuo-proprioceptive
stream.
| Jungsik Hwang and Jun Tani | null | 1706.02423 | null | null |
Predictive Coding-based Deep Dynamic Neural Network for Visuomotor
Learning | cs.AI cs.LG cs.RO q-bio.NC | This study presents a dynamic neural network model based on the predictive
coding framework for perceiving and predicting the dynamic visuo-proprioceptive
patterns. In our previous study [1], we showed that the deep dynamic neural
network model was able to coordinate visual perception and action generation in
a seamless manner. In the current study, we extended the previous model under
the predictive coding framework to endow the model with a capability of
perceiving and predicting dynamic visuo-proprioceptive patterns as well as a
capability of inferring intention behind the perceived visuomotor information
through minimizing prediction error. A set of synthetic experiments were
conducted in which a robot learned to imitate the gestures of another robot in
a simulation environment. The experimental results showed that with given
intention states, the model was able to mentally simulate the possible incoming
dynamic visuo-proprioceptive patterns in a top-down process without the inputs
from the external environment. Moreover, the results highlighted the role of
minimizing prediction error in inferring underlying intention of the perceived
visuo-proprioceptive patterns, supporting the predictive coding account of the
mirror neuron systems. The results also revealed that minimizing prediction
error in one modality induced the recall of the corresponding representation of
another modality acquired during the consolidative learning of raw-level
visuo-proprioceptive patterns.
| Jungsik Hwang, Jinhyung Kim, Ahmadreza Ahmadi, Minkyu Choi, Jun Tani | null | 1706.02444 | null | null |
Luck is Hard to Beat: The Difficulty of Sports Prediction | cs.LG stat.AP | Predicting the outcome of sports events is a hard task. We quantify this
difficulty with a coefficient that measures the distance between the observed
final results of sports leagues and idealized perfectly balanced competitions
in terms of skill. This indicates the relative presence of luck and skill. We
collected and analyzed all games from 198 sports leagues comprising 1503
seasons from 84 countries of 4 different sports: basketball, soccer, volleyball
and handball. We measured the competitiveness by countries and sports. We also
identify in each season which teams, if removed from its league, result in a
completely random tournament. Surprisingly, not many of them are needed. As
another contribution of this paper, we propose a probabilistic graphical model
to learn about the teams' skills and to decompose the relative weights of luck
and skill in each game. We break down the skill component into factors
associated with the teams' characteristics. The model also allows us to estimate
the probability that an underdog team wins in the NBA league as 0.36, with a
home advantage adding 0.09 to this probability. As shown in the first part of
the paper, luck is substantially present even in the most competitive
championships, which partially explains why sophisticated and complex
feature-based models hardly beat simple models in the task of forecasting
sports' outcomes.
| Raquel YS Aoki, Renato M Assuncao, Pedro OS Vaz de Melo | 10.1145/3097983.3098045 | 1706.02447 | null | null |
Distribution-Free One-Pass Learning | cs.LG stat.ML | In many large-scale machine learning applications, data are accumulated with
time, and thus, an appropriate model should be able to update in an online
paradigm. Moreover, as the whole data volume is unknown when constructing the
model, it is desirable to scan each data item only once, with storage
independent of the data volume. It is also noteworthy that the underlying
distribution may change during the data accumulation procedure. To handle such
tasks, in this paper we propose DFOP, a distribution-free one-pass learning
approach. This approach works well when distribution change occurs during data
accumulation, without requiring prior knowledge about the change. Every data
item can be discarded once it has been scanned. Besides, theoretical guarantee
shows that the estimate error, under a mild assumption, decreases until
convergence with high probability. The performance of DFOP for both regression
and classification is validated in experiments.
| Peng Zhao and Zhi-Hua Zhou | 10.1109/TKDE.2019.2937078 | 1706.02471 | null | null |
Forward Thinking: Building and Training Neural Networks One Layer at a
Time | stat.ML cs.LG | We present a general framework for training deep neural networks without
backpropagation. This substantially decreases training time and also allows for
construction of deep networks with many sorts of learners, including networks
whose layers are defined by functions that are not easily differentiated, like
decision trees. The main idea is that layers can be trained one at a time, and
once they are trained, the input data are mapped forward through the layer to
create a new learning problem. The process is repeated, transforming the data
through multiple layers, one at a time, rendering a new data set, which is
expected to be better behaved, and on which a final output layer can achieve
good performance. We call this forward thinking and demonstrate a proof of
concept by achieving state-of-the-art accuracy on the MNIST dataset for
convolutional neural networks. We also provide a general mathematical
formulation of forward thinking that allows for other types of deep learning
problems to be considered.
| Chris Hettinger, Tanner Christensen, Ben Ehlert, Jeffrey Humpherys,
Tyler Jarvis, and Sean Wade | null | 1706.02480 | null | null |
Where is my forearm? Clustering of body parts from simultaneous tactile
and linguistic input using sequential mapping | cs.NE cs.AI cs.CL cs.LG cs.RO | Humans and animals are constantly exposed to a continuous stream of sensory
information from different modalities. At the same time, they form more
compressed representations like concepts or symbols. In species that use
language, this process is further structured by this interaction, where a
mapping between the sensorimotor concepts and linguistic elements needs to be
established. There is evidence that children might be learning language by
simply disambiguating potential meanings based on multiple exposures to
utterances in different contexts (cross-situational learning). In existing
models, the mapping between modalities is usually found in a single step by
directly using frequencies of referent and meaning co-occurrences. In this
paper, we present an extension of this one-step mapping and introduce a newly
proposed sequential mapping algorithm together with a publicly available Matlab
implementation. For demonstration, we have chosen a less typical scenario:
instead of learning to associate objects with their names, we focus on body
representations. A humanoid robot is receiving tactile stimulations on its
body, while at the same time listening to utterances of the body part names
(e.g., hand, forearm and torso). With the goal of arriving at the correct "body
categories", we demonstrate how a sequential mapping algorithm outperforms
one-step mapping. In addition, the effects of data set size and of noise in the
linguistic input are studied.
| Karla Stepanova and Matej Hoffmann and Zdenek Straka and Frederico B.
Klein and Angelo Cangelosi and Michal Vavrecka | null | 1706.02490 | null | null |
Context encoders as a simple but powerful extension of word2vec | stat.ML cs.CL cs.LG | With a simple architecture and the ability to learn meaningful word
embeddings efficiently from texts containing billions of words, word2vec
remains one of the most popular neural language models used today. However, as
only a single embedding is learned for every word in the vocabulary, the model
fails to optimally represent words with multiple meanings. Additionally, it is
not possible to create embeddings for new (out-of-vocabulary) words on the
spot. Based on an intuitive interpretation of the continuous bag-of-words
(CBOW) word2vec model's negative sampling training objective in terms of
predicting context based similarities, we motivate an extension of the model we
call context encoders (ConEc). By multiplying the matrix of trained word2vec
embeddings with a word's average context vector, out-of-vocabulary (OOV)
embeddings and representations for a word with multiple meanings can be created
based on the word's local contexts. The benefits of this approach are
illustrated by using these word embeddings as features in the CoNLL 2003 named
entity recognition (NER) task.
| Franziska Horn | null | 1706.02496 | null | null |
Unlocking the Potential of Simulators: Design with RL in Mind | cs.LG cs.RO | Using Reinforcement Learning (RL) in simulation to construct policies useful
in real life is challenging. This is often attributed to the sequential
decision making aspect: inaccuracies in simulation accumulate over multiple
steps, hence the simulated trajectories diverge from what would happen in
reality.
In our work we show the need to consider another important aspect: the
mismatch in simulating control. We bring attention to the need for modeling
control as well as dynamics, since oversimplifying assumptions about applying
actions of RL policies could make the policies fail on real-world systems.
We design a simulator for solving a pivoting task (of interest in Robotics)
and demonstrate that even a simple simulator designed with RL in mind
outperforms high-fidelity simulators when it comes to learning a policy that is
to be deployed on a real robotic system. We show that a phenomenon that is hard
to model - friction - could be exploited successfully, even when RL is
performed using a simulator with a simple dynamics and noise model. Hence, we
demonstrate that as long as the main sources of uncertainty are identified, it
could be possible to learn policies applicable to real systems even using a
simple simulator.
RL-compatible simulators could open up possibilities for applying a wide
range of RL algorithms in various fields. This is important, since currently
data sparsity in fields like healthcare and education frequently forces
researchers and engineers to only consider sample-efficient RL approaches.
Successful simulator-aided RL could increase the flexibility of experimenting
with RL algorithms and help apply RL policies to real-world settings in fields
where data is scarce. We believe that lessons learned in Robotics could help
other fields design RL-compatible simulators, so we summarize our experience
and conclude with suggestions.
| Rika Antonova, Silvia Cruciani | null | 1706.02501 | null | null |
Self-Normalizing Neural Networks | cs.LG stat.ML | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore, cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation functions of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows one to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs.
| G\"unter Klambauer, Thomas Unterthiner, Andreas Mayr and Sepp
Hochreiter | null | 1706.02515 | null | null |
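A brief aside on the record above: the SELU nonlinearity it describes is compact enough to sketch directly. The following is a minimal NumPy rendering with the fixed-point constants for zero mean and unit variance; the toy check at the end is our own illustration, not an experiment from the paper.

```python
import numpy as np

# Fixed-point constants giving zero mean / unit variance (alpha ~ 1.6733,
# lambda ~ 1.0507), as derived via the Banach fixed-point argument.
ALPHA = 1.6732632423543772
LAMBDA = 1.0507009873554805

def selu(x):
    """SELU(x) = lambda * x for x > 0, else lambda * alpha * (exp(x) - 1)."""
    return LAMBDA * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

# Toy check: for standard-normal pre-activations, the first two moments of
# the output stay near (0, 1), which is the self-normalizing fixed point.
rng = np.random.default_rng(0)
y = selu(rng.normal(size=100_000))
print(f"mean={y.mean():+.3f}  std={y.std():.3f}")
```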
Scaling up the Automatic Statistician: Scalable Structure Discovery
using Gaussian Processes | stat.ML cs.LG | Automating statistical modelling is a challenging problem in artificial
intelligence. The Automatic Statistician takes a first step in this direction,
by employing a kernel search algorithm with Gaussian Processes (GP) to provide
interpretable statistical models for regression problems. However, this approach
does not scale, owing to its $O(N^3)$ running time for model selection. We propose
Scalable Kernel Composition (SKC), a scalable kernel search algorithm that
extends the Automatic Statistician to bigger data sets. In doing so, we derive
a cheap upper bound on the GP marginal likelihood that sandwiches the marginal
likelihood with the variational lower bound. We show that the upper bound is
significantly tighter than the lower bound and thus useful for model selection.
| Hyunjik Kim and Yee Whye Teh | null | 1706.02524 | null | null |
Pain-Free Random Differential Privacy with Sensitivity Sampling | cs.LG cs.CR cs.DB stat.ML | Popular approaches to differential privacy, such as the Laplace and
exponential mechanisms, calibrate randomised smoothing through global
sensitivity of the target non-private function. Bounding such sensitivity is
often a prohibitively complex analytic calculation. As an alternative, we
propose a straightforward sampler for estimating sensitivity of non-private
mechanisms. Since our sensitivity estimates hold with high probability, any
mechanism that would be $(\epsilon,\delta)$-differentially private under
bounded global sensitivity automatically achieves
$(\epsilon,\delta,\gamma)$-random differential privacy (Hall et al., 2012),
without any target-specific calculations required. We demonstrate on worked
example learners how our approach adopts a naturally relaxed privacy
guarantee while achieving more accurate releases, even for non-private
functions that are black-box computer programs.
| Benjamin I. P. Rubinstein, Francesco Ald\`a | null | 1706.02562 | null | null |
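To make the sampling idea above concrete, here is a rough NumPy sketch: estimate sensitivity by drawing neighbouring dataset pairs and tracking the largest change in the target function, then calibrate a Laplace release with that estimate. All names and the uniform-data example are hypothetical, and the sketch omits the high-probability accounting behind the $(\epsilon,\delta,\gamma)$-guarantee.

```python
import numpy as np

def estimate_sensitivity(f, sample_record, n_trials=1000, size=100, rng=None):
    """Empirical sensitivity of f: sample dataset pairs differing in one
    record and keep the largest observed |f(D) - f(D')|."""
    rng = rng or np.random.default_rng(0)
    worst = 0.0
    for _ in range(n_trials):
        D = np.array([sample_record(rng) for _ in range(size)])
        D2 = D.copy()
        D2[rng.integers(size)] = sample_record(rng)  # neighbouring dataset
        worst = max(worst, abs(f(D) - f(D2)))
    return worst

def laplace_release(f, D, sensitivity, epsilon, rng=None):
    """Laplace mechanism calibrated with the sampled sensitivity estimate."""
    rng = rng or np.random.default_rng(1)
    return f(D) + rng.laplace(scale=sensitivity / epsilon)

# Example: privately release the mean of 100 records drawn from [0, 1];
# the sampled estimate should approach the true sensitivity 1/100.
sens = estimate_sensitivity(np.mean, lambda r: r.uniform())
D = np.random.default_rng(2).uniform(size=100)
print(sens, laplace_release(np.mean, D, sens, epsilon=1.0))
```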
Clustering with t-SNE, provably | cs.LG stat.ML | t-distributed Stochastic Neighborhood Embedding (t-SNE), a clustering and
visualization method proposed by van der Maaten & Hinton in 2008, has rapidly
become a standard tool in a number of natural sciences. Despite its
overwhelming success, there is a distinct lack of mathematical foundations and
the inner workings of the algorithm are not well understood. The purpose of
this paper is to prove that t-SNE is able to recover well-separated clusters;
more precisely, we prove that t-SNE in the `early exaggeration' phase, an
optimization technique proposed by van der Maaten & Hinton (2008) and van der
Maaten (2014), can be rigorously analyzed. As a byproduct, the proof suggests
novel ways for setting the exaggeration parameter $\alpha$ and step size $h$.
Numerical examples illustrate the effectiveness of these rules: in particular,
the quality of embedding of topological structures (e.g. the swiss roll)
improves. We also discuss a connection to spectral clustering methods.
| George C. Linderman, Stefan Steinerberger | null | 1706.02582 | null | null |
Decoupling "when to update" from "how to update" | cs.LG | Deep learning requires data. A useful approach to obtain data is to be
creative and mine data from various sources, that were created for different
purposes. Unfortunately, this approach often leads to noisy labels. In this
paper, we propose a meta algorithm for tackling the noisy labels problem. The
key idea is to decouple "when to update" from "how to update". We demonstrate
the effectiveness of our algorithm by mining data for gender classification by
combining the Labeled Faces in the Wild (LFW) face recognition dataset with a
textual genderizing service, which leads to a noisy dataset. While our approach
is very simple to implement, it leads to state-of-the-art results. We analyze
some convergence properties of the proposed algorithm.
| Eran Malach, Shai Shalev-Shwartz | null | 1706.02613 | null | null |
Real-valued (Medical) Time Series Generation with Recurrent Conditional
GANs | stat.ML cs.LG | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data.
| Crist\'obal Esteban, Stephanie L. Hyland, Gunnar R\"atsch | null | 1706.02633 | null | null |
Nuclear Discrepancy for Active Learning | cs.LG stat.ML | Active learning algorithms propose which unlabeled objects should be queried
for their labels to improve a predictive model the most. We study active
learners that minimize generalization bounds and uncover relationships between
these bounds that lead to an improved approach to active learning. In
particular we show the relation between the bound of the state-of-the-art
Maximum Mean Discrepancy (MMD) active learner, the bound of the Discrepancy,
and a new and looser bound that we refer to as the Nuclear Discrepancy bound.
We motivate this bound by a probabilistic argument: we show it considers
situations which are more likely to occur. Our experiments indicate that active
learning using the tightest Discrepancy bound performs the worst in terms of
the squared loss. Overall, our proposed loosest Nuclear Discrepancy
generalization bound performs the best. We confirm our probabilistic argument
empirically: the other bounds focus on more pessimistic scenarios that are
rarer in practice. We conclude that tightness of bounds is not always of main
importance and that active learning methods should concentrate on realistic
scenarios in order to improve performance.
| Tom J. Viering, Jesse H. Krijthe, Marco Loog | null | 1706.02645 | null | null |
Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | cs.CV cs.DC cs.LG | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency.
| Priya Goyal, Piotr Doll\'ar, Ross Girshick, Pieter Noordhuis, Lukasz
Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | null | 1706.02677 | null | null |
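The two training ingredients named above translate almost directly into code. A minimal framework-agnostic sketch follows; the base learning rate of 0.1 per 256 images matches the ResNet-50 recipe described in the abstract's source, while the function names and the 500-step warmup length are illustrative choices.

```python
def scaled_lr(base_lr: float, batch_size: int, base_batch: int = 256) -> float:
    """Linear scaling rule: multiply the learning rate by k when the
    minibatch size is multiplied by k."""
    return base_lr * batch_size / base_batch

def warmup_lr(target_lr: float, step: int, warmup_steps: int) -> float:
    """Gradual warmup: ramp the learning rate linearly from near zero to
    the target over the first warmup_steps iterations."""
    if step >= warmup_steps:
        return target_lr
    return target_lr * (step + 1) / warmup_steps

# Example: a minibatch of 8192 with base lr 0.1 at batch 256 targets lr 3.2,
# approached linearly during warmup to avoid early optimization trouble.
target = scaled_lr(0.1, 8192)  # -> 3.2
for step in (0, 249, 499, 500):
    print(step, round(warmup_lr(target, step, warmup_steps=500), 4))
```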
Learning Local Receptive Fields and their Weight Sharing Scheme on
Graphs | cs.LG cs.CV cs.NE | We propose a simple and generic layer formulation that extends the properties
of convolutional layers to any domain that can be described by a graph. Namely,
we use the support of its adjacency matrix to design learnable weight sharing
filters able to exploit the underlying structure of signals in the same fashion
as for images. The proposed formulation makes it possible to learn the weights
of the filter as well as a scheme that controls how they are shared across the
graph. We perform validation experiments with image datasets and show that
these filters offer performances comparable with convolutional ones.
| Jean-Charles Vialatte, Vincent Gripon, Gilles Coppin | null | 1706.02684 | null | null |
Enhancing The Reliability of Out-of-distribution Image Detection in
Neural Networks | cs.LG stat.ML | We consider the problem of detecting out-of-distribution images in neural
networks. We propose ODIN, a simple and effective method that does not require
any change to a pre-trained neural network. Our method is based on the
observation that using temperature scaling and adding small perturbations to
the input can separate the softmax score distributions between in- and
out-of-distribution images, allowing for more effective detection. We show in a
series of experiments that ODIN is compatible with diverse network
architectures and datasets. It consistently outperforms the baseline approach
by a large margin, establishing a new state-of-the-art performance on this
task. For example, ODIN reduces the false positive rate from the baseline 34.7%
to 4.3% on the DenseNet (applied to CIFAR-10) when the true positive rate is
95%.
| Shiyu Liang, Yixuan Li and R. Srikant | null | 1706.02690 | null | null |
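For readers wanting the mechanics of the two operations just described, a rough PyTorch sketch follows. The default temperature and perturbation size are illustrative values in the range such methods explore, not authoritative settings; `model` is any pre-trained classifier returning logits. Thresholding the resulting score separates in- from out-of-distribution inputs at the desired true positive rate.

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, temperature=1000.0, eps=0.0014):
    """ODIN-style confidence score: temperature-scale the logits, perturb
    the input in the direction that raises the predicted class's softmax
    score, then take the max softmax probability on the perturbed input.
    Higher scores suggest in-distribution inputs."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x) / temperature
    # Cross-entropy against the predicted class equals -log softmax score,
    # so stepping against its input-gradient increases the softmax score.
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()
    x_pert = (x - eps * x.grad.sign()).detach()
    with torch.no_grad():
        probs = F.softmax(model(x_pert) / temperature, dim=1)
    return probs.max(dim=1).values
```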
Climbing a shaky ladder: Better adaptive risk estimation | cs.LG | We revisit the \emph{leaderboard problem} introduced by Blum and Hardt (2015)
in an effort to reduce overfitting in machine learning benchmarks. We show that
a randomized version of their Ladder algorithm achieves leaderboard error
$O(1/n^{0.4})$, compared with the previous best rate of $O(1/n^{1/3})$.
Short of proving that our algorithm is optimal, we point out a major obstacle
toward further progress. Specifically, any improvement to our upper bound would
lead to asymptotic improvements in the general adaptive estimation setting as
have remained elusive in recent years. This connection also directly leads to
lower bounds for specific classes of algorithms. In particular, we exhibit a
new attack on the leaderboard algorithm that both theoretically and empirically
distinguishes between our algorithm and previous leaderboard algorithms.
| Moritz Hardt | null | 1706.02733 | null | null |
Avoiding Discrimination through Causal Reasoning | stat.ML cs.CY cs.LG | Recent work on fairness in machine learning has focused on various
statistical discrimination criteria and how they trade off. Most of these
criteria are observational: They depend only on the joint distribution of
predictor, protected attribute, features, and outcome. While convenient to work
with, observational criteria have severe inherent limitations that prevent them
from resolving matters of fairness conclusively.
Going beyond observational criteria, we frame the problem of discrimination
based on protected attributes in the language of causal reasoning. This
viewpoint shifts attention from "What is the right fairness criterion?" to
"What do we want to assume about the causal data generating process?" Through
the lens of causality, we make several contributions. First, we crisply
articulate why and when observational criteria fail, thus formalizing what was
before a matter of opinion. Second, our approach exposes previously ignored
subtleties and why they are fundamental to the problem. Finally, we put forward
natural causal non-discrimination criteria and develop algorithms that satisfy
them.
| Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz
Hardt, Dominik Janzing, Bernhard Sch\"olkopf | null | 1706.02744 | null | null |
Gated Orthogonal Recurrent Units: On Learning to Forget | cs.LG cs.NE stat.ML | We present a novel recurrent neural network (RNN) based model that combines
the remembering ability of unitary RNNs with the ability of gated RNNs to
effectively forget redundant/irrelevant information in its memory. We achieve
this by extending unitary RNNs with a gating mechanism. Our model is able to
outperform LSTMs, GRUs and Unitary RNNs on several long-term dependency
benchmark tasks. We show empirically both that orthogonal/unitary RNNs lack the
ability to forget and that GORU can simultaneously remember long-term
dependencies while forgetting irrelevant information, an ability that plays an
important role in recurrent neural networks. We provide competitive results
along with an analysis of our model on many natural sequential tasks including
the bAbI Question Answering, TIMIT speech spectrum prediction, Penn TreeBank,
and synthetic tasks that involve long-term dependencies such as algorithmic,
parenthesis, denoising and copying tasks.
| Li Jing, Caglar Gulcehre, John Peurifoy, Yichen Shen, Max Tegmark,
Marin Solja\v{c}i\'c, Yoshua Bengio | null | 1706.02761 | null | null |
Optimizing expected word error rate via sampling for speech recognition | cs.CL cs.LG cs.NE stat.ML | State-level minimum Bayes risk (sMBR) training has become the de facto
standard for sequence-level training of speech recognition acoustic models. It
has an elegant formulation using the expectation semiring, and gives large
improvements in word error rate (WER) over models trained solely using
cross-entropy (CE) or connectionist temporal classification (CTC). sMBR
training optimizes the expected number of frames at which the reference and
hypothesized acoustic states differ. It may be preferable to optimize the
expected WER, but WER does not interact well with the expectation semiring, and
previous approaches based on computing expected WER exactly involve expanding
the lattices used during training. In this paper we show how to perform
optimization of the expected WER by sampling paths from the lattices used
during conventional sMBR training. The gradient of the expected WER is itself
an expectation, and so may be approximated using Monte Carlo sampling. We show
experimentally that optimizing WER during acoustic model training gives 5%
relative improvement in WER over a well-tuned sMBR baseline on a 2-channel
query recognition task (Google Home).
| Matt Shannon | null | 1706.02776 | null | null |
Setting Players' Behaviors in World of Warcraft through Semi-Supervised
Learning | cs.AI cs.LG | Digital games are one of the major and most important fields in the
entertainment domain, which also includes cinema and music. Numerous attempts
have been made to improve the quality of games, including more realistic
artistic production and computer science. Assessing the player's behavior, a
task known as player modeling, is currently the need of the hour, as it leads to
possible improvements in terms of: (i) a better game interaction experience, (ii)
better exploitation of the relationship between players, and (iii)
increasing/maintaining the number of players interested in the game. In this
paper we model players using the four basic behaviors proposed in
\cite{BartleArtigo}, namely: achiever, explorer, socializer and killer. Our
analysis is carried out using data obtained from the game "World of Warcraft"
over 3 years (2006$-$2009). We employ a semi-supervised learning technique in
order to find out characteristics that possibly impact player's behavior.
| Marcelo Souza Nery, Roque Anderson Teixeira, Victor do Nascimento
Silva, Adriano Alonso Veloso | null | 1706.02780 | null | null |
Scalable Kernel K-Means Clustering with Nystrom Approximation:
Relative-Error Bounds | cs.LG stat.ML | Kernel $k$-means clustering can correctly identify and extract a far more
varied collection of cluster structures than the linear $k$-means clustering
algorithm. However, kernel $k$-means clustering is computationally expensive
when the non-linear feature map is high-dimensional and there are many input
points. Kernel approximation, e.g., the Nystr\"om method, has been applied in
previous works to approximately solve kernel learning problems when both of the
above conditions are present. This work analyzes the application of this
paradigm to kernel $k$-means clustering, and shows that applying the linear
$k$-means clustering algorithm to $\frac{k}{\epsilon} (1 + o(1))$ features
constructed using a so-called rank-restricted Nystr\"om approximation results
in cluster assignments that satisfy a $1 + \epsilon$ approximation ratio in
terms of the kernel $k$-means cost function, relative to the guarantee provided
by the same algorithm without the use of the Nystr\"om method. As part of the
analysis, this work establishes a novel $1 + \epsilon$ relative-error trace
norm guarantee for low-rank approximation using the rank-restricted Nystr\"om
approximation. Empirical evaluations on the $8.1$ million instance MNIST8M
dataset demonstrate the scalability and usefulness of kernel $k$-means
clustering with Nystr\"om approximation. This work argues that spectral
clustering using Nystr\"om approximation---a popular and computationally
efficient, but theoretically unsound approach to non-linear clustering---should
be replaced with the efficient and theoretically sound combination of kernel
$k$-means clustering with Nystr\"om approximation. The superior performance of
the latter approach is empirically verified.
| Shusen Wang and Alex Gittens and Michael W. Mahoney | null | 1706.02803 | null | null |
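As background for the method analyzed above, here is a minimal NumPy sketch of the standard Nystrom feature construction for an RBF kernel: sample m landmark points and form Phi = K_nm K_mm^{-1/2}, so that ordinary linear k-means on Phi approximates kernel k-means. The rank-restricted variant the paper analyzes additionally truncates the landmark kernel matrix to its top singular directions, which this sketch omits; the jitter constant is a numerical-stability assumption.

```python
import numpy as np

def nystrom_features(X, m, gamma, rng=None):
    """Map X to Nystrom features Phi = K_nm @ K_mm^{-1/2} for the RBF
    kernel k(a, b) = exp(-gamma * ||a - b||^2), using m random landmarks."""
    rng = rng or np.random.default_rng(0)
    landmarks = X[rng.choice(len(X), size=m, replace=False)]

    def rbf(A, B):
        d2 = ((A[:, None] - B[None]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    K_nm, K_mm = rbf(X, landmarks), rbf(landmarks, landmarks)
    # Inverse square root of K_mm via eigendecomposition, with a small
    # jitter to guard against numerically zero eigenvalues.
    w, V = np.linalg.eigh(K_mm + 1e-8 * np.eye(m))
    return K_nm @ V @ np.diag(1.0 / np.sqrt(np.clip(w, 1e-12, None))) @ V.T

# Usage: Phi = nystrom_features(X, m=200, gamma=0.5), then run any linear
# k-means implementation on Phi.
```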
From Bayesian Sparsity to Gated Recurrent Nets | cs.LG | The iterations of many first-order algorithms, when applied to minimizing
common regularized regression functions, often resemble neural network layers
with pre-specified weights. This observation has prompted the development of
learning-based approaches that purport to replace these iterations with
enhanced surrogates forged as DNN models from available training data. For
example, important NP-hard sparse estimation problems have recently benefitted
from this genre of upgrade, with simple feedforward or recurrent networks
ousting proximal gradient-based iterations. Analogously, this paper
demonstrates that more powerful Bayesian algorithms for promoting sparsity,
which rely on complex multi-loop majorization-minimization techniques, mirror
the structure of more sophisticated long short-term memory (LSTM) networks, or
alternative gated feedback networks previously designed for sequence
prediction. As part of this development, we examine the parallels between
latent variable trajectories operating across multiple time-scales during
optimization, and the activations within deep network structures designed to
adaptively model such characteristic sequences. The resulting insights lead to
a novel sparse estimation system that, when granted training data, can estimate
optimal solutions efficiently in regimes where other algorithms fail, including
practical direction-of-arrival (DOA) and 3D geometry recovery problems. The
underlying principles we expose are also suggestive of a learning process for a
richer class of multi-loop algorithms in other domains.
| Hao He, Bo Xin, David Wipf | null | 1706.02815 | null | null |
A Maximum Matching Algorithm for Basis Selection in Spectral Learning | cs.LG cs.FL stat.ML | We present a solution to scale spectral algorithms for learning sequence
functions. We are interested in the case where these functions are sparse (that
is, for most sequences they return 0). Spectral algorithms reduce the learning
problem to the task of computing an SVD decomposition over a special type of
matrix called the Hankel matrix. This matrix is designed to capture the
relevant statistics of the training sequences. What is crucial is that to
capture long range dependencies we must consider very large Hankel matrices.
Thus the computation of the SVD becomes a critical bottleneck. Our solution
finds a subset of rows and columns of the Hankel that realizes a compact and
informative Hankel submatrix. The novelty lies in the way that this subset is
selected: we exploit a maximal bipartite matching combinatorial algorithm to
look for a sub-block with full structural rank, and show how computation of
this sub-block can be further improved by exploiting the specific structure of
Hankel matrices.
| Ariadna Quattoni, Xavier Carreras, Matthias Gall\'e | null | 1706.02857 | null | null |
Adaptive Consensus ADMM for Distributed Optimization | cs.LG cs.NA cs.SY | The alternating direction method of multipliers (ADMM) is commonly used for
distributed model fitting problems, but its performance and reliability depend
strongly on user-defined penalty parameters. We study distributed ADMM methods
that boost performance by using different fine-tuned algorithm parameters on
each worker node. We present an $O(1/k)$ convergence rate for adaptive ADMM
methods with node-specific parameters, and propose adaptive consensus ADMM
(ACADMM), which automatically tunes parameters without user oversight.
| Zheng Xu, Gavin Taylor, Hao Li, Mario Figueiredo, Xiaoming Yuan, Tom
Goldstein | null | 1706.02869 | null | null |
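For context, the fixed-penalty consensus ADMM iteration that ACADMM generalizes can be sketched in a few lines of NumPy on a synthetic distributed least-squares problem. The single shared penalty rho below is exactly the user-chosen parameter that the paper's adaptive, node-specific scheme is designed to tune automatically; the data and rho value are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic distributed least squares: each of 4 workers holds (A_i, b_i)
# and they jointly minimize sum_i 0.5 * ||A_i x - b_i||^2.
workers = [(rng.normal(size=(50, 10)), rng.normal(size=50)) for _ in range(4)]
rho, n = 1.0, 10
z = np.zeros(n)                      # consensus variable
u = [np.zeros(n) for _ in workers]   # scaled dual variables

for _ in range(100):
    # Local x-updates: argmin 0.5||A_i x - b_i||^2 + (rho/2)||x - z + u_i||^2
    xs = [np.linalg.solve(A.T @ A + rho * np.eye(n),
                          A.T @ b + rho * (z - ui))
          for (A, b), ui in zip(workers, u)]
    # Consensus z-update: average of the local estimates plus duals.
    z = np.mean([x + ui for x, ui in zip(xs, u)], axis=0)
    # Dual updates penalize disagreement with the consensus.
    u = [ui + x - z for x, ui in zip(xs, u)]

print("consensus solution:", np.round(z, 3))
```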
Assessing the Performance of Deep Learning Algorithms for Newsvendor
Problem | stat.ML cs.LG | In retailer management, the Newsvendor problem has widely attracted attention
as one of basic inventory models. In the traditional approach to solving this
problem, it relies on the probability distribution of the demand. In theory, if
the probability distribution is known, the problem can be considered as fully
solved. However, in any real world scenario, it is almost impossible to even
approximate or estimate a better probability distribution for the demand. In
recent years, researchers have started adopting machine learning approaches to
learn a demand prediction model using other feature information. In this paper,
we propose a supervised learning approach that optimizes the demand quantities for products
based on feature information. We demonstrate that the original Newsvendor loss
function as the training objective outperforms the recently suggested quadratic
loss function. The new algorithm has been assessed on both the synthetic data
and real-world data, demonstrating better performance.
| Yanfei Zhang and Junbin Gao | null | 1706.02899 | null | null |
Characterizing Types of Convolution in Deep Convolutional Recurrent
Neural Networks for Robust Speech Emotion Recognition | cs.LG cs.CL cs.MM cs.SD | Deep convolutional neural networks are being actively investigated in a wide
range of speech and audio processing applications including speech recognition,
audio event detection and computational paralinguistics, owing to their ability
to reduce factors of variation when learning from speech. However, studies
have suggested favoring a certain type of convolutional operation when
building a deep convolutional neural network for speech applications, although
there have been promising results using different types of convolutional
operations. In this work, we study four types of convolutional operations on
different input features for speech emotion recognition under noisy and clean
conditions in order to derive a comprehensive understanding. Since affective
behavioral information has been shown to reflect temporal variations of mental
state and convolutional operations are applied locally in time, all deep neural
networks share a deep recurrent sub-network architecture for further temporal
modeling. We present detailed quantitative module-wise performance analysis to
gain insights into information flows within the proposed architectures. In
particular, we demonstrate the interplay of affective information and the other
irrelevant information during the progression from one module to another.
Finally we show that all of our deep neural networks provide state-of-the-art
performance on the eNTERFACE'05 corpus.
| Che-Wei Huang, Shrikanth. S. Narayanan | null | 1706.02901 | null | null |
End-to-End Musical Key Estimation Using a Convolutional Neural Network | cs.LG cs.SD | We present an end-to-end system for musical key estimation, based on a
convolutional neural network. The proposed system not only outperforms
existing key estimation methods proposed in the academic literature; it is also
capable of learning a unified model for diverse musical genres that performs
comparably to existing systems specialised for specific genres. Our experiments
confirm that different genres do differ in their interpretation of tonality,
and thus a system tuned e.g. for pop music performs subpar on pieces of
electronic music. They also reveal that such cross-genre setups evoke specific
types of error (predicting the relative or parallel minor). However, using the
data-driven approach proposed in this paper, we can train models that deal with
multiple musical styles adequately, and without major losses in accuracy.
| Filip Korzeniowski, Gerhard Widmer | null | 1706.02921 | null | null |
K+ Means : An Enhancement Over K-Means Clustering Algorithm | cs.LG | K-means (MacQueen, 1967) [1] is one of the simplest unsupervised learning
algorithms that solve the well-known clustering problem. The procedure follows
a simple and easy way to classify a given data set to a predefined, say K
number of clusters. Determining K is a difficult job, and it is not known
which value of K can partition the objects as per our intuition. To
overcome this problem, we propose the K+ Means algorithm, an
enhancement over the K-Means algorithm.
| Srikanta Kolay, Kumar Sankar Ray, Abhoy Chand Mondal | null | 1706.02949 | null | null |
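Since the record above builds directly on the classical procedure, a compact NumPy sketch of standard K-means (Lloyd's algorithm) is included for reference; the proposed K+ Means enhancement for determining K is not reproduced here, and the toy blob data are illustrative.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: alternate nearest-centroid assignment and
    centroid recomputation until the centroids stop moving."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: nearest centroid in squared Euclidean distance.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Update step: move each centroid to the mean of its points
        # (keeping the old centroid if a cluster goes empty).
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

# Toy usage with three Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, size=(100, 2)) for c in ((0, 0), (3, 0), (0, 3))])
centers, labels = kmeans(X, k=3)
```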
Stock Trading Using PE ratio: A Dynamic Bayesian Network Modeling on
Behavioral Finance and Fundamental Investment | cs.CE cs.AI cs.LG q-fin.GN | In daily investment decisions in a security market, the price earnings (PE)
ratio is one of the most widely applied methods used as a firm valuation
tool by investment experts. Unfortunately, recent academic developments in
financial econometrics and machine learning rarely look at this tool. In
practice, fundamental PE ratios are often estimated only by subjective expert
opinions. The purpose of this research is to formalize a process of fundamental
PE estimation by employing advanced dynamic Bayesian network (DBN) methodology.
The estimated PE ratio from our model can be used either as information
support for an expert making investment decisions, or as an automatic trading
system illustrated in experiments. Forward-backward inference and EM parameter
estimation algorithms are derived with respect to the proposed DBN structure.
Unlike existing works in the literature, the economic interpretation of our DBN
model is well justified by behavioral finance evidence on volatility. A simple
but practical trading strategy is invented based on the result of Bayesian
inference. Extensive experiments show that our trading strategy equipped with
the inferenced PE ratios consistently outperforms standard investment
benchmarks.
| Haizhen Wang, Ratthachat Chatpatanasiri, Pairote Sattayatham | null | 1706.02985 | null | null |
Monte-Carlo Tree Search by Best Arm Identification | stat.ML cs.LG | Recent advances in bandit tools and techniques for sequential learning are
steadily enabling new applications and are promising the resolution of a range
of challenging related problems. We study the game tree search problem, where
the goal is to quickly identify the optimal move in a given game tree by
sequentially sampling its stochastic payoffs. We develop new algorithms for
trees of arbitrary depth, that operate by summarizing all deeper levels of the
tree into confidence intervals at depth one, and applying a best arm
identification procedure at the root. We prove new sample complexity guarantees
with a refined dependence on the problem instance. We show experimentally that
our algorithms outperform existing elimination-based algorithms and match
previous special-purpose methods for depth-two trees.
| Emilie Kaufmann (CNRS, CRIStAL, SEQUEL), Wouter Koolen (CWI) | null | 1706.02986 | null | null |
Symmetry Learning for Function Approximation in Reinforcement Learning | stat.ML cs.AI cs.LG | In this paper we explore methods to exploit symmetries for ensuring sample
efficiency in reinforcement learning (RL), this problem deserves ever
increasing attention with the recent advances in the use of deep networks for
complex RL tasks which require large amount of training data. We introduce a
novel method to detect symmetries using reward trails observed during episodic
experience and prove its completeness. We also provide a framework to
incorporate the discovered symmetries for function approximation. Finally we
show that the use of potential based reward shaping is especially effective for
our symmetry exploitation mechanism. Experiments on various classical problems
show that our method improves the learning performance significantly by
utilizing symmetry information.
| Anuj Mahajan and Theja Tulabandhula | null | 1706.02999 | null | null |
Learning optimal wavelet bases using a neural network approach | cs.NE cs.LG | A novel method for learning optimal, orthonormal wavelet bases for
representing 1- and 2D signals, based on parallels between the wavelet
transform and fully connected artificial neural networks, is described. The
structural similarities between these two concepts are reviewed and combined
into a "wavenet", allowing for the direct learning of optimal wavelet filter
coefficients through stochastic gradient descent with back-propagation over
ensembles of training inputs, where the conditions on the filter coefficients for
constituting orthonormal wavelet bases are cast as quadratic regularisation
terms. We describe the practical implementation of this method, and study its
performance for high-energy physics collision events for QCD $2 \to 2$
processes. It is shown that an optimal solution is found, even in a
high-dimensional search space, and the implications of the result are
discussed.
| Andreas S{\o}gaard | null | 1706.03041 | null | null |
Depthwise Separable Convolutions for Neural Machine Translation | cs.CL cs.LG | Depthwise separable convolutions reduce the number of parameters and
computation used in convolutional operations while increasing representational
efficiency. They have been shown to be successful in image classification
models, both in obtaining better models than previously possible for a given
parameter count (the Xception architecture) and considerably reducing the
number of parameters required to perform at a given level (the MobileNets
family of architectures). Recently, convolutional sequence-to-sequence networks
have been applied to machine translation tasks with good results. In this work,
we study how depthwise separable convolutions can be applied to neural machine
translation. We introduce a new architecture inspired by Xception and ByteNet,
called SliceNet, which enables a significant reduction of the parameter count
and amount of computation needed to obtain results like ByteNet, and, with a
similar parameter count, achieves new state-of-the-art results. In addition to
showing that depthwise separable convolutions perform well for machine
translation, we investigate the architectural changes that they enable: we
observe that thanks to depthwise separability, we can increase the length of
convolution windows, removing the need for filter dilation. We also introduce a
new "super-separable" convolution operation that further reduces the number of
parameters and computational cost for obtaining state-of-the-art results.
| Lukasz Kaiser, Aidan N. Gomez, Francois Chollet | null | 1706.03059 | null | null |
Group Invariance, Stability to Deformations, and Complexity of Deep
Convolutional Representations | stat.ML cs.LG | The success of deep convolutional architectures is often attributed in part
to their ability to learn multiscale and invariant representations of natural
signals. However, a precise study of these properties and how they affect
learning guarantees is still missing. In this paper, we consider deep
convolutional representations of signals; we study their invariance to
translations and to more general groups of transformations, their stability to
the action of diffeomorphisms, and their ability to preserve signal
information. This analysis is carried out by introducing a multilayer kernel based
on convolutional kernel networks and by studying the geometry induced by the
kernel mapping. We then characterize the corresponding reproducing kernel
Hilbert space (RKHS), showing that it contains a large class of convolutional
neural networks with homogeneous activation functions. This analysis allows us
to separate data representation from learning, and to provide a canonical
measure of model complexity, the RKHS norm, which controls both stability and
generalization of any learned model. In addition to models in the constructed
RKHS, our stability analysis also applies to convolutional networks with
generic activations such as rectified linear units, and we discuss its
relationship with recent generalization bounds based on spectral norms.
| Alberto Bietti and Julien Mairal | null | 1706.03078 | null | null |
Decoupling Learning Rules from Representations | cs.AI cs.LG stat.ML | In the artificial intelligence field, learning often corresponds to changing
the parameters of a parameterized function. A learning rule is an algorithm or
mathematical expression that specifies precisely how the parameters should be
changed. When creating an artificial intelligence system, we must make two
decisions: what representation should be used (i.e., what parameterized
function should be used) and what learning rule should be used to search
through the resulting set of representable functions. Using most learning
rules, these two decisions are coupled in a subtle (and often unintentional)
way. That is, using the same learning rule with two different representations
that can represent the same sets of functions can result in two different
outcomes. After arguing that this coupling is undesirable, particularly when
using artificial neural networks, we present a method for partially decoupling
these two decisions for a broad class of learning rules that span unsupervised
learning, reinforcement learning, and supervised learning.
| Philip S. Thomas and Christoph Dann and Emma Brunskill | null | 1706.03100 | null | null |
An Expectation-Maximization Algorithm for the Fractal Inverse Problem | stat.ML cs.LG | We present an Expectation-Maximization algorithm for the fractal inverse
problem: the problem of fitting a fractal model to data. In our setting the
fractals are Iterated Function Systems (IFS), with similitudes as the family of
transformations. The data is a point cloud in ${\mathbb R}^H$ with arbitrary
dimension $H$. Each IFS defines a probability distribution on ${\mathbb R}^H$,
so that the fractal inverse problem can be cast as a problem of parameter
estimation. We show that the algorithm reconstructs well-known fractals from
data, with the model converging to high precision parameters. We also show the
utility of the model as an approximation for datasources outside the IFS model
class.
| Peter Bloem and Steven de Rooij | null | 1706.03149 | null | null |
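As a reminder of how an IFS induces the probability distribution that the EM algorithm above fits, here is a small NumPy "chaos game" sampler; the Sierpinski-triangle similitudes are a standard textbook example, not parameters from the paper.

```python
import numpy as np

def sample_ifs(transforms, probs, n=10_000, burn=20, rng=None):
    """Approximate samples from the stationary distribution of an IFS by
    repeatedly applying a randomly chosen affine map x -> A @ x + b."""
    rng = rng or np.random.default_rng(0)
    x, out = np.zeros(2), []
    for i in range(n + burn):
        A, b = transforms[rng.choice(len(transforms), p=probs)]
        x = A @ x + b
        if i >= burn:              # discard the first few transient points
            out.append(x.copy())
    return np.array(out)

# Sierpinski triangle: three similitudes contracting by 1/2 toward the
# corners of a triangle, chosen with equal probability.
half = 0.5 * np.eye(2)
corners = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.5, 0.8])]
points = sample_ifs([(half, 0.5 * c) for c in corners], probs=[1 / 3] * 3)
```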
Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series
Data | cs.LG cs.SI math.OC | Subsequence clustering of multivariate time series is a useful tool for
discovering repeated patterns in temporal data. Once these patterns have been
discovered, seemingly complicated datasets can be interpreted as a temporal
sequence of only a small number of states, or clusters. For example, raw sensor
data from a fitness-tracking application can be expressed as a timeline of a
select few actions (i.e., walking, sitting, running). However, discovering
these patterns is challenging because it requires simultaneous segmentation and
clustering of the time series. Furthermore, interpreting the resulting clusters
is difficult, especially when the data is high-dimensional. Here we propose a
new method of model-based clustering, which we call Toeplitz Inverse
Covariance-based Clustering (TICC). Each cluster in the TICC method is defined
by a correlation network, or Markov random field (MRF), characterizing the
interdependencies between different observations in a typical subsequence of
that cluster. Based on this graphical representation, TICC simultaneously
segments and clusters the time series data. We solve the TICC problem through
alternating minimization, using a variation of the expectation maximization
(EM) algorithm. We derive closed-form solutions to efficiently solve the two
resulting subproblems in a scalable way, through dynamic programming and the
alternating direction method of multipliers (ADMM), respectively. We validate
our approach by comparing TICC to several state-of-the-art baselines in a
series of synthetic experiments, and we then demonstrate on an automobile
sensor dataset how TICC can be used to learn interpretable clusters in
real-world scenarios.
| David Hallac, Sagar Vare, Stephen Boyd, Jure Leskovec | null | 1706.03161 | null | null |
Recovery Guarantees for One-hidden-layer Neural Networks | cs.LG cs.DS stat.ML | In this paper, we consider regression problems with one-hidden-layer neural
networks (1NNs). We distill some properties of activation functions that lead
to $\mathit{local~strong~convexity}$ in the neighborhood of the ground-truth
parameters for the 1NN squared-loss objective. Most popular nonlinear
activation functions satisfy the distilled properties, including rectified
linear units (ReLUs), leaky ReLUs, squared ReLUs and sigmoids. For activation
functions that are also smooth, we show $\mathit{local~linear~convergence}$
guarantees of gradient descent under a resampling rule. For homogeneous
activations, we show tensor methods are able to initialize the parameters to
fall into the local strong convexity region. As a result, tensor initialization
followed by gradient descent is guaranteed to recover the ground truth with
sample complexity $ d \cdot \log(1/\epsilon) \cdot \mathrm{poly}(k,\lambda )$
and computational complexity $n\cdot d \cdot \mathrm{poly}(k,\lambda) $ for
smooth homogeneous activations with high probability, where $d$ is the
dimension of the input, $k$ ($k\leq d$) is the number of hidden nodes,
$\lambda$ is a conditioning property of the ground-truth parameter matrix
between the input layer and the hidden layer, $\epsilon$ is the targeted
precision and $n$ is the number of samples. To the best of our knowledge, this
is the first work that provides recovery guarantees for 1NNs with both sample
complexity and computational complexity $\mathit{linear}$ in the input
dimension and $\mathit{logarithmic}$ in the precision.
| Kai Zhong, Zhao Song, Prateek Jain, Peter L. Bartlett, Inderjit S.
Dhillon | null | 1706.03175 | null | null |
Image Matching via Loopy RNN | cs.LG cs.CV | Most existing matching algorithms are one-off algorithms, i.e., they usually
measure the distance between the two image feature representation vectors
only once. In contrast, the human vision system achieves this task, i.e.,
image matching, by recursively looking at specific/related parts of both images
and then making the final judgement. Towards this end, we propose a novel loopy
recurrent neural network (Loopy RNN), which is capable of aggregating
relationship information of two input images in a progressive/iterative manner
and outputting the consolidated matching score in the final iteration. A Loopy
RNN has two unique features. First, built on conventional long short-term
memory (LSTM) nodes, it links the output gate of the tail node to the input
gate of the head node, thus it brings up symmetry property required for
matching. Second, a monotonous loss designed for the proposed network
guarantees increasing confidence during the recursive matching process.
Extensive experiments on several image matching benchmarks demonstrate the
great potential of the proposed method.
| Donghao Luo, Bingbing Ni, Yichao Yan, Xiaokang Yang | null | 1706.03190 | null | null |
Online Learning for Neural Machine Translation Post-editing | cs.LG cs.CL | Neural machine translation has revolutionized the field. Nevertheless,
post-editing the outputs of the system is mandatory for tasks requiring high
translation quality. Post-editing offers a unique opportunity for improving
neural machine translation systems, using online learning techniques and
treating the post-edited translations as new, fresh training data. We review
classical learning methods and propose a new optimization algorithm. We
thoroughly compare online learning algorithms in a post-editing scenario.
Results show significant improvements in translation quality and effort
reduction.
| \'Alvaro Peris, Luis Cebri\'an and Francisco Casacuberta | null | 1706.03196 | null | null |
Toward Optimal Run Racing: Application to Deep Learning Calibration | cs.LG | This paper aims at one-shot learning of deep neural nets, where a highly
parallel setting is considered to address the algorithm calibration problem -
selecting the best neural architecture and learning hyper-parameter values
depending on the dataset at hand. The notoriously expensive calibration problem
is optimally reduced by detecting and early stopping non-optimal runs. The
theoretical contribution regards the optimality guarantees within the multiple
hypothesis testing framework. Experiments on the CIFAR-10, PTB and Wiki
benchmarks demonstrate the relevance of the approach with a principled and
consistent improvement on the state of the art with no extra hyper-parameter.
| Olivier Bousquet, Sylvain Gelly, Karol Kurach, Marc Schoenauer,
Michele Sebag, Olivier Teytaud, Damien Vincent | null | 1706.03199 | null | null |
Critical Hyper-Parameters: No Random, No Cry | cs.LG | The selection of hyper-parameters is critical in Deep Learning. Because of
the long training time of complex models and the availability of compute
resources in the cloud, "one-shot" optimization schemes - where the sets of
hyper-parameters are selected in advance (e.g. on a grid or in a random manner)
and the training is executed in parallel - are commonly used. It is known that
grid search is sub-optimal, especially when only a few critical parameters
matter, which suggests using random search instead. Yet, random search can be
"unlucky" and produce sets of values that leave some part of the domain
unexplored. Quasi-random methods, such as Low Discrepancy Sequences (LDS),
avoid these issues. We show that such methods have theoretical properties that make
them appealing for performing hyperparameter search, and demonstrate that, when
applied to the selection of hyperparameters of complex Deep Learning models
(such as state-of-the-art LSTM language models and image classification
models), they yield suitable hyperparameters values with much fewer runs than
random search. We propose a particularly simple LDS method which can be used as
a drop-in replacement for grid or random search in any Deep Learning pipeline,
both as a fully one-shot hyperparameter search or as an initializer in
iterative batch optimization.
| Olivier Bousquet, Sylvain Gelly, Karol Kurach, Olivier Teytaud, Damien
Vincent | null | 1706.03200 | null | null |
ACCNet: Actor-Coordinator-Critic Net for "Learning-to-Communicate" with
Deep Multi-agent Reinforcement Learning | cs.AI cs.LG | Communication is a critical factor for the big multi-agent world to stay
organized and productive. Typically, most previous multi-agent
"learning-to-communicate" studies try to predefine the communication protocols
or use technologies such as tabular reinforcement learning and evolutionary
algorithms, which cannot generalize to changing environments or large collections
of agents.
In this paper, we propose an Actor-Coordinator-Critic Net (ACCNet) framework
for solving "learning-to-communicate" problem. The ACCNet naturally combines
the powerful actor-critic reinforcement learning technology with deep learning
technology. It can efficiently learn the communication protocols even from
scratch under partially observable environment. We demonstrate that the ACCNet
can achieve better results than several baselines under both continuous and
discrete action space environments. We also analyse the learned protocols and
discuss some design considerations.
| Hangyu Mao, Zhibo Gong, Yan Ni and Zhen Xiao | null | 1706.03235 | null | null |
Progressive Neural Networks for Transfer Learning in Emotion Recognition | cs.LG | Many paralinguistic tasks are closely related and thus representations
learned in one domain can be leveraged for another. In this paper, we
investigate how knowledge can be transferred between three paralinguistic
tasks: speaker, emotion, and gender recognition. Further, we extend this
problem to cross-dataset tasks, asking how knowledge captured in one emotion
dataset can be transferred to another. We focus on progressive neural networks
and compare these networks to the conventional deep learning method of
pre-training and fine-tuning. Progressive neural networks provide a way to
transfer knowledge and avoid the forgetting effect present when pre-training
neural networks on different tasks. Our experiments demonstrate that: (1)
emotion recognition can benefit from using representations originally learned
for different paralinguistic tasks and (2) transfer learning can effectively
leverage additional datasets to improve the performance of emotion recognition
systems.
| John Gideon, Soheil Khorram, Zakaria Aldeneh, Dimitrios Dimitriadis,
Emily Mower Provost | null | 1706.03256 | null | null |
Stepwise regression for unsupervised learning | cs.LG stat.ML | I consider unsupervised extensions of the fast stepwise linear regression
algorithm \cite{efroymson1960multiple}. These extensions allow one to
efficiently identify highly-representative feature variable subsets within a
given set of jointly distributed variables. This in turn allows for the
efficient dimensional reduction of large data sets via the removal of redundant
features. Fast search is effected here through the avoidance of repeat
computations across trial fits, allowing for a full representative-importance
ranking of a set of feature variables to be carried out in $O(n^2 m)$ time,
where $n$ is the number of variables and $m$ is the number of data samples
available. This runtime complexity matches that needed to carry out a single
regression and is $O(n^2)$ faster than that of naive implementations. I present
pseudocode suitable for efficient forward, reverse, and forward-reverse
unsupervised feature selection. To illustrate the algorithm's application, I
apply it to the problem of identifying representative stocks within a given
financial market index -- a challenge relevant to the design of Exchange Traded
Funds (ETFs). I also characterize the growth of numerical error with iteration
step in these algorithms, and finally demonstrate and rationalize the
observation that the forward and reverse algorithms return exactly inverted
feature orderings in the weakly-correlated feature set regime.
| Jonathan Landy | null | 1706.03265 | null | null |
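To fix ideas, here is a deliberately naive NumPy sketch of forward unsupervised selection in the spirit described above: at each step it adds the feature whose inclusion best reconstructs all features by least squares. This brute-force version repeats work across trial fits; the point of the algorithm in the abstract is precisely to avoid those repeats and reach $O(n^2 m)$ overall. Function names are hypothetical.

```python
import numpy as np

def forward_unsupervised_selection(X, n_select):
    """Greedy forward selection of representative features: at each step,
    add the feature whose inclusion most reduces the total squared error
    of reconstructing every feature from the selected subset."""
    X = X - X.mean(axis=0)          # work with centered data
    n_features = X.shape[1]
    selected = []
    for _ in range(n_select):
        best_j, best_err = None, np.inf
        for j in range(n_features):
            if j in selected:
                continue
            S = X[:, selected + [j]]
            # Least-squares reconstruction of all features from S.
            coef, *_ = np.linalg.lstsq(S, X, rcond=None)
            err = ((X - S @ coef) ** 2).sum()
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)
    return selected
```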
An Alternative to EM for Gaussian Mixture Models: Batch and Stochastic
Riemannian Optimization | stat.ML cs.LG | We consider maximum likelihood estimation for Gaussian Mixture Models (Gmms).
This task is almost invariably solved (in theory and practice) via the
Expectation Maximization (EM) algorithm. EM owes its success to various
factors, of which its ability to fulfill positive definiteness constraints
in closed form is of key importance. We propose an alternative to EM by
appealing to the rich Riemannian geometry of positive definite matrices, using
which we cast Gmm parameter estimation as a Riemannian optimization problem.
Surprisingly, such an out-of-the-box Riemannian formulation completely fails
and proves much inferior to EM. This motivates us to take a closer look at the
problem geometry, and derive a better formulation that is much more amenable to
Riemannian optimization. We then develop (Riemannian) batch and stochastic
gradient algorithms that outperform EM, often substantially. We provide a
non-asymptotic convergence analysis for our stochastic method, which is also
the first (to our knowledge) such global analysis for Riemannian stochastic
gradient. Numerous empirical results are included to demonstrate the
effectiveness of our methods.
| Reshad Hosseini, Suvrit Sra | null | 1706.03267 | null | null |
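To give a flavor of the Riemannian machinery involved, here is a toy sketch (not the paper's reformulation) that fits a single Gaussian covariance by maximum likelihood via Riemannian gradient descent on the SPD manifold. Under the affine-invariant metric, the Riemannian gradient of the negative log-likelihood works out to $0.5(\Sigma - S)$, and stepping along the exponential map keeps every iterate positive definite by construction, which is the property EM otherwise provides for free.

```python
# Toy sketch: Riemannian gradient descent on SPD matrices for a single
# Gaussian's covariance MLE (a stand-in for the paper's full GMM setting).
import numpy as np
from scipy.linalg import expm, sqrtm

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
data = rng.normal(size=(5000, 3)) @ A.T
S = data.T @ data / len(data)            # sample covariance (the MLE)

Sigma = np.eye(3)
for it in range(50):
    rgrad = 0.5 * (Sigma - S)            # Riemannian gradient of the NLL
    R = np.real(sqrtm(Sigma))
    Rinv = np.linalg.inv(R)
    # exponential-map step Exp_Sigma(-eta * rgrad) with eta = 0.5
    Sigma = R @ expm(-0.5 * Rinv @ rgrad @ Rinv) @ R
print(np.max(np.abs(Sigma - S)))         # -> close to 0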
An Online Learning Approach to Generative Adversarial Networks | cs.LG stat.ML | We consider the problem of training generative models with a Generative
Adversarial Network (GAN). Although GANs can accurately model complex
distributions, they are known to be difficult to train due to instabilities
caused by a hard minimax optimization problem. In this paper, we view the
problem of training GANs as finding a mixed strategy in a zero-sum game.
Building on ideas from online learning, we propose a novel training method named
Chekhov GAN. On the theory side, we show that our method provably converges
to an equilibrium for semi-shallow GAN architectures, i.e. architectures where
the discriminator is a one layer network and the generator is arbitrary. On the
practical side, we develop an efficient heuristic guided by our theoretical
results, which we apply to commonly used deep GAN architectures. On several
real world tasks our approach exhibits improved stability and performance
compared to standard GAN training.
| Paulina Grnarova and Kfir Y. Levy and Aurelien Lucchi and Thomas
Hofmann and Andreas Krause | null | 1706.03269 | null | null |
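A rough sketch of the flavor of heuristic the abstract alludes to, on a toy 1-D problem: keep short queues of past generator and discriminator checkpoints and train each current player against randomly drawn past opponents, approximating play against a mixed strategy. The network sizes, queue length, and snapshot schedule are arbitrary choices of mine, not the paper's algorithm.

```python
# Toy 1-D GAN trained against mixtures of past opponents (illustrative only).
import copy, random
import torch
import torch.nn as nn

def mlp(i, o):
    return nn.Sequential(nn.Linear(i, 32), nn.ReLU(), nn.Linear(32, o))

G, D = mlp(1, 1), mlp(1, 1)
optG = torch.optim.Adam(G.parameters(), lr=1e-3)
optD = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
past_G, past_D, K = [], [], 5                    # queues of past checkpoints

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0        # target: N(2, 0.25)
    # --- discriminator step vs. a randomly drawn past generator ---
    gen = random.choice(past_G) if past_G else G
    with torch.no_grad():
        fake = gen(torch.randn(64, 1))
    lossD = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    optD.zero_grad(); lossD.backward(); optD.step()
    # --- generator step vs. a randomly drawn past discriminator ---
    dis = random.choice(past_D) if past_D else D
    lossG = bce(dis(G(torch.randn(64, 1))), torch.ones(64, 1))
    optG.zero_grad(); lossG.backward(); optG.step()
    if step % 100 == 0:                          # snapshot into the queues
        past_G = (past_G + [copy.deepcopy(G)])[-K:]
        past_D = (past_D + [copy.deepcopy(D)])[-K:]

print(G(torch.randn(1000, 1)).mean().item())     # typically drifts toward 2.0
```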
Deep Recurrent Neural Networks for seizure detection and early seizure
detection systems | q-bio.QM cs.LG | Epilepsy is a common neurological disease, affecting about 0.6-0.8% of the world
population. Epileptic patients suffer from chronic unprovoked seizures, which
can result in a broad spectrum of debilitating medical and social consequences.
Since seizures, in general, occur infrequently and are unpredictable, automated
seizure detection systems are recommended to screen for seizures during
long-term electroencephalogram (EEG) recordings. In addition, systems for early
seizure detection can lead to the development of new types of intervention
systems that are designed to control or shorten the duration of seizure events.
In this article, we investigate the utility of recurrent neural networks (RNNs)
in designing seizure detection and early seizure detection systems. We propose
a deep learning framework via the use of Gated Recurrent Unit (GRU) RNNs for
seizure detection. We use publicly available data in order to evaluate our
method and demonstrate very promising evaluation results with overall accuracy
close to 100%. We also systematically investigate the application of our
method for early seizure warning systems. Our method can detect about 98% of
seizure events within the first 5 seconds of the overall epileptic seizure
duration.
| Sachin S. Talathi | null | 1706.03283 | null | null |
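A minimal sketch of the kind of GRU-based detector described above; the channel count, window length, and layer sizes are placeholders rather than the paper's configuration.

```python
# Minimal GRU seizure/non-seizure classifier over fixed-length EEG windows.
import torch
import torch.nn as nn

class SeizureGRU(nn.Module):
    def __init__(self, n_channels=23, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 2)          # seizure vs. non-seizure

    def forward(self, x):                         # x: (batch, time, channels)
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])           # classify from the last state

model = SeizureGRU()
window = torch.randn(16, 256, 23)                 # 16 one-second EEG windows
logits = model(window)
print(logits.shape)                               # torch.Size([16, 2])
```

An early-warning variant would slide such windows along the recording and raise an alarm as soon as the seizure class fires.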
Poseidon: An Efficient Communication Architecture for Distributed Deep
Learning on GPU Clusters | cs.LG cs.CV cs.DC stat.ML | Deep learning models can take weeks to train on a single GPU-equipped
machine, necessitating scaling out DL training to a GPU-cluster. However,
current distributed DL implementations can scale poorly due to substantial
parameter synchronization over the network: the high throughput of GPUs
allows more data batches to be processed per unit time than on CPUs, leading to
more frequent network synchronization. We present Poseidon, an efficient
communication architecture for distributed DL on GPUs. Poseidon exploits the
layered model structures in DL programs to overlap communication and
computation, reducing bursty network communication. Moreover, Poseidon uses a
hybrid communication scheme that optimizes the number of bytes required to
synchronize each layer, according to layer properties and the number of
machines. We show that Poseidon is applicable to different DL frameworks by
plugging Poseidon into Caffe and TensorFlow. We show that Poseidon enables
Caffe and TensorFlow to achieve 15.5x speed-up on 16 single-GPU machines, even
with limited bandwidth (10GbE) and the challenging VGG19-22K network for image
classification. Moreover, Poseidon-enabled TensorFlow achieves 31.5x speed-up
with 32 single-GPU machines on Inception-V3, a 50% improvement over the
open-source TensorFlow (20x speed-up).
| Hao Zhang, Zeyu Zheng, Shizhen Xu, Wei Dai, Qirong Ho, Xiaodan Liang,
Zhiting Hu, Jinliang Wei, Pengtao Xie, Eric P. Xing | null | 1706.03292 | null | null |
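The hybrid communication idea can be made concrete with a back-of-the-envelope byte count (constant factors simplified, numbers illustrative): for a dense layer, compare moving the full gradient/weight matrices through a parameter server against broadcasting per-sample rank-1 sufficient factors to all peers, and pick whichever is cheaper.

```python
# Sketch of a byte-count rule for hybrid communication. For a dense layer W of
# size M x N, a parameter server moves the full matrix (push gradient, pull new
# weights), while sufficient-factor broadcasting sends rank-1 factors u (M) and
# v (N) per sample to every peer.
def bytes_parameter_server(M, N):
    return 2 * M * N                     # push gradient, pull updated weights

def bytes_sufficient_factors(M, N, batch, peers):
    return batch * (M + N) * peers       # broadcast (u, v) pairs to all peers

def choose(M, N, batch, peers):
    ps = bytes_parameter_server(M, N)
    sfb = bytes_sufficient_factors(M, N, batch, peers)
    return ("sufficient-factor broadcast" if sfb < ps else "parameter server",
            ps, sfb)

# e.g., a VGG-style fc layer (25088 x 4096), batch 32, 15 peers:
print(choose(25088, 4096, 32, 15))       # the rank-1 factors are far smaller
```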
Neural networks and rational functions | cs.LG cs.NE stat.ML | Neural networks and rational functions efficiently approximate each other. In
more detail, it is shown here that for any ReLU network, there exists a
rational function of degree $O(\text{polylog}(1/\epsilon))$ which is
$\epsilon$-close, and similarly for any rational function there exists a ReLU
network of size $O(\text{polylog}(1/\epsilon))$ which is $\epsilon$-close. By
contrast, polynomials need degree $\Omega(\text{poly}(1/\epsilon))$ to
approximate even a single ReLU. When converting a ReLU network to a rational
function as above, the hidden constants depend exponentially on the number of
layers, which is shown to be tight; in other words, a compositional
representation can be beneficial even for rational functions.
| Matus Telgarsky | null | 1706.03301 | null | null |
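The gap between the two approximation rates can be checked numerically. The sketch below uses Newman's classical rational approximation of $|x|$ (a standard construction, not one from this paper) to build a ReLU approximant via relu(x) = (x + |x|)/2, and compares it against a least-squares polynomial of the same degree on $[-1, 1]$.

```python
# Newman's degree-n rational approximation of |x| has error ~ 3*exp(-sqrt(n)),
# while the best degree-n polynomial error for ReLU decays only like ~1/n.
import numpy as np

def newman_abs(x, n=100):
    xi = np.exp(-1.0 / np.sqrt(n))
    def A(t):
        return np.prod([t + xi**k for k in range(n)], axis=0)
    return x * (A(x) - A(-x)) / (A(x) + A(-x))

xs = np.linspace(-1, 1, 4001)
relu = np.maximum(xs, 0)
rat = (xs + newman_abs(xs)) / 2                      # rational approximant
V = np.polynomial.chebyshev.chebvander(xs, 100)
coef, *_ = np.linalg.lstsq(V, relu, rcond=None)      # degree-100 polynomial fit
poly = V @ coef
print("rational max err:  ", np.max(np.abs(rat - relu)))
print("polynomial max err:", np.max(np.abs(poly - relu)))
# The rational error is typically an order of magnitude smaller here, and the
# gap widens rapidly with the degree, matching the rates quoted above.
```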
Collect at Once, Use Effectively: Making Non-interactive Locally Private
Learning Possible | cs.LG cs.DS | Non-interactive Local Differential Privacy (LDP) requires data analysts to
collect data from users through a noisy channel in a single round. In this paper, we extend
the frontiers of Non-interactive LDP learning and estimation from several
aspects. For learning with smooth generalized linear losses, we propose an
approximate stochastic gradient oracle estimated from the non-interactive LDP
channel, using Chebyshev expansion. Combined with inexact gradient methods, we
obtain an efficient algorithm with quasi-polynomial sample complexity bound.
For the high-dimensional setting, we discover that under an $\ell_2$-norm assumption
on data points, high-dimensional sparse linear regression and mean estimation
can be achieved with logarithmic dependence on dimension, using random
projection and approximate recovery. We also extend our methods to Kernel Ridge
Regression. Our work is the first to make learning and estimation
possible for a broad range of learning tasks under the non-interactive LDP model.
| Kai Zheng, Wenlong Mou, Liwei Wang | null | 1706.03316 | null | null |
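For intuition about the non-interactive model itself, here is a toy one-round protocol (plain $\epsilon$-LDP mean estimation with Laplace noise, standard material rather than the paper's construction): every user perturbs their value once, locally, and the analyst never queries them again.

```python
# One-shot epsilon-LDP mean estimation: users report a single noisy value.
import numpy as np

rng = np.random.default_rng(0)
eps = 1.0
true_values = rng.uniform(0, 1, size=100_000)     # each user holds x_i in [0,1]
# one round of communication: each user adds Laplace(sensitivity/eps) noise
reports = true_values + rng.laplace(scale=1.0 / eps, size=true_values.shape)
print("true mean:   ", true_values.mean())        # ~0.5
print("LDP estimate:", reports.mean())            # ~0.5 +/- O(1/(eps*sqrt(n)))
```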
On the Sampling Problem for Kernel Quadrature | stat.ML cs.LG math.NA stat.CO | The standard Kernel Quadrature method for numerical integration with random
point sets (also called Bayesian Monte Carlo) is known to converge in root mean
square error at a rate determined by the ratio $s/d$, where $s$ and $d$ encode
the smoothness and dimension of the integrand. However, an empirical
investigation reveals that the rate constant $C$ is highly sensitive to the
distribution of the random points. In contrast to standard Monte Carlo
integration, for which optimal importance sampling is well-understood, the
sampling distribution that minimises $C$ for Kernel Quadrature does not admit a
closed form. This paper argues that the practical choice of sampling
distribution is an important open problem. One solution is considered: a novel
automatic approach based on adaptive tempering and sequential Monte Carlo.
Empirical results demonstrate that a dramatic reduction in integration error of up
to 4 orders of magnitude can be achieved with the proposed method.
| Francois-Xavier Briol and Chris J. Oates and Jon Cockayne and Wilson
Ye Chen and Mark Girolami | null | 1706.03369 | null | null |
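A minimal sketch of the quadrature rule in question, assuming a Gaussian kernel and the uniform measure on $[0, 1]$ so that the kernel mean has a closed form; running it with different random point sets makes the sensitivity to the sampling distribution visible.

```python
# Kernel quadrature: weights w solve K w = z, where z_i = \int_0^1 k(x, x_i) dx
# is the kernel mean embedding (closed form below for the Gaussian kernel).
import numpy as np
from scipy.special import erf

def kq_estimate(f, x, ell=0.2):
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * ell**2))
    z = ell * np.sqrt(np.pi / 2) * (erf((1 - x) / (np.sqrt(2) * ell))
                                    + erf(x / (np.sqrt(2) * ell)))
    w = np.linalg.solve(K + 1e-10 * np.eye(len(x)), z)  # jitter for stability
    return w @ f(x)

f = lambda x: np.sin(2 * np.pi * x) + x**2          # true integral = 1/3
rng = np.random.default_rng(0)
for trial in range(3):
    x = rng.uniform(0, 1, 30)
    print(abs(kq_estimate(f, x) - 1/3))             # error varies with the draw
```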
Deep EHR: A Survey of Recent Advances in Deep Learning Techniques for
Electronic Health Record (EHR) Analysis | cs.LG stat.ML | The past decade has seen an explosion in the amount of digital information
stored in electronic health records (EHR). While primarily designed for
archiving patient clinical information and administrative healthcare tasks,
many researchers have found secondary use of these records for various clinical
informatics tasks. Over the same period, the machine learning community has
seen widespread advances in deep learning techniques, which also have been
successfully applied to the vast amount of EHR data. In this paper, we review
these deep EHR systems, examining architectures, technical aspects, and
clinical applications. We also identify shortcomings of current techniques and
discuss avenues of future research for EHR-based deep learning.
| Benjamin Shickel, Patrick Tighe, Azra Bihorac, Parisa Rashidi | 10.1109/JBHI.2017.2767063 | 1706.03446 | null | null |
Optimal Auctions through Deep Learning: Advances in Differentiable
Economics | cs.GT cs.AI cs.LG | Designing an incentive compatible auction that maximizes expected revenue is
an intricate task. The single-item case was resolved in a seminal piece of work
by Myerson in 1981, but more than 40 years later a full analytical
understanding of the optimal design still remains elusive for settings with two
or more items. In this work, we initiate the exploration of the use of tools
from deep learning for the automated design of optimal auctions. We model an
auction as a multi-layer neural network, frame optimal auction design as a
constrained learning problem, and show how it can be solved using standard
machine learning pipelines. In addition to providing generalization bounds, we
present extensive experimental results, recovering essentially all known
solutions that come from the theoretical analysis of optimal auction design
problems and obtaining novel mechanisms for settings in which the optimal
mechanism is unknown.
| Paul D\"utting and Zhe Feng and Harikrishna Narasimhan and David C.
Parkes and Sai Srivatsa Ravindranath | null | 1706.03459 | null | null |
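As a small worked example of the single-item theory the abstract contrasts itself with: Myerson's virtual value for Uniform[0,1] values is $\phi(v) = v - (1 - F(v))/f(v) = 2v - 1$, so the optimal mechanism is a second-price auction with reserve $0.5$. The simulation below (my own toy, not from the paper) shows revenue peaking at that reserve.

```python
# Second-price auction with a reserve, i.i.d. Uniform[0,1] bidder values.
import numpy as np

def revenue(values, reserve):
    """Per-round revenue of a second-price auction with the given reserve."""
    top2 = np.sort(values, axis=1)[:, -2:]           # second-highest, highest
    sold = top2[:, 1] >= reserve                     # item sells iff max >= r
    return np.where(sold, np.maximum(top2[:, 0], reserve), 0.0)

rng = np.random.default_rng(0)
vals = rng.uniform(0, 1, size=(200_000, 3))          # 3 bidders per auction
for r in [0.0, 0.25, 0.5, 0.75]:
    print(r, revenue(vals, r).mean())                # revenue peaks at r = 0.5
```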
Confident Multiple Choice Learning | cs.LG stat.ML | Ensemble methods are arguably the most trustworthy techniques for boosting
the performance of machine learning models. Popular independent ensembles (IE)
relying on a naive averaging/voting scheme have been the typical choice for most
applications involving deep neural networks, but they do not consider advanced
collaboration among ensemble models. In this paper, we propose new ensemble
methods specialized for deep neural networks, called confident multiple choice
learning (CMCL): it is a variant of multiple choice learning (MCL) that
addresses its overconfidence issue. In particular, the proposed major
components of CMCL beyond the original MCL scheme are (i) new loss, i.e.,
confident oracle loss, (ii) new architecture, i.e., feature sharing and (iii)
new training method, i.e., stochastic labeling. We demonstrate the effect of
CMCL via experiments on the image classification on CIFAR and SVHN, and the
foreground-background segmentation on the iCoseg. In particular, CMCL using 5
residual networks provides 14.05% and 6.60% relative reductions in the top-1
error rates from the corresponding IE scheme for the classification task on
CIFAR and SVHN, respectively.
| Kimin Lee, Changho Hwang, KyoungSoo Park, Jinwoo Shin | null | 1706.03475 | null | null |
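A hedged reading of the confident oracle loss as a sketch (the feature-sharing architecture and stochastic labeling components are omitted, and the weight beta and all sizes are placeholders): the best ensemble member per example gets the usual cross-entropy, while the others are regularized toward a uniform, non-confident prediction.

```python
# Sketch of a confident-oracle-style loss for an ensemble of classifiers.
import torch
import torch.nn.functional as F

def confident_oracle_loss(logits_list, target, beta=0.75):
    """logits_list: list of (batch, classes) tensors, one per ensemble member."""
    n_class = logits_list[0].shape[1]
    ce = torch.stack([F.cross_entropy(l, target, reduction="none")
                      for l in logits_list])         # (members, batch)
    best = ce.argmin(dim=0)                          # oracle assignment
    total = 0.0
    for m, logits in enumerate(logits_list):
        is_best = best.eq(m).float()                 # (batch,)
        logp = F.log_softmax(logits, dim=1)
        # KL(uniform || p_m) = mean_k(-log p_m,k) - log(n_class)
        kl_u = (-logp).mean(dim=1) - torch.log(torch.tensor(float(n_class)))
        total = total + (is_best * ce[m] + (1 - is_best) * beta * kl_u).mean()
    return total

# toy usage: an ensemble of 3 linear "models" on random data
models = [torch.nn.Linear(10, 5) for _ in range(3)]
x, y = torch.randn(32, 10), torch.randint(0, 5, (32,))
loss = confident_oracle_loss([m(x) for m in models], y)
loss.backward()
print(loss.item())
```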
Random Forests, Decision Trees, and Categorical Predictors: The "Absent
Levels" Problem | stat.ML cs.LG | One advantage of decision tree based methods like random forests is their
ability to natively handle categorical predictors without having to first
transform them (e.g., by using feature engineering techniques). However, in
this paper, we show how this capability can lead to an inherent "absent levels"
problem for decision tree based methods that has never been thoroughly
discussed, and whose consequences have never been carefully explored. This
problem occurs whenever there is an indeterminacy over how to handle an
observation that reaches a categorical split determined at a time when that
observation's level was absent from the training data. Although these
incidents may appear to be innocuous, by using Leo Breiman and Adele Cutler's
random forests FORTRAN code and the randomForest R package (Liaw and Wiener,
2002) as motivating case studies, we examine how overlooking the absent levels
problem can systematically bias a model. Furthermore, by using three real data
examples, we illustrate how absent levels can dramatically alter a model's
performance in practice, and we empirically demonstrate how some simple
heuristics can be used to help mitigate the effects of the absent levels
problem until a more robust theoretical solution is found.
| Timothy C. Au | null | 1706.03492 | null | null |
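A toy illustration of the failure mode and one of the simple mitigation heuristics mentioned above (all values and the routing rule are made up for the example): routing an unseen level down a fixed default branch injects a systematic direction into its predictions, whereas averaging both branches removes the arbitrariness.

```python
# One categorical split, learned on levels {A, B, C}; level D appears only at
# prediction time, triggering the "absent levels" case.
def split_predict(level, left_levels, trained_levels, left_val, right_val,
                  absent_rule="default_left"):
    """Evaluate one categorical split; `absent_rule` governs unseen levels."""
    if level in left_levels:
        return left_val
    if level in trained_levels:
        return right_val
    # the absent-levels case: the level never appeared during training
    if absent_rule == "default_left":
        return left_val                          # arbitrary, systematic bias
    return 0.5 * (left_val + right_val)          # heuristic: average branches

# split learned on {A, B, C}: {A, B} -> left (mean 1.0), C -> right (mean 5.0)
for rule in ("default_left", "average"):
    print(rule, split_predict("D", {"A", "B"}, {"A", "B", "C"}, 1.0, 5.0,
                              absent_rule=rule))
```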