title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
An improved analysis of the ER-SpUD dictionary learning algorithm | cs.LG cs.DS cs.IT math.IT math.PR | In "dictionary learning" we observe $Y = AX + E$ for some
$Y\in\mathbb{R}^{n\times p}$, $A \in\mathbb{R}^{m\times n}$, and
$X\in\mathbb{R}^{m\times p}$. The matrix $Y$ is observed, and $A, X, E$ are
unknown. Here $E$ is "noise" of small norm, and $X$ is column-wise sparse. The
matrix $A$ is referred to as a {\em dictionary}, and its columns as {\em
atoms}. Then, given some small number $p$ of samples, i.e.\ columns of $Y$, the
goal is to learn the dictionary $A$ up to small error, as well as $X$. The
motivation is that in many applications data is expected to be sparse when
represented by atoms in the "right" dictionary $A$ (e.g.\ images in the Haar
wavelet basis), and the goal is to learn $A$ from the data to then use it for
other applications.
Recently, [SWW12] proposed the dictionary learning algorithm ER-SpUD with
provable guarantees when $E = 0$ and $m = n$. They showed if $X$ has
independent entries with an expected $s$ non-zeroes per column for $1 \lesssim
s \lesssim \sqrt{n}$, and with non-zero entries being subgaussian, then for
$p\gtrsim n^2\log^2 n$ with high probability ER-SpUD outputs matrices $A', X'$
which equal $A, X$ up to permuting and scaling columns (resp.\ rows) of $A$
(resp.\ $X$). They conjectured $p\gtrsim n\log n$ suffices, which they showed
was information theoretically necessary for {\em any} algorithm to succeed when
$s \simeq 1$. Significant progress was later obtained in [LV15].
We show that for a slight variant of ER-SpUD, $p\gtrsim n\log(n/\delta)$
samples suffice for successful recovery with probability $1-\delta$. We also
show that for the unmodified ER-SpUD, $p\gtrsim n^{1.99}$ samples are required
even to learn $A, X$ with polynomially small success probability. This resolves
the main conjecture of [SWW12], and contradicts the main result of [LV15],
which claimed that $p\gtrsim n\log^4 n$ guarantees success whp.
| Jaros{\l}aw B{\l}asiok, Jelani Nelson | null | 1602.05719 | null | null |
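A minimal sketch (with illustrative parameter values, not taken from the paper) of the observation model $Y = AX + E$ above, in the square, noiseless regime studied by [SWW12]; it only generates a synthetic instance and does not implement ER-SpUD.

```python
# Hypothetical synthetic instance of Y = AX + E (square dictionary, E = 0).
import numpy as np

rng = np.random.default_rng(0)
n, p, s = 50, 2000, 5                        # dimension, samples, expected nonzeros per column

A = rng.standard_normal((n, n))              # unknown dictionary (m = n case)
mask = rng.random((n, p)) < s / n            # each entry is nonzero with probability s/n
X = mask * rng.standard_normal((n, p))       # sparse coefficients with subgaussian nonzeros
Y = A @ X                                    # the only observed matrix (noiseless, E = 0)
```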
Toward Deeper Understanding of Neural Networks: The Power of
Initialization and a Dual View on Expressivity | cs.LG cs.AI cs.CC cs.DS stat.ML | We develop a general duality between neural networks and compositional
kernels, striving towards a better understanding of deep learning. We show that
initial representations generated by common random initializations are
sufficiently rich to express all functions in the dual kernel space. Hence,
though the training objective is hard to optimize in the worst case, the
initial weights form a good starting point for optimization. Our dual view also
reveals a pragmatic and aesthetic perspective of neural networks and
underscores their expressive power.
| Amit Daniely and Roy Frostig and Yoram Singer | null | 1602.05897 | null | null |
Efficient approaches for escaping higher order saddle points in
non-convex optimization | cs.LG stat.ML | Local search heuristics for non-convex optimizations are popular in applied
machine learning. However, in general it is hard to guarantee that such
algorithms even converge to a local minimum, due to the existence of
complicated saddle point structures in high dimensions. Many functions have
degenerate saddle points such that the first and second order derivatives
cannot distinguish them from local optima. In this paper we use higher order
derivatives to escape these saddle points: we design the first efficient
algorithm guaranteed to converge to a third order local optimum (while existing
techniques are at most second order). We also show that it is NP-hard to extend
this further to finding fourth order local optima.
| Anima Anandkumar, Rong Ge | null | 1602.05908 | null | null |
Local Rademacher Complexity-based Learning Guarantees for Multi-Task
Learning | cs.LG | We show a Talagrand-type concentration inequality for Multi-Task Learning
(MTL), using which we establish sharp excess risk bounds for MTL in terms of
distribution- and data-dependent versions of the Local Rademacher Complexity
(LRC). We also give a new bound on the LRC for norm regularized as well as
strongly convex hypothesis classes, which applies not only to MTL but also to
the standard i.i.d. setting. Combining both results, one can now easily derive
fast-rate bounds on the excess risk for many prominent MTL methods,
including---as we demonstrate---Schatten-norm, group-norm, and
graph-regularized MTL. The derived bounds reflect a relationship akin to a
conservation law of asymptotic convergence rates. This very relationship allows
for trading off slower rates w.r.t. the number of tasks for faster rates with
respect to the number of available samples per task, when compared to the rates
obtained via a traditional, global Rademacher analysis.
| Niloofar Yousefi, Yunwen Lei, Marius Kloft, Mansooreh Mollaghasemi and
Georgios Anagnostopoulos | null | 1602.05916 | null | null |
Weighted Unsupervised Learning for 3D Object Detection | cs.CV cs.GR cs.LG cs.MM cs.RO | This paper introduces a novel weighted unsupervised learning for object
detection using an RGB-D camera. This technique is feasible for detecting the
moving objects in the noisy environments that are captured by an RGB-D camera.
The main contribution of this paper is a real-time algorithm that detects each
object as a separate cluster using weighted clustering. In a preprocessing
step, the algorithm calculates the 3D position (X, Y, Z) and the RGB color of
each data point, and then computes each data point's normal vector from the
point's neighbors. After preprocessing, our algorithm calculates k weights
for each data point; each weight indicates cluster membership, resulting in
clustered objects of the scene.
| Kamran Kowsari, Manal H. Alassaf | 10.14569/IJACSA.2016.070180 | 1602.05920 | null | null |
Revise Saturated Activation Functions | cs.LG | In this paper, we revise two commonly used saturated functions, the logistic
sigmoid and the hyperbolic tangent (tanh).
We point out that, besides the well-known non-zero-centered property, the slope
of the activation function near the origin is another possible reason why
deep networks with the logistic function are difficult to train. We
demonstrate that, with proper rescaling, the logistic sigmoid achieves
results comparable to tanh.
Then, following the same argument, we improve tanh by penalizing its
negative part. We show that "penalized tanh" is comparable to and even outperforms
state-of-the-art non-saturated functions, including ReLU and leaky ReLU, on
deep convolutional neural networks.
Our results contradict the conclusion of previous works that the
saturation property causes the slow convergence. They suggest that further
investigation is necessary to better understand activation functions in deep
architectures.
| Bing Xu, Ruitong Huang, Mu Li | null | 1602.05980 | null | null |
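A hedged sketch of one plausible reading of the two activations discussed above; the exact rescaling constant and the penalty factor on the negative part are assumptions, not taken from the paper.

```python
import numpy as np

def rescaled_sigmoid(x, scale=4.0):
    # Zero-center the logistic sigmoid and rescale it so its slope at the origin is ~1.
    return scale / (1.0 + np.exp(-x)) - scale / 2.0

def penalized_tanh(x, a=0.25):
    # Keep tanh on the positive side and damp ("penalize") its negative part by a factor a < 1.
    t = np.tanh(x)
    return np.where(x >= 0, t, a * t)
```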
Spectral Learning for Supervised Topic Models | cs.LG cs.CL cs.IR stat.ML | Supervised topic models simultaneously model the latent topic structure of
large collections of documents and a response variable associated with each
document. Existing inference methods are based on variational approximation or
Monte Carlo sampling, which often suffers from the problem of local minima.
Spectral methods have been applied to learn unsupervised topic models, such as
latent Dirichlet allocation (LDA), with provable guarantees. This paper
investigates the possibility of applying spectral methods to recover the
parameters of supervised LDA (sLDA). We first present a two-stage spectral
method, which recovers the parameters of LDA followed by a power update method
to recover the regression model parameters. Then, we further present a
single-phase spectral algorithm to jointly recover the topic distribution
matrix as well as the regression weights. Our spectral algorithms are provably
correct and computationally efficient. We prove a sample complexity bound for
each algorithm and subsequently derive a sufficient condition for the
identifiability of sLDA. Thorough experiments on synthetic and real-world
datasets verify the theory and demonstrate the practical effectiveness of the
spectral algorithms. In fact, our results on a large-scale review rating
dataset demonstrate that our single-phase spectral algorithm alone gets
comparable or even better performance than state-of-the-art methods, while
previous work on spectral methods has rarely reported such promising
performance.
| Yong Ren, Yining Wang, Jun Zhu | null | 1602.06025 | null | null |
Structured Sparse Regression via Greedy Hard-Thresholding | stat.ML cs.LG | Several learning applications require solving high-dimensional regression
problems where the relevant features belong to a small number of (overlapping)
groups. For very large datasets and under standard sparsity constraints, hard
thresholding methods have proven to be extremely efficient, but such methods
require NP hard projections when dealing with overlapping groups. In this
paper, we show that such NP-hard projections can not only be avoided by
appealing to submodular optimization, but that the resulting methods also come with strong
theoretical guarantees even in the presence of poorly conditioned data (i.e.
say when two features have correlation $\geq 0.99$), which existing analyses
cannot handle. These methods exhibit an interesting computation-accuracy
trade-off and can be extended to significantly harder problems such as sparse
overlapping groups. Experiments on both real and synthetic data validate our
claims and demonstrate that the proposed methods are orders of magnitude faster
than other greedy and convex relaxation techniques for learning with
group-structured sparsity.
| Prateek Jain, Nikhil Rao, Inderjit Dhillon | null | 1602.06042 | null | null |
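For context, a minimal sketch of the plain (non-group) iterative hard-thresholding baseline that the abstract builds on; the paper's submodular projection for overlapping groups is not shown here.

```python
import numpy as np

def iht(X, y, k, iters=200):
    # Iterative hard thresholding: gradient step on least squares, then project onto k-sparse vectors.
    n, d = X.shape
    step = 1.0 / np.linalg.norm(X, 2) ** 2    # conservative step size (1 / squared spectral norm)
    w = np.zeros(d)
    for _ in range(iters):
        w = w + step * X.T @ (y - X @ w)      # gradient step
        keep = np.argsort(np.abs(w))[-k:]     # indices of the k largest magnitudes
        mask = np.zeros(d, dtype=bool)
        mask[keep] = True
        w[~mask] = 0.0                        # hard projection onto the k-sparse set
    return w
```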
First-order Methods for Geodesically Convex Optimization | math.OC cs.LG stat.ML | Geodesic convexity generalizes the notion of (vector space) convexity to
nonlinear metric spaces. But unlike convex optimization, geodesically convex
(g-convex) optimization is much less developed. In this paper we contribute to
the understanding of g-convex optimization by developing iteration complexity
analysis for several first-order algorithms on Hadamard manifolds.
Specifically, we prove upper bounds for the global complexity of deterministic
and stochastic (sub)gradient methods for optimizing smooth and nonsmooth
g-convex functions, both with and without strong g-convexity. Our analysis also
reveals how the manifold geometry, especially \emph{sectional curvature},
impacts convergence rates. To the best of our knowledge, our work is the first
to provide global complexity analysis for first-order algorithms for general
g-convex optimization.
| Hongyi Zhang, Suvrit Sra | null | 1602.06053 | null | null |
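For reference, the generic Riemannian (sub)gradient step on a Hadamard manifold, in the usual exponential-map notation; the specific step-size schedules analyzed in the paper are not reproduced here:

$$x_{k+1} = \operatorname{Exp}_{x_k}\!\left(-\eta_k\, g_k\right), \qquad g_k \in \partial f(x_k).$$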
Node-By-Node Greedy Deep Learning for Interpretable Features | cs.LG | Multilayer networks have seen a resurgence under the umbrella of deep
learning. Current deep learning algorithms train the layers of the network
sequentially, improving algorithmic performance as well as providing some
regularization. We present a new training algorithm for deep networks which
trains \emph{each node in the network} sequentially. Our algorithm is orders of
magnitude faster, creates more interpretable internal representations at the
node level, while not sacrificing on the ultimate out-of-sample performance.
| Ke Wu and Malik Magdon-Ismail | null | 1602.06183 | null | null |
GAP Safe Screening Rules for Sparse-Group-Lasso | stat.ML cs.LG math.OC stat.CO | In high dimensional settings, sparse structures are crucial for efficiency,
either in terms of memory, computation, or performance. In some contexts, it is
natural to handle more refined structures than pure sparsity, such as for
instance group sparsity. Sparse-Group Lasso has recently been introduced in the
context of linear regression to enforce sparsity both at the feature level and
at the group level. We adapt recent safe screening rules, which discard
irrelevant features/groups early in the solver, to the case of the Sparse-Group Lasso.
Such rules have led to important speed-ups for a wide range of iterative
methods. Thanks to dual gap computations, we provide new safe screening rules
for the Sparse-Group Lasso and show significant gains in terms of computing time for
a coordinate descent implementation.
| Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon | null | 1602.06225 | null | null |
Stacking for machine learning redshifts applied to SDSS galaxies | astro-ph.IM astro-ph.CO cs.LG | We present an analysis of a general machine learning technique called
'stacking' for the estimation of photometric redshifts. Stacking techniques can
feed the photometric redshift estimate, as output by a base algorithm, back
into the same algorithm as an additional input feature in a subsequent learning
round. We show how all tested base algorithms benefit from at least one
additional stacking round (or layer). To demonstrate the benefit of stacking,
we apply the method to both unsupervised machine learning techniques based on
self-organising maps (SOMs), and supervised machine learning methods based on
decision trees. We explore a range of stacking architectures, such as the
number of layers and the number of base learners per layer. Finally we explore
the effectiveness of stacking even when using a successful algorithm such as
AdaBoost. We observe a significant improvement of between 1.9% and 21% on all
computed metrics when stacking is applied to weak learners (such as SOMs and
decision trees). When applied to strong learning algorithms (such as AdaBoost)
the ratio of improvement shrinks, but still remains positive and is between
0.4% and 2.5% for the explored metrics and comes at almost no additional
computational cost.
| Roman Zitlau, Ben Hoyle, Kerstin Paech, Jochen Weller, Markus Michael
Rau, Stella Seitz | 10.1093/mnras/stw1454 | 1602.06294 | null | null |
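A hedged sketch of the stacking idea described above (a base learner's redshift estimate fed back as an extra input feature in the next round); the base learner, its settings, and the data handling here are placeholders, not the authors' setup.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def stacked_rounds(X, y, rounds=2):
    # Each round appends the previous round's prediction as a new feature column.
    features = X.copy()
    model = None
    for _ in range(rounds):
        model = DecisionTreeRegressor(max_depth=8).fit(features, y)
        z = model.predict(features).reshape(-1, 1)   # base photometric-redshift estimate
        features = np.hstack([features, z])          # feed it back as an additional input feature
    return model, features
```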
Policy Error Bounds for Model-Based Reinforcement Learning with Factored
Linear Models | stat.ML cs.LG | In this paper we study a model-based approach to calculating approximately
optimal policies in Markovian Decision Processes. In particular, we derive
novel bounds on the loss of using a policy derived from a factored linear
model, a class of models that generalizes numerous previous models among those
that come with strong computational guarantees. For the first time in the
literature, we derive performance bounds for model-based techniques where the
model inaccuracy is measured in weighted norms. Moreover, our bounds show a
decreased sensitivity to the discount factor and, unlike similar bounds derived
for other approaches, they are insensitive to measure mismatch. Similarly to
previous works, our proofs are also based on contraction arguments, but with
the main differences that we use carefully constructed norms building on Banach
lattices, and the contraction property is only assumed for operators acting on
"compressed" spaces, thus weakening previous assumptions, while strengthening
previous results.
| Bernardo \'Avila Pires and Csaba Szepesv\'ari | null | 1602.06346 | null | null |
FLASH: Fast Bayesian Optimization for Data Analytic Pipelines | cs.LG | Modern data science relies on data analytic pipelines to organize
interdependent computational steps. Such analytic pipelines often involve
different algorithms across multiple steps, each with its own hyperparameters.
To achieve the best performance, it is often critical to select optimal
algorithms and to set appropriate hyperparameters, which requires large
computational efforts. Bayesian optimization provides a principled way for
searching optimal hyperparameters for a single algorithm. However, many
challenges remain in solving pipeline optimization problems with
high-dimensional and highly conditional search space. In this work, we propose
Fast LineAr SearcH (FLASH), an efficient method for tuning analytic pipelines.
FLASH is a two-layer Bayesian optimization framework, which firstly uses a
parametric model to select promising algorithms, then computes a nonparametric
model to fine-tune hyperparameters of the promising algorithms. FLASH also
includes an effective caching algorithm which can further accelerate the search
process. Extensive experiments on a number of benchmark datasets have
demonstrated that FLASH significantly outperforms previous state-of-the-art
methods in both search speed and accuracy. Using 50% of the time budget, FLASH
achieves up to 20% improvement on test error rate compared to the baselines.
FLASH also yields state-of-the-art performance on a real-world application for
healthcare predictive modeling.
| Yuyu Zhang, Mohammad Taha Bahadori, Hang Su, Jimeng Sun | null | 1602.06468 | null | null |
Distributed Private Online Learning for Social Big Data Computing over
Data Center Networks | cs.DC cs.LG cs.SI | With the rapid growth of Internet technologies, cloud computing and social
networks have become ubiquitous. An increasing number of people participate in
social networks and massive online social data are obtained. In order to
exploit knowledge from these copious amounts of data and predict the social
behavior of users, there is a pressing need for data mining in social networks. Almost
all online websites use cloud services to effectively process the large scale
of social data, which are gathered from distributed data centers. These data
are so large-scale, high-dimensional, and widely distributed that we propose a
distributed sparse online algorithm to handle them. Additionally,
privacy-protection is an important point in social networks. We should not
compromise the privacy of individuals in networks, while these social data are
being learned for data mining. Thus we also consider the privacy problem in
this article. Our simulations show that the appropriate sparsity of data would
enhance the performance of our algorithm and the privacy-preserving method does
not significantly hurt the performance of the proposed algorithm.
| Chencheng Li and Pan Zhou and Yingxue Zhou and Kaigui Bian and Tao
Jiang and Susanto Rahardja | null | 1602.06489 | null | null |
Uniform Hypergraph Partitioning: Provable Tensor Methods and Sampling
Techniques | cs.LG stat.ML | In a series of recent works, we have generalised the consistency results in
the stochastic block model literature to the case of uniform and non-uniform
hypergraphs. The present paper continues the same line of study, where we focus
on partitioning weighted uniform hypergraphs---a problem often encountered in
computer vision. This work is motivated by two issues that arise when a
hypergraph partitioning approach is used to tackle computer vision problems:
(i) The uniform hypergraphs constructed for higher-order learning contain all
edges, but most have negligible weights. Thus, the adjacency tensor is nearly
sparse, and yet, not binary. (ii) A more serious concern is that standard
partitioning algorithms need to compute all edge weights, which is
computationally expensive for hypergraphs. This is usually resolved in practice
by merging the clustering algorithm with a tensor sampling strategy---an
approach that is yet to be analysed rigorously. We build on our earlier work on
partitioning dense unweighted uniform hypergraphs (Ghoshdastidar and Dukkipati,
ICML, 2015), and address the aforementioned issues by proposing provable and
efficient partitioning algorithms. Our analysis justifies the empirical success
of practical sampling techniques. We also complement our theoretical findings
by elaborate empirical comparison of various hypergraph partitioning schemes.
| Debarghya Ghoshdastidar, Ambedkar Dukkipati | null | 1602.06516 | null | null |
Multi-Task Learning with Labeled and Unlabeled Tasks | stat.ML cs.LG | In multi-task learning, a learner is given a collection of prediction tasks
and needs to solve all of them. In contrast to previous work, which required
that annotated training data is available for all tasks, we consider a new
setting, in which for some tasks, potentially most of them, only unlabeled
training data is provided. Consequently, to solve all tasks, information must
be transferred between tasks with labels and tasks without labels. Focusing on
an instance-based transfer method we analyze two variants of this setting: when
the set of labeled tasks is fixed, and when it can be actively selected by the
learner. We state and prove a generalization bound that covers both scenarios
and derive from it an algorithm for making the choice of labeled tasks (in the
active case) and for transferring information between the tasks in a principled
way. We also illustrate the effectiveness of the algorithm by experiments on
synthetic and real data.
| Anastasia Pentina and Christoph H. Lampert | null | 1602.06518 | null | null |
Machine learning meets network science: dimensionality reduction for
fast and efficient embedding of networks in the hyperbolic space | cond-mat.dis-nn cs.AI cs.LG | Complex network topologies and hyperbolic geometry seem specularly connected,
and one of the most fascinating and challenging problems of recent complex
network theory is to map a given network to its hyperbolic space. The
Popularity Similarity Optimization (PSO) model represents - at the moment - the
climax of this theory. It suggests that the trade-off between node popularity
and similarity is a mechanism to explain how complex network topologies emerge
- as discrete samples - from the continuous world of hyperbolic geometry. The
hyperbolic space seems appropriate to represent real complex networks. In fact,
it preserves many of their fundamental topological properties, and can be
exploited for real applications such as, among others, link prediction and
community detection. Here, we observe for the first time that a
topological-based machine learning class of algorithms - for nonlinear
unsupervised dimensionality reduction - can directly approximate the network's
node angular coordinates of the hyperbolic model into a two-dimensional space,
according to a similar topological organization that we named angular
coalescence. On the basis of this phenomenon, we propose a new class of
algorithms that offers fast and accurate coalescent embedding of networks in
the hyperbolic space even for graphs with thousands of nodes.
| Josephine Maria Thomas, Alessandro Muscoloni, Sara Ciucci, Ginestra
Bianconi and Carlo Vittorio Cannistraci | 10.1038/s41467-017-01825-5 | 1602.06522 | null | null |
Multi-task and Lifelong Learning of Kernels | stat.ML cs.LG | We consider a problem of learning kernels for use in SVM classification in
the multi-task and lifelong scenarios and provide generalization bounds on the
error of a large margin classifier. Our results show that, under mild
conditions on the family of kernels used for learning, solving several related
tasks simultaneously is beneficial over single task learning. In particular, as
the number of observed tasks grows, assuming that in the considered family of
kernels there exists one that yields low approximation error on all tasks, the
overhead associated with learning such a kernel vanishes and the complexity
converges to that of learning when this good kernel is given to the learner.
| Anastasia Pentina and Shai Ben-David | null | 1602.06531 | null | null |
Determining the best attributes for surveillance video keywords
generation | cs.LG cs.AI | Automatic video keyword generation is one of the key ingredients in reducing
the burden of security officers in analyzing surveillance videos. Keywords or
attributes are generally chosen manually based on expert knowledge of
surveillance. Most existing works primarily aim at either supervised learning
approaches relying on extensive manual labelling or hierarchical probabilistic
models that assume the features are extracted using the bag-of-words approach;
thus limiting the utilization of the other features. To address this, we turn
our attention to automatic attribute discovery approaches. However, it is not
clear which automatic discovery approach can discover the most meaningful
attributes. Furthermore, little research has been done on how to compare and
choose the best automatic attribute discovery methods. In this paper, we
propose a novel approach, based on the shared structure exhibited amongst
meaningful attributes, that enables us to compare between different automatic
attribute discovery approaches. We then validate our approach by comparing
various attribute discovery methods such as PiCoDeS on two attribute datasets.
The evaluation shows that our approach is able to select the automatic
discovery approach that discovers the most meaningful attributes. We then
employ the best discovery approach to generate keywords for videos recorded
from a surveillance system. This work shows it is possible to massively reduce
the amount of manual work in generating video keywords without limiting
ourselves to a particular video feature descriptor.
| Liangchen Liu and Arnold Wiliem and Shaokang Chen and Kun Zhao and
Brian C. Lovell | null | 1602.06539 | null | null |
Semi-Markov Switching Vector Autoregressive Model-based Anomaly
Detection in Aviation Systems | cs.LG stat.AP stat.ML | In this work we consider the problem of anomaly detection in heterogeneous,
multivariate, variable-length time series datasets. Our focus is on the
aviation safety domain, where data objects are flights and time series are
sensor readings and pilot switches. In this context the goal is to detect
anomalous flight segments, due to mechanical, environmental, or human factors
in order to identify operationally significant events, provide insights
into flight operations, and highlight otherwise unavailable potential safety
risks and precursors to accidents. For this purpose, we propose a framework
which represents each flight using a semi-Markov switching vector
autoregressive (SMS-VAR) model. Detection of anomalies is then based on
measuring dissimilarities between the model's prediction and data observation.
The framework is scalable, due to the inherent parallel nature of most
computations, and can be used to perform online anomaly detection. Extensive
experimental results on simulated and real datasets illustrate that the
framework can detect various types of anomalies along with the key parameters
involved.
| Igor Melnyk, Arindam Banerjee, Bryan Matthews, and Nikunj Oza | null | 1602.06550 | null | null |
Deep Learning in Finance | cs.LG | We explore the use of deep learning hierarchical models for problems in
financial prediction and classification. Financial prediction problems -- such
as those presented in designing and pricing securities, constructing
portfolios, and risk management -- often involve large data sets with complex
data interactions that currently are difficult or impossible to specify in a
full economic model. Applying deep learning methods to these problems can
produce more useful results than standard methods in finance. In particular,
deep learning can detect and exploit interactions in the data that are, at
least currently, invisible to any existing financial economic theory.
| J. B. Heaton, N. G. Polson, J. H. Witte | null | 1602.06561 | null | null |
Interactive Storytelling over Document Collections | cs.AI cs.LG stat.ML | Storytelling algorithms aim to 'connect the dots' between disparate documents
by linking starting and ending documents through a series of intermediate
documents. Existing storytelling algorithms are based on notions of coherence
and connectivity, and thus the primary way by which users can steer the story
construction is via design of suitable similarity functions. We present an
alternative approach to storytelling wherein the user can interactively and
iteratively provide 'must use' constraints to preferentially support the
construction of some stories over others. The three innovations in our approach
are distance measures based on (inferred) topic distributions, the use of
constraints to define sets of linear inequalities over paths, and the
introduction of slack and surplus variables to condition the topic distribution
to preferentially emphasize desired terms over others. We describe experimental
results to illustrate the effectiveness of our interactive storytelling
approach over multiple text datasets.
| Dipayan Maiti and Mohammad Raihanul Islam and Scotland Leman and Naren
Ramakrishnan | null | 1602.06566 | null | null |
2-Bit Random Projections, NonLinear Estimators, and Approximate Near
Neighbor Search | stat.ML cs.DS cs.LG | The method of random projections has become a standard tool for machine
learning, data mining, and search with massive data at Web scale. The effective
use of random projections requires efficient coding schemes for quantizing
(real-valued) projected data into integers. In this paper, we focus on a simple
2-bit coding scheme. In particular, we develop accurate nonlinear estimators of
data similarity based on the 2-bit strategy. This work will have important
practical applications. For example, in the task of near neighbor search, a
crucial step (often called re-ranking) is to compute or estimate data
similarities once a set of candidate data points have been identified by hash
table techniques. This re-ranking step can take advantage of the proposed
coding scheme and estimator.
As a related task, in this paper, we also study a simple uniform quantization
scheme for the purpose of building hash tables with projected data. Our
analysis shows that typically only a small number of bits are needed. For
example, when the target similarity level is high, 2 or 3 bits might be
sufficient. When the target similarity level is not so high, it is preferable
to use only 1 or 2 bits. Therefore, a 2-bit scheme appears to be overall a good
choice for the task of sublinear time approximate near neighbor search via hash
tables.
Combining these results, we conclude that 2-bit random projections should be
recommended for approximate near neighbor search and similarity estimation.
Extensive experimental results are provided.
| Ping Li, Michael Mitzenmacher, Anshumali Shrivastava | null | 1602.06577 | null | null |
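An illustrative sketch of Gaussian random projections followed by a uniform 2-bit quantizer; the cut points below are arbitrary placeholders, and the paper's nonlinear similarity estimators are not reproduced.

```python
import numpy as np

def two_bit_codes(X, k=64, width=1.0, seed=1):
    # Project the data with a Gaussian random matrix, then quantize each coordinate into 4 levels (2 bits).
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    R = rng.standard_normal((d, k)) / np.sqrt(k)   # random projection directions
    Z = X @ R                                      # real-valued projected data
    cuts = np.array([-width, 0.0, width])          # 3 uniform cut points -> codes in {0, 1, 2, 3}
    return np.digitize(Z, cuts)
```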
Recovering Structured Probability Matrices | cs.LG | We consider the problem of accurately recovering a matrix B of size M by M ,
which represents a probability distribution over M2 outcomes, given access to
an observed matrix of "counts" generated by taking independent samples from the
distribution B. How can structural properties of the underlying matrix B be
leveraged to yield computationally efficient and information theoretically
optimal reconstruction algorithms? When can accurate reconstruction be
accomplished in the sparse data regime? This basic problem lies at the core of
a number of questions that are currently being considered by different
communities, including building recommendation systems and collaborative
filtering in the sparse data regime, community detection in sparse random
graphs, learning structured models such as topic models or hidden Markov
models, and the efforts from the natural language processing community to
compute "word embeddings".
Our results apply to the setting where B has a low rank structure. For this
setting, we propose an efficient algorithm that accurately recovers the
underlying M by M matrix using Theta(M) samples. This result easily translates
to Theta(M) sample algorithms for learning topic models and learning hidden
Markov Models. These linear sample complexities are optimal, up to constant
factors, in an extremely strong sense: even testing basic properties of the
underlying matrix (such as whether it has rank 1 or 2) requires Omega(M)
samples. We provide an even stronger lower bound where distinguishing whether a
sequence of observations were drawn from the uniform distribution over M
observations versus being generated by an HMM with two hidden states requires
Omega(M) observations. This precludes sublinear-sample hypothesis tests for
basic properties, such as identity or uniformity, as well as sublinear sample
estimators for quantities such as the entropy rate of HMMs.
| Qingqing Huang, Sham M. Kakade, Weihao Kong, Gregory Valiant | null | 1602.06586 | null | null |
Clustering subgaussian mixtures by semidefinite programming | stat.ML cs.DS cs.IT cs.LG math.IT math.ST stat.TH | We introduce a model-free relax-and-round algorithm for k-means clustering
based on a semidefinite relaxation due to Peng and Wei. The algorithm
interprets the SDP output as a denoised version of the original data and then
rounds this output to a hard clustering. We provide a generic method for
proving performance guarantees for this algorithm, and we analyze the algorithm
in the context of subgaussian mixture models. We also study the fundamental
limits of estimating Gaussian centers by k-means clustering in order to compare
our approximation guarantee to the theoretically optimal k-means clustering
solution.
| Dustin G. Mixon, Soledad Villar, Rachel Ward | null | 1602.06612 | null | null |
Structured Learning of Binary Codes with Column Generation | cs.LG | Hashing methods aim to learn a set of hash functions which map the original
features to compact binary codes while preserving similarity in the Hamming
space. Hashing has proven a valuable tool for large-scale information
retrieval. We propose a column generation based binary code learning framework
for data-dependent hash function learning. Given a set of triplets that encode
the pairwise similarity comparison information, our column generation based
method learns hash functions that preserve the relative comparison relations
within the large-margin learning framework. Our method iteratively learns the
best hash functions during the column generation procedure. Existing hashing
methods optimize over simple objectives such as the reconstruction error or
graph Laplacian related loss functions, instead of the performance evaluation
criteria of interest---multivariate performance measures such as the AUC and
NDCG. Our column generation based method can be further generalized from the
triplet loss to a general structured learning based framework that allows one
to directly optimize multivariate performance measures. For optimizing general
ranking measures, the resulting optimization problem can involve exponentially
or infinitely many variables and constraints, which is more challenging than
standard structured output learning. We use a combination of column generation
and cutting-plane techniques to solve the optimization problem. To speed up the
training we further explore stage-wise training and propose to use a simplified
NDCG loss for efficient inference. We demonstrate the generality of our method
by applying it to ranking prediction and image retrieval, and show that it
outperforms a few state-of-the-art hashing methods.
| Guosheng Lin, Fayao Liu, Chunhua Shen, Jianxin Wu, Heng Tao Shen | null | 1602.06654 | null | null |
Recurrent Orthogonal Networks and Long-Memory Tasks | cs.NE cs.AI cs.LG stat.ML | Although RNNs have been shown to be powerful tools for processing sequential
data, finding architectures or optimization strategies that allow them to model
very long term dependencies is still an active area of research. In this work,
we carefully analyze two synthetic datasets originally outlined in (Hochreiter
and Schmidhuber, 1997) which are used to evaluate the ability of RNNs to store
information over many time steps. We explicitly construct RNN solutions to
these problems, and using these constructions, illuminate both the problems
themselves and the way in which RNNs store different types of information in
their hidden states. These constructions furthermore explain the success of
recent methods that specify unitary initializations or constraints on the
transition matrices.
| Mikael Henaff, Arthur Szlam, Yann LeCun | null | 1602.06662 | null | null |
An Effective and Efficient Approach for Clusterability Evaluation | cs.LG stat.ML | Clustering is an essential data mining tool that aims to discover inherent
cluster structure in data. As such, the study of clusterability, which
evaluates whether data possesses such structure, is an integral part of cluster
analysis. Yet, despite their central role in the theory and application of
clustering, current notions of clusterability fall short in two crucial aspects
that render them impractical; most are computationally infeasible and others
fail to classify the structure of real datasets.
In this paper, we propose a novel approach to clusterability evaluation that
is both computationally efficient and successfully captures the structure of
real data. Our method applies multimodality tests to the (one-dimensional) set
of pairwise distances based on the original, potentially high-dimensional data.
We present extensive analyses of our approach for both the Dip and Silverman
multimodality tests on real data as well as 17,000 simulations, demonstrating
the success of our approach as the first practical notion of clusterability.
| Margareta Ackerman, Andreas Adolfsson, and Naomi Brownstein | null | 1602.06687 | null | null |
Distributed Deep Learning Using Synchronous Stochastic Gradient Descent | cs.DC cs.LG | We design and implement a distributed multinode synchronous SGD algorithm,
without altering hyperparameters, compressing data, or altering algorithmic
behavior. We perform a detailed analysis of scaling, and identify optimal
design points for different networks. We demonstrate scaling of CNNs on 100s of
nodes, and present what we believe to be record training throughputs. A 512
minibatch VGG-A CNN training run is scaled 90X on 128 nodes. Also 256 minibatch
VGG-A and OverFeat-FAST networks are scaled 53X and 42X respectively on a 64
node cluster. We also demonstrate the generality of our approach via
best-in-class 6.5X scaling for a 7-layer DNN on 16 nodes. Thereafter we attempt
to democratize deep-learning by training on an Ethernet based AWS cluster and
show ~14X scaling on 16 nodes.
| Dipankar Das, Sasikanth Avancha, Dheevatsa Mudigere, Karthikeyan
Vaidynathan, Srinivas Sridharan, Dhiraj Kalamkar, Bharat Kaul, Pradeep Dubey | null | 1602.06709 | null | null |
Variational inference for Monte Carlo objectives | cs.LG stat.ML | Recent progress in deep latent variable models has largely been driven by the
development of flexible and scalable variational inference methods. Variational
training of this type involves maximizing a lower bound on the log-likelihood,
using samples from the variational posterior to compute the required gradients.
Recently, Burda et al. (2016) have derived a tighter lower bound using a
multi-sample importance sampling estimate of the likelihood and showed that
optimizing it yields models that use more of their capacity and achieve higher
likelihoods. This development showed the importance of such multi-sample
objectives and explained the success of several related approaches.
We extend the multi-sample approach to discrete latent variables and analyze
the difficulty encountered when estimating the gradients involved. We then
develop the first unbiased gradient estimator designed for importance-sampled
objectives and evaluate it on training generative and structured output
prediction models. The resulting estimator, which is based on low-variance
per-sample learning signals, is both simpler and more effective than the NVIL
estimator proposed for the single-sample variational objective, and is
competitive with the currently used biased estimators.
| Andriy Mnih, Danilo J. Rezende | null | 1602.06725 | null | null |
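The multi-sample importance-weighted lower bound of Burda et al. (2016) referenced above, with K samples drawn from the variational posterior q:

$$\mathcal{L}_K(x) \;=\; \mathbb{E}_{h_1,\dots,h_K \sim q(h \mid x)}\left[\, \log \frac{1}{K} \sum_{k=1}^{K} \frac{p(x, h_k)}{q(h_k \mid x)} \,\right] \;\le\; \log p(x).$$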
Convexification of Learning from Constraints | cs.LG math.OC stat.ML | Regularized empirical risk minimization with constrained labels (in contrast
to fixed labels) is a remarkably general abstraction of learning. For common
loss and regularization functions, this optimization problem assumes the form
of a mixed integer program (MIP) whose objective function is non-convex. In
this form, the problem is resistant to standard optimization techniques. We
construct MIPs with the same solutions whose objective functions are convex.
Specifically, we characterize the tightest convex extension of the objective
function, given by the Legendre-Fenchel biconjugate. Computing values of this
tightest convex extension is NP-hard. However, by applying our characterization
to every function in an additive decomposition of the objective function, we
obtain a class of looser convex extensions that can be computed efficiently.
For some decompositions, common loss and regularization functions, we derive a
closed form.
| Iaroslav Shcherbatyi and Bjoern Andres | null | 1602.06746 | null | null |
Graph Regularized Low Rank Representation for Aerosol Optical Depth
Retrieval | cs.LG | In this paper, we propose a novel data-driven regression model for aerosol
optical depth (AOD) retrieval. First, we adopt a low rank representation (LRR)
model to learn a powerful representation of the spectral response. Then, graph
regularization is incorporated into the LRR model to capture the local
structure information and the nonlinear property of the remote-sensing data.
Since it is easy to acquire the rich satellite-retrieval results, we use them
as a baseline to construct the graph. Finally, the learned feature
representation is fed into a support vector machine (SVM) to retrieve AOD.
Experiments are conducted on two widely used data sets acquired by different
sensors, and the experimental results show that the proposed method can achieve
superior performance compared to the physical models and other state-of-the-art
empirical models.
| Yubao Sun, Renlong Hang, Qingshan Liu, Fuping Zhu, Hucheng Pei | 10.1080/01431161.2016.1249302 | 1602.06818 | null | null |
Understanding Visual Concepts with Continuation Learning | cs.LG | We introduce a neural network architecture and a learning algorithm to
produce factorized symbolic representations. We propose to learn these concepts
by observing consecutive frames, letting all the components of the hidden
representation except a small discrete set (gating units) be predicted from the
previous frame, and letting the factors of variation in the next frame be
represented entirely by these discrete gated units (corresponding to symbolic
representations). We demonstrate the efficacy of our approach on datasets of
faces undergoing 3D transformations and Atari 2600 games.
| William F. Whitney, Michael Chang, Tejas Kulkarni, Joshua B. Tenenbaum | null | 1602.06822 | null | null |
Higher-Order Low-Rank Regression | cs.LG | This paper proposes an efficient algorithm (HOLRR) to handle regression tasks
where the outputs have a tensor structure. We formulate the regression problem
as the minimization of a least-squares criterion under a multilinear rank
constraint, a difficult non-convex problem. HOLRR efficiently computes an
approximate solution of this problem, with solid theoretical guarantees. A
kernel extension is also presented. Experiments on synthetic and real data show
that HOLRR outperforms multivariate and multilinear regression methods and is
considerably faster than existing tensor methods.
| Guillaume Rabusseau and Hachem Kadri | null | 1602.06863 | null | null |
Principal Component Projection Without Principal Component Analysis | cs.DS cs.LG stat.ML | We show how to efficiently project a vector onto the top principal components
of a matrix, without explicitly computing these components. Specifically, we
introduce an iterative algorithm that provably computes the projection using
few calls to any black-box routine for ridge regression.
By avoiding explicit principal component analysis (PCA), our algorithm is the
first with no runtime dependence on the number of top principal components. We
show that it can be used to give a fast iterative method for the popular
principal component regression problem, giving the first major runtime
improvement over the naive method of combining PCA with regression.
To achieve our results, we first observe that ridge regression can be used to
obtain a "smooth projection" onto the top principal components. We then sharpen
this approximation to true projection using a low-degree polynomial
approximation to the matrix step function. Step function approximation is a
topic of long-term interest in scientific computing. We extend prior theory by
constructing polynomials with simple iterative structure and rigorously
analyzing their behavior under limited precision.
| Roy Frostig, Cameron Musco, Christopher Musco, Aaron Sidford | null | 1602.06872 | null | null |
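A minimal numpy sketch of the "smooth projection" observation above: the ridge operator B(B + lam*I)^{-1} with B = A^T A roughly preserves eigen-directions whose eigenvalue exceeds lam and damps the rest. The explicit solve is for illustration only; the paper's point is to call a black-box ridge routine and to sharpen this smooth step with a polynomial approximation, which is not shown.

```python
import numpy as np

def smooth_project(A, x, lam):
    # Apply B (B + lam I)^{-1} to x; directions with eigenvalue >> lam pass through
    # nearly unchanged, directions with eigenvalue << lam are strongly attenuated.
    B = A.T @ A
    return B @ np.linalg.solve(B + lam * np.eye(B.shape[0]), x)
```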
Clustering with a Reject Option: Interactive Clustering as Bayesian
Prior Elicitation | stat.ML cs.LG | A good clustering can help a data analyst to explore and understand a data
set, but what constitutes a good clustering may depend on domain-specific and
application-specific criteria. These criteria can be difficult to formalize,
even when it is easy for an analyst to know a good clustering when she sees
one. We present a new approach to interactive clustering for data exploration,
called \ciif, based on a particularly simple feedback mechanism, in which an
analyst can choose to reject individual clusters and request new ones. The new
clusters should be different from previously rejected clusters while still
fitting the data well. We formalize this interaction in a novel Bayesian prior
elicitation framework. In each iteration, the prior is adapted to account for
all the previous feedback, and a new clustering is then produced from the
posterior distribution. To achieve the computational efficiency necessary for
an interactive setting, we propose an incremental optimization method over data
minibatches using Lagrangian relaxation. Experiments demonstrate that \ciif can
produce accurate and diverse clusterings.
| Akash Srivastava, James Zou and Charles Sutton | null | 1602.06886 | null | null |
Sparse Linear Regression via Generalized Orthogonal Least-Squares | stat.ML cs.IT cs.LG math.IT | Sparse linear regression, which entails finding a sparse solution to an
underdetermined system of linear equations, can formally be expressed as an
$l_0$-constrained least-squares problem. The Orthogonal Least-Squares (OLS)
algorithm sequentially selects the features (i.e., columns of the coefficient
matrix) to greedily find an approximate sparse solution. In this paper, a
generalization of Orthogonal Least-Squares which relies on a recursive relation
between the components of the optimal solution to select L features at each
step and solve the resulting overdetermined system of equations is proposed.
Simulation results demonstrate that the generalized OLS algorithm is
computationally efficient and achieves performance superior to that of existing
greedy algorithms broadly used in the literature.
| Abolfazl Hashemi, Haris Vikalo | null | 1602.06916 | null | null |
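A hedged sketch of the standard greedy Orthogonal Least-Squares loop that the abstract generalizes (one column selected per step, chosen by the residual after re-fitting); the proposed selection of L columns per step via the recursive relation is not shown.

```python
import numpy as np

def ols(A, y, k):
    # Greedy OLS: at each step add the column that minimizes the least-squares residual.
    d = A.shape[1]
    support, x = [], None
    for _ in range(k):
        scores = []
        for j in range(d):
            if j in support:
                scores.append(np.inf)        # never re-select a chosen column
                continue
            x_j, *_ = np.linalg.lstsq(A[:, support + [j]], y, rcond=None)
            scores.append(np.linalg.norm(y - A[:, support + [j]] @ x_j))
        support.append(int(np.argmin(scores)))
        x, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)   # re-fit on the chosen support
    return support, x
```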
Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample
Guarantees for Oja's Algorithm | cs.LG cs.DS cs.NE stat.ML | This work provides improved guarantees for streaming principle component
analysis (PCA). Given $A_1, \ldots, A_n\in \mathbb{R}^{d\times d}$ sampled
independently from distributions satisfying $\mathbb{E}[A_i] = \Sigma$ for
$\Sigma \succeq \mathbf{0}$, this work provides an $O(d)$-space linear-time
single-pass streaming algorithm for estimating the top eigenvector of $\Sigma$.
The algorithm nearly matches (and in certain cases improves upon) the accuracy
obtained by the standard batch method that computes top eigenvector of the
empirical covariance $\frac{1}{n} \sum_{i \in [n]} A_i$ as analyzed by the
matrix Bernstein inequality. Moreover, to achieve constant accuracy, our
algorithm improves upon the best previous known sample complexities of
streaming algorithms by either a multiplicative factor of $O(d)$ or
$1/\mathrm{gap}$ where $\mathrm{gap}$ is the relative distance between the top
two eigenvalues of $\Sigma$.
These results are achieved through a novel analysis of the classic Oja's
algorithm, one of the oldest and most popular algorithms for streaming PCA. In
particular, this work shows that simply picking a random initial point $w_0$
and applying the update rule $w_{i + 1} = w_i + \eta_i A_i w_i$ suffices to
accurately estimate the top eigenvector, with a suitable choice of $\eta_i$. We
believe our result sheds light on how to efficiently perform streaming PCA both
in theory and in practice and we hope that our analysis may serve as the basis
for analyzing many variants and extensions of streaming PCA.
| Prateek Jain and Chi Jin and Sham M. Kakade and Praneeth Netrapalli
and Aaron Sidford | null | 1602.06929 | null | null |
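A sketch of the classic Oja update analyzed above, $w_{i+1} = w_i + \eta_i A_i w_i$ followed by normalization; the $1/i$ step-size schedule is a common default, not necessarily the one from the paper.

```python
import numpy as np

def oja_top_eigvec(sample_stream, d, eta0=1.0, seed=0):
    # Single-pass streaming estimate of the top eigenvector; only w (O(d) numbers) is stored.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for i, A_i in enumerate(sample_stream, start=1):
        w = w + (eta0 / i) * (A_i @ w)     # Oja update with step size eta_i = eta0 / i
        w /= np.linalg.norm(w)
    return w
```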
Blind score normalization method for PLDA based speaker recognition | cs.CL cs.LG cs.SD | Probabilistic Linear Discriminant Analysis (PLDA) has become state-of-the-art
method for modeling the $i$-vector space in the speaker recognition task. However,
performance degradation is observed if the enrollment data size differs from one
speaker to another. This paper presents a solution to this problem by
introducing a new PLDA scoring normalization technique. Normalization parameters
are derived in a blind way, so that, unlike traditional \textit{ZT-norm}, no
extra development data is required. Moreover, proposed method has shown to be
optimal in terms of detection cost function. The experiments conducted on NIST
SRE 2014 database demonstrate an improved accuracy in a mixed enrollment number
condition.
| Danila Doroshin, Nikolay Lubimov, Marina Nastasenko and Mikhail Kotov | null | 1602.06967 | null | null |
Recovering the number of clusters in data sets with noise features using
feature rescaling factors | stat.ML cs.LG | In this paper we introduce three methods for re-scaling data sets aiming at
improving the likelihood that clustering validity indexes return the true
number of spherical Gaussian clusters in the presence of additional noise features. Our
method obtains feature re-scaling factors taking into account the structure of
a given data set and the intuitive idea that different features may have
different degrees of relevance at different clusters.
We experiment with the Silhouette (using squared Euclidean, Manhattan, and
the p$^{th}$ power of the Minkowski distance), Dunn's, Calinski-Harabasz and
Hartigan indexes on data sets with spherical Gaussian clusters with and without
noise features. We conclude that our methods indeed increase the chances of
estimating the true number of clusters in a data set.
| Renato Cordeiro de Amorim and Christian Hennig | 10.1016/j.ins.2015.06.039 | 1602.06989 | null | null |
A survey of sparse representation: algorithms and applications | cs.CV cs.LG | Sparse representation has attracted much attention from researchers in fields
of signal processing, image processing, computer vision and pattern
recognition. Sparse representation also has a good reputation in both
theoretical research and practical applications. Many different algorithms have
been proposed for sparse representation. The main purpose of this article is to
provide a comprehensive study and an updated review on sparse representation
and to supply a guidance for researchers. The taxonomy of sparse representation
methods can be studied from various viewpoints. For example, in terms of
different norm minimizations used in sparsity constraints, the methods can be
roughly categorized into five groups: sparse representation with $l_0$-norm
minimization, sparse representation with $l_p$-norm (0$<$p$<$1) minimization,
sparse representation with $l_1$-norm minimization and sparse representation
with $l_{2,1}$-norm minimization. In this paper, a comprehensive overview of
sparse representation is provided. The available sparse representation
algorithms can also be empirically categorized into four groups: greedy
strategy approximation, constrained optimization, proximity algorithm-based
optimization, and homotopy algorithm-based sparse representation. The
rationales of different algorithms in each category are analyzed and a wide
range of sparse representation applications are summarized, which could
sufficiently reveal the potential nature of the sparse representation theory.
Specifically, an experimentally comparative study of these sparse
representation algorithms was presented. The Matlab code used in this paper can
be available at: http://www.yongxu.org/lunwen.html.
| Zheng Zhang, Yong Xu, Jian Yang, Xuelong Li, David Zhang | 10.1109/ACCESS.2015.2430359 | 1602.07017 | null | null |
Latent Skill Embedding for Personalized Lesson Sequence Recommendation | cs.LG cs.AI cs.CY | Students in online courses generate large amounts of data that can be used to
personalize the learning process and improve quality of education. In this
paper, we present the Latent Skill Embedding (LSE), a probabilistic model of
students and educational content that can be used to recommend personalized
sequences of lessons with the goal of helping students prepare for specific
assessments. Akin to collaborative filtering for recommender systems, the
algorithm does not require students or content to be described by features, but
it learns a representation using access traces. We formulate this problem as a
regularized maximum-likelihood embedding of students, lessons, and assessments
from historical student-content interactions. An empirical evaluation on
large-scale data from Knewton, an adaptive learning technology company, shows
that this approach predicts assessment results competitively with benchmark
models and is able to discriminate between lesson sequences that lead to
mastery and failure.
| Siddharth Reddy, Igor Labutov, Thorsten Joachims | null | 1602.07029 | null | null |
Mobile Big Data Analytics Using Deep Learning and Apache Spark | cs.DC cs.LG cs.NE | The proliferation of mobile devices, such as smartphones and Internet of
Things (IoT) gadgets, results in the recent mobile big data (MBD) era.
Collecting MBD is unprofitable unless suitable analytics and learning methods
are utilized for extracting meaningful information and hidden patterns from
data. This article presents an overview and brief tutorial of deep learning in
MBD analytics and discusses a scalable learning framework over Apache Spark.
Specifically, a distributed deep learning is executed as an iterative MapReduce
computing on many Spark workers. Each Spark worker learns a partial deep model
on a partition of the overall MBD, and a master deep model is then built by
averaging the parameters of all partial models. This Spark-based framework
speeds up the learning of deep models consisting of many hidden layers and
millions of parameters. We use a context-aware activity recognition application
with a real-world dataset containing millions of samples to validate our
framework and assess its speedup effectiveness.
| Mohammad Abu Alsheikh, Dusit Niyato, Shaowei Lin, Hwee-Pink Tan, and
Zhu Han | 10.1109/MNET.2016.7474340 | 1602.07031 | null | null |
Auditing Black-box Models for Indirect Influence | stat.ML cs.LG | Data-trained predictive models see widespread use, but for the most part they
are used as black boxes which output a prediction or score. It is therefore
hard to acquire a deeper understanding of model behavior, and in particular how
different features influence the model prediction. This is important when
interpreting the behavior of complex models, or asserting that certain
problematic attributes (like race or gender) are not unduly influencing
decisions.
In this paper, we present a technique for auditing black-box models, which
lets us study the extent to which existing models take advantage of particular
features in the dataset, without knowing how the models work. Our work focuses
on the problem of indirect influence: how some features might indirectly
influence outcomes via other, related features. As a result, we can find
attribute influences even in cases where, upon further direct examination of
the model, the attribute is not referred to by the model at all.
Our approach does not require the black-box model to be retrained. This is
important if (for example) the model is only accessible via an API, and
contrasts our work with other methods that investigate feature influence like
feature selection. We present experimental evidence for the effectiveness of
our procedure using a variety of publicly available datasets and models. We
also validate our procedure using techniques from interpretable learning and
feature selection, as well as against other black-box auditing procedures.
| Philip Adler, Casey Falk, Sorelle A. Friedler, Gabriel Rybeck, Carlos
Scheidegger, Brandon Smith and Suresh Venkatasubramanian | null | 1602.07043 | null | null |
An Improved Gap-Dependency Analysis of the Noisy Power Method | stat.ML cs.LG math.NA | We consider the noisy power method algorithm, which has wide applications in
machine learning and statistics, especially those related to principal
component analysis (PCA) under resource (communication, memory or privacy)
constraints. Existing analysis of the noisy power method shows an
unsatisfactory dependency over the "consecutive" spectral gap
$(\sigma_k-\sigma_{k+1})$ of an input data matrix, which could be very small
and hence limits the algorithm's applicability. In this paper, we present a new
analysis of the noisy power method that achieves improved gap dependency for
both sample complexity and noise tolerance bounds. More specifically, we
improve the dependency over $(\sigma_k-\sigma_{k+1})$ to dependency over
$(\sigma_k-\sigma_{q+1})$, where $q$ is an intermediate algorithm parameter and
could be much larger than the target rank $k$. Our proofs are built upon a
novel characterization of proximity between two subspaces that differs from the
canonical angle characterizations analyzed in previous works. Finally, we apply
our improved bounds to distributed private PCA and memory-efficient streaming
PCA and obtain bounds that are superior to existing results in the literature.
| Maria Florina Balcan, Simon S. Du, Yining Wang, Adams Wei Yu | null | 1602.07046 | null | null |
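As a rough illustration of the algorithm being analyzed, the sketch below runs a noisy power iteration on a synthetic matrix. The Gaussian perturbation G is only a stand-in for whatever noise the resource constraint (communication, memory, or privacy) introduces, and the parameter names are ours, not the paper's.

```python
import numpy as np

def noisy_power_method(A, q, iters=50, noise_scale=1e-3, seed=0):
    """Noisy power iteration: X <- orth(A @ X + G) with Gaussian noise G."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    X, _ = np.linalg.qr(rng.normal(size=(n, q)))
    for _ in range(iters):
        G = noise_scale * rng.normal(size=(A.shape[0], q))   # fresh noise per iteration
        Y = A @ X + G
        X, _ = np.linalg.qr(Y)                               # re-orthonormalize
    return X

rng = np.random.default_rng(1)
B = rng.normal(size=(100, 100))
A = B @ B.T                           # symmetric PSD test matrix
X = noisy_power_method(A, q=10)       # q may exceed the target rank k
print("captured spectral mass:", np.trace(X.T @ A @ X) / np.trace(A))
```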
A Streaming Algorithm for Crowdsourced Data Classification | stat.ML cs.LG | We propose a streaming algorithm for the binary classification of data based
on crowdsourcing. The algorithm learns the competence of each labeller by
comparing her labels to those of other labellers on the same tasks and uses
this information to minimize the prediction error rate on each task. We provide
performance guarantees of our algorithm for a fixed population of independent
labellers. In particular, we show that our algorithm is optimal in the sense
that the cumulative regret compared to the optimal decision with known labeller
error probabilities is finite, independently of the number of tasks to label.
The complexity of the algorithm is linear in the number of labellers and the
number of tasks, up to some logarithmic factors. Numerical experiments
illustrate the performance of our algorithm compared to existing algorithms,
including simple majority voting and expectation-maximization algorithms, on
both synthetic and real datasets.
| Thomas Bonald and Richard Combes | null | 1602.07107 | null | null |
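The following sketch is not the authors' streaming algorithm, but it illustrates the underlying idea of weighting labellers by estimated competence: labels are aggregated with log-odds weights derived from each labeller's estimated error rate. The agreement-based error estimate and all names below are our own simplifications.

```python
import numpy as np

def estimate_error_rates(labels):
    """Crude competence estimate: each labeller's disagreement rate with the
    majority vote (labels: tasks x labellers, entries in {0, 1})."""
    majority = (labels.mean(axis=1) >= 0.5).astype(int)
    return np.clip((labels != majority[:, None]).mean(axis=0), 1e-3, 1 - 1e-3)

def weighted_vote(labels, err):
    """Aggregate with log-odds weights log((1 - e) / e) per labeller."""
    w = np.log((1 - err) / err)
    score = (2 * labels - 1) @ w          # map {0,1} -> {-1,+1} and weight
    return (score > 0).astype(int)

rng = np.random.default_rng(2)
truth = rng.integers(0, 2, size=200)                       # 200 tasks
err_true = np.array([0.1, 0.2, 0.3, 0.45, 0.05])           # 5 labellers
flips = rng.random((200, 5)) < err_true
labels = np.where(flips, 1 - truth[:, None], truth[:, None])

err_hat = estimate_error_rates(labels)
pred = weighted_vote(labels, err_hat)
print("majority-vote accuracy :", ((labels.mean(axis=1) >= 0.5) == truth).mean())
print("weighted-vote accuracy :", (pred == truth).mean())
```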
Variational Inference for On-line Anomaly Detection in High-Dimensional
Time Series | stat.ML cs.LG | Approximate variational inference has shown to be a powerful tool for
modeling unknown complex probability distributions. Recent advances in the
field allow us to learn probabilistic models of sequences that actively exploit
spatial and temporal structure. We apply a Stochastic Recurrent Network (STORN)
to learn robot time series data. Our evaluation demonstrates that we can
robustly detect anomalies both off- and on-line.
| Maximilian Soelch, Justin Bayer, Marvin Ludersdorfer, Patrick van der
Smagt | null | 1602.07109 | null | null |
Submodular Learning and Covering with Response-Dependent Costs | cs.LG stat.ML | We consider interactive learning and covering problems, in a setting where
actions may incur different costs, depending on the response to the action. We
propose a natural greedy algorithm for response-dependent costs. We bound the
approximation factor of this greedy algorithm in active learning settings as
well as in the general setting. We show that a different property of the cost
function controls the approximation factor in each of these scenarios. We
further show that in both settings, the approximation factor of this greedy
algorithm is near-optimal among all greedy algorithms. Experiments demonstrate
the advantages of the proposed algorithm in the response-dependent cost
setting.
| Sivan Sabato | null | 1602.07120 | null | null |
Explore First, Exploit Next: The True Shape of Regret in Bandit Problems | math.ST cs.LG stat.TH | We revisit lower bounds on the regret in the case of multi-armed bandit
problems. We obtain non-asymptotic, distribution-dependent bounds and provide
straightforward proofs based only on well-known properties of Kullback-Leibler
divergences. These bounds show in particular that in an initial phase the
regret grows almost linearly, and that the well-known logarithmic growth of the
regret only holds in a final phase. The proof techniques come to the essence of
the information-theoretic arguments used and they are deprived of all
unnecessary complications.
| Aur\'elien Garivier (IMT), Pierre M\'enard (IMT), Gilles Stoltz
(GREGH) | null | 1602.07182 | null | null |
Lens depth function and k-relative neighborhood graph: versatile tools
for ordinal data analysis | stat.ML cs.DS cs.LG | In recent years it has become popular to study machine learning problems in a
setting of ordinal distance information rather than numerical distance
measurements. By ordinal distance information we refer to binary answers to
distance comparisons such as $d(A,B)<d(C,D)$. For many problems in machine
learning and statistics it is unclear how to solve them in such a scenario. Up
to now, the main approach is to explicitly construct an ordinal embedding of
the data points in the Euclidean space, an approach that has a number of
drawbacks. In this paper, we propose algorithms for the problems of medoid
estimation, outlier identification, classification, and clustering when given
only ordinal data. They are based on estimating the lens depth function and the
$k$-relative neighborhood graph on a data set. Our algorithms are simple, are
much faster than an ordinal embedding approach and avoid some of its drawbacks,
and can easily be parallelized.
| Matth\"aus Kleindessner and Ulrike von Luxburg | null | 1602.07194 | null | null |
A Multivariate Biomarker for Parkinson's Disease | cs.LG | In this study, we executed a genomic analysis with the objective of selecting
a set of genes (possibly small) that would help in the detection and
classification of samples from patients affected by Parkinson's Disease. We
performed a complete data analysis and during the exploratory phase, we
selected a list of differentially expressed genes. Despite their association
with the diseased state, we could not use them as a biomarker tool. Therefore,
our research was extended to include a multivariate analysis approach resulting
in the identification and selection of a group of 20 genes that showed a clear
potential in detecting and correctly classifying Parkinson's Disease samples even in
the presence of other neurodegenerative disorders.
| Giancarlo Crocetti, Michael Coakley, Phil Dressner, Wanda Kellum,
Tamba Lamin | null | 1602.07264 | null | null |
Search Improves Label for Active Learning | cs.LG stat.ML | We investigate active learning with access to two distinct oracles: Label
(which is standard) and Search (which is not). The Search oracle models the
situation where a human searches a database to seed or counterexample an
existing solution. Search is stronger than Label while being natural to
implement in many situations. We show that an algorithm using both oracles can
provide exponentially large problem-dependent improvements over Label alone.
| Alina Beygelzimer, Daniel Hsu, John Langford, Chicheng Zhang | null | 1602.07265 | null | null |
A Statistical Model for Stroke Outcome Prediction and Treatment Planning | stat.AP cs.LG | Stroke is a major cause of mortality and long-term disability in the world.
Predictive outcome models in stroke are valuable for personalized treatment,
rehabilitation planning and in controlled clinical trials. In this paper we
design a new model to predict outcome in the short-term, the putative
therapeutic window for several treatments. Our regression-based model has a
parametric form that is designed to address many challenges common in medical
datasets like highly correlated variables and class imbalance. Empirically our
model outperforms the best-known previous models in predicting short-term
outcomes and in inferring the most effective treatments that improve outcome.
| Abhishek Sengupta, Vaibhav Rajan, Sakyajit Bhattacharya, G R K Sarma | null | 1602.07280 | null | null |
Stuck in a What? Adventures in Weight Space | cs.LG | Deep learning researchers commonly suggest that converged models are stuck in
local minima. More recently, some researchers observed that under reasonable
assumptions, the vast majority of critical points are saddle points, not true
minima. Both descriptions suggest that weights converge around a point in
weight space, be it a local optimum or merely a critical point. However, it's
possible that neither interpretation is accurate. As neural networks are
typically over-complete, it's easy to show the existence of vast continuous
regions through weight space with equal loss. In this paper, we build on recent
work empirically characterizing the error surfaces of neural networks. We
analyze training paths through weight space, presenting evidence that apparent
convergence of loss does not correspond to weights arriving at critical points,
but instead to large movements through flat regions of weight space. While it's
trivial to show that neural network error surfaces are globally non-convex, we
show that error surfaces are also locally non-convex, even after breaking
symmetry with a random initialization and also after partial training.
| Zachary C. Lipton | null | 1602.07320 | null | null |
Sparse Estimation of Multivariate Poisson Log-Normal Models from Count
Data | stat.ME cs.LG | Modeling data with multivariate count responses is a challenging problem due
to the discrete nature of the responses. Existing methods for univariate count
responses cannot be easily extended to the multivariate case since the
dependency among multiple responses needs to be properly accommodated. In this
paper, we propose a multivariate Poisson log-normal regression model for
multivariate data with count responses. By simultaneously estimating the
regression coefficients and inverse covariance matrix over the latent variables
with an efficient Monte Carlo EM algorithm, the proposed regression model takes
advantage of associations among multiple count responses to improve the model
prediction performance. Simulation studies and applications to real world data
are conducted to systematically evaluate the performance of the proposed method
in comparison with conventional methods.
| Hao Wu, Xinwei Deng and Naren Ramakrishnan | null | 1602.07337 | null | null |
On Study of the Binarized Deep Neural Network for Image Classification | cs.NE cs.CV cs.LG | Recently, the deep neural network (derived from the artificial neural
network) has attracted many researchers' attention with its outstanding
performance. However, since this network requires high-performance GPUs and
large storage, it is very hard to use on individual devices. In order to
improve the deep neural network, many attempts have been made to refine the
network structure or training strategy. Unlike those attempts, in this paper, we
focused on the basic propagation function of the artificial neural network and
proposed the binarized deep neural network. This network is a pure binary
system, in which all the values and calculations are binarized. As a result,
our network can save a lot of computational resource and storage. Therefore, it
is possible to use it on various devices. Moreover, the experimental results
demonstrated the feasibility of the proposed network.
| Song Wang, Dongchun Ren, Li Chen, Wei Fan, Jun Sun, Satoshi Naoi | null | 1602.07373 | null | null |
Automatic Moth Detection from Trap Images for Pest Management | cs.CV cs.LG cs.NE | Monitoring the number of insect pests is a crucial component in
pheromone-based pest management systems. In this paper, we propose an automatic
detection pipeline based on deep learning for identifying and counting pests in
images taken inside field traps. Applied to a commercial codling moth dataset,
our method shows promising performance both qualitatively and quantitatively.
Compared to previous attempts at pest detection, our approach uses no
pest-specific engineering which enables it to adapt to other species and
environments with minimal human effort. It is amenable to implementation on
parallel hardware and therefore capable of deployment in settings where
real-time performance is required.
| Weiguang Ding, Graham Taylor | null | 1602.07383 | null | null |
Discrete Distribution Estimation under Local Privacy | stat.ML cs.LG | The collection and analysis of user data drives improvements in the app and
web ecosystems, but comes with risks to privacy. This paper examines discrete
distribution estimation under local privacy, a setting wherein service
providers can learn the distribution of a categorical statistic of interest
without collecting the underlying data. We present new mechanisms, including
hashed K-ary Randomized Response (KRR), that empirically meet or exceed the
utility of existing mechanisms at all privacy levels. New theoretical results
demonstrate the order-optimality of KRR and the existing RAPPOR mechanism at
different privacy regimes.
| Peter Kairouz and Keith Bonawitz and Daniel Ramage | null | 1602.07387 | null | null |
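A minimal sketch of plain (unhashed) k-ary randomized response, the building block behind the KRR mechanism named above, is shown below; the hashing step and the paper's exact estimator are omitted, and the debiasing formula is the standard one for plain k-RR.

```python
import numpy as np

def k_rr_respond(v, k, eps, rng):
    """k-ary randomized response: report the true category with probability
    e^eps / (e^eps + k - 1), otherwise a uniformly random other category."""
    p_true = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.random() < p_true:
        return v
    other = rng.integers(0, k - 1)
    return other if other < v else other + 1   # uniform over the k-1 other values

def estimate_distribution(reports, k, eps):
    """Debias the empirical frequencies of the noisy reports."""
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    q = 1.0 / (np.exp(eps) + k - 1)
    freq = np.bincount(reports, minlength=k) / len(reports)
    return (freq - q) / (p - q)

rng = np.random.default_rng(3)
k, eps = 5, 1.0
true_dist = np.array([0.4, 0.3, 0.15, 0.1, 0.05])
data = rng.choice(k, size=50_000, p=true_dist)
reports = np.array([k_rr_respond(v, k, eps, rng) for v in data])
print("estimated distribution:", np.round(estimate_distribution(reports, k, eps), 3))
```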
The Myopia of Crowds: A Study of Collective Evaluation on Stack Exchange | cs.HC cs.CY cs.LG physics.soc-ph | Crowds can often make better decisions than individuals or small groups of
experts by leveraging their ability to aggregate diverse information. Question
answering sites, such as Stack Exchange, rely on the "wisdom of crowds" effect
to identify the best answers to questions asked by users. We analyze data from
250 communities on the Stack Exchange network to pinpoint factors affecting
which answers are chosen as the best answers. Our results suggest that, rather
than evaluate all available answers to a question, users rely on simple
cognitive heuristics to choose an answer to vote for or accept. These cognitive
heuristics are linked to an answer's salience, such as the order in which it is
listed and how much screen space it occupies. While askers appear to depend
more on heuristics, compared to voting users, when choosing an answer to accept
as the most helpful one, voters use acceptance itself as a heuristic: they are
more likely to choose the answer after it is accepted than before that very
same answer was accepted. These heuristics become more important in explaining
and predicting behavior as the number of available answers increases. Our
findings suggest that crowd judgments may become less reliable as the number of
answers grows.
| Keith Burghardt, Emanuel F. Alsina, Michelle Girvan, William Rand, and
Kristina Lerman | 10.1371/journal.pone.0173610 | 1602.07388 | null | null |
Domain Specific Author Attribution Based on Feedforward Neural Network
Language Models | cs.CL cs.LG cs.NE | Authorship attribution refers to the task of automatically determining the
author based on a given sample of text. It is a problem with a long history and
has a wide range of applications. Building author profiles using language models
is one of the most successful methods to automate this task. New language
modeling methods based on neural networks alleviate the curse of dimensionality
and usually outperform conventional N-gram methods. However, there has not
been much research applying them to authorship attribution. In this paper, we
present a novel setup of a Neural Network Language Model (NNLM) and apply it to
a database of text samples from different authors. We investigate how the NNLM
performs on a task with moderate author set size and relatively limited
training and test data, and how the topics of the text samples affect the
accuracy. NNLM achieves nearly 2.5% reduction in perplexity, a measure of how
well a trained language model fits the test data. Given 5 random test
sentences, it also increases the author classification accuracy by 3.43% on
average, compared with the N-gram methods using SRILM tools. An open source
implementation of our methodology is freely available at
https://github.com/zge/authorship-attribution/.
| Zhenhao Ge and Yufang Sun | null | 1602.07393 | null | null |
Ensuring Rapid Mixing and Low Bias for Asynchronous Gibbs Sampling | cs.LG | Gibbs sampling is a Markov chain Monte Carlo technique commonly used for
estimating marginal distributions. To speed up Gibbs sampling, there has
recently been interest in parallelizing it by executing asynchronously. While
empirical results suggest that many models can be efficiently sampled
asynchronously, traditional Markov chain analysis does not apply to the
asynchronous case, and thus asynchronous Gibbs sampling is poorly understood.
In this paper, we derive a better understanding of the two main challenges of
asynchronous Gibbs: bias and mixing time. We show experimentally that our
theoretical results match practical outcomes.
| Christopher De Sa, Kunle Olukotun, and Christopher R\'e | null | 1602.07415 | null | null |
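To ground the discussion, here is a plain sequential Gibbs sampler for a tiny pairwise binary model; the asynchronous variant studied in the paper would run the per-variable update below on several workers concurrently, each possibly reading slightly stale neighbour values. The model and all parameter choices are our own illustrative setup, not the paper's experiments.

```python
import numpy as np

def gibbs_ising(J, h, n_samples=5000, burn_in=500, seed=4):
    """Sequential Gibbs sampling for a binary (+/-1) pairwise model with
    symmetric zero-diagonal couplings J and fields h."""
    rng = np.random.default_rng(seed)
    n = len(h)
    x = rng.choice([-1, 1], size=n)
    samples = []
    for t in range(burn_in + n_samples):
        for i in range(n):                      # one full sweep over the variables
            local = h[i] + J[i] @ x             # conditional depends only on neighbours
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * local))
            x[i] = 1 if rng.random() < p_plus else -1
        if t >= burn_in:
            samples.append(x.copy())
    return np.array(samples)

n = 6
J = 0.3 * (np.ones((n, n)) - np.eye(n))         # weak uniform couplings
h = np.zeros(n)
samples = gibbs_ising(J, h)
print("estimated marginals P(x_i = +1):", np.round((samples == 1).mean(axis=0), 2))
```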
Learning to Generate with Memory | cs.LG cs.CV | Memory units have been widely used to enrich the capabilities of deep
networks on capturing long-term dependencies in reasoning and prediction tasks,
but little investigation exists on deep generative models (DGMs) which are good
at inferring high-level invariant representations from unlabeled data. This
paper presents a deep generative model with a possibly large external memory
and an attention mechanism to capture the local detail information that is
often lost in the bottom-up abstraction process in representation learning. By
adopting a smooth attention model, the whole network is trained end-to-end by
optimizing a variational bound of data likelihood via auto-encoding variational
Bayesian methods, where an asymmetric recognition network is learnt jointly to
infer high-level invariant representations. The asymmetric architecture can
reduce the competition between bottom-up invariant feature extraction and
top-down generation of instance details. Our experiments on several datasets
demonstrate that memory can significantly boost the performance of DGMs and
even achieve state-of-the-art results on various tasks, including density
estimation, image generation, and missing value imputation.
| Chongxuan Li, Jun Zhu and Bo Zhang | null | 1602.07416 | null | null |
Feature ranking for multi-label classification using Markov Networks | cs.LG stat.ML | We propose a simple and efficient method for ranking features in multi-label
classification. The method produces a ranking of features showing their
relevance in predicting labels, which in turn allows to choose a final subset
of features. The procedure is based on Markov Networks and allows to model the
dependencies between labels and features in a direct way. In the first step we
build a simple network using only labels and then we test how much adding a
single feature affects the initial network. More specifically, in the first
step we use the Ising model whereas the second step is based on the score
statistic, which allows to test a significance of added features very quickly.
The proposed approach does not require transformation of label space, gives
interpretable results and allows for attractive visualization of dependency
structure. We give a theoretical justification of the procedure by discussing
some theoretical properties of the Ising model and the score statistic. We also
discuss feature ranking procedure based on fitting Ising model using $l_1$
regularized logistic regressions. Numerical experiments show that the proposed
methods outperform the conventional approaches on the considered artificial and
real datasets.
| Pawe{\l} Teisseyre | null | 1602.07464 | null | null |
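The l1-regularized variant mentioned at the end of this abstract can be sketched with standard tools. The toy below ranks features by how strongly they enter l1-penalized logistic regressions of each label on the remaining labels and the features; it follows the general neighbourhood-selection idea only, and does not reproduce the paper's score statistic. Data, dimensions, and the scoring rule are our assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n, p, m = 500, 10, 3                       # samples, features, labels
X = rng.normal(size=(n, p))
# By construction, the labels depend on features 0 and 1 only.
Y = np.column_stack([
    (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int),
    (X[:, 1] - X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int),
    (X[:, 1] + 0.5 * rng.normal(size=n) > 0).astype(int),
])

scores = np.zeros(p)
for j in range(m):
    others = np.delete(Y, j, axis=1)       # remaining labels enter the model too
    design = np.hstack([others, X])
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    clf.fit(design, Y[:, j])
    scores += np.abs(clf.coef_[0, others.shape[1]:])   # keep only the feature weights

print("feature ranking (most to least relevant):", np.argsort(-scores))
```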
Asymptotic consistency and order specification for logistic classifier
chains in multi-label learning | cs.LG stat.ML | Classifier chains are popular and effective method to tackle a multi-label
classification problem. The aim of this paper is to study the asymptotic
properties of the chain model in which the conditional probabilities are of the
logistic form. In particular we find conditions on the number of labels and the
distribution of feature vector under which the estimated mode of the joint
distribution of labels converges to the true mode. Best of our knowledge, this
important issue has not yet been studied in the context of multi-label
learning. We also investigate how the order of model building in a chain
influences the estimation of the joint distribution of labels. We establish the
link between the problem of incorrect ordering in the chain and incorrect model
specification. We propose a procedure of determining the optimal ordering of
labels in the chain, which is based on using measures of correct specification
and allows to find the ordering such that the consecutive logistic models are
best possibly specified. The other important question raised in this paper is
how accurately we can estimate the joint posterior probability when the
ordering of labels is wrong or the logistic models in the chain are incorrectly
specified. The numerical experiments illustrate the theoretical results.
| Pawe{\l} Teisseyre | null | 1602.07466 | null | null |
Active Learning from Positive and Unlabeled Data | cs.LG | During recent years, active learning has evolved into a popular paradigm for
utilizing user feedback to improve the accuracy of learning algorithms. Active
learning works by selecting the most informative sample among unlabeled data
and querying the label of that point from user. Many different methods such as
uncertainty sampling and minimum risk sampling have been utilized to select the
most informative sample in active learning. Although many active learning
algorithms have been proposed so far, most of them work with binary or
multi-class classification problems and therefore cannot be applied to
problems in which only samples from one class as well as a set of unlabeled
data are available.
Such problems arise in many real-world situations and are known as the
problem of learning from positive and unlabeled data. In this paper we propose
an active learning algorithm that can work when only samples of one class as
well as a set of unlabeled data are available. Our method works by separately
estimating the probability density of positive and unlabeled points and then
computing the expected value of informativeness to remove a hyper-parameter and
obtain a better measure of informativeness. Experiments and empirical analysis
show promising results compared to other similar methods.
| Alireza Ghasemi, Hamid R. Rabiee, Mohsen Fadaee, Mohammad T. Manzuri
and Mohammad H. Rohban | 10.1109/ICDMW.2011.20 | 1602.07495 | null | null |
A Bayesian Approach to the Data Description Problem | cs.LG | In this paper, we address the problem of data description using a Bayesian
framework. The goal of data description is to draw a boundary around objects of
a certain class of interest to discriminate that class from the rest of the
feature space. Data description is also known as one-class learning and has a
wide range of applications.
The proposed approach uses a Bayesian framework to precisely compute the
class boundary and therefore can utilize domain information in form of prior
knowledge in the framework. It can also operate in the kernel space and
therefore recognize arbitrary boundary shapes. Moreover, the proposed method
can utilize unlabeled data in order to improve accuracy of discrimination.
We evaluate our method using various real-world datasets and compare it with
other state of the art approaches of data description. Experiments show
promising results and improved performance over other data description and
one-class learning algorithms.
| Alireza Ghasemi, Hamid R. Rabiee, Mohammad T. Manzuri, M. H. Rohban | null | 1602.07507 | null | null |
Bayesian Exploration: Incentivizing Exploration in Bayesian Games | cs.GT cs.DS cs.LG | We consider a ubiquitous scenario in the Internet economy when individual
decision-makers (henceforth, agents) both produce and consume information as
they make strategic choices in an uncertain environment. This creates a
three-way tradeoff between exploration (trying out insufficiently explored
alternatives to help others in the future), exploitation (making optimal
decisions given the information discovered by other agents), and incentives of
the agents (who are myopically interested in exploitation, while preferring the
others to explore). We posit a principal who controls the flow of information
from agents that came before, and strives to coordinate the agents towards a
socially optimal balance between exploration and exploitation, not using any
monetary transfers. The goal is to design a recommendation policy for the
principal which respects agents' incentives and minimizes a suitable notion of
regret.
We extend prior work in this direction to allow the agents to interact with
one another in a shared environment: at each time step, multiple agents arrive
to play a Bayesian game, receive recommendations, choose their actions, receive
their payoffs, and then leave the game forever. The agents now face two sources
of uncertainty: the actions of the other agents and the parameters of the
uncertain game environment.
Our main contribution is to show that the principal can achieve constant
regret when the utilities are deterministic (where the constant depends on the
prior distribution, but not on the time horizon), and logarithmic regret when
the utilities are stochastic. As a key technical tool, we introduce the concept
of explorable actions, the actions which some incentive-compatible policy can
recommend with non-zero probability. We show how the principal can identify
(and explore) all explorable actions, and use the revealed information to
perform optimally.
| Yishay Mansour, Aleksandrs Slivkins, Vasilis Syrgkanis, Zhiwei Steven
Wu | null | 1602.07570 | null | null |
Group Equivariant Convolutional Networks | cs.LG stat.ML | We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a
natural generalization of convolutional neural networks that reduces sample
complexity by exploiting symmetries. G-CNNs use G-convolutions, a new type of
layer that enjoys a substantially higher degree of weight sharing than regular
convolution layers. G-convolutions increase the expressive capacity of the
network without increasing the number of parameters. Group convolution layers
are easy to use and can be implemented with negligible computational overhead
for discrete groups generated by translations, reflections and rotations.
G-CNNs achieve state of the art results on CIFAR10 and rotated MNIST.
| Taco S. Cohen, Max Welling | null | 1602.07576 | null | null |
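A toy illustration of the weight sharing behind a G-convolution for the rotation group p4: a single filter is applied at four 90-degree rotations, so rotating the input rotates each feature map and cyclically shifts the rotation channels instead of requiring new parameters. This is a minimal numpy sketch of a first-layer lifting convolution, not the authors' implementation.

```python
import numpy as np

def conv2d_valid(x, w):
    """Plain 'valid' 2-D cross-correlation."""
    H, W = x.shape
    k = w.shape[0]
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out

def p4_lifting_conv(x, w):
    """Apply one filter at all four 90-degree rotations, producing one output
    channel per rotation; the four channels share the same parameters."""
    return np.stack([conv2d_valid(x, np.rot90(w, r)) for r in range(4)])

rng = np.random.default_rng(6)
x = rng.normal(size=(8, 8))
w = rng.normal(size=(3, 3))

out = p4_lifting_conv(x, w)
out_rot = p4_lifting_conv(np.rot90(x), w)
# Equivariance check: rotating the input rotates each feature map and
# cyclically shifts the rotation channels.
expected = np.stack([np.rot90(out[(r - 1) % 4]) for r in range(4)])
print("equivariant:", np.allclose(out_rot, expected))
```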
A Model of Selective Advantage for the Efficient Inference of Cancer
Clonal Evolution | cs.LG | Recently, there has been a resurgence of interest in rigorous algorithms for
the inference of cancer progression from genomic data. The motivations are
manifold: (i) growing NGS and single cell data from cancer patients, (ii) need
for novel Data Science and Machine Learning algorithms to infer models of
cancer progression, and (iii) a desire to understand the temporal and
heterogeneous structure of the tumor to tame its progression by efficacious
therapeutic intervention. This thesis presents a multi-disciplinary effort to
model tumor progression involving successive accumulation of genetic
alterations, each resulting in populations manifesting themselves as a cancer
phenotype. The framework presented in this work, along with the algorithms derived
from it, represents a novel approach for inferring cancer progression, whose
accuracy and convergence rates surpass the existing techniques. The approach
derives its power from several fields including algorithms in machine learning,
theory of causality and cancer biology. Furthermore, a modular pipeline to
extract ensemble-level progression models from sequenced cancer genomes is
proposed. The pipeline combines state-of-the-art techniques for sample
stratification, driver selection, identification of fitness-equivalent
exclusive alterations and progression model inference. Furthermore, the results
are validated by synthetic data with realistic generative models, and
empirically interpreted in the context of real cancer datasets; in the latter
case, biologically significant conclusions are also highlighted. Specifically,
it demonstrates the pipeline's ability to reproduce much of the knowledge on
colorectal cancer, as well as to suggest novel hypotheses. Lastly, it also
proves that the proposed framework can be applied to reconstruct the
evolutionary history of cancer clones in single patients, as illustrated by an
example from clear cell renal carcinomas.
| Daniele Ramazzotti | null | 1602.07614 | null | null |
Noisy population recovery in polynomial time | cs.CC cs.DS cs.LG | In the noisy population recovery problem of Dvir et al., the goal is to learn
an unknown distribution $f$ on binary strings of length $n$ from noisy samples.
For some parameter $\mu \in [0,1]$, a noisy sample is generated by flipping
each coordinate of a sample from $f$ independently with probability
$(1-\mu)/2$. We assume an upper bound $k$ on the size of the support of the
distribution, and the goal is to estimate the probability of any string to
within some given error $\varepsilon$. It is known that the algorithmic
complexity and sample complexity of this problem are polynomially related to
each other.
We show that for $\mu > 0$, the sample complexity (and hence the algorithmic
complexity) is bounded by a polynomial in $k$, $n$, and $1/\varepsilon$,
improving upon the previous best result of $\mathsf{poly}(k^{\log\log
k},n,1/\varepsilon)$ due to Lovett and Zhang.
Our proof combines ideas from Lovett and Zhang with a \emph{noise attenuated}
version of M\"{o}bius inversion. In turn, the latter crucially uses the
construction of \emph{robust local inverse} due to Moitra and Saks.
| Anindya De and Michael Saks and Sijian Tang | null | 1602.07616 | null | null |
Online Dual Coordinate Ascent Learning | math.OC cs.LG stat.ML | The stochastic dual coordinate-ascent (S-DCA) technique is a useful
alternative to the traditional stochastic gradient-descent algorithm for
solving large-scale optimization problems due to its scalability to large data
sets and strong theoretical guarantees. However, the available S-DCA
formulation is limited to finite sample sizes and relies on performing multiple
passes over the same data. This formulation is not well-suited for online
implementations where data keep streaming in. In this work, we develop an {\em
online} dual coordinate-ascent (O-DCA) algorithm that is able to respond to
streaming data and does not need to revisit the past data. This feature embeds
the resulting construction with continuous adaptation, learning, and tracking
abilities, which are particularly attractive for online learning scenarios.
| Bicheng Ying, Kun Yuan, Ali H. Sayed | null | 1602.07630 | null | null |
Learning values across many orders of magnitude | cs.LG cs.AI cs.NE stat.ML | Most learning algorithms are not invariant to the scale of the function that
is being approximated. We propose to adaptively normalize the targets used in
learning. This is useful in value-based reinforcement learning, where the
magnitude of appropriate value approximations can change over time when we
update the policy of behavior. Our main motivation is prior work on learning to
play Atari games, where the rewards were all clipped to a predetermined range.
This clipping facilitates learning across many different games with a single
learning algorithm, but a clipped reward function can result in qualitatively
different behavior. Using the adaptive normalization we can remove this
domain-specific heuristic without diminishing overall performance.
| Hado van Hasselt and Arthur Guez and Matteo Hessel and Volodymyr Mnih
and David Silver | null | 1602.07714 | null | null |
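The core idea of adaptively normalizing targets can be sketched as follows. The snippet keeps running statistics of the targets, trains on normalized targets, and rescales the final linear layer whenever the statistics change so that the unnormalized predictions are preserved. It is a simplified, non-authoritative rendering of the general mechanism described above; the update rule, decay rate, and toy regression task are all our own choices.

```python
import numpy as np

class AdaptiveTargetNormalizer:
    """Running mean/std of targets; when they change, rescale the final
    linear layer (w, b) so un-normalized predictions are preserved."""
    def __init__(self, beta=0.05):
        self.mu, self.sigma, self.beta = 0.0, 1.0, beta

    def update(self, targets, w, b):
        mu_new = (1 - self.beta) * self.mu + self.beta * np.mean(targets)
        var_new = (1 - self.beta) * self.sigma**2 + self.beta * np.var(targets)
        sigma_new = max(np.sqrt(var_new), 1e-4)
        # Preserve outputs: sigma_new*(w_new.x + b_new) + mu_new == sigma*(w.x + b) + mu
        w_new = w * (self.sigma / sigma_new)
        b_new = (self.sigma * b + self.mu - mu_new) / sigma_new
        self.mu, self.sigma = mu_new, sigma_new
        return w_new, b_new

    def normalize(self, t):
        return (t - self.mu) / self.sigma

    def denormalize(self, out):
        return self.sigma * out + self.mu

rng = np.random.default_rng(7)
X = rng.normal(size=(2000, 4))
y = 1e4 * (X @ np.array([1.0, -2.0, 0.5, 0.0])) + 1e3   # large-magnitude targets

w, b = np.zeros(4), 0.0
norm = AdaptiveTargetNormalizer()
for i in range(0, len(X), 32):
    xb, yb = X[i:i + 32], y[i:i + 32]
    w, b = norm.update(yb, w, b)                 # adapt stats, preserve outputs
    pred = xb @ w + b                            # prediction in normalized space
    err = pred - norm.normalize(yb)
    w -= 0.1 * xb.T @ err / len(yb)
    b -= 0.1 * np.mean(err)

rel_err = np.mean(np.abs(norm.denormalize(X @ w + b) - y)) / np.mean(np.abs(y))
print("final relative prediction error:", round(float(rel_err), 3))
```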
Adaptive Learning with Robust Generalization Guarantees | cs.DS cs.LG | The traditional notion of generalization---i.e., learning a hypothesis whose
empirical error is close to its true error---is surprisingly brittle. As has
recently been noted in [DFH+15b], even if several algorithms have this
guarantee in isolation, the guarantee need not hold if the algorithms are
composed adaptively. In this paper, we study three notions of
generalization---increasing in strength---that are robust to postprocessing and
amenable to adaptive composition, and examine the relationships between them.
We call the weakest such notion Robust Generalization. A second, intermediate,
notion is the stability guarantee known as differential privacy. The strongest
guarantee we consider we call Perfect Generalization. We prove that every
hypothesis class that is PAC learnable is also PAC learnable in a robustly
generalizing fashion, with almost the same sample complexity. It was previously
known that differentially private algorithms satisfy robust generalization. In
this paper, we show that robust generalization is a strictly weaker concept,
and that there is a learning task that can be carried out subject to robust
generalization guarantees, yet cannot be carried out subject to differential
privacy. We also show that perfect generalization is a strictly stronger
guarantee than differential privacy, but that, nevertheless, many learning
tasks can be carried out subject to the guarantees of perfect generalization.
| Rachel Cummings, Katrina Ligett, Kobbi Nissim, Aaron Roth, Zhiwei
Steven Wu | null | 1602.07726 | null | null |
A Compressed Sensing Based Decomposition of Electrodermal Activity
Signals | stat.ML cs.LG stat.AP | The measurement and analysis of Electrodermal Activity (EDA) offers
applications in diverse areas ranging from market research, to seizure
detection, to human stress analysis. Unfortunately, the analysis of EDA signals
is made difficult by the superposition of numerous components which can obscure
the signal information related to a user's response to a stimulus. We show how
simple pre-processing followed by a novel compressed sensing based
decomposition can mitigate the effects of the undesired noise components and
help reveal the underlying physiological signal. The proposed framework allows
for decomposition of EDA signals with provable bounds on the recovery of user
responses. We test our procedure on both synthetic and real-world EDA signals
from wearable sensors and demonstrate that our approach allows for more
accurate recovery of user responses as compared to the existing techniques.
| Swayambhoo Jain, Urvashi Oswal, Kevin S. Xu, Brian Eriksson, Jarvis
Haupt | 10.1109/TBME.2016.2632523 | 1602.07754 | null | null |
Reinforcement Learning of POMDPs using Spectral Methods | cs.AI cs.LG cs.NA math.OC stat.ML | We propose a new reinforcement learning algorithm for partially observable
Markov decision processes (POMDP) based on spectral decomposition methods.
While spectral methods have been previously employed for consistent learning of
(passive) latent variable models such as hidden Markov models, POMDPs are more
challenging since the learner interacts with the environment and possibly
changes the future observations in the process. We devise a learning algorithm
running through episodes, in each episode we employ spectral techniques to
learn the POMDP parameters from a trajectory generated by a fixed policy. At
the end of the episode, an optimization oracle returns the optimal memoryless
planning policy which maximizes the expected reward based on the estimated
POMDP model. We prove an order-optimal regret bound with respect to the optimal
memoryless policy and efficient scaling with respect to the dimensionality of
observation and action spaces.
| Kamyar Azizzadenesheli, Alessandro Lazaric, Animashree Anandkumar | null | 1602.07764 | null | null |
Fast Nonsmooth Regularized Risk Minimization with Continuation | cs.LG math.OC stat.ML | In regularized risk minimization, the associated optimization problem becomes
particularly difficult when both the loss and regularizer are nonsmooth.
Existing approaches either have slow or unclear convergence properties, are
restricted to limited problem subclasses, or require careful setting of a
smoothing parameter. In this paper, we propose a continuation algorithm that is
applicable to a large class of nonsmooth regularized risk minimization
problems, can be flexibly used with a number of existing solvers for the
underlying smoothed subproblem, and comes with convergence results on the whole
algorithm rather than just one of its subproblems. In particular, when
accelerated solvers are used, the proposed algorithm achieves the fastest known
rates of $O(1/T^2)$ on strongly convex problems, and $O(1/T)$ on general convex
problems. Experiments on nonsmooth classification and regression tasks
demonstrate that the proposed algorithm outperforms the state-of-the-art.
| Shuai Zheng and Ruiliang Zhang and James T. Kwok | null | 1602.07844 | null | null |
Modeling cumulative biological phenomena with Suppes-Bayes Causal
Networks | cs.AI cs.LG | Several diseases related to cell proliferation are characterized by the
accumulation of somatic DNA changes, with respect to wildtype conditions.
Cancer and HIV are two common examples of such diseases, where the mutational
load in the cancerous/viral population increases over time. In these cases,
selective pressures are often observed along with competition, cooperation and
parasitism among distinct cellular clones. Recently, we presented a
mathematical framework to model these phenomena, based on a combination of
Bayesian inference and Suppes' theory of probabilistic causation, depicted in
graphical structures dubbed Suppes-Bayes Causal Networks (SBCNs). SBCNs are
generative probabilistic graphical models that recapitulate the potential
ordering of accumulation of such DNA changes during the progression of the
disease. Such models can be inferred from data by exploiting likelihood-based
model-selection strategies with regularization. In this paper we discuss the
theoretical foundations of our approach and we investigate in depth the
influence on the model-selection task of: (i) the poset based on Suppes' theory
and (ii) different regularization strategies. Furthermore, we provide an
example of the application of our framework to HIV genetic data, highlighting the
valuable insights provided by the inferred model.
| Daniele Ramazzotti and Alex Graudenzi and Giulio Caravagna and Marco
Antoniotti | 10.1177/1176934318785167 | 1602.07857 | null | null |
Probably Approximately Correct Greedy Maximization with Efficient Bounds
on Information Gain for Sensor Selection | cs.AI cs.LG stat.ML | Submodular function maximization finds application in a variety of real-world
decision-making problems. However, most existing methods, based on greedy
maximization, assume it is computationally feasible to evaluate F, the function
being maximized. Unfortunately, in many realistic settings F is too expensive
to evaluate exactly even once. We present probably approximately correct greedy
maximization, which requires access only to cheap anytime confidence bounds on
F and uses them to prune elements. We show that, with high probability, our
method returns an approximately optimal set. We propose novel, cheap confidence
bounds for conditional entropy, which appears in many common choices of F and
for which it is difficult to find unbiased or bounded estimates. Finally,
results on a real-world dataset from a multi-camera tracking system in a
shopping mall demonstrate that our approach performs comparably to existing
methods, but at a fraction of the computational cost.
| Yash Satsangi, Shimon Whiteson, Frans A. Oliehoek | null | 1602.07860 | null | null |
Learning Gaussian Graphical Models With Fractional Marginal
Pseudo-likelihood | stat.ML cs.LG | We propose a Bayesian approximate inference method for learning the
dependence structure of a Gaussian graphical model. Using pseudo-likelihood, we
derive an analytical expression to approximate the marginal likelihood for an
arbitrary graph structure without invoking any assumptions about
decomposability. The majority of the existing methods for learning Gaussian
graphical models are either restricted to decomposable graphs or require
specification of a tuning parameter that may have a substantial impact on
learned structures. By combining a simple sparsity inducing prior for the graph
structures with a default reference prior for the model parameters, we obtain a
fast and easily applicable scoring function that works well for even
high-dimensional data. We demonstrate the favourable performance of our
approach by large-scale comparisons against the leading methods for learning
non-decomposable Gaussian graphical models. A theoretical justification for our
method is provided by showing that it yields a consistent estimator of the
graph structure.
| Janne Lepp\"a-aho, Johan Pensar, Teemu Roos, Jukka Corander | 10.1016/j.ijar.2017.01.001 | 1602.07863 | null | null |
Projected Estimators for Robust Semi-supervised Classification | stat.ML cs.LG | For semi-supervised techniques to be applied safely in practice we at least
want methods to outperform their supervised counterparts. We study this
question for classification using the well-known quadratic surrogate loss
function. Using a projection of the supervised estimate onto a set of
constraints imposed by the unlabeled data, we find we can safely improve over
the supervised solution in terms of this quadratic loss. Unlike other
approaches to semi-supervised learning, the procedure does not rely on
assumptions that are not intrinsic to the classifier at hand. It is
theoretically demonstrated that, measured on the labeled and unlabeled training
data, this semi-supervised procedure never gives a higher quadratic loss than
the supervised alternative. To our knowledge this is the first approach that
offers such strong, albeit conservative, guarantees for improvement over the
supervised solution. The characteristics of our approach are explicated using
benchmark datasets to further understand the similarities and differences
between the quadratic loss criterion used in the theoretical results and the
classification accuracy often considered in practice.
| Jesse H. Krijthe and Marco Loog | null | 1602.07865 | null | null |
Weight Normalization: A Simple Reparameterization to Accelerate Training
of Deep Neural Networks | cs.LG cs.AI cs.NE | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning.
| Tim Salimans and Diederik P. Kingma | null | 1602.07868 | null | null |
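The reparameterization itself is compact enough to show directly: each weight vector w is written as w = g * v / ||v||, with the scalar g and the direction v trained in place of w. Below is a minimal numpy sketch of a weight-normalized dense layer; it illustrates the decoupling only, not the data-dependent initialization or the full training setup from the paper, and all names are ours.

```python
import numpy as np

class WeightNormDense:
    """Dense layer with weight normalization: w_i = g_i * v_i / ||v_i||."""
    def __init__(self, n_in, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.v = rng.normal(scale=0.05, size=(n_out, n_in))   # direction parameters
        self.g = np.ones(n_out)                               # per-output length
        self.b = np.zeros(n_out)

    def weights(self):
        norms = np.linalg.norm(self.v, axis=1, keepdims=True)
        return self.g[:, None] * self.v / norms

    def forward(self, x):
        return x @ self.weights().T + self.b

layer = WeightNormDense(n_in=3, n_out=2)
x = np.random.default_rng(1).normal(size=(4, 3))
y = layer.forward(x)

# Rescaling v leaves the effective weights (and outputs) unchanged: only the
# direction of v matters, while g carries the length.
layer.v *= 10.0
print("outputs unchanged after rescaling v:", np.allclose(y, layer.forward(x)))
```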
Thompson Sampling is Asymptotically Optimal in General Environments | cs.LG cs.AI stat.ML | We discuss a variant of Thompson sampling for nonparametric reinforcement
learning in a countable classes of general stochastic environments. These
environments can be non-Markov, non-ergodic, and partially observable. We show
that Thompson sampling learns the environment class in the sense that (1)
asymptotically its value converges to the optimal value in mean and (2) given a
recoverability assumption regret is sublinear.
| Jan Leike and Tor Lattimore and Laurent Orseau and Marcus Hutter | null | 1602.07905 | null | null |
How effective can simple ordinal peer grading be? | cs.AI cs.DS cs.LG | Ordinal peer grading has been proposed as a simple and scalable solution for
computing reliable information about student performance in massive open online
courses. The idea is to outsource the grading task to the students themselves
as follows. After the end of an exam, each student is asked to rank -- in terms
of quality -- a bundle of exam papers by fellow students. An aggregation rule
then combines the individual rankings into a global one that contains all
students. We define a broad class of simple aggregation rules, which we call
type-ordering aggregation rules, and present a theoretical framework for
assessing their effectiveness. When statistical information about the grading
behaviour of students is available (in terms of a noise matrix that
characterizes the grading behaviour of the average student from a student
population), the framework can be used to compute the optimal rule from this
class with respect to a series of performance objectives that compare the
ranking returned by the aggregation rule to the underlying ground truth
ranking. For example, a natural rule known as Borda is proved to be optimal
when students grade correctly. In addition, we present extensive simulations
that validate our theory and prove it to be extremely accurate in predicting
the performance of aggregation rules even when only rough information about
grading behaviour (i.e., an approximation of the noise matrix) is available.
Both in the application of our theoretical framework and in our simulations, we
exploit data about grading behaviour of students that have been extracted from
two field experiments in the University of Patras.
| Ioannis Caragiannis, George A. Krimpas, Alexandros A. Voudouris | null | 1602.07985 | null | null |
Practical Riemannian Neural Networks | cs.NE cs.LG stat.ML | We provide the first experimental results on non-synthetic datasets for the
quasi-diagonal Riemannian gradient descents for neural networks introduced in
[Ollivier, 2015]. These include the MNIST, SVHN, and FACE datasets as well as a
previously unpublished electroencephalogram dataset. The quasi-diagonal
Riemannian algorithms consistently beat simple stochastic gradient descent by a
varying margin. The computational overhead with respect to simple
backpropagation is around a factor $2$. Perhaps more interestingly, these
methods also reach their final performance quickly, thus requiring fewer
training epochs and a smaller total computation time.
We also present an implementation guide to these Riemannian gradient descents
for neural networks, showing how the quasi-diagonal versions can be implemented
with minimal effort on top of existing routines which compute gradients.
| Ga\'etan Marceau-Caron, Yann Ollivier | null | 1602.08007 | null | null |
Meta-learning within Projective Simulation | cs.AI cs.LG stat.ML | Learning models of artificial intelligence can nowadays perform very well on
a large variety of tasks. However, in practice different task environments are
best handled by different learning models, rather than a single, universal,
approach. Most non-trivial models thus require the adjustment of several to
many learning parameters, which is often done on a case-by-case basis by an
external party. Meta-learning refers to the ability of an agent to autonomously
and dynamically adjust its own learning parameters, or meta-parameters. In this
work we show how projective simulation, a recently developed model of
artificial intelligence, can naturally be extended to account for meta-learning
in reinforcement learning settings. The projective simulation approach is based
on a random walk process over a network of clips. The suggested meta-learning
scheme builds upon the same design and employs clip networks to monitor the
agent's performance and to adjust its meta-parameters "on the fly". We
distinguish between "reflexive adaptation" and "adaptation through learning",
and show the utility of both approaches. In addition, a trade-off between
flexibility and learning-time is addressed. The extended model is examined on
three different kinds of reinforcement learning tasks, in which the agent has
different optimal values of the meta-parameters, and is shown to perform well,
reaching near-optimal to optimal success rates in all of them, without ever
needing to manually adjust any meta-parameter.
| Adi Makmal, Alexey A. Melnikov, Vedran Dunjko, Hans J. Briegel | 10.1109/access.2016.2556579 | 1602.08017 | null | null |
PCA/LDA Approach for Text-Independent Speaker Recognition | cs.SD cs.LG | Various algorithms for text-independent speaker recognition have been
developed through the decades, aiming to improve both accuracy and efficiency.
This paper presents a novel PCA/LDA-based approach that is faster than
traditional statistical model-based methods and achieves competitive results.
First, the performance based on only PCA and only LDA is measured; then a mixed
model, taking advantage of both methods, is introduced. A subset of the TIMIT
corpus composed of 200 male speakers, is used for enrollment, validation and
testing. The best results achieve 100%, 96%, and 95% classification rates at
population sizes of 50, 100, and 200, using 39-dimensional MFCC features with delta
and double delta. These results are based on 12-second text-independent speech
for training and 4-second data for test. These are comparable to the
conventional MFCC-GMM methods, but require significantly less time to train and
operate.
| Zhenhao Ge, Sudhendu R. Sharma, Mark J. T. Smith | 10.1117/12.919235 | 1602.08045 | null | null |
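As a hedged illustration of the general PCA-then-LDA pipeline (not the paper's exact configuration, features, or data), the snippet below projects feature vectors with PCA and classifies speakers with LDA using scikit-learn; the synthetic Gaussian clusters stand in for the 39-dimensional MFCC vectors.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-utterance 39-dim MFCC feature vectors,
# one Gaussian cluster per "speaker".
rng = np.random.default_rng(8)
n_speakers, per_speaker, dim = 20, 40, 39
centers = rng.normal(scale=3.0, size=(n_speakers, dim))
X = np.vstack([c + rng.normal(size=(per_speaker, dim)) for c in centers])
y = np.repeat(np.arange(n_speakers), per_speaker)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# PCA for decorrelation/compression, LDA for a class-discriminative projection
# and classification.
model = make_pipeline(PCA(n_components=30), LinearDiscriminantAnalysis())
model.fit(X_tr, y_tr)
print("speaker classification accuracy:", round(model.score(X_te, y_te), 3))
```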
Hierarchical Conflict Propagation: Sequence Learning in a Recurrent Deep
Neural Network | cs.LG | Recurrent neural networks (RNN) are capable of learning to encode and exploit
activation history over an arbitrary timescale. However, in practice, state of
the art gradient descent based training methods are known to suffer from
difficulties in learning long term dependencies. Here, we describe a novel
training method that involves concurrent parallel cloned networks, each sharing
the same weights, each trained at different stimulus phase and each maintaining
independent activation histories. Training proceeds by recursively performing
batch-updates over the parallel clones as activation history is progressively
increased. This allows conflicts to propagate hierarchically from short-term
contexts towards longer-term contexts until they are resolved. We illustrate
the parallel clones method and hierarchical conflict propagation with a
character-level deep RNN tasked with memorizing a paragraph of Moby Dick (by
Herman Melville).
| Andrew J.R. Simpson | null | 1602.08118 | null | null |
vDNN: Virtualized Deep Neural Networks for Scalable, Memory-Efficient
Neural Network Design | cs.DC cs.LG cs.NE | The most widely used machine learning frameworks require users to carefully
tune their memory usage so that the deep neural network (DNN) fits into the
DRAM capacity of a GPU. This restriction hampers a researcher's flexibility to
study different machine learning algorithms, forcing them to either use a less
desirable network architecture or parallelize the processing across multiple
GPUs. We propose a runtime memory manager that virtualizes the memory usage of
DNNs such that both GPU and CPU memory can simultaneously be utilized for
training larger DNNs. Our virtualized DNN (vDNN) reduces the average GPU memory
usage of AlexNet by up to 89%, OverFeat by 91%, and GoogLeNet by 95%, a
significant reduction in memory requirements of DNNs. Similar experiments on
VGG-16, one of the deepest and memory hungry DNNs to date, demonstrate the
memory-efficiency of our proposal. vDNN enables VGG-16 with batch size 256
(requiring 28 GB of memory) to be trained on a single NVIDIA Titan X GPU card
containing 12 GB of memory, with 18% performance loss compared to a
hypothetical, oracular GPU with enough memory to hold the entire DNN.
| Minsoo Rhu, Natalia Gimelshein, Jason Clemons, Arslan Zulfiqar,
Stephen W. Keckler | null | 1602.08124 | null | null |
Auto-JacoBin: Auto-encoder Jacobian Binary Hashing | cs.CV cs.LG | Binary codes can be used to speed up nearest neighbor search tasks in large
scale data sets as they are efficient for both storage and retrieval. In this
paper, we propose a robust auto-encoder model that preserves the geometric
relationships of high-dimensional data sets in Hamming space. This is done by
considering a noise-removing function in a region surrounding the manifold
where the training data points lie. This function is defined with the property
that it projects the data points near the manifold into the manifold wisely,
and we approximate this function by its first order approximation. Experimental
results show that the proposed method achieves better than state-of-the-art
results on three large scale high dimensional data sets.
| Xiping Fu, Brendan McCane, Steven Mills, Michael Albert and Lech
Szymanski | null | 1602.08127 | null | null |
PCA Method for Automated Detection of Mispronounced Words | cs.SD cs.CL cs.LG | This paper presents a method for detecting mispronunciations with the aim of
improving Computer Assisted Language Learning (CALL) tools used by foreign
language learners. The algorithm is based on Principle Component Analysis
(PCA). It is hierarchical with each successive step refining the estimate to
classify the test word as being either mispronounced or correct. Preprocessing
before detection, like normalization and time-scale modification, is
implemented to guarantee uniformity of the feature vectors input to the
detection system. The performance using various features including spectrograms
and Mel-Frequency Cepstral Coefficients (MFCCs) are compared and evaluated.
Best results were obtained using MFCCs, achieving up to 99% accuracy in word
verification and 93% in native/non-native classification. Compared with Hidden
Markov Models (HMMs) which are used pervasively in recognition application,
this particular approach is computationally efficient and effective when training
data is limited.
| Zhenhao Ge, Sudhendu R. Sharma, Mark J. T. Smith | 10.1117/12.884155 | 1602.08128 | null | null |
Learning to Abstain from Binary Prediction | cs.LG stat.ML | A binary classifier capable of abstaining from making a label prediction has
two goals in tension: minimizing errors, and avoiding abstaining unnecessarily
often. In this work, we exactly characterize the best achievable tradeoff
between these two goals in a general semi-supervised setting, given an ensemble
of predictors of varying competence as well as unlabeled data on which we wish
to predict or abstain. We give an algorithm for learning a classifier in this
setting which trades off its errors with abstentions in a minimax optimal
manner, is as efficient as linear learning and prediction, and is demonstrably
practical. Our analysis extends to a large class of loss functions and other
scenarios, including ensembles comprised of specialists that can themselves
abstain.
| Akshay Balsubramani | null | 1602.08151 | null | null |
Harnessing disordered quantum dynamics for machine learning | quant-ph cs.AI cs.LG cs.NE nlin.CD | Quantum computer has an amazing potential of fast information processing.
However, realisation of a digital quantum computer is still a challenging
problem requiring highly accurate controls and key application strategies. Here
we propose a novel platform, quantum reservoir computing, that solves these issues
by exploiting for machine learning the natural quantum dynamics that are
ubiquitous in today's laboratories. In this framework, nonlinear
dynamics including classical chaos can be universally emulated in quantum
systems. A number of numerical experiments show that quantum systems consisting
of at most seven qubits possess computational capabilities comparable to
conventional recurrent neural networks of 500 nodes. This discovery opens up a
new paradigm for information processing with artificial intelligence powered by
quantum physics.
| Keisuke Fujii and Kohei Nakajima | 10.1103/PhysRevApplied.8.024030 | 1602.08159 | null | null |
Search by Ideal Candidates: Next Generation of Talent Search at LinkedIn | cs.IR cs.LG | One key challenge in talent search is how to translate complex criteria of a
hiring position into a search query. This typically requires deep knowledge of
which skills are needed for the position, what their alternatives are, which
companies are likely to have such candidates, etc. However,
listing examples of suitable candidates for a given position is a relatively
easy job. Therefore, in order to help searchers overcome this challenge, we
design a next-generation talent search paradigm at LinkedIn: Search by Ideal
Candidates. This new system only needs the searcher to input one or several
examples of suitable candidates for the position. The system will generate a
query based on the input candidates and then retrieve and rank results based on
the query as well as the input candidates. The query is also shown to the
searcher to make the system transparent and to allow the searcher to interact
with it. As the searcher modifies the initial query and makes it deviate from
the ideal candidates, the search ranking function dynamically adjusts and
refreshes the ranking results, balancing the roles of the query and the ideal
candidates. At the time of writing, the new system is being launched to our
customers.
| Viet Ha-Thuc, Ye Xu, Satya Pradeep Kanduri, Xianren Wu, Vijay Dialani,
Yan Yan, Abhishek Gupta, Shakti Sinha | 10.1145/2872518.2890549 | 1602.08186 | null | null |
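The production ranking function is of course not given in the abstract; a toy sketch of how a score could balance the (editable) query against the input ideal candidates, where the vector representations and the mixing weight `lam` are assumptions for illustration only, might be:

```python
import numpy as np

def blended_score(doc_vec, query_vec, ideal_vecs, lam):
    """Illustrative blend of query match and ideal-candidate similarity.

    lam close to 1 trusts the (possibly searcher-edited) query; lam close to 0
    trusts the input ideal candidates. This is only a toy rendering of the
    'balance between query and ideal candidates' described above, not
    LinkedIn's production ranking function.
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    query_part = cos(doc_vec, query_vec)
    ideal_part = np.mean([cos(doc_vec, v) for v in ideal_vecs])
    return lam * query_part + (1.0 - lam) * ideal_part
```

As the searcher edits the generated query and it drifts away from the ideal candidates, a system of this shape would shift weight toward the query term, mirroring the dynamic rebalancing described in the abstract.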
DeepSpark: A Spark-Based Distributed Deep Learning Framework for
Commodity Clusters | cs.LG | The increasing complexity of deep neural networks (DNNs) has made it
challenging to exploit existing large-scale data processing pipelines for
handling massive data and parameters involved in DNN training. Distributed
computing platforms and GPGPU-based acceleration provide a mainstream solution
to this computational challenge. In this paper, we propose DeepSpark, a
distributed and parallel deep learning framework that exploits Apache Spark on
commodity clusters. To support parallel operations, DeepSpark automatically
distributes workloads and parameters to Caffe/Tensorflow-running nodes using
Spark, and iteratively aggregates training results by a novel lock-free
asynchronous variant of the popular elastic averaging stochastic gradient
descent based update scheme, effectively complementing the synchronized
processing capabilities of Spark. DeepSpark is an on-going project, and the
current release is available at http://deepspark.snu.ac.kr.
| Hanjoo Kim, Jaehong Park, Jaehee Jang, and Sungroh Yoon | null | 1602.08191 | null | null |
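The lock-free asynchronous variant and the Spark integration are not described in detail above; as a minimal sketch of the underlying elastic averaging SGD idea that DeepSpark's update scheme is based on (shown here as one synchronous round, with illustrative step sizes), one worker/center update could look like:

```python
import numpy as np

def easgd_round(worker_params, center, grads, lr=0.05, rho=0.1):
    """One synchronous round of elastic-averaging-style updates (illustrative only).

    Each worker takes a gradient step plus an elastic pull toward the center
    variable, and the center is in turn pulled toward the workers. DeepSpark's
    actual scheme is an asynchronous, lock-free variant of this idea running
    over Spark-managed nodes.
    """
    alpha = lr * rho
    new_workers = []
    center_delta = np.zeros_like(center)
    for w, g in zip(worker_params, grads):
        new_workers.append(w - lr * g - alpha * (w - center))
        center_delta += alpha * (w - center)
    return new_workers, center + center_delta
```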
Scalable and Sustainable Deep Learning via Randomized Hashing | stat.ML cs.LG cs.NE | Current deep learning architectures are growing larger in order to learn from
complex datasets. These architectures require giant matrix multiplication
operations to train millions of parameters. Conversely, there is another
growing trend to bring deep learning to low-power, embedded devices. The matrix
operations, associated with both training and testing of deep networks, are
very expensive from a computational and energy standpoint. We present a novel
hashing based technique to drastically reduce the amount of computation needed
to train and test deep networks. Our approach combines recent ideas from
adaptive dropouts and randomized hashing for maximum inner product search to
select the nodes with the highest activation efficiently. Our new algorithm for
deep learning reduces the overall computational cost of forward and
back-propagation by operating on significantly fewer (sparse) nodes. As a
consequence, our algorithm uses only 5% of the total multiplications, while
keeping on average within 1% of the accuracy of the original model. A unique
property of the proposed hashing based back-propagation is that the updates are
always sparse. Due to the sparse gradient updates, our algorithm is ideally
suited for asynchronous and parallel training, leading to near-linear speedup
with an increasing number of cores. We demonstrate the scalability and
sustainability (energy efficiency) of our proposed algorithm via rigorous
experimental evaluations on several real datasets.
| Ryan Spring, Anshumali Shrivastava | null | 1602.08194 | null | null |
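The paper's exact scheme (asymmetric LSH for maximum inner product search combined with adaptive dropout) is not reproduced in the abstract; the following simplified sketch instead uses plain SimHash buckets to preselect candidate neurons and computes activations only for them, with all layer sizes and the number of hyperplanes being illustrative assumptions:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

def simhash(vecs, planes):
    """Sign pattern of projections onto random hyperplanes -> integer bucket id."""
    bits = ((vecs @ planes.T) > 0).astype(np.int64)
    return bits @ (1 << np.arange(planes.shape[0]))

# A wide layer: 4096 neurons with 256-dimensional weight vectors.
W = rng.normal(size=(4096, 256))
planes = rng.normal(size=(12, 256))            # 12 hyperplanes -> 4096 buckets

# Preprocess: hash every neuron's weight vector into a bucket (done once).
table = defaultdict(list)
for j, h in enumerate(simhash(W, planes)):
    table[int(h)].append(j)

def sparse_forward(x):
    """Compute activations only for neurons whose hash collides with the input."""
    active = table.get(int(simhash(x[None, :], planes)[0]), [])
    out = np.zeros(W.shape[0])
    out[active] = np.maximum(W[active] @ x, 0.0)   # ReLU on the selected nodes
    return out, active

x = rng.normal(size=256)
out, active = sparse_forward(x)
print(f"computed {len(active)} of {W.shape[0]} neurons")
```

The resulting gradient updates touch only the selected nodes, which is the sparsity property the abstract credits for the near-linear parallel speedup.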
Architectural Complexity Measures of Recurrent Neural Networks | cs.LG cs.NE | In this paper, we systematically analyze the connecting architectures of
recurrent neural networks (RNNs). Our main contribution is twofold: first, we
present a rigorous graph-theoretic framework describing the connecting
architectures of RNNs in general. Second, we propose three architecture
complexity measures of RNNs: (a) the recurrent depth, which captures the RNN's
over-time nonlinear complexity, (b) the feedforward depth, which captures the
local input-output nonlinearity (similar to the "depth" in feedforward neural
networks (FNNs)), and (c) the recurrent skip coefficient which captures how
rapidly the information propagates over time. We rigorously prove each
measure's existence and computability. Our experimental results show that RNNs
might benefit from larger recurrent depth and feedforward depth. We further
demonstrate that increasing recurrent skip coefficient offers performance
boosts on long term dependency problems.
| Saizheng Zhang, Yuhuai Wu, Tong Che, Zhouhan Lin, Roland Memisevic,
Ruslan Salakhutdinov, Yoshua Bengio | null | 1602.08210 | null | null |
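The three measures are defined graph-theoretically in the paper and are not restated in the abstract; as a loose illustration of the mechanism behind the recurrent skip coefficient only, here is a toy vanilla RNN augmented with a length-s recurrent skip connection (weight names, sizes, and the skip length are assumptions):

```python
import numpy as np

def rnn_with_skip(inputs, W_x, W_h, W_skip, s=3):
    """Vanilla RNN with an added length-s recurrent skip connection, the kind
    of shortcut whose effect the recurrent skip coefficient quantifies
    (illustrative only; the paper's definitions are graph-theoretic)."""
    n_hidden = W_h.shape[0]
    history = [np.zeros(n_hidden)] * s          # h_{t-1}, ..., h_{t-s}
    outputs = []
    for x_t in inputs:
        h_t = np.tanh(W_x @ x_t + W_h @ history[0] + W_skip @ history[-1])
        history = [h_t] + history[:-1]
        outputs.append(h_t)
    return np.array(outputs)

rng = np.random.default_rng(0)
H, D = 16, 4
outs = rnn_with_skip(rng.normal(size=(20, D)),
                     rng.normal(size=(H, D)) * 0.1,
                     rng.normal(size=(H, H)) * 0.1,
                     rng.normal(size=(H, H)) * 0.1, s=3)
print(outs.shape)   # (20, 16)
```

Because the skip edge lets information cross s time steps per recurrent hop, gradients can reach distant time steps through fewer nonlinearities, which is the intuition for the long-term-dependency gains reported above.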
Multimodal Emotion Recognition Using Multimodal Deep Learning | cs.HC cs.CV cs.LG | To enhance the performance of affective models and reduce the cost of
acquiring physiological signals for real-world applications, we adopt
multimodal deep learning approach to construct affective models from multiple
physiological signals. For the unimodal enhancement task, we show that the best
recognition accuracy of 82.11% on the SEED dataset is achieved with shared
representations generated by a Deep AutoEncoder (DAE) model. For the multimodal
facilitation task, we demonstrate that the Bimodal Deep AutoEncoder (BDAE)
achieves mean accuracies of 91.01% and 83.25% on the SEED and DEAP datasets,
respectively, which are much superior to the state-of-the-art approaches. For
the cross-modal learning task, our experimental results demonstrate that a mean
accuracy of 66.34% is achieved on the SEED dataset using shared representations
generated by the EEG-based DAE as training samples and shared representations
generated by the eye-based DAE as testing samples, and vice versa.
| Wei Liu, Wei-Long Zheng, Bao-Liang Lu | null | 1602.08225 | null | null |
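The BDAE architecture is not specified in the abstract; a minimal sketch of a bimodal autoencoder that maps EEG and eye-movement features into a shared code and reconstructs both modalities (all layer sizes, including the 310/33 input dimensions, are illustrative assumptions, and the paper's model is trained differently, e.g. with pretraining) might look like:

```python
import torch
import torch.nn as nn

class BimodalAutoEncoder(nn.Module):
    """Toy bimodal autoencoder: two modality encoders meet in a shared code,
    and two decoders reconstruct each modality from that code."""
    def __init__(self, eeg_dim=310, eye_dim=33, hidden=128, shared=64):
        super().__init__()
        self.enc_eeg = nn.Sequential(nn.Linear(eeg_dim, hidden), nn.ReLU())
        self.enc_eye = nn.Sequential(nn.Linear(eye_dim, hidden), nn.ReLU())
        self.shared = nn.Linear(2 * hidden, shared)
        self.dec_eeg = nn.Linear(shared, eeg_dim)
        self.dec_eye = nn.Linear(shared, eye_dim)

    def forward(self, eeg, eye):
        joint = torch.cat([self.enc_eeg(eeg), self.enc_eye(eye)], dim=1)
        z = torch.relu(self.shared(joint))       # shared representation
        return self.dec_eeg(z), self.dec_eye(z), z

# Reconstruction loss on both modalities; z can later feed an emotion classifier.
model = BimodalAutoEncoder()
eeg, eye = torch.randn(8, 310), torch.randn(8, 33)
rec_eeg, rec_eye, z = model(eeg, eye)
loss = nn.functional.mse_loss(rec_eeg, eeg) + nn.functional.mse_loss(rec_eye, eye)
```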
Theoretical Analysis of the $k$-Means Algorithm - A Survey | cs.DS cs.LG | The $k$-means algorithm is one of the most widely used clustering heuristics.
Despite its simplicity, analyzing its running time and quality of approximation
is surprisingly difficult and can lead to deep insights that can be used to
improve the algorithm. In this paper we survey the recent results in this
direction as well as several extensions of the basic $k$-means method.
| Johannes Bl\"omer, Christiane Lammersen, Melanie Schmidt, Christian
Sohler | null | 1602.08254 | null | null |
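Since the survey centers on the classical heuristic, a plain Lloyd's-iteration implementation (random initialization, no k-means++ style seeding) is easy to state as a reference point; the code below is a generic sketch, not taken from the survey:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's heuristic: alternate nearest-center assignment and
    center recomputation until the centers stop moving."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: nearest center for every point.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Update step: each center moves to the mean of its cluster.
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels
```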
Bounded Rational Decision-Making in Feedforward Neural Networks | cs.AI cs.LG cs.NE | Bounded rational decision-makers transform sensory input into motor output
under limited computational resources. Mathematically, such decision-makers can
be modeled as information-theoretic channels with limited transmission rate.
Here, we apply this formalism for the first time to multilayer feedforward
neural networks. We derive synaptic weight update rules for two scenarios,
where either each neuron is considered as a bounded rational decision-maker or
the network as a whole. In the update rules, bounded rationality translates
into information-theoretically motivated types of regularization in weight
space. In experiments on the MNIST benchmark classification task for
handwritten digits, we show that such information-theoretic regularization
successfully prevents overfitting across different architectures and attains
results that are competitive with other recent techniques like dropout,
dropconnect and Bayes by backprop, for both ordinary and convolutional neural
networks.
| Felix Leibfried and Daniel Alexander Braun | null | 1602.08332 | null | null |