title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Spatial Semantic Scan: Jointly Detecting Subtle Events and their Spatial
Footprint | cs.LG cs.CL stat.ML | Many methods have been proposed for detecting emerging events in text streams
using topic modeling. However, these methods have shortcomings that make them
unsuitable for rapid detection of locally emerging events on massive text
streams. We describe Spatially Compact Semantic Scan (SCSS) that has been
developed specifically to overcome the shortcomings of current methods in
detecting new spatially compact events in text streams. SCSS employs
alternating optimization between using semantic scan to estimate contrastive
foreground topics in documents, and discovering spatial neighborhoods with high
occurrence of documents containing the foreground topics. We evaluate our
method on an Emergency Department chief complaints dataset (ED dataset) to verify
its effectiveness in detecting real-world disease outbreaks from free-text ED
chief complaint data.
| Abhinav Maurya | null | 1511.00352 | null | null |
BinaryConnect: Training Deep Neural Networks with binary weights during
propagations | cs.LG cs.CV cs.NE | Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide
range of tasks, with the best results obtained with large training sets and
large models. In the past, GPUs enabled these breakthroughs because of their
greater computational speed. In the future, faster computation at both training
and test time is likely to be crucial for further progress and for consumer
applications on low-power devices. As a result, there is much interest in
research and development of dedicated hardware for Deep Learning (DL). Binary
weights, i.e., weights which are constrained to only two possible values (e.g.
-1 or 1), would bring great benefits to specialized DL hardware by replacing
many multiply-accumulate operations by simple accumulations, as multipliers are
the most space and power-hungry components of the digital implementation of
neural networks. We introduce BinaryConnect, a method which consists of
training a DNN with binary weights during the forward and backward
propagations, while retaining the precision of the stored weights in which
gradients are accumulated. We show that, like other dropout schemes,
BinaryConnect acts as a regularizer, and we obtain near state-of-the-art results
with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.
| Matthieu Courbariaux, Yoshua Bengio and Jean-Pierre David | null | 1511.00363 | null | null |
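A minimal numpy sketch of the binarization scheme described in the abstract, on a hypothetical toy logistic-regression problem (illustrative only, not the authors' implementation): binary weights are used in the forward and backward passes, while gradients accumulate in a full-precision copy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: a linear classifier that a binary-weight model can fit.
X = rng.normal(size=(256, 20))
w_target = rng.choice([-1.0, 1.0], size=20)
y = (X @ w_target > 0).astype(float)

w_real = rng.normal(scale=0.01, size=20)   # full-precision weights (the accumulator)
lr = 0.1

for step in range(300):
    w_bin = np.where(w_real >= 0, 1.0, -1.0)  # binarize weights to {-1, +1}
    p = 1.0 / (1.0 + np.exp(-(X @ w_bin)))    # forward pass uses the binary weights
    grad = X.T @ (p - y) / len(y)             # backward pass is also w.r.t. binary weights
    w_real -= lr * grad                       # ...but the update hits the real-valued copy
    w_real = np.clip(w_real, -1.0, 1.0)       # keep the accumulator in a bounded range

accuracy = np.mean(((X @ np.where(w_real >= 0, 1.0, -1.0)) > 0) == (y > 0.5))
print(f"training accuracy with binary weights: {accuracy:.3f}")
```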
Submodular Functions: from Discrete to Continuous Domains | cs.LG math.OC | Submodular set-functions have many applications in combinatorial
optimization, as they can be minimized and approximately maximized in
polynomial time. A key element in many of the algorithms and analyses is the
possibility of extending the submodular set-function to a convex function,
which opens up tools from convex optimization. Submodularity goes beyond
set-functions and has naturally been considered for problems with multiple
labels or for functions defined on continuous domains, where it corresponds
essentially to cross second-derivatives being nonpositive. In this paper, we
show that most results relating submodularity and convexity for set-functions
can be extended to all submodular functions. In particular, (a) we naturally
define a continuous extension in a set of probability measures, (b) show that
the extension is convex if and only if the original function is submodular, (c)
prove that the problem of minimizing a submodular function is equivalent to a
typically non-smooth convex optimization problem, and (d) propose another
convex optimization problem with better computational properties (e.g., a
smooth dual problem). Most of these extensions from the set-function situation
are obtained by drawing links with the theory of multi-marginal optimal
transport, which also provides a new interpretation of existing results for
set-functions. We then provide practical algorithms to minimize generic
submodular functions on discrete domains, with associated convergence rates.
| Francis Bach (LIENS, SIERRA) | null | 1511.00394 | null | null |
An Impossibility Result for Reconstruction in a Degree-Corrected
Planted-Partition Model | math.PR cs.LG cs.SI stat.ML | We consider the Degree-Corrected Stochastic Block Model (DC-SBM): a random
graph on $n$ nodes, having i.i.d. weights $(\phi_u)_{u=1}^n$ (possibly
heavy-tailed), partitioned into $q \geq 2$ asymptotically equal-sized clusters.
The model parameters are two constants $a,b > 0$ and the finite second moment
of the weights $\Phi^{(2)}$. Vertices $u$ and $v$ are connected by an edge with
probability $\frac{\phi_u \phi_v}{n}a$ when they are in the same class and with
probability $\frac{\phi_u \phi_v}{n}b$ otherwise.
We prove that it is information-theoretically impossible to estimate the
clusters in a way positively correlated with the true community structure when
$(a-b)^2 \Phi^{(2)} \leq q(a+b)$.
As by-products of our proof we obtain $(1)$ a precise coupling result for
local neighbourhoods in DC-SBM's, that we use in a follow up paper [Gulikers et
al., 2017] to establish a law of large numbers for local-functionals and $(2)$
that long-range interactions are weak in (power-law) DC-SBM's.
| Lennart Gulikers, Marc Lelarge, Laurent Massouli\'e | null | 1511.00546 | null | null |
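For readers who want to see the model concretely, a small numpy sketch that samples a graph with the edge probabilities stated in the abstract and checks the impossibility condition; the weights, cluster count, and constants are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

n, q = 200, 2                          # nodes and clusters (illustrative values)
a, b = 12.0, 3.0                       # intra- / inter-cluster constants
phi = rng.pareto(3.0, size=n) + 1.0    # i.i.d. (possibly heavy-tailed) weights
labels = rng.integers(q, size=n)       # roughly equal-sized clusters

# Edge probability: phi_u * phi_v / n times a (same class) or b (different classes).
A = np.zeros((n, n), dtype=int)
for u in range(n):
    for v in range(u + 1, n):
        c = a if labels[u] == labels[v] else b
        if rng.random() < min(1.0, phi[u] * phi[v] * c / n):
            A[u, v] = A[v, u] = 1

# Impossibility condition from the abstract: (a - b)^2 * Phi^(2) <= q * (a + b),
# with Phi^(2) the second moment of the weights (estimated here from the sample).
phi2 = np.mean(phi ** 2)
print("mean degree:", A.sum() / n)
print("in the provably undetectable regime:", (a - b) ** 2 * phi2 <= q * (a + b))
```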
SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image
Segmentation | cs.CV cs.LG cs.NE | We present a novel and practical deep fully convolutional neural network
architecture for semantic pixel-wise segmentation termed SegNet. This core
trainable segmentation engine consists of an encoder network, a corresponding
decoder network followed by a pixel-wise classification layer. The architecture
of the encoder network is topologically identical to the 13 convolutional
layers in the VGG16 network. The role of the decoder network is to map the low
resolution encoder feature maps to full input resolution feature maps for
pixel-wise classification. The novelty of SegNet lies in the manner in which
the decoder upsamples its lower resolution input feature map(s). Specifically,
the decoder uses pooling indices computed in the max-pooling step of the
corresponding encoder to perform non-linear upsampling. This eliminates the
need for learning to upsample. The upsampled maps are sparse and are then
convolved with trainable filters to produce dense feature maps. We compare our
proposed architecture with the widely adopted FCN and also with the well known
DeepLab-LargeFOV, DeconvNet architectures. This comparison reveals the memory
versus accuracy trade-off involved in achieving good segmentation performance.
SegNet was primarily motivated by scene understanding applications. Hence, it
is designed to be efficient both in terms of memory and computational time
during inference. It is also significantly smaller in the number of trainable
parameters than other competing architectures. We also performed a controlled
benchmark of SegNet and other architectures on both road scenes and SUN RGB-D
indoor scene segmentation tasks. We show that SegNet provides good performance
with competitive inference time and more efficient inference memory-wise as
compared to other architectures. We also provide a Caffe implementation of
SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/.
| Vijay Badrinarayanan and Alex Kendall and Roberto Cipolla | null | 1511.00561 | null | null |
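A toy, single-channel numpy illustration of the pooling-indices mechanism described above (not the SegNet code): the encoder's 2x2 max-pooling records argmax locations, and the decoder uses them to place values back, yielding the sparse upsampled maps that trainable filters then densify.

```python
import numpy as np

def max_pool_2x2_with_indices(x):
    """2x2 max-pooling that also returns the argmax location within each window."""
    h, w = x.shape
    pooled = np.zeros((h // 2, w // 2))
    indices = np.zeros((h // 2, w // 2), dtype=int)   # flat index into the 2x2 window
    for i in range(h // 2):
        for j in range(w // 2):
            window = x[2*i:2*i+2, 2*j:2*j+2].ravel()
            indices[i, j] = window.argmax()
            pooled[i, j] = window[indices[i, j]]
    return pooled, indices

def max_unpool_2x2(pooled, indices, out_shape):
    """Place each pooled value back at its recorded location; the rest stays zero."""
    out = np.zeros(out_shape)
    for i in range(pooled.shape[0]):
        for j in range(pooled.shape[1]):
            di, dj = divmod(indices[i, j], 2)
            out[2*i + di, 2*j + dj] = pooled[i, j]
    return out

x = np.random.default_rng(0).normal(size=(4, 4))
p, idx = max_pool_2x2_with_indices(x)
up = max_unpool_2x2(p, idx, x.shape)   # sparse upsampled map, no learned upsampling
print(up)
```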
Toward an Efficient Multi-class Classification in an Open Universe | cs.LG cs.AI cs.DB cs.IR | Classification is a fundamental task in machine learning and data mining.
Existing classification methods are designed to classify unknown instances
within a set of previously known training classes. Such a classification takes
the form of a prediction within a closed-set of classes. However, a more
realistic scenario that fits real-world applications is to consider the
possibility of encountering instances that do not belong to any of the training
classes, $i.e.$, an open-set classification. In such a situation, existing
closed-set classifiers will assign a training label to these instances
resulting in a misclassification. In this paper, we introduce Galaxy-X, a novel
multi-class classification approach for open-set recognition problems. For each
class of the training set, Galaxy-X creates a minimum bounding hyper-sphere
that encompasses the distribution of the class by enclosing all of its
instances. In this manner, our method is able to distinguish instances
resembling previously seen classes from those belonging to unknown ones. To
adequately evaluate open-set classification, we introduce a novel evaluation
procedure. Experimental results on benchmark datasets show the efficiency of
our approach in classifying novel instances from known as well as unknown
classes.
| Wajdi Dhifli, Abdoulaye Banir\'e Diallo | null | 1511.00725 | null | null |
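A hedged numpy sketch of the per-class bounding-hypersphere idea described above; the centre and radius choices below (class mean, distance to the farthest training instance) are simplifications for illustration and not necessarily the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_hyperspheres(X, y):
    """One bounding hypersphere per class: class mean as centre,
    distance to the farthest training instance as radius (a simplification)."""
    spheres = {}
    for c in np.unique(y):
        pts = X[y == c]
        centre = pts.mean(axis=0)
        radius = np.linalg.norm(pts - centre, axis=1).max()
        spheres[c] = (centre, radius)
    return spheres

def predict_open_set(spheres, x):
    """Return the nearest enclosing class, or None if x lies outside all spheres."""
    best, best_dist = None, np.inf
    for c, (centre, radius) in spheres.items():
        d = np.linalg.norm(x - centre)
        if d <= radius and d < best_dist:
            best, best_dist = c, d
    return best

# Two known classes plus a far-away "novel" point.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
spheres = fit_hyperspheres(X, y)
print(predict_open_set(spheres, np.array([0.2, -0.1])))   # -> 0 (known class)
print(predict_open_set(spheres, np.array([30.0, 30.0])))  # -> None (unknown)
```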
ProtNN: Fast and Accurate Nearest Neighbor Protein Function Prediction
based on Graph Embedding in Structural and Topological Space | cs.LG cs.SI | Studying the function of proteins is important for understanding the
molecular mechanisms of life. The number of publicly available protein
structures has become extremely large and continues to grow. Still, determining
the function of a protein structure remains a difficult, costly, and
time-consuming task. The difficulties are often due to the essential role of spatial
and topological structures in the determination of protein functions in living
cells. In this paper, we propose ProtNN, a novel approach for protein function
prediction. Given an unannotated query protein structure and a set of annotated
reference proteins, ProtNN finds the nearest neighbor reference proteins based
on a graph representation model and pairwise similarities between the vector
embeddings of the query and reference protein-graphs in structural and
topological spaces. ProtNN assigns to the
query protein the function with the highest number of votes across the set of k
nearest neighbor reference proteins, where k is a user-defined parameter.
Experimental evaluation demonstrates that ProtNN is able to accurately classify
several datasets in an extremely fast runtime compared to state-of-the-art
approaches. We further show that ProtNN is able to scale up to a whole PDB
dataset in a single-process mode with no parallelization, with a gain of
thousands of orders of magnitude in runtime compared to state-of-the-art
approaches.
| Wajdi Dhifli, Abdoulaye Banir\'e Diallo | 10.1186/s13040-016-0108-2 | 1511.00736 | null | null |
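A minimal numpy illustration of the k-nearest-neighbour voting step described above, with random vectors standing in for the structural and topological graph embeddings (the embedding construction itself is the paper's contribution and is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_vote(query_vec, reference_vecs, reference_labels, k=5):
    """Assign the function label with the most votes among the k nearest references."""
    dists = np.linalg.norm(reference_vecs - query_vec, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(reference_labels[nearest], return_counts=True)
    return labels[counts.argmax()]

# Stand-in embeddings: 100 annotated reference proteins, 8-dim vectors, 3 functions.
ref = rng.normal(size=(100, 8))
ref_labels = rng.integers(3, size=100)
query = ref[7] + 0.01 * rng.normal(size=8)   # a query close to reference protein 7
print(knn_vote(query, ref, ref_labels, k=5))
```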
Learning Unfair Trading: a Market Manipulation Analysis From the
Reinforcement Learning Perspective | q-fin.TR cs.LG | Market manipulation is a strategy used by traders to alter the price of
financial securities. One type of manipulation is based on the process of
buying or selling assets by using several trading strategies, among them
spoofing is a popular strategy and is considered illegal by market regulators.
Some promising tools have been developed to detect manipulation, but cases can
still be found in the markets. In this paper we model spoofing and pinging
trading, two strategies that differ in the legal background but share the same
elemental concept of market manipulation. We use a reinforcement learning
framework within the full and partial observability of Markov decision
processes and analyse the underlying behaviour of the manipulators by finding
the causes of what encourages the traders to perform fraudulent activities.
This reveals procedures to counter the problem that may be helpful to market
regulators as our model predicts the activity of spoofers.
| Enrique Mart\'inez-Miranda and Peter McBurney and Matthew J. Howard | null | 1511.00740 | null | null |
PAC Learning-Based Verification and Model Synthesis | cs.SE cs.LG cs.LO | We introduce a novel technique for verification and model synthesis of
sequential programs. Our technique is based on learning a regular model of the
set of feasible paths in a program, and testing whether this model contains an
incorrect behavior. Exact learning algorithms require checking equivalence
between the model and the program, which is a difficult and, in general,
undecidable problem. Our learning procedure is therefore based on the framework of
probably approximately correct (PAC) learning, which uses sampling instead and
provides correctness guarantees expressed using the terms error probability and
confidence. Besides the verification result, our procedure also outputs the
model with the said correctness guarantees. Preliminary experiments
show encouraging results, in some cases even outperforming mature software
verifiers.
| Yu-Fang Chen, Chiao Hsieh, Ond\v{r}ej Leng\'al, Tsung-Ju Lii,
Ming-Hsien Tsai, Bow-Yaw Wang, and Farn Wang | null | 1511.00754 | null | null |
Fast Collaborative Filtering from Implicit Feedback with Provable
Guarantees | cs.LG | Building recommendation algorithms is one of the most challenging tasks in
Machine Learning. Although most of the recommendation systems are built on
explicit feedback available from the users in terms of rating or text, a
majority of the applications do not receive such feedback. Here we consider the
recommendation task where the only available data is the records of user-item
interaction over web applications over time, in terms of subscription or
purchase of items; this is known as implicit feedback recommendation. There is
usually a massive amount of such user-item interaction available for any web
applications. Algorithms like PLSI or Matrix Factorization run several
iterations through the dataset, and may prove very expensive for large
datasets. Here we propose a recommendation algorithm based on the Method of Moments,
which involves factorization of second and third order moments of the dataset.
Our algorithm can be proven to be globally convergent using PAC learning
theory. Further, we show how to extract the parameters using only three passes
through the entire dataset. This results in a highly scalable algorithm that
scales up to millions of users even on a machine with a single-core processor
and 8 GB RAM and produces competitive performance in comparison with existing
algorithms.
| Sayantan Dasgupta | null | 1511.00792 | null | null |
The Variational Fair Autoencoder | stat.ML cs.LG | We investigate the problem of learning representations that are invariant to
certain nuisance or sensitive factors of variation in the data while retaining
as much of the remaining information as possible. Our model is based on a
variational autoencoding architecture with priors that encourage independence
between sensitive and latent factors of variation. Any subsequent processing,
such as classification, can then be performed on this purged latent
representation. To remove any remaining dependencies we incorporate an
additional penalty term based on the "Maximum Mean Discrepancy" (MMD) measure.
We discuss how these architectures can be efficiently trained on data and show
in experiments that this method is more effective than previous work in
removing unwanted sources of variation while maintaining informative latent
representations.
| Christos Louizos, Kevin Swersky, Yujia Li, Max Welling and Richard
Zemel | null | 1511.00830 | null | null |
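To make the MMD penalty concrete, a short numpy sketch of a (biased) RBF-kernel estimate of the squared Maximum Mean Discrepancy between latent codes of two sensitive groups; the kernel bandwidth and estimator variant are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian kernel matrix between rows of A and rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def mmd2(Z0, Z1, sigma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between two samples."""
    return (rbf_kernel(Z0, Z0, sigma).mean()
            + rbf_kernel(Z1, Z1, sigma).mean()
            - 2 * rbf_kernel(Z0, Z1, sigma).mean())

rng = np.random.default_rng(0)
# Latent codes for two values of the sensitive attribute; here group 1 is shifted,
# so the penalty is large -- training would push it towards zero.
Z0 = rng.normal(0.0, 1.0, size=(128, 16))
Z1 = rng.normal(0.5, 1.0, size=(128, 16))
print("MMD^2 penalty:", mmd2(Z0, Z1))
```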
Properties of the Sample Mean in Graph Spaces and the
Majorize-Minimize-Mean Algorithm | cs.CV cs.LG stat.ML | One of the most fundamental concepts in statistics is the concept of sample
mean. Properties of the sample mean that are well-defined in Euclidean spaces
become unwieldy or even unclear in graph spaces. Open problems related to the
sample mean of graphs include: non-existence, non-uniqueness, statistical
inconsistency, lack of convergence results of mean algorithms, non-existence of
midpoints, and disparity to midpoints. We present conditions to resolve all six
problems and propose a Majorize-Minimize-Mean (MMM) Algorithm. Experiments on
graph datasets representing images and molecules show that the MMM-Algorithm
best approximates a sample mean of graphs compared to six other mean
algorithms.
| Brijnesh J. Jain | null | 1511.00871 | null | null |
Do Prices Coordinate Markets? | cs.GT cs.LG | Walrasian equilibrium prices can be said to coordinate markets: They support
a welfare optimal allocation in which each buyer is buying bundle of goods that
is individually most preferred. However, this clean story has two caveats.
First, the prices alone are not sufficient to coordinate the market, and buyers
may need to select among their most preferred bundles in a coordinated way to
find a feasible allocation. Second, we don't in practice expect to encounter
exact equilibrium prices tailored to the market, but instead only approximate
prices, somehow encoding "distributional" information about the market. How
well do prices work to coordinate markets when tie-breaking is not coordinated,
and they encode only distributional information?
We answer this question. First, we provide a genericity condition such that
for buyers with Matroid Based Valuations, overdemand with respect to
equilibrium prices is at most 1, independent of the supply of goods, even when
tie-breaking is done in an uncoordinated fashion. Second, we provide
learning-theoretic results that show that such prices are robust to changing
the buyers in the market, so long as all buyers are sampled from the same
(unknown) distribution.
| Justin Hsu, Jamie Morgenstern, Ryan Rogers, Aaron Roth, Rakesh Vohra | 10.1145/2897518.2897559 | 1511.00925 | null | null |
Data Stream Classification using Random Feature Functions and Novel
Method Combinations | cs.LG cs.NE | Big Data streams are being generated in a faster, bigger, and more
commonplace manner. In this scenario, Hoeffding Trees are an established method for
classification. Several extensions exist, including high-performing ensemble
setups such as online and leveraging bagging. Also, $k$-nearest neighbors is a
popular choice, with most extensions dealing with the inherent performance
limitations over a potentially-infinite stream.
At the same time, gradient descent methods are becoming increasingly popular,
owing in part to the successes of deep learning. Although deep neural networks
can learn incrementally, they have so far proved too sensitive to
hyper-parameter options and initial conditions to be considered an effective
`off-the-shelf' data-streams solution.
In this work, we look at combinations of Hoeffding-trees, nearest neighbour,
and gradient descent methods with a streaming preprocessing approach in the
form of a random feature functions filter for additional predictive power.
We further extend the investigation to implementing methods on GPUs, which we
test on some large real-world datasets, and show the benefits of using GPUs for
data-stream learning due to their high scalability.
Our empirical evaluation yields positive results for the novel approaches
that we experiment with, highlighting important issues, and shedding light on
promising future directions in approaches to data-stream classification.
| Diego Marr\'on ([email protected]) and Jesse Read
([email protected]) and Albert Bifet ([email protected])
and Nacho Navarro ([email protected]) | null | 1511.00971 | null | null |
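A hedged sketch of one plausible form of a random feature functions filter feeding an incremental learner (a generic random projection plus nonlinearity; the paper's exact filter and learners may differ), evaluated prequentially, i.e. test-then-train, on a simulated stream.

```python
import numpy as np

rng = np.random.default_rng(0)

class RandomFeatureFilter:
    """Fixed random projection followed by a nonlinearity, applied per instance."""
    def __init__(self, n_in, n_features):
        self.W = rng.normal(size=(n_in, n_features))
        self.b = rng.uniform(-1.0, 1.0, size=n_features)

    def transform(self, x):
        return np.tanh(x @ self.W + self.b)

d, m = 10, 200
filt = RandomFeatureFilter(d, m)   # the streaming preprocessing step
w = np.zeros(m)                    # an incremental logistic learner on top
correct = 0

for t in range(5000):
    x = rng.normal(size=d)
    label = 1.0 if np.sin(2 * x[0]) + x[1] > 0 else 0.0   # a nonlinear concept
    z = filt.transform(x)
    p = 1.0 / (1.0 + np.exp(-(z @ w)))
    if t >= 4000:                          # prequential accuracy on the last 1000
        correct += int((p > 0.5) == (label > 0.5))
    w -= 0.05 * (p - label) * z            # one SGD step per arriving instance

print("accuracy on the last 1000 stream instances:", correct / 1000)
```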
Understanding symmetries in deep networks | cs.LG cs.AI cs.CV | Recent works have highlighted scale invariance or symmetry present in the
weight space of a typical deep network and the adverse effect it has on the
Euclidean gradient based stochastic gradient descent optimization. In this
work, we show that a commonly used deep network, which uses a convolution, batch
normalization, ReLU, max-pooling, and sub-sampling pipeline, possesses more
complex forms of symmetry arising from scaling-based reparameterization of the
network weights. We propose to tackle the issue of the weight space symmetry by
constraining the filters to lie on the unit-norm manifold. Consequently,
training the network boils down to using stochastic gradient descent updates on
the unit-norm manifold. Our empirical evidence based on the MNIST dataset shows
that the proposed updates improve the test performance beyond what is achieved
with batch normalization and without sacrificing the computational efficiency
of the weight updates.
| Vijay Badrinarayanan and Bamdev Mishra and Roberto Cipolla | null | 1511.01029 | null | null |
Detecting Interrogative Utterances with Recurrent Neural Networks | cs.CL cs.LG cs.NE | In this paper, we explore different neural network architectures that can
predict if a speaker of a given utterance is asking a question or making a
statement. We compare the outcomes of regularization methods that are
popularly used to train deep neural networks and study how different context
functions can affect the classification performance. We also compare the
efficacy of gated activation functions that are favorably used in recurrent
neural networks and study how to combine multimodal inputs. We evaluate our
models on two multimodal datasets: MSR-Skype and CALLHOME.
| Junyoung Chung and Jacob Devlin and Hany Hassan Awadalla | null | 1511.01042 | null | null |
Detecting Clusters of Anomalies on Low-Dimensional Feature Subsets with
Application to Network Traffic Flow Data | cs.NI cs.CR cs.LG | In a variety of applications, one desires to detect groups of anomalous data
samples, with a group potentially manifesting its atypicality (relative to a
reference model) on a low-dimensional subset of the full measured set of
features. Samples may only be weakly atypical individually, whereas they may be
strongly atypical when considered jointly. What makes this group anomaly
detection problem quite challenging is that it is a priori unknown which subset
of features jointly manifests a particular group of anomalies. Moreover, it is
unknown how many anomalous groups are present in a given data batch. In this
work, we develop a group anomaly detection (GAD) scheme to identify the subset
of samples and subset of features that jointly specify an anomalous cluster. We
apply our approach to network intrusion detection to detect BotNet and
peer-to-peer flow clusters. Unlike previous studies, our approach captures and
exploits statistical dependencies that may exist between the measured features.
Experiments on real world network traffic data demonstrate the advantage of our
proposed system, and highlight the importance of exploiting feature dependency
structure, compared to the feature (or test) independence assumption made in
previous studies.
| Zhicong Qiu, David J. Miller, George Kesidis | null | 1511.01047 | null | null |
Distributed Deep Learning for Question Answering | cs.LG cs.CL cs.DC | This paper is an empirical study of the distributed deep learning for
question answering subtasks: answer selection and question classification.
Comparison studies of SGD, MSGD, ADADELTA, ADAGRAD, ADAM/ADAMAX, RMSPROP,
DOWNPOUR and EASGD/EAMSGD algorithms have been presented. Experimental results
show that the distributed framework based on the message passing interface can
accelerate the convergence speed at a sublinear scale. This paper demonstrates
the importance of distributed training. For example, with 48 workers, a 24x
speedup is achievable for the answer selection task and running time is
decreased from 138.2 hours to 5.81 hours, which will increase the productivity
significantly.
| Minwei Feng, Bing Xiang, Bowen Zhou | 10.1145/2983323.2983377 | 1511.01158 | null | null |
adaQN: An Adaptive Quasi-Newton Algorithm for Training RNNs | cs.LG math.OC stat.ML | Recurrent Neural Networks (RNNs) are powerful models that achieve exceptional
performance on several pattern recognition problems. However, the training of
RNNs is a computationally difficult task owing to the well-known
"vanishing/exploding" gradient problem. Algorithms proposed for training RNNs
either exploit no (or limited) curvature information and have cheap
per-iteration complexity, or attempt to gain significant curvature information
at the cost of increased per-iteration cost. The former set includes
diagonally-scaled first-order methods such as ADAGRAD and ADAM, while the
latter consists of second-order algorithms like Hessian-Free Newton and K-FAC.
In this paper, we present adaQN, a stochastic quasi-Newton algorithm for
training RNNs. Our approach retains a low per-iteration cost while allowing for
non-diagonal scaling through a stochastic L-BFGS updating scheme. The method
uses a novel L-BFGS scaling initialization scheme and is judicious in storing
and retaining L-BFGS curvature pairs. We present numerical experiments on two
language modeling tasks and show that adaQN is competitive with popular RNN
training algorithms.
| Nitish Shirish Keskar and Albert S. Berahas | null | 1511.01169 | null | null |
Learn on Source, Refine on Target: A Model Transfer Learning Framework
with Random Forests | cs.LG | We propose novel model transfer-learning methods that refine a decision
forest model M learned within a "source" domain using a training set sampled
from a "target" domain, assumed to be a variation of the source. We present two
random forest transfer algorithms. The first algorithm searches greedily for
locally optimal modifications of each tree structure by trying to locally
expand or reduce the tree around individual nodes. The second algorithm does
not modify structure, but only the parameters (thresholds) associated with
decision nodes. We also propose to combine both methods by considering an
ensemble that contains the union of the two forests. The proposed methods
exhibit impressive experimental results over a range of problems.
| Noam Segev, Maayan Harel, Shie Mannor, Koby Crammer and Ran El-Yaniv | 10.1109/TPAMI.2016.2618118 | 1511.01258 | null | null |
Study of a bias in the offline evaluation of a recommendation algorithm | cs.IR cs.LG stat.ML | Recommendation systems have been integrated into the majority of large online
systems to filter and rank information according to user profiles. It thus
influences the way users interact with the system and, as a consequence, biases
the evaluation of the performance of a recommendation algorithm computed using
historical data (via offline evaluation). This paper describes this bias and
discusses the relevance of a weighted offline evaluation to reduce it for
different classes of recommendation algorithms.
| Arnaud De Myttenaere (SAMM, Viadeo), Boris Golden (Viadeo),
B\'en\'edicte Le Grand (CRI), Fabrice Rossi (SAMM) | null | 1511.01280 | null | null |
Co-Clustering Network-Constrained Trajectory Data | stat.ML cs.DB cs.LG | Recently, clustering moving object trajectories has kept gaining interest from
both the data mining and machine learning communities. This problem, however,
was studied mainly and extensively in the setting where moving objects can move
freely in Euclidean space. In this paper, we study the problem of
clustering trajectories of vehicles whose movement is restricted by the
underlying road network. We model relations between these trajectories and road
segments as a bipartite graph and we try to cluster its vertices. We
demonstrate our approaches on synthetic data and show how they could be useful in
inferring knowledge about the flow dynamics and the behavior of the drivers
using the road network.
| Mohamed Khalil El Mahrsi (LTCI, SAMM), Romain Guigour\`es (SAMM),
Fabrice Rossi (SAMM), Marc Boull\'e | 10.1007/978-3-319-23751-0_2 | 1511.01281 | null | null |
Factorizing LambdaMART for cold start recommendations | cs.LG cs.IR | Recommendation systems often rely on point-wise loss metrics such as the mean
squared error. However, in real recommendation settings only few items are
presented to a user. This observation has recently encouraged the use of
rank-based metrics. LambdaMART is the state-of-the-art algorithm in learning to
rank which relies on such a metric. Despite its success, it does not have a
principled regularization mechanism, relying instead on empirical approaches to
control model complexity, which leaves it prone to overfitting.
Motivated by the fact that very often the users' and items' descriptions as
well as the preference behavior can be well summarized by a small number of
hidden factors, we propose a novel algorithm, LambdaMART Matrix Factorization
(LambdaMART-MF), that learns a low rank latent representation of users and
items using gradient boosted trees. The algorithm factorizes lambdaMART by
defining relevance scores as the inner product of the learned representations
of the users and items. The low rank is essentially a model complexity
controller; on top of it we propose additional regularizers to constraint the
learned latent representations that reflect the user and item manifolds as
these are defined by their original feature based descriptors and the
preference behavior. Finally we also propose to use a weighted variant of NDCG
to reduce the penalty for similar items with large rating discrepancy.
We experiment on two very different recommendation datasets, meta-mining and
movies-users, and evaluate the performance of LambdaMART-MF, with and without
regularization, in the cold start setting as well as in the simpler matrix
completion setting. In both cases it significantly outperforms current
state-of-the-art algorithms.
| Phong Nguyen and Jun Wang and Alexandros Kalousis | null | 1511.01282 | null | null |
Data-Driven Learning of a Union of Sparsifying Transforms Model for
Blind Compressed Sensing | stat.ML cs.LG | Compressed sensing is a powerful tool in applications such as magnetic
resonance imaging (MRI). It enables accurate recovery of images from highly
undersampled measurements by exploiting the sparsity of the images or image
patches in a transform domain or dictionary. In this work, we focus on blind
compressed sensing (BCS), where the underlying sparse signal model is a priori
unknown, and propose a framework to simultaneously reconstruct the underlying
image as well as the unknown model from highly undersampled measurements.
Specifically, our model is that the patches of the underlying image(s) are
approximately sparse in a transform domain. We also extend this model to a
union of transforms model that better captures the diversity of features in
natural images. The proposed block coordinate descent type algorithms for blind
compressed sensing are highly efficient, and are guaranteed to converge to at
least the partial global and partial local minimizers of the highly non-convex
BCS problems. Our numerical experiments show that the proposed framework
usually leads to better quality of image reconstructions in MRI compared to
several recent image reconstruction methods. Importantly, the learning of a
union of sparsifying transforms leads to better image reconstructions than a
single adaptive transform.
| Saiprasad Ravishankar and Yoram Bresler | 10.1109/TCI.2016.2567299 | 1511.01289 | null | null |
Learning in Auctions: Regret is Hard, Envy is Easy | cs.GT cs.AI cs.CC cs.LG | A line of recent work provides welfare guarantees of simple combinatorial
auction formats, such as selling m items via simultaneous second price auctions
(SiSPAs) (Christodoulou et al. 2008, Bhawalkar and Roughgarden 2011, Feldman et
al. 2013). These guarantees hold even when the auctions are repeatedly executed
and players use no-regret learning algorithms. Unfortunately, off-the-shelf
no-regret algorithms for these auctions are computationally inefficient as the
number of actions is exponential. We show that this obstacle is insurmountable:
there are no polynomial-time no-regret algorithms for SiSPAs, unless
RP$\supseteq$ NP, even when the bidders are unit-demand. Our lower bound raises
the question of how good outcomes polynomially-bounded bidders may discover in
such auctions.
To answer this question, we propose a novel concept of learning in auctions,
termed "no-envy learning." This notion is founded upon Walrasian equilibrium,
and we show that it is both efficiently implementable and results in
approximately optimal welfare, even when the bidders have fractionally
subadditive (XOS) valuations (assuming demand oracles) or coverage valuations
(without demand oracles). No-envy learning outcomes are a relaxation of
no-regret outcomes, which maintain their approximate welfare optimality while
endowing them with computational tractability. Our results extend to other
auction formats that have been studied in the literature via the smoothness
paradigm.
Our results for XOS valuations are enabled by a novel
Follow-The-Perturbed-Leader algorithm for settings where the number of experts
is infinite, and the payoff function of the learner is non-linear. This
algorithm has applications outside of auction settings, such as in security
games. Our result for coverage valuations is based on a novel use of convex
rounding schemes and a reduction to online convex optimization.
| Constantinos Daskalakis, Vasilis Syrgkanis | null | 1511.01411 | null | null |
Train and Test Tightness of LP Relaxations in Structured Prediction | stat.ML cs.AI cs.LG | Structured prediction is used in areas such as computer vision and natural
language processing to predict structured outputs such as segmentations or
parse trees. In these settings, prediction is performed by MAP inference or,
equivalently, by solving an integer linear program. Because of the complex
scoring functions required to obtain accurate predictions, both learning and
inference typically require the use of approximate solvers. We propose a
theoretical explanation to the striking observation that approximations based
on linear programming (LP) relaxations are often tight on real-world instances.
In particular, we show that learning with LP relaxed inference encourages
integrality of training instances, and that tightness generalizes from train to
test data.
| Ofer Meshi, Mehrdad Mahdavi, Adrian Weller and David Sontag | null | 1511.01419 | null | null |
Semi-supervised Sequence Learning | cs.LG cs.CL | We present two approaches that use unlabeled data to improve sequence
learning with recurrent networks. The first approach is to predict what comes
next in a sequence, which is a conventional language model in natural language
processing. The second approach is to use a sequence autoencoder, which reads
the input sequence into a vector and predicts the input sequence again. These
two algorithms can be used as a "pretraining" step for a later supervised
sequence learning algorithm. In other words, the parameters obtained from the
unsupervised step can be used as a starting point for other supervised training
models. In our experiments, we find that long short term memory recurrent
networks after being pretrained with the two approaches are more stable and
generalize better. With pretraining, we are able to train long short term
memory recurrent networks up to a few hundred timesteps, thereby achieving
strong performance in many text classification tasks, such as IMDB, DBpedia and
20 Newsgroups.
| Andrew M. Dai and Quoc V. Le | null | 1511.01432 | null | null |
Low-Rank Approximation of Weighted Tree Automata | cs.LG cs.FL | We describe a technique to minimize weighted tree automata (WTA), a powerful
formalism that subsumes probabilistic context-free grammars (PCFGs) and
latent-variable PCFGs. Our method relies on a singular value decomposition of
the underlying Hankel matrix defined by the WTA. Our main theoretical result is
an efficient algorithm for computing the SVD of an infinite Hankel matrix
implicitly represented as a WTA. We provide an analysis of the approximation
error induced by the minimization, and we evaluate our method on real-world
data originating in a newswire treebank. We show that the model achieves lower
perplexity than previous methods for PCFG minimization, and is also much more
stable due to the absence of local optima.
| Guillaume Rabusseau, Borja Balle, Shay B. Cohen | null | 1511.01442 | null | null |
How Robust are Reconstruction Thresholds for Community Detection? | cs.DS cs.IT cs.LG math.IT math.PR stat.ML | The stochastic block model is one of the oldest and most ubiquitous models
for studying clustering and community detection. In an exciting sequence of
developments, motivated by deep but non-rigorous ideas from statistical
physics, Decelle et al. conjectured a sharp threshold for when community
detection is possible in the sparse regime. Mossel, Neeman and Sly and
Massoulie proved the conjecture and gave matching algorithms and lower bounds.
Here we revisit the stochastic block model from the perspective of semirandom
models where we allow an adversary to make `helpful' changes that strengthen
ties within each community and break ties between them. We show a surprising
result that these `helpful' changes can shift the information-theoretic
threshold, making the community detection problem strictly harder. We
complement this by showing that an algorithm based on semidefinite programming
(which was known to get close to the threshold) continues to work in the
semirandom model (even for partial recovery). This suggests that algorithms
based on semidefinite programming are robust in ways that any algorithm meeting
the information-theoretic threshold cannot be.
These results point to an interesting new direction: Can we find robust,
semirandom analogues to some of the classical, average-case thresholds in
statistics? We also explore this question in the broadcast tree model, and we
show that the viewpoint of semirandom models can help explain why some
algorithms are preferred to others in practice, in spite of the gaps in their
statistical performance on random models.
| Ankur Moitra and William Perry and Alexander S. Wein | null | 1511.01473 | null | null |
Mean-field inference of Hawkes point processes | cs.LG cond-mat.stat-mech | We propose a fast and efficient estimation method that is able to accurately
recover the parameters of a d-dimensional Hawkes point-process from a set of
observations. We exploit a mean-field approximation that is valid when the
fluctuations of the stochastic intensity are small. We show that this is
notably the case in situations when interactions are sufficiently weak, when
the dimension of the system is high or when the fluctuations are self-averaging
due to the large number of past events they involve. In such a regime the
estimation of a Hawkes process can be mapped on a least-squares problem for
which we provide an analytic solution. Though this estimator is biased, we show
that its precision can be comparable to that of the Maximum Likelihood
Estimator, while its computation speed is considerably higher. We
give a theoretical control on the accuracy of our new approach and illustrate
its efficiency using synthetic datasets, in order to assess the statistical
estimation error of the parameters.
| Emmanuel Bacry, St\'ephane Ga\"iffas, Iacopo Mastromatteo and
Jean-Fran\c{c}ois Muzy | 10.1088/1751-8113/49/17/174006 | 1511.01512 | null | null |
Mining Local Gazetteers of Literary Chinese with CRF and Pattern based
Methods for Biographical Information in Chinese History | cs.CL cs.DL cs.IR cs.LG | Person names and location names are essential building blocks for identifying
events and social networks in historical documents that were written in
literary Chinese. We take the lead to explore the research on algorithmically
recognizing named entities in literary Chinese for historical studies with
language-model based and conditional-random-field based methods, and extend our
work to mining the document structures in historical documents. Practical
evaluations were conducted with texts that were extracted from more than 220
volumes of local gazetteers (Difangzhi). Difangzhi is a huge collection, and the
single most important one, containing information about officers who served
in local government in Chinese history. Our methods performed very well on
these realistic tests. Thousands of names and addresses were identified from
the texts. A good portion of the extracted names match the biographical
information currently recorded in the China Biographical Database (CBDB) of
Harvard University, and many others can be verified by historians and will
become new additions to CBDB.
| Chao-Lin Liu, Chih-Kai Huang, Hongsu Wang, Peter K. Bol | 10.1109/BigData.2015.7363931 | 1511.01556 | null | null |
Interpretable classifiers using rules and Bayesian analysis: Building a
better stroke prediction model | stat.AP cs.LG stat.ML | We aim to produce predictive models that are not only accurate, but are also
interpretable to human experts. Our models are decision lists, which consist of
a series of if...then... statements (e.g., if high blood pressure, then stroke)
that discretize a high-dimensional, multivariate feature space into a series of
simple, readily interpretable decision statements. We introduce a generative
model called Bayesian Rule Lists that yields a posterior distribution over
possible decision lists. It employs a novel prior structure to encourage
sparsity. Our experiments show that Bayesian Rule Lists has predictive accuracy
on par with the current top algorithms for prediction in machine learning. Our
method is motivated by recent developments in personalized medicine, and can be
used to produce highly accurate and interpretable medical scoring systems. We
demonstrate this by producing an alternative to the CHADS$_2$ score, actively
used in clinical practice for estimating the risk of stroke in patients that
have atrial fibrillation. Our model is as interpretable as CHADS$_2$, but more
accurate.
| Benjamin Letham, Cynthia Rudin, Tyler H. McCormick, David Madigan | 10.1214/15-AOAS848 | 1511.01644 | null | null |
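To make the decision-list form concrete, a tiny hand-written Python example of the kind of if...then... list described above; the conditions and risk numbers are invented for illustration and are not taken from the paper's stroke model.

```python
# A decision list: ordered (condition, predicted risk) pairs plus a default rule.
# The conditions and probabilities below are purely illustrative.
rules = [
    (lambda p: p["hemiplegia"] and p["age"] > 60,     0.59),
    (lambda p: p["cerebrovascular_disorder"],         0.47),
    (lambda p: p["transient_ischaemic_attack"],       0.24),
]
default_risk = 0.04

def predict_risk(patient):
    """Walk the list top-down and return the first matching rule's estimate."""
    for condition, risk in rules:
        if condition(patient):
            return risk
    return default_risk

patient = {"hemiplegia": False, "cerebrovascular_disorder": True,
           "transient_ischaemic_attack": False, "age": 72}
print(predict_risk(patient))   # -> 0.47
```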
Stochastic Proximal Gradient Descent for Nuclear Norm Regularization | cs.LG | In this paper, we utilize stochastic optimization to reduce the space
complexity of convex composite optimization with a nuclear norm regularizer,
where the variable is a matrix of size $m \times n$. By constructing a low-rank
estimate of the gradient, we propose an iterative algorithm based on stochastic
proximal gradient descent (SPGD), and take the last iterate of SPGD as the
final solution. The main advantage of the proposed algorithm is that its space
complexity is $O(m+n)$; in contrast, most previous algorithms have an $O(mn)$
space complexity. Theoretical analysis shows that it achieves $O(\log
T/\sqrt{T})$ and $O(\log T/T)$ convergence rates for general convex functions
and strongly convex functions, respectively.
| Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou | null | 1511.01664 | null | null |
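For reference, a short numpy sketch of the proximal step involved: plain (non-stochastic) proximal gradient descent for nuclear-norm-regularized least squares, where the prox operator is singular value soft-thresholding. This version forms full m x n matrices, so it illustrates the prox step but not the O(m+n) space complexity that is the paper's contribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def prox_nuclear(W, tau):
    """Proximal operator of tau * ||W||_*: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Toy problem: denoise a low-rank matrix by solving min_W 0.5||W - Y||_F^2 + lam||W||_*.
m, n, r = 40, 30, 3
M = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))
Y = M + 0.1 * rng.normal(size=(m, n))

lam, step = 2.0, 0.5
W = np.zeros((m, n))
for _ in range(200):
    grad = W - Y                                   # gradient of the smooth part
    W = prox_nuclear(W - step * grad, step * lam)  # proximal gradient update

print("rank of estimate:", np.linalg.matrix_rank(W, tol=1e-6))
print("relative error  :", np.linalg.norm(W - M) / np.linalg.norm(M))
```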
Symmetry-invariant optimization in deep networks | cs.LG cs.AI cs.CV | Recent works have highlighted scale invariance or symmetry that is present in
the weight space of a typical deep network and the adverse effect that it has
on the Euclidean gradient based stochastic gradient descent optimization. In
this work, we show that these and other commonly used deep networks, such as
those which use a max-pooling and sub-sampling layer, possess more complex
forms of symmetry arising from scaling based reparameterization of the network
weights. We then propose two symmetry-invariant gradient based weight updates
for stochastic gradient descent based learning. Our empirical evidence based on
the MNIST dataset shows that these updates improve the test performance without
sacrificing the computational efficiency of the weight updates. We also show
the results of training with one of the proposed weight updates on an image
segmentation problem.
| Vijay Badrinarayanan and Bamdev Mishra and Roberto Cipolla | null | 1511.01754 | null | null |
Discrete R\'enyi Classifiers | cs.LG | Consider the binary classification problem of predicting a target variable
$Y$ from a discrete feature vector $X = (X_1,...,X_d)$. When the probability
distribution $\mathbb{P}(X,Y)$ is known, the optimal classifier, leading to the
minimum misclassification rate, is given by the Maximum A-posteriori
Probability decision rule. However, estimating the complete joint distribution
$\mathbb{P}(X,Y)$ is computationally and statistically impossible for large
values of $d$. An alternative approach is to first estimate some low order
marginals of $\mathbb{P}(X,Y)$ and then design the classifier based on the
estimated low order marginals. This approach is also helpful when the complete
training data instances are not available due to privacy concerns. In this
work, we consider the problem of finding the optimum classifier based on some
estimated low order marginals of $(X,Y)$. We prove that for a given set of
marginals, the minimum Hirschfeld-Gebelein-Renyi (HGR) correlation principle
introduced in [1] leads to a randomized classification rule which is shown to
have a misclassification rate no larger than twice the misclassification rate
of the optimal classifier. Then, under a separability condition, we show that
the proposed algorithm is equivalent to a randomized linear regression
approach. In addition, this method naturally results in a robust feature
selection method selecting a subset of features having the maximum worst case
HGR correlation with the target variable. Our theoretical upper-bound is
similar to the recent Discrete Chebyshev Classifier (DCC) approach [2], while
the proposed algorithm has significant computational advantages since it only
requires solving a least square optimization problem. Finally, we numerically
compare our proposed algorithm with the DCC classifier and show that the
proposed algorithm results in better misclassification rate over various
datasets.
| Meisam Razaviyayn, Farzan Farnia, David Tse | null | 1511.01764 | null | null |
Computational Intractability of Dictionary Learning for Sparse
Representation | cs.LG stat.ML | In this paper we consider the dictionary learning problem for sparse
representation. We first show that this problem is NP-hard by a polynomial-time
reduction from the densest cut problem. Then, using successive convex
approximation strategies, we propose efficient dictionary learning schemes to
solve several practical formulations of this problem to stationary points.
Unlike many existing algorithms in the literature, such as K-SVD, our proposed
dictionary learning scheme is theoretically guaranteed to converge to the set
of stationary points under certain mild assumptions. For the image denoising
application, the performance and the efficiency of the proposed dictionary
learning scheme are comparable to those of the K-SVD algorithm in simulation.
| Meisam Razaviyayn, Hung-Wei Tseng, Zhi-Quan Luo | null | 1511.01776 | null | null |
A note on the evaluation of generative models | stat.ML cs.LG | Probabilistic generative models can be used for compression, denoising,
inpainting, texture synthesis, semi-supervised learning, unsupervised feature
learning, and other tasks. Given this wide range of applications, it is not
surprising that a lot of heterogeneity exists in the way these models are
formulated, trained, and evaluated. As a consequence, direct comparison between
models is often difficult. This article reviews mostly known but often
underappreciated properties relating to the evaluation and interpretation of
generative models with a focus on image models. In particular, we show that
three of the currently most commonly used criteria---average log-likelihood,
Parzen window estimates, and visual fidelity of samples---are largely
independent of each other when the data is high-dimensional. Good performance
with respect to one criterion therefore need not imply good performance with
respect to the other criteria. Our results show that extrapolation from one
criterion to another is not warranted and generative models need to be
evaluated directly with respect to the application(s) they were intended for.
In addition, we provide examples demonstrating that Parzen window estimates
should generally be avoided.
| Lucas Theis, A\"aron van den Oord, Matthias Bethge | null | 1511.01844 | null | null |
Convolutional Neural Network for Stereotypical Motor Movement Detection
in Autism | cs.NE cs.CV cs.LG stat.ML | Autism Spectrum Disorders (ASDs) are often associated with specific atypical
postural or motor behaviors, of which Stereotypical Motor Movements (SMMs) have
a specific visibility. While the identification and the quantification of SMM
patterns remain complex, their automation would provide support for accurate
tuning of the intervention in the therapy of autism. Therefore, it is essential
to develop automatic SMM detection systems in a real world setting, taking care
of strong inter-subject and intra-subject variability. Wireless accelerometer
sensing technology can provide a valid infrastructure for real-time SMM
detection, however such variability remains a problem also for machine learning
methods, in particular whenever handcrafted features extracted from
accelerometer signal are considered. Here, we propose to employ the deep
learning paradigm in order to learn discriminating features from multi-sensor
accelerometer signals. Our results provide preliminary evidence that feature
learning and transfer learning embedded in the deep architecture achieve more
accurate SMM detectors in longitudinal scenarios.
| Nastaran Mohammadian Rad, Andrea Bizzego, Seyed Mostafa Kia, Giuseppe
Jurman, Paola Venuti, Cesare Furlanello | null | 1511.01865 | null | null |
Thoughts on Massively Scalable Gaussian Processes | cs.LG cs.AI stat.ME stat.ML | We introduce a framework and early results for massively scalable Gaussian
processes (MSGP), significantly extending the KISS-GP approach of Wilson and
Nickisch (2015). The MSGP framework enables the use of Gaussian processes (GPs)
on billions of datapoints, without requiring distributed inference, or severe
assumptions. In particular, MSGP reduces the standard $O(n^3)$ complexity of GP
learning and inference to $O(n)$, and the standard $O(n^2)$ complexity per test
point prediction to $O(1)$. MSGP involves 1) decomposing covariance matrices as
Kronecker products of Toeplitz matrices approximated by circulant matrices.
This multi-level circulant approximation allows one to unify the orthogonal
computational benefits of fast Kronecker and Toeplitz approaches, and is
significantly faster than either approach in isolation; 2) local kernel
interpolation and inducing points to allow for arbitrarily located data inputs,
and $O(1)$ test time predictions; 3) exploiting block-Toeplitz Toeplitz-block
structure (BTTB), which enables fast inference and learning when
multidimensional Kronecker structure is not present; and 4) projections of the
input space to flexibly model correlated inputs and high dimensional data. The
ability to handle many ($m \approx n$) inducing points allows for near-exact
accuracy and large scale kernel learning.
| Andrew Gordon Wilson, Christoph Dann, Hannes Nickisch | null | 1511.01870 | null | null |
Stop Wasting My Gradients: Practical SVRG | cs.LG math.OC stat.CO stat.ML | We present and analyze several strategies for improving the performance of
stochastic variance-reduced gradient (SVRG) methods. We first show that the
convergence rate of these methods can be preserved under a decreasing sequence
of errors in the control variate, and use this to derive variants of SVRG that
use growing-batch strategies to reduce the number of gradient calculations
required in the early iterations. We further (i) show how to exploit support
vectors to reduce the number of gradient computations in the later iterations,
(ii) prove that the commonly-used regularized SVRG iteration is justified and
improves the convergence rate, (iii) consider alternate mini-batch selection
strategies, and (iv) consider the generalization error of the method.
| Reza Babanezhad, Mohamed Osama Ahmed, Alim Virani, Mark Schmidt, Jakub
Kone\v{c}n\'y, Scott Sallinen | null | 1511.01942 | null | null |
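As background, a compact numpy sketch of the basic SVRG iteration on l2-regularized logistic regression, i.e. the standard method that the paper's growing-batch and support-vector strategies improve upon; step size and epoch counts are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy l2-regularized logistic regression problem.
n, d = 1000, 20
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) + 0.5 * rng.normal(size=n) > 0).astype(float)
lam, step = 1e-3, 0.02

def grad_i(w, i):
    """Gradient of the i-th loss term (plus regularization)."""
    p = 1.0 / (1.0 + np.exp(-(X[i] @ w)))
    return (p - y[i]) * X[i] + lam * w

def full_grad(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y) / n + lam * w

w = np.zeros(d)
for epoch in range(20):
    w_snap = w.copy()
    mu = full_grad(w_snap)                # full gradient at the snapshot
    for _ in range(n):                    # inner loop of variance-reduced steps
        i = rng.integers(n)
        w -= step * (grad_i(w, i) - grad_i(w_snap, i) + mu)

print("norm of the full gradient at the final iterate:", np.linalg.norm(full_grad(w)))
```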
Enhanced Low-Rank Matrix Approximation | cs.CV cs.LG math.OC | This letter proposes to estimate low-rank matrices by formulating a convex
optimization problem with non-convex regularization. We employ parameterized
non-convex penalty functions to estimate the non-zero singular values more
accurately than the nuclear norm. A closed-form solution for the global optimum
of the proposed objective function (sum of data fidelity and the non-convex
regularizer) is also derived. The solution reduces to singular value
thresholding method as a special case. The proposed method is demonstrated for
image denoising.
| Ankit Parekh and Ivan W. Selesnick | 10.1109/LSP.2016.2535227 | 1511.01966 | null | null |
Towards a Better Understanding of Predict and Count Models | cs.LG cs.CL | In a recent paper, Levy and Goldberg pointed out an interesting connection
between prediction-based word embedding models and count models based on
pointwise mutual information. Under certain conditions, they showed that both
models end up optimizing equivalent objective functions. This paper explores
this connection in more detail and lays out the factors leading to differences
between these models. We find that the most relevant differences from an
optimization perspective are (i) predict models work in a low dimensional space
where embedding vectors can interact heavily; (ii) since predict models have
fewer parameters, they are less prone to overfitting.
Motivated by the insight of our analysis, we show how count models can be
regularized in a principled manner and provide closed-form solutions for L1 and
L2 regularization. Finally, we propose a new embedding model with a convex
objective and the additional benefit of being intelligible.
| S. Sathiya Keerthi, Tobias Schnabel, Rajiv Khanna | null | 1511.02024 | null | null |
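For context on the count-model side of the comparison, a minimal numpy sketch that turns a toy word-context co-occurrence matrix into a positive PMI matrix; the regularized and factorized variants discussed in the paper are not shown.

```python
import numpy as np

# Toy word-context co-occurrence counts (rows: words, columns: contexts).
counts = np.array([[10., 2., 0.],
                   [ 3., 8., 1.],
                   [ 0., 1., 9.]])

total = counts.sum()
p_wc = counts / total                      # joint probabilities
p_w = p_wc.sum(axis=1, keepdims=True)      # word marginals
p_c = p_wc.sum(axis=0, keepdims=True)      # context marginals

with np.errstate(divide="ignore"):         # zero counts give -inf PMI
    pmi = np.log(p_wc / (p_w * p_c))
ppmi = np.maximum(pmi, 0.0)                # positive PMI, a standard count model
print(np.round(ppmi, 2))
```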
Finding structure in data using multivariate tree boosting | stat.ML cs.LG | Technology and collaboration enable dramatic increases in the size of
psychological and psychiatric data collections, but finding structure in these
large data sets with many collected variables is challenging. Decision tree
ensembles like random forests (Strobl, Malley, and Tutz, 2009) are a useful
tool for finding structure, but are difficult to interpret with multiple
outcome variables which are often of interest in psychology. To find and
interpret structure in data sets with multiple outcomes and many predictors
(possibly exceeding the sample size), we introduce a multivariate extension to
a decision tree ensemble method called Gradient Boosted Regression Trees
(Friedman, 2001). Our method, multivariate tree boosting, can be used for
identifying important predictors, detecting predictors with non-linear effects
and interactions without specification of such effects, and for identifying
predictors that cause two or more outcome variables to covary without
parametric assumptions. We provide the R package 'mvtboost' to estimate, tune,
and interpret the resulting model, which extends the implementation of
univariate boosting in the R package 'gbm' (Ridgeway, 2013) to continuous,
multivariate outcomes. To illustrate the approach, we analyze predictors of
psychological well-being (Ryff and Keyes, 1995). Simulations verify that our
approach identifies predictors with non-linear effects and achieves high
prediction accuracy, exceeding or matching the performance of (penalized)
multivariate multiple regression and multivariate decision trees over a wide
range of conditions.
| Patrick J. Miller, Gitta H. Lubke, Daniel B. McArtor, C. S. Bergeman | 10.1037/met0000087 | 1511.02025 | null | null |
ALOJA-ML: A Framework for Automating Characterization and Knowledge
Discovery in Hadoop Deployments | cs.LG cs.DC | This article presents ALOJA-Machine Learning (ALOJA-ML) an extension to the
ALOJA project that uses machine learning techniques to interpret Hadoop
benchmark performance data and performance tuning; here we detail the approach,
efficacy of the model and initial results. Hadoop presents a complex execution
environment, where costs and performance depend on a large number of software
(SW) configurations and on multiple hardware (HW) deployment choices. These
results are accompanied by a test bed and tools to deploy and evaluate the
cost-effectiveness of the different hardware configurations, parameter tunings,
and Cloud services. Despite early success within ALOJA from expert-guided
benchmarking, it became clear that a genuinely comprehensive study requires
automation of modeling procedures to allow a systematic analysis of large and
resource-constrained search spaces. ALOJA-ML provides such an automated system
allowing knowledge discovery by modeling Hadoop executions from observed
benchmarks across a broad set of configuration parameters. The resulting
performance models can be used to forecast execution behavior of various
workloads; they allow 'a-priori' prediction of the execution times for new
configurations and HW choices and they offer a route to model-based anomaly
detection. In addition, these models can guide the benchmarking exploration
efficiently, by automatically prioritizing candidate future benchmark tests.
Insights from ALOJA-ML's models can be used to reduce the operational time on
clusters, speed-up the data acquisition and knowledge discovery process, and
importantly, reduce running costs. In addition to learning from the methodology
presented in this work, the community can benefit in general from ALOJA
data-sets, framework, and derived insights to improve the design and deployment
of Big Data applications.
| Josep Ll. Berral, Nicolas Poggi, David Carrera, Aaron Call, Rob
Reinauer, Daron Green | 10.1145/2783258.2788600 | 1511.02030 | null | null |
ALOJA: A Framework for Benchmarking and Predictive Analytics in Big Data
Deployments | cs.LG cs.DC | This article presents the ALOJA project and its analytics tools, which
leverages machine learning to interpret Big Data benchmark performance data and
tuning. ALOJA is part of a long-term collaboration between BSC and Microsoft to
automate the characterization of cost-effectiveness on Big Data deployments,
currently focusing on Hadoop. Hadoop presents a complex run-time environment,
where costs and performance depend on a large number of configuration choices.
The ALOJA project has created an open, vendor-neutral repository, featuring
over 40,000 Hadoop job executions and their performance details. The repository
is accompanied by a test-bed and tools to deploy and evaluate the
cost-effectiveness of different hardware configurations, parameters and Cloud
services. Despite early success within ALOJA, a comprehensive study requires
automation of modeling procedures to allow an analysis of large and
resource-constrained search spaces. The predictive analytics extension,
ALOJA-ML, provides an automated system allowing knowledge discovery by modeling
environments from observed executions. The resulting models can forecast
execution behaviors, predicting execution times for new configurations and
hardware choices. That also enables model-based anomaly detection or efficient
benchmark guidance by prioritizing executions. In addition, the community can
benefit from ALOJA data-sets and framework to improve the design and deployment
of Big Data applications.
| Josep Ll. Berral, Nicolas Poggi, David Carrera, Aaron Call, Rob
Reinauer, Daron Green | 10.1109/TETC.2015.2496504 | 1511.02037 | null | null |
Barrier Frank-Wolfe for Marginal Inference | stat.ML cs.LG math.OC | We introduce a globally-convergent algorithm for optimizing the
tree-reweighted (TRW) variational objective over the marginal polytope. The
algorithm is based on the conditional gradient method (Frank-Wolfe) and moves
pseudomarginals within the marginal polytope through repeated maximum a
posteriori (MAP) calls. This modular structure enables us to leverage black-box
MAP solvers (both exact and approximate) for variational inference, and obtains
more accurate results than tree-reweighted algorithms that optimize over the
local consistency relaxation. Theoretically, we bound the sub-optimality for
the proposed algorithm despite the TRW objective having unbounded gradients at
the boundary of the marginal polytope. Empirically, we demonstrate the
increased quality of results found by tightening the relaxation over the
marginal polytope as well as the spanning tree polytope on synthetic and
real-world instances.
| Rahul G. Krishnan, Simon Lacoste-Julien, David Sontag | null | 1511.02124 | null | null |
Diffusion-Convolutional Neural Networks | cs.LG | We present diffusion-convolutional neural networks (DCNNs), a new model for
graph-structured data. Through the introduction of a diffusion-convolution
operation, we show how diffusion-based representations can be learned from
graph-structured data and used as an effective basis for node classification.
DCNNs have several attractive qualities, including a latent representation for
graphical data that is invariant under isomorphism, as well as polynomial-time
prediction and learning that can be represented as tensor operations and
efficiently implemented on the GPU. Through several experiments with real
structured datasets, we demonstrate that DCNNs are able to outperform
probabilistic relational models and kernel-on-graph methods at relational node
classification tasks.
| James Atwood and Don Towsley | null | 1511.02136 | null | null |
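A minimal sketch of the diffusion-convolution operation described in the entry above, assuming a row-normalized transition matrix, a fixed hop count H, a tanh nonlinearity, and per-hop/per-feature weights; these choices are illustrative and not necessarily the authors' exact formulation.

```python
# Illustrative diffusion-convolution activation for node classification.
import numpy as np

def diffusion_convolution(A, X, W, H=3):
    """A: (n, n) adjacency, X: (n, f) node features, W: (H, f) weights."""
    deg = A.sum(axis=1, keepdims=True)
    P = A / np.maximum(deg, 1e-12)           # row-normalized transition matrix
    Pt = np.eye(A.shape[0])
    hops = []
    for _ in range(H):
        Pt = Pt @ P                          # successive diffusion steps P^1..P^H
        hops.append(Pt @ X)                  # diffused node features per hop
    PX = np.stack(hops, axis=1)              # (n, H, f) diffusion tensor
    Z = np.tanh(PX * W[None, :, :])          # elementwise weights per hop and feature
    return Z.reshape(A.shape[0], -1)         # (n, H*f) node representation

rng = np.random.default_rng(0)
A = (rng.random((10, 10)) < 0.3).astype(float)
X = rng.standard_normal((10, 4))
W = rng.standard_normal((3, 4))
print(diffusion_convolution(A, X, W).shape)  # (10, 12)
```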
Optimal Non-Asymptotic Lower Bound on the Minimax Regret of Learning
with Expert Advice | stat.ML cs.LG | We prove non-asymptotic lower bounds on the expectation of the maximum of $d$
independent Gaussian variables and the expectation of the maximum of $d$
independent symmetric random walks. Both lower bounds recover the optimal
leading constant in the limit. A simple application of the lower bound for
random walks is an (asymptotically optimal) non-asymptotic lower bound on the
minimax regret of online learning with expert advice.
| Francesco Orabona and David Pal | null | 1511.02176 | null | null |
Evaluating Protein-protein Interaction Predictors with a Novel
3-Dimensional Metric | cs.LG | In order for the predicted interactions to be directly adopted by biologists,
the machine learning predictions have to be of high precision, regardless of
recall. This aspect cannot be evaluated or numerically represented well by
traditional metrics like accuracy, ROC, or precision-recall curve. In this
work, we start from the alignment in sensitivity of ROC and recall of
precision-recall curve, and propose an evaluation metric focusing on the
ability of a model to be adopted by biologists. This metric evaluates the
ability of a machine learning algorithm to predict only new interactions,
while eliminating the influence of the test dataset. In experiments
evaluating different classifiers on the same data set and evaluating the same
predictor on different datasets, our new metric fulfills the evaluation task
of interest, while two widely recognized metrics, ROC and the precision-recall
curve, fail the tasks for different reasons.
| Haohan Wang, Madhavi K. Ganapathiraju | null | 1511.02196 | null | null |
Deep Kernel Learning | cs.LG cs.AI stat.ME stat.ML | We introduce scalable deep kernels, which combine the structural properties
of deep learning architectures with the non-parametric flexibility of kernel
methods. Specifically, we transform the inputs of a spectral mixture base
kernel with a deep architecture, using local kernel interpolation, inducing
points, and structure exploiting (Kronecker and Toeplitz) algebra for a
scalable kernel representation. These closed-form kernels can be used as
drop-in replacements for standard kernels, with benefits in expressive power
and scalability. We jointly learn the properties of these kernels through the
marginal likelihood of a Gaussian process. Inference and learning cost $O(n)$
for $n$ training points, and predictions cost $O(1)$ per test point. On a large
and diverse collection of applications, including a dataset with 2 million
examples, we show improved performance over scalable Gaussian processes with
flexible kernel learning models, and stand-alone deep architectures.
| Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, Eric P. Xing | null | 1511.02222 | null | null |
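A minimal sketch of the deep-kernel idea in the entry above: a standard base kernel applied to the outputs of a learned feature map, used as a drop-in kernel for GP regression. The two-layer feature map with random weights, the RBF base kernel (in place of the paper's spectral mixture kernel), and the naive GP solve are simplifying assumptions.

```python
# Illustrative "deep kernel": base kernel on top of a learned feature map.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((5, 3)), rng.standard_normal((2, 5))

def feature_map(X):                       # deep transformation g(x; w)
    return np.tanh(X @ W1.T) @ W2.T

def deep_rbf_kernel(Xa, Xb, lengthscale=1.0):
    Ga, Gb = feature_map(Xa), feature_map(Xb)
    d2 = ((Ga[:, None, :] - Gb[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

# GP posterior mean at test points, using the deep kernel as a drop-in.
X, y = rng.standard_normal((20, 3)), rng.standard_normal(20)
Xs = rng.standard_normal((4, 3))
K = deep_rbf_kernel(X, X) + 1e-2 * np.eye(20)   # noise term for stability
mean = deep_rbf_kernel(Xs, X) @ np.linalg.solve(K, y)
print(mean.shape)                               # (4,)
```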
Active Perceptual Similarity Modeling with Auxiliary Information | cs.LG stat.ML | Learning a model of perceptual similarity from a collection of objects is a
fundamental task in machine learning underlying numerous applications. A common
way to learn such a model is from relative comparisons in the form of triplets:
responses to queries of the form "Is object a more similar to b than it is to
c?". If no consideration is made in the determination of which queries to ask,
existing similarity learning methods can require a prohibitively large number
of responses. In this work, we consider the problem of actively learning from
triplets: finding which queries are most useful for learning. Different from
previous active triplet learning approaches, we incorporate auxiliary
information into our similarity model and introduce an active learning scheme
to find queries that are informative for quickly learning both the relevant
aspects of auxiliary data and the directly-learned similarity components.
Compared to prior approaches, we show that we can learn just as effectively
with far fewer queries. For evaluation, we introduce a new dataset of
exhaustive triplet comparisons obtained from humans and demonstrate improved
performance for different types of auxiliary information.
| Eric Heim (1), Matthew Berger (2), Lee Seversky (2), Milos Hauskrecht
(1) ((1) University of Pittsburgh, (2) Air Force Research Laboratory,
Information Directorate) | null | 1511.02254 | null | null |
Efficient Multiscale Gaussian Process Regression using Hierarchical
Clustering | cs.LG stat.ML | Standard Gaussian Process (GP) regression, a powerful machine learning tool,
is computationally expensive when it is applied to large datasets, and
potentially inaccurate when data points are sparsely distributed in a
high-dimensional feature space. To address these challenges, a new multiscale,
sparsified GP algorithm is formulated, with the goal of application to large
scientific computing datasets. In this approach, the data is partitioned into
clusters and the cluster centers are used to define a reduced training set,
resulting in an improvement over standard GPs in terms of training and
evaluation costs. Further, a hierarchical technique is used to adaptively map
the local covariance representation to the underlying sparsity of the feature
space, leading to improved prediction accuracy when the data distribution is
highly non-uniform. A theoretical investigation of the computational complexity
of the algorithm is presented. The efficacy of this method is then demonstrated
on smooth and discontinuous analytical functions and on data from a direct
numerical simulation of turbulent combustion.
| Z. Zhang, K. Duraisamy, N. A. Gumerov | null | 1511.02258 | null | null |
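A minimal sketch of the reduced-training-set idea in the entry above, assuming a single scale: cluster the data, place the GP on the cluster centers, and predict from the small center-sized kernel system. The hierarchical, adaptive multiscale mapping of the actual method is not reproduced; the RBF kernel and cluster count are illustrative.

```python
# Illustrative clustered / sparsified GP regression on cluster centers.
import numpy as np
from sklearn.cluster import KMeans

def rbf(Xa, Xb, ls=1.0):
    d2 = ((Xa[:, None, :] - Xb[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.05 * rng.standard_normal(2000)

km = KMeans(n_clusters=50, n_init=10, random_state=0).fit(X)
C = km.cluster_centers_                                   # reduced training set
yc = np.array([y[km.labels_ == k].mean() for k in range(50)])

K = rbf(C, C) + 1e-3 * np.eye(50)                         # 50x50 instead of 2000x2000
alpha = np.linalg.solve(K, yc)
Xs = rng.uniform(-3, 3, size=(5, 2))
pred = rbf(Xs, C) @ alpha                                  # cheap predictions
print(pred)
```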
Stacked Attention Networks for Image Question Answering | cs.LG cs.CL cs.CV cs.NE | This paper presents stacked attention networks (SANs) that learn to answer
natural language questions from images. SANs use the semantic representation of a
question as a query to search for the regions in an image that are related to the
answer. We argue that image question answering (QA) often requires multiple
steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an
image multiple times to infer the answer progressively. Experiments conducted
on four image QA data sets demonstrate that the proposed SANs significantly
outperform previous state-of-the-art approaches. The visualization of the
attention layers illustrates the progression by which the SAN locates, layer by
layer, the relevant visual clues that lead to the answer of the question.
| Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Smola | null | 1511.02274 | null | null |
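A minimal sketch of one attention layer of the kind described in the entry above: the question vector scores image regions, and the attended visual summary refines the query for the next reasoning hop. Dimensions, the scoring form, and the use of two hops are illustrative assumptions.

```python
# Illustrative stacked attention over image region features.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_layer(regions, query, Wv, Wq):
    scores = np.tanh(regions @ Wv + query @ Wq).sum(axis=1)   # one score per region
    weights = softmax(scores)
    attended = weights @ regions                               # weighted visual summary
    return query + attended                                    # refined query

rng = np.random.default_rng(0)
regions = rng.standard_normal((49, 64))     # e.g. a 7x7 grid of region features
query = rng.standard_normal(64)             # question embedding
Wv, Wq = rng.standard_normal((64, 32)), rng.standard_normal((64, 32))

u = query
for _ in range(2):                          # two attention "hops"
    u = attention_layer(regions, u, Wv, Wq)
print(u.shape)                              # final vector used to predict the answer
```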
Generation and Comprehension of Unambiguous Object Descriptions | cs.CV cs.CL cs.LG cs.RO | We propose a method that can generate an unambiguous description (known as a
referring expression) of a specific object or region in an image, and which can
also comprehend or interpret such an expression to infer which object is being
described. We show that our method outperforms previous methods that generate
descriptions of objects without taking into account other potentially ambiguous
objects in the scene. Our model is inspired by recent successes of deep
learning methods for image captioning, but while image captioning is difficult
to evaluate, our task allows for easy objective evaluation. We also present a
new large-scale dataset for referring expressions, based on MS-COCO. We have
released the dataset and a toolbox for visualization and evaluation, see
https://github.com/mjhucla/Google_Refexp_toolbox
| Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan
Yuille, Kevin Murphy | null | 1511.02283 | null | null |
Performance Analysis of Multiclass Support Vector Machine Classification
 for Diagnosis of Coronary Heart Diseases | cs.LG | Automatic diagnosis of coronary heart disease helps doctors in diagnostic
decision making. Coronary heart disease has several types or levels. In the UCI
Repository dataset it is divided into four types or levels, labeled 1-4 (low,
medium, high, and serious). The diagnosis models can therefore be analyzed with
a multiclass classification approach. One such approach is the support vector
machine (SVM), used here because of its strong performance in binary
classification. This research studies the multiclass classification performance
of SVM in diagnosing the type or level of coronary heart disease. Coronary
heart disease patient data are taken from the UCI Repository. The stages of
this study are preprocessing, which consists of normalizing the data and
dividing it into training and testing sets, followed by multiclass
classification and performance analysis. The study uses the following
multiclass SVM algorithms: Binary Tree Support Vector Machine (BTSVM),
One-Against-One (OAO), One-Against-All (OAA), Decision Direct Acyclic Graph
(DDAG), and Exhaustive Output Error Correction Code (ECOC). The performance
parameters used are recall, precision, F-measure, and overall accuracy.
| Wiharto Wiharto, Hari Kusnanto, Herianto Herianto | null | 1511.02352 | null | null |
Review-Level Sentiment Classification with Sentence-Level Polarity
 Correction | cs.CL cs.AI cs.LG | We propose an effective technique for solving the review-level sentiment
classification problem by using sentence-level polarity correction. Our
polarity correction technique takes into account the consistency of the
polarities (positive and negative) of sentences within each product review
before performing the actual machine learning task. While sentences with
inconsistent polarities are removed, sentences with consistent polarities are
used to learn state-of-the-art classifiers. The technique achieves better
results on different types of product reviews and outperforms baseline models
without the correction technique. Experimental results show an average of 82%
F-measure on four different product review domains.
| Sylvester Olubolu Orimaye, Saadat M. Alhashmi, Eu-Gene Siew and Sang
Jung Kang | null | 1511.02385 | null | null |
Hierarchical Variational Models | stat.ML cs.LG stat.CO stat.ME | Black box variational inference allows researchers to easily prototype and
evaluate an array of models. Recent advances allow such algorithms to scale to
high dimensions. However, a central question remains: How to specify an
expressive variational distribution that maintains efficient computation? To
address this, we develop hierarchical variational models (HVMs). HVMs augment a
variational approximation with a prior on its parameters, which allows it to
capture complex structure for both discrete and continuous latent variables.
The algorithm we develop is black box, can be used for any HVM, and has the
same computational efficiency as the original approximation. We study HVMs on a
variety of deep discrete latent variable models. HVMs generalize other
expressive variational distributions and maintain higher fidelity to the
posterior.
| Rajesh Ranganath, Dustin Tran, David M. Blei | null | 1511.02386 | null | null |
Max-Sum Diversification, Monotone Submodular Functions and Semi-metric
 Spaces | cs.LG | In many applications, such as web-based search, document summarization, and
facility location, it is preferable for the results to be both representative
and diversified subsets of documents. The goal of this study is
to select a good "quality", bounded-size subset of a given set of items, while
maintaining their diversity relative to a semi-metric distance function. This
problem was first studied by Borodin et al\cite{borodin}, but a crucial
property used throughout their proof is the triangle inequality. In this
modified proof, we want to relax the triangle inequality and relate the
approximation ratio of the max-sum diversification problem to the parameter of the
relaxed triangle inequality in the normal form of the problem (i.e., a uniform
matroid) and also in an arbitrary matroid.
| Sepehr Abbasi Zadeh, Mehrdad Ghadiri | null | 1511.02402 | null | null |
Algorithmic Stability for Adaptive Data Analysis | cs.LG cs.CR cs.DS | Adaptivity is an important feature of data analysis---the choice of questions
to ask about a dataset often depends on previous interactions with the same
dataset. However, statistical validity is typically studied in a nonadaptive
model, where all questions are specified before the dataset is drawn. Recent
work by Dwork et al. (STOC, 2015) and Hardt and Ullman (FOCS, 2014) initiated
the formal study of this problem, and gave the first upper and lower bounds on
the achievable generalization error for adaptive data analysis.
Specifically, suppose there is an unknown distribution $\mathbf{P}$ and a set
of $n$ independent samples $\mathbf{x}$ is drawn from $\mathbf{P}$. We seek an
algorithm that, given $\mathbf{x}$ as input, accurately answers a sequence of
adaptively chosen queries about the unknown distribution $\mathbf{P}$. How many
samples $n$ must we draw from the distribution, as a function of the type of
queries, the number of queries, and the desired level of accuracy?
In this work we make two new contributions:
(i) We give upper bounds on the number of samples $n$ that are needed to
answer statistical queries. The bounds improve and simplify the work of Dwork
et al. (STOC, 2015), and have been applied in subsequent work by those authors
(Science, 2015, NIPS, 2015).
(ii) We prove the first upper bounds on the number of samples required to
answer more general families of queries. These include arbitrary
low-sensitivity queries and an important class of optimization queries.
As in Dwork et al., our algorithms are based on a connection with algorithmic
stability in the form of differential privacy. We extend their work by giving a
quantitatively optimal, more general, and simpler proof of their main theorem
that stability implies low generalization error. We also study weaker stability
guarantees such as bounded KL divergence and total variation distance.
| Raef Bassily, Kobbi Nissim, Adam Smith, Thomas Steinke, Uri Stemmer,
Jonathan Ullman | null | 1511.02513 | null | null |
Speed learning on the fly | math.OC cs.LG stat.ML | The practical performance of online stochastic gradient descent algorithms is
highly dependent on the chosen step size, which must be tediously hand-tuned in
many applications. The same is true for more advanced variants of stochastic
gradients, such as SAGA, SVRG, or AdaGrad. Here we propose to adapt the step
size by performing a gradient descent on the step size itself, viewing the
whole performance of the learning trajectory as a function of step size.
Importantly, this adaptation can be computed online at little cost, without
having to iterate backward passes over the full data.
| Pierre-Yves Mass\'e and Yann Ollivier | null | 1511.02540 | null | null |
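A minimal sketch of the idea in the entry above, adapting the step size online by a gradient step on the step size itself. The particular hypergradient-style update shown (eta += meta_lr * g_t . g_{t-1}) is one simple instantiation chosen for illustration, not necessarily the authors' exact scheme.

```python
# Illustrative online step-size adaptation for SGD on a least-squares stream.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
w = np.zeros(2)
eta, meta_lr = 0.01, 1e-3
g_prev = np.zeros(2)

for t in range(500):
    x = rng.standard_normal(2)
    y = x @ w_true + 0.1 * rng.standard_normal()
    g = (x @ w - y) * x                              # stochastic gradient of squared error
    eta = max(eta + meta_lr * (g @ g_prev), 1e-6)    # gradient step on the step size itself
    w -= eta * g
    g_prev = g

print(w, eta)
```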
Sandwiching the marginal likelihood using bidirectional Monte Carlo | stat.ML cs.LG stat.CO | Computing the marginal likelihood (ML) of a model requires marginalizing out
all of the parameters and latent variables, a difficult high-dimensional
summation or integration problem. To make matters worse, it is often hard to
measure the accuracy of one's ML estimates. We present bidirectional Monte
Carlo, a technique for obtaining accurate log-ML estimates on data simulated
from a model. This method obtains stochastic lower bounds on the log-ML using
annealed importance sampling or sequential Monte Carlo, and obtains stochastic
upper bounds by running these same algorithms in reverse starting from an exact
posterior sample. The true value can be sandwiched between these two stochastic
bounds with high probability. Using the ground truth log-ML estimates obtained
from our method, we quantitatively evaluate a wide variety of existing ML
estimators on several latent variable models: clustering, a low rank
approximation, and a binary attributes model. These experiments yield insights
into how to accurately estimate marginal likelihoods.
| Roger B. Grosse, Zoubin Ghahramani, and Ryan P. Adams | null | 1511.02543 | null | null |
Deep Recurrent Neural Networks for Sequential Phenotype Prediction in
 Genomics | cs.NE cs.CE cs.LG | In the analysis of modern biological data, we often deal with ill-posed
problems and missing data, mostly due to high dimensionality and
multicollinearity of the dataset. In this paper, we have proposed a system
based on matrix factorization (MF) and deep recurrent neural networks (DRNNs)
for genotype imputation and phenotype sequences prediction. In order to model
the long-term dependencies of phenotype data, the new Recurrent Linear Units
(ReLU) learning strategy is utilized for the first time. The proposed model is
implemented for parallel processing on central processing units (CPUs) and
graphic processing units (GPUs). Performance of the proposed model is compared
with other training algorithms for learning long-term dependencies as well as
the sparse partial least square (SPLS) method on a set of genotype and
phenotype data with 604 samples, 1980 single-nucleotide polymorphisms (SNPs),
and two traits. The results demonstrate performance of the ReLU training
algorithm in learning long-term dependencies in RNNs.
| Farhad Pouladi, Hojjat Salehinejad and Amir Mohammad Gilani | null | 1511.02554 | null | null |
How far can we go without convolution: Improving fully-connected
networks | cs.LG cs.NE | We propose ways to improve the performance of fully connected networks. We
found that two approaches in particular have a strong effect on performance:
linear bottleneck layers and unsupervised pre-training using autoencoders
without hidden unit biases. We show how both approaches can be related to
improving gradient flow and reducing sparsity in the network. We show that a
fully connected network can yield approximately 70% classification accuracy on
the permutation-invariant CIFAR-10 task, which is much higher than the current
state-of-the-art. By adding deformations to the training data, the fully
connected network achieves 78% accuracy, which is just 10% short of a decent
convolutional network.
| Zhouhan Lin, Roland Memisevic, Kishore Konda | null | 1511.02580 | null | null |
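A minimal sketch of one of the two ingredients in the entry above, a fully connected network with a linear bottleneck layer; layer widths and bottleneck placement are illustrative, and the unsupervised autoencoder pre-training without hidden-unit biases is not shown.

```python
# Illustrative fully connected network with a linear bottleneck layer.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(3 * 32 * 32, 4000), nn.ReLU(),
    nn.Linear(4000, 1000),                 # linear bottleneck: no nonlinearity here
    nn.Linear(1000, 4000), nn.ReLU(),
    nn.Linear(4000, 10),                   # 10-way classifier head
)
print(model)
```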
Batch-normalized Maxout Network in Network | cs.CV cs.LG | This paper reports a novel deep architecture referred to as Maxout network In
Network (MIN), which can enhance model discriminability and facilitate the
process of information abstraction within the receptive field. The proposed
network adopts the framework of the recently developed Network In Network
structure, which slides a universal approximator, multilayer perceptron (MLP)
with rectifier units, to extract features. Instead of MLP, we employ maxout MLP
to learn a variety of piecewise linear activation functions and to mediate the
problem of vanishing gradients that can occur when using rectifier units.
Moreover, batch normalization is applied to reduce the saturation of maxout
units by pre-conditioning the model and dropout is applied to prevent
overfitting. Finally, average pooling is used in all pooling layers to
regularize maxout MLP in order to facilitate information abstraction in every
receptive field while tolerating the change of object position. Because average
pooling preserves all features in the local patch, the proposed MIN model can
enforce the suppression of irrelevant information during training. Our
experiments demonstrated the state-of-the-art classification performance when
the MIN model was applied to MNIST, CIFAR-10, and CIFAR-100 datasets and
comparable performance for SVHN dataset.
| Jia-Ren Chang and Yong-Sheng Chen | null | 1511.02583 | null | null |
A New Relaxation Approach to Normalized Hypergraph Cut | cs.LG cs.DS | Normalized graph cut (NGC) has become a popular research topic due to its
wide applications in a large variety of areas like machine learning and very
large scale integration (VLSI) circuit design. Most traditional NGC methods
are based on pairwise relationships (similarities). However, in real-world
applications relationships among the vertices (objects) may be more complex
than pairwise, which are typically represented as hyperedges in hypergraphs.
Thus, normalized hypergraph cut (NHC) has attracted more and more attention.
Existing NHC methods cannot achieve satisfactory performance in real
applications. In this paper, we propose a novel relaxation approach, which is
called relaxed NHC (RNHC), to solve the NHC problem. Our model is defined as an
optimization problem on the Stiefel manifold. To solve this problem, we resort
to the Cayley transformation to devise a feasible learning algorithm.
Experimental results on a set of large hypergraph benchmarks for clustering and
partitioning in VLSI domain show that RNHC can outperform the state-of-the-art
methods.
| Cong Xie, Wu-Jun Li and Zhihua Zhang | null | 1511.02595 | null | null |
Decomposition Bounds for Marginal MAP | cs.LG cs.AI cs.IT math.IT stat.ML | Marginal MAP inference involves making MAP predictions in systems defined
with latent variables or missing information. It is significantly more
difficult than pure marginalization and MAP tasks, for which a large class of
efficient and convergent variational algorithms, such as dual decomposition,
exist. In this work, we generalize dual decomposition to a generic power sum
inference task, which includes marginal MAP, along with pure marginalization
and MAP, as special cases. Our method is based on a block coordinate descent
algorithm on a new convex decomposition bound, that is guaranteed to converge
monotonically, and can be parallelized efficiently. We demonstrate our approach
on marginal MAP queries defined on real-world problems from the UAI approximate
inference challenge, showing that our framework is faster and more reliable
than previous methods.
| Wei Ping, Qiang Liu, Alexander Ihler | null | 1511.02619 | null | null |
Generating Images from Captions with Attention | cs.LG cs.CV | Motivated by the recent progress in generative models, we introduce a model
that generates images from natural language descriptions. The proposed model
iteratively draws patches on a canvas, while attending to the relevant words in
the description. After training on Microsoft COCO, we compare our model with
several baseline generative models on image generation and retrieval tasks. We
demonstrate that our model produces higher quality samples than other
approaches and generates images with novel scene compositions corresponding to
previously unseen captions in the dataset.
| Elman Mansimov, Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | null | 1511.02793 | null | null |
Neural Module Networks | cs.CV cs.CL cs.LG cs.NE | Visual question answering is fundamentally compositional in nature---a
question like "where is the dog?" shares substructure with questions like "what
color is the dog?" and "where is the cat?" This paper seeks to simultaneously
exploit the representational capacity of deep networks and the compositional
linguistic structure of questions. We describe a procedure for constructing and
learning *neural module networks*, which compose collections of jointly-trained
neural "modules" into deep networks for question answering. Our approach
decomposes questions into their linguistic substructures, and uses these
structures to dynamically instantiate modular networks (with reusable
components for recognizing dogs, classifying colors, etc.). The resulting
compound networks are jointly trained. We evaluate our approach on two
challenging datasets for visual question answering, achieving state-of-the-art
results on both the VQA natural image dataset and a new dataset of complex
questions about abstract shapes.
| Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein | null | 1511.02799 | null | null |
Multiple Instance Dictionary Learning using Functions of Multiple
Instances | cs.CV cs.LG stat.ML | A multiple instance dictionary learning method using functions of multiple
instances (DL-FUMI) is proposed to address target detection and two-class
classification problems with inaccurate training labels. Given inaccurate
training labels, DL-FUMI learns a set of target dictionary atoms that describe
the most distinctive and representative features of the true positive class as
well as a set of nontarget dictionary atoms that account for the shared
information found in both the positive and negative instances. Experimental
results show that the estimated target dictionary atoms found by DL-FUMI are
more representative prototypes and identify better discriminative features of
the true positive class than existing methods in the literature. DL-FUMI is
shown to have significantly better performance on several target detection and
classification problems as compared to other multiple instance learning (MIL)
dictionary learning algorithms on a variety of MIL problems.
| Changzhe Jiao, Alina Zare | null | 1511.02825 | null | null |
Symmetries and control in generative neural nets | cs.CV cs.LG | We study generative nets which can control and modify observations, after
being trained on real-life datasets. In order to zoom-in on an object, some
spatial, color and other attributes are learned by classifiers in specialized
attention nets. In field-theoretical terms, these learned symmetry statistics
form the gauge group of the data set. Plugging them in the generative layers of
auto-classifiers-encoders (ACE) appears to be the most direct way to
simultaneously: i) generate new observations with arbitrary attributes, from a
given class, ii) describe the low-dimensional manifold encoding the "essence"
of the data, after superfluous attributes are factored out, and iii)
organically control, i.e., move or modify objects within given observations. We
demonstrate the sharp improvement of the generative qualities of shallow ACE,
with added spatial and color symmetry statistics, on the distorted MNIST and
CIFAR10 datasets.
| Galin Georgiev | null | 1511.02841 | null | null |
Visual Language Modeling on CNN Image Representations | cs.CV cs.AI cs.LG | Measuring the naturalness of images is important to generate realistic images
or to detect unnatural regions in images. Additionally, a method to measure
naturalness can be complementary to Convolutional Neural Network (CNN) based
features, which are known to be insensitive to the naturalness of images.
However, most probabilistic image models have insufficient capability of
modeling the complex and abstract naturalness that we feel because they are
built directly on raw image pixels. In this work, we assume that naturalness
can be measured by the predictability on high-level features during eye
movement. Based on this assumption, we propose a novel method to evaluate the
naturalness by building a variant of Recurrent Neural Network Language Models
on pre-trained CNN representations. Our method is applied to two tasks,
demonstrating that 1) using our method as a regularizer enables us to generate
more understandable images from image features than existing approaches, and 2)
unnaturalness maps produced by our method achieve state-of-the-art eye fixation
prediction performance on two well-studied datasets.
| Hiroharu Kato and Tatsuya Harada | null | 1511.02872 | null | null |
Neighbourhood NILM: A Big-data Approach to Household Energy
Disaggregation | cs.LG | In this paper, we investigate whether "big-data" is more valuable than
"precise" data for the problem of energy disaggregation: the process of
breaking down aggregate energy usage on a per-appliance basis. Existing
techniques for disaggregation rely on energy metering at a resolution of 1
minute or higher, but most power meters today only provide a reading once per
month, and at most once every 15 minutes. In this paper, we propose a new
technique called Neighbourhood NILM that leverages data from 'neighbouring'
homes to disaggregate energy given only a single energy reading per month. The
key intuition behind our approach is that 'similar' homes have 'similar' energy
consumption on a per-appliance basis. Neighbourhood NILM matches every home
with a set of 'neighbours' that have direct submetering infrastructure, i.e.
power meters on individual circuits or loads. Many such homes already exist.
Then, it estimates the appliance-level energy consumption of the target home to
be the average of its K neighbours. We evaluate this approach using 25 homes
and results show that our approach gives comparable or better disaggregation
accuracy than state-of-the-art results reported in the literature, which depend
on manual model training, high-frequency power metering, or both. Results show
that Neighbourhood NILM can achieve 83% and 79% accuracy disaggregating fridge
and heating/cooling loads, compared to 74% and 73% for a technique called FHMM.
Furthermore, it achieves up to 64% accuracy on washing machine, dryer,
dishwasher, and lighting loads, which is higher than previously reported
results. Many existing techniques are not able to disaggregate these loads at
all. These results indicate a potentially substantial advantage to installing
submetering infrastructure in a select few homes rather than installing new
high-frequency smart metering infrastructure in all homes.
| Nipun Batra and Amarjeet Singh and Kamin Whitehouse | null | 1511.02900 | null | null |
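A minimal sketch of the neighbour-averaging idea in the entry above: match the target home to its K most similar submetered homes and estimate each appliance's energy as the neighbours' average. The feature vector, the similarity measure, and K are illustrative assumptions; the data here are synthetic.

```python
# Illustrative neighbour-based disaggregation from a single monthly reading.
import numpy as np

rng = np.random.default_rng(0)
# Submetered homes: features (monthly kWh, occupants, floor area) and a
# per-appliance breakdown (fridge, heating/cooling, other).
features = rng.uniform([200, 1, 50], [800, 6, 250], size=(100, 3))
breakdown = rng.dirichlet([2, 5, 8], size=100) * features[:, [0]]

def disaggregate(target_features, K=3):
    z = (features - features.mean(0)) / features.std(0)
    t = (target_features - features.mean(0)) / features.std(0)
    idx = np.argsort(((z - t) ** 2).sum(1))[:K]        # K nearest submetered homes
    return breakdown[idx].mean(axis=0)                 # appliance-level estimate

print(disaggregate(np.array([450.0, 3, 120.0])))
```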
Efficient Construction of Local Parametric Reduced Order Models Using
Machine Learning Techniques | cs.LG | Reduced order models are computationally inexpensive approximations that
capture the important dynamical characteristics of large, high-fidelity
computer models of physical systems. This paper applies machine learning
techniques to improve the design of parametric reduced order models.
Specifically, machine learning is used to develop feasible regions in the
parameter space where the admissible target accuracy is achieved with a
predefined reduced order basis, to construct parametric maps, to choose the
two best existing bases for a new parameter configuration from an accuracy
point of view, and to pre-select the optimal dimension of the reduced basis such
as to meet the desired accuracy. By combining available information using bases
concatenation and interpolation as well as high-fidelity solutions
interpolation we are able to build accurate reduced order models associated
with new parameter settings. Promising numerical results with a viscous Burgers
model illustrate the potential of machine learning approaches to help design
better reduced order models.
| Azam Moosavi and Razvan Stefanescu and Adrian Sandu | null | 1511.02909 | null | null |
Spectral-Spatial Classification of Hyperspectral Image Using
Autoencoders | cs.CV cs.AI cs.LG | Hyperspectral image (HSI) classification is a hot topic in the remote sensing
community. This paper proposes a new framework of spectral-spatial feature
extraction for HSI classification, in which for the first time the concept of
deep learning is introduced. Specifically, the model of autoencoder is
exploited in our framework to extract various kinds of features. First we
verify the eligibility of the autoencoder by following classical spectral
information based classification and use autoencoders of different depths to
classify the hyperspectral image. Further, in the proposed framework, we combine PCA
on spectral dimension and autoencoder on the other two spatial dimensions to
extract spectral-spatial information for classification. The experimental
results show that this framework achieves the highest classification accuracy
among all methods, and outperforms classical classifiers such as SVM and
PCA-based SVM.
| Zhouhan Lin, Yushi Chen, Xing Zhao, Gang Wang | 10.1109/ICICS.2013.6782778 | 1511.02916 | null | null |
Reducing the Training Time of Neural Networks by Partitioning | cs.NE cs.LG | This paper presents a new method for pre-training neural networks that can
decrease the total training time for a neural network while maintaining the
final performance, which motivates its use on deep neural networks. By
partitioning the training task in multiple training subtasks with sub-models,
which can be performed independently and in parallel, it is shown that the size
of the sub-models reduces almost quadratically with the number of subtasks
created, quickly scaling down the sub-models used for the pre-training. The
sub-models are then merged to provide a pre-trained initial set of weights for
the original model. The proposed method is independent of the other aspects of
the training, such as architecture of the neural network, training method, and
objective, making it compatible with a wide range of existing approaches. The
speedup without loss of performance is validated experimentally on MNIST and on
CIFAR10 data sets, also showing that even performing the subtasks sequentially
can decrease the training time. Moreover, we show that larger models may
present higher speedups and conjecture about the benefits of the method in
distributed learning systems.
| Conrado S. Miranda and Fernando J. Von Zuben | null | 1511.02954 | null | null |
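A minimal sketch of the merge step suggested by the entry above: sub-models, each trained independently on a slice of the hidden layer, are combined into a single set of pre-trained initial weights. Training of the sub-models is elided, and the block-wise merge shown is an illustrative assumption rather than the paper's exact procedure.

```python
# Illustrative merge of independently trained sub-models into one initial model.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out, n_parts = 784, 256, 10, 4
h = n_hidden // n_parts

# Pretend these weights came from n_parts independently trained sub-models.
sub_W1 = [rng.standard_normal((h, n_in)) * 0.01 for _ in range(n_parts)]
sub_W2 = [rng.standard_normal((n_out, h)) * 0.01 for _ in range(n_parts)]

# Merge: stack the hidden-layer weights; place each sub-model's output weights
# on its own hidden slice, scaled so contributions are averaged.
W1 = np.concatenate(sub_W1, axis=0)                      # (n_hidden, n_in)
W2 = np.zeros((n_out, n_hidden))
for k in range(n_parts):
    W2[:, k * h:(k + 1) * h] = sub_W2[k] / n_parts

print(W1.shape, W2.shape)   # pre-trained initialization for the full model
```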
Learning with a Strong Adversary | cs.LG | The robustness of neural networks to intended perturbations has recently
attracted significant attention. In this paper, we propose a new method,
\emph{learning with a strong adversary}, that learns robust classifiers from
supervised data. The proposed method takes finding adversarial examples as an
intermediate step. A new and simple way of finding adversarial examples is
presented and experimentally shown to be efficient. Experimental results
demonstrate that the resulting learning method greatly improves the robustness of
the classification models produced.
| Ruitong Huang, Bing Xu, Dale Schuurmans, Csaba Szepesvari | null | 1511.03034 | null | null |
Tiny Descriptors for Image Retrieval with Unsupervised Triplet Hashing | cs.IR cs.CV cs.LG | A typical image retrieval pipeline starts with the comparison of global
descriptors from a large database to find a short list of candidate matches. A
good image descriptor is key to the retrieval pipeline and should reconcile two
contradictory requirements: providing recall rates as high as possible and
being as compact as possible for fast matching. Following the recent successes
of Deep Convolutional Neural Networks (DCNN) for large scale image
classification, descriptors extracted from DCNNs are increasingly used in place
of the traditional hand crafted descriptors such as Fisher Vectors (FV) with
better retrieval performances. Nevertheless, the dimensionality of a typical
DCNN descriptor --extracted either from the visual feature pyramid or the
fully-connected layers-- remains quite high at several thousands of scalar
values. In this paper, we propose Unsupervised Triplet Hashing (UTH), a fully
unsupervised method to compute extremely compact binary hashes --in the 32-256
bits range-- from high-dimensional global descriptors. UTH consists of two
successive deep learning steps. First, Stacked Restricted Boltzmann Machines
(SRBM), a type of unsupervised deep neural nets, are used to learn binary
embedding functions able to bring the descriptor size down to the desired
bitrate. SRBMs are typically able to ensure a very high compression rate at the
expense of losing some desirable metric properties of the original DCNN
descriptor space. Then, triplet networks, a rank learning scheme based on
weight sharing nets is used to fine-tune the binary embedding functions to
retain as much as possible of the useful metric properties of the original
space. A thorough empirical evaluation conducted on multiple publicly available
dataset using DCNN descriptors shows that our method is able to significantly
outperform state-of-the-art unsupervised schemes in the target bit range.
| Jie Lin, Olivier Mor\`ere, Julie Petta, Vijay Chandrasekhar, Antoine
Veillard | null | 1511.03055 | null | null |
The CTU Prague Relational Learning Repository | cs.LG cs.DB | The aim of the CTU Prague Relational Learning Repository is to support
machine learning research with multi-relational data. The repository currently
contains 50 SQL databases hosted on a public MySQL server located at
relational.fit.cvut.cz. A searchable meta-database provides metadata (e.g., the
number of tables in the database, the number of rows and columns in the tables,
the number of foreign key constraints between tables).
| Jan Motl and Oliver Schulte | null | 1511.03086 | null | null |
Semi-supervised Tuning from Temporal Coherence | cs.LG stat.ML | Recent works demonstrated the usefulness of temporal coherence to regularize
supervised training or to learn invariant features with deep architectures. In
particular, enforcing smooth output changes while presenting temporally-closed
frames from video sequences, proved to be an effective strategy. In this paper
we prove the efficacy of temporal coherence for semi-supervised incremental
tuning. We show that a deep architecture, just mildly trained in a supervised
manner, can progressively improve its classification accuracy, if exposed to
video sequences of unlabeled data. The extent to which, in some cases, a
semi-supervised tuning allows classification accuracy to improve (approaching
the supervised one) is somewhat surprising. A number of control experiments
pointed out the fundamental role of temporal coherence.
| Davide Maltoni and Vincenzo Lomonaco | null | 1511.03163 | null | null |
Sliced Wasserstein Kernels for Probability Distributions | cs.LG stat.ML | Optimal transport distances, otherwise known as Wasserstein distances, have
recently drawn ample attention in computer vision and machine learning as a
powerful discrepancy measure for probability distributions. The recent
developments on alternative formulations of the optimal transport have allowed
for faster solutions to the problem and have revamped its practical applications
in machine learning. In this paper, we exploit the widely used kernel methods
and provide a family of provably positive definite kernels based on the Sliced
Wasserstein distance and demonstrate the benefits of these kernels in a variety
of learning tasks. Our work provides a new perspective on the application of
optimal transport flavored distances through kernel methods in machine learning
tasks.
| Soheil Kolouri, Yang Zou, and Gustavo K. Rohde | null | 1511.03198 | null | null |
Label Efficient Learning by Exploiting Multi-class Output Codes | cs.LG | We present a new perspective on the popular multi-class algorithmic
techniques of one-vs-all and error correcting output codes. Rather than
studying the behavior of these techniques for supervised learning, we establish
a connection between the success of these methods and the existence of
label-efficient learning procedures. We show that in both the realizable and
agnostic cases, if output codes are successful at learning from labeled data,
they implicitly assume structure on how the classes are related. By making that
structure explicit, we design learning algorithms to recover the classes with
low label complexity. We provide results for the commonly studied cases of
one-vs-all learning and when the codewords of the classes are well separated.
We additionally consider the more challenging case where the codewords are not
well separated, but satisfy a boundary features condition that captures the
natural intuition that every bit of the codewords should be significant.
| Maria Florina Balcan, Travis Dick, Yishay Mansour | null | 1511.03225 | null | null |
Learning Communities in the Presence of Errors | cs.DS cs.LG math.ST stat.TH | We study the problem of learning communities in the presence of modeling
errors and give robust recovery algorithms for the Stochastic Block Model
(SBM). This model, which is also known as the Planted Partition Model, is
widely used for community detection and graph partitioning in various fields,
including machine learning, statistics, and social sciences. Many algorithms
exist for learning communities in the Stochastic Block Model, but they do not
work well in the presence of errors.
In this paper, we initiate the study of robust algorithms for partial
recovery in SBM with modeling errors or noise. We consider graphs generated
according to the Stochastic Block Model and then modified by an adversary. We
allow two types of adversarial errors, Feige---Kilian or monotone errors, and
edge outlier errors. Mossel, Neeman and Sly (STOC 2015) posed an open question
about whether an almost exact recovery is possible when the adversary is
allowed to add $o(n)$ edges. Our work answers this question affirmatively even
in the case of $k>2$ communities.
We then show that our algorithms work not only when the instances come from
SBM, but also work when the instances come from any distribution of graphs that
is $\epsilon m$ close to SBM in the Kullback---Leibler divergence. This result
also works in the presence of adversarial errors. Finally, we present almost
tight lower bounds for two communities.
| Konstantin Makarychev, Yury Makarychev and Aravindan Vijayaraghavan | null | 1511.03229 | null | null |
A Hierarchical Spectral Method for Extreme Classification | stat.ML cs.LG | Extreme classification problems are multiclass and multilabel classification
problems where the number of outputs is so large that straightforward
strategies are neither statistically nor computationally viable. One strategy
for dealing with the computational burden is via a tree decomposition of the
output space. While this typically leads to training and inference that scales
sublinearly with the number of outputs, it also results in reduced statistical
performance. In this work, we identify two shortcomings of tree decomposition
methods, and describe two heuristic mitigations. We compose these with an
eigenvalue technique for constructing the tree. The end result is a
computationally efficient algorithm that provides good statistical performance
on several extreme data sets.
| Paul Mineiro and Nikos Karampatziakis | null | 1511.03260 | null | null |
Anchored Discrete Factor Analysis | stat.ML cs.LG | We present a semi-supervised learning algorithm for learning discrete factor
analysis models with arbitrary structure on the latent variables. Our algorithm
assumes that every latent variable has an "anchor", an observed variable with
only that latent variable as its parent. Given such anchors, we show that it is
possible to consistently recover moments of the latent variables and use these
moments to learn complete models. We also introduce a new technique for
improving the robustness of method-of-moment algorithms by optimizing over the
marginal polytope or its relaxations. We evaluate our algorithm using two
real-world tasks, tag prediction on questions from the Stack Overflow website
and medical diagnosis in an emergency department.
| Yoni Halpern and Steven Horng and David Sontag | null | 1511.03299 | null | null |
Visual7W: Grounded Question Answering in Images | cs.CV cs.LG cs.NE | We have seen great progress in basic perceptual tasks such as object
recognition and detection. However, AI models still fail to match humans in
high-level vision tasks due to the lack of capacities for deeper reasoning.
Recently the new task of visual question answering (QA) has been proposed to
evaluate a model's capacity for deep image understanding. Previous works have
established a loose, global association between QA sentences and images.
However, many questions and answers, in practice, relate to local regions in
the images. We establish a semantic link between textual descriptions and image
regions by object-level grounding. It enables a new type of QA with visual
answers, in addition to textual answers used in previous work. We study the
visual QA tasks in a grounded setting with a large collection of 7W
multiple-choice QA pairs. Furthermore, we evaluate human performance and
several baseline models on the QA tasks. Finally, we propose a novel LSTM model
with spatial attention to tackle the 7W QA tasks.
| Yuke Zhu, Oliver Groth, Michael Bernstein and Li Fei-Fei | null | 1511.03416 | null | null |
Hierarchical Latent Semantic Mapping for Automated Topic Generation | cs.LG cs.CL cs.IR | Much information sits in an unprecedented amount of text data. Managing the
allocation of these large-scale text data is an important problem in many
areas. Topic modeling performs well on this problem. The traditional generative
models (PLSA,LDA) are the state-of-the-art approaches in topic modeling and
most recent research on topic generation has been focusing on improving or
extending these models. However, results of traditional generative models are
sensitive to the number of topics K, which must be specified manually. The
problem of generating topics from corpus resembles community detection in
networks. Many effective algorithms can automatically detect communities from
networks without a manually specified number of the communities. Inspired by
these algorithms, in this paper, we propose a novel method named Hierarchical
Latent Semantic Mapping (HLSM), which automatically generates topics from
corpus. HLSM calculates the association between each pair of words in the
latent topic space, then constructs a unipartite network of words with this
association and hierarchically generates topics from this network. We apply
HLSM to several document collections and the experimental comparisons against
several state-of-the-art approaches demonstrate promising performance.
| Guorui Zhou, Guang Chen | null | 1511.03546 | null | null |
Federated Optimization:Distributed Optimization Beyond the Datacenter | cs.LG math.OC | We introduce a new and increasingly relevant setting for distributed
optimization in machine learning, where the data defining the optimization are
distributed (unevenly) over an extremely large number of nodes, but the goal
remains to train a high-quality centralized model. We refer to this setting as
Federated Optimization. In this setting, communication efficiency is of utmost
importance.
A motivating example for federated optimization arises when we keep the
training data locally on users' mobile devices rather than logging it to a data
center for training. Instead, the mobile devices are used as nodes performing
computation on their local data in order to update a global model. We suppose
that we have an extremely large number of devices in our network, each of which
has only a tiny fraction of the total data available; in particular, we expect
the number of data points available locally to be much smaller than the number
of devices. Additionally, since different users generate data with different
patterns, we assume that no device has a representative sample of the overall
distribution.
We show that existing algorithms are not suitable for this setting, and
propose a new algorithm which shows encouraging experimental results. This work
also sets a path for future research needed in the context of federated
optimization.
| Jakub Kone\v{c}n\'y, Brendan McMahan, Daniel Ramage | null | 1511.03575 | null | null |
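A minimal sketch of one communication round in the federated setting described in the entry above: each node performs local updates on its own small, non-representative dataset and the server aggregates by weighted averaging. This generic local-update-and-average scheme only illustrates the setting; it is not the specific algorithm proposed in the paper.

```python
# Illustrative federated round: local least-squares SGD, then weighted averaging.
import numpy as np

rng = np.random.default_rng(0)
d, n_nodes = 5, 100
w_global = np.zeros(d)

node_data = []
for _ in range(n_nodes):                      # each node holds only a few examples
    X = rng.standard_normal((int(rng.integers(2, 10)), d))
    y = X @ np.arange(1.0, d + 1) + 0.1 * rng.standard_normal(len(X))
    node_data.append((X, y))

def local_update(w, X, y, lr=0.05, steps=5):
    for _ in range(steps):
        g = X.T @ (X @ w - y) / len(X)        # local gradient on local data only
        w = w - lr * g
    return w

for _ in range(20):                           # communication rounds
    updates = [local_update(w_global.copy(), X, y) for X, y in node_data]
    sizes = np.array([len(X) for X, _ in node_data], dtype=float)
    w_global = np.average(np.stack(updates), axis=0, weights=sizes)

print(w_global)                               # approaches [1, 2, 3, 4, 5]
```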
DataGrinder: Fast, Accurate, Fully non-Parametric Classification
Approach Using 2D Convex Hulls | cs.DB cs.CG cs.LG | It has been a long time, since data mining technologies have made their ways
to the field of data management. Classification is one of the most important
data mining tasks for label prediction, categorization of objects into groups,
advertisement and data management. In this paper, we focus on the standard
classification problem which is predicting unknown labels in Euclidean space.
Most efforts in Machine Learning communities are devoted to methods that use
probabilistic algorithms which are heavy on Calculus and Linear Algebra. Most
of these techniques have scalability issues for big data, and are hardly
parallelizable if they are to maintain their high accuracies in their standard
form. Sampling is a new direction for improving scalability, using many small
parallel classifiers. In this paper, rather than conventional sampling methods,
we focus on a discrete classification algorithm with O(n) expected running
time. Our approach performs a similar task as sampling methods. However, we use
column-wise sampling of data, rather than the row-wise sampling used in the
literature. In either case, our algorithm is completely deterministic. Our
algorithm proposes a way of combining 2D convex hulls in order to achieve high
classification accuracy as well as scalability at the same time. First, we
thoroughly describe and prove our O(n) algorithm for finding the convex hull of
a point set in 2D. Then, we show experimentally that our classifier model built
on this idea is very competitive with existing sophisticated
classification algorithms included in commercial statistical applications such
as MATLAB.
| Mohammad Khabbaz | null | 1511.03576 | null | null |
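A minimal sketch of the 2D convex hull building block underlying the classifier in the entry above. For concreteness this uses the standard monotone-chain algorithm (O(n log n)); the paper's own O(n) expected-time construction and the hull-combination classifier are not reproduced here.

```python
# Illustrative 2D convex hull via the monotone-chain algorithm.
def convex_hull(points):
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):                 # z-component of (a - o) x (b - o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # counter-clockwise hull vertices

print(convex_hull([(0, 0), (1, 1), (2, 0), (1, 0.5), (1, -1)]))
```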
The Fourier Transform of Poisson Multinomial Distributions and its
Algorithmic Applications | cs.DS cs.GT cs.LG math.PR math.ST stat.TH | An $(n, k)$-Poisson Multinomial Distribution (PMD) is a random variable of
the form $X = \sum_{i=1}^n X_i$, where the $X_i$'s are independent random
vectors supported on the set of standard basis vectors in $\mathbb{R}^k.$ In
this paper, we obtain a refined structural understanding of PMDs by analyzing
their Fourier transform. As our core structural result, we prove that the
Fourier transform of PMDs is {\em approximately sparse}, i.e., roughly
speaking, its $L_1$-norm is small outside a small set. By building on this
result, we obtain the following applications:
{\bf Learning Theory.} We design the first computationally efficient learning
algorithm for PMDs with respect to the total variation distance. Our algorithm
learns an arbitrary $(n, k)$-PMD within variation distance $\epsilon$ using a
near-optimal sample size of $\widetilde{O}_k(1/\epsilon^2),$ and runs in time
$\widetilde{O}_k(1/\epsilon^2) \cdot \log n.$ Previously, no algorithm with a
$\mathrm{poly}(1/\epsilon)$ runtime was known, even for $k=3.$
{\bf Game Theory.} We give the first efficient polynomial-time approximation
scheme (EPTAS) for computing Nash equilibria in anonymous games. For normalized
anonymous games with $n$ players and $k$ strategies, our algorithm computes a
well-supported $\epsilon$-Nash equilibrium in time $n^{O(k^3)} \cdot
(k/\epsilon)^{O(k^3\log(k/\epsilon)/\log\log(k/\epsilon))^{k-1}}.$ The best
previous algorithm for this problem had running time $n^{(f(k)/\epsilon)^k},$
where $f(k) = \Omega(k^{k^2})$, for any $k>2.$
{\bf Statistics.} We prove a multivariate central limit theorem (CLT) that
relates an arbitrary PMD to a discretized multivariate Gaussian with the same
mean and covariance, in total variation distance. Our new CLT strengthens the
CLT of Valiant and Valiant by completely removing the dependence on $n$ in the
error bound.
| Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart | null | 1511.03592 | null | null |
A Size-Free CLT for Poisson Multinomials and its Applications | cs.DS cs.GT cs.LG math.PR math.ST stat.TH | An $(n,k)$-Poisson Multinomial Distribution (PMD) is the distribution of the
sum of $n$ independent random vectors supported on the set ${\cal
B}_k=\{e_1,\ldots,e_k\}$ of standard basis vectors in $\mathbb{R}^k$. We show
that any $(n,k)$-PMD is ${\rm poly}\left({k\over \sigma}\right)$-close in total
variation distance to the (appropriately discretized) multi-dimensional
Gaussian with the same first two moments, removing the dependence on $n$ from
the Central Limit Theorem of Valiant and Valiant. Interestingly, our CLT is
obtained by bootstrapping the Valiant-Valiant CLT itself through the structural
characterization of PMDs shown in recent work by Daskalakis, Kamath, and
Tzamos. In turn, our stronger CLT can be leveraged to obtain an efficient PTAS
for approximate Nash equilibria in anonymous games, significantly improving the
state of the art, and matching qualitatively the running time dependence on $n$
and $1/\varepsilon$ of the best known algorithm for two-strategy anonymous
games. Our new CLT also enables the construction of covers for the set of
$(n,k)$-PMDs, which are proper and whose size is shown to be essentially
optimal. Our cover construction combines our CLT with the Shapley-Folkman
theorem and recent sparsification results for Laplacian matrices by Batson,
Spielman, and Srivastava. Our cover size lower bound is based on an algebraic
geometric construction. Finally, leveraging the structural properties of the
Fourier spectrum of PMDs we show that these distributions can be learned from
$O_k(1/\varepsilon^2)$ samples in ${\rm poly}_k(1/\varepsilon)$-time, removing
the quasi-polynomial dependence of the running time on $1/\varepsilon$ from the
algorithm of Daskalakis, Kamath, and Tzamos.
| Constantinos Daskalakis, Anindya De, Gautam Kamath, Christos Tzamos | 10.1145/2897518.2897519 | 1511.03641 | null | null |
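As an informal, non-rigorous illustration of the CLT statement above, one can compare PMD samples against a Gaussian with the same first two moments. The moment formulas below follow from independence of the summands; rounding to the nearest integer vector is a crude stand-in for the appropriate discretization.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 3
p = rng.dirichlet(np.ones(k), size=n)

# first two moments of the PMD: mean = sum_i p_i,
# covariance = sum_i (diag(p_i) - p_i p_i^T), by independence of the summands
mean = p.sum(axis=0)
cov = sum(np.diag(pi) - np.outer(pi, pi) for pi in p)

pmd = np.array([sum(rng.multinomial(1, pi) for pi in p) for _ in range(2000)])
gauss = np.rint(rng.multivariate_normal(mean, cov, size=2000))   # crude discretization

# compare marginal means and variances of the two samples
print(pmd.mean(axis=0), gauss.mean(axis=0))
print(pmd.var(axis=0), gauss.var(axis=0))
```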
Unifying distillation and privileged information | stat.ML cs.LG | Distillation (Hinton et al., 2015) and privileged information (Vapnik &
Izmailov, 2015) are two techniques that enable machines to learn from other
machines. This paper unifies these two techniques into generalized
distillation, a framework to learn from multiple machines and data
representations. We provide theoretical and causal insight about the inner
workings of generalized distillation, extend it to unsupervised, semisupervised
and multitask learning scenarios, and illustrate its efficacy on a variety of
numerical simulations on both synthetic and real-world data.
| David Lopez-Paz, L\'eon Bottou, Bernhard Sch\"olkopf, Vladimir Vapnik | null | 1511.03643 | null | null |
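A minimal numpy sketch of the distillation step that generalized distillation builds on: the student is fit against a convex combination of the hard labels and the teacher's temperature-softened outputs. The temperature `T`, the weight `lam`, and the fixed teacher logits are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_targets(hard_onehot, teacher_logits, T=2.0, lam=0.5):
    """Soft targets: lam * softened teacher predictions + (1 - lam) * hard labels."""
    soft = softmax(teacher_logits, T=T)
    return lam * soft + (1.0 - lam) * hard_onehot

def cross_entropy(student_logits, targets):
    logp = np.log(softmax(student_logits) + 1e-12)
    return -(targets * logp).sum(axis=1).mean()

# toy example with 4 samples and 3 classes
hard = np.eye(3)[[0, 1, 2, 1]]
teacher_logits = np.array([[4., 1., 0.], [0., 3., 1.], [0., 1., 5.], [1., 2., 0.]])
student_logits = np.zeros((4, 3))
print(cross_entropy(student_logits, distillation_targets(hard, teacher_logits)))
```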
Learning to Diagnose with LSTM Recurrent Neural Networks | cs.LG | Clinical medical data, especially in the intensive care unit (ICU), consist
of multivariate time series of observations. For each patient visit (or
episode), sensor data and lab test results are recorded in the patient's
Electronic Health Record (EHR). While potentially containing a wealth of
insights, the data is difficult to mine effectively, owing to varying length,
irregular sampling and missing data. Recurrent Neural Networks (RNNs),
particularly those using Long Short-Term Memory (LSTM) hidden units, are
powerful and increasingly popular models for learning from sequence data. They
effectively model varying length sequences and capture long range dependencies.
We present the first study to empirically evaluate the ability of LSTMs to
recognize patterns in multivariate time series of clinical measurements.
Specifically, we consider multilabel classification of diagnoses, training a
model to classify 128 diagnoses given 13 frequently but irregularly sampled
clinical measurements. First, we establish the effectiveness of a simple LSTM
network for modeling clinical data. Then we demonstrate a straightforward and
effective training strategy in which we replicate targets at each sequence
step. Trained only on raw time series, our models outperform several strong
baselines, including a multilayer perceptron trained on hand-engineered
features.
| Zachary C. Lipton, David C. Kale, Charles Elkan, Randall Wetzel | null | 1511.03677 | null | null |
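A small PyTorch sketch of the target replication idea described above: an LSTM emits a multilabel prediction at every time step, the episode-level target is copied to every step during training, and only the final step is used at prediction time. Layer sizes and the loss weighting are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ReplicatedTargetLSTM(nn.Module):
    def __init__(self, n_features=13, hidden=64, n_labels=128):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_labels)

    def forward(self, x):                         # x: (batch, time, features)
        h, _ = self.lstm(x)                       # (batch, time, hidden)
        return self.head(h)                       # per-step logits: (batch, time, labels)

model = ReplicatedTargetLSTM()
loss_fn = nn.BCEWithLogitsLoss()

x = torch.randn(8, 48, 13)                        # 8 episodes, 48 steps, 13 measurements
y = (torch.rand(8, 128) > 0.9).float()            # multilabel diagnosis target per episode

logits = model(x)
y_replicated = y.unsqueeze(1).expand(-1, logits.size(1), -1)  # copy target to every step
loss = loss_fn(logits, y_replicated)              # target replication: loss at every step
loss.backward()

final_prediction = torch.sigmoid(logits[:, -1])   # at test time, use the last step only
```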
Generative Concatenative Nets Jointly Learn to Write and Classify
Reviews | cs.CL cs.LG | A recommender system's basic task is to estimate how users will respond to
unseen items. This is typically modeled in terms of how a user might rate a
product, but here we aim to extend such approaches to model how a user would
write about the product. To do so, we design a character-level Recurrent Neural
Network (RNN) that generates personalized product reviews. The network
convincingly learns styles and opinions of nearly 1000 distinct authors, using
a large corpus of reviews from BeerAdvocate.com. It also tailors reviews to
describe specific items, categories, and star ratings. Using a simple input
replication strategy, the Generative Concatenative Network (GCN) preserves the
signal of static auxiliary inputs across wide sequence intervals. Without any
additional training, the generative model can classify reviews, identifying the
author of the review, the product category, and the sentiment (rating), with
remarkable accuracy. Our evaluation shows that the GCN captures complex
dynamics in text, such as the effect of negation, misspellings, slang, and
large vocabularies, gracefully and without any machinery explicitly dedicated
to the purpose.
| Zachary C. Lipton, Sharad Vikram, Julian McAuley | null | 1511.03683 | null | null |
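The input replication strategy mentioned above can be sketched as follows: a static auxiliary vector (for example an encoding of author, item, and rating) is concatenated to the character embedding at every time step, so the signal never decays across the sequence. All sizes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GenerativeConcatenativeSketch(nn.Module):
    """Character-level language model that concatenates a replicated static
    auxiliary vector to every time step's input (sketch of the idea only)."""
    def __init__(self, vocab=128, aux_dim=16, emb=32, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb + aux_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, chars, aux):                # chars: (B, T) ints, aux: (B, aux_dim)
        e = self.embed(chars)                     # (B, T, emb)
        aux_rep = aux.unsqueeze(1).expand(-1, e.size(1), -1)  # replicate across time
        h, _ = self.lstm(torch.cat([e, aux_rep], dim=-1))
        return self.out(h)                        # next-character logits

model = GenerativeConcatenativeSketch()
logits = model(torch.randint(0, 128, (4, 50)), torch.randn(4, 16))
print(logits.shape)                               # torch.Size([4, 50, 128])
```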
Online Principal Component Analysis in High Dimension: Which Algorithm
to Choose? | stat.ML cs.LG stat.ME | In the current context of data explosion, online techniques that do not
require storing all data in memory are indispensable to routinely perform tasks
like principal component analysis (PCA). Recursive algorithms that update the
PCA with each new observation have been studied in various fields of research
and found wide applications in industrial monitoring, computer vision,
astronomy, and latent semantic indexing, among others. This work provides
guidance for selecting an online PCA algorithm in practice. We present the main
approaches to online PCA, namely, perturbation techniques, incremental methods,
and stochastic optimization, and compare their statistical accuracy,
computation time, and memory requirements using artificial and real data.
Extensions to missing data and to functional data are discussed. All studied
algorithms are available in the R package onlinePCA on CRAN.
| Herv\'e Cardot and David Degras | null | 1511.03688 | null | null |
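The stochastic-optimization family of online PCA methods surveyed above can be illustrated with Oja's rule for the leading principal component. This minimal numpy sketch processes one observation at a time and never stores the data; the learning rate is an illustrative constant.

```python
import numpy as np

def oja_leading_pc(stream, dim, lr=0.01):
    """Estimate the first principal direction from a stream of (centered)
    observations using Oja's stochastic update w <- w + lr * x (x . w)."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=dim)
    w /= np.linalg.norm(w)
    for x in stream:
        w += lr * x * (x @ w)
        w /= np.linalg.norm(w)        # renormalize to keep the iterate on the sphere
    return w

# toy stream: strongly correlated 2D data, true leading direction ~ [1, 1] / sqrt(2)
rng = np.random.default_rng(1)
z = rng.normal(size=5000)
data = np.stack([z + 0.1 * rng.normal(size=5000),
                 z + 0.1 * rng.normal(size=5000)], axis=1)
print(oja_leading_pc(data, dim=2))
```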
Universum Prescription: Regularization using Unlabeled Data | cs.LG | This paper shows that simply prescribing "none of the above" labels to
unlabeled data has a beneficial regularization effect on supervised learning.
We call it universum prescription because the prescribed labels cannot be one
of the supervised labels. In spite of its simplicity, universum
prescription obtained competitive results in training deep convolutional
networks for CIFAR-10, CIFAR-100, STL-10 and ImageNet datasets. A qualitative
justification of these approaches using Rademacher complexity is presented. The
effect of a regularization parameter -- probability of sampling from unlabeled
data -- is also studied empirically.
| Xiang Zhang, Yann LeCun | null | 1511.03719 | null | null |
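A sketch of how universum prescription could be wired into a training loop: with some probability, an example is drawn from the unlabeled pool and assigned an extra "none of the above" class index. The sampling probability and batch construction below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def universum_batch(X_lab, y_lab, X_unlab, n_classes, batch=64, p_unlab=0.2, rng=None):
    """Build one training batch in which, with probability p_unlab, an example is
    drawn from the unlabeled pool and labeled with the extra class `n_classes`."""
    rng = rng or np.random.default_rng()
    xs, ys = [], []
    for _ in range(batch):
        if rng.random() < p_unlab:
            i = rng.integers(len(X_unlab))
            xs.append(X_unlab[i]); ys.append(n_classes)     # "none of the above" label
        else:
            i = rng.integers(len(X_lab))
            xs.append(X_lab[i]); ys.append(int(y_lab[i]))
    return np.stack(xs), np.array(ys)

rng = np.random.default_rng(0)
Xl, yl = rng.normal(size=(100, 8)), rng.integers(0, 10, size=100)
Xu = rng.normal(size=(500, 8))
xb, yb = universum_batch(Xl, yl, Xu, n_classes=10, rng=rng)
print(xb.shape, np.bincount(yb, minlength=11))   # class 10 holds the universum examples
```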
Doubly Robust Off-policy Value Evaluation for Reinforcement Learning | cs.LG cs.AI cs.SY stat.ME stat.ML | We study the problem of off-policy value evaluation in reinforcement learning
(RL), where one aims to estimate the value of a new policy based on data
collected by a different policy. This problem is often a critical step when
applying RL in real-world problems. Despite its importance, existing general
methods either have uncontrolled bias or suffer high variance. In this work, we
extend the doubly robust estimator for bandits to sequential decision-making
problems, which gets the best of both worlds: it is guaranteed to be unbiased
and can have a much lower variance than the popular importance sampling
estimators. We demonstrate the estimator's accuracy in several benchmark
problems, and illustrate its use as a subroutine in safe policy improvement. We
also provide theoretical results on the hardness of the problem, and show that
our estimator can match the lower bound in certain scenarios.
| Nan Jiang and Lihong Li | null | 1511.03722 | null | null |
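The step-wise doubly robust recursion described in the abstract can be written in a few lines. The sketch below assumes a single logged trajectory, approximate value functions `q_hat` and `v_hat`, and known behavior and evaluation action probabilities; it illustrates the estimator's form rather than the paper's full evaluation pipeline.

```python
def doubly_robust_value(trajectory, q_hat, v_hat, pi_e, pi_b, gamma=1.0):
    """Step-wise doubly robust estimate of the evaluation policy's value from one
    trajectory of (state, action, reward) tuples logged under the behavior policy.

    Recursion, applied backwards in time:
        V_DR(t) = v_hat(s_t) + rho_t * (r_t + gamma * V_DR(t+1) - q_hat(s_t, a_t))
    with importance ratio rho_t = pi_e(a_t | s_t) / pi_b(a_t | s_t).
    """
    v_dr = 0.0
    for (s, a, r) in reversed(trajectory):
        rho = pi_e(a, s) / pi_b(a, s)
        v_dr = v_hat(s) + rho * (r + gamma * v_dr - q_hat(s, a))
    return v_dr

# toy example: two actions, a single dummy state, crude value models
q_hat = lambda s, a: 0.5
v_hat = lambda s: 0.5
pi_b = lambda a, s: 0.5                            # behavior policy: uniform
pi_e = lambda a, s: 0.9 if a == 1 else 0.1         # evaluation policy prefers action 1
traj = [(0, 1, 1.0), (0, 0, 0.0), (0, 1, 1.0)]
print(doubly_robust_value(traj, q_hat, v_hat, pi_e, pi_b, gamma=0.9))
```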
Grounding of Textual Phrases in Images by Reconstruction | cs.CV cs.CL cs.LG | Grounding (i.e. localizing) arbitrary, free-form textual phrases in visual
content is a challenging problem with many applications for human-computer
interaction and image-text reference resolution. Few datasets provide the
ground truth spatial localization of phrases, thus it is desirable to learn
from data with no or little grounding supervision. We propose a novel approach
which learns grounding by reconstructing a given phrase using an attention
mechanism, which can be either latent or optimized directly. During training
our approach encodes the phrase using a recurrent network language model and
then learns to attend to the relevant image region in order to reconstruct the
input phrase. At test time, the correct attention, i.e., the grounding, is
evaluated. If grounding supervision is available it can be directly applied via
a loss over the attention mechanism. We demonstrate the effectiveness of our
approach on the Flickr 30k Entities and ReferItGame datasets with different
levels of supervision, ranging from no supervision, through partial
supervision, to full supervision. Our supervised variant improves by a large margin over the
state-of-the-art on both datasets.
| Anna Rohrbach, Marcus Rohrbach, Ronghang Hu, Trevor Darrell, Bernt
Schiele | 10.1007/978-3-319-46448-0_49 | 1511.03745 | null | null |
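The attention step at the core of the approach (scoring candidate image regions against a phrase encoding, then pooling regions by the resulting weights) can be sketched as follows. The bilinear scoring function, dimensions, and random inputs are illustrative choices rather than the paper's exact architecture.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attend_to_regions(phrase_vec, region_feats, W):
    """Score each region against the phrase, normalize with softmax, and return
    both the attention weights (the grounding) and the attended feature that
    would be fed to the phrase reconstruction decoder."""
    scores = region_feats @ W @ phrase_vec        # (n_regions,)
    alpha = softmax(scores)                       # attention over regions
    attended = alpha @ region_feats               # weighted sum of region features
    return alpha, attended

rng = np.random.default_rng(0)
phrase_vec = rng.normal(size=64)                  # phrase encoding from a recurrent LM
region_feats = rng.normal(size=(10, 128))         # 10 candidate region features
W = rng.normal(size=(128, 64)) * 0.01
alpha, attended = attend_to_regions(phrase_vec, region_feats, W)
print(alpha.argmax(), alpha.round(3))             # most attended region = predicted grounding
```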
Random Multi-Constraint Projection: Stochastic Gradient Methods for
Convex Optimization with Many Constraints | stat.ML cs.LG math.OC | Consider convex optimization problems subject to a large number of
constraints. We focus on stochastic problems in which the objective takes the
form of expected values and the feasible set is the intersection of a large
number of convex sets. We propose a class of algorithms that perform both
stochastic gradient descent and random feasibility updates simultaneously. At
every iteration, the algorithms sample a number of projection points onto
randomly selected small subsets of all constraints. Three feasibility update
schemes are considered: averaging over randomly projected points, projecting
onto the most distant sample, and projecting onto a special polyhedral set
constructed from sample points. We prove the almost sure convergence of these
algorithms, and analyze the iterates' feasibility error and optimality error,
respectively. We provide new convergence rate benchmarks for stochastic
first-order optimization with many constraints. The rate analysis and numerical
experiments reveal that the algorithm using the polyhedral-set projection
scheme is the most efficient one within known algorithms.
| Mengdi Wang, Yichen Chen, Jialin Liu, Yuantao Gu | null | 1511.03760 | null | null |
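A minimal sketch of this algorithm family for linear inequality constraints a_i^T x <= b_i: each iteration takes a stochastic gradient step and then projects onto one randomly sampled constraint, i.e. the simplest of the three feasibility-update schemes. The problem data, step sizes, and function names are illustrative.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Euclidean projection of x onto the half-space {z : a^T z <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def sgd_random_projection(grad_sample, A, b, x0, steps=5000, lr=0.01, rng=None):
    rng = rng or np.random.default_rng(0)
    x = x0.astype(float)
    for t in range(1, steps + 1):
        x -= (lr / np.sqrt(t)) * grad_sample(x, rng)        # stochastic gradient step
        i = rng.integers(len(b))                            # sample one constraint
        x = project_halfspace(x, A[i], b[i])                # random feasibility update
    return x

# toy problem: minimize E||x - c + noise||^2 subject to many random half-spaces
rng = np.random.default_rng(1)
d, m = 5, 200
c = np.ones(d)
A = rng.normal(size=(m, d)); b = np.abs(rng.normal(size=m)) + 1.0
grad = lambda x, rng: 2 * (x - c) + rng.normal(scale=0.1, size=d)
x_hat = sgd_random_projection(grad, A, b, np.zeros(d))
print(x_hat, (A @ x_hat <= b + 1e-6).mean())   # estimate and fraction of constraints satisfied
```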
Sparse Learning for Large-scale and High-dimensional Data: A Randomized
Convex-concave Optimization Approach | cs.LG | In this paper, we develop a randomized algorithm and theory for learning a
sparse model from large-scale and high-dimensional data, which is usually
formulated as an empirical risk minimization problem with a sparsity-inducing
regularizer. Under the assumption that there exists an (approximately) sparse
solution with high classification accuracy, we argue that the dual solution is
also sparse or approximately sparse. The fact that both primal and dual
solutions are sparse motivates us to develop a randomized approach for a
general convex-concave optimization problem. Specifically, the proposed
approach combines the strength of random projection with that of sparse
learning: it utilizes random projection to reduce the dimensionality, and
introduces $\ell_1$-norm regularization to alleviate the approximation error
caused by random projection. Theoretical analysis shows that under favored
conditions, the randomized algorithm can accurately recover the optimal
solutions to the convex-concave optimization problem (i.e., recover both the
primal and dual solutions).
| Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou | null | 1511.03766 | null | null |
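A simplified sketch of the primal side of the idea: project the high-dimensional features with a random Gaussian matrix, then fit an l1-regularized model in the reduced space. It uses scikit-learn's Lasso for the sparse fit and, as an assumption of this sketch, glosses over the dual-recovery analysis in the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d, m = 500, 5000, 200                     # samples, original dimension, projected dimension

# synthetic data with a sparse ground-truth linear model
w_true = np.zeros(d); w_true[:10] = 1.0
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

R = rng.normal(size=(d, m)) / np.sqrt(m)     # random Gaussian projection
X_proj = X @ R                               # reduce dimensionality

model = Lasso(alpha=0.05)                    # l1 regularization in the projected space
model.fit(X_proj, y)
print("projected-space nonzeros:", np.count_nonzero(model.coef_))
```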
Improving performance of recurrent neural network with relu nonlinearity | cs.NE cs.LG | In recent years significant progress has been made in successfully training
recurrent neural networks (RNNs) on sequence learning problems involving long
range temporal dependencies. The progress has been made on three fronts: (a)
Algorithmic improvements involving sophisticated optimization techniques, (b)
network design involving complex hidden layer nodes and specialized recurrent
layer connections and (c) weight initialization methods. In this paper, we
focus on recently proposed weight initialization with identity matrix for the
recurrent weights in a RNN. This initialization is specifically proposed for
hidden nodes with Rectified Linear Unit (ReLU) nonlinearity. We offer a simple
dynamical systems perspective on the weight initialization process, which
allows us to propose a modified weight initialization strategy. We show that
this initialization technique leads to successfully training RNNs composed of
ReLUs. We demonstrate that our proposal produces comparable or better solutions for
three toy problems involving long range temporal structure: the addition
problem, the multiplication problem and the MNIST classification problem using
sequence of pixels. In addition, we present results for a benchmark action
recognition problem.
| Sachin S. Talathi and Aniket Vartak | null | 1511.03771 | null | null |
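The initialization idea discussed above, identity (or scaled-identity) recurrent weights for a ReLU RNN, takes one line to apply. The sketch below uses a plain numpy RNN cell; the scaling factor below 1 is one possible modified strategy shown purely for illustration.

```python
import numpy as np

def init_relu_rnn(n_in, n_hidden, recurrent_scale=1.0, rng=None):
    """Initialize a ReLU RNN cell with (scaled) identity recurrent weights.
    recurrent_scale = 1.0 gives the plain identity initialization; a value
    slightly below 1 is one possible modification (illustrative)."""
    rng = rng or np.random.default_rng(0)
    W_xh = rng.normal(scale=0.01, size=(n_hidden, n_in))
    W_hh = recurrent_scale * np.eye(n_hidden)        # identity-based recurrent init
    b = np.zeros(n_hidden)
    return W_xh, W_hh, b

def rnn_forward(xs, W_xh, W_hh, b):
    h = np.zeros(W_hh.shape[0])
    for x in xs:                                      # xs: sequence of input vectors
        h = np.maximum(0.0, W_xh @ x + W_hh @ h + b)  # ReLU recurrence
    return h

W_xh, W_hh, b = init_relu_rnn(n_in=2, n_hidden=8, recurrent_scale=0.95)
seq = [np.array([1.0, 0.0])] * 200                    # long sequence: hidden state stays bounded
print(rnn_forward(seq, W_xh, W_hh, b)[:4])
```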