categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---|
cs.CL cs.LG | null | 1308.0658 | null | null | http://arxiv.org/pdf/1308.0658v1 | 2013-08-03T04:20:21Z | 2013-08-03T04:20:21Z | Exploring The Contribution of Unlabeled Data in Financial Sentiment
Analysis | With the proliferation of its applications in various industries, sentiment
analysis using publicly available web data has become an active research area
in text classification in recent years. Researchers argue that semi-supervised
learning is an effective approach to this problem since it can mitigate the
manual labeling effort, which is usually expensive and time-consuming. However,
there has been a long-term debate on the effectiveness of unlabeled data in
text classification, partially caused by the fact that many assumptions in
theoretical analysis often do not hold in practice. We
argue that this problem may be further understood by adding an additional
dimension in the experiment. This allows us to address this problem in the
perspective of bias and variance in a broader view. We show that the well-known
performance degradation issue caused by unlabeled data can be reproduced as a
subset of the whole scenario. We argue that if the bias-variance trade-off is
to be better balanced by a more effective feature selection method unlabeled
data is very likely to boost the classification performance. We then propose a
feature selection framework in which labeled and unlabeled training samples are
both considered. We discuss its potential in achieving such a balance. The
application to financial sentiment analysis is chosen because it not only
exemplifies an important application but its data also possesses better
illustrative power. The implications of this study in text classification and
financial sentiment analysis are both discussed.
| [
"Jimmy SJ. Ren, Wei Wang, Jiawei Wang, Stephen Shaoyi Liao",
"['Jimmy SJ. Ren' 'Wei Wang' 'Jiawei Wang' 'Stephen Shaoyi Liao']"
] |
cs.NI cs.LG | null | 1308.0768 | null | null | http://arxiv.org/pdf/1308.0768v1 | 2013-08-04T02:07:54Z | 2013-08-04T02:07:54Z | MonoStream: A Minimal-Hardware High Accuracy Device-free WLAN
Localization System | Device-free (DF) localization is an emerging technology that allows the
detection and tracking of entities that do not carry any devices nor
participate actively in the localization process. Typically, DF systems require
a large number of transmitters and receivers to achieve acceptable accuracy,
which is not available in many scenarios such as homes and small businesses. In
this paper, we introduce MonoStream as an accurate single-stream DF
localization system that leverages the rich Channel State Information (CSI) as
well as MIMO information from the physical layer to provide accurate DF
localization with only one stream. To boost its accuracy and attain low
computational requirements, MonoStream models the DF localization problem as an
object recognition problem and uses a novel set of CSI-context features and
techniques with proven accuracy and efficiency. Experimental evaluation in two
typical testbeds, with a side-by-side comparison with the state-of-the-art,
shows that MonoStream can achieve an accuracy of 0.95m with at least 26%
enhancement in median distance error using a single stream only. This
enhancement in accuracy comes with an efficient execution of less than 23ms per
location update on a typical laptop. This highlights the potential of
MonoStream usage for real-time DF tracking applications.
| [
"['Ibrahim Sabek' 'Moustafa Youssef']",
"Ibrahim Sabek and Moustafa Youssef"
] |
stat.ML cs.LG | null | 1308.0900 | null | null | http://arxiv.org/pdf/1308.0900v2 | 2013-10-30T02:13:47Z | 2013-08-05T08:16:30Z | Trading USDCHF filtered by Gold dynamics via HMM coupling | We devise a USDCHF trading strategy using the dynamics of gold as a filter.
Our strategy involves modelling both USDCHF and gold using a coupled hidden
Markov model (CHMM). The observations will be indicators, RSI and CCI, which
will be used as triggers for our trading signals. Upon decoding the model in
each iteration, we can get the next most probable state and the next most
probable observation. We hope that, by taking advantage of intermarket
analysis and the Markov property implicit in the model, trading with these
most probable values will produce profitable results.
| [
"Donny Lee",
"['Donny Lee']"
] |
cs.DS cs.DM cs.LG | null | 1308.1006 | null | null | http://arxiv.org/pdf/1308.1006v1 | 2013-08-05T15:19:48Z | 2013-08-05T15:19:48Z | Fast Semidifferential-based Submodular Function Optimization | We present a practical and powerful new framework for both unconstrained and
constrained submodular function optimization based on discrete
semidifferentials (sub- and super-differentials). The resulting algorithms,
which repeatedly compute and then efficiently optimize submodular
semigradients, offer new methods and generalize many old ones for submodular
optimization. Our approach, moreover, takes steps towards providing a unifying
paradigm applicable to both submodular minimization and maximization,
problems that historically have been treated quite distinctly. The practicality
of our algorithms is important since interest in submodularity, owing to its
natural and wide applicability, has recently been in ascendance within machine
learning. We analyze theoretical properties of our algorithms for minimization
and maximization, and show that many state-of-the-art maximization algorithms
are special cases. Lastly, we complement our theoretical analyses with
supporting empirical experiments.
| [
"Rishabh Iyer, Stefanie Jegelka and Jeff Bilmes",
"['Rishabh Iyer' 'Stefanie Jegelka' 'Jeff Bilmes']"
] |
cs.LG cs.DS cs.IR | null | 1308.1009 | null | null | http://arxiv.org/pdf/1308.1009v1 | 2013-08-05T15:25:51Z | 2013-08-05T15:25:51Z | Sign Stable Projections, Sign Cauchy Projections and Chi-Square Kernels | The method of stable random projections is popular for efficiently computing
the Lp distances in high dimension (where 0<p<=2), using small space. Because
it adopts nonadaptive linear projections, this method is naturally suitable
when the data are collected in a dynamic streaming fashion (i.e., turnstile
data streams). In this paper, we propose to use only the signs of the projected
data and analyze the probability of collision (i.e., when the two signs
differ). We derive a bound of the collision probability which is exact when p=2
and becomes less sharp when p moves away from 2. Interestingly, when p=1 (i.e.,
Cauchy random projections), we show that the probability of collision can be
accurately approximated as functions of the chi-square similarity. For example,
when the (un-normalized) data are binary, the maximum approximation error of
the collision probability is smaller than 0.0192. In text and vision
applications, the chi-square similarity is a popular measure for nonnegative
data when the features are generated from histograms. Our experiments confirm
that the proposed method is promising for large-scale learning applications.
| [
"Ping Li, Gennady Samorodnitsky, John Hopcroft",
"['Ping Li' 'Gennady Samorodnitsky' 'John Hopcroft']"
] |
cs.MA cs.LG nlin.AO | 10.1103/PhysRevE.88.012815 | 1308.1049 | null | null | http://arxiv.org/abs/1308.1049v1 | 2013-08-05T17:43:58Z | 2013-08-05T17:43:58Z | Coevolutionary networks of reinforcement-learning agents | This paper presents a model of network formation in repeated games where the
players adapt their strategies and network ties simultaneously using a simple
reinforcement-learning scheme. It is demonstrated that the coevolutionary
dynamics of such systems can be described via coupled replicator equations. We
provide a comprehensive analysis for three-player two-action games, which is
the minimum system size with nontrivial structural dynamics. In particular, we
characterize the Nash equilibria (NE) in such games and examine the local
stability of the rest points corresponding to those equilibria. We also study
general n-player networks via both simulations and analytical methods and find
that in the absence of exploration, the stable equilibria consist of star
motifs as the main building blocks of the network. Furthermore, in all stable
equilibria the agents play pure strategies, even when the game allows mixed NE.
Finally, we study the impact of exploration on learning outcomes, and observe
that there is a critical exploration rate above which the symmetric and
uniformly connected network topology becomes stable.
| [
"['Ardeshir Kianercy' 'Aram Galstyan']",
"Ardeshir Kianercy and Aram Galstyan"
] |
stat.AP cs.LG | null | 1308.1066 | null | null | http://arxiv.org/pdf/1308.1066v1 | 2013-08-05T18:44:17Z | 2013-08-05T18:44:17Z | Theoretical Issues for Global Cumulative Treatment Analysis (GCTA) | Adaptive trials are now mainstream science. Recently, researchers have taken
the adaptive trial concept to its natural conclusion, proposing what we call
"Global Cumulative Treatment Analysis" (GCTA). Similar to the adaptive trial,
decision making and data collection and analysis in the GCTA are continuous and
integrated, and treatments are ranked in accord with the statistics of this
information, combined with what offers the most information gain. Where GCTA
differs from an adaptive trial, or, for that matter, from any trial design, is
that all patients are implicitly participants in the GCTA process, regardless
of whether they are formally enrolled in a trial. This paper discusses some of
the theoretical and practical issues that arise in the design of a GCTA, along
with some preliminary thoughts on how they might be approached.
| [
"Jeff Shrager",
"['Jeff Shrager']"
] |
math.ST cs.LG stat.TH | 10.3150/14-BEJ679 | 1308.1147 | null | null | http://arxiv.org/abs/1308.1147v3 | 2017-07-03T13:29:39Z | 2013-08-06T01:05:52Z | Empirical entropy, minimax regret and minimax risk | We consider the random design regression model with square loss. We propose a
method that aggregates empirical risk minimizers (ERM) over appropriately chosen
random subsets and reduces to ERM in the extreme case, and we establish sharp
oracle inequalities for its risk. We show that, under the $\varepsilon^{-p}$
growth of the empirical $\varepsilon$-entropy, the excess risk of the proposed
method attains the rate $n^{-2/(2+p)}$ for $p\in(0,2)$ and $n^{-1/p}$ for $p>2$
where $n$ is the sample size. Furthermore, for $p\in(0,2)$, the excess risk
rate matches the behavior of the minimax risk of function estimation in
regression problems under the well-specified model. This yields a conclusion
that the rates of statistical estimation in well-specified models (minimax
risk) and in misspecified models (minimax regret) are equivalent in the regime
$p\in(0,2)$. In other words, for $p\in(0,2)$ the problem of statistical
learning enjoys the same minimax rate as the problem of statistical estimation.
On the contrary, for $p>2$ we show that the rates of the minimax regret are, in
general, slower than for the minimax risk. Our oracle inequalities also imply
the $v\log(n/v)/n$ rates for Vapnik-Chervonenkis type classes of dimension $v$
without the usual convexity assumption on the class; we show that these rates
are optimal. Finally, for a slightly modified method, we derive a bound on the
excess risk of $s$-sparse convex aggregation improving that of Lounici [Math.
Methods Statist. 16 (2007) 246-259] and providing the optimal rate.
| [
"Alexander Rakhlin, Karthik Sridharan, Alexandre B. Tsybakov",
"['Alexander Rakhlin' 'Karthik Sridharan' 'Alexandre B. Tsybakov']"
] |
cs.CV cs.LG | null | 1308.1187 | null | null | http://arxiv.org/pdf/1308.1187v1 | 2013-08-06T05:57:08Z | 2013-08-06T05:57:08Z | Spatial-Aware Dictionary Learning for Hyperspectral Image Classification | This paper presents a structured dictionary-based model for hyperspectral
data that incorporates both spectral and contextual characteristics of a
spectral sample, with the goal of hyperspectral image classification. The idea
is to partition the pixels of a hyperspectral image into a number of spatial
neighborhoods called contextual groups and to model each pixel with a linear
combination of a few dictionary elements learned from the data. Since pixels
inside a contextual group are often made up of the same materials, their linear
combinations are constrained to use common elements from the dictionary. To
this end, dictionary learning is carried out with a joint sparse regularizer to
induce a common sparsity pattern in the sparse coefficients of each contextual
group. The sparse coefficients are then used for classification using a linear
SVM. Experimental results on a number of real hyperspectral images confirm the
effectiveness of the proposed representation for hyperspectral image
classification. Moreover, experiments with simulated multispectral data show
that the proposed model is capable of finding representations that may
effectively be used for classification of multispectral-resolution samples.
| [
"Ali Soltani-Farani, Hamid R. Rabiee, Seyyed Abbas Hosseini",
"['Ali Soltani-Farani' 'Hamid R. Rabiee' 'Seyyed Abbas Hosseini']"
] |
cs.LG cs.IR | null | 1308.1792 | null | null | http://arxiv.org/pdf/1308.1792v1 | 2013-08-08T09:24:24Z | 2013-08-08T09:24:24Z | OFF-Set: One-pass Factorization of Feature Sets for Online
Recommendation in Persistent Cold Start Settings | One of the most challenging recommendation tasks is recommending to a new,
previously unseen user. This is known as the 'user cold start' problem.
Assuming certain features or attributes of users are known, one approach for
handling new users is to initially model them based on their features.
Motivated by an ad targeting application, this paper describes an extreme
online recommendation setting where the cold start problem is perpetual. Every
user is encountered by the system just once, receives a recommendation, and
either consumes or ignores it, registering a binary reward.
We introduce One-pass Factorization of Feature Sets, OFF-Set, a novel
recommendation algorithm based on Latent Factor analysis, which models users by
mapping their features to a latent space. Furthermore, OFF-Set is able to model
non-linear interactions between pairs of features. OFF-Set is designed for
purely online recommendation, performing lightweight updates of its model per
each recommendation-reward observation. We evaluate OFF-Set against several
state of the art baselines, and demonstrate its superiority on real
ad-targeting data.
| [
"['Michal Aharon' 'Natalie Aizenberg' 'Edward Bortnikov' 'Ronny Lempel'\n 'Roi Adadi' 'Tomer Benyamini' 'Liron Levin' 'Ran Roth' 'Ohad Serfaty']",
"Michal Aharon, Natalie Aizenberg, Edward Bortnikov, Ronny Lempel, Roi\n Adadi, Tomer Benyamini, Liron Levin, Ran Roth, Ohad Serfaty"
] |
q-bio.QM cs.CE cs.LG math.OC q-bio.BM stat.ML | 10.1093/bioinformatics/btt211 | 1308.1975 | null | null | http://arxiv.org/abs/1308.1975v2 | 2013-08-19T16:24:06Z | 2013-08-08T20:44:01Z | Predicting protein contact map using evolutionary and physical
constraints by integer programming (extended version) | Motivation. A protein contact map describes the pairwise spatial and functional
relationship of residues in a protein and contains key information for protein
3D structure prediction. Although studied extensively, it remains very
challenging to predict the contact map using only sequence information. Most
existing methods predict the contact map matrix element-by-element, ignoring
correlation among contacts and physical feasibility of the whole contact map. A
couple of recent methods predict contact map based upon residue co-evolution,
taking into consideration contact correlation and enforcing a sparsity
restraint, but these methods require a very large number of sequence homologs
for the protein under consideration, and the resultant contact map may still be
physically unfavorable.
Results. This paper presents a novel method PhyCMAP for contact map
prediction, integrating both evolutionary and physical restraints by machine
learning and integer linear programming (ILP). The evolutionary restraints
include sequence profile, residue co-evolution and context-specific statistical
potential. The physical restraints specify more concrete relationship among
contacts than the sparsity restraint. As such, our method greatly reduces the
solution space of the contact map matrix and thus, significantly improves
prediction accuracy. Experimental results confirm that PhyCMAP outperforms
currently popular methods no matter how many sequence homologs are available
for the protein under consideration. PhyCMAP can predict contacts within
minutes after PSIBLAST search for sequence homologs is done, much faster than
the two recent methods PSICOV and EvFold.
See http://raptorx.uchicago.edu for the web server.
| [
"Zhiyong Wang and Jinbo Xu",
"['Zhiyong Wang' 'Jinbo Xu']"
] |
cs.LG cs.DS cs.IT math.IT stat.CO | null | 1308.2218 | null | null | http://arxiv.org/pdf/1308.2218v1 | 2013-08-09T19:50:24Z | 2013-08-09T19:50:24Z | Coding for Random Projections | The method of random projections has become very popular for large-scale
applications in statistical learning, information retrieval, bio-informatics
and other areas. Using a well-designed coding scheme for the projected
data, which determines the number of bits needed for each projected value and
how to allocate these bits, can significantly improve the effectiveness of the
algorithm, in storage cost as well as computational speed. In this paper, we
study a number of simple coding schemes, focusing on the task of similarity
estimation and on an application to training linear classifiers. We demonstrate
that uniform quantization outperforms the standard existing influential method
(Datar et al., 2004). Indeed, we argue that in many cases coding with just a
small number of bits suffices. Furthermore, we also develop a non-uniform 2-bit
coding scheme that generally performs well in practice, as confirmed by our
experiments on training linear support vector machines (SVM).
| [
"Ping Li, Michael Mitzenmacher, Anshumali Shrivastava",
"['Ping Li' 'Michael Mitzenmacher' 'Anshumali Shrivastava']"
] |
cs.LG stat.ML | 10.1007/s11222-014-9461-5 | 1308.2302 | null | null | http://arxiv.org/abs/1308.2302v3 | 2013-12-20T15:15:53Z | 2013-08-10T10:47:25Z | High-Dimensional Regression with Gaussian Mixtures and Partially-Latent
Response Variables | In this work we address the problem of approximating high-dimensional data
with a low-dimensional representation. We make the following contributions. We
propose an inverse regression method which exchanges the roles of input and
response, such that the low-dimensional variable becomes the regressor, and
which is tractable. We introduce a mixture of locally-linear probabilistic
mapping model that starts with estimating the parameters of inverse regression,
and follows with inferring closed-form solutions for the forward parameters of
the high-dimensional regression problem of interest. Moreover, we introduce a
partially-latent paradigm, such that the vector-valued response variable is
composed of both observed and latent entries, thus being able to deal with data
contaminated by experimental artifacts that cannot be explained with noise
models. The proposed probabilistic formulation could be viewed as a
latent-variable augmentation of regression. We devise expectation-maximization
(EM) procedures based on a data augmentation strategy which facilitates the
maximum-likelihood search over the model parameters. We propose two
augmentation schemes and we describe in detail the associated EM inference
procedures that may well be viewed as generalizations of a number of EM
regression, dimension reduction, and factor analysis algorithms. The proposed
framework is validated with both synthetic and real data. We provide
experimental evidence that our method outperforms several existing regression
techniques.
| [
"['Antoine Deleforge' 'Florence Forbes' 'Radu Horaud']",
"Antoine Deleforge and Florence Forbes and Radu Horaud"
] |
cs.NE cs.AI cs.CV cs.LG q-bio.NC | null | 1308.2350 | null | null | http://arxiv.org/pdf/1308.2350v1 | 2013-08-10T22:56:26Z | 2013-08-10T22:56:26Z | Learning Features and their Transformations by Spatial and Temporal
Spherical Clustering | Learning features invariant to arbitrary transformations in the data is a
requirement for any recognition system, biological or artificial. It is now
widely accepted that simple cells in the primary visual cortex respond to
features while the complex cells respond to features invariant to different
transformations. We present a novel two-layered feedforward neural model that
learns features in the first layer by spatial spherical clustering and
invariance to transformations in the second layer by temporal spherical
clustering. Learning occurs in an online and unsupervised manner following the
Hebbian rule. When exposed to natural videos acquired by a camera mounted on a
cat's head, the first and second layer neurons in our model develop simple and
complex cell-like receptive field properties. The model can predict by learning
lateral connections among the first layer neurons. A topographic map of their
spatial features emerges by exponentially decaying the flow of activation with
distance from one neuron to another in the first layer that fire in close
temporal proximity, thereby minimizing the pooling length in an online manner
simultaneously with feature learning.
| [
"['Jayanta K. Dutta' 'Bonny Banerjee']",
"Jayanta K. Dutta, Bonny Banerjee"
] |
cs.LG cs.AI stat.ML | null | 1308.2655 | null | null | http://arxiv.org/pdf/1308.2655v2 | 2013-08-18T19:30:19Z | 2013-08-12T19:31:59Z | KL-based Control of the Learning Schedule for Surrogate Black-Box
Optimization | This paper investigates the control of an ML component within the Covariance
Matrix Adaptation Evolution Strategy (CMA-ES) devoted to black-box
optimization. The known CMA-ES weakness is its sample complexity, the number of
evaluations of the objective function needed to approximate the global optimum.
This weakness is commonly addressed through surrogate optimization, learning an
estimate of the objective function a.k.a. surrogate model, and replacing most
evaluations of the true objective function with the (inexpensive) evaluation of
the surrogate model. This paper presents a principled control of the learning
schedule (when to relearn the surrogate model), based on the Kullback-Leibler
divergence between the current search distribution and the training distribution of
the former surrogate model. The experimental validation of the proposed
approach shows significant performance gains on a comprehensive set of
ill-conditioned benchmark problems, compared to the best state of the art
including the quasi-Newton high-precision BFGS method.
| [
"Ilya Loshchilov (LIS), Marc Schoenauer (INRIA Saclay - Ile de France,\n LRI), Mich\\`ele Sebag (LRI)",
"['Ilya Loshchilov' 'Marc Schoenauer' 'Michèle Sebag']"
] |
cs.LG cs.IR math.NA math.ST stat.ML stat.TH | null | 1308.2853 | null | null | http://arxiv.org/pdf/1308.2853v1 | 2013-08-13T13:16:10Z | 2013-08-13T13:16:10Z | When are Overcomplete Topic Models Identifiable? Uniqueness of Tensor
Tucker Decompositions with Structured Sparsity | Overcomplete latent representations have been very popular for unsupervised
feature learning in recent years. In this paper, we specify which overcomplete
models can be identified given observable moments of a certain order. We
consider probabilistic admixture or topic models in the overcomplete regime,
where the number of latent topics can greatly exceed the size of the observed
word vocabulary. While general overcomplete topic models are not identifiable,
we establish generic identifiability under a constraint, referred to as topic
persistence. Our sufficient conditions for identifiability involve a novel set
of "higher order" expansion conditions on the topic-word matrix or the
population structure of the model. This set of higher-order expansion
conditions allows for overcomplete models, and requires the existence of a
perfect matching from latent topics to higher order observed words. We
establish that random structured topic models are identifiable w.h.p. in the
overcomplete regime. Our identifiability results allow for general
(non-degenerate) distributions for modeling the topic proportions, and thus, we
can handle arbitrarily correlated topics in our framework. Our identifiability
results imply uniqueness of a class of tensor decompositions with structured
sparsity which is contained in the class of Tucker decompositions, but is more
general than the Candecomp/Parafac (CP) decomposition.
| [
"['Animashree Anandkumar' 'Daniel Hsu' 'Majid Janzamin' 'Sham Kakade']",
"Animashree Anandkumar, Daniel Hsu, Majid Janzamin, Sham Kakade"
] |
stat.ML cs.LG math.OC | null | 1308.2867 | null | null | http://arxiv.org/pdf/1308.2867v2 | 2014-04-14T15:20:52Z | 2013-08-13T13:55:12Z | Composite Self-Concordant Minimization | We propose a variable metric framework for minimizing the sum of a
self-concordant function and a possibly non-smooth convex function, endowed
with an easily computable proximal operator. We theoretically establish the
convergence of our framework without relying on the usual Lipschitz gradient
assumption on the smooth part. An important highlight of our work is a new set
of analytic step-size selection and correction procedures based on the
structure of the problem. We describe concrete algorithmic instances of our
framework for several interesting applications and demonstrate them numerically
on both synthetic and real data.
| [
"Quoc Tran-Dinh, Anastasios Kyrillidis and Volkan Cevher",
"['Quoc Tran-Dinh' 'Anastasios Kyrillidis' 'Volkan Cevher']"
] |
cs.LG | null | 1308.2893 | null | null | http://arxiv.org/pdf/1308.2893v2 | 2014-11-24T09:34:18Z | 2013-08-13T15:15:37Z | Multiclass learnability and the ERM principle | We study the sample complexity of multiclass prediction in several learning
settings. For the PAC setting our analysis reveals a surprising phenomenon: In
sharp contrast to binary classification, we show that there exist multiclass
hypothesis classes for which some Empirical Risk Minimizers (ERM learners) have
lower sample complexity than others. Furthermore, there are classes that are
learnable by some ERM learners, while other ERM learners will fail to learn
them. We propose a principle for designing good ERM learners, and use this
principle to prove tight bounds on the sample complexity of learning {\em
symmetric} multiclass hypothesis classes---classes that are invariant under
permutations of label names. We further provide a characterization of mistake
and regret bounds for multiclass learning in the online setting and the bandit
setting, using new generalizations of Littlestone's dimension.
| [
"Amit Daniely and Sivan Sabato and Shai Ben-David and Shai\n Shalev-Shwartz",
"['Amit Daniely' 'Sivan Sabato' 'Shai Ben-David' 'Shai Shalev-Shwartz']"
] |
cs.CV cs.LG stat.ML | null | 1308.3101 | null | null | http://arxiv.org/pdf/1308.3101v2 | 2017-04-11T17:51:30Z | 2013-08-14T12:27:24Z | Compact Relaxations for MAP Inference in Pairwise MRFs with Piecewise
Linear Priors | Label assignment problems with large state spaces are important tasks
especially in computer vision. Often the pairwise interaction (or smoothness
prior) between labels assigned at adjacent nodes (or pixels) can be described
as a function of the label difference. Exact inference in such labeling tasks
is still difficult, and therefore approximate inference methods based on a
linear programming (LP) relaxation are commonly used in practice. In this work
we study how compact linear programs can be constructed for general piecewise
linear smoothness priors. The number of unknowns is $O(LK)$ per pairwise clique
in terms of the state space size $L$ and the number of linear segments $K$. This
compares to an $O(L^2)$ size complexity of the standard LP relaxation if the
piecewise linear structure is ignored. Our compact construction and the
standard LP relaxation are equivalent and lead to the same (approximate) label
assignment.
| [
"['Christopher Zach' 'Christian Häne']",
"Christopher Zach and Christian H\\\"ane"
] |
cs.IR cs.LG | null | 1308.3177 | null | null | http://arxiv.org/pdf/1308.3177v1 | 2013-08-14T17:04:15Z | 2013-08-14T17:04:15Z | Normalized Google Distance of Multisets with Applications | Normalized Google distance (NGD) is a relative semantic distance based on the
World Wide Web (or any other large electronic database, for instance Wikipedia)
and a search engine that returns aggregate page counts. The earlier NGD between
pairs of search terms (including phrases) is not sufficient for all
applications. We propose an NGD of finite multisets of search terms that is
better for many applications. This gives a relative semantics shared by a
multiset of search terms. We give applications and compare the results with
those obtained using the pairwise NGD. The derivation of the NGD method is based on
Kolmogorov complexity.
| [
"Andrew R. Cohen (Dept Electrical and Comput. Engin., Drexel Univ.),\n P.M.B. Vitanyi (CWI and Comput. Sci., Univ. Amsterdam)",
"['Andrew R. Cohen' 'P. M. B. Vitanyi']"
] |
stat.ML cs.LG | null | 1308.3314 | null | null | http://arxiv.org/pdf/1308.3314v1 | 2013-08-15T06:15:21Z | 2013-08-15T06:15:21Z | The algorithm of noisy k-means | In this note, we introduce a new algorithm to deal with finite dimensional
clustering with errors in variables. The design of this algorithm is based on
recent theoretical advances (see Loustau (2013a,b)) in statistical learning
with errors in variables. As in the previously mentioned papers, the algorithm mixes
different tools from the inverse problem literature and the machine learning
community. Coarsely, it is based on a two-step procedure: (1) a deconvolution
step to deal with noisy inputs and (2) Newton's iterations, as in the popular
k-means algorithm.
| [
"Camille Brunet (LAREMA), S\\'ebastien Loustau (LAREMA)",
"['Camille Brunet' 'Sébastien Loustau']"
] |
stat.ML cs.LG | null | 1308.3381 | null | null | http://arxiv.org/pdf/1308.3381v3 | 2013-10-05T13:18:05Z | 2013-08-15T13:17:47Z | High dimensional Sparse Gaussian Graphical Mixture Model | This paper considers the problem of network reconstruction from
heterogeneous data using a Gaussian Graphical Mixture Model (GGMM). It is well
known that parameter estimation in this context is challenging due to large
numbers of variables coupled with the degeneracy of the likelihood. We propose
as a solution a penalized maximum likelihood technique by imposing an $l_{1}$
penalty on the precision matrix. Our approach shrinks the parameters thereby
resulting in better identifiability and variable selection. We use the
Expectation Maximization (EM) algorithm which involves the graphical LASSO to
estimate the mixing coefficients and the precision matrices. We show that under
certain regularity conditions the Penalized Maximum Likelihood (PML) estimates
are consistent. We demonstrate the performance of the PML estimator through
simulations and we show the utility of our method for high dimensional data
analysis in a genomic application.
| [
"['Anani Lotsi' 'Ernst Wit']",
"Anani Lotsi and Ernst Wit"
] |
cs.CV cs.LG stat.ML | null | 1308.3383 | null | null | http://arxiv.org/pdf/1308.3383v2 | 2014-07-22T16:22:29Z | 2013-08-15T13:22:24Z | Axioms for graph clustering quality functions | We investigate properties that intuitively ought to be satisfied by graph
clustering quality functions, that is, functions that assign a score to a
clustering of a graph. Graph clustering, also known as network community
detection, is often performed by optimizing such a function. Two axioms
tailored for graph clustering quality functions are introduced, and the four
axioms introduced in previous work on distance based clustering are
reformulated and generalized for the graph setting. We show that modularity, a
standard quality function for graph clustering, does not satisfy all of these
six properties. This motivates the derivation of a new family of quality
functions, adaptive scale modularity, which does satisfy the proposed axioms.
Adaptive scale modularity has two parameters, which give greater flexibility in
the kinds of clusterings that can be found. Standard graph clustering quality
functions, such as normalized cut and unnormalized cut, are obtained as special
cases of adaptive scale modularity.
In general, the results of our investigation indicate that the considered
axiomatic framework covers existing `good' quality functions for graph
clustering, and can be used to derive an interesting new family of quality
functions.
| [
"Twan van Laarhoven, Elena Marchiori",
"['Twan van Laarhoven' 'Elena Marchiori']"
] |
cs.LG | null | 1308.3432 | null | null | http://arxiv.org/pdf/1308.3432v1 | 2013-08-15T15:19:34Z | 2013-08-15T15:19:34Z | Estimating or Propagating Gradients Through Stochastic Neurons for
Conditional Computation | Stochastic neurons and hard non-linearities can be useful for a number of
reasons in deep learning models, but in many cases they pose a challenging
problem: how to estimate the gradient of a loss function with respect to the
input of such stochastic or non-smooth neurons? That is, can we "back-propagate"
through these stochastic neurons? We examine this question, existing
approaches, and compare four families of solutions, applicable in different
settings. One of them is the minimum variance unbiased gradient estimator for
stochastic binary neurons (a special case of the REINFORCE algorithm). A second
approach, introduced here, decomposes the operation of a binary stochastic
neuron into a stochastic binary part and a smooth differentiable part, which
approximates the expected effect of the pure stochastic binary neuron to first
order. A third approach involves the injection of additive or multiplicative
noise in a computational graph that is otherwise differentiable. A fourth
approach heuristically copies the gradient with respect to the stochastic
output directly as an estimator of the gradient with respect to the sigmoid
argument (we call this the straight-through estimator). To explore a context
where these estimators are useful, we consider a small-scale version of {\em
conditional computation}, where sparse stochastic units form a distributed
representation of gaters that can turn off in combinatorially many ways large
chunks of the computation performed in the rest of the neural network. In this
case, it is important that the gating units produce an actual 0 most of the
time. The resulting sparsity can potentially be exploited to greatly reduce
the computational cost of large deep networks for which conditional computation
would be useful.
| [
"['Yoshua Bengio' 'Nicholas Léonard' 'Aaron Courville']",
"Yoshua Bengio, Nicholas L\\'eonard and Aaron Courville"
] |
cs.GT cs.LG stat.ML | null | 1308.3506 | null | null | http://arxiv.org/pdf/1308.3506v1 | 2013-08-15T20:43:47Z | 2013-08-15T20:43:47Z | Computational Rationalization: The Inverse Equilibrium Problem | Modeling the purposeful behavior of imperfect agents from a small number of
observations is a challenging task. When restricted to the single-agent
decision-theoretic setting, inverse optimal control techniques assume that
observed behavior is an approximately optimal solution to an unknown decision
problem. These techniques learn a utility function that explains the example
behavior and can then be used to accurately predict or imitate future behavior
in similar observed or unobserved situations.
In this work, we consider similar tasks in competitive and cooperative
multi-agent domains. Here, unlike single-agent settings, a player cannot
myopically maximize its reward; it must speculate on how the other agents may
act to influence the game's outcome. Employing the game-theoretic notion of
regret and the principle of maximum entropy, we introduce a technique for
predicting and generalizing behavior.
| [
"['Kevin Waugh' 'Brian D. Ziebart' 'J. Andrew Bagnell']",
"Kevin Waugh and Brian D. Ziebart and J. Andrew Bagnell"
] |
cs.LG | null | 1308.3509 | null | null | http://arxiv.org/pdf/1308.3509v1 | 2013-08-15T20:59:32Z | 2013-08-15T20:59:32Z | Stochastic Optimization for Machine Learning | It has been found that stochastic algorithms often find good solutions much
more rapidly than inherently-batch approaches. Indeed, a very useful rule of
thumb is that, when solving a machine learning problem, an iterative technique
which relies on performing a very large number of relatively inexpensive
updates will often outperform one which performs a smaller number of much
"smarter" but computationally expensive updates.
In this thesis, we will consider the application of stochastic algorithms to
two of the most important machine learning problems. Part i is concerned with
the supervised problem of binary classification using kernelized linear
classifiers, for which the data have labels belonging to exactly two classes
(e.g. "has cancer" or "doesn't have cancer"), and the learning problem is to
find a linear classifier which is best at predicting the label. In Part ii, we
will consider the unsupervised problem of Principal Component Analysis, for
which the learning task is to find the directions which contain most of the
variance of the data distribution.
Our goal is to present stochastic algorithms for both problems which are,
above all, practical--they work well on real-world data, in some cases better
than all known competing algorithms. A secondary, but still very important,
goal is to derive theoretical bounds on the performance of these algorithms
which are at least competitive with, and often better than, those known for
other approaches.
| [
"['Andrew Cotter']",
"Andrew Cotter"
] |
cs.LG cs.AI | null | 1308.3513 | null | null | http://arxiv.org/pdf/1308.3513v1 | 2013-08-15T21:21:05Z | 2013-08-15T21:21:05Z | Hidden Parameter Markov Decision Processes: A Semiparametric Regression
Approach for Discovering Latent Task Parametrizations | Control applications often feature tasks with similar, but not identical,
dynamics. We introduce the Hidden Parameter Markov Decision Process (HiP-MDP),
a framework that parametrizes a family of related dynamical systems with a
low-dimensional set of latent factors, and introduce a semiparametric
regression approach for learning its structure from data. In the control
setting, we show that a learned HiP-MDP rapidly identifies the dynamics of a
new task instance, allowing an agent to flexibly adapt to task variations.
| [
"['Finale Doshi-Velez' 'George Konidaris']",
"Finale Doshi-Velez and George Konidaris"
] |
cs.LG | null | 1308.3541 | null | null | http://arxiv.org/pdf/1308.3541v2 | 2014-03-15T19:42:29Z | 2013-08-16T03:46:25Z | Knapsack Constrained Contextual Submodular List Prediction with
Application to Multi-document Summarization | We study the problem of predicting a set or list of options under knapsack
constraint. The quality of such lists is evaluated by a submodular reward
function that measures both quality and diversity. Similar to DAgger (Ross et
al., 2010), by a reduction to online learning, we show how to adapt two
sequence prediction models to imitate greedy maximization under knapsack
constraints: CONSEQOPT (Dey et al., 2012) and SCP (Ross et al., 2013).
Experiments on extractive multi-document summarization show that our approach
outperforms existing state-of-the-art methods.
| [
"['Jiaji Zhou' 'Stephane Ross' 'Yisong Yue' 'Debadeepta Dey'\n 'J. Andrew Bagnell']",
"Jiaji Zhou, Stephane Ross, Yisong Yue, Debadeepta Dey, J. Andrew\n Bagnell"
] |
null | null | 1308.3558 | null | null | http://arxiv.org/pdf/1308.3558v1 | 2013-08-16T05:48:29Z | 2013-08-16T05:48:29Z | Fast Stochastic Alternating Direction Method of Multipliers | In this paper, we propose a new stochastic alternating direction method of multipliers (ADMM) algorithm, which incrementally approximates the full gradient in the linearized ADMM formulation. Besides having a low per-iteration complexity as existing stochastic ADMM algorithms, the proposed algorithm improves the convergence rate on convex problems from $O(\frac{1}{\sqrt{T}})$ to $O(\frac{1}{T})$, where $T$ is the number of iterations. This matches the convergence rate of the batch ADMM algorithm, but without the need to visit all the samples in each iteration. Experiments on the graph-guided fused lasso demonstrate that the new algorithm is significantly faster than state-of-the-art stochastic and batch ADMM algorithms. | [
"['Leon Wenliang Zhong' 'James T. Kwok']"
] |
stat.AP cs.LG stat.ML | null | 1308.3740 | null | null | http://arxiv.org/pdf/1308.3740v1 | 2013-08-16T23:42:05Z | 2013-08-16T23:42:05Z | Standardizing Interestingness Measures for Association Rules | Interestingness measures provide information that can be used to prune or
select association rules. A given value of an interestingness measure is often
interpreted relative to the overall range of the values that the
interestingness measure can take. However, properties of individual association
rules restrict the values an interestingness measure can achieve. An
interestingness measure can be standardized to take this into account, but this has
only been done for one interestingness measure to date, i.e., the lift.
Standardization provides greater insight than the raw value and may even alter
researchers' perception of the data. We derive standardized analogues of three
interestingness measures and use real and simulated data to compare them to
their raw versions, each other, and the standardized lift.
| [
"['Mateen Shaikh' 'Paul D. McNicholas' 'M. Luiza Antonie'\n 'T. Brendan Murphy']",
"Mateen Shaikh, Paul D. McNicholas, M. Luiza Antonie and T. Brendan\n Murphy"
] |
cs.LG | null | 1308.3750 | null | null | http://arxiv.org/pdf/1308.3750v1 | 2013-08-17T03:56:03Z | 2013-08-17T03:56:03Z | Comment on "robustness and regularization of support vector machines" by
H. Xu, et al., (Journal of Machine Learning Research, vol. 10, pp. 1485-1510,
2009, arXiv:0803.3490) | This paper comments on the published work dealing with robustness and
regularization of support vector machines (Journal of Machine Learning
Research, vol. 10, pp. 1485-1510, 2009) [arXiv:0803.3490] by H. Xu et al. They
proposed a theorem to show that it is possible to relate robustness in the
feature space and robustness in the sample space directly. In this paper, we
present a counterexample that refutes their theorem.
| [
"['Yahya Forghani' 'Hadi Sadoghi Yazdi']",
"Yahya Forghani, Hadi Sadoghi Yazdi"
] |
cs.LG stat.ML | null | 1308.3818 | null | null | http://arxiv.org/pdf/1308.3818v1 | 2013-08-18T01:08:55Z | 2013-08-18T01:08:55Z | Reference Distance Estimator | A theoretical study is presented for a simple linear classifier called
reference distance estimator (RDE), which assigns the weight of each feature j
as P(r|j)-P(r), where r is a reference feature relevant to the target class y.
The analysis shows that if r performs better than random guess in predicting y
and is conditionally independent of each feature j, the RDE will have the
same classification performance as that from P(y|j)-P(y), a classifier trained
with the gold standard y. Since the estimation of P(r|j)-P(r) does not require
labeled data, under the assumption above, RDE trained with a large number of
unlabeled examples would be close to that trained with infinite labeled
examples. For the case where the assumption does not hold, we theoretically analyze
the factors that influence the closeness of the RDE to the perfect one under
the assumption, and present an algorithm to select reference features and
combine multiple RDEs from different reference features using both labeled and
unlabeled data. The experimental results on 10 text classification tasks show
that the semi-supervised learning method improves supervised methods using
5,000 labeled examples and 13 million unlabeled ones, and in many tasks, its
performance is even close to a classifier trained with 13 million labeled
examples. In addition, the bounds in the theorems provide good estimation of
the classification performance and can be useful for new algorithm design.
| [
"['Yanpeng Li']",
"Yanpeng Li"
] |
cs.DS cs.IT cs.LG math.IT | null | 1308.3946 | null | null | http://arxiv.org/pdf/1308.3946v1 | 2013-08-19T07:45:07Z | 2013-08-19T07:45:07Z | Optimal Algorithms for Testing Closeness of Discrete Distributions | We study the question of closeness testing for two discrete distributions.
More precisely, given samples from two distributions $p$ and $q$ over an
$n$-element set, we wish to distinguish whether $p=q$ versus $p$ is at least
$\varepsilon$-far from $q$, in either $\ell_1$ or $\ell_2$ distance. Batu et al. gave
the first sub-linear time algorithms for these problems, which matched the
lower bounds of Valiant up to a logarithmic factor in $n$, and a polynomial
factor of $\varepsilon$.
In this work, we present simple (and new) testers for both the $\ell_1$ and
$\ell_2$ settings, with sample complexity that is information-theoretically
optimal, to constant factors, both in the dependence on $n$, and the dependence
on $\varepsilon$; for the $\ell_1$ testing problem we establish that the sample
complexity is $\Theta(\max\{n^{2/3}/\varepsilon^{4/3}, n^{1/2}/\varepsilon^{2}\})$.
| [
"['Siu-On Chan' 'Ilias Diakonikolas' 'Gregory Valiant' 'Paul Valiant']",
"Siu-On Chan and Ilias Diakonikolas and Gregory Valiant and Paul\n Valiant"
] |
math.OC cs.LG stat.ML | null | 1308.4004 | null | null | http://arxiv.org/pdf/1308.4004v2 | 2016-08-03T11:57:36Z | 2013-08-19T12:46:33Z | A balanced k-means algorithm for weighted point sets | The classical $k$-means algorithm for partitioning $n$ points in
$\mathbb{R}^d$ into $k$ clusters is one of the most popular and widely spread
clustering methods. The need to respect prescribed lower bounds on the cluster
sizes has been observed in many scientific and business applications.
In this paper, we present and analyze a generalization of $k$-means that is
capable of handling weighted point sets and prescribed lower and upper bounds
on the cluster sizes. We call it weight-balanced $k$-means. The key difference
to existing models lies in the ability to handle the combination of weighted
point sets with prescribed bounds on the cluster sizes. This imposes the need
to perform partial membership clustering, and leads to significant differences.
For example, while finite termination of all $k$-means variants for
unweighted point sets is a simple consequence of the existence of only finitely
many partitions of a given set of points, the situation is more involved for
weighted point sets, as there are infinitely many partial membership
clusterings. Using polyhedral theory, we show that the number of iterations of
weight-balanced $k$-means is bounded above by $n^{O(dk)}$, so in particular it
is polynomial for fixed $k$ and $d$. This is similar to the known worst-case
upper bound for classical $k$-means for unweighted point sets and unrestricted
cluster sizes, despite the much more general framework. We conclude with the
discussion of some additional favorable properties of our method.
| [
"['Steffen Borgwardt' 'Andreas Brieden' 'Peter Gritzmann']",
"Steffen Borgwardt, Andreas Brieden and Peter Gritzmann"
] |
cs.IT cs.LG math.IT math.PR math.ST stat.TH | null | 1308.4077 | null | null | http://arxiv.org/pdf/1308.4077v2 | 2013-08-20T03:36:59Z | 2013-08-19T17:12:40Z | Support Recovery for the Drift Coefficient of High-Dimensional
Diffusions | Consider the problem of learning the drift coefficient of a $p$-dimensional
stochastic differential equation from a sample path of length $T$. We assume
that the drift is parametrized by a high-dimensional vector, and study the
support recovery problem when both $p$ and $T$ can tend to infinity. In
particular, we prove a general lower bound on the sample-complexity $T$ by
using a characterization of mutual information as a time integral of
conditional variance, due to Kadota, Zakai, and Ziv. For linear stochastic
differential equations, the drift coefficient is parametrized by a $p\times p$
matrix which describes which degrees of freedom interact under the dynamics. In
this case, we analyze an $\ell_1$-regularized least squares estimator and prove
an upper bound on $T$ that nearly matches the lower bound on specific classes
of sparse matrices.
| [
"Jose Bento, and Morteza Ibrahimi",
"['Jose Bento' 'Morteza Ibrahimi']"
] |
math.PR cs.LG math.ST stat.TH | null | 1308.4123 | null | null | http://arxiv.org/pdf/1308.4123v1 | 2013-08-18T22:40:41Z | 2013-08-18T22:40:41Z | A Likelihood Ratio Approach for Probabilistic Inequalities | We propose a new approach for deriving probabilistic inequalities based on
bounding likelihood ratios. We demonstrate that this approach is more general
and powerful than the classical method frequently used for deriving
concentration inequalities such as Chernoff bounds. We discover that the
proposed approach is inherently related to statistical concepts such as
monotone likelihood ratio, maximum likelihood, and the method of moments for
parameter estimation. A connection between the proposed approach and the large
deviation theory is also established. We show that, without using moment
generating functions, tightest possible concentration inequalities may be
readily derived by the proposed approach. We have derived new concentration
inequalities using the proposed approach, which cannot be obtained by the
classical approach based on moment generating functions.
| [
"['Xinjia Chen']",
"Xinjia Chen"
] |
cs.CV cs.LG stat.ML | null | 1308.4200 | null | null | http://arxiv.org/pdf/1308.4200v1 | 2013-08-20T01:07:35Z | 2013-08-20T01:07:35Z | Towards Adapting ImageNet to Reality: Scalable Domain Adaptation with
Implicit Low-rank Transformations | Images seen during test time are often not from the same distribution as
images used for learning. This problem, known as domain shift, occurs when
training classifiers from object-centric internet image databases and trying to
apply them directly to scene understanding tasks. The consequence is often
severe performance degradation and is one of the major barriers for the
application of classifiers in real-world systems. In this paper, we show how to
learn transform-based domain adaptation classifiers in a scalable manner. The
key idea is to exploit an implicit rank constraint, originated from a
max-margin domain adaptation formulation, to make optimization tractable.
Experiments show that the transformation between domains can be very
efficiently learned from data and easily applied to new categories. This begins
to bridge the gap between large-scale internet image collections and object
images captured in everyday life environments.
| [
"['Erik Rodner' 'Judy Hoffman' 'Jeff Donahue' 'Trevor Darrell'\n 'Kate Saenko']",
"Erik Rodner, Judy Hoffman, Jeff Donahue, Trevor Darrell, Kate Saenko"
] |
stat.ME cs.LG | null | 1308.4206 | null | null | http://arxiv.org/pdf/1308.4206v2 | 2013-09-06T02:50:54Z | 2013-08-20T01:59:49Z | Nested Nonnegative Cone Analysis | Motivated by the analysis of nonnegative data objects, a novel Nested
Nonnegative Cone Analysis (NNCA) approach is proposed to overcome some
drawbacks of existing methods. The application of the traditional PCA/SVD method
to nonnegative data often causes the approximation matrix to leave the
nonnegative cone, which leads to non-interpretable and sometimes nonsensical
results. The nonnegative matrix factorization (NMF) approach overcomes this
issue; however, the NMF approximation matrices suffer from several drawbacks:
1) the factorization
may not be unique, 2) the resulting approximation matrix at a specific rank may
not be unique, and 3) the subspaces spanned by the approximation matrices at
different ranks may not be nested. These drawbacks cause trouble in
determining the number of components and in multi-scale (in ranks)
interpretability. The NNCA approach proposed in this paper naturally generates
a nested structure, and is shown to be unique at each rank. Simulations are
used in this paper to illustrate the drawbacks of the traditional methods, and
the usefulness of the NNCA method.
| [
"['Lingsong Zhang' 'J. S. Marron' 'Shu Lu']",
"Lingsong Zhang and J. S. Marron and Shu Lu"
] |
stat.ML cs.LG cs.MS | null | 1308.4214 | null | null | http://arxiv.org/pdf/1308.4214v1 | 2013-08-20T02:50:43Z | 2013-08-20T02:50:43Z | Pylearn2: a machine learning research library | Pylearn2 is a machine learning research library. This does not just mean that
it is a collection of machine learning algorithms that share a common API; it
means that it has been designed for flexibility and extensibility in order to
facilitate research projects that involve new or unusual use cases. In this
paper we give a brief history of the library, an overview of its basic
philosophy, a summary of the library's architecture, and a description of how
the Pylearn2 community functions socially.
| [
"Ian J. Goodfellow, David Warde-Farley, Pascal Lamblin, Vincent\n Dumoulin, Mehdi Mirza, Razvan Pascanu, James Bergstra, Fr\\'ed\\'eric Bastien,\n Yoshua Bengio",
"['Ian J. Goodfellow' 'David Warde-Farley' 'Pascal Lamblin'\n 'Vincent Dumoulin' 'Mehdi Mirza' 'Razvan Pascanu' 'James Bergstra'\n 'Frédéric Bastien' 'Yoshua Bengio']"
] |
cs.LG cs.MA | null | 1308.4565 | null | null | http://arxiv.org/pdf/1308.4565v2 | 2013-08-25T14:23:42Z | 2013-08-21T13:17:00Z | Decentralized Online Big Data Classification - a Bandit Framework | Distributed, online data mining systems have emerged as a result of
applications requiring analysis of large amounts of correlated and
high-dimensional data produced by multiple distributed data sources. We propose
a distributed online data classification framework where data is gathered by
distributed data sources and processed by a heterogeneous set of distributed
learners which learn online, at run-time, how to classify the different data
streams either by using their locally available classification functions or by
helping each other by classifying each other's data. Importantly, since the
data is gathered at different locations, sending the data to another learner to
process incurs additional costs such as delays, and hence this is only
beneficial if the benefits obtained from a better classification exceed
the costs. We assume that the classification functions available to each
processing element are fixed, but their prediction accuracy for various types
of incoming data are unknown and can change dynamically over time, and thus
they need to be learned online. We model the problem of joint classification by
the distributed and heterogeneous learners from multiple data sources as a
distributed contextual bandit problem where each data instance is characterized by a
specific context. We develop distributed online learning algorithms for which
we can prove that they have sublinear regret. Compared to prior work in
distributed online data mining, our work is the first to provide analytic
regret results characterizing the performance of the proposed algorithms.
| [
"Cem Tekin and Mihaela van der Schaar",
"['Cem Tekin' 'Mihaela van der Schaar']"
] |
cs.LG stat.ML | null | 1308.4568 | null | null | http://arxiv.org/pdf/1308.4568v4 | 2015-03-23T14:06:27Z | 2013-08-21T13:28:43Z | Distributed Online Learning via Cooperative Contextual Bandits | In this paper we propose a novel framework for decentralized, online learning
by many learners. At each moment of time, an instance characterized by a
certain context may arrive to each learner; based on the context, the learner
can select one of its own actions (which gives a reward and provides
information) or request assistance from another learner. In the latter case,
the requester pays a cost and receives the reward but the provider learns the
information. In our framework, learners are modeled as cooperative contextual
bandits. Each learner seeks to maximize the expected reward from its arrivals,
which involves trading off the reward received from its own actions, the
information learned from its own actions, the reward received from the actions
requested of others and the cost paid for these actions - taking into account
what it has learned about the value of assistance from each other learner. We
develop distributed online learning algorithms and provide analytic bounds to
compare the efficiency of these algorithms with the complete-knowledge
(oracle) benchmark (in which the expected reward of every action in every
context is known by every learner). Our estimates show that regret - the loss
incurred by the algorithm - is sublinear in time. Our theoretical framework can
be used in many practical applications including Big Data mining, event
detection in surveillance sensor networks and distributed online recommendation
systems.
| [
"Cem Tekin and Mihaela van der Schaar",
"['Cem Tekin' 'Mihaela van der Schaar']"
] |
cs.NA cs.LG stat.ML | null | 1308.4757 | null | null | http://arxiv.org/pdf/1308.4757v9 | 2016-12-21T07:05:13Z | 2013-08-22T03:40:41Z | Online and stochastic Douglas-Rachford splitting method for large scale
machine learning | Online and stochastic learning have emerged as powerful tools in large-scale
optimization. In this work, we generalize the Douglas-Rachford splitting (DRs)
method for minimizing composite functions to online and stochastic settings (to
the best of our knowledge, this is the first time DRs has been generalized to a
sequential version). We first establish an $O(1/\sqrt{T})$ regret bound for the
batch DRs method. We then prove that the online DRs splitting method enjoys an $O(1)$
regret bound and that stochastic DRs splitting has a convergence rate of
$O(1/\sqrt{T})$. The proof is simple and intuitive, and the results and
technique can serve as a starting point for research on large-scale
machine learning employing the DRs method. Numerical experiments with the proposed
method demonstrate the effectiveness of the online and stochastic update rules,
and further confirm our regret and convergence analysis.
| [
"['Ziqiang Shi' 'Rujie Liu']",
"Ziqiang Shi and Rujie Liu"
] |
cs.LG | null | 1308.4828 | null | null | http://arxiv.org/pdf/1308.4828v1 | 2013-08-22T11:39:06Z | 2013-08-22T11:39:06Z | The Sample-Complexity of General Reinforcement Learning | We present a new algorithm for general reinforcement learning where the true
environment is known to belong to a finite class of N arbitrary models. The
algorithm is shown to be near-optimal for all but O(N log^2 N) time-steps with
high probability. Infinite classes are also considered where we show that
compactness is a key criterion for determining the existence of uniform
sample-complexity bounds. A matching lower bound is given for the finite case.
| [
"Tor Lattimore and Marcus Hutter and Peter Sunehag",
"['Tor Lattimore' 'Marcus Hutter' 'Peter Sunehag']"
] |
math.OC cs.LG stat.ML | 10.1137/130934568 | 1308.4915 | null | null | http://arxiv.org/abs/1308.4915v2 | 2014-05-20T04:13:06Z | 2013-08-22T17:02:57Z | Minimal Dirichlet energy partitions for graphs | Motivated by a geometric problem, we introduce a new non-convex graph
partitioning objective where the optimality criterion is given by the sum of
the Dirichlet eigenvalues of the partition components. A relaxed formulation is
identified and a novel rearrangement algorithm is proposed, which we show is
strictly decreasing and converges in a finite number of iterations to a local
minimum of the relaxed objective function. Our method is applied to several
clustering problems on graphs constructed from synthetic data, MNIST
handwritten digits, and manifold discretizations. The model has a
semi-supervised extension and provides a natural representative for the
clusters as well.
| [
"['Braxton Osting' 'Chris D. White' 'Edouard Oudet']",
"Braxton Osting, Chris D. White, Edouard Oudet"
] |
cs.LG stat.ML | null | 1308.4922 | null | null | http://arxiv.org/pdf/1308.4922v2 | 2014-01-02T23:35:03Z | 2013-08-22T17:15:36Z | Learning Deep Representation Without Parameter Inference for Nonlinear
Dimensionality Reduction | Unsupervised deep learning is one of the most powerful representation
learning techniques. Restricted Boltzman machine, sparse coding, regularized
auto-encoders, and convolutional neural networks are pioneering building blocks
of deep learning. In this paper, we propose a new building block -- distributed
random models. The proposed method is a special full implementation of the
product of experts: (i) each expert owns multiple hidden units and different
experts have different numbers of hidden units; (ii) the model of each expert
is a k-center clustering, whose k-centers are only uniformly sampled examples,
and whose output (i.e. the hidden units) is a sparse code that only the
similarity values from a few nearest neighbors are reserved. The relationship
between the pioneering building blocks, several notable research branches and
the proposed method is analyzed. Experimental results show that the proposed
deep model can learn better representations than deep belief networks and
meanwhile can train a much larger network with much less time than deep belief
networks.
| [
"Xiao-Lei Zhang",
"['Xiao-Lei Zhang']"
] |
cs.CV cs.LG stat.ML | 10.1109/TSP.2014.2329274 | 1308.5038 | null | null | http://arxiv.org/abs/1308.5038v2 | 2013-11-30T19:18:49Z | 2013-08-23T03:32:57Z | Group-Sparse Signal Denoising: Non-Convex Regularization, Convex
Optimization | Convex optimization with sparsity-promoting convex regularization is a
standard approach for estimating sparse signals in noise. In order to promote
sparsity more strongly than convex regularization, it is also standard practice
to employ non-convex optimization. In this paper, we take a third approach. We
utilize a non-convex regularization term chosen such that the total cost
function (consisting of data consistency and regularization terms) is convex.
Therefore, sparsity is more strongly promoted than in the standard convex
formulation, but without sacrificing the attractive aspects of convex
optimization (unique minimum, robust algorithms, etc.). We use this idea to
improve the recently developed 'overlapping group shrinkage' (OGS) algorithm
for the denoising of group-sparse signals. The algorithm is applied to the
problem of speech enhancement with favorable results in terms of both SNR and
perceptual quality.
| [
"Po-Yu Chen, Ivan W. Selesnick",
"['Po-Yu Chen' 'Ivan W. Selesnick']"
] |
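As a point of reference for the group-sparse shrinkage discussed in the record above, the sketch below applies the standard non-overlapping group soft-threshold (the special case in which the groups do not overlap); the OGS algorithm itself handles overlapping groups iteratively, which this toy example does not attempt.

```python
import numpy as np

def group_soft_threshold(y, group_size, lam):
    """Non-overlapping group soft-threshold: shrink each block by its l2 norm."""
    y = np.asarray(y, dtype=float)
    x = y.copy()
    for start in range(0, len(y), group_size):
        block = y[start:start + group_size]
        norm = np.linalg.norm(block)
        scale = max(1.0 - lam / norm, 0.0) if norm > 0 else 0.0
        x[start:start + group_size] = scale * block
    return x

# toy usage: denoise a group-sparse signal with one active group
rng = np.random.default_rng(1)
signal = np.zeros(40)
signal[8:12] = 3.0
noisy = signal + 0.3 * rng.standard_normal(40)
print(np.round(group_soft_threshold(noisy, group_size=4, lam=0.6), 2))
```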
cs.MS cs.LG math.OC stat.ML | null | 1308.5200 | null | null | http://arxiv.org/pdf/1308.5200v1 | 2013-08-23T18:35:59Z | 2013-08-23T18:35:59Z | Manopt, a Matlab toolbox for optimization on manifolds | Optimization on manifolds is a rapidly developing branch of nonlinear
optimization. Its focus is on problems where the smooth geometry of the search
space can be leveraged to design efficient numerical algorithms. In particular,
optimization on manifolds is well-suited to deal with rank and orthogonality
constraints. Such structured constraints appear pervasively in machine learning
applications, including low-rank matrix completion, sensor network
localization, camera network registration, independent component analysis,
metric learning, dimensionality reduction and so on. The Manopt toolbox,
available at www.manopt.org, is a user-friendly, documented piece of software
dedicated to simplifying experimentation with state-of-the-art Riemannian
optimization algorithms. We aim particularly at reaching practitioners outside
our field.
| [
"Nicolas Boumal and Bamdev Mishra and P.-A. Absil and Rodolphe\n Sepulchre",
"['Nicolas Boumal' 'Bamdev Mishra' 'P. -A. Absil' 'Rodolphe Sepulchre']"
] |
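Manopt itself is a Matlab toolbox; as a language-agnostic illustration of the kind of manifold optimization it wraps, the sketch below runs retracted gradient ascent on the unit sphere to find the leading eigenvector of a symmetric matrix (a standard Rayleigh-quotient example, not Manopt's API).

```python
import numpy as np

def sphere_gradient_ascent(A, steps=500, lr=0.1, seed=0):
    """Maximize x^T A x over the unit sphere via Riemannian gradient ascent."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(steps):
        egrad = 2 * A @ x                      # Euclidean gradient of x^T A x
        rgrad = egrad - (x @ egrad) * x        # project onto tangent space at x
        x = x + lr * rgrad                     # ascent step
        x /= np.linalg.norm(x)                 # retraction back onto the sphere
    return x

A = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.2],
              [0.0, 0.2, 0.5]])
x = sphere_gradient_ascent(A)
print(x, x @ A @ x)   # x @ A @ x approaches the largest eigenvalue of A
```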
cs.LG cs.IR stat.ML | null | 1308.5275 | null | null | http://arxiv.org/pdf/1308.5275v1 | 2013-08-24T01:31:22Z | 2013-08-24T01:31:22Z | The Lovasz-Bregman Divergence and connections to rank aggregation,
clustering, and web ranking | We extend the recently introduced theory of Lovasz-Bregman (LB) divergences
(Iyer & Bilmes, 2012) in several ways. We show that they represent a distortion
between a 'score' and an 'ordering', thus providing a new view of rank
aggregation and order based clustering with interesting connections to web
ranking. We show how the LB divergences have a number of properties akin to
many permutation based metrics, and in fact have as special cases forms very
similar to the Kendall-$\tau$ metric. We also show how the LB divergences
subsume a number of commonly used ranking measures in information retrieval,
like the NDCG and AUC. Unlike the traditional permutation based metrics,
however, the LB divergence naturally captures a notion of "confidence" in the
orderings, thus providing a new representation to applications involving
aggregating scores as opposed to just orderings. We show how a number of
recently used web ranking models are forms of Lovasz-Bregman rank aggregation
and also observe that a natural form of Mallow's model using the LB divergence
has been used as conditional ranking models for the 'Learning to Rank' problem.
| [
"Rishabh Iyer and Jeff Bilmes",
"['Rishabh Iyer' 'Jeff Bilmes']"
] |
cs.LG | null | 1308.5281 | null | null | http://arxiv.org/pdf/1308.5281v1 | 2013-08-24T02:33:11Z | 2013-08-24T02:33:11Z | Ensemble of Distributed Learners for Online Classification of Dynamic
Data Streams | We present an efficient distributed online learning scheme to classify data
captured from distributed, heterogeneous, and dynamic data sources. Our scheme
consists of multiple distributed local learners, that analyze different streams
of data that are correlated to a common event that needs to be classified. Each
learner uses a local classifier to make a local prediction. The local
predictions are then collected by each learner and combined using a weighted
majority rule to output the final prediction. We propose a novel online
ensemble learning algorithm to update the aggregation rule in order to adapt to
the underlying data dynamics. We rigorously determine a bound for the worst
case misclassification probability of our algorithm which depends on the
misclassification probabilities of the best static aggregation rule, and of the
best local classifier. Importantly, the worst case misclassification
probability of our algorithm tends asymptotically to 0 if the misclassification
probability of the best static aggregation rule or the misclassification
probability of the best local classifier tend to 0. Then we extend our
algorithm to address challenges specific to the distributed implementation and
we prove new bounds that apply to these settings. Finally, we test our scheme
by performing an evaluation study on several data sets. When applied to data
sets widely used by the literature dealing with dynamic data streams and
concept drift, our scheme exhibits performance gains ranging from 34% to 71%
with respect to state of the art solutions.
| [
"Luca Canzian, Yu Zhang, and Mihaela van der Schaar",
"['Luca Canzian' 'Yu Zhang' 'Mihaela van der Schaar']"
] |
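The aggregation step in the record above combines local predictions with a weighted majority rule. A minimal, generic multiplicative-weights version of such a rule is sketched below; the penalty constant and binary-label setting are assumptions for illustration, not the authors' exact update.

```python
import numpy as np

class WeightedMajority:
    """Binary weighted-majority aggregation with multiplicative weight updates."""

    def __init__(self, n_learners, beta=0.5):
        self.w = np.ones(n_learners)
        self.beta = beta                      # penalty factor for mistakes

    def predict(self, votes):
        votes = np.asarray(votes)             # local predictions in {0, 1}
        return int(self.w @ votes >= self.w @ (1 - votes))

    def update(self, votes, label):
        wrong = np.asarray(votes) != label
        self.w[wrong] *= self.beta            # down-weight learners that erred

# toy usage: 5 local learners with different accuracies
rng = np.random.default_rng(2)
acc = np.array([0.9, 0.6, 0.55, 0.7, 0.52])
agg, errors = WeightedMajority(5), 0
for _ in range(2000):
    y = rng.integers(2)
    votes = np.where(rng.random(5) < acc, y, 1 - y)
    errors += agg.predict(votes) != y
    agg.update(votes, y)
print(errors / 2000, np.round(agg.w / agg.w.sum(), 3))
```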
cs.LO cs.LG cs.SY | 10.4204/EPTCS.124.1 | 1308.5329 | null | null | http://arxiv.org/abs/1308.5329v1 | 2013-08-24T14:33:16Z | 2013-08-24T14:33:16Z | Monitoring with uncertainty | We discuss the problem of runtime verification of an instrumented program
that fails to emit and to monitor some events. These gaps can occur when a
monitoring overhead control mechanism is introduced to disable the monitor of
an application with real-time constraints. We show how to use statistical
models to learn the application behavior and to "fill in" the introduced gaps.
Finally, we present and discuss some techniques developed in the last three
years to estimate the probability that a property of interest is violated in
the presence of an incomplete trace.
| [
"['Ezio Bartocci' 'Radu Grosu']",
"Ezio Bartocci (TU Wien), Radu Grosu (TU Wien)"
] |
cs.LG cs.CE q-bio.MN | 10.4204/EPTCS.124.10 | 1308.5338 | null | null | http://arxiv.org/abs/1308.5338v1 | 2013-08-24T14:34:38Z | 2013-08-24T14:34:38Z | A stochastic hybrid model of a biological filter | We present a hybrid model of a biological filter, a genetic circuit which
removes fast fluctuations in the cell's internal representation of the
extracellular environment. The model takes the classic feed-forward loop (FFL) motif
and represents it as a network of continuous protein concentrations and binary,
unobserved gene promoter states. We address the problem of statistical
inference and parameter learning for this class of models from partial,
discrete time observations. We show that the hybrid representation leads to an
efficient algorithm for approximate statistical inference in this circuit, and
show its effectiveness on a simulated data set.
| [
"Andrea Ocone (School of Informatics, University of Edinburgh), Guido\n Sanguinetti (School of Informatics, University of Edinburgh)",
"['Andrea Ocone' 'Guido Sanguinetti']"
] |
stat.ML cs.LG | 10.1109/TSP.2013.2279358 | 1308.5546 | null | null | http://arxiv.org/abs/1308.5546v1 | 2013-08-26T11:31:38Z | 2013-08-26T11:31:38Z | Sparse and Non-Negative BSS for Noisy Data | Non-negative blind source separation (BSS) has raised interest in various
fields of research, as testified by the wide literature on the topic of
non-negative matrix factorization (NMF). In this context, it is fundamental
that the sources to be estimated present some diversity in order to be
efficiently retrieved. Sparsity is known to enhance such contrast between the
sources while producing very robust approaches, especially to noise. In this
paper we introduce a new algorithm in order to tackle the blind separation of
non-negative sparse sources from noisy measurements. We first show that
sparsity and non-negativity constraints have to be carefully applied on the
sought-after solution. In fact, improperly constrained solutions are unlikely
to be stable and are therefore sub-optimal. The proposed algorithm, named nGMCA
(non-negative Generalized Morphological Component Analysis), makes use of
proximal calculus techniques to provide properly constrained solutions. The
performance of nGMCA compared to other state-of-the-art algorithms is
demonstrated by numerical experiments encompassing a wide variety of settings,
with negligible parameter tuning. In particular, nGMCA is shown to provide
robustness to noise and performs well on synthetic mixtures of real NMR
spectra.
| [
"J\\'er\\'emy Rapin, J\\'er\\^ome Bobin, Anthony Larue and Jean-Luc Starck",
"['Jérémy Rapin' 'Jérôme Bobin' 'Anthony Larue' 'Jean-Luc Starck']"
] |
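nGMCA relies on proximal calculus to enforce sparsity and non-negativity jointly. The proximal operator of an l1 penalty plus the non-negativity indicator has the simple closed form shown below (a standard fact, given here as a sketch rather than the nGMCA update itself).

```python
import numpy as np

def prox_nonneg_l1(x, lam):
    """prox of lam*||.||_1 + indicator{x >= 0}: one-sided soft-thresholding."""
    return np.maximum(np.asarray(x, dtype=float) - lam, 0.0)

print(prox_nonneg_l1([-0.4, 0.1, 0.9, 2.5], lam=0.3))  # -> [0.  0.  0.6 2.2]
```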
cs.NI cs.GT cs.LG | 10.1109/TWC.2013.092413.130221 | 1308.5835 | null | null | http://arxiv.org/abs/1308.5835v1 | 2013-08-27T12:02:50Z | 2013-08-27T12:02:50Z | Backhaul-Aware Interference Management in the Uplink of Wireless Small
Cell Networks | The design of distributed mechanisms for interference management is one of
the key challenges in emerging wireless small cell networks whose backhaul is
capacity limited and heterogeneous (wired, wireless and a mix thereof). In this
paper, a novel, backhaul-aware approach to interference management in wireless
small cell networks is proposed. The proposed approach enables macrocell user
equipments (MUEs) to optimize their uplink performance, by exploiting the
presence of neighboring small cell base stations. The problem is formulated as
a noncooperative game among the MUEs that seek to optimize their delay-rate
tradeoff, given the conditions of both the radio access network and the --
possibly heterogeneous -- backhaul. To solve this game, a novel, distributed
learning algorithm is proposed using which the MUEs autonomously choose their
optimal uplink transmission strategies, given a limited amount of available
information. The convergence of the proposed algorithm is shown and its
properties are studied. Simulation results show that, under various types of
backhauls, the proposed approach yields significant performance gains, in terms
of both average throughput and delay for the MUEs, when compared to existing
benchmark algorithms.
| [
"['Sumudu Samarakoon' 'Mehdi Bennis' 'Walid Saad' 'Matti Latva-aho']",
"Sumudu Samarakoon and Mehdi Bennis and Walid Saad and Matti Latva-aho"
] |
cs.LG stat.ML | null | 1308.6181 | null | null | http://arxiv.org/pdf/1308.6181v1 | 2013-08-28T15:14:47Z | 2013-08-28T15:14:47Z | Bayesian Conditional Gaussian Network Classifiers with Applications to
Mass Spectra Classification | Classifiers based on probabilistic graphical models are very effective. In
continuous domains, maximum likelihood is usually used to assess the
predictions of those classifiers. When data is scarce, this can easily lead to
overfitting. In any probabilistic setting, Bayesian averaging (BA) provides
theoretically optimal predictions and is known to be robust to overfitting. In
this work we introduce Bayesian Conditional Gaussian Network Classifiers, which
efficiently perform exact Bayesian averaging over the parameters. We evaluate
the proposed classifiers against the maximum likelihood alternatives proposed
so far over standard UCI datasets, concluding that performing BA improves the
quality of the assessed probabilities (conditional log likelihood) whilst
maintaining the error rate.
Overfitting is more likely to occur in domains where the number of data items
is small and the number of variables is large. These two conditions are met in
the realm of bioinformatics, where the early diagnosis of cancer from mass
spectra is a relevant task. We provide an application of our classification
framework to that problem, comparing it with the standard maximum likelihood
alternative, where the improvement of quality in the assessed probabilities is
confirmed.
| [
"Victor Bellon and Jesus Cerquides and Ivo Grosse",
"['Victor Bellon' 'Jesus Cerquides' 'Ivo Grosse']"
] |
cs.DS cs.LG stat.ML | null | 1308.6273 | null | null | http://arxiv.org/pdf/1308.6273v5 | 2014-05-26T17:38:58Z | 2013-08-28T19:57:31Z | New Algorithms for Learning Incoherent and Overcomplete Dictionaries | In sparse recovery we are given a matrix $A$ (the dictionary) and a vector of
the form $A X$ where $X$ is sparse, and the goal is to recover $X$. This is a
central notion in signal processing, statistics and machine learning. But in
applications such as sparse coding, edge detection, compression and super
resolution, the dictionary $A$ is unknown and has to be learned from random
examples of the form $Y = AX$ where $X$ is drawn from an appropriate
distribution --- this is the dictionary learning problem. In most settings, $A$
is overcomplete: it has more columns than rows. This paper presents a
polynomial-time algorithm for learning overcomplete dictionaries; the only
previously known algorithm with provable guarantees is the recent work of
Spielman, Wang and Wright who gave an algorithm for the full-rank case, which
is rarely the case in applications. Our algorithm applies to incoherent
dictionaries which have been a central object of study since they were
introduced in seminal work of Donoho and Huo. In particular, a dictionary is
$\mu$-incoherent if each pair of columns has inner product at most $\mu /
\sqrt{n}$.
The algorithm makes natural stochastic assumptions about the unknown sparse
vector $X$, which can contain $k \leq c \min(\sqrt{n}/\mu \log n, m^{1/2
-\eta})$ non-zero entries (for any $\eta > 0$). This is close to the best $k$
allowable by the best sparse recovery algorithms even if one knows the
dictionary $A$ exactly. Moreover, both the running time and sample complexity
depend on $\log 1/\epsilon$, where $\epsilon$ is the target accuracy, and so
our algorithms converge very quickly to the true dictionary. Our algorithm can
also tolerate substantial amounts of noise provided it is incoherent with
respect to the dictionary (e.g., Gaussian). In the noisy setting, our running
time and sample complexity depend polynomially on $1/\epsilon$, and this is
necessary.
| [
"['Sanjeev Arora' 'Rong Ge' 'Ankur Moitra']",
"Sanjeev Arora and Rong Ge and Ankur Moitra"
] |
cs.LG | null | 1308.6324 | null | null | http://arxiv.org/pdf/1308.6324v2 | 2013-10-30T16:10:27Z | 2013-08-28T22:08:29Z | Prediction of breast cancer recurrence using Classification Restricted
Boltzmann Machine with Dropping | In this paper, we apply Classification Restricted Boltzmann Machine
(ClassRBM) to the problem of predicting breast cancer recurrence. According to
the Polish National Cancer Registry, in 2010 alone breast cancer accounted for
almost 25% of all diagnosed cases of cancer in Poland. We show how to use
ClassRBM for predicting breast cancer recurrence and for discovering the inputs
(symptoms) relevant to illness reappearance. Next, we outline a general probabilistic
framework for learning Boltzmann machines with masks, which we refer to as
Dropping. The manner of generating masks leads to different learning methods,
e.g., DropOut and DropConnect. We propose a new method called DropPart, which is a
generalization of DropConnect: in DropPart, a Beta distribution is used instead of
the Bernoulli distribution of DropConnect. Finally, we carry out an
experiment using a real-life dataset consisting of 949 cases, provided by the
Institute of Oncology Ljubljana.
| [
"['Jakub M. Tomczak']",
"Jakub M. Tomczak"
] |
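The "Dropping" framework above differs only in how the masks are drawn. The sketch below contrasts three mask distributions on a toy weight matrix: Bernoulli over hidden units (DropOut-style), Bernoulli over individual weights (DropConnect-style), and Beta-distributed weight masks in the spirit of DropPart; the shapes and parameter values are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.standard_normal((4, 6))                 # weights: 4 hidden x 6 visible

# DropOut-style: drop whole hidden units (rows) with probability 1 - p
p = 0.5
unit_mask = rng.random((4, 1)) < p
W_dropout = W * unit_mask

# DropConnect-style: drop individual connections with probability 1 - p
conn_mask = rng.random(W.shape) < p
W_dropconnect = W * conn_mask

# DropPart-style: continuous per-weight masks drawn from a Beta distribution
beta_mask = rng.beta(a=2.0, b=2.0, size=W.shape)
W_droppart = W * beta_mask

print(W_dropout.round(2), W_dropconnect.round(2), W_droppart.round(2), sep="\n\n")
```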
stat.ML cs.LG | null | 1308.6342 | null | null | http://arxiv.org/pdf/1308.6342v4 | 2014-02-05T17:59:18Z | 2013-08-29T01:55:37Z | Linear and Parallel Learning of Markov Random Fields | We introduce a new embarrassingly parallel parameter learning algorithm for
Markov random fields with untied parameters which is efficient for a large
class of practical models. Our algorithm parallelizes naturally over cliques
and, for graphs of bounded degree, its complexity is linear in the number of
cliques. Unlike its competitors, our algorithm is fully parallel and for
log-linear models it is also data efficient, requiring only the local
sufficient statistics of the data to estimate parameters.
| [
"['Yariv Dror Mizrahi' 'Misha Denil' 'Nando de Freitas']",
"Yariv Dror Mizrahi, Misha Denil and Nando de Freitas"
] |
cs.AI cs.HC cs.LG cs.NE | null | 1308.6415 | null | null | http://arxiv.org/pdf/1308.6415v2 | 2013-10-09T10:49:29Z | 2013-08-29T10:06:38Z | Learning-Based Procedural Content Generation | Procedural content generation (PCG) has recently become one of the hottest
topics in computational intelligence and AI game research. Among a variety of
PCG techniques, search-based approaches (SBPCG) overwhelmingly dominate PCG development
at present. While SBPCG leads to promising results and successful applications,
it poses a number of challenges ranging from representation to evaluation of
the content being generated. In this paper, we present an alternative yet
generic PCG framework, named learning-based procedural content generation
(LBPCG), to provide potential solutions to several challenging problems in
existing PCG techniques. By exploring and exploiting information gained in game
development and public beta test via data-driven learning, our framework can
generate robust content adaptable to end-user or target players on-line with
minimal interruption to their experience. Furthermore, we develop enabling
techniques to implement the various models required in our framework. For a
proof of concept, we have developed a prototype based on the classic open
source first-person shooter game, Quake. Simulation results suggest that our
framework is promising in generating quality content.
| [
"Jonathan Roberts and Ke Chen",
"['Jonathan Roberts' 'Ke Chen']"
] |
cs.CV cs.LG | null | 1308.6721 | null | null | http://arxiv.org/pdf/1308.6721v1 | 2013-08-30T12:13:11Z | 2013-08-30T12:13:11Z | Discriminative Parameter Estimation for Random Walks Segmentation | The Random Walks (RW) algorithm is one of the most efficient and easy-to-use
probabilistic segmentation methods. By combining contrast terms with prior
terms, it provides accurate segmentations of medical images in a fully
automated manner. However, one of the main drawbacks of using the RW algorithm
is that its parameters have to be hand-tuned. We propose a novel discriminative
learning framework that estimates the parameters using a training dataset. The
main challenge we face is that the training samples are not fully supervised.
Specifically, they provide a hard segmentation of the images, instead of a
probabilistic segmentation. We overcome this challenge by treating the
optimal probabilistic segmentation that is compatible with the given hard
segmentation as a latent variable. This allows us to employ the latent support
vector machine formulation for parameter estimation. We show that our approach
significantly outperforms the baseline methods on a challenging dataset
consisting of real clinical 3D MRI volumes of skeletal muscles.
| [
"['Pierre-Yves Baudin' 'Danny Goodman' 'Puneet Kumar' 'Noura Azzabou'\n 'Pierre G. Carlier' 'Nikos Paragios' 'M. Pawan Kumar']",
"Pierre-Yves Baudin (INRIA Saclay - Ile de France), Danny Goodman,\n Puneet Kumar (INRIA Saclay - Ile de France, CVN), Noura Azzabou (MIRCEN,\n UPMC), Pierre G. Carlier (UPMC), Nikos Paragios (INRIA Saclay - Ile de\n France, MAS, LIGM, ENPC), M. Pawan Kumar (INRIA Saclay - Ile de France, CVN)"
] |
cs.CR cs.DB cs.LG | null | 1308.6744 | null | null | http://arxiv.org/pdf/1308.6744v1 | 2013-08-28T08:34:08Z | 2013-08-28T08:34:08Z | Preventing Disclosure of Sensitive Knowledge by Hiding Inference | Data Mining is a way of extracting data or uncovering hidden patterns of
information from databases. So, there is a need to prevent the inference rules
from being disclosed such that the more secure data sets cannot be identified
from non-sensitive attributes. This can be done by removing or adding
certain item sets in the transactions, a process known as sanitization. The purpose is to hide the
inference rules, so that a user cannot discover any valuable
information from other non-sensitive data and any organisation can release all
samples of their data without the fear of Knowledge Discovery In Databases
which can be achieved by investigating frequently occurring item sets, rules
that can be mined from them with the objective of hiding them. Another way is
to release only limited samples in the new database so that there is no
information loss and it also satisfies the legitimate needs of the users. The
major problem is uncovering hidden patterns, which causes a threat to the
database security. Sensitive data are inferred from non-sensitive data based on
the semantics of the application the user has, commonly known as the inference
problem. Two fundamental approaches to protect sensitive rules from disclosure
are preventing rules from being generated by hiding the frequent sets of
data items, and reducing the importance of the rules by setting their confidence
below a user-specified threshold.
| [
"['A. S. Syed Navaz' 'M. Ravi' 'T. Prabhu']",
"A.S.Syed Navaz, M.Ravi and T.Prabhu"
] |
cs.LG cs.GT stat.ML | null | 1308.6797 | null | null | http://arxiv.org/pdf/1308.6797v5 | 2013-10-14T14:44:41Z | 2013-08-30T17:03:16Z | Online Ranking: Discrete Choice, Spearman Correlation and Other Feedback | Given a set $V$ of $n$ objects, an online ranking system outputs at each time
step a full ranking of the set, observes a feedback of some form and suffers a
loss. We study the setting in which the (adversarial) feedback is an element in
$V$, and the loss is the position (0th, 1st, 2nd...) of the item in the
outputted ranking. More generally, we study a setting in which the feedback is
a subset $U$ of at most $k$ elements in $V$, and the loss is the sum of the
positions of those elements.
We present an algorithm of expected regret $O(n^{3/2}\sqrt{Tk})$ over a time
horizon of $T$ steps with respect to the best single ranking in hindsight. This
improves previous algorithms and analyses either by a factor of
$\Omega(\sqrt{k})$, by a factor of $\Omega(\sqrt{\log n})$, or by improving running
time from quadratic to $O(n\log n)$ per round. We also prove a matching lower
bound. Our techniques also imply an improved regret bound for online rank
aggregation over the Spearman correlation measure, and to other more complex
ranking loss functions.
| [
"Nir Ailon",
"['Nir Ailon']"
] |
math.PR cs.LG math.ST stat.TH | null | 1309.0003 | null | null | http://arxiv.org/pdf/1309.0003v1 | 2013-08-30T18:27:01Z | 2013-08-30T18:27:01Z | Concentration Inequalities for Bounded Random Vectors | We derive simple concentration inequalities for bounded random vectors, which
generalize Hoeffding's inequalities for bounded scalar random variables. As
applications, we apply the general results to multinomial and Dirichlet
distributions to obtain multivariate concentration inequalities.
| [
"['Xinjia Chen']",
"Xinjia Chen"
] |
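For reference, the scalar inequality that the record above generalizes is Hoeffding's bound for independent random variables $X_i \in [a_i, b_i]$ (stated here in its classical form; the vector-valued extension is the paper's contribution):

$$
\Pr\!\Big(\Big|\sum_{i=1}^{n} (X_i - \mathbb{E}X_i)\Big| \ge t\Big) \;\le\; 2\exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n}(b_i - a_i)^2}\right).
$$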
math.OC cs.LG | null | 1309.0113 | null | null | http://arxiv.org/pdf/1309.0113v1 | 2013-08-31T13:39:00Z | 2013-08-31T13:39:00Z | Non-Asymptotic Convergence Analysis of Inexact Gradient Methods for
Machine Learning Without Strong Convexity | Many recent applications in machine learning and data fitting call for the
algorithmic solution of structured smooth convex optimization problems.
Although the gradient descent method is a natural choice for this task, it
requires exact gradient computations and hence can be inefficient when the
problem size is large or the gradient is difficult to evaluate. Therefore,
there has been much interest in inexact gradient methods (IGMs), in which an
efficiently computable approximate gradient is used to perform the update in
each iteration. Currently, non-asymptotic linear convergence results for IGMs
are typically established under the assumption that the objective function is
strongly convex, which is not satisfied in many applications of interest; while
linear convergence results that do not require the strong convexity assumption
are usually asymptotic in nature. In this paper, we combine the best of these
two types of results and establish---under the standard assumption that the
gradient approximation errors decrease linearly to zero---the non-asymptotic
linear convergence of IGMs when applied to a class of structured convex
optimization problems. Such a class covers settings where the objective
function is not necessarily strongly convex and includes the least squares and
logistic regression problems. We believe that our techniques will find further
applications in the non-asymptotic convergence analysis of other first-order
methods.
| [
"['Anthony Man-Cho So']",
"Anthony Man-Cho So"
] |
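To make the notion of an inexact gradient method concrete, here is a toy sketch for least squares in which the gradient is deliberately perturbed by an error whose norm decays geometrically with the iteration counter, matching the assumption under which the linear-convergence result above is stated; the step size and error schedule are illustrative choices, not the paper's analysis.

```python
import numpy as np

def inexact_gradient_descent(A, b, steps=300, rho=0.5, err0=1.0, seed=4):
    """Gradient descent for 0.5*||Ax - b||^2 with geometrically decaying gradient error."""
    rng = np.random.default_rng(seed)
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for k in range(steps):
        grad = A.T @ (A @ x - b)
        noise = rng.standard_normal(x.shape)
        noise *= (err0 * rho**k) / np.linalg.norm(noise)   # ||error|| = err0 * rho^k
        x -= (1.0 / L) * (grad + noise)
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((30, 5))
x_true = rng.standard_normal(5)
x_hat = inexact_gradient_descent(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))          # small residual despite inexact gradients
```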
cs.LG cs.MS | null | 1309.0238 | null | null | http://arxiv.org/pdf/1309.0238v1 | 2013-09-01T16:22:48Z | 2013-09-01T16:22:48Z | API design for machine learning software: experiences from the
scikit-learn project | Scikit-learn is an increasingly popular machine learning library. Written
in Python, it is designed to be simple and efficient, accessible to
non-experts, and reusable in various contexts. In this paper, we present and
discuss our design choices for the application programming interface (API) of
the project. In particular, we describe the simple and elegant interface shared
by all learning and processing units in the library and then discuss its
advantages in terms of composition and reusability. The paper also comments on
implementation details specific to the Python ecosystem and analyzes obstacles
faced by users and developers of the library.
| [
"['Lars Buitinck' 'Gilles Louppe' 'Mathieu Blondel' 'Fabian Pedregosa'\n 'Andreas Mueller' 'Olivier Grisel' 'Vlad Niculae' 'Peter Prettenhofer'\n 'Alexandre Gramfort' 'Jaques Grobler' 'Robert Layton' 'Jake Vanderplas'\n 'Arnaud Joly' 'Brian Holt' 'Gaël Varoquaux']",
"Lars Buitinck (ILPS), Gilles Louppe, Mathieu Blondel, Fabian Pedregosa\n (INRIA Saclay - Ile de France), Andreas Mueller, Olivier Grisel, Vlad\n Niculae, Peter Prettenhofer, Alexandre Gramfort (INRIA Saclay - Ile de\n France, LTCI), Jaques Grobler (INRIA Saclay - Ile de France), Robert Layton,\n Jake Vanderplas, Arnaud Joly, Brian Holt, Ga\\\"el Varoquaux (INRIA Saclay -\n Ile de France)"
] |
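The shared interface described above boils down to fit/transform/predict with hyperparameters set only in the constructor. A minimal custom transformer written against that convention, composed in a Pipeline, might look as follows (a sketch of the public API the paper documents, using standard scikit-learn classes plus one toy transformer of our own):

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

class MeanCentering(BaseEstimator, TransformerMixin):
    """Toy transformer: hyperparameters in __init__, state learned in fit."""

    def fit(self, X, y=None):
        self.mean_ = np.asarray(X).mean(axis=0)   # learned attributes end with "_"
        return self                               # fit returns self, enabling chaining

    def transform(self, X):
        return np.asarray(X) - self.mean_

X = np.random.default_rng(0).standard_normal((100, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = Pipeline([("center", MeanCentering()),
                  ("clf", LogisticRegression())])
model.fit(X, y)
print(model.predict(X[:5]), model.score(X, y))
```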
physics.soc-ph cs.LG cs.SI stat.ML | null | 1309.0242 | null | null | http://arxiv.org/pdf/1309.0242v1 | 2013-09-01T16:59:55Z | 2013-09-01T16:59:55Z | Ensemble approaches for improving community detection methods | Statistical estimates can often be improved by fusion of data from several
different sources. One example is so-called ensemble methods which have been
successfully applied in areas such as machine learning for classification and
clustering. In this paper, we present an ensemble method to improve community
detection by aggregating the information found in an ensemble of community
structures. This ensemble can be found by re-sampling methods, multiple runs of a
stochastic community detection method, or by several different community
detection algorithms applied to the same network. The proposed method is
evaluated using random networks with community structures and compared with two
commonly used community detection methods. The proposed method when applied on
a stochastic community detection algorithm performs well with low computational
complexity, thus offering both a new approach to community detection and an
additional community detection method.
| [
"['Johan Dahlin' 'Pontus Svenson']",
"Johan Dahlin and Pontus Svenson"
] |
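One common way to aggregate an ensemble of community structures, and a plausible reading of the aggregation step sketched above (though not necessarily the authors' exact rule), is a co-association matrix counting how often two nodes end up in the same community, which can then be thresholded or re-clustered:

```python
import numpy as np

def co_association(partitions):
    """Fraction of partitions in which each pair of nodes shares a community."""
    partitions = np.asarray(partitions)           # shape: (n_partitions, n_nodes)
    n = partitions.shape[1]
    C = np.zeros((n, n))
    for labels in partitions:
        C += (labels[:, None] == labels[None, :])
    return C / len(partitions)

# toy usage: three noisy partitions of 6 nodes into 2 communities
runs = [[0, 0, 0, 1, 1, 1],
        [0, 0, 1, 1, 1, 1],
        [0, 0, 0, 1, 1, 0]]
print(co_association(runs))
```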
stat.ML cs.DS cs.LG | null | 1309.0302 | null | null | http://arxiv.org/pdf/1309.0302v1 | 2013-09-02T05:07:31Z | 2013-09-02T05:07:31Z | Unmixing Incoherent Structures of Big Data by Randomized or Greedy
Decomposition | Learning big data by matrix decomposition always suffers from expensive
computation, mixing of complicated structures and noise. In this paper, we
study more adaptive models and efficient algorithms that decompose a data
matrix as the sum of semantic components with incoherent structures. We firstly
introduce "GO decomposition (GoDec)", an alternating projection method
estimating the low-rank part $L$ and the sparse part $S$ from data matrix
$X=L+S+G$ corrupted by noise $G$. Two acceleration strategies are proposed to
obtain scalable unmixing algorithm on big data: 1) Bilateral random projection
(BRP) is developed to speed up the update of $L$ in GoDec by a closed-form
built from left and right random projections of $X-S$ in lower dimensions; 2)
Greedy bilateral (GreB) paradigm updates the left and right factors of $L$ in a
mutually adaptive and greedy incremental manner, and achieve significant
improvement in both time and sample complexities. We then propose three
nontrivial variants of GoDec that generalize GoDec to more general data types
and whose fast algorithms can be derived from the two strategies......
| [
"Tianyi Zhou and Dacheng Tao",
"['Tianyi Zhou' 'Dacheng Tao']"
] |
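As a sketch of the bilateral random projection (BRP) idea used above to speed up the low-rank update, the snippet below follows the closed form reported in the GoDec line of work; the pseudo-inverse is a conservative substitute for the exact inverse assumed when the projected matrix is well-conditioned, and the example is illustrative rather than the authors' implementation.

```python
import numpy as np

def brp_low_rank(X, r, seed=0):
    """Rank-r approximation L = Y1 (A2^T Y1)^+ Y2^T from two random sketches of X."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    A1 = rng.standard_normal((n, r))
    A2 = rng.standard_normal((m, r))
    Y1 = X @ A1                      # right sketch, m x r
    Y2 = X.T @ A2                    # left sketch,  n x r
    return Y1 @ np.linalg.pinv(A2.T @ Y1) @ Y2.T

# toy usage: recover an exactly rank-3 matrix
rng = np.random.default_rng(6)
X = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
L = brp_low_rank(X, r=3)
print(np.linalg.norm(X - L) / np.linalg.norm(X))   # near zero for rank-3 X
```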
stat.ML cs.IR cs.LG | null | 1309.0337 | null | null | http://arxiv.org/pdf/1309.0337v1 | 2013-09-02T09:34:50Z | 2013-09-02T09:34:50Z | Scalable Probabilistic Entity-Topic Modeling | We present an LDA approach to entity disambiguation. Each topic is associated
with a Wikipedia article and topics generate either content words or entity
mentions. Training such models is challenging because of the topic and
vocabulary size, both in the millions. We tackle these problems using a novel
distributed inference and representation framework based on a parallel Gibbs
sampler guided by the Wikipedia link graph, and pipelines of MapReduce allowing
fast and memory-frugal processing of large datasets. We report state-of-the-art
performance on a public dataset.
| [
"Neil Houlsby, Massimiliano Ciaramita",
"['Neil Houlsby' 'Massimiliano Ciaramita']"
] |
cs.LG | null | 1309.0489 | null | null | http://arxiv.org/pdf/1309.0489v3 | 2014-04-15T20:32:08Z | 2013-09-02T19:29:34Z | Relative Comparison Kernel Learning with Auxiliary Kernels | In this work we consider the problem of learning a positive semidefinite
kernel matrix from relative comparisons of the form: "object A is more similar
to object B than it is to C", where comparisons are given by humans. Existing
solutions to this problem assume many comparisons are provided to learn a high
quality kernel. However, this can be considered unrealistic for many real-world
tasks since relative assessments require human input, which is often costly or
difficult to obtain. Because of this, only a limited number of these
comparisons may be provided. In this work, we explore methods for aiding the
process of learning a kernel with the help of auxiliary kernels built from more
easily extractable information regarding the relationships among objects. We
propose a new kernel learning approach in which the target kernel is defined as
a conic combination of auxiliary kernels and a kernel whose elements are
learned directly. We formulate a convex optimization to solve for this target
kernel that adds only minor overhead to methods that use no auxiliary
information. Empirical results show that in the presence of few training
relative comparisons, our method can learn kernels that generalize to more
out-of-sample comparisons than methods that do not utilize auxiliary
information, as well as similar methods that learn metrics over objects.
| [
"Eric Heim (University of Pittsburgh), Hamed Valizadegan (NASA Ames\n Research Center), and Milos Hauskrecht (University of Pittsburgh)",
"['Eric Heim' 'Hamed Valizadegan' 'Milos Hauskrecht']"
] |
cs.RO cs.AI cs.LG cs.MS | null | 1309.0671 | null | null | http://arxiv.org/pdf/1309.0671v1 | 2013-09-03T13:38:05Z | 2013-09-03T13:38:05Z | BayesOpt: A Library for Bayesian optimization with Robotics Applications | The purpose of this paper is twofold. On one side, we present a general
framework for Bayesian optimization and we compare it with some related fields
in active learning and Bayesian numerical analysis. On the other hand, Bayesian
optimization and related problems (bandits, sequential experimental design) are
highly dependent on the surrogate model that is selected. However, there is no
clear standard in the literature. Thus, we present a fast and flexible toolbox
that allows one to test and combine different models and criteria with little
effort. It includes most of the state-of-the-art contributions, algorithms and
models. Its speed also removes part of the stigma that Bayesian optimization
methods are only good for "expensive functions". The software is free and it
can be used in many operating systems and computer languages.
| [
"Ruben Martinez-Cantin",
"['Ruben Martinez-Cantin']"
] |
cs.LG cs.DC cs.SI stat.ML | null | 1309.0787 | null | null | http://arxiv.org/pdf/1309.0787v5 | 2015-10-03T04:26:19Z | 2013-09-03T19:30:55Z | Online Tensor Methods for Learning Latent Variable Models | We introduce an online tensor decomposition based approach for two latent
variable modeling problems namely, (1) community detection, in which we learn
the latent communities that the social actors in social networks belong to, and
(2) topic modeling, in which we infer hidden topics of text articles. We
consider decomposition of moment tensors using stochastic gradient descent. We
conduct optimization of multilinear operations in SGD and avoid directly
forming the tensors, to save computational and storage costs. We present
optimized algorithms for two platforms. Our GPU-based implementation exploits the
parallelism of SIMD architectures to allow for maximum speed-up by a careful
optimization of storage and data transfer, whereas our CPU-based implementation
uses efficient sparse matrix computations and is suitable for large sparse
datasets. For the community detection problem, we demonstrate accuracy and
computational efficiency on Facebook, Yelp and DBLP datasets, and for the topic
modeling problem, we also demonstrate good performance on the New York Times
dataset. We compare our results to the state-of-the-art algorithms such as the
variational method, and report a gain of accuracy and a gain of several orders
of magnitude in the execution time.
| [
"Furong Huang, U. N. Niranjan, Mohammad Umar Hakeem, Animashree\n Anandkumar",
"['Furong Huang' 'U. N. Niranjan' 'Mohammad Umar Hakeem'\n 'Animashree Anandkumar']"
] |
astro-ph.IM cs.LG cs.NE physics.data-an stat.ML | 10.1093/mnras/stu642 | 1309.0790 | null | null | http://arxiv.org/abs/1309.0790v2 | 2014-01-27T19:23:30Z | 2013-09-03T19:33:28Z | SKYNET: an efficient and robust neural network training tool for machine
learning in astronomy | We present the first public release of our generic neural network training
algorithm, called SkyNet. This efficient and robust machine learning tool is
able to train large and deep feed-forward neural networks, including
autoencoders, for use in a wide range of supervised and unsupervised learning
applications, such as regression, classification, density estimation,
clustering and dimensionality reduction. SkyNet uses a `pre-training' method to
obtain a set of network parameters that has empirically been shown to be close
to a good solution, followed by further optimisation using a regularised
variant of Newton's method, where the level of regularisation is determined and
adjusted automatically; the latter uses second-order derivative information to
improve convergence, but without the need to evaluate or store the full Hessian
matrix, by using a fast approximate method to calculate Hessian-vector
products. This combination of methods allows for the training of complicated
networks that are difficult to optimise using standard backpropagation
techniques. SkyNet employs convergence criteria that naturally prevent
overfitting, and also includes a fast algorithm for estimating the accuracy of
network outputs. The utility and flexibility of SkyNet are demonstrated by
application to a number of toy problems, and to astronomical problems focusing
on the recovery of structure from blurred and noisy images, the identification
of gamma-ray bursters, and the compression and denoising of galaxy images. The
SkyNet software, which is implemented in standard ANSI C and fully parallelised
using MPI, is available at http://www.mrao.cam.ac.uk/software/skynet/.
| [
"['Philip Graff' 'Farhan Feroz' 'Michael P. Hobson' 'Anthony N. Lasenby']",
"Philip Graff, Farhan Feroz, Michael P. Hobson, Anthony N. Lasenby"
] |
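The fast Hessian-vector products mentioned above avoid forming the Hessian. One standard approximation (a generic finite-difference trick, not necessarily the exact scheme implemented in SkyNet) is $Hv \approx (\nabla f(x+\epsilon v) - \nabla f(x))/\epsilon$:

```python
import numpy as np

def hessian_vector_product(grad_f, x, v, eps=1e-5):
    """Approximate H v via a forward finite difference of the gradient."""
    return (grad_f(x + eps * v) - grad_f(x)) / eps

# toy check on f(x) = 0.5 * x^T A x, whose Hessian is A
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad_f = lambda x: A @ x
x = np.array([0.7, -1.2])
v = np.array([1.0, 2.0])
print(hessian_vector_product(grad_f, x, v), A @ v)   # the two should match closely
```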
cs.LO cs.AI cs.LG cs.SY | 10.4204/EPTCS.125.1 | 1309.0866 | null | null | http://arxiv.org/abs/1309.0866v1 | 2013-09-03T23:40:49Z | 2013-09-03T23:40:49Z | On the Robustness of Temporal Properties for Stochastic Models | Stochastic models such as Continuous-Time Markov Chains (CTMC) and Stochastic
Hybrid Automata (SHA) are powerful formalisms to model and to reason about the
dynamics of biological systems, due to their ability to capture the
stochasticity inherent in biological processes. A classical question in formal
modelling with clear relevance to biological modelling is the model checking
problem, i.e., calculating the probability that a behaviour, expressed for
instance in terms of a certain temporal logic formula, may occur in a given
stochastic process. However, one may not only be interested in the notion of
satisfiability, but also in the capacity of a system to maintain a particular
emergent behaviour unaffected by the perturbations, caused e.g. from extrinsic
noise, or by possible small changes in the model parameters. To address this
issue, researchers from the verification community have recently proposed
several notions of robustness for temporal logic providing suitable definitions
of distance between a trajectory of a (deterministic) dynamical system and the
boundaries of the set of trajectories satisfying the property of interest. The
contributions of this paper are twofold. First, we extend the notion of
robustness to stochastic systems, showing that this naturally leads to a
distribution of robustness scores. By discussing two examples, we show how to
approximate the distribution of the robustness score and its key indicators:
the average robustness and the conditional average robustness. Secondly, we
show how to combine these indicators with the satisfaction probability to
address the system design problem, where the goal is to optimize some control
parameters of a stochastic model in order to best maximize robustness of the
desired specifications.
| [
"['Ezio Bartocci' 'Luca Bortolussi' 'Laura Nenzi' 'Guido Sanguinetti']",
"Ezio Bartocci (TU Wien, Austria), Luca Bortolussi (University of\n Trieste, Italy), Laura Nenzi (IMT Lucca, Italy), Guido Sanguinetti\n (University of Edinburgh, UK)"
] |
math.PR cs.LG math.FA | null | 1309.1007 | null | null | http://arxiv.org/pdf/1309.1007v2 | 2013-09-11T16:24:52Z | 2013-09-04T12:40:31Z | Concentration in unbounded metric spaces and algorithmic stability | We prove an extension of McDiarmid's inequality for metric spaces with
unbounded diameter. To this end, we introduce the notion of the {\em
subgaussian diameter}, which is a distribution-dependent refinement of the
metric diameter. Our technique provides an alternative approach to that of
Kutin and Niyogi's method of weakly difference-bounded functions, and yields
nontrivial, dimension-free results in some interesting cases where the former
does not. As an application, we give apparently the first generalization bound
in the algorithmic stability setting that holds for unbounded loss functions.
We furthermore extend our concentration inequality to strongly mixing
processes.
| [
"Aryeh Kontorovich",
"['Aryeh Kontorovich']"
] |
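For context, the classical bounded-differences result being extended above is McDiarmid's inequality: if $f$ satisfies $|f(x) - f(x')| \le c_i$ whenever $x$ and $x'$ differ only in the $i$-th coordinate, then for independent $X_1,\dots,X_n$,

$$
\Pr\big(f(X_1,\dots,X_n) - \mathbb{E}f(X_1,\dots,X_n) \ge t\big) \;\le\; \exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n} c_i^2}\right).
$$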
stat.ML cs.LG | null | 1309.1193 | null | null | http://arxiv.org/pdf/1309.1193v2 | 2013-10-09T17:59:10Z | 2013-09-04T21:46:55Z | Confidence-constrained joint sparsity recovery under the Poisson noise
model | Our work is focused on the joint sparsity recovery problem where the common
sparsity pattern is corrupted by Poisson noise. We formulate the
confidence-constrained optimization problem in both least squares (LS) and
maximum likelihood (ML) frameworks and study the conditions for perfect
reconstruction of the original row sparsity and row sparsity pattern. However,
the confidence-constrained optimization problem is non-convex. Using convex
relaxation, an alternative convex reformulation of the problem is proposed. We
evaluate the performance of the proposed approach using simulation results on
synthetic data and show the effectiveness of proposed row sparsity and row
sparsity pattern recovery framework.
| [
"E. Chunikhina, R. Raich, and T. Nguyen",
"['E. Chunikhina' 'R. Raich' 'T. Nguyen']"
] |
stat.ML cs.LG math.NA stat.CO | null | 1309.1369 | null | null | http://arxiv.org/pdf/1309.1369v4 | 2014-02-17T22:18:34Z | 2013-09-05T15:12:11Z | Semistochastic Quadratic Bound Methods | Partition functions arise in a variety of settings, including conditional
random fields, logistic regression, and latent gaussian models. In this paper,
we consider semistochastic quadratic bound (SQB) methods for maximum likelihood
inference based on partition function optimization. Batch methods based on the
quadratic bound were recently proposed for this class of problems, and
performed favorably in comparison to state-of-the-art techniques.
Semistochastic methods fall in between batch algorithms, which use all the
data, and stochastic gradient type methods, which use small random selections
at each iteration. We build semistochastic quadratic bound-based methods, and
prove both global convergence (to a stationary point) under very weak
assumptions, and linear convergence rate under stronger assumptions on the
objective. To make the proposed methods faster and more stable, we consider
inexact subproblem minimization and batch-size selection schemes. The efficacy
of SQB methods is demonstrated via comparison with several state-of-the-art
techniques on commonly used datasets.
| [
"['Aleksandr Y. Aravkin' 'Anna Choromanska' 'Tony Jebara'\n 'Dimitri Kanevsky']",
"Aleksandr Y. Aravkin, Anna Choromanska, Tony Jebara, and Dimitri\n Kanevsky"
] |
stat.ML cs.LG math.ST nlin.CD physics.data-an stat.TH | 10.1103/PhysRevE.89.042119 | 1309.1392 | null | null | http://arxiv.org/abs/1309.1392v2 | 2013-12-09T05:21:31Z | 2013-09-05T16:18:35Z | Bayesian Structural Inference for Hidden Processes | We introduce a Bayesian approach to discovering patterns in structurally
complex processes. The proposed method of Bayesian Structural Inference (BSI)
relies on a set of candidate unifilar HMM (uHMM) topologies for inference of
process structure from a data series. We employ a recently developed exact
enumeration of topological epsilon-machines. (A sequel then removes the
topological restriction.) This subset of the uHMM topologies has the added
benefit that inferred models are guaranteed to be epsilon-machines,
irrespective of estimated transition probabilities. Properties of
epsilon-machines and uHMMs allow for the derivation of analytic expressions for
estimating transition probabilities, inferring start states, and comparing the
posterior probability of candidate model topologies, despite process internal
structure being only indirectly present in data. We demonstrate BSI's
effectiveness in estimating a process's randomness, as reflected by the Shannon
entropy rate, and its structure, as quantified by the statistical complexity.
We also compare using the posterior distribution over candidate models and the
single, maximum a posteriori model for point estimation and show that the
former more accurately reflects uncertainty in estimated values. We apply BSI
to in-class examples of finite- and infinite-order Markov processes, as well to
an out-of-class, infinite-state hidden process.
| [
"Christopher C. Strelioff and James P. Crutchfield",
"['Christopher C. Strelioff' 'James P. Crutchfield']"
] |
cs.LG cs.CL cs.NE math.OC stat.ML | null | 1309.1501 | null | null | http://arxiv.org/pdf/1309.1501v3 | 2013-12-10T11:51:39Z | 2013-09-05T22:06:58Z | Improvements to deep convolutional neural networks for LVCSR | Deep Convolutional Neural Networks (CNNs) are more powerful than Deep Neural
Networks (DNN), as they are able to better reduce spectral variation in the
input signal. This has also been confirmed experimentally, with CNNs showing
improvements in word error rate (WER) of 4-12% relative compared to DNNs
across a variety of LVCSR tasks. In this paper, we describe different methods
to further improve CNN performance. First, we conduct a deep analysis comparing
limited weight sharing and full weight sharing with state-of-the-art features.
Second, we apply various pooling strategies that have shown improvements in
computer vision to an LVCSR speech task. Third, we introduce a method to
effectively incorporate speaker adaptation, namely fMLLR, into log-mel
features. Fourth, we introduce an effective strategy to use dropout during
Hessian-free sequence training. We find that with these improvements,
particularly with fMLLR and dropout, we are able to achieve an additional 2-3%
relative improvement in WER on a 50-hour Broadcast News task over our previous
best CNN baseline. On a larger 400-hour BN task, we find an additional 4-5%
relative improvement over our previous best CNN baseline.
| [
"Tara N. Sainath, Brian Kingsbury, Abdel-rahman Mohamed, George E.\n Dahl, George Saon, Hagen Soltau, Tomas Beran, Aleksandr Y. Aravkin, Bhuvana\n Ramabhadran",
"['Tara N. Sainath' 'Brian Kingsbury' 'Abdel-rahman Mohamed'\n 'George E. Dahl' 'George Saon' 'Hagen Soltau' 'Tomas Beran'\n 'Aleksandr Y. Aravkin' 'Bhuvana Ramabhadran']"
] |
cs.LG cs.CL cs.NE math.OC stat.ML | null | 1309.1508 | null | null | http://arxiv.org/pdf/1309.1508v3 | 2013-12-10T12:05:51Z | 2013-09-05T23:21:02Z | Accelerating Hessian-free optimization for deep neural networks by
implicit preconditioning and sampling | Hessian-free training has become a popular parallel second-order
optimization technique for Deep Neural Network training. This study aims at
speeding up Hessian-free training, both by decreasing the amount of
data used for training and by reducing the number of Krylov
subspace solver iterations used for implicit estimation of the Hessian. In this
paper, we develop an L-BFGS based preconditioning scheme that avoids the need
to access the Hessian explicitly. Since L-BFGS cannot be regarded as a
fixed-point iteration, we further propose the employment of flexible Krylov
subspace solvers that retain the desired theoretical convergence guarantees of
their conventional counterparts. Second, we propose a new sampling algorithm,
which geometrically increases the amount of data utilized for gradient and
Krylov subspace iteration calculations. On a 50-hr English Broadcast News task,
we find that these methodologies provide roughly a 1.5x speed-up, whereas, on a
300-hr Switchboard task, these techniques provide over a 2.3x speedup, with no
loss in WER. These results suggest that even further speed-up is expected, as
problems scale and complexity grows.
| [
"Tara N. Sainath, Lior Horesh, Brian Kingsbury, Aleksandr Y. Aravkin,\n Bhuvana Ramabhadran",
"['Tara N. Sainath' 'Lior Horesh' 'Brian Kingsbury' 'Aleksandr Y. Aravkin'\n 'Bhuvana Ramabhadran']"
] |
cs.LG math.OC stat.ML | null | 1309.1541 | null | null | http://arxiv.org/pdf/1309.1541v1 | 2013-09-06T05:48:40Z | 2013-09-06T05:48:40Z | Projection onto the probability simplex: An efficient algorithm with a
simple proof, and an application | We provide an elementary proof of a simple, efficient algorithm for computing
the Euclidean projection of a point onto the probability simplex. We also show
an application in Laplacian K-modes clustering.
| [
"['Weiran Wang' 'Miguel Á. Carreira-Perpiñán']",
"Weiran Wang, Miguel \\'A. Carreira-Perpi\\~n\\'an"
] |
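The sort-and-threshold algorithm the record above proves correct is short enough to state directly; the sketch below follows the standard form of that algorithm (written from the well-known description rather than the paper's own pseudocode):

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1}."""
    v = np.asarray(v, dtype=float)
    u = np.sort(v)[::-1]                          # sort in decreasing order
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)        # common shift
    return np.maximum(v + theta, 0.0)

x = project_to_simplex([0.5, 1.2, -0.3, 0.4])
print(x, x.sum())                                 # non-negative entries summing to 1
```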
cs.LG cs.GT | null | 1309.1543 | null | null | http://arxiv.org/pdf/1309.1543v1 | 2013-09-06T06:06:15Z | 2013-09-06T06:06:15Z | A Comparison of the Performance of Supervised and Unsupervised Machine
Learning Techniques in evolving Awale/Mancala/Ayo Game Player | Awale games have become widely recognized across the world for the
innovative strategies and techniques used to evolve the agents
(players), and they have produced interesting results under various conditions. This
paper compares the two major machine learning techniques by
reviewing their performance when using minimax, an endgame database, a
combination of both, or other techniques, and determines which techniques
perform best.
| [
"O.A. Randle, O. O. Ogunduyile, T. Zuva, N. A. Fashola",
"['O. A. Randle' 'O. O. Ogunduyile' 'T. Zuva' 'N. A. Fashola']"
] |
cs.LG stat.ML | null | 1309.1761 | null | null | http://arxiv.org/pdf/1309.1761v1 | 2013-09-06T18:52:16Z | 2013-09-06T18:52:16Z | Convergence of Nearest Neighbor Pattern Classification with Selective
Sampling | In the panoply of pattern classification techniques, few enjoy the intuitive
appeal and simplicity of the nearest neighbor rule: given a set of samples in
some metric domain space whose value under some function is known, we estimate
the function anywhere in the domain by giving the value of the nearest sample
per the metric. More generally, one may use the modal value of the m nearest
samples, where m is a fixed positive integer (although m=1 is known to be
admissible in the sense that no larger value is asymptotically superior in
terms of prediction error). The nearest neighbor rule is nonparametric and
extremely general, requiring in principle only that the domain be a metric
space. The classic paper on the technique, proving convergence under
independent, identically-distributed (iid) sampling, is due to Cover and Hart
(1967). Because taking samples is costly, there has been much research in
recent years on selective sampling, in which each sample is selected from a
pool of candidates ranked by a heuristic; the heuristic tries to guess which
candidate would be the most "informative" sample. Lindenbaum et al. (2004)
apply selective sampling to the nearest neighbor rule, but their approach
sacrifices the austere generality of Cover and Hart; furthermore, their
heuristic algorithm is complex and computationally expensive. Here we report
recent results that enable selective sampling in the original Cover-Hart
setting. Our results pose three selection heuristics and prove that their
nearest neighbor rule predictions converge to the true pattern. Two of the
algorithms are computationally cheap, with complexity growing linearly in the
number of samples. We believe that these results constitute an important
advance in the art.
| [
"['Shaun N. Joseph' 'Seif Omar Abu Bakr' 'Gabriel Lugo']",
"Shaun N. Joseph and Seif Omar Abu Bakr and Gabriel Lugo"
] |
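For readers unfamiliar with the baseline being extended above, the plain (non-selective) nearest neighbor rule of Cover and Hart is a few lines; the sketch below predicts the label of the single nearest training sample under the Euclidean metric (the paper's selective-sampling heuristics are not reproduced here):

```python
import numpy as np

def nearest_neighbor_predict(X_train, y_train, X_query):
    """1-NN rule: each query gets the label of its closest training sample."""
    X_train, X_query = np.asarray(X_train), np.asarray(X_query)
    # pairwise squared Euclidean distances, shape (n_query, n_train)
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.asarray(y_train)[d2.argmin(axis=1)]

X_train = [[0.0, 0.0], [1.0, 1.0], [0.9, 1.1], [0.1, -0.2]]
y_train = [0, 1, 1, 0]
print(nearest_neighbor_predict(X_train, y_train, [[0.95, 0.9], [-0.1, 0.1]]))  # -> [1 0]
```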
cs.LG cs.CV | null | 1309.1853 | null | null | http://arxiv.org/pdf/1309.1853v1 | 2013-09-07T11:33:36Z | 2013-09-07T11:33:36Z | A General Two-Step Approach to Learning-Based Hashing | Most existing approaches to hashing apply a single form of hash function, and
an optimization process which is typically deeply coupled to this specific
form. This tight coupling restricts the flexibility of the method to respond to
the data, and can result in complex optimization problems that are difficult to
solve. Here we propose a flexible yet simple framework that is able to
accommodate different types of loss functions and hash functions. This
framework allows a number of existing approaches to hashing to be placed in
context, and simplifies the development of new problem-specific hashing
methods. Our framework decomposes the hashing learning problem into two steps: hash
bit learning and hash function learning based on the learned bits. The first
step can typically be formulated as binary quadratic problems, and the second
step can be accomplished by training standard binary classifiers. Both problems
have been extensively studied in the literature. Our extensive experiments
demonstrate that the proposed framework is effective, flexible and outperforms
the state-of-the-art.
| [
"['Guosheng Lin' 'Chunhua Shen' 'David Suter' 'Anton van den Hengel']",
"Guosheng Lin, Chunhua Shen, David Suter, Anton van den Hengel"
] |
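The two-step decomposition described above can be sketched directly: step one produces target bits (here simply by thresholding PCA projections, a placeholder for the binary quadratic formulation), and step two fits one standard binary classifier per bit. This is a hedged illustration of the framework's shape, not the authors' implementation.

```python
import numpy as np

def step1_codes(X, n_bits):
    """Placeholder for hash-bit learning: sign of top PCA projections.
    The paper formulates this step as binary quadratic problems instead."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return (Xc @ Vt[:n_bits].T > 0).astype(int)

def step2_classifiers(X, B, epochs=200, lr=0.1):
    """Hash-function learning: one logistic-regression classifier per bit."""
    Xb = np.hstack([X, np.ones((len(X), 1))])        # add a bias column
    W = np.zeros((B.shape[1], Xb.shape[1]))
    for j in range(B.shape[1]):
        w = np.zeros(Xb.shape[1])
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-Xb @ w))
            w -= lr * Xb.T @ (p - B[:, j]) / len(X)
        W[j] = w
    return W

def hash_codes(X, W):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ W.T > 0).astype(int)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
B = step1_codes(X, n_bits=8)
W = step2_classifiers(X, B)
agreement = (hash_codes(X, W) == B).mean()
print(f"bit agreement between learned hash functions and target bits: {agreement:.2f}")
```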
stat.ML cs.LG math.OC | null | 1309.1952 | null | null | http://arxiv.org/pdf/1309.1952v2 | 2014-07-07T05:10:23Z | 2013-09-08T12:55:39Z | A Clustering Approach to Learn Sparsely-Used Overcomplete Dictionaries | We consider the problem of learning overcomplete dictionaries in the context
of sparse coding, where each sample selects a sparse subset of dictionary
elements. Our main result is a strategy to approximately recover the unknown
dictionary using an efficient algorithm. Our algorithm is a clustering-style
procedure, where each cluster is used to estimate a dictionary element. The
resulting solution can often be further cleaned up to obtain a high accuracy
estimate, and we provide one simple scenario where $\ell_1$-regularized
regression can be used for such a second stage.
| [
"['Alekh Agarwal' 'Animashree Anandkumar' 'Praneeth Netrapalli']",
"Alekh Agarwal and Animashree Anandkumar and Praneeth Netrapalli"
] |
cs.CV cs.LG stat.ML | null | 1309.2074 | null | null | http://arxiv.org/pdf/1309.2074v2 | 2014-03-09T18:50:35Z | 2013-09-09T09:16:02Z | Learning Transformations for Clustering and Classification | A low-rank transformation learning framework for subspace clustering and
classification is here proposed. Many high-dimensional data, such as face
images and motion sequences, approximately lie in a union of low-dimensional
subspaces. The corresponding subspace clustering problem has been extensively
studied in the literature to partition such high-dimensional data into clusters
corresponding to their underlying low-dimensional subspaces. However,
low-dimensional intrinsic structures are often violated for real-world
observations, as they can be corrupted by errors or deviate from ideal models.
We propose to address this by learning a linear transformation on subspaces
using matrix rank, via its convex surrogate nuclear norm, as the optimization
criterion. The learned linear transformation restores a low-rank structure for
data from the same subspace, and, at the same time, forces a maximally
separated structure for data from different subspaces. In this way, we reduce
variations within subspaces, and increase separation between subspaces for a
more robust subspace clustering. This proposed learned robust subspace
clustering framework significantly enhances the performance of existing
subspace clustering methods. Basic theoretical results here presented help to
further support the underlying framework. To exploit the low-rank structures of
the transformed subspaces, we further introduce a fast subspace clustering
technique, which efficiently combines robust PCA with sparse modeling. When
class labels are present at the training stage, we show this low-rank
transformation framework also significantly enhances classification
performance. Extensive experiments using public datasets are presented, showing
that the proposed approach significantly outperforms state-of-the-art methods
for subspace clustering and classification.
| [
"Qiang Qiu, Guillermo Sapiro",
"['Qiang Qiu' 'Guillermo Sapiro']"
] |
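The abstract above uses the nuclear norm as the convex surrogate for matrix rank. A standard building block for such objectives is the proximal operator of the nuclear norm (singular value soft-thresholding), sketched below as a generic ingredient rather than the paper's full transformation-learning algorithm.

```python
import numpy as np

def nuclear_norm_prox(M, tau):
    """Proximal operator of tau * ||.||_* : soft-threshold the singular values.
    A generic building block for nuclear-norm-regularized objectives."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_thr) @ Vt

# Example: shrinking a noisy low-rank matrix reduces its effective rank.
rng = np.random.default_rng(0)
L = rng.normal(size=(30, 3)) @ rng.normal(size=(3, 20))   # rank-3 signal
M = L + 0.1 * rng.normal(size=L.shape)
M_shrunk = nuclear_norm_prox(M, tau=1.0)
print("singular values before:", np.round(np.linalg.svd(M, compute_uv=False)[:5], 2))
print("singular values after: ", np.round(np.linalg.svd(M_shrunk, compute_uv=False)[:5], 2))
```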
cs.LG cs.AI | 10.1017/S1471068413000689 | 1309.2080 | null | null | http://arxiv.org/abs/1309.2080v1 | 2013-09-09T09:24:44Z | 2013-09-09T09:24:44Z | Structure Learning of Probabilistic Logic Programs by Searching the
Clause Space | Learning probabilistic logic programming languages is receiving increasing
attention, and systems are available for learning the parameters (PRISM,
LeProbLog, LFI-ProbLog and EMBLEM) or both the structure and the parameters
(SEM-CP-logic and SLIPCASE) of these languages. In this paper we present the
algorithm SLIPCOVER for "Structure LearnIng of Probabilistic logic programs by
searChing OVER the clause space". It performs a beam search in the space of
probabilistic clauses and a greedy search in the space of theories, using the
log likelihood of the data as the guiding heuristic. To estimate the log
likelihood SLIPCOVER performs Expectation Maximization with EMBLEM. The
algorithm has been tested on five real world datasets and compared with
SLIPCASE, SEM-CP-logic, Aleph and two algorithms for learning Markov Logic
Networks (Learning using Structural Motifs (LSM) and ALEPH++ExactL1). SLIPCOVER
achieves higher areas under the precision-recall and ROC curves in most cases.
| [
"['Elena Bellodi' 'Fabrizio Riguzzi']",
"Elena Bellodi, Fabrizio Riguzzi"
] |
math.OC cs.LG cs.NA | null | 1309.2168 | null | null | http://arxiv.org/pdf/1309.2168v2 | 2015-02-16T17:40:28Z | 2013-09-09T14:19:10Z | Large-scale optimization with the primal-dual column generation method | The primal-dual column generation method (PDCGM) is a general-purpose column
generation technique that relies on the primal-dual interior point method to
solve the restricted master problems. The use of this interior point method
variant allows one to obtain suboptimal and well-centered dual solutions which
naturally stabilize the column generation. As recently presented in the
literature, reductions in the number of calls to the oracle and in the CPU
times are typically observed when compared to the standard column generation,
which relies on extreme optimal dual solutions. However, these results are
based on relatively small problems obtained from linear relaxations of
combinatorial applications. In this paper, we investigate the behaviour of the
PDCGM in a broader context, namely when solving large-scale convex optimization
problems. We have selected applications that arise in important real-life
contexts such as data analysis (multiple kernel learning problem),
decision-making under uncertainty (two-stage stochastic programming problems)
and telecommunication and transportation networks (multicommodity network flow
problem). In the numerical experiments, we use publicly available benchmark
instances to compare the performance of the PDCGM against recent results for
different methods presented in the literature, which were the best available
results to date. The analysis of these results suggests that the PDCGM offers
an attractive alternative to specialized methods since it remains competitive
in terms of number of iterations and CPU times even for large-scale
optimization problems.
| [
"Jacek Gondzio, Pablo Gonz\\'alez-Brevis and Pedro Munari",
"['Jacek Gondzio' 'Pablo González-Brevis' 'Pedro Munari']"
] |
cs.LG cs.SI math.OC stat.ML | null | 1309.2350 | null | null | http://arxiv.org/pdf/1309.2350v1 | 2013-09-10T00:36:44Z | 2013-09-10T00:36:44Z | Exponentially Fast Parameter Estimation in Networks Using Distributed
Dual Averaging | In this paper we present an optimization-based view of distributed parameter
estimation and observational social learning in networks. Agents receive a
sequence of random, independent and identically distributed (i.i.d.) signals,
each of which individually may not be informative about the underlying true
state, but the signals together are globally informative enough to make the
true state identifiable. Using an optimization-based characterization of
Bayesian learning as proximal stochastic gradient descent (with
Kullback-Leibler divergence from a prior as a proximal function), we show how
to efficiently use a distributed, online variant of Nesterov's dual averaging
method to solve the estimation with purely local information. When the true
state is globally identifiable, and the network is connected, we prove that
agents eventually learn the true parameter using a randomized gossip scheme. We
demonstrate that with high probability the convergence is exponentially fast
with a rate dependent on the KL divergence of observations under the true state
from observations under the second likeliest state. Furthermore, our work also
highlights the possibility of learning under continuous adaptation of the
network, which is a consequence of employing a constant, unit stepsize for the algorithm.
| [
"['Shahin Shahrampour' 'Ali Jadbabaie']",
"Shahin Shahrampour and Ali Jadbabaie"
] |
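To give a feel for the update described above, the sketch below has each agent perform a multiplicative (entropic-proximal style) update of its log-beliefs with its local log-likelihood and then average log-beliefs with a random neighbor (a gossip step). The signal model, step size, and gossip schedule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_states, T = 6, 4, 2000
true_state = 2

# Each agent observes Gaussian signals whose mean depends on the state;
# individually the agents' likelihoods may barely separate the states.
agent_means = rng.normal(size=(n_agents, n_states)) * 0.3

log_beliefs = np.zeros((n_agents, n_states))          # uniform prior
for t in range(T):
    obs = agent_means[:, true_state] + rng.normal(size=n_agents)
    # local multiplicative update with the log-likelihood of the observation
    loglik = -0.5 * (obs[:, None] - agent_means) ** 2
    log_beliefs += loglik
    # gossip step: a random pair of agents averages their log-beliefs
    i, j = rng.choice(n_agents, size=2, replace=False)
    avg = 0.5 * (log_beliefs[i] + log_beliefs[j])
    log_beliefs[i] = avg
    log_beliefs[j] = avg

estimates = log_beliefs.argmax(axis=1)
print("true state:", true_state, "| agents' estimates:", estimates)
```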
stat.ML cs.LG cs.NA stat.CO | null | 1309.2375 | null | null | http://arxiv.org/pdf/1309.2375v2 | 2013-10-08T06:06:09Z | 2013-09-10T05:39:25Z | Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized
Loss Minimization | We introduce a proximal version of the stochastic dual coordinate ascent
method and show how to accelerate the method using an inner-outer iteration
procedure. We analyze the runtime of the framework and obtain rates that
improve state-of-the-art results for various key machine learning optimization
problems including SVM, logistic regression, ridge regression, Lasso, and
multiclass SVM. Experiments validate our theoretical findings.
| [
"Shai Shalev-Shwartz and Tong Zhang",
"['Shai Shalev-Shwartz' 'Tong Zhang']"
] |
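For context, the sketch below implements plain (non-accelerated, non-proximal) stochastic dual coordinate ascent for ridge regression, where the coordinate update has a closed form; the paper's inner-outer acceleration and proximal extensions are not reproduced.

```python
import numpy as np

def sdca_ridge(X, y, lam=0.1, epochs=30, seed=0):
    """Plain SDCA for ridge regression:
       min_w  1/(2n) * sum_i (x_i^T w - y_i)^2 + lam/2 * ||w||^2.
    The dual coordinate update has a closed form for the squared loss."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    alpha = np.zeros(n)
    w = np.zeros(d)
    sq_norms = (X ** 2).sum(axis=1)
    for _ in range(epochs):
        for i in rng.permutation(n):
            delta = (y[i] - alpha[i] - X[i] @ w) / (1.0 + sq_norms[i] / (lam * n))
            alpha[i] += delta
            w += delta * X[i] / (lam * n)
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=500)
w = sdca_ridge(X, y)
print("relative error vs. true weights:", np.linalg.norm(w - w_true) / np.linalg.norm(w_true))
```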
math.OC cs.LG stat.CO stat.ML | null | 1309.2388 | null | null | http://arxiv.org/pdf/1309.2388v2 | 2016-05-11T06:51:31Z | 2013-09-10T06:49:15Z | Minimizing Finite Sums with the Stochastic Average Gradient | We propose the stochastic average gradient (SAG) method for optimizing the
sum of a finite number of smooth convex functions. Like stochastic gradient
(SG) methods, the SAG method's iteration cost is independent of the number of
terms in the sum. However, by incorporating a memory of previous gradient
values the SAG method achieves a faster convergence rate than black-box SG
methods. The convergence rate is improved from O(1/k^{1/2}) to O(1/k) in
general, and when the sum is strongly-convex the convergence rate is improved
from the sub-linear O(1/k) to a linear convergence rate of the form O(p^k) for
p < 1. Further, in many cases the convergence rate of the new method
is also faster than black-box deterministic gradient methods, in terms of the
number of gradient evaluations. Numerical experiments indicate that the new
algorithm often dramatically outperforms existing SG and deterministic gradient
methods, and that the performance may be further improved through the use of
non-uniform sampling strategies.
| [
"['Mark Schmidt' 'Nicolas Le Roux' 'Francis Bach']",
"Mark Schmidt (SIERRA, LIENS), Nicolas Le Roux (SIERRA, LIENS), Francis\n Bach (SIERRA, LIENS)"
] |
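The defining feature described above, a memory of previously computed per-example gradients, is easy to sketch. The snippet below applies SAG to a least-squares finite sum; the step size rule and problem are illustrative choices, not the paper's recommended settings.

```python
import numpy as np

def sag_least_squares(X, y, step=None, epochs=50, seed=0):
    """Stochastic Average Gradient for min_w (1/n) sum_i 0.5*(x_i^T w - y_i)^2.
    Keeps a memory of the last gradient seen for every example."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    if step is None:
        step = 1.0 / (X ** 2).sum(axis=1).max()   # crude Lipschitz-based step size
    w = np.zeros(d)
    grad_memory = np.zeros((n, d))   # last stored gradient per example
    grad_sum = np.zeros(d)           # running sum of stored gradients
    for _ in range(epochs):
        for i in rng.integers(0, n, size=n):
            g_new = (X[i] @ w - y[i]) * X[i]
            grad_sum += g_new - grad_memory[i]
            grad_memory[i] = g_new
            w -= step * grad_sum / n
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 20))
w_true = rng.normal(size=20)
y = X @ w_true + 0.05 * rng.normal(size=1000)
w = sag_least_squares(X, y)
print("relative error:", np.linalg.norm(w - w_true) / np.linalg.norm(w_true))
```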
cs.LG math.OC | null | 1309.2593 | null | null | http://arxiv.org/pdf/1309.2593v1 | 2013-09-10T18:04:15Z | 2013-09-10T18:04:15Z | Maximizing submodular functions using probabilistic graphical models | We consider the problem of maximizing submodular functions; while this
problem is known to be NP-hard, several numerically efficient local search
techniques with approximation guarantees are available. In this paper, we
propose a novel convex relaxation which is based on the relationship between
submodular functions, entropies and probabilistic graphical models. In a
graphical model, the entropy of the joint distribution decomposes as a sum of
marginal entropies of subsets of variables; moreover, for any distribution, the
entropy of the closest distribution factorizing in the graphical model provides
an upper bound on the entropy. For directed graphical models, this last property
turns out to be a direct consequence of the submodularity of the entropy
function, and allows the generalization of graphical-model-based upper bounds
to any submodular functions. These upper bounds may then be jointly maximized
with respect to a set, while minimized with respect to the graph, leading to a
convex variational inference scheme for maximizing submodular functions, based
on outer approximations of the marginal polytope and maximum likelihood bounded
treewidth structures. By considering graphs of increasing treewidths, we may
then explore the trade-off between computational complexity and tightness of
the relaxation. We also present extensions to constrained problems and
maximizing the difference of submodular functions, which include all possible
set functions.
| [
"['K. S. Sesh Kumar' 'Francis Bach']",
"K. S. Sesh Kumar (LIENS, INRIA Paris - Rocquencourt), Francis Bach\n (LIENS, INRIA Paris - Rocquencourt)"
] |
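As a point of reference for the local-search techniques mentioned above, the canonical greedy algorithm for cardinality-constrained submodular maximization is sketched on a coverage function below; the paper's graphical-model-based convex relaxation is not reproduced here.

```python
def greedy_max_coverage(sets, k):
    """Greedy maximization of the (submodular) coverage function
    F(S) = |union of chosen sets| under the constraint |S| <= k."""
    chosen, covered = [], set()
    for _ in range(k):
        best, best_gain = None, 0
        for i, s in enumerate(sets):
            gain = len(s - covered)
            if i not in chosen and gain > best_gain:
                best, best_gain = i, gain
        if best is None:          # no remaining set adds coverage
            break
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_coverage(sets, k=2)
print("chosen sets:", chosen, "covering", sorted(covered))
```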
cs.LG stat.ML | null | 1309.2765 | null | null | http://arxiv.org/pdf/1309.2765v1 | 2013-09-11T08:59:07Z | 2013-09-11T08:59:07Z | Enhancements of Multi-class Support Vector Machine Construction from
Binary Learners using Generalization Performance | We propose several novel methods for enhancing multi-class SVMs,
using the generalization performance of binary classifiers as the core idea.
This concept will be applied to the existing algorithms, i.e., the Decision
Directed Acyclic Graph (DDAG), the Adaptive Directed Acyclic Graphs (ADAG), and
Max Wins. Although in the previous approaches there have been many attempts to
use some information such as the margin size and the number of support vectors
as performance estimators for binary SVMs, they may not accurately reflect the
actual performance of the binary SVMs. We show that the generalization ability
evaluated via a cross-validation mechanism is more suitable to directly extract
the actual performance of binary SVMs. Our methods are built around this
performance measure, and each of them is crafted to overcome the weakness of
the previous algorithm. The proposed methods include the Reordering Adaptive
Directed Acyclic Graph (RADAG), Strong Elimination of the classifiers (SE),
Weak Elimination of the classifiers (WE), and Voting based Candidate Filtering
(VCF). Experimental results demonstrate that our methods give significantly
higher accuracy than all of the traditional ones. In particular, WE provides
significantly superior results compared to Max Wins, which is recognized as the
state-of-the-art algorithm, in terms of both accuracy and classification speed,
being on average two times faster.
| [
"Patoomsiri Songsiri, Thimaporn Phetkaew, Boonserm Kijsirikul",
"['Patoomsiri Songsiri' 'Thimaporn Phetkaew' 'Boonserm Kijsirikul']"
] |
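The sketch below shows the Max Wins (one-vs-one voting) baseline that the abstract above builds on, with a nearest-centroid stand-in for the binary SVMs; the proposed RADAG/SE/WE/VCF methods and the cross-validation performance estimates are not reproduced.

```python
import numpy as np
from itertools import combinations

def train_pairwise(X, y, fit_binary):
    """Train one binary classifier per class pair (one-vs-one)."""
    classes = np.unique(y)
    models = {}
    for a, b in combinations(classes, 2):
        mask = (y == a) | (y == b)
        models[(a, b)] = fit_binary(X[mask], (y[mask] == a).astype(int))
    return classes, models

def max_wins_predict(x, classes, models, predict_binary):
    """Each pairwise classifier casts a vote; the class with most votes wins."""
    votes = {c: 0 for c in classes}
    for (a, b), m in models.items():
        votes[a if predict_binary(m, x) == 1 else b] += 1
    return max(votes, key=votes.get)

# Placeholder binary learner: nearest class centroid (an assumption standing
# in for the binary SVMs used in the paper).
def fit_binary(X, y01):
    return X[y01 == 1].mean(axis=0), X[y01 == 0].mean(axis=0)

def predict_binary(model, x):
    mu_pos, mu_neg = model
    return int(np.linalg.norm(x - mu_pos) < np.linalg.norm(x - mu_neg))

rng = np.random.default_rng(0)
centers = np.array([[0, 0], [3, 0], [0, 3]])
X = np.vstack([c + rng.normal(0, 0.7, (40, 2)) for c in centers])
y = np.repeat([0, 1, 2], 40)
classes, models = train_pairwise(X, y, fit_binary)
preds = np.array([max_wins_predict(x, classes, models, predict_binary) for x in X])
print("training accuracy of Max Wins with a nearest-centroid base learner:",
      (preds == y).mean())
```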
cs.DS cs.AI cs.LG | null | 1309.2796 | null | null | http://arxiv.org/pdf/1309.2796v2 | 2014-07-26T15:42:05Z | 2013-09-11T11:50:44Z | Decision Trees for Function Evaluation - Simultaneous Optimization of
Worst and Expected Cost | In several applications of automatic diagnosis and active learning a central
problem is the evaluation of a discrete function by adaptively querying the
values of its variables until the values read uniquely determine the value of
the function. In general, the process of reading the value of a variable might
involve some cost, be it computational or a fee to be paid for the experiment
required to obtain the value. This cost should be taken into account when
deciding the next variable to read. The goal is to design a strategy for
evaluating the function incurring little cost (in the worst case or in
expectation according to a prior distribution on the possible variables'
assignments). Our algorithm builds a strategy (decision tree) which attains a
logarithmic approximation simultaneously for the expected and worst cost
spent. This is best possible under the assumption that $P \neq NP.$
| [
"['Ferdinando Cicalese' 'Eduardo Laber' 'Aline Medeiros Saettler']",
"Ferdinando Cicalese and Eduardo Laber and Aline Medeiros Saettler"
] |
q-bio.QM cs.LG q-bio.NC stat.AP | null | 1309.2848 | null | null | http://arxiv.org/pdf/1309.2848v1 | 2013-09-11T14:55:50Z | 2013-09-11T14:55:50Z | High-dimensional cluster analysis with the Masked EM Algorithm | Cluster analysis faces two problems in high dimensions: first, the `curse of
dimensionality' that can lead to overfitting and poor generalization
performance; and second, the sheer time taken for conventional algorithms to
process large amounts of high-dimensional data. In many applications, only a
small subset of features provides information about the cluster membership of
any one data point; however, this informative feature subset may not be the same
for all data points. Here we introduce a `Masked EM' algorithm for fitting
mixture of Gaussians models in such cases. We show that the algorithm performs
close to optimally on simulated Gaussian data, and in an application of `spike
sorting' of high channel-count neuronal recordings.
| [
"['Shabnam N. Kadir' 'Dan F. M. Goodman' 'Kenneth D. Harris']",
"Shabnam N. Kadir, Dan F. M. Goodman, and Kenneth D. Harris"
] |
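A simplified sketch in the spirit of the idea above: a diagonal-covariance mixture-of-Gaussians EM in which each point's responsibilities and the parameter updates use only that point's informative (unmasked) features. The paper's treatment of masked features via a noise model is not reproduced; the update below is an illustrative assumption.

```python
import numpy as np

def masked_em_gmm(X, mask, K, n_iter=50, seed=0):
    """Diagonal-covariance GMM where responsibilities and parameter updates
    only use each point's unmasked features (simplified illustration)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, K, replace=False)].copy()
    var = np.ones((K, d))
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: masked log-likelihoods
        log_r = np.log(pi)[None, :].repeat(n, axis=0)
        for k in range(K):
            ll = -0.5 * (np.log(2 * np.pi * var[k]) + (X - mu[k]) ** 2 / var[k])
            log_r[:, k] += (ll * mask).sum(axis=1)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: per-feature weights only where the feature is unmasked
        for k in range(K):
            w = r[:, k][:, None] * mask
            denom = w.sum(axis=0) + 1e-9
            mu[k] = (w * X).sum(axis=0) / denom
            var[k] = (w * (X - mu[k]) ** 2).sum(axis=0) / denom + 1e-6
        pi = r.mean(axis=0)
    return mu, var, pi, r

# Toy example: two clusters differing only in the first 3 of 10 features,
# with a mask marking those informative features for every point.
rng = np.random.default_rng(1)
n_per, d = 100, 10
X = rng.normal(size=(2 * n_per, d))
X[:n_per, :3] += 3.0
mask = np.zeros((2 * n_per, d))
mask[:, :3] = 1.0
mu, var, pi, r = masked_em_gmm(X, mask, K=2)
print("cluster sizes:", np.bincount(r.argmax(axis=1)))
```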
stat.ML cs.LG | null | 1309.3103 | null | null | http://arxiv.org/pdf/1309.3103v1 | 2013-09-12T10:39:50Z | 2013-09-12T10:39:50Z | Temporal Autoencoding Improves Generative Models of Time Series | Restricted Boltzmann Machines (RBMs) are generative models which can learn
useful representations from samples of a dataset in an unsupervised fashion.
They have been widely employed as an unsupervised pre-training method in
machine learning. RBMs have been modified to model time series in two main
ways: The Temporal RBM stacks a number of RBMs laterally and introduces
temporal dependencies between the hidden layer units; The Conditional RBM, on
the other hand, considers past samples of the dataset as a conditional bias and
learns a representation which takes these into account. Here we propose a new
training method for both the TRBM and the CRBM, which enforces the dynamic
structure of temporal datasets. We do so by treating the temporal models as
denoising autoencoders, considering past frames of the dataset as corrupted
versions of the present frame and minimizing the reconstruction error of the
present data by the model. We call this approach Temporal Autoencoding. This
leads to a significant improvement in the performance of both models in a
filling-in-frames task across a number of datasets. The error reduction for
motion capture data is 56\% for the CRBM and 80\% for the TRBM. Taking the
posterior mean prediction instead of single samples further improves the
model's estimates, decreasing the error by as much as 91\% for the CRBM on
motion capture data. We also trained the model to perform forecasting on a
large number of datasets and have found TA pretraining to consistently improve
the performance of the forecasts. Furthermore, by looking at the prediction
error across time, we can see that this improvement reflects a better
representation of the dynamics of the data as opposed to a bias towards
reconstructing the observed data on a short time scale.
| [
"Chris H\\\"ausler, Alex Susemihl, Martin P Nawrot, Manfred Opper",
"['Chris Häusler' 'Alex Susemihl' 'Martin P Nawrot' 'Manfred Opper']"
] |
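The training signal described above, treating past frames as a corrupted version of the present frame and minimizing reconstruction error, can be illustrated with a tiny linear model trained by gradient descent; no RBM machinery is used, so this is only a toy stand-in for the TRBM/CRBM pretraining.

```python
import numpy as np

def train_temporal_predictor(series, order=3, epochs=300, seed=0):
    """Learn a linear map W so that the concatenation of the `order` past
    frames reconstructs the present frame (toy temporal-autoencoding flavor)."""
    rng = np.random.default_rng(seed)
    T, d = series.shape
    past = np.hstack([series[order - k - 1:T - k - 1] for k in range(order)])
    present = series[order:]
    # safe gradient step from the largest eigenvalue of the input covariance
    H = past.T @ past / len(past)
    lr = 1.0 / (np.linalg.eigvalsh(H).max() + 1e-9)
    W = 0.01 * rng.normal(size=(past.shape[1], d))
    for _ in range(epochs):
        grad = past.T @ (past @ W - present) / len(past)
        W -= lr * grad
    mse = float(((past @ W - present) ** 2).mean())
    return W, mse

# Toy multivariate series with linear temporal structure plus noise.
rng = np.random.default_rng(0)
n_frames, d = 500, 5
series = np.zeros((n_frames, d))
A = 0.5 * np.eye(d) + 0.05
for t in range(1, n_frames):
    series[t] = series[t - 1] @ A + 0.1 * rng.normal(size=d)
W, mse = train_temporal_predictor(series)
print("mean squared reconstruction error of the present frame:", round(mse, 4))
```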
cs.LG math.OC | null | 1309.3117 | null | null | http://arxiv.org/pdf/1309.3117v1 | 2013-09-12T11:28:12Z | 2013-09-12T11:28:12Z | Convex relaxations of structured matrix factorizations | We consider the factorization of a rectangular matrix $X $ into a positive
linear combination of rank-one factors of the form $u v^\top$, where $u$ and
$v$ belong to certain sets $\mathcal{U}$ and $\mathcal{V}$, that may encode
specific structures regarding the factors, such as positivity or sparsity. In
this paper, we show that computing the optimal decomposition is equivalent to
computing a certain gauge function of $X$ and we provide a detailed analysis of
these gauge functions and their polars. Since these gauge functions are
typically hard to compute, we present semi-definite relaxations and several
algorithms that may recover approximate decompositions with approximation
guarantees. We illustrate our results with simulations on finding
decompositions with elements in $\{0,1\}$. As side contributions, we present a
detailed analysis of variational quadratic representations of norms as well as
a new iterative basis pursuit algorithm that can deal with inexact first-order
oracles.
| [
"Francis Bach (INRIA Paris - Rocquencourt, LIENS)",
"['Francis Bach']"
] |
stat.ML cs.LG math.ST stat.TH | null | 1309.3233 | null | null | http://arxiv.org/pdf/1309.3233v1 | 2013-09-12T18:23:33Z | 2013-09-12T18:23:33Z | Efficient Orthogonal Tensor Decomposition, with an Application to Latent
Variable Model Learning | Decomposing tensors into orthogonal factors is a well-known task in
statistics, machine learning, and signal processing. We study orthogonal outer
product decompositions where the factors in the summands in the decomposition
are required to be orthogonal across summands, by relating this orthogonal
decomposition to the singular value decompositions of the flattenings. We show
that it is a non-trivial assumption for a tensor to have such an orthogonal
decomposition, and we show that it is unique (up to natural symmetries) in case
it exists, in which case we also demonstrate how it can be efficiently and
reliably obtained by a sequence of singular value decompositions. We
demonstrate how the factoring algorithm can be applied for parameter
identification in latent variable and mixture models.
| [
"Franz J. Kir\\'aly",
"['Franz J. Király']"
] |
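The recovery route described above can be demonstrated on a synthetic example: build an orthogonally decomposable third-order tensor and read off one set of factors from the SVD of a flattening. The construction below illustrates that property only; it is not the paper's full factoring algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
r, n = 3, 6
# Orthonormal factor matrices A, B, C (columns a_i, b_i, c_i) and weights.
A = np.linalg.qr(rng.normal(size=(n, r)))[0]
B = np.linalg.qr(rng.normal(size=(n, r)))[0]
C = np.linalg.qr(rng.normal(size=(n, r)))[0]
sigma = np.array([3.0, 2.0, 1.0])

# Build the orthogonally decomposable tensor T = sum_i sigma_i a_i o b_i o c_i.
T = np.einsum('i,ai,bi,ci->abc', sigma, A, B, C)

# Mode-1 flattening and its SVD: the left singular vectors recover the a_i
# (up to sign and ordering) because the remaining factors are orthonormal.
T1 = T.reshape(n, n * n)
U, s, _ = np.linalg.svd(T1, full_matrices=False)
recovered = U[:, :r]
alignment = np.abs(recovered.T @ A)      # should be close to a permutation matrix
print("singular values:", np.round(s[:r], 3))
print("|recovered^T A|:\n", np.round(alignment, 3))
```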
stat.ML cs.CV cs.LG | null | 1309.3256 | null | null | http://arxiv.org/pdf/1309.3256v2 | 2014-02-03T03:56:31Z | 2013-09-12T19:38:18Z | Recovery guarantees for exemplar-based clustering | For a certain class of distributions, we prove that the linear programming
relaxation of $k$-medoids clustering---a variant of $k$-means clustering where
means are replaced by exemplars from within the dataset---distinguishes points
drawn from nonoverlapping balls with high probability once the number of points
drawn and the separation distance between any two balls are sufficiently large.
Our results hold in the nontrivial regime where the separation distance is
small enough that points drawn from different balls may be closer to each other
than points drawn from the same ball; in this case, clustering by thresholding
pairwise distances between points can fail. We also exhibit numerical evidence
of high-probability recovery in a substantially more permissive regime.
| [
"['Abhinav Nellore' 'Rachel Ward']",
"Abhinav Nellore and Rachel Ward"
] |
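For reference, the sketch below runs the simple alternating k-medoids heuristic on points drawn from two well-separated balls; note that the paper analyzes the linear-programming relaxation of the k-medoids objective, not this heuristic.

```python
import numpy as np

def k_medoids(X, k, n_iter=50, seed=0):
    """Alternating k-medoids: assign points to the nearest medoid, then
    re-pick each cluster's medoid as the point minimizing within-cluster
    distances. Exemplars always come from the dataset itself."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    medoids = rng.choice(len(X), k, replace=False)
    for _ in range(n_iter):
        labels = D[:, medoids].argmin(axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members) == 0:
                continue
            within = D[np.ix_(members, members)].sum(axis=1)
            new_medoids[j] = members[within.argmin()]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, D[:, medoids].argmin(axis=1)

# Points drawn from two well-separated balls, as in the recovery setting.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.5, (60, 2)), rng.normal(4, 0.5, (60, 2))])
medoids, labels = k_medoids(X, k=2)
print("medoid indices:", medoids, "| cluster sizes:", np.bincount(labels))
```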
stat.ME cs.LG stat.ML | null | 1309.3533 | null | null | http://arxiv.org/pdf/1309.3533v1 | 2013-09-13T18:31:02Z | 2013-09-13T18:31:02Z | Mixed Membership Models for Time Series | In this article we discuss some of the consequences of the mixed membership
perspective on time series analysis. In its most abstract form, a mixed
membership model aims to associate an individual entity with some set of
attributes based on a collection of observed data. Although much of the
literature on mixed membership models considers the setting in which
exchangeable collections of data are associated with each member of a set of
entities, it is equally natural to consider problems in which an entire time
series is viewed as an entity and the goal is to characterize the time series
in terms of a set of underlying dynamic attributes or "dynamic regimes".
Indeed, this perspective is already present in the classical hidden Markov
model, where the dynamic regimes are referred to as "states", and the
collection of states realized in a sample path of the underlying process can be
viewed as a mixed membership characterization of the observed time series. Our
goal here is to review some of the richer modeling possibilities for time
series that are provided by recent developments in the mixed membership
framework.
| [
"Emily B. Fox and Michael I. Jordan",
"['Emily B. Fox' 'Michael I. Jordan']"
] |
cs.IT cs.LG math.IT stat.ML | 10.1016/j.acha.2013.08.005 | 1309.3676 | null | null | http://arxiv.org/abs/1309.3676v1 | 2013-09-14T15:08:48Z | 2013-09-14T15:08:48Z | Optimized projections for compressed sensing via rank-constrained
nearest correlation matrix | Optimizing the acquisition matrix is useful for compressed sensing of signals
that are sparse in overcomplete dictionaries, because the acquisition matrix
can be adapted to the particular correlations of the dictionary atoms. In this
paper a novel formulation of the optimization problem is proposed, in the form
of a rank-constrained nearest correlation matrix problem. Furthermore,
improvements for three existing optimization algorithms are introduced, which
are shown to be particular instances of the proposed formulation. Simulation
results show notable improvements and superior robustness in sparse signal
recovery.
| [
"['Nicolae Cleju']",
"Nicolae Cleju"
] |
cs.LG | null | 1309.3697 | null | null | http://arxiv.org/pdf/1309.3697v1 | 2013-09-14T19:56:58Z | 2013-09-14T19:56:58Z | Group Learning and Opinion Diffusion in a Broadcast Network | We analyze the following group learning problem in the context of opinion
diffusion: Consider a network with $M$ users, each facing $N$ options. In a
discrete time setting, at each time step, each user chooses $K$ out of the $N$
options, and receives randomly generated rewards, whose statistics depend on the
options chosen as well as the user itself, and are unknown to the users. Each
user aims to maximize their expected total rewards over a certain time horizon
through an online learning process, i.e., a sequence of exploration (sampling
the return of each option) and exploitation (selecting empirically good
options) steps.
Within this context we consider two group learning scenarios, (1) users with
uniform preferences and (2) users with diverse preferences, and examine how a
user should construct its learning process to best extract information from
others' decisions and experiences so as to maximize its own reward. Performance
is measured in {\em weak regret}, the difference between the user's total
reward and the reward from a user-specific best single-action policy (i.e.,
always selecting the set of options generating the highest mean rewards for
this user). Within each scenario we also consider two cases: (i) when users
exchange full information, meaning they share the actual rewards they obtained
from their choices, and (ii) when users exchange limited information, e.g.,
only their choices but not rewards obtained from these choices.
| [
"['Yang Liu' 'Mingyan Liu']",
"Yang Liu, Mingyan Liu"
] |
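A single-user, single-choice (K = 1) sketch of the kind of explore/exploit index policy often used in this setting is shown below, using UCB1 and measuring weak regret against the best single option; the multi-user information-sharing schemes studied in the paper are not reproduced.

```python
import numpy as np

def ucb1(means, horizon=5000, seed=0):
    """UCB1 for a single user choosing one of N options per step; the goal is
    small (weak) regret against always playing the best single option."""
    rng = np.random.default_rng(seed)
    n_arms = len(means)
    counts = np.zeros(n_arms)
    sums = np.zeros(n_arms)
    reward_total = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                               # sample every option once
        else:
            ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
            arm = int(ucb.argmax())
        r = float(rng.random() < means[arm])          # Bernoulli reward
        counts[arm] += 1
        sums[arm] += r
        reward_total += r
    return horizon * max(means) - reward_total        # weak regret

means = [0.3, 0.5, 0.45, 0.7]
print("weak regret over 5000 steps:", round(ucb1(means), 1))
```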