| corpus_id (stringlengths 7-12) | paper_id (stringlengths 9-16) | title (stringlengths 1-261) | abstract (stringlengths 70-4.02k) | source (stringclasses 1 value) | bibtex (stringlengths 208-20.9k) | citation_key (stringlengths 6-100) |
---|---|---|---|---|---|---|
arxiv-4801 | 0809.1522 | On the permutation capacity of digraphs | <|reference_start|>On the permutation capacity of digraphs: We extend several results of the third author and C. Malvenuto on graph-different permutations to the case of directed graphs and introduce new open problems. Permutation capacity is a natural extension of Sperner capacity from finite directed graphs to infinite digraphs. Our subject is combinatorial in nature, but can be equally regarded as zero-error information theory.<|reference_end|> | arxiv | @article{cohen2008on,
title={On the permutation capacity of digraphs},
  author={Gerard Cohen and Emanuela Fachini and Janos Korner},
journal={arXiv preprint arXiv:0809.1522},
year={2008},
archivePrefix={arXiv},
eprint={0809.1522},
primaryClass={math.CO cs.IT math.IT}
} | cohen2008on |
arxiv-4802 | 0809.1551 | Consistent Query Answers in the Presence of Universal Constraints | <|reference_start|>Consistent Query Answers in the Presence of Universal Constraints: The framework of consistent query answers and repairs has been introduced to alleviate the impact of inconsistent data on the answers to a query. A repair is a minimally different consistent instance and an answer is consistent if it is present in every repair. In this article we study the complexity of consistent query answers and repair checking in the presence of universal constraints. We propose an extended version of the conflict hypergraph which allows to capture all repairs w.r.t. a set of universal constraints. We show that repair checking is in PTIME for the class of full tuple-generating dependencies and denial constraints, and we present a polynomial repair algorithm. This algorithm is sound, i.e. always produces a repair, but also complete, i.e. every repair can be constructed. Next, we present a polynomial-time algorithm computing consistent answers to ground quantifier-free queries in the presence of denial constraints, join dependencies, and acyclic full-tuple generating dependencies. Finally, we show that extending the class of constraints leads to intractability. For arbitrary full tuple-generating dependencies consistent query answering becomes coNP-complete. For arbitrary universal constraints consistent query answering is \Pi_2^p-complete and repair checking coNP-complete.<|reference_end|> | arxiv | @article{staworko2008consistent,
title={Consistent Query Answers in the Presence of Universal Constraints},
  author={Slawomir Staworko and Jan Chomicki},
journal={arXiv preprint arXiv:0809.1551},
year={2008},
number={UB CSE TR 2008-15},
archivePrefix={arXiv},
eprint={0809.1551},
primaryClass={cs.DB}
} | staworko2008consistent |
arxiv-4803 | 0809.1552 | A computer verified, monadic, functional implementation of the integral | <|reference_start|>A computer verified, monadic, functional implementation of the integral: We provide a computer verified exact monadic functional implementation of the Riemann integral in type theory. Together with previous work by O'Connor, this may be seen as the beginning of the realization of Bishop's vision to use constructive mathematics as a programming language for exact analysis.<|reference_end|> | arxiv | @article{o'connor2008a,
title={A computer verified, monadic, functional implementation of the integral},
author={Russell O'Connor and Bas Spitters},
journal={Theoretical Computer Science, Volume 411, Issue 37, 7 August 2010,
Pages 3386-3402},
year={2008},
doi={10.1016/j.tcs.2010.05.031},
archivePrefix={arXiv},
eprint={0809.1552},
primaryClass={cs.LO cs.NA}
} | o'connor2008a |
arxiv-4804 | 0809.1570 | Mumford dendrograms and discrete p-adic symmetries | <|reference_start|>Mumford dendrograms and discrete p-adic symmetries: In this article, we present an effective encoding of dendrograms by embedding them into the Bruhat-Tits trees associated to $p$-adic number fields. As an application, we show how strings over a finite alphabet can be encoded in cyclotomic extensions of $\mathbb{Q}_p$ and discuss $p$-adic DNA encoding. The application leads to fast $p$-adic agglomerative hierarchic algorithms similar to the ones recently used e.g. by A. Khrennikov and others. From the viewpoint of $p$-adic geometry, to encode a dendrogram $X$ in a $p$-adic field $K$ means to fix a set $S$ of $K$-rational punctures on the $p$-adic projective line $\mathbb{P}^1$. To $\mathbb{P}^1\setminus S$ is associated in a natural way a subtree inside the Bruhat-Tits tree which recovers $X$, a method first used by F. Kato in 1999 in the classification of discrete subgroups of $\textrm{PGL}_2(K)$. Next, we show how the $p$-adic moduli space $\mathfrak{M}_{0,n}$ of $\mathbb{P}^1$ with $n$ punctures can be applied to the study of time series of dendrograms and those symmetries arising from hyperbolic actions on $\mathbb{P}^1$. In this way, we can associate to certain classes of dynamical systems a Mumford curve, i.e. a $p$-adic algebraic curve with totally degenerate reduction modulo $p$. Finally, we indicate some of our results in the study of general discrete actions on $\mathbb{P}^1$, and their relation to $p$-adic Hurwitz spaces.<|reference_end|> | arxiv | @article{bradley2008mumford,
title={Mumford dendrograms and discrete p-adic symmetries},
author={Patrick Erik Bradley},
journal={p-Adic Numbers, Ultrametric Analysis and Applications, Vol. 1, No.
2 (2009), 118-127},
year={2008},
doi={10.1134/S2070046609020034},
archivePrefix={arXiv},
eprint={0809.1570},
primaryClass={cs.DM math-ph math.MP q-bio.GN}
} | bradley2008mumford |
arxiv-4805 | 0809.1590 | When is there a representer theorem? Vector versus matrix regularizers | <|reference_start|>When is there a representer theorem? Vector versus matrix regularizers: We consider a general class of regularization methods which learn a vector of parameters on the basis of linear measurements. It is well known that if the regularizer is a nondecreasing function of the inner product then the learned vector is a linear combination of the input data. This result, known as the {\em representer theorem}, is at the basis of kernel-based methods in machine learning. In this paper, we prove the necessity of the above condition, thereby completing the characterization of kernel methods based on regularization. We further extend our analysis to regularization methods which learn a matrix, a problem which is motivated by the application to multi-task learning. In this context, we study a more general representer theorem, which holds for a larger class of regularizers. We provide a necessary and sufficient condition for these class of matrix regularizers and highlight them with some concrete examples of practical importance. Our analysis uses basic principles from matrix theory, especially the useful notion of matrix nondecreasing function.<|reference_end|> | arxiv | @article{argyriou2008when,
title={When is there a representer theorem? Vector versus matrix regularizers},
  author={Andreas Argyriou and Charles Micchelli and Massimiliano Pontil},
journal={Journal of Machine Learning Research, 10:2507-2529, 2009},
year={2008},
archivePrefix={arXiv},
eprint={0809.1590},
primaryClass={cs.LG}
} | argyriou2008when |
arxiv-4806 | 0809.1593 | Constructing Perfect Steganographic Systems | <|reference_start|>Constructing Perfect Steganographic Systems: We propose steganographic systems for the case when covertexts (containers) are generated by a finite-memory source with possibly unknown statistics. The probability distributions of covertexts with and without hidden information are the same; this means that the proposed stegosystems are perfectly secure, i.e. an observer cannot determine whether hidden information is being transmitted. The speed of transmission of hidden information can be made arbitrary close to the theoretical limit - the Shannon entropy of the source of covertexts. An interesting feature of the suggested stegosystems is that they do not require any (secret or public) key. At the same time, we outline some principled computational limitations on steganography. We show that there are such sources of covertexts, that any stegosystem that has linear (in the length of the covertext) speed of transmission of hidden text must have an exponential Kolmogorov complexity. This shows, in particular, that some assumptions on the sources of covertext are necessary.<|reference_end|> | arxiv | @article{ryabko2008constructing,
title={Constructing Perfect Steganographic Systems},
  author={Boris Ryabko and Daniil Ryabko},
journal={Information and Computation, 2011, Vol. 209, No. 9, pp. 1223-1230},
year={2008},
doi={10.1016/j.ic.2011.06.004},
archivePrefix={arXiv},
eprint={0809.1593},
primaryClass={cs.CR cs.IT math.IT}
} | ryabko2008constructing |
arxiv-4807 | 0809.1618 | ECOLANG - Communications Language for Ecological Simulations Network | <|reference_start|>ECOLANG - Communications Language for Ecological Simulations Network: This document describes the communication language used in one multiagent system environment for ecological simulations, based on EcoDynamo simulator application linked with several intelligent agents and visualisation applications, and extends the initial definition of the language. The agents actions and perceptions are translated into messages exchanged with the simulator application and other agents. The concepts and definitions used follow the BNF notation (Backus et al. 1960) and is inspired in the Coach Unilang language (Reis and Lau 2002).<|reference_end|> | arxiv | @article{pereira2008ecolang,
title={ECOLANG - Communications Language for Ecological Simulations Network},
author={Antonio Pereira},
journal={arXiv preprint arXiv:0809.1618},
year={2008},
number={TR-LIACC-FEUP-AMCP 01.1},
archivePrefix={arXiv},
eprint={0809.1618},
primaryClass={cs.AI cs.MA}
} | pereira2008ecolang |
arxiv-4808 | 0809.1644 | Computing with Classical Real Numbers | <|reference_start|>Computing with Classical Real Numbers: There are two incompatible Coq libraries that have a theory of the real numbers; the Coq standard library gives an axiomatic treatment of classical real numbers, while the CoRN library from Nijmegen defines constructively valid real numbers. Unfortunately, this means results about one structure cannot easily be used in the other structure. We present a way interfacing these two libraries by showing that their real number structures are isomorphic assuming the classical axioms already present in the standard library reals. This allows us to use O'Connor's decision procedure for solving ground inequalities present in CoRN to solve inequalities about the reals from the Coq standard library, and it allows theorems from the Coq standard library to apply to problem about the CoRN reals.<|reference_end|> | arxiv | @article{kaliszyk2008computing,
title={Computing with Classical Real Numbers},
author={Cezary Kaliszyk and Russell O'Connor},
journal={Journal of Formalized Reasoning, 2(1):27-39, 2009},
year={2008},
archivePrefix={arXiv},
eprint={0809.1644},
primaryClass={cs.LO}
} | kaliszyk2008computing |
arxiv-4809 | 0809.1659 | A Tiered Security System for Mobile Devices | <|reference_start|>A Tiered Security System for Mobile Devices: We have designed a tiered security system for mobile devices where each security tier holds user-defined security triggers and actions. It has a friendly interface that allows users to easily define and configure the different circumstances and actions they need according to context. The system can be set up and activated from any browser or directly on the mobile device itself. When the security system is operated from a Web site or server, its configuration can be readily shared across multiple devices. When operated directly from the mobile device, no server is needed for activation. Many different types of security circumstances and actions can be set up and employed from its tiers. Security circumstances can range from temporary misplacement of a mobile device at home to malicious theft in a hostile region. Security actions can range from ringing a simple alarm to automatically erasing, overwriting, and re-erasing drives.<|reference_end|> | arxiv | @article{bardsley2008a,
title={A Tiered Security System for Mobile Devices},
  author={Scott Bardsley and Theodosios Thomas and R. Paul Morris},
journal={arXiv preprint arXiv:0809.1659},
year={2008},
archivePrefix={arXiv},
eprint={0809.1659},
primaryClass={cs.CR}
} | bardsley2008a |
arxiv-4810 | 0809.1681 | Multirate Anypath Routing in Wireless Mesh Networks | <|reference_start|>Multirate Anypath Routing in Wireless Mesh Networks: In this paper, we present a new routing paradigm that generalizes opportunistic routing in wireless mesh networks. In multirate anypath routing, each node uses both a set of next hops and a selected transmission rate to reach a destination. Using this rate, a packet is broadcast to the nodes in the set and one of them forwards the packet on to the destination. To date, there is no theory capable of jointly optimizing both the set of next hops and the transmission rate used by each node. We bridge this gap by introducing a polynomial-time algorithm to this problem and provide the proof of its optimality. The proposed algorithm runs in the same running time as regular shortest-path algorithms and is therefore suitable for deployment in link-state routing protocols. We conducted experiments in a 802.11b testbed network, and our results show that multirate anypath routing performs on average 80% and up to 6.4 times better than anypath routing with a fixed rate of 11 Mbps. If the rate is fixed at 1 Mbps instead, performance improves by up to one order of magnitude.<|reference_end|> | arxiv | @article{laufer2008multirate,
title={Multirate Anypath Routing in Wireless Mesh Networks},
author={Rafael Laufer and Leonard Kleinrock},
journal={IEEE INFOCOM 2009},
year={2008},
doi={10.1109/INFCOM.2009.5061904},
number={UCLA-CSD-TR080025},
archivePrefix={arXiv},
eprint={0809.1681},
primaryClass={cs.NI cs.DS}
} | laufer2008multirate |
arxiv-4811 | 0809.1686 | Agent-based Ecological Model Calibration - on the Edge of a New Approach | <|reference_start|>Agent-based Ecological Model Calibration - on the Edge of a New Approach: The purpose of this paper is to present a new approach to ecological model calibration -- an agent-based software. This agent works on three stages: 1- It builds a matrix that synthesizes the inter-variable relationships; 2- It analyses the steady-state sensitivity of different variables to different parameters; 3- It runs the model iteratively and measures model lack of fit, adequacy and reliability. Stage 3 continues until some convergence criteria are attained. At each iteration, the agent knows from stages 1 and 2, which parameters are most likely to produce the desired shift on predicted results.<|reference_end|> | arxiv | @article{pereira2008agent-based,
title={Agent-based Ecological Model Calibration - on the Edge of a New Approach},
  author={Antonio Pereira (1 and 2) and Pedro Duarte (1) and Luis Paulo Reis (2) ((1) UFP, Porto, Portugal (2) FEUP, Porto, Portugal)},
journal={arXiv preprint arXiv:0809.1686},
year={2008},
archivePrefix={arXiv},
eprint={0809.1686},
primaryClass={cs.AI cs.MA}
} | pereira2008agent-based |
arxiv-4812 | 0809.1687 | Incoherent dictionaries and the statistical restricted isometry property | <|reference_start|>Incoherent dictionaries and the statistical restricted isometry property: In this article we present a statistical version of the Candes-Tao restricted isometry property (SRIP for short) which holds in general for any incoherent dictionary which is a disjoint union of orthonormal bases. In addition, under appropriate normalization, the eigenvalues of the associated Gram matrix fluctuate around 1 according to the Wigner semicircle distribution. The result is then applied to various dictionaries that arise naturally in the setting of finite harmonic analysis, giving, in particular, a better understanding on a remark of Applebaum-Howard-Searle-Calderbank concerning RIP for the Heisenberg dictionary of chirp like functions.<|reference_end|> | arxiv | @article{gurevich2008incoherent,
title={Incoherent dictionaries and the statistical restricted isometry property},
author={Shamgar Gurevich (UC Berkeley) and Ronny Hadani (University of
Chicago)},
journal={arXiv preprint arXiv:0809.1687},
year={2008},
archivePrefix={arXiv},
eprint={0809.1687},
primaryClass={cs.IT cs.DM math.IT math.PR}
} | gurevich2008incoherent |
arxiv-4813 | 0809.1710 | Circumference, Chromatic Number and Online Coloring | <|reference_start|>Circumference, Chromatic Number and Online Coloring: Erd\"os conjectured that if $G$ is a triangle free graph of chromatic number at least $k\geq 3$, then it contains an odd cycle of length at least $k^{2-o(1)}$ \cite{sudakovverstraete, verstraete}. Nothing better than a linear bound (\cite{gyarfas}, Problem 5.1.55 in \cite{West}) was so far known. We make progress on this conjecture by showing that $G$ contains an odd cycle of length at least $O(k\log\log k)$. Erd\"os' conjecture is known to hold for graphs with girth at least 5. We show that if a girth 4 graph is $C_5$ free, then Erd\"os' conjecture holds. When the number of vertices is not too large we can prove better bounds on $\chi$. We also give bounds on the chromatic number of graphs with at most $r$ cycles of length $1\bmod k$, or at most $s$ cycles of length $2\bmod k$, or no cycles of length $3\bmod k$. Our techniques essentially consist of using a depth first search tree to decompose the graph into ordered paths, which are then fed to an online coloring algorithm. Using this technique we give simple proofs of some old results, and also obtain several simpler results. We also obtain a lower bound on the number of colors an online coloring algorithm needs to use on triangle free graphs.<|reference_end|> | arxiv | @article{diwan2008circumference,
title={Circumference, Chromatic Number and Online Coloring},
  author={Ajit A. Diwan and Sreyash Kenkre and Sundar Vishwanathan},
journal={arXiv preprint arXiv:0809.1710},
year={2008},
archivePrefix={arXiv},
eprint={0809.1710},
primaryClass={cs.DM}
} | diwan2008circumference |
arxiv-4814 | 0809.1715 | Improved Smoothed Analysis of the k-Means Method | <|reference_start|>Improved Smoothed Analysis of the k-Means Method: The k-means method is a widely used clustering algorithm. One of its distinguished features is its speed in practice. Its worst-case running-time, however, is exponential, leaving a gap between practical and theoretical performance. Arthur and Vassilvitskii (FOCS 2006) aimed at closing this gap, and they proved a bound of $\poly(n^k, \sigma^{-1})$ on the smoothed running-time of the k-means method, where n is the number of data points and $\sigma$ is the standard deviation of the Gaussian perturbation. This bound, though better than the worst-case bound, is still much larger than the running-time observed in practice. We improve the smoothed analysis of the k-means method by showing two upper bounds on the expected running-time of k-means. First, we prove that the expected running-time is bounded by a polynomial in $n^{\sqrt k}$ and $\sigma^{-1}$. Second, we prove an upper bound of $k^{kd} \cdot \poly(n, \sigma^{-1})$, where d is the dimension of the data space. The polynomial is independent of k and d, and we obtain a polynomial bound for the expected running-time for $k, d \in O(\sqrt{\log n/\log \log n})$. Finally, we show that k-means runs in smoothed polynomial time for one-dimensional instances.<|reference_end|> | arxiv | @article{manthey2008improved,
title={Improved Smoothed Analysis of the k-Means Method},
  author={Bodo Manthey and Heiko R{\"o}glin},
journal={arXiv preprint arXiv:0809.1715},
year={2008},
archivePrefix={arXiv},
eprint={0809.1715},
primaryClass={cs.DS}
} | manthey2008improved |
arxiv-4815 | 0809.1790 | Cellular Automata as a Model of Physical Systems | <|reference_start|>Cellular Automata as a Model of Physical Systems: Cellular Automata (CA), as they are presented in the literature, are abstract mathematical models of computation. In this paper we present an alternate approach: using the CA as a model or theory of physical systems and devices. While this approach abstracts away all details of the underlying physical system, it remains faithful to the fact that there is an underlying physical reality which it describes. This imposes certain restrictions on the types of computations a CA can physically carry out, and the resources it needs to do so. In this paper we explore these and other consequences of our reformalization.<|reference_end|> | arxiv | @article{cheung2008cellular,
title={Cellular Automata as a Model of Physical Systems},
  author={Donny Cheung and Carlos A. Perez-Delgado},
journal={arXiv preprint arXiv:0809.1790},
year={2008},
archivePrefix={arXiv},
eprint={0809.1790},
primaryClass={cs.DM}
} | cheung2008cellular |
arxiv-4816 | 0809.1802 | Automatic Identification and Data Extraction from 2-Dimensional Plots in Digital Documents | <|reference_start|>Automatic Identification and Data Extraction from 2-Dimensional Plots in Digital Documents: Most search engines index the textual content of documents in digital libraries. However, scholarly articles frequently report important findings in figures for visual impact and the contents of these figures are not indexed. These contents are often invaluable to the researcher in various fields, for the purposes of direct comparison with their own work. Therefore, searching for figures and extracting figure data are important problems. To the best of our knowledge, there exists no tool to automatically extract data from figures in digital documents. If we can extract data from these images automatically and store them in a database, an end-user can query and combine data from multiple digital documents simultaneously and efficiently. We propose a framework based on image analysis and machine learning to extract information from 2-D plot images and store them in a database. The proposed algorithm identifies a 2-D plot and extracts the axis labels, legend and the data points from the 2-D plot. We also segregate overlapping shapes that correspond to different data points. We demonstrate performance of individual algorithms, using a combination of generated and real-life images.<|reference_end|> | arxiv | @article{brouwer2008automatic,
title={Automatic Identification and Data Extraction from 2-Dimensional Plots in
Digital Documents},
  author={William Brouwer and Saurabh Kataria and Sujatha Das and Prasenjit Mitra and C. L. Giles},
journal={arXiv preprint arXiv:0809.1802},
year={2008},
archivePrefix={arXiv},
eprint={0809.1802},
primaryClass={cs.CV}
} | brouwer2008automatic |
arxiv-4817 | 0809.1806 | Graph Operations that are Good for Greedoids | <|reference_start|>Graph Operations that are Good for Greedoids: S is a local maximum stable set of a graph G, if the set S is a maximum stable set of the subgraph induced by its closed neighborhood. In (Levit, Mandrescu, 2002) we have proved that the family of all local maximum stable sets is a greedoid for every forest. The cases of bipartite graphs and triangle-free graphs were analyzed in (Levit, Mandrescu, 2004) and (Levit, Mandrescu, 2007), respectively. In this paper we give necessary and sufficient conditions for the family of all local maximum stable sets of a graph G to form a greedoid, where G is: (a) the disjoint union of a family of graphs; (b) the Zykov sum of a family of graphs, or (c) the corona X*{H_1,H_2,...,H_n} obtained by joining each vertex k of a graph X to all the vertices of a graph H_k.<|reference_end|> | arxiv | @article{levit2008graph,
title={Graph Operations that are Good for Greedoids},
author={Vadim E. Levit and Eugen Mandrescu},
journal={arXiv preprint arXiv:0809.1806},
year={2008},
archivePrefix={arXiv},
eprint={0809.1806},
primaryClass={math.CO cs.DM}
} | levit2008graph |
arxiv-4818 | 0809.1810 | Characterization of the errors of the FMM in particle simulations | <|reference_start|>Characterization of the errors of the FMM in particle simulations: The Fast Multipole Method (FMM) offers an acceleration for pairwise interaction calculation, known as $N$-body problems, from $\mathcal{O}(N^2)$ to $\mathcal{O}(N)$ with $N$ particles. This has brought dramatic increase in the capability of particle simulations in many application areas, such as electrostatics, particle formulations of fluid mechanics, and others. Although the literature on the subject provides theoretical error bounds for the FMM approximation, there are not many reports of the measured errors in a suite of computational experiments. We have performed such an experimental investigation, and summarized the results of about 1000 calculations using the FMM algorithm, to characterize the accuracy of the method in relation with the different parameters available to the user. In addition to the more standard diagnostic of the maximum error, we supply illustrations of the spatial distribution of the errors, which offers visual evidence of all the contributing factors to the overall approximation accuracy: multipole expansion, local expansion, hierarchical spatial decomposition (interaction lists, local domain, far domain). This presentation is a contribution to any researcher wishing to incorporate the FMM acceleration to their application code, as it aids in understanding where accuracy is gained or compromised.<|reference_end|> | arxiv | @article{cruz2008characterization,
title={Characterization of the errors of the FMM in particle simulations},
  author={Felipe A. Cruz and L. A. Barba},
journal={Int. J. Num. Meth. Engrg., 79(13):1577-1604 (2009)},
year={2008},
doi={10.1002/nme.2611},
archivePrefix={arXiv},
eprint={0809.1810},
primaryClass={cs.DS physics.comp-ph}
} | cruz2008characterization |
arxiv-4819 | 0809.1812 | Topological Complexity of omega-Powers : Extended Abstract | <|reference_start|>Topological Complexity of omega-Powers : Extended Abstract: This is an extended abstract presenting new results on the topological complexity of omega-powers (which are included in a paper "Classical and effective descriptive complexities of omega-powers" available from arXiv:0708.4176) and reflecting also some open questions which were discussed during the Dagstuhl seminar on "Topological and Game-Theoretic Aspects of Infinite Computations" 29.06.08 - 04.07.08.<|reference_end|> | arxiv | @article{finkel2008topological,
title={Topological Complexity of omega-Powers : Extended Abstract},
  author={Olivier Finkel (LIP) and Dominique Lecomte (UMR 7586)},
journal={arXiv preprint arXiv:0809.1812},
year={2008},
number={LIP Research Report RR 2008-27},
archivePrefix={arXiv},
eprint={0809.1812},
primaryClass={cs.LO cs.CC math.LO}
} | finkel2008topological |
arxiv-4820 | 0809.1836 | The complexity of counting solutions to Generalised Satisfiability Problems modulo k | <|reference_start|>The complexity of counting solutions to Generalised Satisfiability Problems modulo k: Generalised Satisfiability Problems (or Boolean Constraint Satisfaction Problems), introduced by Schaefer in 1978, are a general class of problem which allow the systematic study of the complexity of satisfiability problems with different types of constraints. In 1979, Valiant introduced the complexity class parity P, the problem of counting the number of solutions to NP problems modulo two. Others have since considered the question of counting modulo other integers. We give a dichotomy theorem for the complexity of counting the number of solutions to Generalised Satisfiability Problems modulo integers. This follows from an earlier result of Creignou and Hermann which gave a counting dichotomy for these types of problem, and the dichotomy itself is almost identical. Specifically, counting the number of solutions to a Generalised Satisfiability Problem can be done in polynomial time if all the relations are affine. Otherwise, except for one special case with k = 2, it is #_kP-complete.<|reference_end|> | arxiv | @article{faben2008the,
title={The complexity of counting solutions to Generalised Satisfiability
Problems modulo k},
author={John Faben},
journal={arXiv preprint arXiv:0809.1836},
year={2008},
archivePrefix={arXiv},
eprint={0809.1836},
primaryClass={cs.CC}
} | faben2008the |
arxiv-4821 | 0809.1895 | Thinking Twice about Second-Price Ad Auctions | <|reference_start|>Thinking Twice about Second-Price Ad Auctions: Recent work has addressed the algorithmic problem of allocating advertisement space for keywords in sponsored search auctions so as to maximize revenue, most of which assume that pricing is done via a first-price auction. This does not realistically model the Generalized Second Price (GSP) auction used in practice, in which bidders pay the next-highest bid for keywords that they are allocated. Towards the goal of more realistically modeling these auctions, we introduce the Second-Price Ad Auctions problem, in which bidders' payments are determined by the GSP mechanism. We show that the complexity of the Second-Price Ad Auctions problem is quite different than that of the more studied First-Price Ad Auctions problem. First, unlike the first-price variant, for which small constant-factor approximations are known, it is NP-hard to approximate the Second-Price Ad Auctions problem to any non-trivial factor, even when the bids are small compared to the budgets. Second, this discrepancy extends even to the 0-1 special case that we call the Second-Price Matching problem (2PM). Offline 2PM is APX-hard, and for online 2PM there is no deterministic algorithm achieving a non-trivial competitive ratio and no randomized algorithm achieving a competitive ratio better than 2. This contrasts with the results for the analogous special case in the first-price model, the standard bipartite matching problem, which is solvable in polynomial time and which has deterministic and randomized online algorithms achieving better competitive ratios. On the positive side, we provide a 2-approximation for offline 2PM and a 5.083-competitive randomized algorithm for online 2PM. 
The latter result makes use of a new generalization of a result on the performance of the "Ranking" algorithm for online bipartite matching.<|reference_end|> | arxiv | @article{azar2008thinking,
title={Thinking Twice about Second-Price Ad Auctions},
  author={Yossi Azar and Benjamin Birnbaum and Anna R. Karlin and C. Thach Nguyen},
journal={arXiv preprint arXiv:0809.1895},
year={2008},
archivePrefix={arXiv},
eprint={0809.1895},
primaryClass={cs.DS}
} | azar2008thinking |
arxiv-4822 | 0809.1900 | Distributed Detection in Sensor Networks with Limited Range Multi-Modal Sensors | <|reference_start|>Distributed Detection in Sensor Networks with Limited Range Multi-Modal Sensors: We consider a multi-object detection problem over a sensor network (SNET) with limited range multi-modal sensors. Limited range sensing environment arises in a sensing field prone to signal attenuation and path losses. The general problem complements the widely considered decentralized detection problem where all sensors observe the same object. In this paper we develop a distributed detection approach based on recent development of the false discovery rate (FDR) and the associated BH test procedure. The BH procedure is based on rank ordering of scalar test statistics. We first develop scalar test statistics for multidimensional data to handle multi-modal sensor observations and establish its optimality in terms of the BH procedure. We then propose a distributed algorithm in the ideal case of infinite attenuation for identification of sensors that are in the immediate vicinity of an object. We demonstrate communication message scalability to large SNETs by showing that the upper bound on the communication message complexity scales linearly with the number of sensors that are in the vicinity of objects and is independent of the total number of sensors in the SNET. This brings forth an important principle for evaluating the performance of an SNET, namely, the need for scalability of communications and performance with respect to the number of objects or events in an SNET irrespective of the network size. We then account for finite attenuation by modeling sensor observations as corrupted by uncertain interference arising from distant objects and developing robust extensions to our idealized distributed scheme. 
The robustness properties ensure that both the error performance and communication message complexity degrade gracefully with interference.<|reference_end|> | arxiv | @article{ermis2008distributed,
title={Distributed Detection in Sensor Networks with Limited Range Multi-Modal
Sensors},
author={E. Ermis and V. Saligrama},
journal={arXiv preprint arXiv:0809.1900},
year={2008},
archivePrefix={arXiv},
eprint={0809.1900},
primaryClass={cs.IT math.IT}
} | ermis2008distributed |
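The BH (Benjamini-Hochberg) step-up procedure that the entry above builds on is compact enough to state in code. A minimal sketch; the example p-values are invented, not taken from the paper:

```python
def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: return the (0-based) indices of
    the hypotheses rejected at false discovery rate level q."""
    n = len(p_values)
    # Rank the p-values from smallest to largest, remembering positions.
    order = sorted(range(n), key=lambda i: p_values[i])
    k_max = 0  # largest rank k with p_(k) <= k * q / n
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank * q / n:
            k_max = rank
    # Reject the hypotheses with the k_max smallest p-values.
    return sorted(order[:k_max])

# Sensors near an object would report small p-values for "no object present".
p = [0.001, 0.008, 0.039, 0.041, 0.2, 0.5, 0.8, 0.9]
rejected = benjamini_hochberg(p, q=0.05)
```

The procedure rejects the k smallest p-values, where k is the largest rank whose sorted p-value falls under the line k*q/n; note a later rank can qualify even when an earlier one does not, which is why the loop keeps the maximum.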
arxiv-4823 | 0809.1902 | Fast C-K-R Partitions of Sparse Graphs | <|reference_start|>Fast C-K-R Partitions of Sparse Graphs: We present fast algorithms for constructing probabilistic embeddings and approximate distance oracles in sparse graphs. The main ingredient is a fast algorithm for sampling the probabilistic partitions of Calinescu, Karloff, and Rabani in sparse graphs.<|reference_end|> | arxiv | @article{mendel2008fast,
title={Fast C-K-R Partitions of Sparse Graphs},
author={Manor Mendel, Chaya Schwob},
journal={Chicago J. Theoretical Comp. Sci., 2009(2), 2009},
year={2008},
archivePrefix={arXiv},
eprint={0809.1902},
primaryClass={cs.DS}
} | mendel2008fast |
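For reference, one common form of the Calinescu-Karloff-Rabani partition being sampled can be stated directly. The sketch below is the naive sampler (one BFS per center, so quadratic work overall); the paper's point is precisely to avoid that cost in sparse graphs:

```python
import random
from collections import deque

def bfs_distances(adj, src):
    """Unweighted shortest-path distances from src (graph as adjacency dict)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def ckr_partition(adj, delta, rng=random):
    """One sample of a CKR-style partition: draw a radius r uniformly from
    [delta/4, delta/2] and a random order of the vertices; every vertex joins
    the first vertex in that order whose ball of radius r covers it.
    Cluster diameters are at most delta."""
    nodes = list(adj)
    r = rng.uniform(delta / 4.0, delta / 2.0)
    order = nodes[:]
    rng.shuffle(order)
    cluster = {}
    for center in order:
        dist = bfs_distances(adj, center)
        for v in nodes:
            if v not in cluster and dist.get(v, float("inf")) <= r:
                cluster[v] = center
    return cluster
```

The parameters follow the usual presentation of the partition; the fast algorithms in the paper produce the same distribution without a full BFS per center.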
arxiv-4824 | 0809.1906 | Betweenness Centrality : Algorithms and Lower Bounds | <|reference_start|>Betweenness Centrality : Algorithms and Lower Bounds: One of the most fundamental problems in large scale network analysis is to determine the importance of a particular node in a network. Betweenness centrality is the most widely used metric to measure the importance of a node in a network. In this paper, we present a randomized parallel algorithm and an algebraic method for computing betweenness centrality of all nodes in a network. We prove that any path-comparison based algorithm cannot compute betweenness in less than O(nm) time.<|reference_end|> | arxiv | @article{kintali2008betweenness,
title={Betweenness Centrality : Algorithms and Lower Bounds},
author={Shiva Kintali},
journal={arXiv preprint arXiv:0809.1906},
year={2008},
archivePrefix={arXiv},
eprint={0809.1906},
primaryClass={cs.DS}
} | kintali2008betweenness |
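The O(nm) upper bound that the entry's path-comparison lower bound matches is achieved by Brandes' algorithm. A compact version for unweighted graphs; ordered pairs are counted separately, so undirected scores come out doubled:

```python
from collections import deque

def betweenness_centrality(adj):
    """Brandes' algorithm for unweighted graphs, O(nm) time.  Returns the
    unnormalized betweenness of every vertex."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        dist = {s: 0}
        sigma = {v: 0.0 for v in adj}   # number of shortest s-v paths
        sigma[s] = 1.0
        preds = {v: [] for v in adj}
        order = []
        queue = deque([s])
        while queue:                    # BFS, counting shortest paths
            u = queue.popleft()
            order.append(u)
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
                if dist[w] == dist[u] + 1:
                    sigma[w] += sigma[u]
                    preds[w].append(u)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):       # back-propagate dependencies
            for u in preds[w]:
                delta[u] += sigma[u] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

On a path 0-1-2, vertex 1 lies on the shortest path of the pairs (0,2) and (2,0), so its unnormalized score is 2.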
arxiv-4825 | 0809.1910 | Reliable Communications with Asymmetric Codebooks: An Information Theoretic Analysis of Robust Signal Hashing | <|reference_start|>Reliable Communications with Asymmetric Codebooks: An Information Theoretic Analysis of Robust Signal Hashing: In this paper, a generalization of the traditional point-to-point to communication setup, which is named as "reliable communications with asymmetric codebooks", is proposed. Under the assumption of independent identically distributed (i.i.d) encoder codewords, it is proven that the operational capacity of the system is equal to the information capacity of the system, which is given by $\max_{p(x)} I(U;Y)$, where $X, U$ and $Y$ denote the individual random elements of encoder codewords, decoder codewords and decoder inputs. The capacity result is derived in the "binary symmetric" case (which is an analogous formulation of the traditional "binary symmetric channel" for our case), as a function of the system parameters. A conceptually insightful inference is made by attributing the difference from the classical Shannon-type capacity of binary symmetric channel to the {\em gap} due to the codebook asymmetry.<|reference_end|> | arxiv | @article{altug2008reliable,
title={Reliable Communications with Asymmetric Codebooks: An Information
Theoretic Analysis of Robust Signal Hashing},
author={Yucel Altug, M. Kivanc Mihcak, Onur Ozyesil, Vishal Monga},
journal={arXiv preprint arXiv:0809.1910},
year={2008},
archivePrefix={arXiv},
eprint={0809.1910},
primaryClass={cs.IT math.IT}
} | altug2008reliable |
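The capacity expression max_{p(x)} I(U;Y) above reduces, for any fixed input law, to evaluating a discrete mutual information. A generic helper for that inner quantity (the paper's specific channel model relating X, U and Y is not reproduced here):

```python
from math import log2

def mutual_information(joint):
    """I(U;Y) in bits, with the joint pmf given as a 2-D list p[u][y]."""
    pu = [sum(row) for row in joint]        # marginal of U
    py = [sum(col) for col in zip(*joint)]  # marginal of Y
    mi = 0.0
    for u, row in enumerate(joint):
        for y, p in enumerate(row):
            if p > 0:
                mi += p * log2(p / (pu[u] * py[y]))
    return mi
```

As sanity checks: an independent pair has I(U;Y) = 0, and a uniform bit observed noiselessly has I(U;Y) = 1 bit.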
arxiv-4826 | 0809.1916 | Randomized Distributed Configuration Management of Wireless Networks: Multi-layer Markov Random Fields and Near-Optimality | <|reference_start|>Randomized Distributed Configuration Management of Wireless Networks: Multi-layer Markov Random Fields and Near-Optimality: Distributed configuration management is imperative for wireless infrastructureless networks where each node adjusts locally its physical and logical configuration through information exchange with neighbors. Two issues remain open. The first is the optimality. The second is the complexity. We study these issues through modeling, analysis, and randomized distributed algorithms. Modeling defines the optimality. We first derive a global probabilistic model for a network configuration which characterizes jointly the statistical spatial dependence of a physical- and a logical-configuration. We then show that a local model which approximates the global model is a two-layer Markov Random Field or a random bond model. The complexity of the local model is the communication range among nodes. The local model is near-optimal when the approximation error to the global model is within a given error bound. We analyze the trade-off between an approximation error and complexity, and derive sufficient conditions on the near-optimality of the local model. We validate the model, the analysis and the randomized distributed algorithms also through simulation.<|reference_end|> | arxiv | @article{jeon2008randomized,
title={Randomized Distributed Configuration Management of Wireless Networks:
Multi-layer Markov Random Fields and Near-Optimality},
  author={Sung-eok Jeon and Chuanyi Ji},
journal={arXiv preprint arXiv:0809.1916},
year={2008},
archivePrefix={arXiv},
eprint={0809.1916},
primaryClass={cs.DC cs.AI}
} | jeon2008randomized |
arxiv-4827 | 0809.1949 | Protocol Channels | <|reference_start|>Covert channel techniques are used by attackers to transfer data in a way prohibited by the security policy. There are two main categories of covert channels: timing channels and storage channels. This paper introduces a new storage channel technique called a protocol channel. A protocol channel switches one of at least two protocols to send a bit combination to a destination. The main goal of a protocol channel is that packets containing covert information look identical to all other packets within a network, which makes a protocol channel hard to detect.<|reference_end|> | arxiv | @article{wendzel2008protocol,
title={Protocol Channels},
author={Steffen Wendzel},
journal={arXiv preprint arXiv:0809.1949},
year={2008},
archivePrefix={arXiv},
eprint={0809.1949},
primaryClass={cs.CR}
} | wendzel2008protocol |
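The mechanism is easy to simulate end to end, since the covert bits live only in *which* protocol each packet uses. A toy sketch; the protocol names are arbitrary placeholders, and a real channel must of course emit well-formed packets of those protocols:

```python
import math

def pc_encode(bits, protocols):
    """Encode bits purely as a sequence of protocol choices: with 2^k
    protocols available, each packet carries k covert bits."""
    k = int(math.log2(len(protocols)))
    assert 2 ** k == len(protocols) and len(bits) % k == 0
    packets = []
    for i in range(0, len(bits), k):
        idx = int("".join(str(b) for b in bits[i:i + k]), 2)
        packets.append(protocols[idx])
    return packets

def pc_decode(packets, protocols):
    """Recover the covert bits by observing the protocol of each packet."""
    k = int(math.log2(len(protocols)))
    bits = []
    for proto in packets:
        bits.extend(int(c) for c in format(protocols.index(proto), "0%db" % k))
    return bits
```

With two protocols one bit is sent per packet; with four, two bits, and so on.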
arxiv-4828 | 0809.1963 | Materialized View Selection by Query Clustering in XML Data Warehouses | <|reference_start|>XML data warehouses form an interesting basis for decision-support applications that exploit complex data. However, native XML database management systems currently offer limited performance, and it is necessary to design strategies to optimize them. In this paper, we propose an automatic strategy for the selection of XML materialized views that exploits a data mining technique, more precisely the clustering of the query workload. To validate our strategy, we implemented an XML warehouse modeled along the XCube specifications. We executed a workload of XQuery decision-support queries on this warehouse, with and without using our strategy. Our experimental results demonstrate its efficiency, even when queries are complex.<|reference_end|> | arxiv | @article{mahboubi2008materialized,
title={Materialized View Selection by Query Clustering in XML Data Warehouses},
  author={Hadj Mahboubi (ERIC), Kamel Aouiche (ERIC), J\'er\^ome Darmont (ERIC)},
journal={4th International Multiconference on Computer Science and
  Information Technology (CSIT 06), Amman, Jordan (2006)},
year={2008},
archivePrefix={arXiv},
eprint={0809.1963},
primaryClass={cs.DB}
} | mahboubi2008materialized |
arxiv-4829 | 0809.1965 | Dynamic index selection in data warehouses | <|reference_start|>Dynamic index selection in data warehouses: Analytical queries defined on data warehouses are complex and use several join operations that are very costly, especially when run on very large data volumes. To improve response times, data warehouse administrators casually use indexing techniques. This task is nevertheless complex and fastidious. In this paper, we present an automatic, dynamic index selection method for data warehouses that is based on incremental frequent itemset mining from a given query workload. The main advantage of this approach is that it helps update the set of selected indexes when workload evolves instead of recreating it from scratch. Preliminary experimental results illustrate the efficiency of this approach, both in terms of performance enhancement and overhead.<|reference_end|> | arxiv | @article{azefack2008dynamic,
title={Dynamic index selection in data warehouses},
  author={St\'ephane Azefack (ERIC), Kamel Aouiche (ERIC), J\'er\^ome Darmont
(ERIC)},
journal={4th International Conference on Innovations in Information
  Technology (Innovations 07), Dubai, United Arab Emirates (2006)},
year={2008},
archivePrefix={arXiv},
eprint={0809.1965},
primaryClass={cs.DB}
} | azefack2008dynamic |
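The frequent-itemset step underlying the strategy above can be sketched with plain Apriori over the attribute sets referenced by workload queries; note the paper's algorithm is additionally *incremental*, which this sketch is not:

```python
def frequent_itemsets(transactions, min_support):
    """Plain Apriori: every attribute set contained in at least min_support
    transactions (here: queries referencing those attributes), with its support."""
    transactions = [frozenset(t) for t in transactions]
    result = {}
    level = [frozenset([i]) for i in {i for t in transactions for i in t}]
    while level:
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        frequent = [c for c, n in counts.items() if n >= min_support]
        result.update((c, counts[c]) for c in frequent)
        # Candidate generation: join frequent k-sets into (k+1)-sets.
        level = list({a | b for a in frequent for b in frequent
                      if len(a | b) == len(a) + 1})
    return result

# Hypothetical workload: the attributes each query touches.
queries = [{"year", "store"}, {"year", "store", "product"},
           {"year"}, {"store", "product"}]
indexes = frequent_itemsets(queries, min_support=2)
```

Each frequent attribute set is then a candidate index; the paper's contribution is maintaining this candidate set as the workload evolves rather than recomputing it from scratch.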
arxiv-4830 | 0809.1971 | Knowledge and Metadata Integration for Warehousing Complex Data | <|reference_start|>Knowledge and Metadata Integration for Warehousing Complex Data: With the ever-growing availability of so-called complex data, especially on the Web, decision-support systems such as data warehouses must store and process data that are not only numerical or symbolic. Warehousing and analyzing such data requires the joint exploitation of metadata and domain-related knowledge, which must thereby be integrated. In this paper, we survey the types of knowledge and metadata that are needed for managing complex data, discuss the issue of knowledge and metadata integration, and propose a CWM-compliant integration solution that we incorporate into an XML complex data warehousing framework we previously designed.<|reference_end|> | arxiv | @article{ralaivao2008knowledge,
title={Knowledge and Metadata Integration for Warehousing Complex Data},
  author={Jean-Christian Ralaivao (ERIC), J\'er\^ome Darmont (ERIC)},
journal={arXiv preprint arXiv:0809.1971},
year={2008},
archivePrefix={arXiv},
eprint={0809.1971},
primaryClass={cs.DB}
} | ralaivao2008knowledge |
arxiv-4831 | 0809.1981 | A Join Index for XML Data Warehouses | <|reference_start|>XML data warehouses form an interesting basis for decision-support applications that exploit complex data. However, native-XML database management systems (DBMSs) currently offer limited performance, and it is necessary to find ways to optimize them. In this paper, we propose a new join index that is specifically adapted to the multidimensional architecture of XML warehouses. It eliminates join operations while preserving the information contained in the original warehouse. A theoretical study and experimental results demonstrate the efficiency of our join index. They also show that native XML DBMSs can compete with XML-compatible, relational DBMSs when warehousing and analyzing XML data.<|reference_end|> | arxiv | @article{mahboubi2008a,
title={A Join Index for XML Data Warehouses},
  author={Hadj Mahboubi (ERIC), Kamel Aouiche (ERIC), J\'er\^ome Darmont (ERIC)},
journal={arXiv preprint arXiv:0809.1981},
year={2008},
archivePrefix={arXiv},
eprint={0809.1981},
primaryClass={cs.DB}
} | mahboubi2008a |
arxiv-4832 | 0809.1989 | Distributing Labels on Infinite Trees | <|reference_start|>Distributing Labels on Infinite Trees: Sturmian words are infinite binary words with many equivalent definitions: They have a minimal factor complexity among all aperiodic sequences; they are balanced sequences (the labels 0 and 1 are as evenly distributed as possible) and they can be constructed using a mechanical definition. All this properties make them good candidates for being extremal points in scheduling problems over two processors. In this paper, we consider the problem of generalizing Sturmian words to trees. The problem is to evenly distribute labels 0 and 1 over infinite trees. We show that (strongly) balanced trees exist and can also be constructed using a mechanical process as long as the tree is irrational. Such trees also have a minimal factor complexity. Therefore they bring the hope that extremal scheduling properties of Sturmian words can be extended to such trees, as least partially. Such possible extensions are illustrated by one such example.<|reference_end|> | arxiv | @article{gast2008distributing,
title={Distributing Labels on Infinite Trees},
author={Nicolas Gast and Bruno Gaujal},
journal={arXiv preprint arXiv:0809.1989},
year={2008},
archivePrefix={arXiv},
eprint={0809.1989},
primaryClass={cs.DM}
} | gast2008distributing |
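For the one-dimensional case that the entry above generalizes, both the mechanical definition and the balance property are a few lines each. A sketch:

```python
import math

def mechanical_word(alpha, n):
    """First n letters of the lower mechanical word of slope alpha:
    s(k) = floor((k+1)*alpha) - floor(k*alpha).  Irrational alpha yields
    a Sturmian word."""
    return [math.floor((k + 1) * alpha) - math.floor(k * alpha)
            for k in range(n)]

def is_balanced(word):
    """Balance: any two factors of equal length contain numbers of 1s
    differing by at most one."""
    n = len(word)
    for length in range(1, n + 1):
        weights = {sum(word[i:i + length]) for i in range(n - length + 1)}
        if max(weights) - min(weights) > 1:
            return False
    return True
```

The labels 0 and 1 are as evenly distributed as possible: the sum of the first n letters telescopes to floor(n*alpha), so the density of 1s is exactly the slope.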
arxiv-4833 | 0809.2032 | On consistency of determinants on cubic lattices | <|reference_start|>On consistency of determinants on cubic lattices: We propose a modified condition of consistency on cubic lattices for some special classes of two-dimensional discrete equations and prove that the discrete nonlinear equations defined by determinants of matrices of orders N > 2 are consistent on cubic lattices in this sense.<|reference_end|> | arxiv | @article{mokhov2008on,
title={On consistency of determinants on cubic lattices},
author={O.I. Mokhov},
journal={arXiv preprint arXiv:0809.2032},
year={2008},
archivePrefix={arXiv},
eprint={0809.2032},
primaryClass={nlin.SI cs.DM math.CO}
} | mokhov2008on |
arxiv-4834 | 0809.2061 | Weyl's Predicative Classical Mathematics as a Logic-Enriched Type Theory | <|reference_start|>Weyl's Predicative Classical Mathematics as a Logic-Enriched Type Theory: We construct a logic-enriched type theory LTTW that corresponds closely to the predicative system of foundations presented by Hermann Weyl in Das Kontinuum. We formalise many results from that book in LTTW, including Weyl's definition of the cardinality of a set and several results from real analysis, using the proof assistant Plastic that implements the logical framework LF. This case study shows how type theory can be used to represent a non-constructive foundation for mathematics.<|reference_end|> | arxiv | @article{adams2008weyl's,
title={Weyl's Predicative Classical Mathematics as a Logic-Enriched Type Theory},
author={Robin Adams and Zhaohui Luo},
journal={ACM TOCL 11(2), 2010},
year={2008},
doi={10.1145/1656242.1656246},
archivePrefix={arXiv},
eprint={0809.2061},
primaryClass={cs.LO}
} | adams2008weyl's |
arxiv-4835 | 0809.2075 | Low congestion online routing and an improved mistake bound for online prediction of graph labeling | <|reference_start|>Low congestion online routing and an improved mistake bound for online prediction of graph labeling: In this paper, we show a connection between a certain online low-congestion routing problem and an online prediction of graph labeling. More specifically, we prove that if there exists a routing scheme that guarantees a congestion of $\alpha$ on any edge, there exists an online prediction algorithm with mistake bound $\alpha$ times the cut size, which is the size of the cut induced by the label partitioning of graph vertices. With previous known bound of $O(\log n)$ for $\alpha$ for the routing problem on trees with $n$ vertices, we obtain an improved prediction algorithm for graphs with high effective resistance. In contrast to previous approaches that move the graph problem into problems in vector space using graph Laplacian and rely on the analysis of the perceptron algorithm, our proof are purely combinatorial. Further more, our approach directly generalizes to the case where labels are not binary.<|reference_end|> | arxiv | @article{fakcharoenphol2008low,
title={Low congestion online routing and an improved mistake bound for online
prediction of graph labeling},
author={Jittat Fakcharoenphol, Boonserm Kijsirikul},
journal={arXiv preprint arXiv:0809.2075},
year={2008},
archivePrefix={arXiv},
eprint={0809.2075},
primaryClass={cs.DS cs.DM cs.LG}
} | fakcharoenphol2008low |
arxiv-4836 | 0809.2083 | How to Integrate a Polynomial over a Simplex | <|reference_start|>How to Integrate a Polynomial over a Simplex: This paper settles the computational complexity of the problem of integrating a polynomial function f over a rational simplex. We prove that the problem is NP-hard for arbitrary polynomials via a generalization of a theorem of Motzkin and Straus. On the other hand, if the polynomial depends only on a fixed number of variables, while its degree and the dimension of the simplex are allowed to vary, we prove that integration can be done in polynomial time. As a consequence, for polynomials of fixed total degree, there is a polynomial time algorithm as well. We conclude the article with extensions to other polytopes, discussion of other available methods and experimental results.<|reference_end|> | arxiv | @article{baldoni2008how,
title={How to Integrate a Polynomial over a Simplex},
author={Velleda Baldoni, Nicole Berline (CMLS-EcolePolytechnique), Jesus De
  Loera, Matthias K\"oppe, Mich\`ele Vergne (CMLS-EcolePolytechnique)},
journal={Mathematics of Computation 80, 273 (2011) 297-325},
year={2008},
doi={10.1090/S0025-5718-2010-02378-6},
archivePrefix={arXiv},
eprint={0809.2083},
primaryClass={math.MG cs.CC cs.SC}
} | baldoni2008how |
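The classical base case behind polynomial-time integration over a simplex is the exact formula for a monomial over the standard simplex {x_i >= 0, sum x_i <= 1} in R^n: the integral of x1^a1 * ... * xn^an equals (a1! * ... * an!) / (a1 + ... + an + n)!. A sketch in exact rational arithmetic (the paper's contribution is the complexity analysis over arbitrary rational simplices and degrees, not this formula):

```python
from fractions import Fraction
from math import factorial

def integrate_monomial_over_simplex(exponents):
    """Exact value of the integral of prod(x_i ** a_i) over the standard
    simplex in R^n: (prod a_i!) / (sum a_i + n)!"""
    num = 1
    for a in exponents:
        num *= factorial(a)
    return Fraction(num, factorial(sum(exponents) + len(exponents)))

def integrate_polynomial_over_simplex(terms):
    """terms: iterable of (coefficient, exponent-tuple) pairs."""
    return sum((Fraction(c) * integrate_monomial_over_simplex(e)
                for c, e in terms), Fraction(0))
```

Setting all exponents to zero recovers the volume 1/n! of the standard simplex.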
arxiv-4837 | 0809.2085 | Clustered Multi-Task Learning: A Convex Formulation | <|reference_start|>Clustered Multi-Task Learning: A Convex Formulation: In multi-task learning several related tasks are considered simultaneously, with the hope that by an appropriate sharing of information across tasks, each task may benefit from the others. In the context of learning linear functions for supervised classification or regression, this can be achieved by including a priori information about the weight vectors associated with the tasks, and how they are expected to be related to each other. In this paper, we assume that tasks are clustered into groups, which are unknown beforehand, and that tasks within a group have similar weight vectors. We design a new spectral norm that encodes this a priori assumption, without the prior knowledge of the partition of tasks into groups, resulting in a new convex optimization formulation for multi-task learning. We show in simulations on synthetic examples and on the IEDB MHC-I binding dataset, that our approach outperforms well-known convex methods for multi-task learning, as well as related non convex methods dedicated to the same problem.<|reference_end|> | arxiv | @article{jacob2008clustered,
title={Clustered Multi-Task Learning: A Convex Formulation},
author={Laurent Jacob, Francis Bach (INRIA Rocquencourt), Jean-Philippe Vert},
journal={arXiv preprint arXiv:0809.2085},
year={2008},
archivePrefix={arXiv},
eprint={0809.2085},
primaryClass={cs.LG}
} | jacob2008clustered |
arxiv-4838 | 0809.2093 | An approximation algorithm for approximation rank | <|reference_start|>An approximation algorithm for approximation rank: One of the strongest techniques available for showing lower bounds on quantum communication complexity is the logarithm of the approximation rank of the communication matrix--the minimum rank of a matrix which is entrywise close to the communication matrix. This technique has two main drawbacks: it is difficult to compute, and it is not known to lower bound quantum communication complexity with entanglement. Linial and Shraibman recently introduced a norm, called gamma_2^{alpha}, to quantum communication complexity, showing that it can be used to lower bound communication with entanglement. Here the parameter alpha is a measure of approximation which is related to the allowable error probability of the protocol. This bound can be written as a semidefinite program and gives bounds at least as large as many techniques in the literature, although it is smaller than the corresponding alpha-approximation rank, rk_alpha. We show that in fact log gamma_2^{alpha}(A)$ and log rk_{alpha}(A)$ agree up to small factors. As corollaries we obtain a constant factor polynomial time approximation algorithm to the logarithm of approximate rank, and that the logarithm of approximation rank is a lower bound for quantum communication complexity with entanglement.<|reference_end|> | arxiv | @article{lee2008an,
title={An approximation algorithm for approximation rank},
author={Troy Lee, Adi Shraibman},
journal={arXiv preprint arXiv:0809.2093},
year={2008},
archivePrefix={arXiv},
eprint={0809.2093},
primaryClass={cs.CC}
} | lee2008an |
arxiv-4839 | 0809.2097 | Algorithms for Locating Constrained Optimal Intervals | <|reference_start|>Algorithms for Locating Constrained Optimal Intervals: In this work, we obtain the following new results. 1. Given a sequence $D=((h_1,s_1), (h_2,s_2) ..., (h_n,s_n))$ of number pairs, where $s_i>0$ for all $i$, and a number $L_h$, we propose an O(n)-time algorithm for finding an index interval $[i,j]$ that maximizes $\frac{\sum_{k=i}^{j} h_k}{\sum_{k=i}^{j} s_k}$ subject to $\sum_{k=i}^{j} h_k \geq L_h$. 2. Given a sequence $D=((h_1,s_1), (h_2,s_2) ..., (h_n,s_n))$ of number pairs, where $s_i=1$ for all $i$, and an integer $L_s$ with $1\leq L_s\leq n$, we propose an $O(n\frac{T(L_s^{1/2})}{L_s^{1/2}})$-time algorithm for finding an index interval $[i,j]$ that maximizes $\frac{\sum_{k=i}^{j} h_k}{\sqrt{\sum_{k=i}^{j} s_k}}$ subject to $\sum_{k=i}^{j} s_k \geq L_s$, where $T(n')$ is the time required to solve the all-pairs shortest paths problem on a graph of $n'$ nodes. By the latest result of Chan \cite{Chan}, $T(n')=O(n'^3 \frac{(\log\log n')^3}{(\log n')^2})$, so our algorithm runs in subquadratic time $O(nL_s\frac{(\log\log L_s)^3}{(\log L_s)^2})$.<|reference_end|> | arxiv | @article{liu2008algorithms,
title={Algorithms for Locating Constrained Optimal Intervals},
author={Hsiao-Fei Liu, Peng-An Chen, and Kun-Mao Chao},
journal={arXiv preprint arXiv:0809.2097},
year={2008},
archivePrefix={arXiv},
eprint={0809.2097},
primaryClass={cs.DS}
} | liu2008algorithms |
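A brute-force O(n^2) reference for the first problem above, useful only for cross-checking a fast implementation on small inputs (the paper's result is an O(n) algorithm for the same objective):

```python
def max_density_interval(pairs, L_h):
    """Maximize sum(h) / sum(s) over index intervals [i, j] subject to
    sum(h) >= L_h, by exhaustive search.  pairs: list of (h, s) with s > 0."""
    best, best_ratio = None, None
    for i in range(len(pairs)):
        h_sum = s_sum = 0.0
        for j in range(i, len(pairs)):
            h_sum += pairs[j][0]
            s_sum += pairs[j][1]
            if h_sum >= L_h:
                ratio = h_sum / s_sum
                if best_ratio is None or ratio > best_ratio:
                    best, best_ratio = (i, j), ratio
    return best, best_ratio
```

If no interval meets the lower bound L_h, the search reports (None, None).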
arxiv-4840 | 0809.2136 | The Potluck Problem | <|reference_start|>The Potluck Problem: This paper proposes the Potluck Problem as a model for the behavior of independent producers and consumers under standard economic assumptions, as a problem of resource allocation in a multi-agent system in which there is no explicit communication among the agents.<|reference_end|> | arxiv | @article{enumula2008the,
title={The Potluck Problem},
author={Prabodh K. Enumula, Shrisha Rao},
journal={Economics Letters 107 (1), pp. 10--12, April 2010},
year={2008},
doi={10.1016/j.econlet.2009.12.011},
archivePrefix={arXiv},
eprint={0809.2136},
primaryClass={cs.GT cs.MA}
} | enumula2008the |
arxiv-4841 | 0809.2147 | Investigation on Multiuser Diversity in Spectrum Sharing Based Cognitive Radio Networks | <|reference_start|>Investigation on Multiuser Diversity in Spectrum Sharing Based Cognitive Radio Networks: A new form of multiuser diversity, named \emph{multiuser interference diversity}, is investigated for opportunistic communications in cognitive radio (CR) networks by exploiting the mutual interference between the CR and the existing primary radio (PR) links. The multiuser diversity gain and ergodic throughput are analyzed for different types of CR networks and compared against those in the conventional networks without the PR link.<|reference_end|> | arxiv | @article{zhang2008investigation,
title={Investigation on Multiuser Diversity in Spectrum Sharing Based Cognitive
Radio Networks},
author={Rui Zhang and Ying-Chang Liang},
journal={arXiv preprint arXiv:0809.2147},
year={2008},
archivePrefix={arXiv},
eprint={0809.2147},
primaryClass={cs.IT math.IT}
} | zhang2008investigation |
arxiv-4842 | 0809.2148 | Cognitive Beamforming Made Practical: Effective Interference Channel and Learning-Throughput Tradeoff | <|reference_start|>Cognitive Beamforming Made Practical: Effective Interference Channel and Learning-Throughput Tradeoff: This paper studies the transmit strategy for a secondary link or the so-called cognitive radio (CR) link under opportunistic spectrum sharing with an existing primary radio (PR) link. It is assumed that the CR transmitter is equipped with multi-antennas, whereby transmit precoding and power control can be jointly deployed to balance between avoiding interference at the PR terminals and optimizing performance of the CR link. This operation is named as cognitive beamforming (CB). Unlike prior study on CB that assumes perfect knowledge of the channels over which the CR transmitter interferes with the PR terminals, this paper proposes a practical CB scheme utilizing a new idea of effective interference channel (EIC), which can be efficiently estimated at the CR transmitter from its observed PR signals. Somehow surprisingly, this paper shows that the learning-based CB scheme with the EIC improves the CR channel capacity against the conventional scheme even with the exact CR-to-PR channel knowledge, when the PR link is equipped with multi-antennas but only communicates over a subspace of the total available spatial dimensions. Moreover, this paper presents algorithms for the CR to estimate the EIC over a finite learning time. Due to channel estimation errors, the proposed CB scheme causes leakage interference at the PR terminals, which leads to an interesting learning-throughput tradeoff phenomenon for the CR, pertinent to its time allocation between channel learning and data transmission. 
This paper derives the optimal channel learning time to maximize the effective throughput of the CR link, subject to the CR transmit power constraint and the interference power constraints for the PR terminals.<|reference_end|> | arxiv | @article{zhang2008cognitive,
title={Cognitive Beamforming Made Practical: Effective Interference Channel and
Learning-Throughput Tradeoff},
author={Rui Zhang, Feifei Gao, and Ying-Chang Liang},
journal={arXiv preprint arXiv:0809.2148},
year={2008},
archivePrefix={arXiv},
eprint={0809.2148},
primaryClass={cs.IT math.IT}
} | zhang2008cognitive |
arxiv-4843 | 0809.2152 | Informed Network Coding for Minimum Decoding Delay | <|reference_start|>Informed Network Coding for Minimum Decoding Delay: Network coding is a highly efficient data dissemination mechanism for wireless networks. Since network coded information can only be recovered after delivering a sufficient number of coded packets, the resulting decoding delay can become problematic for delay-sensitive applications such as real-time media streaming. Motivated by this observation, we consider several algorithms that minimize the decoding delay and analyze their performance by means of simulation. The algorithms differ both in the required information about the state of the neighbors' buffers and in the way this knowledge is used to decide which packets to combine through coding operations. Our results show that a greedy algorithm, whose encodings maximize the number of nodes at which a coded packet is immediately decodable significantly outperforms existing network coding protocols.<|reference_end|> | arxiv | @article{costa2008informed,
title={Informed Network Coding for Minimum Decoding Delay},
author={Rui A. Costa, Daniele Munaretto, Joerg Widmer, Joao Barros},
journal={arXiv preprint arXiv:0809.2152},
year={2008},
doi={10.1109/MAHSS.2008.4660042},
archivePrefix={arXiv},
eprint={0809.2152},
primaryClass={cs.IT math.IT}
} | costa2008informed |
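The greedy choice at the heart of the entry above — pick the coded combination that is immediately decodable at the most neighbors — can be sketched with brute force over XOR subsets, assuming the sender knows each neighbor's buffer contents:

```python
from itertools import combinations

def best_xor_combination(packets, neighbor_has):
    """Pick the subset of packets (to be XORed together) that is immediately
    decodable at the largest number of neighbors.  A neighbor benefits from a
    combination iff it already holds all but exactly one of the packets.
    Brute force over subsets -- fine for small per-hop buffers."""
    best_combo, best_count = None, -1
    for r in range(1, len(packets) + 1):
        for combo in combinations(packets, r):
            missing = (len(set(combo) - has) for has in neighbor_has)
            count = sum(1 for m in missing if m == 1)
            if count > best_count:
                best_combo, best_count = set(combo), count
    return best_combo, best_count
```

With buffers {a,c}, {b,c} and {a,b}, no single packet helps more than one neighbor, but the XOR of all three packets is immediately decodable at all three: coding beats plain forwarding.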
arxiv-4844 | 0809.2168 | Fairness in Combinatorial Auctioning Systems | <|reference_start|>The Combinatorial Auctioning System (CAS) is a multi-agent system widely used by government agencies, buyers, and sellers in a market economy to attain optimized resource allocation. We study another important aspect of resource allocation in CAS, namely fairness. We present two important notions of fairness in CAS, extended fairness and basic fairness. We give an algorithm that works by incorporating a metric to ensure fairness in a CAS that uses the Vickrey-Clarke-Groves (VCG) mechanism, and uses an algorithm of Sandholm to achieve optimality. Mathematical formulations are given to represent measures of extended fairness and basic fairness.<|reference_end|> | arxiv | @article{saini2008fairness,
title={Fairness in Combinatorial Auctioning Systems},
author={Megha Saini, Shrisha Rao},
journal={arXiv preprint arXiv:0809.2168},
year={2008},
archivePrefix={arXiv},
eprint={0809.2168},
primaryClass={cs.GT cs.MA}
} | saini2008fairness |
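The VCG mechanism named in the abstract above, in brute force for a toy two-item combinatorial auction (the bundle valuations are invented): each winner pays the welfare loss its presence imposes on the other bidders.

```python
from itertools import product

def vcg(items, bidders):
    """Brute-force VCG for a tiny combinatorial auction.
    bidders: {name: {frozenset(bundle): value}}; unlisted bundles are worth 0.
    Returns the welfare-maximizing allocation and each bidder's VCG payment."""
    def value(name, bundle):
        return bidders[name].get(frozenset(bundle), 0)

    def best_allocation(active):
        best, best_welfare = None, -1
        # Try every assignment of each item to an active bidder or to nobody.
        for assign in product(list(active) + [None], repeat=len(items)):
            bundles = {n: {it for it, a in zip(items, assign) if a == n}
                       for n in active}
            welfare = sum(value(n, b) for n, b in bundles.items())
            if welfare > best_welfare:
                best, best_welfare = bundles, welfare
        return best, best_welfare

    alloc, total = best_allocation(list(bidders))
    payments = {}
    for n in bidders:
        _, welfare_without = best_allocation([m for m in bidders if m != n])
        payments[n] = welfare_without - (total - value(n, alloc[n]))
    return alloc, payments

items = ["A", "B"]
bidders = {"b1": {frozenset({"A", "B"}): 10},              # wants only the pair
           "b2": {frozenset({"A"}): 6, frozenset({"B"}): 6}}
alloc, pay = vcg(items, bidders)
```

Here b1 wins both items and pays 6, the welfare the others would have obtained without b1; the losing bidder pays nothing.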
arxiv-4845 | 0809.2214 | On (Omega-)Regular Model Checking | <|reference_start|>On (Omega-)Regular Model Checking: Checking infinite-state systems is frequently done by encoding infinite sets of states as regular languages. Computing such a regular representation of, say, the set of reachable states of a system requires acceleration techniques that can finitely compute the effect of an unbounded number of transitions. Among the acceleration techniques that have been proposed, one finds both specific and generic techniques. Specific techniques exploit the particular type of system being analyzed, e.g. a system manipulating queues or integers, whereas generic techniques only assume that the transition relation is represented by a finite-state transducer, which has to be iterated. In this paper, we investigate the possibility of using generic techniques in cases where only specific techniques have been exploited so far. Finding that existing generic techniques are often not applicable in cases easily handled by specific techniques, we have developed a new approach to iterating transducers. This new approach builds on earlier work, but exploits a number of new conceptual and algorithmic ideas, often induced with the help of experiments, that give it a broad scope, as well as good performances.<|reference_end|> | arxiv | @article{legay2008on,
title={On (Omega-)Regular Model Checking},
author={Axel Legay and Pierre Wolper},
journal={arXiv preprint arXiv:0809.2214},
year={2008},
archivePrefix={arXiv},
eprint={0809.2214},
primaryClass={cs.LO}
} | legay2008on |
arxiv-4846 | 0809.2226 | Relay vs User Cooperation in Time-Duplexed Multiaccess Networks | <|reference_start|>Relay vs User Cooperation in Time-Duplexed Multiaccess Networks: The performance of user-cooperation in a multi-access network is compared to that of using a wireless relay. Using the total transmit and processing power consumed at all nodes as a cost metric, the outage probabilities achieved by dynamic decode-and-forward (DDF) and amplify-and-forward (AF) are compared for the two networks. A geometry-inclusive high signal-to-noise ratio (SNR) outage analysis in conjunction with area-averaged numerical simulations shows that user and relay cooperation achieve a maximum diversity of K and 2 respectively for a K-user multiaccess network under both DDF and AF. However, when accounting for energy costs of processing and communication, relay cooperation can be more energy efficient than user cooperation, i.e., relay cooperation achieves coding (SNR) gains, particularly in the low SNR regime, that override the diversity advantage of user cooperation.<|reference_end|> | arxiv | @article{sankar2008relay,
title={Relay vs. User Cooperation in Time-Duplexed Multiaccess Networks},
author={Lalitha Sankar, Gerhard Kramer, Narayan B. Mandayam},
journal={arXiv preprint arXiv:0809.2226},
year={2008},
doi={10.4304/jcm.6.4.330-339},
archivePrefix={arXiv},
eprint={0809.2226},
primaryClass={cs.IT math.IT}
} | sankar2008relay |
arxiv-4847 | 0809.2315 | On the Construction of Skew Quasi-Cyclic Codes | <|reference_start|>On the Construction of Skew Quasi-Cyclic Codes: In this paper we study a special type of quasi-cyclic (QC) codes called skew QC codes. This set of codes is constructed using a non-commutative ring called the skew polynomial rings $F[x;\theta ]$. After a brief description of the skew polynomial ring $F[x;\theta ]$ it is shown that skew QC codes are left submodules of the ring $R_{s}^{l}=(F[x;\theta ]/(x^{s}-1))^{l}.$ The notions of generator and parity-check polynomials are given. We also introduce the notion of similar polynomials in the ring $F[x;\theta ]$ and show that parity-check polynomials for skew QC codes are unique up to similarity. Our search results lead to the construction of several new codes with Hamming distances exceeding the Hamming distances of the previously best known linear codes with comparable parameters.<|reference_end|> | arxiv | @article{abualrub2008on,
title={On the Construction of Skew Quasi-Cyclic Codes},
author={Taher Abualrub, Ali Ghrayeb, Nuh Aydin, and Irfan Siap},
journal={arXiv preprint arXiv:0809.2315},
year={2008},
archivePrefix={arXiv},
eprint={0809.2315},
primaryClass={cs.IT cs.DM math.IT math.RA}
} | abualrub2008on |
arxiv-4848 | 0809.2319 | A Log-space Algorithm for Canonization of Planar Graphs | <|reference_start|>A Log-space Algorithm for Canonization of Planar Graphs: Graph Isomorphism is the prime example of a computational problem with a wide difference between the best known lower and upper bounds on its complexity. We bridge this gap for a natural and important special case, planar graph isomorphism, by presenting an upper bound that matches the known logspace hardness [Lindell'92]. In fact, we show the formally stronger result that planar graph canonization is in logspace. This improves the previously known upper bound of AC1 [MillerReif'91]. Our algorithm first constructs the biconnected component tree of a connected planar graph and then refines each biconnected component into a triconnected component tree. The next step is to logspace reduce the biconnected planar graph isomorphism and canonization problems to those for 3-connected planar graphs, which are known to be in logspace by [DattaLimayeNimbhorkar'08]. This is achieved by using the above decomposition, and by making significant modifications to Lindell's algorithm for tree canonization, along with changes in the space complexity analysis. The reduction from the connected case to the biconnected case requires further new ideas, including a non-trivial case analysis and a group theoretic lemma to bound the number of automorphisms of a colored 3-connected planar graph. This lemma is crucial for the reduction to work in logspace.<|reference_end|> | arxiv | @article{datta2008a,
title={A Log-space Algorithm for Canonization of Planar Graphs},
author={Samir Datta, Nutan Limaye, Prajakta Nimbhorkar, Thomas Thierauf,
Fabian Wagner},
journal={arXiv preprint arXiv:0809.2319},
year={2008},
archivePrefix={arXiv},
eprint={0809.2319},
primaryClass={cs.CC}
} | datta2008a |
arxiv-4849 | 0809.2322 | An Energy-Aware On-Demand Routing Protocol for Ad-Hoc Wireless Networks | <|reference_start|>An Energy-Aware On-Demand Routing Protocol for Ad-Hoc Wireless Networks: An ad-hoc wireless network is a collection of nodes that come together to dynamically create a network, with no fixed infrastructure or centralized administration. An ad-hoc network is characterized by energy constrained nodes, bandwidth constrained links and dynamic topology. With the growing use of wireless networks (including ad-hoc networks) for real-time applications, such as voice, video, and real-time data, the need for Quality of Service (QoS) guarantees in terms of delay, bandwidth, and packet loss is becoming increasingly important. Providing QoS in ad-hoc networks is a challenging task because of dynamic nature of network topology and imprecise state information. Hence, it is important to have a dynamic routing protocol with fast re-routing capability, which also provides stable route during the life-time of the flows. In this thesis, we have proposed a novel, energy aware, stable routing protocol named, Stability-based QoS-capable Ad-hoc On-demand Distance Vector (SQ-AODV), which is an enhancement of the well-known Ad-hoc On-demand Distance Vector (AODV) routing protocol for ad-hoc wireless networks. SQ-AODV utilizes a cross-layer design approach in which information about the residual energy of a node is used for route selection and maintenance. An important feature of SQ-AODV protocol is that it uses only local information and requires no additional communication or co-operation between the network nodes. SQ-AODV possesses a make-before-break re-routing capability that enables near-zero packet drops and is compatible with the basic AODV data formats and operation, making it easy to adopt in ad-hoc networks.<|reference_end|> | arxiv | @article{veerayya2008an,
title={An Energy-Aware On-Demand Routing Protocol for Ad-Hoc Wireless Networks},
author={Mallapur Veerayya},
journal={arXiv preprint arXiv:0809.2322},
year={2008},
archivePrefix={arXiv},
eprint={0809.2322},
primaryClass={cs.NI}
} | veerayya2008an |
arxiv-4850 | 0809.2350 | Random Linear Network Coding For Time Division Duplexing: When To Stop Talking And Start Listening | <|reference_start|>Random Linear Network Coding For Time Division Duplexing: When To Stop Talking And Start Listening: A new random linear network coding scheme for reliable communications for time division duplexing channels is proposed. The setup assumes a packet erasure channel and that nodes cannot transmit and receive information simultaneously. The sender transmits coded data packets back-to-back before stopping to wait for the receiver to acknowledge (ACK) the number of degrees of freedom, if any, that are required to decode correctly the information. We provide an analysis of this problem to show that there is an optimal number of coded data packets, in terms of mean completion time, to be sent before stopping to listen. This number depends on the latency, probabilities of packet erasure and ACK erasure, and the number of degrees of freedom that the receiver requires to decode the data. This scheme is optimal in terms of the mean time to complete the transmission of a fixed number of data packets. We show that its performance is very close to that of a full duplex system, while transmitting a different number of coded packets can cause large degradation in performance, especially if latency is high. Also, we study the throughput performance of our scheme and compare it to existing half-duplex Go-back-N and Selective Repeat ARQ schemes. Numerical results, obtained for different latencies, show that our scheme has similar performance to the Selective Repeat in most cases and considerable performance gain when latency and packet error probability is high.<|reference_end|> | arxiv | @article{lucani2008random,
title={Random Linear Network Coding For Time Division Duplexing: When To Stop
Talking And Start Listening},
author={Daniel E. Lucani, Milica Stojanovic, Muriel M\'edard},
journal={arXiv preprint arXiv:0809.2350},
year={2008},
archivePrefix={arXiv},
eprint={0809.2350},
primaryClass={cs.IT math.IT}
} | lucani2008random |
arxiv-4851 | 0809.2386 | Datalog and Constraint Satisfaction with Infinite Templates | <|reference_start|>Datalog and Constraint Satisfaction with Infinite Templates: On finite structures, there is a well-known connection between the expressive power of Datalog, finite variable logics, the existential pebble game, and bounded hypertree duality. We study this connection for infinite structures. This has applications for constraint satisfaction with infinite templates. If the template Gamma is omega-categorical, we present various equivalent characterizations of those Gamma such that the constraint satisfaction problem (CSP) for Gamma can be solved by a Datalog program. We also show that CSP(Gamma) can be solved in polynomial time for arbitrary omega-categorical structures Gamma if the input is restricted to instances of bounded treewidth. Finally, we characterize those omega-categorical templates whose CSP has Datalog width 1, and those whose CSP has strict Datalog width k.<|reference_end|> | arxiv | @article{bodirsky2008datalog,
title={Datalog and Constraint Satisfaction with Infinite Templates},
author={Manuel Bodirsky, Victor Dalmau},
journal={arXiv preprint arXiv:0809.2386},
year={2008},
archivePrefix={arXiv},
eprint={0809.2386},
primaryClass={cs.LO cs.CC}
} | bodirsky2008datalog |
arxiv-4852 | 0809.2394 | Structures de r\'ealisabilit\'e, RAM et ultrafiltre sur N | <|reference_start|>Structures de r\'ealisabilit\'e, RAM et ultrafiltre sur N: We show how to transform into programs the proofs in classical Analysis which use the existence of an ultrafilter on the integers. The method mixes the classical realizability introduced by the author, with the "forcing" of P. Cohen. The programs we obtain, use read and write instructions in random access memory.<|reference_end|> | arxiv | @article{krivine2008structures,
title={Structures de r\'ealisabilit\'e, RAM et ultrafiltre sur N},
author={Jean-Louis Krivine (PPS)},
journal={arXiv preprint arXiv:0809.2394},
year={2008},
archivePrefix={arXiv},
eprint={0809.2394},
primaryClass={cs.LO}
} | krivine2008structures |
arxiv-4853 | 0809.2421 | Electricity Demand and Energy Consumption Management System | <|reference_start|>Electricity Demand and Energy Consumption Management System: This project describes an electricity demand and energy consumption management system and its application to the Southern Peru smelter. It is composed of an hourly demand-forecasting module and of a simulation component for a plant electrical system. The first module was built using dynamic neural networks with the backpropagation training algorithm; it is used to predict the electric power demanded every hour, with an error percentage below 1%. This information allows efficient management of energy peak demands before they happen, distributing the rise in electric load to other hours or improving the equipment that increases the demand. The simulation module is based on advanced estimation techniques, such as parametric estimation, neural network modeling, statistical regression and previously developed models, to simulate the electric behavior of the smelter plant. These modules facilitate proper planning of electricity demand and consumption, because they reveal the behavior of the hourly demand and the consumption patterns of the plant, including the bill components, but also energy deficiencies and opportunities for improvement, based on analysis of information about equipment, processes and production plans, as well as maintenance programs. Finally, the results of its application to the Southern Peru smelter are presented.<|reference_end|> | arxiv | @article{sarmiento2008electricity,
title={Electricity Demand and Energy Consumption Management System},
author={Juan Ojeda Sarmiento},
journal={arXiv preprint arXiv:0809.2421},
year={2008},
archivePrefix={arXiv},
eprint={0809.2421},
primaryClass={cs.AI cs.CE}
} | sarmiento2008electricity |
arxiv-4854 | 0809.2423 | The fully connected N-dimensional skeleton: probing the evolution of the cosmic web | <|reference_start|>The fully connected N-dimensional skeleton: probing the evolution of the cosmic web: A method to compute the full hierarchy of the critical subsets of a density field is presented. It is based on a watershed technique and uses a probability propagation scheme to improve the quality of the segmentation by circumventing the discreteness of the sampling. It can be applied within spaces of arbitrary dimensions and geometry. This recursive segmentation of space yields, for a $d$-dimensional space, a $d-1$ succession of $n$-dimensional subspaces that fully characterize the topology of the density field. The final 1D manifold of the hierarchy is the fully connected network of the primary critical lines of the field : the skeleton. It corresponds to the subset of lines linking maxima to saddle points, and provides a definition of the filaments that compose the cosmic web as a precise physical object, which makes it possible to compute any of its properties such as its length, curvature, connectivity etc... When the skeleton extraction is applied to initial conditions of cosmological N-body simulations and their present day non linear counterparts, it is shown that the time evolution of the cosmic web, as traced by the skeleton, is well accounted for by the Zel'dovich approximation. Comparing this skeleton to the initial skeleton undergoing the Zel'dovich mapping shows that two effects are competing during the formation of the cosmic web: a general dilation of the larger filaments that is captured by a simple deformation of the skeleton of the initial conditions on the one hand, and the shrinking, fusion and disappearance of the more numerous smaller filaments on the other hand. Other applications of the N dimensional skeleton and its peak patch hierarchy are discussed.<|reference_end|> | arxiv | @article{sousbie2008the,
title={The fully connected N-dimensional skeleton: probing the evolution of the
cosmic web},
author={T. Sousbie, S. Colombi, C. Pichon},
journal={Mon.Not.Roy.Astron.Soc.393:457,2009},
year={2008},
doi={10.1111/j.1365-2966.2008.14244.x},
archivePrefix={arXiv},
eprint={0809.2423},
primaryClass={astro-ph cs.CG physics.comp-ph}
} | sousbie2008the |
arxiv-4855 | 0809.2443 | NP-Completeness of Hamiltonian Cycle Problem on Rooted Directed Path Graphs | <|reference_start|>NP-Completeness of Hamiltonian Cycle Problem on Rooted Directed Path Graphs: The Hamiltonian cycle problem is to decide whether a given graph has a Hamiltonian cycle. Bertossi and Bonuccelli (1986, Information Processing Letters, 23, 195-200) proved that the Hamiltonian Cycle Problem is NP-Complete even for undirected path graphs and left the Hamiltonian cycle problem open for directed path graphs. Narasimhan (1989, Information Processing Letters, 32, 167-170) proved that the Hamiltonian Cycle Problem is NP-Complete even for directed path graphs and left the Hamiltonian cycle problem open for rooted directed path graphs. In this paper we resolve this open problem by proving that the Hamiltonian Cycle Problem is also NP-Complete for rooted directed path graphs.<|reference_end|> | arxiv | @article{panda2008np-completeness,
title={NP-Completeness of Hamiltonian Cycle Problem on Rooted Directed Path
Graphs},
author={B. S. Panda and D. Pradhan},
journal={arXiv preprint arXiv:0809.2443},
year={2008},
archivePrefix={arXiv},
eprint={0809.2443},
primaryClass={cs.DM}
} | panda2008np-completeness |
arxiv-4856 | 0809.2446 | High-Rate Space-Time Coded Large MIMO Systems: Low-Complexity Detection and Channel Estimation | <|reference_start|>High-Rate Space-Time Coded Large MIMO Systems: Low-Complexity Detection and Channel Estimation: In this paper, we present a low-complexity algorithm for detection in high-rate, non-orthogonal space-time block coded (STBC) large-MIMO systems that achieve high spectral efficiencies of the order of tens of bps/Hz. We also present a training-based iterative detection/channel estimation scheme for such large STBC MIMO systems. Our simulation results show that excellent bit error rate and nearness-to-capacity performance are achieved by the proposed multistage likelihood ascent search (M-LAS) detector in conjunction with the proposed iterative detection/channel estimation scheme at low complexities. The fact that we could show such good results for large STBCs like 16x16 and 32x32 STBCs from Cyclic Division Algebras (CDA) operating at spectral efficiencies in excess of 20 bps/Hz (even after accounting for the overheads meant for pilot based training for channel estimation and turbo coding) establishes the effectiveness of the proposed detector and channel estimator. We decode perfect codes of large dimensions using the proposed detector. With the feasibility of such a low-complexity detection/channel estimation scheme, large-MIMO systems with tens of antennas operating at several tens of bps/Hz spectral efficiencies can become practical, enabling interesting high data rate wireless applications.<|reference_end|> | arxiv | @article{mohammed2008high-rate,
title={High-Rate Space-Time Coded Large MIMO Systems: Low-Complexity Detection
and Channel Estimation},
author={Saif K. Mohammed, Ahmed Zaki, A. Chockalingam, and B. Sundar Rajan},
journal={arXiv preprint arXiv:0809.2446},
year={2008},
doi={10.1109/JSTSP.2009.2035862},
archivePrefix={arXiv},
eprint={0809.2446},
primaryClass={cs.IT math.IT}
} | mohammed2008high-rate |
arxiv-4857 | 0809.2489 | The fast intersection transform with applications to counting paths | <|reference_start|>The fast intersection transform with applications to counting paths: We present an algorithm for evaluating a linear ``intersection transform'' of a function defined on the lattice of subsets of an $n$-element set. In particular, the algorithm constructs an arithmetic circuit for evaluating the transform in ``down-closure time'' relative to the support of the function and the evaluation domain. As an application, we develop an algorithm that, given as input a digraph with $n$ vertices and bounded integer weights at the edges, counts paths by weight and given length $0\leq\ell\leq n-1$ in time $O^*(\exp(n\cdot H(\ell/(2n))))$, where $H(p)=-p\log p-(1-p)\log(1-p)$, and the notation $O^*(\cdot)$ suppresses a factor polynomial in $n$.<|reference_end|> | arxiv | @article{björklund2008the,
title={The fast intersection transform with applications to counting paths},
author={Andreas Bj\"orklund, Thore Husfeldt, Petteri Kaski, Mikko Koivisto},
journal={arXiv preprint arXiv:0809.2489},
year={2008},
archivePrefix={arXiv},
eprint={0809.2489},
primaryClass={cs.DS cs.DM}
} | björklund2008the |
arxiv-4858 | 0809.2508 | A fast approach for overcomplete sparse decomposition based on smoothed L0 norm | <|reference_start|>A fast approach for overcomplete sparse decomposition based on smoothed L0 norm: In this paper, a fast algorithm for overcomplete sparse decomposition, called SL0, is proposed. The algorithm is essentially a method for obtaining sparse solutions of underdetermined systems of linear equations, and its applications include underdetermined Sparse Component Analysis (SCA), atomic decomposition on overcomplete dictionaries, compressed sensing, and decoding real field codes. Contrary to previous methods, which usually solve this problem by minimizing the L1 norm using Linear Programming (LP) techniques, our algorithm tries to directly minimize the L0 norm. It is experimentally shown that the proposed algorithm is about two to three orders of magnitude faster than the state-of-the-art interior-point LP solvers, while providing the same (or better) accuracy.<|reference_end|> | arxiv | @article{mohimani2008a,
title={A fast approach for overcomplete sparse decomposition based on smoothed
L0 norm},
author={Hossein Mohimani, Massoud Babaie-Zadeh, and Christian Jutten},
journal={arXiv preprint arXiv:0809.2508},
year={2008},
doi={10.1109/TSP.2008.2007606},
archivePrefix={arXiv},
eprint={0809.2508},
primaryClass={cs.IT math.IT}
} | mohimani2008a |
arxiv-4859 | 0809.2525 | On the vertices of the k-additive core | <|reference_start|>On the vertices of the k-additive core: The core of a game $v$ on $N$, which is the set of additive games $\phi$ dominating $v$ such that $\phi(N)=v(N)$, is a central notion in cooperative game theory, decision making and in combinatorics, where it is related to submodular functions, matroids and the greedy algorithm. In many cases however, the core is empty, and alternative solutions have to be found. We define the $k$-additive core by replacing additive games by $k$-additive games in the definition of the core, where $k$-additive games are those games whose M\"obius transform vanishes for subsets of more than $k$ elements. For a sufficiently high value of $k$, the $k$-additive core is nonempty, and is a convex closed polyhedron. Our aim is to establish results similar to the classical results of Shapley and Ichiishi on the core of convex games (corresponds to Edmonds' theorem for the greedy algorithm), which characterize the vertices of the core.<|reference_end|> | arxiv | @article{grabisch2008on,
title={On the vertices of the k-additive core},
author={Michel Grabisch (CES), Pedro Miranda},
journal={Discrete Mathematics (2008) 5204-5217},
year={2008},
archivePrefix={arXiv},
eprint={0809.2525},
primaryClass={cs.DM cs.GT}
} | grabisch2008on |
arxiv-4860 | 0809.2532 | Multidimensional Visualization of Oracle Performance Using Barry007 | <|reference_start|>Multidimensional Visualization of Oracle Performance Using Barry007: Most generic performance tools display only system-level performance data using 2-dimensional plots or diagrams and this limits the informational detail that can be displayed. Moreover, a modern relational database system, like Oracle, can concurrently serve thousands of client processes with different workload characteristics, so that generic performance-data displays inevitably hide important information. Drawing on our previous work, this paper demonstrates the application of Barry007 multidimensional visualization to the analysis of Oracle end-user, session-level, performance data, showing both collective trends and individual performance anomalies.<|reference_end|> | arxiv | @article{poder2008multidimensional,
title={Multidimensional Visualization of Oracle Performance Using Barry007},
author={Tanel Poder and Neil J. Gunther},
journal={arXiv preprint arXiv:0809.2532},
year={2008},
archivePrefix={arXiv},
eprint={0809.2532},
primaryClass={cs.PF cs.DB}
} | poder2008multidimensional |
arxiv-4861 | 0809.2541 | Getting in the Zone for Successful Scalability | <|reference_start|>Getting in the Zone for Successful Scalability: The universal scalability law (USL) is an analytic model used to quantify application scaling. It is universal because it subsumes Amdahl's law and Gustafson linearized scaling as special cases. Using simulation, we show: (i) that the USL is equivalent to synchronous queueing in a load-dependent machine repairman model and (ii) how USL, Amdahl's law, and Gustafson scaling can be regarded as boundaries defining three scalability zones. Typical throughput measurements lie across all three zones. Simulation scenarios provide deeper insight into queueing effects and thus provide a clearer indication of which application features should be tuned to get into the optimal performance zone.<|reference_end|> | arxiv | @article{holtman2008getting,
title={Getting in the Zone for Successful Scalability},
author={Jim Holtman and Neil J. Gunther},
journal={arXiv preprint arXiv:0809.2541},
year={2008},
archivePrefix={arXiv},
eprint={0809.2541},
primaryClass={cs.PF cs.DC}
} | holtman2008getting |
arxiv-4862 | 0809.2546 | Depth as Randomness Deficiency | <|reference_start|>Depth as Randomness Deficiency: Depth of an object concerns a tradeoff between computation time and excess of program length over the shortest program length required to obtain the object. It gives an unconditional lower bound on the computation time from a given program in absence of auxiliary information. Variants known as logical depth and computational depth are expressed in Kolmogorov complexity theory. We derive quantitative relation between logical depth and computational depth and unify the different depth notions by relating them to A. Kolmogorov and L. Levin's fruitful notion of randomness deficiency. Subsequently, we revisit the computational depth of infinite strings, introducing the notion of super deep sequences and relate it with other approaches.<|reference_end|> | arxiv | @article{antunes2008depth,
title={Depth as Randomness Deficiency},
author={Luis Antunes (Univ. Porto), Armando Matos (Univ. Porto), Andre Souto
(Univ. Porto), Paul Vitanyi (CWI and Univ. Amsterdam)},
journal={arXiv preprint arXiv:0809.2546},
year={2008},
archivePrefix={arXiv},
eprint={0809.2546},
primaryClass={cs.CC cs.IT math.IT}
} | antunes2008depth |
arxiv-4863 | 0809.2553 | Normalized Information Distance | <|reference_start|>Normalized Information Distance: The normalized information distance is a universal distance measure for objects of all kinds. It is based on Kolmogorov complexity and thus uncomputable, but there are ways to utilize it. First, compression algorithms can be used to approximate the Kolmogorov complexity if the objects have a string representation. Second, for names and abstract concepts, page count statistics from the World Wide Web can be used. These practical realizations of the normalized information distance can then be applied to machine learning tasks, especially clustering, to perform feature-free and parameter-free data mining. This chapter discusses the theoretical foundations of the normalized information distance and both practical realizations. It presents numerous examples of successful real-world applications based on these distance measures, ranging from bioinformatics to music clustering to machine translation.<|reference_end|> | arxiv | @article{vitanyi2008normalized,
title={Normalized Information Distance},
author={Paul M.B. Vitanyi (CWI and Univ. Amsterdam), Frank J. Balbach (Univ.
Waterloo), Rudi L. Cilibrasi (CWI), and Ming Li (Univ. Waterloo)},
journal={arXiv preprint arXiv:0809.2553},
year={2008},
archivePrefix={arXiv},
eprint={0809.2553},
primaryClass={cs.IR cs.AI}
} | vitanyi2008normalized |
arxiv-4864 | 0809.2554 | Simpler Analyses of Local Search Algorithms for Facility Location | <|reference_start|>Simpler Analyses of Local Search Algorithms for Facility Location: We study local search algorithms for metric instances of facility location problems: the uncapacitated facility location problem (UFL), as well as uncapacitated versions of the $k$-median, $k$-center and $k$-means problems. All these problems admit natural local search heuristics: for example, in the UFL problem the natural moves are to open a new facility, close an existing facility, and to swap a closed facility for an open one; in $k$-medians, we are allowed only swap moves. The local-search algorithm for $k$-median was analyzed by Arya et al. (SIAM J. Comput. 33(3):544-562, 2004), who used a clever ``coupling'' argument to show that local optima had cost at most constant times the global optimum. They also used this argument to show that the local search algorithm for UFL was 3-approximation; their techniques have since been applied to other facility location problems. In this paper, we give a proof of the $k$-median result which avoids this coupling argument. These arguments can be used in other settings where the Arya et al. arguments have been used. We also show that for the problem of opening $k$ facilities $F$ to minimize the objective function $\Phi_p(F) = \big(\sum_{j \in V} d(j, F)^p\big)^{1/p}$, the natural swap-based local-search algorithm is a $\Theta(p)$-approximation. This implies constant-factor approximations for $k$-medians (when $p=1$), and $k$-means (when $p = 2$), and an $O(\log n)$-approximation algorithm for the $k$-center problem (which is essentially $p = \log n$).<|reference_end|> | arxiv | @article{gupta2008simpler,
title={Simpler Analyses of Local Search Algorithms for Facility Location},
author={Anupam Gupta and Kanat Tangwongsan},
journal={arXiv preprint arXiv:0809.2554},
year={2008},
archivePrefix={arXiv},
eprint={0809.2554},
primaryClass={cs.DS}
} | gupta2008simpler |
arxiv-4865 | 0809.2639 | Code diversity in multiple antenna wireless communication | <|reference_start|>Code diversity in multiple antenna wireless communication: The standard approach to the design of individual space-time codes is based on optimizing diversity and coding gains. This geometric approach leads to remarkable examples, such as perfect space-time block codes, for which the complexity of Maximum Likelihood (ML) decoding is considerable. Code diversity is an alternative and complementary approach where a small number of feedback bits are used to select from a family of space-time codes. Different codes lead to different induced channels at the receiver, where Channel State Information (CSI) is used to instruct the transmitter how to choose the code. This method of feedback provides gains associated with beamforming while minimizing the number of feedback bits. It complements the standard approach to code design by taking advantage of different (possibly equivalent) realizations of a particular code design. Feedback can be combined with sub-optimal low complexity decoding of the component codes to match ML decoding performance of any individual code in the family. It can also be combined with ML decoding of the component codes to improve performance beyond ML decoding performance of any individual code. One method of implementing code diversity is the use of feedback to adapt the phase of a transmitted signal as shown for 4 by 4 Quasi-Orthogonal Space-Time Block Code (QOSTBC) and multi-user detection using the Alamouti code. Code diversity implemented by selecting from equivalent variants is used to improve ML decoding performance of the Golden code. This paper introduces a family of full rate circulant codes which can be linearly decoded by Fourier decomposition of circulant matrices within the code diversity framework. A 3 by 3 circulant code is shown to outperform the Alamouti code at the same transmission rate.<|reference_end|> | arxiv | @article{wu2008code,
title={Code diversity in multiple antenna wireless communication},
author={Yiyue Wu and Robert Calderbank},
journal={arXiv preprint arXiv:0809.2639},
year={2008},
doi={10.1109/JSTSP.2009.2035861},
archivePrefix={arXiv},
eprint={0809.2639},
primaryClass={cs.IT math.IT}
} | wu2008code |
arxiv-4866 | 0809.2651 | Largest Empty Circle Centered on a Query Line | <|reference_start|>Largest Empty Circle Centered on a Query Line: The Largest Empty Circle problem seeks the largest circle centered within the convex hull of a set $P$ of $n$ points in $\mathbb{R}^2$ and devoid of points from $P$. In this paper, we introduce a query version of this well-studied problem. In our query version, we are required to preprocess $P$ so that when given a query line $Q$, we can quickly compute the largest empty circle centered at some point on $Q$ and within the convex hull of $P$. We present solutions for two special cases and the general case; all our queries run in $O(\log n)$ time. We restrict the query line to be horizontal in the first special case, which we preprocess in $O(n \alpha(n) \log n)$ time and space, where $\alpha(n)$ is the slow growing inverse of the Ackermann's function. When the query line is restricted to pass through a fixed point, the second special case, our preprocessing takes $O(n \alpha(n)^{O(\alpha(n))} \log n)$ time and space. We use insights from the two special cases to solve the general version of the problem with preprocessing time and space in $O(n^3 \log n)$ and $O(n^3)$ respectively.<|reference_end|> | arxiv | @article{augustine2008largest,
title={Largest Empty Circle Centered on a Query Line},
author={John Augustine, Brian Putnam, Sasanka Roy},
journal={arXiv preprint arXiv:0809.2651},
year={2008},
doi={10.1016/j.jda.2009.10.002},
archivePrefix={arXiv},
eprint={0809.2651},
primaryClass={cs.CG}
} | augustine2008largest |
arxiv-4867 | 0809.2680 | Mathematical Tool of Discrete Dynamic Modeling of Complex Systems in Control Loop | <|reference_start|>Mathematical Tool of Discrete Dynamic Modeling of Complex Systems in Control Loop: In this paper we present a method of discrete modeling and analysis of multi-level dynamics of complex large-scale hierarchical dynamic systems subject to external dynamic control mechanism. In a model each state describes parallel dynamics and simultaneous trends of changes in system parameters. The essence of the approach is in analysis of system state dynamics while it is in the control loop.<|reference_end|> | arxiv | @article{bagdasaryan2008mathematical,
title={Mathematical Tool of Discrete Dynamic Modeling of Complex Systems in
Control Loop},
author={Armen Bagdasaryan},
journal={arXiv preprint arXiv:0809.2680},
year={2008},
archivePrefix={arXiv},
eprint={0809.2680},
primaryClass={cs.MA cs.CE}
} | bagdasaryan2008mathematical |
arxiv-4868 | 0809.2686 | An MAS-Based ETL Approach for Complex Data | <|reference_start|>An MAS-Based ETL Approach for Complex Data: In a data warehousing process, the phase of data integration is crucial. Many methods for data integration have been published in the literature. However, with the development of the Internet, the availability of various types of data (images, texts, sounds, videos, databases...) has increased, and structuring such data is a difficult task. We name these data, which may be structured or unstructured, "complex data". In this paper, we propose a new approach for complex data integration, based on a Multi-Agent System (MAS), in association with a data warehousing approach. Our objective is to take advantage of the MAS to perform the integration phase for complex data. We consider the different tasks of the data integration process as services offered by agents. To validate this approach, we have developed an MAS for complex data integration.<|reference_end|> | arxiv | @article{boussaïd2008an,
title={An MAS-Based ETL Approach for Complex Data},
author={Omar Boussa\"id (ERIC), Fadila Bentayeb (ERIC), J\'er\^ome Darmont
(ERIC)},
journal={arXiv preprint arXiv:0809.2686},
year={2008},
archivePrefix={arXiv},
eprint={0809.2686},
primaryClass={cs.DB}
} | boussaïd2008an |
arxiv-4869 | 0809.2687 | Frequent itemsets mining for database auto-administration | <|reference_start|>Frequent itemsets mining for database auto-administration: With the wide development of databases in general and data warehouses in particular, it is important to reduce the tasks that a database administrator must perform manually. The aim of auto-administrative systems is to administrate and adapt themselves automatically without loss (or even with a gain) in performance. The idea of using data mining techniques to extract useful knowledge for administration from the data themselves has existed for some years. However, little research has been achieved. This idea nevertheless remains a very promising approach, notably in the field of data warehousing, where queries are very heterogeneous and cannot be interpreted easily. The aim of this study is to search for a way of extracting useful knowledge from stored data themselves to automatically apply performance optimization techniques, and more particularly indexing techniques. We have designed a tool that extracts frequent itemsets from a given workload to compute an index configuration that helps optimizing data access time. The experiments we performed showed that the index configurations generated by our tool allowed performance gains of 15% to 25% on a test database and a test data warehouse.<|reference_end|> | arxiv | @article{aouiche2008frequent,
title={Frequent itemsets mining for database auto-administration},
author={Kamel Aouiche (ERIC), J\'er\^ome Darmont (ERIC), Le Gruenwald},
journal={arXiv preprint arXiv:0809.2687},
year={2008},
archivePrefix={arXiv},
eprint={0809.2687},
primaryClass={cs.DB}
} | aouiche2008frequent |
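The auto-administration tool described above extracts frequent itemsets from the query workload to pick an index configuration. As a hedged illustration (not the tool from the paper; the function name and the simple candidate-generation style are assumptions), a minimal level-wise Apriori-style miner:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Level-wise (Apriori-style) mining: return every itemset whose
    absolute support count reaches min_support, mapped to its count."""
    tsets = [set(t) for t in transactions]
    items = sorted({i for t in tsets for i in t})
    result = {}
    k = 1
    current = [frozenset([i]) for i in items]
    while current:
        # count support of the size-k candidates
        counts = {c: sum(c <= t for t in tsets) for c in current}
        frequent = {c: n for c, n in counts.items() if n >= min_support}
        result.update(frequent)
        # size-(k+1) candidates: every size-k subset must be frequent
        freq_items = sorted({i for c in frequent for i in c})
        current = [frozenset(c) for c in combinations(freq_items, k + 1)
                   if all(frozenset(s) in frequent
                          for s in combinations(c, k))]
        k += 1
    return result
```

In an index-selection setting, each "transaction" would be the set of attributes touched by one workload query, and frequent attribute sets become index candidates.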
arxiv-4870 | 0809.2688 | A Complex Data Warehouse for Personalized, Anticipative Medicine | <|reference_start|>A Complex Data Warehouse for Personalized, Anticipative Medicine: With the growing use of new technologies, healthcare is nowadays undergoing significant changes. Information-based medicine has to exploit medical decision-support systems and requires the analysis of various, heterogeneous data, such as patient records, medical images, biological analysis results, etc. In this paper, we present the design of the complex data warehouse relating to high-level athletes. It is original in two ways. First, it is aimed at storing complex medical data. Second, it is designed to allow innovative and quite different kinds of analyses to support: (1) personalized and anticipative medicine (in opposition to curative medicine) for well-identified patients; (2) broad-band statistical studies over a given population of patients. Furthermore, the system includes data relating to several medical fields. It is also designed to be evolutionary to take into account future advances in medical research.<|reference_end|> | arxiv | @article{darmont2008a,
title={A Complex Data Warehouse for Personalized, Anticipative Medicine},
author={J\'er\^ome Darmont (ERIC), Emerson Olivier (ERIC)},
journal={arXiv preprint arXiv:0809.2688},
year={2008},
archivePrefix={arXiv},
eprint={0809.2688},
primaryClass={cs.DB}
} | darmont2008a |
arxiv-4871 | 0809.2691 | Expressing OLAP operators with the TAX XML algebra | <|reference_start|>Expressing OLAP operators with the TAX XML algebra: With the rise of XML as a standard for representing business data, XML data warehouses appear as suitable solutions for Web-based decision-support applications. In this context, it is necessary to allow OLAP analyses over XML data cubes (XOLAP). Thus, XQuery extensions are needed. To help define a formal framework and allow much-needed performance optimizations on analytical queries expressed in XQuery, having an algebra at one's disposal is desirable. However, XOLAP approaches and algebras from the literature still largely rely on the relational model and/or only feature a small number of OLAP operators. In opposition, we propose in this paper to express a broad set of OLAP operators with the TAX XML algebra.<|reference_end|> | arxiv | @article{hachicha2008expressing,
title={Expressing OLAP operators with the TAX XML algebra},
author={Marouane Hachicha (ERIC), Hadj Mahboubi (ERIC), J\'er\^ome Darmont
(ERIC)},
journal={arXiv preprint arXiv:0809.2691},
year={2008},
archivePrefix={arXiv},
eprint={0809.2691},
primaryClass={cs.DB}
} | hachicha2008expressing |
arxiv-4872 | 0809.2696 | An Unified Definition of Data Mining | <|reference_start|>An Unified Definition of Data Mining: Since many years, theoretical concepts of Data Mining have been developed and improved. Data Mining has become applied to many academic and industrial situations, and recently, soundings of public opinion about privacy have been carried out. However, a consistent and standardized definition is still missing, and the initial explanation given by Frawley et al. has pragmatically often changed over the years. Furthermore, alternative terms like Knowledge Discovery have been conjured and forged, and a necessity of a Data Warehouse has been endeavoured to persuade the users. In this work, we pick up current definitions and introduce an unified definition that covers existing attempted explanations. For this, we appeal to the natural original of chemical states of aggregation.<|reference_end|> | arxiv | @article{schommer2008an,
title={An Unified Definition of Data Mining},
author={Christoph Schommer},
journal={arXiv preprint arXiv:0809.2696},
year={2008},
archivePrefix={arXiv},
eprint={0809.2696},
primaryClass={cs.SC cs.CY}
} | schommer2008an |
arxiv-4873 | 0809.2730 | SWIM: A Simple Model to Generate Small Mobile Worlds | <|reference_start|>SWIM: A Simple Model to Generate Small Mobile Worlds: This paper presents small world in motion (SWIM), a new mobility model for ad-hoc networking. SWIM is relatively simple, is easily tuned by setting just a few parameters, and generates traces that look real--synthetic traces have the same statistical properties of real traces. SWIM shows experimentally and theoretically the presence of the power law and exponential decay dichotomy of inter-contact time, and, most importantly, our experiments show that it can predict very accurately the performance of forwarding protocols.<|reference_end|> | arxiv | @article{mei2008swim:,
title={SWIM: A Simple Model to Generate Small Mobile Worlds},
author={Alessandro Mei (1) and Julinda Stefa (1) ((1) Department of Computer
Science, Sapienza University of Rome, Italy)},
journal={arXiv preprint arXiv:0809.2730},
year={2008},
archivePrefix={arXiv},
eprint={0809.2730},
primaryClass={cs.DC cs.NI}
} | mei2008swim: |
arxiv-4874 | 0809.2754 | Algorithmic information theory | <|reference_start|>Algorithmic information theory: We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining `information'. We discuss the extent to which Kolmogorov's and Shannon's information theory have a common purpose, and where they are fundamentally different. We indicate how recent developments within the theory allow one to formally distinguish between `structural' (meaningful) and `random' information as measured by the Kolmogorov structure function, which leads to a mathematical formalization of Occam's razor in inductive inference. We end by discussing some of the philosophical implications of the theory.<|reference_end|> | arxiv | @article{grunwald2008algorithmic,
title={Algorithmic information theory},
author={Peter D. Grunwald (CWI) and Paul M.B. Vitanyi (CWI and Univ.
Amsterdam)},
journal={arXiv preprint arXiv:0809.2754},
year={2008},
archivePrefix={arXiv},
eprint={0809.2754},
primaryClass={cs.IT cs.LG math.IT math.ST stat.TH}
} | grunwald2008algorithmic |
arxiv-4875 | 0809.2768 | Hubs and Clusters in the Evolving U S Internal Migration Network | <|reference_start|>Hubs and Clusters in the Evolving U S Internal Migration Network: Most nations of the world periodically publish N x N origin-destination tables, recording the number of people who lived in geographic subdivision i at time t and j at t+1. We have developed and widely applied to such national tables and other analogous (weighted, directed) socioeconomic networks, a two-stage--double-standardization and (strong component) hierarchical clustering--procedure. Previous applications of this methodology and related analytical issues are discussed. Its use is illustrated in a large-scale study, employing recorded United States internal migration flows between the 3,000+ county-level units of the nation for the periods 1965-1970 and 1995-2000. Prominent, important features--such as ''cosmopolitan hubs'' and ``functional regions''--are extracted from master dendrograms. The extent to which such characteristics have varied over the intervening thirty years is evaluated.<|reference_end|> | arxiv | @article{slater2008hubs,
title={Hubs and Clusters in the Evolving U. S. Internal Migration Network},
author={Paul B. Slater},
journal={arXiv preprint arXiv:0809.2768},
year={2008},
archivePrefix={arXiv},
eprint={0809.2768},
primaryClass={physics.soc-ph cs.SI physics.data-an stat.AP}
} | slater2008hubs |
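The first stage of the two-stage procedure above is a double standardization of the N x N origin-destination table. A hedged sketch of one standard way to do this (alternating row/column scaling, i.e. iterative proportional fitting; the paper's exact normalization may differ, and `double_standardize` is an illustrative name):

```python
def double_standardize(flows, iters=200):
    """Alternately rescale the rows and columns of a strictly positive
    origin-destination flow matrix until every row sum and column sum
    is (approximately) 1 -- iterative proportional fitting."""
    A = [row[:] for row in flows]
    n, m = len(A), len(A[0])
    for _ in range(iters):
        for i in range(n):                       # standardize origins
            s = sum(A[i])
            A[i] = [x / s for x in A[i]]
        for j in range(m):                       # standardize destinations
            s = sum(A[i][j] for i in range(n))
            for i in range(n):
                A[i][j] /= s
    return A
```

The doubly standardized table removes row/column size effects, so the subsequent strong-component hierarchical clustering acts on relative flow intensities.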
arxiv-4876 | 0809.2792 | Predicting Abnormal Returns From News Using Text Classification | <|reference_start|>Predicting Abnormal Returns From News Using Text Classification: We show how text from news articles can be used to predict intraday price movements of financial assets using support vector machines. Multiple kernel learning is used to combine equity returns with text as predictive features to increase classification performance and we develop an analytic center cutting plane method to solve the kernel learning problem efficiently. We observe that while the direction of returns is not predictable using either text or returns, their size is, with text features producing significantly better performance than historical returns alone.<|reference_end|> | arxiv | @article{luss2008predicting,
title={Predicting Abnormal Returns From News Using Text Classification},
author={Ronny Luss, Alexandre d'Aspremont},
journal={arXiv preprint arXiv:0809.2792},
year={2008},
archivePrefix={arXiv},
eprint={0809.2792},
primaryClass={cs.LG cs.AI}
} | luss2008predicting |
arxiv-4877 | 0809.2818 | A Simple Framework to Typify Social Bibliographic Communities | <|reference_start|>A Simple Framework to Typify Social Bibliographic Communities: Social Communities in bibliographic databases exist since many years, researchers share common research interests, and work and publish together. A social community may vary in type and size, being fully connected between participating members or even more expressed by a consortium of small and individual members who play individual roles in it. In this work, we focus on social communities inside the bibliographic database DBLP and characterize communities through a simple typifying description model. Generally, we understand a publication as a transaction between the associated authors. The idea therefore is to concern with directed associative relationships among them, to decompose each pattern to its fundamental structure, and to describe the communities by expressive attributes. Finally, we argue that the decomposition supports the management of discovered structures towards the use of adaptive-incremental mind-maps.<|reference_end|> | arxiv | @article{schommer2008a,
title={A Simple Framework to Typify Social Bibliographic Communities},
author={Christoph Schommer},
journal={arXiv preprint arXiv:0809.2818},
year={2008},
archivePrefix={arXiv},
eprint={0809.2818},
primaryClass={cs.DL cs.CG}
} | schommer2008a |
arxiv-4878 | 0809.2835 | Fundamental Constraints on Multicast Capacity Regions | <|reference_start|>Fundamental Constraints on Multicast Capacity Regions: Much of the existing work on the broadcast channel focuses only on the sending of private messages. In this work we examine the scenario where the sender also wishes to transmit common messages to subsets of receivers. For an L user broadcast channel there are 2^L - 1 subsets of receivers and correspondingly 2^L - 1 independent messages. The set of achievable rates for this channel is a 2^L - 1 dimensional region. There are fundamental constraints on the geometry of this region. For example, observe that if the transmitter is able to simultaneously send L rate-one private messages, error-free to all receivers, then by sending the same information in each message, it must be able to send a single rate-one common message, error-free to all receivers. This swapping of private and common messages illustrates that for any broadcast channel, the inclusion of a point R* in the achievable rate region implies the achievability of a set of other points that are not merely component-wise less than R*. We formally define this set and characterize it for L = 2 and L = 3. Whereas for L = 2 all the points in the set arise only from operations relating to swapping private and common messages, for L = 3 a form of network coding is required.<|reference_end|> | arxiv | @article{grokop2008fundamental,
title={Fundamental Constraints on Multicast Capacity Regions},
author={Leonard Grokop, David N. C. Tse},
journal={arXiv preprint arXiv:0809.2835},
year={2008},
archivePrefix={arXiv},
eprint={0809.2835},
primaryClass={cs.IT math.IT}
} | grokop2008fundamental |
arxiv-4879 | 0809.2840 | Spectrum Sharing between Wireless Networks | <|reference_start|>Spectrum Sharing between Wireless Networks: We consider the problem of two wireless networks operating on the same (presumably unlicensed) frequency band. Pairs within a given network cooperate to schedule transmissions, but between networks there is competition for spectrum. To make the problem tractable, we assume transmissions are scheduled according to a random access protocol where each network chooses an access probability for its users. A game between the two networks is defined. We characterize the Nash Equilibrium behavior of the system. Three regimes are identified; one in which both networks simultaneously schedule all transmissions; one in which the denser network schedules all transmissions and the sparser only schedules a fraction; and one in which both networks schedule only a fraction of their transmissions. The regime of operation depends on the pathloss exponent $\alpha$, the latter regime being desirable, but attainable only for $\alpha>4$. This suggests that in certain environments, rival wireless networks may end up naturally cooperating. To substantiate our analytical results, we simulate a system where networks iteratively optimize their access probabilities in a greedy manner. We also discuss a distributed scheduling protocol that employs carrier sensing, and demonstrate via simulations, that again a near cooperative equilibrium exists for sufficiently large $\alpha$.<|reference_end|> | arxiv | @article{grokop2008spectrum,
title={Spectrum Sharing between Wireless Networks},
author={Leonard Grokop, David N. C. Tse},
journal={arXiv preprint arXiv:0809.2840},
year={2008},
archivePrefix={arXiv},
eprint={0809.2840},
primaryClass={cs.IT math.IT}
} | grokop2008spectrum |
arxiv-4880 | 0809.2851 | Correlation of Expert and Search Engine Rankings | <|reference_start|>Correlation of Expert and Search Engine Rankings: In previous research it has been shown that link-based web page metrics can be used to predict experts' assessment of quality. We are interested in a related question: do expert rankings of real-world entities correlate with search engine rankings of corresponding web resources? For example, each year US News & World Report publishes a list of (among others) top 50 graduate business schools. Does their expert ranking correlate with the search engine ranking of the URLs of those business schools? To answer this question we conducted 9 experiments using 8 expert rankings on a range of academic, athletic, financial and popular culture topics. We compared the expert rankings with the rankings in Google, Live Search (formerly MSN) and Yahoo (with list lengths of 10, 25, and 50). In 57 search engine vs. expert comparisons, only 1 strong and 4 moderate correlations were statistically significant. In 42 inter-search engine comparisons, only 2 strong and 4 moderate correlations were statistically significant. The correlations appeared to decrease with the size of the lists: the 3 strong correlations were for lists of 10, the 8 moderate correlations were for lists of 25, and no correlations were found for lists of 50.<|reference_end|> | arxiv | @article{nelson2008correlation,
title={Correlation of Expert and Search Engine Rankings},
author={Michael L. Nelson, Martin Klein, Manoranjan Magudamudi},
journal={arXiv preprint arXiv:0809.2851},
year={2008},
archivePrefix={arXiv},
eprint={0809.2851},
primaryClass={cs.DL}
} | nelson2008correlation |
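Comparisons like the ones above between expert lists and search-engine lists are scored with rank correlation. As a minimal illustration (an assumed helper, not the authors' code, and it presupposes two tie-free rankings of exactly the same items), Spearman's rho:

```python
def spearman_rho(order_a, order_b):
    """Spearman rank correlation between two rankings (ordered lists)
    of the same items, assuming no ties."""
    n = len(order_a)
    pos_b = {item: r for r, item in enumerate(order_b)}
    # sum of squared rank differences
    d2 = sum((ra - pos_b[item]) ** 2
             for ra, item in enumerate(order_a))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

Values near +1 indicate agreement, near -1 reversal, and near 0 no association; significance would still need a separate test, as in the study above.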
arxiv-4881 | 0809.2858 | Polynomial kernels for 3-leaf power graph modification problems | <|reference_start|>Polynomial kernels for 3-leaf power graph modification problems: A graph G=(V,E) is a 3-leaf power iff there exists a tree T whose leaves are V and such that (u,v) is an edge iff u and v are at distance at most 3 in T. The 3-leaf power graph edge modification problems, i.e. edition (also known as the closest 3-leaf power), completion and edge-deletion, are FPT when parameterized by the size of the edge set modification. However polynomial kernel was known for none of these three problems. For each of them, we provide cubic kernels that can be computed in linear time for each of these problems. We thereby answer an open problem first mentioned by Dom, Guo, Huffner and Niedermeier (2005).<|reference_end|> | arxiv | @article{bessy2008polynomial,
title={Polynomial kernels for 3-leaf power graph modification problems},
author={Stephane Bessy and Christophe Paul and Anthony Perez},
journal={arXiv preprint arXiv:0809.2858},
year={2008},
doi={10.1007/978-3-642-10217-2_10},
archivePrefix={arXiv},
eprint={0809.2858},
primaryClass={cs.DM cs.DS}
} | bessy2008polynomial |
arxiv-4882 | 0809.2884 | On an algorithm that generates an interesting maximal set P(n) of the naturals for any n greater than or equal to 2 | <|reference_start|>On an algorithm that generates an interesting maximal set P(n) of the naturals for any n greater than or equal to 2: The paper considers the problem of finding the largest possible set P(n), a subset of the set N of the natural numbers, with the property that a number is in P(n) if and only if it is a sum of n distinct naturals all in P(n) or none in P(n). Here largest is in the set theoretic sense and n is greater than or equal to 2. We call P(n) a maximal set obeying this property. For small n say 2 or 3, it is possible to develop P(n) intuitively but we strongly felt the necessity of an algorithm for any n greater than or equal to 2. Now P(n) shall invariably be a infinite set so we define another set Q(n) such that Q(n)=N-P(n), prove that Q(n) is finite and, since P(n) is automatically known if Q(n) is known, design an algorithm of worst case O(1) complexity which generates Q(n).<|reference_end|> | arxiv | @article{das2008on,
title={On an algorithm that generates an interesting maximal set P(n) of the
naturals for any n greater than or equal to 2},
author={Bidu Prakash Das and Soubhik Chakraborty},
journal={arXiv preprint arXiv:0809.2884},
year={2008},
archivePrefix={arXiv},
eprint={0809.2884},
primaryClass={cs.DM}
} | das2008on |
arxiv-4883 | 0809.2931 | An Efficient Algorithm for Cooperative Spectrum Sensing in Cognitive Radio Networks | <|reference_start|>An Efficient Algorithm for Cooperative Spectrum Sensing in Cognitive Radio Networks: We consider the problem of Spectrum Sensing in Cognitive Radio Systems. We have developed a distributed algorithm that the Secondary users can run to sense the channel cooperatively. It is based on sequential detection algorithms which optimally use the past observations. We use the algorithm on secondary users with energy detectors although it can be used with matched filter and other spectrum sensing algorithms also. The algorithm provides very low detection delays and also consumes little energy. Furthermore it causes low interference to the primary users. We compare this algorithm to several recently proposed algorithms and show that it detects changes in spectrum faster than these algorithms and uses significantly less energy.<|reference_end|> | arxiv | @article{sharma2008an,
title={An Efficient Algorithm for Cooperative Spectrum Sensing in Cognitive
Radio Networks},
author={Vinod Sharma and ArunKumar Jayaprakasam},
journal={arXiv preprint arXiv:0809.2931},
year={2008},
archivePrefix={arXiv},
eprint={0809.2931},
primaryClass={cs.IT math.IT}
} | sharma2008an |
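The distributed scheme above builds on sequential change detection at each secondary user. As a hedged single-sensor illustration only (Gaussian energy-detector outputs with known pre/post-change means are an assumption here, not the paper's full cooperative algorithm), a one-sided CUSUM test:

```python
def cusum(samples, mu0, mu1, sigma, threshold):
    """One-sided CUSUM change detector: accumulate the log-likelihood
    ratio of 'primary present' (mean mu1) versus 'noise only' (mean
    mu0) for Gaussian observations, clipped at zero; declare a change
    when the statistic crosses the threshold.  Returns the index of
    the detecting sample, or None if no change is declared."""
    s = 0.0
    for k, x in enumerate(samples):
        # log N(x; mu1, sigma) - log N(x; mu0, sigma)
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        s = max(0.0, s + llr)
        if s >= threshold:
            return k
    return None
```

CUSUM uses all past observations optimally in the minimax sense, which is the property the cooperative algorithm exploits to keep detection delay low.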
arxiv-4884 | 0809.2956 | Communication-Efficient Construction of the Plane Localized Delaunay Graph | <|reference_start|>Communication-Efficient Construction of the Plane Localized Delaunay Graph: Let $V$ be a finite set of points in the plane. We present a 2-local algorithm that constructs a plane $\frac{4 \pi \sqrt{3}}{9}$-spanner of the unit-disk graph $\UDG(V)$. This algorithm makes only one round of communication and each point of $V$ broadcasts at most 5 messages. This improves the previously best message-bound of 11 by Ara\'{u}jo and Rodrigues (Fast localized Delaunay triangulation, Lecture Notes in Computer Science, volume 3544, 2004).<|reference_end|> | arxiv | @article{bose2008communication-efficient,
title={Communication-Efficient Construction of the Plane Localized Delaunay
Graph},
author={Prosenjit Bose, Paz Carmi, Michiel Smid, Daming Xu},
journal={arXiv preprint arXiv:0809.2956},
year={2008},
archivePrefix={arXiv},
eprint={0809.2956},
primaryClass={cs.CG}
} | bose2008communication-efficient |
arxiv-4885 | 0809.2957 | Sorting by Placement and Shift | <|reference_start|>Sorting by Placement and Shift: In sorting situations where the final destination of each item is known, it is natural to repeatedly choose items and place them where they belong, allowing the intervening items to shift by one to make room. (In fact, a special case of this algorithm is commonly used to hand-sort files.) However, it is not obvious that this algorithm necessarily terminates. We show that in fact the algorithm terminates after at most $2^{n-1}-1$ steps in the worst case (confirming a conjecture of L. Larson), and that there are super-exponentially many permutations for which this exact bound can be achieved. The proof involves a curious symmetrical binary representation.<|reference_end|> | arxiv | @article{elizalde2008sorting,
title={Sorting by Placement and Shift},
author={Sergi Elizalde, Peter Winkler},
journal={arXiv preprint arXiv:0809.2957},
year={2008},
archivePrefix={arXiv},
eprint={0809.2957},
primaryClass={math.CO cs.DM cs.DS}
} | elizalde2008sorting |
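The placement procedure described in the abstract is easy to simulate. The sketch below (illustrative code, not the authors') repeatedly picks an arbitrary misplaced item, inserts it at its final position while the intervening items shift by one, and counts steps; by the paper's bound the count never exceeds 2^(n-1)-1:

```python
import random

def place_and_shift_sort(perm, seed=0):
    """Sort a permutation of 0..n-1 by repeatedly choosing a misplaced
    item and inserting it at its final position (items in between
    shift by one).  Returns the number of placement steps performed."""
    rng = random.Random(seed)
    a = list(perm)
    steps = 0
    while True:
        misplaced = [i for i, v in enumerate(a) if v != i]
        if not misplaced:
            return steps
        i = rng.choice(misplaced)
        v = a.pop(i)     # take the chosen item out...
        a.insert(v, v)   # ...and drop it where it finally belongs
        steps += 1
```

Termination is the non-obvious part: each placement fixes one item but may dislodge others, yet the loop always exits within the exponential bound regardless of the choice sequence.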
arxiv-4886 | 0809.2965 | On Time-Bounded Incompressibility of Compressible Strings and Sequences | <|reference_start|>On Time-Bounded Incompressibility of Compressible Strings and Sequences: For every total recursive time bound $t$, a constant fraction of all compressible (low Kolmogorov complexity) strings is $t$-bounded incompressible (high time-bounded Kolmogorov complexity); there are uncountably many infinite sequences of which every initial segment of length $n$ is compressible to $\log n$ yet $t$-bounded incompressible below ${1/4}n - \log n$; and there are countably infinitely many recursive infinite sequences of which every initial segment is similarly $t$-bounded incompressible. These results are related to, but different from, Barzdins's lemma.<|reference_end|> | arxiv | @article{daylight2008on,
title={On Time-Bounded Incompressibility of Compressible Strings and Sequences},
author={E.G. Daylight (Univ. Amsterdam), W.M. Koolen (CWI), P.M.B. Vitanyi
(CWI and Univ Amsterdam)},
journal={arXiv preprint arXiv:0809.2965},
year={2008},
archivePrefix={arXiv},
eprint={0809.2965},
primaryClass={cs.CC cs.IT math.IT}
} | daylight2008on |
arxiv-4887 | 0809.2968 | Bounds on Covering Codes with the Rank Metric | <|reference_start|>Bounds on Covering Codes with the Rank Metric: In this paper, we investigate geometrical properties of the rank metric space and covering properties of rank metric codes. We first establish an analytical expression for the intersection of two balls with rank radii, and then derive an upper bound on the volume of the union of multiple balls with rank radii. Using these geometrical properties, we derive both upper and lower bounds on the minimum cardinality of a code with a given rank covering radius. The geometrical properties and bounds proposed in this paper are significant to the design, decoding, and performance analysis of rank metric codes.<|reference_end|> | arxiv | @article{gadouleau2008bounds,
title={Bounds on Covering Codes with the Rank Metric},
author={Maximilien Gadouleau and Zhiyuan Yan},
journal={arXiv preprint arXiv:0809.2968},
year={2008},
archivePrefix={arXiv},
eprint={0809.2968},
primaryClass={cs.IT math.IT}
} | gadouleau2008bounds |
arxiv-4888 | 0809.2970 | Single source shortest paths in $H$-minor free graphs | <|reference_start|>Single source shortest paths in $H$-minor free graphs: We present an algorithm for the Single Source Shortest Paths (SSSP) problem in \emph{$H$-minor free} graphs. For every fixed $H$, if $G$ is a graph with $n$ vertices having integer edge lengths and $s$ is a designated source vertex of $G$, the algorithm runs in $\tilde{O}(n^{\sqrt{11.5}-2} \log L) \le O(n^{1.392} \log L)$ time, where $L$ is the absolute value of the smallest edge length. The algorithm computes shortest paths and the distances from $s$ to all vertices of the graph, or else provides a certificate that $G$ is not $H$-minor free. Our result improves an earlier $O(n^{1.5} \log L)$ time algorithm for this problem, which follows from a general SSSP algorithm of Goldberg.<|reference_end|> | arxiv | @article{yuster2008single,
title={Single source shortest paths in $H$-minor free graphs},
author={Raphael Yuster},
journal={arXiv preprint arXiv:0809.2970},
year={2008},
archivePrefix={arXiv},
eprint={0809.2970},
primaryClass={cs.DS}
} | yuster2008single |
arxiv-4889 | 0809.2978 | A local construction of the Smith normal form of a matrix polynomial | <|reference_start|>A local construction of the Smith normal form of a matrix polynomial: We present an algorithm for computing a Smith form with multipliers of a regular matrix polynomial over a field. This algorithm differs from previous ones in that it computes a local Smith form for each irreducible factor in the determinant separately and then combines them into a global Smith form, whereas other algorithms apply a sequence of unimodular row and column operations to the original matrix. The performance of the algorithm in exact arithmetic is reported for several test cases.<|reference_end|> | arxiv | @article{wilkening2008a,
title={A local construction of the Smith normal form of a matrix polynomial},
author={Jon Wilkening and Jia Yu},
journal={arXiv preprint arXiv:0809.2978},
year={2008},
archivePrefix={arXiv},
eprint={0809.2978},
primaryClass={cs.SC}
} | wilkening2008a |
arxiv-4890 | 0809.2995 | Navigating ultrasmall worlds in ultrashort time | <|reference_start|>Navigating ultrasmall worlds in ultrashort time: Random scale-free networks are ultrasmall worlds. The average length of the shortest paths in networks of size N scales as lnlnN. Here we show that these ultrasmall worlds can be navigated in ultrashort time. Greedy routing on scale-free networks embedded in metric spaces finds paths with the average length scaling also as lnlnN. Greedy routing uses only local information to navigate a network. Nevertheless, it finds asymptotically the shortest paths, a direct computation of which requires global topology knowledge. Our findings imply that the peculiar structure of complex networks ensures that the lack of global topological awareness has asymptotically no impact on the length of communication paths. These results have important consequences for communication systems such as the Internet, where maintaining knowledge of current topology is a major scalability bottleneck.<|reference_end|> | arxiv | @article{boguna2008navigating,
title={Navigating ultrasmall worlds in ultrashort time},
author={Marian Boguna and Dmitri Krioukov},
journal={Phys. Rev. Lett. 102, 058701 (2009)},
year={2008},
doi={10.1103/PhysRevLett.102.058701},
archivePrefix={arXiv},
eprint={0809.2995},
primaryClass={cond-mat.dis-nn cs.NI physics.soc-ph}
} | boguna2008navigating |
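Greedy routing as studied above forwards each message, using only local information, to the neighbour whose embedding coordinate is closest to the destination's. A minimal sketch on an explicit metric embedding (function and variable names are illustrative, and the Euclidean metric stands in for whatever hidden metric space the network is embedded in):

```python
import math

def greedy_route(adj, coord, src, dst):
    """Greedy geometric routing: repeatedly forward to the neighbour
    closest to the destination in the embedding; give up if no
    neighbour is strictly closer than the current node (local
    minimum).  Returns the hop-by-hop path, or None on failure."""
    d = lambda u, v: math.dist(coord[u], coord[v])
    path, cur = [src], src
    while cur != dst:
        nxt = min(adj[cur], key=lambda v: d(v, dst))
        if d(nxt, dst) >= d(cur, dst):
            return None          # stuck: no progress possible
        path.append(nxt)
        cur = nxt
    return path
```

On a well-embedded network every step strictly decreases the distance to the target, which is why no global topology knowledge is needed.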
arxiv-4891 | 0809.3009 | Metrics-Based Spreadsheet Visualization: Support for Focused Maintenance | <|reference_start|>Metrics-Based Spreadsheet Visualization: Support for Focused Maintenance: Legacy spreadsheets are both, an asset, and an enduring problem concerning spreadsheets in business. To make spreadsheets stay alive and remain correct, comprehension of a given spreadsheet is highly important. Visualization techniques should ease the complex and mindblowing challenges of finding structures in a huge set of spreadsheet cells for building an adequate mental model of spreadsheet programs. Since spreadsheet programs are as diverse as the purpose they are serving and as inhomogeneous as their programmers, to find an appropriate representation or visualization technique for every spreadsheet program seems futile. We thus propose different visualization and representation methods that may ease spreadsheet comprehension but should not be applied with all kind of spreadsheet programs. Therefore, this paper proposes to use (complexity) measures as indicators for proper visualization.<|reference_end|> | arxiv | @article{hodnigg2008metrics-based,
title={Metrics-Based Spreadsheet Visualization: Support for Focused Maintenance},
author={Karin Hodnigg, Roland T. Mittermeir},
journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2008 79-94
ISBN 978-905617-69-2},
year={2008},
archivePrefix={arXiv},
eprint={0809.3009},
primaryClass={cs.SE cs.HC}
} | hodnigg2008metrics-based |
arxiv-4892 | 0809.3010 | Improved Upper Bounds for the Information Rates of the Secret Sharing Schemes Induced by the Vamos Matroid | <|reference_start|>Improved Upper Bounds for the Information Rates of the Secret Sharing Schemes Induced by the Vamos Matroid: An access structure specifying the qualified sets of a secret sharing scheme must have information rate less than or equal to one. The Vamos matroid induces two non-isomorphic access structures V1 and V6, which were shown by Marti-Farre and Padro to have information rates of at least 3/4. Beimel, Livne, and Padro showed that the information rates of V1 and V6 are bounded above by 10/11 and 9/10 respectively. Here we improve those upper bounds to 19/21 for V1 and 17/19 for V6.<|reference_end|> | arxiv | @article{metcalf-burton2008improved,
title={Improved Upper Bounds for the Information Rates of the Secret Sharing
Schemes Induced by the Vamos Matroid},
author={Jessica Ruth Metcalf-Burton},
journal={arXiv preprint arXiv:0809.3010},
year={2008},
archivePrefix={arXiv},
eprint={0809.3010},
primaryClass={cs.CR cs.IT math.IT}
} | metcalf-burton2008improved |
arxiv-4893 | 0809.3016 | Automating Spreadsheet Discovery & Risk Assessment | <|reference_start|>Automating Spreadsheet Discovery & Risk Assessment: There have been many articles and mishaps published about the risks of uncontrolled spreadsheets in today's business environment, including non-compliance, operational risk, errors, and fraud all leading to significant loss events. Spreadsheets fall into the realm of end user developed applications and are often absent the proper safeguards and controls an IT organization would enforce for enterprise applications. There is also an overall lack of software programming discipline enforced in how spreadsheets are developed. However, before an organization can apply proper controls and discipline to critical spreadsheets, an accurate and living inventory of spreadsheets across the enterprise must be created, and all critical spreadsheets must be identified. As such, this paper proposes an automated approach to the initial stages of the spreadsheet management lifecycle - discovery, inventory and risk assessment. Without the use of technology, these phases are often treated as a one-off project. By leveraging technology, they become a sustainable business process.<|reference_end|> | arxiv | @article{perry2008automating,
title={Automating Spreadsheet Discovery & Risk Assessment},
author={Eric Perry},
journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2008, pp. 61-67,
ISBN 978-905617-69-2},
year={2008},
archivePrefix={arXiv},
eprint={0809.3016},
primaryClass={cs.SE cs.HC}
} | perry2008automating |
arxiv-4894 | 0809.3023 | Graph-based Logic and Sketches | <|reference_start|>Graph-based Logic and Sketches: We present the basic ideas of forms (a generalization of Ehresmann's sketches) and their theories and models, more explicitly than in previous expositions. Forms provide the ability to specify mathematical structures and data types in any appropriate category, including many types of structures (e.g. function spaces) that cannot be specified by sketches. We also outline a new kind of formal logic (based on graphs instead of strings of symbols) that gives an intrinsically categorial definition of assertion and proof for each type of form. This formal logic is new to this monograph. The relationship between multisorted equational logic and finite product theories is worked out in detail.<|reference_end|> | arxiv | @article{bagchi2008graph-based,
title={Graph-based Logic and Sketches},
author={Atish Bagchi and Charles Wells},
journal={arXiv preprint arXiv:0809.3023},
year={2008},
archivePrefix={arXiv},
eprint={0809.3023},
primaryClass={math.CT cs.IT math.IT math.LO}
} | bagchi2008graph-based |
arxiv-4895 | 0809.3027 | Finding links and initiators: a graph reconstruction problem | <|reference_start|>Finding links and initiators: a graph reconstruction problem: Consider a 0-1 observation matrix M, where rows correspond to entities and columns correspond to signals; a value of 1 (or 0) in cell (i,j) of M indicates that signal j has been observed (or not observed) in entity i. Given such a matrix we study the problem of inferring the underlying directed links between entities (rows) and finding which entries in the matrix are initiators. We formally define this problem and propose an MCMC framework for estimating the links and the initiators given the matrix of observations M. We also show how this framework can be extended to incorporate a temporal aspect; instead of considering a single observation matrix M we consider a sequence of observation matrices M1,..., Mt over time. We show the connection between our problem and several problems studied in the field of social-network analysis. We apply our method to paleontological and ecological data and show that our algorithms work well in practice and give reasonable results.<|reference_end|> | arxiv | @article{mannila2008finding,
title={Finding links and initiators: a graph reconstruction problem},
author={Heikki Mannila and Evimaria Terzi},
journal={arXiv preprint arXiv:0809.3027},
year={2008},
archivePrefix={arXiv},
eprint={0809.3027},
primaryClass={cs.AI cs.DB physics.soc-ph}
} | mannila2008finding |
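The Mannila-Terzi entry above frames link and initiator discovery as MCMC estimation from a 0-1 observation matrix M. A toy Metropolis-style sketch of that idea (the edge score and acceptance rule here are illustrative assumptions, not the authors' actual model):

```python
import random

def mh_link_sampler(M, steps=1000, seed=0):
    """Toy Metropolis-style sampler over directed links between the rows
    (entities) of a 0-1 observation matrix M. The edge score below is a
    stand-in 'likelihood': a link i->j is rewarded for every signal
    observed in both entities."""
    rng = random.Random(seed)
    n = len(M)

    def edge_score(i, j):
        return sum(a & b for a, b in zip(M[i], M[j]))

    def total(E):
        return sum(edge_score(i, j) for i, j in E)

    edges = set()
    cur = total(edges)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue  # no self-links
        cand = set(edges)
        cand.symmetric_difference_update({(i, j)})  # propose flipping one link
        new = total(cand)
        # Accept improvements always, worsenings with small probability.
        if new >= cur or rng.random() < 0.1:
            edges, cur = cand, new
    return edges
```

In the paper's setting the score would be a posterior over both links and initiator labels; this sketch only shows the propose-score-accept loop.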
arxiv-4896 | 0809.3030 | Crowdsourcing, Attention and Productivity | <|reference_start|>Crowdsourcing, Attention and Productivity: The tragedy of the digital commons does not prevent the copious voluntary production of content that one witnesses in the web. We show through an analysis of a massive data set from \texttt{YouTube} that the productivity exhibited in crowdsourcing exhibits a strong positive dependence on attention, measured by the number of downloads. Conversely, a lack of attention leads to a decrease in the number of videos uploaded and the consequent drop in productivity, which in many cases asymptotes to no uploads whatsoever. Moreover, uploaders compare themselves to others when having low productivity and to themselves when exceeding a threshold.<|reference_end|> | arxiv | @article{huberman2008crowdsourcing,
title={Crowdsourcing, Attention and Productivity},
author={Bernardo A. Huberman and Daniel M. Romero and Fang Wu},
journal={arXiv preprint arXiv:0809.3030},
year={2008},
archivePrefix={arXiv},
eprint={0809.3030},
primaryClass={cs.CY physics.soc-ph}
} | huberman2008crowdsourcing
arxiv-4897 | 0809.3035 | Interference Alignment for Line-of-Sight Channels | <|reference_start|>Interference Alignment for Line-of-Sight Channels: The fully connected K-user interference channel is studied in a multipath environment with bandwidth W. We show that when each link consists of D physical paths, the total spectral efficiency can grow {\it linearly} with K. This result holds not merely in the limit of large transmit power P, but for any fixed P, and is therefore a stronger characterization than degrees of freedom. It is achieved via a form of interference alignment in the time domain. A caveat of this result is that W must grow with K, a phenomenon we refer to as {\it bandwidth scaling}. Our insight comes from examining channels with single path links (D=1), which we refer to as line-of-sight (LOS) links. For such channels we build a time-indexed interference graph and associate the communication problem with finding its maximal independent set. This graph has a stationarity property that we exploit to solve the problem efficiently via dynamic programming. Additionally, the interference graph enables us to demonstrate the necessity of bandwidth scaling for any scheme operating over LOS interference channels. Bandwidth scaling is then shown to also be a necessary ingredient for interference alignment in the K-user interference channel.<|reference_end|> | arxiv | @article{grokop2008interference,
title={Interference Alignment for Line-of-Sight Channels},
author={Leonard Grokop and David N. C. Tse and Roy D. Yates},
journal={arXiv preprint arXiv:0809.3035},
year={2008},
archivePrefix={arXiv},
eprint={0809.3035},
primaryClass={cs.IT math.IT}
} | grokop2008interference |
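The Grokop-Tse-Yates abstract above reduces communication over LOS interference channels to finding a maximal independent set of a time-indexed interference graph, solved by dynamic programming. On a simple path-structured graph (an illustrative simplification, not the paper's stationary graph) the classic recursion is:

```python
def max_independent_set_path(weights):
    """Max-weight independent set on a path graph: no two adjacent
    nodes (here, consecutive time slots) may both be selected.
    Classic O(n) dynamic program."""
    take, skip = 0, 0  # best value when the previous node is taken / skipped
    for w in weights:
        take, skip = skip + w, max(take, skip)
    return max(take, skip)
```

For the paper's actual stationary interference graph the DP state is richer, but the recursion has the same flavour.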
arxiv-4898 | 0809.3044 | Kinetostatic Performance of a Planar Parallel Mechanism with Variable Actuation | <|reference_start|>Kinetostatic Performance of a Planar Parallel Mechanism with Variable Actuation: This paper deals with a new planar parallel mechanism with variable actuation and its kinetostatic performance. A drawback of parallel mechanisms is the non-homogeneity of kinetostatic performance within their workspace. The common approach to solve this problem is the introduction of actuation redundancy, which involves force control algorithms. Another approach, highlighted in this paper, is to select the actuated joint in each limb with regard to the pose of the end-effector. First, the architecture of the mechanism and two kinetostatic performance indices are described. Then, the actuating modes of the mechanism are compared.<|reference_end|> | arxiv | @article{rakotomanga2008kinetostatic,
title={Kinetostatic Performance of a Planar Parallel Mechanism with Variable
Actuation},
author={Novona Rakotomanga (GPA) and Damien Chablat (IRCCyN) and Stéphane Caro
(IRCCyN)},
journal={arXiv preprint arXiv:0809.3044},
year={2008},
archivePrefix={arXiv},
eprint={0809.3044},
primaryClass={cs.RO}
} | rakotomanga2008kinetostatic |
arxiv-4899 | 0809.3083 | Supervised Dictionary Learning | <|reference_start|>Supervised Dictionary Learning: It is now well established that sparse signal models are well suited to restoration tasks and can effectively be learned from audio, image, and video data. Recent research has been aimed at learning discriminative sparse models instead of purely reconstructive ones. This paper proposes a new step in that direction, with a novel sparse representation for signals belonging to different classes in terms of a shared dictionary and multiple class-decision functions. The linear variant of the proposed model admits a simple probabilistic interpretation, while its most general variant admits an interpretation in terms of kernels. An optimization framework for learning all the components of the proposed model is presented, along with experimental results on standard handwritten digit and texture classification tasks.<|reference_end|> | arxiv | @article{mairal2008supervised,
title={Supervised Dictionary Learning},
author={Julien Mairal (WILLOW) and Francis Bach (WILLOW) and Jean Ponce (WILLOW,
LIENS) and Guillermo Sapiro and Andrew Zisserman (WILLOW, VGG)},
journal={arXiv preprint arXiv:0809.3083},
year={2008},
number={RR-6652},
archivePrefix={arXiv},
eprint={0809.3083},
primaryClass={cs.CV}
} | mairal2008supervised |
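The Mairal et al. entry above builds on sparse codes over a shared dictionary, which are typically obtained from an l1-regularised least-squares solve. A minimal pure-Python ISTA sketch for a single signal (the dictionary, step size, and penalty are illustrative, not the paper's learned model):

```python
def soft(v, t):
    """Soft-thresholding operator, the proximal map of t * |.|_1."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def ista(D, y, lam=0.05, step=0.1, iters=200):
    """Iterative shrinkage-thresholding for
    min_a 0.5 * ||y - D a||^2 + lam * ||a||_1, pure Python (no numpy).
    D is an m x k matrix given as a list of rows; columns are atoms."""
    m, k = len(D), len(D[0])
    a = [0.0] * k
    for _ in range(iters):
        # residual r = D a - y
        r = [sum(D[i][j] * a[j] for j in range(k)) - y[i] for i in range(m)]
        # gradient of the smooth part: D^T r
        g = [sum(D[i][j] * r[i] for i in range(m)) for j in range(k)]
        # gradient step, then shrink toward zero
        a = [soft(a[j] - step * g[j], step * lam) for j in range(k)]
    return a
```

With an identity dictionary and y = [1, 0], the iterates converge to the soft-thresholded signal [0.95, 0].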
arxiv-4900 | 0809.3091 | A Distributed Algorithm for Fair and Efficient User-Network Association in Multi-Technology Wireless Networks | <|reference_start|>A Distributed Algorithm for Fair and Efficient User-Network Association in Multi-Technology Wireless Networks: Recent mobile equipment (as well as the IEEE 802.21 standard) now offers the possibility for users to switch from one technology to another (vertical handover). This allows flexibility in resource assignments and, consequently, increases the potential throughput allocated to each user. In this paper, we design a fully distributed algorithm based on trial and error mechanisms that exploits the benefits of vertical handover by finding fair and efficient assignment schemes. On the one hand, mobiles gradually update the fraction of data packets they send to each network based on the rewards they receive from the stations. On the other hand, network stations send rewards to each mobile that represent the impact each mobile has on the cell throughput. This reward function is closely related to the concept of marginal cost in the pricing literature. Both the station and the mobile algorithms are simple enough to be implemented in current standard equipment. Based on tools from evolutionary games, potential games and replicator dynamics, we analytically show the convergence of the algorithm to solutions that are efficient and fair in terms of throughput. Moreover, we show that after convergence, each user is connected to a single network cell, which avoids costly repeated vertical handovers. Several simple heuristics based on this algorithm are proposed to achieve fast convergence. Indeed, for implementation purposes, the number of iterations should remain in the order of a few tens. We also compare, for different loads, the quality of their solutions.<|reference_end|> | arxiv | @article{coucheney2008a,
title={A Distributed Algorithm for Fair and Efficient User-Network Association
in Multi-Technology Wireless Networks},
author={Pierre Coucheney (INRIA Rhône-Alpes / LIG laboratoire d'Informatique
de Grenoble) and Corinne Touati (INRIA Rhône-Alpes / LIG laboratoire
d'Informatique de Grenoble) and Bruno Gaujal (INRIA Rhône-Alpes / LIG
laboratoire d'Informatique de Grenoble)},
journal={arXiv preprint arXiv:0809.3091},
year={2008},
number={RR-6653},
archivePrefix={arXiv},
eprint={0809.3091},
primaryClass={cs.GT}
} | coucheney2008a |
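The Coucheney-Touati-Gaujal abstract above establishes convergence via replicator dynamics. A minimal discrete-time Euler sketch of that dynamic (payoffs and step size are illustrative assumptions):

```python
def replicator_step(x, payoffs, dt=0.1):
    """One Euler step of the replicator dynamics x_i' = x_i (f_i - avg f):
    strategies beating the population-average payoff gain share."""
    avg = sum(xi * fi for xi, fi in zip(x, payoffs))
    x = [xi + dt * xi * (fi - avg) for xi, fi in zip(x, payoffs)]
    s = sum(x)
    return [xi / s for xi in x]  # renormalise onto the simplex
```

Starting from a uniform mixture, the better-paying strategy's share grows at every step, which is the monotonicity property that convergence arguments of this kind exploit.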