corpus_id: string (lengths 7–12)
paper_id: string (lengths 9–16)
title: string (lengths 1–261)
abstract: string (lengths 70–4.02k)
source: string (1 distinct value)
bibtex: string (lengths 208–20.9k)
citation_key: string (lengths 6–100)
arxiv-2501
0801.4405
A Locked Orthogonal Tree
<|reference_start|>A Locked Orthogonal Tree: We give a counterexample to a conjecture of Poon [Poo06] that any orthogonal tree in two dimensions can always be flattened by a continuous motion that preserves edge lengths and avoids self-intersection. We show our example is locked by extending results on strongly locked self-touching linkages due to Connelly, Demaine and Rote [CDR02] to allow zero-length edges as defined in [ADG07], which may be of independent interest. Our results also yield a locked tree with only eleven edges, which is the smallest known example of a locked tree.<|reference_end|>
arxiv
@article{charlton2008a, title={A Locked Orthogonal Tree}, author={David Charlton and Erik D. Demaine and Martin L. Demaine and Gregory Price and Yaa-Lirng Tu}, journal={arXiv preprint arXiv:0801.4405}, year={2008}, archivePrefix={arXiv}, eprint={0801.4405}, primaryClass={cs.CG} }
charlton2008a
arxiv-2502
0801.4417
Controllable coherent population transfers in superconducting qubits for quantum computing
<|reference_start|>Controllable coherent population transfers in superconducting qubits for quantum computing: We propose an approach to coherently transfer populations between selected quantum states in one- and two-qubit systems by using controllable Stark-chirped rapid adiabatic passages (SCRAPs). These {\it evolution-time insensitive} transfers, assisted by easily implementable single-qubit phase-shift operations, could serve as elementary logic gates for quantum computing. Specifically, this proposal could be conveniently demonstrated with existing Josephson phase qubits. Our proposal can find an immediate application in the readout of these qubits. Indeed, the broken parity symmetries of the bound states in these artificial "atoms" provide an efficient approach to design the required adiabatic pulses.<|reference_end|>
arxiv
@article{wei2008controllable, title={Controllable coherent population transfers in superconducting qubits for quantum computing}, author={L.F. Wei and J.R. Johansson and L.X. Cen and S. Ashhab and Franco Nori}, journal={Phys. Rev. Lett. 100, 113601 (2008)}, year={2008}, doi={10.1103/PhysRevLett.100.113601}, archivePrefix={arXiv}, eprint={0801.4417}, primaryClass={quant-ph cond-mat.mes-hall cond-mat.supr-con cs.GT} }
wei2008controllable
arxiv-2503
0801.4423
It's Not What You Have, But How You Use It: Compromises in Mobile Device Use
<|reference_start|>It's Not What You Have, But How You Use It: Compromises in Mobile Device Use: As users begin to use many more devices for personal information management (PIM) than just the traditional desktop computer, it is essential for HCI researchers to understand how these devices are being used in the wild and their roles in users' information environments. We conducted a study of 220 knowledge workers about their devices, the activities they performed on each, and the groups of devices used together. Our findings indicate that several devices are often used in groups; integrated multi-function portable devices have begun to replace single-function devices for communication (e.g. email and IM). Users use certain features opportunistically because they happen to be carrying a multi-function device with them. The use of multiple devices and multi-function devices is fraught with compromises as users must choose and make trade-offs among various factors.<|reference_end|>
arxiv
@article{tungare2008it's, title={It's Not What You Have, But How You Use It: Compromises in Mobile Device Use}, author={Manas Tungare and Manuel Perez-Quinones}, journal={arXiv preprint arXiv:0801.4423}, year={2008}, archivePrefix={arXiv}, eprint={0801.4423}, primaryClass={cs.HC} }
tungare2008it's
arxiv-2504
0801.4480
Abstractions for biomolecular computations
<|reference_start|>Abstractions for biomolecular computations: Deoxyribonucleic acid is increasingly being understood to be an informational molecule, capable of information processing. It has found application in the determination of non-deterministic algorithms and in the design of molecular computing devices. This is a theoretical analysis of the mathematical properties and relations of the molecules constituting DNA, which explains in part why DNA is a successful computing molecule.<|reference_end|>
arxiv
@article{okunoye2008abstractions, title={Abstractions for biomolecular computations}, author={Babatunde O. Okunoye}, journal={arXiv preprint arXiv:0801.4480}, year={2008}, archivePrefix={arXiv}, eprint={0801.4480}, primaryClass={cs.OH} }
okunoye2008abstractions
arxiv-2505
0801.4483
Biopsies prostatiques sous guidage \'echographique 3D et temps r\'eel (4D) sur fant\^ome. Etude comparative versus guidage 2D
<|reference_start|>Biopsies prostatiques sous guidage \'echographique 3D et temps r\'eel (4D) sur fant\^ome. Etude comparative versus guidage 2D: This paper analyzes the impact of using 2D or 3D ultrasound on the efficiency of prostate biopsies. The evaluation is performed on home-made phantoms. The study shows that the accuracy is significantly improved.<|reference_end|>
arxiv
@article{long2008biopsies, title={Biopsies prostatiques sous guidage \'echographique 3D et temps r\'eel (4D) sur fant\^ome. Etude comparative versus guidage 2D}, author={Jean-Alexandre Long (TIMC) and Vincent Daanen (TIMC) and Alexandre Moreau-Gaudry (TIMC, CHU-Grenoble CIC) and Jocelyne Troccaz (TIMC) and Jean-Jacques Rambeaud and Jean-Luc Descotes}, journal={Progr\`es en Urologie 17, 7 (2007) 1137-1143}, year={2008}, archivePrefix={arXiv}, eprint={0801.4483}, primaryClass={cs.OH} }
long2008biopsies
arxiv-2506
0801.4544
A Neyman-Pearson Approach to Universal Erasure and List Decoding
<|reference_start|>A Neyman-Pearson Approach to Universal Erasure and List Decoding: When information is to be transmitted over an unknown, possibly unreliable channel, an erasure option at the decoder is desirable. Using constant-composition random codes, we propose a generalization of Csiszar and Korner's Maximum Mutual Information decoder with erasure option for discrete memoryless channels. The new decoder is parameterized by a weighting function that is designed to optimize the fundamental tradeoff between undetected-error and erasure exponents for a compound class of channels. The class of weighting functions may be further enlarged to optimize a similar tradeoff for list decoders -- in that case, undetected-error probability is replaced with average number of incorrect messages in the list. Explicit solutions are identified. The optimal exponents admit simple expressions in terms of the sphere-packing exponent, at all rates below capacity. For small erasure exponents, these expressions coincide with those derived by Forney (1968) for symmetric channels, using Maximum a Posteriori decoding. Thus for those channels at least, ignorance of the channel law is inconsequential. Conditions for optimality of the Csiszar-Korner rule and of the simpler empirical-mutual-information thresholding rule are identified. The error exponents are evaluated numerically for the binary symmetric channel.<|reference_end|>
arxiv
@article{moulin2008a, title={A Neyman-Pearson Approach to Universal Erasure and List Decoding}, author={Pierre Moulin}, journal={arXiv preprint arXiv:0801.4544}, year={2008}, doi={10.1109/ISIT.2008.4594948}, archivePrefix={arXiv}, eprint={0801.4544}, primaryClass={cs.IT math.IT} }
moulin2008a
arxiv-2507
0801.4571
Is SP BP?
<|reference_start|>Is SP BP?: The Survey Propagation (SP) algorithm for solving $k$-SAT problems has been shown recently as an instance of the Belief Propagation (BP) algorithm. In this paper, we show that for general constraint-satisfaction problems, SP may not be reducible from BP. We also establish the conditions under which such a reduction is possible. Along our development, we present a unification of the existing SP algorithms in terms of a probabilistically interpretable iterative procedure -- weighted Probabilistic Token Passing.<|reference_end|>
arxiv
@article{tu2008is, title={Is SP BP?}, author={Ronghui Tu and Yongyi Mao and Jiying Zhao}, journal={arXiv preprint arXiv:0801.4571}, year={2008}, archivePrefix={arXiv}, eprint={0801.4571}, primaryClass={cs.IT math.IT} }
tu2008is
arxiv-2508
0801.4585
The Complexity of Power-Index Comparison
<|reference_start|>The Complexity of Power-Index Comparison: We study the complexity of the following problem: Given two weighted voting games G' and G'' that each contain a player p, in which of these games is p's power index value higher? We study this problem with respect to both the Shapley-Shubik power index [SS54] and the Banzhaf power index [Ban65,DS79]. Our main result is that for both of these power indices the problem is complete for probabilistic polynomial time (i.e., is PP-complete). We apply our results to partially resolve some recently proposed problems regarding the complexity of weighted voting games. We also study the complexity of the raw Shapley-Shubik power index. Deng and Papadimitriou [DP94] showed that the raw Shapley-Shubik power index is #P-metric-complete. We strengthen this by showing that the raw Shapley-Shubik power index is many-one complete for #P. And our strengthening cannot possibly be further improved to parsimonious completeness, since we observe that, in contrast with the raw Banzhaf power index, the raw Shapley-Shubik power index is not #P-parsimonious-complete.<|reference_end|>
arxiv
@article{faliszewski2008the, title={The Complexity of Power-Index Comparison}, author={Piotr Faliszewski and Lane A. Hemaspaandra}, journal={arXiv preprint arXiv:0801.4585}, year={2008}, number={URCS TR-2008-929}, archivePrefix={arXiv}, eprint={0801.4585}, primaryClass={cs.CC cs.GT} }
faliszewski2008the
arxiv-2509
0801.4592
Understanding the Paradoxical Effects of Power Control on the Capacity of Wireless Networks
<|reference_start|>Understanding the Paradoxical Effects of Power Control on the Capacity of Wireless Networks: Recent works show conflicting results: network capacity may increase or decrease with higher transmission power under different scenarios. In this work, we want to understand this paradox. Specifically, we address the following questions: (1) Theoretically, should we increase or decrease transmission power to maximize network capacity? (2) Theoretically, how much network capacity gain can we achieve by power control? (3) Under realistic situations, how do power control, link scheduling and routing interact with each other? Under which scenarios can we expect a large capacity gain by using higher transmission power? To answer these questions, firstly, we prove that the optimal network capacity is a non-decreasing function of transmission power. Secondly, we prove that the optimal network capacity can be increased unlimitedly by higher transmission power in some network configurations. However, when nodes are distributed uniformly, the gain of optimal network capacity by higher transmission power is upper-bounded by a positive constant. Thirdly, we discuss why network capacity in practice may increase or decrease with higher transmission power under different scenarios using carrier sensing and the minimum hop-count routing. Extensive simulations are carried out to verify our analysis.<|reference_end|>
arxiv
@article{wang2008understanding, title={Understanding the Paradoxical Effects of Power Control on the Capacity of Wireless Networks}, author={Yue Wang and John C. S. Lui and Dah-Ming Chiu}, journal={arXiv preprint arXiv:0801.4592}, year={2008}, doi={10.1109/T-WC.2009.080142}, archivePrefix={arXiv}, eprint={0801.4592}, primaryClass={cs.NI cs.PF} }
wang2008understanding
arxiv-2510
0801.4664
A Molecular Model for Communication through a Secrecy System
<|reference_start|>A Molecular Model for Communication through a Secrecy System: Codes have been used for centuries to convey secret information. To a cryptanalyst, the interception of a code is only the first step in recovering a secret message. Deoxyribonucleic acid (DNA) is a biological and molecular code. Through the work of Marshall Nirenberg and others, DNA is now understood to specify amino acids in triplet codes of bases. The possibility of DNA encoding secret information in a natural language is explored, since a code is expected to have a distinct mathematical solution.<|reference_end|>
arxiv
@article{babatunde2008a, title={A Molecular Model for Communication through a Secrecy System}, author={O. Okunoye Babatunde}, journal={arXiv preprint arXiv:0801.4664}, year={2008}, archivePrefix={arXiv}, eprint={0801.4664}, primaryClass={cs.CR} }
babatunde2008a
arxiv-2511
0801.4699
On Cobweb Admissible Sequences - The Production Theorem
<|reference_start|>On Cobweb Admissible Sequences - The Production Theorem: In this note further decisive observations on cobweb admissible sequences are shared with the audience. In particular, a proof of Theorem 1 (by Dziemia\'nczuk) from [1], announced in Kolkata, India, in December 2007, is delivered here. Namely, we claim that any cobweb admissible sequence F is the pointwise product of primary cobweb admissible sequences taking values one and/or a certain power of an appropriate prime number p. An algorithm to produce the family of all cobweb-admissible sequences, i.e. to solve Problem 1 from [1], one of several problems posed in the source papers [2,3], is also given here, using the ideas and methods implicitly present in [4].<|reference_end|>
arxiv
@article{dziemiańczuk2008on, title={On Cobweb Admissible Sequences - The Production Theorem}, author={M. Dziemia\'nczuk}, journal={Proceedings of FCS'08, Interesting results, new models, and methodologies, pp.163-165, July 14-17, 2008, Las Vegas, USA}, year={2008}, archivePrefix={arXiv}, eprint={0801.4699}, primaryClass={math.CO cs.DM} }
dziemiańczuk2008on
arxiv-2512
0801.4706
A Class of Errorless Codes for Over-loaded Synchronous Wireless and Optical CDMA Systems
<|reference_start|>A Class of Errorless Codes for Over-loaded Synchronous Wireless and Optical CDMA Systems: In this paper we introduce a new class of codes for over-loaded synchronous wireless and optical CDMA systems which increases the number of users for fixed number of chips without introducing any errors. Equivalently, the chip rate can be reduced for a given number of users, which implies bandwidth reduction for downlink wireless systems. An upper bound for the maximum number of users for a given number of chips is derived. Also, lower and upper bounds for the sum channel capacity of a binary over-loaded CDMA are derived that can predict the existence of such over-loaded codes. We also propose a simplified maximum likelihood method for decoding these types of over-loaded codes. Although a high percentage of the over-loading factor degrades the system performance in noisy channels, simulation results show that this degradation is not significant. More importantly, for moderate values of Eb/N0 (in the range of 6-10 dB) or higher, the proposed codes perform much better than the binary Welch bound equality sequences.<|reference_end|>
arxiv
@article{pad2008a, title={A Class of Errorless Codes for Over-loaded Synchronous Wireless and Optical CDMA Systems}, author={P. Pad and F. Marvasti and K. Alishahi and S. Akbari}, journal={arXiv preprint arXiv:0801.4706}, year={2008}, archivePrefix={arXiv}, eprint={0801.4706}, primaryClass={cs.IT math.CO math.IT} }
pad2008a
arxiv-2513
0801.4709
Temporal Correlations of Local Network Losses
<|reference_start|>Temporal Correlations of Local Network Losses: We introduce a continuum model describing data losses in a single node of a packet-switched network (like the Internet) which preserves the discrete nature of the data loss process. {\em By construction}, the model has critical behavior with a sharp transition from exponentially small to finite losses with increasing data arrival rate. We show that such a model exhibits strong fluctuations in the loss rate at the critical point and non-Markovian power-law correlations in time, in spite of the Markovian character of the data arrival process. The continuum model allows for rather general incoming data packet distributions and can be naturally generalized to consider the buffer server idleness statistics.<|reference_end|>
arxiv
@article{stepanenko2008temporal, title={Temporal Correlations of Local Network Losses}, author={A. S. Stepanenko and C. C. Constantinou and I. V. Yurkevich and I. V. Lerner}, journal={Phys. Rev. E 77, 046115 (2008)}, year={2008}, doi={10.1103/PhysRevE.77.046115}, archivePrefix={arXiv}, eprint={0801.4709}, primaryClass={cs.NI cond-mat.stat-mech} }
stepanenko2008temporal
arxiv-2514
0801.4714
Breaking One-Round Key-Agreement Protocols in the Random Oracle Model
<|reference_start|>Breaking One-Round Key-Agreement Protocols in the Random Oracle Model: In this paper we study one-round key-agreement protocols analogous to Merkle's puzzles in the random oracle model. The players Alice and Bob are allowed to query a random permutation oracle $n$ times and upon their queries and communication, they both output the same key with high probability. We prove that Eve can always break such a protocol by querying the oracle $O(n^2)$ times. The long-time unproven optimality of the quadratic bound in the fully general, multi-round scenario has been shown recently by Barak and Mahmoody-Ghidary. The results in this paper have been found independently of their work.<|reference_end|>
arxiv
@article{sotakova2008breaking, title={Breaking One-Round Key-Agreement Protocols in the Random Oracle Model}, author={Miroslava Sotakova}, journal={arXiv preprint arXiv:0801.4714}, year={2008}, archivePrefix={arXiv}, eprint={0801.4714}, primaryClass={cs.CC cs.CR} }
sotakova2008breaking
arxiv-2515
0801.4716
Methods to integrate a language model with semantic information for a word prediction component
<|reference_start|>Methods to integrate a language model with semantic information for a word prediction component: Most current word prediction systems make use of n-gram language models (LM) to estimate the probability of the following word in a phrase. In the past years there have been many attempts to enrich such language models with further syntactic or semantic information. We want to explore the predictive powers of Latent Semantic Analysis (LSA), a method that has been shown to provide reliable information on long-distance semantic dependencies between words in a context. We present and evaluate here several methods that integrate LSA-based information with a standard language model: a semantic cache, partial reranking, and different forms of interpolation. We found that all methods show significant improvements, compared to the 4-gram baseline, and most of them to a simple cache model as well.<|reference_end|>
arxiv
@article{wandmacher2008methods, title={Methods to integrate a language model with semantic information for a word prediction component}, author={Tonio Wandmacher and Jean-Yves Antoine}, journal={arXiv preprint arXiv:0801.4716}, year={2008}, archivePrefix={arXiv}, eprint={0801.4716}, primaryClass={cs.CL} }
wandmacher2008methods
arxiv-2516
0801.4746
Concerning Olga, the Beautiful Little Street Dancer (Adjectives as Higher-Order Polymorphic Functions)
<|reference_start|>Concerning Olga, the Beautiful Little Street Dancer (Adjectives as Higher-Order Polymorphic Functions): In this paper we suggest a typed compositional semantics for nominal compounds of the form [Adj Noun] that models adjectives as higher-order polymorphic functions, and where types are assumed to represent concepts in an ontology that reflects our commonsense view of the world and the way we talk about it in ordinary language. In addition to [Adj Noun] compounds, our proposal also seems to suggest a plausible explanation for well-known adjective ordering restrictions.<|reference_end|>
arxiv
@article{saba2008concerning, title={Concerning Olga, the Beautiful Little Street Dancer (Adjectives as Higher-Order Polymorphic Functions)}, author={Walid S. Saba}, journal={arXiv preprint arXiv:0801.4746}, year={2008}, archivePrefix={arXiv}, eprint={0801.4746}, primaryClass={cs.CL cs.LO} }
saba2008concerning
arxiv-2517
0801.4750
Graceful Degradation of Air Traffic Operations
<|reference_start|>Graceful Degradation of Air Traffic Operations: The introduction of new technologies and concepts of operation in the air transportation system is not possible unless they can be proven not to adversely affect the system operation under not only nominal, but also degraded conditions. In extreme scenarios, degraded operations due to partial or complete technological failures should never endanger system safety. Many past system evolutions, whether ground-based or airborne, have been based on trial-and-error, and system safety was addressed only after a specific event yielded dramatic or near-dramatic consequences. Future system evolutions, however, must leverage available computation, prior knowledge and abstract reasoning to anticipate all possible system degradations and prove that such degradations are graceful and safe. This paper is concerned with the graceful degradation of high-density, structured arrival traffic against partial or complete surveillance failures. It is shown that for equal performance requirements, some traffic configurations might be easier to handle than others, thereby offering a quantitative perspective on these traffic configurations' ability to "gracefully degrade". To support our work, we also introduce a new conflict resolution algorithm, aimed at solving conflicts involving many aircraft when aircraft position information is in the process of degrading.<|reference_end|>
arxiv
@article{gariel2008graceful, title={Graceful Degradation of Air Traffic Operations}, author={Maxime Gariel and Eric Feron}, journal={arXiv preprint arXiv:0801.4750}, year={2008}, archivePrefix={arXiv}, eprint={0801.4750}, primaryClass={cs.OH} }
gariel2008graceful
arxiv-2518
0801.4774
Source Code Protection for Applications Written in Microsoft Excel and Google Spreadsheet
<|reference_start|>Source Code Protection for Applications Written in Microsoft Excel and Google Spreadsheet: Spreadsheets are used to develop application software that is distributed to users. Unfortunately, the users often have the ability to change the programming statements ("source code") of the spreadsheet application. This causes a host of problems. By critically examining the suitability of spreadsheet computer programming languages for application development, six "application development features" are identified, with source code protection being the most important. We investigate the status of these features and discuss how they might be implemented in the dominant Microsoft Excel spreadsheet and in the new Google Spreadsheet. Although Google Spreadsheet currently provides no source code control, its web-centric delivery model offers technical advantages for future provision of a rich set of features. Excel has a number of tools that can be combined to provide "pretty good protection" of source code, but weak passwords reduce its robustness. User access to Excel source code must be considered a programmer choice rather than an attribute of the spreadsheet.<|reference_end|>
arxiv
@article{grossman2008source, title={Source Code Protection for Applications Written in Microsoft Excel and Google Spreadsheet}, author={Thomas A. Grossman}, journal={Proc. European Spreadsheet Risks Int. Grp. 2007 81-91 ISBN 978-905617-58-6}, year={2008}, archivePrefix={arXiv}, eprint={0801.4774}, primaryClass={cs.SE} }
grossman2008source
arxiv-2519
0801.4775
Spreadsheet Assurance by "Control Around" is a Viable Alternative to the Traditional Approach
<|reference_start|>Spreadsheet Assurance by "Control Around" is a Viable Alternative to the Traditional Approach: The traditional approach to spreadsheet auditing generally consists of auditing every distinct formula within a spreadsheet. Although tools are developed to support auditors during this process, the approach is still very time consuming and therefore relatively expensive. As an alternative to the traditional "control through" approach, this paper discusses a "control around" approach. Within the proposed approach not all distinct formulas are audited separately, but the relationship between input data and output data of a spreadsheet is audited through comparison with a shadow model developed in a modelling language. Differences between the two models then imply possible errors in the spreadsheet. This paper describes relevant issues regarding the "control around" approach and the circumstances in which this approach is preferred above a traditional spreadsheet audit approach.<|reference_end|>
arxiv
@article{ettema2008spreadsheet, title={Spreadsheet Assurance by "Control Around" is a Viable Alternative to the Traditional Approach}, author={Harmen Ettema and Paul Janssen and Jacques de Swart}, journal={Proc. European Spreadsheet Risks Int. Grp. 2001 107-116 ISBN:1 86166 179 7}, year={2008}, archivePrefix={arXiv}, eprint={0801.4775}, primaryClass={cs.SE} }
ettema2008spreadsheet
arxiv-2520
0801.4777
Non-Deterministic Communication Complexity of Regular Languages
<|reference_start|>Non-Deterministic Communication Complexity of Regular Languages: In this thesis, we study the place of regular languages within the communication complexity setting. In particular, we are interested in the non-deterministic communication complexity of regular languages. We show that a regular language has either O(1) or Omega(log n) non-deterministic complexity. We obtain several linear lower bound results which cover a wide range of regular languages having linear non-deterministic complexity. These lower bound results also imply a result in semigroup theory: we obtain sufficient conditions for not being in the positive variety Pol(Com). To obtain our results, we use algebraic techniques. In the study of regular languages, the algebraic point of view pioneered by Eilenberg [Eil74] has led to many interesting results. Viewing a semigroup as a computational device that recognizes languages has proven to be prolific from both semigroup theory and formal languages perspectives. In this thesis, we provide further instances of such mutualism.<|reference_end|>
arxiv
@article{ada2008non-deterministic, title={Non-Deterministic Communication Complexity of Regular Languages}, author={Anil Ada}, journal={arXiv preprint arXiv:0801.4777}, year={2008}, archivePrefix={arXiv}, eprint={0801.4777}, primaryClass={cs.CC} }
ada2008non-deterministic
arxiv-2521
0801.4790
Information Width
<|reference_start|>Information Width: Kolmogorov argued that the concept of information exists also in problems with no underlying stochastic model (unlike Shannon's representation of information), for instance the information contained in an algorithm or in the genome. He introduced a combinatorial notion of entropy and information $I(x:y)$ conveyed by a binary string $x$ about the unknown value of a variable $y$. The current paper poses the following questions: what is the relationship between the information conveyed by $x$ about $y$ and the description complexity of $x$? Is there a notion of cost of information? Are there limits on how efficiently $x$ conveys information? To answer these questions, Kolmogorov's definition is extended and a new concept termed {\em information width}, which is similar to $n$-widths in approximation theory, is introduced. Information from any input source, e.g., sample-based, general side-information or a hybrid of both, can be evaluated by a single common formula. An application to the space of binary functions is considered.<|reference_end|>
arxiv
@article{ratsaby2008information, title={Information Width}, author={Joel Ratsaby}, journal={arXiv preprint arXiv:0801.4790}, year={2008}, archivePrefix={arXiv}, eprint={0801.4790}, primaryClass={cs.DM cs.IT cs.LG math.IT} }
ratsaby2008information
arxiv-2522
0801.4794
On the Complexity of Binary Samples
<|reference_start|>On the Complexity of Binary Samples: Consider a class $\mathcal{H}$ of binary functions $h: X\to\{-1, +1\}$ on a finite interval $X=[0, B]\subset \mathbb{R}$. Define the {\em sample width} of $h$ on a finite subset (a sample) $S\subset X$ as $w_S(h) \equiv \min_{x\in S} |w_h(x)|$, where $w_h(x) = h(x) \max\{a\geq 0: h(z)=h(x), x-a\leq z\leq x+a\}$. Let $\mathbb{S}_\ell$ be the space of all samples in $X$ of cardinality $\ell$ and consider sets of wide samples, i.e., {\em hypersets} which are defined as $A_{\beta, h} = \{S\in \mathbb{S}_\ell: w_{S}(h) \geq \beta\}$. Through an application of the Sauer-Shelah result on the density of sets, an upper estimate is obtained on the growth function (or trace) of the class $\{A_{\beta, h}: h\in\mathcal{H}\}$, $\beta>0$, i.e., on the number of possible dichotomies obtained by intersecting all hypersets with a fixed collection of samples $S\in\mathbb{S}_\ell$ of cardinality $m$. The estimate is $2\sum_{i=0}^{2\lfloor B/(2\beta)\rfloor}{m-\ell\choose i}$.<|reference_end|>
arxiv
@article{ratsaby2008on, title={On the Complexity of Binary Samples}, author={Joel Ratsaby}, journal={arXiv preprint arXiv:0801.4794}, year={2008}, archivePrefix={arXiv}, eprint={0801.4794}, primaryClass={cs.DM cs.AI cs.LG} }
ratsaby2008on
arxiv-2523
0801.4802
Investigating the Potential of Test-Driven Development for Spreadsheet Engineering
<|reference_start|>Investigating the Potential of Test-Driven Development for Spreadsheet Engineering: It is widely documented that the absence of a structured approach to spreadsheet engineering is a key factor in the high level of spreadsheet errors. In this paper we propose and investigate the application of Test-Driven Development to the creation of spreadsheets. Test-Driven Development is an emerging development technique in software engineering that has been shown to result in better quality software code. It has also been shown that this code requires less testing and is easier to maintain. Through a pair of case studies we demonstrate that Test-Driven Development can be applied to the development of spreadsheets. We present the detail of these studies preceded by a clear explanation of the technique and its application to spreadsheet engineering. A supporting tool under development by the authors is also documented along with proposed research to determine the effectiveness of the methodology and the associated tool.<|reference_end|>
arxiv
@article{rust2008investigating, title={Investigating the Potential of Test-Driven Development for Spreadsheet Engineering}, author={Alan Rust and Brian Bishop and Kevin McDaid}, journal={Proc. European Spreadsheet Risks Int. Grp. 2006 95-105 ISBN:1-905617-08-9}, year={2008}, archivePrefix={arXiv}, eprint={0801.4802}, primaryClass={cs.SE} }
rust2008investigating
arxiv-2524
0801.4807
Automatic Text Area Segmentation in Natural Images
<|reference_start|>Automatic Text Area Segmentation in Natural Images: We present a hierarchical method for segmenting text areas in natural images. The method assumes that the text is written with a contrasting color on a more or less uniform background. But no assumption is made regarding the language or character set used to write the text. In particular, the text can contain simple graphics or symbols. The key feature of our approach is that we first concentrate on finding the background of the text, before testing whether there is actually text on the background. Since uniform areas are easy to find in natural images, and since text backgrounds define areas which contain "holes" (where the text is written) we thus look for uniform areas containing "holes" and label them as text backgrounds candidates. Each candidate area is then further tested for the presence of text within its convex hull. We tested our method on a database of 65 images including English and Urdu text. The method correctly segmented all the text areas in 63 of these images, and in only 4 of these were areas that do not contain text also segmented.<|reference_end|>
arxiv
@article{jafri2008automatic, title={Automatic Text Area Segmentation in Natural Images}, author={Syed Ali Raza Jafri, Mireille Boutin, and Edward J. Delp}, journal={arXiv preprint arXiv:0801.4807}, year={2008}, archivePrefix={arXiv}, eprint={0801.4807}, primaryClass={cs.CV} }
jafri2008automatic
arxiv-2525
0801.4817
The REESSE2+ Public-key Encryption Scheme
<|reference_start|>The REESSE2+ Public-key Encryption Scheme: This paper gives the definitions of an anomalous super-increasing sequence and an anomalous subset sum separately, proves the two properties of an anomalous super-increasing sequence, and proposes the REESSE2+ public-key encryption scheme which includes the three algorithms for key generation, encryption and decryption. The paper discusses the necessity and sufficiency of the lever function for preventing the Shamir extremum attack, analyzes the security of REESSE2+ against extracting a private key from a public key through the exhaustive search, recovering a plaintext from a ciphertext plus a knapsack of high density through the L3 lattice basis reduction method, and heuristically obtaining a plaintext through the meet-in-the-middle attack or the adaptive-chosen-ciphertext attack. The authors evaluate the time complexity of REESSE2+ encryption and decryption algorithms, compare REESSE2+ with ECC and NTRU, and find that the encryption speed of REESSE2+ is ten thousand times faster than ECC and NTRU bearing the equivalent security, and the decryption speed of REESSE2+ is roughly equivalent to ECC and NTRU respectively.<|reference_end|>
arxiv
@article{su2008the, title={The REESSE2+ Public-key Encryption Scheme}, author={Shenghui Su and Shuwang Lv}, journal={arXiv preprint arXiv:0801.4817}, year={2008}, archivePrefix={arXiv}, eprint={0801.4817}, primaryClass={cs.CR cs.CC} }
su2008the
arxiv-2526
0801.4845
Improved lower bound for deterministic broadcasting in radio networks
<|reference_start|>Improved lower bound for deterministic broadcasting in radio networks: We consider the problem of deterministic broadcasting in radio networks when the nodes have limited knowledge about the topology of the network. We show that for every deterministic broadcasting protocol there exists a network, of radius 2, for which the protocol takes at least $\Omega(\sqrt{n})$ rounds for completing the broadcast. Our argument can be extended to prove a lower bound of $\Omega(\sqrt{nD})$ rounds for broadcasting in radio networks of radius $D$. This resolves one of the open problems posed in [29], wherein the authors proved a lower bound of $\Omega(n^{1/4})$ rounds for broadcasting in constant diameter networks. We prove the new $\Omega(\sqrt{n})$ lower bound for a special family of radius 2 networks. Each network of this family consists of $O(\sqrt{n})$ components which are connected to each other via only the source node. At the heart of the proof is a novel simulation argument, which essentially says that any arbitrarily complicated strategy of the source node can be simulated by the nodes of the networks, if the source node just transmits partial topological knowledge about some component instead of arbitrarily complicated messages. To the best of our knowledge this type of simulation argument is novel and may be useful in further improving the lower bound or may find use in other applications. Keywords: radio networks, deterministic broadcast, lower bound, advice string, simulation, selective families, limited topological knowledge.<|reference_end|>
arxiv
@article{brito2008improved, title={Improved lower bound for deterministic broadcasting in radio networks}, author={Carlos Brito and Shailesh Vaya}, journal={arXiv preprint arXiv:0801.4845}, year={2008}, archivePrefix={arXiv}, eprint={0801.4845}, primaryClass={cs.DM cs.DC} }
brito2008improved
arxiv-2527
0801.4851
Bicretieria Optimization in Routing Games
<|reference_start|>Bicretieria Optimization in Routing Games: Two important metrics for measuring the quality of routing paths are the maximum edge congestion $C$ and maximum path length $D$. Here, we study bicriteria in routing games where each player $i$ selfishly selects a path that simultaneously minimizes its maximum edge congestion $C_i$ and path length $D_i$. We study the stability and price of anarchy of two bicriteria games: - {\em Max games}, where the social cost is $\max(C,D)$ and the player cost is $\max(C_i, D_i)$. We prove that max games are stable and convergent under best-response dynamics, and that the price of anarchy is bounded above by the maximum path length in the players' strategy sets. We also show that this bound is tight in worst-case scenarios. - {\em Sum games}, where the social cost is $C+D$ and the player cost is $C_i+D_i$. For sum games, we first show the negative result that there are game instances that have no Nash-equilibria. Therefore, we examine an approximate game called the {\em sum-bucket game} that is always convergent (and therefore stable). We show that the price of anarchy in sum-bucket games is bounded above by $C^* \cdot D^* / (C^* + D^*)$ (with a poly-log factor), where $C^*$ and $D^*$ are the optimal coordinated congestion and path length. Thus, the sum-bucket game has typically superior price of anarchy bounds than the max game. In fact, when either $C^*$ or $D^*$ is small (e.g. constant) the social cost of the Nash-equilibria is very close to the coordinated optimal $C^* + D^*$ (within a poly-log factor). We also show that the price of anarchy bound is tight for cases where both $C^*$ and $D^*$ are large.<|reference_end|>
arxiv
@article{busch2008bicretieria, title={Bicretieria Optimization in Routing Games}, author={Costas Busch and Rajgopal Kannan}, journal={arXiv preprint arXiv:0801.4851}, year={2008}, archivePrefix={arXiv}, eprint={0801.4851}, primaryClass={cs.GT cs.DS} }
busch2008bicretieria
arxiv-2528
0801.4911
On the Double Coset Membership Problem for Permutation Groups
<|reference_start|>On the Double Coset Membership Problem for Permutation Groups: We show that the Double Coset Membership problem for permutation groups possesses perfect zero-knowledge proofs.<|reference_end|>
arxiv
@article{verbitsky2008on, title={On the Double Coset Membership Problem for Permutation Groups}, author={Oleg Verbitsky}, journal={Algebraic structures and their applications, pp. 351--363 (2002)}, year={2008}, archivePrefix={arXiv}, eprint={0801.4911}, primaryClass={cs.CC} }
verbitsky2008on
arxiv-2529
0801.4917
Zero-Knowledge Proofs of the Conjugacy for Permutation Groups
<|reference_start|>Zero-Knowledge Proofs of the Conjugacy for Permutation Groups: We design a perfect zero-knowledge proof system for recognition if two permutation groups are conjugate.<|reference_end|>
arxiv
@article{verbitsky2008zero-knowledge, title={Zero-Knowledge Proofs of the Conjugacy for Permutation Groups}, author={Oleg Verbitsky}, journal={Bulletin of the Lviv University, Series in Mechanics and Mathematics. Vol. 61, pp. 195--205 (2003)}, year={2008}, archivePrefix={arXiv}, eprint={0801.4917}, primaryClass={cs.CC} }
verbitsky2008zero-knowledge
arxiv-2530
0802.0003
On mobile sets in the binary hypercube
<|reference_start|>On mobile sets in the binary hypercube: If two distance-3 codes have the same neighborhood, then each of them is called a mobile set. In the (4k+3)-dimensional binary hypercube, there exists a mobile set of cardinality 2*6^k that cannot be split into mobile sets of smaller cardinalities or represented as a natural extension of a mobile set in a hypercube of smaller dimension. Keywords: mobile set; 1-perfect code.<|reference_end|>
arxiv
@article{vasil'ev2008on, title={On mobile sets in the binary hypercube}, author={Yuriy Vasil'ev (Sobolev Institute of Mathematics, Novosibirsk, Russia), Sergey Avgustinovich (Sobolev Institute of Mathematics, Novosibirsk, Russia), Denis Krotov (Sobolev Institute of Mathematics, Novosibirsk, Russia)}, journal={Diskretn. Anal. Issled. Oper. 15(3) 2008, 11-21 (in Russian)}, year={2008}, archivePrefix={arXiv}, eprint={0802.0003}, primaryClass={math.CO cs.IT math.IT} }
vasil'ev2008on
arxiv-2531
0802.0006
New Perspectives and some Celebrated Quantum Inequalities
<|reference_start|>New Perspectives and some Celebrated Quantum Inequalities: Some of the important inequalities associated with quantum entropy are immediate algebraic consequences of the Hansen-Pedersen-Jensen inequality. A general argument is given in terms of the matrix perspective of an operator convex function. A matrix analogue of Mar\'{e}chal's extended perspectives provides additional inequalities, including a $p+q\leq 1$ result of Lieb.<|reference_end|>
arxiv
@article{effros2008new, title={New Perspectives and some Celebrated Quantum Inequalities}, author={Edward G. Effros}, journal={arXiv preprint arXiv:0802.0006}, year={2008}, archivePrefix={arXiv}, eprint={0802.0006}, primaryClass={math-ph cs.IT math.IT math.MP} }
effros2008new
arxiv-2532
0802.0017
Improved Deterministic Length Reduction
<|reference_start|>Improved Deterministic Length Reduction: This paper presents a new technique for deterministic length reduction. This technique improves the running time of the algorithm presented in \cite{LR07} for performing fast convolution in sparse data. While the regular fast convolution of vectors $V_1,V_2$ whose sizes are $N_1,N_2$ respectively, takes $O(N_1 \log N_2)$ using FFT, using the new technique for length reduction, the algorithm proposed in \cite{LR07} performs the convolution in $O(n_1 \log^3 n_1)$, where $n_1$ is the number of non-zero values in $V_1$. The algorithm assumes that $V_1$ is given in advance, and $V_2$ is given in running time. The novel technique presented in this paper improves the convolution time to $O(n_1 \log^2 n_1)$ {\sl deterministically}, which equals the best running time achieved by a {\sl randomized} algorithm. The preprocessing time of the new technique remains the same as the preprocessing time of \cite{LR07}, which is $O(n_1^2)$. This assumes and deals with the case where $N_1$ is polynomial in $n_1$. In the case where $N_1$ is exponential in $n_1$, a reduction to a polynomial case can be used. In this paper we also improve the preprocessing time of this reduction from $O(n_1^4)$ to $O(n_1^3{\rm polylog}(n_1))$.<|reference_end|>
arxiv
@article{amir2008improved, title={Improved Deterministic Length Reduction}, author={Amihood Amir and Klim Efremenko and Oren Kapah and Ely Porat and Amir Rothschild}, journal={arXiv preprint arXiv:0802.0017}, year={2008}, archivePrefix={arXiv}, eprint={0802.0017}, primaryClass={cs.DS} }
amir2008improved
arxiv-2533
0802.0024
Solving the Maximum Agreement SubTree and the Maximum Compatible Tree problems on many bounded degree trees
<|reference_start|>Solving the Maximum Agreement SubTree and the Maximum Compatible Tree problems on many bounded degree trees: Given a set of leaf-labeled trees with identical leaf sets, the well-known "Maximum Agreement SubTree" problem (MAST) consists of finding a subtree homeomorphically included in all input trees and with the largest number of leaves. Its variant called "Maximum Compatible Tree" (MCT) is less stringent, as it allows the input trees to be refined. Both problems are of particular interest in computational biology, where trees encountered have often small degrees. In this paper, we study the parameterized complexity of MAST and MCT with respect to the maximum degree, denoted by D, of the input trees. It is known that MAST is polynomial for bounded D. As a counterpart, we show that the problem is W[1]-hard with respect to parameter D. Moreover, relying on recent advances in parameterized complexity we obtain a tight lower bound: while MAST can be solved in O(N^{O(D)}) time where N denotes the input length, we show that an O(N^{o(D)}) bound is not achievable, unless SNP is contained in SE. We also show that MCT is W[1]-hard with respect to D, and that MCT cannot be solved in O(N^{o(2^{D/2})}) time, unless SNP is contained in SE.<|reference_end|>
arxiv
@article{guillemot2008solving, title={Solving the Maximum Agreement SubTree and the Maximum Compatible Tree problems on many bounded degree trees}, author={Sylvain Guillemot and Francois Nicolas}, journal={Proceedings of the 17th Annual Symposium on Combinatorial Pattern Matching (CPM'06), volume 4009 of Lecture Notes in Computer Science, pages 165--176. Springer-Verlag, 2006}, year={2008}, archivePrefix={arXiv}, eprint={0802.0024}, primaryClass={cs.CC cs.DM} }
guillemot2008solving
arxiv-2534
0802.0030
Mission impossible: Computing the network coding capacity region
<|reference_start|>Mission impossible: Computing the network coding capacity region: One of the main theoretical motivations for the emerging area of network coding is the achievability of the max-flow/min-cut rate for single source multicast. This can exceed the rate achievable with routing alone, and is achievable with linear network codes. The multi-source problem is more complicated. Computation of its capacity region is equivalent to determination of the set of all entropy functions $\Gamma^*$, which is non-polyhedral. The aim of this paper is to demonstrate that this difficulty can arise even in single source problems. In particular, for single source networks with hierarchical sink requirements, and for single source networks with secrecy constraints. In both cases, we exhibit networks whose capacity regions involve $\Gamma^*$. As in the multi-source case, linear codes are insufficient.<|reference_end|>
arxiv
@article{chan2008mission, title={Mission impossible: Computing the network coding capacity region}, author={Terence Chan and Alex Grant}, journal={arXiv preprint arXiv:0802.0030}, year={2008}, archivePrefix={arXiv}, eprint={0802.0030}, primaryClass={cs.IT math.IT} }
chan2008mission
arxiv-2535
0802.0116
Shallow Models for Non-Iterative Modal Logics
<|reference_start|>Shallow Models for Non-Iterative Modal Logics: The methods used to establish PSPACE-bounds for modal logics can roughly be grouped into two classes: syntax driven methods establish that exhaustive proof search can be performed in polynomial space whereas semantic approaches directly construct shallow models. In this paper, we follow the latter approach and establish generic PSPACE-bounds for a large and heterogeneous class of modal logics in a coalgebraic framework. In particular, no complete axiomatisation of the logic under scrutiny is needed. This not only complements our earlier, syntactic, approach conceptually, but also covers a wide variety of new examples which are difficult to harness by purely syntactic means. Apart from re-proving known complexity bounds for a large variety of structurally different logics, we apply our method to obtain previously unknown PSPACE-bounds for Elgesem's logic of agency and for graded modal logic over reflexive frames.<|reference_end|>
arxiv
@article{schröder2008shallow, title={Shallow Models for Non-Iterative Modal Logics}, author={Lutz Schr\"oder and Dirk Pattinson}, journal={arXiv preprint arXiv:0802.0116}, year={2008}, number={Imperial College TR Computing 2008/3}, archivePrefix={arXiv}, eprint={0802.0116}, primaryClass={cs.LO cs.AI cs.CC cs.MA} }
schröder2008shallow
arxiv-2536
0802.0130
About the true type of smoothers
<|reference_start|>About the true type of smoothers: We employ the variational formulation and the Euler-Lagrange equations to study the steady-state error in linear non-causal estimators (smoothers). We give a complete description of the steady-state error for inputs that are polynomial in time. We show that the steady-state error regime in a smoother is similar to that in a filter of double the type. This means that the steady-state error in the optimal smoother is significantly smaller than that in the Kalman filter. The results reveal a significant advantage of smoothing over filtering with respect to robustness to model uncertainty.<|reference_end|>
arxiv
@article{ezri2008about, title={About the true type of smoothers}, author={D. Ezri, B.Z. Bobrovsky, Z. Schuss}, journal={arXiv preprint arXiv:0802.0130}, year={2008}, archivePrefix={arXiv}, eprint={0802.0130}, primaryClass={math.OC cs.IT math.IT} }
ezri2008about
arxiv-2537
0802.0137
Fault-Tolerant Partial Replication in Large-Scale Database Systems
<|reference_start|>Fault-Tolerant Partial Replication in Large-Scale Database Systems: We investigate a decentralised approach to committing transactions in a replicated database, under partial replication. Previous protocols either re-execute transactions entirely and/or compute a total order of transactions. In contrast, ours applies update values, and orders only conflicting transactions. It results that transactions execute faster, and distributed databases commit in small committees. Both effects contribute to preserve scalability as the number of databases and transactions increase. Our algorithm ensures serializability, and is live and safe in spite of faults.<|reference_end|>
arxiv
@article{sutra2008fault-tolerant, title={Fault-Tolerant Partial Replication in Large-Scale Database Systems}, author={Pierre Sutra (INRIA Rocquencourt), Marc Shapiro (INRIA Rocquencourt)}, journal={arXiv preprint arXiv:0802.0137}, year={2008}, number={RR-6440}, archivePrefix={arXiv}, eprint={0802.0137}, primaryClass={cs.DB} }
sutra2008fault-tolerant
arxiv-2538
0802.0179
On the Relation Between the Index Coding and the Network Coding Problems
<|reference_start|>On the Relation Between the Index Coding and the Network Coding Problems: In this paper we show that the Index Coding problem captures several important properties of the more general Network Coding problem. An instance of the Index Coding problem includes a server that holds a set of information messages $X=\{x_1,...,x_k\}$ and a set of receivers $R$. Each receiver has some side information, known to the server, represented by a subset of $X$ and demands another subset of $X$. The server uses a noiseless communication channel to broadcast encodings of messages in $X$ to satisfy the receivers' demands. The goal of the server is to find an encoding scheme that requires the minimum number of transmissions. We show that any instance of the Network Coding problem can be efficiently reduced to an instance of the Index Coding problem. Our reduction shows that several important properties of the Network Coding problem carry over to the Index Coding problem. In particular, we prove that both scalar linear and vector linear codes are insufficient for achieving the minimal number of transmissions.<|reference_end|>
arxiv
@article{rouayheb2008on, title={On the Relation Between the Index Coding and the Network Coding Problems}, author={Salim El Rouayheb, Alex Sprintson, Costas Georghiades}, journal={arXiv preprint arXiv:0802.0179}, year={2008}, archivePrefix={arXiv}, eprint={0802.0179}, primaryClass={cs.IT math.IT} }
rouayheb2008on
arxiv-2539
0802.0188
Partitioning the Threads of a Mobile System
<|reference_start|>Partitioning the Threads of a Mobile System: In this paper, we show how thread partitioning helps in proving properties of mobile systems. Thread partitioning consists in gathering the threads of a mobile system into several classes. The partitioning criterion is left as a parameter of both the mobility model and the properties we are interested in. Then, we design a polynomial time abstract interpretation-based static analysis that counts the number of threads inside each partition class.<|reference_end|>
arxiv
@article{feret2008partitioning, title={Partitioning the Threads of a Mobile System}, author={J\'er\^ome Feret}, journal={arXiv preprint arXiv:0802.0188}, year={2008}, archivePrefix={arXiv}, eprint={0802.0188}, primaryClass={cs.OH} }
feret2008partitioning
arxiv-2540
0802.0212
A topological formal treatment for scenario-based software specification of concurrent real-time systems
<|reference_start|>A topological formal treatment for scenario-based software specification of concurrent real-time systems: Real-time systems are computing systems in which the meeting of their requirements is vital for their correctness. Consequently, if the real-time requirements of these systems are poorly understood and verified, the results can be disastrous and lead to irremediable project failures at the early phases of development. The present work addresses the problem of detecting deadlock situations early in the requirements specification phase of a concurrent real-time system, proposing a simple proof-of-concept prototype that joins scenario-based requirements specifications and techniques based on topology. The efforts are concentrated in the integration of the formal representation of Message Sequence Chart scenarios into the deadlock detection algorithm of Fajstrup et al., based on geometric and algebraic topology.<|reference_end|>
arxiv
@article{alves2008a, title={A topological formal treatment for scenario-based software specification of concurrent real-time systems}, author={Miriam C. B. Alves, Christine C. Dantas, Nanci N. Arai, Rovedy B. da Silva (Institute of Aeronautics and Space - IAE/CTA, Brazil)}, journal={arXiv preprint arXiv:0802.0212}, year={2008}, archivePrefix={arXiv}, eprint={0802.0212}, primaryClass={cs.SE cs.LO} }
alves2008a
arxiv-2541
0802.0249
Hopf Algebras in General and in Combinatorial Physics: a practical introduction
<|reference_start|>Hopf Algebras in General and in Combinatorial Physics: a practical introduction: This tutorial is intended to give an accessible introduction to Hopf algebras. The mathematical context is that of representation theory, and we also illustrate the structures with examples taken from combinatorics and quantum physics, showing that in this latter case the axioms of Hopf algebra arise naturally. The text contains many exercises, some taken from physics, aimed at expanding and exemplifying the concepts introduced.<|reference_end|>
arxiv
@article{duchamp2008hopf, title={Hopf Algebras in General and in Combinatorial Physics: a practical introduction}, author={G. H. E. Duchamp (LIPN), P. Blasiak (IFJ-Pan), A. Horzela (IFJ-Pan), K. A. Penson (LPTMC), A. I. Solomon}, journal={arXiv preprint arXiv:0802.0249}, year={2008}, archivePrefix={arXiv}, eprint={0802.0249}, primaryClass={quant-ph cs.SC math.CO} }
duchamp2008hopf
arxiv-2542
0802.0251
Multi-Layer Perceptrons and Symbolic Data
<|reference_start|>Multi-Layer Perceptrons and Symbolic Data: In some real world situations, linear models are not sufficient to represent accurately complex relations between input variables and output variables of a studied system. Multilayer Perceptrons are one of the most successful non-linear regression tool but they are unfortunately restricted to inputs and outputs that belong to a normed vector space. In this chapter, we propose a general recoding method that allows to use symbolic data both as inputs and outputs to Multilayer Perceptrons. The recoding is quite simple to implement and yet provides a flexible framework that allows to deal with almost all practical cases. The proposed method is illustrated on a real world data set.<|reference_end|>
arxiv
@article{rossi2008multi-layer, title={Multi-Layer Perceptrons and Symbolic Data}, author={Fabrice Rossi (INRIA Rocquencourt / INRIA Sophia Antipolis, CEREMADE), Brieuc Conan-Guez (INRIA Rocquencourt / INRIA Sophia Antipolis, LITA)}, journal={Symbolic Data Analysis and the SODAS Software Wiley (Ed.) (2008) 373-391}, year={2008}, archivePrefix={arXiv}, eprint={0802.0251}, primaryClass={cs.NE} }
rossi2008multi-layer
arxiv-2543
0802.0252
Acc\'el\'eration des cartes auto-organisatrices sur tableau de dissimilarit\'es par s\'eparation et \'evaluation
<|reference_start|>Acc\'el\'eration des cartes auto-organisatrices sur tableau de dissimilarit\'es par s\'eparation et \'evaluation: In this paper, a new implementation of the adaptation of Kohonen self-organising maps (SOM) to dissimilarity matrices is proposed. This implementation relies on the branch and bound principle to reduce the algorithm running time. An important property of this new approach is that the obtained algorithm produces exactly the same results as the standard algorithm.<|reference_end|>
arxiv
@article{conan-guez2008acc\'el\'eration, title={Acc\'el\'eration des cartes auto-organisatrices sur tableau de dissimilarit\'es par s\'eparation et \'evaluation}, author={Brieuc Conan-Guez (LITA), Fabrice Rossi (INRIA Rocquencourt / INRIA Sophia Antipolis)}, journal={REVUE DES NOUVELLES TECHNOLOGIES DE L'INFORMATION (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0802.0252}, primaryClass={cs.NE} }
conan-guez2008acc\'el\'eration
arxiv-2544
0802.0260
A proposal to a generalised splicing with a self assembly approach
<|reference_start|>A proposal to a generalised splicing with a self assembly approach: Theory of splicing is an abstract model of the recombinant behaviour of DNAs. In a splicing system, two strings to be spliced are taken from the same set and the splicing rule is from another set. Here we propose a generalised splicing (GS) model with three components, two strings from two languages and a splicing rule from third component. We propose a generalised self assembly (GSA) of strings. Two strings $u_1xv_1$ and $u_2xv_2$ self assemble over $x$ and generate $u_1xv_2$ and $u_2xv_1$. We study the relationship between GS and GSA. We study some classes of generalised splicing languages with the help of generalised self assembly.<|reference_end|>
arxiv
@article{jeganathan2008a, title={A proposal to a generalised splicing with a self assembly approach}, author={L. Jeganathan, R. Rama, Ritabrata Sengupta}, journal={arXiv preprint arXiv:0802.0260}, year={2008}, archivePrefix={arXiv}, eprint={0802.0260}, primaryClass={cs.DM} }
jeganathan2008a
arxiv-2545
0802.0287
A data-driven functional projection approach for the selection of feature ranges in spectra with ICA or cluster analysis
<|reference_start|>A data-driven functional projection approach for the selection of feature ranges in spectra with ICA or cluster analysis: Prediction problems from spectra are largely encountered in chemometry. In addition to accurate predictions, it is often needed to extract information about which wavelengths in the spectra contribute in an effective way to the quality of the prediction. This implies to select wavelengths (or wavelength intervals), a problem associated to variable selection. In this paper, it is shown how this problem may be tackled in the specific case of smooth (for example infrared) spectra. The functional character of the spectra (their smoothness) is taken into account through a functional variable projection procedure. Contrarily to standard approaches, the projection is performed on a basis that is driven by the spectra themselves, in order to best fit their characteristics. The methodology is illustrated by two examples of functional projection, using Independent Component Analysis and functional variable clustering, respectively. The performances on two standard infrared spectra benchmarks are illustrated.<|reference_end|>
arxiv
@article{krier2008a, title={A data-driven functional projection approach for the selection of feature ranges in spectra with ICA or cluster analysis}, author={Catherine Krier (DICE), Fabrice Rossi (INRIA Rocquencourt / INRIA Sophia Antipolis), Damien Fran\c{c}ois (CESAME), Michel Verleysen (DICE - MLG)}, journal={Chemometrics and Intelligent Laboratory Systems (2008)}, year={2008}, doi={10.1016/j.chemolab.2007.09.004}, archivePrefix={arXiv}, eprint={0802.0287}, primaryClass={cs.NE} }
krier2008a
arxiv-2546
0802.0314
On the complexity of finding gapped motifs
<|reference_start|>On the complexity of finding gapped motifs: This paper has been withdrawn by the corresponding author because the newest version is now published in Journal of Discrete Algorithms.<|reference_end|>
arxiv
@article{michael2008on, title={On the complexity of finding gapped motifs}, author={Morris Michael and Francois Nicolas and Esko Ukkonen}, journal={arXiv preprint arXiv:0802.0314}, year={2008}, doi={10.1016/j.jda.2009.12.001}, archivePrefix={arXiv}, eprint={0802.0314}, primaryClass={cs.CC cs.DM} }
michael2008on
arxiv-2547
0802.0342
The Case for Structured Random Codes in Network Capacity Theorems
<|reference_start|>The Case for Structured Random Codes in Network Capacity Theorems: Random coding arguments are the backbone of most channel capacity achievability proofs. In this paper, we show that in their standard form, such arguments are insufficient for proving some network capacity theorems: structured coding arguments, such as random linear or lattice codes, attain higher rates. Historically, structured codes have been studied as a stepping stone to practical constructions. However, K\"{o}rner and Marton demonstrated their usefulness for capacity theorems through the derivation of the optimal rate region of a distributed functional source coding problem. Here, we use multicasting over finite field and Gaussian multiple-access networks as canonical examples to demonstrate that even if we want to send bits over a network, structured codes succeed where simple random codes fail. Beyond network coding, we also consider distributed computation over noisy channels and a special relay-type problem.<|reference_end|>
arxiv
@article{nazer2008the, title={The Case for Structured Random Codes in Network Capacity Theorems}, author={Bobak Nazer and Michael Gastpar}, journal={arXiv preprint arXiv:0802.0342}, year={2008}, archivePrefix={arXiv}, eprint={0802.0342}, primaryClass={cs.IT math.IT} }
nazer2008the
arxiv-2548
0802.0351
Path Loss Exponent Estimation in a Large Field of Interferers
<|reference_start|>Path Loss Exponent Estimation in a Large Field of Interferers: In wireless channels, the path loss exponent (PLE) has a strong impact on the quality of links, and hence, it needs to be accurately estimated for the efficient design and operation of wireless networks. In this paper, we address the problem of PLE estimation in large wireless networks, which is relevant to several important issues in networked communications such as localization, energy-efficient routing, and channel access. We consider a large ad hoc network where nodes are distributed as a homogeneous Poisson point process on the plane and the channels are subject to Nakagami-m fading. We propose and discuss three distributed algorithms for estimating the PLE under these settings which explicitly take into account the interference in the network. In addition, we provide simulation results to demonstrate the performance of the algorithms and quantify the estimation errors. We also describe how to estimate the PLE accurately even in networks with spatially varying PLEs and more general node distributions.<|reference_end|>
arxiv
@article{srinivasa2008path, title={Path Loss Exponent Estimation in a Large Field of Interferers}, author={Sunil Srinivasa and Martin Haenggi}, journal={arXiv preprint arXiv:0802.0351}, year={2008}, archivePrefix={arXiv}, eprint={0802.0351}, primaryClass={cs.IT math.IT} }
srinivasa2008path
arxiv-2549
0802.0414
The exit problem in optimal non-causal estimation
<|reference_start|>The exit problem in optimal non-causal estimation: We study the phenomenon of loss of lock in the optimal non-causal phase estimation problem, a benchmark problem in nonlinear estimation. Our method is based on the computation of the asymptotic distribution of the optimal estimation error in case the number of trajectories in the optimization problem is finite. The computation is based directly on the minimum noise energy optimality criterion rather than on state equations of the error, as is the usual case in the literature. The results include an asymptotic computation of the mean time to lose lock (MTLL) in the optimal smoother. We show that the MTLL in the first and second order smoothers is significantly longer than that in the causal extended Kalman filter.<|reference_end|>
arxiv
@article{ezri2008the, title={The exit problem in optimal non-causal estimation}, author={Doron Ezri, Ben-Tzion Bobrovsky, Zeev Schuss}, journal={arXiv preprint arXiv:0802.0414}, year={2008}, archivePrefix={arXiv}, eprint={0802.0414}, primaryClass={math.OC cs.IT math.IT} }
ezri2008the
arxiv-2550
0802.0423
Approximability Distance in the Space of H-Colourability Problems
<|reference_start|>Approximability Distance in the Space of H-Colourability Problems: A graph homomorphism is a vertex map which carries edges from a source graph to edges in a target graph. We study the approximability properties of the Weighted Maximum H-Colourable Subgraph problem (MAX H-COL). The instances of this problem are edge-weighted graphs G and the objective is to find a subgraph of G that has maximal total edge weight, under the condition that the subgraph has a homomorphism to H; note that for H=K_k this problem is equivalent to MAX k-CUT. To this end, we introduce a metric structure on the space of graphs which allows us to extend previously known approximability results to larger classes of graphs. Specifically, the approximation algorithms for MAX CUT by Goemans and Williamson and MAX k-CUT by Frieze and Jerrum can be used to yield non-trivial approximation results for MAX H-COL. For a variety of graphs, we show near-optimality results under the Unique Games Conjecture. We also use our method for comparing the performance of Frieze & Jerrum's algorithm with Hastad's approximation algorithm for general MAX 2-CSP. This comparison is, in most cases, favourable to Frieze & Jerrum.<|reference_end|>
arxiv
@article{färnqvist2008approximability, title={Approximability Distance in the Space of H-Colourability Problems}, author={Tommy F\"arnqvist, Peter Jonsson and Johan Thapper}, journal={arXiv preprint arXiv:0802.0423}, year={2008}, archivePrefix={arXiv}, eprint={0802.0423}, primaryClass={cs.CC} }
färnqvist2008approximability
arxiv-2551
0802.0483
Popularity, Novelty and Attention
<|reference_start|>Popularity, Novelty and Attention: We analyze the role that popularity and novelty play in attracting the attention of users to dynamic websites. We do so by determining the performance of three different strategies that can be utilized to maximize attention. The first one prioritizes novelty while the second emphasizes popularity. A third strategy looks myopically into the future and prioritizes stories that are expected to generate the most clicks within the next few minutes. We show that the first two strategies should be selected on the basis of the rate of novelty decay, while the third strategy performs sub-optimally in most cases. We also demonstrate that the relative performance of the first two strategies as a function of the rate of novelty decay changes abruptly around a critical value, resembling a phase transition in the physical world.<|reference_end|>
arxiv
@article{wu2008popularity, title={Popularity, Novelty and Attention}, author={Fang Wu and Bernardo A. Huberman}, journal={arXiv preprint arXiv:0802.0483}, year={2008}, archivePrefix={arXiv}, eprint={0802.0483}, primaryClass={cs.CY} }
wu2008popularity
arxiv-2552
0802.0487
Algorithmically independent sequences
<|reference_start|>Algorithmically independent sequences: Two objects are independent if they do not affect each other. Independence is well-understood in classical information theory, but less in algorithmic information theory. Working in the framework of algorithmic information theory, the paper proposes two types of independence for arbitrary infinite binary sequences and studies their properties. Our two proposed notions of independence have some of the intuitive properties that one naturally expects. For example, for every sequence $x$, the set of sequences that are independent (in the weaker of the two senses) with $x$ has measure one. For both notions of independence we investigate to what extent pairs of independent sequences, can be effectively constructed via Turing reductions (from one or more input sequences). In this respect, we prove several impossibility results. For example, it is shown that there is no effective way of producing from an arbitrary sequence with positive constructive Hausdorff dimension two sequences that are independent (even in the weaker type of independence) and have super-logarithmic complexity. Finally, a few conjectures and open questions are discussed.<|reference_end|>
arxiv
@article{calude2008algorithmically, title={Algorithmically independent sequences}, author={Cristian Calude, Marius Zimand}, journal={arXiv preprint arXiv:0802.0487}, year={2008}, archivePrefix={arXiv}, eprint={0802.0487}, primaryClass={cs.IT cs.SE math.AG math.IT} }
calude2008algorithmically
arxiv-2553
0802.0534
Capacity of Wireless Networks within o(log(SNR)) - the Impact of Relays, Feedback, Cooperation and Full-Duplex Operation
<|reference_start|>Capacity of Wireless Networks within o(log(SNR)) - the Impact of Relays, Feedback, Cooperation and Full-Duplex Operation: Recent work has characterized the sum capacity of time-varying/frequency-selective wireless interference networks and $X$ networks within $o(\log({SNR}))$, i.e., with an accuracy approaching 100% at high SNR (signal to noise power ratio). In this paper, we seek similar capacity characterizations for wireless networks with relays, feedback, full duplex operation, and transmitter/receiver cooperation through noisy channels. First, we consider a network with $S$ source nodes, $R$ relay nodes and $D$ destination nodes with random time-varying/frequency-selective channel coefficients and global channel knowledge at all nodes. We allow full-duplex operation at all nodes, as well as causal noise-free feedback of all received signals to all source and relay nodes. The sum capacity of this network is characterized as $\frac{SD}{S+D-1}\log({SNR})+o(\log({SNR}))$. The implication of the result is that the capacity benefits of relays, causal feedback, transmitter/receiver cooperation through physical channels and full duplex operation become a negligible fraction of the network capacity at high SNR. Some exceptions to this result are also pointed out in the paper. Second, we consider a network with $K$ full duplex nodes with an independent message from every node to every other node in the network. We find that the sum capacity of this network is bounded below by $\frac{K(K-1)}{2K-2}\log({SNR})+o(\log({SNR}))$ and bounded above by $\frac{K(K-1)}{2K-3}\log({SNR})+o(\log({SNR}))$.<|reference_end|>
arxiv
@article{cadambe2008capacity, title={Capacity of Wireless Networks within o(log(SNR)) - the Impact of Relays, Feedback, Cooperation and Full-Duplex Operation}, author={Viveck R. Cadambe, Syed A. Jafar}, journal={IEEE Transactions on Information Theory, Vol. 55, No. 5, May 2009, Pages: 2334-2344}, year={2008}, archivePrefix={arXiv}, eprint={0802.0534}, primaryClass={cs.IT math.IT} }
cadambe2008capacity
arxiv-2554
0802.0543
Improving Performance of Cluster Based Routing Protocol using Cross-Layer Design
<|reference_start|>Improving Performance of Cluster Based Routing Protocol using Cross-Layer Design: The main goal of a routing protocol is to efficiently deliver data from source to destination. All routing protocols share this goal, but the strategies they adopt to achieve it differ, so the routing strategy has a significant impact on the performance of an ad hoc network. Most routing protocols proposed for ad hoc networks have a flat structure. These protocols expend control overhead packets to discover or maintain routes. On the other hand, a number of hierarchical routing protocols have been developed, mostly based on a layered design. These protocols improve network performance, especially as the network size grows, since details about remote portions of the network can be handled in an aggregate manner. There is, however, another design approach, called cross-layer design, in which information can be exchanged between different layers of the protocol stack to optimize network performance. In this paper, we apply cross-layer design to optimize the Cluster Based Routing Protocol (Cross-CBRP). Using the NS-2 network simulator, we evaluate the rate of cluster-head changes, throughput and packet delivery ratio. Comparisons show that Cross-CBRP performs better than the original CBRP.<|reference_end|>
arxiv
@article{jahanbakhsh2008improving, title={Improving Performance of Cluster Based Routing Protocol using Cross-Layer Design}, author={Kazem Jahanbakhsh, Marzieh Hajhosseini}, journal={arXiv preprint arXiv:0802.0543}, year={2008}, archivePrefix={arXiv}, eprint={0802.0543}, primaryClass={cs.NI} }
jahanbakhsh2008improving
arxiv-2555
0802.0550
Energy Aware Self-Organizing Density Management in Wireless Sensor Networks
<|reference_start|>Energy Aware Self-Organizing Density Management in Wireless Sensor Networks: Energy consumption is the most important factor that determines sensor node lifetime. The optimization of wireless sensor network lifetime targets not only the reduction of energy consumption of a single sensor node but also the extension of the entire network lifetime. We propose a simple and adaptive energy-conserving topology management scheme, called SAND (Self-Organizing Active Node Density). SAND is fully decentralized and relies on a distributed probing approach and on the redundancy resolution of sensors for energy optimizations, while preserving the data forwarding and sensing capabilities of the network. We present the SAND's algorithm, its analysis of convergence, and simulation results. Simulation results show that, though slightly increasing path lengths from sensor to sink nodes, the proposed scheme improves significantly the network lifetime for different neighborhood densities degrees, while preserving both sensing and routing fidelity.<|reference_end|>
arxiv
@article{merrer2008energy, title={Energy Aware Self-Organizing Density Management in Wireless Sensor Networks}, author={Erwan Le Merrer (IRISA, FT R&D), Vincent Gramoli (IRISA), Anne-Marie Kermarrec (IRISA), Aline Viana (IRISA), Marin Bertier (IRISA)}, journal={Dans International Workshop on Decentralized Resource Sharing in Mobile Computing and Networking (2006) 23--29}, year={2008}, archivePrefix={arXiv}, eprint={0802.0550}, primaryClass={cs.DC} }
merrer2008energy
arxiv-2556
0802.0552
Timed Quorum System for Large-Scale and Dynamic Environments
<|reference_start|>Timed Quorum System for Large-Scale and Dynamic Environments: This paper presents Timed Quorum System (TQS), a new quorum system especially suited for large-scale and dynamic systems. TQS requires that two quorums intersect with high probability if they are used in the same small period of time. It proposes an algorithm that implements TQS and that verifies probabilistic atomicity: a consistency criterion that requires each operation to respect atomicity with high probability. This TQS implementation has quorums of size O(\sqrt{nD}) and expected access time of O(log \sqrt{nD}) message delays, where n measures the size of the system and D is a required parameter to handle dynamism.<|reference_end|>
arxiv
@article{gramoli2008timed, title={Timed Quorum System for Large-Scale and Dynamic Environments}, author={Vincent Gramoli (INRIA Futurs), Michel Raynal (IRISA)}, journal={Dans 11th International Conference On Principles Of Distributed Systems 4878 (2007) 429--442}, year={2008}, archivePrefix={arXiv}, eprint={0802.0552}, primaryClass={cs.DC cs.NI} }
gramoli2008timed
arxiv-2557
0802.0554
Message-Passing Decoding of Lattices Using Gaussian Mixtures
<|reference_start|>Message-Passing Decoding of Lattices Using Gaussian Mixtures: A lattice decoder which represents messages explicitly as a mixture of Gaussians functions is given. In order to prevent the number of functions in a mixture from growing as the decoder iterations progress, a method for replacing N Gaussian functions with M Gaussian functions, with M < N, is given. A squared distance metric is used to select functions for combining. A pair of selected Gaussians is replaced by a single Gaussian with the same first and second moments. The metric can be computed efficiently, and at the same time, the proposed algorithm empirically gives good results, for example, a dimension 100 lattice has a loss of 0.2 dB in signal-to-noise ratio at a probability of symbol error of 10^{-5}.<|reference_end|>
arxiv
@article{kurkoski2008message-passing, title={Message-Passing Decoding of Lattices Using Gaussian Mixtures}, author={Brian M. Kurkoski and Justin Dauwels}, journal={arXiv preprint arXiv:0802.0554}, year={2008}, archivePrefix={arXiv}, eprint={0802.0554}, primaryClass={cs.IT math.IT} }
kurkoski2008message-passing
arxiv-2558
0802.0580
Rotated and Scaled Alamouti Coding
<|reference_start|>Rotated and Scaled Alamouti Coding: Repetition-based retransmission is used in Alamouti-modulation [1998] for $2\times 2$ MIMO systems. We propose to use instead of ordinary repetition so-called "scaled repetition" together with rotation. It is shown that the rotated and scaled Alamouti code has a hard-decision performance which is only slightly worse than that of the Golden code [2005], the best known $2\times 2$ space-time code. Decoding the Golden code requires an exhaustive search over all codewords, while our rotated and scaled Alamouti code can be decoded with an acceptable complexity however.<|reference_end|>
arxiv
@article{willems2008rotated, title={Rotated and Scaled Alamouti Coding}, author={Frans M.J. Willems}, journal={arXiv preprint arXiv:0802.0580}, year={2008}, archivePrefix={arXiv}, eprint={0802.0580}, primaryClass={cs.IT math.IT} }
willems2008rotated
arxiv-2559
0802.0603
Trusted-HB: a low-cost version of HB+ secure against Man-in-The-Middle attacks
<|reference_start|>Trusted-HB: a low-cost version of HB+ secure against Man-in-The-Middle attacks: Since the introduction at Crypto'05 by Juels and Weis of the protocol HB+, a lightweight protocol secure against active attacks but only in a detection-based model, many works have tried to enhance its security. We propose here a new approach to achieve resistance against Man-in-The-Middle attacks. Our requirements - in terms of extra communications and hardware - are surprisingly low.<|reference_end|>
arxiv
@article{bringer2008trusted-hb:, title={Trusted-HB: a low-cost version of HB+ secure against Man-in-The-Middle attacks}, author={Julien Bringer and Herve Chabanne}, journal={IEEE Trans. IT. 54:9 (2008) 4339-4342}, year={2008}, doi={10.1109/TIT.2008.928290}, archivePrefix={arXiv}, eprint={0802.0603}, primaryClass={cs.CR} }
bringer2008trusted-hb:
arxiv-2560
0802.0726
(Generalized) Post Correspondence Problem and semi-Thue systems
<|reference_start|>(Generalized) Post Correspondence Problem and semi-Thue systems: Let PCP(k) denote the Post Correspondence Problem for k input pairs of strings. Let ACCESSIBILITY(k) denote the word problem for k-rule semi-Thue systems. In 1980, Claus showed that if ACCESSIBILITY(k) is undecidable then PCP(k + 4) is also undecidable. The aim of the paper is to present a clean, detailed proof of the statement. We proceed in two steps, using the Generalized Post Correspondence Problem as an auxiliary. First, we prove that if ACCESSIBILITY(k) is undecidable then GPCP(k + 2) is also undecidable. Then, we prove that if GPCP(k) is undecidable then PCP(k + 2) is also undecidable. (The latter result has also been shown by Harju and Karhumaki.) To date, the sharpest undecidability bounds for both PCP and GPCP have been deduced from Claus's result: since Matiyasevich and Senizergues showed that ACCESSIBILITY(3) is undecidable, GPCP(5) and PCP(7) are undecidable.<|reference_end|>
arxiv
@article{nicolas2008(generalized), title={(Generalized) Post Correspondence Problem and semi-Thue systems}, author={Francois Nicolas}, journal={arXiv preprint arXiv:0802.0726}, year={2008}, archivePrefix={arXiv}, eprint={0802.0726}, primaryClass={cs.DM} }
nicolas2008(generalized)
arxiv-2561
0802.0738
MIMO Networks: the Effects of Interference
<|reference_start|>MIMO Networks: the Effects of Interference: Multiple-input/multiple-output (MIMO) systems promise enormous capacity increase and are being considered as one of the key technologies for future wireless networks. However, the decrease in capacity due to the presence of interferers in MIMO networks is not well understood. In this paper, we develop an analytical framework to characterize the capacity of MIMO communication systems in the presence of multiple MIMO co-channel interferers and noise. We consider the situation in which transmitters have no information about the channel and all links undergo Rayleigh fading. We first generalize the known determinant representation of hypergeometric functions with matrix arguments to the case when the argument matrices have eigenvalues of arbitrary multiplicity. This enables the derivation of the distribution of the eigenvalues of Gaussian quadratic forms and Wishart matrices with arbitrary correlation, with application to both single user and multiuser MIMO systems. In particular, we derive the ergodic mutual information for MIMO systems in the presence of multiple MIMO interferers. Our analysis is valid for any number of interferers, each with arbitrary number of antennas having possibly unequal power levels. This framework, therefore, accommodates the study of distributed MIMO systems and accounts for different positions of the MIMO interferers.<|reference_end|>
arxiv
@article{chiani2008mimo, title={MIMO Networks: the Effects of Interference}, author={Marco Chiani, Moe Z. Win, Hyundong Shin}, journal={IEEE Trans. Inform. Theory, vol. 56, no. 1, pp. 336-349, Jan. 2010}, year={2008}, doi={10.1109/TIT.2009.2034810}, archivePrefix={arXiv}, eprint={0802.0738}, primaryClass={cs.IT math.IT} }
chiani2008mimo
arxiv-2562
0802.0745
Knowledge management by wikis
<|reference_start|>Knowledge management by wikis: Wikis provide a new way of collaboration and knowledge sharing. Wikis are software that allows users to work collectively on a web-based knowledge base. Wikis are characterised by a sense of anarchism, collaboration, connectivity, organic development and self-healing, and they rely on trust. We list several concerns about applying wikis in professional organisations. After these concerns are met, wikis can provide a progressive, new knowledge sharing and collaboration tool.<|reference_end|>
arxiv
@article{spek2008knowledge, title={Knowledge management by wikis}, author={Sander Spek}, journal={arXiv preprint arXiv:0802.0745}, year={2008}, archivePrefix={arXiv}, eprint={0802.0745}, primaryClass={cs.DL} }
spek2008knowledge
arxiv-2563
0802.0766
Modelling and Analysis of the Distributed Coordination Function of IEEE 802.11 with Multirate Capability
<|reference_start|>Modelling and Analysis of the Distributed Coordination Function of IEEE 80211 with Multirate Capability: The aim of this paper is twofold. On one hand, it presents a multi-dimensional Markovian state transition model characterizing the behavior at the Medium Access Control (MAC) layer by including transmission states that account for packet transmission failures due to errors caused by propagation through the channel, along with a state characterizing the system when there are no packets to be transmitted in the queue of a station (to model non-saturated traffic conditions). On the other hand, it provides a throughput analysis of the IEEE 802.11 protocol at the data link layer in both saturated and non-saturated traffic conditions taking into account the impact of both transmission channel and multirate transmission in Rayleigh fading environment. Simulation results closely match the theoretical derivations confirming the effectiveness of the proposed model.<|reference_end|>
arxiv
@article{daneshgaran2008modelling, title={Modelling and Analysis of the Distributed Coordination Function of IEEE 802.11 with Multirate Capability}, author={F. Daneshgaran, M. Laddomada, F. Mesiti, and M. Mondin}, journal={arXiv preprint arXiv:0802.0766}, year={2008}, doi={10.1109/WCNC.2008.242}, archivePrefix={arXiv}, eprint={0802.0766}, primaryClass={cs.NI} }
daneshgaran2008modelling
arxiv-2564
0802.0776
Distributed Compression for the Uplink of a Backhaul-Constrained Coordinated Cellular Network
<|reference_start|>Distributed Compression for the Uplink of a Backhaul-Constrained Coordinated Cellular Network: We consider a backhaul-constrained coordinated cellular network. That is, a single-frequency network with $N+1$ multi-antenna base stations (BSs) that cooperate in order to decode the users' data, and that are linked by means of a common lossless backhaul, of limited capacity $\mathrm{R}$. To implement receive cooperation, we propose distributed compression: $N$ BSs, upon receiving their signals, compress them using a multi-source lossy compression code. Then, they send the compressed vectors to a central BS, which performs users' decoding. Distributed Wyner-Ziv coding is proposed to be used, and is optimally designed in this work. The first part of the paper is devoted to a network with a unique multi-antenna user, that transmits a predefined Gaussian space-time codeword. For such a scenario, the compression codebooks at the BSs are optimized, considering the user's achievable rate as the performance metric. In particular, for $N = 1$ the optimum codebook distribution is derived in closed form, while for $N>1$ an iterative algorithm is devised. The second part of the contribution focusses on the multi-user scenario. For it, the achievable rate region is obtained by means of the optimum compression codebooks for sum-rate and weighted sum-rate, respectively.<|reference_end|>
arxiv
@article{del coso2008distributed, title={Distributed Compression for the Uplink of a Backhaul-Constrained Coordinated Cellular Network}, author={Aitor del Coso and Sebastien Simoens}, journal={arXiv preprint arXiv:0802.0776}, year={2008}, archivePrefix={arXiv}, eprint={0802.0776}, primaryClass={cs.IT math.IT} }
del coso2008distributed
arxiv-2565
0802.0797
Central Limit Theorems for Wavelet Packet Decompositions of Stationary Random Processes
<|reference_start|>Central Limit Theorems for Wavelet Packet Decompositions of Stationary Random Processes: This paper provides central limit theorems for the wavelet packet decomposition of stationary band-limited random processes. The asymptotic analysis is performed for the sequences of the wavelet packet coefficients returned at the nodes of any given path of the $M$-band wavelet packet decomposition tree. It is shown that if the input process is centred and strictly stationary, these sequences converge in distribution to white Gaussian processes when the resolution level increases, provided that the decomposition filters satisfy a suitable property of regularity. For any given path, the variance of the limit white Gaussian process directly relates to the value of the input process power spectral density at a specific frequency.<|reference_end|>
arxiv
@article{atto2008central, title={Central Limit Theorems for Wavelet Packet Decompositions of Stationary Random Processes}, author={Abdourrahmane Atto (TAMCIC), Dominique Pastor (TAMCIC)}, journal={IEEE Transactions on Signal Processing (2008) 1-12}, year={2008}, doi={10.1109/TSP.2009.2031726}, archivePrefix={arXiv}, eprint={0802.0797}, primaryClass={cs.IT math.IT} }
atto2008central
arxiv-2566
0802.0799
D\'eveloppement et analyse multi outils d'un protocole MAC d\'eterministe pour un r\'eseau de capteurs sans fil
<|reference_start|>D\'eveloppement et analyse multi outils d'un protocole MAC d\'eterministe pour un r\'eseau de capteurs sans fil: In this article, we present a multi-tool method for the development and the analysis of a new medium access method. IEEE 802.15.4 / ZigBee technology has been used as a basis for this new determinist MAC layer which enables a high level of QoS. This WPAN can be typically used for wireless sensor networks which require strong temporal constraints. To validate the proposed protocol, three complementary and adequate tools are used: Petri Nets for the formal validation of the algorithm, a dedicated simulator for the temporal aspects, and some measures on a real prototype based on a couple of ZigBee FREESCALE components for the hardware characterization of layers #1 and #2.<|reference_end|>
arxiv
@article{val2008d\'eveloppement, title={D\'eveloppement et analyse multi outils d'un protocole MAC d\'eterministe pour un r\'eseau de capteurs sans fil}, author={Thierry Val (LATTIS), Adrien Van Den Bossche (LATTIS)}, journal={Colloque Francophone sur l'Ing\'enierie des Protocoles (CFIP), Les Arcs : France (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0802.0799}, primaryClass={cs.NI} }
val2008d\'eveloppement
arxiv-2567
0802.0802
On Approximating Frequency Moments of Data Streams with Skewed Projections
<|reference_start|>On Approximating Frequency Moments of Data Streams with Skewed Projections: We propose skewed stable random projections for approximating the pth frequency moments of dynamic data streams (0<p<=2), which have been frequently studied in theoretical computer science and database communities. Our method significantly (or even infinitely when p->1) improves previous methods based on (symmetric) stable random projections. Our proposed method is applicable to data streams that are (a) insertion only (the cash-register model); or (b) always non-negative (the strict Turnstile model), or (c) eventually non-negative at check points. This is only a minor restriction for practical applications. Our method works particularly well when p = 1+/- \Delta and \Delta is small, which is a practically important scenario. For example, \Delta may be the decay rate or interest rate, which are usually small. Of course, when \Delta = 0, one can compute the first frequency moment (i.e., the sum) essentially error-free using a simple counter. Our method may be viewed as a ``generalized counter'' in that it can count the total value in the future, taking into account the effect of decaying or interest accruement. In summary, our contributions are two-fold. (A) This is the first proposal of skewed stable random projections. (B) Based on first principles, we develop various statistical estimators for skewed stable distributions, including their variances and error (tail) probability bounds, and consequently the sample complexity bounds.<|reference_end|>
arxiv
@article{li2008on, title={On Approximating Frequency Moments of Data Streams with Skewed Projections}, author={Ping Li}, journal={arXiv preprint arXiv:0802.0802}, year={2008}, archivePrefix={arXiv}, eprint={0802.0802}, primaryClass={cs.DS cs.IT math.IT} }
li2008on
arxiv-2568
0802.0808
Turbo Interleaving inside the cdma2000 and W-CDMA Mobile Communication Systems: A Tutorial
<|reference_start|>Turbo Interleaving inside the cdma2000 and W-CDMA Mobile Communication Systems: A Tutorial: In this paper a discussion of the detailed operation of the interleavers used by the turbo codes defined on the telecommunications standards cdma2000 (3GPP2 C.S0024-B V2.0) and W-CDMA (3GPP TS 25.212 V7.4.0) is presented. Differences in the approach used by each turbo interleaver as well as dispersion analysis and frequency analysis are also discussed. Two examples are presented to illustrate the complete interleaving process defined by each standard. These two interleaving approaches are also representative for other communications standards.<|reference_end|>
arxiv
@article{guerrero2008turbo, title={Turbo Interleaving inside the cdma2000 and W-CDMA Mobile Communication Systems: A Tutorial}, author={Fabio G. Guerrero, Maribell Sacanamboy}, journal={arXiv preprint arXiv:0802.0808}, year={2008}, archivePrefix={arXiv}, eprint={0802.0808}, primaryClass={cs.IT math.IT} }
guerrero2008turbo
arxiv-2569
0802.0820
Independence and concurrent separation logic
<|reference_start|>Independence and concurrent separation logic: A compositional Petri net-based semantics is given to a simple language allowing pointer manipulation and parallelism. The model is then applied to give a notion of validity to the judgements made by concurrent separation logic that emphasizes the process-environment duality inherent in such rely-guarantee reasoning. Soundness of the rules of concurrent separation logic with respect to this definition of validity is shown. The independence information retained by the Petri net model is then exploited to characterize the independence of parallel processes enforced by the logic. This is shown to permit a refinement operation capable of changing the granularity of atomic actions.<|reference_end|>
arxiv
@article{hayman2008independence, title={Independence and concurrent separation logic}, author={Jonathan Hayman and Glynn Winskel}, journal={Logical Methods in Computer Science, Volume 4, Issue 1 (March 19, 2008) lmcs:1100}, year={2008}, doi={10.2168/LMCS-4(1:6)2008}, archivePrefix={arXiv}, eprint={0802.0820}, primaryClass={cs.LO cs.PL} }
hayman2008independence
arxiv-2570
0802.0823
Doubly-Generalized LDPC Codes: Stability Bound over the BEC
<|reference_start|>Doubly-Generalized LDPC Codes: Stability Bound over the BEC: The iterative decoding threshold of low-density parity-check (LDPC) codes over the binary erasure channel (BEC) fulfills an upper bound depending only on the variable and check nodes with minimum distance 2. This bound is a consequence of the stability condition, and is here referred to as stability bound. In this paper, a stability bound over the BEC is developed for doubly-generalized LDPC codes, where the variable and the check nodes can be generic linear block codes, assuming maximum a posteriori erasure correction at each node. It is proved that in this generalized context as well the bound depends only on the variable and check component codes with minimum distance 2. A condition is also developed, namely the derivative matching condition, under which the bound is achieved with equality.<|reference_end|>
arxiv
@article{paolini2008doubly-generalized, title={Doubly-Generalized LDPC Codes: Stability Bound over the BEC}, author={Enrico Paolini, Marc Fossorier, Marco Chiani}, journal={IEEE Trans. Inform. Theory, vol. 55, no. 3, pp. 1027-1046, March 2009}, year={2008}, doi={10.1109/TIT.2008.2011446}, archivePrefix={arXiv}, eprint={0802.0823}, primaryClass={cs.IT math.IT} }
paolini2008doubly-generalized
arxiv-2571
0802.0832
Distributed Double Spending Prevention
<|reference_start|>Distributed Double Spending Prevention: We study the problem of preventing double spending in electronic payment schemes in a distributed fashion. This problem occurs, for instance, when the spending of electronic coins needs to be controlled by a large collection of nodes (e.g., in a peer-to-peer (P2P) system) instead of one central bank. Contrary to the commonly held belief that this is fundamentally impossible, we propose several solutions that do achieve a reasonable level of double spending prevention, and analyse their efficiency under varying assumptions.<|reference_end|>
arxiv
@article{hoepman2008distributed, title={Distributed Double Spending Prevention}, author={Jaap-Henk Hoepman}, journal={arXiv preprint arXiv:0802.0832}, year={2008}, archivePrefix={arXiv}, eprint={0802.0832}, primaryClass={cs.CR} }
hoepman2008distributed
arxiv-2572
0802.0834
The Ephemeral Pairing Problem
<|reference_start|>The Ephemeral Pairing Problem: In wireless ad-hoc broadcast networks the pairing problem consists of establishing a (long-term) connection between two specific physical nodes in the network that do not yet know each other. We focus on the ephemeral version of this problem. Ephemeral pairings occur, for example, when electronic business cards are exchanged between two people that meet, or when one pays at a check-out using a wireless wallet. This problem can, in more abstract terms, be phrased as an ephemeral key exchange problem: given a low bandwidth authentic (or private) communication channel between two nodes, and a high bandwidth broadcast channel, can we establish a high-entropy shared secret session key between the two nodes without relying on any a priori shared secret information. Apart from introducing this new problem, we present several ephemeral key exchange protocols, both for the case of authentic channels as well as for the case of private channels.<|reference_end|>
arxiv
@article{hoepman2008the, title={The Ephemeral Pairing Problem}, author={Jaap-Henk Hoepman}, journal={In 8th Int. Conf. Financial Cryptography, LNCS 3110, pages 212-226, Key West, FL, USA, February 9-12 2004. Springer}, year={2008}, archivePrefix={arXiv}, eprint={0802.0834}, primaryClass={cs.CR} }
hoepman2008the
arxiv-2573
0802.0835
Bit-Optimal Lempel-Ziv compression
<|reference_start|>Bit-Optimal Lempel-Ziv compression: One of the most famous and investigated lossless data-compression schemes is the one introduced by Lempel and Ziv about 40 years ago. This compression scheme is known as "dictionary-based compression" and consists of squeezing an input string by replacing some of its substrings with (shorter) codewords which are actually pointers to a dictionary of phrases built as the string is processed. Surprisingly enough, although many fundamental results are nowadays known about upper bounds on the speed and effectiveness of this compression process (and references therein), ``we are not aware of any parsing scheme that achieves optimality when the LZ77-dictionary is in use under any constraint on the codewords other than being of equal length'' [N. Rajpoot and C. Sahinalp. Handbook of Lossless Data Compression, chapter Dictionary-based data compression. Academic Press, 2002. pag. 159]. Here optimality means to achieve the minimum number of bits in compressing each individual input string, without any assumption on its generating source. In this paper we provide the first LZ-based compressor which computes the bit-optimal parsing of any input string in efficient time and optimal space, for a general class of variable-length codeword encodings which encompasses most of the ones typically used in data compression and in the design of search engines and compressed indexes.<|reference_end|>
arxiv
@article{ferragina2008bit-optimal, title={Bit-Optimal Lempel-Ziv compression}, author={Paolo Ferragina, Igor Nitto and Rossano Venturini}, journal={arXiv preprint arXiv:0802.0835}, year={2008}, archivePrefix={arXiv}, eprint={0802.0835}, primaryClass={cs.DS cs.IT math.IT} }
ferragina2008bit-optimal
arxiv-2574
0802.0861
Using Bayesian Blocks to Partition Self-Organizing Maps
<|reference_start|>Using Bayesian Blocks to Partition Self-Organizing Maps: Self organizing maps (SOMs) are widely-used for unsupervised classification. For this application, they must be combined with some partitioning scheme that can identify boundaries between distinct regions in the maps they produce. We discuss a novel partitioning scheme for SOMs based on the Bayesian Blocks segmentation algorithm of Scargle [1998]. This algorithm minimizes a cost function to identify contiguous regions over which the values of the attributes can be represented as approximately constant. Because this cost function is well-defined and largely independent of assumptions regarding the number and structure of clusters in the original sample space, this partitioning scheme offers significant advantages over many conventional methods. Sample code is available.<|reference_end|>
arxiv
@article{gazis2008using, title={Using Bayesian Blocks to Partition Self-Organizing Maps}, author={Paul R. Gazis and Jeffrey D. Scargle}, journal={arXiv preprint arXiv:0802.0861}, year={2008}, archivePrefix={arXiv}, eprint={0802.0861}, primaryClass={cs.NE} }
gazis2008using
arxiv-2575
0802.0865
Combining generic judgments with recursive definitions
<|reference_start|>Combining generic judgments with recursive definitions: Many semantical aspects of programming languages, such as their operational semantics and their type assignment calculi, are specified by describing appropriate proof systems. Recent research has identified two proof-theoretic features that allow direct, logic-based reasoning about such descriptions: the treatment of atomic judgments as fixed points (recursive definitions) and an encoding of binding constructs via generic judgments. However, the logics encompassing these two features have thus far treated them orthogonally: that is, they do not provide the ability to define object-logic properties that themselves depend on an intrinsic treatment of binding. We propose a new and simple integration of these features within an intuitionistic logic enhanced with induction over natural numbers and we show that the resulting logic is consistent. The pivotal benefit of the integration is that it allows recursive definitions to not just encode simple, traditional forms of atomic judgments but also to capture generic properties pertaining to such judgments. The usefulness of this logic is illustrated by showing how it can provide elegant treatments of object-logic contexts that appear in proofs involving typing calculi and of arbitrarily cascading substitutions that play a role in reducibility arguments.<|reference_end|>
arxiv
@article{gacek2008combining, title={Combining generic judgments with recursive definitions}, author={Andrew Gacek, Dale Miller, and Gopalan Nadathur}, journal={arXiv preprint arXiv:0802.0865}, year={2008}, archivePrefix={arXiv}, eprint={0802.0865}, primaryClass={cs.LO} }
gacek2008combining
arxiv-2576
0802.0914
Shrinkage Effect in Ancestral Maximum Likelihood
<|reference_start|>Shrinkage Effect in Ancestral Maximum Likelihood: Ancestral maximum likelihood (AML) is a method that simultaneously reconstructs a phylogenetic tree and ancestral sequences from extant data (sequences at the leaves). The tree and ancestral sequences maximize the probability of observing the given data under a Markov model of sequence evolution, in which branch lengths are also optimized but constrained to take the same value on any edge across all sequence sites. AML differs from the more usual form of maximum likelihood (ML) in phylogenetics because ML averages over all possible ancestral sequences. ML has long been known to be statistically consistent -- that is, it converges on the correct tree with probability approaching 1 as the sequence length grows. However, the statistical consistency of AML has not been formally determined, despite informal remarks in a literature that dates back 20 years. In this short note we prove a general result that implies that AML is statistically inconsistent. In particular we show that AML can `shrink' short edges in a tree, resulting in a tree that has no internal resolution as the sequence length grows. Our results apply to any number of taxa.<|reference_end|>
arxiv
@article{mossel2008shrinkage, title={Shrinkage Effect in Ancestral Maximum Likelihood}, author={Elchanan Mossel and Sebastien Roch and Mike Steel}, journal={arXiv preprint arXiv:0802.0914}, year={2008}, archivePrefix={arXiv}, eprint={0802.0914}, primaryClass={q-bio.PE cs.CE math.PR math.ST stat.TH} }
mossel2008shrinkage
arxiv-2577
0802.1002
New Estimation Procedures for PLS Path Modelling
<|reference_start|>New Estimation Procedures for PLS Path Modelling: Given R groups of numerical variables X1, ... XR, we assume that each group is the result of one underlying latent variable, and that all latent variables are bound together through a linear equation system. Moreover, we assume that some explanatory latent variables may interact pairwise in one or more equations. We basically consider PLS Path Modelling's algorithm to estimate both latent variables and the model's coefficients. New "external" estimation schemes are proposed that draw latent variables towards strong group structures in a more flexible way. New "internal" estimation schemes are proposed to enable PLSPM to make good use of variable group complementarity and to deal with interactions. Application examples are given.<|reference_end|>
arxiv
@article{bry2008new, title={New Estimation Procedures for PLS Path Modelling}, author={Xavier Bry (I3M)}, journal={arXiv preprint arXiv:0802.1002}, year={2008}, archivePrefix={arXiv}, eprint={0802.1002}, primaryClass={cs.LG} }
bry2008new
arxiv-2578
0802.1015
Small Is Not Always Beautiful
<|reference_start|>Small Is Not Always Beautiful: Peer-to-peer content distribution systems have been enjoying great popularity, and are now gaining momentum as a means of disseminating video streams over the Internet. In many of these protocols, including the popular BitTorrent, content is split into mostly fixed-size pieces, allowing a client to download data from many peers simultaneously. This makes piece size potentially critical for performance. However, previous research efforts have largely overlooked this parameter, opting to focus on others instead. This paper presents the results of real experiments with varying piece sizes on a controlled BitTorrent testbed. We demonstrate that this parameter is indeed critical, as it determines the degree of parallelism in the system, and we investigate optimal piece sizes for distributing small and large content. We also pinpoint a related design trade-off, and explain how BitTorrent's choice of dividing pieces into subpieces attempts to address it.<|reference_end|>
arxiv
@article{marciniak2008small, title={Small Is Not Always Beautiful}, author={Pawel Marciniak, Nikitas Liogkas (UCLA), Arnaud Legout (INRIA Sophia Antipolis / INRIA Rh\^one-Alpes), Eddie Kohler (UCLA)}, journal={In IPTPS'2008 (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0802.1015}, primaryClass={cs.NI} }
marciniak2008small
arxiv-2579
0802.1026
An Empirical Study of Cache-Oblivious Priority Queues and their Application to the Shortest Path Problem
<|reference_start|>An Empirical Study of Cache-Oblivious Priority Queues and their Application to the Shortest Path Problem: In recent years the Cache-Oblivious model of external memory computation has provided an attractive theoretical basis for the analysis of algorithms on massive datasets. Much progress has been made in discovering algorithms that are asymptotically optimal or near optimal. However, to date there are still relatively few successful experimental studies. In this paper we compare two different Cache-Oblivious priority queues based on the Funnel and Bucket Heap and apply them to the single source shortest path problem on graphs with positive edge weights. Our results show that when RAM is limited and data is swapping to external storage, the Cache-Oblivious priority queues achieve orders of magnitude speedups over standard internal memory techniques. However, for the single source shortest path problem both on simulated and real world graph data, these speedups are markedly lower due to the time required to access the graph adjacency list itself.<|reference_end|>
arxiv
@article{sach2008an, title={An Empirical Study of Cache-Oblivious Priority Queues and their Application to the Shortest Path Problem}, author={Benjamin Sach and Rapha\"el Clifford}, journal={arXiv preprint arXiv:0802.1026}, year={2008}, archivePrefix={arXiv}, eprint={0802.1026}, primaryClass={cs.DS cs.SE} }
sach2008an
arxiv-2580
0802.1059
Average-Case Analysis of Online Topological Ordering
<|reference_start|>Average-Case Analysis of Online Topological Ordering: Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either only analyzed for worst-case insertion sequences or only evaluated experimentally on random DAGs. We present the first average-case analysis of online topological ordering algorithms. We prove an expected runtime of O(n^2 polylog(n)) under insertion of the edges of a complete DAG in a random order for the algorithms of Alpern et al. (SODA, 1990), Katriel and Bodlaender (TALG, 2006), and Pearce and Kelly (JEA, 2006). This is much less than the best known worst-case bound O(n^{2.75}) for this problem.<|reference_end|>
arxiv
@article{ajwani2008average-case, title={Average-Case Analysis of Online Topological Ordering}, author={Deepak Ajwani, Tobias Friedrich}, journal={arXiv preprint arXiv:0802.1059}, year={2008}, archivePrefix={arXiv}, eprint={0802.1059}, primaryClass={cs.DS} }
ajwani2008average-case
arxiv-2581
0802.1076
New Extensions of Pairing-based Signatures into Universal (Multi) Designated Verifier Signatures
<|reference_start|>New Extensions of Pairing-based Signatures into Universal (Multi) Designated Verifier Signatures: The concept of universal designated verifier signatures was introduced by Steinfeld, Bull, Wang and Pieprzyk at Asiacrypt 2003. These signatures can be used as standard publicly verifiable digital signatures but have an additional functionality which allows any holder of a signature to designate the signature to any desired verifier. This designated verifier can check that the message was indeed signed, but is unable to convince anyone else of this fact. We propose new efficient constructions for pairing-based short signatures. Our first scheme is based on Boneh-Boyen signatures and its security can be analyzed in the standard security model. We prove its resistance to forgery assuming the hardness of the so-called strong Diffie-Hellman problem, under the knowledge-of-exponent assumption. The second scheme is compatible with the Boneh-Lynn-Shacham signatures and is proven unforgeable, in the random oracle model, under the assumption that the computational bilinear Diffie-Hellman problem is intractable. Both schemes are designed for devices with constrained computation capabilities since the signing and the designation procedure are pairing-free. Finally, we present extensions of these schemes in the multi-user setting proposed by Desmedt in 2003.<|reference_end|>
arxiv
@article{vergnaud2008new, title={New Extensions of Pairing-based Signatures into Universal (Multi) Designated Verifier Signatures}, author={Damien Vergnaud}, journal={arXiv preprint arXiv:0802.1076}, year={2008}, archivePrefix={arXiv}, eprint={0802.1076}, primaryClass={cs.CR} }
vergnaud2008new
arxiv-2582
0802.1113
Multi-Use Unidirectional Proxy Re-Signatures
<|reference_start|>Multi-Use Unidirectional Proxy Re-Signatures: In 1998, Blaze, Bleumer, and Strauss suggested a cryptographic primitive named proxy re-signatures where a proxy turns a signature computed under Alice's secret key into one from Bob on the same message. The semi-trusted proxy does not learn either party's signing key and cannot sign arbitrary messages on behalf of Alice or Bob. At CCS 2005, Ateniese and Hohenberger revisited the primitive by providing appropriate security definitions and efficient constructions in the random oracle model. Nonetheless, they left open the problem of designing a multi-use unidirectional scheme where the proxy is able to translate in only one direction and signatures can be re-translated several times. This paper solves this problem, suggested for the first time 10 years ago, and shows the first multi-hop unidirectional proxy re-signature schemes. We describe a random-oracle-using system that is secure in the Ateniese-Hohenberger model. The same technique also yields a similar construction in the standard model (i.e. without relying on random oracles). Both schemes are efficient and require newly defined -- but falsifiable -- Diffie-Hellman-like assumptions in bilinear groups.<|reference_end|>
arxiv
@article{libert2008multi-use, title={Multi-Use Unidirectional Proxy Re-Signatures}, author={Beno\^it Libert and Damien Vergnaud}, journal={arXiv preprint arXiv:0802.1113}, year={2008}, archivePrefix={arXiv}, eprint={0802.1113}, primaryClass={cs.CR} }
libert2008multi-use
arxiv-2583
0802.1123
Snap-Stabilization in Message-Passing Systems
<|reference_start|>Snap-Stabilization in Message-Passing Systems: In this paper, we tackle the open problem of snap-stabilization in message-passing systems. Snap-stabilization is a nice approach to design protocols that withstand transient faults. Compared to the well-known self-stabilizing approach, snap-stabilization guarantees that the effect of faults is contained immediately after faults cease to occur. Our contribution is twofold: we show that (1) snap-stabilization is impossible for a wide class of problems if we consider networks with finite yet unbounded channel capacity; (2) snap-stabilization becomes possible in the same setting if we assume bounded-capacity channels. We propose three snap-stabilizing protocols working in fully-connected networks. Our work opens exciting new research perspectives, as it enables the snap-stabilizing paradigm to be implemented in actual networks.<|reference_end|>
arxiv
@article{delaët2008snap-stabilization, title={Snap-Stabilization in Message-Passing Systems}, author={Sylvie Dela\"et (LRI), St\'ephane Devismes (LRI), Mikhail Nesterenko, S\'ebastien Tixeuil (INRIA Futurs, LIP6)}, journal={arXiv preprint arXiv:0802.1123}, year={2008}, archivePrefix={arXiv}, eprint={0802.1123}, primaryClass={cs.DC cs.NI cs.PF} }
delaët2008snap-stabilization
arxiv-2584
0802.1162
Approximate substitutions and the normal ordering problem
<|reference_start|>Approximate substitutions and the normal ordering problem: In this paper, we show that the infinite generalised Stirling matrices associated with boson strings with one annihilation operator are projective limits of approximate substitutions, the latter being characterised by a finite set of algebraic equations.<|reference_end|>
arxiv
@article{cheballah2008approximate, title={Approximate substitutions and the normal ordering problem}, author={H. Cheballah (LIPN), G. H. E. Duchamp (LIPN), K. A. Penson (LPTMC)}, journal={arXiv preprint arXiv:0802.1162}, year={2008}, doi={10.1088/1742-6596/104/1/012031}, archivePrefix={arXiv}, eprint={0802.1162}, primaryClass={quant-ph cs.SC math.CO} }
cheballah2008approximate
arxiv-2585
0802.1176
Note sur les temps de service r\'esiduels dans les syst\`emes type M/G/c
<|reference_start|>Note sur les temps de service r\'esiduels dans les syst\`emes type M/G/c: Approximations for the mean performance indices for the M/G/c queue rely on the approximate computation of the probability that an arriving request has to wait for service and of the minimum of residual service times if all servers are found busy. Using numerical examples, we investigate properties of these two quantities. In particular, we show that the minimum of residual service times depends on higher order properties, beyond the first two moments, of the service time distribution. Improved knowledge of the properties of the two quantities studied in this paper provides insight into avenues for improving the accuracy of approximations for the M/G/c queue.<|reference_end|>
arxiv
@article{begin2008note, title={Note sur les temps de service r\'esiduels dans les syst\`emes type M/G/c}, author={Thomas Begin (LIP6), Alexandre Brandwajn (UCSC)}, journal={Colloque Francophone sur l'Ing\'enierie des Protocoles (CFIP), Les Arcs : France (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0802.1176}, primaryClass={cs.NI cs.PF} }
begin2008note
arxiv-2586
0802.1220
Complexity of Decoding Positive-Rate Reed-Solomon Codes
<|reference_start|>Complexity of Decoding Positive-Rate Reed-Solomon Codes: The complexity of maximal likelihood decoding of the Reed-Solomon codes $[q-1, k]_q$ is a well known open problem. The only known result in this direction states that it is at least as hard as the discrete logarithm in some cases where the information rate unfortunately goes to zero. In this paper, we remove the rate restriction and prove that the same complexity result holds for any positive information rate. In particular, this resolves an open problem left in [4], and rules out the possibility of a polynomial time algorithm for maximal likelihood decoding problem of Reed-Solomon codes of any rate under a well known cryptographical hardness assumption. As a side result, we give an explicit construction of Hamming balls of radius bounded away from the minimum distance, which contain exponentially many codewords for Reed-Solomon code of any positive rate less than one. The previous constructions only apply to Reed-Solomon codes of diminishing rates. We also give an explicit construction of Hamming balls of relative radius less than 1 which contain subexponentially many codewords for Reed-Solomon code of rate approaching one.<|reference_end|>
arxiv
@article{cheng2008complexity, title={Complexity of Decoding Positive-Rate Reed-Solomon Codes}, author={Qi Cheng and Daqing Wan}, journal={arXiv preprint arXiv:0802.1220}, year={2008}, archivePrefix={arXiv}, eprint={0802.1220}, primaryClass={cs.IT math.IT} }
cheng2008complexity
arxiv-2587
0802.1226
Lower Bounds for Complementation of omega-Automata Via the Full Automata Technique
<|reference_start|>Lower Bounds for Complementation of omega-Automata Via the Full Automata Technique: In this paper, we first introduce a lower bound technique for the state complexity of transformations of automata. Namely we suggest first considering the class of full automata in lower bound analysis, and later reducing the size of the large alphabet via alphabet substitutions. Then we apply such a technique to the complementation of nondeterministic \omega-automata, and obtain several lower bound results. Particularly, we prove an \Omega((0.76n)^n) lower bound for B\"uchi complementation, which also holds for almost every complementation or determinization transformation of nondeterministic omega-automata, and prove an optimal \Omega((nk)^n) lower bound for the complementation of generalized B\"uchi automata, which holds for Streett automata as well.<|reference_end|>
arxiv
@article{yan2008lower, title={Lower Bounds for Complementation of omega-Automata Via the Full Automata Technique}, author={Qiqi Yan}, journal={Logical Methods in Computer Science, Volume 4, Issue 1 (March 19, 2008) lmcs:992}, year={2008}, doi={10.2168/LMCS-4(1:5)2008}, archivePrefix={arXiv}, eprint={0802.1226}, primaryClass={cs.LO} }
yan2008lower
arxiv-2588
0802.1237
Minimum Entropy Orientations
<|reference_start|>Minimum Entropy Orientations: We study graph orientations that minimize the entropy of the in-degree sequence. The problem of finding such an orientation is an interesting special case of the minimum entropy set cover problem previously studied by Halperin and Karp [Theoret. Comput. Sci., 2005] and by the current authors [Algorithmica, to appear]. We prove that the minimum entropy orientation problem is NP-hard even if the graph is planar, and that there exists a simple linear-time algorithm that returns an approximate solution with an additive error guarantee of 1 bit. This improves on the only previously known algorithm which has an additive error guarantee of log_2 e bits (approx. 1.4427 bits).<|reference_end|>
arxiv
@article{cardinal2008minimum, title={Minimum Entropy Orientations}, author={Jean Cardinal, Samuel Fiorini, and Gwena\"el Joret}, journal={Operations Research Letters 36 (2008), pp. 680-683}, year={2008}, doi={10.1016/j.orl.2008.06.010}, archivePrefix={arXiv}, eprint={0802.1237}, primaryClass={cs.DS cs.DM} }
cardinal2008minimum
arxiv-2589
0802.1244
Learning Balanced Mixtures of Discrete Distributions with Small Sample
<|reference_start|>Learning Balanced Mixtures of Discrete Distributions with Small Sample: We study the problem of partitioning a small sample of $n$ individuals from a mixture of $k$ product distributions over a Boolean cube $\{0, 1\}^K$ according to their distributions. Each distribution is described by a vector of allele frequencies in $\R^K$. Given two distributions, we use $\gamma$ to denote the average $\ell_2^2$ distance in frequencies across $K$ dimensions, which measures the statistical divergence between them. We study the case assuming that bits are independently distributed across $K$ dimensions. This work demonstrates that, for a balanced input instance for $k = 2$, a certain graph-based optimization function returns the correct partition with high probability, where a weighted graph $G$ is formed over $n$ individuals, whose pairwise hamming distances between their corresponding bit vectors define the edge weights, so long as $K = \Omega(\ln n/\gamma)$ and $Kn = \tilde\Omega(\ln n/\gamma^2)$. The function computes a maximum-weight balanced cut of $G$, where the weight of a cut is the sum of the weights across all edges in the cut. This result demonstrates a nice property in the high-dimensional feature space: one can trade off the number of features that are required with the size of the sample to accomplish certain tasks like clustering.<|reference_end|>
arxiv
@article{zhou2008learning, title={Learning Balanced Mixtures of Discrete Distributions with Small Sample}, author={Shuheng Zhou}, journal={arXiv preprint arXiv:0802.1244}, year={2008}, archivePrefix={arXiv}, eprint={0802.1244}, primaryClass={cs.LG stat.ML} }
zhou2008learning
arxiv-2590
0802.1258
Bayesian Nonlinear Principal Component Analysis Using Random Fields
<|reference_start|>Bayesian Nonlinear Principal Component Analysis Using Random Fields: We propose a novel model for nonlinear dimension reduction motivated by the probabilistic formulation of principal component analysis. Nonlinearity is achieved by specifying different transformation matrices at different locations of the latent space and smoothing the transformation using a Markov random field type prior. The computation is made feasible by the recent advances in sampling from von Mises-Fisher distributions.<|reference_end|>
arxiv
@article{lian2008bayesian, title={Bayesian Nonlinear Principal Component Analysis Using Random Fields}, author={Heng Lian}, journal={arXiv preprint arXiv:0802.1258}, year={2008}, archivePrefix={arXiv}, eprint={0802.1258}, primaryClass={cs.CV cs.LG} }
lian2008bayesian
arxiv-2591
0802.1274
The Invar tensor package: Differential invariants of Riemann
<|reference_start|>The Invar tensor package: Differential invariants of Riemann: The long standing problem of the relations among the scalar invariants of the Riemann tensor is computationally solved for all 6x10^23 objects with up to 12 derivatives of the metric. This covers cases ranging from products of up to 6 undifferentiated Riemann tensors to cases with up to 10 covariant derivatives of a single Riemann. We extend our computer algebra system Invar to produce within seconds a canonical form for any of those objects in terms of a basis. The process is as follows: (1) an invariant is converted in real time into a canonical form with respect to the permutation symmetries of the Riemann tensor; (2) Invar reads a database of more than 6x10^5 relations and applies those coming from the cyclic symmetry of the Riemann tensor; (3) then applies the relations coming from the Bianchi identity, (4) the relations coming from commutations of covariant derivatives, (5) the dimensionally-dependent identities for dimension 4, and finally (6) simplifies invariants that can be expressed as product of dual invariants. Invar runs on top of the tensor computer algebra systems xTensor (for Mathematica) and Canon (for Maple).<|reference_end|>
arxiv
@article{martin-garcia2008the, title={The Invar tensor package: Differential invariants of Riemann}, author={Jose M. Martin-Garcia, David Yllanes, Renato Portugal}, journal={Comp.Phys.Commun.179:586-590,2008}, year={2008}, doi={10.1016/j.cpc.2008.04.018}, archivePrefix={arXiv}, eprint={0802.1274}, primaryClass={cs.SC gr-qc hep-th} }
martin-garcia2008the
arxiv-2592
0802.1296
On quantum statistics in data analysis
<|reference_start|>On quantum statistics in data analysis: Originally, quantum probability theory was developed to analyze statistical phenomena in quantum systems, where classical probability theory does not apply, because the lattice of measurable sets is not necessarily distributive. On the other hand, it is well known that the lattices of concepts, that arise in data analysis, are in general also non-distributive, albeit for completely different reasons. In his recent book, van Rijsbergen argues that many of the logical tools developed for quantum systems are also suitable for applications in information retrieval. I explore the mathematical support for this idea on an abstract vector space model, covering several forms of data analysis (information retrieval, data mining, collaborative filtering, formal concept analysis...), and roughly based on an idea from categorical quantum mechanics. It turns out that quantum (i.e., noncommutative) probability distributions arise already in this rudimentary mathematical framework. We show that a Bell-type inequality must be satisfied by the standard similarity measures, if they are used for preference predictions. The fact that already a very general, abstract version of the vector space model yields simple counterexamples for such inequalities seems to be an indicator of a genuine need for quantum statistics in data analysis.<|reference_end|>
arxiv
@article{pavlovic2008on, title={On quantum statistics in data analysis}, author={Dusko Pavlovic}, journal={arXiv preprint arXiv:0802.1296}, year={2008}, archivePrefix={arXiv}, eprint={0802.1296}, primaryClass={cs.IR math.CT quant-ph} }
pavlovic2008on
arxiv-2593
0802.1303
On GCD-morphic sequences
<|reference_start|>On GCD-morphic sequences: This note is a response to one of the problems posed by Kwa\'sniewski in [1,2], see also [3] i.e. GCD-morphic Problem III. We show that any GCD-morphic sequence $F$ is the pointwise product of primary GCD-morphic sequences and any GCD-morphic sequence is encoded by a natural-number-valued sequence satisfying condition (C1). The problem, of general importance - for example in number theory - was formulated in [1,2] while investigating a new class of DAG's and their corresponding p.o. sets encoded uniquely by sequences with combinatorially interpretable properties.<|reference_end|>
arxiv
@article{dziemiańczuk2008on, title={On GCD-morphic sequences}, author={M. Dziemia\'nczuk, W. Bajguz}, journal={arXiv preprint arXiv:0802.1303}, year={2008}, archivePrefix={arXiv}, eprint={0802.1303}, primaryClass={math.CO cs.DM} }
dziemiańczuk2008on
arxiv-2594
0802.1306
Network as a computer: ranking paths to find flows
<|reference_start|>Network as a computer: ranking paths to find flows: We explore a simple mathematical model of network computation, based on Markov chains. Similar models apply to a broad range of computational phenomena, arising in networks of computers, as well as in genetic, and neural nets, in social networks, and so on. The main problem of interaction with such spontaneously evolving computational systems is that the data are not uniformly structured. An interesting approach is to try to extract the semantical content of the data from their distribution among the nodes. A concept is then identified by finding the community of nodes that share it. The task of data structuring is thus reduced to the task of finding the network communities, as groups of nodes that together perform some non-local data processing. Towards this goal, we extend the ranking methods from nodes to paths. This allows us to extract some information about the likely flow biases from the available static information about the network.<|reference_end|>
arxiv
@article{pavlovic2008network, title={Network as a computer: ranking paths to find flows}, author={Dusko Pavlovic}, journal={arXiv preprint arXiv:0802.1306}, year={2008}, archivePrefix={arXiv}, eprint={0802.1306}, primaryClass={cs.IR cs.AI math.CT} }
pavlovic2008network
arxiv-2595
0802.1312
Untangling polygons and graphs
<|reference_start|>Untangling polygons and graphs: Untangling is a process in which some vertices of a planar graph are moved to obtain a straight-line plane drawing. The aim is to move as few vertices as possible. We present an algorithm that untangles the cycle graph C_n while keeping at least \Omega(n^{2/3}) vertices fixed. For any graph G, we also present an upper bound on the number of fixed vertices in the worst case. The bound is a function of the number of vertices, maximum degree and diameter of G. One of its consequences is the upper bound O((n log n)^{2/3}) for all 3-vertex-connected planar graphs.<|reference_end|>
arxiv
@article{cibulka2008untangling, title={Untangling polygons and graphs}, author={Josef Cibulka}, journal={Discrete and Computational Geometry 43(2): 402-411 (2010)}, year={2008}, doi={10.1007/s00454-009-9150-x}, archivePrefix={arXiv}, eprint={0802.1312}, primaryClass={cs.CG cs.DM} }
cibulka2008untangling
arxiv-2596
0802.1327
Exchange of Limits: Why Iterative Decoding Works
<|reference_start|>Exchange of Limits: Why Iterative Decoding Works: We consider communication over binary-input memoryless output-symmetric channels using low-density parity-check codes and message-passing decoding. The asymptotic (in the length) performance of such a combination for a fixed number of iterations is given by density evolution. Letting the number of iterations tend to infinity we get the density evolution threshold, the largest channel parameter so that the bit error probability tends to zero as a function of the iterations. In practice we often work with short codes and perform a large number of iterations. It is therefore interesting to consider what happens if in the standard analysis we exchange the order in which the blocklength and the number of iterations diverge to infinity. In particular, we can ask whether both limits give the same threshold. Although empirical observations strongly suggest that the exchange of limits is valid for all channel parameters, we limit our discussion to channel parameters below the density evolution threshold. Specifically, we show that under some suitable technical conditions the bit error probability vanishes below the density evolution threshold regardless of how the limit is taken.<|reference_end|>
arxiv
@article{korada2008exchange, title={Exchange of Limits: Why Iterative Decoding Works}, author={Satish Babu Korada, Ruediger Urbanke}, journal={arXiv preprint arXiv:0802.1327}, year={2008}, archivePrefix={arXiv}, eprint={0802.1327}, primaryClass={cs.IT math.IT} }
korada2008exchange
arxiv-2597
0802.1332
A connection between palindromic and factor complexity using return words
<|reference_start|>A connection between palindromic and factor complexity using return words: In this paper we prove that for any infinite word W whose set of factors is closed under reversal, the following conditions are equivalent: (I) all complete returns to palindromes are palindromes; (II) P(n) + P(n+1) = C(n+1) - C(n) + 2 for all n, where P (resp. C) denotes the palindromic complexity (resp. factor complexity) function of W, which counts the number of distinct palindromic factors (resp. factors) of each length in W.<|reference_end|>
arxiv
@article{bucci2008a, title={A connection between palindromic and factor complexity using return words}, author={Michelangelo Bucci, Alessandro De Luca, Amy Glen, Luca Q. Zamboni}, journal={Advances In Applied Mathematics 42 (2009) 60--74}, year={2008}, doi={10.1016/j.aam.2008.03.005}, archivePrefix={arXiv}, eprint={0802.1332}, primaryClass={math.CO cs.DM} }
bucci2008a
arxiv-2598
0802.1338
Some results on (a:b)-choosability
<|reference_start|>Some results on (a:b)-choosability: A solution to a problem of Erd\H{o}s, Rubin and Taylor is obtained by showing that if a graph $G$ is $(a:b)$-choosable, and $c/d > a/b$, then $G$ is not necessarily $(c:d)$-choosable. Applying probabilistic methods, an upper bound for the $k^{th}$ choice number of a graph is given. We also prove that a directed graph with maximum outdegree $d$ and no odd directed cycle is $(k(d+1):k)$-choosable for every $k \geq 1$. Other results presented in this article are related to the strong choice number of graphs (a generalization of the strong chromatic number). We conclude with complexity analysis of some decision problems related to graph choosability.<|reference_end|>
arxiv
@article{gutner2008some, title={Some results on (a:b)-choosability}, author={Shai Gutner and Michael Tarsi}, journal={arXiv preprint arXiv:0802.1338}, year={2008}, archivePrefix={arXiv}, eprint={0802.1338}, primaryClass={cs.DM cs.CC cs.DS} }
gutner2008some
arxiv-2599
0802.1348
Fourier-Based Spectral Analysis with Adaptive Resolution
<|reference_start|>Fourier-Based Spectral Analysis with Adaptive Resolution: Despite being the most popular methods of data analysis, Fourier-based techniques suffer from the problem of static resolution that is currently believed to be a fundamental limitation of the Fourier Transform. Although alternative solutions overcome this limitation, none provide the simplicity, versatility, and convenience of the Fourier analysis. The lack of convenience often prevents these alternatives from replacing classical spectral methods - even in applications that suffer from the limitation of static resolution. This work demonstrates that, contrary to the generally accepted belief, the Fourier Transform can be generalized to the case of adaptive resolution. The generalized transform provides backward compatibility with classical spectral techniques and introduces minimal computational overhead.<|reference_end|>
arxiv
@article{khilko2008fourier-based, title={Fourier-Based Spectral Analysis with Adaptive Resolution}, author={Andrey Khilko}, journal={arXiv preprint arXiv:0802.1348}, year={2008}, archivePrefix={arXiv}, eprint={0802.1348}, primaryClass={physics.data-an cs.NA math.GM} }
khilko2008fourier-based
arxiv-2600
0802.1361
Guarding curvilinear art galleries with edge or mobile guards via 2-dominance of triangulation graphs
<|reference_start|>Guarding curvilinear art galleries with edge or mobile guards via 2-dominance of triangulation graphs: We consider the problem of monitoring an art gallery modeled as a polygon, the edges of which are arcs of curves, with edge or mobile guards. Our focus is on piecewise-convex polygons, i.e., polygons that are locally convex, except possibly at the vertices, and their edges are convex arcs. We transform the problem of monitoring a piecewise-convex polygon to the problem of 2-dominating a properly defined triangulation graph with edges or diagonals, where 2-dominance requires that every triangle in the triangulation graph has at least two of its vertices in its 2-dominating set. We show that $\lfloor\frac{n+1}{3}\rfloor$ diagonal guards or $\lfloor\frac{2n+1}{5}\rfloor$ edge guards are always sufficient and sometimes necessary, in order to 2-dominate a triangulation graph. Furthermore, we show how to compute: a diagonal 2-dominating set of size $\lfloor\frac{n+1}{3}\rfloor$ in linear time, an edge 2-dominating set of size $\lfloor\frac{2n+1}{5}\rfloor$ in $O(n^2)$ time, and an edge 2-dominating set of size $\lfloor\frac{3n}{7}\rfloor$ in O(n) time. Based on the above-mentioned results, we prove that, for piecewise-convex polygons, we can compute: a mobile guard set of size $\lfloor\frac{n+1}{3}\rfloor$ in $O(n\log{}n)$ time, an edge guard set of size $\lfloor\frac{2n+1}{5}\rfloor$ in $O(n^2)$ time, and an edge guard set of size $\lfloor\frac{3n}{7}\rfloor$ in $O(n\log{}n)$ time. Finally, we show that $\lfloor\frac{n}{3}\rfloor$ mobile or $\lceil\frac{n}{3}\rceil$ edge guards are sometimes necessary. When restricting our attention to monotone piecewise-convex polygons, the bounds mentioned above drop: $\lceil\frac{n+1}{4}\rceil$ edge or mobile guards are always sufficient and sometimes necessary; such an edge or mobile guard set, of size at most $\lceil\frac{n+1}{4}\rceil$, can be computed in O(n) time.<|reference_end|>
arxiv
@article{karavelas2008guarding, title={Guarding curvilinear art galleries with edge or mobile guards via 2-dominance of triangulation graphs}, author={Menelaos I. Karavelas}, journal={Comput. Geom. Theory Appl. 44(1):20-51, 2011}, year={2008}, doi={10.1016/j.comgeo.2010.07.002}, archivePrefix={arXiv}, eprint={0802.1361}, primaryClass={cs.CG} }
karavelas2008guarding