corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-673101 | cs/0507035 | Enhancing Global SLS-Resolution with Loop Cutting and Tabling Mechanisms | <|reference_start|>Enhancing Global SLS-Resolution with Loop Cutting and Tabling Mechanisms: Global SLS-resolution is a well-known procedural semantics for top-down computation of queries under the well-founded model. It inherits from SLDNF-resolution the {\em linearity} property of derivations, which makes it easy and efficient to implement using a simple stack-based memory structure. However, like SLDNF-resolution it suffers from the problem of infinite loops and redundant computations. To resolve this problem, in this paper we develop a new procedural semantics, called {\em SLTNF-resolution}, by enhancing Global SLS-resolution with loop cutting and tabling mechanisms. SLTNF-resolution is sound and complete w.r.t. the well-founded semantics for logic programs with the bounded-term-size property, and is superior to existing linear tabling procedural semantics such as SLT-resolution.<|reference_end|> | arxiv | @article{shen2005enhancing,
title={Enhancing Global SLS-Resolution with Loop Cutting and Tabling Mechanisms},
author={Yi-Dong Shen, Jia-Huai You and Li-Yan Yuan},
journal={Theoretical Computer Science 328(3):271-287, 2004},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507035},
primaryClass={cs.LO cs.AI}
} | shen2005enhancing |
arxiv-673102 | cs/0507036 | Improved Inference for Checking Annotations | <|reference_start|>Improved Inference for Checking Annotations: We consider type inference in the Hindley/Milner system extended with type annotations and constraints with a particular focus on Haskell-style type classes. We observe that standard inference algorithms are incomplete in the presence of nested type annotations. To improve the situation we introduce a novel inference scheme for checking type annotations. Our inference scheme is also incomplete in general but improves over existing implementations as found e.g. in the Glasgow Haskell Compiler (GHC). For certain cases (e.g. Haskell 98) our inference scheme is complete. Our approach has been fully implemented as part of the Chameleon system (experimental version of Haskell).<|reference_end|> | arxiv | @article{stuckey2005improved,
title={Improved Inference for Checking Annotations},
author={Peter J Stuckey, Martin Sulzmann, Jeremy Wazny},
journal={arXiv preprint arXiv:cs/0507036},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507036},
primaryClass={cs.PL cs.LO}
} | stuckey2005improved |
arxiv-673103 | cs/0507037 | Type Inference for Guarded Recursive Data Types | <|reference_start|>Type Inference for Guarded Recursive Data Types: We consider type inference for guarded recursive data types (GRDTs) -- a recent generalization of algebraic data types. We reduce type inference for GRDTs to unification under a mixed prefix. Thus, we obtain efficient type inference. Inference is incomplete because the set of type constraints allowed to appear in the type system is only a subset of those type constraints generated by type inference. Hence, inference only succeeds if the program is sufficiently type annotated. We present refined procedures to infer types incrementally and to assist the user in identifying which pieces of type information are missing. Additionally, we introduce procedures to test if a type is not principal and to find a principal type if one exists.<|reference_end|> | arxiv | @article{stuckey2005type,
title={Type Inference for Guarded Recursive Data Types},
author={Peter J. Stuckey, Martin Sulzmann},
journal={arXiv preprint arXiv:cs/0507037},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507037},
primaryClass={cs.PL cs.LO}
} | stuckey2005type |
arxiv-673104 | cs/0507038 | Upper Bound on the Number of Vertices of Polyhedra with $0,1$-Constraint Matrices | <|reference_start|>Upper Bound on the Number of Vertices of Polyhedra with $0,1$-Constraint Matrices: In this note we show that the maximum number of vertices in any polyhedron $P=\{x\in \mathbb{R}^d : Ax\leq b\}$ with $0,1$-constraint matrix $A$ and a real vector $b$ is at most $d!$.<|reference_end|> | arxiv | @article{elbassioni2005upper,
title={Upper Bound on the Number of Vertices of Polyhedra with $0,1$-Constraint
Matrices},
author={Khaled Elbassioni, Zvi Lotker, Raimund Seidel},
journal={arXiv preprint arXiv:cs/0507038},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507038},
primaryClass={cs.CG}
} | elbassioni2005upper |
arxiv-673105 | cs/0507039 | Distributed Regression in Sensor Networks: Training Distributively with Alternating Projections | <|reference_start|>Distributed Regression in Sensor Networks: Training Distributively with Alternating Projections: Wireless sensor networks (WSNs) have attracted considerable attention in recent years and motivate a host of new challenges for distributed signal processing. The problem of distributed or decentralized estimation has often been considered in the context of parametric models. However, the success of parametric methods is limited by the appropriateness of the strong statistical assumptions made by the models. In this paper, a more flexible nonparametric model for distributed regression is considered that is applicable in a variety of WSN applications including field estimation. Here, starting with the standard regularized kernel least-squares estimator, a message-passing algorithm for distributed estimation in WSNs is derived. The algorithm can be viewed as an instantiation of the successive orthogonal projection (SOP) algorithm. Various practical aspects of the algorithm are discussed and several numerical simulations validate the potential of the approach.<|reference_end|> | arxiv | @article{predd2005distributed,
title={Distributed Regression in Sensor Networks: Training Distributively with
Alternating Projections},
author={Joel B. Predd and Sanjeev R. Kulkarni and H. Vincent Poor},
journal={arXiv preprint arXiv:cs/0507039},
year={2005},
doi={10.1117/12.620194},
archivePrefix={arXiv},
eprint={cs/0507039},
primaryClass={cs.LG cs.AI cs.CV cs.DC cs.IT math.IT}
} | predd2005distributed |
arxiv-673106 | cs/0507040 | Pattern Recognition for Conditionally Independent Data | <|reference_start|>Pattern Recognition for Conditionally Independent Data: In this work we consider the task of relaxing the i.i.d. assumption in pattern recognition (or classification), aiming to make existing learning algorithms applicable to a wider range of tasks. Pattern recognition is guessing a discrete label of some object based on a set of given examples (pairs of objects and labels). We consider the case of deterministically defined labels. Traditionally, this task is studied under the assumption that examples are independent and identically distributed. However, it turns out that many results of pattern recognition theory carry over to a weaker assumption: namely, that objects are conditionally independent and identically distributed, while the only assumption on the distribution of labels is that the rate of occurrence of each label is above some positive threshold. We find a broad class of learning algorithms for which estimates of the probability of a classification error achieved under the classical i.i.d. assumption can be generalised to similar estimates for the case of conditionally i.i.d. examples.<|reference_end|> | arxiv | @article{ryabko2005pattern,
title={Pattern Recognition for Conditionally Independent Data},
author={Daniil Ryabko},
journal={Journal of Machine Learning Research 7(Apr):645-664, 2006},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507040},
primaryClass={cs.LG cs.AI cs.CV}
} | ryabko2005pattern |
arxiv-673107 | cs/0507041 | Monotone Conditional Complexity Bounds on Future Prediction Errors | <|reference_start|>Monotone Conditional Complexity Bounds on Future Prediction Errors: We bound the future loss when predicting any (computably) stochastic sequence online. Solomonoff finitely bounded the total deviation of his universal predictor M from the true distribution m by the algorithmic complexity of m. Here we assume we are at a time t>1 and have already observed x=x_1...x_t. We bound the future prediction performance on x_{t+1}x_{t+2}... by a new variant of algorithmic complexity of m given x, plus the complexity of the randomness deficiency of x. The new complexity is monotone in its condition in the sense that this complexity can only decrease if the condition is prolonged. We also briefly discuss potential generalizations to Bayesian model classes and to classification problems.<|reference_end|> | arxiv | @article{chernov2005monotone,
title={Monotone Conditional Complexity Bounds on Future Prediction Errors},
author={Alexey Chernov and Marcus Hutter},
journal={Proc. 16th International Conf. on Algorithmic Learning Theory (ALT
2005) 414-428},
year={2005},
number={IDSIA-16-05},
archivePrefix={arXiv},
eprint={cs/0507041},
primaryClass={cs.LG cs.AI cs.IT math.IT}
} | chernov2005monotone |
arxiv-673108 | cs/0507042 | The MammoGrid Virtual Organisation - Federating Distributed Mammograms | <|reference_start|>The MammoGrid Virtual Organisation - Federating Distributed Mammograms: The MammoGrid project aims to deliver a prototype which enables the effective collaboration between radiologists using grid, service-orientation and database solutions. The grid technologies and service-based database management solution provide the platform for integrating diverse and distributed resources, creating what is called a virtual organisation. The MammoGrid Virtual Organisation facilitates the sharing and coordinated access to mammography data, medical imaging software and computing resources of participating hospitals. Hospitals manage their local database of mammograms, but in addition, radiologists who are part of this organisation can share mammograms, reports, results and image analysis software. The MammoGrid Virtual Organisation is a federation of autonomous multi-centre sites which transcends national boundaries. This paper outlines the service-based approach in the creation and management of the federated distributed mammography database and discusses the role of virtual organisations in distributed image analysis.<|reference_end|> | arxiv | @article{estrella2005the,
title={The MammoGrid Virtual Organisation - Federating Distributed Mammograms},
author={Florida Estrella, Richard McClatchey, Dmitry Rogulin},
journal={arXiv preprint arXiv:cs/0507042},
year={2005},
number={Medical Informatics Europe MIE2005 paper publication},
archivePrefix={arXiv},
eprint={cs/0507042},
primaryClass={cs.DC cs.DB}
} | estrella2005the |
arxiv-673109 | cs/0507043 | Proof rules for purely quantum programs | <|reference_start|>Proof rules for purely quantum programs: We apply the notion of quantum predicate proposed by D'Hondt and Panangaden to analyze a purely quantum language fragment which describes the quantum part of a future quantum computer in Knill's architecture. The denotational semantics, weakest precondition semantics, and weakest liberal precondition semantics of this language fragment are introduced. To help reasoning about quantum programs involving quantum loops, we extend proof rules for classical probabilistic programs to our purely quantum programs.<|reference_end|> | arxiv | @article{feng2005proof,
title={Proof rules for purely quantum programs},
author={Yuan Feng, Runyao Duan, Zhengfeng Ji, and Mingsheng Ying},
journal={arXiv preprint arXiv:cs/0507043},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507043},
primaryClass={cs.PL quant-ph}
} | feng2005proof |
arxiv-673110 | cs/0507044 | Defensive Universal Learning with Experts | <|reference_start|>Defensive Universal Learning with Experts: This paper shows how universal learning can be achieved with expert advice. To this aim, we specify an experts algorithm with the following characteristics: (a) it uses only feedback from the actions actually chosen (bandit setup), (b) it can be applied with countably infinite expert classes, and (c) it copes with losses that may grow in time appropriately slowly. We prove loss bounds against an adaptive adversary. From this, we obtain a master algorithm for "reactive" experts problems, which means that the master's actions may influence the behavior of the adversary. Our algorithm can significantly outperform standard experts algorithms on such problems. Finally, we combine it with a universal expert class. The resulting universal learner performs -- in a certain sense -- almost as well as any computable strategy, for any online decision problem. We also specify the (worst-case) convergence speed, which is very slow.<|reference_end|> | arxiv | @article{poland2005defensive,
title={Defensive Universal Learning with Experts},
author={Jan Poland and Marcus Hutter},
journal={Proc. 16th International Conf. on Algorithmic Learning Theory (ALT
2005) 356-370},
year={2005},
number={IDSIA-15-05},
archivePrefix={arXiv},
eprint={cs/0507044},
primaryClass={cs.LG}
} | poland2005defensive |
arxiv-673111 | cs/0507045 | In the beginning was game semantics | <|reference_start|>In the beginning was game semantics: This article presents an overview of computability logic -- the game-semantically constructed logic of interactive computational tasks and resources. There is only one non-overview, technical section in it, devoted to a proof of the soundness of affine logic with respect to the semantics of computability logic. A comprehensive online source on the subject can be found at http://www.cis.upenn.edu/~giorgi/cl.html<|reference_end|> | arxiv | @article{japaridze2005in,
title={In the beginning was game semantics},
author={Giorgi Japaridze},
journal={Games: Unifying Logic, Language and Philosophy. O. Majer, A.-V.
Pietarinen and T. Tulenheimo, eds. Springer 2009, pp. 249-350},
year={2005},
doi={10.1007/978-1-4020-9374-6_11},
archivePrefix={arXiv},
eprint={cs/0507045},
primaryClass={cs.LO cs.AI math.LO}
} | japaridze2005in |
arxiv-673112 | cs/0507046 | Revisiting Internet AS-level Topology Discovery | <|reference_start|>Revisiting Internet AS-level Topology Discovery: The development of veracious models of the Internet topology has received a lot of attention in the last few years. Many proposed models are based on topologies derived from RouteViews BGP table dumps (BTDs). However, BTDs do not capture all AS-links of the Internet topology and, most importantly, the number of hidden AS-links is unknown, resulting in AS-graphs of questionable quality. As a first step to address this problem, we introduce a new AS-topology discovery methodology that results in more complete and accurate graphs. Moreover, we use data available from existing measurement facilities, circumventing the burden of additional measurement infrastructure. We deploy our methodology and construct an AS-topology that has at least 61.5% more AS-links than BTD-derived AS-topologies we examined. Finally, we analyze the temporal and topological properties of the augmented graph and pinpoint the differences from BTD-derived AS-topologies.<|reference_end|> | arxiv | @article{dimitropoulos2005revisiting,
title={Revisiting Internet AS-level Topology Discovery},
author={Xenofontas Dimitropoulos, Dmitri Krioukov, George Riley},
journal={PAM 2005; LNCS 3431, p. 177, 2005},
year={2005},
doi={10.1007/b135479},
archivePrefix={arXiv},
eprint={cs/0507046},
primaryClass={cs.NI}
} | dimitropoulos2005revisiting |
arxiv-673113 | cs/0507047 | Inferring AS Relationships: Dead End or Lively Beginning? | <|reference_start|>Inferring AS Relationships: Dead End or Lively Beginning?: Recent techniques for inferring business relationships between ASs have yielded maps that have extremely few invalid BGP paths in the terminology of Gao. However, some relationships inferred by these newer algorithms are incorrect, leading to the deduction of unrealistic AS hierarchies. We investigate this problem and discover what causes it. Having obtained such insight, we generalize the problem of AS relationship inference as a multiobjective optimization problem with node-degree-based corrections to the original objective function of minimizing the number of invalid paths. We solve the generalized version of the problem using the semidefinite programming relaxation of the MAX2SAT problem. Keeping the number of invalid paths small, we obtain a more veracious solution than that yielded by recent heuristics.<|reference_end|> | arxiv | @article{dimitropoulos2005inferring,
title={Inferring AS Relationships: Dead End or Lively Beginning?},
author={Xenofontas Dimitropoulos, Dmitri Krioukov, Bradley Huffaker, kc
claffy, George Riley},
journal={WEA 2005; LNCS 3503, p. 113, 2005},
year={2005},
doi={10.1007/11427186_12},
archivePrefix={arXiv},
eprint={cs/0507047},
primaryClass={cs.NI cs.DS}
} | dimitropoulos2005inferring |
arxiv-673114 | cs/0507048 | Redundancy in Logic III: Non-Monotonic Reasoning | <|reference_start|>Redundancy in Logic III: Non-Monotonic Reasoning: Results about the redundancy of circumscriptive and default theories are presented. In particular, the complexity of establishing whether a given theory is redundant is established.<|reference_end|> | arxiv | @article{liberatore2005redundancy,
title={Redundancy in Logic III: Non-Monotonic Reasoning},
author={Paolo Liberatore},
journal={arXiv preprint arXiv:cs/0507048},
year={2005},
doi={10.1016/j.artint.2008.02.003},
archivePrefix={arXiv},
eprint={cs/0507048},
primaryClass={cs.LO cs.AI cs.CC}
} | liberatore2005redundancy |
arxiv-673115 | cs/0507049 | The Skip Quadtree: A Simple Dynamic Data Structure for Multidimensional Data | <|reference_start|>The Skip Quadtree: A Simple Dynamic Data Structure for Multidimensional Data: We present a new multi-dimensional data structure, which we call the skip quadtree (for point data in R^2) or the skip octree (for point data in R^d, with constant d>2). Our data structure combines the best features of two well-known data structures, in that it has the well-defined "box"-shaped regions of region quadtrees and the logarithmic-height search and update hierarchical structure of skip lists. Indeed, the bottom level of our structure is exactly a region quadtree (or octree for higher dimensional data). We describe efficient algorithms for inserting and deleting points in a skip quadtree, as well as fast methods for performing point location and approximate range queries.<|reference_end|> | arxiv | @article{eppstein2005the,
title={The Skip Quadtree: A Simple Dynamic Data Structure for Multidimensional
Data},
author={David Eppstein, Michael T. Goodrich, Jonathan Z. Sun},
journal={arXiv preprint arXiv:cs/0507049},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507049},
primaryClass={cs.CG}
} | eppstein2005the |
arxiv-673116 | cs/0507050 | Skip-Webs: Efficient Distributed Data Structures for Multi-Dimensional Data Sets | <|reference_start|>Skip-Webs: Efficient Distributed Data Structures for Multi-Dimensional Data Sets: We present a framework for designing efficient distributed data structures for multi-dimensional data. Our structures, which we call skip-webs, extend and improve previous randomized distributed data structures, including skipnets and skip graphs. Our framework applies to a general class of data querying scenarios, which include linear (one-dimensional) data, such as sorted sets, as well as multi-dimensional data, such as d-dimensional octrees and digital tries of character strings defined over a fixed alphabet. We show how to perform a query over such a set of n items spread among n hosts using O(log n / log log n) messages for one-dimensional data, or O(log n) messages for fixed-dimensional data, while using only O(log n) space per host. We also show how to make such structures dynamic so as to allow for insertions and deletions in O(log n) messages for quadtrees, octrees, and digital tries, and O(log n / log log n) messages for one-dimensional data. Finally, we show how to apply a blocking strategy to skip-webs to further improve message complexity for one-dimensional data when hosts can store more data.<|reference_end|> | arxiv | @article{arge2005skip-webs:,
title={Skip-Webs: Efficient Distributed Data Structures for Multi-Dimensional
Data Sets},
author={Lars Arge, David Eppstein, Michael T. Goodrich},
journal={arXiv preprint arXiv:cs/0507050},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507050},
primaryClass={cs.DC cs.CG cs.DS}
} | arge2005skip-webs: |
arxiv-673117 | cs/0507051 | Confluent Layered Drawings | <|reference_start|>Confluent Layered Drawings: We combine the idea of confluent drawings with Sugiyama style drawings, in order to reduce the edge crossings in the resultant drawings. Furthermore, it is easier to understand the structures of graphs from the mixed style drawings. The basic idea is to cover a layered graph by complete bipartite subgraphs (bicliques), then replace bicliques with tree-like structures. The biclique cover problem is reduced to a special edge coloring problem and solved by heuristic coloring algorithms. Our method can be extended to obtain multi-depth confluent layered drawings.<|reference_end|> | arxiv | @article{eppstein2005confluent,
title={Confluent Layered Drawings},
author={David Eppstein, Michael T. Goodrich, Jeremy Yu Meng},
journal={Algorithmica 47(4):439-452, 2007},
year={2005},
doi={10.1007/s00453-006-0159-8},
archivePrefix={arXiv},
eprint={cs/0507051},
primaryClass={cs.CG cs.DS}
} | eppstein2005confluent |
arxiv-673118 | cs/0507052 | Finite automata for testing uniqueness of Eulerian trails | <|reference_start|>Finite automata for testing uniqueness of Eulerian trails: We investigate the condition under which the Eulerian trail of a digraph is unique, and design a finite automaton to examine it. The algorithm is effective, for if the condition is violated, it will be noticed immediately without the need to trace through the whole trail.<|reference_end|> | arxiv | @article{li2005finite,
title={Finite automata for testing uniqueness of Eulerian trails},
author={Qiang Li, Hui-Min Xie},
journal={J. Comput. System Sci., 2008. 74(5): 870-874},
year={2005},
doi={10.1016/j.jcss.2007.10.004},
archivePrefix={arXiv},
eprint={cs/0507052},
primaryClass={cs.CC cs.LO}
} | li2005finite |
arxiv-673119 | cs/0507053 | Nonrepetitive Paths and Cycles in Graphs with Application to Sudoku | <|reference_start|>Nonrepetitive Paths and Cycles in Graphs with Application to Sudoku: We provide a simple linear time transformation from a directed or undirected graph with labeled edges to an unlabeled digraph, such that paths in the input graph in which no two consecutive edges have the same label correspond to paths in the transformed graph and vice versa. Using this transformation, we provide efficient algorithms for finding paths and cycles with no two consecutive equal labels. We also consider related problems where the paths and cycles are required to be simple; we find efficient algorithms for the undirected case of these problems but show the directed case to be NP-complete. We apply our path and cycle finding algorithms in a program for generating and solving Sudoku puzzles, and show experimentally that they lead to effective puzzle-solving rules that may also be of interest to human Sudoku puzzle solvers.<|reference_end|> | arxiv | @article{eppstein2005nonrepetitive,
title={Nonrepetitive Paths and Cycles in Graphs with Application to Sudoku},
author={David Eppstein},
journal={arXiv preprint arXiv:cs/0507053},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507053},
primaryClass={cs.DS cs.AI}
} | eppstein2005nonrepetitive |
arxiv-673120 | cs/0507054 | f2mma: FORTRAN to Mathematica translator | <|reference_start|>f2mma: FORTRAN to Mathematica translator: The f2mma program can be used to translate programs written in a subset of the FORTRAN language into the {\sl Mathematica} system's programming language. This subset has been sufficient to translate the GAPP (Global Analysis of Particle Properties) program into the {\sl Mathematica} language automatically. A table of observables calculated with GAPP ({\sl Mathematica}) is presented.<|reference_end|> | arxiv | @article{siver2005f2mma:,
title={f2mma: FORTRAN to Mathematica translator},
author={A. S. Siver},
journal={arXiv preprint arXiv:cs/0507054},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507054},
primaryClass={cs.OH}
} | siver2005f2mma: |
arxiv-673121 | cs/0507055 | ReacProc: A Tool to Process Reactions Describing Particle Interactions | <|reference_start|>ReacProc: A Tool to Process Reactions Describing Particle Interactions: ReacProc is a program written in the C/C++ programming language which can be used (1) to check reactions describing particle interactions against conservation laws and (2) to reduce an input reaction to a canonical form. A table of particle properties is available within the ReacProc package.<|reference_end|> | arxiv | @article{siver2005reacproc:,
title={ReacProc: A Tool to Process Reactions Describing Particle Interactions},
author={A. S. Siver},
journal={arXiv preprint arXiv:cs/0507055},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507055},
primaryClass={cs.CE}
} | siver2005reacproc: |
arxiv-673122 | cs/0507056 | Explorations in engagement for humans and robots | <|reference_start|>Explorations in engagement for humans and robots: This paper explores the concept of engagement, the process by which individuals in an interaction start, maintain and end their perceived connection to one another. The paper reports on one aspect of engagement among human interactors--the effect of tracking faces during an interaction. It also describes the architecture of a robot that can participate in conversational, collaborative interactions with engagement gestures. Finally, the paper reports on findings of experiments with human participants who interacted with a robot when it either performed or did not perform engagement gestures. Results of the human-robot studies indicate that people become engaged with robots: they direct their attention to the robot more often in interactions where engagement gestures are present, and they find interactions more appropriate when engagement gestures are present than when they are not.<|reference_end|> | arxiv | @article{sidner2005explorations,
title={Explorations in engagement for humans and robots},
author={Candace L. Sidner, Christopher Lee, Cory Kidd, Neal Lesh, Charles Rich},
journal={Artificial Intelligence, volume 166, issues 1-2, August 2005, pp.
140-164},
year={2005},
number={MERL TR2005-017},
archivePrefix={arXiv},
eprint={cs/0507056},
primaryClass={cs.AI cs.CL cs.RO}
} | sidner2005explorations |
arxiv-673123 | cs/0507057 | A new sibling of BQP | <|reference_start|>A new sibling of BQP: We present a new quantum complexity class, called MQ^2, which is contained in AWPP. This class has a compact and simple mathematical definition, involving only polynomial-time computable functions and a unitarity condition. It contains both Deutsch-Jozsa's and Shor's algorithms, while its relation to BQP is unknown. This shows that in the complexity class hierarchy, BQP is not an extraordinary isolated island, but has "siblings" which can likewise solve prime factorization.<|reference_end|> | arxiv | @article{tusarova2005a,
title={A new sibling of BQP},
author={Tereza Tusarova},
journal={arXiv preprint arXiv:cs/0507057},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507057},
primaryClass={cs.CC}
} | tusarova2005a |
arxiv-673124 | cs/0507058 | Paving the Way for Image Understanding: A New Kind of Image Decomposition is Desired | <|reference_start|>Paving the Way for Image Understanding: A New Kind of Image Decomposition is Desired: In this paper we present an unconventional image segmentation approach which is devised to meet the requirements of image understanding and pattern recognition tasks. Generally, image understanding assumes an interplay of two sub-processes: image information content discovery and image information content interpretation. Despite its widespread use, the notion of "image information content" is still ill-defined, intuitive, and ambiguous. Most often, it is used in Shannon's sense, which means information content assessment averaged over the whole signal ensemble. Humans, however, rarely resort to such estimates. They are very effective in decomposing images into their meaningful constituents and focusing attention on the perceptually relevant image parts. We posit that, following the latest findings in human attention vision studies and the concepts of Kolmogorov's complexity theory, an unorthodox segmentation approach can be proposed that provides effective image decomposition into information-preserving image fragments well suited for subsequent image interpretation. We provide some illustrative examples, demonstrating the effectiveness of this approach.<|reference_end|> | arxiv | @article{diamant2005paving,
title={Paving the Way for Image Understanding: A New Kind of Image
Decomposition is Desired},
author={Emanuel Diamant},
journal={LNCS vol. 3540, pp. 17-24, Springer Verlag, 2005},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507058},
primaryClass={cs.CV}
} | diamant2005paving |
arxiv-673125 | cs/0507059 | Data complexity of answering conjunctive queries over SHIQ knowledge bases | <|reference_start|>Data complexity of answering conjunctive queries over SHIQ knowledge bases: An algorithm for answering conjunctive queries over SHIQ knowledge bases that is coNP in data complexity is given. The algorithm is based on the tableau algorithm for reasoning with individuals in SHIQ. The blocking conditions of the tableau are weakened in such a way that the set of models the modified algorithm yields suffices to check query entailment. The modified blocking conditions are based on the ones proposed by Levy and Rousset for reasoning with Horn Rules in the description logic ALCNR.<|reference_end|> | arxiv | @article{de la fuente2005data,
title={Data complexity of answering conjunctive queries over SHIQ knowledge
bases},
author={M. Magdalena Ortiz de la Fuente, Diego Calvanese, Thomas Eiter and
Enrico Franconi},
journal={arXiv preprint arXiv:cs/0507059},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507059},
primaryClass={cs.LO cs.AI cs.CC}
} | de la fuente2005data |
arxiv-673126 | cs/0507060 | The Entropy of a Binary Hidden Markov Process | <|reference_start|>The Entropy of a Binary Hidden Markov Process: The entropy of a binary symmetric Hidden Markov Process is calculated as an expansion in the noise parameter epsilon. We map the problem onto a one-dimensional Ising model in a large field of random signs and calculate the expansion coefficients up to second order in epsilon. Using a conjecture we extend the calculation to 11th order and discuss the convergence of the resulting series.<|reference_end|> | arxiv | @article{zuk2005the,
title={The Entropy of a Binary Hidden Markov Process},
author={O. Zuk, I. Kanter, E. Domany},
journal={arXiv preprint arXiv:cs/0507060},
year={2005},
doi={10.1007/s10955-005-7576-y},
archivePrefix={arXiv},
eprint={cs/0507060},
primaryClass={cs.IT cond-mat.stat-mech math.IT math.ST stat.TH}
} | zuk2005the |
arxiv-673127 | cs/0507061 | Software Architecture Overview | <|reference_start|>Software Architecture Overview: What is Software Architecture? The rules, paradigms, and patterns that help to construct, build and test a serious piece of software. It is practical experience boiled down to an abstract level. Software Architecture builds on System Engineering and the scientific method as established by Galileo Galilei: Measure what you can and make measurable what you can not. The experiment (test) is more important than the deduction. Pieces of information about software architecture are all over the internet. This paper uses citation as much as possible. The aim is to bring together an overview, not to rephrase the wording.<|reference_end|> | arxiv | @article{adrian2005software,
title={Software Architecture Overview},
author={Andre Adrian},
journal={arXiv preprint arXiv:cs/0507061},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507061},
primaryClass={cs.SE}
} | adrian2005software |
arxiv-673128 | cs/0507062 | FPL Analysis for Adaptive Bandits | <|reference_start|>FPL Analysis for Adaptive Bandits: A main problem of "Follow the Perturbed Leader" strategies for online decision problems is that regret bounds are typically proven against an oblivious adversary. In partial observation cases, it was not clear how to obtain performance guarantees against an adaptive adversary, without worsening the bounds. We propose a conceptually simple argument to resolve this problem. Using this, a regret bound of O(t^(2/3)) for FPL in the adversarial multi-armed bandit problem is shown. This bound holds for the common FPL variant using only the observations from designated exploration rounds. Using all observations allows for the stronger bound of O(t^(1/2)), matching the best bound known so far (and essentially the known lower bound) for adversarial bandits. Surprisingly, this variant does not even need explicit exploration, it is self-stabilizing. However, the sampling probabilities have to be either externally provided or approximated to sufficient accuracy, using O(t^2 log t) samples in each step.<|reference_end|> | arxiv | @article{poland2005fpl,
title={FPL Analysis for Adaptive Bandits},
author={Jan Poland},
journal={arXiv preprint arXiv:cs/0507062},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507062},
primaryClass={cs.LG}
} | poland2005fpl |
arxiv-673129 | cs/0507063 | Theoretical cryptanalysis of the Klimov-Shamir number generator TF-1 | <|reference_start|>Theoretical cryptanalysis of the Klimov-Shamir number generator TF-1: The internal state of the Klimov-Shamir number generator TF-1 consists of four words of size w bits each, whereas its intended strength is 2^{2w}. We exploit an asymmetry in its output function to show that the internal state can be recovered after having 2^w outputs, using 2^{1.5w} operations. For w=32 the attack is practical, but for their recommended w=64 it is only of theoretical interest.<|reference_end|> | arxiv | @article{tsaban2005theoretical,
title={Theoretical cryptanalysis of the Klimov-Shamir number generator TF-1},
author={Boaz Tsaban},
journal={Journal of Cryptology 20 (2007), 389-392},
year={2005},
doi={10.1007/s00145-007-0564-4},
archivePrefix={arXiv},
eprint={cs/0507063},
primaryClass={cs.CR cs.CC}
} | tsaban2005theoretical |
arxiv-673130 | cs/0507064 | Termination of rewriting strategies: a generic approach | <|reference_start|>Termination of rewriting strategies: a generic approach: We propose a generic termination proof method for rewriting under strategies, based on an explicit induction on the termination property. Rewriting trees on ground terms are modeled by proof trees, generated by alternatively applying narrowing and abstracting steps. The induction principle is applied through the abstraction mechanism, where terms are replaced by variables representing any of their normal forms. The induction ordering is not given a priori, but defined with ordering constraints, incrementally set during the proof. Abstraction constraints can be used to control the narrowing mechanism, well known to easily diverge. The generic method is then instantiated for the innermost, outermost and local strategies.<|reference_end|> | arxiv | @article{gnaedig2005termination,
title={Termination of rewriting strategies: a generic approach},
author={Isabelle Gnaedig and Helene Kirchner},
journal={arXiv preprint arXiv:cs/0507064},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507064},
primaryClass={cs.LO}
} | gnaedig2005termination |
arxiv-673131 | cs/0507065 | A Fast Greedy Algorithm for Outlier Mining | <|reference_start|>A Fast Greedy Algorithm for Outlier Mining: The task of outlier detection is to find small groups of data objects that are exceptional when compared with the rest of the data. In [38], the problem of outlier detection in categorical data is defined as an optimization problem and a local-search heuristic-based algorithm (LSA) is presented. However, as is the case with most iterative algorithms, the LSA algorithm is still very time-consuming on very large datasets. In this paper, we present a very fast greedy algorithm for mining outliers under the same optimization model. Experimental results on real datasets and large synthetic datasets show that: (1) Our algorithm has comparable performance with respect to those state-of-the-art outlier detection algorithms on identifying true outliers and (2) Our algorithm can be an order of magnitude faster than the LSA algorithm.<|reference_end|> | arxiv | @article{he2005a,
title={A Fast Greedy Algorithm for Outlier Mining},
author={Zengyou He, Xiaofei Xu, Shengchun Deng},
journal={arXiv preprint arXiv:cs/0507065},
year={2005},
number={Tr-05-0406},
archivePrefix={arXiv},
eprint={cs/0507065},
primaryClass={cs.DB cs.AI}
} | he2005a |
arxiv-673132 | cs/0507066 | Authentication Schemes Using Braid Groups | <|reference_start|>Authentication Schemes Using Braid Groups: In this paper we propose two identification schemes based on the root problem. The proposed schemes are secure against passive attacks assuming that the root problem (RP) is hard in braid groups.<|reference_end|> | arxiv | @article{lal2005authentication,
title={Authentication Schemes Using Braid Groups},
author={Sunder Lal and Atul Chaturvedi (Department of Mathematics Institute of
Basic Science)},
journal={arXiv preprint arXiv:cs/0507066},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507066},
primaryClass={cs.CR cs.CY}
} | lal2005authentication |
arxiv-673133 | cs/0507067 | Conjunctive Query Containment and Answering under Description Logics Constraints | <|reference_start|>Conjunctive Query Containment and Answering under Description Logics Constraints: Query containment and query answering are two important computational tasks in databases. While query answering amounts to computing the result of a query over a database, query containment is the problem of checking whether for every database, the result of one query is a subset of the result of another query. In this paper, we deal with unions of conjunctive queries, and we address query containment and query answering under Description Logic constraints. Every such constraint is essentially an inclusion dependency between concepts and relations, and their expressive power is due to the possibility of using complex expressions, e.g., intersection and difference of relations, special forms of quantification, regular expressions over binary relations, in the specification of the dependencies. These types of constraints capture a great variety of data models, including the relational, the entity-relationship, and the object-oriented model, all extended with various forms of constraints, and also the basic features of the ontology languages used in the context of the Semantic Web. We present the following results on both query containment and query answering. We provide a method for query containment under Description Logic constraints, thus showing that the problem is decidable, and analyze its computational complexity. We prove that query containment is undecidable in the case where we allow inequalities in the right-hand side query, even for very simple constraints and queries. We show that query answering under Description Logic constraints can be reduced to query containment, and illustrate how such a reduction provides upper bound results with respect to both combined and data complexity.<|reference_end|> | arxiv | @article{calvanese2005conjunctive,
title={Conjunctive Query Containment and Answering under Description Logics
Constraints},
author={Diego Calvanese, Giuseppe De Giacomo, Maurizio Lenzerini},
journal={arXiv preprint arXiv:cs/0507067},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507067},
primaryClass={cs.DB cs.AI}
} | calvanese2005conjunctive |
arxiv-673134 | cs/0507068 | On parity check collections for iterative erasure decoding that correct all correctable erasure patterns of a given size | <|reference_start|>On parity check collections for iterative erasure decoding that correct all correctable erasure patterns of a given size: Recently there has been interest in the construction of small parity check sets for iterative decoding of the Hamming code with the property that each uncorrectable (or stopping) set of size three is the support of a codeword and hence uncorrectable anyway. Here we reformulate and generalise the problem, and improve on this construction. First we show that a parity check collection that corrects all correctable erasure patterns of size m for the r-th order Hamming code (i.e., the Hamming code with codimension r) provides for all codes of codimension r a corresponding "generic" parity check collection with this property. This leads naturally to a necessary and sufficient condition on such generic parity check collections. We use this condition to construct a generic parity check collection for codes of codimension r correcting all correctable erasure patterns of size at most m, for all r and m <= r, thus generalising the known construction for m=3. Then we discuss optimality of our construction and show that it can be improved for m>=3 and r large enough. Finally we discuss some directions for further research.<|reference_end|> | arxiv | @article{hollmann2005on,
title={On parity check collections for iterative erasure decoding that correct
all correctable erasure patterns of a given size},
author={Henk D. L. Hollmann, Ludo M. G. M. Tolhuizen (Philips Research
Laboratories, Eindhoven, Netherlands)},
journal={arXiv preprint arXiv:cs/0507068},
year={2005},
doi={10.1109/TIT.2006.888996},
number={PR-MS 25.332},
archivePrefix={arXiv},
eprint={cs/0507068},
primaryClass={cs.IT cs.DM math.IT}
} | hollmann2005on |
arxiv-673135 | cs/0507069 | Users and Assessors in the Context of INEX: Are Relevance Dimensions Relevant? | <|reference_start|>Users and Assessors in the Context of INEX: Are Relevance Dimensions Relevant?: The main aspects of XML retrieval are identified by analysing and comparing the following two behaviours: the behaviour of the assessor when judging the relevance of returned document components; and the behaviour of users when interacting with components of XML documents. We argue that the two INEX relevance dimensions, Exhaustivity and Specificity, are not orthogonal dimensions; indeed, an empirical analysis of each dimension reveals that the grades of the two dimensions are correlated to each other. By analysing the level of agreement between the assessor and the users, we aim at identifying the best units of retrieval. The results of our analysis show that the highest level of agreement is on highly relevant and on non-relevant document components, suggesting that only the end points of the INEX 10-point relevance scale are perceived in the same way by both the assessor and the users. We propose a new definition of relevance for XML retrieval and argue that its corresponding relevance scale would be a better choice for INEX.<|reference_end|> | arxiv | @article{pehcevski2005users,
title={Users and Assessors in the Context of INEX: Are Relevance Dimensions
Relevant?},
author={Jovan Pehcevski (RMIT), James A. Thom (RMIT), Anne-Marie Vercoustre},
journal={In INEX 2005 Workshop on Element Retrieval Methodology [OAI :
oai:hal.inria.fr:inria-00000182_v1] - http://hal.inria.fr/inria-00000182},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507069},
primaryClass={cs.IR}
} | pehcevski2005users |
arxiv-673136 | cs/0507070 | Hybrid XML Retrieval: Combining Information Retrieval and a Native XML Database | <|reference_start|>Hybrid XML Retrieval: Combining Information Retrieval and a Native XML Database: This paper investigates the impact of three approaches to XML retrieval: using Zettair, a full-text information retrieval system; using eXist, a native XML database; and using a hybrid system that takes full article answers from Zettair and uses eXist to extract elements from those articles. For the content-only topics, we undertake a preliminary analysis of the INEX 2003 relevance assessments in order to identify the types of highly relevant document components. Further analysis identifies two complementary sub-cases of relevance assessments ("General" and "Specific") and two categories of topics ("Broad" and "Narrow"). We develop a novel retrieval module that for a content-only topic utilises the information from the resulting answer list of a native XML database and dynamically determines the preferable units of retrieval, which we call "Coherent Retrieval Elements". The results of our experiments show that -- when each of the three systems is evaluated against different retrieval scenarios (such as different cases of relevance assessments, different topic categories and different choices of evaluation metrics) -- the XML retrieval systems exhibit varying behaviour and the best performance can be reached for different values of the retrieval parameters. In the case of INEX 2003 relevance assessments for the content-only topics, our newly developed hybrid XML retrieval system is substantially more effective than either Zettair or eXist, and yields a robust and a very effective XML retrieval.<|reference_end|> | arxiv | @article{pehcevski2005hybrid,
title={Hybrid XML Retrieval: Combining Information Retrieval and a Native XML
Database},
author={Jovan Pehcevski (RMIT), James A. Thom (RMIT), Anne-Marie Vercoustre},
journal={arXiv preprint arXiv:cs/0507070},
year={2005},
doi={10.1007/s10791-005-0748-1},
archivePrefix={arXiv},
eprint={cs/0507070},
primaryClass={cs.IR}
} | pehcevski2005hybrid |
arxiv-673137 | cs/0507071 | Security for Distributed Web-Applications via Aspect-Oriented Programming | <|reference_start|>Security for Distributed Web-Applications via Aspect-Oriented Programming: Identity Management is becoming more and more important in business systems as they are opened for third parties including trading partners, consumers and suppliers. This paper presents an approach to securing a system without any knowledge of the system source code. The security module adds authentication and authorisation to the existing system based on aspect-oriented programming and the Liberty Alliance framework, an upcoming industry standard providing single sign-on. In an initial training phase the module is adapted to the application which is to be secured. Moreover, the use of hardware tokens and proactive computing is demonstrated. The high modularisation is achieved through the use of AspectJ, a programming language extension of Java.<|reference_end|> | arxiv | @article{kuntze2005security,
title={Security for Distributed Web-Applications via Aspect-Oriented
Programming},
author={Nicolai Kuntze, Thomas Rauch, Andreas U. Schmidt},
journal={arXiv preprint arXiv:cs/0507071},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507071},
primaryClass={cs.CR}
} | kuntze2005security |
arxiv-673138 | cs/0507072 | Reliable Data Storage in Distributed Hash Tables | <|reference_start|>Reliable Data Storage in Distributed Hash Tables: Distributed Hash Tables offer a resilient lookup service for unstable distributed environments. Resilient data storage, however, requires additional data replication and maintenance algorithms. These algorithms can have an impact on both the performance and the scalability of the system. In this paper, we describe the goals and design space of these replication algorithms. We examine an existing replication algorithm, and present a new analysis of its reliability. We then present a new dynamic replication algorithm which can operate in unstable environments. We give several possible replica placement strategies for this algorithm, and show how they impact reliability and performance. Finally we compare all replication algorithms through simulation, showing quantitatively the difference between their bandwidth use, fault tolerance and performance.<|reference_end|> | arxiv | @article{leslie2005reliable,
title={Reliable Data Storage in Distributed Hash Tables},
author={Matthew Leslie},
journal={arXiv preprint arXiv:cs/0507072},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507072},
primaryClass={cs.DC cs.NI}
} | leslie2005reliable |
arxiv-673139 | cs/0507073 | Software Performance Analysis | <|reference_start|>Software Performance Analysis: The key to speeding up applications is often understanding where the elapsed time is spent, and why. This document reviews in depth the full array of performance analysis tools and techniques available on Linux for this task, from the traditional tools like gcov and gprof, to the more advanced tools still under development like oprofile and the Linux Trace Toolkit. The focus is more on the underlying data collection and processing algorithms, and their overhead and precision, than on the cosmetic details of the graphical user interface frontends.<|reference_end|> | arxiv | @article{dagenais2005software,
title={Software Performance Analysis},
author={Michel R. Dagenais (Dept. of Computer Engineering, Ecole
Polytechnique, Montreal, Canada) Karim Yaghmour (Opersys, Montreal, Canada)
Charles Levert (Ericsson Research, Montreal, Canada) Makan Pourzandi
(Ericsson Research, Montreal, Canada)},
journal={arXiv preprint arXiv:cs/0507073},
year={2005},
archivePrefix={arXiv},
eprint={cs/0507073},
primaryClass={cs.PF cs.OS}
} | dagenais2005software |
arxiv-673140 | cs/0508001 | Dimensions of Copeland-Erdos Sequences | <|reference_start|>Dimensions of Copeland-Erdos Sequences: The base-$k$ {\em Copeland-Erd\"os sequence} given by an infinite set $A$ of positive integers is the infinite sequence $\CE_k(A)$ formed by concatenating the base-$k$ representations of the elements of $A$ in numerical order. This paper concerns the following four quantities. The {\em finite-state dimension} $\dimfs (\CE_k(A))$, a finite-state version of classical Hausdorff dimension introduced in 2001. The {\em finite-state strong dimension} $\Dimfs(\CE_k(A))$, a finite-state version of classical packing dimension introduced in 2004. This is a dual of $\dimfs(\CE_k(A))$ satisfying $\Dimfs(\CE_k(A))$ $\geq \dimfs(\CE_k(A))$. The {\em zeta-dimension} $\Dimzeta(A)$, a kind of discrete fractal dimension discovered many times over the past few decades. The {\em lower zeta-dimension} $\dimzeta(A)$, a dual of $\Dimzeta(A)$ satisfying $\dimzeta(A)\leq \Dimzeta(A)$. We prove the following. $\dimfs(\CE_k(A))\geq \dimzeta(A)$. This extends the 1946 proof by Copeland and Erd\"os that the sequence $\CE_k(\mathrm{PRIMES})$ is Borel normal. $\Dimfs(\CE_k(A))\geq \Dimzeta(A)$. These bounds are tight in the strong sense that these four quantities can have (simultaneously) any four values in $[0,1]$ satisfying the four above-mentioned inequalities.<|reference_end|> | arxiv | @article{gu2005dimensions,
title={Dimensions of Copeland-Erdos Sequences},
author={Xiaoyang Gu, Jack H. Lutz, Philippe Moser},
journal={arXiv preprint arXiv:cs/0508001},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508001},
primaryClass={cs.CC cs.IT math.IT}
} | gu2005dimensions |
arxiv-673141 | cs/0508002 | Methods for Analytical Understanding of Agent-Based Modeling of Complex Systems | <|reference_start|>Methods for Analytical Understanding of Agent-Based Modeling of Complex Systems: Von Neumann's work on universal machines and hardware development have allowed the simulation of dynamical systems through a large set of interacting agents. This is a bottom-up approach which tries to derive global properties of a complex system through local interaction rules and agent behaviour. Traditionally, such systems are modeled and simulated through top-down methods based on differential equations. Agent-Based Modeling has the advantage of simplicity and low computational cost. However, unlike differential equations, there is no standard way to express agent behaviour. Besides, it is not clear how to analytically predict the results obtained by the simulation. In this paper we survey some of these methods. For expressing agent behaviour, formal methods like Stochastic Process Algebras have been used. Such an approach is useful if the global properties of interest can be expressed as a function of stochastic time series. However, if space variables must be considered, the focus must change. In this case, multiscale techniques, based on the Chapman-Enskog expansion, were used to establish the connection between the microscopic dynamics and the macroscopic observables. Also, we use data mining techniques, like Principal Component Analysis (PCA), to study agent systems like Cellular Automata. With the help of these tools we discuss a simple society model, a Lattice Gas Automaton for fluid modeling, and knowledge discovery in CA databases. Besides, we show the capabilities of NetLogo, a software package for agent-based simulation of complex systems, and report our experience with it.<|reference_end|> | arxiv | @article{giraldi2005methods,
title={Methods for Analytical Understanding of Agent-Based Modeling of Complex
Systems},
author={Gilson A. Giraldi, Luis C. da Costa, Adilson V. Xavier, Paulo S.
Rodrigues},
journal={arXiv preprint arXiv:cs/0508002},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508002},
primaryClass={cs.GR}
} | giraldi2005methods |
arxiv-673142 | cs/0508003 | Model Checking Probabilistic Pushdown Automata | <|reference_start|>Model Checking Probabilistic Pushdown Automata: We consider the model checking problem for probabilistic pushdown automata (pPDA) and properties expressible in various probabilistic logics. We start with properties that can be formulated as instances of a generalized random walk problem. We prove that both qualitative and quantitative model checking for this class of properties and pPDA is decidable. Then we show that model checking for the qualitative fragment of the logic PCTL and pPDA is also decidable. Moreover, we develop an error-tolerant model checking algorithm for PCTL and the subclass of stateless pPDA. Finally, we consider the class of omega-regular properties and show that both qualitative and quantitative model checking for pPDA is decidable.<|reference_end|> | arxiv | @article{esparza2005model,
title={Model Checking Probabilistic Pushdown Automata},
author={Javier Esparza, Antonin Kucera, Richard Mayr},
journal={Logical Methods in Computer Science, Volume 2, Issue 1 (March 7,
2006) lmcs:2256},
year={2005},
doi={10.2168/LMCS-2(1:2)2006},
archivePrefix={arXiv},
eprint={cs/0508003},
primaryClass={cs.LO}
} | esparza2005model |
arxiv-673143 | cs/0508004 | A three-valued semantics for logic programmers | <|reference_start|>A three-valued semantics for logic programmers: This paper describes a simpler way for programmers to reason about the correctness of their code. The study of semantics of logic programs has shown strong links between the model theoretic semantics (truth and falsity of atoms in the programmer's interpretation of a program), procedural semantics (for example, SLD resolution) and fixpoint semantics (which is useful for program analysis and alternative execution mechanisms). Most of this work assumes that intended interpretations are two-valued: a ground atom is true (and should succeed according to the procedural semantics) or false (and should not succeed). In reality, intended interpretations are less precise. Programmers consider that some atoms "should not occur" or are "ill-typed" or "inadmissible". Programmers don't know and don't care whether such atoms succeed. In this paper we propose a three-valued semantics for (essentially) pure Prolog programs with (ground) negation as failure which reflects this. The semantics of Fitting is similar but only associates the third truth value with non-termination. We provide tools to reason about correctness of programs without the need for unnatural precision or undue restrictions on programming style. As well as theoretical results, we provide a programmer-oriented synopsis. This work has come out of work on declarative debugging, where it has been recognised that inadmissible calls are important. This paper has been accepted to appear in Theory and Practice of Logic Programming.<|reference_end|> | arxiv | @article{naish2005a,
title={A three-valued semantics for logic programmers},
author={Lee Naish},
journal={Theory and Practice of Logic Programming 6(5), September 2006, pp.
509-538.},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508004},
primaryClass={cs.LO}
} | naish2005a |
arxiv-673144 | cs/0508005 | Logic Column 13: Reasoning Formally about Quantum Systems: An Overview | <|reference_start|>Logic Column 13: Reasoning Formally about Quantum Systems: An Overview: This article is intended as an introduction to the subject of quantum logic, and as a brief survey of the relevant literature. Also discussed here are logics for specification and analysis of quantum information systems, in particular, recent work by P. Mateus and A. Sernadas, and also by R. van der Meyden and M. Patra. Overall, our objective is to provide a high-level presentation of the logical aspects of quantum theory. Mateus' and Sernadas' EQPL logic is illustrated with a small example, namely the state of an entangled pair of qubits. The "KT" logic of van der Meyden and Patra is demonstrated briefly in the context of the B92 protocol for quantum key distribution.<|reference_end|> | arxiv | @article{papanikolaou2005logic,
title={Logic Column 13: Reasoning Formally about Quantum Systems: An Overview},
author={Nick Papanikolaou},
journal={SIGACT News, 36(3), pp. 51-66, 2005},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508005},
primaryClass={cs.LO}
} | papanikolaou2005logic |
arxiv-673145 | cs/0508006 | A New Approach for Boundary Recognition in Geometric Sensor Networks | <|reference_start|>A New Approach for Boundary Recognition in Geometric Sensor Networks: We describe a new approach for dealing with the following central problem in the self-organization of a geometric sensor network: we are given a polygonal region R and a large, dense set of sensor nodes that are scattered uniformly at random in R. There is no central control unit, and nodes can only communicate locally by wireless radio to all other nodes that are within communication radius r, without knowing their coordinates or distances to other nodes. The objective is to develop a simple distributed protocol that allows nodes to identify themselves as being located near the boundary of R and form connected pieces of the boundary. We give a comparison of several centrality measures commonly used in the analysis of social networks and show that restricted stress centrality is particularly suited for geometric networks; we provide mathematical as well as experimental evidence for the quality of this measure.<|reference_end|> | arxiv | @article{fekete2005a,
title={A New Approach for Boundary Recognition in Geometric Sensor Networks},
author={Sandor P. Fekete, Michael Kaufmann, Alexander Kroeller, and Katharina
Lehmann},
journal={arXiv preprint arXiv:cs/0508006},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508006},
primaryClass={cs.DS cs.DC}
} | fekete2005a |
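The centrality idea in this abstract is easy to prototype. The sketch below is illustrative only, not the paper's method: it uses networkx's betweenness centrality as a stand-in for the paper's restricted stress centrality (fractions of shortest paths rather than raw path counts), and all parameter values are arbitrary assumptions. Few shortest paths route through nodes near the region's border, so the lowest-centrality nodes are reported as boundary candidates.

```python
import networkx as nx

def boundary_candidates(n=400, radius=0.08, frac=0.2, seed=1):
    """Flag likely boundary nodes of a random geometric graph.

    Stand-in sketch: betweenness centrality replaces the paper's restricted
    stress centrality. Nodes in the lowest `frac` quantile of centrality are
    reported as boundary candidates. All parameters are arbitrary choices.
    """
    g = nx.random_geometric_graph(n, radius, seed=seed)
    cent = nx.betweenness_centrality(g)
    cutoff = sorted(cent.values())[int(frac * n)]
    return [v for v, c in cent.items() if c <= cutoff]

print(len(boundary_candidates()))
```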
arxiv-673146 | cs/0508007 | Regularity of Position Sequences | <|reference_start|>Regularity of Position Sequences: A person is given a numbered sequence of positions on a sheet of paper. The person is asked, "Which will be the next (or the next after that) position?" Everyone has an opinion as to how he or she would proceed. There are regular sequences for which there is general agreement on how to continue. However, there are less regular sequences for which this assessment is less certain. There are sequences for which every continuation is perceived to be arbitrary. I would like to present a mathematical model that reflects these opinions and perceptions with the aid of a valuation function. It is necessary to apply a rich set of invariant features of position sequences to ensure the quality of this model. All other properties of the model are arbitrary.<|reference_end|> | arxiv | @article{harringer2005regularity,
title={Regularity of Position Sequences},
author={Manfred Harringer},
journal={arXiv preprint arXiv:cs/0508007},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508007},
primaryClass={cs.CV cs.AI cs.LG q-bio.NC}
} | harringer2005regularity |
arxiv-673147 | cs/0508008 | The accurate optimal-success/error-rate calculations applied to the realizations of the reliable and short-period integer ambiguity resolution in carrier-phase GPS/GNSS positioning | <|reference_start|>The accurate optimal-success/error-rate calculations applied to the realizations of the reliable and short-period integer ambiguity resolution in carrier-phase GPS/GNSS positioning: The maximum-marginal-a-posteriori success rate of statistical decision under multivariate Gaussian error distribution on an integer lattice is almost rigorously calculated by using union-bound approximation and Monte Carlo integration. These calculations are applied to the revelation of the various possible realizations of the reliable and short-period integer ambiguity resolution in precise carrier-phase relative positioning by GPS/GNSS. The theoretical foundation and efficient methodology are systematically developed, and two types of the enhancement of union-bound approximation are proposed and examined. The results revealed include an extremely high reliability under the condition of accurate carrier-phase measurements and a large number of visible satellites, its heavy degradation caused by the slight amount of differentiated ionospheric delays due to the nonvanishing baseline length between rover and reference receivers, and the advantages of the use of the multiple carrier frequencies. The succeeding initialization of the integer ambiguities is shown to overcome the disadvantageous condition of the nonvanishing baseline length effectively due to the reasonably assumed temporal and spatial constancy of differentiated ionospheric delays.<|reference_end|> | arxiv | @article{kondo2005the,
title={The accurate optimal-success/error-rate calculations applied to the
realizations of the reliable and short-period integer ambiguity resolution in
carrier-phase GPS/GNSS positioning},
author={Kentaro Kondo},
journal={arXiv preprint arXiv:cs/0508008},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508008},
primaryClass={cs.IT math.IT}
} | kondo2005the |
arxiv-673148 | cs/0508009 | IMPACT: Investigation of Mobile-user Patterns Across University Campuses using WLAN Trace Analysis | <|reference_start|>IMPACT: Investigation of Mobile-user Patterns Across University Campuses using WLAN Trace Analysis: We conduct the most comprehensive study of WLAN traces to date. Measurements collected from four major university campuses are analyzed with the aim of developing a fundamental understanding of realistic user behavior in wireless networks. Both individual user and inter-node (group) behaviors are investigated and two classes of metrics are devised to capture the underlying structure of such behaviors. For individual user behavior we observe distinct patterns in which most users are 'on' for a small fraction of the time, the number of access points visited is very small and the overall on-line user mobility is quite low. We clearly identify categories of heavy and light users. In general, users exhibit a high degree of similarity over days and weeks. For group behavior, we define metrics for encounter patterns and friendship. Surprisingly, we find that a user, on average, encounters less than 6% of the network user population within a month, and that encounter and friendship relations are highly asymmetric. We establish that the number of encounters follows a biPareto distribution, while friendship indexes follow an exponential distribution. We capture the encounter graph using a small world model, the characteristics of which reach steady state after only one day. We hope our study will have a great impact on the realistic modeling of network usage and mobility patterns in wireless networks.<|reference_end|> | arxiv | @article{hsu2005impact:,
title={IMPACT: Investigation of Mobile-user Patterns Across University Campuses
using WLAN Trace Analysis},
author={Wei-jen Hsu, Ahmed Helmy},
journal={arXiv preprint arXiv:cs/0508009},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508009},
primaryClass={cs.NI}
} | hsu2005impact: |
arxiv-673149 | cs/0508010 | ATTENTION: ATTackEr traceback using MAC layer abNormality detecTION | <|reference_start|>ATTENTION: ATTackEr traceback using MAC layer abNormality detecTION: Denial-of-Service (DoS) and Distributed DoS (DDoS) attacks can cause serious problems in wireless networks due to limited network and host resources. Attacker traceback is a promising solution to take proper countermeasures near the attack origins, to discourage attackers from launching attacks, and to support forensics. However, attacker traceback in Mobile Ad-hoc Networks (MANETs) is a challenging problem due to the dynamic topology and limited network resources. It is especially difficult to trace back attacker(s) when they are moving to avoid traceback. In this paper, we introduce the ATTENTION protocol framework, which pays special attention to MAC layer abnormal activity under attack. ATTENTION consists of three classes, namely, coarse-grained traceback, fine-grained traceback and spatio-temporal fusion architecture. For energy-efficient attacker searching in MANETs, we also utilize a small-world model. Our simulation analysis shows a 79% success rate in DoS attacker traceback with coarse-grained attack signature. In addition, with fine-grained attack signature, it shows a 97% success rate in DoS attacker traceback and an 83% success rate in DDoS attacker traceback. We also show that ATTENTION is robust against node collusion and mobility.<|reference_end|> | arxiv | @article{kim2005attention:,
title={ATTENTION: ATTackEr traceback using MAC layer abNormality detecTION},
author={Yongjin Kim, Ahmed Helmy},
journal={arXiv preprint arXiv:cs/0508010},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508010},
primaryClass={cs.NI}
} | kim2005attention: |
arxiv-673150 | cs/0508011 | A Secure Traitor Tracing Scheme against Key Exposure | <|reference_start|>A Secure Traitor Tracing Scheme against Key Exposure: Copyright protection is a major issue in distributing digital content. On the other hand, improvements to usability are sought by content users. In this paper, we propose a secure {\it traitor tracing scheme against key exposure (TTaKE)} which contains the properties of both a traitor tracing scheme and a forward secure public key cryptosystem. Its structure fits current digital broadcasting systems and it may be useful in preventing traitors from making illegal decoders and in minimizing the damage from accidental key exposure. It can improve usability through these properties.<|reference_end|> | arxiv | @article{ogawa2005a,
title={A Secure Traitor Tracing Scheme against Key Exposure},
author={Kazuto Ogawa, Goichiro Hanaoka and Hideki Imai},
journal={arXiv preprint arXiv:cs/0508011},
year={2005},
doi={10.1109/ISIT.2005.1523670},
archivePrefix={arXiv},
eprint={cs/0508011},
primaryClass={cs.CR}
} | ogawa2005a |
arxiv-673151 | cs/0508012 | n-Channel Asymmetric Multiple-Description Lattice Vector Quantization | <|reference_start|>n-Channel Asymmetric Multiple-Description Lattice Vector Quantization: We present analytical expressions for optimal entropy-constrained multiple-description lattice vector quantizers which, under high-resolution assumptions, minimize the expected distortion for given packet-loss probabilities. We consider the asymmetric case where packet-loss probabilities and side entropies are allowed to be unequal and find optimal quantizers for any number of descriptions in any dimension. We show that the normalized second moments of the side-quantizers are given by that of an $L$-dimensional sphere independent of the choice of lattices. Furthermore, we show that the optimal bit-distribution among the descriptions is not unique. In fact, within certain limits, bits can be arbitrarily distributed.<|reference_end|> | arxiv | @article{ostergaard2005n-channel,
title={n-Channel Asymmetric Multiple-Description Lattice Vector Quantization},
author={Jan Ostergaard, Richard Heusdens, and Jesper Jensen},
journal={arXiv preprint arXiv:cs/0508012},
year={2005},
doi={10.1109/ISIT.2005.1523654},
archivePrefix={arXiv},
eprint={cs/0508012},
primaryClass={cs.IT math.IT}
} | ostergaard2005n-channel |
arxiv-673152 | cs/0508013 | Relations between the Local Weight Distributions of a Linear Block Code, Its Extended Code, and Its Even Weight Subcode | <|reference_start|>Relations between the Local Weight Distributions of a Linear Block Code, Its Extended Code, and Its Even Weight Subcode: Relations between the local weight distributions of a binary linear code, its extended code, and its even weight subcode are presented. In particular, for a code of which the extended code is transitive invariant and contains only codewords with weight multiples of four, the local weight distribution can be obtained from that of the extended code. Using the relations, the local weight distributions of the $(127,k)$ primitive BCH codes for $k\leq50$, the $(127,64)$ punctured third-order Reed-Muller, and their even weight subcodes are obtained from the local weight distribution of the $(128,k)$ extended primitive BCH codes for $k\leq50$ and the $(128,64)$ third-order Reed-Muller code. We also show an approach to improve an algorithm for computing the local weight distribution proposed before.<|reference_end|> | arxiv | @article{yasunaga2005relations,
title={Relations between the Local Weight Distributions of a Linear Block Code,
Its Extended Code, and Its Even Weight Subcode},
author={Kenji Yasunaga, Toru Fujiwara},
journal={arXiv preprint arXiv:cs/0508013},
year={2005},
doi={10.1109/ISIT.2005.1523360},
archivePrefix={arXiv},
eprint={cs/0508013},
primaryClass={cs.IT math.IT}
} | yasunaga2005relations |
arxiv-673153 | cs/0508014 | The Benefit of Thresholding in LP Decoding of LDPC Codes | <|reference_start|>The Benefit of Thresholding in LP Decoding of LDPC Codes: Consider data transmission over a binary-input additive white Gaussian noise channel using a binary low-density parity-check code. We ask the following question: Given a decoder that takes log-likelihood ratios as input, does it help to modify the log-likelihood ratios before decoding? If we use an optimal decoder then it is clear that modifying the log-likelihoods cannot possibly help the decoder's performance, and so the answer is "no." However, for a suboptimal decoder like the linear programming decoder, the answer might be "yes": In this paper we prove that for certain interesting classes of low-density parity-check codes and large enough SNRs, it is advantageous to truncate the log-likelihood ratios before passing them to the linear programming decoder.<|reference_end|> | arxiv | @article{feldman2005the,
title={The Benefit of Thresholding in LP Decoding of LDPC Codes},
author={Jon Feldman, Ralf Koetter, Pascal O. Vontobel},
journal={arXiv preprint arXiv:cs/0508014},
year={2005},
doi={10.1109/ISIT.2005.1523344},
archivePrefix={arXiv},
eprint={cs/0508014},
primaryClass={cs.IT math.IT}
} | feldman2005the |
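The operation studied in this abstract, truncating log-likelihood ratios before LP decoding, is a one-line preprocessing step. A minimal sketch follows; the threshold value and the BPSK/AWGN setup are arbitrary assumptions, and no LP decoder is included.

```python
import numpy as np

def truncate_llrs(llr, t):
    """Clip log-likelihood ratios to [-t, t] before passing them to a decoder.

    For an ML decoder this preprocessing cannot help; the abstract's point is
    that for the suboptimal LP decoder, a suitable threshold t provably helps
    at high SNR on certain LDPC code families. The t used below is arbitrary.
    """
    return np.clip(llr, -t, t)

# BPSK over AWGN: llr = 2*y / sigma^2 for received samples y.
rng = np.random.default_rng(0)
sigma = 0.8
y = 1.0 + sigma * rng.standard_normal(8)   # all-zeros codeword -> +1 symbols
llr = 2.0 * y / sigma**2
print(truncate_llrs(llr, t=3.0))
```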
arxiv-673154 | cs/0508015 | Chosen-ciphertext attack on noncommutative Polly Cracker | <|reference_start|>Chosen-ciphertext attack on noncommutative Polly Cracker: We propose a chosen-ciphertext attack on a recently presented noncommutative variant of the well-known Polly Cracker cryptosystem. We show that if one chooses parameters for this noncommutative Polly Cracker as initially proposed, then the system should be considered insecure.<|reference_end|> | arxiv | @article{bulygin2005chosen-ciphertext,
title={Chosen-ciphertext attack on noncommutative Polly Cracker},
author={Stanislav Bulygin},
journal={arXiv preprint arXiv:cs/0508015},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508015},
primaryClass={cs.IT cs.CR math.IT}
} | bulygin2005chosen-ciphertext |
arxiv-673155 | cs/0508016 | Distributed Algorithms for Optimal Rate-Reliability Tradeoff in Networks | <|reference_start|>Distributed Algorithms for Optimal Rate-Reliability Tradeoff in Networks: The current framework of network utility maximization for distributed rate allocation assumes fixed channel code rates. However, by adapting the physical layer channel coding, different rate-reliability tradeoffs can be achieved on each link and for each end user. Consider a network where each user has a utility function that depends on both signal quality and data rate, and each link may provide a `fatter' (`thinner') information `pipe' by allowing a higher (lower) decoding error probability. We propose two distributed, pricing-based algorithms to attain optimal rate-reliability tradeoff, with an interpretation that each user provides its willingness to pay for reliability to the network and the network feeds back congestion prices to users. The proposed algorithms converge to a tradeoff point between rate and reliability, which is proved to be globally optimal for codes with sufficiently large codeword lengths and user utilities with sufficiently negative curvatures.<|reference_end|> | arxiv | @article{lee2005distributed,
title={Distributed Algorithms for Optimal Rate-Reliability Tradeoff in Networks},
author={Jang-Won Lee, Mung Chiang, and A. Robert Calderbank},
journal={arXiv preprint arXiv:cs/0508016},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508016},
primaryClass={cs.NI}
} | lee2005distributed |
arxiv-673156 | cs/0508017 | Enhancing Content-And-Structure Information Retrieval using a Native XML Database | <|reference_start|>Enhancing Content-And-Structure Information Retrieval using a Native XML Database: Three approaches to content-and-structure XML retrieval are analysed in this paper: first by using Zettair, a full-text information retrieval system; second by using eXist, a native XML database, and third by using a hybrid XML retrieval system that uses eXist to produce the final answers from likely relevant articles retrieved by Zettair. INEX 2003 content-and-structure topics can be classified in two categories: the first retrieving full articles as final answers, and the second retrieving more specific elements within articles as final answers. We show that for both topic categories our initial hybrid system improves the retrieval effectiveness of a native XML database. For ranking the final answer elements, we propose and evaluate a novel retrieval model that utilises the structural relationships between the answer elements of a native XML database and retrieves Coherent Retrieval Elements. The final results of our experiments show that when the XML retrieval task focusses on highly relevant elements our hybrid XML retrieval system with the Coherent Retrieval Elements module is 1.8 times more effective than Zettair and 3 times more effective than eXist, and yields an effective content-and-structure XML retrieval.<|reference_end|> | arxiv | @article{pehcevski2005enhancing,
title={Enhancing Content-And-Structure Information Retrieval using a Native XML
Database},
author={Jovan Pehcevski (RMIT), James A. Thom (RMIT), Anne-Marie Vercoustre},
journal={arXiv preprint arXiv:cs/0508017},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508017},
primaryClass={cs.IR}
} | pehcevski2005enhancing |
arxiv-673157 | cs/0508018 | Spectral Factorization, Whitening- and Estimation Filter -- Stability, Smoothness Properties and FIR Approximation Behavior | <|reference_start|>Spectral Factorization, Whitening- and Estimation Filter -- Stability, Smoothness Properties and FIR Approximation Behavior: A Wiener filter can be interpreted as a cascade of a whitening- and an estimation filter. This paper gives a detailed investigation of the properties of these two filters. Then the practical consequences for the overall Wiener filter are ascertained. It is shown that if the given spectral densities are smooth (Hoelder continuous) functions, the resulting Wiener filter will always be stable and can be approximated arbitrarily well by a finite impulse response (FIR) filter. Moreover, the smoothness of the spectral densities characterizes how fast the FIR filter approximates the desired filter characteristic. If on the other hand the spectral densities are continuous but not smooth enough, the resulting Wiener filter may not be stable.<|reference_end|> | arxiv | @article{boche2005spectral,
title={Spectral Factorization, Whitening- and Estimation Filter -- Stability,
Smoothness Properties and FIR Approximation Behavior},
author={Holger Boche and Volker Pohl},
journal={arXiv preprint arXiv:cs/0508018},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508018},
primaryClass={cs.IT math.IT}
} | boche2005spectral |
arxiv-673158 | cs/0508019 | On the Minimal Pseudo-Codewords of Codes from Finite Geometries | <|reference_start|>On the Minimal Pseudo-Codewords of Codes from Finite Geometries: In order to understand the performance of a code under maximum-likelihood (ML) decoding, it is crucial to know the minimal codewords. In the context of linear programming (LP) decoding, it turns out to be necessary to know the minimal pseudo-codewords. This paper studies the minimal codewords and minimal pseudo-codewords of some families of codes derived from projective and Euclidean planes. Although our numerical results are only for codes of very modest length, they suggest that these code families exhibit an interesting property. Namely, all minimal pseudo-codewords that are not multiples of a minimal codeword have an AWGNC pseudo-weight that is strictly larger than the minimum Hamming weight of the code. This observation has positive consequences not only for LP decoding but also for iterative decoding.<|reference_end|> | arxiv | @article{vontobel2005on,
title={On the Minimal Pseudo-Codewords of Codes from Finite Geometries},
author={Pascal O. Vontobel, Roxana Smarandache, Negar Kiyavash, Jason Teutsch,
Dejan Vukobratovic},
journal={arXiv preprint arXiv:cs/0508019},
year={2005},
doi={10.1109/ISIT.2005.1523484},
archivePrefix={arXiv},
eprint={cs/0508019},
primaryClass={cs.IT cs.DM math.IT}
} | vontobel2005on |
arxiv-673159 | cs/0508020 | Capacity Gain from Transmitter and Receiver Cooperation | <|reference_start|>Capacity Gain from Transmitter and Receiver Cooperation: Capacity gains from transmitter and receiver cooperation are compared in a relay network where the cooperating nodes are close together. When all nodes have equal average transmit power along with full channel state information (CSI), it is proved that transmitter cooperation outperforms receiver cooperation, whereas the opposite is true when power is optimally allocated among the nodes but only receiver phase CSI is available. In addition, when the nodes have equal average power with receiver phase CSI only, cooperation is shown to offer no capacity improvement over a non-cooperative scheme with the same average network power. When the system is under optimal power allocation with full CSI, the decode-and-forward transmitter cooperation rate is close to its cut-set capacity upper bound, and outperforms compress-and-forward receiver cooperation. Moreover, it is shown that full CSI is essential in transmitter cooperation, while optimal power allocation is essential in receiver cooperation.<|reference_end|> | arxiv | @article{ng2005capacity,
title={Capacity Gain from Transmitter and Receiver Cooperation},
author={Chris T. K. Ng and Andrea J. Goldsmith},
journal={arXiv preprint arXiv:cs/0508020},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508020},
primaryClass={cs.IT math.IT}
} | ng2005capacity |
arxiv-673160 | cs/0508021 | Toward Compact Interdomain Routing | <|reference_start|>Toward Compact Interdomain Routing: Despite prevailing concerns that the current Internet interdomain routing system will not scale to meet the needs of the 21st century global Internet, networking research has not yet led to the construction of a new routing architecture with satisfactory and mathematically provable scalability characteristics. Worse, continuing empirical trends of the existing routing and topology structure of the Internet are alarming: the foundational principles of the current routing and addressing architecture are an inherently bad match for the naturally evolving structure of Internet interdomain topology. We are fortunate that a sister discipline, theory of distributed computation, has developed routing algorithms that offer promising potential for genuinely scalable routing on realistic Internet-like topologies. Indeed, there are many recent breakthroughs in the area of compact routing, which has been shown to drastically outperform, in terms of efficiency and scalability, even the boldest proposals developed in networking research. Many open questions remain, but we believe the applicability of compact routing techniques to Internet interdomain routing is a research area whose potential payoff for the future of networking is too high to ignore.<|reference_end|> | arxiv | @article{krioukov2005toward,
title={Toward Compact Interdomain Routing},
author={Dmitri Krioukov, kc claffy},
journal={arXiv preprint arXiv:cs/0508021},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508021},
primaryClass={cs.NI}
} | krioukov2005toward |
arxiv-673161 | cs/0508022 | Matrix Construction Using Cyclic Shifts of a Column | <|reference_start|>Matrix Construction Using Cyclic Shifts of a Column: This paper describes the synthesis of matrices with good correlation, from cyclic shifts of pseudonoise columns. Optimum matrices result whenever the shift sequence satisfies the constant difference property. Known shift sequences with the constant (or almost constant) difference property are: Quadratic (Polynomial) and Reciprocal Shift modulo prime, Exponential Shift, Legendre Shift, Zech Logarithm Shift, and the shift sequences of some m-arrays. We use these shift sequences to produce arrays for watermarking of digital images. Matrices can also be unfolded into long sequences by diagonal unfolding (with no deterioration in correlation) or row-by-row unfolding, with some degradation in correlation.<|reference_end|> | arxiv | @article{tirkel2005matrix,
title={Matrix Construction Using Cyclic Shifts of a Column},
author={Andrew Z Tirkel and Tom E Hall},
journal={arXiv preprint arXiv:cs/0508022},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508022},
primaryClass={cs.DM cs.CR cs.IT math.IT}
} | tirkel2005matrix |
arxiv-673162 | cs/0508023 | Software Libraries and Their Reuse: Entropy, Kolmogorov Complexity, and Zipf's Law | <|reference_start|>Software Libraries and Their Reuse: Entropy, Kolmogorov Complexity, and Zipf's Law: We analyze software reuse from the perspective of information theory and Kolmogorov complexity, assessing our ability to ``compress'' programs by expressing them in terms of software components reused from libraries. A common theme in the software reuse literature is that if we can only get the right environment in place-- the right tools, the right generalizations, economic incentives, a ``culture of reuse'' -- then reuse of software will soar, with consequent improvements in productivity and software quality. The analysis developed in this paper paints a different picture: the extent to which software reuse can occur is an intrinsic property of a problem domain, and better tools and culture can have only marginal impact on reuse rates if the domain is inherently resistant to reuse. We define an entropy parameter $H \in [0,1]$ of problem domains that measures program diversity, and deduce from this upper bounds on code reuse and the scale of components with which we may work. For ``low entropy'' domains with $H$ near 0, programs are highly similar to one another and the domain is amenable to the Component-Based Software Engineering (CBSE) dream of programming by composing large-scale components. For problem domains with $H$ near 1, programs require substantial quantities of new code, with only a modest proportion of an application comprised of reused, small-scale components. Preliminary empirical results from Unix platforms support some of the predictions of our model.<|reference_end|> | arxiv | @article{veldhuizen2005software,
title={Software Libraries and Their Reuse: Entropy, Kolmogorov Complexity, and
Zipf's Law},
author={Todd L. Veldhuizen},
journal={Library-Centric Software Design (LCSD 2005), an OOPSLA 2005
workshop},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508023},
primaryClass={cs.SE cs.IT cs.PL math.IT}
} | veldhuizen2005software |
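The abstract's entropy parameter H is defined over programs in a domain; as a loose empirical proxy, one can compute a normalized Shannon entropy and a Zipf rank-frequency slope from component reuse counts. A sketch under that proxy assumption, using hypothetical counts:

```python
import math

def normalized_entropy(counts):
    """Shannon entropy of the usage distribution, normalized to [0, 1].

    Rough empirical proxy only: the paper's H is defined over programs in a
    domain, not over raw component-usage counts.
    """
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    h = -sum(p * math.log2(p) for p in probs)
    return h / math.log2(len(probs)) if len(probs) > 1 else 0.0

def zipf_slope(counts):
    """Least-squares slope of log(frequency) vs log(rank); near -1 under Zipf."""
    freqs = sorted(counts, reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

counts = [500, 240, 160, 120, 95, 80, 70, 60, 55, 50]  # hypothetical reuse counts
print(normalized_entropy(counts), zipf_slope(counts))
```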
arxiv-673163 | cs/0508024 | New Codes for OFDM with Low PMEPR | <|reference_start|>New Codes for OFDM with Low PMEPR: In this paper new codes for orthogonal frequency-division multiplexing (OFDM) with tightly controlled peak-to-mean envelope power ratio (PMEPR) are proposed. We identify a new family of sequences occurring in complementary sets and show that such sequences form subsets of a new generalization of the Reed--Muller codes. Contrary to previous constructions, we present a compact description of such codes, which makes them suitable even for larger block lengths. We also show that some previous constructions just occur as special cases in our construction.<|reference_end|> | arxiv | @article{schmidt2005new,
title={New Codes for OFDM with Low PMEPR},
author={Kai-Uwe Schmidt and Adolf Finger},
journal={arXiv preprint arXiv:cs/0508024},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508024},
primaryClass={cs.IT math.IT}
} | schmidt2005new |
arxiv-673164 | cs/0508025 | Signature coding for OR channel with asynchronous access | <|reference_start|>Signature coding for OR channel with asynchronous access: Signature coding for the multiple-access OR channel is considered. We prove that in the block asynchronous case the upper bound on the minimum code length is asymptotically the same as in the case of synchronous access.<|reference_end|> | arxiv | @article{győri2005signature,
title={Signature coding for OR channel with asynchronous access},
author={S\'andor Gy\H{o}ri},
journal={arXiv preprint arXiv:cs/0508025},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508025},
primaryClass={cs.IT math.IT}
} | győri2005signature |
arxiv-673165 | cs/0508026 | Simple Maximum-Likelihood Decoding of Generalized First-order Reed-Muller Codes | <|reference_start|>Simple Maximum-Likelihood Decoding of Generalized First-order Reed-Muller Codes: An efficient decoder for the generalized first-order Reed-Muller code RM_q(1,m) is essential for the decoding of various block-coding schemes for orthogonal frequency-division multiplexing with reduced peak-to-mean power ratio. We present an efficient and simple maximum-likelihood decoding algorithm for RM_q(1,m). It is shown that this algorithm has lower complexity than other previously known maximum-likelihood decoders for RM_q(1,m).<|reference_end|> | arxiv | @article{schmidt2005simple,
title={Simple Maximum-Likelihood Decoding of Generalized First-order
Reed-Muller Codes},
author={Kai-Uwe Schmidt and Adolf Finger},
journal={IEEE Commun. Lett., vol. 9, no. 10, pp. 912-914, Oct. 2005},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508026},
primaryClass={cs.IT math.IT}
} | schmidt2005simple |
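For the binary special case q = 2, the efficient ML decoder this abstract refers to is the classical fast Hadamard transform correlator. A sketch of that special case follows; the paper's contribution is the generalized q-ary decoder, which is not reproduced here, and the received word below is hypothetical.

```python
import numpy as np

def fwht(v):
    """Fast Walsh-Hadamard transform of a length-2^m real vector."""
    v = np.asarray(v, dtype=float).copy()
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            a, b = v[i:i + h].copy(), v[i + h:i + 2 * h].copy()
            v[i:i + h] = a + b
            v[i + h:i + 2 * h] = a - b
        h *= 2
    return v

def ml_decode_rm1_binary(r):
    """ML decoding of binary RM(1, m) from +/-1 soft values r (length 2^m).

    One transform yields the correlations with all 2^m linear functions; the
    largest |correlation| picks the linear part a of the codeword
    c(x) = a.x + b, and its sign picks the affine constant b.
    """
    t = fwht(r)
    k = int(np.argmax(np.abs(t)))
    m = int(np.log2(len(r)))
    a = [(k >> j) & 1 for j in range(m)]   # coefficient of variable x_j = bit j
    b = 0 if t[k] > 0 else 1
    return b, a

received = np.array([1, 1, -1, 1, 1, -1, -1, -1], dtype=float)  # hypothetical, m = 3
print(ml_decode_rm1_binary(received))
```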
arxiv-673166 | cs/0508027 | Expectation maximization as message passing | <|reference_start|>Expectation maximization as message passing: Based on prior work by Eckford, it is shown how expectation maximization (EM) may be viewed, and used, as a message passing algorithm in factor graphs.<|reference_end|> | arxiv | @article{dauwels2005expectation,
title={Expectation maximization as message passing},
author={J. Dauwels, S. Korl, H.-A. Loeliger},
journal={arXiv preprint arXiv:cs/0508027},
year={2005},
doi={10.1109/ISIT.2005.1523402},
archivePrefix={arXiv},
eprint={cs/0508027},
primaryClass={cs.IT cs.LG math.IT}
} | dauwels2005expectation |
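As a reference point for readers new to EM, the sketch below implements plain batch EM for a two-component one-dimensional Gaussian mixture with unit variances; the paper's contribution, not shown here, is recasting such E- and M-steps as messages passed on a factor graph. Initialization and iteration count are arbitrary.

```python
import numpy as np

def em_gmm_1d(x, iters=50):
    """Plain batch EM for a two-component 1-D Gaussian mixture (unit variances).

    Baseline illustration only: the paper's point is that these E- and M-steps
    can be viewed as message passing on a factor graph, which is not shown here.
    """
    mu = np.array([x.min(), x.max()], dtype=float)   # crude initialization
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        lik = w * np.exp(-0.5 * (x[:, None] - mu[None, :]) ** 2)
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate means and mixture weights.
        mu = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)
        w = resp.mean(axis=0)
    return mu, w

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])
print(em_gmm_1d(data))
```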
arxiv-673167 | cs/0508028 | Truth-telling Reservations | <|reference_start|>Truth-telling Reservations: We present a mechanism for reservations of bursty resources that is both truthful and robust. It consists of option contracts whose pricing structure induces users to reveal the true likelihoods that they will purchase a given resource. Users are also allowed to adjust their options as their likelihood changes. This scheme helps users save cost and the providers to plan ahead so as to reduce the risk of under-utilization and overbooking. The mechanism extracts revenue similar to that of a monopoly provider practicing temporal pricing discrimination with a user population whose preference distribution is known in advance.<|reference_end|> | arxiv | @article{wu2005truth-telling,
title={Truth-telling Reservations},
author={Fang Wu, Zi Zhang and Bernardo A. Huberman},
journal={arXiv preprint arXiv:cs/0508028},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508028},
primaryClass={cs.GT cond-mat.stat-mech cs.MA}
} | wu2005truth-telling |
arxiv-673168 | cs/0508029 | Selfish vs Unselfish Optimization of Network Creation | <|reference_start|>Selfish vs Unselfish Optimization of Network Creation: We investigate several variants of a network creation model: a group of agents builds up a network between them while trying to keep the costs of this network small. The cost function consists of two addends, namely (i) a constant amount for each edge an agent buys and (ii) the minimum number of hops it takes sending messages to other agents. Despite the simplicity of this model, various complex network structures emerge depending on the weight between the two addends of the cost function and on the selfish or unselfish behaviour of the agents.<|reference_end|> | arxiv | @article{schneider2005selfish,
title={Selfish vs. Unselfish Optimization of Network Creation},
author={Johannes J. Schneider and Scott Kirkpatrick},
journal={arXiv preprint arXiv:cs/0508029},
year={2005},
doi={10.1088/1742-5468/2005/08/P08007},
archivePrefix={arXiv},
eprint={cs/0508029},
primaryClass={cs.NI cs.AR cs.MA}
} | schneider2005selfish |
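The cost function described in this abstract is simple to state in code: a constant price per edge an agent buys plus the sum of hop distances to all other agents. A sketch, with a hypothetical assignment of which agent bought which edge and an arbitrary edge price alpha:

```python
import networkx as nx

def agent_cost(g, bought, i, alpha):
    """Cost of agent i: alpha per edge the agent bought, plus the sum of hop
    distances to all other agents (infinite if some agent is unreachable)."""
    dist = nx.single_source_shortest_path_length(g, i)
    if len(dist) < g.number_of_nodes():
        return float("inf")
    return alpha * len(bought[i]) + sum(dist.values())

g = nx.path_graph(4)                                      # agents 0-1-2-3 in a line
bought = {0: [(0, 1)], 1: [(1, 2)], 2: [(2, 3)], 3: []}   # hypothetical purchases
print([agent_cost(g, bought, i, alpha=2.0) for i in range(4)])
```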
arxiv-673169 | cs/0508030 | Terminated LDPC Convolutional Codes with Thresholds Close to Capacity | <|reference_start|>Terminated LDPC Convolutional Codes with Thresholds Close to Capacity: An ensemble of LDPC convolutional codes with parity-check matrices composed of permutation matrices is considered. The convergence of the iterative belief propagation based decoder for terminated convolutional codes in the ensemble is analyzed for binary-input output-symmetric memoryless channels using density evolution techniques. We observe that the structured irregularity in the Tanner graph of the codes leads to significantly better thresholds when compared to corresponding LDPC block codes.<|reference_end|> | arxiv | @article{lentmaier2005terminated,
title={Terminated LDPC Convolutional Codes with Thresholds Close to Capacity},
author={Michael Lentmaier, Arvind Sridharan, Kamil Sh. Zigangirov, and Daniel
J. Costello Jr},
journal={arXiv preprint arXiv:cs/0508030},
year={2005},
doi={10.1109/ISIT.2005.1523567},
archivePrefix={arXiv},
eprint={cs/0508030},
primaryClass={cs.IT math.IT}
} | lentmaier2005terminated |
arxiv-673170 | cs/0508031 | Capacity Theorems for Quantum Multiple Access Channels | <|reference_start|>Capacity Theorems for Quantum Multiple Access Channels: We consider quantum channels with two senders and one receiver. For an arbitrary such channel, we give multi-letter characterizations of two different two-dimensional capacity regions. The first region characterizes the rates at which it is possible for one sender to send classical information while the other sends quantum information. The second region gives the rates at which each sender can send quantum information. We give an example of a channel for which each region has a single-letter description, concluding with a characterization of the rates at which each user can simultaneously send classical and quantum information.<|reference_end|> | arxiv | @article{yard2005capacity,
title={Capacity Theorems for Quantum Multiple Access Channels},
author={Jon Yard, Igor Devetak, Patrick Hayden},
journal={Proceedings of the IEEE Symposium on Information Theory, Adelaide,
pp. 884-888, 2005},
year={2005},
doi={10.1109/ISIT.2005.1523464},
archivePrefix={arXiv},
eprint={cs/0508031},
primaryClass={cs.IT math.IT quant-ph}
} | yard2005capacity |
arxiv-673171 | cs/0508032 | Polymorphic Self-* Agents for Stigmergic Fault Mitigation in Large-Scale Real-Time Embedded Systems | <|reference_start|>Polymorphic Self-* Agents for Stigmergic Fault Mitigation in Large-Scale Real-Time Embedded Systems: Organization and coordination of agents within large-scale, complex, distributed environments is one of the primary challenges in the field of multi-agent systems. A lot of interest has surfaced recently around self-* (self-organizing, self-managing, self-optimizing, self-protecting) agents. This paper presents polymorphic self-* agents that evolve a core set of roles and behavior based on environmental cues. The agents adapt these roles based on the changing demands of the environment, and are directly implementable in computer systems applications. The design combines strategies from game theory, stigmergy, and other biologically inspired models to address fault mitigation in large-scale, real-time, distributed systems. The agents are embedded within the individual digital signal processors of BTeV, a High Energy Physics experiment consisting of 2500 such processors. Results obtained using a SWARM simulation of the BTeV environment demonstrate the polymorphic character of the agents, and show how this design exceeds performance and reliability metrics obtained from comparable centralized, and even traditional decentralized approaches.<|reference_end|> | arxiv | @article{messie2005polymorphic,
title={Polymorphic Self-* Agents for Stigmergic Fault Mitigation in Large-Scale
Real-Time Embedded Systems},
author={Derek Messie (1) and Jae C. Oh (1) ((1) Syracuse University)},
journal={arXiv preprint arXiv:cs/0508032},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508032},
primaryClass={cs.AI cs.MA}
} | messie2005polymorphic |
arxiv-673172 | cs/0508033 | Lessons from Three Views of the Internet Topology | <|reference_start|>Lessons from Three Views of the Internet Topology: Network topology plays a vital role in understanding the performance of network applications and protocols. Thus, recently there has been tremendous interest in generating realistic network topologies. Such work must begin with an understanding of existing network topologies, which today typically consists of a relatively small number of data sources. In this paper, we calculate an extensive set of important characteristics of Internet AS-level topologies extracted from the three data sources most frequently used by the research community: traceroutes, BGP, and WHOIS. We find that traceroute and BGP topologies are similar to one another but differ substantially from the WHOIS topology. We discuss the interplay between the properties of the data sources that result from specific data collection mechanisms and the resulting topology views. We find that, among metrics widely considered, the joint degree distribution appears to fundamentally characterize Internet AS-topologies: it narrowly defines values for other important metrics. We also introduce an evaluation criterion for the accuracy of topology generators and verify previous observations that generators solely reproducing degree distributions cannot capture the full spectrum of critical topological characteristics of any of the three topologies. Finally, we release to the community the input topology datasets, along with the scripts and output of our calculations. This supplement should enable researchers to validate their models against real data and to make a more informed selection of topology data sources for their specific needs.<|reference_end|> | arxiv | @article{mahadevan2005lessons,
title={Lessons from Three Views of the Internet Topology},
author={Priya Mahadevan, Dmitri Krioukov, Marina Fomenkov, Bradley Huffaker,
Xenofontas Dimitropoulos, kc claffy, Amin Vahdat},
journal={arXiv preprint arXiv:cs/0508033},
year={2005},
number={CAIDA-TR-2005-02},
archivePrefix={arXiv},
eprint={cs/0508033},
primaryClass={cs.NI physics.soc-ph}
} | mahadevan2005lessons |
arxiv-673173 | cs/0508034 | Channel combining and splitting for cutoff rate improvement | <|reference_start|>Channel combining and splitting for cutoff rate improvement: The cutoff rate $R_0(W)$ of a discrete memoryless channel (DMC) $W$ is often used as a figure of merit, alongside the channel capacity $C(W)$. Given a channel $W$ consisting of two possibly correlated subchannels $W_1$, $W_2$, the capacity function always satisfies $C(W_1)+C(W_2) \le C(W)$, while there are examples for which $R_0(W_1)+R_0(W_2) > R_0(W)$. The fact that cutoff rate can be ``created'' by channel splitting was noticed by Massey in his study of an optical modulation system modeled as an $M$-ary erasure channel. This paper demonstrates that similar gains in cutoff rate can be achieved for general DMCs by methods of channel combining and splitting. The relation of the proposed method to Pinsker's early work on cutoff rate improvement and to Imai-Hirakawa multi-level coding is also discussed.<|reference_end|> | arxiv | @article{arikan2005channel,
title={Channel combining and splitting for cutoff rate improvement},
author={Erdal Arikan},
journal={arXiv preprint arXiv:cs/0508034},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508034},
primaryClass={cs.IT math.IT}
} | arikan2005channel |
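For intuition about the figure of merit in this abstract, the sketch below evaluates the standard cutoff-rate formula for a binary symmetric channel and compares it with capacity; it does not implement the paper's combining/splitting construction, and the crossover probabilities are arbitrary examples.

```python
from math import log2, sqrt

def cutoff_rate_bsc(p):
    """Cutoff rate R0 = 1 - log2(1 + 2*sqrt(p*(1-p))) of a binary symmetric
    channel with crossover probability p (uniform input distribution)."""
    return 1.0 - log2(1.0 + 2.0 * sqrt(p * (1.0 - p)))

def capacity_bsc(p):
    """Shannon capacity C = 1 - H(p) of the same channel, for comparison."""
    if p in (0.0, 1.0):
        return 1.0
    return 1.0 + p * log2(p) + (1.0 - p) * log2(1.0 - p)

for p in (0.01, 0.05, 0.11):
    print(f"p = {p}: R0 = {cutoff_rate_bsc(p):.3f}, C = {capacity_bsc(p):.3f}")
```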
arxiv-673174 | cs/0508035 | Codes for error detection, good or not good | <|reference_start|>Codes for error detection, good or not good: Linear codes for error detection on a q-ary symmetric channel are studied. It is shown that for given dimension k and minimum distance d, there exists a value \mu(d,k) such that if C is a code of length n >= \mu(d,k), then neither C nor its dual is good for error detection. For d >> k or d << k, good approximations for \mu(d,k) are given. A generalization to non-linear codes is also given.<|reference_end|> | arxiv | @article{naydenova2005codes,
title={Codes for error detection, good or not good},
author={Irina Naydenova, Torleiv Klove},
journal={arXiv preprint arXiv:cs/0508035},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508035},
primaryClass={cs.IT math.IT}
} | naydenova2005codes |
arxiv-673175 | cs/0508036 | Exp\'eriences de classification d'une collection de documents XML de structure homog\`ene | <|reference_start|>Exp\'eriences de classification d'une collection de documents XML de structure homog\`ene: This paper presents some experiments in clustering homogeneous XMLdocuments to validate an existing classification or more generally anorganisational structure. Our approach integrates techniques for extracting knowledge from documents with unsupervised classification (clustering) of documents. We focus on the feature selection used for representing documents and its impact on the emerging classification. We mix the selection of structured features with fine textual selection based on syntactic characteristics.We illustrate and evaluate this approach with a collection of Inria activity reports for the year 2003. The objective is to cluster projects into larger groups (Themes), based on the keywords or different chapters of these activity reports. We then compare the results of clustering using different feature selections, with the official theme structure used by Inria.<|reference_end|> | arxiv | @article{despeyroux2005exp\'{e}riences,
title={Exp\'{e}riences de classification d'une collection de documents XML de
structure homog\`{e}ne},
author={Thierry Despeyroux, Yves Lechevallier, Brigitte Trousse, Anne-Marie
Vercoustre},
journal={In 5\`{e}me Journ\'{e}es d'Extraction et de Gestion des
Connaissances (EGC 2005)},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508036},
primaryClass={cs.IR}
} | despeyroux2005exp\'{e}riences |
arxiv-673176 | cs/0508037 | The Phase Transition in Exact Cover | <|reference_start|>The Phase Transition in Exact Cover: We study EC3, a variant of Exact Cover which is equivalent to Positive 1-in-3 SAT. Random instances of EC3 were recently used as benchmarks for simulations of an adiabatic quantum algorithm. Empirical results suggest that EC3 has a phase transition from satisfiability to unsatisfiability when the number of clauses per variable r exceeds some threshold r* ~= 0.62 +- 0.01. Using the method of differential equations, we show that if r <= 0.546 w.h.p. a random instance of EC3 is satisfiable. Combined with previous results this limits the location of the threshold, if it exists, to the range 0.546 < r* < 0.644.<|reference_end|> | arxiv | @article{kalapala2005the,
title={The Phase Transition in Exact Cover},
author={Vamsi Kalapala, Cris Moore},
journal={arXiv preprint arXiv:cs/0508037},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508037},
primaryClass={cs.CC}
} | kalapala2005the |
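The empirical threshold in this abstract can be probed directly at small sizes. A brute-force sketch follows; the instance size and trial counts are arbitrary and far too small to locate r* precisely, since finite-size effects blur the transition.

```python
import itertools, random

def random_ec3(n_vars, r, rng):
    """Random EC3 instance: round(r * n_vars) all-positive 3-clauses,
    each satisfied iff exactly one of its three variables is true."""
    return [tuple(rng.sample(range(n_vars), 3)) for _ in range(round(r * n_vars))]

def satisfiable(clauses, n_vars):
    """Brute force over all 2^n assignments -- only viable for small n_vars."""
    for bits in itertools.product((0, 1), repeat=n_vars):
        if all(bits[a] + bits[b] + bits[c] == 1 for a, b, c in clauses):
            return True
    return False

rng = random.Random(0)
n = 14
for r in (0.4, 0.55, 0.62, 0.7, 0.85):
    sat = sum(satisfiable(random_ec3(n, r, rng), n) for _ in range(30))
    print(f"r = {r:.2f}: {sat}/30 satisfiable")   # expect a drop near r* ~ 0.62
```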
arxiv-673177 | cs/0508038 | Quantum Algorithm Processor For Finding Exact Divisors | <|reference_start|>Quantum Algorithm Processor For Finding Exact Divisors: Wiring diagrams are given for a quantum algorithm processor in CMOS to compute, in parallel, all divisors of an n-bit integer. Lines required in a wiring diagram are proportional to n. Execution time is proportional to the square of n.<|reference_end|> | arxiv | @article{burger2005quantum,
title={Quantum Algorithm Processor For Finding Exact Divisors},
author={John Robert Burger},
journal={arXiv preprint arXiv:cs/0508038},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508038},
primaryClass={cs.AR}
} | burger2005quantum |
arxiv-673178 | cs/0508039 | Tight Bounds on the Redundancy of Huffman Codes | <|reference_start|>Tight Bounds on the Redundancy of Huffman Codes: In this paper we study the redundancy of Huffman codes. In particular, we consider sources for which the probability of one of the source symbols is known. We prove a conjecture of Ye and Yeung regarding the upper bound on the redundancy of such Huffman codes, which yields a tight upper bound. We also derive a tight lower bound for the redundancy under the same assumption. We further apply the method introduced in this paper to other related problems. It is shown that several other previously known bounds with different constraints follow immediately from our results.<|reference_end|> | arxiv | @article{mohajer2005tight,
title={Tight Bounds on the Redundancy of Huffman Codes},
author={Soheil Mohajer, Payam Pakzad and Ali Kakhbod},
journal={arXiv preprint arXiv:cs/0508039},
year={2005},
doi={10.1109/ITW.2006.1633796},
archivePrefix={arXiv},
eprint={cs/0508039},
primaryClass={cs.IT math.IT}
} | mohajer2005tight |
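The quantity bounded in this abstract, the redundancy of a Huffman code, can be computed exactly for any given distribution. A sketch follows; the example distribution, with one symbol probability fixed at 0.4, is an arbitrary choice.

```python
import heapq
from math import log2

def huffman_lengths(probs):
    """Codeword lengths of a binary Huffman code for the given distribution."""
    heap = [(p, i, [i]) for i, p in enumerate(probs)]   # (prob, tie-break id, leaves)
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    uid = len(probs)
    while len(heap) > 1:
        p1, _, leaves1 = heapq.heappop(heap)
        p2, _, leaves2 = heapq.heappop(heap)
        for leaf in leaves1 + leaves2:
            lengths[leaf] += 1           # each merge adds one bit to its leaves
        heapq.heappush(heap, (p1 + p2, uid, leaves1 + leaves2))
        uid += 1
    return lengths

def redundancy(probs):
    """Redundancy = expected Huffman codeword length minus the source entropy."""
    lengths = huffman_lengths(probs)
    avg = sum(p * l for p, l in zip(probs, lengths))
    entropy = -sum(p * log2(p) for p in probs if p > 0)
    return avg - entropy

# Hypothetical source in which one symbol probability is known to be 0.4.
print(redundancy([0.4, 0.3, 0.2, 0.1]))
```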
arxiv-673179 | cs/0508040 | Bounds on the Capacity of the Blockwise Noncoherent APSK-AWGN Channels | <|reference_start|>Bounds on the Capacity of the Blockwise Noncoherent APSK-AWGN Channels: Capacity of M-ary Amplitude and Phase-Shift Keying (M-APSK) over an Additive White Gaussian Noise (AWGN) channel that also introduces an unknown carrier phase rotation is considered. The phase remains constant over a block of L symbols and it is independent from block to block. Aiming to design codes with equally probable symbols, uniformly distributed channel inputs are assumed. Based on results of Peleg and Shamai for M-ary Phase Shift Keying (M-PSK) modulation, easily computable upper and lower bounds on the effective M-APSK capacity are derived. For moderate M and L and a broad range of Signal-to-Noise Ratios (SNRs), the bounds come close together. As in the case of M-PSK modulation, for large L the coherent capacity is approached.<|reference_end|> | arxiv | @article{cunha2005bounds,
title={Bounds on the Capacity of the Blockwise Noncoherent APSK-AWGN Channels},
author={Daniel C. Cunha and Jaime Portugheis},
journal={arXiv preprint arXiv:cs/0508040},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508040},
primaryClass={cs.IT math.IT}
} | cunha2005bounds |
arxiv-673180 | cs/0508041 | OpenVanilla - A Non-Intrusive Plug-In Framework of Text Services | <|reference_start|>OpenVanilla - A Non-Intrusive Plug-In Framework of Text Services: Input method (IM) is a sine qua non for text entry of many Asian languages, but its potential applications on other languages remain under-explored. This paper proposes a philosophy of input method design by seeing it as a nonintrusive plug-in text service framework. Such design allows new functionalities of text processing to be attached onto a running application without any tweaking of code. We also introduce OpenVanilla, a cross-platform framework that is designed with the above-mentioned model in mind. Frameworks like OpenVanilla have shown that an input method can be more than just a text entry tool: it offers a convenient way for developing various text service and language tools.<|reference_end|> | arxiv | @article{jiang2005openvanilla,
title={OpenVanilla - A Non-Intrusive Plug-In Framework of Text Services},
author={Tian-Jian Jiang, Deng-Liu, Kang-min Liu, Weizhong Yang, Pek-tiong Tan,
Mengjuei Hsieh, Tsung-hsiang Chang, Wen-Lien Hsu},
journal={arXiv preprint arXiv:cs/0508041},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508041},
primaryClass={cs.HC}
} | jiang2005openvanilla |
arxiv-673181 | cs/0508042 | OpenVanilla - A Non-Intrusive Plug-In Framework of Text Services | <|reference_start|>OpenVanilla - A Non-Intrusive Plug-In Framework of Text Services: This paper has been withdrawn by the author, because it was merged into cs.HC/0508041<|reference_end|> | arxiv | @article{chiang2005openvanilla,
title={OpenVanilla - A Non-Intrusive Plug-In Framework of Text Services},
author={Tien-chien Chiang, Deng-Liu, Kang-min Liu, Weizhong Yang, Pek-tiong
Tan, Mengjuei Hsieh, Tsung-hsiang Chang, Wen-Lien Hsu},
journal={arXiv preprint arXiv:cs/0508042},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508042},
primaryClass={cs.HC}
} | chiang2005openvanilla |
arxiv-673182 | cs/0508043 | Sequential Predictions based on Algorithmic Complexity | <|reference_start|>Sequential Predictions based on Algorithmic Complexity: This paper studies sequence prediction based on the monotone Kolmogorov complexity Km=-log m, i.e. based on universal deterministic/one-part MDL. m is extremely close to Solomonoff's universal prior M, the latter being an excellent predictor in deterministic as well as probabilistic environments, where performance is measured in terms of convergence of posteriors or losses. Despite this closeness to M, it is difficult to assess the prediction quality of m, since little is known about the closeness of their posteriors, which are the important quantities for prediction. We show that for deterministic computable environments, the "posterior" and losses of m converge, but rapid convergence could only be shown on-sequence; the off-sequence convergence can be slow. In probabilistic environments, neither the posterior nor the losses converge, in general.<|reference_end|> | arxiv | @article{hutter2005sequential,
title={Sequential Predictions based on Algorithmic Complexity},
author={Marcus Hutter},
journal={Journal of Computer and System Sciences, 72:1 (2006) pages 95-117},
year={2005},
number={IDSIA-16-04},
archivePrefix={arXiv},
eprint={cs/0508043},
primaryClass={cs.IT cs.LG math.IT}
} | hutter2005sequential |
arxiv-673183 | cs/0508044 | Deciding Quantifier-Free Presburger Formulas Using Parameterized Solution Bounds | <|reference_start|>Deciding Quantifier-Free Presburger Formulas Using Parameterized Solution Bounds: Given a formula in quantifier-free Presburger arithmetic, if it has a satisfying solution, there is one whose size, measured in bits, is polynomially bounded in the size of the formula. In this paper, we consider a special class of quantifier-free Presburger formulas in which most linear constraints are difference (separation) constraints, and the non-difference constraints are sparse. This class has been observed to commonly occur in software verification. We derive a new solution bound in terms of parameters characterizing the sparseness of linear constraints and the number of non-difference constraints, in addition to traditional measures of formula size. In particular, we show that the number of bits needed per integer variable is linear in the number of non-difference constraints and logarithmic in the number and size of non-zero coefficients in them, but is otherwise independent of the total number of linear constraints in the formula. The derived bound can be used in a decision procedure based on instantiating integer variables over a finite domain and translating the input quantifier-free Presburger formula to an equi-satisfiable Boolean formula, which is then checked using a Boolean satisfiability solver. In addition to our main theoretical result, we discuss several optimizations for deriving tighter bounds in practice. Empirical evidence indicates that our decision procedure can greatly outperform other decision procedures.<|reference_end|> | arxiv | @article{seshia2005deciding,
title={Deciding Quantifier-Free Presburger Formulas Using Parameterized
Solution Bounds},
author={Sanjit A. Seshia, Randal E. Bryant},
journal={Logical Methods in Computer Science, Volume 1, Issue 2 (December
19, 2005) lmcs:2270},
year={2005},
doi={10.2168/LMCS-1(2:6)2005},
archivePrefix={arXiv},
eprint={cs/0508044},
primaryClass={cs.LO}
} | seshia2005deciding |
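The decision procedure in this abstract rests on a finite solution bound: once a sufficient per-variable bound is known, satisfiability can be decided over a finite domain. The sketch below shows only the spirit of that reduction, with naive enumeration in place of the paper's SAT translation and an assumed (not derived) bound; the example constraints are hypothetical.

```python
import itertools

def sat_by_finite_instantiation(constraints, n_vars, bound):
    """Decide a conjunction of quantifier-free linear integer constraints by
    enumerating the finite domain [-bound, bound]^n_vars.

    Spirit-only sketch: the paper derives a provably sufficient bound from the
    formula's sparseness and translates to SAT; here `bound` is simply assumed
    and the search is naive enumeration."""
    for vals in itertools.product(range(-bound, bound + 1), repeat=n_vars):
        if all(c(vals) for c in constraints):
            return vals
    return None

# Two difference constraints plus one sparse non-difference constraint:
# x - y <= 2,  y - z <= -1,  3x + 2z >= 5.
cs = [lambda v: v[0] - v[1] <= 2,
      lambda v: v[1] - v[2] <= -1,
      lambda v: 3 * v[0] + 2 * v[2] >= 5]
print(sat_by_finite_instantiation(cs, n_vars=3, bound=4))
```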
arxiv-673184 | cs/0508045 | Multicommodity Flow Algorithms for Buffered Global Routing | <|reference_start|>Multicommodity Flow Algorithms for Buffered Global Routing: In this paper we describe a new algorithm for buffered global routing according to a prescribed buffer site map. Specifically, we describe a provably good multi-commodity flow based algorithm that finds a global routing minimizing buffer and wire congestion subject to given constraints on routing area (wirelength and number of buffers) and sink delays. Our algorithm allows computing the tradeoff curve between routing area and wire/buffer congestion under any combination of delay and capacity constraints, and simultaneously performs buffer/wire sizing, as well as layer and pin assignment. Experimental results show that near-optimal results are obtained with a practical runtime.<|reference_end|> | arxiv | @article{albrecht2005multicommodity,
title={Multicommodity Flow Algorithms for Buffered Global Routing},
author={Christoph Albrecht, Andrew B. Kahng, Ion I. Mandoiu, and Alexander
Zelikovsky},
journal={arXiv preprint arXiv:cs/0508045},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508045},
primaryClass={cs.DS}
} | albrecht2005multicommodity |
arxiv-673185 | cs/0508046 | Relaxation Bounds on the Minimum Pseudo-Weight of Linear Block Codes | <|reference_start|>Relaxation Bounds on the Minimum Pseudo-Weight of Linear Block Codes: Just as the Hamming weight spectrum of a linear block code sheds light on the performance of a maximum likelihood decoder, the pseudo-weight spectrum provides insight into the performance of a linear programming decoder. Using properties of polyhedral cones, we find the pseudo-weight spectrum of some short codes. We also present two general lower bounds on the minimum pseudo-weight. The first bound is based on the column weight of the parity-check matrix. The second bound is computed by solving an optimization problem. In some cases, this bound is more tractable to compute than previously known bounds and thus can be applied to longer codes.<|reference_end|> | arxiv | @article{chaichanavong2005relaxation,
title={Relaxation Bounds on the Minimum Pseudo-Weight of Linear Block Codes},
author={Panu Chaichanavong, Paul H. Siegel},
journal={arXiv preprint arXiv:cs/0508046},
year={2005},
doi={10.1109/ISIT.2005.1523448},
archivePrefix={arXiv},
eprint={cs/0508046},
primaryClass={cs.IT math.IT}
} | chaichanavong2005relaxation |
arxiv-673186 | cs/0508047 | Further Results on Coding for Reliable Communication over Packet Networks | <|reference_start|>Further Results on Coding for Reliable Communication over Packet Networks: In "On Coding for Reliable Communication over Packet Networks" (Lun, Medard, and Effros, Proc. 42nd Annu. Allerton Conf. Communication, Control, and Computing, 2004), a capacity-achieving coding scheme for unicast or multicast over lossy wireline or wireless packet networks is presented. We extend that paper's results in two ways: First, we extend the network model to allow packets received on a link to arrive according to any process with an average rate, as opposed to the assumption of Poisson traffic with i.i.d. losses that was previously made. Second, in the case of Poisson traffic with i.i.d. losses, we derive error exponents that quantify the rate at which the probability of error decays with coding delay.<|reference_end|> | arxiv | @article{lun2005further,
title={Further Results on Coding for Reliable Communication over Packet
Networks},
author={Desmond S. Lun, Muriel Medard, Ralf Koetter, Michelle Effros},
journal={Proc. 2005 IEEE International Symposium on Information Theory
(ISIT 2005), pages 1848-1852, September 2005},
year={2005},
doi={10.1109/ISIT.2005.1523665},
archivePrefix={arXiv},
eprint={cs/0508047},
primaryClass={cs.IT cs.NI math.IT}
} | lun2005further |
arxiv-673187 | cs/0508048 | An Operational Foundation for Delimited Continuations in the CPS Hierarchy | <|reference_start|>An Operational Foundation for Delimited Continuations in the CPS Hierarchy: We present an abstract machine and a reduction semantics for the lambda-calculus extended with control operators that give access to delimited continuations in the CPS hierarchy. The abstract machine is derived from an evaluator in continuation-passing style (CPS); the reduction semantics (i.e., a small-step operational semantics with an explicit representation of evaluation contexts) is constructed from the abstract machine; and the control operators are the shift and reset family. We also present new applications of delimited continuations in the CPS hierarchy: finding list prefixes and normalization by evaluation for a hierarchical language of units and products.<|reference_end|> | arxiv | @article{biernacka2005an,
title={An Operational Foundation for Delimited Continuations in the CPS
Hierarchy},
author={Malgorzata Biernacka and Dariusz Biernacki and Olivier Danvy},
journal={Logical Methods in Computer Science, Volume 1, Issue 2 (November
8, 2005) lmcs:2269},
year={2005},
doi={10.2168/LMCS-1(2:5)2005},
archivePrefix={arXiv},
eprint={cs/0508048},
primaryClass={cs.LO cs.PL}
} | biernacka2005an |
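To make the shift and reset operators concrete, the classic example reset (1 + shift f. f (f 10)) evaluates to 12: shift captures the delimited continuation f = \v. 1 + v and applies it twice to 10. Below is a hand-translation of this one example into continuation-passing style in Python (a sketch of the example, not a general implementation of the operators):

def reset(body):
    # run the delimited body with the identity as its initial continuation
    return body(lambda v: v)

def example():
    # reset (1 + shift f. f (f 10))
    def body(k):
        f = lambda v: k(1 + v)   # f: the captured delimited continuation "1 + []"
        return f(f(10))          # shift's body becomes the result of the reset
    return reset(body)

assert example() == 12           # 1 + (1 + 10)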
arxiv-673188 | cs/0508049 | Characterizations of Pseudo-Codewords of LDPC Codes | <|reference_start|>Characterizations of Pseudo-Codewords of LDPC Codes: An important property of high-performance, low-complexity codes is the existence of highly efficient algorithms for their decoding. Many of the most efficient recent graph-based algorithms, e.g., message-passing algorithms and decoding based on linear programming, crucially depend on the efficient representation of a code in a graphical model. In order to understand the performance of these algorithms, we argue for the characterization of codes in terms of a so-called fundamental cone in Euclidean space, which is a function of a given parity-check matrix of a code rather than of the code itself. We give a number of properties of this fundamental cone derived from its connection to unramified covers of the graphical models on which the decoding algorithms operate. For the class of cycle codes, these developments naturally lead to a characterization of the fundamental polytope as the Newton polytope of the Hashimoto edge zeta function of the underlying graph.<|reference_end|> | arxiv | @article{koetter2005characterizations,
title={Characterizations of Pseudo-Codewords of LDPC Codes},
author={Ralf Koetter, Wen-Ching W. Li, Pascal O. Vontobel, Judy L. Walker},
journal={arXiv preprint arXiv:cs/0508049},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508049},
primaryClass={cs.IT cs.DM math.IT}
} | koetter2005characterizations |
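For a parity-check matrix H whose j-th check involves the variable set N(j), the fundamental cone referred to above is standardly written as (following Koetter and Vontobel; not restated in the abstract):

\[ K(H) \;=\; \Big\{ \omega \in \mathbb{R}_{\ge 0}^{\,n} \;:\; \omega_i \le \sum_{i' \in N(j)\setminus\{i\}} \omega_{i'} \ \ \text{for every check } j \text{ and every } i \in N(j) \Big\}. \]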
arxiv-673189 | cs/0508050 | Duality between channel capacity and rate distortion with two-sided state information | <|reference_start|>Duality between channel capacity and rate distortion with two-sided state information: We show that the duality between channel capacity and data compression is retained when state information is available to the sender, to the receiver, to both, or to neither. We present a unified theory for eight special cases of channel capacity and rate distortion with state information, which also extends existing results to arbitrary pairs of independent and identically distributed (i.i.d.) correlated state information available at the sender and at the receiver, respectively. In particular, the resulting general formula for channel capacity assumes the same form as the generalized Wyner Ziv rate distortion function.<|reference_end|> | arxiv | @article{cover2005duality,
title={Duality between channel capacity and rate distortion with two-sided
state information},
author={T. M. Cover and M. Chiang},
journal={arXiv preprint arXiv:cs/0508050},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508050},
primaryClass={cs.IT math.IT}
} | cover2005duality |
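Two of the eight special cases are classical and already exhibit the duality. With the state S known noncausally only at the encoder (Gelfand-Pinsker) and with side information Y available only at the decoder (Wyner-Ziv), the standard formulas read

\[ C \;=\; \max_{p(u\mid s),\; x=f(u,s)} \big[\, I(U;Y) - I(U;S) \,\big], \qquad R_{WZ}(D) \;=\; \min_{p(u\mid x),\; \hat{x}=g(u,y)} \big[\, I(U;X) - I(U;Y) \,\big], \]

whose mirror-image structure is what the paper's general two-sided formula makes systematic.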
arxiv-673190 | cs/0508051 | Trellis-Based Equalization for Sparse ISI Channels Revisited | <|reference_start|>Trellis-Based Equalization for Sparse ISI Channels Revisited: Sparse intersymbol-interference (ISI) channels are encountered in a variety of high-data-rate communication systems. Such channels have a large channel memory length, but only a small number of significant channel coefficients. In this paper, trellis-based equalization of sparse ISI channels is revisited. Due to the large channel memory length, the complexity of maximum-likelihood detection, e.g., by means of the Viterbi algorithm (VA), is normally prohibitive. In the first part of the paper, a unified framework based on factor graphs is presented for complexity reduction without loss of optimality. In this new context, two known reduced-complexity algorithms for sparse ISI channels are recapitulated: the multi-trellis VA (M-VA) and the parallel-trellis VA (P-VA). It is shown that, contrary to earlier claims, the M-VA does not lead to a reduced computational complexity. The P-VA, on the other hand, leads to a significant complexity reduction, but can only be applied to a certain class of sparse channels. In the second part of the paper, a unified approach is investigated to tackle general sparse channels: It is shown that the use of a linear filter at the receiver renders the application of standard reduced-state trellis-based equalizer algorithms feasible, without significant loss of optimality. Numerical results verify the efficiency of the proposed receiver structure.<|reference_end|> | arxiv | @article{mietzner2005trellis-based,
title={Trellis-Based Equalization for Sparse ISI Channels Revisited},
author={Jan Mietzner, Sabah Badri-Hoeher, Ingmar Land, and Peter A. Hoeher},
journal={arXiv preprint arXiv:cs/0508051},
year={2005},
doi={10.1109/ISIT.2005.1523328},
archivePrefix={arXiv},
eprint={cs/0508051},
primaryClass={cs.IT math.IT}
} | mietzner2005trellis-based |
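To see why the full-trellis VA is prohibitive for sparse channels, note that its state is the last L-1 transmitted symbols, so the trellis has |A|^(L-1) states even when most of the L taps are zero. A minimal full-trellis sketch for BPSK follows (textbook assumptions throughout; this is the baseline the paper's reduced-complexity schemes improve on, not those schemes themselves):

import numpy as np
from itertools import product

def viterbi_equalize(y, h):
    """ML sequence detection of BPSK symbols over an ISI channel h (len(h) >= 2).

    State = (x_{t-1}, ..., x_{t-L+1}); branch metric = squared Euclidean
    distance. The 2^(L-1) states are exactly the cost that the M-VA / P-VA
    and the prefilter-plus-reduced-state approach try to avoid.
    """
    L = len(h)
    states = list(product([-1.0, 1.0], repeat=L - 1))
    cost = {s: 0.0 for s in states}          # unknown start: all states equal
    paths = {s: [] for s in states}
    for y_t in y:
        new_cost, new_paths = {}, {}
        for s in states:
            for x_t in (-1.0, 1.0):
                z = h[0] * x_t + sum(h[k] * s[k - 1] for k in range(1, L))
                metric = cost[s] + (y_t - z) ** 2
                ns = (x_t,) + s[:-1]
                if ns not in new_cost or metric < new_cost[ns]:
                    new_cost[ns], new_paths[ns] = metric, paths[s] + [x_t]
        cost, paths = new_cost, new_paths
    return paths[min(cost, key=cost.get)]    # survivor of the best end state

# usage: a sparse 5-tap channel with only two significant coefficients
h = [1.0, 0.0, 0.0, 0.0, 0.6]
x = [1.0, -1.0, 1.0, 1.0, -1.0, 1.0]
full = [-1.0] * (len(h) - 1) + x             # known all-(-1) history
y = [sum(h[k] * full[t - k] for k in range(len(h)))
     for t in range(len(h) - 1, len(full))]
print(viterbi_equalize(y, h))                # recovers x on noiseless data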
arxiv-673191 | cs/0508052 | Energy Optimal Data Propagation in Wireless Sensor Networks | <|reference_start|>Energy Optimal Data Propagation in Wireless Sensor Networks: We propose an algorithm which produces a randomized strategy reaching optimal data propagation in wireless sensor networks (WSN). In [6] and [8], an energy-balanced solution is sought using an approximation algorithm. Our algorithm improves on these in two ways: (a) when an energy-balanced solution does not exist, it still finds an optimal solution (whereas previous algorithms did not consider this case and provide no useful solution); (b) instead of being an approximation algorithm, it finds the exact solution in one pass. We also provide a rigorous proof of the optimality of our solution.<|reference_end|> | arxiv | @article{leone2005energy,
title={Energy Optimal Data Propagation in Wireless Sensor Networks},
author={Pierre Leone, Olivier Powell and Jose Rolim},
journal={Journal of Parallel and Distributed Computing Volume 67, Issue 3 ,
March 2007, Pages 302-317},
year={2005},
doi={10.1016/j.jpdc.2006.10.007},
archivePrefix={arXiv},
eprint={cs/0508052},
primaryClass={cs.DC}
} | leone2005energy |
arxiv-673192 | cs/0508053 | Measuring Semantic Similarity by Latent Relational Analysis | <|reference_start|>Measuring Semantic Similarity by Latent Relational Analysis: This paper introduces Latent Relational Analysis (LRA), a method for measuring semantic similarity. LRA measures similarity in the semantic relations between two pairs of words. When two pairs have a high degree of relational similarity, they are analogous. For example, the pair cat:meow is analogous to the pair dog:bark. There is evidence from cognitive science that relational similarity is fundamental to many cognitive and linguistic tasks (e.g., analogical reasoning). In the Vector Space Model (VSM) approach to measuring relational similarity, the similarity between two pairs is calculated by the cosine of the angle between the vectors that represent the two pairs. The elements in the vectors are based on the frequencies of manually constructed patterns in a large corpus. LRA extends the VSM approach in three ways: (1) patterns are derived automatically from the corpus, (2) Singular Value Decomposition is used to smooth the frequency data, and (3) synonyms are used to reformulate word pairs. This paper describes the LRA algorithm and experimentally compares LRA to VSM on two tasks, answering college-level multiple-choice word analogy questions and classifying semantic relations in noun-modifier expressions. LRA achieves state-of-the-art results, reaching human-level performance on the analogy questions and significantly exceeding VSM performance on both tasks.<|reference_end|> | arxiv | @article{turney2005measuring,
title={Measuring Semantic Similarity by Latent Relational Analysis},
author={Peter D. Turney (National Research Council of Canada)},
journal={Proceedings of the Nineteenth International Joint Conference on
Artificial Intelligence (IJCAI-05), (2005), Edinburgh, Scotland, 1136-1141},
year={2005},
number={NRC-48255},
archivePrefix={arXiv},
eprint={cs/0508053},
primaryClass={cs.LG cs.CL cs.IR}
} | turney2005measuring |
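In the VSM step that LRA extends, each word pair is represented by a vector of corpus frequencies of joining patterns, and relational similarity is the cosine between these vectors. A toy sketch (the pattern counts below are invented for illustration, not drawn from the paper or any corpus):

import numpy as np

def cosine(u, v):
    """Relational similarity as the cosine between pattern-frequency vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# columns: hypothetical patterns "X is the sound of Y", "Y makes X", "X holds Y"
cat_meow = np.array([42.0, 7.0, 0.0])
dog_bark = np.array([38.0, 5.0, 1.0])
cup_tea  = np.array([0.0, 2.0, 51.0])

print(cosine(cat_meow, dog_bark))   # high: analogous pairs (same relation)
print(cosine(cat_meow, cup_tea))    # low: different relation

LRA's three additions -- automatically derived patterns, SVD smoothing of the count matrix, and synonym-based reformulation of the pairs -- all operate on this representation.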
arxiv-673193 | cs/0508054 | Sensing Capacity for Markov Random Fields | <|reference_start|>Sensing Capacity for Markov Random Fields: This paper computes the sensing capacity of a sensor network, with sensors of limited range, sensing a two-dimensional Markov random field, by modeling the sensing operation as an encoder. Sensor observations are dependent across sensors, and the sensor network output across different states of the environment is neither identically nor independently distributed. Using a random coding argument, based on the theory of types, we prove a lower bound on the sensing capacity of the network, which characterizes the ability of the sensor network to distinguish among environments with Markov structure, to within a desired accuracy.<|reference_end|> | arxiv | @article{rachlin2005sensing,
title={Sensing Capacity for Markov Random Fields},
author={Yaron Rachlin, Rohit Negi, and Pradeep Khosla},
journal={arXiv preprint arXiv:cs/0508054},
year={2005},
doi={10.1109/ISIT.2005.1523308},
archivePrefix={arXiv},
eprint={cs/0508054},
primaryClass={cs.IT math.IT}
} | rachlin2005sensing |
arxiv-673194 | cs/0508055 | DNA Codes that Avoid Secondary Structures | <|reference_start|>DNA Codes that Avoid Secondary Structures: In this paper, we consider the problem of designing DNA sequences (codewords) for DNA storage systems and DNA computing that are unlikely to fold back onto themselves to form undesirable secondary structures. The paper addresses both the issue of enumerating the sequences with such properties and the problem of practical code construction.<|reference_end|> | arxiv | @article{milenkovic2005dna,
title={DNA Codes that Avoid Secondary Structures},
author={Olgica Milenkovic, Navin Kashyap},
journal={arXiv preprint arXiv:cs/0508055},
year={2005},
doi={10.1109/ISIT.2005.1523340},
archivePrefix={arXiv},
eprint={cs/0508055},
primaryClass={cs.DM cs.IT math.IT}
} | milenkovic2005dna |
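One simple necessary condition for a hairpin secondary structure is that the sequence contain two disjoint, reverse-complementary substrings (the stem) separated by a few unpaired bases (the loop). A crude filter along these lines, with illustrative thresholds (the paper's actual design criteria are more refined):

def revcomp(s):
    comp = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return "".join(comp[c] for c in reversed(s))

def has_hairpin_stem(seq, stem=4, loop=3):
    """True if seq contains a length >= stem substring whose reverse
    complement reappears at least `loop` bases downstream, i.e. the
    sequence could fold back onto itself and pair."""
    for i in range(len(seq) - stem + 1):
        probe = revcomp(seq[i:i + stem])
        if seq.find(probe, i + stem + loop) != -1:
            return True
    return False

print(has_hairpin_stem("ACGTTTTACGT"))    # True: "ACGT" pairs with itself across loop "TTT"
print(has_hairpin_stem("AAAACCCCAAAA"))   # False: no reverse-complementary stem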
arxiv-673195 | cs/0508056 | Very Simple Chaitin Machines for Concrete AIT | <|reference_start|>Very Simple Chaitin Machines for Concrete AIT: In 1975, Chaitin introduced his celebrated Omega number, the halting probability of a universal Chaitin machine, a universal Turing machine with a prefix-free domain. The Omega number's bits are {\em algorithmically random}--there is no reason the bits should be the way they are, if we define ``reason'' to be a computable explanation smaller than the data itself. Since that time, only {\em two} explicit universal Chaitin machines have been proposed, both by Chaitin himself. Concrete algorithmic information theory involves the study of particular universal Turing machines, about which one can state theorems with specific numerical bounds, rather than include terms like O(1). We present several new tiny Chaitin machines (those with a prefix-free domain) suitable for the study of concrete algorithmic information theory. One of the machines, which we call Keraia, is a binary encoding of lambda calculus based on a curried lambda operator. Source code is included in the appendices. We also give an algorithm for restricting the domain of blank-endmarker machines to a prefix-free domain over an alphabet that does not include the endmarker; this allows one to take many universal Turing machines and construct universal Chaitin machines from them.<|reference_end|> | arxiv | @article{stay2005very,
title={Very Simple Chaitin Machines for Concrete AIT},
author={Michael Stay},
journal={Fundamenta Informaticae 68 (3) 2005. pp. 231--247},
year={2005},
number={CDMTCS Report 265},
archivePrefix={arXiv},
eprint={cs/0508056},
primaryClass={cs.IT math.IT}
} | stay2005very |
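The halting probability in question is, for a universal Chaitin machine U with prefix-free domain,

\[ \Omega_U \;=\; \sum_{p\,:\,U(p)\ \text{halts}} 2^{-|p|}, \]

where p ranges over binary programs; prefix-freeness guarantees, via Kraft's inequality, that the sum converges to a real in (0,1).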
arxiv-673196 | cs/0508057 | On the Performance of Turbo Codes in Quasi-Static Fading Channels | <|reference_start|>On the Performance of Turbo Codes in Quasi-Static Fading Channels: In this paper, we investigate in detail the performance of turbo codes in quasi-static fading channels both with and without antenna diversity. First, we develop a simple and accurate analytic technique to evaluate the performance of turbo codes in quasi-static fading channels. The proposed analytic technique relates the frame error rate of a turbo code to the iterative decoder convergence threshold, rather than to the turbo code distance spectrum. Subsequently, we compare the performance of various turbo codes in quasi-static fading channels. We show that, in contrast to the situation in the AWGN channel, turbo codes with different interleaver sizes or turbo codes based on RSC codes with different constraint lengths and generator polynomials exhibit identical performance. Moreover, we also compare the performance of turbo codes and convolutional codes in quasi-static fading channels under the condition of identical decoding complexity. In particular, we show that turbo codes do not outperform convolutional codes in quasi-static fading channels with no antenna diversity; and that turbo codes only outperform convolutional codes in quasi-static fading channels with antenna diversity.<|reference_end|> | arxiv | @article{rodrigues2005on,
title={On the Performance of Turbo Codes in Quasi-Static Fading Channels},
author={M. R. D. Rodrigues, I. Chatzigeorgiou, I. J. Wassell and R. Carrasco},
journal={arXiv preprint arXiv:cs/0508057},
year={2005},
doi={10.1109/ISIT.2005.1523410},
archivePrefix={arXiv},
eprint={cs/0508057},
primaryClass={cs.IT math.IT}
} | rodrigues2005on |
arxiv-673197 | cs/0508058 | Entropy coding with Variable Length Re-writing Systems | <|reference_start|>Entropy coding with Variable Length Re-writing Systems: This paper describes a new set of block source codes well suited for data compression. These codes are defined by sets of productions rules of the form a.l->b, where a in A represents a value from the source alphabet A and l, b are -small- sequences of bits. These codes naturally encompass other Variable Length Codes (VLCs) such as Huffman codes. It is shown that these codes may have a similar or even a shorter mean description length than Huffman codes for the same encoding and decoding complexity. A first code design method allowing to preserve the lexicographic order in the bit domain is described. The corresponding codes have the same mean description length (mdl) as Huffman codes from which they are constructed. Therefore, they outperform from a compression point of view the Hu-Tucker codes designed to offer the lexicographic property in the bit domain. A second construction method allows to obtain codes such that the marginal bit probability converges to 0.5 as the sequence length increases and this is achieved even if the probability distribution function is not known by the encoder.<|reference_end|> | arxiv | @article{jegou2005entropy,
title={Entropy coding with Variable Length Re-writing Systems},
author={Herve Jegou and Christine Guillemot},
journal={arXiv preprint arXiv:cs/0508058},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508058},
primaryClass={cs.IT math.IT}
} | jegou2005entropy |
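A sketch of one plausible operational reading of the rules follows (an illustration only; the paper's exact rewriting semantics may differ in details): to emit symbol a, find a rule a.l->b whose bit-suffix l matches the end of the bitstream produced so far, strip l, and append b. When every l is empty, this degenerates to ordinary VLC encoding, which is how the codes encompass Huffman codes.

def rewrite_encode(symbols, rules):
    """rules: ordered list of ((symbol, suffix_bits), replacement_bits)."""
    out = ""
    for a in symbols:
        for (sym, l), b in rules:
            if sym == a and out.endswith(l):
                out = out[:len(out) - len(l)] + b   # rewrite suffix l into b
                break
        else:
            raise ValueError(f"no applicable rule for {a!r}")
    return out

# all-empty-suffix rules: a plain prefix code (Huffman-like special case)
rules = [(("x", ""), "0"), (("y", ""), "10"), (("z", ""), "11")]
print(rewrite_encode("xyzx", rules))   # -> "010110"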
arxiv-673198 | cs/0508059 | Honesty can be the best policy within quantum mechanics | <|reference_start|>Honesty can be the best policy within quantum mechanics: Honesty has never been scientifically proven to be the best policy in any setting. It is pointed out that only an honest person can prevent a dishonest partner from biasing the outcome of quantum coin tossing.<|reference_end|> | arxiv | @article{mitra2005honesty,
title={Honesty can be the best policy within quantum mechanics},
author={Arindam Mitra},
journal={arXiv preprint arXiv:cs/0508059},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508059},
primaryClass={cs.CR}
} | mitra2005honesty |
arxiv-673199 | cs/0508060 | Algorithms for Discrete Denoising Under Channel Uncertainty | <|reference_start|>Algorithms for Discrete Denoising Under Channel Uncertainty: The goal of a denoising algorithm is to reconstruct a signal from its noise-corrupted observations. Perfect reconstruction is seldom possible and performance is measured under a given fidelity criterion. In a recent work, the authors addressed the problem of denoising unknown discrete signals corrupted by a discrete memoryless channel when the channel, rather than being completely known, is only known to lie in some uncertainty set of possible channels. A sequence of denoisers was derived for this case and shown to be asymptotically optimal with respect to a worst-case criterion argued most relevant to this setting. In the present work we address the implementation and complexity of this denoiser for channels parametrized by a scalar, establishing its practicality. We show that for symmetric channels, the problem can be mapped into a convex optimization problem, which can be solved efficiently. We also present empirical results suggesting the potential of these schemes to do well in practice. A key component of our schemes is an estimator of the subset of channels in the uncertainty set that are feasible in the sense of being able to give rise to the noise-corrupted signal statistics for some channel input distribution. We establish the efficiency of this estimator, both algorithmically and experimentally. We also present a modification of the recently developed discrete universal denoiser (DUDE) that assumes a channel based on the said estimator, and show that, in practice, the resulting scheme performs well. For concreteness, we focus on the binary alphabet case and binary symmetric channels, but also discuss the extensions of the algorithms to general finite alphabets and to general channels parameterized by a scalar.<|reference_end|> | arxiv | @article{gemelos2005algorithms,
title={Algorithms for Discrete Denoising Under Channel Uncertainty},
author={George Gemelos, Styrmir Sigurjonsson, Tsachy Weissman},
journal={arXiv preprint arXiv:cs/0508060},
year={2005},
doi={10.1109/TSP.2006.874295},
archivePrefix={arXiv},
eprint={cs/0508060},
primaryClass={cs.IT math.IT}
} | gemelos2005algorithms |
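For orientation, the denoiser being adapted is the two-pass DUDE, whose per-symbol rule for a known channel matrix Pi, loss matrix Lambda, and two-sided context counts m(c) is the argmin over reconstructions of m(c)^T Pi^{-T} (Lambda-column elementwise-times Pi-column-of-z). A direct sketch for a known channel (the paper's channel-uncertainty machinery, which replaces Pi with an estimated feasible channel, is not shown):

import numpy as np
from collections import defaultdict

def dude(z, Pi, Loss, k):
    """Basic two-pass DUDE for a known DMC Pi over alphabet {0..A-1}.

    Pass 1 counts the observed symbol per two-sided context of radius k;
    pass 2 picks, per position, the reconstruction xh minimizing
        m(c) @ inv(Pi).T @ (Loss[:, xh] * Pi[:, z_i]).
    """
    n, A = len(z), Pi.shape[0]
    Pi_inv_T = np.linalg.inv(Pi).T
    ctx = lambda i: (tuple(z[i - k:i]), tuple(z[i + 1:i + k + 1]))
    counts = defaultdict(lambda: np.zeros(A))
    for i in range(k, n - k):
        counts[ctx(i)][z[i]] += 1
    out = list(z)
    for i in range(k, n - k):
        m, pi_z = counts[ctx(i)], Pi[:, z[i]]
        out[i] = int(np.argmin([m @ Pi_inv_T @ (Loss[:, xh] * pi_z) for xh in range(A)]))
    return out

# binary symmetric channel, crossover 0.1, Hamming loss, context radius 1
delta = 0.1
Pi = np.array([[1 - delta, delta], [delta, 1 - delta]])
Loss = 1.0 - np.eye(2)
print(dude([0, 0, 1, 0, 0, 0, 0, 1, 0], Pi, Loss, k=1))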
arxiv-673200 | cs/0508061 | SciBlog : A Tool for Scientific Collaboration | <|reference_start|>SciBlog : A Tool for Scientific Collaboration: I describe a newly developed online scientific web-log (SciBlog). The online facility consists of several modules needed in common, conventional research activity. I show that this enables scientists around the world to collaborate over the net.<|reference_end|> | arxiv | @article{handoko2005sciblog,
title={SciBlog : A Tool for Scientific Collaboration},
author={L.T. Handoko},
journal={Proceeding of the WKM 2004},
year={2005},
number={FISIKALIPI-04013},
archivePrefix={arXiv},
eprint={cs/0508061},
primaryClass={cs.CY}
} | handoko2005sciblog |