Dataset columns:
  corpus_id: string (length 7 to 12)
  paper_id: string (length 9 to 16)
  title: string (length 1 to 261)
  abstract: string (length 70 to 4.02k)
  source: string (1 distinct value)
  bibtex: string (length 208 to 20.9k)
  citation_key: string (length 6 to 100)
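For illustration, the following is a minimal, hypothetical Python sketch of how records with the schema above might be represented and iterated; the example values are taken from the first record below (with the abstract and bibtex fields elided), and no particular loading library or file format is assumed.

# Hypothetical sketch only: dataset rows represented as plain dicts.
# Field names follow the column listing above; the loading mechanism
# (file format, library) is not specified by this dump.
records = [
    {
        "corpus_id": "arxiv-673801",
        "paper_id": "cs/0601122",
        "title": "Reducibility of Gene Patterns in Ciliates using the Breakpoint Graph",
        "abstract": "Gene assembly in ciliates is one of the most involved DNA processings ...",
        "source": "arxiv",
        "bibtex": "@article{brijder2006reducibility, ...}",
        "citation_key": "brijder2006reducibility",
    },
]

for rec in records:
    # Each record pairs an arXiv paper id with its title, abstract, and BibTeX entry.
    print(rec["citation_key"], "->", rec["paper_id"], "-", rec["title"])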
arxiv-673801
cs/0601122
Reducibility of Gene Patterns in Ciliates using the Breakpoint Graph
<|reference_start|>Reducibility of Gene Patterns in Ciliates using the Breakpoint Graph: Gene assembly in ciliates is one of the most involved DNA processings going on in any organism. This process transforms one nucleus (the micronucleus) into another functionally different nucleus (the macronucleus). We continue the development of the theoretical models of gene assembly, and in particular we demonstrate the use of the concept of the breakpoint graph, known from another branch of DNA transformation research. More specifically: (1) we characterize the intermediate gene patterns that can occur during the transformation of a given micronuclear gene pattern to its macronuclear form; (2) we determine the number of applications of the loop recombination operation (the most basic of the three molecular operations that accomplish gene assembly) needed in this transformation; (3) we generalize previous results (and give elegant alternatives for some proofs) concerning characterizations of the micronuclear gene patterns that can be assembled using a specific subset of the three molecular operations.<|reference_end|>
arxiv
@article{brijder2006reducibility, title={Reducibility of Gene Patterns in Ciliates using the Breakpoint Graph}, author={Robert Brijder, Hendrik Jan Hoogeboom, Grzegorz Rozenberg}, journal={Theoretical Computer Science, v. 356, 26-45, 2006}, year={2006}, doi={10.1016/j.tcs.2006.01.041}, archivePrefix={arXiv}, eprint={cs/0601122}, primaryClass={cs.LO q-bio.GN} }
brijder2006reducibility
arxiv-673802
cs/0601123
Low density codes achieve the rate-distortion bound
<|reference_start|>Low density codes achieve the rate-distortion bound: We propose a new construction for low-density source codes with multiple parameters that can be tuned to optimize the performance of the code. In addition, we introduce a set of analysis techniques for deriving upper bounds for the expected distortion of our construction, as well as more general low-density constructions. We show that (with an optimal encoding algorithm) our codes achieve the rate-distortion bound for a binary symmetric source and Hamming distortion. Our methods also provide rigorous upper bounds on the minimum distortion achievable by previously proposed low-density constructions.<|reference_end|>
arxiv
@article{martinian2006low, title={Low density codes achieve the rate-distortion bound}, author={Emin Martinian, Martin J. Wainwright}, journal={arXiv preprint arXiv:cs/0601123}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601123}, primaryClass={cs.IT math.IT} }
martinian2006low
arxiv-673803
cs/0601124
Power Control for User Cooperation
<|reference_start|>Power Control for User Cooperation: For a fading Gaussian multiple access channel with user cooperation, we obtain the optimal power allocation policies that maximize the rates achievable by block Markov superposition coding. The optimal policies result in a coding scheme that is simpler than the one for a general multiple access channel with generalized feedback. This simpler coding scheme also leads to the possibility of formulating an otherwise non-concave optimization problem as a concave one. Using the channel state information at the transmitters to adapt the powers, we demonstrate significant gains over the achievable rates for existing cooperative systems.<|reference_end|>
arxiv
@article{kaya2006power, title={Power Control for User Cooperation}, author={Onur Kaya and Sennur Ulukus}, journal={arXiv preprint arXiv:cs/0601124}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601124}, primaryClass={cs.IT math.IT} }
kaya2006power
arxiv-673804
cs/0601125
Metadata aggregation and "automated digital libraries": A retrospective on the NSDL experience
<|reference_start|>Metadata aggregation and "automated digital libraries": A retrospective on the NSDL experience: Over three years ago, the Core Integration team of the National Science Digital Library (NSDL) implemented a digital library based on metadata aggregation using Dublin Core and OAI-PMH. The initial expectation was that such low-barrier technologies would be relatively easy to automate and administer. While this architectural choice permitted rapid deployment of a production NSDL, our three years of experience have contradicted our original expectations of easy automation and low people cost. We have learned that alleged "low-barrier" standards are often harder to deploy than expected. In this paper we report on this experience and comment on the general cost, the functionality, and the ultimate effectiveness of this architecture.<|reference_end|>
arxiv
@article{lagoze2006metadata, title={Metadata aggregation and "automated digital libraries": A retrospective on the NSDL experience}, author={Carl Lagoze, Dean Krafft, Tim Cornwell, Naomi Dushay, Dean Eckstrom, and John Saylor}, journal={arXiv preprint arXiv:cs/0601125}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601125}, primaryClass={cs.DL} }
lagoze2006metadata
arxiv-673805
cs/0601126
Approximate Linear Time ML Decoding on Tail-Biting Trellises in Two Rounds
<|reference_start|>Approximate Linear Time ML Decoding on Tail-Biting Trellises in Two Rounds: A linear time approximate maximum likelihood decoding algorithm on tail-biting trellises is presented that requires exactly two rounds on the trellis. This is an adaptation of an algorithm proposed earlier with the advantage that it reduces the time complexity from O(m log m) to O(m) where m is the number of nodes in the tail-biting trellis. A necessary condition for the output of the algorithm to differ from the output of the ideal ML decoder is reduced and simulation results on an AWGN channel using tail-biting trellises for two rate 1/2 convolutional codes with memory 4 and 6 respectively are reported.<|reference_end|>
arxiv
@article{krishnan2006approximate, title={Approximate Linear Time ML Decoding on Tail-Biting Trellises in Two Rounds}, author={K. Murali Krishnan, Priti Shankar}, journal={Proc. ISIT 2006, pp.2245-2249}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601126}, primaryClass={cs.IT math.IT} }
krishnan2006approximate
arxiv-673806
cs/0601127
Truly Online Paging with Locality of Reference
<|reference_start|>Truly Online Paging with Locality of Reference: The competitive analysis fails to model locality of reference in the online paging problem. To deal with it, Borodin et al. introduced the access graph model, which attempts to capture the locality of reference. However, the access graph model has a number of troubling aspects. The access graph has to be known in advance to the paging algorithm and the memory required to represent the access graph itself may be very large. In this paper we present truly online strongly competitive paging algorithms in the access graph model that do not have any prior information on the access sequence. We present both deterministic and randomized algorithms. The algorithms need only O(k log n) bits of memory, where k is the number of page slots available and n is the size of the virtual address space. I.e., asymptotically no more memory than needed to store the virtual address translation table. We also observe that our algorithms adapt themselves to temporal changes in the locality of reference. We model temporal changes in the locality of reference by extending the access graph model to the so-called extended access graph model, in which many vertices of the graph can correspond to the same virtual page. We define a measure for the rate of change in the locality of reference in G denoted by Delta(G). We then show our algorithms remain strongly competitive as long as Delta(G) >= (1 + epsilon)k, and no truly online algorithm can be strongly competitive on a class of extended access graphs that includes all graphs G with Delta(G) >= k - o(k).<|reference_end|>
arxiv
@article{fiat2006truly, title={Truly Online Paging with Locality of Reference}, author={Amos Fiat, Manor Mendel}, journal={38th Annual Symposium on Foundations of Computer Science (FOCS '97), 1997, pp. 326}, year={2006}, doi={10.1109/SFCS.1997.646121}, archivePrefix={arXiv}, eprint={cs/0601127}, primaryClass={cs.DS} }
fiat2006truly
arxiv-673807
cs/0601128
On the 3-distortion of a path
<|reference_start|>On the 3-distortion of a path: We prove that, when a path of length n is embedded in R^2, the 3-distortion is an Omega(n^{1/2}), and that, when embedded in R^d, the 3-distortion is an O(n^{1/d-1}).<|reference_end|>
arxiv
@article{dehornoy2006on, title={On the 3-distortion of a path}, author={Pierre Dehornoy (DMA)}, journal={European Journal of Combinatorics (2008) http://www.elsevier.com/wps/find/journaldescription.cws_home/622824/description#description}, year={2006}, doi={10.1016/j.ejc.2006.11.002}, archivePrefix={arXiv}, eprint={cs/0601128}, primaryClass={cs.CG} }
dehornoy2006on
arxiv-673808
cs/0601129
Instantaneously Trained Neural Networks
<|reference_start|>Instantaneously Trained Neural Networks: This paper presents a review of instantaneously trained neural networks (ITNNs). These networks trade learning time for size and, in the basic model, a new hidden node is created for each training sample. Various versions of the corner-classification family of ITNNs, which have found applications in artificial intelligence (AI), are described. Implementation issues are also considered.<|reference_end|>
arxiv
@article{ponnath2006instantaneously, title={Instantaneously Trained Neural Networks}, author={Abhilash Ponnath}, journal={arXiv preprint arXiv:cs/0601129}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601129}, primaryClass={cs.NE cs.AI} }
ponnath2006instantaneously
arxiv-673809
cs/0601130
From Dumb Wireless Sensors to Smart Networks using Network Coding
<|reference_start|>From Dumb Wireless Sensors to Smart Networks using Network Coding: The vision of wireless sensor networks is one of a smart collection of tiny, dumb devices. These motes may be individually cheap, unintelligent, imprecise, and unreliable. Yet they are able to derive strength from numbers, rendering the whole to be strong, reliable and robust. Our approach is to adopt a distributed and randomized mindset and rely on in-network processing and network coding. Our general abstraction is that nodes should act only locally and independently, and the desired global behavior should arise as a collective property of the network. We summarize our work and present how these ideas can be applied for communication and storage in sensor networks.<|reference_end|>
arxiv
@article{dimakis2006from, title={From Dumb Wireless Sensors to Smart Networks using Network Coding}, author={A. G. Dimakis, D. Petrovic, K. Ramchandran}, journal={arXiv preprint arXiv:cs/0601130}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601130}, primaryClass={cs.IT cs.NI math.IT} }
dimakis2006from
arxiv-673810
cs/0601131
Scalable Algorithms for Aggregating Disparate Forecasts of Probability
<|reference_start|>Scalable Algorithms for Aggregating Disparate Forecasts of Probability: In this paper, computational aspects of the panel aggregation problem are addressed. Motivated primarily by applications of risk assessment, an algorithm is developed for aggregating large corpora of internally incoherent probability assessments. The algorithm is characterized by a provable performance guarantee, and is demonstrated to be orders of magnitude faster than existing tools when tested on several real-world data-sets. In addition, unexpected connections between research in risk assessment and wireless sensor networks are exposed, as several key ideas are illustrated to be useful in both fields.<|reference_end|>
arxiv
@article{predd2006scalable, title={Scalable Algorithms for Aggregating Disparate Forecasts of Probability}, author={Joel B. Predd, Sanjeev R. Kulkarni, Daniel N. Osherson, and H. Vincent Poor}, journal={arXiv preprint arXiv:cs/0601131}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601131}, primaryClass={cs.AI cs.DC cs.IT math.IT} }
predd2006scalable
arxiv-673811
cs/0601132
A Study on the Global Convergence Time Complexity of Estimation of Distribution Algorithms
<|reference_start|>A Study on the Global Convergence Time Complexity of Estimation of Distribution Algorithms: The Estimation of Distribution Algorithm is a new class of population based search methods in which a probabilistic model of individuals is estimated based on the high-quality individuals and used to generate the new individuals. In this paper we compute 1) some upper bounds on the number of iterations required for global convergence of EDA and 2) the exact number of iterations needed for EDA to converge to global optima.<|reference_end|>
arxiv
@article{rastegar2006a, title={A Study on the Global Convergence Time Complexity of Estimation of Distribution Algorithms}, author={R. Rastegar, M. R. Meybodi}, journal={arXiv preprint arXiv:cs/0601132}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601132}, primaryClass={cs.AI cs.NE} }
rastegar2006a
arxiv-673812
cs/0601133
Dense Linear Algebra over Finite Fields: the FFLAS and FFPACK packages
<|reference_start|>Dense Linear Algebra over Finite Fields: the FFLAS and FFPACK packages: In the past two decades, some major efforts have been made to reduce exact (e.g. integer, rational, polynomial) linear algebra problems to matrix multiplication in order to provide algorithms with optimal asymptotic complexity. To provide efficient implementations of such algorithms one needs to be careful with the underlying arithmetic. It is well known that modular techniques such as the Chinese remainder algorithm or the p-adic lifting allow very good practical performance, especially when word size arithmetic is used. Therefore, finite field arithmetic becomes an important core for efficient exact linear algebra libraries. In this paper, we study high performance implementations of basic linear algebra routines over word size prime fields: especially the matrix multiplication; our goal being to provide an exact alternative to the numerical BLAS library. We show that this is made possible by a careful combination of numerical computations and asymptotically faster algorithms. Our kernel has several symbolic linear algebra applications enabled by diverse matrix multiplication reductions: symbolic triangularization, system solving, determinant and matrix inverse implementations are thus studied.<|reference_end|>
arxiv
@article{dumas2006dense, title={Dense Linear Algebra over Finite Fields: the FFLAS and FFPACK packages}, author={Jean-Guillaume Dumas (LJK), Pascal Giorgi (LIRMM), Clément Pernet (INRIA Rhône-Alpes / LIG Laboratoire d'Informatique de Grenoble)}, journal={ACM Transactions on Mathematical Software 35, 3 (2009) article 19}, year={2006}, archivePrefix={arXiv}, eprint={cs/0601133}, primaryClass={cs.SC} }
dumas2006dense
arxiv-673813
cs/0601134
Combining decision procedures for the reals
<|reference_start|>Combining decision procedures for the reals: We address the general problem of determining the validity of boolean combinations of equalities and inequalities between real-valued expressions. In particular, we consider methods of establishing such assertions using only restricted forms of distributivity. At the same time, we explore ways in which "local" decision or heuristic procedures for fragments of the theory of the reals can be amalgamated into global ones. Let Tadd[Q] be the first-order theory of the real numbers in the language of ordered groups, with negation, a constant 1, and function symbols for multiplication by rational constants. Let Tmult[Q] be the analogous theory for the multiplicative structure, and let T[Q] be the union of the two. We show that although T[Q] is undecidable, the universal fragment of T[Q] is decidable. We also show that terms of T[Q] can fruitfully be put in a normal form. We prove analogous results for theories in which Q is replaced, more generally, by suitable subfields F of the reals. Finally, we consider practical methods of establishing quantifier-free validities that approximate our (impractical) decidability results.<|reference_end|>
arxiv
@article{avigad2006combining, title={Combining decision procedures for the reals}, author={Jeremy Avigad and Harvey Friedman}, journal={Logical Methods in Computer Science, Volume 2, Issue 4 (October 18, 2006) lmcs:2240}, year={2006}, doi={10.2168/LMCS-2(4:4)2006}, archivePrefix={arXiv}, eprint={cs/0601134}, primaryClass={cs.LO} }
avigad2006combining
arxiv-673814
cs/0601135
Strategies of Loop Recombination in Ciliates
<|reference_start|>Strategies of Loop Recombination in Ciliates: Gene assembly in ciliates is an extremely involved DNA transformation process, which transforms a nucleus, the micronucleus, to another functionally different nucleus, the macronucleus. In this paper we characterize which loop recombination operations (one of the three types of molecular operations that accomplish gene assembly) can possibly be applied in the transformation of a given gene from its micronuclear form to its macronuclear form. We also characterize in which order these loop recombination operations are applicable. This is done in the abstract and more general setting of so-called legal strings.<|reference_end|>
arxiv
@article{brijder2006strategies, title={Strategies of Loop Recombination in Ciliates}, author={Robert Brijder, Hendrik Jan Hoogeboom, Michael Muskulus}, journal={Discrete Applied Mathematics, v. 156, 1736-1753, 2008}, year={2006}, doi={10.1016/j.dam.2007.08.032}, number={LIACS Technical Report 2006-01}, archivePrefix={arXiv}, eprint={cs/0601135}, primaryClass={cs.LO q-bio.GN} }
brijder2006strategies
arxiv-673815
cs/0602001
Query-Monotonic Turing Reductions
<|reference_start|>Query-Monotonic Turing Reductions: We study reductions that limit the extreme adaptivity of Turing reductions. In particular, we study reductions that make a rapid, structured progression through the set to which they are reducing: Each query is strictly longer (shorter) than the previous one. We call these reductions query-increasing (query-decreasing) Turing reductions. We also study query-nonincreasing (query-nondecreasing) Turing reductions. These are Turing reductions in which the sequence of query lengths is nonincreasing (nondecreasing). We ask whether these restrictions in fact limit the power of reductions. We prove that query-increasing and query-decreasing Turing reductions are incomparable with (that is, are neither strictly stronger than nor strictly weaker than) truth-table reductions and are strictly weaker than Turing reductions. In addition, we prove that query-nonincreasing and query-nondecreasing Turing reductions are strictly stronger than truth-table reductions and strictly weaker than Turing reductions. Despite the fact that we prove query-increasing and query-decreasing Turing reductions to be, in the general case, strictly weaker than Turing reductions, we identify a broad class of sets A for which any set that Turing reduces to A will also reduce to A via both query-increasing and query-decreasing Turing reductions. In particular, this holds for all tight paddable sets, where a set is said to be tight paddable exactly if it is paddable via a function whose output length is bounded tightly both from above and from below in the length of the input. We prove that many natural NP-complete problems such as satisfiability, clique, and vertex cover are tight paddable.<|reference_end|>
arxiv
@article{hemaspaandra2006query-monotonic, title={Query-Monotonic Turing Reductions}, author={Lane A. Hemaspaandra and Mayur Thakur}, journal={arXiv preprint arXiv:cs/0602001}, year={2006}, number={URCS-TR-818}, archivePrefix={arXiv}, eprint={cs/0602001}, primaryClass={cs.CC} }
hemaspaandra2006query-monotonic
arxiv-673816
cs/0602002
Simulating Network Influence Algorithms Using Particle-Swarms: PageRank and PageRank-Priors
<|reference_start|>Simulating Network Influence Algorithms Using Particle-Swarms: PageRank and PageRank-Priors: A particle-swarm is a set of indivisible processing elements that traverse a network in order to perform a distributed function. This paper will describe a particular implementation of a particle-swarm that can simulate the behavior of the popular PageRank algorithm in both its {\it global-rank} and {\it relative-rank} incarnations. PageRank is compared against the particle-swarm method on artificially generated scale-free networks of 1,000 nodes constructed using a common gamma value, $\gamma = 2.5$. The running time of the particle-swarm algorithm is $O(|P|+|P|t)$ where $|P|$ is the size of the particle population and $t$ is the number of particle propagation iterations. The particle-swarm method is shown to be useful due to its ease of extension and running time.<|reference_end|>
arxiv
@article{rodriguez2006simulating, title={Simulating Network Influence Algorithms Using Particle-Swarms: PageRank and PageRank-Priors}, author={Marko A. Rodriguez and Johan Bollen}, journal={arXiv preprint arXiv:cs/0602002}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602002}, primaryClass={cs.DS} }
rodriguez2006simulating
arxiv-673817
cs/0602003
Watermarking Using Decimal Sequences
<|reference_start|>Watermarking Using Decimal Sequences: This paper introduces the use of decimal sequences in a code division multiple access (CDMA) based watermarking system to hide information for authentication in black and white images. Matlab version 6.5 was used to implement the algorithms discussed in this paper. The advantage of using d-sequences over PN sequences is that one can choose from a variety of prime numbers which provides a more flexible system.<|reference_end|>
arxiv
@article{mandhani2006watermarking, title={Watermarking Using Decimal Sequences}, author={Navneet Mandhani and Subhash Kak}, journal={Cryptologia, vol. 29, pp. 50-58, 2005}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602003}, primaryClass={cs.CR} }
mandhani2006watermarking
arxiv-673818
cs/0602004
Conjunctive Queries over Trees
<|reference_start|>Conjunctive Queries over Trees: We study the complexity and expressive power of conjunctive queries over unranked labeled trees represented using a variety of structure relations such as ``child'', ``descendant'', and ``following'' as well as unary relations for node labels. We establish a framework for characterizing structures representing trees for which conjunctive queries can be evaluated efficiently. Then we completely chart the tractability frontier of the problem and establish a dichotomy theorem for our axis relations, i.e., we find all subset-maximal sets of axes for which query evaluation is in polynomial time and show that for all other cases, query evaluation is NP-complete. All polynomial-time results are obtained immediately using the proof techniques from our framework. Finally, we study the expressiveness of conjunctive queries over trees and show that for each conjunctive query, there is an equivalent acyclic positive query (i.e., a set of acyclic conjunctive queries), but that in general this query is not of polynomial size.<|reference_end|>
arxiv
@article{gottlob2006conjunctive, title={Conjunctive Queries over Trees}, author={Georg Gottlob, Christoph Koch, Klaus U. Schulz}, journal={arXiv preprint arXiv:cs/0602004}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602004}, primaryClass={cs.DB cs.AI cs.CC cs.LO} }
gottlob2006conjunctive
arxiv-673819
cs/0602005
A library of Taylor models for PVS automatic proof checker
<|reference_start|>A library of Taylor models for PVS automatic proof checker: We present in this paper a library to compute with Taylor models, a technique extending interval arithmetic to reduce decorrelation and to solve differential equations. Numerical software usually produces only numerical results. Our library can be used to produce both results and proofs. As seen during the development of Fermat's last theorem reported by Aczel 1996, providing a proof is not sufficient. Our library provides a proof that has been thoroughly scrutinized by a trustworthy and tireless assistant. PVS is an automatic proof assistant that has been fairly developed and used and that has no internal connection with interval arithmetic or Taylor models. We built our library so that PVS validates each result as it is produced. As producing and validating a proof is, and will certainly remain, a bigger task than just producing a numerical result, our library will never be a replacement for imperative implementations of Taylor models such as Cosy Infinity. Our library should mainly be used to validate small to medium size results that are involved in safety or life critical applications.<|reference_end|>
arxiv
@article{cháves2006a, title={A library of Taylor models for PVS automatic proof checker}, author={Francisco Cháves (LIP), Marc Daumas (LIRMM, LP2A)}, journal={arXiv preprint arXiv:cs/0602005}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602005}, primaryClass={cs.MS} }
cháves2006a
arxiv-673820
cs/0602006
A Visual Query Language for Complex-Value Databases
<|reference_start|>A Visual Query Language for Complex-Value Databases: In this paper, a visual language, VCP, for queries on complex-value databases is proposed. The main strength of the new language is that it is purely visual: (i) It has no notion of variable, quantification, partiality, join, pattern matching, regular expression, recursion, or any other construct proper to logical, functional, or other database query languages and (ii) has a very natural, strong, and intuitive design metaphor. The main operation is that of copying and pasting in a schema tree. We show that despite its simplicity, VCP precisely captures complex-value algebra without powerset, or equivalently, monad algebra with union and difference. Thus, its expressive power is precisely that of the language that is usually considered to play the role of relational algebra for complex-value databases.<|reference_end|>
arxiv
@article{koch2006a, title={A Visual Query Language for Complex-Value Databases}, author={Christoph Koch}, journal={arXiv preprint arXiv:cs/0602006}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602006}, primaryClass={cs.DB cs.HC} }
koch2006a
arxiv-673821
cs/0602007
Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data
<|reference_start|>Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data: We provide formal definitions and efficient secure techniques for - turning noisy information into keys usable for any cryptographic application, and, in particular, - reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives: a "fuzzy extractor" reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A "secure sketch" produces public information about its input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of ``closeness'' of input data, such as Hamming distance, edit distance, and set difference.<|reference_end|>
arxiv
@article{dodis2006fuzzy, title={Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data}, author={Yevgeniy Dodis and Rafail Ostrovsky and Leonid Reyzin and Adam Smith}, journal={SIAM Journal on Computing, 38(1):97-139, 2008}, year={2006}, doi={10.1137/060651380}, archivePrefix={arXiv}, eprint={cs/0602007}, primaryClass={cs.CR cs.IT math.IT} }
dodis2006fuzzy
arxiv-673822
cs/0602008
Demand Analysis with Partial Predicates
<|reference_start|>Demand Analysis with Partial Predicates: In order to alleviate the inefficiencies caused by the interaction of the logic and functional sides, integrated languages may take advantage of \emph{demand} information -- i.e. knowing in advance which computations are needed and, to which extent, in a particular context. This work studies \emph{demand analysis} -- which is closely related to \emph{backwards strictness analysis} -- in a semantic framework of \emph{partial predicates}, which in turn are constructive realizations of ideals in a domain. This will allow us to give a concise, unified presentation of demand analysis, to relate it to other analyses based on abstract interpretation or strictness logics, to provide some hints for the implementation, and, more importantly, to prove the soundness of our analysis based on \emph{demand equations}. There are also some innovative results. One of them is that a set constraint-based analysis has been derived in a stepwise manner using ideas taken from the area of program transformation. The other one is the possibility of using program transformation itself to perform the analysis, especially in those domains of properties where algorithms based on constraint solving are too weak.<|reference_end|>
arxiv
@article{marino2006demand, title={Demand Analysis with Partial Predicates}, author={Julio Marino, Angel Herranz and Juan Jose Moreno-Navarro}, journal={arXiv preprint arXiv:cs/0602008}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602008}, primaryClass={cs.PL cs.SC} }
marino2006demand
arxiv-673823
cs/0602009
Efficient Teamwork
<|reference_start|>Efficient Teamwork: Our goal is to solve both problems of adverse selection and moral hazard for multi-agent projects. In our model, each selected agent can work according to his private "capability tree". This means a process involving hidden actions, hidden chance events and hidden costs in a dynamic manner, and providing contractible consequences which affect each other's working process and the outcome of the project. We will construct a mechanism that induces the agents to truthfully reveal their capability trees and chance events and to follow the instructions about their hidden decisions. This enables the planner to select the optimal subset of agents and obtain the efficient joint execution. We will construct another mechanism that is collusion-resistant but implements an only approximately efficient outcome. The latter mechanism is widely applicable, and the major application details will be elaborated.<|reference_end|>
arxiv
@article{csóka2006efficient, title={Efficient Teamwork}, author={Endre Csóka}, journal={arXiv preprint arXiv:cs/0602009}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602009}, primaryClass={cs.OH} }
csóka2006efficient
arxiv-673824
cs/0602010
Reducing Tile Complexity for Self-Assembly Through Temperature Programming
<|reference_start|>Reducing Tile Complexity for Self-Assembly Through Temperature Programming: We consider the tile self-assembly model and how tile complexity can be eliminated by permitting the temperature of the self-assembly system to be adjusted throughout the assembly process. To do this, we propose novel techniques for designing tile sets that permit an arbitrary length $m$ binary number to be encoded into a sequence of $O(m)$ temperature changes such that the tile set uniquely assembles a supertile that precisely encodes the corresponding binary number. As an application, we show how this provides a general tile set of size O(1) that is capable of uniquely assembling essentially any $n\times n$ square, where the assembled square is determined by a temperature sequence of length $O(\log n)$ that encodes a binary description of $n$. This yields an important decrease in tile complexity from the required $\Omega(\frac{\log n}{\log\log n})$ for almost all $n$ when the temperature of the system is fixed. We further show that for almost all $n$, no tile system can simultaneously achieve both $o(\log n)$ temperature complexity and $o(\frac{\log n}{\log\log n})$ tile complexity, showing that both versions of an optimal square building scheme have been discovered. This work suggests that temperature change can constitute a natural, dynamic method for providing input to self-assembly systems that is potentially superior to the current technique of designing large tile sets with specific inputs hardwired into the tileset.<|reference_end|>
arxiv
@article{kao2006reducing, title={Reducing Tile Complexity for Self-Assembly Through Temperature Programming}, author={Ming-Yang Kao, Robert Schweller}, journal={arXiv preprint arXiv:cs/0602010}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602010}, primaryClass={cs.CC} }
kao2006reducing
arxiv-673825
cs/0602011
The intuitionistic fragment of computability logic at the propositional level
<|reference_start|>The intuitionistic fragment of computability logic at the propositional level: This paper presents a soundness and completeness proof for propositional intuitionistic calculus with respect to the semantics of computability logic. The latter interprets formulas as interactive computational problems, formalized as games between a machine and its environment. Intuitionistic implication is understood as algorithmic reduction in the weakest possible -- and hence most natural -- sense, disjunction and conjunction as deterministic-choice combinations of problems (disjunction = machine's choice, conjunction = environment's choice), and "absurd" as a computational problem of universal strength. See http://www.cis.upenn.edu/~giorgi/cl.html for a comprehensive online source on computability logic.<|reference_end|>
arxiv
@article{japaridze2006the, title={The intuitionistic fragment of computability logic at the propositional level}, author={Giorgi Japaridze}, journal={Annals of Pure and Applied Logic 147 (2007), pp. 187-227}, year={2006}, doi={10.1016/j.apal.2007.05.001}, archivePrefix={arXiv}, eprint={cs/0602011}, primaryClass={cs.LO cs.AI math.LO} }
japaridze2006the
arxiv-673826
cs/0602012
Wreath Products in Stream Cipher Design
<|reference_start|>Wreath Products in Stream Cipher Design: The paper develops a novel approach to stream cipher design: Both the state update function and the output function of the corresponding pseudorandom generators are compositions of arithmetic and bitwise logical operations, which are standard instructions of modern microprocessors. Moreover, both the state update function and the output function are being modified dynamically during the encryption. Also, these compositions could be keyed, so the only information available to an attacker is that these functions belong to some exponentially large class. The paper shows that under rather loose conditions the output sequence is uniformly distributed, achieves maximum period length and has high linear complexity and high $\ell$-error linear complexity. Ciphers of this kind are flexible: One could choose a suitable combination of instructions to obtain due performance without affecting the quality of the output sequence. Finally, some evidence is given that a key recovery problem for (reasonably designed) stream ciphers of this kind is intractable up to plausible conjectures.<|reference_end|>
arxiv
@article{anashin2006wreath, title={Wreath Products in Stream Cipher Design}, author={Vladimir Anashin}, journal={"Applied Algebraic Dynamics", volume 49 of de Gruyter Expositions in Mathematics, 2009, 269-304}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602012}, primaryClass={cs.CR} }
anashin2006wreath
arxiv-673827
cs/0602013
An Optimal Distributed Edge-Biconnectivity Algorithm
<|reference_start|>An Optimal Distributed Edge-Biconnectivity Algorithm: We describe a synchronous distributed algorithm which identifies the edge-biconnected components of a connected network. It requires a leader, and uses messages of size O(log |V|). The main idea is to preorder a BFS spanning tree, and then to efficiently compute least common ancestors so as to mark cycle edges. This algorithm takes O(Diam) time and uses O(|E|) messages. Furthermore, we show that no correct singly-initiated edge-biconnectivity algorithm can beat either bound on any graph by more than a constant factor. We also describe a near-optimal local algorithm for edge-biconnectivity.<|reference_end|>
arxiv
@article{pritchard2006an, title={An Optimal Distributed Edge-Biconnectivity Algorithm}, author={David Pritchard}, journal={arXiv preprint arXiv:cs/0602013}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602013}, primaryClass={cs.DC} }
pritchard2006an
arxiv-673828
cs/0602014
Game theoretic aspects of distributed spectral coordination with application to DSL networks
<|reference_start|>Game theoretic aspects of distributed spectral coordination with application to DSL networks: In this paper we use game theoretic techniques to study the value of cooperation in distributed spectrum management problems. We show that the celebrated iterative water-filling algorithm is subject to the prisoner's dilemma and therefore can lead to severe degradation of the achievable rate region in an interference channel environment. We also provide thorough analysis of a simple two bands near-far situation where we are able to provide closed form tight bounds on the rate region of both fixed margin iterative water filling (FM-IWF) and dynamic frequency division multiplexing (DFDM) methods. This is the only case where such analytic expressions are known and all previous studies included only simulated results of the rate region. We then propose an alternative algorithm that alleviates some of the drawbacks of the IWF algorithm in near-far scenarios relevant to DSL access networks. We also provide experimental analysis based on measured DSL channels of both algorithms as well as the centralized optimum spectrum management.<|reference_end|>
arxiv
@article{laufer2006game, title={Game theoretic aspects of distributed spectral coordination with application to DSL networks}, author={Amir Laufer, Amir Leshem, Hagit Messer}, journal={arXiv preprint arXiv:cs/0602014}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602014}, primaryClass={cs.IT math.IT} }
laufer2006game
arxiv-673829
cs/0602015
On the Asymptotic Performance of Multiple Antenna Channels with Fast Channel Feedback
<|reference_start|>On the Asymptotic Performance of Multiple Antenna Channels with Fast Channel Feedback: In this paper, we analyze the asymptotic performance of multiple antenna channels where the transmitter has either perfect or finite bit channel state information. Using the diversity-multiplexing tradeoff to characterize the system performance, we demonstrate that channel feedback can fundamentally change the system behavior. Even one bit of information can increase the diversity order of the system compared to the system with no transmitter information. In addition, as the amount of channel information at the transmitter increases, the diversity order for each multiplexing gain increases and goes to infinity for perfect transmitter information. The major reason for the diversity order gain is a "location-dependent" temporal power control, which adapts the power control strategy based on the average channel conditions.<|reference_end|>
arxiv
@article{khoshnevis2006on, title={On the Asymptotic Performance of Multiple Antenna Channels with Fast Channel Feedback}, author={Ahmad Khoshnevis and Ashutosh Sabharwal}, journal={arXiv preprint arXiv:cs/0602015}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602015}, primaryClass={cs.IT math.IT} }
khoshnevis2006on
arxiv-673830
cs/0602016
Finding total unimodularity in optimization problems solved by linear programs
<|reference_start|>Finding total unimodularity in optimization problems solved by linear programs: A popular approach in combinatorial optimization is to model problems as integer linear programs. Ideally, the relaxed linear program would have only integer solutions, which happens for instance when the constraint matrix is totally unimodular. Still, sometimes it is possible to build an integer solution with the same cost from the fractional solution. Examples are two scheduling problems and the single disk prefetching/caching problem. We show that problems such as the three previously mentioned can be separated into two subproblems: (1) finding an optimal feasible set of slots, and (2) assigning the jobs or pages to the slots. It is straightforward to show that the latter can be solved greedily. We are able to solve the former with a totally unimodular linear program, from which we obtain simple combinatorial algorithms with improved worst case running time.<|reference_end|>
arxiv
@article{durr2006finding, title={Finding total unimodularity in optimization problems solved by linear programs}, author={Christoph Durr and Mathilde Hurand}, journal={arXiv preprint arXiv:cs/0602016}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602016}, primaryClass={cs.DS cs.DC} }
durr2006finding
arxiv-673831
cs/0602017
Quasi-Linear Soft Tissue Models Revisited
<|reference_start|>Quasi-Linear Soft Tissue Models Revisited: Incompressibility, nonlinear deformation under stress and viscoelasticity are the fingerprint of soft tissue mechanical behavior. In order to model soft tissues appropriately, we must strive to meet these requirements. In this work we revisit different quasi-linear soft tissue model possibilities in an attempt to fulfill this commitment.<|reference_end|>
arxiv
@article{ortiz2006quasi-linear, title={Quasi-Linear Soft Tissue Models Revisited}, author={J. S. Espinoza Ortiz, Gilson A. Giraldi, E.A. de Souza Neto, Raul A. Feijóo}, journal={arXiv preprint arXiv:cs/0602017}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602017}, primaryClass={cs.OH} }
ortiz2006quasi-linear
arxiv-673832
cs/0602018
Improving the CSIEC Project and Adapting It to the English Teaching and Learning in China
<|reference_start|>Improving the CSIEC Project and Adapting It to the English Teaching and Learning in China: In this paper, after a short review of the CSIEC project initiated by us in 2003, we present the continuing development and improvement of the CSIEC project in detail, including the design of five new Microsoft agent characters representing different virtual chatting partners and the limitation of simulated dialogs in specific practical scenarios like a graduate job application interview, then briefly analyze the actual conditions and features of its application field: web-based English education in China. Finally we introduce our efforts to adapt this system to the requirements of English teaching and learning in China and point out the work to be done next.<|reference_end|>
arxiv
@article{jia2006improving, title={Improving the CSIEC Project and Adapting It to the English Teaching and Learning in China}, author={Jiyou Jia, Shufen Hou, Weichao Chen}, journal={arXiv preprint arXiv:cs/0602018}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602018}, primaryClass={cs.CY cs.AI cs.CL cs.HC cs.MA} }
jia2006improving
arxiv-673833
cs/0602019
Adaptive Channel Allocation Spectrum Etiquette for Cognitive Radio Networks
<|reference_start|>Adaptive Channel Allocation Spectrum Etiquette for Cognitive Radio Networks: In this work, we propose a game theoretic framework to analyze the behavior of cognitive radios for distributed adaptive channel allocation. We define two different objective functions for the spectrum sharing games, which capture the utility of selfish users and cooperative users, respectively. Based on the utility definition for cooperative users, we show that the channel allocation problem can be formulated as a potential game, and thus converges to a deterministic channel allocation Nash equilibrium point. Alternatively, a no-regret learning implementation is proposed for both scenarios and it is shown to have similar performance with the potential game when cooperation is enforced, but with a higher variability across users. The no-regret learning formulation is particularly useful to accommodate selfish users. Non-cooperative learning games have the advantage of a very low overhead for information exchange in the network. We show that cooperation based spectrum sharing etiquette improves the overall network performance at the expense of an increased overhead required for information exchange.<|reference_end|>
arxiv
@article{nie2006adaptive, title={Adaptive Channel Allocation Spectrum Etiquette for Cognitive Radio Networks}, author={Nie Nie and Cristina Comaniciu}, journal={arXiv preprint arXiv:cs/0602019}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602019}, primaryClass={cs.GT} }
nie2006adaptive
arxiv-673834
cs/0602020
Inter-Block Permuted Turbo Codes
<|reference_start|>Inter-Block Permuted Turbo Codes: The structure and size of the interleaver used in a turbo code critically affect the distance spectrum and the covariance property of a component decoder's information input and soft output. This paper introduces a new class of interleavers, the inter-block permutation (IBP) interleavers, that can be built on any existing "good" block-wise interleaver by simply adding an IBP stage. The IBP interleavers reduce the above-mentioned correlation and increase the effective interleaving size. The increased effective interleaving size improves the distance spectrum while the reduced covariance enhances the iterative decoder's performance. Moreover, the structure of the IBP(-interleaved) turbo codes (IBPTC) is naturally fit for high rate applications that necessitate parallel decoding. We present some useful bounds and constraints associated with the IBPTC that can be used as design guidelines. The corresponding codeword weight upper bounds for weight-2 and weight-4 input sequences are derived. Based on some of the design guidelines, we propose a simple IBP algorithm and show that the associated IBPTC yields 0.3 to 1.2 dB performance gain, or equivalently, an IBPTC renders the same performance with a much reduced interleaving delay. The EXIT and covariance behaviors provide another numerical proof of the superiority of the proposed IBPTC.<|reference_end|>
arxiv
@article{zheng2006inter-block, title={Inter-Block Permuted Turbo Codes}, author={Yan-Xiu Zheng and Yu T. Su}, journal={arXiv preprint arXiv:cs/0602020}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602020}, primaryClass={cs.IT math.IT} }
zheng2006inter-block
arxiv-673835
cs/0602021
Using Domain Knowledge in Evolutionary System Identification
<|reference_start|>Using Domain Knowledge in Evolutionary System Identification: Two examples of Evolutionary System Identification are presented to highlight the importance of incorporating Domain Knowledge: the discovery of an analytical indentation law in Structural Mechanics using constrained Genetic Programming, and the identification of the repartition of underground velocities in Seismic Prospection. Critical issues for successful ESI are discussed in the light of these results.<|reference_end|>
arxiv
@article{schoenauer2006using, title={Using Domain Knowledge in Evolutionary System Identification}, author={Marc Schoenauer, Michèle Sebag (LMS)}, journal={arXiv preprint arXiv:cs/0602021}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602021}, primaryClass={cs.AI math.AP} }
schoenauer2006using
arxiv-673836
cs/0602022
Avoiding the Bloat with Stochastic Grammar-based Genetic Programming
<|reference_start|>Avoiding the Bloat with Stochastic Grammar-based Genetic Programming: The application of Genetic Programming to the discovery of empirical laws is often impaired by the huge size of the search space, and consequently by the computer resources needed. In many cases, the extreme demand for memory and CPU is due to the massive growth of non-coding segments, the introns. The paper presents a new program evolution framework which combines distribution-based evolution in the PBIL spirit with grammar-based genetic programming; the information is stored as a probability distribution on the grammar rules, rather than in a population. Experiments on a real-world like problem show that this approach gives a practical solution to the problem of intron growth.<|reference_end|>
arxiv
@article{ratle2006avoiding, title={Avoiding the Bloat with Stochastic Grammar-based Genetic Programming}, author={Alain Ratle (LMS), Michèle Sebag (LMS)}, journal={arXiv preprint arXiv:cs/0602022}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602022}, primaryClass={cs.AI} }
ratle2006avoiding
arxiv-673837
cs/0602023
Information theory and Thermodynamics
<|reference_start|>Information theory and Thermodynamics: A communication theory for a transmitter broadcasting to many receivers is presented. In this case energetic considerations cannot be neglected as in Shannon theory. It is shown that, when energy is assigned to the information bit, information theory complies with classical thermodynamics and is part of it. To provide a thermodynamic theory of communication it is necessary to define equilibrium for informatics systems that are not in thermal equilibrium and to calculate temperature, heat, and entropy in accordance with the Clausius inequality. It is shown that for a binary file the temperature is proportional to the bit energy and that information is thermodynamic entropy. Equilibrium exists in random files that cannot be compressed. Thermodynamic bounds on the computing power of a physical device, and the maximum information that an antenna can broadcast are calculated.<|reference_end|>
arxiv
@article{kafri2006information, title={Information theory and Thermodynamics}, author={Oded Kafri}, journal={arXiv preprint arXiv:cs/0602023}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602023}, primaryClass={cs.IT math.IT} }
kafri2006information
arxiv-673838
cs/0602024
Algorithmic correspondence and completeness in modal logic. I. The core algorithm SQEMA
<|reference_start|>Algorithmic correspondence and completeness in modal logic I The core algorithm SQEMA: Modal formulae express monadic second-order properties on Kripke frames, but in many important cases these have first-order equivalents. Computing such equivalents is important for both logical and computational reasons. On the other hand, canonicity of modal formulae is important, too, because it implies frame-completeness of logics axiomatized with canonical formulae. Computing a first-order equivalent of a modal formula amounts to elimination of second-order quantifiers. Two algorithms have been developed for second-order quantifier elimination: SCAN, based on constraint resolution, and DLS, based on a logical equivalence established by Ackermann. In this paper we introduce a new algorithm, SQEMA, for computing first-order equivalents (using a modal version of Ackermann's lemma) and, moreover, for proving canonicity of modal formulae. Unlike SCAN and DLS, it works directly on modal formulae, thus avoiding Skolemization and the subsequent problem of unskolemization. We present the core algorithm and illustrate it with some examples. We then prove its correctness and the canonicity of all formulae on which the algorithm succeeds. We show that it succeeds not only on all Sahlqvist formulae, but also on the larger class of inductive formulae, introduced in our earlier papers. Thus, we develop a purely algorithmic approach to proving canonical completeness in modal logic and, in particular, establish one of the most general completeness results in modal logic so far.<|reference_end|>
arxiv
@article{conradie2006algorithmic, title={Algorithmic correspondence and completeness in modal logic. I. The core algorithm SQEMA}, author={Willem Conradie, Valentin Goranko and Dimiter Vakarelov}, journal={Logical Methods in Computer Science, Volume 2, Issue 1 (March 7, 2006) lmcs:2259}, year={2006}, doi={10.2168/LMCS-2(1:5)2006}, archivePrefix={arXiv}, eprint={cs/0602024}, primaryClass={cs.LO} }
conradie2006algorithmic
arxiv-673839
cs/0602025
On local symbolic approximation and resolution of ODEs using Implicit Function Theorem
<|reference_start|>On local symbolic approximation and resolution of ODEs using Implicit Function Theorem: In this work the implicit function theorem is used to search for local symbolic resolutions of differential equations. General results of existence for first-order equations are proven and some examples, one related to cavitation in a fluid, are developed. These examples seem to show that local approximation of nonlinear differential equations can give useful information about the symbolic form of possible solutions, and, in case a global solution is known, the local accuracy of the approximation can be good.<|reference_end|>
arxiv
@article{argentini2006on, title={On local symbolic approximation and resolution of ODEs using Implicit Function Theorem}, author={Gianluca Argentini}, journal={arXiv preprint arXiv:cs/0602025}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602025}, primaryClass={cs.NA math.CA physics.comp-ph} }
argentini2006on
arxiv-673840
cs/0602026
Bulk Scheduling with DIANA Scheduler
<|reference_start|>Bulk Scheduling with DIANA Scheduler: Results from and progress on the development of a Data Intensive and Network Aware (DIANA) Scheduling engine, primarily for data intensive sciences such as physics analysis, are described. Scientific analysis tasks can involve thousands of computing, data handling, and network resources, and the size of the input and output files and the amount of overall storage space allocated to a user can have a significant bearing on the scheduling of data intensive applications. If the input or output files must be retrieved from a remote location, then the time required to transfer the files must also be taken into consideration when scheduling compute resources for the given application. The central problem in this study is the coordinated management of computation and data at multiple locations and not simply data movement. However, this can be a very costly operation and efficient scheduling can be a challenge if compute and data resources are mapped without network cost. We have implemented an adaptive algorithm within the DIANA Scheduler which takes into account data location and size, network performance and computation capability to make efficient global scheduling decisions. DIANA is a performance-aware as well as an economy-guided Meta Scheduler. It iteratively allocates each job to the site that is likely to produce the best performance as well as optimizing the global queue for any remaining pending jobs. Therefore it is equally suitable whether a single job is being submitted or bulk scheduling is being performed. Results suggest that considerable performance improvements are to be gained by adopting the DIANA scheduling approach.<|reference_end|>
arxiv
@article{anjum2006bulk, title={Bulk Scheduling with DIANA Scheduler}, author={Ashiq Anjum, Richard McClatchey, Arshad Ali & Ian Willers}, journal={arXiv preprint arXiv:cs/0602026}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602026}, primaryClass={cs.DC} }
anjum2006bulk
arxiv-673841
cs/0602027
Explaining Constraint Programming
<|reference_start|>Explaining Constraint Programming: We discuss here constraint programming (CP) by using a proof-theoretic perspective. To this end we identify three levels of abstraction. Each level sheds light on the essence of CP. In particular, the highest level allows us to bring CP closer to the computation as deduction paradigm. At the middle level we can explain various constraint propagation algorithms. Finally, at the lowest level we can address the issue of automatic generation and optimization of the constraint propagation algorithms.<|reference_end|>
arxiv
@article{apt2006explaining, title={Explaining Constraint Programming}, author={Krzysztof R. Apt}, journal={arXiv preprint arXiv:cs/0602027}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602027}, primaryClass={cs.PL cs.AI} }
apt2006explaining
arxiv-673842
cs/0602028
Analysis of Belief Propagation for Non-Linear Problems: The Example of CDMA (or: How to Prove Tanaka's Formula)
<|reference_start|>Analysis of Belief Propagation for Non-Linear Problems: The Example of CDMA (or: How to Prove Tanaka's Formula): We consider the CDMA (code-division multiple-access) multi-user detection problem for binary signals and additive white gaussian noise. We propose a spreading sequences scheme based on random sparse signatures, and a detection algorithm based on belief propagation (BP) with linear time complexity. In the new scheme, each user conveys its power onto a finite number of chips l, in the large system limit. We analyze the performances of BP detection and prove that they coincide with the ones of optimal (symbol MAP) detection in the l->\infty limit. In the same limit, we prove that the information capacity of the system converges to Tanaka's formula for random `dense' signatures, thus providing the first rigorous justification of this formula. Apart from being computationally convenient, the new scheme allows for optimization in close analogy with irregular low density parity check code ensembles.<|reference_end|>
arxiv
@article{montanari2006analysis, title={Analysis of Belief Propagation for Non-Linear Problems: The Example of CDMA (or: How to Prove Tanaka's Formula)}, author={Andrea Montanari and David Tse}, journal={arXiv preprint arXiv:cs/0602028}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602028}, primaryClass={cs.IT math.IT} }
montanari2006analysis
arxiv-673843
cs/0602029
Approximate Weighted Farthest Neighbors and Minimum Dilation Stars
<|reference_start|>Approximate Weighted Farthest Neighbors and Minimum Dilation Stars: We provide an efficient reduction from the problem of querying approximate multiplicatively weighted farthest neighbors in a metric space to the unweighted problem. Combining our techniques with core-sets for approximate unweighted farthest neighbors, we show how to find (1+epsilon)-approximate farthest neighbors in time O(log n) per query in D-dimensional Euclidean space for any constants D and epsilon. As an application, we find an O(n log n) expected time algorithm for choosing the center of a star topology network connecting a given set of points, so as to approximately minimize the maximum dilation between any pair of points.<|reference_end|>
arxiv
@article{augustine2006approximate, title={Approximate Weighted Farthest Neighbors and Minimum Dilation Stars}, author={John Augustine and David Eppstein and Kevin A. Wortman}, journal={arXiv preprint arXiv:cs/0602029}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602029}, primaryClass={cs.CG cs.DS} }
augustine2006approximate
arxiv-673844
cs/0602030
Single-Symbol Maximum Likelihood Decodable Linear STBCs
<|reference_start|>Single-Symbol Maximum Likelihood Decodable Linear STBCs: Space-Time block codes (STBC) from Orthogonal Designs (OD) and Co-ordinate Interleaved Orthogonal Designs (CIOD) have been attracting wider attention due to their amenability to fast (single-symbol) ML decoding, and full-rate with full-rank over quasi-static fading channels. However, these codes are instances of single-symbol decodable codes, and it is natural to ask whether there exist codes other than STBCs from ODs and CIODs that allow single-symbol decoding. In this paper, the above question is answered in the affirmative by characterizing all linear STBCs that allow single-symbol ML decoding (not necessarily full-diversity) over quasi-static fading channels, calling them single-symbol decodable designs (SDD). The class SDD includes ODs and CIODs as proper subclasses. Further, among the SDD, the class of those that offer full diversity, called Full-rank SDD (FSDD), is characterized and classified.<|reference_end|>
arxiv
@article{khan2006single-symbol, title={Single-Symbol Maximum Likelihood Decodable Linear STBCs}, author={Md. Zafar Ali Khan and B. Sundar Rajan}, journal={arXiv preprint arXiv:cs/0602030}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602030}, primaryClass={cs.IT math.IT} }
khan2006single-symbol
arxiv-673845
cs/0602031
Classifying Signals with Local Classifiers
<|reference_start|>Classifying Signals with Local Classifiers: This paper deals with the problem of classifying signals. A new method for building so-called local classifiers and local features is presented. The method is a combination of the lifting scheme and support vector machines. Its main aim is to produce effective and yet comprehensible classifiers that would help in understanding processes hidden behind classified signals. To illustrate the method, we present the results obtained on an artificial and a real dataset.<|reference_end|>
arxiv
@article{jakuczun2006classifying, title={Classifying Signals with Local Classifiers}, author={Wit Jakuczun}, journal={arXiv preprint arXiv:cs/0602031}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602031}, primaryClass={cs.AI} }
jakuczun2006classifying
arxiv-673846
cs/0602032
Finite-State Dimension and Real Arithmetic
<|reference_start|>Finite-State Dimension and Real Arithmetic: We use entropy rates and Schur concavity to prove that, for every integer k >= 2, every nonzero rational number q, and every real number alpha, the base-k expansions of alpha, q+alpha, and q*alpha all have the same finite-state dimension and the same finite-state strong dimension. This extends, and gives a new proof of, Wall's 1949 theorem stating that the sum or product of a nonzero rational number and a Borel normal number is always Borel normal.<|reference_end|>
arxiv
@article{doty2006finite-state, title={Finite-State Dimension and Real Arithmetic}, author={David Doty, Jack H. Lutz, Satyadev Nandakumar}, journal={arXiv preprint arXiv:cs/0602032}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602032}, primaryClass={cs.CC cs.IT math.IT} }
doty2006finite-state
arxiv-673847
cs/0602033
Self-stabilization of Circular Arrays of Automata
<|reference_start|>Self-stabilization of Circular Arrays of Automata: [Gacs, Kurdiumov, Levin, 78] proposed simple one-dimensional cellular automata with 2 states. In an infinite array they are self-stabilizing: if all but a finite minority of automata are in the same state, the minority states disappear. Implicit in the paper was a stronger result that a sufficiently small minority of states vanish even in a finite circular array. The following note makes this strengthening explicit.<|reference_end|>
arxiv
@article{levin2006self-stabilization, title={Self-stabilization of Circular Arrays of Automata}, author={Leonid A. Levin}, journal={Theoretical Computer Science, 235(1):143-144, 2000}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602033}, primaryClass={cs.DC cs.DM} }
levin2006self-stabilization
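For readers who want to experiment with the behaviour this record describes, the following Python sketch implements the GKL update rule as it is commonly stated in the secondary literature (conventions for which direction a 0-cell or a 1-cell looks vary across sources; this is our own illustrative rendering, not text from the note, and the function name and example are ours).

```python
def gkl_step(s):
    """One synchronous update of a circular array of 0/1 cells under the
    Gacs-Kurdiumov-Levin rule, as commonly stated in the literature:
    a cell in state 0 takes the majority of itself and the cells 1 and 3
    positions to its left; a cell in state 1 does the same to its right."""
    n = len(s)
    out = []
    for i, c in enumerate(s):
        if c == 0:
            trio = (s[i], s[(i - 1) % n], s[(i - 3) % n])
        else:
            trio = (s[i], s[(i + 1) % n], s[(i + 3) % n])
        out.append(1 if sum(trio) >= 2 else 0)
    return out

# A small minority of 1s in a circular array is erased after a few steps:
# state = [0] * 20 + [1, 1, 0, 1] + [0] * 16; iterate state = gkl_step(state).
```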
arxiv-673848
cs/0602034
A topology visualisation tool for large-scale communications networks
<|reference_start|>A topology visualisation tool for large-scale communications networks: A visualisation tool is presented to facilitate the study on large-scale communications networks. This tool provides a simple and effective way to summarise the topology of a complex network at a coarse level.<|reference_end|>
arxiv
@article{guo2006a, title={A topology visualisation tool for large-scale communications networks}, author={Yuchun Guo, Changjia Chen and Shi Zhou}, journal={ELECTRONICS LETTERS, Vol. 43, No. 10, PP. 597-598, May 2007.}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602034}, primaryClass={cs.NI} }
guo2006a
arxiv-673849
cs/0602035
n-Channel Entropy-Constrained Multiple-Description Lattice Vector Quantization
<|reference_start|>n-Channel Entropy-Constrained Multiple-Description Lattice Vector Quantization: In this paper we derive analytical expressions for the central and side quantizers which, under high-resolution assumptions, minimize the expected distortion of a symmetric multiple-description lattice vector quantization (MD-LVQ) system subject to entropy constraints on the side descriptions for given packet-loss probabilities. We consider a special case of the general n-channel symmetric multiple-description problem where only a single parameter controls the redundancy tradeoffs between the central and the side distortions. Previous work on two-channel MD-LVQ showed that the distortions of the side quantizers can be expressed through the normalized second moment of a sphere. We show here that this is also the case for three-channel MD-LVQ. Furthermore, we conjecture that this is true for the general n-channel MD-LVQ. For a given source, target rate and packet-loss probabilities, we find the optimal number of descriptions and construct the MD-LVQ system that minimizes the expected distortion. We verify theoretical expressions by numerical simulations and show in a practical setup that significant performance improvements can be achieved over state-of-the-art two-channel MD-LVQ by using three-channel MD-LVQ.<|reference_end|>
arxiv
@article{ostergaard2006n-channel, title={n-Channel Entropy-Constrained Multiple-Description Lattice Vector Quantization}, author={Jan Ostergaard, Jesper Jensen, and Richard Heusdens}, journal={arXiv preprint arXiv:cs/0602035}, year={2006}, doi={10.1109/TIT.2006.872847}, archivePrefix={arXiv}, eprint={cs/0602035}, primaryClass={cs.IT math.IT} }
ostergaard2006n-channel
arxiv-673850
cs/0602036
R\'eseaux d'Automates de Caianiello Revisit\'e
<|reference_start|>R\'eseaux d'Automates de Caianiello Revisit\'e: We exhibit a family of neural networks of McCulloch and Pitts of size $2nk+2$ which can be simulated by a neural network of Caianiello of size $2n+2$ and memory length $k$. This simulation allows us to recover one of the results of the following article: [Cycles exponentiels des r\'{e}seaux de Caianiello et compteurs en arithm\'{e}tique redondante, Technique et Science Informatiques Vol. 19, pages 985-1008] on the existence of neural networks of Caianiello of size $2n+2$ and memory length $k$ which describe a cycle of length $k \times 2^{nk}$.<|reference_end|>
arxiv
@article{ndoundam2006r\'{e}seaux, title={R\'{e}seaux d'Automates de Caianiello Revisit\'{e}}, author={Ren\'e Ndoundam, Maurice Tchuente}, journal={arXiv preprint arXiv:cs/0602036}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602036}, primaryClass={cs.NE} }
ndoundam2006r\'{e}seaux
arxiv-673851
cs/0602037
Cryptanalysis of the CFVZ cryptosystem
<|reference_start|>Cryptanalysis of the CFVZ cryptosystem: The paper analyzes a new public key cryptosystem whose security is based on a matrix version of the discrete logarithm problem over an elliptic curve. It is shown that the complexity of solving the underlying problem for the proposed system is dominated by the complexity of solving a fixed number of discrete logarithm problems in the group of an elliptic curve. Using an adapted Pollard rho algorithm it is shown that this problem is essentially as hard as solving one discrete logarithm problem in the group of an elliptic curve.<|reference_end|>
arxiv
@article{climent2006cryptanalysis, title={Cryptanalysis of the CFVZ cryptosystem}, author={J. J. Climent, E. Gorla, J. Rosenthal}, journal={arXiv preprint arXiv:cs/0602037}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602037}, primaryClass={cs.CR} }
climent2006cryptanalysis
arxiv-673852
cs/0602038
Minimum Cost Homomorphisms to Proper Interval Graphs and Bigraphs
<|reference_start|>Minimum Cost Homomorphisms to Proper Interval Graphs and Bigraphs: For graphs $G$ and $H$, a mapping $f: V(G)\rightarrow V(H)$ is a homomorphism of $G$ to $H$ if $uv\in E(G)$ implies $f(u)f(v)\in E(H)$. If, moreover, each vertex $u \in V(G)$ is associated with costs $c_i(u), i \in V(H)$, then the cost of the homomorphism $f$ is $\sum_{u\in V(G)}c_{f(u)}(u)$. For each fixed graph $H$, we have the {\em minimum cost homomorphism problem}, written as MinHOM($H$). The problem is to decide, for an input graph $G$ with costs $c_i(u),$ $u \in V(G), i\in V(H)$, whether there exists a homomorphism of $G$ to $H$ and, if one exists, to find one of minimum cost. Minimum cost homomorphism problems encompass (or are related to) many well studied optimization problems. We describe a dichotomy of the minimum cost homomorphism problems for graphs $H$, with loops allowed. When each connected component of $H$ is either a reflexive proper interval graph or an irreflexive proper interval bigraph, the problem MinHOM($H$) is polynomial time solvable. In all other cases the problem MinHOM($H$) is NP-hard. This solves an open problem from an earlier paper. Along the way, we prove a new characterization of the class of proper interval bigraphs.<|reference_end|>
arxiv
@article{gutin2006minimum, title={Minimum Cost Homomorphisms to Proper Interval Graphs and Bigraphs}, author={G. Gutin, P. Hell, A. Rafiey, A. Yeo}, journal={arXiv preprint arXiv:cs/0602038}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602038}, primaryClass={cs.DM cs.AI} }
gutin2006minimum
arxiv-673853
cs/0602039
Path Summaries and Path Partitioning in Modern XML Databases
<|reference_start|>Path Summaries and Path Partitioning in Modern XML Databases: We study the applicability of XML path summaries in the context of current-day XML databases. We find that summaries provide an excellent basis for optimizing data access methods, and that this approach combines very well with path-partitioned stores. We provide practical algorithms for building and exploiting summaries, and demonstrate their benefits through extensive experiments.<|reference_end|>
arxiv
@article{arion2006path, title={Path Summaries and Path Partitioning in Modern XML Databases}, author={Andrei Arion (INRIA Futurs), Angela Bonifati, Ioana Manolescu (INRIA Futurs), Andrea Pugliese}, journal={arXiv preprint arXiv:cs/0602039}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602039}, primaryClass={cs.DB} }
arion2006path
arxiv-673854
cs/0602040
PLTL Partitioned Model Checking for Reactive Systems under Fairness Assumptions
<|reference_start|>PLTL Partitioned Model Checking for Reactive Systems under Fairness Assumptions: We are interested in verifying dynamic properties of finite state reactive systems under fairness assumptions by model checking. The systems we want to verify are specified through a top-down refinement process. In order to deal with the state explosion problem, we have proposed in previous works to partition the reachability graph, and to perform the verification on each part separately. Moreover, we have defined a class, called Bmod, of dynamic properties that are verifiable by parts, whatever the partition. We decide if a property P belongs to Bmod by looking at the form of the Buchi automaton that accepts the negation of P. However, when a property P belongs to Bmod, the property f => P, where f is a fairness assumption, does not necessarily belong to Bmod. In this paper, we propose to use the refinement process in order to build the parts on which the verification has to be performed. We then show that with such a partition, if a property P is verifiable by parts and if f is the expression of the fairness assumptions on a system, then the property f => P is still verifiable by parts. This approach is illustrated by its application to the chip card protocol T=1 using the B engineering design language.<|reference_end|>
arxiv
@article{chouali2006pltl, title={PLTL Partitioned Model Checking for Reactive Systems under Fairness Assumptions}, author={Samir Chouali (LIFC), Jacques Julliand (LIFC), Pierre-Alain Masson (LIFC), Fran\c{c}oise Bellegarde (LIFC)}, journal={ACM Transactions on Embedded Computing Systems 4(2) (2005) 267-301}, year={2006}, doi={10.1145/1067915.1067918}, archivePrefix={arXiv}, eprint={cs/0602040}, primaryClass={cs.LO} }
chouali2006pltl
arxiv-673855
cs/0602041
Why neighbor-joining works
<|reference_start|>Why neighbor-joining works: We show that the neighbor-joining algorithm is a robust quartet method for constructing trees from distances. This leads to a new performance guarantee that contains Atteson's optimal radius bound as a special case and explains many cases where neighbor-joining is successful even when Atteson's criterion is not satisfied. We also provide a proof for Atteson's conjecture on the optimal edge radius of the neighbor-joining algorithm. The strong performance guarantees we provide also hold for the quadratic time fast neighbor-joining algorithm, thus providing a theoretical basis for inferring very large phylogenies with neighbor-joining.<|reference_end|>
arxiv
@article{mihaescu2006why, title={Why neighbor-joining works}, author={Radu Mihaescu, Dan Levy, Lior Pachter}, journal={arXiv preprint arXiv:cs/0602041}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602041}, primaryClass={cs.DS cs.DM} }
mihaescu2006why
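As a concrete reference point for the criterion this record analyzes, here is a minimal Python sketch of the classical neighbor-joining iteration (Q-matrix pair selection and distance update). It illustrates only the standard algorithm, not the paper's new performance guarantees or proofs; all function and variable names are ours.

```python
import numpy as np

def neighbor_joining_pairs(D):
    """Classical neighbor-joining on a symmetric distance matrix D:
    repeatedly join the pair (i, j) minimizing
    Q_ij = (n-2)*D_ij - r_i - r_j, with r_i = sum_k D_ik, then replace
    i, j by a new node u with D_uk = (D_ik + D_jk - D_ij) / 2.
    Returns the sequence of joined label pairs (an implicit tree)."""
    D = np.array(D, dtype=float)
    labels = list(range(len(D)))
    joins = []
    while len(labels) > 2:
        n = len(labels)
        r = D.sum(axis=1)
        Q = (n - 2) * D - r[:, None] - r[None, :]
        np.fill_diagonal(Q, np.inf)                      # never join a node with itself
        i, j = np.unravel_index(np.argmin(Q), Q.shape)
        joins.append((labels[i], labels[j]))
        new_row = 0.5 * (D[i] + D[j] - D[i, j])          # distances to the new node u
        keep = [k for k in range(n) if k not in (i, j)]
        D = np.vstack([D[np.ix_(keep, keep)], new_row[keep][None, :]])
        D = np.hstack([D, np.append(new_row[keep], 0.0)[:, None]])
        labels = [labels[k] for k in keep] + [("u", labels[i], labels[j])]
    joins.append((labels[0], labels[1]))
    return joins
```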
arxiv-673856
cs/0602042
New security and control protocol for VoIP based on steganography and digital watermarking
<|reference_start|>New security and control protocol for VoIP based on steganography and digital watermarking: In this paper a new security and control protocol for Voice over Internet Protocol (VoIP) service is presented. It is an alternative to the IETF's (Internet Engineering Task Force) RTCP (Real-Time Control Protocol) for real-time application traffic. Additionally, this solution offers authentication and integrity and is capable of exchanging and verifying QoS and security parameters. Because it is based on digital watermarking and steganography, it does not consume additional bandwidth, and the data transmitted is inseparably bound to the voice content.<|reference_end|>
arxiv
@article{mazurczyk2006new, title={New security and control protocol for VoIP based on steganography and digital watermarking}, author={Wojciech Mazurczyk, Zbigniew Kotulski}, journal={Annales UMCS, Informatica, AI 4 (2006), ISNN 1732-1360}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602042}, primaryClass={cs.CR cs.MM} }
mazurczyk2006new
arxiv-673857
cs/0602043
Computing Nash Equilibria: Approximation and Smoothed Complexity
<|reference_start|>Computing Nash Equilibria: Approximation and Smoothed Complexity: We show that the BIMATRIX game does not have a fully polynomial-time approximation scheme, unless PPAD is in P. In other words, no algorithm with time polynomial in n and 1/\epsilon can compute an \epsilon-approximate Nash equilibrium of an n by n bimatrix game, unless PPAD is in P. Instrumental to our proof, we introduce a new discrete fixed-point problem on a high-dimensional cube with a constant side-length, such as on an n-dimensional cube with side-length 7, and show that it is PPAD-complete. Furthermore, we prove, unless PPAD is in RP, that the smoothed complexity of the Lemke-Howson algorithm or any algorithm for computing a Nash equilibrium of a bimatrix game is not polynomial in n and 1/\sigma under perturbations with magnitude \sigma. Our result answers a major open question in the smoothed analysis of algorithms and the approximation of Nash equilibria.<|reference_end|>
arxiv
@article{chen2006computing, title={Computing Nash Equilibria: Approximation and Smoothed Complexity}, author={Xi Chen, Xiaotie Deng and Shang-Hua Teng}, journal={arXiv preprint arXiv:cs/0602043}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602043}, primaryClass={cs.CC cs.GT} }
chen2006computing
arxiv-673858
cs/0602044
Multilevel Thresholding for Image Segmentation through a Fast Statistical Recursive Algorithm
<|reference_start|>Multilevel Thresholding for Image Segmentation through a Fast Statistical Recursive Algorithm: A novel algorithm is proposed for segmenting an image into multiple levels using its mean and variance. Starting from the extreme pixel values at both ends of the histogram plot, the algorithm is applied recursively on sub-ranges computed from the previous step, so as to find a threshold level and a new sub-range for the next step, until no significant improvement in image quality can be achieved. The method makes use of the fact that a number of distributions tend towards the Dirac delta function, peaking at the mean, in the limiting condition of vanishing variance. The procedure naturally provides for variable-size segmentation with bigger blocks near the extreme pixel values and finer divisions around the mean or other chosen value for better visualization. Experiments on a variety of images show that the new algorithm effectively segments the image in very little computational time.<|reference_end|>
arxiv
@article{arora2006multilevel, title={Multilevel Thresholding for Image Segmentation through a Fast Statistical Recursive Algorithm}, author={Siddharth Arora, Jayadev Acharya, Amit Verma, Prasanta K. Panigrahi}, journal={arXiv preprint arXiv:cs/0602044}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602044}, primaryClass={cs.CV} }
arora2006multilevel
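Purely as an illustration of the kind of recursive, mean-driven splitting the abstract above describes, here is a simplified Python sketch; it is our own approximation, not the authors' exact procedure, and the stopping rule `min_gap` and all names are hypothetical.

```python
import numpy as np

def recursive_mean_thresholds(pixels, lo=0, hi=255, min_gap=8):
    """Recursively split the intensity sub-range [lo, hi] at the mean of the
    pixels that fall inside it, collecting one threshold per split.  The
    recursion stops when a sub-range is narrow, empty, or constant."""
    vals = pixels[(pixels >= lo) & (pixels <= hi)]
    if vals.size == 0 or hi - lo < min_gap or vals.min() == vals.max():
        return []
    t = float(vals.mean())                      # threshold for this sub-range
    return (recursive_mean_thresholds(pixels, lo, int(t), min_gap)
            + [t]
            + recursive_mean_thresholds(pixels, int(t) + 1, hi, min_gap))

# Usage sketch: thresholds = sorted(recursive_mean_thresholds(image.ravel()));
# each pixel is then mapped to the segment bounded by consecutive thresholds.
```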
arxiv-673859
cs/0602045
Emergence Explained
<|reference_start|>Emergence Explained: Emergence (macro-level effects from micro-level causes) is at the heart of the conflict between reductionism and functionalism. How can there be autonomous higher level laws of nature (the functionalist claim) if everything can be reduced to the fundamental forces of physics (the reductionist position)? We cut through this debate by applying a computer science lens to the way we view nature. We conclude (a) that what functionalism calls the special sciences (sciences other than physics) do indeed study autonomous laws and furthermore that those laws pertain to real higher level entities but (b) that interactions among such higher-level entities are epiphenomenal in that they can always be reduced to primitive physical forces. In other words, epiphenomena, which we will identify with emergent phenomena, do real higher-level work. The proposed perspective provides a framework for understanding many thorny issues including the nature of entities, stigmergy, the evolution of complexity, phase transitions, supervenience, and downward entailment. We also discuss some practical considerations pertaining to systems of systems and the limitations of modeling.<|reference_end|>
arxiv
@article{abbott2006emergence, title={Emergence Explained}, author={Russ Abbott}, journal={arXiv preprint arXiv:cs/0602045}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602045}, primaryClass={cs.MA cs.DC cs.GL} }
abbott2006emergence
arxiv-673860
cs/0602046
Analysis of LDGM and compound codes for lossy compression and binning
<|reference_start|>Analysis of LDGM and compound codes for lossy compression and binning: Recent work has suggested that low-density generator matrix (LDGM) codes are likely to be effective for lossy source coding problems. We derive rigorous upper bounds on the effective rate-distortion function of LDGM codes for the binary symmetric source, showing that they quickly approach the rate-distortion function as the degree increases. We also compare and contrast the standard LDGM construction with a compound LDPC/LDGM construction introduced in our previous work, which provably saturates the rate-distortion bound with finite degrees. Moreover, this compound construction can be used to generate nested codes that are simultaneously good as source and channel codes, and are hence well-suited to source/channel coding with side information. The sparse and high-girth graphical structure of our constructions render them well-suited to message-passing encoding.<|reference_end|>
arxiv
@article{martinian2006analysis, title={Analysis of LDGM and compound codes for lossy compression and binning}, author={Emin Martinian, Martin J. Wainwright}, journal={arXiv preprint arXiv:cs/0602046}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602046}, primaryClass={cs.IT math.IT} }
martinian2006analysis
arxiv-673861
cs/0602047
Approximability of Integer Programming with Generalised Constraints
<|reference_start|>Approximability of Integer Programming with Generalised Constraints: We study a family of problems, called Maximum Solution, where the objective is to maximise a linear goal function over the feasible integer assignments to a set of variables subject to a set of constraints. When the domain is Boolean (i.e. restricted to $\{0,1\}$), the maximum solution problem is identical to the well-studied Max Ones problem, and the approximability is completely understood for all restrictions on the underlying constraints [Khanna et al., SIAM J. Comput., 30 (2001), pp. 1863-1920]. We continue this line of research by considering domains containing more than two elements. We present two main results: a complete classification for the approximability of all maximal constraint languages over domains of cardinality at most 4, and a complete classification of the approximability of the problem when the set of allowed constraints contains all permutation constraints. Under the assumption that a conjecture due to Szczepara holds, we give a complete classification for all maximal constraint languages. These classes of languages are well-studied in universal algebra and computer science; they have, for instance, been considered in connection with machine learning and constraint satisfaction. Our results are proved by using algebraic results from clone theory, and the results indicate that this approach is very powerful for classifying the approximability of certain optimisation problems.<|reference_end|>
arxiv
@article{jonsson2006approximability, title={Approximability of Integer Programming with Generalised Constraints}, author={Peter Jonsson, Fredrik Kuivinen, Gustav Nordh}, journal={arXiv preprint arXiv:cs/0602047}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602047}, primaryClass={cs.CC} }
jonsson2006approximability
arxiv-673862
cs/0602048
On the Optimality of the ARQ-DDF Protocol
<|reference_start|>On the Optimality of the ARQ-DDF Protocol: The performance of the automatic repeat request-dynamic decode and forward (ARQ-DDF) cooperation protocol is analyzed in two distinct scenarios. The first scenario is the multiple access relay (MAR) channel where a single relay is dedicated to simultaneously help several multiple access users. For this setup, it is shown that the ARQ-DDF protocol achieves the optimal diversity multiplexing tradeoff (DMT) of the channel. The second scenario is the cooperative vector multiple access (CVMA) channel where the users cooperate in delivering their messages to a destination equipped with multiple receiving antennas. For this setup, we develop a new variant of the ARQ-DDF protocol where the users are purposefully instructed not to cooperate in the first round of transmission. Lower and upper bounds on the achievable DMT are then derived. These bounds are shown to converge to the optimal tradeoff as the number of transmission rounds increases.<|reference_end|>
arxiv
@article{azarian2006on, title={On the Optimality of the ARQ-DDF Protocol}, author={Kambiz Azarian, Hesham El Gamal and Philip Schniter}, journal={arXiv preprint arXiv:cs/0602048}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602048}, primaryClass={cs.IT math.IT} }
azarian2006on
arxiv-673863
cs/0602049
Cooperative Lattice Coding and Decoding
<|reference_start|>Cooperative Lattice Coding and Decoding: A novel lattice coding framework is proposed for outage-limited cooperative channels. This framework provides practical implementations for the optimal cooperation protocols proposed by Azarian et al. In particular, for the relay channel we implement a variant of the dynamic decode and forward protocol, which uses orthogonal constellations to reduce the channel seen by the destination to a single-input single-output time-selective one, while inheriting the same diversity-multiplexing tradeoff. This simplification allows for building the receiver using traditional belief propagation or tree search architectures. Our framework also generalizes the coding scheme of Yang and Belfiore in the context of amplify and forward cooperation. For the cooperative multiple access channel, a tree coding approach, matched to the optimal linear cooperation protocol of Azarian et al., is developed. For this scenario, the MMSE-DFE Fano decoder is shown to enjoy an excellent tradeoff between performance and complexity. Finally, the utility of the proposed schemes is established via a comprehensive simulation study.<|reference_end|>
arxiv
@article{murugan2006cooperative, title={Cooperative Lattice Coding and Decoding}, author={Arul Murugan, Kambiz Azarian and Hesham El Gamal}, journal={arXiv preprint arXiv:cs/0602049}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602049}, primaryClass={cs.IT math.IT} }
murugan2006cooperative
arxiv-673864
cs/0602050
Outage Capacity of the Fading Relay Channel in the Low SNR Regime
<|reference_start|>Outage Capacity of the Fading Relay Channel in the Low SNR Regime: In slow fading scenarios, cooperation between nodes can increase the amount of diversity for communication. We study the performance limit in such scenarios by analyzing the outage capacity of slow fading relay channels. Our focus is on the low SNR and low outage probability regime, where the adverse impact of fading is greatest but so are the potential gains from cooperation. We show that while the standard Amplify-Forward protocol performs very poorly in this regime, a modified version we call the Bursty Amplify-Forward protocol is optimal and achieves the outage capacity of the network. Moreover, this performance can be achieved without a priori channel knowledge at the receivers. In contrast, the Decode-Forward protocol is strictly sub-optimal in this regime. Our results directly yield the outage capacity per unit energy of fading relay channels.<|reference_end|>
arxiv
@article{avestimehr2006outage, title={Outage Capacity of the Fading Relay Channel in the Low SNR Regime}, author={Salman Avestimehr and David N.C. Tse}, journal={arXiv preprint arXiv:cs/0602050}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602050}, primaryClass={cs.IT math.IT} }
avestimehr2006outage
arxiv-673865
cs/0602051
On the utility of the multimodal problem generator for assessing the performance of Evolutionary Algorithms
<|reference_start|>On the utility of the multimodal problem generator for assessing the performance of Evolutionary Algorithms: This paper looks in detail at how an evolutionary algorithm attempts to solve instances from the multimodal problem generator. The paper shows that in order to consistently reach the global optimum, an evolutionary algorithm requires a population size that should grow at least linearly with the number of peaks. It is also shown that there is a close relationship between the supply and decision-making issues that have been identified previously in the context of population sizing models for additively decomposable problems. The most important result of the paper, however, is that solving an instance of the multimodal problem generator is like solving a peak-in-a-haystack problem, and it is argued that evolutionary algorithms are not the best algorithms for such a task. Finally, and as opposed to what several researchers have been doing, it is our strong belief that the multimodal problem generator is not adequate for assessing the performance of evolutionary algorithms.<|reference_end|>
arxiv
@article{lobo2006on, title={On the utility of the multimodal problem generator for assessing the performance of Evolutionary Algorithms}, author={Fernando G. Lobo, Claudio F. Lima}, journal={arXiv preprint arXiv:cs/0602051}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602051}, primaryClass={cs.NE} }
lobo2006on
arxiv-673866
cs/0602052
The OverRelational Manifesto
<|reference_start|>The OverRelational Manifesto: The OverRelational Manifesto (hereafter ORM) proposes a possible approach to the creation of data storage systems of the next generation. ORM starts from the requirement that information in a relational database is represented by a set of relation values. Accordingly, it is assumed that the information about any entity of an enterprise must also be represented as a set of relation values (the main ORM requirement). A system of types is introduced, which allows one to fulfill the main requirement. The data are represented in the form of complex objects, and the state of any object is described as a set of relation values. We emphasize that the types describing the objects are encapsulated, inherited, and polymorphic. Then, it is shown that the data represented as a set of such objects may also be represented as a set of relational values defined on the set of scalar domains (dual data representation). In the general case, any class is associated with a set of relation variables (R-variables), each one containing some data about all objects of this class existing in the system. One of the key points is the fact that the usage of complex (from the user's viewpoint) refined names of R-variables and their attributes makes it possible to preserve the semantics of complex data structures represented in the form of a set of relation values. The most important part of the data storage system created with the proposed approach is an object-oriented translator operating over a relational DBMS. The expressiveness of such a system is comparable with that of OO programming languages.<|reference_end|>
arxiv
@article{grigoriev2006the, title={The OverRelational Manifesto}, author={Evgeniy Grigoriev}, journal={arXiv preprint arXiv:cs/0602052}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602052}, primaryClass={cs.DB cs.DS} }
grigoriev2006the
arxiv-673867
cs/0602053
How to Beat the Adaptive Multi-Armed Bandit
<|reference_start|>How to Beat the Adaptive Multi-Armed Bandit: The multi-armed bandit is a concise model for the problem of iterated decision-making under uncertainty. In each round, a gambler must pull one of $K$ arms of a slot machine, without any foreknowledge of their payouts, except that they are uniformly bounded. A standard objective is to minimize the gambler's regret, defined as the largest payout which would have been achieved in hindsight by any fixed arm, minus the gambler's total payout. Note that the gambler is only told the payout for the arm actually chosen, not for the unchosen arms. Almost all previous work on this problem assumed the payouts to be non-adaptive, in the sense that the distribution of the payout of arm $j$ in round $i$ is completely independent of the choices made by the gambler on rounds $1, \dots, i-1$. In the more general model of adaptive payouts, the payouts in round $i$ may depend arbitrarily on the history of past choices made by the algorithm. We present a new algorithm for this problem, and prove nearly optimal guarantees for the regret against both non-adaptive and adaptive adversaries. After $T$ rounds, our algorithm has regret $O(\sqrt{T})$ with high probability (the tail probability decays exponentially). This dependence on $T$ is best possible, and matches that of the full-information version of the problem, in which the gambler is told the payouts for all $K$ arms after each round. Previously, even for non-adaptive payouts, the best high-probability bounds known were $O(T^{2/3})$, due to Auer, Cesa-Bianchi, Freund and Schapire. The expected regret of their algorithm is $O(T^{1/2})$ for non-adaptive payouts, but as we show, $\Omega(T^{2/3})$ for adaptive payouts.<|reference_end|>
arxiv
@article{dani2006how, title={How to Beat the Adaptive Multi-Armed Bandit}, author={Varsha Dani and Thomas P. Hayes}, journal={arXiv preprint arXiv:cs/0602053}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602053}, primaryClass={cs.DS cs.LG} }
dani2006how
arxiv-673868
cs/0602054
Explicit Space-Time Codes Achieving The Diversity-Multiplexing Gain Tradeoff
<|reference_start|>Explicit Space-Time Codes Achieving The Diversity-Multiplexing Gain Tradeoff: A recent result of Zheng and Tse states that over a quasi-static channel, there exists a fundamental tradeoff, referred to as the diversity-multiplexing gain (D-MG) tradeoff, between the spatial multiplexing gain and the diversity gain that can be simultaneously achieved by a space-time (ST) block code. This tradeoff is precisely known in the case of i.i.d. Rayleigh-fading, for T>= n_t+n_r-1 where T is the number of time slots over which coding takes place and n_t,n_r are the number of transmit and receive antennas respectively. For T < n_t+n_r-1, only upper and lower bounds on the D-MG tradeoff are available. In this paper, we present a complete solution to the problem of explicitly constructing D-MG optimal ST codes, i.e., codes that achieve the D-MG tradeoff for any number of receive antennas. We do this by showing that for the square minimum-delay case when T=n_t=n, cyclic-division-algebra (CDA) based ST codes having the non-vanishing determinant property are D-MG optimal. While constructions of such codes were previously known for restricted values of n, we provide here a construction for such codes that is valid for all n. For the rectangular, T > n_t case, we present two general techniques for building D-MG-optimal rectangular ST codes from their square counterparts. A byproduct of our results establishes that the D-MG tradeoff for all T>= n_t is the same as that previously known to hold for T >= n_t + n_r -1.<|reference_end|>
arxiv
@article{elia2006explicit, title={Explicit Space-Time Codes Achieving The Diversity-Multiplexing Gain Tradeoff}, author={Petros Elia, K. Raj Kumar, Sameer A. Pawar, P. Vijay Kumar and Hsiao-feng Lu}, journal={arXiv preprint arXiv:cs/0602054}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602054}, primaryClass={cs.IT math.IT} }
elia2006explicit
arxiv-673869
cs/0602055
Revisiting Evolutionary Algorithms with On-the-Fly Population Size Adjustment
<|reference_start|>Revisiting Evolutionary Algorithms with On-the-Fly Population Size Adjustment: In an evolutionary algorithm, the population has a very important role as its size has direct implications regarding solution quality, speed, and reliability. Theoretical studies have been done in the past to investigate the role of population sizing in evolutionary algorithms. In addition to those studies, several self-adjusting population sizing mechanisms have been proposed in the literature. This paper revisits the latter topic and pays special attention to the genetic algorithm with adaptive population size (APGA), for which several researchers have claimed to be very effective at autonomously (re)sizing the population. As opposed to those previous claims, this paper suggests a complete opposite view. Specifically, it shows that APGA is not capable of adapting the population size at all. This claim is supported on theoretical grounds and confirmed by computer simulations.<|reference_end|>
arxiv
@article{lobo2006revisiting, title={Revisiting Evolutionary Algorithms with On-the-Fly Population Size Adjustment}, author={Fernando G. Lobo, Claudio F. Lima}, journal={arXiv preprint arXiv:cs/0602055}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602055}, primaryClass={cs.NE} }
lobo2006revisiting
arxiv-673870
cs/0602056
Building Scenarios for Environmental Management and Planning: An IT-Based Approach
<|reference_start|>Building Scenarios for Environmental Management and Planning: An IT-Based Approach: Oftentimes, the need to build multidiscipline knowledge bases, oriented to policy scenarios, entails the involvement of stakeholders in manifold domains, with a juxtaposition of different languages whose semantics can hardly allow inter-domain transfers. A useful support for planning is the building up of durable IT based interactive platforms, where it is possible to modify initial positions toward a semantic convergence. The present paper shows an area-based application of these tools, for the integrated distance-management of different forms of knowledge expressed by selected stakeholders about environmental planning issues, in order to build alternative development scenarios. Keywords: Environmental planning, Scenario building, Multi-source knowledge, IT-based<|reference_end|>
arxiv
@article{borri2006building, title={Building Scenarios for Environmental Management and Planning: An IT-Based Approach}, author={Dino Borri, Domenico Camarda}, journal={arXiv preprint arXiv:cs/0602056}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602056}, primaryClass={cs.MA} }
borri2006building
arxiv-673871
cs/0602057
Plane Decompositions as Tools for Approximation
<|reference_start|>Plane Decompositions as Tools for Approximation: Tree decompositions were developed by Robertson and Seymour. Since then algorithms have been developed to solve intractable problems efficiently for graphs of bounded treewidth. In this paper we extend tree decompositions to allow cycles to exist in the decomposition graph; we call these new decompositions plane decompositions because we require that the decomposition graph be planar. First, we give some background material about tree decompositions and an overview of algorithms both for decompositions and for approximations of planar graphs. Then, we give our plane decomposition definition and an algorithm that uses this decomposition to approximate the size of the maximum independent set of the underlying graph in polynomial time.<|reference_end|>
arxiv
@article{agnew2006plane, title={Plane Decompositions as Tools for Approximation}, author={Melanie J. Agnew and Christopher M. Homan}, journal={arXiv preprint arXiv:cs/0602057}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602057}, primaryClass={cs.DS} }
agnew2006plane
arxiv-673872
cs/0602058
Incremental Redundancy Cooperative Coding for Wireless Networks: Cooperative Diversity, Coding, and Transmission Energy Gain
<|reference_start|>Incremental Redundancy Cooperative Coding for Wireless Networks: Cooperative Diversity, Coding, and Transmission Energy Gain: We study an incremental redundancy (IR) cooperative coding scheme for wireless networks. To exploit the spatial diversity benefit we propose a cluster-based collaborating strategy for a quasi-static Rayleigh fading channel model, based on a network geometric distance profile. Our scheme enhances the network performance by embedding an IR cooperative coding scheme into an existing noncooperative route. More precisely, for each hop, we form a collaborating cluster of M-1 nodes between the (hop) sender and the (hop) destination. The transmitted message is encoded using a mother code and partitioned into M blocks corresponding to each of the M slots. In the first slot, the (hop) sender broadcasts its information by transmitting the first block, and its helpers attempt to relay this message. In the remaining slots, each of the remaining M-1 blocks is sent either through a helper which has successfully decoded the message or directly by the (hop) sender, where the dynamic schedule is based on the ACK-based feedback from the cluster. By employing powerful good codes (e.g., turbo codes, LDPC codes, and raptor codes) whose performance is characterized by a threshold behavior, our approach improves the reliability of multi-hop routing through not only a cooperative diversity benefit but also a coding advantage. The study of the diversity and the coding gain of the proposed scheme is based on a new simple threshold bound on the frame-error rate (FER) of maximum likelihood decoding. An average FER upper bound and its asymptotic (in large SNR) version are derived as a function of the average fading channel SNRs and the code threshold.<|reference_end|>
arxiv
@article{liu2006incremental, title={Incremental Redundancy Cooperative Coding for Wireless Networks: Cooperative Diversity, Coding, and Transmission Energy Gain}, author={Ruoheng Liu, Predrag Spasojevic, and Emina Soljanin}, journal={arXiv preprint arXiv:cs/0602058}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602058}, primaryClass={cs.IT math.IT} }
liu2006incremental
arxiv-673873
cs/0602059
D2D: Digital Archive to MPEG-21 DIDL
<|reference_start|>D2D: Digital Archive to MPEG-21 DIDL: Digital Archive to MPEG-21 DIDL (D2D) analyzes the contents of the digital archive and produces an MPEG-21 Digital Item Declaration Language (DIDL) encapsulating the analysis results. DIDL is an extensible XML-based language that aggregates resources and the metadata. We provide a brief report on several analysis techniques applied on the digital archive by the D2D and provide an evaluation of its run-time performance.<|reference_end|>
arxiv
@article{manepalli2006d2d:, title={D2D: Digital Archive to MPEG-21 DIDL}, author={Suchitra Manepalli, Giridhar Manepalli, Michael L. Nelson}, journal={arXiv preprint arXiv:cs/0602059}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602059}, primaryClass={cs.DL} }
manepalli2006d2d:
arxiv-673874
cs/0602060
eJournal interface can influence usage statistics: implications for libraries, publishers, and Project COUNTER
<|reference_start|>eJournal interface can influence usage statistics: implications for libraries, publishers, and Project COUNTER: The design of a publisher's electronic interface can have a measurable effect on electronic journal usage statistics. A study of journal usage from six COUNTER-compliant publishers at thirty-two research institutions in the United States, the United Kingdom and Sweden indicates that the ratio of PDF to HTML views is not consistent across publisher interfaces, even after controlling for differences in publisher content. The number of fulltext downloads may be artificially inflated when publishers require users to view HTML versions before accessing PDF versions or when linking mechanisms, such as CrossRef, direct users to the full text, rather than the abstract, of each article. These results suggest that usage reports from COUNTER-compliant publishers are not directly comparable in their current form. One solution may be to modify publisher numbers with adjustment factors deemed to be representative of the benefit or disadvantage due to its interface. Standardization of some interface and linking protocols may obviate these differences and allow for more accurate cross-publisher comparisons.<|reference_end|>
arxiv
@article{davis2006ejournal, title={eJournal interface can influence usage statistics: implications for libraries, publishers, and Project COUNTER}, author={Philip M. Davis, Jason S. Price}, journal={JASIST v57 n9 (2006):1243-1248}, year={2006}, doi={10.1002/asi.20405}, archivePrefix={arXiv}, eprint={cs/0602060}, primaryClass={cs.IR cs.DL} }
davis2006ejournal
arxiv-673875
cs/0602061
The Computational and Storage Potential of Volunteer Computing
<|reference_start|>The Computational and Storage Potential of Volunteer Computing: "Volunteer computing" uses Internet-connected computers, volunteered by their owners, as a source of computing power and storage. This paper studies the potential capacity of volunteer computing. We analyzed measurements of over 330,000 hosts participating in a volunteer computing project. These measurements include processing power, memory, disk space, network throughput, host availability, user-specified limits on resource usage, and host churn. We show that volunteer computing can support applications that are significantly more data-intensive, or have larger memory and storage requirements, than those in current projects.<|reference_end|>
arxiv
@article{anderson2006the, title={The Computational and Storage Potential of Volunteer Computing}, author={David P. Anderson, Gilles Fedak}, journal={arXiv preprint arXiv:cs/0602061}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602061}, primaryClass={cs.DC cs.PF} }
anderson2006the
arxiv-673876
cs/0602062
Learning rational stochastic languages
<|reference_start|>Learning rational stochastic languages: Given a finite set of words w1,...,wn independently drawn according to a fixed unknown distribution law P called a stochastic language, a usual goal in Grammatical Inference is to infer an estimate of P in some class of probabilistic models, such as Probabilistic Automata (PA). Here, we study the class of rational stochastic languages, which consists of stochastic languages that can be generated by Multiplicity Automata (MA) and which strictly includes the class of stochastic languages generated by PA. Rational stochastic languages have a minimal normal representation which may be very concise, and whose parameters can be efficiently estimated from stochastic samples. We design an efficient inference algorithm DEES which aims at building a minimal normal representation of the target. Despite the fact that no recursively enumerable class of MA computes exactly the set of rational stochastic languages over Q, we show that DEES strongly identifies this set in the limit. We study the intermediary MA output by DEES and show that they compute rational series which converge absolutely to one and which can be used to provide stochastic languages which closely estimate the target.<|reference_end|>
arxiv
@article{denis2006learning, title={Learning rational stochastic languages}, author={Fran\c{c}ois Denis (LIF), Yann Esposito (LIF), Amaury Habrard (LIF)}, journal={arXiv preprint arXiv:cs/0602062}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602062}, primaryClass={cs.LG} }
denis2006learning
arxiv-673877
cs/0602063
Group Signature Schemes Using Braid Groups
<|reference_start|>Group Signature Schemes Using Braid Groups: Artin's braid groups have been recently suggested as a new source for public-key cryptography. In this paper we propose the first group signature schemes based on the conjugacy problem, decomposition problem and root problem in the braid groups which are believed to be hard problems.<|reference_end|>
arxiv
@article{thomas2006group, title={Group Signature Schemes Using Braid Groups}, author={Tony Thomas, Arbind Kumar Lal}, journal={arXiv preprint arXiv:cs/0602063}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602063}, primaryClass={cs.CR} }
thomas2006group
arxiv-673878
cs/0602064
Computing spectral sequences
<|reference_start|>Computing spectral sequences: In this paper, a set of programs enhancing the Kenzo system is presented. Kenzo is a Common Lisp program designed for computing in Algebraic Topology, in particular it allows the user to calculate homology and homotopy groups of complicated spaces. The new programs presented here entirely compute Serre and Eilenberg-Moore spectral sequences, in particular the groups and differential maps for arbitrary r. They also determine when the spectral sequence has converged and describe the filtration of the target homology groups induced by the spectral sequence.<|reference_end|>
arxiv
@article{romero2006computing, title={Computing spectral sequences}, author={A. Romero, J. Rubio, F. Sergeraert}, journal={arXiv preprint arXiv:cs/0602064}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602064}, primaryClass={cs.SC} }
romero2006computing
arxiv-673879
cs/0602065
Similarity of Objects and the Meaning of Words
<|reference_start|>Similarity of Objects and the Meaning of Words: We survey the emerging area of compression-based, parameter-free, similarity distance measures useful in data-mining, pattern recognition, learning and automatic semantics extraction. Given a family of distances on a set of objects, a distance is universal up to a certain precision for that family if it minorizes every distance in the family between every two objects in the set, up to the stated precision (we do not require the universal distance to be an element of the family). We consider similarity distances for two types of objects: literal objects that as such contain all of their meaning, like genomes or books, and names for objects. The latter may have literal embodiments like the first type, but may also be abstract like ``red'' or ``christianity.'' For the first type we consider a family of computable distance measures corresponding to parameters expressing similarity according to particular features between pairs of literal objects. For the second type we consider similarity distances generated by web users corresponding to particular semantic relations between the (names for) the designated objects. For both families we give universal similarity distance measures, incorporating all particular distance measures in the family. In the first case the universal distance is based on compression and in the second case it is based on Google page counts related to search terms. In both cases experiments on a massive scale give evidence of the viability of the approaches.<|reference_end|>
arxiv
@article{cilibrasi2006similarity, title={Similarity of Objects and the Meaning of Words}, author={Rudi Cilibrasi (CWI) and Paul Vitanyi (CWI and University of Amsterdam)}, journal={arXiv preprint arXiv:cs/0602065}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602065}, primaryClass={cs.CV cs.IR} }
cilibrasi2006similarity
arxiv-673880
cs/0602066
Natural Economics
<|reference_start|>Natural Economics: A few considerations on the nature of Economics and its relationship to human communities through the prism of Self-Organizing-Systems.<|reference_end|>
arxiv
@article{mello2006natural, title={Natural Economics}, author={Louis Mello}, journal={arXiv preprint arXiv:cs/0602066}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602066}, primaryClass={cs.OH} }
mello2006natural
arxiv-673881
cs/0602067
Renyi to Renyi -- Source Coding under Siege
<|reference_start|>Renyi to Renyi -- Source Coding under Siege: A novel lossless source coding paradigm applies to problems of unreliable lossless channels with low bit rates, in which a vital message needs to be transmitted prior to termination of communications. This paradigm can be applied to Alfred Renyi's secondhand account of an ancient siege in which a spy was sent to scout the enemy but was captured. After escaping, the spy returned to his base in no condition to speak and unable to write. His commander asked him questions that he could answer by nodding or shaking his head, and the fortress was defended with this information. Renyi told this story with reference to prefix coding, but maximizing the probability of survival in the siege scenario is distinct from yet related to the traditional source coding objective of minimizing expected codeword length. Rather than finding a code minimizing expected codeword length $\sum_{i=1}^n p(i) l(i)$, the siege problem involves maximizing $\sum_{i=1}^n p(i) \theta^{l(i)}$ for a known $\theta \in (0,1)$. When there are no restrictions on codewords, this problem can be solved using a known generalization of Huffman coding. The optimal solution has coding bounds which are functions of Renyi entropy; in addition to known bounds, new bounds are derived here. The alphabetically constrained version of this problem has applications in search trees and diagnostic testing. A novel dynamic programming algorithm -- based upon the oldest known algorithm for the traditional alphabetic problem -- optimizes this problem in $O(n^3)$ time and $O(n^2)$ space, whereas two novel approximation algorithms can find a suboptimal solution faster: one in linear time, the other in $O(n \log n)$. Coding bounds for the alphabetic version of this problem are also presented.<|reference_end|>
arxiv
@article{baer2006renyi, title={Renyi to Renyi -- Source Coding under Siege}, author={Michael B. Baer}, journal={arXiv preprint arXiv:cs/0602067}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602067}, primaryClass={cs.IT cs.DS math.IT} }
baer2006renyi
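The unrestricted version of the objective in the abstract above (maximize sum_i p(i) * theta^{l(i)}) can be handled by an exponential-cost variant of Huffman's procedure, which the abstract calls "a known generalization of Huffman coding". The Python sketch below is our illustrative rendering of that classical variant, not the paper's alphabetic dynamic program or approximation algorithms; the function name and the tie-breaking counter are ours.

```python
import heapq
import itertools

def siege_code_lengths(probs, theta):
    """Exponential-cost Huffman coding for maximizing sum_i p_i * theta**l_i
    with 0 < theta < 1: repeatedly merge the two smallest weights w1, w2 into
    a new node of weight theta * (w1 + w2); leaf depths give codeword lengths."""
    counter = itertools.count()          # tie-breaker so the heap never compares dicts
    heap = [(p, next(counter), {"leaf": i}) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)
        w2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (theta * (w1 + w2), next(counter), {"kids": (a, b)}))
    lengths = [0] * len(probs)
    def walk(node, depth):
        if "leaf" in node:
            lengths[node["leaf"]] = depth
        else:
            for child in node["kids"]:
                walk(child, depth + 1)
    walk(heap[0][2], 0)
    return lengths

# e.g. siege_code_lengths([0.5, 0.25, 0.25], 0.5) -> [1, 2, 2]
```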
arxiv-673882
cs/0602068
Parallel Symbolic Computation of Curvature Invariants in General Relativity
<|reference_start|>Parallel Symbolic Computation of Curvature Invariants in General Relativity: We present a practical application of parallel symbolic computation in General Relativity: the calculation of curvature invariants for large dimension. We discuss the structure of the calculations, an implementation of the technique and scaling of the computation with spacetime dimension for various invariants.<|reference_end|>
arxiv
@article{koehler2006parallel, title={Parallel Symbolic Computation of Curvature Invariants in General Relativity}, author={K. R. Koehler}, journal={arXiv preprint arXiv:cs/0602068}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602068}, primaryClass={cs.DC cs.SC gr-qc} }
koehler2006parallel
arxiv-673883
cs/0602069
Faster Algorithms for Constructing a Concept (Galois) Lattice
<|reference_start|>Faster Algorithms for Constructing a Concept (Galois) Lattice: In this paper, we present a fast algorithm for constructing a concept (Galois) lattice of a binary relation, including computing all concepts and their lattice order. We also present two efficient variants of the algorithm, one for computing all concepts only, and one for constructing a frequent closed itemset lattice. The running time of our algorithms depends on the lattice structure and is faster than all other existing algorithms for these problems.<|reference_end|>
arxiv
@article{choi2006faster, title={Faster Algorithms for Constructing a Concept (Galois) Lattice}, author={Vicky Choi}, journal={arXiv preprint arXiv:cs/0602069}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602069}, primaryClass={cs.DM cs.DS} }
choi2006faster
arxiv-673884
cs/0602070
Methods for scaling a large member base
<|reference_start|>Methods for scaling a large member base: The technical challenges of scaling websites with large and growing member bases, like social networking sites, are numerous. One of these challenges is how to evenly distribute the growing member base across all available resources. This paper will explore various methods that address this issue. The techniques used in this paper can be generalized and applied to various other problems that need to distribute data evenly amongst a finite amount of resources.<|reference_end|>
arxiv
@article{boeger2006methods, title={Methods for scaling a large member base}, author={Nathan Boeger}, journal={arXiv preprint arXiv:cs/0602070}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602070}, primaryClass={cs.GL} }
boeger2006methods
arxiv-673885
cs/0602071
Geographic Gossip: Efficient Aggregation for Sensor Networks
<|reference_start|>Geographic Gossip: Efficient Aggregation for Sensor Networks: Gossip algorithms for aggregation have recently received significant attention for sensor network applications because of their simplicity and robustness in noisy and uncertain environments. However, gossip algorithms can waste significant energy by essentially passing around redundant information multiple times. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is caused by slow mixing times of random walks on those graphs. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing a simple resampling method, we can demonstrate substantial gains over previously proposed gossip protocols. In particular, for random geometric graphs, our algorithm computes the true average to accuracy $1/n^a$ using $O(n^{1.5}\sqrt{\log n})$ radio transmissions, which reduces the energy consumption by a $\sqrt{\frac{n}{\log n}}$ factor over standard gossip algorithms.<|reference_end|>
arxiv
@article{dimakis2006geographic, title={Geographic Gossip: Efficient Aggregation for Sensor Networks}, author={Alexandros G. Dimakis, Anand D. Sarwate, Martin J. Wainwright}, journal={arXiv preprint arXiv:cs/0602071}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602071}, primaryClass={cs.IT math.IT} }
dimakis2006geographic
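To make the inefficiency discussed in the dimakis2006geographic abstract concrete, here is a minimal Python sketch of the standard nearest-neighbour gossip baseline on a random geometric graph; the node count, connectivity radius, and function names are illustrative assumptions. The geographic scheme proposed in the paper differs in that each exchange is greedily routed toward a uniformly random target location rather than a random immediate neighbour.

```python
import math
import random

def random_geometric_graph(n, radius, seed=0):
    """Place n nodes uniformly in the unit square; connect pairs within the given radius."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    nbrs = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pos[i], pos[j]) <= radius:
                nbrs[i].append(j)
                nbrs[j].append(i)
    return pos, nbrs

def pairwise_gossip(values, nbrs, rounds, seed=0):
    """Standard gossip: in each round a random node averages its value with a random neighbour."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i = rng.randrange(len(x))
        if nbrs[i]:
            j = rng.choice(nbrs[i])
            x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

if __name__ == "__main__":
    n = 200
    _, nbrs = random_geometric_graph(n, radius=math.sqrt(2 * math.log(n) / n))
    readings = [random.gauss(10.0, 1.0) for _ in range(n)]
    estimates = pairwise_gossip(readings, nbrs, rounds=20000)
    print(sum(readings) / n, estimates[0])   # true average vs. one node's estimate
```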
arxiv-673886
cs/0602072
Turbo Decoding on the Binary Erasure Channel: Finite-Length Analysis and Turbo Stopping Sets
<|reference_start|>Turbo Decoding on the Binary Erasure Channel: Finite-Length Analysis and Turbo Stopping Sets: This paper is devoted to the finite-length analysis of turbo decoding over the binary erasure channel (BEC). The performance of iterative belief-propagation (BP) decoding of low-density parity-check (LDPC) codes over the BEC can be characterized in terms of stopping sets. We describe turbo decoding on the BEC, which is simpler than turbo decoding on other channels. We then adapt the concept of stopping sets to turbo decoding and state an exact condition for decoding failure. If turbo decoding is applied until the transmitted codeword has been recovered or the decoder fails to progress further, then the set of erased positions remaining when the decoder stops is equal to the unique maximum-size turbo stopping set that is also a subset of the set of erased positions. Furthermore, we present some improvements of the basic turbo decoding algorithm on the BEC. The proposed improved turbo decoding algorithm has substantially better error performance, as illustrated by the given simulation results. Finally, we give an expression for the turbo stopping set size enumerating function under the uniform interleaver assumption, and an efficient algorithm for enumerating small-size turbo stopping sets for a particular interleaver. The solution is based on the algorithm proposed by Garello et al. in 2001 to compute an exhaustive list of all low-weight codewords in a turbo code.<|reference_end|>
arxiv
@article{rosnes2006turbo, title={Turbo Decoding on the Binary Erasure Channel: Finite-Length Analysis and Turbo Stopping Sets}, author={Eirik Rosnes and {\O}yvind Ytrehus}, journal={IEEE Trans. Inf. Theory, vol. 53, no. 11, pp. 4059-4075, Nov. 2007}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602072}, primaryClass={cs.IT math.IT} }
rosnes2006turbo
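The rosnes2006turbo abstract characterizes decoder failures on the BEC through stopping sets. For background, the sketch below implements the generic peeling decoder for a binary linear code on the erasure channel, the setting in which stopping sets arise for LDPC codes; the parity-check representation, function names, and toy example are illustrative assumptions, and this is not the turbo decoder analyzed in the paper. When the decoder stalls, the positions that remain erased form a stopping set.

```python
def peel_decode_bec(H, received):
    """
    Iterative erasure (peeling) decoder for a binary linear code on the BEC.
    H: list of parity checks, each a list of the bit positions it involves.
    received: list of bits (0/1), with None marking erased positions.
    Repeatedly solves any check that involves exactly one erased bit.
    """
    x = list(received)
    progress = True
    while progress:
        progress = False
        for check in H:
            erased = [j for j in check if x[j] is None]
            if len(erased) == 1:
                j = erased[0]
                x[j] = sum(x[k] for k in check if k != j) % 2  # XOR of the known bits
                progress = True
    return x  # any remaining None entries form a stopping set

if __name__ == "__main__":
    # (7,4) Hamming code parity checks; positions 2 and 5 of a valid codeword are erased.
    H = [[0, 2, 4, 6], [1, 2, 5, 6], [3, 4, 5, 6]]
    received = [1, 1, None, 1, 0, None, 1]
    print(peel_decode_bec(H, received))  # recovers [1, 1, 0, 1, 0, 0, 1]
```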
arxiv-673887
cs/0602073
An O(n^{2.75}) algorithm for online topological ordering
<|reference_start|>An O(n^{2.75}) algorithm for online topological ordering: We present a simple algorithm which maintains the topological order of a directed acyclic graph with n nodes under an online edge insertion sequence in O(n^{2.75}) time, independent of the number of edges m inserted. For dense DAGs, this is an improvement over the previous best result of O(min(m^{3/2} log(n), m^{3/2} + n^2 log(n))) by Katriel and Bodlaender. We also provide an empirical comparison of our algorithm with other algorithms for online topological sorting. Our implementation outperforms them on certain hard instances while it is still competitive on random edge insertion sequences leading to complete DAGs.<|reference_end|>
arxiv
@article{ajwani2006an, title={An O(n^{2.75}) algorithm for online topological ordering}, author={Deepak Ajwani, Tobias Friedrich and Ulrich Meyer}, journal={arXiv preprint arXiv:cs/0602073}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602073}, primaryClass={cs.DS} }
ajwani2006an
arxiv-673888
cs/0602074
The entropy rate of the binary symmetric channel in the rare transitions regime
<|reference_start|>The entropy rate of the binary symmetric channel in the rare transitions regime: This note has been withdrawn by the author, as a more complete result was recently proved by A. Quas and Y. Peres<|reference_end|>
arxiv
@article{chigansky2006the, title={The entropy rate of the binary symmetric channel in the rare transitions regime}, author={Pavel Chigansky}, journal={arXiv preprint arXiv:cs/0602074}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602074}, primaryClass={cs.IT math.IT} }
chigansky2006the
arxiv-673889
cs/0602075
The approximability of MAX CSP with fixed-value constraints
<|reference_start|>The approximability of MAX CSP with fixed-value constraints: In the maximum constraint satisfaction problem (MAX CSP), one is given a finite collection of (possibly weighted) constraints on overlapping sets of variables, and the goal is to assign values from a given finite domain to the variables so as to maximize the number (or the total weight, for the weighted case) of satisfied constraints. This problem is NP-hard in general, and, therefore, it is natural to study how restricting the allowed types of constraints affects the approximability of the problem. In this paper, we show that any MAX CSP problem with a finite set of allowed constraint types, which includes all fixed-value constraints (i.e., constraints of the form x=a), is either solvable exactly in polynomial time or else is APX-complete, even if the number of occurrences of variables in instances is bounded. Moreover, we present a simple description of all polynomial-time solvable cases of our problem. This description relies on the well-known algebraic combinatorial property of supermodularity.<|reference_end|>
arxiv
@article{deineko2006the, title={The approximability of MAX CSP with fixed-value constraints}, author={Vladimir Deineko, Peter Jonsson, Mikael Klasson, and Andrei Krokhin}, journal={arXiv preprint arXiv:cs/0602075}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602075}, primaryClass={cs.CC} }
deineko2006the
arxiv-673890
cs/0602076
Exploring term-document matrices from matrix models in text mining
<|reference_start|>Exploring term-document matrices from matrix models in text mining: We explore a matrix-space model, which is a natural extension of the vector space model for Information Retrieval. Each document can be represented by a matrix that is based on document extracts (e.g. sentences, paragraphs, sections). We focus on the performance of this model for the specific case in which documents are originally represented as term-by-sentence matrices. We use the singular value decomposition to approximate the term-by-sentence matrices and assemble these results to form the pseudo-``term-document'' matrix that forms the basis of a text mining method that is an alternative to traditional VSM and LSI. We investigate the singular values of this matrix and provide experimental evidence suggesting that the method can be particularly effective in terms of accuracy for text collections with multi-topic documents, such as web pages with news.<|reference_end|>
arxiv
@article{antonellis2006exploring, title={Exploring term-document matrices from matrix models in text mining}, author={Ioannis Antonellis and Efstratios Gallopoulos}, journal={arXiv preprint arXiv:cs/0602076}, year={2006}, number={03/02-06}, archivePrefix={arXiv}, eprint={cs/0602076}, primaryClass={cs.IR cs.DB cs.DL} }
antonellis2006exploring
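As a rough illustration of the pipeline sketched in the antonellis2006exploring abstract, the Python snippet below approximates each document's term-by-sentence matrix with a truncated SVD and then assembles one column per document into a pseudo term-document matrix. The assembly step used here (summing the approximated sentence columns) is an assumption made for illustration; the paper's exact construction may differ, and the function name and toy data are likewise assumed.

```python
import numpy as np

def pseudo_term_document_matrix(term_by_sentence, k=2):
    """
    term_by_sentence: list of (n_terms x n_sentences) arrays, one per document,
    all over the same term vocabulary. Each matrix is replaced by its rank-k
    truncated SVD, the approximated sentence columns are summed into a single
    column per document, and the columns are stacked into a pseudo
    term-document matrix.
    """
    cols = []
    for A in term_by_sentence:
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        r = min(k, len(s))
        A_r = (U[:, :r] * s[:r]) @ Vt[:r, :]   # rank-r approximation of A
        cols.append(A_r.sum(axis=1))
    return np.column_stack(cols)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    docs = [rng.integers(0, 3, size=(50, 6)).astype(float) for _ in range(4)]
    print(pseudo_term_document_matrix(docs, k=2).shape)   # (50, 4)
```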
arxiv-673891
cs/0602077
Bisimulations of enrichments
<|reference_start|>Bisimulations of enrichments: In this paper we show that classical notions from automata theory such as simulation and bisimulation can be lifted to the context of enriched categories. The usual properties of bisimulation are nearly all preserved in this new context. The class of enriched functors that correspond to functional bisimulations surjective on objects is investigated and appears "nearly" open in the sense of Joyal and Moerdijk. Seeing change-of-base techniques as a convenient means to define process refinements/abstractions, we give sufficient conditions for a change of base category to preserve bisimilarity. We apply these concepts to Betti's generalized automata, categorical transition systems, and other exotic categories.<|reference_end|>
arxiv
@article{schmitt2006bisimulations, title={Bisimulations of enrichments}, author={Vincent Schmitt and Krzysztof Worytkiewicz}, journal={arXiv preprint arXiv:cs/0602077}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602077}, primaryClass={cs.LO} }
schmitt2006bisimulations
arxiv-673892
cs/0602078
Associative Memory For Reversible Programming and Charge Recovery
<|reference_start|>Associative Memory For Reversible Programming and Charge Recovery: Presented below is an interesting type of associative memory called toggle memory, based on the concept of T flip-flops, as opposed to D flip-flops. Toggle memory supports both reversible programming and charge recovery. Circuits designed using the principles delineated below permit matchlines to charge and discharge with near-zero energy dissipation. The resulting lethargy is compensated for by the massive parallelism of associative memory. Simulation indicates an over 33x reduction in energy dissipation using a sinusoidal power supply at 2 MHz, assuming realistic 50 nm MOSFET models.<|reference_end|>
arxiv
@article{burger2006associative, title={Associative Memory For Reversible Programming and Charge Recovery}, author={John Robert Burger}, journal={arXiv preprint arXiv:cs/0602078}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602078}, primaryClass={cs.AR cs.DC} }
burger2006associative
arxiv-673893
cs/0602079
SISO APP Searches in Lattices with Tanner Graphs
<|reference_start|>SISO APP Searches in Lattices with Tanner Graphs: An efficient, low-complexity, soft-output detector for general lattices is presented, based on their Tanner graph (TG) representations. Closest-point searches in lattices can be performed as non-binary belief propagation on associated TGs; soft-information output is naturally generated in the process; the algorithm requires no backtrack (cf. classic sphere decoding), and extracts extrinsic information. A lattice's coding gain enables equivalence relations between lattice points, which can thereby be partitioned into cosets. Total and extrinsic a posteriori probabilities at the detector's output further enable the use of soft detection information in iterative schemes. The algorithm is illustrated via two scenarios that transmit a 32-point, uncoded super-orthogonal (SO) constellation for multiple-input multiple-output (MIMO) channels, carved from an 8-dimensional non-orthogonal lattice (a direct sum of two 4-dimensional checkerboard lattices): it achieves maximum likelihood performance in quasistatic fading; and, performs close to interference-free transmission, and identically to list sphere decoding, in independent fading with coordinate interleaving and iterative equalization and detection. The latter scenario outperforms the former despite the absence of forward error correction coding, because the inherent lattice coding gain allows the extrinsic information to be refined. The lattice constellation is the same as the one employed in the SO space-time trellis codes first introduced for 2-by-2 MIMO by Ionescu et al., then independently by Jafarkhani and Seshadri. Complexity is log-linear in lattice dimensionality, vs. cubic in sphere decoders.<|reference_end|>
arxiv
@article{ionescu2006siso, title={SISO APP Searches in Lattices with Tanner Graphs}, author={Dumitru Mihai Ionescu, Haidong Zhu}, journal={IEEE Trans. Inf. Theory, pp. 2672-2688, vol. 58, May 2012}, year={2006}, doi={10.1109/TIT.2011.2178130}, archivePrefix={arXiv}, eprint={cs/0602079}, primaryClass={cs.IT cs.DS math.IT} }
ionescu2006siso
arxiv-673894
cs/0602080
Pants Decomposition of the Punctured Plane
<|reference_start|>Pants Decomposition of the Punctured Plane: A pants decomposition of an orientable surface S is a collection of simple cycles that partition S into pants, i.e., surfaces of genus zero with three boundary cycles. Given a set P of n points in the plane, we consider the problem of computing a minimum-total-length pants decomposition of the surface S, which is the plane minus P. We give a polynomial-time approximation scheme using Mitchell's guillotine rectilinear subdivisions. We give a quartic-time algorithm to compute the shortest pants decomposition of S when the cycles are restricted to be axis-aligned boxes, and a quadratic-time algorithm when all the points lie on a line; both exact algorithms use dynamic programming with Yao's speedup.<|reference_end|>
arxiv
@article{poon2006pants, title={Pants Decomposition of the Punctured Plane}, author={Sheung-Hung Poon (1) and Shripad Thite (1) ((1) Department of Mathematics and Computer Science, Technische Universiteit Eindhoven, The Netherlands)}, journal={arXiv preprint arXiv:cs/0602080}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602080}, primaryClass={cs.CG} }
poon2006pants
arxiv-673895
cs/0602081
Low-Density Parity-Check Code with Fast Decoding Speed
<|reference_start|>Low-Density Parity-Check Code with Fast Decoding Speed: Low-Density Parity-Check (LDPC) codes have received much attention recently due to their capacity-approaching performance. The iterative message-passing algorithm is a widely adopted decoding algorithm for LDPC codes \cite{Kschischang01}. An important design issue for LDPC codes is designing codes with fast decoding speed while maintaining capacity-approaching performance. In other words, it is desirable that the code can be successfully decoded in a small number of decoding iterations while, at the same time, achieving a significant portion of the channel capacity. Despite its importance, this design issue has received little attention so far. In this paper, we address this design issue for the case of the binary erasure channel. We prove that density-efficient capacity-approaching LDPC codes satisfy a so-called "flatness condition". We show an asymptotic approximation to the number of decoding iterations. Based on these facts, we propose an approximate optimization approach to finding codes with good decoding speed. We further show that the optimal codes in the sense of decoding speed are "right-concentrated". That is, the degrees of check nodes concentrate around the average right degree.<|reference_end|>
arxiv
@article{ma2006low-density, title={Low-Density Parity-Check Code with Fast Decoding Speed}, author={Xudong Ma, En-hui Yang}, journal={arXiv preprint arXiv:cs/0602081}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602081}, primaryClass={cs.IT math.IT} }
ma2006low-density
arxiv-673896
cs/0602082
Digital Libraries: From Process Modelling to Grid-based Service Oriented Architecture
<|reference_start|>Digital Libraries: From Process Modelling to Grid-based Service Oriented Architecture: Graphical Business Process Modelling Languages (BPML) like Role Activity Diagrams (RAD) provide ease and flexibility for modelling business behaviour. However, these languages show limited applicability in terms of enactment over distributed systems paradigms like Service Oriented Architecture (SOA) based grid computing. This paper investigates RAD modelling of a Scientific Publishing Process (SPP) for Digital Libraries (DL) and tries to determine the suitability of Pi-Calculus based formal approaches for enactment over SOA based grid computing. To this end, the Pi-Calculus based formal transformation of a RAD model of the SPP for DL draws attention to a number of challenging issues, including issues that require particular design considerations for appropriate enactment in an SOA based grid system.<|reference_end|>
arxiv
@article{khan2006digital, title={Digital Libraries: From Process Modelling to Grid-based Service Oriented Architecture}, author={Zaheer Abbas Khan, Mohammed Odeh, Richard McClatchey}, journal={arXiv preprint arXiv:cs/0602082}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602082}, primaryClass={cs.DL cs.SE} }
khan2006digital
arxiv-673897
cs/0602083
A third level trigger programmable on FPGA for the gamma/hadron separation in a Cherenkov telescope using pseudo-Zernike moments and the SVM classifier
<|reference_start|>A third level trigger programmable on FPGA for the gamma/hadron separation in a Cherenkov telescope using pseudo-Zernike moments and the SVM classifier: We studied the application of Pseudo-Zernike features as image parameters (instead of the Hillas parameters) for discriminating between the images produced by atmospheric electromagnetic showers caused by gamma-rays and those caused by hadrons in the MAGIC Experiment. We used a Support Vector Machine as the classification algorithm, with the computed Pseudo-Zernike features as classification parameters. We implemented a kernel function of the SVM and the Pseudo-Zernike features on an FPGA board to build a third-level trigger for the gamma-hadron separation task of the MAGIC Experiment.<|reference_end|>
arxiv
@article{frailis2006a, title={A third level trigger programmable on FPGA for the gamma/hadron separation in a Cherenkov telescope using pseudo-Zernike moments and the SVM classifier}, author={Marco Frailis, Oriana Mansutti, Praveen Boinee, Giuseppe Cabras, Alessandro De Angelis, Barbara De Lotto, Alberto Forti, Mauro Dell'Orso, Riccardo Paoletti, Angelo Scribano, Nicola Turini, Mose' Mariotti, Luigi Peruzzo, Antonio Saggion}, journal={arXiv preprint arXiv:cs/0602083}, year={2006}, doi={10.1142/9789812773548_0024}, archivePrefix={arXiv}, eprint={cs/0602083}, primaryClass={cs.CV cs.AI} }
frailis2006a
arxiv-673898
cs/0602084
Universal Codes as a Basis for Time Series Testing
<|reference_start|>Universal Codes as a Basis for Time Series Testing: We suggest a new approach to hypothesis testing for ergodic and stationary processes. In contrast to standard methods, the suggested approach makes it possible to construct tests based on any lossless data compression method, even if the distribution law of the codeword lengths is not known. We apply this approach to the following four problems: goodness-of-fit testing (or identity testing), testing for independence, testing of serial independence, and homogeneity testing, and we suggest nonparametric statistical tests for these problems. It is important to note that so-called archivers, which are used in practice, can be used for the suggested testing.<|reference_end|>
arxiv
@article{ryabko2006universal, title={Universal Codes as a Basis for Time Series Testing}, author={Boris Ryabko, Jaakko Astola}, journal={arXiv preprint arXiv:cs/0602084}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602084}, primaryClass={cs.IT math.IT} }
ryabko2006universal
arxiv-673899
cs/0602085
Twenty (or so) Questions: $D$-ary Length-Bounded Prefix Coding
<|reference_start|>Twenty (or so) Questions: $D$-ary Length-Bounded Prefix Coding: Efficient optimal prefix coding has long been accomplished via the Huffman algorithm. However, there is still room for improvement and exploration regarding variants of the Huffman problem. Length-limited Huffman coding, useful for many practical applications, is one such variant, for which codes are restricted to the set of codes in which none of the $n$ codewords is longer than a given length, $l_{\max}$. Binary length-limited coding can be done in $O(n l_{\max})$ time and $O(n)$ space via the widely used Package-Merge algorithm, and with even smaller asymptotic complexity using a lesser-known algorithm. In this paper these algorithms are generalized without increasing complexity in order to introduce a minimum codeword length constraint $l_{\min}$, to allow for objective functions other than the minimization of expected codeword length, and to be applicable to both binary and nonbinary codes; nonbinary codes were previously addressed using a slower dynamic programming approach. These extensions have various applications -- including fast decompression and a modified version of the game ``Twenty Questions'' -- and can be used to solve the problem of finding an optimal code with limited fringe, that is, finding the best code among codes with a maximum difference between the longest and shortest codewords. The previously proposed method for solving this problem required nonpolynomial time, whereas solving this using the novel linear-space algorithm requires only $O(n (l_{\max}- l_{\min})^2)$ time, or even less if $l_{\max}- l_{\min}$ is not $O(\log n)$.<|reference_end|>
arxiv
@article{baer2006twenty, title={Twenty (or so) Questions: $D$-ary Length-Bounded Prefix Coding}, author={Michael B. Baer}, journal={arXiv preprint arXiv:cs/0602085}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602085}, primaryClass={cs.IT cs.DS math.IT} }
baer2006twenty
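The baer2006twenty record builds on binary length-limited coding via the Package-Merge algorithm. As background only, the sketch below computes optimal codeword lengths under a maximum-length constraint with the classical binary Package-Merge (coin-collector) method; it does not include the paper's extensions (minimum-length constraints, other objectives, $D$-ary alphabets), and the function name and example are illustrative assumptions.

```python
def package_merge_lengths(weights, L):
    """
    Binary length-limited coding: return codeword lengths l_i <= L minimizing
    sum(w_i * l_i) subject to the Kraft inequality, via Package-Merge.
    """
    n = len(weights)
    if n == 1:
        return [1]
    if 2 ** L < n:
        raise ValueError("length limit too small for this alphabet")
    singletons = sorted((w, (i,)) for i, w in enumerate(weights))
    carried = []                       # packages carried up from the deeper level
    lengths = [0] * n
    for level in range(L, 0, -1):
        merged = sorted(singletons + carried)
        if level > 1:
            # package adjacent pairs (cheapest first); an unpaired leftover is dropped
            carried = [
                (merged[k][0] + merged[k + 1][0], merged[k][1] + merged[k + 1][1])
                for k in range(0, len(merged) - 1, 2)
            ]
        else:
            # at the top level, select the cheapest 2n-2 items/packages;
            # each symbol's codeword length is the number of times it was selected
            for _, symbols in merged[: 2 * n - 2]:
                for i in symbols:
                    lengths[i] += 1
    return lengths

if __name__ == "__main__":
    print(package_merge_lengths([1, 1, 2, 3], L=3))   # expected [3, 3, 2, 1]
```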
arxiv-673900
cs/0602086
On the Block Error Probability of LP Decoding of LDPC Codes
<|reference_start|>On the Block Error Probability of LP Decoding of LDPC Codes: In his thesis, Wiberg showed the existence of thresholds for families of regular low-density parity-check codes under min-sum algorithm decoding. He also derived analytic bounds on these thresholds. In this paper, we formulate similar results for linear programming decoding of regular low-density parity-check codes.<|reference_end|>
arxiv
@article{koetter2006on, title={On the Block Error Probability of LP Decoding of LDPC Codes}, author={Ralf Koetter and Pascal O. Vontobel}, journal={Proc. Inaugural Workshop of the Center for Information Theory and its Applications, UC San Diego, San Diego, CA, USA, Feb. 6-10, 2006}, year={2006}, archivePrefix={arXiv}, eprint={cs/0602086}, primaryClass={cs.IT math.IT} }
koetter2006on