corpus_id stringlengths 7-12 | paper_id stringlengths 9-16 | title stringlengths 1-261 | abstract stringlengths 70-4.02k | source stringclasses 1 value | bibtex stringlengths 208-20.9k | citation_key stringlengths 6-100 |
---|---|---|---|---|---|---|
arxiv-673201 | cs/0508062 | Decoding of Expander Codes at Rates Close to Capacity | <|reference_start|>Decoding of Expander Codes at Rates Close to Capacity: The decoding error probability of codes is studied as a function of their block length. It is shown that the existence of codes with a polynomially small decoding error probability implies the existence of codes with an exponentially small decoding error probability. Specifically, it is assumed that there exists a family of codes of length N and rate R=(1-\epsilon)C (C is a capacity of a binary symmetric channel), whose decoding probability decreases polynomially in 1/N. It is shown that if the decoding probability decreases sufficiently fast, but still only polynomially fast in 1/N, then there exists another such family of codes whose decoding error probability decreases exponentially fast in N. Moreover, if the decoding time complexity of the assumed family of codes is polynomial in N and 1/\epsilon, then the decoding time complexity of the presented family is linear in N and polynomial in 1/\epsilon. These codes are compared to the recently presented codes of Barg and Zemor, ``Error Exponents of Expander Codes,'' IEEE Trans. Inform. Theory, 2002, and ``Concatenated Codes: Serial and Parallel,'' IEEE Trans. Inform. Theory, 2005. It is shown that the latter families can not be tuned to have exponentially decaying (in N) error probability, and at the same time to have decoding time complexity linear in N and polynomial in 1/\epsilon.<|reference_end|> | arxiv | @article{ashikhmin2005decoding,
title={Decoding of Expander Codes at Rates Close to Capacity},
author={Alexei Ashikhmin, Vitaly Skachek},
journal={arXiv preprint arXiv:cs/0508062},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508062},
primaryClass={cs.IT math.IT}
} | ashikhmin2005decoding |
arxiv-673202 | cs/0508063 | Disks, Partitions, Volumes and RAID Performance with the Linux Operating System | <|reference_start|>Disks, Partitions, Volumes and RAID Performance with the Linux Operating System: Block devices in computer operating systems typically correspond to disks or disk partitions, and are used to store files in a filesystem. Disks are not the only real or virtual devices which adhere to the block-accessible, stream-of-bytes block device model. Files, remote devices, or even RAM may be used as virtual disks. This article examines several common combinations of block device layers used as virtual disks in the Linux operating system: disk partitions, loopback files, software RAID, Logical Volume Manager, and Network Block Devices. It measures their relative performance using different filesystems: Ext2, Ext3, ReiserFS, JFS, XFS, NFS.<|reference_end|> | arxiv | @article{dagenais2005disks,
title={Disks, Partitions, Volumes and RAID Performance with the Linux Operating
System},
author={Michel R. Dagenais (Dept. of Computer Engineering, Ecole
Polytechnique, Montreal, Canada)},
journal={arXiv preprint arXiv:cs/0508063},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508063},
primaryClass={cs.PF cs.OS}
} | dagenais2005disks |
arxiv-673203 | cs/0508064 | Layered Orthogonal Lattice Detector for Two Transmit Antenna Communications | <|reference_start|>Layered Orthogonal Lattice Detector for Two Transmit Antenna Communications: A novel detector for multiple-input multiple-output (MIMO) communications is presented. The algorithm belongs to the class of the lattice detectors, i.e. it finds a reduced complexity solution to the problem of finding the closest vector to the received observations. The algorithm achieves optimal maximum-likelihood (ML) performance in case of two transmit antennas, at the same time keeping a complexity much lower than the exhaustive search-based ML detection technique. Also, differently from the state-of-art lattice detector (namely sphere decoder), the proposed algorithm is suitable for a highly parallel hardware architecture and for a reliable bit soft-output information generation, thus making it a promising option for real-time high-data rate transmission.<|reference_end|> | arxiv | @article{siti2005layered,
title={Layered Orthogonal Lattice Detector for Two Transmit Antenna
Communications},
author={Massimiliano Siti, Michael P. Fitz},
journal={arXiv preprint arXiv:cs/0508064},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508064},
primaryClass={cs.IT math.IT}
} | siti2005layered |
arxiv-673204 | cs/0508065 | Representing Digital Assets using MPEG-21 Digital Item Declaration | <|reference_start|>Representing Digital Assets using MPEG-21 Digital Item Declaration: Various XML-based approaches aimed at representing compound digital assets have emerged over the last several years. Approaches that are of specific relevance to the digital library community include the Metadata Encoding and Transmission Standard (METS), the IMS Content Packaging XML Binding, and the XML Formatted Data Units (XFDU) developed by CCSDS Panel 2. The MPEG-21 Digital Item Declaration (MPEG-21 DID) is another standard specifying the representation of digital assets in XML that, so far, has received little attention in the digital library community. This article gives a brief insight into the MPEG-21 standardization effort, highlights the major characteristics of the MPEG-21 DID Abstract Model, and describes the MPEG-21 Digital Item Declaration Language (MPEG-21 DIDL), an XML syntax for the representation of digital assets based on the MPEG-21 DID Abstract Model. Also, it briefly demonstrates the potential relevance of MPEG-21 DID to the digital library community by describing its use in the aDORe repository environment at the Research Library of the Los Alamos National Laboratory (LANL) for the representation of digital assets.<|reference_end|> | arxiv | @article{bekaert2005representing,
title={Representing Digital Assets using MPEG-21 Digital Item Declaration},
author={Jeroen Bekaert, Herbert Van de Sompel},
journal={arXiv preprint arXiv:cs/0508065},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508065},
primaryClass={cs.DL cs.AR}
} | bekaert2005representing |
arxiv-673205 | cs/0508066 | Can Small Museums Develop Compelling, Educational and Accessible Web Resources? The Case of Accademia Carrara | <|reference_start|>Can Small Museums Develop Compelling, Educational and Accessible Web Resources? The Case of Accademia Carrara: Due to the lack of budget, competence, personnel and time, small museums are often unable to develop compelling, educational and accessible web resources for their permanent collections or temporary exhibitions. In an attempt to prove that investing in these types of resources can be very fruitful even for small institutions, we will illustrate the case of Accademia Carrara, a museum in Bergamo, northern Italy, which, for a current temporary exhibition on Cezanne and Renoir's masterpieces from the Paul Guillaume collection, developed a series of multimedia applications, including an accessible website, rich in content and educational material [www.cezannerenoir.it].<|reference_end|> | arxiv | @article{filippini-fantoni2005can,
title={Can Small Museums Develop Compelling, Educational and Accessible Web
Resources? The Case of Accademia Carrara},
author={Silvia Filippini-Fantoni and Jonathan P. Bowen},
journal={In James Hemsley, Vito Cappellini and Gerd Stanke (eds.), EVA 2005
London Conference Proceedings, University College London, UK, 25-29 July
2005, pages 18.1-18.14. ISBN: 0-9543146-6-2},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508066},
primaryClass={cs.MM cs.CY cs.DL cs.IR}
} | filippini-fantoni2005can |
arxiv-673206 | cs/0508067 | Copyright and Promotion: Oxymoron or Opportunity? | <|reference_start|>Copyright and Promotion: Oxymoron or Opportunity?: Copyright in the cultural sphere can act as a barrier to the dissemination of high-quality information. On the other hand it protects works of art that might not be made available otherwise. This dichotomy makes the area of copyright difficult, especially when it applies to the digital arena of the web where copying is so easy and natural. Here we present a snapshot of the issues for online copyright, with particular emphasis on the relevance to cultural institutions. We concentrate on Europe and the US; as an example we include a special section dedicated to the situation in Italy.<|reference_end|> | arxiv | @article{numerico2005copyright,
title={Copyright and Promotion: Oxymoron or Opportunity?},
author={Teresa Numerico and Jonathan P. Bowen},
journal={In James Hemsley, Vito Cappellini and Gerd Stanke (eds.), EVA 2005
London Conference Proceedings, University College London, UK, 25-29 July
2005, pages 25.1-25.10. ISBN: 0-9543146-6-2},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508067},
primaryClass={cs.CY cs.DL}
} | numerico2005copyright |
arxiv-673207 | cs/0508068 | Lossy source encoding via message-passing and decimation over generalized codewords of LDGM codes | <|reference_start|>Lossy source encoding via message-passing and decimation over generalized codewords of LDGM codes: We describe message-passing and decimation approaches for lossy source coding using low-density generator matrix (LDGM) codes. In particular, this paper addresses the problem of encoding a Bernoulli(0.5) source: for randomly generated LDGM codes with suitably irregular degree distributions, our methods yield performance very close to the rate distortion limit over a range of rates. Our approach is inspired by the survey propagation (SP) algorithm, originally developed by Mezard et al. for solving random satisfiability problems. Previous work by Maneva et al. shows how SP can be understood as belief propagation (BP) for an alternative representation of satisfiability problems. In analogy to this connection, our approach is to define a family of Markov random fields over generalized codewords, from which local message-passing rules can be derived in the standard way. The overall source encoding method is based on message-passing, setting a subset of bits to their preferred values (decimation), and reducing the code.<|reference_end|> | arxiv | @article{wainwright2005lossy,
title={Lossy source encoding via message-passing and decimation over
generalized codewords of LDGM codes},
author={Martin J. Wainwright and Elitza Maneva},
journal={arXiv preprint arXiv:cs/0508068},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508068},
primaryClass={cs.IT cs.AI math.IT}
} | wainwright2005lossy |
arxiv-673208 | cs/0508069 | Real Hypercomputation and Continuity | <|reference_start|>Real Hypercomputation and Continuity: By the sometimes so-called 'Main Theorem' of Recursive Analysis, every computable real function is necessarily continuous. We wonder whether and which kinds of HYPERcomputation allow for the effective evaluation of also discontinuous f:R->R. More precisely the present work considers the following three super-Turing notions of real function computability: * relativized computation; specifically given oracle access to the Halting Problem 0' or its jump 0''; * encoding real input x and/or output y=f(x) in weaker ways also related to the Arithmetic Hierarchy; * non-deterministic computation. It turns out that any f:R->R computable in the first or second sense is still necessarily continuous whereas the third type of hypercomputation does provide the required power to evaluate for instance the discontinuous sign function.<|reference_end|> | arxiv | @article{ziegler2005real,
title={Real Hypercomputation and Continuity},
author={Martin Ziegler},
journal={pp.177-206 in Theory of Computing Systems vol.41 (2007)},
year={2005},
doi={10.1007/s00224-006-1343-6},
archivePrefix={arXiv},
eprint={cs/0508069},
primaryClass={cs.LO cs.CC}
} | ziegler2005real |
arxiv-673209 | cs/0508070 | MAP estimation via agreement on (hyper)trees: Message-passing and linear programming | <|reference_start|>MAP estimation via agreement on (hyper)trees: Message-passing and linear programming: We develop and analyze methods for computing provably optimal {\em maximum a posteriori} (MAP) configurations for a subclass of Markov random fields defined on graphs with cycles. By decomposing the original distribution into a convex combination of tree-structured distributions, we obtain an upper bound on the optimal value of the original problem (i.e., the log probability of the MAP assignment) in terms of the combined optimal values of the tree problems. We prove that this upper bound is tight if and only if all the tree distributions share an optimal configuration in common. An important implication is that any such shared configuration must also be a MAP configuration for the original distribution. Next we develop two approaches to attempting to obtain tight upper bounds: (a) a {\em tree-relaxed linear program} (LP), which is derived from the Lagrangian dual of the upper bounds; and (b) a {\em tree-reweighted max-product message-passing algorithm} that is related to but distinct from the max-product algorithm. In this way, we establish a connection between a certain LP relaxation of the mode-finding problem, and a reweighted form of the max-product (min-sum) message-passing algorithm.<|reference_end|> | arxiv | @article{wainwright2005map,
title={MAP estimation via agreement on (hyper)trees: Message-passing and linear
programming},
author={Martin J. Wainwright and Tommi S. Jaakkola and Alan S. Willsky},
journal={arXiv preprint arXiv:cs/0508070},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508070},
primaryClass={cs.IT cs.AI math.IT}
} | wainwright2005map |
arxiv-673210 | cs/0508071 | Every decision tree has an influential variable | <|reference_start|>Every decision tree has an influential variable: We prove that for any decision tree calculating a boolean function $f:\{-1,1\}^n\to\{-1,1\}$, \[ \Var[f] \le \sum_{i=1}^n \delta_i \Inf_i(f), \] where $\delta_i$ is the probability that the $i$th input variable is read and $\Inf_i(f)$ is the influence of the $i$th variable on $f$. The variance, influence and probability are taken with respect to an arbitrary product measure on $\{-1,1\}^n$. It follows that the minimum depth of a decision tree calculating a given balanced function is at least the reciprocal of the largest influence of any input variable. Likewise, any balanced boolean function with a decision tree of depth $d$ has a variable with influence at least $\frac{1}{d}$. The only previous nontrivial lower bound known was $\Omega(d 2^{-d})$. Our inequality has many generalizations, allowing us to prove influence lower bounds for randomized decision trees, decision trees on arbitrary product probability spaces, and decision trees with non-boolean outputs. As an application of our results we give a very easy proof that the randomized query complexity of nontrivial monotone graph properties is at least $\Omega(v^{4/3}/p^{1/3})$, where $v$ is the number of vertices and $p \leq \half$ is the critical threshold probability. This supersedes the milestone $\Omega(v^{4/3})$ bound of Hajnal and is sometimes superior to the best known lower bounds of Chakrabarti-Khot and Friedgut-Kahn-Wigderson.<|reference_end|> | arxiv | @article{o'donnell2005every,
title={Every decision tree has an influential variable},
author={Ryan O'Donnell, Michael Saks, Oded Schramm, Rocco A. Servedio},
journal={arXiv preprint arXiv:cs/0508071},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508071},
primaryClass={cs.CC cs.DM math.PR}
} | o'donnell2005every |
arxiv-673211 | cs/0508072 | On Achievable Rates and Complexity of LDPC Codes for Parallel Channels with Application to Puncturing | <|reference_start|>On Achievable Rates and Complexity of LDPC Codes for Parallel Channels with Application to Puncturing: This paper considers the achievable rates and decoding complexity of low-density parity-check (LDPC) codes over statistically independent parallel channels. The paper starts with the derivation of bounds on the conditional entropy of the transmitted codeword given the received sequence at the output of the parallel channels; the component channels are considered to be memoryless, binary-input, and output-symmetric (MBIOS). These results serve for the derivation of an upper bound on the achievable rates of ensembles of LDPC codes under optimal maximum-likelihood (ML) decoding when their transmission takes place over parallel MBIOS channels. The paper relies on the latter bound for obtaining upper bounds on the achievable rates of ensembles of randomly and intentionally punctured LDPC codes over MBIOS channels. The paper also provides a lower bound on the decoding complexity (per iteration) of ensembles of LDPC codes under message-passing iterative decoding over parallel MBIOS channels; the bound is given in terms of the gap between the rate of these codes for which reliable communication is achievable and the channel capacity. The paper presents a diagram which shows interconnections between the theorems introduced in this paper and some other previously reported results. The setting which serves for the derivation of the bounds on the achievable rates and decoding complexity is general, and the bounds can be applied to other scenarios which can be treated as different forms of communication over parallel channels.<|reference_end|> | arxiv | @article{sason2005on,
title={On Achievable Rates and Complexity of LDPC Codes for Parallel Channels
with Application to Puncturing},
author={Igal Sason and Gil Wiechman},
journal={arXiv preprint arXiv:cs/0508072},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508072},
primaryClass={cs.IT math.IT}
} | sason2005on |
arxiv-673212 | cs/0508073 | Universal Learning of Repeated Matrix Games | <|reference_start|>Universal Learning of Repeated Matrix Games: We study and compare the learning dynamics of two universal learning algorithms, one based on Bayesian learning and the other on prediction with expert advice. Both approaches have strong asymptotic performance guarantees. When confronted with the task of finding good long-term strategies in repeated 2x2 matrix games, they behave quite differently.<|reference_end|> | arxiv | @article{poland2005universal,
title={Universal Learning of Repeated Matrix Games},
author={Jan Poland and Marcus Hutter},
journal={Proc. 15th Annual Machine Learning Conf. of Belgium and The
Netherlands (Benelearn 2006) pages 7-14},
year={2005},
number={IDSIA-18-05},
archivePrefix={arXiv},
eprint={cs/0508073},
primaryClass={cs.LG cs.AI}
} | poland2005universal |
arxiv-673213 | cs/0508074 | Throughput and Delay in Random Wireless Networks with Restricted Mobility | <|reference_start|>Throughput and Delay in Random Wireless Networks with Restricted Mobility: Grossglauser and Tse (2001) introduced a mobile random network model where each node moves independently on a unit disk according to a stationary uniform distribution and showed that a throughput of $\Theta(1)$ is achievable. El Gamal, Mammen, Prabhakar and Shah (2004) showed that the delay associated with this throughput scales as $\Theta(n\log n)$, when each node moves according to an independent random walk. In a later work, Diggavi, Grossglauser and Tse (2002) considered a random network on a sphere with a restricted mobility model, where each node moves along a randomly chosen great circle on the unit sphere. They showed that even with this one-dimensional restriction on mobility, constant throughput scaling is achievable. Thus, this particular mobility restriction does not affect the throughput scaling. This raises the question whether this mobility restriction affects the delay scaling. This paper studies the delay scaling at $\Theta(1)$ throughput for a random network with restricted mobility. First, a variant of the scheme presented by Diggavi, Grossglauser and Tse (2002) is presented and it is shown to achieve $\Theta(1)$ throughput using different (and perhaps simpler) techniques. The exact order of delay scaling for this scheme is determined, somewhat surprisingly, to be of $\Theta(n\log n)$, which is the same as that without the mobility restriction. Thus, this particular mobility restriction \emph{does not} affect either the maximal throughput scaling or the corresponding delay scaling of the network. This happens because under this 1-D restriction, each node is in the proximity of every other node in essentially the same manner as without this restriction.<|reference_end|> | arxiv | @article{mammen2005throughput,
title={Throughput and Delay in Random Wireless Networks with Restricted
Mobility},
author={James Mammen and Devavrat Shah},
journal={arXiv preprint arXiv:cs/0508074},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508074},
primaryClass={cs.IT cs.NI math.IT}
} | mammen2005throughput |
arxiv-673214 | cs/0508075 | Complexity of Networks | <|reference_start|>Complexity of Networks: Network or graph structures are ubiquitous in the study of complex systems. Often, we are interested in complexity trends of these system as it evolves under some dynamic. An example might be looking at the complexity of a food web as species enter an ecosystem via migration or speciation, and leave via extinction. In this paper, a complexity measure of networks is proposed based on the {\em complexity is information content} paradigm. To apply this paradigm to any object, one must fix two things: a representation language, in which strings of symbols from some alphabet describe, or stand for the objects being considered; and a means of determining when two such descriptions refer to the same object. With these two things set, the information content of an object can be computed in principle from the number of equivalent descriptions describing a particular object. I propose a simple representation language for undirected graphs that can be encoded as a bitstring, and equivalence is a topological equivalence. I also present an algorithm for computing the complexity of an arbitrary undirected network.<|reference_end|> | arxiv | @article{standish2005complexity,
title={Complexity of Networks},
author={Russell K. Standish},
journal={in Recent Advances in Artificial Life, Abbass et al. (eds) (World
Scientific: Singapore) p253 (2005).},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508075},
primaryClass={cs.IT math.IT}
} | standish2005complexity |
arxiv-673215 | cs/0508076 | Myopic Coding in Multiple Relay Channels | <|reference_start|>Myopic Coding in Multiple Relay Channels: In this paper, we investigate achievable rates for data transmission from sources to sinks through multiple relay networks. We consider myopic coding, a constrained communication strategy in which each node has only a local view of the network, meaning that nodes can only transmit to and decode from neighboring nodes. We compare this with omniscient coding, in which every node has a global view of the network and all nodes can cooperate. Using Gaussian channels as examples, we find that when the nodes transmit at low power, the rates achievable with two-hop myopic coding are as large as that under omniscient coding in a five-node multiple relay channel and close to that under omniscient coding in a six-node multiple relay channel. These results suggest that we may do local coding and cooperation without compromising much on the transmission rate. Practically, myopic coding schemes are more robust to topology changes because encoding and decoding at a node are not affected when there are changes at remote nodes. Furthermore, myopic coding mitigates the high computational complexity and large buffer/memory requirements of omniscient coding.<|reference_end|> | arxiv | @article{ong2005myopic,
title={Myopic Coding in Multiple Relay Channels},
author={Lawrence Ong and Mehul Motani},
journal={Proceedings of the 2005 IEEE International Symposium on
Information Theory (ISIT 2005), Adelaide Convention Centre, Adelaide,
Australia, pp. 1091-1095, Sep. 4-9 2005.},
year={2005},
doi={10.1109/ISIT.2005.1523508},
archivePrefix={arXiv},
eprint={cs/0508076},
primaryClass={cs.IT math.IT}
} | ong2005myopic |
arxiv-673216 | cs/0508077 | Families of unitary matrices achieving full diversity | <|reference_start|>Families of unitary matrices achieving full diversity: This paper presents an algebraic construction of families of unitary matrices that achieve full diversity. They are obtained as subsets of cyclic division algebras.<|reference_end|> | arxiv | @article{oggier2005families,
title={Families of unitary matrices achieving full diversity},
author={Frederique Oggier, Emmanuel Lequeu},
journal={arXiv preprint arXiv:cs/0508077},
year={2005},
doi={10.1109/ISIT.2005.1523526},
archivePrefix={arXiv},
eprint={cs/0508077},
primaryClass={cs.IT math.IT}
} | oggier2005families |
arxiv-673217 | cs/0508078 | Proceedings of the 15th Workshop on Logic-based methods in Programming Environments WLPE'05 -- October 5, 2005 -- Sitges (Barcelona), Spain | <|reference_start|>Proceedings of the 15th Workshop on Logic-based methods in Programming Environments WLPE'05 -- October 5, 2005 -- Sitges (Barcelona), Spain: This volume contains papers presented at WLPE 2005, 15th International Workshop on Logic-based methods in Programming Environments. The aim of the workshop is to provide an informal meeting for the researchers working on logic-based tools for development and analysis of programs. This year we emphasized two aspects: on one hand the presentation, pragmatics and experiences of tools for logic programming environments; on the other one, logic-based environmental tools for programming in general. The workshop took place in Sitges (Barcelona), Spain as a satellite workshop of the 21th International Conference on Logic Programming (ICLP 2005). This workshop continues the series of successful international workshops on logic programming environments held in Ohio, USA (1989), Eilat, Israel (1990), Paris, France (1991), Washington, USA (1992), Vancouver, Canada (1993), Santa Margherita Ligure, Italy (1994), Portland, USA (1995), Leuven, Belgium and Port Jefferson, USA (1997), Las Cruces, USA (1999), Paphos, Cyprus (2001), Copenhagen, Denmark (2002), Mumbai, India (2003) and Saint Malo, France (2004). We have received eight submissions (2 from France, 2 Spain-US cooperations, one Spain-Argentina cooperation, one from Japan, one from the United Kingdom and one Sweden-France cooperation). Program committee has decided to accept seven papers. This volume contains revised versions of the accepted papers. We are grateful to the authors of the papers, the reviewers and the members of the Program Committee for the help and fruitful discussions.<|reference_end|> | arxiv | @article{serebrenik2005proceedings,
title={Proceedings of the 15th Workshop on Logic-based methods in Programming
Environments WLPE'05 -- October 5, 2005 -- Sitges (Barcelona), Spain},
author={Alexander Serebrenik, Susana Munoz-Hernandez},
journal={arXiv preprint arXiv:cs/0508078},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508078},
primaryClass={cs.PL cs.LO cs.SE}
} | serebrenik2005proceedings |
arxiv-673218 | cs/0508079 | Re-visiting the One-Time Pad | <|reference_start|>Re-visiting the One-Time Pad: In 1949, Shannon proved the perfect secrecy of the Vernam cryptographic system,also popularly known as the One-Time Pad (OTP). Since then, it has been believed that the perfectly random and uncompressible OTP which is transmitted needs to have a length equal to the message length for this result to be true. In this paper, we prove that the length of the transmitted OTP which actually contains useful information need not be compromised and could be less than the message length without sacrificing perfect secrecy. We also provide a new interpretation for the OTP encryption by treating the message bits as making True/False statements about the pad, which we define as a private-object. We introduce the paradigm of private-object cryptography where messages are transmitted by verifying statements about a secret-object. We conclude by suggesting the use of Formal Axiomatic Systems for investing N bits of secret.<|reference_end|> | arxiv | @article{nagaraj2005re-visiting,
title={Re-visiting the One-Time Pad},
author={Nithin Nagaraj and Vivek Vaidya and Prabhakar G Vaidya},
journal={arXiv preprint arXiv:cs/0508079},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508079},
primaryClass={cs.CR}
} | nagaraj2005re-visiting |
arxiv-673219 | cs/0508080 | A 3D RGB Axis-based Color-oriented Cryptography | <|reference_start|>A 3D RGB Axis-based Color-oriented Cryptography: In this document, a formal approach to encrypt, decrypt, transmit and receive information using colors is explored. A piece of information consists of set of symbols with a definite property imposed on the generating set. The symbols are usually encoded using ascii scheme. A linear to 3d transformation is presented. The change of axis from traditional xyz to rgb is highlighted and its effect are studied. A point in this new axis is then represented as a unique color and a vector or matrix is associated with it, making it amenable to standard vector or matrix operations. A formal notion on hybrid cryptography is introduced as the algorithm lies on the boundary of symmetric and asymmetric cryptography. No discussion is complete, without mentioning reference to communication aspects of secure information in a channel. Transmission scheme pertaining to light as carrier is introduced and studied. Key-exchanges do not come under the scope of current frame of document.<|reference_end|> | arxiv | @article{chawla2005a,
title={A 3D RGB Axis-based Color-oriented Cryptography},
author={Kirti Chawla},
journal={arXiv preprint arXiv:cs/0508080},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508080},
primaryClass={cs.CR}
} | chawla2005a |
arxiv-673220 | cs/0508081 | ZEUS - A Domain-Oriented Fact Comparison Based Authentication Protocol | <|reference_start|>ZEUS - A Domain-Oriented Fact Comparison Based Authentication Protocol: In this paper, facts existing in different domains are explored, which are comparable by their end result. Properties of various domains and the facts that are part of such a unit are also presented, examples of comparison and methods of usage as means of zero-knowledge protocols are given, finally a zero-knowledge protocol based on afore-mentioned concept is given.<|reference_end|> | arxiv | @article{chawla2005zeus,
title={ZEUS - A Domain-Oriented Fact Comparison Based Authentication Protocol},
author={Kirti Chawla},
journal={arXiv preprint arXiv:cs/0508081},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508081},
primaryClass={cs.CR}
} | chawla2005zeus |
arxiv-673221 | cs/0508082 | The Structure of Collaborative Tagging Systems | <|reference_start|>The Structure of Collaborative Tagging Systems: Collaborative tagging describes the process by which many users add metadata in the form of keywords to shared content. Recently, collaborative tagging has grown in popularity on the web, on sites that allow users to tag bookmarks, photographs and other content. In this paper we analyze the structure of collaborative tagging systems as well as their dynamical aspects. Specifically, we discovered regularities in user activity, tag frequencies, kinds of tags used, bursts of popularity in bookmarking and a remarkable stability in the relative proportions of tags within a given url. We also present a dynamical model of collaborative tagging that predicts these stable patterns and relates them to imitation and shared knowledge.<|reference_end|> | arxiv | @article{golder2005the,
title={The Structure of Collaborative Tagging Systems},
author={Scott Golder, Bernardo A. Huberman},
journal={arXiv preprint arXiv:cs/0508082},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508082},
primaryClass={cs.DL cs.CY}
} | golder2005the |
arxiv-673222 | cs/0508083 | A General Framework for Codes Involving Redundancy Minimization | <|reference_start|>A General Framework for Codes Involving Redundancy Minimization: A framework with two scalar parameters is introduced for various problems of finding a prefix code minimizing a coding penalty function. The framework encompasses problems previously proposed by Huffman, Campbell, Nath, and Drmota and Szpankowski, shedding light on the relationships among these problems. In particular, Nath's range of problems can be seen as bridging the minimum average redundancy problem of Huffman with the minimum maximum pointwise redundancy problem of Drmota and Szpankowski. Using this framework, two linear-time Huffman-like algorithms are devised for the minimum maximum pointwise redundancy problem, the only one in the framework not previously solved with a Huffman-like algorithm. Both algorithms provide solutions common to this problem and a subrange of Nath's problems, the second algorithm being distinguished by its ability to find the minimum variance solution among all solutions common to the minimum maximum pointwise redundancy and Nath problems. Simple redundancy bounds are also presented.<|reference_end|> | arxiv | @article{baer2005a,
title={A General Framework for Codes Involving Redundancy Minimization},
author={Michael B. Baer},
journal={IEEE Transactions on Information Theory (2006)},
year={2005},
doi={10.1109/TIT.2005.860469},
archivePrefix={arXiv},
eprint={cs/0508083},
primaryClass={cs.IT cs.DS math.IT}
} | baer2005a |
arxiv-673223 | cs/0508084 | Source Coding for Quasiarithmetic Penalties | <|reference_start|>Source Coding for Quasiarithmetic Penalties: Huffman coding finds a prefix code that minimizes mean codeword length for a given probability distribution over a finite number of items. Campbell generalized the Huffman problem to a family of problems in which the goal is to minimize not mean codeword length but rather a generalized mean known as a quasiarithmetic or quasilinear mean. Such generalized means have a number of diverse applications, including applications in queueing. Several quasiarithmetic-mean problems have novel simple redundancy bounds in terms of a generalized entropy. A related property involves the existence of optimal codes: For ``well-behaved'' cost functions, optimal codes always exist for (possibly infinite-alphabet) sources having finite generalized entropy. Solving finite instances of such problems is done by generalizing an algorithm for finding length-limited binary codes to a new algorithm for finding optimal binary codes for any quasiarithmetic mean with a convex cost function. This algorithm can be performed using quadratic time and linear space, and can be extended to other penalty functions, some of which are solvable with similar space and time complexity, and others of which are solvable with slightly greater complexity. This reduces the computational complexity of a problem involving minimum delay in a queue, allows combinations of previously considered problems to be optimized, and greatly expands the space of problems solvable in quadratic time and linear space. The algorithm can be extended for purposes such as breaking ties among possibly different optimal codes, as with bottom-merge Huffman coding.<|reference_end|> | arxiv | @article{baer2005source,
title={Source Coding for Quasiarithmetic Penalties},
author={Michael B. Baer},
journal={IEEE Transactions on Information Theory (2006)},
year={2005},
doi={10.1109/TIT.2006.881728},
archivePrefix={arXiv},
eprint={cs/0508084},
primaryClass={cs.IT cs.DS math.IT}
} | baer2005source |
arxiv-673224 | cs/0508085 | On the Asymptotic Performance of Iterative Decoders for Product Codes | <|reference_start|>On the Asymptotic Performance of Iterative Decoders for Product Codes: We consider hard-decision iterative decoders for product codes over the erasure channel, which employ repeated rounds of decoding rows and columns alternatingly. We derive the exact asymptotic probability of decoding failure as a function of the error-correction capabilities of the row and column codes, the number of decoding rounds, and the channel erasure probability. We examine both the case of codes capable of correcting a constant amount of errors, and the case of codes capable of correcting a constant fraction of their length.<|reference_end|> | arxiv | @article{schwartz2005on,
title={On the Asymptotic Performance of Iterative Decoders for Product Codes},
author={Moshe Schwartz, Paul H. Siegel, Alexander Vardy},
journal={arXiv preprint arXiv:cs/0508085},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508085},
primaryClass={cs.IT cs.DM math.IT}
} | schwartz2005on |
arxiv-673225 | cs/0508086 | High-performance BWT-based Encoders | <|reference_start|>High-performance BWT-based Encoders: In 1994, Burrows and Wheeler developed a data compression algorithm which performs significantly better than Lempel-Ziv based algorithms. Since then, a lot of work has been done in order to improve their algorithm, which is based on a reversible transformation of the input string, called BWT (the Burrows-Wheeler transformation). In this paper, we propose a compression scheme based on BWT, MTF (move-to-front coding), and a version of the algorithms presented in [Dragos Trinca, ITCC-2004].<|reference_end|> | arxiv | @article{trinca2005high-performance,
title={High-performance BWT-based Encoders},
author={Dragos Trinca},
journal={arXiv preprint arXiv:cs/0508086},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508086},
primaryClass={cs.DS}
} | trinca2005high-performance |
arxiv-673226 | cs/0508087 | Modelling the Eulerian Path Problem using a String Matching Framework | <|reference_start|>Modelling the Eulerian Path Problem using a String Matching Framework: The well-known Eulerian path problem can be solved in polynomial time (more exactly, there exists a linear time algorithm for this problem). In this paper, we model the problem using a string matching framework, and then initiate an algorithmic study on a variant of this problem, called the (2,1)-STRING-MATCH problem (which is actually a generalization of the Eulerian path problem). Then, we present a polynomial-time algorithm for the (2,1)-STRING-MATCH problem, which is the most important result of this paper. Specifically, we get a lower bound of Omega(n), and an upper bound of O(n^{2}).<|reference_end|> | arxiv | @article{trinca2005modelling,
title={Modelling the Eulerian Path Problem using a String Matching Framework},
author={Dragos Trinca},
journal={arXiv preprint arXiv:cs/0508087},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508087},
primaryClass={cs.DS}
} | trinca2005modelling |
arxiv-673227 | cs/0508088 | Special Cases of Encodings by Generalized Adaptive Codes | <|reference_start|>Special Cases of Encodings by Generalized Adaptive Codes: Adaptive (variable-length) codes associate variable-length codewords to symbols being encoded depending on the previous symbols in the input data string. This class of codes has been presented in [Dragos Trinca, cs.DS/0505007] as a new class of non-standard variable-length codes. Generalized adaptive codes (GA codes, for short) have been also presented in [Dragos Trinca, cs.DS/0505007] not only as a new class of non-standard variable-length codes, but also as a natural generalization of adaptive codes of any order. This paper is intended to continue developing the theory of variable-length codes by establishing several interesting connections between adaptive codes and other classes of codes. The connections are discussed not only from a theoretical point of view (by proving new results), but also from an applicative one (by proposing several applications). First, we prove that adaptive Huffman encodings and Lempel-Ziv encodings are particular cases of encodings by GA codes. Second, we show that any (n,1,m) convolutional code satisfying certain conditions can be modelled as an adaptive code of order m. Third, we describe a cryptographic scheme based on the connection between adaptive codes and convolutional codes, and present an insightful analysis of this scheme. Finally, we conclude by generalizing adaptive codes to (p,q)-adaptive codes, and discussing connections between adaptive codes and time-varying codes.<|reference_end|> | arxiv | @article{trinca2005special,
title={Special Cases of Encodings by Generalized Adaptive Codes},
author={Dragos Trinca},
journal={arXiv preprint arXiv:cs/0508088},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508088},
primaryClass={cs.IT math.IT}
} | trinca2005special |
arxiv-673228 | cs/0508089 | Modelling the EAH Data Compression Algorithm using Graph Theory | <|reference_start|>Modelling the EAH Data Compression Algorithm using Graph Theory: Adaptive codes associate variable-length codewords to symbols being encoded depending on the previous symbols in the input data string. This class of codes has been introduced in [Dragos Trinca, cs.DS/0505007] as a new class of non-standard variable-length codes. New algorithms for data compression, based on adaptive codes of order one, have been presented in [Dragos Trinca, ITCC-2004], where we have behaviorally shown that for a large class of input data strings, these algorithms substantially outperform the Lempel-Ziv universal data compression algorithm. EAH has been introduced in [Dragos Trinca, cs.DS/0505061], as an improved generalization of these algorithms. In this paper, we present a translation of the EAH algorithm into the graph theory.<|reference_end|> | arxiv | @article{trinca2005modelling,
title={Modelling the EAH Data Compression Algorithm using Graph Theory},
author={Dragos Trinca},
journal={arXiv preprint arXiv:cs/0508089},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508089},
primaryClass={cs.DS}
} | trinca2005modelling |
arxiv-673229 | cs/0508090 | Translating the EAH Data Compression Algorithm into Automata Theory | <|reference_start|>Translating the EAH Data Compression Algorithm into Automata Theory: Adaptive codes have been introduced in [Dragos Trinca, cs.DS/0505007] as a new class of non-standard variable-length codes. These codes associate variable-length codewords to symbols being encoded depending on the previous symbols in the input data string. A new data compression algorithm, called EAH, has been introduced in [Dragos Trinca, cs.DS/0505061], where we have behaviorally shown that for a large class of input data strings, this algorithm substantially outperforms the well-known Lempel-Ziv universal data compression algorithm. In this paper, we translate the EAH encoder into automata theory.<|reference_end|> | arxiv | @article{trinca2005translating,
title={Translating the EAH Data Compression Algorithm into Automata Theory},
author={Dragos Trinca},
journal={arXiv preprint arXiv:cs/0508090},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508090},
primaryClass={cs.DS}
} | trinca2005translating |
arxiv-673230 | cs/0508091 | Extending Prolog with Incomplete Fuzzy Information | <|reference_start|>Extending Prolog with Incomplete Fuzzy Information: Incomplete information is a problem in many aspects of actual environments. Furthermore, in many sceneries the knowledge is not represented in a crisp way. It is common to find fuzzy concepts or problems with some level of uncertainty. There are not many practical systems which handle fuzziness and uncertainty and the few examples that we can find are used by a minority. To extend a popular system (which many programmers are using) with the ability of combining crisp and fuzzy knowledge representations seems to be an interesting issue. Our first work (Fuzzy Prolog) was a language that models $\mathcal{B}([0,1])$-valued Fuzzy Logic. In the Borel algebra, $\mathcal{B}([0,1])$, truth value is represented using unions of intervals of real numbers. This work was more general in truth value representation and propagation than previous works. An interpreter for this language using Constraint Logic Programming over Real numbers (CLP(${\cal R}$)) was implemented and is available in the Ciao system. Now, we enhance our former approach by using default knowledge to represent incomplete information in Logic Programming. We also provide the implementation of this new framework. This new release of Fuzzy Prolog handles incomplete information, it has a complete semantics (the previous one was incomplete as Prolog) and moreover it is able to combine crisp and fuzzy logic in Prolog programs. Therefore, new Fuzzy Prolog is more expressive to represent real world. Fuzzy Prolog inherited from Prolog its incompleteness. The incorporation of default reasoning to Fuzzy Prolog removes this problem and requires a richer semantics which we discuss.<|reference_end|> | arxiv | @article{munoz-hernandez2005extending,
title={Extending Prolog with Incomplete Fuzzy Information},
author={Susana Munoz-Hernandez and Claudio Vaucheret},
journal={arXiv preprint arXiv:cs/0508091},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508091},
primaryClass={cs.PL cs.SE}
} | munoz-hernandez2005extending |
arxiv-673231 | cs/0508092 | Summarizing Reports on Evolving Events; Part I: Linear Evolution | <|reference_start|>Summarizing Reports on Evolving Events; Part I: Linear Evolution: We present an approach for summarization from multiple documents which report on events that evolve through time, taking into account the different document sources. We distinguish the evolution of an event into linear and non-linear. According to our approach, each document is represented by a collection of messages which are then used in order to instantiate the cross-document relations that determine the summary content. The paper presents the summarization system that implements this approach through a case study on linear evolution.<|reference_end|> | arxiv | @article{afantenos2005summarizing,
title={Summarizing Reports on Evolving Events; Part I: Linear Evolution},
author={Stergos D. Afantenos, Vangelis Karkaletsis and Panagiotis
Stamatopoulos},
journal={Edited by Galia Angelova, Kalina Bontcheva, Ruslan Mitkov, Nicolas
Nicolov, and Nikolai Nikolov, Recent Advances in Natural Language Processing
(RANLP 2005). Borovets, Bulgaria: INCOMA, 18-24.},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508092},
primaryClass={cs.CL cs.IR}
} | afantenos2005summarizing |
arxiv-673232 | cs/0508093 | Performance of PPM Multipath Synchronization in the Limit of Large Bandwidth | <|reference_start|>Performance of PPM Multipath Synchronization in the Limit of Large Bandwidth: The acquisition, or synchronization, of the multipath profile for ultrawideband pulse position modulation (PPM) communication systems is considered. Synchronization is critical for the proper operation of PPM based systems. For the multipath channel, it is assumed that channel gains are known, but path delays are unknown. In the limit of large bandwidth, W, it is assumed that the number of paths, L, grows. The delay spread of the channel, M, is proportional to the bandwidth. The rate of growth of L versus M determines whether synchronization can occur. It is shown that if L/sqrt(M) --> 0, then the maximum likelihood synchronizer cannot acquire any of the paths and alternatively if L/M --> 0, the maximum likelihood synchronizer is guaranteed to miss at least one path.<|reference_end|> | arxiv | @article{porrat2005performance,
title={Performance of PPM Multipath Synchronization in the Limit of Large
Bandwidth},
author={Dana Porrat and Urbashi Mitra},
journal={arXiv preprint arXiv:cs/0508093},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508093},
primaryClass={cs.IT math.IT}
} | porrat2005performance |
arxiv-673233 | cs/0508094 | Conference Key Agreement and Quantum Sharing of Classical Secrets with Noisy GHZ States | <|reference_start|>Conference Key Agreement and Quantum Sharing of Classical Secrets with Noisy GHZ States: We propose a wide class of distillation schemes for multi-partite entangled states that are CSS-states. Our proposal provides not only superior efficiency, but also new insights on the connection between CSS-states and bipartite graph states. We then consider the applications of our distillation schemes for two cryptographic tasks--namely, (a) conference key agreement and (b) quantum sharing of classical secrets. In particular, we construct ``prepare-and-measure'' protocols. Also we study the yield of those protocols and the threshold value of the fidelity above which the protocols can function securely. Surprisingly, our protocols will function securely even when the initial state does not violate the standard Bell-inequalities for GHZ states. Experimental realization involving only bi-partite entanglement is also suggested.<|reference_end|> | arxiv | @article{chen2005conference,
title={Conference Key Agreement and Quantum Sharing of Classical Secrets with
Noisy GHZ States},
author={Kai Chen and Hoi-Kwong Lo},
journal={Information Theory, 2005. ISIT 2005. Proceedings. International
Symposium on 4-9 Sept. 2005 Page(s):1607 - 1611},
year={2005},
doi={10.1109/ISIT.2005.1523616},
number={CQIQC-ISIT 2005-CL1},
archivePrefix={arXiv},
eprint={cs/0508094},
primaryClass={cs.IT cs.CR math.IT}
} | chen2005conference |
arxiv-673234 | cs/0508095 | Capacity of Ultra Wide Band Wireless Ad Hoc Networks | <|reference_start|>Capacity of Ultra Wide Band Wireless Ad Hoc Networks: Throughput capacity is a critical parameter for the design and evaluation of ad-hoc wireless networks. Consider n identical randomly located nodes, on a unit area, forming an ad-hoc wireless network. Assuming a fixed per node transmission capability of T bits per second at a fixed range, it has been shown that the uniform throughput capacity per node r(n) is Theta((T)/(sqrt{n log n})), a decreasing function of node density n. However an alternate communication model may also be considered, with each node constrained to a maximum transmit power P_0 and capable of utilizing W Hz of bandwidth. Under the limiting case W rightarrow infinity, such as in Ultra Wide Band (UWB) networks, the uniform throughput per node is O((n log n)^{(alpha-1)/2}) (upper bound) and Omega((n^{(alpha-1)/2})/((log n)^{(alpha+1)/2})) (achievable lower bound). These bounds demonstrate that throughput increases with node density $n$, in contrast to previously published results. This is the result of the large bandwidth, and the assumed power and rate adaptation, which alleviate interference. Thus, the effect of physical layer properties on the capacity of ad hoc wireless networks is demonstrated. Further, the promise of UWB as a physical layer technology for ad-hoc networks is justified.<|reference_end|> | arxiv | @article{negi2005capacity,
title={Capacity of Ultra Wide Band Wireless Ad Hoc Networks},
author={Rohit Negi and Arjunan Rajeswaran},
journal={arXiv preprint arXiv:cs/0508095},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508095},
primaryClass={cs.IT cs.NI math.IT}
} | negi2005capacity |
arxiv-673235 | cs/0508096 | On Multiple User Channels with Causal State Information at the Transmitters | <|reference_start|>On Multiple User Channels with Causal State Information at the Transmitters: We extend Shannon's result on the capacity of channels with state information to multiple user channels. More specifically, we characterize the capacity (region) of degraded broadcast channels and physically degraded relay channels where the channel state information is causally available at the transmitters. We also obtain inner and outer bounds on the capacity region for multiple access channels with causal state information at the transmitters.<|reference_end|> | arxiv | @article{sigurjonsson2005on,
title={On Multiple User Channels with Causal State Information at the
Transmitters},
author={Styrmir Sigurjonsson and Young-Han Kim},
journal={arXiv preprint arXiv:cs/0508096},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508096},
primaryClass={cs.IT math.IT}
} | sigurjonsson2005on |
arxiv-673236 | cs/0508097 | Tightness of LP via Max-product Belief Propagation | <|reference_start|>Tightness of LP via Max-product Belief Propagation: We investigate the question of tightness of linear programming (LP) relaxation for finding a maximum weight independent set (MWIS) in sparse random weighted graphs. We show that an edge-based LP relaxation is asymptotically tight for Erdos-Renyi graph $G(n,c/n)$ for $c \leq 2e$ and random regular graph $G(n,r)$ for $r\leq 4$ when node weights are i.i.d. with exponential distribution of mean 1. We establish these results, through a precise relation between the tightness of LP relaxation and convergence of the max-product belief propagation algorithm. We believe that this novel method of understanding structural properties of combinatorial problems through properties of iterative procedure such as the max-product should be of interest in its own right.<|reference_end|> | arxiv | @article{sanghavi2005tightness,
title={Tightness of LP via Max-product Belief Propagation},
author={Sujay Sanghavi and Devavrat Shah},
journal={arXiv preprint arXiv:cs/0508097},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508097},
primaryClass={cs.DS cs.DM}
} | sanghavi2005tightness |
arxiv-673237 | cs/0508098 | An Explicit Construction of Universally Decodable Matrices | <|reference_start|>An Explicit Construction of Universally Decodable Matrices: Universally decodable matrices can be used for coding purposes when transmitting over slow fading channels. These matrices are parameterized by positive integers $L$ and $n$ and a prime power $q$. Based on Pascal's triangle we give an explicit construction of universally decodable matrices for any non-zero integers $L$ and $n$ and any prime power $q$ where $L \leq q+1$. This is the largest set of possible parameter values since for any list of universally decodable matrices the value $L$ is upper bounded by $q+1$, except for the trivial case $n = 1$. For the proof of our construction we use properties of Hasse derivatives, and it turns out that our construction has connections to Reed-Solomon codes, Reed-Muller codes, and so-called repeated-root cyclic codes. Additionally, we show how universally decodable matrices can be modified so that they remain universally decodable matrices.<|reference_end|> | arxiv | @article{vontobel2005an,
title={An Explicit Construction of Universally Decodable Matrices},
author={Pascal O. Vontobel, Ashwin Ganesan},
journal={arXiv preprint arXiv:cs/0508098},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508098},
primaryClass={cs.IT cs.DM math.IT}
} | vontobel2005an |
arxiv-673238 | cs/0508099 | Search Process and Probabilistic Bifix Approach | <|reference_start|>Search Process and Probabilistic Bifix Approach: An analytical approach to a search process is a mathematical prerequisite for digital synchronization acquisition analysis and optimization. A search is performed for an arbitrary set of sequences within random but not equiprobable L-ary data. This paper derives in detail an expression for probability distribution function, from which other statistical parameters - expected value and variance - can be obtained. The probabilistic nature of (cross-) bifix indicators is shown and application examples are outlined, ranging beyond the usual telecommunication field.<|reference_end|> | arxiv | @article{bajic2005search,
title={Search Process and Probabilistic Bifix Approach},
author={Dragana Bajic, Cedomir Stefanovic, Dejan Vukobratovic},
journal={arXiv preprint arXiv:cs/0508099},
year={2005},
doi={10.1109/ISIT.2005.1523284},
archivePrefix={arXiv},
eprint={cs/0508099},
primaryClass={cs.IT cs.CV math.IT}
} | bajic2005search |
arxiv-673239 | cs/0508100 | A primer on Answer Set Programming | <|reference_start|>A primer on Answer Set Programming: An introduction to the syntax and semantics of Answer Set Programming, intended as a handout for [under]graduate students taking Artificial Intelligence or Logic Programming classes.<|reference_end|> | arxiv | @article{provetti2005a,
title={A primer on Answer Set Programming},
author={Alessandro Provetti},
journal={arXiv preprint arXiv:cs/0508100},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508100},
primaryClass={cs.AI cs.LO}
} | provetti2005a |
arxiv-673240 | cs/0508101 | Maximum Weight Matching via Max-Product Belief Propagation | <|reference_start|>Maximum Weight Matching via Max-Product Belief Propagation: Max-product "belief propagation" is an iterative, local, message-passing algorithm for finding the maximum a posteriori (MAP) assignment of a discrete probability distribution specified by a graphical model. Despite the spectacular success of the algorithm in many application areas such as iterative decoding, computer vision and combinatorial optimization which involve graphs with many cycles, theoretical results about both correctness and convergence of the algorithm are known in few cases (Weiss-Freeman Wainwright, Yeddidia-Weiss-Freeman, Richardson-Urbanke). In this paper we consider the problem of finding the Maximum Weight Matching (MWM) in a weighted complete bipartite graph. We define a probability distribution on the bipartite graph whose MAP assignment corresponds to the MWM. We use the max-product algorithm for finding the MAP of this distribution or equivalently, the MWM on the bipartite graph. Even though the underlying bipartite graph has many short cycles, we find that surprisingly, the max-product algorithm always converges to the correct MAP assignment as long as the MAP assignment is unique. We provide a bound on the number of iterations required by the algorithm and evaluate the computational cost of the algorithm. We find that for a graph of size $n$, the computational cost of the algorithm scales as $O(n^3)$, which is the same as the computational cost of the best known algorithm. Finally, we establish the precise relation between the max-product algorithm and the celebrated {\em auction} algorithm proposed by Bertsekas. This suggests possible connections between dual algorithm and max-product algorithm for discrete optimization problems.<|reference_end|> | arxiv | @article{bayati2005maximum,
title={Maximum Weight Matching via Max-Product Belief Propagation},
author={Mohsen Bayati, Devavrat Shah, Mayank Sharma},
journal={IEEE Transactions on Information Theory, Vol 54 (3), 2008},
year={2005},
doi={10.1109/TIT.2007.915695},
archivePrefix={arXiv},
eprint={cs/0508101},
primaryClass={cs.IT cs.AI math.IT}
} | bayati2005maximum |
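The abstract above relates max-product to Bertsekas's auction algorithm. To make the underlying assignment problem concrete, below is a minimal sketch of the classical forward auction algorithm for maximum weight matching on a complete bipartite graph; it is not the paper's max-product update, and the epsilon choice, function name, and test weights are illustrative assumptions.

```python
def auction_assignment(w, eps=None):
    """Bertsekas-style forward auction for the assignment (complete bipartite
    maximum weight matching) problem. w[i][j] is the integer weight of matching
    row i to column j; with eps < 1/n, the final assignment is optimal."""
    n = len(w)
    eps = eps if eps is not None else 1.0 / (n + 1)
    prices = [0.0] * n
    owner = [None] * n          # owner[j]: row currently holding column j
    assigned = [None] * n       # assigned[i]: column currently held by row i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        values = [w[i][j] - prices[j] for j in range(n)]
        j_best = max(range(n), key=values.__getitem__)
        best = values[j_best]
        second = max((v for j, v in enumerate(values) if j != j_best), default=best)
        prices[j_best] += best - second + eps   # raise the price by the bid increment
        if owner[j_best] is not None:           # previous owner is outbid
            assigned[owner[j_best]] = None
            unassigned.append(owner[j_best])
        owner[j_best] = i
        assigned[i] = j_best
    return assigned


if __name__ == "__main__":
    w = [[5, 1, 3],
         [2, 4, 6],
         [3, 6, 2]]
    print(auction_assignment(w))   # [0, 2, 1], total weight 5 + 6 + 6 = 17
```

Max-product on the same problem instead passes messages along the edges of the corresponding factor graph and, per the abstract, converges to the same assignment whenever the optimum is unique.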
arxiv-673241 | cs/0508102 | Investigations of Process Damping Forces in Metal Cutting | <|reference_start|>Investigations of Process Damping Forces in Metal Cutting: Using finite element software developed for metal cutting by Third Wave Systems, we investigate the forces involved in chatter, a self-sustained oscillation of the cutting tool. The phenomenon is decomposed into a vibrating tool cutting a flat-surface workpiece, and a motionless tool cutting a workpiece with a wavy surface. While cutting the wavy surface, the shear plane was seen to oscillate in advance of the oscillation of the depth of cut, as were the cutting, thrust, and shear plane forces. The vibrating tool was used to investigate process damping through the interaction of the relief face of the tool and the workpiece. Crushing forces are isolated and compared to the contact length between the tool and workpiece. We found that the wavelength dependence of the forces depended on the size of the wavelength relative to the length of the relief face of the tool. The results indicate that the damping force from crushing will be proportional to the cutting speed for short tools, and inversely proportional for long tools.<|reference_end|> | arxiv | @article{stone2005investigations,
title={Investigations of Process Damping Forces in Metal Cutting},
author={Emily Stone, Suhail Ahmed, Abe Askari and Hong Tat},
journal={arXiv preprint arXiv:cs/0508102},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508102},
primaryClass={cs.CE}
} | stone2005investigations |
arxiv-673242 | cs/0508103 | Corpus-based Learning of Analogies and Semantic Relations | <|reference_start|>Corpus-based Learning of Analogies and Semantic Relations: We present an algorithm for learning from unlabeled text, based on the Vector Space Model (VSM) of information retrieval, that can solve verbal analogy questions of the kind found in the SAT college entrance exam. A verbal analogy has the form A:B::C:D, meaning "A is to B as C is to D"; for example, mason:stone::carpenter:wood. SAT analogy questions provide a word pair, A:B, and the problem is to select the most analogous word pair, C:D, from a set of five choices. The VSM algorithm correctly answers 47% of a collection of 374 college-level analogy questions (random guessing would yield 20% correct; the average college-bound senior high school student answers about 57% correctly). We motivate this research by applying it to a difficult problem in natural language processing, determining semantic relations in noun-modifier pairs. The problem is to classify a noun-modifier pair, such as "laser printer", according to the semantic relation between the noun (printer) and the modifier (laser). We use a supervised nearest-neighbour algorithm that assigns a class to a given noun-modifier pair by finding the most analogous noun-modifier pair in the training data. With 30 classes of semantic relations, on a collection of 600 labeled noun-modifier pairs, the learning algorithm attains an F value of 26.5% (random guessing: 3.3%). With 5 classes of semantic relations, the F value is 43.2% (random: 20%). The performance is state-of-the-art for both verbal analogies and noun-modifier relations.<|reference_end|> | arxiv | @article{turney2005corpus-based,
title={Corpus-based Learning of Analogies and Semantic Relations},
author={Peter D. Turney (National Research Council of Canada), Michael L.
Littman (Rutgers University)},
journal={Machine Learning, (2005), 60(1-3), 251-278},
year={2005},
number={NRC-48273},
archivePrefix={arXiv},
eprint={cs/0508103},
primaryClass={cs.LG cs.CL cs.IR}
} | turney2005corpus-based |
arxiv-673243 | cs/0508104 | A Generalised Hadamard Transform | <|reference_start|>A Generalised Hadamard Transform: A Generalised Hadamard Transform for multi-phase or multilevel signals is introduced, which includes the Fourier, Generalised, Discrete Fourier, Walsh-Hadamard and Reverse Jacket Transforms. The jacket construction is formalised and shown to admit a tensor product decomposition. Primary matrices under this decomposition are identified. New examples of primary jacket matrices of orders 8 and 12 are presented.<|reference_end|> | arxiv | @article{horadam2005a,
title={A Generalised Hadamard Transform},
author={K. J. Horadam},
journal={arXiv preprint arXiv:cs/0508104},
year={2005},
doi={10.1109/ISIT.2005.1523490},
archivePrefix={arXiv},
eprint={cs/0508104},
primaryClass={cs.IT cs.DM math.IT}
} | horadam2005a |
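To make the tensor-product decomposition mentioned in the abstract concrete, here is a small sketch that builds Walsh-Hadamard matrices as Kronecker products of the 2x2 primary matrix. It covers only the classical special case, not the generalised or jacket constructions of the paper, and it assumes NumPy is available.

```python
import numpy as np

def walsh_hadamard(k: int) -> np.ndarray:
    """Return the 2^k x 2^k Walsh-Hadamard matrix built as a k-fold Kronecker
    (tensor) product of the 2x2 primary matrix [[1, 1], [1, -1]]."""
    h2 = np.array([[1, 1], [1, -1]])
    h = np.array([[1]])
    for _ in range(k):
        h = np.kron(h, h2)
    return h


if __name__ == "__main__":
    h8 = walsh_hadamard(3)
    # Rows are mutually orthogonal: H H^T = n I with n = 8.
    assert np.array_equal(h8 @ h8.T, 8 * np.eye(8, dtype=int))
    print(h8)
```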
arxiv-673244 | cs/0508105 | A Tracer Driver for Versatile Dynamic Analyses of Constraint Logic Programs | <|reference_start|>A Tracer Driver for Versatile Dynamic Analyses of Constraint Logic Programs: Programs with constraints are hard to debug. In this paper, we describe a general architecture to help develop new debugging tools for constraint programming. The possible tools are fed by a single general-purpose tracer. A tracer-driver is used to adapt the actual content of the trace, according to the needs of the tool. This enables the tools and the tracer to communicate in a client-server scheme. Each tool describes its needs of execution data thanks to event patterns. The tracer driver scrutinizes the execution according to these event patterns and sends only the data that are relevant to the connected tools. Experimental measures show that this approach leads to good performance in the context of constraint logic programming, where a large variety of tools exists and the trace is potentially huge.<|reference_end|> | arxiv | @article{langevine2005a,
title={A Tracer Driver for Versatile Dynamic Analyses of Constraint Logic
Programs},
author={Ludovic Langevine and Mireille Ducasse},
journal={arXiv preprint arXiv:cs/0508105},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508105},
primaryClass={cs.SE}
} | langevine2005a |
arxiv-673245 | cs/0508106 | An Improved Non-Termination Criterion for Binary Constraint Logic Programs | <|reference_start|>An Improved Non-Termination Criterion for Binary Constraint Logic Programs: On one hand, termination analysis of logic programs is now a fairly established research topic within the logic programming community. On the other hand, non-termination analysis seems to remain a much less attractive subject. If we divide this line of research into two kinds of approaches: dynamic versus static analysis, this paper belongs to the latter. It proposes a criterion for detecting non-terminating atomic queries with respect to binary CLP clauses, which strictly generalizes our previous works on this subject. We give a generic operational definition and a logical form of this criterion. Then we show that the logical form is correct and complete with respect to the operational definition.<|reference_end|> | arxiv | @article{payet2005an,
title={An Improved Non-Termination Criterion for Binary Constraint Logic
Programs},
author={Etienne Payet and Fred Mesnard},
journal={arXiv preprint arXiv:cs/0508106},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508106},
primaryClass={cs.PL}
} | payet2005an |
arxiv-673246 | cs/0508107 | New Upper Bounds on A(n,d) | <|reference_start|>New Upper Bounds on A(n,d): Upper bounds on the maximum number of codewords in a binary code of a given length and minimum Hamming distance are considered. New bounds are derived by a combination of linear programming and counting arguments. Some of these bounds improve on the best known analytic bounds. Several new record bounds are obtained for codes with small lengths.<|reference_end|> | arxiv | @article{mounits2005new,
title={New Upper Bounds on A(n,d)},
author={Beniamin Mounits (1), Tuvi Etzion (1) and Simon Litsyn (2) ((1)
Technion - Israel Institute of Technology, (2) Tel Aviv University)},
journal={arXiv preprint arXiv:cs/0508107},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508107},
primaryClass={cs.IT cs.DM math.IT}
} | mounits2005new |
arxiv-673247 | cs/0508108 | Proving or Disproving likely Invariants with Constraint Reasoning | <|reference_start|>Proving or Disproving likely Invariants with Constraint Reasoning: A program invariant is a property that holds for every execution of the program. Recent work suggests inferring likely-only invariants via dynamic analysis. A likely invariant is a property that holds for some executions but is not guaranteed to hold for all executions. In this paper, we present work in progress addressing the challenging problem of automatically verifying that likely invariants are actual invariants. We propose a constraint-based reasoning approach that is able, unlike other approaches, to either prove or disprove likely invariants. In the latter case, our approach provides counter-examples. We illustrate the approach on a motivating example where automatically generated likely invariants are verified.<|reference_end|> | arxiv | @article{denmat2005proving,
title={Proving or Disproving likely Invariants with Constraint Reasoning},
author={Tristan Denmat, Arnaud Gotlieb and Mireille Ducasse},
journal={arXiv preprint arXiv:cs/0508108},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508108},
primaryClass={cs.SE cs.PL}
} | denmat2005proving |
arxiv-673248 | cs/0508109 | Enhancing the Alloy Analyzer with Patterns of Analysis | <|reference_start|>Enhancing the Alloy Analyzer with Patterns of Analysis: Formal techniques have been shown to be useful in the development of correct software. But the level of expertise required of practitioners of these techniques prohibits their widespread adoption. Formal techniques need to be tailored to the commercial software developer. Alloy is a lightweight specification language supported by the Alloy Analyzer (AA), a tool based on off-the-shelf SAT technology. The tool allows a user to check interactively whether given properties are consistent or valid with respect to a high-level specification, providing an environment in which the correctness of such a specification may be established. However, Alloy is not particularly suited to expressing program specifications and the feedback provided by AA can be misleading where the specification under analysis or the property being checked contains inconsistencies. In this paper, we address these two shortcomings. Firstly, we present a lightweight language called "Loy", tailored to the specification of object-oriented programs. An encoding of Loy into Alloy is provided so that AA can be used for automated analysis of Loy program specifications. Secondly, we present some "patterns of analysis" that guide a developer through the analysis of a Loy specification in order to establish its correctness before implementation.<|reference_end|> | arxiv | @article{heaven2005enhancing,
title={Enhancing the Alloy Analyzer with Patterns of Analysis},
author={William Heaven and Alessandra Russo},
journal={15th Workshop on Logic Programming Environments (WLPE05), Sitges
(Barcelona), Spain, 2005},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508109},
primaryClass={cs.SE cs.LO}
} | heaven2005enhancing |
arxiv-673249 | cs/0508110 | Relations between semantic security and indistinguishability against cpa, non-adaptive cca and adaptive cca in comparison based framework | <|reference_start|>Relations between semantic security and indistinguishability against cpa, non-adaptive cca and adaptive cca in comparison based framework: In this paper we try to unify the frameworks of definitions of semantic security, indistinguishability and non-malleability by defining semantic security in a comparison-based framework. This facilitates the study of relations among these goals against different attack models and makes the proof of the equivalence of semantic security and indistinguishability easier and more understandable. Besides, our proof of the equivalence of semantic security and indistinguishability does not need any intermediate goals, such as non-dividability, to change the definition framework.<|reference_end|> | arxiv | @article{bagherzandi2005relations,
title={Relations between semantic security and indistinguishability against
cpa, non-adaptive cca and adaptive cca in comparison based framework},
author={Ali Bagherzandi, Kooshiar Azimian, Javad Mohajeri and Mahmoud
Salmasizadeh},
journal={arXiv preprint arXiv:cs/0508110},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508110},
primaryClass={cs.CR}
} | bagherzandi2005relations |
arxiv-673250 | cs/0508111 | A Generic Framework for the Analysis and Specialization of Logic Programs | <|reference_start|>A Generic Framework for the Analysis and Specialization of Logic Programs: The relationship between abstract interpretation and partial deduction has received considerable attention and (partial) integrations have been proposed starting from both the partial deduction and abstract interpretation perspectives. In this work we present what we argue is the first fully described generic algorithm for efficient and precise integration of abstract interpretation and partial deduction. Taking as starting point state-of-the-art algorithms for context-sensitive, polyvariant abstract interpretation and (abstract) partial deduction, we present an algorithm which combines the best of both worlds. Key ingredients include the accurate success propagation inherent to abstract interpretation and the powerful program transformations achievable by partial deduction. In our algorithm, the calls which appear in the analysis graph are not analyzed w.r.t. the original definition of the procedure but w.r.t. specialized definitions of these procedures. Such specialized definitions are obtained by applying both unfolding and abstract executability. Our framework is parametric w.r.t. different control strategies and abstract domains. Different combinations of such parameters correspond to existing algorithms for program analysis and specialization. Simultaneously, our approach opens the door to the efficient computation of strictly more precise results than those achievable by each of the individual techniques. The algorithm is now one of the key components of the CiaoPP analysis and specialization system.<|reference_end|> | arxiv | @article{puebla2005a,
title={A Generic Framework for the Analysis and Specialization of Logic
Programs},
author={German Puebla, Elvira Albert, Manuel Hermenegildo},
journal={arXiv preprint arXiv:cs/0508111},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508111},
primaryClass={cs.PL cs.SE}
} | puebla2005a |
arxiv-673251 | cs/0508112 | A study of set-sharing analysis via cliques | <|reference_start|>A study of set-sharing analysis via cliques: We study the problem of efficient, scalable set-sharing analysis of logic programs. We use the idea of representing sharing information as a pair of abstract substitutions, one of which is a worst-case sharing representation called a clique set, which was previously proposed for the case of inferring pair-sharing. We use the clique-set representation for (1) inferring actual set-sharing information, and (2) analysis within a top-down framework. In particular, we define the abstract functions required by standard top-down analyses, both for sharing alone and also for the case of including freeness in addition to sharing. Our experimental evaluation supports the conclusion that, for inferring set-sharing, as it was the case for inferring pair-sharing, precision losses are limited, while useful efficiency gains are obtained. At the limit, the clique-set representation allowed analyzing some programs that exceeded memory capacity using classical sharing representations.<|reference_end|> | arxiv | @article{navas2005a,
title={A study of set-sharing analysis via cliques},
author={Jorge Navas, Francisco Bueno, Manuel Hermenegildo},
journal={arXiv preprint arXiv:cs/0508112},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508112},
primaryClass={cs.LO}
} | navas2005a |
arxiv-673252 | cs/0508113 | Asymptotically fast polynomial matrix algorithms for multivariable systems | <|reference_start|>Asymptotically fast polynomial matrix algorithms for multivariable systems: We present the asymptotically fastest known algorithms for some basic problems on univariate polynomial matrices: rank, nullspace, determinant, generic inverse, reduced form. We show that they essentially can be reduced to two computer algebra techniques, minimal basis computations and matrix fraction expansion/reconstruction, and to polynomial matrix multiplication. Such reductions eventually imply that all these problems can be solved in about the same amount of time as polynomial matrix multiplication.<|reference_end|> | arxiv | @article{jeannerod2005asymptotically,
title={Asymptotically fast polynomial matrix algorithms for multivariable
systems},
author={Claude-Pierre Jeannerod (LIP), Gilles Villard (LIP)},
journal={arXiv preprint arXiv:cs/0508113},
year={2005},
number={JeVi05},
archivePrefix={arXiv},
eprint={cs/0508113},
primaryClass={cs.SC cs.CC}
} | jeannerod2005asymptotically |
arxiv-673253 | cs/0508114 | A Family of Binary Sequences with Optimal Correlation Property and Large Linear Span | <|reference_start|>A Family of Binary Sequences with Optimal Correlation Property and Large Linear Span: A family of binary sequences is presented and proved to have optimal correlation property and large linear span. It includes the small set of Kasami sequences, No sequence set and TN sequence set as special cases. An explicit lower bound expression on the linear span of sequences in the family is given. With suitable choices of parameters, it is proved that the family has exponentially larger linear spans than both No sequences and TN sequences. A class of ideal autocorrelation sequences is also constructed and proved to have large linear span.<|reference_end|> | arxiv | @article{zeng2005a,
title={A Family of Binary Sequences with Optimal Correlation Property and Large
Linear Span},
author={Xiangyong Zeng, Lei Hu, Qingchong Liu},
journal={arXiv preprint arXiv:cs/0508114},
year={2005},
doi={10.1093/ietfec/e89-a.7.2029},
archivePrefix={arXiv},
eprint={cs/0508114},
primaryClass={cs.CR cs.IT math.IT}
} | zeng2005a |
arxiv-673254 | cs/0508115 | New Sequence Sets with Zero-Correlation Zone | <|reference_start|>New Sequence Sets with Zero-Correlation Zone: A method for constructing sets of sequences with zero-correlation zone (ZCZ sequences) and sequence sets with low cross correlation is proposed. The method is to use families of short sequences and complete orthogonal sequence sets to derive families of long sequences with desired correlation properties. It is a unification of works of Matsufuji and Torii \emph{et al.}, and there are more choices of parameters of sets for our method. In particular, ZCZ sequence sets generated by the method can achieve a related ZCZ bound. Furthermore, the proposed method can be utilized to derive new ZCZ sets with both longer ZCZ and larger set size from known ZCZ sets. These sequence sets are applicable in broadband satellite IP networks.<|reference_end|> | arxiv | @article{zeng2005new,
title={New Sequence Sets with Zero-Correlation Zone},
author={Xiangyong Zeng, Lei Hu, Qingchong Liu},
journal={arXiv preprint arXiv:cs/0508115},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508115},
primaryClass={cs.IT math.IT}
} | zeng2005new |
arxiv-673255 | cs/0508116 | Quantum Algorithm Processors to Reveal Hamiltonian Cycles | <|reference_start|>Quantum Algorithm Processors to Reveal Hamiltonian Cycles: A quantum computer and a quantum algorithm processor in CMOS are compared for finding (in parallel) all Hamiltonian cycles in a graph with m edges and n vertices, each represented by k bits. A quantum computer uses quantum states analogous to CMOS registers. With efficient initialization, the number of CMOS registers is proportional to (n-1)!. The number of qubits in a quantum computer is approximately proportional to kn+2mn in the approach below. Using CMOS, the number of bits per register is roughly proportional to kn, which is smaller since bits can be irreversibly reset. In either concept, the number of gates, or operations, needed to identify Hamiltonian cycles is proportional to kmn. However, a quantum computer needs an additional exponentially large number of operations to accomplish a probabilistic readout. In contrast, CMOS is deterministic and readout is comparable to that of ordinary memory.<|reference_end|> | arxiv | @article{burger2005quantum,
title={Quantum Algorithm Processors to Reveal Hamiltonian Cycles},
author={John Robert Burger},
journal={arXiv preprint arXiv:cs/0508116},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508116},
primaryClass={cs.AR cs.CG}
} | burger2005quantum |
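For scale, the (n-1)! register count in the abstract corresponds to the cyclic orderings that a brute-force search must examine. Below is a minimal sequential sketch of that search, purely for illustration; it says nothing about the CMOS or quantum implementations, and the graph in the example is an arbitrary choice.

```python
from itertools import permutations

def hamiltonian_cycles(n, edges):
    """Enumerate Hamiltonian cycles of an undirected graph on vertices 0..n-1
    by testing the (n-1)! orderings that start at vertex 0. Each undirected
    cycle is reported once per traversal direction."""
    adj = [[False] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True
    cycles = []
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        if all(adj[tour[i]][tour[(i + 1) % n]] for i in range(n)):
            cycles.append(tour)
    return cycles


if __name__ == "__main__":
    # A 4-cycle plus the chord (0, 2): one Hamiltonian cycle, two orientations.
    print(hamiltonian_cycles(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
```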
arxiv-673256 | cs/0508117 | Long-term neuronal behavior caused by two synaptic modification mechanisms | <|reference_start|>Long-term neuronal behavior caused by two synaptic modification mechanisms: We report the first results of simulating the coupling of neuronal, astrocyte, and cerebrovascular activity. It is suggested that the dynamics of the system is different from systems that only include neurons. In the neuron-vascular coupling, distribution of synapse strengths affects neuronal behavior and thus balance of the blood flow; oscillations are induced in the neuron-to-astrocyte coupling.<|reference_end|> | arxiv | @article{shen2005long-term,
title={Long-term neuronal behavior caused by two synaptic modification
mechanisms},
author={Xi Shen (1), Philippe De Wilde (2) ((1) Imperial College London,
United Kingdom, (2) Heriot-Watt University, United Kingdom)},
journal={arXiv preprint arXiv:cs/0508117},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508117},
primaryClass={cs.NE cs.CE}
} | shen2005long-term |
arxiv-673257 | cs/0508118 | Unified Theory of Source Coding: Part I -- Two Terminal Problems | <|reference_start|>Unified Theory of Source Coding: Part I -- Two Terminal Problems: Since the publication of Shannon's theory of one terminal source coding, a number of interesting extensions have been derived by researchers such as Slepian-Wolf, Wyner, Ahlswede-K\"{o}rner, Wyner-Ziv and Berger-Yeung. Specifically, the achievable rate or rate-distortion region has been described by a first order information-theoretic functional of the source statistics in each of the above cases. At the same time several problems have also remained unsolved. Notable two terminal examples include the joint distortion problem, where both sources are reconstructed under a combined distortion criterion, as well as the partial side information problem, where one source is reconstructed under a distortion criterion using information about the other (side information) available at a certain rate (partially). In this paper we solve both of these open problems. Specifically, we give an infinite order description of the achievable rate-distortion region in each case. In our analysis we set the above problems in a general framework and formulate a unified methodology that solves not only the problems at hand but any two terminal problem with noncooperative encoding. The key to such unification is held by a fundamental source coding principle which we derive by extending the typicality arguments of Shannon and Wyner-Ziv. Finally, we demonstrate the expansive scope of our technique by re-deriving known coding theorems. We shall observe that our infinite order descriptions simplify to the expected first order in the known special cases.<|reference_end|> | arxiv | @article{jana2005unified,
title={Unified Theory of Source Coding: Part I -- Two Terminal Problems},
author={Soumya Jana},
journal={arXiv preprint arXiv:cs/0508118},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508118},
primaryClass={cs.IT math.IT}
} | jana2005unified |
arxiv-673258 | cs/0508119 | Unified Theory of Source Coding: Part II -- Multiterminal Problems | <|reference_start|>Unified Theory of Source Coding: Part II -- Multiterminal Problems: In the first paper of this two part communication, we solved in a unified framework a variety of two terminal source coding problems with noncooperative encoders, thereby consolidating works of Shannon, Slepian-Wolf, Wyner, Ahlswede-K\"{o}rner, Wyner-Ziv, Berger {\em et al.} and Berger-Yeung. To achieve such unification we made use of a fundamental principle that dissociates bulk of the analysis from the distortion criterion at hand (if any) and extends the typicality arguments of Shannon and Wyner-Ziv. In this second paper, we generalize the fundamental principle for any number of sources and on its basis exhaustively solve all multiterminal source coding problems with noncooperative encoders and one decoder. The distortion criteria, when applicable, are required to apply to single letters and be bounded. Our analysis includes cases where side information is, respectively, partially available, completely available and altogether unavailable at the decoder. As seen in our first paper, the achievable regions permit infinite order information-theoretic descriptions. We also show that the entropy-constrained multiterminal estimation problem can be solved as a special case of our theory.<|reference_end|> | arxiv | @article{jana2005unified,
title={Unified Theory of Source Coding: Part II -- Multiterminal Problems},
author={Soumya Jana},
journal={arXiv preprint arXiv:cs/0508119},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508119},
primaryClass={cs.IT math.IT}
} | jana2005unified |
arxiv-673259 | cs/0508120 | Iterative Algorithm for Finding Frequent Patterns in Transactional Databases | <|reference_start|>Iterative Algorithm for Finding Frequent Patterns in Transactional Databases: A high-performance algorithm for searching for frequent patterns (FPs) in transactional databases is presented. The search for FPs is carried out by an iterative sieve algorithm that computes a set of enclosed (nested) cycles. In the inner cycle at a given level, FPs composed of the corresponding number of elements are generated. The assigned number of enclosed cycles (the parameter of the problem) defines the maximum length of the desired FPs. The efficiency of the algorithm results from (i) the extremely simple logical searching scheme, (ii) the avoidance of recursive procedures, and (iii) the usage of only one-dimensional arrays of integers.<|reference_end|> | arxiv | @article{berman2005iterative,
title={Iterative Algorithm for Finding Frequent Patterns in Transactional
Databases},
author={Gennady P. Berman, Vyacheslav N. Gorshkov, Edward P. MacKerrow
(Theoretical Division, Los Alamos National Laboratory), and Xidi Wang (Global
Consumer Bank, Citigroup)},
journal={arXiv preprint arXiv:cs/0508120},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508120},
primaryClass={cs.DB}
} | berman2005iterative |
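A hedged sketch of the level-wise "sieve" idea described above: the k-th pass generates k-element patterns, and the number of passes bounds the pattern length. It is a naive set-based illustration, not the paper's integer-array implementation, and the parameter names are invented.

```python
from itertools import combinations

def frequent_patterns(transactions, min_support, max_len):
    """Level-wise search: pass k (the k-th 'enclosed cycle') keeps the
    k-element patterns occurring in at least min_support transactions;
    max_len caps the pattern length, like the assigned number of cycles."""
    tx = [frozenset(t) for t in transactions]
    items = sorted({i for t in tx for i in t})
    frequent = {}
    for k in range(1, max_len + 1):
        level = {}
        for candidate in combinations(items, k):
            support = sum(1 for t in tx if frozenset(candidate) <= t)
            if support >= min_support:
                level[candidate] = support
        if not level:
            break
        frequent.update(level)
    return frequent


if __name__ == "__main__":
    data = [["a", "b", "c"], ["a", "b"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
    print(frequent_patterns(data, min_support=3, max_len=2))
```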
arxiv-673260 | cs/0508121 | How Good is Phase-Shift Keying for Peak-Limited Rayleigh Fading Channels in the Low-SNR Regime? | <|reference_start|>How Good is Phase-Shift Keying for Peak-Limited Rayleigh Fading Channels in the Low-SNR Regime?: This paper investigates the achievable information rate of phase-shift keying (PSK) over frequency non-selective Rayleigh fading channels without channel state information (CSI). The fading process exhibits general temporal correlation characterized by its spectral density function. We consider both discrete-time and continuous-time channels, and find their asymptotics at low signal-to-noise ratio (SNR). Compared to known capacity upper bounds under peak constraints, these asymptotics usually lead to negligible rate loss in the low-SNR regime for slowly time-varying fading channels. We further specialize to case studies of Gauss-Markov and Clarke's fading models.<|reference_end|> | arxiv | @article{zhang2005how,
title={How Good is Phase-Shift Keying for Peak-Limited Rayleigh Fading Channels
in the Low-SNR Regime?},
author={Wenyi Zhang and J. Nicholas Laneman},
journal={arXiv preprint arXiv:cs/0508121},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508121},
primaryClass={cs.IT math.IT}
} | zhang2005how |
arxiv-673261 | cs/0508122 | Streaming and Sublinear Approximation of Entropy and Information Distances | <|reference_start|>Streaming and Sublinear Approximation of Entropy and Information Distances: In many problems in data mining and machine learning, data items that need to be clustered or classified are not points in a high-dimensional space, but are distributions (points on a high dimensional simplex). For distributions, natural measures of distance are not the $\ell_p$ norms and variants, but information-theoretic measures like the Kullback-Leibler distance, the Hellinger distance, and others. Efficient estimation of these distances is a key component in algorithms for manipulating distributions. Thus, sublinear resource constraints, either in time (property testing) or space (streaming) are crucial. We start by resolving two open questions regarding property testing of distributions. Firstly, we show a tight bound for estimating bounded, symmetric f-divergences between distributions in a general property testing (sublinear time) framework (the so-called combined oracle model). This yields optimal algorithms for estimating such well known distances as the Jensen-Shannon divergence and the Hellinger distance. Secondly, we close a $(\log n)/H$ gap between upper and lower bounds for estimating entropy $H$ in this model. In a stream setting (sublinear space), we give the first algorithm for estimating the entropy of a distribution. Our algorithm runs in polylogarithmic space and yields an asymptotic constant factor approximation scheme. We also provide other results along the space/time/approximation tradeoff curve.<|reference_end|> | arxiv | @article{guha2005streaming,
title={Streaming and Sublinear Approximation of Entropy and Information
Distances},
author={Sudipto Guha, Andrew McGregor and Suresh Venkatasubramanian},
journal={arXiv preprint arXiv:cs/0508122},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508122},
primaryClass={cs.DS cs.IT math.IT}
} | guha2005streaming |
arxiv-673262 | cs/0508123 | On Algorithms and Complexity for Sets with Cardinality Constraints | <|reference_start|>On Algorithms and Complexity for Sets with Cardinality Constraints: Typestate systems ensure many desirable properties of imperative programs, including initialization of object fields and correct use of stateful library interfaces. Abstract sets with cardinality constraints naturally generalize typestate properties: relationships between the typestates of objects can be expressed as subset and disjointness relations on sets, and elements of sets can be represented as sets of cardinality one. Motivated by these applications, this paper presents new algorithms and new complexity results for constraints on sets and their cardinalities. We study several classes of constraints and demonstrate a trade-off between their expressive power and their complexity. Our first result concerns a quantifier-free fragment of Boolean Algebra with Presburger Arithmetic. We give a nondeterministic polynomial-time algorithm for reducing the satisfiability of sets with symbolic cardinalities to constraints on constant cardinalities, and give a polynomial-space algorithm for the resulting problem. In a quest for more efficient fragments, we identify several subclasses of sets with cardinality constraints whose satisfiability is NP-hard. Finally, we identify a class of constraints that has polynomial-time satisfiability and entailment problems and can serve as a foundation for efficient program analysis.<|reference_end|> | arxiv | @article{marnette2005on,
title={On Algorithms and Complexity for Sets with Cardinality Constraints},
author={Bruno Marnette, Viktor Kuncak, and Martin Rinard},
journal={arXiv preprint arXiv:cs/0508123},
year={2005},
number={MIT CSAIL Technical Report MIT-LCS-TR-997},
archivePrefix={arXiv},
eprint={cs/0508123},
primaryClass={cs.PL cs.LO cs.SE}
} | marnette2005on |
arxiv-673263 | cs/0508124 | Coding Schemes for Line Networks | <|reference_start|>Coding Schemes for Line Networks: We consider a simple network, where a source and destination node are connected with a line of erasure channels. It is well known that in order to achieve the min-cut capacity, the intermediate nodes are required to process the information. We propose coding schemes for this setting, and discuss each scheme in terms of complexity, delay, achievable rate, memory requirement, and adaptability to unknown channel parameters. We also briefly discuss how these schemes can be extended to more general networks.<|reference_end|> | arxiv | @article{pakzad2005coding,
title={Coding Schemes for Line Networks},
author={Payam Pakzad, Christina Fragouli, Amin Shokrollahi},
journal={arXiv preprint arXiv:cs/0508124},
year={2005},
doi={10.1109/ISIT.2005.1523666},
archivePrefix={arXiv},
eprint={cs/0508124},
primaryClass={cs.IT cs.DC cs.NI math.IT}
} | pakzad2005coding |
arxiv-673264 | cs/0508125 | A Sorting Algorithm Based on Calculation | <|reference_start|>A Sorting Algorithm Based on Calculation: This article introduces an adaptive sorting algorithm that can relocate elements accurately by substituting their values into a function that we call the guessing function. We focus on building this function, which is essentially the mapping between record values and their corresponding sorted locations. The time complexity of this algorithm is O(n) when the records are distributed uniformly. Additionally, a similar approach can be used for searching.<|reference_end|> | arxiv | @article{bao2005a,
title={A Sorting Algorithm Based on Calculation},
author={Sheng Bao, De-Shun Zheng},
journal={arXiv preprint arXiv:cs/0508125},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508125},
primaryClass={cs.DS}
} | bao2005a |
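A minimal sketch of the "guessing function" idea from the abstract: a linear map from a record's value to its approximate slot, followed by cleanup of the small buckets. The linear map and the bucket cleanup are assumptions made for illustration, not the paper's exact construction.

```python
def guessing_sort(records):
    """Relocate each record by a linear 'guessing function' that maps its
    value to a slot index, then sort the (typically tiny) buckets. Runs in
    roughly O(n) time when the values are close to uniformly distributed."""
    n = len(records)
    if n < 2:
        return list(records)
    lo, hi = min(records), max(records)
    if lo == hi:
        return list(records)
    buckets = [[] for _ in range(n)]
    for x in records:
        slot = int((x - lo) / (hi - lo) * (n - 1))   # the guessing function
        buckets[slot].append(x)
    out = []
    for b in buckets:
        b.sort()
        out.extend(b)
    return out


if __name__ == "__main__":
    import random
    data = [random.random() for _ in range(1000)]
    assert guessing_sort(data) == sorted(data)
    print("sorted", len(data), "records")
```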
arxiv-673265 | cs/0508126 | A Closed-Form Solution for the Finite Length Constant Modulus Receiver | <|reference_start|>A Closed-Form Solution for the Finite Length Constant Modulus Receiver: In this paper, a closed-form solution minimizing the Godard or Constant Modulus (CM) cost function under the practical conditions of finite SNR and finite equalizer length is derived. While previous work by Zeng et al. (IEEE Trans. Information Theory, 1998) established the link between the constant modulus and Wiener receivers, we show that, under the Gaussian approximation of intersymbol interference at the output of the equalizer, the CM finite-length receiver is equivalent to the nonblind MMSE equalizer up to a complex gain factor. Some simulation results are provided to support the Gaussian approximation assumption.<|reference_end|> | arxiv | @article{laot2005a,
title={A Closed-Form Solution for the Finite Length Constant Modulus Receiver},
author={Christophe Laot and Nicolas Le Josse},
journal={arXiv preprint arXiv:cs/0508126},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508126},
primaryClass={cs.GT cs.IT math.IT}
} | laot2005a |
arxiv-673266 | cs/0508127 | On context-tree prediction of individual sequences | <|reference_start|>On context-tree prediction of individual sequences: Motivated by the evident success of context-tree based methods in lossless data compression, we explore, in this paper, methods of the same spirit in universal prediction of individual sequences. By context-tree prediction, we refer to a family of prediction schemes, where at each time instant $t$, after having observed all outcomes of the data sequence $x_1,...,x_{t-1}$, but not yet $x_t$, the prediction is based on a ``context'' (or a state) that consists of the $k$ most recent past outcomes $x_{t-k},...,x_{t-1}$, where the choice of $k$ may depend on the contents of a possibly longer, though limited, portion of the observed past, $x_{t-k_{\max}},...x_{t-1}$. This is different from the study reported in [1], where general finite-state predictors as well as ``Markov'' (finite-memory) predictors of fixed order, were studied in the regime of individual sequences. Another important difference between this study and [1] is the asymptotic regime. While in [1], the resources of the predictor (i.e., the number of states or the memory size) were kept fixed regardless of the length $N$ of the data sequence, here we investigate situations where the number of contexts or states is allowed to grow concurrently with $N$. We are primarily interested in the following fundamental question: What is the critical growth rate of the number of contexts, below which the performance of the best context-tree predictor is still universally achievable, but above which it is not? We show that this critical growth rate is linear in $N$. In particular, we propose a universal context-tree algorithm that essentially achieves optimum performance as long as the growth rate is sublinear, and show that, on the other hand, this is impossible in the linear case.<|reference_end|> | arxiv | @article{ziv2005on,
title={On context-tree prediction of individual sequences},
author={Jacob Ziv and Neri Merhav},
journal={arXiv preprint arXiv:cs/0508127},
year={2005},
number={CCIT Technical Report no. 545, Dept. of EE, Technion, July 2005},
archivePrefix={arXiv},
eprint={cs/0508127},
primaryClass={cs.IT math.IT}
} | ziv2005on |
arxiv-673267 | cs/0508128 | Mapping DEVS Models onto UML Models | <|reference_start|>Mapping DEVS Models onto UML Models: Discrete Event System Specification (DEVS) is a formalism designed to describe both discrete-state and continuous-state systems. It is a powerful abstract mathematical notation. However, until recently it lacked a proper graphical representation, which made computer simulation of DEVS models a challenging issue. Unified modeling language (UML) is a multipurpose graphical modeling language, a de facto industrial modeling standard. There exist several commercial and open-source UML editors and code generators. Most of them can save UML models in XML-based XMI files ready for further automated processing. In this paper, we propose a mapping of DEVS models onto UML state and component diagrams. This mapping may lead to an eventual unification of the two modeling formalisms, combining the abstractness of DEVS with the expressive power and ``computer friendliness'' of UML.<|reference_end|> | arxiv | @article{zinoviev2005mapping,
title={Mapping DEVS Models onto UML Models},
author={Dmitry Zinoviev},
journal={D. Zinoviev, "Mapping DEVS Models onto UML Models," Proc. of the
2005 DEVS Integrative M&S Symposium, San Diego, CA, April 2005, pp. 101-106},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508128},
primaryClass={cs.OH}
} | zinoviev2005mapping |
arxiv-673268 | cs/0508129 | Temporal Phylogenetic Networks and Logic Programming | <|reference_start|>Temporal Phylogenetic Networks and Logic Programming: The concept of a temporal phylogenetic network is a mathematical model of evolution of a family of natural languages. It takes into account the fact that languages can trade their characteristics with each other when linguistic communities are in contact, and also that a contact is only possible when the languages are spoken at the same time. We show how computational methods of answer set programming and constraint logic programming can be used to generate plausible conjectures about contacts between prehistoric linguistic communities, and illustrate our approach by applying it to the evolutionary history of Indo-European languages. To appear in Theory and Practice of Logic Programming (TPLP).<|reference_end|> | arxiv | @article{erdem2005temporal,
title={Temporal Phylogenetic Networks and Logic Programming},
author={Esra Erdem, Vladimir Lifschitz, and Don Ringe},
journal={arXiv preprint arXiv:cs/0508129},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508129},
primaryClass={cs.LO cs.AI cs.PL}
} | erdem2005temporal |
arxiv-673269 | cs/0508130 | A Fresh Look at the Reliability of Long-term Digital Storage | <|reference_start|>A Fresh Look at the Reliability of Long-term Digital Storage: Many emerging Web services, such as email, photo sharing, and web site archives, need to preserve large amounts of quickly-accessible data indefinitely into the future. In this paper, we make the case that these applications' demands on large scale storage systems over long time horizons require us to re-evaluate traditional storage system designs. We examine threats to long-lived data from an end-to-end perspective, taking into account not just hardware and software faults but also faults due to humans and organizations. We present a simple model of long-term storage failures that helps us reason about the various strategies for addressing these threats in a cost-effective manner. Using this model we show that the most important strategies for increasing the reliability of long-term storage are detecting latent faults quickly, automating fault repair to make it faster and cheaper, and increasing the independence of data replicas.<|reference_end|> | arxiv | @article{baker2005a,
title={A Fresh Look at the Reliability of Long-term Digital Storage},
author={Mary Baker, Mehul Shah, David S. H. Rosenthal, Mema Roussopoulos,
Petros Maniatis, TJ Giuli, Prashanth Bungale},
journal={arXiv preprint arXiv:cs/0508130},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508130},
primaryClass={cs.DL cs.DB cs.OS}
} | baker2005a |
arxiv-673270 | cs/0508131 | Point Process Models of 1/f Noise and Internet Traffic | <|reference_start|>Point Process Models of 1/f Noise and Internet Traffic: We present a simple model reproducing the long-range autocorrelations and the power spectrum of web traffic. The model represents the traffic as a Poisson flow of files whose sizes are distributed according to a power law. In this model the long-range autocorrelations are independent of the network properties as well as of the inter-packet time distribution.<|reference_end|> | arxiv | @article{gontis2005point,
title={Point Process Models of 1/f Noise and Internet Traffic},
author={V. Gontis, B. Kaulakys, J. Ruseckas},
journal={AIP Conf. Proc. 776, 144 (2005) 144-149},
year={2005},
doi={10.1063/1.1985385},
archivePrefix={arXiv},
eprint={cs/0508131},
primaryClass={cs.NI}
} | gontis2005point |
arxiv-673271 | cs/0508132 | Planning with Preferences using Logic Programming | <|reference_start|>Planning with Preferences using Logic Programming: We present a declarative language, PP, for the high-level specification of preferences between possible solutions (or trajectories) of a planning problem. This novel language allows users to elegantly express non-trivial, multi-dimensional preferences and priorities over such preferences. The semantics of PP allows the identification of most preferred trajectories for a given goal. We also provide an answer set programming implementation of planning problems with PP preferences.<|reference_end|> | arxiv | @article{son2005planning,
title={Planning with Preferences using Logic Programming},
author={Tran Cao Son and Enrico Pontelli},
journal={arXiv preprint arXiv:cs/0508132},
year={2005},
archivePrefix={arXiv},
eprint={cs/0508132},
primaryClass={cs.AI}
} | son2005planning |
arxiv-673272 | cs/0508133 | Decompositions of graphs of functions and efficient iterations of lookup tables | <|reference_start|>Decompositions of graphs of functions and efficient iterations of lookup tables: We show that every function f implemented as a lookup table can be implemented such that the computational complexity of evaluating f^m(x) is small, independently of m and x. The implementation only increases the storage space by a small constant factor.<|reference_end|> | arxiv | @article{tsaban2005decompositions,
title={Decompositions of graphs of functions and efficient iterations of lookup
tables},
author={Boaz Tsaban},
journal={Discrete Applied Mathematics 155 (2007), 386--393},
year={2005},
doi={10.1016/j.dam.2006.06.006},
archivePrefix={arXiv},
eprint={cs/0508133},
primaryClass={cs.CC cs.CR}
} | tsaban2005decompositions |
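The flavour of the result above can be seen in the easy special case of permutations: decompose the functional graph into cycles once, and every f^m(x) becomes a constant-time lookup. This sketch covers only that special case (general lookup tables have rho-shaped components, which the paper handles), and its storage overhead is a small constant factor, consistent with the abstract.

```python
def preprocess_permutation(f):
    """Decompose the functional graph of a permutation f (f[i] is the image
    of i) into cycles, recording each element's cycle and position in it."""
    n = len(f)
    cycle_of, pos_in_cycle, cycles = [None] * n, [0] * n, []
    for start in range(n):
        if cycle_of[start] is not None:
            continue
        cycle, x = [], start
        while cycle_of[x] is None:
            cycle_of[x] = len(cycles)
            pos_in_cycle[x] = len(cycle)
            cycle.append(x)
            x = f[x]
        cycles.append(cycle)
    return cycle_of, pos_in_cycle, cycles


def iterate(x, m, tables):
    """Evaluate f^m(x) in O(1) time using the precomputed cycle tables."""
    cycle_of, pos_in_cycle, cycles = tables
    cycle = cycles[cycle_of[x]]
    return cycle[(pos_in_cycle[x] + m) % len(cycle)]


if __name__ == "__main__":
    f = [2, 0, 3, 1, 4]                 # a permutation of {0, ..., 4}
    tables = preprocess_permutation(f)
    for x in range(5):
        y = x
        for _ in range(7):              # naive O(m) iteration for comparison
            y = f[y]
        assert iterate(x, 7, tables) == y
    print("f^7 via cycle tables agrees with naive iteration")
```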
arxiv-673273 | cs/0509001 | Asymptotic Behavior of Error Exponents in the Wideband Regime | <|reference_start|>Asymptotic Behavior of Error Exponents in the Wideband Regime: In this paper, we complement Verd\'{u}'s work on spectral efficiency in the wideband regime by investigating the fundamental tradeoff between rate and bandwidth when a constraint is imposed on the error exponent. Specifically, we consider both AWGN and Rayleigh-fading channels. For the AWGN channel model, the optimal values of $R_z(0)$ and $\dot{R_z}(0)$ are calculated, where $R_z(1/B)$ is the maximum rate at which information can be transmitted over a channel with bandwidth $B/2$ when the error-exponent is constrained to be greater than or equal to $z.$ Based on this calculation, we say that a sequence of input distributions is near optimal if both $R_z(0)$ and $\dot{R_z}(0)$ are achieved. We show that QPSK, a widely-used signaling scheme, is near-optimal within a large class of input distributions for the AWGN channel. Similar results are also established for a fading channel where full CSI is available at the receiver.<|reference_end|> | arxiv | @article{wu2005asymptotic,
title={Asymptotic Behavior of Error Exponents in the Wideband Regime},
author={X. Wu and R. Srikant},
journal={arXiv preprint arXiv:cs/0509001},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509001},
primaryClass={cs.IT math.IT}
} | wu2005asymptotic |
arxiv-673274 | cs/0509002 | Component Based Programming in Scientific Computing: The Viable Approach | <|reference_start|>Component Based Programming in Scientific Computing: The Viable Approach: Computational scientists are facing a new era where the old ways of developing and reusing code have to be left behind and a few daring steps are to be made towards new horizons. The present work analyzes the needs that drive this change, the factors that contribute to the inertia of the community and slow the transition, the status and perspective of present attempts, the principle, practical and technical problems that are to be addressed in the short and long run.<|reference_end|> | arxiv | @article{lázár2005component,
title={Component Based Programming in Scientific Computing: The Viable Approach},
author={Zsolt I. L'az'ar, Jouke R. Heringa, Bazil P^arv, Simon W. de Leeuw},
journal={arXiv preprint arXiv:cs/0509002},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509002},
primaryClass={cs.CE}
} | lázár2005component |
arxiv-673275 | cs/0509003 | COMODI: Architecture for a Component-Based Scientific Computing System | <|reference_start|>COMODI: Architecture for a Component-Based Scientific Computing System: The COmputational MODule Integrator (COMODI) is an initiative aiming at a component based framework, component developer tool and component repository for scientific computing. We identify the main ingredients to a solution that would be sufficiently appealing to scientists and engineers to consider alternatives to their deeply rooted programming traditions. The overall structure of the complete solution is sketched with special emphasis on the Component Developer Tool standing at the basis of COMODI.<|reference_end|> | arxiv | @article{lázár2005comodi:,
title={COMODI: Architecture for a Component-Based Scientific Computing System},
author={Zsolt I. L'az'ar, Lehel Istv'an Kov'acs, Bazil P^arv},
journal={arXiv preprint arXiv:cs/0509003},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509003},
primaryClass={cs.CE}
} | lázár2005comodi: |
arxiv-673276 | cs/0509004 | Precoloring co-Meyniel graphs | <|reference_start|>Precoloring co-Meyniel graphs: The pre-coloring extension problem consists, given a graph $G$ and a subset of nodes to which some colors are already assigned, in finding a coloring of $G$ with the minimum number of colors which respects the pre-coloring assignment. This can be reduced to the usual coloring problem on a certain contracted graph. We prove that pre-coloring extension is polynomial for complements of Meyniel graphs. We answer a question of Hujter and Tuza by showing that ``PrExt perfect'' graphs are exactly the co-Meyniel graphs, which also generalizes results of Hujter and Tuza and of Hertz. Moreover we show that, given a co-Meyniel graph, the corresponding contracted graph belongs to a restricted class of perfect graphs (``co-Artemis'' graphs, which are ``co-perfectly contractile'' graphs), whose perfectness is easier to establish than the strong perfect graph theorem. However, the polynomiality of our algorithm still depends on the ellipsoid method for coloring perfect graphs.<|reference_end|> | arxiv | @article{jost2005precoloring,
title={Precoloring co-Meyniel graphs},
author={Vincent Jost (Leibniz - IMAG), Benjamin L'ev^eque (Leibniz - IMAG),
Fr'ed'eric Maffray (Leibniz - IMAG)},
journal={Graphs and Combinatorics 23, 3 (07/07/2007) 291-301},
year={2005},
doi={10.1007/s00373-007-0724-1},
archivePrefix={arXiv},
eprint={cs/0509004},
primaryClass={cs.DM}
} | jost2005precoloring |
arxiv-673277 | cs/0509005 | Combining Structured Corporate Data and Document Content to Improve Expertise Finding | <|reference_start|>Combining Structured Corporate Data and Document Content to Improve Expertise Finding: In this paper, we present an algorithm for automatically building expertise evidence for finding experts within an organization by combining structured corporate information with different content. We also describe our test data collection and our evaluation method. Evaluation of the algorithm shows that using organizational structure leads to a significant improvement in the precision of finding an expert. Furthermore we evaluate the impact of using different data sources on the quality of the results and conclude that Expert Finding is not a "one engine fits all" solution. It requires an analysis of the information space into which a solution will be placed and the appropriate selection and weighting scheme of the data sources.<|reference_end|> | arxiv | @article{mclean2005combining,
title={Combining Structured Corporate Data and Document Content to Improve
Expertise Finding},
author={Alistair McLean (CSIRO Ict Center), Mingfang Wu (CSIRO Ict Center),
Anne-Marie Vercoustre (CSIRO Ict Center)},
journal={arXiv preprint arXiv:cs/0509005},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509005},
primaryClass={cs.IR}
} | mclean2005combining |
arxiv-673278 | cs/0509006 | Optimal space-time codes for the MIMO amplify-and-forward cooperative channel | <|reference_start|>Optimal space-time codes for the MIMO amplify-and-forward cooperative channel: In this work, we extend the non-orthogonal amplify-and-forward (NAF) cooperative diversity scheme to the MIMO channel. A family of space-time block codes for a half-duplex MIMO NAF fading cooperative channel with N relays is constructed. The code construction is based on the non-vanishing determinant criterion (NVD) and is shown to achieve the optimal diversity-multiplexing tradeoff (DMT) of the channel. We provide a general explicit algebraic construction, followed by some examples. In particular, in the single relay case, it is proved that the Golden code and the 4x4 Perfect code are optimal for the single-antenna and two-antenna case, respectively. Simulation results reveal that a significant gain (up to 10dB) can be obtained with the proposed codes, especially in the single-antenna case.<|reference_end|> | arxiv | @article{yang2005optimal,
title={Optimal space-time codes for the MIMO amplify-and-forward cooperative
channel},
author={Sheng Yang and Jean-Claude Belfiore},
journal={arXiv preprint arXiv:cs/0509006},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509006},
primaryClass={cs.IT math.IT}
} | yang2005optimal |
arxiv-673279 | cs/0509007 | Non-Data-Aided Parameter Estimation in an Additive White Gaussian Noise Channel | <|reference_start|>Non-Data-Aided Parameter Estimation in an Additive White Gaussian Noise Channel: Non-data-aided (NDA) parameter estimation is considered for binary-phase-shift-keying transmission in an additive white Gaussian noise channel. Cramer-Rao lower bounds (CRLBs) for signal amplitude, noise variance, channel reliability constant and bit-error rate are derived and it is shown how these parameters relate to the signal-to-noise ratio (SNR). An alternative derivation of the iterative maximum likelihood (ML) SNR estimator is presented together with a novel, low complexity NDA SNR estimator. The performance of the proposed estimator is compared to previously suggested estimators and the CRLB. The results show that the proposed estimator performs close to the iterative ML estimator at significantly lower computational complexity.<|reference_end|> | arxiv | @article{brannstrom2005non-data-aided,
title={Non-Data-Aided Parameter Estimation in an Additive White Gaussian Noise
Channel},
author={Fredrik Brannstrom and Lars K. Rasmussen},
journal={arXiv preprint arXiv:cs/0509007},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509007},
primaryClass={cs.IT math.IT}
} | brannstrom2005non-data-aided |
arxiv-673280 | cs/0509008 | Joint Equalization and Decoding for Nonlinear Two-Dimensional Intersymbol Interference Channels with Application to Optical Storage | <|reference_start|>Joint Equalization and Decoding for Nonlinear Two-Dimensional Intersymbol Interference Channels with Application to Optical Storage: An algorithm that performs joint equalization and decoding for nonlinear two-dimensional intersymbol interference channels is presented. The algorithm performs sum-product message-passing on a factor graph that represents the underlying system. The two-dimensional optical storage (TWODOS) technology is an example of a system with nonlinear two-dimensional intersymbol interference. Simulations for the nonlinear channel model of TWODOS show significant improvement in performance over uncoded performance. Noise tolerance thresholds for the algorithm for the TWODOS channel, computed using density evolution, are also presented and accurately predict the limiting performance of the algorithm as the codeword length increases.<|reference_end|> | arxiv | @article{singla2005joint,
title={Joint Equalization and Decoding for Nonlinear Two-Dimensional
Intersymbol Interference Channels with Application to Optical Storage},
author={N. Singla, J. A. O'Sullivan},
journal={arXiv preprint arXiv:cs/0509008},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509008},
primaryClass={cs.IT math.IT}
} | singla2005joint |
arxiv-673281 | cs/0509009 | Joint Equalization and Decoding for Nonlinear Two-Dimensional Intersymbol Interference Channels | <|reference_start|>Joint Equalization and Decoding for Nonlinear Two-Dimensional Intersymbol Interference Channels: An algorithm that performs joint equalization and decoding for channels with nonlinear two-dimensional intersymbol interference is presented. The algorithm performs sum-product message-passing on a factor graph that represents the underlying system. The two-dimensional optical storage (TwoDOS) technology is an example of a system with nonlinear two-dimensional intersymbol interference. Simulations for the nonlinear channel model of TwoDOS show significant improvement in performance over uncoded performance. Noise tolerance thresholds for the TwoDOS channel computed using density evolution are also presented.<|reference_end|> | arxiv | @article{singla2005joint,
title={Joint Equalization and Decoding for Nonlinear Two-Dimensional
Intersymbol Interference Channels},
author={N. Singla, J. A. O'Sullivan},
journal={arXiv preprint arXiv:cs/0509009},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509009},
primaryClass={cs.IT math.IT}
} | singla2005joint |
arxiv-673282 | cs/0509010 | Minimum Mean-Square-Error Equalization using Priors for Two-Dimensional Intersymbol Interference | <|reference_start|>Minimum Mean-Square-Error Equalization using Priors for Two-Dimensional Intersymbol Interference: Joint equalization and decoding schemes are described for two-dimensional intersymbol interference (ISI) channels. Equalization is performed using the minimum mean-square-error (MMSE) criterion. Low-density parity-check codes are used for error correction. The MMSE schemes are the extension of those proposed by Tuechler et al. (2002) for one-dimensional ISI channels. Extrinsic information transfer charts, density evolution, and bit-error rate versus signal-to-noise ratio curves are used to study the performance of the schemes.<|reference_end|> | arxiv | @article{singla2005minimum,
title={Minimum Mean-Square-Error Equalization using Priors for Two-Dimensional
Intersymbol Interference},
author={N. Singla, J. A. O'Sullivan},
journal={arXiv preprint arXiv:cs/0509010},
year={2005},
doi={10.1109/ISIT.2004.1365168},
archivePrefix={arXiv},
eprint={cs/0509010},
primaryClass={cs.IT math.IT}
} | singla2005minimum |
arxiv-673283 | cs/0509011 | Clustering Mixed Numeric and Categorical Data: A Cluster Ensemble Approach | <|reference_start|>Clustering Mixed Numeric and Categorical Data: A Cluster Ensemble Approach: Clustering is a widely used technique in data mining applications for discovering patterns in underlying data. Most traditional clustering algorithms are limited to handling datasets that contain either numeric or categorical attributes. However, datasets with mixed types of attributes are common in real life data mining applications. In this paper, we propose a novel divide-and-conquer technique to solve this problem. First, the original mixed dataset is divided into two sub-datasets: the pure categorical dataset and the pure numeric dataset. Next, existing well established clustering algorithms designed for different types of datasets are employed to produce corresponding clusters. Last, the clustering results on the categorical and numeric dataset are combined as a categorical dataset, on which the categorical data clustering algorithm is used to get the final clusters. Our contribution in this paper is to provide an algorithm framework for the mixed attributes clustering problem, in which existing clustering algorithms can be easily integrated, the capabilities of different kinds of clustering algorithms and characteristics of different types of datasets could be fully exploited. Comparisons with other clustering algorithms on real life datasets illustrate the superiority of our approach.<|reference_end|> | arxiv | @article{he2005clustering,
title={Clustering Mixed Numeric and Categorical Data: A Cluster Ensemble
Approach},
author={Zengyou He, Xiaofei Xu, Shengchun Deng},
journal={arXiv preprint arXiv:cs/0509011},
year={2005},
number={Tr-2002-10},
archivePrefix={arXiv},
eprint={cs/0509011},
primaryClass={cs.AI}
} | he2005clustering |
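The abstract above describes a concrete divide-and-conquer framework for mixed-attribute clustering, which a short sketch can illustrate. The Python below is our own illustration, not the authors' code: it assumes scikit-learn's KMeans for the numeric part, uses a small hand-rolled k-modes-style routine as a stand-in for any established categorical clusterer, and the function names and toy data are invented for the example.

```python
# Illustrative sketch of the cluster-ensemble framework in cs/0509011:
# split the mixed data, cluster each part with a type-appropriate algorithm,
# then cluster the combined label columns as a new categorical dataset.
import numpy as np
from sklearn.cluster import KMeans  # assumed available for the numeric part

def k_modes(X, k, n_iter=10, rng=None):
    """Tiny k-modes-style clusterer for categorical data (a stand-in for any
    established categorical clustering algorithm)."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    modes = X[rng.choice(n, size=k, replace=False)]
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iter):
        # assign each row to the mode with the fewest attribute mismatches
        dist = (X[:, None, :] != modes[None, :, :]).sum(axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                # new mode = column-wise most frequent value
                modes[j] = [max(set(col), key=list(col).count)
                            for col in members.T]
    return labels

def cluster_mixed(numeric, categorical, k):
    num_labels = KMeans(n_clusters=k, n_init=10).fit_predict(numeric)
    cat_labels = k_modes(categorical, k)
    # combine the two partitions as a categorical dataset and re-cluster
    combined = np.column_stack([num_labels, cat_labels]).astype(object)
    return k_modes(combined, k)

# toy usage with made-up data
numeric = np.random.default_rng(0).normal(size=(100, 3))
categorical = np.array([["a", "x"], ["b", "y"]] * 50, dtype=object)
print(cluster_mixed(numeric, categorical, k=2)[:10])
```

Any categorical clustering algorithm could replace the stand-in `k_modes`; the essential point of the framework is that the final clustering runs on the combined label columns treated as a categorical dataset.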
arxiv-673284 | cs/0509012 | Kriging Scenario For Capital Markets | <|reference_start|>Kriging Scenario For Capital Markets: An introduction to numerical statistics.<|reference_end|> | arxiv | @article{suslo2005kriging,
title={Kriging Scenario For Capital Markets},
author={T. Suslo},
journal={arXiv preprint arXiv:cs/0509012},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509012},
primaryClass={cs.CE}
} | suslo2005kriging |
arxiv-673285 | cs/0509013 | On the variational distance of independently repeated experiments | <|reference_start|>On the variational distance of independently repeated experiments: Let P and Q be two probability distributions which differ only for values with non-zero probability. We show that the variational distance between the n-fold product distributions P^n and Q^n cannot grow faster than the square root of n.<|reference_end|> | arxiv | @article{renner2005on,
title={On the variational distance of independently repeated experiments},
author={Renato Renner},
journal={arXiv preprint arXiv:cs/0509013},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509013},
primaryClass={cs.IT math.IT}
} | renner2005on |
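As a rough numerical illustration of the claim in the abstract above (ours, not taken from the paper), the variational distance between n-fold products of two close Bernoulli distributions with the same support can be computed exactly from the binomial structure. The distributions, parameters, and n values below are arbitrary choices for the example.

```python
from math import comb, sqrt

def tv_product_bernoulli(p, q, n):
    """Exact total variation distance between Bernoulli(p)^n and Bernoulli(q)^n,
    using the fact that the product measures depend only on the number of ones."""
    return 0.5 * sum(
        comb(n, k) * abs(p**k * (1 - p)**(n - k) - q**k * (1 - q)**(n - k))
        for k in range(n + 1)
    )

p, q = 0.50, 0.51                      # two close distributions, same support
d1 = tv_product_bernoulli(p, q, 1)     # single-copy distance d(P, Q)
for n in (1, 10, 100, 1000):
    d = tv_product_bernoulli(p, q, n)
    print(f"n={n:5d}  d(P^n,Q^n)={d:.4f}  d/(sqrt(n)*d(P,Q))={d/(sqrt(n)*d1):.3f}")
# The last column stays bounded, consistent with growth no faster than sqrt(n).
```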
arxiv-673286 | cs/0509014 | Density Evolution for Asymmetric Memoryless Channels | <|reference_start|>Density Evolution for Asymmetric Memoryless Channels: Density evolution is one of the most powerful analytical tools for low-density parity-check (LDPC) codes and graph codes with message passing decoding algorithms. With channel symmetry as one of its fundamental assumptions, density evolution (DE) has been widely and successfully applied to different channels, including binary erasure channels, binary symmetric channels, binary additive white Gaussian noise channels, etc. This paper generalizes density evolution for non-symmetric memoryless channels, which in turn broadens the applications to general memoryless channels, e.g. z-channels, composite white Gaussian noise channels, etc. The central theorem underpinning this generalization is the convergence to perfect projection for any fixed size supporting tree. A new iterative formula of the same complexity is then presented and the necessary theorems for the performance concentration theorems are developed. Several properties of the new density evolution method are explored, including stability results for general asymmetric memoryless channels. Simulations, code optimizations, and possible new applications suggested by this new density evolution method are also provided. This result is also used to prove the typicality of linear LDPC codes among the coset code ensemble when the minimum check node degree is sufficiently large. It is shown that the convergence to perfect projection is essential to the belief propagation algorithm even when only symmetric channels are considered. Hence the proof of the convergence to perfect projection serves also as a completion of the theory of classical density evolution for symmetric memoryless channels.<|reference_end|> | arxiv | @article{wang2005density,
title={Density Evolution for Asymmetric Memoryless Channels},
author={C.-C. Wang (1), S. R. Kulkarni (1), H. V. Poor (1) ((1) Princeton
University)},
journal={arXiv preprint arXiv:cs/0509014},
year={2005},
doi={10.1109/TIT.2005.858931},
archivePrefix={arXiv},
eprint={cs/0509014},
primaryClass={cs.IT math.IT}
} | wang2005density |
arxiv-673287 | cs/0509015 | Optimal Prefix Codes with Fewer Distinct Codeword Lengths are Faster to Construct | <|reference_start|>Optimal Prefix Codes with Fewer Distinct Codeword Lengths are Faster to Construct: A new method for constructing minimum-redundancy binary prefix codes is described. Our method does not explicitly build a Huffman tree; instead it uses a property of optimal prefix codes to compute the codeword lengths corresponding to the input weights. Let $n$ be the number of weights and $k$ be the number of distinct codeword lengths as produced by the algorithm for the optimum codes. The running time of our algorithm is $O(k \cdot n)$. Following our previous work in \cite{be}, no algorithm can possibly construct optimal prefix codes in $o(k \cdot n)$ time. When the given weights are presorted our algorithm performs $O(9^k \cdot \log^{2k}{n})$ comparisons.<|reference_end|> | arxiv | @article{belal2005optimal,
title={Optimal Prefix Codes with Fewer Distinct Codeword Lengths are Faster to
Construct},
author={Ahmed Belal and Amr Elmasry},
journal={arXiv preprint arXiv:cs/0509015},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509015},
primaryClass={cs.DS cs.IT math.IT}
} | belal2005optimal |
arxiv-673288 | cs/0509016 | NP-hardness of the cluster minimization problem revisited | <|reference_start|>NP-hardness of the cluster minimization problem revisited: The computational complexity of the "cluster minimization problem" is revisited [L. T. Wille and J. Vennik, J. Phys. A 18, L419 (1985)]. It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analog of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.<|reference_end|> | arxiv | @article{adib2005np-hardness,
title={NP-hardness of the cluster minimization problem revisited},
author={A. B. Adib},
journal={J. Phys. A: Math. Gen. 38, 8487 (2005)},
year={2005},
doi={10.1088/0305-4470/38/40/001},
archivePrefix={arXiv},
eprint={cs/0509016},
primaryClass={cs.CC cond-mat.stat-mech physics.chem-ph}
} | adib2005np-hardness |
arxiv-673289 | cs/0509017 | Traders imprint themselves by adaptively updating their own avatar | <|reference_start|>Traders imprint themselves by adaptively updating their own avatar: Simulations of artificial stock markets were considered as early as 1964 and multi-agent ones were introduced as early as 1989. Starting in the early 90's, collaborations of economists and physicists produced increasingly realistic simulation platforms. Currently, the market stylized facts are easily reproduced and one now has to address the realistic details of the Market Microstructure and of the Traders Behaviour. This calls for new methods and tools capable of bridging smoothly between simulations and experiments in economics. We propose here the following Avatar-Based Method (ABM). The subjects implement and maintain their Avatars (programs encoding their personal decision making procedures) on NatLab, a market simulation platform. Once these procedures are fed in a computer edible format, they can be operationally used as such without the need for belabouring, interpreting or conceptualising them. Thus ABM short-circuits the usual behavioural economics experiments that search for the psychological mechanisms underlying the subjects' behaviour. Finally, ABM maintains a level of objectivity close to the classical behaviourism while extending its scope to subjects' decision making mechanisms. We report on experiments where Avatars designed and maintained by humans from different backgrounds (including real traders) compete in a continuous double-auction market. We hope this unbiased way of capturing the adaptive evolution of real subjects' behaviour may lead to a new kind of behavioural economics experiments with a high degree of reliability, analysability and reproducibility.<|reference_end|> | arxiv | @article{daniel2005traders,

title={Traders imprint themselves by adaptively updating their own avatar},
author={Gilles Daniel, Lev Muchnik, Sorin Solomon},
journal={arXiv preprint arXiv:cs/0509017},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509017},
primaryClass={cs.MA cs.CE}
} | daniel2005traders |
arxiv-673290 | cs/0509018 | Requirements for Digital Preservation Systems: A Bottom-Up Approach | <|reference_start|>Requirements for Digital Preservation Systems: A Bottom-Up Approach: The field of digital preservation is being defined by a set of standards developed top-down, starting with an abstract reference model (OAIS) and gradually adding more specific detail. Systems claiming conformance to these standards are entering production use. Work is underway to certify that systems conform to requirements derived from OAIS. We complement these requirements derived top-down by presenting an alternate, bottom-up view of the field. The fundamental goal of these systems is to ensure that the information they contain remains accessible for the long term. We develop a parallel set of requirements based on observations of how existing systems handle this task, and on an analysis of the threats to achieving the goal. On this basis we suggest disclosures that systems should provide as to how they satisfy their goals.<|reference_end|> | arxiv | @article{rosenthal2005requirements,
title={Requirements for Digital Preservation Systems: A Bottom-Up Approach},
author={David S. H. Rosenthal, Thomas S. Robertson, Tom Lipkis, Vicky Reich,
Seth Morabito},
journal={arXiv preprint arXiv:cs/0509018},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509018},
primaryClass={cs.DL}
} | rosenthal2005requirements |
arxiv-673291 | cs/0509019 | Comparing hierarchies of total functionals | <|reference_start|>Comparing hierarchies of total functionals: In this paper we consider two hierarchies of hereditarily total and continuous functionals over the reals based on one extensional and one intensional representation of real numbers, and we discuss under which assumptions these hierarchies coincide. This coincidence problem is equivalent to a statement about the topology of the Kleene-Kreisel continuous functionals. As a tool of independent interest, we show that the Kleene-Kreisel functionals may be embedded into both these hierarchies.<|reference_end|> | arxiv | @article{normann2005comparing,
title={Comparing hierarchies of total functionals},
author={Dag Normann},
journal={Logical Methods in Computer Science, Volume 1, Issue 2 (October 5,
2005) lmcs:2268},
year={2005},
doi={10.2168/LMCS-1(2:4)2005},
archivePrefix={arXiv},
eprint={cs/0509019},
primaryClass={cs.LO}
} | normann2005comparing |
arxiv-673292 | cs/0509020 | Transitive Text Mining for Information Extraction and Hypothesis Generation | <|reference_start|>Transitive Text Mining for Information Extraction and Hypothesis Generation: Transitive text mining - also named Swanson Linking (SL) after its primary and principal researcher - tries to establish meaningful links between literature sets which are virtually disjoint in the sense that each does not mention the main concept of the other. If successful, SL may give rise to the development of new hypotheses. In this communication we describe our approach to transitive text mining which employs co-occurrence analysis of the medical subject headings (MeSH), the descriptors assigned to papers indexed in PubMed. In addition, we will outline the current state of our web-based information system which will enable our users to perform literature-driven hypothesis building on their own.<|reference_end|> | arxiv | @article{stegmann2005transitive,
title={Transitive Text Mining for Information Extraction and Hypothesis
Generation},
author={Johannes Stegmann, Guenter Grohmann (Charite, Berlin)},
journal={arXiv preprint arXiv:cs/0509020},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509020},
primaryClass={cs.IR cs.AI}
} | stegmann2005transitive |
arxiv-673293 | cs/0509021 | The Throughput-Reliability Tradeoff in MIMO Channels | <|reference_start|>The Throughput-Reliability Tradeoff in MIMO Channels: In this paper, an outage limited MIMO channel is considered. We build on Zheng and Tse's elegant formulation of the diversity-multiplexing tradeoff to develop a better understanding of the asymptotic relationship between the probability of error, transmission rate, and signal-to-noise ratio. In particular, we identify the limitation imposed by the multiplexing gain notion and develop a new formulation for the throughput-reliability tradeoff that avoids this limitation. The new characterization is then used to elucidate the asymptotic trends exhibited by the outage probability curves of MIMO channels.<|reference_end|> | arxiv | @article{azarian2005the,
title={The Throughput-Reliability Tradeoff in MIMO Channels},
author={Kambiz Azarian and Hesham El Gamal},
journal={arXiv preprint arXiv:cs/0509021},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509021},
primaryClass={cs.IT math.IT}
} | azarian2005the |
arxiv-673294 | cs/0509022 | Achievable Rates for Pattern Recognition | <|reference_start|>Achievable Rates for Pattern Recognition: Biological and machine pattern recognition systems face a common challenge: Given sensory data about an unknown object, classify the object by comparing the sensory data with a library of internal representations stored in memory. In many cases of interest, the number of patterns to be discriminated and the richness of the raw data force recognition systems to internally represent memory and sensory information in a compressed format. However, these representations must preserve enough information to accommodate the variability and complexity of the environment, or else recognition will be unreliable. Thus, there is an intrinsic tradeoff between the amount of resources devoted to data representation and the complexity of the environment in which a recognition system may reliably operate. In this paper we describe a general mathematical model for pattern recognition systems subject to resource constraints, and show how the aforementioned resource-complexity tradeoff can be characterized in terms of three rates related to number of bits available for representing memory and sensory data, and the number of patterns populating a given statistical environment. We prove single-letter information theoretic bounds governing the achievable rates, and illustrate the theory by analyzing the elementary cases where the pattern data is either binary or Gaussian.<|reference_end|> | arxiv | @article{westover2005achievable,
title={Achievable Rates for Pattern Recognition},
author={M. Brandon Westover, Joseph A. O'Sullivan},
journal={arXiv preprint arXiv:cs/0509022},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509022},
primaryClass={cs.IT cs.CV math.IT}
} | westover2005achievable |
arxiv-673295 | cs/0509023 | Coloring vertices of a graph or finding a Meyniel obstruction | <|reference_start|>Coloring vertices of a graph or finding a Meyniel obstruction: A Meyniel obstruction is an odd cycle with at least five vertices and at most one chord. A graph is Meyniel if and only if it has no Meyniel obstruction as an induced subgraph. Here we give an O(n^2) algorithm that, for any graph, finds either a clique and coloring of the same size or a Meyniel obstruction. We also give an O(n^3) algorithm that, for any graph, finds either an easily recognizable strong stable set or a Meyniel obstruction.<|reference_end|> | arxiv | @article{cameron2005coloring,
title={Coloring vertices of a graph or finding a Meyniel obstruction},
author={Kathie Cameron (WLU), Jack Edmonds (EP INSTITUTE), Benjamin
Lévêque (LGS), Frédéric Maffray (LGS)},
journal={arXiv preprint arXiv:cs/0509023},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509023},
primaryClass={cs.DM}
} | cameron2005coloring |
arxiv-673296 | cs/0509024 | Well-founded and Stable Semantics of Logic Programs with Aggregates | <|reference_start|>Well-founded and Stable Semantics of Logic Programs with Aggregates: In this paper, we present a framework for the semantics and the computation of aggregates in the context of logic programming. In our study, an aggregate can be an arbitrary interpreted second order predicate or function. We define extensions of the Kripke-Kleene, the well-founded and the stable semantics for aggregate programs. The semantics is based on the concept of a three-valued immediate consequence operator of an aggregate program. Such an operator approximates the standard two-valued immediate consequence operator of the program, and induces a unique Kripke-Kleene model, a unique well-founded model and a collection of stable models. We study different ways of defining such operators and thus obtain a framework of semantics, offering different trade-offs between precision and tractability. In particular, we investigate conditions on the operator that guarantee that the computation of the three types of semantics remains on the same level as for logic programs without aggregates. Other results show that, in practice, even efficient three-valued immediate consequence operators which are very low in the precision hierarchy, still provide optimal precision.<|reference_end|> | arxiv | @article{pelov2005well-founded,
title={Well-founded and Stable Semantics of Logic Programs with Aggregates},
author={Nikolay Pelov, Marc Denecker, Maurice Bruynooghe},
journal={arXiv preprint arXiv:cs/0509024},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509024},
primaryClass={cs.LO}
} | pelov2005well-founded |
arxiv-673297 | cs/0509025 | A formally verified proof of the prime number theorem | <|reference_start|>A formally verified proof of the prime number theorem: The prime number theorem, established by Hadamard and de la Vallée Poussin independently in 1896, asserts that the density of primes in the positive integers is asymptotic to 1 / ln x. Whereas their proofs made serious use of the methods of complex analysis, elementary proofs were provided by Selberg and Erdős in 1948. We describe a formally verified version of Selberg's proof, obtained using the Isabelle proof assistant.<|reference_end|> | arxiv | @article{avigad2005a,
title={A formally verified proof of the prime number theorem},
author={Jeremy Avigad, Kevin Donnelly, David Gray, and Paul Raff},
journal={arXiv preprint arXiv:cs/0509025},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509025},
primaryClass={cs.AI cs.LO cs.SC}
} | avigad2005a |
arxiv-673298 | cs/0509026 | Sampling to estimate arbitrary subset sums | <|reference_start|>Sampling to estimate arbitrary subset sums: Starting with a set of weighted items, we want to create a generic sample of a certain size that we can later use to estimate the total weight of arbitrary subsets. For this purpose, we propose priority sampling which, tested on Internet data, performed better than previous methods by orders of magnitude. Priority sampling is simple to define and implement: we consider a stream of items i=0,...,n-1 with weights w_i. For each item i, we generate a random number r_i in (0,1) and create a priority q_i=w_i/r_i. The sample S consists of the k highest priority items. Let t be the (k+1)th highest priority. Each sampled item i in S gets a weight estimate W_i=max{w_i,t}, while non-sampled items get weight estimate W_i=0. Magically, it turns out that the weight estimates are unbiased, that is, E[W_i]=w_i, and by linearity of expectation, we get unbiased estimators over any subset sum simply by adding the sampled weight estimates from the subset. Also, we can estimate the variance of the estimates, and surprisingly, there is no co-variance between different weight estimates W_i and W_j. We conjecture an extremely strong near-optimality; namely that for any weight sequence, there exists no specialized scheme for sampling k items with unbiased estimators that gets smaller total variance than priority sampling with k+1 items. Very recently Mario Szegedy has settled this conjecture.<|reference_end|> | arxiv | @article{duffield2005sampling,
title={Sampling to estimate arbitrary subset sums},
author={Nick Duffield, Carsten Lund, Mikkel Thorup},
journal={arXiv preprint arXiv:cs/0509026},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509026},
primaryClass={cs.DS}
} | duffield2005sampling |
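The abstract above spells out the priority sampling procedure concretely enough to transcribe. The sketch below is our own illustration (the variable names, toy weights, and the unbiasedness check are assumptions for the example, not the authors' code); it implements the scheme as described and empirically averages a subset-sum estimate over many runs.

```python
# Minimal sketch of priority sampling as described in cs/0509026.
import random

def priority_sample(weights, k, seed=None):
    rng = random.Random(seed)
    # priority q_i = w_i / r_i with r_i drawn from (0, 1]; using 1 - random()
    # avoids a zero divisor
    priorities = [(w / (1.0 - rng.random()), i, w) for i, w in enumerate(weights)]
    priorities.sort(reverse=True)
    # t = the (k+1)-th highest priority (0 if we keep everything)
    t = priorities[k][0] if k < len(weights) else 0.0
    # sampled items get weight estimate max(w_i, t); non-sampled items get 0
    return {i: max(w, t) for _, i, w in priorities[:k]}

def estimate_subset_sum(sample, subset):
    return sum(sample.get(i, 0.0) for i in subset)

# sanity check of unbiasedness: average the subset-sum estimate over many runs
weights = [5.0, 1.0, 0.5, 8.0, 2.0, 0.1, 3.0, 7.0]
subset = {0, 3, 5}
true_sum = sum(weights[i] for i in subset)
runs = 20000
avg = sum(estimate_subset_sum(priority_sample(weights, k=3, seed=r), subset)
          for r in range(runs)) / runs
print(f"true={true_sum:.2f}  mean estimate over {runs} runs={avg:.2f}")
```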
arxiv-673299 | cs/0509027 | Haskell's overlooked object system | <|reference_start|>Haskell's overlooked object system: Haskell provides type-class-bounded and parametric polymorphism as opposed to subtype polymorphism of object-oriented languages such as Java and OCaml. It is a contentious question whether Haskell 98 without extensions, or with common extensions, or with new extensions can fully support conventional object-oriented programming with encapsulation, mutable state, inheritance, overriding, statically checked implicit and explicit subtyping, and so on. We systematically substantiate that Haskell 98, with some common extensions, supports all the conventional OO features plus more advanced ones, including first-class lexically scoped classes, implicitly polymorphic classes, flexible multiple inheritance, safe downcasts and safe co-variant arguments. Haskell indeed can support width and depth, structural and nominal subtyping. We address the particular challenge to preserve Haskell's type inference even for objects and object-operating functions. The OO features are introduced in Haskell as the OOHaskell library. OOHaskell lends itself as a sandbox for typed OO language design.<|reference_end|> | arxiv | @article{kiselyov2005haskell's,
title={Haskell's overlooked object system},
author={Oleg Kiselyov and Ralf Laemmel},
journal={arXiv preprint arXiv:cs/0509027},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509027},
primaryClass={cs.PL}
} | kiselyov2005haskell's |
arxiv-673300 | cs/0509028 | Projecting the Forward Rate Flow onto a Finite Dimensional Manifold | <|reference_start|>Projecting the Forward Rate Flow onto a Finite Dimensional Manifold: Given a Heath-Jarrow-Morton (HJM) interest rate model $\mathcal{M}$ and a parametrized family of finite dimensional forward rate curves $\mathcal{G}$, this paper provides a technique for projecting the infinite dimensional forward rate curve $r_{t}$ given by $\mathcal{M}$ onto the finite dimensional manifold $\mathcal{G}$. The Stratonovich dynamics of the projected finite dimensional forward curve are derived and it is shown that, under the regularity conditions, the given Stratonovich differential equation has a unique strong solution. Moreover, this projection leads to an efficient algorithm for implicit parametric estimation of the infinite dimensional HJM model. The feasibility of this method is demonstrated by applying the generalized method of moments.<|reference_end|> | arxiv | @article{bayraktar2005projecting,
title={Projecting the Forward Rate Flow onto a Finite Dimensional Manifold},
author={Erhan Bayraktar, Li Chen, H. Vincent Poor},
journal={arXiv preprint arXiv:cs/0509028},
year={2005},
archivePrefix={arXiv},
eprint={cs/0509028},
primaryClass={cs.CE cs.IT math.IT}
} | bayraktar2005projecting |