corpus_id: string (7 to 12 chars)
paper_id: string (9 to 16 chars)
title: string (1 to 261 chars)
abstract: string (70 to 4.02k chars)
source: string (1 class)
bibtex: string (208 to 20.9k chars)
citation_key: string (6 to 100 chars)
arxiv-672601
cs/0502026
Quantum mechanics can provide unbiased result
<|reference_start|>Quantum mechanics can provide unbiased result: Getting an unbiased result is a remarkably long-standing problem of collective observation/measurement. It is pointed out that quantum coin tossing can generate an unbiased result, defeating dishonesty.<|reference_end|>
arxiv
@article{mitra2005quantum, title={Quantum mechanics can provide unbiased result}, author={Arindam Mitra}, journal={arXiv preprint arXiv:cs/0502026}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502026}, primaryClass={cs.CR} }
mitra2005quantum
arxiv-672602
cs/0502027
Markets are Dead, Long Live Markets
<|reference_start|>Markets are Dead, Long Live Markets: Researchers have long proposed using economic approaches to resource allocation in computer systems. However, few of these proposals became operational, let alone commercial. Questions persist about the economic approach regarding its assumptions, value, applicability, and relevance to system design. The goal of this paper is to answer these questions. We find that market-based resource allocation is useful, and more importantly, that mechanism design and system design should be integrated to produce systems that are both economically and computationally efficient.<|reference_end|>
arxiv
@article{lai2005markets, title={Markets are Dead, Long Live Markets}, author={Kevin Lai}, journal={arXiv preprint arXiv:cs/0502027}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502027}, primaryClass={cs.OS} }
lai2005markets
arxiv-672603
cs/0502028
aDORe: a modular, standards-based Digital Object Repository
<|reference_start|>aDORe: a modular, standards-based Digital Object Repository: This paper describes the aDORe repository architecture, designed and implemented for ingesting, storing, and accessing a vast collection of Digital Objects at the Research Library of the Los Alamos National Laboratory. The aDORe architecture is highly modular and standards-based. In the architecture, the MPEG-21 Digital Item Declaration Language is used as the XML-based format to represent Digital Objects that can consist of multiple datastreams as Open Archival Information System Archival Information Packages (OAIS AIPs). Through an ingestion process, these OAIS AIPs are stored in a multitude of autonomous repositories. A Repository Index keeps track of the creation and location of all the autonomous repositories, whereas an Identifier Locator registers in which autonomous repository a given Digital Object or OAIS AIP resides. A front-end to the complete environment, the OAI-PMH Federator, is introduced for requesting OAIS Dissemination Information Packages (OAIS DIPs). These OAIS DIPs can be the stored OAIS AIPs themselves, or transformations thereof. This front-end allows OAI-PMH harvesters to recurrently and selectively collect batches of OAIS DIPs from aDORe, and hence to create multiple, parallel services using the collected objects. Another front-end, the OpenURL Resolver, is introduced for requesting OAIS Result Sets. An OAIS Result Set is a dissemination of an individual Digital Object or of its constituent datastreams. Both front-ends make use of an MPEG-21 Digital Item Processing Engine to apply services to OAIS AIPs, Digital Objects, or constituent datastreams that were specified in a dissemination request.<|reference_end|>
arxiv
@article{vandesompel2005adore, title={aDORe: a modular, standards-based Digital Object Repository}, author={Herbert Van de Sompel, Jeroen Bekaert, Xiaoming Liu, Luda Balakireva, Thorsten Schwander}, journal={arXiv preprint arXiv:cs/0502028}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502028}, primaryClass={cs.DL} }
vandesompel2005adore
arxiv-672604
cs/0502029
Scalability of Genetic Programming and Probabilistic Incremental Program Evolution
<|reference_start|>Scalability of Genetic Programming and Probabilistic Incremental Program Evolution: This paper discusses scalability of standard genetic programming (GP) and the probabilistic incremental program evolution (PIPE). To investigate the need for both effective mixing and linkage learning, two test problems are considered: the ORDER problem, which is rather easy for any recombination-based GP, and TRAP, or the deceptive trap problem, which requires the algorithm to learn interactions among subsets of terminals. The scalability results show that both GP and PIPE scale up polynomially with problem size on the simple ORDER problem, but they both scale up exponentially on the deceptive problem. This indicates that while standard recombination is sufficient when no interactions need to be considered, for some problems linkage learning is necessary. These results are in agreement with the lessons learned in the domain of binary-string genetic algorithms (GAs). Furthermore, the paper investigates the effects of introducing unnecessary and irrelevant primitives on the performance of GP and PIPE.<|reference_end|>
arxiv
@article{ondas2005scalability, title={Scalability of Genetic Programming and Probabilistic Incremental Program Evolution}, author={Radovan Ondas, Martin Pelikan, Kumara Sastry}, journal={arXiv preprint arXiv:cs/0502029}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502029}, primaryClass={cs.NE cs.AI} }
ondas2005scalability
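As a point of reference for the ORDER/TRAP abstract above, here is a minimal sketch of the k-bit deceptive trap function commonly used in such scalability studies. The function shape is the standard one; the block size k=5 and the bitstring encoding are illustrative choices, not necessarily the paper's exact setup.

    def trap(bits, k=5):
        """Sum of k-bit deceptive trap subfunctions over a concatenated bitstring."""
        assert len(bits) % k == 0
        total = 0
        for i in range(0, len(bits), k):
            u = sum(bits[i:i + k])                 # number of ones in this block
            total += k if u == k else k - 1 - u    # deceptive: rewards all-zeros locally
        return total

    print(trap([1] * 10))   # 10: both blocks at the global optimum (all ones)
    print(trap([0] * 10))   # 8:  both blocks at the deceptive attractor (all zeros)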
arxiv-672605
cs/0502030
Fixed Type Theorems
<|reference_start|>Fixed Type Theorems: This submission has been withdrawn at the request of the author.<|reference_end|>
arxiv
@article{g2005fixed, title={Fixed Type Theorems}, author={Raju Renjit G}, journal={arXiv preprint arXiv:cs/0502030}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502030}, primaryClass={cs.CC} }
g2005fixed
arxiv-672606
cs/0502031
Logic Column 11: The Finite and the Infinite in Temporal Logic
<|reference_start|>Logic Column 11: The Finite and the Infinite in Temporal Logic: This article examines the interpretation of the LTL temporal operators over finite and infinite sequences. This is used as the basis for deriving a sound and complete axiomatization for Caret, a recent temporal logic for reasoning about programs with nested procedure calls and returns.<|reference_end|>
arxiv
@article{pucella2005logic, title={Logic Column 11: The Finite and the Infinite in Temporal Logic}, author={Riccardo Pucella}, journal={SIGACT News, 36(1), pp. 86-99, 2005}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502031}, primaryClass={cs.LO} }
pucella2005logic
arxiv-672607
cs/0502032
On Dynamic Range Reporting in One Dimension
<|reference_start|>On Dynamic Range Reporting in One Dimension: We consider the problem of maintaining a dynamic set of integers and answering queries of the form: report a point (equivalently, all points) in a given interval. Range searching is a natural and fundamental variant of integer search, and can be solved using predecessor search. However, for a RAM with w-bit words, we show how to perform updates in O(lg w) time and answer queries in O(lglg w) time. The update time is identical to the van Emde Boas structure, but the query time is exponentially faster. Existing lower bounds show that achieving our query time for predecessor search requires doubly-exponentially slower updates. We present some arguments supporting the conjecture that our solution is optimal. Our solution is based on a new and interesting recursion idea which is "more extreme" than the van Emde Boas recursion. Whereas van Emde Boas uses a simple recursion (repeated halving) on each path in a trie, we use a nontrivial, van Emde Boas-like recursion on every such path. Despite this, our algorithm is quite clean when seen from the right angle. To achieve linear space for our data structure, we solve a problem which is of independent interest. We develop the first scheme for dynamic perfect hashing requiring sublinear space. This gives a dynamic Bloomier filter (an approximate storage scheme for sparse vectors) which uses low space. We strengthen previous lower bounds to show that these results are optimal.<|reference_end|>
arxiv
@article{mortensen2005on, title={On Dynamic Range Reporting in One Dimension}, author={Christian Worm Mortensen, Rasmus Pagh and Mihai Patrascu}, journal={arXiv preprint arXiv:cs/0502032}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502032}, primaryClass={cs.DS} }
mortensen2005on
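The abstract above notes that one-dimensional range reporting reduces to predecessor search. As a plain baseline for comparison, and emphatically not the paper's O(lglg w)-query structure, a sorted list with binary search already supports all three operations; a minimal sketch using Python's standard bisect module:

    import bisect

    class RangeReporter:
        def __init__(self):
            self.keys = []                          # kept sorted at all times

        def insert(self, x):
            i = bisect.bisect_left(self.keys, x)
            if i == len(self.keys) or self.keys[i] != x:
                self.keys.insert(i, x)              # O(n) shift here; O(lg w) in the paper

        def delete(self, x):
            i = bisect.bisect_left(self.keys, x)
            if i < len(self.keys) and self.keys[i] == x:
                self.keys.pop(i)

        def report(self, lo, hi):
            """Report all stored points in the interval [lo, hi]."""
            i = bisect.bisect_left(self.keys, lo)
            j = bisect.bisect_right(self.keys, hi)
            return self.keys[i:j]

    r = RangeReporter()
    for x in (3, 17, 42, 99):
        r.insert(x)
    print(r.report(10, 50))                         # [17, 42]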
arxiv-672608
cs/0502033
Pseudo-Codewords of Cycle Codes via Zeta Functions
<|reference_start|>Pseudo-Codewords of Cycle Codes via Zeta Functions: Cycle codes are a special case of low-density parity-check (LDPC) codes and as such can be decoded using an iterative message-passing decoding algorithm on the associated Tanner graph. The existence of pseudo-codewords is known to cause the decoding algorithm to fail in certain instances. In this paper, we draw a connection between pseudo-codewords of cycle codes and the so-called edge zeta function of the associated normal graph and show how the Newton polyhedron of the zeta function equals the fundamental cone of the code, which plays a crucial role in characterizing the performance of iterative decoding algorithms.<|reference_end|>
arxiv
@article{koetter2005pseudo-codewords, title={Pseudo-Codewords of Cycle Codes via Zeta Functions}, author={Ralf Koetter, Wen-Ching W. Li, Pascal O. Vontobel, Judy L. Walker}, journal={arXiv preprint arXiv:cs/0502033}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502033}, primaryClass={cs.IT math.IT} }
koetter2005pseudo-codewords
arxiv-672609
cs/0502034
Multiobjective hBOA, Clustering, and Scalability
<|reference_start|>Multiobjective hBOA, Clustering, and Scalability: This paper describes a scalable algorithm for solving multiobjective decomposable problems by combining the hierarchical Bayesian optimization algorithm (hBOA) with the nondominated sorting genetic algorithm (NSGA-II) and clustering in the objective space. It is first argued that for good scalability, clustering or some other form of niching in the objective space is necessary and the size of each niche should be approximately equal. Multiobjective hBOA (mohBOA) is then described that combines hBOA, NSGA-II and clustering in the objective space. The algorithm mohBOA differs from the multiobjective variants of BOA and hBOA proposed in the past by including clustering in the objective space and allocating an approximately equally sized portion of the population to each cluster. The algorithm mohBOA is shown to scale up well on a number of problems on which standard multiobjective evolutionary algorithms perform poorly.<|reference_end|>
arxiv
@article{pelikan2005multiobjective, title={Multiobjective hBOA, Clustering, and Scalability}, author={Martin Pelikan, Kumara Sastry, David E. Goldberg}, journal={arXiv preprint arXiv:cs/0502034}, year={2005}, number={IlliGAL Report No. 2005005}, archivePrefix={arXiv}, eprint={cs/0502034}, primaryClass={cs.NE cs.AI} }
pelikan2005multiobjective
arxiv-672610
cs/0502035
Near Maximum-Likelihood Performance of Some New Cyclic Codes Constructed in the Finite-Field Transform Domain
<|reference_start|>Near Maximum-Likelihood Performance of Some New Cyclic Codes Constructed in the Finite-Field Transform Domain: It is shown that some well-known and some new cyclic codes with orthogonal parity-check equations can be constructed in the finite-field transform domain. It is also shown that, for some binary linear cyclic codes, the performance of the iterative decoder can be improved by substituting some of the dual code codewords in the parity-check matrix with other dual code codewords formed from linear combinations. This technique can bring the performance of a code closer to its maximum-likelihood performance, which can be derived from the erroneous decoded codeword whose Euclidean distance with respect to the received block is smaller than that of the correct codeword. For (63,37), (93,47) and (105,53) cyclic codes, the maximum-likelihood performance is realised with this technique.<|reference_end|>
arxiv
@article{tjhai2005near, title={Near Maximum-Likelihood Performance of Some New Cyclic Codes Constructed in the Finite-Field Transform Domain}, author={C. Tjhai, M. Tomlinson, R. Horan, M. Ambroze and M. Ahmed}, journal={arXiv preprint arXiv:cs/0502035}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502035}, primaryClass={cs.IT math.IT} }
tjhai2005near
arxiv-672611
cs/0502036
Improved Iterative Decoding for Perpendicular Magnetic Recording
<|reference_start|>Improved Iterative Decoding for Perpendicular Magnetic Recording: An algorithm of improving the performance of iterative decoding on perpendicular magnetic recording is presented. This algorithm follows on the authors' previous works on the parallel and serial concatenated turbo codes and low-density parity-check codes. The application of this algorithm with signal-to-noise ratio mismatch technique shows promising results in the presence of media noise. We also show that, compared to the standard iterative decoding algorithm, an improvement of within one order of magnitude can be achieved.<|reference_end|>
arxiv
@article{papagiannis2005improved, title={Improved Iterative Decoding for Perpendicular Magnetic Recording}, author={E. Papagiannis, C. Tjhai, M. Ahmed, M. Ambroze, M. Tomlinson}, journal={arXiv preprint arXiv:cs/0502036}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502036}, primaryClass={cs.IT math.IT} }
papagiannis2005improved
arxiv-672612
cs/0502037
GF(2^m) Low-Density Parity-Check Codes Derived from Cyclotomic Cosets
<|reference_start|>GF(2^m) Low-Density Parity-Check Codes Derived from Cyclotomic Cosets: Based on the ideas of cyclotomic cosets, idempotents and Mattson-Solomon polynomials, we present a new method to construct GF(2^m), where m > 0, cyclic low-density parity-check codes. The construction method produces the dual code idempotent which is used to define the parity-check matrix of the low-density parity-check code. An interesting feature of this construction method is the ability to increment the code dimension by adding more idempotents and so steadily decrease the sparseness of the parity-check matrix. We show that the constructed codes can achieve performance very close to the sphere-packing bound constrained for binary transmission.<|reference_end|>
arxiv
@article{tjhai2005gf(2^m), title={GF(2^m) Low-Density Parity-Check Codes Derived from Cyclotomic Cosets}, author={C. Tjhai, M. Tomlinson, R. Horan, M. Ambroze and M. Ahmed}, journal={arXiv preprint arXiv:cs/0502037}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502037}, primaryClass={cs.IT math.IT} }
tjhai2005gf(2^m)
arxiv-672613
cs/0502038
The Number of Spanning Trees in Kn-complements of Quasi-threshold Graphs
<|reference_start|>The Number of Spanning Trees in Kn-complements of Quasi-threshold Graphs: In this paper we examine the classes of graphs whose $K_n$-complements are trees and quasi-threshold graphs and derive formulas for their number of spanning trees; for a subgraph $H$ of $K_n$, the $K_n$-complement of $H$ is the graph $K_n - H$ which is obtained from $K_n$ by removing the edges of $H$. Our proofs are based on the complement spanning-tree matrix theorem, which expresses the number of spanning trees of a graph as a function of the determinant of a matrix that can be easily constructed from the adjacency relation of the graph. Our results generalize previous results and extend the family of graphs of the form $K_n - H$ admitting formulas for the number of their spanning trees.<|reference_end|>
arxiv
@article{nikolopoulos2005the, title={The Number of Spanning Trees in Kn-complements of Quasi-threshold Graphs}, author={Stavros D. Nikolopoulos and Charis Papadopoulos}, journal={Graphs and Combinatorics 20(3): 383-397, 2004}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502038}, primaryClass={cs.DM} }
nikolopoulos2005the
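The complement spanning-tree matrix theorem cited above reduces spanning-tree counting to a determinant. As a brute-force cross-check of the same quantity, the sketch below counts spanning trees of K_n - H directly via the classical Kirchhoff matrix-tree theorem (a determinant of a Laplacian minor), not the paper's complement-matrix method; exact arithmetic with fractions avoids floating-point error.

    from fractions import Fraction

    def spanning_trees_kn_minus(n, removed_edges):
        """Spanning trees of K_n minus the given edges, via a Laplacian minor."""
        adj = [[1] * n for _ in range(n)]           # start from K_n
        for i in range(n):
            adj[i][i] = 0
        for u, v in removed_edges:                  # delete the edges of H
            adj[u][v] = adj[v][u] = 0
        m = n - 1                                   # drop last row/column of the Laplacian
        a = [[Fraction(sum(adj[i]) if i == j else -adj[i][j]) for j in range(m)]
             for i in range(m)]
        det = Fraction(1)                           # exact determinant by elimination
        for c in range(m):
            p = next((r for r in range(c, m) if a[r][c] != 0), None)
            if p is None:
                return 0
            if p != c:
                a[c], a[p] = a[p], a[c]
                det = -det                          # row swap flips the sign
            det *= a[c][c]
            for r in range(c + 1, m):
                f = a[r][c] / a[c][c]
                for j in range(c, m):
                    a[r][j] -= f * a[c][j]
        return int(det)

    print(spanning_trees_kn_minus(5, []))           # 125 = 5^(5-2), Cayley's formula
    print(spanning_trees_kn_minus(5, [(0, 1)]))     # 75 = (n-2) * n^(n-3)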
arxiv-672614
cs/0502039
Efficient Parallel Simulations of Asynchronous Cellular Arrays
<|reference_start|>Efficient Parallel Simulations of Asynchronous Cellular Arrays: A definition for a class of asynchronous cellular arrays is proposed. An example of such asynchrony would be independent Poisson arrivals of cell iterations. The Ising model in the continuous time formulation of Glauber falls into this class. Also proposed are efficient parallel algorithms for simulating these asynchronous cellular arrays. In the algorithms, one or several cells are assigned to a processing element (PE), local times for different PEs can be different. Although the standard serial algorithm by Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller can simulate such arrays, it is usually believed to be without an efficient parallel counterpart. However, the proposed parallel algorithms contradict this belief proving to be both efficient and able to perform the same task as the standard algorithm. The results of experiments with the new algorithms are encouraging: the speed-up is greater than 16 using 25 PEs on a shared memory MIMD bus computer, and greater than 1900 using 2**14 PEs on a SIMD computer. The algorithm by Bortz, Kalos, and Lebowitz can be incorporated in the proposed parallel algorithms, further contributing to speed-up. [In this paper I invented the update-sites-of-local-time-minima parallel simulation scheme. Now the scheme is becoming popular. Many misprints of the original 1987 Complex Systems publication are corrected here.-B.L.]<|reference_end|>
arxiv
@article{lubachevsky2005efficient, title={Efficient Parallel Simulations of Asynchronous Cellular Arrays}, author={Boris D. Lubachevsky}, journal={Complex Systems, vol.1, no.6, December 1987, pp.1099--1123, S. Wolfram, (ed.)}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502039}, primaryClass={cs.DC cond-mat.mtrl-sci cs.PF} }
lubachevsky2005efficient
arxiv-672615
cs/0502040
Testing Systems of Concurrent Black-boxes--an Automata-Theoretic and Decompositional Approach
<|reference_start|>Testing Systems of Concurrent Black-boxes--an Automata-Theoretic and Decompositional Approach: The global testing problem studied in this paper is to seek a definite answer to whether a system of concurrent black-boxes has an observable behavior in a given finite (but possibly huge) set "Bad". We introduce a novel approach to solve the problem that does not require integration testing. Instead, in our approach, the global testing problem is reduced to testing individual black-boxes in the system one by one in some given order. Using an automata-theoretic approach, test sequences for each individual black-box are generated from the system's description as well as the test results of black-boxes prior to this black-box in the given order. In contrast to the conventional compositional/modular verification/testing approaches, our approach is essentially decompositional. Also, our technique is complete, sound, and can be carried out automatically. Our experiment results show that the total number of tests needed to solve the global testing problem remains small even for an extremely large "Bad".<|reference_end|>
arxiv
@article{xie2005testing, title={Testing Systems of Concurrent Black-boxes--an Automata-Theoretic and Decompositional Approach}, author={Gaoyan Xie and Zhe Dang}, journal={arXiv preprint arXiv:cs/0502040}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502040}, primaryClass={cs.SE} }
xie2005testing
arxiv-672616
cs/0502041
Logarithmic Lower Bounds in the Cell-Probe Model
<|reference_start|>Logarithmic Lower Bounds in the Cell-Probe Model: We develop a new technique for proving cell-probe lower bounds on dynamic data structures. This technique enables us to prove an amortized randomized Omega(lg n) lower bound per operation for several data structural problems on n elements, including partial sums, dynamic connectivity among disjoint paths (or a forest or a graph), and several other dynamic graph problems (by simple reductions). Such a lower bound breaks a long-standing barrier of Omega(lg n / lglg n) for any dynamic language membership problem. It also establishes the optimality of several existing data structures, such as Sleator and Tarjan's dynamic trees. We also prove the first Omega(log_B n) lower bound in the external-memory model without assumptions on the data structure (such as the comparison model). Our lower bounds also give a query-update trade-off curve matched, e.g., by several data structures for dynamic connectivity in graphs. We also prove matching upper and lower bounds for partial sums when parameterized by the word size and the maximum additive change in an update.<|reference_end|>
arxiv
@article{patrascu2005logarithmic, title={Logarithmic Lower Bounds in the Cell-Probe Model}, author={Mihai Patrascu and Erik D. Demaine}, journal={arXiv preprint arXiv:cs/0502041}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502041}, primaryClass={cs.DS cs.CC} }
patrascu2005logarithmic
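For the partial-sums problem named in the abstract above, the classic O(lg n)-per-operation upper bound that the paper's Omega(lg n) lower bound shows to be optimal is a Fenwick (binary indexed) tree. A minimal sketch for illustration only; this is the textbook structure, not code from the paper:

    class Fenwick:
        def __init__(self, n):
            self.t = [0] * (n + 1)                  # 1-based internal array

        def update(self, i, delta):
            """Add delta at position i (1-based)."""
            while i < len(self.t):
                self.t[i] += delta
                i += i & (-i)                       # climb to the next covering node

        def prefix_sum(self, i):
            """Sum of positions 1..i."""
            s = 0
            while i > 0:
                s += self.t[i]
                i -= i & (-i)                       # strip the lowest set bit
            return s

    f = Fenwick(8)
    f.update(3, 5)
    f.update(7, 2)
    print(f.prefix_sum(6))                          # 5
    print(f.prefix_sum(8))                          # 7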
arxiv-672617
cs/0502042
Unified Large System Analysis of MMSE and Adaptive Least Squares Receivers for a class of Random Matrix Channels
<|reference_start|>Unified Large System Analysis of MMSE and Adaptive Least Squares Receivers for a class of Random Matrix Channels: We present a unified large system analysis of linear receivers for a class of random matrix channels. The technique unifies the analysis of both the minimum-mean-squared-error (MMSE) receiver and the adaptive least-squares (ALS) receiver, and also uses a common approach for both random i.i.d. and random orthogonal precoding. We derive expressions for the asymptotic signal-to-interference-plus-noise ratio (SINR) of the MMSE receiver, and both the transient and steady-state SINR of the ALS receiver, trained using either i.i.d. data sequences or orthogonal training sequences. The results are in terms of key system parameters, and allow for arbitrary distributions of the power of each of the data streams and the eigenvalues of the channel correlation matrix. In the case of the ALS receiver, we allow a diagonal loading constant and an arbitrary data windowing function. For i.i.d. training sequences and no diagonal loading, we give a fundamental relationship between the transient/steady-state SINR of the ALS and the MMSE receivers. We demonstrate that for a particular ratio of receive to transmit dimensions and window shape, all channels which have the same MMSE SINR have an identical transient ALS SINR response. We demonstrate several applications of the results, including an optimization of information throughput with respect to training sequence length in coded block transmission.<|reference_end|>
arxiv
@article{peacock2005unified, title={Unified Large System Analysis of MMSE and Adaptive Least Squares Receivers for a class of Random Matrix Channels}, author={Matthew J.M. Peacock, Iain B. Collings, Michael L. Honig}, journal={arXiv preprint arXiv:cs/0502042}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502042}, primaryClass={cs.IT math.IT} }
peacock2005unified
arxiv-672618
cs/0502043
Compatible Triangulations and Point Partitions by Series-Triangular Graphs
<|reference_start|>Compatible Triangulations and Point Partitions by Series-Triangular Graphs: We introduce series-triangular graph embeddings and show how to partition point sets with them. This result is then used to improve the upper bound on the number of Steiner points needed to obtain compatible triangulations of point sets. The problem is generalized to finding compatible triangulations for more than two point sets and we show that such triangulations can be constructed with only a linear number of Steiner points added to each point set.<|reference_end|>
arxiv
@article{danciger2005compatible, title={Compatible Triangulations and Point Partitions by Series-Triangular Graphs}, author={Jeff Danciger, Satyan L. Devadoss, Don Sheehy}, journal={Computational Geometry: Theory and Applications, 34 (2006) 195-202}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502043}, primaryClass={cs.CG cs.DM} }
danciger2005compatible
arxiv-672619
cs/0502044
The complexity of computing the Hilbert polynomial of smooth equidimensional complex projective varieties
<|reference_start|>The complexity of computing the Hilbert polynomial of smooth equidimensional complex projective varieties: We continue the study of counting complexity begun in [Buergisser, Cucker 04] and [Buergisser, Cucker, Lotz 05] by proving upper and lower bounds on the complexity of computing the Hilbert polynomial of a homogeneous ideal. We show that the problem of computing the Hilbert polynomial of a smooth equidimensional complex projective variety can be reduced in polynomial time to the problem of counting the number of complex common zeros of a finite set of multivariate polynomials. Moreover, we prove that the more general problem of computing the Hilbert polynomial of a homogeneous ideal is polynomial space hard. This implies polynomial space lower bounds for both the problems of computing the rank and the Euler characteristic of cohomology groups of coherent sheaves on projective space, improving the #P-lower bound of Bach (JSC 1999).<|reference_end|>
arxiv
@article{buergisser2005the, title={The complexity of computing the Hilbert polynomial of smooth equidimensional complex projective varieties}, author={Peter Buergisser, Martin Lotz}, journal={arXiv preprint arXiv:cs/0502044}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502044}, primaryClass={cs.SC cs.CC} }
buergisser2005the
arxiv-672620
cs/0502045
Adaptive grids as parametrized scale-free networks
<|reference_start|>Adaptive grids as parametrized scale-free networks: In this paper we present a possible model of adaptive grids for numerical resolution of differential problems, using physical or geometrical properties such as the viscosity or velocity gradient of a moving fluid. The relation between the values of the grid step and these entities is based on the mathematical scheme offered by the model of scale-free networks, due to Barabasi, so that the step can be connected to the other variables by a constitutive relation. Some examples and an application are discussed, showing that this approach can be further developed for the treatment of more complex situations.<|reference_end|>
arxiv
@article{argentini2005adaptive, title={Adaptive grids as parametrized scale-free networks}, author={Gianluca Argentini}, journal={arXiv preprint arXiv:cs/0502045}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502045}, primaryClass={cs.NA math-ph math.AP math.MP physics.flu-dyn} }
argentini2005adaptive
arxiv-672621
cs/0502046
Proof obligations for specification and refinement of liveness properties under weak fairness
<|reference_start|>Proof obligations for specification and refinement of liveness properties under weak fairness: In this report, we present a formal model of fair iteration of events for B event systems. The model is used to justify proof obligations for basic liveness properties and preservation under refinement of general liveness properties. The model of fair iteration of events uses the dovetail operator, an operator proposed by Broy and Nelson to model fair choice. The proofs are mainly founded in fixpoint calculations of fair iteration of events and weakest precondition calculus.<|reference_end|>
arxiv
@article{barradas2005proof, title={Proof obligations for specification and refinement of liveness properties under weak fairness}, author={Hector Ruiz Barradas (LSR - IMAG), Didier Bert (LSR - IMAG)}, journal={arXiv preprint arXiv:cs/0502046}, year={2005}, number={Rapport LSR IMAG : RR 1071-I LSR 20}, archivePrefix={arXiv}, eprint={cs/0502046}, primaryClass={cs.LO} }
barradas2005proof
arxiv-672622
cs/0502047
The succinctness of first-order logic on linear orders
<|reference_start|>The succinctness of first-order logic on linear orders: Succinctness is a natural measure for comparing the strength of different logics. Intuitively, a logic L_1 is more succinct than another logic L_2 if all properties that can be expressed in L_2 can be expressed in L_1 by formulas of (approximately) the same size, but some properties can be expressed in L_1 by (significantly) smaller formulas. We study the succinctness of logics on linear orders. Our first theorem is concerned with the finite variable fragments of first-order logic. We prove that: (i) Up to a polynomial factor, the 2- and the 3-variable fragments of first-order logic on linear orders have the same succinctness. (ii) The 4-variable fragment is exponentially more succinct than the 3-variable fragment. Our second main result compares the succinctness of first-order logic on linear orders with that of monadic second-order logic. We prove that the fragment of monadic second-order logic that has the same expressiveness as first-order logic on linear orders is non-elementarily more succinct than first-order logic.<|reference_end|>
arxiv
@article{grohe2005the, title={The succinctness of first-order logic on linear orders}, author={Martin Grohe and Nicole Schweikardt}, journal={Logical Methods in Computer Science, Volume 1, Issue 1 (June 29, 2005) lmcs:2276}, year={2005}, doi={10.2168/LMCS-1(1:6)2005}, archivePrefix={arXiv}, eprint={cs/0502047}, primaryClass={cs.LO} }
grohe2005the
arxiv-672623
cs/0502048
An Automated Analysis of the Security of Quantum Key Distribution
<|reference_start|>An Automated Analysis of the Security of Quantum Key Distribution: This paper discusses the use of computer-aided verification as a practical means for analysing quantum information systems; specifically, the BB84 protocol for quantum key distribution is examined using this method. This protocol has been shown to be unconditionally secure against all attacks in an information-theoretic setting, but the relevant security proof requires a thorough understanding of the formalism of quantum mechanics and is not easily adaptable to practical scenarios. Our approach is based on probabilistic model-checking; we have used the PRISM model-checker to show that, as the number of qubits transmitted in BB84 is increased, the equivocation of the eavesdropper with respect to the channel decreases exponentially. We have also shown that the probability of detecting the presence of an eavesdropper increases exponentially with the number of qubits. The results presented here are a testament to the effectiveness of the model-checking approach for systems where analytical solutions may not be possible or plausible.<|reference_end|>
arxiv
@article{nagarajan2005an, title={An Automated Analysis of the Security of Quantum Key Distribution}, author={Rajagopal Nagarajan, Nikolaos Papanikolaou, Garry Bowen, Simon Gay}, journal={arXiv preprint arXiv:cs/0502048}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502048}, primaryClass={cs.CR quant-ph} }
nagarajan2005an
arxiv-672624
cs/0502049
Generalised Bent Criteria for Boolean Functions (I)
<|reference_start|>Generalised Bent Criteria for Boolean Functions (I): Generalisations of the bent property of a boolean function are presented, by proposing spectral analysis with respect to a well-chosen set of local unitary transforms. Quadratic boolean functions are related to simple graphs and it is shown that the orbit generated by successive Local Complementations on a graph can be found within the transform spectra under investigation. The flat spectra of a quadratic boolean function are related to modified versions of its associated adjacency matrix.<|reference_end|>
arxiv
@article{riera2005generalised, title={Generalised Bent Criteria for Boolean Functions (I)}, author={Constanza Riera and Matthew G. Parker}, journal={arXiv preprint arXiv:cs/0502049}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502049}, primaryClass={cs.IT math.IT} }
riera2005generalised
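As background for the generalized criteria above: a Boolean function is bent exactly when its Walsh-Hadamard spectrum is completely flat. The sketch below checks flatness under the plain H^n transform only (the paper works with the richer {I,H,N}^n transform sets); the example function x1x2 + x3x4 is the standard 4-variable bent function, and the encoding conventions are illustrative choices.

    def walsh_hadamard(f_table):
        """Fast WHT of the (+1/-1)-encoded truth table of a Boolean function."""
        spec = [1 - 2 * b for b in f_table]         # 0 -> +1, 1 -> -1
        h = 1
        while h < len(spec):
            for i in range(0, len(spec), 2 * h):
                for j in range(i, i + h):
                    x, y = spec[j], spec[j + h]
                    spec[j], spec[j + h] = x + y, x - y
            h *= 2
        return spec

    def is_bent(f_table):
        return len({abs(v) for v in walsh_hadamard(f_table)}) == 1  # flat spectrum

    # f(x1,x2,x3,x4) = x1*x2 XOR x3*x4, truth table indexed by x = (x1 x2 x3 x4).
    f = [(((x >> 3) & 1) & ((x >> 2) & 1)) ^ (((x >> 1) & 1) & (x & 1))
         for x in range(16)]
    print(is_bent(f))                               # True: spectrum is +/-4 everywhere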
arxiv-672625
cs/0502050
Generalised Bent Criteria for Boolean Functions (II)
<|reference_start|>Generalised Bent Criteria for Boolean Functions (II): In the first part of this paper [16], some results on how to compute the flat spectra of Boolean constructions w.r.t. the transforms {I,H}^n, {H,N}^n and {I,H,N}^n were presented, and the relevance of Local Complementation to the quadratic case was indicated. In this second part, the results are applied to develop recursive formulae for the numbers of flat spectra of some structural quadratics. Observations are made as to the generalised Bent properties of boolean functions of algebraic degree greater than two, and the number of flat spectra w.r.t. {I,H,N}^n are computed for some of them.<|reference_end|>
arxiv
@article{riera2005generalised, title={Generalised Bent Criteria for Boolean Functions (II)}, author={Constanza Riera, George Petrides and Matthew G. Parker}, journal={arXiv preprint arXiv:cs/0502050}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502050}, primaryClass={cs.IT math.IT} }
riera2005generalised
arxiv-672626
cs/0502051
A Semantic Grid-based E-Learning Framework (SELF)
<|reference_start|>A Semantic Grid-based E-Learning Framework (SELF): E-learning can be loosely defined as a wide set of applications and processes, which uses available electronic media (and tools) to deliver vocational education and training. With its increasing recognition as a ubiquitous mode of instruction and interaction in the academic as well as corporate world, the need for a scalable and realistic model is becoming important. In this paper we introduce SELF; a Semantic grid-based E-Learning Framework. SELF aims to identify the key enablers in a practical grid-based E-learning environment and to minimize technological reworking by proposing a well-defined interaction plan among currently available tools and technologies. We define a dichotomy with E-learning specific application layers on top and semantic grid-based support layers underneath. We also map the latest open and freeware technologies with various components in SELF.<|reference_end|>
arxiv
@article{abbas2005a, title={A Semantic Grid-based E-Learning Framework (SELF)}, author={Zaheer Abbas, Muhammad Umer, Mohammed Odeh, Richard McClatchey, Arshad Ali, Farooq Ahmad}, journal={arXiv preprint arXiv:cs/0502051}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502051}, primaryClass={cs.DC} }
abbas2005a
arxiv-672627
cs/0502052
Log Analysis Case Study Using LoGS
<|reference_start|>Log Analysis Case Study Using LoGS: A very useful technique a network administrator can use to identify problematic network behavior is careful analysis of logs of incoming and outgoing network flows. The challenge one faces when attempting to undertake this course of action, though, is that large networks tend to generate an extremely large quantity of network traffic in a very short period of time, resulting in very large traffic logs which must be analyzed post-generation with an eye for contextual information which may reveal symptoms of problematic traffic. A better technique is to perform real-time log analysis using a real-time context-generating tool such as LoGS.<|reference_end|>
arxiv
@article{mogilevsky2005log, title={Log Analysis Case Study Using LoGS}, author={Dmitry Mogilevsky}, journal={arXiv preprint arXiv:cs/0502052}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502052}, primaryClass={cs.CR cs.IR} }
mogilevsky2005log
arxiv-672628
cs/0502053
A low-cost time-hopping impulse radio system for high data rate transmission
<|reference_start|>A low-cost time-hopping impulse radio system for high data rate transmission: We present an efficient, low-cost implementation of time-hopping impulse radio that fulfills the spectral mask mandated by the FCC and is suitable for high-data-rate, short-range communications. Key features are: (i) all-baseband implementation that obviates the need for passband components, (ii) symbol-rate (not chip rate) sampling, A/D conversion, and digital signal processing, (iii) fast acquisition due to novel search algorithms, (iv) spectral shaping that can be adapted to accommodate different spectrum regulations and interference environments. Computer simulations show that this system can provide 110Mbit/s at 7-10m distance, as well as higher data rates at shorter distances under FCC emissions limits. Due to the spreading concept of time-hopping impulse radio, the system can sustain multiple simultaneous users, and can suppress narrowband interference effectively.<|reference_end|>
arxiv
@article{molisch2005a, title={A low-cost time-hopping impulse radio system for high data rate transmission}, author={Andreas F. Molisch, Ye Geoffrey Li, Yves-Paul Nakache, Philip Orlik, Makoto Miyake, Yunnan Wu, Sinan Gezici, Harry Sheng, S. Y. Kung, H. Kobayashi, H. Vincent Poor, Alexander Haimovich and Jinyun Zhang}, journal={arXiv preprint arXiv:cs/0502053}, year={2005}, doi={10.1155/ASP.2005.397}, archivePrefix={arXiv}, eprint={cs/0502053}, primaryClass={cs.IT math.IT} }
molisch2005a
arxiv-672629
cs/0502054
Improved Tag Set Design and Multiplexing Algorithms for Universal Arrays
<|reference_start|>Improved Tag Set Design and Multiplexing Algorithms for Universal Arrays: In this paper we address two optimization problems arising in the design of genomic assays based on universal tag arrays. First, we address the universal array tag set design problem. For this problem, we extend previous formulations to incorporate antitag-to-antitag hybridization constraints in addition to constraints on antitag-to-tag hybridization specificity, establish a constructive upper bound on the maximum number of tags satisfying the extended constraints, and propose a simple greedy tag selection algorithm. Second, we give methods for improving the multiplexing rate in large-scale genomic assays by combining primer selection with tag assignment. Experimental results on simulated data show that this integrated optimization leads to reductions of up to 50% in the number of required arrays.<|reference_end|>
arxiv
@article{mandoiu2005improved, title={Improved Tag Set Design and Multiplexing Algorithms for Universal Arrays}, author={Ion I. Mandoiu, Claudia Prajescu, Dragos Trinca}, journal={arXiv preprint arXiv:cs/0502054}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502054}, primaryClass={cs.DS} }
mandoiu2005improved
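A toy version of the greedy tag selection step described above: scan candidate tags and keep one only if it cannot cross-hybridize with anything already chosen. The paper's actual constraints are thermodynamic (antitag-to-tag and antitag-to-antitag hybridization); the shared-c-mer proxy below, including the c=4 threshold and 6-mer candidates, is purely an illustrative stand-in.

    from itertools import product

    def substrings(s, c):
        return {s[i:i + c] for i in range(len(s) - c + 1)}

    def revcomp(s):
        return s.translate(str.maketrans("ACGT", "TGCA"))[::-1]

    def greedy_tags(candidates, c=4):
        chosen, seen = [], set()
        for tag in candidates:
            # Reject if any length-c substring of the tag, or of its reverse
            # complement (our proxy for antitag interactions), was seen before.
            subs = substrings(tag, c) | substrings(revcomp(tag), c)
            if subs & seen:
                continue
            chosen.append(tag)
            seen |= subs
        return chosen

    candidates = ["".join(p) for p in product("ACGT", repeat=6)]
    tags = greedy_tags(candidates, c=4)
    print(len(tags), tags[:5])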
arxiv-672630
cs/0502055
On quasi-cyclic interleavers for parallel turbo codes
<|reference_start|>On quasi-cyclic interleavers for parallel turbo codes: We present an interleaving scheme that yields quasi-cyclic turbo codes. We prove that randomly chosen members of this family yield with probability almost 1 turbo codes with asymptotically optimum minimum distance, i.e. growing as a logarithm of the interleaver size. These interleavers are also very practical in terms of memory requirements and their decoding error probabilities for small block lengths compare favorably with previous interleaving schemes.<|reference_end|>
arxiv
@article{boutros2005on, title={On quasi-cyclic interleavers for parallel turbo codes}, author={Joseph Boutros and Gilles Zémor}, journal={IEEE Transactions on Information Theory, IT-52, No 4 (2006) pp. 1732--1739.}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502055}, primaryClass={cs.IT math.IT} }
boutros2005on
arxiv-672631
cs/0502056
Co-Authorship Networks in the Digital Library Research Community
<|reference_start|>Co-Authorship Networks in the Digital Library Research Community: The field of digital libraries (DLs) coalesced in 1994: the first digital library conferences were held that year, awareness of the World Wide Web was accelerating, and the National Science Foundation awarded $24 Million (U.S.) for the Digital Library Initiative (DLI). In this paper we examine the state of the DL domain after a decade of activity by applying social network analysis to the co-authorship network of the past ACM, IEEE, and joint ACM/IEEE digital library conferences. We base our analysis on a common binary undirected network model to represent the co-authorship network, and from it we extract several established network measures. We also introduce a weighted directed network model to represent the co-authorship network, for which we define AuthorRank as an indicator of the impact of an individual author in the network. The results are validated against conference program committee members in the same period. The results show clear advantages of PageRank and AuthorRank over degree, closeness and betweenness centrality metrics. We also investigate the amount and nature of international participation in Joint Conference on Digital Libraries (JCDL).<|reference_end|>
arxiv
@article{liu2005co-authorship, title={Co-Authorship Networks in the Digital Library Research Community}, author={Xiaoming Liu, Johan Bollen, Michael L. Nelson, Herbert Van de Sompel}, journal={arXiv preprint arXiv:cs/0502056}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502056}, primaryClass={cs.DL} }
liu2005co-authorship
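AuthorRank, as described above, builds on PageRank over a weighted directed co-authorship graph. A minimal power-iteration sketch follows; the damping factor, the toy graph, and the edge-weighting by joint-paper counts are placeholder assumptions for illustration, not the paper's exact formulation.

    def pagerank(weights, d=0.85, iters=100):
        """weights[u][v] = weight of edge u -> v; returns a score per node."""
        nodes = list(weights)
        rank = {u: 1.0 / len(nodes) for u in nodes}
        out = {u: sum(weights[u].values()) for u in nodes}
        for _ in range(iters):
            new = {}
            for v in nodes:
                flow = sum(rank[u] * weights[u].get(v, 0.0) / out[u]
                           for u in nodes if out[u] > 0)
                new[v] = (1 - d) / len(nodes) + d * flow
            rank = new
        return rank

    # Toy co-authorship graph: edge weight = number of joint papers (assumed scheme).
    w = {"a": {"b": 2.0, "c": 1.0},
         "b": {"a": 2.0},
         "c": {"a": 1.0}}
    print(pagerank(w))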
arxiv-672632
cs/0502057
Decomposable Problems, Niching, and Scalability of Multiobjective Estimation of Distribution Algorithms
<|reference_start|>Decomposable Problems, Niching, and Scalability of Multiobjective Estimation of Distribution Algorithms: The paper analyzes the scalability of multiobjective estimation of distribution algorithms (MOEDAs) on a class of boundedly-difficult additively-separable multiobjective optimization problems. The paper illustrates that even if the linkage is correctly identified, massive multimodality of the search problems can easily overwhelm the nicher and lead to exponential scale-up. Facetwise models are subsequently used to propose a growth rate of the number of differing substructures between the two objectives to avoid the niching method from being overwhelmed and lead to polynomial scalability of MOEDAs.<|reference_end|>
arxiv
@article{sastry2005decomposable, title={Decomposable Problems, Niching, and Scalability of Multiobjective Estimation of Distribution Algorithms}, author={Kumara Sastry, Martin Pelikan, David E. Goldberg}, journal={arXiv preprint arXiv:cs/0502057}, year={2005}, number={IlliGAL Report No. 2005004}, archivePrefix={arXiv}, eprint={cs/0502057}, primaryClass={cs.NE cs.AI} }
sastry2005decomposable
arxiv-672633
cs/0502058
The Complexity of Computing the Size of an Interval
<|reference_start|>The Complexity of Computing the Size of an Interval: Given a p-order A over a universe of strings (i.e., a transitive, reflexive, antisymmetric relation such that if (x, y) is an element of A then |x| is polynomially bounded by |y|), an interval size function of A returns, for each string x in the universe, the number of strings in the interval between strings b(x) and t(x) (with respect to A), where b(x) and t(x) are functions that are polynomial-time computable in the length of x. By choosing sets of interval size functions based on feasibility requirements for their underlying p-orders, we obtain new characterizations of complexity classes. We prove that the set of all interval size functions whose underlying p-orders are polynomial-time decidable is exactly #P. We show that the interval size functions for orders with polynomial-time adjacency checks are closely related to the class FPSPACE(poly). Indeed, FPSPACE(poly) is exactly the class of all nonnegative functions that are an interval size function minus a polynomial-time computable function. We study two important functions in relation to interval size functions. The function #DIV maps each natural number n to the number of nontrivial divisors of n. We show that #DIV is an interval size function of a polynomial-time decidable partial p-order with polynomial-time adjacency checks. The function #MONSAT maps each monotone boolean formula F to the number of satisfying assignments of F. We show that #MONSAT is an interval size function of a polynomial-time decidable total p-order with polynomial-time adjacency checks. Finally, we explore the related notion of cluster computation.<|reference_end|>
arxiv
@article{hemaspaandra2005the, title={The Complexity of Computing the Size of an Interval}, author={Lane A. Hemaspaandra, Christopher M. Homan, Sven Kosub, Klaus W. Wagner}, journal={arXiv preprint arXiv:cs/0502058}, year={2005}, number={URCS-TR-2005-856}, archivePrefix={arXiv}, eprint={cs/0502058}, primaryClass={cs.CC cs.DM} }
hemaspaandra2005the
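The function #DIV from the abstract above maps each natural number n to its number of nontrivial divisors, i.e., divisors other than 1 and n. A direct trial-division sketch is below; note that it runs in roughly sqrt(n) steps, exponential in the bit length of n, which is part of what makes the function's complexity interesting in the interval-size framework.

    def num_nontrivial_divisors(n):
        """#DIV(n): count divisors of n other than 1 and n."""
        count = 0
        d = 2
        while d * d <= n:
            if n % d == 0:
                count += 1 if d * d == n else 2    # count d and its cofactor n // d
            d += 1
        return count

    print(num_nontrivial_divisors(12))   # 4 -> divisors 2, 3, 4, 6
    print(num_nontrivial_divisors(13))   # 0 -> 13 is prime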
arxiv-672634
cs/0502059
New approach for Finite Difference Method for Thermal Analysis of Passive Solar Systems
<|reference_start|>New approach for Finite Difference Method for Thermal Analysis of Passive Solar Systems: Mathematical treatment of massive wall systems is a useful tool for the investigation of these solar applications. The objectives of this work are to develop (and validate) a numerical solution model for predicting the thermal behaviour of passive solar systems with a massive wall, to improve knowledge of using indirect passive solar systems, and to assess their energy efficiency according to climatic conditions in Bulgaria. The problem of passive solar systems with massive walls is modelled by thermal and mass transfer equations. As boundary conditions for the mathematical problem, equations are used which describe the influence of weather data and constructive parameters of the building on the thermal performance of the passive system. The mathematical model is solved by means of the finite difference method and an improved solution procedure. The article presents results of a theoretical and experimental study for developing and validating a numerical solution model for predicting the thermal behaviour of passive solar systems with a massive wall.<|reference_end|>
arxiv
@article{shtrakov2005new, title={New approach for Finite Difference Method for Thermal Analysis of Passive Solar Systems}, author={Stanko Shtrakov and Anton Stoilov}, journal={arXiv preprint arXiv:cs/0502059}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502059}, primaryClass={cs.NA} }
shtrakov2005new
arxiv-672635
cs/0502060
Perspectives for Strong Artificial Life
<|reference_start|>Perspectives for Strong Artificial Life: This text introduces the twin deadlocks of strong artificial life. Conceptualization of life is a deadlock both because of the existence of a continuum between the inert and the living, and because we only know one instance of life. Computationalism is a second deadlock since it remains a matter of faith. Nevertheless, artificial life realizations quickly progress and recent constructions embed an always growing set of the intuitive properties of life. This growing gap between theory and realizations should sooner or later crystallize in some kind of paradigm shift and then give clues to break the twin deadlocks.<|reference_end|>
arxiv
@article{rennard2005perspectives, title={Perspectives for Strong Artificial Life}, author={J.-Ph Rennard}, journal={Rennard, J.-Ph., (2004), Perspective for Strong Artificial Life in De Castro, L.N. & von Zuben F.J. (Eds), Recent Developments in Biologically Inspired Computing, Hershey:IGP, 301-318}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502060}, primaryClass={cs.AI} }
rennard2005perspectives
arxiv-672636
cs/0502061
A Geographic Directed Preferential Internet Topology Model
<|reference_start|>A Geographic Directed Preferential Internet Topology Model: The goal of this work is to model the peering arrangements between Autonomous Systems (ASes). Most existing models of the AS-graph assume an undirected graph. However, peering arrangements are mostly asymmetric Customer-Provider arrangements, which are better modeled as directed edges. Furthermore, it is well known that the AS-graph, and in particular its clustering structure, is influenced by geography. We introduce a new model that describes the AS-graph as a directed graph, with an edge going from the customer to the provider, but also models symmetric peer-to-peer arrangements, and takes geography into account. We are able to mathematically analyze its power-law exponent and number of leaves. Beyond the analysis we have implemented our model as a synthetic network generator we call GdTang. Experimentation with GdTang shows that the networks it produces are more realistic than those generated by other network generators, in terms of its power-law exponent, fractions of customer-provider and symmetric peering arrangements, and the size of its dense core. We believe that our model is the first to manifest realistic regional dense cores that have a clear geographic flavor. Our synthetic networks also exhibit path inflation effects that are similar to those observed in the real AS graph.<|reference_end|>
arxiv
@article{bar2005a, title={A Geographic Directed Preferential Internet Topology Model}, author={Sagy Bar, Mira Gonen, Avishai Wool}, journal={arXiv preprint arXiv:cs/0502061}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502061}, primaryClass={cs.NI cs.AR} }
bar2005a
arxiv-672637
cs/0502062
Tree Parity Machine Rekeying Architectures
<|reference_start|>Tree Parity Machine Rekeying Architectures: The necessity to secure the communication between hardware components in embedded systems becomes increasingly important with regard to the secrecy of data and particularly its commercial use. We suggest a low-cost (i.e. small logic-area) solution for flexible security levels and short key lifetimes. The basis is an approach for symmetric key exchange using the synchronisation of Tree Parity Machines. Fast successive key generation enables a key exchange within a few milliseconds, given realistic communication channels with a limited bandwidth. For demonstration we evaluate characteristics of a standard-cell ASIC design realisation as IP-core in 0.18-micrometer CMOS-technology.<|reference_end|>
arxiv
@article{volkmer2005tree, title={Tree Parity Machine Rekeying Architectures}, author={Markus Volkmer and Sebastian Wallner}, journal={IEEE Transactions on Computers Vol. 54 No. 4, pp. 421-427, April 2005}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502062}, primaryClass={cs.CR cs.AR} }
volkmer2005tree
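A sketch of one Tree Parity Machine step underlying the key exchange above: K hidden units with N inputs each and weights bounded in [-L, L]; the machine's output is the product of the hidden-unit signs, and weights move only when the two parties' outputs agree. The sizes, the sign convention at zero, and the particular hebbian rule variant are illustrative choices, not the paper's fixed parameters.

    import random
    from math import prod

    K, N, L = 3, 4, 3                               # assumed toy sizes

    def tpm_output(w, x):
        """Hidden-unit signs and overall parity output of one machine."""
        sigma = [1 if sum(wi * xi for wi, xi in zip(w[k], x[k])) > 0 else -1
                 for k in range(K)]
        return sigma, prod(sigma)

    def hebbian_update(w, x, sigma, tau):
        for k in range(K):
            if sigma[k] == tau:                     # only units matching the output learn
                for i in range(N):
                    w[k][i] = max(-L, min(L, w[k][i] + x[k][i] * tau))

    # Two parties evaluate a shared public input and update only on agreement.
    wa = [[random.randint(-L, L) for _ in range(N)] for _ in range(K)]
    wb = [[random.randint(-L, L) for _ in range(N)] for _ in range(K)]
    x = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(K)]
    sa, ta = tpm_output(wa, x)
    sb, tb = tpm_output(wb, x)
    if ta == tb:
        hebbian_update(wa, x, sa, ta)
        hebbian_update(wb, x, sb, tb)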
arxiv-672638
cs/0502063
Nonlinear MMSE Multiuser Detection Based on Multivariate Gaussian Approximation
<|reference_start|>Nonlinear MMSE Multiuser Detection Based on Multivariate Gaussian Approximation: In this paper, a class of nonlinear MMSE multiuser detectors are derived based on a multivariate Gaussian approximation of the multiple access interference. This approach leads to expressions identical to those describing the probabilistic data association (PDA) detector, thus providing an alternative analytical justification for this structure. A simplification to the PDA detector based on approximating the covariance matrix of the multivariate Gaussian distribution is suggested, resulting in a soft interference cancellation scheme. Corresponding multiuser soft-input, soft-output detectors delivering extrinsic log-likelihood ratios are derived for application in iterative multiuser decoders. Finally, a large system performance analysis is conducted for the simplified PDA, showing that the bit error rate performance of this detector can be accurately predicted and related to the replica method analysis for the optimal detector. Methods from statistical neuro-dynamics are shown to provide a closely related alternative large system prediction. Numerical results demonstrate that for large systems, the bit error rate is accurately predicted by the analysis and found to be close to optimal performance.<|reference_end|>
arxiv
@article{tan2005nonlinear, title={Nonlinear MMSE Multiuser Detection Based on Multivariate Gaussian Approximation}, author={Peng Hui Tan and Lars K. Rasmussen}, journal={arXiv preprint arXiv:cs/0502063}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502063}, primaryClass={cs.IT math.IT} }
tan2005nonlinear
arxiv-672639
cs/0502064
The Lattice of Machine Invariant Sets and Subword Complexity
<|reference_start|>The Lattice of Machine Invariant Sets and Subword Complexity: We investigate the lattice of machine invariant classes. This is an infinite completely distributive lattice, but it is not a Boolean lattice. We show that the subword complexity and the growth function create machine invariant classes. The lattice could thus serve as a measure of a word's cryptographic quality when identifying new stream ciphers suitable for widespread adoption.<|reference_end|>
arxiv
@article{buls2005the, title={The Lattice of Machine Invariant Sets and Subword Complexity}, author={Janis Buls}, journal={arXiv preprint arXiv:cs/0502064}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502064}, primaryClass={cs.CR cs.CC cs.DM} }
buls2005the
arxiv-672640
cs/0502065
Highly Scalable Algorithms for Robust String Barcoding
<|reference_start|>Highly Scalable Algorithms for Robust String Barcoding: String barcoding is a recently introduced technique for genomic-based identification of microorganisms. In this paper we describe the engineering of highly scalable algorithms for robust string barcoding. Our methods enable distinguisher selection based on whole genomic sequences of hundreds of microorganisms of up to bacterial size on a well-equipped workstation, and can be easily parallelized to further extend the applicability range to thousands of bacterial size genomes. Experimental results on both randomly generated and NCBI genomic data show that whole-genome based selection results in a number of distinguishers nearly matching the information theoretic lower bounds for the problem.<|reference_end|>
arxiv
@article{dasgupta2005highly, title={Highly Scalable Algorithms for Robust String Barcoding}, author={Bhaskar DasGupta, Kishori M. Konwar, Ion I. Mandoiu, Alex A. Shvartsman}, journal={arXiv preprint arXiv:cs/0502065}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502065}, primaryClass={cs.DS} }
dasgupta2005highly
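The barcoding task above amounts to choosing a set of substring "distinguishers" whose presence/absence fingerprints are unique per genome; the information-theoretic lower bound is on the order of log2 of the number of genomes. A toy greedy pair-separation sketch follows; the candidate generation and scoring are drastic simplifications of the whole-genome-scale engineering the paper describes.

    from itertools import combinations

    def greedy_distinguishers(seqs, max_len=3):
        # Candidate distinguishers: all substrings up to max_len (assumed cutoff).
        cands = {s[i:i + L] for s in seqs
                 for L in range(1, max_len + 1)
                 for i in range(len(s) - L + 1)}
        pairs = set(combinations(range(len(seqs)), 2))
        chosen = []
        while pairs:
            def separated(c):
                # Pairs told apart by c: present in one sequence, absent in the other.
                return {(i, j) for (i, j) in pairs if (c in seqs[i]) != (c in seqs[j])}
            best = max(cands, key=lambda c: len(separated(c)))
            sep = separated(best)
            if not sep:
                raise ValueError("some sequences are indistinguishable")
            chosen.append(best)
            pairs -= sep
        return chosen

    genomes = ["ACGTAC", "ACCTTG", "TTGACG", "GGGTAC"]
    print(greedy_distinguishers(genomes))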
arxiv-672641
cs/0502066
On the Complexity of Real Functions
<|reference_start|>On the Complexity of Real Functions: We develop a notion of computability and complexity of functions over the reals, which seems to be very natural when one tries to determine just how "difficult" a certain function is. This notion can be viewed as an extension of both BSS computability [Blum, Cucker, Shub, Smale 1998], and bit computability in the tradition of computable analysis [Weihrauch 2000] as it relies on the latter but allows some discontinuities and multiple values.<|reference_end|>
arxiv
@article{braverman2005on, title={On the Complexity of Real Functions}, author={Mark Braverman}, journal={arXiv preprint arXiv:cs/0502066}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502066}, primaryClass={cs.CC cs.NA math.NA} }
braverman2005on
arxiv-672642
cs/0502067
Master Algorithms for Active Experts Problems based on Increasing Loss Values
<|reference_start|>Master Algorithms for Active Experts Problems based on Increasing Loss Values: We specify an experts algorithm with the following characteristics: (a) it uses only feedback from the actions actually chosen (bandit setup), (b) it can be applied with countably infinite expert classes, and (c) it copes with losses that may grow in time appropriately slowly. We prove loss bounds against an adaptive adversary. From this, we obtain master algorithms for "active experts problems", which means that the master's actions may influence the behavior of the adversary. Our algorithm can significantly outperform standard experts algorithms on such problems. Finally, we combine it with a universal expert class. This results in a (computationally infeasible) universal master algorithm which performs - in a certain sense - almost as well as any computable strategy, for any online problem.<|reference_end|>
arxiv
@article{poland2005master, title={Master Algorithms for Active Experts Problems based on Increasing Loss Values}, author={Jan Poland and Marcus Hutter}, journal={Proc. 14th Dutch-Belgium Conf. on Machine Learning (Benelearn 2005) 59-66}, year={2005}, number={IDSIA-01-05}, archivePrefix={arXiv}, eprint={cs/0502067}, primaryClass={cs.LG cs.AI} }
poland2005master
arxiv-672643
cs/0502068
Limits of Rush Hour Logic Complexity
<|reference_start|>Limits of Rush Hour Logic Complexity: Rush Hour Logic was introduced in [Flake&Baum99] as a model of computation inspired by the "Rush Hour" toy puzzle, in which cars can move horizontally or vertically within a parking lot. The authors show how the model supports polynomial space computation, using certain car configurations as building blocks to construct Boolean circuits for a CPU and memory. They consider the use of cars of length 3 crucial to their construction, and conjecture that cars of size 2 only, which we'll call "Size 2 Rush Hour", do not support polynomial space computation. We settle this conjecture by showing that the required building blocks are constructible in Size 2 Rush Hour. Furthermore, we consider Unit Rush Hour, which was hitherto believed to be trivial, show its relation to maze puzzles, and provide empirical support for its hardness.<|reference_end|>
arxiv
@article{tromp2005limits, title={Limits of Rush Hour Logic Complexity}, author={John Tromp and Rudi Cilibrasi}, journal={arXiv preprint arXiv:cs/0502068}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502068}, primaryClass={cs.CC} }
tromp2005limits
arxiv-672644
cs/0502069
Koordinatenfreies Lokationsbewusstsein (Localization without Coordinates)
<|reference_start|>Koordinatenfreies Lokationsbewusstsein (Localization without Coordinates): Localization is one of the fundamental issues in sensor networks. It is almost always assumed that it must be solved by assigning coordinates to the nodes. This article discusses positioning algorithms from a theoretical, practical and simulative point of view, and identifies difficulties and limitations. Ideas for more abstract means of location awareness are presented and the resulting possible improvements for applications are shown. Nodes with certain topological or environmental properties are clustered, and the neighborhood structure of the clusters is modeled as a graph.<|reference_end|>
arxiv
@article{kroeller2005koordinatenfreies, title={Koordinatenfreies Lokationsbewusstsein (Localization without Coordinates)}, author={Alexander Kroeller, Sandor P. Fekete, Carsten Buschmann, Stefan Fischer and Dennis Pfisterer}, journal={arXiv preprint arXiv:cs/0502069}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502069}, primaryClass={cs.DC} }
kroeller2005koordinatenfreies
arxiv-672645
cs/0502070
Bidimensionality, Map Graphs, and Grid Minors
<|reference_start|>Bidimensionality, Map Graphs, and Grid Minors: In this paper we extend the theory of bidimensionality to two families of graphs that do not exclude fixed minors: map graphs and power graphs. In both cases we prove a polynomial relation between the treewidth of a graph in the family and the size of the largest grid minor. These bounds improve the running times of a broad class of fixed-parameter algorithms. Our novel technique of using approximate max-min relations between treewidth and size of grid minors is powerful, and we show how it can also be used, e.g., to prove a linear relation between the treewidth of a bounded-genus graph and the treewidth of its dual.<|reference_end|>
arxiv
@article{demaine2005bidimensionality, title={Bidimensionality, Map Graphs, and Grid Minors}, author={Erik D. Demaine and MohammadTaghi Hajiaghayi}, journal={arXiv preprint arXiv:cs/0502070}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502070}, primaryClass={cs.DM cs.DS} }
demaine2005bidimensionality
arxiv-672646
cs/0502071
Analysis of Second-order Statistics Based Semi-blind Channel Estimation in CDMA Channels
<|reference_start|>Analysis of Second-order Statistics Based Semi-blind Channel Estimation in CDMA Channels: The performance of second order statistics (SOS) based semi-blind channel estimation in long-code DS-CDMA systems is analyzed. The covariance matrix of SOS estimates is obtained in the large system limit, and is used to analyze the large-sample performance of two SOS based semi-blind channel estimation algorithms. A notion of blind estimation efficiency is also defined and is examined via simulation results.<|reference_end|>
arxiv
@article{li2005analysis, title={Analysis of Second-order Statistics Based Semi-blind Channel Estimation in CDMA Channels}, author={Husheng Li and H. Vincent Poor}, journal={arXiv preprint arXiv:cs/0502071}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502071}, primaryClass={cs.IT math.IT} }
li2005analysis
arxiv-672647
cs/0502072
Batch is back: CasJobs, serving multi-TB data on the Web
<|reference_start|>Batch is back: CasJobs, serving multi-TB data on the Web: The Sloan Digital Sky Survey (SDSS) science database describes over 140 million objects and is over 1.5 TB in size. The SDSS Catalog Archive Server (CAS) provides several levels of query interface to the SDSS data via the SkyServer website. Most queries execute in seconds or minutes. However, some queries can take hours or days, either because they require non-index scans of the largest tables, or because they request very large result sets, or because they represent very complex aggregations of the data. These "monster queries" not only take a long time, they also affect response times for everyone else - one or more of them can clog the entire system. To ameliorate this problem, we developed a multi-server multi-queue batch job submission and tracking system for the CAS called CasJobs. The transfer of very large result sets from queries over the network is another serious problem. Statistics suggested that much of this data transfer is unnecessary; users would prefer to store results locally in order to allow further joins and filtering. To allow local analysis, a system was developed that gives users their own personal databases (MyDB) at the server side. Users may transfer data to their MyDB, and then perform further analysis before extracting it to their own machine. MyDB tables also provide a convenient way to share results of queries with collaborators without downloading them. CasJobs is built using SOAP XML Web services and has been in operation since May 2004.<|reference_end|>
arxiv
@article{omullane2005batch, title={Batch is back: CasJobs, serving multi-TB data on the Web}, author={William O'Mullane, Nolan Li, Maria Nieto-Santisteban, Alex Szalay, Ani Thakar, Jim Gray}, journal={arXiv preprint arXiv:cs/0502072}, year={2005}, number={Microsoft Technical Report MSR TR 2005 19}, archivePrefix={arXiv}, eprint={cs/0502072}, primaryClass={cs.DC cs.DB} }
omullane2005batch
arxiv-672648
cs/0502073
A note on the Burrows-Wheeler transformation
<|reference_start|>A note on the Burrows-Wheeler transformation: We relate the Burrows-Wheeler transformation with a result in combinatorics on words known as the Gessel-Reutenauer transformation.<|reference_end|>
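The transform is short enough to state as code. A naive rotation-sorting sketch (quadratic in both time and space, for illustration only):

```python
def bwt(s):
    """Burrows-Wheeler transform via sorted rotations ('$' as end sentinel)."""
    s += "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def ibwt(last):
    """Invert the BWT by repeatedly sorting the prefixed columns."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    return next(r for r in table if r.endswith("$"))[:-1]

print(bwt("banana"))           # annb$aa
print(ibwt(bwt("banana")))     # banana
```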
arxiv
@article{crochemore2005a, title={A note on the Burrows-Wheeler transformation}, author={Maxime Crochemore (IGM), Jacques Désarménien (IGM), Dominique Perrin (IGM)}, journal={arXiv preprint arXiv:cs/0502073}, year={2005}, number={CDP04tcs}, archivePrefix={arXiv}, eprint={cs/0502073}, primaryClass={cs.DS} }
crochemore2005a
arxiv-672649
cs/0502074
On sample complexity for computational pattern recognition
<|reference_start|>On sample complexity for computational pattern recognition: In statistical setting of the pattern recognition problem the number of examples required to approximate an unknown labelling function is linear in the VC dimension of the target learning class. In this work we consider the question whether such bounds exist if we restrict our attention to computable pattern recognition methods, assuming that the unknown labelling function is also computable. We find that in this case the number of examples required for a computable method to approximate the labelling function not only is not linear, but grows faster (in the VC dimension of the class) than any computable function. No time or space constraints are put on the predictors or target functions; the only resource we consider is the training examples. The task of pattern recognition is considered in conjunction with another learning problem -- data compression. An impossibility result for the task of data compression allows us to estimate the sample complexity for pattern recognition.<|reference_end|>
arxiv
@article{ryabko2005on, title={On sample complexity for computational pattern recognition}, author={Daniil Ryabko}, journal={Algorithmica, 49:1 (Sept 2007): 69-77}, year={2005}, doi={10.1007/s00453-007-0037-z}, archivePrefix={arXiv}, eprint={cs/0502074}, primaryClass={cs.LG cs.CC} }
ryabko2005on
arxiv-672650
cs/0502075
How far will you walk to find your shortcut: Space Efficient Synopsis Construction Algorithms
<|reference_start|>How far will you walk to find your shortcut: Space Efficient Synopsis Construction Algorithms: In this paper we consider the wavelet synopsis construction problem without the restriction that we only choose a subset of coefficients of the original data. We provide the first near optimal algorithm. We arrive at the above algorithm by considering space efficient algorithms for the restricted version of the problem. In this context we improve previous algorithms by almost a linear factor and reduce the required space to almost linear. Our techniques also extend to histogram construction, and improve the space-running time tradeoffs for V-Opt and range query histograms. We believe the idea applies to a broad range of dynamic programs and demonstrate it by showing improvements in a knapsack-like setting seen in construction of Extended Wavelets.<|reference_end|>
arxiv
@article{guha2005how, title={How far will you walk to find your shortcut: Space Efficient Synopsis Construction Algorithms}, author={Sudipto Guha}, journal={arXiv preprint arXiv:cs/0502075}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502075}, primaryClass={cs.DS cs.DB} }
guha2005how
arxiv-672651
cs/0502076
Learning nonsingular phylogenies and hidden Markov models
<|reference_start|>Learning nonsingular phylogenies and hidden Markov models: In this paper we study the problem of learning phylogenies and hidden Markov models. We call a Markov model nonsingular if all transition matrices have determinants bounded away from 0 (and 1). We highlight the role of the nonsingularity condition for the learning problem. Learning hidden Markov models without the nonsingularity condition is at least as hard as learning parity with noise, a well-known learning problem conjectured to be computationally hard. On the other hand, we give a polynomial-time algorithm for learning nonsingular phylogenies and hidden Markov models.<|reference_end|>
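Given the transition matrices, the nonsingularity condition is straightforward to test; a transition matrix with determinant 0 completely forgets the hidden state, which is where the parity-with-noise hardness lives. The bound eps below is an arbitrary illustrative choice.

```python
import numpy as np

def is_nonsingular(transitions, eps=0.05):
    """Check that every transition matrix has |det| bounded away from
    both 0 and 1, the paper's nonsingularity condition."""
    return all(eps <= abs(np.linalg.det(T)) <= 1 - eps for T in transitions)

T_good = np.array([[0.8, 0.2], [0.3, 0.7]])   # det = 0.5
T_bad  = np.array([[0.5, 0.5], [0.5, 0.5]])   # det = 0: state is forgotten
print(is_nonsingular([T_good]), is_nonsingular([T_bad]))  # True False
```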
arxiv
@article{mossel2005learning, title={Learning nonsingular phylogenies and hidden Markov models}, author={Elchanan Mossel, Sébastien Roch}, journal={Annals of Applied Probability 2006, Vol. 16, No. 2, 583-614}, year={2005}, doi={10.1214/105051606000000024}, number={IMS-AAP-AAP0161}, archivePrefix={arXiv}, eprint={cs/0502076}, primaryClass={cs.LG cs.CE math.PR math.ST q-bio.PE stat.TH} }
mossel2005learning
arxiv-672652
cs/0502077
On the Achievable Information Rates of Finite-State Input Two-Dimensional Channels with Memory
<|reference_start|>On the Achievable Information Rates of Finite-State Input Two-Dimensional Channels with Memory: The achievable information rate of finite-state input two-dimensional (2-D) channels with memory is an open problem, which is relevant, e.g., for inter-symbol-interference (ISI) channels and cellular multiple-access channels. We propose a method for simulation-based computation of such information rates. We first draw a connection between the Shannon-theoretic information rate and the statistical mechanics notion of free energy. Since the free energy of such systems is intractable, we approximate it using the cluster variation method, implemented via generalized belief propagation. The derived, fully tractable, algorithm is shown to provide a practically accurate estimate of the information rate. In our experimental study we calculate the information rates of 2-D ISI channels and of hexagonal Wyner cellular networks with binary inputs, for which formerly only bounds were known.<|reference_end|>
arxiv
@article{shental2005on, title={On the Achievable Information Rates of Finite-State Input Two-Dimensional Channels with Memory}, author={Ori Shental, Noam Shental and Shlomo Shamai (Shitz)}, journal={arXiv preprint arXiv:cs/0502077}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502077}, primaryClass={cs.IT math.IT} }
shental2005on
arxiv-672653
cs/0502078
Semantical Characterizations and Complexity of Equivalences in Answer Set Programming
<|reference_start|>Semantical Characterizations and Complexity of Equivalences in Answer Set Programming: In recent research on non-monotonic logic programming, strong equivalence of logic programs P and Q has repeatedly been considered; it holds if the programs P union R and Q union R have the same answer sets for any other program R. This property strengthens equivalence of P and Q with respect to answer sets (the special case where R is the empty set), and has applications in program optimization, verification, and modular logic programming. In this paper, we consider more liberal notions of strong equivalence, in which the actual form of R may be syntactically restricted. On the one hand, we consider uniform equivalence, where R is a set of facts rather than a set of rules. This notion, which is well known in the area of deductive databases, is particularly useful for assessing whether programs P and Q are equivalent as components of a modularly structured logic program. On the other hand, we consider relativized notions of equivalence, where R ranges over rules over a fixed alphabet, and thus generalize our results to relativized notions of strong and uniform equivalence. For all these notions, we consider disjunctive logic programs in the propositional (ground) case, as well as some restricted classes, provide semantical characterizations and analyze the computational complexity. Our results, which naturally extend to answer set semantics for programs with strong negation, complement the results on strong equivalence of logic programs and pave the way for optimizations in answer set solvers as a tool for input-based problem solving.<|reference_end|>
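A classic example separates the notions: P = {a.} and Q = {a :- not b.} both have the single answer set {a}, yet adding R = {b.} gives P union R the answer set {a, b} but Q union R the answer set {b}; so P and Q are equivalent but not uniformly (hence not strongly) equivalent. The brute-force checker below verifies this with the Gelfond-Lifschitz reduct; it is a didactic sketch, not an ASP solver.

```python
from itertools import combinations

def answer_sets(rules, atoms):
    """Brute-force answer sets of a propositional normal program.
    Each rule is (head, positive_body, negative_body)."""
    def least_model(pos_rules):
        m, changed = set(), True
        while changed:
            changed = False
            for head, pos in pos_rules:
                if pos <= m and head not in m:
                    m.add(head); changed = True
        return m
    found = []
    for r in range(len(atoms) + 1):
        for cand in map(set, combinations(sorted(atoms), r)):
            # Gelfond-Lifschitz reduct: drop rules blocked by the candidate
            reduct = [(h, set(p)) for h, p, n in rules if not (set(n) & cand)]
            if least_model(reduct) == cand:
                found.append(cand)
    return found

P, Q, R = [("a", [], [])], [("a", [], ["b"])], [("b", [], [])]
atoms = {"a", "b"}
print(answer_sets(P, atoms), answer_sets(Q, atoms))          # [{'a'}] [{'a'}]
print(answer_sets(P + R, atoms), answer_sets(Q + R, atoms))  # [{'a', 'b'}] [{'b'}]
```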
arxiv
@article{eiter2005semantical, title={Semantical Characterizations and Complexity of Equivalences in Answer Set Programming}, author={Thomas Eiter, Michael Fink, and Stefan Woltran}, journal={arXiv preprint arXiv:cs/0502078}, year={2005}, number={1843-05-01}, archivePrefix={arXiv}, eprint={cs/0502078}, primaryClass={cs.AI cs.CC} }
eiter2005semantical
arxiv-672654
cs/0502079
Multilevel expander codes
<|reference_start|>Multilevel expander codes: We define multilevel codes on bipartite graphs that have properties analogous to multilevel serial concatenations. A decoding algorithm is described that corrects a proportion of errors equal to half the Blokh-Zyablov bound on the minimum distance. The error probability of this algorithm has exponent similar to that of serially concatenated multilevel codes.<|reference_end|>
arxiv
@article{barg2005multilevel, title={Multilevel expander codes}, author={Alexander Barg and Gilles Zemor}, journal={"Algebraic Coding Theory and Information Theory," Providence, RI: AMS (2005), pp. 69-83.}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502079}, primaryClass={cs.IT math.IT} }
barg2005multilevel
arxiv-672655
cs/0502080
Sensor Configuration and Activation for Field Detection in Large Sensor Arrays
<|reference_start|>Sensor Configuration and Activation for Field Detection in Large Sensor Arrays: The problems of sensor configuration and activation for the detection of correlated random fields using large sensor arrays are considered. Using results that characterize the large-array performance of sensor networks in this application, the detection capabilities of different sensor configurations are analyzed and compared. The dependence of the optimal choice of configuration on parameters such as sensor signal-to-noise ratio (SNR), field correlation, etc., is examined, yielding insights into the most effective choices for sensor selection and activation in various operating regimes.<|reference_end|>
arxiv
@article{sung2005sensor, title={Sensor Configuration and Activation for Field Detection in Large Sensor Arrays}, author={Youngchul Sung, Lang Tong and H. Vincent Poor}, journal={arXiv preprint arXiv:cs/0502080}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502080}, primaryClass={cs.IT math.IT} }
sung2005sensor
arxiv-672656
cs/0502081
Tables, Memorized Semirings and Applications
<|reference_start|>Tables, Memorized Semirings and Applications: We define and construct a new data structure, the tables; this structure generalizes the (finite) $k$-sets of Eilenberg and is versatile (one can vary the letters, the words and the coefficients). We derive from this structure a new semiring (with several semiring structures) which can be applied to the needs of automatic processing of multi-agent behaviour problems. The purpose of this paper is also to present the basic elements of these new structures from a combinatorial point of view. These structures enjoy a number of properties. They will be endowed with several laws, namely: sum, Hadamard product, Cauchy product, and fuzzy operations (min, max, complemented product). Two groups of applications are presented. The first group is linked to the process of "forgetting" information in the tables. The second, linked to multi-agent systems, is announced by showing a methodology to manage emergent organization from individual behaviour models.<|reference_end|>
arxiv
@article{bertelle2005tables, title={Tables, Memorized Semirings and Applications}, author={Cyrille Bertelle (LIH), Gérard Henry Edmond Duchamp (LIPN), Khalaf Khatatneh (LIFAR)}, journal={arXiv preprint arXiv:cs/0502081}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502081}, primaryClass={cs.MA cs.DM} }
bertelle2005tables
arxiv-672657
cs/0502082
Graphs and colorings for answer set programming
<|reference_start|>Graphs and colorings for answer set programming: We investigate the usage of rule dependency graphs and their colorings for characterizing and computing answer sets of logic programs. This approach provides us with insights into the interplay between rules when inducing answer sets. We start with different characterizations of answer sets in terms of totally colored dependency graphs that differ in graph-theoretical aspects. We then develop a series of operational characterizations of answer sets in terms of operators on partial colorings. In analogy to the notion of a derivation in proof theory, our operational characterizations are expressed as (non-deterministically formed) sequences of colorings, turning an uncolored graph into a totally colored one. In this way, we obtain an operational framework in which different combinations of operators result in different formal properties. Among others, we identify the basic strategy employed by the noMoRe system and justify its algorithmic approach. Furthermore, we distinguish operations corresponding to Fitting's operator as well as to well-founded semantics. (To appear in Theory and Practice of Logic Programming (TPLP))<|reference_end|>
arxiv
@article{konczak2005graphs, title={Graphs and colorings for answer set programming}, author={Kathrin Konczak, Thomas Linke, Torsten Schaub}, journal={arXiv preprint arXiv:cs/0502082}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502082}, primaryClass={cs.AI cs.LO} }
konczak2005graphs
arxiv-672658
cs/0502083
Impulse Radio Systems with Multiple Types of Ultra-Wideband Pulses
<|reference_start|>Impulse Radio Systems with Multiple Types of Ultra-Wideband Pulses: Spectral properties and performance of multi-pulse impulse radio ultra-wideband systems with pulse-based polarity randomization are analyzed. Instead of a single type of pulse transmitted in each frame, multiple types of pulses are considered, which is shown to reduce the effects of multiple-access interference. First, the spectral properties of a multi-pulse impulse radio system is investigated. It is shown that the power spectral density is the average of spectral contents of different pulse shapes. Then, approximate closed-form expressions for bit error probability of a multi-pulse impulse radio system are derived for RAKE receivers in asynchronous multiuser environments. The theoretical and simulation results indicate that impulse radio systems that are more robust against multiple-access interference than a "classical" impulse radio system can be designed with multiple types of ultra-wideband pulses.<|reference_end|>
arxiv
@article{gezici2005impulse, title={Impulse Radio Systems with Multiple Types of Ultra-Wideband Pulses}, author={Sinan Gezici, Zafer Sahinoglu, Hisashi Kobayashi, and H. Vincent Poor}, journal={arXiv preprint arXiv:cs/0502083}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502083}, primaryClass={cs.IT math.IT} }
gezici2005impulse
arxiv-672659
cs/0502084
On the Typicality of the Linear Code Among the LDPC Coset Code Ensemble
<|reference_start|>On the Typicality of the Linear Code Among the LDPC Coset Code Ensemble: Density evolution (DE) is one of the most powerful analytical tools for low-density parity-check (LDPC) codes on memoryless binary-input/symmetric-output channels. The case of non-symmetric channels is tackled either by the LDPC coset code ensemble (a channel symmetrizing argument) or by the generalized DE for linear codes on non-symmetric channels. Existing simulations show that the bit error rate performances of these two different approaches are nearly identical. This paper explains this phenomenon by proving that as the minimum check node degree $d_c$ becomes sufficiently large, the performance discrepancy of the linear and the coset LDPC codes is theoretically indistinguishable. This typicality of linear codes among the LDPC coset code ensemble provides insight into the concentration theorem of LDPC coset codes.<|reference_end|>
arxiv
@article{wang2005on, title={On the Typicality of the Linear Code Among the LDPC Coset Code Ensemble}, author={Chih-Chun Wang (1), H. Vincent Poor (1), Sanjeev R. Kulkarni (1) ((1) Princeton University)}, journal={arXiv preprint arXiv:cs/0502084}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502084}, primaryClass={cs.IT math.IT} }
wang2005on
arxiv-672660
cs/0502085
Fast generation of random connected graphs with prescribed degrees
<|reference_start|>Fast generation of random connected graphs with prescribed degrees: We address here the problem of generating random graphs uniformly from the set of simple connected graphs having a prescribed degree sequence. Our goal is to provide an algorithm designed for practical use both because of its ability to generate very large graphs (efficiency) and because it is easy to implement (simplicity). We focus on a family of heuristics for which we prove optimality conditions, and show how this optimality can be reached in practice. We then propose a different approach, specifically designed for typical real-world degree distributions, which outperforms the first one. Assuming a conjecture which we state and argue rigorously, we finally obtain a log-linear algorithm, which, in spite of being very simple, improves the best known complexity.<|reference_end|>
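The naive baseline that such algorithms improve on is rejection sampling from the configuration model: draw pairings until the result is simple and connected. A networkx sketch of that baseline (the degree sequence is toy data, and many retries may be needed):

```python
import networkx as nx

def random_connected_graph(degrees, tries=200, seed=0):
    """Rejection sampling from the configuration model: redraw until the
    realization is simple, connected and has the requested degrees."""
    for t in range(tries):
        G = nx.Graph(nx.configuration_model(degrees, seed=seed + t))
        G.remove_edges_from(nx.selfloop_edges(G))
        if nx.is_connected(G) and sorted(d for _, d in G.degree()) == sorted(degrees):
            return G
    raise RuntimeError("no simple connected realization found")

G = random_connected_graph([2, 2, 3, 3, 2, 2])   # degree sum must be even
print(nx.is_connected(G), sorted(d for _, d in G.degree()))
```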
arxiv
@article{viger2005fast, title={Fast generation of random connected graphs with prescribed degrees}, author={Fabien Viger (LIAFA, Regal Ur-R Lip6), Matthieu Latapy (LIAFA)}, journal={arXiv preprint arXiv:cs/0502085}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502085}, primaryClass={cs.NI cond-mat.dis-nn cs.DM} }
viger2005fast
arxiv-672661
cs/0502086
The Self-Organization of Speech Sounds
<|reference_start|>The Self-Organization of Speech Sounds: The speech code is a vehicle of language: it defines a set of forms used by a community to carry information. Such a code is necessary to support the linguistic interactions that allow humans to communicate. How then may a speech code be formed prior to the existence of linguistic interactions? Moreover, the human speech code is discrete and compositional, shared by all the individuals of a community but different across communities, and phoneme inventories are characterized by statistical regularities. How can a speech code with these properties form? We try to approach these questions in the paper, using the "methodology of the artificial". We build a society of artificial agents, and detail a mechanism that shows the formation of a discrete speech code without pre-supposing the existence of linguistic capacities or of coordinated interactions. The mechanism is based on a low-level model of sensory-motor interactions. We show that the integration of certain very simple and non language-specific neural devices leads to the formation of a speech code that has properties similar to the human speech code. This result relies on the self-organizing properties of a generic coupling between perception and production within agents, and on the interactions between agents. The artificial system helps us to develop better intuitions on how speech might have appeared, by showing how self-organization might have helped natural selection to find speech.<|reference_end|>
arxiv
@article{oudeyer2005the, title={The Self-Organization of Speech Sounds}, author={Pierre-Yves Oudeyer}, journal={Journal of Theoretical Biology 233 (2005) Issue 3, Pages 435-449}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502086}, primaryClass={cs.LG cs.AI cs.CL cs.NE cs.RO math.DS} }
oudeyer2005the
arxiv-672662
cs/0502087
Self-Replicating Strands that Self-Assemble into User-Specified Meshes
<|reference_start|>Self-Replicating Strands that Self-Assemble into User-Specified Meshes: It has been argued that a central objective of nanotechnology is to make products inexpensively, and that self-replication is an effective approach to very low-cost manufacturing. The research presented here is intended to be a step towards this vision. In previous work (JohnnyVon 1.0), we simulated machines that bonded together to form self-replicating strands. There were two types of machines (called types 0 and 1), which enabled strands to encode arbitrary bit strings. However, the information encoded in the strands had no functional role in the simulation. The information was replicated without being interpreted, which was a significant limitation for potential manufacturing applications. In the current work (JohnnyVon 2.0), the information in a strand is interpreted as instructions for assembling a polygonal mesh. There are now four types of machines and the information encoded in a strand determines how it folds. A strand may be in an unfolded state, in which the bonds are straight (although they flex slightly due to virtual forces acting on the machines), or in a folded state, in which the bond angles depend on the types of machines. By choosing the sequence of machine types in a strand, the user can specify a variety of polygonal shapes. A simulation typically begins with an initial unfolded seed strand in a soup of unbonded machines. The seed strand replicates by bonding with free machines in the soup. The child strands fold into the encoded polygonal shape, and then the polygons drift together and bond to form a mesh. We demonstrate that a variety of polygonal meshes can be manufactured in the simulation, by simply changing the sequence of machine types in the seed.<|reference_end|>
arxiv
@article{ewaschuk2005self-replicating, title={Self-Replicating Strands that Self-Assemble into User-Specified Meshes}, author={Robert Ewaschuk, Peter Turney}, journal={arXiv preprint arXiv:cs/0502087}, year={2005}, number={NRC-47442, ERB-1121}, archivePrefix={arXiv}, eprint={cs/0502087}, primaryClass={cs.NE cs.CE cs.MA} }
ewaschuk2005self-replicating
arxiv-672663
cs/0502088
Towards a Systematic Account of Different Semantics for Logic Programs
<|reference_start|>Towards a Systematic Account of Different Semantics for Logic Programs: In [Hitzler and Wendt 2002, 2005], a new methodology has been proposed which allows to derive uniform characterizations of different declarative semantics for logic programs with negation. One result from this work is that the well-founded semantics can formally be understood as a stratified version of the Fitting (or Kripke-Kleene) semantics. The constructions leading to this result, however, show a certain asymmetry which is not readily understood. We will study this situation here with the result that we will obtain a coherent picture of relations between different semantics for normal logic programs.<|reference_end|>
arxiv
@article{hitzler2005towards, title={Towards a Systematic Account of Different Semantics for Logic Programs}, author={Pascal Hitzler}, journal={arXiv preprint arXiv:cs/0502088}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502088}, primaryClass={cs.AI cs.LO} }
hitzler2005towards
arxiv-672664
cs/0502089
The QuarkNet/Grid Collaborative Learning e-Lab
<|reference_start|>The QuarkNet/Grid Collaborative Learning e-Lab: We describe a case study that uses grid computing techniques to support the collaborative learning of high school students investigating cosmic rays. Students gather and upload science data to our e-Lab portal. They explore those data using techniques from the GriPhyN collaboration. These techniques include virtual data transformations, workflows, metadata cataloging and indexing, data product provenance and persistence, as well as job planners. Students use web browsers and a custom interface that extends the GriPhyN Chiron portal to perform all of these tasks. They share results in the form of online posters and ask each other questions in this asynchronous environment. Students can discover and extend the research of other students, modeling the processes of modern large-scale scientific collaborations. Also, the e-Lab portal provides tools for teachers to guide student work throughout an investigation. http://quarknet.uchicago.edu/elab/cosmic<|reference_end|>
arxiv
@article{bardeen2005the, title={The QuarkNet/Grid Collaborative Learning e-Lab}, author={M. Bardeen, E. Gilbert, T. Jordan, P. Neywoda, E. Quigg, M. Wilde, Y. Zhao}, journal={Future Gener.Comput.Syst.22:700-708,2006}, year={2005}, doi={10.1016/j.future.2006.03.001}, number={FERMILAB-CONF-04-366-LSS}, archivePrefix={arXiv}, eprint={cs/0502089}, primaryClass={cs.DC physics.ed-ph} }
bardeen2005the
arxiv-672665
cs/0502090
UNICORE - From Project Results to Production Grids
<|reference_start|>UNICORE - From Project Results to Production Grids: The UNICORE Grid-technology provides a seamless, secure and intuitive access to distributed Grid resources. In this paper we present the recent evolution from project results to production Grids. At the beginning UNICORE was developed as a prototype software in two projects funded by the German research ministry (BMBF). Over the following years, in various European-funded projects, UNICORE evolved to a full-grown and well-tested Grid middleware system, which today is used in daily production at many supercomputing centers worldwide. Beyond this production usage, the UNICORE technology serves as a solid basis in many European and International research projects, which use existing UNICORE components to implement advanced features, high level services, and support for applications from a growing range of domains. In order to foster these ongoing developments, UNICORE is available as open source under BSD licence at SourceForge, where new releases are published on a regular basis. This paper is a review of the UNICORE achievements so far and gives a glimpse on the UNICORE roadmap.<|reference_end|>
arxiv
@article{streit2005unicore, title={UNICORE - From Project Results to Production Grids}, author={A. Streit, D. Erwin, Th. Lippert, D. Mallmann, R. Menday, M. Rambadt, M. Riedel, M. Romberg, B. Schuller, Ph. Wieder}, journal={arXiv preprint arXiv:cs/0502090}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502090}, primaryClass={cs.DC cs.OS} }
streit2005unicore
arxiv-672666
cs/0502091
An Audit Logic for Accountability
<|reference_start|>An Audit Logic for Accountability: We describe and implement a policy language. In our system, agents can distribute data along with usage policies in a decentralized architecture. Our language supports the specification of conditions and obligations, and also the possibility to refine policies. In our framework, the compliance with usage policies is not actively enforced. However, agents are accountable for their actions, and may be audited by an authority requiring justifications.<|reference_end|>
arxiv
@article{cederquist2005an, title={An Audit Logic for Accountability}, author={J.G. Cederquist, R. Corin, M.A.C. Dekker, S. Etalle and J.I. den Hartog}, journal={arXiv preprint arXiv:cs/0502091}, year={2005}, doi={10.1109/POLICY.2005.5}, archivePrefix={arXiv}, eprint={cs/0502091}, primaryClass={cs.CR cs.LO} }
cederquist2005an
arxiv-672667
cs/0502092
Divergence-free Wavelets for Navier-Stokes
<|reference_start|>Divergence-free Wavelets for Navier-Stokes: In this paper, we investigate the use of compactly supported divergence-free wavelets for the representation of the Navier-Stokes solution. After recalling the theoretical construction of divergence-free wavelet vectors, we present in detail the bases and corresponding fast algorithms for 2D and 3D incompressible flows. In order to compute the nonlinear term, we propose a new method which in practice provides the Hodge decomposition of any flow: this decomposition enables us to separate the incompressible part of the flow from its orthogonal complement, which corresponds to the gradient component of the flow. Finally we show numerical tests to validate our approach.<|reference_end|>
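The Hodge decomposition can be illustrated independently of wavelets: for a periodic field it is a pointwise projection in Fourier space. The sketch below is FFT-based, not the divergence-free wavelet construction of the paper.

```python
import numpy as np

def hodge_split(u, v):
    """Helmholtz-Hodge split of a periodic 2D field (u, v) into a
    divergence-free part and a gradient part, via the projector k k^T/|k|^2."""
    ny, nx = u.shape
    ky = np.fft.fftfreq(ny)[:, None] * ny
    kx = np.fft.fftfreq(nx)[None, :] * nx
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                            # mean mode stays divergence-free
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div = kx * uh + ky * vh                   # the i factors cancel in the projection
    ugr = np.fft.ifft2(kx * div / k2).real
    vgr = np.fft.ifft2(ky * div / k2).real
    return (u - ugr, v - vgr), (ugr, vgr)

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
X, Y = np.meshgrid(x, x)
gx, gy = -np.sin(X) * np.cos(Y), -np.cos(X) * np.sin(Y)   # gradient of cos(X)cos(Y)
(udf, vdf), (ugr, vgr) = hodge_split(-np.sin(Y) + gx, np.sin(X) + gy)
print(np.allclose(udf, -np.sin(Y)), np.allclose(vgr, gy))  # True True
```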
arxiv
@article{deriaz2005divergence-free, title={Divergence-free Wavelets for Navier-Stokes}, author={Erwan Deriaz (LMC - IMAG), Valérie Perrier (LMC - IMAG)}, journal={Journal of Turbulence, Volume 7, N 3 2006}, year={2005}, doi={10.1080/14685240500260547}, number={1072 - M}, archivePrefix={arXiv}, eprint={cs/0502092}, primaryClass={cs.NA} }
deriaz2005divergence-free
arxiv-672668
cs/0502093
Online Permutation Routing in Partitioned Optical Passive Star Networks
<|reference_start|>Online Permutation Routing in Partitioned Optical Passive Star Networks: This paper establishes the state of the art in both deterministic and randomized online permutation routing in the POPS network. Indeed, we show that any permutation can be routed online on a POPS network either with $O(\frac{d}{g}\log g)$ deterministic slots, or, with high probability, with $5c\lceil d/g\rceil+o(d/g)+O(\log\log g)$ randomized slots, where constant $c=\exp (1+e^{-1})\approx 3.927$. When $d=\Theta(g)$, that we claim to be the "interesting" case, the randomized algorithm is exponentially faster than any other algorithm in the literature, both deterministic and randomized ones. This is true in practice as well. Indeed, experiments show that it outperforms its rivals even starting from as small a network as a POPS(2,2), and the gap grows exponentially with the size of the network. We can also show that, under proper hypothesis, no deterministic algorithm can asymptotically match its performance.<|reference_end|>
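The constant in the randomized bound is easy to check numerically; a small sanity computation (the d and g values are illustrative):

```python
import math

c = math.exp(1 + math.exp(-1))
print(round(c, 3))                     # 3.927, as stated in the abstract
d, g = 8, 8                            # the "interesting" case d = Theta(g)
leading = 5 * c * math.ceil(d / g)     # leading term, ignoring o(d/g) + O(log log g)
print(round(leading, 1))               # ~19.6 slots
```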
arxiv
@article{mei2005online, title={Online Permutation Routing in Partitioned Optical Passive Star Networks}, author={Alessandro Mei and Romeo Rizzi}, journal={arXiv preprint arXiv:cs/0502093}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502093}, primaryClass={cs.DC} }
mei2005online
arxiv-672669
cs/0502094
Coalition Formation: Concessions, Task Relationships and Complexity Reduction
<|reference_start|>Coalition Formation: Concessions, Task Relationships and Complexity Reduction: Solutions to the coalition formation problem commonly assume agent rationality and, correspondingly, utility maximization. This in turn may prevent agents from making compromises. As shown in recent studies, compromise may facilitate coalition formation and increase agent utilities. In this study we leverage on those new results. We devise a novel coalition formation mechanism that enhances compromise. Our mechanism can utilize information on task dependencies to reduce formation complexity. Further, it works well with both cardinal and ordinal task values. Via experiments we show that the use of the suggested compromise-based coalition formation mechanism provides significant savings in the computation and communication complexity of coalition formation. Our results also show that when information on task dependencies is used, the complexity of coalition formation is further reduced. We demonstrate successful use of the mechanism for collaborative information filtering, where agents combine linguistic rules to analyze documents' contents.<|reference_end|>
arxiv
@article{aknine2005coalition, title={Coalition Formation: Concessions, Task Relationships and Complexity Reduction}, author={Samir Aknine, Onn Shehory}, journal={arXiv preprint arXiv:cs/0502094}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502094}, primaryClass={cs.MA} }
aknine2005coalition
arxiv-672670
cs/0502095
Gradient Vector Flow Models for Boundary Extraction in 2D Images
<|reference_start|>Gradient Vector Flow Models for Boundary Extraction in 2D Images: The Gradient Vector Flow (GVF) is a vector diffusion approach based on Partial Differential Equations (PDEs). This method has been applied together with snake models for boundary extraction in medical image segmentation. The key idea is to use a diffusion-reaction PDE to generate a new external force field that makes snake models less sensitive to initialization and improves the snake's ability to move into boundary concavities. In this paper, we first review basic results about convergence and numerical analysis of usual GVF schemes. We point out that GVF presents numerical problems due to discontinuities in image intensity. From a practical viewpoint, this means the GVF parameters must satisfy a relationship in order to improve numerical convergence. Besides, we present an analytical study of the dependence of the GVF on its parameter values. We also observe that the method can be used for multiply connected domains by just imposing the suitable boundary condition. In the experimental results we verify these theoretical points and demonstrate the utility of GVF in a snake-based segmentation approach that we have developed.<|reference_end|>
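A minimal explicit iteration of the GVF diffusion-reaction PDE, with periodic boundaries for brevity; the mu and dt choices respect the usual stability relationship between the parameters, the kind of constraint the paper analyzes. This is a sketch, not the authors' implementation.

```python
import numpy as np

def gvf(f, mu=0.2, dt=0.5, iters=400):
    """Iterate u_t = mu*Lap(u) - (u - f_x)*(f_x^2 + f_y^2), likewise for v.
    Stability ties mu and dt together (roughly dt <= 1/(4*mu) on a unit grid)."""
    fy, fx = np.gradient(f)               # edge-map gradients
    mag2 = fx**2 + fy**2
    u, v = fx.copy(), fy.copy()
    lap = lambda a: (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                     np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)
    for _ in range(iters):
        u += dt * (mu * lap(u) - (u - fx) * mag2)
        v += dt * (mu * lap(v) - (v - fy) * mag2)
    return u, v

f = np.zeros((32, 32)); f[8:24, 8:24] = 1.0   # toy binary edge map
u, v = gvf(f)
print(u.shape, round(float(np.abs(u).max()), 3))  # force field spread off the edges
```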
arxiv
@article{giraldi2005gradient, title={Gradient Vector Flow Models for Boundary Extraction in 2D Images}, author={Gilson A. Giraldi, Leandro S. Marturelli, Paulo S. Rodrigues}, journal={arXiv preprint arXiv:cs/0502095}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502095}, primaryClass={cs.CV} }
giraldi2005gradient
arxiv-672671
cs/0502096
Property analysis of symmetric travelling salesman problem instances acquired through evolution
<|reference_start|>Property analysis of symmetric travelling salesman problem instances acquired through evolution: We show how an evolutionary algorithm can successfully be used to evolve a set of difficult to solve symmetric travelling salesman problem instances for two variants of the Lin-Kernighan algorithm. Then we analyse the instances in those sets to guide us towards inferring general knowledge about the efficiency of the two variants in relation to structural properties of the symmetric travelling salesman problem.<|reference_end|>
arxiv
@article{van hemert2005property, title={Property analysis of symmetric travelling salesman problem instances acquired through evolution}, author={J.I. van Hemert}, journal={arXiv preprint arXiv:cs/0502096}, year={2005}, archivePrefix={arXiv}, eprint={cs/0502096}, primaryClass={cs.NE cs.AI} }
van hemert2005property
arxiv-672672
cs/0503001
Top-Down Unsupervised Image Segmentation (it sounds like oxymoron, but actually it is not)
<|reference_start|>Top-Down Unsupervised Image Segmentation (it sounds like oxymoron, but actually it is not): Pattern recognition is generally assumed to be an interaction of two inversely directed image-processing streams: the bottom-up information details gathering and localization (segmentation) stream, and the top-down information features aggregation, association and interpretation (recognition) stream. Inspired by recent evidence from biological vision research and by the insights of Kolmogorov Complexity theory, we propose a new, purely top-down, procedure for initial image segmentation. We claim that traditional top-down cognitive reasoning, which is supposed to guide the segmentation process to its final result, is not at all a part of the image information content evaluation, and that initial image segmentation is certainly an unsupervised process. We present some illustrative examples, which support our claims.<|reference_end|>
arxiv
@article{diamant2005top-down, title={Top-Down Unsupervised Image Segmentation (it sounds like oxymoron, but actually it is not)}, author={Emanuel Diamant}, journal={arXiv preprint arXiv:cs/0503001}, year={2005}, archivePrefix={arXiv}, eprint={cs/0503001}, primaryClass={cs.CV cs.IR} }
diamant2005top-down
arxiv-672673
cs/0503002
Local and Global Analysis: Complementary Activities for Increasing the Effectiveness of Requirements Verification and Validation
<|reference_start|>Local and Global Analysis: Complementary Activities for Increasing the Effectiveness of Requirements Verification and Validation: This paper presents a unique approach to connecting requirements engineering (RE) activities into a process framework that can be employed to obtain quality requirements with reduced expenditures of effort and cost. We propose a two-phase model that is novel in that it introduces the concept of verification and validation (V&V) early in the requirements life cycle. In the first phase, we perform V&V immediately following the elicitation of requirements for each individually distinct system function. Because the first phase focuses on capturing smaller sets of related requirements iteratively, each corresponding V&V activity is better focused for detecting and correcting errors in each requirement set. In the second phase, a complementary verification activity is initiated; the corresponding focus is on the quality of linkages between requirements sets rather than on the requirements within the sets. Consequently, this approach reduces the effort in verification and enhances the focus on the verification task. Our approach, unlike other models, has a minimal time delay between the elicitation of requirements and the execution of the V&V activities. Because of this short time gap, the stakeholders have a clearer recollection of the requirements, their context and rationale; this enhances the stakeholder feedback. Furthermore, our model includes activities that closely align with the effective RE processes employed in the software industry. Thus, our approach facilitates a better understanding of the flow of requirements, and provides guidance for the implementation of the RE process.<|reference_end|>
arxiv
@article{lobo2005local, title={Local and Global Analysis: Complementary Activities for Increasing the Effectiveness of Requirements Verification and Validation}, author={Lester Lobo, James D. Arthur}, journal={arXiv preprint arXiv:cs/0503002}, year={2005}, archivePrefix={arXiv}, eprint={cs/0503002}, primaryClass={cs.SE} }
lobo2005local
arxiv-672674
cs/0503003
An Objectives-Driven Process for Selecting Methods to Support Requirements Engineering Activities
<|reference_start|>An Objectives-Driven Process for Selecting Methods to Support Requirements Engineering Activities: This paper presents a framework that guides the requirements engineer in the implementation and execution of an effective requirements generation process. We achieve this goal by providing a well-defined requirements engineering model and a criteria based process for optimizing method selection for attendant activities. Our model, unlike other models, addresses the complete requirements generation process and consists of activities defined at more adequate levels of abstraction. Additionally, activity objectives are identified and explicitly stated - not implied as in the current models. Activity objectives are crucial as they drive the selection of methods for each activity. Our model also incorporates a unique approach to verification and validation that enhances quality and reduces the cost of generating requirements. To assist in the selection of methods, we have mapped commonly used methods to activities based on their objectives. In addition, we have identified method selection criteria and prescribed a reduced set of methods that optimize these criteria for each activity defined by our requirements generation process. Thus, the defined approach assists in the task of selecting methods by using selection criteria to reduce a large collection of potential methods to a smaller, manageable set. The model and the set of methods, taken together, provide the much needed guidance for the effective implementation and execution of the requirements generation process.<|reference_end|>
arxiv
@article{lobo2005an, title={An Objectives-Driven Process for Selecting Methods to Support Requirements Engineering Activities}, author={Lester Lobo, James D. Arthur}, journal={arXiv preprint arXiv:cs/0503003}, year={2005}, doi={10.1109/SEW.2005.18}, archivePrefix={arXiv}, eprint={cs/0503003}, primaryClass={cs.SE} }
lobo2005an
arxiv-672675
cs/0503004
Effective Requirements Generation: Synchronizing Early Verification & Validation, Methods and Method Selection Criteria
<|reference_start|>Effective Requirements Generation: Synchronizing Early Verification & Validation, Methods and Method Selection Criteria: This paper presents an approach for the implementation and execution of an effective requirements generation process. We achieve this goal by providing a well-defined requirements engineering model that includes verification and validation (V&V), and analysis. In addition, we identify focused activity objectives and map popular methods to lower-level activities, and define a criterion based process for optimizing method selection for attendant activities. Our model, unlike other models, addresses the complete requirements generation process and consists of activities defined at more adequate levels of abstraction. Furthermore, our model also incorporates a unique approach to V&V that enhances quality and reduces the cost of generating requirements. Additionally, activity objectives are identified and explicitly stated - not implied as in the current models. To assist in the selection of an appropriate set of methods, we have mapped commonly used methods to activities based on their objectives. Finally, we have identified method selection criteria and prescribed a reduced set of methods that optimize these criteria for each activity in our model. Thus, our approach assists in the task of selecting methods by using selection criteria to reduce a large collection of potential methods to a smaller, manageable set. The model, clear mapping of methods to activity objectives, and the criteria based process, taken together, provide the much needed guidance for the effective implementation and execution of the requirements generation process.<|reference_end|>
arxiv
@article{lobo2005effective, title={Effective Requirements Generation: Synchronizing Early Verification & Validation, Methods and Method Selection Criteria}, author={Lester Lobo, James D. Arthur}, journal={arXiv preprint arXiv:cs/0503004}, year={2005}, archivePrefix={arXiv}, eprint={cs/0503004}, primaryClass={cs.SE} }
lobo2005effective
arxiv-672676
cs/0503005
High efficiency and low absorption Fresnel compound zone plates for hard X-ray focusing
<|reference_start|>High efficiency and low absorption Fresnel compound zone plates for hard X-ray focusing: Circular and linear zone plates have been fabricated on the surface of silicon crystals for the energy of 8 keV by electron beam lithography and deep ion plasma etching methods. Variants of compound zone plates with first, second, and third diffraction orders have been made. The zone relief height is about 10 μm, and the outermost zone width of the zone plate is 0.4 μm. The experimental testing of the zone plates has been conducted at the SPring-8 and ESRF synchrotron radiation sources. The focused spot size and diffraction efficiency measured by knife-edge scanning are, respectively, 0.5 μm and 39% for the first order circular zone plate.<|reference_end|>
arxiv
@article{kuyumchyan2005high, title={High efficiency and low absorption Fresnel compound zone plates for hard X-ray focusing}, author={A. Kuyumchyan, A. Isoyan, E. Shulakov, V. Aristov, M. Kondratenkov, A. Snigirev, I. Snigireva, A. Souvorov, K. Tamasaku, M. Yabashi, T. Ishikawa, K. Trouni}, journal={Proceedings of SPIE, 2002, vol. 4783, p. 92-96}, year={2005}, doi={10.1117/12.450480}, number={SPIE-4783-11}, archivePrefix={arXiv}, eprint={cs/0503005}, primaryClass={cs.OH} }
kuyumchyan2005high
arxiv-672677
cs/0503006
A New Non-Iterative Decoding Algorithm for the Erasure Channel : Comparisons with Enhanced Iterative Methods
<|reference_start|>A New Non-Iterative Decoding Algorithm for the Erasure Channel : Comparisons with Enhanced Iterative Methods: This paper investigates decoding of binary linear block codes over the binary erasure channel (BEC). Of the current iterative decoding algorithms on this channel, we review the Recovery Algorithm and the Guess Algorithm. We then present a Multi-Guess Algorithm extended from the Guess Algorithm and a new algorithm -- the In-place Algorithm. The Multi-Guess Algorithm can push the limit to break the stopping sets. However, the performance of the Guess and the Multi-Guess Algorithm depend on the parity-check matrix of the code. Simulations show that we can decrease the frame error rate by several orders of magnitude using the Guess and the Multi-Guess Algorithms when the parity-check matrix of the code is sparse. The In-place Algorithm can obtain better performance even if the parity check matrix is dense. We consider the application of these algorithms in the implementation of multicast and broadcast techniques on the Internet. Using these algorithms, a user does not have to wait until the entire transmission has been received.<|reference_end|>
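As a reference point, erasure decoding of any binary linear code reduces to solving the parity checks for the erased positions over GF(2). The Gauss-Jordan sketch below is that generic reference decoder, not any of the paper's algorithms; it returns None exactly when the erasure pattern is not uniquely solvable.

```python
import numpy as np

def decode_erasures(H, y, erased):
    """Recover erased bits of a binary linear codeword by solving the
    parity checks H x = 0 (mod 2) for the erased positions."""
    known = [i for i in range(H.shape[1]) if i not in erased]
    A = (H[:, erased] % 2).astype(int)
    b = (H[:, known] @ y[known]) % 2          # syndrome of the known bits
    row, pivots = 0, []
    for col in range(A.shape[1]):             # Gauss-Jordan over GF(2)
        hits = [r for r in range(row, A.shape[0]) if A[r, col]]
        if not hits:
            return None
        A[[row, hits[0]]] = A[[hits[0], row]]
        b[[row, hits[0]]] = b[[hits[0], row]]
        for r in range(A.shape[0]):
            if r != row and A[r, col]:
                A[r] ^= A[row]; b[r] ^= b[row]
        pivots.append((row, col)); row += 1
    x = y.copy()
    for r, c in pivots:
        x[erased[c]] = b[r]
    return x

# (7,4) Hamming code; erase positions 0 and 5 of the codeword 1010101
H = np.array([[1,0,1,0,1,0,1], [0,1,1,0,0,1,1], [0,0,0,1,1,1,1]])
y = np.array([0,0,1,0,1,0,1])                 # erased entries set to 0
print(decode_erasures(H, y, [0, 5]))          # [1 0 1 0 1 0 1]
```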
arxiv
@article{cai2005a, title={A New Non-Iterative Decoding Algorithm for the Erasure Channel : Comparisons with Enhanced Iterative Methods}, author={J. Cai, C. Tjhai, M. Tomlinson, M. Ambroze and M. Ahmed}, journal={arXiv preprint arXiv:cs/0503006}, year={2005}, archivePrefix={arXiv}, eprint={cs/0503006}, primaryClass={cs.IT math.IT} }
cai2005a
arxiv-672678
cs/0503007
Toward alternative metrics of journal impact: A comparison of download and citation data
<|reference_start|>Toward alternative metrics of journal impact: A comparison of download and citation data: We generated networks of journal relationships from citation and download data, and determined journal impact rankings from these networks using a set of social network centrality metrics. The resulting journal impact rankings were compared to the ISI IF. Results indicate that, although social network metrics and ISI IF rankings deviate moderately for citation-based journal networks, they differ considerably for journal networks derived from download data. We believe the results represent a unique aspect of general journal impact that is not captured by the ISI IF. These results furthermore raise questions regarding the validity of the ISI IF as the sole assessment of journal impact, and suggest the possibility of devising impact metrics based on usage information in general.<|reference_end|>
arxiv
@article{bollen2005toward, title={Toward alternative metrics of journal impact: A comparison of download and citation data}, author={Johan Bollen, Herbert Van de Sompel, Joan Smith and Rick Luce}, journal={arXiv preprint arXiv:cs/0503007}, year={2005}, archivePrefix={arXiv}, eprint={cs/0503007}, primaryClass={cs.DL} }
bollen2005toward
arxiv-672679
cs/0503008
Approximation of dynamical systems using S-systems theory : application to biological systems
<|reference_start|>Approximation of dynamical systems using S-systems theory : application to biological systems: In this paper we propose a new symbolic-numeric algorithm to find positive equilibria of an n-dimensional dynamical system. This algorithm involves a symbolic manipulation of the ODEs in order to give a local approximation of the differential equations with power-law dynamics (S-systems). A numerical computation is then needed to converge towards an equilibrium, giving at the same time an S-system approximating the initial system around this equilibrium. This algorithm is applied to a real biological example in 14 dimensions, which is a subsystem of a metabolic pathway in Arabidopsis thaliana.<|reference_end|>
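The building block of an S-system approximation is replacing each rate law V(x) by a power law alpha*x^g whose log-log slope matches at the operating point. A one-variable numeric sketch (the Michaelis-Menten rate is just an example):

```python
import numpy as np

def power_law_fit(V, x0, eps=1e-6):
    """Local S-system term V(x) ~ alpha * x**g around x0 > 0:
    g is the log-log slope of V at x0, alpha matches V(x0)."""
    g = (np.log(V(x0 * (1 + eps))) - np.log(V(x0 * (1 - eps)))) \
        / (np.log(1 + eps) - np.log(1 - eps))
    alpha = V(x0) / x0**g
    return alpha, g

V = lambda x: 2 * x / (1 + x)          # Michaelis-Menten rate law
alpha, g = power_law_fit(V, 1.0)
print(round(alpha, 3), round(g, 3))    # 1.0 0.5, i.e. V(x) ~ x**0.5 near x0 = 1
```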
arxiv
@article{tournier2005approximation, title={Approximation of dynamical systems using S-systems theory : application to biological systems}, author={Laurent Tournier (LMC - IMAG)}, journal={arXiv preprint arXiv:cs/0503008}, year={2005}, archivePrefix={arXiv}, eprint={cs/0503008}, primaryClass={cs.SC math.DS q-bio.MN} }
tournier2005approximation
arxiv-672680
cs/0503009
Minimal chordal sense of direction and circulant graphs
<|reference_start|>Minimal chordal sense of direction and circulant graphs: A sense of direction is an edge labeling on graphs that follows a globally consistent scheme and is known to considerably reduce the complexity of several distributed problems. In this paper, we study a particular instance of sense of direction, called a chordal sense of direction (CSD). In particular, we identify the class of k-regular graphs that admit a CSD with exactly k labels (a minimal CSD). We prove that connected graphs in this class are Hamiltonian and that the class is equivalent to that of circulant graphs, presenting an efficient (polynomial-time) way of recognizing it when the graphs' degree k is fixed.<|reference_end|>
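The correspondence is easy to visualize: in a circulant graph, labeling each directed edge with its chord length yields a CSD, and for the 4-regular circulant C_8(1,2) exactly k = 4 labels occur. A small networkx illustration:

```python
import networkx as nx

n, offsets = 8, [1, 2]                 # circulant graph C_8(1,2), 4-regular
G = nx.circulant_graph(n, offsets)
# chordal sense of direction: at node i, label the edge towards j with (j - i) mod n
labels = {(i, j): (j - i) % n for i, j in G.edges()}
labels.update({(j, i): (i - j) % n for i, j in G.edges()})
print(nx.is_regular(G), sorted(set(labels.values())))  # True [1, 2, 6, 7]: k = 4 labels
```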
arxiv
@article{leao2005minimal, title={Minimal chordal sense of direction and circulant graphs}, author={R. S. C. Leao, V. C. Barbosa}, journal={Lecture Notes in Computer Science 4162 (2006), 670-680}, year={2005}, doi={10.1007/11821069_58}, archivePrefix={arXiv}, eprint={cs/0503009}, primaryClass={cs.DM} }
leao2005minimal
arxiv-672681
cs/0503010
Optimized network structure and routing metric in wireless multihop ad hoc communication
<|reference_start|>Optimized network structure and routing metric in wireless multihop ad hoc communication: Inspired by the Statistical Physics of complex networks, wireless multihop ad hoc communication networks are considered in abstracted form. Since such engineered networks are able to modify their structure via topology control, we search for optimized network structures, which maximize the end-to-end throughput performance. A modified version of betweenness centrality is introduced and shown to be very relevant for the respective modeling. The calculated optimized network structures lead to a significant increase of the end-to-end throughput. The discussion of the resulting structural properties reveals that it will be almost impossible to construct these optimized topologies in a technologically efficient, distributed manner. However, the modified betweenness centrality also allows us to propose a new routing metric for the end-to-end communication traffic. This approach leads to an even larger increase of throughput capacity and is easily implementable in a technologically relevant manner.<|reference_end|>
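A toy version of a centrality-biased routing metric (not the paper's modified betweenness, which is tailored to end-to-end traffic): penalize edges incident to high-betweenness hubs and route along weighted shortest paths.

```python
import networkx as nx

G = nx.random_geometric_graph(100, 0.18, seed=1)       # toy multihop ad hoc network
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
bc = nx.betweenness_centrality(G)
for u, v in G.edges():                                  # make hub edges expensive
    G[u][v]["w"] = 1.0 + bc[u] + bc[v]
nodes = sorted(G.nodes())
path = nx.shortest_path(G, nodes[0], nodes[-1], weight="w")
print(len(path), round(max(bc[n] for n in path), 3))
```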
arxiv
@article{krause2005optimized, title={Optimized network structure and routing metric in wireless multihop ad hoc communication}, author={Wolfram Krause (1 and 2), Jan Scholz (1 and 3), Martin Greiner (1) ((1) Corporate Technology, Siemens AG, (2) FIAS/FIGSS, Universitaet Frankfurt, (3) ITP, Universitaet Giessen)}, journal={arXiv preprint arXiv:cs/0503010}, year={2005}, doi={10.1016/j.physa.2005.06.085}, archivePrefix={arXiv}, eprint={cs/0503010}, primaryClass={cs.NI} }
krause2005optimized
arxiv-672682
cs/0503011
Shuffling a Stacked Deck: The Case for Partially Randomized Ranking of Search Engine Results
<|reference_start|>Shuffling a Stacked Deck: The Case for Partially Randomized Ranking of Search Engine Results: In-degree, PageRank, number of visits and other measures of Web page popularity significantly influence the ranking of search results by modern search engines. The assumption is that popularity is closely correlated with quality, a more elusive concept that is difficult to measure directly. Unfortunately, the correlation between popularity and quality is very weak for newly-created pages that have yet to receive many visits and/or in-links. Worse, since discovery of new content is largely done by querying search engines, and because users usually focus their attention on the top few results, newly-created but high-quality pages are effectively ``shut out,'' and it can take a very long time before they become popular. We propose a simple and elegant solution to this problem: the introduction of a controlled amount of randomness into search result ranking methods. Doing so offers new pages a chance to prove their worth, although clearly using too much randomness will degrade result quality and annul any benefits achieved. Hence there is a tradeoff between exploration to estimate the quality of new pages and exploitation of pages already known to be of high quality. We study this tradeoff both analytically and via simulation, in the context of an economic objective function based on aggregate result quality amortized over time. We show that a modest amount of randomness leads to improved search results.<|reference_end|>
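One simple instantiation of the proposed mix of exploitation and exploration (epsilon and the score field are illustrative, not the paper's exact randomization scheme): each top-k slot is, with small probability, given to a uniformly random lower-ranked page.

```python
import random

def rank_with_exploration(pages, k=10, epsilon=0.1, rng=random.Random(0)):
    """Rank by popularity score, then give each top-k slot to a random
    lower-ranked page with probability epsilon so new pages get exposure."""
    ranked = sorted(pages, key=lambda p: p["score"], reverse=True)
    top, rest = ranked[:k], ranked[k:]
    for slot in range(len(top)):
        if rest and rng.random() < epsilon:
            top[slot] = rest.pop(rng.randrange(len(rest)))
    return top

pages = [{"id": i, "score": 100 - i} for i in range(100)]
print([p["id"] for p in rank_with_exploration(pages)])   # mostly 0..9, a few random
```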
arxiv
@article{pandey2005shuffling, title={Shuffling a Stacked Deck: The Case for Partially Randomized Ranking of Search Engine Results}, author={Sandeep Pandey, Sourashis Roy, Christopher Olston, Junghoo Cho, and Soumen Chakrabarti}, journal={arXiv preprint arXiv:cs/0503011}, year={2005}, number={CMU-CS-05-116}, archivePrefix={arXiv}, eprint={cs/0503011}, primaryClass={cs.IR} }
pandey2005shuffling
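The exploration/exploitation tradeoff in pandey2005shuffling can be illustrated with a softmax-style randomized ranker. The scoring rule, the temperature parameter, and the popularity numbers below are assumptions for illustration, not the paper's exact scheme.

```python
# Partially randomized ranking: a temperature parameter trades off
# exploitation of known-popular pages against exploration of new ones.
import numpy as np

rng = np.random.default_rng(0)
popularity = np.array([120.0, 80.0, 45.0, 3.0, 1.0, 0.5])  # e.g. in-links/visits
temperature = 20.0                                          # 0 -> deterministic

weights = np.exp(popularity / temperature)
p = weights / weights.sum()
# Sample a full ranking without replacement, biased toward popular pages.
ranking = rng.choice(len(popularity), size=len(popularity), replace=False, p=p)
print(ranking)
```

As the temperature approaches zero this reduces to deterministic popularity ranking; larger temperatures give new, unpopular pages a chance to appear near the top and accumulate evidence of their quality.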
arxiv-672683
cs/0503012
First-order Complete and Computationally Complete Query Languages for Spatio-Temporal Databases
<|reference_start|>First-order Complete and Computationally Complete Query Languages for Spatio-Temporal Databases: We address a fundamental question concerning spatio-temporal database systems: ``What exactly are spatio-temporal queries?'' We define spatio-temporal queries to be computable mappings that are also generic, meaning that the result of a query may only depend to a limited extent on the actual internal representation of the spatio-temporal data. Genericity is defined as invariance under groups of geometric transformations that preserve certain characteristics of spatio-temporal data (e.g., collinearity, distance, velocity, acceleration, ...). These groups depend on the notions that are relevant in particular spatio-temporal database applications. These transformations also have the distinctive property that they respect the monotone and unidirectional nature of time. We investigate different genericity classes with respect to the constraint database model for spatio-temporal databases and we identify sound and complete languages for the first-order and the computable queries in these genericity classes. We distinguish between genericity determined by time-invariant transformations, genericity notions concerning physical quantities, and genericity determined by time-dependent transformations.<|reference_end|>
arxiv
@article{geerts2005first-order, title={First-order Complete and Computationally Complete Query Languages for Spatio-Temporal Databases}, author={Floris Geerts, Sofie Haesevoets, Bart Kuijpers}, journal={arXiv preprint arXiv:cs/0503012}, year={2005}, archivePrefix={arXiv}, eprint={cs/0503012}, primaryClass={cs.DB} }
geerts2005first-order
arxiv-672684
cs/0503013
Prédiction de Performances pour les Communications Collectives
<|reference_start|>Prédiction de Performances pour les Communications Collectives: Recent work aims at optimizing collective communication operations in grid computing environments. The most widespread solution is to separate the communications internal and external to each cluster, but this does not rule out splitting the communications into several layers, an effective practice demonstrated by Karonis et al. [10]. In both cases, performance prediction is an essential factor, whether for fine-tuning the communication parameters or for computing the distribution and hierarchy of the communications. To this end, it is very important to have accurate models of collective communications, which will be used to predict these performances. This article describes our experience with modeling collective communication operations. We present communication models for different collective communication patterns such as one-to-many, personalized one-to-many, and many-to-many. To evaluate the accuracy of the models, we compare the obtained predictions with the results of experiments carried out on two different network environments, Fast Ethernet and Myrinet.<|reference_end|>
arxiv
@article{barchet-estefanel2005prediction, title={Pr\'{e}diction de Performances pour les Communications Collectives}, author={Luiz Angelo Barchet-Estefanel (ID - Imag, Apache Ur-Ra Id Imag), Gr\'{e}gory Mouni\'{e} (ID - Imag, Apache Ur-Ra Id Imag)}, journal={arXiv preprint arXiv:cs/0503013}, year={2005}, archivePrefix={arXiv}, eprint={cs/0503013}, primaryClass={cs.DC} }
barchet-estefanel2005prediction
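The kind of analytic model barchet-estefanel2005prediction fits can be sketched with the classical Hockney point-to-point model and a binomial broadcast tree. The latency and bandwidth constants below are made-up placeholders, and the paper's own models for Fast Ethernet and Myrinet differ in detail.

```python
# Analytic cost model for a one-to-many (broadcast) collective, predicted
# from point-to-point parameters via the Hockney model T(m) = alpha + beta*m.
import math

alpha = 50e-6        # per-message latency in seconds (assumed)
beta = 1.0 / 100e6   # per-byte cost for a ~100 MB/s link (assumed)

def p2p(m_bytes):
    return alpha + beta * m_bytes

def broadcast(p, m_bytes):
    """Binomial-tree one-to-all: ceil(log2 p) communication rounds."""
    return math.ceil(math.log2(p)) * p2p(m_bytes)

for p in (2, 8, 32):
    print(p, f"{broadcast(p, 1 << 20):.4f} s")   # predicted 1 MiB broadcast
```

Fitting alpha and beta from measured point-to-point runs, then comparing predicted against measured collective times, is the validation pattern the abstract describes.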
arxiv-672685
cs/0503014
ADF95: Tool for automatic differentiation of a FORTRAN code designed for large numbers of independent variables
<|reference_start|>ADF95: Tool for automatic differentiation of a FORTRAN code designed for large numbers of independent variables: ADF95 is a tool to automatically calculate numerical first derivatives for any mathematical expression as a function of user-defined independent variables. Accuracy of derivatives is achieved within machine precision. ADF95 may be applied to any FORTRAN 77/90/95 conforming code and requires minimal changes by the user. It provides a new derived data type that holds the value and derivatives and applies forward differencing by overloading all FORTRAN operators and intrinsic functions. An efficient indexing technique leads to reduced memory usage and a substantial performance gain over other available tools with operator overloading. This gain is especially pronounced for sparse systems with a large number of independent variables. A wide class of numerical simulations, e.g., those employing implicit solvers, can profit from ADF95.<|reference_end|>
arxiv
@article{straka2005adf95:, title={ADF95: Tool for automatic differentiation of a FORTRAN code designed for large numbers of independent variables}, author={Christian W. Straka}, journal={arXiv preprint arXiv:cs/0503014}, year={2005}, doi={10.1016/j.cpc.2005.01.011}, archivePrefix={arXiv}, eprint={cs/0503014}, primaryClass={cs.MS} }
straka2005adf95:
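The mechanism behind straka2005adf95: (forward-mode differentiation via operator overloading, with derivatives indexed sparsely) can be sketched in a few lines of Python. The Dual class below is an illustrative analogue of ADF95's derived data type, not its FORTRAN implementation.

```python
# Forward-mode AD by operator overloading with a sparse derivative store,
# so cost scales with how many independent variables a value depends on.
import math

class Dual:
    def __init__(self, val, deriv=None):
        self.val = val
        self.deriv = deriv or {}          # sparse: var index -> d(self)/d(var)

    @staticmethod
    def var(val, index):
        return Dual(val, {index: 1.0})

    def _combine(self, other, a, b):
        # Returns a*d(self) + b*d(other), merging only nonzero entries.
        d = {k: a * v for k, v in self.deriv.items()}
        for k, v in other.deriv.items():
            d[k] = d.get(k, 0.0) + b * v
        return d

    def __add__(self, other):
        return Dual(self.val + other.val, self._combine(other, 1.0, 1.0))

    def __mul__(self, other):
        return Dual(self.val * other.val,
                    self._combine(other, other.val, self.val))

def sin(x):
    # Chain rule for an intrinsic function.
    return Dual(math.sin(x.val),
                {k: math.cos(x.val) * v for k, v in x.deriv.items()})

x = Dual.var(2.0, 0)
y = Dual.var(3.0, 1)
f = x * y + sin(x)
print(f.val, f.deriv)   # df/dx = y + cos(x), df/dy = x
```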
arxiv-672686
cs/0503015
A Systematic Aspect-Oriented Refactoring and Testing Strategy, and its Application to JHotDraw
<|reference_start|>A Systematic Aspect-Oriented Refactoring and Testing Strategy, and its Application to JHotDraw: Aspect-oriented programming aims at achieving better modularization for a system's crosscutting concerns in order to improve its key quality attributes, such as evolvability and reusability. Consequently, the adoption of aspect-oriented techniques in existing (legacy) software systems is of interest to remediate software aging. The refactoring of existing systems to employ aspect-orientation will be considerably eased by a systematic approach that will ensure a safe and consistent migration. In this paper, we propose a refactoring and testing strategy that supports such an approach and consider issues of behavior conservation and (incremental) integration of the aspect-oriented solution with the original system. The strategy is applied to the JHotDraw open-source project and illustrated on a group of selected concerns. Finally, we abstract from the case study and present a number of generic refactorings which contribute to an incremental aspect-oriented refactoring process and associate particular types of crosscutting concerns to the model and features of the employed aspect language. The contributions of this paper are both in the area of supporting migration towards aspect-oriented solutions and supporting the development of aspect languages that are better suited for such migrations.<|reference_end|>
arxiv
@article{vandeursen2005a, title={A Systematic Aspect-Oriented Refactoring and Testing Strategy, and its Application to JHotDraw}, author={Arie van Deursen, Marius Marin, Leon Moonen}, journal={arXiv preprint arXiv:cs/0503015}, year={2005}, archivePrefix={arXiv}, eprint={cs/0503015}, primaryClass={cs.SE cs.PL} }
vandeursen2005a
arxiv-672687
cs/0503016
File-based storage of Digital Objects and constituent datastreams: XMLtapes and Internet Archive ARC files
<|reference_start|>File-based storage of Digital Objects and constituent datastreams: XMLtapes and Internet Archive ARC files: This paper introduces the write-once/read-many XMLtape/ARC storage approach for Digital Objects and their constituent datastreams. The approach combines two interconnected file-based storage mechanisms that are made accessible in a protocol-based manner. First, XML-based representations of multiple Digital Objects are concatenated into a single file named an XMLtape. An XMLtape is a valid XML file; its format definition is independent of the choice of the XML-based complex object format by which Digital Objects are represented. The creation of indexes for both the identifier and the creation datetime of the XML-based representation of the Digital Objects facilitates OAI-PMH-based access to Digital Objects stored in an XMLtape. Second, ARC files, as introduced by the Internet Archive, are used to contain the constituent datastreams of the Digital Objects in a concatenated manner. An index for the identifier of the datastream facilitates OpenURL-based access to an ARC file. The interconnection between XMLtapes and ARC files is provided by conveying the identifiers of ARC files associated with an XMLtape as administrative information in the XMLtape, and by including OpenURL references to constituent datastreams of a Digital Object in the XML-based representation of that Digital Object.<|reference_end|>
arxiv
@article{liu2005file-based, title={File-based storage of Digital Objects and constituent datastreams: XMLtapes and Internet Archive ARC files}, author={Xiaoming Liu, Lyudmila Balakireva, Patrick Hochstenbach, Herbert Van de Sompel}, journal={arXiv preprint arXiv:cs/0503016}, year={2005}, archivePrefix={arXiv}, eprint={cs/0503016}, primaryClass={cs.DL} }
liu2005file-based
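The storage idea in liu2005file-based (many small objects concatenated into one write-once file, with an identifier index for random access) reduces to a few lines. The file layout and index format here are illustrative assumptions, not the XMLtape or ARC specifications.

```python
# Concatenated write-once storage with an identifier -> (offset, length)
# index, in the spirit of XMLtapes and Internet Archive ARC files.
def write_tape(path, records):
    """records: iterable of (identifier, bytes). Returns the index."""
    index = {}
    with open(path, "wb") as tape:
        for ident, blob in records:
            offset = tape.tell()
            tape.write(blob)
            index[ident] = (offset, len(blob))
    return index

def read_record(path, index, ident):
    offset, length = index[ident]
    with open(path, "rb") as tape:
        tape.seek(offset)
        return tape.read(length)

idx = write_tape("objects.tape", [("obj:1", b"<record>a</record>"),
                                  ("obj:2", b"<record>b</record>")])
print(read_record("objects.tape", idx, "obj:2"))
```

Because records are never rewritten in place, the tape stays append-only, which is what makes the write-once/read-many guarantee cheap to enforce.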
arxiv-672688
cs/0503017
A Fast Combined Decimal Adder/Subtractor
<|reference_start|>A Fast Combined Decimal Adder/Subtractor: This paper has been withdrawn.<|reference_end|>
arxiv
@article{nikmehr2005a, title={A Fast Combined Decimal Adder/Subtractor}, author={Hooman Nikmehr, Braden Phillips, Cheng-Chew Lim}, journal={arXiv preprint arXiv:cs/0503017}, year={2005}, archivePrefix={arXiv}, eprint={cs/0503017}, primaryClass={cs.OH} }
nikmehr2005a
arxiv-672689
cs/0503018
Probabilistic Algorithmic Knowledge
<|reference_start|>Probabilistic Algorithmic Knowledge: The framework of algorithmic knowledge assumes that agents use deterministic knowledge algorithms to compute the facts they explicitly know. We extend the framework to allow for randomized knowledge algorithms. We then characterize the information provided by a randomized knowledge algorithm when its answers have some probability of being incorrect. We formalize this information in terms of evidence; a randomized knowledge algorithm returning ``Yes'' to a query about a fact \phi provides evidence for \phi being true. Finally, we discuss the extent to which this evidence can be used as a basis for decisions.<|reference_end|>
arxiv
@article{halpern2005probabilistic, title={Probabilistic Algorithmic Knowledge}, author={Joseph Y. Halpern, Riccardo Pucella}, journal={Logical Methods in Computer Science, Volume 1, Issue 3 (December 20, 2005) lmcs:2261}, year={2005}, doi={10.2168/LMCS-1(3:1)2005}, archivePrefix={arXiv}, eprint={cs/0503018}, primaryClass={cs.AI cs.LO} }
halpern2005probabilistic
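One standard way to read the ``evidence'' notion in halpern2005probabilistic is as a likelihood ratio driving a Bayesian update. The sketch below makes that concrete with illustrative error probabilities; it is a reading of the idea, not the paper's formal definition.

```python
# Evidence from a randomized knowledge algorithm that answers "Yes" with
# different probabilities depending on whether the fact phi actually holds.
def posterior_given_yes(prior, p_yes_if_true, p_yes_if_false):
    """Bayes update of P(phi) after observing the answer 'Yes'."""
    num = p_yes_if_true * prior
    return num / (num + p_yes_if_false * (1 - prior))

# An algorithm that answers correctly 90% of the time provides strong
# (but not conclusive) evidence for phi:
print(posterior_given_yes(prior=0.5, p_yes_if_true=0.9, p_yes_if_false=0.1))
```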
arxiv-672690
cs/0503019
Duality Bounds on the Cut-Off Rate with Applications to Ricean Fading
<|reference_start|>Duality Bounds on the Cut-Off Rate with Applications to Ricean Fading: We propose a technique to derive upper bounds on Gallager's cost-constrained random coding exponent function. Applying this technique to the non-coherent peak-power or average-power limited discrete time memoryless Ricean fading channel, we obtain the high signal-to-noise ratio (SNR) expansion of this channel's cut-off rate. At high SNR the gap between channel capacity and the cut-off rate approaches a finite limit. This limit is approximately 0.26 nats per channel-use for zero specular component (Rayleigh) fading and approaches 0.39 nats per channel-use for very large specular components. We also compute the asymptotic cut-off rate of a Rayleigh fading channel when the receiver has access to some partial side information concerning the fading. It is demonstrated that the cut-off rate does not utilize the side information as efficiently as capacity, and that the high SNR gap between the two increases to infinity as the imperfect side information becomes more and more precise.<|reference_end|>
arxiv
@article{lapidoth2005duality, title={Duality Bounds on the Cut-Off Rate with Applications to Ricean Fading}, author={Amos Lapidoth and Natalia Miliou}, journal={arXiv preprint arXiv:cs/0503019}, year={2005}, archivePrefix={arXiv}, eprint={cs/0503019}, primaryClass={cs.IT math.IT} }
lapidoth2005duality
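For reference alongside lapidoth2005duality, the classical cut-off rate of a discrete memoryless channel can be computed numerically. The snippet evaluates R_0 = -log sum_y (sum_x Q(x) sqrt(W(y|x)))^2 for a uniform input, which is optimal for symmetric channels; the binary symmetric channel here is a toy stand-in for the fading channels the paper studies.

```python
# Numerical cut-off rate (in nats) of a discrete memoryless channel.
import numpy as np

def cutoff_rate_uniform(W):
    """W[x, y] = channel law P(y|x); returns R_0 in nats for uniform Q."""
    Q = np.full(W.shape[0], 1.0 / W.shape[0])
    inner = (Q[:, None] * np.sqrt(W)).sum(axis=0)   # sum_x Q(x) sqrt(W(y|x))
    return -np.log((inner ** 2).sum())

eps = 0.05                                          # BSC crossover probability
W = np.array([[1 - eps, eps], [eps, 1 - eps]])
print(cutoff_rate_uniform(W))     # equals ln 2 - ln(1 + 2*sqrt(eps*(1-eps)))
```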
arxiv-672691
cs/0503020
Earlier Web Usage Statistics as Predictors of Later Citation Impact
<|reference_start|>Earlier Web Usage Statistics as Predictors of Later Citation Impact: The use of citation counts to assess the impact of research articles is well established. However, the citation impact of an article can only be measured several years after it has been published. As research articles are increasingly accessed through the Web, the number of times an article is downloaded can be instantly recorded and counted. One would expect the number of times an article is read to be related both to the number of times it is cited and to how old the article is. This paper analyses how short-term Web usage impact predicts medium-term citation impact. The physics e-print archive (arXiv.org) is used to test this.<|reference_end|>
arxiv
@article{brody2005earlier, title={Earlier Web Usage Statistics as Predictors of Later Citation Impact}, author={Tim Brody, Stevan Harnad}, journal={arXiv preprint arXiv:cs/0503020}, year={2005}, archivePrefix={arXiv}, eprint={cs/0503020}, primaryClass={cs.IR} }
brody2005earlier
arxiv-672692
cs/0503021
Fast-Forward on the Green Road to Open Access: The Case Against Mixing Up Green and Gold
<|reference_start|>Fast-Forward on the Green Road to Open Access: The Case Against Mixing Up Green and Gold: This article is a critique of: "The 'Green' and 'Gold' Roads to Open Access: The Case for Mixing and Matching" by Jean-Claude Guedon (in Serials Review 30(4) 2004). Open Access (OA) means: free online access to all peer-reviewed journal articles. Jean-Claude Guedon argues against the efficacy of author self-archiving of peer-reviewed journal articles (the "Green" road to OA). He suggests instead that we should convert to Open Access Publishing (the "Golden" road to OA) by "mixing and matching" Green and Gold as follows: o First, self-archive dissertations (not published, peer-reviewed journal articles). o Second, identify and tag how those dissertations have been evaluated and reviewed. o Third, self-archive unrefereed preprints (not published, peer-reviewed journal articles). o Fourth, develop new mechanisms for evaluating and reviewing those unrefereed preprints, at multiple levels. The result will be OA Publishing (Gold). I argue that rather than yet another 10 years of speculation like this, what is actually needed (and imminent) is for OA self-archiving to be mandated by research funders and institutions so that the self-archiving of published, peer-reviewed journal articles (Green) can be fast-forwarded to 100% OA.<|reference_end|>
arxiv
@article{harnad2005fast-forward, title={Fast-Forward on the Green Road to Open Access: The Case Against Mixing Up Green and Gold}, author={Stevan Harnad}, journal={Ariadne 42 January 2005; http://www.ariadne.ac.uk/issue42/harnad/}, year={2005}, archivePrefix={arXiv}, eprint={cs/0503021}, primaryClass={cs.IR} }
harnad2005fast-forward
arxiv-672693
cs/0503022
Theory and Practice of Transactional Method Caching
<|reference_start|>Theory and Practice of Transactional Method Caching: Nowadays, tiered architectures are widely accepted for constructing large-scale information systems. In this context, application servers often form the bottleneck for a system's efficiency. An application server exposes an object-oriented interface consisting of a set of methods that are accessed by potentially remote clients. The idea of method caching is to store results of read-only method invocations with respect to the application server's interface on the client side. If the client invokes the same method with the same arguments again, the corresponding result can be taken from the cache without contacting the server. It has been shown that this approach can considerably improve a real-world system's efficiency. This paper extends the concept of method caching by addressing the case where clients wrap related method invocations in ACID transactions. Demarcating sequences of method calls in this way is supported by many important application server standards. In this context, the paper presents an architecture, a theory and an efficient protocol for maintaining full transactional consistency and in particular serializability when using a method cache on the client side. In order to create a protocol for scheduling cached method results, the paper extends a classical transaction formalism. Based on this extension, a recovery protocol and an optimistic serializability protocol are derived. The latter differs from traditional transactional cache protocols in many essential ways. An efficiency experiment validates the approach: using the cache, a system's performance and scalability are considerably improved.<|reference_end|>
arxiv
@article{pfeifer2005theory, title={Theory and Practice of Transactional Method Caching}, author={Daniel Pfeifer and Peter C. Lockemann}, journal={arXiv preprint arXiv:cs/0503022}, year={2005}, number={2005-9}, archivePrefix={arXiv}, eprint={cs/0503022}, primaryClass={cs.DB} }
pfeifer2005theory
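The client-side mechanism in pfeifer2005theory can be sketched as a memo table keyed by (method, arguments). The invalidate-everything-on-commit policy below is a deliberate oversimplification; the paper's contribution is a protocol that preserves serializability with far finer-grained bookkeeping.

```python
# Client-side method caching: read-only results are memoized by
# (method, args); a committing write transaction invalidates entries.
class MethodCache:
    def __init__(self):
        self._cache = {}

    def call(self, server, method, *args):
        key = (method, args)
        if key not in self._cache:          # cache miss: contact the server
            self._cache[key] = getattr(server, method)(*args)
        return self._cache[key]             # cache hit: no server round-trip

    def on_commit(self, transaction_wrote):
        if transaction_wrote:               # coarse invalidation on writes
            self._cache.clear()
```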
arxiv-672694
cs/0503023
The Weighted Maximum-Mean Subtree and Other Bicriterion Subtree Problems
<|reference_start|>The Weighted Maximum-Mean Subtree and Other Bicriterion Subtree Problems: We consider problems in which we are given a rooted tree as input, and must find a subtree with the same root, optimizing some objective function of the nodes in the subtree. When this function is the sum of constant node weights, the problem is trivially solved in linear time. When the objective is the sum of weights that are linear functions of a parameter, we show how to list all optima for all possible parameter values in O(n log n) time; this parametric optimization problem can be used to solve many bicriterion optimization problems, in which each node has two values xi and yi associated with it, and the objective function is a bivariate function f(SUM(xi),SUM(yi)) of the sums of these two values. A special case, when f is the ratio of the two sums, is the Weighted Maximum-Mean Subtree Problem, or equivalently the Fractional Prize-Collecting Steiner Tree Problem on Trees; for this special case, we provide a linear-time algorithm when all weights are positive, improving a previous O(n log n) solution, and prove that the problem is NP-complete when negative weights are allowed.<|reference_end|>
arxiv
@article{carlson2005the, title={The Weighted Maximum-Mean Subtree and Other Bicriterion Subtree Problems}, author={Josiah Carlson and David Eppstein}, journal={arXiv preprint arXiv:cs/0503023}, year={2005}, archivePrefix={arXiv}, eprint={cs/0503023}, primaryClass={cs.CG cs.DS} }
carlson2005the
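The base case that carlson2005the calls trivial, plus the standard fractional-programming route to the maximum-mean variant, can be sketched as follows. The binary-search-over-lambda remark in the comments is the generic Dinkelbach-style approach, which is weaker than the paper's linear-time positive-weight algorithm.

```python
# Trivial base case: with constant node weights, the best root-containing
# subtree keeps a child's subtree exactly when that subtree's optimum is
# positive (O(n)).  The maximum-mean version can then be attacked by
# binary-searching lambda and rerunning this DP on weights x_i - lambda*y_i.
def best_rooted_subtree(children, w, root=0):
    """children[v]: list of v's children; w[v]: node weight."""
    def solve(v):
        total = w[v]
        for c in children[v]:
            total += max(0, solve(c))   # drop subtrees that hurt the sum
        return total
    return solve(root)

#        0(+1)
#       /     \
#    1(-5)   2(+2)
#      |
#    3(+10)
children = {0: [1, 2], 1: [3], 2: [], 3: []}
w = {0: 1, 1: -5, 2: 2, 3: 10}
print(best_rooted_subtree(children, w))   # 1 + (-5+10) + 2 = 8
```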
arxiv-672695
cs/0503024
Fine-Grained Word Sense Disambiguation Based on Parallel Corpora, Word Alignment, Word Clustering and Aligned Wordnets
<|reference_start|>Fine-Grained Word Sense Disambiguation Based on Parallel Corpora, Word Alignment, Word Clustering and Aligned Wordnets: The paper presents a method for word sense disambiguation based on parallel corpora. The method exploits recent advances in word alignment and word clustering based on automatic extraction of translation equivalents and is supported by available aligned wordnets for the languages in the corpus. The wordnets are aligned to the Princeton Wordnet, according to the principles established by EuroWordNet. The evaluation of the WSD system implementing the method described herein showed very encouraging results. The same system, used in validation mode, can check and spot alignment errors in multilingually aligned wordnets such as BalkaNet and EuroWordNet.<|reference_end|>
arxiv
@article{tufis2005fine-grained, title={Fine-Grained Word Sense Disambiguation Based on Parallel Corpora, Word Alignment, Word Clustering and Aligned Wordnets}, author={Dan Tufis, Radu Ion, Nancy Ide}, journal={In proceedings of the 20th International Conference on Computational Linguistics, COLING2004, Geneva, 2004, pp. 1312-1318}, year={2005}, archivePrefix={arXiv}, eprint={cs/0503024}, primaryClass={cs.AI cs.CL} }
tufis2005fine-grained
arxiv-672696
cs/0503025
A Taxonomy of Workflow Management Systems for Grid Computing
<|reference_start|>A Taxonomy of Workflow Management Systems for Grid Computing: With the advent of Grid and application technologies, scientists and engineers are building more and more complex applications to manage and process large data sets, and execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. We also survey several representative Grid workflow systems developed by various projects worldwide to demonstrate the comprehensiveness of the taxonomy. The taxonomy not only highlights the design and engineering similarities and differences of state-of-the-art Grid workflow systems, but also identifies the areas that need further research.<|reference_end|>
arxiv
@article{yu2005a, title={A Taxonomy of Workflow Management Systems for Grid Computing}, author={Jia Yu, Rajkumar Buyya}, journal={arXiv preprint arXiv:cs/0503025}, year={2005}, number={GRIDS-TR-2005-1, Grid Computing and Distributed Systems Laboratory, University of Melbourne, Australia, March 10, 2005}, archivePrefix={arXiv}, eprint={cs/0503025}, primaryClass={cs.DC} }
yu2005a
arxiv-672697
cs/0503026
On Generalized Computable Universal Priors and their Convergence
<|reference_start|>On Generalized Computable Universal Priors and their Convergence: Solomonoff unified Occam's razor and Epicurus' principle of multiple explanations into one elegant, formal, universal theory of inductive inference, which initiated the field of algorithmic information theory. His central result is that the posterior of the universal semimeasure M converges rapidly to the true sequence-generating posterior mu, if the latter is computable. Hence, M is eligible as a universal predictor in the case of unknown mu. The first part of the paper investigates the existence and convergence of computable universal (semi)measures for a hierarchy of computability classes: recursive, estimable, enumerable, and approximable. For instance, M is known to be enumerable, but not estimable, and to dominate all enumerable semimeasures. We present proofs for discrete and continuous semimeasures. The second part investigates more closely the types of convergence, possibly implied by universality: in difference and in ratio, with probability 1, in mean sum, and for Martin-Löf random sequences. We introduce a generalized concept of randomness for individual sequences and use it to exhibit difficulties regarding these issues. In particular, we show that convergence fails (holds) on generalized-random sequences in gappy (dense) Bernoulli classes.<|reference_end|>
arxiv
@article{hutter2005on, title={On Generalized Computable Universal Priors and their Convergence}, author={Marcus Hutter}, journal={Theoretical Computer Science, 364 (2006) 27-41}, year={2005}, number={IDSIA-05-05}, archivePrefix={arXiv}, eprint={cs/0503026}, primaryClass={cs.LG cs.CC math.PR} }
hutter2005on
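For context on hutter2005on, the classical facts the abstract builds on can be stated compactly. These are standard background results of algorithmic information theory (stated here for a binary alphabet), not the paper's new theorems.

```latex
% M dominates every enumerable semimeasure rho, and Solomonoff's bound
% gives rapid convergence when the true measure mu is computable:
M(x) \;\ge\; 2^{-K(\rho)}\,\rho(x)
  \quad\text{for every enumerable semimeasure } \rho,
\qquad
\sum_{t=1}^{\infty}
  \mathbf{E}\!\left[\big(M(0 \mid x_{<t}) - \mu(0 \mid x_{<t})\big)^{2}\right]
  \;\le\; \tfrac{\ln 2}{2}\,K(\mu).
```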
arxiv-672698
cs/0503027
Authentication with Distortion Criteria
<|reference_start|>Authentication with Distortion Criteria: In a variety of applications, there is a need to authenticate content that has experienced legitimate editing in addition to potential tampering attacks. We develop one formulation of this problem based on a strict notion of security, and characterize and interpret the associated information-theoretic performance limits. The results can be viewed as a natural generalization of classical approaches to traditional authentication. Additional insights into the structure of such systems and their behavior are obtained by further specializing the results to Bernoulli and Gaussian cases. The associated systems are shown to be substantially better in terms of performance and/or security than commonly advocated approaches based on data hiding and digital watermarking. Finally, the formulation is extended to obtain efficient layered authentication system constructions.<|reference_end|>
arxiv
@article{martinian2005authentication, title={Authentication with Distortion Criteria}, author={Emin Martinian, Gregory W. Wornell, and Brian Chen}, journal={arXiv preprint arXiv:cs/0503027}, year={2005}, archivePrefix={arXiv}, eprint={cs/0503027}, primaryClass={cs.IT cs.CR cs.MM math.IT} }
martinian2005authentication
arxiv-672699
cs/0503028
Stabilization of Cooperative Information Agents in Unpredictable Environment: A Logic Programming Approach
<|reference_start|>Stabilization of Cooperative Information Agents in Unpredictable Environment: A Logic Programming Approach: An information agent is viewed as a deductive database consisting of three parts: an observation database containing the facts the agent has observed or sensed from its surrounding environment, an input database containing the information the agent has obtained from other agents, and an intensional database which is a set of rules for computing derived information from the information stored in the observation and input databases. Stabilization of a system of information agents represents a capability of the agents to eventually get correct information about their surroundings despite unpredictable environment changes and the incapability of many agents to sense such changes, which causes them to have temporarily incorrect information. We argue that the stabilization of a system of cooperative information agents could be understood as the convergence of the behavior of the whole system toward the behavior of a "superagent", who has the sensing and computing capabilities of all agents combined. We show that unfortunately, stabilization is not guaranteed in general, even if the agents are fully cooperative and do not hide any information from each other. We give sufficient conditions for stabilization and discuss the consequences of our results.<|reference_end|>
arxiv
@article{dung2005stabilization, title={Stabilization of Cooperative Information Agents in Unpredictable Environment: A Logic Programming Approach}, author={Phan Minh Dung, Do Duc Hanh, and Phan Minh Thang (Asian Institute of Technology)}, journal={arXiv preprint arXiv:cs/0503028}, year={2005}, archivePrefix={arXiv}, eprint={cs/0503028}, primaryClass={cs.LO cs.MA cs.PL} }
dung2005stabilization
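The agent model in dung2005stabilization (observation database, input database, and rules) boils down to a bottom-up fixpoint computation. The sketch below uses ground propositional rules as an illustrative simplification of the paper's logic-programming setting; the fact and rule names are invented for the example.

```python
# An information agent as a deductive database: derived facts are the
# least fixpoint of the rules over the observation and input databases.
def derive(observed, received, rules):
    """rules: list of (head, [body atoms]) over ground facts."""
    facts = set(observed) | set(received)
    changed = True
    while changed:                       # naive bottom-up evaluation
        changed = False
        for head, body in rules:
            if head not in facts and all(b in facts for b in body):
                facts.add(head)
                changed = True
    return facts

observed = {"door_open"}                 # sensed from the environment
received = {"alarm_armed"}               # obtained from another agent
rules = [("intrusion", ["door_open", "alarm_armed"])]
print(derive(observed, received, rules))
```

Stabilization then asks whether, after the environment stops changing, repeated sensing and message exchange drive every agent's derived facts to those of the combined "superagent".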
arxiv-672700
cs/0503029
The Effect of Use and Access on Citations
<|reference_start|>The Effect of Use and Access on Citations: It has been shown (S. Lawrence, 2001, Nature, 411, 521) that journal articles which have been posted without charge on the internet are more heavily cited than those which have not been. Using data from the NASA Astrophysics Data System (ads.harvard.edu) and from the ArXiv e-print archive at Cornell University (arXiv.org) we examine the causes of this effect.<|reference_end|>
arxiv
@article{kurtz2005the, title={The Effect of Use and Access on Citations}, author={Michael J. Kurtz, Guenther Eichhorn, Alberto Accomazzi, Carolyn Grant, Markus Demleitner, Edwin Henneken, Stephen S. Murray}, journal={Inform Process Manag 41:1395-1402 (2005)}, year={2005}, doi={10.1016/j.ipm.2005.03.010}, archivePrefix={arXiv}, eprint={cs/0503029}, primaryClass={cs.DL} }
kurtz2005the